March 18, 2026
Why do lawyers keep using ChatGPT?

TL;DR
- Lawyers are submitting legal filings containing AI 'hallucinations' — fabricated cases, citations, or quotations.
- Attorneys are facing consequences for these errors, including fines and sanctions.
- The primary reasons for continued AI use despite risks are time constraints and the perceived efficiency of AI tools.
- Many lawyers do not fully understand how Large Language Models (LLMs) work, sometimes mistaking them for advanced search engines that retrieve real documents.
- Legal research databases like LexisNexis and Westlaw are integrating AI, further increasing its adoption among lawyers.
- A significant percentage of lawyers have used AI for tasks like summarizing case law and legal research, viewing it as a time-saving tool.
- High-profile cases have demonstrated the negative impact of AI hallucinations on legal proceedings.
- Experts suggest AI can be useful for tasks like document review and brainstorming, but not as a substitute for legal judgment.
- The American Bar Association has issued guidance emphasizing the duty of technological competence for lawyers using AI.
- Concerns remain about the accuracy of AI output and the need for thorough verification by legal professionals.