Is ChatGPT a Reliable Source for Legal Advice?
As artificial intelligence tools like ChatGPT gain popularity, many people wonder if these platforms can provide reliable legal guidance. On the surface, it might seem convenient to ask a computer for advice on complicated laws, but the reality is far less straightforward. In practice, ChatGPT is not a trustworthy source for legal advice. Our experience, along with numerous real-world examples, shows that AI-generated legal information is often inaccurate and potentially misleading.
Clients frequently come to lawyers with information they’ve obtained from ChatGPT or other generative AI tools. Almost invariably, this information is flawed. Some answers are simply wrong, while others may reflect bias based on the datasets used to train the AI.
While technology can provide quick answers, legal reasoning requires more than processing large amounts of text. Lawyers interpret laws not just based on statutes but through the lens of common law principles, which include ethical values, reasonableness, fairness, and human judgment. These human considerations are beyond the reach of current AI systems, which cannot fully understand concepts like fairness or common sense.
What Happens When Lawyers Depend on AI?
There have been several cautionary tales about lawyers relying too heavily on AI in legal practice. One striking example is Roberto Mata v. Avianca, Inc., 1:22-cv-01461 (S.D.N.Y.), in which a New York lawyer submitted a legal filing generated entirely by ChatGPT. The lawyer did not verify the sources provided by the AI, and as a result the filing cited six completely fabricated cases, each with what the court called “bogus quotes and citations”. This incident highlights the danger of treating AI output as authoritative legal research.
A similar case occurred in British Columbia in Zhang v. Chen, 2024 BCSC 285. A lawyer submitted a notice of application containing legal authorities created by ChatGPT that did not exist. Once the error was discovered, the lawyer withdrew the fake citations and stated that she had not realized AI could produce fabricated cases. Opposing counsel sought special costs against her as a sanction for the mistakes and for the extra work required to identify the fictitious authorities. While the court ultimately declined to impose special costs, noting that such penalties are reserved for extreme misconduct, the lawyer was still held responsible for the ordinary costs of the application. The case also triggered an investigation by the Law Society of BC.
These examples illustrate a critical point: AI-generated legal content cannot replace a lawyer’s professional judgment. Using AI without oversight can lead to serious consequences, including court sanctions, financial liability, and professional discipline.
What Do Lawyers Need to Know About AI?
To understand why AI cannot yet serve as a replacement for legal advice, it helps to clarify what these tools are and how they work. Artificial intelligence refers to the ability of machines to perform tasks that typically require human intelligence. Generative AI is a subset of AI that can create new content, such as text, images, or other media, in response to prompts. Large language models, such as OpenAI's GPT-4 (the model behind ChatGPT) or Google Bard, are specific types of generative AI trained on extensive collections of text. These systems generate responses based on patterns in the data they have seen, but they do not understand law in the human sense and cannot exercise judgment.
What Are a Lawyer’s Professional Responsibilities When Using AI?
Lawyers have strict duties when providing legal services. They must meet the standard of a competent lawyer, which includes assessing options, advising clients, applying research and analysis, implementing strategies effectively, communicating clearly, and acting with diligence, judgment, and professionalism. Using AI does not remove these obligations. Lawyers remain responsible for all legal work, including outputs generated by technology. Delegating tasks to AI is similar to delegating tasks to an assistant: oversight and verification are essential.
How Does AI Affect Confidentiality?
Confidentiality is a cornerstone of legal practice. Lawyers must be extremely careful about what information they input into AI tools. Sharing sensitive client information without proper safeguards could violate confidentiality rules. If redacting confidential information is not possible, lawyers should obtain informed consent from clients before using AI. Consent must be voluntary, fully informed, and documented. Additionally, lawyers must consider whether AI usage could compromise attorney-client privilege or inadvertently waive rights that are critical to their client’s case.
What Are the Duties of Honesty and Candour?
Lawyers must also be transparent with clients. They are obligated to inform clients of all factors that may affect their interests. This includes explaining how AI tools may be used in preparing legal materials. Being honest about the use of technology not only builds trust but also ensures clients understand the risks and limitations of AI-generated content.
Who Is Responsible for AI-Generated Work?
While AI tools are often marketed as helpful assistants capable of completing complex tasks, the responsibility for accuracy and compliance rests with the lawyer. Professional codes of conduct require lawyers to supervise anyone, human or AI, whose work they rely upon. This includes reviewing outputs regularly and ensuring they meet professional standards. Any errors in AI-generated work can have serious consequences for both the lawyer and the client.
What About Information Security?
Finally, lawyers must consider how AI tools handle information. Privacy and cybersecurity concerns are significant, especially when dealing with sensitive client data. Lawyers need to ensure that any AI tool they use complies with security standards and protects confidential information from unauthorized access or breaches.
Why AI Cannot Replace Human Legal Judgment
The cases we’ve discussed demonstrate that AI cannot currently replace human judgment in the law. Legal reasoning involves interpreting statutes, applying precedent, weighing ethical considerations, and exercising discretion based on context. While AI can assist with research or drafting ideas, it cannot replace the critical thinking, common sense, and ethical judgment that trained lawyers provide. AI is a tool, not a substitute for professional legal expertise.
Summary
ChatGPT and similar AI tools are not reliable sources for legal advice. While generative AI can assist with drafting, it cannot replace human judgment, ethical reasoning, or professional responsibility. Lawyers who rely on AI without verification risk serious consequences, including financial liability, court sanctions, and disciplinary action. Using AI responsibly requires careful supervision, client consent, protection of confidential information, and adherence to professional standards. In short, AI is a useful assistant but cannot replace the expertise, judgment, and ethics of a trained lawyer.
FAQ
Can I rely on ChatGPT for legal advice in my case?
No. ChatGPT can provide general information but frequently produces inaccurate or misleading results and cannot replace a qualified lawyer.
What are the risks if a lawyer uses AI-generated content without verification?
Courts have penalized lawyers for submitting fake authorities generated by AI. Consequences may include financial liability, sanctions, and law society investigations.
Is AI ever safe to use in legal work?
AI can be used cautiously for drafting ideas, but lawyers must verify all outputs, protect confidential information, and supervise the process closely.
Do clients need to consent to the use of AI?
Yes. If client data is used, consent must be informed, voluntary, and documented to ensure ethical compliance and protection of privilege.
How can lawyers mitigate risks when using AI?
Lawyers should treat AI as a tool, verify all information, maintain client confidentiality, ensure data security, and clearly communicate with clients about how AI is being used.