Fictional Case Law: Unmasking the Challenges of Using ChatGPT in Legal Proceedings

ChatGPT should remain in the trust-and-then-verify class of tools for now.

3 min. read
June 17, 2023

It's a story as old as time: a man claims he was injured by a serving cart while flying on Avianca Airlines and ends up suing. Avianca asks to toss out the case, only to receive a 10-page missive in response from opposing counsel, full of scathing and vehement objections and citing more than half a dozen relevant court cases.

There was just one problem: none of the cases existed. ChatGPT made it all up.

Just four days ago, another attorney, this time in Colorado Springs, used ChatGPT to write and file his first motion for summary judgment. The AI did the same thing, hallucinating fake cases that the attorney included in his filing.

Everyone keeps saying ChatGPT and generative AI are going to be the end of attorneys, but with stories like this, I would imagine most in the legal profession are at best skeptical. And they should be.

Attorneys have used a variety of software solutions over the years to help prepare for legal proceedings. However, ChatGPT's meteoric rise over the last six months has put center stage not only what this technology can do, but also some of the scarier parts that lurk when you don't fully understand how the technology works.

A large language model (LLM), such as ChatGPT or its underlying GPT-4 architecture, is a type of artificial intelligence trained on a vast amount of text data. During training it learns patterns in language, including grammar, facts, reasoning abilities, and even some degree of creativity. Once trained, it can generate human-like text that can seem surprisingly cogent and well-informed, and it can be used for many applications: drafting emails, writing essays, answering questions, translating languages, and much more.
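
To make this concrete, here is a minimal sketch of what calling a model like this from code looks like. It assumes the OpenAI Python library as it existed in mid-2023; the model name, prompts, and temperature value are illustrative assumptions, not recommendations.

```python
# A minimal sketch of calling a large language model from code.
# Assumes the `openai` Python package as it existed in mid-2023 and a
# valid API key; the model name and prompts are illustrative only.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; load from a secure source

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a careful legal writing assistant."},
        {"role": "user", "content": "Draft a short, polite email rescheduling a deposition."},
    ],
    temperature=0.3,  # lower values make the output more conservative
)

print(response.choices[0].message.content)
```

Because the messages list carries the whole conversation, you can append the model's reply plus a follow-up prompt and call the API again; that back-and-forth is the iterative refinement described in the tips below.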

The term "hallucination" in the context of AI refers to the model generating information that isn't grounded in its training data. This can often occur when the model is generating a long sequence of text or when it's asked about very specific or novel topics that it doesn't have much precise training data on. In the case of legal matters, when asked to generate a response or create content, the AI might "hallucinate" case law - that is, it might generate legal scenarios or legal decisions that sound reasonable but are actually completely fictional. This is, of course, completely unacceptable if you want to use ChatGPT for drafting legal documents.

This happens because the AI has learned the patterns and structures of legal argumentation and case law but doesn't have an inherent understanding of the law itself or a way to verify the real-world accuracy of the specific legal cases it generates. It strives to create a coherent, contextually appropriate response based on patterns it has seen before. While this can lead to creative and seemingly knowledgeable responses, it can also produce outputs that might be confused with actual, factual case law, causing potential issues like the ones described above.

For both of the attorneys mentioned above, ChatGPT hallucinated case law, most likely because its training data didn't include cases that would actually have helped. Better training and grounding in verified legal sources can reduce that risk, but the biggest mistake here wasn't in using ChatGPT; it was in trusting it blindly.

Quick Tips for Attorneys and ChatGPT

  • Trust and then verify - If ChatGPT cites case law, double-check it against a trusted source; one way to automate a first pass is sketched after this list.
  • Focus on what it does best: summarization - ChatGPT and other LLMs are remarkably good at consuming data and producing compelling summaries in whatever format they are asked for. ChatGPT today is limited in how much data it can take in at once, but that is changing quickly.
  • Targeted use cases - Use ChatGPT to suggest improvements to existing written text or to draft specific clauses for contracts or filings.
  • Drafting complete documents - ChatGPT can draft a fairly coherent document or filing if given the correct prompts. In many cases, this can serve as a great outline for continued manual iteration.
  • No sensitive data in ChatGPT - While OpenAI, the maker of ChatGPT, claims it does not use data you put into the platform for training, the company has had challenges securing its systems and is iterating quickly on its features. Do not put private client data into ChatGPT for now. New tools will emerge that solve this problem and will then become significantly more compelling for a larger audience, but until then, err on the side of caution with client data.
  • Prompts are iterative - Many people get a bad first impression of ChatGPT because they try one prompt and aren't impressed with the results. Unlike a Google search, prompting ChatGPT is iterative: you can refine your answers over and over. This is the real power of LLMs and of querying them through a chat interface.

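As one concrete way to act on the trust-and-then-verify tip, here is a hypothetical first-pass citation screen against CourtListener, a free public database of U.S. court opinions. The endpoint and parameters reflect our understanding of CourtListener's REST API and may change or require an API token; the helper function and case names are purely illustrative.

```python
# A hypothetical first-pass screen: does a cited case exist in CourtListener?
# Assumes the `requests` package; the endpoint reflects our understanding of
# CourtListener's public REST API and may require an API token in practice.
import requests

SEARCH_URL = "https://www.courtlistener.com/api/rest/v3/search/"

def case_exists(case_name: str) -> bool:
    """Return True if searching opinions for the case name yields any results."""
    resp = requests.get(
        SEARCH_URL,
        params={"q": case_name, "type": "o"},  # type "o" searches opinions
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("count", 0) > 0

# Screen every citation a model produced before it goes anywhere near a filing.
for cited_case in ["Miranda v. Arizona", "Example v. Placeholder"]:
    status = "found" if case_exists(cited_case) else "NOT FOUND - verify manually"
    print(f"{cited_case}: {status}")
```

A hit here only confirms that a case with that name exists; you still have to read the opinion in a trusted research service to confirm it says what ChatGPT claims it says.
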
Here at CaseMark, we're focused on helping people in the legal profession stay up to date on the fast-paced world of generative AI. We host a weekly webinar covering the basics of using ChatGPT, how to start leveraging it safely and effectively today, and some of the emerging tools that can further streamline your workflow.

| Summary Type | Best for Case Types | Primary Purpose | Complexity Handling | Production Time | Best for Team Members | Key Information Highlighted |
|---|---|---|---|---|---|---|
| Narrative | General; personal injury | Initial review; client communication | Low to Medium | Medium | All; Clients | Overall story |
| Page Line | Complex litigation | Detailed analysis; trial prep | High | Low | Attorneys | Specific testimony details |
| Topical | Multi-faceted cases | Case strategy; trial prep | High | Medium | Attorneys; Paralegals | Theme-based information |
| Q&A | Witness credibility cases | Cross-examination prep | Medium | High | Attorneys | Context of statements |
| Chronological | Timeline-critical cases | Establishing sequence of events | Medium | High | All | Event timeline |
| Highlight and extract | All | Quick reference; key points | Low to Medium | High | Senior Attorneys | Critical statements |
| Comparative | Multi-witness cases | Consistency check | High | Low | Attorneys; Paralegals | Discrepancies; Agreements |
| Annotated | Complex legal issues | Training; in-depth analysis | High | Low | Junior Associates; Paralegals | Legal implications |
| Visual | Jury presentations | Client / jury communication | Low to Medium | Medium | All; Clients; Jury | Visual representation of key points |
| Summary Grid | Multi-witness; fact-heavy cases | Organized reference | High | Medium | All | Categorized information |