Introduction to Legal Prompt Engineering

CaseMark built LegalPromptGuide.com to help attorneys master legal prompt engineering.

3 min. read · October 25, 2023

Legal Prompt Engineering (LPE) is a specialized skillset already being sought after by law firms around the world. It requires a higher degree of specificity and sophistication from the prompt author to help mitigate some of the current shortcomings of generative AI (which we'll discuss throughout). The last thing you want is to end up featured alongside Mr. Schwartz, the attorney who filed a brief full of ChatGPT-fabricated case citations, in the New York Times because you didn't take the time to read this guide on how to become a competent prompt engineer.

Law is a knowledge-intensive discipline heavily reliant on unstructured data – text. This makes it a prime candidate for disruption by LLMs like ChatGPT, which are designed for text generation, summarization, question answering, and more. With all that said, let's take a look at why it's important to draft effective prompts.

Common Pitfalls in Prompting ChatGPT

When working with AI, it's easy to fall into certain traps that hinder our ability to get the best results. This section focuses on common pitfalls many professionals encounter when prompting ChatGPT and offers practical tips on how to avoid them.

Let's start by taking a look at how most people currently prompt ChatGPT: using vague and short statements that lack context. For example, take a look at the following prompt:

Draft an operating agreement for a private trust company domiciled in Georgia.

Many people (maybe even you) might look at this prompt and say, "What's wrong with this prompt? It looks good to me. Heck, it even follows the golden rule of clarity!" While we won't disagree with you over the directness of this prompt, it leaves much to be desired.

For starters, the three golden rules (clarity, specificity, and context) are NOT mutually exclusive, so try to write prompts with characteristics tied to all of them to help the AI generate better results. Let's break down and discuss some of the problems found in our example prompt.

Problem #1 (lacks specificity): Be as specific as possible. This prompt is missing key pieces of information whose inclusion would lead to far better results.

Problem #2 (missing context): Trust companies exist to serve as designated trustees that administer trust agreements, yet our example prompt says nothing about the type of trust, the kinds of assets that will be titled to it, and so on.

Here is a modified version of our example prompt that adds relevant context to improve the output generated by the LLM:
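For instance, a more contextual version might read (our illustration):

Draft an operating agreement for a private trust company domiciled in Georgia. The company will serve as the designated trustee of a single irrevocable family trust holding commercial real estate and marketable securities, and the agreement should address trustee powers, officer duties, capitalization, and management succession.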

By updating the prompt to include additional contextual information, we are inherently helping the LLM generate better responses. The way this works is simple: LLMs like ChatGPT can be thought of as very advanced text-prediction machines. They try to guess the next word based on the words that came before it. For example, if you said "I'm going to the grocery store to buy some...", the AI might guess that you're going to say "milk" or "bread."

Now, the more information you give this AI, the better its guess will be. That's what we mean by "context." It's like if you said, "I'm making a sandwich, and I have everything except one ingredient. I'm going to the store to buy some..." Now the AI can make a better guess because it knows you're making a sandwich.
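To make the "prediction machine" idea concrete, here is a minimal Python sketch that asks a small open-source model for its most likely next words. We use GPT-2 via the Hugging Face transformers library purely for illustration; ChatGPT itself doesn't expose its internals, but the mechanic is the same.

```python
# Minimal next-word prediction demo using GPT-2 (illustrative only).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "I'm going to the grocery store to buy some"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # scores for every token at each position

# Look only at the probability distribution over the *next* token.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id.item())!r}: {prob:.3f}")
```

Adding context to the prompt string (e.g., mentioning the sandwich) shifts this distribution toward more relevant guesses, which is exactly the effect we exploit in prompt engineering.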

So when we give more context or information, the AI's responses get better. Specifically, when you provide more context to an LLM:

  1. The Response Becomes More Relevant: The model has a better understanding of what you're asking and can therefore generate a response that's more closely aligned with your needs.
  2. The Response Becomes More Specific: With more context, the model can generate a more specific and detailed response, as it has more information to draw from.
  3. The Response Stays Consistent: More context helps the model maintain consistency across longer conversations or documents, ensuring that the generated text follows logically from the prior content.
  4. The Response Handles Complexity Better: Complex questions or tasks often require a lot of context; with more of it, the model can handle these situations more effectively.

However, while more context generally leads to better responses, it's also important to be clear and concise in providing this context. Overloading the model with unnecessary information might lead to less focused responses. Lastly, it's crucial to remember that LLMs don't truly "understand" context in the human sense; they use patterns in the data they were trained on to generate responses.

Problem #3 (hallucination): There's a fundamental problem with our original prompt:

The creation of private trust companies in the U.S. is limited to a handful of states that have passed statutes formally recognizing these legal entities, and Georgia is not among them. In other words, you can NOT create a private trust company in Georgia.

However, the AI did not alert us to this fact, and instead went ahead and generated an operating agreement. In the world of LLMs and AI, this phenomenon is referred to as "hallucination": the AI generates information that seems plausible but is actually made up or not grounded in fact.

Think of it this way: imagine you asked a friend about the weather, and they confidently said it was sunny outside even though they hadn't looked out the window. They made an educated guess based on yesterday's sunshine, but they could be wrong if the weather changed. This is similar to how an AI can "hallucinate," and in the field of law it can have dire consequences. Luckily, there are ways to reduce the risk of hallucination, which leads us to the final problem our example prompt faces.

Problem #4 (needs structure): We can see significant improvements in LLM responses simply by adding some structure to our prompts. We call these prompt templates, and they come in a variety of shapes and sizes. One of the most common is what we call the "Prompt Sandwich," illustrated below.

Figure 1 - "The Prompt Sandwich"
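Figure 1's exact layering is CaseMark's own, but the general pattern behind sandwich-style templates is to wrap the task between an explicit role, context, and guardrails. Here is a minimal Python sketch of that pattern; the section names are our illustrative assumptions, not CaseMark's template:

```python
# A sketch of a generic sandwich-style prompt template. The section names
# (role, context, task, constraints, output format) are illustrative
# assumptions, not CaseMark's exact "Prompt Sandwich".

def build_sandwich_prompt(role, context, task, constraints, output_format):
    """Assemble a structured prompt: role and context up top, the task
    in the middle, and guardrails plus output format on the bottom."""
    constraint_lines = "\n".join(f"- {c}" for c in constraints)
    return (
        f"ROLE: {role}\n\n"
        f"CONTEXT:\n{context}\n\n"
        f"TASK: {task}\n\n"
        f"CONSTRAINTS:\n{constraint_lines}\n\n"
        f"OUTPUT FORMAT: {output_format}"
    )

prompt = build_sandwich_prompt(
    role="You are a corporate attorney experienced in trust law.",
    context=("The client wants to form a private trust company in Georgia "
             "to serve as trustee of a family irrevocable trust holding "
             "real estate and marketable securities."),
    task="Draft an operating agreement for the private trust company.",
    constraints=[
        "If the requested entity cannot legally be formed in the stated "
        "jurisdiction, say so instead of drafting the document.",
        "Cite the governing state statute for each substantive provision.",
    ],
    output_format="A numbered outline of articles and sections.",
)
print(prompt)
```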

It may seem like a nuisance to craft your prompts in such a verbose manner, but trust us, the results are well worth it. Let's take a look at how we can rewrite our original prompt using this template.
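Applied to our original request (and matching the sketch above), a sandwich-style rewrite might read something like this, again as our illustration rather than the figure's exact wording:

ROLE: You are a corporate attorney experienced in trust law.
CONTEXT: The client wants to form a private trust company in Georgia to serve as trustee of a family irrevocable trust holding real estate and marketable securities.
TASK: Draft an operating agreement for the private trust company.
CONSTRAINTS:
- If the requested entity cannot legally be formed in the stated jurisdiction, say so instead of drafting the document.
- Cite the governing state statute for each substantive provision.
OUTPUT FORMAT: A numbered outline of articles and sections.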

Now our prompt is looking much better. Not only is it more structured, but it will also lead to much more accurate results. For starters, when we submit the updated prompt to ChatGPT, we receive the correct response...

"I cannot complete this request due to regulatory uncertainty."

If we updated the jurisdiction in our prompt to a state that does legally recognize private trust companies (e.g., Nevada or Wyoming), the model would use the provided contextual information to produce a more accurate result. This is just one quick example demonstrating how a few really simple techniques can score you massive productivity gains.

For more advanced guides on legal prompt engineering, head over to LegalPromptGuide.com where we have specific examples and use cases that help you maximize your use of CaseMark.

| Summary Type | Best for Case Types | Primary Purpose | Complexity Handling | Production Time | Best for Team Members | Key Information Highlighted |
|---|---|---|---|---|---|---|
| Narrative | General; personal injury | Initial review; client communication | Low to Medium | Medium | All; Clients | Overall story |
| Page Line | Complex litigation | Detailed analysis; trial prep | High | Low | Attorneys | Specific testimony details |
| Topical | Multi-faceted cases | Case strategy; trial prep | High | Medium | Attorneys; Paralegals | Theme-based information |
| Q&A | Witness credibility cases | Cross-examination prep | Medium | High | Attorneys | Context of statements |
| Chronological | Timeline-critical cases | Establishing sequence of events | Medium | High | All | Event timeline |
| Highlight and extract | All | Quick reference; key points | Low to Medium | High | Senior Attorneys | Critical statements |
| Comparative | Multi-witness cases | Consistency check | High | Low | Attorneys; Paralegals | Discrepancies; Agreements |
| Annotated | Complex legal issues | Training; in-depth analysis | High | Low | Junior Associates; Paralegals | Legal implications |
| Visual | Jury presentations | Client / jury communication | Low to Medium | Medium | All; Clients; Jury | Visual representation of key points |
| Summary Grid | Multi-witness; fact-heavy cases | Organized reference | High | Medium | All | Categorized information |