
The Legal AI Handbook
Foreword

When OpenAI released ChatGPT in 2022, the world seemed to immediately change. We've been told AI will impact every aspect of our jobs and our lives, and hype, fear, excitement, optimism, and everything in between have dominated conversations in the two years since.

One thing we know for sure is that the legal industry is among the vanguard when it comes to realizing the early benefits of generative AI. Earlier this year we released the State of AI in Legal Report, which found that 74% of legal professionals are using AI for legal work—90% of whom plan to use AI more frequently next year. As of the publication of this guide, over 80% of Ironclad customers actively use our own AI.

But not only do lawyers have a high bar—the margin for error in legal work is razor thin. Generative AI is a rapidly developing technology, and in order to fully, and responsibly, wield its full potential, it's critical that we understand it. What are the risks? How can I actually use it? What security measures do organizations need to take, and how can we properly evaluate AI software?

Read More: State of AI in Legal Report

This guide is meant to provide the knowledge you need not only to understand AI, but to deploy it intelligently in real-world scenarios. While not exhaustive, it offers a thorough look at AI in the legal field. See our table of contents to guide you to the exact areas you're looking for.
Table of Contents

1. The Basics
2. The Practical Power of AI
3. The Impact of AI-Powered Contract Management
4. AI & Security: Lifting the Hood
5. How to Evaluate AI Software
6. Tips on Getting Started
7. AI Innovation in the Legal Field
8. How Ironclad Helps
CHAPTER ONE

The Basics

A glossary of AI terms
As with most any industry, artificial intelligence comes with its own terminology. Here's a rundown of key terms to be familiar with as you brave this new world:

Artificial Intelligence (AI)

It's a little bit tricky to nail down a single definition of AI that we can all agree upon, but this one from the MIT Technology Review is pretty good: "AI is a catchall term for a set of technologies that make computers do things that are thought to require intelligence when done by people. Think of recognizing faces, understanding speech, driving cars, writing sentences, answering questions, creating pictures. But even that definition contains multitudes."

The basic idea is that AI refers to technology that can do human-esque tasks at something approaching — or, eventually, surpassing — human levels of competence. That's a pretty broad domain, but so too is the potential for AI applications.

Algorithm

A set of rules or instructions given to an AI/ML system, or computer, to help it learn, make decisions, and solve problems.
Chain-of-thought prompting

A prompting technique that significantly improves the ability of large language models to perform complex reasoning. The technique centers around prompting an LLM to generate a series of intermediate reasoning steps (aka a chain of thought) before arriving at its final response.

Deep Learning

A subset of machine learning that uses artificial neural networks to learn from data without human domain knowledge. The "deep" in deep learning refers to the use of multiple layers in the networks. Deep learning is generally more capable and accurate than traditional machine learning.
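To make the chain-of-thought definition concrete, here is a minimal, hypothetical illustration of the difference between a plain prompt and a chain-of-thought prompt; the question and numbered steps are invented examples, and the resulting string would be sent to whichever LLM you use.

```python
# Illustrative only: two ways to phrase the same request to an LLM.
# The exact wording is a hypothetical example, not a prescribed template.

plain_prompt = (
    "Does this indemnification clause cap the vendor's liability? "
    "Clause: {clause_text}"
)

chain_of_thought_prompt = (
    "Does this indemnification clause cap the vendor's liability?\n"
    "Think through it step by step before answering:\n"
    "1. Identify the parties and who indemnifies whom.\n"
    "2. Note any stated caps, carve-outs, or exclusions.\n"
    "3. State your conclusion and cite the exact language it rests on.\n"
    "Clause: {clause_text}"
)

# Either string would be filled in with the clause text and sent to an LLM;
# the second typically elicits intermediate reasoning before the final answer.
print(chain_of_thought_prompt.format(clause_text="..."))
```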

Conversational Assistant

Conversational assistants are programs designed to simulate human conversation. They're often trained and deployed to handle specific use cases, like customer service. Voice assistants like Alexa (Amazon) and Siri (Apple) use a combination of NLP, speech recognition and synthesis, and other technologies to carry on conversations aloud with human users.

Extraction

The process of identifying and pulling out specific data points or attributes from documents, such as contracts. This process uses AI and machine learning to recognize patterns and extract relevant information, transforming unstructured text into structured, searchable data. In a legal context, metadata is often extracted from contracts because they are the key documents in which all legal data lives.
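As a rough sketch of what extraction looks like in practice, the snippet below pulls two metadata fields out of raw contract text using hand-written regular expressions; real systems rely on trained models rather than fixed patterns, and the sample contract text and field names are invented.

```python
import re

# A toy contract snippet; real inputs would come from OCR'd or digital documents.
contract_text = """
This Master Services Agreement ("Agreement") is entered into as of
January 15, 2025, by and between Acme Corp and Widget LLC.
The term of this Agreement is 24 months from the Effective Date.
"""

# Hand-written patterns standing in for what a trained extraction model would learn.
effective_date = re.search(r"as of\s+([A-Z][a-z]+ \d{1,2}, \d{4})", contract_text)
term_months = re.search(r"term of this Agreement is\s+(\d+)\s+months", contract_text)

metadata = {
    "effective_date": effective_date.group(1) if effective_date else None,
    "term_months": int(term_months.group(1)) if term_months else None,
}
print(metadata)  # {'effective_date': 'January 15, 2025', 'term_months': 24}
```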
Copilot

A copilot is a type of assistant generally designed to help users accomplish common tasks faster than before. The term is usually used to refer to a workplace assistant, like a programming copilot or sales copilot.

Generative AI (Gen AI)

As the name suggests, GenAI refers to AI systems that generate new content, like writing, computer code, images, audio, and video. Generative AI models identify the patterns and structures within existing data and use that knowledge to create original content.
Hallucination

Hallucinations are false or misleading outputs generated by AI models. The term is most often used to describe when an LLM makes something up and presents it as truth as part of a text response.

Large Language Model (LLM)

LLMs are algorithms that can recognize, predict, and generate content using very large datasets. They are a type of generative AI specifically developed to generate text. GPT (OpenAI), Claude (Anthropic), Gemini (Google), Llama (Meta), and Nemotron (NVIDIA) are all examples of LLMs.

Machine Learning

A subset of AI that gives computers the ability to learn from data, without explicit programming.

Metadata

Key information and attributes that describe and categorize elements in a document or file, like a contract.

Model

An AI model is a program that has been trained on a set of data to recognize patterns and make predictions. Chatbots like ChatGPT are powered by models that can understand requests and respond to them by predicting the best possible response, one small chunk of data — or token — at a time. Image generation programs are similarly powered by models that predict the best visual response to a prompt, token by token. Models can be trained on various data modalities (e.g., text, images, audio) and on data sets curated for specific industries or purposes.

Natural Language Processing (NLP)

Technology that gives machines the ability to understand, interpret, and generate human language. NLP combines computational linguistics, machine learning, and deep learning to process human language.

Optical Character Recognition (OCR)

OCR is a technology that turns images into text that can be read, searched, and indexed by machines. For instance, OCR can be used to turn PDFs of contracts into digital documents (that then become fully searchable).
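For a sense of what the OCR step involves, here is a minimal sketch using the open-source Tesseract engine through the pytesseract package (one option among many); the file name is a hypothetical example.

```python
# Minimal OCR sketch using the open-source Tesseract engine.
# Requires: pip install pytesseract pillow, plus a local Tesseract install.
from PIL import Image
import pytesseract

# Hypothetical scanned page of a contract saved as an image.
page = Image.open("scanned_contract_page1.png")

# Convert the image to machine-readable text that can be searched and indexed.
text = pytesseract.image_to_string(page)

print(text[:500])  # first 500 characters of the recognized text
```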
Prompt

An instruction given to an AI system. E.g., "Write an article outlining the current state of GenAI in the legal profession."

Reinforcement Learning

A type of machine learning in which the model learns through trial and error, receiving rewards or being penalized for its actions.

Retrieval Augmented Generation (RAG)

RAG is a type of generative AI that relies on a database of knowledge to retrieve information from. By combining retrieval mechanisms with generation capabilities, RAG can provide contextually relevant, reliable, and accurate responses that are tailored to specific industry needs.

Sentiment Analysis

An NLP technique that determines the emotional tone or attitude expressed in a piece of text. In the context of contract analysis, it can be used to assess the overall tone of a contract or specific clauses, helping to identify potentially adversarial, or favorable, language.

Supervised Learning

A type of machine learning in which the model is trained on labeled data, like a set of human facial images each labeled with a person's name.

Training and Inference

Training is the process of teaching an AI model on a data set. Inference is when you pass data through the trained model to generate a response.

Training Data

The initial data used to teach a machine learning model. Training datasets used to train LLMs are typically enormous, ranging into the hundreds of billions of words in length.

Unsupervised Learning

A type of machine learning in which the model finds patterns in unlabeled data, like a set of human facial images without any names, genders, or other identifying labels attached to them.
A brief timeline of AI history

To understand where we're going, it's helpful to know where we've been. Here's a quick timeline of AI's evolution:

1950s–1960s: The concept of AI is born, with early experiments in ML and NLP

British mathematician Alan Turing develops a way to assess whether a machine thinks on par with a human. Turing actually called his method "the imitation game," but it was soon known as "the Turing test."

1956: Dartmouth mathematics professor John McCarthy hosts a summer workshop largely credited with founding the field of "Artificial Intelligence."

1970s–1980s: AI winter sets in due to limitations in computing power and overhyped expectations

1974: Sir James Lighthill, an applied mathematician, publishes a highly critical report on academic AI research. Lighthill's claims that researchers over-promised and under-delivered when it came to the potential intelligence of machines resulted in massive funding cuts across the field.

1986: Ernst Dickmanns, a scientist in Germany, soups up a Mercedes van with sensors and on-board computers to create the first self-driving car. It could only drive on roads empty of other vehicles.

1990s–2000s: Resurgence of AI with advances in ML algorithms and increased computing power

1996: World chess champion Garry Kasparov defeats IBM's Deep Blue computer in a six-game chess match.

1997: Deep Blue wins a rematch over Kasparov, needing only 19 moves to take the final game.
2010s: Deep learning breakthroughs lead to significant advances in image and speech recognition, NLP, and other AI applications

2012: AlexNet wins the ImageNet competition, shining a light on the potential of neural networks and deep learning. AlexNet is one of the most influential computer vision research papers ever published, and is credited with accelerating deep learning. As of mid-2024, the AlexNet paper has been cited over 157,000 times.

2013: Geoffrey Hinton, a professor who co-authored the paper with two graduate students, joins Google. One of the student authors, Ilya Sutskever, would later co-found OpenAI.

2017: Eight scientists working at Google publish "Attention Is All You Need." The seminal paper introduced the transformer, a then-new deep learning architecture that most LLMs are now based on.

2020s: The rise of LLMs and GenAI, exemplified by tools like ChatGPT and Midjourney, opens up new possibilities for AI in various fields, including law

2022: OpenAI releases ChatGPT, a chatbot and virtual assistant based on its GPT series of LLMs. ChatGPT becomes the fastest-growing consumer software application in history, racking up more than 100 million users in just over a month.

2024: GenAI takes hold in mainstream use, with AI-powered business and consumer applications addressing use cases from image generation and automated customer service to legal operations and beyond.
How could AI make life better?

In general, AI tools excel at taking over repetitive tasks and turning mountains of information into usable insights. AI tools augment legal work; they don't do it for you—but they can take rote, mundane tasks off your plate and give you back time to focus on higher-level tasks, planning, and strategy.

Our recent State of AI in Legal report uncovered a few examples of where legal teams were interested in—and more importantly, trusting of—using AI in their day-to-day roles:

Chart: "What legal tasks would you trust AI to do on your behalf?" Responses included tagging metadata in documents, flagging risky clauses in contracts, contract analytics and analysis, summarizing case law, preparing legal memos, reviewing contracts, reviewing documents for litigation, and replacing risky clauses in contracts.

Chart: "In what ways, if any, has AI been valuable to your work?" The leading responses were doing better, faster, and more in-depth research; saving time; having mundane tasks done on one's behalf; and being able to be more strategic.

But the gains from using AI go deeper than the individual tasks it performs. For instance, the aforementioned report found that the three leading benefits of AI were 1) doing better, faster, and more in-depth research, 2) saving time during the day, and 3) doing mundane tasks on legal's behalf. Today, over half of lawyers are unsatisfied with work—citing inundation with stressful deadlines and an overwhelming amount of tasks—but 57% of those lawyers believe AI can help alleviate the dissatisfaction.
A shift in mindset

But even before getting started, succeeding with AI is all about adopting the right mindset. Don't look at AI as a silver bullet designed to deliver instant results, but instead as a powerful tool to help you iterate on content and ideas, take some of the drudgery out of mundane tasks, and help you work faster and more efficiently.

Having a misguided mindset about what AI can and can't do will derail your project before it even gets off the ground. A few things to keep in mind:

- AI might feel like magic—but just like any other tool, it takes time to stand up. Especially with regards to training your AI, this will not happen overnight. But the good news is the more you train it, the better it becomes.

- Keeping your data accurate and up to date is critical. As they say, "garbage in, garbage out."

- The rate of advancement in foundation models is increasing exponentially—so even if a solution doesn't meet your needs today, it will likely surpass those needs soon.
CHAPTER TWO

The practical power of AI

We’re still in the early stages of the artificial intelligence revolution, but Data-driven decision making: AI can analyze vast amounts of data

companies, legal teams, and individual professionals are already seeing quickly, providing insights that can inform strategy and decision-

real benefits from using AI on the job—and not just for general tasks like making much faster than manual analysis

brainstorming, softening an email, or helping you figure out that Excel


24/7 availability: Unlike human workers, AI tools can work around
function. 

the clock, helping to meet tight deadlines and manage high volumes

For those of you who have been experimenting with AI already, this may of work

not be surprising, but a few of the primary ways to use AI to boost your
Personalized learning: AI can adapt to individual working styles and
personal productivity is:
preferences, offering personalized recommendations and

Task automation: AI can take over routine, time-consuming tasks, assistance. The rise of AI-powered “copilots” to help humans in the

freeing up lawyers to focus on high-value work that requires human workplace is widely thought to be one of the next big trends in

judgment and creativity. technology and business.

11
Conversational AI - what is it, and what can it do for productivity?

One category of AI that gets outsized personal use–and certainly no shortage of media attention–is conversational AI. Conversational AI refers to technologies that allow computers to understand, process, and respond to human language in a natural way. ChatGPT is a conversational AI chatbot, as are Claude, Gemini, and the countless other LLM-driven chatbots that have popped up in the past few years.

Conversational AI differs from GenAI in that the former is built to hold authentic, two-way conversations with human users, while GenAI is designed to produce original content when prompted. In the legal context, conversational AI can take the form of chatbots, virtual assistants, or more sophisticated language models that can engage in complex legal discussions.

When it comes to productivity for legal professionals, what you can do with conversational AI runs the gamut, but some of the quickest categories to start seeing immediate benefits with include:

Drafting legal documents, like:
- Drafting contracts and policies
- Creating document summaries
- Drafting emails and other communications
- Generating amendments, addenda, and other ancillary documents

Reviewing legal documents, like:
- Reviewing and redlining contracts
- Interpreting legal language
- Fact-checking documents
- Answering questions about a specific document based on historical data
- Comparing multiple documents and document versions

Legal research, like:
- Understanding new policies, regulations, and local/federal law
- Extracting and aggregating data from multiple documents
AI in Contract Lifecycle Management

You’ve seen up until this point that there is no shortage of what AI tech
can and will do for legal work. But where do those capabilities actually
APPROVE live? How do early adopters access them today? The answer for many is a
category of tools that has become a must-have for both private firms and
in-house teams: contract lifecycle management software, or CLM. 

NEGOTIATE EXECUTE
CLM systems have been around for a long time, but, as with tech across
PRE-SIGNATURE virtually every sector today, they’ve all recently been turbocharged by AI.
Many could already streamline stages of the contract process, from
creation to negotiation, execution, and post-execution. But now, because
Contract
AI uses ML and NLP to mimic human processes, a truly robust AI-
powered CLM will do all that, plus help with

GENERATE
Management FULFILL Contract ingesting and tagging. CLMs use AI to analyze, tag, extract,
Cycle and report on contract data.

Building standardized templates and assisting in negotiating, redlining,


POST-SIGNATURE and reviewing contracts. AI-powered CLM tools can automatically
locate problematic clauses within contracts before you send them out

RENEW ANALYZE Providing contract analytics insights to help organizations make data-
driven decisions about their contractual relationships.

Boosting operations and productivity by streamlining the contract


OPTIMIZE creation, review, and renewal process.

13
Why contract data is an ideal place to start using AI tech

There’s another reason why CLMs are ideal spaces to start with AI: they

house contract data. And working with contract data presents an

opportunity for legal departments to drive real, data-driven business impact

through data locked in contracts—and serves as a perfect experimentation

environment for five main reasons:

1 2 3 4 5

Universality
Accuracy
Volume and Risk mitigation
Customization

repetitive structure
opportunities

Contracts are one of By nature, contracts are Manual contract review is time-

the most ubiquitous highly accurate. Having Organizations typically handle a consuming and prone to human Contract data is a good

business tools on the been reviewed by teams large number of contracts, many error, and unfavorable or risky place to get your feet wet

planet. Every of lawyers and of which share similar structures terms can often hide in contract with customization

department, within stakeholders, for the and clauses. This repetition data. AI excels at uncovering capabilities as most

every company in the most part, your creates a perfect learning problematic language, and at organizations usually have

world, uses contracts will be environment for AI systems, and creating guardrails so custom clauses, fields, and

contracts. complete, accurate, and, the structured format of most employees don't inadvertently metadata properties that

more or less, final. contracts also provides a introduce risk while drafting or are pre-approved for use.

consistent framework for AI editing contracts.

algorithms to analyze.

14
Conversational AI for contract data

Conversational AI, in particular, has tremendous potential to transform how legal professionals interact with contract data. Why? Because it's so natural to use, there's hardly any learning curve involved.

Here's a small sampling of some of the ways conversational AI can help you get more out of working with contract data:

Ask questions about your day-to-day data. Conversational AI tools understand natural language queries, so it's easy to talk with them about everyday tasks. Ask questions about your contracts in plain language, like, "What are our obligations under this contract?" or "Which contracts are up for renewal next quarter?" Then, ask follow-ups and dig deeper into the data than you would on your own.

Brainstorm with AI. Chatbots can do more than just answer questions about the data they're trained on. They're great at distilling complex information into concise summaries and make killer brainstorm partners as well. AI chatbots can help you come up with new ideas, strategies, and plans for improving your contracting process or making data-driven decisions across the business. Talk to the bot like a brainstorming partner, and see what you can come up with together.

Gain insights on how you negotiate. Conversational AI can analyze your historical contract negotiations and provide valuable insights into your negotiation style and tactics. By examining patterns in your past negotiations, the AI can tell you which clauses are getting negotiated the most, which contract types take the most negotiation time, and if there are major areas of revenue leakage in your historical contracts, amongst other things. A conversational AI chatbot can even simulate negotiation scenarios to help you practice.
Customization in AI-powered contract management

AI systems are designed to be highly customizable. When we talk about customization in AI-powered contract management, we're not talking about the software itself, or some kind of bespoke licensing agreement to use the software. We're talking about how AI systems allow legal professionals to tailor their individual work processes and outputs.

Let's examine some concrete examples:

1. Custom clause libraries

AI systems can be trained on an organization's preferred clauses, ensuring consistency across all contracts while maintaining the unique voice and requirements of the business. A legal department could use AI to create a custom clause library for merger and acquisition (M&A) contracts by analyzing thousands of past deals to suggest optimal clauses based on transaction type, jurisdiction, and client industry.

Here's how this goes beyond simple template management:

- The system learns the nuances of your clause preferences based on example text you provide, including specific language, formatting, and contextual usage.

- As new clauses are approved and added to the library, AI adapts in real time, suggesting these new clauses as appropriate in future contracts.

- The system can identify when a proposed clause deviates from the preferred library, flagging it for review and suggesting alternatives.

- Over time, the AI can analyze the performance of different clauses, providing insights into which ones lead to faster negotiations or fewer disputes.

Read More: Check out how Ironclad is approaching custom AI
2. Industry-specific models and retrieval-augmented generation

While general contract AI models are powerful, industry-specific fine-tuning takes their capabilities to the next level. An AI model fine-tuned on drug licensing agreements, for instance, could accurately identify and assess complex royalty structures, regulatory compliance clauses, and intricate intellectual property terms specific to biotech partnerships.

- Fine-tuned models can become fluent in industry-specific jargon, regulations, and standard practices, reducing the need for constant human oversight.

- The models can recognize and flag industry-specific risks that might be overlooked by a generalist system.

- The models can suggest industry-standard clauses and terms that might be missing from a draft contract.

- As regulations change, the AI model can be updated to ensure compliance across all new and existing contracts.

Alternatively, more and more applications are using retrieval-augmented generation, or RAG for short, as a more cost-effective and broad approach. In a nutshell, RAG is a type of GenAI that relies on a trusted database of knowledge to retrieve information from.

- By combining retrieval mechanisms with generation capabilities, RAG can provide contextually relevant and accurate responses that are tailored to specific industry needs.

- RAG models can access up-to-date information and domain-specific databases, ensuring that the generated content reflects the latest trends, regulations, and best practices within an industry.

- By retrieving pertinent information, these models can produce more informed outputs, improving the quality of responses in specialized fields such as law, finance, or healthcare.

- The retrieval component allows RAG to maintain a balance between creativity and factual accuracy, enabling it to generate content that is not only coherent but also grounded in real-world data.
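To show the retrieve-then-generate pattern at its simplest, the sketch below scores a tiny in-memory knowledge base by word overlap, pulls the best matches, and folds them into a prompt. Production RAG systems use vector embeddings and a proper document store; the clause texts and function names here are invented for illustration.

```python
# A stripped-down retrieval-augmented generation (RAG) sketch.
# Real systems use vector embeddings and a document store; this uses plain
# word overlap so the retrieval step is easy to see. All text is hypothetical.

knowledge_base = {
    "limitation_of_liability": "Liability is capped at fees paid in the prior 12 months.",
    "data_protection": "Vendor must process personal data per GDPR and the DPA.",
    "termination": "Either party may terminate for material breach with 30 days' notice.",
}

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k knowledge-base entries sharing the most words with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        knowledge_base.values(),
        key=lambda passage: len(q_words & set(passage.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question: str) -> str:
    """Combine retrieved context with the user's question for the generation step."""
    context = "\n".join(f"- {passage}" for passage in retrieve(question))
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

# The resulting prompt would be passed to whatever LLM the application uses.
print(build_prompt("What notice period applies if we terminate for breach?"))
```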
3. Risk scoring

Customizable risk assessment is a game-changer for legal teams, allowing them to align AI outputs with their organization's risk tolerance and priorities. Imagine a multinational company that trained an AI contract analysis tool to assign risk scores based on its specific regulatory landscape and risk tolerance. The tool could quickly identify high-risk clauses in vendor agreements across the multiple countries the company does business in, allowing for renegotiation to significantly reduce the overall risk profile.

- Organizations can define their own risk categories and weightings, ensuring the AI system focuses on what matters most to them.

- The system can be trained to recognize subtle indicators of risk that are specific to the organization's history and context.

- Risk scores can be dynamically adjusted based on changing market conditions or company priorities.

- The AI can provide both detailed explanations for its risk assessments and visualizations of the referenced data, allowing legal professionals to understand and validate the scoring.

4. Workflow Integration

Integrating AI into existing contracting workflows goes beyond simple automation, offering a tailored approach to contract management.

- Create custom approval chains and task systems, routing contracts to the appropriate stakeholders based on content, risk score, or other defined criteria.

- Escalation rules can be fine-tuned to match the organization's hierarchy and decision-making processes.

- AI models can learn from past workflow patterns, suggesting optimizations to reduce bottlenecks and speed up contract cycles.

- Automated notifications and reminders can be customized to match the communication style and urgency levels preferred by different team members.

Another way to take advantage of generative AI for contract work is to train the system on custom language your company and/or industry frequently uses. Let's take a look at how it works.
Data accuracy through custom training

Data accuracy is crucial for effective decision-making and risk mitigation. Custom training plays a pivotal role in enhancing an AI system's ability to understand and process contracts accurately. Here's how it works:

Clause identification

AI systems can be trained to recognize and categorize specific clauses unique to your company or industry. This customization goes beyond generic clause recognition, as an AI model can learn to identify company-specific language and formatting preferences.

From there, it can recognize industry-specific clauses that may not be common in general contract databases. The system can then be trained to differentiate between subtle variations of similar clauses, ensuring precise categorization. As new clause types are introduced, the AI model can be quickly updated to recognize and categorize them appropriately.

Data extraction

Custom training significantly improves an AI system's ability to extract relevant data points from contracts.

An AI model can be taught to recognize and extract specific data fields that are crucial to your company but might be overlooked by generic systems. The system can learn to interpret context-dependent information, like when certain data points are relevant to the contract type or parties involved. The model can also be trained to handle complex data structures, such as nested clauses or interdependent terms. As data extraction accuracy improves, the need for manual verification decreases, streamlining the contract review process and freeing staff for other, higher-order work.

Variation recognition

AI can be taught to recognize acceptable variations of standard clauses, improving flexibility in contract analysis. First, the system learns to identify when a clause is substantially the same as a standard clause, even if the wording is slightly different. It can then catch variations that fall outside of acceptable parameters, flagging them for review by human experts.

The AI can suggest standardization opportunities when it encounters frequently used variations of clauses. This flexibility allows for more nuanced contract analysis, accommodating the real-world diversity of contract language while maintaining consistency.
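A minimal sketch of that "substantially the same clause" check, using plain string similarity from Python's standard library; real systems would use trained models or embeddings, and the 0.85 threshold and clause text are illustrative assumptions.

```python
# Minimal sketch of variation recognition using plain string similarity.
# Production systems use trained models or embeddings; the 0.85 threshold
# and the clause text below are illustrative assumptions only.
from difflib import SequenceMatcher

standard_clause = (
    "Neither party's total liability shall exceed the fees paid "
    "in the twelve months preceding the claim."
)

proposed_clause = (
    "Neither party's total liability will exceed the fees paid "
    "in the 12 months preceding the claim."
)

similarity = SequenceMatcher(None, standard_clause.lower(), proposed_clause.lower()).ratio()

if similarity >= 0.85:
    print(f"Acceptable variation of the standard clause (similarity {similarity:.2f}).")
else:
    print(f"Deviates from the playbook; route to human review (similarity {similarity:.2f}).")
```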
Error reduction

AI feeds on data, and AI models can learn from experience over time. As an AI-powered contracting system learns from company-specific contracts, it becomes better at identifying and flagging potential errors or inconsistencies.

The system develops a deep understanding of what "normal" looks like for your organization's contracts, making it easier to spot anomalies. That understanding makes it easier to identify potential errors in data entry, such as incorrect dates or mismatched party names. The AI model can also flag inconsistencies between different sections of a contract, helping to ensure coherence within the document. Over time, the AI system can provide insights into common error patterns, allowing for proactive improvements in contract drafting and review processes.

What data should I track?

When implementing AI in contract management, tracking the right data is essential for overall performance and return on your investment. Here are key metrics to consider tracking, along with tips on why the data matters and how best to leverage it.

1. Contract lifecycle time

What it is: Elapsed time from contract initiation to execution.

Why it matters: Shorter cycle times can lead to faster deal closures and improved business agility.

How to use it: Identify bottlenecks in the process and implement targeted improvements. For example, if the data shows that legal review of a particular contract type takes an average of 5 days, the team could implement an AI-assisted pre-review against an approved language playbook to cut it down to 2 days.
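A small sketch of how cycle time could be computed once initiation and execution dates are tracked; the records below are invented, and a CLM would normally supply this data through its reporting tools or an API.

```python
# Sketch: computing contract lifecycle time from initiation/execution dates.
# The records below are invented; a CLM would normally supply this data.
from datetime import date
from statistics import mean

contracts = [
    {"id": "NDA-101", "initiated": date(2025, 1, 6),  "executed": date(2025, 1, 14)},
    {"id": "MSA-202", "initiated": date(2025, 1, 10), "executed": date(2025, 2, 21)},
    {"id": "SOW-303", "initiated": date(2025, 2, 3),  "executed": date(2025, 2, 17)},
]

cycle_days = [(c["executed"] - c["initiated"]).days for c in contracts]

print(f"Average cycle time: {mean(cycle_days):.1f} days")
print(f"Slowest contract:   {max(cycle_days)} days")
```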
2. Approval times

What it is: Time spent in various approval stages.

Why it matters: Long approval times can delay contract execution and potentially lose business opportunities.

How to use it: Identify which types of contracts or clauses tend to cause delays and streamline approval processes. For instance, if analysis reveals that non-disclosure agreements (NDAs) with non-standard confidentiality terms take 3 times longer to approve, pre-approved alternative clauses can be created to shorten approval times.

3. Negotiation rounds

What it is: Number of back-and-forth exchanges during negotiation.

Why it matters: Excessive rounds can indicate inefficiencies or misalignments in the negotiation process.

How to use it: Develop strategies to reduce negotiation rounds, such as improving initial contract drafts or providing negotiators with better data. For example, after noticing 5+ rounds of negotiation on payment terms, the team could create an AI-powered clause library with pre-approved variations, reducing negotiation to 2-3 rounds on average.
4. Risk scores

What it is: AI-generated risk assessments for each contract.

Why it matters: Helps prioritize high-risk contracts for review and informs risk management strategies.

How to use it: Adjust risk tolerance thresholds, focus resources on high-risk areas, and track risk trends over time. For instance, the legal team could set an alert for contracts scoring above 7/10 on their internally created risk scale, ensuring these receive priority review by senior attorneys.

5. Compliance metrics

What it is: How well contracts adhere to internal policies and external regulations.

Why it matters: Ensures legal and regulatory compliance, reducing the risk of penalties or legal issues.

How to use it: Identify areas of frequent non-compliance and implement targeted training or process improvements. For example, if AI analysis discovers that 30% of contracts exclude GDPR clauses, the team could implement an automated GDPR clause insertion and verification step.
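As an illustration of the threshold-based alerting described above, the sketch below flags contracts whose AI-assigned risk score crosses an internal cutoff; the scores and the 7/10 threshold are illustrative, and in practice the scores would come from the AI risk model.

```python
# Sketch: route contracts above an internal risk threshold for priority review.
# Scores here are made up; in practice they would come from the AI risk model.
RISK_THRESHOLD = 7  # e.g., "above 7/10 goes to a senior attorney"

scored_contracts = [
    {"id": "VENDOR-12", "risk_score": 8.5},
    {"id": "NDA-77",    "risk_score": 2.0},
    {"id": "MSA-31",    "risk_score": 7.2},
]

priority_review = [c for c in scored_contracts if c["risk_score"] > RISK_THRESHOLD]

for contract in priority_review:
    print(f"{contract['id']}: risk {contract['risk_score']} -> escalate to senior attorney")
```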
6. Value leakage

What it is: Instances where contract terms are not fully leveraged or enforced.

Why it matters: Represents lost value or missed opportunities for the business.

How to use it: Implement systems to better track and enforce contract terms, and educate stakeholders on contract value maximization. For instance, AI could flag unused volume discounts in supplier contracts, leading to cost reduction through better term utilization.

7. Contract renewals

What it is: Upcoming renewal dates and associated values.

Why it matters: Proactive management of renewals can lead to better terms and prevent unintended auto-renewals.

How to use it: Set up automated alerts and processes for timely review and renegotiation of contracts approaching renewal. For example, an AI system that alerts the team 90 days before each SaaS contract renewal would allow time for usage analysis and renegotiation, potentially saving on renewals.
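A sketch of the 90-day renewal alert from the example above; the contract list is invented, and a real system would read renewal dates from the CLM.

```python
# Sketch: surface contracts whose renewal date falls within an alert window.
# The contract list is hypothetical; a CLM would normally provide these dates.
from datetime import date, timedelta

ALERT_WINDOW = timedelta(days=90)
today = date.today()

contracts = [
    {"id": "SAAS-CRM",  "renews_on": today + timedelta(days=45)},
    {"id": "SAAS-HRIS", "renews_on": today + timedelta(days=200)},
]

for contract in contracts:
    days_left = (contract["renews_on"] - today).days
    if contract["renews_on"] - today <= ALERT_WINDOW:
        print(f"{contract['id']} renews in {days_left} days: start usage review now")
```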
8. Clause language flagging

What it is: Frequency and context of specific clause usage across contracts.

Why it matters: Provides insights into negotiation patterns and potential areas for standardization.

How to use it: Identify frequently negotiated clauses for potential pre-approval, and standardize common variations. For instance, if analysis shows 5 common variations of a liability clause, creating a pre-approved clause menu could significantly reduce negotiation time.

9. Obligation fulfillment

What it is: Tracking of contractual obligations and their completion status.

Why it matters: Ensures all parties are meeting their contractual commitments and helps prevent disputes.

How to use it: Set up automated reminders for upcoming obligations and regularly review fulfillment status. For example, an AI system could track delivery deadlines in manufacturing contracts, sending alerts ahead of due dates to potentially cut down on late deliveries.

Tracking these metrics can help you gain valuable insights into contract management processes, identify areas for improvement, and leverage AI to drive better outcomes in your contractual relationships.
How can I use this data to do my job better?

The global legal technology market has grown significantly in recent years, and GenAI will accelerate this growth, with the market expected to reach $50 billion in value by 2027, according to Gartner, Inc.¹ This investment is not just about automating routine tasks; it's about transforming how legal professionals approach their work. As AI takes over the mundane aspects of contract management, legal experts are free to engage in more strategic, high-value activities.

The true power of contract data lies in how it can inform decision-making and strategy. Predictive analytics, for instance, allows legal professionals to anticipate potential issues in new contracts or negotiations based on historical data. A 2024 survey by Lex Machina found that 100% of legal analytics users find it valuable, with 70% saying successful litigation outcomes drive their usage, and 69% citing improved efficiency.

Additionally, the insights gleaned from contract data enable legal teams to allocate resources more efficiently. By identifying which types of contracts — and stages in the contract lifecycle — require the most time and attention, teams can prioritize their efforts where they'll have the greatest impact. A recent EY Law Survey revealed that 99% of organizations report that managing current contracting workloads is a challenge, highlighting the potential impact of data-driven resource allocation.

Perhaps most importantly, leveraging contract data allows legal professionals to contribute more directly to business strategy. By analyzing trends in negotiation patterns, compliance metrics, and contract performance, legal teams can provide valuable insights that inform broader business decisions. Moving forward, the ability to effectively leverage contract data will become a key differentiator for legal professionals and help them become drivers of tangible business impact and strategic partners to the business.

1. Gartner, "Gartner Predicts the Global Legal Technology Market Will Reach $50 Billion by 2027 as a Result of GenAI," 25 April 2024, Rob Van Der Meulen, et al. GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.
CHAPTER THREE

The Impact of AI-Powered Contract Management Across the Organization

AI has already proven to significantly impact many aspects of business operations. We're seeing AI-powered contracting accelerate deal cycles, surface trends and lead to data-driven insights, and save time and money through automation. It is also helping optimize team workloads, improve accuracy and consistency, and free up legal teams to focus on the higher-value strategic work that they went to law school for.

Customer highlights: automatically tagged/indexed over 4k contracts; cut contract language processing time by 50%; reviews contracts 80% faster.

"Our goal is to keep legal out of 95% of contracts - AI-driven workflows, permission controls, and analytics help us get there." — Catherine Choe

"Ironclad AI automatically reviews these contracts, flags clauses that don't work for us, and suggests L'Oréal-approved provisions to swap in. This cuts the review process from hours to minutes, improves our team's efficiency, and frees up time for the team to focus on more high impact work." — Charles Hurr
Team-specific Benefits

Clearly, we believe that AI is a game changer for the legal field (explained below). But what about the rest of your organization? Let's explore what it'll do for you, and the benefits your various stakeholders could see within the context of contract lifecycle management.

LEGAL

1. Time Savings
Automation of routine tasks allows focus on complex negotiations and strategic advisory roles. AI will handle:
- Initial contract drafting
- Redline suggestions based on predefined guidelines and playbooks
- Intelligent ingestion of legacy documents and their metadata
Example: A lawyer spends 30% less time on routine NDAs, allowing them to contribute more to high-stakes M&A deals.

2. Reduced Risk
AI-powered analysis can catch potential issues human reviewers might miss. AI will handle:
- Automated bulk review of documents, flagging anomalies overlooked in manual review
- Standardization of templates that ensure compliance with company guidelines
Example: AI flags an unusual indemnification clause in a vendor contract, preventing potential liability.

3. Data-Driven Insights
Access to contract analytics can shape negotiation strategies, uncover bottlenecks, and prove legal's business value. AI will:
- Detect contract metadata in ingested documents for trend analyses
- Analyze contract metadata to surface insights into risks, costs, abnormalities, and more
Example: Analytics reveal that certain clauses consistently lead to faster deal closure, informing future contract drafting.

4. Improved Work-Life Balance
AI helps legal teams manage their workloads more effectively, reducing burnout and making them better business partners across the org. AI will:
- Automate review of low-risk, high-volume contracts
- Enable faster drafting, reviewing, metadata gathering, and risk management
Example: A legal team reduces average weekly overtime from 10 hours to 3 hours after implementing AI-powered contract management.
SALES

1. Faster Deal Closure via Self-Service
AI-powered systems can generate and process contracts faster, reducing time from proposal to signed agreement. AI will:
- Automate contract review on low-risk, high-volume contracts and remove legal as a bottleneck
- Pull in relevant, pre-approved contract language to ensure compliance
- Provide visibility into contract status and details instantly
Example: Average contract cycle time reduces from 4 weeks to 1 week, allowing sales to close deals 75% faster.

2. Predictive Insights
By analyzing historical data, AI models can provide insights into deal likelihood, forecast accuracy, and process bottlenecks. AI will:
- Predict issues in deals based on the analysis of risky clause language
- Provide more accurate forecasting based on deal size, timing, and redline analysis
- Flag stages in the contract lifecycle that might slow the deal down
Example: AI flags that including a particular clause in a deal yields a 30% lower close rate on average, allowing sales to proactively address the issue.

IT

1. Data Visibility
AI-powered CLMs offer easier and more accurate contract data visibility, which helps monitor security and spend. AI will:
- Analyze large volumes of contract data to find patterns in software usage, spend, and data access
- Use NLP to detect renegotiation or termination clauses in contracts
- Evaluate the success and compliance of different service providers based on past contract data
Example: You can ask a legal AI chatbot to pull the top five most and least expensive software contracts and map out corresponding adoption to savings opportunities.

2. Risk Mitigation
AI-powered CLMs help IT teams reduce various technological and operational risks. AI can:
- Provide digestible audit logs for fast security and compliance reviews
- Monitor contract performance and notify teams of unmet obligations, mitigating legal and financial consequences
Example: The AI system automatically flags potential data privacy issues in contracts, helping the IT team proactively address GDPR compliance risks before they become problems.
PROCUREMENT

1. Workflow Efficiencies
AI-powered CLMs streamline procurement processes, addressing common operational challenges. AI will handle:
- Initial drafting and redlining of procurement contracts
- Sequential, customized approver routing using standardized guidelines
Example: Leveraging an AI-powered playbook, the CLM automatically routes contracts to the right approvers based on contract value and type, reducing approval times and eliminating lost contracts.

2. Supplier Management
AI-powered CLM can help track supplier performance against contract terms. AI will:
- Monitor deliverables, SLAs, and other performance metrics and generate reports automatically
- Flag upcoming renewals and unmet obligations via contract data analysis
Example: The system flags a supplier consistently missing delivery deadlines, allowing procurement to address the issue proactively.

3. Cost Savings
Better contract analysis can identify opportunities for consolidation or renegotiation. AI will:
- Analyze total contract value across contract types and other segments, identifying overlaps or opportunities for consolidation or renegotiation
- Consolidate spend data from multiple sources to provide a unified view across documents
Example: AI analysis reveals $500,000 in potential annual savings by consolidating office supply contracts across departments at a global organization.

4. Risk Management and Compliance
AI can flag high-risk suppliers or contracts for closer monitoring and ensure compliance with requirements based on pre-set risk factors. AI will:
- Prioritize which suppliers or contracts need more attention
- Verify that contracts include necessary clauses and adhere to regulatory requirements
Example: The system identifies a supplier with recent negative press, prompting a review of the relationship, while simultaneously ensuring all new contracts meet GDPR requirements.
CHAPTER FOUR

AI & Security: Lifting the Hood

Concerns about AI and security aren't unfounded, but they shouldn't deter you from using the technology. Just as you would with any other technology, you should take time to understand the basic issues surrounding AI and security, and to learn — and follow! — best practices for using artificial intelligence without putting your company or clients at risk.

When we talk about the risks of generative AI, we're talking mainly about two things: 1) what the system outputs (and why), and 2) who can access all of the data both going into and coming out of the AI model, including what the model was trained on. Let's break that down into some key areas to be aware of as you think about using AI securely in your own workplace:

Privacy

AI systems often require large amounts of data to function effectively. Questions of where the training data comes from — and what happens to data analyzed or otherwise passed into these systems during use — raise concerns. Will sensitive client information be protected if I feed it to AI for analysis? Will my company's intellectual property be used in training and recommendations for competitors? How can we best leverage AI while maintaining compliance with data protection regulations like GDPR (the European Union's regulation on information privacy)? With rapidly evolving AI capabilities, the answers to these questions are often more important than product performance itself.
Security

AI systems can be targets for hackers and cybercriminals, especially if they govern critical infrastructure or power grids. Security breaches could lead to stolen data, but also to compromised AI systems that produce altered or otherwise unreliable output. A malicious actor could poison a system's training data, causing the model to learn incorrect behaviors. They could also wage adversarial attacks designed to skew input data to manipulate the AI's output without being detected.

Look out for integration challenges: Poorly implemented AI tools that don't integrate well with existing systems can create security vulnerabilities or lead to data inconsistencies.

Bias and Fairness

AI systems can inadvertently perpetuate or amplify biases present in their training data, potentially leading to unfair or discriminatory outcomes. The use of AI in legal decision-making, specifically, raises ethical questions about accountability, fairness, and the role of human judgment in the legal process.

Evaluate algorithm design: The way AI algorithms are designed can introduce biases or errors. For example, if certain factors are given too much weight in a decision-making algorithm, it could lead to skewed results.
Lack of Transparency

Many AI systems, particularly deep learning models, operate as "black boxes," making it difficult to understand how they arrive at their conclusions. This can be problematic in legal contexts where explainability is crucial.

Look for model cards that show their work: AI model cards provide details about how a model was developed, including architecture and training data. Seeing what kind of data was used to train the model is key to understanding whether the output of the model will be biased. Bonus points for solutions that clearly lay out how a model came to a specific conclusion.

Accuracy and Reliability

AI models are only as good as the data they're trained on. Insufficient, poor-quality, or biased training data can lead to inaccurate or unfair outcomes.

Beware of hallucinations: Hallucinations have been a scourge of generative AI since day one, and while models are rapidly improving, the risk should still be taken seriously. Make sure to double-check any and all outputs from GenAI tools against trusted sources.
How to think about mitigating those risks

Mitigating AI risks requires a multifaceted approach. Legal teams might not be on the front lines for the bulk of the work, but they should know how to work with the right partners — Information Technology (IT) chief among them.

Here are eleven things to consider with your colleagues in IT when making your own AI risk plan:

1. Data Governance: Implement strong data governance practices to ensure the quality, security, and ethical use of data used to train and operate AI systems.

2. Regulatory Compliance: Stay informed about and comply with relevant regulations and developing frameworks for AI governance, like ISO 42001 and NIST. Know how the regulations differ in various jurisdictions around the world.

3. Algorithmic Auditing: Regularly audit AI algorithms for bias and fairness, and make necessary adjustments to ensure equitable outcomes.
4. Audit Logs: Make sure you have access to application audit logs inside of your own network (i.e., you don't have to request them from a third party). Audit logs let you monitor in real time, spotting threats or risks before they become full-on problems.

5. Explainable AI: Where possible, use AI models that provide explanations for their decisions, or develop supplementary systems to interpret complex models.

6. Human Oversight: Have humans review everything your AI systems create, before you use it or send it to someone else. Maintain human oversight of AI systems, especially for critical decisions. AI should augment human decision-making, not replace it entirely.

7. Continuous Monitoring: Legal teams, ask your friends in IT to implement systems that continuously monitor AI performance and outputs for anomalies or unexpected behaviors.

8. Ethics Guidelines: Develop and adhere to clear ethical guidelines for AI use in legal contexts. This begins with understanding and taking into account the algorithmic bias in any AI system used to influence decisions within legal work. Your guidelines should also account for accuracy, privacy, and human oversight of work done by AI.

9. Training and Education: Ensure that all users of AI systems are properly trained in their capabilities, limitations, and potential risks. Develop policies for use of AI systems in the workplace that address data and system security (see section below).

10. Security Measures: Implement robust cybersecurity measures to protect AI systems and the data they use from unauthorized access or manipulation. Encrypt everything, and make sure you hold the keys. From complying with regulations like GDPR and HIPAA to ensuring the integrity of your data, encryption is more vital than ever.

11. Diverse Development Teams: Encourage diversity in AI development teams to help identify and mitigate potential biases.
How to be responsible (and CYA) when using AI

When using AI in legal work, it's crucial to approach it with the right mindset and take appropriate precautions to ensure you're using it responsibly. In other words, always make sure to CYA - Cover Your… well, you know. To that end:

Keep a human in the loop

AI makes mistakes, so it's essential to adopt a "trust but verify" approach. Have human experts review all AI-generated content before anyone uses it. Treat AI as a tool to augment your work, not replace your judgment.

Treat AI output as your own

You are responsible for any work product that incorporates AI-generated content. Not the AI company or the app maker who built AI into their app — you. Review and validate all AI outputs before using them in official documents or communications.

Stay on top of your data

It's critical to know what solutions — and the third parties they work with — do with your data. Will they train models with it? Where will it be stored? Look for places to enable zero data retention (ZDR) when appropriate, and have a plan for deleting your data if and when third-party relationships end. Some companies, like Anthropic, operate on ZDR by default.

AI is a tool, not a teammate

Approach AI with the same level of skepticism you would use when searching on Google. AI won't do everything for you – it's only as good as the prompts you provide it with. Remember the old computer programming adage, "garbage in, garbage out." It applies to prompting AI models, too.

Craft your prompts carefully

Be specific and clear in your instructions to AI. And remember that the more you use an AI model, the better it gets at responding with the kind of output you're looking for: getting AI to give you the results you want is usually an iterative process. Follow up with additional prompts as needed to refine and improve the AI's output.

Set an internal policy for AI usage

Establishing clear guidelines for how AI should be used within your organization should be a top priority for every company.
What should your internal policy for AI usage include?
You can see how Ironclad approaches this (and download our policy) here, but a few basic elements you should consider for your team:

Confidentiality

Classify data according to your data classification matrix, clearly outlining which type of data is appropriate for which use. Data classified as confidential or higher should not be fed into AI prompts by default, and company intellectual property should always be protected.

Responsible Use

Employees should be held responsible for AI-generated content as if it were their own. They must carefully review content for accuracy and potential negative effects on third parties.

Service-specific Review

Not all AI services are created equal. Evaluate each service based on its approach to data processing, compliance, and legal terms. (Find more in Part 5, "How to evaluate AI software.")

Process documentation

Keep records of how AI was used in your work, including the prompts used and any post-processing or verification steps taken.

Staying informed

Keep up-to-date with the latest developments in AI technology and relevant regulations or ethical guidelines in the legal industry.

Transparency

When appropriate, disclose the use of AI in your work to clients or colleagues. This can help manage expectations and build trust.

Read More: How We Regulate AI Use at Ironclad
CHAPTER FIVE

How to evaluate AI software

When evaluating AI software for legal applications, it's crucial to ensure the solution meets your needs, and also that it complies with legal and ethical standards. Here's a look at a few key considerations:

What to look for from a data security perspective

1. Data Encryption
Ensure the software uses strong encryption (e.g., AES-256) for data in transit and at rest
Look for end-to-end encryption for sensitive communications and personally identifiable information (PII).

2. Access Controls
Seek granular access controls that allow role-based permissions
Check for multi-factor authentication options to enhance security.

3. Data Residency
Understand where your data will be stored and processed
Ensure compliance with relevant regulations like GDPR or CCPA
Consider options for data localization if required by your jurisdiction.

4. Audit Trail
The system should maintain detailed logs of all data access and changes
Look for immutable audit logs that cannot be altered (a short sketch of what this can look like follows this checklist)
Ensure the ability to generate comprehensive audit reports.

5. Data Training, Retention, and Deletion
Check for configurable data retention policies
Understand if and how the software will train on your data
Ensure the software supports secure data deletion methods (e.g., data wiping)
Look for features that allow for selective data deletion to comply with "right to be forgotten" requests.

6. Third-Party Audit
Check which third parties the software uses, and what their AI policies around retention and security are
Prioritize software that undergoes regular third-party security audits
Ask for recent audit reports and check how quickly past issues were resolved.

7. Compliance Certification
Look for certifications like ISO 27001, SOC 2, or GDPR compliance
Check if the vendor maintains a compliance program with regular assessments.

8. Data flow diagram
Ensure the DFD clearly depicts data sources, processes, data flows, and storage, so you can easily trace and understand how your data flows through their product
Look for adherence to relevant regulations (e.g., GDPR) and other data protection measures during transfer and storage
Evaluate how the solution integrates with existing systems – and whether it takes into account future growth in data volume and complexity.

9. Sub-processor usage
Most industries are subject to strict data protection laws (like GDPR or HIPAA), so investigate the sub-processors' data handling practices, encryption standards, and breach notification procedures
Establish a clear communication channel to receive updates on any changes in sub-processor arrangements.

When you're ready to take the next steps in your evaluation, be sure to check out the LLM Top 10 from The Open Worldwide Application Security Project (OWASP), a comprehensive risk/threat model that is open-sourced and available for free.
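
To make item 4's call for immutable audit logs more concrete, here is a minimal sketch, in Python, of what a tamper-evident log can look like. It is an illustration under our own assumptions (the function and field names are hypothetical), not how any particular vendor implements audit trails; the point is simply that each entry commits to the one before it, so retroactive edits are detectable.

```python
# A minimal, illustrative sketch of a tamper-evident ("immutable") audit log.
# Not any vendor's actual implementation; function and field names are hypothetical.
# Each entry stores a hash of the previous entry, so any later edit to an
# earlier record breaks the chain and is caught on verification.
import hashlib
import json
import time

def append_entry(log, actor, action, resource):
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {
        "timestamp": time.time(),
        "actor": actor,
        "action": action,
        "resource": resource,
        "prev_hash": prev_hash,
    }
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["hash"] = hashlib.sha256(payload).hexdigest()
    log.append(entry)
    return entry

def verify_chain(log):
    prev_hash = "0" * 64
    for entry in log:
        if entry["prev_hash"] != prev_hash:
            return False
        payload = {k: v for k, v in entry.items() if k != "hash"}
        digest = hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()
        if digest != entry["hash"]:
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_entry(log, "jdoe@example.com", "viewed", "contract-1234")
append_entry(log, "jdoe@example.com", "edited", "contract-1234")
print(verify_chain(log))  # True; editing or deleting an earlier entry would make this False
```

In practice, look for vendors that pair this kind of chaining with write-once storage and independent verification, rather than logs an administrator can quietly rewrite.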

What to look for from a policy perspective

When evaluating AI software for legal use, look closely at the vendor's policies and practices. These factors ensure ethical use, transparency, and reliability of the AI system in your legal operations.

1. Ethical AI Guidelines
The vendor should have clear, publicly available ethical guidelines for AI development and use
Look for alignment with established AI ethics frameworks (e.g., IEEE Ethically Aligned Design).

2. Transparency
Seek vendors who provide detailed information about their AI models' training data and methodologies
Check if they offer model cards or datasheets describing AI system characteristics.

3. Bias Mitigation
The vendor should have robust processes to detect and mitigate bias in their AI models
Look for regular bias audits and diverse teams involved in AI development. (A small example of one such audit check follows this list.)

4. Human Oversight
Understand how human expertise is incorporated into the AI system's decision-making process
Check for clear escalation pathways for AI-flagged issues.

5. Continuous Monitoring
The vendor should have systems to continuously monitor AI performance and address issues
Look for proactive alerting mechanisms for performance degradation or unexpected outputs.

6. Update and Maintenance Policies
Understand the frequency and process of AI model updates
Check how these updates are tested and validated before deployment
Ensure there's a rollback mechanism in case of problematic updates.

7. Data Usage Policies
Get clear information on how your data will be used, especially regarding AI model training
Ensure the vendor offers options to opt out of data sharing for model improvement.
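
To make item 3's mention of regular bias audits a little more tangible, here is one very small check such an audit might include, sketched in Python with invented numbers: compare the tool's flag rates across groups and apply the four-fifths rule of thumb. A real audit would be far broader; this is only an illustration.

```python
# A very small, invented example of one check a bias audit might include:
# comparing an AI tool's "flag for further review" rate across two groups
# and applying the common four-fifths rule of thumb. All numbers are made up.
flagged = {"group_a": 30, "group_b": 55}
reviewed = {"group_a": 200, "group_b": 210}

rates = {group: flagged[group] / reviewed[group] for group in flagged}
ratio = min(rates.values()) / max(rates.values())

print("Flag rates:", {group: f"{rate:.1%}" for group, rate in rates.items()})
if ratio < 0.8:
    print(f"Disparate impact ratio {ratio:.2f} is below 0.80; worth investigating")
else:
    print(f"Disparate impact ratio {ratio:.2f} passes the four-fifths rule of thumb")
```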

Common areas where the software fails

Hallucinations
AI models sometimes generate plausible-sounding but incorrect information. Often, this occurs when the model is given inputs outside of its training data. LLMs are prone to generating very confident-sounding text, even if it's all based on inaccuracies or outright hallucinations.
EXAMPLE: An AI might invent non-existent legal precedents that sound convincing.

Context misinterpretation
AI systems can misunderstand the context of legal language, leading to errors. Models often rely on statistical patterns in the data, and not a true understanding of the content and context, which can cause misinterpretation.
EXAMPLE: Misinterpreting a clause's intent due to unusual phrasing or structure.

Inconsistency in responses
LLMs often give different answers to the same question asked in slightly different ways, stemming from the probabilistic nature of language model outputs. When you ask an LLM a question, the answer is essentially the model predicting what a good human response would sound like.
EXAMPLE: Providing conflicting interpretations of a contract clause when asked multiple times.

Bias amplification
AI models can amplify biases present in their training data. This can lead to unfair or discriminatory outcomes in legal analysis.
EXAMPLE: Consistently predicting higher risk scores for certain demographic groups in bail decisions.

Temporal confusion
AI models often struggle with understanding time-dependent information because they're trained on static datasets and don't have a true sense of time. A model's "cutoff date" refers to the date through which the model's training data runs.
EXAMPLE: Applying outdated laws or regulations that have since been amended or repealed.

Lack of common sense reasoning
Since AI models rely on pattern matching, and not true human-like understanding of information, AI can fail at tasks that require basic common sense. This can lead to absurd conclusions.
EXAMPLE: The "how many Rs in 'Strawberry'" issue arises because the model focuses on the literal question rather than understanding the concept of spelling.

Difficulty with novel scenarios
AI often struggles when faced with unique or unprecedented legal situations, like determining copyright ownership for artwork created by another AI. Similar to the cutoff date limitation, a model can only draw from its training data, which may not cover every possible scenario.
EXAMPLE: Failing to properly analyze legal implications of new technologies not present in training data.

Lack of causal understanding
Cause-and-effect relationships in legal contexts can be difficult for AI models to process correctly. This is because they're trained on correlations in data, not true causal relationships.
EXAMPLE: Misattributing the cause of a legal outcome by focusing on irrelevant but correlated factors.

Understanding these failure modes is crucial for legal professionals using AI. It underscores the importance of human oversight, cross-verification, and treating AI outputs as assistive tools rather than definitive answers.
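
One lightweight way to act on that advice, at least for the inconsistency failure mode, is to ask the model the same question more than once and escalate to a human reviewer whenever the answers disagree. The sketch below assumes the OpenAI Python SDK and an API key; the model name and sample clause are placeholders, and the yes/no comparison is deliberately crude.

```python
# A minimal sketch of cross-verification for the "inconsistency" failure mode:
# ask the model the same question a few times and escalate to a human reviewer
# if the verdicts disagree. Assumes the OpenAI Python SDK (openai>=1.0) and an
# OPENAI_API_KEY; the model name and clause are placeholders, not recommendations.
from openai import OpenAI

client = OpenAI()

CLAUSE = (
    "Confidentiality obligations under this Agreement shall survive "
    "termination for a period of three (3) years."
)
QUESTION = (
    "Based only on the clause below, do confidentiality obligations survive "
    "termination? Answer 'yes' or 'no' first, then explain.\n\nClause: " + CLAUSE
)

def ask(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

def verdict(answer: str) -> str:
    # Crude normalization: treat the first word as the yes/no verdict.
    return answer.strip().split()[0].strip(".,:'\"").lower()

answers = [ask(QUESTION) for _ in range(3)]
verdicts = {verdict(a) for a in answers}

if len(verdicts) > 1:
    print("Conflicting verdicts; route to a human reviewer:", verdicts)
else:
    print("Verdicts agree:", verdicts.pop(), "(still verify against the clause itself)")
```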

CHAPTER SIX

Tips on Getting Started

The key to embarking on the AI journey in your legal department is to start small. Focus on high-impact areas, then gradually expand your AI implementation as you gain confidence and experience. And don't forget—you don't need to go it alone! All of our peers are in experimental mode right now. The more educated we get on AI and new technologies, the more opportunity we have as a field to secure our seats at the proverbial table.

To begin, consider focusing on four key areas where AI can make an immediate impact:

High volume, low risk work, like employment contracts and NDAs
Contract review to flag issues like risky clauses and non-standard terms
Legal research and e-discovery for finding relevant cases and statutes
Drafting of briefs, contracts, pleadings, and other legal documents

Some tips on prompting AI systems

As you begin to implement AI tools, learning how to effectively prompt a system becomes crucial. Approach prompting as an iterative process, expecting to refine your queries based on the AI's output. And always remember that while AI is a powerful tool, it can hallucinate and it doesn't replace legal judgment. Use your own expertise when reviewing any AI output.

Read More 5 ChatGPT Prompts to Boost Your Legal Ops Game

When it comes to crafting prompts:

Be specific about what you're looking for and provide necessary context.

Want your information presented in a specific format? Make sure to specify this in your prompt. For example, "Provide a bullet-point summary of the key risks in this contract."

Speaking of how you like your information presented, telling a chatbot to emulate a specific voice or style in its responses can be really helpful. Ex: "Summarize the following, emulating the style of Stanford Law Review."

Don't shy away from using precise legal terminology, as AI models trained on legal texts understand legal jargon.

Break more complex queries down into smaller, more manageable steps presented in a straightforward, logical manner.

Take a "trust and verify" approach: always check the output for accuracy.
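
Putting several of these tips together, here is a short, illustrative sketch of a specific, format-aware prompt followed by an iterative refinement. It assumes the OpenAI Python SDK and an API key; the model name is a placeholder, and contract_text stands in for your own document.

```python
# A short, illustrative sketch that puts several of the tips above into practice:
# a specific, format-aware prompt followed by an iterative refinement. Assumes the
# OpenAI Python SDK and an API key; the model name is a placeholder, and
# contract_text stands in for your own document.
from openai import OpenAI

client = OpenAI()
contract_text = "..."  # paste or load the agreement you want reviewed

messages = [
    {
        "role": "user",
        "content": (
            "You are assisting an in-house legal team. Provide a bullet-point "
            "summary of the key risks in the contract below, grouped by clause, "
            "using precise legal terminology.\n\n" + contract_text
        ),
    }
]
first = client.chat.completions.create(model="gpt-4o", messages=messages)
print(first.choices[0].message.content)

# Iterate: keep the conversation going and refine, rather than starting over.
messages += [
    {"role": "assistant", "content": first.choices[0].message.content},
    {
        "role": "user",
        "content": (
            "Focus only on indemnification and limitation of liability, and flag "
            "any non-standard terms."
        ),
    },
]
second = client.chat.completions.create(model="gpt-4o", messages=messages)
print(second.choices[0].message.content)
```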

Measure impact as you go

As you implement AI tools, it's crucial to measure their impact. Look for
software that tracks and readily surfaces metrics such as time savings,
accuracy improvements, user adoption rates, client satisfaction, and cost
savings. This data will help you refine your AI strategy and justify further
investments in AI technology.
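
As a back-of-the-envelope illustration, the sketch below computes a few of these metrics from inputs a team might already track. Every number in it is an invented assumption, not a benchmark.

```python
# A back-of-the-envelope sketch of a few impact metrics worth tracking.
# Every input below is an invented assumption, not a benchmark.
contracts_per_month = 120
minutes_saved_per_contract = 25     # e.g., average review time before vs. after AI
active_users = 18
licensed_users = 25
blended_hourly_rate = 150           # fully loaded cost per hour, in dollars

hours_saved = contracts_per_month * minutes_saved_per_contract / 60
adoption_rate = active_users / licensed_users
monthly_savings = hours_saved * blended_hourly_rate

print(f"Hours saved per month: {hours_saved:.0f}")
print(f"User adoption rate: {adoption_rate:.0%}")
print(f"Estimated monthly savings: ${monthly_savings:,.0f}")
```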

Read More How to Get Started with Generative AI Within Your Legal Department

Read More Legal Metrics Masterclass: The Numbers Every GC Should Track

CHAPTER SEVEN

AI Innovation in the Legal Field


Arguably one of the most exciting things about any new technology is seeing it in action and learning from those on the frontlines using it to drive real, tangible impact. Ironclad users have surprised us with the incredible ways they're putting our AI to work, and the more we spoke to other legal AI pioneers, the clearer it became that these were not isolated incidents.

So how is AI being used right now in the legal field? Below are two use cases of AI in the wild, being used to solve difficult problems, make its users 10x more efficient, and above all else—innovate.

AI helps find needles in the haystack during e-discovery after Lahaina fires

THE PROBLEM

Following the devastating fires in Lahaina in 2023, there was a significant challenge in sifting through vast amounts of 911 call transcripts to determine liability. On top of the physical and emotional devastation, the sheer volume of data made it difficult for legal teams to extract relevant information efficiently, hindering the preparation for litigation.

THE PROJECT

To address this issue, Mr. McCullough utilized AI technology from Everlaw—a cloud-based e-discovery and document review software company—to analyze 911 calls during e-discovery. By uploading the transcripts into the platform and employing the Description Summary function, he was able to generate comprehensive deposition summaries within minutes—a task that, before AI, would have taken weeks.

THE RESULTS

Efficiency of the review process improved significantly, giving Mr. McCullough access to a bird's eye view of the evidence faster than traditional methods. While the output of AI-generated summaries required review, this not only saved time but also enabled his clients to grasp critical information more effectively, enhancing their understanding of the case. His expertise in fire litigation and commitment to leveraging technology for better outcomes were also crucial in championing this innovation.

"Over the last 10 years, the number of fires caused by utilities have grown quite dramatically, and now we're seeing entire communities wiped out. Trying to help those communities rebuild, and get their justice, is what fire litigation is all about."

Greg McCullough
Independent fire litigation consultant

Gunderson Dettmer Leads by Example with ChatGD

THE PEOPLE

Gunderson Dettmer's legal engineering team:

Joe Green, Chief Innovation Officer
Naveen Pai, Chief Knowledge Officer
Lori Knowles, Assistant General Counsel
Laura Chao, Practice Innovation Attorney
Stephanie Goutos, Lead Practice Innovation Attorney
Avi Saiger, Practice Innovation Attorney

THE PROBLEM

While the legal industry has traditionally been slow to adopt new technologies, Gunderson Dettmer has consistently positioned itself at the forefront of innovation. As an early advocate for responsible AI use, the firm quickly recognized the rising demand from clients seeking guidance on AI's regulatory landscape, legal implications, and best practices. Being one of the first law firms to strategically embrace and implement AI within its own operations has allowed the firm to guide clients through this rapidly evolving space with authenticity and expertise.

THE PROJECT

The firm has deployed third-party AI tools for years—including some powered by LLMs—for a variety of use cases. With ChatGD, the firm created its first proprietary tool to make generative AI technology widely available internally, achieving a new milestone in Gunderson's ability to marry its in-house legal expertise with cutting-edge engineering and technology.

ChatGD allows the firm's attorneys to query and manipulate documents using a secure, enterprise instance of OpenAI's LLMs through Microsoft Azure. Attorneys using ChatGD can leverage the underlying LLMs and the power of ChatGPT to accelerate and enhance their work as subject-matter experts. It also gives attorneys the ability to upload proprietary legal agreements and other relevant source material as context for processing queries using retrieval-augmented generation (RAG).
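
For readers who haven't seen the pattern before, here is a generic, heavily simplified sketch of how RAG works. It is not ChatGD's implementation; it assumes the OpenAI Python SDK, and the model names and clause snippets are placeholders chosen for illustration.

```python
# A generic, heavily simplified sketch of the retrieval-augmented generation (RAG)
# pattern described above; it is not ChatGD's implementation. Document chunks are
# embedded, the chunk most similar to the question is retrieved, and only that
# chunk is passed to the model as context. Assumes the OpenAI Python SDK; the
# model names are placeholders and the clause snippets are invented.
import math
from openai import OpenAI

client = OpenAI()

chunks = [
    "Section 2.1: The license granted hereunder is non-exclusive and non-transferable.",
    "Section 9.4: This Agreement is governed by the laws of the State of Delaware.",
    "Section 11.2: Either party may terminate for material breach on 30 days' notice.",
]

def embed(texts):
    response = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return [item.embedding for item in response.data]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

chunk_vectors = embed(chunks)

question = "Which state's law governs this agreement?"
question_vector = embed([question])[0]

# Retrieve the single most relevant chunk (real systems retrieve the top k).
best_chunk = max(zip(chunks, chunk_vectors), key=lambda pair: cosine(question_vector, pair[1]))[0]

answer = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": f"Answer using only this excerpt:\n\n{best_chunk}\n\nQuestion: {question}",
    }],
)
print(answer.choices[0].message.content)
```

The retrieval step is what lets the model answer from your own documents rather than from its training data alone, which is why the pattern pairs well with proprietary agreements and other firm-specific source material.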

THE RESULT

In the year following its August 2023 launch, ChatGD has been used by more than half of the firm's attorneys and business leaders, many of whom use it regularly. They have created tens of thousands of conversation threads and messages in ChatGD. The firm's use of the tool has led to significant workflow enhancements, including increased accuracy and efficiency in delivering work product. Users have especially gravitated toward using ChatGD for "iterative editing" and "text manipulation" use cases, where users provide pre-drafted language and then quickly iterate through different versions by asking the LLM to offer text refinements.

"Gunderson Dettmer has a responsibility to our clients to drive innovation from the inside out. By rolling out this homegrown tool internally—and actually using it—our lawyers are not only working more efficiently, we're leading by example. We're working to get our field comfortable with AI, and a big part of that is being right on the cutting edge and helping set a precedent for how this technology can be applied and used in a real-world setting."

Joe Green
Chief Innovation Officer

Ironclad is an all-in-one AI-powered contract lifecycle
management (CLM) platform that integrates deeply with
security, sales, and compliance tools to mitigate risk, optimize
the contract management process, and drive business growth.

Our AI suite of tools, including Smart Import for automatically


ingesting and tagging contracts, guidelines and AI Assist™ for
automatic redlining, has saved our customers an estimated
cumulative 29 years of effort across contract uploading, review,
and redlining. 

We build security and compliance into everything we do, from


integrations with OneTrust, to CIS benchmarks and security
controls, and cloud security best practices such as the NIST
Cybersecurity Framework.

To learn more about Ironclad and how it can help your team
work smarter and faster with AI, request a demo today.

