A Glossary of Artificial Intelligence Terms For Legal

Team Litify

What are the potential challenges to implementing artificial intelligence on your legal team? In the 2024 Litify State of AI in Legal Report, 17% of respondents cited a lack of knowledge as a barrier to adoption.

As artificial intelligence is here to stay, it’s critical to familiarize yourself with some of the key terms and concepts so that you can effectively engage in conversations and decisions about AI in your workplace.

This glossary is for legal professionals to reference as you get started on your artificial intelligence journeys — because a lack of knowledge should never be a barrier to innovation. In addition to reading articles and research like this, we also encourage you to attend conferences, swap insights with your peers, or ask your organization for training opportunities. 


What is artificial intelligence?

Artificial intelligence, or AI for short, encompasses any technology that can mirror human thinking and behavior, such as generating original content or making decisions or predictions based on a large dataset.

AI is a broad term describing the general ability of technology to emulate human thoughts, decisions, and behavior, but there are many specific forms of AI, such as machine learning and deep learning.

What is machine learning?

Machine learning is a type of AI that allows a piece of technology to automatically learn insights and recognize patterns from data — and apply that learning to make increasingly better decisions.
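
To make this concrete, here is a minimal, purely illustrative Python sketch using the scikit-learn library: a model learns patterns from a handful of labeled example documents and then applies what it learned to a document it has never seen. The documents and labels below are invented for illustration only.

```python
# A toy machine learning example: the model learns patterns from labeled
# documents and applies them to a document it has never seen before.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training examples: short document descriptions with labels.
documents = [
    "Invoice for emergency room visit and radiology services",
    "Deposition transcript of the defendant taken in March",
    "Statement of charges from the hospital billing department",
    "Notice of hearing before the circuit court",
]
labels = ["medical bill", "legal filing", "medical bill", "legal filing"]

# Convert the text into numeric features, then fit a classifier on them.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(documents, labels)

# The trained model applies the patterns it learned to a new document.
print(model.predict(["Itemized hospital bill for physical therapy sessions"]))
```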

What is deep learning?

Deep learning is a type of machine learning that uses layered neural networks and vast volumes of data to train a model.

What is a large language model?

A Large Language Model (LLM) is a type of AI that uses machine learning to recognize and generate language. LLMs are trained on large amounts of data, often from the internet, to interpret human language and other complex data.

What is natural language processing?

Natural Language Processing (NLP) is the branch of AI that enables technology, including Large Language Models, to interpret and respond to human language in a natural way. NLP systems are trained on large amounts of data to identify patterns, meanings, and how words fit together in human speech.

What is the difference between training and fine-tuning an AI model?

Training (or pre-training) an AI model is an initial phase where the model is trained on an extremely large dataset to develop a general understanding of natural language in different contexts and across different types of knowledge.

Fine-tuning an AI model is an additional phase in which the model is further educated on a specific domain area or task. The purpose of fine-tuning is to adapt the model to perform better in an area that may not have been specifically covered during training or pre-training. 

Many of the popular AI models used in the legal industry today have been pre-trained on enormous volumes of data from across the internet. Some legal technology platforms are taking one of those popular AI models and integrating it into the platform to deliver AI capabilities. Other providers are taking an “off-the-shelf” model and fine-tuning it with additional legal data to release a legal-specific AI model.
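
To illustrate the two phases, here is a deliberately simplified Python sketch. The ToyModel class and the datasets are invented placeholders, not a real training framework; actual pre-training and fine-tuning adjust the parameters of a neural network over vastly larger datasets, but the sequence is the same: broad general training first, narrower domain training second.

```python
from dataclasses import dataclass, field


@dataclass
class ToyModel:
    """A toy stand-in for an AI model: it simply accumulates what it has seen."""
    knowledge: list = field(default_factory=list)

    def train(self, dataset):
        # Real training adjusts model parameters; here we just record the data.
        self.knowledge.extend(dataset)


# Phase 1 (pre-training): a huge, general-purpose dataset gives the model a
# broad understanding of natural language.
general_corpus = ["news articles", "encyclopedia entries", "forum posts"]
model = ToyModel()
model.train(general_corpus)

# Phase 2 (fine-tuning): a smaller, domain-specific dataset adapts the same
# model to legal work it was never specifically pre-trained on.
legal_corpus = ["pleadings", "demand letters", "deposition transcripts"]
model.train(legal_corpus)

print(model.knowledge)
```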

What is generative AI?

Generative artificial intelligence, or Gen AI, is a type of AI that uses machine learning and natural language processing to create new content, such as text, images, or videos. Generative AI models analyze patterns and structures in their training data and mimic those patterns to generate original responses to a user’s question or prompt.

Take, for example, the popular AI chatbots: the chatbot has been trained on vast amounts of text from across the internet. If you ask the chatbot to tell you a story, it will generate a new story for you based on what it has learned about storytelling from its training — that a story often contains a hero and a villain, a storyline, and a happy ending.
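
As a drastically simplified illustration of learning patterns and mimicking them to produce something new, here is a toy Python sketch. It records which word tends to follow which in an invented training sentence and then generates new text from those patterns; real generative models are enormously more sophisticated, but the core idea of producing original output from learned patterns is the same.

```python
import random
from collections import defaultdict

# Invented training text, used for illustration only.
training_text = (
    "the hero met the villain and the hero defeated the villain "
    "and the story ended happily"
)

# Learn the patterns: for each word, record the words that follow it.
patterns = defaultdict(list)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    patterns[current_word].append(next_word)

# Generate new text by repeatedly following the learned patterns.
word = "the"
story = [word]
for _ in range(12):
    followers = patterns.get(word)
    if not followers:
        break
    word = random.choice(followers)
    story.append(word)

print(" ".join(story))
```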

What is traditional AI?

Comparatively, traditional AI is a type of AI that has been trained to follow a specific set of rules or to complete a specific task, but it doesn’t create anything new. Traditional AI is like playing chess with a computer. The computer has been trained on all the rules and can select its own moves against you from its pre-programmed strategies, but it’s not inventing new ways to play chess.

What is prompt engineering?

Prompt engineering is the practice of crafting and refining the questions, or prompts, you give an AI model to help it perform a specific task. For example, as a legal professional, you could send the model a series of documents and ask it to identify if “yes, this is a medical bill” or “no, this is not a medical bill.” Over time, the prompts can become increasingly specific, asking the model to identify the hospital that issued the medical bill or the treatment received.

With prompt engineering, there are several ways to improve the model’s accuracy, such as tweaking the specific questions you’re asking it or breaking down the document you’re asking it to analyze into smaller pieces.
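
Here is a minimal Python sketch of the medical-bill prompts described above. The call_llm() helper is hypothetical and stands in for whichever model or API your platform actually provides, and the prompt wording is only one possible starting point, not a recommended template.

```python
def call_llm(prompt: str) -> str:
    # Hypothetical placeholder: replace with a call to your AI provider.
    raise NotImplementedError("Connect this to your LLM provider.")


def is_medical_bill(document_text: str) -> str:
    # A broad first-pass prompt: is this document a medical bill at all?
    prompt = (
        "You are reviewing a document from a legal case file.\n"
        "Answer only 'yes' or 'no': is the following document a medical bill?\n\n"
        + document_text
    )
    return call_llm(prompt)


def extract_bill_details(document_text: str) -> str:
    # A more specific follow-up prompt, used once a document is confirmed
    # to be a medical bill.
    prompt = (
        "The following document is a medical bill. Identify the hospital "
        "that issued it and the treatment the patient received.\n\n"
        + document_text
    )
    return call_llm(prompt)
```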

What is an AI hallucination?

An AI hallucination occurs when a generative AI model produces incorrect or misleading information in response to a user’s question or prompt. Hallucinations can result from limited or poor training data, implicit biases in the data used to train the model, insufficient programming, a lack of understanding by the AI model of the user’s prompt or question — or a lack of understanding by the user in how to best use an AI model.

There have already been many well-publicized examples of AI hallucinations within the legal industry, which is all the more reason to begin building your understanding of AI so that you’re aware of its limitations!

What is the difference between an open AI model and a closed AI model?

An open AI model is characterized by its ability to learn and evolve using an open source of data, usually from the broader internet. This means that the data you share with the open AI model may be retained and used in future responses generated by the model.

A closed AI model does not learn from or expose information to an open source of data. This means that the data you share with a closed model is not used to train the model or exposed to other users.

What is the difference between single-tenant architecture and multi-tenant architecture?

A single-tenant architecture is a dedicated cloud infrastructure for each client, while a multi-tenant architecture is a cloud infrastructure that services multiple organizations. 

In the context of AI technology, an open AI model may be using multi-tenant architecture, meaning multiple organizations input data into a single model. A closed AI model may be using single-tenant architecture, meaning only a single organization can input data into that model, and the model is closed off from other sources of data.

The takeaway

AI is here, and it will continue to be transformative, but it must be adopted safely and responsibly. At Litify, we believe that legal needs a platform approach to AI — meaning it must be embedded into your existing business workflows to be truly effective. 

However, we also need to take this approach together by collectively building and expanding our knowledge of AI technologies and their limitations. Using this glossary, we hope you’ll join us on a journey toward the safe and effective use of AI within the legal industry.

Get more insights from the 2024 State of AI in Legal Report to see how your peers are approaching the use of AI and use the findings to inform your own AI strategy.
