Artificial Intelligence

AI Terms for Lawyers: Glossary of Artificial Intelligence in Legal Practice

Andrew Heffernan
VP, Engineering

Artificial intelligence (AI) is here to stay, and legal professionals must familiarize themselves with key AI terms and concepts to engage effectively in conversations and make informed decisions about AI in the legal industry.

This glossary is designed for lawyers, paralegals, and legal teams to reference as they begin their legal AI journeys — because a lack of knowledge should never be a barrier to innovation. In addition to reading articles and research like this, we encourage attending conferences, swapping insights with peers, or requesting training opportunities from your organization.

A glossary of artificial intelligence terms for legal

What is artificial intelligence?

Artificial intelligence, or AI for short, refers to technology that mimics human cognition to perform tasks like prediction, decision-making, and content generation. AI mirrors human reasoning by analyzing data, predicting outcomes, and generating content that can assist lawyers and legal teams.

For law firms or legal departments, AI can be used for tasks such as document review, contract analysis, surfacing client insights, and assessing risk.

At Litify, responsible AI is embedded directly into existing legal workflows to help teams work more efficiently while maintaining high standards of accuracy and compliance.

What is machine learning?

Machine learning (ML) is a type of AI that allows a system to automatically learn insights and recognize patterns from data, and to apply that learning to make increasingly better decisions.

In the legal context, ML is frequently used to classify contracts, cluster documents for review, and detect risk. ML allows legal teams to save time and focus on higher-value work rather than repetitive analysis.
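To make the idea concrete, here is a minimal sketch in Python using the open-source scikit-learn library. The handful of labeled clauses is made up purely for illustration; a real contract-classification model would learn from thousands of reviewed documents.

```python
# A minimal machine-learning sketch: learn clause categories from labeled examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: clause text plus a label supplied by a human reviewer.
clauses = [
    "Tenant shall pay rent on the first day of each month.",
    "Either party may terminate this agreement with 30 days notice.",
    "The contractor shall indemnify the owner against all claims.",
    "Payment is due within 45 days of receipt of invoice.",
]
labels = ["payment", "termination", "indemnification", "payment"]

# The model learns patterns (word weights) from the labeled examples...
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(clauses, labels)

# ...and applies that learning to a clause it has never seen before.
print(model.predict(["Invoices must be paid within 60 days."]))  # e.g. ['payment']
```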

What is deep learning?

Deep learning is a type of machine learning that uses vast volumes of data and complex algorithms to train a model.

Legal applications of deep learning include document interpretation, semantic relevance ranking in discovery, and predictive analytics for case outcomes.

What is a large language model?

A large language model (LLM) is a type of AI that uses machine learning to recognize and generate language. LLMs are trained on large amounts of data, often from the internet, to interpret human language and other complex data.

LLMs power chatbots, drafting assistants, and other tools that help lawyers summarize contracts, draft legal documents, and generate insights from documents.
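As a rough illustration, the sketch below uses the open-source Hugging Face transformers library to summarize a short contract excerpt with a pre-trained model. The model name and contract text are illustrative assumptions; commercial legal tools wrap this kind of capability in additional security, grounding, and review layers.

```python
# A minimal sketch of using a pre-trained language model to summarize contract text.
from transformers import pipeline

# Load a publicly available summarization model (downloaded on first use).
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

contract_excerpt = (
    "The Service Provider shall provide the Services described in Exhibit A. "
    "Fees are payable within thirty (30) days of invoice. Either party may "
    "terminate this Agreement upon sixty (60) days written notice."
)

summary = summarizer(contract_excerpt, max_length=40, min_length=10)
print(summary[0]["summary_text"])
```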

What is natural language processing?

Natural language processing (NLP) is the branch of AI that helps systems, including large language models, interpret and respond to human language more naturally. NLP systems are trained on large amounts of data to identify patterns, meanings, and how words fit together in human speech.

In legal settings, NLP is critical for contract review and e-discovery. NLP and AI in legal operations can help teams work more effectively with unstructured legal data.
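For example, the open-source spaCy library is a common NLP toolkit. The minimal sketch below pulls named entities (parties, dates, dollar amounts) out of an unstructured sentence; the sentence itself is made up for illustration.

```python
# A minimal NLP sketch: extract named entities from unstructured legal text.
# Requires the small English model: python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")  # small pre-trained English pipeline

doc = nlp(
    "On March 3, 2023, Acme Corp. agreed to pay Jane Smith $50,000 "
    "in settlement of the dispute."
)

for ent in doc.ents:
    print(ent.text, ent.label_)  # e.g. 'March 3, 2023' DATE, 'Acme Corp.' ORG
```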

What is the difference between training and fine-tuning an AI model?

Training (or pre-training) an AI model is an initial phase where the model is trained on general data to develop an understanding of language across various contexts.

Fine-tuning an AI model is an additional phase in which the model is further trained on legal-specific data to improve precision in tasks like contract analysis or case prediction.

Many legal technology providers either integrate pre-trained AI models or fine-tune them to create legal-specific AI tools.
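As a rough sketch of what fine-tuning looks like in practice, the snippet below further trains a small pre-trained model to recognize indemnification clauses using the Hugging Face transformers and datasets libraries. The two labeled examples and the output directory name are placeholders; real fine-tuning uses far larger, carefully reviewed datasets.

```python
# A minimal fine-tuning sketch: adapt a pre-trained model to a legal-specific task.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Hypothetical labeled examples: 1 = indemnification clause, 0 = anything else.
examples = Dataset.from_dict({
    "text": [
        "The contractor shall indemnify and hold harmless the owner.",
        "This Agreement is governed by the laws of the State of New York.",
    ],
    "label": [1, 0],
})

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length")

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="clause-model", num_train_epochs=1),
    train_dataset=examples.map(tokenize, batched=True),
)
trainer.train()  # nudges the pre-trained weights toward the legal-specific task
```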

As part of our responsible approach to AI development, no client data is used to train the AI model. As our clients gain access to Litify AI capabilities, the information they share with the AI model will remain inaccessible to other clients and external users and won’t be retained by the model.

What is generative AI?

Generative artificial intelligence, or Gen AI, is a type of AI that utilizes machine learning and natural language processing to generate new content, including text, images, and videos. Generative AI analyzes patterns and structures in the data and mimics those patterns to generate original responses to a user’s question or prompt.

In legal practice, generative AI can draft client memos, summarize deposition transcripts, or generate initial contract language.

For example, with Litify AI, you can summarize and retrieve information from your documents by asking questions or choosing from predefined prompts. AI can also help draft new documents that require narrative, arguments, and detail using the case data within Litify.

What is agentic AI?

Agentic AI combines multiple actions to create a workflow that requires minimal human interaction. If generative AI writes or summarizes, then agentic AI acts. It plans steps, takes action, and checks results, looping until it completes the task or hands it to a person for review. The AI agent maintains momentum without requiring you to use multiple tools.

For legal teams, agentic AI can review invoices, check for case conflicts, and identify missing time entries. 
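To illustrate the plan-act-check loop, here is a minimal sketch in Python. The helper functions, the invoice data, and the confidence threshold are hypothetical placeholders, not part of any real product API; a production agent would connect to real billing systems and a real model.

```python
# A minimal agentic-AI sketch: plan the work, act on each item, check the result.

def get_unreviewed_invoices():
    # Placeholder: in practice this would pull invoices from a billing system.
    return [{"id": "INV-001", "hours": 12.5, "rate": 450}]

def review_invoice(invoice):
    # Placeholder: in practice a model would check the invoice against guidelines.
    return {"confidence": 0.6, "issues": ["block-billed entry"]}

def flag_for_human(invoice, issues):
    # Escalate anything uncertain to a person for review.
    print(f"Invoice {invoice['id']} needs human review: {issues}")

def run_invoice_agent():
    # Plan: work through every invoice awaiting review.
    for invoice in get_unreviewed_invoices():
        # Act: have the model review a single invoice.
        result = review_invoice(invoice)
        # Check: escalate low-confidence or problematic results instead of acting blindly.
        if result["confidence"] < 0.8 or result["issues"]:
            flag_for_human(invoice, result["issues"])

run_invoice_agent()
```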

What is prompt engineering?

Prompt engineering is the practice of crafting and refining the questions or instructions (prompts) you give an AI model to help it perform a specific task. For example, as a legal professional, you could send the model a series of documents and ask it to answer "yes, this is a medical bill" or "no, this is not a medical bill." Over time, the prompts become increasingly specific, asking the model to identify the hospital that issued the medical bill or the treatment received.

With legal software and prompt engineering, there are several ways to improve the model’s accuracy, such as tweaking the specific questions you’re asking it or breaking down the document you’re asking it to analyze into smaller pieces.
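A minimal sketch of that progression might look like the following. The ask_model helper and the document text are hypothetical placeholders standing in for whatever model or API your legal software exposes.

```python
# A minimal prompt-engineering sketch: the same document, increasingly specific prompts.

def ask_model(prompt: str) -> str:
    # Placeholder: a real implementation would call an LLM API here.
    return "(model response)"

document_text = "ACME HOSPITAL - Statement of charges for services rendered ..."

# First pass: a broad yes/no classification question.
prompt_v1 = f"Answer yes or no: is the following document a medical bill?\n\n{document_text}"

# Later pass: a more specific prompt that pulls out structured details.
prompt_v2 = (
    "You are reviewing a medical bill. Identify the issuing hospital, the "
    f"treatment provided, and the total amount due.\n\n{document_text}"
)

print(ask_model(prompt_v1))
print(ask_model(prompt_v2))
```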

What is an AI hallucination?

An AI hallucination occurs when a generative AI model produces incorrect or misleading information in response to a user’s question or prompt. Hallucinations can result from limited or poor training data, implicit biases in the data used to train the model, insufficient programming, a lack of understanding by the AI model of the user’s prompt or question, or a lack of understanding by the user in how to best use an AI model.

There continue to be many examples of AI hallucinations within the legal industry. That is all the more reason to begin building your understanding of AI so that you're aware of its limitations. In addition, make sure you verify all AI-supported results to ensure accuracy.

What is the difference between an open AI model and a closed AI model?

An open AI model is characterized by its ability to learn and evolve using an open source of data, usually from the broader internet. This means that the data you share with the open AI model may be retained and used in future responses generated by the model.

A closed AI model does not learn from or expose information to an open source of data. This means that the data you share with a closed model is never retained.

Litify AI was built using a closed AI model, ensuring legal data is protected while still providing powerful insights.

What is the difference between single-tenant architecture and multi-tenant architecture?

A single-tenant architecture is a dedicated cloud infrastructure for each client, while a multi-tenant architecture is a cloud infrastructure that services multiple organizations. 

In the context of AI technology, an open AI model may use multi-tenant architecture, meaning multiple organizations input data into a single model. Legal software built on a closed AI model may use single-tenant architecture, meaning only a single organization can input data into that model, and the model is closed off from other sources of data. This is typical of a single-tenant AI legal platform, which keeps data isolated within one organization, providing added privacy but limiting integration and cross-platform functionality.

What is Retrieval-Augmented Generation (RAG)?

RAG combines large language models with real-time data retrieval to improve the accuracy of AI outputs. In legal practice, RAG enables the AI tool to generate briefs, summarize documents, or provide client insights, grounded in verified sources.
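Here is a minimal sketch of the RAG pattern. The toy documents, the crude keyword-overlap retrieval (standing in for the vector search a real system would use), and the ask_model helper are all simplified placeholders.

```python
# A minimal RAG sketch: retrieve the most relevant passage, then ground the prompt in it.

def ask_model(prompt: str) -> str:
    # Placeholder for a call to an LLM.
    return "(model response grounded in the retrieved passage)"

documents = {
    "engagement_letter.txt": "Fees are billed hourly at $400 and invoiced monthly.",
    "settlement_agreement.txt": "The parties agree to pay $250,000 within 30 days.",
}

def retrieve(question: str) -> str:
    # Crude keyword-overlap scoring; real systems use vector embeddings.
    def score(text):
        return len(set(question.lower().split()) & set(text.lower().split()))
    return max(documents.values(), key=score)

question = "What is the hourly fee in the engagement letter?"
context = retrieve(question)

prompt = (f"Answer using only the source below.\n\nSource: {context}\n\n"
          f"Question: {question}")
print(ask_model(prompt))
```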

The takeaway

Why understanding AI terminology matters for legal professionals

A clear understanding of AI terms is essential for legal teams. Without a shared vocabulary, adoption slows and decision-making becomes fragmented. Understanding these key AI concepts enables legal, IT, and compliance teams to collaborate effectively and evaluate vendors with greater confidence. 

How Litify embeds AI responsibly into legal workflows

At Litify, we believe that legal needs to take a platform approach to AI, meaning it must be embedded into your existing business workflows to be truly effective. 

Our platform supports automated intake and legal service request triage, streamlines document analysis and matter summaries, and intelligently prioritizes tasks. This approach ensures that AI enhances efficiency without compromising accuracy or compliance, demonstrating practical legal workflow automation.

Building legal AI literacy together

AI knowledge empowers legal professionals to use technology responsibly. Continuous learning through reading, webinars, and using secure platforms like Litify helps teams stay informed about AI capabilities and limitations. By building AI literacy, lawyers can safely adopt AI tools, enhance their workflows, and make smarter, data-driven decisions.

Ready to unleash your team’s full potential? Find out how to turn AI into your competitive advantage.

FAQs

What are the most important AI terms for lawyers starting out?

Terms like generative AI, agentic AI, large language models (LLMs), machine learning (ML), and AI hallucinations are foundational.

How does generative AI impact legal drafting and review?

Generative AI can help lawyers and legal teams draft contracts, summarize documents, and create client memos, reducing manual workloads and creating consistency across the organization.

What is an AI hallucination, and how can legal teams mitigate it?

An AI hallucination is when a generative AI model provides incorrect or misleading information. This can stem from limited or poor training data, inherent biases, insufficient programming, the model misunderstanding the user's prompt, or the user misunderstanding how best to use the model.

Always cross-verify results and use secure, fine-tuned models to reduce the likelihood of AI hallucinations.

How does Litify use LLMs and NLP in legal workflows?

Litify leverages LLMs and NLP for intelligent document analysis, automated legal service request triage, matter prioritization, and more.

What is AI bias in law?

AI bias occurs when AI systems produce skewed or inaccurate results because their training data is incomplete or unrepresentative. In the legal industry, biased AI can affect predictive tools, risk assessments, or compliance algorithms, potentially reinforcing existing inequalities.

Legal teams can mitigate bias by utilizing diverse datasets, conducting audits, and implementing transparent AI practices.

What is ethical AI in law?

Ethical AI ensures that AI systems are fair, transparent, and accountable. For legal professionals, this means respecting client confidentiality, minimizing bias, and maintaining human oversight to ensure accuracy and integrity.

How is AI currently used in legal operations?

AI powers document review, contract analysis, risk prediction, workflow automation, and client insights.

AI is used in legal operations to automate repetitive tasks, analyze documents, and improve decision-making through the use of real-time data. It can automatically summarize medical records and transcripts, draft documents, and check for billing or invoicing errors. AI can also help manage cases more efficiently by organizing files, capturing missed time, and providing analytics to identify potential risks or opportunities.

Andrew Heffernan
VP, Engineering
About the author
Andrew Heffernan is the VP of Engineering at Litify. He’s a seasoned engineering leader with diverse work experience in the technology industry who enjoys growing and maturing engineering teams.