
Artificial intelligence (AI) is here to stay, and legal professionals must familiarize themselves with key AI terms and concepts to engage effectively in conversations and make informed decisions about AI in the legal industry.
This glossary is designed for lawyers, paralegals, and legal teams to reference as they begin their legal AI journeys — because a lack of knowledge should never be a barrier to innovation. In addition to reading articles and resources like this one, we encourage attending conferences, swapping insights with peers, and requesting training opportunities from your organization.
Artificial intelligence, or AI for short, refers to technology that mimics human cognition to perform tasks like prediction, decision-making, and content generation. By analyzing data, predicting outcomes, and generating content, AI can assist lawyers and legal teams across much of their day-to-day work.
For law firms and legal departments, AI can be used for tasks such as document review, contract analysis, surfacing client insights, and assessing risk.
At Litify, responsible AI is embedded directly into existing legal workflows to help teams work more efficiently while maintaining high standards of accuracy and compliance.
Machine learning (ML) is a type of AI that allows a system to automatically learn from data, recognize patterns, and apply that learning to make increasingly better decisions.
In the legal context, ML is frequently used to classify contracts, cluster documents for review, and detect risk. ML allows legal teams to save time and focus on higher-value work rather than repetitive analysis.
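To make this concrete, here is a minimal illustrative sketch, not Litify's implementation, of how an ML model could learn to classify contract clauses. It uses the open-source scikit-learn library, and the example clauses and labels are made up for demonstration:

```python
# Illustrative only: a tiny text classifier for contract clauses.
# The example clauses and labels below are made up for demonstration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training examples: clause text paired with a clause type.
clauses = [
    "Either party may terminate this agreement with 30 days written notice.",
    "The receiving party shall keep all disclosed information confidential.",
    "This agreement shall be governed by the laws of the State of New York.",
    "Neither party shall be liable for delays caused by events beyond its control.",
]
labels = ["termination", "confidentiality", "governing_law", "force_majeure"]

# Convert text into numeric features, then fit a simple classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(clauses, labels)

# The trained model can now suggest a label for an unseen clause.
print(model.predict(["This contract is governed by Delaware law."]))
```

A production system would be trained on thousands of labeled examples and validated by legal experts, but the pattern is the same: the model learns from labeled data rather than from hand-written rules.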
Deep learning is a type of machine learning that uses vast volumes of data and complex algorithms to train a model.
Legal applications of deep learning include document interpretation, semantic relevance ranking in discovery, and predictive analytics for case outcomes.
A large language model (LLM) is a type of AI that uses machine learning to recognize and generate language. LLMs are trained on large amounts of data, often from the internet, to interpret human language and other complex data.
LLMs power chatbots, drafting assistants, and other tools that help lawyers summarize contracts, draft legal documents, and generate insights from documents.
Natural language processing (NLP) helps a large language model interpret and respond to human language more naturally. NLP systems are trained on large amounts of data to identify patterns, meanings, and how words fit together in human language.
In legal settings, NLP is critical for contract review and e-discovery, helping teams work more effectively with unstructured legal data.
Training (or pre-training) is the initial phase in which an AI model learns from large amounts of general data to develop an understanding of language across a wide range of contexts.
Fine-tuning an AI model is an additional phase in which the model is further trained on legal-specific data to improve precision in tasks like contract analysis or case prediction.
Many legal technology providers either integrate pre-trained AI models or fine-tune them to create legal-specific AI tools.
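As a rough, hedged illustration of what fine-tuning looks like in practice, the sketch below further trains a small open-source pre-trained model on a handful of made-up legal-flavored examples using the Hugging Face transformers library. Real fine-tuning requires far more data and careful evaluation; this only shows the shape of the process:

```python
# Illustrative only: fine-tuning a small pre-trained model on made-up
# legal-flavored examples. Real fine-tuning needs far more data.
import torch
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

texts = ["The tenant shall pay rent on the first day of each month.",
         "The patient was admitted on March 3 and discharged on March 7."]
labels = [0, 1]  # 0 = contract language, 1 = medical record language

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)

class TinyLegalDataset(torch.utils.data.Dataset):
    """Wraps the tokenized examples in the format the Trainer expects."""
    def __init__(self, texts, labels):
        self.encodings = tokenizer(texts, truncation=True, padding=True)
        self.labels = labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, idx):
        item = {k: torch.tensor(v[idx]) for k, v in self.encodings.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item

# The pre-trained model already "understands" general language;
# this step further trains it on the domain-specific examples.
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="fine_tuned_model", num_train_epochs=1),
    train_dataset=TinyLegalDataset(texts, labels),
)
trainer.train()
```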
As part of our responsible approach to AI development, no client data is used to train the AI model. As our clients gain access to Litify AI capabilities, the information they share with the AI model will remain inaccessible to other clients and external users and won’t be retained by the model.
Generative artificial intelligence, or Gen AI, is a type of AI that utilizes machine learning and natural language processing to generate new content, including text, images, and videos. Generative AI analyzes patterns and structures in the data and mimics those patterns to generate original responses to a user’s question or prompt.
In legal practice, generative AI can draft client memos, summarize deposition transcripts, or generate initial contract language.
For example, with LitifyAI, you can summarize and retrieve information from your documents by asking questions or choosing from predefined prompts. AI can also help draft new documents that require narrative, arguments, and detail using the case data within Litify.
Agentic AI combines multiple actions to create a workflow that requires minimal human interaction. If generative AI writes or summarizes, then agentic AI acts. It plans steps, takes action, and checks results, looping until it completes the task or hands it to a person for review. The AI agent maintains momentum without requiring you to use multiple tools.
For legal teams, agentic AI can review invoices, check for case conflicts, and identify missing time entries.
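To illustrate the plan-act-check loop, here is a simplified sketch of an agent that reviews invoices for missing matter numbers. The planning, action, and checking functions are hypothetical stand-ins for real integrations (billing systems, document stores), not any vendor's actual API:

```python
# Illustrative sketch of an agentic AI loop: plan, act, check, repeat.
# plan_next_step, take_action, and check_result are hypothetical stand-ins
# for real integrations; the "invoices" here are just strings.

def plan_next_step(task, history):
    """Pretend planner: pick the next invoice that hasn't been reviewed yet."""
    reviewed = [step for step, _ in history]
    remaining = [inv for inv in task["invoices"] if inv not in reviewed]
    return remaining[0] if remaining else None

def take_action(invoice):
    """Pretend action: flag invoices that are missing a matter number."""
    issue = None if "matter" in invoice else "missing matter number"
    return {"invoice": invoice, "issue": issue}

def check_result(result):
    """Pretend check: anything with an unresolved issue needs a human."""
    return result["issue"] is None

def run_agent(task, max_steps=10):
    history = []
    for _ in range(max_steps):
        step = plan_next_step(task, history)
        if step is None:                      # nothing left to do
            return {"status": "complete", "reviewed": history}
        result = take_action(step)            # act
        history.append((step, result))
        if not check_result(result):          # verify, then escalate if needed
            return {"status": "needs_human_review", "flagged": result}
    return {"status": "handed_off_after_max_steps", "reviewed": history}

print(run_agent({"invoices": ["INV-101 (matter 555)", "INV-102"]}))
```

The key difference from generative AI alone is the loop: the agent keeps going until the work is done or a person needs to step in.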
Prompt engineering is the practice of crafting and refining the questions or instructions you send to an AI model to help it perform a specific task. For example, as a legal professional, you could send the model a series of documents and ask it to identify whether “yes, this is a medical bill” or “no, this is not a medical bill.” Over time, the prompts become increasingly specific, asking the model to identify the hospital that issued the medical bill or the treatment received.
With legal software and prompt engineering, there are several ways to improve the model’s accuracy, such as tweaking the specific questions you’re asking it or breaking down the document you’re asking it to analyze into smaller pieces.
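As a hedged sketch of what this might look like in code, the example below moves from a broad yes/no question to more specific follow-ups. The ask_model function is a hypothetical stand-in for whatever LLM your legal software actually calls:

```python
# Illustrative only: prompts progressing from broad to increasingly specific.
# ask_model is a hypothetical stand-in for a call to an LLM; the canned
# response below just lets the sketch run end to end.

def ask_model(prompt: str) -> str:
    return "yes"  # stand-in response

def classify_document(document_text: str) -> dict:
    findings = {}

    # Prompt 1: a broad yes/no classification question.
    findings["is_medical_bill"] = ask_model(
        "Answer only 'yes' or 'no'. Is the following document a medical bill?\n\n"
        + document_text
    )

    # Prompts 2 and 3: more specific follow-ups, asked only when relevant.
    if findings["is_medical_bill"].strip().lower() == "yes":
        findings["hospital"] = ask_model(
            "Which hospital issued this medical bill? Reply with the name only.\n\n"
            + document_text
        )
        findings["treatment"] = ask_model(
            "What treatment is the patient being billed for? Be brief.\n\n"
            + document_text
        )
    return findings

print(classify_document("Statement from Mercy General Hospital for an MRI scan..."))
```

Splitting a long document into smaller sections and asking the same question of each section is another common way to improve accuracy.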
An AI hallucination occurs when a generative AI model produces incorrect or misleading information in response to a user's question or prompt. Hallucinations can result from limited or poor training data, implicit biases in that data, insufficient programming, the model misunderstanding the user's prompt, or the user not knowing how best to use the model.
Examples of AI hallucinations continue to surface within the legal industry.
All the more reason to begin building your understanding of AI so that you’re aware of its limitations! In addition, make sure you verify all AI-supported results to ensure accuracy.
An open AI model is characterized by its ability to learn and evolve using an open source of data, usually from the broader internet. This means that the data you share with the open AI model may be retained and used in future responses generated by the model.
A closed AI model does not learn from or expose information to an open source of data. This means that the data you share with a closed model is never retained.
Litify AI was built using a closed AI model, ensuring legal data is protected while still providing powerful insights.
A single-tenant architecture is a dedicated cloud infrastructure for each client, while a multi-tenant architecture is a cloud infrastructure that services multiple organizations.
In the context of AI technology, an open AI model may use multi-tenant architecture, meaning multiple organizations input data into a single model. Legal software built on a closed AI model may use single-tenant architecture, meaning only a single organization can input data into that model, and the model is closed off from other sources of data. A single-tenant AI legal platform keeps data isolated within one organization, providing added privacy but potentially limiting integration and cross-platform functionality.
Retrieval-augmented generation (RAG) combines large language models with real-time data retrieval to improve the accuracy of AI outputs. In legal practice, RAG enables an AI tool to generate briefs, summarize documents, or provide client insights grounded in verified sources.
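A simplified, hedged sketch of the retrieve-then-generate pattern is shown below; search_verified_sources and ask_model are hypothetical placeholders for a firm's document index and an LLM call, not Litify's implementation:

```python
# Illustrative sketch of retrieval-augmented generation (RAG):
# retrieve relevant passages first, then ask the model to answer
# using only those passages. Both helpers are hypothetical stand-ins.

def search_verified_sources(question: str, top_k: int = 3) -> list[str]:
    # Stand-in for a search over a firm's verified document index.
    return ["Deposition excerpt: the witness confirmed the delivery date of June 4."]

def ask_model(prompt: str) -> str:
    # Stand-in for an LLM call.
    return "The witness confirmed a June 4 delivery date (see deposition excerpt)."

def answer_with_rag(question: str) -> str:
    passages = search_verified_sources(question)
    prompt = (
        "Answer the question using ONLY the sources below and cite the source "
        "you relied on. If the sources do not contain the answer, say so.\n\n"
        "Sources:\n" + "\n".join(passages) + "\n\nQuestion: " + question
    )
    return ask_model(prompt)

print(answer_with_rag("What delivery date did the witness confirm?"))
```

Because the answer is grounded in retrieved passages rather than the model's memory alone, RAG reduces, though does not eliminate, the risk of hallucinated citations.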
A clear understanding of AI terms is essential for legal teams. Without a shared vocabulary, adoption slows and decision-making becomes fragmented. Understanding these key AI concepts enables legal, IT, and compliance teams to collaborate effectively and evaluate vendors with greater confidence.
At Litify, we believe that legal needs to take a platform approach to AI, meaning it must be embedded into your existing business workflows to be truly effective.
Our platform supports automated intake and legal service request triage, streamlines document analysis and matter summaries, and intelligently prioritizes tasks. This approach ensures that AI enhances efficiency without compromising accuracy or compliance, demonstrating practical legal workflow automation.
AI knowledge empowers legal professionals to use technology responsibly. Continuous learning through reading, webinars, and using secure platforms like Litify helps teams stay informed about AI capabilities and limitations. By building AI literacy, lawyers can safely adopt AI tools, enhance their workflows, and make smarter, data-driven decisions.
Ready to unleash your team’s full potential? Find out how to turn AI into your competitive advantage.
Terms like generative AI, agentic AI, large language models (LLMs), machine learning (ML), and AI hallucinations are foundational.
Generative AI can help lawyers and legal teams draft contracts, summarize documents, and create client memos, reducing manual workloads and creating consistency across the organization.
An AI hallucination is when a generative AI model provides incorrect or misleading information. This can stem from limited or poor training data, inherent biases, insufficient programming, or a mismatch between the user's prompt and how the model interprets it.
Always cross-verify results and use secure, fine-tuned models to reduce the likelihood of AI hallucinations.
Litify leverages LLMs and NLP for intelligent document analysis, automated legal service request triage, matter prioritization, and more.
AI bias occurs when AI systems produce skewed or inaccurate results because their training data is incomplete or unrepresentative of real-world conditions. In the legal industry, biased AI can affect predictive tools, risk assessments, or compliance algorithms, potentially reinforcing existing inequalities.
Legal teams can mitigate bias by utilizing diverse datasets, conducting audits, and implementing transparent AI practices.
Ethical AI ensures that AI systems are fair, transparent, and accountable. For legal professionals, this means respecting client confidentiality, minimizing bias, and maintaining human oversight to ensure accuracy and integrity.
AI powers document review, contract analysis, risk prediction, workflow automation, and client insights.
AI is used in legal operations to automate repetitive tasks, analyze documents, and improve decision-making through the use of real-time data. It can automatically summarize medical records and transcripts, draft documents, and check for billing or invoicing errors. AI can also help manage cases more efficiently by organizing files, capturing missed time, and providing analytics to identify potential risks or opportunities.