The five key criteria for evaluating legal AI products

LexisNexis buyer's guide sets out to separate fact from fiction when assessing legal Gen AI platforms

More than half of leading firms have already started investing in Gen AI tools, according to LexisNexis

Law firms and legal departments seeking to navigate the generative AI (Gen AI) hype and surge in the number of vendors should evaluate products against five key criteria, according to a new buyer's guide by LexisNexis.

The key to the successful implementation of an AI product, according to the report, is to address the following criteria: privacy and security, the Gen AI model itself, the accuracy and quality of answers, performance, and ethical AI principles.

The publication of the Legal AI Solution Buyer's Guide takes place against the background of ever-rising interest in Gen AI's potential to transform the delivery of legal services.

As many as 90% of respondents to Law360’s January Pulse Leaderboard survey anticipated that investment in Gen AI would rise, while more than half of firms (53%) have already purchased Gen AI tools. This comes against a backdrop of the growing availability of commercial AI tools trained specifically for the legal profession.

The challenge firms and legal departments face, however, is separating the hype around Gen AI from what it can actually deliver within its current capabilities.

When it comes to privacy and security, firms need to assess how training data is obtained and used; whether there are any potential issues with the training data that could perpetuate unfair biases; whether the tool properly attributes source materials; how transparent AI providers are about how their models work; and any regulatory and governance considerations that need to be addressed. Firms also need to understand what safeguards exist to protect privileged or confidential client information.

Assessing the Gen AI model itself means getting to grips with the large language model (LLM) that underpins the technology, as not all LLMs are the same. LexisNexis says the first aspect to consider is the model’s architecture – for example, some models are better at translation while others are better at summarising text.

Another important aspect to consider is the size of the LLM. While large LLMs will likely have more capability, smaller ones may be more efficient for a specific task. How the LLM is trained is also critical – for legal users, it is usually vital that the model has been trained on legal data rather than general text. It is also helpful if the model can be further fine-tuned using niche datasets that are relevant to a particular domain.
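As a rough illustration only, the sketch below shows what "fine-tuning on a niche dataset" can look like in practice, using the open-source Hugging Face libraries to adapt a small general-purpose model to a handful of legal-style sentences. The model name and the tiny corpus are placeholders for the example, not a description of how any commercial legal AI product is built.

```python
# Illustrative sketch of domain fine-tuning: adapting a small, general-purpose
# language model to legal-style text with Hugging Face libraries. The model
# name and the tiny in-memory corpus are placeholders; a real fine-tune would
# use a curated, properly licensed legal dataset.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "distilgpt2"  # a small model, reflecting the guide's point that smaller can be more efficient
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # distilgpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

texts = [
    "The party of the first part shall indemnify the party of the second part...",
    "This agreement shall be governed by the laws of England and Wales.",
]
dataset = Dataset.from_dict({"text": texts}).map(
    lambda row: tokenizer(row["text"], truncation=True, max_length=128),
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="legal-finetune", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # produces a checkpoint nudged towards legal phrasing
```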

The report notes that firms can also adopt a multi-model approach that takes elements from different LLMs to create an entirely new tool that accentuates the strengths of each LLM and helps reduce their individual weaknesses. Lexis+ AI’s tech, for example, uses multiple AI models simultaneously to process tasks in parallel, speeding up response times.
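The report does not describe Lexis+ AI's internals, so the sketch below is only a generic illustration of the multi-model pattern it mentions: the same document is fanned out to several hypothetical models in parallel and the results are combined, so the slowest model, rather than the sum of all models, sets the response time.

```python
# Generic illustration of a multi-model approach: one task is fanned out to
# several (hypothetical) models in parallel and the results are combined.
# This is a common pattern, not a description of any vendor's architecture.
from concurrent.futures import ThreadPoolExecutor

def summarise_with_model_a(text: str) -> str:
    # Placeholder for a call to a model that is strong at summarisation.
    return f"[model A summary of {len(text)} characters]"

def extract_citations_with_model_b(text: str) -> str:
    # Placeholder for a call to a model tuned for citation extraction.
    return "[model B citations]"

def check_jurisdiction_with_model_c(text: str) -> str:
    # Placeholder for a smaller, faster model handling a narrow sub-task.
    return "[model C jurisdiction check]"

def analyse(document: str) -> dict:
    """Run the sub-tasks concurrently so the slowest model, not the sum of
    all models, determines the overall response time."""
    tasks = {
        "summary": summarise_with_model_a,
        "citations": extract_citations_with_model_b,
        "jurisdiction": check_jurisdiction_with_model_c,
    }
    with ThreadPoolExecutor(max_workers=len(tasks)) as pool:
        futures = {name: pool.submit(fn, document) for name, fn in tasks.items()}
        return {name: future.result() for name, future in futures.items()}

print(analyse("The parties agree that..."))
```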

Evaluating the quality of the answers the Gen AI tool generates is also essential to give legal professionals confidence that they can use the information presented. Choosing a tool that includes citations can help lawyers quickly verify responses and check for accuracy. Evaluating the depth and breadth of the underlying legal database the AI draws its answers from is also critical, to ensure the information provided is comprehensive and complete, and less likely to include so-called hallucinations.
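As a toy illustration of how an evaluator might spot-check that claim, the snippet below pulls anything that looks like a case citation out of a generated answer and checks whether it resolves to a source in a trusted database; the answer text and the database contents are invented for the example.

```python
# Toy sketch of citation spot-checking during product evaluation: every
# authority cited in a generated answer should resolve to a document in the
# underlying legal database. The answer and the "database" are invented
# purely for illustration.
import re

known_sources = {
    "Donoghue v Stevenson [1932] AC 562",
    "Caparo Industries plc v Dickman [1990] 2 AC 605",
}

answer = (
    "A duty of care requires foreseeability and proximity "
    "(Caparo Industries plc v Dickman [1990] 2 AC 605; "
    "Smith v Jones [2001] EWCA Civ 999)."
)

# Pull out anything that looks like a case citation (a very rough pattern).
cited = re.findall(r"[A-Z][\w .]+ v [\w .]+ \[\d{4}\][^;)]*", answer)

for citation in cited:
    status = ("verified" if citation.strip() in known_sources
              else "NOT FOUND - check manually")
    print(f"{citation.strip()}: {status}")
```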

For performance, firms must evaluate how fast the AI tool can complete a task compared to a human, be it in areas such as legal research or case summarisation. Finally, firms need to ensure the AI tool is ethical and that they have a framework in place with pre-defined principles and policies governing their approach to responsible AI use. This means having appropriate processes to monitor AI systems for unintended bias, fairness issues and ethical risks.

By following these five criteria, firms can be confident the legal Gen AI platform they choose is effective and reliable, while helping to avoid potential issues such as reputational risk at a later stage.

Click here to download the Legal AI Solution Buyer's Guide

The Global Legal Post has teamed up with LexisNexis to help inform readers' decision-making process in the selection of a Gen AI legal research solution. Click here to visit the Generative AI Legal Research Hub.

