
Exploring the essentials of responsible legal AI innovation

24 April 2024 15:00

In a recent episode of the Legal Talk podcast, Jo Wade, Senior Director of Global Products at LexisNexis® Asia Pacific, and Seeta Bodke, Head of Core Product Pacific, LexisNexis, discussed how LexisNexis builds its AI products, as well as the guardrails that are in place to ensure all AI development occurs responsibly. This blog summarises some of the key areas discussed. To listen to the full episode, click here.

Among legal professionals, common questions surround legal AI - "How does it work?", "How do large language models power legal AI?" - along with many questions about data privacy, security and accuracy, copyright, and, of course, hallucination. To adopt AI, it's essential to build trust and know what happens behind the scenes. This doesn't mean that every lawyer needs to become an AI engineer, but rather that every lawyer should have a foundational understanding of the processes that go into the development of AI products, as well as how they return their responses.

Key considerations

With a technology as transformative as generative AI, there are important legal, ethical and security considerations that must be kept in mind from development to deployment. Particularly in an industry as highly scrutinised as law, results matter - as does explainability - and these key points have led to a degree of trepidation among many lawyers and firms. But avoiding risks like reputational damage, or damages stemming from privacy or data governance issues, is a more manageable task than many might expect. At its core, it comes down to diligence (which is par for the course in law) and using the right products. Off-the-shelf and free AI solutions are too generalised to be of significant value to lawyers - a legal-specific model is essential for generative AI to work.

“There’s no doubt that what we’re seeing is a huge step forward,” says Jo. “But I think the first thing that everyone really needs to understand is that generative AI is not just hype. As with any new technology, there’s a lot of excitement around it and sometimes that does not translate to reality - but in this case, it’s actually real. For example, in the US, over half of the top 200 law firms have already purchased generative AI tools.”

Large Language Models (LLMs) are already prevalent in the practice of law. As with the adoption of any new technology, it's essential that the roadmap is driven by business need, not just a desire to use the shiniest new thing. LexisNexis has already identified a broad range of use cases through a number of customer studies across the world. Most commonly, these are activities like legal research, drafting, or summarisation. But what is emerging is that generative AI capabilities can both enhance these activities and take on more complex ones, which will make it possible for AI to assist in the actual development of a litigation strategy. Today, AI provides inputs to those more complex tasks; in the future, it will be able to complete those tasks end-to-end.

Being mindful of the limitations

LLMs also have limitations, however, and there are plenty of well-publicised instances of generative AI getting it wrong. Jo says this is something that can be mitigated through responsible development. "When we think of LLMs, we think of the famous public models like ChatGPT and Claude. Those LLMs are trained on massive amounts of public data and public information. So they're trained on all this data, but we don't know much about that data, and we can't control its quality and currency. At LexisNexis, we work with providers like OpenAI and Anthropic and Google, but we do additional work to build our own legal-specific models that are unique to our business."

Responsible AI with responses grounded in authoritative content

LexisNexis tunes models with trusted legal content to make them legal domain-specific, then integrates them with our search engine using a technique called Retrieval-Augmented Generation (RAG). All of this is backed by comprehensive testing and tuning with our legal subject matter experts to ensure that the output from our models is accurate, useful and of a high quality.
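To illustrate the general idea behind Retrieval-Augmented Generation, here is a minimal, purely illustrative sketch: documents are first retrieved from a trusted store, then packed into the model's prompt alongside their citations so the answer can be grounded in (and traced back to) real sources. The document store, the keyword-overlap scoring, and the prompt template below are invented for this example and do not reflect LexisNexis's actual system, which uses production-grade search rather than simple keyword matching.

```python
# Illustrative RAG sketch: retrieve relevant documents, then build a
# grounded prompt that instructs the model to cite its sources.
# All names and data here are hypothetical.

def retrieve(query, documents, top_k=2):
    """Rank documents by simple keyword overlap with the query
    (a stand-in for a real search engine)."""
    q_terms = set(query.lower().split())
    scored = [
        (len(q_terms & set(doc["text"].lower().split())), doc)
        for doc in documents
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored[:top_k] if score > 0]

def build_prompt(query, retrieved):
    """Assemble a prompt that grounds the model in cited sources."""
    sources = "\n".join(
        f"[{doc['citation']}] {doc['text']}" for doc in retrieved
    )
    return (
        "Answer using ONLY the sources below, citing each by its label.\n"
        f"Sources:\n{sources}\n\nQuestion: {query}"
    )

if __name__ == "__main__":
    library = [
        {"citation": "Smith v Jones [2001]",
         "text": "negligence requires a duty of care"},
        {"citation": "Doe v Roe [2015]",
         "text": "contract formation requires offer and acceptance"},
    ]
    docs = retrieve("what does negligence require", library, top_k=1)
    print(build_prompt("What does negligence require?", docs))
```

Because every source passed to the model carries its citation label, any reference in the generated answer can be linked back to a real document - which is the property that makes grounded responses auditable.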

Jo notes that with years of experience working with LLMs even prior to the release of public versions of ChatGPT or Bard, LexisNexis is well ahead of the curve when it comes to the challenges and responsibilities that arise when using this technology. Every AI project is developed under a clearly stated set of responsible AI principles:

  • To consider the real-world impact of our solutions on people
  • To take action to prevent the creation or reinforcement of unfair bias
  • To explain how solutions work
  • To create accountability through human oversight
  • To respect privacy and champion robust data governance

“So we have very comprehensive governance processes in place that also extend to the way we work with our third-party providers,” says Jo. “We make sure that we're protecting our own data and intellectual property, our customers’ data and intellectual property, and their clients’ as well.”

But even with closely governed development and deployment, human oversight is still an essential element of any AI use. Using out-of-the-box solutions can be risky because answers are opaque, and data quality can’t be verified - but even when using proprietary AI solutions, oversight is important. In law, AI should be treated as a legal assistant who has completed a task - as the more senior person in the room, a human needs to review any answer, and potentially refine it before it becomes usable. LexisNexis AI solutions reduce the effort that’s required in doing this by prioritising explainability and transparency in the responses provided.

“When we give you an output, we make sure that we can explain where that output has come from,” says Jo. “So even while you're using the solution, it's going to give you feedback about the approach it's taking. And when a response is returned, it provides the references that are linked so that you can go back and read the source material and actually decide for yourself whether you need to do further refining.”

By taking powerful models like GPT and Claude and integrating them with our best-in-class search technology, LexisNexis is able to ground the response in trusted content and trusted data. Importantly, this means there are no hallucinations in the citations that are returned. They're always real, and they always link to real content. "We also have humans in the loop at every stage of our development process as part of our accountability commitment," says Jo. "So there's a very, very dedicated component of our program, which involves legally trained subject matter experts testing the output of our AI models."

With generative AI use accelerating across all industries, and its relevance and accuracy improving in leaps and bounds, the writing is on the wall for those who are on the fence about integrating AI into their workflows. For Jo, it’s important that businesses start to implement solutions sooner rather than later. “One of the ways you can prepare is to look at the use case within your own organisation,” she says. “So evaluate your internal use cases, consider new roles that you might need, and understand what your own clients are thinking about AI.” But the most important thing? “The time to start preparing is now.”

Stay tuned as we explore more of the key issues and opportunities around generative AI in the legal profession in the Legal Talk podcast series.

Lexis+ AI, now previewing in Australia, is the fastest legal generative AI - with conversational search, drafting, summarisation, document analysis, and linked hallucination-free legal citations that will transform your legal work and fuel your performance.

To learn more about Lexis+ AI visit: https://www.lexisnexis.com.au/en/products-and-services/lexis-plus-ai

Join Lexis AI Insider and help shape the future of legal tech with generative AI.
