Building the future of legal services with legal AI
06 February 2024 10:00
In a recent episode of the Legal Talk podcast, Greg Dickason, Managing Director, LexisNexis® Asia & Pacific, and Claire Linwood, Local Product Lead at LexisNexis Asia & Pacific, unpacked some of the key issues that decision makers in law and legal operations grapple with as they build the AI-powered futures of their organisations.
This blog post touches on just a few of the issues discussed. To listen to the full episode, click here.
The rise of generative AI since late 2022 has brought AI to the forefront of legal practice and knowledge work more broadly. With the rate of AI adoption increasing steadily across all industries, lawyers and law firms are also turning their attention to how best to integrate this step change in technology into their workflows.
As a lawyer or legal operations professional, it can be difficult to know how to get from the status quo to a future state in which AI can be used as a tool to make legal work more efficient and rewarding and to reduce risk.
At LexisNexis, we have been working with AI for many years to respond to our customers’ needs and to promote the rule of law – enhancing discoverability of content by building AI techniques into search and creating efficiencies in drafting and litigation with specialised, AI-powered products like Argument Analyser.
Unlocking the power of AI for lawyers is at the heart of our work at LexisNexis as we introduce the commercial preview of our generative AI solution Lexis+ AI™ in Australia.
As a decision maker, how do I know whether to invest in a legal AI solution?
Decision makers in law firms and legal departments may find themselves wondering whether, when and how much to invest in AI solutions and other novel technologies.
There is no doubt that some legal and legal ops practitioners are experiencing a sense of déjà vu, having seen many new technologies and workflow innovations come and go over the last decade – some not delivering anticipated returns. But while technologies such as blockchain, for example, may have only limited application in certain areas of law, generative AI is poised to revolutionise the industry.
For decision makers who are considering AI adoption, it is often helpful to reflect on the readiness of the organisation. Is the organisation culturally ready to adopt and implement AI? Are members of the organisation able to identify and explore the potential AI use cases that exist within the enterprise? If an AI solution is intended to benefit a particular team, is that team on board with adopting and using that solution? These are the sorts of factors that will be fundamental to the success of an AI project.
As with any effective change management initiative within an organisation, AI implementation requires buy-in from executive leadership and from the organisation as a whole. Learning about the application of generative AI in law – and some of the methods used to ensure that AI output is appropriate for legal tasks like research and drafting – can be helpful.
How can LLMs work best for lawyers?
Generative AI models – including Large Language Models (LLMs) – are trained on vast datasets containing a great deal of material that is not relevant to legal use cases. What sorts of techniques are used to ensure that LLMs can be shaped to work optimally for the legal profession and legal use cases?
Fine-tuning an LLM for specific legal tasks can assist the model in understanding domain-specific legal phrasing and terminology. Fine-tuning in the context of LLMs essentially involves re-training a pre-trained, out-of-the-box model by providing it with a large amount of relevant data in order to improve its ability to perform certain tasks. An example of fine-tuning in a non-legal realm is improving the capacity of a model to perform sentiment analysis by feeding it a large volume of product reviews or social media posts.
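In practice, fine-tuning starts with preparing a dataset of example inputs and desired outputs. As a rough sketch, many fine-tuning pipelines accept training data as JSONL (one JSON record per line); the prompt/completion pairs and the `legal-finetune.jsonl` filename below are invented for illustration, and real fine-tuning datasets would contain thousands of vetted examples:

```python
import json

# Illustrative prompt/completion pairs for a legal fine-tuning dataset.
# These two examples and the output filename are hypothetical.
examples = [
    {
        "prompt": "Define 'consideration' in contract law.",
        "completion": "Consideration is something of value exchanged between "
                      "parties that makes a promise enforceable.",
    },
    {
        "prompt": "What does 'estoppel' mean?",
        "completion": "Estoppel prevents a party from asserting a position that "
                      "contradicts its earlier conduct or statements.",
    },
]

def to_jsonl(records):
    """Serialise records as one JSON object per line (JSONL),
    the format many fine-tuning pipelines expect."""
    return "\n".join(json.dumps(r) for r in records)

jsonl = to_jsonl(examples)
with open("legal-finetune.jsonl", "w") as f:
    f.write(jsonl)
```

The resulting file would then be supplied to whichever fine-tuning service or training loop the organisation uses; the quality and domain relevance of these pairs, not the file format, is what shapes the model's legal vocabulary.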
Retrieval-Augmented Generation (RAG) is a means of increasing the accuracy and authoritativeness of LLM output by cross-referencing a trusted source of knowledge. A purpose-built legal AI solution – like Lexis+ AI – refers to specific, high-quality legal content sets to ground its responses in truth. RAG combined with linked citations enhances transparency by allowing the user to check the original content source. It is one of the techniques used to reduce the risk of AI ‘hallucination’ – where a model generates false information.
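The RAG pattern can be sketched in a few lines. The toy retriever below ranks passages by simple word overlap (a stand-in for the vector search a production system would use), then grounds the prompt in the best match and cites it; the two case names and passages are invented for illustration:

```python
import re
from collections import Counter

# A toy, in-memory "content set". A real legal RAG system would query a
# curated database; these passages and citations are invented examples.
CORPUS = {
    "Smith v Jones [2019]": "A contract requires offer, acceptance and consideration.",
    "Doe v Roe [2021]": "Negligence requires a duty of care, breach and resulting damage.",
}

def tokenize(text):
    return re.findall(r"[a-z]+", text.lower())

def retrieve(question, corpus):
    """Rank passages by word overlap with the question - a crude
    stand-in for the retriever in a RAG pipeline."""
    q = Counter(tokenize(question))
    def score(passage):
        return sum(q[w] for w in tokenize(passage))
    return max(corpus.items(), key=lambda kv: score(kv[1]))

def build_prompt(question, corpus):
    """Ground the LLM prompt in the retrieved passage and cite its source,
    so the user can check the answer against the original content."""
    citation, passage = retrieve(question, corpus)
    return (
        "Answer using only the source below and cite it.\n"
        f"Source ({citation}): {passage}\n"
        f"Question: {question}"
    )

prompt = build_prompt("What are the elements of negligence?", CORPUS)
```

The grounded prompt, rather than the bare question, is what gets sent to the LLM; because the source and its citation travel with the question, the model's answer can be checked against the retrieved passage.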
At its core, prompt engineering is about providing the best possible natural language input in order to get the most appropriate output from an LLM for a particular use case. Prompt engineering is an iterative process of experimenting with the prompts required for a desired output. It can be used in conjunction with RAG and fine-tuning to significantly reduce the risk of hallucinations in legal use cases, and it is a particularly accessible discipline for lawyers because it draws on the verbal reasoning skills central to legal practice.
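That iteration might look like the sketch below: a vague first attempt is revised to add a role, task constraints and an output format – the levers typically adjusted during prompt refinement. The specific wording and constraints are illustrative, not a recommended canonical template:

```python
def vague_prompt(question):
    """First attempt: the bare question, which leaves the model
    free to answer in any style and without citing authority."""
    return question

def engineered_prompt(question):
    """Revised attempt: adds a role, behavioural constraints and an
    output format. The constraints shown are examples only."""
    return (
        "You are an Australian commercial lawyer.\n"
        "Answer concisely, cite authority for each proposition, "
        "and say 'I don't know' if the answer is not settled law.\n"
        f"Question: {question}\n"
        "Answer (bullet points):"
    )

p = engineered_prompt("Can a verbal contract be enforced?")
```

Each revision is judged by the output it produces, and the prompt is adjusted again until the output is reliably fit for the task – the same test-and-refine loop lawyers already apply to drafting.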
The importance of content in legal AI
The techniques that allow LLMs to function effectively in a legal context come back to authoritative and trustworthy content. It is difficult to overstate the importance for legal AI of using high-quality, curated datasets.
At LexisNexis, our AI is grounded in our authoritative and trusted legal content. In addition to content, we have spent years ensuring that the search experience is as effective as possible so that users are given the most relevant set of responses for their search query.
We combine this content and search expertise with RAG in Lexis+ AI. Taking this approach ensures that the most relevant and up-to-date documents will be available to be surfaced as authority for LLM output.
What should I consider when choosing a legal AI partner?
Lawyers and legal ops professionals should feel equipped to ask questions of their potential vendor when considering a legal AI solution.
- How is your data treated once it is input as a prompt? Is it purged?
- Is your data used to train the model?
- How well does the model understand legal terms?
- Is the model able to understand different practice areas?
- Is the model using RAG to ground its responses in authoritative content? What is that content set?
Stay tuned over the coming weeks as we explore more of the key issues and opportunities around generative AI in the legal profession in the Legal Talk podcast series.