AI for Growth: Harnessing the Power of Large Language Models in the Legal Industry
12 July 2023 14:30
Authored by Greg Dickason, Managing Director Asia-Pacific, LexisNexis
Evolution of Large Language Models
There’s no doubt that AI is here. The surge in interest that has arisen from the introduction of new generative tools like ChatGPT or Google’s Bard is unlike anything we’ve seen for a long time. But the truth is that AI has been here for years – and even the Large Language Models that are now causing such excitement are actually already years old.
The new wave of generative tools has given us another iPhone moment. Many will recall Steve Jobs standing on stage in 2007 and presenting the first iPhone to the world. At that point, the world seemed to change. Yet the iPhone was not a new technology in and of itself – there were already Palm Pilots, Blackberries, and touch screens. What the iPhone did so well was incorporate a lot of existing technologies into a sleek, integrated offering. As a result, it captured people’s imaginations and built a huge swell of momentum in a very short space of time. And that’s what we’re seeing now, with the new wave of AI.
At LexisNexis®, we’ve been using AI in our products for years to help lawyers find better information faster, and to draw links between data points in ways that can help them build stronger arguments and see connections that may not have been visible in the past. Our approach is guided by a number of AI principles:
→ Remove Unfair Bias: We understand that mathematical accuracy doesn’t guarantee freedom from bias, which is why we act to prevent the creation or reinforcement of unfair bias.
→ Accountable for Output: Our technology assists our customers’ decision-making processes. It is important that humans have ownership and accountability over the development, use, and outcomes of our tools.
→ Explainable: We know that an appropriate level of transparency creates trustworthiness for users and regulatory bodies.
→ Respect Privacy: Some data sets include personal information. We are committed to handling personal information in accordance with all applicable privacy laws and regulations as well as our own Privacy Principles, which require that we always act as responsible stewards of personal information.
→ Robust Data Governance: AI systems function more accurately when they are fed large amounts of high-quality data, which is why we ensure we have in place robust data management and security policies and procedures.
Developing solutions based on customer needs
Even as a big business, we move quickly to deliver real solutions to real customer needs. When ChatGPT went live, we already had over 150 technologists and data scientists working on large language models – and we pulled more resources from around the business to focus on GPT. The first thing we did was go out and talk to our customers, to find out what they wanted from a large commercial model. The core needs we identified were:
- Search: Help make my research journey easier by not overwhelming me with large amounts of information
- Summarise: Help me quickly and easily digest the information I’ve got back
- Draft: Help me quickly turn that into something that I can send out to my partners or clients - whether that’s an email, a briefing, or anything else
Part of our duty to lawyers in employing AI technologies to solve these problems is to ensure that our output is accurate, reliable, and explainable – the last one being of particular importance. We help our customers see the logic behind AI-led decision-making by being transparent about why we select or omit sources in our results. So, while you’re using a LexisNexis product, we explain how we got the results (even the results we omit). Importantly, we’re doing all of this in a way that respects your IP and data.
Our products are already bringing the power of large language models to bear across the full scope of research tasks. You can be working on a precedent or document, and Lexis® Clause Intelligence will be helping you by suggesting relevant, expert-drafted clauses where they fit. You can dump a brief, a document, or an email into Lexis® Argument Analyser and it will find legal references and citations within that document, helping you to extract the key issues and facts quickly and accurately.
How do you benefit from Generative AI while avoiding the risk?
As with any emergent technology, the fanfare is often tempered by a cautious optimism that is usually driven, at least in part, by a lack of knowledge. For many lawyers and firms, the advantages of Generative AI are clear and present, but there’s uncertainty about where their investments (whether financial, human, or other) are best spent. It comes back to the essential characteristics of legal AI - that it’s accurate, reliable, and explainable. There are a few questions that can be asked of vendors or internal teams to help guide decision-making.
- How accurate is the output? Where is the tool getting its information?
- How private are my interactions? How is my data treated?
- How customised is the tool? Does it understand legal terms? Does it understand different practice areas?
- Do I get citations and output so I can check the tool’s work?
Will AI replace lawyers?
As for the stubbornly enduring question around whether AI will replace lawyers, the answer is no. Lawyers intelligently using AI will replace lawyers who are not.
AI will make research easier, help summarise findings, and automate simple drafting. This will free up lawyers to focus on building a competitive edge. Your biggest competitors won’t be lawyers, they will be lawyers using AI - which makes keeping up with the pace of change even more important!
Ultimately, the biggest risk when it comes to AI and the law is to do nothing. So, if you haven’t yet started your journey, it’s time to start now! At LexisNexis, our approach will continue to be defined by our responsible AI Principles, along with a customer-centric outlook that helps you to drive better outcomes.
If you’d like to chat more about how you can embed AI within your business, just get in touch.
Generative AI for Lawyers: What It Is, How It Works, and Using It for Maximum Impact
Few technologies have generated as much hype across the legal industry as Generative AI. OpenAI’s ChatGPT, Google’s Bard, and a host of new solutions built on Large Language Models (LLMs) are making waves across the world.
Linda Przhedetsky is an Associate Professor at the University of Technology Sydney’s Human Technology Institute. A policymaker and PhD candidate specialising in the ethical development, regulation, and use of artificial intelligence (AI), her research focuses on the role of automated decision-making tools in competitive essential services markets and on effective regulatory solutions that prevent consumer harms while promoting innovation. To say that she’s an authority on the ethical use of AI is, in many ways, an understatement.