Responsible AI & Ethics

24 May 2023

Linda Przhedetsky is an Associate Professor at the University of Technology Sydney’s Human Technology Institute. She’s a policymaker and PhD candidate specialising in the ethical development, regulation and use of artificial intelligence (AI). Her research focuses on the role of automated decision-making tools in competitive essential services markets, and on developing effective regulatory solutions that prevent consumer harm while promoting innovation. So to say that she’s an authority on the ethical use of AI is, in many ways, an understatement.

With debate continuing across the legal industry - and, admittedly, most industries - about how to responsibly integrate AI into the future of work, LexisNexis® Vice President, Regulatory Compliance - Global, Myfanwy Wallwork, sat down with Linda to discuss all things AI and ethics.

“So when we talk about responsible AI, I think it can be quite easy to grandstand and use all sorts of fancy language to talk about how we can use AI for good,” says Linda. “But when we're being responsible, we need to know what we're actually trying to achieve and also what we're trying to avoid.” The things that should be avoided generally fall into three categories:

First, Failure. An AI system that doesn’t do what it’s meant to do. That could mean security failures, or a system that isn't treating people fairly. It might not be working as intended, or it might be failing when it’s needed most. For example, a medical AI that’s assisting during surgery - what happens if it cuts out in the middle of the procedure?

Second, Malicious Deployment. Things like cyber attacks, spreading misinformation, creating deep fakes, and manipulating people.

Third, Overuse - deploying an AI system in a context it wasn't designed to function in. This could mean relying too heavily on technology when a human perspective is really needed, and it can also carry environmental costs, such as the carbon footprint of excessive use.

“We need to make AI that's fair, that's fit for purpose, that is accurate and accountable,” says Linda, “and we have to remember that when we're talking about responsible AI, we’re not operating in a digital Wild West - the law still applies.”

She notes that the realm of global AI regulation is rapidly evolving - the EU is moving towards passing an AI Act, and the GDPR (also EU legislation) has had a significant impact on the handling of personal data and the operation of AI here in Australia. “These technologies allow us to function internationally, but we also need to be cognisant of the laws we need to comply with when we're operating in these different jurisdictions,” says Linda. “But I think that can have some major positive effects, because if some international laws do set a higher standard, it gets us thinking: how do we lift our own compliance and our own standards when it comes to our own company policies?”

But despite the increasing reach of global regulation, there is still an ethics and values-led question to be answered. Laws are, at their core, a reflection of prevailing values after all. Linda notes that often, these questions are considered academically when what’s needed is real-world consideration. “I think it's important to acknowledge that when we're talking about ethical dilemmas, we're talking about harms,” she says. “So tangible negative impact. This could be a simple inconvenience like something not going into your spam folder when it should have, or it can be a really serious consequence like unlawful discrimination or loss of life.”

Linda also notes that some organisations use AI ethics principles and frameworks. These are not a substitute for legal compliance, but can sometimes fill gaps where the law is silent. “There are currently hundreds of principles-based AI ethics frameworks all around the world,” she says. “We have one that was released by the Australian Government in 2019, promoting human-centred values like fairness and contestability. But when we talk about ethics, some of the real challenges lie in how we actually operationalise things. For example, if you were using an AI system to look at resumes to determine whether a job candidate was suitable or unsuitable, you'd really be asking: where do we draw the line? What is a fair process? What kind of feedback should we be providing those candidates? So ethics lead us to some really interesting discussions and very interesting questions.”

Linda’s research focuses on the impacts of AI in rental apps and platforms, where it is being used to assess applicants’ suitability for rental properties. She notes that in a sector that already has well-documented cases of discrimination, technology has the potential to be part of the solution. But at this stage, the impact is variable. “Some of the worst examples I've seen have been apps in the North American market that look at people's data, including their social media profiles, and claim to be able to predict their neighbourliness, their reliability, their likelihood to pay rent, their willingness to pay rent and their eviction potential,” says Linda. “So these apps claim to use AI, but what they're really doing is spitting out a number. And we don’t know how accurate this number is, what it's based on, and frankly, I don't think that it's particularly fair to be using someone's social media profile to determine whether or not they are going to be a good neighbour. These products shouldn't be in the market but they are, and the real estate agents who are using them in their assessments don't necessarily know how to think about them critically because these products promise the world. But really, they could be harming the renters that are trying so hard in competitive property markets to put their best foot forward. And we really shouldn't be allowing these technologies to be involved in these decisions unless we know they are operating fairly.”

Linda says that when considering these sorts of tech vs real-world impact dilemmas, it’s important to remember that there are always trade-offs - for example, a biometric or facial recognition AI might mean making a trade-off between privacy and accuracy. To be responsible, it’s essential that we have purposeful and deliberate conversations about what should be prioritised. “Ultimately, those are very important conversations to be having every day. AI systems are not set-and-forget, they can't be. You need to constantly interrogate what choices you’re making and why you’re making them, and whether they still suit the values your organisation is prioritising.”

She mentions an example from Amazon, which developed an algorithm to inform its hiring practices, particularly for technically focused roles. The company used historical data to work out which of the people it had hired had been the best performers. “You might think that's neutral data, but really, if you feed it into an algorithm, which is what they did, it turned out to preference male candidates,” says Linda. “That led to biased decisions, and it had a number of negative consequences. Not only did it discriminate against potential female and non-binary applicants, it also disadvantaged the organisation by keeping it from getting the best candidates it could possibly hire. So when we're thinking about implementing these algorithms into the organisation, we really need to be thinking not only about how to explain the decisions internally, but also how we want to be accountable to the people that they impact. So Amazon ultimately ended up deciding not to use this algorithm.”
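To see how the mechanism Linda describes can play out, here is a minimal, hypothetical sketch in Python - it is not Amazon's actual system, and the data, keywords and scoring rule are invented purely for illustration. A naive screening model scores applicants by how closely they resemble historically successful hires; because the history is skewed, a keyword that merely correlates with gender ends up driving the score, even for applicants with identical technical skills.

```python
# Hypothetical sketch only - not Amazon's system. A naive "screening model"
# learns, per resume keyword, the historical hire rate, then scores new
# applicants by averaging those rates. A skewed history produces a skewed score.

from collections import Counter

# Toy historical data: (resume keywords, was hired). Past hires skew towards
# one gender-correlated keyword ("rugby club") over another ("netball club").
history = [
    ({"python", "rugby club"}, True),
    ({"java", "rugby club"}, True),
    ({"python", "rugby club"}, True),
    ({"python", "netball club"}, False),
    ({"java", "netball club"}, False),
    ({"python", "netball club"}, True),
]

# "Training": count how often applicants with each keyword were hired.
hired, seen = Counter(), Counter()
for keywords, was_hired in history:
    for kw in keywords:
        seen[kw] += 1
        hired[kw] += was_hired

def score(keywords):
    """Average historical hire rate across an applicant's known keywords."""
    rates = [hired[kw] / seen[kw] for kw in keywords if kw in seen]
    return sum(rates) / len(rates) if rates else 0.0

# Two applicants with identical technical skills get very different scores,
# purely because of the gender-correlated keyword the model has absorbed.
print(round(score({"python", "rugby club"}), 2))    # 0.88 - favoured
print(round(score({"python", "netball club"}), 2))  # 0.54 - penalised
```

Nothing in the sketch references gender directly - the bias arrives entirely through the historical labels, which is exactly why “neutral data” is not a safe assumption.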

She notes that when it comes to the law, there are many applications of AI technology - from tools that help with research and discovery, document review, and all manner of process-driven tasks, to those that handle higher-order legal tasks in the court system. Regardless of the AI’s job, Linda advocates for thoughtful human oversight and warns against the ‘lab-coat effect’, where people place an overly high value on information deemed to have come from an expert source, even when it’s unclear how the conclusions were drawn. “It's important when we're using these legal tools to understand how they work, how reliable they are, and really think through whether it's appropriate to be using them when they're informing very consequential things,” says Linda. “So, particularly in the courtroom, when we're talking about, for example, bail predictions or calculating the risk of recidivism, we can't predict the future. So we have to remember to take some of these predictions with a grain of salt and really think in depth about how we use them.”

When it comes to the use of Generative AI in law, Linda says it’s about skill development. “One of the biggest skills that lawyers need to develop is identifying the right prompts,” she says. “If they're going to use technology like generative AI, it’s essential they understand how these tools work and how to get the most out of them. So if you're asking for a research summary, how do you pose that question? The other thing that is increasingly important is still having that engagement with, and full understanding of, the law so that you can sense-check whether the AI-produced outputs are in fact accurate. We're not at the stage where everything is coming out seamlessly, so we still need a lot of oversight from professionals in the field.”

Overall, Linda is positive about the development of responsible AI within the legal industry and more broadly. The promise of this technology to improve lives - from improving judicial outcomes to reducing workload stress - is immense. Significant inroads are already being made across automation, machine learning, and now Generative AI - but Linda is firm that, given how pivotal these technologies are, they must be built with care, thought, and perspective to be truly useful and ethical.

This article is based on the second podcast in our AI Decoded podcast series. You can listen to the full episode here.

At LexisNexis, our purpose is to advance the rule of law. This purpose guides our actions, so when developing solutions we ensure that everything we build is in line with RELX’s responsible AI principles. These principles keep our solutions aligned with our values and maintain our position as a thought leader in the market.
