Algorithmic bias in the courtroom: how to fight it?

18 January 2019

With artificial intelligence technology becoming more widely accepted in today’s society, it should come as no surprise that even judges are now embracing AI in the courtroom.  In recent years, courtrooms around the world have increasingly turned to AI technology to improve the efficiency of the courts and the justice system - courtrooms in the United States successfully trialled algorithmic technology to reduce jail populations without jeopardizing public safety1, while lawyers in Argentina used algorithmic software to generate draft rulings, with judges approving 100% of those rulings2.

But algorithmic technology isn’t perfect.  Its weakness is that it depends on the data it is fed – and held within that data are the unconscious biases we all carry, now compounded at scale.  One of the biggest controversies surrounding algorithmic bias has to be ProPublica’s 2016 investigation of a software program used in court to predict recidivism, which incorrectly labelled black defendants as high risk almost twice as often as white defendants. Worse, it made the opposite mistake for white defendants – those labelled lower risk went on to reoffend at almost twice the rate of black defendants.
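To make that kind of disparity concrete, here is a minimal, illustrative sketch of the sort of error-rate audit such an investigation involves: compare, group by group, how often people who did not reoffend were flagged high risk, and how often people who did reoffend were labelled low risk. The records, group labels and helper functions below are invented for illustration; this is not ProPublica’s actual data or methodology.

```python
# Hypothetical records: (group, predicted_high_risk, reoffended).
# All values here are made up purely to show how the error rates are computed.
records = [
    ("A", True,  False),
    ("A", True,  True),
    ("A", False, False),
    ("B", False, True),
    ("B", True,  True),
    ("B", False, False),
]

def false_positive_rate(rows):
    """Share of people who did NOT reoffend but were still flagged high risk."""
    non_reoffenders = [r for r in rows if not r[2]]
    if not non_reoffenders:
        return float("nan")
    return sum(1 for r in non_reoffenders if r[1]) / len(non_reoffenders)

def false_negative_rate(rows):
    """Share of people who DID reoffend but were labelled low risk."""
    reoffenders = [r for r in rows if r[2]]
    if not reoffenders:
        return float("nan")
    return sum(1 for r in reoffenders if not r[1]) / len(reoffenders)

for group in ("A", "B"):
    rows = [r for r in records if r[0] == group]
    print(group, false_positive_rate(rows), false_negative_rate(rows))
```

If the two groups show very different false positive or false negative rates, the tool is making its mistakes unevenly, even if its overall accuracy looks acceptable.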

So how do we solve this problem?

Transparency

One of the most commonly suggested solutions to the problem of algorithmic bias is transparency: open up the algorithm design process and allow the public to see the source code, inputs and outputs of the algorithm.  The public cannot challenge the validity of an algorithm if they have not had an opportunity to review how it was made.

Regulation

Another possible solution would be an independent regulatory body to keep artificial intelligence technology in check and to have a say in how such technology is used – not unlike the Ethics and Governance of Artificial Intelligence Fund, a $27 million fund led by the Berkman Klein Center for Internet & Society at Harvard and the MIT Media Lab. The fund was created to ensure that artificial intelligence and its development occur in a way that is “ethical, accountable and advances the public interest.”  Transparency and independence are crucial to the success of any such regulation.

Better Data & Inclusiveness

Ultimately, however, the main cause of algorithmic bias is the data itself. Algorithms are mirrors: the results they produce directly reflect what their designers, and society at large, feed into them. It has been suggested that greater diversity and the inclusion of varied races, backgrounds and worldviews would help ensure data sets are more balanced, but it is difficult to measure just how much ‘greater diversity’ is needed to achieve that balance. Furthermore, those who are directly affected by an algorithm are often unable to take part in the data collection and are therefore unrepresented in the data sets.
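As a purely illustrative sketch of what ‘measuring’ representativeness might look like, one could compare each group’s share of a training set against its share of a reference population. The group names and figures below are invented for the example and are not drawn from any real data set.

```python
# Compare how each group is represented in a training set versus a reference
# population. Every label and number here is an assumption for illustration.
training_counts = {"group_x": 700, "group_y": 250, "group_z": 50}
population_share = {"group_x": 0.55, "group_y": 0.30, "group_z": 0.15}

total = sum(training_counts.values())
for group, count in training_counts.items():
    data_share = count / total
    gap = data_share - population_share[group]
    print(f"{group}: {data_share:.0%} of data vs "
          f"{population_share[group]:.0%} of population (gap {gap:+.0%})")
```

Even a simple check like this only flags who is missing from the data; it cannot by itself say how much extra representation would be enough to remove the bias.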

Education

Alternatively, we may need to better educate everyone involved in designing an algorithm, including those moderating and training it, on issues of human bias and legal due process. As much as companies claim their algorithms are purely technological, we know that artificial intelligence is about as ‘fully automated’ as the Great and Powerful Oz from The Wizard of Oz – nothing but a man systematically pulling levers from behind a curtain. Ana Beduschi, a senior law lecturer at the University of Exeter, suggests that teaching algorithm designers about human rights, inherent discrimination and how the legal framework operates in this area would help developers make better-informed choices when designing their algorithms. With a better understanding of the right to a fair trial and its implications, designers would also be better placed to choose datasets that do not reproduce society’s unconscious discrimination on the grounds of ethnicity or race.


1 https://www.nytimes.com/2017/12/20/upshot/algorithms-bail-criminal-justice-system.html#commentsContainer
2 https://www.bloomberg.com/news/articles/2018-10-26/this-ai-startup-generates-legal-papers-without-lawyers-and-suggests-a-ruling
