Regulating the new world
This article is an extract from the Lawyers and Robots whitepaper.
Regulation of new technology is always a chicken-and-egg question, and the legal profession is no exception. Regulatory bodies are adapted to the particularities of the world as we know it, but technology creates shifts that in turn create the need for new regulation.
We talk to Professor Chris Reed from Queen Mary, University of London and Kristjana Çaka about what regulation in this area could look like.
How and where to strike balance in managing these shifts is the complex part of the question. Regulations are put in place in response to a need for protection of practitioners and their clients; while technological innovation might undo those safeguards, it might equally alleviate workloads and create efficiencies to the good of all.
Clear thought on the matter first of all requires an understanding of what is meant by ‘technology’; the word applies as much to data storage—and other forms of technology that now seem comparatively traditional and familiar—as to automating time-intensive processes such as due diligence. This latter application, which takes us into areas of artificial intelligence, is consistently accompanied by concern that we could soon enter a world of unknowns, where deskilled humans and emboldened robotics create new problems that we are no longer able to manage or control.
It is understood across business that too much regulation can be as problematic as too little, but what if that regulation is key to controlling a sea-change that could wipe away a profession as it has been known?
The obsolete lawyer
However futuristic artificial intelligence and automation can seem, these issues have not sprung from nowhere. Since the turn of the millennium, Chris Reed has been writing about the impact of the internet on authentication processes and information ownership. His starting appraisal is that current regulatory and legal standards are built around a level of competence expected of humans. Technology raises two challenges. The first is whether, if human judgment and decision-making are replaced by technology decisions, the result is adequate to meet regulatory and legal obligations of competence, care and skill. Perhaps of more serious implication for human lawyers is Reed’s second question: how might the profession expect regulation to respond to new forms of artificial intelligence?
‘If technology is expected to perform better than humans—for example in due diligence or discovery searches—does this raise the required professional standard above that of a reasonably competent lawyer?’
Changing with the times
Law does not always adopt new technology enthusiastically, though, nor necessarily should it. Use of video evidence in criminal court, where it would often seem to simplify legal proceedings, is nonetheless subject to strict controls and is often inadmissible over concerns about authenticity, origin or where the data has been stored pre-trial. If we treat cloud-based evidential data no differently from video footage of unknown provenance, perhaps the proposition seems less alien.
Just as video footage might seem to offer incontrovertible evidence only to be judged below the evidentiary value required of a trial, so too is the technology used to store and sift data open to scrutiny. Where data is kept in cloud-based services, the decision of where and how it is stored might be driven by factors as varied as a country’s online privacy laws, or even the positive environmental impact of storage in a cold climate where servers benefit from natural cooling.
‘The major problem with using data as evidence is in producing an audit trail which demonstrates its integrity from the moment it was collected until the moment it is produced to the court. This is particularly problematic with distributed processing storage, such as use of the cloud’.
‘Data changes storage location regularly, under the control of various subprocessors, and none of these systems were designed with a view to producing reliable evidence. Failure to produce an audit trail does not render data inadmissible, but it does open its integrity up to challenge.’
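The audit-trail problem Reed describes can be illustrated with a minimal sketch: a tamper-evident chain of custody records, where each link's hash incorporates the previous link's hash, so any later edit to an earlier record invalidates everything after it. The record fields and function names here are purely illustrative, not drawn from any evidentiary standard or real system.

```python
import hashlib
import json

def record_hash(record: dict, prev_hash: str) -> str:
    """Hash a custody record together with the previous link's hash."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def build_chain(records):
    """Chain custody records so each link depends on all earlier ones."""
    chain, prev = [], "0" * 64  # arbitrary genesis value
    for rec in records:
        h = record_hash(rec, prev)
        chain.append({"record": rec, "hash": h})
        prev = h
    return chain

def verify_chain(chain) -> bool:
    """Recompute every link; an edited record breaks all later hashes."""
    prev = "0" * 64
    for link in chain:
        if record_hash(link["record"], prev) != link["hash"]:
            return False
        prev = link["hash"]
    return True

custody = [
    {"event": "collected", "actor": "investigator", "location": "site A"},
    {"event": "uploaded", "actor": "subprocessor-1", "location": "eu-west"},
    {"event": "migrated", "actor": "subprocessor-2", "location": "us-east"},
]
chain = build_chain(custody)
assert verify_chain(chain)
chain[1]["record"]["location"] = "unknown"  # simulate tampering
assert not verify_chain(chain)
```

The point of the sketch is Reed's: integrity is only demonstrable if such a trail is maintained from collection onwards, which distributed cloud storage, designed for efficiency rather than evidence, rarely does by default.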
According to Kristjana Çaka, it may not be long before we see technology playing a more direct role in the delivery of law to society.
‘There is evidence of courts around the world allowing an increasing reliance on technology. Perhaps the best example is Canada where the Ministry of Justice developed the Civil Resolution Tribunal, which is the first online small claims court to be operational in the world. The Ministry of Justice in England & Wales is said to be doing the same and there are certainly lawyers and advocates, and judges, who are encouraging the use of technology and AI in the legal world.’
If the standard experience of technological adaptation is anything to go by, we can expect transitional periods in which technology presents efficiency savings but imperfect services. The problem, however, is that those imperfect services often externalise their costs onto humans, much as an understaffed overseas call centre loads irritating but tolerable frustrations onto customers. The high stakes at play in law make it a dangerous testing ground—algorithmic sentencing in the US has already been found to compound racial biases when software is used to predict future criminality.
Given the borderless movement of data, it is unsurprising that supranational bodies such as the European Union will be the early candidates for oversight of technology in the legal industry, just as the EU has been ahead of national governments in responding to the dominance amassed by technology corporations like Google and Facebook. The General Data Protection Regulation, due to come into force in 2018, will certainly have ramifications for the handling of data.
While policy on the subject is far from advanced, there is talk that EU lawmakers will come to demand a standard of ‘explainability’ from AI platforms, so that decisions and methodology can be properly scrutinised, and so that technological errors—where they occur—can still be unpicked by human heads. This is in keeping with existing software development theory, where programmers often define their work as a task of presenting both problems and proofs. It is often easier to address the problem, and even its solution, than the proof by which that solution was reached—correct outcomes can be found by fluke and even in error. AI and machine learning, including the ‘convolutional neural networks’ at the heart of so-called ‘deep learning’, will need to remain readable in human terms.
The concepts may seem daunting, but the principle for regulation is no more revelatory than the idea that a bookkeeper will show their workings, not only their answers. Çaka notes that precedents are already being set in other fields dealing with automation:
‘The notion of “explainability” is not surprising when you consider the existing use of AI in medicine. AI systems are already giving treatment suggestions and it seems obviously critical that they must provide evidence for the treatment.’
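What ‘showing your workings’ might mean for an automated system can be sketched with a toy interpretable classifier: a linear score whose per-feature contributions are reported alongside the decision, so a human reviewer can see exactly why the system reached its conclusion. The feature names, weights and threshold here are invented for illustration and do not describe any real legal-tech product.

```python
# Toy 'explainable' decision: a linear score over hand-picked features,
# returned together with each feature's contribution to the outcome.
weights = {
    "precedent_matches": 0.8,    # supporting precedents found
    "contract_age_years": -0.1,  # older contracts score lower
    "missing_clauses": -0.6,     # absent standard clauses score lower
}
bias = 0.2

def predict_with_explanation(features: dict):
    """Return a decision plus the per-feature evidence behind it."""
    contributions = {name: weights[name] * value for name, value in features.items()}
    score = bias + sum(contributions.values())
    decision = "flag for review" if score < 0 else "pass"
    return decision, contributions

decision, why = predict_with_explanation(
    {"precedent_matches": 2, "contract_age_years": 10, "missing_clauses": 3}
)
# score = 0.2 + 1.6 - 1.0 - 1.8 = -1.0, so the contract is flagged,
# and `why` records which features drove that result.
```

A deep neural network offers no such itemised breakdown by default, which is precisely the gap an ‘explainability’ requirement, like the evidence demanded of AI treatment suggestions in medicine, would force vendors to close.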
While it is important that these matters are considered, as with so many facets of AI and automation, the questions at present vastly outnumber the available answers. It is probably too early to make any definitive proclamations.
‘There has yet to be any major movement in government or amongst other bodies towards more regulation of AI,’ Çaka concludes. ‘The main reason for this is probably because we are yet to understand the real force and impact of AI and the possibilities that might come with it.’