Justice by Algorithm: Do Machines Help Humans Make Better Decisions?

Topics: Access to Justice, Artificial Intelligence, Data Analytics, Diversity, Efficiency, Government, Legal Innovation, State Courthouses

Matthias Spielkamp is the co-founder and executive director of AlgorithmWatch, a non-profit organization dedicated to evaluating the social impact of artificial intelligence (AI) and machine-learning systems on human decision-making.

Part of AlgorithmWatch’s mission is educating the public about the complexity of so-called automated decision-making (ADM) systems, particularly in the justice system and other areas where the technology is used to predict or prescribe human behavior. Spielkamp believes in the promise of ADM technologies, and started AlgorithmWatch so that societies can maximize the benefits of ADM and AI, minimize the risks, and develop strategies for responsible oversight of such technologies as they become more powerful and pervasive.

Justice Ecosystem caught up with Spielkamp in his native Germany, where his organization is preparing to receive the Theodor Heuss Foundation’s highest award for “encouraging a differentiated debate about criteria for development of artificial intelligence.”

Justice Ecosystem: AlgorithmWatch analyzes both the risks and opportunities of ADM through machine-learning and artificial intelligence. In the U.S. court system, “algorithm bias” seems to be the biggest concern at the moment. What other risks should those in the legal system be aware of with regard to these technologies?

Matthias Spielkamp: Whenever mistakes creep into automation systems, there is the risk that bad decisions will be made at scale. Meaning, if you have a “wrong” risk assessment, chances are it’s not just wrong in one single case but whenever the system is used. For example, if the system uses biased training data, it could lead to wrongful discrimination against a particular ethnic group. It’s hard to say specifically what the risk is and where it might occur — these matters need to be assessed on a case-by-case basis.


Justice Ecosystem: You are also optimistic about the contributions such systems can make to society and have expressed concerns that public misunderstanding could lead people to demonize ADM technology unfairly. What is the biggest misconception people have about the use of ADM in the justice system?

Matthias Spielkamp: There exists a rather paradoxical idea of the consequences of using “machines” in general, be it in the justice system or elsewhere. On the one hand, people ascribe some kind of objectivity to any result that was produced by a software program on the basis of data. At the same time, many people feel uneasy if they are being judged by a machine, particularly when it comes to decisions where stakes are high. Both perspectives seem to me shortsighted, if not misguided.


Matthias Spielkamp, of AlgorithmWatch

On the one hand, there can be many errors in automated systems — so data and algorithms are by no means objective or neutral. But neither are people, and in many cases human decisions are much harder to audit than those made by machines. So, we’ll hopefully be looking at a future where we find a good combination of human decision-making assisted by well-designed and well-audited machines.

Justice Ecosystem: Does the use of AI in the courtroom pose any risks in terms of eroding the public’s trust in the legal system? Or can reducing human fallibility through AI actually serve to increase public trust?

Matthias Spielkamp: We need to be able to trust institutions to perform meaningful oversight of such systems. If we feel that the courts or police are using systems they are not capable of understanding themselves, this will most likely erode trust. What we need to see is the creation of oversight mechanisms that keep pace with these and other technological developments. This is an enormous challenge, because at the moment it looks like the most qualified people are working at private companies.

Justice Ecosystem: Do you think the courts are adopting AI technology at a pace that’s appropriate for the digital age? Or, to put it another way, are AI tools being adequately assessed before being put into practical use?

Matthias Spielkamp: We can only talk about the systems we know about, of course. And I see that there are mistakes being made — for example, in the case of deploying COMPAS without sufficient debate. [Note: COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) is a risk-assessment program used in several states to help judges determine a defendant’s risk of recidivism.] At the same time, I see an imbalance in the attention paid to a case like this, because it gives the impression that many processes in the court system are already automated. From what I know, this is not the case.

Justice Ecosystem: Used wisely, do you think AI technologies can deliver on the promise of greater fairness, efficiency, and accountability in the administration of justice?

Matthias Spielkamp: Yes.

Justice Ecosystem: What kinds of questions should those in the justice system be asking about ADM systems as they inevitably become more pervasive and powerful?

Matthias Spielkamp: There is no list of questions to ask. There needs to be an intense and thorough dialogue with companies, academic researchers, and civil society when designing and deploying these systems, so that we can recognize the risks early and address them in a responsible and intelligent way.