The Big Question for the Legal Ecosystem: Can Artificial Intelligence Be Trusted?



What happens when human beings grant decision-making power over their lives to a computer algorithm?

The United States is conducting an unprecedented mass experiment to answer this question. People already trust artificial intelligence and various machine-learning algorithms to recommend movies, books, music, and games, as well as to navigate traffic, trade stocks, and sort their Twitter feeds. Algorithms are currently being used to evaluate bank loans, college applications, job résumés, insurance claims, credit scores, and dozens of other decisions that affect the arc of people’s lives.

In courtrooms across the country, lawyers and judges too are using computer algorithms to help them decide which jurors to select, whether an offender is a flight risk or likely to re-offend, and what sentencing guidelines to follow.

In fact, our entire culture is essentially doing a giant “trust fall” when it comes to incorporating technology into everyday life. The difference between the justice system and ordering an Uber, however, is that every other institution in society depends upon trust in the law and a belief in the general concept of justice, both of which must be carefully safeguarded to ensure the stability upon which daily life depends.

The idea that the 2016 election could somehow have been influenced by Facebook’s News Feed algorithm is a good example of what can happen when assumptions about technology go unchecked. Trust in the entire electoral process is now being tested in ways we’ve never experienced before.

By the same token, one can point to hundreds of different ways in which life is improved by the assistance of algorithms — even Facebook’s algorithms. This is “smart” technology’s double-edged sword: It can help, but it can also hurt, often in ways that are unintentional. As Harvard data scientist Cathy O’Neil explains in her 2016 book, Weapons of Math Destruction, algorithms do not have “moral imagination,” so it’s imperative to “embed better values into our algorithms, creating Big Data models that follow our ethical lead.”

Algorithmic Justice

In the court system, judges in several states now use sophisticated software programs to assess a defendant’s risk of recidivism, information used primarily to help determine bail and parole. In theory, these programs help judges make better decisions, faster, by giving them relevant, unbiased information about the offenders in their courtroom.

Some judges welcome these new tools and some do not, but all are under intense pressure to adjudicate cases more efficiently. The slippery-slope fear is that, over time, judges will become more comfortable accepting the computer’s recommendation, and that human judgment will be unduly influenced by an impersonal, data-crunching algorithm that judges come to trust a bit too much.


Most people don’t question the algorithms that influence their lives, so when they do, it can be eye-opening. Last year, Loomis v. Wisconsin, a petition to the U.S. Supreme Court stemming from a sentencing case in that state, called into question the decision of a judge who used a program called COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) to assess a defendant’s risk of recidivism. The defendant, Eric Loomis, argued that the court’s reliance on COMPAS’s “high risk” score violated his due-process rights because he had no way to challenge how the score was calculated. The defense requested access to the data and methodology behind COMPAS’s algorithm, but the parent company, Northpointe Inc. (since renamed Equivant), refused, claiming that the algorithm is proprietary.

The Wisconsin Supreme Court ruled against Loomis on the grounds that he would have received the same sentence with or without the COMPAS report. But, crucially, the court did not take on the issue of private companies using proprietary algorithms to influence decision-making in the public court system.

A separate analysis of COMPAS by the non-profit investigative news organization ProPublica compared COMPAS’s predictions for more than 10,000 defendants with what actually happened over the following two years. ProPublica found that COMPAS correctly predicted an offender’s recidivism 61% of the time but was correct in predicting “violent” recidivism only 20% of the time. ProPublica also found that, even after controlling for prior crimes, future recidivism, age, and gender, black defendants were 45% more likely than white defendants to be assigned higher risk scores, and 77% more likely to be assessed a higher risk score for violent recidivism. More recent studies, one out of Dartmouth and another from Stanford, found that COMPAS was little better at predicting recidivism than untrained people recruited online, and no more accurate than their pooled, crowd-sourced judgments.
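To make the kind of comparison ProPublica describes more concrete, here is a minimal sketch, in Python, of how one might check a risk tool’s predictions against observed outcomes and ask whether error rates differ across groups. The records below are made up purely for illustration; this is not ProPublica’s dataset or methodology, and the group labels and field names are hypothetical.

# Illustrative sketch only: hypothetical data, not ProPublica's analysis.
# Each record: (group, predicted_high_risk, reoffended_within_2_years)
from collections import defaultdict

records = [
    ("A", True, False), ("A", True, True), ("A", False, False), ("A", True, False),
    ("B", True, True), ("B", False, False), ("B", False, True), ("B", True, True),
]

counts = defaultdict(lambda: {"fp": 0, "negatives": 0, "correct": 0, "total": 0})

for group, predicted_high, reoffended in records:
    c = counts[group]
    c["total"] += 1
    if predicted_high == reoffended:   # prediction matched the actual outcome
        c["correct"] += 1
    if not reoffended:                 # among people who did not re-offend...
        c["negatives"] += 1
        if predicted_high:             # ...how often were they flagged high risk?
            c["fp"] += 1

for group, c in sorted(counts.items()):
    accuracy = c["correct"] / c["total"]
    fpr = c["fp"] / c["negatives"] if c["negatives"] else float("nan")
    print(f"group {group}: accuracy={accuracy:.0%}, false-positive rate={fpr:.0%}")

In ProPublica’s terms, the “false-positive rate” here is the share of people who did not re-offend but were nonetheless flagged as high risk; a large gap between groups on that measure is the kind of disparity its analysis reported.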

Equivant argues that these studies confirm COMPAS’s accuracy and says it has taken steps to address concerns over its algorithm, but there is currently no independent way to evaluate its databases and algorithms for evidence of continued bias. As with Facebook and Google, the general public has no way of knowing whether the problem is really being solved.

Trust In… Who? What?

There are plenty of reasons to believe that a responsible — dare we say judicious — approach to incorporating legal technology into the court system can serve to improve and solidify the bond of civic trust that holds the legal system together. But trust is an all-too-human trait, one that is hard to establish and easy to shake.

The American Bar Association’s (ABA’s) Model Rule 1.1 states that lawyers have an ethical duty to be aware of the “benefits and risks” of relevant technology as it pertains to their clients, businesses, and the legal system in general. Part of that responsibility is knowing when new technologies can be trusted and when they can’t… and developing the wisdom to know the difference.