
Algorithmophobia: Overcoming your fear of algorithms and AI

Zev Eigen  Founder and Chief Science Officer / Syndio Solutions

· 7 minute read


There are several reasons why some people suffer from "algorithmophobia", ranging from the fear that sentient AI will destroy all humans to less dramatic concerns

This surprisingly exhaustive catalogue of phobias does not include a fear of algorithms or fear of artificial intelligence. The closest on the list may be arithmophobia, which is a fear of numbers, or perhaps mechanophobia, which is a fear of machines.

However, algorithmophobia seems to be real, based on recent events and articles, especially among lawyers, law firms, legislators, and other practitioners in the legal field. There are likely several reasons why some people suffer from algorithmophobia, ranging from the concern that AI will become sentient and destroy all humans to less dramatic concerns about bias and legal risk.

Don’t fear your algos

This article focuses on the concern that unknown algorithms create bias or legal risk and sets forth three critical ways of avoiding or addressing this risk when adopting technology that relies on algorithms, or advising clients on their adoption of such technology.

1. Understand what the algorithm is doing and which processes are being automated

There is a difference between not being able to explain how an algorithm arrives at a prediction or classification because of its complexity, and not being able to explain what the algorithm is doing. The two are often conflated, and that conflation is a source of much confusion and fear of algorithms, which in turn feeds the bias against these technologies.

It is helpful to know what the algorithm is doing, and you should expect that whoever built or is selling the algorithm can explain what it is doing with sufficient specificity. Is it classifying things? Based on which features? Does the system generate new features? Does it incorporate data from outside of the organization? What data? And the list of questions goes on…

The good news is that it is rarely the case that algorithms make decisions autonomously, especially in the legal space. What is significantly more common is algorithmic automation of processes.

For example, there are payroll processing software programs that automate processes such as grouping employees for a pay equity analysis and predicting employees’ rate of pay based on legitimate, neutral policies and practices (and not on gender, race, or other protected categories). These algorithms automate work that would take statisticians or labor economists hours or days to complete.

However, there is no algorithm that then takes this data and makes economic or non-economic adjustments to employees’ compensation — human users are always in the driver’s seat in making these decisions. Understanding specifically which processes are being automated and which are still the purview of humans should allay concerns that the AI is generating bias. It also enables users to diagnose sources of observed bias.
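To make that division of labor concrete, here is a minimal Python sketch of this kind of process automation, using a hypothetical dataset; the column names, model, and flagging threshold are illustrative assumptions, not any vendor's actual implementation. Note where the code stops: it predicts expected pay from neutral factors and flags gaps, and any adjustment to compensation remains a human decision.

```python
# A minimal sketch of process automation (not decision automation), assuming a
# hypothetical employee dataset. Column names, model, and the 5% flagging
# threshold are illustrative only.
import pandas as pd
from sklearn.linear_model import LinearRegression

# Hypothetical data: pay is modeled only on legitimate, neutral factors.
df = pd.DataFrame({
    "tenure_years": [1, 3, 5, 7, 2, 10, 4, 6],
    "job_level":    [1, 2, 2, 3, 1, 4, 2, 3],
    "base_pay":     [52_000, 68_000, 71_000, 90_000, 50_000, 121_000, 66_000, 93_000],
})

neutral_factors = ["tenure_years", "job_level"]
model = LinearRegression().fit(df[neutral_factors], df["base_pay"])

# The automated step: predict "expected" pay and flag large gaps for review.
df["expected_pay"] = model.predict(df[neutral_factors])
df["gap_pct"] = (df["base_pay"] - df["expected_pay"]) / df["expected_pay"]
flagged = df[df["gap_pct"].abs() > 0.05]

# The human step: the software stops here. Whether and how to adjust pay for
# the flagged employees is a decision people make, not the algorithm.
print(flagged[["base_pay", "expected_pay", "gap_pct"]])
```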

2. Don’t shoot the messenger: Understanding the sources of bias

Don’t think about bias as an on/off switch; instead, think about it as a percentage. It is better to frame the question as “How much of the variation in the predicted outcome may be attributed to bias?” rather than “Is the algorithm biased?”

There are two reasons why this is so:

First, bias comes in different forms from different sources. Isolating the source or sources and measuring how much each contributes (relatively speaking) will help tremendously in the quest to reduce or eliminate problems. For instance, in the employment space, imbalances in employment decisions may be due to factors such as: i) imbalances in labor supply (when applicants are predominantly male, for instance); ii) imbalances in labor demand (if the job requires graduation from institutions that are predominantly white); or iii) the algorithm itself. Be sure to work with experts to apply the right diagnostic statistical tests to determine what percentage of the algorithm’s predicted outcome may be due to these varying sources of bias. That way, you can avoid shooting the messenger by falsely attributing problems to the AI that are properly attributable to the data source or the selection criteria applied. You need to know whether the issue lies with the data, your criteria for applying rules, or the model, as well as how much of the problem is attributable to each.
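As a rough illustration of that kind of diagnostic, the Python sketch below computes an incremental R-squared: how much additional variation in pay a protected attribute explains after the legitimate, neutral factors are accounted for. The data are simulated and the variable names hypothetical; a real analysis should be designed with statisticians or labor economists and may use different tests entirely.

```python
# A sketch of one possible diagnostic (incremental R-squared) on simulated,
# hypothetical data. It answers "how much," not "yes/no."
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "tenure_years": rng.uniform(0, 15, n),
    "job_level": rng.integers(1, 5, n),
    "is_female": rng.integers(0, 2, n),
})
# Simulated pay: mostly driven by neutral factors, with a small gender effect.
df["pay"] = (40_000 + 2_000 * df["tenure_years"] + 15_000 * df["job_level"]
             - 1_500 * df["is_female"] + rng.normal(0, 5_000, n))

neutral = ["tenure_years", "job_level"]
r2_neutral = LinearRegression().fit(df[neutral], df["pay"]).score(df[neutral], df["pay"])
full = neutral + ["is_female"]
r2_full = LinearRegression().fit(df[full], df["pay"]).score(df[full], df["pay"])

# The increment is the share of pay variation attributable to gender over and
# above the neutral factors -- a percentage, not an on/off verdict.
print(f"Variation explained by neutral factors: {r2_neutral:.1%}")
print(f"Additional variation explained by gender: {r2_full - r2_neutral:.1%}")
```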

Second, this reframing is important because it helps avoid confirmation bias in evaluating the usefulness and effectiveness of a given algorithmic approach. Invariably, users have pre-existing hypotheses about relationships between model inputs and outputs. As in our previous example, many users of payroll processing software programs are certain that their organizations “pay for performance.” But what happens when the model does not confirm that belief? Someone thinking in black and white will question the model. Someone who understands bias as a percentage of the outcome explained will have much more productive (and sometimes less hostile) conversations about the real drivers of pay. It is less about being “right or wrong,” and more about identifying how much something is impacting an outcome.

3. Compared to what?

Humans sometimes make bad, biased, suboptimal, ill-informed, and lazy decisions. We are poor at evenly and consistently weighting factors, and we are really poor at holding a lot of complex information in our heads at the same time. But that does not mean we should ban all humans from rendering legal decisions or giving legal advice, right? Then why would we ban algorithms or AI just because algorithmic evaluation sometimes produces bias-impacted results?


One seemingly obvious, but frequently overlooked question to pose when evaluating whether and to what extent an algorithm (in isolation from data and other sources) is generating biased results, is “Compared to what?” This means establishing a base rate of comparison, which may require careful measurement of the status quo and additional experimentation.

If you are deciding whether to implement an algorithmic approach to hiring that (net of labor supply and labor demand effects) yields 70% men in its recommendations, is 70% good or bad? Is it producing bias? Aside from the pitfall of the dichotomized version of that question (see #2 above), one needs to again ask, “Compared to what?” To know whether 70% is great or terrible in statistical terms, we need to compare the “actual” results to the “expected” results in order to evaluate whether they are likely occurring non-randomly due to some factor like gender or race. Without the algorithm, let’s say the company’s recruiting team was hiring men at an 80% rate. In that case, the algorithm reduced the gender imbalance by 10 percentage points, which is an improvement. Assuming that the answer to “Compared to what?” must be perfect balance (in this example, 50% male/50% female) holds algorithms to an unrealistic standard.
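As a rough sketch of that comparison in Python, the example below tests a hypothetical 70-out-of-100 male recommendation rate against both the measured 80% status quo and a 50/50 parity assumption. The counts are made up, and a binomial test is only one of several reasonable ways to compare “actual” to “expected” results.

```python
# A sketch of the "compared to what?" check on hypothetical counts: does the
# algorithm's 70% male recommendation rate differ from the 80% human baseline,
# and from a 50/50 ideal?
from scipy.stats import binomtest

male_recommended, total_recommended = 70, 100  # algorithm's output (hypothetical)
status_quo_rate = 0.80                         # human process, measured beforehand
parity_rate = 0.50                             # perfect balance

vs_status_quo = binomtest(male_recommended, total_recommended, status_quo_rate)
vs_parity = binomtest(male_recommended, total_recommended, parity_rate)

# Small p-values suggest the observed rate is unlikely under the comparison rate.
print(f"vs. 80% status quo baseline: p = {vs_status_quo.pvalue:.3f}")
print(f"vs. 50% parity benchmark:    p = {vs_parity.pvalue:.4f}")
```

Either way, the test only quantifies the comparison; whether a 10-point improvement over the status quo is good enough remains a human judgment.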

Hopefully, this article offered some helpful guidance to allay any algorithmophobia you and your teams may be experiencing. Remember too that algorithms, artificial intelligence, statistics, and machine learning are tools that are best used in tandem with expert advice.

In tests in medicine, law, and complex strategic games like chess, poker, and Go, algorithms beat human experts from time to time, and humans beat algorithms from time to time. But centaur systems (those that feature experts armed with advice from algorithms) consistently outperform AI and experts acting alone. Such pairings are also a great way to check bias, as humans learn to be better creators and consumers of algorithms.
