As the legal industry — like many similarly situated industries across world markets — increasingly embraces artificial intelligence (AI) to jump-start the automation, efficiency, and interconnectivity of its operations, it may be wise to pause before throwing that switch.
Even while full adoption of AI is still in its infancy in many areas of the legal industry, stories of ethical problems with the use of AI have already bubbled to the surface. These problems — including biases embedded in AI algorithms, questions over security and privacy, and uncertainty over the role of human judgment — have made the full deployment of AI and its far-reaching consequences an area of concern. They have even created a new field of “robo-ethics” and piqued the interest of the United Nations and the World Economic Forum.
Mira Lane, Partner Director of Ethics & Society for Microsoft, recently discussed how important it is to consider the ethical aspect of expanding AI. “We don’t always see what it’s doing to us,” Lane explains. “AI tech can be a power multiplier, and it can help people scale very quickly.” However, she adds, algorithms trained on large volumes of data can also absorb biases within that data, which are then reflected in the resulting algorithmic models.
“Who will be impacted? What are the unintended consequences?” Lane asks. “Ultimately, it means thinking about responsibility and accountability.”
To further this critical discussion, Thomson Reuters is presenting a session on ethics in AI, as part of the TR Takeover of Legal Geek on March 10, a half-day event that will feature the latest insights on the future of the legal profession and the impact of the newest legal technology.
The session will highlight why it’s important to care about AI ethics, noting that collected data always reflects the social, historical, and political conditions in which it was created. “Artificial intelligence systems ‘learn’ based on the data they are given,” says Milda Norkute, Senior Designer at Thomson Reuters Labs, a team focused on AI innovation that will be presenting tangible examples of how it’s applying these principles and processes in practice.
You can register here to attend the TR Takeover of Legal Geek on March 10.
These pre-existing conditions in which the data is collected, along with many other factors, can lead to biased, inaccurate, and unfair outcomes, Norkute explains, adding that this problem only grows as artificial intelligence and related technologies are used to make decisions and predictions in such high-stakes domains as criminal justice, law enforcement, housing, hiring, and education. “These biased outcomes have the potential to impact basic rights and liberties in profound ways,” she says.
Nadja Herger, a Data Scientist at Thomson Reuters Labs, will walk attendees step-by-step through how ethics in AI is considered throughout the design, development, and deployment process.
“For AI ethics to appropriately be taken into account, it is essential to reflect on its implications at every step of the lifecycle,” Herger says, adding that means including questions such as: What is the impact of an imperfect AI system? Is there bias in our training data? How are users expected to interact with the AI system? How can we show how the AI system came to a certain decision to strengthen a user’s trust?
“It is essential for corporations to take a proactive approach with these issues, to ensure sustainable, ethical, and responsible use of AI,” she says.
Eric Wood, a partner at the law firm Chapman and Cutler, will also discuss the specific impact of AI on companies and law firms in a discussion alongside Norkute and Herger. This session will examine topics such as creating AI guidelines for your company, how ethical considerations around AI can come up at work, and whether there needs to be stricter regulation of AI.
Overall, the session will address the most crucial issues organizations face when dealing with AI: How do you ensure that AI is used ethically, rather than accelerating the biases and problems already inherent in society at large?
Dr. Paola Cecchi-Dimeglio, a behavioral scientist and senior research fellow for Harvard Law School’s Center on the Legal Profession and the Harvard Kennedy School, noted previously that it’s very important for legal organizations — or companies in general — to determine why they are using AI in the first place. “You have to remember that with many legal organizations, the data they are looking at is either what is publicly available or data they have gathered from working with their clients. And when artificial intelligence starts working with this data, it can be a very positive thing for a law firm,” Cecchi-Dimeglio says, noting that this process allows firms to make better decisions about jurisdictions, judges, and client matters in comparable situations.
“But problems arise, especially problems with biases, when the organization isn’t careful about from where it’s taking its data or about what portion of data it’s using and not using,” she adds. “Because if you start out with a biased history, you’re going to have biased results.”