Tonya Custis is a Research Director at Thomson Reuters where she leads a team of scientists performing applied research in Artificial Intelligence (AI) technologies. Recently, she was a panelist at Legaltech (part of ALM’s Legalweek New York 2018).
The panel, “Artificial Intelligence for Legal Research: Why Data Matters,” was sponsored by Thomson Reuters; the other panelists included Don MacLeod, Manager of Knowledge Management at Debevoise & Plimpton, and Carlos Gámez, Senior Director of Innovation for the Legal Business at Thomson Reuters.
Justice Ecosystem asked Custis 5 questions:
1) In the session teaser to the panel, Thomson Reuters said: “All artificial intelligence applications start with data. AI for law requires a combination of domain expertise, annotated content, and technical expertise — you can’t simply throw AI at legal data and expect good results.” In Big Law, how can the technology teams educate senior lawyers?
Tonya Custis: The more lawyers know about how AI actually works, the more comfortable they will become with it. It’s important to understand how AI algorithms use data, and what the implications of that can be in the output of an AI system. People are afraid of what they don’t understand, so it’s important that they take the time to learn about the big picture of AI and how it applies to Legal.
It’s also important to understand that most AI systems get better as they are used more — they adapt to and learn from users’ behavior — so, the more they are used, the better they will get. Most consumer products and other products (like Westlaw) already contain AI — it’s not really new, it’s just receiving a lot more attention lately.
2) What are some of the types of data that go into AI systems?
Tonya Custis: Generally, all types of data can go into an AI system — anything a computer can process: text, sounds, images, database records, user clicks, etc. At Thomson Reuters, we’re fortunate to have 150 years of editorial enhancements on top of our “regular” legal text data (cases, statutes, regulations, secondary sources, briefs, etc.). We are also able to build AI into our editorial pipelines to make attorney editors’ lives easier. We are able to reuse many AI features and standalone products in new ones, and we are able to layer on top of existing AI to build more and more complex models.
Also, we are able to leverage Westlaw Key Numbers, KeyCite information, Headnotes, and other taxonomies and proprietary features in our models. This information, and other editorial enhancements, give us additional context about legal documents, which we can then add into our AI models. Having more domain-specific evidence allows the machine learning models to make more relevant decisions and suggestions. In addition, we have access to attorney editors who generate training data for our models — they grade the output of the models so we can use that as input to make our models even better.
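The loop Custis describes — editorial signals as features, attorney grades as labels — is standard supervised learning. The sketch below is a hypothetical illustration, not Thomson Reuters code: the feature names and numeric values are invented, and a simple logistic-regression model is fit by gradient descent using only the Python standard library.

```python
import math

# Invented training examples: each document is described by editorial
# signals (e.g., shares a Key Number with the query's results, has a
# clean KeyCite status, headnote text overlap), and an attorney editor's
# grade serves as the label (1 = relevant, 0 = not relevant).
training_examples = [
    ([1.0, 1.0, 0.8], 1),
    ([1.0, 0.0, 0.6], 1),
    ([0.0, 1.0, 0.2], 0),
    ([0.0, 0.0, 0.1], 0),
]

def predict(weights, bias, features):
    """Logistic-regression score: estimated probability of relevance."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

def train(examples, epochs=500, lr=0.5):
    """Fit weights by plain gradient descent on log loss."""
    weights = [0.0] * len(examples[0][0])
    bias = 0.0
    for _ in range(epochs):
        for features, label in examples:
            error = predict(weights, bias, features) - label
            bias -= lr * error
            weights = [w - lr * error * x
                       for w, x in zip(weights, features)]
    return weights, bias

weights, bias = train(training_examples)
# Documents whose editorial signals resemble graded-relevant documents
# now score higher than those resembling graded-irrelevant ones.
print(predict(weights, bias, [1.0, 1.0, 0.9]))
print(predict(weights, bias, [0.0, 0.0, 0.1]))
```

The point of the sketch is the division of labor: the editorial enhancements supply the features, and the attorneys' grades supply the labels the model learns from.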
3) Why does AI expertise need to be “married” to legal domain experience to make them both work better?
Tonya Custis: The basic premise of AI is to mimic human behaviors or thinking. In Legal, you want the AI to behave or think like a lawyer — you want the models to mirror what a lawyer would do. For example, which answer or recommendation would a lawyer click on? That’s the one we want to train our model to show you first.
In order to get an AI model to behave in a way that is useful to attorneys, we need to train it with data that makes sense and which is valuable to attorneys. We need to tell the model what part of the data is important for the behavior we are trying to mimic — it takes domain expertise to understand which parts of the data should be relied on and which parts should be ignored in order to make a decision. The system learns to associate or map the signals in the data that matter for the domain to the desired outcome. Without any domain expertise, an AI expert may stumble on some important domain features or discriminating metadata features to use in a model, but they are likely to miss many more, along with nuances that only a legal expert knows.
Conversely, if a legal expert with no AI expertise attempts to load their data into an AI tool, they are likely to make errors because they do not understand how those tools work or how best to structure the data so that the data and the model work well together.
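Custis's click example — "which answer would a lawyer click on? That's the one we want the model to show first" — is the classic setup for learning from click preferences. The sketch below is a minimal, hypothetical illustration (all feature values invented): each click is treated as a preference pair, and perceptron-style updates nudge the weights whenever a skipped document outscores a clicked one.

```python
# Invented preference pairs from hypothetical search logs: each pair is
# (features of the document the lawyer clicked,
#  features of a document shown alongside it that was skipped).
# Features might encode things like jurisdiction match, citation count,
# and query-term overlap -- the values here are made up.
click_pairs = [
    ([1.0, 0.9, 0.8], [0.0, 0.2, 0.3]),
    ([1.0, 0.7, 0.9], [0.0, 0.4, 0.2]),
    ([0.0, 0.8, 0.7], [0.0, 0.3, 0.1]),
]

def score(weights, features):
    """Linear ranking score for a candidate document."""
    return sum(w * x for w, x in zip(weights, features))

def train_pairwise(pairs, epochs=200, lr=0.1):
    """Perceptron-style pairwise training: whenever a skipped document
    scores at least as high as a clicked one, shift the weights toward
    the clicked document's features."""
    weights = [0.0] * len(pairs[0][0])
    for _ in range(epochs):
        for clicked, skipped in pairs:
            if score(weights, clicked) <= score(weights, skipped):
                weights = [w + lr * (c - s)
                           for w, c, s in zip(weights, clicked, skipped)]
    return weights

weights = train_pairwise(click_pairs)
# The learned weights now rank click-like documents above skipped-like ones.
print(score(weights, [1.0, 0.8, 0.8]) > score(weights, [0.0, 0.3, 0.2]))
```

This is where the domain expertise Custis describes enters: deciding which signals to encode as features in the first place is a legal judgment, not a machine-learning one.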
4) How can researchers maximize AI features that are already built into legal research products?
Tonya Custis: Thomson Reuters’s Research & Development (R&D) group started in the 1990s and has been putting these technologies that we now call AI into products since then. I’ve been working in this field, mostly at Thomson Reuters, for 14 years. It’s not like my job suddenly has changed — it’s just that people have started noticing the work we do in Natural Language Processing and Machine Learning and have started calling it ‘AI’.
There are many Thomson Reuters Legal applications that already contain AI: Westlaw’s WestSearch algorithm, KeyCite, Folder Analysis, Research Recommendations, PeopleMap, Medical Litigator, Drafting Assistant… When AI is done well, you don’t notice it. If it’s working, it seems intuitive.
The confluence of more computing power, more (digitized) data, and wider availability of AI software and toolkits has made AI accessible in the consumer space. Professionals expect that tools available in the consumer space will have analogues in their domains of expertise.
In order to maximize the AI in legal research products, lawyers need to trust it. They need to trust that it will get better, that it’s learning. They need to recognize that going deep in a linguistically nuanced domain such as the law is a little more difficult than the much more general shopping domain that has limited and clear intents and outcomes. They need to recognize what’s most valuable to them personally in their research — one lawyer might find some AI features helpful, but not others. Above all, the purpose of AI Legal applications is to make lawyers’ jobs easier, faster, and better.
5) What is the most common question that you get from lawyers?
Tonya Custis: The most common question I get from lawyers is whether AI is going to replace them in their jobs. The answer is probably not. AI will help them do their jobs, but we are a very long way from a talking robot lawyer. And, if we weren’t, we’d still need lawyers to help train it. AI can augment a lawyer’s ability to do her job — save her some time doing research, surface some cases or information she may not have been aware of already, or perhaps suggest some articles or arguments she may have overlooked.
AI can automate some lower-level tasks, yes, but that should then only free up more time for attorneys to do the more complex (and perhaps more satisfying) lawyering tasks that require higher-level thinking and argumentation.