John Rabiej, J.D., the Director of the Duke Law Center for Judicial Studies, has been studying the impact of technology on the judiciary for many years, and is concerned that the courts are not keeping pace with the rapid advancement of technology in the culture at large. The Center brings together judges, lawyers, researchers, government officials and other parties to advance the study and understanding of the judicial process and to generate ideas for how it might be improved. Last year, Duke acquired EDRM, the leading e-discovery standards organization, which held its first conference under Duke’s leadership last spring.
Justice Ecosystem recently interviewed John Rabiej about the impact of technology on the courts and whether it’s time to re-think best practices and devise new ways for the courts to operate more efficiently and effectively.
With regard to technology’s evolving impact on the judicial system, what issues are on the Center’s radar now and in the next couple of years?
The Center has five discrete programs that bring judges and lawyers together to discuss issues and identify best practices, all of which interface with technology in one way or another. Last year, we held a conference with 15 federal judges and about 75 lawyers, where we talked a lot about e-discovery and TAR (Technology Assisted Review). In general, what we found was that about one-third of judges and lawyers are comfortable with these technologies, but two-thirds are not. This is cause for concern because, with TAR for instance, we have a technology that’s been around for 10 years, and is recognized to be more reliable and a lot cheaper than manual review. The question is: Why hasn’t this technology been more successful, especially in big cases?
Which raises the question: Why isn’t TAR used more often?
We found several reasons: One is that many judges and lawyers are unfamiliar with TAR. Many of them don’t think they need to learn about [technology]. It’s my view that the bench and bar have been given a free pass for too long on these matters. They have an obligation to stay up to speed on technology that has an effect on the administration of justice. Rule No. 1 is the right to a speedy and inexpensive process of justice, and technology helps with that. The younger generation of lawyers gets it, but some of the older generation doesn’t, and they often use their age and experience as an excuse not to learn about it.
The other big problem besides unfamiliarity is gamesmanship between plaintiffs and the defense. With TAR, for instance, in many cases the responding party doesn’t want to disclose how they train the computer to identify relevant matter, because they’re afraid it might reveal their strategy or inadvertently hand over sensitive information. The requesting party doesn’t trust the responding party, suspecting that the computer training is rigged. So each side might use TAR for other reasons, but not for production. This is a lost opportunity.
What can be done to make lawyers and judges more comfortable with technologies that can accelerate and improve the administration of justice?
Well, the bench and bar are notoriously slow to change. It’s taken us 10 years to get to the point where we can talk about best practices for TAR. Next year, the discussion is going to be all about artificial intelligence (AI), and it’s probably going to take us another 10 years to adapt to that. With regard to TAR, the Center will be coming up with some specific guidance in the next six to twelve months. One promising approach we will look into is for the responding party to provide some information up front about what it’s doing, respond to and produce the requested matter, and then give the requesting party a second bite at the apple. After reviewing the production, the requesting party has to have confidence that it can go back to the judge and request information it thinks is missing. If the second-bite type of discovery is sufficiently targeted, both sides might be satisfied.
So does the blame for technology’s slow march in the judiciary lie mostly with lawyers, or do judges have a hand in it too?
What we hear from judges is that they’re looking for guidance on these issues. ‘We don’t know everything, and it’s up to lawyers to educate us,’ is what they’re thinking. Many judges want that information, and it is the responsibility of the bar to provide it.
That said, there are a lot of lawyers and judges who don’t want to learn, and to me that’s just not right. Look, the business and medical worlds have been using this technology to make life-or-death decisions for years, so how do judges and lawyers get away with saying the technology is not good enough or reliable enough?
What role do financial pressures play in the pace of adoption and implementation of beneficial new technologies?
On the litigation side, the amount of information lawyers must process is getting larger and larger, so they’re necessarily being pushed toward technologies that can help them handle the load. Their clients are pushing them to control costs, too. Where the bench is concerned, most judges are open to education because they are all very concerned about the cost of litigation, and about people being priced out of the courts and forced into alternative dispute resolution.
In general, things are better on the federal side, but the states face greater challenges. The situation is much, much worse there. A lot of states don’t even have electronic filing, which really puts them behind. There, it’s really a question of money, elected officials, politics, and all the rest.
What advice would you give lawyers or judges who are skeptical of technology’s role in the judicial process?
I’d remind them that technology isn’t bad — it all depends on how you apply it. Yes, there are risks, but the beauty of our system is that it’s adversarial. How is e-discovery different from the paper world, really? The safeguard in both cases is that the information can be reviewed, people can be deposed, and any omissions can be revealed and addressed. The same thing applies here. Is it perfect? No. But it never has been. Where technology is concerned, a lot of people apply a very high standard of accuracy to TAR, even though manual, linear review really is quite inaccurate. Does anyone believe that having humans review millions of documents will uncover more responsive documents than a properly programmed computer?