Innovation in the legal industry is a fashionable topic of conversation right now. As with any topic du jour, there’s a fair amount of hype around “legal innovation” and what it means — particularly when it comes to the disruptive potential of technologies like machine learning and artificial intelligence more broadly. Each week there seems to be a new “the robots are coming”-type headline or the announcement of some purportedly AI-based product or initiative.
Machine learning and AI are indeed promising technologies, already impacting the practice of law and what we do at Baker McKenzie. But beyond all the market hype and disruption (both real and feared), there are big, fundamental questions that need answers. And these questions are not just about what technologies like machine learning are doing or will do to the practice of law, but about what the legal industry and law firms of the future should look like. How can these technologies enhance what we do, which, at its heart, is to be a trusted advisor for our clients?
Put another way, what does it mean to be a trusted advisor in an AI-enabled world? At Baker McKenzie, we actually think that is the question from which all answers about how we deploy and control these technologies flow — and one that needs to be answered to achieve industry-wide change.
How can technologies like machine learning and AI enhance what lawyers do, which, at its heart, is to be a trusted advisor for their clients?
The trusted advisor is what all good lawyers aspire to become. As law firms and the complexity of client relationships have grown, we and other large global firms have sought to institutionalize that bond. But pretty much every client care review we do tells us the same thing: when it comes to the crunch, most clients still pick lawyers, not firms. We also see this in-house — the lawyers who advance are the ones who earn the deep trust of senior management and are seen as business partners and counselors, not just legal advice-givers.
So, does AI in the legal industry mean that the trusted advisor is on a pathway to extinction? Well, let’s look at what lawyers do, both in-house and as outside counsel. What is it that we sell? From the beginning of the profession, that’s really been four key things: information, labor, prediction, and judgment. AI is already making, and will continue to make, significant inroads into the first three. On the information front, for example, we’re seeing nascent machine learning in things like automated precedents and self-serve legal apps. That also impacts the labor piece — not just in terms of efficiency, but in the richness of the talent pool we draw from. For a while now, our attorneys’ and paralegals’ work has been augmented by economists, project managers, and knowledge specialists. And in the very near future we’ll increasingly tap into the expertise of data analysts and visualizers, UX specialists, and other non-legal advisors.
Prediction is still delivered in a largely lawyer-based way, though we are seeing some very specific vertical prediction engines for defined tasks. A good example is the range of tools now available that provide legal analytics and insights based on mined litigation data. The way we see it, this is all part of a process that’s transforming what has historically been a very individual (i.e., lawyer-to-lawyer) experience into a much richer, organizational one, enabling our clients to benefit not just from what we and our attorneys know and have done, but from a range of legal and non-legal data and insights from inside and outside the firm.
The challenge, though, that’s often talked about when it comes to lawyers and AI is judgment. When a client asks “what is the law?” what they’re really asking is “what should I do?” Perhaps even more than that, they’re asking “what would you do if you were me?” Answering that requires knowledge of much more than what the law says — it requires an ability to contextualize the law within a wider perspective of the client, the market, and society. The truth is, that’s really hard to replicate when it comes to AI. The narrow AI we’re baking into some of our legal services right now enables us to automate information systems and tasks, and even increase our predictive powers using data-validated insights. But while all of this informs judgment, it doesn’t replace judgment itself; and it doesn’t tell the client what they should do.
We also think the same will hold for most applications of AI in big law in the near future — a horizon we’re not naïve enough to put at any more than three to five years. But as the delivery of information, labor, and prediction falls away from what lawyers do as individuals, what does that remaining judgment piece look like? Well, for the firms and lawyers who can find clever and powerful ways to use AI and other forms of legal tech to augment, not replace, judgment and the empathy that necessarily underlies it, the future is looking very promising indeed.
The challenge that’s often talked about when it comes to lawyers and AI is judgment. When a client asks “What is the law?” what they’re really asking is “What should I do?”
One phrase we use to describe this vision of the future is “machine learning-enabled judgment”. In a nutshell, that’s the high-value service that we as a law firm of the future are working to ultimately deliver to our clients. And when we say “working” we really mean working at it, as there are a few foundational reasons why we and others are not there yet. First and foremost, the evolving AI capabilities we want to take advantage of only exist at scale in the cloud, and there are obvious legal and attitudinal obstacles to putting client data there. The other big issue is the fragmentation and varying degrees of quality and quantity of legal data across the more than 200 (at our last count) separate legal systems globally, not including State systems.
For a global firm with 78 offices across 46 markets, these challenges are particularly acute — and we are doing a lot of work internally to come up with solutions. For example, we are exploring different encryption models to help get client data into the cloud, alongside longer-term projects to structure the enormous amount of know-how we have and bring it into one place. But the issues around use of the cloud and the fragmentation of legal data are tough and systemic. They are collective problems requiring collective problem-solving that draws in the full range of legal, technical, business, regulatory, and academic stakeholders and expertise.
For this reason, one of the most important pillars of our R&D program is ecosystem engagement, including through collaborative hubs we’ve launched and partnered with in key regions — such as the World Economic Forum’s Centre for the Fourth Industrial Revolution in San Francisco, our WhiteSpace Legal Collab in Toronto, and Europe’s first legal innovation hub, ReInvent Law. We recognize that transformational change — truly disruptive innovation — requires us to ask big questions and solve challenges that are far beyond the ability of any one organization to tackle alone.
So, our plan is to keep working at it and collaborating as the tech and our own capabilities evolve. But this is not just the key to getting where we and our clients want to go. The firms and lawyers who do this well, we think, are the ones who’ll succeed in the machine learning-enabled future. To us, “innovation” is not just some new product, initiative, or topic du jour. It is (and really, has to be) who we are and how we work — and from what we can tell, that never goes out of fashion.