Forum Magazine: The Bryan Cave Case Study — How Data is Used in Firms

Topics: Data Analytics, Efficiency, Forum Magazine, Law Firm Profitability, Law Firms, Legal Innovation, Legaltech, Process Management, Technology

Bryan Cave Leighton Paisner (BCLP) has for many years recognized the importance of leveraging data and adopting a data-driven approach to decision making across three key strands: i) for the internal management of the firm; ii) for optimizing the delivery of clients’ legal work; and iii) for understanding the content and risks in clients’ legal portfolios.

This data-driven approach has been enabled by key nonlegal disciplines working closely with the firm’s lawyers. These disciplines include data analysis, financial analysis and planning, process engineering and software engineering. Many industries, including financial services, retail and manufacturing, have been radically transformed by becoming more data-driven and digitized. Within legal services this trend is still in its infancy, but the potential is significant. The following are some examples of BCLP’s application of data to enhance its legal service delivery.

Internal Uses

Rate Setting

Seeking to reduce the time taken to complete the annual rate-setting process, the management committee saw an opportunity to leverage data and analytics.

For several years, the committee had been using an application written into our financial dashboard to complete the process; thus, the data team already knew most of the data the committee considered. The data team used Minitab and multivariate regression analysis to review the 65 metrics available to the committee. Ultimately, the team found that just 24 of those metrics explained 94% of the variability in the final rate decided for each fee earner.

With that knowledge, the data team was able to create an algorithm that used those 24 metrics to predict each fee earner’s rate for the following year. Because the algorithm cut the time needed from two weeks to about 10 seconds, the committee could also incorporate factors it had previously lacked time to consider, such as geographic and practice-area rate surveys. A moment of triumph occurred when the committee remarked on the algorithm’s accuracy: “The computer is winning.”
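As a rough illustration of this kind of analysis, the sketch below fits a multivariate regression in Python with scikit-learn standing in for Minitab; the file name and column names are hypothetical stand-ins for the firm’s dashboard data.

```python
# A minimal sketch of the rate-setting regression, assuming a flat
# extract of the committee's dashboard data. The file and column names
# are invented for illustration.
import pandas as pd
from sklearn.feature_selection import RFE
from sklearn.linear_model import LinearRegression

df = pd.read_csv("fee_earner_metrics.csv")            # one row per fee earner
X = df.drop(columns=["fee_earner_id", "final_rate"])  # the 65 candidate metrics
y = df["final_rate"]                                  # the committee's decided rate

# Narrow the 65 candidates to the most predictive subset (the article
# reports 24 metrics explaining 94% of the variability, i.e. R^2 of 0.94).
selector = RFE(LinearRegression(), n_features_to_select=24).fit(X, y)
X24 = X.loc[:, selector.support_]

model = LinearRegression().fit(X24, y)
print(f"R^2 = {model.score(X24, y):.2f}")

# Scoring next year's proposed rates then takes seconds rather than weeks.
df["predicted_rate"] = model.predict(X24)
```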

Rosetta Narratives

Another challenge was the firm’s metrics-only lawyer performance reports, which often raised more questions than they answered. As a solution, the firm decided to create a narrative version of the performance reports that identified trends, root causes and relationships, explaining the themes to the lawyers in action-oriented language.

The first version was manually created for a significant portion of the firm’s 400 partners by the firm’s Practice Economics Group (PEG). Based upon the positive reception, PEG wrote Rosetta, an application that can reach into any system within the firm to pull relevant data, create complex metrics from that data, test the metrics for meaning, dynamically prioritize the way the story is told and then construct a rich narrative to explain what is occurring.
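Rosetta itself is internal to BCLP, but the general pattern (compute metrics, test them for materiality, tell the biggest stories first) can be sketched as follows; the metrics, thresholds and sentence templates here are invented for illustration.

```python
# A hypothetical sketch of metrics-to-narrative generation. Real systems
# would pull these figures from firm databases rather than hard-code them.
import pandas as pd

TEMPLATES = {
    "realization": "Realization is {value:.0%}, {delta:+.0%} versus last year.",
    "collection_days": "Invoices are collected in {value:.0f} days on average "
                       "({delta:+.0f} vs. last year); earlier follow-up would help.",
}

def build_narrative(metrics: pd.DataFrame) -> str:
    """metrics: one row per metric, with columns value, prior and weight."""
    metrics = metrics.assign(delta=metrics.value - metrics.prior)
    # "Test for meaning": keep only metrics that moved materially, then
    # prioritize the narrative by the size of the move.
    story = metrics[metrics.delta.abs() * metrics.weight > 0.01]
    story = story.reindex(story.delta.abs().sort_values(ascending=False).index)
    return " ".join(
        TEMPLATES[name].format(**row) for name, row in story.iterrows()
    )

report = pd.DataFrame(
    {"value": [0.88, 45], "prior": [0.92, 38], "weight": [1.0, 0.01]},
    index=["realization", "collection_days"],
)
print(build_narrative(report))
```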

This technology is used annually for partner practice health reports and throughout the year on outstanding invoices, helping partners prioritize their collection efforts.

TASKER

In 2013, PEG recognized the challenge of producing meaningful budgets across all of the firm’s practice areas from time entries that either lacked task codes or used task codes that did not align with the budget being built. The firm’s data was simply insufficient to meet client and partner expectations on budget turnaround times.

To solve that, PEG wrote a machine-learning application that leveraged a Bayesian classifier to learn how to properly task-code time entries for budgets. (A Bayesian classifier works much like the spam filters built into email programs such as Outlook®.) The first use case was securities reporting work that needed to comply with the Securities Exchange Act of 1934, which has no standard task code set. The team trained the technology on a dozen different Exchange Act matters, and the classifier began auto-coding a custom task list on new matters with increasing accuracy.
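The article does not describe TASKER’s internals, but a minimal version of the same idea, a naive Bayes text classifier over time-entry narratives, might look like this in Python with scikit-learn; the narratives and task codes below are invented examples.

```python
# A minimal sketch of Bayesian task-coding. Multinomial naive Bayes is
# the same family of model behind classic email spam filters.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Training data: time-entry narratives hand-labeled with custom task codes.
narratives = [
    "draft 10-K risk factors section",
    "call with auditors re comment letter",
    "review proxy statement disclosures",
    "prepare board resolutions for filing",
]
task_codes = ["10-K", "SEC_COMMENTS", "PROXY", "BOARD"]

model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(narratives, task_codes)

# Auto-code new, uncoded time entries for budgeting.
print(model.predict(["revise 10-K MD&A draft"]))  # -> ['10-K']
```

In practice the training set would be the hand-coded entries from the dozen seed matters, and each newly corrected matter would be fed back in to improve accuracy over time.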

Since then, the computer has been taught to auto-code all types of legal work, which saves PEG significant time and enables the group to build budgets against a larger population of matters.

Client Uses

Litigation Portfolios

Following the financial crisis of the late 2000s, a financial institution with a significant body of mortgage litigation sought to accurately estimate the remaining life of its litigation portfolio and to use that information to reduce costs and select the optimal attorney team.

Drawing upon a large pool of mortgage-related litigation cases, the PEG team used billed-hours data to create a statistical distribution model to predict case life. An estimation model incorporated monthly hours data to predict the overall effort required to resolve both anticipated new matters and matters already in progress for varying lengths of time. Finally, linear modeling identified the lowest-cost, most efficient legal team to handle the matters through their remaining life, including when to move attorneys off the team and onto different work as the remaining portfolio dwindled.
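The article does not name the distribution the team fit, but the core idea (fit a lifetime distribution to closed-case durations, then estimate the expected remaining life of matters of a given age) can be sketched as below; the Weibull choice and the sample durations are illustrative assumptions only.

```python
# A minimal sketch of case-life modeling from closed-case durations
# (in months), which would in practice be derived from monthly hours data.
import numpy as np
from scipy import integrate, stats

closed_case_months = np.array([3, 5, 6, 8, 9, 11, 14, 15, 18, 22, 27, 36])

# Fit a Weibull distribution to the observed case lifetimes.
shape, loc, scale = stats.weibull_min.fit(closed_case_months, floc=0)
dist = stats.weibull_min(shape, loc=loc, scale=scale)

# Expected remaining life of a matter already open for 12 months:
# E[T - 12 | T > 12] = (integral of the survival function from 12) / S(12).
remaining, _ = integrate.quad(dist.sf, 12, np.inf)
remaining /= dist.sf(12)
print(f"expected remaining life at month 12: {remaining:.1f} months")
```

Summing such estimates across every open matter, plus expected new filings, yields the portfolio-level effort forecast that the staffing optimization can then work against.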

The optimized staffing suggested by the model allowed the client to realize significant cost savings over traditional approaches, and, in the end, the portfolio’s remaining life closely mirrored the model’s predictions.

Quantifying Legal Process Improvement

An insurance company client with significant spend in a litigation portfolio could not understand why its spend was so high compared with other, similar portfolios. The company engaged the firm’s legal operations group to help it solve the problem.

First, the firm conducted a process-mapping exercise with each of the key parties in the litigation: coinsurers, the insured, national coordinating counsel and local counsel in high-volume locations. Next, multiple years of the insurance company’s line-item invoice data were run through the previously mentioned TASKER tool to re-code the time entries based upon the activities identified in the process mapping, which enabled a more precise quantification of effort and frequency per task. The analysis also surfaced deviations from the stated processes.
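Once the line items are re-coded, the quantification step is essentially an aggregation. A minimal sketch, with hypothetical column names and figures, might look like this:

```python
# A sketch of quantifying effort and frequency per mapped process task,
# assuming invoice line items already re-coded by the classifier.
import pandas as pd

lines = pd.DataFrame({
    "matter": ["M1", "M1", "M2", "M2", "M2"],
    "task":   ["status_report", "deposition", "status_report",
               "deposition", "status_report"],
    "hours":  [1.5, 6.0, 2.0, 7.5, 1.0],
    "fees":   [450, 2100, 600, 2625, 300],
})

# Compare observed effort against the process maps to surface avoidable
# work (e.g., manual status reporting).
summary = lines.groupby("task").agg(
    entries=("hours", "size"),
    total_hours=("hours", "sum"),
    total_fees=("fees", "sum"),
)
summary["share_of_spend"] = summary.total_fees / summary.total_fees.sum()
print(summary)
```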

The project concluded with a report describing how 30% of the legal spend could be avoided through changes to operational processes (e.g., by eliminating manual reporting processes that had been made redundant by conference calls).

Land Registry Use Case

In the U.K., all property ownership records are stored in a central government Land Registry. Large real estate transactions have historically required large volumes of ownership records to be individually downloaded as PDF documents and then manually reviewed and reported on.

The firm’s technology innovation group explored opportunities to drive efficiencies in this process and make it more data-driven. This resulted in the implementation of a direct data interface to the Land Registry that allows bulk records to be extracted and analyzed as a digital portfolio. This approach has significantly reduced the time and associated cost of this due diligence work while also improving the accuracy of the analysis.
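The shape of such an integration might resemble the sketch below. The endpoint, field names and response structure are entirely hypothetical; the firm’s actual interface is not described in that detail.

```python
# A hypothetical sketch of bulk record extraction: pull each title as
# structured data instead of a PDF, then analyze the portfolio as one
# dataset. The URL and response fields are placeholders, not a real API.
import pandas as pd
import requests

API = "https://landregistry.example/api/titles"  # placeholder URL

def fetch_title(title_number: str, session: requests.Session) -> dict:
    """Retrieve one ownership record as structured data."""
    resp = session.get(f"{API}/{title_number}", timeout=30)
    resp.raise_for_status()
    return resp.json()

title_numbers = ["ABC123", "DEF456", "GHI789"]   # the portfolio under review
with requests.Session() as session:
    records = [fetch_title(t, session) for t in title_numbers]

# Every record in the portfolio is reviewed, not just a sample.
portfolio = pd.json_normalize(records)
print(portfolio[["title_number", "registered_owner", "tenure"]])
```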

Historically, manual approaches to legal due diligence have often necessitated sampling when reviewing contracts; a direct data interface and analysis tools now enable entire portfolios to be reviewed in depth and at a lower cost than historical sampling.


This article was written by Chris Emerson, Chief of Legal Operations Solutions at Bryan Cave Leighton Paisner, and Bruce Braude, the newly appointed Chief Technology Officer at Deloitte Legal in the U.K. Braude was formerly the director of Legal Operations Solutions at Bryan Cave.