CodeX FutureLaw Conference: Progress in the Trenches (Part 2)

Topics: Access to Justice, Corporate Legal, Efficiency, Government, Law Firms, Legal Innovation, Stanford Law School

STANFORD, Calif. — If the criticisms of the legal industry’s slow pace of change recounted in Part 1 of this blog post set the tone for the annual CodeX FutureLaw 2016 conference, the remaining sessions showcased the many fronts where, despite everything, change is moving forward. (Also, check out Ralph Baxter’s recent blog post on what FutureLaw has come to mean for the legal profession.)

Cognitive Computing

The real All-Star panel was entitled Hot or Not — Watson and Beyond, and its theme was the question of whether concepts like data analytics, artificial intelligence (AI), cognitive computing and other advanced technologies are just hype or are really making inroads. The panelists, moderated by Chicago-Kent College of Law’s Dan Katz, included Noah Waisberg of Kira Systems; Khalid Al-Kofahi of Thomson Reuters; Charles Horowitz of the MITRE Corporation’s Center for Judicial Informatics, Science, and Technology; Andrew Arruda of ROSS Intelligence; and Himabindu Lakkaraju of Stanford University.

The panel did its best to defuse the hype around AI and cognitive computing, although Arruda did take the opportunity to announce that ROSS, a legal research tool built atop IBM Watson technology, has been adopted by a number of additional law firms, joining Latham & Watkins, whose adoption was announced earlier.

Waisberg noted that there are many flavors of cognitive computing, and the key to getting beyond the hype is to use the right flavor for the job. He identified specific areas — eDiscovery, legal research and data extraction from contracts (Kira’s specialty) — as fields yielding good results with targeted applications of machine learning. Al-Kofahi noted that Watson and many other machine-learning tools have yet to move up the value chain. They have proven good at “finding stuff,” but there are fewer successes in analyzing and understanding data, and in making decisions based on that understanding. Lakkaraju underscored that different algorithms are designed for different tasks — there is no one-size-fits-all tool that solves every problem. The burden remains on those trying to solve a problem to fit the right tool to it, she said. Engineering machine-learning solutions is domain-specific, offered MITRE’s Horowitz, adding that everyone has underestimated the difficulty of that piece.

In the end, the question of whether something is just hype is easily settled, said Waisberg: “If you don’t see the meat, it’s not there.” Anyone purporting to have a solution must ultimately produce results and data showing that it works. Katz concluded with a similar call for greater transparency and public displays of validation that will move us beyond the hype.

Computational Law

Another meaty session focused on computational law, which, loosely defined, involves the representation of law in computable form that enables machines to analyze it and to automate and execute legal processes and actions.

Led by Harry Surden of the University of Colorado, this session revolved around the particular difficulties of working with legal data. The day before, Surden had run a workshop and brainstorming session on access to legal data. Legal data presents a dual challenge: (i) much of it (even when produced by the government) is locked up behind paywalls; and (ii) even when freely available, it tends to come in non-standard formats and is often of poor quality.

Nicole Shanahan, a CodeX fellow and CEO of her own company, ClearAccessIP, gave a typical example: Her company’s offering often relies on extracting data from PDFs, which can introduce numerous data problems. Ultimately, she said, computational law requires thinking carefully about input. That comment steered the discussion in a new direction, toward data quality, which is generally agreed to be poor across the industry. The general conclusion: Until courts and other public bodies — and lawyers themselves — start producing more standardized digital content, progress in computational law will be slow.

E-Government

One of the more interesting sessions centered on the space where law, public services and consumers come together: e-government and the application of technology to public services. Jonathan Reichental, CTO of the City of Palo Alto, was on hand with some interesting perspectives on urbanization and technology. He noted that 3 million people a year are moving into urban areas, and cities are woefully unprepared for it. But tech can provide solutions — for example, with driverless cars. He predicted that human-driven cars will be banned in 30 years, and invited us to consider the implications for city design.

Jenny Montoya Tansey from Code for America demonstrated how much difference a thoughtful process redesign can make in simplifying online tasks such as applying for food benefits. Colleen Chien offered an insider’s view of the healthcare.gov fiasco, and how it exposed the dangers of outdated IT and processes.

Moderator Jason Baron wrapped things up by asking what’s holding government back from deploying more effective IT. His own answer: The government needs help with the “open” part of open digital data — records are going digital but, without clear information governance policies, are ironically becoming less accessible. Reichental pointed to simple demographics: The current generation of government IT workers needs to retire and move on. There is also an economic issue: Tech startups can help with much of this, but they need to be compensated by cash-strapped agencies. Chien also identified a cultural issue — a “fear of failure” — that is pervasive in government IT.

In all, the picture painted was that there is still a lot to do to make IT properly serve some of the basic functions of government, and not just the more visible legal functions that courts and legislatures play.

Two Conclusions from FutureLaw

Two conclusions leapt out from this high-energy and wide-ranging edition of the FutureLaw conference. First, the inner circle of enthusiasts and evangelists has been joined by an increasingly large range of players who are implementing new legal technologies on the ground, with good results. Second, however, the wide range of stakeholders that Jim Sandman identified in his keynote as potential levers of power and agents of change is not really at the table yet.

The challenge for the next few years will be to continue to push this discussion up and out of the already converted and into the hands of the doers, influencers and funders who can make more of it happen.

The panel at the session entitled “Barriers to LegalTech Adoption and Possible Solutions”