William Orville Douglas, an Associate Justice of the U.S. Supreme Court from 1939 to 1975, famously remarked in 1948: “The law is not a series of calculating machines where definitions and answers come tumbling out when the right levers are pushed.”
As his point vividly suggests, when it comes to law and the practice of law, there is a great deal of human reasoning involved, far beyond the reach of any simplistic calculating machine.
Of course, his authoritative remark was proffered in the late 1940s, when computers were massive in size yet minuscule in computational capability compared to the power of today’s computing. An intriguing present-day question, then, is whether the more sophisticated tech and AI systems now emerging might someday perform legal tasks on par with those of humans.
A recent conference entitled FutureLaw 2020 provided an in-depth exploration of the ways that technology is transforming the law, along with the implications for how we will ultimately interact with legal institutions as those technological advances take hold.
This annual conference brings together top-notch legal experts from all realms, encompassing academics, research scholars, entrepreneurs, investors, lawyers, regulators, engineers, and the like. Because the timing this year coincided with the social-distancing efforts underway to cope with the pandemic, the conference was converted into an online collection of videos and podcasts, posted for ready access at this link here.
The conference was organized and undertaken by the Stanford Center for Legal Informatics, known generally as CodeX. Dr. Roland Vogl provided an overview of FutureLaw 2020 in his capacity as Executive Director of CodeX and of the Stanford Program in Law, Science and Technology (see his video at the link here).
CodeX has an emphasis on the research and development of Computational Law.
As stated by Stanford’s Professor Michael Genesereth in his excellent paper entitled Computational Law: The Cop In The Backseat (see the link here): “Computational Law is that branch of legal informatics concerned with the codification of regulations in precise, computable form. From a pragmatic perspective, Computational Law is important as the basis for computer systems capable of doing useful legal calculations, such as compliance checking, legal planning, regulatory analysis, and so forth.”
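To make that definition a bit more concrete, here is a minimal sketch of the compliance-checking flavor of Computational Law. It is my own toy illustration, not anything from Genesereth’s paper, and the rules, thresholds, and field names are entirely hypothetical:

```python
# A toy compliance checker (hypothetical rules, my illustration): a
# regulation codified as precise, computable conditions, evaluated
# mechanically against a fact pattern.
from dataclasses import dataclass

@dataclass
class Filing:
    pages: int
    filed_days_after_notice: int
    fee_paid: bool

def violations(f: Filing) -> list:
    """Return the list of (hypothetical) rules this filing violates."""
    found = []
    if f.pages > 25:
        found.append("Rule 1: brief exceeds the 25-page limit")
    if f.filed_days_after_notice > 30:
        found.append("Rule 2: filed beyond the 30-day window")
    if not f.fee_paid:
        found.append("Rule 3: required filing fee unpaid")
    return found

# A sample fact pattern: one page over the limit, otherwise compliant.
print(violations(Filing(pages=26, filed_days_after_notice=10, fee_paid=True)))
```

The appeal is plain: once a regulation has been codified this precisely, checking a given fact pattern against it becomes a mechanical computation rather than a manual review.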
As a quick taste of the FutureLaw 2020 sessions, I’ll next cover some of the talks that especially raise key AI-related facets, while also urging you to take a further look at the additional videos and podcasts in this stellar collection.
Panel Session: Is Law’s Moat Evaporating?: Implications of Recent NLP Breakthroughs
Panelists:
Laura Safdie, COO and General Counsel, Casetext
Khalid Al-Kofahi, VP of R&D, Thomson Reuters
Daniel Hoadley, Head of Design and Research, ICLR
Anne Tucker, Professor of Law, Georgia State University
This panel covered aspects of advances in Natural Language Processing (NLP), considered one of the tools or capabilities under the overarching umbrella of AI technologies.
On a daily basis, you are bound to find yourself in the midst of NLP, perhaps interacting with Alexa or Siri, or maybe using a chatbot when placing an online order. NLP keeps getting better, becoming seemingly more fluent in interaction and less clunky than prior versions.
For my explanation and analysis of NLP in AI and autonomous systems, see this link here.
During the FutureLaw 2020 session, among various NLP topics covered, the impact of Google’s BERT was discussed.
BERT is an NLP software program that is used by Google as part of their search engine efforts and aids in trying to interpret search queries to best find appropriate search results.
What makes BERT a step forward in NLP is its approach of examining not just the words immediately adjacent to one another in a query but the fuller surrounding context. Simpler NLP algorithms tend to look only at each adjacent word, and often parse an entered query on a strictly left-to-right sequence basis, rather than scanning bidirectionally, both from left-to-right and from right-to-left.
The acronym BERT stands for Bidirectional Encoder Representations from Transformers; the model attempts to bidirectionally identify the nature or representation of the words used in a query, doing so in a context-based manner. Some refer to BERT as a deep search engine, partly because it is built around a deep artificial neural network and uses facets of Machine Learning (ML) and Deep Learning (DL).
Google has made BERT available in an open-source format for those who would like to incorporate the NLP capability into their own apps (see link here; various provisions apply to the open-source code usage).
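As a rough, hands-on illustration of that bidirectional, context-based behavior, here is a minimal sketch using the community-maintained Hugging Face transformers port of BERT (my own example, not something from the conference session; the sample sentences are hypothetical):

```python
# A minimal sketch (assumes: pip install transformers torch) showing that
# BERT predicts a masked word using context from BOTH sides of the blank.
from transformers import pipeline

# The fill-mask pipeline downloads a pretrained BERT model on first use.
fill = pipeline("fill-mask", model="bert-base-uncased")

# The words to the RIGHT of [MASK] steer the prediction, something a
# strictly left-to-right parser could not exploit at that position.
for text in [
    "The judge issued a [MASK] in the case.",
    "The judge issued a [MASK] for the suspect's arrest.",
]:
    for candidate in fill(text, top_k=3):
        print(text, "->", candidate["token_str"], round(candidate["score"], 3))
```

Because the two prompts differ only to the right of the blank, any difference in the predictions is attributable to BERT scanning the context in both directions.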
In terms of the law, NLP continues to be added to LegalTech and LawTech systems, providing readier access to volumes of legal documents that otherwise “lock away” vast amounts of potentially crucial legal information, material that human scrutiny alone could not easily find and surface.
Could the advancement of NLP lead to AI that can autonomously do all of your legal research for you?
The panel tackled this thorny question and wrestled with potential time-frames under which AI might reach such a vaunted goal.
It is generally believed that AI can help the law scale up, promulgating a future of law-at-scale: rather than today’s law presenting intractable barriers or hurdles beyond the means of some or many, AI might make the law more readily accessible to all.
Individual Session: LEX (Law, Education, and Experience) Talks: The Many Faces of Facial Recognition
Speaker: Stephen Caines, Residential Fellow, CodeX
Facial recognition was at first an amazing and exciting AI breakthrough that the general public found enthralling and handy for seemingly innocuous tasks. Want your smartphone to recognize you automatically, without your having to enter a password? Just use a built-in facial recognition capability and merely look at your cell phone to get it to open.
Gradually, a realization has emerged that not all is necessarily uplifting about facial recognition.
For my explanation and analysis of facial recognition in AI and autonomous systems, see this link here.
As noted by expert Stephen Caines in his FutureLaw 2020 session, there are use cases of facial recognition that portend AI-as-good, along with uses that foretell the AI-is-bad side of the AI adoption tradeoffs debates.
He rightfully points out that facial recognition can misidentify people, which, if it happens at the hands of a governmental entity, could lead to profoundly adverse consequences for the public. Furthermore, there is the slippery slope toward a surveillance state, whereby an effort that initially uses facial recognition to nab criminals evolves (or devolves) into a Big Brother that ensnares all of us.
Legislative paths toward shaping how facial recognition will be adopted are multi-faceted right now, including attempts to ban facial recognition in certain contexts and to regulate it in others, currently yielding a patchwork of confounding and possibly conflicting approaches.
In discussing the regulatory efforts, Caines urges that we find ways to use facial recognition for well-needed pursuits, such as protecting those in society who are otherwise vulnerable and could benefit from the added protection it affords. For this to occur fruitfully and with a proper balance, he argues, the onus is on us to take personal responsibility: get engaged in the formation of local laws and maintain ongoing contact with our legislative representatives.
Individual Session: A Conversation About Legal Innovation, AI And Cybersecurity
Speaker: Brigadier General Patrick Huston, Assistant Judge Advocate General, The Pentagon
In an invigorating Q&A format, moderator Dr. Roland Vogl interacts with Brigadier General Patrick Huston about the intertwining facets of legal innovation, AI, and cybersecurity.
For my explanation and analysis of the recently released DoD AI Ethics principles, see this link here.
There is a smattering of online videos today showcasing the use of DeepFake AI, wherein a video of a noted celebrity or political figure seems to show the person saying things that don’t comport with what they actually said, or in which the head or face of such a notable is grafted onto the body of someone else. Currently, you can often detect telltale verbal or visual clues suggesting that some form of audio or video trickery was used.
Unfortunately, the DeepFakes are getting more advanced and gradually it will become extremely hard to discern a true audio or video from one that has been transformed via these latest AI techniques and tools.
Brigadier General Huston includes the foreboding expansion of DeepFakes on the lengthy list of things that lamentably keep him up at night.
Given his long-time membership in the armed services, along with his being a West Point grad and an Army War College alum, some might be surprised to learn that he is also a human rights lawyer. He points out that his military service and his devotion to human rights are not somehow at odds with each other; in fact, they are complementary.
Besides covering a range of topics, such as the ongoing paradigm shift in how the government develops tech (and the role of the commercial sector), and pointing out that there isn’t some form of magical AI pixie dust that will overnight change our systems and what they do, he stridently emphasizes that we need to keep cybersecurity at the forefront of our thinking on these matters.
I especially appreciated his highlighting the role of cybersecurity, since many of those rushing to put AI systems into practice seem unaware of, or unconcerned about, how those systems can be cracked or otherwise subverted into performing evildoer tasks (see my discussion at this link here), regardless of whether the original intent was innocent and grandly beneficial.
Individual Session: LEX (Law, Education, and Experience) Talks: VC Investment In Time Of Crisis
Speaker: David Hornik, Venture Capitalist, August Capital
In this insightful and timely Q&A session with David Hornik, one question on the minds of many is whether Venture Capital (VC) is going to dry up or otherwise shift direction as a result of the pandemic, changing VC practices in the near-term and possibly for the longer term.
It makes a big difference to all those fledgling startups, both those that have already garnered a modicum of VC funding and those fresh startups with a dreamy eye toward landing VC investments.
For my explanation and analysis of startups in AI and autonomy, see this link here.
Per notable venture capitalist David Hornik, in the near-term, startups ought to realize that getting VC funding is going to be a bit more arduous, thus those budding entrepreneurs need to hunker down and try to deal with their existing burn rates, stretching out whatever money they already have in the bank and making do accordingly.
Meanwhile, he points out that he’s continuing to push forward on his VC efforts, as are many VCs, given that much of their work can be performed remotely. Those long days at the office have become long days at home, carrying on the continual series of phone calls and online interactions that are part-and-parcel of finding worthwhile startup investments.
In the legal realm and the use of tech, one particularly innovative avenue would consist of using AI to try to predict the pricing of legal services. As anyone in the legal profession knows, there is an ongoing debate about pricing on an hourly basis versus pricing for the case at hand, with tensions arising whichever way the pricing is ascertained.
Might it be feasible to analyze a large corpus of legal efforts to derive, via say Machine Learning or Deep Learning, patterns or models that could accurately predict the magnitude of the legal effort required for a new case about to be started?
It’s an interesting proposition.
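To make the proposition slightly more tangible, here is a toy Machine Learning sketch of predicting the magnitude of a legal matter from historical case features. It is purely my illustration, not something from the session, and the features and data are synthetic and entirely hypothetical:

```python
# A toy effort-prediction model (synthetic data, hypothetical features).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical features per past matter: parties involved, documents filed,
# and a practice-area complexity score; the target is total billable hours.
n = 500
X = np.column_stack([
    rng.integers(2, 10, n),      # parties involved
    rng.integers(10, 500, n),    # documents filed
    rng.uniform(1.0, 5.0, n),    # complexity score
])
y = 40 + 15 * X[:, 0] + 0.8 * X[:, 1] + 60 * X[:, 2] + rng.normal(0, 25, n)

# Fit on historical matters, then check error on held-out ones.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor().fit(X_tr, y_tr)
print("MAE (hours):", round(mean_absolute_error(y_te, model.predict(X_te)), 1))
```

On real engagements, the hard part would of course be assembling a trustworthy corpus of past matters, not fitting the model.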
Speaking of which, when asked what technologies or AI uses seem to be on his shopping list, Hornik pointed out that he tends to look for entrepreneurs who see transformative or disruptive opportunities, rather than picking or choosing specific tech advances per se.
This is reminiscent of the classic line in the VC/PE tech arena, namely, do you bet on the horse or on the rider (a topic extensively explored in my book on startups, see this link here)? The horse is the underlying tech, while the rider is the entrepreneur. As seasoned investors know, you are likely better off betting on the rider, the entrepreneur with the spunk and vision for the long haul, since the odds are that they’ll find the right opportunities, even if it means pivoting to do so.
AI And Autonomy
Let’s further consider some of those insights gleaned from the aforementioned sessions.
Most AI systems today are at best semi-autonomous, and not yet fully autonomous.
AI advances keep pushing toward being able to have an AI system essentially do work on its own.
For example, we are gradually witnessing the emergence of self-driving cars able to proceed on an autonomous basis: no human driver at the wheel, and no human driver remotely connected to the vehicle to make it operational.
We’re not there yet.
Could we also see AI working autonomously in the legal field?
Well, for years there has been an endless parade of claimed robo-lawyers or robot lawyers, suggesting that an AI system can do whatever a human lawyer can do.
Nope, not the case.
As yet.
Do realize that there is a vast difference between autonomously driving a car and autonomously performing the tasks of a human lawyer.
In the handling of law, one deals with text, lots and lots of text, all of which is wide open to interpretation; indeed, some lament, it is overly open to interpretation.
One of the grand hurdles of AI as a lawyer involves coping with the semantically indeterminate nature of law and the practice of law.
This is a cognitive capability that does not boil down into mechanized rules and procedures, despite the belief by some that all you need to do is write down all the legal rules and voila, you’d have yourself an AI-based autonomous lawyer.
There’s more to it.
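To see why, consider the classic jurisprudence chestnut of a statute banning “vehicles in the park,” rendered here as a deliberately naive toy encoding (my illustration, not from the article):

```python
# "No vehicles in the park," reduced to a lookup table -- a deliberately
# naive, purely mechanized encoding of the statute.
BANNED = {"car", "truck", "motorcycle"}

def violates_statute(thing: str) -> bool:
    # Set membership stands in for the legal judgment of whether something
    # is a "vehicle" within the meaning of the statute.
    return thing in BANNED

# Easy cases come out as expected; hard cases (is an ambulance on an
# emergency call a "vehicle"? a child's tricycle? a war-memorial tank?) get
# an instant False -- not because they were reasoned about, but because the
# table never contemplated them.
for thing in ["car", "ambulance", "tricycle", "war-memorial tank"]:
    print(thing, "->", violates_statute(thing))
```

The easy cases compute fine; the hard cases fail silently, because deciding what counts as a “vehicle” within the statute’s purpose is an act of interpretation, not a table lookup.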
Some in AI have even flagrantly suggested that the law ought to be changed to fit with what AI can currently do, rather than continuing to try and advance AI to do what the law needs.
This sentiment reminds me of the quote by Montesquieu (1748), in De l’Esprit des Lois: “Thus when a man takes on absolute power, he first thinks of simplifying the law. In such a state one begins to be more affected by technicalities than by the freedom of the people, about which one no longer cares at all.”
I don’t think we want to start flattening or stifling law simply to make it more amenable to being implemented in AI.
That’s a bad idea and would undoubtedly have the sour (and dire) outcomes envisioned by Montesquieu.
AI is going to continue to advance and indubitably expand its encroachment into the law, presumably aiding and enabling human lawyers. We’ll need to keep an eye out for the AI that one day tips over from being semi-autonomous into fully autonomous (not necessarily requiring the vaunted singularity).
A fascinating course at Stanford is taking place this term on AI and the Rule of Law, co-taught by Stanford Professor David Engstrom and by Marietje Schaake, the International Policy Director and International Policy Fellow for Stanford’s HAI (Institute for Human-Centered AI), exploring how advances in AI and the like are transforming our world and regulations.
More on their findings in a future piece.
Conclusion
If William Shakespeare were alive today, and if he could revamp his famous line from Henry VI, do you think he might say: the first thing we do, let’s transform all the lawyers into AI?
A snarky person might say yes, while a pragmatist might ask how it might be done and what impacts we would experience.
Poetically, or perhaps prophetically, it could be the future of AI and law.