The No-Nonsense Comprehensive Compelling Case For Why Lawyers Need To Know About AI And The Law
The gauntlet had been thrown.
You see, I was the invited keynote speaker at a major legal industry conference, and my heralded topic was squarely in my wheelhouse, namely Artificial Intelligence (AI) and the law (commonly referred to as AI & Law).
Rather than being entirely heralded, perhaps the more apt phrasing is that the topic was met with a mixture of excitement by some and outright eyebrow-raising skepticism by others. The assembled collection of several hundred law firm partners and associates murmured, subtly questioning whether anything about AI and the law especially needed to be known by them. AI was generally perceived as a pie-in-the-sky topic. On top of that contention, AI combined with the law seemed equally or even further at the outer reaches of what hard-working, nose-to-the-grindstone lawyers would be thinking about on a daily basis.
Into this somewhat arms-folded show-me crucible I ardently ventured.
I’m pleased to say that my remarks were well received and the response was quite positive, including that this was the first time many of them had ever heard a no-nonsense, compelling, and comprehensive case made for why lawyers ought to know about AI and the law. The discussion got those top-notch legal-minded gears turning, and the attendees had plenty to ruminate on.
Let’s see if the same can be said for those of you who might be interested in, or at least intrigued by, the AI & Law topic.
Here we go.
First, a vital facet to know is that AI & Law consists of two intertwined concepts.
Simply stated, they are:
- (1) Law-applied-to-AI: The formulation, enactment, and enforcement of laws as applied to the regulating or governance of Artificial Intelligence in our society
- (2) AI-applied-to-Law: Artificial Intelligence technology devised and applied into the law including AI-based Legal Reasoning (AILR) infused into LegalTech high-tech apps to autonomously or semi-autonomously perform lawyering tasks
I want to make emphatically clear that these are both bona fide and rapidly expanding ways in which AI and the law are being combined.
Many attorneys are only familiar with one or the other of the two perspectives, or, oftentimes, with neither.
Depending upon your lawyering preferences, it is perfectly fine to concentrate on one of the two and not particularly focus on the other. By and large, lawyers who seem less inclined toward an interest in technology are bound to keep their eye on the law as applied to AI, wherein you don’t necessarily need to get your hands into the tech per se. Lawyers who relish the high-tech infusion into the legal realm are more apt to gravitate toward AI as applied to the law.
You are welcome to embrace both aspects and do so with your head held high.
Considerations Entailing The Law As Applied To AI
I’ll first do some meaty unpacking of the law as applied to AI.
When referring to the law as applied to AI, you should immediately be thinking about the emerging litany of new laws seeking to govern the advent of AI systems. Such laws are spreading like wildfire. International laws about AI are coming forth, federal laws too, state laws also, and local laws aplenty, see my ongoing coverage at the link here and the link here, just to name a few.
Why are we suddenly witnessing AI-related laws getting pell-mell rushed into existence?
That’s easily answered.
Just a few years ago, the latest era of AI was being lauded as providing AI For Good. This meant that AI was perceived as being proffered for the good of society. That didn’t last very long. For example, people began to discover that AI-based facial recognition technology could readily generate false indications and perform in seemingly discriminatory fashions (see my coverage at the link here). The same could be said for AI that was being put to use for hiring purposes. On and on the concerns began to mount.
AI For Bad had arrived (well, it was there all along, but now it was noticeable).
How are we to deal with AI For Bad?
The quick response was to identify soft laws consisting of AI Ethics precepts.
These Ethical AI guidelines regarding the devising and fielding of AI are to some degree stopgaps while more arduous efforts grind toward enacting hard laws. I don’t want, though, to leave the impression that AI Ethics and Ethical AI will somehow no longer be needed once an abundance of on-the-books laws exists. Not at all. We are going to continually and always need AI Ethics as a necessary complementary companion to formal laws. Everything going on right now with AI Ethics will have to remain intact and serve as the ultimate binding glue that keeps AI heading in the right direction.
Here’s a helpful keystone list of Ethical AI criteria or characteristics regarding AI systems that I’ve previously closely explored in my column postings:
- Justice & Fairness
- Freedom & Autonomy
In prior columns, I’ve covered the various national and international efforts to craft and enact laws regulating AI, see the link here. I have also in-depth examined the various AI Ethics principles and guidelines that numerous nations have identified and adopted, including for example the United Nations effort such as the UNESCO set of AI Ethics that nearly 200 countries adopted, see the link here.
Those AI Ethics principles are earnestly supposed to be utilized by AI developers, along with those that manage AI development efforts, and even those that ultimately field and perform upkeep on AI systems.
All stakeholders throughout the entire AI life cycle of development and usage are considered within the scope of abiding by the emerging norms of Ethical AI. This is an important highlight since the usual assumption is that “only coders” or those that program the AI are subject to adhering to the AI Ethics notions. It takes a village to devise and field AI, and the entire village has to be versed in and abide by AI Ethics precepts.
Shifting our discussion into the arena of the hard law, a lot is going on.
In the United States, the Algorithmic Accountability Act is meandering along an inch at a time through Congress, while the EU and its proposed Artificial Intelligence Act (AIA) are moving ahead at a seemingly brisker clip. You might also find of interest that a so-called AI Bill of Rights was recently unveiled by the current U.S. White House administration, seeking to define human rights in an age of AI, see my analysis at the link here (note that the pronouncement is non-binding and lacks the force of law).
An everyday attorney might shrug their shoulders and say that all of this energy associated with AI Ethics and the gradually arriving laws about AI is of idle interest. The whole matter is certainly mildly entertaining, but it bears no notable weight on their existing efforts.
Let’s noodle on this.
Suppose you have a client that is a large business and you are providing outside legal services to that client. It turns out that this client has been adopting several AI systems to make its business more effective and efficient. Good for them, you might be thinking, though it is of no concern to you.
The client adopts these AI applications and, after a while of using them, finds itself embroiled in lawsuits by customers claiming that the AI acted in a discriminatory or unduly biased manner. Furthermore, let’s go ahead and envision that several laws at the local, state, and federal levels aimed at AI have gotten enacted. The client is not only facing civil lawsuits but is also now being hounded for potential criminal acts committed via AI adoption and fielding.
Where were you when this was all happening?
Apparently, not caring a whit about AI and whether AI and the law was up and coming.
Shame on you. Worse still, you might get summarily tossed out by the client. Worst of all, you might face claims of legal malpractice for not having timely and astutely advised the firm while it was leaping into the AI morass.
You were asleep at the legal wheel.
Attorneys that are savvy about AI are quick to point out that the American Bar Association (ABA) has already proffered official advice about the topic of AI usage and the practice of law. According to the ABA resolution adopted in August 2019 (yes, you read that correctly, this happened over 3 years ago), here’s the deal about AI:
- “RESOLVED, That the American Bar Association urges courts and lawyers to address the emerging ethical and legal issues related to the usage of artificial intelligence (“AI”) in the practice of law including (1) bias, explainability, and transparency of automated decisions made by AI; (2) ethical and beneficial usage of AI; and (3) controls and oversight of AI and the vendors that provide AI.”
In that sense, a somewhat persuasive argument can be made that lawyers have a duty to be aware of AI and ensure that they legally advise their clients accordingly (for my detailed analysis on this, see the link here).
If you carefully read the aforementioned proclamation, it admittedly is wishy-washy, such that you can rely upon the wording that it merely “urges” you to address the AI issue rather than compelling you to do so. This might come as a sigh of relief.
But it could be a short-lived breather.
First, it could be said that you were put on notice by the ABA about the importance of AI in the law.
Second, if you have completely ignored the guidance, this could be held against you by a clever malpractice attorney trying to go after you on behalf of your soured and deep-pocketed client.
Third, though you would obviously try to counterargue that the asserted precept is non-binding in its lax wording, the chances are that a court might not see it that way, nor would a jury likely be especially sympathetic to your trying to mince words. They might interpret the spirit of the resolution and weigh that over the precise wording of the declaration.
Spin the wheel and see how it goes.
Please also note that though the scenario involved your legal services as outside counsel, keep in mind that internal counsel such as the Chief Legal Officer (CLO), General Counsel (GC), or other corporate counsel is inextricably in this same game too. For those of you providing internal legal counsel, letting AI wantonly be devised or adopted inside your firm is going to come back and haunt you. That is an all but guaranteed foreboding outcome.
The most common reaction to this governance of AI consideration is that it surely must not apply to you since you don’t do legal work for tech companies (let’s assume that to be the case). Sure, if you were steeped in a tech company, all of this might be pertinent. But you are legal counsel for or within a non-tech firm. The company doesn’t make AI. They aren’t techie oriented.
Go back and carefully reread what the scenario consisted of.
Whereas attention to AI initially involved the high-tech firms that devised it, the bigger picture is that nearly every firm of any size or shape is incrementally adopting AI systems. They are users of AI. In turn, their products and services are dependent upon AI.
I’m guessing that you are going to try and argue that since these firms only adopted AI, they are magically off the hook when the AI goes awry. All you need to do is tell anyone knocking on the legal door that they should walk down the street to the company that devised the AI.
Good try, but undoubtedly not going to work.
The odds are that the firm using AI is going to be commingled into the midst of any legal entanglements, especially if it is a big firm and the AI maker is a small firm. Put yourself in the shoes of the attorney that is trying to sue. The big firm has the big bucks. Name the small firm, but go after the big firm. Of course, all sorts of wrangling will happen about licensing and the rest. Nonetheless, the big firm is the juicy target, and no lawyer worth their salt is going to give up on going after it.
Adding fuel to that fire, the chances are that the big firm opted to make customizations or have the AI “trained” on the big firm’s data. Oopsie, you are no longer at arm’s length from the firm that was the AI maker. You brought in the AI and essentially made it your own. Gone is the seeming protection of innocence. Your firm intentionally modified or tailored the AI, and the AI is now part and parcel of your big firm.
Live with it, own it.
There’s more to make your head further spin.
The new laws that are mounting up like airplanes stuck in a flight pattern at a major airport are going to have all kinds of messy legal issues embedded within them. Lawmakers are jamming these legal muddied nightmares through the lawmaking pipeline. Everyone wants to be the first to showcase that they are clamping down on AI For Bad. Glory awaits, along with helping those that are being harmed by foul AI.
Problems, though, lurk in this sausage-making. The legal language is often not well devised. Loopholes exist. Misstated wording prevails. And so on.
You can construe this as either a legal quagmire or a legal jackpot, depending upon which side of the fence you might be on when seeking to interpret these laws. See, for example, my analysis of the vague and troubling New York City (NYC) law on AI Biases Auditing that is scheduled to go live this coming January and will be a humongous problem for thousands upon thousands of businesses (big and small) in New York City, at the link here.
So, we are seeing a surge of laws that are concentrated on AI.
That’s not all. We are going to have seemingly devoted-to-AI laws, and in addition, AI will also be mentioned either profoundly or modestly in laws that are seemingly tangential to AI. If you are thinking that you don’t have to worry about AI & Law because it is an extremely narrow legal niche, I would suggest you recalibrate that myopic thinking.
An additional twist is that we are going to have new laws that incorporate some legal mumbo jumbo about AI, which then will almost certainly be in conflict with existing laws that don’t mention AI. In short, you might be fully comfortable with some existing law, knowing it like the back of your hand. The next thing that happens is some obscure new law comes along and mentions AI. Turns out that the new law intersects with the beloved non-AI law that you know by heart.
Are you ready to contend with how the new laws that include AI are at odds with existing law that doesn’t mention AI?
Imagine this scenario.
One of your favorite clients has been happy with your legal services, year after year. You fully, and without any qualification, earnestly know the laws that pertain to this client. You could recite those laws blindfolded.
A new law gets passed that includes various portions associated with AI. You aren’t paying attention to this new law because the theme or focus of the law doesn’t appear to be directly related to the laws that you believe are relevant to the client.
A clever attorney that has closely studied this new law realizes that the AI element opens the door to your legal realm. All of a sudden, and to your shock and surprise, this other attorney manages to find a client that believes it has been legally harmed by your client. The passageway that connects the dots is this new law, which can be stretched into your normally staid legal arena.
You never saw it coming.
If you believe that your specialty of law is somehow immune to the AI legal wrangling invasion, be careful in making that brash and eventually false assumption. Name a legal specialty and I can readily point out how AI is coming to that town. Criminal law? Of course. Family law? Yes. Tax law? Obviously. Maritime law? Yep. The list keeps going.
Sorry if that seems daunting and a bit overwhelming.
It is undoubtedly something that few attorneys are cognizant of.
An earthquake is coming, but few are preparing for it. Some don’t know that the earthquake is starting to rumble. Others think it won’t impact them. Only when the earthquake hits will the AI legal-oriented light bulbs go on.
Alright, so far, I’ve been hammering away at the hard law aspects. I believe the soft law side is getting jealous. Here’s what I mean.
Imagine this. A client that you have been doing legal services for opts to embrace AI Ethics. Let’s assume for the moment that there aren’t any hard laws enforcing this Ethical AI adoption. The client wanted to do “the right thing” and ergo decided to showcase to the world that they are fully onboard with AI Ethics.
You might not be privy to their having decided to incorporate AI Ethics into their company culture. Or, you might be aware of it, but figured it wasn’t something of interest to you. It is just a soft law approach. Who cares about it? Not you.
A few months down the pike and the company slips up. They only gave lip service to those Ethical AI principles. It was mainly a publicity stunt. Meanwhile, the AI they have brought into the firm is replete with all manner of Ethical AI violations.
That’s not good.
A clever lawyer that wishes to legally tackle your client is likely to be thanking their lucky stars that the AI Ethics embrace took place. If your client had never said anything about Ethical AI, perhaps there is a chance that they could play the gambit of not knowing what they didn’t know. Instead, they made a fuss about knowing about AI Ethics. To some extent, they have held themselves to a higher level of responsibility for what they did with their AI.
Did you bring this to their attention at the get-go, and did you aid them in legally considering the nature of the AI Ethics precepts and how they as a company were going to make use of them?
Darn, wasn’t in your bailiwick.
I could go on and on, but I trust that the theme is coming across loud and clear.
Law firms that are on their toes are starting to realize the bonanza associated with proffering legal services revolving around AI. They are beginning to set up new practices within their firms that have AI & Law as a specialty.
Those within the AI & Law specialty are able to advise existing and prospective clients about their AI adoptions. In addition, the AI-minded partners and associates try to collaborate and inform across all other legal lines or specialties of the firm, seeking to make sure that those other areas do not get blindsided by new AI laws, including tangential ones.
In-house counsel are also slowly adding legal talent versed in AI & Law (mainly in companies that realize they are awash in AI) or assigning eager-to-be-trained lawyers that become a handy internal resource on the AI legal topic. They are likely to still use an external legal resource for larger-scale AI-related matters, though by having in-house talent they’ve got a trusted internal member of the legal team that can serve as the vital linkage between what the company is doing and the likely deeper AI-versed legal capabilities of outside counsel.
I don’t want to suggest that this is happening overnight as though a magic wand has been waved.
Law firms are famous or shall we say notorious for being slow to change.
Additionally, until the marketplace is clamoring for AI & Law talent, most law firms will take a wait-and-see approach. If the market arises, voila, they will probably go whole hog and acquire whatever new talent is needed. It is the classic just-in-time (JIT) strategy of trading in and trading out legal services.
Smaller law firms or solo practices are bound to find themselves in a difficult predicament. On the one hand, if there is a buck to be made in the AI & Law domain, this seems mighty attractive. They can ill afford, though, to take a chance if the market isn’t already percolating sufficiently. You’ve got to make enough to keep things going. Keep to the cash cow of whatever existing legal domain you know, and be ready when the marketplace swings open to vociferous calls for AI & Law legal services.
It is a Catch-22 of sorts.
Legal firms and attorneys are not going to expend energy and attention on AI & Law unless their clients and the marketplace erupt with heady demand for it. The thing is, most clients are unlikely to realize they need AI & Law related legal services. They are barreling ahead on the AI gravy train and the AI bandwagon, completely ignorant of the legal potholes and sheer cliffs up ahead.
Maybe it is the chicken-or-the-egg of which arrives first.
Will it be clients that get into all manner of legal hot water, get rightfully irked about not having gotten legal advice about their AI use, and then angrily confront their legal services providers accordingly? Or will it be savvy attorneys that get in front of the coming tsunami and try to engage their clients in discussions that assuredly are going to be hard to have, since many clients will turn a deaf ear to the AI legal warning bells?
This brings up a related notion that is worthy of consideration.
Be aware that some pundits adamantly argue that we do not need new laws that cover AI and that our existing laws are sufficient. They further forewarn that if we do enact these AI laws, we will be killing the golden goose by clamping down on advances in AI that proffer immense societal advantages. Sometimes this viewpoint is referred to as AI Exceptionalism (meaning that we ought to allow significant latitude about legally pinning down AI because AI offers such exceptionally important benefits to society).
You are decidedly going to hear the same thing being said by many of your clients today.
If you go blindly into intense discussions with your clients and try to explain the legal ramifications of their AI adoption, this is often going to be perceived as the legal side once again putting the brakes on good things. Yikes, you will be hurriedly told, please stop with all those wild and wide-eyed legal nightmare scenarios.
We’ll be fine with the AI, they will insist. It is going to cut our costs, we can reduce our headcount, and we can finally scale up to provide our products and services to more and more consumers or customers. Everyone else is doing likewise. If they are doing it, we have to do so too. If your words of legal caution made any sense, those competitors of ours wouldn’t be adopting AI like it is the best thing since sliced bread.
Been there, done that.
Make sure that you couch your AI legal cautions in as business-favorable terms as you can. I’ve found it useful to point out highly publicized AI slip-ups that have already punished companies via a loss of business reputation and/or costly lawsuits. I guess you could also try the old refrain that we all learned as children, namely, just because everyone else is mindlessly jumping off a cliff, does that mean you should do so too?
You will want to frame the AI discussion by emphasizing that there are right ways and wrong ways to adopt AI. If you leave the impression that you are putting the kibosh on the use of AI in its entirety, well, that’s going to be a party-crushing assertion. It would seem to leave no room for leveraging AI. That’s not the message you are going to find successful to impart.
The better approach is to lay out how legal needs to be an integral part of the AI adoption life cycle. For each stage of AI development or customization or adoption, legal ought to be weighing in. None of that belated after-the-fact stuff whereby legal is asked at the very end to do a final signoff. That’s too late. The horse is already pretty much out of the barn.
The proverbial carrot-and-the-stick next comes to mind.
You could say that the carrot is that the law as applied to AI is going to be a big-bucks moneymaker for attorneys (prompting lots of those cherished billable hours), while the stick is that lawyers not paying attention might find themselves in deep trouble when clients get legally overwhelmed by AI-related legal risks and liabilities (likely leading to a loss of reputational value, and perhaps malpractice claims or similar jurisprudence maladies).
I would like to judiciously add another carrot to the pile, if I may.
Some attorneys are motivated by the novelty of unresolved legal challenges.
These kinds of lawyers aim to explore new legal turf. They want to make a mark on the legal landscape and not just push the same legal minutiae from one legal venue to another. For them, though billable hours are undoubtedly vital, they are amply and in a most heartfelt manner desirous of using their hard-won treasured legal mindedness toward solving especially vexing or challenging legal puzzles.
To those high-adventure legal seekers, I earnestly welcome you to the field of law as applied to AI.
Here’s the deal.
I’ll start with a question that might cause you to do a spit-take. Prepare yourself.
What is the definition of Artificial Intelligence (AI)?
Let me slightly but importantly rephrase the question: What is the legal definition of Artificial Intelligence (AI)?
You are assuredly assuming that the legal world has already nailed down to the most infinitesimal detail the specific definition and meaning of that thing we call “AI” (by the way, some now prefer to say Artificial General Intelligence (AGI), an attempt to bring back the goal of attaining true AI rather than today’s less impressive non-sentient AI).
As I’ve covered extensively, legal definitions of AI are veritably a dime a dozen, see my analysis at the link here.
Think about it this way.
All those shiny new AI laws at the international, federal, state, and local levels are presumably focused on AI, and yet it turns out that there isn’t any fully agreed-upon, standardized, legally solidified definition of AI. Instead, everyone is pretty much making up their own AI definition when composing a piece of law, or borrowing someone else’s definition, even though nobody can concretely say whether it will stand the test of time and endure the rigors of court battles.
Ponder this for a moment.
If you opt to use in a new law an AI definition that is overly narrow, the chances are that all manner of AI that should have fallen within the scope will be able to escape the law (especially as legally argued by AI-savvy attorneys that realize the loophole exists). On the other hand, if the AI definition is overly broad, it could carry into the new law a wide array of software and systems that presumably have no genuine basis for being within the scope of that law. In short, part of the mess of the law applied to AI is that we are going to end up with a patchwork of new laws that cover something nebulously known as “AI” but that will be open to vastly differing legal interpretations.
Attorneys will variously find themselves in these postures related to legal definitional vagaries about AI:
- Defending a client that claims the law doesn’t apply to their alleged “AI” that the firm devised
- Defending a client that claims the law doesn’t apply to their alleged “AI” that the firm is using or adopted
- Representing a client that claims the law does apply to a firm that devised alleged “AI” of a harmful nature
- Representing a client that claims the law does apply to a firm that used or adopted alleged “AI” of a harmful nature
How can you as an attorney do something to aid in coping with this rapidly expanding problem and try to head off the enormously costly and byzantine legal downstream fracas that is decidedly going to ensue?
Help work with lawmakers to figure out the AI definitional particulars. Maybe become a lawmaker, if that’s of interest to you. Become a leader in figuring out AI laws that make sense, are well specified, and are dispute-free (well, there is no such thing), or at least lessen the variance around which disputes will arise, and seek to prevent the legal maelstrom that is coming once all these new AI laws get onto the books and the horse is out of the barn.
How does that strike you as a legally intriguing proposition?
Some of you might be saying that the AI definitional woes aren’t an especially interesting challenge from your perspective. What else is on the docket, you might be wondering. I’ve got a bushel full of them, but let’s for now just take one more example.
Two words: Legal personhood.
Can or should AI be anointed with some form of legal personhood?
You’ve got to admit that seems like a mover and shaker. Suppose we all collectively decide to allow AI to have a semblance of legal personhood. Think about the ramifications. Some countries have already started down that path, such as contending that AI can hold Intellectual Property (IP) rights for AI-produced artifacts such as new inventions, new art, and the like.
Some say this makes abundant sense, others are vociferously irate and argue that the AI of today is not at all worthy of an iota of legal personhood.
It is a huge controversy.
So far, in the United States as related to IP, the general legal landing has been that the IP laws stipulate that the legal owner has to be a human, see my coverage on this at the link here. In a sense, this kind of silently sidesteps the question of AI legal personhood by simply indicating that the only applicable IP owner must be of human legal standing.
Somewhat related to the legal personhood matter is a closely paired topic entailing the Accountability of AI (at times referred to as the Responsibility of AI). Accountability or responsibility refers to trying to legally tie an AI to whom or what should be responsible or accountable when the AI causes harm, see my analysis at the link here.
The easy answer would seem to be that whoever devised the AI gets the blame. A human wrote the code, thus the human shoulders the responsibility or accountability. But suppose the AI was written to self-adjust itself. This might be done for clearly beneficial reasons. At the same time, the affronting AI has veered from its original devised mechanizations. Are we to still hold the programmers accountable?
For example, the programmers wrote the AI, and then someone else that fielded the AI changed the AI.
Now, who holds the accountability? More confounding is that the AI has passed through dozens of human hands, along with doing dozens of self-adjusting convolutions, and the resulting variant of the AI is no longer at all akin to the earlier versions. Does everyone get the blame? Does no one get the blame? Can you even trace down the lineage of the AI to definitively stipulate which humans were involved and what involvement they each had?
Here’s an added kicker, maybe the AI gets the blame.
If we are going to provide some modicum of legal personhood to AI, we presumably can also legally hold the AI accountable. But what does that mean? Will the AI legally possess assets to be grabbed for due compensation by those harmed? Can AI be imprisoned? I’ve covered these topics at the link here and the link here, among many of my writings on these topics.
I hope these various teasers are enough to whet your legal appetite.
The bottom line is that for those of you dreaming of someday being able to legally make a mark, and who are champing at the bit to get immersed in remarkable legal challenges and puzzles, consider the law as applied to AI.
Time now to take a look at the allied brethren, AI as applied to the law.
Considering The Rise Of AI As Applied To The Law
I said at the beginning of this discussion that there are two major ways of interconnecting AI and the law (AI & Law), which again consist of:
- (1) Law-applied-to-AI: The formulation, enactment, and enforcement of laws as applied to the regulating or governance of Artificial Intelligence in our society
- (2) AI-applied-to-Law: Artificial Intelligence technology devised and applied into the law including AI-based Legal Reasoning (AILR) infused into LegalTech high-tech apps to autonomously or semi-autonomously perform lawyering tasks
We’ve already now taken a quick tour of the first item regarding the law as applied to AI.
Whatever you do, please do not ignore or omit the other equally crucial mainstay, namely the use of AI as applied to the law. I assure you that there are as many carrots to be had with the use of AI as applied to the law as there were when discussing the law as applied to AI. Also, and though I don’t want to seem gloomy, there are sticks in the AI-as-applied-to-the-law realm too, so be prepared for that.
AI as applied to the law entails trying to leverage the latest and ongoing advances of AI to aid in performing legal tasks. The aim is to provide technology that can perform somewhat akin to a human lawyer, though for now in a rather constrained manner.
In the main, this is a hard problem.
A very hard problem.
Turns out that devising AI that can dutifully perform legal reasoning is a lot harder than many think it is. You see, the law is replete with semantic ambiguity, and so far all manner of conventional programming and data structure representations have been unable to fully capture the cognition embodied in human lawyers amid lawyering tasks. For avid computer scientists seeking a notable challenge, AI & Law is markedly fertile ground. Lawyers wanting to make a leap forward in the annals of the law would also relish being part of these efforts.
Many insist that this is a moonshot type of problem. I’d agree. The good news is that akin to how AI self-driving cars were earlier presented as a moonshot, and for which great progress has been made, the same can be said for the aims of AI-based legal reasoning. With enough devout attention and resources, this hard problem is computationally tractable and we will end up landing on the moon, as it were.
You might be tempted to say that we will never be able to have AI do what attorneys do. Attorneys can never be “automated” is a frequent refrain.
As you soon will see herein, the debate tends to center on an all-or-nothing contrivance. The assertion is that either AI entirely replaces human lawyers and human lawyering, or it does nothing of the kind and has seemingly no value to add.
Kind of wacky.
Any attorney worth their salt knows about those kinds of wicked arguments. You seek to set up a strawman and then knock it down. Voila, you win the argument. Unless someone else equally versed in argumentation comes along and points out that the strawman was a fake artifice used to prop up a weak argument.
We can use AI in select ways and for particular legal tasks, and we don’t have to exclusively sign up for doing all lawyering tasks in all ways imagined all at once.
That being said, there are of course serious concerns about what the AI is doing and whether it is able to perform sufficiently for the task at hand. You likely know well that one aspect of practicing law in the United States is that the Unauthorized Practice of Law (UPL) is a rather strict condition. Ostensibly, the UPL protects those seeking legal advice by ensuring that only those properly licensed to serve as an attorney will do so. A looming question is whether AI that meanders over into doing legal tasks is violating UPL, a complex topic that I’ve covered at the link here.
I am sure that some of you are waiting to hear whether we have attained AI that completely replaces human attorneys. That’s the usual scuttlebutt heard at law industry conferences. Are we there yet, the question lingers heavily in the air.
Replacing human lawyers is a contentious hot button, for sure.
The insider view is that rather than being preoccupied with whether AI is going to “replace” human attorneys, the more realistic scenario in the near term is that AI-armed attorneys are likely to “replace” attorneys that are unarmed with AI.
I’ll say more about this toward the end of this discussion.
A particularly onerous topic in the realm of AI as applied to the law involves those assorted tales of someday having Robo-lawyers, Robo-judges, and even perhaps Robo-jurors. I am not a fan of Robo-labeling. The problem is that there is so much preconceived baggage associated with the Robo-labeling that you cannot engage someone in an earnest conversation the moment that the Robo-notion gets tossed into the mix. I will hesitantly include the Robo-trio in this discussion and ask that you put aside whatever crazy or wildly preconceived notions you might have.
Thanks for your gracious willingness to look anew at the topic of AI as applied to the law.
The first and most essential keystone consists of getting your head wrapped around an important framework that I’ve become known for regarding my take on AI Legal Reasoning (AILR) in the context of Levels of Autonomy (LoA), see my research paper as per the MIT Computational Law journal at the link here.
I’ll briefly introduce the framework to you momentarily.
The gist is that most lawyers seem to right away fall into the mental trap that if AI is going to be used for legal reasoning purposes then the AI is either all there or not there at all. This is a false dichotomy. It is an assumption that AI is either on or off. Seemingly, you either have AI or you don’t have AI. But that’s not how things work.
I’ll compare this same on-or-off falsehood to the emergence of self-driving cars, allowing you to more clearly see why it is crucial to take into account various Levels of Autonomy.
You almost certainly know that there are numerous efforts underway to devise AI-based self-driving cars, whereby the AI does the driving and there isn’t a need for a human driver at the wheel. There are ongoing tryouts of self-driving cars in several major cities in the U.S. In some instances, a human driver is still at the wheel, serving as a safety or backup driver. Meanwhile, on a limited basis, there are self-driving cars being used without any human backup driver at all.
I’d bet that maybe you vaguely know that there are various levels of AI self-driving cars.
A standard exists that has established a scale of 0 to 5, denoting the Level of Autonomy that might exist.
The bottom of the scale is a zero (Level 0), for which there is essentially no automated driving whatsoever and the driving is entirely manually operated by a human driver. At the other end of the scale is the topmost scoring of a 5 (Level 5). The topmost score is indicative of a fully AI-driven vehicle that has no human driver and that can operate in just about any driving setting that a human driver could handle.
Most people think about the 0 and the 5, but do not consider the important levels in between the bottom and topmost scorings. Today’s cars are generally considered at Level 2, whereby they provide some automation to do driving. The human driver though is still legally considered the driver of the vehicle. Level 3 cars are just emerging and consist of a semi-autonomous driving arrangement. The human driver is still expected to be at the ready when needed, though the bulk of the driving can presumably be undertaken by the AI.
Level 4 is a somewhat confusing category for many. Level 4 is considered a fully autonomously driven car but it will only be able to drive in certain designated conditions, known as the Operational Design Domain (ODD). For example, I might say that I have an AI self-driving car that can only safely drive when in San Francisco, only during the daytime, and not when the weather turns foul such as in heavy rains. That is a particular ODD that scopes or bounds where the AI driving system is able to properly drive the autonomous vehicle.
Now that we’ve covered the basics of Levels of Autonomy, we are ready to take that same approach and see how it also applies to the act of legal reasoning.
Here are my stated Levels of Autonomy for AI-based legal reasoning:
- Level 0: No Automation for AI Legal Reasoning
- Level 1: Simple Assistance Automation for AI Legal Reasoning
- Level 2: Advanced Assistance Automation for AI Legal Reasoning
- Level 3: Semi-Autonomous Automation for AI Legal Reasoning
- Level 4: Domain Autonomous for AI Legal Reasoning
- Level 5: Fully Autonomous for AI Legal Reasoning
Level 0 is considered the no automation level. Legal reasoning and legal tasks are carried out via manual methods and principally occur via paper-based approaches.
Level 1 consists of simple assistance automation for AI legal reasoning. Examples of this category would include the use of everyday computer-based word processing, the use of everyday computer-based spreadsheets, access to online legal documents that are stored and retrieved electronically, and so on.
Level 2 consists of advanced assistance automation for AI legal reasoning. Examples of this category would include the use of query-style rudimentary Natural Language Processing (NLP), simplistic elements of Machine Learning (ML), statistical analysis tools for legal case predictions, etc.
Level 3 consists of semi-autonomous automation for AI legal reasoning. Examples of this category would include the use of advanced Knowledge-Based Systems (KBS) for legal reasoning, the use of Machine Learning and Deep Learning (ML/DL) for legal reasoning, advanced NLP, and so on.
Level 4 consists of domain autonomous computer-based systems for AI legal reasoning. This level reuses the conceptual notion of Operational Design Domains (ODDs), as utilized for self-driving cars, but as applied to the legal domain. Legal domains might be classified by functional areas, such as family law, real estate law, bankruptcy law, environmental law, tax law, etc.
Level 5 consists of fully autonomous computer-based systems for AI legal reasoning. In a sense, Level 5 is the superset of Level 4 in terms of encompassing all possible legal domains. Please realize that this is quite a tall order.
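To make the scale easier to reason about, here is a minimal illustrative sketch of the framework as a data structure. The level names and numbering mirror the labels above; the enum encoding and the helper function are my own hypothetical constructs, not part of the published framework.

```python
from enum import IntEnum

# Hypothetical encoding of the Levels of Autonomy (LoA) for AI Legal
# Reasoning described above; names mirror the framework's labels.
class AILRLevel(IntEnum):
    NO_AUTOMATION = 0        # Level 0: manual, paper-based methods
    SIMPLE_ASSISTANCE = 1    # Level 1: word processing, online documents
    ADVANCED_ASSISTANCE = 2  # Level 2: rudimentary NLP, simplistic ML
    SEMI_AUTONOMOUS = 3      # Level 3: KBS, ML/DL, advanced NLP
    DOMAIN_AUTONOMOUS = 4    # Level 4: fully autonomous within an ODD
    FULLY_AUTONOMOUS = 5     # Level 5: all legal domains (aspirational)

def is_autonomous(level: AILRLevel) -> bool:
    """Levels 4 and 5 are the autonomous tiers; below that, the human
    lawyer remains the responsible legal reasoner."""
    return level >= AILRLevel.DOMAIN_AUTONOMOUS
```

Notice that, just as with self-driving cars, the interesting action is in the middle of the scale rather than at its endpoints.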
Take a moment to contemplate these Levels of Autonomy in the context of your legal services and legal activities.
Most law practices today are using some form of computer-based packages to aid in doing their legal work. The popular naming for these applications is that they are considered in the rubric of LegalTech (you might know that there is MedTech for medical technology, FinTech for financial technology, EdTech for educational technology, etc.).
I almost daily get asked whether LegalTech includes AI or does not include AI.
My answer is that this is not an on-or-off thing. You need to consider the Levels of Autonomy.
Some LegalTech packages have added AI capabilities (or, in some cases, were built with AI at their core), doing so to varying degrees. You need to examine carefully what AI has been added and what it accomplishes. Be cautious of the oft-used brazen and unsubstantiated claims about AI.
One of the most popular arenas consists of using AI for dealing with legal contracts. You might get a Contract Life Cycle Management (CLM) package to track and flow your legal contracts during the legal review process. Adding AI can be useful if the AI, for instance, can identify potential legal flaws or gaffes in draft contracts, or perhaps stitch together proposed legal language for a given contract being crafted.
Another focus of AI comes up when doing eDiscovery. You might be faced with a massive corpus of electronic documents during the discovery process. Various kinds of AI can potentially be used to search through the corpus and attempt to identify relevant items. In addition, the AI might be able to summarize or in other ways analyze the value of the found material to the case at hand.
And so on.
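To make the eDiscovery idea concrete, here is a deliberately simplistic sketch. Plain keyword scoring stands in for the far richer NLP/ML that a real eDiscovery tool would use, and the corpus, document IDs, and function names are all hypothetical, invented for illustration.

```python
# Toy eDiscovery relevance filter: rank documents in a corpus by how
# many query terms appear in each. Real AI-based eDiscovery relies on
# far more sophisticated NLP/ML; this only illustrates the workflow.
def rank_documents(corpus, query_terms):
    scores = []
    for doc_id, text in corpus.items():
        lowered = text.lower()
        score = sum(lowered.count(term.lower()) for term in query_terms)
        if score > 0:
            scores.append((doc_id, score))
    # Highest-scoring documents surface first for attorney review.
    return sorted(scores, key=lambda pair: pair[1], reverse=True)

corpus = {
    "email_001": "The merger agreement was signed last quarter.",
    "memo_114": "Lunch schedule for the quarter.",
    "email_207": "Audit concerns about the merger were raised twice.",
}
hits = rank_documents(corpus, ["merger", "audit"])
```

The point is the triage: automation narrows a massive corpus down to a short, ranked pile that human lawyers then actually read.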
If the AI is only minimally doing any substantive legal reasoning or legal processing, you would classify that as AI being used on my scale at a Level 2. When AI is more robustly devised, LegalTech could be utilizing AI at Level 3. Generally, Level 4 only exists in rather narrow ODDs today. Level 5 is aspirational, and we’ll need to keep grinding away to see if we can attain it (just as the same open question exists in the realm of AI self-driving cars).
So, yes, we do have AI being used in LegalTech today and the AI varies considerably in terms of what it does and what it cannot do.
Meanwhile, strident efforts are underway to push the boundaries of AI and couple this with doing legal tasks and performing various ranges and depths of legal reasoning. In my research lab (see the link here), we are using the advent of Large Language Models (LLMs) and applying this latest tech to the law, along with bringing together an assorted array of AI techniques and technologies to try to synergistically and holistically crack the ongoing enigma of how to achieve heightened levels of autonomy in AI-based legal reasoning.
Returning to the Robo-lawyer moniker (I don’t want to, but I feel that I must for the sake of completeness herein), here’s a matter of consideration:
- What does a “Robo-lawyer” constitute in terms of scope and capability?
The usual reply is that a Robo-lawyer is an AI-based legal reasoning system that can do everything that a human lawyer can do. Period, full stop. If that’s the case, we would then seemingly say that a Robo-lawyer would have to be a Level 5 on the autonomy scale. Of course, this is somewhat argumentative because you can question whether any chosen human lawyer can really fully operate in all domains of the law, which is not particularly realistic. Most lawyers are typically in a particular legal domain or specialty, akin to the Level 4 ODDs.
Let’s look at the Robo-Lawyer in a Level 4 context.
Suppose we devise an AI system that can do “legal reasoning” for legally contending with parking tickets. The AI performs a legal function akin to what a lawyer hired to do the legal wrangling over a conventional parking ticket would perform.
Is this parking ticket lawyering-like AI a Robo-Lawyer or not?
In one sense, yes, since it is taking on a lawyering task albeit within a very narrow domain or subdomain of the law. On the other hand, if you are comparing this to a full-on Level 5 autonomous AI-based lawyering capacity (which we don’t have as yet), you probably would vehemently say this is not a Robo-lawyer.
Do you see why I dislike the Robo-lawyer naming?
It is ambiguous. It carries baggage. It implies things that aren’t necessarily articulated. The catchphrase is just awful and it would be better to dispense with it. Unfortunately, the phrase is catchy and alluring. It has stickiness and won’t readily go away. Sadly, there are ads by some LegalTech vendors that try to exploit the Robo-lawyer tag, and as such, it seems doubtful that the naming will expire or be expunged anytime soon.
Shifting gears, we ought to examine the carrots and the sticks associated with AI as applied to the law.
If a law practice adopts an AI-based LegalTech package that due to the AI is able to dramatically increase the productivity of human lawyers, this would seem to be a good thing. Law firms generally welcome increasing the productivity of their human labor.
A counter-argument is that productivity enhancers will cut into billable hours. Your attorney who would have taken, say, 20 hours to perform a particular legal task is now able to do so in 15 hours. Ouch! That implies that you just gave up 5 hours of billable time. Of course, that’s a narrow view of the situation and omits the business acumen underlying seeking productivity boosters.
For example, you might redeploy those “saved” hours toward garnering new clients or getting other existing client work done sooner (presumably raising satisfaction with those clients). There is also the competitive angle to consider. If your law firm is less productive, the odds are that companies seeking out legal services will get wind of this. Are they to choose the law firm known for being highly efficient or the one that is less so?
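The back-of-the-envelope arithmetic is worth spelling out. In this sketch, the 40-hour billable week and the 20-versus-15-hour task durations are hypothetical figures for illustration, not data from any actual firm:

```python
# Hypothetical productivity tradeoff: a legal task drops from 20 to 15
# hours, but the attorney still bills a full 40-hour week.
weekly_hours = 40
tasks_before = weekly_hours / 20   # matters completed per week, pre-AI
tasks_after = weekly_hours / 15    # matters completed per week, with AI

# Billable hours are unchanged (still 40); the firm simply serves about
# one-third more matters in the same time, which is the competitive
# point, rather than any "lost" 5 hours per task.
increase = tasks_after / tasks_before - 1
```

Under these assumed numbers, nothing is forfeited; the capacity to take on additional matters is what grows.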
You should also go beyond productivity and look at other key metrics such as the quality of the legal advisement.
If an AI-based legal reasoning element sparked legal options or stoked legal argumentation, the human lawyer using such an AI might very well identify higher-quality legal solutions. In a sense, this would be like having a senior partner acting as a mentor. Partners and associates are able to leverage the AI, any time of the day or night, from any location, prompting them to more mindfully consider a fuller range of legal avenues to be pursued.
Those are some of the carrots.
The sticks side is perhaps apparent.
A provocative argument being made is that attorneys that don’t adopt such AI will be shirking their duty to their clients. As attorneys, they are not availing themselves of the best possible means to advise their clients. They are undercutting their sworn obligation as a legal service provider.
You’ve likely also seen the emergence of so-called sandboxes in the legal field. Some believe that this might be the pivotal means to get AI into the legal game. The argument is that if conventional law firms aren’t going to willingly move more avidly toward AI, perhaps an alternative avenue is needed.
It is suggested that lawyers that do not use AI are seemingly going to be at a disadvantage in comparison to those that do. This especially worries some legal industry associations. The concern is that small law firms or solo lawyers will not be able to afford AI legal reasoning tools. They will be eclipsed further by Big Law which has the dough for making such investments.
A counter-argument is that if the cost of the AI legal reasoning tools is low enough, this could make small law firms and solo lawyers more competitive with Big Law. Clients that otherwise felt that the small law firm or solo wouldn’t have adequate legal resources could be perhaps persuaded otherwise via AI augmentation.
One of the biggest revelations involves the supposition that if AI legal reasoning of an autonomous nature could be made widely available and at an affordable price, this would potentially aid the overwhelming problem of A2J (access to justice). Vast scores of people have no ready access to lawyers and lawyering. They do not know how to cope with the legal ecosystem. Imagine what a 24×7 AI-based legal reasoning system might do to democratize the law and enhance the perception of the rule of law.
Those last few points touch upon some of the exciting legal puzzles or legal challenges awaiting those that might opt to engage in the advent of AI as applied to the law.
Some more teasers are that we can use AI to aid in preparing or drafting laws. Here’s the idea. Oftentimes a law is poorly composed. The failings of the legal language produce confusion about the law. This undercuts the rule of law. AI is being put to use to analyze proposed laws and seek to identify where there are gaps and issues that otherwise might not have seemed readily apparent. In addition to detecting these concerns (sometimes referred to as legal smells), the AI can suggest rewording to reduce ambiguity.
Another mind-bender invokes the contention that law is code.
In other words, if you look at most laws, they bear a notable resemblance to computer coding. Efforts are underway to automatically translate laws into pure programming code or its equivalent. This in turn might allow for computational and mathematically oriented proofs associated with our laws.
You can also noodle on the idea of code as law.
If a law is going to be based on some computer algorithm, you could then suggest that the programming code per se represents the law. The code is the law. Envision going to court to argue about a piece of programming code that is considered legally as the verbatim indication of the law.
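As a toy illustration of the code-as-law idea, consider a statute rendered directly as executable logic. The statute, the dollar amounts, and the function are entirely hypothetical, invented purely for illustration:

```python
# Hypothetical parking statute as code: an expired meter carries a $50
# penalty and obstructing a fire hydrant carries an additional $100.
# Under the code-as-law view, these lines ARE the verbatim law, and a
# courtroom dispute would literally be a dispute about this logic.
def parking_fine(meter_expired: bool, blocks_hydrant: bool) -> int:
    fine = 0
    if meter_expired:
        fine += 50    # hypothetical expired-meter penalty
    if blocks_hydrant:
        fine += 100   # hypothetical hydrant-obstruction penalty
    return fine
```

Even this tiny sketch surfaces the legal puzzles: who is accountable if the thresholds are coded wrong, and how would a judge construe an off-by-one bug that the legislature never intended?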
Ruminate on these enticing questions:
- By what means can we best devise AI that will more fully encapsulate legal reasoning?
- What type of online interaction between human lawyers and AI legal reasoning systems will be most conducive to usage?
- AI legal reasoning requires XAI (explainable AI), but how can Machine Learning and Deep Learning provide suitable explainability?
- How can we make AI legal reasoning available to those without sufficient A2J?
- Can we figure out the UPL issue when it comes to AI-based legal reasoning?
- Will our courts also adopt AI and if so how will this impact adjudication across the board?
- And so on
Some lawyers consider themselves to be entrepreneurs at heart.
There are lots of startups in the LegalTech space, and of those, there are many that are trying to invent or apply AI accordingly to legal tasks. You might have seen the recent crop of job ads asking for Legal Knowledge Engineers or simply listed as Legal Engineers.
Some lawyers are techies that come into this space.
Some lawyers aren’t techies and yet enter into the AI as applied to the law arena, providing legal domain expertise to aid in developing AI legal reasoning systems. I mention this because the assumption by many lawyers and law students is that they must be a techie to get into this realm. That’s not a requirement per se. The willingness to explore the law in ways you hadn’t heretofore examined, while working steadfastly and collaboratively with techies, is the rising path.
You made it all the way to the end of this treatise.
Good for you.
I sincerely hope that you will have found this discussion informative, perhaps even modestly inspirational.
Why aren’t lawyers dashing toward AI & Law?
Very few know what you now know.
The bulk of attorneys have little or no idea of what AI & Law consists of. It is a topic rarely covered in law schools, though I’ve predicted that once the AI & Law tsunami hits, we’ll likely see a big change in where AI & Law gets covered.
I also mentioned earlier that there is a chicken-or-egg aspect to AI & Law. Clients do not yet know that AI & Law is going to be a momentous legal issue. Law firms tend to wait until clients start hollering for legal services. Right now, only some law firms see the writing on the wall. Likewise, only some clients see the writing on the wall. Few generally are aware of the earthquake, while fewer still are getting prepared for it.
Disruption is coming.
I realize the word “disruption” is bandied around quite a bit these days. Despite that overuse, I do think the legal field is on the cusp of a disruption associated with AI. It won’t be overnight. You won’t wake up one morning and during the night a radical transformation inexplicably happened. I’d say it is going to be a steady march ahead. One foot at a time, one step at a time.
A journey of many miles begins with a single step.
Per the famous remark made by William Randolph Hearst: “You must keep your mind on the objective, not on the obstacle.”