The Journal of Things We Like (Lots)
Andrew M. Perlman, The Implications of ChatGPT for Legal Services and Society, The Practice Magazine, March/April 2023 issue (2022).

Andrew Perlman has made legal technology one of the themes of his successful deanship at Suffolk University Law School. He has also taken national leadership roles on law and technology issues, as the chief reporter of the ABA’s Commission on Ethics 20/20, with the charge of modernizing the Model Rules in light of globalization and digital technology, and as vice chair of the ABA Commission on the Future of Legal Services. He was selected as the inaugural chair of the governing council of the ABA’s Center for Innovation. Dean Perlman is therefore ideally positioned . . . to be replaced by a robot.

Many law professors have been playing around with ChatGPT, a chatbot released in November 2022. The developer, OpenAI, an artificial intelligence research company, describes the chatbot on its website: “We’ve trained a model called ChatGPT which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests.” One tweet, reproduced in an article on the technology, showed the output in response to the prompt, “Write a biblical verse in the style of the King James Bible explaining how to remove a peanut butter sandwich from a VCR.” I doubt that most humans – even a pretty good humor writer – could have done better. Anyone who follows law professors on Twitter has seen academics having a field day inputting their law school exams into ChatGPT or asking hard questions about technical areas of law to try to stump the system. In most cases, the chatbot has performed astonishingly well, providing not only technically correct answers but also demonstrating facility with style and rhetoric. More ominously, a technology company CEO and a legal scholar had ChatGPT take the multiple-choice portion of the bar exam, the MBE, using the study questions published by the National Conference of Bar Examiners. The chatbot was correct on 50.3% of the questions, as compared with an average of 68% for human test-takers, and would have earned passing scores on the Torts and Evidence portions of the exam.

Perlman may have started out just playing around, but he ended up writing a serious article about ChatGPT and its impact on the practice of law. Actually, I should say, ChatGPT wrote a serious article, because all Perlman did was feed the system prompts, some of which the chatbot used to write portions of the paper (“Describe potential use cases for GPT-3 in the legal industry,” p. 4), others of which provided an opportunity for the system to demonstrate its capabilities (“Draft a brief to the United States Supreme Court on why its decision on same-sex marriage should not be overturned,” p. 5, or “I have a disagreement with my child’s school district in Massachusetts regarding the creation of an IEP. What should I do?” p. 11). But Perlman chose good questions to ask the chatbot and got remarkably sophisticated answers.

When I wrote something a few years ago for a symposium on lawyering in the age of artificial intelligence, the consensus seemed to be that AI systems were great at routine tasks like e-discovery and compliance reviews of thousands of contracts, but AI was a long way from threatening the core competencies that comprise the practice of law. These competencies include in-court appearances, negotiation, fact investigation, strategic decisionmaking, empathetic understanding of clients’ legal and non-legal needs, and the application of existing knowledge to new or rapidly evolving areas of law. Nevertheless, I tried to take seriously the potential of AI and to determine whether there is a boundary separating machine and human intelligence insofar as they bear on the practice of law. Josh Davis is in the final stages of a book taking a sophisticated theoretical approach to this question, but also landing on the conclusion that there are some tasks, such as adjudicating legal disputes, that cannot be automated. Davis and I both contend that there is an irreducible element of normative judgment in legal decisionmaking that cannot be modeled, at least with existing technologies. Davis ties this to specific views about the nature of consciousness within the philosophy of mind as well as a jurisprudential thesis about the relationship between law and morality; I tried to connect it with a claim about agency and reason-giving.

We have been hearing predictions for several decades about the impending obsolescence of lawyers. The threat is not just from technology but other trends such as the in-house counsel movement and the unbundling and outsourcing of routine legal services. Nevertheless, the traditionally organized legal profession, including the so-called BigLaw sector, has proven remarkably resilient in the face of these disruptive forces. Of course, this is what Kodak executives said about the threat of digital photography. The legal profession may be vulnerable to an industry-killing technological advance, and sophisticated AI-enabled natural-language database searching may be one of those advances.

Lawyers will be tempted to play defense by using unauthorized practice of law (UPL) rules against ChatGPT. In an influential opinion, a district court in Missouri concluded that software that prepared a legal document in response to answers given by a user to a series of questions was engaged in the unauthorized practice of law. There is a risk that the aggressive use of UPL rules may draw attention from the Federal Trade Commission, particularly if there are ways of regulating directly what UPL rules seek to do indirectly. As Perlman – or, I should say, the chatbot – points out, one of the risks inherent in using AI tools to provide legal advice is that the system may not be sufficiently reliable or as alert to nuances in the law as a human lawyer would be. (Pp. 10-11.) This concern may be addressed, however, using traditional negligence and products liability principles. In theory there is no reason the ability of an AI system to provide legal advice could not be assessed in the same way a court or jury would evaluate the performance of a Tesla self-driving car – if the automation does as well as a human at a particular task, then the AI system has satisfied the duty of reasonable care. If ChatGPT is just as good as a human lawyer at walking a parent through a dispute with a school district over the creation of an Individualized Education Program (one of Perlman’s prompts), the legal profession’s monopoly on the provision of these services becomes much harder to justify.

Like lawyers, ChatGPT can have self-congratulatory tendencies. Perlman asked the system to write a poem on how it will change legal services. Yeats it ain’t, but the middle two stanzas illustrate the promises of AI:

Gone are the days of endless research and tedious work
With ChatGPT at our side, answers will no longer elude
It will help us find solutions and avoid pitfalls and quirks
And make the practice of law more efficient and smooth

No case will be too complex or too hard
ChatGPT will guide us through with ease
It will be a trusted companion and guard
Helping us to provide the best legal services with expertise

(P. 14.) I have long been skeptical of claims that technology is going to fundamentally change the market for legal services, but with the release of the most recent version of ChatGPT I’m no longer so sure.

Cite as: W. Bradley Wendel, The ChatBots are Coming!, JOTWELL (May 12, 2023) (reviewing Andrew M. Perlman, The Implications of ChatGPT for Legal Services and Society, The Practice Magazine, March/April 2023 issue (2022)), https://legalpro.jotwell.com/the-chatbots-are-coming/.