Mandatory continuing legal education (MCLE) is one of the most ubiquitous regulatory measures aimed at ensuring continuing lawyer competence.1 It is also one of the most critiqued. Over the past several decades, many lawyers and academics have argued that MCLE should be reformed, if not abolished. While MCLE requirements have so far largely withstood these attacks, lawyer regulators have recently seemed to develop a new appetite for doing things differently. A recent international survey of approaches to lawyer continuing competence observed, “lawyer regulators around the world have sought to improve the ability of CPD [continuing professional development] to improve competence in a number of different ways, increasingly moving away from a generic durational requirement.”2
In light of these developments, Rima Sirota’s article, Can Continuing Legal Education Pass the Test? Empirical Lessons from the Medical World, is a timely contribution to the literature on lawyer regulation. Speaking of the American context, Sirota argues that “the mandatory CLE system in its current state is indefensible” given its high costs and the lack of empirical evidence suggesting that CLE leads to improved lawyer competence. (P. 3.) While others have previously made this general point,3 Sirota’s contribution stands out for her call to the legal profession “to take up the empirical challenge” of measuring CLE outcomes and her provision of a roadmap on how this could be done. (P. 45.) Moreover, the comparative approach taken by Sirota, which looks to the medical profession’s approach to continuing education for potential insights, provides a fresh take on long-standing concerns about MCLE in the legal profession.
The article begins by setting out how MCLE is “an enormously expensive undertaking, [that] has been subject to virtually no empirical study since its inception and remains mired in a pedagogical model that has been largely discredited by adult learning experts.” (P. 3.) This critique is likely familiar to many readers—as declared in a recent law blog, “CLE being terrible is common knowledge.” The clarity in Sirota’s writing, however, as well as her extensive sourcing, results in a usefully concise summary of the arguments against MCLE and the studies about MCLE’s effectiveness conducted to date. For those outside of the United States who are considering the value of continuing MCLE requirements in their jurisdictions, this synopsis may be particularly useful in informing their deliberations.
Noting that “no continuing education field has received more empirical attention than medicine,” Sirota then describes the ways that continuing medical education (CME) providers in the United States have robustly incorporated effective adult learning practices into their CME content. (P. 17.) She notes, for example, “such diverse activities as simulations, reflection-based exercises, case-based self-assessments, reading modules, and opportunities to learn alongside nurses, social workers, pharmacists, and other non-physician members of patient care teams.” (P. 21.) Beyond the diversity of offerings, Sirota highlights how CME content has evolved deliberately and in response “to decades of scholarship that examines every facet of physicians’ career-long learning.” (P. 17.)
Drawing on this rich body of CME scholarship, Sirota then sets out a new roadmap for research on the impact of CLE on lawyer competence. In her roadmap, she proposes a focus on one specific area of lawyer competence: client communication. More particularly, Sirota suggests that researchers study “CLE’s potential to impact lawyers’ ability to communicate effectively with their clients using client-centered techniques.” (Pp. 26-27.) While there is existing work by legal empiricists on the value of client-centered communication skills,4 and many CLE courses addressing client communication, Sirota notes that “no headway has been made regarding CLE’s ability to teach those skills.” (P. 36.) In other words, “[m]issing…are empirical studies exploring how desired communication skills can best be taught in a CLE format and whether such teaching can result in real-world impacts to client experiences, client outcomes, and complaints against lawyers.” (P. 38.) Such empirical studies have, however, taken place in the medical realm, as Sirota details, and can be a fruitful reference point for CLE researchers.
Moreover, Sirota suggests that CLE researchers adopt a methodological approach commonly used in CME effectiveness research: the “Kirkpatrick Model.” This model measures effectiveness at four different levels: “(1) the extent to which the learner feels satisfied with the CME program; (2) the extent to which the learner gains and retains knowledge from the program, (3) the extent to which the learner’s practice improves, and (4) the extent to which the learner’s patients experience improved health outcomes.” (P. 22.)
Sirota acknowledges certain challenges with applying the Kirkpatrick Model to study whether CLE programming can effectively teach client-centered communication skills. Although it would be relatively straightforward to measure the first level of the model (lawyer satisfaction with such courses), studying the other levels would be harder. Lawyers may be reluctant to spend the time, and subject themselves to the necessary scrutiny, required to evaluate how CLE courses have impacted their knowledge of client-centered communication skills (Level 2). As strategies to address this potential barrier, Sirota notes that studies involving doctors have sometimes used extra CME credits, fee waivers, and options for personal feedback as incentives and that “such enticements should be attractive to lawyers as well.” (P. 40.)
Additionally, Sirota notes that concerns relating to client confidentiality would likely arise in trying to assess whether CLE has improved how lawyers communicate with their clients or has led to better client outcomes (Levels 3 and 4). However, she tackles such concerns directly and, while not dismissing their seriousness, she is generally optimistic that they can be adequately addressed. On client confidentiality, for example, she points out that legal empiricists have grappled with this issue in other contexts and have developed anonymization methods that appear to work well.
Ultimately, Sirota sees two choices for the future of MCLE: the legal profession must either (1) “take up the empirical challenge” and provide evidence of MCLE’s effectiveness sufficient to support its continuation; or (2) end MCLE altogether. Her article focuses on how to pursue the first option.
However, one might wonder whether there is a third option that is not acknowledged by Sirota: can MCLE be justified in the absence of empirical evidence “proving” its worth? Stated otherwise, what if the reason that we lack empirical data on CLE effectiveness is because the topic is simply not well suited to precise empirical measurement?
Evaluating the quality of legal services is notoriously difficult. Attempting to demonstrate a causal link between specific educational initiatives and changes in quality of service generates even more challenges. To date, one way this difficulty has been addressed is by using post-CLE changes in client complaints and malpractice claims as a proxy for changes in legal service quality. But this is a highly imperfect measure. Numbers of claims and complaints against lawyers can fluctuate for various interrelated reasons. Moreover, measuring claims and complaints will tend to capture only those who have fallen below minimum standards of practice and will not reflect the full range of potential CLE impacts. While CLE can assist lawyers in not falling beneath the floor of their professional obligations, it also can help them reach higher. It has the potential to make good legal professionals even better, with tangible benefits accruing to their clients. These types of improvements are not well captured by counting complaints and claims from year to year.
To be sure, the Kirkpatrick Model provides at least a partial answer to such concerns: even if it is not possible to measure Level 4 outcomes, there are three other aspects of CLE effectiveness that can and should be examined. In pointing readers to the Kirkpatrick Model and a multi-leveled way of evaluating CLE effectiveness, Sirota does a great service by presenting a ready-made framework to introduce more nuance into this area of empirical research.
Perhaps also, however, we need to acknowledge that not all beneficial consequences of regulatory measures, even potentially significant ones, can be “proved” through rigorous study. Such recognition arguably stands in tension with the thoughtful, and in my view, needed, calls for lawyer regulators to take up “evidence-based regulation.” But commitments to evidence-based regulation need not be abandoned; rather, they can be pursued through evaluation models that incorporate both empirical and normative measures, along with nuanced views of the limitations and benefits of empirical research. No doubt, questions about empirical evidence for regulatory measures should be raised and such evidence gathered where obtainable and appropriate. At the same time, however, we need to be clear-eyed about the limitations of such evidence and ask good questions about what the available data is actually telling us (or not telling us). Moreover, in evaluating lawyer regulation, we should also be careful not to dismiss reasoned appeals to considerations that are not readily reduced to data points, such as, for example, enhancing professionalism, building community, boosting public confidence in the profession, and fostering inclusion and innovation. While such values can be, and have been, shields for “bad” regulation with discriminatory or protectionist ends,5 they can also justifiably be part of the reason that lawyer regulators adopt policies and approaches to issues. Empirical study can yield important insights about lawyer regulation, but it should not necessarily be taken to be providing the full picture.
None of these final ruminations are meant to detract from the bottom line of this Jot: in offering a timely, fresh, and pragmatic intervention into the MCLE debate, Rima Sirota’s article is well worth reading and reflecting upon.
- See, e.g., Hook Tangaza, International Approaches to Ongoing Competence: A report for the LSB (March 2021) (stating, “The use of mandatory CLE as a tool to promote competence began in the US and gained traction in the 1970s and 1980s before spreading to other parts of the world from the late 1980s onwards. From New South Wales in 1987, to Hong Kong in 1991 and the rollout of CPD to cover all solicitors in England and Wales in 1998, mandatory CPD has now become widespread.”).
- Id. at 21.
- Sirota highlights and draws upon David D. Schein, Mandatory Continuing Legal Education: Productive or Just PR?, 33 Geo. J. Legal Ethics 301, 322-38 (2020) and Deborah L. Rhode & Lucy Ricca, Revisiting MCLE: Is Compulsory Passive Learning Building Better Lawyers?, 22 Prof. Law. 2, 3 (2014).
- Sirota reviews the literature in this area, including Christopher R. Trudeau, The Public Speaks: An Empirical Study of Legal Communication, 14 Scribes J. Legal Writing 121, 140-41 (2012); James M. Anderson et al., The Effects of Holistic Defense on Criminal Justice Outcomes, 132 Harv. L. Rev. 819 (2019); and Daniel Newman, Still Standing Accused: Addressing the Gap Between Work and Talk in Firms of Criminal Defence Lawyers, 19 Int’l J. Legal Prof. 3 (2012).
- Deborah Rhode, for example, has a considerable body of work raising such concerns in relation to the regulation of the unauthorized practice of law and moral character requirements for admission to the bar (for recent writings on these topics, see Deborah L. Rhode & Lucy Buford Ricca, Protecting the Profession or the Public? Rethinking Unauthorized-Practice Enforcement, 82 Fordham L. Rev. 2587 (2014) and Deborah L. Rhode, Virtue and the Law: The Good Moral Character Requirement in Occupational Licensing, Bar Regulation, and Immigration Proceedings, 43 Law & Soc. Inquiry 1027 (2018)).