By Joshua Rich --
Following in the footsteps of the U.S. Patent and Trademark Office[1] and the state bars of California,[2] Florida,[3] New Jersey,[4] New York,[5] and Pennsylvania,[6] the American Bar Association has weighed in on attorneys' ethical use of Generative AI (GAI) tools with a formal ethics opinion entitled "Generative Artificial Intelligence Tools."[7] The ABA opinion highlights many of the same ethical rules as the previous guidance, opinions, and reports, but from a different perspective. As a result, it identifies issues and proposes ethical requirements that differ slightly from the others. And while the ABA's suggested steps for discharging ethical obligations are not binding on any attorney, the concerns are universal and the suggested steps are likely to be persuasive if complications arise.
Unlike the other guidance, the ABA formal opinion is limited to ethical considerations arising out of generative AI. The ABA recognizes that lawyers are already using AI in many contexts, ranging from legal research to technology-assisted document review to contract analytics. There are ethical issues that arise in those other contexts, but they are different from those that relate to GAI. Further, the opinion recognizes that the guidance would need to be updated as technology develops, "anticipat[ing] that [the ABA] Committee and state and local bar association ethics committees will likely offer updated guidance on professional conduct issues relevant to specific GAI tools as they develop."[8]
The ABA opinion starts where the Model Rules do, with the duty of competence.[9] The use of GAI implicates the duty of competence in three ways: knowing what GAI tools are available, understanding the capabilities and limitations of any GAI tool the lawyer chooses to use, and ensuring that the tool's use does not produce inaccurate information.
On the first issue, knowledge of available GAI tools, the opinion counsels that:
[E]ven in the absence of an expectation for lawyers to use GAI tools as a matter of course, lawyers should become aware of the GAI tools relevant to their work so that they can make an informed decision, as a matter of professional judgment, whether to avail themselves of these tools or to conduct their work by other means. . . . Ultimately, any informed decision about whether to employ a GAI tool must consider the client's interests and objectives.[10]
That is, lawyers cannot remain competent by simply ignoring the possible use of GAI tools; they must learn whether such a tool is reasonably necessary for their client's work.
Once lawyers decide to use a GAI tool, they must understand the tool well enough to explain it to clients, so that the clients can make an informed decision about whether the tool should be used for their project.
This means that lawyers should either acquire a reasonable understanding of the benefits and risks of the GAI tools that they employ in their practices or draw on the expertise of others who can provide guidance about the relevant GAI tool's capabilities and limitations. This is not a static undertaking. Given the fast-paced evolution of GAI tools, technological competence presupposes that lawyers remain vigilant about the tools' benefits and risks. Although there is no single right way to keep up with GAI developments, lawyers should consider reading about GAI tools targeted at the legal profession, attending relevant continuing legal education programs, and, as noted above, consulting others who are proficient in GAI technology.[11]
For most lawyers, this means they will have to continually ensure they understand the benefits and risks of not only the technology they are currently using, but also updates and new tools. GAI tools will therefore add to the educational burden borne by lawyers.
Finally, the ABA's opinion highlights one of the most notorious risks of using a GAI tool: inaccurate responses, such as "hallucinations," that lead to incorrect legal advice or fabricated citations submitted to courts. The formal opinion asserts that lawyers must engage in "an appropriate degree of independent verification or review of [the] output," with the level of review dependent on the tool and the task being performed.[12] For a submission to a court or critical advice, careful review of every citation and statement would be in order;[13] for basic letters or other less important work, less effort might be needed.
The opinion next addresses the duty of confidentiality, perhaps the most acute concern for most lawyers using GAI tools. All of the previous guidance identifies the risk of submitting a client's confidential information in prompts, which may run afoul of a lawyer's duty to avoid disclosure of such information. That is, client information included in a GAI tool prompt passes into the hands of the GAI tool's model, may be used to train the model, and may be disclosed to others. But the opinion emphasizes another ethical risk unique to law firms: potential disclosure or use within the firm of one client's information for the benefit of another. The opinion identifies factors that lawyers must consider in both situations, as well as how to discharge the related ethical duties.
As a general matter, a lawyer must first determine if client information will be adequately protected from disclosure. "In considering whether information relating to any representation is adequately protected, lawyers must assess the likelihood of disclosure and unauthorized access, the sensitivity of the information, the difficulty of implementing safeguards, and the extent to which safeguards negatively impact the lawyer's ability to represent the client."[14] Those considerations intersect with the duty of competence, as a lawyer must understand the GAI tool and the associated issues in order to evaluate them.
The novel concern addressed in the opinion is intra-firm disclosure of client confidences through the use of a GAI tool. The opinion sees no way to avoid such disclosure (as long as the firm uses the tool on more than one client's projects) and, instead, suggests that lawyers obtain informed consent for such potential disclosure from clients:
[A GAI tool] may disclose information relating to the representation to persons in the firm (1) who either are prohibited from access to said information because of an ethical wall or (2) who could inadvertently use the information from one client to help another client, not understanding that the lawyer is revealing client confidences. Accordingly, because many of today's self-learning GAI tools are designed so that their output could lead directly or indirectly to the disclosure of information relating to the representation of a client, a client's informed consent is required prior to inputting information relating to the representation into such a GAI tool.[15]
Of course, if client confidences are segregated within a GAI tool, the risk of disclosure dissipates; using the tool in that way, however, severely limits the benefits of the tool. More likely, the lawyer will have to obtain the client's informed consent to a potential disclosure through use of the GAI tool. In either circumstance, however, under the duty to reasonably consult with the client, "clients would need to be informed in advance, and to give informed consent, if the lawyer proposes to input information relating to the representation into the GAI tool."[16]
For the consent to be informed, the client must have the lawyer's best judgment about why the GAI tool is being used; the extent of, and specific information about, the risk, including particulars about the kinds of client information that will be disclosed and the ways in which others might use the information against the client's interests; and a clear explanation of the GAI tool's benefits to the representation. Part of informed consent requires the lawyer to explain the extent of the risk that later users or beneficiaries of the GAI tool will have access to information relating to the representation. Merely adding general, boilerplate provisions to engagement letters purporting to authorize the lawyer to use GAI is not sufficient to obtain informed consent.[17]
In order to provide the thorough explanation necessary to obtain informed consent, a lawyer will have to become educated about the specific GAI tool, at least in terms of the legal obligations related to access to information:
As a baseline, all lawyers should read and understand the Terms of Use, privacy policy, and related contractual terms and policies of any GAI tool they use to learn who has access to the information that the lawyer inputs into the tool or consult with a colleague or external expert who has read and analyzed those terms and policies. Lawyers may need to consult with IT professionals or cyber security experts to fully understand these terms and policies as well as the manner in which GAI tools utilize information.[18]
This required self-education is not unlike that which a lawyer must undertake in other situations where they entrust data to supervised personnel or third parties. They must also establish clear policies for permissible use of GAI and take reasonable steps to ensure compliance with those policies (and all professional obligations) by subordinate lawyers, other firm personnel, and third parties.[19]
Finally, the opinion raises potential effects that GAI tools may have on the reasonableness of fees charged. Lawyers charging an hourly rate must bill only their actual time worked; they cannot "value bill" for the efficiency realized through use of a GAI tool. Even if the lawyer charges a flat fee, if the use of a GAI tool avoids all or nearly all work, the fee may be unreasonable. And charging the client for the use of a GAI tool may not be ethical. "To the extent a particular tool or service functions similarly to equipping and maintaining a legal practice, a lawyer should consider its cost to be overhead and not charge the client for its cost absent a contrary disclosure to the client in advance."[20] And a lawyer cannot charge for all of the education needed to learn about the GAI tool and other issues necessary to obtain informed consent from the client.
In short, the ABA formal opinion points out the many potential ethical pitfalls that arise out of the use of a GAI tool. But the opinion also provides some guidance on how to avoid those pitfalls. As tools develop and become better integrated into law firm practice, the requirements set forth in the opinion should become less burdensome and easier to meet.
[1] "Guidance on Use of Artificial Intelligence-Based Tools in Practice Before the United States Patent and Trademark Office," 89 Fed. Reg. 25,609 (Apr. 11, 2024). An outstanding discussion of the PTO's Guidance is available at https://www.patentdocs.org/2024/04/the-usptos-guidance-on-use-of-ai-based-tools-in-practice.html.
[2] State Bar of Cal. Standing Comm. on Prof'l Resp. & Conduct, "Practical Guidance for the Use of Generative Artificial Intelligence in the Practice of Law" (2023), available at https://www.calbar.ca.gov/Portals/0/documents/ethics/Generative-AI-Practical-Guidance.pdf.
[3] Fla. State Bar Ass'n, Prof'l Ethics Comm., Op. 24-1 (Jan. 19, 2024), available at https://www.floridabar.org/etopinions/opinion-24-1/.
[4] NJ S. Ct. Comm. on AI & the Cts., "Preliminary Guidelines on New Jersey Lawyers' Use of Artificial Intelligence" (Jan. 24, 2024), available at https://www.njcourts.gov/sites/default/files/notices/2024/01/n240125a.pdf?cb=aac0e368.
[5] NY State Bar Ass'n Task Force on Artificial Intelligence, "Report and Recommendations of the New York State Bar Association Task Force on Artificial Intelligence" (Apr. 6, 2024), available at https://nysba.org/app/uploads/2022/03/2024-April-Report-and-Recommendations-of-the-Task-Force-on-Artificial-Intelligence.pdf.
[6] Pa. State Bar Ass'n Comm. on Legal Ethics & Prof'l Resp. & Philadelphia Bar Ass'n Prof'l Guidance Comm., Joint Formal Op. 2024-200 "Ethical Issues Regarding the Use of Artificial Intelligence," (May 22, 2024), available at https://www.pabar.org/Members/catalogs/Ethics%20Opinions/Formal/Joint%20Formal%20Opinion%202024-200.pdf.
[7] Am. Bar Ass'n Standing Comm. on Ethics & Prof'l Resp., "Generative Artificial Intelligence Tools" Formal Op. 512 (July 29, 2024), available at https://www.americanbar.org/content/dam/aba/administrative/professional_responsibility/ethics-opinions/aba-formal-opinion-512.pdf.
[8] ABA Formal Op. 512, p. 2.
[9] See Model Rules R. 1.1.
[10] ABA Formal Op. 512, p. 5.
[11] ABA Formal Op. 512, p. 3.
[12] ABA Formal Op. 512, p. 4.
[13] As the opinion points out, submission to a tribunal of information provided by a GAI tool also implicates Model Rules 3.1, 3.3, and 8.4(c). ABA Formal Op. 512, pp. 9-10. The same duties would apply with regard to submissions to the USPTO.
[14] ABA Formal Op. 512, p. 6.
[15] ABA Formal Op. 512, p. 7.
[16] ABA Formal Op. 512, p. 8 (citing Model Rules of Prof'l Conduct R. 1.4). Even if no client information will be input into the tool, the lawyer must inform the client that a GAI tool is being used if the client asks.
[17] ABA Formal Op. 512, p. 7.
[18] ABA Formal Op. 512, p. 7.
[19] ABA Formal Op. 512, pp. 10-11 (citing Model Rules of Prof'l Conduct R. 5.1, 5.3).
[20] ABA Formal Op. 512, p. 13.