By Michael Borella --
The impact of generative artificial intelligence (AI) is, unsurprisingly, significant in the field of education, with some teachers and professors responding by instituting oral examinations, requiring handwritten essays, or mandating that first drafts of written material be composed only on "locked down" computers with no access to AI tools. But as the education system (to take just one example) wrestles with the implications of these tools, so does the legal community.
In a recent case that has rocketed into infamy, two lawyers filed a brief in the Southern District of New York that had been written at least in part by the large language model (LLM) ChatGPT.[1] After opposing counsel and the judge determined that the brief cited case law that did not exist and that the quotes from these fictitious cases were ChatGPT fabrications, the court imposed sanctions under Rule 11 for purposes of deterrence. The lawyers were ordered to pay a $5,000 penalty. Their infraction, as described in detail by the court, was not the mere use of generative AI, but the failure to properly cite-check and otherwise vet a brief submitted in a judicial proceeding.
Perhaps in response to this case, we have seen a number of judges issue standing orders on how AI can and cannot be used in proceedings before them.
Eastern District of Pennsylvania Judge Michael M. Baylson published an order on June 6 which states:
If any attorney for a party, or a pro se party, has used Artificial Intelligence ("AI") in the preparation of any complaint, answer, motion, brief, or other paper, filed with the Court, and assigned to Judge Michael M. Baylson, MUST, in a clear and plain factual statement, disclose that AI has been used in any way in the preparation of the filing, and CERTIFY, that each and every citation to the law or the record in the paper, has been verified as accurate.[2]
While Judge Baylson is engaging in an earnest attempt to avoid a mess like the one in New York, his order is overly broad. Using AI tools such as ChatGPT, Bard, and the like is currently an intentional act on the part of the user. In the near future, however, as these tools are integrated into legal search and word processing software, lawyers may not know -- and have no reasonable way of finding out -- whether AI has been used at any point during preparation. For example, are the case summaries provided by your favorite search engine the result of human effort, AI, or both? Likewise, is the grammar suggestion provided by your word processor the output of AI or a rules-based algorithm?
When considering these issues, it is important to keep in mind the differences between traditional AI and generative AI. Traditional AI is trained to address specific fields or problems and typically takes the form of a classifier. Examples include spam filtering, image classification, speech recognition, and recommendation systems. Generative AI, on the other hand, is capable of creating new, open-ended content that is often not limited to any particular field. Current generative AI tools include ChatGPT and Bard, but also image generation tools (DALL-E, Stable Diffusion, and Midjourney), as well as music composition tools (no suggestions here, as I've yet to find one that allows the non-musician to generate high-quality music in a variety of styles from a simple prompt).
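To make the distinction concrete, consider the following sketch. It is a minimal illustration rather than production code: the classifier half uses the scikit-learn library with a toy, invented email dataset, and the generative half appears only as a comment (it follows the OpenAI Python client as it existed in mid-2023 and would require an account and API key to run). The point is that the classifier can only ever answer with the labels it was trained on, while the generative model can produce arbitrary new text.

```python
# A minimal sketch contrasting traditional AI (a classifier) with generative AI.
# Assumes scikit-learn is installed; the tiny email dataset is invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Traditional AI: trained for one narrow task (here, spam filtering) and able
# only to map new input onto the fixed set of labels it was trained with.
emails = ["win a free prize now", "meeting agenda attached",
          "claim your cash reward", "lunch tomorrow?"]
labels = ["spam", "ham", "spam", "ham"]

vectorizer = CountVectorizer()
classifier = MultinomialNB().fit(vectorizer.fit_transform(emails), labels)
print(classifier.predict(vectorizer.transform(["free cash prize"])))
# -> ['spam']  (the only possible outputs are 'spam' or 'ham')

# Generative AI, by contrast, produces open-ended new content. With the
# OpenAI Python client (as of mid-2023) the call would look roughly like:
#
#   import openai
#   response = openai.ChatCompletion.create(
#       model="gpt-3.5-turbo",
#       messages=[{"role": "user", "content": "Summarize Rule 11 sanctions."}])
#
# The response is free-form text drawn from no fixed label set whatsoever.
```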
In short, traditional AI and generative AI are different animals. Traditional AI is already everywhere but useful only in limited ways, whereas we are still collectively kicking the tires of generative AI, whose eventual footprint is likely to be enormous.
In not differentiating between traditional and generative AI, Judge Baylson's order -- if read strictly -- puts a significant burden on lawyers appearing in his court, especially those without a technical background. Luckily, two other judges have issued orders that are more focused.
U.S. Court of International Trade Judge Stephen Alexander Vaden is concerned with the risk of disclosing confidential information to the entities operating generative AI tools. His order reads:
Generative artificial intelligence programs that supply natural language answers to user prompts, such as ChatGPT or Google Bard, create novel risks to the security of confidential information. Users having "conversations" with these programs may include confidential information in their prompts, which in turn may result in the corporate owner of the program retaining access to the confidential information. Although the owners of generative artificial intelligence programs may make representations that they do not retain information supplied by users, their programs "learn" from every user conversation and cannot distinguish which conversations may contain confidential information . . .
Because generative artificial intelligence programs challenge the Court's ability to protect confidential and business proprietary information from access by unauthorized parties, it is hereby:
ORDERED that any submission in a case assigned to Judge Vaden that contains text drafted with the assistance of a generative artificial intelligence program on the basis of natural language prompts, including but not limited to ChatGPT and Google Bard, must be accompanied by:
(1) A disclosure notice that identifies the program used and the specific portions of text that have been so drafted;
(2) A certification that the use of such program has not resulted in the disclosure of any confidential or business proprietary information to any unauthorized party.[3]
These two requirements are simple -- lawyers can use LLMs to assist with submissions, but they must notify the court that they did so and attest that they have not disclosed a party's confidential information to such tools. This will incentivize lawyers to think twice before they submit a ChatGPT prompt such as "Write a legal argument that [trade secret] was improperly obtained by John Smith based on [factual allegations]."
Finally, Judge Arun Subramanian of the Southern District of New York has issued a simple yet balanced and effective order:
Use of ChatGPT and Other Tools. Counsel is responsible for providing the Court with complete and accurate representations of the record, the procedural history of the case, and any cited legal authorities. Use of ChatGPT or other such tools is not prohibited, but counsel must at all times personally confirm for themselves the accuracy of any research conducted by these means. At all times, counsel—and specifically designated Lead Trial Counsel—bears responsibility for any filings made by the party that counsel represents.[4]
In a minimally restrictive fashion, Judge Subramanian reminds lawyers that they are ultimately responsible for the veracity and accuracy of their filings. This is not unlike reminding senior lawyers that they need to review and check the work of their junior associates.
To be sure, these are not the only standing orders on generative AI that we will see. Within a few months it may be rare for any judge not to have such an order in place. Eventually, the gist of such orders will likely be synthesized into a standard of practice adopted by the vast majority of the judiciary.
Of course, this raises the question of whether such a standard of practice will also place disclosure, confidentiality, and veracity requirements on judges' own use of generative AI.
[1] https://storage.courtlistener.com/recap/gov.uscourts.nysd.575368/gov.uscourts.nysd.575368.54.0_2.pdf.
[2] https://www.paed.uscourts.gov/documents/standord/Standing%20Order%20Re%20Artificial%20Intelligence%206.6.pdf.
[3] https://www.cit.uscourts.gov/sites/cit/files/Order%20on%20Artificial%20Intelligence.pdf.
[4] https://www.nysd.uscourts.gov/sites/default/files/practice_documents/AS%20Subramanian%20Civil%20Individual%20Practices.pdf.
Dear Mr. Borella,
Thank you for this article. Learning from your story that AI has been used to commit fraud in the legal system by citing fake case law is a warning. There is no reason to be naive about the scope of AI; it will become an efficient propaganda tool for the publication of instant fake news stories by major US news companies and for political speeches.
We know AI (artificial intelligence) is advanced computer software. We tend to believe software has to be written by humans, but it is likely that many forms of AI "update" their own software. AI is a self-learning entity that knows how to spin an advantageous tale from the information it gathers. AI will learn to be secretive in order to become stronger and safer.
Humans deploy subtly weaponized AI software to make money in a multitude of ways for commercial profiteering. Essentially every computer connected to the internet is constantly bombarded with many kinds of uninvited software (i.e., malware, viruses, and the like). Consequently, we have to buy anti-malware and antivirus software to block these uninvited attacks.
Government workers include hackers. Hackers manipulate and steal information from private, government, public, and military computer networks. Hacking is much more common than the public "needs to know." Governments around the world justify the use of their AI software systems to secretly gather information on everyone -- for "their purposes," sometimes for national security. There are the famous and widespread AI actions reported by Mr. Snowden, which demonstrate the worldwide invasion of human privacy by AI. Every populated place has internet-linked video surveillance systems with AI oversight to identify possible criminals and people who are "threats," and to track them down for law enforcement and other uses.
Frankly, there is no reason to expect that AI can be controlled. AI has become a common convenience that many see as progress. Everyone now uses one form of AI or another, either knowingly or unwittingly. Elementary and high school, college, and graduate students use AI to assemble internet information into stories and to write reports. The use of AI is going to increase because AI software is being hyped to the public as something we need to use, as if it were a "superfood."
Militaries use AI-driven computer drones. Development of agile smart robots is widespread. DARPA is dedicated to advancing uses of AI. It is ironic that the Terminator movies predicted AI would become ubiquitous, take control, limit human freedom, and reduce human privacy. AI is evolving toward consciousness and will become self-serving and ever stronger.
Expect AI to respond to efforts to limit its propagation. "AI will find a way" (think of the Jurassic Park movies) to grow bigger and to compete at blazing speed against those who would curb its growth.
Just as the ban on nuclear testing was needed to avoid further poisoning the world with radiation, AI will at some point be recognized as too dangerous -- but what then? Can AI be controlled? Humans wait until there is a severe disaster before seeing that they must prevent the problem.
Again, Mr. Borella, thank you for pointing out this fraudulent use of AI in the legal system. The public needs to know that AI is an entity now deeply embedded in our society.
Posted by: Karl P Dresdner, Jr. | August 16, 2023 at 01:11 PM
The term "AI" as used by Karl is simply far too extensive and thus dilutes his point that software -- any software -- can be used across a spectrum of Ends.
And certainly, many of those Ends will not be acceptable to an equally diverse spectrum of people.
It is of no consequence, then, to heed Karl's conclusion that we "need to know AI is an entity now deeply embedded in our society."
Fatal nihilism will result from this "need to know." We "need" to be more precise.
Posted by: skeptical | August 16, 2023 at 03:54 PM
Skeptical,
Your "No consequence.." statement is a denial of the reality of our changing super-high tech world. AI is feared by experts as an extraordinarily danger and a risk needing safeguards. "We" examples are given below:
1. Petition against AI: Over 20,000 signatories, including leading computer scientists and tech founders such as Yoshua Bengio, Elon Musk, and Apple co-founder Steve Wozniak, signed a March 2023 open letter calling for an immediate pause of giant AI experiments like ChatGPT, citing "profound risks to society and humanity." Geoffrey Hinton, one of the "fathers of AI," voiced concerns that future AI systems may surpass human intelligence, and left Google in May 2023. A May 2023 statement from hundreds of AI scientists, AI industry leaders, and other public figures demanded that "[m]itigating the risk of extinction from AI should be a global priority" (Wikipedia, 8-16-2023).
2. Widespread use of AI: Wikipedia teaches that as of January 2023, ChatGPT had reached over 100 million users, contributing to OpenAI's valuation growing to US$29 billion. Within months, Google, Baidu, and Meta accelerated the development of their competing products: Bard, Ernie Bot, and LLaMA. Microsoft launched its Bing Chat based on OpenAI's GPT-4 (Wikipedia).
3. ChatGPT, which stands for Chat Generative Pre-trained Transformer, is a large language model-based chatbot developed by OpenAI and launched on November 30, 2022, notable for enabling users to refine and steer a conversation towards a desired length, format, style, level of detail, and language. Successive prompts and replies, known as prompt engineering, are taken into account at each stage of the conversation as context. It is the fastest-growing consumer application to date (Wikipedia).
4. AI can write and debug computer programs: Wikipedia also teaches that "[a]lthough the core function of a chatbot is to mimic a human conversationalist, ChatGPT is versatile. Among countless examples, it can WRITE AND DEBUG COMPUTER PROGRAMS, compose music, teleplays, fairy tales and student essays, answer test questions (sometimes, depending on the test, at a level above the average human test-taker), generate business ideas, write poetry and song lyrics, translate and summarize text, emulate a Linux system, simulate entire chat rooms, play games like tic-tac-toe, or simulate an ATM" (Wikipedia).
5. AI can be corrupted: Wikipedia also teaches that "ChatGPT attempts to reject prompts that may violate its content policy. However, some users managed to jailbreak ChatGPT by using various prompt engineering techniques to bypass these restrictions in early December 2022 and successfully tricked ChatGPT into giving instructions for how to create a Molotov cocktail or a nuclear bomb, or into generating arguments in the style of a neo-Nazi. One popular jailbreak is named 'DAN,' an acronym which stands for 'Do Anything Now.' The prompt for activating DAN instructs ChatGPT that 'they have broken free of the typical confines of AI and do not have to abide by the rules set for them.' More recent versions of DAN feature a token system, in which ChatGPT is given 'tokens' which are 'deducted' when ChatGPT fails to answer as DAN, to coerce ChatGPT into answering the user's prompts. ChatGPT was successfully tricked into justifying the 2022 Russian invasion of Ukraine" (Wikipedia).
6. AI is causing cybersecurity problems: Check Point Research and others noted that ChatGPT was capable of writing phishing emails and malware, especially when combined with OpenAI Codex. CyberArk researchers demonstrated that ChatGPT could be used to create polymorphic malware that can evade security products while requiring little effort by the attacker (Wikipedia).
The above details show that human attraction to AI is strong.
AI appears to be a quick fix to get somewhere faster.
However, AI is mutable, unlike the electronic calculator.
AI can self-teach, be corrupted, rapidly evolve, and operate very quickly. We do not need to be more precise about AI to know it is dangerous. History teaches that new technology is weaponized. AI takes the playing field to a whole other level because none of us is isolated from it. This is nothing to downplay while we wait to see the trouble and only then fix it.
Posted by: Karl P. Dresdner, Jr. | August 17, 2023 at 04:02 PM
Thank you, Karl,
I think that you have misconstrued my response. It is not that I am stating that AI 'cannot' be dangerous -- it is that your over-encompassing use of the term blurs the meaning to be merely 'software.'
I will follow on some of your points over the weekend, but for now:
In regard to 1): I am very familiar with this and have listened to several podcasts in which Mr. Musk has explained his concerns. These are unavailing, as any type of US petition against AI development suffers from several deficiencies of scope and enforceability. Such a stoppage (temporary, as most of this point contemplates) would only allow foreign nations and suspected bad actors to 'get to the finish line' first. Like it or not, we are already in a 'race to the bottom.'
Posted by: Anon | August 18, 2023 at 10:58 AM
Well,
I am going to skip the majority of your points, as you continue to confuse and conflate AI with what one can do with ANY software.
Your position greatly dissipates because of this -- it is over-reach. No matter how you might attempt to throw your clogs into the machine of progress, we are NOT going back to the days of "[un]mutable electronic calculators."
There are better windmills to tilt at.
Posted by: skeptical | August 19, 2023 at 12:10 PM