By Michael Borella --
At a fundamental level, all technologies are double-edged swords. A spear can be used to hunt game or to wage war. A hammer can be used to build a shelter or to murder fellow humans. Social media can be used to connect lonely and geographically distant affinity groups in an emotionally meaningful way or to foster misinformation and possibly even genocide.
Artificial intelligence (AI) is no different.
We are largely unaware of the prevalence of AI. From content recommendation to detection of financial fraud to drug discovery to spam filtering, these computational models operate in the background of everyday life, shaping it in ways we rarely notice. But the slow rollout of autonomous vehicles and the significantly faster adoption of personal digital assistants are more overt examples.
In science fiction, there is no shortage of utopian stories in which menial tasks once carried out by humans are performed by various types of robots, ostensibly leaving humans with more time to think, create, relax, and enjoy life. In reality, the replacement of human labor with non-intelligent automation has so far proven disruptive to many societies. For the most part, knowledge workers such as lawyers have escaped this disruption. We believe, perhaps arrogantly, that the value we provide to our clients requires a generalized intelligence and a sense of empathy that is missing from modern AI. Thus, we may look down our noses at the thought of incorporating AI into our workflows.
It is time to reassess that viewpoint.
All lawyers, especially those of us in patent law, employ various types of technological assists. We draft, edit, and review on computers. We look things up in search engines and Wikipedia. We use docketing software and reminders to stay on top of our schedules and deadlines. The technology-assisted lawyer is already here, and those who eschew these technologies are hard-pressed to keep up.
But let's not forget the aforementioned dual nature of these technologies. The same tools that help us do our jobs can also distract us with non-stop notifications. Search engine results can be misleading, and Wikipedia can contain mistakes. Yet we have adapted to these tools by applying the same skeptical and inquisitive frame of mind that makes us suited for the profession. We take non-verified information for what it is -- information. When in doubt, we double- and triple-source it. Indeed, the work of a patent attorney, who must understand new and complex scientific and engineering inventions, would be frustrating and difficult without these technological assists, even accepting that they are not 100% reliable.
Generative AI is yet another assistive tool, though with bigger caveats.
The latest large language models, such as ChatGPT, are remarkably good at producing human-like text focused on a particular topic. While ChatGPT's output generally falls far short of a well-trained and experienced human author's, it can often exceed the writing quality of an average human. Thus, it is premature to say that patent lawyers (or other types of lawyers) will be replaced anytime soon. However, ignoring the trends in large language models may leave some of us gradually obsolete.
Currently, these models are useful yet unreliable. They frequently produce insightful results, but they can also "hallucinate" pure nonsense, falsehoods, and fabrications. This is because they are little more than sophisticated sentence autocompleters, with arguably no understanding of what they write. Still, their output can be cogent and detailed, in the form of a paragraph or an essay.
Thus, one might be tempted to cut and paste these results into a legal document. Of course, that would be a mistake, at least due to said hallucinations. Instead, large language model output, when relevant, should be edited and/or recrafted. In a sense, this is not that different from what one might do when paraphrasing or otherwise incorporating information found in a web search, in case law, or on Wikipedia. But large language models put you closer to the finish line by writing a first draft for you.
For example, when writing a patent application, one might describe how an invention can be used. It could make a handful of existing technologies more efficient -- faster, better, etc. We can spend an hour or two writing descriptions of each of these technologies from scratch, or we can farm that task out to ChatGPT. Within minutes it will provide workmanlike descriptions that can be edited for stylistic consistency and accuracy,[1] and then we can explain how the invention improves each.
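For the technically inclined, here is a minimal sketch of what farming out such a task might look like in code. It assumes the openai Python package (the v1.x interface, which may differ in other versions), an API key in the OPENAI_API_KEY environment variable, and a hypothetical prompt of my own invention. Note that the prompt describes only well-known, public technology, never the invention itself, for the confidentiality reasons discussed in the comments below.

    # Minimal sketch: asking an OpenAI model to draft a background-section
    # description of a well-known technology. Illustrative only.
    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    # Hypothetical prompt -- deliberately limited to public, well-known
    # subject matter, because prompts sent to ChatGPT are not confidential.
    prompt = (
        "Write a two-paragraph technical overview of how content delivery "
        "networks cache and serve web content."
    )

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )

    # A workmanlike first draft, to be edited for accuracy and style.
    print(response.choices[0].message.content)

As with any other ChatGPT output, the returned draft is a starting point rather than a finished work product; it still needs the editing for accuracy and stylistic consistency described above.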
Not unlike junior associates or talented paralegals, today's large language models can shave a couple of hours off each application by helping us write the background section and describe the prior art. Beyond that, current technology is hit or miss. For example, ChatGPT can draft patent claims, but there are numerous reasons not to use it for this purpose.
Like their predecessors, these tools can be used for various purposes, some constructive and others destructive.[2] The key is to use them for what they are good at doing, and not for tasks at which they are likely to fail. Over time, ChatGPT may evolve to a point where it can automate even more of the drafting process, perhaps even taking a first pass at office action responses, as well as validity or invalidity arguments. That may be as little as five to ten years out, though no one knows for sure whether these models will continue to improve at their current pace or hit some unforeseen plateau.
Regardless, the AI-assisted patent attorney is just the latest iteration of the technology-assisted patent attorney. As the world changes, we need to be flexible and adapt to new professional and business realities. ChatGPT and the many rival models now in development or being launched represent just one of these realities.
[1] In this scenario, ChatGPT is likely to provide a reasonably on-point result because it is describing something well-known.
[2] One of the more troubling abilities of ChatGPT is that it can reduce the marginal cost of generating massive amounts of disinformation to nearly zero. In the wrong hands and without safeguards built into the model, it may not be long before our social media and news channels are overflowing with nonsense at a level well beyond what we already see. This is not quite what Orwell predicted, but likely just as bad.
My concern about using ChatGPT in patent drafting is that the person assigning the writing task to ChatGPT is giving OpenAI (the owner of ChatGPT) access to both the task and the written response. Presumably these events occur prior to the filing of the patent application.
If for some reason the drafting of the patent application takes more than a year, the early ChatGPT drafting may constitute information otherwise available to the public before the effective filing date of the application. In other words, prior art.
Posted by: Walter Scott | February 28, 2023 at 12:22 PM
Walter,
This is indeed an issue. See the second link in the article for a more thorough discussion.
Mike
Posted by: Michael Borella | February 28, 2023 at 06:11 PM
ChatGPT drafting has to start from a "prompt", which in the case of patent drafting has to be a description of the invention provided by the patent attorney. But communications with ChatGPT are not private, as set forth in OpenAI's terms of use. This entails the risk that the prompt fed to ChatGPT may be considered public disclosure by a court, as explained by Aaron Gin and Yuri Lewin-Schwartz in their Patent Docs post of 6 February:
"As such, patent attorneys must take care not disclose confidential information to publicly-accessible large language models like ChatGPT. A court could consider the content of the messages as public disclosure of the invention because OpenAI has no obligation to secrecy."
Posted by: francis hagel | March 01, 2023 at 12:18 AM
"For example, when writing a patent application, one might describe (to ChatGPT) how an invention can be used".
Mike, at what point is the ChatGPT user describing the invention creating prior art here? I'd argue (as a layman) that submitting the description of an invention to ChatGPT is no different from handing a paper document to a legal practice or, more broadly, to some other third party that can help determine patentability, run a search, and/or draft the application. Inventors generally have formal engagement agreements in place with third parties prior to sharing such information about the invention. And now a two-part question for you:
1) Is a ChatGPT user who submits a description of an invention creating prior art for the invention in doing so? If not, why not?
2) If so, would having a formal engagement agreement in place with the endpoint (MS/OpenAI, etc.) that establishes IP ownership, confidentiality, and/or the nature of the engagement resolve that? Why or why not?
Interested in your take on this.
Posted by: David Adler | March 01, 2023 at 07:52 AM
David,
The answer to both questions is in the agreement between the user and the entity operating the LLM. In the case of ChatGPT, that agreement does not provide confidentiality and thus risks public disclosure of the submitted invention description.
Mike
Posted by: Michael Borella | March 01, 2023 at 08:45 PM