By Joshua Rich and Michael Borella --
Since President Biden issued his Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, the U.S. Patent and Trademark Office has been investigating the potential pitfalls of practitioners' use of AI in patent and trademark practice. On April 11, the Office issued its "Guidance on Use of Artificial Intelligence-Based Tools in Practice Before the United States Patent and Trademark Office."[1] And while the Guidance does not include or propose any new rules, it provides useful reminders of how to ensure compliance with the rules (along with a dash of advocacy for the USPTO's positions on prosecution strategy).
The starting point for the USPTO's analysis of the effects of AI is the potential efficiencies and cost savings that use of AI-based tools can provide. After all, the USPTO is already using such tools itself:
For example, patent examiners are performing AI-enabled prior art searches using features like More Like This Document (MLTD) and Similarity Search in the Office's Patents End-to-End (PE2E) Search tool.
The Office recognizes that practitioners are already (and increasingly) using such tools to locate prior art, review patent applications, and monitor examiner behavior. But along with the benefits of AI tools come risks: hallucination, disclosure of confidential client information, and violation of export control laws and rules. The Guidance attempts to show practitioners how they should -- and (in some cases) must -- mitigate those risks under the existing rules and regulations.
The USPTO's Guidance relies on six different sets of rules and policies as sources of practitioners' obligations.
• The duty of candor: "Each individual associated with the filing and prosecution of a patent application has a duty of candor and good faith in dealing with the Office, which includes a duty to disclose to the Office all information known to that individual to be material to patentability as defined in this section."[2]
• The signature requirement for nearly all submissions to the USPTO, which carries with it the implied certification that the person signing or submitting the paper was the one actually signing it and vouching under penalty of perjury that "[a]ll statements made therein of the party's own knowledge are true, all statements made therein on information and belief are believed to be true"[3] and an obligation to make a reasonable inquiry to ensure the accuracy of those statements.[4]
• The obligation of confidentiality: "A practitioner shall not reveal information relating to the representation of a client unless the client gives informed consent, the disclosure is impliedly authorized in order to carry out the representation, the disclosure is permitted by paragraph (b) of this section, or the disclosure is required by paragraph (c) of this section."[5]
• The foreign filing license requirement and regulations governing export of technology, including ITAR (International Traffic in Arms Regulations), EAR (Export Administration Regulations), and AFAEAR (Assistance to Foreign Atomic Energy Activities Regulations). Notably, the USPTO has previously said that "[a] foreign filing license from the USPTO does not authorize the exporting of subject matter abroad for the preparation of patent applications to be filed in the United States."[6] Further, ITAR, EAR, and AFAEAR prohibit not only exporting certain technical data in the traditional sense -- that is, sending the information overseas -- they also prohibit allowing nationals of certain foreign countries to have access to that information, even within the United States.
• Policies regarding access to the USPTO's electronic systems, including Patent Center, P-TACTS, ESTTA, and USPTO.gov. Among other things, only individuals may have USPTO.gov accounts, and account holders must not share their accounts with others (including AI-based tools).
• Professional responsibility duties to clients, including the duty of competence (which includes a duty to be competent with technology used to handle client matters before the USPTO)[7] and the duty to "reasonably consult with the client about the means by which the client's objectives are to be accomplished" and "explain a matter to the extent reasonably necessary to permit the client to make informed decisions regarding the representation."[8]
Those rules and regulations apply to the use of AI in at least four different contexts of USPTO patent and trademark practice.[9] Those contexts can involve different AI tools and considerations, but there is one unifying theme: use of an AI tool does not relieve a practitioner of compliance with existing duties. Indeed, the USPTO does not expect any more of practitioners who use AI tools than it does of practitioners who rely on the assistance of junior attorneys or paralegals -- the practitioner is responsible for reviewing the work product and ensuring that it is technically and legally correct. Notably, however, the USPTO seeks to put a thumb on the scales balancing certain duties owed by practitioners.
First, AI tools are increasingly being used in the drafting of prosecution (and PTAB-related) documents. Even word processing software such as Microsoft Word is beginning to incorporate AI tools. But specialized AI tools are also being rolled out that can assist in patent and claim drafting, responding to office actions, and preparing forms (such as IDSs). While such tools are growing more robust and useful, they still have deficiencies, and their output must be double-checked. Double-checking is not only good practice; the Guidance makes clear that it is a requirement of both the duty of candor and the signature requirement: "Therefore, if an AI tool is used in drafting or editing a document, the party must still review its contents and ensure the paper is in accordance with the certifications being made." A practitioner should also reasonably consult with the client to ensure that the client agrees to the means for accomplishing its goals.
Given the requirement for an attorney or agent to double-check the entirety of a submission, the Guidance does not suggest a general obligation to disclose the use of AI tools in preparing papers. There are potential exceptions, however. For example, practitioners often include language to broaden the disclosure of an invention using alternative embodiments and potential substitutions known in the art. While there may be a question whether this practice makes the patent attorney a joint inventor, the issue is more concerning if an AI tool has come up with the alternatives. Not only may it throw inventorship into question (and require disclosure of the AI drafting tool), it may exceed the true scope of the invention.
The Office also counsels that "Practitioners are also under a duty to refrain from filing or prosecuting patent claims that are known to be unpatentable. Therefore, in situations where an AI tool is used to draft patent claims, the practitioner is under a duty to modify those claims as needed to present them in patentable form before submitting them to the USPTO." The first sentence is uncontroversial: it is improper to seek claims to which you know -- not suspect, know -- the client is not entitled. But the second sentence is much more opaque. Whether an AI tool is used or not, if "patentable form" is intended to mean something more than the opposite of "known to be unpatentable," the Guidance seems to place a higher obligation on the review of AI-drafted claims than that imposed on the review of human-drafted claims. That would appear to be more an Office request than an obligation.
Similarly, the Guidance points out that the obligation to review an IDS requires more than checking to see that it is in the proper format. Rather, it requires "reviewing each piece of prior art listed in the form." But the Guidance then asserts that the review requires a practitioner to cull not only those references that are irrelevant but also those that include "marginally pertinent cumulative information." Of course, the duty of disclosure requires the citation of material prior art, and what may be "marginally pertinent" in the eyes of one may be "material" to another. Practitioners and Examiners often disagree vehemently over the relevance of cited art. By emphasizing the obligation to eliminate irrelevant art and extending it to "marginally pertinent" art, the Guidance seems to be trying to tip the scales away from citing all potentially relevant art. That is, the Office seems to be trying to use the vehicle of the Guidance to ease the Examiner's burden of reviewing submitted art at the cost of a greater risk of violating the duty of disclosure. Whether an AI tool is used or not should not affect that calculus.
The issue is that, on one hand, the USPTO has set forth the duty of candor in Rule 56 and elsewhere. But, on the other hand, the Guidance implicitly incentivizes practitioners to NOT strictly follow Rule 56. The notion of what a "reasonable Examiner" might consider to be relevant art varies dramatically between Examiners and art units. Further, there is no practically useful test for what constitutes cumulative art. Given this, the burden of determining whether to disclose art that falls into either of these gray areas should not fall on practitioners. Instead, practitioners should be encouraged to err on the side of disclosure when in doubt.
With regard to trademark filings, many of the concerns related to document preparation are the same. The Guidance raises one additional example of the perils of AI hallucination: the submission of an AI-generated specimen of use. Again, the AI tool's output must be double-checked to make sure it contains accurate information.
Second, AI tools may be useful in the mechanical process of filing documents with the USPTO. But the rules seem to throw a wrench into that prospect, at least as they are currently written. Almost all submissions require a person's signature; the Guidance unequivocally states, "It would not be acceptable for the correspondence to have the signature of an AI tool or other non-natural person." Thus, even if a person has prepared a document, it would be a violation of the rules to have an AI tool affix a signature and submit the document. That is especially true because a non-natural person cannot have a USPTO.gov account. For now, then, AI tools cannot be used to file USPTO documents.
Third, AI tools could be used to access USPTO systems, for example, to scrape filings to include in a large language model. Again, however, an AI tool cannot have a USPTO.gov account and certain actions violate the terms of service of USPTO websites. As the Guidance warns, "Users should also be extremely careful when attempting to data mine information from USPTO databases. Using computer tools, including AI systems, in a manner that generates unusually high numbers of database accesses violates the Terms of Use for USPTO websites, and users using tools in this way will be denied access to USPTO servers without notice and could be subject to applicable state criminal and civil laws." As the Guidance points out, however, the USPTO does offer bulk access to its data for purposes of mass downloading, and those interfaces should be used instead.
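To illustrate that last point, anyone who needs USPTO data at scale can pull it from the Office's bulk data offerings rather than scraping Patent Center or other interactive systems. The short Python sketch below is a minimal example under stated assumptions: the archive URL shown is hypothetical and for illustration only, and actual product paths and file names should be confirmed on the USPTO's bulk data site before use.

```python
# Minimal sketch: download a bulk-data archive from the USPTO instead of
# scraping interactive systems. The URL below is illustrative only; take
# actual product paths and file names from the USPTO bulk data site.
import requests

# Hypothetical example archive (an assumption, not a verified path); replace
# with a real product URL listed on bulkdata.uspto.gov.
ARCHIVE_URL = "https://bulkdata.uspto.gov/data/patent/grant/redbook/fulltext/2024/ipg240102.zip"


def download_archive(url: str, dest: str) -> None:
    """Stream a bulk-data archive to disk in modest chunks."""
    with requests.get(url, stream=True, timeout=60) as resp:
        resp.raise_for_status()
        with open(dest, "wb") as fh:
            for chunk in resp.iter_content(chunk_size=1 << 20):  # 1 MB chunks
                fh.write(chunk)


if __name__ == "__main__":
    download_archive(ARCHIVE_URL, "ipg240102.zip")
    print("Downloaded bulk archive; parse it locally rather than issuing "
          "thousands of requests against USPTO search systems.")
```

Working from downloaded archives keeps any model-training or text-mining workload off the USPTO's interactive servers, which is precisely the behavior the Terms of Use are meant to channel.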
Fourth, the Guidance raises the concern of potential unintended disclosure of client confidential information when using an AI tool. As the Guidance cautions:
This can happen, for example, when aspects of an invention are input into AI systems to perform prior art searches or generate drafts of specification, claims, or responses to Office actions. AI systems may retain the information that is entered by users. This information can be used in a variety of ways by the owner of the AI system including using the data to further train its AI models or providing the data to third parties in breach of practitioners' confidentiality obligations to their clients under, inter alia, 37 CFR 11.106. If confidential information is used to train AI, that confidential information or some parts of it may filter into outputs from the AI system provided to others.
This is an especially acute concern because current large language models are "black boxes" with terms of service that may change without much notice. And even the use of a tool whose terms of service prohibit the use of inputs to train the model may not be sufficient if the owner of the tool is not especially trustworthy. The Guidance suggests that "practitioners must be especially vigilant to ensure that confidentiality of client data is maintained" when client data is used with an AI tool (but also when client data is stored on third-party storage). The concern may extend to knowing where the model's servers are located, since disclosure to the model may violate the foreign filing license rules or export control regulations. In short, the Guidance counsels extreme caution in maintaining the confidentiality of client data.
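What that vigilance looks like in practice will vary by firm, but one modest, purely illustrative precaution is to screen text for obvious client identifiers before it ever reaches a third-party tool (and, better still, to keep unfiled invention details out of hosted systems entirely). The Python sketch below is a minimal example of such a pre-submission check, not a complete safeguard; the patterns, names, and the decision to block are assumptions for illustration only.

```python
# Illustrative pre-submission check: flag obvious client identifiers before
# text is sent to any third-party AI tool. This is NOT a complete safeguard --
# it cannot detect technical subject matter that itself must stay in-house --
# but it shows the kind of gate a firm's workflow might include.
import re

# Hypothetical patterns a firm might maintain (assumptions for illustration):
# internal docket numbers, a client name, and application serial numbers.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{5}-\d{4}\b"),                   # e.g., internal docket numbers
    re.compile(r"\bAcme\s+Corp(oration)?\b", re.I),   # e.g., a client name
    re.compile(r"\bUS\s?\d{2}/\d{3},?\d{3}\b"),       # e.g., serial numbers
]


def screen_for_identifiers(text: str) -> list[str]:
    """Return any sensitive matches found; an empty list means none detected."""
    hits = []
    for pattern in SENSITIVE_PATTERNS:
        hits.extend(m.group(0) for m in pattern.finditer(text))
    return hits


if __name__ == "__main__":
    draft = "Claim chart for Acme Corp, docket 12345-0678, re US 17/123,456."
    flagged = screen_for_identifiers(draft)
    if flagged:
        print("Do not send to an external AI tool; flagged:", flagged)
```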
All in all, the Guidance is intended to reinforce that the use of AI tools does not relieve practitioners of their obligations to comply with existing rules and regulations, even if it may simplify or quicken prosecution tasks. There are clear risks and pitfalls, and the Guidance helps highlight them without placing additional obligations on practitioners.
[1] 89 Fed. Reg. 25,609 (Apr. 11, 2024).
[2] 37 C.F.R. § 1.56. There are analogous duties of candor and good faith in 37 C.F.R. § 1.555(a) and 37 C.F.R. § 42.11.
[3] 37 C.F.R. § 11.18(b)(1).
[4] 37 C.F.R. § 11.18(b)(2).
[5] 37 C.F.R. § 11.106(a).
[6] Scope of Foreign Filing Licenses, 73 Fed. Reg. 42,781 (July 23, 2008).
[7] 37 C.F.R. § 11.101.
[8] 37 C.F.R. § 11.104.
[9] In the Guidance, the Office included a fifth context, "Fraud and Intentional Misconduct." But the Guidance does nothing more than reiterate the previously stated concerns with the submission (or omission) of materials in violation of the duty of candor and the use of AI tools to violate the terms of service of USPTO websites.