By Brian A. Pattengale* and Anthony D. Sabatelli**
Decades after the science-fiction visions of Stanley Kubrick's 2001: A Space Odyssey and Isaac Asimov's I, Robot, artificial intelligence ("AI") is finally moving into the mainstream. Many of us use digital assistants like Apple's Siri or Amazon's Alexa every day, and we gape, with a mixture of awe and terror, at videos of the feats of robotic animals and at stories about whether AI being developed at Google is displaying signs of sentience. With this, we have finally reached the point in the development of AI where we must confront the legal and policy question of whether AI can (or should) be named as an inventor on a patent.
That very question was recently addressed by the Court of Appeals for the Federal Circuit ("CAFC") in Thaler v. Vidal.1 The patent applicant in Thaler argued that he did not invent the subject matter of his patent applications; rather, he asserted they were invented solely by an AI system.2 In its decision, the CAFC swiftly concluded that the answer to this question is clear from the statute: U.S. patent law clearly and unambiguously states that only natural persons can be named as inventors on patents. The Court stated that "it might seem that resolving this issue would involve an abstract inquiry . . . however we need not ponder these metaphysical matters . . . [i]nstead, our task begins -- and ends -- with consideration of the . . . statute." The Court reiterated that the Patent Act requires that inventors be "individuals" and that inventorship is therefore limited to human beings. End of story.
In this article we consider what this very clear pronouncement by the CAFC means for drug discovery: Will the next blockbuster drug potentially be denied patent protection if it is discovered solely through highly sophisticated AI methods?
According to a 2020 study, it generally takes on the order of 10 years and over a billion dollars to bring a new drug to market, from initial discovery in the lab, through preclinical and clinical testing, to eventual approval of the new drug application ("NDA"), and then to marketing and sales.3 Furthermore, the success rate is very low -- only approximately 10% of drug candidates make it all the way through the process to approval for marketing. The drug discovery process is still very empirical, often involving medicinal chemists and their teams synthesizing and evaluating thousands of compounds. The process also typically relies upon numerous structure-activity relationship ("SAR") decisions by those involved to direct the process. Computer-aided drug discovery ("CADD") technologies have significantly improved this very empirical process, allowing for more efficient utilization of bio- and chemoinformatic information to propose, screen, and perform target-based analyses on drug candidates. AI-based technologies represent the next frontier of drug discovery, offering the potential to identify optimized drug molecules based on training data without further human input during the identification process.4 To be clear, scenarios such as this are not science fiction -- the German biotechnology company Evotec recently partnered with UK-based Exscientia to use AI to identify a new anticancer drug candidate that is currently in Phase I clinical trials.5 Pharma giant Merck is also highly invested in utilizing AI platforms for drug discovery.6
Experts in the field generally utilize AI systems to perform machine learning, or what is known as "deep learning," to make decisions and predictions based upon correlations in training data provided to the system. For example, in the field of drug discovery, one could start with a general core structure for a drug compound and add to it a large set of chemical substituents for altering the properties of the compound. With appropriate training data, AI can be used to predict and sift through several tens of thousands, if not millions, of theoretical substituent combinations to identify a limited set of compounds, or even a single specific compound, having selected target properties. In this scenario, would AI then be the rightful "inventor" of the target compound, which otherwise might never have been identified but for the use of AI?
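The combinatorial screen described above can be illustrated with a minimal sketch. To be clear, everything below is a hypothetical: the substituent lists, the additive scoring table standing in for a trained property-prediction model, and the function names are all illustrative assumptions, not any real discovery pipeline.

```python
from itertools import product

# Hypothetical substituent libraries for three positions (R1, R2, R3)
# on a shared core scaffold; the names are illustrative only.
R1 = ["H", "CH3", "OCH3", "Cl"]
R2 = ["H", "F", "CF3"]
R3 = ["H", "OH", "NH2"]

# Stand-in for a trained model: in practice a machine-learning model
# fitted to assay data would predict a property score; here a toy
# additive contribution table is used purely to show the workflow.
CONTRIB = {"H": 0.0, "CH3": 0.3, "OCH3": 0.5, "Cl": 0.2,
           "F": 0.4, "CF3": 0.6, "OH": 0.1, "NH2": 0.7}

def predict_score(combo):
    """Predicted fitness of a substituent combination (higher is better)."""
    return sum(CONTRIB[s] for s in combo)

def top_candidates(k=5):
    """Enumerate every R1 x R2 x R3 combination and keep the k best."""
    combos = product(R1, R2, R3)
    return sorted(combos, key=predict_score, reverse=True)[:k]

best = top_candidates(3)  # the narrowed-down set a chemist would then synthesize
```

Even this toy version enumerates 36 combinations; with realistic substituent libraries the space quickly reaches the tens of thousands or millions of combinations mentioned above, which is exactly where model-guided filtering earns its keep.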
Inventorship in the U.S. is based upon conception of an invention. Activities undertaken simply to reduce an invention to practice are not sufficient to confer inventorship. The Manual of Patent Examining Procedure ("MPEP") provides unambiguous guidance on inventorship, even specifically for chemical compounds. MPEP § 2109 II states: "General knowledge regarding the anticipated biological properties of groups of complex chemical compounds is insufficient to confer inventorship status with respect to specifically claimed compounds."7 Therefore, in a simplified scenario, a medicinal chemist having expert knowledge of a broad genus of compounds would not necessarily be the inventor of a specific chemical compound falling under that genus, if that compound was discovered by another chemist on the team. That other chemist would most likely be the inventor. Suppose now that, instead of the compound having been identified by another chemist, it is identified by AI. Following the same logic, would not AI now likely be the rightful inventor?
For the sake of discussion, let's assume that AI fully predicts a previously unknown compound directed at a particular drug target. That compound is tested for efficacy, safety, etc., and becomes a candidate that is eventually approved for use and reaches blockbuster sales status. Even though humans were involved in the synthesis, testing, clinical trials, and approval process, none of those activities would constitute conception (i.e., invention) of this drug compound. The humans were "told" by AI exactly which compound to make and would merely have reduced the invention to practice, having therefore not contributed to the original conception of that particular drug compound.
Taking this scenario to its seemingly perverse conclusion, under current U.S. patent law as recently pronounced by the CAFC, the AI-discovered target compound would be unpatentable, because AI cannot be an inventor on the patent and no human could specifically be identified as the discoverer of that compound. In the absence of patent protection, this AI-identified compound could then likely end up in the public domain. This result is at odds with the public policy underlying our patent system, which rewards innovation by placing patented inventions before the public to encourage further innovation, while providing the patentee with exclusivity for the term of the patent, generally twenty years.8
The above thought exercise, to our knowledge, has not yet played out. But it soon could. There may, however, be a solution to this apparent dilemma, albeit not one that is completely satisfying or sufficiently tested through litigation. Because U.S. inventorship is based upon the patent claims, and a co-inventor need only contribute to the conception of a single patent claim to be listed as an inventor on a patent, it is entirely possible that a human inventor could conceive of a single claim or claim limitation, or even an alternative claimed embodiment, and thereby be rightfully included on a patent otherwise directed to an AI-generated compound. For example, the chemists on the AI project may conceive of or discover salts or solvates of the compound, functional limitations related to the compound, closely related compounds, methods of using the compound, methods of synthesizing the compound, etc., which could likely be sufficient to establish human inventorship, at least as to those claims reciting these limitations.
In Thaler, the Federal Circuit intentionally left certain questions unresolved by stating that the Court was " . . . not confronted today with the question of whether inventions made by human beings with the assistance of AI are eligible for patent protection." It seems that there are arguments that a human being would have contributed to the inventions (i.e., claims), thereby meeting the inventorship threshold of the Patent Act. Only time will tell how this scenario plays out. Perhaps the more important million-dollar question (or, rather, multi-billion-dollar question with respect to drug discovery) is whether a patent claim directed to a drug compound identified solely through AI, and having no human inventor, would be valid in view of Thaler.
1 Thaler v. Vidal, No. 21-2347 (Fed. Cir. 2022)
2 Mr. Thaler was issued a patent in South Africa with his AI system DABUS listed as the inventor, a world first; Europe and Australia, by contrast, both rejected the corresponding applications. An Australian judge initially ruled that AI can be an inventor, but that decision was overturned earlier this year by a higher court.
3 https://www.biospace.com/article/median-cost-of-bringing-a-new-drug-to-market-985-million/
4 Paul, D. et al. "Artificial intelligence in drug discovery and development", 2021, Drug Discov. Today, 26:1, pp. 80–93
5 https://www.nature.com/articles/d43747-021-00045-7
6 https://www.fiercebiotech.com/medtech/merck-selects-saama-add-machine-learning-tech-drug-development-process; https://www.emdgroup.com/en/research/science-space/envisioning-tomorrow/precision-medicine/generativeai.html
7 Ex parte Smernoff, 215 U.S.P.Q. 545, 547 (Bd. App. 1982)
8 In Thaler v. Vidal, Mr. Thaler did include related policy arguments, which the Federal Circuit dismissed in view of its reliance on the statutory language that Congress chose.
* Dr. Pattengale is a patent agent at Wiggin and Dana. Dr. Pattengale received his Ph.D., Physical and Materials Chemistry from Marquette University, and his B.S., Biochemistry from Carroll University.
** Dr. Sabatelli is Patent Counsel at Wiggin and Dana. Dr. Sabatelli received his J.D., cum laude, from the Salmon P. Chase College of Law. He received his Ph.D., with Honors, in Organic Chemistry from Yale University, and his B.S., summa cum laude, in Chemistry from Fairfield University.
This article was originally published on the Wiggin and Dana website on August 17, 2022.
RE AI inventions:
Identifying AI as the inventor is neither necessary nor consistent with our everyday understanding of the use of tools: if I use a wrench to tighten a bolt to attach two parts together, where my puny human strength would not be able to do so without the wrench, no one says that the wrench attached the parts together. Everyone would agree that I attached them, using a tool.
Thus, identifying AI as an inventor is unnecessary and inconsistent with our usage regarding all other tools created and used by humans.
Posted by: James Fox | August 29, 2022 at 11:13 AM
An obvious, and surprisingly not clearly addressed, question is whether inventorship simply tracks (devolves) to the human inventor / operator of the AI -- as for any other tool used in any endeavor.
The article seems to assume what has not been established. That is, the article seems to assume that an "AI" tool (whatever all fits in that box) somehow has some form of "right" different from other software used as a tool by a human. This assumption, offered without explanation, seems like a bias in favor of AI by those who propound AI. If there is any issue here, and maybe there really is not (as suggested by the CAFC), it would seem to be this assumption.
Posted by: Occam | August 29, 2022 at 11:48 AM
Here's a follow-up, highlighting the "What's AI?" question.
If a human working in drug discovery, instead of having an AI engine, has a random drug candidate generator, and a candidate of this generator somehow completes "synthesis, testing, clinical trials, and approval process" as in the article, now is the "random drug candidate generator" the inventor?
What distinguishes a random generator from AI from other tools for data analysis and knowledge discovery? When, if ever, do the human inventors of those tools become removed somehow to the point of no longer being inventors of downstream results? What software properties might argue that inventorship should reside in the software?
If, analogously, as the statute presently says, inventorship resides solely in humans, then what human qualities would software have to possess in order to justify changing such law? So, possibly this also turns into the question of when software merits a presently human right, such as recognition for inventorship, and whether we are really anywhere near that situation.
Posted by: Occam | August 29, 2022 at 12:05 PM
I strongly agree with the other commenters here, and particularly appreciated the "wrench" analogy and the random invention generator. I have previously written up a little thought experiment in this vein; see here for anyone interested: https://www.kilburnstrode.com/knowledge/ai/ai-musings/artificial-inventors-again
Posted by: Alexander Korenberg | August 31, 2022 at 05:40 AM
There is a fundamental disconnect present above in EACH of the comments that seek to attribute to AI a "mere use as tool."
That fundamental disconnect has to do with "use of." TRUE AI is NOT mere "use of."
This is most clearly understood in the converse: where an AI would satisfy the LEGAL definition of inventor (and perhaps more critically, would satisfy the LEGAL definition of CO-inventor), the HUMAN involved (and in the case of CO-inventor, for the particular inventive aspect), would NOT merit meeting the LEGAL definition of inventor.
Merely opening some 'black box' into which the work of another (that work BEING the invention) is deposited, does NOT make the human gazing upon that work TO BE the LEGAL inventor.
Posted by: skeptical | September 04, 2022 at 03:13 PM