By Aaron Gin --
Over the last two months, members of the United States Congress have introduced three new bills intended to establish a Federal Advisory Committee on the rapidly evolving field of artificial intelligence (AI) and to analyze and report on the technology's impact and growth.
FUTURE of AI Act
On December 12, 2017, Rep. John Delaney (D-MD) and Sen. Maria Cantwell (D-WA) introduced identical bills in the House (as H.R. 4625) and Senate (as S. 2217). The twin bills were entitled the "Fundamentally Understanding The Usability and Realistic Evolution of Artificial Intelligence Act of 2017" -- or FUTURE of AI Act.
One aim of the bills is to better understand how AI might maximally benefit the economic prosperity and social stability of the United States. The bills would also direct the Secretary of Commerce to establish a Federal Advisory Committee, which would be tasked with providing guidance on the development and implementation of artificial intelligence. Among other roles, the Advisory Committee is to promote a "climate of investment and innovation," "optimize the development of [AI]," support the "unbiased development and application of [AI]," and "protect the privacy rights of individuals."
The Advisory Committee is asked to provide advice to the Secretary of Commerce with regard to several specific topics including:
(A) The competitiveness of the United States, including matters relating to the promotion of public and private sector investment and innovation into the development of artificial intelligence.
(B) Workforce, including matters relating to the potential for using artificial intelligence for rapid retraining of workers, due to the possible effect of technological displacement.
(C) Education, including matters relating to science, technology, engineering, and mathematics education to prepare the United States workforce as the needs of employers change.
(D) Ethics training and development for technologists working on artificial intelligence.
(E) Matters relating to open sharing of data and the open sharing of research on artificial intelligence.
(F) International cooperation and competitiveness, including matters relating to the competitive international landscape for artificial intelligence-related industries.
(G) Accountability and legal rights, including matters relating to the responsibility for any violations of laws by an artificial intelligence system and the compatibility of international regulations.
(H) Matters relating to machine learning bias through core cultural and societal norms.
(I) Matters relating to how artificial intelligence can serve or enhance opportunities in rural communities.
(J) Government efficiency, including matters relating to how to promote cost saving and streamline operations.
The Advisory Committee is additionally required to conduct a study on various intersections between AI and society, including analyzing the effects of AI on the economy, workforce, and competitiveness of the United States. The study also seeks to identify and eliminate bias in AI algorithms, identify potential "harmful outcomes," and contemplate the incorporation of ethical standards in AI development. Furthermore, within 540 days of enactment of the Act, the Advisory Committee is required to provide a report based on the study to the Secretary of Commerce and Congress.
The bills direct that the Advisory Committee shall include 19 voting members appointed by the Secretary of Commerce. Specifically, the voting members will be selected from "diverse" geographical locations within the United States and include: five members from academia or the AI research community, six from private industry, six from civil society, and two from labor organizations.
One of the notable features of the twin House and Senate bills is the broad range of technologies being considered under the term "artificial intelligence." The bills both define AI as:
(A) Any artificial systems that perform tasks under varying and unpredictable circumstances, without significant human oversight, or that can learn from their experience and improve their performance. Such systems may be developed in computer software, physical hardware, or other contexts not yet contemplated. They may solve tasks requiring humanlike perception, cognition, planning, learning, communication, or physical action. In general, the more human-like the system within the context of its tasks, the more it can be said to use artificial intelligence.
(B) Systems that think like humans, such as cognitive architectures and neural networks.
(C) Systems that act like humans, such as systems that can pass the Turing test or other comparable test via natural language processing, knowledge representation, automated reasoning, and learning.
(D) A set of techniques, including machine learning, that seeks to approximate some cognitive task.
(E) Systems that act rationally, such as intelligent software agents and embodied robots that achieve goals via perception, planning, reasoning, learning, communicating, decision making, and acting.
H.R. 4625 and S. 2217 also contemplate "artificial general intelligence," which they define as "a notional future artificial intelligence system that exhibits apparently intelligent behavior at least as advanced as a person across the range of cognitive, emotional, and social behaviors." Yet further, the bills consider "narrow artificial intelligence," or artificial intelligence systems that address specific applications such as playing strategic games, language translation, self-driving vehicles, and image recognition.
AI Jobs Act of 2018
On January 18, 2018, Rep. Darren Soto (D-FL) introduced H.R. 4829, entitled the "AI Jobs Act of 2018." The bill promotes a "21st century artificial intelligence workforce," and focuses on training and retraining American workers in light of the possible effects of AI on the workforce and human worker demand.
H.R. 4829 requires the Secretary of Labor to undertake an AI study to determine its potential impacts on the workforce. A report based on the study is to include:
(1) An outline of the specific data, and the availability of such data, necessary to properly analyze the impact and growth of artificial intelligence.
(2) Identification of industries that are projected to have the most growth in artificial intelligence use, and whether the technology will result in the enhancement of workers' capabilities or their replacement.
(3) Analysis of the expertise and education (including computer science literacy) needed to develop, operate, or work alongside artificial intelligence over the next two decades, as compared to the levels of such expertise and education among the current workforce.
(4) Analysis of which demographics (including ethnic, gender, economic, age, and regional) may experience expanded career opportunities, and which such demographics may be vulnerable to career displacement, due to artificial intelligence.
(5) Any recommendations to alleviate workforce displacement, prepare future workforce members for the artificial-intelligence economy, and any other relevant observations or recommendations within the field of artificial intelligence.
In preparing the report, the Secretary of Labor is tasked with conducting a series of public hearings or roundtables with various organizations, such as industrial stakeholders, heads of Federal agencies, local educational agencies, institutions of higher education, workforce training organizations, and National Laboratories.
While some language is common, the new House bill defines "artificial intelligence" more succinctly than the earlier twin bills. Specifically, H.R. 4829 characterizes AI as systems that:
(A) think like humans (including cognitive architectures and neural networks);
(B) act like humans (such as passing the Turing test using natural language processing, knowledge representation, automated reasoning, and learning);
(C) think rationally (such as logic solvers, inference, and optimization);
(D) act rationally (such as intelligent software agents and embodied robots that achieve goals via perception, planning, reasoning, learning, communicating, decision-making, and acting); or
(E) automate or replicate intelligent behavior.
If enacted, the Acts and their resulting reports will undoubtedly lead to further discussion regarding artificial intelligence and provide multiple perspectives on how AI might impact society for better and, perhaps, for worse. Additionally, as the bills target the Departments of Commerce and Labor separately, it will be interesting to see how the two departments interpret and act on their respective AI studies.
For additional information regarding this topic, please see:
• H.R.4625 - FUTURE of Artificial Intelligence Act of 2017
• S.2217 - FUTURE of Artificial Intelligence Act of 2017
• H.R.4829 - AI JOBS Act of 2018
• "Lawmakers introduce bipartisan AI legislation," The Hill, December 12, 2017
• "Microsoft Sees Need for AI Laws, Regulations," Industry Week, January 18, 2018
• "Forget Killer Robots—Bias Is the Real AI Danger," MIT Technology Review, October 3, 2017
Aaron,
I suspect that everyone in a science field knows that the future of the United States in commerce will depend on software and biotechnology inventions. These inventions are coming into use everywhere in the US and around the world, but MPEP guidelines have become dysfunctional in disallowing patents to protect these inventions. The scope of what invention can be patented under 37 CFR 101 needs to have broader boundaries. However, the boundaries to Section 101 have narrowed in the last 10 years. A PhD knows that the biotechnology inventions are seldom ordinary occurring biological events. Likewise, a PhD knows that software inventions are not mere mental acts. Improvements beyond what naturally occurs should be patentable. Non-scientists in legal positions of power are making ignorant decisions. If patent litigation lawyers and judges were required to have a science PhD degree, then such legally-profound people would support biotechnology and software patenting instead of destroying it. It has been obvious to young and old alike since WWII that US commerce is lost to off-shore competitors rather easily. US patent laws and courts need to respect the future technology of our modern society.
Posted by: Karl P. Dresdner, Jr., PhD | February 16, 2018 at 09:09 PM
Aaron,
Correction. I meant 35 USC 101.
Posted by: Karl P. Dresdner, Jr., PhD | February 16, 2018 at 11:29 PM
Dr. Dresdner,
I agree with you. On the other hand, unlimited eligibility for process inventions that result only in information would create insoluble tension with other Constitutional rights, could never be reasonably examined, and could never be reliably adjudicated.
There are reasonable compromises available, but policy makers and courts will need to be in the mode of compromise. To me, the difference between human use of information and non-human use of information is THE bright line that should be used for eligibility purposes.
Posted by: Martin H Snyder | February 17, 2018 at 05:34 PM
Mr. Snyder,
Would you care to outline any details of your "parade of horribles?"
Perhaps a thought or two to flesh out each avenue.
An avenue of "insoluble tension with other Constitutional rights" is most likely based on some error of yours as to thinking that perhaps a patent can be infringed outside of its "all elements" rule.
An avenue of "could never be reasonably examined" is most likely not a reasonable approximation vis a vis ANY OTHER type of patent.
An avenue of "could never be reliably adjudicated" appears to be unsupportable (just as vis a vis ANY OTHER type of patent).
It is more likely than not that you are not appreciating exactly what a patent is, what a patent covers, and what would remain safely out of the way of a patent coverage for process inventions (of any sort).
No one (anywhere) is indicating that "process inventions" cover processes totally within the human mind.
Posted by: skeptical | February 17, 2018 at 06:46 PM
Ha. On this very website, two items below, you find this claim, which now moves forward toward inevitable litigation if asserted:
1. A method of identifying and treating a patient undergoing periodic hemodialysis treatments at increased risk for death, comprising:
a) determining at least one clinical or biochemical parameter associated with an increased risk of death of the patient and monitoring said parameter periodically before and/or after the patient is undergoing hemodialysis treatments;
b) determining a significant change in the rate of change of the at least one clinical or biochemical parameter from a retrospective record review of parameter values of the patient determined at prior hemodialysis treatments;
c) identifying the patient as having an increased risk for death because the patient has the significant change in the rate of change of the at least one clinical or biochemical parameter; and
d) treating the patient having an increased risk for death within a sufficient lead time to decrease the patient's risk of death.
This type of "intellectual property" forecloses the basic practice of medicine in relation to dialysis. It's obnoxious to freedom.
As to software patents that can't be examined? There are billions of lines of unpatented code in the world -- for which there are no references. It's akin to only providing a copyright after review of the world's entire literature. It won't fly.
If we want to patent new, useful, non-obvious information, we are going to need some court procedures and limiting doctrines that at least narrow the universe and encourage enough patents to create a body of references that can be examined with some practicality.
The parade of horribles is well known in the wider economy. Everyone knows that.
Posted by: Martin H Snyder | February 17, 2018 at 07:36 PM
"The parade of horribles is well known in the wider economy. Everyone knows that."
No.
That does not provide the light requested for YOUR positions here.
For example, the claim you wish to decry has an element (treatment) that you appear to have overlooked for any of the avenues of your parade.
As well, your "whine" of "This type of "intellectual property" forecloses the basic practice of medicine in relation to dialysis. It's obnoxious to freedom." is a fallacy, as ALL patents foreclose (and only do so for a limited time) that which is claimed. Whether or not that particular foreclosure is to a "basic practice" is a meaningless assertion on your part, as validity is not in question. Further, innovation is indeed PROMOTED when one is faced with such "but this blocks," as it is in the face of "there is no other way" that the very best of innovation brings forth a completely unforeseen other way. YOUR path is the path of indolence and "shrug" of the shoulder - verily, it is the path of ANTI-patent.
As to your confusion between software PATENT and code for copyright, well, you appear to need to understand the basics of BOTH, as each is geared to a separate aspect of protection (as well, your indolence of "but there's no reference" IGNORES the very reason for having these types of protection systems: TO CREATE the references). You confuse what it takes to earn a copyright with what it takes to earn a patent - and do not appear to understand WHY there are those differences.
Please try again (or for the first time).
Posted by: skeptical | February 18, 2018 at 07:40 AM
Martin,
My focus is on the need for a broadening of the kinds of inventions that can be patented under 35 USC 101 so as to meet the growing need for patents to protect the making, using, selling, and importing of new US biotechnology and software (in hardware use). To be allowed by the USPTO, a US patent claim must be novel and non-obvious and useful. A method claim drawn to an invention practiced for decades cannot now be patented. The pending claim is rejected under 35 USC 102(b). For a software invention to be patented, it needs to be used for a purpose and claimed with its use defined in a hardware device. The hardware that is claimed could include a memory chip for storing the software; a processor chip for processing the software in the memory chip and input data from a keyboard; and the processed data could be stored in a memory chip or be sent to a computer monitor, or to a mechanical servo. Thus the software with generic hardware components can be found in the marketplace. Software alone is not used.
Posted by: Karl P. Dresdner, Jr. PhD, US patent agent no. 63,319 | February 18, 2018 at 08:59 AM
Dr. Dresdner,
Merely reciting the use of some item of hardware will not confer eligibility to non-statutory elements. The US Supreme Court has been absolutely firm that drafting conventions cannot create eligibility where a claim is otherwise directed to "abstract ideas."
I'd be interested in your thoughts about this:
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2709289
Posted by: Martin H Snyder | February 18, 2018 at 05:15 PM
Mr. Snyder, you have been drinking the kool-aid of Mr. Heller for too long with your "confer eligibility TO non-statutory elements."
Eligibility is not a "TO element" thing.
Eligibility is to the claim. As a whole.
You keep on wanting to point to a broken scoreboard when the issue is that the scoreboard is broken.
Posted by: skeptical | February 19, 2018 at 06:09 AM
"claim as a whole" is utterly subjective, and merely another shorthand for "I know it when I see it" unless the claim is composed of a single word.
Ironically, just another manipulated scoreboard- which naturally utterly nullifies your beloved "judicial exception to printed matter", which can never be contemplated "as a whole".
Posted by: Martin H Snyder | February 19, 2018 at 10:29 AM
Mr. Snyder,
Would you care to explain how "claim as a whole" is "utterly subjective?"
It appears that you are using words that you simply do not understand.
Further, there is no "nullification" of the exception to printed matter that you allude to here. Claims - including elements and pieces that fit exceptions to judicial doctrines - are contemplated "as a whole" all of the time, as is proper. Again, it appears that you are using legal terms without understanding what those terms mean.
Posted by: Skeptical | February 19, 2018 at 11:57 AM
Please. Separating printed matter from its substrate means taking claims apart. I'm not going to engage in the tedium of arguing that claim analysis is necessarily a continuum from nearly "whole" to the meaning of individual words.
Posted by: Martin H Snyder | February 19, 2018 at 07:35 PM
No one is "separating" anything when the doctrine of printed matter - and its exceptions - are understood in its proper legal sense.
So, "please" and get yourself some legal understanding.
As is, your comments exhibit only the indolence of someone who cannot bother with the understanding required to partake in a meaningful discussion of the legal realm. As to your "not going to engage in the tedium," that translates into your not willing to take that necessary step of understanding. If you are not going to take that necessary step, then please save yourself (and everyone else) the trouble and just don't bother to comment.
Posted by: Skeptical | February 20, 2018 at 08:33 AM
Martin,
Regarding your Feb 18, 2018 statement and question, I provide the following opinion. I believe laws need amending if harmful or dysfunctional. Current boundaries of 35 USC 101 are so. The boundaries of 35 USC 101 should be amended to allow patent claims to well-described, new and nonobvious software inventions useful for electronic processor devices. Now too many issued US patents covering many claims concerning software are being invalidated. Software inventions usefully enable computers to perform innumerable tasks at higher speeds and with more complexity than a human mind or a group of human minds ever can in a timely manner.
Quoting prior decisions by any court is returning to a swamp of narrow-minded conventionalism with mislabels, e.g., "abstract ideas," that are out of date with the need for patent-based fostering of software. The lack of technically qualified lawyers in positions on courts explains their isolated views from reality.
Posted by: Karl P. Dresdner, Jr. PhD, US patent Agent 63,319 | May 09, 2018 at 01:31 PM