By Michael Borella --
After using a large language model, such as ChatGPT, for a while, it is not hard to imagine an array of nightmarish scenarios that these generative artificial intelligence (AI) programs could bring about. While ChatGPT and its emerging rivals currently have "guardrails" -- ethical limits on what they will do in response to a prompt -- the bounds thereof are not well understood. Through clever prompting, it is not hard to convince the current iteration of ChatGPT to do away with certain guardrails from time to time. Further, the companies behind these models have not defined the extent of the guardrails, while the very structures underlying the models are well known to behave in unpredictable ways. Not to mention what might happen if a "jailbroken" large language model is ever released to the public.
As an example, a user might ask the model to describe terrorist attack vectors that no human has ever previously conceived of. Or, a model might generate software code and convince a gullible user to download and execute it on their computer, resulting in personal financial information being sent to a third party.
Perhaps one of the most relevant risks of large language models is that once they are implemented and deployed, the marginal cost of creating misinformation becomes close to zero. If a political campaign, interest group, or government wishes to inundate social media with misleading posts about a public figure, a policy, or a law, it will be able to do so at volume without having to employ a roomful of humans.
In 2021, the European Commission of the European Union (EU) proposed harmonized rules for the regulation of AI. The Commission recognized both the perils and the benefits of AI and attempted to come up with a framework for regulation that employs oversight in proportion to the specific dangers inherent in certain uses of AI. The resulting laws enacted by member states would potentially have the Brussels Effect, in that EU regulation of its own markets becomes a de facto standard for the rest of the world. This is largely what happened for the EU's General Data Protection Regulation (GDPR) laws.
But very few people foresaw generative AI or the meteoric rise of ChatGPT at the end of 2022. Thus, the Commission is in the process of re-evaluating its rules in view of these paradigm-breaking technologies.
The Commission's proposal places all AI systems into one of three risk levels: (i) unacceptable risk, (ii) high risk, and (iii) low or minimal risk. The amount of regulation would be the greatest for category (i) and the least (e.g., none) for category (iii).
Uses of AI that create an unacceptable risk include those that violate fundamental rights, manipulate individuals subliminally, exploit specific vulnerable groups (e.g., children and persons with disabilities), engage in social scoring (evaluating the trustworthiness of persons based on their social behavior), and facilitate real-time biometric recognition for purposes of law enforcement. These uses would be prohibited.
A high risk AI may be classified as such based on its intended purpose and modalities of use. There are two main types of high risk systems: (i) those intended to be used as safety components of products (e.g., within machinery, toys, radio equipment, recreational vehicles, and medical devices), and (ii) other systems explicitly listed (e.g., involving biometrics, critical infrastructure, education, employment, law enforcement, and immigration). These categories are quite broad and would impact many diverse industries. The proposal sets forth detailed legal requirements for such systems relating to data governance, documentation and record keeping, transparency and provision of information to users, human oversight, robustness, accuracy, and security, as well as conformity assessment procedures.
Regarding low or minimal risk AI systems, their use would be permitted with no restrictions. However, the Commission envisions these systems potentially adhering to voluntary codes of conduct relating to transparency concerns.
To that point, the proposal also states that "[t]ransparency obligations will apply for systems that (i) interact with humans, (ii) are used to detect emotions or determine association with (social) categories based on biometric data, or (iii) generate or manipulate content ('deep fakes')." In these situations, there is an obligation to disclose that the content has been machine-generated in order to allow the users to make informed choices.
Currently, the Commission is considering whether to place ChatGPT and its ilk in the high risk category, thus subjecting them to significant regulation. There has been pushback, however, from parties who believe that the regulations should distinguish between harmful uses of these models (e.g., spreading misinformation) and minimal-risk uses (e.g., coming up with new recipes, composing funny poems). In other words, the amount of regulation applied to ChatGPT should vary based on its use -- an aesthetically pleasing goal but one that would be difficult to carry out in practice because of the model's broad scope and general applicability.
Whether this results in the proposed regulations being delayed and/or rewritten remains to be seen. The Commission will be taking up the issue.
I would point out that any legislation on the EU side as to "harmful uses" may well NOT be amenable to US legislation, given our First Amendment.
The entire notion of "spreading disinformation" SHOULD BE SEEN as a logistical/political/ideological swampland.
As we 'learn' more and more about such things as COVID and the like, what was largely denigrated as "conspiracy theory" and "disinformation" has turned out to be plainly factual information.
Posted by: skeptical | May 01, 2023 at 10:50 AM
@skeptical
Isn't "disinformation" meaningless for generative AI ? For a generative AI, truth does not exist.
As to "harmful uses", I assume a generative AI has built-in moderation algorithms which block offensive language and the like for obvious reasons.
Posted by: francis hagel | May 03, 2023 at 10:03 AM
francis,
I cannot agree.
Generative AI may well yield a result that is a factual dissertation, and thus "truth" very much is reachable.
As for "built-in," these have been labeled "guard rails," and -- like it or not -- have been shown to not only be rather easily circumventable, but with the proliferation of AI tools (including ones that are 'let loose' and interact outside of guard rails and bootstrap with OTHER AI tools, there is no doubt whatsoever that "harmful uses" will abound.
Quite in fact, several notable pioneers in the AI field have been quite public about this in just the last few days.
Posted by: skeptical | May 03, 2023 at 04:58 PM
There is a problem with your conclusion that truth is reachable with generative AIs: you have no access to the sources, so the output is not verifiable.
Posted by: Francis hagel | May 04, 2023 at 08:00 AM
Francis,
Access to sources is not a constraint on the verifiability of an outcome.
Quite in fact, most accepted norms of verifiability encourage second source verifications.
Regardless - this has zero effect on whether or not truth is in fact reachable. There is NO problem with my conclusion.
Posted by: skeptical | May 05, 2023 at 10:39 AM