By Michael Borella --
After two-and-a-half years of negotiation disrupted by the rise of generative models, the European Parliament and the European Council have reached a provisional agreement on how artificial intelligence (AI) should be regulated within the European Union (EU). The goal is to promote the investment in and use of safe AI that honors fundamental human rights.
As was the case for earlier proposals, AI systems are categorized based on the level of risk that they pose. High-risk AI will be more strictly governed than low-risk AI. For example, a high-risk AI system must undergo a fundamental rights impact assessment before being placed on the market, and may also be subject to enhanced transparency requirements.
AI capabilities viewed as unacceptable will be banned from the EU. These banned capabilities include "cognitive behavioural manipulation, the untargeted scraping of facial images from the internet or CCTV footage, emotion recognition in the workplace and educational institutions, social scoring, biometric categorization to infer sensitive data, such as sexual orientation or religious beliefs, and some cases of predictive policing for individuals."
New in this agreement are a set of provisions addressing large foundation models that can be used for multiple purposes (this includes the current wave of generative AI chatbots capable of producing text, images, and code), as well as cases where these models are integrated into other high-risk systems.
Nonetheless, there are exceptions. Member states are not prevented from using AI for military, defense, or national security purposes. Also, the regulations will not affect AI systems used solely for research or non-professional reasons.
Penalties for violation of the regulations would be based on a percentage of a company's global annual revenue or a fixed amount, whichever is higher. These amounts would vary with the severity of the offense, ranging from 7% or €35 million down to 1.5% or €7.5 million. More proportionate caps may be set for small and medium-sized businesses.
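The "whichever is higher" rule is simple arithmetic, and can be sketched as follows (the function name and the example revenue figures are illustrative, not drawn from the regulation):

```python
def max_fine(annual_revenue_eur: float, pct: float, fixed_eur: float) -> float:
    """Penalty is the higher of a revenue percentage or a fixed amount."""
    return max(annual_revenue_eur * pct, fixed_eur)

# Most severe tier: 7% of revenue or €35 million, whichever is higher.
# For a hypothetical company with €1 billion in annual revenue, the 7%
# figure (€70 million) exceeds the €35 million floor:
max_fine(1_000_000_000, 0.07, 35_000_000)  # €70 million

# For a smaller company with €100 million in revenue, 7% is only €7
# million, so the €35 million fixed amount governs:
max_fine(100_000_000, 0.07, 35_000_000)  # €35 million
```

The same function covers the lower tiers (e.g., 1.5% or €7.5 million) by swapping in the corresponding percentage and fixed amount.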
The details of the regulations are yet to be finalized. It may be weeks or months before the text of the regulations is completed, and even longer before it is ratified.
The proposed legislation establishes the EU's lead in AI regulation. The U.S. is currently nowhere near federal legislation setting forth restrictions on AI development or use. While the Biden administration's recent executive order expressed many of the same concerns, it is questionable whether that order will remain in force if President Biden is not re-elected in November 2024.
Regardless, the EU's stance (and to some extent, that of the Biden administration) is that technology companies cannot be trusted to self-regulate in this space. The dangers are too impactful and the field is too dynamic to leave much room for error. This is in contrast to the "effective accelerationist" movement embraced by some in Silicon Valley, a rather oddball amateur philosophy (backed by an enormous amount of capital) that calls for unrestricted advancement in AI and can be summarized as "Don't regulate us, bro!"