Three’s Company: European Parliament Adopts Its Version of AI Act, Commencing Negotiations with Council and Commission
An overwhelming majority of the European Parliament (Parliament) recently voted to pass the Artificial Intelligence Act (AI Act), marking another major step toward the legislation becoming law. As we previously reported, the AI Act regulates artificial intelligence (AI) systems according to risk level and imposes highly prescriptive requirements on systems considered to be high-risk. The AI Act has a broad extraterritorial scope, sweeping into its purview providers and deployers of AI systems regardless of whether they are established in the EU. Businesses serving the EU market and selling AI-derived products or deploying AI systems in their operations should continue preparing for compliance.
Where are we in the legislative process? The European Commission (Commission) began the process by proposing legislation (EC Proposal) in April 2021.1 The Council of the European Union (Council) then adopted its own common position (Common Position) on the AI Act in December 2022.2 On June 14, 2023, the Parliament created a third version of the legislation by adopting a series of 771 discrete amendments to the EC Proposal. Now, the Parliament, Council, and Commission have embarked on the trilogue, a negotiation among the three bodies to arrive at a final version for ratification by the Parliament and Council. They aim for ratification before the end of 2023 with the AI Act to come into force two (or possibly three) years later.
Below, we summarize the major changes introduced by the Parliament and guide businesses on preparing for compliance with the substantial new mandates the legislation will impose.
Key Takeaways
[Table: Key Takeaways]
The Parliament’s Major Changes
The Parliament introduced several important changes to the AI Act.
A. Narrower Scope of Definition of AI System
Defining “AI system” has been one of the most controversial aspects of the legislative process because the definition will determine the legislation’s reach. The Commission’s initial proposed definition was criticized as overly broad because it could have reached statistical processes and other techniques in wide use that fall outside the common conception of “AI.” The Council attempted to address these criticisms by narrowing the scope, but its efforts also were criticized, in part for lacking “interoperability” because they diverged from the Organization for Economic Cooperation and Development (OECD) definition to which EU members of the OECD had agreed several years ago. The Parliament resolved that problem by adopting the OECD definition:
“Artificial intelligence system” (AI system) means a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate output such as predictions, recommendations, or decisions influencing physical or virtual environments.3
The Parliament also replaced the confusing term “user” with the more precise term “deployer.”4
B. Expansion of Prohibited Practices
The Parliament expanded the list of prohibited practices proposed by the Council and the Commission, adding the following:
- “The use of AI systems for ‘real-time’ remote biometric identification of natural persons in publicly accessible spaces”*5
- “Post” remote biometric identification systems, except where prior judicial authorization is obtained and use is “strictly necessary” for a targeted search related to a serious crime6
- AI systems used by law enforcement to assess the likelihood of natural persons offending or reoffending, or the occurrence or reoccurrence of an actual or potential criminal offense(s), based on profiling*7
- Indiscriminate and untargeted scraping of biometric data from the internet or closed-circuit television footage to create or expand facial recognition databases8
- AI systems that recognize emotions or physical or physiological features when deployed for law enforcement or border control or in workplaces or educational institutions*9
- AI systems that categorize natural persons by known or inferred sensitive or protected characteristics. The characteristics enumerated include “gender, gender identity, race, ethnic origin, migration or citizenship status, political orientation, sexual orientation, religion, disability or any other grounds on which discrimination is prohibited under Article 21 of the Charter of Fundamental Rights of the European Union” or under article 9 of the General Data Protection Regulation (GDPR)10
(Asterisked practices were classified by the Council and the Commission as high-risk, but not prohibited, use cases.)
These additional prohibitions set up perhaps the toughest political dispute for resolution in the trilogue. Many of the national governments represented on the Council want greater freedom to deploy remote biometric-identification systems for law enforcement purposes than the Parliamentary majority, which is more protective of civil liberties, would allow. Press reports suggest the Parliament may yield on this point in exchange for other concessions, rather than see the legislation fail to emerge from the trilogue. If the trilogue breaks down, however, it most likely will be over this issue.
For a discussion of how the Council changed the Commission’s proposed list of prohibited practices, please see our prior Advisory.
C. Clarification of Prohibited Practices
The Parliament further clarified some of the practices it wishes to proscribe. First, the Parliament specified that the prohibition on AI systems with the objective or effect of materially distorting human behavior includes “neuro-technologies assisted by AI systems that are used to monitor, use, or influence neural data gathered through brain-computer interfaces insofar as they are materially distorting the behavior of a natural person in a manner that causes or is likely to cause that person or another person significant harm.”11
Second, the Parliament enlarged the prohibition against distorting behaviors or exploiting the vulnerabilities of certain groups of people to include exploitation based on “known or predicted personality traits,”12 in addition to age, physical or mental incapacity, and social or economic situation. The Parliament also explained that “it is not necessary for the provider or the deployer to have the intention to cause the significant harm, as long as such harm results from the manipulative or exploitative AI-enabled practices.”13
(In several charts below, we summarize selected aspects of the AI Act, showing the Commission proposal in black; Council changes in blue; Parliament changes in red; and shared Council and Parliament changes in purple.)
[Table: Comparison of the EC Proposal, Council’s Common Position, and the Parliament Position: Prohibited Uses]
D. High-Risk AI Use Cases
In addition to relocating multiple use cases to the prohibited uses list, the Parliament made several further modifications to the list of high-risk use cases. The Parliament added AI systems intended to influence voter behavior or the outcome of an election (except AI systems where natural persons are not directly exposed to outputs — principally internal campaign-management tools).14 It also included the AI systems used by “very large online platforms” (as designated under Digital Services Act article 33) to recommend user-generated content.15
Some of the high-risk categories are quite broad. Recognizing that not all use cases in those categories actually present significant risks, the Parliament joined the Council in exempting from treatment as high-risk those applications that are not likely to lead to a significant risk to health, safety, or fundamental rights.16 The Parliament also proposed exempting critical infrastructure uses that do not pose a significant risk to the environment.17 The Parliament added a process for providers to apply to take advantage of these exemptions.18
For a discussion of how the Council changed the Commission’s proposed list of high-risk use cases, please see our prior Advisory.
[Table: Comparison of the EC Proposal, Council’s Common Position, and the Parliament Position: High-Risk Uses]
E. Requirements for High-Risk AI Systems
Once an AI system is classified as high-risk, the AI Act subjects it to numerous detailed requirements. The Parliament further clarified and expanded existing obligations for providers and deployers of high-risk systems and other parties, including with respect to risk management systems;19 data sets used for training, validation, and testing;20 and recordkeeping and technical documentation requirements.21 Recognizing that deployers are in the best position to identify risks related to their high-risk systems, the Parliament proposed to require them to conduct fundamental rights22 impact assessments prior to use of any such system,23 in addition to any data protection impact assessments that may be required under the GDPR.24
The Parliament’s clarifications of the EC Proposal, like the Council’s, would make it easier for providers and deployers to comply with the requirements for high-risk AI systems, although the two co-legislators took slightly different approaches. How burdensome compliance ultimately proves will depend on the precise details of the legislation that emerges from the trilogue.
For a discussion of how the Council changed the Commission’s requirements for high-risk use cases, please see our prior Advisory.
[Table: Comparison of the EC Proposal, Council’s Common Position, and the Parliament Position: Requirements for High-Risk AI Systems, covering compliance obligations of providers and others; human oversight; documentation, disclosure, and explainability; robustness, accuracy, and cybersecurity; retention of records and data; transparency requirements for certain high-risk and low-risk AI systems; and maximum penalties for violations]
F. New Plan to Address Foundation Models and Generative AI
During the more than two years since the Commission first proposed the AI Act, AI technology has advanced dramatically. The speed of these changes is reflected in the evolution of the legislation from version to version. For example, the Council introduced provisions on “general purpose AI,” which the Commission had not contemplated. Likewise, ChatGPT™ burst onto the scene around the time the Council completed its work on the Common Position. Having had several more months to consider the impact of foundation models and generative AI, the Parliament was able to address these more recent technological developments.
The Parliament’s Position included several new provisions related to general purpose AI systems, including a revised definition: “[a]n AI system that can be used in and adapted to a wide range of applications for which it was not intentionally and specifically designed.”25 The Parliament also proposed restrictions on foundation models and on generative AI, a subcategory of foundation models, which are themselves a type of general purpose AI.
The Parliament defined a foundation model as “[a]n AI system model that is trained on broad data at scale, is designed for generality of output, and can be adapted to a wide range of distinctive tasks.”26 The Parliament proposed a number of obligations for providers of foundation models, similar to the regime the AI Act establishes for providers of high-risk AI systems.27 The requirements include:
- Reducing reasonably foreseeable risks
- Establishing data governance measures to assess the suitability of datasets, protect against bias, and mitigate risks
- Achieving appropriate levels of performance, predictability, interpretability, corrigibility, safety, and cybersecurity
- Designing models capable of measuring their environmental impact
- Creating technical documentation
- Establishing a quality management system
- Registering the model in the EU database28
Finally, the Parliament defined generative AI as “foundation models used in AI systems specifically intended to generate, with varying levels of autonomy, content such as complex text, images, audio or video.”29 In addition to complying with the requirements on foundation models, generative AI providers would have to:
- Observe transparency requirements
- Safeguard against generating unlawful content
- Publish summaries of their use of copyright-protected materials in training data30
Because the Parliament took the fullest and most sophisticated approach to general purpose AI, foundation models, and generative AI, its proposals likely will serve as the basis for the trilogue negotiations on these points.
G. Increased Opportunity for Redress
The Parliament proposed to give individuals and groups additional avenues for redress from asserted violations.31 It introduced a new complaint process, which allows individuals or groups to file complaints with the relevant national supervisory authority alleging infringement of the AI Act.32 Complaints may be lodged without prejudice to any other administrative or judicial remedy.33 National supervisory authorities would be required to keep complainants informed throughout the review process and to notify them of the outcome, including whether a judicial remedy is available.34
H. Administration of the AI Act
Administration of the AI Act has been a source of debate throughout the negotiation process. The Parliament proposed creating the AI Office,35 an independent body intended to support, advise, and cooperate with member states on various matters, including the coordination of cross-border cases.36 The AI Office would replace the European Artificial Intelligence Board proposed by the Commission and the Council, which was intended to function as a cooperation mechanism responsible for facilitating the implementation of the AI Act.37 How to structure administration inside each member state also is a major difference among the three versions of the legislation. Participants in the trilogue will have to balance budgetary and resource concerns, competing bureaucratic interests, and disagreements over how much to centralize control and how much to disperse responsibility among and within the member states.
I. Regulatory Sandboxes and Additional Support for Smaller Businesses
Like the Council, the Parliament added support for innovation — especially for smaller businesses. The Parliament would require member states to establish “regulatory sandboxes” (the Council and Commission made this optional) to allow innovative AI systems to be developed, trained, tested, and validated under supervision by regulatory authorities before commercial marketing or deployment. The Parliament also provided more elaborate guidance to the member states about what the sandboxes may or must entail, including the possibility of subnational or cross-border sandboxes. In addition, the Parliament would permit the Commission, as well as the European Data Protection Supervisor, to create sandboxes.38
The Parliament also expanded on the Council’s proposals for relieving burdens on smaller businesses.39 In addition, the Parliament sought to protect small and medium enterprises and startups from certain unfair contractual terms unilaterally imposed by providers of high-risk AI systems on deployers or downstream providers.40
J. Changes to Potential Penalties
The Parliament proposed even higher potential penalties for violations of the AI Act’s prohibitions of certain practices. Under Parliament’s Position, the maximum fine would be €40 million or, if the offender is a company, up to 7% of its global annual revenue for the preceding financial year, whichever is higher.41 These amounts reflect an increase from the originally proposed maximum fine of €30 million, or up to 6% of global annual revenue.42 (Small and medium enterprises should hope the Council prevails with its proposal that penalties for them be capped at 3% of global annual revenue.)43 However, the Parliament also decreased the maximum fine for violations of provisions other than those related to prohibited practices, data governance, and transparency to €10 million or 2% of global annual revenue, whichever is higher.44
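To illustrate the “whichever is higher” mechanics, here is a minimal sketch in Python of how the cap for prohibited-practice violations under the Parliament’s Position would be computed. The function name and the EUR 2 billion revenue figure are hypothetical illustrations, not anything drawn from the legislation itself.

```python
def max_fine_eur(global_annual_revenue_eur: float,
                 flat_cap_eur: float = 40_000_000,   # Parliament Position: EUR 40 million
                 revenue_pct: float = 7.0) -> float:  # Parliament Position: 7% of revenue
    """Maximum fine for a prohibited-practice violation under the
    Parliament's Position: the greater of a flat cap or a percentage of
    global annual revenue for the preceding financial year."""
    return max(flat_cap_eur, global_annual_revenue_eur * revenue_pct / 100)

# Hypothetical company with EUR 2 billion in global annual revenue:
# 7% of revenue (EUR 140 million) exceeds the EUR 40 million flat cap.
print(f"EUR {max_fine_eur(2_000_000_000):,.0f}")  # EUR 140,000,000
```

The same “greater of” structure applies to the lower tiers described above (e.g., €10 million or 2% of global annual revenue), so only the default parameters would change.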
A Practical Approach to Compliance
The AI Act is one response, albeit a prominent one, to the risks posed by AI systems. These risks include inaccuracy; bias; lack of transparency, explainability, and interpretability; threats to privacy and cybersecurity; undermining of intellectual property rights; and harms to competition, all magnified by rapid and massive increases in AI systems’ power.
Businesses will be managing these risks for years to come. How these risks manifest themselves will vary from company to company, even within a sector, depending on how each seeks to capitalize on the benefits and efficiencies afforded by the emerging technology, its risk appetite, its corporate culture, and other factors. Where a company sits on the value chain (upstream developer, downstream developer, deployer, etc.) also will have a significant impact. Whatever the case may be, businesses operating in Europe (or whose customers operate in Europe using their AI systems or those systems’ outputs) should get a jump on preparing to comply with the AI Act.
With at least two years until the AI Act takes effect, businesses have some breathing room. While best practices are constantly evolving, early steps in the right direction will lower the likelihood that an expensive course correction will be needed later. Once the AI Act comes into force, companies will only be allowed to introduce AI systems for which the development process complies with the legislation’s requirements. Businesses working on AI systems they anticipate launching after the effective date should ensure now that their development processes satisfy those requirements unless they are willing to retrofit before launching — assuming retrofitting is even technically feasible.
Moreover, existing laws in a number of jurisdictions, including the United States, the United Kingdom, Japan, and the EU itself, already address many of the harms at which the AI Act is aimed. In other words, even though the AI Act may not take effect for a couple of years, companies developing, distributing, procuring, or deploying AI systems have current obligations to ensure they do not violate privacy, antidiscrimination, consumer-protection, and other laws on the books. Given the various existing, new, and at times overlapping mandates, businesses should not wait any longer before commencing their compliance efforts.
An important first step is to establish policies that align legal, privacy, marketing, sales, development, and procurement professionals across all relevant departments within your organization, put clear guardrails in place with respect to AI systems, and create procedures to mitigate and manage risks. Providers of high-risk AI systems and foundation models, including generative AI systems, should also consider what changes they may need to make to comply with the AI Act. While the exact contours of the final legislation remain in doubt, enough is apparent for businesses to begin this work, including drafting technical documentation, creating recordkeeping practices, and preparing for various regulatory reporting responsibilities, among other tasks.
For a comprehensive approach to managing AI risks, consult the Artificial Intelligence Risk Management Framework (AI RMF) released by the U.S. National Institute of Standards and Technology (NIST).45 Accompanying the AI RMF is NIST’s AI RMF Playbook.46 The AI RMF Playbook provides a recommended program for governing, mapping, measuring, and managing AI risks. While prepared by a U.S. agency, the AI RMF and AI RMF Playbook are intended to “[b]e law- and regulation-agnostic.”47 They should support a global enterprise’s compliance with laws and regulations across jurisdictions.
Finally, businesses should continue to monitor regulatory developments. They should track the AI Act trilogue as it unfolds and be prepared to refine their compliance preparations as the legislation’s final form takes shape. Likewise, businesses should pay attention as lawmakers (and the plaintiffs’ bar) in the United States and globally scramble to respond to the risks presented by AI systems. Whether through new horizontal (cross-sector) legislation like the AI Act or through adaptation of existing sectoral laws, legislators and regulators around the world are striving to meet this moment with the right balance between precautions and promotion of innovation. The differences among jurisdictions may prove a challenge to companies operating globally. For now, though, firms can best prepare themselves by focusing on identifying, mitigating, and managing the risks arising from the AI systems they develop, distribute, procure, and deploy. Successful attention to these processes will go a long way toward ensuring compliance with the various regimes that are emerging.
© Arnold & Porter Kaye Scholer LLP 2023 All Rights Reserved. This Advisory is intended to be a general summary of the law and does not constitute legal advice. You should consult with counsel to determine applicable legal requirements in a specific fact situation.
- Commission Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts, COM (2021) 206 final (Apr. 21, 2021) (EC Proposal).
- Council Common Position, 2021/0106 (COD), Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts, General Approach (Common Position).
- Amendments adopted by the European Parliament on June 14, 2023 on the proposal for a regulation of the European Parliament and of the Council on laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts, P9_TA(2023)0236, Amendment 165, art. 3(1)(1) (Parliament Position); see also id., Amendment 18, recital 6.
- Id., Amendment 172, art. 3(1)(4).
- Id., Amendment 220, art. 5(1)(d); see also id., Amendment 41, recital 18.
- Id., Amendment 227, art. 5(1)(dd); see also id., Amendment 41, recital 18.
- Id., Amendment 224, art. 5(1)(da); see also id., Amendment 50, recital 26 a.
- Id., Amendment 225, art. 5(1)(db); see also id., Amendment 51, recital 26 b.
- Id., Amendment 226, art. 5(1)(dc); see also id., Amendment 52, recital 26 c.
- Id., Amendment 217, art. 5(1)(ba); see also id., Amendment 39, recital 16 a.
- Compare id., Amendment 38, recital 16 with Common Position recital 16.
- Compare id., Amendment 38, recital 16 with Common Position recital 16.
- Compare id., Amendment 38, recital 16 with Common Position recital 16.
- Id., Amendment 739, Annex III (1)(8)(aa); see also id., Amendment 72, recital 40 a.
- Id., Amendment 740, Annex III (1)(8)(ab); see also id., Amendment 73, recital 40 b.
- Parliament Position, Amendment 596, art. 65(1); see also id., Amendment 60, recital 32.
- Id., Amendment 235, art. 6(2)(a).
- Id., Amendment 261, art. 9(1); see also id., Amendment 76, recital 42.
- Id., Amendment 288, art. 10(3); see also id., Amendment 78, recital 44.
- Id., Amendment 336, art. 16(1)(c); see also id., Amendment 337, art. 16(1)(d); Amendment 81, recital 46.
- Id., Amendment 413, art. 29 a; see also id., Amendment 92, recital 58 a.
- Id., Amendment 410, art. 29(6).
- Id., Amendment 169, art. 3(1)(1)(d).
- Id., Amendment 168, art. 3(1)(1)(c); see also id., Amendment 99, recital 60 e.
- Id., Amendment 399, art. 28 b.
- Id., Amendment 628, art. 68 a.
- Id., Amendment 629, art. 68 b.
- Id., Amendment 628, art. 68 a.
- Id., Amendment 525, art. 56(1).
- Id., Amendment 525, art. 56(1); see also id., Amendment 529, art. 56 b.
- EC Proposal art. 56; Common Position art. 56.
- Parliament Position, Amendment 289, art. 53(1); see also id., Amendment 490, art. 53(1)(a); Amendment 491, art. 53(1)(b); Amendment 116, recital 71.
- Id., Amendment 517, art. 55; see also id., Amendment 518, art. 55(1)(a); Amendment 519, art. 55(1)(b); Amendment 520, art. 55(1)(c); Amendment 521, art. 55(1)(ca); Amendment 522, art. 55(2).
- Id., Amendment 398, art. 28 a.
- Id., Amendment 647, art. 71(3).
- Parliament Position, Amendment 651, art. 71(4).
- U.S. Nat’l Inst. of Standards & Tech., Artificial Intelligence Risk Management Framework (AI RMF 1.0) (Jan. 2023), available here.
- See NIST, AI RMF Playbook, available here.