May 7, 2021
The European Commission (the "Commission") recently published its highly anticipated communication and proposal for a "Regulation laying down harmonised rules on artificial intelligence"[1] (the "AI Regulation"). The AI Regulation is the first legal framework anywhere in the world focused solely on artificial intelligence ("AI"), and it bears striking similarities to the GDPR. If adopted as drafted, the AI Regulation would have significant consequences for many organisations that develop, sell or use AI systems, including a new set of legal obligations and a monitoring and enforcement regime with hefty penalties for non-compliance.
At its heart, the AI Regulation is focused on the identification and monitoring of "high-risk" AI systems. For organisations that develop, sell or use AI, the key questions will be whether the AI system in question is likely to be considered "high risk" and, if so, what the AI Regulation would mean for that system if adopted as drafted.
This article concentrates on the key aspects of the AI Regulation and the implications for organisations that provide AI systems that have some degree of nexus with the European Union ("EU").
The AI Regulation governs the "development, placement on the market and use of AI systems in the [EU] following a proportionate risk-based approach"[2]. As a Regulation, it will introduce a "uniform application of the new rules… the prohibition of certain harmful AI-enabled practices and the classification of certain AI systems"[3], which will have direct effect in all EU Member States. The AI Regulation applies across all sectors (public and private) to "ensure a level playing field"[4].
As an EU regulation (rather than a directive), the AI Regulation will be directly applicable and will not require further implementation by Member States. A violation of the AI Regulation could also potentially give rise to civil claims by individuals under Member State law.
The AI Regulation defines AI as "software that is developed with one or more of the techniques and approaches listed in Annex I [of the AI Regulation] and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with"[5].
Recognising the pace of technological development, the EU has attempted to make the definition "as technology neutral and future proof as possible"[6]. Accordingly, Annex I can be "adapted by the Commission in line with new technological developments"[7].
Like the GDPR, the AI Regulation is intended to have extraterritorial effect. Subject to some specific exceptions, the AI Regulation applies to:
The AI Regulation introduces a four-tier system of risk:
Examples of prohibited ("unacceptable risk") AI practices include: (i) subliminal techniques beyond an individual's consciousness used to materially distort their behaviour; (ii) exploiting the vulnerabilities of a specific group of individuals due to their age; (iii) social scoring by public authorities; and (iv) "real-time" remote biometric identification systems in publicly accessible spaces used for law enforcement purposes (subject to limited exceptions).
Examples of "high-risk" AI systems include: (i) "real-time" and "post" remote biometric identification; (ii) evaluating an individual's creditworthiness (except where used by small-scale providers for their own use); and (iii) the use of AI systems in recruitment and promotion (including changes to roles and responsibilities) in an employment context.
"High-risk" AI system requirements and obligations
Chapter 2 of Title III sets out detailed "requirements" for "high-risk" AI systems. Chapter 3 of Title III sets out specific "obligations" on providers, users and other participants across the AI value chain (e.g. importers and distributors).
Providers[8] are responsible for the majority of the specific obligations in relation to "high-risk" AI systems including:
Additional responsibilities of providers, in relation to "high-risk" AI systems, include:
Obligations on other parties in relation to "high risk" AI systems
Chapter 3 of Title III establishes specific obligations for importers (Article 26), distributors (Article 27) and users (Article 29). Further obligations, which broadly cover "distributors, importers, users or any other third-party", are set out at Article 28. These parties will assume the same, extensive, obligations as providers in relation to "high-risk" AI systems if they:
Notifying authorities and conformity assessments
Under Chapter 4 of Title III, Member States are obliged to establish a "notifying authority", responsible for the assessment, designation and notification of "conformity assessment bodies", which carry out independent assessment activities (testing, certification and inspection) of "high-risk" AI systems.
Chapter 5 of Title III sets out the "high-risk" AI system conformity assessment regime under the AI Regulation.
At Article 71, the AI Regulation provides for a GDPR-like sanction regime for non-compliance, with fines calculated as a percentage of a company's total worldwide annual turnover for the preceding financial year:
Notably, the AI Regulation does not provide for a specific right to compensation (i.e. an equivalent of Article 82 GDPR), which may provide some comfort. Of course, this does not exempt an AI system caught by the AI Regulation from the GDPR, under which the private right of action in Article 82 remains available.
Each Member State must designate at least one national competent authority to supervise the AI Regulation's application and implementation and carry out market surveillance activities. It is likely that these powers will be designated to existing regulatory bodies such as data protection authorities.
Like the GDPR, the AI Regulation would see the establishment of an 'overarching' European Artificial Intelligence Board (the "EAIB"), the AI equivalent of the European Data Protection Board, to facilitate the smooth, effective and harmonised implementation of the new rules. The EAIB would consist of representatives of the national competent authorities, the European Data Protection Supervisor, and the Commission.
To echo the comments of the Commission's Executive Vice-President, Margrethe Vestager, the AI Regulation is nothing short of "a landmark proposal". As drafted, the AI Regulation contains extensive regulatory compliance implications for organisations across a wide range of sectors.
As for next steps, the European Parliament and the Council (representing the Member States) will consider the Commission's proposal under the ordinary legislative procedure. During that process, the proposal is likely to be subject to extensive scrutiny and amendment. Once adopted, the final AI Regulation will be directly applicable across the EU. The AI Regulation provides for a two-year transition period before it applies, which means the new law could apply as early as 2024.
Although the AI Regulation is currently in draft form, it would be sensible for AI providers and other participants in the AI value chain (particularly those who may fall into the "high-risk" category) to acquaint themselves with its proposed requirements: based on the political "mood music", regulation of AI along these lines is likely on the horizon.
The GDPR is well known for spearheading the global privacy "revolution". Time will tell whether the AI Regulation, which draws clear influence and inspiration from the GDPR, serves as the catalyst for a new dawn of international AI regulation; we suspect that it will.
[1] Explanatory Memorandum to the proposal (page 1).
[2] Explanatory Memorandum to the proposal (page 3).
[3] Explanatory Memorandum to the proposal (page 7).
[4] Explanatory Memorandum to the proposal (page 6).
[5] Article 3(1) of the AI Regulation.
[6] Explanatory Memorandum to the proposal (page 12).
[7] Explanatory Memorandum to the proposal (page 12).
[8] 'Provider' means a natural or legal person, public authority, agency or other body that develops an AI system, or that has an AI system developed, with a view to placing it on the market or putting it into service under its own name or trademark, whether for payment or free of charge.