EU Draft Regulation for Regulating AI
Monday 2 August 2021
Blanca Escribano
Ernst & Young Abogados, Madrid
blanca.escribano.canas@es.ey.com
Introduction
The European Commission has taken a step forward in its strategy aimed at achieving a trustworthy artificial intelligence (AI) environment within the European Union (EU).
On 21 April 2021, the European Commission published its proposed Regulation on Artificial Intelligence (draft Regulation),[1] together with a Communication titled ‘Fostering a European approach to Artificial Intelligence’.[2]
The draft Regulation follows several preparatory acts, including a White Paper,[3] all of which were based on the 2018 Communication on AI[4] and the High-Level Experts Group (AI HLEG) Guidelines on Trustworthy AI,[5] which are currently applicable and constitute a useful framework for organisations participating in the AI ecosystem.[6]
The Regulation is horizontal, with a cross-sector approach, but sector-specific regulations may include measures for different vertical markets. Some of the challenges posed by AI, notably liability, IP rights and certain key ethical issues, were the subject of resolutions passed by the European Parliament in October 2020.[7] Consequently, those matters are not addressed in the draft Regulation, despite the extensive emphasis placed on ethics in the preparatory work.
The draft Regulation
The Regulation will apply to AI used or placed on the market in the EU, irrespective of whether the providers are based within or outside the EU.
The draft Regulation applies to the ways in which AI is used, not to the technology itself. It provides a single, comprehensive definition of AI which is intended to be future-proof and technology-neutral:
‘Software that is developed with one or more of the techniques and approaches listed in Annex I [broadly speaking, machine learning, logic and statistical approaches] and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.’
Certain types of AI do not fall within the scope of the draft Regulation. These are: (1) legacy AI;[8] (2) AI systems which are components of the large-scale government IT systems established by EU laws in the areas of freedom, security and justice, where those systems have been placed on the market or put into service before the date falling 12 months after the Regulation’s application date; and (3) AI systems used for military purposes, and AI systems used by public authorities in third countries or by international organisations acting in the framework of international agreements concluded at national or EU level for law enforcement and judicial cooperation with the EU or with its Member States.
The Regulation is extraterritorial in scope. It will apply not only within the EU, but also to providers and users established or located outside EU territory that place on the market or put into service AI systems within the EU, or whose AI systems produce output that is used in the EU.
The draft Regulation sets out obligations across stakeholders throughout the entire value chain. This includes not only providers[9] bringing AI tools to market or implementing AI systems in the EU but also manufacturers, distributors, importers and users[10] of such AI systems.
Any stakeholder in the value chain will be considered a provider in any of the following circumstances: if it places the AI system on the market under its own name or trademark; if it modifies the intended purpose of the AI system; or if it makes a substantial modification to the system. In the latter two cases, the initial provider will no longer be responsible.
Requirements under the draft Regulation
AI systems will be subject to different levels of requirements, or prohibited outright, depending on the risks they pose to the health, safety and fundamental rights of citizens within the EU.
The Regulation classifies AI into four groups: banned, high-risk, low-risk and minimal-risk. The severity of the regulatory approach depends on the classification of the AI in question. The proposed legislation sets out a regulatory structure which prohibits some uses of AI,[11] heavily regulates high-risk uses and lightly regulates less risky AI systems.[12]
Due to this risk-based approach, most of the requirements outlined in the draft Regulation refer to high-risk AI.[13] The classification of an AI system as high-risk is based on its intended purpose, in line with product safety legislation. The classification depends not only on the function the system performs but also on the specific purpose and the processes for which the system is used.
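By way of illustration only, the following is a minimal Python sketch of how an organisation might model this four-tier, purpose-driven classification when triaging its AI inventory. The tier names, the example purposes and the mapping are illustrative assumptions, not text from the draft Regulation; a real assessment must be made against the prohibited practices and Annexes II and III.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative four-tier classification mirroring the draft Regulation."""
    BANNED = "prohibited practice"   # eg subliminal manipulation, social scoring
    HIGH = "high-risk"               # Annex II safety components / Annex III areas
    LOW = "low-risk"                 # transparency obligations only
    MINIMAL = "minimal-risk"         # no additional obligations

# Hypothetical mapping of intended purposes to tiers, for first-pass triage only.
PURPOSE_TIERS = {
    "social scoring by public authorities": RiskTier.BANNED,
    "CV screening for recruitment": RiskTier.HIGH,   # Annex III: employment
    "customer service chatbot": RiskTier.LOW,        # must disclose it is an AI
    "spam filtering": RiskTier.MINIMAL,
}

def classify(intended_purpose: str) -> RiskTier:
    """Return the risk tier for a known intended purpose.

    Unknown purposes are flagged rather than defaulted, since classification
    turns on intended purpose and must be assessed case by case.
    """
    try:
        return PURPOSE_TIERS[intended_purpose]
    except KeyError:
        raise ValueError(f"unclassified purpose: {intended_purpose!r}") from None
```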
High-risk AI is subject to meeting certain obligations, namely: risk management,[14] data governance, technical documentation, record keeping (traceability), transparency and provision of information to users, human oversight, accuracy, cybersecurity[15] and robustness.
It is worth highlighting that the obligations and requirements are addressed not only to providers of AI systems but also to stakeholders who use those systems or are part of the value chain (manufacturers, importers, distributors).
Penalties
An antitrust/GDPR-style sanctioning regime is proposed, with fines of up to €30m or six per cent of global annual turnover.
Non-compliance with the requirements would expose an organisation to financial penalties of up to €30m, or up to six per cent of total annual global turnover for the preceding financial year, whichever is greater. At the lower end of the scale, penalties may reach €10m, or up to two per cent of total annual global turnover, with an intermediate tier of up to €20m or up to four per cent of total annual global turnover.
The maximum sanction is reserved for infringements of the prohibition on outlawed AI systems/practices and for non-compliance with the data governance requirements. The second tier of sanctions will be imposed for non-compliance with any other requirements or obligations under the draft Regulation. Finally, the lowest tier applies to the supply of incorrect, incomplete or misleading information to notified bodies and national competent authorities in response to a request.
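As a worked illustration of the tiering, the short sketch below computes the applicable cap for each tier as the greater of the fixed amount and the turnover-based percentage. The tier labels are shorthand for the infringement categories described above, and applying the ‘whichever is greater’ rule at every tier is an assumption drawn from the wording of the top tier.

```python
# Fine caps per tier: (fixed amount in EUR, share of total annual global
# turnover for the preceding financial year). Labels are shorthand only.
TIERS = {
    "prohibited AI / data governance": (30_000_000, 0.06),
    "other requirements or obligations": (20_000_000, 0.04),
    "incorrect, incomplete or misleading information": (10_000_000, 0.02),
}

def max_fine(tier: str, global_turnover_eur: float) -> float:
    """Return the maximum fine: the greater of the fixed cap and the
    turnover-based cap (assumed here to apply at every tier)."""
    fixed_cap, turnover_share = TIERS[tier]
    return max(fixed_cap, turnover_share * global_turnover_eur)

# Example: a group with EUR 2bn turnover breaching a prohibited practice
# faces a cap of max(EUR 30m, 6% x EUR 2bn) = EUR 120m.
print(max_fine("prohibited AI / data governance", 2_000_000_000))  # 120000000.0
```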
All relevant circumstances will be considered when deciding the amount of the fine, with due regard given to: the nature, gravity and duration of the infringement and its consequences; whether administrative fines have already been applied by other authorities to the same operator for the same infringement; and the size and market share of the operator committing the infringement.
Data governance
Data governance will be taken to a new level: it will now need to be more comprehensive, being subject not only to GDPR requirements but also to this new AI Regulation, with its risk of greater sanctions.
The data governance requirement is key. Data is used for training, validating and testing the AI, and it will define the legality (and the ethics) of the AI system. For this reason, infringing the data governance requirement triggers the highest sanction set out in the draft Regulation.
The draft Regulation sets out several characteristics of data governance, mandating that it shall include certain quality criteria (impacting design choices, collection, formulation of assumptions, prior assessment, examination in view of possible bias and data gaps) and technical limitations.
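For teams operationalising this requirement, the following is a minimal sketch of a data governance checklist built from the criteria above; the field names are a paraphrase for illustration, not the draft Regulation’s wording.

```python
from dataclasses import dataclass, fields

@dataclass
class DataGovernanceCheck:
    """Checklist paraphrasing the draft Regulation's data quality criteria."""
    design_choices_documented: bool = False
    collection_process_documented: bool = False
    assumptions_formulated: bool = False
    prior_assessment_done: bool = False
    bias_examined: bool = False
    data_gaps_identified: bool = False

def outstanding_items(check: DataGovernanceCheck) -> list[str]:
    """Names of the criteria not yet satisfied for a given dataset."""
    return [f.name for f in fields(check) if not getattr(check, f.name)]

# Example: a dataset reviewed for bias but with nothing else documented yet.
print(outstanding_items(DataGovernanceCheck(bias_examined=True)))
```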
The Regulation is without prejudice to, and complements, the EU’s General Data Protection Regulation (GDPR).[16]
Future projections
Once enacted, which is expected in approximately two years, the Regulation will enter into force 20 days after its publication and will be subject to a two-year moratorium before it becomes fully applicable. In practice, this means that organisations will have 24 months to prepare for the AI requirements, during which they will not be subject to penalties but will need to work towards compliance. During that time, new specialised supervisory authorities will be appointed and established at EU and national level to enforce the Regulation. Organisations will also need to start planning how to adapt their systems to the new framework, at the very least identifying and classifying their AI systems sufficiently in advance to establish where they stand against the requirements envisaged by the norm.
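Purely to illustrate the timeline (the actual publication date is not yet known), the sketch below derives the entry-into-force and full-application dates from a hypothetical Official Journal publication date, using the 20-day and 24-month periods described above.

```python
from datetime import date, timedelta

def key_dates(publication: date) -> dict[str, date]:
    """Derive the Regulation's key dates from a (hypothetical) publication date.

    Entry into force: 20 days after publication.
    Full application: 24 months after entry into force (the moratorium);
    a naive year bump is used here, ignoring the 29 February edge case.
    """
    entry_into_force = publication + timedelta(days=20)
    application = entry_into_force.replace(year=entry_into_force.year + 2)
    return {"entry_into_force": entry_into_force, "application": application}

# Hypothetical publication date, for illustration only.
print(key_dates(date(2023, 1, 1)))
# {'entry_into_force': datetime.date(2023, 1, 21),
#  'application': datetime.date(2025, 1, 21)}
```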
Notes
[1] Proposal for a Regulation of The European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts; Brussels, 21 April 2021 – COM/2021/206 final; 2021/0106 (COD).
[2] Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions ‘Fostering a European approach to Artificial Intelligence’; Brussels, 21 April 2021 – COM(2021) 205 final.
[3] White Paper on Artificial Intelligence – A European approach to excellence and trust; Brussels, 19 February 2020 – COM(2020) 65 final.
[4] Communication from the Commission to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee of the Regions – Artificial Intelligence for Europe; Brussels, 25 April 2018 – COM(2018) 237 final.
[5] AI HLEG deliverables: Ethics guidelines for trustworthy AI, 8 April 2019; Policy and Investment Recommendations for Trustworthy AI, 26 June 2019; Assessment List for Trustworthy AI (ALTAI), 17 July 2020; Sectoral Considerations on the Policy and Investment Recommendations, 23 July 2020.
[6] Together with the draft Regulation, the Commission has also proposed a new regulatory framework for machinery products, updating safety rules in order to build trust in new products and digital technologies. The new draft Machinery Regulation (Proposal for a Regulation of the European Parliament and of the Council on machinery products; Brussels, 21 April 2021 – COM(2021) 202 final; 2021/0105 (COD)) will replace the Machinery Directive 2006/42/EC, aiming to address the risks posed by emerging digital technologies (such as robotics and the Internet of Things, as well as AI), and will be complementary to the AI Regulation. The AI Regulation will cover the safety risks posed by AI systems, while the Machinery Regulation will apply to the safe integration of AI systems into overall machinery, to avoid compromising the safety of the machinery product as a whole.
[7] European Parliament resolution of 20 October 2020 with recommendations to the Commission on a civil liability regime for artificial intelligence (2020/2014(INL)); European Parliament resolution of 20 October 2020 with recommendations to the Commission on a framework of ethical aspects of artificial intelligence, robotics and related technologies (2020/2012(INL)); European Parliament resolution of 20 October 2020 on intellectual property rights for the development of AI technologies (2020/2015(INI)).
[8] Meaning AI placed on the market or put into service before the date of application. The Commission has indicated that it wishes to expedite its legislative process to bring this Regulation into force, perhaps as early as 2022. Organisations hoping to bring AI tools into operation quickly to exclude them from the Regulation’s scope should note that if the tool is subject to significant changes in its design or intended purpose after the application date, then it will be subject to the Regulation in any case.
[9] Meaning any natural or legal person, public authority, agency or other body that develops an AI system, or that has an AI system developed, with a view to placing it on the market or putting it into service under its own name or trademark, whether for payment or free of charge.
[10] Users are defined as any natural or legal person, public authority, agency or other body using an AI system under its authority, except where the AI system is used in the course of a personal, non-professional activity. As a result, all professional users of AI systems will be subject to the applicable obligations under the draft Regulation.
[11] The Commission intends to ban certain uses of AI which are deemed to be unacceptable because they: (1) deploy subliminal techniques or exploit vulnerabilities of specific groups of persons due to their age or disability, in order to materially distort a person’s behaviour in a manner that causes physical or psychological harm; (2) lead to ‘social scoring’ by public authorities; or (3) conduct ‘real time’ biometric identification in publicly available spaces (with some derogations).
[12] Non-high-risk AI (eg, bots, impersonation systems) is permitted subject to information and transparency obligations. These might include making it transparent to humans that they are interacting with an AI system and that emotion recognition or biometric categorisation is being applied, as well as labelling so-called ‘deep fakes’ (with some exceptions).
[13] There are two broad groups of AI that are considered high-risk: AI intended to be used as a safety component of a product, or which is itself a product, covered by EU harmonisation legislation (listed in Annex II) and required to undergo a third-party conformity assessment; and standalone systems in eight areas (listed in Annex III, a closed list which the Commission may update under criteria set out in the Regulation, with some input from Member States). These standalone systems cover: biometric identification and categorisation of natural persons; management and operation of critical infrastructure; education and vocational training; employment, workers management and access to self-employment; access to, and enjoyment of, essential private services and public services and benefits; law enforcement; migration, asylum and border control management; and administration of justice and democratic processes.
[14] Risk management systems need to be established, implemented, documented and maintained. Risk management is a continuous, iterative process running throughout the entire product lifecycle, requiring regular and systematic updating. The intended purpose of the high-risk AI system and the risk management system itself shall be considered when assessing compliance with the requirements set out in the draft Regulation.
[15] On the cybersecurity and robustness criteria, it is important to highlight the presumption of compliance for high-risk systems when they are certified, or when a statement of conformity has been issued, under a cybersecurity scheme pursuant to the EU Cybersecurity Act (Regulation (EU) 2019/881 of the European Parliament and of the Council of 17 April 2019 on ENISA (the European Union Agency for Cybersecurity) and on information and communications technology cybersecurity certification, and repealing Regulation (EU) No 526/2013). More generally, to ease the compliance burden, there are instruments for achieving a presumption of conformity with the requirements, including adherence to standards, common specifications, cybersecurity schemes, conformity assessments, certificates and EU declarations of conformity.
[16] Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC. There are many similarities between the two Regulations, for instance: the extraterritorial scope; the penalty scheme; ex ante self-assessments as well as continuous monitoring throughout the lifecycle of the AI system; accountability obligations requiring operators to keep records and documentation proving compliance for a set retention period; the need to appoint a legal representative in the EU for providers located outside the EU; the requirement that the measures imposed on high-risk AI be implemented by design; the 24-month moratorium period after entry into force to enable readiness; purpose as the cornerstone of the obligations; a risk-based approach to technical and organisational measures; and, finally, flexibility for organisations as to the path or technical solutions used to achieve compliance.