Following three days of intense negotiations last week, the world’s first comprehensive laws to regulate AI were finally agreed in a historic deal between the EU Parliament and Council late on the evening of Friday 8 December. The EU’s AI Act aims to ensure that AI systems placed on the European market and used in the EU are safe and respect fundamental rights and EU values, while also seeking to stimulate investment and innovation in AI in Europe.
In its press release on the provisional agreement, the EU Council described the AI Act as a “flagship” legislative initiative which aims to foster the development and uptake of safe and trustworthy AI across the EU’s single market by both private and public actors. The Act regulates AI according to its capability to cause harm to society, following a “risk-based” approach: AI systems are classified into different categories of risk (prohibited, high-risk, limited risk and minimal risk), with stricter rules for systems categorised as high-risk. As the first legislative proposal of its kind in the world, the EU hopes that the AI Act will set a global standard for AI regulation in other jurisdictions, much as the GDPR has done for data protection, whilst at the same time, in the Council’s words, “stimulate investment and innovation on AI in Europe”. There are real questions about whether either of these aims will be met, especially the latter, given the raft of other EU legislation recently in force or coming down the pipeline – in particular the Digital Services Act, the Digital Markets Act and the Data Act (to name but a few).
What are the main elements of the provisional agreement?
The text of the provisional agreement has not yet been released, but for information on the general concepts in the initial drafts of the AI Act, see our article from February 2023 here, which set out the key elements of the EU Commission’s initial proposal.
Compared to the initial Commission proposal, the key elements of the provisional agreement include:
- Clarifications on scope: Although the AI Act will have extra-territorial scope, the provisional agreement clarifies that the AI Act will not apply to areas outside the scope of EU law and should not have any impact on Member States’ competences in national security. Furthermore, the AI Act will not apply to systems used exclusively for military or defence purposes, to systems used for the sole purpose of research and innovation, or to people using AI for non-professional reasons.
- Revised system of governance: The provisional agreement proposes a revised system of governance with some enforcement powers at EU level, via a new “AI Office” which will sit within the Commission, coordinate governance among Member States and supervise the enforcement of the rules on general-purpose AI.
- Extension of the list of prohibitions under the “unacceptable risk” category: The list of banned systems will now include those involving behavioural manipulation affecting free will (such as voice-assisted toys that encourage dangerous behaviour in minors), social scoring, certain uses of predictive policing, and emotion-recognition technology in the workplace and in educational institutions. Certain uses of real-time remote biometric identification will also be banned, although with specific exemptions for law enforcement.
- Stronger protection of rights in the context of high-risk systems: The provisional agreement requires deployers of high-risk AI systems to conduct a fundamental rights impact assessment prior to putting the high-risk system into use and to comply with enhanced transparency requirements in relation to those systems. Under the provisional agreement, high-risk systems would include those used in the context of medical devices, vehicles, recruitment and other work-related situations, critical infrastructure (e.g. water, gas or electricity), access to services (e.g. insurance, banking, credit or benefits), emotion recognition systems (other than in the contexts of work or education, which fall under the list of prohibitions), and biometric identification.
- Specific transparency requirements in relation to certain use cases: In particular, when employing AI systems such as chatbots, users must be made aware that they are interacting with a machine. Deepfakes and other AI-generated content will have to be labelled as such, and users will also need to be informed when biometric categorisation or emotion recognition systems are being used. In addition, providers will have to design systems so that synthetic audio, video, text and image content is marked in a machine-readable format and detectable as artificially generated or manipulated (an illustrative sketch of such marking follows this list).
- Specific rules for foundation models: The AI Act now includes specific rules on foundation models, which have been the subject of much debate between the EU institutions in recent weeks. These include requirements for foundation models to comply with specific transparency obligations before they are placed on the market, with a stricter regime introduced for “high impact” foundation models.
- Rules on high-impact general-purpose AI models and high-risk AI systems: The provisional agreement includes specific rules for general-purpose AI systems, meaning AI systems that can be used for many different purposes. In particular, general-purpose AI systems will have to adhere to transparency requirements, including drawing up technical documentation and disseminating detailed summaries about the content used for training. There are also more stringent rules where general-purpose AI technology is subsequently integrated into another high-risk system, including requirements (where certain criteria are met) to conduct model evaluations, assess and mitigate systemic risks, conduct adversarial testing, and report serious incidents to the Commission.
- Fines: The AI Act will have GDPR-style fines, set as a percentage of the offending company’s global annual turnover in the previous financial year or a predetermined amount, whichever is higher. Under the provisional agreement there are three levels of fine depending on the nature of the infringement: up to €35 million or 7% of turnover for violations involving banned AI applications, €15 million or 3% for violations of the AI Act’s other obligations, and €7.5 million or 1.5% for the supply of incorrect information (a worked example of how these caps operate follows this list). The provisional agreement also provides for more proportionate caps on fines for SMEs and start-ups.
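On the machine-readable marking point above, the press releases do not suggest that the Act prescribes any particular technical mechanism. Purely as an illustration, the minimal Python sketch below, assuming the Pillow imaging library and metadata keys of our own choosing, shows one simple way a provider might embed, and a deployer later detect, an “AI generated” marker in a PNG file’s metadata:

```python
# Illustrative sketch only: the AI Act does not mandate this mechanism.
# Assumes the Pillow imaging library; the metadata keys are hypothetical.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_as_ai_generated(src_path: str, dst_path: str) -> None:
    """Embed a machine-readable 'AI generated' marker in a PNG's metadata."""
    image = Image.open(src_path)
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")           # hypothetical key
    metadata.add_text("generator", "example-model-v1")  # hypothetical key
    image.save(dst_path, pnginfo=metadata)

def is_marked_ai_generated(path: str) -> bool:
    """Check for the marker, e.g. when ingesting third-party content."""
    return Image.open(path).text.get("ai_generated") == "true"
```

Plain metadata of this kind is easily stripped in practice, so real implementations would likely rely on more robust provenance or watermarking techniques; what counts as compliant marking will only become clear once the final text and supporting technical standards are available.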
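On the fines mechanic, the “whichever is higher” rule means the cap scales with company size. As a minimal sketch using the figures above (the tier names are our own shorthand, not terms from the Act), the applicable cap can be computed as follows:

```python
# Maximum fine caps under the provisional agreement: the higher of a fixed
# amount and a percentage of global annual turnover in the previous
# financial year. Tier labels are shorthand, not terms from the Act, and
# the more proportionate caps for SMEs and start-ups are not modelled.
TIERS = {
    "prohibited_ai_violation": (35_000_000, 0.07),     # €35m or 7%
    "other_obligation_violation": (15_000_000, 0.03),  # €15m or 3%
    "incorrect_information": (7_500_000, 0.015),       # €7.5m or 1.5%
}

def max_fine(tier: str, global_annual_turnover_eur: float) -> float:
    fixed_cap, turnover_share = TIERS[tier]
    return max(fixed_cap, turnover_share * global_annual_turnover_eur)

# Example: a company with €2bn global turnover that breaches a prohibition
# faces a cap of max(€35m, 7% of €2bn) = €140m.
print(max_fine("prohibited_ai_violation", 2_000_000_000))  # 140000000.0
```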
What happens next?
It is important to highlight that the deal that has been reached is just a political agreement at this stage; work will continue at a technical level in the coming weeks before the legal text is finalised, and the latest text has not yet been published. The entire text will need to be confirmed and undergo various revisions from the EU institutions before formal adoption, likely in early 2024. Once the final text is adopted and published in the Official Journal of the European Union, it will enter into force 20 days after publication, with most of its provisions becoming applicable two years later. However, it is currently anticipated that the provisions on prohibited uses will come into effect after six months, and the general-purpose AI rules after one year.
To bridge the transitional period before the AI Act becomes generally applicable, the EU Commission has confirmed in a press release that it will be launching an AI Pact, which will convene AI developers from Europe and around the world who commit on a voluntary basis to implement key obligations of the AI Act ahead of the legal deadlines. As a next step, the Commission is launching a “call for interest” for organisations that wish to be actively involved in the AI Pact, with initial meetings between those organisations to be convened in the first half of 2024.
What should organisations be doing now?
Although we are currently awaiting the final text, organisations would still be well advised to start preparing now, as the work required in anticipation of the AI Act coming into effect is likely to be substantial. In particular, organisations should start to review their use of AI to identify where AI is being used within their organisation and supply chain, the extent to which the AI Act is likely to apply, and how the systems they use are likely to be classified under the AI Act. For example, an organisation looking to deploy an in-scope AI system that is considered high-risk, such as an HR recruitment tool, should for a start be factoring the following into any AI deployment process: a) conducting appropriate impact assessments and conformity assessments; b) putting in place appropriate governance as well as risk and quality management processes; c) taking the necessary steps to meet the relevant transparency, accuracy, robustness and cyber security requirements; and d) ensuring there is human oversight of the implementation and use of high-risk systems. A simple illustration of how such a review might be recorded follows below.
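Purely by way of illustration, one concrete starting point for such a review is a simple internal inventory of AI systems with a first-pass risk classification. The Python sketch below assumes a structure, field names and triage helper of our own devising; nothing here is prescribed by the AI Act itself, although the category names track the Act’s risk tiers:

```python
# Illustrative AI-system inventory for an internal compliance review.
# The fields and helper are assumptions for this sketch, not requirements
# taken from the AI Act itself.
from dataclasses import dataclass, field
from enum import Enum

class RiskCategory(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"
    LIMITED_RISK = "limited risk"
    MINIMAL_RISK = "minimal risk"

@dataclass
class AISystemRecord:
    name: str
    vendor: str                  # internal build or third-party supplier
    use_case: str                # e.g. "CV screening for recruitment"
    risk_category: RiskCategory
    in_scope_of_ai_act: bool     # e.g. placed on or used in the EU market
    actions: list[str] = field(default_factory=list)

def triage(record: AISystemRecord) -> AISystemRecord:
    """Attach first-pass compliance actions for in-scope high-risk systems."""
    if record.in_scope_of_ai_act and record.risk_category is RiskCategory.HIGH_RISK:
        record.actions += [
            "fundamental rights impact assessment",
            "conformity assessment",
            "governance, risk and quality management processes",
            "transparency, accuracy, robustness and cyber security checks",
            "human oversight arrangements",
        ]
    return record

# Example: an HR recruitment tool, which the provisional agreement
# treats as high-risk.
tool = triage(AISystemRecord(
    name="CV Ranker",
    vendor="third-party",
    use_case="CV screening for recruitment",
    risk_category=RiskCategory.HIGH_RISK,
    in_scope_of_ai_act=True,
))
print(tool.actions)
```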
In addition, organisations should begin to consider how they will build out their AI governance frameworks to take account of the principles laid down by the Act, as well as emerging AI regulations in other parts of the world and broader ethical considerations where relevant. While it will of course be important to continue monitoring developments and to await the final version of the Act, organisations can start laying the groundwork now to be in the best possible position for compliance when the time comes.
“This is a historical achievement, and a huge milestone towards the future! Today’s agreement effectively addresses a global challenge in a fast-evolving technological environment on a key area for the future of our societies and economies. And in this endeavour, we managed to keep an extremely delicate balance: boosting innovation and uptake of artificial intelligence across Europe whilst fully respecting the fundamental rights of our citizens.”

Carme Artigas, Spanish Secretary of State for Digitalisation and Artificial Intelligence