Late Friday night, following months-long Trilogue[1] negotiations between the European Parliament, the Council, and the Commission, the EU reached a provisional agreement on the proposal for harmonized rules on artificial intelligence (AI), also known as the Artificial Intelligence Act. Much like the GDPR, it is expected to have a worldwide impact.
The draft aims to ensure that AI systems placed on the European market and used in the EU are safe and respect fundamental rights and EU values. According to the key negotiators, the EU AI Act strives to strike a balance between safeguards and innovation. Its core idea is to regulate AI according to its capacity to cause harm to society, following a “risk-based” approach: the higher the risk, the stricter the rules. By adopting the AI Act, the EU hopes to set a global standard for AI regulation, so far seemingly outpacing the US.
Compared to the initial European Commission proposal, the main new elements of the provisional agreement can be summarized as follows:
- rules on high-impact general-purpose AI models that can cause systemic risk in the future, as well as on high-risk AI systems
- a revised system of governance with some enforcement powers at EU level
- extension of the list of prohibitions, but with the possibility for law enforcement authorities to use remote biometric identification in public spaces, subject to safeguards
- better protection of rights through the obligation for deployers of high-risk AI systems to conduct a fundamental rights impact assessment prior to putting an AI system into use.
Timelines
Following Friday’s provisional agreement, the technical details of the new regulation should be finalized in the coming weeks. Afterwards, the EU presidency will submit the compromise text to the member states’ representatives (Coreper) for endorsement. EU Commissioner for Internal Market Thierry Breton announced at a press conference that the finalized AI Act is expected to be released within 12 months and to take effect around 24 months thereafter. While these timelines are not set in stone, it is unlikely that the EU AI Act will take effect before 2026, later than the previously anticipated timeframe of 2025. The proposal was first introduced by the European Commission in 2021.
Classification of AI systems as high-risk and prohibited AI practices
The compromise draft provides for a risk-based classification to ensure that AI systems unlikely to cause significant risks are not heavily regulated. AI systems presenting only limited risk would be subject to very light transparency obligations, for example, disclosing that content was AI-generated so that users can make informed decisions about further use.
A wide range of high-risk AI systems would be authorized, but subject to a set of requirements and obligations to gain access to the EU market. According to the European Council, the co-legislators have clarified and adjusted these requirements so that they are more technically feasible and less burdensome for stakeholders to comply with, for example, regarding the quality of data or the technical documentation that SMEs must draw up to demonstrate that their high-risk AI systems comply with the requirements.
Since AI systems are developed and distributed through complex value chains, the compromise agreement includes changes clarifying the allocation of responsibilities and roles of the various actors in those chains, in particular providers and users of AI systems. It also clarifies the relationship between responsibilities under the AI Act and responsibilities that already exist under other legislation, such as the relevant EU data protection or sectoral legislation.
For some uses of AI, risk is deemed unacceptable and, therefore, these systems will be banned from the EU. The provisional agreement bans, for example, cognitive behavioral manipulation, the untargeted scraping of facial images from the internet or CCTV footage, emotion recognition in the workplace and educational institutions, social scoring, biometric categorization to infer sensitive data, such as sexual orientation or religious beliefs, and some cases of predictive policing for individuals.
Considering the specificities of law enforcement authorities and the need to preserve their ability to use AI in their vital work, several changes to the Commission proposal were agreed relating to the use of AI systems for law enforcement purposes. However, a specific mechanism has also been introduced to ensure that fundamental rights will be sufficiently protected against any potential misuses of AI systems.
General-purpose AI systems and foundation models
New provisions have been added to address situations where AI systems can be used for many different purposes (general-purpose AI) and where general-purpose AI technology is subsequently integrated into another high-risk system. The provisional agreement also addresses the specific cases of general-purpose AI (GPAI) systems.
Specific rules have also been agreed for foundation models, large systems capable of competently performing a wide range of distinctive tasks, such as generating video, text, and images, conversing in natural language, computing, or generating computer code. The provisional agreement provides that foundation models must comply with specific transparency obligations before they are placed on the market. A stricter regime was introduced for “high-impact” foundation models. These are foundation models trained with large amounts of data and with advanced complexity, capabilities, and performance well above the average, which can disseminate systemic risks along the value chain.
Penalties
The fines for violations of the AI Act were set as a percentage of the offending company’s global annual turnover in the previous financial year or a predetermined amount, whichever is higher: €35 million or 7% for violations of the banned AI applications, €15 million or 3% for violations of the AI Act’s obligations, and €7.5 million or 1.5% for the supply of incorrect information. However, the provisional agreement provides for more proportionate caps on administrative fines for SMEs and start-ups in case of infringements of the provisions of the AI Act.
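To illustrate the “whichever is higher” mechanics, here is a minimal sketch that computes the maximum exposure under each tier. The tiers are those stated above; the company and its €2 billion global annual turnover are invented purely for the example:

```python
# Minimal sketch of the AI Act fine caps described above.
# The tiers reflect the provisional agreement; the turnover figure is
# hypothetical and used only for illustration.

def max_fine(turnover_eur: int, fixed_cap_eur: int, pct: float) -> float:
    """Return the higher of the fixed amount and pct% of global annual turnover."""
    return max(fixed_cap_eur, turnover_eur * pct / 100)

# (fixed amount in EUR, percentage of global annual turnover)
TIERS = {
    "banned AI applications": (35_000_000, 7.0),
    "other AI Act obligations": (15_000_000, 3.0),
    "supply of incorrect information": (7_500_000, 1.5),
}

turnover = 2_000_000_000  # hypothetical €2 billion global annual turnover

for violation, (fixed_cap, pct) in TIERS.items():
    print(f"{violation}: up to €{max_fine(turnover, fixed_cap, pct):,.0f}")

# Output:
# banned AI applications: up to €140,000,000
# other AI Act obligations: up to €60,000,000
# supply of incorrect information: up to €30,000,000
```

For a smaller company whose percentage-of-turnover figure falls below the fixed amount, the fixed amount would apply instead, subject to the more proportionate caps for SMEs and start-ups noted above.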
The compromise agreement also makes clear that a natural or legal person may make a complaint to the relevant market surveillance authority concerning non-compliance with the AI act and may expect that such a complaint will be handled in line with the dedicated procedures of that authority.
Enforcement
The provisional agreement includes a robust enforcement framework for the AI Act (distinguishing it from the many voluntary frameworks around the world). It involves market surveillance at the national level and a new EU AI Office, to be established within the European Commission. A scientific panel of independent experts will advise the AI Office. The AI Board, which would comprise member states’ representatives, will remain a coordination platform and an advisory body to the Commission. Finally, an advisory forum for stakeholders, such as industry representatives, SMEs, start-ups, civil society, and academia, will be set up to provide technical expertise to the AI Board.
Conclusion
The EU AI Act is by far the most comprehensive piece of AI regulation worldwide to come so close to finalization. It is expected to have a huge impact on AI-powered businesses worldwide, both directly and indirectly, by inspiring similar frameworks in other jurisdictions. We will keep you updated on when the final text of the AI Act becomes available.
[1] Trilogue negotiations are informal tripartite meetings between the European Commission, the Council of the European Union, and the European Parliament, held to reconcile the Parliament’s legislative amendments with the Council’s position and reach a provisional agreement. The European Commission acts as the mediator.
NAVIGATE THE BURGEONING DOMAIN OF ARTIFICIAL INTELLIGENCE LAW WITH
The CommLaw Group!
In our Artificial Intelligence (AI) practice, we combine our established subject matter expertise in data privacy, intellectual property law, and regulatory compliance with our proven ability to successfully navigate the ever-developing and uncertain technology law landscape. Our attorney ranks include published experts, particularly in legal matters related to AI, whose work has gained international traction. We closely follow regulatory and case law developments to guide businesses, developers, and investors on AI-related legal compliance and legal risk mitigation.
CONTACT US NOW, WE ARE STANDING BY TO GUIDE YOUR COMPANY’S COMPLIANCE EFFORTS
Jonathan S. Marashlian – Tel: 703-714-1313 / E-mail: jsm@CommLawGroup.com
Michael Donahue – Tel: 703-714-1319 / E-mail: mpd@CommLawGroup.com
Linda McReynolds – Tel: 703-714-1318 / E-mail: lgm@CommLawGroup.com
Diana Bikbaeva – Tel: 703-663-6757 / E-mail: dab@CommLawGroup.com