After extensive discussions last week, the European Union has reached a preliminary agreement on the AI Act, moving closer to its implementation.
Lawmakers missed their initial Wednesday deadline but reached a consensus late on Friday. Failure to do so could have pushed the Act back, potentially until after the EU-wide elections in June.
Navigating this process has been challenging, akin to aiming at a moving target.
First proposed in 2021, the AI Act is hailed as the world's first comprehensive legislation on artificial intelligence. In the years since, the rapid advancement of AI has opened divisions in the bloc's regulatory approach.
The latest discord arose after the explosive debut of ChatGPT last year. OpenAI's chatbot triggered both alarm and excitement about the capabilities of foundation models, also known as "general-purpose" AI systems, and EU member states were divided on how best to supervise them.
France, Germany, and Italy opposed mandatory regulations, expressing concerns about potential hindrances to innovation and their local enterprises. Instead, they advocated for adherence to codes of conduct.
Another contentious issue pertained to limitations on biometric surveillance. While EU legislators aimed for an outright prohibition, governments sought exemptions for national security purposes.
Risk-based Framework for AI Systems
At the eleventh hour, lawmakers finalized a provisional agreement on the Act’s core principles, emphasizing a risk-based methodology. This framework comprises the following tiered categories:
- Minimal risk — covering AI-powered recommender systems or spam filters, exempt from obligations.
- High-risk — encompassing systems crucial for infrastructures, medical equipment, educational access, recruitment processes, law enforcement, etc., necessitating compliance with stringent requirements including risk mitigation mechanisms, high-quality datasets, activity logging, detailed documentation, human oversight, and robust cybersecurity measures.
- Unacceptable risk — prohibiting systems deemed a clear threat to people's fundamental rights, such as those that manipulate human behavior to circumvent free will, social scoring, and certain real-time biometric categorization of individuals. A narrow exception remains for remote biometric identification for law enforcement purposes.
- Specific transparency risk — mandating user awareness when interacting with AI and the labeling of deep fakes or AI-generated content.
Consistent with EU tech regulations, substantial fines will be imposed on non-compliant entities, ranging from €35 million (or 7% of global annual turnover, whichever is higher) for violations involving banned applications to €7.5 million (or 1.5%) for providing false information.
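To make the penalty structure concrete, here is a minimal sketch of how the "whichever is higher" rule works. The function name and the example turnover figure are illustrative assumptions, not part of the Act's text:

```python
def applicable_fine(global_turnover_eur: float, fixed_floor_eur: float, pct: float) -> float:
    """Return the applicable fine: the fixed amount or the given
    percentage of global annual turnover, whichever is higher."""
    return max(fixed_floor_eur, pct * global_turnover_eur)

# Banned-application violation for a hypothetical company with
# EUR 1bn global turnover: 7% of 1bn = EUR 70m, above the EUR 35m floor.
print(applicable_fine(1_000_000_000, 35_000_000, 0.07))  # 70000000.0

# A smaller firm with EUR 100m turnover: 7% = EUR 7m, so the
# EUR 35m floor applies instead.
print(applicable_fine(100_000_000, 35_000_000, 0.07))  # 35000000.0
```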
Furthermore, the Act introduces specific guidelines for general-purpose AI models. For highly potent models posing systemic risks, additional binding obligations will be enforced through codes of practice developed collaboratively by industry stakeholders, the scientific community, civil society, and other relevant parties alongside the Commission.
Establishment of the New European AI Office
While enforcement will rest with individual member states, the Act mandates the creation of a new European AI Office within the European Commission. EU internal market commissioner Thierry Breton emphasized that the Act is not only a regulatory framework but also a springboard for EU startups and researchers to lead the global AI race.
The legislation still faces further deliberations, leaving room for additional lobbying. Nonetheless, a comprehensive agreement is likely to be reached before the upcoming European parliamentary elections.
Even so, the law is not expected to take effect for at least 18 months, during which time the AI landscape may undergo significant transformations.