Negotiations among European Union lawmakers tasked with reaching a consensus on a risk-based framework for regulating artificial intelligence applications seem to be delicately poised. Brando Benifei, a Member of the European Parliament (MEP) involved in the discussions, characterized the talks on the AI Act as intricate and challenging during a recent roundtable event organized by the European Center for Not-For-Profit Law (ECNL) and the civil society association EDRi.
The discussions between EU co-legislators, known as trilogues, are where most European Union laws are finalized. Divisive issues at the heart of the negotiations include prohibitions on certain AI practices, fundamental rights impact assessments, and exemptions for national security practices. Benifei emphasized the importance of upholding citizens’ fundamental rights and expressed the parliamentarians’ firm stance on these matters, urging the Council to show more flexibility.
Sarah Chander, a senior policy adviser at EDRi, offered a pessimistic assessment of the current state of the negotiations, highlighting several key civil society recommendations that are facing resistance from the Council. The disagreements range from banning the use of remote biometric identification systems in public spaces to the classification of high-risk AI systems and limiting exports of prohibited systems outside the EU.
The discussions also revolve around the regulation of generative AI and foundation models, with industry lobbying influencing the positions of Member States. Pushback from EU countries such as France and Germany against regulatory measures for foundation models has added complexity to the negotiations. The involvement of industry lobbyists, including both US tech giants and European AI startups, has raised concerns about potential regulatory capture and the need to safeguard fundamental rights against biased AI systems.
The challenges in reaching a consensus on the AI Act extend beyond individual issues to structural hurdles, as lawmakers grapple with reconciling fundamental rights protection with the product safety approach underpinning the legislation. The upcoming trilogue on December 6 represents a critical juncture in the negotiations, with European elections looming next year and the potential for significant political shifts that could impact the future of AI governance in the EU.
As the EU strives to establish itself as a global leader in setting AI rules, the outcome of the trilogue will be pivotal in determining the bloc’s regulatory approach to artificial intelligence. The discussions surrounding fundamental rights impact assessments and law enforcement exceptions remain contentious, highlighting the need for compromise to ensure effective AI regulation that upholds democratic freedoms.
The EU’s ambition to lead in AI governance faces challenges from industry lobbying and divergent views among Member States, underscoring the complexity of the regulatory landscape. The next trilogue will be crucial in determining whether the EU can deliver a comprehensive AI Act that addresses the risks and opportunities presented by artificial intelligence technologies.