European Union lawmakers have made history by becoming the first legislative body in the world to enact comprehensive regulation of artificial intelligence, imposing restrictions on the technology’s commercial and public applications.
Thierry Breton, the European commissioner who helped negotiate the AI Act’s approval, hailed Europe as a trailblazer in AI regulation.
European Commission President Ursula von der Leyen called the passage of the AI Act a monumental moment, saying it provides legal clarity, fosters innovative and trustworthy AI practices, and contributes significantly to establishing international guidelines for reliable AI.
The AI Act is designed to set a benchmark for countries weighing the benefits and risks of AI technology. Before becoming enforceable, however, the legislation must secure final endorsement from the Council and the European Parliament, with a vote potentially taking place before the EU legislative elections in early June 2024. While some provisions may come into effect as early as next year, most are scheduled for implementation in 2025 and 2026.
Critics note that the rapid evolution of AI technologies may render parts of the AI Act outdated by the time it is fully in force. The emergence of advanced AI models such as ChatGPT already forced substantial revisions to the law to accommodate these advances.
Discussions surrounding the protection of individuals’ likenesses and safeguards for creators against AI replacing them were prominent during recent writers’ and actors’ strikes. The EU’s policy extends its coverage across various sectors, including healthcare, law enforcement, and the commercial and governmental utilization of AI.
Key provisions include restrictions on facial recognition technologies by authorities and governments, with exceptions for specific health and national security purposes. The AI Act also introduces enhanced transparency standards for manufacturers of powerful general-purpose AI systems, aligning with the transparency requirements outlined in U.S. President Joe Biden’s executive order.
The new policy’s effectiveness may depend significantly on how it is enforced. The General Data Protection Regulation (GDPR), the EU’s landmark digital privacy law established in 2016, has faced criticism for inconsistent application across the bloc’s 27 member states.
Anticipated legal challenges from affected companies could delay the AI Act’s full implementation across the continent.
Following the recent vote, Barry Scannell, an AI legal expert based in Ireland, raised concerns about the impact of “enhanced transparency requirements” on intellectual property protection, which he said will require significant adjustments from businesses that use AI systems.
While industry groups such as the Computer and Communications Industry Association Europe (CCIA Europe) have criticized the EU’s proposal as overly restrictive and potentially stifling to tech innovation, civil rights groups argue that the policy falls short, particularly in regulating governmental and police use of AI-driven facial recognition technology.
Amnesty International’s AI advocacy advisor, Mher Hakobyan, expressed disappointment in the EU institutions for not imposing a complete ban on facial recognition, warning of the potential risks to human rights, political freedoms, and the rule of law posed by the unchecked proliferation of AI surveillance technologies.