Legal oversight of the technology behind popular generative AI services like ChatGPT, which promises to transform everyday life and has spurred warnings of existential risks to humanity, is set to take shape following an agreement reached by European Union negotiators on Friday.
Signing a tentative political agreement for the Artificial Intelligence Act, negotiators from the European Parliament and the bloc's 27 member nations overcame major differences on contentious points such as generative AI and police use of facial recognition surveillance.
“Deal!” announced Thierry Breton, the European Commissioner, on Twitter just before midnight, declaring that the EU had become the very first continent to set clear rules for the use of AI.
This milestone follows marathon closed-door discussions earlier in the week, with the initial session lasting 22 hours before a second round commenced on Friday.
While officials were under pressure to secure agreement on the primary rules, they were expected to leave the door open to further talks to work out the fine print in subsequent negotiations.
In 2021, the EU took an early lead in the global race to set AI rules when it unveiled the first draft of its rulebook. However, the recent boom in generative AI sent European officials scrambling to update a proposal poised to serve as a blueprint for global standards.
Brando Benifei, an Italian member of the European Parliament involved in the negotiations, told The Associated Press that Parliament is expected to vote on the agreement early next year, a formality now that the deal is done.
When asked about his satisfaction with the outcome, he responded, “It’s very, very good. We had to accept some compromises, but it is overall very, very good.” The final legislation, which carries penalties for violations of up to 35 million euros ($38 million) or 7% of a company’s global turnover, would not take effect until 2025 at the earliest.
Generative AI systems like OpenAI’s ChatGPT have captured global attention with their ability to produce human-like text, images, and music, raising fears about the risks the rapidly developing technology poses to jobs, privacy, copyright protection, and even human life itself.
Although Europe has taken the lead, countries such as the United States, United Kingdom, China, and international coalitions like the Group of Seven (G7) have also initiated efforts to regulate AI within their jurisdictions.
Anu Bradford, a Columbia Law School professor specializing in EU law and digital regulation, said the EU’s strong and comprehensive rules could set a powerful example for other governments considering similar measures. Other nations may not copy every provision verbatim, but they are likely to emulate many aspects of the EU’s approach.
Critics have raised concerns about the expedited nature of the agreement.
Daniel Friedlaender, head of the European office of the Computer and Communications Industry Association, noted that while the political deal marks the beginning of important technical work, crucial details of the AI Act are still missing.
Initially designed to mitigate the dangers of specific AI functions based on their level of risk, the AI Act was expanded to cover foundation models, the advanced systems that underpin general-purpose AI services like ChatGPT and Google’s Bard chatbot.
Foundation models looked set to be one of the biggest sticking points for Europe. Despite opposition led by France, which advocated self-regulation to help homegrown European generative AI firms competing with big U.S. rivals, including OpenAI’s key backer Microsoft, negotiators managed to broker a tentative compromise early in the talks.
These foundation models, also known as large language models, are trained on vast troves of written works and images gathered from the internet. Unlike traditional AI systems, which process data and complete tasks using predetermined rules, they give generative AI systems the ability to create something new.
Under the agreement, the most advanced foundation models posing the biggest “systemic risks” will face extra scrutiny, including requirements to disclose more information, such as how much computing power was used to train the systems.
Researchers have warned that these powerful foundation models, built by a handful of major tech firms, could be used to supercharge online disinformation campaigns, cyberattacks, or even the development of bioweapons.
Rights groups also caution that the lack of transparency about the data used to train these models poses risks to daily life, because the models serve as the basic building blocks for software developers creating AI-powered services.
The thorniest issue proved to be AI-powered facial recognition surveillance systems, which negotiators eventually resolved through a compromise after prolonged bargaining.
While European lawmakers pushed for a complete ban on facial scanning and other forms of “remote biometric identification” in public spaces over privacy concerns, the governments of member states sought exemptions so law enforcement could use these technologies to tackle serious crimes like child sexual exploitation or terrorist attacks.
However, civil society organizations remain cautious.
According to Daniel Leufer, a senior policy analyst at the digital rights organization Access Now, whatever the victories in the final negotiations may have been, significant flaws remain in the final text. He pointed to gaps in the bans on the most harmful AI systems, the exemptions for law enforcement use, and the lack of protections for AI systems used in migration and border control.