
Doubts Arise Over Europe’s Leadership in AI Legislation

Last-minute disagreement over the AI Act’s impact on foundation models could scupper the deal…

The European Union is striving to establish top-tier regulations for artificial intelligence (AI), akin to its achievements in internet privacy. However, the progress of the proposed AI Act has hit a snag in its final stages.

During a confidential “trilogue” session involving negotiators from the EU’s key institutions, there was optimism that the remaining details could be ironed out within a week. Then, unexpectedly, the three largest EU economies, Germany, France, and Italy, rejected the proposal to have the AI Act govern foundation models, causing significant disruption. Instead, they advocated self-regulation by foundation-model providers such as OpenAI, with stricter oversight reserved for “high-risk” applications built on the technology, leaving the underlying models themselves, such as GPT-4, the basis for ChatGPT, outside the rules.

The surprising alignment of Germany and France with American tech firms may have raised eyebrows, but their motivations are more locally grounded. Germany openly backs its homegrown AI company Aleph Alpha, which is supported by major players such as SAP and Bosch. In France, lobbying efforts are spearheaded by Cédric O, a key ally of Emmanuel Macron, whose position benefits the French AI standout Mistral. In a recent debate with O, Max Tegmark of the Future of Life Institute argued that former officials should refrain from political lobbying on matters tied to their past roles.

The tech industry itself is split on the question: some firms favor transparency obligations requiring foundation-model providers to disclose their training-data sources, while German industry in particular is staunchly opposed to regulating foundation models at all.

Whatever the reasons behind the U-turn by Germany, France, and Italy, the European Parliament remains unconvinced. Axel Voss, a prominent member of the European Parliament, criticized their foundation-model proposal, arguing that even the most basic self-regulation standards must include the clarity, security, and information obligations the AI Act requires.

The deadline for finalizing the legislation was set for December 6. The European Commission, the original proponent of the AI Act, has floated a compromise that abandons the term “foundation models” while requiring developers of powerful “general-purpose AI models” to document their systems and submit to formal oversight. Given the gap between this position and Parliament’s, there is a high likelihood the talks will slip into the following year.

The looming 2024 European Parliament elections, and the institutional turnover that follows them, further complicate the timeline for reaching consensus on the AI Act. Benedikt Kohn of the law firm Taylor Wessing highlighted the recent strides the U.S. has made in AI regulation, warning that a failure of the AI Act would be a significant setback for Europe.

Meanwhile, regulation continues to evolve elsewhere: several countries, including the U.S., U.K., and Germany, have recently issued security recommendations for companies engaged in AI development.

Last modified: February 19, 2024