France, Germany, and Italy Push to Weaken the E.U.’s AI Act
The E.U.’s pioneering AI Act may be weakened after the French, German, and Italian governments advocated for lighter regulation of foundation models, the systems that underpin a wide range of artificial intelligence applications.
According to a TIME report, the E.U.’s three largest economies are proposing that companies building foundation models self-regulate, publishing certain details about their models and adhering to codes of conduct. Non-compliance would carry no penalties at first, though repeated violations could trigger sanctions in the future.
Foundation models, such as the GPT-3.5 model that powers OpenAI’s ChatGPT, are trained on vast datasets and can perform a wide variety of tasks across many applications. Built by leading companies including OpenAI, Google DeepMind, and Meta, they are widely seen as the most powerful, valuable, and potentially risky AI systems, and governments are increasingly focused on regulating them to address those risks.
The French, German, and Italian proposal emphasizes regulating AI according to how it is used. Developers of foundation models would have to disclose certain information, including how their models were safety-tested. The document proposes no immediate penalties for failing to disclose, but suggests a sanction mechanism could be established in the future.
The document also highlights the opposition of the three nations to the European Commission’s original “two-tier” regulatory strategy for foundation models, which proposed lighter regulations for most models and stricter oversight for the most impactful ones.
Facing resistance to the two-tier approach, the European Commission put forward a revised proposal on Nov. 19 that would apply only an additional non-binding code of conduct to the most powerful foundation models. No formal agreement has been reached, but negotiations are likely to center on this proposal going forward, a setback for the European Parliament, which had favored stricter regulation of all foundation models.
Major U.S. tech firms have lobbied to soften the E.U. legislation, and the latest push to relax parts of the regulation reflects the French, German, and Italian governments’ desire to foster AI innovation in their own countries.
The E.U.’s AI Act, first unveiled in 2021, is in the final stages of the legislative process. Lawmakers aim to reach agreement between the European Parliament and member states before February 2024 to avoid delays caused by the 2024 European Parliament elections. If approved, the AI Act would rank among the most comprehensive and stringent AI regulations in the world, despite the ongoing disagreement over how to regulate foundation models.