The dramatic saga surrounding the removal and subsequent reinstatement of OpenAI CEO Sam Altman not only sparked online jokes but also laid bare significant underlying issues. One particularly pointed quip asked, “How can we tackle the AI alignment challenge when aligning a few board members proves to be an insurmountable task?”
OpenAI, the company behind ChatGPT, is a prominent player in artificial intelligence, but AI transcends any single corporation: it is a technology of profound impact that currently operates in a largely unregulated environment. The European Union (E.U.) stands at a pivotal juncture, with an opportunity to close this regulatory gap, provided it resists the relentless lobbying of Big Tech. Members of the European Parliament have admirably withstood immense pressure in their efforts to preserve crucial legislation. Recently, E.U. Commissioner Thierry Breton condemned what he sees as self-serving lobbying by companies such as France’s Mistral AI, stressing that legislation must serve the public interest. These lawmakers deserve our full support during this critical phase.
Europe is poised to lead a world awakening to the necessity of AI regulation. Initiatives ranging from the U.S. Executive Order to the U.K.’s AI Safety Summit reflect a growing global recognition that reaping the benefits of this transformative technology requires mitigating its risks. The E.U.’s AI Act is a pioneering legal framework aimed at precisely that objective. Yet a few tech giants are obstructing the legislative process, threatening to derail the entire endeavor unless they are exempted from regulation. Succumbing to such pressure would not only impede European innovation but also put profits ahead of public welfare, undermining democratic principles. Our legislators must stand firm and refuse to yield to this coercion.
Negotiations hit a roadblock on Nov. 10, when France and Germany opposed the proposed regulation of “foundation models.” They, joined by Italy, subsequently circulated a nonpaper demanding voluntary commitments rather than mandatory rules for companies developing such models. Foundation models, like OpenAI’s GPT-4, which powers ChatGPT, are versatile machine learning systems with broad downstream applications. Regulating them would compel AI firms to ensure their safety before deployment, reducing the risk of harm to the public. Given escalating concerns about advanced AI systems, from the spread of misinformation to bioterrorism risks and attacks on critical infrastructure, this provision is a prudent safeguard.
The case for well-defined legal safeguards, as opposed to industry self-regulation, grows ever clearer. The detrimental effects of social media on young women and girls, for instance, have been glaring; platform operators knew of the harm for years yet failed to act. Voluntary commitments are inadequate and unreliable means of ensuring user safety. What is needed are preemptive measures rather than reactive fixes after harm has occurred. Enforceable safety standards and risk mitigation requirements must govern powerful AI technologies from the outset.
Resistance to regulating foundation models on the grounds that it would hinder innovation is unfounded. On the contrary, regulating these models is crucial for fostering innovation, because it shields smaller European users downstream from compliance burdens and potential liabilities. While a handful of well-resourced companies lead the development of impactful foundation models, numerous small European enterprises have already integrated these models into their operations, and many more plan to do so. A balanced regulatory approach across the value chain is essential, ensuring that the burdens are borne by those most capable of carrying them.
The battle lines are starkly drawn. The European DIGITAL SME Alliance, representing 45,000 business members, advocates for foundation model regulation, while certain European and U.S. AI corporations resist it. Contrary to their claims of protecting the E.U.’s innovation ecosystem, opposing regulation would shift financial and legal burdens from large corporations onto startups, which lack the capacity and resources to modify these foundation models.
The arguments by France and Germany that regulation would impede Europe’s competitiveness in the global AI arena lack merit. The proposed tiered approach, itself a compromise between the E.U. Parliament and Council, imposes targeted obligations on the most impactful models while leaving emerging competitors to the major AI players free of undue restrictions. European policymakers must stand firm against the fear-mongering of Big Tech and its allies and focus on the Act’s core objective: a balanced framework that fosters innovation while averting harm. The Act must not become a tool that grants undue advantages to a select few Silicon Valley-backed AI leaders, exempting them from obligations while hindering thousands of European businesses from realizing the technology’s potential.
The Parliament endorses the regulation of foundation models, with significant support from the Commission, the Council, the business community, and AI experts. It is crucial to resist the outsized influence of a few tech giants in the political arena and to safeguard the progress made over three years of legislative work. Prioritizing public safety over corporate profits and nurturing European innovation should be the guiding principles of AI regulation.