The British prime minister, Rishi Sunak, gathered government officials, AI companies, and experts at Bletchley Park this month. The site, renowned as the hub of Allied code-breaking during the Second World War, served as the backdrop for discussions on how the much-hyped technology should be used.
The summit has been criticized on several fronts: for privileging input from big tech over voices from civil society, and for dwelling on speculative existential risks rather than tangible, present-day harms. Its most significant shortcoming, a direct consequence of those biases, was the absence of any substantial measures to rein in the powerful corporations that pose the greatest threat to the public interest.
The summit’s purported “accomplishments” amounted to a loosely worded joint statement on the risks of so-called frontier AI models, a call for “inclusive global dialogue,” and a voluntary safety-testing agreement between governments and leading AI companies. These initiatives lack genuine substance and, worse, hand powerful companies undue influence over how AI will be regulated.
The idea now being propagated is that handing big tech complete control over AI models is the only way to protect society from serious harm. That notion is “naive at best, risky at worst,” as an open letter endorsed by 1,500 members of civil society put it.
Institutions genuinely committed to harnessing AI for good call for markedly different approaches. What is needed are robust measures that target commercial dominance itself, not lofty declarations of intent and backroom agreements with corporations: binding regulatory obligations on powerful gatekeepers and vigorous enforcement of competition policy.
At present, a handful of tech giants have seized the upper hand in large-scale AI foundation models by leveraging their control over computing resources, data, and technical expertise. Unlike AI software built for a specific purpose, these models are trained on vast datasets and can be applied to a myriad of tasks. Smaller firms without access to those resources are forced either into unequal partnerships with the major players or into acquisition by them. Notable instances include Google’s acquisition of DeepMind and OpenAI’s $13 billion “partnership” with Microsoft, among others.
These tech behemoths wield considerable influence across various sectors such as hardware, cloud services, and search engines, enabling them to channel users towards their proprietary AI models and services. Network effects and economies of scale are poised to amplify this initial advantage further as more individuals gravitate towards and contribute data to a limited number of AI models and services.
Experts caution that this scenario is conducive to monopolistic practices. A concentrated market for foundation models would empower a handful of dominant corporations to dictate the trajectory and pace of AI development, and to exploit, coerce, and manipulate the many businesses, creators, workers, and consumers reliant on their services and infrastructure.
Having come out on top in earlier software markets, these companies are now leveraging AI to fortify their dominance. Governments must not acquiesce to this trend. Antitrust regulators may have faltered in curbing the monopolization of digital technologies in the past, but robust enforcement of competition policy can still do much to prevent AI dominance from deepening.
To prevent a small cohort of digital gatekeepers from running roughshod over society in pursuit of ever-increasing profits, competition authorities can wield their existing powers over acquisitions, cartels, and monopolistic behavior. That means scrutinizing and, where warranted, dismantling anti-competitive agreements between major tech firms and AI startups, as well as preventing the digital titans from abusing their dominance of pivotal platforms such as search and cloud computing to solidify their AI hegemony.
Nevertheless, however competitive the market, the number of providers of large-scale models will inherently be limited by the resources required to train and deploy them. It is at this juncture that regulation must step in, imposing unprecedented responsibilities on dominant corporations.
As AI assumes a more prominent role in societal decision-making, considerations of safety, reliability, fairness, and transparency are paramount. AI systems can reproduce biases embedded in their underlying data or design, and they can generate “hallucinations”: plausible yet erroneous responses. Malicious actors could exploit AI to discriminate against individuals, conduct surveillance, or craft deceptive advertising campaigns.
Risks stemming from foundation models are particularly severe, because flaws at the base cascade into the many applications built on top of them. The European Union is striving to enact stringent AI rules that impose obligations tailored to different AI applications according to their risk, but threats emanating from the foundation models themselves pose a distinct challenge under the EU’s AI Act.
EU legislators are contemplating a fresh set of obligations on providers of foundation models. These companies may be mandated to undergo systemic risk assessments and to give regulators insight into how their models are trained, including their use of sensitive and proprietary data.
These measures represent incremental progress in mitigating AI-related risks. But given their pivotal role in the AI ecosystem, the major corporations offering large-scale models should also face overarching obligations to act ethically and to put the public interest first.
One potential avenue involves imposing fiduciary-style duties on general-purpose AI (GPAI) services, drawing inspiration from frameworks developed by scholars such as Luigi Zingales at the University of Chicago and Jack Balkin at Yale Law School: a heightened standard of care in law, entailing a legal and moral duty to act in the best interests of others.
An alternative or complementary strategy would be to designate digital gatekeepers as public utilities or “common carriers,” requiring them to treat customers fairly and to operate safely. That legal designation could conceivably extend both to the cloud computing platforms hosting AI software and to the foundation models underpinning it. The EU’s Digital Markets Act, which imposes pro-competition obligations on major tech entities, could provide a viable framework for advancing this agenda.
It is evident that relying on self-regulation, particularly by major tech corporations, is insufficient to guarantee that AI is safe and resilient. Addressing monopoly power and ensuring that authority is accompanied by accountability are indispensable steps in realizing the potential benefits of AI while mitigating its risks.
- Georg Riekeles is associate director at the European Policy Centre, an independent Brussels-based think tank. Max von Thun is director of Europe and transatlantic partnerships at the Open Markets Institute, an anti-monopoly think tank.