
### Artificial Intelligence’s Role in Shaping a Disciplined World

Geopolitics gets in the way of global regulation of a powerful technology.

In November 2023, a group of nations gathered at the United Kingdom’s AI Safety Summit released a joint declaration pledging robust international collaboration on artificial intelligence. Notably, governments with sharply different regulatory approaches, including China, the United States, and the European Union, all endorsed the document. The declaration set out a broad agenda for addressing the risks of cutting-edge AI, singling out its potential misuse to spread disinformation and to create serious cybersecurity and biotechnology threats. Around the same time, U.S. and Chinese officials agreed to meet in the spring to discuss cooperation on AI regulation and risk mitigation.

Through multinational statements and bilateral dialogues like these, an international framework for AI regulation appears to be gradually taking shape. Key measures, such as U.S. President Joe Biden’s October 2023 executive order on AI, the EU’s AI Act, on which European lawmakers reached political agreement in December 2023, and China’s recent regulatory initiatives, show a notable convergence: each aims to prevent the misuse of AI while fostering innovation. Optimists have gone further, proposing more ambitious forms of international AI governance. Geopolitical analyst Ian Bremmer and entrepreneur Mustafa Suleyman sketched one approach in Foreign Affairs, and Suleyman and former Google CEO Eric Schmidt argued in the Financial Times for an international panel, modeled on the UN’s Intergovernmental Panel on Climate Change, to advise governments on AI’s current capabilities and likely trajectory.

Ambitious plans for a new global AI governance structure, however, face a stubborn obstacle: the facts on the ground. Whatever China, the United States, and the EU say publicly about cooperating on AI regulation, their actions point toward fragmentation and competition. Diverging legal regimes governing semiconductor access, technical standards, and the regulation of data and algorithms are already taking shape, and they will impede collaboration. The likely result is not a unified global regulatory environment for AI but rival regulatory blocs, in which the vision of harnessing AI for the common good gives way to geopolitical rivalry.

#### Conflict Over Semiconductors

One of the most visible AI-related conflicts is the rivalry between China and the United States over the global semiconductor market. In October 2022, the U.S. Commerce Department imposed sweeping licensing requirements on exports of advanced chips and chip-making technology. Such chips are essential for training and running the state-of-the-art AI models built by leading firms such as OpenAI and Anthropic. China responded in August 2023 with export controls on minerals critical to chip manufacturing, and the Biden administration then tightened its rules further by expanding the range of covered semiconductor products.

The semiconductor fight also exposes how little international trade law constrains governments that impose export controls. The World Trade Organization has intervened only marginally in this domain, and the hobbling of its appellate body, which began under former U.S. President Donald Trump when Washington blocked new appointments, has left few enforceable rules. The chip war between China and the United States is therefore eroding free trade norms, setting disruptive precedents in international trade law, and ratcheting up geopolitical tension.

Beyond semiconductors, a second battleground is technical standards. Any major technology depends on standards being set and widely adopted; in the digital economy, shared standards are what allow products to be made and procured globally. China has increasingly taken leadership positions in international standard-setting bodies, promoting its preferred standards and embedding them in projects such as the Belt and Road Initiative. That push challenges the long-standing dominance of U.S. and European players in global standard-setting and could leave AI governed by a fragmented patchwork of competing standards.

#### Geopolitical Struggles Over Data and Algorithms

Geopolitical rivalry is not confined to hardware; it extends to the intangible inputs on which AI depends. Competition over data, the raw material of AI tools, is intensifying as countries seek access to ever more varied datasets. Disputes over data flows have sharpened, with the United States, the EU, China, and India taking contrasting positions on data localization and cross-border transfers. These diverging data governance models are reshaping global digital trade, restricting the movement of data and encouraging localization requirements.

Disclosure of AI algorithms is another emerging flashpoint, and countries are taking divergent regulatory approaches. The EU’s AI Act requires that certain algorithmic systems be disclosed to government agencies for oversight, whereas the U.S. position is more ambivalent: it requires disclosures for some powerful models while having resisted similar mandates in international trade agreements. That disparity threatens to fragment AI governance further and to impede collective efforts to manage AI-related risks.

#### Implications of Fragmented AI Regulation

The emerging legal landscape around AI thus points toward fragmentation and discord rather than cohesive global governance. A regulatory order marked by mutual suspicion and divergence makes ambitious proposals for global AI governance harder to advance. It also creates room for dangerous AI models to be developed and deployed for geopolitical ends, undercutting efforts to manage AI risks and making it easier for authoritarian regimes to use AI for manipulation at home and abroad.

The current trajectory, in short, points toward a future in which AI regulation is defined by division and conflict, stalling collaborative initiatives and blocking the creation of comprehensive global oversight mechanisms. The costs of such a fragmented regulatory order go beyond regulation itself, extending to the geopolitical and security risks that come with the unchecked proliferation of powerful AI.


AZIZ HUQ is the Frank and Bernice J. Greenberg Professor of Law at the University of Chicago and the author of *The Collapse of Constitutional Remedies*.
