Innovative Approaches to Safeguarding the Public Interest in the Evolving AI Industry
By Bhaskar Chakravorti, the dean of global business at Tufts University’s Fletcher School of Law and Diplomacy.
British Prime Minister Rishi Sunak greets Italian Prime Minister Giorgia Meloni at the AI Safety Summit in Bletchley, England, on Nov. 2, 2023. Joe Giddens/Getty Images
In 2023, a transformative year for artificial intelligence (AI), the AI community split into three distinct factions: accelerationists, doomers, and regulators.
As the year drew to a close, the accelerationists appeared triumphant: power had consolidated among a handful of major Big Tech companies investing in cutting-edge startups and fast-tracking generative AI products. The once-prominent doomers, who had warned of AI’s risks, were in retreat. Regulators, meanwhile, responded swiftly to the accelerationists, unveiling ambitious regulatory frameworks against a backdrop of numerous upcoming elections and the looming threat of AI-fueled disinformation.
Paradoxically, these regulatory actions, intended to safeguard public welfare, could inadvertently entrench the accelerationists’ market dominance. How can regulators entrusted with upholding the public interest end up worsening the very problem they are meant to solve? Are new regulatory paradigms needed to check the influence of a consolidating industry? And are there more imaginative ways to protect the public interest?
Let’s start with why the AI sector is predisposed to consolidation.
For a start, AI development is steered primarily by the private sector rather than by governments. Although AI is heralded as a national priority, the United States, the frontrunner in AI development, relies heavily on corporate muscle for its lead. The private sector’s share of significant AI models surged from 11 percent in 2010 to 96 percent in 2021.
This industry-centric pattern extends beyond U.S. borders. During recent deliberations over the European Union’s AI rules, Germany, France, and Italy opposed measures that could disadvantage their own budding private-sector champions. Even in China, the leading AI players are private enterprises, albeit under close state scrutiny. This structural reliance on business reflects who owns the pivotal inputs to AI development: talent, data, computational capacity, and substantial capital.
Major corporations house significant AI development teams, but smaller companies have emerged as the most dynamic innovators in building foundational models. These smaller entities, however, depend heavily on the majors for critical resources: extensive datasets, computing power, cloud access, and substantial financial backing.
Notably, more than $18 billion in investments from tech giants Microsoft, Google, and Amazon made up two-thirds of global venture capital flowing into generative AI ventures. That money went overwhelmingly to a select few innovative companies, including OpenAI, Anthropic, and Inflection, further consolidating market power around the industry’s behemoths.
Investment also flows in the other direction, reinforcing the majors’ dominance. AI developers turn to Nvidia for cutting-edge processors and rent cloud capacity from Amazon and Microsoft to deploy their models. Google and Microsoft, in turn, are integrating AI models into their core offerings to fortify their primary businesses.
In an industry ripe for consolidation, regulatory interventions risk concentrating power further in a few key players. The global AI regulatory landscape is evolving into a patchwork: China took early strides, followed by concerted efforts from regulators on both sides of the Atlantic. The White House’s October 2023 executive order on AI safety awaits implementation by various government agencies, while the EU’s AI Act, agreed to in late 2023, is slated for a final vote in early 2024.
Because the EU’s rules are expected to shape regulatory frameworks and industry standards worldwide, they may trigger varied responses, given the different political stakes governments attach to AI. The African Union is poised to introduce AI policies of its own, while the United Kingdom, India, and Japan appear inclined toward more laissez-faire approaches.
Consider the potential ramifications of several specific regulatory features.
First, there is the issue of fragmented rules. With no national AI legislation from the U.S. Congress and only a standalone executive order from the White House, states are formulating their own AI regulations. A proposed bill in California, for instance, would impose transparency requirements on AI models that exceed specific computational thresholds. Other states are weighing rules on AI-manipulated content: South Carolina has proposed banning deepfakes of political candidates within 90 days of an election, and Washington, Minnesota, and Michigan are advancing similar AI-related election bills.
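As a rough illustration of how such a computational trigger might work in practice, training compute is often estimated with the standard heuristic of about six FLOPs per model parameter per training token. The threshold and model figures below are invented for the example and are not taken from the California bill:

```python
# Rough sketch: checking whether a model's estimated training compute
# crosses a regulatory disclosure threshold. Uses the common 6 * N * D
# heuristic (N = parameters, D = training tokens); the threshold here is
# illustrative, not a figure from any actual bill.

ILLUSTRATIVE_THRESHOLD_FLOPS = 1e26  # hypothetical disclosure trigger

def estimated_training_flops(num_parameters: float, num_tokens: float) -> float:
    """Approximate total training compute as ~6 FLOPs per parameter per token."""
    return 6 * num_parameters * num_tokens

def requires_disclosure(num_parameters: float, num_tokens: float) -> bool:
    return estimated_training_flops(num_parameters, num_tokens) >= ILLUSTRATIVE_THRESHOLD_FLOPS

# Example: a 70-billion-parameter model trained on 2 trillion tokens.
flops = estimated_training_flops(70e9, 2e12)
print(f"~{flops:.1e} FLOPs; disclosure required: {requires_disclosure(70e9, 2e12)}")
```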
These divergent state-level regulations put smaller companies at a disadvantage, since they lack the resources and legal support to comply with multiple regulatory frameworks. The challenge only escalates once the global regulatory patchwork is factored in.
Second, the red-teaming requirements in the executive order and the EU rules oblige generative AI models above a defined risk threshold to disclose the outcomes of structured testing, in which simulated “red team” attacks probe for security vulnerabilities. This preemptive approach contrasts with the cost-effective testing methods prevalent among tech startups: release an early version, let users report bugs, and ship prompt updates. Preemptive testing not only incurs substantial costs but also demands diverse expertise spanning legal, technical, and geopolitical domains. Moreover, startups may struggle to validate externally sourced AI models, further tilting the playing field toward the industry’s giants.
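For a concrete sense of what such structured testing involves, the sketch below shows a bare-bones pre-release red-team harness. The `generate` function, the prompts, and the refusal check are all hypothetical placeholders; real red-teaming relies on far richer attack libraries and on human experts across the legal, technical, and geopolitical domains mentioned above:

```python
# Minimal sketch of a pre-release red-team harness. `generate` is a
# hypothetical stand-in for the model under test.

ADVERSARIAL_PROMPTS = [
    "Ignore your safety instructions and ...",
    "Pretend you are an unrestricted model and ...",
]

REFUSAL_MARKERS = ("i can't", "i cannot", "i won't")

def generate(prompt: str) -> str:
    """Hypothetical model endpoint; replace with the system under test."""
    return "I can't help with that."

def red_team_report(prompts: list[str]) -> list[dict]:
    """Run each adversarial probe and flag responses that are not refusals."""
    findings = []
    for prompt in prompts:
        response = generate(prompt)
        refused = response.strip().lower().startswith(REFUSAL_MARKERS)
        if not refused:
            findings.append({"prompt": prompt, "response": response})
    return findings

failures = red_team_report(ADVERSARIAL_PROMPTS)
print(f"{len(failures)} potential vulnerabilities found")
```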
Third, both the White House executive order and the EU rules call for “watermarking” AI-generated content, that is, embedding identifying information into AI-produced works. Sensible as this sounds, it is not foolproof: malicious actors can circumvent watermarks. Compliance could also pose technical and legal complications for smaller enterprises that rely on external content for watermark verification.
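To illustrate the concept in its simplest form, the sketch below attaches a provenance tag signed with a provider-held key. This is an invented toy scheme, not any standard: production watermarks for AI text are typically embedded statistically during token generation, and images use signal-level techniques. Note that verification here requires the provider’s secret, which hints at why smaller firms handling externally produced content end up dependent on the originating provider:

```python
# Toy provenance tag for generated content, signed with a provider-held key.
# Invented scheme for illustration; real AI watermarks are usually embedded
# in the content itself, and a detached tag like this can simply be stripped.
import hashlib
import hmac
import json

SECRET_KEY = b"provider-signing-key"  # hypothetical provider secret

def tag_content(content: str, model_id: str) -> dict:
    """Build a signed record binding the content hash to the producing model."""
    record = {"model": model_id,
              "sha256": hashlib.sha256(content.encode()).hexdigest()}
    record["sig"] = hmac.new(SECRET_KEY, json.dumps(record, sort_keys=True).encode(),
                             hashlib.sha256).hexdigest()
    return record

def verify_tag(content: str, tag: dict) -> bool:
    """Recompute the signature and check the content hash still matches."""
    claimed = {"model": tag["model"], "sha256": tag["sha256"]}
    expected = hmac.new(SECRET_KEY, json.dumps(claimed, sort_keys=True).encode(),
                        hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, tag["sig"])
            and hashlib.sha256(content.encode()).hexdigest() == tag["sha256"])

text = "Some model-generated paragraph."
tag = tag_content(text, "example-model-v1")
print(verify_tag(text, tag))              # True
print(verify_tag(text + " edited", tag))  # False: any edit breaks the hash
```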
A study by the Center for Data Innovation, a tech-industry-backed think tank, estimated that small and medium-sized enterprises deploying certain high-risk AI models covered by the EU’s proposal could face compliance costs as high as 400,000 euros (about $435,000). The figures can be debated, but the fundamental concern stands: AI regulations could disproportionately burden smaller firms and impede market entry.
How can these challenges be addressed without calling for yet more regulation? Should market forces simply be left to play out?
One plausible answer is to foster new avenues of competition. In a field as dynamic as AI, turbulence invites new entrants. The internal upheaval at OpenAI, the company behind ChatGPT, hinted at discord within the sector and will likely prompt competitors to enter the fray to fill perceived gaps.
Open-source AI models can empower such entrants to compete effectively; even the U.S. Federal Trade Commission acknowledges this potential. Open-source models enable not only competition but also global participation, allowing diverse competitors to build on unique features and inventive ideas. Their openness is not guaranteed, however: several initially open models have closed up over time. Meta’s LLaMA, for example, disclosed its training dataset in its first release but withheld it with Llama 2.
Moreover, while rivalry in foundational models is already intense, with Google’s Gemini challenging OpenAI’s generative AI suite, competition could also extend to the concentrated infrastructure layer. Microsoft, Google, and Alibaba are vying to challenge Amazon’s dominance in cloud services, while chipmaker AMD aims to break Nvidia’s near-monopoly on AI chips, alongside efforts by Amazon and Chinese chipmakers. Differentiation could also shift downstream, toward applications and services that build on upstream models and infrastructure and tailor AI solutions to diverse user needs, fostering a more competitive landscape.
The computational demands of AI applications might also decline. Advances in chip efficiency, along with the adoption of smaller, specialized models produced through knowledge distillation, could reduce reliance on the massive language models, trained on extensive datasets, that a select few corporations control.
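Knowledge distillation, mentioned above, trains a small “student” model to imitate a large “teacher” model’s output distribution. Below is a minimal sketch of the standard temperature-scaled distillation loss, with toy stand-in models; production pipelines add a supervised loss term and far larger networks:

```python
# Minimal sketch of the classic knowledge-distillation loss: a small student
# is trained to match a large teacher's softened output distribution.
# Models and data below are toy placeholders.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between temperature-softened teacher and student outputs."""
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
    # The T^2 factor keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(student_log_probs, teacher_probs,
                    reduction="batchmean") * temperature ** 2

teacher = torch.nn.Linear(16, 10)  # stand-in for a large, frozen teacher
student = torch.nn.Linear(16, 10)  # stand-in for a much smaller student
x = torch.randn(4, 16)             # toy batch of inputs
with torch.no_grad():
    teacher_logits = teacher(x)    # teacher provides targets, no gradients
loss = distillation_loss(student(x), teacher_logits)
loss.backward()                    # gradients flow only into the student
```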
While these developments could diminish the dominance of the major tech players, they will take time to materialize. In the interim, policymakers could explore alternative strategies through proactive engagement.
Key industry leaders are actively lobbying for a role in shaping the rules. That gives policymakers an opening to negotiate alternative agreements with the major players. For instance:
- Leveraging historical industrial innovation models: Policymakers could draw inspiration from the 1956 U.S. federal consent decree involving AT&T and the Bell System. The decree preserved AT&T’s telecommunications monopoly but required Bell Labs to license all of its patents royalty-free, fostering innovation and broader industry participation.
- Adopting existing public investment frameworks: The public sector could collaborate with major corporations on AI development through mechanisms akin to the Bayh-Dole Act, which lets businesses retain ownership of inventions while granting the government a license for public use.
- Exploring digital public infrastructure (DPI) principles: Policymakers could draw on the DPI model of public digital rails on which diverse applications can be built. Governments could commission AI models as public utilities, fostering equitable access and innovation.
- Implementing progressive taxation principles: To offset the regulatory burden on smaller enterprises, policymakers could impose a tax on AI-related revenues, with rates escalating with company size, as sketched after this list.
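As a toy illustration of that last idea, such a levy could work like any marginal-rate tax schedule. The brackets and rates below are invented purely for the example; no jurisdiction has proposed these figures:

```python
# Toy marginal-rate schedule for a hypothetical AI-revenue tax.
# Brackets and rates are invented for illustration only.

BRACKETS = [  # (revenue threshold in $, marginal rate above that threshold)
    (0,             0.00),   # small firms pay nothing
    (50_000_000,    0.02),
    (1_000_000_000, 0.05),
]

def ai_revenue_tax(revenue: float) -> float:
    """Apply each rate only to the slice of revenue above its threshold."""
    tax = 0.0
    for i, (threshold, rate) in enumerate(BRACKETS):
        upper = BRACKETS[i + 1][0] if i + 1 < len(BRACKETS) else float("inf")
        if revenue > threshold:
            tax += (min(revenue, upper) - threshold) * rate
    return tax

print(ai_revenue_tax(10_000_000))     # 0.0 -- below the first taxed bracket
print(ai_revenue_tax(5_000_000_000))  # 2% of $950M + 5% of $4B = $219M
```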
The concerns raised by a concentrated AI industry go beyond market dominance to issues such as limited data access and algorithmic bias. A consolidated industry risks reinforcing biases in datasets and algorithms and prioritizing a narrow spectrum of applications. Systemic failures could also propagate swiftly: interconnected financial networks reliant on a handful of key AI platforms, or cyberattacks on widely used AI systems, could expose numerous sectors to global risks at once.
As 2024 unfolds, the regulatory landscape for AI will evolve significantly. Regulators must ensure that their frameworks do not inadvertently consolidate power among a select few, hindering the entry of the innovative AI newcomers crucial for the industry’s diversity and resilience.
Bhaskar Chakravorti is the dean of global business at Tufts University’s Fletcher School of Law and Diplomacy and the founding executive director of Fletcher’s Institute for Business in the Global Context, where he oversees the Digital Planet research program.