
### Enhancing Political Processes Through Public Artificial Intelligence

Publicly owned and developed AI systems could democratize the AI market while ensuring that AI serves the public interest.

With the global spotlight on misinformation, manipulation, and propaganda ahead of the 2024 U.S. presidential election, it is evident that democracy has an AI problem. But it is also becoming clear that AI has a democracy problem. Both problems must be addressed to preserve democratic governance and protect the public interest.

Just three dominant Big Tech firms (Microsoft, Google, and Amazon) control roughly two-thirds of the global market for the cloud computing resources used to train and deploy AI models. With a wealth of AI talent, substantial capacity for innovation, and minimal public oversight, these firms wield outsized influence.

The growing concentration of control over AI raises concerns about how democracy and technology will evolve together. When the direction of AI is set largely by tech moguls and corporations, the resulting systems tend to reflect their interests rather than those of the general public or ordinary consumers.

To foster societal welfare, a robust public AI sector is indispensable to counterbalance corporate AI. Additionally, reinforcing democratic institutions to regulate all AI activities is imperative.

One potential approach is the establishment of an AI Public Option, encompassing AI systems like foundational large-language models geared towards advancing the public good. Analogous to public infrastructure such as roads and postal services, a public AI option could ensure universal access to transformative technology, setting a benchmark for private services to surpass in competitiveness.

Accessible public models and computational infrastructure would yield myriad advantages for the U.S. and society at large. They would facilitate public engagement and oversight on critical ethical dilemmas in AI development, including the incorporation of copyrighted content in model training, equitable distribution of access amidst high demand for cloud computing resources, and licensing access for sensitive applications like law enforcement and healthcare. This initiative would serve as an open platform for innovation, enabling researchers, small businesses, and corporate entities to develop applications and conduct experiments.

Public AI initiatives of this kind are not unprecedented. Taiwan, a frontrunner in global AI, has pioneered public AI development and governance. The Taiwanese government has invested more than $7 million in developing its own large-language model, aimed at countering AI models developed by mainland Chinese corporations. Seeking to democratize AI development, Taiwan's Minister of Digital Affairs, Audrey Tang, has worked with the Collective Intelligence Project to introduce Alignment Assemblies, which bring the public into the process alongside corporations like OpenAI and Anthropic. Through AI chatbots, ordinary citizens contribute their perspectives on AI-related questions, making decision-making more inclusive.

A variant of the AI Public Option, overseen by a transparent and accountable public agency, would offer greater assurances regarding the accessibility, fairness, and sustainability of AI technology for society compared to exclusive private AI development.

Developing AI models entails intricate processes demanding technical acumen, well-coordinated teams, and unwavering commitment to public welfare. Despite common criticisms of Big Government, federal bureaucracies have a commendable track record in these aspects, sometimes surpassing corporate entities.

Several of the world’s most sophisticated projects, such as astrophysical observatories, nuclear facilities, and particle accelerators, are operated by U.S. federal agencies. Setbacks and delays are not uncommon in such endeavors (the James Webb Space Telescope, for example, notably exceeded its initial budget and timeline), but private enterprises encounter similar challenges. In high-stakes technological domains, delays are to be expected.

Given political resolve and adequate funding from the federal government, public investment could weather the technical challenges and setbacks that would derail a corporation focused on short-term returns.

The recent Executive Order on AI by the Biden administration paves the way for establishing a federal agency dedicated to AI development and deployment under political oversight. The Order advocates for a National AI Research Resource pilot program to furnish computational resources for the research community.

While this initiative marks a positive step, the U.S. should consider broader measures by instituting a services agency rather than solely focusing on research resources. Analogous to the Centers for Medicare & Medicaid Services (CMS) managing public health insurance programs, a federal agency dedicated to AI—termed Centers for AI Services—could administer and operate Public AI models. This agency could democratize the AI landscape while prioritizing the democratic impact of AI models, achieving dual objectives.

Although a public AI agency would require substantial resources, its scale is modest in the context of the federal budget. OpenAI, with fewer than 800 employees, is small compared to CMS, which has a workforce of 6,700 and administers an annual budget exceeding $2 trillion. A scale closer to that of the National Institute of Standards and Technology, with its 3,400 staff and $1.65 billion annual budget, seems more apt. Even a significant investment would be a nominal expense compared to initiatives like the $50 billion CHIPS Act aimed at boosting domestic semiconductor production, and investing in our future and our democracy holds immense value.

If established, what services could such an agency provide? Its primary mandate would involve innovating, developing, and maintaining foundational AI models adhering to best practices, crafted in collaboration with academic and civil society leaders, and offered at an affordable rate to all U.S. consumers.

Foundation models are general-purpose AI models that support a diverse array of tools and applications. They can process varied data inputs, ranging from text in multiple languages and subjects to images, audio, video, and structured data like sensor readings or financial records, and they can be fine-tuned for specialized tasks. While there is ample room for innovation in designing and training these models, the fundamental techniques and architectures are well-established.

Publicly funded foundation AI models would operate as a public service, akin to healthcare options. They wouldn’t preclude private foundation models but would establish a baseline for price, quality, and ethical standards that private entities must match or surpass.

Similar to public healthcare, the government need not handle every aspect itself. It can engage private providers to assemble the resources needed to deliver AI services, and, as with the CHIPS Act, it could incentivize key supply-chain players such as semiconductor manufacturers to support the underlying infrastructure.

The government could offer basic services directly to consumers on top of its foundation models, such as chatbot interfaces and image generators. However, more specialized consumer-oriented products, such as tailored digital assistants, domain-specific knowledge systems, and customized corporate solutions, may remain the province of private firms.

Crucially, in establishing an AI Public Option, the government would play a pivotal role in making the design decisions involved in training and deploying AI foundation models. Transparency, political oversight, and public participation in these decisions could produce outcomes better aligned with democratic principles than an unregulated private market would.

Key decisions in constructing AI foundation models include data selection, providing pro-social feedback during model training, and prioritizing interests when addressing deployment challenges. Public AI models could leverage public domain content, government-licensed data, and citizen-consented data for model training, eschewing ethically dubious practices like web scraping or unauthorized data usage.

Public AI models could adhere to U.S. labor laws and public sector employment standards, contrasting with instances of labor exploitation and breaches of public trust in some corporate AI projects. By subjecting these models to democratic processes and political oversight, the aim is to align foundation AI models with democratic principles and safeguard minority rights within majority rule.

Publicly funded foundation AI models, while requiring only modest federal appropriations, could remove the incentive to exploit consumer data and counter anti-competitive practices. These public option services could benefit all stakeholders, individuals and corporations alike. However, establishing such an agency means navigating shifting political landscapes, and would require a steadfast, principled administration committed to upholding constitutional values.

Robust legal regulation could, in principle, reduce the urgency of public AI development, but comprehensive regulatory frameworks continue to lag behind AI advances. Although several tech giants have pledged to safeguard democracy, those commitments are voluntary and lack specificity. The federal government has been slow to enact legislation and regulation for corporate AI, though a bipartisan task force in the House of Representatives shows promise. At the state level, only a few jurisdictions have passed laws addressing AI-based misinformation in elections. Given this delayed regulatory pace, exploring alternatives to corporate-controlled AI becomes imperative.

In the absence of a public option, consumers should exercise caution, considering the consolidation of markets by tech venture capital in recent years. Instances in online search and social media, as well as ridesharing, underscore the repercussions of unchecked dominance by a few firms, leading to user exploitation and product degradation.

Addressing the challenges posed by AI necessitates proactive leadership committed to public interests, steering away from ceding control to corporate entities. While reimagining democracy for AI isn’t imperative, revitalizing and fortifying democratic structures to counter unbridled corporate influence is paramount.

Last modified: March 5, 2024