
### Strategies to Prevent a Technological Doomsday: Insights from an OpenAI Board Member

As artificial intelligence becomes more science fact than science fiction, its governance cannot be left to a select few.

“We must guarantee that all artificial intelligence (AI) tools comply with existing regulations, with no special privileges protecting developers from accountability in case their models deviate from legal standards,” former Texas Representative Will Hurd emphasized.

Opinion by Will Hurd

Will Hurd is a former board member of OpenAI and a former Congressman from Texas.

During my tenure on the OpenAI board, one briefing left me unsettled in a way that echoed moments from my long national security career. Witnessing the unveiling of what would later be known as GPT-4, I could see the tool's potential: it marked the first step toward artificial general intelligence.

While conventional AI is tailored for specific functions within predefined parameters, artificial general intelligence represents a paradigm shift. It embodies an evolution where AI can comprehend, learn, and apply intelligence across diverse problem sets, transcending its original training constraints. With capabilities mirroring human cognition, AGI holds the promise of addressing intricate global challenges, from climate crises to medical advancements. However, if left unchecked, AGI could yield repercussions as profound and irreversible as those stemming from nuclear warfare.

The gravity of this prospect was not lost on the OpenAI board during my tenure. At every meeting, the engineers presented meticulous projections of where AI would advance next. Consequently, our deliberations revolved predominantly around safety and alignment, the effort to ensure that AI adheres to human intent.

The governance turmoil at OpenAI in late November 2023 was disconcerting. (I resigned from the board on June 1, 2023, to pursue the Republican presidential nomination.) Within a brief five-day span, four directors ousted the board chair and fired CEO Sam Altman, prompting the threat of a mass employee exodus. Eventually, Altman was reinstated, leaving the current OpenAI board with three members: two newcomers and one incumbent.

The rationale behind the board's actions remains elusive. Yet OpenAI faces fundamental governance questions: Should four individuals have the authority to jeopardize a $90 billion enterprise? Is the organizational structure of OpenAI, arguably the foremost AGI company in the world, overly intricate?

Beyond these operational concerns lie profound philosophical inquiries concerning AGI development. Who can be entrusted with crafting such a potent tool and potential weapon? How do we ensure responsible stewardship once AGI materializes? Crucially, how can we steer the advent of AGI towards benefiting humanity rather than posing existential threats?

As this technological frontier blurs the line between fact and fiction, its oversight cannot be relegated to a select few. As in the nuclear arms race, malevolent actors, including our adversaries, may exploit AI without regard for ethical or humanitarian considerations. This moment transcends corporate dynamics; it calls for a concerted effort to erect guardrails that channel AGI's potential for good and avert catastrophic outcomes.

First, we must mandate legal accountability. All AI tools must comply with existing laws, with no exemptions shielding developers from legal consequences when their models fall short. We must not repeat the oversights we witnessed with software and social media.

Today, the regulatory landscape is disjointed: a patchwork of city and state statutes targets specific AI applications, while AI deployed in sectors like finance and healthcare operates under existing industry-specific legal frameworks without any AI-specific directives.

This fragmented approach, combined with market pressure on AI developers to seize early-mover advantages, risks incentivizing the same regulatory leniency seen in earlier tech sectors. The result would be accountability gaps and oversight lapses that could compromise AI's responsible development and use.

By 2025, Americans' losses to cybercrime are projected to reach $10.5 trillion. One contributing factor is the reluctance of legislatures and courts to classify software as a product, which allows it to escape stringent liability.

Social media platforms have fueled a surge in self-harm among adolescent girls and serve as conduits for white nationalists spreading hate, antisemitic factions promoting bigotry, and foreign intelligence services attempting subversion. Yet legislative carve-outs shield social media from the regulatory norms that govern radio, television, and print media.

In sectors like banking, those who develop and deploy AI must operate within existing banking regulations. No industry should receive an exemption simply because AI is new.

#### Safeguarding Intellectual Property in the AI Era

Second, we must safeguard intellectual property. Creators whose data underpins the training of AI models deserve fair compensation when their work fuels AI-generated content.

Just as authors earn royalties when their books are adapted, creators should receive equitable compensation when their content is incorporated into AI outputs beyond the bounds of fair use. Existing copyright and trademark law should extend to AI, requiring developers to compensate creators for the content they use. Initiatives by companies like Adobe and Canva show how compensation frameworks can let creators profit from the use of their content. Aligning AI with existing copyright law can foster a vibrant ecosystem of creators supplying the data on which algorithms are trained.

#### Enforcing Safety Permitting

Third, we must institute safety permitting. Just as permits are required for nuclear facilities and construction projects, powerful AI models should require a license certifying that they meet standards for safety and reliability.

While the Biden administration's executive orders echo efforts by previous administrations to address AI concerns, the recent safety-permitting directives fall short. The administration's requirement that AI innovators notify authorities lacks specificity.

The White House should use its convening authority to define the criteria for classifying an AI model as powerful. Criteria centered on autonomy and decision-making can identify the systems that need stringent oversight, particularly where AI decisions affect rights, safety, and privacy. Scrutiny should also extend to AI systems that process extensive personal data or that could be repurposed for unethical ends.

To fortify safeguards against potent AI risks, entities developing AI models meeting these benchmarks should seek permits from the National Institute of Standards and Technology before public release.

#### Envisioning AI's Future

Transparency and accountability underpin these regulatory endeavors. Transparent AI operations empower experts to scrutinize decision-making processes, mitigating hidden biases and errors. Accountability ensures rectification responsibility in case of AI-induced harm or errors, crucial for upholding public trust and responsible AI utilization.

These principles assume heightened significance as AI permeates critical domains like healthcare, finance, and criminal justice, where decisions wield substantial impact on individuals.

The OpenAI episode underscores a pivotal lesson and calls for proactive measures. The governance of artificial general intelligence transcends corporate confines; it is a global imperative that resonates across every layer of society.

Navigating the road ahead demands robust legal frameworks, respect for intellectual property, and safety standards as stringent as those governing nuclear energy. Beyond regulation, a collective vision is indispensable: one in which technology serves humanity, innovation aligns with ethical obligations, and we move forward with wisdom, courage, and a shared commitment to a future that uplifts humanity.
