
Boffins Argue for Starting AI Regulation at the Hardware Level

Better visibility and performance caps would be good for regulation too

In the effort to mitigate the potential harm of artificial intelligence, a recent publication from the University of Cambridge proposes integrating remote kill switches and lockouts, akin to those utilized to prevent unauthorized nuclear weapon launches, into the underlying hardware.

The paper, which includes contributions from several academic institutions as well as a number of researchers at OpenAI, argues that overseeing the hardware these models depend on could be an effective way to curb their misuse.

The researchers contend that intervening at the level of AI-relevant compute presents a viable opportunity: it is identifiable, excludable, quantifiable, and originates from a highly concentrated supply chain.

The training process for the most advanced models, believed to surpass a trillion parameters, demands extensive physical infrastructure: tens of thousands of GPUs or accelerators and weeks, if not months, of computational time. This aspect, according to the researchers, makes it challenging to conceal the existence and performance of these resources.
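
A crude back-of-envelope calculation illustrates the scale involved. The figures below are illustrative assumptions rather than numbers from the paper: the widely used approximation of roughly six floating-point operations per parameter per training token, a nominal sustained throughput per accelerator, and a cluster in the tens of thousands of chips.

```python
# Illustrative estimate only; every input here is an assumption, not a figure from the paper.
params = 1e12            # assumed: a trillion-parameter model
tokens = 1e13            # assumed: ~10 trillion training tokens
train_flops = 6 * params * tokens          # common ~6*N*D approximation for dense training

per_chip_flops = 4e14    # assumed: ~400 TFLOP/s sustained per accelerator
num_chips = 25_000       # assumed: "tens of thousands" of accelerators

seconds = train_flops / (per_chip_flops * num_chips)
print(f"Roughly {seconds / 86_400:.0f} days on {num_chips:,} accelerators")
# -> on the order of 70 days, i.e. weeks to months of cluster time
```

At that scale, the procurement, power draw, and data-centre footprint are hard to hide, which is precisely the researchers' point.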

Moreover, the cutting-edge chips essential for model training are predominantly supplied by a limited number of companies such as Nvidia, AMD, and Intel, enabling policymakers to restrict the distribution of these components to specific individuals or nations of interest.

These factors, combined with supply chain limitations in semiconductor manufacturing, equip policymakers with the tools to comprehend the deployment of AI infrastructure, control access to it, and enforce penalties for its inappropriate utilization.

Regulating the Infrastructure

The paper outlines various approaches policymakers could adopt for AI hardware regulation. Many of these suggestions, including measures to enhance transparency and restrict the sale of AI accelerators, are already being implemented at a national level.

For instance, last year US President Joe Biden signed an executive order aimed at identifying companies developing large dual-use AI models and the infrastructure providers capable of training them. “Dual-use” refers to technologies that can serve civilian as well as military purposes.

More recently, the US Commerce Department proposed regulations mandating American cloud service providers to implement stricter “know-your-customer” protocols to prevent circumvention of export restrictions by individuals or countries of concern.
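
In practice, such a rule amounts to a gate in the provisioning path: before allocating accelerators, the provider screens the customer and destination. The sketch below is a toy illustration under assumed list contents and thresholds, not the Commerce Department's actual requirements.

```python
# Toy sketch of a know-your-customer gate on compute provisioning.
# List entries and the allocation threshold are assumptions for illustration.
DENIED_PARTIES = {"Example Sanctioned Co"}
CONTROLLED_DESTINATIONS = {"Example country of concern"}
LARGE_ALLOCATION = 1_000   # assumed: allocations above this trigger identity verification

def approve_request(customer: str, destination: str, gpu_count: int, identity_verified: bool) -> bool:
    """Refuse listed parties and controlled destinations; require verified identity for large jobs."""
    if customer in DENIED_PARTIES or destination in CONTROLLED_DESTINATIONS:
        return False
    if gpu_count >= LARGE_ALLOCATION and not identity_verified:
        return False
    return True

print(approve_request("Acme Research", "US", 64, identity_verified=False))     # True
print(approve_request("Acme Research", "US", 5_000, identity_verified=False))  # False: needs KYC
```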

The researchers emphasize the value of such transparency in averting scenarios akin to the arms race triggered by the missile gap controversy, where inaccurate reports spurred a massive escalation in ballistic missile production. However, they caution that fulfilling these reporting obligations risks infringing on customer privacy and compromising the security of sensitive data.

On the trade front, the Commerce Department has intensified restrictions, limiting the performance of accelerators sold to China. Nevertheless, these efforts, while impeding countries like China from accessing American chips, are not foolproof.

To address these limitations, the researchers suggest establishing a global registry for AI chip sales to monitor them throughout their lifecycle, even after leaving their country of origin. This registry could potentially embed a unique identifier in each chip to combat component smuggling.
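
The paper leaves the mechanics open, but a lifecycle registry of this kind is essentially a ledger keyed on a per-chip identifier. The record layout below is a hypothetical sketch of what such an entry might track, not a schema proposed by the authors.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ChipRecord:
    """Hypothetical lifecycle entry for one registered accelerator."""
    chip_id: str                          # unique identifier embedded at manufacture
    manufacturer: str
    model: str
    manufactured: date
    export_license: str | None = None
    custody: list[tuple[date, str]] = field(default_factory=list)   # (date, owner)

    def transfer(self, when: date, new_owner: str) -> None:
        """Record a change of custody so the part stays traceable after export."""
        self.custody.append((when, new_owner))

# Example: a chip keeps an auditable chain of custody after leaving its country of origin.
record = ChipRecord("ACC-0001-XYZ", "ExampleFab", "datacentre accelerator", date(2024, 1, 15))
record.transfer(date(2024, 3, 1), "Cloud Provider A")
record.transfer(date(2024, 9, 30), "Research Lab B")
```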

At a more radical level, researchers propose integrating kill switches into the silicon to prevent malicious applications. This mechanism could enable regulators to swiftly respond to abuses of sensitive technologies by remotely disabling chip access. However, the authors caution that implementing such a kill switch carries risks, as it could become a target for cybercriminals if not executed correctly.
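
One way to picture such a mechanism is firmware that refuses to run workloads unless it holds a fresh, cryptographically signed permission from the issuing authority; withholding renewal is the "kill switch". The sketch below uses a shared-secret HMAC purely for illustration; real silicon would rely on hardware attestation and asymmetric keys, and everything here is an assumption rather than the paper's design.

```python
import hashlib
import hmac
import time

REGULATOR_KEY = b"illustration-only-shared-secret"   # assumption: real designs would use asymmetric attestation

def issue_license(chip_id: str, valid_until: float) -> tuple[str, str]:
    """Authority side: sign a short-lived permission to operate for one chip."""
    payload = f"{chip_id}:{valid_until}"
    tag = hmac.new(REGULATOR_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return payload, tag

def chip_may_run(payload: str, tag: str, chip_id: str) -> bool:
    """Chip side: refuse workloads without a valid, unexpired license."""
    expected = hmac.new(REGULATOR_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, tag):
        return False                                  # forged or tampered license
    licensed_id, valid_until = payload.rsplit(":", 1)
    return licensed_id == chip_id and time.time() < float(valid_until)

payload, tag = issue_license("ACC-0001-XYZ", valid_until=time.time() + 3600)
print(chip_may_run(payload, tag, "ACC-0001-XYZ"))     # True while the license is fresh
```

The same sketch also shows the risk the authors flag: whoever holds, or steals, the signing key can disable or unlock chips at will.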

Another proposal involves requiring multiple parties to authorize potentially risky AI training tasks before they can be deployed extensively. This concept mirrors the permissive action links utilized for nuclear weapons, designed to prevent unauthorized launches. In the context of AI, this mechanism would mandate authorization before training models above a certain threshold in the cloud.
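
Conceptually this is an M-of-N quorum check in front of the job scheduler: small runs proceed untouched, while anything above an agreed compute threshold needs sign-off from several independent parties. The threshold, party list, and quorum below are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of multi-party sign-off for large training runs,
# loosely modelled on permissive action links. All values are assumptions.
FLOP_THRESHOLD = 1e26                     # assumed authorization threshold
REQUIRED_APPROVALS = 2                    # assumed 2-of-3 quorum
AUTHORIZED_PARTIES = {"regulator", "cloud_provider", "independent_auditor"}

def may_launch(training_flops: float, approvals: set[str]) -> bool:
    """Below-threshold runs proceed freely; larger runs need a quorum of sign-offs."""
    if training_flops < FLOP_THRESHOLD:
        return True
    return len(approvals & AUTHORIZED_PARTIES) >= REQUIRED_APPROVALS

print(may_launch(5e24, set()))                                   # True: below threshold
print(may_launch(3e26, {"regulator"}))                           # False: quorum not met
print(may_launch(3e26, {"regulator", "independent_auditor"}))    # True
```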

The researchers acknowledge that, while potent, this approach could hinder the development of beneficial AI applications. Unlike nuclear weapons, where the outcomes are relatively clear-cut, AI applications often present more nuanced ethical considerations.

For those averse to such dystopian scenarios, the paper dedicates a section to reallocating AI resources for societal advancement. The premise is that policymakers could collaborate to make AI compute resources more accessible to groups unlikely to employ them for nefarious purposes, a concept termed “allocation.”

Challenges with AI Regulation

Why the need for such elaborate measures? The authors of the paper argue that physical hardware is inherently more manageable.

In contrast to hardware, “other inputs and outputs of AI development – data, algorithms, and trained models – are easily shareable, non-rivalrous intangible goods, making them inherently difficult to control,” as stated in the paper.

The concern is that once a model is released, whether openly or leaked, it becomes challenging to contain its proliferation across the internet.

Additionally, efforts to prevent model misuse have proven unreliable. For instance, researchers easily circumvented safeguards in Meta’s Llama 2 model intended to prevent the generation of offensive content.

At an extreme, there are fears that a highly advanced dual-use model could expedite the development of chemical or biological weapons.

The paper acknowledges that AI hardware regulation is not a panacea and does not negate the necessity for regulation in other facets of the industry.

However, the involvement of several OpenAI researchers is noteworthy, particularly in light of CEO Sam Altman’s endeavors to steer the conversation around AI regulation.
