
### OpenAI and Other AI Companies Will Have to Tell the US Government About New AI Training Projects

The Biden administration is using the Defense Production Act to require companies to inform the Commerce Department when they train powerful new AI models and to share the resulting safety data.

Some influential figures in Silicon Valley and Washington, DC were caught off guard by the global impact of OpenAI’s ChatGPT when it launched. The rapid progress of AI systems built on powerful large language models, such as the one behind ChatGPT, is now prompting calls for heightened vigilance within the US government.

The Biden administration is poised to invoke the Defense Production Act to compel tech companies to report when they train high-powered AI systems. The requirement could take effect as early as next week.

Under this new mandate, the US government will gain insights into critical projects undertaken by major players like OpenAI, Google, Amazon, and other AI-centric firms. Companies will be obligated to divulge information about the safety evaluations conducted on their latest AI innovations.

The level of effort OpenAI has invested in developing a successor to its flagship model, GPT-4, has been shrouded in secrecy. Under the new rule, the US government may be among the first to learn when capability and safety testing begins on GPT-5. OpenAI did not immediately respond to a request for comment.

During an event at Stanford University’s Hoover Institution, US Secretary of Commerce Gina Raimondo announced that the government would use the Defense Production Act to require companies to disclose each time they train a new high-capacity language model and to share the resulting safety data for review. She did not specify when the requirement takes effect or what actions the government might take based on the collected data. Further updates are expected in the coming year.

These regulatory measures stem from a broad executive order issued by the White House last October. It tasked the Commerce Department with formulating a framework by January 28 under which companies must furnish details to US officials about cutting-edge AI models in development, including ownership, the computational resources consumed, and safety testing protocols.

The executive order sets an initial reporting threshold of 100 septillion (10^26) floating-point operations (flops) of training compute, with a threshold one thousand times lower for models trained on biological sequence data such as DNA. It also mandates the establishment of guidelines for determining when AI projects must be reported to the Commerce Department. An analysis accompanying the executive order indicates that this threshold surpasses the computing power employed in training GPT-4 and Gemini, though the exact resources used by OpenAI and Google remain undisclosed.
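To make the threshold concrete, here is a minimal sketch of the arithmetic involved. It assumes the widely used 6 × parameters × training-tokens approximation for total training compute; that heuristic, the helper names, and the example model sizes are illustrative assumptions, not part of the executive order.

```python
# Back-of-the-envelope check of whether a training run would cross the
# executive order's reporting threshold. The 6*N*D compute heuristic and
# example figures are assumptions for illustration, not official methodology.

GENERAL_THRESHOLD_FLOPS = 1e26       # 100 septillion floating-point operations
BIO_SEQUENCE_THRESHOLD_FLOPS = 1e23  # a thousand times lower for DNA-style models


def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute with the common 6 * N * D heuristic."""
    return 6.0 * n_params * n_tokens


def must_report(n_params: float, n_tokens: float, bio_sequence: bool = False) -> bool:
    """Return True if the estimated run crosses the relevant reporting threshold."""
    threshold = BIO_SEQUENCE_THRESHOLD_FLOPS if bio_sequence else GENERAL_THRESHOLD_FLOPS
    return estimated_training_flops(n_params, n_tokens) >= threshold


if __name__ == "__main__":
    # Hypothetical 1-trillion-parameter model trained on 20 trillion tokens:
    # 6 * 1e12 * 2e13 = 1.2e26 flops, just over the 1e26 line.
    print(must_report(1e12, 2e13))   # True
    # Hypothetical 7-billion-parameter model trained on 2 trillion tokens:
    # 6 * 7e9 * 2e12 = 8.4e22 flops, well under the line.
    print(must_report(7e9, 2e12))    # False
```

By this rough math, only the very largest frontier-scale training runs would cross the general 10^26 line, while the far lower biological-sequence threshold captures much smaller models.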

Raimondo also revealed plans for the Commerce Department to enforce another provision from the October executive order, requiring cloud service providers like Amazon, Microsoft, and Google to notify authorities when foreign entities leverage their infrastructure to train large language models. Foreign projects reaching the 100 septillion flop threshold must be reported.

Raimondo made her announcement on the same day Google touted new data showing its latest AI model, Gemini, surpassing OpenAI’s GPT-4 on certain benchmarks. Under the new reporting rules, the Commerce Department could get early warning of Gemini’s successors if those projects consume enough of Google’s computing resources.

The field’s rapid evolution prompted some prominent voices in AI last year to call for a pause on developing models more powerful than GPT-4. Some experts, like Samuel Hammond of the Foundation for American Innovation, argue that a model’s risk is not determined solely by the computing power behind it; others, such as Dan Hendrycks of the Center for AI Safety, support the government’s proactive stance given the pace of AI advances and concerns about the emergence of superintelligent AI.

Anthony Aguirre, from the Future of Life Institute, echoes the sentiment, emphasizing the necessity for transparency and oversight in AI research and development. He asserts that the government’s awareness of corporate AI endeavors is crucial given the substantial investments and potential risks involved.

Raimondo disclosed plans for the National Institute of Standards and Technology (NIST) to establish safety testing standards for AI models as part of a new US AI Safety Institute. These standards, including red teaming methodologies to assess AI risk, aim to enhance industry understanding of potential hazards associated with AI applications, with a focus on preventing misuse that could infringe on human rights.

Despite the looming deadline for NIST to implement the October AI directive, concerns persist regarding the agency’s capacity to effectively execute these standards due to resource and expertise constraints.
