
### Swift Action Needed to Safeguard Against AI Threats in the United States

The U.S. government must move “decisively” to avert an “extinction-level threat” to humanity …

A report commissioned by the U.S. government and released recently urges Washington to act swiftly and decisively to mitigate significant national security threats posed by artificial intelligence (AI), which in the worst case could present a grave risk to humanity.

The report, obtained by TIME ahead of its official publication, emphasizes the pressing and escalating national security risks posed by rapid advances in AI. It warns that the emergence of advanced AI and artificial general intelligence (AGI) could destabilize global security in ways reminiscent of the introduction of nuclear weapons. AGI, a still-hypothetical technology capable of performing most tasks at or above human level, is being actively pursued by leading AI labs, some of which expect to achieve it within the next five years.

Written over the course of more than a year by three authors who consulted over 200 stakeholders, including government officials, experts, and employees at frontier AI companies such as OpenAI and Google DeepMind, the report surfaces some troubling insights. Among them: AI safety staff within these labs reportedly worry that decision-making is unduly shaped by the incentives of corporate executives.

Titled “An Action Plan to Increase the Safety and Security of Advanced AI,” the document proposes sweeping policy measures that could significantly disrupt the AI industry. Recommendations include prohibiting the training of AI models above a specified computing power threshold, to be set by a new federal AI agency, and requiring government authorization before frontier AI companies can train and deploy new models. The report also calls for strict controls on the publication of AI model weights and tighter regulation of AI chip manufacturing and export.

The report’s proposals are unprecedented and likely to face political headwinds; experts such as Greg Allen of the Wadhwani Center for AI and Advanced Technologies at the Center for Strategic and International Studies (CSIS) are skeptical that such stringent regulations could be adopted within the current U.S. AI policy framework.

The report underscores the risks created by the rapid evolution of AI technology and the need to address them promptly. It identifies the weaponization of AI and the loss of control over advanced systems as the principal dangers, both worsened by competitive dynamics in the AI industry. The authors stress the importance of regulating high-end computer chips, a critical bottleneck in AI development, to safeguard long-term global safety and security.

In conclusion, the report advocates for proactive measures to mitigate the risks posed by advanced AI systems, acknowledging the contentious nature of some recommendations. The authors, including the Harris brothers from Gladstone AI, underscore the necessity of balancing AI innovation with safety considerations to avert catastrophic consequences and ensure responsible AI development and deployment.
