
### Pact for AI Safety Formed Between the United States and the United Kingdom

The U.S. and U.K. have pledged to work together on safe AI development, in an agreement that includes joint testing of the most advanced AI models by the two countries' AI Safety Institutes.

The collaboration between the United States and the United Kingdom on the safe development of AI was formalized in an agreement signed by U.S. Commerce Secretary Gina Raimondo and U.K. Technology Secretary Michelle Donelan on Monday (April 1). Under the partnership, the AI Safety Institutes of both nations will jointly test the most advanced AI models.

The Department of Commerce announced that the partnership takes effect immediately, allowing the two organizations to cooperate seamlessly. Given the rapid evolution of AI, both governments acknowledge the need to proactively address the emerging risks the technology poses.

Furthermore, the agreement outlines plans for the U.S. and U.K. to establish similar collaborations with other countries to enhance AI safety globally. The two institutes plan to conduct joint tests on publicly accessible models and to explore personnel exchanges that draw on a shared pool of expertise.

This development follows the recent introduction of a White House policy mandating federal agencies to identify and mitigate potential AI risks, appoint chief AI officers, and create transparent inventories of AI systems. These inventories will highlight applications with implications for safety and civil rights, such as AI-driven healthcare and law enforcement decision-making.

In response to the policy announcement, Jennifer Gill, Vice President of Product Marketing at Skyhawk Security, emphasized the need for uniform standards across all agencies, warning that inconsistent AI governance, management, and monitoring practices within federal agencies create exploitable vulnerabilities.

Moreover, the National Institute of Standards and Technology (NIST) established the Artificial Intelligence Safety Institute Consortium (AISIC) earlier this year to facilitate collaboration between industry and government on safe AI deployment. Mastercard CEO Michael Miebach highlighted the importance of building trust in AI technology through consistent and robust standards. AISIC has more than 200 members, including tech industry leaders such as Amazon, Meta, Google, and Microsoft, academic institutions such as Princeton and Georgia Tech, and various research organizations.
