
Collaborative Testing of AI Model Safety by US-UK Partnership

The UK and US governments have signed a Memorandum of Understanding to create a common approach to independently testing the safety of advanced AI models.

OpenAI, Google, Anthropic, and other companies developing generative AI are continually enhancing their capabilities and releasing ever more sophisticated large language models. To establish a standardized approach for independently evaluating the safety of these evolving models, the UK and US governments have formally signed a Memorandum of Understanding. The collaboration pairs the UK’s AI Safety Institute with its US counterpart, which was announced by Vice President Kamala Harris but has yet to begin operations. The partnership’s primary objective is to develop comprehensive test suites for assessing the risks posed by “cutting-edge AI models” and ensuring their safety.

The plan includes exchanging technical expertise and information, and even personnel, between the two institutes. As an initial milestone, they intend to conduct a joint testing exercise on a publicly accessible model. Michelle Donelan, the UK’s science minister and a signatory of the agreement, stressed the urgency of the work, anticipating a new wave of AI models within the next year. These upcoming models are seen as potential “game-changers” whose capabilities are not yet fully understood.

As reported by The Times, this bilateral initiative is the world’s first formal agreement on AI safety, and both the US and the UK intend to pursue similar partnerships with other nations in the future. US Secretary of Commerce Gina Raimondo described AI as the defining technology of this era and said the partnership will accelerate both institutes’ work across the full spectrum of risks, from national security to broader societal concerns. She emphasized the collaboration’s proactive approach: deepening the understanding of AI systems, conducting thorough evaluations, and issuing robust guidance.

While the partnership currently focuses on testing and evaluation, governments worldwide are also drafting regulations to govern the use of AI. In March, the White House issued an executive order requiring federal agencies to use only AI tools that safeguard the rights and safety of the American public. The European Parliament, meanwhile, passed sweeping legislation to regulate artificial intelligence, banning AI applications that manipulate human behavior, exploit people’s vulnerabilities, or perform biometric categorization based on sensitive attributes. The law also prohibits the unauthorized scraping of facial images to build facial recognition databases and requires that AI-generated content, including deepfaked images, videos, and audio, be clearly labeled to ensure transparency and accountability.
