
Labour Proposes Mandatory Sharing of Test Data by AI Companies

Party plans to replace voluntary agreement with statutory regime ‘so we can see where this is takin…

Labour intends to compel artificial intelligence companies to share the results of their road tests, amid concern that regulators and policymakers were too slow to impose rules on social media platforms.

The party plans to introduce a mandatory regulatory framework, replacing the existing voluntary testing agreement between the government and tech firms. This new regime would require AI enterprises to disclose their test data to government authorities.

Peter Kyle, the shadow technology secretary, said legislators and regulators had been too slow to address the harms of social media platforms, and stressed that similar failures must not be repeated in the oversight of AI technology.

In response to the tragic murder of Brianna Ghey, Kyle highlighted the need for increased transparency from tech companies involved in AI development. He stated that under a Labour administration, companies engaged in AI research would be obligated to operate with greater transparency.

Moving away from the current voluntary guidelines, Kyle proposed a statutory code that would require companies to release all test data and to disclose the nature of their experiments. This shift aims to improve visibility into the development and implications of AI technology.

During the global AI safety summit held in November, Rishi Sunak secured a voluntary agreement with prominent AI firms, such as Google and OpenAI, to collaborate on testing advanced AI models before and after deployment. Labour's proposal would require AI companies to inform the government of plans to develop high-capability AI systems and to undergo safety assessments under independent supervision.

The voluntary testing pact at the AI summit garnered support from the EU and ten countries, including the US, UK, Japan, France, and Germany. Notable tech companies participating in model testing include Google, OpenAI, Amazon, Microsoft, and Mark Zuckerberg's Meta.

Kyle, presently engaging with Washington lawmakers and tech leaders in the US, highlighted the significance of test results in supporting the UK AI Safety Institute’s efforts to ensure public confidence in scrutinizing cutting-edge AI advancements.

He emphasized the transformative impact of AI on various aspects of society and stressed the importance of ensuring safe and responsible development in this domain.

Last modified: February 5, 2024