
### US, UK, and Allies Announce Global Pact for “Secure by Design” AI

Non-binding 20-page agreement signed by 18 countries says companies must develop AI in a way that keeps the public safe from misuse.

On Sunday, a high-ranking US official presented the first comprehensive international accord on safeguarding artificial intelligence from malicious actors, advocating for AI technologies that are “secure by design.” The United States, the United Kingdom, and several other countries echoed this sentiment.

In a 20-page document released the same day, 18 nations agreed that companies designing and deploying AI should prioritize protecting users and the public from potential misuse.

The agreement is non-binding and consists primarily of fundamental guidelines, such as vetting software providers, monitoring AI systems for abuse, and safeguarding data against manipulation.

Jen Easterly, the head of the US Cybersecurity and Infrastructure Security Agency, underscored the importance of having multiple nations endorse the principle that AI systems must be developed with security at the forefront.

Easterly noted that the recommendations signify a consensus that the primary focus during the design phase should be on security. This marks a departure from previous approaches that prioritized features, speed to market, and cost reduction.

This accord is the latest in a series of governmental initiatives aimed at shaping the trajectory of AI, a technology that is increasingly pervasive in both commercial and societal realms. Despite the collective efforts, many of these initiatives lack enforcement mechanisms. Just last month, the UK hosted an AI safety summit.

In addition to the United States and the UK, countries such as Germany, Italy, the Czech Republic, Estonia, Poland, Australia, Chile, Israel, Nigeria, and Singapore have endorsed the new recommendations.

The agreement addresses concerns related to preventing the theft of AI technology by hackers and includes recommendations such as conducting thorough security assessments before releasing models.

However, notable issues such as the ethical use of AI and the ethical sourcing of data for model training remain unaddressed.

The rapid advancement of AI has raised various apprehensions, including fears of its potential misuse in undermining democracy, perpetrating fraud, or leading to substantial job displacement.

In terms of AI governance, Europe has taken the lead over the United States, with lawmakers in the region actively drafting regulations. France, Germany, and Italy recently agreed on a framework for regulating AI that promotes “mandatory self-regulation through codes of conduct” for foundation models.

Despite efforts by the Biden administration to push for AI legislation, progress has been limited in the divided US Congress.

In October, the White House issued a new executive order aimed at mitigating AI risks to consumers, workers, and marginalized communities while bolstering national security.

Last modified: February 27, 2024