
### U.S. Joins 17 Other Countries in Secure AI Pact

The agreement is focused on AI safety in the development and deployment stages.

In the latest global effort to address artificial intelligence (AI) development and its risks, eighteen countries, including the United States, have unveiled an international agreement emphasizing the importance of keeping AI systems safe throughout their development and deployment. The agreement also urges AI companies to build safety measures into their systems to guard against malicious actors.

### Key Points

- The AI safety guidelines were jointly issued by the Cybersecurity and Infrastructure Security Agency (CISA) of the U.S. Department of Homeland Security and the UK's National Cyber Security Centre.
- Outlined in a non-binding 20-page document, the agreement calls on AI firms to safeguard users and the public from potential misuse of AI systems across all phases of development, deployment, and maintenance.
- Companies are advised to conduct security assessments on models before deployment, continuously monitor AI systems for misuse, protect data from cyber threats, and vet software suppliers.
- The guidelines outline the steps needed to secure AI technologies from inception through design, implementation, release, and maintenance.
- Participating nations in the pact include Germany, Italy, Norway, Australia, Chile, Nigeria, and Singapore.
- Major industry players such as Google, Microsoft, and OpenAI helped shape the guidelines.

Jen Easterly, director of the U.S. Cybersecurity and Infrastructure Security Agency, emphasized that security should take priority over speed to market or cost cutting in AI development.

President Joe Biden’s sweeping executive order in October, aimed at mitigating potential AI risks, underscores the importance of addressing privacy, trust, and safety issues in AI development. Invoking the Cold War-era Defense Production Act, the order requires AI companies developing technologies that pose risks to national security, the economy, or public health to share safety test results with the U.S. government.

A recent agreement on AI legislation among Germany, France, and Italy advocates “mandatory self-regulation through codes of conduct” for various AI applications. The EU has faced delays in finalizing its AI Act, despite Germany’s proactive role in drafting AI regulations, including rules on facial recognition and biometric security.
