
### UN Resolution on Ensuring AI Security

Countries around the world are signaling support for secure AI practices, but not necessarily committing to them.

In a move whose implications for global AI safety remain to be seen, the United Nations on Thursday approved a resolution on the responsible use of artificial intelligence.

The resolution, drafted by the United States and co-sponsored by 120 nations, was adopted by consensus without a vote. It emphasizes the promotion of “safe, secure and trustworthy” artificial intelligence, a phrase that appears 24 times throughout the eight-page document.

Rather than mandating specific actions, the resolution acknowledges security threats only in general terms. It also highlights more immediate challenges posed by AI, including its use in disinformation campaigns and its potential to exacerbate human rights violations and global inequality.

Joseph Thacker, an AI expert and security researcher at AppOmni, views the resolution as a step toward mobilizing the relevant stakeholders. He hopes it will lead to member states being held accountable for the commitments they have made under the agreement.

#### Key Points of the Resolution

The latest UN resolution urges member states to enhance investments in developing and implementing robust safeguards for artificial intelligence systems. It specifically mentions the importance of “systems security,” encompassing the entire lifecycle of AI development, not just safety considerations.

Among the recommendations are mechanisms for risk monitoring, data security, personal data protection, and impact assessments during both testing and deployment phases of AI systems.

The resolution is not revolutionary in itself, but reaching global consensus on a baseline standard for acceptable AI practices is a significant achievement, according to Thacker.

#### Government Initiatives Addressing AI Challenges

The recent UN resolution follows proactive measures taken by Western governments to address AI-related concerns.

The European Union’s AI Act sets strict regulations on certain AI applications, prohibits the development of social scoring systems, and imposes substantial penalties for non-compliance.

Furthermore, an Executive Order issued by the Biden administration has advanced AI safety by mandating cybersecurity safeguards, requiring developers of powerful AI systems to share safety test results with the government, and directing measures to combat AI-enabled fraud and abuse.

Thacker emphasizes the critical role of policymakers in determining the efficacy of these initiatives in enhancing AI safety and security. Given the generational gap and limited understanding of AI among many world leaders, comprehensive education and awareness-building efforts are essential to drive effective regulation and governance in this domain.
