### Guidelines for Developing Secure AI Systems Released by US, UK, and Global Allies

The United Kingdom, the United States, and partners from 16 other countries have unveiled new guidelines for the development of secure artificial intelligence (AI) systems.

According to the U.S. Cybersecurity and Infrastructure Security Agency (CISA), the approach emphasizes taking ownership of customers’ security outcomes, embracing radical transparency and accountability, and building organizational structures in which secure design is a top priority.

The U.K. National Cyber Security Centre (NCSC) added that the goal is to raise the cybersecurity bar for AI and to help ensure the technology is designed, developed, and deployed securely.

The guidelines also build on the U.S. government’s ongoing efforts to manage the risks posed by AI. These include rigorously testing new tools before public release, putting guardrails in place to address societal harms such as bias, discrimination, and privacy concerns, and establishing reliable methods for users to identify AI-generated content.

Moreover, the directives urge companies to commit to a bug bounty program to facilitate third-party discovery and reporting of vulnerabilities in their AI systems, enabling swift detection and resolution.

The latest guidelines, which follow a “secure by design” approach, aim to help developers treat cybersecurity as an essential requirement for the safety of AI systems, built in from inception and maintained throughout the development process.

Organizations are urged to assess threats to their systems, safeguard their supply chains and infrastructure, and address the key stages of the AI system development life cycle: secure design, secure development, secure deployment, and secure operation and maintenance.

The agencies also highlighted the importance of fortifying defenses against adversarial attacks on AI and machine learning (ML) systems, which seek to induce unintended behaviors such as altering a model’s classification, allowing users to perform unauthorized actions, and extracting sensitive data.
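
To make this class of attack concrete, here is a minimal sketch (not drawn from the guidelines themselves) of how a small, deliberately crafted perturbation can flip a toy linear classifier’s prediction. The model, weights, and data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear classifier: predict class 1 when w . x + b > 0.
w = rng.normal(size=20)
b = 0.1

def predict(x: np.ndarray) -> int:
    return int(w @ x + b > 0)

# Start from a benign input, nudged so the model scores it confidently as class 1.
x = rng.normal(size=20)
x += (1.0 - (w @ x + b)) * w / (w @ w)
print("original prediction:", predict(x))         # -> 1

# FGSM-style perturbation: step every feature against the gradient of the
# score, which for a linear model is simply w. Each feature changes by at
# most epsilon, yet the prediction flips.
epsilon = 0.2
x_adv = x - epsilon * np.sign(w)
print("adversarial prediction:", predict(x_adv))  # -> 0
print("largest per-feature change:", np.max(np.abs(x_adv - x)))
```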

The NCSC mentioned several strategies to achieve these objectives, including countering prompt injection attacks in the large language model (LLM) domain and mitigating risks such as data poisoning, in which customer feedback or training data is deliberately contaminated.
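
As one hypothetical illustration of the data-poisoning concern, the sketch below applies a basic hygiene filter to crowd-sourced feedback before it is folded into training data. The record fields, thresholds, and workflow are assumptions made for illustration, not part of the NCSC guidance.

```python
from collections import Counter

def filter_feedback(records, max_per_user=5, min_label_agreement=0.6):
    """Drop feedback from users flooding the queue, and drop items whose
    labels are contested across independent submitters."""
    per_user = Counter(r["user_id"] for r in records)
    votes = {}  # item_id -> Counter of labels submitted for that item
    for r in records:
        votes.setdefault(r["item_id"], Counter())[r["label"]] += 1

    kept = []
    for r in records:
        if per_user[r["user_id"]] > max_per_user:
            continue  # rate limit: one submitter should not dominate the corpus
        label_votes = votes[r["item_id"]]
        agreement = label_votes[r["label"]] / sum(label_votes.values())
        if agreement < min_label_agreement:
            continue  # contested label: hold back for human review
        kept.append(r)
    return kept

# Example: a single account mass-submitting flipped labels is screened out,
# while consistent feedback from independent users passes through.
records = (
    [{"user_id": "attacker", "item_id": i, "label": "spam"} for i in range(10)]
    + [{"user_id": f"u{i}", "item_id": 1, "label": "ham"} for i in range(4)]
)
print(len(filter_feedback(records)), "of", len(records), "records kept")  # 4 of 14
```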
