**Developing a Taxonomy of Concepts and Defining Terminology in Adversarial Machine Learning: Insights from NIST’s Trustworthy and Responsible AI Report**

Artificial intelligence (AI) systems are rapidly expanding and evolving. They fall into two primary categories: Generative AI, exemplified by the widely recognized Large Language Models (LLMs), creates original content, while Predictive AI makes predictions or classifications from existing data.

Ensuring the safety, reliability, and resilience of AI systems is crucial, given their integral role across various industries. The NIST AI Risk Management Framework and AI Trustworthiness taxonomy underscore the importance of these operational aspects in building trustworthy AI systems.

A recent NIST Trustworthy and Responsible AI report aims to advance Adversarial Machine Learning (AML) by developing a comprehensive taxonomy of terms and definitions. This taxonomy, organized hierarchically, covers Machine Learning (ML) techniques, phases of the attack lifecycle, attacker objectives, and attacker knowledge of the learning process. The report also proposes strategies to mitigate AML attacks effectively.
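To make the hierarchical organization concrete, here is a minimal sketch of how such a taxonomy could be modeled in code. The class names, enum labels, and the example entry are illustrative assumptions based on the dimensions named above, not the report's actual schema.

```python
# Illustrative sketch only: the dimensions mirror those named in the summary
# above (ML paradigm, lifecycle phase, attacker objective, attacker
# knowledge), but the labels and structure are assumptions, not NIST's schema.
from dataclasses import dataclass
from enum import Enum

class Paradigm(Enum):
    PREDICTIVE_AI = "predictive"
    GENERATIVE_AI = "generative"

class LifecyclePhase(Enum):
    TRAINING = "training"        # attacks mounted while the model is built
    DEPLOYMENT = "deployment"    # attacks mounted against the live system

class Knowledge(Enum):
    WHITE_BOX = "full access to model internals"
    GRAY_BOX = "partial knowledge, e.g. architecture only"
    BLACK_BOX = "query access only"

@dataclass(frozen=True)
class AMLAttack:
    name: str
    paradigm: Paradigm
    phase: LifecyclePhase
    objective: str          # e.g. "integrity", "availability", "privacy"
    knowledge: Knowledge

# Example entry: a classic test-time evasion attack on a classifier.
evasion = AMLAttack(
    name="evasion",
    paradigm=Paradigm.PREDICTIVE_AI,
    phase=LifecyclePhase.DEPLOYMENT,
    objective="integrity",
    knowledge=Knowledge.WHITE_BOX,
)
```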

The dynamic nature of AML issues underscores the need to address unresolved challenges at each stage of AI system development. The research strives to serve as a valuable resource for shaping future practices and standards in assessing and enhancing the security of AI systems.

Key highlights of the study include:

  1. Introducing a standardized vocabulary for discussing AML concepts within the ML and cybersecurity communities.
  2. Presenting a detailed taxonomy of AML attacks applicable to both Generative AI and Predictive AI systems.
  3. Categorizing Generative AI attacks into evasion, poisoning, abuse, and privacy, and Predictive AI attacks into evasion, poisoning, and confidentiality (see the evasion sketch after this list).
  4. Addressing attacks across various data modalities and learning approaches, including supervised, unsupervised, semi-supervised, federated learning, and reinforcement learning.
  5. Discussing potential mitigations for AML and strategies to combat specific attack types (see the adversarial-training sketch after this list).
  6. Analyzing the limitations of existing mitigation approaches and offering a critical assessment of their effectiveness.
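As a concrete instance of the evasion category in item 3, the following is a minimal sketch of the Fast Gradient Sign Method (FGSM), a well-known white-box evasion attack against a predictive classifier. The PyTorch model, inputs, and epsilon value are placeholder assumptions; the report catalogs many evasion techniques, and FGSM appears here only as one representative example.

```python
# Minimal FGSM evasion sketch (Goodfellow et al., 2015). `model`, `x`, `y`,
# and `epsilon` are assumed inputs; any differentiable PyTorch classifier works.
import torch
import torch.nn.functional as F

def fgsm_evasion(model, x, y, epsilon=0.03):
    """Perturb inputs x along the sign of the loss gradient so the
    unchanged model misclassifies them at inference time."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, then keep pixels valid.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return torch.clamp(x_adv, 0.0, 1.0).detach()
```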
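The mitigations in item 5 can be illustrated the same way. Adversarial training, one widely studied defense against evasion, augments each optimization step with adversarial examples generated on the fly. This sketch reuses `fgsm_evasion` from the block above; the optimizer and batch are assumed to exist.

```python
# Sketch of adversarial training (after Madry et al., 2018), reusing
# fgsm_evasion() from the previous sketch; optimizer/batch are assumptions.
def adversarial_training_step(model, optimizer, x, y, epsilon=0.03):
    model.train()
    x_adv = fgsm_evasion(model, x, y, epsilon)  # craft perturbed batch
    optimizer.zero_grad()                       # clear grads left by the attack
    # Fit the model on adversarial inputs so it learns to resist evasion.
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

As item 6 notes, such defenses have real limitations: they typically trade off accuracy on clean inputs and can be circumvented by stronger adaptive attacks, which is part of the report's critical assessment.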