NIST AI 100-2 E2023 (Initial Public Draft)

Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations

Date Published: March 8, 2023
Comments Due: September 30, 2023
Email Comments to: ai-100-2@nist.gov

Author(s)

Alina Oprea (Northeastern University), Apostol Vassilev (NIST)

Announcement

This NIST report on artificial intelligence (AI) develops a taxonomy of attacks and mitigations and defines terminology in the field of adversarial machine learning (AML). Taken together, the taxonomy and terminology are meant to inform other standards and future practice guides for assessing and managing the security of AI systems by establishing a common language for understanding the rapidly developing AML landscape. Future updates to the report will likely be released as attacks, mitigations, and terminology evolve.

NIST is specifically interested in comments on and recommendations for the following topics:

  • What are the latest attacks that threaten the existing landscape of AI models?
  • What are the latest mitigations that are likely to withstand the test of time?
  • What are the latest trends in AI technologies that promise to transform the industry/society? What potential vulnerabilities do they come with? What promising mitigations may be developed for them?
  • Is there new terminology that needs standardization?

NIST intends to keep the document open for comments for an extended period in order to engage with stakeholders and invite contributions to an up-to-date taxonomy that serves the needs of the public.

Abstract

Keywords

artificial intelligence; machine learning; attack taxonomy; evasion; data poisoning; privacy breach; attack mitigation; data modality; trojan attack; backdoor attack; chatbot

Control Families

None selected

Documentation

Publication:
https://doi.org/10.6028/NIST.AI.100-2e2023.ipd

Supplemental Material:
Trustworthy & Responsible AI Resource Center

Document History:
10/30/19: IR 8269 (Draft)
03/08/23: AI 100-2 E2023 (Draft)