
Supporting Trustworthy AI: NIST Publishes Guidance to Identify and Mitigate Cyberattacks that Manipulate AI Systems

1/16/2024

As part of its long-term efforts to safeguard artificial intelligence (AI), the National Institute of Standards and Technology (NIST) has released guidance that identifies the types of cyberattacks that manipulate the behavior of AI systems and outlines how to mitigate such attacks.

AI systems that perform tasks learn to make decisions based on training data. For example, an autonomous vehicle might be shown images of highways and streets featuring road signs, data that helps the AI predict how to respond in specific situations. The challenge is that the data AI systems depend on may not be trustworthy, as cyberattacks can corrupt that data or even introduce biased language.

A collaborative effort among government, academia, and industry, the newly released NIST publication considers four major types of attacks: evasion, poisoning, privacy, and abuse attacks. The publication, “Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations” (NIST.AI.100-2), is intended to help AI developers and users “get a handle on the types of attacks they might expect along with approaches to mitigate them,” according to a NIST news item announcing the report.

NIST Identifies Four Types of Cyberattacks on AI Systems:

  • Evasion attacks occur after an AI system is deployed and attempt to alter an input to change how the system responds to it. NIST reports that these attacks include adding markings to stop signs to make an autonomous vehicle misinterpret them as speed limit signs, or creating confusing lane markings to make the vehicle veer off the road. (A minimal, hypothetical code sketch of this idea follows the list.)

  • Poisoning attacks happen during the training phase, when corrupted data is introduced, such as inappropriate language that a chatbot then learns to treat as acceptable when interacting with customers. (See the second sketch after the list.)

  • Privacy attacks happen during deployment and attempt to extract sensitive information about the AI model or the data it was trained on in order to misuse it.

  • Abuse attacks involve the insertion of incorrect information into a seemingly legitimate but compromised source, such as a webpage or online document, which an AI then absorbs.
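
To make the evasion category concrete, the following is a minimal sketch in Python of the fast gradient sign method (FGSM), a standard evasion technique from the adversarial machine learning literature. The toy linear classifier, its weights, and the perturbation budget are illustrative assumptions, not material from the NIST report.

```python
# Hypothetical sketch: an FGSM-style evasion attack on a toy linear
# classifier standing in for a "stop sign vs. speed limit sign" detector.
import numpy as np

rng = np.random.default_rng(0)

# Pretend these are the trained weights of a tiny linear model:
# score > 0 means "stop sign" (label 1), score <= 0 means "speed limit" (0).
w = rng.normal(size=16)
b = 0.1

def predict(x):
    return 1 if x @ w + b > 0 else 0

# A clean input the model classifies correctly as a stop sign.
x_clean = w / np.linalg.norm(w)        # deliberately on the "stop" side
assert predict(x_clean) == 1

# FGSM: step each feature against the model's confidence in the true label.
# For a linear score s = w.x + b with label 1, the logistic loss gradient
# w.r.t. x points along -w, so the attack subtracts eps * sign(w).
epsilon = 0.5                          # attacker's perturbation budget
x_adv = x_clean - epsilon * np.sign(w)

print("clean prediction:      ", predict(x_clean))  # 1 (stop sign)
print("adversarial prediction:", predict(x_adv))    # 0 (misclassified)
```

A small, bounded perturbation of every input feature is enough to flip the decision, which is the same mechanism behind the stop-sign markings NIST describes.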
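The second sketch illustrates poisoning: if an attacker flips a fraction of the training labels, the model learned from the corrupted data performs worse once deployed. The perceptron, the synthetic data, and the 30% flip rate are again hypothetical illustrations, not part of the NIST taxonomy itself.

```python
# Hypothetical sketch: label-flipping data poisoning against a toy perceptron.
import numpy as np

rng = np.random.default_rng(1)

def make_data(n):
    x = rng.normal(size=(n, 2))
    y = (x[:, 0] + x[:, 1] > 0).astype(int)  # the true decision rule
    return x, y

def train_perceptron(x, y, epochs=20):
    w, b = np.zeros(2), 0.0
    for _ in range(epochs):
        for xi, yi in zip(x, y):
            pred = 1 if xi @ w + b > 0 else 0
            w += (yi - pred) * xi             # standard perceptron update
            b += yi - pred
    return w, b

def accuracy(w, b, x, y):
    return ((x @ w + b > 0).astype(int) == y).mean()

x_train, y_train = make_data(200)
x_test, y_test = make_data(500)

# The attacker corrupts the training phase by flipping 30% of the labels.
y_poisoned = y_train.copy()
flipped = rng.choice(len(y_train), size=60, replace=False)
y_poisoned[flipped] = 1 - y_poisoned[flipped]

w_ok, b_ok = train_perceptron(x_train, y_train)
w_bad, b_bad = train_perceptron(x_train, y_poisoned)

print("trained on clean data:   ", accuracy(w_ok, b_ok, x_test, y_test))
print("trained on poisoned data:", accuracy(w_bad, b_bad, x_test, y_test))
```

The clean-trained model typically scores near-perfect test accuracy, while the poison-trained model degrades noticeably, mirroring the chatbot example above, where corrupted training data changes deployed behavior.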

“Despite the significant progress AI and machine learning have made, these technologies are vulnerable to attacks that can cause spectacular failures with dire consequences,” said NIST computer scientist Apostol Vassilev, co-author of the study. “We are providing an overview of attack techniques and methodologies that consider all types of AI systems. We also describe current mitigation strategies reported in the literature, but these available defenses currently lack robust assurances that they fully mitigate the risks. We are encouraging the community to come up with better defenses.”  

Access the full report and more on attack mitigation in NIST’s news item.

Related News:

University Students Explore AI’s Potential Impact on the Workforce

How to Safeguard AI: New GAO Report Provides Recommendations for Government Agencies to Implement Federal AI Requirements

To Realize the Promise of AI, Executive Order Establishes New Standards to Ensure Safe, Secure, and Trustworthy AI

Deep Learning for a Generative AI Future: Standards Community, Government, and Tech Experts Share Expertise during World Standards Week

Access the Fall 2023 Edition of the USNC Current: “Artificial Intelligence”

CONTACT

Jana Zabinski

Senior Director, Communications & Public Relations

Phone: 212.642.8901

Email: [email protected]

Beth Goodbaum

Journalist/Communications Specialist

Phone: 212.642.4956

Email: [email protected]