1/14/2026
As AI agent systems grow in use, their security risks may impact public safety, undermine consumer confidence, and curb adoption of the latest innovations. The Center for AI Standards and Innovation (CAISI) is seeking information by March 9 on practices and methodologies for measuring and improving the secure development and deployment of these systems.
CAISI, formerly known as the U.S. AI Safety Institute, is an office within the National Institute of Standards and Technology (NIST) that serves as the U.S. government's primary point of contact for industry, facilitating testing and collaborative research on commercial AI systems.
What Are AI Agent Systems?
AI agent systems are designed to perceive their environment, make decisions, and take actions to achieve specific goals without constant human intervention. However, they face a range of threats and risks, including hijacking and backdoor attacks. As NIST reports, some of these risks overlap with those of other software systems, such as exploitable authentication or memory management vulnerabilities.
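To make the perceive-decide-act description concrete, the following is a minimal Python sketch of an agent loop. The Agent class and its perceive, decide, and act methods are illustrative assumptions, not an interface specified by NIST or CAISI; the comments mark where hijacking-style attacks and unsafe actions would enter a real system.

```python
# Illustrative sketch only: names and logic are assumptions for exposition,
# not a NIST- or CAISI-defined design.
from dataclasses import dataclass, field

@dataclass
class Agent:
    goal: str
    memory: list = field(default_factory=list)

    def perceive(self, observation: str) -> None:
        # Untrusted input enters the agent here; hijacking attacks
        # (e.g., injected instructions) exploit exactly this channel.
        self.memory.append(observation)

    def decide(self) -> str:
        # Placeholder decision policy; a real agent would consult a model.
        if any("error" in obs for obs in self.memory):
            return "retry"
        return "proceed"

    def act(self, action: str) -> None:
        # Actions with real-world side effects should be gated or sandboxed
        # so a compromised decision cannot cause harm.
        print(f"[{self.goal}] executing action: {action}")

agent = Agent(goal="summarize logs")
agent.perceive("log line: error in module A")
agent.act(agent.decide())
```

Even in this toy form, the loop shows why agent security differs from ordinary software security: the same channel that delivers legitimate observations also delivers adversarial ones.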
Request for Information
In a recent Federal Register notice, CAISI details its request for stakeholder feedback on the secure development and deployment of AI agent systems.
Respondents—including AI agent deployers, developers, computer security researchers, and others—will inform future work on voluntary guidelines and best practices related to AI agent security. Stakeholders are encouraged to provide concrete examples, best practices, case studies, and actionable recommendations based on their experience with AI agent systems.
Read more in NIST’s press release.