
Commerce Department Releases Strategic Vision on AI Safety for the U.S. Artificial Intelligence Safety Institute


Strategy Recognizes Importance of Standards for AI Design and Deployment

The U.S. Commerce Department today released a strategic vision document for its U.S. Artificial Intelligence Safety Institute (AISI), outlining the AISI’s strategic goals and steps that it plans to take to advance AI safety and responsible AI innovation.

Housed within the National Institute of Standards and Technology (NIST), the AISI works to advance the science, practice, and adoption of AI safety across the spectrum of risks, including those to national security, public safety, and individual rights. The strategy document outlines the AISI’s three key goals:

  1. Advance the science of AI safety
  2. Articulate, demonstrate, and disseminate the practices of AI safety
  3. Support institutions, communities, and coordination around AI safety

The document also recognizes the importance of standards, noting that mature science of AI safety “involves a greater understanding of advanced AI model and system capabilities, the adoption of standards for safe AI design and deployment, and the development of safety evaluations of both the systems and their broader impacts.”

In order to accelerate AI safety, the strategy explains that the AISI will tackle key issues, including:

  • a lack of commonly accepted definitions for AI safety;
  • underdeveloped testing, evaluation, validation, and verification (TEVV) methods and best practices to provide holistic assessments of risk;
  • an absence of scientifically established risk mitigations across the lifecycle of AI design and deployment;
  • an insufficient understanding of the relationship between model architecture and design and model behavior and performance; and
  • limited and ad hoc coordination around safety practices among industry, civil society, and national and international actors.

“Recent advances in AI carry exciting, life-changing potential for our society, but only if we do the hard work to mitigate the very real dangers of AI that exist if it is not developed and deployed responsibly. That is the focus of our work every single day at the U.S. AI Safety Institute, where our scientists are fully engaged with civil society, academia, industry, and the public sector so we can understand and reduce the risks of AI, with the fundamental goal of harnessing the benefits,” said U.S. Secretary of Commerce Gina Raimondo. “The strategic vision we released today makes clear how we intend to work to achieve that objective and highlights the importance of cooperation with our allies through a global scientific network on AI safety. Safety fosters innovation, so it is paramount that we get this right and that we do so in concert with our partners around the world to ensure the rules of the road on AI are written by societies that uphold human rights, safety, and trust.”

Commerce Secretary Raimondo announced the vision document to coincide with the start of the AI Seoul Summit, and added that the Commerce Department and the AISI will help launch a global scientific network for AI safety through “meaningful engagement with AI Safety Institutes” and other government-backed scientific offices focused on AI safety and committed to international cooperation.

Access the news announcement and read the Strategic Vision for AISI.

Related News:

Mitigate AI Risks and Support Innovation: NIST Seeks Feedback on Draft AI Guidance Documents, Launches GenAI Challenge

Emerging AI Technology Trends and More: Participate in ISO/IEC AI Workshop Sessions in June

How Do Standards Impact AI? Enter ANSI’s Student Paper Competition!


Jana Zabinski

Senior Director, Communications & Public Relations


[email protected]

Beth Goodbaum

Journalist/Communications Specialist


[email protected]