To Realize the Promise of AI, Executive Order Establishes New Standards to Ensure Safe, Secure, and Trustworthy AI

11/02/2023

Amid rapid advances in artificial intelligence (AI), the Biden-Harris administration this week announced an Executive Order (EO) that establishes new standards for AI safety and security. The EO builds on previous administration efforts requiring AI developers to share safety test results and other information with the government and directing U.S. agencies to set standards for that testing. The EO also addresses the risk of bias and civil rights violations that AI poses.

“In the wrong hands, AI can make it easier for hackers to exploit vulnerabilities in the software that makes our society run,” President Biden said in a press conference. He noted that AI is already being used to deceive people, such as through deepfakes that use AI-generated audio and video.

With the EO, the President directs the most sweeping actions ever taken to protect Americans from the potential risks of AI systems. In addition to new standards for AI safety and security, the EO encompasses protecting Americans’ privacy; advancing equity and civil rights; standing up for consumers, patients, and students; supporting workers; promoting innovation and competition; ensuring responsible and effective government use of AI; and advancing American leadership abroad.

Among the directives to advance “New Standards for AI Safety and Security”:

  • Require that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government.
  • Develop standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy.
  • Protect against the risks of using AI to engineer dangerous biological materials by developing strong new standards for biological synthesis screening.
  • Protect Americans from AI-enabled fraud and deception by establishing standards and best practices for detecting AI-generated content and authenticating official content.
  • Establish an advanced cybersecurity program to develop AI tools to find and fix vulnerabilities in critical software, building on the Biden-Harris Administration’s ongoing AI Cyber Challenge.
  • Order the development of a National Security Memorandum that directs further actions on AI and security, to be developed by the National Security Council and White House Chief of Staff.

The U.S. Department of Commerce’s agencies, including the National Institute of Standards and Technology (NIST), the Bureau of Industry and Security (BIS), the National Telecommunications and Information Administration (NTIA), and the U.S. Patent and Trademark Office (USPTO), will undertake key responsibilities in support of the EO’s objectives.

NIST will set rigorous standards for extensive “red-team” testing to ensure safety before public release, while the Department of Homeland Security will apply those standards to critical infrastructure sectors and establish the AI Safety and Security Board. The EO also directs the Departments of Energy and Homeland Security to address the threats AI systems pose to critical infrastructure, as well as chemical, biological, radiological, nuclear, and cybersecurity risks.

“Expanding on our wide-ranging efforts in AI, NIST will work with private and public stakeholders to carry out its responsibilities under the Executive Order. We are committed to developing meaningful evaluation guidelines, testing environments, and information resources to help organizations develop, deploy, and use AI technologies that are safe and secure, and that enhance AI trustworthiness,” said Under Secretary of Commerce for Standards and Technology and NIST Director Laurie E. Locascio.

More information and resources on the EO are available on NIST’s website. Additionally, NIST will release a video on November 9 with more information about its role in the EO.

ANSI looks forward to partnering with the Commerce Department as the administration seeks to leverage international standards in support of the EO. The Institute has engaged in numerous activities surrounding standardization in AI.

The EO is the latest effort to safeguard against malicious AI activity. Earlier this year, several leading AI companies made voluntary commitments to the White House to implement measures, such as watermarking AI-generated content, to help make the technology safer. Just this week, officials from the U.S. joined representatives from 28 governments at a summit in the U.K. and signed a declaration agreeing to cooperate on evaluating the risks of AI.

CONTACT

Jana Zabinski
Senior Director, Communications & Public Relations
Phone: 212.642.8901
Email: [email protected]

Beth Goodbaum
Journalist/Communications Specialist
Phone: 212.642.4956
Email: [email protected]