Understanding AI’s Capabilities: NIST Launches Program to Advance Sociotechnical Testing and Evaluation for AI

5/31/2024

In an effort to better understand the capabilities of artificial intelligence (AI) and to help organizations and individuals determine whether a given AI technology will be valid, reliable, safe, secure, private, and fair, the National Institute of Standards and Technology (NIST) this week launched the Assessing Risks and Impacts of AI (ARIA) program.

ARIA is a research effort to help AI evaluators improve their assessment methods by evaluating models and systems submitted by technology developers from around the world. The program's initial evaluation, ARIA 0.1, will focus on the risks and impacts associated with large language models.

Ultimately, NIST reports, the program will produce guidelines, tools, methodologies, and metrics that organizations can use to evaluate the safety of their systems as part of their governance and decision-making processes for designing, developing, releasing, or using AI technology.

ARIA expands on the NIST AI Risk Management Framework, which NIST released in January 2023 for voluntary use to improve organizations' ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.

“The ARIA program is designed to meet real-world needs as the use of AI technology grows,” said Under Secretary of Commerce for Standards and Technology and NIST Director Laurie E. Locascio. “This new effort will support the U.S. AI Safety Institute, expand NIST’s already broad engagement with the research community, and help establish reliable methods for testing and evaluating AI’s functionality in the real world.”

NIST reports that outcomes of ARIA will support and inform NIST’s collective efforts, including the U.S. AI Safety Institute (AISI), to build the foundation for safe, secure, and trustworthy AI systems.

Last month, the Commerce Department released a strategic vision document outlining the AISI’s strategic goals and steps that it plans to take to advance AI safety and responsible AI innovation.

Access more information via NIST’s press release.

ANSI also encourages collaboration and engagement in numerous activities surrounding standardization in AI, including these ongoing and recent efforts:

  • ANSI is the secretariat of ISO/IEC JTC 1, Subcommittee (SC) 42, Artificial intelligence. The U.S. also chairs SC 42, which is the first-of-its-kind international standards committee looking at the full AI IT ecosystem. The SC is responsible for 28 published ISO standards, including ISO/IEC 22989:2022, Artificial intelligence concepts and terminology; ISO/IEC 23894:2023, Artificial intelligence — Guidance on risk management; and ISO/IEC TR 24368:2022, Overview of ethical and societal concerns, among others, with over 30 under development.

Related News:

Commerce Department Releases Strategic Vision on AI Safety for the U.S. Artificial Intelligence Safety Institute

Advancing AI Initiatives, Secretary of Commerce Announces Key Executive Leadership at U.S. AI Safety Institute

Emerging AI Technology Trends and More: Participate in ISO/IEC AI Workshop Sessions in June

How Do Standards Impact AI? Enter ANSI’s Student Paper Competition!
