The U.S. Artificial Intelligence (AI) Safety Institute at the Department of Commerce’s National Institute of Standards and Technology (NIST) recently announced that it has signed agreements with Anthropic and OpenAI to advance safe and trustworthy AI. The agreements establish a framework for the U.S. AI Safety Institute to receive access to major new models from each company both before and after their public release.
NIST reports that the Memorandum of Understanding with each company will enable collaborative research on how to evaluate model capabilities and safety risks, as well as on methods to mitigate those risks.
The AI Safety Institute, launched in November 2023, is part of NIST’s response to the Biden administration’s Executive Order on Safe, Secure, and Trustworthy Development and Use of AI. The institute released its Strategic Vision in May 2024.
Both OpenAI (the maker of ChatGPT) and Anthropic are members of the AI Safety Institute Consortium, a body within the AI Safety Institute that aims to enable close collaboration among government agencies, companies, and impacted communities to help ensure that AI systems are safe and trustworthy.
“Safety is essential to fueling breakthrough technological innovation. With these agreements in place, we look forward to beginning our technical collaborations with Anthropic and OpenAI to advance the science of AI safety,” said Elizabeth Kelly, director of the U.S. AI Safety Institute. “These agreements are just the start, but they are an important milestone as we work to help responsibly steward the future of AI.”
NIST also reports that the U.S. AI Safety Institute plans to provide feedback to Anthropic and OpenAI on potential safety improvements to their models, in close collaboration with its partners at the U.K. AI Safety Institute.
More information is available in NIST’s news release.
Related News:
NIST Symposium to Explore Efforts to Advance Measurements and Standards, Supporting AI Innovations