The National Institute of Standards and Technology (NIST) and the National Telecommunications and Information Administration (NTIA) have announced recent activities related to the President’s Executive Order (EO) 14110 on Safe, Secure, and Trustworthy Artificial Intelligence, released on October 30, 2023.
NIST Issues RFI on Response to the Executive Order on AI
NIST has issued a Request for Information (RFI) to assist in the implementation of its responsibilities under the EO. Responses to the RFI will support NIST’s efforts to evaluate capabilities relating to AI technologies and to develop a variety of guidelines called for in the EO. The RFI specifically seeks information related to AI red-teaming, generative AI risk management, reducing the risks of synthetic content, and advancing responsible global technical standards for AI development.
Responses will be accepted until February 2, 2024. See the Federal Register for information on how to submit a response. Information collected will inform the drafting of guidance that NIST will release for public comment.
NIST Requests Feedback on Guidance on Differential Privacy
NIST has released guidance on differential privacy, a privacy-enhancing technology that quantifies privacy risk to individuals when their information appears in a dataset. The draft guidance is intended to help different sets of stakeholders—from software developers to business owners and policy makers—understand and think more consistently about claims made about differential privacy.
Differential privacy is a “mathematical definition of what it means to have privacy.” Applying differential privacy allows data to be publicly released without revealing the individuals within a dataset. While differential privacy is one of the more mature privacy-enhancing technologies used in data analytics, lack of standards can make it difficult to employ effectively—potentially creating a barrier for users, NIST reports.
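One standard way to realize a differential privacy guarantee in practice is the Laplace mechanism, which adds calibrated random noise to an aggregate statistic before release. The sketch below is illustrative only; the function name, parameters, and numbers are hypothetical and are not drawn from the NIST draft guidance.

```python
import numpy as np

# Seeded generator so the sketch is reproducible.
rng = np.random.default_rng(seed=0)

def laplace_count(true_count: float, epsilon: float, sensitivity: float = 1.0) -> float:
    """Release a count with epsilon-differential privacy via the Laplace mechanism.

    sensitivity is how much one individual's record can change the count
    (1 for a simple counting query). Smaller epsilon means stronger privacy
    and therefore more noise.
    """
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# Hypothetical example: privately release how many records match a query.
true_count = 128
noisy_count = laplace_count(true_count, epsilon=0.5)
```

The released `noisy_count` can be published without revealing whether any single individual's record is in the dataset; the trade-off between accuracy and privacy is governed entirely by the choice of epsilon, which is exactly the kind of parameter-setting question the draft guidance aims to help stakeholders reason about consistently.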
Draft NIST Special Publication (SP) 800-226, Guidelines for Evaluating Differential Privacy Guarantees, developed in response to the EO, is an initial draft. The agency is requesting public comments on it during a 45-day period ending on January 25, 2024. To submit comments, download the template from the NIST website and email it to [email protected]. Comments received will inform a final version, to be published sometime in 2024.
NTIA Launches Public Engagement on Openness in AI Models
NTIA initiated public engagement with its review of openness in AI models at a December 13 event hosted by the Center for Democracy & Technology. The EO directs NTIA to review the risks and benefits of openness in AI models and to develop policy recommendations that maximize the benefits while mitigating the risks.
“Open” AI tools make their key components, such as model weights, available and therefore replicable or manipulable. They broaden AI’s availability to small companies, nonprofits, and individuals, increasing access to the technology’s benefits, but also increasing the possible sources of AI risk. To meet the requirements of the EO, NTIA will review the risks of openness associated with actors removing safeguards or tweaking the models that AI systems rely on; the benefits of open AI models and systems for competition, AI innovation, and research; and potential regulatory mechanisms to manage the risks and maximize the benefits of the openness of AI models and systems.
NTIA has announced that it will issue a formal Request for Comment exploring these topics in early 2024.