
How to Safeguard AI: New GAO Report Provides Recommendations for Government Agencies to Implement Federal AI Requirements

12/14/2023

The U.S. Government Accountability Office (GAO) just released a comprehensive report that reveals how major federal agencies use artificial intelligence (AI) and details the extent to which these agencies are meeting specific federal policy and guidance related to AI. In light of the findings, the GAO has issued recommendations to various agencies, including the Office of Management and Budget (OMB), to fully implement federal requirements and better manage the risks and complexities associated with AI.

The research specifically examined the civilian agencies covered by the Chief Financial Officers Act; the U.S. Department of Defense was excluded because the GAO recently issued separate reports on that department.

The GAO highlights efforts to support AI safeguards, noting how the U.S. Department of Commerce (DoC), through the National Institute of Standards and Technology (NIST), created a plan to develop AI technical standards. These efforts follow the directive of the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, released in October 2023.

The GAO report findings also revealed how specific agencies are employing AI:

  • The National Aeronautics and Space Administration (NASA) and the DoC reported the highest number of AI use cases.
  • About 1,200 current and planned AI use cases—specific challenges or opportunities that AI may solve—were reported by 20 out of 23 agencies.
  • Agencies reported AI uses that include analyzing data from cameras and radar to identify border activities, analyzing photographs from drones, and targeting scientific specimens for planetary rovers.
  • Most of the reported AI use cases were in the planning phase and not yet in production.

At the same time, the report reveals a need for improvement: While some agencies have taken initial steps to comply with AI requirements in executive orders and federal law, more work remains to fully implement those requirements, which is necessary to manage AI risks. As a result, the GAO has issued 35 recommendations to 19 agencies, including that:

  • Fifteen agencies should update their AI use case inventories to include required information and take steps to ensure the data aligns with guidance.
  • OMB, Office of Science and Technology Policy (OSTP), and United States Office of Personnel Management (OPM) should implement AI requirements with government-wide implications, such as issuing guidance and establishing or updating an occupational series with AI-related positions.
  • Twelve agencies should fully implement AI requirements in federal law, policy, and guidance, such as developing a plan for how the agency intends to conduct annual inventory updates, and describing and planning for regulatory authorities on AI.

ANSI Supports Ongoing Government Efforts to Safeguard AI

As frequently reported on ansi.org, ANSI is currently engaged in numerous standardization activities to assure safer AI, including partnering with the DoC to leverage stakeholder activities and input into international standards that support the administration’s efforts.

In May 2023, ANSI sponsored an informational briefing in Washington, D.C., to provide vital information on standards to Hill staff, and to emphasize the importance of U.S. participation in standards for critical and emerging technology to strengthen economic and national security.

This fall, ANSI convened leading experts in standards, government, and technology during its annual World Standards Week for a series of panels and discussions exploring the question, “Will Generative AI Rewrite the Future?” The event offered professionals an opportunity to share insights on how generative AI is transforming industries and to recommend ways to assure its responsible use.

U.S. experts are also active in supporting standards in AI: ANSI serves as the secretariat of the International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) Joint Technical Committee (JTC) 1, Subcommittee (SC) 42, which works on standardization in the area of AI and provides guidance to JTC 1, IEC, and ISO committees developing AI applications.

The U.S. also chairs SC 42, which is the first-of-its-kind international standards committee looking at the full AI IT ecosystem. The SC is responsible for 20 published international AI standards—with 35 standards under development—that encompass guidance on risk management and an overview of trustworthiness in AI, among other areas.

Access more information in the GAO report, “Artificial Intelligence: Agencies Have Begun Implementation but Need to Complete Key Requirements.”

Related News in AI:

To Realize the Promise of AI, Executive Order Establishes New Standards to Ensure Safe, Secure, and Trustworthy AI

Deep Learning for a Generative AI Future: Standards Community, Government, and Tech Experts Share Expertise during World Standards Week

Access the Fall 2023 Edition of the USNC Current: “Artificial Intelligence”

CONTACT

Jana Zabinski

Senior Director, Communications & Public Relations

Phone: 212.642.8901

Email: [email protected]

Beth Goodbaum

Journalist/Communications Specialist

Phone: 212.642.4956

Email: [email protected]