While artificial intelligence has been around for decades, a new branch known as generative AI is capable of generating content, broadly impacting sectors including media, healthcare, financial technology, and retail, among others. Leading experts in standards, government, and technology joined an ANSI World Standards Week event, Will Generative AI Rewrite the Future?, to share insights about how generative AI is transforming industries, along with recommendations to assure its responsible use.
Rohit Israni, founder and CEO, CertientAI and chair, INCITS/AI, kicked off the event with an introductory presentation explaining generative AI, machine learning, deep learning, and neural networks. He also highlighted several challenges with generative AI, including hallucinations, copyright, bias, misinformation, and data privacy.
Reflecting on the name of the conference, Will Generative AI Rewrite the Future?, Steve Welby, deputy director for National Security at the White House Office of Science and Technology Policy, asserted that “AI will not rewrite the future because the future is ours to shape.” He discussed the White House Blueprint for an AI Bill of Rights and the Biden-Harris administration’s efforts to work with industry and agencies to seize the benefits of AI by managing its risks. Lisa O’Connor of Accenture Labs showcased real-world uses of generative AI in filmmaking, the news, and science, while highlighting concerns about privacy, ownership, and misinformation, and noting the importance of stakeholders coming together to build a foundation for responsible generative AI.
Jason Matusow of Microsoft took attendees through a journey in “six acts,” in which he covered, among other things, how privacy fundamentally brings standards into the realm of emerging technologies, the appropriate roles of standards and regulations, and how responsible AI standards can help meet the needs at the scale of the economy. Anthony Barrett of the Berkeley Existential Risk Initiative highlighted key challenges in AI risk management standards, noting that they should address impacts to individuals (health, safety, fundamental rights); to groups (including populations vulnerable to disproportionate adverse impacts); and to society (environmental harms). Barrett stated that often, the challenge lies in a trade-off between risks and benefits—or even between different sets of risks.
The panel session, Generating the Future: Impact Across Sectors, included moderator Wael Diab, chair of ISO/IEC JTC 1/SC 42, Artificial Intelligence, and panelists Ben Dynkin of the City College of New York and the FS-ISAC AI Risk Working Group; Dr. John Halamka of the Mayo Clinic Platform; Jim Northey, chair of ISO TC 68, Financial Services; and Ram Rampalli of Walmart Global Tech.
Discussions covered how generative AI is the next step in the evolution of industries from retail to fintech to healthcare. Ultimately, while AI can serve as a “co-pilot” for some job operations and tasks such as summaries and initial reviews, human oversight is still critical in many instances. This is especially pertinent in areas where risk tolerance is low, including public safety and healthcare.
“We’ve been dealing with [AI] for a long time, but the AI that existed previously feels fundamentally different from generative AI and the advancements from the last year or so,” noted Dynkin. “How can we move forward in a risk-informed way, in a responsible way, from a paradigm that used to exist [in AI’s historical applications] to the current state of AI?” he asked.
The final panel session, Trustworthy and Responsible AI, included moderator Brandon Abley of NENA: the 9-1-1 Association and panelists Mariel Acosta-Geraldino and Bob Griffin of IBM, Jason Matusow of Microsoft, and Elham Tabassi of NIST, who was recently recognized by TIME magazine as one of The 100 Most Influential People in AI 2023.
The session covered the evolutionary nature of standards development and the need for cross-functional teams to develop AI safeguards. Panelists noted that because every instance of generative AI use is different, risk assessments must be conducted for each use. There is a need to think holistically about solutions that can be delivered on a global scale, and to collaborate not only with likeminded organizations, but also with competitors. To that end, market competition cannot be used as an excuse to create unsafe systems, and the community must take great care in developing standards.
“The recent release of large language models points out the actuality that this technology is moving faster than other technologies, and faster than standards, policies, and government can keep up with,” said Tabassi. “But it’s not too late to get involved. We all believe that it has enormous potential for improving lives.”
ANSI is grateful to the speakers and attendees who shared insights and recommendations on these timely topics. Access the presentations from the recent World Standards Week event: Will Generative AI Rewrite the Future?