In an effort to help develop trustworthy AI systems, the National Institute of Standards and Technology (NIST) is requesting feedback on a draft publication on AI explainability. ANSI encourages relevant stakeholders to respond by the October 15, 2020 deadline.
The draft, entitled Four Principles of Explainable Artificial Intelligence (Draft NISTIR 8312), proposes a set of fundamental principles for explainable AI systems. NIST reports that the draft is intended to stimulate a conversation about what we should expect of our decision-making devices.
The report is part of NIST's broader effort to support trustworthy AI systems. NIST's ongoing foundational research aims to build trust in these systems by understanding their theoretical capabilities and limitations and by improving their accuracy, reliability, security, robustness, and explainability, the last of which is the focus of this latest publication.
For more details about this effort, visit NIST's webpage on AI explainability.
ANSI's Efforts to Support AI
ANSI recently hosted a workshop exploring how standardization can empower AI-enabled systems in healthcare. The workshop sought to identify opportunities for progress through collaboration and standardization; pinpoint challenges, barriers, and gaps; and discuss steps to optimize regulatory frameworks.
The workshop examined issues surrounding data, transparency and explainability, and governance and risk management. Looking ahead, ANSI is planning an in-person workshop in 2021 to develop recommendations for coordinating standardization and governance to meet expectations of safety, quality, responsibility, and risk.
See related ANSI news items: