RIMS, the Risk Management Society, has published an article in its Risk Management magazine on the rise of artificial intelligence (AI) and machine learning, and the problems that can arise when these systems lack oversight, transparency, or accountability.
Without human oversight, AI systems may learn and perpetuate biases. Citing high-profile AI failures, the article details how such incidents have led regulators to push for more stringent rules and enforcement. The U.S. Federal Trade Commission has warned that the Fair Credit Reporting Act, the Equal Credit Opportunity Act, and Section 5 of the Federal Trade Commission Act apply to the sale or use of biased algorithms. Companies must comprehensively review their current and intended uses of AI and conduct a risk assessment to evaluate data quality and determine whether bias may occur.
Read more in the RIMS article.
ANSI members may submit contributions to [email protected]. All submissions are published at ANSI's discretion, and generally must be a resource freely available and/or non-commercial information of significant value to the ANSI community.