AI Trust Library

Article Summary of:

New AI Regulations Are Coming. Is Your Organization Ready?

Originally published:
Apr 30, 2021

As a fast follow from the previous two articles, the Harvard Business Review published this piece by Andrew Burt of a boutique AI law firm; it was also featured in Issue 6 of this newsletter. He raises many important issues for enterprises to consider as the regulatory landscape for AI continues to emerge, and his thinking parallels much of how we approach ML Assurance as a best practice here at Monitaur.

Pointing to the "high rates of failure" of AI systems, Burt argues that companies deploying AI need more frequent audits and reviews of the decisions their systems make, increasing in rigor as the risk profile of a particular system rises. For most organizations accustomed to one-time audits, this will require extensive overhauls of existing practices. He distills the dizzying array of requirements across regulatory frameworks down to two key components for any impact assessment: companies must clearly document the risks created by each AI system, and they must describe how each of those risks has been addressed.
