Article Summary of:

A “Light Touch” Regulatory Framework for AI – Transparency at the Heart of AI Regulation

Published: December 2, 2020

This process-focused piece is a strong companion to the article above. Authors Roger Bickerstaff and Aditya Mohan continue their exploration of a regulatory framework for artificial intelligence in this third installment, which focuses on transparency. They divide AI applications into higher-stakes deployments that would require regulatory approval and lower-stakes deployments that would need only a simple public registry. They draw a parallel to patent protection, stating that "Sufficient information should be made available to enable meaningful scrutiny without requiring important confidential information to be disclosed." However, they distinguish this requirement from explainability, arguing that the end goal should be assessing the outcomes of machine-driven decisions against an index of the technical quality of the system and the human impact of its decisions. They lay out a grading system and corresponding tranches of actionability, ranging from no obligations at the low end to requirements for notification, impact assessment, and prior approval at the high end. Left ungoverned, some intelligent systems may effectively create policy on their own, which is why developers, executives, and compliance professionals need to develop controls and better risk management practices around them.

Principles & Frameworks