Article Summary of:

AI Regulation is Coming

Published: September 7, 2021

With increased regulation of AI looking more and more inevitable, the real question, as the authors of this piece from Boston Consulting Group and INSEAD explore, may be which regulatory approaches will prove most effective. Some of the most compelling discussion concerns how artificial and human intelligence interact. For example, while much public attention has lately focused on the very real dangers of biased AI decisions, the authors highlight the opportunity for AI to help measure and mitigate bias in human decisions, even those augmented by machines. Because consumers are less trusting of AI when decisions are more subjective, the authors argue that “companies need to communicate very carefully about the specific nature and scope of decisions they’re applying AI to and why it’s preferable to human judgment in those situations.”

This meaty Harvard Business Review read delves into many of the trade-offs of regulation, such as how the opportunity to scale across geographies that makes AI an attractive investment also increases the likelihood of unfairness in more contained localities. The authors also explore the complexities of explainability for AI, a thread we have covered in most issues of this newsletter and most recently in Issue 9. Their position is that companies with “stronger explanatory capabilities will be in a better position to win the trust of consumers and regulators.”

Ethics & Responsibility
Principles & Frameworks