Article Summary of:

AI is learning how to explain itself to humans

Published:
April 6, 2022

LinkedIn's recent revenue success was the result of a focus on explainable AI (XAI).

Microsoft Corp’s LinkedIn boosted subscription revenue by 8% after arming its sales team with artificial intelligence software that not only predicts clients at risk of canceling but also explains how it arrived at its conclusion.

“While AI scientists have no problem designing systems that make accurate predictions on all sorts of business outcomes, they are discovering that to make those tools more effective for human operators, the AI may need to explain itself through another algorithm.”

Regulators, including the Federal Trade Commission, have been increasingly vocal about the need for AI to be explainable (or run the risk of investigation). The EU is also considering an Artificial Intelligence Act, which could pass as early as next year; the act includes “a set of comprehensive requirements including that users be able to interpret automated predictions.”

AI Governance & Assurance
Ethics & Responsibility