Article Summary of:

4 Reasons Why Explainable AI Is the Future of AI

Published:
September 27, 2021

A continuing thread in our newsletter is the promise and problems of explainable AI for tackling AI’s well-known issues with transparency, fairness, safety, and trust. Scott Clark, writing in CMSWire, focuses on the positive side, interviewing experts on tools that make AI “explainable, transparent, and understandable in order to be trusted, reliable, and consistent.” In doing so, explainable AI can serve four core purposes:

  • Build trustworthiness
  • Satisfy developing legal requirements
  • Provide ethical justification
  • Derive actionable and robust insights into AI decision-making

Though explainable AI does explain some of the decisions that black boxes make, it is not a perfect solution. Boris Babic and Sara Gerke, writing in Stat, outline how explainable AI cannot access the original dataset a model uses to make its decisions. Instead, it builds a roughly similar “white box” model that is fully transparent; however, that white box will never perform identically to the original model. Thus, as Kareem Saleh notes, the explanation provided is only an approximation that does not fully represent reality.
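To make the approximation point concrete, here is a minimal sketch of the surrogate “white box” idea, assuming Python with scikit-learn; the dataset, models, and parameter choices are illustrative, not taken from the article. An opaque model is mimicked by a shallow, fully inspectable decision tree trained on the opaque model’s own predictions, and the two agree only part of the time.

```python
# Minimal sketch: post-hoc surrogate explanation with scikit-learn.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic data standing in for the original training set.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "black box": accurate but opaque.
black_box = RandomForestClassifier(n_estimators=200, random_state=0)
black_box.fit(X_train, y_train)

# The "white box": a shallow tree trained to mimic the black box's
# predictions (not the true labels), so its splits are fully inspectable.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

# Fidelity: how often the surrogate agrees with the black box on new data.
# Because the surrogate only approximates the original, this is below 100%.
fidelity = accuracy_score(black_box.predict(X_test), surrogate.predict(X_test))
print(f"surrogate fidelity to black box: {fidelity:.1%}")
```

The fidelity score is the crux of the critique: whatever the surrogate’s explanations say, they describe the surrogate, and the gap between the two models is exactly the part of reality the explanation misses.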

AI Governance & Assurance
Ethics & Responsibility