Article Summary


Explainable artificial intelligence: Easier said than done

Published:
July 21, 2021

Adapted from a longer piece in Science, this essay, also by I. Glenn Cohen (featured earlier in this newsletter), lays out a cogent case for why the FDA should avoid requiring explainable AI tools and focus instead on safety and efficacy as the key measures for Software as a Medical Device (SaMD) and other health offerings. Like our previous coverage of explainability in our last edition and in issue 3, the authors detail the unique challenge of designing interpretable "white box" models in medical fields like radiology, where the number of variables forces a "black box" approach on developers. However, because explainability tools consist of new models that merely predict what logic the original model may have used, they erect a façade of truth that feeds into our cognitive biases, a "fool's gold" of sorts. Instead, the FDA should attend to accuracy and outcomes as the gold standard.

AI Governance & Assurance