Article Summary of:

When AI Reads Medical Images: Regulating to Get It Right

Published: December 3, 2020

Stanford researchers at the Institute for Human-Centered Artificial Intelligence propose a framework for regulating diagnostic algorithms that would ensure world-class clinical performance and build trust among clinicians and patients. After summarizing the current state of the technology and the risks it poses, the article identifies the most immediate priority: creating diagnostic tests that are independent of the various algorithms so that their real-world performance can be compared objectively. That capability will require a master dataset, shared across developers, that incorporates different pathologies, demographics, and images from prominent manufacturers. With a shared dataset and independent tests, algorithm builders could confidently submit to a regulatory agency charged with measuring and reporting on each algorithm's performance. A shared regulatory framework and increased transparency would benefit the industry as a whole by overcoming resistance to AI-augmented decision-making in the medical community. As of now, clinicians are extremely reluctant to trust algorithms, especially when they cannot "see" the dynamics of AI decisions or understand the broader outcomes and success of those decisions. This panel discussion and this excellent essay on AI in medicine explore related territory and are well worth your time.
