AI Trust Library

Article Summary of:

Medtechs need strategy to prevent bias in AI-machine learning-based devices: FDA

Originally published:
Oct 15, 2021

On October 14th, the FDA held a public workshop to discuss better methodologies for identifying and improving algorithms that risk mirroring systemic biases in healthcare. A key conclusion from the discussions was that clinical trials should enroll more racially and ethnically diverse populations.

According to Jack Resneck, president-elect of the American Medical Association, the FDA should focus on patient outcomes and clinical validation with published, peer-reviewed data to build trust in AI- and ML-based medical devices. Another important area of focus is safeguarding these devices against bias that can exacerbate pre-existing disparities in healthcare. To prevent these inequities from growing and to protect against AI- and ML-related risks over time, the FDA intends to publish draft guidance in 2021 on what should be included in Software as a Medical Device (SaMD) Pre-Specifications (SPS) and an Algorithm Change Protocol (ACP).
