Nearly half of FDA-approved AI medical devices lack clinical validation on real patient data, study finds
A comprehensive analysis of FDA-approved artificial intelligence medical devices reveals significant gaps in clinical validation, raising concerns about their effectiveness and safety in real-world healthcare settings.
Artificial intelligence (AI) in healthcare has been hailed as a game-changer, promising to revolutionise everything from patient communication to complex surgical procedures. However, a new study published in Nature Medicine [1] has uncovered a concerning trend in the approval process of AI medical devices by the US Food and Drug Administration (FDA).
Researchers from multiple institutions, led by the University of North Carolina School of Medicine and Duke University, have found that approximately half of the AI medical devices authorised by the FDA lack reported clinical validation data using real patient information. This revelation raises important questions about the credibility and effectiveness of these technologies in clinical practice.
The regulatory landscape
Since 2016, the FDA has seen a dramatic increase in AI medical device authorisations, rising from an average of two per year to 69 per year. These devices are predominantly used to assist physicians in diagnosing abnormalities in radiological imaging, analysing pathology slides, dosing medication, and predicting disease progression.
Sammy Chouffani El Fassi, an MD candidate at the UNC School of Medicine and research scholar at Duke Heart Center, who led the study, explained the significance of their findings: “Although AI device manufacturers boast of the credibility of their technology with FDA authorization, clearance does not mean that the devices have been properly evaluated for clinical effectiveness using real patient data.”
The study analysed 521 device authorisations from the FDA’s official database of “Artificial Intelligence and Machine Learning (AI/ML)-Enabled Medical Devices”. The results were striking: 226 devices, or about 43%, lacked published clinical validation data. Some devices even used computer-generated “phantom images” rather than real patient data, which does not meet the standard for clinical validation.
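For readers who want to follow the arithmetic behind that 43% figure, the sketch below shows how such a tally might be computed. It assumes the FDA's AI/ML-enabled device list has been exported to a local CSV file; the file name and the `validation_type` column are hypothetical, and in the study itself devices were classified by reviewing their authorisation documents rather than read from a pre-labelled column.

```python
# Minimal sketch of the tallying step, not the study's actual pipeline.
# Assumes a hypothetical CSV export of the FDA AI/ML-enabled device list in which
# each device has already been labelled with a "validation_type" of
# "none", "retrospective", "prospective", or "rct".
import pandas as pd

devices = pd.read_csv("fda_ai_ml_devices.csv")  # hypothetical local export

total = len(devices)
counts = devices["validation_type"].value_counts()
no_validation = int(counts.get("none", 0))

print(f"Total authorisations: {total}")
print(f"No published clinical validation: {no_validation} ({no_validation / total:.0%})")
print(counts.to_string())
```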
Types of clinical validation
The researchers identified three primary methods of clinical validation for AI medical devices (a brief sketch after the descriptions below illustrates how the first two differ in practice):
Retrospective validation
This method involves using historical patient data to test the AI model. While useful, it may not account for recent changes in patient populations or medical practices.
Prospective validation
Considered more robust, this approach tests the AI using real-time patient data, allowing for a more realistic assessment of the technology’s performance in current clinical settings.
Randomised controlled trials
Viewed as the gold standard in clinical validation, these trials randomly assign patients to groups where their data is analysed either by the AI or by traditional methods, providing the strongest evidence of a device’s effectiveness.
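To make the distinction between the first two designs concrete, here is a minimal sketch using synthetic data and scikit-learn. It only illustrates how the two evaluations are set up: a retrospective check scores a model on held-out historical cases, while a prospective check scores it on cases collected after the model was built, where the patient population may have drifted. Nothing here reflects the study's methods or any particular device.

```python
# Illustrative only: synthetic data, hypothetical model. Retrospective validation
# evaluates on held-out historical cases; prospective validation evaluates on cases
# collected after the model was built, where the population may differ.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def synthetic_cohort(n, shift=0.0):
    """Toy cohort with 5 features; `shift` mimics drift in a newer patient population."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 5))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n) > 0).astype(int)
    return X, y

# Historical data: model development plus a held-out set for retrospective validation.
X_hist, y_hist = synthetic_cohort(2000)
X_train, y_train = X_hist[:1500], y_hist[:1500]
X_retro, y_retro = X_hist[1500:], y_hist[1500:]

# Prospective data: cases gathered after the model was frozen, simulated here with drift.
X_prosp, y_prosp = synthetic_cohort(500, shift=0.4)

model = LogisticRegression().fit(X_train, y_train)
print("Retrospective AUC:", roc_auc_score(y_retro, model.predict_proba(X_retro)[:, 1]))
print("Prospective AUC: ", roc_auc_score(y_prosp, model.predict_proba(X_prosp)[:, 1]))
```

A randomised controlled trial goes further still: patients are assigned to AI-assisted or conventional care and their outcomes compared, which is why it is regarded as the gold standard and cannot be captured in a few lines of evaluation code.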
Gail E. Henderson, PhD, professor at the UNC Department of Social Medicine and co-leader of the study, explained the motivation behind the analysis: “Using these hundreds of devices in this database, we wanted to determine what it really means for an AI medical device to be FDA-authorized.”
Implications for patient care
The lack of clear clinical validation for many AI medical devices raises concerns about their reliability and safety in real-world healthcare settings. As these technologies become increasingly integrated into medical practice, ensuring their effectiveness is crucial for patient safety and trust in AI-assisted healthcare.
Chouffani El Fassi stressed the need for change: “We hope to encourage the FDA and industry to boost the credibility of device authorization by conducting clinical validation studies on these technologies and making the results of such studies publicly available.”
The research team has shared their findings with FDA directors overseeing medical device regulation, potentially influencing future regulatory decisions. They hope their work will inspire researchers and universities globally to conduct more rigorous clinical validation studies on medical AI technologies.
Reference:
- Chouffani El Fassi, S., Henderson, G. E., Abdullah, A., et al. (2024). Not all AI health tools with regulatory authorization are clinically validated. Nature Medicine. https://doi.org/10.1038/s41591-024-03203-3