09/05/2024 / By Olivia Cook
A recent study has found that nearly half of all artificial intelligence-powered medical devices authorized by the Food and Drug Administration (FDA) lack publicly available clinical validation data demonstrating their safety and effectiveness.
Recently published in Nature Medicine, the study found that roughly 43 percent of the 521 AI health devices authorized by the FDA between 2016 and 2022 lack publicly available clinical validation data showing they were tested on real patient data. In other words, nearly half of these tools were not necessarily validated on actual patient cases, raising concerns about how they will perform in real-world medical settings.
The researchers involved in the study are calling for more transparency and better standards in the development and approval of AI-powered medical devices. They argue that clearer guidelines are needed to determine which devices are truly effective and which might need more testing.
Lead author Sammy Chouffani El Fassi, a medical student from the University of North Carolina at Chapel Hill, emphasized the lack of a standard for understanding the quality and reliability of these devices. (Related: AI-powered medical devices could enhance detection and differentiation of skin cancers – but could also give false readings.)
The absence of public data does not always mean the data doesn’t exist. The FDA reviews extensive confidential information before approving a device, which might include real patient data. However, this lack of transparency could make clinicians hesitant to adopt new AI tools, as they may be unsure how these devices will perform in real-world scenarios. According to Chouffani El Fassi, physicians are unlikely to trust devices that haven’t been rigorously tested in real-world conditions.
The study also highlighted the opportunity for greater involvement from clinicians and researchers in testing these devices. By actively evaluating how AI tools perform on patients, health care professionals and respected academic institutions can improve the quality and reliability of AI in medicine.
Most of the AI tools examined were Class II devices, which are considered to have a moderate risk to patients and are typically approved based on their similarity to existing technologies. Interestingly, more than half of the devices lacking clinical validation were radiology tools, which are often used for image archiving and communication. These functions may not require prospective validation as they don’t always directly impact patient care.
However, Dr. Nigam H. Shah, a professor and chief data scientist at Stanford Health Care, noted that some of these tools might not need real-world validation to prove their effectiveness as they can be tested using any data, even non-medical images.
The FDA has now authorized around 950 AI and machine learning-enabled devices. This surge reflects growing interest in AI technology, but it also presents a challenge for regulators trying to keep up. According to Shah, this is where academics and clinicians can play a critical role.
Clinicians and researchers can contribute by validating AI devices as part of their regular work, such as during medical fellowships. Device companies could also collaborate with organizations like the Coalition for Health AI to conduct thorough prospective validations.
This kind of research does not have to be overly complicated. For example, testing an AI tool that helps identify brain hemorrhages or strokes in CT scans could involve simple comparisons, like measuring the time to diagnosis with and without AI or asking radiologists to rate the tool’s effectiveness on a simple scale.
Even a basic feedback method, such as a five-point Likert scale, can provide valuable insights. If clinicians find a tool difficult to use or not beneficial in practice, that feedback helps determine the tool’s clinical value. This, in turn, could lead to greater acceptance and use of AI devices in health care.
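The two simple evaluations described above can be sketched in a few lines of code. The sketch below is purely illustrative: the timing figures and radiologist ratings are made-up numbers, not data from the study, and it simply averages the per-case time saved with AI assistance and summarizes five-point Likert ratings.

```python
# Hypothetical sketch of the simple comparisons described above:
# (1) time to diagnosis with vs. without an AI tool, and
# (2) a five-point Likert summary of radiologist feedback.
# All data below are invented for illustration only.

from statistics import mean
from collections import Counter

# Minutes to diagnosis for the same cases, read without and with AI assistance
time_without_ai = [14.2, 11.5, 16.8, 12.0, 15.3]
time_with_ai = [9.1, 8.4, 12.2, 7.9, 10.5]

avg_saving = mean(w - a for w, a in zip(time_without_ai, time_with_ai))
print(f"Average time saved per case: {avg_saving:.1f} minutes")

# Five-point Likert ratings (1 = not useful, 5 = very useful) from radiologists
ratings = [4, 5, 3, 4, 4, 2, 5, 4]
counts = Counter(ratings)
print(f"Mean rating: {mean(ratings):.2f}")
for score in range(1, 6):
    print(f"  {score}: {'#' * counts.get(score, 0)}")
```

Even this minimal kind of tally, repeated across enough cases and clinicians, gives the sort of feedback the study's authors argue should accompany device authorization.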
Despite the potential advantages of AI, only 38 percent of physicians use it, according to a recent survey by the American Medical Association. And while 65 percent of physicians believe AI could benefit health care, many remain hesitant because they aren't convinced the benefits justify the costs.
ScienceFraud.News is a fact-based public education website published by Science Fraud News Features, LLC.