Former FDA Commissioner Scott Gottlieb, M.D., helped prepare new guidelines for how the FDA would examine and approve artificial intelligence/machine learning software for medical device approval, based on the "software as a medical device" criteria.
The new framework/guidelines begin by citing how much AI and ML are changing the healthcare landscape:
“Artificial intelligence (AI)- and machine learning (ML)-based technologies have the potential to transform healthcare by deriving new and important insights from the vast amount of data generated during the delivery of healthcare every day. Example high-value applications include earlier disease detection, more accurate diagnosis, identification of new observations or patterns on human physiology, and development of personalized diagnostics and therapeutics.
One of the greatest benefits of AI/ML in software resides in its ability to learn from real-world use and experience, and its capability to improve its performance. The ability for AI/ML software to learn from real-world feedback (training) and improve its performance (adaptation) makes these technologies uniquely situated among software as a medical device (SaMD) and a rapidly expanding area of research and development.
The aim of the FDA framework is to tailor regulatory oversight so that "AI/ML-based SaMD will deliver safe and effective software functionality that improves the quality of care that patients receive."
As always, the FDA is concerned with balancing innovation, so that patients have access to the best healthcare possible, with the need to protect patient safety.
Prior AI/ML approval
As discussed in our prior article, the FDA has been approving AI medical devices through the 510(k) authorization process and through a de novo certification process. The approved devices, so far, are focused mainly on algorithms that are considered “locked.”
“Approved AI products to date generally have locked algorithms and do not automatically change over time as new data is collected. But Gottlieb suggests relying on periodic modifications by manufacturers may delay the promise of AI to actively learn and potentially improve intervention timeliness and outcomes.”
“To date, FDA has cleared or approved several AI/ML-based SaMD. Typically, these have only included algorithms that are “locked” prior to marketing, where algorithm changes likely require FDA premarket review for changes beyond the original market authorization. However, not all AI/ML-based SaMD are locked; some algorithms can adapt over time. The power of these AI/ML-based SaMD lies within the ability to continuously learn, where the adaptation or change to the algorithm is realized after the SaMD is distributed for use and has “learned” from real-world experience. Following distribution, these types of continuously learning and adaptive AI/ML algorithms may provide a different output in comparison to the output initially cleared for a given set of inputs.”
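The distinction the FDA draws here, between a "locked" algorithm and a continuously learning one, can be illustrated with a minimal sketch. This is not from the FDA paper, and the class and parameter names are hypothetical; it simply shows why an adaptive model may return a different output for the same input after post-market "learning," which is the behavior that concerns the agency.

```python
class LockedClassifier:
    """Parameters fixed at market authorization; same input always yields the same output."""

    def __init__(self, threshold):
        self.threshold = threshold  # frozen at clearance

    def predict(self, risk_score):
        return risk_score >= self.threshold


class AdaptiveClassifier:
    """Updates its decision threshold from post-market feedback, so the output
    for a given input can drift from what was initially cleared."""

    def __init__(self, threshold, learning_rate=0.05):
        self.threshold = threshold
        self.learning_rate = learning_rate

    def predict(self, risk_score):
        return risk_score >= self.threshold

    def learn(self, risk_score, true_label):
        # Nudge the threshold to reduce errors observed on real-world cases.
        error = (risk_score >= self.threshold) - true_label
        self.threshold += self.learning_rate * error


locked = LockedClassifier(threshold=0.5)
adaptive = AdaptiveClassifier(threshold=0.5)

# Feed the adaptive model post-market cases it initially got wrong
# (false positives at a risk score of 0.55), and its threshold rises.
for _ in range(10):
    adaptive.learn(risk_score=0.55, true_label=0)

print(locked.predict(0.55))    # True: identical to the cleared behavior
print(adaptive.predict(0.55))  # False: the output has drifted after "learning"
```

Under the current regime, the drift shown by the adaptive model would likely require a new premarket review; the proposed framework asks when such changes could instead be handled through pre-specified protocols.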
The new proposed approval model – an overview
The idea, laid out in the discussion paper, is to determine what type of AI/machine learning-based SaMD modifications, if any, could potentially be exempted from premarket submission requirements.
“In the framework, FDA argues a total product lifecycle approach, including performance monitoring, is needed to regulate AI/ML SaMD with reasonable assurance of safety and effectiveness of a product.”
"A new approach to these technologies would address the need for the algorithms to learn and adapt when used in the real world," said Gottlieb. The novel approach would differ from traditional step-by-step FDA regulation by adopting more seamless, ongoing monitoring.
“With artificial intelligence, because the device evolves based on what it learns while it’s in real world use, we’re working to develop an appropriate framework that allows the software to evolve in ways to improve its performance while ensuring that changes meet our gold standard for safety and effectiveness throughout the product’s lifecycle—from premarket design throughout the device’s use on the market,” Gottlieb explained.
"The total product lifecycle framework would still include premarket assurances of safety and effectiveness, the discussion paper stresses, and developers would need to seek premarket approval for major changes to the design or function of their products."
“Developers would also be held to high standards in terms of best practices for machine learning development, such as ensuring data transparency and adhering to model validation protocols.”
“Real-world performance monitoring would take on a more critical role in ensuring that algorithms can evolve freely without putting patients in jeopardy. All changes to the algorithm would need to be appropriately documented, but not every change would require comprehensive FDA scrutiny.”
The new framework paper follows several FDA attempts to address AI and machine learning, such as a statement from FDA Commissioner Scott Gottlieb, M.D., on advancing new digital health policies to encourage innovation and bring efficiency and modernization to regulation. That statement aimed to clarify the differences between high-risk and low-risk clinical decision support systems.
FDA requirements as the software adapts – learns new analytic rules
The agency is proposing that a "predetermined change control plan" may be needed to provide information to the FDA about what anticipated changes an algorithm may undergo, along with an explanation of the method used to implement those changes.
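To make the idea concrete, here is an illustrative sketch of the spirit of such a plan: the manufacturer pre-specifies which kinds of modifications are anticipated, and anything outside that envelope is flagged for a new regulatory submission. The field names and change categories below are assumptions for illustration, not FDA terminology.

```python
# Hypothetical encoding of a "predetermined change control plan":
# anticipated modification types plus the procedure for implementing them.
CHANGE_CONTROL_PLAN = {
    "anticipated_changes": {
        "retrain_same_population",  # re-training on new data, same intended use
        "performance_tuning",       # e.g., improved sensitivity/specificity
    },
    "update_procedure": "validate on held-out clinical data before deployment",
}


def requires_new_submission(change_type, plan=CHANGE_CONTROL_PLAN):
    """Changes outside the pre-specified plan would need fresh FDA review."""
    return change_type not in plan["anticipated_changes"]


print(requires_new_submission("retrain_same_population"))  # False
print(requires_new_submission("new_intended_use"))         # True
```

A change in the SaMD's risk or intended use, as the next passage notes, is exactly the kind of modification that would fall outside the plan and trigger a new premarket submission.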
"We also anticipate that in certain cases, the SaMD's risk or the intended use may significantly change after learning," FDA's white paper states. The agency warns that such a change would trigger a need for a new premarket submission.
The FDA sought input on its paper during 2019 and will use the feedback from people and groups outside the agency to draft more concrete proposals. It is not clear whether the FDA can enact these regulations without additional statutory authority from Congress.
Select passages from the proposed framework
The current approval process
“Manufacturers submit a marketing application to FDA prior to initial distribution of their medical device, with the submission type and data requirements based on the risk of the SaMD (510(k) notification, De Novo, or premarket approval application (PMA) pathway). For changes in design that are specific to software that has been reviewed and cleared under a 510(k) notification, FDA’s Center for Devices and Radiological Health (CDRH) has published guidance (Deciding When to Submit a 510(k) for a Software Change to an Existing Device, also referred to herein as the software modifications guidance) that describes a risk-based approach to assist in determining when a premarket submission is required.”
The new approach
“The highly iterative, autonomous, and adaptive nature of these tools requires a new, total product lifecycle (TPLC) regulatory approach that facilitates a rapid cycle of product improvement and allows these devices to continually improve while providing effective safeguards.”
Categories of software modifications that may require a premarket submission include:
- A change that introduces a new risk or modifies an existing risk that could result in significant harm;
- A change to risk controls to prevent significant harm;
- A change that significantly affects clinical functionality or performance specifications of the device.
AI/ML-Based Software as a Medical Device
"AI, and specifically ML, are techniques used to design and train software algorithms to learn from and act on data. These AI/ML-based software, when intended to treat, diagnose, cure, mitigate, or prevent disease or other conditions, are medical devices under the FD&C Act, and called "Software as a Medical Device" (SaMD) by FDA and IMDRF."
Types of AI/ML-based SaMD Modifications
The three main categories of modifications are:
- Performance – clinical and analytical performance
- This may include re-training with new data sets within the intended use population from the same type of input signal, a change in the AI/ML architecture, or other means. For this type of modification, the manufacturer commonly aims to update users on the performance, without changing any of the explicit use claims about their product (e.g., increased sensitivity of the SaMD at detecting breast lesions suspicious for cancer in digital mammograms).
- Inputs used by the algorithm and their clinical association to the SaMD output; and/or
- Examples of these changes could be:
- a. Expanding the SaMD’s compatibility with other source(s) of the same input data type (e.g., SaMD modification to support compatibility with CT scanners from additional manufacturers); or
- b. Adding different input data type(s) (e.g., expanding the inputs for a SaMD that diagnoses atrial fibrillation to include oximetry data, for example, in addition to heart rate data).
- Intended use – The intended use of the SaMD, as outlined above and in the IMDRF risk categorization framework, which is described through the significance of information provided by the SaMD for the state of the healthcare situation or condition.
- These types of modifications include those that result in a change in the significance of information provided by the SaMD (e.g., from a confidence score that is ‘an aid in diagnosis’ (drive clinical management) to a ‘definitive diagnosis’ (diagnose)). These types of modifications also include those that result in a change in the state of the healthcare situation or condition and are explicitly claimed by the manufacturer, such as an expanded intended patient population (e.g., inclusion of pediatric population where the SaMD was initially intended for adults ages 18 years or older); or the intended disease or condition (e.g., expansion to use a SaMD algorithm for lesion detection from one type of cancer to another). Changes related to either the significance of the information provided by the SaMD or the healthcare situation or condition may be limited in scope by the pre-specified performance objectives and algorithm change protocols.
The FDA's new approach to artificial intelligence and machine learning seeks to examine the lifecycle of the software. Unlike software that is locked once it is delivered to the public, AI/ML software changes as it learns from the data it receives from patients and from other sources. This learning and adapting helps the software improve its ability to make diagnostic predictions and to respond to emergencies. Software, though, can have flaws in its original design, and there may be flaws in the adaptations. The FDA framework is a starting point for figuring out the best ways to regulate this developing technology while protecting the public.
When you operate any kind of healthcare venture, you need expertise to develop your systems, your procedures, and your business. When things take a different turn than you expected, you will want legal support.
To reach out, give us a call – we’d love to hear from you!