FDA Guidance on AI-Enabled Devices: Transparency, Bias, & Lifecycle Oversight

On January 7, 2025, the Food and Drug Administration (FDA) issued draft guidance entitled “Artificial Intelligence-Enabled Device Software Functions: Lifecycle Management and Marketing Submission Recommendations” (Docket No. FDA-2024-D-4488). The draft guidance recommends what industry and FDA staff should include in a marketing application to support the evaluation of the safety and effectiveness of artificial intelligence (AI)-enabled device software functions. It also advises on the design, development, and implementation of AI-enabled devices across the total product lifecycle (TPLC), including the post-market setting. Sponsors, device developers, and AI technologists who align with this FDA guidance early can reduce the risk of costly delays and help ensure product quality.

The FDA states in the Federal Register Notice that the goal of the guidance is to provide approaches that will address transparency and bias and ensure that the device benefits all relevant demographic groups (e.g., age, sex, race, and ethnicity) who may use the device. To maintain consistency with software development and documentation, the FDA harmonized the recommendations in the guidance with software-related consensus standards (FDA Recognized Consensus Standards Database).

A “device software function” (DSF) is a software function that meets the definition of a device under section 201(h) of the Federal Food, Drug, and Cosmetic Act and can be either “software as a medical device” (SaMD) or “software in a medical device” (SiMD). The FDA discusses these distinctions on its website. If AI models are used to achieve the intended purpose of the device software function, the term AI-DSF is used to describe the software. An AI-enabled device refers to the entire device, not just the software. A marketing submission for an AI-enabled device may be made through the 510(k) process, a De Novo classification request, a Premarket Approval (PMA) application, a Humanitarian Device Exemption (HDE), a Biologics License Application (BLA), or an Investigational Device Exemption (IDE).

A 510(k) application can reference a non-AI-enabled device as a predicate if the AI-enabled device does not raise different questions of safety and effectiveness and meets the regulatory requirements for substantial equivalence. The FDA recommends early engagement with the agency through the Q-Submission process, a voluntary program through which sponsors can receive valuable feedback from the FDA. Early engagement is especially beneficial when an AI-DSF is part of a combination product, when new and emerging technology is used in the design or development of the device, or when novel methods are used for device validation.

Transparency and bias are particular concerns across the TPLC of an AI-enabled device. Transparency can be promoted by making information about the device both understandable and accessible to its users. AI bias can produce erroneous results in a systematic but unpredictable way, and it can compromise the safety and effectiveness of the device, especially if the bias is triggered by use of the device in a specific healthcare setting, age group, or other limited population. The guidance provides detailed design considerations to promote transparency. It also suggests that bias can be controlled by considering the representativeness of the data used to develop, test, and monitor the AI-enabled device, and that testing the device in specific subgroups can help evaluate whether performance differs across them.
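To make the subgroup-testing idea concrete, the sketch below computes model accuracy per demographic subgroup so that performance gaps can be surfaced. This is an illustrative example only, not a method prescribed by the FDA guidance; the function name, data layout, and example records are assumptions.

```python
from collections import defaultdict

def subgroup_accuracy(records):
    """Compute accuracy per subgroup from (subgroup, y_true, y_pred) tuples.

    A large accuracy gap between subgroups may indicate bias and
    would warrant further investigation of the training data.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, y_true, y_pred in records:
        total[group] += 1
        if y_true == y_pred:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Hypothetical evaluation records: (age group, true label, predicted label).
records = [
    ("18-40", 1, 1), ("18-40", 0, 0), ("18-40", 1, 1), ("18-40", 0, 1),
    ("65+",   1, 0), ("65+",   0, 0), ("65+",   1, 0), ("65+",   1, 1),
]
print(subgroup_accuracy(records))  # {'18-40': 0.75, '65+': 0.5}
```

Here the model performs noticeably worse for the older subgroup, the kind of disparity that representativeness checks on the development data are meant to catch.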

Sponsors are encouraged to consider a “predetermined change control plan” (PCCP) to address long-term performance in the post-market setting, for example when the systems that produce the model's data inputs change over time without the user's knowledge (known as data drift), or when other factors affect model performance. A PCCP allows sponsors to modify an AI-DSF without submitting an additional marketing submission or obtaining FDA authorization before initiating the modification, provided the change is consistent with the PCCP.
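One simple way to monitor for data drift of the kind mentioned above is to compare the mean of incoming input values against a development-time baseline. The sketch below is a minimal illustration, not a method from the guidance; the function name, the data, and the 3-standard-deviation threshold are assumptions chosen for the example.

```python
import statistics

def mean_shift_zscore(baseline, current):
    """Return how many baseline standard deviations the current batch
    mean has shifted away from the baseline mean."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return abs(statistics.mean(current) - mu) / sigma

# Hypothetical sensor readings recorded during development (baseline)
# versus a later post-market batch whose distribution has shifted.
baseline = [10.0, 10.2, 9.8, 10.1, 9.9, 10.0]
drifted = [11.5, 11.8, 11.6, 11.4]

z = mean_shift_zscore(baseline, drifted)
print(z > 3)  # flag drift beyond an illustrative 3-SD threshold; prints True
```

In practice, a sponsor's performance-monitoring plan would use more robust statistical tests and thresholds justified for the specific device, but the underlying idea is the same: detect when post-market inputs no longer resemble the data the model was developed on.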

The guidance is divided into sections that describe the documentation needed in an FDA submission: device description, user interface, labeling, risk assessment, data management, model description and development, validation, device performance monitoring, and cybersecurity. Certain terms used across the TPLC of an AI-enabled device may carry different meanings in the AI community than they do for the FDA. For example, the AI community often uses “validation” to mean model tuning or data selection that, combined with model training, optimizes an AI model. The FDA, however, defines validation as “confirmation by examination and provision of objective evidence that the particular requirements for a specific intended use can be consistently fulfilled” (21 CFR 820.3(z)).

Model tuning or training should therefore not be presented as part of the validation process in an FDA submission. A list of terms and definitions used in the digital health, artificial intelligence, and machine learning space can be found in the FDA Digital Health and Artificial Intelligence Glossary – Educational Resource.

Because the guidance intends to address issues across the TPLC, sponsors are asked to consider Quality System documentation, which reflects the current good manufacturing practice (CGMP) requirements for manufacturers of finished medical devices (21 CFR Part 820), including the design controls in 21 CFR 820.30.

The guidance also references other FDA guidance documents that may be helpful when preparing a submission to the FDA.

Conclusion

The FDA’s draft guidance on AI-enabled device software functions, issued on January 7, 2025, outlines recommendations for the lifecycle management and marketing submissions of such devices. It will help sponsors, device developers, and AI technologists understand what is needed for an FDA submission and what will be required across the product lifecycle to ensure product quality. To reduce the risk of costly delays, it is important to align with these FDA expectations before submitting and marketing an AI-enabled software device. The guidance focuses on transparency, bias mitigation, and ensuring the safety and effectiveness of AI-enabled devices across their TPLC. It provides detailed recommendations for the documentation needed in FDA submissions, including device description, user interface, labeling, risk assessment, data management, model description and development, validation, device performance monitoring, and cybersecurity.

It emphasizes the importance of early engagement with the FDA, especially for combination products and novel technologies. The guidance also encourages the use of a PCCP for post-market modifications. Transparency and bias control are highlighted as critical factors in the design and development of AI-enabled devices, with specific recommendations provided in appendices for design and usability evaluation considerations. The document aims to harmonize with existing software development standards and address the needs of all relevant demographic groups who may use the device.

Do you have additional regulatory or other ethical review questions? WCG’s Institutional Review Board (IRB) experts are here to support you in navigating the complexities of trial development and review.
