Last reviewed: May 1, 2026
Pillar Guide · Updated 2026 · 9 min read
FDA's January 2025 draft guidance, Artificial Intelligence-Enabled Device Software Functions: Lifecycle Management and Marketing Submission Recommendations, is arguably the most consequential AI document the agency has issued for device manufacturers. It pulls together the PCCP final guidance (2024), the Good Machine Learning Practice (GMLP) guiding principles, and the cybersecurity expectations under Section 524B into a single lifecycle framework. This guide explains what the draft adds, what it does not change, and what manufacturers should already be doing.
Talk to an AI/ML regulatory expert · AI/ML Medical Device Security Service →
TL;DR
- The 2025 draft is not a replacement for PCCP or GMLP - it operationalizes them inside the standard premarket submission.
- Scope: any AI-enabled device software function (AI-DSF), including locked and adaptive models, foundation models, and generative features.
- New emphasis on transparency to users, performance monitoring across the lifecycle, and bias/subgroup performance as quality and safety signals.
- Reviewers will look for a single integrated narrative covering: model description, data, validation, transparency, risk management (ISO 14971 + AAMI CR34971), cybersecurity (Section 524B), PCCP, and postmarket performance monitoring.
- The cybersecurity content of an AI submission is now expected to address AI-specific threats (data poisoning, evasion, model inversion, prompt injection) - not just generic IT controls.
What the draft is, and is not
The draft is a lifecycle marketing-submission guidance. It tells manufacturers what to include in a 510(k), De Novo, or PMA when the device contains an AI/ML component, and how to keep that submission credible across postmarket changes. It is not a standard, not a checklist, and not law - but it is what reviewers will use to evaluate AI submissions until a final version replaces it.
It explicitly builds on:
- 2024 PCCP final guidance - how to pre-authorize specific model changes after clearance.
- GMLP guiding principles (FDA + Health Canada + MHRA, 2021) - 10 principles for the entire ML lifecycle.
- Section 524B and FDA's final premarket cybersecurity guidance - cybersecurity is part of the AI submission.
- ISO 14971 + AAMI CR34971 - risk management for AI/ML devices.
Scope: what counts as an AI-DSF
The draft uses the term AI-enabled device software function broadly. If the function applies any machine-learning model - supervised, unsupervised, reinforcement, foundation/LLM, or hybrid - to produce a clinical or operational output, it is in scope. That includes:
- Locked models trained once and deployed
- Adaptive models that retrain on field data
- Models that wrap a third-party foundation model or commercial API
- Generative AI features (LLM-based summarization, drafting, decision support)
- Models embedded in firmware as well as cloud-hosted models
Rule-based or deterministic algorithms are not AI-DSFs - but a hybrid system with even one ML component triggers the guidance for that component.
The seven things a submission needs
FDA organizes the expected content into seven domains. Treat each one as a section of the submission, not a paragraph.
1. Device description and model details
More than a block diagram. Include the model architecture (CNN, transformer, ensemble, etc.), the training and inference pipelines, where each runs (device, edge, cloud), versioning, and any third-party models or APIs in the chain. If you call OpenAI or Anthropic, that supplier is part of the device description.
2. User interface and labeling (transparency)
This is where the 2025 draft is most aggressive. FDA wants users (clinicians, patients, technicians) to understand:
- What the model does and does not do
- Intended user, use environment, and patient population
- Performance characteristics, including subgroup performance where clinically relevant
- Inputs the model was trained on and the inputs it expects in deployment
- Known limitations, edge cases, and failure modes
- How and when the model is updated
A model-card-style summary in the labeling is the practical output. Plain-English explanations beat technical specifications.
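A minimal sketch of what such a summary might capture, written here as a Python dictionary purely for illustration - the field names are our own shorthand and the clinical details are a made-up example, not an FDA-prescribed format:

```python
# Illustrative model-card fields for AI-DSF labeling; field names and values
# are our own example, not an FDA-mandated schema.
model_card = {
    "intended_use": "Flags suspected pneumothorax on adult chest X-rays "
                    "for radiologist review; not for standalone diagnosis.",
    "intended_users": ["Board-certified radiologists"],
    "patient_population": "Adults >= 18 years; not validated in pediatrics",
    "inputs_expected": "PA/AP chest X-ray, DICOM, portable or fixed acquisition",
    "performance": {"sensitivity": 0.93, "specificity": 0.89,
                    "subgroups": "see labeling table for age/sex/site breakdowns"},
    "known_limitations": ["Reduced sensitivity on portable films",
                          "Not evaluated with chest tubes in place"],
    "update_policy": "Retrained under the authorized PCCP; users notified via release notes",
    "model_version": "2.3.1",
}
```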
3. Risk assessment
ISO 14971 plus AAMI CR34971 for AI-specific risks: data quality, dataset shift, overfitting, automation bias, model drift, adversarial inputs, and unintended population disparities. The risk file should show that AI risks were identified, evaluated, and controlled - not just IT-security risks.
4. Data management
Provenance, representativeness, independence of training/validation/test sets, labeling methodology, and any preprocessing. Reviewers ask why a dataset is appropriate for the intended population - 'we used a public dataset' is not an answer.
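One common failure mode behind non-independent sets is splitting at the image level when several images come from the same patient. A minimal sketch of a patient-level (grouped) split using scikit-learn, assuming a hypothetical image index file with a patient_id column:

```python
import pandas as pd
from sklearn.model_selection import GroupShuffleSplit

# Hypothetical index of images; each patient may contribute many images.
images = pd.read_csv("image_index.csv")  # columns: image_id, patient_id, label, ...

# Split by patient, not by image, so no patient appears in more than one set.
outer = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_val_idx, test_idx = next(outer.split(images, groups=images["patient_id"]))
train_val, test = images.iloc[train_val_idx], images.iloc[test_idx]

inner = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, val_idx = next(inner.split(train_val, groups=train_val["patient_id"]))
train, val = train_val.iloc[train_idx], train_val.iloc[val_idx]

# Document the resulting counts and demographics in the data management section.
assert set(train["patient_id"]).isdisjoint(test["patient_id"])
```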
5. Model development and validation
Training methodology, hyperparameter selection, validation strategy, performance metrics with confidence intervals, and subgroup performance breakdowns (age, sex, race/ethnicity where clinically relevant, device type, site). Demographic disparities are now a quality signal FDA expects to see addressed, not a separate ethics conversation.
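As a sketch of the kind of evidence reviewers look for, the snippet below computes AUROC per subgroup with a simple percentile-bootstrap confidence interval; the column names and slices are illustrative assumptions, not prescribed by the guidance:

```python
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score

def auroc_with_ci(y_true, y_score, n_boot=2000, alpha=0.05, seed=0):
    """Point estimate plus percentile bootstrap CI for AUROC."""
    rng = np.random.default_rng(seed)
    point = roc_auc_score(y_true, y_score)
    stats = []
    n = len(y_true)
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        if len(np.unique(y_true[idx])) < 2:  # resample must contain both classes
            continue
        stats.append(roc_auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return point, lo, hi

# Hypothetical validation results with demographic and site columns.
results = pd.read_csv("validation_results.csv")  # columns: label, score, sex, age_band, site
for col in ["sex", "age_band", "site"]:
    for value, grp in results.groupby(col):
        est, lo, hi = auroc_with_ci(grp["label"].to_numpy(), grp["score"].to_numpy())
        print(f"{col}={value}: AUROC {est:.3f} (95% CI {lo:.3f}-{hi:.3f}, n={len(grp)})")
```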
6. Cybersecurity
The submission must address Section 524B requirements plus AI-specific threats: data poisoning, model evasion (adversarial examples), model inversion and membership inference, training-pipeline integrity, supply-chain integrity for third-party models, and prompt injection for LLM-based features. The cybersecurity section should map to MITRE ATLAS and/or OWASP ML Top 10 alongside the standard threat-model frame.
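As an illustration only, an AI threat-register entry might look like the structure below. The mapping fields are placeholders to be filled in from the MITRE ATLAS matrix and the OWASP ML Top 10; none of the values here are authoritative identifiers:

```python
# Illustrative threat-register entry for an AI-DSF; field values are
# placeholders, not authoritative ATLAS/OWASP identifiers.
threat_register = [
    {
        "threat": "Training data poisoning via compromised annotation vendor",
        "asset": "Training pipeline / labeled dataset",
        "atlas_mapping": "<MITRE ATLAS technique ID - look up in the ATLAS matrix>",
        "owasp_ml_mapping": "<OWASP ML Top 10 entry>",
        "impact": "Systematic misclassification for targeted inputs",
        "mitigations": [
            "Dataset integrity hashes and provenance records",
            "Outlier and label-quality audits before each retrain",
            "Held-out canary set evaluated on every candidate model",
        ],
        "risk_file_link": "RA-ML-012",  # cross-reference into the ISO 14971 risk file
    },
    # ...one entry per AI-specific threat: evasion, model inversion,
    # membership inference, supply-chain compromise, prompt injection, etc.
]
```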
7. Public submission summary and PCCP
Two deliverables live here. The public-facing submission summary (for example, the 510(k) summary) should describe the AI function in plain language so users and purchasers can see what the model does. Separately, if the model will change after clearance - and most do - include a Predetermined Change Control Plan that pre-authorizes specific changes within bounded performance limits and a defined change-control process. See our PCCP template guide for the structure.
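A minimal sketch of what "bounded performance" can mean in practice: a release gate that refuses to ship a retrained model under the PCCP unless it meets pre-specified limits on a locked test set. The thresholds below are illustrative examples, not values from the guidance:

```python
# Illustrative PCCP release gate; thresholds are example values that would be
# pre-specified in the authorized PCCP and evaluated on a locked test set.
PCCP_BOUNDS = {
    "sensitivity_min": 0.90,           # absolute floor
    "specificity_min": 0.85,           # absolute floor
    "auroc_max_drop_vs_cleared": 0.02  # relative bound vs. the cleared model
}

def pccp_gate(candidate, cleared):
    """candidate/cleared are dicts of metrics from the locked test set."""
    failures = []
    if candidate["sensitivity"] < PCCP_BOUNDS["sensitivity_min"]:
        failures.append("sensitivity below PCCP floor")
    if candidate["specificity"] < PCCP_BOUNDS["specificity_min"]:
        failures.append("specificity below PCCP floor")
    if cleared["auroc"] - candidate["auroc"] > PCCP_BOUNDS["auroc_max_drop_vs_cleared"]:
        failures.append("AUROC degraded beyond allowed bound")
    return len(failures) == 0, failures

ok, reasons = pccp_gate({"sensitivity": 0.92, "specificity": 0.88, "auroc": 0.94},
                        {"sensitivity": 0.91, "specificity": 0.87, "auroc": 0.95})
# If not ok: the change falls outside the PCCP and needs a new submission.
```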
What's new versus prior expectations
Most of the content has been implied by GMLP and PCCP for years. What is new in 2025:
- Transparency moves from nice-to-have to expected deliverable. Model cards, plain-language explanations, and subgroup performance tables in the labeling.
- Performance monitoring is a lifecycle requirement, not a postmarket afterthought. The submission should describe how performance and bias drift will be monitored, what triggers retraining or labeling updates, and how those tie back to the PCCP.
- Bias is treated as safety. Subgroup disparities are a quality signal. Manufacturers are expected to define the slices they monitor and the thresholds that trigger action.
- Cybersecurity content must be AI-aware. A generic 524B cybersecurity package without AI-specific threats will draw a deficiency.
- Foundation models and third-party APIs are in scope. They appear in the device description, the SBOM, the supplier-evaluation file, and the threat model.
How to prepare now
Even though the 2025 document is a draft, reviewers are already using it informally. Practical steps:
- Audit your current AI submissions and roadmap against the seven domains above. Note where you are thin - usually transparency labeling, subgroup performance, and AI-specific cybersecurity.
- Stand up an AI threat model that extends STRIDE with MITRE ATLAS and OWASP ML Top 10. See our AI/ML Medical Device Security service.
- Write or refresh the PCCP so engineering can ship retrains within bounds without a new submission. Start with the PCCP template.
- Map your risk file to AAMI CR34971 alongside ISO 14971. See the CR34971 explainer.
- Define your monitoring slices and thresholds for performance and bias, and tie them to the PCCP change-control gates (a minimal sketch follows this list).
- Review every third-party model and AI API as a supplier in your QMS. They belong in the SBOM and supplier evaluation file.
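To make the monitoring step concrete, here is a minimal sketch of slice and threshold definitions that could feed the PCCP change-control gates. The slices, metrics, windows, and trigger values are illustrative assumptions to be replaced with your own pre-specified limits:

```python
# Illustrative postmarket monitoring config; slices, metrics, and thresholds
# are example values, to be pre-specified and tied to the PCCP.
MONITORING_SLICES = ["overall", "sex=F", "sex=M", "age>=65", "site=portable_xray"]

MONITORING_RULES = [
    {"metric": "sensitivity", "window": "rolling_90d",
     "trigger_below": 0.88, "action": "investigate; labeling review"},
    {"metric": "auroc", "window": "rolling_90d",
     "trigger_drop_vs_baseline": 0.03, "action": "candidate retrain under PCCP"},
    # A bias rule would compare slices against each other, e.g. flag when the
    # sensitivity gap between any two slices exceeds a pre-specified limit.
]

def evaluate_rules(metrics_by_slice, baseline):
    """metrics_by_slice/baseline: {slice: {metric: value}}; returns triggered actions."""
    triggered = []
    for rule in MONITORING_RULES:
        for slc in MONITORING_SLICES:
            value = metrics_by_slice.get(slc, {}).get(rule["metric"])
            if value is None:
                continue
            if "trigger_below" in rule and value < rule["trigger_below"]:
                triggered.append((slc, rule["metric"], rule["action"]))
            base = baseline.get(slc, {}).get(rule["metric"])
            if "trigger_drop_vs_baseline" in rule and base is not None \
                    and base - value > rule["trigger_drop_vs_baseline"]:
                triggered.append((slc, rule["metric"], rule["action"]))
    return triggered
```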
Frequently asked questions
Is the 2025 draft enforceable yet? Not formally - it is a draft guidance. But reviewers are using it to evaluate AI submissions and issuing deficiencies that reference it. Treat it as the operating standard.
Do I need a PCCP if my model is locked at clearance? Not strictly, but the moment you want to retrain or update training data, you need either a PCCP or a new submission. PCCP is almost always the better path.
Does this apply to predicate-based 510(k)s? Yes. The pathway does not change the AI expectations; the AI-DSF content is required regardless of submission type.
What if our AI feature is provided by a third-party API? It is still your device. You own the device description, the supplier evaluation, the risk assessment, and the monitoring of that component as deployed in your product.
Where to go next
- PCCP Template & Worked Example for AI/ML Devices
- GMLP Crosswalk: 10 Principles to Engineering Controls
- AAMI CR34971 Explained
- AI/ML Medical Device Security Service
Sources
- FDA, Artificial Intelligence-Enabled Device Software Functions: Lifecycle Management and Marketing Submission Recommendations (Draft, January 2025)
- FDA, Marketing Submission Recommendations for a Predetermined Change Control Plan for Artificial Intelligence-Enabled Device Software Functions (Final, December 2024)
- FDA / Health Canada / MHRA, Good Machine Learning Practice for Medical Device Development: Guiding Principles (October 2021)
- AAMI CR34971:2023, Application of ISO 14971 to Machine Learning in Artificial Intelligence
