
    PCCP Template & Worked Example for AI/ML Medical Devices

    How to write a Predetermined Change Control Plan FDA will accept - structure, the three required components, performance bounds, and a worked example.


    By Christian Espinosa, MBA, CISSP

    Founder & CEO · Blue Goat Cyber


    Reviewed by Trevor Slattery

    COO · Blue Goat Cyber

    Last reviewed: May 1, 2026

    Working Template · Updated 2026 · 8 min read

    A Predetermined Change Control Plan (PCCP) is how you avoid filing a new 510(k) every time you retrain a model. It pre-authorizes a defined set of model changes so engineering can ship updates within bounds, under a controlled process, without re-clearing. This guide walks through the three required components in the FDA 2024 final guidance, then shows a worked example for an imaging SaMD you can adapt.

    Get help drafting your PCCP · AI/ML Medical Device Security Service →

    What a PCCP is - and is not

    A PCCP is a section of your premarket submission. Once cleared, it lets you make the specific changes you described without a new submission, provided you follow the modification protocol you committed to. It does not let you make arbitrary changes, change the intended use, or expand the patient population.

    FDA's 2024 final guidance defines three required components:

    1. Description of Modifications - what changes you intend to make.
    2. Modification Protocol - how you will make them safely and consistently.
    3. Impact Assessment - the benefit-risk analysis showing the changes remain safe and effective.

    All three live in the submission together. Skipping or thinning any one of them is the most common reason a PCCP gets rejected.

    Component 1: Description of Modifications

    Be specific. Vague PCCPs ("we will retrain as needed on additional data") fail. The description should list each change type, the parameter or component being changed, and the bounds within which the change can occur; one way to record this in a structured, checkable form is sketched at the end of this section.

    Common change types:

    • Retraining on new data - new institutions, new scanners, new demographic representation, expanded date range.
    • Hyperparameter or architecture tuning within a stated range (e.g. learning rate, regularization).
    • Input preprocessing changes (e.g. updated normalization, new image format support).
    • Performance threshold adjustments for outputs or alerts.
    • Compatibility with new device hardware or software platforms (within stated specifications).

    What FDA does not accept under a PCCP:

    • Changes to intended use or indications for use
    • Expansion to new patient populations not in the original validation
    • New clinical claims
    • Changes that materially increase residual risk
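
    The accepted change types above lend themselves to a structured record that reviewers and engineers can check every proposal against. Below is a minimal sketch assuming a Python-based workflow; the class, field names, and example entries are hypothetical placeholders, not terms from the guidance.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class DeclaredModification:
        """One row of the Description of Modifications: what may change, and within what bounds."""
        change_type: str   # e.g. "retraining on new data"
        component: str     # the parameter or component being changed
        bounds: str        # the limits the PCCP commits to
        exclusions: str    # what the change explicitly does not cover

    # Hypothetical entries - a real PCCP mirrors its own device, data, and risk file.
    DECLARED = (
        DeclaredModification(
            change_type="retraining on new data",
            component="training dataset",
            bounds="data from the cleared use environment only; bounded growth per cycle",
            exclusions="no new patient populations, no new indications",
        ),
        DeclaredModification(
            change_type="hyperparameter tuning",
            component="learning rate / regularization strength",
            bounds="tuning within a stated numeric range; architecture fixed",
            exclusions="no architecture or intended-use changes",
        ),
    )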

    Component 2: Modification Protocol

    This is the operational core. The protocol describes the engineering and quality processes that make the changes safe and reproducible. At minimum it should cover:

    • Data management practices - how new training and validation data are sourced, labeled, quality-checked, and documented for representativeness (a gating sketch appears at the end of this component).
    • Re-training methodology - the pipeline, the hyperparameter ranges, the stopping criteria.
    • Performance evaluation methodology - the test sets (held out and independent), the metrics, the acceptance criteria, including subgroup performance.
    • Update procedures - how a new model version is packaged, signed, deployed, monitored, and rolled back if it fails post-deployment monitoring.
    • Cybersecurity - integrity of the training pipeline, supply-chain controls for any third-party data or models, and re-validation of the threat model after meaningful changes.
    • Communication to users - if and how labeling, IFU, or transparency materials are updated when a model version changes.

    The protocol should reference the QMS procedures it relies on (design controls, document control, CAPA) so reviewers can see it lives inside the quality system.
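
    As one illustration of the data-management bullet above, inclusion of a new data batch can be gated on subgroup floors before it ever reaches training. This is a minimal sketch under assumed subgroup names and thresholds; real floors come from the device's validated population and risk file.

    # Hypothetical subgroup floors for a candidate training-data batch.
    MIN_SHARE = {
        "pediatric": 0.15,
        "age_65_plus": 0.15,
        "female": 0.40,
    }

    def representativeness_ok(subgroup_counts: dict[str, int], total: int) -> bool:
        """Accept the batch only if every declared subgroup meets its minimum share."""
        if total == 0:
            return False
        shortfalls = {
            name: round(subgroup_counts.get(name, 0) / total, 3)
            for name, floor in MIN_SHARE.items()
            if subgroup_counts.get(name, 0) / total < floor
        }
        if shortfalls:
            print(f"Batch rejected; subgroups below floor: {shortfalls}")
            return False
        return True

    # Example: a 1,000-study batch with only 120 pediatric studies fails the pediatric floor.
    representativeness_ok({"pediatric": 120, "age_65_plus": 300, "female": 450}, 1000)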

    Component 3: Impact Assessment

    The impact assessment is the benefit-risk analysis demonstrating that the modifications, executed under the protocol, do not adversely affect device safety or effectiveness. It should:

    • Compare each modification type against the original cleared device and identify the risks introduced or affected.
    • Reference the risk file (ISO 14971 + AAMI CR34971) and the risk controls that mitigate each impact.
    • Address cybersecurity impact - new data sources can introduce poisoning risk, new third-party models introduce supply-chain risk.
    • Address bias and subgroup performance - confirm the validation methodology will detect disparate performance on retrained models.
    • Justify why the residual risk after each change remains acceptable.

    Worked example: imaging SaMD for chest X-ray triage

    Hypothetical, illustrative only - your device, data, and risk file will differ.

    Description of Modifications

    1. Retraining on additional data. Up to two retraining cycles per year using new chest X-ray studies from US-based academic and community hospitals, expanding training set by no more than 30% per cycle. New data must include at least 20% pediatric and 20% age 65+ studies to maintain demographic balance.
    2. Hyperparameter tuning. Learning rate between 1e-5 and 1e-3, batch size between 16 and 64, dropout between 0.1 and 0.5. Architecture (ResNet-50 backbone) is fixed.
    3. Input preprocessing. Support for additional DICOM transfer syntaxes and one new vendor's CR scanner output, validated for image-quality equivalence.
    4. Output threshold. Triage-flag threshold may be tuned within the range that maintains sensitivity ≥ 92% and specificity ≥ 75% on the held-out test set; a bounds-checking sketch follows this list.
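
    The bounds above are narrow enough to enforce automatically before any retrained model moves forward. A minimal sketch of that check follows; the numbers mirror the worked example, while the function and variable names are illustrative, not part of any FDA template.

    # Declared hyperparameter ranges and clinical performance floors from the example above.
    HYPERPARAM_BOUNDS = {
        "learning_rate": (1e-5, 1e-3),
        "batch_size": (16, 64),
        "dropout": (0.1, 0.5),
    }
    MIN_SENSITIVITY = 0.92  # triage-flag threshold may move only while these floors hold
    MIN_SPECIFICITY = 0.75

    def within_declared_bounds(hyperparams: dict, sensitivity: float, specificity: float) -> bool:
        """Return True only if a proposed update stays inside the declared PCCP envelope."""
        for name, (low, high) in HYPERPARAM_BOUNDS.items():
            value = hyperparams.get(name)
            if value is None or not (low <= value <= high):
                return False  # undeclared or out-of-range parameter
        return sensitivity >= MIN_SENSITIVITY and specificity >= MIN_SPECIFICITY

    # A run at lr=5e-4, batch 32, dropout 0.3 with held-out sens/spec of 0.94/0.78 passes.
    print(within_declared_bounds(
        {"learning_rate": 5e-4, "batch_size": 32, "dropout": 0.3}, 0.94, 0.78))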

    Modification Protocol (summarized)

    • Data: New studies sourced under data-use agreements, de-identified, labeled by two board-certified radiologists with adjudication. Representativeness check on age, sex, race/ethnicity, scanner vendor, and pathology distribution before inclusion.
    • Training: Run in version-controlled pipeline; weights and configs stored in artifact registry with full lineage.
    • Validation: Performance evaluated on a sequestered test set untouched since original clearance, plus a refreshed test set drawn from the most recent six months of data. Subgroup performance reported for age, sex, and scanner vendor; any subgroup AUC drop ≥ 3% from prior version triggers review.
    • Acceptance: New version released only if overall sensitivity, specificity, and PPV are non-inferior to prior cleared version (margin defined in protocol) and no subgroup falls outside the predefined band; a gate sketch follows this list.
    • Deployment: Signed model package, staged rollout to 10% of installed base for 30 days with active monitoring, full rollout if monitoring metrics remain in band.
    • Rollback: Automated rollback if production sensitivity drops > 5% over a rolling 7-day window.
    • Cybersecurity: New training data passes integrity checks; supplier-evaluation update if any new third-party data source is added; threat-model review documented for each release.
    • Labeling: Transparency labeling and IFU performance section updated for each new version; users notified through standard release notes; substantive changes to subgroup performance trigger updated user communications.
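
    The Validation, Acceptance, and Rollback bullets above reduce to two small checks: a release gate run before deployment and a monitoring trigger run in production. A minimal sketch follows; the non-inferiority margin and the reading of the 5% sensitivity drop (as absolute percentage points) are assumptions for illustration.

    NON_INFERIORITY_MARGIN = 0.02   # assumed margin; the real value is defined in the protocol
    MAX_SUBGROUP_AUC_DROP = 0.03    # a drop of 3 AUC points or more in any subgroup blocks release

    def passes_release_gate(prior: dict, candidate: dict,
                            prior_auc: dict, candidate_auc: dict) -> bool:
        """Release gate: overall non-inferiority plus per-subgroup AUC stability."""
        for metric in ("sensitivity", "specificity", "ppv"):
            if candidate[metric] < prior[metric] - NON_INFERIORITY_MARGIN:
                return False
        for subgroup, previous in prior_auc.items():
            if previous - candidate_auc.get(subgroup, 0.0) >= MAX_SUBGROUP_AUC_DROP:
                return False
        return True

    def should_roll_back(daily_sensitivity: list[float], baseline: float) -> bool:
        """Rollback trigger: production sensitivity down more than 5 points over a 7-day window."""
        window = daily_sensitivity[-7:]
        if len(window) < 7:
            return False  # not enough monitoring data yet
        return (baseline - sum(window) / len(window)) > 0.05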

    Impact Assessment (summarized)

    • Retraining on new data introduces dataset-shift risk, controlled by representativeness checks and subgroup acceptance gates.
    • Hyperparameter tuning within stated bounds does not change the model architecture or the intended use; performance must remain non-inferior.
    • New scanner support is treated as a software change with image-quality equivalence testing; no change to indications.
    • Output threshold adjustment is bounded by clinical performance floors that match the cleared sensitivity/specificity envelope.
    • Cybersecurity residual risk unchanged or reduced - new data flows through the same provenance controls; no new external interfaces introduced.
    • Net benefit-risk: maintained or improved across all modification types under the protocol.

    Common reasons a PCCP gets rejected

    • Too vague. "Retrain as needed" is not a description; it is a wish.
    • No subgroup acceptance criteria. FDA expects bias monitoring as part of the gate.
    • No rollback or postmarket monitoring. A PCCP without a deployment safety net reads like a one-way door.
    • No cybersecurity coverage. New data sources and new models are supply-chain events.
    • Modification protocol not anchored in QMS. A PCCP that lives outside design controls is unauditable.


    Sources

    • FDA, Marketing Submission Recommendations for a Predetermined Change Control Plan for Artificial Intelligence-Enabled Device Software Functions (Final, December 2024)
    • FDA, Artificial Intelligence-Enabled Device Software Functions: Lifecycle Management and Marketing Submission Recommendations (Draft, January 2025)

    Ready when you are

    Get FDA cleared without the cybersecurity headaches.

    30-minute strategy session. No cost, no commitment - just answers from people who've shipped 250+ submissions.