AI/ML SaMD Security: Year in Review
Vulnerabilities, FDA expectations, and real-world findings on AI-enabled medical devices.
Published: December 15, 2026 · Last reviewed: December 15, 2026
Executive summary
AI/ML SaMD is the fastest-growing segment of FDA-cleared medical devices and the segment with the least mature cybersecurity tooling. This report summarizes the year in AI/ML SaMD security: what we tested, what FDA flagged, and what the threat landscape looks like heading into 2027.
Findings combine Blue Goat Cyber's AI/ML SaMD engagement data with public FDA AI/ML lifecycle management guidance and CVE disclosures affecting common ML inference stacks.
Methodology
- Sample: AI/ML SaMD engagements completed during 2026.
- Time period: January 2026 – December 2026.
- Inclusion criteria:
  - Engagements where the device under test included an AI/ML model in the clinical decision path.
  - Engagements that produced a final report by 30 Nov 2026.
  - Public FDA AI/ML lifecycle management guidance documents in effect during 2026.
  - CVEs disclosed in 2026 affecting commonly used ML frameworks (PyTorch, TensorFlow, ONNX Runtime, scikit-learn).
- Limitations:
  - AI/ML engagement volume is small relative to traditional MedTech; sample sizes for some sub-cuts are limited.
  - FDA's AI/ML guidance is evolving; analysis reflects guidance in effect during the reporting period only.
  - CVE relevance was judged by Blue Goat Cyber engineers; not all listed CVEs were exploited in production devices.
- Anonymization:
  - All client and product names removed before analysis; records are keyed by an internal study ID.
  - Device-specific identifiers (510(k) numbers, De Novo numbers, UDIs) stripped from the source dataset.
  - Findings reported only at the aggregate level, with a minimum cell size of 5 to prevent re-identification.
  - Free-text deficiency excerpts are paraphrased; no verbatim FDA correspondence is reproduced.
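The minimum-cell-size rule above can be sketched as a simple suppression step. This is an illustrative sketch only, not Blue Goat Cyber's actual pipeline; the record shape, function name, and threshold constant are assumptions drawn from the stated rule.

```python
from collections import Counter

MIN_CELL_SIZE = 5  # cells below this size are suppressed to prevent re-identification

def aggregate_with_suppression(records, key):
    """Count records per category, dropping any cell smaller than MIN_CELL_SIZE."""
    counts = Counter(r[key] for r in records)
    return {category: n for category, n in counts.items() if n >= MIN_CELL_SIZE}

# Example: 7 PCCP-themed records pass the threshold; 2 SBOM-themed records are suppressed.
records = [{"theme": "PCCP"}] * 7 + [{"theme": "SBOM"}] * 2
print(aggregate_with_suppression(records, "theme"))  # {'PCCP': 7}
```

Suppression happens before any percentages are computed, so a suppressed cell never appears in a published chart, even as a small slice.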
Key findings
1. Predetermined Change Control Plans (PCCPs) are the most common FDA AI/ML deficiency theme.
   Pending extract.
2. Model supply chain documentation is the most common gap in AI/ML SBOMs.
   Pending extract.
3. Top CVEs in AI/ML inference stacks affecting MedTech this year.
   Pending extract.
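Finding 2 concerns three model supply chain fields the report tracks in AI/ML SBOMs: training-data provenance, model-weight provenance, and inference-runtime version. A minimal completeness check might look like the sketch below; the field names and flat SBOM shape are assumptions for illustration, not a real SBOM schema.

```python
# Required model supply chain fields, per the three attributes tracked in this report.
REQUIRED_MODEL_FIELDS = (
    "training_data_provenance",
    "model_weight_provenance",
    "inference_runtime_version",
)

def missing_model_fields(sbom: dict) -> list:
    """Return the required model supply chain fields absent or empty in the SBOM."""
    return [field for field in REQUIRED_MODEL_FIELDS if not sbom.get(field)]

# Example: an SBOM that records weights and runtime but omits training-data provenance.
sbom = {
    "model_weight_provenance": "sha256:<digest>",
    "inference_runtime_version": "ONNX Runtime 1.17",
}
print(missing_model_fields(sbom))  # ['training_data_provenance']
```

In practice these attributes would live inside a structured SBOM format (e.g. CycloneDX components) rather than a flat dictionary, but the completeness question is the same.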
Charts
All charts are free to re-use with attribution to Blue Goat Cyber. Each chart has an embed-friendly URL — see the press kit for the iframe snippet.
FDA deficiency themes for AI/ML SaMD submissions
Share of AI/ML deficiencies by content area.
Source: Blue Goat Cyber AI/ML SaMD deficiency subset, 2026. · Unit: % of deficiencies
Penetration test findings on AI/ML SaMD
Share of findings by category (model supply chain, prompt injection, data poisoning, classic web/API).
Source: Blue Goat Cyber AI/ML SaMD penetration test subset, 2026. · Unit: % of findings
2026 CVEs in ML inference stacks affecting MedTech
Count of CVEs disclosed in 2026 by ML framework.
Source: Public CVE disclosures, 2026, filtered to ML inference frameworks. · Unit: CVEs
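The CVE chart's filtering step, matching public 2026 disclosures against the four frameworks named in the methodology, could be sketched as below. The record shape and keyword matching are illustrative assumptions; the real analysis involved engineer judgment, not string matching alone.

```python
# Frameworks named in the inclusion criteria.
FRAMEWORKS = ("pytorch", "tensorflow", "onnx runtime", "scikit-learn")

def cves_per_framework(cves):
    """Count CVEs whose summary mentions each ML framework (case-insensitive)."""
    counts = {fw: 0 for fw in FRAMEWORKS}
    for cve in cves:
        text = cve["summary"].lower()
        for fw in FRAMEWORKS:
            if fw in text:
                counts[fw] += 1
    return counts

# Hypothetical records; IDs and summaries are invented for illustration.
cves = [
    {"id": "CVE-2026-XXXXX", "summary": "Deserialization flaw in PyTorch model loading"},
    {"id": "CVE-2026-YYYYY", "summary": "Heap overflow in ONNX Runtime operator"},
]
print(cves_per_framework(cves))
# {'pytorch': 1, 'tensorflow': 0, 'onnx runtime': 1, 'scikit-learn': 0}
```

A keyword pass like this over-matches (e.g. CVEs that merely mention a framework), which is why the methodology notes that relevance was judged by engineers.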
Predetermined Change Control Plan coverage in AI/ML submissions
Share of AI/ML submissions including a PCCP that addressed cybersecurity-relevant changes.
Source: Blue Goat Cyber AI/ML SaMD submission subset, 2026. · Unit: % of submissions
Model supply chain SBOM completeness
Share of AI/ML SBOMs that include training-data provenance, model-weight provenance, and inference-runtime version.
Source: Blue Goat Cyber AI/ML SBOM subset, 2026. · Unit: % of SBOMs
Cite this report
Blue Goat Cyber. (2026). AI/ML SaMD Security: Year in Review. https://bluegoatcyber.com/research/ai-ml-samd-security-year-in-review-2026