AI/ML-driven SaMD has unique cybersecurity concerns: model integrity, training-data provenance, and the interfaces used to ingest DICOM, FHIR, and PACS data. We deliver FDA-aligned threat models that explicitly cover model-supply-chain risks alongside conventional application security.
Imaging AI and SaMD products are inferential and cloud-tethered: the model, the data pipeline, the training corpus, and the runtime are all in scope for FDA cybersecurity expectations, and increasingly for Good Machine Learning Practice (GMLP) and Predetermined Change Control Plan (PCCP) guidance as well.
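One concrete control behind the model-integrity and supply-chain points above is verifying a model artifact's digest against a pinned value before the runtime loads it. The sketch below is a minimal illustration, not a prescribed implementation; the file name `model.onnx` and the idea of reading the pinned digest from a signed release manifest are assumptions for the example.

```python
import hashlib
from pathlib import Path


def sha256_digest(path: Path) -> str:
    """Stream the file in chunks so large model artifacts aren't loaded into memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_model_artifact(path: Path, pinned_digest: str) -> bool:
    """Refuse to load a model whose hash differs from the expected digest."""
    return sha256_digest(path) == pinned_digest


# Demo with a stand-in artifact (hypothetical file, not a real model).
artifact = Path("model.onnx")
artifact.write_bytes(b"fake model weights")
pinned = sha256_digest(artifact)  # in practice, taken from a signed manifest
assert verify_model_artifact(artifact, pinned)
assert not verify_model_artifact(artifact, "0" * 64)  # tampered or wrong build
artifact.unlink()
```

In production this check would sit behind a cryptographic signature on the manifest itself (e.g. Sigstore or a vendor code-signing key), so the pinned digest is trusted, not just present.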
Hospital security teams treat SaMD like any other SaaS product: they want SOC 2 / HITRUST evidence, an SBOM, an API security summary, and clarity on the cloud shared-responsibility split before they let it touch their PACS or EHR.