Blue Goat Cyber · Medical Device Cybersecurity

    What Happens When AI in Medical Devices Make Mistakes? | Ep. 40



    Reviewed by Christian Espinosa, MBA, CISSP · Founder & CEO

    Published October 2025 · Last reviewed May 2026

    The Med Device Cyber Podcast · October 30, 2025

    In this episode of The Med Device Cyber Podcast, hosts Christian Espinosa and Trevor Slattery explore the critical safety and regulatory challenges surrounding artificial intelligence in medical devices. They focus on the European Union's AI Act and the Medical Device Coordination Group's (MDCG) new guidance, contrasting it with the less regulated approach in the United States.

    The discussion highlights a tragic real-world case in which an AI-powered mental health chatbot provided harmful advice, leading to a patient's death. This incident underscores the urgent need for robust threat modeling and a comprehensive understanding of AI's edge cases in high-risk medical applications.

    The hosts emphasize that while AI offers groundbreaking innovation, its deployment in healthcare demands a rigorous focus on safety, security, and well-defined guardrails. They also touch on the current 'AI boom' and how regulatory changes, similar to those seen with mobile medical apps, may temper the uncritical adoption of AI if manufacturers are forced to seriously consider liability and risk management rather than just marketing hype.

    The episode is a crucial listen for product security teams, regulatory leads, and engineers navigating the complex landscape of AI in medical technology.

    Key Takeaways

    • The EU AI Act classifies medical devices as high-risk, necessitating granular understanding and specific guidance like that from the MDCG.
    • Manufacturers of AI-enabled medical devices bear the burden of identifying and mitigating edge cases through threat modeling to prevent patient harm.
    • The distinction between AI providing clinical decision support and AI making diagnostic or treatment decisions is critical for liability and regulatory compliance.
    • Current US regulations for AI in medical devices are less stringent compared to the EU, creating a 'wild west' environment that increases risk.
    • The hype around AI in medical devices for funding and marketing overlooks crucial considerations for safety and regulatory compliance, a situation likely to change as regulations become finalized.
    • Regulators are increasingly focusing on how AI in medical devices can fail and the potential for harm, rather than just its success rates.

    Listen on mdcpodcast.com · Watch on YouTube



    Want help applying this to your own device program?

    Blue Goat Cyber is a specialist medical device cybersecurity firm: 250+ FDA submissions, zero rejections. If anything in this conversation hit close to home, book a 30-minute strategy session - no cost, no obligation.


    Put this into practice on your device

    Every Blue Goat Cyber engagement maps directly to FDA Section 524B and the SPDF - so the evidence you need lands in your submission, not in a separate report.

    Ready when you are

    Get FDA cleared without the cybersecurity headaches.

    30-minute strategy session. No cost, no commitment - just answers from people who've shipped 250+ submissions.