
Reviewed by Christian Espinosa, MBA, CISSP · Founder & CEO
Published October 2025 · Last reviewed May 2026
The Med Device Cyber Podcast · with José Acosta · October 30, 2025

In this episode of The Med Device Cyber Podcast, hosts Trevor White and Christian Espinosa talk with José Acosta about the critical role of AI literacy in the future of healthcare. Acosta, a surgeon with 40 years of experience and an early technology adopter, emphasizes that AI literacy extends beyond basic prompting to understanding the underlying mathematics, accuracy limitations, and privacy implications of large language models (LLMs).

The discussion covers the current state of AI in diagnostics, particularly imaging, noting that while AI tools show promise and have earned FDA approvals, they lack the near-100% precision required for therapeutic applications. The conversation then turns to the security vulnerabilities of AI in medical settings, addressing concerns about poisoned training data, output tampering, and ensuring models are purpose-built for their tasks.

Concerns are also raised about human oversight, particularly regarding "AI scribes" and the risk of increasing patient load without adequate diagnostic time. The episode advocates a measured approach to AI integration, stressing high-quality training data, robust governance, ethical considerations, and continuous education so medical professionals can leverage AI effectively while mitigating its risks.
Key Takeaways
- AI literacy for medical professionals goes beyond simple prompting and includes understanding the underlying mathematics, limitations, privacy, governance, and ethics of large language models.
- While AI shows promise in diagnostics like medical imaging, it currently lacks the near 100% precision necessary for therapeutic applications in medicine, even with existing FDA approvals.
- The security of AI in medical devices is paramount; concerns include poisoned training data, tampered outputs, and ensuring models are securely built for their intended purpose.
- Over-reliance on AI tools like ambient scribes without proper human oversight and critical evaluation can introduce patient safety risks, such as inadequate diagnosis time and misinterpretations.
- The evolution of AI in healthcare demands a measured approach, emphasizing high-quality training data, robust guardrails, and continuous user education to effectively integrate these tools safely and securely.
- Future medical education should prioritize teaching effective AI prompting and usage to prepare healthcare professionals to leverage these tools optimally and avoid being replaced by those who can.
Listen on mdcpodcast.com · Watch on YouTube
Want help applying this to your own device program?
Blue Goat Cyber is a specialist medical device cybersecurity firm: 250+ FDA submissions, zero rejections. If anything in this conversation hit close to home, book a 30-minute strategy session, no cost and no obligation.
