
AI in Healthcare: Reap the Rewards, Reduce the Risks

Pondurance
July 8, 2025

In healthcare settings, AI can be both an ally and an enemy. According to the World Economic Forum, AI has the potential to expand access to critical healthcare services, improve diagnostics, detect diseases sooner, and speed up administrative tasks. With that potential, it’s no surprise that the generative AI market in healthcare could reach nearly $17 billion by 2034.

AI also has its downsides, from perpetuating biases against underrepresented groups to enabling sophisticated cyberattacks. In this fourth article in our series on helping midsize healthcare organizations manage breach risks, Stacey Oneal, PhD, a senior security consultant at Pondurance, shares how providers can mitigate AI-powered threats while still benefiting from AI’s potential.



Q: What are the greatest security and privacy concerns for healthcare organizations using AI?

Stacey: AI can enhance diagnostic capabilities by analyzing large amounts of patient data to detect patterns and improve treatment. However, there’s concern that this same data could be misused—such as by insurance companies to deny coverage based on predictive health analytics. 


On the privacy side, wearable devices—like smartwatches—collect sensitive health data that could be scrutinized or used unfairly. And advances in real-time patient monitoring raise additional questions about data integration and surveillance.


Then, of course, AI enables cybercriminals to more easily deceive users and break through an organization’s security defenses. From social engineering and phishing to deepfakes and adversarial attacks, AI-powered threats are especially difficult to detect and remediate.


Q: What can healthcare providers do to mitigate privacy, security, and breach risks around AI?

Stacey: The strategies for protecting against AI risks are very similar to safeguarding against any other kind of cyber risk. If you’re already using some of the following best practices, then you’re on the right path to ensuring the safe and secure use of AI in your organization. 


  • Implement zero trust architecture. Unlike traditional models that assume anyone inside the network is trustworthy, zero trust architecture operates on the principle of “never trust, always verify.” It enforces least-privilege access and requires users and systems to continuously revalidate access, even within trusted environments. This approach limits exposure and potential damage in the event of unauthorized access.


Zero trust architecture helps protect against AI threats in a complex healthcare ecosystem. Source: NIST Zero Trust Networks.

  • Strengthen governance within your organization. You need strong policies around how AI is used, tested, and monitored. This includes integrating frameworks like the NIST AI Risk Management Framework into existing policies and procedures to ensure responsible and secure AI implementation.

  • Encourage AI training and certifications for your security team. For example, ISACA has launched new AI-specific certifications, including one focused on auditing AI systems. These are designed for professionals with existing credentials like CISA and help build needed expertise in AI oversight. Building a workforce skilled in AI governance and auditing is essential as AI becomes more embedded in your healthcare operations.

  • Provide continuous monitoring. Like any software or digital system, AI tools require ongoing monitoring for anomalies and other emerging risks. Keeping systems updated and identifying abnormal behavior helps reduce the impact of potential threats.

  • Create a business case for your cybersecurity budget. Security teams are often seen as cost centers, which makes it necessary to frame cybersecurity and AI risk management as a form of insurance. Demonstrating how these measures prevent costly incidents can help justify investment and gain executive buy-in.
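The “never trust, always verify” principle behind the first practice above can be sketched as a deny-by-default access check. This is a minimal illustration, not Pondurance’s implementation: the roles, resources, and token lifetime here are hypothetical assumptions chosen to show least-privilege access plus continuous revalidation.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    role: str               # role assigned to the user
    resource: str           # resource being requested
    token_issued_at: float  # timestamp when the session token was issued

# Hypothetical least-privilege policy: each role may touch only the
# resources explicitly listed for it. Anything not listed is denied.
ROLE_POLICY = {
    "nurse": {"patient_vitals"},
    "billing": {"invoices"},
}

# Short token lifetime forces frequent revalidation, per zero trust.
TOKEN_TTL_SECONDS = 300

def authorize(req: AccessRequest, now: float) -> bool:
    """Deny by default; grant only when the role policy allows the
    resource AND the session token is fresh enough to count as
    'recently verified'."""
    if now - req.token_issued_at > TOKEN_TTL_SECONDS:
        return False  # stale session: the user must re-authenticate
    return req.resource in ROLE_POLICY.get(req.role, set())
```

Even a user already “inside” the environment is re-checked on every request, and an unknown role or resource falls through to a denial rather than a grant.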


Q: Given the complexity of healthcare operations, it’s impossible to monitor every system. How can providers best protect against AI security and privacy risks in these environments?

Stacey: Network segmentation and data labeling help you identify what types of data live in which parts of your network, so you can focus your security resources where they’re most needed. Sensitive data—like anything covered by HIPAA—needs more protection than, say, billing information, which comes with a different set of concerns than patient privacy.
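The labeling-and-segmentation idea above can be sketched as a simple lookup that routes each data type to a network segment by sensitivity. The labels, segment names, and data types are illustrative assumptions, not a real deployment; the one design point worth copying is failing closed, so unclassified data lands in the most protected segment.

```python
# Hypothetical sensitivity labels per data type. "phi" marks
# HIPAA-covered protected health information.
SENSITIVITY_LABELS = {
    "patient_record": "phi",
    "lab_result": "phi",
    "invoice": "financial",
    "newsletter_list": "internal",
}

# Hypothetical mapping from label to network segment, with the
# strictest controls and monitoring on the PHI segment.
SEGMENT_BY_LABEL = {
    "phi": "segment-restricted",
    "financial": "segment-finance",
    "internal": "segment-general",
}

def segment_for(data_type: str) -> str:
    """Route data to a segment by sensitivity; unknown data types
    fail closed into the most restrictive (PHI) segment."""
    label = SENSITIVITY_LABELS.get(data_type, "phi")
    return SEGMENT_BY_LABEL[label]
```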


Q: In a previous article, we discussed the gap between compliance and security. How do you see the relationship between these two objectives?

Stacey: As I always say, compliance is not security; it’s just a baseline that creates a shared foundation and instills trust that organizations are implementing security controls. True security goes beyond compliance to identify and address real vulnerabilities and threats—including AI-based threats. Once you’ve achieved compliance, your next focus should be strengthening your cybersecurity program.


Q: Midsize healthcare organizations often have limited cybersecurity budgets and resources. How can they best address new and emerging AI risks?

Stacey: A trusted cybersecurity partner can supplement an internal team’s capabilities with tools and expertise they would not otherwise have access to. For example, Pondurance’s managed detection and response (MDR) solution helps healthcare organizations eliminate breach risks by stopping attacks—whether from AI or other sources—before they can do harm. Our team of experienced SOC analysts and our cloud-native platform complement your existing security tools, technologies, and resources. Together, we can rapidly detect, validate, and remediate threats to protect your organization from risks and harm.


We also assist with AI governance and compliance, helping organizations set up clear policies, procedures, and a secure architecture. Ongoing oversight is key: whether they’re adopting AI or protecting against its risks, organizations need checks and balances, regular audits, and advisory support to ensure AI is integrated safely and in line with compliance standards. We don’t replace an organization’s internal experts or policies and procedures. Rather, we work with what you have, offering recommendations to improve your organization’s overall cybersecurity program.


Final Thoughts

AI has enormous potential to transform healthcare, from logistics and diagnostics to treatment and administrative tasks. The goal is to harness that potential while mitigating AI’s cyber threats and breach risks. After all, as a Dark Reading article put it, “AI is…a tool that can be used for and against us.”


Get your copy of the playbook, “A Midsize Organization's Guide to Reducing Breach Risks in 2025.”
