
HIPAA's New AI Requirements: What Healthcare Organizations Must Know

Dr. Amanda Foster
Healthcare Compliance Advisor
December 22, 2025 · 14 min read

After two decades of "reasonable and appropriate" flexibility, HIPAA is getting prescriptive. The Security Rule modernization expected in 2026 will fundamentally change how healthcare organizations approach AI security.

I've spent the last six months working with HHS on comment submissions and advising health systems on preparation. Here's what you need to know.

What's Changing

The current HIPAA Security Rule uses "addressable" implementation specifications—allowing organizations to implement alternative measures if they document why. The modernization replaces much of this flexibility with prescriptive requirements.

For AI specifically, expect mandatory requirements for:

Asset Inventories Including AI Tools

Organizations will need to maintain inventories of all systems that process PHI, explicitly including cloud services, SaaS applications, and AI tools. The days of "we didn't know employees were using that" are ending.

Multi-Factor Authentication

MFA will become mandatory (not addressable) for all systems accessing PHI. This includes AI tools—if your clinicians access ChatGPT with SSO that touches patient systems, MFA requirements apply.

Risk Analysis for AI

Risk assessments must explicitly address AI tools and their unique risks: data exposure to training, prompt injection vulnerabilities, and output accuracy concerns.

Audit Logging

Comprehensive logging requirements will extend to AI interactions. Every prompt, every response, every file upload—if PHI could be involved, logging is required.
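As a concrete sketch of what this could look like, here is a minimal audit-log wrapper for AI interactions. This is my own illustration, not regulatory text: the function name, fields, and hashing approach are assumptions. Hashing prompt and response content lets auditors match log records to separately retained copies without writing raw PHI into the log stream itself.

```python
import hashlib
import json
import logging
from datetime import datetime, timezone

audit_log = logging.getLogger("ai_audit")

def log_ai_interaction(user_id: str, tool: str, prompt: str, response: str) -> dict:
    """Record one AI interaction; avoid storing raw PHI in the log itself.

    Hypothetical sketch -- field names and hashing strategy are illustrative.
    """
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "tool": tool,
        # Content hashes let auditors correlate records with retained
        # copies without exposing PHI in the log stream.
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "response_sha256": hashlib.sha256(response.encode()).hexdigest(),
    }
    audit_log.info(json.dumps(entry))
    return entry
```

In practice this would sit in a proxy or gateway layer so it captures usage whether or not individual applications cooperate.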

The 35% Problem

Here's what keeps me up at night: research shows 35% of healthcare cyberattacks stem from third-party vendors, yet 40% of vendor contracts are signed without security assessments.

AI vendors are the newest addition to this third-party risk category. Most health systems I work with have no formal assessment process for AI tools, no BAA requirements, and no monitoring of AI vendor security postures.

Compliance Timeline

While final rules are pending, compliance periods typically range from 180 days to 2 years after publication. Given the scope of changes, expect the longer timeline—but preparation should start now.

Immediate Actions (Now - Q1 2026)

  • Inventory all AI tools in use across your organization
  • Identify which AI tools process or could process PHI
  • Review existing BAAs for AI-related provisions
  • Assess current logging capabilities for AI interactions
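Even a lightweight structured inventory makes the first three of these actions auditable. The sketch below is illustrative (the record fields and gap rules are my assumptions, not rule requirements): each tool gets a row, and a simple check flags the combinations that will be hardest to defend, namely PHI exposure without a BAA or without logging.

```python
from dataclasses import dataclass

@dataclass
class AIToolRecord:
    """One row in an AI asset inventory (hypothetical fields)."""
    name: str
    vendor: str
    processes_phi: bool
    baa_in_place: bool
    logging_enabled: bool

def compliance_gaps(inventory: list[AIToolRecord]) -> list[str]:
    """Flag tools that touch PHI but lack a BAA or audit logging."""
    gaps = []
    for tool in inventory:
        if tool.processes_phi and not tool.baa_in_place:
            gaps.append(f"{tool.name}: PHI without BAA")
        if tool.processes_phi and not tool.logging_enabled:
            gaps.append(f"{tool.name}: PHI without audit logging")
    return gaps
```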

Near-Term Actions (Q2-Q3 2026)

  • Implement technical controls for AI data protection
  • Update risk assessments to include AI-specific risks
  • Develop AI-specific policies and training
  • Establish AI vendor assessment processes

Compliance Preparation (Q4 2026 - 2027)

  • Validate controls against final rule requirements
  • Conduct mock audits of AI security controls
  • Document compliance evidence
  • Prepare for OCR audits

The Recognized Security Practices Factor

Here's something many organizations miss: the 2021 HIPAA Safe Harbor amendment (Public Law 116-321) requires OCR to consider whether "recognized security practices" were in place when determining penalties and audit scope. Organizations that can demonstrate they've implemented industry-standard AI security controls—even before explicit requirements—will fare better in enforcement actions.

This isn't just about compliance checkboxes. It's about demonstrating that AI security is embedded in daily workflows, technology decisions, and staff behavior.

Practical Recommendations

Based on my work with health systems preparing for these changes:

1. Start with Visibility

You can't secure what you can't see. Implement network monitoring to identify AI tool usage. Most organizations are shocked by what they find.
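One practical starting point is scanning existing proxy or DNS logs for traffic to known AI endpoints. The sketch below is a simplified illustration (the domain list is a small, non-exhaustive sample I chose, and real discovery would use your proxy vendor's reporting tools):

```python
# Hypothetical, non-exhaustive sample of AI service domains.
KNOWN_AI_DOMAINS = {
    "chat.openai.com",
    "api.openai.com",
    "claude.ai",
    "gemini.google.com",
}

def find_ai_usage(log_lines: list[str]) -> set[str]:
    """Return the known AI domains observed in a batch of proxy log lines."""
    seen = set()
    for line in log_lines:
        for domain in KNOWN_AI_DOMAINS:
            if domain in line:
                seen.add(domain)
    return seen
```

Running something like this against even a week of egress logs typically surfaces tools that never went through procurement.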

2. Implement Technical Controls

Policy alone won't satisfy the new requirements. AI security gateways that can detect and protect PHI in AI interactions will become table stakes for healthcare compliance.
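To make the gateway idea concrete, here is a toy PHI screen that could sit in front of outbound AI requests. The patterns are deliberately simplistic assumptions on my part—production gateways layer pattern matching with named-entity recognition and context scoring—but the shape of the control is the point: inspect before forwarding.

```python
import re

# Illustrative patterns only; real PHI detection is far broader.
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
}

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (blocked, matched_types) before forwarding to an AI tool."""
    hits = [name for name, pat in PHI_PATTERNS.items() if pat.search(prompt)]
    return (len(hits) > 0, hits)
```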

3. Update Your BAA Template

Your standard BAA likely doesn't address AI. Add provisions for: AI training data usage, data retention, subprocessor AI usage, and incident notification for AI-specific risks.

4. Train Clinical Staff

Clinicians need to understand that pasting patient information into ChatGPT creates HIPAA liability. Training should be practical, scenario-based, and regularly updated.

The healthcare organizations that start preparing now will be ready when the final rule drops. Those that wait will be scrambling—and potentially facing enhanced scrutiny from an OCR that's increasingly focused on AI risks.

Dr. Amanda Foster
Healthcare Compliance Advisor

Dr. Foster advises healthcare organizations on HIPAA, FDA, and emerging AI regulations. She previously served as Chief Privacy Officer at a major health system and holds a PhD in Health Informatics.

HIPAA · Healthcare IT · FDA Compliance


This article reflects research and analysis by the ZeroShare editorial team. Statistics and regulatory information are sourced from publicly available reports and should be verified for your specific use case. For details about our content and editorial practices, see our Terms of Service.
