In late 2023, a large health system deployed a generative AI chatbot to assist clinical staff with documentation. Within weeks, workforce members were pasting entire patient histories — including names, dates of birth, and diagnosis codes — into the tool. No Business Associate Agreement existed with the AI vendor. No risk analysis had been conducted. The organization had, in effect, disclosed protected health information to an unauthorized third party at scale. This is the collision of HIPAA and AI that every covered entity needs to prepare for right now.

Why HIPAA and AI Create Unprecedented PHI Risk

Artificial intelligence tools — from ambient clinical documentation to predictive analytics platforms — are being adopted across healthcare at a pace regulators have never seen. The problem is not the technology itself. The problem is that most healthcare organizations are deploying AI without mapping it against existing HIPAA obligations.

When your workforce inputs protected health information into an AI system, that system becomes part of your PHI ecosystem. If the AI vendor receives, maintains, or transmits PHI on your behalf, it meets the regulatory definition of a business associate under 45 CFR § 160.103. No Business Associate Agreement means no permissible disclosure — full stop.

OCR has not issued AI-specific HIPAA regulations, but the agency has made clear through guidance and enforcement that existing rules apply to new technologies. The Privacy Rule, Security Rule, and Breach Notification Rule do not contain exceptions for artificial intelligence.

The Risk Analysis Gap Most Organizations Ignore

Under the HIPAA Security Rule (45 CFR § 164.308(a)(1)), your organization must conduct an accurate and thorough assessment of potential risks to the confidentiality, integrity, and availability of ePHI. Every AI tool that touches patient data must be included in this risk analysis.

In my work with covered entities, I consistently find that AI tools are adopted by individual departments — radiology, coding, nursing — without IT security or compliance teams even knowing. Shadow AI is the new shadow IT, and it carries far greater regulatory exposure.

Your risk analysis for AI must address:

  • Data flow mapping: Where does PHI go when entered into the AI tool? Is it stored, cached, or used for model training?
  • Encryption and transmission security: Does the tool meet the Security Rule's technical safeguard requirements under 45 CFR § 164.312?
  • Access controls: Who in your workforce can use the AI tool, and are unique user identifiers enforced?
  • Vendor transparency: Can the AI vendor confirm it does not retain or reuse PHI for purposes outside the BAA?

If you cannot answer these questions for every AI tool in your environment, your risk analysis is incomplete — and that is one of the most commonly cited HIPAA violations in OCR enforcement actions.
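
One practical way to close this gap is to maintain a structured risk register covering every AI tool in the environment, scored against the questions above. Below is a minimal sketch in Python; the tool name, department, and field set are hypothetical illustrations of one possible inventory format, not a prescribed OCR structure.

```python
from dataclasses import dataclass, field

@dataclass
class AIToolRiskEntry:
    """One row in a hypothetical AI tool risk register."""
    tool_name: str
    department: str                 # who adopted it (radiology, coding, ...)
    baa_in_place: bool              # signed Business Associate Agreement?
    phi_retained_by_vendor: bool    # does the vendor store or cache PHI?
    used_for_model_training: bool   # is PHI reused to train or tune models?
    encrypted_in_transit: bool      # transmission security, 45 CFR 164.312(e)
    unique_user_ids: bool           # access controls, 45 CFR 164.312(a)
    open_findings: list = field(default_factory=list)

    def flag_gaps(self) -> list:
        """Return the risk-analysis gaps that must be remediated."""
        gaps = []
        if not self.baa_in_place:
            gaps.append("No BAA: disclosure is impermissible")
        if self.used_for_model_training:
            gaps.append("PHI reused for model training outside a BAA")
        if not self.encrypted_in_transit:
            gaps.append("Transmission security safeguard missing")
        if not self.unique_user_ids:
            gaps.append("Unique user identification not enforced")
        return gaps

# Example: a shadow-AI tool adopted by a single department
entry = AIToolRiskEntry(
    tool_name="SummarizeBot",       # hypothetical vendor
    department="Radiology",
    baa_in_place=False,
    phi_retained_by_vendor=True,
    used_for_model_training=True,
    encrypted_in_transit=True,
    unique_user_ids=False,
)
for gap in entry.flag_gaps():
    print(gap)
```

Even a simple register like this forces the conversation the Security Rule requires: each flagged gap becomes a documented finding with an owner and a remediation date.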

The Minimum Necessary Standard Applies to AI Inputs

Healthcare organizations consistently struggle with the minimum necessary standard (45 CFR § 164.502(b)) even in traditional workflows. AI amplifies this problem dramatically. When a clinician pastes an entire medical record into an AI summarization tool, far more PHI is disclosed than the task requires.

Your policies must specifically address what types of data workforce members are permitted to input into AI systems. De-identification, where feasible, should be the default. If the AI tool only needs clinical notes to generate a summary, there is no reason to include the patient's Social Security number, address, or insurance ID.

Building these guardrails requires both technical controls and workforce education. Policy alone will not prevent a HIPAA violation if your staff does not understand why it matters.
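
As a technical control, some organizations place a redaction filter in front of approved AI tools so obvious identifiers never leave the environment. The sketch below is illustrative only: pattern-matching a handful of formats is NOT Safe Harbor de-identification, which requires removing all 18 identifier categories under 45 CFR § 164.514(b) or an expert determination. The patterns and sample note are hypothetical.

```python
import re

# Hypothetical pre-submission filter: strip a few high-risk identifier
# formats before text reaches an approved AI tool. A sketch, not a
# substitute for formal de-identification.
PATTERNS = {
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "DOB":   re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"),
    "MRN":   re.compile(r"\bMRN[:#]?\s*\d+\b", re.IGNORECASE),
}

def redact(text: str) -> str:
    """Replace matched identifiers with bracketed placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

note = "Pt DOB 04/12/1958, MRN: 773211, SSN 123-45-6789, improving on abx."
print(redact(note))
```

A filter like this pairs naturally with workforce training: the control catches routine mistakes, and training explains why the placeholders are there.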

Workforce Training Must Cover AI-Specific Scenarios

The HIPAA Privacy Rule at 45 CFR § 164.530(b) requires that all workforce members receive training on policies and procedures related to PHI. As AI tools enter clinical and administrative workflows, your training program must evolve to address them explicitly.

Generic annual HIPAA training that never mentions artificial intelligence is no longer sufficient. Your workforce needs to understand:

  • Which AI tools are approved for use with PHI and which are prohibited
  • How the minimum necessary standard applies when interacting with AI
  • What constitutes an impermissible disclosure through an AI platform
  • How to report suspected unauthorized AI use involving patient data
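
Training on approved versus prohibited tools is most effective when backed by enforcement. One approach, sketched below under hypothetical names, is an egress check that only allows PHI-bearing submissions to tools with a signed BAA on file; everything else is blocked and routed into the incident-reporting workflow your training describes.

```python
# Hypothetical egress check: only tools on an approved list (BAA signed,
# risk analysis complete) may receive text containing PHI. Hostnames
# are illustrative, not real products.
APPROVED_AI_TOOLS = {
    "ambient-scribe.internal",
    "coding-assist.internal",
}

def check_submission(tool_host: str, contains_phi: bool) -> str:
    """Return the disposition for a workforce member's AI submission."""
    if not contains_phi:
        return "allow"
    if tool_host in APPROVED_AI_TOOLS:
        return "allow"
    # Block and generate a report for the compliance team
    return "block-and-report"

print(check_submission("public-chatbot.example.com", contains_phi=True))
```

The "block-and-report" path matters as much as the block itself: it turns a near-miss into a documented event your compliance team can investigate.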

If your current training program does not cover these scenarios, explore our HIPAA Training & Certification program, which is updated to reflect the evolving intersection of HIPAA and AI in healthcare operations.

Business Associate Agreements Need AI-Specific Provisions

A standard BAA template from 2015 will not adequately cover the risks introduced by AI vendors. When an AI company processes PHI on behalf of your covered entity, the BAA must address issues unique to machine learning and large language models.

Specifically, your BAA should include provisions that:

  • Prohibit the vendor from using PHI to train, fine-tune, or improve AI models
  • Require the vendor to delete or return PHI after processing, with no residual data retention
  • Define breach notification obligations specific to AI-related incidents, such as model inversion attacks or prompt injection that exposes PHI
  • Require the vendor to undergo independent security assessments and share results

OCR's enforcement history demonstrates that the absence of a compliant BAA is treated as a standalone HIPAA violation, regardless of whether a breach actually occurs. In 2024, OCR continued resolving cases involving missing or deficient BAAs with penalties ranging from tens of thousands to millions of dollars.

Your Notice of Privacy Practices May Need Updating

If your organization uses AI tools to make treatment recommendations, process claims, or analyze patient data, consider whether your Notice of Privacy Practices accurately reflects these uses. Patients have a right to understand how their PHI is being used, and AI-driven processing is a material change that may require an updated notice under 45 CFR § 164.520.

Transparency is not just a regulatory obligation — it is a trust issue. Patients who discover their data was fed into an AI system without their knowledge will file complaints. OCR reviews every complaint it receives and investigates those that fall within its jurisdiction.

Take Action Before OCR Acts for You

The regulatory landscape around HIPAA and AI will only tighten. HHS has signaled increasing attention to AI governance in healthcare, and OCR enforcement is likely to follow. Waiting for formal AI-specific rules is not a compliance strategy — it is a liability.

Start by auditing every AI tool that touches PHI in your organization. Update your risk analysis. Revise your BAAs. And most critically, ensure your entire workforce understands the rules. Visit HIPAA Certify to build a workforce compliance foundation that accounts for the technology your teams are already using.

The organizations that act now will be the ones that avoid the enforcement actions, breach notifications, and reputational damage that are coming for those that don't.