In 2018, the University of Texas MD Anderson Cancer Center lost a $4.3 million appeal after OCR found that unencrypted laptops and thumb drives had exposed patient data. The kicker? MD Anderson had written encryption policies on the books for years. They just never enforced them. That gap — between what's documented and what's actually implemented — is where I see most organizations fail when it comes to technical safeguards under HIPAA.
If you're searching for clarity on what the HIPAA Security Rule actually demands from your technology stack, you're in the right place. I've spent years helping covered entities and business associates translate regulatory language into real-world IT configurations. Here's what OCR looks for, what triggers penalties, and what your team needs to do right now.
What Are Technical Safeguards Under HIPAA?
The HIPAA Security Rule organizes its requirements into three categories: administrative, physical, and technical safeguards. Technical safeguards are the technology-based protections you put in place to control access to electronic protected health information (ePHI) and keep it secure during storage and transmission.
HHS defines technical safeguards as "the technology and the policy and procedures for its use that protect electronic protected health information and control access to it." You can find the regulatory text at 45 CFR Part 164, Subpart C.
Section 164.312 actually lists five standards: access control, audit controls, integrity, person or entity authentication, and transmission security. Person or entity authentication (verifying that whoever requests access is who they claim to be) is typically satisfied by the same passwords, tokens, or biometrics that implement access control, so the four standards below are where I see organizations actually fail. Some of the implementation specifications under these standards are required; others are addressable. That distinction matters more than most compliance officers realize.
The Four Standards — And Where Organizations Actually Fail
1. Access Controls (§ 164.312(a))
This is the standard OCR investigates most aggressively. Access controls require that only authorized users can reach ePHI. The regulation lists four implementation specifications:
- Unique user identification (required): Every user must have a unique login. Shared credentials are a violation, full stop.
- Emergency access procedure (required): You need a documented, tested plan for accessing ePHI during an emergency — a system outage, a ransomware attack, a natural disaster.
- Automatic logoff (addressable): Workstations should lock after a period of inactivity. I've walked through clinics where an EHR stays open on an unlocked computer in a hallway. That's a finding every single time.
- Encryption and decryption (addressable): Addressable does not mean optional. It means you must implement it or document why an equivalent alternative is reasonable. In my experience, encryption is almost never unreasonable — and OCR agrees.
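The unique-user-ID requirement lends itself to automation. Here is a minimal sketch, in Python, of a scan for shared or generic accounts in a user-account export; the field names and the list of generic login names are my own illustrations, not drawn from any particular EHR or directory product.

```python
# Sketch: flag likely shared or generic accounts in a user-account export.
# The record structure ("login", "assigned_to") is hypothetical.

GENERIC_NAMES = {"admin", "administrator", "frontdesk", "nurse",
                 "reception", "shared", "test"}

def flag_shared_accounts(accounts):
    """Return (login, reason) pairs for accounts that look generic or shared."""
    flagged = []
    for acct in accounts:
        login = acct["login"].lower()
        if any(word in login for word in GENERIC_NAMES):
            flagged.append((acct["login"], "generic login name"))
        elif len(acct.get("assigned_to", [])) > 1:
            flagged.append((acct["login"], "mapped to multiple people"))
    return flagged

accounts = [
    {"login": "frontdesk", "assigned_to": ["A. Smith", "B. Jones", "C. Lee"]},
    {"login": "jdoe", "assigned_to": ["J. Doe"]},
]
print(flag_shared_accounts(accounts))  # flags only the shared front-desk login
```

Run something like this against every system on your inventory, not just the EHR; the generic admin account on the imaging server counts too.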
The MD Anderson case I mentioned earlier? Their failure to encrypt ePHI at rest was the primary basis for that $4.3 million penalty. (A federal appeals court later vacated the penalty in 2021 as arbitrary and capricious, but the stolen laptop, the lost thumb drives, and the years of unenforced encryption policy that triggered the case are the lesson that matters.)
2. Audit Controls (§ 164.312(b))
Your systems must record and examine activity in information systems that contain or use ePHI. This means logging who accessed what, when, and from where.
Here's what I see constantly: organizations turn on audit logs but never review them. That defeats the purpose. OCR doesn't just want logs to exist — they want evidence that someone is monitoring them. A log that nobody reads is just a hard drive filling up.
If your organization doesn't have a scheduled log review process — weekly or monthly, depending on your size and risk — you have a gap. Document the review, document who does it, and document what they found. Even "nothing unusual" is a finding worth recording.
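A review process like that can be small. The sketch below assumes a hypothetical list of audit-log entries with illustrative field names ("user", "record_id", "timestamp"; real vendor logs vary) and produces the kind of dated, attributed review record described above.

```python
# Sketch: a minimal scheduled audit-log review that records who reviewed
# what, when, and what they found. Field names are illustrative.
from datetime import datetime

def review_access_log(entries, reviewer, business_hours=(7, 19)):
    """Flag accesses outside business hours and return a dated review record."""
    start, end = business_hours
    findings = [
        e for e in entries
        if not (start <= datetime.fromisoformat(e["timestamp"]).hour < end)
    ]
    return {
        "reviewed_on": datetime.now().date().isoformat(),
        "reviewer": reviewer,
        "entries_reviewed": len(entries),
        # even "nothing unusual" is a finding worth recording
        "findings": findings or "nothing unusual",
    }

log = [
    {"user": "jdoe", "record_id": "MRN-1001", "timestamp": "2024-03-04T14:22:00"},
    {"user": "asmith", "record_id": "MRN-2002", "timestamp": "2024-03-04T02:47:00"},
]
print(review_access_log(log, reviewer="Compliance Officer"))
```

After-hours access is only one signal worth flagging; unusual volume and access to records outside a user's role are others you can layer in the same way.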
3. Integrity Controls (§ 164.312(c))
This standard protects ePHI from improper alteration or destruction. The implementation specification — a mechanism to authenticate ePHI — is addressable.
In practice, this means checksums, hash verification, version control, or digital signatures that confirm data hasn't been tampered with. If your EHR vendor handles this natively, great — but you still need to verify it's turned on and functioning. "Our vendor handles it" is not a compliance strategy. It's a hope.
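If you need a mechanism of your own, a hash baseline is the simplest version. This is a minimal sketch using SHA-256 from Python's standard library; the schedule for baselining and re-verifying is up to you, and the function names are my own.

```python
# Sketch: SHA-256 checksum verification to detect alteration of a stored file.
# Record the digest when the file is written, then re-verify on a schedule.
import hashlib

def sha256_of(path):
    """Hash a file in 64 KiB chunks so large files don't load into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_integrity(path, expected_digest):
    """True if the file's current hash matches the recorded baseline."""
    return sha256_of(path) == expected_digest
```

A single altered byte changes the digest, which is exactly the property you want: verification either passes cleanly or tells you the data is no longer what you stored.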
4. Transmission Security (§ 164.312(e))
Whenever ePHI moves across a network — email, file transfers, API calls, telehealth sessions — you must guard against unauthorized access. Two addressable specs live here:
- Integrity controls: Ensure ePHI isn't modified during transmission without detection.
- Encryption: Encrypt ePHI in transit. TLS 1.2 or higher is the current floor. If your organization still has systems using TLS 1.0, you have a critical vulnerability and a compliance problem.
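Checking the TLS floor is scriptable. The sketch below uses Python's standard ssl module; the helper names are my own, and the endpoints you point it at are whatever systems carry ePHI in your environment.

```python
# Sketch: verify a server negotiates TLS 1.2 or higher (Python 3.7+).
import socket
import ssl

FLOOR = ssl.TLSVersion.TLSv1_2

def negotiated_tls_version(host, port=443, timeout=5):
    """Connect and return the TLS protocol version the server negotiated."""
    ctx = ssl.create_default_context()
    ctx.minimum_version = FLOOR  # refuse anything below the compliance floor
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()  # e.g. "TLSv1.2" or "TLSv1.3"

def meets_floor(version_string):
    """True if a reported protocol version string is TLS 1.2 or higher."""
    order = ["TLSv1", "TLSv1.1", "TLSv1.2", "TLSv1.3"]
    return version_string in order and order.index(version_string) >= 2
```

Because the context sets minimum_version, a server stuck on TLS 1.0 simply fails the handshake, which is the behavior you want your production clients to have as well.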
The HHS Breach Notification Rule provides a safe harbor for encrypted data. If a laptop is stolen but the ePHI on it was encrypted to NIST standards, it's not a reportable breach. That single fact should make the business case for encryption obvious to any CFO.
"Addressable" Doesn't Mean What You Think It Means
This is the single biggest misconception I encounter. When a technical safeguard implementation specification is labeled "addressable," many organizations read that as "optional." It is not.
Addressable means you must do one of three things:
- Implement the specification as written.
- Implement an equivalent alternative measure that achieves the same protection.
- Document why neither is reasonable and accept the risk — with written justification.
In reality, option three almost never holds up under OCR scrutiny for encryption or automatic logoff. If you're a covered entity with a modern IT environment and you choose not to encrypt ePHI, you'd better have an extraordinary reason and airtight documentation. I've never seen one that satisfied an investigator.
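One way to keep that documentation honest is to make the decision itself a structured record. Here is a sketch of a simple Python structure for capturing each addressable specification; the field names and decision labels are my own, not regulatory terms.

```python
# Sketch: a record for each addressable implementation specification, forcing
# the implement / alternative / documented-exception choice to be explicit.
from dataclasses import dataclass, field
from datetime import date

VALID_DECISIONS = {"implemented", "alternative", "not_reasonable"}

@dataclass
class AddressableDecision:
    specification: str   # e.g. "Encryption and decryption, 164.312(a)(2)(iv)"
    decision: str        # one of VALID_DECISIONS
    justification: str   # mandatory unless the spec was implemented as written
    decided_on: date = field(default_factory=date.today)

    def __post_init__(self):
        if self.decision not in VALID_DECISIONS:
            raise ValueError(f"decision must be one of {VALID_DECISIONS}")
        if self.decision != "implemented" and not self.justification.strip():
            raise ValueError("a written justification is required")
```

The point is that "we decided not to" without a written justification is not a state the record can even represent, which mirrors what OCR expects to see on paper.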
The $2.3 Million Reminder from a Business Associate
In 2020, OCR settled with CHSPSC, LLC, a business associate providing IT and health information services to hospitals affiliated with Community Health Systems, for $2.3 million after hackers used compromised credentials to access the ePHI of more than 6 million individuals. Among the findings: lack of adequate access controls, failure to review information system activity, and failure to implement technical policies governing access to ePHI. The corrective action plan required two years of monitoring.
This wasn't a small rural clinic without resources. This was the shared-services arm of one of the largest hospital operators in the country. If it can happen to them, your organization isn't immune. The enforcement page at HHS.gov lists every resolution agreement publicly; I recommend bookmarking it.
How to Audit Your Technical Safeguards Right Now
I walk clients through a five-point check that maps directly to what OCR reviews during an investigation:
- Inventory every system that touches ePHI. You cannot protect what you don't know about. Cloud apps, mobile devices, medical devices with network access — all of it.
- Verify unique user IDs exist on every system. No shared logins. No generic admin accounts used by three people.
- Confirm encryption at rest and in transit. Document the standard (AES-256, TLS 1.2+) and verify configuration.
- Pull audit logs and prove someone reviews them. Show dates, reviewer names, and outcomes.
- Test emergency access procedures. Run a tabletop exercise at least annually. Document results.
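The five-point check above can even be tracked programmatically, so the gap report writes itself. A sketch, with check names and results purely illustrative:

```python
# Sketch: turn the five-point check into a simple gap report. In practice the
# True/False results would come from your asset inventory, IAM exports, and
# log review records, not hand-entered values.

FIVE_POINT_CHECK = [
    "Inventory of every system touching ePHI",
    "Unique user IDs on every system",
    "Encryption at rest and in transit verified",
    "Audit logs reviewed on a documented schedule",
    "Emergency access procedure tested in the last year",
]

def gap_report(results):
    """Given {check_name: bool}, list the checks that failed or are missing."""
    return [name for name in FIVE_POINT_CHECK if not results.get(name, False)]

results = {name: True for name in FIVE_POINT_CHECK}
results["Emergency access procedure tested in the last year"] = False
print(gap_report(results))
# → ['Emergency access procedure tested in the last year']
```

Note that an unanswered check counts as a gap; "we never looked" is treated the same as "we looked and it failed", which is how an investigator will treat it too.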
If any of these checks reveal a gap, fix it immediately and document the remediation. A gap that was found and fixed tells OCR a very different story than a gap that existed for years unnoticed.
Your Workforce Is Part of the Technical Equation
Technical safeguards don't operate in a vacuum. Your staff interacts with these systems every day. If they don't understand why automatic logoff exists, they'll find workarounds — sticky notes on screens, password-sharing, disabling timeout settings.
Workforce training that specifically covers technical safeguards is a requirement, not a nice-to-have. I've seen organizations invest heavily in firewalls and encryption while completely neglecting to teach staff how access controls actually work. That's like installing a vault door and taping the combination to the wall.
Our HIPAA training catalog includes courses that walk your team through exactly these scenarios — role-based, scenario-driven, and mapped to the Security Rule's requirements. If your current training program only covers privacy basics, you're leaving your technical safeguard investment exposed.
Quick-Reference: What Counts as a Technical Safeguard?
For those looking for a concise answer — and for anyone preparing for a risk assessment or OCR audit:
Technical safeguards under HIPAA are the technology and related policies that protect ePHI. Section 164.312 lists five standards: access control, audit controls, integrity, person or entity authentication, and transmission security. The implementation specifications under those standards are either required or addressable. Covered entities and business associates must implement them based on their risk analysis and document every decision.
Stop Treating Compliance Like a Checkbox
Every enforcement action I've reviewed shares a common thread: the organization treated technical safeguards as a one-time project instead of an ongoing program. They configured access controls during an EHR implementation in 2017 and never revisited them. They encrypted laptops but forgot about the file server in the closet.
HIPAA's Security Rule demands continuous evaluation. Section 164.306(e) explicitly requires covered entities to maintain security measures through ongoing review. Your risk environment changes every time you add a cloud service, onboard a vendor, or deploy a new device.
If you haven't reviewed your technical safeguards in the last twelve months, start today. Pull your last risk analysis, map it against the four standards above, and identify what's changed. Then make sure your workforce knows what's expected of them — our HIPAA compliance training programs are built for exactly this kind of operational reinforcement.
OCR doesn't fine organizations for having vulnerabilities. They fine organizations for not knowing about them — or worse, knowing and doing nothing.