The Checkbox That Lulls You to Sleep
You checked the box. "Encryption at rest: enabled." Your compliance officer is happy. Your auditor signs off. And somewhere, a threat actor is quietly reading your data because you confused encryption with protection.
This is the conversation I keep having — in post-incident reviews, in architecture reviews, in Slack threads at 11pm when someone realizes their "encrypted" S3 bucket just got exfiltrated. The marketing around encryption at rest has done serious damage to how practitioners actually think about data security. Let me unpick it.
What "Encrypted at Rest" Actually Protects Against
Let's be precise, because vagueness here costs people their jobs. Encryption at rest protects against one primary threat: physical media theft or unauthorized physical access. Someone walks out of an AWS data center with a spinning disk? Great, your data's useless to them. A decommissioned SSD ends up on eBay? No problem. That's the threat model this solves — and it's a real one, but it's also one that AWS, Azure, and GCP have largely solved at the infrastructure layer regardless of whether you enable anything.
What it does not protect against: a compromised application with database access, a misconfigured IAM role, a stolen API key, a rogue employee with legitimate access, or — and this is the one that grinds my gears — the cloud provider itself in certain configurations. More on that last one shortly.
The moment you conflate "data is encrypted" with "data is protected," you've already lost. Encryption is a mechanism. Protection is an outcome. The gap between those two things is where breaches live.
SSE-S3 vs SSE-KMS vs SSE-C: The Difference Actually Matters
AWS gives you three server-side encryption options for S3, and people treat them as interchangeable. They are not, and the distinction cuts directly to the question of who controls the keys.
SSE-S3 uses AES-256 with keys that AWS fully manages. AWS generates the keys, AWS rotates them, AWS controls the key material. If you get a valid National Security Letter served to AWS, or if AWS's own key management infrastructure is compromised, your data's confidentiality depends entirely on AWS's integrity and security posture. For most workloads this is fine. For workloads with genuine adversarial nation-state threat models, it is not.
SSE-KMS brings AWS Key Management Service into the picture. Now you have a Customer Master Key (CMK) — technically now called a KMS key in AWS's updated documentation — and you get audit trails through CloudTrail, the ability to set key policies, and the option to disable or delete the key (with a waiting period). This is meaningfully better. But here's what most people miss: unless you're using a customer-managed key as opposed to an AWS-managed key, you still don't control key rotation scheduling, and AWS still manages the underlying hardware. The key hierarchy matters here — KMS uses envelope encryption, where your data is encrypted with a data encryption key (DEK), and that DEK is itself encrypted with a key encryption key (KEK), the CMK. AWS KMS stores and protects the KEK. Your data's protection is only as good as your KMS key policy.
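The envelope model is easier to see in code. Here's a minimal, self-contained sketch of the KEK/DEK relationship. The cipher is a toy XOR stream built on HMAC-SHA256 purely for illustration — real KMS uses AES-256 inside a hardware boundary — and every name below is hypothetical:

```python
import hashlib
import hmac
import os

def xor_cipher(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """Toy CTR-style stream cipher (HMAC-SHA256 as PRF). Illustration only."""
    out = bytearray()
    for i in range(0, len(data), 32):
        pad = hmac.new(key, nonce + i.to_bytes(8, "big"), hashlib.sha256).digest()
        out.extend(b ^ p for b, p in zip(data[i:i + 32], pad))
    return bytes(out)

# The KEK: in real KMS this never leaves the service/HSM boundary.
kek = os.urandom(32)

# Encrypt: generate a fresh DEK per object, encrypt the data with it,
# then "wrap" (encrypt) the DEK under the KEK and persist the wrapped copy.
dek, data_nonce, dek_nonce = os.urandom(32), os.urandom(16), os.urandom(16)
plaintext = b"customer record"
ciphertext = xor_cipher(dek, data_nonce, plaintext)
wrapped_dek = xor_cipher(kek, dek_nonce, dek)
del dek  # only the wrapped copy is stored alongside the ciphertext

# Decrypt: unwrap the DEK with the KEK, then decrypt the data.
unwrapped = xor_cipher(kek, dek_nonce, wrapped_dek)
recovered = xor_cipher(unwrapped, data_nonce, ciphertext)
assert recovered == plaintext
```

The point of the structure: whoever holds the KEK can recover every DEK, and therefore every object. That's why the trust question always reduces to who controls the KEK.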
SSE-C is where you actually hold the keys. You provide the encryption key on every request; AWS uses it to encrypt/decrypt and does not store it. Lose the key, lose the data — permanently. This is the right answer for certain regulated workloads, but operationally it's a beast. Most teams aren't equipped to handle key lifecycle management at that level, and that's okay to acknowledge.
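To make the SSE-C contract concrete, here's a toy model of its semantics: the store keeps only ciphertext plus a digest of your key for request validation — real S3 validates an MD5 digest sent in request headers; this sketch uses SHA-256 — and an illustrative XOR stream cipher stands in for AES-256. All names are hypothetical:

```python
import hashlib
import hmac
import os

class SseCBucket:
    """Toy model of SSE-C semantics: the store holds ciphertext and a hash
    of the customer key for validation, but never the key itself."""
    def __init__(self):
        self._objects = {}

    @staticmethod
    def _xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
        # Toy stream cipher (HMAC-SHA256 as PRF). Illustration only.
        out = bytearray()
        for i in range(0, len(data), 32):
            pad = hmac.new(key, nonce + i.to_bytes(8, "big"), hashlib.sha256).digest()
            out.extend(b ^ p for b, p in zip(data[i:i + 32], pad))
        return bytes(out)

    def put(self, name: str, data: bytes, customer_key: bytes) -> None:
        nonce = os.urandom(16)
        self._objects[name] = (hashlib.sha256(customer_key).digest(),
                               nonce,
                               self._xor(customer_key, nonce, data))

    def get(self, name: str, customer_key: bytes) -> bytes:
        key_hash, nonce, ciphertext = self._objects[name]
        if hashlib.sha256(customer_key).digest() != key_hash:
            raise PermissionError("key does not match")  # S3 returns 403 here
        return self._xor(customer_key, nonce, ciphertext)

bucket = SseCBucket()
key = os.urandom(32)
bucket.put("report.csv", b"q3 numbers", customer_key=key)
assert bucket.get("report.csv", customer_key=key) == b"q3 numbers"
# Lose the key, lose the data: there is no server-side recovery path.
```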
The practical upshot: if your threat model includes the cloud provider, SSE-KMS with AWS-managed keys doesn't help you. Full stop.
The Cloud Provider Problem Nobody Wants to Say Out Loud
I'll say it: if you use default encryption options from any major cloud provider, you have not protected your data from that cloud provider. You've protected it from third parties. That's a meaningful distinction that gets buried in vendor marketing.
Azure Key Vault is a great service. I use it, I recommend it. But when you're using Azure-managed keys — the default in most Azure encryption scenarios — Microsoft's key management infrastructure has the cryptographic material needed to decrypt your data. That's the model. It's not a vulnerability, it's the architecture. And it's fine for most organizations. But calling it "your encryption" is a stretch.
The answer to this problem is BYOK (Bring Your Own Key) or, for the truly paranoid, HYOK (Hold Your Own Key). With BYOK, you generate the key material yourself — typically using an HSM (Hardware Security Module) — and import it into the cloud provider's key management service. The provider can still theoretically access the key material (it's in their infrastructure), but the key was generated in hardware you controlled, and in some configurations the cloud provider has contractual and technical limits on access. With HYOK, the key material never leaves your premises. Decryption operations require a call back to your on-premises HSM. Azure Information Protection supports this via the AD RMS connector model. It's operationally complex. Latency increases. But for highly sensitive classification tiers, it's the right call.
Hardware Security Modules deserve a paragraph on their own. An HSM — whether a physical appliance like a Thales Luna or a cloud-based one like AWS CloudHSM — provides a FIPS 140-2 Level 3 boundary around key material. Keys generated in an HSM can be flagged as non-exportable. The private key material never exists in plaintext outside the tamper-resistant hardware boundary. This is categorically different from software-based key management. CloudHSM in particular gives you single-tenant HSM clusters where AWS doesn't have access to your key material — a genuine architectural distinction from standard KMS.
Transparent Data Encryption: The Database Trap
I sat in an architecture review once where a team argued their database was "encrypted" because they'd enabled Transparent Data Encryption (TDE) in SQL Server. And technically they were right. But anyone with a valid SQL login could query the data in plaintext — because that's exactly how TDE works. It's transparent. To everyone, including attackers with a compromised service account.
TDE in SQL Server encrypts the data files, log files, and backups on disk. The Database Encryption Key (DEK) is protected by a certificate stored in the master database, which is protected up the hierarchy by the Service Master Key — itself secured at the Windows level via DPAPI under the service account's context. This is a reasonable protection for stolen backups or physical disk access. It is not protection against SQL injection, privilege escalation, or compromised credentials — because the database engine decrypts transparently as part of normal query execution.
PostgreSQL's situation is interesting. Native PostgreSQL doesn't have built-in TDE as of version 16 — there are patches and forks (like EDB's offering) that add it, and there's been long-running discussion in the community about adding it to core. What PostgreSQL does support through the pgcrypto extension is column-level encryption, where you explicitly encrypt and decrypt values using functions like pgp_sym_encrypt() and pgp_sym_decrypt(). This is meaningfully different — the encrypted value lives in the column, and decryption requires the key material explicitly. But it puts the burden on developers, and I've seen it implemented incorrectly often enough to be skeptical when teams claim they're using it.
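The column-level pattern — encrypt in the application, store ciphertext, decrypt only with explicit key material — can be sketched without pgcrypto at all. This toy uses sqlite3 and an illustrative XOR stream cipher in place of pgp_sym_encrypt(); the point is the shape of the flow, not the cryptography, and all names are hypothetical:

```python
import hashlib
import hmac
import os
import sqlite3

def xor_cipher(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """Toy stream cipher standing in for pgp_sym_encrypt/decrypt. Illustration only."""
    out = bytearray()
    for i in range(0, len(data), 32):
        pad = hmac.new(key, nonce + i.to_bytes(8, "big"), hashlib.sha256).digest()
        out.extend(b ^ p for b, p in zip(data[i:i + 32], pad))
    return bytes(out)

key = os.urandom(32)
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE patients (id INTEGER PRIMARY KEY, ssn BLOB, ssn_nonce BLOB)")

# Encrypt in the application before the value ever reaches the database.
nonce = os.urandom(16)
db.execute("INSERT INTO patients (ssn, ssn_nonce) VALUES (?, ?)",
           (xor_cipher(key, nonce, b"123-45-6789"), nonce))

# A plain SELECT returns ciphertext -- unlike TDE, a stolen login alone
# sees nothing useful. Decryption requires the key material explicitly.
ct, n = db.execute("SELECT ssn, ssn_nonce FROM patients").fetchone()
assert ct != b"123-45-6789"
assert xor_cipher(key, n, ct) == b"123-45-6789"
```

This is exactly the trade the article describes: the protection is real against compromised database credentials, but every call site now has to handle keys correctly — which is where implementations go wrong.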
The disk-level encryption story is similar. LUKS (Linux Unified Key Setup) with dm-crypt on Linux, BitLocker on Windows — these protect data on the physical volume. They're excellent solutions for laptops, portable media, and physical servers with accessible drives. But on a running cloud VM, the volume is mounted and decrypted. If your application is compromised while the system is running, LUKS does nothing to protect you in that moment. The decryption happens at boot. After that, you're operating on plaintext data just like any other system.
Key Rotation: The Lie You're Telling Yourself
Ask your team when the last key rotation happened. Go ahead, I'll wait.
Key rotation is one of those controls that sounds great in a policy document and is quietly abandoned in production. AWS KMS supports automatic annual rotation for symmetric customer-managed keys. Enabling it is a single toggle. And yet I've audited environments — mature environments, teams with dedicated security engineers — where CMKs haven't rotated in three years because someone was worried about "breaking something."
Here's what actually happens with envelope encryption during key rotation, because this trips people up: when KMS rotates a CMK, it does not re-encrypt your data. AWS keeps all previous key versions around and knows which version was used to encrypt each DEK. New data gets encrypted with the new key version; old data continues to be decryptable with the old versions. The CMK ID stays the same. This is the envelope encryption model working as intended — you're rotating the KEK, not re-encrypting every object. It's elegant and it means rotation carries essentially no operational risk for KMS-backed resources.
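A sketch of why rotation is cheap under envelope encryption: the store records which KEK version wrapped each DEK, so rotating adds a version rather than touching data. The cipher is a toy XOR stream for illustration and every name is hypothetical:

```python
import hashlib
import hmac
import os

def xor_cipher(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """Toy stream cipher (HMAC-SHA256 as PRF). Illustration only."""
    out = bytearray()
    for i in range(0, len(data), 32):
        pad = hmac.new(key, nonce + i.to_bytes(8, "big"), hashlib.sha256).digest()
        out.extend(b ^ p for b, p in zip(data[i:i + 32], pad))
    return bytes(out)

kek_versions = {1: os.urandom(32)}  # version -> key material
current = 1

def wrap(dek: bytes):
    """Wrap a DEK under the current KEK version, recording which version."""
    nonce = os.urandom(16)
    return (current, nonce, xor_cipher(kek_versions[current], nonce, dek))

def unwrap(record) -> bytes:
    """Unwrap using whichever KEK version the record was wrapped under."""
    version, nonce, wrapped = record
    return xor_cipher(kek_versions[version], nonce, wrapped)

# An object encrypted before rotation.
dek_old = os.urandom(32)
old_record = wrap(dek_old)

# Rotate: add a new KEK version. No stored data is re-encrypted.
kek_versions[2] = os.urandom(32)
current = 2

dek_new = os.urandom(32)
new_record = wrap(dek_new)

# Old data remains decryptable via its recorded version.
assert unwrap(old_record) == dek_old
assert unwrap(new_record) == dek_new
```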
The harder problem is key rotation in custom implementations. If you're managing your own key hierarchy — say, an application that derives per-tenant encryption keys from a master key — rotation requires a migration strategy. You need to decrypt with the old key and re-encrypt with the new one, which means downtime windows or carefully choreographed online migration logic. This is the unsexy work that doesn't show up in vendor documentation. It's also where most breaches related to long-lived keys actually originate.
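The custom-hierarchy migration looks roughly like this: each record carries a key identifier, and the migration decrypts with the old key and re-wraps with the new one, written idempotently so it can safely resume after a partial failure. Toy XOR cipher for illustration; all names hypothetical:

```python
import hashlib
import hmac
import os

def xor_cipher(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """Toy stream cipher (HMAC-SHA256 as PRF). Illustration only."""
    out = bytearray()
    for i in range(0, len(data), 32):
        pad = hmac.new(key, nonce + i.to_bytes(8, "big"), hashlib.sha256).digest()
        out.extend(b ^ p for b, p in zip(data[i:i + 32], pad))
    return bytes(out)

old_key, new_key = os.urandom(32), os.urandom(32)

# Rows as stored: [key_id, nonce, ciphertext]. Migration must touch every row.
rows = []
for secret in (b"alpha", b"bravo", b"charlie"):
    nonce = os.urandom(16)
    rows.append(["old", nonce, xor_cipher(old_key, nonce, secret)])

def migrate_row(row):
    key_id, nonce, ciphertext = row
    if key_id == "old":
        plaintext = xor_cipher(old_key, nonce, ciphertext)
        new_nonce = os.urandom(16)
        return ["new", new_nonce, xor_cipher(new_key, new_nonce, plaintext)]
    return row  # already migrated: re-running the migration is a no-op

rows = [migrate_row(r) for r in rows]
assert all(r[0] == "new" for r in rows)
assert xor_cipher(new_key, rows[0][1], rows[0][2]) == b"alpha"
```

The key-identifier column is what makes the migration resumable — without it you can't tell which rows still need the old key, and a crash mid-migration leaves you guessing.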
One More Thing About "Encrypted Backups"
I've seen this specific failure mode enough times that it deserves its own callout. A team enables encryption on their primary database. Great. They configure nightly backups. The backups go to a storage bucket. And the bucket is configured with SSE-S3 using Amazon S3-managed keys — not the same CMK protecting the primary data, because the backup tooling just defaulted to whatever the bucket configuration said.
Now the primary data and the backup data have different key management postures. If someone compromises the backup pipeline and exfiltrates the backup files, they're working with S3-managed SSE-S3 — a different protection model than what the security team thought they had. The encryption is real. The assumption about which keys protected what data was wrong.
Consistent key hierarchy across your primary data and backup chain is non-negotiable if you care about this. And your backup encryption keys need their own lifecycle management — you need to retain the key material for as long as you might need to restore from that backup. Delete a CMK without thinking through your backup retention period and you've just made your older backups permanently unrecoverable.
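The failure mode is easy to demonstrate: delete a KEK version that an old backup still depends on, and the restore path is simply gone. Toy XOR cipher for illustration; all names hypothetical:

```python
import hashlib
import hmac
import os

def xor_cipher(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """Toy stream cipher (HMAC-SHA256 as PRF). Illustration only."""
    out = bytearray()
    for i in range(0, len(data), 32):
        pad = hmac.new(key, nonce + i.to_bytes(8, "big"), hashlib.sha256).digest()
        out.extend(b ^ p for b, p in zip(data[i:i + 32], pad))
    return bytes(out)

kek_versions = {1: os.urandom(32)}

# A backup taken while KEK v1 was current: its DEK is wrapped under v1.
dek, nonce = os.urandom(32), os.urandom(16)
backup = {"kek_version": 1, "nonce": nonce,
          "wrapped_dek": xor_cipher(kek_versions[1], nonce, dek)}

# Rotation adds v2 -- but the old backup still references v1.
kek_versions[2] = os.urandom(32)

# Deleting v1 before the backup retention window closes is irreversible.
del kek_versions[1]
try:
    xor_cipher(kek_versions[backup["kek_version"]],
               backup["nonce"], backup["wrapped_dek"])
    restored = True
except KeyError:
    restored = False
assert not restored  # the backup is now permanently unrecoverable
```

This is why backup key lifecycle has to be driven by the retention schedule, not by whatever rotation policy covers the primary data.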
So What Does Good Actually Look Like
Not exhaustive, not a checklist — just the things that distinguish teams who actually understand this from teams who just checked the compliance box:
- Know your threat model before picking an encryption strategy. SSE-S3 is fine if your adversary is a burglar with a disk. It's not fine if your adversary has legal leverage over your cloud provider.
- Understand where your keys live and who can access them. Draw the trust boundary. If the entity you're worried about can access the key management infrastructure, your encryption provides no confidentiality guarantee against that entity.
- Treat key management as operational infrastructure, not a one-time configuration. Rotation schedules, access auditing through CloudTrail or Azure Monitor, least-privilege key policies, documented recovery procedures — these need to be running continuously, not set and forgotten.
Encryption at rest is a real and important control. But it's a narrow one, and the gap between what it protects and what people assume it protects is wide enough to drive a breach through. The teams that get this right aren't using fancier technology — they're asking sharper questions about exactly what threat they're defending against and exactly who holds the keys.
And that starts with not letting a green checkmark on a compliance dashboard tell you that you're safe.


