Passing Your Audit Doesn't Mean You're Secure — It Means You're Compliant

I've been in security long enough to watch the same theater play out dozens of times. A company spends three months in a frenzy of policy writing, evidence collection, and frantic remediation. They pass their SOC 2 Type II. The CISO sends a company-wide email. The sales team adds a badge to the website. And then, about six weeks later, I'm sitting in a kickoff call for a penetration test and I find a publicly exposed S3 bucket with customer data in the first twenty minutes. Compliant. Absolutely, demonstrably not secure.

This isn't a knock on auditors. Most of the ones I've worked with are competent, thorough, and doing exactly what they're paid to do — which is verify that you meet a defined set of criteria at a point in time. That's the product. That's what the certification says. The mistake organizations make, over and over, is conflating that certificate with a security posture.

SOC 2 is probably the most misunderstood framework in the industry right now. Every SaaS company has one, or is working on one, because enterprise buyers demand it. The Trust Services Criteria that underpin SOC 2 are actually pretty reasonable — they cover logical access, change management, monitoring, risk assessment. But here's the thing: the criteria describe what you should have controls around, not how effective those controls actually are. You can have a password policy that says minimum 12 characters, MFA required, and document that you enforce it — and your SOC 2 auditor will check a sample of five user accounts to verify MFA is enabled. Five. Out of potentially thousands. If those five are compliant, you pass.

That's not a flaw in the audit process; that's how audit methodology works. You cannot exhaustively test every control instance across a population of thousands of systems. Sampling is necessary and reasonable for the purpose of attestation. But you absolutely cannot look at a SOC 2 report and conclude that your entire user population has MFA enabled, or that every privileged action is logged, or that all third-party integrations have been security reviewed. The report tells you that a qualified auditor checked a sample and it looked fine. Security engineering requires knowing what the full population looks like.
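To make the sampling gap concrete, here's a back-of-the-envelope calculation. The population size and non-compliance rate are invented for illustration; plug in your own numbers.

```python
# Probability that a sample of 5 accounts all pass an MFA check,
# given a population where some accounts are non-compliant.
# All numbers are illustrative, not from any real audit.

def p_sample_all_pass(population, non_compliant, sample_size):
    """Hypergeometric draw without replacement: probability that
    none of the sampled accounts is non-compliant."""
    compliant = population - non_compliant
    p = 1.0
    for i in range(sample_size):
        p *= (compliant - i) / (population - i)
    return p

# 2,000 accounts, 60 of them (3%) missing MFA:
print(round(p_sample_all_pass(2000, 60, 5), 3))  # → 0.859
```

In other words, with 3% of accounts non-compliant, a five-account sample comes back clean roughly 86% of the time. The audit passes, and sixty accounts still don't have MFA.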

The PCI DSS Compensating Control Racket

PCI DSS deserves its own conversation because the compensating control mechanism has been systematically abused for years. The intent of compensating controls is legitimate — if you can't meet a specific requirement as written due to a documented technical or business constraint, you can propose an alternative control that achieves equivalent security. That's sensible. The reality is that some QSAs will accept compensating controls that are, charitably, creative fiction.

I worked with a retailer once who couldn't patch a critical POS system because the vendor had gone out of business and the replacement project kept getting delayed. Legitimate constraint. Their compensating control? A firewall rule restricting outbound traffic from the POS terminals to a whitelist of IPs. Except the whitelist included their entire internal network range because the application was poorly documented and nobody knew what it actually talked to. That compensating control passed. The system was running an unpatched Windows 7 installation with SMB exposed to the local network. EternalBlue would have had a field day.
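A check like the one that should have caught this is a few lines of code. This is a hypothetical reconstruction using Python's standard `ipaddress` module; the addresses and the `rule_is_meaningful` helper are illustrative, not from the actual engagement.

```python
import ipaddress

# Hypothetical allow-list from a compensating control write-up.
# If any allowed network swallows the whole protected range,
# the "restriction" restricts nothing.
allowed = [ipaddress.ip_network(c) for c in [
    "203.0.113.10/32",   # payment processor (example address)
    "10.0.0.0/8",        # "temporary" entry: the entire internal network
]]

internal = ipaddress.ip_network("10.0.0.0/8")

def rule_is_meaningful(allowed_nets, protected_net):
    """False if a single allow entry covers the whole protected range."""
    return not any(protected_net.subnet_of(net) for net in allowed_nets)

print(rule_is_meaningful(allowed, internal))  # → False: the rule is theater
```

Nobody ran anything like this, because the compensating control review was a document review, not a technical one.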

The QSA relationship matters more than it should. I don't want to paint all QSAs with the same brush — there are excellent ones who push back hard and genuinely challenge organizations. But the incentive structure is misaligned. QSAs are hired by the organization being assessed. There's ongoing business to protect. A QSA who fails too many clients or demands remediation that causes delays gets replaced. This isn't unique to PCI — it applies to any third-party assurance model where the assessed party controls the engagement. You should be skeptical of any audit where you're paying the auditor and you have a strong financial interest in passing.

The Audit-Ready Sprint and Why It Means Nothing

Ask anyone who's been through an annual audit preparation cycle what the six weeks before the audit look like. You'll hear about late nights, emergency policy updates, frantic access reviews where people are removed from systems they haven't logged into in two years, evidence collection spreadsheets being assembled from screenshots and exported CSVs, and the inevitable discovery of something embarrassing that gets quietly remediated before the auditor arrives. Then the audit happens. The auditor leaves. And within a month, the access that was cleaned up starts accumulating again, the policy exceptions that were denied get re-approved, and the monitoring that was briefly tuned gets left in place but no one looks at the alerts.

This is compliance fatigue in its most concrete form. Organizations treat the audit as the objective rather than treating security as the objective and the audit as a byproduct. When you're operating well, audit prep should be boring — you're already collecting evidence continuously, your controls are already enforced, and the auditor is essentially validating something you already know. When audit prep feels like a crisis, that's a diagnostic signal that you've been treating compliance as a check-box and not as an operational discipline.
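What "already enforced" looks like in practice is a check that runs every week instead of once during audit prep. Here's a minimal sketch of a continuous access review, with made-up account records; in a real deployment you'd pull these from your identity provider's API, and the 90-day threshold is an assumed policy value.

```python
from datetime import datetime, timedelta, timezone

# Minimal sketch of a continuously-run access review check:
# flag accounts whose last login is older than the review threshold,
# so stale access surfaces weekly instead of during audit week.
STALE_AFTER = timedelta(days=90)  # assumed policy value

def stale_accounts(accounts, now=None):
    now = now or datetime.now(timezone.utc)
    return [a["user"] for a in accounts
            if now - a["last_login"] > STALE_AFTER]

# Illustrative records; a real check pulls these from your IdP.
accounts = [
    {"user": "alice", "last_login": datetime(2025, 6, 1, tzinfo=timezone.utc)},
    {"user": "bob",   "last_login": datetime(2023, 1, 15, tzinfo=timezone.utc)},
]
print(stale_accounts(accounts, now=datetime(2025, 7, 1, tzinfo=timezone.utc)))
# → ['bob']
```

Run weekly and ticketed automatically, this turns the pre-audit access-review crisis into a non-event.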

The ISO 27001 certification mill problem is a specific flavor of this. There are consultancies — I won't name them, but you know who they are — that specialize in getting organizations certified as efficiently as possible. They have templated ISMS documentation, pre-built policy libraries, and a practiced approach to getting through the certification audit with minimal actual implementation. The documents look great. The ISMS scope is usually drawn narrowly to exclude the complicated parts. The Statement of Applicability has a lot of controls marked as not applicable. Then the company gets their ISO 27001 certificate and their customers feel better, and nothing meaningful has changed in how they actually manage security risk.

Compliance Fatigue Is Real and It Has Consequences

If you're a mid-size SaaS company selling to enterprise healthcare customers, you're probably dealing with SOC 2, HIPAA, ISO 27001, and potentially PCI if you touch payments. These frameworks have overlapping but not identical requirements. Each one has its own evidence format preferences, its own terminology, its own audit cadence. The security team — which might be two or three people — spends a significant portion of its year managing compliance artifacts rather than doing security work. And I mean actual security work: threat modeling, vulnerability management, incident response practice, code review, architecture review.

The irony is that compliance fatigue makes organizations less secure, not more. When your team is buried in evidence collection for three different annual audits, they're not building detection capabilities. They're not reviewing cloud configurations. They're not having the hard conversation with the product team about why authentication needs to be redesigned. Compliance becomes the job, and security becomes the thing they'll get back to eventually.

The gap between auditor sample testing and full coverage is where real attackers live. A SOC 2 auditor might sample your vulnerability scanning results and verify that you're running scans and remediating critical findings within your defined SLA. They're not going to verify that your scan profile is configured to actually find everything, or that your authenticated scan credentials are working, or that entire network segments haven't been quietly excluded from the scan scope. An attacker doesn't need to find a gap in your sample; they need to find a gap in your actual environment. Those are very different problems.
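The scope-exclusion problem in particular is mechanically checkable: diff the network ranges in your asset inventory against what the scanner is actually configured to hit. A sketch, with all CIDRs invented for illustration:

```python
import ipaddress

# Sketch: find inventory ranges that no scan-scope entry covers.
# An auditor samples scan results; an attacker looks for segments
# that were never in scope at all. All CIDRs are illustrative.
inventory = ["10.0.0.0/16", "10.1.0.0/16", "10.2.0.0/24"]
scan_scope = ["10.0.0.0/16", "10.2.0.0/24"]

def unscanned_segments(inventory_cidrs, scope_cidrs):
    scope = [ipaddress.ip_network(c) for c in scope_cidrs]
    return [c for c in inventory_cidrs
            if not any(ipaddress.ip_network(c).subnet_of(s) for s in scope)]

print(unscanned_segments(inventory, scan_scope))  # → ['10.1.0.0/16']
```

The hard part isn't the diff; it's having an asset inventory accurate enough to diff against, which no audit framework will verify for you either.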

Continuous Compliance vs. The Annual Scramble

The tools that have emerged around continuous compliance — Drata, Vanta, Secureframe, and similar platforms — represent a genuinely useful shift in thinking even if their marketing sometimes oversells what they deliver. The core value proposition is evidence collection automation: these platforms integrate with your AWS account, your GitHub organization, your Okta tenant, and pull evidence continuously rather than requiring a manual scramble before audit time. That's legitimate value. I've seen teams cut their audit prep time significantly just by having access controls and configuration checks running continuously instead of being assembled from screenshots two weeks before the auditor arrives.

But here's where I'll push back on the continuous compliance narrative: automating evidence collection is not the same as having a strong security posture. Drata can tell you that MFA is enabled for the accounts it can see via your identity provider. It can't tell you that your developers have hardcoded AWS credentials in their local environment. It can't tell you that your application has a SQL injection vulnerability. It can't tell you that your incident response plan has never been tested. The evidence automation is valuable operational tooling, but it's answering a different question than "are we secure."
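The hardcoded-credentials example is worth making concrete, because it's a control that lives entirely outside what an identity-provider integration can see. Here's a toy scan for strings shaped like AWS access key IDs, using the well-known "AKIA" prefix format; a real secret scanner (gitleaks, trufflehog, and similar tools) does far more, and the sample string below is AWS's own documentation example key.

```python
import re

# Toy check no evidence-collection platform performs: flag strings
# shaped like AWS access key IDs ("AKIA" + 16 uppercase letters/digits).
# Illustrative only; use a real secret scanner in practice.
KEY_ID = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

def find_key_ids(text):
    return KEY_ID.findall(text)

sample = 'aws_access_key_id = "AKIAIOSFODNN7EXAMPLE"  # oops'
print(find_key_ids(sample))  # → ['AKIAIOSFODNN7EXAMPLE']
```

The point isn't this particular regex. It's that the question "do developers have credentials on disk" is answered by looking at disks and repositories, not at your Okta tenant.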

What continuous compliance actually enables, if you use it well, is freeing up your team's time so they can do actual security work. If you're not spending three months a year on evidence collection, you have three months a year to do architecture reviews and red team exercises and threat modeling. The technology is a means to an end, not the end itself.

The QSA Who's Never Done a Pentest

There's a particular breed of compliance professional who has deep knowledge of framework requirements and essentially zero background in offensive security or actual technical exploitation. I've been in conversations with QSAs who couldn't tell you what an SSRF vulnerability is, who don't know what Mimikatz does, who've never sat behind a Burp Suite proxy and watched HTTP traffic. And those same people are signing off on the security posture of organizations that process millions of credit card transactions.

This isn't about gatekeeping credentials — it's about recognizing that compliance knowledge and security knowledge are different skill sets that have significant overlap in their subject matter but require different intuitions. A good auditor knows what the framework requires. A good security engineer knows what an attacker would do. When those perspectives aren't in the same conversation, you get controls that look correct on paper and fail under any real adversarial pressure. The compensating control that's technically compliant but easily bypassed. The access review that checks whether access was reviewed but not whether the access level was appropriate. The logging requirement that verifies logs exist but not whether the logs contain the events that would actually matter during an incident.

Regulatory objectives and security objectives are aligned on the surface and diverge in the specifics. Regulations are written to be legible to a broad population — they have to be prescriptive enough to be enforceable and general enough to apply across diverse organizations. Security engineering is about understanding your specific threat model, your specific architecture, your specific data flows, and making targeted decisions about where to invest defensive effort. A regulation can tell you to encrypt data at rest. It can't tell you that your specific threat is a malicious insider with legitimate database access, and that encryption at rest doesn't help you there at all. That second-order thinking is where actual security lives, and no audit report will ever capture it.

What Good Actually Looks Like

The organizations I've seen do this well treat compliance as a floor, not a ceiling. They use the frameworks to establish baseline hygiene and use the audit process to verify that baseline is maintained. Then they build a security program on top of that — threat modeling, red team exercises, active detection engineering, incident response rehearsals, architectural security reviews that happen before code ships rather than after. Their audit prep is boring because the controls are actually running. Their audit reports don't tell the whole security story because they understand that's not what audit reports are for.

They also tend to have security people who've read the frameworks critically and know exactly where the gaps are. They know that SOC 2 doesn't require penetration testing. They know that PCI DSS requirement 11.3 requires penetration testing but the scope and methodology details leave significant room for variation in quality. They know that HIPAA's Security Rule is almost entirely flexible — "reasonable and appropriate" safeguards — and that passing a HIPAA assessment tells you almost nothing about actual PHI protection.

If you're using compliance frameworks as your primary measure of security maturity, you're using the wrong instrument. Use them for what they're designed for: establishing a documented baseline, satisfying contractual requirements, and communicating to customers that you've met a defined set of criteria. Then build a security program that's actually calibrated to your threat model. The certificate on the website and the security posture of your systems are different things, and confusing them is how you end up with a SOC 2 badge and an exposed S3 bucket in the same week.

Tags: compliance, SOC2, PCI-DSS, ISO27001, HIPAA, audit, continuous-compliance, security-assessment, GRC, risk-management
