Let's Be Honest About What That 47-Page Acceptable Use Policy Actually Does
You know the one. It lives in SharePoint, last reviewed eighteen months ago, approved by a committee that hasn't met since the pandemic reshuffled the org chart. It covers everything from personal email use to cryptocurrency mining on company hardware. Nobody reads it. The one person who might have read it left in 2023. You know this because you're the one who gets the tickets when something goes sideways, and not once — not once — has a user said "I checked the AUP first."
This is governance theater. And if you've spent any real time in security, you've either produced it, enforced it, or been forced to sit through a readout of it in a QBR while a slide deck explained that your "policy compliance rate is 94%," which means nothing because the metric is checkbox completion, not behavior change.
The frustrating part isn't that policies exist. Policies should exist. The frustrating part is the gap — sometimes a chasm — between the document and the actual control environment. ISO 27001 Annex A gives you 93 controls across four themes. How many organizations have mapped those controls to technical implementations with evidence that isn't a self-attestation spreadsheet? Walk into any mid-market company doing their first ISO 27001 audit and you'll find the same thing: a policy binder (or SharePoint equivalent) that was assembled by a consultant six months ago, and a control environment that was assembled by an ops team three years before that, and the two have never spoken.
The Annual Review Ritual, or: How to Change Nothing Formally
Here's how it goes. Q4 hits. Someone in GRC opens a Jira ticket, or a ServiceNow task, or — I've seen this — a recurring Google Calendar invite titled "Policy Review Season." Each policy owner gets assigned their document. They open it. They read the first paragraph, maybe. They change the review date. They bump the version from 3.1 to 3.2. They send it back. The CISO signs off during a fifteen-minute meeting that also covered three other agenda items. Policy compliance: maintained. Actual security posture: unchanged.
I watched this happen at a financial services firm that had been through two PCI DSS QSAs in the same calendar year due to a merger. Their access control policy said least privilege. Their AD environment had 340 accounts in Domain Admins. Those two facts existed in parallel universes. The policy review process had no mechanism to surface that gap because the review was about the document, not the control. The QSA flagged it. The remediation took eight months. The policy? That got updated in about four minutes.
NIST CSF 2.0 actually tries to address this structurally. The new Govern function — added in the February 2024 release — explicitly pulls organizational context, risk management strategy, and supply chain risk management into a dedicated function rather than letting them scatter across Identify. The intent is to make governance a first-class concern, not just a wrapper around the technical functions. But you can implement CSF 2.0 Govern just as theatrically as anything else. The function doesn't enforce rigor. It creates a framework for rigor. Whether your organization uses it to build something real depends on whether anyone in a position of authority actually cares about the outcome versus the audit artifact.
Policy as Code Is the Only Honest Answer to the Gap Problem
If you want to close the distance between what your policy says and what your environment does, you have two real options. First, you accept the gap and manage risk manually — which is legitimate if you're honest about it and your compensating controls are actually compensating. Second, you encode the policy into something that runs continuously against the actual environment. That second option is what policy-as-code means in practice.
Open Policy Agent with Rego is the most common implementation you'll see in cloud-native environments. You write your access control rules, your resource tagging requirements, your network policy constraints — all of it — as machine-readable policy that gets evaluated at admission control or in your CI/CD pipeline. The policy isn't a Word document. It's a .rego file that lives in version control, gets reviewed in pull requests, and fails builds when something violates it. That's a policy review process with teeth.
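To make "fails builds" concrete, here is the shape of the check such a policy performs, sketched in Python for illustration rather than actual Rego. The manifest fields follow Kubernetes conventions, but the specific rule (a required owner label) and the function names are hypothetical.

```python
# Toy illustration of the logic a .rego admission policy encodes:
# reject any resource manifest that lacks a required "owner" label.
# In production this rule would live in a .rego file, get reviewed in
# pull requests, and be evaluated by OPA at admission control or in CI.

def violations(manifest: dict) -> list[str]:
    """Return a list of policy violations for one manifest."""
    msgs = []
    labels = manifest.get("metadata", {}).get("labels", {})
    if "owner" not in labels:
        name = manifest.get("metadata", {}).get("name", "<unnamed>")
        kind = manifest.get("kind", "resource")
        msgs.append(f"{kind}/{name}: missing required label 'owner'")
    return msgs

good = {"kind": "Deployment",
        "metadata": {"name": "api", "labels": {"owner": "platform-team"}}}
bad = {"kind": "Deployment", "metadata": {"name": "api"}}

assert violations(good) == []
assert violations(bad)  # non-empty result -> CI job fails, build is blocked
```

The point is the failure mode: a violation isn't a finding in a quarterly report, it's a red build that someone has to deal with today.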
HashiCorp Sentinel does the same thing for Terraform workflows. If your policy says "no S3 buckets without encryption at rest and versioning enabled," Sentinel enforces that before the infrastructure exists, not after some quarterly scan finds the violation. Contrast that with the traditional approach: policy document says encryption required, engineer deploys unencrypted bucket, scanner finds it in 72 hours (if you're lucky), ticket gets created, ticket sits in a queue, remediation happens eventually, and your "policy compliance rate" was 100% the whole time because nobody was measuring the right thing.
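The same logic can be sketched against the JSON form of a Terraform plan (the output of `terraform show -json`). This is an illustrative Python stand-in, not actual Sentinel; the keys mirror Terraform's plan format, but treat the exact structure as an assumption and the helper function as hypothetical.

```python
# Toy version of the Sentinel rule described above, run against the
# JSON representation of a Terraform plan. A production control would
# be a Sentinel policy (or OPA/conftest) gating `terraform apply`.

def s3_violations(plan: dict) -> list[str]:
    """Flag S3 buckets in the plan missing encryption or versioning."""
    msgs = []
    for rc in plan.get("resource_changes", []):
        if rc.get("type") != "aws_s3_bucket":
            continue
        after = rc.get("change", {}).get("after") or {}
        if not after.get("server_side_encryption_configuration"):
            msgs.append(f"{rc['address']}: encryption at rest not configured")
        versioning = after.get("versioning") or [{}]
        if not versioning[0].get("enabled"):
            msgs.append(f"{rc['address']}: versioning not enabled")
    return msgs

plan = {"resource_changes": [{
    "address": "aws_s3_bucket.logs",
    "type": "aws_s3_bucket",
    "change": {"after": {"bucket": "corp-logs"}},  # no encryption, no versioning
}]}

assert len(s3_violations(plan)) == 2  # both controls fail before apply
```

Both violations surface before the bucket exists, which is the whole argument: the control fires at plan time, not 72 hours into the scanner's discovery window.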
The objection I always hear is that policy-as-code doesn't cover everything. And that's true. It doesn't cover your physical security policy or your incident response policy or your vendor management policy. But it covers the technical controls that most policy documents are actually trying to enforce, and it covers them continuously rather than annually. The gap between the policy and the control? In a well-implemented OPA deployment, that gap is literally a test failure. That's progress.
GRC Platforms and the Illusion of Maturity
Let's talk about the tooling layer. ServiceNow GRC and RSA Archer are the enterprise incumbents. Drata and Vanta are the modern compliance automation plays. They all do roughly the same thing at different price points and UX quality levels: they aggregate evidence, map controls to frameworks, and give you a dashboard that tells you where you stand against SOC 2, ISO 27001, PCI DSS, whatever your auditor needs this year.
The problem isn't the platforms. The problem is the assumption that buying one of them is a security program. I've seen organizations drop six figures on a Drata implementation and use it primarily to generate evidence collection reminders that get ignored by the same people who were ignoring the SharePoint folder. The automation features in these platforms are genuinely useful — pulling Jira tickets as evidence of change management, syncing AWS Config findings, ingesting Okta logs for access reviews. But they surface the data. They don't fix the underlying thing.
What I'll say for the newer players like Drata and Vanta: they've made continuous control monitoring accessible to companies that couldn't justify an Archer implementation. A 200-person SaaS company that's trying to close enterprise deals and needs SOC 2 Type II actually has a path now that doesn't require a dedicated GRC team. That's real. But the governance still has to happen. The tool is a lens, not a substitute for judgment.
The spreadsheet governance argument — "we don't need a platform, we have Excel" — is usually made by people who haven't tried to maintain cross-framework control mappings across multiple audit cycles in a shared workbook. Once you've watched three people simultaneously edit a compliance tracker and introduce conflicting data, you understand why the platforms exist. But spreadsheet governance at small scale isn't categorically wrong. It's just fragile. Know what you're trading.
PCI DSS v4.0 Made the Customized Approach Actually Interesting
PCI DSS v4.0 — released March 2022, with v3.2.1 retired in March 2024 and the future-dated requirements becoming mandatory in March 2025 — introduced the customized approach as a formal alternative to the defined approach for most requirements. This is a bigger deal than it sounds. Under the defined approach, you implement the specific control the standard mandates. Under the customized approach, you define your own control that achieves the stated security objective, document the methodology, and get your QSA to validate it.
On paper, this is the PCI SSC acknowledging that the defined approach sometimes produces compliance theater — you check the box, you don't achieve the objective. A requirement that mandates a specific technical implementation can become outdated while the security objective it was designed to meet remains entirely valid. The customized approach says: prove you're meeting the objective, not just the prescribed method.
In practice, this shifts significant burden onto the organization and the QSA. Your controls documentation has to be much more rigorous. Your targeted risk analysis — now required for several requirements in v4.0 — has to be substantive and defensible, not a template-filled formality. This is governance that actually costs something. Which is probably why adoption of the customized approach has been limited. Most organizations doing PCI DSS are trying to minimize QSA interaction time, not invite deeper scrutiny of their control rationale. The defined approach is predictable. Predictable is comfortable, even when it produces theater.
Security Awareness Training: The Click-Through That Satisfies Nobody
Annual security awareness training completion is probably the most-reported, least-meaningful metric in corporate security. Your board slide says "96% completion." What it means is that 96% of employees clicked through a fifteen-module course in thirty-two minutes while simultaneously on a call, and correctly answered enough multiple-choice questions to generate a certificate. The 4% who didn't complete it got a nastygram from HR. Behavior changed: negligible.
The research on this is pretty clear: knowledge transfer from awareness training doesn't reliably translate to behavioral change without reinforcement, context-specific delivery, and consequence. A phishing simulation that debriefs the user at the moment of click — when they're in the context of the mistake they almost made — is worth more than twelve modules about password hygiene. But the modules are trackable. Completion is reportable. The board metric is satisfiable. So that's what gets funded.
RACI matrices share this problem. You build one because the auditor wants to see defined ownership. You assign roles because the framework says to. But who actually escalates an incident at 2am? Who makes the call on whether to notify regulators? In practice, it's whoever picks up the phone, and that person might not be anywhere on the RACI chart. The RACI exists. The accountability it's supposed to enforce often doesn't.
What Board Reporting Should Actually Say
Here's what matters to a board that's trying to do its job on cybersecurity governance: residual risk exposure in terms the business understands, trend direction on the things that actually move, and honest assessment of whether the security program is resourced to handle the threat environment you're actually in. Not policy compliance percentages. Not training completion. Not the number of vulnerabilities closed last quarter without context on what was opened.
The metrics that correlate to real security outcomes — mean time to detect, mean time to respond, coverage of critical asset inventory, percentage of critical systems with tested incident response playbooks, identity hygiene metrics like orphaned accounts and stale privileged access — these are harder to generate and harder to explain, which is why they show up less often. It's easier to report what you can measure than to measure what matters and then figure out how to communicate it.
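One of those identity hygiene metrics is simple enough to sketch. The data shape below is hypothetical — in practice these rows come from an IdP export joined against the HR system of record — but it shows how "orphaned privileged accounts" becomes a number you can actually trend.

```python
# Sketch of two identity-hygiene metrics from the list above:
# privileged accounts whose owner has left (orphaned), and privileged
# accounts with no recent login (stale). Data shape is hypothetical.
from datetime import date

accounts = [
    {"user": "jsmith", "privileged": True,  "active_employee": False, "last_login": date(2024, 1, 5)},
    {"user": "mlopez", "privileged": True,  "active_employee": True,  "last_login": date(2025, 6, 1)},
    {"user": "svc_ci", "privileged": False, "active_employee": True,  "last_login": date(2025, 6, 2)},
]

def orphaned_privileged(rows):
    """Privileged accounts belonging to people who no longer work here."""
    return [r["user"] for r in rows if r["privileged"] and not r["active_employee"]]

def stale_privileged(rows, as_of, max_days=90):
    """Privileged accounts not logged into within max_days."""
    return [r["user"] for r in rows
            if r["privileged"] and (as_of - r["last_login"]).days > max_days]

assert orphaned_privileged(accounts) == ["jsmith"]
assert stale_privileged(accounts, as_of=date(2025, 6, 30)) == ["jsmith"]
```

Neither metric requires a platform. It requires two data sources that agree on who a user is — which is exactly the kind of unglamorous work that board-friendly percentages let you avoid.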
If you're building board reporting right now, read the NACD's 2023 Director's Handbook on Cyber-Risk Oversight. It's written for directors but it's useful for practitioners because it tells you what a board that's actually engaged wants to understand. Most boards are not actually engaged, which means they'll accept whatever you give them. That's not an excuse to give them theater.
The Uncomfortable Part
Governance theater persists because it serves incentives. The organization gets an audit artifact. The GRC team demonstrates value. The CISO has something to present. The auditor has something to check. Everyone in the chain is optimizing for the document, not the outcome, because the document is what gets evaluated. If you want to change that, you have to change what gets evaluated — which means being willing to have uncomfortable conversations about whether the current program is actually producing security or just producing evidence of security.
That's a harder conversation than updating a policy version number. But it's the one worth having.

