Your Risk Register Is a Lie — And Everyone Knows It

Nobody Actually Believes the Numbers

Pull up your organization's risk register right now. Go ahead. Look at that spreadsheet — or if you're lucky, your GRC platform — with its color-coded risk ratings and its neat rows of likelihood and impact scores multiplied together to produce a number that feels precise but means almost nothing. Now ask yourself honestly: does anyone in that room, when risk review comes around, actually believe those numbers?

They don't. The CISO doesn't believe them. The risk committee doesn't believe them. The person who entered them definitely doesn't believe them. Everyone sits through the quarterly review, nods along, maybe bumps a few scores up or down based on vibes, and then goes back to their desk and makes security decisions based on gut instinct and political pressure anyway.

This is one of the dirtiest open secrets in enterprise security, and it drives me absolutely nuts — because we have the frameworks, the guidance, and the tooling to do this meaningfully. We're just not using any of it correctly.

How We Got Here: The 5x5 Matrix Trap

Here's the thing about risk registers: they started with genuinely good intentions. NIST SP 800-30 laid out a coherent risk assessment methodology. ISO 27005 gave us a process for context-setting, risk identification, analysis, and treatment that actually hangs together logically. These aren't bad documents. Go read Revision 1 of 800-30 — it's thorough, it's careful, it distinguishes between threat sources, threat events, and vulnerabilities in ways that actually force you to think.

Somewhere between the publication of those frameworks and their implementation in the enterprise, something broke. What we ended up with was the 5x5 risk matrix — that beloved grid where you score likelihood 1-5 and impact 1-5, multiply them together, and get a risk score between 1 and 25. Red means bad, green means fine, yellow means it'll probably bite you in six months.

The problem isn't the matrix itself. The problem is treating ordinal rankings as if they're ratio data. A risk scored 4x4=16 is not twice as dangerous as a risk scored 4x2=8. Those numbers don't work that way. You can't do arithmetic on them. But we do anyway, and then we sort our risk register by that score and present it to the board as if it represents some kind of ground truth.
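To make the ordinal-scale complaint concrete, here's a toy sketch. The frequency and dollar bands are mine, purely illustrative, but the shape of the problem holds for any mapping you pick:

```python
# Toy illustration of ordinal matrix scores vs. the quantities they stand in for.
# The bands below are hypothetical, not from any standard.
likelihood_per_year = {1: 0.01, 2: 0.1, 3: 0.5, 4: 2.0, 5: 10.0}  # events/year
impact_dollars = {1: 1e3, 2: 1e4, 3: 1e5, 4: 1e6, 5: 1e7}         # loss per event

def matrix_score(likelihood, impact):
    return likelihood * impact  # what the 5x5 register computes

def expected_annual_loss(likelihood, impact):
    # what the underlying estimates would actually multiply to
    return likelihood_per_year[likelihood] * impact_dollars[impact]

print(matrix_score(4, 4), expected_annual_loss(4, 4))  # 16 2000000.0
print(matrix_score(4, 2), expected_annual_loss(4, 2))  # 8 20000.0
# The matrix says the first risk is 2x the second; the underlying
# estimates say it's 100x. Sorting the register by matrix score
# silently throws that difference away.
```

Whatever bands you choose, the ordinal product compresses orders-of-magnitude differences into small integer ratios, which is exactly why arithmetic on those scores misleads.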

I sat in a risk review once where a team argued for twenty minutes about whether a particular risk should be scored "Likelihood: 3" or "Likelihood: 4." Twenty minutes. On a number that was, by definition, a guess made by people with no actuarial data to support it. We eventually compromised on 3.5, which — and I cannot stress this enough — is not how ordinal scales work.

The Calibration Problem Nobody Talks About

When your analyst looks at a threat scenario and decides it's "likely" to occur, what does "likely" mean? Once a year? Once a decade? More than 50% probability in the next 12 months? The honest answer is: it means whatever that analyst felt like it meant on the day they filled out the form, filtered through their own availability heuristic and whatever big incident they worked most recently.

These are the anchoring and availability biases, and they're not edge cases in risk assessment — they're the dominant force shaping your register. If your team just finished responding to a ransomware incident, ransomware risks get scored high everywhere, justified or not. If it's been three quiet years, scores drift down because nothing bad happened recently. The register becomes a lagging emotional indicator rather than a forward-looking analytical tool.

FAIR — Factor Analysis of Information Risk — was designed specifically to address this. Jack Jones built FAIR to force analysts to decompose risk into components that can actually be estimated independently: Threat Event Frequency, Vulnerability, Loss Event Frequency, Loss Magnitude. And crucially, FAIR outputs probability distributions, not point estimates. Instead of saying "likelihood: 3," you're saying "we estimate this event occurs between 0.1 and 2 times per year, with a median around 0.4." That's a claim you can actually interrogate.

The reason FAIR hasn't taken over the world isn't that it's wrong. It's that it's harder. It requires your analysts to think probabilistically, it requires you to gather calibration data, and it produces outputs that are harder to jam into a red/yellow/green dashboard. Executives want a stoplight. FAIR gives you a Monte Carlo simulation. These are not the same thing, and the gap between them is where most risk programs fall apart.
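A FAIR-style simulation doesn't require a vendor platform to understand; the core loop fits in a few lines of standard-library Python. Everything below is a back-of-the-envelope sketch with invented placeholder parameters — in a real program the intervals come from calibrated analyst estimates, and a serious implementation would model frequency and magnitude more carefully than a simple product:

```python
# A back-of-the-envelope FAIR-style Monte Carlo, stdlib only.
# Every parameter below is an invented placeholder for illustration.
import math
import random
import statistics

random.seed(7)  # reproducible sketch

def lognormal_from_ci(p5, p95):
    """Back out lognormal parameters from a calibrated 90% interval."""
    mu = (math.log(p5) + math.log(p95)) / 2
    sigma = (math.log(p95) - math.log(p5)) / (2 * 1.645)  # z-score at 95th pct
    return mu, sigma

TRIALS = 50_000
freq_mu, freq_sigma = lognormal_from_ci(0.1, 2.0)           # events per year
loss_mu, loss_sigma = lognormal_from_ci(50_000, 5_000_000)  # dollars per event

annual_losses = sorted(
    random.lognormvariate(freq_mu, freq_sigma)     # loss event frequency
    * random.lognormvariate(loss_mu, loss_sigma)   # loss magnitude
    for _ in range(TRIALS)
)
print(f"median annual loss: ${statistics.median(annual_losses):,.0f}")
print(f"95th percentile:    ${annual_losses[int(0.95 * TRIALS)]:,.0f}")
```

The output is a distribution you can turn into a loss exceedance curve and argue about in dollars, not a cell in a stoplight grid — which is both the point and, per the paragraph above, the adoption problem.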

The Risk Treatment Theater

Even if you somehow had a perfectly calibrated risk register — and you don't, but hypothetically — you'd still run into the second failure mode: what happens after you identify a risk.

ISO 27005 gives you four treatment options: modify the risk, retain it, avoid it, or share it (in everyday terms: mitigate, accept, avoid, transfer). Clean taxonomy. The problem is that in practice, 80% of your register ends up as "risk acceptance" with no actual decision being made. It just... sits there. Accepted. By nobody in particular. With no review date. No residual risk calculation. No owner who signed anything.

I've seen risk registers where the same critical risks have been "accepted" for four consecutive years with a note that says "to be addressed in next budget cycle." That's not risk acceptance. That's risk deferral dressed up in the language of governance to make it look like a decision was made. No decision was made. The risk is just sitting there, aging like milk, and everyone's pretending it's wine.

Real risk acceptance requires a few things that rarely happen: a named owner who understands what they're accepting, an explicit acknowledgment of the potential loss exposure in terms the business actually cares about (dollars, not severity scores), and a scheduled re-evaluation trigger. Without those three things, "accepted" is just a word you put in a column so the register looks complete.
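If you wanted tooling to enforce those three things instead of politely recording their absence, the shape of the record is easy to sketch. Field names here are mine, not from any GRC product:

```python
# A sketch of what "accepted" should mean as a data record, not a word
# in a column. Field names are illustrative, not from any GRC product.
from dataclasses import dataclass
from datetime import date

@dataclass
class RiskAcceptance:
    risk_id: str
    owner: str           # a named person, not a team alias
    exposure_usd: float  # potential loss in terms the business cares about
    rationale: str
    review_by: date      # explicit re-evaluation trigger

    def __post_init__(self):
        # Refuse to record an "acceptance" that is really a deferral.
        if not self.owner.strip():
            raise ValueError("acceptance requires a named owner")
        if self.exposure_usd <= 0:
            raise ValueError("acceptance requires a quantified exposure")
        if self.review_by <= date.today():
            raise ValueError("acceptance requires a future review date")

# A real acceptance passes; "accepted by nobody, someday" fails loudly:
# RiskAcceptance("R-042", "", 0, "next budget cycle", date(2020, 1, 1))
```

The point isn't the dataclass; it's that the register's schema can refuse to store a non-decision.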

What a Useful Risk Register Actually Looks Like

I'm not going to pretend there's a perfect solution here, because there isn't. But I've seen teams run genuinely useful risk programs, and they share some common traits that are worth stealing.

First, they're ruthlessly scoped. A risk register with 200 line items is not twice as useful as one with 100. It's a bureaucratic artifact that nobody reads. The registers that actually drive decisions tend to have 15-30 items, each representing a meaningful risk scenario at the right level of abstraction — not "SQL injection" as a standalone risk, but something like "web application compromise leading to customer data exfiltration" that captures the actual business harm.

Second, they separate inherent risk from residual risk explicitly and honestly. This sounds obvious, but so many registers just have one risk score column, which makes it impossible to evaluate whether your controls are actually doing anything. If your inherent risk score for "credential theft leading to privileged access abuse" is High, and your residual score after MFA, PAM, and UEBA controls is also High, that's important information. It means either your controls aren't working, your controls aren't mapped correctly, or your initial risk assessment was wrong. All of those are worth knowing.
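With two columns instead of one, the "are the controls doing anything" check becomes a trivial query. The rows and ratings below are made up for illustration:

```python
# A sketch of flagging register rows whose controls change nothing.
# Rows, ratings, and control names are made up for illustration.
LEVELS = {"Low": 1, "Medium": 2, "High": 3, "Critical": 4}

register = [
    {"risk": "credential theft leading to privileged access abuse",
     "inherent": "High", "residual": "High",
     "controls": ["MFA", "PAM", "UEBA"]},
    {"risk": "web application compromise leading to data exfiltration",
     "inherent": "Critical", "residual": "Medium",
     "controls": ["WAF", "egress monitoring"]},
]

for row in register:
    # Controls are mapped, yet residual risk didn't drop: something is wrong
    # with the controls, the mapping, or the original assessment.
    if row["controls"] and LEVELS[row["residual"]] >= LEVELS[row["inherent"]]:
        print(f"INVESTIGATE: {row['risk']} -- "
              f"{len(row['controls'])} controls, no measured risk reduction")
```

A single risk-score column makes this query impossible, which is the whole argument for the second column.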

Third, they're tied to actual threat intelligence rather than hypothetical scenarios. This is where I see a lot of organizations leave real value on the table. Tools like MITRE ATT&CK aren't just for blue teamers mapping detections — they're a structured vocabulary for describing the threat scenarios in your risk register with enough specificity that you can actually validate your control coverage. If your risk register says "ransomware" and your ATT&CK mapping shows you have no detection coverage for T1490 (Inhibit System Recovery), you've just made your risk assessment measurably more accurate. That's useful.
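That coverage check is mechanical once the scenario is expressed in ATT&CK terms. The technique list below is an illustrative subset (in practice you'd pull the mappings from ATT&CK data and your detection engineering backlog, not hardcode them):

```python
# A sketch of validating a register scenario against detection coverage.
# The technique subset and coverage set are illustrative stand-ins.
ransomware_techniques = {
    "T1486": "Data Encrypted for Impact",
    "T1490": "Inhibit System Recovery",
    "T1059": "Command and Scripting Interpreter",
}
# What your detection engineering team says they can actually see:
detection_coverage = {"T1486", "T1059"}

gaps = {tid: name for tid, name in ransomware_techniques.items()
        if tid not in detection_coverage}
for tid, name in gaps.items():
    print(f"no detection coverage: {tid} ({name})")
# -> no detection coverage: T1490 (Inhibit System Recovery)
```

Every gap this surfaces is a concrete reason to adjust the likelihood estimate for that scenario, which is what "tied to actual threat intelligence" means in practice.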

The Organizational Problem Nobody Wants to Name

Hot take: most risk registers are lies not because security teams are incompetent, but because they're designed to protect the organization from accountability rather than to actually manage risk.

Think about the incentive structure. If you score a risk too high, you're either going to get a budget to fix it — which means work and commitments you might not be able to deliver on — or you're going to get overruled by a business leader who says it's not really that bad and now you've lost credibility. If you score it too low, nothing happens and everyone's happy until it materializes, at which point you'll say "the risk was accepted" and point to the register.

That's not risk management. That's liability management. The register becomes a document that proves the process was followed, not a tool that actually informs decisions. And to be fair to the practitioners in the room: this is often what the organization wants from them. They want coverage, not clarity.

The CISSP curriculum talks about risk management in terms of aligning security with business objectives, protecting organizational assets, supporting governance. That's all real. But it doesn't really grapple with the political economy of risk assessment inside large organizations — the fact that accurate risk information is sometimes unwelcome, that business units have strong incentives to downplay risks in their area, that CISOs sometimes face pressure to keep risk scores low so the board doesn't ask uncomfortable questions.

Understanding that dynamic is, I'd argue, more important to running an effective risk program than knowing the difference between qualitative and quantitative risk analysis methods. Both matter. But if you walk into a risk review thinking your job is purely technical, you're going to get eaten alive by people who understand that the register is also a political document.

The Calibration Exercise Your Team Should Do Tomorrow

Here's something concrete you can actually do this week. Take five risks from your current register — pick ones across the severity spectrum — and run a tabletop calibration exercise with the people who scored them. Ask three questions: What specific threat scenario are we describing here? What would have to be true for this event to occur? And if it did occur, what does the blast radius actually look like in operational terms?

You will be surprised — or maybe you won't — how quickly the conversation reveals that different people had different scenarios in mind when they rated the same risk. One person was thinking about an external attacker. Another was thinking about a malicious insider. A third was thinking about accidental data exposure. Same row in the register, three completely different threat scenarios with very different likelihood profiles and control requirements.

This isn't a failure of your team. It's a structural problem with how most risk registers are written — vague enough that they capture broad categories of concern but not precise enough to drive specific decisions. Fixing that ambiguity, even just for your top 10 risks, will do more for the quality of your risk management than any tool you buy or framework certification you pursue.

And if, after that exercise, your register still looks reasonable and your scores still feel roughly right? Great. You now have actual human consensus behind those numbers rather than a single analyst's gut feel on a Tuesday afternoon. That's meaningfully better. That's a register you can actually defend.

The Uncomfortable Question at the End

Risk management frameworks — NIST, ISO, FAIR, whatever your organization uses — aren't the problem. The problem is that we've industrialized the process of filling them out without preserving the analytical rigor that makes the output useful. We've optimized for completion over accuracy, for coverage over depth, for defensibility over truth.

So here's the thing I want to leave you with, and it's not comfortable: if you can't point to three decisions your organization made in the last year that were meaningfully shaped by your risk register — actual resource allocation decisions, control investments, accepted risks with real business sign-off — then your register isn't a risk management tool. It's a compliance artifact. And there's nothing wrong with compliance artifacts if that's what you need them to be. Just don't call it risk management.

The organizations that do this well aren't doing something magic. They're doing something harder: they're having honest conversations about uncertainty, making explicit the assumptions behind every risk score, and accepting that a useful risk register will sometimes tell you things you don't want to hear. That's the actual work. Everything else is just filling in columns.

Tags: Risk Management, Risk Register, CISSP, NIST SP 800-30, ISO 27005, FAIR Framework, Risk Assessment, Threat Modeling, GRC, Security Governance, Risk Treatment, MITRE ATT&CK, Quantitative Risk Analysis, Security Leadership
