Penetration Testing Reports That Actually Get Vulnerabilities Fixed

Nobody Reads Your Pentest Report

That's not an exaggeration. The average penetration testing report lands in someone's inbox, gets skimmed by a manager who doesn't understand the technical findings, forwarded to a developer who doesn't understand the risk context, and then sits in a shared drive until the next audit cycle forces someone to revisit it. If you're lucky, the critical findings get patched. The mediums? Maybe. The lows? Almost certainly not.

Here's the thing — that's on us. Not entirely, sure, but more than most pentesters want to admit. We've been writing reports optimized for us, not for the people who actually have to do something with them. We write to demonstrate technical prowess, to cover liability, to hit a page count that justifies the engagement fee. And then we wonder why remediation rates hover somewhere between "disappointing" and "actively dangerous."

I've been on both sides of this. I've delivered reports. I've sat in the remediation meetings. I've watched a genuinely critical finding — a direct path to domain admin through an unpatched Kerberoastable service account with a weak password — get deprioritized because the finding was buried on page 34 after seventeen pages of executive summary boilerplate. The developer who was supposed to fix it didn't understand what Kerberoasting even was, and the report explained it in terms that assumed they did.

So let's talk about how to fix this.

The Executive Summary Is the Most Important Thing You'll Write, and You're Probably Doing It Wrong

Most executive summaries read like a condensed version of the technical findings section with a risk score slapped on top. That's not an executive summary — that's a CliffsNotes version of a document the executive wasn't going to read anyway. What an executive actually needs is a business impact narrative. The difference is significant.

Compare these two openers:

"During the assessment period, testers identified 14 vulnerabilities across the target environment, including 3 critical, 5 high, 4 medium, and 2 low severity findings."

vs.

"An attacker with no prior access to your network could, within approximately four hours, obtain full administrative control over your Active Directory environment and access every system on your corporate network — including the customer database containing PII for 2.3 million users."

The first is a data dump. The second is a statement that demands a response. You want someone reading that to feel a very specific kind of uncomfortable. Not panicked, but not comfortable either. Motivated.

The attack path narrative is your most powerful tool here. Don't just list vulnerabilities — tell the story of the compromise. "We started with a phishing simulation that landed on a workstation running an outdated version of Chrome. From there, harvesting credentials with Mimikatz and passing the hash gave us lateral movement to a development server. That server had an SSH key for the production bastion host. Game over." That's a movie. Executives understand movies. They don't understand CVSSv3 base scores.

Your CVSS Scores Are Lying to Developers

Hot take: CVSS scores, in isolation, are actively harmful to remediation prioritization.

I know, I know. They're standardized. They're defensible. Auditors love them. But here's the problem — a CVSS 9.8 finding that requires the attacker to already be on your internal network segment means something completely different at a company with strong network segmentation versus a flat network. The score doesn't capture that. The score doesn't capture the fact that your CVE-2021-44228 (Log4Shell) instance sits behind three layers of WAF and the inbound traffic is heavily filtered. It also doesn't capture that your CVSS 6.5 "medium" SQL injection is sitting directly in the unauthenticated login flow of your customer portal.

Environmental scoring exists precisely to address this, but almost nobody uses it in practice. What I've found actually moves the needle is adding a single field to every finding: "Why does this matter in YOUR environment, specifically?" Not why SQL injection is bad in general. Why this SQL injection, in this endpoint, given what you know about how this application is deployed and what it touches, is bad right now.

This requires you to actually understand the client's environment, which means doing your homework before you write a single word of the report. You need to know what their crown jewels are. You need to know their network topology. You need to know whether the compromised web server has database access or is isolated behind a strict egress firewall. All of that context has to live in the finding, not in your head.
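One lightweight way to enforce that discipline is to make the environmental context a required field in whatever structure your findings live in, so a finding without it can't even be created. A minimal sketch in Python; the class and field names are my own invention, not any reporting standard:

```python
from dataclasses import dataclass, field

@dataclass
class Finding:
    """One report finding. Field names are illustrative, not a standard."""
    title: str
    severity: str                # critical / high / medium / low
    cvss_base: float             # keep the score for the auditors...
    environmental_context: str   # ...but force the "why it matters HERE" narrative
    affected_assets: list[str] = field(default_factory=list)

    def __post_init__(self) -> None:
        # Refuse to create a finding with no client-specific context.
        if not self.environmental_context.strip():
            raise ValueError(f"Finding '{self.title}' is missing environmental context")

# Example: the generic 6.5 base score alone would undersell this one.
sqli = Finding(
    title="SQL injection in customer portal login",
    severity="high",
    cvss_base=6.5,
    environmental_context=(
        "Sits in the unauthenticated login flow of the customer portal; "
        "the backing database holds PII for every customer account."
    ),
    affected_assets=["portal.example.com/login"],
)
```

Making the field mandatory is the point: the structure nags you for the context while you still remember it, instead of letting it stay in your head.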

Developers Don't Know What "Implement Proper Input Validation" Means

This drives me absolutely nuts. You find an XSS vulnerability in a React application. You document it correctly, reproduce it reliably, even include a proof-of-concept payload. And then in the remediation section you write: "Implement proper input validation and output encoding."

Congratulations. You've told a developer to do the thing they thought they were already doing. That advice is functionally useless.

Good remediation guidance is specific, actionable, and written for the person who has to implement it — which is often a developer who's never taken a security course and thinks XSS is something that only happens in "hacker movies." You need to tell them exactly what to do. For that React XSS, that might mean: "React's JSX handles output encoding automatically when you use standard rendering, but you're passing user-controlled data through dangerouslySetInnerHTML on line 247 of ProfilePage.jsx. Remove that and use the standard JSX interpolation instead. If rich HTML is genuinely required for this feature, implement a sanitization library like DOMPurify before rendering."

See the difference? You've told them exactly what's wrong, exactly where it is, and exactly how to fix it. You've even anticipated the edge case where they might genuinely need HTML rendering and given them a path forward. That developer can go fix this right now, without needing to understand the full theory of XSS. That's what you want. You want frictionless remediation.

The same principle applies at the infrastructure level. "Patch vulnerable systems" is not remediation guidance. "Upgrade the openssh-server package on all in-scope Ubuntu 22.04 hosts via apt-get update && apt-get install --only-upgrade openssh-server — and note that Ubuntu backports security fixes, so the version string may still read 8.9 after patching; check the package changelog to confirm the specific CVE fix is present" is remediation guidance. Yes, it takes longer to write. No, there's no shortcut if you actually want things to get fixed.

The War Story About the Report That Actually Worked

A few years back I did an assessment for a mid-sized healthcare company. Standard external and internal pentest, two-week engagement. The findings were significant — we had full domain compromise by day three. The technical details were serious, but frankly not that unusual for an organization that had grown through acquisitions and inherited a mess of legacy infrastructure.

What was unusual was what happened after. They fixed almost everything within 60 days. Not just the criticals — most of the highs and a good chunk of the mediums too. I've been chasing that result ever since, trying to figure out what was different.

Part of it was the client — they had a CISO who actually read the report and cared. But a lot of it came down to choices I made in how I wrote the report. For that engagement, I did three things I hadn't consistently done before.

First, I included a remediation timeline recommendation — not just severity ratings, but actual suggested timeframes. "Critical findings: 30 days. High: 90 days. Medium: next scheduled maintenance window." Having that written down gave the CISO something concrete to take to the board and hold engineers accountable to.

Second, I wrote a one-page remediation roadmap that grouped related findings together. Instead of 22 separate line items, I gave them seven workstreams — things like "Active Directory hardening," "patch management," "web application security." That made the remediation effort feel manageable rather than infinite.

Third, I offered a 45-minute technical walkthrough call with the dev and ops teams, separate from the executive presentation. That call was where the real questions came out. "What does Kerberoasting actually mean, and how do we check for vulnerable service accounts ourselves?" That kind of question.

None of those three things are technically difficult. They're just effort. Most of us don't do them.
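Two of those three, the timeline recommendation and the workstream roadmap, are mechanical enough to script once your findings are structured data. A minimal sketch; the sample findings, workstream labels, and SLA numbers below are illustrative assumptions, not anything from that engagement:

```python
from collections import defaultdict

# Suggested remediation windows by severity. Illustrative numbers;
# tune them to the client's change-management reality.
SLA_DAYS = {"critical": 30, "high": 90, "medium": 180, "low": 365}

# (title, severity, workstream) tuples. The workstream label is the
# grouping axis that turns N line items into a handful of efforts.
findings = [
    ("Kerberoastable service account with weak password", "critical", "Active Directory hardening"),
    ("Unconstrained delegation on legacy file server", "high", "Active Directory hardening"),
    ("Outdated Chrome on workstations", "high", "Patch management"),
    ("Missing OpenSSH security updates", "medium", "Patch management"),
    ("Reflected XSS in profile page", "high", "Web application security"),
]

roadmap: dict[str, list[str]] = defaultdict(list)
for title, severity, workstream in findings:
    roadmap[workstream].append(f"{title} (fix within {SLA_DAYS[severity]} days)")

for workstream, items in roadmap.items():
    print(workstream)
    for item in items:
        print(f"  - {item}")
```

Five findings collapse into three workstreams, each item carrying its own deadline. That's the one-page roadmap, generated instead of hand-assembled.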

Proof of Concept Evidence: Where People Get This Completely Backwards

There's a debate in the pentesting community about how much PoC evidence to include in reports. Some folks say strip it down — don't give the client a how-to guide if the report leaks. Others say include everything so the severity is undeniable. Both camps have a point, and both camps are missing the actual goal.

The goal of PoC evidence is to make the finding undeniable and to make the attack chain understandable. That's it. You include exactly as much as achieves those goals and no more.

Screenshots of a Burp Suite intercept showing a successful SQL injection response? Yes, include that. The full SQLMap command with flags that would allow someone to automatically dump the entire database? Probably not necessary for the report — save that for the verbal walkthrough if the dev team needs to understand the scope. A Wireshark capture showing cleartext credentials traversing the network? Absolutely include it. The raw packet data? Probably overkill.

What I consistently see missing is remediation verification guidance. You found the vulnerability, you documented it, the client is going to fix it — how do they know when it's actually fixed? This seems obvious but it's almost never in reports. "After implementing the fix, verify by attempting the following request and confirming you receive a 403 rather than a 200 response: GET /api/admin/users HTTP/1.1 with a standard user bearer token." Now the developer can test their own fix before they close the ticket. Now the issue doesn't get marked as resolved and then rediscovered in your next engagement.
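That verification step can even ship as a small script the developer runs before closing the ticket. A sketch built on the hypothetical endpoint above; the URL and token placeholder are illustrative, and the request function is injected so the pass/fail logic can be exercised without a live target:

```python
from typing import Callable

def verify_access_control_fix(
    get_status: Callable[[str, str], int],
    url: str = "https://app.example.com/api/admin/users",   # hypothetical endpoint
    user_token: str = "<standard-user-bearer-token>",       # placeholder, not a real token
) -> bool:
    """Return True if the fix holds: a standard user must now get 403, not 200."""
    status = get_status(url, user_token)
    return status == 403

# Real usage would pass a function that performs the HTTP GET with an
# Authorization header (e.g. via urllib.request). For a quick self-check,
# stub the transport and confirm the pass/fail logic:
assert verify_access_control_fix(lambda url, token: 403) is True   # fixed
assert verify_access_control_fix(lambda url, token: 200) is False  # still broken
```

Handing the client something like this turns "trust us, it's fixed" into a repeatable check they can run on every deploy.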

The Uncomfortable Truth About Retest Engagements

Unpopular opinion: if your client isn't scheduling a retest, you've probably written a report that makes them feel like remediation is complete when it isn't.

The retest is where you actually find out if your report was good. I've done retests where I showed up expecting to verify patches and instead found that the developers had misunderstood the finding and implemented a fix that addressed the symptom but not the root cause. The SQL injection was "fixed" by adding client-side input filtering in JavaScript. The XXE vulnerability was patched in the web application but the same vulnerable library was still present in the mobile API backend. The weak password policy was updated in the Group Policy Object but the policy wasn't enforced on the OU containing service accounts — which was the entire attack path we documented.

Every single one of those misses traces back to a report that wasn't clear enough. Either the root cause wasn't explained correctly, or the remediation guidance was too generic, or the affected scope wasn't fully documented. A retest isn't just a revenue opportunity or a compliance checkbox — it's a feedback mechanism. When findings survive a retest despite apparent remediation effort, that's a signal about your report quality, not just the client's implementation quality.

Build this into your engagement model. Offer it. Encourage it. And when you do retests, pay attention to how findings were misunderstood — it'll make your next set of reports better.

One Last Thing That Will Make You Uncomfortable

The hardest part of writing a report that actually gets vulnerabilities fixed isn't the technical documentation. It's the honesty.

Not honesty about the vulnerabilities — most pentesters are honest about those. Honesty about the limitations. The finding you couldn't fully exploit but believe is exploitable with more time. The attack path you theorized but didn't prove. The scope restriction that prevented you from testing what was actually the riskiest part of the environment.

Clients use incomplete reports to build a false sense of security. "We did a pentest, we're good." If your report doesn't clearly document what wasn't tested, what was time-constrained, and what would require a follow-up engagement to properly assess, you're contributing to that problem. Write the limitations section like it matters — because it does. The executive who reads your report will make resource allocation decisions based on it. If they think you tested everything when you only tested half, they'll underfund the security program and be blindsided later.

You're not just a technician delivering a document. You're providing decision support for the people responsible for protecting this organization. The report is the product. And if the product doesn't get vulnerabilities fixed, it doesn't matter how elegant the exploitation chain was or how many pages of raw output you included in the appendix.

Write the report for the person who has to act on it. Everything else is just showing off.

Tags: Penetration Testing, Security Assessment, Report Writing, Vulnerability Management, Red Team, CISSP, Remediation, Offensive Security, Risk Communication, Security Testing
