Your Vendor Risk Questionnaire Is a Work of Fiction

SecureMango | 10 min read | Security & Risk Management

The Questionnaire That Protects Nobody

The SIG questionnaire — the Shared Assessments Standardized Information Gathering document — is 800 questions organized across 18 functional domains covering everything from physical security to business continuity. It is the industry-standard vendor risk assessment instrument. It has been endorsed by regulators, referenced in audit frameworks, and returned by thousands of vendors annually.

It is also, in the way it is typically deployed, security theater.

Not because the questions are wrong. The SIG asks reasonable questions about reasonable controls. Not because the intent is misguided — the intent, aggregating security control information from vendors into a standardized format, is genuinely useful. But because the typical operational context transforms a legitimate assessment instrument into a documentation exercise that produces compliance artifacts rather than risk information.

Here is what actually happens: your procurement team sends the SIG to a vendor. The vendor's compliance team — or their third-party questionnaire response service — fills out the SIG using templated answers from their last 20 SIG submissions. "Yes, we have a documented information security policy. Yes, we conduct annual penetration testing. Yes, MFA is enforced." The completed questionnaire comes back. Your vendor risk management team reviews it against a scoring rubric, notes a few gaps, sends follow-up questions that get equally templated responses, achieves a passing score, and approves the vendor relationship. The CISO signs off. The vendor goes live.

Eight months later, the vendor has a breach. And you discover that "annual penetration testing" meant a scan from a tool with no manual validation, that MFA enforcement had an exception for legacy integrations that covered your integration specifically, and that the documented security policy was a Word document that nobody had read since 2021.

SolarWinds passed exactly this kind of assessment at thousands of organizations before SUNBURST. CrowdStrike was a trusted, assessed vendor too before the Falcon content update incident crashed millions of Windows hosts: a different failure mode, but the same assessment blind spot. The vendors who breach your data are overwhelmingly vendors who passed your assessment process.

What Questionnaire Responses Actually Tell You

A vendor's answers to a SIG questionnaire tell you that the vendor has someone who can fill out a SIG questionnaire. That is approximately the full information content.

Questionnaire responses are self-reported, unverified, typically prepared by compliance personnel rather than the engineers who build and operate the systems in question, and optimized for passing the assessment rather than accurately representing the security posture. Vendors who have submitted many SIGs know which answers produce good scores and which answers trigger follow-up. The incentive structure points entirely toward positive answers.

The verification gap is the fundamental problem. When you ask a vendor "do you encrypt data at rest?" and they answer "yes," you have several possible realities: all data is encrypted at rest comprehensively; some data is encrypted at rest and some isn't; encryption is configured on the primary database but not on backups, snapshots, or log storage; encryption is in place but the key management is weak enough that it doesn't provide meaningful protection; the person answering the question was not actually sure and answered yes because that's the expected answer.

Without verification, questionnaire answers are assertions, not evidence. And evidence is what meaningful risk assessment requires.
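One way to see the difference between assertion and evidence: a blanket "yes, we encrypt at rest" collapses many distinct realities, but an asset inventory makes the claim checkable. This is a minimal sketch assuming a hypothetical inventory format; the asset names, fields, and the inventory itself are illustrative, not from any real platform.

```python
# Sketch: turning "data is encrypted at rest" into a checkable claim against
# a hypothetical asset inventory. All names and fields here are illustrative.
from dataclasses import dataclass

@dataclass
class StorageAsset:
    name: str
    kind: str          # e.g. "database", "backup", "snapshot", "log-store"
    encrypted: bool

def verify_encryption_claim(assets):
    """Return the assets that contradict a blanket 'encrypted at rest' answer."""
    return [a for a in assets if not a.encrypted]

inventory = [
    StorageAsset("orders-db", "database", True),
    StorageAsset("orders-db-backup", "backup", False),  # the gap a "yes" hides
    StorageAsset("app-logs", "log-store", False),
]

gaps = verify_encryption_claim(inventory)
for a in gaps:
    print(f"UNENCRYPTED: {a.name} ({a.kind})")
```

The point is not the ten lines of Python; it is that evidence-based assessment operates on enumerable artifacts (an inventory, a config export, a scan result) rather than on a checkbox.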

The SOC 2 Report That Stops at Page 3

SOC 2 reports get a lot of credibility in vendor risk programs, and they deserve more scrutiny than they get. A SOC 2 Type II report from a reputable auditor is more informative than a questionnaire response. It covers a defined scope over a defined period, with procedures performed by an independent third party, and produces findings based on actual evidence review. This is legitimately more valuable than self-attestation.

But the limitations are significant:

Scope is defined by the vendor. A SOC 2 report covers the systems and processes the vendor chose to include in scope. Infrastructure components, integrations, or sub-processors that the vendor excluded from scope are unassessed. A vendor can have a clean SOC 2 report for their core platform while their data backup environment, their internal development toolchain, and their employee endpoints are not covered. The report doesn't tell you what's out of scope — you have to ask, and you have to know the right questions.

Controls are vendor-defined. The Trust Services Criteria (Security, Availability, Confidentiality, etc.) define categories, not specific controls. The vendor defines what controls they're implementing to meet each criterion, and the auditor assesses whether those controls operated effectively. A vendor can design controls that technically satisfy the criterion without providing the depth of protection you'd assume from the criterion description.

The report is point-in-time. A SOC 2 Type II covers a period — typically 12 months. The report you're reviewing may be nine months old. The vendor's environment may have changed substantially. Personnel may have turned over. New infrastructure may have been deployed. The assessment reflects what was true during the audit period.

Most reviewers don't read the exceptions. Section IV of a SOC 2 report contains the auditor's findings, exceptions, and complementary user entity controls. This is where the interesting information lives. An exception indicates a control that didn't operate effectively during the audit period. Complementary user entity controls are things the vendor expects you to do for the controls to work. Most vendor risk reviewers read the summary opinion on page 1, see "no material exceptions," and stop there. Material exceptions are the headline; non-material exceptions in Section IV tell the real story.

The SolarWinds Lesson That Nobody Actually Learned

SUNBURST — the SolarWinds supply chain compromise attributed to SVR, disclosed in December 2020 — is the canonical example of third-party risk materialization at scale. Roughly 18,000 organizations downloaded the trojanized Orion update. Several hundred were specifically targeted for follow-on activity. Affected organizations included the US Treasury, the Department of Justice, Microsoft, Intel, and Cisco.

Every affected organization had a vendor risk program. Every affected organization, if they used SolarWinds in a category requiring vendor assessment, had some form of SolarWinds assessment on file. The assessments were passing. The vendor was trusted. The software update mechanism — precisely the mechanism the attackers exploited — was not a standard topic in questionnaire-based assessments because nobody was specifically asking "describe the security controls protecting your software build pipeline."

The SolarWinds compromise was a build pipeline attack. The attackers inserted malicious code into the compilation process itself, so the resulting binaries were signed with SolarWinds' legitimate code signing certificate and passed integrity checks. No questionnaire about network segmentation, access controls, or encryption coverage would have surfaced this. The risk was in the software factory, not in any of the standard assessment domains.

In the years since SUNBURST, software supply chain security has become a recognized domain. SLSA (Supply-chain Levels for Software Artifacts) is a framework for software build integrity. SBOM (Software Bill of Materials) requirements are appearing in government procurement and starting to influence enterprise vendor requirements. CISA has published guidance on secure software development practices. These are real advances.
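An SBOM is only useful if someone actually consumes it. As a concrete sketch: CycloneDX SBOMs are JSON documents with a `components` array, and extracting the component list is a few lines. The embedded document below is a toy example; real SBOMs carry far more (licenses, hashes, purl identifiers, dependency graphs).

```python
# Sketch: extracting the component list from a minimal CycloneDX-style SBOM.
# The embedded document is a toy example, not a complete real-world SBOM.
import json

sbom_json = """
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "components": [
    {"type": "library", "name": "openssl", "version": "3.0.13"},
    {"type": "library", "name": "zlib", "version": "1.2.11"}
  ]
}
"""

def list_components(sbom_text):
    """Return (name, version) pairs for every component in the SBOM."""
    sbom = json.loads(sbom_text)
    return [(c["name"], c.get("version", "?")) for c in sbom.get("components", [])]

for name, version in list_components(sbom_json):
    print(f"{name} {version}")
```

A TPRM program that collects SBOMs and never diffs them against vulnerability data is doing the questionnaire exercise again, just in JSON.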

But the average enterprise TPRM program still centers on a SIG questionnaire and a SOC 2 review, supplemented perhaps by a vendor security questionnaire from the security team that asks about patching cadence and MFA. It has not fundamentally evolved its assessment methodology to address supply chain risk vectors.

Continuous Monitoring vs. Annual Assessment Theater

The annual vendor assessment cycle is architecturally wrong for a threat environment that operates continuously. An assessment completed in January tells you about January. Your vendor's security posture in October — after personnel changes, infrastructure migrations, new subprocessor relationships, and whatever incidents they had that they didn't disclose — is not described by your January assessment.

Continuous monitoring doesn't mean doing a full assessment monthly. It means maintaining ongoing visibility into observable risk signals from your vendor ecosystem. The vendor risk intelligence platforms — BitSight, SecurityScorecard, RiskRecon, Panorays — provide this by measuring externally observable signals: exposed services, certificate issues, detected vulnerabilities, dark web data exposure, DNS changes, domain registrations. None of these signals require vendor cooperation.

These platforms have limitations. External-only signals miss internal control weaknesses that don't produce an external signature. A SecurityScorecard score doesn't tell you about SolarWinds-style build pipeline compromises. The scores can be gamed — vendors can improve their score by fixing the specific things the platform measures without addressing underlying security posture. The correlation between scores and breach likelihood is real but imperfect.

Used as a continuous-monitoring supplement rather than as a primary assessment, these platforms are valuable. A vendor whose BitSight score drops significantly between assessments is a signal worth investigating. A vendor who goes from 800 to 630 in three months has something happening that your annual SIG questionnaire won't capture for another eight months.
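The alerting logic behind that kind of signal is simple. This is a sketch assuming you already pull score histories from a rating platform's API; the vendor names, scores, and the 100-point threshold are illustrative assumptions, not platform defaults.

```python
# Sketch: flagging significant vendor-score drops within a monitoring window.
# Vendor names, scores, and the threshold are illustrative; real platforms
# expose score histories through their own APIs.
def score_drop_alerts(history, threshold=100):
    """history: {vendor: [scores, oldest first]}.
    Returns vendors whose latest score fell at least `threshold` points
    below their peak in the window, mapped to the size of the drop."""
    alerts = {}
    for vendor, scores in history.items():
        drop = max(scores) - scores[-1]
        if drop >= threshold:
            alerts[vendor] = drop
    return alerts

history = {
    "acme-saas": [800, 790, 710, 630],   # the 800 -> 630 slide from the text
    "steady-co": [720, 725, 718, 722],
}

print(score_drop_alerts(history))  # flags acme-saas with a 170-point drop
```

The hard part is not the arithmetic; it is wiring the alert into a process where someone actually picks up the phone and asks the vendor what changed.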

Tiering Your Vendor Portfolio Honestly

Most TPRM programs have a vendor tiering model — Tier 1 (high-risk, critical), Tier 2 (moderate), Tier 3 (low) — and apply different assessment depth based on tier. The concept is correct. The tiering criteria are often where programs go wrong.

Common tiering failure: Tier 1 is defined as "vendors with access to sensitive data" and the sensitive data classification is based on what the vendor was onboarded for rather than what they actually have access to. A vendor brought in for email filtering has access to all email traffic, including everything from your executives, your legal team, and your M&A process. They're often classified as Tier 2 because "email security vendor" sounds less sensitive than "data processor."

Another common failure: access or integration footprint is not considered in tiering. A Tier 3 vendor that has an API integration with your identity provider, a network connection to your internal systems, or software deployed on employee endpoints is not actually Tier 3 from a breach impact perspective, regardless of the data classification of their primary use case.

The honest tiering criteria: What is the worst-case breach impact if this vendor is compromised and the compromise propagates? Does the vendor have network access, endpoint access, or identity system access? Does the vendor have software that runs in your environment? These questions produce a more accurate risk tier than asking what data category they were contracted to process.
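Those criteria can be expressed as a deterministic rule rather than a judgment call buried in a spreadsheet. This is a minimal sketch; the attribute names and tier rules are illustrative assumptions, and a real model would weigh more factors (subprocessor depth, recovery dependence, concentration risk).

```python
# Sketch: tiering by worst-case access footprint rather than by the data
# category on the contract. Attribute names and rules are illustrative.
from dataclasses import dataclass

@dataclass
class Vendor:
    name: str
    network_access: bool       # connects into internal networks
    identity_access: bool      # integrates with the identity provider
    software_in_env: bool      # agents or updates running in your environment
    data_classification: str   # what they were contracted to process

def risk_tier(v: Vendor) -> int:
    # Any propagation path into your environment makes a vendor Tier 1,
    # regardless of the data label on the contract.
    if v.network_access or v.identity_access or v.software_in_env:
        return 1
    if v.data_classification in ("confidential", "restricted"):
        return 2
    return 3

# The email-filtering example from the text: "internal" data label, but an
# identity-provider integration puts it in Tier 1.
mail_filter = Vendor("mail-filter", network_access=False, identity_access=True,
                     software_in_env=False, data_classification="internal")
print(risk_tier(mail_filter))
```

Encoding the rule this way also makes the tiering auditable: anyone can see exactly why a vendor landed in Tier 1, and a change to the rule is a reviewable diff rather than a silent spreadsheet edit.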

Fourth-Party Risk: The Problem Nobody Has Solved

Your vendor's vendors are your fourth-party risk. A third-party provider who stores your data in AWS is also creating a fourth-party dependency on Amazon. A third-party SaaS vendor whose product relies on five sub-processors you've never heard of is creating fourth-party exposure through all five of them.

Most TPRM programs have nominal fourth-party risk requirements — contractual obligations for vendors to disclose material sub-processors, questionnaire questions about their own vendor risk program — and minimal actual visibility into fourth-party risk in practice. The disclosure requirements depend on vendors accurately self-reporting their sub-processor relationships, which is the same self-reporting problem as every other questionnaire component.

The GDPR and CCPA frameworks have moved the needle somewhat by creating legal obligations around sub-processor disclosure and accountability. Article 28 of GDPR requires data processors to obtain authorization before engaging sub-processors and to impose equivalent data protection obligations. This creates a contractual paper trail that at least makes material sub-processors visible.

In practice, getting real fourth-party visibility requires tooling investment (RiskRecon's supply chain mapping, Whistic's network-based approaches) or a different contractual model with your highest-risk vendors that includes audit rights and sub-processor notification obligations with meaningful SLAs. This is hard. It's also the gap that the next SolarWinds-scale incident will exploit.

What Effective TPRM Actually Requires

Effective third-party risk management has a different operational profile than questionnaire collection at scale. It requires investment in a smaller number of high-quality assessments rather than a large number of low-quality ones. It requires contractual leverage — audit rights, security requirements, breach notification obligations, the ability to exit a relationship if risk materially changes. It requires continuous monitoring that produces signals between assessments. And it requires ruthless tiering that puts assessment resources where the actual risk is.

The CISO who signs off on 200 vendor assessments per year is signing off on 200 questionnaire-based risk judgments that are collectively doing less risk reduction work than 20 genuinely rigorous assessments with verified evidence, ongoing monitoring, and meaningful contractual controls. Volume is not a proxy for quality. The annual assessment count is a compliance metric. Breach impact from third-party incidents is the actual risk metric, and it's rarely tracked in TPRM programs.

Build a program that a skeptical CISO could defend after a third-party breach, not one that passes an auditor's check on vendor assessment coverage rate. The auditor isn't the threat. SolarWinds was.

Tags: third-party-risk, tprm, vendor-risk, solarwinds, soc2, sig-questionnaire, supply-chain-security, continuous-monitoring, fourth-party-risk
