The Dependency You Didn't Write Is Still Your Problem
Let me tell you about a Monday morning nobody wants to have. You're two sprints from shipping, the CI pipeline is green, and someone from threat intel drops a message in Slack: a package you're pulling three levels deep in your dependency tree was quietly compromised six weeks ago. You didn't write it. You didn't choose it directly. You probably don't even know its name off the top of your head. But it's in your build, it ran on your infrastructure, and now you have a very bad day ahead of you.
This isn't hypothetical. This is the event-stream incident, almost exactly.
In 2018, a maintainer of event-stream — a wildly popular npm package with millions of weekly downloads — handed off ownership to a stranger on the internet because they didn't want to maintain it anymore. That stranger injected a malicious dependency called flatmap-stream into the package, which contained an encrypted payload targeting the wallet credentials of a specific Bitcoin application. The malicious code sat in the npm registry for two months. It passed automated scans. It passed human review. Nobody caught it until a developer got suspicious about an obfuscated string.
And npm audit? Completely silent. Because this wasn't a known CVE. It was a logic bomb wrapped in legitimate-looking JavaScript.
Why npm audit Is a Seatbelt With the Airbag Removed
npm audit checks your installed packages against a database of known vulnerabilities. That's it. That's the whole game. It's reactive by design — a package has to be compromised, discovered, reported, and catalogued before the tool knows anything about it. For the class of supply chain attacks we're dealing with now, that lag is catastrophic.
Consider the ua-parser-js hijack in October 2021. The maintainer's npm account was compromised, and malicious versions were published that installed cryptominers and credential stealers. Versions 0.7.29, 0.8.0, and 1.0.0 were live on the registry. The package had around 8 million weekly downloads. For the window between publication and detection, every npm install that touched those versions was executing attacker code. npm audit had nothing. GitHub's Dependabot had nothing. The advisory didn't exist yet.
This is the fundamental problem with purely CVE-based tooling: it models the past, not the present threat landscape.
And it gets worse. npm install scripts are a loaded gun sitting on your workbench. When you run npm install, any package in your tree can declare preinstall, install, or postinstall scripts that execute arbitrary code on your machine — or your CI runner — with full ambient privilege. No prompt. No confirmation. Just execution. Attackers know this. The Codecov bash uploader compromise in 2021 demonstrated exactly this vector at scale: a tampered bash script pulled down via CI pipelines exfiltrated environment variables — including tokens, credentials, and keys — from thousands of organizations before anyone noticed.
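The mechanics are worth seeing concretely. Any package can declare lifecycle scripts in its manifest; the package name and script below are entirely hypothetical, but npm will run whatever command it finds there, automatically, the moment the package lands in your node_modules:

```json
{
  "name": "innocuous-utils",
  "version": "1.4.2",
  "scripts": {
    "postinstall": "node ./setup.js"
  }
}
```

Nothing about this manifest looks alarming in a diff, which is exactly the problem — that setup.js can read process.env and phone home before your first test ever runs.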
Dependency Confusion Isn't Clever, It's Embarrassing for Everyone Involved
In 2021, security researcher Alex Birsan published a technique that broke the brains of every AppSec team in the industry simultaneously. The attack was elegant to the point of being insulting: if your organization uses internal package names on a private registry, an attacker can publish a package with the same name to the public npm registry at a higher version number. Most dependency resolution logic will prefer the higher-versioned public package over the lower-versioned private one.
Birsan tested this against 35 major companies — including Apple, Microsoft, Tesla, Shopify, Netflix, PayPal, and Yelp — and achieved remote code execution in their internal build systems. He did this legally, under bug bounty programs, and collected over $130,000 in rewards. The technique worked because nobody had thought carefully about the trust model of their package resolution order.
The fix isn't complicated in theory: use scoped packages (@yourcompany/package-name), configure your registry to refuse public fallback for private namespaces, and set explicit registry mappings in your .npmrc. But I've audited enough build pipelines to know that most teams haven't done this. They're one misconfigured .npmrc away from the same exposure Birsan demonstrated.
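A minimal sketch of that .npmrc hardening, using a hypothetical @yourcompany scope and internal registry URL:

```ini
; Route the private scope to the internal registry, and only there
@yourcompany:registry=https://npm.internal.yourcompany.example/

; Everything unscoped resolves from the public registry explicitly
registry=https://registry.npmjs.org/
```

The .npmrc alone isn't sufficient — your private registry must also refuse to fall through to the public registry for the scoped namespace — but explicit scope-to-registry mapping removes the ambiguity that dependency confusion exploits.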
And then there's the cousin of this attack: typosquatting. Someone registers lodahs instead of lodash, or crossenv instead of cross-env, waits for developers to fat-finger an install command or copy-paste from a dubious tutorial, and ships malware. The npm registry has had hundreds of these removed over the years. It's low-effort, it scales for attackers, and it will never fully go away.
That One Time I Found a Lockfile That Lied
A few years back I was reviewing the build process for a mid-sized SaaS company. Clean codebase, reasonable practices, quarterly pen tests. They were proud of their package-lock.json — "we lock everything," the lead dev told me. And they did commit it. But nobody on the team had ever thought about what it means to trust that lockfile.
Here's the thing about lockfile poisoning: your lockfile records resolved package URLs and integrity hashes. If an attacker can modify your lockfile — through a compromised developer machine, a malicious PR, or a misconfigured merge strategy — they can point a dependency at a different tarball entirely while keeping the package name and version string identical. The name looks right. The version looks right. But the resolved URL points somewhere else, and the integrity hash has been swapped to match the malicious tarball.
Unless you're verifying the integrity of your lockfile itself, and auditing changes to it as carefully as changes to source code, you're trusting a file that an attacker could have quietly modified. Most teams treat lockfile diffs in PRs as noise and click approve without reading them.
So yes: commit your lockfiles. But also read your lockfile diffs. Treat unexpected changes to resolved URLs as a security event, not a formatting artifact.
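One way to make that review less painful is to automate the obvious check: does every resolved URL in the lockfile point at a registry you actually trust? A minimal sketch, assuming the npm lockfileVersion 2/3 layout (dependencies under a top-level "packages" map) and an illustrative allowlist of hosts:

```javascript
// Flag lockfile entries whose resolved URL points somewhere other
// than the registries you expect. Host list is illustrative; add
// your private registry's hostname if you run one.
const ALLOWED_HOSTS = new Set([
  'registry.npmjs.org',
]);

function findSuspiciousResolved(lockfile) {
  const hits = [];
  for (const [path, entry] of Object.entries(lockfile.packages ?? {})) {
    // The root project entry and workspace links carry no resolved URL.
    if (!entry.resolved) continue;
    const host = new URL(entry.resolved).hostname;
    if (!ALLOWED_HOSTS.has(host)) {
      hits.push({ path, resolved: entry.resolved });
    }
  }
  return hits;
}
```

Wire this into CI against package-lock.json and fail the build on any hit — a one-line tarball redirect then becomes a red build instead of an unread diff.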
The Tools That Actually Move the Needle
Let's talk about what's actually useful here, because the tooling landscape has gotten more interesting.
Snyk and Dependabot are the defaults most teams reach for, and they're fine for CVE tracking. Dependabot opens PRs. Snyk gives you a dashboard. Neither of them is doing behavioral analysis on packages you're about to install.
Socket is different. Socket analyzes npm packages for risky behaviors — install scripts, network access, environment variable reads, obfuscated code, typosquatting indicators — before you install them. It's doing static analysis on the package itself, not just checking it against a known-bad list. If a package suddenly adds a postinstall script that wasn't there in the previous version, Socket flags it. That's the kind of signal that would have caught the event-stream compromise much earlier. The GitHub app integration means you get this analysis directly in PRs before the merge.
Renovate is underrated. It's more configurable than Dependabot, supports monorepos sanely, and lets you define update policies with real granularity — pin major versions, auto-merge patches within certain packages, hold major updates for manual review. The configuration overhead is real, but for teams with complex dependency graphs it's worth it.
None of these tools replace the thing that actually matters most: reducing your attack surface by reducing your dependency count. Go look at your package.json right now. Not the lockfile — the direct dependencies in package.json. How many of those do you actually need? Could you replace five utility packages with twenty lines of code you own and understand? The answer is usually yes, and every package you remove is an attack surface that no longer exists.
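To make that concrete: plenty of projects pull in a whole utility library for one or two functions. A pick() in the spirit of lodash.pick takes a few lines to own outright — illustrative, not a drop-in replacement for every lodash edge case:

```javascript
// Copy only the named own properties from an object.
// hasOwnProperty via the prototype avoids both inherited keys
// and objects that shadow hasOwnProperty itself.
function pick(obj, keys) {
  const out = {};
  for (const key of keys) {
    if (Object.prototype.hasOwnProperty.call(obj, key)) out[key] = obj[key];
  }
  return out;
}
```

Twenty lines you wrote and can read beats a transitive dependency tree you can't — and it's one less maintainer handoff that can go wrong.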
SBOMs Are Not Just Compliance Theater (Mostly)
If you've been in security for more than a few years, you've watched compliance requirements get bolted onto development workflows and accomplish nothing except additional paperwork. SBOMs — Software Bills of Materials — could go the same way. Or they could actually be useful. It depends entirely on whether you generate them as a formality or operationalize them.
The two dominant formats are SPDX (maintained by the Linux Foundation) and CycloneDX (from OWASP). Both can represent the full dependency graph of your software — package names, versions, licenses, hashes, relationship graphs. The difference is mostly in tooling ecosystem and schema design; CycloneDX tends to be more security-focused out of the box, SPDX has broader adoption for license compliance workflows.
Where SBOMs get genuinely interesting is in incident response. When SolarWinds happened — when it became clear that the Orion build pipeline had been compromised and malicious code was shipped in signed, legitimate-looking updates to 18,000 organizations — the first question everyone had was: do we have this? where? in what version? Most organizations couldn't answer quickly. An operationalized SBOM, generated at build time and stored with artifact metadata, would have cut that triage window dramatically.
Executive Order 14028, issued in May 2021, made SBOMs a requirement for software sold to the US federal government. That's driven a lot of tooling investment. But the operational value exists independent of the compliance angle: know what's in your software, at the version level, so you can respond when something in it turns out to be compromised.
Generate SBOMs at build time using tools like cyclonedx-npm or syft. Store them with your build artifacts. Query them when an advisory drops. That's the loop that makes this useful.
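The query side can be trivially small. A sketch of the "do we ship this?" lookup against a CycloneDX JSON document, assuming the standard top-level components array from the CycloneDX schema:

```javascript
// Given a parsed CycloneDX BOM, return the components matching a
// package name, optionally narrowed to an exact version.
function findComponent(bom, name, version) {
  return (bom.components ?? []).filter(
    (c) => c.name === name && (version === undefined || c.version === version)
  );
}
```

Run that across the stored BOMs for every deployed artifact and the triage question — which builds, which versions, where — becomes a query instead of an archaeology project.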
SLSA and Sigstore Are the Long Game
The SLSA framework (Supply-chain Levels for Software Artifacts, pronounced "salsa") — originally from Google, now maintained under the OpenSSF — provides a structured way to reason about supply chain integrity. It defines ascending levels of assurance, from basic source control hygiene up to hermetic, reproducible builds with provenance attestation. It's ambitious. Most organizations are nowhere near the upper levels. But it's a useful mental model for asking: where in our build pipeline could an attacker inject something, and what controls would detect or prevent that?
The colors.js and faker.js sabotage in January 2022 is a useful counterpoint here. Marak Squires, the maintainer, intentionally broke his own packages — publishing versions that output gibberish in an infinite loop — as a protest against corporations using open source without contributing back. This wasn't an external attacker. This was the maintainer acting with full legitimate authority over the packages. SLSA doesn't protect you from that. Neither does any technical control, really. This is a people and policy problem: who do you trust, what's your process for vetting new dependencies, and how fast can you pin and fork when a maintainer goes rogue or burns out and abandons a critical package?
Sigstore and its tooling — particularly cosign — are solving the artifact signing problem in a way that might actually get broad adoption. The traditional challenge with signing is key management: if you sign your artifacts with a long-lived key, you have to protect that key forever, and if it leaks, you have a serious problem. Sigstore uses short-lived certificates tied to OIDC identity, with a transparency log (Rekor) that creates an immutable audit trail. The result is that you can verify not just that an artifact is signed, but that it was signed by a specific identity in a specific CI context at a specific time. That's a meaningful improvement in the provenance story.
npm packages aren't broadly signed yet, but container images can be, and the pattern is spreading. If you're pushing container images as part of your supply chain, you should be signing them with cosign and verifying signatures in your deployment pipeline. Full stop.
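In a GitHub Actions pipeline, the keyless flow looks roughly like the following — image name, org, and build-step output are placeholders, and the flags reflect cosign 2.x, so check the docs for your version:

```yaml
# Illustrative steps only. Keyless signing needs an OIDC token,
# which is why the id-token permission is required.
permissions:
  id-token: write
  packages: write

steps:
  - name: Sign the pushed image by digest
    run: cosign sign --yes ghcr.io/yourorg/app@${{ steps.build.outputs.digest }}

  - name: Verify before deploy
    run: |
      cosign verify \
        --certificate-identity-regexp 'https://github.com/yourorg/.*' \
        --certificate-oidc-issuer https://token.actions.githubusercontent.com \
        ghcr.io/yourorg/app@${{ steps.build.outputs.digest }}
```

The verify step is the part teams skip, and it's the part that matters: a signature nobody checks is just metadata.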
What You Should Actually Do Before Your Next Deploy
Look, I'm not going to give you a twelve-point checklist and pretend that's the answer. But there are a few things that would meaningfully reduce your exposure right now.
- Disable install scripts for packages that don't need them. You can set ignore-scripts=true in your .npmrc globally, then selectively enable for packages that legitimately need them. It's an annoying workflow change. It's also a hard block on a whole class of compromise vectors.
- Add Socket to your GitHub workflow. It takes twenty minutes and the signal-to-noise ratio is better than any CVE-only scanner for catching the novel stuff.
- Set up SBOM generation as a build step, output CycloneDX JSON, and store it somewhere queryable. When the next major supply chain incident drops, you'll thank yourself.
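The ignore-scripts change from the first item is one line of project-level config. Re-enabling per package is the fiddly part — on recent npm versions, npm rebuild with ignore-scripts overridden will run a single package's build scripts on demand, though the exact behavior varies by npm version, so verify against your own setup:

```ini
; Project-level .npmrc: no lifecycle scripts run on install.
; When a package genuinely needs its native build step, opt back in
; for that package alone, e.g.:
;   npm rebuild esbuild --ignore-scripts=false
ignore-scripts=true
```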
The deeper issue is that the JavaScript ecosystem traded security for velocity for twenty years and we're all living with the consequences. npm install runs code on your machine. The registry has historically had weak ownership verification. Package namespaces are a free-for-all. And most developers don't think about this until something explodes.
Your package.json is a trust manifest. Every line in it is a statement that you trust some human you've never met, maintaining code you've never read, to not do something catastrophic — intentionally or otherwise. That trust is usually fine. And sometimes it absolutely isn't. Build your processes around the assumption that some percentage of the time, it won't be.

