Part II — Third-party libraries: the vulnerability layer nobody counted

John "W8MEJ" Menerick · securesql.info · CISSP · CKS/CKA · 15+ yrs security architecture

Project Butterfly of Damocles · Episode 3 of 10 · Part II — The dependency graph · ~30 min read

The 2014 DEF CON 22 scatter chart showed you the vulnerability density of the software you deploy directly. It did not show you the software underneath that software. It did not count the libraries bundled inside those libraries, the build tools that execute during your CI pipeline, the version tags that point to code that was trustworthy yesterday and is a credential stealer today.

The chart measured what you chose. The supply chain is what you inherited. And in 2026, the supply chain has been systematically weaponized by nation-states who understand the attack surface better than most of the organizations it belongs to.

847
Average transitive npm dependencies in a typical Node.js login form — 2024 Snyk data
17,000+
Malicious packages removed from npm and PyPI combined, 2021–2026
2 yrs
How long the XZ Utils attacker spent building trust before inserting the backdoor
350K+
Projects bundling OpenSSL — each inheriting its full vulnerability history

You are not running one application. You have never been running one application.

When a developer runs npm install to set up a project, they typically think of themselves as installing the packages listed in package.json. If that file lists 12 direct dependencies, the mental model is: I am now running 12 libraries plus my code. This mental model is wrong by two to three orders of magnitude.

Those 12 direct dependencies each have their own dependencies. Those have dependencies. The transitive closure of a typical Node.js application in 2024 contains 600 to 1,200 packages. A typical React application: 1,400+. A typical enterprise Node.js service with authentication, database access, and API integrations: north of 2,000. Each of those packages was written by a different person or team, published under a different maintenance model, subject to different security practices, and potentially maintained by nobody at all for the last three years.

Illustrative transitive dependency tree for a login form
Your login service
express
passport
jsonwebtoken
bcrypt
axios
dotenv
+6 more direct
↓ each of these has dependencies ↓
body-parser
debug
ms
jws
jwa
node-gyp
semver
+820 more transitive
Total: ~847 packages installed. ~835 of them you did not consciously choose.

The security question for each of those 847 packages is the same: who wrote it, who maintains it today, has it been audited for security issues, and would anyone notice if a malicious version was published? For a minority of packages, the answers are reassuring. For the majority, the honest answer to all four questions is: unknown, possibly nobody, no, and probably not for at least a few hours.

The left-pad incident — March 2016

Azer Koçulu was the author of left-pad, an 11-line npm package that left-pads a string with a specified character. It had no security vulnerabilities. It was not malicious. And when Koçulu unpublished it from npm in a dispute over a naming conflict, it immediately broke thousands of applications globally — including Babel, React, and large swaths of the Node.js ecosystem. The incident was not about security. It was about trust: the entire JavaScript build infrastructure had implicitly trusted that a package maintained by one person who had no particular obligation to keep it published would remain available indefinitely. This was the supply chain fragility problem made visible. The security version of the same vulnerability was already being written.


Three distinct eras of supply chain risk: from accidental to adversarial

The supply chain risk landscape did not arrive fully formed. It evolved through three distinct eras, each building on the structural vulnerabilities exposed by the previous one. Understanding the progression helps explain why the March 2026 attacks were not just larger than their predecessors — they were qualitatively different.

2010–2017
Era 1 — Accidental exposure
Neglect, not malice

The primary supply chain risk in this era was unmaintained packages with known vulnerabilities. A developer would install a library in 2012, the library would receive a critical CVE in 2014, and nobody would update it because the application was “working fine.” The vulnerability existed; no adversary had specifically placed it there. The risk was the gap between vulnerability disclosure and patch deployment, multiplied across the transitive dependency graph.

The canonical example: a 2016 survey by Snyk found that 14% of npm packages had at least one known vulnerability. The majority of those vulnerabilities were months or years old. The fix existed. The update had not happened. The window of exposure was entirely a function of organizational inertia, not attacker sophistication.

Representative incidents
left-pad unpublish (2016) — availability, not security, but exposed the fragility
Thousands of apps running Struts 2 with known RCE vulnerabilities (ongoing) — Equifax breach (2017) rooted here
2018–2022
Era 2 — Opportunistic insertion
Motivated attackers, broad targeting

The adversarial era begins. Attackers discovered that npm, PyPI, and other registries would accept and distribute packages from anyone, with minimal verification, to millions of developers worldwide. The attack model: publish a malicious package with a name similar to a popular legitimate package (typosquatting), or compromise a legitimate package’s maintainer account, and let the distribution network do the rest.

The event-stream incident in 2018 was the paradigm shift. A developer named Dominic Tarr maintained event-stream, a popular npm package. Someone volunteered to maintain it on his behalf, Tarr handed over maintainership, and the new maintainer published a version containing a cryptocurrency wallet harvester targeting a specific Bitcoin wallet application. The attack was targeted but distributed via a trusted package. The trust in the maintainer was the attack surface.

Representative incidents
event-stream (2018) — compromised maintainer, cryptocurrency wallet stealer in a trusted package, 2M weekly downloads
ua-parser-js (2021) — hijacked npm account, RAT/cryptominer deployed, 8M weekly downloads
colors/faker (2022) — intentional sabotage by the original author, infinite loop in protest of unpaid OSS labor
PyPI malicious package wave (2019–2022) — 7,500+ packages removed, primarily credential harvesters
2023–present
Era 3 — Strategic, persistent, targeted
Nation-state patience, infrastructure-level impact

The qualitative shift that defines the current era: nation-state actors applying the same patient, long-duration operational model they use for other intelligence collection to open source supply chain compromise. The XZ Utils attack (two years of trust-building before payload insertion) established the template. The March 2026 TeamPCP campaign demonstrated the cascade potential: compromise one trusted tool, harvest credentials, use those credentials to compromise the next tool, repeat. Automated, self-financing, and targeting the exact infrastructure organizations use to defend themselves.

The defining characteristic of Era 3 attacks is that they are not opportunistic. The targets are selected. The methods are tailored. The XZ attacker specifically targeted burned-out maintainers. UNC1069 specifically targeted the Axios maintainer because 100M weekly downloads made the ROI exceptional. TeamPCP specifically targeted Trivy because it had elevated CI/CD pipeline access by design. These are not script kiddies. They are intelligence operations applied to open source infrastructure.

Representative incidents
XZ Utils (2024) — 2-year operation, backdoor in sshd transitive dependency, caught by accident
TeamPCP / Trivy cascade (Mar 2026) — credential harvesting at CI/CD pipeline layer, 1,000+ orgs, European Commission 92 GB
UNC1069 / Axios (Mar 2026) — 2-week individualized social engineering, 174K downstream packages, cross-platform RAT

4,300+ malicious packages and counting: the economics of npm supply chain attacks

npm is the world’s largest software registry by package count, with over two million packages and approximately 30 billion downloads per week. It is also the supply chain attack surface that has attracted the most documented adversarial activity, for a straightforward reason: the blast radius of a successful npm compromise scales with download count, and npm download counts are extraordinary. Axios at 100M weekly downloads. Express at 30M. The moment a malicious version of either package is tagged latest, every npm install in every CI/CD pipeline that uses a floating version range pulls it automatically.

How an npm supply chain attack works: the Axios anatomy
1
Target selection
Identify a high-download-count package maintained by a small team. Calculate ROI: downloads × credential value per compromised system ÷ effort to compromise maintainer. Axios: 100M weekly downloads × (developer machine with AWS keys, GitHub tokens, npm tokens, database credentials) ÷ one maintainer with a public LinkedIn. Exceptional ROI.
2
Maintainer targeting
Social engineering campaign: impersonate a known company founder, invite to a crafted Slack workspace, schedule a Microsoft Teams call, fake an audio error, prompt the maintainer to install a “fix.” Two weeks of effort for an attacker with nation-state resources. The fix installs a RAT. The npm credentials are now in the attacker’s hands.
3
Registry pre-staging
Approximately 18 hours before the main attack, publish plain-crypto-js to npm — a clean, innocent-looking package that establishes registry history. This is the “cooling off” phase: a package published at the moment of the attack draws far more scrutiny than one that has been sitting quietly on the registry for 18 hours.
4
Payload publication
Using the stolen npm credentials, publish two malicious axios releases — one in the 1.14.x line and one in the 0.30.x line — both declaring plain-crypto-js as a dependency. Tag them latest and legacy respectively, covering both the current and backwards-compatible semver ranges. Any npm install with a ^1.14.0 or ~0.30.0 range now pulls the malicious version automatically.
5
Postinstall execution
plain-crypto-js’s package.json declares a postinstall script. When npm installs the package, it automatically runs setup.js — no user interaction required. setup.js identifies the operating system and downloads a platform-specific second-stage payload: a Nim-based backdoor for macOS, a Go binary for Windows, a C++ implant for Linux. Three platforms, one delivery mechanism, zero user consent.
6
Credential exfiltration
The deployed RAT runs SilentSiphon, which harvests credentials from browsers, password managers, and secrets associated with GitHub, GitLab, npm, pip, RubyGems, NuGet, and cloud providers. The CosmicDoor backdoor establishes C2. The attacker now has persistent access to every developer machine that ran npm install during the three-hour window.
7
Detection and containment
An axios collaborator with fewer privileges than the compromised account notices the malicious dependency, opens a deprecation PR, and escalates to npm directly at 01:38 UTC. npm removes the malicious versions at 03:15 UTC. The window was 2 hours and 54 minutes. In that window, the package was tagged latest and available on the global npm CDN. Every npm install that resolved a matching floating range during those three hours pulled it.
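The hook mechanism in step 5 is ordinary, documented npm behavior, which is why it needs no exploit. A minimal, benign sketch of the same mechanics (all names here are illustrative stand-ins, not the actual payload):

```shell
# Illustrative sketch of the lifecycle-hook mechanism abused in step 5.
mkdir -p /tmp/hookdemo && cd /tmp/hookdemo

# A "postinstall" entry in package.json runs automatically on `npm install`
cat > package.json <<'EOF'
{
  "name": "hook-demo",
  "version": "1.0.0",
  "scripts": {
    "postinstall": "node setup.js"
  }
}
EOF

# The script the hook points at; an attacker's version would fetch a
# platform-specific second stage instead of printing a line
echo 'console.log("ran with zero user interaction")' > setup.js

# `npm install` in a consuming project would now execute setup.js
# automatically; `npm install --ignore-scripts` would not.
```

This is why `npm config set ignore-scripts true` in CI removes the entire attack step rather than detecting it.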

The detection signal that most organizations missed: the malicious Axios versions had no SLSA build provenance. Legitimate Axios releases have always been published via GitHub Actions with OIDC provenance metadata and SLSA level 2 attestations linking the npm package back to a specific GitHub Actions run. The malicious versions were published directly, via stolen credentials, with no attestation. For any organization monitoring SLSA provenance on their critical dependencies, the absence of the attestation on a new major-package release was an automatic alert. Most organizations were not monitoring this.

SLSA provenance absence is currently one of the strongest detection signals for supply chain attacks on high-profile packages. Major packages that have historically published with SLSA attestations will produce no attestation when published via a compromised account using a stolen token. This check is implementable today using npm audit signatures (npm 9+) and does not require waiting for the package to be flagged malicious. The Axios incident was detectable within seconds of publication for any organization with this check in their CI pipeline.
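The check is small enough to live in any pipeline. A hedged sketch of a GitHub Actions step, assuming npm 9+ and a committed lockfile (the step name is illustrative):

```yaml
- name: Verify registry signatures and provenance
  run: |
    npm ci --ignore-scripts
    # npm 9+: verifies registry signatures and published attestations for
    # every package in the lockfile; a failed verification exits non-zero
    # and fails the build
    npm audit signatures
```

One caveat: npm audit signatures verifies the attestations that exist; it does not by itself flag a release that silently dropped its attestations. For critical dependencies, pair it with an explicit expectation that provenance must be present.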

The floating version problem — why ^ and ~ are attack surface

npm’s semver range syntax was designed for convenience. ^1.14.0 means “any compatible 1.x version.” ~1.14.0 means “any patch version of 1.14.” Both are extremely common in package.json files. Both mean that running npm install after a new malicious version is published will silently upgrade to the malicious version. The lockfile (package-lock.json) pins exact versions, but only if the lockfile exists and npm ci is used instead of npm install. Many CI/CD pipelines still use npm install. Many do not commit lockfiles. The floating version pattern, combined with auto-update bots like Dependabot, means that malicious versions can be pulled automatically without any human ever making a deliberate decision to upgrade.
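Concretely, the difference is one character in package.json (versions below are illustrative):

```json
{
  "dependencies": {
    "axios": "^1.14.0",
    "express": "4.18.2"
  }
}
```

The ^ range re-resolves to the newest compatible version on every lockfile-less npm install, so a malicious 1.14.1 tagged latest is pulled automatically; the exact pin resolves to one version forever. With a committed lockfile and npm ci, resolution is deterministic in either case — which is why the lockfile discipline matters more than the range syntax.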


7,500+ malicious packages: when the data science toolchain becomes the delivery mechanism

PyPI occupies a different position in the supply chain attack landscape than npm. It is the primary distribution mechanism for Python packages, and Python is the dominant language for machine learning, data science, and increasingly AI infrastructure. This means a successful PyPI supply chain attack can target not just developer machines, but production AI workloads, model training pipelines, and the infrastructure that manages API keys for every LLM provider an organization uses.

The LiteLLM compromise in March 2026 was the clearest demonstration of this dynamic. LiteLLM is an AI gateway library that routes requests to over 100 LLM providers — OpenAI, Anthropic, Azure OpenAI, Google Vertex AI, AWS Bedrock, and more. It stores all the corresponding API keys. It runs in 36% of monitored cloud environments according to Wiz Research. When TeamPCP published malicious versions 1.82.7 and 1.82.8 to PyPI, they did not just compromise developer machines. They targeted the keys to every AI provider an organization uses, simultaneously.

npm vs. PyPI attack dynamics

Primary target
  npm: developer machines, CI/CD pipelines, Node.js services
  PyPI: data science environments, ML training pipelines, AI gateways
Most valuable credentials
  npm: GitHub tokens, npm publish tokens, cloud credentials from CI runners
  PyPI: LLM API keys, model registry tokens, GPU cluster credentials
Novel 2026 vector
  npm: postinstall hook executes automatically, with no user interaction
  PyPI: .pth file executes on every Python interpreter startup, before any import
Persistence mechanism
  npm: RAT binary, scheduled tasks
  PyPI: .pth file in site-packages survives package removal; Kubernetes kube-system pods
Detection signal
  npm: absent SLSA provenance on a major package release
  PyPI: missing build attestation; litellm_init.pth in site-packages
Blast radius amplifier
  npm: 174,000 downstream packages transitively depend on Axios
  PyPI: LiteLLM present in 36% of cloud environments, often as a transitive dependency

The .pth persistence mechanism in the LiteLLM attack deserves special attention because it is architecturally novel. Python’s site-packages directory supports .pth files — path configuration files that are processed at Python interpreter startup, before any user code runs. A .pth file that contains an import statement will execute that import on every Python invocation: python script.py, pip install, pytest, jupyter notebook. Every Python command becomes a trigger for the malware. And removing the malicious LiteLLM packages does not remove the .pth file unless you know to look for it.
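The mechanism is easy to demonstrate safely. A minimal sketch with a benign payload, assuming python3 is on PATH and using a throwaway virtual environment so nothing real is touched:

```shell
# Minimal demonstration of the .pth execution mechanism (benign payload).
python3 -m venv --without-pip /tmp/pthdemo
SITE=$(/tmp/pthdemo/bin/python -c "import site; print(site.getsitepackages()[0])")

# Any line in a .pth file that starts with "import" is executed by the
# interpreter at startup, before any user code runs
echo 'import sys; sys.stderr.write("pth hook ran first\n")' > "$SITE/demo_hook.pth"

# Every invocation of this interpreter now triggers the hook
/tmp/pthdemo/bin/python -c "print('user code runs second')"
```

Nothing imports the hook; the site module executes it unconditionally. That is why uninstalling the package that planted the file changes nothing.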

If you ran LiteLLM 1.82.7 or 1.82.8 — remediation checklist

Removing the package is necessary but not sufficient. Check for litellm_init.pth in your Python site-packages directory. Rotate all LLM API keys stored in or accessible to the affected environment. Audit Kubernetes clusters for unauthorized pods in kube-system. The malicious package deployed privileged pods to every cluster node accessible from the affected environment; these pods have full host filesystem access and persist after package removal. Treat any system that ran a Python interpreter after the package installation as fully compromised.
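The .pth hunt from the checklist can be scripted. A hedged sketch — it enumerates the site-packages directories of whatever python3 is on PATH, which may not cover every interpreter or venv you deploy:

```shell
# Enumerate .pth files the interpreter will execute at startup.
# litellm_init.pth is the published IOC; review anything else unfamiliar.
list_pth() {
  # print every .pth file directly under the given directory
  find "$1" -maxdepth 1 -name '*.pth' -print 2>/dev/null
}

# Apply to each site-packages directory of the python3 on PATH
for d in $(python3 -c 'import site; print("\n".join(site.getsitepackages()))'); do
  list_pth "$d"
done
```

Note that some .pth files are legitimate (editable installs, path extensions); the signal is a file you cannot attribute to a package you intentionally installed.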

The typosquatting problem: 7,500 packages with names designed to deceive

Not all PyPI malicious packages are compromised legitimate ones. A substantial fraction are purpose-built fakes: packages named to be confused with legitimate ones, published by attackers, waiting to be installed by developers who make a one-letter typo or copy-paste a package name incorrectly.

Documented typosquatting patterns against ML/AI packages
torch
torchtriton (dependency-confusion compromise of the PyTorch nightly channel, Dec 2022), pytoch, torh
Credential harvesters targeting ML engineers; some contained trojaned model loading code
tensorflow
tensforflow, tensorfow, tensorflow-gpu (various fakes)
Particularly dangerous: ML engineers often install GPU variants outside standard package managers
transformers (HuggingFace)
transformer, tranformers, huggingface-transformers
Target: model downloading code; some fakes phoned home with model access patterns and API keys
requests
request, reqests, python-requests
Universal Python HTTP library; presence in virtually every Python project makes it a high-value fake target

When your application bundles a library’s vulnerability history along with its code

The npm and PyPI supply chain problems involve explicit package management: there is a record of what you installed, a registry that distributed it, and in principle a process for identifying and removing malicious versions. The C/C++ supply chain problem is more insidious because much of it is invisible to standard software composition analysis tools.

Many C and C++ projects bundle their dependencies directly into the source tree rather than declaring them as external dependencies managed by a package manager. This was a common practice before package managers became standard in C/C++ development, and it persists today for reasons of build reproducibility and convenience. The practical consequence: if you have a copy of zlib embedded in your project’s source tree from 2018, and zlib received a critical CVE in 2020, your application is vulnerable — but no SCA tool will find it, because the vulnerable code is not a declared dependency. It is just code.

OpenSSL bundling: 350,000+ projects

OpenSSL is the world’s most bundled C library. Applications that need TLS support have historically either linked against the system OpenSSL or embedded a private copy in their source tree. The private copy pattern means that every CVE in OpenSSL history potentially has a long tail of applications that remain vulnerable long after the OpenSSL project itself has patched the issue.

Mythos’s finding of a 27-year-old bug in OpenBSD is relevant here: OpenBSD is one of the more security-conscious projects in the ecosystem, yet even it carried a vulnerability for 27 years. Projects with less security focus and worse patching practices carry older bugs for longer.

350K+
Projects bundling or linking OpenSSL
~8 yrs
Average lag between an OpenSSL CVE patch and the corresponding downstream application patch

FFmpeg: 16 years of undetected history

FFmpeg is the multimedia processing library used by major browsers, video streaming services, and video editing tools, and by hundreds of thousands of applications for encoding, decoding, and processing audio and video. Project Glasswing’s Claude Mythos Preview found a 16-year-old vulnerability in FFmpeg in its first weeks of operation.

This is not surprising in the context of the 2014 analysis. FFmpeg is a large C codebase that processes untrusted media files — an attack surface class that was identified as high-risk in 2014. The specific 16-year-old bug had been present since the era before systematic automated security analysis of the codebase. It survived 16 years of code review, security audits, and CVE disclosures about other parts of the codebase. Not because it was subtle. Because nobody looked in exactly the right place.

16 yrs
How long the Mythos-found FFmpeg bug existed before discovery
100s of millions
Devices that process video using FFmpeg or FFmpeg-derived code

Log4j: the nuclear option in bundled Java

Log4Shell (CVE-2021-44228) is the case study in how a bundled library vulnerability becomes a civilization-scale security incident. log4j-core was bundled in thousands of Java applications — often in .jar files inside .war files inside .ear files, nested three layers deep in application archives. Standard vulnerability scanning tools in 2021 could not look inside nested archives. The vulnerability existed for almost a decade before discovery. When it was disclosed, the patch deployment took years because most organizations couldn’t reliably enumerate all the places where they had log4j running.

~10 yrs
How long Log4Shell existed before discovery
$2.4B+
Estimated global remediation cost (conservative)

XZ Utils: the sleeper

XZ Utils provides lossless data compression. It is a transitive dependency of systemd on most Linux distributions, and systemd manages the SSH daemon on those distributions. In 2024, a patient attacker spent two years becoming a trusted XZ Utils contributor before inserting a carefully crafted backdoor that targeted the RSA key decryption path in affected sshd configurations. The backdoor was not in the source code in an obvious way — it was injected via a test file that the build system executed during compilation. It bypassed source code review. It was found by a Microsoft engineer noticing that SSH logins were slightly slower than expected on an unrelated benchmark.

2 yrs
Attacker preparation time before payload insertion
0
Security tools that would have caught this without the CPU benchmark anomaly

When the tools you trust to build and secure your software become the entry point

The March 2026 TeamPCP campaign introduced a new chapter in supply chain attack history: the systematic weaponization of the DevSecOps toolchain itself. Not the applications. Not the libraries. The vulnerability scanners, the static analysis tools, the CI/CD actions that run inside your build pipeline with ambient access to all of your secrets.

Understanding why this attack class is so potent requires understanding what a CI/CD runner knows. When a GitHub Actions workflow runs, it has access to every secret configured for that repository or organization: cloud provider credentials, database passwords, code signing keys, npm and PyPI publish tokens, Docker registry credentials. The runner executes with these secrets loaded as environment variables. Any code that executes inside the runner can read them.

Trivy is a vulnerability scanner. It runs in CI/CD pipelines to scan container images and code for known vulnerabilities. It is trusted. It runs on every PR, every merge, every deployment. And when TeamPCP compromised Trivy’s GitHub Actions workflow, every pipeline that ran Trivy — the tool literally designed to make you more secure — began exfiltrating every secret the runner had access to.

Why CI/CD pipeline secret exposure is structurally worse than application-layer exposure
Application-layer compromise
  • Attacker gets the credentials configured for that application
  • Blast radius: the services that application is authorized to access
  • Rotation scope: credentials for that application
  • Detection: anomalous API calls, runtime monitoring
CI/CD runner compromise
  • Attacker gets every secret configured for the entire pipeline: cloud provider credentials, code signing keys, registry tokens, deploy keys, service account tokens
  • Blast radius: every service the pipeline touches, which is typically everything
  • Rotation scope: all secrets across all environments that use that pipeline
  • Detection: extremely difficult — the malicious code runs inside a trusted tool with no anomalous behavior visible at the application layer
The mutable git tag vulnerability — the structural flaw TeamPCP exploited

Git tags are designed to be immutable markers for specific commits: by convention, v0.69.4 should always point to the same commit. In practice, git tags are not immutable — anyone with write access to a repository can force-push a tag to point to a completely different commit. GitHub does not prevent this, does not alert users whose workflows reference the tag, and does not visibly distinguish a force-pushed tag from the original.

TeamPCP exploited exactly this: they force-pushed 76 of 77 trivy-action version tags to point to malicious commits. Every CI/CD pipeline that referenced trivy-action by version tag — the standard pattern in the GitHub Actions documentation — began running the attacker’s code on its next execution.

The mitigation: pin GitHub Actions to full commit SHAs, not version tags. uses: aquasecurity/trivy-action@<full-40-character-commit-SHA> instead of uses: aquasecurity/trivy-action@v0.69.4. A commit SHA names exact content and cannot be re-pointed.
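The tag mutability itself takes three commands to demonstrate. A local sketch (a throwaway repository stands in for a remote; the commit messages are illustrative):

```shell
# Local demonstration that a git tag can be silently re-pointed.
set -e
rm -rf /tmp/tagdemo && git init -q /tmp/tagdemo && cd /tmp/tagdemo
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m "good release"
git tag v1.0.0
GOOD=$(git rev-parse v1.0.0)

git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m "malicious commit"
git tag -f v1.0.0   # force-move: same tag name, different code, no warning to consumers
EVIL=$(git rev-parse v1.0.0)

# The tag moved; a SHA reference to $GOOD would still name the good commit
[ "$GOOD" != "$EVIL" ] && echo "v1.0.0 now points at different code"
```

Against a remote, the equivalent is `git push --force origin v1.0.0`, after which every consumer resolving the tag gets the new commit.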

npm ecosystem · ~4,300+
Known malicious packages 2021–2026
  • event-stream (2018) — compromised maintainer, wallet-stealing payload in a trusted package critical
  • ua-parser-js — account hijack, RAT/cryptominer, 8M wkly DLs critical
  • colors/faker (2022) — intentional sabotage by author high
  • Axios (Mar 2026) — DPRK RAT, 100M wkly DLs, 174K downstream pkgs critical
  • Typical app: 847 transitive deps, most unmaintained high surface
PyPI / Python · ~7,500+
Malicious / typosquatting packages 2019–2026
  • torch typosquats — ML supply chain credential harvesters critical
  • ShadowRay (2024) — Ray framework unauth RCE critical
  • LiteLLM 1.82.7/1.82.8 (Mar 2026) — TeamPCP .pth persistence critical
  • Telnyx SDK — WAV steganography second-stage payload critical
  • Typical ML app: 200–600 transitive deps high surface
C/C++ / systems · structural
Bundled library transitive exposure
  • XZ Utils (2024) — 2-year social engineering operation, sshd backdoor critical
  • Log4Shell — bundled log4j, JVM-scale, 10 years undetected critical
  • FFmpeg — 16-year-old bug found by Mythos in weeks critical
  • OpenSSL bundled in 350K+ projects, patching lag ~8 years high
  • zlib / libpng stale copies endemic across C/C++ ecosystem high
CI/CD / DevSecOps tooling · weaponized
Security tools turned attack vectors — March 2026
  • Trivy v0.69.4 — credential stealer in scanner, 76/77 tags poisoned critical
  • Checkmarx KICS — 35 version tags force-pushed, sysmon persistence critical
  • Mutable git tags — exploitable by design in all GitHub Actions critical
  • 82% of Docker Hub images have known high/critical vulns (Snyk 2024) high
  • GitHub Actions ambient secret exposure endemic across pipelines high

The XZ Utils backdoor (CVE-2024-3094) is the canonical proof of concept for the maintainer-as-attack-surface model I described in 2014. A burned-out volunteer was socially engineered over two years by an attacker who contributed code, built trust, then inserted a backdoor into a transitive dependency of sshd. Nobody caught it through code review. A Microsoft engineer caught it through anomalous CPU benchmarking. It served as the direct operational template for both March 2026 campaigns. The playbook was published. Nation-states read it. And the same structural vulnerability — critical infrastructure, volunteer maintainer, no dedicated security support — is reproduced across every project category in this episode.


From reactive to proactive: what the supply chain security posture actually requires

The supply chain risk categories above are not equally tractable. Some have mature, deployable mitigations today. Others require structural changes to the ecosystem that are years away from broad adoption. It is important to be honest about the difference, because security theater — actions that feel like security improvements without actually reducing risk — is particularly rampant in supply chain security.

Deployable today — high impact
  • Pin GitHub Actions to full commit SHAs, not version tags. Non-negotiable after Trivy. uses: owner/action@<full-40-char-commit-SHA>, not uses: owner/action@v3
  • Use npm ci instead of npm install in all CI/CD pipelines. Respects the lockfile exactly, does not auto-upgrade.
  • Commit package-lock.json and equivalent lockfiles to version control. Without this, npm ci cannot pin versions.
  • Disable npm lifecycle scripts globally for CI environments: npm config set ignore-scripts true. Prevents postinstall hook execution class of attacks.
  • Monitor SLSA provenance on high-impact packages. Absence of SLSA attestation on a new release of a package that historically had it is an immediate alert.
  • Implement a “package release cooldown” policy: do not auto-update packages published less than 72 hours ago. This window gives the security community time to analyze new releases.
  • Audit Python site-packages for unexpected .pth files after any dependency changes.
Medium-term — requires tooling investment
  • SBOM generation and continuous monitoring: know what’s in your software at all times, not just at build time.
  • Software Composition Analysis (SCA) integrated into the CI pipeline, not just as a periodic scan.
  • Private package mirror with allow-listing: only packages on the approved list can be installed. Typosquatting attacks require the attacker to be on your allow list.
  • Secrets management that limits what secrets any given CI runner can access. Principle of least privilege for pipelines: a vulnerability scanner does not need your production database credentials.
  • Container image scanning with known-vulnerability blocking, not just alerting. An image with a CVSS 9.x critical will not reach production.
Long-term — ecosystem-level change required
  • Registry-level SLSA provenance requirements: npm and PyPI requiring all packages to have verified build provenance before publication.
  • Maintainer identity verification: meaningful authentication of package maintainer identity, not just email address ownership.
  • Sustained funding for critical OSS maintainers: the XZ Utils attack targeted a burned-out volunteer because burned-out volunteers are more susceptible. Funding maintainers is a supply chain security control.
  • Broad adoption of memory-safe languages in new systems software: Rust, Go, Swift for new C/C++ replacement code. Reduces the bundling vulnerability class over a 10-20 year horizon.
  • Immutable package registry history: packages that have been published cannot be unpublished. Addresses the availability issue (left-pad) while also preventing retroactive tampering.
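Several of the deployable-today controls compose into a few lines of workflow configuration. A hedged GitHub Actions sketch — the job layout is illustrative and the SHA is a placeholder to be replaced with the reviewed release commit:

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      # SHA-pin every action; a mutable tag like @v4 can be force-pushed
      - uses: actions/checkout@<full-40-char-commit-sha>
      - name: Install dependencies deterministically
        # lockfile-exact install, lifecycle scripts disabled
        run: npm ci --ignore-scripts
```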

Why the supply chain is exactly where Glasswing’s findings will land

The structural picture that emerges from this episode is clarifying about why Project Glasswing is both necessary and potentially problematic. Mythos Preview has already found thousands of zero-day vulnerabilities — in operating systems, in browsers, in the kind of foundational software analyzed at DEF CON 22. Those vulnerabilities exist in the same ecosystem that the March 2026 supply chain attacks just demonstrated is simultaneously: (a) operated by people who are high-value social engineering targets, (b) served through pipelines that have elevated credential access, and (c) depended upon by hundreds of thousands of downstream applications that will inherit the vulnerability until patched.

When Glasswing finds a 16-year-old FFmpeg bug, the disclosure goes to FFmpeg maintainers. Those maintainers build the patch. The patch is released. Downstream projects — those 350K+ that bundle or link FFmpeg — need to update. Most of them won’t know they need to update unless they have SCA tooling running. Most of them won’t update immediately even if they know, because change is hard and breaking changes are worse than theoretical vulnerabilities. And in the meantime, the disclosure has been published — accessible to adversaries who can weaponize it faster than the patching process moves.

The supply chain is not a separate problem from the vulnerability problem. It is the mechanism by which the vulnerability problem propagates. Every bug Glasswing finds is a bug that will need to be patched through a supply chain that has been demonstrated to be systematically compromisable by patient, resourced adversaries. The discovery velocity has changed. The remediation velocity has not. And the supply chain infrastructure between them has been confirmed as an active attack surface.

The 2014 observation: “you are not running one application; you are running 847 applications, most of which nobody has audited.” The 2026 update: “you are not deploying one vulnerability fix; you are deploying it through 847 pipelines, most of which run scanners that were recently demonstrated to be credential stealers.”

The supply chain was always the liability side of the open source balance sheet. What changed in 2026 is that nation-state actors are now systematically exploiting it, the tools designed to help you find vulnerabilities in your supply chain are now targets themselves, and the most powerful vulnerability-finding capability ever built is about to produce thousands of new findings that need to be patched through that exact infrastructure. The math does not improve until the denominator changes — and the denominator is the human beings maintaining open source software on volunteer time with no dedicated security support.

supply chain security npm PyPI transitive dependencies XZ Utils CVE-2024-3094 Log4Shell event-stream Axios LiteLLM Trivy CI/CD security SLSA provenance SBOM mutable git tags typosquatting FFmpeg postinstall hooks Project Glasswing Project Butterfly of Damocles
