It’s no exaggeration to say that modern software runs on open source. Every product, platform, and digital experience we rely on, whether built by scrappy startups or global enterprises, leans heavily on third-party components. This is no accident. Open-source packages, commercial components, and public libraries accelerate innovation, drive down development costs, and have become the invisible scaffolding of the Internet. GitHub recently highlighted that 99% of all software projects use third-party components.
But with great reuse comes great risk.
Third-party code is a double-edged sword. On the one hand, it’s indispensable. On the other hand, it’s a potential liability. In our race to deliver software faster, we’ve created sprawling software supply chains with thousands of dependencies, many of which receive little scrutiny after the initial deployment. These dependencies often pull in other dependencies, each one potentially introducing outdated, vulnerable, or even malicious code into environments that power business-critical operations.
The result? A software ecosystem where trust is assumed but rarely verified.
When trust becomes a threat
Even the most sophisticated software organizations can be caught off guard. A relatively recent example comes from a long-standing open-source project: polyfill.io, a widely used library that helps ensure JavaScript compatibility across browsers. For years, it was seen as a safe and helpful tool for smoothing out differences in cross-browser support. But one day, the domain, along with the CDN distributing the library, was quietly sold and repurposed by a new maintainer, who began injecting malicious payloads into the polyfills being served to millions of users.
This wasn’t a zero-day exploit or a novel vulnerability. It was a supply-chain hijack hiding in plain sight. And it worked because we tend to treat third-party code as static and safe. Once integrated, it often becomes invisible.
Similar incidents are becoming the rule, not the exception. I saw an Enterprise Strategy Group (ESG) report last year that found a staggering 91% of surveyed organizations had experienced some form of software supply chain attack in the preceding 12 months. That’s a big number. Increasingly, attackers aren’t bothering to storm the front gate; they can simply walk in through a side door marked “npm install.”
First-party diligence, third-party blind spots
Many development teams have processes in place for reviewing first-party code. We have code reviews, security testing, and CI/CD pipeline checks, all designed to catch issues before they make it to production.
And yet that same rigor rarely applies to third-party packages. Why? Because we don’t own them. Because they’re “someone else’s responsibility.” Because updating or removing them feels too risky, too disruptive, or too complex.
Ironically, even though these dependencies often make up the majority of an application’s actual codebase, many remain completely unmonitored after initial vetting, adoption, and deployment. This lack of visibility creates fertile ground for attackers and leaves organizations scrambling to react when something goes wrong.
The latest AI code generation tools make this situation even worse. There is solid evidence that while we tend to believe AI-generated code is more secure than our own, it is often less so. Bury that code inside a third-party component and you have a recipe for vulnerability.
It’s not just what’s in your code—it’s what your code trusts
When we think about software risk, we tend to focus on newly created code. It’s the most visible and also the most important part of the application. It’s where the business logic and business value reside. From a security standpoint, though, all those meticulously checked new bits are only one small part of the overall attack surface.
A more accurate threat model needs to go far beyond first-party code to encompass everything your applications are running, whether or not you deliberately put it there. That includes transitive dependencies, where a single package you install pulls in dozens more, many of them unknown to the developer.
Hiding in that crowd could be abandoned libraries that quietly do their job but haven’t been actively maintained in years, bringing their existing and newly discovered vulnerabilities into your environment. And that’s before you even get to deliberate malicious action to compromise or impersonate popular packages or their distribution networks, as with polyfill.io.
Any of these scenarios can lead to real-world consequences. From data exfiltration and credential theft to full-scale breaches, third-party compromises have become a preferred tactic for adversaries—because they’re often easier and more discreet than breaking down the front door.
Rethinking what we trust
The risk is real, so what do we do? We can start by treating third-party code with the same caution and scrutiny we apply to everything else that enters the production pipeline. This includes maintaining a living inventory of all third-party components (including transitive dependencies) across every application and monitoring their status to prescreen updates and catch suspicious changes.
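To make that concrete, here’s a minimal sketch of what a living inventory can start from in an npm-based project, assuming a package-lock.json in lockfile version 2 or 3: it flattens every installed package, direct and transitive, into a single list you can diff between builds. Real SBOM tooling (CycloneDX or SPDX generators) goes much further, but even this level of visibility beats none.

```python
import json
from pathlib import Path


def inventory_npm_lockfile(lockfile_path: str = "package-lock.json") -> dict[str, str]:
    """Return a flat {package_name: version} inventory, including transitive deps."""
    lock = json.loads(Path(lockfile_path).read_text())
    inventory: dict[str, str] = {}
    # lockfileVersion 2/3 records every installed package (direct and transitive)
    # under "packages", keyed by its node_modules path.
    for path, meta in lock.get("packages", {}).items():
        if not path:  # the "" key is the root project itself
            continue
        name = path.split("node_modules/")[-1]
        inventory[name] = meta.get("version", "unknown")
    return inventory


if __name__ == "__main__":
    deps = inventory_npm_lockfile()
    print(f"{len(deps)} installed packages (direct + transitive)")
    for name, version in sorted(deps.items()):
        print(f"{name}=={version}")
```

Regenerate the inventory on every build and treat any unexplained change, such as a new package or an unexpected version jump, as something to investigate rather than something to shrug off.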
With so many ways for threats to hide, we can’t take anything on trust, so next comes actively checking for outdated or vulnerable components as well as new vulnerabilities introduced by third-party code. To cover all bases and catch all dependencies, start with conventional static code and component checks but be sure to also run dynamic tests in development and after deployment. Static scans alone won’t tell you if something deployed last year has become dangerous today.
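As one illustration of what active checking can look like, the sketch below asks the public OSV.dev vulnerability database whether a specific package version has known advisories; the lodash version in the example is purely illustrative. Dedicated SCA and DAST tooling does this at far greater depth, but the point stands: the lookup has to be repeated over time, because a clean result today says nothing about next month.

```python
import json
import urllib.request

OSV_QUERY_URL = "https://api.osv.dev/v1/query"


def known_vulnerabilities(name: str, version: str, ecosystem: str = "npm") -> list[str]:
    """Ask the public OSV.dev database for advisories affecting one package version."""
    payload = json.dumps({
        "version": version,
        "package": {"name": name, "ecosystem": ecosystem},
    }).encode()
    req = urllib.request.Request(
        OSV_QUERY_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        result = json.load(resp)
    # OSV returns an empty object when no advisories match.
    return [vuln["id"] for vuln in result.get("vulns", [])]


if __name__ == "__main__":
    # Illustrative example: an older lodash release with published advisories.
    print(known_vulnerabilities("lodash", "4.17.15"))
```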
Finally, build component distrust into your operational security model (which you should be doing anyway). Establish strong update policies that don’t leave libraries to age unchecked for months or years on end. Define SLAs for security patches and involve security teams early when considering new packages.
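To show how an aging policy can be made checkable rather than aspirational, here’s a hedged sketch that measures how far a pinned npm release trails the latest published one, using public npm registry metadata. The 180-day threshold and the example package and version are placeholders for whatever your own SLA defines.

```python
import json
import urllib.request
from datetime import datetime

STALENESS_SLA_DAYS = 180  # hypothetical policy threshold, not a recommendation


def release_dates(package: str) -> tuple[str, dict[str, str]]:
    """Fetch a package's registry metadata: its 'latest' tag and per-version publish times."""
    url = f"https://registry.npmjs.org/{package}"
    with urllib.request.urlopen(url, timeout=10) as resp:
        meta = json.load(resp)
    return meta["dist-tags"]["latest"], meta["time"]


def is_stale(package: str, pinned_version: str) -> bool:
    """Flag a dependency whose pinned release trails the latest by more than the SLA window."""
    latest, times = release_dates(package)
    if pinned_version == latest:
        return False
    pinned_date = datetime.fromisoformat(times[pinned_version].replace("Z", "+00:00"))
    latest_date = datetime.fromisoformat(times[latest].replace("Z", "+00:00"))
    return (latest_date - pinned_date).days > STALENESS_SLA_DAYS


if __name__ == "__main__":
    print(is_stale("express", "4.17.1"))  # illustrative package and version
```

Wire a check like this into CI and the policy stops being a document nobody reads; it becomes a failing build that someone has to own.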
Ultimately, we all need to adopt a zero-trust mindset toward third-party code. That doesn’t mean blocking its use—but it does mean validating continuously, assuming risk, and building processes that can catch drift before it becomes a disaster.
Trust but verify—and keep verifying
In software, we’ve come to view dependencies as safe because they’re so helpful and so common. But ubiquity isn’t the same as security.
The reality is that third-party code behaves like any other code: it evolves, it changes, and it can be compromised. The only difference compared to your in-house repos is who controls it—and in most cases, that’s not you.
Speaking as the CEO of a cybersecurity company, I believe it’s time for all of us in the tech industry to confront the (possibly uncomfortable) truth: third-party code is an inevitable and massive part of our attack surface. If we don’t treat it that way, we’re gambling with our customers’ trust and the integrity of our businesses. At Invicti, we believe that starting with frequent scanning using our DAST-first approach is the best way to secure your application environment from the outside in.
The way forward isn’t to ban third-party code or build everything from scratch. That’s neither practical nor scalable. The answer lies in awareness, oversight, and tooling that gives you real visibility into what your applications rely on—and how those dependencies behave, not just when first deployed and tested but every day thereafter.
Because whether or not you are watching your software supply chain, potential attackers are.