From Noise to Signal: Using Runtime Context to Win the Vulnerability Management Battle

By Jonas Rosland

Security teams in 2026 have no shortage of data, alerts, or findings. In 2025 alone, 48,185 Common Vulnerabilities and Exposures (CVEs) were published, a 20.6% increase over 2024’s already record-breaking total of 39,962. That works out to roughly 130 new vulnerabilities disclosed every single day, and for seven consecutive years, the annual count has hit a new record high.

The drivers are structural: the explosive growth of open source software, the complexity of transitive dependencies hidden deep in software supply chains, and an expanding CVE ecosystem that now encompasses nearly twice as many reporting organizations as it did five years ago. With 97% of commercial applications containing open source components, inherited risk has become a routine part of working with modern software.

Only about 2% of all discovered vulnerabilities are ever exploited in the wild, but of that small fraction, nearly 29% are exploited on or before the day their CVE is published. Attackers are selective, but once they identify a target, the window for defenders is very narrow. The window between vulnerability disclosure and confirmed exploitation is also shrinking: in 2020 that timeline was over a year; it has now shrunk to just hours.

The old model of scanning everything, triaging by Common Vulnerability Scoring System (CVSS) score, and working through a queue simply cannot keep pace with this reality. Something has to change.

Most vulnerabilities will never be exploited

The vast majority of what your vulnerability scanner finds will never actually be used against you. That means the core challenge facing security teams isn’t patching speed, but knowing where to focus. When a scanner returns thousands of findings ranked only by CVSS score, what looks like a workload problem is really a prioritization problem. Critical vulnerabilities in libraries that aren’t loaded at runtime, or in containers that haven’t run in months, crowd out the findings that genuinely matter, such as exploitable vulnerabilities in running, exposed workloads. The result is alert fatigue, missed priorities, and growing friction between security and development teams.

The OpenSSF Best Practices criteria reflect this directly. At the “Passing” level, projects must not contain unpatched vulnerabilities of medium or higher severity that have been known publicly for more than 60 days, and critical vulnerabilities should be fixed rapidly after they are reported. The emphasis here isn’t on the volume of findings processed, but on the speed and accuracy with which the most dangerous vulnerabilities are addressed, a distinction that gets lost when teams are buried in undifferentiated backlogs.
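The 60-day criterion above lends itself to a simple policy check. The sketch below is illustrative, not part of any OpenSSF tooling: the finding records and package names are hypothetical, and the severity labels follow common scanner conventions.

```python
from datetime import date

# The 60-day threshold mirrors the OpenSSF Best Practices "Passing" criterion
# for publicly known vulnerabilities of medium or higher severity.
SLA_DAYS = 60

def overdue_findings(findings, today):
    """Return medium+ severity findings publicly known longer than the SLA."""
    actionable = {"MEDIUM", "HIGH", "CRITICAL"}
    return [
        f for f in findings
        if f["severity"] in actionable
        and (today - f["published"]).days > SLA_DAYS
    ]

# Hypothetical findings: (package, scanner severity, CVE publication date).
findings = [
    {"package": "libexample", "severity": "HIGH", "published": date(2025, 1, 10)},
    {"package": "othertool", "severity": "LOW", "published": date(2024, 6, 1)},
    {"package": "fastparse", "severity": "MEDIUM", "published": date(2025, 3, 1)},
]

# Only libexample is both medium+ severity and past the 60-day window.
print([f["package"] for f in overdue_findings(findings, date(2025, 3, 20))])
```

A check like this can run in CI against the current backlog, turning the badge criterion into a continuously enforced gate rather than a periodic audit.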

Why static analysis alone isn’t enough

Static analysis is non-negotiable. The OpenSSF Best Practices criteria require it at the “Passing” level, and at “Silver,” projects must use tools that look for real vulnerabilities in code, not just style issues. Integrated into CI/CD pipelines, static analysis catches bugs early when they are cheapest to fix, and it remains a solid foundation of any security program. However, alone, it’s not enough.

The limitation is that static analysis sees everything, regardless of whether it matters in practice. It cannot tell you whether a vulnerable library is actually loaded in a running container, or whether that container ever receives external traffic. A CVSS 9.8 score looks identical whether the package is called thousands of times a day in a critical service or has never once been invoked in production. Runtime security fills that gap by observing what is actually executing in production. By tracking which processes are running, which packages are loaded, and which connections are being made, security teams gain much more precise intelligence about where risk actually lives.

Only 15% of critical and high-severity vulnerabilities with an available fix are in packages actually loaded at runtime. By isolating that subset, teams can reduce the scope of what needs immediate attention to a small fraction of their total backlog, in some cases by over 95%. That’s the practical difference between a list that overwhelms a development team and one they can actually act on. Static analysis provides breadth by catching everything possible during development, while runtime intelligence adds depth by showing what genuinely matters in production. Together, they give teams the context to make better decisions.
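The prioritization logic described above reduces to an intersection: keep only the findings whose package is observed loaded at runtime and whose image is actually running. This sketch assumes hypothetical inputs; in practice the loaded-package and running-image sets would come from your scanner and a runtime instrumentation tool.

```python
def prioritize(findings, loaded_packages, running_images):
    """Keep only findings in packages loaded at runtime inside running images."""
    return [
        f for f in findings
        if f["package"] in loaded_packages and f["image"] in running_images
    ]

# Hypothetical scanner output (CVE IDs are placeholders).
findings = [
    {"cve": "CVE-2025-0001", "package": "openssl", "image": "api:v3", "cvss": 9.8},
    {"cve": "CVE-2025-0002", "package": "imagemagick", "image": "batch:v1", "cvss": 9.1},
    {"cve": "CVE-2025-0003", "package": "zlib", "image": "api:v3", "cvss": 7.5},
]
loaded = {"openssl", "zlib"}   # packages observed loaded via runtime profiling
running = {"api:v3"}           # containers actually running and receiving traffic

short_list = prioritize(findings, loaded, running)
# The CVSS 9.1 finding drops out: its image hasn't run recently.
print([f["cve"] for f in short_list])
```

Note that the dormant 9.1 finding isn't discarded, only deprioritized: it returns to the short list the moment that image starts running again.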

Helping security and development teams speak the same language

Runtime data also changes how security teams and developers talk to each other. Telling a developer “this CVE is rated 8.1” lands very differently than “this vulnerability is in a package actively loaded in your production authentication service.” The second statement connects a finding to a tangible business risk, and that context helps developers understand urgency in a way that a severity score on its own rarely does.

When security teams can bring developers a short, contextualized list of what needs attention and why, the conversation tends to shift from friction to collaboration. The OpenSSF Best Practices framework supports this kind of working relationship structurally, requiring documented vulnerability response processes, response times under 14 days, and release notes that explicitly identify runtime vulnerabilities fixed in each release. These aren’t bureaucratic requirements, but the scaffolding for the kind of consistent, trust-based communication that makes vulnerability management work in practice.

Neither team can do this work alone. Security engineers don’t always know which code paths are business-critical, and developers don’t always have visibility into what their software looks like from an attacker’s perspective. Runtime data helps bridge that gap by giving both sides a shared, evidence-based view of where the real risk lives.

Shrinking the problem over time

Prioritization manages the vulnerability problem today, but reducing the attack surface is how you make the problem smaller tomorrow. Runtime intelligence supports two practical strategies that static scanning alone cannot.

  1. Build leaner, more deliberate images. Runtime analysis identifies unused packages, old utilities, and bundled libraries that never get called in production, giving teams a clear basis for stripping images down. Building from scratch or distroless base images takes this further by removing shells, package managers, and other components that have no place in a production workload, and combining that with rootless containers limits the damage an attacker can do if they do gain access. Runtime data can also flag containers running on stale base images that are actively receiving traffic, making the case for a refresh concrete rather than a task that keeps getting deprioritized.
  2. Detect and respond to unexpected behavior in production. Even with good prioritization, not every risk can be patched immediately. This is where a runtime threat detection tool like Falco becomes valuable. By defining what normal behavior looks like for a given workload, Falco can flag unexpected activity in real time, such as a process spawning a shell, a container writing to a sensitive path, or an unusual outbound connection appearing. This doesn’t replace patching, but it provides a meaningful layer of protection while remediation work is underway, and it gives teams better visibility into whether a vulnerability is being actively probed or exploited.
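As a sketch of the second strategy, here is a minimal rule in Falco's YAML syntax. The rule name and output fields are illustrative rather than a drop-in production rule; the `spawned_process` and `container` macros ship with Falco's default ruleset.

```yaml
# Illustrative only: detect an interactive shell starting inside a container,
# one of the behaviors described above as rarely legitimate in production.
- rule: Shell Spawned in Production Container
  desc: An interactive shell was started inside a container
  condition: spawned_process and container and proc.name in (bash, sh, zsh)
  output: >
    Shell spawned in container
    (user=%user.name container=%container.name command=%proc.cmdline)
  priority: WARNING
```

Rules like this encode "what normal looks like" as explicit conditions, so any deviation surfaces as an alert even before the underlying vulnerability is patched.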

The OpenSSF Best Practices criteria encourage minimizing the attack surface throughout, and the logic applies equally in production environments. The best vulnerability is the one that doesn’t exist because the vulnerable component was never there, and the next best outcome is knowing quickly when something unexpected is happening around the ones that remain.

Where to go from here

The 2025 numbers make one thing very clear: the volume of vulnerabilities isn’t going down, and teams that try to treat every finding with equal urgency will continue to struggle. The more practical path is to use static analysis and runtime intelligence together, letting each do what it does best, and to use that shared context to build better working relationships between security and development teams. Finding the right vulnerabilities to fix, explaining why they matter, and making it straightforward for developers to act on them is where the real progress happens.

About the Author

Jonas Rosland is Director of Open Source at Sysdig, where he works on cloud-native security and open source strategy. Sysdig supports open source security projects, including Falco, a CNCF graduated project for runtime threat detection.

Rethinking Post-Deployment Vulnerability Detection

By Tracy Ragan

Over the past decade, the IT community has made significant progress in improving pre-deployment vulnerability detection. Static analysis, Software Composition Analysis (SCA), container scanning, and dependency analysis are now standard components of modern CI/CD pipelines. These tools help developers identify vulnerable libraries and insecure code before software is released.

However, security does not end at build time.

Every successful software attack ultimately exploits a vulnerability that exists in a running system. Attackers can and do target code repositories, CI pipelines, and developer environments; these supply chain attacks are serious threats. But vulnerabilities running in live production systems are among the most dangerous because, once exploited, they can directly lead to persistent backdoors, system compromise, lateral movement, and data breaches.

This reality exposes an important gap in how organizations manage vulnerabilities today. While significant attention is placed on detecting vulnerabilities before deployment, far fewer organizations have effective mechanisms for identifying newly disclosed CVEs that affect software already running in production.

Across the industry, most development teams today run some form of pre-deployment vulnerability scanning, yet relatively few maintain continuous visibility into vulnerabilities impacting deployed software after release. This imbalance creates a dangerous blind spot: the systems organizations rely on every day may become vulnerable long after the code has passed through security checks.

As the volume of vulnerability disclosures continues to increase, the industry must rethink how post-deployment vulnerabilities are detected and remediated.

The Growing Post-Deployment Vulnerability Problem

Modern software systems depend heavily on open source components. A typical application may include hundreds, or even thousands, of transitive dependencies. While security scanning tools help identify vulnerabilities during development, they cannot predict vulnerabilities that have not yet been disclosed.

New CVEs are published daily across open source ecosystems. When a vulnerability is disclosed affecting a widely used package, thousands of deployed applications may suddenly become vulnerable, even if those applications passed every security check during their build process.

This creates a persistent challenge: software that was secure at release can become vulnerable later without any code changes.

In many organizations, the detection of these vulnerabilities relies on periodic rescanning of artifacts or manual monitoring of vulnerability feeds. These approaches introduce delays between vulnerability disclosure and detection, extending the window of exposure for deployed systems.

Because attackers actively monitor vulnerability disclosures and quickly develop exploits, this detection gap creates significant operational risk.

Current Approaches to Detecting Post-Deployment CVEs

Organizations today use several methods to identify vulnerabilities affecting deployed software. While each approach has value, they are often costly and introduce operational complexity.

One common strategy involves rescanning previously built artifacts or container images stored in registries. Security teams periodically run vulnerability scanners against these artifacts to identify newly disclosed CVEs. Although this approach can detect vulnerabilities that were unknown at build time, it cannot identify where those containers are actually running across the organization's systems.

Another approach relies on host-based security agents or runtime inspection tools deployed on production infrastructure. These tools identify vulnerable libraries by inspecting installed packages or monitoring application behavior. In practice, these solutions are most commonly implemented in large enterprise environments where dedicated operations and security teams can manage the operational complexity. They often require significant infrastructure integration, deployment planning, and ongoing maintenance.

Agent-based approaches also struggle to support edge environments, embedded systems, air-gapped deployments, satellites, or high-performance computing clusters, where installing additional runtime software may not be feasible or permitted. Even in traditional cloud environments, deploying and maintaining agents across thousands of systems can be a substantial operational lift.

This complexity stands in sharp contrast to pre-deployment scanning tools, which can often be installed in CI/CD pipelines in just minutes. Integrating a software composition analysis scanner into a build pipeline typically requires only a small configuration change or plugin installation. Because these tools are easy to adopt and operate earlier in the development lifecycle, they have seen widespread adoption across organizations of all sizes.

Post-deployment solutions, by comparison, often require significantly more effort to deploy and maintain. As a result, far fewer organizations implement comprehensive post-deployment vulnerability monitoring. While most development teams today run some form of pre-deployment vulnerability scanning, relatively few maintain continuous visibility into vulnerabilities impacting software already running in production. This leaves a critical visibility gap in the environments where vulnerabilities are ultimately exploited: live operational systems.

SBOMs Are an Underutilized Security Asset

A more efficient model for detecting post-deployment vulnerabilities already exists but is often underutilized.

Software Bills of Materials (SBOMs) provide a detailed inventory of the components included in a software release. When generated during the build process using standardized formats such as SPDX or CycloneDX, SBOMs capture critical metadata, including component names, versions, dependency relationships, and identifiers such as Package URLs.
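For concreteness, a minimal CycloneDX component entry looks roughly like the JSON below. The component name and version are placeholders; a real build-generated SBOM would list every direct and transitive dependency.

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "version": 1,
  "components": [
    {
      "type": "library",
      "name": "jackson-databind",
      "version": "2.15.2",
      "purl": "pkg:maven/com.fasterxml.jackson.core/jackson-databind@2.15.2"
    }
  ]
}
```

The `purl` (Package URL) field is what makes later correlation with vulnerability feeds cheap: it identifies the exact ecosystem, package, and version without re-examining the artifact itself.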

SBOM adoption has accelerated in recent years due in part to initiatives such as Executive Order 14028 and ongoing work across the open source ecosystem. Organizations increasingly generate SBOMs as part of their software supply chain transparency efforts.

Yet in many environments, SBOMs are treated primarily as compliance documentation rather than operational security tools. Instead of being archived and forgotten after release, SBOMs can serve as persistent inventories of the components running in deployed software systems.

Detecting Vulnerabilities Without Rescanning

When SBOMs are available and associated with deployed releases, detecting newly disclosed vulnerabilities becomes significantly simpler.

Vulnerability intelligence feeds, such as the OSV.dev database, the National Vulnerability Database (NVD), and other vendor advisories, identify the packages and versions affected by each CVE. By correlating this vulnerability information with stored SBOMs and release metadata, organizations can quickly determine whether a deployed asset includes an affected component.

Because the SBOM already describes the complete dependency graph, there is no need to reanalyze artifacts or rescan source code. Detection becomes a metadata correlation problem rather than a compute-intensive scanning process.
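That correlation can be sketched in a few lines. The advisory shape below is deliberately simplified: real OSV.dev records express affected versions as ranges of events, and production code should use a proper version-comparison library rather than exact string matching. All package names and the CVE identifier are placeholders.

```python
def affected_components(sbom_components, advisory):
    """Return SBOM components matched by an advisory's affected-package list."""
    hits = []
    for comp in sbom_components:
        for affected in advisory["affected"]:
            if (comp["name"] == affected["package"]
                    and comp["version"] in affected["versions"]):
                hits.append((advisory["id"], comp["name"], comp["version"]))
    return hits

# Hypothetical stored SBOM for a deployed release.
sbom = [
    {"name": "example-lib", "version": "1.4.2"},
    {"name": "other-lib", "version": "3.0.0"},
]
# Simplified advisory, as it might be derived from a vulnerability feed.
advisory = {
    "id": "CVE-2025-99999",
    "affected": [{"package": "example-lib", "versions": ["1.4.1", "1.4.2"]}],
}

print(affected_components(sbom, advisory))
```

Because the check is pure metadata comparison, it can run against every stored SBOM each time a new advisory lands, with no access to the artifacts themselves.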

This model enables organizations to continuously monitor deployed software environments and identify newly disclosed vulnerabilities almost immediately after they are published.

Digital Twins and Continuous Vulnerability Synchronization

To operationalize this approach at scale, organizations need systems capable of continuously tracking the relationship between software releases, deployed environments, and their associated SBOMs. One emerging concept is the creation of a software digital twin, a continuously updated model that represents the software components running across operational systems.

A digital twin maintains the relationship between deployed endpoints and the SBOMs that describe the software they run. By synchronizing these SBOM inventories with vulnerability intelligence sources such as OSV.dev or the NVD at regular intervals, organizations can automatically detect when newly disclosed CVEs impact running systems.
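A minimal version of that endpoint-to-SBOM relationship can be modeled as a lookup table. The endpoint names, release labels, and component versions below are all illustrative; a real digital twin would be populated from deployment records and build-time SBOMs.

```python
# Hypothetical digital twin: each deployed endpoint mapped to the SBOM
# describing the components it runs.
twin = {
    "edge-gateway-01": {"sbom": "release-2.3",
                        "components": {("example-lib", "1.4.2")}},
    "api-cluster-a":   {"sbom": "release-2.4",
                        "components": {("example-lib", "1.5.0")}},
}

def endpoints_running(twin, package, version):
    """Which deployed endpoints run the given package version?"""
    return sorted(
        ep for ep, info in twin.items()
        if (package, version) in info["components"]
    )

# When an advisory lands for example-lib 1.4.2, only the edge gateway matches.
print(endpoints_running(twin, "example-lib", "1.4.2"))
```

The point of the model is the reverse lookup: a new CVE resolves not just to "some release is affected" but to the specific endpoints that need remediation.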

Rather than waiting for scheduled scans or relying on agents installed on production infrastructure, this model enables continuous vulnerability awareness through metadata synchronization.

Once an affected component is identified, remediation workflows can also be automated. Modern development platforms already rely on dependency manifests such as pom.xml, package.json, requirements.txt, or container Dockerfiles. By automatically updating these dependency files and generating pull requests with patched versions, organizations can rapidly move fixes back through their CI/CD pipelines.
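The manifest-update step can be sketched for a requirements.txt-style file as below. The package and version strings are placeholders, and a real workflow would commit the change and open a pull request via the forge's API rather than just rewriting text.

```python
import re

def bump_requirement(manifest_text, package, fixed_version):
    """Replace a pinned version of `package` with `fixed_version`."""
    pattern = re.compile(rf"^{re.escape(package)}==\S+$", re.MULTILINE)
    return pattern.sub(f"{package}=={fixed_version}", manifest_text)

# Hypothetical manifest with one affected pin.
manifest = "flask==2.0.1\nexample-lib==1.4.2\n"
print(bump_requirement(manifest, "example-lib", "1.4.3"))
```

The same pattern generalizes to pom.xml, package.json, or a Dockerfile base-image tag, though each format warrants a proper parser rather than a regex in production.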

This type of automation has the potential to reduce vulnerability remediation times from months to days, dramatically shrinking the window of exposure. It also scales easily, giving developers more control over, and visibility into, the production threat landscape.

Aligning with OpenSSF Security Initiatives

Efforts across the Open Source Security Foundation (OpenSSF) ecosystem have helped establish the foundational infrastructure needed for this approach.

The OSV.dev vulnerability database provides high-quality vulnerability data tailored to open source ecosystems. Standards such as SPDX and CycloneDX enable consistent representation of SBOM data across tools and platforms. Projects like OpenVEX provide mechanisms for communicating vulnerability exploitability context, helping organizations determine which vulnerabilities require immediate attention.

Together, these initiatives create the building blocks for a more efficient and scalable vulnerability management model, one that relies on accurate software inventories and continuous vulnerability intelligence rather than repeated artifact scanning.

The Future of Vulnerability Management

Pre-deployment security scanning will continue to play an important role in software development. Identifying vulnerabilities early in the development lifecycle reduces risk and improves software quality.

But the security landscape is evolving. As software ecosystems grow more complex and vulnerability disclosures increase, organizations must also strengthen their ability to detect vulnerabilities that appear after software has already been deployed.

Rethinking post-deployment vulnerability detection means shifting away from repeated artifact scanning and toward continuous monitoring of software composition.

SBOMs provide the foundation for this shift. When combined with digital twin models that track deployed software, continuous synchronization with vulnerability databases, and automated dependency remediation, organizations can dramatically improve their ability to defend operational systems.

One thing is certain: attackers ultimately focus on exploiting vulnerabilities running in live systems. Gaining clear visibility into the attack surface (knowing exactly which OSS packages are deployed, where they are running, and how quickly they can be remediated) is essential to securing live systems, from cloud-native to the edge.

About the Author

Tracy Ragan is the Founder and Chief Executive Officer of DeployHub and a recognized authority in secure software delivery and software supply chain defense. She has served on the Governing Boards of the Open Source Security Foundation (OpenSSF) and currently serves as a strategic advisor to the Continuous Delivery Foundation (CDF) Governing Board. She also sits on both the CDF and OpenSSF Technology Advisory Committees. In these roles, she helps shape industry standards and pragmatic guidance for securing the software supply chain and advancing DevOps pipelines to enable safer, more effective use of open-source ecosystems at scale.

With more than 25 years of experience across software engineering, DevOps, and secure delivery pipelines, Tracy has built a career at the intersection of automation, security, and operational reality. Her work is focused on closing one of the industry’s most critical gaps: detecting and remediating high-risk vulnerabilities running in live, deployed systems, across cloud-native, edge, embedded, and HPC environments.

Tracy’s expertise is grounded in decades of hands-on leadership. She is the Co-Founder and former COO of OpenMake Software, where she pioneered agile build automation and led the development of OpenMake Meister, a build orchestration platform adopted by hundreds of enterprise teams and generating over $60M in partner revenue. That experience directly informs her current mission: eliminating security blind spots that persist long after software is released.