By Devashri Datta, Independent Researcher, Software Supply Chain Security
Third-party notices (TPNs) are documents distributed to users that list the open source third-party software components included in a product, along with key licensing information. If you've ever bought a TV or router, you've probably seen one. Yet TPNs were never designed for the complexity, scale, and velocity of today's software ecosystem, and they remain among the most widely distributed yet least understood artifacts in modern software supply chains.
Inside nearly every appliance, firmware image, SaaS platform, and enterprise distribution, the same pattern persists: a long, unstructured PDF is expected to represent the full scope of open source license compliance.
As software systems have scaled, TPNs have quietly become a critical but increasingly fragile pillar. They are now failing technically, operationally, and structurally under the demands of modern development and distribution.
This article examines why TPNs are breaking, and outlines what the ecosystem must do next, based on large-scale analysis of real-world TPN documents and the development of an automated framework for extracting information directly from them. While traditionally viewed as compliance artifacts, TPNs also represent an underutilized source of security-relevant intelligence. In many real-world scenarios where Software Bills of Materials (SBOMs) are incomplete, unavailable, or restricted, TPNs may provide the only observable evidence of component usage. This positions TPNs as a critical input to software supply chain security workflows, including vulnerability management, third-party risk assessment, and incident response.
Despite advances in SBOM formats such as SPDX and CycloneDX, TPNs remain largely unstructured, inconsistent, and difficult to analyze at scale.
SBOMs provide structured visibility into software components, but their completeness depends on the generation methods and the availability of build-time data. In practice, SBOMs may not consistently capture full transitive dependencies or runtime-resolved components. In some cases, additional components and licensing details may appear in downstream artifacts such as third-party notices (TPNs), though these are typically not integrated into SBOM analysis pipelines. SBOM availability also varies across organizations and products and may not always be accessible to end users or external stakeholders due to policy or regulatory interpretation. Regulatory frameworks such as the EU Cyber Resilience Act (CRA) are evolving, and expectations around SBOM scope and disclosure remain subject to interpretation. As a result, relying solely on SBOM data may not provide complete visibility into whether a product contains a specific vulnerable component, depending on SBOM completeness and related artifact availability.
In practice, TPNs often serve as the last mile of compliance visibility, bridging internal software composition and external disclosure.
However, TPNs were never designed to operate at the scale or complexity of today’s supply chains.
While SBOMs and software composition analysis (SCA) tools have improved visibility during development, they assume access to structured or source level data. In contrast, TPNs often represent the only externally available artifact in downstream consumption environments such as embedded systems, firmware, and proprietary SaaS distributions.
This creates a structural blind spot in software supply chain security: security teams are frequently forced to make risk decisions without machine readable component intelligence. As a result, vulnerability exposure, dependency risk, and third-party software usage often remain partially or completely unobservable at the point of consumption.
Most TPNs are distributed as large, heterogeneous PDFs that often omit component identifiers and lack specific version numbers for components.
PDFs are optimized for display, not structured data. As a result, extracting meaningful compliance information programmatically is extremely difficult.
Current tools such as FOSSology, ScanCode, and ORT are designed to analyze source code or binaries—not TPN documents. Yet in many real-world scenarios, especially audits or vendor reviews, TPNs are the only artifact available.
This creates a fundamental gap: The most widely distributed compliance artifact is the least analyzable.
TPNs are generated through highly variable processes.
As a result, even TPNs from the same organization can vary significantly across releases, introducing inconsistencies, omissions, and misalignment with actual dependencies.
Modern TPNs often span hundreds of pages across multiple license families and components.
Manual review has become increasingly impractical given the length, inconsistency, and sheer volume of these documents.
Compliance teams are effectively being asked to analyze documents at a scale that exceeds human capability.
This work introduces a systematic framework for transforming Third-Party Notices (TPNs) from unstructured compliance artifacts into structured security intelligence inputs. The framework addresses a critical gap in software supply chain security: the absence of machine-readable component visibility in downstream and vendor-distributed environments.
Unlike traditional software composition analysis tools that rely on source code, build artifacts, or SBOMs, this approach operates on TPNs as a primary data source. It enables the extraction, classification, and interpretation of software components and license obligations from highly unstructured documents.
The key contribution of this work is the demonstration that TPNs can be operationalized into actionable security intelligence for vulnerability management, third-party risk assessment, and incident response.
To address this systemic gap, I developed an automated end-to-end framework that treats TPNs as primary compliance artifacts, rather than secondary documentation.
The approach enables structured extraction and interpretation of license intelligence directly from unstructured documents. While TPNs may lack some information, they still provide valuable signals. For example, even without version identifiers, knowing that a product includes a component can be very valuable (e.g., when asking “which products contain a version of log4j that might be vulnerable to this attack?”).
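Even a version-free component index supports that kind of query. The sketch below assumes hypothetical (product, component) pairs extracted from TPNs; the data and names are illustrative, not part of the framework itself.

```python
from collections import defaultdict

def build_index(tpn_records):
    """tpn_records: iterable of (product, component_name) pairs
    extracted from TPN documents. Names are normalized so that
    'Log4j' and 'log4j' collapse to the same key."""
    index = defaultdict(set)
    for product, component in tpn_records:
        index[product].add(component.strip().lower())
    return index

def products_containing(index, component):
    """Return every product whose TPN mentions the component at all,
    even when no version information is available."""
    needle = component.strip().lower()
    return sorted(p for p, comps in index.items() if needle in comps)

# Hypothetical extraction results for three products
records = [
    ("router-fw-1.2", "log4j"),
    ("router-fw-1.2", "openssl"),
    ("tv-os-9", "zlib"),
    ("nas-os-4", "log4j"),
]
index = build_index(records)
print(products_containing(index, "Log4j"))  # ['nas-os-4', 'router-fw-1.2']
```

The answer is coarser than an SBOM query, but it turns an unanswerable incident-response question into a short candidate list.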
Using normalization, segmentation, and page-level reconstruction, the system identifies and extracts coherent license blocks even from highly inconsistent documents.
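A minimal sketch of the segmentation step, assuming page text has already been extracted and normalized. The header heuristics below are illustrative placeholders, not the framework's actual rules.

```python
import re

# Lines that often separate one license block from the next in real
# TPNs: horizontal rules, or lines mentioning "License"/"NOTICE".
# These patterns are simplified stand-ins for tuned heuristics.
HEADER = re.compile(
    r"^(?:-{3,}|={3,})$"               # rule lines between components
    r"|^.*\b(?:License|NOTICE)\b.*$",  # lines that look like headers
    re.IGNORECASE,
)

def segment_blocks(text):
    """Split normalized TPN text into candidate license blocks."""
    blocks, current = [], []
    for line in text.splitlines():
        if HEADER.match(line.strip()) and current:
            blocks.append("\n".join(current).strip())
            current = []
        current.append(line)
    if current:
        blocks.append("\n".join(current).strip())
    return [b for b in blocks if b]
```

Real documents need page-level reconstruction first (headers, footers, and column breaks removed) before a pass like this produces coherent blocks.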
A hybrid approach combining rule-based methods and fuzzy matching maps extracted text into meaningful license categories.
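The hybrid idea can be sketched as follows. The reference snippets, rules, and threshold here are hypothetical stand-ins for a curated corpus and tuned parameters, not the framework's actual values.

```python
from difflib import SequenceMatcher

# Short reference snippets per SPDX identifier (illustrative only;
# a real corpus would hold full canonical texts and known variants).
REFERENCES = {
    "MIT": "permission is hereby granted free of charge to any person",
    "Apache-2.0": "licensed under the apache license version 2.0",
    "GPL-3.0": "gnu general public license version 3",
}

# Fast rule-based pass: unambiguous phrases map straight to an ID.
RULES = {"apache license": "Apache-2.0", "mit license": "MIT"}

def classify(text, threshold=0.6):
    lowered = " ".join(text.lower().split())
    for phrase, license_id in RULES.items():   # 1. exact rules
        if phrase in lowered:
            return license_id
    best_id, best_score = None, 0.0            # 2. fuzzy fallback
    for license_id, snippet in REFERENCES.items():
        score = SequenceMatcher(None, lowered[:200], snippet).ratio()
        if score > best_score:
            best_id, best_score = license_id, score
    return best_id if best_score >= threshold else "UNKNOWN"
```

Rules handle the common, clean cases cheaply; fuzzy matching absorbs the reworded and reformatted variants that dominate real TPN corpora.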
In my testing, this approach performed well even on highly inconsistent documents.
Each component is evaluated for compliance risk based on its license obligations.
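A toy illustration of obligation-based risk flagging. The obligation table and risk tiers below are simplified assumptions, not the framework's actual model.

```python
# Obligations keyed by SPDX identifier (illustrative subset).
# "copyleft" marks redistribution obligations that typically need
# legal review when a component ships in a proprietary product.
OBLIGATIONS = {
    "MIT": {"attribution"},
    "Apache-2.0": {"attribution", "state-changes"},
    "GPL-3.0": {"attribution", "source-disclosure", "copyleft"},
}

def risk_level(license_id):
    """Map a classified license to a coarse compliance-risk tier."""
    obligations = OBLIGATIONS.get(license_id, set())
    if "copyleft" in obligations:
        return "high"
    if obligations:
        return "low"
    return "unknown"   # unrecognized licenses need human review
```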
The framework produces structured, machine-readable outputs suitable for downstream analysis.
This demonstrates that meaningful compliance intelligence can be derived even from the most constrained artifact available. This closes a long-standing visibility gap in the software supply chain.
Security Implications of TPN Breakdown
The failure of TPNs is not only a compliance problem; it has direct consequences for software supply chain security. When TPNs are inconsistent, unstructured, or incomplete, they reduce the ability of downstream stakeholders to assess vulnerability exposure, dependency risk, and third-party component usage.
This makes TPN degradation a security visibility problem, not just a documentation inefficiency.
TPN failures are not isolated inefficiencies. They represent a structural weakness in how the global software supply chain communicates compliance.
Addressing this requires coordinated effort across standards, tooling, and ecosystem alignment.
The ecosystem needs formats beyond PDFs, such as JSON-based notice files or SPDX documents that carry component and license data as structured fields.
These would enable structured, interoperable compliance disclosures.
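For illustration, a structured notice entry might look like the following. The field names are hypothetical, not a proposed standard; the SPDX license identifier and Package URL (purl) reuse identifiers the ecosystem already standardizes.

```python
import json

# One possible machine-readable shape for a single notice entry.
notice_entry = {
    "component": "zlib",
    "purl": "pkg:generic/zlib@1.3.1",   # Package URL for the component
    "license": "Zlib",                  # SPDX license identifier
    "copyright": "(C) 1995-2024 Jean-loup Gailly and Mark Adler",
    "noticeText": "This software is provided 'as-is'...",
}
print(json.dumps(notice_entry, indent=2))
```

Unlike a PDF paragraph, an entry like this can be diffed across releases, validated against the SPDX license list, and correlated with vulnerability data by purl.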
One possible longer-term solution is to embed machine-readable data (such as an SBOM in SPDX format or a TPN in JSON format) within the PDFs, creating a “hybrid PDF”. The PDF format already permits adding internal files (called “attached files”). LibreOffice already supports generating PDFs that embed the source document, allowing people to use their existing process for exchanging display PDF while also including machine-readable data. Tools that can quickly extract those embedded files and complain when they’re not present could speed their deployment. However, while this approach has promise, it doesn’t deal with the current documents, which do not embed this information.
Unsurprisingly, many improvements for handling dependencies could help in processing TPNs, SBOMs, and many other related formats.
It would be better if there were shared reference corpora for license matching, because accurate license detection requires consistent, well-curated examples of canonical license texts and their common variants.
This would significantly improve consistency across tools and organizations.
In addition, there should be open, standard APIs for licensing information, supporting programmatic queries for license texts, obligations, and component-to-license mappings.
This would enable interoperability between vendors, auditors, and regulators.
Today, SBOMs and TPNs exist in disconnected workflows. Yet in many cases, TPNs provide the only information available about product components.
A unified pipeline would treat the two as complementary, using TPN-derived data to validate, enrich, and fill gaps in SBOM coverage.
Prior efforts across the software supply chain ecosystem have focused on improving license detection and SBOM generation during development and build phases. However, these approaches often assume access to source code or structured metadata, leaving a visibility gap when Third-Party Notices (TPNs) are the only available compliance artifact.
Related work on automating TPN analysis demonstrates how unstructured compliance documents can be transformed into machine-readable license intelligence suitable for governance and audit workflows. Supporting datasets for compliance governance and SBOM alignment are described in:
Datta, D., TPN Compliance Dataset for Software Supply Chain Governance, Zenodo, 2025. https://doi.org/10.5281/zenodo.19152619
Framework: https://doi.org/10.5281/zenodo.19099831
The proposed framework reframes TPNs as an input layer in modern software supply chain security workflows. Rather than treating TPNs as static compliance documentation, they can be operationalized into structured security intelligence pipelines.
The extracted data can be integrated into existing security workflows such as vulnerability management, third-party risk assessment, and incident response.
This positions TPN analysis as a bridge between compliance documentation and operational security decision-making.
Third-party notices (TPNs) were originally designed as simple attribution mechanisms and ways to declare licenses to recipients (as required by many licenses). Today, they are expected to support audits, transparency, regulatory compliance, and supply chain security.
But they are still delivered as static documents that do not scale.
TPNs are not failing because organizations lack intent; they are failing because the ecosystem has outgrown the tools and formats upon which it relies.
If we want a more transparent, auditable, and trustworthy software supply chain, TPNs must evolve into structured, machine-readable, and interoperable artifacts.
The next phase of open source security will not be defined solely by SBOMs or scanning tools, but by how effectively we solve the last mile of compliance visibility.
Fixing TPNs is an important step toward a more reliable and verifiable software ecosystem.
Acknowledgments
The author acknowledges David A. Wheeler and Sally Cooper for their insightful feedback and helpful discussions during the development of this work.
The open source implementation of the prototype described in this post, including parsing logic, license-classification rules, and the interactive dashboard, is available on GitHub for anyone interested in exploring or extending the approach:
https://github.com/devashridatta-dotcom/tpn-automation
Community feedback and contributions are welcome.
Devashri Datta is a security researcher and enterprise security architect focused on AI and software supply chain security, DevSecOps automation, and security governance at scale. Her research areas include SBOM governance, vulnerability intelligence (VEX), Third-Party Notice (TPN) analysis, AI-assisted risk modeling, and security exception management in cloud-native environments under compliance frameworks such as SOC 2, ISO 27001, and FedRAMP.
By Jonas Rosland
Security teams in 2026 have no shortage of data, alerts, or findings. In 2025 alone, 48,185 Common Vulnerabilities and Exposures (CVEs) were published, a 20.6% increase over 2024’s already record-breaking total of 39,962. That works out to roughly 130 new vulnerabilities disclosed every single day, and for seven consecutive years, the annual count has hit a new record high.
The drivers are structural: the explosive growth of open source software, the complexity of transitive dependencies hidden deep in software supply chains, and an expanding CVE ecosystem that now encompasses nearly twice as many reporting organizations as it did five years ago. With 97% of commercial applications containing open source components, inherited risk has become a routine part of working with modern software.
While only 2% of all discovered vulnerabilities are ever exploited in the wild, nearly 29% of that small fraction were exploited on or before the day their CVE was published. Attackers are selective, but once they identify a target, the window for defenders is very narrow, and it is shrinking: the time between vulnerability disclosure and confirmed exploitation was over a year in 2020 and has now collapsed to just hours.
The old model of scanning everything, triaging by Common Vulnerability Scoring System (CVSS) score, and working through a queue simply cannot keep pace with this reality. Something has to change.
The vast majority of what your vulnerability scanner finds will never actually be used against you. That means the core challenge facing security teams isn’t patching speed, but knowing where to focus. When a scanner returns thousands of findings ranked only by CVSS score, what looks like a workload problem is really a prioritization problem. Critical vulnerabilities in libraries that aren’t loaded at runtime, or in containers that haven’t run in months, crowd out the findings that genuinely matter, such as exploitable vulnerabilities in running, exposed workloads. The result is alert fatigue, missed priorities, and growing friction between security and development teams.
The OpenSSF Best Practices criteria reflect this directly. At the “Passing” level, projects must not contain unpatched vulnerabilities of medium or higher severity that have been known publicly for more than 60 days, and critical vulnerabilities should be fixed rapidly after they are reported. The emphasis here isn’t on the volume of findings processed, but on the speed and accuracy with which the most dangerous vulnerabilities are addressed, a distinction that gets lost when teams are buried in undifferentiated backlogs.
Static analysis is non-negotiable. The OpenSSF Best Practices criteria require it at the “Passing” level, and at “Silver,” projects must use tools that look for real vulnerabilities in code, not just style issues. Integrated into CI/CD pipelines, static analysis catches bugs early when they are cheapest to fix, and it remains a solid foundation of any security program. However, alone, it’s not enough.
The limitation is that static analysis sees everything, regardless of whether it matters in practice. It cannot tell you whether a vulnerable library is actually loaded in a running container, or whether that container ever receives external traffic. A CVSS 9.8 score looks identical whether the package is called thousands of times a day in a critical service or has never once been invoked in production. Runtime security fills that gap by observing what is actually executing in production. By tracking which processes are running, which packages are loaded, and which connections are being made, security teams gain much more precise intelligence about where risk actually lives.
Only 15% of critical and high-severity vulnerabilities with an available fix are in packages actually loaded at runtime. By isolating that subset, teams can reduce the scope of what needs immediate attention to a small fraction of their total backlog, in some cases by over 95%. That’s the practical difference between a list that overwhelms a development team and one they can actually act on. Static analysis provides breadth by catching everything possible during development, while runtime intelligence adds depth by showing what genuinely matters in production. Together, they give teams the context to make better decisions.
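That prioritization step reduces to a set intersection. In this sketch, the finding and runtime data shapes are hypothetical; real scanners and runtime tools emit far richer records.

```python
# Scanner output: everything vulnerable anywhere in the image.
findings = [
    {"cve": "CVE-2024-0001", "package": "libfoo", "cvss": 9.8},
    {"cve": "CVE-2024-0002", "package": "libbar", "cvss": 9.1},
    {"cve": "CVE-2024-0003", "package": "libbaz", "cvss": 7.5},
]

# Runtime observation: packages actually loaded in running processes.
loaded_at_runtime = {"libbar"}

# Intersect: only findings in loaded packages need immediate attention.
actionable = [f for f in findings if f["package"] in loaded_at_runtime]
print(len(actionable), "of", len(findings), "findings are actionable")
```

The mechanics are trivial; the value lies entirely in the quality of the `loaded_at_runtime` signal, which is what runtime instrumentation provides.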
Runtime data also changes how security teams and developers talk to each other. Telling a developer “this CVE is rated 8.1” lands very differently than “this vulnerability is in a package actively loaded in your production authentication service.” The second statement connects a finding to a tangible business risk, and that context helps developers understand urgency in a way that a severity score on its own rarely does.
When security teams can bring developers a short, contextualized list of what needs attention and why, the conversation tends to shift from friction to collaboration. The OpenSSF Best Practices framework supports this kind of working relationship structurally, requiring documented vulnerability response processes, response times under 14 days, and release notes that explicitly identify runtime vulnerabilities fixed in each release. These aren’t bureaucratic requirements, but the scaffolding for the kind of consistent, trust-based communication that makes vulnerability management work in practice.
Neither team can do this work alone. Security engineers don’t always know which code paths are business-critical, and developers don’t always have visibility into what their software looks like from an attacker’s perspective. Runtime data helps bridge that gap by giving both sides a shared, evidence-based view of where the real risk lives.
Prioritization manages the vulnerability problem today, but reducing the attack surface is how you make the problem smaller tomorrow. Runtime intelligence supports two practical strategies that static scanning alone cannot: removing components that never actually run, and detecting unexpected activity around the ones that remain.
The OpenSSF Best Practices criteria encourage minimizing the attack surface throughout, and the logic applies equally in production environments. The best vulnerability is the one that doesn’t exist because the vulnerable component was never there, and the next best outcome is knowing quickly when something unexpected is happening around the ones that remain.
The 2025 numbers make one thing very clear: the volume of vulnerabilities isn’t going down, and teams that try to treat every finding with equal urgency will continue to struggle. The more practical path is to use static analysis and runtime intelligence together, letting each do what it does best, and to use that shared context to build better working relationships between security and development teams. Finding the right vulnerabilities to fix, explaining why they matter, and making it straightforward for developers to act on them is where the real progress happens.
Jonas Rosland is Director of Open Source at Sysdig, where he works on cloud-native security and open source strategy. Sysdig supports open source security projects, including Falco, a CNCF graduated project for runtime threat detection.
By Tracy Ragan
Over the past decade, the IT community has made significant progress in improving pre-deployment vulnerability detection. Static analysis, Software Composition Analysis (SCA), container scanning, and dependency analysis are now standard components of modern CI/CD pipelines. These tools help developers identify vulnerable libraries and insecure code before software is released.
However, security does not end at build time.
Every successful software attack ultimately exploits a vulnerability that exists in a running system. Attackers can and do target code repositories, CI pipelines, and developer environments; these supply chain attacks are serious threats. But vulnerabilities running in live production systems are among the most dangerous because, once exploited, they can directly lead to persistent backdoors, system compromise, lateral movement, and data breaches.
This reality exposes an important gap in how organizations manage vulnerabilities today. While significant attention is placed on detecting vulnerabilities before deployment, far fewer organizations have effective mechanisms for identifying newly disclosed CVEs that affect software already running in production.
Across the industry, most development teams today run some form of pre-deployment vulnerability scanning, yet relatively few maintain continuous visibility into vulnerabilities impacting deployed software after release. This imbalance creates a dangerous blind spot: the systems organizations rely on every day may become vulnerable long after the code has passed through security checks.
As the volume of vulnerability disclosures continues to increase, the industry must rethink how post-deployment vulnerabilities are detected and remediated.
Modern software systems depend heavily on open source components. A typical application may include hundreds, or even thousands, of transitive dependencies. While security scanning tools help identify vulnerabilities during development, they cannot predict vulnerabilities that have not yet been disclosed.
New CVEs are published daily across open source ecosystems. When a vulnerability is disclosed affecting a widely used package, thousands of deployed applications may suddenly become vulnerable, even if those applications passed every security check during their build process.
This creates a persistent challenge: software that was secure at release can become vulnerable later without any code changes.
In many organizations, the detection of these vulnerabilities relies on periodic rescanning of artifacts or manual monitoring of vulnerability feeds. These approaches introduce delays between vulnerability disclosure and detection, extending the window of exposure for deployed systems.
Because attackers actively monitor vulnerability disclosures and quickly develop exploits, this detection gap creates significant operational risk.
Organizations today use several methods to identify vulnerabilities affecting deployed software. While each approach has value, they are often costly and introduce operational complexity.
One common strategy involves rescanning previously built artifacts or container images stored in registries. Security teams periodically run vulnerability scanners against these artifacts to identify newly disclosed CVEs. Although this approach can detect vulnerabilities that were unknown at build time, it cannot identify where the affected containers are actually running across an organization's assets.
Another approach relies on host-based security agents or runtime inspection tools deployed on production infrastructure. These tools identify vulnerable libraries by inspecting installed packages or monitoring application behavior. In practice, these solutions are most commonly implemented in large enterprise environments where dedicated operations and security teams can manage the operational complexity. They often require significant infrastructure integration, deployment planning, and ongoing maintenance.
Agent-based approaches also struggle to support edge environments, embedded systems, air-gapped deployments, satellites, or high-performance computing clusters, where installing additional runtime software may not be feasible or permitted. Even in traditional cloud environments, deploying and maintaining agents across thousands of systems can be a substantial operational lift.
This complexity stands in sharp contrast to pre-deployment scanning tools, which can often be installed in CI/CD pipelines in just minutes. Integrating a software composition analysis scanner into a build pipeline typically requires only a small configuration change or plugin installation. Because these tools are easy to adopt and operate earlier in the development lifecycle, they have seen widespread adoption across organizations of all sizes.
Post-deployment solutions, by comparison, often require significantly more effort to deploy and maintain. As a result, far fewer organizations implement comprehensive post-deployment vulnerability monitoring. While most development teams today run some form of pre-deployment vulnerability scanning, relatively few maintain continuous visibility into vulnerabilities impacting software already running in production. This leaves a critical visibility gap in the environments where vulnerabilities are ultimately exploited: live operational systems.
A more efficient model for detecting post-deployment vulnerabilities already exists but is often underutilized.
Software Bills of Materials (SBOMs) provide a detailed inventory of the components included in a software release. When generated during the build process using standardized formats such as SPDX or CycloneDX, SBOMs capture critical metadata, including component names, versions, dependency relationships, and identifiers such as Package URLs.
SBOM adoption has accelerated in recent years due in part to initiatives such as Executive Order 14028 and ongoing work across the open source ecosystem. Organizations increasingly generate SBOMs as part of their software supply chain transparency efforts.
Yet in many environments, SBOMs are treated primarily as compliance documentation rather than operational security tools. Instead of being archived after release, SBOMs can serve as persistent inventories of the components running in deployed software systems.
When SBOMs are available and associated with deployed releases, detecting newly disclosed vulnerabilities becomes significantly simpler.
Vulnerability intelligence feeds, such as the OSV.dev database, the National Vulnerability Database (NVD), and other vendor advisories, identify the packages and versions affected by each CVE. By correlating this vulnerability information with stored SBOMs and release metadata, organizations can quickly determine whether a deployed asset includes an affected component.
Because the SBOM already describes the complete dependency graph, there is no need to reanalyze artifacts or rescan source code. Detection becomes a metadata correlation problem rather than a compute-intensive scanning process.
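The correlation can be sketched in a few lines. The advisory shape below loosely mirrors OSV's introduced/fixed events, and the plain tuple comparison is a deliberate simplification of ecosystem-specific version semantics.

```python
def parse_version(v):
    """Naive dotted-version parser; real ecosystems (semver, PEP 440,
    Maven) each need their own comparison rules."""
    return tuple(int(p) for p in v.split("."))

def is_affected(component, advisory):
    """True if the SBOM component falls in the advisory's vulnerable
    range [introduced, fixed)."""
    if component["name"] != advisory["package"]:
        return False
    v = parse_version(component["version"])
    return (parse_version(advisory["introduced"]) <= v
            < parse_version(advisory["fixed"]))

# Stored SBOM inventory for a deployed release (illustrative data)
sbom = [
    {"name": "log4j-core", "version": "2.14.1"},
    {"name": "jackson-databind", "version": "2.15.0"},
]
# Newly published advisory, OSV-style range
advisory = {"package": "log4j-core", "introduced": "2.0.0", "fixed": "2.17.1"}

hits = [c for c in sbom if is_affected(c, advisory)]
```

No artifact is rescanned: matching a new advisory against thousands of stored SBOMs is a cheap lookup rather than a compute-intensive scan.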
This model enables organizations to continuously monitor deployed software environments and identify newly disclosed vulnerabilities almost immediately after they are published.
To operationalize this approach at scale, organizations need systems capable of continuously tracking the relationship between software releases, deployed environments, and their associated SBOMs. One emerging concept is the creation of a software digital twin, a continuously updated model that represents the software components running across operational systems.
A digital twin maintains the relationship between deployed endpoints and the SBOMs that describe the software they run. By synchronizing these SBOM inventories with vulnerability intelligence sources such as OSV.dev or the NVD at regular intervals, organizations can automatically detect when newly disclosed CVEs impact running systems.
Rather than waiting for scheduled scans or relying on agents installed on production infrastructure, this model enables continuous vulnerability awareness through metadata synchronization.
Once an affected component is identified, remediation workflows can also be automated. Modern development platforms already rely on dependency manifests such as pom.xml, package.json, requirements.txt, or container Dockerfiles. By automatically updating these dependency files and generating pull requests with patched versions, organizations can rapidly move fixes back through their CI/CD pipelines.
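The manifest-update step for a requirements.txt file might look like this sketch; a real workflow would follow it by creating a branch, committing the change, and opening a pull request through the forge's API.

```python
import re

def bump_pin(manifest_text, package, fixed_version):
    """Rewrite an exact '==' pin for `package` to `fixed_version`,
    leaving all other lines of the manifest untouched."""
    pattern = re.compile(
        rf"^{re.escape(package)}==\S+$", flags=re.MULTILINE
    )
    return pattern.sub(f"{package}=={fixed_version}", manifest_text)

before = "requests==2.19.0\nflask==2.3.2\n"
after = bump_pin(before, "requests", "2.31.0")
```

Equivalent rewrites exist for pom.xml, package.json, and Dockerfiles; the hard part in practice is choosing a fixed version that the project's tests still pass against.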
This type of automation has the potential to reduce vulnerability remediation times from months to days, dramatically shrinking the window of exposure. It also scales easily, giving developers more control over, and visibility into, the production threat landscape.
Efforts across the Open Source Security Foundation (OpenSSF) ecosystem have helped establish the foundational infrastructure needed for this approach.
The OSV.dev vulnerability database provides high-quality vulnerability data tailored to open source ecosystems. Standards such as SPDX and CycloneDX enable consistent representation of SBOM data across tools and platforms. Projects like OpenVEX provide mechanisms for communicating vulnerability exploitability context, helping organizations determine which vulnerabilities require immediate attention.
Together, these initiatives create the building blocks for a more efficient and scalable vulnerability management model, one that relies on accurate software inventories and continuous vulnerability intelligence rather than repeated artifact scanning.
Pre-deployment security scanning will continue to play an important role in software development. Identifying vulnerabilities early in the development lifecycle reduces risk and improves software quality.
But the security landscape is evolving. As software ecosystems grow more complex and vulnerability disclosures increase, organizations must also strengthen their ability to detect vulnerabilities that appear after software has already been deployed.
Rethinking post-deployment vulnerability detection means shifting away from repeated artifact scanning and toward continuous monitoring of software composition.
SBOMs provide the foundation for this shift. When combined with digital twin models that track deployed software, continuous synchronization with vulnerability databases, and automated dependency remediation, organizations can dramatically improve their ability to defend operational systems.
One thing is certain: attackers ultimately focus on exploiting vulnerabilities running in live systems. Gaining clear visibility into the attack surface, knowing exactly which OSS packages are deployed, where they are running, and how quickly they can be remediated, is essential to securing live systems from cloud-native environments to the edge.
Tracy Ragan is the Founder and Chief Executive Officer of DeployHub and a recognized authority in secure software delivery and software supply chain defense. She has served on the Governing Boards of the Open Source Security Foundation (OpenSSF) and currently serves as a strategic advisor to the Continuous Delivery Foundation (CDF) Governing Board. She also sits on both the CDF and OpenSSF Technology Advisory Committees. In these roles, she helps shape industry standards and pragmatic guidance for securing the software supply chain and advancing DevOps pipelines to enable safer, more effective use of open-source ecosystems at scale.
With more than 25 years of experience across software engineering, DevOps, and secure delivery pipelines, Tracy has built a career at the intersection of automation, security, and operational reality. Her work is focused on closing one of the industry’s most critical gaps: detecting and remediating high-risk vulnerabilities running in live, deployed systems, across cloud-native, edge, embedded, and HPC environments.
Tracy’s expertise is grounded in decades of hands-on leadership. She is the Co-Founder and former COO of OpenMake Software, where she pioneered agile build automation and led the development of OpenMake Meister, a build orchestration platform adopted by hundreds of enterprise teams and generating over $60M in partner revenue. That experience directly informs her current mission: eliminating security blind spots that persist long after software is released.
By Jeff Diecks
Artificial intelligence is changing how we approach software security. Open source is at the center of that shift.
Over the past year, DARPA’s Artificial Intelligence Cyber Challenge (AIxCC) showed that cyber reasoning systems (CRS) can go beyond finding vulnerabilities. These systems can analyze code, confirm issues, and generate patches. This brings us closer to a future where security is more automated and scalable.
When the competition ended, one question remained. How do we take these breakthroughs and make them usable in the real world?
Today, we are taking an important step forward.
The Open Source Security Foundation (OpenSSF) is welcoming OSS-CRS as a new open source project under the AI / ML Security Working Group.
OSS-CRS emerged from AIxCC and is a standard orchestration framework for building and running LLM-based autonomous bug-finding and bug-fixing systems.
The open framework is designed to make CRS practical outside of the AIxCC environment. During the competition, teams built powerful systems that were released as open source. However, many of them depended on the competition infrastructure, which made them difficult to reuse or extend. OSS-CRS addresses that gap.
OSS-CRS features include:
Read the OSS-CRS research paper: https://doi.org/10.48550/arXiv.2603.08566
The move of OSS-CRS into OpenSSF marks a clear transition from research and competition to open collaboration and long-term development.
OpenSSF provides a neutral home where projects like OSS-CRS can grow. Contributors can work together to improve the tools, validate results, and support adoption across the ecosystem.
OSS-CRS is already producing real results. Using OSS-CRS, Team Atlanta discovered twenty-five vulnerabilities across sixteen projects spanning a broad range of software including PHP, U-Boot, memcached, and Apache Ignite 3.
OpenSSF will continue to support this important work by providing human connectors between CRS tools and open source communities. The goal is to help triage and validate vulnerability reports and proposed patches before they reach maintainers, ensuring findings are accurate, actionable, and respectful of maintainers’ time.
Recent research from the OSS-CRS team validates the necessity of having a human in the loop. The team manually reviewed a set of 630 AI-generated patches and found that 20-40% were semantically incorrect: they passed all automated validation yet did not actually fix the underlying bug, a dangerous failure mode that only manual review can catch.
A key benefit of the OSS-CRS project is its Ensemble feature, which improves accuracy and reliability by combining patches from multiple CRS approaches and using a selection process to pick the one most likely to be correct. The research showed that this ensemble consistently matches or outperforms the best single component on semantic correctness, addressing a failure mode that is hard to eliminate at the single-agent level. This collaboration of systems helps produce more robust results for open source defenders.
With projects like OSS-CRS, OpenSSF will continue to support AI-driven security work to help turn innovation into practical outcomes for open source.
We offer several options to get involved, including:
Jeff Diecks is a Senior Technical Program Manager at The Linux Foundation. He has more than two decades of experience in technology and communications with a diverse background in operations, project management and executive leadership. A participant in open source since 1999, he’s delivered digital products and applications for universities, sports leagues, state governments, global media companies and non-profits.
Cross-post originally published on the Kusari Blog
Open source software powers the modern world; securing it remains a shared responsibility.
The software supply chain is becoming more complex and more exposed with every release. Modern applications rely on vast ecosystems of open source components, dependencies, and increasingly AI-generated code. While this accelerates innovation, it also expands the attack surface dramatically. Threat actors are taking advantage of this complexity with more frequent and sophisticated attacks, from dependency confusion and malicious package injections to license risks that consistently target open source communities.
At the same time, developers are asked to move faster while ensuring security and compliance across thousands of components. Traditional security reviews often happen too late in the development lifecycle, creating friction between development and security teams and leaving maintainers overwhelmed by reactive work.
Kusari is proud to partner with the Open Source Security Foundation (OpenSSF) to offer Kusari Inspector at no cost to OpenSSF projects. Together, we’re helping maintainers and security teams gain deeper visibility into their software supply chains and better understand the relationships between first-party code, third-party dependencies, and transitive components.
Projects adopting Kusari Inspector include Gemara, GitTUF, GUAC, in-toto/Witness, OpenVEX, Protobom and Supply-chain Levels for Software Artifacts (SLSA). As AI coding tools become standard in open source development, Kusari Inspector serves as the safety net maintainers didn’t know they needed.
“I used Claude to submit a pull request to go-witness,” said John Kjell, a maintainer of in-toto/Witness. “Kusari Inspector found an issue that Claude didn’t catch. When I asked Claude to fix what Kusari Inspector flagged, it did.”
Maintainers are under growing pressure. According to Kusari’s Application Security in Practice report, organizations continue to struggle with noise, fragmented tooling, and limited visibility into what’s actually running in production. The same challenges affect open source projects — often with fewer resources.
Kusari Inspector helps OpenSSF projects:
Kusari Inspector – Secure Contributions at the Pull Request
Kusari Inspector also helps strengthen the relationship between developers and security teams. Our Application Security in Practice research found that two-thirds of teams spend up to 20 hours per week responding to supply chain incidents — time diverted from building and innovating.
For open source projects, the burden is often even heavier. From our experience in co-creating and maintaining GUAC, we know most projects are maintained by small teams of part-time contributors and already overextended maintainers who don’t have dedicated security staff. Every reactive investigation, dependency review, or license question pulls limited capacity away from priorities and community support — making proactive, workflow-integrated security even more critical.
By running automated checks directly in pull requests, projects reduce review latency and catch issues earlier, shifting from reactive firefighting to proactive prevention. Instead of maintainers owning reviews in isolation, Kusari Inspector gives them integrated, context-aware feedback closer to where development happens, accelerating secure delivery.
This partnership gives OpenSSF projects the clarity they need to make informed security decisions without disrupting developer workflows.
“The OpenSSF welcomes Kusari Inspector as a clear demonstration of community support. This helps our projects shift from reactive security measures to proactive, integrated prevention at scale,” said Steve Fernandez, General Manager, OpenSSF.
“Kusari’s journey has always been deeply connected to the open source security community. We’ve focused on closing knowledge gaps through better metadata, relationships, and insight,” said Tim Miller, Kusari Co-Founder and CEO. “Collaborating with OpenSSF reflects exactly why Kusari was founded: to turn transparency into actionable trust.”
If you’re an OpenSSF project maintainer or contributor interested in strengthening your supply chain posture, use Kusari Inspector for free — https://us.kusari.cloud/signup.
Author Bio
Michael Lieberman is co-founder and CTO of Kusari where he helps build transparency and security in the software supply chain. Michael is an active member of the open source community, co-creating the GUAC and FRSCA projects and co-leading the CNCF’s Secure Software Factory Reference Architecture whitepaper. He is an elected member of the OpenSSF Governing Board and Technical Advisory Council along with CNCF TAG Security Lead and an SLSA steering committee member.
Foundation welcomes Helvethink, Spectro Cloud, Quantrexion as members, offers Kusari Inspector for free to projects, and celebrates increased investment in AI security
AMSTERDAM – Open Source SecurityCon Europe – March 23, 2026 – The Open Source Security Foundation (OpenSSF), a cross-industry initiative of the Linux Foundation that focuses on sustainably securing open source software (OSS), today announced new members and key project momentum during Open Source SecurityCon Europe.
New OpenSSF members include Helvethink, Spectro Cloud, and Quantrexion, who join the Foundation as General Members. As members, these companies will engage with working groups, contribute to technical initiatives, and help guide the strategic direction of the OpenSSF. Together, members support open, transparent, and community-driven security innovation, and the long-term sustainability of the Foundation.
“Open source security continues to evolve significantly in the face of new, automated threats,” said Steve Fernandez, General Manager of OpenSSF. “Our member organizations are seeding a more secure future, built with longevity in mind, by working with the OpenSSF. This network of projects, maintainers, and thousands of contributors is key to reinforcing reliable, sustainable open source software for all.”
Foundation Updates and Milestones
In the past quarter, OpenSSF has furthered its mission to secure open source software with the following achievements:
OpenSSF growth follows the announcement of $12.5 million in grant funding awarded to OpenSSF and Alpha-Omega from leading AI providers. Funding from these leaders underscores broad industry support for more sustainable AI security assistance that empowers maintainers. Learn more about how OpenSSF and Alpha-Omega are using this grant to build long-term, sustainable security solutions here.
Supporting Quotes
“At Helvethink, we work at the intersection of cloud architecture, platform engineering, and DevSecOps. Open source components are foundational to modern infrastructure from Kubernetes and IaC tooling to CI/CD pipelines and security automation. Strengthening this ecosystem requires measurable standards, robust software supply chain security practices, and active collaboration across the community. By joining OpenSSF, we are actively participating in several working groups to contribute to initiatives focused on supply chain integrity, secure-by-design principles, and the continuous improvement of cloud-native security practices.”
– José Goncalves, co-founder, Helvethink
“Quantrexion is proud to join OpenSSF and support its mission to strengthen the security, resilience, and trustworthiness of open source software. As a company focused on governance and human risk management, we see secure open ecosystems as a critical part of long-term digital resilience.”
– Dionysis Karamitopoulos, CEO, Quantrexion
“Open source is the foundation of modern infrastructure — and its security is a shared responsibility. By joining the OpenSSF, Spectro Cloud is investing directly in the community work that raises the bar for everyone. Just as importantly, it strengthens the standards and practices behind the software we ship, so our customers can deploy Kubernetes with confidence in the integrity of every component. We’re proud to support the OpenSSF mission and to keep translating that momentum into real product capabilities that make secure software a default, not a bolt-on.”
– Saad Malik, CTO and co-founder, Spectro Cloud
Events and Gatherings
OpenSSF members are gathering this week in Amsterdam at Open Source SecurityCon Europe. To get involved with the OpenSSF community, join us at the following upcoming events:
Additional Resources
About the OpenSSF
The Open Source Security Foundation (OpenSSF) is a cross-industry organization at the Linux Foundation that brings together the industry’s most important open source security initiatives and the individuals and companies that support them. The OpenSSF is committed to collaboration and working both upstream and with existing communities to advance open source security for all. For more information, please visit us at openssf.org.
Media Contact
Grace Lucier
The Linux Foundation