Rethinking Post-Deployment Vulnerability Detection


By Tracy Ragan

Over the past decade, the IT community has made significant progress in improving pre-deployment vulnerability detection. Static analysis, Software Composition Analysis (SCA), container scanning, and dependency analysis are now standard components of modern CI/CD pipelines. These tools help developers identify vulnerable libraries and insecure code before software is released.

However, security does not end at build time.

Every successful software attack ultimately exploits a vulnerability that exists in a running system. Attackers can and do target code repositories, CI pipelines, and developer environments; these supply chain attacks are serious threats. But vulnerabilities running in live production systems are among the most dangerous because, once exploited, they can directly lead to persistent backdoors, system compromise, lateral movement, and data breaches.

This reality exposes an important gap in how organizations manage vulnerabilities today. While significant attention is placed on detecting vulnerabilities before deployment, far fewer organizations have effective mechanisms for identifying newly disclosed CVEs that affect software already running in production.

Across the industry, most development teams today run some form of pre-deployment vulnerability scanning, yet relatively few maintain continuous visibility into vulnerabilities impacting deployed software after release. This imbalance creates a dangerous blind spot: the systems organizations rely on every day may become vulnerable long after the code has passed through security checks.

As the volume of vulnerability disclosures continues to increase, the industry must rethink how post-deployment vulnerabilities are detected and remediated.

The Growing Post-Deployment Vulnerability Problem

Modern software systems depend heavily on open source components. A typical application may include hundreds, or even thousands, of transitive dependencies. While security scanning tools help identify vulnerabilities during development, they cannot predict vulnerabilities that have not yet been disclosed.

New CVEs are published daily across open source ecosystems. When a vulnerability is disclosed affecting a widely used package, thousands of deployed applications may suddenly become vulnerable, even if those applications passed every security check during their build process.

This creates a persistent challenge: software that was secure at release can become vulnerable later without any code changes.

In many organizations, the detection of these vulnerabilities relies on periodic rescanning of artifacts or manual monitoring of vulnerability feeds. These approaches introduce delays between vulnerability disclosure and detection, extending the window of exposure for deployed systems.

Because attackers actively monitor vulnerability disclosures and quickly develop exploits, this detection gap creates significant operational risk.

Current Approaches to Detecting Post-Deployment CVEs

Organizations today use several methods to identify vulnerabilities affecting deployed software. While each approach has value, they are often costly and introduce operational complexity.

One common strategy involves rescanning previously built artifacts or container images stored in registries. Security teams periodically run vulnerability scanners against these artifacts to identify newly disclosed CVEs. Although this approach can detect vulnerabilities that were unknown at build time, it cannot identify where those containers are actually running across operational assets.

Another approach relies on host-based security agents or runtime inspection tools deployed on production infrastructure. These tools identify vulnerable libraries by inspecting installed packages or monitoring application behavior. In practice, these solutions are most commonly implemented in large enterprise environments where dedicated operations and security teams can manage the operational complexity. They often require significant infrastructure integration, deployment planning, and ongoing maintenance.

Agent-based approaches also struggle to support edge environments, embedded systems, air-gapped deployments, satellites, or high-performance computing clusters, where installing additional runtime software may not be feasible or permitted. Even in traditional cloud environments, deploying and maintaining agents across thousands of systems can be a substantial operational lift.

This complexity stands in sharp contrast to pre-deployment scanning tools, which can often be installed in CI/CD pipelines in just minutes. Integrating a software composition analysis scanner into a build pipeline typically requires only a small configuration change or plugin installation. Because these tools are easy to adopt and operate earlier in the development lifecycle, they have seen widespread adoption across organizations of all sizes.

Post-deployment solutions, by comparison, often require significantly more effort to deploy and maintain. As a result, far fewer organizations implement comprehensive post-deployment vulnerability monitoring. While most development teams today run some form of pre-deployment vulnerability scanning, relatively few maintain continuous visibility into vulnerabilities impacting software already running in production. This leaves a critical visibility gap in the environments where vulnerabilities are ultimately exploited: live operational systems.

SBOMs Are an Underutilized Security Asset

A more efficient model for detecting post-deployment vulnerabilities already exists but is often underutilized.

A Software Bill of Materials (SBOM) provides a detailed inventory of the components included in a software release. When generated during the build process using standardized formats such as SPDX or CycloneDX, SBOMs capture critical metadata, including component names, versions, dependency relationships, and identifiers such as Package URLs.

SBOM adoption has accelerated in recent years due in part to initiatives such as Executive Order 14028 and ongoing work across the open source ecosystem. Organizations increasingly generate SBOMs as part of their software supply chain transparency efforts.

Yet in many environments, SBOMs are treated primarily as compliance documentation rather than operational security tools. Rather than being archived and forgotten after release, SBOMs can serve as persistent inventories of the components running in deployed software systems.

Detecting Vulnerabilities Without Rescanning

When SBOMs are available and associated with deployed releases, detecting newly disclosed vulnerabilities becomes significantly simpler.

Vulnerability intelligence feeds, such as the OSV.dev database, the National Vulnerability Database (NVD), and other vendor advisories, identify the packages and versions affected by each CVE. By correlating this vulnerability information with stored SBOMs and release metadata, organizations can quickly determine whether a deployed asset includes an affected component.

Because the SBOM already describes the complete dependency graph, there is no need to reanalyze artifacts or rescan source code. Detection becomes a metadata correlation problem rather than a compute-intensive scanning process.
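To make this concrete, the sketch below shows what that correlation can look like. It is an illustrative Python sketch, not any particular tool’s implementation: the SBOM shape loosely follows CycloneDX, the advisory records loosely follow OSV-style affected-version data, and the release and package details are hypothetical (though CVE-2021-44228, Log4Shell, did affect log4j-core 2.14.1).

```python
# Illustrative sketch: correlating a stored SBOM with vulnerability advisories.
# The SBOM structure loosely follows CycloneDX; the advisory records loosely
# follow OSV-style affected-version data. Names and versions are hypothetical.

# A simplified SBOM for one deployed release (normally loaded from JSON).
sbom = {
    "release": "payments-service:2.4.1",
    "components": [
        {"name": "log4j-core", "version": "2.14.1",
         "purl": "pkg:maven/org.apache.logging.log4j/log4j-core@2.14.1"},
        {"name": "requests", "version": "2.31.0",
         "purl": "pkg:pypi/requests@2.31.0"},
    ],
}

# Simplified advisory records (normally pulled from a feed such as OSV.dev).
advisories = [
    {"id": "CVE-2021-44228", "package": "log4j-core",
     "affected_versions": {"2.14.1", "2.15.0"}},
]

def find_affected(sbom, advisories):
    """Return (advisory id, component purl) pairs for affected components."""
    matches = []
    for component in sbom["components"]:
        for adv in advisories:
            if (component["name"] == adv["package"]
                    and component["version"] in adv["affected_versions"]):
                matches.append((adv["id"], component["purl"]))
    return matches

print(find_affected(sbom, advisories))
```

Real advisories express affected versions as ranges rather than fixed sets, so a production implementation would need proper version-range comparison, but the core operation remains a lookup over stored metadata rather than a rescan of artifacts.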

This model enables organizations to continuously monitor deployed software environments and identify newly disclosed vulnerabilities almost immediately after they are published.

Digital Twins and Continuous Vulnerability Synchronization

To operationalize this approach at scale, organizations need systems capable of continuously tracking the relationship between software releases, deployed environments, and their associated SBOMs. One emerging concept is the creation of a software digital twin, a continuously updated model that represents the software components running across operational systems.

A digital twin maintains the relationship between deployed endpoints and the SBOMs that describe the software they run. By synchronizing these SBOM inventories with vulnerability intelligence sources such as OSV.dev or the NVD at regular intervals, organizations can automatically detect when newly disclosed CVEs impact running systems.

Rather than waiting for scheduled scans or relying on agents installed on production infrastructure, this model enables continuous vulnerability awareness through metadata synchronization.
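As a rough illustration of that model, the hypothetical sketch below maps endpoints to the releases they run and re-checks that inventory against newly fetched advisories on each sync cycle. Every endpoint, release, and version here is invented for illustration.

```python
# Hypothetical sketch of a "software digital twin": a mapping from deployed
# endpoints to the SBOM release running there, re-checked against fresh
# advisory data on each synchronization cycle. All names are illustrative.

# Which release (and therefore which SBOM) runs on which endpoint.
endpoint_inventory = {
    "edge-gateway-eu-1": "payments-service:2.4.1",
    "edge-gateway-us-2": "payments-service:2.5.0",
}

# Component lists per release (in practice, the stored SBOMs themselves).
release_components = {
    "payments-service:2.4.1": [("log4j-core", "2.14.1")],
    "payments-service:2.5.0": [("log4j-core", "2.17.1")],
}

# Newly disclosed advisories fetched during this sync cycle.
new_advisories = [("CVE-2021-44228", "log4j-core", {"2.14.1", "2.15.0"})]

def sync_cycle(inventory, releases, advisories):
    """Return alerts mapping each affected endpoint to the CVEs impacting it."""
    alerts = {}
    for endpoint, release in inventory.items():
        for name, version in releases[release]:
            for cve, pkg, affected in advisories:
                if name == pkg and version in affected:
                    alerts.setdefault(endpoint, []).append(cve)
    return alerts

# Only the endpoint still running the vulnerable release is flagged.
print(sync_cycle(endpoint_inventory, release_components, new_advisories))
```

No agent runs on either endpoint in this model; detection is pure metadata correlation between the twin’s inventory and the advisory feed.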

Once an affected component is identified, remediation workflows can also be automated. Modern development platforms already rely on dependency manifests such as pom.xml, package.json, requirements.txt, or container Dockerfiles. By automatically updating these dependency files and generating pull requests with patched versions, organizations can rapidly move fixes back through their CI/CD pipelines.
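As a simplified illustration of the remediation step, the sketch below bumps a pinned dependency in a requirements.txt-style manifest. The package names and versions are hypothetical, and a real workflow would commit the change on a branch and open a pull request rather than just rewrite text.

```python
# Illustrative sketch: bumping a pinned dependency in a requirements.txt-style
# manifest to a patched version. Package names and versions are hypothetical;
# a real remediation workflow would also commit the change and open a PR.
import re

def bump_dependency(manifest_text, package, patched_version):
    """Replace any 'package==X' pin with 'package==patched_version'."""
    pattern = re.compile(rf"^{re.escape(package)}==\S+$", re.MULTILINE)
    return pattern.sub(f"{package}=={patched_version}", manifest_text)

manifest = "flask==2.0.1\nrequests==2.25.0\n"
print(bump_dependency(manifest, "requests", "2.32.3"))
```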

This type of automation has the potential to reduce vulnerability remediation times from months to days, dramatically shrinking the window of exposure. It also scales easily, giving developers more control over and visibility into the production threat landscape.

Aligning with OpenSSF Security Initiatives

Efforts across the Open Source Security Foundation (OpenSSF) ecosystem have helped establish the foundational infrastructure needed for this approach.

The OSV.dev vulnerability database provides high-quality vulnerability data tailored to open source ecosystems. Standards such as SPDX and CycloneDX enable consistent representation of SBOM data across tools and platforms. Projects like OpenVEX provide mechanisms for communicating vulnerability exploitability context, helping organizations determine which vulnerabilities require immediate attention.

Together, these initiatives create the building blocks for a more efficient and scalable vulnerability management model, one that relies on accurate software inventories and continuous vulnerability intelligence rather than repeated artifact scanning.
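As a small illustration of how exploitability context narrows the alert stream, the sketch below filters correlated findings using VEX-style statements. The record layout is simplified and hypothetical, not the actual OpenVEX schema; only the status value `not_affected` is borrowed from VEX terminology.

```python
# Illustrative sketch: applying VEX-style exploitability context (as provided
# by projects like OpenVEX) to filter correlated findings. The record layout
# here is simplified and hypothetical, not the actual OpenVEX schema.

# Findings produced by SBOM/advisory correlation.
findings = [
    {"cve": "CVE-2021-44228", "release": "payments-service:2.4.1"},
    {"cve": "CVE-2023-0001", "release": "payments-service:2.4.1"},
]

# VEX statements from the supplier: which CVEs actually affect this product.
vex_statements = {
    ("CVE-2023-0001", "payments-service:2.4.1"): "not_affected",
    ("CVE-2021-44228", "payments-service:2.4.1"): "affected",
}

def actionable(findings, vex):
    """Drop findings a VEX statement marks as not_affected."""
    return [f for f in findings
            if vex.get((f["cve"], f["release"])) != "not_affected"]

print(actionable(findings, vex_statements))
```

Only the genuinely exploitable finding survives the filter, which is precisely the triage burden VEX-style context is meant to remove from maintainers.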

The Future of Vulnerability Management

Pre-deployment security scanning will continue to play an important role in software development. Identifying vulnerabilities early in the development lifecycle reduces risk and improves software quality.

But the security landscape is evolving. As software ecosystems grow more complex and vulnerability disclosures increase, organizations must also strengthen their ability to detect vulnerabilities that appear after software has already been deployed.

Rethinking post-deployment vulnerability detection means shifting away from repeated artifact scanning and toward continuous monitoring of software composition.

SBOMs provide the foundation for this shift. When combined with digital twin models that track deployed software, continuous synchronization with vulnerability databases, and automated dependency remediation, organizations can dramatically improve their ability to defend operational systems.

One thing is certain: attackers ultimately focus on exploiting vulnerabilities running in live systems. Gaining clear visibility into the attack surface, understanding exactly which OSS packages are deployed, where they are running, and how quickly they can be remediated, is essential to securing live systems from cloud-native to the edge.

Author 

Tracy Ragan is the Founder and Chief Executive Officer of DeployHub and a recognized authority in secure software delivery and software supply chain defense. She has served on the Governing Board of the Open Source Security Foundation (OpenSSF) and currently serves as a strategic advisor to the Continuous Delivery Foundation (CDF) Governing Board. She also sits on both the CDF and OpenSSF Technology Advisory Committees. In these roles, she helps shape industry standards and pragmatic guidance for securing the software supply chain and advancing DevOps pipelines to enable safer, more effective use of open-source ecosystems at scale.

With more than 25 years of experience across software engineering, DevOps, and secure delivery pipelines, Tracy has built a career at the intersection of automation, security, and operational reality. Her work is focused on closing one of the industry’s most critical gaps: detecting and remediating high-risk vulnerabilities running in live, deployed systems, across cloud-native, edge, embedded, and HPC environments.

Tracy’s expertise is grounded in decades of hands-on leadership. She is the Co-Founder and former COO of OpenMake Software, where she pioneered agile build automation and led the development of OpenMake Meister, a build orchestration platform adopted by hundreds of enterprise teams and generating over $60M in partner revenue. That experience directly informs her current mission: eliminating security blind spots that persist long after software is released.

What’s in the SOSS? Podcast #57 – S3E9 From Noise to Signal: Security Expertise and Kusari Inspector with Mike Lieberman


Summary

In this episode, CRob talks with Mike Lieberman from Kusari about the current state of open source security. They discuss the growing burden on maintainers from the “deluge” of noisy, low-quality vulnerability reports, often generated by AI tools, and the vital role of “a human in the loop.” Mike introduces Kusari’s tool, Inspector, explaining how it uses codified security expertise to process data from tools like OpenSSF Scorecard and SLSA, effectively filtering out false positives and giving maintainers only high-quality, actionable reports. They also dive into the design philosophy of “don’t piss off the engineers” and share a vision for the future of security tooling that focuses on dramatically better user experience and building security primitives that are “secure by design”.

Conversation Highlights

00:06 Introduction: The Biggest Challenge in Security Tooling
01:12 Overwhelmed Maintainers: The Deluge of Low-Quality AI Reports
04:00 Introducing Kusari’s Inspector: How it Filters False Positives
08:40 The Secret Sauce: Security Expertise and the Need for Reproducible Tests
12:03 Meeting Engineers Where They Are: Design Choices to Reduce Maintainer Burden
18:16 The Future of Open Source Security Tooling: Focusing on Better UX
22:19 Call to Action: The Responsibility of Large Organizations

Transcript

(0:00) Intro Music

Mike Lieberman (00:06)
I think the biggest thing in security tooling is better user experience. I think that to me is one of the biggest challenges.

CRob (00:25)
Welcome, welcome, welcome to What’s in the SOSS?, the OpenSSF’s podcast where I talk to developers, maintainers, security experts, and people in and around this amazing open source ecosystem. Today, again, we have a real treat. Friend of the show, Mike Lieberman from Kusari is joining us again after – I don’t know if your podcast was toppled from its place of the most listened to before, but we’re gonna see if we can make another hit for us. But we’re here today to talk about some interesting developments that you and your crew are involved in and just things going on in open source security. So how have you been, sir?

Mike Lieberman (01:07)
Well, thank you for having me back and yeah, things are going pretty well.

CRob (01:12)
Well, let’s dive right into it. Recently, and this is a topic that I’m actually dealing with this very moment while we’re recording this podcast, that open source maintainers are just currently overwhelmed by just this deluge of noisy, low quality reports, a lot of them generated by AI tools. So kind of thinking about it with your, many hats you wear, as you know, business owner, a community member, and a long time developer, a security expert. From your perspectives, what is actually creating the most burden today? And think about it through the lens of this project you’re going to share with us in a moment.

Mike Lieberman (01:57)
Yeah, sure. So I think, to kind of start, the problem has been the same problem since, you know, throughout human history: it is a combination of either bad actors or just lazy people that are, I would say, the biggest issue here. Right. We have a lot of things like AI reports generating awful sort of vulnerability, you know, fake vulnerabilities or whatnot. But if we kind of look at it through the lens of history through tech, we saw the same thing with any sort of automation, right? When, yeah, exactly. When people could kind of create scripts, hey, let me go spam this one project with my script. Let me spam a whole bunch of projects with my sort of automation or whatever. And, you know, the same thing sort of happened when we started moving away from mailing lists to sort of GitHub and those sorts of things as well. So I think it’s really, to kind of take a step back, it’s kind of how people are using the tools more so than the tools themselves. But I do think when it comes to a lot of the security reports, yeah, it is folks who are just kind of asking an LLM.

Hey, find me some zero-day. And of course, that’s never going to work because the LLMs don’t have that information. And it just kind of comes back to: you need people who understand what they’re doing, using the tools in the right way, in order to kind of figure out some of this stuff.

CRob (03:42)
Yeah, human in the loop. Our dear friend, Dr. David Wheeler has a saying, he says, a fool with a tool is still a fool. So again, having those experts in there, helping out is critical. So let’s…

Mike Lieberman (03:43)
Yeah.

CRob (04:00)
You’ve been in this space for a long time, focusing in on supply chain security, and you’ve written or contributed to a ton of tools. And most recently, you all helped create over at Kusari a tool called Inspector. So from just a high level TLDR, how do you see things like Inspector kind of changing this dynamic of getting more people involved or getting more expert knowledge in?

Mike Lieberman (04:26)
Sure. I think, like, the thing is, so actually, to take a step back, right? There’s a lot of great tools that are being built. The challenge with those tools is, and the way I kind of think about it is, like, you know, home security, right? It’s, hey, there’s a ton of tools out there that are helping out with open source security the same way that there’s a ton of tools out there for, you know, a smarter lock, a better security system.

CRob (04:59)
A doorbell that can find your dog.

Mike Lieberman (05:02)
There’s privacy concerns on that one. I think, you know, we can all agree on that. But I think to that extent, when it comes to sort of these tools, it’s in how they’re used. And also the expertise that’s required in how to use them. And also, when building the tools, what sort of expertise went into building the tools? And I think that to me is where the big gap is with just sort of some of the AI-related things: you have folks using a very generic system like an LLM and just saying, hey, LLM, become a security expert and do this stuff. And of course the LLM makes a lot of mistakes and whatever. But if you were to kind of say, through things like MCP and LLM skills and all these other things, if you have a way of codifying: run OpenSSF Scorecard, run, you know, SLSA and run all of these various things and put all of this together and generate me an SBOM using these tools and whatnot. And then you can take all that and hand the output to the LLM and say, hey, here’s everything I discovered. Here’s also the code. Help me make sense of it. And I think that to me is kind of where a lot of the benefit is. And again, what I just described is essentially Inspector, right?

We’re running all of these various tools, again, that we understand because we’ve contributed to those tools, we’ve helped maintain some of those tools, we have been users of these tools for years. So we understand how they’re supposed to be used. We understand how a human, before the age of AI, would be using these tools. And we recognize the burden of that expertise. And we’ve sort of encoded it. Had the LLM kind of come in at the last mile and then take all that information and say, hey, if there is a finding, a vulnerability, great. Where does that vulnerability live? Is that a vulnerability in a core piece of my code, which yes, I need to address right now, or is it like, it’s in a test? Yes, it’s probably something I should fix, but maybe not the biggest issue right this second. And so I think tools like that are really helping, because the thing that we found, and again, a user of Inspector told us this, and I won’t call out the exact AI tool they were using, but they were using a generic LLM with some stuff. And then they were using Inspector. And one of the things that they had said was, wow, Inspector is actually catching the issue. It detected that a particular issue was essentially a false positive, because it looked at a potential remote code execution and looked at all the stuff alongside the code and said, you clearly have an allow list. So given that you have this allow list, we recognize it’s not a remote code execution, or rather arbitrary code execution, attack. And I think it’s stuff like that that we’re seeing starting to get developed more and more. Whereas a lot of the tradition, I want to say traditional with AI, even though it’s been, you know, like in the past year, everything shifts. Yeah.

When we look at sort of how folks were using LLMs even just a year ago, a lot has shifted and we’re seeing less of these false positives coming out of AI because people are using AI the way it should be used, where it’s you’re supplementing all these other tools that are out.

CRob (08:40)
That’s awesome.

And this might lead into this next question. AI and automation are finding a lot more potential issues, but they’re not always better. Is that, do you think, the secret sauce: having that security expertise that helps kind of balance out finding a vulnerability and then sharing that information with the maintainer effectively?

Mike Lieberman (09:08)
Yeah, so I mean, I think when it comes to stuff like that, the way I, you know, I was actually having a conversation with a friend just a few days ago about this issue and I’m reminded of issues just even before AI. And one of the big things that maintainers would ask is, give me a way to reproduce this. If you’re not gonna give me a way to reproduce this, I’m not gonna, you know, I’m not gonna accept your report here and I’m not gonna do a ton of investigation to figure out what you intended to mean.

And I think it’s the same way with AI here, where we’re starting to see, with some of the stuff coming out of AIxCC and some other places, we are starting to see tools that are being built that are actually generating the tests and whatnot that can reproduce these vulnerabilities that the LLMs are claiming, or AI tools are claiming. And I think that to me is important, because when I look at Daniel from curl or some of these other folks who are like,

I am so sick of all of these AI reports. It’s like, every single one that they’re claiming is an AI report, it’s like they didn’t give me a way to reproduce it. Or even worse, the AI said, here’s a list of steps to reproduce, and folks are coming out and saying, that function that you were claiming needs to get run doesn’t exist. And so I’m just thinking to myself, well, why not just write a test that does that thing, you know, and have the LLM write the test, whatever, but prove out that like, hey, an AI tool generated a test and I can run that test and I could see, yep, that is an exploit. That is actually a vulnerability. Now I can go and take that, package it up, and hand it over to the maintainer. And I think, if I’m a maintainer of various open source projects,

If I received something that said, hey, here is a test, you can run that test. And again, by the test, I mean like an actual test, a test that mocks up other code and tries to do whatever, but an actual test. If you have that, I as a maintainer would say, absolutely, that is a real vulnerability. But I think the thing that we’re seeing right now is we’re seeing all this sort of slop, which is…

Again, it’s just similar to the slop we saw years ago with other sorts of automated vulnerability reporting and just generally in tech. And I think the problem here still kind of comes back to lazy maintainers, or sorry, not lazy maintainers, but lazy submitters, and just other sorts of bad actors who are just like, yeah, I’m just gonna throw a thing out there and hopefully one of these is right, and I’m gonna make a name for myself.

CRob (11:58)
A wise man once said that knowing is half the battle.

Mike Lieberman (12:01)
Yes.

CRob (12:03)
And thinking about it from this maintainer-developer perspective, almost always maintainers are volunteers first. They’re there because they have an amazing idea they wanna share, they have a problem they’re trying to solve. Some people are paid to do a specific thing, but the majority of folks are volunteers first. And security experts twelfth, or eighteenth; security is not necessarily a core skill that most developers have. So what design choices, kind of thinking about when you were looking at Inspector, what design choices did you make to help meet the maintainers where they are, where they are experts in languages or frameworks or kind of these techniques or algorithms? How are you helping them where they are, rather than expecting them to become a full-time securityologist like you or I?

Mike Lieberman (12:58)
So we have a mantra here at Kusari, which is essentially just: don’t piss off the engineers, right? As engineers ourselves, as folks who, myself, I am a software engineer first, or really more of a DevOps, DevSecOps engineer first, became more of a software engineer over time. But one of the big sort of mantras was, one of the things that always frustrated me was: you have to do all the security stuff.

And they were burdens to my daily job, right? Where I was not being, you know, again, this is me both as a maintainer of open source projects and also just, hey, I get paid as an engineer or whatever. But at the end of the day, I wasn’t incentivized to do secure things. I might’ve been yelled at. I might’ve been told thou shalt do this security thing, but my incentives were getting out this new feature, making my customer or my user happy, right?

And so when it comes to those sorts of things, that’s kind of how we’ve encoded all of this. Where if somebody told me, hey, Mike, you put in a potential remote code execution attack or arbitrary code, whatever it is, like you put in a SQL injection attack or some other, you’re not handling this auth thing correctly. If you told me, yeah, that’s the thing, and you told me what I might need to look at, yeah, let me get on that. Let me fix that.

If you were to tell me, hey, you’re using a library that isn’t maintained and that everybody has mostly moved over to this other library, cool, I’ll work on that. But don’t make it a burden: hey, this library is unmaintained. Okay, what am I supposed to do about it? I don’t know what I’m supposed to do about it. Help me with suggestions. So when it comes to Inspector, those are the sorts of things that we sort of baked in: we’re not just telling you whether this project is maintained.

We’re telling you, hey, this project isn’t maintained, but it’s used in just one test. So maybe it’s not the immediate thing that needs to be fixed versus, hey, this thing is completely unmaintained and it’s potentially vulnerable. And this is something new you’re adding. Like this isn’t something that already exists. This is just bad practice. Like you should probably not include this new thing. Or, you know, and again, providing the suggestions to the user on what to actually do about it.

And some of those things can then be, you know, automated. You know, Inspector has a CLI tool that you can use. And I use it myself with Claude, where, hey, I run it, have Claude kind of come in and, you know, fix it. And like, it works pretty well. So I think again, it’s the combination of things to sort of make sure that, as an engineer, you know, you’re not being asked to become an expert in this thing, right? It’s okay to ask an engineer:

You are a database expert, you should be reasonable at securing databases, but securing the underlying OS and yada yada, hey, maybe you don’t need to be an expert in that. And that’s where tools like Inspector, I think, really help: they’re the ones who are being experts. Again, kind of going back to the home analogy, right? If I have a house, I don’t need to know the inner mechanics of, you know, a pin tumbler lock, and yada yada, and how the various cameras that are looking at the outside of my house all interoperate. No, I just need to know: are they working? If something, you know, the battery died on this, I know how to change a battery, let me kind of focus on that. But if they were to come in and say, no, no, you need to understand the innards of the networking and you need to understand audio-visual processing, I’d be like, no, that’s just not gonna work.

So again, make sure that developers can focus just on what they’re experts in, and what their primary responsibility is, which is usually to the user. And yes, security is a responsibility there, but they’re not going to be generic security experts. And so what can we do to help them, hold their hand, and tell them what needs to be done in a way that they can kind of say, yeah, you’re asking me to do two or three small little things. Awesome. By the way, we here at Kusari have made Inspector free for open source, but not just open source, specifically for CNCF and OpenSSF. You have full sort of unfettered access, no rate limits, no quotas. And we’d love to see folks sign up. The website is kusari.cloud. And yeah, I want to see folks using it.

CRob (17:45)
It’s really interesting, and I love the focus on, again, because you grew up through this, you are a software engineer. So I love the focus on trying to relieve that burden from this army of volunteers. So let’s do something else we do often in cybersecurity. Let’s get our crystal ball out and, you know, thinking ahead from your perspective, what do you think, you know, good security tooling for open source looks like in three to five years?

Mike Lieberman (18:16)
I think the biggest thing in security tooling is better user experience. I think that to me is one of the biggest challenges. And right today, and I think that’s where a lot of folks are focusing their efforts, it’s, you know, we need, to some extent, and I know, like, the first thing that came to mind is Kubernetes, but for security, right? And I recognize that Kubernetes, depending on who you talk to, you know, YAML files,

But no, it really did democratize and make simpler the orchestrating of complex container workloads, right? And I think when it comes to security, user experience is often kind of a secondary concern compared to just, did I prevent the security issue? But as our world continues to get more complex and complicated, and things are scaling up, and we’re having AI and all these different things, the need for security continues to increase more and more every day. But with that said, if the answer is, using these security tools requires, you know, tons of certifications and whatnot just to use the security tool, right? Not to become an expert, but just to use the security tool, if you need to be an expert in all these different things, it becomes super difficult, nobody’s gonna do it. So I think we’re gonna start to see, to some extent, more tools like Inspector, but also in addition to that, more tools like, and I know we’re working on this in OpenSSF, tools that make adopting SLSA trivial for the average project. Tools that help just sort of generally with security build out that UX, make it simpler for the average engineer to do that. Similar to how we saw stuff

in that space with DevOps, right? You had developers and operations, and those worlds became more combined. And what happened was you had tools like your Terraforms or, you know, OpenTofu and Ansible, all of these great things that came out of that space, to make it easier for folks focused on operations to get a little closer to developers, and for developers to also help out with some of the operations, infrastructure, engineering, those sorts of things. And I think we’re going to see more of that as time goes on, where these, I’m going to call them security primitives, are more encoded in the tools we have. So I think we’re going to see a lot of tools out there become secure by design, with a lot of the security features baked in, and the security tools we have generally become a little bit simpler. And in areas where they can’t be super simple, we’re going to see more tools like Inspector that come in and operate similar to how you might imagine a security expert would, putting the pieces together. Which, again, doesn’t eliminate the security engineer; I just want to be clear, security engineers are very much still needed. The challenge is how the security engineer is now being tasked. Whereas before you had to be an expert in a small set of domains, now you’re being asked to be an expert across everything. And security engineers are going to be the ones taking these new security tools and, given that better UX, scaling them across, you know, 10,000 projects, a hundred different AI agents, a million containers, all of those things. So I think we’re going to start seeing a lot more of the security tools working better to scale up what we’re doing.

CRob (22:04)
That is an amazing vision. I look forward to watching that unfold over the years; hopefully your vision becomes a reality. Thank you. And as we’re winding down, do you have any closing thoughts or any call to action for the audience?

Mike Lieberman (22:19)
Yeah, there are two big ones. One is: hey, if you’re a maintainer or an engineer, I know you care about security, because even when I was not a security engineer, I cared about security. So what I want to hear from maintainers is: how can the open source world help? How can we help you not get clobbered by a million

letters from lawyers and other people demanding security features in your stuff? How can we, as an open source security community, help out? How can we make the tools easier? How can we make sure those tools fit your needs? And that includes whether it’s Inspector or, you know, other things. And on that note as well, CNCF and OpenSSF projects can use Inspector.

And the other call to action, and I know I say this a lot: large organizations that are using open source, it is your responsibility to provide the incentives to make sure that open source is more secure. We can all demand, hey, we need better open source security tooling, we need this, that, and the other thing. But if nobody’s paying for it, if at the end of the day the random engineer making that open source security tool can’t pay the bills, they’re not going to do it. If they’re getting clobbered with a million different feature requests, it’s just not going to work. So we need to make sure they aren’t. I know there are things like the Sovereign Tech Fund, and I want to see more of that. But just generally, I think it needs to come from these multi-billion, multi-trillion dollar companies coming in and saying, hey, we are willing to foot a good deal of this bill in order to make the world more secure for everybody.

CRob (24:17)
Those are some wise words and also, I think, a wonderful vision we can all work towards together. Mike Lieberman from Kusari, thank you, my friend. I loved having you on. And with that, we’re going to call this a wrap. I want everyone to stay cyber safe and sound and have a great day.

Kusari Partners with OpenSSF to Strengthen Open Source Software Supply Chain Security

By Blog, Guest Blog

Cross-post originally published on the Kusari Blog

Open source software powers the modern world; securing it remains a shared responsibility.

The software supply chain is becoming more complex and more exposed with every release. Modern applications rely on vast ecosystems of open source components, dependencies, and increasingly AI-generated code. While this accelerates innovation, it also expands the attack surface dramatically. Threat actors are taking advantage of this complexity with more frequent and sophisticated attacks, from dependency confusion and malicious package injections to license risks that consistently target open source communities.

At the same time, developers are asked to move faster while ensuring security and compliance across thousands of components. Traditional security reviews often happen too late in the development lifecycle, creating friction between development and security teams and leaving maintainers overwhelmed by reactive work.

Kusari is proud to partner with the Open Source Security Foundation (OpenSSF) to offer Kusari Inspector at no cost to OpenSSF projects. Together, we’re helping maintainers and security teams gain deeper visibility into their software supply chains and better understand the relationships between first-party code, third-party dependencies, and transitive components.  

Projects adopting Kusari Inspector include Gemara, GitTUF, GUAC, in-toto/Witness, OpenVEX, Protobom and Supply-chain Levels for Software Artifacts (SLSA). As AI coding tools become standard in open source development, Kusari Inspector serves as the safety net maintainers didn’t know they needed. 

“I used Claude to submit a pull request to go-witness,” said John Kjell, a maintainer of in-toto/Witness. “Kusari Inspector found an issue that Claude didn’t catch. When I asked Claude to fix what Kusari Inspector flagged, it did.”

Maintainers are under growing pressure. According to Kusari’s Application Security in Practice report, organizations continue to struggle with noise, fragmented tooling, and limited visibility into what’s actually running in production. The same challenges affect open source projects — often with fewer resources.

Kusari Inspector helps OpenSSF projects:

  • Map dependencies and transitive risk
  • Identify gaps in attestations and provenance
  • Understand how components relate across builds and releases
  • Reduce manual investigation and security guesswork

Kusari Inspector – Secure Contributions at the Pull Request

Kusari Inspector also helps strengthen the relationship between developers and security teams. Our Application Security in Practice research found that two-thirds of teams spend up to 20 hours per week responding to supply chain incidents — time diverted from building and innovating. 

For open source projects, the burden is often even heavier. From our experience in co-creating and maintaining GUAC, we know most projects are maintained by small teams of part-time contributors and already overextended maintainers who don’t have dedicated security staff. Every reactive investigation, dependency review, or license question pulls limited capacity away from priorities and community support — making proactive, workflow-integrated security even more critical.

By running automated checks directly in pull requests, projects reduce review latency and catch issues earlier, shifting from reactive firefighting to proactive prevention. Instead of maintainers “owning” reviews in isolation, Kusari Inspector brings them integrated, context-aware feedback, moving security closer to development and accelerating secure delivery.

This partnership gives OpenSSF projects the clarity they need to make informed security decisions without disrupting developer workflows.

“The OpenSSF welcomes Kusari Inspector as a clear demonstration of community support. This helps our projects shift from reactive security measures to proactive, integrated prevention at scale,” said Steve Fernandez, General Manager, OpenSSF.

“Kusari’s journey has always been deeply connected to the open source security community. We’ve focused on closing knowledge gaps through better metadata, relationships, and insight,” said Tim Miller, Kusari Co-Founder and CEO. “Collaborating with OpenSSF reflects exactly why Kusari was founded: to turn transparency into actionable trust.”

If you’re an OpenSSF project maintainer or contributor interested in strengthening your supply chain posture, use Kusari Inspector for free — https://us.kusari.cloud/signup.

Author Bio

Michael Lieberman is co-founder and CTO of Kusari, where he helps build transparency and security into the software supply chain. Michael is an active member of the open source community, co-creating the GUAC and FRSCA projects and co-leading the CNCF’s Secure Software Factory Reference Architecture whitepaper. He is an elected member of the OpenSSF Governing Board and Technical Advisory Council, a CNCF TAG Security Lead, and an SLSA steering committee member.

Leading Tech Coalition Invests $12.5 Million Through OpenSSF and Alpha-Omega to Strengthen Open Source Security

By Blog

Securing the open source software that underlies our digital infrastructure is a persistent and complex challenge that continues to evolve. The Linux Foundation announced a $12.5 million collective investment to be managed by Alpha-Omega and The Open Source Security Foundation (OpenSSF). This funding comes from key partners including Anthropic, Amazon Web Services (AWS), Google, Google DeepMind, GitHub, Microsoft, and OpenAI. The goal is to strengthen the security, resilience, and long-term sustainability of the open source ecosystem worldwide.

Building on Proven Success through OpenSSF Initiatives

This new investment provides critical support for OpenSSF’s proven, maintainer-centric initiatives. Targeted financial support is a key catalyst for sustained improvement in open source security. The results of the OpenSSF’s collective work in 2025 are clear:

  • Alpha-Omega invested $5.8 million in 14 critical open source projects and completed over 60 security audits and engagements.
  • Growing a Global Community: OpenSSF grew to 117 member organizations and was advanced by 267+ active contributors from 112 organizations, working across 10 Working Groups and 32 Technical Initiatives.
  • Driving Technical Impact: The OpenSSF Technical Advisory Council (TAC) awarded over $660,000 in funding across 14 Technical Initiatives, strengthening supply chain integrity, advancing transparency tools like Sigstore, and enabling community-driven security audits.
  • Measurable Security Uplift: Focused security engagements across critical projects resulted in 52 vulnerabilities fixed and 5 fuzzing frameworks implemented.
  • Expanding Education: Nearly 20,000 course enrollments across OpenSSF’s free training programs, with new courses like Security for Software Development Managers and Secure AI/ML-Driven Software Development empowering developers globally.
  • Global Policy Engagement: Launched the Global Cyber Policy Working Group and served as a challenge advisor for the Artificial Intelligence Cyber Challenge (AIxCC), ensuring the open source voice is heard in evolving regulations like the EU Cyber Resilience Act (CRA).

AI: A New Frontier in Security

The security landscape is changing fast. Artificial intelligence (AI) accelerates both software development and the discovery of vulnerabilities, which creates new demands on maintainers and security teams. However, OpenSSF recognizes that grant funding alone is not the solution to the problems AI tools are creating for open source security teams today. This moment also offers powerful new opportunities to improve how security work gets done.

This new funding will help the OpenSSF provide the active resources and dedicated projects needed to support overworked maintainers with the triage and processing of the growing volume of AI-generated security reports they are receiving. Our response will feature global strategies tailored to the needs of maintainers and their communities.

“Open source software now underpins the majority of modern software systems, which means the security of that ecosystem affects nearly every organization and user worldwide,” said Christopher Robinson, CTO and Chief Security Architect at OpenSSF. “Investments like this allow the community to focus on what matters most: empowering maintainers, strengthening security practices across projects, and raising the overall security bar for the global software supply chain.”

Securing the Open Source Lifecycle

The true measure of success will be execution. Success is not about how much AI we introduce into open source. It is determined by whether maintainers can use it to reduce risk, remediate serious vulnerabilities faster, and strengthen the software supply chain long term. We are grateful to our funding partners for their commitment to this work, and we look forward to continuing it alongside the maintainers and communities that power the world’s digital systems.

“Our commitment remains focused: to sustainably secure the entire lifecycle of open source software,” said Steve Fernandez, General Manager of OpenSSF. “By directly empowering the maintainers, we have an extraordinary opportunity to ensure that those at the front lines of software security have the tools and standards to take preventative measures to stay ahead of issues and build a more resilient ecosystem for everyone.”

To learn more about open source security initiatives at the Linux Foundation, please visit openssf.org and alpha-omega.dev.

What’s in the SOSS? Podcast #41 – S2E18 The Remediation Revolution: How AI Agents Are Transforming Open Source Security with John Amaral of Root.io

By Podcast

Summary

In this episode of What’s in the SOSS, CRob sits down with John Amaral from Root.io to explore the evolving landscape of open source security and vulnerability management. They discuss how AI and LLM technologies are revolutionizing the way we approach security challenges, from the shift away from traditional “scan and triage” methodologies to an emerging “fix first” approach powered by agentic systems. John shares insights on the democratization of coding through AI tools, the unique security challenges of containerized environments versus traditional VMs, and how modern developers can leverage AI as a “pair programmer” and security analyst. The conversation covers the transition from “shift left” to “shift out” security practices and offers practical advice for open source maintainers looking to enhance their security posture using AI tools.

Conversation Highlights

00:25 – Welcome and introductions
01:05 – John’s open source journey and Root.io’s SlimToolkit project
02:24 – How application development has evolved over 20 years
05:44 – The shift from engineering rigor to accessible coding with AI
08:29 – Balancing AI acceleration with security responsibilities
10:08 – Traditional vs. containerized vulnerability management approaches
13:18 – Leveraging AI and ML for modern vulnerability management
16:58 – The coming “remediation revolution” and fix-first approach
18:24 – Why “shift left” security isn’t working for developers
19:35 – Using AI as a cybernetic programming and analysis partner
20:02 – Call to action: Start using AI tools for security today
22:00 – Closing thoughts and wrap-up

Transcript

Intro Music & Promotional clip (00:00)

CRob (00:25)
Welcome, welcome, welcome to What’s in the SOSS, the OpenSSF’s podcast where I talk to upstream maintainers, industry professionals, educators, academics, and researchers all about the amazing world of upstream open source security and software supply chain security.

Today, we have a real treat. We have John from Root.io with us, and we’re going to be talking a little bit about some of the, air quotes, “cutting edge” things going on in the space of containers and AI security. But before we jump into it, John, could you maybe share a little bit with the audience about how you got into open source and what you’re doing upstream?

John (01:05)
First of all, great to be here. Thank you so much for taking the time at Black Hat to have a conversation. I really appreciate it. Open source is a really great topic. I love it. I’ve been doing stuff with open source for quite some time. How did I get into it? I’m a builder. I make things. I’ve been writing software a long time. Folks can’t see me, but, you know, I’m gray and have no hair and all that sort of thing. We’ve been doing this a while. And it’s been a great journey and a pleasure in my life to work with software in a way that democratizes it, gets it out there. I’ve taken a special interest in security for a long time, 20 years of working in cybersecurity. It’s a problem that’s been near and dear to me since the first day my first floppy disk got corrupted. I’ve been on a mission to fix that. And my open source journey has been diverse. My company, Root.io, we are the maintainers of an open source project called SlimToolkit, a pretty popular open source project focused on security and containers. And it’s been our goal, mine personally and my latest company’s, to really try to help make open source secure for the masses.

CRob (02:24)
Excellent. That is an excellent vision and direction to take things. I feel we’re very similar in age and came up along semi-related paths. But from your perspective, how have you seen application development transmogrify over the last 20 or so years? What has gotten better? What might’ve gotten a little worse?

John (02:51)
20 years is a big time frame when talking about modern open source software. I remember when Linux first came out, and I was playing with it. I actually ported it to a single-board computer as one of my jobs as an engineer back in the day, which was super fun. Of course, we’ve seen what happened by making software available to folks. It’s become the foundation of everything.

Andreessen said software would eat the world, and the teeth were open source. Open source really made software available, and now 95 or more percent of everything we touch and do involves open source software. I’ll add that in the grand scheme of things, it’s been tremendously secure, especially projects like Linux; we’re really splitting hairs. But security problems are real. As we’ve seen, there’s been a proliferation of open source, a proliferation of repos with things like GitHub, and today a proliferation of tooling, and the ability to build software, and to build it with AI, is simply exponentiating the rate at which we can do things. Good people who build software for the right reasons can do more. Bad people who do things for bad reasons can do more. It’s an arms race.

And I think it’s really benefiting software development, society, and software builders, with these tremendously powerful tools to do what they want. For a person at my point in the career arc, today I feel like I have the power to write code at a rate that’s probably better than I ever have. I’ve always been hands-on with the keyboard, but I feel rejuvenated. I’ve become a business person in my life and built companies.

And I didn’t always have the time, or maybe even the moment, to code at the level I’d like. Today I’m banging out projects like I was 25, or even better. But at the same time that we’re getting all this leverage universally, we’ve also noticed an impending security risk: yes, we can find vulnerabilities, and generate them, faster than ever, and LLMs aren’t quite good yet at secure coding. I think they will be. But attackers are using AI for exploits too, and as soon as a disclosed vulnerability comes out, or even minutes later, they’re writing exploits that target it. I love that the pace and the leverage are high, and I think the world, the world of open source folks like us, is going to do great things with it. At the same time, we’ve got to be more diligent and even better at defending.

CRob (05:44)
Right. I heard an interesting statement yesterday where folks were talking about software engineering as a discipline that’s maybe 40 to 60 years old, and engineering was the core noun there. These engineers were trained; they had a certain rigor. They might not have always enjoyed security, but they were engineers, and there was a certain elegance to the code. Much like artists, people took a lot of pride in their work, and you could understand what the code was doing. Today, and especially in the last several years with the influx of AI tools, it’s a blessing and a curse that anybody can be a developer. Not just the people who used to do it and didn’t have the time, who now get to scratch that itch. Now anyone can write code, and they may not necessarily have the same rigor and discipline that comes from the engineering trades.

John (06:42)
I’m going to guess, and I think it’s not walking too far out on a limb, that you probably coded on systems at some point in your life where you had a very small amount of memory to work with. You knew every line of code in the system. There might have been a shim operating system or something small, but I wrote embedded systems early in my career and we knew everything. We knew every line of code, and the elegance and the efficiency and the speed of it. We were very close to the CPU, very close to the hardware. It was slow building things because you had to handcraft everything, but it was very curated and very beautiful, so to speak. I find beauty in those things. You’re exactly right. I think I started to see this happen around the time JVMs, Java Virtual Machines, started happening, where garbage collection meant you didn’t have to worry about memory management.

And then progressively, the levels of abstraction have changed to make coding faster and easier and to give you more power. That’s great, and we’ve built a lot more systems, bigger systems; open source helps. But now literally anyone who can speak cogently and describe what they want can get a system. And I look at the code my LLMs produce. I know what good code looks like. Our team is really good at engineering, right? And sometimes I wonder:

Hmm, how did it think to do it that way? Then we go back and tell it what we want, and you can massage it with some words. It’s really dangerous, and if you don’t know how to look for security problems, that’s even more dangerous. Exactly as you said, the level of abstraction is so high that people aren’t really curating code the way they might need to in order to build secure, production-grade systems.

CRob (08:29)
Especially if you are creating software with the intention of somebody else using it, probably in a business, then you’re not really thinking about all the extra steps you need to take to help protect yourself in your downstream.

John (08:44)
Yeah, yeah. I think it’s an evolution, right? The way I think of it, these AI systems we’re working with are maybe second graders when it comes to professional code authoring. They can produce a lot of good stuff, right? It’s really up to the user to discern what’s usable.

And we can get to prototypes very quickly, which I think is greatly powerful, which lets us iterate and develop. In my company, we use AI coding techniques for everything, but nothing gets into production, into customer hands that isn’t highly vetted and highly reviewed. So, the creation part goes much faster. The review part is still a human.

CRob (09:33)
Well, that’s good. Human on the loop is important.

John (09:35)
It is.

CRob (09:36)
So let’s change the topic slightly and talk a little more about vulnerability management. From your perspective, thinking about traditional brick-and-mortar organizations, what key differences do you see between teams that are more data center, server, and VM focused versus the new generation of cloud native, where we have containers and cloud?

What are some of the differences you see in managing your security profile and your vulnerabilities there?

John (10:08)
Yeah, so I’ll start out with a general statement about vulnerability management. In general, the current methodologies I observe today are pretty traditional.

It’s scan, it’s inventory: what do I have for software? Let’s just focus on software. What do I have? Do I know what it is or not? Do I have a full inventory of it? Then you scan it and you get a laundry list of vulnerabilities, with some false positives and false negatives among what you’re able to find. And then I’ve got this long list, and the typical pattern is triage: which are more important than others, and which can I explain away? And then there’s a cycle of remediation, hopefully (a lot of times not), where you’re cycling work back to the engineering organization, or to whoever is in charge of doing the remediation. And this is a very big loop, mostly starting and ending with long lists of vulnerabilities that need to be addressed and risk-managed, right? It doesn’t really matter whether you’re doing VMs, traditional software, or containerized software. That’s the status quo, I would say, for the average company doing vulnerability management. And the remediation part ends up being fractional work, meaning you mostly just don’t have time to get to it all, and it becomes a big tax on the development team to fix, because in software it’s very difficult for DevSec teams to fix it when it’s actually a coding problem in the end.
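The scan, triage, remediate loop John describes can be sketched in a few lines. This is a minimal illustration only; the data shapes, thresholds, and function names are hypothetical, not any particular scanner’s API:

```python
from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class Finding:
    cve: str                 # e.g. "CVE-2024-1234"
    severity: float          # CVSS base score, 0.0-10.0
    fixed_in: Optional[str]  # patched version, if one is known

def triage(findings: List[Finding], threshold: float = 7.0) -> Dict[str, List[Finding]]:
    """Partition scanner output into the buckets the status quo produces:
    a long list becomes triage queues, and remediation is deferred work."""
    buckets: Dict[str, List[Finding]] = {
        "remediate_now": [], "backlog": [], "accept_risk": []
    }
    for f in sorted(findings, key=lambda f: f.severity, reverse=True):
        if f.severity >= threshold and f.fixed_in:
            buckets["remediate_now"].append(f)  # patch exists, high risk
        elif f.fixed_in:
            buckets["backlog"].append(f)        # patch exists, lower risk
        else:
            buckets["accept_risk"].append(f)    # no fix yet: document and monitor
    return buckets

findings = [
    Finding("CVE-2024-0001", 9.8, "2.4.1"),
    Finding("CVE-2024-0002", 5.3, "1.0.9"),
    Finding("CVE-2024-0003", 8.1, None),
]
result = triage(findings)
print([f.cve for f in result["remediate_now"]])  # ['CVE-2024-0001']
```

Note that the loop ends where it began: every bucket is still a list of open work handed back to engineering, which is the tax John is pointing at.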

In the traditional VM world, I’d say the potential impact, and the velocity at which vulnerabilities move, is smaller compared to containerized environments, where you have

Kubernetes and other kinds of orchestration systems that can literally proliferate containers everywhere, in a place where infrastructure as code is the norm. I’d say the risk surface in these containerized environments is much more vast and oftentimes less understood, whereas traditional VMs still follow a pretty prescriptive pattern of deployment. So in the end, the more prolific you can be with deploying code, the more likely you’ll have this massive risk surface, and containers are so portable and easy to produce that they’re everywhere. You can pull them down from Docker Hub, and these things are full of vulnerabilities, and they’re sitting on people’s desks.

They’re sitting in staging areas or sitting in production. So proliferation is vast. And I think that, in conjunction with really high vulnerability reporting rates, really high code production rates, vast consumption of open source, and exploits at AI speed, we’re seeing an almost explosive moment in vulnerability risk.
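The proliferation point is easy to make concrete: one vulnerable base image fans out across many running workloads, so a single CVE multiplies into a large risk surface. A toy sketch, with made-up workload names and digests standing in for a real cluster inventory:

```python
from collections import Counter

# (workload, image digest) pairs, as a cluster inventory might report them.
running = [
    ("web-1", "sha256:aaa"), ("web-2", "sha256:aaa"), ("web-3", "sha256:aaa"),
    ("worker-1", "sha256:bbb"), ("cron-1", "sha256:aaa"),
]

# Digests a scanner flagged as containing a known CVE.
vulnerable = {"sha256:aaa"}

# One flagged image digest maps onto every workload running it.
exposure = Counter(digest for _, digest in running if digest in vulnerable)
print(exposure["sha256:aaa"])  # 4: one CVE, four exposed workloads
```

The same arithmetic is why a fix applied once to the base image, rather than triaged per deployment, pays off so disproportionately in containerized environments.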

CRob (13:18)
So machine intelligence, which has now transformed into artificial intelligence, has been around for several decades, but it seems like most recently, the last two to four years, it has been exponentially accelerating. We have this whole spectrum of things: AI, ML, LLMs, GenAI, and now agentic systems and MCP servers.

So kind of looking at all these different technologies, what recommendations do you have for organizations that are looking to try to manage their vulnerabilities and potentially leveraging some of this new intelligence, these new capabilities?

John (13:58)
Yeah, the rate of change of these kinds of things is amazing.

CRob (14:02)
It’s crazy.

John (14:03)
I think there’s a massively, kind of exponentially, accelerating feedback loop, because once you have LLMs that can do work, they can help you evolve the systems they manifest faster and faster. It’s a flywheel effect, and that is where we’re going to get all this leverage from LLMs. At Root, we build an agentic platform that does vulnerability patching at scale. We’re trying to achieve sort of an open-source-scale level of that.

And I only say that because I believe that rapidly, not just us but the industry as a whole, we’re evolving to have the capabilities, through agentic systems based on modern LLMs, to really understand and modify code at scale. There’s a lot of investment going in by all the major players, whether it’s Google or Anthropic or OpenAI, to make these LLM systems really good at understanding and generating code. And at the heart of most vulnerabilities today is a coding problem: you have vulnerable code.

And so we’ve been able to exploit those coding capabilities to turn it into an expert security engineer and maintainer of any software system. And I think what we’re on the verge of is what I’ll call a remediation revolution. I mentioned that the status quo is typically inventory, scan, list, triage, do your best. That’s a scan-first mode, I’ll call it, where mostly you’re just trying to get a comprehensive list of the vulnerabilities you have. It’s going to get flipped on its head with this technique, where it’s going to be just fix everything first. And there’ll be outliers. There’ll be things that are technically impossible to fix for a while. For instance, there could be a disclosure, but you really don’t know how it works yet. You don’t have CWEs. You don’t have all the details yet, so you can’t really know.

That gap will close very quickly once you know what code base it’s in and you understand it, maybe through a POC or something like that. But I think we’re going to enter the remediation revolution of vulnerability management, where at least for third-party open source code, most of it will be fixed a priori.

Now, zero-days will start to happen faster, there’ll be a long tail on this, and certainly there are probably things we can’t even imagine yet. But generally, I think vulnerability management as we know it will enter this phase of fix first. And I think that’s really exciting, because in the end the status quo creates a lot of work for teams to manage those lists and deal with the re-engineering cycle. It’s basically latent rework that you have to do; you don’t really know what’s coming. And I think that can go away, which is exciting because it frees up security practitioners and engineers to focus on more meaningful, less toilsome problems. And that’s good for software.
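The fix-first inversion John describes can be sketched as a small refactoring of the triage loop: attempt a remediation for every finding up front, and only the unfixable outliers reach a human queue. A minimal sketch, not Root.io’s actual implementation; the `Finding` shape and the injected `apply_patch` callable are illustrative stand-ins for an agentic patcher:

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Finding:
    cve: str
    fixed_in: Optional[str]  # patched version, if one is known

def fix_first(findings: List[Finding],
              apply_patch: Callable[[Finding], bool]) -> List[Finding]:
    """Invert scan-and-triage: try to fix everything immediately; only
    findings with no known fix (or a failed patch) become human work."""
    outliers = []
    for f in findings:
        if f.fixed_in and apply_patch(f):
            continue  # remediated automatically, never hits a triage queue
        outliers.append(f)  # the long-tail exception list John mentions
    return outliers

# A stand-in patcher; a real agentic system would bump the dependency,
# rebuild, and run tests before declaring success.
patched: List[str] = []
remaining = fix_first(
    [Finding("CVE-2024-0001", "2.4.1"), Finding("CVE-2024-0003", None)],
    apply_patch=lambda f: patched.append(f.cve) or True,
)
print([f.cve for f in remaining])  # ['CVE-2024-0003']
```

The design point is that the human-facing list shrinks from everything found to only what automation could not handle, which is the “latent rework” going away.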

CRob (17:08)
It’s good for the security engineers.

John (17:09)
Correct.

CRob (17:10)
It’s good for the developers.

John (17:11)
It’s really good for developers. I think generally the shift-left revolution in software didn’t work the way people thought. Shifting that work left has two major frictions. One is that it shifts new work to engineering teams who are already maximally busy.

CRob (17:29)
Correct.

John (17:29)
I didn’t have time to do a lot of other things when I was an engineer. And the second is that software engineers aren’t security engineers. They really don’t like the work, and maybe aren’t good at the work. So what we really want is for that work not to land on their plate. I think we’re entering an age, and this is a general statement for software, where the idea of shift left is really going to be replaced with what I call shift out: have an agentic system do the work for you, especially work that is toilsome, difficult, low value, or just security maintenance. A lot of this work is hard. Patching things is hard, especially for an engineer who doesn’t know the code. If you can make that work go away, and make it secure, and agents can do that for you, there’s higher-value work for engineers to be doing.

CRob (18:24)
Well, and especially with the trend in open source, where people are assembling and composing apps instead of creating them whole cloth. It’s a very rare engineer indeed who’s going to understand every piece of code that’s in there.

John (18:37)
And they don’t. I don’t think it’s feasible. I don’t know anyone, except the folks who write Node, for instance, who knows how Node works internally. And if there’s a vulnerability down there, some of that stuff’s really esoteric; you have to know how that code works to fix it. As I said, luckily, existing LLM systems with agents powering them are really good at understanding big code bases. They have an almost perfect memory for how the code fits together. Humans don’t, and it takes a long time to learn this code.

CRob (19:11)
Yeah, absolutely. And what I’ve found leveraging AI in my practice is that there are certain specific tasks AI does very well. It’s great at analyzing large pools of data and providing you lists and pointers and hints. Not so great at making things up on its own, but generally it’s the expert system. It’s nice to have a buddy there to assist you.

John (19:35)
It’s a pair programmer for me, and it’s a pair data analyst for you, and that’s how you use it. I think that’s perfect. We’ve effectively become cybernetic organisms, our organic capabilities augmented with this really powerful tool. I think it’s going to keep getting better and better at the tasks that we need offloaded.

CRob (19:54)
That’s great. As we’re wrapping up here, do you have any closing thoughts or a call to action for the audience?

John (20:02)
Call to action for the audience – vulnerability management and the security of open source are a passion play for me, so a couple of things, cut from the same cloth. I think we’re entering an age where security and vulnerability management can be disrupted. For anyone who’s struggling with that kind of high-effort work and the never-ending list, there are techniques you can use with open source projects today that can get you started. Just for instance, researching vulnerabilities: if you’re not using LLMs for that, you should start tomorrow. It is an amazing buddy for digging in and understanding how things work, what these exploits are, and what these risks are. There is tooling like mine and others out there that you can use to take a lot of the effort out of vulnerability management. And I’d say that any open source maintainers out there can start using these tools as pair programmers and security analysts. They’re pretty good. If you just learn some prompting techniques, you can probably secure your code at a level you hadn’t before. It’s pretty good at figuring out where your security weaknesses are and telling you what to do about them. I think these things alone can enhance open source security tremendously.

CRob (24:40)
That would be amazing to help kind of offload some of that burden from our maintainers and let them work on that excellent…

John (21:46)
Threat modeling, for instance: they’re actually pretty good at it. Yeah, which is amazing. So start using the tools and make them your friend. And even if you don’t want to use them as a pair programmer, certainly use them as an adjunct SecOps engineer.

CRob (22:00)
Well, excellent. John from Root.io. I really appreciate you coming in here, sharing your vision and your wisdom with the audience. Thanks for showing up.

John (22:10)
Pleasure was mine. Thank you so much for having me.

CRob (22:12)
And thank you everybody. That is a wrap. Happy open sourcing everybody. We’ll talk to you soon.

Key Takeaways from VulnCon 2025: Insights from the OpenSSF Community

By Blog

By Christopher Robinson (CRob), Chief Security Architect, OpenSSF

VulnCon 2025 has once again proven to be an essential gathering for security professionals, fostering collaboration, innovation, and progress in vulnerability management. This aligns well with the OpenSSF’s continued championing of transparency and best practices in open source security. Practitioners from around the world gathered in Raleigh, NC, the week of April 7-10, 2025 to share knowledge, collaborate, and raise awareness of key issues within the global vulnerability management ecosystem. I wanted to share my key takeaways from this year’s conference and highlight some of the insightful contributions from our community members.

OpenSSF’s Engagement in Cybersecurity 

The Open Source Security Foundation (OpenSSF) seeks to make it easier to sustainably secure the development, maintenance, and consumption of the open source software (OSS) we all depend on. We work toward this by fostering collaboration with fellow industry groups like the CVE Program and FIRST, establishing best practices like our recently released Principles for Package Repository Security guide, developing innovative solutions like the Open Source Project Security Baseline, and engaging in global cybersecurity legislation and public policy conversations through our Global Cyber Policy Working Group. Cross-industry collaboration is crucial to properly addressing major challenges: it fosters innovation and knowledge sharing, drives sustainable growth, and maximizes the impact of our collective efforts.

The OpenSSF was thrilled to have a notable presence at VulnCon, with significant representation from our Vulnerability Disclosures Working Group and other projects throughout the week. Our participation in this event illustrates our commitment to engaging with the community and facilitating collaboration across industry stakeholders to address open source software security challenges effectively and sustainably, with transparent operations and governance.

The partnership between the OpenSSF and the FIRST PSIRT SIG showcases how industry and upstream can work together effectively on issues with global impact, and how we are collectively collaborating to solve these complex and far-reaching challenges. Whether through co-work on industry standards and frameworks or an event like VulnCon, we’re better together!

By the Numbers

The inaugural VulnCon, a cross-industry effort, was held in March 2024. There were 360 security professionals in attendance, with an additional 239 participating virtually (599 total), and nearly 40 sessions given. 2025 saw a dramatic increase in both participants and the volume of content shared! This year there were 448 in-person attendees, with 179 global friends watching and participating virtually (627 total); 294 organizations attended from 36 countries. The program itself nearly doubled, adding a 4th full day of sessions and expanding the number of tracks, for up to 100 sessions. Of these, I am proud to say that OpenSSF members provided over 16 sessions about our community’s work, with 46 total sessions given by member representatives.

The Power of Collaboration in Vulnerability Management

This year’s VulnCon featured an amazing docket of talks and workshops spanning the broad spectrum of vulnerability management, disclosure, and coordination. Open source software was discussed throughout the four-day event, driving home to me how much influence and exposure upstream has on industry and public policy.

Here are a few of my key takeaways:

  1. The Importance of Vulnerability Metadata
    • Vulnerability metadata is crucial for the ecosystem, and OpenSSF’s needs and contributions in this area were front and center. There were numerous talks about OSV and how gaining deeper insights into upstream metadata helps everyone involved. Our members also helped lead and participate in discussions around SBOM, VEX, Vulnerability identifiers like CVE, and helping align software identifiers and finding paths forward around things like CPE and PURL.
  2. Understanding the Open Source Supply Chain
    • The talk from Apache Airflow and Alpha-Omega was a great example of how projects are working with their critical dependencies. They shared how downstream users can do similar work for better security outcomes. Downstream is slowly waking up to the notion that more attention, due diligence, and participation are needed to help the upstream open source projects they consume continue to be successful.
  3. EU’s Cyber Resilience Act (CRA) Takes Center Stage
    • April 8 featured a dedicated track on the CRA. This law has major implications for vendors and how they assess risk and conduct due diligence across their supply chains. Open source stewards like the Linux Foundation will be essential partners as manufacturers work to meet their CRA obligations by December 2027. Our Global Cyber Policy Working Group is collaborating with key open source peers, industry partners, and the European Commission to help open source developers, stewards, and manufacturers prepare for the quickly approaching 2026 and final 2027 deadlines.
  4. OSS Security Day: A Focused Deep Dive
    • April 9 was designated as “OSS Security Day,” with 20 sessions focused on various aspects of securing open source software. One key focus was on OpenSSF’s Security Baseline. The Baseline initiative provides a structured set of security requirements aligned with international cybersecurity frameworks, standards, and regulations, aiming to bolster the security posture of open source software projects.
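The vulnerability metadata discussed in takeaway 1 is directly consumable today: the OSV project exposes a public JSON API (`POST https://api.osv.dev/v1/query`) for looking up known vulnerabilities affecting a specific package version. The Python sketch below builds such a query body and summarizes a record in the OSV schema shape. The package name, version, and vulnerability record are illustrative assumptions for the example, not real advisory data, and error handling for a live call is omitted.

```python
import json

def build_osv_query(ecosystem: str, name: str, version: str) -> str:
    """Serialize a request body for OSV's /v1/query endpoint,
    asking for vulnerabilities affecting one package version."""
    return json.dumps({
        "version": version,
        "package": {"name": name, "ecosystem": ecosystem},
    })

def summarize_vuln(record: dict) -> str:
    """Render one OSV-shaped vulnerability record as
    'ID: summary (aliases)', e.g. linking back to a CVE alias."""
    aliases = ", ".join(record.get("aliases", [])) or "no aliases"
    return f"{record['id']}: {record.get('summary', '')} ({aliases})"

# Fabricated record, following the OSV schema's id/aliases/summary fields:
sample = {
    "id": "GHSA-xxxx-xxxx-xxxx",
    "aliases": ["CVE-2024-00000"],
    "summary": "Example path traversal in archive extraction",
}

print(build_osv_query("PyPI", "requests", "2.19.0"))
print(summarize_vuln(sample))
```

In practice the serialized query would be sent with any HTTP client, and each record in the response's `vulns` list could be passed through a summarizer like the one above; the `aliases` field is what lets OSV entries be reconciled with CVE identifiers, one of the alignment topics raised at the conference.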

What’s Next? Get Involved with OpenSSF

At the end of the day, security is about effectively managing risk and preparing for the inevitable threats that loom on the horizon. Events such as VulnCon or the forthcoming CNCF-OpenSSF SecurityCon allow experts to come together, share their hard-won wisdom, raise awareness of issues of concern, and collaborate on solutions to address security issues around the world.

The conversations at VulnCon reaffirm the importance of continued engagement in the security community. If you’re interested in contributing to the advancement of open source security, I encourage you to join the OpenSSF community.

Join the OpenSSF mailing list to stay informed about upcoming events, working groups, and initiatives.

For those who couldn’t make it, you can check out recorded content from VulnCon 2024 on YouTube and look out for the VulnCon 2025 playlist to get a sense of the discussions shaping the future of vulnerability management. Thank you to all of our amazing community members who were able to come out, demonstrate the power of collaboration in our open source security community, and partner with our peers and downstreams across industry, security research, and global governments.