
From AIxCC to OpenSSF: Welcoming OSS-CRS to Advance AI-Driven Open Source Security


By Jeff Diecks

Artificial intelligence is changing how we approach software security. Open source is at the center of that shift.

Over the past year, DARPA’s Artificial Intelligence Cyber Challenge (AIxCC) showed that cyber reasoning systems (CRS) can go beyond finding vulnerabilities. These systems can analyze code, confirm issues, and generate patches. This brings us closer to a future where security is more automated and scalable.

When the competition ended, one question remained: how do we take these breakthroughs and make them usable in the real world?

Today, we are taking an important step forward.

The Open Source Security Foundation (OpenSSF) is welcoming OSS-CRS as a new open source project under the AI / ML Security Working Group.

OSS-CRS emerged from AIxCC and is a standard orchestration framework for building and running LLM-based autonomous bug-finding and bug-fixing systems.

The open framework is designed to make CRS practical outside of the AIxCC environment. During the competition, teams built powerful systems that were released as open source. However, many of them depended on the competition infrastructure, which made them difficult to reuse or extend. OSS-CRS addresses that gap.

OSS-CRS features include:

  • Standard CRS Interface: OSS-CRS defines a unified interface for CRS development. Build your CRS once following the development guide, and run it across different environments (local, Azure, …) without any modification.
  • Effortless Targeting: Run any CRS against projects in OSS-Fuzz format. If your project is compatible with OSS-Fuzz, OSS-CRS can orchestrate CRSs against it out of the box.
  • Ensemble Multiple CRSs: Compose and run multiple CRSs together in a single campaign to combine their strengths and maximize bug-finding and bug-fixing coverage.
  • Resource Control: Manage CPU limits and LLM budgets per CRS to keep costs and resources in check.
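To make the Resource Control idea concrete, here is a minimal sketch of per-CRS budget accounting. The names and structure are hypothetical, invented for illustration; they are not the actual OSS-CRS API.

```python
# Hypothetical sketch of per-CRS resource accounting, in the spirit of
# OSS-CRS's Resource Control feature. Illustrative only, not the real API.
from dataclasses import dataclass

@dataclass
class CRSBudget:
    """Tracks CPU-core-hours and LLM spend for one CRS in a campaign."""
    name: str
    cpu_hours_limit: float
    llm_usd_limit: float
    cpu_hours_used: float = 0.0
    llm_usd_used: float = 0.0

    def charge(self, cpu_hours: float = 0.0, llm_usd: float = 0.0) -> bool:
        """Record usage; return False (charging nothing) if it would exceed
        either limit, so an orchestrator could pause this CRS."""
        if (self.cpu_hours_used + cpu_hours > self.cpu_hours_limit
                or self.llm_usd_used + llm_usd > self.llm_usd_limit):
            return False
        self.cpu_hours_used += cpu_hours
        self.llm_usd_used += llm_usd
        return True

budget = CRSBudget("example-crs", cpu_hours_limit=8.0, llm_usd_limit=5.0)
print(budget.charge(cpu_hours=2.0, llm_usd=1.5))  # True: within limits
print(budget.charge(llm_usd=4.0))                 # False: would exceed LLM budget
```

An orchestrator that checks `charge()` before dispatching each task gets the "keep costs and resources in check" property described above without any cooperation from the individual CRS.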

Read the OSS-CRS research paper: https://doi.org/10.48550/arXiv.2603.08566

From Competition to Community

The move of OSS-CRS into OpenSSF marks a clear transition from research and competition to open collaboration and long-term development.

OpenSSF provides a neutral home where projects like OSS-CRS can grow. Contributors can work together to improve the tools, validate results, and support adoption across the ecosystem.

OSS-CRS is already producing real results. Using OSS-CRS, Team Atlanta discovered twenty-five vulnerabilities across sixteen projects spanning a broad range of software including PHP, U-Boot, memcached, and Apache Ignite 3.

OpenSSF will continue to support this important work by providing human connectors between CRS tools and open source communities. The goal is to help triage and validate vulnerability reports and proposed patches before they reach maintainers, ensuring findings are accurate, actionable, and respectful of maintainers’ time.

Recent research from the OSS-CRS team validates the necessity of having a human in the loop. The team manually reviewed a set of 630 AI-generated patches and found 20-40% of the patches to be semantically incorrect: they pass all automated validation yet are actually wrong, a dangerous failure mode only catchable by manual review.

A key benefit of the OSS-CRS project is its Ensemble feature, which enhances accuracy and reliability by combining patches from multiple CRS approaches and using a selection process to pick the one most likely to be correct. The research showed this approach consistently matches or outperforms the best single component at reducing semantic incorrectness, which is hard to eliminate at the single-agent level. This collaboration of systems helps produce more robust results for open source defenders.
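The ensemble idea can be sketched in a few lines. This is a simplified illustration under stated assumptions, not the actual OSS-CRS selection logic: validate every candidate patch, then prefer the validated patch that the most independent CRSs agree on.

```python
# Illustrative sketch of ensemble patch selection: validate all candidates,
# then pick the validated patch proposed by the most CRSs. The real OSS-CRS
# selection process is more involved; names here are hypothetical.
from collections import Counter
from typing import Callable, Optional

def select_patch(candidates: dict,
                 validate: Callable[[str], bool]) -> Optional[str]:
    """candidates maps CRS name -> proposed patch text.
    Returns the validated patch with the most agreement, or None."""
    valid = [p for p in candidates.values() if validate(p)]
    if not valid:
        return None
    # Agreement across independent systems is extra evidence of semantic
    # correctness that no single automated check provides on its own.
    patch, _count = Counter(valid).most_common(1)[0]
    return patch

# Toy usage: two CRSs converge on the same fix, one proposes a bad one.
picked = select_patch(
    {"crs-a": "fix-1", "crs-b": "fix-1", "crs-c": "broken"},
    validate=lambda p: p != "broken",
)
print(picked)  # fix-1
```

Even this toy version shows why the ensemble helps with the semantic-correctness problem above: a patch that fools one system's automated validation is less likely to be independently produced by several different systems.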

Get Involved

With projects like OSS-CRS, OpenSSF will continue to support AI-driven security work to help turn innovation into practical outcomes for open source.

We offer several options to get involved, including:

Author Bio

Jeff Diecks is a Senior Technical Program Manager at The Linux Foundation. He has more than two decades of experience in technology and communications with a diverse background in operations, project management and executive leadership. A participant in open source since 1999, he’s delivered digital products and applications for universities, sports leagues, state governments, global media companies and non-profits.

What’s in the SOSS? Podcast #57 – S3E9 From Noise to Signal: Security Expertise and Kusari Inspector with Mike Lieberman


Summary

In this episode, CRob talks with Mike Lieberman from Kusari about the current state of open source security. They discuss the growing burden on maintainers from the “deluge” of noisy, low-quality vulnerability reports, often generated by AI tools, and the vital role of “a human in the loop.” Mike introduces Kusari’s tool, Inspector, explaining how it uses codified security expertise to process data from tools like OpenSSF Scorecard and SLSA, effectively filtering out false positives and giving maintainers only high-quality, actionable reports. They also dive into the design philosophy of “don’t piss off the engineers” and share a vision for the future of security tooling that focuses on dramatically better user experience and building security primitives that are “secure by design”.

Conversation Highlights

00:06 Introduction: The Biggest Challenge in Security Tooling
01:12 Overwhelmed Maintainers: The Deluge of Low-Quality AI Reports
04:00 Introducing Kusari’s Inspector: How it Filters False Positives
08:40 The Secret Sauce: Security Expertise and the Need for Reproducible Tests
12:03 Meeting Engineers Where They Are: Design Choices to Reduce Maintainer Burden
18:16 The Future of Open Source Security Tooling: Focusing on Better UX
22:19 Call to Action: The Responsibility of Large Organizations

Transcript

(0:00) Intro Music

Mike Lieberman (00:06)
I think the biggest thing in security tooling is better user experience. I think that to me is one of the biggest challenges.

CRob (00:25)
Welcome, welcome, welcome to What’s in the SOSS?, the OpenSSF’s podcast where I talk to developers, maintainers, security experts, and people in and around this amazing open source ecosystem. Today, again, we have a real treat. Friend of the show, Mike Lieberman from Kusari is joining us again after – I don’t know if your podcast was toppled from its place of the most listened to before, but we’re gonna see if we can make another hit for us. But we’re here today to talk about some interesting developments that you and your crew are involved in and just things going on in open source security. So how have you been, sir?

Mike Lieberman (01:07)
Well, thank you for having me back and yeah, things are going pretty well.

CRob (01:12)
Well, let’s dive right into it. Recently, and this is a topic that I’m actually dealing with at this very moment while we’re recording this podcast, open source maintainers are just currently overwhelmed by this deluge of noisy, low-quality reports, a lot of them generated by AI tools. So kind of thinking about it with the many hats you wear, you know, as a business owner, a community member, a long-time developer, a security expert: from your perspective, what is actually creating the most burden today? And think about it through the lens of this project you’re going to share with us in a moment.

Mike Lieberman (01:57)
Yeah, sure. So I think to kind of start, the problem has been the same problem since, you know, throughout human history. It is a combination of either bad actors or just lazy people that are, I would say, the biggest issue here. Right. We have a lot of things like AI reports generating awful sort of vulnerability, you know, fake vulnerabilities or whatnot. But if we kind of look at it through the lens of history through tech, we saw the same thing with any sort of automation, right? When, yeah, exactly. When people could kind of create scripts: hey, let me go spam this one project with my script. Let me spam a whole bunch of projects with my sort of automation or whatever. And, you know, the same thing sort of happened when we started moving away from mailing lists to sort of GitHub and those sorts of things as well. So I think it’s really, to kind of take a step back, it’s kind of how people are using the tools more so than the tools themselves. But I do think when it comes to a lot of the security reports, yeah, it is folks who are just kind of asking an LLM.

Hey, find me find me some zero day. And of course, that’s never going to work because the LLMs don’t have that information. And it’s just it kind of comes back to you need people who understand what they’re doing, using the tools in the right way in order to kind of figure out some of this stuff.

CRob (03:42)
Yeah, human in the loop. Our dear friend, Dr. David Wheeler has a saying, he says, a fool with a tool is still a fool. So again, having those experts in there, helping out is critical. So let’s…

Mike Lieberman (03:43)
Yeah.

CRob (04:00)
You’ve been in this space for a long time, focusing in on supply chain security, and you’ve written or contributed to a ton of tools. And most recently, you all helped create over at Kusari a tool called Inspector. So from just a high level TLDR, how do you see things like Inspector kind of changing this dynamic of getting more people involved or getting more expert knowledge in?

Mike Lieberman (04:26)
Sure. I think like the things. So actually to take a step back, right? There’s a lot of great tools that are being built. The challenge with those tools is, and the way I kind of think about it is like, you know, home security, right? It’s, hey, there’s a ton of tools out there that are helping out with open source security the same way that there’s a ton of tools out there for, you know, a smarter lock. A better security system.

CRob (04:59)
A doorbell that can find your dog.

Mike Lieberman (05:02)
There’s privacy concerns on that one. I think, you know, we can all agree on that. But I think to that extent, when it comes to sort of these tools, it’s in how they’re used. And also, the expertise that’s required in how to use them. And also, when building the tools, what sort of expertise went into building the tools? And I think that to me is where the big gap is with just sort of some of the AI-related things: you have folks using a very generic system like an LLM and just saying, hey, LLM, become a security expert and do this stuff. And of course the LLM makes a lot of mistakes and whatever. But if you were to kind of say through things like MCP and LLM skills and all these other things, if you have a way of codifying: run OpenSSF Scorecard, run, you know, SLSA, and run all of these various things and put all of this together and generate me an SBOM using these tools and whatnot. And then you can take all that, then hand the output to the LLM and say, hey, here’s everything I discovered. Here’s also the code. Help me make sense of it. And I think that to me is kind of where a lot of the benefit is. And again, what I just described is essentially Inspector, right?

We’re running all of these various tools, again, that we understand because we’ve contributed to those tools, we’ve helped maintain some of those tools, we have been users of these tools for years. So we understand how they’re supposed to be used. We understand how a human, before the age of AI, would be using these tools. And we recognize the burden of that expertise. And we’ve sort of encoded it. Had the LLM kind of come in at the last mile and then take all that information, and say, hey, if there is a finding, a vulnerability, great. Where does that vulnerability live? Is that a vulnerability in a core piece of my code, which yes, I need to address right now, or is it like, it’s in a test? Yes, it’s probably something I should fix, but maybe not the biggest issue right this second. And so I think tools like that are really helping because of the thing that we found, and again, a user of Inspector told us this, and I won’t call out the exact AI tool they were using, but they were using a generic LLM with some stuff. And then they were using Inspector. And one of the things that they had said was, wow, Inspector is actually catching the issue: it detected that a particular issue was essentially a false positive, because it looked at a potential remote code execution and looked at all the stuff alongside the code and said, you clearly have an allow list. So given that you have this allow list, we recognize it’s not a remote code execution, or rather arbitrary code execution attack. And I think it’s stuff like that that we’re seeing starting to get developed more and more. Whereas a lot of the tradition, I want to say traditional with AI, even though it’s been, you know, like in the past year, everything shifts. Yeah.

When we look at sort of how folks were using LLMs even just a year ago, a lot has shifted and we’re seeing less of these false positives coming out of AI because people are using AI the way it should be used, where it’s you’re supplementing all these other tools that are out.

CRob (08:40)
That’s awesome.

And this might lead into this next question. AI and automation are finding a lot more potential issues, but more findings aren’t always better. And is that what you think the secret sauce is: having that security expertise that helps kind of balance out finding a vulnerability and then kind of sharing that information with the maintainer effectively?

Mike Lieberman (09:08)
Yeah, so I mean, I think when it comes to stuff like that, the way I, you know, I was actually having a conversation with a friend just a few days ago about this issue and I’m reminded of issues just even before AI. And one of the big things that maintainers would ask is, give me a way to reproduce this. If you’re not gonna give me a way to reproduce this, I’m not gonna, you know, I’m not gonna accept your report here and I’m not gonna do a ton of investigation to figure out what you intended to mean.

And I think it’s the same way with AI here, where we’re starting to see with some of the stuff coming out of AIxCC and some other places, we are starting to see tools that are being built that are actually generating the tests and whatnot that can reproduce these vulnerabilities that the LLMs are claiming, or AI tools are claiming. And I think that to me is important because when I look at Daniel from Curl or some of these other folks who are like,

I am so sick of all of these AI reports. It’s like every single one that they’re claiming is an AI report, it’s like they didn’t give me a way to reproduce it. Or even worse, the AI said, here’s a list of steps to reproduce, and folks are coming out and saying, that function that you were claiming needs to get run doesn’t exist. And so I’m just thinking to myself, well, why not just write a test that does that thing, you know, and have the LLM write the test, whatever, but prove out that like, hey, an AI tool generated a test and I can run that test and I could see, yep, that is an exploit. That is actually a vulnerability. Now I can go and take that and package it up and hand it over to, you know, hand it over to the maintainer. And I think, if I’m a maintainer of various open source projects,

If I received something that said, hey, here is a test, you can run that test. And again, by the test, I mean like an actual test, not a test that makes up other code and tries to do whatever, but an actual test. If you have that, I as a maintainer would say, absolutely, that is a real vulnerability. But I think the thing that we’re seeing right now is we’re seeing all this sort of slop, which is.

Again, it’s just similar to the slop we saw years ago with other sorts of automated vulnerability reporting and just generally in tech. And I think the problem here still kind of comes back to lazy maintainers, or sorry, not lazy maintainers, but lazy submitters and just other sorts of bad actors who are just like, yeah, I’m just gonna throw a thing out there and hopefully one of these is right, and I’m gonna make a name for myself.

CRob (11:58)
A wise man once said that knowing is half the battle.

Mike Lieberman (12:01)
Yes.

CRob (12:03)
And thinking about it from this maintainer-developer perspective, almost always maintainers are volunteers first. They’re there because they have an amazing idea they wanna share, they have a problem they’re trying to solve. Some people are paid to do a specific thing, but the majority of folks are volunteers first. And security expert is 12th or 18th; security is not necessarily a core skill that most developers have. So kind of thinking about when you were looking at Inspector, what design choices did you make to help meet the maintainers where they are, where they are experts in languages or frameworks or kind of these techniques or algorithms? How are you helping them where they are, rather than expecting them to become a full-time securityologist like you or I?

Mike Lieberman (12:58)
So we have a mantra here at Kusari, which is essentially just don’t piss off the engineers, right? As engineers ourselves, as folks who, myself, I am a software engineer first, or really more of a dev ops, dev sec ops engineer first, became more of a software engineer over time. But one of the big sort of mantras was, one of the things that always frustrated me was you have to do all the security stuff.

And they were burdens to my daily job, right? Where I was not being, you know, again, this is me both as a maintainer of open source projects and also just, hey, I get paid as an engineer or whatever. What, but at the end of the day, I wasn’t incentivized to do secure things. I might’ve been yelled at. I might’ve been told thou shalt do this security thing, but my incentives were getting out this new feature, making my customer or my user happy, right?

And so when it comes to those sorts of things, that’s kind of how we’ve encoded all of this, where if somebody told me, hey, Mike, you put in a potential remote code execution attack or arbitrary code, whatever it is, like you put a SQL injection attack or some other, you’re not handling this off thing correctly. If you told me, yeah, that’s the thing. And you told me what I might need to look at. yeah, let me get on that. Let me fix that.

If you were to tell me, hey, you’re using a library that isn’t maintained and that everybody has mostly moved over to this other library, cool, I’ll work on that. But don’t make it a burden: hey, this library is unmaintained. Okay, what am I supposed to do about it? I don’t know what I’m supposed to do about it. Help me with suggestions. So when it comes to Inspector, those are the sorts of things that we sort of baked in: we’re not just telling you, is this project maintained.

We’re telling you, hey, this project isn’t maintained, but it’s used in just one test. So maybe it’s not the immediate thing that needs to be fixed versus, hey, this thing is completely unmaintained and it’s potentially vulnerable. And this is something new you’re adding. Like this isn’t something that already exists. This is just bad practice. Like you should probably not include this new thing. Or, you know, and again, providing the suggestions to the user on what to actually do about it.

And some of those things can then be automated; you know, Inspector has a CLI tool that you can use. And I use it myself with Claude, where, hey, I run it, have it kind of come in and, you know, fix it. And like, it works pretty well. So I think again, it’s the combination of things to sort of make sure that, as an engineer, you know, you’re not being asked to become an expert in this thing, right? It’s okay to ask an engineer:

You are a database expert, you should be reasonable at securing databases, but securing the underlying OS and yada yada, hey, maybe you don’t need to be an expert in that. And that’s where tools like Inspector I think really help: they’re the ones who are being the experts. Again, kind of going back to the home analogy, right? If I have a house, I don’t need to know the inner mechanics of, you know, a pin tumbler lock and yada yada, and how the various cameras, you know, that are looking at the outside of my house, how they all interoperate. No, I just need to know, are they working? If something, you know, the battery died on this, I know how to change a battery, let me kind of focus on that. But if they were to come in and say, no, no, you need to understand the innards of the networking and you need to understand audio-visual processing, I’d be like, no, that’s just not gonna work.

So again, make sure that developers can focus just on what they’re experts in, and what their primary responsibility is, which is usually to the user. And yes, security is a responsibility there, but they’re not going to be generic security experts. And so what can we do to help them, hold their hand, and tell them what needs to be done in a way that they can kind of say, yeah, you’re asking me to do two or three small little things. Awesome. By the way, we here at Kusari have made Inspector free for open source, but not just open source, specifically for CNCF and OpenSSF. You have full sort of unfettered access, no rate limits, no quotas. And we’d love to see folks sign up. The website is kusari.cloud. And yeah, I want to see folks using it.

CRob (17:45)
It’s really interesting, and I love the focus, again, because you all grew up through this, you are a software engineer. So I love the focus on how to relieve that burden from this army of volunteers. So let’s do something else we do often in cybersecurity. Let’s get our crystal ball out and, you know, thinking ahead from your perspective, what do you think, you know, good security tooling for open source looks like in three to five years?

Mike Lieberman (18:16)
I think the biggest thing in security tooling is better user experience. I think that to me is one of the biggest challenges. And right today, and I think that’s where a lot of folks are focusing their efforts, it’s, you know, we need to some extent, you know, and I know, like, the first thing that came to mind is Kubernetes, but for security, right? And I recognize that Kubernetes, depending on who you talk to, you know, YAML files,

But no, it really did democratize and make simpler the orchestrating of complex container workloads, right? And I think when it comes to security, user experience is often kind of a secondary concern compared to just the, did I prevent the security issue. But as our world continues to get more complex and complicated and things are scaling up and we’re having AI and all these different things, the need for security continues to increase more and more every day. But with that said, if the answer is using these security tools requires, you know, tons of certifications and whatnot just to use the security tool, right? Not to become an expert, but just to use the security tool, if you need to be an expert in all these different things, it becomes super difficult, nobody’s gonna do it. So I think we’re gonna start to see, to some extent, more tools like Inspector, but also in addition to that, more tools like, and I know we’re working on this in OpenSSF, tools that make adopting SLSA trivial for the average project. Tools that help just sort of generally with security, build out that UX, make it simpler for the average engineer to do that. Similar to how we saw stuff

in that space with DevOps, right? Where you had developers and operations, those worlds kind of became more combined. And what happened was you had tools like your Terraforms or, you know, Open Tofu and Ansible and all of these great things that kind of came out of that space to kind of make it easier for both folks who are focused in operations to get a little closer to developers and then developers to actually also help out with some of the operations, infrastructure, engineering, those sorts of things. And I think we’re gonna start to see more of that as time kind of goes on where those like, I’m gonna call like security primitives are more encoded in the tools we have. So I think we’re gonna start to see a lot of tools out there become secure by design and have a lot of the security features baked in. And then also the security tools that we have just generally become a little bit simpler and where areas where they can’t be super simple, we’re gonna see tools more tools like Inspector that kind of come in and operate similar to how you might imagine the security expert to kind of come in and put the pieces together, which again, doesn’t eliminate the security engineer. I just want to be clear, like security engineers are very much still needed. The challenge is the security engineer is now being tasked. Whereas before you had to be an expert in a small set of domains. Now you’re being asked to be an expert across everything and they need to understand that they’re going to be the ones who are like taking these new security tools and given that better UX are going to be able to scale that across, you know, 10,000 projects, you know, a hundred different AI agents, all of this, like, you know, a million containers, all of those things. So I think we’re going to start seeing a lot more of the security tools working better to scale up what we’re doing.

CRob (22:04)
That is an amazing vision. I look forward to observing that over the years. Hopefully your vision becomes a reality. Yeah, thank you. And as we’re winding down, do you have any closing thoughts or any call to action for the audience?

Mike Lieberman (22:19)
Yeah, I think the, so there’s two big ones. One is, hey, if you’re a maintainer and engineer, right? I know you care about security because even when I was not a security engineer, I cared about security. So what I want to hear from maintainers is how can the open source world help, right? How can we help you not get clobbered by a million

letters from lawyers and other people demanding security features in your stuff? How can we, as an open source community, help out, open source security community, help out? How can we make the tools easier? How can we make sure that those tools fit your needs? And that includes whether it’s Inspector or, you know, other things, hey. And on that note as well, you know, CNCF and OpenSSF projects can use Inspector.

And the other call to action, I know I say this a lot: large organizations that are using open source, it is your responsibility to provide the incentives to make sure that open source is more secure. Like we can all demand, hey, we need better open source security tooling. We need this, that, and the other thing. But if nobody’s paying for it, if at the end of the day, you know, a random engineer who’s making that open source security tool, if they can’t pay the bills, they’re not going to do that. If they are getting clobbered with a million different feature requests, it’s just not going to work. So we need to make sure. And I know that there’s things like the Sovereign Tech Fund; I want to see more of that. But just sort of generally, I think it needs to come from these multi-billion, multi-trillion dollar companies coming in and saying, hey, we are willing to foot a good deal of this bill in order to make the world more secure for everybody.

CRob (24:17)
Those are some wise words and also I think a wonderful vision we all can work towards together. Mike Lieberman from Kusari, thank you my friend. I loved having you on. And with that, we’re gonna call this a wrap. I want everyone to stay cyber safe and sound and have a great day.

What’s in the SOSS? Podcast #56 – S3E8 Empowering New Maintainers: Inside the OpenSSF Mentorship Program


Summary

In this episode of What’s in the SOSS? host Sally Cooper sits down with Yesenia Yser, co-lead of the OpenSSF Mentorship Program and the BEAR Working Group, and Kairo De Araujo, Open Source Software Engineer and mentor for rstuf. They dive into the success of the OpenSSF Mentorship Program, which focuses on bringing underrepresented voices into software security. Kairo shares an incredible outcome from the last cycle – where two out of three mentees became project maintainers – while Yesenia discusses the evolution of the BEAR Working Group (Belonging, Empowerment, Allyship, and Representation) mentorship program. Whether you are a potential mentor or a mentee looking to break into open source, this episode provides a roadmap for the upcoming paid mentorship cycle.

Important Dates for the 2026 Mentorship Cycle:

  • Applications Open: March 24, 2026
  • Applications Close: April 12, 2026
  • Selection Period: April 13 – April 30, 2026
  • Notification Date: May 1, 2026
  • Onboarding: May 5 – May 29, 2026
  • Mentorship Period: June 1 – August 21, 2026

Conversation Highlights

00:01 – Welcome
01:43 – Kairo on his work with the Repository Service for TUF (rstuf).
02:30 – Yesenia on the BEAR Working Group and making open source accessible.
04:30 – The “Why” behind mentorship: Solving the barrier to entry for security beginners.
07:28 – Success strategies: Working as a team across time zones with multiple mentees.
09:28 – The ultimate goal: Moving mentees from learners to official project maintainers.
10:58 – Challenges and growing pains: Managing deadlines and interview chaos.
13:48 – Advice for Mentors: The importance of clear communication and flexibility.
15:02 – Advice for Mentees: Don’t be afraid to join; focus on “pre-onboarding”.
17:13 – Key Dates for the 2026 Mentorship Cycle.
20:15 – Call to Action: Get to know this year’s participating projects (gittuf, rstuf, SBOMit, Minder) and how to get involved.

Transcript

00:00 – Music & Intro clip

Sally Cooper (00:24)
Hello, hello and welcome back to What’s in the SOSS? An OpenSSF podcast where we get to talk to some amazing people who are involved in open source software and open source software security. And today we have a very special treat: two repeat offenders coming back, and they do some critical work in the OpenSSF community.

They have firsthand knowledge of the mentorship program, which we’re going to talk about today, which is a hands-on initiative designed to help underrepresented voices break into software security. So first, we have Kairo. Hi, Kairo, an open source software engineer who served as one of the key mentors during last year’s program.

And we’re here to talk about the powerful impact of that mentorship program and also dive into the important work of the BEAR Working Group. So we have Yesenia also joining us from her perspective as a co-lead of the mentorship program and the BEAR Working Group. And I just have to say, Yesenia, it’s super nice to have you on this side of the microphone as a guest. So Kairo, Yesenia, welcome back and introduce yourselves.

Kairo De Araujo (01:43)
Yeah, thank you. Well, my name is Kairo, as you said, and I’m based in the Netherlands and I’ve been working as a software engineer for a few years. And the past six years, actually, I really focused on the security supply chain. And I’m an author of Repository Service for TUF (rstuf). That is a project to help the security supply chain, and it’s part of OpenSSF. And I’m also a maintainer of other critical open source projects in the security supply chain. Yeah, and last year, as you mentioned, and we’ll talk more about it, I participated in the mentorship program with the rstuf project, the Repository Service for TUF.

Yesenia (02:30)
Oh, how the tables have turned. Hey everyone, soy Yesenia, not your co-host today, but a guest on today’s episode. I have an extensive background in security. I usually like to say I’ve been Jacqueline of all, cyber master of none, working in various umbrellas, and have made my way into open source, love it, and do a lot of advocacy and outreach for it because of just the amazing folks that I’ve met here that have done amazing things, as you’ve heard in many other episodes.

So, I’m based out in the sunny state of Florida. I’ve seen snow once, and then another time through a window, so I always forget that winter’s here. If you show me snow in the background, I’m going to be so surprised. But that’s great. I love the work that we’re doing with BEAR. It was originally our DEI group, but since the man banned the word DEI, we chose BEAR. So this is our Belonging, Empowerment, Allyship, and Representation

group, in which we are making it more accessible for folks to enter into open source, giving them opportunities like these mentorships, because I have seen it firsthand from a mentorship that I hosted several years ago. I’ve seen the folks who came into that mentorship enter the field at very fancy Fortune 100 companies, doing amazing work aligned with their careers. So that’s a little about me, and I’m excited for today’s episode.

Sally Cooper (04:02)
Wow, that’s incredible. That’s a lot to unpack. First off, the winter comment: I will get you back on that one. I’m going to send you lots of pictures of snow. But no, in all seriousness, it’s such great work that’s going on in the BEAR Working Group. For the mentorship program, maybe, Kairo, can you tell me a little bit about the why for your mentorship in open source security? What is the problem that you were trying to solve by showing up for this?

Kairo De Araujo (04:30)
Well, besides security being something really important, there are a lot of people who are a little bit afraid to step into open source projects like that. Sometimes because they don’t have a huge security background, or they are just coming out of university or learning coding, but they are not, let’s say, comfortable stepping into open source.

It happened with me as well. Everybody here was once a beginner in something, and we need to trust ourselves. And I think it’s really important for projects like RSTUF to get new people in, new ideas, and also, looking to the future, to keep more contributors in the project and to spread the security knowledge, what we are doing, what the project is doing, because we know how it works. Spreading the knowledge is not only talking about the project, but getting people involved in it. And we have some good engineers who would like to try it but don’t have the opportunity. I think this mentorship, for example, is a very good opportunity.

And as a maintainer, I need more contributors for my project, because we know that the success of a project, and how it can grow, is based on contributions, right? People really writing code and understanding how the project works. And this was our goal: let’s try to give people the opportunity to step into the project.

And give people opportunities to understand how we can do security in different ways. Because RSTUF is a really specific project based on The Update Framework (TUF), which is really complex. And maybe it can also give more knowledge to others, so they can contribute to a project in a way where they can participate and maybe grow, gain knowledge, and, I don’t know, get a new job or…

Just learn a little bit more about the language or the service, or how they can help with security and participate in OpenSSF as well. Because it can be an entry point for people to get into the community, as we have many different other projects inside.

Sally Cooper (07:11)
Yeah, I love that. When you think about last year’s program, what would you say stands out the most? What worked better than you expected about the mentoring? And what lessons really stuck with you?

Kairo De Araujo (07:28)
Well, I have done a few mentorships before, not in OpenSSF, but at other companies I worked at, helping junior engineers and so on. And usually we are afraid to take on multiple mentees: how will I manage different people all together?

And what we experimented with last year that was really nice was to take not only one mentee, but at least three mentees in the project, in a way where we could try to work as a team. Not only to teach them how to do open source, but so they could really feel how open source works: how they can work across different time zones and on different pieces, because we had one person working on documentation and another working on a specific feature, and how these overlap with each other and how we can work as a team.

This was something we really did differently in the project, giving a lot of freedom to them. We have the projects that we want to accomplish as the goal of the mentorship, but let’s try to work as a team in a very flexible way where we can help each other. And really, this was very positive for the project last year. And everybody did very well.

Sally Cooper (09:06)
Yeah, it sounds like you set it up in a way, with the freedom, the flexibility, and the education, to really do well. That’s great. I love to hear that. Just thinking back to last year’s program, is there anything that surprised you? How did that shape the experience, and how do you think it will shape the experience going forward?

Kairo De Araujo (09:28)
What really surprised me was the commitment from the mentees. Also, the process for us of selecting the mentees with the proper basic skills, without it really being a job interview, right? Making it about understanding what background they could bring to us was very nice.

But what surprised me in the end, and I think we will have another podcast about that in the future, is that two of the three mentees have become maintainers of the project. It means that they started as mentees, then they jumped in as contributors to the project, and right now they are helping to run it. And this, for me, is amazing as a maintainer. It’s really a relief, right? Because I have more people to help me run the project.

Sally Cooper (10:30)
Right, best case scenario there. That’s an incredible outcome. I love to hear it. Okay, let’s shift gears. Yesenia, what would you say are some of the challenges or growing pains that you learned from in running this program? And what did these lessons teach you about how to build a sustainable mentorship program in an open source community that you could share with our audience here?

Yesenia (10:58)
Good question. First I want to say, like Kairo just mentioned, the fact that these mentees grew into maintainers is the ultimate goal. It doesn’t have to be the only goal, but one of the reasons we set up the mentorship was to allow folks to enter and come in and see these new projects that they may have never heard of or had visibility into. And really just come in and dive in, become part of a team, and, in my experience with open source, it’s the same kind of team dynamic as what you would feel and see in a corporate space, just in a different setting.

So, hats off to you, Kairo and the maintainers. I’m very excited to interview them or at least hear the podcast once they’re on. So, future plug for when it will be released. And when we think back on last year’s mentorship, well, I had already done one with OpenSSF, the Linux Foundation.

One of the biggest challenges was the amount of time, right? It was the first time we had done this. We had the deadlines, and the maintainers had about a week to put together the project descriptions. They had a week to sift through all the mentees, interview them, and make their selections. So there was a lot of chaos up front. Big kudos to them for pushing through, finding the mentees, and getting the program running.

I think once the program started running, within a few weeks it just kind of smoothed out, and there were a lot fewer questions, a lot less friction. And what we decided to do this year is just start early. So we won’t release the dates just yet. You’ve got to listen a little bit further into the episode. But we are looking to run the next iteration of the mentorships, starting the program early and giving the mentees enough time before the official, quote unquote, start date to get onboarded, so that they can really take advantage of those 12 weeks. That’s what we’re thinking, so just keep an eye out later for those dates and important information.

Kairo De Araujo (13:04)
Yeah, I want to say that this year I will be participating again. And the good thing is that the mentees from last year will help me run the mentorship. So we will distribute the tasks. So you can see how beneficial it can be for our project, and for mentors also, to engage in this.

Sally Cooper (13:26)
Yeah, that’s really full circle. Thanks for sharing that. Well, since the mentorship cycle is on the horizon and the expectations are set, or being set, if someone wanted to participate and join this year’s cycle for the first time, what key advice would you give a potential mentor based on what you did last time?

Kairo De Araujo (13:48)
Well, communication is key, because everybody is remote and everybody has different backgrounds. So I advise really making the communication with the mentees clear: understand what the goals are, understand what their backgrounds are, right? And we have preset projects within the project to be done by the mentees.

Be flexible as well, because maybe you need to shape those projects a little bit to fit the mentees well. That’s the key advice I can give. And focus on the communication with the folks, because they can deliver very good outcomes from that.

Sally Cooper (14:48)
Yeah, the outcomes will definitely come from communication, and that’s key. So for a potential mentee, what expectations would you set to help them make the most of this structured and paid opportunity?

Kairo De Araujo (15:02)
What I can say to the mentees is: don’t be afraid to join. Don’t worry about your background or where you are coming from. You have something to contribute to the project. Work together with your mentor, and if you have other mentees with you in the mentorship program, try to work together with them too, because everybody is here to help. Be relaxed, try to do the best you can, and stay committed to what you want to deliver.

But as I said before to the mentors: communication. Also communicate well, ask questions, and try to help as well. And try to do what I always say, good pre-onboarding: try to understand the project. I’m not saying you need to know everything, just understand what you are doing, what the project is, and what you need to deliver. And enjoy it, because it’s really, really good.

Yesenia (16:17)
Yeah, one thing I wanted to jump in and add is for the project maintainers: if you need help with some of your onboarding documents, the BEAR Working Group is working through a process to create onboarding documents. So it is an added bonus that we can help you with that, especially if you’re a single maintainer or your team is just over capacity right now. We are working through onboarding documents for OpenBao, so we could expand that process to other teams, just to put something out there to make it a lot easier.

Sally Cooper (16:51)
Wow, that’s so helpful. It’s really great to know that the BEAR Working Group is doing that. For those listening who are excited to join the next cycle, can you walk us through those dates, Yesi, that we were talking about? If you’ve stayed long enough for this, here’s your payoff, because we’re going to learn all about these upcoming dates.

Yesenia (17:13)
Yes, so the application will be released around March 24th and will be open for a few weeks, ending April 12th. So that’s a good amount of time. Check us out at OpenSSF on our socials and on Slack, and follow me and Kairo on LinkedIn, and OpenSSF too. I’ll be reposting this and blasting it everywhere. Then from April 13th to April 30th, the mentors are going to be reviewing the applications. You should expect an accept or decline notification by May 1st.

May 5th to the 29th, we’ll be working on getting you onboarded to the LF platform and onto the project. So this is when you’ll be getting your environment up and getting any documentation, which could take some quiet time, so we’ll ask you to be a part of it. And then the mentorship kicks off June 1st to August 21st. And, this we forgot to mention: this is a paid mentorship.

So, there are two evaluation points. July 10th will be your first evaluation. After that, you get half of the stipend. And then August 28th, that’s an important date for me, you get the final stipend after you complete your evaluation. The cool part of this is not only do you get to do your mentorship program, but you’ll be part of a BEAR welcome call, where we showcase your project.

So you’ll be able to get a public recording where you present the work that you’ve done over the mentorship. And as an added option, as you heard previously, the mentees who became mentors will be on podcasts. So if you’re willing to share your voice, we would also love to interview you after the project for the OpenSSF What’s in the SOSS podcast.

These are great because you get to put them on your resume, you get to put them on your LinkedIn, show your parents, show your mom, show your dad, your grandma, your grandpa, your dog, whoever it is that you want to share it with. I definitely give my dogs my podcast episodes because they’re very proud of me. But those are just some key highlights and if you have more questions, find us on Slack and ask us and we’ll let you know.

Sally Cooper (19:43)
Love it. And cats too. Plug to my cats.

Yesenia (19:47)
My cats are always here, so they hear everything anyway.

Sally Cooper (19:50)
Okay. Yeah, they hear it all. I love it. Well, thank you both so much. This has been a really interesting conversation. I learned a lot, and I’m really excited for this next session and to see all the great work that’s going to come out of it. Thank you. But before we wrap, are there any other calls to action for the audience if someone’s listening?

I know you gave the dates, what’s like the next best step for them?

Yesenia (20:15)
From the BEAR Working Group perspective, we didn’t name the projects yet. We have gittuf, RSTUF (thank you), SBOMit, and Minder coming on board as mentor projects. So if you’re not sure what those are, take a moment, go to the openssf.org/getinvolved page, look at the working groups, check out their GitHub, get on Slack and check out the groups. Join one of the public calls, even if you’re too nervous or introverted (I dropped off after my first call, so don’t worry). Find different resources so you can get familiar with a project that you might like and enjoy.

We also have a BEAR welcome call that we did in January that walks through all the working groups. So that’s also a good avenue to start. Let’s say you look at the projects and none of them really excite you. Mind you, they are paid. You can check out some of the other working groups and start getting involved there as well.

Kairo De Araujo (21:18)
Yeah, and even besides the mentorship, if you are not able to join the mentorship this summer, or if you don’t feel comfortable yet joining, our project, Repository Service for TUF (RSTUF), is really looking for new contributors. And like in the mentorship, we’ll guide you in joining the project and getting into the community; we’ll help you through that.

You can make a lot of difference out there if you want to collaborate with us. So everybody is welcome in our project as well.

Sally Cooper (21:54)
Fantastic. Well, Yesenia, Kairo, thank you so much for your time today and all the work that you’re doing for the mentorship program and the BEAR Working Group. We appreciate you both, and to everyone listening, happy open sourcing, and that’s a wrap.

What’s in the SOSS? Podcast #55 – S3E7 The Gemara Project: GRC Engineering Model for Automated Risk Assessment

By Podcast

Summary

Hannah Braswell and Jenn Power, Security Engineers from Red Hat and contributors to the OpenSSF, join host Sally Cooper to discuss the Gemara project. Gemara, an acronym for GRC Engineering Model for Automated Risk Assessment, is a seven-layer logical model that aims to solve the problem of incompatibility in the GRC (Governance, Risk, and Compliance) stack. By outlining a separation of concerns, the project seeks to enable engineers to build secure and compliant systems without needing to be compliance experts. The speakers explain how Gemara grew organically to seven layers and is leveraged by other open source initiatives like the OpenSSF Security Baseline and FINOS Common Cloud Controls. They also touch on the ecosystem of tools being built, including CUE schemas and a Go SDK, and how new people can get involved.

Conversation Highlights

00:00 Welcome music + promo clip
00:22 Introductions
02:17 What is Gemara and what problem does it address?
03:58 Why do we need a model for GRC engineering?
05:50 The seven-layer structure of Gemara
07:40 How Gemara connects to other open source projects
10:14 Tools available to help with Gemara model adoption
11:39 How to get involved in the Gemara projects
13:59 Rapid Fire
16:03 Closing thoughts and call to action

Transcript

Sally Cooper (00:22)
Hello, hello, and welcome to What’s in the SOSS, where we talk to amazing people that make up the open source ecosystem. These are developers, security engineers, maintainers, researchers, and all manner of contributors that help make open source secure. I’m Sally, and today I have the pleasure of being joined by two fantastic security engineers from Red Hat. We have Hannah and Jenn.

Thank you both so much for joining me today and to get us started, can you tell us a little bit about yourselves and the work that you do at Red Hat? I’ll start with Jenn.

Jenn Power (00:58)
Sure. I am Jenn Power. I’m a principal product security engineer at Red Hat. My whole life is compliance automation, let’s say that. And outside of Red Hat, I participate in the OpenSSF Orbit Working Group, and I’m also a maintainer of the Gemara project.

Sally Cooper (01:18)
Amazing. Thank you, Jenn and Hannah. How about you? Hi.

Hannah Braswell (01:21)
Hey, Sally. Thanks for the nice introduction. I’m Hannah Braswell, and I’m an associate product security engineer at Red Hat. And I work with Jenn on the same team. And I primarily focus on compliance automation and enablement for compliance analysts to actually take advantage of that automation. Then within the OpenSSF, I’m involved in the Gemara project. I’m the community manager there. And then

I’m kind of a fly on the wall at a lot of the community meetings, whether it be the Gemara meeting or the orbit working group. I like to go to a lot of them.

Sally Cooper (02:01)
We love to hear that. I heard Orbit Working Group from both of you. That’s exciting. And I also really want to dive into the Gemara project. So before we dive into those details, let’s make sure that everyone’s starting from the same place. For listeners who are hearing about Gemara for the first time, what is Gemara, and what problem is it designed to address?

Jenn Power (02:23)
Sure, I can start there. It’s actually secretly an acronym. It stands for GRC Engineering Model for Automated Risk Assessment. That’s kind of a mouthful, so we just shorten it to Gemara. The official description I’ll give, and then I can go into a little bit more of a relatable example, is that it provides a logical model for describing categories of compliance activities and how they interact,

And it has schemas to enable automated interoperability between them. So what does that mean? If we anchor this in an analogy, we could call Gemara the OSI model for the GRC stack. In fact, that was one of the primary inspirations for the categorical layers of Gemara, and Gemara also happens to have seven categorical layers, just like the OSI model.

So if you think about networking: if I want to send an email, I don’t have to understand packet routing. I can just send my email. In GRC, we can’t really do that today. We have security engineers who also have to be compliance experts to be successful. So with Gemara, we want to outline the separation of concerns within the GRC stack to make sure that each specialist can contain their complexity in their own layer, while allowing them to exchange information with different specialists completing activities in different layers.

So if I could give one takeaway: we want to make it so engineers can build secure and compliant systems without having to understand the nuance of every single compliance framework out there.

Sally Cooper (04:14)
I love that. So we have a baseline now. Let’s talk about the problem and get a little bit deeper into that. So Gemara is responding to a problem that you touched upon. Why do we need a model for GRC engineering and what incompatibility issue are you trying to solve? If you could go a little deeper.

Jenn Power (04:34)
Sure. So I think sharing resources in GRC is just really hard today. Sharing content, sharing tools: none of those tools and content work together today, if I could say that. Engineers are typically having to reinterpret security controls. They’re having to create a lot of glue code to make sure that tools like a GRC system and a vulnerability scanner can actually talk to each other.

So we’re trying to solve that incompatibility issue on the technical side. But this is also a human problem. And I think that’s kind of the sneakiest part about it. A lot of times, we’re not even saying the same things when we use the same terms. And so that’s another thing that we’re trying to solve within the Gemara project.

This one comes up all the time. Take the word policy. If you say that to an engineer, they’re immediately thinking policy as code, like a Rego file or something you’re going to use with your policy engine. But if you’re talking to someone in the compliance space, they’re thinking, this is my 40-page document that outlines my organizational objectives. So we created definitions within the Gemara project to go along with the model, to solve the human problem while we’re also trying to solve the technical problem.

Sally Cooper (06:05)
That’s interesting. Okay, I heard you say something about a seven-layer structure. Can you tell me why you chose a seven-layer structure for Gemara?

Jenn Power (06:17)
So this actually stemmed from an initiative under the CNCF called the Automated Governance Maturity Model. That actually started as four concepts: policy, evaluation, enforcement, and audit. And that established the initial lexicon that the project had been using.

It initially got some adoption in the ecosystem, specifically in projects under the Linux Foundation, like FINOS Common Cloud Controls (CCC) and the Open Source Project Security Baseline (OSPS Baseline). And through the application of that lexicon, we found that there needed to be more granularity within that policy layer. So it expanded into two new layers, called guidance and controls.

And I hadn’t mentioned yet that we were creating a white paper, but we do have a white paper. Through the creation of that white paper, which Eddie Knight did so much work on to create the initial draft, we actually found that we were missing a layer. That became the seventh layer, something we called sensitive activities, which is sandwiched in the middle of the Gemara layers. And so with that, we kind of organically grew to seven layers. That, I think, is the origin story of how the layers got to seven.

Sally Cooper (07:54)
I love that. And you’re really talking about how Gemara is not built in isolation, that you’re working with other open source projects. For example, you mentioned Baseline and the FINOS Common Cloud Controls. Can you tell me how Gemara connects to those projects?

Hannah Braswell (08:09)
Yeah. So in terms of Gemara connecting to other open source projects, the first thing that comes to mind is really the CRA, because of how prominent it is right now and the future of its impact. And I really think that Gemara is going to be a catalyst for open source projects in general that are in need of some kind of mechanism to implement security controls and align with potential compliance requirements.

And the good thing about Gemara is that you don’t have to be a compliance expert to make sure that your open source project is secure. And so I would say that the OSPS Baseline is a great example of Gemara’s layer two, because it provides a set of security controls that engineers can actually implement. So in that case, other projects can reuse the baseline controls and then fit them to their needs.

And I think it also goes to say that anyone who is actually building a tool they want to sell or distribute in the European Union market using open source components is going to have to think about what’s in scope. Having something like the OSPS Baseline to understand how to effectively assess your open source components and their risks is really, really valuable and just going to be super useful. And then in terms of the FINOS Common Cloud Controls, I think that’s

also another great example, just in terms of the use case and implementation of Gemara, because they have their core catalog, which has its own definitions of threats and controls that are then imported into their technology-specific catalogs. And yeah, that’s a great implementation within the financial sector.

And then, where we’re trying to expand the ecosystem for Gemara is in the Cloud Native Security Controls catalog refresh. That’s actually an initiative that Jenn is leading. I’ve done a few contributions to it, but it’s essentially an effort to take the controls catalog that currently exists as a spreadsheet and make it available as a Gemara layer one machine-readable guidance document. So Gemara is really connecting to projects that are all great to have on your radar, especially with the CRA coming up.

Sally Cooper (10:26)
Wow, that sounds great. But I’m just thinking about our listeners. They’re probably wondering, what does this look like in practice? And I’m curious if there are any tools available to help with adoption of the Gemara model.

Jenn Power (10:39)
So we’re actually working on an ecosystem of tools. We want to bridge the theory that we’re creating within the Gemara white paper to things that are actually implementable, just to make sure that you don’t have to start from scratch if you’re trying to implement the Gemara model.

So we have a couple of tools within the ecosystem. One would be our implementation of the model. We’re using CUE schemas to allow users to create the models in YAML, for instance. If you wanted to create your layer two, you would write YAML, and you could use our CUE schemas to validate that your document is in fact a Gemara-compliant document. And then we’re also building SDKs. Right now we have a Go SDK, so you can build tooling around programmatic access and manipulation of Gemara documents. One tool in the ecosystem that’s using this currently is Privateer, which automates the layer five evaluations.
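To make the schema-validation idea above concrete, here is a toy sketch of what checking a minimal layer-two-style controls document might look like. This is not the actual Gemara tooling (which defines its schemas in CUE and ships a Go SDK), and every field name here is invented purely for illustration:

```python
# Toy illustration of schema-validating a "controls catalog" document.
# The structure and field names (controls, id, title, objective) are
# hypothetical stand-ins, not the real Gemara schema.

REQUIRED_CONTROL_FIELDS = {"id": str, "title": str, "objective": str}

def validate_catalog(doc: dict) -> list[str]:
    """Return a list of validation errors; an empty list means valid."""
    errors = []
    if not isinstance(doc.get("controls"), list):
        return ["top-level 'controls' must be a list"]
    for i, control in enumerate(doc["controls"]):
        for field, ftype in REQUIRED_CONTROL_FIELDS.items():
            if not isinstance(control.get(field), ftype):
                errors.append(f"controls[{i}].{field}: expected {ftype.__name__}")
    return errors

catalog = {
    "controls": [
        {"id": "CTRL-001", "title": "Enforce MFA",
         "objective": "Reduce account takeover risk"},
        {"id": "CTRL-002", "title": "Branch protection"},  # missing 'objective'
    ]
}

print(validate_catalog(catalog))  # reports the one missing field
```

A real schema language like CUE expresses these constraints declaratively instead of in hand-written checks, and downstream tooling can then consume any document that passes validation without caring who authored it.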

Sally Cooper (11:47)
Wow, that’s great. And of course, none of this works without the people. So I know you mentioned a few of them. How can new people get involved in the Gemara project?

Hannah Braswell (11:58)
So for anyone who’s new and interested in getting involved in the Gemara project, my first piece of advice would just be to jump into a community meeting and listen in on what’s happening. I know I started out just by joining those meetings, and I didn’t necessarily have much to say, but I appreciated the culture and the open discussion, just bouncing ideas back and forth off one another.

And there were also plenty of times when I joined a community meeting, still trying to understand the project, while some kind of group opinion was being formed. I think it’s perfectly fine to say, you know, I don’t have the information right now, I don’t have an opinion, I’m still trying to learn about the project. But if something piques your interest and you want to contribute, then volunteer for it or show you’re interested, because people are not going to forget your willingness to step up and be part of the community.

But I started joining those meetings before we were rolling out the white paper, so that kind of updates my first piece of advice. I’d really suggest reading the white paper first, because it describes the problem and the trajectory of the project so well, and in a really clear way, which I think is super important context for anyone who wants to start contributing. And from there, I mean, I’m the community manager, but I started with small contributions

that ended up supporting the community in terms of documentation and some other aspects of the project I was excited about and could contribute to. So I really think the contributions depend on what you’re interested in. And even if there’s some difference in opinion, perspective, or background, all of that can make a huge difference for the community, and anything from documentation to code to discussion and collaboration counts as valid contribution and effort. So I’d say to anyone who wants to join the Gemara community and start contributing: find an area that truly interests you and makes you excited, and get involved.

Sally Cooper (14:02)
Oh, that’s great. Well, thanks so much. And before we wrap, we’re going to do rapid fire. So I hope you’re ready, because this is the fun part. No overthinking, no explanations, just the first instinct that comes to you. And I’m going to bounce. Yes, exactly. I’m going to bounce back and forth and ask you both some questions. Ready?

Jenn Power (14:17)
I’ll close my eyes then.

Sally Cooper (14:25)
Okay, Hannah, you’re up first. Star Wars or Star Trek?

Hannah Braswell (14:29)
Star Wars.

Sally Cooper (14:30)
Nice, I love it.
And Jenn, same question, Star Wars or Star Trek?

Jenn Power (14:35)
Star Wars.

Sally Cooper (14:36)
Okay, we’re all friends here.
Okay, back to Hannah, coffee or tea?

Hannah Braswell (14:42)
Definitely coffee.

Sally Cooper (14:43)
Yay, cheers. That’s solid.
Jenn, morning person or night owl?

Jenn Power (14:49)
Night Owl.

Sally Cooper (14:50)
Ohh that tracks. Hannah, beach vacation or mountains?

Hannah Braswell (14:56)
Hmm beach vacation.

Sally Cooper (14:58)
Nice choice. Jenn, books or movies?

Jenn Power (15:02)
Movies.

Sally Cooper (15:03)
Nice. All right, last round. Hannah, favorite open source mascot?

Hannah Braswell (15:08)
Oh…Zarf. I think that looks like an axolotl. I used to be obsessed with axolotls. And I mean, ever since I saw that, I was like, that’s the mascot.

Sally Cooper (15:18)
I love Zarf too. Cool. Okay. That’s a really strong pick.
Jenn, I’m going to give you the same question. Favorite open source mascot?

Jenn Power (15:26)
I actually love the OpenSSF goose. I think it’s so cute.

Sally Cooper (15:30)
Teehee. Honk, he’s the best. Okay, let’s bring it home. Hannah, sweet or savory?

Hannah Braswell (15:38)
Savory.

Sally Cooper (15:39)
Interesting. Okay, and Jenn, spicy or mild?

Jenn Power (15:46)
Mild. I can’t handle any spice. I’m a baby.

Sally Cooper (15:51)
Love it. That’s amazing. Well, thank you both so much for playing along. And as we wind things down, do you have any other calls to action for our audience? If someone’s listening and they want to learn more or get involved, what is the best next step for them?

Jenn Power (16:05)
I would say read the white paper. We are looking for feedback on it, and it is really the way to understand the philosophy and the architectural goals of Gemara. And if you’re thinking, hey, I want to learn GRC, that’s a good first step. So I think that’s what I would say.

Sally Cooper (16:28)
Fantastic. Hannah, Jenn, thank you so much for your time today and for the work you’re doing for the open source security community. We appreciate you both. And to everyone listening, happy open sourcing and that’s a wrap.

What’s in the SOSS? Podcast #54 – S3E6 AIxCC Part 4 – Cyber Reasoning Systems: The Real-World Journey After AIxCC

By Podcast

Summary

In this final episode of our AI Cyber Challenge (AIxCC) series, CRob and Jeff Diecks wrap up the journey from DARPA’s groundbreaking two-year competition to the exciting collaborative phase happening now. Discover how winning teams are taking their AI-powered vulnerability detection systems into the real world, finding actual bugs in projects like the Linux kernel and CUPS. Learn about the innovative OSS-CRS project that aims to create a standard infrastructure for mixing and matching the best components from different systems, and hear valuable lessons about how to responsibly introduce AI-generated security findings to open source maintainers. The competition may be over, but the real work—and collaboration—is just beginning.

This episode is part 4 of a four-part series on AIxCC:

Conversation Highlights

00:00 – Welcome and Introduction to AIxCC
01:37 – OpenSSF’s AI Security Mission: Two Lenses
03:54 – Competition Highlights: What the Teams Discovered
07:43 – Real-World Impact: From Research to Production
10:44 – Lessons Learned: Working with Open Source Maintainers
13:13 – OSS-CRS: Building a Standard Infrastructure
14:29 – Breaking Down Walls: Post-Competition Collaboration
15:39 – How to Get Involved

Transcript

CRob (00:09.408)
Welcome, welcome, welcome to What’s in the SOSS, the OpenSSF’s podcast where I get to talk to the most amazing people on the planet that are either involved or on the outskirts of open source software and open source security. Today, we have a treat. We get to talk to one of my dear friends and teammates, Jeff, and we’re gonna dive into a topic that I really don’t know a lot about today.

So Jeff, why don’t you introduce yourself to the audience and kind of describe what you do for the foundation.

Jeff Diecks (00:44.686)
Yeah, thanks, CRob. And hello, I’m Jeff Diecks. I’m a technical project manager with OpenSSF. And I’ve been involved in open source for 20 plus years now. Goodness. And I am OpenSSF’s lead on the AI Cyber Challenge program that we work on. And CRob is sort of not telling you the truth. He’s been on the three episodes prior to this, where he’s learned plenty about AIxCC, but we’re here to talk a little bit more about this and wrap up the series today.

CRob (01:17.582)
Yeah, these words you use, AI, that isn’t something I hear a lot about. Wink. Could you maybe recap for us, like, what is the OpenSSF doing around AI security? And then maybe give a brief recap about AIxCC.

Jeff Diecks (01:37.028)
Yeah, for sure. So OpenSSF in the world of AI, we have our AI ML Security Working Group that looks at security and AI from kind of two lenses. The first is AI for security, which is what we’ll be talking about here today, projects that help you use AI to improve the security of projects, and security for AI, which is securing this whole new world of AI things with all the lessons we’ve learned about securing software. AI is software too and it needs securing. We have a whole suite of projects and work that focuses on that too. Specific to AIxCC, again, it’s the AI Cyber Challenge. It was a two-year competition run by DARPA and ARPA-H. If you’re just hearing this episode first, I encourage you to go back to the first episode in this series with Andrew Carney from DARPA and ARPA-H for an overview of the program. And then we got into some good conversations with a couple of the team leads from some of the winning teams. But the purpose of the competition was to use AI and develop new systems to both find and fix vulnerabilities in open source software that is important to our critical infrastructure. An interesting part about the competition… written into the rules, for any of the competitors accepting prize money, they were obligated to release their software as open source.

CRob (03:07.214)
Nice. That’s awesome. Well, yeah, again, I say that a bit tongue-in-cheek, but there has been quite a lot of activity, whether it’s in the working group or specific to the follow-ons to the AIxCC competition, which is what we’re here about today, to kind of put a bow on these conversations and help encourage the community to engage and go forward. So we talked to a couple of the teams, and we talked to Andrew, who kind of gave us an overview of the program.

From your perspective and your engagement with the community, we have a new cyber reasoning special interest group within the foundation. So what have the teams been up to since the August close of the competition?

Jeff Diecks (03:54.414)
Yeah, there’s really two parts to this, and I’ve had the great honor of meeting with and speaking with a lot of the teams and learning about what they’ve been doing. But we first started with just conversations about, you know, their experiences with the competition and what they learned, similar to the couple of episodes that we did. And what was really interesting is, you know, for every single team, there’s something of value that came out of their system. They each excelled in, you know, at least a specific area.

They were all finding bugs in real world software. Just a couple of highlights from some of the other teams. Team Theori, which was among the three winners, the third place winner, had a unique approach. Their system, unlike the others, did not use fuzzing. It used pure LLM AI. So just an interesting variation, and there’s potential there for that system to be super flexible, because it doesn’t come with some of the requirements of projects already being set up with fuzzing. So interesting to see what becomes of their system. And then in a couple of other cases, it was really interesting. Teams that have systems that are extremely capable, but for one reason or another,

CRob (05:08.142)
Yeah.

Jeff Diecks (05:18.096)
There was maybe a specific part of their system that just didn’t work well with the scoring mechanisms of the competition itself. So we had a team that was one of the best at generating patches for the issues that it found, that was most effective. But as these things go, there was kind of a late change in the architecture of their system during the competition, and the part of their system that was supposed to submit all these patches that got generated into the competition for scoring didn’t function correctly and didn’t submit everything. So they didn’t get credit for all their great work. But we’ll talk in a little bit about how it’s still a capable system, and now we can use it not for a competition, but for real stuff that’s potentially even more valuable. There was another that was great at generating proofs of vulnerability.

CRob (06:13.87)
Mm-hmm.

Jeff Diecks (06:15.328)
But they had made some assumptions based on the competition infrastructure, and the system just didn’t perform well within the confines of the competition. But what was interesting about the architecture of their system was they would generate a potential result that they thought might have been a finding from part of their system. And then they would submit that potential result out to several other LLMs and have them feed back

CRob (06:24.718)
Mm-hmm.

Jeff Diecks (06:45.176)
a verdict on whether they thought it might be effective or not. We kind of made the joke that it was like doing the poll of getting eight out of nine dentists to agree and decide on the submission. So those are some highlights from the competition. But you asked about what the teams have been up to in recent months. So that’s been really interesting. DARPA has kind of extended the incentives from their program and they’re offering

CRob (06:48.846)
Hmm.

Jeff Diecks (07:13.552)
incentives and rewards for the teams now taking these systems and using them in the real world against real open source projects and demonstrating that they’re effective there. And if they can demonstrate that, they earn additional reward money, which is encouraging the adoption and the transition of this research into real world usage. So we’ve had some interesting findings there and a few examples there.

CRob (07:27.374)
Awesome.

Jeff Diecks (07:43.248)
We’ve got Team 42 that we’ve been working with. They’ve focused a lot on their system, which seems to be very effective in working with the Linux kernel and specifically some of the out-of-tree subsystems. They’ve found and reported several bugs and had some of them accepted as patches. And actually later this week we’ve scheduled a consultation for that team with a kernel maintainer

to give them feedback and help their research move forward, with any guidance they can give on how to make their system more effective. So we’re looking forward to that conversation.

CRob (08:23.36)
Excellent.

Today, we have a mixture of projects that are being donated. They’re all open source, but some of them are being donated, like here, for example. Where would we go from here? What are the group’s thoughts about these, broadly, the cyber reasoning systems, and what other interests or ideas are floating around to keep the momentum?

Jeff Diecks (08:52.91)
Yeah, there’s a few things. So one, OpenSSF is involved with the teams and we’ve formed, as you mentioned, the special interest group, the cyber reasoning system special interest group, as the kind of continued home from the competition for all the teams to continue collaborating and working together. Some interesting developments so far: we’ve hosted the teams presenting to one another

about their work with real open source projects, examples of bugs they’ve found, the process they’ve been following, and how what they’ve been submitting has been received by open source projects. So for example, Team Fuzzing Brain, who has donated their CRS systems for OpenSSF to host and support.

They’ve been working against a bunch of projects, but specifically they shared some examples of their work with the CUPS project where they found some bugs, they reported them, they’ve had some accepted patches, and they’ve gotten great feedback from the CUPS maintainers who are very appreciative of their work, both finding bugs and submitting patches, but also helping to generate and expand the fuzzing.

CRob (09:56.408)
God.

Jeff Diecks (10:17.028)
you know, harness coverage of the project itself, which the systems are pretty capable of. So we’ve been learning a lot about the reporting process, because it’s one thing to have these capable systems, but, you know, it’s a world where you and I and everybody else are a bit skeptical of just, you know, pure AI things, right? So we’re working our way through and kind of learning from one another about

CRob (10:19.82)
very nice.

Jeff Diecks (10:44.078)
What’s the best way to keep humans in the mix, and how are projects receiving these things? What are some lessons learned? So for example, we had a conversation in a SIG meeting where we were talking about the patch submission process and how some of the projects were reacting. It was perhaps a bit too aggressive to just go ahead and introduce yourself by submitting a patch

CRob (10:46.51)
Mm-hmm.

Jeff Diecks (11:12.784)
right into the pull request queue of a project. And the group suggested maybe for the next go-round, a more polite way to introduce yourself is by opening an issue, reporting how it was found, what was found, all the supporting information, and then attaching a patch to be considered, versus just, hey, here’s a PR. By the way, it came from AI.

So some interesting.

CRob (11:43.022)
Right. And we’ve heard a lot of feedback from upstream about their disinterest in that approach.

Jeff Diecks (11:50.772)
Yeah, well, and that was a big focus of the scoring of the competition itself. That was among the feedback that we gave consistently for a couple of years: make sure you’re incentivizing the development of systems that don’t make life more difficult for maintainers, but hopefully make things easier, and think about how these things will be received, not just technical capabilities.

But you mentioned, you know, donated projects, and, you know, the one that I think is of real interest, and, you know, for folks to follow along: Team Atlanta led the development of a project that we’re calling OSS-CRS, and bundled in with it, they intend to have something called CRS Benchmark.

And what these are for: it intends to be a standard infrastructure for building and running and evaluating all these CRSs, and being able to kind of mix and match and use different parts of different ones for, you know, kind of a combined solution.

So, you know, if we think of a future where we’ve got, you know, a system like we talked about that’s most effective at generating patches, but we’ve got a different one that’s best at finding vulnerabilities.

CRob (12:58.755)
Wow.

Jeff Diecks (13:13.488)
And the hope is that through this standard interface, folks can leverage and kind of fine-tune things to get the best performance and the best results out of a combination of systems, rather than just relying upon a single one. So you can just think of the way it’s intended to run. If you just imagine yourself at a command line prompt, you issue an oss-bugfind-crs build command,

then give it a configuration and a compatible project, and that’ll build a system to run against. And then you can issue the same thing, oss-bugfind-crs run, with a config, a project, and the name of a harness, and it’ll go off and do its thing. So again, you’re specifying which configurations you want to use, which subsystems you want it to

CRob (14:01.666)
Mm-hmm.

Jeff Diecks (14:14.01)
pull from. So they’ve got an interesting roadmap. You know, we’re talking with them and, you know, hoping to bring our community and perspective to help support that project and its development, you know, and adoption into the real world.

CRob (14:29.176)
And I remember us talking around DEF CON last year. And I think the competition and the prize money are great. That was very exciting. But I’m most excited about this kind of phase we’re in now, where we’re seeing the teams with that ethical wall down between them from the competition. Now they’re actually able to talk and collaborate and share ideas. I’m really excited to see the community come together, helping support these students on these ideas.

Jeff Diecks (14:58.916)
Yeah, and that’s been the interesting part, and part of why it’s taken us a bit to get this whole podcast series out. Through the course of the competition, for competition integrity reasons, you know, we were advising the competition organizers, but we weren’t interacting with the teams themselves. So we had to go through a whole process after the finals to introduce ourselves to all of the individual teams and let them know that we’re here and about, and, you know, the things we offer to help

support them in the further development of their systems. That’s been an interesting few months of lots of great conversations and seeing these teams come together within our working group and special interest group.

CRob (15:39.842)
So you’ve inspired me. I sure would like to know more about how to get involved. How can I do that?

Jeff Diecks (15:41.488)
Ha

Well, if your Monday afternoons at 1 p.m. Eastern are free, we have two different meetings that basically are in that time slot on alternating weeks. Our full AI ML security working group meets again on Mondays at 1 p.m. Eastern time on a biweekly basis. And on the alternating weeks, the cyber reasoning system special interest group meets in that time slot. You can find them.

CRob (16:09.326)
Peace.

Jeff Diecks (16:11.844)
You know, both of those meeting series are on our community calendar at openssf.org/calendar.

CRob (16:18.958)
Well, I want to thank you for helping shepherd and guide the folks in the competitions. We’re seeing some great results come out of this. And I’m really excited to see what our community and these amazing students kind of come up with on how to further the use of AI to help improve security on things. Yeah. And with that, we’ll say this is a wrap.

Jeff Diecks (16:42.308)
Sounds good. Thanks, CRob, and we’ll see you in the meetings.

CRob (16:48.911)
I for one welcome our new robot overlords and I wish everybody a happy open sourcing. Have a great day.

What’s in the SOSS? Podcast #49 – S3E1 Why Marketing Matters in Open Source: Introducing Co-Host Sally Cooper

By Podcast

Summary

In this special episode, the What’s in the SOSS podcast welcomes Sally Cooper as an official co-host. Sally, who leads OpenSSF’s marketing efforts, shares her journey from hands-on technical roles in training and documentation to becoming a bridge between complex technology and everyday understanding. The conversation explores why marketing matters in open source, how personal branding connects to community building, and the importance of personas in serving diverse stakeholders. Sally also reveals OpenSSF’s 2026 marketing themes and explains how newcomers can get involved in the community, whether through Slack, working groups, or contributing content.

Conversation Highlights

00:09 – Welcoming Sally Cooper as Co-Host
01:28 – From Technical Training to Marketing Leadership
03:54 – Bridging Technology and Understanding
06:19 – Why Marketing Makes Open Source Uncomfortable
08:11 – Personal Branding and Career Growth
10:42 – Understanding Community Personas
12:33 – Getting Started with OpenSSF
14:44 – OpenSSF’s 2026 Marketing Themes
16:18 – Rapid Fire Round
17:09 – How to Get Involved

Transcript

CRob (00:09.502)
Welcome, welcome, welcome to What’s in the SOSS, the OpenSSF podcast where we talk to people, projects, and we talk about the ideas that are shaping our upstream open source ecosystem. And today we have a real treat. It’s a very special episode where we’re welcoming a new friend. And this is somebody that you probably know if you’ve been involved in our community for any period of time.

This young lady gets to help us with our messaging and how we present ourselves to the outside world, how we get our messaging out to all those interested open source community contributors around the globe. And today she’s officially joining Yesenia and I as a co-host of What’s in the SOSS. So I am proud and pleased to welcome Sally Cooper.

Yesenia (01:02.916)
Woo!

CRob (01:07.488)
Sally has been helping lead our marketing wing of efforts for the last several years. So before we jump into kind of what you do within that marketing function, Sally, we would like to hear a little bit about your open source origin story and how you got into technology.

Sally Cooper (01:28.549)
Wow. Well, thank you so much, Yesenia and CRob. I’m super excited to be here. And yeah, I started my career a very long time ago. I actually started in tech with hands-on technical roles, working in training, documentation and support, and really helping people understand systems and tools and workflows.

Yesenia (01:52.21)
Yeah, I want to welcome Sally. It’s great to have another voice on this podcast, putting out there the hard work our open source ecosystem is doing and getting more of these other voices. But you were talking about how you started in tech early, and that’s new for me. I would love for you to dive into those technical roles. I think understanding your background in the technical side and how you’ve gotten into marketing and working with OpenSSF is just going to relate to folks and help them understand that.

You don’t always have to be technical or work in a technical field to support your security. So I’d love to understand your background and how you’ve connected your technical background into the transitions you’ve had in your career.

Sally Cooper (02:35.611)
That’s such a good question. Yeah, I think you really nailed it there, because you don’t always need to be technical, and sometimes, you can be technical and end up in something like marketing, like me. So when I say I started in tech, I mean this was really entry level, hands-on, learn it from the ground up. I worked in finance in my first job out of college. I was working at a data processing center and it was really operational.

accuracy, lots of responsibility, really not a lot of glamour. So the thing that kind of was a turning point was that we went through a major systems upgrade and we moved from a legacy system to entirely new software. So suddenly people who had been doing their jobs a certain way for years really were expected to work differently and often overnight. And I became one of the people who could help bridge the gap.

because I understood the technology and how to explain complex systems in an easy to understand manner. And I ended up being in training. So I became a software trainer and trained the whole organization on how to use the software to do their jobs.

Yesenia (03:52.776)
That’s very useful.

Sally Cooper (03:54.649)
Yeah, thanks. It’s funny because we all have to get started somewhere, right? And that’s how it worked out for me. After that, I worked at a startup in B2B e-commerce and continued on with educational software training, writing technical guides, books, some of the first e-learning programs. So I’m definitely dating myself here. But looking back, yeah, looking back, the title marketer wasn’t something that I thought of.

CRob (04:17.772)
Yeah

Sally Cooper (04:24.131)
But I was doing a lot of work in marketing without knowing it, just helping people understand complex topics. So yeah, that’s how I got here. Thanks for asking.

Yesenia (04:37.906)
Yeah, we all date ourselves very easily. I mean, we’re in tech. It already ages us the minute we walk in. But I think that’s a great understanding and background, right? I think that’s one of the most important skills when it comes to the technical side: can you bring this high level technical aspect into something that everyday folks can understand and then drive them in? I’m curious, from there, now you’re doing marketing. How did you get involved with that?

Sally Cooper (05:06.713)
Yeah, great question. So around the time when my career sort of took off with the technical education, there was something happening in the background. So early 2000s, this was the dawn of YouTube, smartphones were starting to emerge, companies were beginning to realize that technology wasn’t just about features, it was about an experience. And I find this a very full circle moment because before smartphones, I had an iPod.

It was a pink metallic iPod and I got really obsessed with podcasts. So podcasts were new. It wasn’t just about the music for me. It was really listening to, you know, a conversation that was educational. And I could do that while raising a family, doing, like going for a walk, getting exercise, making dinner. You could have headphones on and just bring yourself into a whole other world.

So yeah, that’s when I really started. I also loved the campaign, like looking at the billboards and seeing the silhouettes with, you know, the iPod and the headphones, all of that. So it’s kind of full circle.

CRob (06:13.484)
Yeah.

Yesenia (06:19.934)
And it’s really lovely, especially when you see those nice billboards and think, how much thought has someone put into that? And when you think of open source, it’s people’s hobby projects; there’s just no profit. And I feel like marketing, in a sense, I’ve learned it through my own personal knowledge and professional growth, as you could say, where I realized I was doing marketing without realizing I was doing marketing.

But marketing can just make some people uncomfortable, especially in the open source space. Like, what do you think about that?

Sally Cooper (06:53.463)
Yeah, that’s really valid. Open source is really personal. A lot of projects start off as a hobby, a passion, a side project built on nights and weekends. The word marketing can feel a little uncomfortable, like it doesn’t really belong there. I’ve definitely heard that feedback from developers. In open source, we’re not selling software. So it’s a completely new concept for me. I did have some marketing jobs after the educational jobs and

CRob (07:04.014)
Right.

Sally Cooper (07:23.479)
So I’m learning still, I’m learning from all of you and from our community that we’re sharing ideas, tools, practices, and that the currency is really people’s time, attention, and trust. So without marketing, great projects stay invisible, maintainers get burnt out, and users can struggle in silence, and the people who can contribute never even find the door.

CRob (07:50.142)
And this is extremely interesting to me because I’ve observed Yesenia kind of over the trajectory of her career, and so much of your online persona is that you do a lot of work of kind of branding yourself and providing advocacy and outlets to help empower other people.

Yesenia (07:58.589)
Yeah.

CRob (08:11.522)
It seems like a really big part of what you do outside of your day job and outside of your foundation work. So from your perspective, Yesi, how do you see these worlds connecting?

Yesenia (08:17.359)
Absolutely.

Yesenia (08:23.39)
Well, recently, I think it’s an interesting area. I heard this quote from a coworker, I would love to credit her but I don’t have her name. But it was like, your branding should be getting you the next job, right? Your next step, your next opportunity. And as I started in my career, I was really thinking about, like,

I kept getting seen and told like I wasn’t technical, but if you looked at my background, it’s in my education. It’s like, how am I not technical? Right. So I really started thinking about branding as where people start meeting you. So your resume is a form of branding, your LinkedIn page is a form of branding. And I really saw it as sharing a story about yourself, your impact, your value, really letting them know what they’re getting into before they even reach out to you. So.

It just naturally happened as a way for me to like leave a toxic work environment and get into the next space. And as I realized I was doing it, like I said earlier, I didn’t realize I was doing marketing until somebody was like, you’re marketing. And I’m like, cool.

CRob (09:30.102)
I think what you do is very effective.

Yesenia (09:32.338)
Thank you.

Sally Cooper (09:33.345)
Yeah, I agree. Yesenia, you were an inspiration to me when I first started at OpenSSF because you were so good at branding. You had the cybersecurity big sister. I saw that somewhere. It’s like, yeah. And then you started tagging me on LinkedIn and you just made me feel like I was welcome. And I know that you do that to the community. You make people feel like there’s someone who is technical, but also human who leads with authenticity. So I was super impressed and I always learn so much from you.

Yesenia (09:37.448)
No.

Yesenia (09:45.371)
and

Yesenia (10:02.462)
What, you guys gonna make me cry? No emotion. No, there’s no crying in baseball. I just aged myself there. But yeah, I think it’s really about creating those personas. And this is just something that you can do for yourself, that you do for your community, that you do for your projects. It was just something that I realized: we just needed to connect people and get them moving. And personas have been talked about a lot today

CRob (10:05.006)
There’s no crying in open source.

Yesenia (10:31.39)
in this conversation. Sally, I love your expert opinion on this. Why do you think they’re so important when it comes to open source marketing?

Sally Cooper (10:42.189)
Yeah, well, CRob and I ran a project along with the OpenSSF staff where about a year ago we polled our community and we asked them a few questions to try to identify who they were, what their job titles were, what was important to them, how they learned about OpenSSF and how we could serve them better. And we came up with a list of personas.

I will link the personas in this transcript, hopefully I can figure that out. But we have software developer maintainers, open source professionals, the OSPOs, security engineers, executives and C-suite. And there’s a whole bunch of titles there. And then we came up with a new one that we hadn’t thought about before, which is funny because now that we’re talking a lot about marketing, there’s a product marketer.

CRob (11:11.662)
you

Yesenia (11:13.146)
Ooh.

CRob (11:36.91)
Mm-hmm.

Sally Cooper (11:36.985)
who is very much someone who is interested in open source software and open source security software. They’re typically a member or looking to become a member of the OpenSSF, and they want to help elevate the people that they work with, the projects that they’re working on, all the great work that their companies are doing in open source. So really, personas help us move from here’s a project to here’s how you ship secure code, or

Here’s how we can help you manage risk or here’s how we can help you meet policy requirements. Marketing has really become a service and that’s where personas fit into the mix.

CRob (12:17.794)
Very nice and thinking about this from like, you know, we’re three kind of insiders for the foundation. If someone’s brand new to the OpenSSF and kind of wants to learn more, what does that journey look like for them, Sally?

Sally Cooper (12:33.429)
Yeah, that’s such a good question. So first of all, we’re all really nice and welcoming, and you’re all welcome here. So if you have an idea, marketing can help bring that to light. If you are just new to OpenSSF, you can join many of our, actually all of our working groups. We have an open source community. One that would be really beneficial is the BEAR working group, Belonging, Empowerment, Allyship, and Representation. They meet frequently and they record their meetings on YouTube. So if you’re unsure, you can watch a few and learn a little bit more about what it would be like to be in a working group at OpenSSF. I strongly encourage you also to join our Slack channel. We will link that. And follow us on social media. You can sign up for our newsletter. We try to meet people where they’re at.

So when we were talking about the personas, we learned that people are on different platforms. Some people would prefer to watch a video or read a blog. And so we try to cater to that, but we’re also always looking for feedback. So join the Slack, make yourself known. Again, if you have an idea, we can help you bring that to light. So we’d love to hear from you.

Yesenia (13:53.181)
And, you know, no personal bias, but the BEAR group does do some awesome work, says the co-lead. We also have a few blog posts that were released last year, that Sally and her team helped release, that go into how to get started in open source, which I know the community as a whole has been sharing with new members as they come into the Slack channel. They’re like, I’m new, how do I get started? So there are great resources there.

So we’re kicking into 2026, even though my mind keeps thinking it’s 2016. I had to figure out what’s going on there, but you know, one day we’ll go back there. Sally, as an insider, I’d love to know: what is marketing working on this year for OpenSSF’s mission and the growth of the communities?

CRob (14:30.101)
You

Sally Cooper (14:44.078)
Yeah, yeah, great question. So OpenSSF exists to make it easier to sustainably secure the development, maintenance, release, and consumption of the world’s open source software. We do that through collaboration, best practices that are shared, and solutions. And so our themes are showing up in 2026 quarterly to help people in our community meet these needs. For Q1, which we’re in now,

we’re focused on AI/ML security. Q2, we’re going to talk about CVEs and vulnerability transparency.

CRob (15:25.432)
Heard of that.

Sally Cooper (15:27.289)
Q3, policy and CRA alignment. Q4 is going to be all about that Base, so the Baseline and security best practices.

Yesenia (15:41.01)
Very big fancy buzzwords there. So if anyone’s playing bingo as they listen, you got a few.

CRob (15:48.014)
Well, that has been an interesting kind of overview of what’s been going on. But more importantly, let’s move on to the rapid fire part of the show. I have a series of short questions, so just give us the first thing that comes off the top of your head. I want that visceral reaction. Slack or async docs?

Sally Cooper (16:18.092)
Async docs.

Yesenia (16:21.15)
Favorite open source mascot.

Sally Cooper (16:24.947)
The Base. Honk as The Base.

CRob (16:27.79)
Nice. Love that one. What do you prefer? Podcasts or audiobooks?

Yesenia (16:27.934)
Go, baby.

Sally Cooper (16:33.273)
podcast.

CRob (16:35.662)
Star Trek or Star Wars?

Sally Cooper (16:38.489)
Star Wars.

CRob (16:40.43)
And finally, what’s your food preference? Do you like it mild or do you like it hot?

Sally Cooper (16:48.939)
medium.

CRob (16:50.188)
Medium? Well, thanks for playing along. So, Sally, if somebody’s interested in getting involved, whether it’s contributing to a project or potentially considering, you know, joining as a member on some level, how do they learn more and do that?

Yesenia (16:52.658)
That’s your question.

Sally Cooper (16:55.033)
Great question.

Sally Cooper (17:09.995)
Amazing. So go to openssf.org. From there, you can find everything you need. We referenced a blog. You can go check out our blog, find out how to contribute a blog. Everyone can join our Slack, join a working group, follow us on social media, subscribe to our newsletter. And we would love to see you at our events. Those are open to all. And if you are a member, please get involved, submit a blog.

Join us on the podcast. We would love to have you. We have a case study program. We also do quarterly tech talks. If you can dream it, we can build it. And the best place to plug in is our Marketing Advisory Council. It meets the third Thursday of every month at 12 p.m. Eastern time. You can also reach out to us at marketing@openssf.org.

CRob (18:02.392)
Fantastic. And may I state how thrilled I am that we’re adding you as a voice of our community, and that you’re joining us as a co-host, Sally.

Sally Cooper (18:13.133)
Woohoo!

Yesenia (18:13.374)
Yeah, I’m very excited for a new voice to help offload some of this work, for the stories you’re going to bring, the guests we’re going to have on, and, as you shared earlier, our marketing for 2026.

Sally Cooper (18:27.982)
Well, thank you so much both for having me. It’s been a pleasure.

CRob (18:31.662)
Excellent. With that, we’ll call it a wrap. I want to wish everybody a great day and happy open sourcing.

What’s in the SOSS? Podcast #41 – S2E18 The Remediation Revolution: How AI Agents Are Transforming Open Source Security with John Amaral of Root.io

By Podcast

Summary

In this episode of What’s in the SOSS, CRob sits down with John Amaral from Root.io to explore the evolving landscape of open source security and vulnerability management. They discuss how AI and LLM technologies are revolutionizing the way we approach security challenges, from the shift away from traditional “scan and triage” methodologies to an emerging “fix first” approach powered by agentic systems. John shares insights on the democratization of coding through AI tools, the unique security challenges of containerized environments versus traditional VMs, and how modern developers can leverage AI as a “pair programmer” and security analyst. The conversation covers the transition from “shift left” to “shift out” security practices and offers practical advice for open source maintainers looking to enhance their security posture using AI tools.

Conversation Highlights

00:25 – Welcome and introductions
01:05 – John’s open source journey and Root.io’s Slim Toolkit project
02:24 – How application development has evolved over 20 years
05:44 – The shift from engineering rigor to accessible coding with AI
08:29 – Balancing AI acceleration with security responsibilities
10:08 – Traditional vs. containerized vulnerability management approaches
13:18 – Leveraging AI and ML for modern vulnerability management
16:58 – The coming “remediation revolution” and fix-first approach
18:24 – Why “shift left” security isn’t working for developers
19:35 – Using AI as a cybernetic programming and analysis partner
20:02 – Call to action: Start using AI tools for security today
22:00 – Closing thoughts and wrap-up

Transcript

Intro Music & Promotional clip (00:00)

CRob (00:25)
Welcome, welcome, welcome to What’s in the SOSS, the OpenSSF’s podcast where I talk to upstream maintainers, industry professionals, educators, academics, and researchers all about the amazing world of upstream open source security and software supply chain security.

Today, we have a real treat. We have John from Root.io with us here, and we’re going to be talking a little bit about some of the new air quotes, “cutting edge” things going on in the space of containers and AI security. But before we jump into it, John, could maybe you share a little bit with the audience, like how you got into open source and what you’re doing upstream?

John (01:05)
First of all, great to be here. Thank you so much for taking the time at Black Hat to have a conversation. I really appreciate it. Open source, really great topic. I love it. Been doing stuff with open source for quite some time. How did I get into it? I’m a builder. I make things. I make software; I’ve been writing software a long while. Folks can’t see me, but you know, I’m gray and have no hair and all that sort of thing. We’ve been doing this a while. And I think that it’s been a great journey and a pleasure in my life to work with software in a way that democratizes it, gets it out there. I’ve taken a special interest in security for a long time, 20 years of working in cybersecurity. It’s a problem that’s been near and dear to me since the first day I ever had my first floppy disk corrupted. I’ve been on a mission to fix that. And my open source journey has been diverse. My company, Root.io, we are the maintainers of an open source project called Slim Toolkit, which is a pretty popular open source project that is about security and containers. And it’s been our goal, myself personally and in my latest company, to really try to help make open source secure for the masses.

CRob (02:24)
Excellent. That is an excellent kind of vision and direction to take things. So from your perspective, and I feel we’re very similar in age and came up along semi-related paths, how have you seen application development kind of transmogrify over the last 20 or so years? What has gotten better? What might’ve gotten a little worse?

John (02:51)
20 years, big time frame talking about modern open source software. I remember when Linux first came out. And I was playing with it. I actually ported it to a single board computer as one of my jobs as an engineer back in the day, which was super fun. Of course, we’ve seen what happened by making software available to folks. It’s become the foundation of everything.

Andreessen said software will eat the world, and the teeth were open source. It really made software available, and now 95 or more percent of everything we touch and do is open source software. I’ll add that in the grand scheme of things, it’s been tremendously secure, especially projects like Linux. We’re really splitting hairs, but security problems are real. As we’ve seen, there’s been a proliferation of open source and a proliferation of repos with things like GitHub and all that. Then today, the proliferation of tooling and the ability to build software, and then to build software with AI, is simply exponentiating the rate at which we can do things. Good people who build software for the right reasons can do things. Bad people who do things for the bad reasons can do things. And it’s an arms race.

And I think it’s really benefiting both software development and society, giving software builders these tremendously powerful tools to do the things that they want. As a person at my point in the career arc, today I feel like I have the power to write code at a rate that’s probably better than I ever have. I’ve always been hands on the keyboard, but I feel rejuvenated. I’ve become a business person in my life and built companies.

And I didn’t always have the time or maybe even the moment to do coding at the level I’d like. And today I’m banging out projects like I was 25, or even better. But at the same time that we’re getting all this leverage universally, we’ve also noticed that there’s an impending kind of security risk where, yeah, we can find vulnerabilities, and generate them, faster than ever. And LLMs aren’t quite good yet at secure coding. I think they will be. But attackers are also using it for exploits, and really, as soon as a disclosed vulnerability comes out, or even minutes later, they’re writing exploits that can target it. I love the fact that the pace and the leverage are high, and I think the world’s going to do great things with it, the world of open source, folks like us. At the same time, we’ve got to be more diligent and even better at defending.

CRob (05:44)
Right. I heard an interesting statement yesterday where folks were talking about software engineering as a discipline that’s maybe 40 to 60 years old. And engineering was kind of the core noun there. These people, these engineers, were trained; they had a certain rigor. They might not have always enjoyed security, but they were engineers, and there was a certain kind of elegance to the code. These were people much like artists who took a lot of pride in their work, and you could understand what the code was doing. Today, and especially in the last several years with the influx of AI tools, it’s a blessing and a curse that anybody can be a developer. Not just people that don’t have time that used to do it and now get to kind of scratch that itch, but now anyone can write code, and they may not necessarily have that same rigor and discipline that comes from most of the engineering trades.

John (06:42)
I’m going to guess. I think it’s not walking out too far on a limb that you probably coded in systems at some point in your life where you had a very small amount of memory to work with. You knew every line of code in the system. There might have been a shim operating system or something small, but I wrote embedded systems early in my career and we knew everything. We knew every line of code, and the elegance and the efficiency of it and the speed of it. And we were very close to the CPU, very close to the hardware. It was slow building things because you had to handcraft everything, but it was very curated and very beautiful, so to speak. I find beauty in those things. You’re exactly right. I think I started to see this happen around the time when JVMs started happening, Java Virtual Machines, where you didn’t have to worry about garbage collection. You didn’t have to worry about memory management.

And then progressively, the levels of abstraction have changed to make coding faster and easier and to give it more power, and that’s great, and we’ve built a lot more systems, bigger systems; open source helps. But now literally anyone who can speak cogently and describe what they want can get a system. And I look at the code my LLMs produce. I know what good code looks like. Our team is really good at engineering, right?

Hmm, how did it think to do it that way? Then we go back and tell it what we want, and you can massage it with some words. It’s really dangerous, and if you don’t know how to look for security problems, that’s even more dangerous. Exactly: the level of abstraction is so high that people aren’t really curating code the way they might need to to build secure, production grade systems.

CRob (08:29)
Especially if you are creating software with the intention of somebody else using it, probably in a business, then you’re not really thinking about all the extra steps you need to take to help protect yourself in your downstream.

John (08:44)
Yeah, yeah. I think it’s an evolution, right? The way I think of it, these AI systems we’re working with are maybe second graders when it comes to professional code authoring. They can produce a lot of good stuff, right? It’s really up to the user to discern what’s usable.

And we can get to prototypes very quickly, which I think is greatly powerful, which lets us iterate and develop. In my company, we use AI coding techniques for everything, but nothing gets into production, into customer hands that isn’t highly vetted and highly reviewed. So, the creation part goes much faster. The review part is still a human.

CRob (09:33)
Well, that’s good. Human on the loop is important.

John (09:35)
It is.

CRob (09:36)
So let’s change the topic slightly. Let’s talk a little bit more about vulnerability management. From your perspective, thinking about traditional brick and mortar organizations, what key differences do you see between someone that is more data center, server, VM focused and the new generation of cloud native, where we have containers and cloud?

What are some of the differences you see in managing your security profile and your vulnerabilities there?

John (10:08)
Yeah, so I’ll start out with a general statement about vulnerability management. In general, the way I observe it, current methodologies today are pretty traditional.

It’s scan, it’s inventory. What do I have for software? Let’s just focus on software. What do I have? Do I know what it is or not? Do I have a full inventory of it? Then you scan it and you get a laundry list of vulnerabilities, some false positives and false negatives, that you’re able to find. And then I’ve got this long list, and the typical pattern there is now triage: which ones are more important than others, and which can I explain away? And then there’s a cycle of remediation, hopefully, though a lot of times not, where you’re cycling work back to the engineering organization or to whoever is in charge of doing the remediation. And this is a very big loop, mostly starting with and ending with still-long lists of vulnerabilities that need to be addressed and risk managed, right? It doesn’t really matter if you’re doing VMs or traditional software or containerized software. That’s the status quo, I would say, for the average company doing vulnerability management. And the remediation part of that ends up being some fractional work, meaning you just don’t have time to get to it all, mostly, and it becomes a big tax on the development team to fix it. Because in software, it’s very difficult for DevSec teams to fix it when it’s actually a coding problem in the end.

In the traditional VM world, I’d say that the potential impact, and the velocity at which those move, differ compared to containerized environments, where you have Kubernetes and other kinds of orchestration systems that can literally proliferate containers everywhere, in a place where infrastructure as code is the norm. I’d just say that the risk surface in these containerized environments is much more vast and oftentimes less understood. Whereas traditional VMs still follow a pretty prescriptive pattern of deployment. So I think in the end, the more prolific you can be with deploying code, the more likely you’ll have this massive risk surface, and containers are so portable and easy to produce that they’re everywhere. You can pull them down from Docker Hub, and these things are full of vulnerabilities, and they’re sitting on people’s desks.

They’re sitting in staging areas or sitting in production. So proliferation is vast. And I think that in conjunction with really high vulnerability reporting rates, really high code production rates, vast consumption of open source, and then exploits at AI speed, we’re seeing this kind of almost explosive moment in risk from vulnerability management.

CRob (13:18)
So machine intelligence, which has now transformed into artificial intelligence, has been around for several decades, but it seems like most recently, the last two to four years, it has been exponentially accelerating. We have this whole spectrum of things: AI, ML, LLMs, GenAI, and now we have agentic systems and MCP servers.

So kind of looking at all these different technologies, what recommendations do you have for organizations that are looking to try to manage their vulnerabilities and potentially leveraging some of this new intelligence, these new capabilities?

John (13:58)
Yeah, it’s amazing, the rate of change of these kinds of things.

CRob (14:02)
It’s crazy.

John (14:03)
I think there’s a massively accelerating, kind of exponentially accelerating feedback loop because once you have LLMs that can do work, they can help you evolve the systems that they manifest faster and faster and faster. It’s a flywheel effect. And that is where we’re going to get all this leverage in LLMs. At Root, we build an agentic platform that does vulnerability patching at scale. We’re trying to achieve sort of an open source scale level of that.

And I only said that because I believe that rapidly, not just us, but from an industry perspective, we’re evolving to have the capabilities through agentic systems based on modern LLMs to be able to really understand and modify code at scale. There’s a lot of investment going in by all the major players, whether it’s Google or Anthropic or OpenAI to make these LLM systems really good at understanding and generating code. At the heart of most vulnerabilities today, it’s a coding problem. You have vulnerable code.

And so we’ve been able to exploit those coding capabilities to turn it into an expert security engineer and maintainer of any software system. And so I think what we’re on the verge of is this, I’ll call it, remediation revolution. I mentioned that the status quo is typically inventory, scan, list, triage, do your best. That’s a scan-first mode, I’ll call it, where mostly you’re just trying to get a comprehensive list of the vulnerabilities you have. It’s going to get flipped on its head with this kind of technique, where it’s going to be just fix everything first. And there’ll be outliers. There’ll be things that are kind of technically impossible to fix for a while. For instance, it could be a disclosure, but you really don’t know how it works. You don’t have CWEs. You don’t have all the things yet. So you can’t really know yet.

That gap will close very quickly once you know what code base it’s in and you understand it, maybe through a POC or something like that. But I think we’re gonna enter into the remediation revolution of vulnerability management, where, at least for third party open source code, most of it will be fixed a priori.

Now, zero days will start to happen faster, there’ll be all the things, and there’ll be a long tail on this, and certainly probably things we can’t even imagine yet. But generally, I think vulnerability management as we know it will enter into this phase of fix first. And I think that’s really exciting, because in the end the status quo creates a lot of work for teams to manage those lists and to deal with the re-engineering cycle. It’s basically latent rework that you have to do. You don’t really know what’s coming. And I think that can go away, which is exciting because it frees up security practitioners and engineers to focus on, I’d say, more meaningful problems, less toilsome problems. And that’s good for software.

CRob (17:08)
It’s good for the security engineers.

John (17:09)
Correct.

CRob (17:10)
It’s good for the developers.

John (17:11)
It’s really good for developers. I think generally the shift left revolution in software really didn’t work the way people thought. Shifting that work left, it has two major frictions. One is it’s shifting new work to the engineering teams who are already maximally busy.

CRob (17:29)
Correct.

John (17:29)
I didn’t have time to do a lot of other things when I was an engineer. And the second is that software engineers aren’t security engineers. They really don’t like the work and maybe aren’t good at the work. And so what we really want is to not have that work land on their plate. I think we’re entering into an age, and this is a general statement for software, where software as a service and the idea of shift left are really going to be replaced with what I call shift out, which is having an agentic system do the work for you, especially if it’s work that is toilsome and difficult, low value, or even just security maintenance, right? Like, a lot of this work is hard. Patching things is hard, especially for the engineer who doesn’t know the code. If you can make that work go away and keep things secure, and agents can do that for you, I think there’s higher value work for engineers to be doing.

CRob (18:24)
Well, and especially with the trend with open source, kind of where people are assembling composing apps instead of creating them whole cloth. It’s a very rare engineer indeed that’s going to understand every piece of code that’s in there.

John (18:37)
And they don’t. I don’t think it’s feasible. I don’t know one, except for instance the folks who write Node, who knows how Node works internally. They don’t know. And if there’s a vulnerability down there, some of that stuff’s really esoteric. You have to know how that code works to fix it. As I said, luckily, existing LLM systems, with agents kind of powering them or exploiting them, are really good at understanding big code bases. They have almost a perfect memory for how the code fits together. Humans don’t, and it takes a long time to learn this code.

CRob (19:11)
Yeah, absolutely. And as I’ve been leveraging AI in my practice, there are certain specific tasks AI does very well. It’s great at analyzing large pools of data and providing you lists and kind of pointers and hints. Not so great at making things up on its own, but generally it’s the expert system. It’s nice to have a buddy there to assist you.

John (19:35)
It’s a pair programmer for me, and it’s a pair data analyst for you, and that’s how you use it. I think that’s perfect. We’ve effectively become cybernetic organisms, our organic capabilities augmented with this really powerful tool. I think it’s going to keep getting more and more excellent at the tasks that we need offloaded.

CRob (19:54)
That’s great. As we’re wrapping up here, do you have any closing thoughts or a call to action for the audience?

John (20:02)
A call to action for the audience. It’s, again, a passion play for me: vulnerability management, security of open source. A couple of things, cut from the same cloth. I think, again, we’re entering an age where security and vulnerability management can be disrupted. For anyone who’s struggling with that kind of high effort work and that never ending list, help is on the way, and there are techniques you can use with open source projects that can get you started. Just for instance, researching vulnerabilities. If you’re not using LLMs for that, you should start tomorrow. It is an amazing buddy for digging in and understanding how things work and what these exploits are and what these risks are. There is tooling like mine and others out there that you can use to really take a lot of effort out of vulnerability management. I’d say that any open source maintainers out there can start using these programming tools as pair programmers and security analysts. And they’re pretty good. And if you just learn some prompting techniques, you can probably secure your code at a level that you hadn’t before. It’s pretty good at figuring out where your security weaknesses are and telling you what to do about them. I think just these things can probably enhance open source security tremendously.

CRob (24:40)
That would be amazing to help kind of offload some of that burden from our maintainers and let them work on that excellent…

John (21:46)
Threat modeling, for instance. They’re actually pretty good at it. Yeah. Which is amazing. So start using the tools and make them your friend. And even if you don’t want to use them as a pair programmer, certainly use them as an adjunct SecOps engineer.

CRob (22:00)
Well, excellent. John from Root.io. I really appreciate you coming in here, sharing your vision and your wisdom with the audience. Thanks for showing up.

John (22:10)
Pleasure was mine. Thank you so much for having me.

CRob (22:12)
And thank you everybody. That is a wrap. Happy open sourcing everybody. We’ll talk to you soon.

OpenSSF at DEF CON 33: AI Cyber Challenge (AIxCC), MLSecOps, and Securing Critical Infrastructure

By Blog

By Jeff Diecks

The OpenSSF team will be attending DEF CON 33, where the winners of the AI Cyber Challenge (AIxCC) will be announced. We will also host a panel discussion at the AIxCC village to introduce the concept of MLSecOps.

AIxCC, led by DARPA and ARPA-H, is a two-year competition focused on developing AI-enabled software to automatically identify and patch vulnerabilities in source code, particularly in open source software underpinning critical infrastructure.

OpenSSF is supporting AIxCC as a challenge advisor, guiding the competition to ensure its solutions benefit the open source community. We are actively working with DARPA and ARPA-H to open source the winning systems, infrastructure, and data from the competition, and we are designing a program to facilitate their successful adoption and use by open source projects. At least four of the competitors’ Cyber Reasoning Systems (CRSs) will be open sourced on Friday, August 8, at DEF CON. The remaining CRSs will be open sourced soon after the event.

Join Our Panel: Applying DevSecOps Lessons to MLSecOps

We will be hosting a panel talk at the AIxCC Village, “Applying DevSecOps Lessons to MLSecOps.” This presentation will delve into the evolving landscape of security with the advent of AI/ML applications.

The panelists for this discussion will be:

  • Christopher “CRob” Robinson – Chief Security Architect, OpenSSF
  • Sarah Evans – Security Applied Research Program Lead, Dell Technologies
  • Eoin Wickens – Director of Threat Intelligence, HiddenLayer

Just as DevSecOps integrated security practices into the Software Development Life Cycle (SDLC) to address critical software security gaps, Machine Learning Operations (MLOps) now needs to transition into MLSecOps. MLSecOps emphasizes integrating security practices throughout the ML development lifecycle, establishing security as a shared responsibility among ML developers, security practitioners, and operations teams. When thinking about securing MLOps using lessons learned from DevSecOps, the conversation includes open source tools from OpenSSF and other initiatives, such as Supply-chain Levels for Software Artifacts (SLSA) and Sigstore, that can be extended to MLSecOps. This talk will explore some of those tools, as well as potential tooling gaps the community can partner to close. Embracing the MLSecOps methodology enables early identification and mitigation of security risks, facilitating the development of secure and trustworthy ML models.

We invite you to join us on Saturday, August 9, from 10:30-11:15 a.m. at the AIxCC Village Stage to learn more about how the lessons from DevSecOps can be applied to the unique challenges of securing AI/ML systems and to understand the importance of adopting an MLSecOps approach for a more secure future in open source software.

About the Author

Jeff Diecks is the Technical Program Manager for the AI Cyber Challenge (AIxCC) at the Open Source Security Foundation (OpenSSF). A participant in open source since 1999, he’s delivered digital products and applications for dozens of universities, six professional sports leagues, state governments, global media companies, non-profits, and corporate clients.