What’s in the SOSS? Podcast #54 – S3E6 AIxCC Part 4 – Cyber Reasoning Systems: The Real-World Journey After AIxCC


Summary

In this final episode of our AI Cyber Challenge (AIxCC) series, CRob and Jeff Diecks wrap up the journey from DARPA’s groundbreaking two-year competition to the exciting collaborative phase happening now. Discover how winning teams are taking their AI-powered vulnerability detection systems into the real world, finding actual bugs in projects like the Linux kernel and CUPS. Learn about the innovative OSS-CRS project that aims to create a standard infrastructure for mixing and matching the best components from different systems, and hear valuable lessons about how to responsibly introduce AI-generated security findings to open source maintainers. The competition may be over, but the real work—and collaboration—is just beginning.

This episode is part 4 of a four-part series on AIxCC.

Conversation Highlights

00:00 – Welcome and Introduction to AIxCC
01:37 – OpenSSF’s AI Security Mission: Two Lenses
03:54 – Competition Highlights: What the Teams Discovered
07:43 – Real-World Impact: From Research to Production
10:44 – Lessons Learned: Working with Open Source Maintainers
13:13 – OSS-CRS: Building a Standard Infrastructure
14:29 – Breaking Down Walls: Post-Competition Collaboration
15:39 – How to Get Involved

Transcript

CRob (00:09.408)
Welcome, welcome, welcome to What’s in the SOSS, the OpenSSF’s podcast where I get to talk to the most amazing people on the planet that are either involved in or on the outskirts of open source software and open source security. Today, we have a treat. We get to talk to one of my dear friends and teammates, Jeff, and we’re gonna dive into a topic that I really don’t know a lot about today.

So Jeff, why don’t you introduce yourself to the audience and describe what you do for the foundation.

Jeff Diecks (00:44.686)
Yeah, thanks, CRob. And hello, I’m Jeff Diecks. I’m a technical project manager with OpenSSF, and I’ve been involved in open source for 20-plus years now. Goodness. I am OpenSSF’s lead on the AI Cyber Challenge program that we work on. And CRob is sort of telling you the truth. He’s been on the three episodes prior to this, where he’s learned plenty about AIxCC, but we’re here to talk a little bit more about this and wrap up the series today.

CRob (01:17.582)
Yeah, these words you use, AI, that isn’t something I hear a lot about. Wink. Could you maybe recap for us what the OpenSSF is doing around AI security? And then maybe give a brief recap of AIxCC.

Jeff Diecks (01:37.028)
Yeah, for sure. So for OpenSSF in the world of AI, we have our AI/ML Security Working Group that looks at security and AI through two lenses. The first is AI for security, which is what we’ll be talking about here today: projects that help you use AI to improve the security of projects. The second is security for AI, which is securing this whole new world of AI things with all the lessons we’ve learned about securing software. AI is software too, and it needs securing. We have a whole suite of projects and work that focuses on that too. Specific to AIxCC, again, it’s the AI Cyber Challenge. It was a two-year competition run by DARPA and ARPA-H. If you’re hearing this episode first, I encourage you to go back to the first episode in this series with Andrew Carney from DARPA and ARPA-H for an overview of the program. And then we got into some good conversations with a couple of the team leads from some of the winning teams. But the purpose of the competition was to use AI and develop new systems to both find and fix vulnerabilities in open source software that’s important to our critical infrastructure. An interesting part about the competition: written into the rules, any of the competitors accepting prize money were obligated to release their software as open source.

CRob (03:07.214)
Nice. That’s awesome. Well, yeah, again, I say that a bit tongue-in-cheek, but there has been quite a lot of activity, whether it’s in the working group or specific to the follow-ons to the AIxCC competition, which is what we’re here about today: to kind of put a bow on these conversations and help encourage the community to engage and go forward. So we talked to a couple of the teams, and we talked to Andrew, who gave us an overview of the program.

From your perspective and your engagement with the community, we have a new cyber reasoning system special interest group within the foundation. So what have the teams been up to since the August close of the competition?

Jeff Diecks (03:54.414)
Yeah, there are really two parts to this, and I’ve had the great honor of meeting with and speaking with a lot of the teams and learning about what they’ve been doing. We first started with just conversations about their experiences with the competition and what they learned, similar to the couple of episodes that we did. And what was really interesting is that for every single team, there’s something of value that came out of their system. They each excelled in at least one specific area.

They were all finding bugs in real-world software. Just a couple of highlights from some of the other teams. Team Theori, which was among the three winners, the third-place winner, had a unique approach. Their system, unlike the others, did not use fuzzing. It used pure LLM-based AI. So it’s just an interesting variation, and there’s potential for that system to be super flexible, because it doesn’t come with some of the requirements of projects already being set up with fuzzing. So it’ll be interesting to see what becomes of their system. And then in a couple of other cases, it was really interesting: teams that have systems that are extremely capable, but for one reason or another,

CRob (05:08.142)
Yeah.

Jeff Diecks (05:18.096)
There was maybe a specific part of their system that just didn’t work well with the scoring mechanisms of the competition itself. So we had a team whose system was one of the best, the most effective, at generating patches for the issues that it found. But as these things go, there was kind of a late change in the architecture of their system during the competition. The part of their system that was supposed to submit all these generated patches into the competition for scoring didn’t function correctly and didn’t submit everything. So they didn’t get credit for all their great work. But as we’ll talk about in a little bit, it’s still a capable system, and now we can use it not for a competition but for real work that’s potentially even more valuable. There was another that was great at generating proofs of vulnerability.

CRob (06:13.87)
Mm-hmm.

Jeff Diecks (06:15.328)
But they had made some assumptions based on the competition infrastructure, and the system just didn’t perform well within the confines of the competition. What was interesting about the architecture of their system was that they would generate a potential result that they thought might be a finding from one part of their system, and then they would submit that potential result out to several other LLMs and have them feed back

CRob (06:24.718)
Mm-hmm.

Jeff Diecks (06:45.176)
a verdict on whether they thought it might be effective or not. We kind of made the joke that it was like running the poll to get eight out of nine dentists to agree and decide on the submission. So those are some highlights from the competition. But you asked about what the teams have been up to in recent months. That’s been really interesting. DARPA has kind of extended the incentives from their program, and they’re offering

CRob (06:48.846)
Hmm.

Jeff Diecks (07:13.552)
incentives and rewards for the teams now taking these systems and using them in the real world against real open source projects and demonstrating that they’re effective there. And if they can demonstrate that, they earn additional reward money, which is encouraging the adoption and the transition of this research into real world usage. So we’ve had some interesting findings there and a few examples there.
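The multi-LLM review pattern Jeff described a moment ago, submitting a candidate finding to several independent models and accepting it only when enough of them agree, can be sketched roughly like this. This is only an illustration of the pattern: the reviewer functions and the agreement threshold are hypothetical stand-ins, not details of the team’s actual system.

```python
from typing import Callable, List

def consensus_verdict(
    finding: str,
    reviewers: List[Callable[[str], bool]],
    threshold: float = 0.8,
) -> bool:
    """Accept a candidate finding only if at least `threshold` of the
    reviewer models vote that it looks like a real vulnerability."""
    votes = [reviewer(finding) for reviewer in reviewers]
    return sum(votes) / len(votes) >= threshold

# Stub reviewers standing in for calls to different LLM providers.
def strict_reviewer(finding: str) -> bool:
    return "overflow" in finding      # only accepts concrete crash language

def lenient_reviewer(finding: str) -> bool:
    return len(finding) > 10          # accepts anything with some detail

reviewers = [strict_reviewer, strict_reviewer, lenient_reviewer]

print(consensus_verdict("possible buffer overflow in the parser", reviewers))  # True
print(consensus_verdict("something looks odd here", reviewers))                # False
```

The eight-out-of-nine-dentists joke maps to a threshold of roughly 0.89; a real system would replace the stubs with API calls to several different models, each shown the candidate finding and its supporting artifacts.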

CRob (07:27.374)
Awesome.

Jeff Diecks (07:43.248)
We’ve got Team 42, who we’ve been working with. They’ve focused a lot on the Linux kernel, where their system seems to be very effective, specifically with some of the out-of-tree subsystems. They’ve found and reported several bugs and had some of their patches accepted. And actually, later this week, we’ve scheduled a consultation for that team with a kernel maintainer

to give them feedback, help their research move forward, and offer any guidance they can on how to make their system more effective. So we’re looking forward to that conversation.

CRob (08:23.36)
Excellent.

So today we have a mixture of projects. They’re all open source, but some of them are being donated to the foundation, like here, for example. Where do we go from here? What are the group’s thoughts about these cyber reasoning systems broadly, and what other interests or ideas are floating around to keep up the momentum?

Jeff Diecks (08:52.91)
Yeah, there are a few things. So one, OpenSSF is involved with the teams, and we’ve formed, as you mentioned, the cyber reasoning system special interest group as the kind of continued home after the competition for all the teams to keep collaborating and working together. Some interesting developments so far: we’ve hosted sessions where the teams present to one another

about their work with real open source projects: examples of bugs they’ve found, the process they’ve been following, and how their submissions have been received by the open source projects. So for example, Team Fuzzing Brain, who has donated their CRS system for OpenSSF to host and support.

They’ve been working against a bunch of projects, but specifically they shared some examples of their work with the CUPS project, where they found some bugs, reported them, had some patches accepted, and got great feedback from the CUPS maintainers, who are very appreciative of their work, both finding bugs and submitting patches, but also helping to generate and expand the fuzzing

CRob (09:56.408)
God.

Jeff Diecks (10:17.028)
harness coverage of the project itself, which the systems are pretty capable of. So we’ve been learning a lot about the reporting process, because it’s one thing to have these capable systems, but it’s a world where you and I and everybody else are a bit skeptical of just pure AI things, right? So we’re working our way through and kind of learning from one another about

CRob (10:19.82)
very nice.

Jeff Diecks (10:44.078)
what’s the best way to keep humans in the mix, and how are projects receiving these things? What are some lessons learned? So for example, we had a conversation in a SIG meeting where we were talking about the patch submission process, and some of the projects were reacting that it was perhaps a bit too aggressive to just go ahead and introduce yourself by submitting a patch from your system

CRob (10:46.51)
Mm-hmm.

Jeff Diecks (11:12.784)
right into the pull request queue of a project. And the group suggested that maybe for the next go-round, a more polite way to introduce yourself is by opening an issue: reporting what was found and how it was found, with all the supporting information, and then attaching a patch to be considered, versus just, hey, here’s a PR. By the way, it came from AI.

So some interesting…

CRob (11:43.022)
Right. And we’ve heard a lot of feedback from upstream about their disinterest in that approach.

Jeff Diecks (11:50.772)
Yeah, well, and that was a big focus of the scoring of the competition itself. That was among the feedback that we gave consistently for a couple of years: make sure you’re incentivizing the development of systems that don’t make life more difficult for maintainers, but hopefully make things easier, and think about how these things will be received, not just technical capabilities.

But you mentioned donated projects, and the one that I think is of real interest for folks to follow along with: Team Atlanta led the development of a project that we’re calling OSS-CRS, and bundled in with it, they intend to have something called CRS Benchmark.

What these are intended to be is a standard infrastructure for building, running, and evaluating all these CRSs, and being able to kind of mix and match and use different parts of different ones for a combined solution.

So if we think of a future where we’ve got a system like we talked about that’s most effective at generating patches, but we’ve got a different one that’s best at finding vulnerabilities…

CRob (12:58.755)
Wow.

Jeff Diecks (13:13.488)
And the hope is that through this standard interface, folks can leverage and kind of fine-tune things to get the best performance and the best results out of a combination of systems, rather than just relying on a single one. You can think of it the way it’s intended to run: if you just imagine yourself at a command-line prompt, you issue an oss-bugfind-crs build command,

then give it a configuration and a compatible project, and that’ll build a system to run against. And then you can issue the same thing, oss-bugfind-crs run, with a config, a project, and the name of a harness, and it’ll go off and do its thing. So again, you’re specifying which configurations you want to use, which subsystems you want it to

CRob (14:01.666)
Mm-hmm.

Jeff Diecks (14:14.01)
pull from. So they’ve got an interesting roadmap. We’re talking with them and hoping to bring our community and perspective to help support that project and its development and adoption into the real world.

CRob (14:29.176)
And I remember us talking around DEF CON last year. And I think the competition and the prize money are great. That was very exciting. But I’m most excited about this kind of phase we’re in now, where we’re seeing the teams with that ethical wall down between them from the competition. Now they’re actually able to talk and collaborate and share ideas. I’m really excited to see the community come together, helping support these students on these ideas.

Jeff Diecks (14:58.916)
Yeah, and that’s been the interesting part, and part of why it’s taken us a bit to get this whole podcast series out. Through the course of the competition, for competition integrity reasons, we were advising the competition organizers, but we weren’t interacting with the teams themselves. So we had to go through a whole process after the finals to introduce ourselves to all of the individual teams and let them know that we’re here and what we offer to help

support them in the further development of their systems. So that’s been an interesting few months of lots of great conversations and seeing these teams come together within our working group and special interest group.

CRob (15:39.842)
So you’ve inspired me. I sure would like to know more on how to get involved. How can I do that?

Jeff Diecks (15:41.488)
Ha

Well, if your Monday afternoons at 1 p.m. Eastern are free, we have two different meetings that basically share that time slot on alternating weeks. Our full AI/ML Security Working Group meets on Mondays at 1 p.m. Eastern time on a biweekly basis. And on the alternating weeks, the cyber reasoning system special interest group meets in that time slot. You can find them.

CRob (16:09.326)
Peace.

Jeff Diecks (16:11.844)
Both of those meeting series are on our community calendar at openssf.org/calendar.

CRob (16:18.958)
Well, I want to thank you for helping shepherd and guide the folks in these competitions. We’re seeing some great results come out of this, and I’m really excited to see what our community and these amazing students come up with to further the use of AI to help improve security. And with that, we’ll say this is a wrap.

Jeff Diecks (16:42.308)
Sounds good. Thanks, CRob, and we’ll see you in the meetings.

CRob (16:48.911)
I for one welcome our new robot overlords and I wish everybody a happy open sourcing. Have a great day.

What’s in the SOSS? Podcast #53 – S3E5 AIxCC Part 3 – Buttercup’s Hybrid Approach: Trail of Bits’ Journey to Second Place in AIxCC


Summary

In the third episode of our AI Cyber Challenge (AIxCC) series, CRob sits down with Michael Brown, Principal Security Engineer at Trail of Bits, to discuss their runner-up cyber reasoning system, Buttercup. Michael shares how their team took a hybrid approach – combining large language models with conventional software analysis tools like fuzzers – to create a system that exceeded even their own expectations. Learn how Trail of Bits made Buttercup fully open source and accessible to run on a laptop, their commitment to ongoing maintenance with prize winnings, and why they believe AI works best when applied to small, focused problems rather than trying to solve everything at once.

This episode is part 3 of a four-part series on AIxCC.

Conversation Highlights

00:04 – Introduction & Welcome
00:12 – About Trail of Bits & Open Source Commitment
03:16 – Buttercup: Second Place in AIxCC
04:20 – The Hybrid Approach Strategy
06:45 – From Skeptic to Believer
09:28 – Surprises & Vindication During Competition
11:36 – Multi-Agent Patching Success
14:46 – Post-Competition Plans
15:26 – Making Buttercup Run on a Laptop
18:22 – The Giant Check & DEF CON
18:59 – How to Access Buttercup on GitHub
21:37 – Enterprise Deployment & Community Support
22:23 – Closing Remarks

Transcript

CRob (00:04.328)
And next up, we’re talking to Michael Brown from Trail of Bits. Michael, welcome to What’s in the SOSS.

Michael Brown (ToB) (00:10.688)
Hey, thanks for having me. I appreciate being here.

CRob (00:12.7)
We love having you. So maybe could you describe a little bit about your organization you’re coming from, Trail of Bits, and maybe share a little insight into what your open source origin story is.

Michael Brown (ToB) (00:23.756)
Yeah, sure. So Trail of Bits is a small business. We’re a security R&D firm, and we’ve been in existence since about 2012. I’ve personally been with the company for about four-plus years. I work within our research and engineering department. I’m a principal security engineer, and I also lead our AI/ML security research team. At Trail of Bits, we do quite a bit of government research, and we also work for commercial clients.

And one of the common threads in all of the work that we do, not just government, not just commercial, is that we try to make it as public as we possibly can. For example, sometimes we work on sensitive research programs for the government and they don’t let us make it public. Sometimes our commercial clients don’t want to publicize the results of every security audit. But to the maximum extent that our clients allow us to, we make our tools and our findings open source. And we’re really big believers

that the work that we do should be a rising tide that raises all ships when it comes to the security posture for the critical infrastructure that we all depend upon, whether we’re working on hobbies at home and whether we’re building things for large organizations, all that stuff.

CRob (01:37.32)
love it. And how did you get into open source?

Michael Brown (ToB) (01:42.146)
Honestly, I’ve just kind of always been there. Realistically, the open source community is where you found a lot of the research tools that I started my research career with. I started off a bit in academia: I got my undergrad in computer science and then went and did something completely different for eight years.

For context, I joined the military. I flew helicopters for like eight years and did basically nothing in computing. But as I was starting to get out of the Army, I was getting married and about to have kids, and I kind of decided I wanted to be around the house a little bit more often. So I started getting a master’s degree at Georgia Tech, which they offer online. After I did that, I went on to do a PhD there and also work for their applied research arm, the Georgia Tech Research Institute.

So a lot of the work I was doing was cutting-edge work on software analysis, compilers, and AI/ML. And a lot of the tools that I built and did my research on came from the open source community. They were tools that were open sourced as part of the publication process for academic work. They were made publicly available as open source by companies like Trail of Bits, before I came to work with them, as the result of government research projects.

So, honestly, I guess I don’t really have much of an origin story for how I got there. I kind of just landed there when I started my career in security research and stayed.

CRob (03:16.814)
Everybody has a different journey that gets us here. And interestingly enough, you mentioned our friends at Georgia Tech, which was a peer competitor of yours in the AIxCC competition. Your Trail of Bits team’s project, I believe, was called Buttercup, and you came in second place. You had some amazing results with your work. So maybe could you tell us a little bit about the…

Michael Brown (ToB) (03:33.741)
Yeah, that’s correct.

CRob (03:43.15)
what you did as part of the AIxCC competition and how your team approached it.

Michael Brown (ToB) (03:51.022)
Yeah. So, at the risk of sounding a bit like a hipster, I’ve been working at the intersection of software security, compilers, software analysis, and AI/ML for basically almost my entire career as a research scientist, dating back to the earliest program I worked on for DARPA, back in 2019. So this was before the large language model was the predominant form of the technology, or kind of became synonymous with AI.

CRob (04:04.719)
Mm.

Michael Brown (ToB) (04:20.792)
For a long time, I’ve been working on trying to understand how we can apply techniques from AI/ML modeling to security problems, and doing the problem formulation to make sure that we’re applying them in an intelligent way where we’re going to get good, solid results that actually generalize and scale. So as the large language model came out, we started recognizing that certain problems within the security domain are good for large language models, but a lot of them aren’t.

When the AI Cyber Challenge came around, I was the lead designer, along with my co-designer, Ian Smith. When we sat down and made the original concept for what became Buttercup, we always took an approach where we were going to use the best problem-solving technique for the sub-problem at hand. So when we approached this giant elephant of a problem, we did what you do when you have an elephant and you’ve got to eat it: you eat it one bite at a time.

So for each bite, we took a look at it and said, okay, we have these five or six things that we have to do really, really well to win this competition. What’s the best way to solve each of them? And then the rest of it became an engineering challenge to chain them together. Our approach is very much a hybrid approach. This was a similar approach taken by the first-place winners at Georgia Tech, which, by the way, if you’ve got to be beaten by anybody, being beaten by your alma mater takes a little bit of the sting out of it. So we came in first and second place. It’s funny, I actually have another Georgia Tech PhD alumnus

CRob (05:33.832)
You

Michael Brown (ToB) (05:42.926)
on my team who worked on Buttercup. So Georgia Tech is very well represented in the AI cyber challenge. So yeah, we’ve always had a hybrid approach. The winning team had a hybrid approach. So we used AI where it was useful. We used conventional software analysis techniques where they were useful. And we put together something that ultimately performed really, really well and exceeded even my expectations.

CRob (05:45.458)
That’s awesome.

CRob (06:07.56)
I can say, as I mentioned in previous talks, I was initially skeptical about the value that could be derived from this type of work. But the results that you and the other competitors delivered were absolutely stunning. You have converted me into a believer now. I think AI absolutely has a very positive role to play, both in the research space and in the vulnerability and operations management space.

Looking at Buttercup, what is unique about your approach with your cyber reasoning system?

Michael Brown (ToB) (06:45.39)
Yeah, it’s funny you say that we converted you. I kind of had to convert myself along the way. There was a time in this competition where I thought this whole thing was going to be too reliant on AI and was going to fall on its face. And at that point, I’d be able to say, see, I told you, you can’t use LLMs for everything. But it turns out, as we got through it, we used LLMs for two critical areas and they worked much better than I thought they would. I thought they would work pretty well, but they ended up working to a much better degree than I expected. So, what makes Buttercup unique

CRob (06:49.852)
Yeah.

CRob (07:00.678)
You

Michael Brown (ToB) (07:15.69)
is that, like I said, we take a hybrid approach. We use AI/ML for problems that are well-suited for AI/ML. And what I mean by that is, when we employ large language models, we use them on small subproblems for which we have a lot of context. We have tools that we can provide for the large language model to use to ensure that it creates valid outputs, outputs that can carry on to the next stage with a high degree of confidence that they’re correct.

CRob (07:30.076)
Mm-hmm.

CRob (07:43.912)
Mm-hmm.

Michael Brown (ToB) (07:45.934)
And then in the places where we have to use conventional software analysis tools, those areas are very amenable to conventional analysis. What do I mean by this? A good example: we needed to produce a proof of vulnerability. We have to have a crashing test case to show that when we claim a vulnerability exists in a system, we can prove through reproduction that it actually exists. Large language models aren’t great

at finding these crashing test cases just by asking it to look at the code and say, hey, what’s going to crash this? They don’t do very well at that. They also don’t do well at generating an input that will even get you to a particular point in a program. But fuzzers do a great job of this. So we use the fuzzer to do this. But one of the things about fuzzers is they kind of take a long time. They’re also more generally aimed at finding bugs, not necessarily vulnerabilities.

CRob (08:36.808)
Mm-hmm.

Michael Brown (ToB) (08:42.702)
So we used an AI/ML, large language model-based accelerator, a seed generator, to help us generate inputs that were going to guide the fuzzer to saturate the fuzzing harnesses that existed for these programs more quickly, and to help us find and shake loose more crashing inputs that correspond to vulnerabilities as opposed to bugs. And those things really, really helped us deal with some of the short analysis and

processing windows that we encountered in the AI Cyber Challenge. So it was really a matter of using conventional tools but making them work better with AI, or using AI for problems, like generating software patches, for which there really aren’t great conventional software analysis tools.

CRob (09:28.018)
So as you were going through the competition, which went through multiple rounds, was there anything that surprised you or that you learned? Again, you said your opinion changed on using AI. What were maybe some of the moments that generated that?

Michael Brown (ToB) (09:45.226)
Yeah, there were a couple of them. I’ll start with one where I can pat myself on the back, and I’ll finish with one where I was kind of surprised. So first, we had a couple of moments that were really kind of vindicating as we went through this. Our opinion going into this was that, with large language models, you couldn’t just throw the whole problem at them and expect them to be successful. So going into this, there were a lot of things that we did

CRob (09:49.405)
hehe

Michael Brown (ToB) (10:14.774)
two years ago, when we first started out, and two years ago is like five lifetimes when it comes to the development of AI systems now. So there were some things we did that didn’t exist before and that became industry standard by the time we finished the competition. Things like putting your LLM queries, your LLM prompts, in a workflow that includes validation with tools, or the ability to use tools.

CRob (10:29.298)
Mm-hmm.

Michael Brown (ToB) (10:43.062)
That was not mainstream when we first started out, but it was something that we built custom into Buttercup, particularly when it came to patching. And then also using a multi-agent approach. A lot of the hype around AI is that you just ask it anything and it gives you the answer. We’re asking a lot of AI when we say: here’s a program, tell me what vulnerabilities exist, prove they exist, and then fix them for me.

And also, don’t make a mistake anywhere along the way. It’s way too much to ask. We found that particularly with patching. Back then, multi-agent systems, or even agentic systems, were unheard of. We were still using ChatGPT 3.5, still very much chatbot interactions, web browser interactions.

CRob (11:16.564)
Yeah.

Michael Brown (ToB) (11:36.438)
Integration into tools was certainly less widespread. So we had seen some very early work on arXiv about solving complex problems with multiple agents, breaking the problem down for them. We used this, and our patcher ended up being incredibly good. It was our most important and our biggest success on the project. I really want to shout out Ricardo Charon, the lead developer for our patching agent

CRob (11:47.976)
Mm-hmm.

Michael Brown (ToB) (12:06.414)
and for our patching system within both the semifinals and finals in AIxCC. He did an incredible job, and we really built something that, like I said, I regard as our biggest success. So sure enough, as we go through this two-year competition, now all of a sudden multi-agentic, tool-enabled systems are all the rage. This is how we’re solving these challenging problems. And a lot of this problem-breakdown stuff has made its way baked into the models now, the newer thinking and reasoning models from

Anthropic and OpenAI, respectively. You can give them these large, complicated problems, and they will first try to break the problem down before trying to solve it. So we were building all that stuff into our system before it came about. That’s an area where, like I said, we learned along the way that we had the right approach from the beginning, and it’s really easy to go back and say that what we learned was that we were right. So on the other side of this, I’ll reiterate, I was really surprised at how well

CRob (12:53.639)
Mm-hmm.

Michael Brown (ToB) (13:04.11)
language models were able to do some of the tasks we asked of them. Part of it is how we approached the problem: we didn't ask too much of them, and I think that's part of the reason why the large language models were successful. An area I thought was going to be much more challenging was patching. But it turned out to be an area where, to a certain degree, this is kind of an easier version of the general problem, because open source software, the targets of the AI Cyber Challenge, is ingested into the training

CRob (13:08.924)
Mm.

Michael Brown (ToB) (13:31.404)
data for all of these large language models. So the models do have some a priori familiarity with the targets. When we give them a chunk of vulnerable code from a given program, it's not the first time they've seen that code. But still, they did an amazing job actually generating useful patches. The patch rate I personally expected was much lower than the actual patch rate we had, both in the semifinals and in the finals. So even in that first-year window,

CRob (13:33.64)
Mm.

Michael Brown (ToB) (13:58.63)
I was really blown away by how well the models were doing at code generation tasks, particularly small, focused code generation tasks. So I think large language models are kind of getting a bad rap right now when it comes to, say, trying to vibe-code entire applications. People say, gosh, this code is slop, it's terrible, it's full of bugs. Well, you did also ask it to build the whole thing. If I asked a junior developer to build the whole thing, they'd probably also put together some mistakes

CRob (14:07.366)
Yeah.

CRob (14:17.233)
Yeah.

CRob (14:26.258)
Yeah.

Michael Brown (ToB) (14:26.71)
and gross stuff. But when I ask a junior developer for a bug fix, much like the large language model, when I ask for a more constrained version of the problem, they tend to do a better job, because there are just fewer moving parts. So yeah, those are the two things I took away: one that, like I said, I get to pat myself on the back for, and another that was actually surprising.
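The agent decomposition Michael describes, breaking patching into small, separately prompted steps, can be sketched as a toy loop. This is purely illustrative, not Buttercup's actual architecture; the `llm` and `run_tests` callables are hypothetical stand-ins for a real model client and a real test harness:

```python
from typing import Callable, Optional

# Toy multi-agent patching loop. Illustrative only: this is not
# Buttercup's real architecture, and `llm` / `run_tests` are
# hypothetical stand-ins for a model client and a test harness.

def analyze(llm: Callable[[str], str], code: str, crash: str) -> str:
    # Agent 1: a small, focused diagnosis prompt.
    return llm(f"Explain the root cause of this crash:\n{crash}\nCode:\n{code}")

def propose(llm: Callable[[str], str], code: str, diagnosis: str) -> str:
    # Agent 2: draft a minimal patch from the diagnosis alone.
    return llm(f"Given this diagnosis:\n{diagnosis}\nWrite a minimal patch for:\n{code}")

def patch_loop(llm, run_tests: Callable[[str], bool],
               code: str, crash: str, retries: int = 3) -> Optional[str]:
    # The validator is ground truth, not a model: rerun the crashing
    # input and the test suite against each candidate patch.
    diagnosis = analyze(llm, code, crash)
    for _ in range(retries):
        candidate = propose(llm, code, diagnosis)
        if run_tests(candidate):
            return candidate
    return None
```

Keeping each agent's prompt narrow is the point of the decomposition: no single LLM call is asked to diagnose, patch, and verify all at once.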

CRob (14:46.012)
That's awesome. That's amazing. So now that the competition is over, what does the team plan to do next?

Michael Brown (ToB) (14:57.098)
Yeah, so I mean, look, we spent a lot of our time over the last two years. A lot of, I wouldn't quite say blood, I don't think anyone bled over this, but we certainly had some tears, and we certainly had a lot of anxiety. We put a lot of ourselves into Buttercup, and we want people to use it. So to that end, Buttercup is fully available and fully open source. DARPA made it a contingency of participating in the competition that

CRob (15:09.917)
Mm-hmm.

Michael Brown (ToB) (15:26.892)
you had to make the code you submitted to the semifinals and the finals open source. So we did that, along with all of our other competitors, but we actually took it one step further. The code we submitted to the finals is great, it's awesome, but it runs at scale. It used $40,000 of a $130,000, I think, total budget. And it ran across an Azure subscription that had multiple nodes

and countless replicated containers. That is not something everyone can use, and we want everyone to use it. So in the month after we submitted our final version of the CRS, but before DEF CON, where we found out that we won, we spent a month making a version of Buttercup that's decoupled from DARPA's competition infrastructure. It runs entirely standalone, but more importantly, we scaled it down so it'll run on a laptop.

CRob (16:18.696)
Mm-hmm.

Michael Brown (ToB) (16:25.154)
We left all of the hooks, all of the infrastructure, to scale it back up if you want. So the idea now is that if you go to trailofbits.com/buttercup, you can learn about the tool. We have links to our GitHub repositories where it's available, and you can download Buttercup on your laptop and run it right now. If you've got an API key that'll let you spend a hundred dollars, we can run a demo to show you that we can find and patch a vulnerability live.

CRob (16:51.496)
That’s easy.

Michael Brown (ToB) (16:53.164)
Yeah, so anyone can do this right now. If you're an organization that wants to use Buttercup, you can also use the hooks we left in to scale it up to the size of your organization and the budget you have, and run it at scale on your own software targets. Even for users beyond the open source community, we want this to be used on closed source code too. So yeah, you asked what we're going to do with it afterward: we made it open source, and we want people to use it.

And on top of that, we don't want it to bit rot. So we're going to retain a pretty significant portion of our $3 million prize and use it for ongoing maintenance. So we're maintaining it. We've had people submit PRs that we've accepted. They're tiny, it's only been out for about a month, but we've also made quite a few updates to the public version of Buttercup since. So it's actively maintained.

The company is putting its money where its mouth is. We're actively maintaining it, and the people who built it are among the people maintaining it. We're taking contributions from the community, and we hope they help us maintain it as well. And we've made it so anyone can use it. I think we've taken it about as far as we possibly can in terms of reducing the barriers to adoption to the absolute minimum for people to use Buttercup and leverage AI to help them find and patch vulnerabilities at scale.

CRob (18:16.716)
I love that approach. Thank you for doing that. How did you fit the giant check through the teller window?

Michael Brown (ToB) (18:22.574)
Fortunately, that check was a novelty, and we did not actually have a problem larger than AIxCC itself to solve afterward, which was getting paid. So yeah, we did have the comically large check taped up in our booth at the AIxCC village at DEF CON, and it certainly attracted quite a few photographs from passersby.

CRob (18:26.716)
Ha ha ha!

CRob (18:31.964)
Yeah.

CRob (18:37.864)
Mm-hmm.

Michael Brown (ToB) (18:47.736)
I don't know, I think if you get on social media and look up AIxCC, there are probably lots of pictures that random people took of me with a big smile and two thumbs up underneath the check.

CRob (18:59.464)
So you mentioned that Buttercup is all open source now. So if someone was interested in checking it out or possibly even contributing, where would they go do that?

Michael Brown (ToB) (19:07.564)
Yeah, so we have a GitHub organization, Trail of Bits, and you can find Buttercup there. You can also find our public archives of the old versions of Buttercup. So if you're interested in the code that actually won the competitions, you can see what got us from the semifinals to the finals, you can see what won us second place in the finals, and you can also download and use the actively maintained version that'll run on your laptop. All three of them are there. The repository name is just Buttercup.

We are not the only people who love The Princess Bride, so there are other repositories named Buttercup on GitHub. You might have to sift a little, but basically github.com/trailofbits/buttercup is, I think, about 85% of the URL. I don't have it memorized, but you can find it publicly available, along with a lot of other tools that Trail of Bits has made over the years. So we encourage you to check some of those out as well. A lot of those are still actively maintained and

CRob (19:39.036)
That’s what it was.

Michael Brown (ToB) (20:03.72)
have a lot of community support. Believe it or not, at last count, at something like 1,250 stars, Buttercup is only about our fifth most popular tool that Trail of Bits has created. We were quite notable for creating some binary lifting tools that rank up there, and we have some other tools we've created recently for parser security and analysis like that, like Graphtage.

And then some more conventional security tools, like Algo VPN, still rank above Buttercup. So as awesome as Buttercup is, it's only the fifth coolest tool we've made, as voted on by the community. So check out the other stuff while you're there too. Believe it or not, Buttercup isn't our most popular offering.

CRob (20:51.56)
That's a pretty awesome statement to be able to make: that's only our fifth most important tool.

Michael Brown (ToB) (20:53.966)
Yeah.

Michael Brown (ToB) (20:58.444)
I don't know, personally I'm kind of hoping that maybe we move up a few notches after people get time to go find it and star it. But we've made some other really significant and really awesome contributions to the community, even outside of the AI Cyber Challenge. So I want to really stress that all of that stuff is open source. We aren't just doing this because we have to; we actually care about the open source community. We want to secure the software infrastructure. We want people to use the tool and secure their software before they get it out there, so that

we tackle this kind of untackable problem of securing this massive ecosystem of code.

CRob (21:37.606)
Michael, thank you to Trail of Bits and your whole team for all the work you do, including the competition runner-up Buttercup, which did an amazing job by itself. Thank you for all your work, and thank you for joining us today.

Michael Brown (ToB) (21:52.802)
Yeah, thanks for having me. One last thing to shout out: if you're an organization looking to employ Buttercup, don't be bashful about reaching out to us and asking about use cases for deploying it within your organization. We're happy to help out there. That's probably an area we've focused on a little less than getting this out the door for average folks or individuals to use. So we're definitely interested in helping make sure Buttercup gets used.

Like I said, reach out to us, talk to us if you’re interested in Buttercup, we want to hear.

CRob (22:23.44)
Love it. All right. Have a great day.

Michael Brown (ToB) (22:25.678)
All right, thanks a lot.

What’s in the SOSS? Podcast #52 – S3E4 AIxCC Part 2 – From Skeptics to Believers: How Team Atlanta Won AIxCC by Combining Traditional Security with LLMs

By Podcast

Summary

In this 2nd episode in our series on DARPA's AI Cyber Challenge (AIxCC), CRob sits down with Professor Taesoo Kim from Georgia Tech to discuss Team Atlanta's journey to victory. Kim shares how his team, comprised of academics, world-class hackers, and Samsung engineers, and initially skeptical of AI tools, underwent a complete mindset shift during the competition. He describes how they successfully augmented traditional security techniques like fuzzing and symbolic execution with LLM capabilities to find vulnerabilities in large-scale open source projects. Kim also reveals exciting post-competition developments, including commercialization efforts in smart contract auditing and plans to make their winning CRS accessible to the broader security community through integration with OSS-Fuzz.

This episode is part 2 of a four-part series on AIxCC:

Conversation Highlights

00:00 – Introduction
00:37 – Team Atlanta’s Background and Competition Strategy
03:43 – The Key to Victory: Combining Traditional and Modern Techniques
05:22 – Proof of Vulnerability vs. Finding Bugs
06:55 – The Mindset Shift: From AI Skeptics to Believers
09:46 – Overcoming Scalability Challenges with LLMs
10:53 – Post-Competition Plans and Commercialization
12:25 – Smart Contract Auditing Applications
14:20 – Making the CRS Accessible to the Community
16:32 – Student Experience and Research Impact
20:18 – Getting Started: Contributing to the Open Source CRS
22:25 – Real-World Adoption and Industry Impact
24:54 – The Future of AI-Powered Security Competitions

Transcript

Intro music & intro clip (00:00)

CRob (00:10.032)
All right, I'm very excited to talk to our next guest. I have Taesoo Kim, who is a professor down at Georgia Tech and also works with Samsung. And he had the great opportunity to help shepherd Team Atlanta to victory in the AIxCC competition. Thank you for joining us. It's a real pleasure to meet you.

Taesoo Kim (00:35.064)
Thank you for having me.

CRob (00:37.766)
So we were doing a bunch of conversations around the competition. I really want to showcase like the amazing early cutting edge work that you and the team have put together. So maybe, can you tell us what was your team’s approach? What was your strategy as you were kind of approaching the competition?

Taesoo Kim (00:59.858)
That's a great question. Let me start with a little bit of background.

CRob (00:)
Please.

Taesoo Kim (00:59)
Our team, Team Atlanta, is a group of people from various backgrounds, including academics and researchers in the security area like me. We also have world-class hackers on our team and some engineers from Samsung as well. So we have backgrounds in various areas, and we each bring our expertise

Taesoo Kim (01:29.176)
to compete in this competition. It was a two-year journey. We put in a lot of effort, not just on the engineering side; we also drew on a lot of research approaches that we've been working on in this area for a while. That said, I think the most important strategy our team took is that, although it's an AI competition…

CRob (01:51.59)
Mm-hmm.

Taesoo Kim (01:58.966)
…meaning that they promote the adoption of LLM-based techniques, we didn't simply give up on the traditional analysis techniques we're familiar with. We put a lot of effort into improving them: fuzzing is one of the great dynamic testing techniques for finding vulnerabilities, and there are also traditional techniques like symbolic execution, concolic execution, even directed fuzzing. But we suffered from a lot of scalability issues in those tools, because one of the themes of AIxCC is to find bugs in the real world,

in large-scale open source projects. Most of the traditional techniques do not scale to that level. We can analyze one function or a small amount of code in a source repository, but when it comes to, for example, Linux or Nginx, that is a crazy amount of source code. Even building a whole call graph of such a gigantic repository is extremely hard. So we started augmenting our pipeline with LLMs.

A great example is fuzzing: when we mutate inputs, we leverage a lot of classic mutation techniques on the fuzzing side, but we also leverage the LLM's understanding of the code. The LLM can identify promising places to mutate, generate dictionaries that provide vocabulary for the fuzzer, and recognize the input format that has to be mutated. So augmentations like this, using LLMs, happen all over the place in the traditional software analysis techniques we use.
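The dictionary idea Kim describes can be sketched in miniature. This is a hypothetical illustration, not Team Atlanta's code: the `ask_llm` callable is a stand-in for a real model call that extracts the tokens a parser compares against (with a crude regex fallback used here), and the mutator splices those tokens into inputs the way a dictionary-aware fuzzer would:

```python
import random
import re

def extract_dictionary(source_snippet, ask_llm=None):
    # Ask an LLM for magic tokens/keywords the parser compares against.
    # `ask_llm` is a hypothetical stand-in for a real model call; without
    # it, fall back to a crude scan for quoted string literals.
    if ask_llm is not None:
        return ask_llm(f"List input keywords this parser checks for:\n{source_snippet}")
    return re.findall(r'"([^"]+)"', source_snippet)

def mutate(data, dictionary, rng=random.Random(0)):
    # One dictionary-guided mutation: splice a known token into the
    # input, alongside the byte-level mutations a fuzzer also performs.
    token = rng.choice(dictionary).encode()
    pos = rng.randrange(len(data) + 1)
    return data[:pos] + token + data[pos:]

source = 'if (strncmp(buf, "GET ", 4) == 0 || strncmp(buf, "POST ", 5) == 0)'
words = extract_dictionary(source)   # ["GET ", "POST "]
print(mutate(b"xxxx", words))
```

Without the dictionary, random byte flips would almost never produce `"GET "` by chance; with it, the fuzzer reaches the parser's interesting branches immediately.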

CRob (03:43.332)
And do you feel that combination of newer techniques with fuzzing and the older, more traditional techniques was what was unique and helped push you over the victory line in the cyber reasoning challenge?

Taesoo Kim (04:01.26)
It's extremely hard to say which one contributed the most during the competition. But I want to emphasize the difference between finding the location of a bug in the source code and formulating an input that triggers the vulnerability, what the competition calls a proof of vulnerability. These two tasks are completely different. You can identify many bugs.

But unfortunately, in order to say this is truly a bug, you have to prove it yourself by constructing the input that triggers the vulnerability. I would say people do not comprehend the relative difficulty of formulating an input versus finding a vulnerability in the source code. You can pinpoint, without much difficulty, various suspicious places in the source code.

But in fact, that's the easier job. In practice, the more difficult challenge is identifying an input that actually reaches the place you care about and triggers the vulnerability as a result. So we spent much more time on how to construct inputs correctly, to show that we really proved the existence of the vulnerability.

CRob (05:09.692)
Mm-hmm.

CRob (05:22.94)
And I think that's really a key to the competition as it happened, versus just someone randomly running LLMs and scanners on the internet: the fact that you all were incented to, and required to, develop that fix and actually prove that these things are really vulnerable and reachable.

Taesoo Kim (05:33.718)
Exactly.

Taesoo Kim (05:42.356)
Exactly. That also highlights what practitioners care about. If you end up with too many false positives in your security tools, no one cares; there are a lot of complaints about why people don't use security tools in the first place. So this is one of the important criteria of the competition. And one of the strengths of traditional tools like fuzzers and concolic executors is that everything centers around reducing false positives. The reason people

CRob (05:46.192)
Yes.

Taesoo Kim (06:12.258)
take a fuzzer into their workflow is that whenever the fuzzer says there is a vulnerability, there indeed is a vulnerability. That makes a huge difference. So we started with those existing tools and recognized the places we had to improve, so we could really scale those traditional tools up to find vulnerabilities in this large-scale software.

CRob (06:36.568)
Awesome. As you know, the competition was a marathon, not a sprint, so you were doing this for quite some time. As the competition progressed, was there anything that surprised you and the team and changed your thinking about the capabilities of these tools?

Taesoo Kim (06:51.502)
Ha

Taesoo Kim (06:55.704)
So as I mentioned before, we are hackers. We won DEF CON CTF many times, and we also won the F1 competition in the past. So by nature, we were extremely skeptical about AI tools at the beginning of the competition. Two years ago, we evaluated every single existing LLM service with benchmarks that we designed, and we realized they were all not usable at all,

CRob (07:09.85)
Mm-hmm.

Taesoo Kim (07:24.33)
not appropriate for the competition. Instead of spending time improving those tools, which felt inferior at the beginning, our motto at that time was: don't touch those areas. We're going to show you how powerful these traditional techniques are. That's how we approached the semifinal, and we did pretty well. We found many of the bugs using all the traditional tools we'd been working on. But

immediately after the semifinal, everything changed. We reevaluated the possibility of adopting LLMs. Earlier, if you just removed or obfuscated some of the tokens in a repository, the LLM couldn't reason about it at all. But suddenly, around the semifinal, something happened. We realized that even after we inject or

replace things: think of it this way, there is a token, and you replace it with meaningless words. Previously, LLMs got confused by these synthetic alterations to the structure of the source code, but now, around the semifinal, they really understood it. Although we tried to fool them many times, they caught on to the idea, even on source code they had never seen before, never used in training, because we intentionally created this source code for the evaluation.

We started realizing that they actually understand. It shocked everybody. And we realized that, if that's the case, there are so many places we can improve, right? So that's the moment we changed our mindset. Now everything was about LLMs, everything about the new agentic architectures, and we ended up putting a humongous amount of effort into creating the various agent architectures we have.

We also, surprisingly, replaced some software analysis techniques with LLMs. Symbolic execution is a good example. It's extremely hard to scale: whenever you execute one instruction at a time, you have to create constraints around it. And one of the big challenges in real-world software is that there are so many, I would say, hard-to-analyze functions. For example, take

Taesoo Kim (09:46.026)
even NGINX as an example. We assumed it compared strings character by character, but the way NGINX performs string comparison, it hashes the string so it can compare hash values. A fuzzer or a symbolic executor is extremely bad at that. If you hit a hashing function, you're screwed: there are so many constraints that there is no way to invert it, by design.

There's no way. But if you think about how to overcome these situations using an LLM: the LLM can recognize that this is a hashing function. We don't actually have to create constraints around it; what about replacing it with an identity function? That's something symbolic execution can easily handle. So we started recognizing the possible role of LLMs in symbolic execution, and now we see that

symbolic execution can scale to large software. I think this is a pretty amazing outcome of the competition.
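Why a hash call defeats a symbolic executor, and why modeling it as an identity function helps, can be shown with a toy example. This is an illustration of the idea, not Team Atlanta's code: a search procedure that climbs comparison progress solves byte-by-byte equality easily but gets no signal at all from a hashed comparison.

```python
def djb2(s: bytes) -> int:
    # Stand-in for NGINX-style hashed string comparison.
    h = 5381
    for b in s:
        h = (h * 33 + b) & 0xFFFFFFFF
    return h

TARGET = b"admin"
HASHED = djb2(TARGET)

def cmp_hashed(inp: bytes) -> int:
    # One opaque constraint: pass/fail only, no per-byte progress.
    return 6 if djb2(inp) == HASHED else 0

def cmp_identity(inp: bytes) -> int:
    # The "LLM rewrite": hash modeled as identity, so comparison
    # decomposes per byte and progress is the matched prefix length.
    n = 0
    for a, b in zip(inp, TARGET):
        if a != b:
            break
        n += 1
    return n + (1 if inp == TARGET else 0)

def greedy_search(score, length: int):
    # Stand-in for a solver that exploits decomposable constraints:
    # fix one byte at a time, keeping whichever value scores best.
    inp = bytearray(length)
    for i in range(length):
        inp[i] = max(range(256),
                     key=lambda v: score(bytes(inp[:i]) + bytes([v]) + bytes(inp[i + 1:])))
    out = bytes(inp)
    return out if score(out) > length else None
```

Against `cmp_identity` the search recovers `b"admin"` byte by byte; against `cmp_hashed` every candidate scores zero and it finds nothing, which is exactly the wall the identity-function substitution is meant to remove.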

CRob (10:53.11)
Awesome. So again, the competition concluded in August. What plans does the team have for your CRS now that the competition's over?

Taesoo Kim (10:58.446)
Thank you.

Taesoo Kim (11:02.318)
I think that's a great question. Many tech companies approached our team. Some of our members recently joined other big companies. And many of my students want to quit the PhD program and start a company. For good reasons, right?

CRob (11:14.848)
I bet.

Taesoo Kim (11:32.766)
One team, four of my PhD students, recently formed and is looking for commercialization opportunities. Not in the traditional cyber infrastructure we looked at through DARPA; they spotted a possibility in smart contracts. Smart contracts and modernized financial industries, like stablecoins and whatnot,

are where they can apply AIxCC-like techniques to find vulnerabilities. Instead of a human auditor analyzing everything, you can analyze everything using LLMs, agents, and similar techniques we developed for AIxCC, so you can reduce the auditing time significantly. Traditionally, to get a smart contract audited, you have to wait two weeks,

in the worst case even months, with a ridiculous cost: typically $20,000 or $50,000 per audit. But in fact, you can reduce the auditing time down to, I'd say, a few hours or a day. The potential benefit of achieving this speed is that you really open up

CRob (12:40.454)
Mm-hmm.

CRob (12:47.836)
Wow.

Taesoo Kim (12:58.186)
amazing opportunities in this area. You can automate auditing and increase the frequency of auditing in the smart contract space. Beyond that, we think there's a possibility for compliance checking of smart contracts; there are so many opportunities we can pursue immediately using AIxCC systems. That's one area we're looking at. Another is a more traditional area,

CRob (13:00.347)
Mm-hmm.

Taesoo Kim (13:25.07)
what we call cyber infrastructure, like hospitals and some government sectors. They really want to analyze their systems, but, unfortunately or fortunately, there's a different opportunity there: in AIxCC we analyzed everything via source code, but they don't have access to source. So we are creating a pipeline that, given a binary or an execution-only environment, converts the targets

CRob (13:28.828)
Mm-hmm.

CRob (13:38.236)
Mm-hmm.

CRob (13:49.569)
Taesoo Kim (13:52.416)
in a way that lets us still leverage the existing infrastructure we built for AIxCC. More interestingly, they don't have access to the internet when they're doing pen testing or analysis, so we are incorporating some of our open source models as part of our systems. These are the two commercialization efforts we're thinking about, and many of my students are currently working on them.

CRob (13:57.67)
That’s very clever.

CRob (14:05.5)
Yeah.

CRob (14:13.564)
It’s awesome.

CRob (14:20.366)
And I imagine this is probably amazing source material for dissertations and PhD work, right?

Taesoo Kim (14:29.242)
Yes, yes. For the last two years, we were purely focused on AIxCC. Our motto was that we don't have time for publication; just win the competition, everything comes after. This is the moment that we actually... I think we're going to release our tech report, over 150 pages, around next week. We have a draft right now, but we are still preparing it

CRob (14:39.256)
Yeah.

CRob (14:51.94)
Wow.

Taesoo Kim (14:58.51)
We're polishing it for publication so that other people get not just the source code (okay, that's great) but some explanation of why we did things. Much of the source is competition-specific, so the core pieces might be a bit different from what normal developers and operators would use day to day. So we created condensed technical material to help them understand it.

Not only that, we have a plan to make it more accessible. Currently, our CRS implementation is tightly bound to the competition environment: we had a crazy amount of resources on the Azure side, everything deployed and battle-tested. But unfortunately, most people, including ourselves, don't have those resources. The competition gave us about

80,000 in cloud credits that we had to use. No one has that kind of resource unless you're a company. But we want people to apply this to their own projects at a smaller scale. That's what we are currently working on: discarding all these competition-dependent parameters from the source code and making it more self-contained, so you can even launch our CRS in your local environment.

This is one of the big development efforts we are doing right now in our lab.

CRob (16:32.155)
That's awesome. Take me a second, thinking about this from the perspective of the students who participated: what kind of experience was it, getting to work with professors such as yourself and actual professional researchers and hackers? What do you see the students taking away from this experience?

Taesoo Kim (16:53.846)
I think exposure to the latest models. Because we were tightly collaborating with OpenAI and Gemini, we were really exposed to the latest models. If you're just working on security, not working closely with LLMs, you probably don't appreciate that as much. But through the competition, everyone's mindset changed. And then we spent time

deeply looking into what's possible and what's not, and now we have a great sense of what type of problems we have to solve, even on the research side. And now, suddenly, after this competition, every single security research project we are doing at Georgia Tech is based on LLMs. Even more surprising, we have a decompilation project, the most traditional security research you can imagine:

CRob (17:42.448)
Ha ha.

Taesoo Kim (17:52.162)
binary analysis, malware analysis, decompilation, crash analysis, whatnot. Now everything is LLM. We realized LLMs are much better at decompiling than traditional tools like IDA and Ghidra. These are the types of research we previously thought impossible; we probably weren't even thinking about applying LLMs, because we've spent our lifetimes working on decompiling.

CRob (17:53.68)
Mm.

CRob (17:59.068)
Yeah.

Taesoo Kim (18:22.318)
But at a certain point, we realized that the LLM was just doing better than what we'd been working on. Just one day, a complete mind change. From a traditional program analysis perspective, many of these problems are NP-complete; there's no way to solve them easily, so you don't spend time on them. That was our typical mindset. But now, it works in practice, amazingly.

CRob (18:29.574)
Yeah.

Taesoo Kim (18:51.807)
So we think about how to improve on what we previously thought impossible by using this new tool. That's the key.

CRob (18:57.404)
That's awesome. It's interesting, especially since you said that initially, going into the competition, you were very skeptical about the utility of LLMs. So it's great that you had this complete reversal.

Taesoo Kim (19:04.238)
Thank you.

Yeah, but I'd like to emphasize one of the problems with LLMs, though: they're expensive, and slow in the traditional sense. You have to wait a few seconds, or a few minutes in certain cases, like with reasoning models. So tightly binding your system's performance to this laggy component in the overall pipeline is often challenging,

CRob (19:17.648)
Yes.

CRob (19:21.82)
Mm-hmm.

Taesoo Kim (19:39.598)
and then there's the interaction itself. Another thing is that everything is text. There's no proper API, just text; no sophisticated way to leverage it, just text. You're probably familiar with the security issues that potentially come with unstructured input; it's similar to cross-site scripting in the web space. There are so many problems you can imagine.

CRob (19:51.984)
Okay, yeah.

CRob (20:01.979)
Mm-hmm.

Taesoo Kim (20:08.11)
But as long as you use it in a well-contained manner, in the right way, we believe there are so many opportunities we can get from it.

CRob (20:18.876)
Great. So now that your CRS has been released as open source, if someone from our community was interested in checking it out and maybe contributing, what's the best way to get started and get access?

Taesoo Kim (20:28.494)
Mm-hmm.

So we're going to release the non-competition version very soon, along with several documents for the standardization effort that we and other teams are working on right now. We are defining a non-competition CRS interface, so as long as you implement that interface, you can plug in. Our goal is to upstream this into OSS-Fuzz together with the Google team,

CRob (20:36.369)
Mm-hmm.

CRob (20:58.524)
Mm-hmm.

Taesoo Kim (20:59.086)
so that you can put your CRS into the OSS-Fuzz mainline, making it much easier for everyone to evaluate CRSs one at a time in their local environment as part of the OSS-Fuzz project. We're going to release the RFC document pretty soon through our website so everyone can participate and share their opinion on what features they think we're missing. We'd love to hear about that.

CRob (21:03.74)
Thanks.

CRob (21:18.001)
Mm-hmm.

Taesoo Kim (21:26.502)
And then, after about a month, we're going to release our local version so everyone can start using it. And with a very permissive license, everyone can take advantage of this public research, including companies.

CRob (21:34.78)
Awesome.

CRob (21:42.692)
I'm just amazed. When I came into this, partnering with our friends at DARPA, I was initially skeptical as well. And as I sat there watching the finals announced, it was just amazing: the innovation and creativity that all the different teams displayed. So again, congratulations to your team, all the students and the researchers and everyone who participated.

Taesoo Kim (21:59.79)
Mm-hmm.

CRob (22:12.6)
Well done. Do you have any parting thoughts? You know, as we move on, do you have any words of wisdom you want to share with the community, or any takeaways for people curious to get into this space?

Taesoo Kim (22:25.486)
Oh, regarding commercialization, one thing I'd also like to mention is that at Samsung, we already took the open source version of the CRS and started applying it to internal projects and Samsung's open source projects immediately after. So we started seeing the benefit of applying the CRS in the real world immediately after the competition. A lot of people think that a competition is just for competition, or for show.

CRob (22:38.108)
Mm-hmm.

Taesoo Kim (22:55.032)
But in fact, it's not. Everyone in industry, including Anthropic, Meta, and OpenAI, wants to adopt those technologies behind the scenes. And we're also working together with the Amazon AWS team, who want to support the deployment of our systems in the AWS environment as well, so that everyone can launch the systems with just one click. And they mentioned there are several

CRob (22:55.036)
Mm-hmm.

Taesoo Kim (23:24.023)
government-backed organizations that explicitly requested to launch our CRS in their environment.

CRob (23:31.1)
I imagine so. Well, again, kudos to the team. Congratulations. It's amazing. I love to see when researchers have these amazing creative ideas and are actually able to deliver real value. And it's great to hear that Samsung was immediately able to start getting value out of this work; hopefully other folks will do the same.

Taesoo Kim (23:55.18)
Yeah, exactly. Regarding words of wisdom, or general advice in general: this kind of competition-based innovation, particularly involving academics and startups, works because of the venue. Including ourselves, startup people and other team members put their lives into this competition. It's an objective metric, a head-to-head competition. We don't care about your background. Just win, right? There's your objective score; your job is to find and fix bugs. I think this competition really drove a lot of effort behind the scenes in our team. We were motivated because the entire competition was presented to a broader audience. I think this is really a way to drive innovation,

CRob (24:26.46)
Mm-hmm.

CRob (24:32.57)
Yes.

CRob (24:36.709)
Mm-hmm.

Taesoo Kim (24:54.904)
to get some public attention beyond our own field as well. So I think we really want to see other types of competition in this space. And in the longer-term future, based on the current trend, you'll probably see CTF competitions like that, maybe not just CTF but agent-based CTF, with no humans involved, where the agents are attacking each other and solving CTF challenges.

CRob (24:58.524)
Excellent.

CRob (25:19.59)
Mm-hmm.

Taesoo Kim (25:24.846)
This is not five years away. It's going to happen in two years or sooner. Even in this year's LiveCTF, one of the teams actually leveraged agent systems, and the agents actually solved the challenges quicker than the humans. So I think we're going to see those types of events and breakthroughs more and more often.

CRob (25:55.292)
I used to be a judge at the collegiate cyber competition for one of our local schools. And I see a lot of interesting applicability in using this to help teach students: you have an aggressive attacker applying these different techniques, and you're able to apply some of the learnings that you all have developed. It's really exciting stuff.

Taesoo Kim (26:00.142)
Mm-hmm.

Taesoo Kim (26:15.47)
I think there's an interesting quote; I don't know who actually said it, but in the AI space, someone mentioned that a one-person, one-billion-dollar-market-cap company will appear because of LLMs, or because of AI in general. But if you look at CTFs, currently most teams have a minimum of 50 or 100 people competing against each other. Very soon we're going to see

one person, or maybe five people, competing with the help of those AI tools. Or humans just assisting the AI, in the sense of, hey, could you bring up the Raspberry Pi for me, or do the setup, so the human is just helping the LLM, or the AI in general, so that the AI can compete. So I think we're going to see some interesting things happening pretty soon in our community, for sure.

CRob (26:59.088)
Mm-hmm. Yeah.

CRob (27:11.804)
I agree. Well, again, Taesoo, thank you for your time. Congratulations to the team. And that is a wrap. Thank you very much.

Taesoo Kim (27:22.147)
Thank you so much.

What’s in the SOSS? Podcast #51 – S3E3 AIxCC Part 1 – From Skepticism to Success: The AI Cyber Challenge (AIxCC) with Andrew Carney


Summary

This episode of What’s in the SOSS features Andrew Carney from DARPA and ARPA-H, discussing the groundbreaking AI Cyber Challenge (AIxCC). The competition was designed to create autonomous systems capable of finding and patching vulnerabilities in open source software, a crucial effort given the pervasive nature of open source in the tech ecosystem. Carney shares insights into the two-year journey, highlighting the initial skepticism from experts that ultimately turned into belief, and reveals the surprising efficiency of the competing teams, who collectively found over 80% of inserted vulnerabilities and patched nearly 70%, with remarkably low compute costs. The discussion concludes with a look at the next steps: integrating these cyber reasoning systems into the open source community to support maintainers and supercharge automated patching in development workflows.

This episode is part 1 of a four-part series on AIxCC:

Conversation Highlights

00:00 – Introduction and Guest Welcome
00:59 – Guest Background: Andrew Carney’s Role at DARPA/ARPA-H
02:20 – Overview of the AI Cyber Challenge (AIxCC)
03:48 – Competition History and Structure
04:44 – The Value of Skepticism and Surprising Learnings
07:11 – Surprising Efficiency and Low Compute Costs
08:15 – Major Competition Highlights and Results
13:09 – What’s Next: Integrating Cyber Reasoning Systems into Open Source
16:55 – A Favorite Tale of “Robots Gone Bad”
18:37 – Call to Action and Closing Thoughts

Transcript

Intro music & intro clip (00:00)

CRob (00:23)
Welcome, welcome, welcome to What’s in the SOSS, the OpenSSF podcast where I talk to people that are in and around the amazing world of open source software, open source software security and AI security. I have a really amazing guest today, Andrew.

He was one of the leaders that helped oversee this amazing AI competition we’re going to talk to. So let me start off, Andrew, welcome to the show. Thanks for being here.

Andrew Carney (00:57)
Thank you so much for having me, CRob. Really appreciate it.

CRob (00:59)
Yeah, so maybe for our audience that might not be as familiar with you as I am, could you maybe tell us a little bit about yourself, kind of where you work and what types of problems are you trying to solve?

Andrew Carney (01:12)
Yeah, I’m a vulnerability researcher. That’s been the core of my career for the last 20 years. And part of that has had me at DARPA. And now I’m at DARPA and ARPA-H, where I sort of work on cybersecurity research problems focused on national defense and/or health care. So it’s sort of the space that I’ve been living in for the past few years.

CRob (01:28)
That’s an interesting collaboration between those two worlds.

Andrew Carney (01:43)
Yeah, you know, I think the vulnerability research and reverse engineering community is pretty tight, pretty small. And a lot of folks across lots of different industries and sectors have similar problems that we're able to help with. So yeah, it's exciting to see how folks in finance or the automotive industry or the energy sector all deal with similar-ish problems, but at different scales and with different flavors of concerns.

CRob (02:20)
That's awesome. And so as I mentioned, we were introduced through the AIxCC competition. Maybe for our audience that might not be as familiar, could you give us an overview of AIxCC, the competition, and why you felt this effort was so important that we've spent so many years working through it?

Andrew Carney (02:42)
Absolutely. I mean, AIxCC is a competition to create autonomous systems that can find and patch vulnerabilities in source code. A big part of this competition was focusing on open source software, because of how critical it is across our tech ecosystem. It really is sort of the font of all software.

And so DARPA and ARPA-H and other partners across the federal government, we saw this kind of need to support the open source community and also leverage kind of new technologies on the scene like LLMs. So how do we take these new technologies and apply them in a very principled way to help solve this massive problem? And working with the Linux Foundation and OpenSSF has been a huge piece of that as well. So I really appreciate everything you guys have done throughout the competition.

CRob (03:41)
Thank you.

CRob (03:48)
And maybe could you give us just a little history of when did the competition start and kind of how it was structured?

Andrew Carney (03:54)
Yeah. So the competition was announced at Black Hat in August of 2023. The competition was structured into two main sections. We had a qualifying event at DEF CON in 2024. And then we had our final event this past DEF CON, August 2025. And throughout that two-year period, we designed a competition that kept pushing the competitors sort of ahead of wherever the current models, the current kind of agentic technologies were, whatever that bar they were setting, we continued to push the competitors past that. So it’s been a really dynamic sort of competition because that technology has continued to evolve.

CRob (04:44)
I have to say, when I initially heard about the competition, having done cybersecurity a very long time, I was very skeptical about what the results would be, not to bury the lede, so to speak. But I was very surprised by the results that you all shared with the world this summer in Las Vegas. We'll get to that in a minute. But again, this competition ran over many years, and as it progressed, could you maybe share what you learned that surprised you, that you didn't expect when this all kicked off?

Andrew Carney (05:21)
Yeah, I think there have been a lot of surprises along the way. And I'll also say that skepticism, especially from informed experts, is a really good sign for a DARPA challenge. For a lot of projects at DARPA generally, if you're kind of waffling between "this is insanely hard and there's no way we'll be successful" and "there's an easy solution to this," if you're constantly in that space of uncertainty, thinking, no, I really think this is really, really hard, and I'm getting skepticism from people that know a lot about this space, then for us, that's fuel. That's okay; there's a question to answer here. And so that really was part of driving us. Even competitors that ended up making it to finals were themselves skeptical even as they were competing.

So I love that. I love that. Like, you know, we want to try to do really hard things and, you know, criticism helps us improve. Like that’s super beneficial.

CRob (06:33)
Yeah. I've had the opportunity to talk with many of the teams, and now we're in the post-competition phase where we're actually starting to figure out how to share the results with the upstream projects and how to build communities around these tools. You assembled a really amazing group of folks in these competing teams, some super top-notch minds.

And again, you made me a believer. I really do believe now that AI has a place and can legitimately offer some real value to the world in this space.

Andrew Carney (07:11)
Yeah, I think one of the biggest surprises for me was the efficiency. I think a lot of times, especially with DARPA programs, we expect that technical miracles will come with a pretty hefty price tag. And then you'll have to find a way to scale down, to economize, to make that technology more useful, more widely distributable.

With AIxCC, we found the teams pushing so hard on the core research questions, but at the same time, woven into that, was using their resources efficiently. And so even the competition results themselves were pleasantly surprising in terms of the compute costs for these systems to run. We're talking tens to hundreds of dollars per vulnerability discovered or patch emitted, which is really quite amazing.

CRob (08:15)
Yeah, so maybe could you just give me some highlights of kind of what the competition discovered, what the competitors achieved?

Andrew Carney (08:24)
Yeah. So when we're trying to tackle these really challenging research questions, we examine them from all angles and are extremely critical of even our own approach, as well as the competitors' approaches. Back in August of 2024, we had this amazing proof-of-life moment where the teams demonstrated, with only a few hundred dollars in total compute budget, that they were able to analyze large open source projects and find real issues.

One of the teams found a real issue in SQLite, which we disclosed at the time to the maintainers. And they found that, once again, with this very limited compute budget, across multiple millions of lines of code in these projects. So that was the "OK, there's a there there" moment: there's something here and we can keep pushing. That was really exciting for everyone. And then over the following year, up to August 2025, we had a series of non-scoring events where the teams would be given challenges that looked very similar to what we'd give them for finals, with an increasing level of scale and difficulty.

So you can think of these as like extreme integration events where we’re still giving the teams hundreds of thousands or millions of lines of code. We’re giving them, you know, eight to 12 hours per kind of task. And we’re seeing what they can do. This was important to ensure that the final competition went off without a hitch. And also because the models they were leveraging continue to evolve and change.

So it was really exciting. In that process, the teams found and disclosed hundreds of vulnerabilities and produced hundreds of potential patches that they would offer up to the maintainers of the projects they were doing their own internal development on. It was really exciting to see that the SQLite bug wasn't a fluke, and that the teams could consistently perform. As we pushed them to move further and faster and deal with more complex code, they were able to adapt and find a way forward.

CRob (11:02)
That's awesome. And I know it was a long journey that you and the team and all the support folks went through, but is there any particular moment that makes you smile when you reflect on the course of the competition?

Andrew Carney (11:20)
Oh, man, so many. I think there's an equal number of those smiling moments and premature gray hairs that the team and I have created. But one of the big moments: there were a number of just outstanding experts in the field, on social media

and in talks, who were very skeptical in the way they talked about AI-powered program analysis. Near the end, leading up to semi-finals, we had this lovely moment where the Google Project Zero team and the Google DeepMind team penned a blog post saying that they were inspired by the SQLite bug, by one of the teams' discoveries. And that was huge, I think, both for that team and for the competition as a whole. And then after that, seeing people's opinions change, seeing people that were, like I said, top-tier experts in the field change their perspective pretty drastically, was helpful signal for us that we were being successful. Converting a critic, I think, is one of the best kinds of victories you can have. Because now they can be a collaborator, right? Now we can still spar over different perspectives or ideas, but we're working together. That's very exciting.

CRob (13:09)
That’s awesome. So what’s next? The hard work of the competition is over and now we’re in kind of the after action phase where we’re trying to integrate all this great work and kind of get these projects out to the world to use. So from your perspective or from DARPA or the competition, what’s next for you?

Andrew Carney (13:29)
Yeah, so one of the biggest challenges with DARPA programs is that when you're successful, sometimes you have that technological miracle, you have that accomplishment, and maybe the world's not entirely ready for it yet. Or maybe there's additional development that needs to happen to get it into the real world. With AIxCC, we made the competition as realistic as possible. The automated systems, these cyber reasoning systems, were producing bug reports, patch diffs, artifacts that we would consume and review as human developers. So we modeled all the tasks very closely on the real things we would want these systems to do. And they demonstrated incredible performance. Collectively, the teams were able to find over 80% of the vulnerabilities that we'd synthetically inserted, and they patched nearly 70% of those vulnerabilities. And that patching piece is so critical. What we didn't want to do was create systems that made open source maintainers' lives more problematic.

CRob (14:54)
Thank you.

Andrew Carney (14:56)
We wanted to demonstrate: this is a reachable bug, and here's a candidate patch. And in the months after the competition, we've incentivized the teams, beyond just the original prize money, to go out into the open source community and support open source maintainers with their tools. And we've had folks come back and, literally in their reports, document that the patch they suggested to a maintainer was nearly identical to what the maintainer actually committed. And those reports are coming in daily. So we have this constant feed of engagement, and the tools are obviously still being improved and developed. But it's really exciting to see. So when I think about what's next: we're already in the what's-next, getting the technology out there, using government funding to support open source maintainers wherever we can, especially if their code is part of widely used applications or critical infrastructure. So that's where we find ourselves now. And then we're thinking a lot about how we supercharge that effort.

The federal government supports a lot of actively used open source projects, right? And we've been working with all these partner agencies across the federal government, making sure that we're supporting the existing programs when we find them. And then where we see a gap, figuring out what it would take to fill that gap for a community that could use more support.

CRob (16:55)
So on a slightly different note, we're both technologists and we love the field, but as I was going through this journey on the sidelines with you all, I was reflecting: do you have a favorite tale of robots gone bad? Like Terminator's Skynet, or HAL 9000, or the Butlerian Jihad?

Andrew Carney (17:22)
You know, I don't know that this is my favorite, but it is one of the most recent ones that I've read. There's a series called Dungeon Crawler Carl. Yeah. And it's been really entertaining reading. I just think the tension between the primal AIs and the corporations that rely on said independent entities, but are also constantly trying to rein them in, is... I don't know, it's been really interesting to see that narrative evolve.

CRob (18:08)
I've always enjoyed science fiction and fantasy's ability to hold a mirror up to society and put these questions in a safe space, where you can think about 1984 and Big Brother or these other things, but it's just on paper or on your iPad or whatever. So it's a nice experiment over there. And we don't want that to be happening here.

Andrew Carney (18:29)
Yes, yes. Yeah, the fiction as thought experimentation, right?

CRob (18:37)
Right, exactly. So as we wind down, do you have a particular call to action or anything you want to highlight to the audience that they should maybe investigate a little further or participate in?

Andrew Carney (18:50)
Yeah, I think a big one is: we would love for open source maintainers to reach out to us directly at aixcc@darpa.mil. That's the email address that our team uses. And we've been looking for more maintainers to connect with so that, if we can provide resources to them, one, they're right-sized for the challenges those maintainers are having (or maintainer, right? sometimes it's just one person), and two, we're engaging with them in the way they would prefer to be engaged with. We want to be helpful help, not unhelpful help. So that's a big one. And then more generally, I would love to see more patching added into the vulnerability research lifecycle. I think there are so many opportunities for commercial and open source tools that have that discovery capability, and that's really their big selling point. Now, with AIxCC and with the technology that the competitors open sourced themselves, since all of their systems were open sourced after the competition, there's real potential that I don't think we've seen realized the way it really could be. So I would love to see more of that kind of automated patching added to tools and development workflows.

CRob (20:29)
I'll say my personal favorite experience out of all this: during the competition, there was an ethical wall up between your administrators, us, and the different competition teams. But the minute the competition was over, we observed the competitors looking at each other's work, asking questions of each other, and collaborating. I'm so super excited to see what comes next, now that all these smart people have proven themselves, found kindred spirits, and are going to start working together on even more amazing things.

Andrew Carney (21:07)
Absolutely. I think we’re expecting a state of knowledge paper with all the teams as authors. That’s something they’ve organized independently, to your point. And yeah, I cannot wait to see what they come out with collaboratively.

CRob (21:23)
Yeah. And for anyone interested in learning more, or in potentially interacting directly with some of these competition experts, whether they're in academia or industry: as part of our AI/ML working group, the OpenSSF has created a cyber reasoning special interest group specifically for the competition and all the competitors, to have public discussions and collaboration around these things. And we would invite everybody to show up, listen, participate as they feel comfortable, and learn.

Well, Andrew, and the whole DARPA and ARPA-H team, and everyone who was involved in the competition: thank you. Thank you to our competitors. And we actually are going to have a series of podcasts talking to the individual competitors, learning a little bit about the unique flavors and challenges each team had. But thank you for sponsoring this and really delivering something I think is going to have a ton of utility and value to the ecosystem.

Andrew Carney (21:47)
Thank you for working with us on this journey and we definitely look forward to more collaboration in the future.

CRob (21:54)
Well, and with that, we’ll wrap it up. I just want to tell everybody happy open sourcing. We’ll talk to you soon.

What’s in the SOSS? Podcast #50 – S3E2 Demystifying the CFP Process with KubeCon North America Keynote Speakers


Summary

Ever wondered what it takes to get your talk accepted at a major open source tech conference – or even land a keynote slot? Join What’s in the SOSS’ new co-host Sally Cooper as she sits down with Stacey Potter and Adolfo “Puerco” García Veytia, fresh off their viral KubeCon keynote “Supply Chain Reaction.” In this episode, they pull back the curtain on the CFP review process, share what makes a strong proposal stand out, and offer honest advice about overcoming imposter syndrome. Whether you’re a first-time speaker or a seasoned presenter, you’ll learn practical tips for crafting compelling abstracts, avoiding common pitfalls, and why your unique voice matters more than you think.

Conversation Highlights

00:00 – Introduction and Guest Welcome
01:40 – Meet the Keynote Speakers
05:27 – Why CFPs Matter for Open Source Communities
08:29 – Inside the Review Process: What Reviewers Look For
14:29 – Crafting a Strong Abstract: Dos and Don’ts
21:05 – From Regular Talk to Keynote: What Changed
25:24 – Conquering Imposter Syndrome
29:11 – Rapid Fire CFP Tips
30:45 – Upcoming Speaking Opportunities
33:08 – Closing Thoughts

Transcript

Music & Soundbyte 00:00
Puerco: Stop trying to blend in or to mimic what you think the industry or your community wants from you. Represent – always show up as who you are, where you came from – that is super valuable, and that’s why people will always want to have you as part of their program.

Sally Cooper (00:20)
Hello, hello, and welcome back to What’s in the SOSS, an OpenSSF podcast. I’m Sally and I’ll be your host today. And we have a very, very special episode with two amazing guests and they are returning guests, which is my favorite, Stacey and Puerco. Welcome back by popular demand. Thank you for joining us for a second time on the podcast.

And since we last talked, you both delivered one of the most talked-about keynotes at KubeCon. Wow. So in today's episode, we're going to talk to you about CFPs. And this is really an episode for anyone who has ever hesitated to submit a CFP, wondered how to get their talk through the CFP review process, asked themselves, am I ready to speak? Or dreamed about what it might take to keynote a major event.

We're gonna focus on practical advice: what works, what doesn't, and how to show up confidently. And I'm just so excited to talk to you both. So for anyone who's listening for the first time, Stacey, Puerco, can you tell us a little bit about yourselves and about the keynote? Stacey?

Stacey (01:48)
Hey everyone, I'm Stacey Potter. I am the Community Manager here at OpenSSF. And my job, in a nutshell, is basically to make security less scary and more accessible for everyone in open source, right? I've spent the last six or seven years in open source community building, mainly across CNCF projects: Flux, Flagger, OpenFeature, and Keptn, to name a few.

And now I'm focusing on open source security here at OpenSSF, basically helping people connect, learn, and just do cool things together. And yeah, I delivered a keynote at KubeCon North America that, honestly, is still surreal to talk about. It was called Supply Chain Reaction, a cautionary tale in K8s security, and it was theatrical. It was… slightly ridiculous. It was basically the story of a DevOps engineer, and I played the DevOps engineer, even though I'm not a DevOps engineer, frantically troubleshooting a compromised deployment. And Puerco literally kaboomed onto the stage as a Luchador superhero to save the day. We had him in costume and we had drama.

And then we taught people a little bit about supply chain security through B-movie antics and theatrics. And it turns out people really responded to making security fun and approachable instead of terrifying.

Adolfo GarcĂ­a Veytia (@puerco) (03:23)
Yeah. Well, hi, and thanks everybody for listening. My name is Adolfo GarcĂ­a-Veytia. I am a software engineer working out of Mexico City. I’ve been working on open source security for, I don’t know, the past eight years or so, mainly on Kubernetes, and I maintain a couple of the technical initiatives here in the OpenSSF.

I am now part of the Governing Board as of the start of this year, which is a great honor, to have been voted into that position. But my real passion is really helping build tools that secure open source while being unobtrusive to developers, and also raising awareness in the open source community about why security is important.

Because sometimes you will see that executives, CISOs especially, are compelled by legal frameworks or other requirements to make their products or projects secure. And in open source, we're always so resource-constrained that security tends not to be the first thing on people's minds. But the good news is that here in the OpenSSF and other groups, we're working to make that as easy and transparent for the regular person as much as possible.

Sally Cooper (04:57)
Wow, thank you both so much. Okay, so getting back to calls for proposals, CFPs. From my perspective, they can seem really intimidating, but they're also one of the most important ways for new voices to enter a community. So I just have a couple of questions. Basically, why are they important?

Why would a CFP be important to an open source community, and not just a conference? Stacey, maybe you could kick that off.

Stacey (05:32)
Sure, I think this is a really important question. I think CFPs aren’t just about filling conference slots. They’re really about who gets to shape the narrative in our communities and within these conferences. So when we hear the same voices over and over and they show up repeatedly, right, you get the same perspectives, the same solutions, the same energy, which, you know, is also great. You know, we love our regular speakers, they’re brilliant, but

communities always need new and fresh perspectives, right? We need the people who just solved a weird edge case that nobody’s talking about. We need like a maintainer from a smaller project who has insights that maybe big projects haven’t considered, or, you know, we need people from different backgrounds, different use cases and different parts of the world as well. CFPs are honestly one of the most democratic ways we have to surface new leaders, right?

Someone doesn't need to be well-connected or have a huge social media following. They just need a good idea and the courage to submit a talk about it, right? And that's really powerful. And I think when someone gives their first talk and does well, they often become a mentor, a maintainer, a leader in that community, right? CFPs are literally how we build the next generation of contributors and speakers. So every talk is a potential origin story for someone's open source journey.

Sally Cooper (07:08)
Puerco, what are your thoughts on that?

Sally Cooper (07:11)
And the question, again, is: calls for proposals can feel really intimidating, but they're also one of the most important ways for new voices to enter a community.

Adolfo GarcĂ­a Veytia (@puerco) (07:20)
Yeah. So, I would say that intimidating is a very big word, especially for new people, maybe. Sometimes it's difficult to ramp up the courage, and I don't want to mislead people into thinking it's going to be easy. The first ones that you do, you will get up there, sweat, stutter, and basically your emotions will control your delivery and your body, so be prepared for that.

But it’s going to be fine. The next times you’ll do it, it will get better. And most importantly, people will not be judging you. In fact, it’s sometimes even more refreshing to see new voices getting up on stage.

Sally Cooper (08:13)
That’s really helpful. Thank you. I love it. The authenticity that you bring really helps and helps demystify the CFP process. But now let’s pull back the curtain on the review process. How does that work? And Stacey, have you been on a review panel before? Maybe you could talk about like, when you’re reviewing a CFP, what are you actually looking for?

Stacey (08:39)
Yeah, I’ve been on program committees. I’ve been a program chair or co-chair on different programs and things like that. And yeah, it’s a totally different experience, but I think it gives you a lot of insight on how to prepare a talk once you’ve reviewed 75, 80 per session, right? Sometimes these calls are really big. I know KubeCon has really huge calls, right? But I would say, you know, what we’re actually looking for:

So first, is this topic relevant and useful to our audience? Like, will people learn something they can actually apply? And second, like, can this person deliver on what they’re promising? And honestly, we’re not looking for perfection, right? We’re looking for clarity and genuine expertise or experience with that topic.

I would say be clear, be specific with your value proposition in the first two sentences of a CFP. When the program committee can read your abstract and immediately think, “oh, that’s exactly what our attendees need,” right? That’s like gold, right? Also, when somebody shows that they understand the audience that they’re submitting to, right? Are you speaking to beginners or experienced practitioners, and being explicit about that?

Adolfo GarcĂ­a Veytia (@puerco) (10:16)
Yeah, I think it’s important for applicants to understand who is going to be reviewing your papers. There are many kinds of conferences. So ours, even though, of course, there’s a commercial side behind it because you have to sustain the event, like everybody involved in… Especially in the Linux Foundation conferences, I feel…

we put a lot of effort into making the conferences really community events. And I would like to distinguish, like really make a clear cut, between academic conferences, purely trade show conferences, and these community events. And especially in academia, there’s this hierarchical view of peers

assessing what you’re doing. In pure trade show conferences, it’s mostly pay to play, I would say. And when you get down to community events, especially if you’ve ever applied to present or submit papers to the other kinds of conferences, you will be expecting completely different things. It’s easy to forget that the people looking at your work, at your proposals, at your ideas are very, very close and very, very similar to you.

So don’t expect to be talking to some higher being that understands things much better than you. First of all, it’s not one person. It’s all of us reading your CFPs. So keeping that in mind, what you need to consider when submitting is: what makes my proposal unique? I think that’s a key question. And we can talk more about that in the later topics, but I feel, to me, when I understood that it was sometimes even my friends reviewing my proposal, it made it so much easier.

Stacey (12:20)
Yeah, I think that’s a really, really good point Puerco makes, is knowing that whatever conference you’re submitting for typically, and I say this like if it’s a Linux Foundation event, right? Because those are the ones that I’ve been most involved with. The program committee members are from within the community. They submit an application to say, hey, yes, I would love to review talks. This is like me volunteering my time to help out this conference. Maybe they’re not able to make the conference.

Maybe they are, maybe they’re also submitting a talk. But usually the panel of reviewers is like five, six, up to 10 people, I would say, depending on the size of the conference. So you’re getting a wide range of perspectives reading through your submissions. And I think that’s really important. When I’m trying to select the program committee, I think it’s really important to diversify as well, right? So have voices from all over – different backgrounds, different expertise, different genders, just as much variance as you can have within the program committee panel, I think also makes a difference with the CFP reviews themselves, right?

But that’s kind of how it’s set up, is you pick these five to 10 people to review all of these CFPs, they usually have, like, a week or something like that to review everything, and then they rate it on a scale. And then that’s kind of how the program chairs arrange the schedule, based off of all that feedback. You can make notes in each of the talks that you’re reviewing, you know, put those in there, and that’s basically how they’re all chosen. They’re ranked and they have notes, right, within that system.

Sally Cooper (14:08)
Wow, this is really educational. Thank you so much. For folks that are staring at a CFP right now, because there’s some coming up, and I think we’re going to get into that. Let’s get practical. What makes a strong abstract? How technical is too technical? How much storytelling belongs in a CFP? And what are some red flags that you might see in submissions?

Adolfo GarcĂ­a Veytia (@puerco) (14:34)
So, the first big no-no in community events is don’t pitch your product. Even if you’re trying to disguise it as a community event, the reviewers will… You have to keep in mind that reviewers have a lot of work in front of them. I am sure there are all sorts of reviewers, but usually as a reviewer, you see that folks put a lot of effort into crafting their proposals.

If you pitch your product, which is against the rules in most community conferences, the reviewer will instantly mark your proposal down. We can sniff it right away. You have to understand that for us, the more invalid proposals we can get out of the way as soon as possible, the better. If it is a product pitch, just don’t.

And then the next one is you have to be clear and concise in the first paragraph, or even the first sentence. So when a reviewer reads your proposal, make sure that the first paragraph gives them an idea of: so this is going to be, I’ll talk about this and it’s gonna, like, inspect the problem from this side, or whatever, but give me that idea. And then you can develop the idea a little bit more in the next couple of paragraphs, but make sure that the idea of the talk is delivered right away. I have more, but I don’t know, Stacey, if you want to go.

Stacey (16:20)
Yeah, no, I think that’s really good advice. I would say, whatever conference you’re submitting to, being on so many different program committees, I’ve seen the same talk submitted to every conference that has an open CFP, regardless of the talk being specific to that conference or not. So I think key number one is make sure that what you’re submitting fits within the conference itself.

I think not doing a product pitch is key, especially within an open source community’s open CFP, right? Those are only for open source, for non-product pitches. I think Puerco makes a really good point with that. But, you know, like, is this conference that I’m submitting this talk to higher level? Is it super technical? And adjusting for those differences, right? A lot of times you’ll find in the CFPs that there is room to submit a beginner level, an intermediate level, an advanced level, but with the conference description and the categories and things like this, you want to be very specific when you’re writing your CFP. Sometimes you can reuse the same CFP you’ve submitted to another conference, but you want to tailor it to each specific conference that you are submitting for.

Don’t just submit the same talk to five different conferences because they are unique, they are specific and you want to make sure that if you want your talk accepted, these are the little changes that make a big difference on really getting down to the brass tacks of what that conference is about and what they’re really looking for. So I always have to, when I’m writing something and when I’m looking at a conference to write it for, I have the CFP page up, I have the about page up for that conference and I’m making sure that it fits within what they’re asking me for, really.

Adolfo GarcĂ­a Veytia (@puerco) (18:20)
Yeah. And I just remembered another one. And this is mostly, this happens most in the bigger ones, like the KubeCons and so on. Don’t try to slop your way into the conference. I mean, it’s like, I’d rather see a proposal with bad English or typos than something that was generated with AI. And I’ll tell you why.

It’s not because of, like, pure hatred of AI or whatever, no. The problem with running your proposal through an LLM is that most of the time, so you have to keep in mind, especially in the big conferences, you will be submitting a proposal about a subject that probably other people will also be trying to talk about. And what will get you picked is your capability of expressing, like, getting into the problem in a unique way, your personality, all of those things.

When you run the proposal through the LLM, it just erases them. All sorts of personal touches, like the uniqueness that you can give it, will just be removed. And then it’ll be just like looking at a hollow doll of a person, and you will not stand out.

Stacey (19:38)
Yeah, I agree completely. And is it a terrible thing to have AI help you with some of the editing? No, not at all. But write your proposal first. Write it from your heart. Write it from your point of view. Write it from your angle. But do not create it in AI, in the chatbots. Create it from yourself first, and then ask for editing help. That’s fine.

I think a lot of us do that and a lot of people out there are using it for that extra pair of eyes. Do I sound crazy here? Does this make any sense? I don’t know how to word this one particular sentence. That’s fine. But yeah, don’t start that way.

Adolfo GarcĂ­a Veytia (@puerco) (20:19)
Exactly. I mean, and just to make it super clear, it’s not that, especially for people whose first language is not English, like me. I of course use the help of some of those things to at least not introduce many typos or whatnot, but just as Stacey said, don’t create it there.

Sally Cooper (20:41)
This is great advice. Thank you both so much. Okay. How about getting accepted for a keynote? Like, your KubeCon keynote really stood out. It was technical, it was really funny, memorable, engaging. How does someone prepare a keynote that differs from a regular talk?

Stacey (21:03)
Well, I want to start off by saying that we didn’t know, we weren’t submitting our talk for a keynote, right? We didn’t even know that that was in the realm of possibility, that that could happen for KubeCon North America. We just submitted a talk that we thought would be fun, would be good, would give, like, you know, some real-world kind of vibes, and we wanted to have fun and, you know, create a fun yet educational talk.

We had literally no idea that we could possibly have that talk accepted as a keynote. I didn’t know that. And this was my first real big talk. So it was a complete shock to me. I don’t know if you have other thoughts about that, but…

Adolfo GarcĂ­a Veytia (@puerco) (21:50)
Yeah, it sort of messes up your plans, because you had the talk planned for, say, 35 minutes and then you have 15, and you already had, like, 10 times more jokes than could fit into the 35 minutes. So, well… and then there’s also, of course, all of those things that we talked about, like getting nervous. Well, they not only come back, but they multiply in a huge way. I mean, you’ve been there. I don’t know. You get over it.

Stacey (22:28)
I would also say that once we found out that our talk was accepted, first we were like, yay, our talk got accepted. And then I think it was like a few days later, they were like, no, no, your talk is now a keynote. So we freaked out, right? We had our little moment of panic. But then we just worked on it. And we worked on it, and we worked on it, and we worked on it, right? So not waiting till the last minute, I would say, to prep your talk.

But we… I think my main goal with this talk, and I have to give so much credit to Puerco because he’s such a good storyteller and he does it in such a humorous but really technical and sound way. And we worked on this script. We wrote out an entire script because we only had 15 minutes. We went from a 25-minute talk to a 15-minute talk.

And so… pacing was really important, storytelling was really important, but also being funny was something that I really wanted us to have, which Puerco was really good at too. And trying to squash all of these things down into this 15 minutes was really tough. But I think that’s important to remember about keynotes versus talks: keynotes are more like, what is the experience of this talk about? Versus, like, let’s get down to really technical details, right? You can do a technical talk that’s 25, 35, 45 minutes, but in a keynote, people aren’t going to remember anything if you’re getting too deep in the weeds, right? So that was my focus. And I don’t know, Puerco, if you have anything else to add to that.

Adolfo GarcĂ­a Veytia (@puerco) (24:10)
Yeah, the other thing is that the audience is so much bigger that your responsibility just grows, especially to deliver, right? So as Stacey said, we actually wrote the script, rehearsed online and in person before the conference. And the experience at the conference is very different too, because you have to show up early, you have to do a rehearsal in the days before your actual talk. And that said, it’s not like it went perfect.
Like, we still fumbled here and there and messed up some of the details and the pacing and whatnot. It’s, I don’t know, at least in our case, it was about having fun and trying to get some of that fun into the attendees.

Sally Cooper (25:01)
Yeah, you really did. It was so fun. I think that’s what stood out.

Okay, one of the biggest barriers to submitting a CFP isn’t skill, it’s confidence. So what would you say to someone who feels like, I’m not expert enough, I don’t know if I have permission to do this? You know, how do they deal? How do you personally deal with imposter syndrome? And why is it important to make sure that those new and diverse voices do submit a CFP?

Adolfo GarcĂ­a Veytia (@puerco) (25:27)
Oh, I’m an expert. So the first thing to remember, kids, is that imposter syndrome will never go away. In fact, you don’t want it to ever go away. Because imposter syndrome tells you something very, very important. And that is: you are being critical of yourself, of your work, of your ideas. And if you ever stop doing that,

it means one, you don’t really understand the problem, or the vastness of the problem, that you’re trying to speak about in your talk. And the other is you will stop looking for new and innovative ideas. So no matter where you get to, that imposter syndrome will always be with you.

Stacey (26:20)
I agree. I don’t think it ever goes away. I feel like, you know, I was an imposter at the keynote. Absolutely was, right? Like, I didn’t know what the heck I was doing. I didn’t know what the heck I was saying half the time. I mean, I tried to memorize my lines and do the right thing and come off as this expert. I never, ever feel like an expert about anything, right? Unless I’m talking, I guess, about my cats or my kid or something.

Adolfo GarcĂ­a Veytia (@puerco) (26:47)
Yeah, exactly.

Stacey (26:49)
But yeah, I think that’s it, yeah. You’re pushing yourself to grow, and that’s a good thing, right? So if you feel like an imposter, you know, that’s okay. And we all feel like that.

Adolfo GarcĂ­a Veytia (@puerco) (27:04)
Yeah. And the other, yeah, the other very important thing is think about what you are proposing to talk about in your talk. It’s supposed to be, like, new cutting edge stuff, something interesting, something unique. So it’s okay to feel like that, because it’s a problem that you’re still researching, that you’re trying to understand. Especially, think about it this way:
If you propose any subject for your talk, anybody that goes there is more or less assuming that they want to know and learn more about it. If you feel confident enough to speak about it, people will respond with willingness to attend your talk. That means you are already a little bit of a level above, because you’ve done that research, you’ve done that in-depth dive into the subject. So it’s fine.

It’s fine to feel it. I realized that it’s a natural thing.

Stacey (28:05)
And most of the people in the audience are there to support you, to cheer you on, and are not gonna harp on you or say, oh gosh, you messed up this thing or that thing. They’re really there to give you kudos and really support you and be willing to hear and listen to what you have to say.

Sally Cooper (28:25)
Love that. Okay, let’s close the advice portion with a quick round of CFP tips, rapid fire style. I’m going to go back and forth so each person can answer. Stacey, we’ll start with you. One thing every CFP should do.

Stacey (28:43)
I mean, get to the point as quickly as you possibly can. That would be my thing, right?

Sally Cooper (29:48)
Love it. Puerco, one thing people should stop doing in CFPs.

Adolfo GarcĂ­a Veytia (@puerco) (28:55)
Stop trying to blend in or to mimic what you think the industry or your community wants from you. Represent. Always show off who you are, where you came from. That is super valuable, and that’s why people will always want to have you as part of a program.

Sally Cooper (29:13)
Stacey, one piece of advice you wish you’d received earlier.

Stacey (29:18)
Gosh, I would say rejection is normal and not personal. I wish someone had told me that earlier, but that is one big experience. Speakers get rejected all the time, right? It’s not about your worth. It’s about program balance, timing, and fit. So keep submitting.

Sally Cooper (29:39)
Okay, Puerco and Stacey, you both got famous after this. Puerco: selfie or autograph?

Adolfo GarcĂ­a Veytia (@puerco) (29:44)
Selfie with a crazy face, at least get your tongue out or something.

Sally Cooper (29:50)
Stacey. KubeCon or KoobCon?

Stacey (29:54)
Oh gosh, I feel like this is like JIF or GIF. And I’m in the GIF camp, by the way. I say KubeCon, even though I know it’s “Koo”-bernetes, I still say KubeCon, so.

Adolfo GarcĂ­a Veytia (@puerco) (30:07)
KubeCon, please.

Sally Cooper (30:09)
Okay, before we wrap up, Stacey, as the OpenSSF Community Manager, can you share some upcoming CFPs and speaking opportunities people should keep an eye on?

Stacey (30:19)
Yeah, so Open Source Summit North America is a pretty large event. I think it’s taking place in Minneapolis in May this year. There’s multiple tracks and there’s lots of opportunities for different types of talks. The CFP is currently open right now, but it does close February 9th. So go and check out the Linux Foundation Open Source Summit North America for that one.

We also have OpenSSF Community Days, which are co-located events at Open Source Summit North America, typically. And these are events that we hold kind of around the world, but honestly, they’re perfect for first-time speakers as well. They’re smaller, they’re more intimate, and the community is super supportive. Our CFP deadline for Community Day North America is February 15th. So go ahead and search for that online. You can find them, and we’ll put the links in the description of this podcast so you can find that.

And then be on the lookout for key conferences later in the year as well. KubeCon North America will be coming up later. Open Source Summit Europe is coming up later in the year. So be on the lookout for those. Also within the security space, I know there are a lot of BSides conferences and KCDs, which are Kubernetes Community Days, and DevOps Days.

If you’re in our OpenSSF Slack, we have a #cfp-announce channel where we try to promote and put out as many CFPs as we can, to let people know that if you’re in our community and you want to submit talks regarding some of our projects or working groups, or just OpenSSF in general, that #cfp-announce channel is really a great place to keep checking.

Sally Cooper (32:13)
Amazing. Thank you both so much, not just for the insights, but for really making the CFP process feel more approachable and human. If you’re listening to this and you’ve been on the fence about submitting a CFP, let this be your sign. We really need your voice and thank you both so much.

Stacey (33:32)
Thank you.

Adolfo GarcĂ­a Veytia (@puerco) (33:33)
Thank you.

What’s in the SOSS? Podcast #49 – S3E1 Why Marketing Matters in Open Source: Introducing Co-Host Sally Cooper

By Podcast

Summary

In this special episode, the What’s in the SOSS podcast welcomes Sally Cooper as an official co-host. Sally, who leads OpenSSF’s marketing efforts, shares her journey from hands-on technical roles in training and documentation to becoming a bridge between complex technology and everyday understanding. The conversation explores why marketing matters in open source, how personal branding connects to community building, and the importance of personas in serving diverse stakeholders. Sally also reveals OpenSSF’s 2026 marketing themes and explains how newcomers can get involved in the community, whether through Slack, working groups, or contributing content.

Conversation Highlights

00:09 – Welcoming Sally Cooper as Co-Host
01:28 – From Technical Training to Marketing Leadership
03:54 – Bridging Technology and Understanding
06:19 – Why Marketing Makes Open Source Uncomfortable
08:11 – Personal Branding and Career Growth
10:42 – Understanding Community Personas
12:33 – Getting Started with OpenSSF
14:44 – OpenSSF’s 2026 Marketing Themes
16:18 – Rapid Fire Round
17:09 – How to Get Involved

Transcript

CRob (00:09.502)
Welcome, welcome, welcome to What’s in the SOSS, the OpenSSF podcast where we talk to people, projects, and we talk about the ideas that are shaping our upstream open source ecosystem. And today we have a real treat. It’s a very special episode where we’re welcoming a new friend. And this is somebody that you probably know if you’ve been involved in our community for any period of time.

This young lady gets to help us with our messaging and how we present ourselves to the outside world, how we get our messaging out to all those interested open source community contributors around the globe. And today she’s officially joining Yesenia and I as a co-host of What’s in the SOSS. So I am proud and pleased to welcome Sally Cooper.

Yesenia (01:02.916)
Woo!

CRob (01:07.488)
Sally has been helping lead our marketing efforts for the last several years. So before we jump into kind of what you do within that marketing function, Sally, we would like to hear a little bit about your open source origin story and how you got into technology.

Sally Cooper (01:28.549)
Wow. Well, thank you so much, Yesenia and CRob. I’m super excited to be here. And yeah, I started my career a very long time ago. I actually started in tech with hands-on technical roles, working in training, documentation and support, and really helping people understand systems and tools and workflows.

Yesenia (01:52.21)
Yeah, I want to welcome Sally. It’s great to have another voice on this podcast, putting out there the hard work that our open source ecosystem is doing and getting more of these other voices heard. But you were talking about how you started in tech early, and for me, that’s new. I would love for you to dive into these technical roles. I think understanding your background in the technical side and how you’ve gotten into marketing and working with OpenSSF is just going to relate to folks.

You don’t always have to be technical or work in a technical field to support open source security. So I’d love to understand your background and how you’ve connected your technical background into the transitions you’ve had in your career.

Sally Cooper (02:35.611)
Oh, that’s such a good question. Yeah. I think you really nailed it there, because you don’t always need to be technical, and sometimes, you can be technical and end up in something like marketing, like me. So, when I say I started in tech, I mean this was like really entry level, hands on, learn it from the ground up. I worked in finance in my first job out of college. I was working at a data processing center and it was really operational:

accuracy, lots of responsibility, really not a lot of glamour. So the turning point was that we went through a major systems upgrade and we moved from a legacy system to entirely new software. So suddenly, people who had been doing their jobs a certain way for years were expected to work differently, often overnight. And I became one of the people who could help bridge the gap,

because I understood the technology and how to explain complex systems in an easy to understand manner. And I ended up being in training. So I became a software trainer and trained the whole organization on how to use the software to do their jobs.

Yesenia (03:52.776)
That’s very useful.

Sally Cooper (03:54.649)
Yeah, thanks. It’s funny because we all have to get started somewhere, right? And that’s how it worked out for me. After that, I worked at a startup in B2B e-commerce and continued on with educational software training, writing technical guides, books, some of the first e-learning programs. So I’m definitely dating myself here. But looking back, yeah, looking back, the title marketer wasn’t something that I thought of.

CRob (04:17.772)
Yeah

Sally Cooper (04:24.131)
But I was doing a lot of work in marketing without knowing it, just helping people understand complex topics. So yeah, that’s how I got here. Thanks for asking.

Yesenia (04:37.906)
Yeah, we all date ourselves very easily. I mean, we’re in tech. It already ages us the minute we walk in. But I think that’s a great understanding and background, right? I think that’s one of the most important skills when it comes to the technical side: can you bring this high-level technical aspect into something that everyday folks can understand, and then drive them in? I’m curious, from there, now you’re doing marketing. How did you get involved with that?

Sally Cooper (05:06.713)
Yeah, great question. So around the time when my career sort of took off with the technical education, there was something happening in the background. So early 2000s, this was the dawn of YouTube, smartphones were starting to emerge, companies were beginning to realize that technology wasn’t just about features, it was about an experience. And I find this a very full circle moment, because before smartphones, I had an iPod.

It was a pink metallic iPod and I got really obsessed with podcasts. So podcasts were new. It wasn’t just about the music for me. It was really listening to, you know, a conversation that was educational. And I could do that while raising a family, doing, like going for a walk, getting exercise, making dinner. You could have headphones on and just bring yourself into a whole other world.

So yeah, that’s when I really started. I also loved the campaign, like looking at the billboards and seeing the silhouettes with, you know, the iPod and the headphones, all of that. So it’s kind of full circle.

CRob (06:13.484)
Yeah.

Yesenia (06:19.934)
And it’s really lovely, especially when you see those nice billboards and think, how much thought has someone put into that? And when you think of open source, like, it’s people’s hobby projects, there’s just no profit. And I feel like marketing, in a sense, I’ve learned it through my own personal knowledge and professional growth, as you could say. I realized I was doing marketing without realizing I was doing marketing.

But marketing can just make some people uncomfortable, especially in the open source space. Like, what do you think about that?

Sally Cooper (06:53.463)
Yeah, that’s really valid. Open source is really personal. A lot of projects start off as a hobby, a passion, a side project built on nights and weekends. The word marketing can feel a little uncomfortable, like it doesn’t really belong there. I’ve definitely heard that feedback from developers. In open source, we’re not selling software. So it’s a completely new concept for me. I did have some marketing jobs after the educational jobs, and

CRob (07:04.014)
Right.

Sally Cooper (07:23.479)
So I’m learning still, I’m learning from all of you and from our community that we’re sharing ideas, tools, practices, and that the currency is really people’s time, attention, and trust. So without marketing, great projects stay invisible, maintainers get burnt out, and users can struggle in silence, and the people who can contribute never even find the door.

CRob (07:50.142)
And this is extremely interesting to me, because I’ve observed Yesenia and kind of the trajectory of her career, and so much of your online persona is that you do a lot of work branding yourself and providing advocacy and outlets to help empower other people.

Yesenia (07:58.589)
Yeah.

CRob (08:11.522)
It seems like a really big part of what you do outside of your day job and outside of your foundation work. So from your perspective, Yesi, how do you see these worlds connecting?

Yesenia (08:17.359)
Absolutely.

Yesenia (08:23.39)
Well, recently, and I think it’s an interesting area, I heard this quote from a co-worker. I would love to credit her, but I don’t have her here. But it was like, your branding should be getting you the next job, right? Your next step, your next opportunity. And as I started in my career, I was really thinking about, like,

I kept getting seen and told that I wasn’t technical, but if you looked at my background, it’s in my education. It’s like, how am I not technical, right? So I really started thinking about where branding is, where people start meeting you. So your resume is a form of branding, your LinkedIn page is a form of branding. And I really saw it as sharing a story about yourself, your impact, your value, really letting them know what they’re getting into before they even reach out to you. So.

It just naturally happened as a way for me to leave a toxic work environment and get into the next space. And as I realized I was doing it, like I said earlier, I didn’t realize I was doing marketing until somebody was like, you’re marketing. And I’m like, cool.

CRob (09:30.102)
I think what you do is very effective.

Yesenia (09:32.338)
Thank you.

Sally Cooper (09:33.345)
Yeah, I agree. Yesenia, you were an inspiration to me when I first started at OpenSSF because you were so good at branding. You had the cybersecurity big sister. I saw that somewhere. It’s like, yeah. And then you started tagging me on LinkedIn and you just made me feel like I was welcome. And I know that you do that to the community. You make people feel like there’s someone who is technical, but also human who leads with authenticity. So I was super impressed and I always learn so much from you.

Yesenia (09:37.448)
No.

Yesenia (09:45.371)
and

Yesenia (10:02.462)
What, you guys gonna make me cry? No emotion. No, there’s no crying. What is it? No crying in baseball. I just aged myself there. But yeah, I think it’s really about creating those personas. And this is just something that you can do for yourself, that you do for your community, that you do for your projects. It was just something where I realized we needed to connect people and get them moving. And personas have been talked about a lot today

CRob (10:05.006)
There’s no crying in open source.

Yesenia (10:31.39)
in this conversation. Sally, I’d love your expert opinion on this. Why do you think they’re so important when it comes to open source marketing?

Sally Cooper (10:42.189)
Yeah, well, CRob and I ran a project along with the OpenSSF staff where about a year ago we polled our community and we asked them a few questions to try to identify who they were, what their job titles were, what was important to them, how they learned about OpenSSF and how we could serve them better. And we came up with a list of personas.

I will link the personas in this transcript, hopefully I can figure that out. But we have software developer maintainers, open source professionals, the OSPOs, security engineers, executives and C-suite. And there’s a whole bunch of titles there. And then we came up with a new one that we hadn’t thought about before, which is funny because now that we’re talking a lot about marketing, there’s a product marketer.

Yesenia (11:13.146)
Ooh.

CRob (11:36.91)
Mm-hmm.

Sally Cooper (11:36.985)
who is very much someone who is interested in open source software and open source security software. They’re typically a member or looking to become a member of the OpenSSF and they wanna help elevate the people that they work with, the projects that they’re working on, all the great work that their companies are doing in open source. Really, personas help us move from here’s a project to here’s how you ship secure code or

Here’s how we can help you manage risk or here’s how we can help you meet policy requirements. Marketing has really become a service and that’s where personas fit into the mix.

CRob (12:17.794)
Very nice and thinking about this from like, you know, we’re three kind of insiders for the foundation. If someone’s brand new to the OpenSSF and kind of wants to learn more, what does that journey look like for them, Sally?

Sally Cooper (12:33.429)
Yeah, that’s such a good question. So first of all, we’re all really nice and welcoming and you’re all welcome here. So if you have an idea, marketing can help bring that to light. If you are just new to OpenSSF, you can join many of our, actually all of our working groups. We have an open source community. One that would be really beneficial is the BEAR working group, Belonging, Empowerment, Allyship, and Representation, and they meet frequently and they record their meetings on YouTube. So if you’re unsure, you can watch a few and learn a little bit more about what it would be like to be in a working group at OpenSSF. I strongly encourage you also to join our Slack channel, we will link that, and to follow us on social media. You can sign up for our newsletter. We try to meet people where they’re at.

So when we were talking about the personas, we learned that people are on different platforms. Some people would prefer to watch a video or read a blog. And so we try to cater to that, but we’re also always looking for feedback. So join the Slack, make yourself known. Again, if you have an idea, we can help you bring that to light. So we’d love to hear from you.

Yesenia (13:53.181)
And, you know, no personal bias, but the BEAR group does do some awesome work, says the co-lead. We also have a few blog posts that were released last year, that Sally and her team helped release, that go into how to get started in open source, that I know the community as a whole has been sharing with new members as they come into the Slack channel. They’re like, I’m new, how do I get started? So there are great resources there.

So we’re kicking into 2026, even though my mind keeps thinking it’s 2016. I had to figure out what’s going on there, but you know, one day we’ll go back there. Sally, as an insider, I’d love to know: what is marketing working on this year for OpenSSF’s mission and the growth of the communities?

Sally Cooper (14:44.078)
Yeah, yeah, great question. So OpenSSF exists to make it easier to sustainably secure the development, maintenance, release, and consumption of the world’s open source software. We do that through collaboration, best practices that are shared, and solutions. And so our themes are showing up in 2026 quarterly to help people in our community meet these needs. For Q1, which we’re in now,

We’re focused on AI/ML security. Q2, we’re going to talk about CVEs and vulnerability transparency.

CRob (15:25.432)
I’ve heard of that.

Sally Cooper (15:27.289)
Q3, policy and CRA alignment. Q4 is going to be all about that base. So Baseline and security best practices.

Yesenia (15:41.01)
Very big fancy buzzwords there. So if anyone’s playing bingo as they listen, you got a few.

CRob (15:48.014)
Well, that has been an interesting kind of overview of what’s been going on. But more importantly, let’s move on to the rapid fire part of the show. We have a series of short questions. So just kind of give us the first thing that comes off the top of your head. And I want that visceral reaction. Slack or async docs?

Sally Cooper (16:18.092)
Async docs.

Yesenia (16:21.15)
Favorite open source mascot.

Sally Cooper (16:24.947)
The Base. Honk as The Base.

CRob (16:27.79)
Nice. Love that one. What do you prefer? Podcasts or audiobooks?

Yesenia (16:27.934)
Go, baby.

Sally Cooper (16:33.273)
Podcasts.

CRob (16:35.662)
Star Trek or Star Wars?

Sally Cooper (16:38.489)
Star Wars.

CRob (16:40.43)
And finally, what’s your food preference? Do you like it mild or do you like it hot?

Sally Cooper (16:48.939)
Medium.

CRob (16:50.188)
Medium? Well, thanks for playing along. So, Sally, if somebody’s interested in getting involved, whether it’s contributing to a project or potentially considering, you know, joining as a member on some level, how do they learn more and do that?

Yesenia (16:52.658)
That’s your question.

Sally Cooper (16:55.033)
Great question.

Sally Cooper (17:09.995)
Amazing. So go to openssf.org. From there, you can find everything you need. We referenced a blog. You can go check out our blog, find out how to contribute a blog. Everyone can join our Slack, join a working group, follow us on social media, subscribe to our newsletter. And we would love to see you at our events. Those are open to all. And if you are a member, please get involved, submit a blog.

Join us on the podcast. We would love to have you. We have a case study program. We also do quarterly tech talks. If you can dream it, we can build it. And the best place to plug in is our Marketing Advisory Council. It meets the third Thursday of every month at 12 p.m. Eastern time. You can also reach out to us at marketing at openssf.org.

CRob (18:02.392)
Fantastic. And may I state how thrilled I am to be adding you as kind of a voice of our community and having you join us as a co-host, Sally.

Sally Cooper (18:13.133)
Woohoo!

Yesenia (18:13.374)
Yeah, I’m very excited for a new voice to help offload some of this work, and for the stories that you’re going to bring, the guests we’re going to have on, and, as you shared earlier, our marketing for 2026.

Sally Cooper (18:27.982)
Well, thank you so much both for having me. It’s been a pleasure.

CRob (18:31.662)
Excellent. With that, we’ll call it a wrap. I want to wish everybody a great day and happy open sourcing.

Yesenia (18:35.718)
You’re welcome.

What’s in the SOSS? Podcast #48 – S2E25 2025 Year End Wrap Up: Celebrating 5 Years of Open Source Security Impact!

By Podcast

Summary

Join co-hosts CRob and Yesenia for a special season finale celebrating OpenSSF’s fifth anniversary and recapping an incredible year of innovation in open source security! From launching three free educational courses on the EU Cyber Resilience Act, AI/ML security, and security for software development managers, to the groundbreaking DARPA AI Cyber Challenge where competitors achieved over 90% accuracy in autonomous vulnerability discovery, 2025 has been transformative. We reflect on standout interviews with new OpenSSF leaders Steve Fernandez and Stacey, deep dives into game-changing projects like the Open Source Project Security Baseline and AI model signing, and the vibrant community conversations around SBOM, supply chain security, and developer education. With nearly 12,000 total podcast downloads and exciting Season 3 plans including AI Cyber Challenge competitor interviews, CFP writing workshops, and expanded global community initiatives in Africa, we’re just getting started. Tune in for behind-the-scenes insights, friendly competition stats on our most popular episodes, and a sneak peek at what’s coming in 2026!

Conversation Highlights

00:00 – Celebrating OpenSSF’s Fifth Anniversary
02:52 – Educational Growth and New Initiatives
05:51 – Community Voices and Leadership Changes
08:45 – The Role of Community Manager
11:44 – Open Source Project Security Baseline
14:47 – AI and Machine Learning in Open Source
17:47 – Software Bill of Materials (SBOM) Discussions
20:34 – Podcast Highlights and Listener Engagement
22:26 – Looking Ahead to Season Three

Episode Links

Transcript

CRob (00:05.428)
Welcome, welcome, welcome to What’s in the SOSS. Today I’m joined with my co-host, Yesi, and we got a really great recap for everybody. We’re gonna be talking about the whole last year’s season of What’s in the SOSS, some of the amazing people that she and I got to interview. Yesi, I’m excited to actually get to talk with you today.

Yesenia (00:13.58)
Hello.

Yesenia (00:30.318)
I know, I’m co-host and I never got to co-host with you, and here we go. But today’s exciting because it’s not just celebrating everyone’s impact and everything awesome that’s been done in the open source community; this year is actually OpenSSF’s fifth year anniversary. That is amazing. I just found out. I was like, whoa, good episode.

CRob (00:47.44)
Wait!

CRob (00:53.646)
Yeah, some of us have been around a whole five years, so it’s not quite a surprise, but hey. That’s right. So I mean, kind of looking back over the last year, we had so many amazing things that both our community did and that we’ve highlighted through the podcast. You know, we had a whole section where we worked with our

Yesenia (00:58.798)
But at least we’ve made it longer than COVID. That’s fine.

CRob (01:18.704)
Linux Foundation education team on a whole cybersecurity skills framework to try to help coach new people into the profession and try to help identify skills that employers would want to hire. And I know this has been talked about a little bit in the bear working group, right?

Yesenia (01:36.598)
Yes, it’s something that we’re also using to consider as we bring in more contributors that are newer to this space. This is like a really good framework and a functional structure of how we can bring in these folks and help them scale up as well as helping these open source contributors.

CRob (01:53.368)
Right, and as we’re upskilling, you know, the crew in the back was really busy. We issued three whole new courses this year. All free, exactly.

Yesenia (02:02.318)
Free courses, and across different very important spaces, because who isn’t talking about CRA and AI? It’s right there. Like it’s right there for you. You got an hour-long video on each. You got a nice little badge at the end. And for our software development managers, we can also talk about security. So those are, you know, three new courses if you’re looking to expand your education.

You have LFD 125, which is your security for software development managers. Two are on my bucket list because they impact my work, which is understanding the EU Cyber Resilience Act. That’s LFEL 1001. I wonder, my binary math is a little rusty, but I’m curious what that converts to. And then our secure AI/ML-driven development. This one, I know a few people in the…

The BEAR working group that have taken it, and good feedback, and BEARRRR! But it’s not even just these new courses, it’s the group in general. We have new staff, new OpenSSF members joining us.

CRob (03:14.96)
That was pretty cool. And I think you actually got the opportunity to interview Stacey when she started, right? She’s our new Community Manager.

Yesenia (03:23.456)
Yeah, if you haven’t worked with the OpenSSF, you haven’t met Stacey, our great community manager, really. I wanted to say a word, but we’re live, so I can’t. But she’s really driving things. It’s a good episode too. She got on our podcast, shared a little bit of her background. And I know she works closely with the BEAR community, helping drive a lot of the operations. But we also had a new general manager you got to interview.

CRob (03:51.384)
Right, yeah. Yeah, my new boss Steve, Steve Fernandez, joined us around the first quarter, and he brings with him a real kind of business and corporation focused background. So he’s really helped kind of mature a lot of the stuff we do around here and enhanced the scope of the services that we offer the community.

Yesenia (04:13.006)
And there was one more. I don’t know, I can’t put my finger on it. There was one more new member. Hmm.

CRob (04:16.75)
Hmm

Well, we did have a new co-host this year. Hello?

Yesenia (04:22.306)
That’s right, it’s me! Yes! Sound more now!

CRob (04:28.747)
Very exciting. Yeah. And overall, the podcast kind of focused on current topics, new and interesting projects like our security baseline. We had a couple talks around CRA and I know that we are, we’ll kind of save this a little bit as a teaser for next year, but we did several talks talking about AI.

Yesenia (04:49.902)
And there we did also talk on the AIxCC, which, you know, they’re going ahead and pushing security into the future with their autonomous vulnerability discovery. I know working in my past that that autonomous vulnerability discovery is such a complex, huge issue that I’m excited somebody’s driving deeper into that and working with OpenSSF.

CRob (05:12.752)
And I think I mentioned to you in some of the podcasts, I came into the whole AIxCC competition incredibly skeptical; I was unsure of the value that AI tools would bring into this space. But after we got the results, I was just floored. The fact that the top team had over a 90% accuracy rate in finding and writing a fix for vulnerabilities.

Yesenia (05:39.078)
Wow.

CRob (05:41.836)
The second place team was only in the high 80% success ratio…only. Yeah, like there’s some amazing stuff and that really kind of convinced me that this is there’s some value in this space. And I think there’s, I’m really looking forward to some of the collaboration with around the cyber reasoning systems and a lot of the new things we’re doing in the AI space right now.

Yesenia (05:59.663)
Do you know if they’re continuing it for next year?

CRob (06:06.116)
The competition isn’t continuing, but we will be continuing to work with DARPA and ARPA-H and the different competitors. We’ve already lined up. You’ll see some podcasts coming out in the early next year where we’re talking to the different competition teams. And several of those groups already are working to donate their software to the OpenSSF to help continue to grow a community and continue the development and refinement of these systems. There’s going to be some amazing stuff out of the AIML Working Group next year.

Yesenia (06:24.087)
Nice.

Yesenia (06:34.68)
Yeah, because I can just imagine, with that percentage, the intrigue, just the research and the technical architecture of how they designed this to be able to produce such results. I know it’s going to be a huge impact on our open source and the security overall. But for one year, you know, we had educational growth, governance maturity, policy collaboration, our supply chain security. That’s one of my favorite words. I got earrings for it.

CRob (06:51.631)
Yeah.

Yesenia (07:04.864)
And then you got AI, you know, the flow of that that’s come in. It’s really hit open source in a real way. And I’m excited, and I love that the podcast is capturing how we’re evolving in these spaces with the voices from our community.

CRob (07:19.172)
Mm-hmm. So let’s talk about those community voices a little bit. You mentioned that I had the opportunity to kind of talk with Steve, our new General Manager, and it was interesting that, you know, Steve spent some time in his podcast, which was titled Enterprise to Open Source, kind of talking about Steve’s journey. And he really kind of focused in on how his decades of being a consumer of open source really is informing his current role as a steward of open source right now.

Yesenia (07:56.931)
Yeah, after listening to that, it was, it’s understandable why he got the position considering his background in the space, and just, since he started, the changes that have happened in open source and the growth of it. You know, Steve’s vision is where he bridges enterprises’ risk mindset with that of open source. That is something we definitely need to consider when it comes to it, because one of the major consumers is our enterprises.

And I know he’s played a big role in the Baseline and maturing that foundation of it. From listening to the episode, I know he talks about those decades of consuming and then stepping into this, and really calling security a hidden greatness, which is the work that you only notice when it’s missing or you get impacted, right? And this is for even the everyday person. It’s like, you won’t realize that you need that security and privacy until, you know, credit cards are stolen, right? So, for him, really coming in and turning those enterprise pain points into what is OpenSSF’s roadmap this year and beyond is really helping organizations ship safer software.

CRob (09:09.88)
I agree. Now let’s talk about showing up fully. This was the interview you did with Stacey, our new Community Manager. What highlights would you like to share out of that conversation?

Yesenia (09:19.874)
This one, I love this, this is one of my favorites. Stacey came in and she’s had this background of being a community manager for open source communities. And she really hit the ground running and was pushing that train. Like she’s behind that train moving it. But her real focus was around belonging, that authenticity, the inclusion, and connecting BEAR with DevRel. And even though they’re two different working groups under us.

We have a very similar mission, just a different scope. So being able to come in as a community member and really ground how much community work underpins all the technical work. I’ve seen her show up fully to the calls, not just to BEAR, but the other working groups, and just making sure that she drives that community first mindset. And she connects with the maintainers, the members, the newcomers, and just making sure everyone’s being heard and felt. So, absolutely love that, and you know, there’s so much more to that.

CRob (10:24.24)
She also does a pretty amazing keynote.

CRob (10:29.968)
You’ve got to watch the video. It’s amazing.

Yesenia (10:45.43)
And I didn’t get to see the keynote, but you gotta watch it. I’ve heard so much about it, with her and Puerco.

CRob (10:53.368)
And that’s, I think another interesting thing kind of pivoting around the community manager role. We have so many things going on across all the technical initiatives and working groups. It’s hard to kind of keep track of all of it. And that’s why having this role of that community manager is so important to be that connective tissue between our folks in the community that are contributing with staff, with the Board and the TAC. So it’s really important to have that, that role to help keep us balanced and focused.

Yesenia (11:22.306)
Yes. And let’s not forget, the podcast wouldn’t be the podcast without Stacey sitting here, listening to us, editing, publishing it. Big kudos if you ever see Stacey and you do her podcast. Please let her know she’s working really hard behind the scenes. She’s listening to us right now. So tons of kudos to her.

CRob (11:39.352)
Absolutely. Well, thinking about what came up next: the Open Source Project Security Baseline was a big effort for us, both in our community and within the whole broader LF. We did a, yeah, yeah. And we did a great podcast with two of the maintainers, Eddie Knight and Ben Cotton. And the title was A Deep Dive into the Open Source Project Security Baseline. And, you know, I thought that was.

Yesenia (11:53.932)
You helped push a lot of that.

CRob (12:08.752)
Pretty amazing little chat because both Eddie and Ben approached this project from the perspective of an upstream maintainer. We want to do whatever we can to remove work and burden from upstream and allow them to focus on creating amazing software and not necessarily have them have to worry about a compliance checklist, so to speak.

Yesenia (12:33.998)
And what I know with the Baseline, it ties together several projects.

CRob (12:39.596)
Yeah, the Baseline itself is the catalog, which is the brains of the whole operation. And that details a list of requirements that should be done in the course of software development, publication and consumption. And then we have the ORBIT working group, which actually is kind of the home for the Baseline. And the ORBIT working group has a series of software projects.

That help try to automate or enable a lot of these different techniques. So we have things around making policy-based decisions in your CI pipeline, like Minder or Gemara. We have a Security Insights spec that’s all part of the ORBIT working group. And that’s a way for people to express how they are achieving some of these requirements. So, for example, if you’re a project and you issue SBOMs,

You can make a Security Insights file to tell people how to find your SBOM, so they don’t have to continually email you asking for more information.

Yesenia (13:47.119)
And I heard a very quotable famous quote come out of this podcast, which was, oh, we got to put this on a t-shirt: “Give maintainers a way to show their security work, not just promise it.” Because that’s a huge thing. You’re working on these projects day and night in the stereotypical basement. And no one really cares unless they’re impacted, not in that sense. But it’s nice that we could show.

Have a way for maintainers to show their security work, give themselves a kudos and acknowledgement for the hard work putting it together.

CRob (14:19.279)
Right.

CRob (14:23.21)
And that’s where I’m very excited. And this kind of ties in with Steve’s vision and strategy: projects like the Baseline or SLSA are things that help downstream meet your boardroom expectations. But all of these things are created and curated by the community. So again, we try, wherever possible, to focus in on the maintainer experience and making things easier. And I just love that kind of dual purpose, that we’re trying to help both up and downstream at the same time.

Yesenia (14:56.824)
Yeah. And then this year, going back into those educational pieces, some other episodes we talked through: there was, you know, David Wheeler’s new secure AI/ML development course. We have the Cybersecurity Skills Framework that we talked about earlier. And from there, we had that conversation with Sarah. I think you interviewed Sarah on the AI model signing. What was your takeaway from that?

CRob (15:25.168)
Yeah, that was really great. So, our AI/ML Working Group is one of our technical initiatives, and they’ve been around for about three years. And it was a little bit of a slow start, where they did a lot of talking and evaluating and kind of setting up liaison relationships with, there’s a whole cast of characters that are involved in AI security in the upstream ecosystem. And when I talked with Sarah, it was right after they’d had two publications.

The first was an AI model signing project, where they were leveraging Sigstore and in-toto to help consumers understand: here is a signed model or a signed artifact. On this day, they created this artifact, and it’s been untampered with since. So again, it’s trying to help provide more information into the pipeline so people can make risk-based decisions.

Yesenia (16:02.837)
interesting.

CRob (16:21.774)
And then right after that, they also released a white paper kind of talking about how to integrate DevSecOps practices into machine learning and LLM development. And that’s been a really important artifact where it’s helped us realize, recognize that there are a lot of people involved in creating, air quotes here, AI stuff, whether it’s an application, you’re training a model, you’re trying to go to market with something. There’s a lot of personas that are involved, and most of them aren’t classically trained software engineers or cybersecurity practitioners. So the white paper kind of highlights these other people that participate in this creation process and talks about some techniques that are both old, you know, from AppSec, what we’ve done for 25, 30 years that have worked well, that could be applicable in the AI space. But then they also talk about some new ideas, because these technologies are a little different and it does require some new ways of thinking, of being able to interrogate the different gizmos, whether it’s GPUs or agentic tech. So each technique requires a little bit different tools to help protect them.

Yesenia (17:33.487)
Yeah, I’m glad you brought up the white paper because I was about to be like, I read the white paper. It was actually a good piece of knowledgeable guidance and information on how to do model signing that I’m bringing into my own industry. It’s a good read, you know. And then we have other reads, like the CRA compliance conversations we had with Alpha-Omega and the Erlang group. Those are also two good episodes to watch or to listen to.

Yesenia (18:03.522)
When it comes to the CRA. But, you know, we’ve talked about the Baseline, we talked about GUAC, we’ve talked about SLSA, but the other card on, you know, the other bingo card for 2025 is SBOM. What episodes do we have on that?

CRob (18:14.746)
That’s right.

What episodes did we have on software bill of materials? 

CRob (18:51.608)
Right, we did do several things around SBOM. We had the opportunity to talk with Kate Stewart, who’s been a leader within the software bill of materials space almost since the beginning. She represents SPDX, which is one of the two tools that most people use to create software bills of materials, with the other one being CycloneDX, which our friends over at OWASP caretake. And that was really interesting, kind of talking about Kate’s perspective on the evolution of these things.

And then more recently, I had the opportunity to talk with the chief security officer of Canonical, my former coworker, Stephanie Domas. And we talked about a bunch of different things. And SBOM was kind of wrapped up in that conversation, talking about just challenges within the current regulated space that both commercial entities like a

Yesenia (19:26.445)
Ooh.

CRob (19:41.546)
Canonical will face, but also upstream open source maintainers as well. So, really engaging conversations around supply chain and software bill of materials. The GUAC conversation was also really good and kind of important. That’s a very useful tool to help you get wisdom out of your SBOMs. Wisdom.

Yesenia (20:00.601)
Wisdom. Word of the day. It’s awesome. Considering it’s OpenSSF’s fifth year, just this year’s reflection on podcasts, we’ve really covered multiple areas that the community has been working on. And just my favorite thing about this whole thing is the little competition that’s going on across these podcast episodes, where our guests have come in and asked, what’s my number? What’s my view count? So as of today’s recording, we have Mike Lieberman’s talk on GUAC, SLSA, and securing open source at 611. GitHub’s Mike Hanley, Transforming the Department of No, at 406.

Yesenia (20:55.886)
Eric Brewer and the Future of Open Source Software at 370, Vincent Danen and the Art of Vulnerability Management at 328. I’m so glad my dyslexia is not switching these numbers. And lastly, we have Sonatype’s Brian Fox and the Perplexing Phenomenon of Downloading Known Vulnerabilities at 327. So if you want to help these folks out,

Yesenia (21:25.644)
Give it a listen and let’s see if we can change the top episodes by the end of the year.

CRob (21:31.024)
It’s kind of a curious peek behind the scenes, where guests will come in and do their podcast. And they’re very interested. It’s not vanity, but people like to hear that their work is valued. And so there is very healthy competition and some little bragging rights, where Mr. Lieberman will kind of say, well, I have the most downloaded OpenSSF podcast. So it’s just kind of fun, like a friendly little healthy competition. And again, it’s focusing in on some of these key areas of supply chain security, application development, software bill of materials and such.

Yesenia (22:06.19)
Yeah, it’s crazy to see that we’ve, across all the episodes, been about 11,800 total downloads and just 6,000 in 2025. So big thank you to our listeners, our supporters for that. I think it’s the first year of this podcast or second.

CRob (22:24.526)
Second, the second. And that actually kind of gives us our segue towards the end here. We’re talking about a lot of things that happened during 2025. And we are about to publish our annual report, where you can kind of dive in and double click on some of these details. We’ll provide a link as this podcast is published so that you can look at the report, which will link into things like our five-year anniversary, or our work with DARPA on AIxCC, or all these amazing things around the Baseline. So I’m really excited to kind of share that annual report with everybody. It touches on a lot of the topics that Yesenia and I have talked through, and many others. And that kind of moves us on. We’re going on to bigger and better things. 2026 is going to be season three.

CRob (23:17.88)
And I think we’ve got some really interesting topics kind of queued up.

Yesenia (23:21.464)
Are we gonna share? Are we gonna share? Are we gonna be nice to our listeners?

CRob (23:24.944)
I think everyone’s on the nice list. We can share that with them. Yeah, you’re going to see us starting off the year with kind of a full court press around AIxCC and AI and ML security topics. We have a bunch of work queued up with some of the cyber reasoning system competitors. We’ll talk with some of the competition organizers, again, talking more with our community experts around these very important topics and maybe unveiling some new projects that our AI team has in the hopper. That’s going to be very exciting. We’re going to have some very special guests from around the world of open source and public policy and research. And we’re going to have some very recognizable names that may have been on the show or a part of our community’s orbit that we would love to reengage with and talk more with.

CRob (24:21.358)
So, thinking ahead, you’re going to see multiple series of episodes around the AIxCC competition in particular. We’re going to be focusing in on industry and research stars. So we’re going to try to find some well-known voices out in the research community, joining some of our maintainers and kind of talking about some big picture conversations in the ecosystem. And then you’ll see many more things around our education efforts.

Would you like to talk about some of the stuff I know that Bear’s preparing to do?

Yesenia (24:51.822)
For BEAR, we have very exciting things for next year. Not associated with that sports team. We have the next mentorship for the summer that we’re going to be producing; we’re working towards those details. We’re working with a group out in Africa, what is it called? The Open Source Security and Software Africa Group, with a primary focus on doing speaking engagements, holding meetups and conferences in Africa, because there’s a huge community group there that has nowhere really to go. With global restrictions and visas, they’re very limited. So we’re helping them kind of grow that out, sharing some tips and tricks that we’ll be sharing on social, just to drive more awareness of these projects and these teams. And of course our community office hours, which have also had a lovely set of community members that have come in and shared their journeys, education pieces, and blogs that have been recently produced, like what Sal and Ejiro have produced about newcomers to open source. We’re working on getting part three released, but you can find parts one and two on OpenSSF’s blog main page.

CRob (26:39.576)
Excellent. And I’m also excited that we’re going to be doing some special education segments on the podcast around how to write a good call for papers abstract, and then how to build your first conference talk, which is something that, again, a lot of these newcomers haven’t had experience with. Some of us that have been around the block a little bit can help share some of the wisdom we’ve earned over the last couple of years. Right.

Yesenia (26:46.83)
Yes.

Yesenia (27:02.358)
My try on era.

CRob (27:07.512)
And with that, I want to thank you for coming on board and being our co-host. You’ve really brought a nice energy and a fresh perspective when you’re talking with our community members. And I wanted to remind everybody, as we are preparing for season three, if you have ideas or suggestions for topics, please email marketing at openssf.org. We would love to hear your episode pitches, your CFP stories, if you want to do some demos or have case studies.

Yesenia (27:12.119)
Absolutely.

CRob (27:35.822)
Or if you just have general projects that help move the broader OpenSSF mission of improving the security of open source software for everybody forward. So thank you again, Yesenia. It’s been a pleasure. I’m looking forward to another exciting year of talking with you. All right. Happy open sourcing, everybody.

Yesenia (27:50.776)
Thanks, CRob. On to the next episode.

What’s in the SOSS? Podcast #45 – S2E22 SBOM Chaos and Software Sovereignty: The Hidden Challenges Facing Open Source with Stephanie Domas (Canonical)

By Podcast

Summary

Stephanie Domas, Canonical’s Chief Security Officer, returns to What’s in the SOSS to discuss critical open source challenges. She addresses the issues of third-party security patch versioning, the rise of software sovereignty, and how custom patches break SBOMs. Domas also explains why geographic code restrictions contradict open source principles and what the EU’s Cyber Resilience Act (CRA) means for enterprises. She highlights Canonical’s work integrating memory-safe components like sudo-rs into the next Ubuntu LTS. This episode challenges assumptions about supply chain security, software trust, and the future of collaborative development in a regulated world.

Conversation Highlights

00:00 – Welcome
01:49 – Memory safety revolution
02:00 – Black Hat reflections
03:48 – The SBOM versioning crisis
06:23 – Semantic versioning falls apart
10:06 – Software sovereignty exposed
12:33 – Trust through transparency
14:02 – The insider threat parallel
17:04 – EU CRA impact
18:50 – The manufacturer gray area
21:08 – The one-maintainer problem
22:51 – Will regulations kill open source adoption?
24:43 – Call to action

Transcript

CRob (00:07.109)
Welcome, welcome, welcome to What’s in the SOSS, where we talk to the amazing people that make up the upstream open source ecosystem. These are developers, maintainers, researchers, all manner of contributors that help make open source great. Today I have an amazing friend of the show, and actually this is a repeat performance. She has been with us once before.

We have Stephanie Domas. She’s the chief security officer for Canonical. It’s a little company you might have heard about. So welcome, and thank you for joining us today, Stephanie.

Stephanie Domas (00:45.223)
Thank you for having me again for a second time.

CRob (00:48.121)
I know. It’s been a while since we chatted. So how are things going in the amazing world of the Linux distros?

Stephanie Domas (00:58.341)
Yeah, so just for people who aren’t as familiar with Canonical, because we do have a bit of a branding thing, we are the makers of Ubuntu Linux. So that connects the dots for everyone. The world of the distros is always a fun place. There has been a lot of recent excitement around npm hacks, supply chain hacks. And so in the world of archives, running archives, running a distro, there’s never a dull moment.

CRob (01:08.422)
Yay.

Stephanie Domas (01:28.475)
So on our distro front, right, on the security front, we’re taking a fresh eye to how to introduce things like memory-safe core components into our operating systems. So sudo-rs, a Rust implementation of sudo, is now a part of

CRob (01:40.231)
Ooh.

Stephanie Domas (01:49.085)
Ubuntu, and will be a part of our next LTS, which is something we’re really excited about. We’re looking for more opportunities to replace some of these fundamental components with memory-safe versions of them. So that’s also exciting.

CRob (01:52.604)
Nice.

CRob (02:00.167)
That’s amazing. My hat is off to all of you for taking the leadership role and doing that. That’s great. I had the great opportunity to participate in a panel with you recently at Hacker Summer Camp. So, I know it’s been a little bit of time, but reflecting back, what was your favorite experience from the Black Hat, DEF CON, BSides, Diana Initiative week?

Stephanie Domas (02:24.609)
Yeah, so I’m always one of those people for whom one of my favorite things is just the ability to reconnect with so many people in the industry. That’s the one time of year where, despite the fact that you and I physically live near each other, that actually tends to be the one time a year we see each other in person. So extending that to all of the great people I’ve known in the industry and getting to see them. The panel you spoke about was another real highlight for me. The panel, for those who didn’t get the privilege of attending, was an “ask me anything” on open source. And it was great because there was such a variety of interests right there. People who are interested in what it’s like to be a maintainer. What does it mean to use open source in enterprise? What does it mean to try and think of monetization models around open source? And so the diversity of conversation, and finding people who are in the space but really unfamiliar with open source, or trying to figure out how to reason about it, and being able to have those conversations with them, was really exciting for me.

CRob (03:24.515)
I agree, I thought that was a great experience, and I would love to see us do that again sometime. Excellent. Well, let’s move on to business. Today you have some things you’re interested in talking about. So let’s talk about one of my favorite topics, software bills of materials, and also talk about sovereignty within software.

Stephanie Domas (03:29.725)
I’ll put in a good word.

CRob (03:48.867)
From your perspective, how are you seeing these ideas and the tooling around SBOM, how are you seeing that adopted or used within the ecosystem today?

Stephanie Domas (04:00.421)
Yeah, so I’ll preface it with: SBOMs continue to show a lot of really great promise. I do think there is a lot of value to them, and I think they’re really starting to deliver on some of those benefits. But trying to implement them at scale, right, across our entire distro, across the archives, there are still some implementation challenges. So one of the things that’s been on my mind a lot recently is around versioning.

CRob (04:28.273)
Ooh.

Stephanie Domas (04:29.863)
So I feel like every week I see some new vendor or I get approached by some vendor whose business model is to create patches for you on the open source that you use to help maintain your security.

It’s a really interesting business proposition. I get why these companies are popping up. But when I start to think of SBOMs and some of the difficulty we’re having around implementing them, this idea of version control in open source, when we have companies coming out of the woodwork to create patches, just creates this swirl of chaos.

To add a specific example there, think about semantic versioning, right? When we release the version of something, it’s 5.0.0. When the original manufacturer, the upstream, releases a patch, it’s 5.0.1, right? Then 5.0.2. And so SBOMs come into play because the SBOM’s goal is to understand the version of the thing that is inside of your piece of software. And then it uses that to kind of look up and say, hey, what are the known security issues in this version? But as I continue to see, and have even talked to, some of these businesses whose whole value proposition is “we will actually maintain and develop patches for you for the things that you are using,” you’re breaking this whole idea of semantic versioning, in that now you are potentially carrying a tremendous number of patches, for security specifically, that aren’t represented in that version number. And so then the question becomes, what do you do with that? If I’m using 5.0.0 internally and I’ve used one of these companies to develop a security patch for me, what does that mean for my version number? What do I put in my SBOM when I have created my own security patches that I have chosen not to upstream? What does that mean? And so…
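The lookup mismatch Stephanie is describing can be sketched in a few lines. This is a hypothetical illustration, not any real scanner or vulnerability feed: the package name, the CVE-style identifier, and the lookup table are all invented for the example.

```python
# Hypothetical sketch of how a version-keyed SBOM scanner works: known
# vulnerabilities are indexed by (package, declared version). An out-of-band
# security patch that fixes the bug without bumping the version still matches
# the vulnerable entry, producing a false positive in the scan report.
KNOWN_VULNS = {
    ("libexample", "5.0.0"): ["CVE-XXXX-0001"],  # fixed upstream in 5.0.1
    ("libexample", "5.0.1"): [],
}

def scan_sbom_component(name: str, version: str) -> list[str]:
    """Return the vulnerabilities a version-based scanner would report."""
    return KNOWN_VULNS.get((name, version), [])

# An enterprise carrying a third-party backport of the fix still declares
# 5.0.0 in its SBOM, so the scanner flags a bug that is already patched.
print(scan_sbom_component("libexample", "5.0.0"))  # → ['CVE-XXXX-0001']
```

The scanner has no way to see the internally carried patch; only the declared version string participates in the lookup, which is exactly why version numbers alone stop being a reliable proxy for security posture.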

Stephanie Domas (06:23.931)
I see this as a really unique problem to open source, right? There are a lot of upsides to open source, a tremendous amount to be specific, but the code base being open and allowing people to develop their own patches now creates this slurry of confusion. I don’t think version numbers will be the thing we can rely on in SBOMs, and I don’t know what the answer is, right? I don’t actually have a good approach to this. We’re starting to see customers using these companies to create their own patches, then they use some internal vulnerability scanner, and then they come back to us and say, hey, there are these problems, these reports I’m getting from my scanner that’s scanning my SBOMs. And we’re sitting there confused, because you’re carrying these other patches. I mean, it’s causing so much confusion that inside of the company, they don’t even realize it. So they get these scanning reports, and they come to us, and we’re like, well, I don’t know how to answer this question because you’ve done something else internally. Which is great, you like that, but your risk position is different: you’ve hired one of these companies to give you patches. So it’s one of these interesting things that’s been on my mind recently. I don’t know what the answer is, but I think SBOMs and version control in open source are going to be a really big struggle in the coming years as we see more of these offshoots of patches and security maintenance coming from not upstream, and with no intention to upstream them.

CRob (07:50.415)
And I think, you know, we get to deal with a lot of global public policy concerns, and thinking about things like the CRA, where manufacturers have a lot of pressure to produce these patches. They’re encouraged to share them back upstream, but they’re not required to. So I imagine the situation that you’re describing, where these vendors are kind of setting up this little cottage practice.

You’re only going to see more of that. And again, it’s going to cause a lot of confusion for the end users, and a lot of, I’m sure, frustrated calls to your support line saying, I’m using package 1.2.3, when the actual, canonical version, pun intended, of yours is X.Y.Z. Again, I bet that would cause a lot of customer downstream confusion.

Stephanie Domas (08:39.579)
Yeah, the EU CRA is one of the forcing functions that I think will bring a lot of this to light, where you start to have these obligations around patching and attestation to your customers. But then again, you start to get this

untenable spaghetti monster. How is that versioning happening when that manufacturer has potentially, again, used one of these businesses to create patches internally that they did not upstream? And so what does that mean for the SBOM when you give it to the users? How are they supposed to make informed risk decisions about that when the version number in that SBOM either follows upstream, in which case it misrepresents the security posture, or they’ve come up with their own versioning system that’s unique to the internal patches that they’re carrying? And so again, the poor users are left without being able to make those informed decisions. So yes, the EU CRA, one of my favorite topics, is going to be a forcing function for that. And I think it’s gonna…

Again, there are no answers. I think we’re going to be forced to try and make some of these decisions about what that means, right? How do users reason about their SBOMs when the version numbers in them may not make sense anymore?

CRob (09:53.423)
It’s good to know that you’ve jumped into one of the hardest problems in software engineering, versioning. Do you want to tackle the naming issue as well while we’re at it?

Stephanie Domas (10:03.205)
I’m here to solve all, actually I’m just here to call out the problems.

CRob (10:06.247)
So somewhat adjacent, you mentioned that you wanted to talk about kind of this trend in the ecosystem around sovereignty. The EU thinks about it as digital sovereignty, but we’ve seen this concept in a lot of different places. So what are some of your observations around this, air quotes, new movement?

Stephanie Domas (10:32.121)
Yeah, so sovereignty is a really interesting one. For those who haven’t been too exposed to it, we’re seeing a big push in things like data sovereignty, which is focused on where is my data, right? Is my data contained within some named geographical border?

The big one that has been coming up for me a lot recently is software sovereignty. So there are some countries and some regulations that are taking the approach that the way to drive trust in a piece of software is by where it was developed, where the developers that wrote that piece of code come from. And on the surface, I…

I can see why they’re thinking that, right? I don’t necessarily agree with it. I can see why they may be thinking, hey, if this software was developed by people with the same ideals as me, maybe I can assume that it is for a purpose I agree with, right? I can see that argument, but I don’t love that argument. So the thought process to me is: is this actually the most effective path to true security? What they’re trying to achieve through this focus on software sovereignty is in fact more trust in the software. And us being Canonical, right, our solutions are open source. This has come up for me with ambitions to sell to different federal governments, right? And so I sit in these conversations where contractors and consultants are trying to explain to me, well, here’s what you have to do to sell to this named federal government here.

And you sit there thinking, none of this makes any sense for open source. It’s kind of the antithesis of it, right? Attestation that the code was only worked on by people in certain countries. And I keep sitting there thinking, this is a wild way to derive trust in software. And so I ask myself, why is this happening? On one side of the fence, there’s the “yes, I derive trust in software by who developed it.”

Stephanie Domas (12:33.157)
The other side of that, and the side that I think open source solves, is: is it not a more robust defense, right, to not think about national borders, but instead globally accessible and auditable code bases? And again, I’m in these meetings. So CMMC was a recent one, for those in the US, but they have them all over the world. I’m sitting there thinking there’s absolutely no way for me to achieve anything beyond level one in CMMC, because I can’t, and have no interest in, attesting to build pipelines that involve only certain citizens. And again, it’s one of these things that’s confusing to me. So it’s been on my mind, sitting in these meetings about this. And support sovereignty, right? Support sovereignty has actually come up as well for us. We have experienced not just governments, but customers now who are seeking support to only be performed by people within a named geographic border. And so we’re having to try and develop strategies around support sovereignty too. And again, I go back to: if the code base is globally accessible and auditable, I just feel like that’s a better way to solve this problem of deriving trust in what is done in the software. Even from a support perspective, if you want to derive trust in what changes they’re proposing, what code they’re proposing to you, the fact that you can see it all should be how you derive trust, not where that person is physically sitting in the world. So that’s me on my soapbox about sovereignty for the moment.

CRob (14:02.611)
And I liken this movement back to when I was an in-house practitioner on an InfoSec team, where the business was always dismissive of the insider threat vector. They were so focused on external actors, on somebody coming in and hacking the organization. But you look at reports like the Verizon data breach report that comes out every year, and consistently, every year, the most damaging cybersecurity breaches come from people inside of your organizational trust boundary. They cause the most damage, they are the most frequent, and whether it’s ignorance or malice at the root cause of the problem, it’s still someone inside of your organization. And to extrapolate that to a nation, you’re going to have a much broader spectrum of people, even beyond your small enterprise. So I’ve always kind of scratched my head at why people were so dismissive of this, because there’s evidence that shows that insiders are actually the ones that have consistently caused the most damage, and then you blow that up.

Stephanie Domas (15:11.773)
And I can tell you, yeah, as someone who spent time as an adjunct professor at The Ohio State University teaching software development.

There are a lot of not-great coders out there, right? So even if their ideals align, there is nothing about code coming from a domestic location that ensures quality, right? Again, even if you’re not worried about malice, there are a lot of bad developers out there. The ability to have an auditable code base should be your number one focus. And so, yeah.

CRob (15:46.119)
And it’s the whole “a thousand eyes,” transparency perspective. But what I’ve always loved about open source is the fact that it’s meritocracy based. The best ideas get submitted. We are able to pull ideas from everywhere; anyone’s idea is potentially equally as valid as everybody else’s. So I love that aspect, and that makes the software stronger.

It ideally coaches those developers, like you alluded to, that might not be as skilled or as aware of security techniques, and helps them improve themselves, but it also ideally helps keep those bad commits out of the code.

Stephanie Domas (16:25.777)
Yeah, and I’ll be the first to admit that not all open source is quality, right? It’s a variety of things. But the point is that you have the ability to determine that yourself before you choose to use the code. So depending on your risk posture, you get to make that decision.

CRob (16:43.43)
Mm-hmm.

Absolutely. So again, kind of staying on this topic: from your perspective, and from the developers and customers you get to work with, how has this new uptick in global cyber legislation impacted your mission and impacted your customers?

Stephanie Domas (17:04.698)
Yeah, so actually I’m going to circle back to our favorite legislation, which is the EU CRA. This has been a really interesting one because it’s…

CRob (17:10.01)
Yay!

Stephanie Domas (17:15.549)
It’s forced a lot more light, I think, from the customers we’re working with onto the open source they’re using, how they’re consuming it, and putting a lot more intentionality into: if I continue to consume it this way, what does that mean for my liability associated with this piece of code? So in a way it’s been really good that it’s forcing customers to think more about that. As you mentioned earlier about patching, and thinking about how am I getting this patch, how am I incorporating this patch: part of the EU CRA requires security patching for the defined lifecycle of the device. And that’s actually driving some really interesting conversations of, okay, yeah, that makes sense for the piece of software I wrote, but I had these pieces that I was consuming from the open source space, and what does that mean for those? So I do think it’s driving a good conversation on what that means. I worry that there is still some confusion in the legislation, and until we get the practice guidance, we won’t have the answers around that enterprise consumption of open source and what it means from my risk perspective, right? And I’ll also caveat: there is a carve-out in the EU CRA for open source stewards. Canonical, we make open source, but we are not actually an open source steward, right? Because we have monetization models around support, and because we want to support our customers, we are actually considered a manufacturer. So we’re taking a manufacturer role. And a manufacturer role over open source is also kind of a big gray area in the practice guidance and the legislation: well, what does that mean? Because despite the fact that we’re taking manufacturer on Ubuntu, the amount of upstream dependencies above us is astronomical. And so what does that mean, right? What does that mean for us taking liability on the Debian kernel? I don’t know.

It’s a bit confusing at the moment what that means, because we are in fact a manufacturer consuming open source. And then for our customers who we have business relationships with, right, we are signing CRA support clauses. We are taking manufacturer, so they don’t have to take manufacturer on the open source they consume from us. But again, it’s a bit unclear what entirely that means. We are actively involved in things like the ETSI working group that is defining some of these practice standards, in particular on operating systems, and Ubuntu is one of our flagship products. So we’re involved in that working group to try and define what exactly this means for operating systems. But despite being best known for Ubuntu, we actually have a very large software portfolio. And there’s a lot of software in our portfolio that doesn’t have a vertical practice guidance being made. And for the general horizontal one that would define the requirements for those products, we don’t really see much activity yet. So there’s also a big unknown there. We don’t know what those expectations are for our products right now, which again have tremendous upstream dependencies. And in some cases, we are a fraction of that code base, right? What Canonical produces in that code base is a fraction of the overall code base. But when people consume it through us, what does that mean? Because we took the manufacturer role on it.

CRob (20:33.415)
Yeah. Well, and you touched on it a little bit earlier: while open source is amazing, not all of it is great software. It’s a big spectrum. A student project or someone’s experiment that you stumble across on GitHub is a very different experience than, say, operating with Kubernetes, or the Linux kernel, or anything from the Apache Foundation. When you have these large, mature teams, they have a lot of different capabilities, as opposed to a random project you find on the internet that somebody might not have intended to be commercialized.

Stephanie Domas (21:08.416)
Yeah, and I was reading an article recently on, I think it was an npm archive that they had done their statistical analysis on, but they had done all this interesting analysis on essentially the number of different handles who had committed to a piece of software. And their overall conclusion, right, was that

CRob (21:23.911)
Mm-hmm.

Stephanie Domas (21:26.301)
the majority of open source on this archive was one maintainer, right? They showed that of the N most popular downloads, over 50 percent of them, and I can’t remember the exact number, it might have been something closer to 80 percent, were one person. And so again, open source has a lot of amazing things, but not all of it is well written, not all of it is well maintained. So regulations like the EU CRA are forcing people to take a much better look at their supply chain, to understand: where am I consuming that from? Is that something that is well maintained? Because I can’t just take the old version, say it’s stable, and not worry about it from there. You can’t do that anymore under the CRA. I have to do security patching on this thing, and now I’m potentially responsible for it, and what does that mean? I do worry, and I hope that we won’t see a recoil of enterprises using open source because of that fear, right? Because there are large libraries out there where there aren’t people willing to step up and take the maintainer role on them, and that makes enterprises afraid to consume those products under the EU CRA. And so that’s a fear that I do have. And again, it’s a bit of a gray area: if that open source repo decides to take steward status, what does that mean for the enterprise consuming it? If it’s something small, a tiny library, maybe the risk isn’t large, but it may be a really meaningful part of the application.

And if you’re not consuming it through somebody willing to take the manufacturer role in your chain, will you still be willing to consume that piece of open source? So I do worry about that. I hope that as the practice guidance comes out, that is clarified, so there’s a better understanding of what that means and we won’t see that recoiling. But I have tried to talk to some of the legislators that are working on that, and actually they haven’t been able to answer that question for me yet either. Is there going to be clarification? Because this is an area of concern. So hopefully between now and more practice guidance, we get more clarification so we don’t see that recoiling.

CRob (23:34.915)
And I know that, at least from the horizontal and vertical standards standpoint, we are very close to starting to see public drafts delivered sometime in the fourth quarter of 2025. And ideally that’ll allow us to see what the legislators are thinking and where they are planning on landing on some of these issues. And hopefully we get some clarifications. And ideally we get the chance to provide feedback and say, hey, great work.

But have you considered X, Y, and Z to help make this more precise and more clear?

Stephanie Domas (24:11.387)
Absolutely, and my understanding, hopefully I’m remembering the numbers correctly, is that there are 15 vertical standards being written in ETSI right now. And while that’s an astronomical amount of practice guidance, a tremendous amount of work being done, it still won’t cover all the products. And so there are still going to be a tremendous number of products sitting outside of those 15 vertical standards. And so again, the question will be: what does that mean for all the other products that are not in these 15?

CRob (24:43.431)
So as we wind down, what thoughts do you have? What do you want our audience to take away? What actions would you like them to take based off this conversation?

Stephanie Domas (24:55.825)
Yeah, depending on your role in the space. I mean, I’m going to go ahead and plug the OpenSSF, right? So the issues that we talked about, like what does an SBOM mean as open source starts to become divergent, right? All of these things, I think, will only be solved in organizations like the OpenSSF, which brings all the collective parties together to think about what it means. Same with some of these gray areas I mentioned in the EU CRA. We need those types of organizations. So if anyone listened to this and thought, well, maybe I have ideas on how to solve this, because all Stephanie did was complain about the problems, but I want to be a part of the solution: look at joining organizations like the OpenSSF. That’s where the solutions will come from. I can see my side of the problem, but even if I had ideas, I can’t independently solve it. Organizations like the OpenSSF can.

CRob (25:44.902)
Right.

It’s very much a community set of issues, and I think collectively we’re stronger and will have a better solution together. My friend, I need to arrange an ice cream social event in the near future before we’re covered in snow. But thank you for your time, Stephanie, and thank you to you and the team over at Canonical for all your hard work. And with that, we’re going to call this a wrap. Thank you, everybody.

Stephanie Domas (25:56.805)
Agreed.

CRob (26:17.019)
Happy open sourcing out there.