What’s in the SOSS? Podcast #58 – S3E10 Big Thoughts, Open Sources: Beyond the Hype: Brian Fox on Securing the Agentic Future of Open Source

By Podcast

Summary

In this inaugural episode of Big Thoughts, Open Sources, host CRob sits down with Brian Fox, Co-founder and CTO of Sonatype, to discuss the friction between rapid AI adoption and foundational software security. Brian shares insights from the 11th annual State of the Software Supply Chain Report, revealing the emergence of “slop squatting” and the high frequency of AI models recommending non-existent or vulnerable dependencies. The conversation explores how the Model Context Protocol (MCP) could revolutionize developer compliance and why the industry must fund the critical infrastructure supporting our trillion-dollar open source ecosystem.

Conversation Highlights

00:23 – Welcome: Big Thoughts, Open Sources inaugural episode.
01:01 – Brian Fox’s journey: Apache Maven, Sonatype, and OpenSSF.
02:53 – The critical role of Maven Central in the software supply chain.
03:26 – Decades of security trends: The persistent “Log4Shell” pattern.
05:34 – The “Tribal Knowledge” problem for AI agents.
07:06 – State of the Software Supply Chain Report: AI recommending made-up code versions.
08:09 – Explaining “Slop Squatting” and AI hallucinations.
10:03 – Model Context Protocol (MCP): Turning security tools into AI expert systems.
13:42 – Do not ignore 60 years of software engineering “physics”.
15:11 – The “Vulcan Mind Meld”: Injecting governance data into AI agents.
17:19 – Risks, rewards, and the need for ML SecOps discipline.
19:30 – “Inefficient code is still inefficient code”: Lessons from cloud migrations.
21:01 – Building an “AI-native SDLC” with upfront security.
24:18 – The sustainability crisis: Secure open source builds are not free.
27:17 – Conclusion: Funding open source infrastructure (8 trillion dollars of value).

Transcript

CRob (00:23)
Welcome, welcome, welcome to Big Thoughts, Open Sources, the OpenSSF’s new podcast. We’re gonna dive a little more deeply with some of the amazing community members, subject matter experts, and thought leaders within open source, cybersecurity, and high technology. Today in our inaugural episode, I’m very pleased to welcome a friend of the show, Brian Fox from Sonatype. How you doing, Brian?

Brian Fox (00:47)
I’m doing well, how are you?

CRob (00:48)
Excellent, we’re super glad to have you today. So maybe just for our audience members that are unfamiliar with your work, could you maybe talk a little bit about how you got into open source and kind of what you specialize in in this amazing ecosystem?

Brian Fox (01:01)
How I got into open source, that’s a long conversation. Geez, all the way back in 2002, 2003, I suppose, is when I really, really got involved. I had done some dabbling and some other things before that, but I got involved around that time in Apache Maven. I started writing some plugins.

They’re pretty popular plugins; people still use them these days. And those ultimately got pulled into the Apache project, the official project. I kind of stowed away and came in as a committer. A few years later, I joined up with some other folks that were also working on Maven and we co-founded Sonatype. It’s been 19 years now.

CRob (1:45)
Wow, that’s awesome.

Brian Fox (1:46)
Yeah, and so then I was ultimately the chair of the Apache Maven project for a long time. I’m still a member of the Apache foundation. And then more recently, even though it’s been a while, what, four or five years now, we joined the OpenSSF. I’ve been on the Governing Board with you for a while. I’m also on the Governing Board of FINOS, which is the financial open source foundation.

And for the last couple years, I’ve also been on the Singapore Monetary Authority’s Cyber Experts Group. Yeah, that’s fun. And so, you know, I’ve spent a lot of time focused on those things. One of the things that Sonatype does for the community is run Maven Central, right? Which, for people that don’t know, is where all the world’s open source Java components come from.

CRob
Yeah, it kind of sounds like a big job.

Brian Fox
It is a big job running critical infrastructure for all that kind of stuff. And so, you know, over the years that’s given us really interesting insights into what’s going on with the supply chain. So, you know, that’s sort of what led us to the path that brought me to OpenSSF and all those other things.

CRob (2:53)
Yeah, you and your team have been amazing participants and contributors to our community, even putting aside all the work with Maven. Just your participation in our working groups and our efforts has been amazing. Yeah, thank you. So today I think you wanted to talk about a topic a lot of people probably haven’t heard about. This little thing called AI. I have a hard time spelling that.

Brian Fox (3:16)
Right?

CRob (3:20)
Let’s just set the stage. What are you thinking about? What do we want to have a conversation around AI about?

Brian Fox (3:26)
Yeah, I think so. If we back up a little bit, right? It was probably around 2011, 2012, I suppose, that we started looking at some of the trends that we were seeing within the Maven Central downloads. We were seeing things like: the most popular version of a crypto library was the one that had a level 10 vulnerability, fixed and patched years before, but everybody was still using the vulnerable version. The Log4j, Log4Shell pattern has existed basically forever. It’s not actually new. And so that led us down the path to start doing different things to help our customers, A, understand what open source they were using. Way back then, nobody knew. They were like, we’re not using open source. What they really meant was, I don’t think I’m using Linux or OpenOffice. They didn’t understand

that their developers were pulling in all these components. And so the problem space back then was helping them have visibility and then providing data and controls to help them better govern their choices. So we’ve always been trying to help expedite and make it more efficient for developers to make better choices. And so it’s interesting to see this development of AI and all of the kind of things that have come along with it. So that got me thinking, you know, what?

When we started out to build some of the stuff that we built for our customers, my focus at that time was to make it possible to do the analysis in real time, so that it wasn’t the case that we just do all our stuff and then run a compliance scan at the end of the week or end of the month or something. So we were very focused on: it needs to be able to run on every single build, all the time. We need to be able to provide guidelines so that they don’t have to ask the legal team and wait six weeks for an answer, or the security team, right?

We were trying to capture those rules in the system so that they could make better choices in real time. And that was a big thing that organizations needed to be able to scale and become efficient. When you start dealing with thousands of developers, tens of thousands or millions of applications, the tribal knowledge approach kind of falls apart.

CRob (5:33)
Absolutely.

Brian Fox (5:34)
Right? And so you start thinking about what happens with AI. If you don’t have that stuff in an automated, you know, coded kind of way, how do you feed that to an agent? The agent’s not hanging out with you at lunch. It doesn’t get an onboarding session where we say things like, you know what, we never use an LGPL dependency because we ship our code. Or, you know, we only fix vulnerabilities five and above. Or, you know, whatever the policy may be, those things sometimes can be shared among developers.
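Rules like the ones Brian mentions can be captured as data that a tool or agent can evaluate instead of living as tribal knowledge. A minimal sketch of that idea in Python, with hypothetical rule names and thresholds chosen to mirror the examples above:

```python
# Minimal sketch: "tribal knowledge" policy encoded as data any tool or
# agent can evaluate. All names and thresholds here are hypothetical.
from dataclasses import dataclass, field

DENIED_LICENSES = {"LGPL-2.1", "LGPL-3.0"}  # "we never use an LGPL dependency"
MIN_SEVERITY_TO_FIX = 5.0                   # "we only fix vulnerabilities 5 and above"

@dataclass
class Dependency:
    name: str
    license: str
    cvss_scores: list = field(default_factory=list)

def policy_violations(dep: Dependency) -> list:
    """Return human-readable reasons this dependency needs attention."""
    reasons = []
    if dep.license in DENIED_LICENSES:
        reasons.append(f"{dep.name}: license {dep.license} is not allowed")
    for score in dep.cvss_scores:
        if score >= MIN_SEVERITY_TO_FIX:
            reasons.append(f"{dep.name}: vulnerability with CVSS {score} must be fixed")
    return reasons

# A dependency that trips both rules: banned license, one high-severity CVE
dep = Dependency("example-lib", "LGPL-3.0", cvss_scores=[3.1, 7.5])
for reason in policy_violations(dep):
    print(reason)
```

The point is only that once policy lives in code like this, the same rules can be fed to a CI check, an IDE plugin, or an agent prompt, rather than relayed over lunch.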

CRob (6:02)
Right. And it plays into the classic problem with engineering, which is that most engineers I’ve met don’t like doing documentation. And with AI entering the chat room, becoming this accelerant, it’s making decisions based off of knowledge, or the lack thereof, if you don’t have your security policy documented. It even goes back to thinking about the early days of Kubernetes,

where it was a big mental shift for people to have that software-defined network inside. And that helped, I think, a lot of organizations get better discipline and rigor, where you had fewer mysterious outages because the firewall guy in the back end said, I didn’t do anything, but try testing it again.

Brian Fox (6:46)
Right, right. Yeah, for sure. And that’s kind of what we’re seeing now. We’re seeing a lot of that, not just with agents. I mean, agents are sort of like the next big step, and not everybody’s fully there yet. Some people are dabbling with it. But even just with AI-assisted coding, you’re seeing the same problem: you come in and you say, hey, I want a new feature, and it just grabs whatever statistically likely dependency is going to be in there. We’ve done some studies. We recently released the State of the Software Supply Chain report. It’s a great report. Yeah, thank you. This was our 11th year. We just published it last month. And we did a deep dive on AI recommendations, you know, and we found that 30% of the time the models were recommending made-up versions.

CRob (7:35)
What?

Brian Fox (7:36)
Yeah, just making them up. You know, so it’s kind of shocking. In the real world, you know, if you’ve got a tool, that’s one of those things that fails fast, right? Like it picks a version that doesn’t exist, the thing goes and it immediately blows up and then, you know, Claude or whatever you’re using will go, whoops, and it’ll fix it. So it’s kind of funny, burns some tokens, but the downsides aren’t huge.

If the agent randomly picks a terrible dependency or a very old one that does in fact exist, I would argue that’s worse because there’s no fail fast in that scenario.

CRob (8:09)
Well, you also have the whole problem with slop squatting, where the models seem to, regardless of which vendor or provider you’re getting them from, fairly consistently suggest the wrong dependencies, kind of like typo squatting.

And so now the bad guys have recognized this fairly consistent behavior, and they’re uploading malicious packages with those bad names so that the build doesn’t break, because it can find what it needs.
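One defensive sketch of the check being described: before installing a model-suggested package, verify the name actually exists in the registry, and flag names that are suspiciously close to popular packages. The "registry" below is a toy in-memory set purely for illustration; a real tool would query the actual package index (PyPI, npm, and so on):

```python
# Sketch of a pre-install check for hallucinated or typo-squatted names.
# KNOWN_PACKAGES and POPULAR_PACKAGES are toy stand-ins for live registry data.
import difflib

KNOWN_PACKAGES = {"requests", "urllib3", "cryptography", "flask"}
POPULAR_PACKAGES = {"requests", "cryptography"}

def vet_suggestion(name: str) -> str:
    if name in KNOWN_PACKAGES:
        return "ok"
    close = difflib.get_close_matches(name, POPULAR_PACKAGES, n=1, cutoff=0.8)
    if close:
        # Doesn't exist, but looks like a popular package: classic squat bait.
        return f"suspicious: did you mean {close[0]}?"
    return "unknown: package does not exist in the registry"

print(vet_suggestion("requests"))        # an existing package passes
print(vet_suggestion("reqeusts"))        # near-miss of a popular name is flagged
print(vet_suggestion("totally-made-up")) # pure hallucination is rejected
```

The key property is that a nonexistent name fails loudly before anything is fetched, instead of "succeeding" by resolving to whatever an attacker uploaded under that name.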

Brian Fox (8:33)
So instead of it failing fast, it “succeeds” by grabbing a back door or something. Exactly, that’s exactly right. Slop squatting is what they call it now. Yeah, and so those are some of the challenges that we observe. And you take it to the extreme, where now you potentially have less sophisticated developers, not classically trained developers, using these tools, and they don’t know what they don’t know.

They wouldn’t necessarily stop to say, hey, I want you to now be a security expert and do an assessment of the code you just created. Like somebody who knows better will do that. But if you’ve not lived through the pain that you and I have lived, you wouldn’t think about that. And so on average, these things are going to potentially toss away a lot of the learnings that we’ve known for so many years.

CRob (09:21)
And that’s been a chronic challenge, trying to get the tribal knowledge instantiated, trying to help people make those right decisions. And the AI tools are amazing productivity and efficiency savers, but they are bringing in, as you said, classically untrained professionals who are not software engineers. They don’t understand how a system should be architected, or they don’t understand the appsec best practices that help secure the foundation of everything and not let the world fall apart.

Brian Fox (9:59)
The interesting thing is I think they can be if prompted correctly.

CRob (10:02)
Yes.

Brian Fox (10:03)
Right? And that’s where some of the knowledge gap comes in. And I think, what was it, last summer, Anthropic released MCP, the Model Context Protocol, right? I’ve spent a lot of time thinking about that pretty deeply and looking at all the tools. And I wrote about this. I think that there’s a high likelihood that we see a lot of the tools we use in software, in the SDLC today, moving more towards providing their capabilities as subject matter experts in “a thing” to an AI agent via MCP.

So I think that, for me, is pretty exciting for a number of reasons. It means, as a tool vendor, I don’t have to create a plug-in for IntelliJ and one for Eclipse and one for VS Code, as an example. MCP can be the same thing regardless of what tool you’re using, because it’s interacting with me via this standard API. And I’m kind of talking to it in more or less English prompts. So my ability to deliver the value that we have into whatever tool you feel like using today, and they change every week, is pretty cool.

And I would also argue that the ability to insert that information, and to potentially roll out the root prompting that all of the developers are using in these capabilities, is better. You’re going to get potentially better compliance than you do today. One of the things I struggled with forever was: we created an IDE plugin for our capabilities that demoed amazingly well. It showed, hey, this dependency has vulnerabilities, or license issues, or it would make recommendations. It was great. But developers just didn’t want to install more plugins. They just weren’t using it, right?

So while it demoed well, the actual usage of it was very low for compliance reasons. That’s a thing we struggle with. Every tool vendor struggles with that. But if you were able to insert that same information into an MCP capability and the company rolls out a root prompt that says something like, hey, every time you’re choosing a new project or a new dependency or trying to assess a dependency, use this MCP server to get up-to-date real-time information, it’s more or less going to do that every time. Right.
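The kind of lookup such an MCP "dependency advisor" tool might expose can be sketched as a plain function. Everything here is hypothetical: the catalog is a toy in-memory table, and a real server would back this with live registry and vulnerability data and register it as a tool through an MCP SDK rather than calling it directly:

```python
# Toy sketch of the question an agent would ask a dependency-advisor MCP
# tool: "is name:version an OK choice right now?" The CATALOG below is a
# hypothetical in-memory stand-in for real registry + vulnerability data.

CATALOG = {
    # package -> (latest_version, known_bad_versions)
    "log4j-core": ("2.25.1", {"2.14.1"}),
    "spring-core": ("6.2.1", set()),
}

def assess_dependency(name: str, version: str) -> str:
    """Return up-to-date, real-time advice for the agent to act on."""
    if name not in CATALOG:
        return f"{name} is not a known package; do not use it"
    latest, bad = CATALOG[name]
    if version in bad:
        return f"{name} {version} has known vulnerabilities; use {latest}"
    if version != latest:
        return f"{name} {version} exists but is outdated; latest is {latest}"
    return f"{name} {version} is current"

print(assess_dependency("log4j-core", "2.14.1"))
```

With a root prompt telling the agent to consult this tool before choosing any dependency, the made-up-version and stale-version failure modes discussed above get caught at generation time instead of at build time.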

CRob (12:13)
Yeah, and I think back to when I was a baby cyber person studying for my CISSP. There was a lot of talk in the exam materials about expert systems, which is exactly what I think the best-case scenario with these tools can be. It’s your expert. I don’t have to necessarily have this expertise. That’s right. But it takes a lot of knowledge to craft these expert systems. Let’s talk about how some of these models have been trained on potentially less-than-expert data.

Brian Fox (12:43)
Right, and that’s just, I think, the inevitable nature of it: the frontier models have been trained on, you know, all the stuff that they can find.

CRob (12:49)
The internet.

Brian Fox (12:49)
On the internet. Good information, bad information. People talking about terrible dependencies a lot might statistically make those more of a recommendation, right? And I think that can be okay, as long as you’re plugging real data into the models. The things that we’ve seen when we assess the models are that, like I said, they make up versions, they pick old versions arbitrarily, and they don’t know about anything newer than when they were last trained, which means both new versions and also vulnerabilities that might be in an older version.

So they’re inadvertently recommending, and it’s not even a recommendation really, it’s just using it, right? It’s putting it in there and writing the code around it. Imagine picking Spring, right? It’s just going to go, I’m going to write a Spring app and I’m going to use all Spring 5.

And then when you probe it, it’s like, oh, sorry, now I have to do two major framework updates. You almost have to throw it away and start over. And so if you’re able to plug the right data in up front, you don’t have all of that waste. And again, if you have people who don’t know to prompt it to ask about the latest versions, you can insert that under the hood. I think that’s what’s really cool.

Brian Fox (13:59)
But what we’re seeing currently, and I kind of wrote about this a little bit too, is that I feel like we’re throwing out all the lessons of the past. We’re talking about situations where whole tools are in question. SAST is under fire right now, right? Because when all the code can be completely generated, what’s the point of SAST? But I do think that we’ve learned a lot of things over the years, if we can figure out how to plug those capabilities into what’s being generated.

I think we can bring all of that forward with us. But the entire SDLC is going to have to adapt to that. It’s not going to be sufficient to say, I’ve got a bunch of developers over here, they’re doing AI-assisted development, and then later we’re going to run a bunch of SAST and produce legacy reports. That’s not going to work. The information has to be fed directly into the AI capabilities up front.

CRob (14:51)
And it’s the classic problem we’ve always had, where security historically is the last thing done or addressed; it’s bolted on at the end in a lot of cases. And this AI tooling, just the velocity it has, is a huge accelerant for the sins of our past we’ve never actually addressed.

Brian Fox (15:11)
Absolutely, but it also provides the Vulcan mind meld if you want to think of it. You now have that opportunity to plug that right into what the agent is thinking about in the moment. You can’t do that with the humans, but you can do that with the agent. And that’s what I think is potentially exciting about this.

Where I described it recently at a summit, we’re sort of in a bootstrap situation, though, right? Like, we don’t have all of those capabilities. Organizations haven’t rolled them all out. And so we’re sort of in this weird situation, one foot on the boat, one foot on the dock, and it’s not going to end well as we’re going through it. And worse, there are people that are afraid of the MCP protocol. So I hear lots of organizations say, we just block it completely.

Yeah. It’s a little hard to argue that that’s not a reasonable place to start, because of the nature of what’s happening. We saw just the other day the latest version of Shai-Hulud came out. Did you see this? And they used MCP capabilities for data exfiltration. And I’m like, come on, guys. There’s so much power in this, but now you’re making it look like a bad thing. So I think the industry and the tools and all of that are going to have to work through governance of the MCP capabilities, sanitization and inspection of the MCP capabilities, just like we’ve seen before. It’s sort of one of these things: when you’ve been around long enough, you can recognize the patterns. It’s new and exciting, but the pattern also rhymes with a bunch of stuff we’ve done before. And what frustrates me is that everybody charges ahead so fast, they just feel like it’s all new, it’s all different. It’s like, yes, but let’s not forget everything we’ve learned over the last 60 years of software engineering, because the physics is still the same.

CRob (16:50)
Well, and that’s where our AI/ML working group wrote a paper around MLSecOps. And the paper was really interesting; I recommend the audience check it out. It talked about classic techniques that assist and are helpful with AI development, and then it talked about some gaps, where things like undocumented policies are kind of a hindrance, and something that’s an opportunity to get addressed in the future.

Brian Fox (17:18)
Yeah.

CRob (17:19)
But I’m of two minds about my friends, our new robot overlords, in that they can be extremely helpful, but I don’t see a lot of people reconsidering those lessons of the past of software engineering. To say this is all brand new and totally different, like, well, you’ve got different GPU accelerators and dedicated cores to do things.

And now with this agentic, A2A architecture where things are more highly distributed, yeah, those are new twists, but it’s not brand new. We’ve done networking. We’ve done composite applications for decades.

Brian Fox (18:01)
Right. It’s the same thing, you know, we saw when we were like, oh, everything should be serverless, or let’s go to microservices: microservice architecture is going to solve everything, until it doesn’t. Right.

Or, you know, that’s no problem, we’ll just put it in the cloud, because I can just infinitely scale my machines, right? So I see the same pattern all over again: we sort of say, yes, but this time is different because, insert new technology, and then we realize, yes, but everything we know is still true. And that’s what I think we’re sort of grappling with right now as we go through this. What is absolutely true is that, you know, the AI capabilities, the agents, all these things are making everything happen so much faster. That can be good. It can also be bad. If you’ve forgotten all the lessons of the past, you’re just going to create a ton of crap much faster than you could before. And by the time you realize it, it might be too late.

CRob (18:57)
I’m familiar with a lot of enterprises that were going through a digital transformation journey, trying to update their heritage software to newer things and to the cloud to get that scalability and cost efficiency. But a lot of organizations didn’t take that journey, didn’t learn lessons from the past.

They just crammed what they had out into the cloud, and then a month later they get this giant bill and they’re shocked and confused. Or they didn’t understand that this thing wasn’t architected for zero trust, and they’re leaking data everywhere.

Brian Fox (19:30)
Right, right. Or just even the performance reasons: you were excited to infinitely scale, sure, but somebody’s not excited to infinitely pay a bigger bill. Inefficient code is still inefficient code, right? And that’s what I think we’re gonna see with AI capabilities: it’s just going to happen faster. And without humans in the loop, it provides fewer opportunities for us to course correct, which is why I’ve been taking a step back and thinking about how do we do that? How does it make sense? I think for some of the stuff that we’ve been doing as a business, it’s really exciting, because we have built up really interesting, unique data sets based on being able to see everything going on with Maven Central. We’ve long had Nexus, the repository manager that’s out there.

We have hundreds of thousands of instances. Those things are proxying for enterprises, not just Maven, but npm, NuGet, Python, all the things. And so that gives us visibility into other ecosystems, so we can understand what’s going on, what’s commonly used in enterprises, these kinds of things. And so all of that data can now be fed directly, like I said, the Vulcan mind meld, directly into these tools. And it makes it a lot easier.

So in some ways, when we sort this out, and people become less afraid of MCP capabilities, we can directly inject a stream of high quality data to make all of those things better. But, before businesses can really leverage that, they have to get out of the experimentation phase. They have to roll that out. And these things are kind of interrelated. What we see is that organizations are afraid to let developers just go with AI assisted development because it’s not governed, because they can’t govern it.

And those are echoes of what I saw firsthand during the early days of open source. Like I said in the beginning, people said, we’re not using it. And then I’d tell them, yes, you are. And then their reaction was, well, just shut it all out. It’s like, right, then you can’t do anything. So the reaction that some enterprises have right now, of, we’re just not going to do anything with AI, is just setting themselves up to be left behind.

The right answer is to do it thoughtfully and use tools to help them make better decisions.

CRob (21:43)
So reflecting back, you mentioned in your report that you have some guidance for people around AI. What would the top two or three things be? If somebody’s thinking about moving more aggressively in this AI direction, what can they take away and do immediately, or start thinking about?

Brian Fox (21:01)
Yeah, I mean, I think the biggest thing is humans like to try to take the old patterns and just adapt them to the new technologies, like we were talking about: take an inefficient architecture and throw it in the cloud, it’s gonna fix everything. No, it’s not. And I think that’s true of, let’s call it the AI SDLC, right? An AI-native SDLC might resemble a normal SDLC, but it should be designed differently, right?

You know, trying to do the checks and balances after the fact is even worse than it was with humans. You need to think about providing that information up front, so that you get the value in the creation of the code, and not try to chase it afterward. You need to be able to think about how all of these things can be done in parallel with agents, breaking these things down. So what I would say is, don’t just try to do what you’re doing today and use AI to do it. Take a step back and really assess how you can adopt this.

It’s sort of like the conversations we were having on the board today about developers: maintainers are getting overwhelmed with AI slop. It’s true. One reaction is to stop allowing that to be contributed, to just dismiss everything AI. That’s not a good answer. A better answer is, let’s figure out how to help them use AI tools to be able to keep up with that, right? Because that’s what it’s good for. It can review and assess the patches faster than the maintainers can, and then provide sort of a first-pass filter, if you will, right?

But that requires thinking outside of the box. Don’t just try to keep doing what you’re doing and try to keep up with it. Think about how you judo move that into something that makes more sense for your organization.

CRob (23:42)
And this skirts along another project you’re passionate about: sustainability and funding. It is one thing to admonish the developers, why aren’t you using AI? But there are real costs involved around this. And just to say, well, you should use the tool, doesn’t help them when there’s no funding. They don’t have access to infrastructure to be able to do these things. And that, I think, touches on your passion project around trying to help get the package repositories more sustainably funded.

Brian Fox (24:18)
That’s right. Yeah, I mean, if you take a step back and you think about open source when you probably started, certainly when I started, what that really meant was you were donating your time. And you were sharing your thoughts, and you were sharing your words via code. And that was in a time when it was perfectly acceptable. In fact, it was the only choice, that you built things and you shipped them off of your laptop.

When the Apple MacBook Air launched, the first one, it launched with a version of Maven on it that was signed by my key, my personal key, that was built on my personal laptop. So everybody that bought the launch version of the MacBook Air had my signed code on it. That’s kind of cool.

But also kind of scary, right, when you think about it. Like, what if my laptop was compromised? And that’s the world we live in today. Fortunately, in 2009 or whatever it was, that was a little bit more remote of a chance. And so everybody thinks like, well, that’s crazy. You wouldn’t do that anymore. So what does it mean today? It means you have to have certified builds. Usually that means it’s running in the cloud, and it’s attested to, and all these kinds of things. And that’s not free.

Like, I can’t donate that; I’m not a hyperscaler. Most open source gets that infrastructure donated by these big companies, but there’s a lot of opportunity for abuse, right? And these types of things. At the end of the day, it’s not free. So the cost of producing open source is not free anymore. It’s not just donating my time with equipment and internet access I already have, right? That’s the big difference, and I think people don’t really recognize that. And now fast forward to what we were just talking about with AI: the obvious answer to deal with AI-piled-on PRs is to have AI assistants help.

Who’s gonna pay for that? It’s literally not free. It costs electricity. Last time I checked, we still pay for our electricity, regardless.

CRob (26:12)
Electricity, water.

Brian Fox (26:14)
Right, all of these things, right? These have very real implications. They’re just not free. And so, there’s no good answer to that. How does that get aligned? How do we continue to create open source software that can power all of these industries, in a world where it’s not just somebody donating their time and thoughts? There are no good answers. But we’re working towards trying to align that. Because the bulk of open source software, certainly in our world, in these areas, is being consumed by organizations that are selling for-profit software, more or less.

There’s definitely a lot of hobbyists and stuff like that, but the biggest consumers from our repositories are all the giant companies. If I named the top 100, you would know every single one of them. And I’m sure that’s true for all the registries. So there has to be an answer in there. I don’t know the stat off the top of my head, but the Linux Foundation does the census, right? And it’s billions of dollars of economic value that open source creates. Eight billion? Nine billion?

CRob (27:16)
Trillion.

Brian Fox (27:17)
Oh, it’s a trillion now?

CRob (27:17)
It’s eight trillion, I believe.

Brian Fox
Eight trillion dollars’ worth of economic value being produced by open source. One percent of that would pay for a lot of that infrastructure, and then a whole bunch more. And so I think that’s what ultimately we have to figure out how to balance. AI just makes that worse, because it moves the bar even further.

CRob (27:39)
Interesting conversation. Any final thoughts you want our listeners and viewers to take away?

Brian Fox (27:46)
Well, certainly go take a look at The State of the Software Supply Chain Report.

CRob (27:51)
Great report.

Brian Fox (27:52)
sonatype.com/SSCR. Certainly, I’ve also written a number of blogs; you can find those at our website as well. They deep dive into all these topics we touched on here. Yeah.

CRob (28:02)
Excellent. We’ll put some links in our summary. So Brian, thank you for our inaugural episode of Big Thoughts, Open Sources. I think this was an amazing conversation that we’re gonna continue adding onto and reconsidering in the coming weeks and months.

Brian Fox (28:20)
Yeah, thanks for having me kick it off in Napa.

CRob (28:25)
Thank you. Well, I hope everybody stays cyber safe and sound. We’ll talk to you soon.

What’s in the SOSS? Podcast #38 – S2E15 Securing AI: A Conversation with Sarah Evans on OpenSSF’s AI/ML Initiatives

By Podcast

Summary

In this episode of “What’s in the SOSS,” we welcome back Sarah Evans, Distinguished Engineer at Dell Technologies and a key figure in the OpenSSF’s AI/ML Security working group. Sarah discusses the critical work being done to extend secure software development practices to the rapidly evolving field of AI. She dives into the AI Model Signing project, the groundbreaking MLSecOps whitepaper developed in partnership with Ericsson, and the crucial work of identifying and addressing new personas in AI/ML operations. Tune in to learn how OpenSSF is shaping the future of AI security and what challenges and opportunities lie ahead.

Conversation Highlights

0:00 Welcome and Introduction to Sarah Evans
0:48 Sarah Evans: Role at Dell Technologies and Involvement in OpenSSF
1:38 The OpenSSF AI/ML Working Group: Genesis and Goals
3:37 Deep Dive: The AI Model Signing Project with Sigstore
4:28 AI Model Signing: Benefits for Developers
5:20 Transition to the MLSecOps White Paper
5:49 The Mission of the MLSecOps White Paper: Addressing Industry Gaps
7:00 Collaboration with Ericsson on the MLSecOps White Paper
8:15 Identifying and Addressing New Personas in AI/ML Ops
10:04 The Power of Open Source in Extending Previous Work
10:15 Future Directions for OpenSSF’s AI/ML Strategy
11:21 OpenSSF’s Broader AI Security Focus
12:08 Sneak Peek: New Companion Video Podcast on AI Security
12:31 Sarah’s Personal Focus: The Year of the Agents (2025)
13:00 Security Concerns: Bringing Together Data Models and Code in AI Applications
14:00 Conclusion and Thanks

Transcript

0:00 Intro Music & Promo Clip: We have so much experience in applying secure software development to CI/CD and software, we can extend what we’ve learned to the data teams and to those AI/ML engineering teams because ultimately, I don’t think that we want a world where we have to do separate security governance across AI apps.

CRob:

0:20: Welcome, welcome, welcome to What’s in the SOSS, where we talk to interesting characters from around the open source security ecosystem: maintainers, engineers, thought leaders, contributors. I just get to talk to a lot of really great people along the way. Today we have a friend of the show; we’ve already had discussions with her in the past. I am so pleased and proud to introduce my friend Sarah Evans. Sarah, for our audience, could you remind them who you are, what you do, and what you’ve been up to since our last talk?

Sarah Evans:

0:57: Well, thanks for having me here. I’m a distinguished engineer at Dell Technologies, and I have two roles. One is doing security applied research for my company, looking at the future of security in our products and what innovation we need to explore to improve security by design. My second role is to activate my company to participate in OpenSSF, which I have thoroughly enjoyed, getting to work with friends such as yourself. I am very active and engaged in the AI/ML working group, trying to advocate for AI security.

CRob:

1:37: Awesome, yeah. And that actually brings us to our talk today. Our friends within your working group, the AI/ML working group, have had a flurry of activity lately. First off, let’s give the audience some context: what is this group, and what are some of your goals?

Sarah Evans:

1:58: Yeah, so the AI/ML working group really came to fruition about a year and a half ago, I think. We needed a space where we could talk about how the work that software developers were doing would change as they started to build applications with AI in them. Were there things we were doing today that could apply to the way the technology was changing?

One of the initial concerns was that we know a lot about secure software development, but we may know less about AI. So is a home for AI in OpenSSF appropriate? Should we be deeply partnering with some of the other foundations that are creating these data sets and creating the tools and models? So we started the working group with a commitment that we would deeply engage with the other groups around the ecosystem, which we have done. But we’ve also been looking for the places that are uniquely in the OpenSSF wheelhouse, our swim lane of expertise, in extending software security to AI applications, and I think we’ve done a really good job of exploring some of those places.

One of them has been a white paper that we partnered with another member, Ericsson, to deliver, and that is something we’re very proud of sharing with the community.

CRob:

3:28: Great, I’m really excited to talk about these projects, because I for one welcome our robot overlords. Let’s start off: you had a big announcement that really seems to have captured the imagination of the community. Let’s talk about the AI model signing project.

Sarah Evans:

3:47: Yes, so the model signing project, we worked that as a special interest group within our working group. We were approached by some folks who were working in partnership with Sigstore. The idea was that if you can use Sigstore to sign code, could you extend Sigstore to sign a model and close a gap in the industry? And as you know, we were able to do that. A team of people came together in open source fashion to extend a tool to a new use case. And that’s just been very exciting to watch evolve.

CRob

4:27: That’s awesome.

Sarah Evans:

4:28: So thinking about it from the developer perspective, I’m a developer working in the AI, how does this help me?

CRob

4:36: Right?

Sarah Evans:

4:36: So right now, if you are pulling a model off of Hugging Face as an example, you don’t have any cryptographic digital signature on that model that verifies it, the way you would with code. If that model has been signed with the Sigstore components, then you have the information you would use to validate code, and you can follow some of those similar processes to validate a signed model.

CRob

5:07: Pretty cool.

Sarah Evans

5:08: Yeah, it’s a really good use case for supply chain security, extending what we know about software to the models and data that are part of our AI applications.
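To make the gap Sarah describes concrete: without a signature, the best a developer can do with a downloaded model is pin a digest at release time and compare it by hand. The sketch below is a minimal illustration of that manual baseline in plain Python (no Sigstore involved; the file name and helper names are our own, purely illustrative). Sigstore model signing layers verified signer identity and provenance on top of exactly this kind of integrity check.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file and return its hex SHA-256 digest."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_model(path: Path, expected_digest: str) -> bool:
    """Manual integrity check: do the downloaded bytes match the pinned digest?"""
    return sha256_of(path) == expected_digest.lower()

if __name__ == "__main__":
    model = Path("model.bin")                  # illustrative file name
    model.write_bytes(b"fake model weights")   # stand-in for a real download
    pinned = sha256_of(model)                  # digest pinned at publish time
    print(verify_model(model, pinned))         # True only if bytes are untouched
```

Note what this cannot tell you: who produced the file. A matching digest proves the bytes are unchanged, but only a signature ties them to a verified identity, which is the gap the model signing project closes.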

CRob

5:20: This seems to be kind of a theme for you: taking classic ASA and applying it to the newer technologies. So let’s move on to the white paper. You and I have collaborated on some graphics for this, and you’ve got a couple of folks you’re working with on the white paper that you’re shepherding through review and publication; you should be able to read it now. So let’s talk about the white paper: what’s it about? What’s the mission of it?

Sarah Evans

5:49: When the AI/ML working group first kicked off, I knew that we had seen this evolution of developing on open source software with processes called DevOps, which then evolved into DevSecOps over time. And so with the disruptive technology around AI/ML, I wanted to know what processes a data scientist or an AI/ML engineer used, and whether they had the security governance they needed in their operational processes.

So I started to look at what is DataOps, what is MLOps, what is LLMOps, all the alphabet soup of ops. And I couldn’t find a lot of information online. So I thought: this is an industry gap, and we have so much experience in applying secure software development to CI/CD and software.

We can extend what we’ve learned to the data teams and to those AI/ML engineering teams because ultimately I don’t think that we want a world where we have to do separate security governance across AI apps that have these different operational pieces in them.

I was doing my research and I found a white paper by Ericsson on MLSecOps in the telco environment. Ericsson being a fellow member of OpenSSF, I worked through their OSPO and through some of the connections that we have in OpenSSF and said, hey, can you introduce me to those authors? I would love to see if we could up-level that as a general resource to the community as an OpenSSF whitepaper. We were able to do that, and they have been a fantastic partner in collaboration.

And so now we have for the industry an MLSecOps white paper, a reference architecture, and some documentation that extends in two ways:

  1. One is if you’re a software developer now and you’re being asked to build an AI app, you have more information about what goes on in that MLOps environment.
  2. And if you are a person who’s creating an MLOps app and you haven’t had secure development training before, you now have a resource. So it really serves both existing members of our community and potential new members.

CRob

8:14: That’s really awesome. Congrats on that. Another area we’ve collaborated on: the OpenSSF has a series of personas, five of them, that organize and drive our work. We have a maintainer/developer persona, an OSPO persona, an executive persona, and so forth. But one thing you realized early on as you were developing this white paper is that there were some gaps. Could you maybe talk about those gaps and what we’ve done to address them?

Sarah Evans

8:46: Yeah, where we found the gaps were in the sub-personas. Those main core personas that OpenSSF has been working with were just solid. We still have developers and maintainers, we still have security engineers, we still have folks working in our open source program offices, but the sub-personas were very software-developer focused.

They really didn’t include some of the personas that we were seeing related to curating data sets, putting together end-to-end architectures, or putting together a pipeline for machine learning as a data engineer. So I worked from the language in that original Ericsson white paper, which we up-leveled to an OpenSSF white paper, to take those personas that work in the MLOps space and add them as sub-personas within OpenSSF. So now we can all start to have the same language and understanding around who might be developing software applications: new members of our community that we want to be inclusive of, and language to understand how to reach them and partner with them.

CRob

10:04: I just love the power of open source where you find some previous work, you get value out of it, and then you expand it.  Thank you so much for contributing that back.

Sarah Evans

10:13: Absolutely.

CRob

10:15: And where are you going from here? Where are the next steps around the white paper?

Sarah

10:19: I think we want to spend some time championing it and meeting with our community. We’ve discovered that OpenSSF would potentially like to have a broader AI/ML strategy or program, so we want to understand how those strategic efforts will evolve and make sure that we can plug into them and provide resources that strategically move OpenSSF forward into this new space. Those could include an MLSecOps document or maybe even a converged enterprise view of multiple ops, but we’re also open to looking at some of the other areas that have been identified, such as dealing with AI slop or other concerns related to AI/ML.

I think there’s a really great opportunity for OpenSSF to look through our stack of tools and processes and understand how we can extend those to AI/ML use cases and applications.

I know that there is an opportunity to have a strategic program around AI and securing AI applications, and I’m really excited and looking forward to what the future of OpenSSF tools, processes, procedures, best practices look like so we can really support our software developers as they’re developing secure AI applications.

CRob

11:12: That’s awesome. I’m really looking forward to collaborating with you all and kind of championing and showcasing the work going forward. So thank you very much.

Let’s move along. We will be creating a new companion video podcast focused on this amazing community of AI security experts we have here within OpenSSF and within the broader community, and we’ll be talking about AI security news and topics. And I’m going to take this opportunity to give the listeners a sneak peek of what we might be discussing very soon. So from your perspective, Sarah, beyond these cool projects that you’re working on, what are you personally keeping an eye on in this fast-moving AI space?

Sarah Evans

12:42: Well, I’ll tell you, 2025 is the year of the agents, and understanding the accelerated rate of agents and the impact they will have on AI applications has been something I’ve been spending a lot of time on.

CRob

12:56: Pretty cool. I’m looking forward to learning more with everyone together. And from your perspective again, what’s keeping you up at night in regards to this crazy AI/ML, LLM, GenAI, agentic, blah blah blah machine space? What are you concerned about from a security perspective?

Sarah Evans

I think for me, from a security perspective, bringing together data, models, and code and deploying them as an end-to-end AI application puts a lot of pressure on teams that may not have had to work tightly together before to begin to work tightly together. And so that’s why the personas, the converged operations, and thinking about how we apply the security we know to new areas are so important, because we don’t have a moment to lose.

There’s such accelerated excitement around leveraging AI and leveraging agents that it’s going to be very important for us to have a common way to talk to each other and to begin to solve problems and challenges so that we can innovate with this technology.

CRob

13:59: Excellent. Well, Sarah, I really appreciate your time coming to talk to us about these amazing things going on and giving us a sneak peek into the future. I want to thank you again on behalf of the foundation, our community, and all the maintainers and enterprises that we serve. So thanks for showing up today.

Sarah Evans

14:17: Yeah, thanks, CRob.

CRob

14:18: Yeah, and that’s a wrap for today. Thank you for listening to What’s in the SOSS. Have a great day, and happy open sourcing.

Outro

14:29: Like what you’re hearing? Be sure to subscribe to What’s in the SOSS on Spotify, Apple Podcasts, AntennaPod, Pocket Casts, or wherever you get your podcasts. There’s a lot going on with the OpenSSF and many ways to stay on top of it all. Check out the newsletter for open source news, upcoming events, and other happenings. Go to OpenSSF.org/newsletter to subscribe. Connect with us on LinkedIn for the most up-to-date OpenSSF news and insight, and be a part of the OpenSSF community at OpenSSF.org/getinvolved. Thanks for listening, and we’ll talk to you next time on What’s in the SOSS.

OpenSSF at DEF CON 33: AI Cyber Challenge (AIxCC), MLSecOps, and Securing Critical Infrastructure

By Blog

By Jeff Diecks

The OpenSSF team will be attending DEF CON 33, where the winners of the AI Cyber Challenge (AIxCC) will be announced. We will also host a panel discussion at the AIxCC village to introduce the concept of MLSecOps.

AIxCC, led by DARPA and ARPA-H, is a two-year competition focused on developing AI-enabled software to automatically identify and patch vulnerabilities in source code, particularly in open source software underpinning critical infrastructure.

OpenSSF is supporting AIxCC as a challenge advisor, guiding the competition to ensure its solutions benefit the open source community. We are actively working with DARPA and ARPA-H to open source the winning systems, infrastructure, and data from the competition, and are designing a program to facilitate their successful adoption and use by open source projects. At least four of the competitors’ Cyber Resilience Systems will be open sourced on Friday, August 8 at DEF CON. The remaining CRSs will also be open sourced soon after the event.

Join Our Panel: Applying DevSecOps Lessons to MLSecOps

We will be hosting a panel talk at the AIxCC Village, “Applying DevSecOps Lessons to MLSecOps.” This presentation will delve into the evolving landscape of security with the advent of AI/ML applications.

The panelists for this discussion will be:

  • Christopher “CRob” Robinson – Chief Security Architect, OpenSSF
  • Sarah Evans – Security Applied Research Program Lead, Dell Technologies
  • Eoin Wickens – Director of Threat Intelligence, HiddenLayer

Just as DevSecOps integrated security practices into the Software Development Life Cycle (SDLC) to address critical software security gaps, Machine Learning Operations (MLOps) now needs to transition into MLSecOps. MLSecOps emphasizes integrating security practices throughout the ML development lifecycle, establishing security as a shared responsibility among ML developers, security practitioners, and operations teams. When thinking about securing MLOps using lessons learned from DevSecOps, the conversation includes open source tools from OpenSSF and other initiatives, such as Supply-Chain Levels for Software Artifacts (SLSA) and Sigstore, that can be extended to MLSecOps. This talk will explore some of those tools, as well as potential tooling gaps the community can partner to close. Embracing the MLSecOps methodology enables early identification and mitigation of security risks, facilitating the development of secure and trustworthy ML models.

We invite you to join us on Saturday, August 9, from 10:30-11:15 a.m. at the AIxCC Village Stage to learn more about how the lessons from DevSecOps can be applied to the unique challenges of securing AI/ML systems and to understand the importance of adopting an MLSecOps approach for a more secure future in open source software.

About the Author

Jeff Diecks is the Technical Program Manager for the AI Cyber Challenge (AIxCC) at the Open Source Security Foundation (OpenSSF). A participant in open source since 1999, he’s delivered digital products and applications for dozens of universities, six professional sports leagues, state governments, global media companies, non-profits, and corporate clients.

MLSecOps Whitepaper

Visualizing Secure MLOps (MLSecOps): A Practical Guide for Building Robust AI/ML Pipeline Security

By Blog, Guest Blog

By Sarah Evans and Andrey Shorov

The world of technology is constantly evolving, and with the rise of Artificial Intelligence (AI) and Machine Learning (ML), the demand for robust security measures has become more critical than ever. As organizations rush to deploy AI solutions, the gap between ML innovation and security practices has created unprecedented vulnerabilities we are only beginning to understand.

A new whitepaper, “Visualizing Secure MLOps (MLSecOps): A Practical Guide for Building Robust AI/ML Pipeline Security,” addresses this critical gap by providing a comprehensive framework for practitioners focused on building and securing machine learning pipelines.

Why MLSecOps, Why Now

Why this topic? Why now? 

AI/ML systems encompass unique components, such as training datasets, models, and inference pipelines, that introduce novel weaknesses demanding dedicated attention throughout the ML lifecycle.

The evolving responsibilities within organizations have led to an intersection of expertise:

  1.     Software developers, who specialize in deploying applications with traditional code, are increasingly responsible for incorporating data sets and ML models into those applications.
  2.     Data engineers and data scientists, who specialize in data sets and creating algorithms and models tailored to those data sets, are expected to integrate data sets and models into applications using code.

These trends have exposed a gap in security knowledge, leaving AI/ML pipelines susceptible to risks that neither discipline alone is fully equipped to manage. To resolve this, we investigated how we could adapt the principles of secure DevOps to secure MLOps by creating an MLSecOps framework that empowers both software developers and AI-focused professionals with the tools and processes needed for end-to-end ML pipeline security. During our research, we identified a scarcity of practical guidance on securing ML pipelines using open-source tools commonly employed by developers. This white paper aims to bridge that gap and provide a practical starting point.

What’s Inside the Whitepaper

This whitepaper is the result of a collaboration between Dell and Ericsson, leveraging our shared membership in the OpenSSF with the foundation stemming from a publication on MLSecOps for telecom environments authored by Ericsson researchers [https://www.ericsson.com/en/reports-and-papers/white-papers/mlsecops-protecting-the-ai-ml-lifecycle-in-telecom]. Together, we have expanded upon Ericsson’s original MLSecOps framework to create a comprehensive guide that addresses the needs of diverse industry sectors. 

We are proud to share this guide as an industry resource that demonstrates how to apply open-source tools from secure DevOps to secure MLOps. It offers a progressive, visual learning experience where concepts are fundamentally and visually layered upon one another, extending  security beyond traditional code-centric approaches. This guide integrates insights from CI/CD, the ML lifecycle, various personas, a sample reference architecture, mapped risks, security controls, and practical tools.

The document introduces a visual, “layer-by-layer” approach to help practitioners securely adopt ML, leveraging open-source tools from OpenSSF initiatives such as Supply-Chain Levels for Software Artifacts (SLSA), Sigstore, and OpenSSF Scorecard. It further explores opportunities to extend these tools to secure the AI/ML lifecycle using MLSecOps practices, while identifying specific gaps in current tooling and offering recommendations for future development.
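One concrete way to picture the layer-by-layer approach is the artifact record that supply-chain tools attest over: each pipeline artifact (dataset, model, configuration) is listed with its digest, and tooling such as Sigstore signing or SLSA provenance then vouches for that record. The sketch below is a hedged illustration in plain Python; the manifest shape, schema string, and function names are our own inventions for illustration, not a format defined by the whitepaper, SLSA, or Sigstore.

```python
import hashlib
import json
from pathlib import Path

def digest(path: Path) -> str:
    """Hex SHA-256 of an artifact file."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def build_manifest(artifacts: dict[str, Path]) -> str:
    """Record each pipeline artifact (dataset, model, ...) with its digest.

    The resulting JSON is the kind of subject list a signing or
    provenance tool would attest over; the schema here is illustrative.
    """
    entries = [
        {"name": name, "sha256": digest(path)}
        for name, path in sorted(artifacts.items())
    ]
    return json.dumps({"schema": "example/v1", "subjects": entries}, indent=2)

if __name__ == "__main__":
    data = Path("train.csv"); data.write_text("x,y\n1,2\n")
    model = Path("model.bin"); model.write_bytes(b"weights")
    print(build_manifest({"dataset": data, "model": model}))
```

The point of the exercise: once datasets and models are enumerated this way alongside code, the existing code-centric tools have something to sign and verify, which is exactly the extension the whitepaper explores.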

For practitioners involved in the design, development, deployment, operation, and securing of AI/ML systems, this whitepaper provides a practical foundation for building robust and secure AI/ML pipelines and applications.

Join Us

Ready to help shape the future of secure AI and ML?

Read the Whitepaper

Join the AI/ML Security Working Group

Explore OpenSSF Membership

Author Bios

Sarah Evans delivers technical innovation for secure business outcomes through her role as the security research program lead in the Office of the CTO at Dell Technologies. She is an industry leader and advocate for extending secure operations and supply chain development principles in AI. Sarah also ensures the security research program explores the overlapping security impacts of emerging technologies in other research programs, such as quantum computing. Sarah leverages her extensive practical experience in security and IT, spanning small businesses, large enterprises (including the highly regulated financial services industry and a 21-year military career), and academia (computer information systems). She earned an MBA, an AI/ML professional certificate from MIT, and is a Certified Information Security Manager. Sarah is also a strategic and technical leader representing Dell in OpenSSF, a foundation for securing open source software.

Andrey Shorov is a Senior Security Technology Specialist at Product Security, Ericsson. He is a cybersecurity expert with more than 16 years of experience across corporate and academic environments. Specializing in AI/ML and network security, Andrey advances AI-driven cybersecurity strategies, leading the development of cutting-edge security architectures and practices at Ericsson and contributing research that shapes industry standards. He holds a Ph.D. in Computer Science and maintains CISSP and Security+ certifications.

🎉 Celebrating Five Years of OpenSSF: A Journey Through Open Source Security

By Blog

August 2025 marks five years since the official formation of the Open Source Security Foundation (OpenSSF). Born out of a critical need to secure the software supply chains and open source ecosystems powering global technology infrastructure, OpenSSF quickly emerged as a community-driven leader in open source security.

“OpenSSF was founded to unify and strengthen global efforts around securing open source software. In five years, we’ve built a collaborative foundation that reaches across industries, governments, and ecosystems. Together, we’re building a world where open source is not only powerful—but trusted.” — Steve Fernandez, General Manager, OpenSSF

đŸŒ± Beginnings: Answering the Call

OpenSSF was launched on August 3, 2020, consolidating earlier initiatives into a unified, cross-industry effort to protect open source projects. The urgency was clear—high-profile vulnerabilities such as Heartbleed served as stark reminders that collective action was essential to safeguard the digital infrastructure everyone depends on.

“From day one, OpenSSF has been about action—empowering the community to build and adopt real-world security solutions. Five years in, we’ve moved from ideas to impact. The work isn’t done, but the momentum is real, and the future is wide open.” — Christopher “CRob” Robinson, Chief Architect, OpenSSF

🚀 Milestones & Major Initiatives

Over the past five years, OpenSSF has spearheaded critical initiatives that shaped the landscape of open source security:

2021 – Secure Software Development Fundamentals:
Launching free educational courses on edX, OpenSSF equipped developers globally with foundational security practices.

“When we launched our first free training course in secure software development, we had one goal: make security knowledge available to every software developer. Today, that same mission powers all of OpenSSF—equipping developers, maintainers, and communities with the tools they need to make open source software more secure for everyone.” — David A. Wheeler, Director, Open Source Supply Chain Security, Linux Foundation

2021 – Sigstore: Open Source Signing for Everyone:
Sigstore was launched to make cryptographic signing accessible to all open source developers, providing a free and automated way to verify the integrity and provenance of software artifacts and metadata.

“Being part of the OpenSSF has been crucial for the Sigstore project. It has allowed us to not only foster community growth, neutral governance, and engagement with the broader OSS ecosystem, but also given us the ability to coordinate with a myriad of in-house initiatives — like the securing software repos working group — to further our mission of software signing for everybody. As Sigstore continues to grow and become a core technology for software supply chain security, we believe that the OpenSSF is a great place to provide a stable, reliable, and mature service for the public benefit.”
— Santiago Torres-Arias, Assistant Professor at Purdue University and Sigstore TSC Chair Member 

2021-2022 – Security with OpenSSF Scorecard & Criticality Score:
Innovative tools were introduced to automate and simplify assessing open source project security risks.

“The OpenSSF has been instrumental in transforming how the industry approaches open source security, particularly through initiatives like the Security Scorecard and Sigstore, which have improved software supply chain security for millions of developers. As we look ahead, AWS is committed to supporting OpenSSF’s mission of making open source software more secure by default, and we’re excited to help developers all over the world drive security innovation in their applications.” — Mark Ryland, Director, Amazon Security at AWS

2022 – Launch of Alpha-Omega:

Alpha-Omega (AO), an associated project of the OpenSSF launched in February 2022, is funded by Microsoft, Google, Amazon, and Citi. Its mission is to enhance the security of critical open source software by enabling sustainable improvements and ensuring vulnerabilities are identified and resolved quickly. Since its inception, the Alpha-Omega Fund has invested $14 million in open source security, supporting a range of projects including LLVM, Java, PHP, Jenkins, Airflow, OpenSSL, AI libraries, Homebrew, FreeBSD, Node.js, jQuery, RubyGems, and the Linux Kernel. It has also provided funding to key foundations and ecosystems such as the Apache Software Foundation (ASF), Eclipse Foundation, OpenJS Foundation, Python Foundation, and Rust Foundation.

2023 – SLSA v1.0 (Supply-chain Levels for Software Artifacts):
Setting clear and actionable standards for build integrity and provenance, SLSA was a turning point for software supply chain security and became essential in reducing vulnerabilities.
At the same time, community-driven tools like GUAC (Graph for Understanding Artifact Composition) built on SLSA’s principles, unlocking deep visibility into software metadata, making it more usable, actionable and connecting the dots across provenance, SBOMs and in-toto security attestations.

“Projects like GUAC demonstrate how open source innovation can make software security both scalable and practical. Kusari is proud to have played a role in these milestones, helping to strengthen the resiliency of the open source software ecosystem.”

— Michael Lieberman, CTO and Co-founder at Kusari and Governing Board member

2024 – Principles for Package Repository Security:

Offering a voluntary, community-driven security maturity model to strengthen the resilience of software ecosystems.

“Developers around the world rely daily on package repositories for secure distribution of open source software. It’s critical that we listen to the maintainers of these systems and provide support in a way that works for them. We were happy to work with these maintainers to develop the Principles for Package Repository Security, to help them put together security roadmaps and provide a reference in funding requests.” — Zach Steindler, co-chair of Securing Software Repositories Working Group, Principal Engineer, GitHub

2025

OSPS Baseline:
This initiative brought tiered security requirements to open source projects and was quickly adopted by groundbreaking projects such as GUAC, OpenTelemetry, and bomctl.

“The Open Source Project Security Baseline was born from real use cases, with projects needing robust standardized guidance around how to best secure their development processes. OpenSSF has not only been the best topical location for contributors from around the world to gather — the foundation has gone above and beyond by providing project support to extend the content, promote the concept, and elevate Baseline from a simple control catalog into a robust community and ecosystem.” — Eddie Knight, OSPO Lead, Sonatype

AI/ML Security Working Group: 

The MLSecOps White Paper from the AI/ML Security Working Group marks a major step in securing machine learning pipelines and guiding the future of trustworthy AI.

“The AI/ML working group tackles problems at the confluence of security and AI. While the AI world is moving at a breakneck pace, the security problems that we are tackling in the traditional software world are also relevant. Given that AI can increase the impact of a security vulnerability, we need to handle them with determination. The working group has worked on securing LLM generating code, model signing and a new white paper for MLSecOps, among many other interesting things.” — Mihai Maruseac, co-chair of AI/ML Security Working Group, Staff Software Engineer, Google

🌐 Growing Community & Policy Impact

OpenSSF’s role rapidly expanded beyond tooling, becoming influential in global policy dialogues, including advising the White House on software security and contributing to critical policy conversations such as the EU’s Cyber Resilience Act (CRA).

OpenSSF also continues to invest in community-building and education initiatives. This year, the Foundation launched its inaugural Summer Mentorship Program, welcoming its first cohort of mentees working directly with technical project leads to gain hands-on experience in open source security.

The Foundation also supported the publication of the Compiler Options Hardening Guide for C and C++, originally contributed by Ericsson, to help developers and toolchains apply secure-by-default compilation practices—especially critical in memory-unsafe languages.

In addition, OpenSSF has contributed to improving vulnerability disclosure practices across the ecosystem, offering guidance and tools that support maintainers in navigating CVEs, responsible disclosure, and downstream communication.

“The OpenSSF is uniquely positioned to advise on considerations, technical elements, and community impact public policy decisions have not only on open source, but also on the complex reality of implementing cybersecurity to a diverse and global technical sector. In the past 5 years, OpenSSF has been building a community of well-informed open source security experts that can advise regulations but also challenge and adapt security frameworks, law, and regulation to support open source projects in raising their security posture through transparency and open collaboration; hallmarks of open source culture.” — Emily Fox, Portfolio Security Architect, Red Hat

✹ Voices from Our Community: Reflections & Hopes

Key community members, from long-standing contributors to new voices, have shaped OpenSSF’s journey:

OG Voices:

“Microsoft joined OpenSSF as a founding member, committed to advancing secure open source development. Over the past five years, OpenSSF has driven industry collaboration on security through initiatives like Alpha-Omega, SLSA, Scorecard, Secure Software Development training, and global policy efforts such as the Cyber Resilience Act. Together, we’ve improved memory safety, supply chain integrity, and secure-by-design practices, demonstrating that collaboration is key to security. We look forward to many more security advancements as we continue our partnership.” — Mark Russinovich, CTO, Deputy CISO, and Technical Fellow, Microsoft Azure

OpenSSF Leadership Perspective: 

“OpenSSF’s strength comes from the people behind it—builders, advocates, and champions from around the world working toward a safer open source future. This milestone isn’t just a celebration of what we’ve accomplished, but of the community we’ve built together.” — Adrianne Marcum, Chief of Staff, OpenSSF

Community Perspectives:

“After 5 years of hard work, the OpenSSF stands as a global force for securing the critical open source software that we all use. Here’s to five years of uniting communities, hardening the software supply chain, and driving a safer digital future.” — Tracy Ragan, CEO, DeployHub

“I found OpenSSF through my own curiosity, not by invitation, and I stayed because of the warmth, support, and shared mission I discovered. From contributing to the BEAR Working Group to receiving real backing for opportunities, the community consistently shows up for its members. It’s more than a project; it’s a space where people are supported, valued, and empowered to grow.” — Ijeoma Onwuka, Independent Contributor

🔼 Looking Forward

As we celebrate our fifth anniversary, OpenSSF is preparing for a future increasingly influenced by AI-driven tools and global collaboration. Community members across the globe envision greater adoption of secure AI practices, expanded policy influence, and deeper, inclusive international partnerships.

“As we celebrate OpenSSF’s 5th Anniversary, I’m energized by how our vision has grown into a thriving global movement of developers, maintainers, security researchers, and organizations all united by our shared mission. Looking ahead we’re hoping to cultivate our community’s knowledge and empower growth through stronger collaboration and more inclusive pathways for contributors.” — Stacey Potter, Community Manager, OpenSSF

📣 Join the Celebration

We invite you to share your memories, contribute your voice, and become part of the next chapter in securing open source software.

Here’s to many more years ahead! 🎉