What’s in the SOSS? Podcast #59 – S3E11 Building a Connected Africa: The Origin Story of OSSAfrica with Prince Asiedu

By Podcast

Summary

This episode features Prince Oforh Asiedu, discussing his inspiring journey into tech and open source, starting from a childhood fascination with computers in Ghana, self-learning to code despite financial and economic challenges, and making his first contributions through documentation. Prince shares the origin story of Open Source & Security Africa (OSSAfrica), revealing how the frustration of repeatedly being denied access to global conferences due to visa issues sparked the idea to build a local, Africa-rooted community to connect and empower African contributors globally. He outlines the vision for OSSAfrica to become a serious contributor to software supply chain security, aiming for African voices to be recognized as trusted collaborators and leaders in the ecosystem.

Conversation Highlights

00:01 Introduction and Welcome to Prince
01:33 From Ghana’s Internet Cafes to Learning Python: Prince’s Early Journey
05:36 The Moment That Changed Everything: Starting with Open Source Documentation
10:46 The Frustration that Founded OSSAfrica: Structural Barriers and Visa Issues
16:42 OSSAfrica’s Growth and Community of Communities
19:01 The Vision for 2026: Success for African Contributors
23:32 How to Get Involved with OSSAfrica
28:32 Advice for the Next Wave: Growth Compounds When You Are Not Alone

Transcript

Yesenia (00:01.56)
Hello and welcome to What’s in the SOSS? OpenSSF’s podcast where we talk to interesting people throughout the open source ecosystem, sharing their journey, experiences and wisdom. I’m Yesenia, one of your hosts, and today I have an amazing guest for you. A guest who started early on in his OpenSSF journey in the BEAR Working Group and today is leading his very own SIG. Welcome, Prince.

Let’s start off with introducing yourself to the audience. Can you share your background and journey into tech and open source?

Prince (01:33.339)
All right, so thanks very much, Yesenia, for inviting me and having me on this podcast. I’m really excited to be here. Thank you to the OpenSSF for this wonderful opportunity. I really enjoy this podcast a lot; it’s a dream come true to be here. Yeah, so I’ve…

I’ve honestly been fascinated with computers for as long as I can remember. And my first real encounter with a computer was back in primary school, class one in Ghana. And I immediately fell in love with it. I didn’t have access to my own computer growing up.

But whenever I could, I’d use my friend’s laptop, go to internet cafes, play games at game centers, basically anything that kept me close to technology. And that early curiosity never really left. After senior high school, I wasn’t able to continue to the university for financial and economic reasons. And that was a really hard moment for me.

So the question I asked myself was: what can I do regardless of my current situation to still get into tech, to be a part of this technology I’d been so interested in? So I started searching on my own, going to cafes, looking up resources, and basically trying anything I could find.

So that’s when I discovered edX in 2017. And I took my first real course there on the edX platform.

Prince (05:36.131)
I took this introduction to computing with Python from the Georgia Institute of Technology, which had a free version of the course. And around that same time, I had a friend who was a software engineer, though I didn’t know what that term meant back then. I saw some of his books, asked about them, and he shared them with me.

Between those books and online courses, I would say that’s where I truly began learning to code. The journey wasn’t smooth, I should say. Constant challenges made consistency hard, and there were long stretches where I couldn’t practice as much as I wanted.

But I kept going, obviously. And then in 2022, when I was actively looking for ways to gain real-world experience, I came across open source. I started with Data Umbrella, from a search on Google, and that led me to the Data Umbrella PyMC project. That was my first real exposure to what open source meant. I didn’t feel skilled enough to contribute code at the time, but the people in Data Umbrella on the PyMC project encouraged me to start with documentation. I made a small documentation change, which got merged. And honestly, that moment changed everything for me.

So it showed me that I could contribute in a community of diverse people without having to be perfect. And I learned that open source basically meant showing up and being willing to work on whatever was available.

From there, I explored other projects like GreenStand, worked on features, learned through community, and just kept growing. Then in 2024, and this part is very interesting to me, I found the OpenSSF while trying to reconcile my interest in both software engineering and security.

And that’s what truly got me into the security open source ecosystem. Through OpenSSF, I’ve met people like Christopher Robinson, who was really my first point of contact in the OpenSSF community. He pointed me to Jean Robert, a member of AfricaCERT in Africa, obviously. And yourself, Yesenia, and Marcella and Stacey Potter, David Wheeler and a number of people who have mentored me and challenged me up until this point. It’s also opened doors that I didn’t know existed, especially when it comes to open source. Yeah.

Yesenia (09:50.487)
That was a beautiful journey you shared with us through tech, and as someone who comes from immigrant parents, I resonate a lot with the question you asked yourself: what can I do given my circumstances? You know, not everybody has the opportunity or the finances to keep moving toward their goals, and I really appreciate the resilience in your story.

Like, alright, I can’t do this now, but how do I keep going? And I think what you shared are a lot of defining moments that led to where you are today. And I can speak for OpenSSF: we’re very grateful to have you on our team on this journey. And you know, within the last couple of years, from us chatting through the BEAR Working Group, you brought up a lot of challenges that you and other members have been facing in Africa when it comes to external speaking and finding those opportunities locally.

So I’d love for you to share your origin story with the audience. How did Open Source & Security Africa get started? What were the problems you were seeing in your ecosystem that made you say, you know, we should probably build something?

Prince (11:10.554)
Thank you very much for asking that. I’ve been meaning to talk about that a lot. So OSSAfrica really started from frustration, I should say, but also from hope. In 2025, that was last year, I had several opportunities, just like you mentioned, to attend major open source and security conferences.

So I submitted my first ever conference paper to VulnCon and it got accepted, with support from Christopher Robinson, who’s been really incredible and, like I mentioned earlier, instrumental in my growth. The OpenSSF itself even sponsored my travel, which felt like a defining moment for me.

But then I couldn’t attend, basically because of visa issues. Then the same thing happened with OpenSSF Community Day, and again with Open Source Summit in the Netherlands. Every time, the opportunity was there, but access wasn’t. I didn’t have access. And…

That experience really forced me to confront a reality many people across Africa face. Talent and readiness exist, but structural barriers keep people locked out. So I started asking this question: why don’t we build something locally?

Not instead of global spaces, but alongside them. Something rooted in Africa that connects people into global open source and security ecosystems without requiring borders to cooperate first. So around the same time, I was working with a colleague, Seth Mensah, who had a similar vision around building Linux communities in Ghana and beyond.

So we brought in another friend, our colleague, Aaron Will Djaba, and started shaping what would eventually become OSSAfrica. Of course, when we started talking about this, we had a lot of names in mind and all that.

But then we eventually settled on OSSAfrica. And this was not as a single organization, but as a community of communities. So I shared the idea in the OpenSSF BEAR Working Group’s Slack channel, and Marcella, who is a co-lead with yourself and Jay White, suggested that we formalize it as a special interest group. We did all of this in October of 2025. By December, after community voting, OSSAfrica became an official OpenSSF Special Interest Group (SIG). That moment marked the real beginning of OSSAfrica.

So today, OSSAfrica exists to connect, empower, and amplify African contributors in open source and security, whether through existing communities or new ones where none exist yet. Our goal is to build a connected Africa where contributors are visible globally, African voices are heard, and local innovation meaningfully shapes the open source ecosystem.

And honestly, none of this would exist without the early belief and support of people like yourself, Yesenia, and Stacey, Marcella and Sally Cooper, four incredibly powerful advocates in open source who helped guide and legitimize this work from the beginning. So yes, that’s basically how we started.

Yesenia (15:58.051)
I really like this because it’s a story of community, right? We saw multiple members of our community struggling to get their voices on the stage, and the community came together, brought together ideas and possibilities on how to solve this problem, and you and your team just went off and executed. I know you started in October, but I’ve had a front-row seat to Open Source & Security Africa since its inception.

It looks like you have good growth in numbers. I think we’re looking at what, 10, 15 people so far?

Prince (16:42.618)
So we currently have about 90-plus people in our community on Discord. Yes, and we’ve had a number of people and organizations reach out. Just recently there’s this community founder who reached out, Kiala. We’ve just started our conversations, so we’ll see how that goes. But growth on that front looks really good. And we had a community lead for a community in Mauritius also reach out to try to partner with us. And yeah, we’ve had a number of people show interest, so we can really gauge our numbers by the number of people on our Discord channel. We’ve really seen people show interest in us.

Yesenia (17:53.039)
Yeah, and I think it really shows the necessity for this type of organization. I mean, you’re four months in, you already have about 90 active members, and you’re being reached by different organizations and partnerships. And I know I connected you with Afro Adventures, which Quiana is running, which I think is fabulous, you know, to see what comes out of that.

So, big kudos to you with that. This is what the open source world is, right? It’s a group of folks coming together, trying to solve a problem and getting a vision and executing. You guys are doing that.

Prince (18:38.287)
Yeah, yeah. I’m really excited about what our trajectory looks like, where we are going.

Yesenia (19:01.71)
Yeah, so let’s start looking forward. How do you see OSSAfrica making an impact? What kind of impact do you want this community to have locally and globally, this year and in the next few years? What does success look like for you?

Prince (19:29.178)
So, I mean, by the end of 2026, success for me looks like African contributors being visible, trusted, and embedded across critical open source projects, not as exceptions, but as maintainers, trusted collaborators, and leaders in the open source community and space. Locally, I want OSSAfrica to be like a launchpad where people enter with curiosity and leave with skills, confidence, mentorship, and real-world impact.

And globally, I want OSSAfrica to be recognized as a serious contributor to software supply chain security, open source ecosystem sustainability, and community governance and security discussions. In the long term, the goal is to build something institutional. Something not personality-driven, but durable systems, leadership pipelines and partnerships that continue creating opportunity long after any individual like myself steps away from the organisation. Not that I’m hoping to step away, but I’m hoping we become a strong and trusted institution globally.

So in the short term, we are focused on building strong foundations locally, especially in Ghana and West Africa. We want to do this through events, community programs and partnerships that help spark curiosity and bring people into the open source ecosystem. From there, we plan to expand across the continent over the next few years. Yeah.

Yesenia (22:00.643)
This is amazing. I’m going to quote my mom and then translate: “Mándame fotos, por favor,” so “send me pictures, please,” because I would love to see the growth of this community. My mom is always telling me, send me photos. I’m getting a little emotional over here, because your story and the growth of Open Source & Security Africa really represent what open source can do. So for the folks listening right now who want to contribute, whether they’re developers, security engineers, students, or just anyone curious who wants to support, how can they get involved and meaningfully contribute to the cause?

Prince (23:32.154)
The easiest way is to just show up. Like we mentioned earlier, we currently have a public Discord community, a GitHub organization, and open meetings, and essentially these are beginner-friendly contribution pathways.

You don’t need to be senior in any field of the tech space to show up or to join. If you are a student, a career switcher, a designer, a writer, a security practitioner, a community builder, or just about anyone trying to find their place in our world, you are welcome to join. Everyone has a place in OSSAfrica.

We want to help people find a place where they can work and connect with the community at large, aligned with their interests, and pair them with people who can guide them into meaningful contributions. What you contribute to OSSAfrica doesn’t depend on seniority or skill level. Whether it’s documentation, tooling, education, research, or community operations, there’s a space for you in OSSAfrica.

Yesenia (25:34.092)
This is awesome. Yeah, so you can always check out the OpenSSF calendar and join the calls. I believe they’re on Wednesday. And then there’s also the Slack channel on OpenSSF where a lot of movement is happening as well.

Prince (25:51.311)
Yes, yes. And thank you very much for adding those; I nearly forgot about them. We just created our Slack channel for the OSSAfrica special interest group. So that is a great place to collaborate and contribute your voice and perspective on how we can shape this amazing community that we are building.

Yesenia (26:22.766)
So you heard it from Prince, folks: join OSSAfrica and help them move along. Now we’re gonna move on to the rapid-fire part of the interview. Pew, pew.

Prince, I’m just gonna ask you some questions. You’re gonna say the first thing that comes to mind and we’ll go from there. So first question, books or podcasts?

Prince (26:43.3)
Thank you.

Prince (26:47.478)
I really like a mix of both, but I would go for podcasts.

Yesenia (26:56.342)
Awesome. Favorite programming language?

Prince (26:59.268)
Python

Yesenia (27:02.574)
Good choice. I’m a little biased on that one. Comfort food.

Yesenia (27:10.54)
What is your comfort food?

Prince (27:16.71)
So there’s this local food called waakye, and I really enjoy it. If you were asking about comfort food, what I eat to comfort myself, right? Then it’s basically waakye. You should have it sometime. It’s really nice. It’s rice and beans

Yesenia (27:31.085)
Yes.

Prince (27:43.608)
with some herbs, some African leaves. And it’s really nice. You should have it sometimes.

Yesenia (27:52.258)
You got me on rice and beans. I’m Cuban. I love rice and beans. Sweet or sour?

Prince (27:54.138)
Um, sweet.

Yesenia (28:07.682)
Are you an early bird or a night owl?

Prince (28:13.946)
I like to sleep early. I like to sleep early, so yeah. I mean, that’s my response to that.

Yesenia (28:32.236)
There you have it, folks, that’s another rapid fire. And Prince, let’s end with this: you’ve gone through an impressive journey in tech and open source, right? You shared your resilience earlier. So what advice do you have for the next wave of folks entering tech and cybersecurity, those who might feel like an outsider? What advice would you give them, and how can open source accelerate their growth and confidence?

Prince (28:58.618)
Okay.

Prince (29:02.572)
Okay, so first, feeling like an outsider doesn’t mean you don’t belong. It usually just means that you are early, or you are in a new environment. And second, open source is, in my opinion, one of the strongest accelerators I have ever seen.

You don’t need anyone’s permission, credentials or prestige. You don’t need anyone’s approval to join the open source space. All you need is curiosity, showing up consistently, and being humble and willing to be guided by those who are already in this space.

My advice, you know, is: start small, learn in public, ask questions, and don’t wait until you see results. It usually takes a lot of time, even when you are in the environment. It took me a lot of time before I actually saw anything in the community that really helped me much.

So I would say speak up, ask for help, and also read issues. Even a small change in documentation really matters a lot and is very much welcome. You should also attend community calls and try to build relationships very early.

And most importantly, this is a community that welcomes everyone. So find your community. One last thing I would say is that growth compounds when you are not alone. So try and find your community. Yeah.

Yesenia (31:34.498)
That’s a good quote right there: growth compounds when you’re not alone. Prince, thank you so much for your time today, and for your impact and contributions to our communities. I am so excited for OSSAfrica and to see its growth, its community and its contributions this year and in the years to come. Thank you so much for your time today, and folks, we’ll catch you on the next episode.

What’s in the SOSS? Podcast #58 – S3E10 Big Thoughts, Open Sources: Beyond the Hype: Brian Fox on Securing the Agentic Future of Open Source

By Podcast

Summary

In this inaugural episode of Big Thoughts, Open Sources, host CRob sits down with Brian Fox, Co-founder and CTO of Sonatype, to discuss the friction between rapid AI adoption and foundational software security. Brian shares insights from the 11th annual State of the Software Supply Chain Report, revealing the emergence of “slop squatting” and the high frequency of AI models recommending non-existent or vulnerable dependencies. The conversation explores how the Model Context Protocol (MCP) could revolutionize developer compliance and why the industry must fund the critical infrastructure supporting our trillion-dollar open source ecosystem.

Conversation Highlights

00:23 – Welcome: Big Thoughts, Open Sources inaugural episode.
01:01 – Brian Fox’s journey: Apache Maven, Sonatype, and OpenSSF.
02:53 – The critical role of Maven Central in the software supply chain.
03:26 – Decades of security trends: The persistent “Log4Shell” pattern.
05:34 – The “Tribal Knowledge” problem for AI agents.
07:06 – State of the Software Supply Chain Report: AI recommending made-up code versions.
08:09 – Explaining “Slop Squatting” and AI hallucinations.
10:03 – Model Context Protocol (MCP): Turning security tools into AI expert systems.
13:42 – Do not ignore 60 years of software engineering “physics”.
15:11 – The “Vulcan Mind Meld”: Injecting governance data into AI agents.
17:19 – Risks, rewards, and the need for ML SecOps discipline.
19:30 – “Inefficient code is still inefficient code”: Lessons from cloud migrations.
21:01 – Building an “AI-native SDLC” with upfront security.
24:18 – The sustainability crisis: Secure open source builds are not free.
27:17 – Conclusion: Funding open source infrastructure (8 trillion dollars of value).

Transcript

CRob (00:23)
Welcome, welcome, welcome to Big Thoughts, Open Sources, the OpenSSF’s new podcast. We’re gonna dive a little more deeply in with some of the amazing community members, subject matter experts, and thought leaders within open source, cybersecurity, and high technology. Today in our inaugural episode, I’m very pleased to welcome a friend of the show, Brian Fox from Sonatype. How you doing, Brian?

Brian Fox (00:47)
I’m doing well, how are you?

CRob (00:48)
Excellent, we’re super glad to have you today. So maybe just for our audience members that are unfamiliar with your work, could you maybe talk a little bit about how you got into open source and kind of what you specialize in in this amazing ecosystem?

Brian Fox (01:01)
How I got into open source, that’s a long conversation. Geez, all the way back in 2002, 2003, I suppose, is when I really, really got involved. I had done some dabbling and some other things before that, but I got involved around that time in Apache Maven. I started writing some plugins.

They’re pretty popular plugins; people still use them these days. And those ultimately got pulled into the Apache project, the official project. I kind of stowed away and came in as a committer. A few years later, I joined up with some other folks that were also working on Maven and we co-founded Sonatype. It’s been 19 years now.

CRob (1:45)
Wow, that’s awesome.

Brian Fox (1:46)
Yeah, and so then I was ultimately the chair of the Apache Maven project for a long time. I’m still a member of the Apache foundation. And then more recently, even though it’s been a while, what, four or five years now, we joined the OpenSSF. I’ve been on the Governing Board with you for a while. I’m also on the Governing Board of FINOS, the fintech open source foundation.

And for the last couple of years, I’ve also been on the Singapore Monetary Authority’s Cyber Experts Group. Yeah, that’s fun. And so, you know, I’ve spent a lot of time focused on those things. One of the things that Sonatype does for the community is run Maven Central, right? Which, for people that don’t know, is where all the world’s open source Java components come from.

CRob

Yeah, it kind of sounds like a big job.

Brian Fox

It is a big job, running critical infrastructure for all that kind of stuff. And so, you know, over the years that’s given us really interesting insights into what’s going on with the supply chain. So, you know, that’s kind of what led us to the path that brought me to OpenSSF and all those other things.

CRob (2:53)
Yeah, you and your team have been amazing participants and contributors to our community, even putting aside all the work with Maven. Your participation in our working groups and our efforts has been amazing. So today I think you wanted to talk about a topic a lot of people probably haven’t heard about: this little thing called AI. I have a hard time spelling that.

Brian Fox (3:16)
Right?

CRob (3:20)
Let’s just set the stage. What are you thinking about? What do we want to have a conversation around AI about?

Brian Fox (3:26)
Yeah, I think so. If we back up a little bit, right? It was probably around 2011, 2012, I suppose. We started looking at some of the trends that we were seeing within the Maven Central downloads. We were seeing things like the most popular version of a crypto library was the one that had a level 10 vulnerability, fixed and patched years before, but everybody was still using the vulnerable version. The Log4j, Log4Shell pattern has existed basically forever. It’s not actually new. And so that led us down the path to start doing different things to help our customers, A, understand what open source they were using. Way back then, nobody knew. They were like, we’re not using open source. What they really meant was, I’m not using Linux or OpenOffice. They didn’t understand

that their developers were pulling in all these components. And so the problem space back then was helping them have visibility, and then providing data and controls to help them better govern their choices. So we’ve always been trying to help expedite and make it more efficient for developers to make better choices. And so it’s interesting to see this development of AI and all the kinds of things that have come along with it. So that got me thinking, you know what?

When we started out to build some of the stuff that we built for our customers, my focus at that time was to make it possible to do the analysis in real time, so that it wasn’t the case that we’d just do all our stuff and then run a compliance scan at the end of the week or the end of the month. So we were very focused on: it needs to be able to run on every single build, all the time. We need to be able to provide guidelines so that they don’t have to ask the legal team, or the security team, and wait six weeks for an answer, right?

We were trying to capture those rules in the system so that they could make better choices in real time. And that was a big thing that organizations needed to be able to scale and become efficient. When you start dealing with thousands of developers, and tens of thousands or millions of applications, the tribal knowledge problem kind of falls apart.

CRob (5:33)
Absolutely.

Brian (5:34)
Right? And so you start thinking about what happens with AI. If you don’t have that stuff in an automated, codified kind of way, how do you feed it to an agent? The agent’s not hanging out with you at lunch. It doesn’t get an onboarding session where we say things like, you know what, we never use an LGPL dependency because we ship our code. Or we only fix vulnerabilities five and above. Or whatever the policy may be, those things sometimes can be shared among developers.
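
The kind of rules Brian mentions (no LGPL dependencies, only act on vulnerabilities of severity five and above) can be captured as policy-as-code rather than tribal knowledge, so an automated check or an AI agent can apply them. A minimal sketch in Python; the rule names, thresholds, and function are illustrative assumptions, not Sonatype’s actual tooling:

```python
# Illustrative sketch: dependency policy encoded as data, so a build
# check (or an agent) can evaluate it instead of relying on lunch-table
# tribal knowledge. Rules and thresholds are hypothetical examples.

DENIED_LICENSES = {"LGPL-2.1", "LGPL-3.0"}   # "we ship our code"
MIN_ACTIONABLE_SEVERITY = 5.0                # "only fix vulnerabilities 5 and above"

def evaluate_dependency(name, license_id, max_cvss):
    """Return a list of policy violations for a candidate dependency."""
    violations = []
    if license_id in DENIED_LICENSES:
        violations.append(f"{name}: license {license_id} is not allowed")
    if max_cvss >= MIN_ACTIONABLE_SEVERITY:
        violations.append(f"{name}: vulnerability severity {max_cvss} requires a fix")
    return violations

print(evaluate_dependency("examplelib", "LGPL-3.0", 2.1))
```

Because the policy lives in code, the same rules can run on every build or be handed to an agent as context, which is the gap Brian describes.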

CRob (6:02)
Right. And it plays into the classic problem with engineering: most engineers I’ve met don’t like doing documentation. And with AI entering the chat and becoming this accelerant, it’s making decisions based on knowledge, or the lack thereof, if you don’t have your security policy documented. It even goes back to the early days of Kubernetes.

Where it was a big mental shift for people to have that software-defined network inside. And I think that helped a lot of organizations get better discipline and rigor, and you had fewer mysterious outages. Because the firewall guy in the back end would say, I didn’t do anything, but try testing it again.

Brian Fox (6:46)
Right, right. Yeah, for sure. And that’s kind of what we’re seeing now. We’re seeing a lot of that, and not just with agents. I mean, agents are sort of like the next big step, and not everybody’s fully there yet. Some people are dabbling with it. But even with AI-assisted coding, you’re seeing the same problem: you come in and say, hey, I want a new feature, and it just grabs whatever statistically likely dependency is going to be in there. We’ve done some studies. We recently released the State of the Software Supply Chain Report.

CRob

It’s a great report.

Brian Fox

Yeah, thank you. This was our 11th year. We just published it last month. And we did a deep dive on AI recommendations, you know, and we found that 30% of the time the models were recommending made-up versions.

CRob (7:35)
What?

Brian Fox (7:36)
Yeah, just making them up. You know, so it’s kind of shocking. In the real world, if you’ve got a tool, that’s one of those things that fails fast, right? It picks a version that doesn’t exist, the thing goes and immediately blows up, and then Claude or whatever you’re using will go, whoops, and fix it. So it’s kind of funny; it burns some tokens, but the downsides aren’t huge.

If the agent randomly picks a terrible dependency or a very old one that does in fact exist, I would argue that’s worse because there’s no fail fast in that scenario.

CRob (8:09)
Well, you also have the whole problem of slop squatting, where the models, regardless of which vendor you’re getting them from, seem to fairly consistently suggest the wrong dependencies, kind of like typosquatting.

And so now the bad guys have recognized this fairly consistent behavior, and they’re uploading malicious packages with those bad names so that the build doesn’t break, because it can find what it needs.

Brian Fox (8:33)
So instead of failing fast, it fails fast by grabbing a back door or something. Exactly, that’s exactly right. That’s what they call slop squatting now. And so those are some of the challenges that we observe, and you take it to the extreme where now you potentially have less sophisticated, not classically trained developers using these tools, and they don’t know what they don’t know.
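
One pragmatic guard against the hallucinated and slop-squatted names discussed here is to vet AI-suggested packages against a curated allowlist before anything gets installed, rather than trusting the suggestion blindly. A minimal sketch, assuming a hypothetical in-house allowlist; the package names below are illustrative:

```python
# Illustrative sketch: screen AI-suggested dependency names against an
# approved list so a hallucinated or slop-squatted name never reaches
# the installer. The allowlist here is a stand-in for an org's real one.

APPROVED_PACKAGES = {"requests", "numpy", "cryptography"}

def vet_suggestions(suggested):
    """Split suggested package names into approved and needs-review lists."""
    approved = [p for p in suggested if p in APPROVED_PACKAGES]
    needs_review = [p for p in suggested if p not in APPROVED_PACKAGES]
    return approved, needs_review

# "reqeusts" mimics a typo-style name an attacker might squat on.
ok, review = vet_suggestions(["requests", "reqeusts"])
print(ok, review)
```

The point is the fail-fast Brian describes: a bad name gets flagged for human review instead of silently resolving to a malicious package.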

They wouldn’t necessarily stop to say, hey, I want you to now be a security expert and do an assessment of the code you just created. Like somebody who knows better will do that. But if you’ve not lived through the pain that you and I have lived, you wouldn’t think about that. And so on average, these things are going to potentially toss away a lot of the learnings that we’ve known for so many years.

CRob (09:21)
And that’s been a chronic challenge: trying to get the tribal knowledge instantiated, trying to help people make the right decisions. The AI tools are amazing productivity and efficiency savers, but they are bringing in, as you said, classically untrained professionals who are not software engineers. They don’t understand how a system should be architected, or the AppSec best practices that secure the foundation of everything and keep the world from falling apart.

Brian Fox (9:59)
The interesting thing is I think they can be if prompted correctly.

CRob (10:02)
Yes.

Brian Fox (10:03)
Right? And that’s where some of the knowledge gap comes in. And I think, what was it, last summer Anthropic released MCP, the Model Context Protocol, right? I’ve spent a lot of time thinking about that pretty deeply and looking at all the tools. And I wrote about this: I think there’s a high likelihood that we see a lot of the tools we use in the SDLC today moving towards providing their capabilities as subject matter experts in “a thing” to an AI agent via MCP.

So that, for me, is pretty exciting for a number of reasons. It means, as a tool vendor, I don’t have to create a plug-in for IntelliJ and one for Eclipse and one for VS Code. MCP can be the same thing no matter what tool you’re using, because it’s interacting with me via this standard API. And I’m talking to it in more or less English prompts. So my ability to deliver the value that we have into whatever tool you feel like using today, and they change every week, is pretty cool.

And I would also argue that the ability to insert that information, and to potentially roll out the root prompting that all of the developers are using in these capabilities, is better. You’re going to get potentially better compliance than you do today. One of the things I struggled with forever was that we created an IDE plugin for our capabilities, and it demoed amazingly well. It showed, hey, this dependency has vulnerabilities, or license issues, or it would make recommendations. It was great. But developers just didn’t want to install more plugins. They just weren’t using it, right?

So while it demoed well, the actual usage of it was very low for compliance reasons. That’s a thing we struggle with. Every tool vendor struggles with that. But if you were able to insert that same information into an MCP capability and the company rolls out a root prompt that says something like, hey, every time you’re choosing a new project or a new dependency or trying to assess a dependency, use this MCP server to get up-to-date real-time information, it’s more or less going to do that every time. Right.

CRob (12:13)
Yeah, and I think back to when I was a baby cyber person studying for my CISSP. There was a lot of talk in the exam materials about expert systems, which is exactly what I think the best case scenario with these tools can be: it’s your expert. I don’t have to necessarily have this expertise. That’s right. But it takes a lot of knowledge to craft these expert systems. Let’s talk about how some of these models have been trained on potentially less-than-expert data.

Brian Fox (12:43)
Right, and that’s just, I think, the inevitable nature of the frontier models: they’ve been trained on, you know, all the stuff that they can find.

CRob (12:49)
The internet.

Brian Fox (12:49)
On the internet, good information, bad information. People talking about terrible dependencies a lot might statistically make that more of a recommendation, right? And I think that can be okay as long as you’re plugging in the models that have real data. The things that we’ve seen, you know, when we assess the models, is that, like I said, they make up versions, they pick old versions arbitrarily, and they don’t know about anything newer than when they were last trained, which means both new versions and also vulnerabilities that might be in an older version.

So they’re inadvertently recommending, and it’s not even a recommendation really, it’s just using it, right? It’s putting it in there and writing the code around it. Imagine picking Spring, right? It’s just going to go, I’m going to write a Spring app and I’m going to use all Spring 5.

And then when you probe it, then it’s like, oh, sorry, I have to do two major framework updates. You almost have to throw it away and start over. And so if you’re able to plug the right data in up front, you don’t have all of that waste. And again, if you have people who don’t know to prompt it to ask about the latest versions, you can insert that underneath the hood. I think that’s what’s really cool.

Brian Fox (13:59)
But what we’re seeing currently, I kind of wrote about this a little bit too, that I feel like we’re throwing out all the lessons of the past. We’re talking about situations where whole tools, SAST is under fire right now, right? Because when all the code can be just completely generated, what’s the problem with SAST? But I do think that we’ve learned a lot of things over the years if we can figure out how to plug those capabilities into what’s being generated.

I think we can bring all of that forward with us. But the entire SDLC is going to have to adapt to that. It’s not going to be sufficient to say, I’ve got a bunch of developers over here, they’re doing AI-assisted development, and then later we’re going to run a bunch of SAST and produce legacy reports. That’s not going to work. The information has to be fed directly into the AI capabilities up front.

CRob (14:51)
And it’s the classic problem we’ve always had, where security historically is the last thing done, the last thing addressed, bolted on at the end in a lot of cases. And this AI tooling, with the velocity it has, is a huge accelerant for the sins of our past we’ve never actually addressed.

Brian Fox (15:11)
Absolutely, but it also provides the Vulcan mind meld if you want to think of it. You now have that opportunity to plug that right into what the agent is thinking about in the moment. You can’t do that with the humans, but you can do that with the agent. And that’s what I think is potentially exciting about this.

As I described it recently at a summit, we’re sort of in a bootstrap situation, though, right? We don’t have all of those capabilities. Organizations haven’t rolled them all out. And so we’re sort of in this weird situation, one foot on the boat, one foot on the dock, and it’s not going to end well as we’re going through it. And worse, there are people that are afraid of the MCP protocol. So I hear lots of organizations say, we just block it completely.

Yeah. It’s a little hard to argue that that’s not a reasonable place to start, because of the nature of what’s happening. We saw just the other day the latest version of Shai-Hulud came out. Did you see this? And they used MCP capabilities for data exfiltration. And I’m like, come on, guys. There’s so much power in this, but now you’re making it like a bad thing. So I think the industry and the tools are going to have to work through governance of the MCP capabilities, sanitization and inspection of the MCP capabilities, just like we’ve seen before. It’s one of these things where, when you’ve been around long enough, you can recognize the patterns. It’s new and exciting, but the pattern also rhymes with a bunch of stuff we’ve done before. And what frustrates me is that everybody charges ahead so fast, they feel like it’s all new, it’s all different. Yes, but let’s not forget everything we’ve learned over the last 60 years of software engineering, because the physics is still the same.

CRob (16:50)
Well, and that’s where our AI/ML Working Group wrote a paper around MLSecOps. The paper was really interesting; I recommend the audience check it out. They talked about classic techniques that assist and are helpful with AI development, and then about some gaps, things like undocumented policies, that are kind of a hindrance today and an opportunity to get addressed in the future.

Brian Fox (17:18)
Yeah.

CRob (17:19)
But I’m of two minds about my friends, our new robot overlords, in that they can be extremely helpful, but I don’t see a lot of people reconsidering those lessons of the past of software engineering. To say this is all brand new and totally different? Well, you’ve got different GPU accelerators and dedicated cores to do things.

And now with this agentic and A2A architecture, where things are more highly distributed, yeah, those are new twists, but it’s not brand new. We’ve done networking. We’ve done composite applications for decades.

Brian Fox (18:01)
Right. It’s the same thing, you know, we saw when we were like, oh, everything should be serverless, or let’s go to the microservice architecture, microservices are going to solve everything, until they don’t. Right.

Or, you know, that’s no problem, we’ll just put it in the cloud, because I can just infinitely scale my machines, right? So I see the same pattern all over again: we sort of say, yes, but this time is different, because insert new technology, and then we realize, yes, but everything we know is still true. And that’s what I think we’re grappling with right now as we go through this. What is absolutely true is that the AI capabilities, the agents, all these things are making everything happen so much faster. That can be good.

It can also be bad. If you’ve forgotten all the lessons of the past, you’re just going to create a ton of crap much faster than you could before. And by the time you realize it, it might be too late.

CRob (18:57)
I’m familiar with a lot of enterprises that were going through a digital transformation journey, trying to update their heritage software to newer things and to the cloud to get that scalability and cost efficiency. But a lot of organizations didn’t learn the lessons of the past on that journey.

They just crammed what they had out in the cloud, and then a month later they get this giant bill and they’re shocked and confused. Or they didn’t understand that this thing wasn’t architected for zero trust, and they’re leaking data everywhere.

Brian Fox (19:30)
Right, right. Or just even the performance reasons. Sure, you were excited to infinitely scale, but somebody’s not excited to infinitely pay a bigger bill. Inefficient code is still inefficient code, right? And that’s what I think we’re going to see with AI capabilities: it’s just going to happen faster. And without humans in the loop, it provides fewer opportunities for us to course correct, which is why I’ve been taking a step back and thinking about how do we do that? How does it make sense? I think for some of the stuff that we’ve been doing as a business, it’s really exciting, because we have built up really interesting, unique data sets based on being able to see everything going on with Maven Central. We’ve long had Nexus, the repository manager that’s out there.

We have hundreds of thousands of instances. Those things are proxying for enterprises, not just Maven, but npm, NuGet, Python, all the things. And so that gives us visibility into other ecosystems, so we can understand what’s going on, what’s commonly used in enterprises, these kinds of things. And so all of that data can be fed now directly, like I said, the Vulcan mind meld, directly into these tools. And it makes it a lot easier.

So in some ways, when we sort this out, and people become less afraid of MCP capabilities, we can directly inject a stream of high quality data to make all of those things better. But, before businesses can really leverage that, they have to get out of the experimentation phase. They have to roll that out. And these things are kind of interrelated. What we see is that organizations are afraid to let developers just go with AI assisted development because it’s not governed, because they can’t govern it.

And those are echoes of what I saw firsthand during the early days of open source. Like I said in the beginning, people said, we’re not using it. And then I’d tell them, yes, you are. And then their reaction was like, well, just shut it all out. It’s like, right, you can’t do anything. So the reaction that some enterprises have right now of like, we’re just not going to do anything with AI, is just setting themselves up to be left behind.

The right answer is to do it thoughtfully and use tools to help them make better decisions.

CRob (21:43)
So reflecting back, you mentioned in your report that you have some guidance for people around AI. What would be the top two or three things? If somebody’s thinking about moving more aggressively in this AI direction, what can they take away and do immediately, or start thinking about?

Brian Fox (21:01)
Yeah, I mean, I think the biggest thing is humans like to try to take the old patterns and just adapt them to the new technologies, like we were talking about: take an inefficient architecture and throw it in a cloud, it’s going to fix everything. No, it’s not. And I think that’s true of, let’s call it, the AI SDLC, right? An AI-native SDLC might resemble a normal SDLC, but it should be designed differently, right?

You know, trying to do the checks and balances after the fact is even worse than it was with humans. You need to think about providing that information up front, so that you get the value in the creation of the code, and not try to chase it afterward. You need to think about how all of these things can be done in parallel with agents, breaking these things down. So what I would say is, don’t just try to do what you’re doing today and use AI to do it. Take a step back and really assess how you can adopt this.

It’s sort of like the conversations we were having in the board meeting today about developers: maintainers are getting overwhelmed with AI slop. It’s true. One reaction is to stop allowing that to be contributed, to just dismiss everything AI. That’s not a good answer. A better answer is, let’s figure out how to help them use AI tools to be able to keep up with that, right? Because that’s what it’s good for. It can review and assess the patches faster than the maintainers can, and then provide sort of a first-pass filter, if you will, right?

But that requires thinking outside of the box. Don’t just try to keep doing what you’re doing and try to keep up with it. Think about how you judo move that into something that makes more sense for your organization.

CRob (23:42)
And this skirts along another kind of project you’re passionate about: sustainability and funding. It is one thing to admonish the developers, why aren’t you using AI? But there are real costs involved around this. Just saying, well, you should use the tool, doesn’t help them when there’s no funding. They don’t have access to infrastructure to be able to do these things. And that, I think, touches on your passion project around trying to help get the package repositories more sustainably funded.

Brian Fox (24:18)
That’s right. Yeah, I mean, if you take a step back and you think about open source when probably you started, certainly when I started, what that really meant was you were donating your time. And you were sharing your thoughts, and you were sharing your words via code. And that was in a time when it was perfectly acceptable. In fact, it was the only choice that you built things and you shipped them off of your laptop.

When the first Apple MacBook Air launched, it shipped with a version of Maven on it that was signed by my key, my personal key, and that was built on my personal laptop. So everybody who bought the launch version of the MacBook Air had my signed code on it. That’s kind of cool.

But also kind of scary, right, when you think about it. Like, what if my laptop was compromised? And that’s the world we live in today. Fortunately, in 2009 or whatever it was, that was a little bit more remote of a chance. And so everybody thinks like, well, that’s crazy. You wouldn’t do that anymore. So what does it mean today? It means you have to have certified builds. Usually that means it’s running in the cloud, and it’s attested to, and all these kinds of things. And that’s not free.

Like, I can’t donate that; I’m not a hyperscaler. Most open source gets that infrastructure donated by these big companies, but there’s a lot of opportunity for abuse, right? And these types of things. It’s just, at the end of the day, it’s not free. So the cost of producing open source is not free anymore. It’s not just donating my time with equipment and internet access I already have, right? That’s the big difference. And I think people don’t really recognize that. Now fast forward to what we were just talking about with AI: the obvious answer to deal with the AI pile-on of PRs is to have AI assistants help.

Who’s going to pay for that? It’s literally not free. It costs electricity. Last time I checked, we still pay for our electricity. Regardless...

CRob (26:12)
Electricity, water


Brian Fox (26:14)
Right, all of these things, right? They have very real implications. They’re just not free. And so there’s no good answer to that. How does that get aligned? How do we continue to create open source software that can power all of these industries, in a world where it’s not just somebody donating their time and thoughts? There are no good answers. But we’re working towards trying to align that, because the bulk of open source software, certainly in our world, in these areas, is being consumed by organizations that are selling for-profit software, more or less.

There are definitely a lot of hobbyists and such, but the biggest consumers from our repositories are all the giant companies. I’ve named the top 100; you would know every single one of them. And I’m sure that’s true for all the registries. So there has to be an answer in there. I don’t know the stat off the top of my head, but the Linux Foundation does the census, right? And it’s billions of dollars of economic value that open source creates. Eight billion? Nine billion?

CRob (27:16)
Trillion.

Brian Fox (27:17)
Oh, it’s a trillion now?

CRob (27:17)
It’s eight trillion, I believe.

Brian Fox ()
Eight trillion dollars’ worth of economic value being produced by open source. One percent of that would pay for a lot of that infrastructure, and then a whole bunch more. And so I think that’s what ultimately we have to figure out how to balance. AI just makes that worse, because it moves the bar even further.

CRob (27:39)
Interesting conversation. Any final thoughts you want our listeners and viewers to take away?

Brian Fox (27:46)
Well, certainly go take a look at The State of the Software Supply Chain Report.

CRob (27:51)
Great report.

Brian Fox (27:52)
sonatype.com/SSCR. Certainly, I’ve also written a number of blogs; you can find those on our website as well. They deep dive into all these topics we touched on here. Yeah.

CRob (28:02)
Excellent. We’ll put some links in our summary. So Brian, thank you for our inaugural episode of Big Thoughts, Open Sources. I think this was an amazing conversation that we’re going to keep building on and revisiting in the coming weeks and months.

Brian Fox (28:20)
Yeah, thanks for having me kick it off in Napa.

CRob (28:25)
Thank you. Well, I hope everybody stays cyber safe and sound. We’ll talk to you soon.

Kusari Partners with OpenSSF to Strengthen Open Source Software Supply Chain Security

By Blog, Guest Blog

Cross-post originally published on the Kusari Blog

Open source software powers the modern world; securing it remains a shared responsibility.

The software supply chain is becoming more complex and more exposed with every release. Modern applications rely on vast ecosystems of open source components, dependencies, and increasingly AI-generated code. While this accelerates innovation, it also expands the attack surface dramatically. Threat actors are taking advantage of this complexity with more frequent and sophisticated attacks, from dependency confusion and malicious package injections to license risks that consistently target open source communities.

At the same time, developers are asked to move faster while ensuring security and compliance across thousands of components. Traditional security reviews often happen too late in the development lifecycle, creating friction between development and security teams and leaving maintainers overwhelmed by reactive work.

Kusari is proud to partner with the Open Source Security Foundation (OpenSSF) to offer Kusari Inspector at no cost to OpenSSF projects. Together, we’re helping maintainers and security teams gain deeper visibility into their software supply chains and better understand the relationships between first-party code, third-party dependencies, and transitive components.  

Projects adopting Kusari Inspector include Gemara, gittuf, GUAC, in-toto/Witness, OpenVEX, Protobom, and Supply-chain Levels for Software Artifacts (SLSA). As AI coding tools become standard in open source development, Kusari Inspector serves as the safety net maintainers didn’t know they needed.

“I used Claude to submit a pull request to go-witness,” said John Kjell, a maintainer of in-toto/Witness. “Kusari Inspector found an issue that Claude didn’t catch. When I asked Claude to fix what Kusari Inspector flagged, it did.”

Maintainers are under growing pressure. According to Kusari’s Application Security in Practice report, organizations continue to struggle with noise, fragmented tooling, and limited visibility into what’s actually running in production. The same challenges affect open source projects — often with fewer resources.

Kusari Inspector helps OpenSSF projects:

  • Map dependencies and transitive risk
  • Identify gaps in attestations and provenance
  • Understand how components relate across builds and releases
  • Reduce manual investigation and security guesswork

Kusari Inspector – Secure Contributions at the Pull Request

Kusari Inspector also helps strengthen the relationship between developers and security teams. Our Application Security in Practice research found that two-thirds of teams spend up to 20 hours per week responding to supply chain incidents — time diverted from building and innovating. 

For open source projects, the burden is often even heavier. From our experience in co-creating and maintaining GUAC, we know most projects are maintained by small teams of part-time contributors and already overextended maintainers who don’t have dedicated security staff. Every reactive investigation, dependency review, or license question pulls limited capacity away from priorities and community support — making proactive, workflow-integrated security even more critical.

By running automated checks directly in pull requests, projects reduce review latency and catch issues earlier, shifting from reactive firefighting to proactive prevention. Instead of maintainers “owning” reviews in isolation, Kusari Inspector brings them integrated, context-aware feedback closer to development, accelerating secure delivery.

This partnership gives OpenSSF projects the clarity they need to make informed security decisions without disrupting developer workflows.

“The OpenSSF welcomes Kusari Inspector as a clear demonstration of community support. This helps our projects shift from reactive security measures to proactive, integrated prevention at scale,” said Steve Fernandez, General Manager, OpenSSF.

“Kusari’s journey has always been deeply connected to the open source security community. We’ve focused on closing knowledge gaps through better metadata, relationships, and insight,” said Tim Miller, Kusari Co-Founder and CEO. “Collaborating with OpenSSF reflects exactly why Kusari was founded: to turn transparency into actionable trust.”

If you’re an OpenSSF project maintainer or contributor interested in strengthening your supply chain posture, use Kusari Inspector for free — https://us.kusari.cloud/signup.

Author Bio

Michael Lieberman is co-founder and CTO of Kusari, where he helps build transparency and security in the software supply chain. Michael is an active member of the open source community, co-creating the GUAC and FRSCA projects and co-leading the CNCF’s Secure Software Factory Reference Architecture whitepaper. He is an elected member of the OpenSSF Governing Board and Technical Advisory Council, as well as a CNCF TAG Security lead and an SLSA Steering Committee member.

Leading Tech Coalition Invests $12.5 Million Through OpenSSF and Alpha-Omega to Strengthen Open Source Security

By Blog

Securing the open source software that underlies our digital infrastructure is a persistent and complex challenge that continues to evolve. The Linux Foundation announced a $12.5 million collective investment to be managed by Alpha-Omega and The Open Source Security Foundation (OpenSSF). This funding comes from key partners including Anthropic, Amazon Web Services (AWS), Google, Google DeepMind, GitHub, Microsoft, and OpenAI. The goal is to strengthen the security, resilience, and long-term sustainability of the open source ecosystem worldwide.

Building on Proven Success through OpenSSF Initiatives

This new investment provides critical support for OpenSSF’s proven, maintainer-centric initiatives. Targeted financial support is a key catalyst for sustained improvement in open source security. The results of the OpenSSF’s collective work in 2025 are clear:

  • Alpha-Omega invested $5.8 million in 14 critical open source projects and completed over 60 security audits and engagements.
  • Growing a Global Community: OpenSSF grew to 117 member organizations and was advanced by 267+ active contributors from 112 organizations, working across 10 Working Groups and 32 Technical Initiatives.
  • Driving Technical Impact: The OpenSSF Technical Advisory Council (TAC) awarded over $660,000 in funding across 14 Technical Initiatives, strengthening supply chain integrity, advancing transparency tools like Sigstore, and enabling community-driven security audits.
  • Measurable Security Uplift: Focused security engagements across critical projects resulted in 52 vulnerabilities fixed and 5 fuzzing frameworks implemented.
  • Expanding Education: Nearly 20,000 course enrollments across OpenSSF’s free training programs, with new courses like Security for Software Development Managers and Secure AI/ML-Driven Software Development empowering developers globally.
  • Global Policy Engagement: Launched the Global Cyber Policy Working Group and served as a challenge advisor for the Artificial Intelligence Cyber Challenge (AIxCC), ensuring the open source voice is heard in evolving regulations like the EU Cyber Resilience Act (CRA).

AI: A New Frontier in Security

The security landscape is changing fast. Artificial intelligence (AI) accelerates both software development and the discovery of vulnerabilities, which creates new demands on maintainers and security teams. However, OpenSSF recognizes that grant funding is not the sole solution to the problems AI tools are causing for open source security teams today. This moment also offers powerful new opportunities to improve how security work gets done.

This new funding will help the OpenSSF provide the active resources and dedicated projects needed to support overworked maintainers with the triage and processing of the increased AI-generated security reports they are currently receiving. Our response will feature global strategies tailored to the needs of maintainers and their communities.

“Open source software now underpins the majority of modern software systems, which means the security of that ecosystem affects nearly every organization and user worldwide,” said Christopher Robinson, CTO and Chief Security Architect at OpenSSF. “Investments like this allow the community to focus on what matters most: empowering maintainers, strengthening security practices across projects, and raising the overall security bar for the global software supply chain.”

Securing the Open Source Lifecycle

The true measure of success will be execution. Success is not about how much AI we introduce into open source. It is determined by whether maintainers can use it to reduce risk, remediate serious vulnerabilities faster, and strengthen the software supply chain long term. We are grateful to our funding partners for their commitment to this work, and we look forward to continuing it alongside the maintainers and communities that power the world’s digital systems.

“Our commitment remains focused: to sustainably secure the entire lifecycle of open source software,” said Steve Fernandez, General Manager of OpenSSF. “By directly empowering the maintainers, we have an extraordinary opportunity to ensure that those at the front lines of software security have the tools and standards to take preventative measures to stay ahead of issues and build a more resilient ecosystem for everyone.”

To learn more about open source security initiatives at the Linux Foundation, please visit openssf.org and alpha-omega.dev.

OpenSSF Newsletter – February 2026

By Newsletter

TL;DR:

đŸ‡łđŸ‡± Open Source SecurityCon Europe → Agenda live and registration open

đŸŽ™ïž Securing Agentic AI in Practice → March 17 Tech Talk on AI/ML security in action

📖 Compiler Annotations Guide → Practical C/C++ hardening without rewrites

🏆 Security Slam 2026 → 30-day challenge to level up project security

đŸ‡ȘđŸ‡ș CRA in Practice @ FOSDEM → Turning regulation into actionable steps

📩 Package Repository Security Forum → Cross-ecosystem collaboration in action

đŸŽ™ïž What’s in the SOSS? → CFP tips and a 4-part AIxCC deep dive

6 min read

Join Us at Open Source SecurityCon Europe 2026 in Amsterdam

Planning to attend KubeCon + CloudNativeCon Europe in March? Don’t miss OpenSSF’s co-located one-day event! This gathering will bring together a diverse community, including software developers, security engineers, public sector experts, CISOs, CIOs, and tech pioneers, to explore challenges and opportunities in modern security. Collaborate with peers and discover the essential tools, knowledge, and strategies needed to ensure a safer, more secure future.

The agenda is live! Read the blog to learn what not to miss in Amsterdam and to see highlights from SecurityCon North America.

Read the blog | Register now | View the agenda

Mark Your Calendar For the Upcoming Tech Talk: Securing Agentic AI in Practice: From OpenSSF Guidance to Real-World Implementation

Join us for the first OpenSSF Tech Talk of the year, focusing on agentic artificial intelligence (AI) security.

In this session, we will explore how the OpenSSF AI/ML Security Working Group is developing open guidance and frameworks to help secure AI and machine learning systems, and how that work translates into real-world practice. Using SAFE MCP and other solutions from OpenSSF member companies as examples, we will highlight community-driven efforts to improve the security of agentic AI systems, the problems they address, the design tradeoffs involved, and the lessons learned so far.

We will also feature OpenSSF’s free course, Secure AI/ML Driven Software Development (LFEL1012), which gives attendees a clear path to build practical skills and contribute to this rapidly evolving field.

Register and mark your calendar for March 17 at 1:00 p.m. ET. Additional speaker information will be shared soon.

Fill Out All The Margins 📖: OpenSSF Releases Compiler Annotations Guide for C and C++

OpenSSF has released a new Compiler Annotations Guide for C and C++ to help developers improve memory safety, diagnostics, and overall software security by using compiler-supported annotations. The guide explains how annotations in GCC and Clang/LLVM can make code intent explicit, strengthen static analysis, reduce false positives, and enable more effective compile-time and run-time protections. As memory-safety issues continue to drive a significant share of vulnerabilities in C and C++ systems, the guide offers practical, real-world guidance for applying low-friction hardening techniques that improve security without requiring large-scale rewrites of existing codebases. 

Read the blog

Security Slam 2026

Security Slam 2026 is a 30-day security hygiene challenge running from February 20 to March 20, culminating in an awards ceremony at KubeCon + CloudNativeCon Europe. Hosted by OpenSSF in partnership with CNCF TAG Security & Compliance and Sonatype, the event encourages projects to use practical security tools, including OpenSSF resources, to strengthen their security posture based on their maturity level. Participants can earn recognition, badges, and plaques for completing milestones, reinforcing a community-driven effort to improve open source software security at scale. 

Read the blog to learn more | Register now to receive reminders and instructions

EU Cyber Resilience Act (CRA) in Practice @ FOSDEM 2026: From Awareness to Action

At FOSDEM 2026, the CRA in Practice DevRoom brought together open source and industry leaders to turn the EU Cyber Resilience Act from policy discussion into practical action. Through case studies and panels, speakers shared concrete approaches to vulnerability management, SBOMs, VEX, risk assessment, and the steward role. 

Read the blog

Advancing Package Repository Security Through Collaboration

On February 2, OpenSSF convened the Package Manager Security Forum, bringing together maintainers and registry operators from major ecosystems to address shared challenges in package repository security. Discussions highlighted common concerns around identity and account security, governance and abuse handling, transparency, and long-term sustainability. The session reinforced that package ecosystem risks are interconnected and that improving security requires cross-ecosystem coordination, shared frameworks, and continued collaboration through OpenSSF’s neutral convening role.

Read the recap

Getting an OpenSSF Baseline Badge with the Best Practices Badge System

Is your open source project meeting the “minimum definition” of security? The OpenSSF has officially integrated the Open Source Project Security Baseline (OSPS Baseline) into its Best Practices Badge Program.

In our latest blog, David A. Wheeler explains how you can quickly identify and meet essential security requirements to earn a Baseline Badge.

What’s in the SOSS? An OpenSSF Podcast:

#50 – S3E2 Demystifying the CFP Process with KubeCon North America Keynote Speakers

Stacey Potter and Adolfo “Puerco” García Veytia share practical, behind-the-scenes advice on submitting conference talks, fresh off their KubeCon keynote. They break down how CFP review committees work, what makes an abstract stand out, common mistakes to avoid, and why authenticity matters more than polish. The episode also tackles imposter syndrome and encourages new and diverse voices to shape the future of open source through speaking.

#51 – S3E3 AIxCC Part 1: From Skepticism to Success with Andrew Carney

Andrew Carney from DARPA explains the vision and results behind the two-year AI Cyber Challenge (AIxCC), which tasked teams with building AI systems that can automatically find and patch vulnerabilities in open source software. Despite early skepticism, competitors identified more than 80% of seeded vulnerabilities and generated effective patches at surprisingly low compute costs. The episode looks at what comes next as these cyber reasoning systems move from competition to real-world adoption.

#52 – S3E4 AIxCC Part 2: How Team Atlanta Won by Blending Traditional Security and LLMs

Professor Taesoo Kim of Georgia Tech describes how Team Atlanta combined fuzzing, symbolic execution, and large language models to win AIxCC. Initially skeptical of AI, the team shifted its strategy mid-competition and discovered that hybrid approaches produced the strongest results. The conversation also covers commercialization efforts, integration with OSS-Fuzz, and how the experience reshaped academic security research.

#53 – S3E5 AIxCC Part 3: Trail of Bits’ Hybrid Approach with Buttercup

Michael Brown of Trail of Bits discusses Buttercup, the second-place AIxCC system that pairs large language models with conventional software analysis tools. The team focused on using AI for well-scoped tasks like patch generation while relying on fuzzers for proof-of-vulnerability. Now fully open source and able to run on a laptop, Buttercup is actively maintained and positioned for broader enterprise and community use.

#54 – S3E6 AIxCC Part 4: Cyber Reasoning Systems in the Real World

CRob and Jeff Diecks wrap up the AIxCC series by exploring how competition teams are applying their systems to real open source projects such as the Linux kernel and CUPS. They introduce the OSS-CRS initiative, which aims to standardize and combine components from multiple cyber reasoning systems, and share lessons learned about responsibly reporting AI-generated findings. The episode highlights how collaboration through OpenSSF’s AI/ML Security Working Group and Cyber Reasoning Systems SIG is shaping the next phase of AI-driven security.

News from OpenSSF Community Meetings and Projects:

Upcoming community meetings

In the News:

  • The OpenSSF was featured in a Technology Magazine Q&A. CRob discusses OpenSSF’s goals, OSSAfrica, the BEAR Working Group, Security Baseline, and much more. This conversation was also covered by AI Magazine. 

Meet OpenSSF at These Upcoming Events!

Connect with the OpenSSF Community at these key events:

Ways to Participate:

There are a number of ways for individuals and organizations to participate in OpenSSF. Learn more here.

See You Next Month! 

We want to get you the information you most want to see in your inbox. Missed our previous newsletters? Read here!

Have ideas or suggestions for next month’s newsletter about the OpenSSF? Let us know at marketing@openssf.org, and see you next month! 

Regards,

The OpenSSF Team

What’s in the SOSS? Podcast #41 – S2E18 The Remediation Revolution: How AI Agents Are Transforming Open Source Security with John Amaral of Root.io

By Podcast

Summary

In this episode of What’s in the SOSS, CRob sits down with John Amaral from Root.io to explore the evolving landscape of open source security and vulnerability management. They discuss how AI and LLM technologies are revolutionizing the way we approach security challenges, from the shift away from traditional “scan and triage” methodologies to an emerging “fix first” approach powered by agentic systems. John shares insights on the democratization of coding through AI tools, the unique security challenges of containerized environments versus traditional VMs, and how modern developers can leverage AI as a “pair programmer” and security analyst. The conversation covers the transition from “shift left” to “shift out” security practices and offers practical advice for open source maintainers looking to enhance their security posture using AI tools.

Conversation Highlights

00:25 – Welcome and introductions
01:05 – John’s open source journey and Root.io’s Slim Toolkit project
02:24 – How application development has evolved over 20 years
05:44 – The shift from engineering rigor to accessible coding with AI
08:29 – Balancing AI acceleration with security responsibilities
10:08 – Traditional vs. containerized vulnerability management approaches
13:18 – Leveraging AI and ML for modern vulnerability management
16:58 – The coming “remediation revolution” and fix-first approach
18:24 – Why “shift left” security isn’t working for developers
19:35 – Using AI as a cybernetic programming and analysis partner
20:02 – Call to action: Start using AI tools for security today
22:00 – Closing thoughts and wrap-up

Transcript

Intro Music & Promotional clip (00:00)

CRob (00:25)
Welcome, welcome, welcome to What’s in the SOSS, the OpenSSF’s podcast where I talk to upstream maintainers, industry professionals, educators, academics, and researchers all about the amazing world of upstream open source security and software supply chain security.

Today, we have a real treat. We have John from Root.io with us here, and we’re going to be talking a little bit about some of the new, air quotes, “cutting edge” things going on in the space of containers and AI security. But before we jump into it, John, could you maybe share a little bit with the audience how you got into open source and what you’re doing upstream?

John (01:05)
First of all, great to be here. Thank you so much for taking the time at Black Hat to have a conversation. I really appreciate it. Open source, really great topic. I love it. Been doing stuff with open source for quite some time. How did I get into it? I’m a builder. I make things. I make software; I’ve been writing software a long time. Folks can’t see me, but you know, I’m gray and have no hair and all that sort of thing. We’ve been doing this a while. And I think that it’s been a great journey and a pleasure in my life to work with software in a way that democratizes it, gets it out there. I’ve taken a special interest in security for a long time, 20 years of working in cybersecurity. It’s a problem that’s been near and dear to me since the first day I ever had my first floppy disk corrupted. I’ve been on a mission to fix that. And my open source journey has been diverse. My company, Root.io, we are the maintainers of an open source project called Slim Toolkit, which is a pretty popular open source project that is about security and containers. And it’s been our goal, myself personally and in my latest company, to really try to help make open source secure for the masses.

CRob (02:24)
Excellent. That is an excellent kind of vision and direction to take things. So from your perspective, I feel we’re very similar age and kind of came up maybe in semi-related paths. But from your perspective, how have you seen application development kind of transmogrify over the last 20 or so years? What has gotten better? What might’ve gotten a little worse?

John (02:51)
20 years is a big time frame when talking about modern open source software. I remember when Linux first came out, and I was playing with it. I actually ported it to a single board computer as one of my jobs as an engineer back in the day, which was super fun. Of course, we’ve seen what happened by making software available to folks. It’s become the foundation of everything.

Andreessen said software will eat the world, and open source was the teeth. It really made software available, and now 95 or more percent of everything we touch and do is open source software. I’ll add that in the grand scheme of things, it’s been tremendously secure, especially projects like Linux. We’re really splitting hairs, but security problems are real. As we’ve seen, there’s been a proliferation of open source and a proliferation of repos with things like GitHub and all that. Then today, the proliferation of tooling and the ability to build software, and then to build software with AI, is simply exponentiating the rate at which we can do things. Good people who build software for the right reasons can do things. Bad people who do things for bad reasons can do things. And it’s an arms race.

And I think it’s really benefiting both software development and society, giving software builders these tremendously powerful tools to do the things they want. For a person in my career arc, today I feel like I have the power to write code at a rate that’s probably better than I ever have. I’ve always been hands on the keyboard, but I feel rejuvenated. I’ve become a business person in my life and built companies.

And I didn’t always have the time, or maybe even the moment, to do coding at the level I’d like. And today I’m banging out projects like I was 25, or even better. But at the same time that we’re getting all this leverage universally, we also notice that there’s an impending security risk where, yeah, we can find vulnerabilities, and generate them, faster than ever. And LLMs aren’t quite good yet at secure coding. I think they will be. But attackers are also using it for exploits, and as soon as a vulnerability is disclosed, or even minutes later, they’re writing exploits that can target it. I love the fact that the pace and the leverage are high, and I think the world’s going to do great things with it, the world of open source folks like us. At the same time, we’ve got to be more diligent and even better at defending.

CRob (05:44)
Right. I heard an interesting statement yesterday where folks were talking about software engineering as a discipline that’s maybe 40 to 60 years old, and engineering was the core noun there. These engineers were trained; they had a certain rigor. They might not have always enjoyed security, but they were engineers, and there was a certain elegance to the code. Much like artists, they took a lot of pride in their work, and you could understand what the code did. Today, and especially in the last several years with the influx of AI tools, it’s a blessing and a curse that anybody can be a developer. Not just the people who used to do it and didn’t have time, who now get to scratch that itch. Now anyone can write code, and they may not necessarily have that same rigor and discipline that comes from the engineering trades.

John (06:42)
I’m going to guess. I think it’s not walking out too far on a limb that you probably coded in systems at some point in your life where you had a very small amount of memory to work with. You knew every line of code in the system, literally as it was written. There might have been a shim operating system or something small, but I wrote embedded systems early in my career and we knew everything. We knew every line of code, and the elegance and the efficiency of it and the speed of it. And we were very close to the CPU, very close to the hardware. It was slow building things because you had to handcraft everything, but it was very curated and very beautiful, so to speak. I find beauty in those things. You’re exactly right. I think I started to see this happen around the time when JVMs, Java Virtual Machines, started happening, where you didn’t have to worry about garbage collection. You didn’t have to worry about memory management.

And then progressively, levels of abstraction have increased, right, to make coding faster and easier and to give you more power. And that’s great, and we’ve built a lot more systems, bigger systems, and open source helps. But now literally anyone who can speak cogently and describe what they want can get a system. And I look at the code my LLMs produce. I know what good code looks like. Our team is really good at engineering, right?

Hmm, how did it think to do it that way? Then we go back and tell it what we want, and you can massage it with some words. It’s really dangerous, and if you don’t know how to look for security problems, that’s even more dangerous. Exactly: the level of abstraction is so high that people aren’t really curating code the way they might need to in order to build secure, production-grade systems.

CRob (08:29)
Especially if you are creating software with the intention of somebody else using it, probably in a business, then you’re not really thinking about all the extra steps you need to take to help protect yourself in your downstream.

John (08:44)
Yeah, yeah, I think it’s an evolution, right? The way I think of it, these AI systems we’re working with are maybe second graders when it comes to professional code authoring. They can produce a lot of good stuff, right? It’s really up to the user to discern what’s usable.

And we can get to prototypes very quickly, which I think is greatly powerful, which lets us iterate and develop. In my company, we use AI coding techniques for everything, but nothing gets into production, into customer hands that isn’t highly vetted and highly reviewed. So, the creation part goes much faster. The review part is still a human.

CRob (09:33)
Well, that’s good. Human on the loop is important.

John (09:35)
It is.

CRob (09:36)
So let’s change the topic slightly. Let’s talk a little bit more about vulnerability management. From your perspective, thinking about traditional brick-and-mortar organizations, what key differences do you see between someone that is more data center, server, and VM focused versus the new generation of cloud native, where we have containers and cloud?

What are some of the differences you see in managing your security profile and your vulnerabilities there?

John (10:08)
Yeah, so I’ll start out with a general statement about vulnerability management. The way I observe it, current methodologies today are pretty traditional.

It’s scan, it’s inventory. What do I have for software? Let’s just focus on software. What do I have? Do I know what it is or not? Do I have a full inventory of it? Then you scan it and you get a laundry list of vulnerabilities, with some false positives and false negatives among what you’re able to find. And then I’ve got this long list, and the typical pattern there is triage: which are more important than others, and which can I explain away? And then there’s a cycle of remediation, hopefully (a lot of times not), where you’re cycling work back to the engineering organization or to whoever is in charge of doing the remediation. And this is a very big loop, mostly starting and ending with long lists of vulnerabilities that need to be addressed and risk managed, right? It doesn’t really matter if you’re doing VMs or traditional software or containerized software. That’s the status quo, I would say, for the average company doing vulnerability management. And the remediation part of that ends up being fractional work, meaning you mostly just don’t have time to get to it all, and it becomes a big tax on the development team to fix it. Because in software, it’s very difficult for DevSecOps teams to fix it when it’s actually a coding problem in the end.

In the traditional VM world, I’d say the potential impact, and the velocity at which things move, is lower compared to containerized environments, where you have Kubernetes and other orchestration systems that can literally proliferate containers everywhere, in a place where infrastructure as code is the norm. I’d just say that the risk surface in these containerized environments is much more vast and oftentimes less understood, whereas traditional VMs still follow a pretty prescriptive pattern of deployment. So I think in the end, the more prolific you can be with deploying code, the more likely you’ll have this massive risk surface, and containers are so portable and easy to produce that they’re everywhere. You can pull them down from Docker Hub, and these things are full of vulnerabilities, and they’re sitting on people’s desks.

They’re sitting in staging areas or sitting in production. So proliferation is vast. And I think that in conjunction with really high vulnerability reporting rates, really high code production rates, vast consumption of open source, and then exploits at AI speed, we’re seeing this kind of almost explosive moment in risk from vulnerability management.

CRob (13:18)
So, machine intelligence, which has now transformed into artificial intelligence, has been around for several decades, but it seems like most recently, over the last two to four years, it has been exponentially accelerating. We have this whole spectrum of things: AI, ML, LLMs, GenAI, and now agentic systems and MCP servers.

So kind of looking at all these different technologies, what recommendations do you have for organizations that are looking to try to manage their vulnerabilities and potentially leveraging some of this new intelligence, these new capabilities?

John (13:58)
Yeah, it’s amazing at the rate of change of these kinds of things.

CRob (14:02)
It’s crazy.

John (14:03)
I think there’s a massively accelerating, kind of exponentially accelerating feedback loop because once you have LLMs that can do work, they can help you evolve the systems that they manifest faster and faster and faster. It’s a flywheel effect. And that is where we’re going to get all this leverage in LLMs. At Root, we build an agentic platform that does vulnerability patching at scale. We’re trying to achieve sort of an open source scale level of that.

And I only said that because I believe that rapidly, not just us, but from an industry perspective, we’re evolving to have the capabilities through agentic systems based on modern LLMs to be able to really understand and modify code at scale. There’s a lot of investment going in by all the major players, whether it’s Google or Anthropic or OpenAI to make these LLM systems really good at understanding and generating code. At the heart of most vulnerabilities today, it’s a coding problem. You have vulnerable code.

And so we’ve been able to exploit those coding capabilities to turn it into an expert security engineer and maintainer of any software system. And so I think what we’re on the verge of is this, I’ll call it, remediation revolution. I mentioned that the status quo is typically inventory, scan, list, triage, do your best. That’s a scan-first mode, I’ll call it, where mostly you’re just trying to get a comprehensive list of the vulnerabilities you have. It’s going to get flipped on its head with this kind of technique, where it’s going to be: just fix everything first. And there’ll be outliers. There’ll be things that are technically impossible to fix for a while. For instance, there could be a disclosure, but you really don’t know how it works. You don’t have CWEs. You don’t have all the things yet. So you can’t really know yet.

That gap will close very quickly once you know what code base it’s in and you understand it, maybe through a POC or something like that. But I think we’re going to enter into the remediation revolution of vulnerability management, where, at least for third-party open source code, most of it will be fixed a priori.

Now, zero days will start to happen faster, there’ll be all of that, and there’ll be a long tail on this, and certainly probably things we can’t even imagine yet. But generally, I think vulnerability management as we know it will enter into this phase of fix first. And I think that’s really exciting, because today the status quo creates a lot of work for teams: managing those lists, dealing with the re-engineering cycle. It’s basically latent rework that you have to do, and you don’t really know what’s coming. I think that can go away, which is exciting because it frees up security practitioners and engineers to focus on, I’d say, more meaningful problems, less toil. And that’s good for software.

CRob (17:08)
It’s good for the security engineers.

John (17:09)
Correct.

CRob (17:10)
It’s good for the developers.

John (17:11)
It’s really good for developers. I think generally the shift left revolution in software really didn’t work the way people thought. Shifting that work left has two major frictions. One is that it shifts new work onto engineering teams who are already maximally busy.

CRob (17:29)
Correct.

John (17:29)
I didn’t have time to do a lot of other things when I was an engineer. And the second is that software engineers aren’t security engineers. They really don’t like the work and maybe aren’t good at the work. And so what we really want is to not have that work land on their plate. I think we’re entering into an age, and this is a general statement for software, where software as a service and the idea of shift left is really going to be replaced with what I call shift out: have an agentic system do the work for you, especially if it’s work that is toilsome and difficult, low value, or even just security maintenance, right? A lot of this work is hard. Patching things is hard, especially for the engineer who doesn’t know the code. If you can make that work go away and make it secure, and agents can do that for you, I think there’s higher-value work for engineers to be doing.

CRob (18:24)
Well, and especially with the trend in open source where people are assembling and composing apps instead of creating them from whole cloth. It’s a very rare engineer indeed who is going to understand every piece of code that’s in there.

John (18:37)
And they don’t. I don’t think it’s feasible. I don’t know one, except the folks who write Node, for instance, who knows how Node works internally. They just don’t know. And if there’s a vulnerability down there, some of that stuff’s really esoteric. You have to know how that code works to fix it. As I said, luckily, existing LLM systems, with agents powering them and exploiting them, are really good at understanding big code bases. They have almost a perfect memory for how the code fits together. Humans don’t, and it takes a long time to learn that code.

CRob (19:11)
Yeah, absolutely. And as I’ve been leveraging AI in my practice, there are certain specific tasks AI does very well. It’s great at analyzing large pools of data and providing you lists and pointers and hints. Not so great at making things up on its own, but generally it’s the expert system. It’s nice to have a buddy there to assist you.

John (19:35)
It’s a pair programmer for me, and it’s a pair data analyst for you, and that’s how you use it. I think that’s perfect. We’ve effectively become cybernetic organisms, our organic capabilities augmented with this really powerful tool. I think it’s going to keep getting more and more excellent at the tasks that we need offloaded.

CRob (19:54)
That’s great. As we’re wrapping up here, do you have any closing thoughts or a call to action for the audience?

John (20:02)
A call to action for the audience. Again, this is a passion play for me: vulnerability management and the security of open source. A couple of things, cut from the same cloth. I think we’re entering an age where security and vulnerability management can be disrupted. For anyone who’s struggling with that kind of high-effort work and that never-ending list, help is on the way, and there are techniques you can use with open source projects today that can get you started. Just for instance, researching vulnerabilities: if you’re not using LLMs for that, you should start tomorrow. It is an amazing buddy for digging in and understanding how things work, what these exploits are, and what these risks are. There is tooling like mine and others out there that you can use to really take a lot of effort away from vulnerability management. I’d say that any open source maintainers out there can start using these tools as pair programmers and security analysts, and they’re pretty good. If you just learn some prompting techniques, you can probably secure your code at a level you hadn’t before. It’s pretty good at figuring out where your security weaknesses are and telling you what to do about them. I think just these things can probably enhance open source security tremendously.

CRob (24:40)
That would be amazing, to help offload some of that burden from our maintainers and let them work on the excellent things they actually want to build.

John (21:46)
Threat modeling, for instance: they’re actually pretty good at it. Yeah. Which is amazing. So start using the tools and make them your friend. And even if you don’t want to use them as a pair programmer, certainly use them as an adjunct SecOps engineer.

CRob (22:00)
Well, excellent. John from Root.io. I really appreciate you coming in here, sharing your vision and your wisdom with the audience. Thanks for showing up.

John (22:10)
Pleasure was mine. Thank you so much for having me.

CRob (22:12)
And thank you everybody. That is a wrap. Happy open sourcing everybody. We’ll talk to you soon.

OpenSSF Newsletter – September 2025

By Newsletter

Welcome to the September 2025 edition of the OpenSSF Newsletter! Here’s a roundup of the latest developments, key events, and upcoming opportunities in the Open Source Security community.

TL;DR:

🎉 Big week in Amsterdam: Recap of OpenSSF at OSSummit + OpenSSF Community Day Europe.

đŸ„š Golden Egg Awards shine on five amazing community leaders.

✹ Fresh resources: AI Code Assistant tips and SBOM whitepaper.

đŸ€ Trustify + GUAC = stronger supply chain security.

🌍 OpenSSF Community Day India: 230+ open source enthusiasts packed the room.

🎙 New podcasts: AI/ML security + post-quantum race.

🎓 Free courses to level up your security skills.

📅 Mark your calendar and join us for Community Events.

Celebrating the Community: OpenSSF at Open Source Summit and OpenSSF Community Day Europe Recap

From August 25–28, 2025, the Linux Foundation hosted Open Source Summit Europe and OpenSSF Community Day Europe in Amsterdam, bringing together developers, maintainers, researchers, and policymakers to strengthen software supply chain security and align on global regulations like the EU Cyber Resilience Act (CRA). The week included strong engagement at the OpenSSF booth and sessions on compliance, transparency, proactive security, SBOM accuracy, and CRA readiness. 

OpenSSF Community Day Europe celebrated milestones in AI security, public sector engagement, and the launch of Model Signing v1.0, while also honoring five community leaders with the Golden Egg Awards. Attendees explored topics ranging from GUAC+Trustify integration and post-quantum readiness to securing GitHub Actions, with an interactive Tabletop Exercise simulating a real-world incident response. 

These gatherings highlighted the community’s progress and ongoing commitment to strengthening open source security. Read more.

OpenSSF Celebrates Global Momentum, AI/ML Security Initiatives and Golden Egg Award Winners at Community Day Europe

At OpenSSF Community Day Europe, the Open Source Security Foundation honored this year’s Golden Egg Award recipients. Congratulations to Ben Cotton (Kusari), Kairo de Araujo (Eclipse Foundation), Katherine Druckman (Independent), Eddie Knight (Sonatype), and Georg Kunz (Ericsson) for their inspiring contributions.

With exceptional community engagement across continents and strategic efforts to secure the AI/ML pipeline, OpenSSF continues to build trust in open source at every level.

Read the full press release to explore the achievements, inspiring voices, and what’s next for global open source security.

Blogs: What’s New in the OpenSSF Community?

Here you will find a snapshot of what’s new on the OpenSSF blog. For more stories, ideas, and updates, visit the blog section on our website.

Open Source Friday with OpenSSF – Global Cyber Policy Working Group

On August 15, 2025, GitHub’s Open Source Friday series spotlighted the OpenSSF Global Cyber Policy Working Group (WG) and the OSPS Baseline in a live session hosted by Kevin Crosby, GitHub. The panel featured OpenSSF’s Madalin Neag (EU Policy Advisor), Christopher Robinson (CRob) (Chief Security Architect) and David A. Wheeler (Director of Open Source Supply Chain Security) who discussed how the Working Group helps developers, maintainers, and policymakers navigate global cybersecurity regulations like the EU Cyber Resilience Act (CRA). 

The conversation highlighted why the WG was created, how global policies affect open source, and the resources available to the community, including free training courses, the CRA Brief Guide, and the Security Baseline Framework. Panelists emphasized challenges such as awareness gaps, fragmented policies, and closed standards, while underscoring opportunities for collaboration, education, and open tooling. 

As the CRA shapes global standards, the Working Group continues to track regulations, engage policymakers, and provide practical support to ensure the open source community is prepared for evolving cybersecurity requirements. Learn more and watch the recording.

Improving Risk Management Decisions with SBOM Data

SBOMs are becoming part of everyday software practice, but many teams still ask the same question: how do we turn SBOM data into decisions we can trust? 

Our new whitepaper, “Improving Risk Management Decisions with SBOM Data,” answers that by tying SBOM information to concrete risk-management outcomes across engineering, security, legal, and operations. It shows how to align SBOM work with real business motivations like resiliency, release confidence, and compliance. It also describes what “decision-ready” SBOMs look like, and how to judge data quality. To learn more, download the Whitepaper.

Trustify joins GUAC

GUAC and Trustify are combining under the GUAC umbrella to tackle the challenges of consuming, processing, and utilizing supply chain security metadata at scale. With Red Hat’s contribution of Trustify, the unified community will serve as the central hub within OpenSSF for building and using supply chain knowledge graphs, defining standards, developing shared infrastructure, and fostering collaboration. Read more.

Recap: OpenSSF Community Day India 2025

On August 4, 2025, OpenSSF hosted its second Community Day India in Hyderabad, co-located with KubeCon India. With 232 registrants and standing-room-only attendance, the event brought together open source enthusiasts, security experts, engineers, and students for a full day of learning, collaboration, and networking.

The event featured opening remarks from Ram Iyengar (OpenSSF Community Engagement Lead, India), followed by technical talks on container runtimes, AI-driven coding risks, post-quantum cryptography, supply chain security, SBOM compliance, and kernel-level enforcement. Sessions also highlighted tools for policy automation, malicious package detection, and vulnerability triage, as well as emerging approaches like chaos engineering and UEFI secure boot.

The event highlighted India’s growing role in global open source development and the importance of engaging local communities to address global security challenges. Read more.

New OpenSSF Guidance on AI Code Assistant Instructions

In our recent blog, Avishay Balter, Principal SWE Lead at Microsoft, and David A. Wheeler, Director of Open Source Supply Chain Security at OpenSSF, introduce the OpenSSF “Security-Focused Guide for AI Code Assistant Instructions.” AI code assistants can speed development but also generate insecure or incorrect results if prompts are poorly written. The guide, created by the OpenSSF Best Practices and AI/ML Working Groups with contributors from Microsoft, Google, and Red Hat, shows how clear, security-focused instructions improve outcomes. It stands as a practical resource for developers today, while OpenSSF also develops a broader course (LFEL1012) on using AI code assistants securely.

This effort marks a step toward ensuring AI helps improve security instead of undermining it. Read more.

Open Infrastructure Is Not Free: A Joint Statement on Sustainable Stewardship

Public package registries and other shared services power modern software at global scale, but most costs are carried by a few stewards while commercial-scale users often contribute little. Our new open letter calls for practical models that align usage with responsibility — through partnerships, tiered access, and value-add options — so these systems remain strong, secure, and open to all.

Signed by: OpenSSF, Alpha-Omega, Eclipse Foundation (Open VSX), OpenJS Foundation, Packagist (Composer), Python Software Foundation (PyPI), Rust Foundation (crates.io), Sonatype (Maven Central).

Read the open letter.

What’s in the SOSS? An OpenSSF Podcast:

#38 – S2E15 Securing AI: A Conversation with Sarah Evans on OpenSSF’s AI/ML Initiatives

In this episode of What’s in the SOSS, Sarah Evans, Distinguished Engineer at Dell Technologies, discusses extending secure software practices to AI. She highlights the AI Model Signing project, the MLSecOps whitepaper with Ericsson, and efforts to identify new personas in AI/ML operations. Tune in to hear how OpenSSF is shaping the future of AI security.

#39 – S2E16 Racing Against Quantum: The Urgent Migration to Post-Quantum Cryptography with KeyFactor’s Crypto Experts

In this episode of What’s in the SOSS, host Yesenia talks with David Hook and Tomas Gustavsson from Keyfactor about the race to post-quantum cryptography. They explain quantum-safe algorithms, the importance of crypto agility, and why sectors like finance and supply chains are leading the way. Tune in to learn the real costs of migration and why organizations must start preparing now before it’s too late.

Education:

The Open Source Security Foundation (OpenSSF), together with Linux Foundation Education, provides a selection of free e-learning courses to help the open source community build stronger software security expertise. Learners can earn digital badges by completing offerings such as:

These are just a few of the many courses available for developers, managers, and decision-makers aiming to integrate security throughout the software development lifecycle.

News from OpenSSF Community Meetings and Projects:

In the News:

Meet OpenSSF at These Upcoming Events!

Join us at OpenSSF Community Day in South Korea!

OpenSSF Community Days bring together security and open source experts to drive innovation in software security.

Connect with the OpenSSF Community at these key events:

Ways to Participate:

There are a number of ways for individuals and organizations to participate in OpenSSF. Learn more here.

You’re invited to


See You Next Month! 

We want to get you the information you most want to see in your inbox. Missed our previous newsletters? Read here!

Have ideas or suggestions for next month’s newsletter about the OpenSSF? Let us know at marketing@openssf.org, and see you next month! 

Regards,

The OpenSSF Team

OpenSSF at DEF CON 33: AI Cyber Challenge (AIxCC), MLSecOps, and Securing Critical Infrastructure

By Blog

By Jeff Diecks

The OpenSSF team will be attending DEF CON 33, where the winners of the AI Cyber Challenge (AIxCC) will be announced. We will also host a panel discussion at the AIxCC village to introduce the concept of MLSecOps.

AIxCC, led by DARPA and ARPA-H, is a two-year competition focused on developing AI-enabled software to automatically identify and patch vulnerabilities in source code, particularly in open source software underpinning critical infrastructure.

OpenSSF is supporting AIxCC as a challenge advisor, guiding the competition to ensure its solutions benefit the open source community. We are actively working with DARPA and ARPA-H to open source the winning systems, infrastructure, and data from the competition, and are designing a program to facilitate their successful adoption and use by open source projects. At least four of the competitors’ Cyber Reasoning Systems (CRSs) will be open sourced on Friday, August 8 at DEF CON. The remaining CRSs will also be open sourced soon after the event.

Join Our Panel: Applying DevSecOps Lessons to MLSecOps

We will be hosting a panel talk at the AIxCC Village, “Applying DevSecOps Lessons to MLSecOps.” This presentation will delve into the evolving landscape of security with the advent of AI/ML applications.

The panelists for this discussion will be:

  • Christopher “CRob” Robinson – Chief Security Architect, OpenSSF
  • Sarah Evans – Security Applied Research Program Lead, Dell Technologies
  • Eoin Wickens – Director of Threat Intelligence, HiddenLayer

Just as DevSecOps integrated security practices into the Software Development Life Cycle (SDLC) to address critical software security gaps, Machine Learning Operations (MLOps) now needs to transition into MLSecOps. MLSecOps emphasizes integrating security practices throughout the ML development lifecycle, establishing security as a shared responsibility among ML developers, security practitioners, and operations teams. When thinking about securing MLOps using lessons learned from DevSecOps, the conversation includes open source tools from OpenSSF and other initiatives, such as Supply-chain Levels for Software Artifacts (SLSA) and Sigstore, that can be extended to MLSecOps. This talk will explore some of those tools, as well as potential tooling gaps the community can partner to close. Embracing the MLSecOps methodology enables early identification and mitigation of security risks, facilitating the development of secure and trustworthy ML models.

We invite you to join us on Saturday, August 9, from 10:30-11:15 a.m. at the AIxCC Village Stage to learn more about how the lessons from DevSecOps can be applied to the unique challenges of securing AI/ML systems and to understand the importance of adopting an MLSecOps approach for a more secure future in open source software.

About the Author

Jeff Diecks is the Technical Program Manager for the AI Cyber Challenge (AIxCC) at the Open Source Security Foundation (OpenSSF). A participant in open source since 1999, he’s delivered digital products and applications for dozens of universities, six professional sports leagues, state governments, global media companies, non-profits, and corporate clients.

🎉 Celebrating Five Years of OpenSSF: A Journey Through Open Source Security

By Blog

August 2025 marks five years since the official formation of the Open Source Security Foundation (OpenSSF). Born out of a critical need to secure the software supply chains and open source ecosystems powering global technology infrastructure, OpenSSF quickly emerged as a community-driven leader in open source security.

“OpenSSF was founded to unify and strengthen global efforts around securing open source software. In five years, we’ve built a collaborative foundation that reaches across industries, governments, and ecosystems. Together, we’re building a world where open source is not only powerful—but trusted.” — Steve Fernandez, General Manager, OpenSSF

đŸŒ± Beginnings: Answering the Call

OpenSSF was launched on August 3, 2020, consolidating earlier initiatives into a unified, cross-industry effort to protect open source projects. The urgency was clear—high-profile vulnerabilities such as Heartbleed served as stark reminders that collective action was essential to safeguard the digital infrastructure everyone depends on.

“From day one, OpenSSF has been about action—empowering the community to build and adopt real-world security solutions. Five years in, we’ve moved from ideas to impact. The work isn’t done, but the momentum is real, and the future is wide open.” — Christopher “CRob” Robinson, Chief Architect, OpenSSF

🚀 Milestones & Major Initiatives

Over the past five years, OpenSSF has spearheaded critical initiatives that shaped the landscape of open source security:

2021 – Secure Software Development Fundamentals:
Launching free educational courses on edX, OpenSSF equipped developers globally with foundational security practices.

“When we launched our first free training course in secure software development, we had one goal: make security knowledge available to every software developer. Today, that same mission powers all of OpenSSF—equipping developers, maintainers, and communities with the tools they need to make open source software more secure for everyone.” — David A. Wheeler, Director, Open Source Supply Chain Security, Linux Foundation

2021 – Sigstore: Open Source Signing for Everyone:
Sigstore was launched to make cryptographic signing accessible to all open source developers, providing a free and automated way to verify the integrity and provenance of software artifacts and metadata.

“Being part of the OpenSSF has been crucial for the Sigstore project. It has allowed us to not only foster community growth, neutral governance, and engagement with the broader OSS ecosystem, but also given us the ability to coordinate with a myriad of in-house initiatives — like the securing software repos working group — to further our mission of software signing for everybody. As Sigstore continues to grow and become a core technology for software supply chain security, we believe that the OpenSSF is a great place to provide a stable, reliable, and mature service for the public benefit.”
— Santiago Torres-Arias, Assistant Professor at Purdue University and Sigstore TSC Chair Member 

2021-2022 – Security with OpenSSF Scorecard & Criticality Score:
Innovative tools were introduced to automate and simplify assessing open source project security risks.

“The OpenSSF has been instrumental in transforming how the industry approaches open source security, particularly through initiatives like the Security Scorecard and Sigstore, which have improved software supply chain security for millions of developers. As we look ahead, AWS is committed to supporting OpenSSF’s mission of making open source software more secure by default, and we’re excited to help developers all over the world drive security innovation in their applications.” — Mark Ryland, Director, Amazon Security at AWS

2022 – Launch of Alpha-Omega:

Alpha-Omega (AO), an associated project of the OpenSSF launched in February 2022, is funded by Microsoft, Google, Amazon, and Citi. Its mission is to enhance the security of critical open source software by enabling sustainable improvements and ensuring vulnerabilities are identified and resolved quickly. Since its inception, the Alpha-Omega Fund has invested $14 million in open source security, supporting a range of projects including LLVM, Java, PHP, Jenkins, Airflow, OpenSSL, AI libraries, Homebrew, FreeBSD, Node.js, jQuery, RubyGems, and the Linux Kernel. It has also provided funding to key foundations and ecosystems such as the Apache Software Foundation (ASF), Eclipse Foundation, OpenJS Foundation, Python Software Foundation, and Rust Foundation.

2023 – SLSA v1.0 (Supply-chain Levels for Software Artifacts):
Setting clear and actionable standards for build integrity and provenance, SLSA was a turning point for software supply chain security and became essential in reducing vulnerabilities.
At the same time, community-driven tools like GUAC (Graph for Understanding Artifact Composition) built on SLSA’s principles, unlocking deep visibility into software metadata and connecting the dots across provenance, SBOMs, and in-toto attestations to make that data more usable and actionable.

“Projects like GUAC demonstrate how open source innovation can make software security both scalable and practical. Kusari is proud to have played a role in these milestones, helping to strengthen the resiliency of the open source software ecosystem.”

— Michael Lieberman, CTO and Co-founder at Kusari and Governing Board member

2024 – Principles for Package Repository Security:

Offering a voluntary, community-driven security maturity model to strengthen the resilience of software ecosystems.

“Developers around the world rely daily on package repositories for secure distribution of open source software. It’s critical that we listen to the maintainers of these systems and provide support in a way that works for them. We were happy to work with these maintainers to develop the Principles for Package Repository Security, to help them put together security roadmaps and provide a reference in funding requests.” — Zach Steindler, co-chair of Securing Software Repositories Working Group, Principal Engineer, GitHub

2025

OSPS Baseline:
This initiative introduced tiered security requirements for open source projects and was quickly adopted by groundbreaking projects such as GUAC, OpenTelemetry, and bomctl.

“The Open Source Project Security Baseline was born from real use cases, with projects needing robust standardized guidance around how to best secure their development processes. OpenSSF has not only been the best topical location for contributors from around the world to gather — the foundation has gone above and beyond by providing project support to extend the content, promote the concept, and elevate Baseline from a simple control catalog into a robust community and ecosystem.” — Eddie Knight, OSPO Lead, Sonatype

AI/ML Security Working Group: 

The MLSecOps White Paper from the AI/ML Security Working Group marks a major step in securing machine learning pipelines and guiding the future of trustworthy AI.

“The AI/ML working group tackles problems at the confluence of security and AI. While the AI world is moving at a breakneck pace, the security problems that we are tackling in the traditional software world are also relevant. Given that AI can increase the impact of a security vulnerability, we need to handle them with determination. The working group has worked on securing LLM generating code, model signing and a new white paper for MLSecOps, among many other interesting things.” — Mihai Maruseac, co-chair of AI/ML Security Working Group, Staff Software Engineer, Google

🌐 Growing Community & Policy Impact

OpenSSF’s role rapidly expanded beyond tooling, becoming influential in global policy dialogues, including advising the White House on software security and contributing to critical policy conversations such as the EU’s Cyber Resilience Act (CRA).

OpenSSF also continues to invest in community-building and education initiatives. This year, the Foundation launched its inaugural Summer Mentorship Program, welcoming its first cohort of mentees working directly with technical project leads to gain hands-on experience in open source security.

The Foundation also supported the publication of the Compiler Options Hardening Guide for C and C++, originally contributed by Ericsson, to help developers and toolchains apply secure-by-default compilation practices—especially critical in memory-unsafe languages.
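As a partial, illustrative sketch of the kind of hardening flags the guide discusses (consult the guide itself for the full, current recommendations), a Makefile fragment might look like:

```makefile
# Illustrative hardened compile/link flags for GCC/Clang (partial sketch;
# see the Compiler Options Hardening Guide for the authoritative set).
CFLAGS  += -O2 -Wall -Wextra -Wformat=2 \
           -D_FORTIFY_SOURCE=3 -fstack-protector-strong -fPIE
LDFLAGS += -pie -Wl,-z,relro -Wl,-z,now
```

Flags like `_FORTIFY_SOURCE` and stack protectors catch entire classes of memory errors at compile or run time, which is why secure-by-default builds matter most in memory-unsafe languages.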

In addition, OpenSSF has contributed to improving vulnerability disclosure practices across the ecosystem, offering guidance and tools that support maintainers in navigating CVEs, responsible disclosure, and downstream communication.

“The OpenSSF is uniquely positioned to advise on considerations, technical elements, and community impact public policy decisions have not only on open source, but also on the complex reality of implementing cybersecurity to a diverse and global technical sector. In the past 5 years, OpenSSF has been building a community of well-informed open source security experts that can advise regulations but also challenge and adapt security frameworks, law, and regulation to support open source projects in raising their security posture through transparency and open collaboration; hallmarks of open source culture.” — Emily Fox, Portfolio Security Architect, Red Hat

✹ Voices from Our Community: Reflections & Hopes

Key community members, from long-standing contributors to new voices, have shaped OpenSSF’s journey:

OG Voices:

“Microsoft joined OpenSSF as a founding member, committed to advancing secure open source development. Over the past five years, OpenSSF has driven industry collaboration on security through initiatives like Alpha-Omega, SLSA, Scorecard, Secure Software Development training, and global policy efforts such as the Cyber Resilience Act. Together, we’ve improved memory safety, supply chain integrity, and secure-by-design practices, demonstrating that collaboration is key to security. We look forward to many more security advancements as we continue our partnership.” — Mark Russinovich, CTO, Deputy CISO, and Technical Fellow, Microsoft Azure

OpenSSF Leadership Perspective: 

“OpenSSF’s strength comes from the people behind it—builders, advocates, and champions from around the world working toward a safer open source future. This milestone isn’t just a celebration of what we’ve accomplished, but of the community we’ve built together.” — Adrianne Marcum, Chief of Staff, OpenSSF

Community Perspectives:

“After 5 years of hard work, the OpenSSF stands as a global force for securing the critical open source software that we all use. Here’s to five years of uniting communities, hardening the software supply chain, and driving a safer digital future.” — Tracy Ragan, CEO, DeployHub

“I found OpenSSF through my own curiosity, not by invitation, and I stayed because of the warmth, support, and shared mission I discovered. From contributing to the BEAR Working Group to receiving real backing for opportunities, the community consistently shows up for its members. It’s more than a project; it’s a space where people are supported, valued, and empowered to grow.” — Ijeoma Onwuka, Independent Contributor

🔼 Looking Forward

As we celebrate our fifth anniversary, OpenSSF is preparing for a future increasingly influenced by AI-driven tools and global collaboration. Community members across the globe envision greater adoption of secure AI practices, expanded policy influence, and deeper, inclusive international partnerships.

“As we celebrate OpenSSF’s 5th Anniversary, I’m energized by how our vision has grown into a thriving global movement of developers, maintainers, security researchers, and organizations all united by our shared mission. Looking ahead we’re hoping to cultivate our community’s knowledge and empower growth through stronger collaboration and more inclusive pathways for contributors.” – Stacey Potter, Community Manager, OpenSSF

📣 Join the Celebration

We invite you to share your memories, contribute your voice, and become part of the next chapter in securing open source software.

Here’s to many more years ahead! 🎉