Stephanie Domas, Canonical’s Chief Security Officer, returns to What’s in the SOSS to discuss critical open source challenges. She addresses the issues of third-party security patch versioning, the rise of software sovereignty, and how custom patches break SBOMs. Domas also explains why geographic code restrictions contradict open source principles and what the EU’s Cyber Resilience Act (CRA) means for enterprises. She highlights Canonical’s work integrating memory-safe components like sudo-rs into the next Ubuntu LTS. This episode challenges assumptions about supply chain security, software trust, and the future of collaborative development in a regulated world.
00:00 – Welcome
01:49 – Memory safety revolution
02:00 – Black Hat reflections
03:48 – The SBOM versioning crisis
06:23 – Semantic versioning falls apart
10:06 – Software sovereignty exposed
12:33 – Trust through transparency
14:02 – The insider threat parallel
17:04 – EU CRA impact
18:50 – The manufacturer gray area
21:08 – The one-maintainer problem
22:51 – Will regulations kill open source adoption?
24:43 – Call to action
CRob (00:07.109)
Welcome, welcome, welcome to What’s in the SOSS, where we talk to the amazing people that make up the upstream open source ecosystem. These are developers, maintainers, researchers, all manner of contributors that help make open source great. Today I have an amazing friend of the show, and actually this is a repeat performance. She has been with us once before.
We have Stephanie Domas. She’s the chief security officer for Canonical. It’s a little company you might have heard about. So welcome, and thank you for joining us today, Stephanie.
Stephanie Domas (00:45.223)
Thank you for having me again for a second time.
CRob (00:48.121)
I know. It’s been a while since we chatted. So how are things going in the amazing world of the Linux distros?
Stephanie Domas (00:58.341)
Yeah, so just for people who aren’t as familiar with Canonical, because we do have a bit of a brand thing, we are the makers of Ubuntu Linux. So connect the dots for everyone. The world of the distros is always a fun place. There has been a lot of recent excitement around npm hacks, supply chain hacks. And so in the world of archives, running archives, running a distro, there’s never a dull moment.
CRob (01:08.422)
Yay.
Stephanie Domas (01:28.475)
So on our distro front, on the security front, we’ve been excited to take a fresh eye to how we introduce memory-safe core components into our operating systems. So sudo-rs, a Rust implementation of sudo, is now a part of Ubuntu.
CRob (01:40.231)
Ooh.
Stephanie Domas (01:49.085)
It will be a part of our next LTS, which is something we’re really excited about. We’re looking for more opportunities to replace some of these fundamental components with memory-safe versions of them. So that’s also exciting.
CRob (01:52.604)
Nice.
CRob (02:00.167)
That’s amazing. My hat is off to all of you for taking the leadership role and doing that. That’s great. I had the great opportunity to participate in a panel with you recently at Hacker Summer Camp. So, I know it’s been a little bit of time, but reflecting back, what was your favorite experience from the Black Hat, DEF CON, B-Sides, Diana Initiative week?
Stephanie Domas (02:24.609)
Yeah, so I’m always one of those people for whom one of my favorite things is just the ability to reconnect with so many people in the industry. That’s the one time of year where, despite the fact that you and I physically live near each other, we actually tend to see each other in person. So extending that to all of the great people I’ve known in the industry and getting to see them. The panel you spoke about was another real highlight for me. The panel, for those who didn’t get the privilege of attending, was an Ask Me Anything on open source. And it was great because there was such a variety of interests right there: people who are interested in what it’s like to be a maintainer, what it means to use open source in enterprise, what it means to try and think of monetization models around open source. And so the diversity of conversation, and finding people who are in the space but really unfamiliar with open source, or trying to figure out how to reason about it, and being able to have those conversations with them, was really exciting for me.
CRob (03:24.515)
I agree, I thought that was a great experience and I would love to see us do that again sometime. Excellent. Well, let’s move on to business. Today you have some things you’re interested in talking about. So let’s talk about one of my favorite topics, software bills of materials, and also sovereignty within software.
Stephanie Domas (03:29.725)
I’ll put in a good word.
CRob (03:48.867)
From your perspective, how are you seeing these ideas and the tooling around SBOMs being adopted or used within the ecosystem today?
Stephanie Domas (04:00.421)
Yeah, so I’ll preface it with: SBOMs continue to show a lot of really great promise. I do think there is a lot of value to them, and I think they’re really starting to deliver on some of those benefits. But from trying to implement them at scale, right across our entire distro, across the archives, there are still some implementation challenges. So one of the things that’s been on my mind a lot recently is around versioning.
CRob (04:28.273)
Ooh.
Stephanie Domas (04:29.863)
So I feel like every week I see some new vendor or I get approached by some vendor whose business model is to create patches for you on the open source that you use to help maintain your security.
It’s a really interesting business proposition. I get why these companies are popping up. But when I start to think of SBOMs and some of the difficulty we’re having around implementing them, this idea of version control in open source, when we have companies coming out of the woodwork to create patches, just creates this swirl of chaos.
To add a specific example there, think about semantic versioning, right? When we release the version of something, it’s 5.0.0. When a patch is released, when the original manufacturer, the upstream, releases a patch, it’s 5.0.1, right? Then 5.0.2. And SBOMs come into play because the SBOM’s goal is to understand the version of the thing that is inside of your piece of software. And then it uses that to look up and say, hey, what are the known security issues in this version? But as I continue to see, and have even talked to, some of these businesses whose whole value proposition is, we will actually maintain and develop patches for you for the things that you are using, you’re breaking this whole idea of semantic versioning, in that you are now potentially carrying a tremendous number of patches, for security specifically, that aren’t represented in that version number. And so then the question becomes, what do you do with that? If I’m internally using 5.0.0 and I’ve used one of these companies to develop a security patch for me, what does that mean for my version number? What do I put in my SBOM when I have created my own security patches that I have chosen not to upstream? What does that mean? And so…
Stephanie Domas (06:23.931)
I see this as a really unique problem to open source, right? There are a lot of upsides to open source, a tremendous amount to be specific, but the code base being open and allowing people to develop their own patches now creates this slurry of confusion. I don’t think version numbers will be the thing we can rely on in SBOMs. And I don’t know what the answer is, right? I don’t have a good approach to this. We’re starting to see, you know, customers using these companies to create their own patches; they use some internal vulnerability scanner, and then they come back to us and say, hey, there’s these problems, these reports I’m getting from my scanner that’s scanning my SBOMs. And we’re sitting there confused, because they’re carrying these other patches. I mean, it’s causing so much confusion, where inside of the company they don’t even realize it. They get these scanning reports, so they come to us, and we’re like, well, I don’t know how to answer this question because you’ve done something else internally. Which is great, you’re allowed to do that, but your risk position has changed: you’ve hired one of these companies to give you patches. So it’s one of these interesting things that’s been on my mind recently. I don’t know what the answer is, but I think SBOMs and version control in open source are going to be a really big struggle in the coming years as we see more of these offshoots of patches and security maintenance coming from outside upstream, and with no intention to upstream them.
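(Editor’s note: to make the mismatch Stephanie describes concrete, here is a minimal, hypothetical sketch of version-based vulnerability matching. The advisory data, package name, and scanner logic are all invented for illustration; real scanners and advisory feeds are far more sophisticated.)

```python
# Hypothetical advisory feed: CVE id -> affected package and the first
# fixed upstream version. All names and numbers here are invented.
ADVISORIES = {
    "CVE-2024-0001": {"package": "libexample", "fixed_in": (5, 0, 2)},
}

def parse_semver(version):
    """Turn 'major.minor.patch' into a comparable tuple, e.g. (5, 0, 0)."""
    return tuple(int(part) for part in version.split("."))

def naive_scan(sbom_entry):
    """Flag CVEs purely by comparing the SBOM's version string."""
    findings = []
    for cve, info in ADVISORIES.items():
        if (sbom_entry["name"] == info["package"]
                and parse_semver(sbom_entry["version"]) < info["fixed_in"]):
            findings.append(cve)
    return findings

# The SBOM says 5.0.0, so the scanner reports the CVE.
print(naive_scan({"name": "libexample", "version": "5.0.0"}))

# But the version string has no field for "a third party backported the
# fix without bumping the version", so the scanner reports the same
# finding even when the code in question is actually patched.
```

The second point is the false positive Stephanie describes: the fix may be present, but the version string has no way to say so, so the scan result and the real risk posture disagree.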
CRob (07:50.415)
And I think, you know, we get to deal with a lot of global public policy concerns, and thinking about things like the CRA, where manufacturers have a lot of pressure to produce these patches. They’re encouraged to share them back upstream, but they’re not required to. So I imagine the situation that you’re describing, where these vendors are kind of setting up this little cottage practice.
You’re only going to see more of that. And again, it’s going to cause a lot of confusion for the end users, and I’m sure a lot of frustrated calls to your support line saying, I’m using package version one-two-three, when the actual, canonical version, pun intended, of yours is X-Y-Z. Again, I bet that would cause a lot of customer downstream confusion.
Stephanie Domas (08:39.579)
Yeah, the EU CRA is one of the forcing functions that I think will bring a lot of this to light, where you start to have these obligations around patching, and attestations to your customers. But then again, you start to get this
untenable spaghetti monster. How is that versioning happening when that manufacturer has potentially, again, used one of these businesses to create patches internally that they did not upstream? And so what does that mean for the SBOM when you give it to the users? How are they supposed to make informed risk decisions about it when the version number in that SBOM either follows upstream, in which case it misrepresents the security posture, or follows a versioning system they’ve come up with that’s unique to the internal patches they’re carrying? And so again, the poor users are left without being able to make those informed decisions. So yes, the EU CRA, one of my favorite topics, is going to be a forcing function for that. And I think it’s gonna…
Again, there’s no answers. I think we’re going to be forced to try and make some of these decisions about what that means, right? How do users reason about their SBOMs when the version numbers in them may not make sense anymore?
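(Editor’s note: one existing convention that partially addresses this is the Debian/Ubuntu package version format, where a downstream revision suffix is appended to the upstream version; for example, 5.0.0-1ubuntu0.1 signals Ubuntu-applied patches carried on top of upstream 5.0.0, so backported fixes are visible without changing the upstream semantic version. The sketch below uses made-up version strings and does not capture the full Debian version policy.)

```python
# Debian/Ubuntu-style package versions append a downstream revision after
# the upstream version, e.g. "5.0.0-1ubuntu0.1". A tool that understands
# this split can see that the upstream code is 5.0.0 while the distro has
# shipped its own patched revisions on top of it.

def split_package_version(version):
    """Split 'upstream-revision' at the last hyphen, per Debian convention."""
    upstream, sep, revision = version.rpartition("-")
    if not sep:
        # No hyphen at all: a plain upstream version with no distro revision.
        return version, None
    return upstream, revision

print(split_package_version("5.0.0-1ubuntu0.1"))  # ('5.0.0', '1ubuntu0.1')
print(split_package_version("5.0.0"))             # ('5.0.0', None)
```

This only helps when the patcher is the distro itself; the third-party patch vendors discussed above have no agreed-upon suffix, which is exactly the gap.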
CRob (09:53.423)
It’s good to know that you’ve jumped into one of the hardest problems in software engineering, versioning. Do you want to tackle the naming issue as well while we’re at it?
Stephanie Domas (10:03.205)
I’m here to solve all… actually, I’m just here to call out the problems.
CRob (10:06.247)
So somewhat adjacent, you mentioned that you wanted to talk about kind of this trend in the ecosystem around sovereignty. The EU thinks about it as digital sovereignty, but we’ve seen this concept in a lot of different places. So what are some of your observations around this, air quotes, new movement?
Stephanie Domas (10:32.121)
Yeah, so sovereignty is a really interesting one. For those who haven’t been too exposed to it, we’re seeing a big push in things like data sovereignty, which is focused on: where is my data? Is my data contained within some named geographical border?
The big one that has been coming up for me a lot recently is software sovereignty. So there are some countries and some regulations that are taking the approach that the way to drive trust in a piece of software is by where it was developed, where the developers that wrote that piece of code come from. And on the surface, I…
I can see why they’re thinking that, right? I don’t necessarily agree with it. I can see why they may be thinking, hey, if this software was developed by people with the same ideals as me, maybe I can assume that it is for a purpose I agree with, right? I can see that argument, but I don’t love that argument. So the thought process to me is: is this actually the most effective path to true security? What they’re trying to achieve through this focus on software sovereignty is in fact more trust in the software. And us being Canonical, right, our solutions are open source. This has come up for me with ambitions to sell to different federal governments. And so I sit in these conversations where contractors and consultants are trying to explain to me, well, here’s what you have to do to sell to this named federal government.
And you sit there thinking, none of this makes any sense for open source. It’s kind of the antithesis of it, right? Attestations that the code was only worked on by people in certain countries. And I keep sitting there thinking, this is a wild way to derive trust in software. And so I ask myself, why is this happening? On one side of the fence, there’s the, yes, I derive trust in software by who developed it.
Stephanie Domas (12:33.157)
The other side of that, and the side that I think open source solves, is: is it not a more robust defense to not think about national borders, but instead about globally accessible and auditable code bases? And again, I’m in these meetings. So CMMC was a recent one, for those in the US, but they have them all over the world. I’m sitting there thinking, there’s absolutely no way for me to achieve anything beyond level one in CMMC, because I can’t, and have no interest in, attesting to build pipelines that only involve certain citizens. And again, it’s one of these things that’s confusing to me. So it’s been on my mind, sitting in these meetings about this. And support sovereignty, right? Support sovereignty has actually come up as well for us. We have not just governments, but customers now who are seeking support to be performed only by people inside a named geographic border. And so we’re having to try and develop strategies around support sovereignty as well. And again, I go back to: if the codebase is globally accessible and auditable, I just feel like that’s a better way to solve this problem of deriving trust in what is done in the software. Even from a support perspective, if you want to derive trust in what changes they’re proposing, what code they’re proposing to you, the fact that you can see it all should be how you derive trust, not where that person is physically sitting in the world. So that’s me on my soapbox about sovereignty for the moment.
CRob (14:02.611)
And I liken this movement back to when I was an in-house practitioner on an InfoSec team. The business was always dismissive of the insider threat vector; they were so focused on external actors, on somebody coming in and hacking the organization. But you look at reports like the Verizon data breach report that comes out every year, and consistently, every year, the most damaging cybersecurity breaches come from people inside your organizational trust boundary. They cause the most damage, and they are the most frequent. Whether ignorance or malice is the root cause of the problem, it’s still someone inside of your organization. And to extrapolate that to a nation, you’re going to have a much broader spectrum of people, even beyond your small enterprise. So I always kind of scratched my head at why people were so dismissive of this, because there’s evidence that shows that insiders are the ones that have consistently caused the most damage, and then you blow that up.
Stephanie Domas (15:11.773)
And I can tell you, yeah, as someone who spent time as an adjunct professor at The Ohio State University teaching software development.
There’s a lot of not-great coders out there, right? So even if their ideals align, there is nothing about code coming from a domestic location that ensures quality, right? Again, even if you’re not worried about malice, there’s a lot of bad developers out there. The ability to have an auditable code base should be your number one focus. And so, yeah.
CRob (15:46.119)
And it’s the whole thousand-eyes, transparency perspective. But what I’ve always loved about open source is the fact that it’s meritocracy-based. The best ideas get submitted; we are able to pull ideas from everywhere, and anyone’s idea is potentially as valid as everybody else’s. So I love that aspect, and it makes the software stronger, because ideally it helps coach those developers, like you alluded to, that might not be as skilled or as aware of security techniques, and ideally helps them improve themselves, but also helps keep those bad commits from getting into the code.
Stephanie Domas (16:25.777)
Yeah, and I’ll be the first to admit that not all open source is quality, right? It’s a variety of things. But the point is that you have the ability to determine that yourself before you choose to use the code. So depending on your risk posture, you get to make that decision.
CRob (16:43.43)
Mm-hmm.
Absolutely. So again, kind of staying on this topic: from your perspective, with the developers and the customers you get to work with, how has this new uptick of global cyber legislation impacted your mission and your customers?
Stephanie Domas (17:04.698)
Yeah, so actually I’m going to circle back to our favorite legislation, which is the EU CRA. So this has been a really interesting one because it’s
CRob (17:10.01)
Yay!
Stephanie Domas (17:15.549)
It’s forced a lot more light, I think, from the customers we’re working with into the open source they’re using, how they’re consuming it, and putting a lot more intentionality into: if I continue to consume it this way, what does that mean for my liability associated with this piece of code? So in a way it’s been really good that it’s forcing customers to think more about that. As you mentioned earlier about patching, thinking about how am I getting this patch, how am I incorporating this patch: part of the EU CRA requires security patching for the defined lifecycle of the device. And that’s actually driving some really interesting conversations of, okay, yeah, that makes sense for the piece of software I wrote, but I have these pieces that I was consuming from the open source space, and what does that mean for those? So I do think it’s driving a good conversation. I worry that there is still some confusion in the legislation, and until we get the practice guidance, we won’t have the answers around that enterprise consumption of open source, and what it means from a risk perspective. And I’ll also caveat: there is a carve-out in the EU CRA for open source stewards. Canonical is in a funny spot; we make open source, but we are not actually an open source steward, right? Because we have monetization models around support, and because we want to support our customers, we are actually considered a manufacturer. So we’re taking the manufacturer role. And a manufacturer role over open source is also kind of a big gray area in the practice guidance and the legislation: what does that mean? Because despite the fact that we’re taking manufacturer on Ubuntu, the number of upstream dependencies we have is astronomical. And so what does that mean, right? What does that mean for us taking liability on the Debian kernel? I don’t know.
It’s a bit confusing at the moment what that means, because we are in fact a manufacturer consuming open source. And then to our customers, who we have business relationships with: we are signing CRA support clauses, right? We are taking the manufacturer role, so they don’t have to take manufacturer on the open source they consume from us. But again, it’s a bit unclear what entirely that means. We are actively involved in things like the ETSI working group that is defining some of these practice standards, in particular on operating systems, since Ubuntu is one of our flagship products. So we’re involved in that working group to try and define what exactly this means for operating systems. But despite being best known for Ubuntu, we actually have a very large software portfolio, and there’s a lot of software in our portfolio that doesn’t have a vertical practice guidance being made. And the general horizontal one that would define the requirements for those products, we don’t really see much activity on that yet. So there’s also a big unknown there. We don’t know what those expectations are for our products right now, which again have tremendous upstream dependencies. In some cases, what Canonical produces is a fraction of the overall code base. But when people consume it through us, what does that mean? Because we took the manufacturer role on it.
CRob (20:33.415)
Yeah. Well, and you touched on it a little bit earlier: all open source is amazing, but not all of it is great software. It’s a big spectrum. A student project or someone’s experiment that you stumble across on GitHub is a very different experience than operating with Kubernetes or the Linux kernel or anything from the Apache Foundation. When you have these large, mature teams, they have a lot of different capabilities, as opposed to a random project you find on the internet that somebody might never have intended to be commercialized.
Stephanie Domas (21:08.416)
Yeah, and I was reading an article recently, I think it was an npm archive that they had done their statistical analysis on, but they had done all this interesting analysis on, essentially, the number of different handles who had committed to a piece of software. And their overall conclusion was that
CRob (21:23.911)
Mm-hmm.
Stephanie Domas (21:26.301)
The majority of open source on this archive had one maintainer, right? They showed that of the N most popular downloads, over 50 percent of them, and I can’t remember the number, it might have been something closer to 80 percent, were one person. And so again, open source has a lot of amazing things, but not all of it is well written, and not all of it is well maintained. And so regulations like the EU CRA are forcing people to take a much better look at their supply chain to understand, you know, where am I consuming that from? Is that something that is well maintained? Because I can’t just take the old version, say it’s stable, and not worry about it from there. I can’t do that anymore under the CRA. I have to do security patching on this thing, and now I’m potentially responsible for it, and what does that mean? I do worry, and I hope that we won’t see a recoil of enterprises using open source because of that fear, right? Because there are large libraries out there where there aren’t people willing to step up and take the maintainer role on them, and that makes enterprises afraid to consume those products under the EU CRA. So that’s a fear that I do have. And again, it’s a bit of a gray area about the liability: if that open source repo decides to take steward status, what does that mean for the enterprise consuming it? If it’s something small, a tiny library, maybe the risk isn’t large, but it may be a really meaningful part of the application.
And if you’re not consuming it through somebody willing to take the manufacturer role in your chain, will you still be willing to consume that piece of open source? So I do worry about that. I hope that as the practice guidance comes out, that is clarified, so there’s better understanding of what that means and we won’t see that recoiling. But I have tried to talk to some of the legislators that are working on that, and they haven’t been able to answer that question for me yet either: is there going to be clarification, because this is an area of concern? So hopefully between now and more practice guidance, we get more clarification so we don’t see that recoiling.
CRob (23:34.915)
And I know that, at least from the horizontal and vertical standards standpoint, we are very close to starting to get public drafts delivered sometime in the fourth quarter of 2025. And ideally that’ll allow us to see what the legislators are thinking and where they are planning on landing on some of these issues, and hopefully we get some clarifications. And ideally we get the chance to provide feedback and say, hey, great work.
But have you considered X, Y, and Z to help make this more precise and more clear?
Stephanie Domas (24:11.387)
Absolutely, and my understanding, hopefully I’m remembering the numbers correctly, is that there are 15 vertical standards being written in ETSI right now. And while that’s an astronomical amount of practice guidance, a tremendous amount of work being done, it still won’t cover all the products. And so there are still going to be a tremendous number of products sitting outside of those 15 vertical standards. And so again, the question will be: what does that mean for all the other products that are not in these 15?
CRob (24:43.431)
So as we wind down, what thoughts do you have? What do you want our audience to take away? What actions would you like them to take based off this conversation?
Stephanie Domas (24:55.825)
Yeah, depending on your role in the space. I mean, I’m going to go ahead and plug the OpenSSF, right? The issues that we talked about, like what does an SBOM mean as open source starts to become divergent: all of these things, I think, will only be solved in organizations like the OpenSSF, which bring all the collective parties together to think about what it means. Same with some of these gray areas I mentioned in the EU CRA. We need those types of organizations. So if anyone listened to this and thought, well, maybe I have ideas on how to solve this, because all Stephanie did was complain about the problem, but I want to be a part of the solution: look at joining organizations like the OpenSSF. That’s where the solutions will come from. I can see my side of the problem, but even if I had ideas, I can’t independently solve it. Organizations like the OpenSSF can.
CRob (25:44.902)
Right.
It’s very much a community set of issues, and I think collectively we’re stronger and will have a better solution together. My friend, I need to arrange an ice cream social event in the near future before we’re covered in snow. But thank you for your time, Stephanie, and thank you to you and the team over at Canonical for all your hard work. And with that, we’re going to call this a wrap. Thank you, everybody.
Stephanie Domas (25:56.805)
Agreed.
CRob (26:17.019)
Happy open sourcing out there.
In this episode of “What’s in the SOSS,” CRob, Ben Cotton, and Eddie Knight discuss the Open Source Project Security Baseline. This baseline provides a common language and control catalog for software security, enabling maintainers to demonstrate their project’s security posture and fostering confidence in open source projects. They explore its integration with other OpenSSF projects, real-world applications like the GUAC case study, and its value to maintainers and stakeholders. The role of documentation in security is emphasized, ensuring secure software deployment. The effectiveness of the baseline is validated through real-world applications and refined by community feedback, with future improvements focusing on better tooling and compliance mapping.
00:00 Welcome & Introductions
02:40 Understanding the Open Source Project Security Baseline
05:54 The Importance of Defining a Security Baseline
08:49 Integrating Baseline with Other OpenSSF Projects
11:42 Real-World Applications: The GUAC Case Study
14:21 Value for Maintainers and Other Stakeholders
17:29 The Role of Documentation in Security
20:37 Future Directions for the Baseline and Orbit
23:26 Community Engagement and Feedback
CRob (00:11.23)
Welcome, welcome, welcome to What’s in the SOSS, where we talk to upstream maintainers, security experts, and just generally interesting luminaries throughout the upstream open source ecosystem. My name’s CRob. I’m the security architect for the OpenSSF, and I’m also your host for What’s in the SOSS. Today, we have two amazing friends of the show, amazing community members and developers and contributors across the open source ecosystem. So I want to welcome Eddie and Ben. Do you want to take a moment just to introduce yourselves and kind of explain what your deal is?
Eddie (01:02)
Yeah, my deal is I am in Amsterdam with you at 9 AM with a completely different energy level than you have right now. I am loving this. This is awesome. Eddie Knight from Sonatype. I do a lot of work across the Linux Foundation related to security compliance.
Ben (01:20)
I’m Ben Cotton. I’m the open source community lead at Kusari. I’m the leader of the OSPS Baseline SIG and a member of the Orbit Working Group.
CRob (01:29)
Awesome. Great talks today. We’re going to be diving into the OSPS Baseline, the catalog, and ORBIT, GUAC, and a whole bunch of different topics. So let’s set the stage here, gentlemen. The Baseline. Folks have been hearing about this off and on over the last few months, but maybe we could explain this in plain English, like what is the Open Source Project Security Baseline and talk about the catalog?
Eddie (01:57)
All right, I’ll let Ben give the official answer since he’s the project lead for it. Baseline’s a control catalog that helps maintainers and consumers of software have a clear definition of good for their projects and their project security. Ben, you want to give a more real answer?
Ben (02:16)
Yeah, I mean, it’s what it says on the tin, right? It’s a baseline for open source projects to follow in terms of their security hygiene. And it’s not really about the software they’re producing. It’s about the practices that are used to build that software. So the analogy that I recently came up with as we were going back and forth on this is it’s like health department regulations that say, “You have to wash your hands. You can’t pick up the food off the floor and then give it to the customer.” It doesn’t say that the quality of your recipe has to taste good. But you have to use secure practices. So we’ve developed a catalog of controls at three different tiers, the idea being that new projects, small projects, projects that are more trivial in nature just have like a sort of a bare minimum of like, yeah, everyone’s got to do this. Everyone needs to wash their hands before they start cooking food.
CRob (03:14)
I appreciate that.
Ben (03:15)
And that is important for SOSS, right?
CRob (03:18)
Right.
Ben (03:18)
As you go up the levels, you know, up to level three, that’s really big projects that have lots of contributors, are typically well resourced, at least relative to other open source projects, and are really important to the world of technology as we know it. And so those have to have the tightest security requirements, because they’re rich targets for attackers.
With the baseline, the motivation is like, this is not a list of all the cool things you could do. It’s do this. One of the requirements we have is there is no should – there is only must. Because we don’t want to be having maintainers spending a lot of time chasing down all these things that they could do. We understand that open source maintainers, which we are, are often volunteers doing stuff in their spare time without necessarily any real security training. And so we need to give them straightforward things that are meaningful to enhancing their project security posture.
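(Editor’s note: the tiered “must” structure Ben describes can be sketched as a simple cumulative check. The control names and levels below are invented placeholders for illustration, not the actual OSPS Baseline controls.)

```python
# Hypothetical, simplified control catalog with three cumulative levels.
# These control names are invented placeholders, not real OSPS Baseline
# controls; they only illustrate the "no should, only must" tier model.
CONTROLS = {
    1: ["mfa_enabled", "vuln_reporting_policy"],
    2: ["branch_protection", "dependency_pinning"],
    3: ["signed_releases", "documented_threat_model"],
}

def baseline_level(project_facts):
    """Return the highest level for which this and every lower level pass.

    Every control is a "must": one missing control caps the level reached.
    """
    achieved = 0
    for level in sorted(CONTROLS):
        if all(project_facts.get(control, False) for control in CONTROLS[level]):
            achieved = level
        else:
            break
    return achieved

facts = {
    "mfa_enabled": True,
    "vuln_reporting_policy": True,
    "branch_protection": True,
    "dependency_pinning": False,  # one missing "must" stops level 2
}
print(baseline_level(facts))  # 1
```

Because the levels are cumulative, a claim like “we meet baseline level two” is unambiguous: it means every level-one and level-two control is satisfied, which is what makes the levels usable as a common language.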
CRob (04:27)
This is, I think, the first time ever on the planet anyone has ever referred to security as cool. Thank you, sir. I appreciate that as a longtime securityologist. So let me ask you, gentlemen both, why do you think it's so important to define this baseline? And why is that important for open source projects?
Ben (04:47)
So I think the most important thing that's been coming up in conversations I've had with people here in Amsterdam and other places is that it gives us a common language. "Go be more secure" is not helpful. It doesn't tell anyone anything.
And with baseline, especially with these different levels, you can say, our project is secure. We meet baseline level one. We meet baseline level two. Now there's a common language. We all can know what that means because there is an explicitly defined catalog that says what these levels are. And then conversely, if I'm a vendor or manufacturer of some product and I use an open source project and I want them to be more secure because I have regulatory obligations, I can go to them and say, I really need you to be at baseline level two. We can help you with these specific tasks. And now we're talking the same language. We have this common understanding of what this means, as opposed to "you're not secure enough" or "you need to be more secure."
CRob (05:52)
Love that.
All right, so from your perspectives, and I think you might have touched on this a little bit, Ben, how do we think the baseline makes it easier for maintainers and developers who are already so busy with just their general feature requests and bug burn-down?
Eddie (06:09)
So we started this journey a long time before we ever started saying baseline, right. My very first interaction with CNCF before I ever did any commits on OpenSSF, I was just kind of like, maybe attending a call here and there. We were doing this, it was like a security bug bash. And maybe we had called it the slam by this point. We wanted to solve for folks who were doing really cool stuff and everybody in the conversation knew that their stuff was being built well and properly and everybody’s washing their hands and stuff like that, right? But we didn’t have a way to demonstrate that outward and say like, hey, this project is running a clean kitchen. You should trust this more than just a random, you know street food vendor, whatever the open source equivalent of that is.
We want to boost confidence in the projects in our ecosystem. And back then we had CLOMonitor, because it was just for CNCF. And there was this set of rules: these are the things that we expect CNCF projects to do. And we could go to a project, and I would pull out my phone on the KubeCon floor and be like, click through, type in your project name, pull it up, see, this is where you're scoring right now, right? And the scoring part brings all of its own baggage. But the point is, there's this list, right? And they're like, that's all you need? That's it? That's all you needed me to do, right? And so we had projects that were able to increase their own personal maintainer confidence in their project. Like, oh man, I'm actually doing a really good job here.
All I needed to do was, like, shift this, rename this section header in a file so it could be detected. And now people see that I'm actually doing this stuff. And so you're dramatically boosting your own confidence in your work, but then you're also boosting the public confidence in it. And the source of all this is just having a list, right? Now, that list for CNCF did not prove to be scalable and compatible with the broader ecosystem. It's like, well, we don't do this, we don't do that.
So having baseline is a way of saying, let’s get that list, let’s get those wins that we experienced within the CNCF and make that possible for everybody by making it this like, not just CNCF, but agnostic baseline of controls that are good for projects.
CRob (08:53)
And those of us that have come from enterprise computing, the term baseline is very common practice as you're deploying systems and applications. There's a checklist of things you have to do before that system is able to go live. I love it. So thinking about the catalog, I realized that we have a lot of other assets within the OpenSSF, a lot of great projects and guidance. Could you maybe touch on some of the other projects within the OpenSSF that Baseline works with or integrates with?
Ben (09:21)
Yes. Yeah. So there is this whole working group called ORBIT in which Baseline sits. And it's really about generating some of the tooling. So we use Gemara as sort of the scaffolding, I guess, for the control catalog. It's a software project that provides a framework to build control catalogs on. We're working on tooling to automate some of the baseline evidence collection, to make it easier for maintainers to, you know, quickly evaluate where they are and what tasks they need to do to achieve their desired level. There's a very smart man who has done a lot of mapping. This CRob guy has done a lot of work to map baseline to existing things like the best practices badge, as well as other external frameworks like the CRA.
CRob (10:29)
I’ve heard of that.
Ben (10:30)
Right?! Various NIST guidance, you know, really kind of make it so that, you know, baseline gives you not just, you know, confidence in your security posture, but then also gives you pointers to, you know, these more regulatory kind of control catalogs, where if you have a vendor coming to you and saying, hey, we need you to be secure, you can say, well, here, here’s what we meet. Here’s the list of things now you know. You know, so we really try, you know, we want to make sure that baseline is a part of an ecosystem and not just this really good idea that we have off in the corner that is sort of academic.
CRob (11:14)
That’s that’s excellent. That actually helps me pivot to my next set of questions. Let’s move out of the ethereal world of theory and talk about some real world applications of this. We just recently released a case study where we worked with another OpenSSF project named GUAC. And I just loved reading this case study. Could you maybe walk us through what the project was trying to prove and how baseline helped the GUAC project?
Eddie (11:44)
Yeah, that one was actually remarkably easy because all I had to do was yell at Ben and then it was suddenly done. [Crob laughing]
Ben (11:53)
So, you know, with that case study, we had the advantage that I'm a contributor to GUAC and also the baseline SIG lead, so there's some good overlap there. You know, so really what we were looking at is, you know, sort of a two-pronged approach. One, GUAC, the Graph for Understanding Artifact Composition, is a software supply chain security project. It would be really bad if it were, say, compromised in a supply chain incident, right? So when you're a security project, you have to have your own house in order. And so, you know, from the beginning, the project has really been done with that in mind. But we want to, like, you know, validate our assumptions. Like, are we actually doing these things that, you know, are sort of the best practices…
CRob (12:41)
Make sense.
Ben (12:43)
And then also, like, you know, from the baseline perspective, we want to get that real-world experience: here's an actual project using it. What are the things that are unclear? What are the things that don't make sense? What are the things that are really easy? And so, with that, I was able to use the Linux Foundation's LFX Insights, which now has some automated evidence collection. And so that, you know, was able to check a lot of boxes off right away. Some things are like, well, that's just how GitHub works. So check, check, check. And so in the space of an hour or so, I was able to do…
CRob (13:35)
An hour?!
Ben (13:26)
An hour. I got level one and level two almost done. There were like four or five things where I was like, I'm not sure if we as a project would consider this sufficient, or in a couple cases, like, we don't actually document our dependency selection process. That's one: we don't just add things William Nilliam, but, you know, we just need to write that down, because as new contributors come on, they need to know too. And so the amount of work that actually needed to be done to check the boxes off was really low. Which was very good news for me on both sides, because I was gonna be the one doing the work.
And I’m the one trying to tell people like, you should use baseline to evaluate your project security. And so we really would love to have more projects do that sort of initial run and give us that feedback and help us. We spent a lot of time as a SIG with probably two dozen people at least have been involved over the last two years.
Coming up with these ideas, debating, you know, what needs to be included, what needs to be excluded. Eddie and I recently spent several hours coming up with definitions of words that were very precise so that we could be very clear and unambiguous. Like when we say project, this is what we mean in this context. and, we’ve tried very hard to keep this as a not a security wish list, but like a practical set of things for real world project maintainers. But, even with dozens of people involved, that’s only dozens of experiences. We want this to be something that’s useful to all eleventy billion open source projects out there.
So we need some more like real world do this, come back and tell us, hey, this doesn’t make sense. “This really is not a fit.” “My project can never do this because” – that kind of information.
CRob (15:36)
That’s awesome. From your perspective, as a maintainer, not that you’ve gone through this for GUAC, as a maintainer, how does that add value to you? What are you hoping to leverage out of that experience beyond the project itself, but as a GUAC maintainer, what are you hoping to gain from going through this baseline process?
Ben (15:58)
Well, I think the first thing is that it just gives confidence that like, yep, we’re doing the right things. We are doing what we can to reasonably protect ourselves from security incidents, particularly supply chain compromise, because GUAC isn’t running a service or anything.
And then, you know, being able to build on that. And then, if we get emails like Daniel Stenberg gets from, you know, car manufacturers and stuff like that, we can just be like, yep, here's our baseline level, go have fun. (Daniel, if you're listening, I would love for the cURL project to try out the baseline.) You can just be like, yep, here's my statement that we meet this baseline level as of this date. Have fun. If you want more, send me a check.
CRob (16:59)
So Eddie, we’ve talked about the maintainer a lot. But let’s talk about some other personas that are involved with the baseline and might care. From like an OSPO or an executive or security engineer perspective, what do you see the value of a project going through and having achieved some level of baseline.
Eddie (17:20)
Oh yeah. I mean, any metric, badge, check mark, right? It's always helpful, because going off of the number of GitHub stars only gets you so far.
CRob (17:35)
Right.
Eddie (17:36)
Especially now, we see that there’s actually an inverted relationship between stars and security for Hugging Face projects.
CRob (17:46)
Huh, really?
Eddie (17:47)
Yeah. Like there’s like somebody, well damn, now I’m gonna have to like find the research and actually show it to you to back my claim up. But yeah, was a little while ago somebody posted something where they found that it used to be more stars is more eyes. More eyes is more critiques. More critiques is more security, right? But for like ML projects, these kinds of things that you find on Hugging Face are the folks who are doing something fishy are pretty good at spoofing stars.
CRob (18:27)
Gaming the system, huh? I don’t like that. That makes me sad. And angry.
Eddie (18:33)
Yeah. And it’s like the more fishy that their thing is, the better their skill set is at spoofing stars. So it’s just kind of a weird thing. So when we have something like the best practices badge, Like, CNCF loves that, like the TOC loves that. Within TAG security and compliance, we obviously also love, it was not meaning to be a contrast statement. You like shook your head, you’re like, what, do you guys disagree? No, we don’t disagree. But there is also this desire to have something that is a little bit more fleshed out, right, which is why we were like, real big on CLO monitor and things like that. So the more fidelity that the badge has the more interesting it is. But I mean anything anything that can help accelerate that selection process is really helpful for the like The OSPO type of personality that you’re talking about.
CRob (19:37)
It’s been interesting kind of working with these projects and then being like a downstream consumer it there are many tasks within application development and app sec that are very difficult to measure. And some things are, I can verify what your settings are in your source forge. I can validate if you’re using multi-factor authentication or not. But there’s like just some tasks that are very difficult. And I’m excited that it’s not a solved problem yet, but the team has a lot of great ideas. And I think things like using security insights and other tools, to help provide that additional evidence showing that yes, here’s our policy. And a lot of the baseline encompasses some tasks that developers don’t always love, which is things like documentation.
Eddie (20:36)
Yeah, we have a lot of documentation checks. And that is the number one question that we get, which is a fair one: what does documentation have to do with security?
CRob (20:49)
So Eddie, what does documentation have to do with security?
Eddie (20:53)
This is one of those situations where I actually struggle to answer at first. I have an answer. But the first 10 seconds is me going, why is this even a question? Isn't it obvious? This is obvious, right? And then I look around the room and it's like, it's not obvious. OK. So there's a couple different types of documentation that we need. So we need the things that you would put in a SECURITY.md.
Just where do I go if I find a bug, if I find a security vulnerability? Who should I contact? Where should I post this information? What should I expect back from you? Those types of things. But then there's also stuff for if I'm using the project. If I need to run GUAC, is GUAC secure by default? Is everything locked down when I turn it on? So it might be a little bit harder to turn on and deploy into my cloud infrastructure or whatever, but I don't need to worry about it. Or is it the opposite? Is it insecure by default? Because almost all projects are insecure by default. The goal is to get more users, so you make it easy to deploy. And that means that when you turn this on, it's going to have root access to something, it's gonna have some network traffic that would not be recommended for prod, things like that. And so if we don't have clear, easily accessible documentation, with, like, a link that people know how to get to, if this isn't created and it's not in a place that people know about, then you're actually deploying software that can be secure, but in practice there's a high likelihood that users are going to deploy it insecurely. So you might have done your job, but people aren't gonna be using it in the secure fashion, because you haven't documented it well enough or made it available or clear to them. And those are just the two that come straight to mind. Like, there's a few different documentation needs that we have.
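The first kind of documentation Eddie describes is conventionally captured in a SECURITY.md file at the repository root. A minimal sketch, assuming a GitHub-hosted project; the contact address, timelines, and version table below are placeholders, not details from the episode:

```markdown
# Security Policy

## Reporting a Vulnerability

Please do not open a public issue for security bugs. Instead, email
security@example.org (placeholder address) or use the repository's
private vulnerability reporting, if enabled.

What to expect:

- Acknowledgement within 5 business days (illustrative timeline).
- A coordinated disclosure date agreed with the reporter.
- Credit in the release notes, if desired.

## Supported Versions

| Version | Supported           |
| ------- | ------------------- |
| 2.x     | Yes                 |
| 1.x     | Security fixes only |
```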
Ben (23:00)
And some of the documentation controls are around project policy, like the dependency selection process I mentioned. You can't rely on "well, everyone knows this," because one, people forget; two, if it's not written down, everyone might know a slightly different thing; and three, hopefully you're bringing new contributors into your community, and they need a place to learn about these things. And so you want some of those things written down. Like: we look for dependencies that are actively maintained, that have, you know, maybe an OpenSSF Scorecard score above a certain threshold. Or maybe there's an advanced algorithm you use to mix a bunch of things together and figure it out. Or maybe, if it's a project within an ecosystem, you don't pull in just random things off of a package repository; you have an internal repository that you mirror things into to protect from things like that, right? But if that's not written down, if that's not, you know, clearly documented for the people who need it, it's not going to get followed.
CRob (24:15)
So let’s get out our crystal balls and look into the future. You know what do you guys see for orbit the catalog and just this general let’s work in general?
Eddie ()
What do we see for the future? So right now we've stabilized the control catalog. I would like to make it a Gemara-complete control catalog, right? So it lists out the capabilities, the threats, and the controls. Because we've written a threat-informed control catalog, but we haven't captured and documented what threats we are mitigating with this. So I think that'd be pretty cool. How close are we to doing that? I don't know.
The other thing is just getting more people to actually demonstrate compliance with the controls. I think most projects, especially in the LF, are gonna be predominantly compliant already. Like, you've already done all this stuff. We just want to be able to tell everybody that you've done it.
CRob (25:16)
Get credit for your homework.
Eddie (25:18)
Yeah, we wanna give you credit for this, right. And so that's gonna be a big lift: going through and doing that hour of work with GUAC, doing that hour of work with all of these different projects, kind of adds up. So that's gonna be something that I hope happens very soon. Within CNCF, we did it in the security slam fashion, right? So OpenTelemetry, Meshery, OSCAL-COMPASS, and Flux were all part of that in the spring. And that went pretty well. Where the breakdown happened was on multi-repo projects like OpenTelemetry. I think it was 70 repos.
Yeah, like a lot of repos. I think Kubernetes is double that, right? So when you have so many different repos and we need to go in and say, here's where the documentation is for this repo, here's where the policy is for this repo, right? It gets a little bit bumpy. And I think there's still some room for improvement on how we're capturing and doing those evaluations. I have a backlog, I know. There's improvement on that.
But as more people are going about that and giving feedback, like Ben comes and says, this is where something took 20 minutes but I expected it to take five, then we can actually make those improvements and refine our backlog a little bit.
Ben (26:51)
Yeah, and, you know, to Eddie's point, and you mentioned this earlier, CRob, we do not have fully complete tooling to measure all the measurable things yet. And so that's an area that the ORBIT Working Group is working on as a group. We've also had some sort of proto-discussions about having a catalog of examples. What does a dependency selection policy look like? What does this documentation thing look like?
In baseline itself, my backlog includes just going out and gathering real-world examples, you know, from Fedora, from curl, from Kubernetes, from wherever: here are some things that look like what we would suggest you have. And then, ideally, I think we'd also want to have a project that is just templates for each of these things that are templatizable. So, code of conduct, licenses, those are pretty well established.
A lot of this other stuff, like, what is sort of the platonic ideal of a SECURITY.md file? What is, you know, the best dependency selection policy, where people can just do a choose-your-own-adventure? I want this, this, this, put it together, this is what makes sense for my project, here you go. It's of no use to anyone to have everybody writing this from scratch over and over again, especially if they've not seen an example of it before.
CRob (28:21)
So as we wind down here, what are the calls to action you have for the community, whether it's developers in the OpenSSF or just kind of unaffiliated maintainers? What would you like folks to take away from this?
Ben (28:37)
I would love them to look at the open source project security baseline, baseline.openssf.org and evaluate your project against it and give us feedback. What worked? What didn’t? What do you think? Why isn’t this there? We want this real world feedback on the control catalog so we can make sure it is actually fit for the purpose we’ve designed it for. So for me personally, that’s the biggest takeaway I want from people listening to this.
Eddie (29:09)
Complain loudly. That’s what I want. We are trying to create an accelerator. We’re trying to improve the ecosystem. We’re trying to improve the lives of maintainers. And any single place where this is slowing down a maintainer, that is outside of intent. That is a design flaw of some kind. If this is slowing you down, if this is confusing, if you’re getting pushback from some end user who now thinks that you’re doing worse than before you started, before baseline existed, right? We heard that feedback from somebody. It’s like, hey, LFX Insights turned on their scanner, and now I have a user who thinks that our project’s doing a bad job with security. And it’s like, oh, well, that didn’t meet expectations.
CRob (30:00)
That was an unintended consequence.
Eddie (30:02)
Yeah. And that perception was inaccurate. The tests were accurate but imprecise, right? They nailed exactly what they were trying to test. They were very, very much there, but they weren't aiming in the right direction, right? And so we refined it: okay, let's hone that in, move it closer to the bullseye on what we're trying to achieve. And I think we're getting a lot better at that. But that's because somebody came and ruffled our feathers and was like, hey, you're not doing what you said you're trying to do. And we thought we were.
CRob (30:43)
Right.
Ben (30:45)
Yeah. And I just want to point out that the baseline is itself an open source project with open public meetings, pull requests welcome. We truly do want feedback and contribution from people who have tried things out or don't understand something. I shared it very early on on my social media accounts, and a guy I know came back and was like, we could never meet this. And it turns out the wording was just awful. We did not make this clear at all. And yeah, we fixed that. We went back and forth a few times: all right, this is our intent, we have now captured it well. And I think the wording is a lot better on that because people were confused and asked questions.
CRob (31:30)
Well, and to your patches-welcome comment, we've had decent engagement with open source maintainers. I would love to see us have more downstream GRC and security people giving us feedback from your perspectives. What other compliance regimes or laws would you like to see? And did we get our compliance mapping right? Is it spot on? Does it speak to the needs you have to defend yourselves against auditors and regulators?
Well, Eddie and Ben, two amazing community members, friends of the show here. Thank you for your time. Thank you for all you do across your fleet of open source projects that you contribute to and maintain. And with that, we’re going to call it a day. I want to wish everybody a wonderful day here from sunny Amsterdam. And happy open sourcing. Bye, everybody.
Eddie (32:23)
Thanks CRob.
Ben (32:24)
Thanks, CRob.
By Madalin Neag, Kate Stewart, and David A. Wheeler
In our previous blog post, we explored how the Software Bill of Materials (SBOM) should not be a static artifact created only to comply with some regulation, but should be a decision-ready tool. In particular, SBOMs can support risk management. This understanding is spreading thanks to the many who are trying to build an interoperable and actionable SBOM ecosystem. Yet fragmentation across formats, standards, and compliance frameworks remains the main obstacle preventing SBOMs from reaching their full potential as scalable cybersecurity tools. Building on the foundation established in our previous article, this post dives deeper into the concrete mechanisms shaping that evolution, from the regulatory frameworks driving SBOM adoption to the open source initiatives enabling their global interoperability. Organizations should now treat the SBOM not merely as a compliance artifact to be created and ignored, but as an operational tool that supports security and augments asset management processes, ensuring vulnerable components are identified and updated in a timely, proactive way. This will require actions to unify various efforts worldwide into an actionable whole.
The global adoption of the Software Bill of Materials (SBOM) was decisively accelerated by the U.S. Executive Order 14028 in 2021, which mandated SBOMs for all federal agencies and their software vendors. This established the SBOM as a cybersecurity and procurement baseline, reinforced by the initial NTIA (2021) Minimum Elements (which required the supplier, component name, version, and relationships for identified components). Building on this foundation, U.S. CISA (2025) subsequently updated these minimum elements, significantly expanding the required metadata to include fields essential for provenance, authenticity, and deeper cybersecurity integration. In parallel, European regulatory momentum is similarly mandating SBOMs for market access, driven by the EU Cyber Resilience Act (CRA). Germany’s BSI TR-03183-2 guideline complements the CRA by providing detailed technical and formal requirements, explicitly aiming to ensure software transparency and supply chain security ahead of the CRA’s full enforcement.
To prevent fragmentation and ensure these policy mandates translate into operational efficiency, a wide network of international standards organizations is driving technical convergence at multiple layers. ISO/IEC JTC 1/SC 27 formally standardizes and oversees the adoption of updates to ISO/IEC 5962 (SPDX), evaluating and approving revisions developed by the SPDX community under The Linux Foundation. The standard serves as a key international baseline, renowned for its rich data fields for licensing and provenance and support for automation of risk analysis of elements in a supply chain. Concurrently, OWASP and ECMA International maintain ECMA-424 (OWASP CycloneDX), a recognized standard optimized specifically for security automation and vulnerability linkage. Within Europe, ETSI TR 104 034, the “SBOM Compendium,” provides comprehensive guidance on the ecosystem, while CEN/CENELEC is actively developing the specific European standard (under the PT3 work stream) that will define some of the precise SBOM requirements needed to support the CRA’s vulnerability handling process for manufacturers and stewards.
Together, these initiatives show a clear global consensus: SBOMs must be machine-readable, verifiable, and interoperable, supporting both regulatory compliance over support windows and real-time security intelligence. This global momentum set the stage for the CRA, which now transforms transparency principles into concrete regulatory obligations.
The EU Cyber Resilience Act (CRA) (Regulation (EU) 2024/2847) introduces a legally binding obligation for manufacturers to create, maintain, and retain a Software Bill of Materials (SBOM) for all products with digital elements marketed within the European Union. This elevates the SBOM from a voluntary best practice to a legally required element of technical documentation, essential for conformity assessment, security assurance, and incident response throughout a product’s lifecycle. In essence, the CRA transforms this form of software transparency from a recommendation into a condition for market access.
The European Commission is empowered, via delegated acts under Article 13(24), to further specify the format and required data elements of SBOMs, relying on international standards wherever possible. To operationalize this, CEN/CENELEC is developing a European standard under the ongoing PT3 work stream, focused on vulnerability handling for products with digital elements and covering the essential requirements of Annex I, Part II of the CRA. Its preparation phase includes dedicated sub-chapters on formalizing SBOM structures, which will serve as the foundation for subsequent stages of identifying vulnerabilities and assessing related threats (see “CRA workshop ‘Deep dive session: Vulnerability Handling’”, 1h36m35s).
In parallel, the U.S. Cybersecurity and Infrastructure Security Agency (CISA) continues to shape global SBOM practices through its “Minimum Elements” framework and automation initiatives. These efforts directly influence Europe’s focus on interoperability and structured vulnerability handling under the CRA. This transatlantic alignment helps ensure SBOM data models and processes evolve toward a consistent, globally recognized baseline. CISA recently held a public comment window ending October 2, 2025 on a draft version of a revised set of minimum elements, and is expected to publish an update to the original NTIA Minimum Elements in the coming months.
Complementing these efforts, Germany’s BSI TR-03183-2 provides a more detailed technical specification than the original NTIA baseline, introducing requirements for cryptographic checksums, license identifiers, update policies, and signing mechanisms. It already serves as a key reference for manufacturers preparing to meet CRA compliance and will likely be referenced in the forthcoming CEN/CENELEC standard together with ISO/IEC and CISA frameworks. Together, the CRA and its supporting standards position Europe as a global benchmark for verifiable, lifecycle aware SBOM implementation, bridging policy compliance with operational security.
The SBOM has transitioned from a best practice into a legal and operational requirement due to the European Union’s Cyber Resilience Act (CRA). While the CRA mandates the SBOM as part of technical documentation for market access, the detailed implementation is guided by documents like BSI TR-03183-2. To ensure global compliance and maximum tool interoperability, stakeholders must understand the converging minimum data requirements. To illustrate this concept, the following comparison aligns the minimum SBOM data fields across the NTIA, CISA, BSI, and ETSI frameworks, revealing a shared move toward completeness, verifiability, and interoperability.
| Data Field | NTIA (U.S., 2021 Baseline) | CISA’s Establishing a Common SBOM (2024) | BSI TR-03183-2 (Germany/CRA Guidance) (2024) | ETSI TR 104 034 (Compendium) (2025) |
| --- | --- | --- | --- | --- |
| Component Name | Required | Required | Required | Required |
| Component Version | Required | Required | Required | Required |
| Supplier | Required | Required | Required | Required |
| Unique Identifier (e.g., PURL, CPE) | Required | Required | Required | Required |
| Cryptographic Hash | Recommended | Required | Required | Optional |
| License Information | Recommended | Required | Required | Optional |
| Dependency Relationship | Required | Required | Required | Required |
| SBOM Author | Required | Required | Required | Required |
| Timestamp (Date of Creation) | Required | Required | Required | Required |
| Tool Name / Generation Context | Not noted | Not noted | Required | Optional |
| Known Unknowns Declaration | Optional | Required | Optional | Optional |
| Common Format | Required | Not noted | Required | Required |
| Depth | Not noted | Not noted | Not noted | Optional |
The growing alignment across these frameworks shows that the SBOM is evolving into a globally shared data model, one capable of enabling automation, traceability, and trust across the international software supply chain.
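This shared data model lends itself to automated conformance checking. The sketch below checks a single component record against a subset of the fields every framework in the table marks as required; the field names are simplified stand-ins, not drawn from any one specification.

```python
# Illustrative conformance check against the shared baseline fields.
# Field names below are hypothetical simplifications of the table above.

BASELINE_FIELDS = {
    "name",            # Component Name
    "version",         # Component Version
    "supplier",        # Supplier
    "identifier",      # Unique Identifier (e.g., PURL or CPE)
    "relationships",   # Dependency Relationship
}

def missing_baseline_fields(component: dict) -> set:
    """Return the baseline fields absent from a component record."""
    return BASELINE_FIELDS - component.keys()

component = {
    "name": "openssl",
    "version": "3.0.13",
    "supplier": "OpenSSL Project",
    "identifier": "pkg:generic/openssl@3.0.13",
}

# 'relationships' is missing, so this record fails the shared baseline.
print(missing_baseline_fields(component))  # {'relationships'}
```

A real validator would of course operate on whole SPDX or CycloneDX documents rather than on flat dicts, but the join against a common required-field set is the same.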
The global SBOM ecosystem is underpinned by two major, robust, and mature open standards: SPDX and CycloneDX. Both provide a machine-processable format for SBOM data and support arbitrary ecosystems. These standards, while both supporting all the above frameworks, maintain distinct origins and strengths, making dual format support a strategic necessity for global commerce and comprehensive security.
The Software Package Data Exchange (SPDX), maintained by the Linux Foundation, is a comprehensive standard formally recognized by ISO/IEC 5962 in 2021. Originating with a focus on capturing open source licensing and intellectual property in a machine readable format, SPDX excels in providing rich, detailed metadata for compliance, provenance, legal due diligence, and supply chain risk analysis. Its strengths lie in capturing complex license expressions (using the SPDX License List and SPDX license expressions) and tracking component relationships in great depth, together with its extensions to support linkage to security advisories and vulnerability information, making it the preferred standard for rigorous regulatory audits and enterprise-grade software asset management. As the only ISO-approved standard, it carries significant weight in formal procurement processes and traditional compliance environments. It supports multiple formats (JSON, XML, YAML, Tag/Value, and XLS) with free tools to convert between the formats and promote interoperability.
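As a concrete illustration of the SPDX data model, a minimal SPDX 2.3 document can be assembled as a plain Python dict. The package, namespace URL, and dates below are invented placeholders; in practice such documents are produced by SBOM tooling rather than by hand.

```python
import json

# A minimal, illustrative SPDX 2.3 document. All values are placeholders.
spdx_doc = {
    "spdxVersion": "SPDX-2.3",
    "dataLicense": "CC0-1.0",
    "SPDXID": "SPDXRef-DOCUMENT",
    "name": "example-app-1.0.0",
    "documentNamespace": "https://example.com/spdx/example-app-1.0.0",
    "creationInfo": {
        "created": "2025-01-01T00:00:00Z",
        "creators": ["Tool: example-sbom-generator-0.1"],
    },
    "packages": [
        {
            "SPDXID": "SPDXRef-Package-zlib",
            "name": "zlib",
            "versionInfo": "1.3.1",
            "supplier": "Organization: zlib project",
            "downloadLocation": "https://zlib.net/zlib-1.3.1.tar.gz",
            "licenseConcluded": "Zlib",  # an SPDX license expression
        }
    ],
    "relationships": [
        {
            "spdxElementId": "SPDXRef-DOCUMENT",
            "relatedSpdxElement": "SPDXRef-Package-zlib",
            "relationshipType": "DESCRIBES",
        }
    ],
}

print(json.dumps(spdx_doc, indent=2)[:120])
```

Note how the license field carries a machine-readable SPDX license identifier and the relationship graph is explicit, which is what makes the format useful for automated compliance and provenance analysis.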
The SPDX community has continuously evolved the specification since its inception in 2010, most recently extending it to a wider set of metadata for modern supply chain elements with the publication of SPDX 3.0 in 2024. This update adds fields and relationships that capture a much wider set of use cases found in modern supply chains, including AI. These additional capabilities are organized as profiles, so that tooling only needs to understand the relevant sets, yet all are harmonized in a consistent framework suitable for supporting product line management. Fields are organized into a common “core”, with “software” and “licensing” profiles covering what was in the original ISO/IEC 5962 specification. There is now also a “security” profile, which enables VEX and CSAF use cases to be contained directly in exported documents as well as in databases, and a “build” profile that supports high-fidelity tracking of relevant build information for “Build” type SBOMs. SPDX 3.0 also introduced “Data” and “AI” profiles, which make accurate tracking of AI BOMs possible, with support for all the requirements of the EU AI Act (see table in linked report). As of writing, the SPDX 3.0 specification is in the final stages of being submitted to ISO/IEC for consideration.
CycloneDX, maintained by OWASP and standardized as ECMA-424, is a lightweight, security-oriented specification for describing software components and their interdependencies. It was originally developed within the OWASP community to improve visibility into software supply chains. The specification provides a structured, machine-readable inventory of elements within an application, capturing metadata such as component versions, hierarchical dependencies, and provenance details. Designed to enhance software supply chain risk management, CycloneDX supports automated generation and validation in CI/CD environments and enables early identification of vulnerabilities, outdated components, or licensing issues. Besides its inclusion alongside SPDX in the U.S. federal government’s 2021 cybersecurity Executive Order, its formal recognition as an ECMA International standard in 2023 underscores its growing role as a globally trusted format for software transparency. Like SPDX, CycloneDX has continued to evolve since formal standardization; the current release is 1.7, published in October 2025.
The CycloneDX specification continues to expand under active community development, regularly publishing revisions to address new use cases and interoperability needs. Today, CycloneDX extends beyond traditional SBOMs to support multiple bill-of-materials types, including Hardware (HBOM), Machine Learning (ML-BOM), and Cryptographic (CBOM), and can also describe relationships with external SaaS and API services. It integrates naturally with vulnerability management workflows through formats such as VEX, linking component data to exploitability and remediation context. With multi-format encoding options (JSON, XML, and Protocol Buffers) and a strong emphasis on automation, the specification fits naturally into machine-driven supply chain workflows.
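For comparison with the SPDX data model, a minimal CycloneDX BOM can be sketched the same way. This example targets spec version 1.6 for illustration, and the component details are placeholders.

```python
import json
import uuid

# A minimal, illustrative CycloneDX BOM. All component values are placeholders.
cyclonedx_bom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.6",
    "serialNumber": f"urn:uuid:{uuid.uuid4()}",  # unique per BOM instance
    "version": 1,
    "metadata": {
        "timestamp": "2025-01-01T00:00:00Z",
    },
    "components": [
        {
            "type": "library",
            "name": "zlib",
            "version": "1.3.1",
            "purl": "pkg:generic/zlib@1.3.1",  # package URL identifier
            "licenses": [{"license": {"id": "Zlib"}}],
        }
    ],
    "dependencies": [],
}

print(json.dumps(cyclonedx_bom)[:80])
```

The `purl` field is central here: it gives each component a machine-resolvable identity that downstream vulnerability tooling can join against.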
The OpenSSF has rapidly become a coordination hub uniting industry, government, and the open source community around cohesive SBOM development. Its mission is to bridge global regulatory requirements, from the EU’s Cyber Resilience Act (CRA) to CISA’s Minimum Elements and other global mandates, with practical, open source technical solutions. This coordination is primarily channeled through the “SBOM Everywhere” Special Interest Group (SIG), a neutral and open collaboration space that connects practitioners, regulators, and standards bodies. The SIG plays a critical role in maintaining consistent semantics and aligning development efforts across CISA, BSI, NIST, CEN/CENELEC, ETSI, and the communities implementing CRA-related guidance. Its work ensures that global policy drivers are directly translated into unified, implementable technical standards, helping prevent the fragmentation that so often accompanies fast-moving regulation.
A major focus of OpenSSF’s work is delivering interoperability and automation tooling, such as Protobom and BomCTL, that turns SBOM policy into practical reality.
Completing this ecosystem is SBOMit, which manages the end-to-end SBOM lifecycle. It provides built-in support for creation, secure storage, cryptographic signing, and controlled publication, embedding trust, provenance, and lifecycle integrity directly into the software supply chain process. These projects are maintained through an open, consensus-driven model, continuously refined by the global SBOM community. Central to that collaboration are OpenSSF’s informal yet influential “SBOM Coffee Club” meetings, held every Monday, where developers, vendors, and regulators exchange updates, resolve implementation challenges, and shape the strategic direction of the next generation of interoperable SBOM specifications.
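The signing and integrity step that SBOMit embeds in the lifecycle can be illustrated with a toy sketch: canonicalize the SBOM, hash it, and attach a verifiable tag. The HMAC below is only a stand-in to show where a signature fits; production systems use real signing infrastructure (for example, Sigstore) rather than a shared secret.

```python
import hashlib
import hmac
import json

# Toy sketch of SBOM lifecycle integrity: canonicalize, hash, "sign", verify.

def canonical_bytes(sbom: dict) -> bytes:
    # Stable serialization so identical content always hashes identically.
    return json.dumps(sbom, sort_keys=True, separators=(",", ":")).encode()

def digest(sbom: dict) -> str:
    return hashlib.sha256(canonical_bytes(sbom)).hexdigest()

def sign(sbom: dict, key: bytes) -> str:
    # Stand-in for a real signature scheme.
    return hmac.new(key, canonical_bytes(sbom), hashlib.sha256).hexdigest()

def verify(sbom: dict, key: bytes, signature: str) -> bool:
    return hmac.compare_digest(sign(sbom, key), signature)

sbom = {"name": "example-app", "components": [{"name": "zlib", "version": "1.3.1"}]}
key = b"demo-key"
sig = sign(sbom, key)

# Any tampering after signing is detected on verification.
tampered = {**sbom, "name": "evil-app"}
print(verify(sbom, key, sig), verify(tampered, key, sig))  # True False
```

The essential point is the canonicalization step: without a stable byte representation, two semantically identical SBOMs would hash differently and signatures could not be compared.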
OpenSSF’s strategic support for both standards – SPDX and CycloneDX – is vital for the entire ecosystem. By contributing to and utilizing both formats, most visibly through projects like Protobom and BomCTL, which enable seamless, lossless translation between the two, OpenSSF ensures that organizations are not forced to choose between SPDX and CycloneDX. This dual-format strategy serves jurisdictions and toolchains standardized on either format and maximizes interoperability, guaranteeing that SBOM data can be exchanged effectively between all stakeholders, systems, and global regulatory jurisdictions.
Through this combination of open governance and pragmatic engineering, OpenSSF is defining not only how SBOMs are created and exchanged, but how the world collaborates on software transparency.
The collective regulatory momentum, anchored by the EU Cyber Resilience Act (CRA) and the U.S. Executive Order 14028, supported by the CISA 2025 Minimum Elements revisions, has cemented the global imperative for the Software Bill of Materials (SBOM). These frameworks illustrate deep global alignment: both the CRA and CISA emphasize that SBOMs must be structured, interoperable, and operationally useful for both compliance and cybersecurity. The CRA establishes legally binding transparency requirements for market access in Europe, while CISA’s work encourages SBOMs within U.S. federal procurement, risk management, and vulnerability intelligence workflows. Together, they define the emerging global consensus: SBOMs must be complete enough to satisfy regulatory obligations, yet structured and standardized enough to enable automation, continuous assurance, and actionable risk insight. The remaining challenge is eliminating format and semantic fragmentation to transform the SBOM into a universal, enforceable cybersecurity control.
Achieving this global scalability requires a unified technical foundation that bridges legal mandates and operational realities. This begins with Core Schema Consensus, adopting the NTIA 2021 baseline and extending it with critical metadata for integrity (hashes), licensing, provenance, and generation context, as already mandated by BSI TR-03183-2 and anticipated in forthcoming CRA standards. To accommodate jurisdictional or sector-specific data needs, the CISA “Core + Extensions” model provides a sustainable path: a stable global core for interoperability, supplemented by modular extensions for CRA, telecom, AI, or contractual metadata. Dual support for SPDX and CycloneDX remains essential, satisfying the CRA’s “commonly used formats” clause and ensuring compatibility across regulatory zones, toolchains, and ecosystems.
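The “Core + Extensions” model described above can be sketched as a stable core record plus optional, modular extension payloads. The extension names and fields below (“cra”, “ai”) are invented for illustration; real extension schemas would come from the relevant standards bodies.

```python
# Sketch of the "Core + Extensions" model: a mandatory global core plus
# modular, jurisdiction- or sector-specific extensions. All field names
# in EXTENSIONS are hypothetical.

CORE = {
    "name": "example-app",
    "version": "1.0.0",
    "supplier": "Example Org",
    "identifier": "pkg:generic/example-app@1.0.0",
    "hash": "sha256:0000000000000000",  # integrity metadata (placeholder)
}

EXTENSIONS = {
    "cra": {
        "support_period_end": "2030-01-01",
        "vulnerability_contact": "security@example.com",
    },
    "ai": {"model_training_data": "declared"},
}

def build_sbom_record(core: dict, wanted: list) -> dict:
    """Combine the mandatory core with only the requested extensions."""
    record = dict(core)
    record["extensions"] = {k: EXTENSIONS[k] for k in wanted}
    return record

# A record prepared for a European regulatory context carries only the
# CRA extension; the global core stays identical everywhere.
eu_record = build_sbom_record(CORE, ["cra"])
```

The design point is that consumers who only understand the core can ignore `extensions` entirely, while regulators can require the presence of a specific extension without forking the base schema.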
Ultimately, the evolution toward global, actionable SBOMs depends on automation, lifecycle integrity, and intelligence linkage. Organizations should embed automated SBOM generation and validation (using tools such as Protobom, BomCTL, and SBOMit) into CI/CD workflows, ensuring continuous updates and cryptographic signing for traceable trust. By connecting SBOM information with vulnerability data in internal databases, the SBOM data becomes decision-ready, capable of helping identify exploitable or end-of-life components and driving proactive remediation. This operational model, mirrored in the initiatives of Japan (METI), South Korea (KISA/NCSC), and India (MeitY), reflects a decisive global movement toward a single, interoperable SBOM ecosystem. Continuous engagement in open governance forums, ISO/IEC JTC 1, CEN/CENELEC, ETSI, and the OpenSSF SBOM Everywhere SIG, will be essential to translate these practices into a permanent international standard for software supply chain transparency.
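Connecting SBOM information to vulnerability data, as described above, amounts to a join between component identifiers and a vulnerability database. A minimal sketch follows, using an invented in-memory database; the CVE mappings shown are real published advisories for those versions, but a production system would query a maintained feed instead.

```python
# Sketch: joining SBOM component PURLs against an internal vulnerability
# database to surface actionable findings. VULN_DB is an illustrative
# stand-in for a real, continuously updated feed.

VULN_DB = {
    "pkg:npm/lodash@4.17.20": ["CVE-2021-23337"],
    "pkg:pypi/requests@2.19.0": ["CVE-2018-18074"],
}

def affected_components(sbom_components: list) -> dict:
    """Map each vulnerable component's PURL to its known CVE list."""
    return {
        c["purl"]: VULN_DB[c["purl"]]
        for c in sbom_components
        if c.get("purl") in VULN_DB
    }

components = [
    {"name": "lodash", "purl": "pkg:npm/lodash@4.17.20"},
    {"name": "zlib", "purl": "pkg:generic/zlib@1.3.1"},
]

findings = affected_components(components)
# findings == {"pkg:npm/lodash@4.17.20": ["CVE-2021-23337"]}
```

This is the step that makes SBOM data “decision-ready”: once identifiers are consistent, the join is trivial, which is why identifier quality matters more than document size.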
The joint guidance “A Shared Vision of SBOM for Cybersecurity” reinforces these global synergies, carrying the endorsement of 21 international cybersecurity agencies. Describing the SBOM as a “software ingredients list,” the document positions SBOMs as essential for achieving visibility, building trust, and reducing systemic risk across global digital supply chains. Its central goal is to promote immediate and sustained international alignment on SBOM structure and usage, explicitly urging governments and industries to adopt compatible, unified systems rather than develop fragmented, country-specific variants that could jeopardize scalability and interoperability.
The guidance organizes its vision around four key, actionable principles aimed at transforming SBOMs from static compliance documents into dynamic instruments of cybersecurity intelligence.
This Shared Vision complements regulatory frameworks like the EU Cyber Resilience Act (CRA) and reinforces the Open Source Security Foundation’s (OpenSSF) mission to achieve cross-ecosystem interoperability. Together, they anchor the future of SBOM governance in openness, modularity, and global collaboration, paving the way for a truly unified software transparency model.
The primary challenge to achieving scalable cyber resilience lies in the fragmentation of the SBOM landscape. Global policy drivers, such as the EU Cyber Resilience Act (CRA), the CISA-led Shared Vision of SBOM for Cybersecurity, and national guidelines like BSI TR-03183, have firmly established the mandate for transparency. However, divergence in formats, semantics, and compliance interpretations threatens to reduce SBOMs to static artifacts generated only to satisfy regulation, rather than dynamic assets that aid security. Preventing this outcome requires a global commitment to a unified SBOM framework, a lingua franca capable of serving regulatory, operational, and security objectives simultaneously. This framework must balance diverse policy requirements with universal technical capability, ensuring interoperability between European regulation, U.S. federal procurement mandates, and emerging initiatives in Asia and beyond. The collective engagement of ISO/IEC, ETSI, CEN/CENELEC, BSI, and the OpenSSF provides the necessary multistakeholder governance to sustain this alignment and accelerate convergence toward a common foundation.
Building such a framework depends on two complementary architectural pillars: Core Schema Consensus and Modular Extensions. The global core should harmonize essential SBOM elements and the CRA’s legal requirements into a single, mandatory baseline. Sectoral or regulatory needs (e.g., AI model metadata, critical infrastructure tagging, or cryptographic implementation details) should be layered through standardized modular extensions to prevent the ecosystem from forking into incompatible variants. To ensure practical interoperability, this architecture must rely on open tooling and universal machine-processable identifiers (such as PURL, CPE, SWID, and SWHID) that guarantee consistent and accurate linkage. Equally crucial are trust and provenance mechanisms: digitally signed SBOMs, verifiable generation context, and linkage with vulnerability data. These collectively transform the SBOM from a passive, unused inventory into an actively maintained, actionable cybersecurity tool, enabling automation, real-time risk management, and genuine international trust in the digital supply chain, realizing the OpenSSF vision of “SBOMs everywhere.”
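Of the identifiers listed above, the package URL (PURL) is the most widely used for this kind of linkage. Below is a minimal, illustrative parser for the common `pkg:type/namespace/name@version` shape; it deliberately ignores qualifiers and subpaths, which the full PURL specification also defines.

```python
# Minimal illustrative PURL parser. Handles only the common
# pkg:type/namespace/name@version shape; qualifiers ("?...") and
# subpaths ("#...") from the full spec are out of scope here.

def parse_purl(purl: str) -> dict:
    if not purl.startswith("pkg:"):
        raise ValueError("not a PURL")
    body = purl[len("pkg:"):]
    base, sep, version = body.rpartition("@")
    if not sep:
        # No version component present, e.g. "pkg:npm/lodash".
        base, version = body, None
    parts = base.split("/")
    ptype, name = parts[0], parts[-1]
    namespace = "/".join(parts[1:-1]) or None
    return {"type": ptype, "namespace": namespace,
            "name": name, "version": version}

parsed = parse_purl("pkg:maven/org.apache.commons/commons-lang3@3.12.0")
```

Consistent parsing like this is what allows SBOM consumers, vulnerability databases, and regulators to agree that two documents are describing the same component.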
SBOMs have transitioned from a best practice to a requirement in many situations. The foundation established by the U.S. Executive Order 14028 has been legally codified by the EU’s Cyber Resilience Act (CRA), making SBOMs a non-negotiable legal requirement for accessing major markets. This legal framework is now guided by a collective mandate, notably by the Shared Vision issued by CISA, NSA, and 19 international cybersecurity agencies, which provides the critical roadmap for global alignment and action. Complementary work by BSI, ETSI, ISO/IEC, and OpenSSF now ensures these frameworks converge rather than compete.
To fully achieve global cyber resilience, SBOMs must not be treated merely as a compliance artifact to be created and ignored, but as an operational tool that supports security and augments asset management processes. Organizations must embed SBOM generation, validation, and vulnerability linkage into their everyday security operations.
By embracing this shared vision, spanning among many others the CRA, CISA, METI, KISA, NTIA, ETSI, and BSI frameworks, we can definitively move from merely fulfilling compliance obligations to achieving verifiable confidence. This collective commitment to transparency and interoperability is the essential step in building a truly global, actionable, and resilient software ecosystem.
Madalin Neag works as an EU Policy Advisor at OpenSSF focusing on cybersecurity and open source software. He bridges OpenSSF (and its community), other technical communities, and policymakers, helping position OpenSSF as a trusted resource within the global and European policy landscape. His role is supported by a technical background in R&D, innovation, and standardization, with a focus on openness and interoperability.
Kate is VP of Dependable Embedded Systems at the Linux Foundation. She has been active in the SBOM formalization efforts since the NTIA initiative started, and was co-lead of the Formats & Tooling working group there. She was co-lead on the CISA Community Stakeholder working group to update the minimum set of Elements from the original NTIA set, which was published in 2024. She is currently co-lead of the SBOM Everywhere SIG.
Dr. David A. Wheeler is an expert on open source software (OSS) and on developing secure software. He is the Director of Open Source Supply Chain Security at the Linux Foundation and teaches a graduate course in developing secure software at George Mason University (GMU). Dr. Wheeler has a PhD in Information Technology, is a Certified Information Systems Security Professional (CISSP), and a Senior Member of the Institute of Electrical and Electronics Engineers (IEEE). He lives in Northern Virginia.