Yearly Archives

2025

What’s in the SOSS? Podcast #40 – S2E17 From Manager to Open Source Security Pioneer: Kate Stewart’s Journey Through SBOM, Safety, and the Zephyr Project


Summary

In this episode of What’s in the SOSS, CRob has an inspiring conversation with Kate Stewart, a Linux Foundation veteran who took an unconventional path into open source as a manager rather than a developer, navigating complex legal challenges to get Motorola’s contributions upstream. Now a decade into her tenure at the Linux Foundation, Kate leads critical initiatives in safety-critical open source software, including the Zephyr RTOS project and ELISA, while being instrumental in the evolution of SPDX and Software Bill of Materials (SBOM). She breaks down the different types of SBOMs, explains how the Zephyr project became a security exemplar with gold-level OpenSSF badging, and shares practical insights on navigating the European Union’s Cyber Resilience Act (CRA). Whether you’re interested in embedded systems, security best practices, or the evolving regulatory landscape for open source, this episode offers valuable perspectives from someone who’s been shaping these conversations for years.

Conversation Highlights

00:00 Intro Music & Promo Clip
00:00 Introduction and Welcome
00:42 Kate’s Current Work at Linux Foundation
02:18 Origin Story: From Motorola Manager to Open Source Advocate
06:38 Building Global Open Source Teams and SPDX Beginnings
09:45 The Variety of Open Source Contributors
10:57 Deep Dive: What is an SBOM and Why It Matters
17:05 The Evolution of SBOM Types and Academic Understanding
19:21 Cyber Resilience Act and Zephyr as a Security Exemplar
26:46 Zephyr’s Security Journey: From Badging to CNA Status
31:05 Rapid Fire Questions
32:19 Advice for Newcomers and Closing Thoughts

Transcript

Intro Music + Promo Clip (00:00)

CRob (00:07.862)
Welcome, welcome, welcome to “What’s in the SOSS?”, the OpenSSF’s podcast where we talk to the amazing people that help produce, manage, and advocate for this amazing movement we call open source software. Today we have a real treat – someone that I’ve been interacting with off and on over the years through many kinds of industry ecosystem-spanning things. So I would like to have my friend, Kate Stewart, give us a little introduction. Talk about yourself.

Kate Stewart (00:42.103)
Rob, glad to be here. Right now, I work at the Linux Foundation and have been focusing on what it’s going to take to actually be able to use open source in safety-critical contexts. So obviously, security is going to be a part of that. We need to make sure that there are no vulnerabilities and things like that. But it does go beyond that. And so that’s been my focus for the last few years. I’ve been working on a couple of open source projects, one of which is Zephyr and the other of which is ELISA, and then helping with a variety of other embedded projects like Yocto and Xen and so forth. Trying to figure out what we can do to make sure that we actually get to the stage where we can do safety analysis at scale, with vulnerabilities happening all the time with open source. And it’s a really good challenge. I’ve also been involved in the SBOM world a fair amount over the years, for fun. I think I was involved with the SBOM world before it was called the SBOM world, actually, with a project called SPDX. Software Package Data Exchange is what it was called, and it’s now moved to being System Package Data Exchange, because data and models and all these other lovely concepts are part of the whole transparency that we’re going to need. And the transparency that we’re going to need is what we’re going to need for safety. So these things also come together with that theme in mind.

CRob (02:02.702)
Awesome, that’s an important space that we haven’t had a lot of folks talk about. Safety is just as important, if not more so, when we’re thinking about security and just kind of protecting people. So let’s dive into your background. What’s your open source origin story, Kate? How did you get into this crazy space?

Kate Stewart (02:18.967)
Well, I’m a little bit different than most. Okay. I got into this not as a developer; I got into this as a manager. Okay. So I basically was managing a team of developers doing bring-up software back in the Motorola days. It was about 20-plus years ago now. And Apple had just finished pivoting away from the PowerPC. And so we needed to be able to prove that what we were doing in the silicon was actually going to work. And so this is part of the embedded story and the enablement story to me. But what I ended up having to do is, we went and looked at Linux. We went, OK, yeah, let’s work with Linux, let’s work with the GCC toolchains, and let’s use that as our enablement platform. And then we had our customers saying, OK, that’s fine, but until it’s upstream, we don’t believe it. So it was one of those sorts of ones, right?

Your port doesn’t exist unless it’s upstream, as far as they were concerned. So we could get it running in our labs, but all of a sudden I had to go and deal with the lawyers and figure out how we could get it so that we could actually contribute upstream. And basically I went to the lawyers when I was at Motorola and worked with them, convinced them, educated them, you know, education, all that, to get them to understand we’re not going to take a lot of risk.

CRob (03:18.866)
Mm-hmm.

Kate Stewart (03:47.313)
And it’s in our best interest to sell our chips. So we got that happening. And then the company spun out Freescale from Motorola, and I got to do it all over again with a new set of lawyers. So I was managing the open source teams. And then we did a lot of work with the team in Austin. And then we started scaling that out worldwide to all the other open source teams in Motorola, or Freescale at the time then, managing teams.

CRob (04:00.05)
Hahaha

Kate Stewart (04:16.363)
Teams in China, teams in Israel, teams in Canada, and gosh, Japan and France. Anyhow, we had a good selection of teams around the world. And so making sure that we could actually do all this properly at scale was an interesting challenge. So that was my start in open source. And this is why you sort of see me in the licensing space, because I’ve been talking to lawyers a lot. And that’s kind of where SPDX started, is because we had to keep doing the same metadata over and over and over.

And so my colleagues at Wind River were looking at the same stuff. My colleagues at MontaVista were looking at the same stuff. We had no way of collaborating. So it was a language to collaborate with, so that if I go and scrub this curl package and pull all the licensing information out and sanitize it, I could share it with you, type of deal, and vice versa. So that’s kind of how SPDX started, with being able to share the metadata about projects and make things transparent so that we could do the risk analysis properly. After that, you know, I got an opportunity to join Canonical, and I was an Ubuntu release manager for two and a half years. So all of a sudden that was a whole different view of open source, right? I was coming from a nice little embedded space, and at Canonical I learned all about dependencies and all about how you had to make sure that your full dependency chain was actually satisfied and coherent.

CRob (05:23.664)
Nice.

Kate Stewart (05:39.447)
Or you were going to have problems. I also learned a lot about zero days and making the whole security story come together so that we could ship a distribution that was not going to cause problems for people. And there were about five releases, a bunch of releases, that I was release manager for in that time period. And so that taught me a lot about open source at scale in the current environment. And after that, I was doing the roadmaps at Linaro.

CRob (05:51.43)
Mm-hmm.

Kate Stewart (06:09.399)
I was the director of product management at Linaro, figuring out where the Arm ecosystem wanted to collaborate, what topics they wanted to collaborate on. And so I was there for a couple of years, and then I joined the Linux Foundation in 2015. So this will be my 10th year, and it’s been a fun ride. I know. Yeah, yeah, it is. They brought me in because the lawyers at the Linux Foundation knew me from the SPDX world. But after I joined, they had me pointed at a certain problem initially, and we figured out a solution. But then they realized, oh, she understands embedded. Ooh. So they basically asked me to pull together the funding pool for real-time Linux. And I said to them that that was a big problem. And now, as of last year, we finally have everything upstream. It’s only taken eight years, but we finally got it, the PREEMPT_RT set of patches. And then after I was able to get that going well, they went, hmm, we’ve got this RTOS called Zephyr that Intel is contributing, and we need someone to run it and start to build a program up. And so it was sort of like, okay, let’s figure out how we can deal with this one. So it was fun. We did a lot of surveys with developers: what were the big pain points in the IoT ecosystem? Back in 2015, I don’t know if you remember, but the big joke was, what’s the S in IoT? Yeah, exactly. And so…

Security in Zephyr has been the focus pretty much since the day we launched the project. We had our first security committee meetings. And every time we found a new security best practice, we tried to apply it to Zephyr and see where we are. I think we were the fifth project to get a gold badge out of the OpenSSF Badging Program. I think there are a few more now, but not that many have gone to that level.

Kate Stewart (08:02.417)
And certainly Zephyr was there before Linux was, just as a side thing. So we actually got it.

CRob (08:08.274)
I’ll tell Greg next time I see him.

Kate Stewart
He knows it. I’ve teased him for many years. So nothing new there. But yeah, so we were trying to do best practices in Zephyr. And we’ve been working towards safety in Zephyr. Now, it’s taken us a while to figure out a path that would work in the context of open source. But these were the two things that we were told back in 2015 by the developers, the things they wanted to see in an open source RTOS. And so that’s what we’ve been focusing on. The project has grown really well over the years. We’re now the fifth most active project at the Linux Foundation in terms of project velocity, as measured by the CNCF, not us, so we have some degree of separation there.

And we’ve actually hit the top 30 of all open source projects now, so we’re the 25th. So it’s had a pretty good trajectory. And this just goes to show, if you try to do the best practices, it makes a project successful. And developers want to be there because they can build on it. So that’s kind of the origin story of where I am today. I continue to work on Linux-related safety topics with the ELISA project.

Kate Stewart (09:20.331)
And we’re busy trying to figure out similar paths, how we can go for certifications with Linux. So that’s been growing slowly. But both of these projects are ones that grow very slowly over time. And they just sort of creep up bit by bit, step by step. And, yeah, exactly, pedal by pedal by pedal. I’ve got some great board members in both projects who are very much engaged and have been doing a lot to help us move it forward. So I’ve been very lucky in that sense too. Yeah.

CRob (09:45.362)
Awesome.

That’s an amazing journey that you’ve described, touching on so many interesting different areas. I love that through these interviews I get to hear how people got here, and the variety of skills that people bring, and that you’re able to contribute very meaningfully to upstream, is just amazing. It really makes me happy to hear that you have, again, kind of a non-traditional open source background. That’s great.

Kate Stewart (10:09.729)
Yeah, like I say, whenever the academics go do surveys about open source origins, I always enjoy being able to say it isn’t just developers that do open source and help open source. Realistically, it’s a community. And so, you know, everyone has different roles and different abilities to contribute in different ways. And as long as we all go after the same vision, the right thing happens. That’s pretty much what I’ve seen anyhow.

CRob (10:31.89)
So where I first got to interact with you way back in days of yore was on this little thing that you alluded to, this small little thing called Software Bill of Materials. I joined the second NTIA call and you were one of the presenters in the room. So for our audience, could you maybe give me an elevator pitch on what’s an SBOM and why is it important?

Kate Stewart (10:57.835)
Well, so a Software Bill of Materials is a way of exposing the metadata about the software you’re using in a transparent fashion. And so it’s basically putting together the key elements of what your components are and how they are linked together. What are the relationships between them? And understanding these relationships and how these components interact is what gives you the ability to decide if something is potentially at risk or not. Is it vulnerable or is it not vulnerable? What are the transitive dependencies? What’s happening? Realistically, there were a lot of simplifications that were made at that point in time in the initial SBOM work. They’re starting to come back to bite us now. Just saying. One of which was that the only relationship was “includes as a dependency.” Well, realistically,

Kate Stewart (11:53.067)
You need to know: is something statically linked, is something dynamically linked? How are these things interacting? And we’ve got things emerging right now on the whole AI front: you know, what training data are we using, what models on your models, and how is this all assembled? And so these are things that we’d actually dealt with in SPDX a long time ago, but the SBOM community wasn’t ready to talk about it at that time. They’re starting to move in this direction now. But you’ll find, like I say, I’m having a lot of fun right now reading some academic papers, because I’ll be talking about some of these topics at the end of the month up in Ottawa at MSR. So I’ll be doing a keynote there about some of this stuff. And I was looking at all these academic papers to see, OK, what do they think an SBOM is, just so that I figure out where I have to talk to them about this stuff.

Kate Stewart (12:52.263)
One of the things that they’re seeing is that there are a lot of interpretations and subtleties around the SBOM. This data of components and the relationships between these components is very much like an elephant. It depends on which part of the elephant you’re coming at it from and what you’re feeling with your little blind hands as to what you perceive it as.

Kate Stewart (13:18.999)
And we went through the, I guess it was the CISA working group. I was a fly on the wall on the first one and opened my mouth, which is why I was on the second NTIA call. And that’s why I basically took over co-leading the formats and tooling working group under NTIA. And I was an active participant in the framing discussions. So I’ve been pretty much involved with it all the way through. Then I was working when we first had SBOM Everywhere. So we had the SBOM working meeting in Washington. I was there. And that was sort of the start of figuring out, okay, well, we want to have the SBOM Everywhere sync to start focusing issues. And so when there was a hiatus between NTIA and CISA, that SBOM Everywhere group was keeping the discussion going in a reasonably collective way. And we’re sort of starting to head into that again with SBOM Everywhere, pulling voices together, and understanding how this technology is evolving, where the strengths and weaknesses are, where the gaps are, and filling the gaps. So I’ve been involved with the OpenSSF SBOM Everywhere SIG under the tooling group, basically with Josh Bressers, since it started, trying to make sure that we had the different voices coming in and the different perspectives available so that we could start to get a good lay of the land. And we started work on clarifying what is actually happening with the types of SBOMs. Because the type of data you have available for source SBOMs is what historically the legal folks focused on for license compliance, but it’s still useful for the safety people. But then you have the build SBOMs, where you say, OK, these are the pieces I’m putting together. And the question is, are you capturing these are all the source files that went into my build image, or are you just capturing these are the components that go into my build image?

Kate Stewart (15:14.559)
But these are build types of SBOMs. And they have different purposes, and they have different abilities to reduce false positives. Specifically, if you don’t compile a piece of code into your package, you’re not vulnerable to it. There’s a lot of, I’d like to get rid of a whole bunch of these false positives in the security space. I really think we should be able to, if we actually get the right level of information. So we can take that, and then what you actually configure may impact whether you have a vulnerability or not when you deploy it. So what have you released and where are you deploying? And then what’s happening to the system as it’s running? Are you updating your libraries? Are there potentially interactions between your runtime libraries and what you put down as images? All these are different types of data that can legitimately be put into an SBOM. And there are different levels of trust depending on where you are in the ecosystem, how much the people who have put the data into the format actually understand the data and have confidence in it. Because there are a lot of tools out there, which is the sixth type, which is an analysis SBOM. And these are ones that are looking at a different part of the life cycle, or off to the side, and trying to guess what’s going on. And the guessing what’s going on, if you don’t have anything else, that’s what you need to do, no question. But if you can get it precisely from the build flows and from your whole ecosystem as it’s being used, as it’s being deployed, as it’s being monitored, you’re going to have a lot more accuracy, which removes a lot of problems. So that type of concept that we sort of picked up on in OpenSSF has been picked up by CISA. And we’ve got a short white paper out about the same thing. That concept hasn’t hit academia yet.
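
To make the “components plus relationships” idea concrete, here is a minimal sketch of what an SBOM document can capture, expressed as SPDX-2.3-style JSON built in Python. The field names follow the SPDX JSON layout from memory and the packages are invented; the point is simply that the kind of relationship (statically versus dynamically linked) is data you can record and later reason about, not a validated document.

```python
# Minimal, illustrative SPDX-2.3-style SBOM (assumed field names, made-up packages).
import json

sbom = {
    "spdxVersion": "SPDX-2.3",
    "SPDXID": "SPDXRef-DOCUMENT",
    "name": "example-firmware-build-sbom",
    "packages": [
        {"SPDXID": "SPDXRef-app", "name": "example-app", "versionInfo": "1.0.0"},
        {"SPDXID": "SPDXRef-zlib", "name": "zlib", "versionInfo": "1.3.1"},
        {"SPDXID": "SPDXRef-openssl", "name": "openssl", "versionInfo": "3.0.13"},
    ],
    "relationships": [
        # The *kind* of relationship matters: a statically linked library ends
        # up inside the shipped image, a dynamically linked one may not.
        {"spdxElementId": "SPDXRef-app", "relationshipType": "STATIC_LINK",
         "relatedSpdxElement": "SPDXRef-zlib"},
        {"spdxElementId": "SPDXRef-app", "relationshipType": "DYNAMIC_LINK",
         "relatedSpdxElement": "SPDXRef-openssl"},
    ],
}

print(json.dumps(sbom, indent=2))
```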

CRob (17:03.858)
We better get that over there.

Kate Stewart (17:05.55)
Uh-huh. So this is where I’ll be having some fun. So the whole concept of the different data, and like, they’re trying to look at all these SBOMs everywhere and how do you work with them, et cetera. And it’s sort of like, well, question one is how authoritative are the people who produced this SBOM? Is it garbage in? Is it someone just filling in the fields and not really understanding what they’re filling in?

CRob (17:28.676)
Mm-hmm.

Kate Stewart (17:31.797)
Or does it come from a source that you trust and that you think actually knows what they’re talking about, so you can build on it further and augment it and everything else? So yeah, these are the challenges I think are kind of interesting for us to be playing with right now. And so in CISA, I was part of working on basically the tool quality; basically we were looking at tooling and looking at the formats, and we actually ended up doing a revision of the framing document. So we basically pulled together consensus on what the next additional sets of fields were going to be that got added in. And then I think CISA has plans, or at least had plans, we’ll find out what happens this year, of basically going through a more formal process. But they’ve got the stakeholder input from us now. And that was the stage where, realistically, the lawyers managed to show up in those meetings and convince them that they did need to keep licenses in there. Thank you. Because it’s another form of risk at the end of the day. It’s just different forms of risk. And so people are creating these things and doing policies, and the academics are busy trying to study this because it’s an interesting area for them. And so I think we’ll see a lot of innovation in the next few years, getting towards a stage where we can actually do product lines at scale.

Um, and being able to keep things safe, I think, is where we need to aim for with the whole SBOM initiatives, and they need to be system-wide. They can’t just be software anymore. You need to know which datasets you’ve used, because you can have poisoned datasets all over the place. You can have, um, you know, bad training on your models that might have an impact if you’ve not weighted things properly. These are all factors that, when people are trying to understand what went wrong, they’re going to need to be able to look at.

CRob (19:21.458)
What I love about my upstream work is that I get to collaborate with amazing people like you on these really hard challenges, and, you know, software bill of materials has been one. Let’s move on to another interesting problem we all get to work with. The European Union last year released a new piece of legislation, the CRA, the Cyber Resilience Act.

Kate Stewart (19:45.565)
Yes.

CRob (19:47.43)
And it’s a set of requirements predominantly for manufacturers, but also a little bit for open source stewards, that talks about the requirements for products with digital elements. Through our parallel work, we worked with Hilary Carter and the LF Research team, and we sponsored two studies. The first study, which was very sad news, polled 600 people participating in upstream open source, whether that was a manufacturer, a developer, or a steward like us. And the findings there were a little scary; a lot of people are unaware and uncertain, which is the title of the paper. People don’t know about this law and the requirements that are coming down the pipe. But in concert with that, LF Research also released a paper that has, I feel, some pretty spectacular news. The title of the other paper was Pathways to Cybersecurity, Best Practices in Open Source, and a project that’s near and dear to your heart was cited as one of the exemplars of an upstream project that takes security seriously and is doing the right types of things. So could you maybe talk a little bit about Zephyr and share your personal observations of what this report detailed?

Kate Stewart (21:13.749)
Yeah, we’re going to have a gap. But there are a lot of things that some of these projects have done right. Zephyr, for one, like I say, we’ve taken security seriously right since the project started. Literally, the first security committee meeting happened the day after we launched the project. So it’s in our blood that we make sure we’ve tried to do this right. You know, we started trying to figure it out, and funnily enough, the Core Infrastructure Initiative’s open source badging started at roughly that point in time as well. And so we looked at it going, well, how do we improve our security posture as a project? Because we were coming at this from a community, and we started using the badging program, looking at the criteria, filling it out, understanding how this all works. And we initially got to passing.

Yay, we got it. We’ve got our basic passing. And then David Wheeler got busy with his community, and he came up with a silver and a gold level for us. And we just kept increasing the resilience of the project at the heart of it all. So this was also really quite educational. So we started looking at that. And as we were going for that first passing, we realized, huh, what’s involved with becoming a CVE Numbering Authority, or CNA?

Kate Stewart (22:37.399)
And so I started working and reaching out to the folks behind that and understanding what’s involved. And so we ended up having myself and some of the security committee meet with them, trying to figure out, okay, well, what does it take for a project to be a CNA, not a company, but a project? And so in, I think, 2017 or 2018, we fulfilled the CNA criteria, and the project itself has been a CNA with a functioning PSIRT.

CRob (23:06.066)
Nice.

Kate Stewart (23:06.557)
We’ve been plugged into the whole PSIRT community and the FIRST community and everything else for the last few years. And last year at the FIRST conference, I actually gave a talk about Zephyr, because a lot of the corporate folks don’t realize that projects can do this too, if they have the will and the initiative behind their membership and their developers. So we’ve been looking at these things and tackling that.

CRob (23:14.514)
Excellent.

Kate Stewart (23:36.311)
And then we were sort of sitting on our laurels to a certain extent before we started really going after the silver. And the automation behind that badge kicked us out because some of the website links weren’t working anymore. Yeah. So I went, oh, it kicked us out. We’ll take it seriously now. OK. This is good. It wasn’t just a paper exercise; there’s actually something keeping you honest behind the scenes. And that checking behind the scenes motivated us then to go after silver and then gold, and we finally got gold. So I can say we were about the fifth, I think the fourth or the fifth, to get gold. We were pretty happy. And we’ve maintained it since then, and every year we do a periodic audit of the materials. We started looking at the Scorecard practices as a project last year, and the security committee, actually, this is the amusing part here, the security committee was going, ah, this really doesn’t apply to people building things with Zephyr, et cetera, et cetera. And we were going like, oh, well, OK.

Kate Stewart (24:33.761)
And then when we had this little recent incident with that exposure, all of a sudden we’re talking on weekends and they realized, shit, we better take this all seriously. So we’re improving our posture there too now. So I think we went like from 50 something up to 78 and we’ve got plans for getting ourselves up to the hundred level type of deal. So we’re continuing to improve our posture and work on that sort of stuff there. So Zephyr takes security very, very seriously as you can tell.

CRob (24:52.914)
Excellent.

Kate Stewart (25:03.031)
It really is, you know, part of the reason I think we’ve been a successful project, the fact that we do have a strong security story. We’ve got our threat models. We’ve got the things that people are saying we should be looking at, and we keep putting them in place, and we have people from our members and from the community that meet and, you know, continue to refine our posture. So I think it’ll always be that way. And so whenever I can find a new practice, we start looking at, can we apply it? How does it match what we’ve already done?

And so I’m curious, when the baseline stuff rolls out, what is it going to look like? And I suspect we’ve already hit that, but we might learn something. That’s always good. And the nice thing about doing it this way, which was applying the best practices badging and so forth, is that it let us put things in place that are serving us well for the CRA. And that’s probably why we showed up in this report, because

You know, a lot of the things that we do with the US government, once the Europeans figure out what database they are going to be throwing things at, who is going to be the CSIRTs, we’ve been doing it already on the US side. We should be able to do it on the European side. It’s just a matter of figuring it out a little bit, basically just testing a few of our processes. But we’ve already been doing these types of things in one direction. We just have to broaden the reach a little bit more.

CRob (26:26.651)
Yeah, I think it’ll be a pretty easy pivot. You need to make some small adjustments, do some documentation potentially, but it sounds like you’ll be in a pretty good spot to fulfill any obligations underneath that, and anyone that uses Zephyr will be in a great space to defend themselves to the market surveillance authorities or other groups.

Kate Stewart (26:46.037)
Right. And, you know, one of the things the project did when we started looking at the criteria coming down, and the needs of the criteria, was, okay, they want to have this longer support period, like, you know, between five years up to 25, type of deal. And so the Zephyr project’s TSC actually voted in March to extend our LTS support from two and a half years to five years. And we’re doing different things in different periods of that time.

CRob (27:12.551)
Yay!

Kate Stewart (27:15.863)
Okay, so we’re doing what we’ve traditionally done for the first two and a half years, and then we’re basically just focusing on security and critical updates after that. So it keeps the load reasonable for our maintainers and our release team members. And so that’s kind of how we’ve attacked it. So, yeah, the TSC voted on it, it was approved. And so that’s our bit we could do to help shrink the gap between the steward obligations and the manufacturer obligations. So we’re trying to make

Kate Stewart (27:45.013)
Zephyr as friendly as possible for the manufacturers. Now we are going to have a challenge that I don’t see a clean story for yet. We went through some bulk vulnerability assessments and things like that, and we’ve changed some processes. And what we do, and we have it documented in the project, is we will respond to any vulnerability reports in seven days. We will basically, you know, acknowledge them, start working with whoever’s reporting things in, and then in 30 days we will have it fixed upstream.

And then we’ll have another 60-day embargo window, so 90 days of embargo in total before we make it public. And we do this because in the embedded space, you know, it’s a lot harder sometimes to remediate things in the field. And so we wanted to make sure that the people who are trusting Zephyr as their RTOS would have a chance to work with their customers. Now the CRA is asking for, especially for severe vulnerabilities, some pretty different timelines. And so I’m really going to be interested in how that’s all going to boil itself out. The other thing that’s interesting is that the CRA is calling for us to notify our customers. Well, we don’t know who’s using Zephyr. So one of the mechanisms we put in place earlier was basically a vulnerability notification list for our product makers.

Kate Stewart (29:12.887)
So any of our members in the project, or any people who can show us that they’re using a product that’s got Zephyr in it, we’ll add them to the notification list under embargo. And so we’re trying to handle it that way. But that’s going to be the best we’re going to be able to do. We won’t be able to find those end users because we don’t have the line of sight. And now the manufacturer…

CRob (29:32.028)
But exactly, and that’s not necessarily your responsibility as the upstream. That’s the manufacturer, somebody that’s commercializing this.

Kate Stewart (29:36.317)
Well, it’s one of those sections that applies a little bit to the stewards.

CRob (29:45.81)
It does, but the timelines are not identical.

Kate Stewart (29:47.147)
So that’s why we’re always going to do it for whoever wants to let us know about them. Yeah, let us know about them. I think sanity will prevail, and we will not be subject to some of the various punitive stuff. But I think we can certainly look at trying to make sure that what we’ve done is as much as we can do with the data we have available to us. And hence, the more transparency we make in this ecosystem, the better. Speaking back to SBOMs, with every build of Zephyr you can actually get three SBOMs out with just a couple of command-line tweaks. You get a source SBOM of the Zephyr sources, from the files you pulled from, and of course the sources from the application you’ve used, so you get source SBOMs for each of those. And then you get a build SBOM that links back cryptographically to the source SBOMs and lets you know exactly which files made it in, as well as which dependency links and components you may have pulled from your ecosystem. But that level of transparency is what we’re going to need for safety. And we have it today with Zephyr from following these best practices on security. So we’re in reasonable shape, I think, for the regulated industries as well.
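
As one illustration of what that build-level linkage buys you, here is a small hypothetical sketch in Python that walks a build SBOM in SPDX JSON form and re-checks the recorded file hashes against the sources on disk. The field names follow the SPDX JSON layout from memory, and the paths are made up; Zephyr’s own tooling and documentation remain the authoritative reference for how its SBOMs are actually generated and linked.

```python
# Illustrative only: verify that files listed in an SPDX-style build SBOM
# still match the SHA256 checksums recorded for them (assumed field names).
import hashlib
import json
import pathlib

def verify_sbom_files(sbom_path: str, source_root: str) -> bool:
    sbom = json.loads(pathlib.Path(sbom_path).read_text())
    ok = True
    for f in sbom.get("files", []):
        recorded = {c["algorithm"]: c["checksumValue"] for c in f.get("checksums", [])}
        if "SHA256" not in recorded:
            continue  # this sketch only checks SHA256 entries
        actual = hashlib.sha256(
            (pathlib.Path(source_root) / f["fileName"]).read_bytes()
        ).hexdigest()
        if actual != recorded["SHA256"]:
            print(f"MISMATCH: {f['fileName']}")
            ok = False
    return ok

if __name__ == "__main__":
    # Hypothetical paths purely for illustration.
    print(verify_sbom_files("build/build.spdx.json", "."))
```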

CRob (31:05.414)
Good. Well, I could talk about SBOM and CRA all day, but let’s move on to the rapid fire part of our interview.

Kate Stewart (31:14.134)
Okay.

CRob (31:17.66)
I’ve got a bunch of questions here. Just give me your first emotional response, the first thing that comes to mind. First question: vi or Emacs?

Kate Stewart (31:28.599)
VI.

CRob (31:30.546)
Nice, very good, very good. These are all potentially contentious somewhat, so don’t feel bad about having an opinion. What’s your favorite open source mascot?

Kate Stewart (31:34.638)
Tux

CRob (31:45.106)
Excellent choice. What’s your favorite vegetable?

Kate Stewart (31:50.625)
Carrots.

CRob (31:52.434)
Very nice. Star Trek or Star Wars?

Kate Stewart (31:58.455)
Star Trek.

CRob (32:00.541)
Very good. There’s a pattern. So everyone I’ve asked that so far has been a Trekker, which is good. And finally, mild or spicy food?

Kate Stewart (32:09.611)
Depends. Probably, at this point now more mild. There’s certain things I like spicy though.

CRob (32:14.578)
Ha

CRob (32:19.42)
Fair enough. Well, thank you for playing along there. And as we wind down here, Kate, do you have any call to action or any advice to someone trying to get into this amazing upstream ecosystem?

Kate Stewart (32:33.303)
There’s lots of free training out there. Just start taking it. Educate yourself so that you can participate in the dialogue and not feel completely overwhelmed. Each domain has its own set of jargon, be it lawyers with licensing jargon, security professionals with their jargon, or safety folks with all their standards jargon. Everyone talks about certain concepts a little bit differently. So take the free training that’s available from the Linux Foundation and other places, just so that you’re aware of these concepts. And actually, before you start opining on some of these things, do your homework. I know that’s a horrible concept, but like I said, I was reading some of these academic papers, and it was pretty clear that they hadn’t done their homework in a couple of areas. So that was a bit sort of like, yeah, OK. So yeah, do your homework before you start to opine. But do your homework. Educate yourself. Do your homework. And then find the areas that are most interesting to you, because there are so many areas where people need help these days and there are a lot of things we can participate in. And you don’t need to be a developer to participate. Obviously developers make everything come together and make it all work, but there’s a need for people doing a lot of other tasks. And if you want to try and make things move forward, there are lots of ways.

CRob (33:49.586)
Well, thank you for sharing your story and thank you for all the amazing contributions you’re making to the ecosystem. Kate Stewart, thank you for showing up today.

Kate Stewart (33:56.919)
Thank you very much for all the work you’re doing in OpenSSF. I think this is really important. Thank you.

CRob (34:03.718)
My pleasure. And to everyone out there, thank you for listening and happy open sourcing. We’ll talk soon. Cheers.

Open Infrastructure is Not Free: A Joint Statement on Sustainable Stewardship


An Open Letter from the Stewards of Public Open Source Infrastructure

Over the past two decades, open source has revolutionized the way software is developed. Every modern application, whether written in Java, JavaScript, Python, Rust, PHP, or beyond, depends on public package registries like Maven Central, PyPI, crates.io, Packagist, and Open VSX to retrieve, share, and validate dependencies. These registries have become foundational digital infrastructure – not just for open source, but for the global software supply chain.

Beyond package registries, open source projects also rely on essential systems for building, testing, analyzing, deploying, and distributing software. These also include content delivery networks (CDNs) that offer global reach and performance at scale, along with donated (usually cloud) computing power and storage to support them.

And yet, for all their importance, most of these systems operate under a dangerously fragile premise: They are often maintained, operated, and funded in ways that rely on goodwill, rather than mechanisms that align responsibility with usage.

Despite serving billions (perhaps even trillions) of downloads each month (largely driven by commercial-scale consumption), many of these services are funded by a small group of benefactors. Sometimes they are supported by commercial vendors, such as Sonatype (Maven Central), GitHub (npm) or Microsoft (NuGet). At other times, they are supported by nonprofit foundations that rely on grants, donations, and sponsorships to cover their maintenance, operation, and staffing.

Regardless of the operating model, the pattern remains the same: a small number of organizations absorb the majority of infrastructure costs, while the overwhelming majority of large-scale users, including commercial entities that generate demand and extract economic value, consume these services without contributing to their sustainability.

Modern Expectations, Real Infrastructure

Not long ago, maintaining an open source project meant uploading a tarball from your local machine to a website. Today, expectations are very different:

  • Dependency resolution and distribution must be fast, reliable, and global.
  • Publishing must be verifiable, signed, and immutable.
  • Continuous integration (CI) pipelines expect deterministic builds with zero downtime.
  • Security tooling expects an immediate response from public registries.
  • Governments and enterprises demand continuous monitoring, traceability, and auditability of systems.
  • New regulatory requirements, such as the EU Cyber Resilience Act (CRA), are further increasing compliance obligations and documentation demands, adding overhead for already resource-constrained ecosystems.
  • Infrastructure must be responsive to other types of attacks, such as spam and increased supply chain attacks involving malicious components that need to be removed.

These expectations come with real costs in developer time, bandwidth, computing power, storage, CDN distribution, operational, and emergency response support. Yet, across ecosystems, most organizations that benefit from these services do not contribute financially, leaving a small group of stewards to carry the burden.

Automated CI systems, large-scale dependency scanners, and ephemeral container builds, which are often operated by companies, place enormous strain on infrastructure. These commercial-scale workloads often run without caching, throttling, or even awareness of the strain they impose. The rise of Generative and Agentic AI is driving a further explosion of machine-driven, often wasteful automated usage, compounding the existing challenges. 

The illusion of “free and infinite” infrastructure encourages wasteful usage.

Proprietary Software Distribution

In many cases, public registries are now used to distribute not only open source libraries but also proprietary software, often as binaries or software development kits (SDKs) packaged as dependencies. These projects may have an open source license, but they are not functional except as part of a paid product or platform. 

For the publisher, this model is efficient. It provides the reliability, performance, and global reach of public infrastructure without having to build or maintain it. In effect, public registries have become free global CDNs for commercial vendors.

We don’t believe this is inherently wrong. In fact, it’s somewhat understandable and speaks to the power of the open source development model. Public registries offer speed, global availability, and a trusted distribution infrastructure already used by their target users, making it sensible for commercial publishers to gravitate toward them. However, it is essential to acknowledge that this was not the original intention of these systems. Open source packaging ecosystems were created to support the distribution of open, community-driven software, not as a general-purpose backend for proprietary product delivery. If these registries are now serving both roles, and doing so at a massive scale, that’s fine. But it also means it’s time to bring expectations and incentives into alignment.

Commercial-scale use without commercial-scale support is unsustainable.

Moving Towards Sustainability

Open source infrastructure cannot be expected to operate indefinitely on unbalanced generosity. The real challenge is creating sustainable funding models that scale with usage, rather than relying on informal and inconsistent support. 

There is a difference between:

  • Operating sustainably, and
  • Functioning without guardrails, with no meaningful link between usage and responsibility.

Today, that distinction is often blurred. Open source infrastructure, whether backed by companies or community-led foundations, faces rising demands, fueled by enterprise-scale consumption, without reliable mechanisms to scale funding accordingly. Documented examples demonstrate how this imbalance drives ecosystem costs, highlighting the real-world consequences of an illusion that all usage is free and unlimited.

For foundations in particular, this challenge can be especially acute. Many are entrusted with running critical public services, yet must do so through donor funding, grants, and time-limited sponsorships. This makes long-term planning difficult and often limits their ability to invest proactively in staffing, supply chain security, availability, and scalability. Meanwhile, many of these repositories are experiencing exponential growth in demand, while the growth in sponsor support is at best linear, posing a challenge to the financial stability of the nonprofit organizations managing them.

At the same time, the long-standing challenge of maintainer funding remains unresolved. Despite years of experiments and well-intentioned initiatives, most maintainers of critical projects still receive little or no sustained support, leaving them to shoulder enormous responsibility in their personal time. In many cases, these same underfunded projects are supported by the very foundations already carrying the burden of infrastructure costs. In others, scarce funds are diverted to cover the operational and staffing needs of the infrastructure itself.

If we were able to bring greater balance and alignment between usage and funding of open source infrastructure, it would not only strengthen the resilience of the systems we all depend on, but it would also free up existing investments, giving foundations more room to directly support the maintainers who form the backbone of open source.

Billion-dollar ecosystems cannot stand on foundations built of goodwill and unpaid weekends.

What Needs to Change

It is time to adopt practical and sustainable approaches that better align usage with costs. While each ecosystem will adopt the approaches that make the most sense in its own context, the need for action is universal. These are the areas where action should be investigated:

  • Commercial and institutional partnerships that help fund infrastructure in proportion to usage or in exchange for strategic benefits.
  • Tiered access models that maintain openness for general and individual use while providing scaled performance or reliability options for high-volume consumers.
  • Value-added capabilities that commercial entities might find valuable, such as usage statistics.

These are not radical ideas. They are practical, commonsense measures already used in other shared systems, such as Internet bandwidth and cloud computing. They keep open infrastructure accessible while promoting responsibility at scale.

Sustainability is not about closing access; it’s about keeping the doors open and investing for the future.

This Is a Shared Resource and a Shared Responsibility

We are proud to operate the infrastructure and systems that power the open source ecosystem and modern software development. These systems serve developers in every field, across every industry, and in every region of the world.

But their sustainability cannot continue to rely solely on a small group of donors or silent benefactors. We must shift from a culture of invisible dependence to one of balanced and aligned investments.

This is not (yet) a crisis. But it is a critical inflection point.

If we act now to evolve our models, creating room for participation, partnership, and shared responsibility, we can maintain the strength, stability, and accessibility of these systems for everyone.

Without action, the foundation beneath modern software will give way. With action — shared, aligned, and sustained — we can ensure these systems remain strong, secure, and open to all.

How You Can Help

While each ecosystem may adopt different approaches, there are clear ways for organizations and individuals to begin engaging now:

  • Show Up and Learn: Connect with the foundations and organizations that maintain the infrastructure you depend on. Understand their operational realities, funding models, and needs.
  • Align Usage with Responsibility: If your organization is a high-volume consumer, review your practices. Implement caching, reduce redundant traffic, and engage with stewards on how you can contribute proportionally (a minimal caching sketch follows this list).
  • Build With Care: If you create build tools, frameworks, or security products, consider how your defaults and behaviors impact public infrastructure. Reduce unnecessary requests, make proxy usage easier, and document best practices so your users can minimize their footprint.
  • Become a Financial Partner: Support foundations and projects directly, through membership, sponsorship, or by employing maintainers. Predictable funding enables proactive investment in security and scalability.
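
As one small illustration of the caching idea mentioned above, here is a hedged sketch of a local artifact cache keyed by URL, so repeated CI runs do not re-download the same file from a public registry. The paths and approach are illustrative only; in practice a caching proxy or the built-in cache features of your package manager or CI system are usually the better tool.

```python
# Sketch: keep a local on-disk copy of downloaded artifacts so repeated CI
# runs reuse them instead of hitting a public registry every time.
import hashlib
import pathlib
import urllib.request

CACHE_DIR = pathlib.Path(".artifact-cache")  # illustrative location

def fetch_cached(url: str) -> bytes:
    CACHE_DIR.mkdir(exist_ok=True)
    key = hashlib.sha256(url.encode()).hexdigest()
    cached = CACHE_DIR / key
    if cached.exists():
        return cached.read_bytes()              # served locally: no registry hit
    data = urllib.request.urlopen(url).read()   # one download, then reuse
    cached.write_bytes(data)
    return data
```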

Awareness is important, but awareness alone is not enough. These systems will only remain sustainable if those who benefit most also share in their support.

What’s Next

This open letter serves as a starting point, not a finish. As stewards of this shared infrastructure, we will continue to work together with foundations, governments, and industry partners to turn principles into practice. Each ecosystem will pursue the models that make sense in its own context, but all share the same direction: aligning responsibility with usage to ensure resilience.

Future changes may take various forms, ranging from new funding partnerships to revised usage policies to expanded collaboration with governments and enterprises. What matters most is that the status quo cannot hold.

We invite you to engage with us in this work: learn from the communities that maintain your dependencies, bring forward ideas, and be prepared for a world where sustainability is not optional but expected.

Signed by

Alpha-Omega

Eclipse Foundation (Open VSX)

OpenJS Foundation

Open Source Security Foundation (OpenSSF)

Packagist (Composer)

Python Software Foundation (PyPI)

Rust Foundation (crates.io)

Sonatype (Maven Central)

Organizational signatures indicate endorsement by the listed entity. Additional organizations may be added over time.

Acknowledgments: We thank the contributors from the above organizations and the broader community for their review and input.

What’s in the SOSS? Podcast #39 – S2E16 Racing Against Quantum: The Urgent Migration to Post-Quantum Cryptography with KeyFactor’s Crypto Experts


Summary

The quantum threat is real, and the clock is ticking. With government deadlines set for 2030, organizations have just five years to migrate their cryptographic infrastructure before quantum computers can break current RSA and elliptic curve systems.

In this episode of “What’s in the SOSS,” join host Yesenia as she sits down with David Hook (VP Software Engineering) and Tomas Gustavsson (Chief PKI Officer) from Keyfactor to break down post-quantum cryptography, from ELI5 explanations of quantum-safe algorithms to the critical importance of crypto agility and entropy. Learn why the financial sector and supply chain security are leading the charge, discover the hidden costs of migration planning, and find out why your organization needs to start inventory and testing now because once quantum computers arrive, it’s too late.

Conversation Highlights

00:00 Introduction
00:22 Podcast Welcome
00:01 – 01:22: Introductions and Setting the Stage
01:23 – 03:22: Post-Quantum 101 – The Quantum Threat Explained
03:23 – 06:38: Government Deadlines and Industry Readiness
06:39 – 09:14: Bouncy Castle’s Quantum-Safe Journey
09:15 – 10:46: The Power of Open Source Collaboration
10:47 – 13:32: Industry Sectors Leading the Migration
13:33 – 16:33: Planning Challenges and Crypto Agility
16:34 – 22:01: The Randomness Problem – Why Entropy Matters
22:02 – 26:44: Getting Started – Practical Migration Advice
26:45 – 28:05: Supply Chain and SBOMs
28:06 – 30:48: Rapid Fire Round
30:49 – 31:40: Final Thoughts and Call to Action

Transcript

Intro Music + Promo Clip (00:00)

Yesenia (00:21)

Hello and welcome to What’s in the SOSS, OpenSSF’s podcast where we talk to interesting people throughout the open source ecosystem, sharing their journey, experiences and wisdom. Soy Yesenia Yser, one of your hosts. And today we have a very special treat. I have David and Tomas from Keyfactor here to talk to us about post-quantum. Ooh, this is a hot topic. It was definitely one that was mentioned a lot at RSA and upcoming conferences.

Tomas, David I’ll hand it over to you. I’ll hand it over to Tomas – introduce yourself.

Tomas Gustavsson (00:56)

Okay, I’m Tomas Gustavsson, Chief PKI Officer at Keyfactor. And I’ve been a PKI nerd and geek, working with that, for 30 years now. I would call it applied cryptography. So as compared to David, I take what he does and build PKI and digital signature software with it.

David Hook (01:17)

And I’m David Hook. My official title is VP Software Engineering at Keyfactor, but primarily I’m responsible for the care and feeding of the Bouncy Castle cryptography APIs, which basically form the core of the cryptography that Keyfactor and other people’s products actually use.

Yesenia (01:35)

Very nice. And for those that aren’t aware, like myself, who are kind of new to post-quantum cryptography, could you explain, like I’m five, what that is for our audience?

David Hook (01:46)

So one of the issues, basically, with the progress that’s been made in quantum computers is that there’s a particular algorithm called Shor’s algorithm which enables people to break conventional PKI systems built around RSA and elliptic curve, which are the two most common algorithms being used today. The idea of the post-quantum cryptography effort is to develop and deploy algorithms which are not susceptible to attack from quantum computers before we actually have a quantum computer attacking us. Not that I’m expecting the first quantum computer to get out of a box and, you know, sort of run rampaging around the street with a knife or anything like that. But the reality is that good people and bad people will actually get access to quantum technology at about the same time. And it’s really the bad people we’re trying to protect people from.

Tomas Gustavsson (02:39)

Exactly. And since more or less the whole world as we know it runs on RSA and EC, that’s what makes it urgent and what has caused governments around the world to set timelines for the migration to post-quantum cryptography, or quantum-safe cryptography, as it’s also known.

David Hook (03:03)

Yeah, I was just gonna say that quantum safe is in some ways a better way of describing it. One of the issues that people have with the term post-quantum is that in the industry, a lot of people hear the word “post” and they think, I can put this off until later. But the reality is that’s not possible, because once there is a quantum computer that’s cryptographically relevant, it’s too late.

Yesenia (03:23)

So from what I’m hearing, it sounds like post-quantum cryptography is gaining urgency. And as we’re standardizing these milestones, including our government regulations, what are you seeing from your work with Bouncy Castle, EJBCA, and SignServer, and of course other important ecosystem players like our HSM vendors, as they’re getting ready for these PQC deployments?

David Hook (03:49)

So I guess the first thing is, from the government point of view, the deadline is actually 2030, which is only about five years away. That certainly has got people’s attention, and that includes in Australia, where I’m from. Now, what we’re seeing at the moment, of course, is that a lot of people are waiting for certified implementations. But we are now actually seeing people use pre-certified implementations in order to get some understanding of what the differences are between the post-quantum algorithms and the original RSA PKI algorithms that we’ve been using before. One of the issues, of course, is that the post-quantum algorithms require more resources. So the keys are generally bigger, the signature sizes are generally bigger, and payloads are generally bigger as well. And also, the mechanism for doing key transport in post-quantum relies on a system called a KEM, which is a key encapsulation mechanism. Key encapsulation mechanisms in usage are also slightly different to how RSA or Diffie-Hellman works, or elliptic curve Diffie-Hellman, which is what we’re currently used to using. So there’s going to have to be some adaptation there too. What we’re seeing, certainly at the Bouncy Castle level, is that there are a lot of people now starting to try new implementations of the protocols and everything they’re using, in order to find out what the scalability effects are, and also where there are these issues where they need to rephrase the way some processes are done, just because the algorithms no longer support the things they used to support.
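
To make the KEM point concrete, here is a small runnable sketch of the encapsulate/decapsulate calling pattern. It is deliberately not ML-KEM or any Bouncy Castle API: it builds a toy DHKEM-style flow on classical X25519 using the pyca/cryptography package, purely to show how the shape differs from RSA key transport, where the sender picks the secret. With a KEM, encapsulation produces both the ciphertext and the shared secret.

```python
# Toy DHKEM-style sketch (NOT ML-KEM, NOT a Bouncy Castle API): shows only the
# keygen / encapsulate / decapsulate calling pattern that replaces key transport.
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric.x25519 import (
    X25519PrivateKey,
    X25519PublicKey,
)
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def _kdf(shared: bytes, context: bytes) -> bytes:
    # Derive a fixed-length session key from the raw shared secret.
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"demo-kem" + context).derive(shared)

def generate_keypair():
    sk = X25519PrivateKey.generate()
    return sk, sk.public_key()

def encapsulate(receiver_pk: X25519PublicKey):
    """Sender side: returns (ciphertext, shared_secret); the secret is produced, not chosen."""
    eph = X25519PrivateKey.generate()
    ct = eph.public_key().public_bytes(          # "ciphertext" = ephemeral public key
        serialization.Encoding.Raw, serialization.PublicFormat.Raw)
    return ct, _kdf(eph.exchange(receiver_pk), ct)

def decapsulate(receiver_sk: X25519PrivateKey, ct: bytes) -> bytes:
    """Receiver side: recovers the same shared secret from the ciphertext."""
    eph_pk = X25519PublicKey.from_public_bytes(ct)
    return _kdf(receiver_sk.exchange(eph_pk), ct)

if __name__ == "__main__":
    sk, pk = generate_keypair()
    ct, ss_sender = encapsulate(pk)
    assert ss_sender == decapsulate(sk, ct)
```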

Tomas Gustavsson (05:24)

I think it’s definitely encouraging that things have moved quite a lot. Of course, the cryptographic community has worked on this for many, many years, and we’ve now moved on from “what can we do” to “when and how can we do it.” So that’s very encouraging. There are still a few final bits and pieces to be finished on the standardization front and the certifications, as David mentioned.

But things are, you know, dripping in one by one. For example, hardware security module, or HSM, vendors are coming in one by one. For the right kind of limited use cases today, selecting some ready vendors or open source projects, you can make things work today, which has really been, just in the last couple of months, a really big step forward, from planning to being able to execute.

Yesenia (06:27)

Very interesting. And we’ll jump over to Bouncy Castle. From my experience within the open source world, it’s been a trusted open source crypto library for a very long time. How do you approach supporting post-quantum algorithms while maintaining the trust and the interoperability? That’s a hard word for me.

David Hook (06:50)

Yeah, that’s all right. It’s not actually an easy operation to execute in real life either.

Yesenia (06:55)

Oh, so that works.

David Hook (06:57)

Yeah, so it works well. So with Bouncy Castle, what we were able to do is, our original set of post-quantum algorithms was based on round three of the NIST post-quantum competition. We actually got funding from the Australian government to work with a number of Australian universities to add those implementations, and one of the universities was also given funding to do formal validation on them. So one part of the process for us was, well, I guess there were three parts. One part was the implementation, which was done in Java and C#. In addition to that, we had somebody sit down and independently study the work that was done, to make sure that we hadn't introduced any obvious errors and to check for things like side channels, whether there were timing considerations that might have caused side channel leakage.

And then finally, of course, with the interoperability, we've been actively involved with organizations like the IETF and also the OpenSSL mission. That's allowed us to work with other open source projects and other vendors to determine that our certificates, for example, and our private keys and all that, have been encoded in a manner that actually allows them to be read and understood by the other vendors and other open source APIs. On top of that, we've also been active participants in working with NIST on the ACVP stuff, which is for algorithm validation testing, to make sure the actual implementations themselves are producing the correct results. And that's obviously something we've worked on across the IETF and the OpenSSL mission as well. So, you know, part of actually generating a certificate, of course, is that you've got to be able to verify the signature on it. That means you have to be able to understand the public key associated with it. That's one checkbox, and then the second one, of course, is that the signature, for example, makes sense too.
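As a rough illustration of those two checkboxes, the sketch below parses a certificate with the standard JCA classes and verifies it against its issuer: first, can we even interpret the issuer's public key, and second, does the signature check out? The file names are hypothetical, and registering Bouncy Castle is only an assumption about how post-quantum signature OIDs might get resolved, not a description of any particular vendor's setup.

```java
import java.io.FileInputStream;
import java.security.Security;
import java.security.cert.CertificateFactory;
import java.security.cert.X509Certificate;
import org.bouncycastle.jce.provider.BouncyCastleProvider;

public class VerifyCertSketch {
    public static void main(String[] args) throws Exception {
        // Register Bouncy Castle so less common (e.g. post-quantum) signature OIDs can be resolved.
        // This assumes a BC version that actually ships those algorithms.
        Security.addProvider(new BouncyCastleProvider());

        CertificateFactory cf = CertificateFactory.getInstance("X.509");
        X509Certificate cert;
        X509Certificate issuer;
        // Hypothetical file names for an end-entity certificate and its issuer.
        try (FileInputStream in = new FileInputStream("ee-cert.der")) {
            cert = (X509Certificate) cf.generateCertificate(in);
        }
        try (FileInputStream in = new FileInputStream("issuer-cert.der")) {
            issuer = (X509Certificate) cf.generateCertificate(in);
        }

        // Checkbox one: can we understand the issuer's public key at all?
        System.out.println("Issuer key algorithm: " + issuer.getPublicKey().getAlgorithm());

        // Checkbox two: does the signature on the certificate verify under that key?
        cert.verify(issuer.getPublicKey());   // throws if the signature does not check out
        System.out.println("Signature on " + cert.getSubjectX500Principal() + " verifies");
    }
}
```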

Yesenia (08:52)

So it sounds like there are a lot of layers to this that have to be checked off, and that's what gives it the foundation. Very nice.

Tomas Gustavsson (09:02)

I would say that what is so good about working in open source is that without collaboration we won't have a chance to meet these tight deadlines that governments are setting up. And the great thing in the open source community is that a lot of things are transparent and easy to test.

Bouncy Castle is released open source, and EJBCA and SignServer are released open source and early. Not only us, of course: other people can also start testing, grabbing OpenSSL or OQS from the Linux Foundation. You can test interoperability and verify it. And you actually do find bugs in these early tests, which is why I think open source is the foundation for being able to do this.

Yesenia (09:58)

Yeah, open source gives us that nice foundation. While we might have several years, I know the migration itself is going to take a while, especially trying to figure out how it's going to be done. So I just wanted to look into what remains of 2025 and, of course, beyond. You know, we're approaching a period where many organizations will need to start migrating, especially critical infrastructure and our software supply chains. What do you anticipate will be the most important post quantum cryptographic milestones or shifts this year?

Tomas Gustavsson (10:32)

Definitely, we see a lot of interest from specific sectors. As I said, supply chain security is a really big one, because that was the first, or definitely one of the first, anticipated use cases for post-quantum cryptography. If you cannot secure the supply chain, with over-the-air updates and those kinds of things, then you won't be in a good position to update or upgrade systems once a potential quantum computer is here. So everything about code signing and the software supply chain is a huge topic. And it's actually one of the areas where you will be able to do production usage; people are starting to plan and test production usage already, and some have actually already gone there.

Then there are industries like finance, which is encouraging, I guess, for all of us who have a bank that we work with. They are very early on the ball as well, planning for the huge, complex systems they are running, doing practical tests now, and moving from a planning phase into an implementation phase.

And then there are more, I would say, forward-looking things which are very long term. Telecom, for example, is looking to the next generation, like 6G, where they are planning in post-quantum cryptography from the beginning. So there's everything from right now, to what's happening in the coming years, to what's going to happen definitely past 2030. All of these things are ongoing.

Meanwhile, there is still, of course, a body of organizations and people out there who are completely ignorant, not in a bad way, right? They just haven't been reached by the news. There are a lot of things in this industry, so you can't keep track of everything.

Yesenia (12:43)

Right, they're potentially very unaware of what's to come, or even whether they're impacted.

Tomas Gustavsson (12:49)

Yes.

David Hook (12:50)

So the issue you run into of course for something like this is that it costs money. That tends to slow people down a bit.

Tomas Gustavsson (12:58)

Yeah, that's one thing: when people or organizations start planning, they run into these non-obvious things. As a developer, you just develop it, then someone integrates it, and it's going to work. But large organizations have to look into things like hardware depreciation periods, right? If they want to be ready by 2035 or 2030, they have to plan backwards to see when they can, at the earliest, start replacing hardware, whether it's routers or VPNs and those kinds of things, and when they need to procure new software or start updating and planning their updates, because all these things are typically multi-year cycles in larger organizations. That's why sectors like the financial industry are trying to start planning early. And of course, we as suppliers are kind of at the bottom of the food chain. We have to be ready early.

David Hook (14:02)

Actually, I guess there are a couple of areas where the money's got to get spent too. The first one, really, is that people need to properly understand what they're doing. It's surprising how many companies don't actually understand what algorithms or certificates they've got deployed. So people actually need to have their inventory in place.

The second thing, of course, which we'll probably talk about a couple of times, is just the issue of crypto agility. It's been a bit of a convention in the industry to bolt security on at the last minute, and we generally get away with it, although we don't necessarily produce the best results. But the difference between what we've seen in the past and now, where people really need to be designing crypto agile implementations, meaning that they can replace certificates, keys, even whole algorithms in their implementations, is that you really have to design a system to deal with that up front. And in the same way as we have disaster recovery testing, it's actually the kind of thing that needs to become part of your development testing as well. I was on a panel recently for NIST, and as one of the people on that panel pointed out, it's very easy to design something which is crypto agile in theory. But like most things, unless you actually try it and make sure it really does work, you won't find out that you've accidentally introduced a dependency on some old algorithm or something else that you're trying to get rid of.

So those are considerations that need to be made as well.
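As a small illustration of the crypto agility idea, here is one possible shape for it: the signing algorithm and key alias come from configuration rather than being hard-coded, so swapping in a different (for example post-quantum) algorithm becomes a configuration and key-rollover exercise instead of a code change. The class and property names are hypothetical; this is a sketch of the design principle, not a recommended or complete implementation, and, as David says, it only counts if the switch is actually exercised in testing.

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyStore;
import java.security.PrivateKey;
import java.security.Signature;
import java.util.Properties;

// Configuration-driven signing: nothing in the code names RSA, ECDSA, or a post-quantum
// algorithm directly, so the algorithm can be replaced without recompiling.
public class AgileSigner {
    private final String algorithm;   // e.g. "SHA256withRSA" today, a PQ signature name later
    private final PrivateKey key;

    public AgileSigner(Properties config, KeyStore keyStore, char[] keyPassword) throws Exception {
        // Hypothetical property names: the point is that the choice lives outside the code.
        this.algorithm = config.getProperty("signing.algorithm");
        String alias = config.getProperty("signing.keyAlias");
        this.key = (PrivateKey) keyStore.getKey(alias, keyPassword);
    }

    public byte[] sign(String message) throws Exception {
        Signature sig = Signature.getInstance(algorithm);
        sig.initSign(key);
        sig.update(message.getBytes(StandardCharsets.UTF_8));
        return sig.sign();
    }
}
```

The same idea extends to protocol negotiation and key rollover; the test suite should regularly run with the alternate algorithm configured, so a hidden hard-coded dependency shows up before a real migration forces the issue.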

Yesenia (15:43)

Seems like a lot to be considered, especially with the migration and just the bountiful information on post quantum as well. I want to shift gears a little bit, throw in some randomness, and talk about the importance of randomness. With many companies promoting things like QRNGs, and research revealing breakable encryption keys mostly due to weak entropy, can you talk about why entropy can be hard to get right and what failures it can lead to?

David Hook (16:20)

Yeah, entropy is great. You talk to any physicist, and usually what you'll find is they're spending all their time trying to get rid of the noise in their measurement systems. And of course, what they're after there is low entropy. What we want in cryptography, because we're computer scientists and we do everything backwards, is actually high entropy. So high entropy really gives you good quality keys.

That is to say, you can't predict what actual numbers or bit strings will appear in your keys. And if you can't predict them, then there's a pretty good chance nobody else can. That's the first thing. Of course, one slight difference, again because we're computer scientists and we like to make things a bit more difficult than they need to be sometimes, is that in cryptography we actually talk about conditioned entropy, which is what's defined in a recent NIST standard with the rather catchy name of SP 800-90B.

Yesenia (17:24)

Got you.

David Hook (17:25)

And that's become sort of, I guess, the current standard for how to do it properly, and it's been adopted across the globe by a number of countries. Now, one of the interesting things about this, of course, is that quantum effects are actually very good for generating lots of entropy. So we're now seeing people producing quantum random number generators. And the interesting thing about those is that they can provide a virtually infinite stream of entropy at high speed. This is good, because the other thing that we usually do to get entropy is rely on what's called opportunistic entropy.

So on a server, for example, you go: how fast is my disk going? Where am I getting blocks from? What's the operating system doing? How long is it taking the user to type something in? Is there network latency for this or that? All these sorts of operating system functions that are taking place. How long does it take me to scan a large amount of memory? These all contribute bits of randomness, really, because they're characteristic of that system and that system only.

The issue, of course, is that nowadays a lot of systems are on what you'd call virtual architectures. The actual machine that you're running on is a virtual machine, so it doesn't necessarily have access to all those hardware characteristics. And then there's the other problem, which is that when we do stuff fast now, we use high-speed RAM disks, gigabit ethernet, all this sort of stuff. Suddenly a lot of things that used to introduce random-ish delays are no longer doing that, because the hardware is running so fast and so hot, which is great for user response times, but for generating cryptographic keys, maybe not so nice. And this is really where the QRNGs, I think, are coming into their own at the moment, because they provide an independent way of producing entropy where the opportunistic schemes that we previously used are suddenly becoming ineffective.
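For readers who want to see what requesting conditioned randomness looks like from application code, here is a minimal sketch using the JDK's DRBG support (in the SP 800-90 family, available since Java 9). Whether the seed material behind it comes from opportunistic operating system sources or a dedicated hardware or QRNG source is a platform and provider configuration question that this code cannot see, which is exactly the gap David is describing.

```java
import java.security.DrbgParameters;
import java.security.SecureRandom;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

import static java.security.DrbgParameters.Capability.PR_AND_RESEED;

public class EntropySketch {
    public static void main(String[] args) throws Exception {
        // Ask for a DRBG with 256-bit strength and prediction resistance.
        // The quality of its output still depends on the seed source the platform provides.
        SecureRandom drbg = SecureRandom.getInstance("DRBG",
                DrbgParameters.instantiation(256, PR_AND_RESEED,
                        "entropy-sketch".getBytes()));

        // Use it to drive symmetric key generation.
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(256, drbg);
        SecretKey key = kg.generateKey();
        System.out.println("Generated " + key.getAlgorithm() + " key, "
                + key.getEncoded().length * 8 + " bits");
    }
}
```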

Tomas Gustavsson (19:34)

I might add that history is kind of littered with severe breakages due to entropy failures. We have everything from the Debian weakness, which we still suffer from even though it was ages ago, to the ROCA weakness, which led to the replacement of something like a hundred million smart cards a bunch of years ago. And there's still recent research showing that out on the internet there are breakable RSA keys in active certificates, typically because they were generated on a constrained device during the boot-up phase, before it had gathered enough entropy, so they become predictable. So there's a lot of bad history around this, and it's not obvious how to do it correctly. Typically you rely on the platform to give it to you.

But then, when the platform isn’t reliable enough, it fails.

David Hook (20:37)

And the interesting thing about that is, you know, the RSA keys that Tomas was talking about, you don't need a quantum computer to break them. I mean, it'd be nice to have one to break them with, because then you could claim you had a quantum computer. But the reality is you don't need to wait for a quantum computer; because of the poor choices that have been made around entropy, the keys are breakable now, using conventional computers. So yeah, entropy is important.

Yesenia (21:04)

The TL;DR: entropy is important. And we are heading towards the time of this migration. As we mentioned earlier, a lot of companies just might not be aware. They might not feel like they fall under this migration and these standards that are coming along. So I just wanted to see if y'all can share some practical advice: for organizations that are beginning their post-quantum journey, what are one or two steps that you'd recommend they take now?

Tomas Gustavsson (21:35)

I think, yep, some things we've touched on already, like this inventory. In order to migrate away from future-vulnerable cryptography, you have to know what you have and where you have it today. There are a bunch of ways to do that, and it's typically thought of as the first step, to allow you to do some planning for your migration. I mean, you can do technical testing as well. We're computer geeks here, so we like the testing.

While you're doing [unintelligible] and planning, you can test the obvious things that you already know you'll have to migrate. So there's a bunch of things you can do in parallel. And then, as I think I mentioned, you have to think backwards to realize that even though 2030 or 2035 doesn't sound like tomorrow, in a cryptographic migration scenario, or a software and hardware replacement cycle, it is virtually tomorrow. As they say, the best time to start was ten years ago, but the second best time to start is now.
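As a tiny example of the inventory step, the sketch below walks a Java keystore and lists each certificate's key algorithm, signature algorithm, and expiry, which is the kind of raw data a cryptographic inventory (or the CBOMs mentioned later in the conversation) is built from. The keystore path and password are hypothetical, and a real inventory would of course need to cover far more than a single keystore.

```java
import java.io.FileInputStream;
import java.security.KeyStore;
import java.security.cert.Certificate;
import java.security.cert.X509Certificate;
import java.util.Enumeration;

public class CryptoInventorySketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical keystore location and password.
        KeyStore ks = KeyStore.getInstance("PKCS12");
        try (FileInputStream in = new FileInputStream("keystore.p12")) {
            ks.load(in, "changeit".toCharArray());
        }

        Enumeration<String> aliases = ks.aliases();
        while (aliases.hasMoreElements()) {
            String alias = aliases.nextElement();
            Certificate cert = ks.getCertificate(alias);
            if (cert instanceof X509Certificate x509) {
                // Record the algorithms in use so quantum-vulnerable ones (RSA, EC) can be flagged.
                System.out.printf("%s: key=%s, signature=%s, notAfter=%s%n",
                        alias,
                        x509.getPublicKey().getAlgorithm(),
                        x509.getSigAlgName(),
                        x509.getNotAfter());
            }
        }
    }
}
```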

Yesenia (22:49)

I mean, it's four and a half years away.

David Hook (22:51)

Yeah, and we've still got people trying to get off SHA-1. It's just that those days are gone. The other thing, too, is that people need to spend a bit of time looking at this issue of crypto agility, because while the algorithms coming down the pipe at the moment have been quite well studied and well researched, it's not necessarily the case that they'll stay the algorithms we want to use. That might be because issues show up with them that weren't anticipated, and parameter sizes might need to be changed to make them more secure. Or, with all the ongoing research in the area of post-quantum algorithms, it may turn out that there are algorithms that are a lot more efficient or offer smaller key sizes or smaller signature sizes, which certain applications will want to migrate to quite quickly.

So, you know, imagine having a conversation with your boss where suddenly there's some algorithm that's going to make you twice as productive, and you have to explain that you've actually hard-coded the algorithm you're using. I don't think a conversation like that is going to go very well. So flexibility is required, but as I said, the flexibility needs to be designed into your system, and in the same way as you have disaster recovery testing, it needs to be tested before deployment so we can actually change the algorithms when we need to.

Tomas Gustavsson (24:14)

Yeah, we often say that since you're doing this migration work now, it's an opportunity to look at crypto agility. If you're changing something, make it crypto agile. And the same classic advice applies: if you rely on vendors, be they commercial or open source, ask them about their quantum readiness and when they're going to be ready. So you have to challenge everything, including us in our own community, right? Across the different open source projects, don't start building any new things which are not crypto agile or not prepared for quantum safe algorithms, and for the old stuff, actually plan. It's going to take some man hours to update it to be quantum safe in many cases, in most cases.

David Hook (25:10)

Yeah, don't be afraid to ask the people that are selling you stuff what their agility story is and what their quantum safe story is. I think all of us need to do that, and to respond to it.

Yesenia (25:21)

Yes, ask and respond. What would be the areas or organizations that folks, let's just say once they're aware, could go ahead and ask if they're getting started?

David Hook (25:30)

So internally, it's obviously your IT people. I would start by asking them, because they're the people at the coal face. And then, yeah, as Tomas said before, it's the vendors that you're working with, because this is one of the things about the whole supply chain: most of us, even in IT, are not using stuff that's all in-house. We've usually got other people somewhere in our supply chain responsible for the systems that we're making use of internally. So people need to be asking everyone. And likewise, your suppliers need to be following the same basic principle, making sure that in terms of how their supply chains work, there's the same coverage: what is the quantum safe story, how crypto agile are the systems, APIs, or products that have been given to them, and what is required to change the things that need to be changed.

Tomas Gustavsson (26:30)

Now this is a great use case for your SBOMs and CBOMs.

David Hook (26:34)

Exactly, their time has arrived.

Yesenia (26:36)

There you go. It has arrived. Time for the BOMs. For those unaware, I just learned about CBOMs; I work with AI SBOMs and SBOMs, and I just learned that CBOMs are cryptographic BOMs. So in case someone was like, what is a CBOM now? There you go. We dropped the bomb on you.

Let's move over now to our rapid fire part of the interview. I'll pose a few questions, and it's going to be whoever answers them first. Or if you both answer at the same time, we'll figure that out.

But our first question, Vim or Emacs?

David Hook (27:06)

Vim or Emacs? Vim! Good answer. I didn't even know that was a question. I thought it was a joke. I'm sorry, I'm very old school.

Tomas Gustavsson (27:19)

I was totally Emacs 20 years ago.

Yesenia (27:22)

You know, we just had to start with the first one throwing you off a little bit. Make sure you're awake, make sure I'm awake. I know we're in very different time zones, but…

David Hook (27:29)

I was using VI in 1980. And I’ve never looked back.

Yesenia (27:33)

Our next one is Marvel or DC?

David Hook (27:36)

Yeah, what superheroes do I prefer? Oh yeah. I'm really more a Godzilla person. You know, Mothra, Station Universe for Love, that kind of thing. Yeah. I don't know if Marvel or DC has really captured that for me yet.

Tomas Gustavsson (27:56)

Yeah, I remember Zelda. That was the hero as well. That was in the early 90s, maybe the 80s even.

David Hook (28:05)

Yeah. There you go. Sorry.

Yesenia (28:07)

There you go. No, it's OK. You got to answer. Sweet or sour?

Tomas Gustavsson (28:10)

Sour.

David Hook (28:11)

Yeah, I’d go sour.

Yesenia (28:12)

Sour. Favorite adult beverage?

Tomas Gustavsson (28:18)

Alcohol.

David Hook (28:22)

Probably malt whiskey, if I was going to be specific. But I have been known to act more broadly, as Tomas has indicated, so probably a more neutral answer.

Yesenia (28:35)

Tomas is like, skip the flavor, just throw in the alcohol.

Tomas Gustavsson (28:40)

Well, I think it has to be good, but it usually involves alcohol in some form or the other.

Yesenia (28:47)

Love it. Last one. Lord of the Rings or Game of Thrones?

David Hook (28:52)

Lord of the Rings. I have absolutely no doubt.

Tomas Gustavsson (28:55)

I have to agree on that one.

Yesenia (28:57)

There you go, there you have it folks, another rapid fire. Gentlemen, any last minute advice or thoughts that you want to leave with the audience?

David Hook (29:05)

Start now.

Tomas Gustavsson (29:07)

And for us, if you’re a computer geek, this is fun. So don’t miss out on the chance to have some fun.

David Hook (29:16)

Yeah, we pride ourselves on our ability to solve problems. So now is a good time to shine.

Yesenia (29:22)

There you have it. It's time to start now and start with the fun. Thank you both so much for your time today, and for your impact and contributions to our communities and to those in our community helping drive these efforts forward. I look forward to seeing your efforts in 2025. Thank you.

David Hook & Tomas Gustavsson (29:41)

Thank you. Thank you.