

Over the past two decades, open source has revolutionized the way software is developed. Every modern application, whether written in Java, JavaScript, Python, Rust, PHP, or beyond, depends on public package registries like Maven Central, PyPI, crates.io, Packagist, and Open VSX to retrieve, share, and validate dependencies. These registries have become foundational digital infrastructure – not just for open source, but for the global software supply chain.
Beyond package registries, open source projects also rely on essential systems for building, testing, analyzing, deploying, and distributing software. These also include content delivery networks (CDNs) that offer global reach and performance at scale, along with donated (usually cloud) computing power and storage to support them.
And yet, for all their importance, most of these systems operate under a dangerously fragile premise: They are often maintained, operated, and funded in ways that rely on goodwill, rather than mechanisms that align responsibility with usage.
Despite serving billions (perhaps even trillions) of downloads each month (largely driven by commercial-scale consumption), many of these services are funded by a small group of benefactors. Sometimes they are supported by commercial vendors, such as Sonatype (Maven Central), GitHub (npm) or Microsoft (NuGet). At other times, they are supported by nonprofit foundations that rely on grants, donations, and sponsorships to cover their maintenance, operation, and staffing.
Regardless of the operating model, the pattern remains the same: a small number of organizations absorb the majority of infrastructure costs, while the overwhelming majority of large-scale users, including commercial entities that generate demand and extract economic value, consume these services without contributing to their sustainability.
Not long ago, maintaining an open source project meant uploading a tarball from your local machine to a website. Today, expectations are very different:
These expectations come with real costs in developer time, bandwidth, computing power, storage, CDN distribution, operations, and emergency response support. Yet, across ecosystems, most organizations that benefit from these services do not contribute financially, leaving a small group of stewards to carry the burden.
Automated CI systems, large-scale dependency scanners, and ephemeral container builds, which are often operated by companies, place enormous strain on infrastructure. These commercial-scale workloads often run without caching, throttling, or even awareness of the strain they impose. The rise of Generative and Agentic AI is driving a further explosion of machine-driven, often wasteful automated usage, compounding the existing challenges.
The illusion of “free and infinite” infrastructure encourages wasteful usage.
In many cases, public registries are now used to distribute not only open source libraries but also proprietary software, often as binaries or software development kits (SDKs) packaged as dependencies. These projects may have an open source license, but they are not functional except as part of a paid product or platform.
For the publisher, this model is efficient. It provides the reliability, performance, and global reach of public infrastructure without having to build or maintain it. In effect, public registries have become free global CDNs for commercial vendors.
We don’t believe this is inherently wrong. In fact, it’s somewhat understandable and speaks to the power of the open source development model. Public registries offer speed, global availability, and a trusted distribution infrastructure already used by their target users, making it sensible for commercial publishers to gravitate toward them. However, it is essential to acknowledge that this was not the original intention of these systems. Open source packaging ecosystems were created to support the distribution of open, community-driven software, not as a general-purpose backend for proprietary product delivery. If these registries are now serving both roles, and doing so at a massive scale, that’s fine. But it also means it’s time to bring expectations and incentives into alignment.
Commercial-scale use without commercial-scale support is unsustainable.
Open source infrastructure cannot be expected to operate indefinitely on unbalanced generosity. The real challenge is creating sustainable funding models that scale with usage, rather than relying on informal and inconsistent support.
There is a difference between:
Today, that distinction is often blurred. Open source infrastructure, whether backed by companies or community-led foundations, faces rising demands, fueled by enterprise-scale consumption, without reliable mechanisms to scale funding accordingly. Documented examples demonstrate how this imbalance drives ecosystem costs, highlighting the real-world consequences of an illusion that all usage is free and unlimited.
For foundations in particular, this challenge can be especially acute. Many are entrusted with running critical public services, yet must do so through donor funding, grants, and time-limited sponsorships. This makes long-term planning difficult and often limits their ability to invest proactively in staffing, supply chain security, availability, and scalability. Meanwhile, many of these repositories are experiencing exponential growth in demand, while the growth in sponsor support is at best linear, posing a challenge to the financial stability of the nonprofit organizations managing them.
At the same time, the long-standing challenge of maintainer funding remains unresolved. Despite years of experiments and well-intentioned initiatives, most maintainers of critical projects still receive little or no sustained support, leaving them to shoulder enormous responsibility in their personal time. In many cases, these same underfunded projects are supported by the very foundations already carrying the burden of infrastructure costs. In others, scarce funds are diverted to cover the operational and staffing needs of the infrastructure itself.
If we were able to bring greater balance and alignment between usage and funding of open source infrastructure, it would not only strengthen the resilience of the systems we all depend on, but it would also free up existing investments, giving foundations more room to directly support the maintainers who form the backbone of open source.
Billion-dollar ecosystems cannot stand on foundations built of goodwill and unpaid weekends.
It is time to adopt practical and sustainable approaches that better align usage with costs. While each ecosystem will adopt the approaches that make the most sense in its own context, the need for action is universal. These are the areas where action should be explored:
These are not radical ideas. They are practical, commonsense measures already used in other shared systems, such as Internet bandwidth and cloud computing. They keep open infrastructure accessible while promoting responsibility at scale.
Sustainability is not about closing access; it’s about keeping the doors open and investing for the future.
We are proud to operate the infrastructure and systems that power the open source ecosystem and modern software development. These systems serve developers in every field, across every industry, and in every region of the world.
But their sustainability cannot continue to rely solely on a small group of donors or silent benefactors. We must shift from a culture of invisible dependence to one of balanced and aligned investments.
This is not (yet) a crisis. But it is a critical inflection point.
If we act now to evolve our models, creating room for participation, partnership, and shared responsibility, we can maintain the strength, stability, and accessibility of these systems for everyone.
Without action, the foundation beneath modern software will give way. With action — shared, aligned, and sustained — we can ensure these systems remain strong, secure, and open to all.
While each ecosystem may adopt different approaches, there are clear ways for organizations and individuals to begin engaging now:
Awareness is important, but awareness alone is not enough. These systems will only remain sustainable if those who benefit most also share in their support.
This open letter serves as a starting point, not a finish. As stewards of this shared infrastructure, we will continue to work together with foundations, governments, and industry partners to turn principles into practice. Each ecosystem will pursue the models that make sense in its own context, but all share the same direction: aligning responsibility with usage to ensure resilience.
Future changes may take various forms, ranging from new funding partnerships to revised usage policies to expanded collaboration with governments and enterprises. What matters most is that the status quo cannot hold.
We invite you to engage with us in this work: learn from the communities that maintain your dependencies, bring forward ideas, and be prepared for a world where sustainability is not optional but expected.
Signed by
Alpha-Omega
Continuous Delivery Foundation
Eclipse Foundation (Open VSX)
OpenJS Foundation
Open Source Security Foundation (OpenSSF)
Packagist (Composer)
Perl and Raku Foundation
Python Software Foundation (PyPI)
Rust Foundation (crates.io)
Sonatype (Maven Central)
Organizational signatures indicate endorsement by the listed entity. Additional organizations may be added over time.
Acknowledgments: We thank the contributors from the above organizations and the broader community for their review and input.
The quantum threat is real, and the clock is ticking. With government deadlines set for 2030, organizations have just five years to migrate their cryptographic infrastructure before quantum computers can break current RSA and elliptic curve systems.
In this episode of “What’s in the SOSS,” join host Yesenia as she sits down with David Hook (VP Software Engineering) and Tomas Gustavsson (Chief PKI Officer) from Keyfactor to break down post-quantum cryptography, from ELI5 explanations of quantum-safe algorithms to the critical importance of crypto agility and entropy. Learn why the financial sector and supply chain security are leading the charge, discover the hidden costs of migration planning, and find out why your organization needs to start inventory and testing now because once quantum computers arrive, it’s too late.
00:00 Introduction
00:22 Podcast Welcome
00:01 – 01:22: Introductions and Setting the Stage
01:23 – 03:22: Post-Quantum 101 – The Quantum Threat Explained
03:23 – 06:38: Government Deadlines and Industry Readiness
06:39 – 09:14: Bouncy Castle’s Quantum-Safe Journey
09:15 – 10:46: The Power of Open Source Collaboration
10:47 – 13:32: Industry Sectors Leading the Migration
13:33 – 16:33: Planning Challenges and Crypto Agility
16:34 – 22:01: The Randomness Problem – Why Entropy Matters
22:02 – 26:44: Getting Started – Practical Migration Advice
26:45 – 28:05: Supply Chain and SBOMs
28:06 – 30:48: Rapid Fire Round
30:49 – 31:40: Final Thoughts and Call to Action
Intro Music + Promo Clip (00:00)
Yesenia (00:21)
Hello and welcome to What’s in the SOSS, OpenSSF’s podcast where we talk to interesting people throughout the open source ecosystem, sharing their journey, experiences and wisdom. Soy Yesenia Yser, one of your hosts. And today we have a very special treat. I have David and Tomas from Keyfactor here to talk to us about post-quantum. Ooh, this is a hot topic. It was definitely one that was mentioned a lot at RSA and upcoming conferences.
Tomas, David, I’ll hand it over to you. I’ll hand it over to Tomas – introduce yourself.
Tomas Gustavsson (00:56)
Okay, I’m Tomas Gustavsson, Chief PKI Officer at Keyfactor. And I’ve been a PKI nerd and geek, working with that for 30 years now. I would call it applied cryptography. So as compared to David, I take what he does and build PKI and digital signature software with it.
David Hook (01:17)
And I’m David Hook. My official title is VP Software Engineering at Keyfactor, but primarily I’m responsible for the care and feeding of the Bouncy Castle cryptography APIs, which basically form the core of the cryptography that Keyfactor and other people’s products actually use.
Yesenia (01:35)
Very nice. And for those that aren’t aware, like myself, who are kind of new to post-quantum cryptography, could you explain, like I’m five, what that is for our audience?
David Hook (01:46)
So one of the issues basically with the progress that’s been made in quantum computers is that there’s a particular algorithm called Shor’s algorithm which enables people to break conventional PKI systems built around RSA and Elliptic-Curve, which are the two most common algorithms being used today. The idea of the post-quantum cryptography effort is to develop and deploy algorithms which are not susceptible to attack from quantum computers before we actually have a quantum computer attacking us. Not that I’m expecting the first quantum computer to get out of a box, well, you know, sort of run rampaging around the street with a knife or anything like that. But the reality is that good people and bad people will actually get access to quantum technology at about the same time. And it’s really the bad people we’re trying to protect people from.
Tomas Gustavsson (02:39)
Exactly, and since more or less the whole world as we know it runs on RSA and EC, that’s what makes it urgent and what has caused governments around the world to set timelines for the migration to post-quantum cryptography, or quantum-safe cryptography as it’s also known.
David Hook (03:03)
Yeah, I was just gonna say that quantum safe is probably in some ways a better way of describing it. One of the issues that people have with the term post-quantum is that in the industry, a lot of people hear the word post and they think, I can put this off until later. But yeah, the reality is that’s not possible, because once there is a quantum computer that’s cryptographically relevant, it’s too late.
Yesenia (03:23)
So from what I’m hearing, it sounds like post-quantum cryptography is gaining urgency. And as we’re standardizing these milestones, including our government regulations, what are you seeing from your work with Bouncy Castle, EJBCA, and SignServer? And of course, other important ecosystem players like our HSM vendors as they’re getting ready for these PQC deployments.
David Hook (03:49)
So I guess the first thing is, from the government point of view, the deadline is actually 2030, which is only about five years away. That certainly has got people’s attention. And that includes in Australia, where I’m from. Now, what we’re seeing at the moment, of course, is that a lot of people are waiting for certified implementations. But we are now actually seeing people use pre-certified implementations in order to get some understanding of what the differences are between the quantum algorithms, the post-quantum algorithms rather, and the original RSA PKI algorithms that we’ve been using before. One of the issues of course is that the post-quantum algorithms require more resources. So the keys are generally bigger, the signature sizes are generally bigger, payloads are generally bigger as well. And also the mechanism for doing key transport in post quantum relies on a system called a KEM, which is a key encapsulation mechanism. Key encapsulation mechanisms are also slightly different in usage to how RSA or Diffie-Hellman, or elliptic-curve Diffie-Hellman, works, which is what we’re currently used to using. So there’s going to have to be some adaptation in that too. What we’re seeing, certainly at the Bouncy Castle level, is that a lot of people are now starting to try new implementations of the protocols and everything they’re using in order to find out what the scalability effects are and also where there are these issues where they need to rephrase the way some processes are done, just because the algorithms no longer support the things they used to support.
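For readers who want to see the key-transport difference David describes in code, here is a minimal, hypothetical sketch of a KEM round trip using the standard Java KEM API (available since JDK 21). The "ML-KEM" algorithm name is an assumption: it resolves only if your runtime or an installed provider registers it (for example, JDK 24+ or a recent Bouncy Castle release), and parameter-set handling varies by version.

```java
import javax.crypto.KEM;
import javax.crypto.SecretKey;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.util.Arrays;

public class MlKemSketch {
    public static void main(String[] args) throws Exception {
        // Assumption: the runtime registers an "ML-KEM" implementation
        // (e.g. JDK 24+ or a recent Bouncy Castle provider).
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("ML-KEM");
        KeyPair recipient = kpg.generateKeyPair();

        // Sender: the KEM derives a fresh shared secret against the recipient's
        // public key. Unlike RSA key transport, the sender does not choose the
        // secret and "encrypt" it; it only gets an encapsulation to send.
        KEM kem = KEM.getInstance("ML-KEM");
        KEM.Encapsulated enc = kem.newEncapsulator(recipient.getPublic()).encapsulate();
        byte[] wire = enc.encapsulation();   // goes on the wire (larger than an RSA blob)
        SecretKey senderSecret = enc.key();  // stays with the sender

        // Recipient: decapsulate the wire bytes to recover the same secret.
        SecretKey recipientSecret = kem.newDecapsulator(recipient.getPrivate()).decapsulate(wire);

        System.out.println("secrets match: "
                + Arrays.equals(senderSecret.getEncoded(), recipientSecret.getEncoded()));
    }
}
```

This is only a sketch of the API shape, not how Bouncy Castle or any particular product wires KEMs into a protocol.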
Tomas Gustavsson (05:24)
I think it’s definitely encouraging that things have moved quite a lot. Of course, the cryptographic community has worked on this for many, many years, and we’ve now moved on from, you know, what can we do, to when and how can we do it? So that’s very encouraging. There are still a few final bits and pieces to be finished on the standardization front and the certifications, as David mentioned.
But things are, you know, dripping in one by one. For example, hardware security modules, or HSM vendors, are coming in one by one. For the right kind of limited use cases today, selecting, you know, certain ready vendors or open source projects, you can make things work today, which has really been, just in the last couple of months, a really big step forward from planning to being able to execute.
Yesenia (06:27)
Very interesting. And we’ll jump over to, like, Bouncy Castle. From my experience within the open source world, it’s been a trusted open source crypto library for a very long time. How do you approach supporting post-quantum algorithms while maintaining the trust and the interoperability? That’s a hard word for me.
David Hook (06:50)
Yeah, that’s all right. It’s not actually an easy operation to execute in real life either.
Yesenia (06:55)
Oh, so that works.
David Hook (06:57)
Yeah, so it works well. So with Bouncy Castle, what we were able to do is, our original set of post-quantum algorithms was based on round three of the NIST post-quantum competition. And we actually got funding from the Australian government to work with a number of Australian universities to add those implementations, and also one of the universities was given funding to do formal validation on them as well. So one part of the process for us was, well, I guess there were three parts: one part was the implementation, which was done in Java and C#, and then in addition to that we had somebody sit down and actually study the work that was done independently to make sure that we hadn’t introduced any errors that were obvious, and to check for things like side channels and whether there were timing considerations that might have caused side-channel leakage.
And then finally, of course, with the interoperability, we’ve been actively involved with organizations like the IETF and also the OpenSSL mission. And that’s allowed us to work with other open source projects and also other vendors to determine that our certificates, for example, and our private keys and all that have been encoded in a manner that actually allows them to be read and understood by the other vendors and other open source APIs. And on top of that, we’ve also been active participants in working with NIST on the ACVP stuff, which is for algorithm validation testing, to make sure the actual implementations themselves are producing the correct results. And that’s obviously something that we’ve worked on across the IETF and OpenSSL mission as well. So, you know, part of actually generating a certificate, of course, is you’ve got to be able to verify the signature on it. So that means you have to be able to understand the public key associated with it. That’s one checkbox, and then the second one, of course, is that the signature, for example, makes sense too.
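As an illustration of the two “checkboxes” David mentions, the sketch below parses two certificates, decodes the issuer’s public key, and checks that the signature on the end-entity certificate verifies. It uses only standard java.security.cert APIs; the file names are hypothetical, and verifying a post-quantum signature algorithm additionally assumes a provider that implements it (such as a recent Bouncy Castle release) is installed.

```java
import java.io.FileInputStream;
import java.io.InputStream;
import java.security.cert.CertificateFactory;
import java.security.cert.X509Certificate;

public class CertInteropCheck {
    public static void main(String[] args) throws Exception {
        CertificateFactory cf = CertificateFactory.getInstance("X.509");

        // Hypothetical file names: certificates produced by another vendor's
        // toolchain, used here purely to exercise cross-implementation parsing.
        X509Certificate issuer = load(cf, "issuer.pem");
        X509Certificate endEntity = load(cf, "end-entity.pem");

        // Checkbox 1: can we decode the (possibly post-quantum) public key?
        System.out.println("issuer key algorithm: " + issuer.getPublicKey().getAlgorithm());

        // Checkbox 2: does the signature on the end-entity certificate verify?
        // verify() throws if the algorithm is unsupported or the signature is bad.
        endEntity.verify(issuer.getPublicKey());
        System.out.println("verified signature algorithm: " + endEntity.getSigAlgName());
    }

    private static X509Certificate load(CertificateFactory cf, String path) throws Exception {
        try (InputStream in = new FileInputStream(path)) {
            return (X509Certificate) cf.generateCertificate(in);
        }
    }
}
```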
Yesenia (08:52)
So, it sounds like there are a lot of layers to this that have to be kind of checked off, and that gives it the foundation for this. Very nice.
Tomas Gustavsson (09:02)
I would say that what is so good about working in open source is that without collaboration we won’t have a chance to meet these tight deadlines that governments are setting up. And the great thing in the open source community is that a lot of things are transparent and easy to test.
Bouncy Castle is released open source, EJBCA and SignServer are released open source and early. Not only us, of course, but other people can also start testing, grabbing OpenSSL or OQS from the Linux Foundation. You can test interoperability and verify it. And actually, you do find bugs in these early tests, which is why I think open source is the foundation to being able to do this.
Yesenia (09:58)
Yeah, open source gives us that nice foundation. While we might have several years, I know the migration itself is going to take a while, especially trying to figure out how it’s going to be done. So I just wanted to look into what remains of 2025 and, of course, beyond. You know, we’re approaching a period where many organizations will need to start migrating, especially the critical infrastructure and our software supply chains. What do you anticipate will be the most important post-quantum cryptographic milestones or shifts this year?
Tomas Gustavsson (10:32)
Definitely, we see a lot of interest from specific sectors. As I said, supply chain security is a really big one, because that was also, let’s say, the first, or definitely one of the first, anticipated use cases for post-quantum cryptography. Because if you cannot secure the supply chain with over-the-air updates and those kinds of things, then you won’t be in a good position to update or upgrade systems once a potent quantum computer is here. So everything about code signing and the software supply chain is a huge topic. And it’s actually one of the ones where you will be able to do production usage, or people are starting to plan and test production usage already, or some actually have already gone there.
Then there are industries like the finance industry, which is encouraging, I guess, for us all who have a bank that we work with, that they are very early on the ball as well, planning for the huge, complex systems they are running, actually doing practical tests now, and moving from a planning phase into an implementation phase.
And then there are more, I would say, forward-looking things which are, you know, very long term, like telecom looking to the next generation, like 6G, where they are planning in post-quantum cryptography from the beginning. So there’s everything from, you know, right now to what’s happening in the coming years and what’s going to happen, you know, definitely past 2030. So all of these things are ongoing.
While there is still, of course, a body of organizations and people out there who are completely ignorant, not in a bad way, right? They just haven’t been reached by the news. There are a lot of things in this industry, so you can’t keep track of everything.
Yesenia (12:43)
Right, they’re very unaware potentially of what’s to come or even if they’re impacted.
Tomas Gustavsson (12:49)
Yes.
David Hook (12:50)
So the issue you run into of course for something like this is that it costs money. That tends to slow people down a bit.
Tomas Gustavsson (12:58)
Yeah, that’s one thing: when people or organizations start planning, they run into these non-obvious things. As a developer, you just develop it, and then someone integrates it and it’s going to work. But large organizations have to look into things like hardware depreciation periods, right? If they want to be ready by 2035 or 2030, they have to plan backwards to see when they can earliest start replacing hardware, whether it’s routers or VPNs and these kinds of things, and when they need to procure new software or start updating and planning their updates, because all these things are typically multi-year cycles in larger organizations. And that’s why things like the financial industry are trying to start to plan early. And of course, we as suppliers are kind of at the bottom of the food chain. We have to be ready early.
David Hook (14:02)
Actually, I guess there are a couple of areas where the money’s got to get spent too. So the first one really is that people need to properly understand what they’re doing. It’s surprising how many companies don’t actually understand what algorithms or certificates they’ve got deployed. So people actually need to have their inventory in place.
The second thing, of course, that we’ll probably talk about a couple of times is just the issue of crypto agility. It’s been a bit of a convention in the industry to bolt security on at the last minute. And we generally get away with it, although we don’t necessarily produce the best results. But the difference between what we’ve seen in the past and now, where people really need to be designing crypto agile implementations, meaning that they can replace certificates, keys, even whole algorithms in their implementations, is that you really have to design a system to deal with that upfront. And in the same way as we have disaster recovery testing, it’s actually the kind of thing that needs to become part of your development testing as well. Because I was on a panel recently for NIST, and as one of the people on that panel pointed out, it’s very easy to design something which is crypto agile in theory. But like most things, unless you actually try and make sure that it really does work, you won’t find out that you’ve accidentally introduced a dependency on some old algorithm or something that you’re trying to get rid of.
So there’s those considerations as well that need to be made.
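One way to read the crypto agility point in code: keep algorithm names out of the source, so that swapping certificates, keys, or whole algorithms is a configuration-and-retest exercise rather than a code change. The sketch below is a minimal illustration under that assumption; the property names and the "ML-DSA" mention are hypothetical and depend on what your JDK or provider actually supports.

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public class AgileSigner {
    private final String keyAlgorithm;        // e.g. "EC" today, possibly "ML-DSA" later
    private final String signatureAlgorithm;  // e.g. "SHA256withECDSA" today

    public AgileSigner(String keyAlgorithm, String signatureAlgorithm) {
        this.keyAlgorithm = keyAlgorithm;
        this.signatureAlgorithm = signatureAlgorithm;
    }

    public byte[] signSample(byte[] message) throws Exception {
        // Both names come from configuration, so migrating algorithms means
        // changing config and re-running the tests, not editing this class.
        KeyPair keyPair = KeyPairGenerator.getInstance(keyAlgorithm).generateKeyPair();
        Signature signer = Signature.getInstance(signatureAlgorithm);
        signer.initSign(keyPair.getPrivate());
        signer.update(message);
        return signer.sign();
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical property names, read from configuration rather than hard-coded.
        String keyAlg = System.getProperty("sig.key.alg", "EC");
        String sigAlg = System.getProperty("sig.alg", "SHA256withECDSA");
        byte[] sig = new AgileSigner(keyAlg, sigAlg).signSample("hello".getBytes());
        System.out.println(sigAlg + " signature is " + sig.length + " bytes");
    }
}
```

As David notes, the configuration switch only matters if the swapped-in algorithm is actually exercised in testing, the same way disaster recovery plans are exercised.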
Yesenia (15:43)
Seems like a lot to be considered, especially with the migration and just the bountiful information on post quantum as well. I want to shift gears just a little bit and just throw in some randomness and talk about the importance of randomness. It’s a topic where many companies are promoting things like QRNGs, and research keeps revealing breakable encryption keys, mostly due to weak entropy. Can you talk about why entropy can be hard to understand and what failures depend on it?
David Hook (16:20)
Yeah, entropy is great. You talk to any physicist and usually what you’ll find out is they’re spending all their time trying to get rid of the noise in their measurement systems. And of course, what they’re talking about there is low entropy. What we want, of course, in cryptography, because we’re computer scientists, we do everything backwards, we actually are looking for high entropy. So high entropy really gives you good quality keys.
That is to say that you can’t predict what actual numbers or bit strings will actually appear in your keys. And if you can’t predict them, then there’s a pretty good chance nobody else can. That’s the first thing. Of course, one slight difference, again, because we’re computer scientists and we like to make things a bit more difficult than they need to be sometimes, is that in cryptography we actually talk about conditioned entropy, which is what’s defined in a recent NIST standard, which has got the rather catchy name of SP 800-90B.
Yesenia (17:24)
Got you.
David Hook (17:25)
And that’s become sort of, I guess, the current standard for how to do it properly, and that’s been adopted across the globe by a number of countries. Now, one of the interesting things about this, of course, is that quantum effects are actually very good for generating lots of entropy. So we’re now seeing people actually producing quantum random number generators. And the interesting thing about those is that they can provide virtually an infinite stream of entropy at high speed. This is good, because the other thing that we usually do to get entropy is rely on what’s called opportunistic entropy.
So on a server, for example, you go, you know, how fast is my disk going? Where am I getting blocks from? You know, what’s the operating system doing? How long is it taking the user to type something in? Is there network latency for this or that? All these operating system functions that are taking place, how long does it take me to scan a large amount of memory? These all contribute bits of randomness, really, because they’re characteristic of that system and that system only.
The issue, of course, that we’ve got is that nowadays a lot of systems are on what you call virtual architectures. So the actual machine that you’re running on is a virtual machine, and so it doesn’t necessarily have all those hardware characteristics that it can get access to. And then there’s the other problem, you know, which is that when we do stuff fast now, we use high-speed RAM disks, gigabit Ethernet, all this sort of stuff. And suddenly a lot of things that used to introduce random-ish sorts of delays are no longer doing that, because the hardware is running so fast and so hot, which is great for user response times, but for generating cryptographic keys, maybe not so nice. And this is really where the QRNGs, I think, at the moment are coming into their own, because they provide an independent way of actually producing entropy that the opportunistic schemes we previously used are suddenly becoming ineffective for.
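For Java developers, a small sketch of the practical takeaway: be deliberate about where long-lived key material gets its randomness. This shows only the standard SecureRandom API; a QRNG or other hardware entropy source would typically be surfaced through the operating system or a dedicated provider rather than through application code, so treat the comments as assumptions about a typical setup.

```java
import java.security.SecureRandom;

public class EntropySketch {
    public static void main(String[] args) throws Exception {
        // getInstanceStrong() maps to the platform's designated "strong" source
        // (on Linux typically backed by /dev/random or a DRBG seeded from it).
        // On an entropy-starved host, e.g. a freshly booted VM, it may block
        // rather than hand out poorly seeded output.
        SecureRandom strong = SecureRandom.getInstanceStrong();

        // The default SecureRandom is a fast, non-blocking DRBG. As discussed
        // above, the risk on virtual hardware is in the seeding (opportunistic
        // entropy), not in the DRBG algorithm itself.
        SecureRandom everyday = new SecureRandom();

        byte[] keyMaterial = new byte[32];
        strong.nextBytes(keyMaterial);   // prefer the strong source for long-lived keys

        byte[] nonce = new byte[12];
        everyday.nextBytes(nonce);       // fine for per-message nonces and IVs

        System.out.println("generated " + keyMaterial.length + " bytes of key material");
    }
}
```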
Tomas Gustavsson (19:34)
I might add in there that history is kind of littered with severe breakages due to entropy failures. We have everything from the Debian weakness, which we still suffer from even though it was ages ago. We had the ROCA weakness, which led to the replacement of like a hundred million smart cards a bunch of years ago. And there’s still research, you know, recent research, that shows that out on the internet there are breakable RSA keys in certificates which are active, typically due to being generated maybe on a constrained device during the boot-up phase, where it hadn’t gathered enough entropy yet. So it becomes predictable. So there’s a lot of bad history around this, and it’s not obvious how to do it correctly. Typically you rely on the platform to give it to you.
But then, when the platform isn’t reliable enough, it fails.
David Hook (20:37)
And the interesting thing about that is that, you know, the RSA keys that Tomas was talking about, you don’t need a quantum computer to break them. I mean, it’d be nice to have one to break them with, because then you could claim you had a quantum computer. But the reality is you don’t need to wait for a quantum computer. Because of the poor choices that have been made around entropy, the keys are breakable now, using conventional computers. So yeah, entropy is important.
Yesenia (21:04)
The TL;DR: entropy is important. And we are heading towards that time of this migration and stuff. As we mentioned earlier, a lot of companies just might not be aware. They might not feel like they fall under this migration and these standards that are coming along. So I just wanted to see if y’all can share some practical advice: for organizations that are beginning their post-quantum journey, what are one or two steps that you’d recommend that they take now?
Tomas Gustavsson (21:35)
I think, yep, some things we touched on already, like this inventory. So in order to migrate away from future vulnerable cryptography, you have to know what you have and where you have it today. And there’s a bunch of ways to do that. And it’s typically thought of as kind of the first step, in order to allow you to do some planning for your migration. I mean, you can do technical testing as well. We’re computer geeks here, so we like the testing.
While you’re doing [unintelligible] and planning, you can test the obvious things that you know already, that you know you’ll have to migrate. So there’s a bunch of things you can do in parallel. And then, as I think I mentioned, you have to think backwards to realize that even though 2030 or 2035 doesn’t sound like tomorrow, in a cryptographic migration scenario, or a software and hardware replacement cycle, it is virtually tomorrow. As they say, the best time to start was 10 years ago, but the second best time to start is now.
Yesenia (22:49)
I mean, it’s four and a half years away.
David Hook (22:51)
Yeah, and we’ve still got people trying to get off SHA-1. It’s just, those days are gone. The other thing too, of course, is that people need to spend a bit of time looking at this issue of crypto agility, because the algorithms that are coming down the pipe at the moment, while they’ve been quite well studied and well researched, it’s not necessarily going to be the case that they’re actually going to stay the algorithms that we want to use. And that might be because it could turn out that there are some issues with them that weren’t anticipated and parameter sizes might need to be changed to make them more secure. Or, there’s a lot of ongoing research in the area of post-quantum algorithms, and it may turn out that there are algorithms that are a lot more efficient or offer smaller key sizes or smaller signature sizes, which certain applications will want to migrate to quite quickly.
So, you know, if you can imagine having a conversation with your boss where suddenly there’s some algorithm that’s going to make you twice as productive, and you have to explain to him that you’ve actually hard-coded the algorithm that you’re using, I don’t think a conversation like that’s going to go very well. So flexibility is required, but as I said, the flexibility needs to be designed into your system. In the same way as you have disaster recovery testing, it needs to be tested before deployment, so we can actually change the algorithms when we need to.
Tomas Gustavsson (24:14)
Yeah, we actually often say that since you’re doing this work on migration now, you know, it’s an opportunity to look at crypto agility. If you’re changing something, make it crypto agile. And the same thing, you know, the classic advice is if you rely on vendors, be it commercial or open source, ask them about their preparedness for quantum readiness and when they’re going to be ready. So you have to challenge everything, including us, you know, in our community, right? Among the different open source projects, don’t start to build any new things which are not crypto agile or not prepared for quantum-safe algorithms, and for old stuff, actually plan. It’s going to take some man hours to update it to be quantum safe in many cases, in most cases.
David Hook (25:10)
Yeah, don’t be afraid to ask people that are selling you stuff what their agility story is and what their quantum-safe story is. I think all of us need to do that and respond to it.
Yesenia (25:21)
Yes, ask and respond. What would be the areas or organizations that folks, let’s just say once they’re aware, could go ahead and ask if they’re getting started?
David Hook (25:30)
So probably internally, it’s obviously your IT people. I would start by asking them, because they’re the people at the coal face. And then, yeah, as Tomas said before, it’s the vendors that you’re working with, because this is one of the things about the whole supply chain: most of us, even in IT, are not using stuff that’s all in-house. We’ve usually got other people somewhere in our supply chain responsible for the systems that we’re making use of internally. And so, you know, people need to be asking everyone. And likewise, your suppliers need to be following the same basic principle, which is making sure that, in terms of how their supply chains work, again there’s this coverage of, you know, what is the quantum-safe story and, you know, for these systems that have been given to them, all these APIs or products, how are they crypto agile, and what is required to change the things that need to be changed.
Tomas Gustavsson (26:30)
Now this is a great use case for your SBOMs and CBOMs.
David Hook (26:34)
Exactly, their time has arrived.
Yesenia (26:36)
There you go. It has arrived. Time for the BOMs. For those unaware, I just learned CBOM, because I work with AI BOMs and SBOMs. I just learned CBOMs are cryptographic BOMs. So in case someone was like, what is a CBOM? Now there you go. We dropped the bomb on you.
Let’s move over now to our rapid fire part of the interview. I’ll pose a few questions and it’s going to be whoever answers them first. Or if you both answer them at the same time, we’ll figure that out.
But our first question, Vim or Emacs?
David Hook (27:06)
Vim or Emacs? Vim! Good answer. I didn’t even know that was a question. I thought it was a joke. I’m sorry, I’m very old school.
Tomas Gustavsson (27:19)
I was told totally Emacs 20 years ago.
Yesenia (27:22)
You know, we just got to start the first one of throwing you off a little bit. Make sure you’re awake, make sure I’m awake. I know we’re on very different time zones, but…
David Hook (27:29)
I was using VI in 1980. And I’ve never looked back.
Yesenia (27:33)
Our next one is Marvel or DC?
David Hook (27:36)
Yeah, what superheroes do I prefer? Oh yeah. I’m really more a Godzilla person. You know, Mothra, Station Universe for Love, that kind of thing. Yeah. I don’t know if Marvel or DC has really captured that for me yet.
Tomas Gustavsson (27:56)
Yeah, I remember Zelda. That was the hero as well. That was in the early 90s, maybe 80s even.
David Hook (28:05)
Yeah. There you go. Sorry.
Yesenia (28:07)
There you go. No, it’s OK. You got to answer. Sweet or sour?
Tomas Gustavsson (28:10)
Sour.
David Hook (28:11)
Yeah, I’d go sour.
Yesenia (28:12)
Sour. Favorite adult beverage?
Tomas Gustavsson (28:18)
Alcohol.
David Hook (28:22)
Probably malt whiskey, if I was going to be specific. But I have been known to act more broadly, as Tomas has indicated, so probably a more neutral answer.
Yesenia (28:35)
Tomas is like, skip the flavor, just throw in the alcohol.
Tomas Gustavsson (28:40)
Well, I think it has to be good, but it usually involves alcohol in some form or the other.
Yesenia (28:47)
Love it. Last one. Lord of the Rings or Game of Thrones?
David Hook (28:52)
Lord of the Rings. I have absolutely no doubt.
Tomas Gustavsson (28:55)
I have to agree on that one.
Yesenia (28:57)
There you go, there you have it folks, another rapid fire. Gentlemen, any last minute advice or thoughts that you want to leave with the audience?
David Hook (29:05)
Start now.
Tomas Gustavsson (29:07)
And for us, if you’re a computer geek, this is fun. So don’t miss out on the chance to have some fun.
David Hook (29:16)
Yeah, we pride ourselves on our ability to solve problems. So now is a good time to shine.
Yesenia (29:22)
There you have it. It’s time to start now and start with the fun. Thank you both so much for your time today, your impact and contribution to our communities and those in our community helping drive these efforts forward. I look forward to seeing your efforts in 2025. Thank you.
David Hook & Tomas Gustavsson (29:41)
Thank you. Thank you.
Foundation honors community achievements and strategic efforts to secure ML pipeline during community event in Amsterdam
AMSTERDAM – OpenSSF Community Day Europe – August 28, 2025 – The Open Source Security Foundation (OpenSSF), a cross-industry initiative of the Linux Foundation that focuses on sustainably securing open source software (OSS), presents the Golden Egg Award during OpenSSF Community Day Europe and celebrates notable momentum across the security industry. The Foundation’s milestones include achievements in AI/ML security, policy education, and global community engagement.
OpenSSF continues to shine a light on those who go above and beyond in our community with the Golden Egg Awards. The Golden Egg symbolizes gratitude for recipients’ selfless dedication to securing open source projects through community engagement, engineering, innovation, and thoughtful leadership. This year, we celebrate:
OpenSSF is supported by more than 118 member organizations and 1,519 technical contributors across OpenSSF projects, serving as a vendor-neutral partner to affiliated open source foundations and projects. As securing the global technology infrastructure continues to get more complex, OpenSSF will remain a trusted home to further the reliability, security, and universal trust of open source software.
Over the past quarter, OpenSSF has made several key achievements in its mission to sustainably secure open source software, including:
“Securing the AI and ML landscape requires a coordinated approach across the entire pipeline,” said Steve Fernandez, General Manager at OpenSSF. “Through our MLSecOps initiatives with OpenSSF members and policy education with our communities, we’re giving practitioners and their organizations actionable guidance to identify vulnerabilities, understand their role in the global regulatory ecosystem, and build a tapestry of trust from data to deployment.”
OpenSSF continues to expand its influence on the international stage. OpenSSF Community Days drew record attendance globally, including standing-room-only participation in India, strong engagement in Japan, and sustained presence in North America.
“As AI and ML adoption grows, so do the security risks. Visualizing Secure MLOps (MLSecOps): A Practical Guide for Building Robust AI/ML Pipeline Security is a practical guide that bridges the gap between ML innovation and security using open-source DevOps tools. It’s a valuable resource for anyone building and securing AI/ML pipelines.” Sarah Evans, Distinguished Engineer, Dell Technologies
“The whitepaper distills our collective expertise into a pragmatic roadmap, pairing open source controls with ML-security threats. Collaborating through the AI/ML Security WG proved that open, vendor-neutral teamwork can significantly accelerate the adoption of secure AI systems.” Andrey Shorov, Senior Security Technology Specialist at Product Security, Ericsson
“The Cybersecurity Skills Framework is more than a checklist — it’s a practical roadmap for embedding security into every layer of enterprise readiness, open source development, and workforce culture across international borders. By aligning skills with real-world global threats, it empowers teams worldwide to build secure software from the start.” Jamie Thomas, Chief Client Innovation Officer and the Enterprise Security Executive, IBM
“Open source is global by design, and so are the challenges we face with new regulations like the EU Cyber Resilience Act,” said Christopher “CRob” Robinson, Chief Security Architect, OpenSSF. “The Global Cyber Policy Working Group helps policymakers understand how open source is built and supports maintainers and manufacturers as they prepare for compliance.”
“The OpenSSF’s brief guide to the Cyber Resilience Act is a critical resource for the open source community, helping developers and contributors understand how the new EU law applies to their projects. It clarifies legal obligations and provides a roadmap for proactively enhancing their code’s security.” Dave Russo, Senior Principal Program Manager, Red Hat Product Security
New and existing OpenSSF members are gathering this week in Amsterdam at the annual OpenSSF Community Day Europe.
OpenSSF will continue its engagement across Europe this fall with participation in the Linux Foundation Europe Member Summit (October 28) and the Linux Foundation Europe Roadshow (October 29), both in Ghent, Belgium. At the Roadshow, OpenSSF will sponsor and host the CRA in Practice: Secure Maintenance track, building on last year’s standing-room-only CRA workshop. On October 30, OpenSSF will co-host the European Open Source Security Forum with CEPS in Brussels, bringing together open source leaders, European policymakers, and security experts to collaborate on the future of open source security policy. A landing page for this event will be available soon; check the OpenSSF events calendar for updates and registration details.
The Open Source Security Foundation (OpenSSF) is a cross-industry organization at the Linux Foundation that brings together the industry’s most important open source security initiatives and the individuals and companies that support them. The OpenSSF is committed to collaboration and working both upstream and with existing communities to advance open source security for all. For more information, please visit us at openssf.org.
Media Contact
Grace Lucier
The Linux Foundation