        In this episode of What’s in the SOSS, CRob sits down with John Amaral from Root.io to explore the evolving landscape of open source security and vulnerability management. They discuss how AI and LLM technologies are revolutionizing the way we approach security challenges, from the shift away from traditional “scan and triage” methodologies to an emerging “fix first” approach powered by agentic systems. John shares insights on the democratization of coding through AI tools, the unique security challenges of containerized environments versus traditional VMs, and how modern developers can leverage AI as a “pair programmer” and security analyst. The conversation covers the transition from “shift left” to “shift out” security practices and offers practical advice for open source maintainers looking to enhance their security posture using AI tools.
00:25 – Welcome and introductions
01:05 – John’s open source journey and Root.io’s SlimToolkit project
02:24 – How application development has evolved over 20 years
05:44 – The shift from engineering rigor to accessible coding with AI
08:29 – Balancing AI acceleration with security responsibilities
10:08 – Traditional vs. containerized vulnerability management approaches
13:18 – Leveraging AI and ML for modern vulnerability management
16:58 – The coming “remediation revolution” and fix-first approach
18:24 – Why “shift left” security isn’t working for developers
19:35 – Using AI as a cybernetic programming and analysis partner
20:02 – Call to action: Start using AI tools for security today
22:00 – Closing thoughts and wrap-up
Intro Music & Promotional clip (00:00)
CRob (00:25)
Welcome, welcome, welcome to What’s in the SOSS, the OpenSSF’s podcast where I talk to upstream maintainers, industry professionals, educators, academics, and researchers all about the amazing world of upstream open source security and software supply chain security.
Today, we have a real treat. We have John from Root.io with us here, and we’re going to be talking a little bit about some of the new, air quotes, “cutting edge” things going on in the space of containers and AI security. But before we jump into it, John, could you maybe share a little bit with the audience, like how you got into open source and what you’re doing upstream?
John (01:05)
First of all, great to be here. Thank you so much for taking the time at Black Hat to have a conversation. I really appreciate it. Open source, really great topic. I love it. Been doing stuff with open source for quite some time. How did I get into it? I’m a builder. I make things. I make software, been writing software a long time. Folks can’t see me, but you know, I’m gray and have no hair and all that sort of thing. We’ve been doing this a while. And I think that it’s been a great journey and a pleasure in my life to work with software in a way that democratizes it, gets it out there. I’ve taken a special interest in security for a long time, 20 years of working in cybersecurity. It’s a problem that’s been near and dear to me since the first day I ever had my first floppy disk corrupted. I’ve been on a mission to fix that. And my open source journey has been diverse. My company, Root.io, we are the maintainers of an open source project called SlimToolkit, which is a pretty popular open source project that is about security and containers. And it’s been our goal, myself personally, and in my latest company, to really try to help make open source secure for the masses.
CRob (02:24)
Excellent. That is an excellent kind of vision and direction to take things. So from your perspective, I feel we’re of a very similar age and kind of came up on maybe semi-related paths. But from your perspective, how have you seen application development kind of transmogrify over the last 20 or so years? What has gotten better? What might’ve gotten a little worse?
John (02:51)
20 years is a big time frame when talking about modern open source software. I remember when Linux first came out, and I was playing with it. I actually ported it to a single board computer as one of my jobs as an engineer back in the day, which was super fun. Of course, we’ve seen what happened by making software available to folks. It’s become the foundation of everything.
Andreessen said software will eat the world, and the teeth were open source. It really made software available, and now 95 or more percent of everything we touch and do is open source software. I’ll add that in the grand scheme of things, it’s been tremendously secure, especially projects like Linux. We’re really splitting hairs, but security problems are real. As we’ve seen, there’s been a proliferation of open source and a proliferation of repos with things like GitHub and all that. Then today, the proliferation of tooling and the ability to build software, and then to build software with AI, is simply exponentiating the rate at which we can do things. Good people who build software for the right reasons can do things. Bad people who do things for the wrong reasons can do things. And it’s an arms race.
And I think it’s really both benefiting software development, society, software builders with these tremendously powerful tools to do things that they want. A person in my career arc, today I feel like I have the power to write code at a rate that’s probably better than I ever have. I’ve always been hands on the keyboard, but I feel rejuvenated. I’ve become a business person in my life and built companies.
And I didn’t always have the time, or maybe even the moment, to do coding at the level I’d like. And today I’m banging out projects like I was 25, or even better. But at the same time that we’re getting all this leverage universally, we’ve also noticed that there’s an impending kind of security risk where, yeah, we can find vulnerabilities and generate them faster than ever. And LLMs aren’t quite good yet at secure coding. I think they will be. But attackers are also using it for exploits, and really, as soon as a disclosed vulnerability comes out, or even minutes later, they’re writing exploits that can target those. I love the fact that the pace and the leverage is high, and I think the world’s going to do great things with it, the world of open source folks like us. At the same time, we’ve got to be more diligent and even better at defending.
CRob (05:44)
Right. I heard an interesting statement yesterday where folks were talking about software engineering as a discipline that’s maybe 40 to 60 years old. And engineering was kind of the core noun there. These people, these engineers, were trained; they had a certain rigor. They might not have always enjoyed security, but they were engineers, and there was a certain kind of elegance to the code. Much like artists, they took a lot of pride in their work, and you could understand what the code did. Today, and especially in the last several years with the influx of AI tools, it’s a blessing and a curse that anybody can be a developer. Not just the people who used to do it and didn’t have the time, who now get to scratch that itch. Now anyone can write code, and they may not necessarily have that same rigor and discipline that comes from most of the engineering trades.
John (06:42)
I’m going to guess, and I think it’s not walking too far out on a limb, that you probably coded on systems at some point in your life where you had a very small amount of memory to work with. You knew every line of code in the system, literally every line that was written. There might have been a shim operating system or something small, but I wrote embedded systems early in my career and we knew everything. We knew every line of code, and the elegance and the efficiency of it and the speed of it. And we were very close to the CPU, very close to the hardware. It was slow building things because you had to handcraft everything, but it was very curated and very beautiful, so to speak. I find beauty in those things. You’re exactly right. I think I started to see this happen around the time when JVMs started happening, Java Virtual Machines, where you didn’t have to worry about garbage collection. You didn’t have to worry about memory management.
And then progressively, levels of abstraction have changed to make coding faster and easier and to give it more power, and that’s great. We’ve built a lot more systems, bigger systems, and open source helps. But now literally anyone who can speak cogently and describe what they want can get a system. And I look at the code my LLMs produce. I know what good code looks like. Our team is really good at engineering, right?
Sometimes I think, hmm, how did it think to do it that way? Then we go back and tell it what we want, and you can massage it with some words. It’s really dangerous, and if you don’t know how to look for security problems, that’s even more dangerous. Exactly, the level of abstraction is so high that people aren’t really curating code the way they might need to to build secure, production grade systems.
CRob (08:29)
Especially if you are creating software with the intention of somebody else using it, probably in a business, then you’re not really thinking about all the extra steps you need to take to help protect yourself and your downstream.
John (08:44)
Yeah, yeah. I think it’s an evolution, right? The way I think of it, these AI systems we’re working with are maybe second graders when it comes to professional code authoring. They can produce a lot of good stuff, right? But it’s really up to the user to discern what’s usable.
And we can get to prototypes very quickly, which I think is greatly powerful, which lets us iterate and develop. In my company, we use AI coding techniques for everything, but nothing gets into production, into customer hands that isn’t highly vetted and highly reviewed. So, the creation part goes much faster. The review part is still a human.
CRob (09:33)
Well, that’s good. Human on the loop is important.
John (09:35)
It is.
CRob (09:36)
So let’s change the topic slightly. Let’s talk a little bit more about vulnerability management. From your perspective, thinking about traditional brick and mortar organizations, what key differences do you see between someone that is more data center, server, and VM focused versus the new generation of cloud native, where we have containers and cloud?
What are some of the differences you see in managing your security profile and your vulnerabilities there?
John (10:08)
Yeah, so I’ll start out with a general statement about vulnerability management. In general, the methodologies I observe today are pretty traditional.
It’s scan, it’s inventory. What do I have for software? Let’s just focus on software. What do I have? Do I know what it is or not? Do I have a full inventory of it? Then you scan it and you get a laundry list of vulnerabilities, with some false positives and false negatives among what you’re able to find. And then I’ve got this long list, and the typical pattern there is triage: which are more important than others, and which can I explain away? And then there’s a cycle of remediation, hopefully, though a lot of times not, where you’re cycling work back to the engineering organization or to whoever is in charge of doing the remediation. And this is a very big loop, mostly starting with and ending with still-long lists of vulnerabilities that need to be addressed and risk managed, right? It doesn’t really matter if you’re doing VMs or traditional software or containerized software. That’s the status quo, I would say, for the average company doing vulnerability management. And the remediation part of that ends up being fractional work, meaning you mostly just don’t have time to get to it all, and it becomes a big tax on the development team to fix it. Because in software, it’s very difficult for DevSec teams to fix it when it’s actually a coding problem in the end.
In the traditional VM world, I’d say the potential impact and the velocity at which those risks move are limited compared to containerized environments, where you have Kubernetes and other kinds of orchestration systems that can literally proliferate containers everywhere, in a place where infrastructure as code is the norm. I’d just say that the risk surface in these containerized environments is much more vast and oftentimes less understood, whereas traditional VMs still follow a pretty prescriptive pattern of deployment. So I think in the end, the more prolific you can be with deploying code, the more likely you’ll have this massive risk surface, and containers are so portable and easy to produce that they’re everywhere. You can pull them down from Docker Hub, and these things are full of vulnerabilities, and they’re sitting on people’s desks.
They’re sitting in staging areas or sitting in production. So proliferation is vast. And I think that in conjunction with really high vulnerability reporting rates, really high code production rates, vast consumption of open source, and then exploits at AI speed, we’re seeing this kind of almost explosive moment in risk from vulnerability management.
CRob (13:18)
So machine intelligence, which has now transformed into artificial intelligence, has been around for several decades, but it seems like most recently, the last two to four years, it has been exponentially accelerating. We have this whole spectrum of things: AI, ML, LLMs, GenAI, and now agentic systems and MCP servers.
So kind of looking at all these different technologies, what recommendations do you have for organizations that are looking to try to manage their vulnerabilities and potentially leveraging some of this new intelligence, these new capabilities?
John (13:58)
Yeah, it’s amazing, the rate of change of these kinds of things.
CRob (14:02)
It’s crazy.
John (14:03)
I think there’s a massively accelerating, kind of exponentially accelerating feedback loop because once you have LLMs that can do work, they can help you evolve the systems that they manifest faster and faster and faster. It’s a flywheel effect. And that is where we’re going to get all this leverage in LLMs. At Root, we build an agentic platform that does vulnerability patching at scale. We’re trying to achieve sort of an open source scale level of that.
And I only said that because I believe that rapidly, not just us, but from an industry perspective, we’re evolving to have the capabilities through agentic systems based on modern LLMs to be able to really understand and modify code at scale. There’s a lot of investment going in by all the major players, whether it’s Google or Anthropic or OpenAI to make these LLM systems really good at understanding and generating code. At the heart of most vulnerabilities today, it’s a coding problem. You have vulnerable code.
And so, we’ve been able to exploit those coding capabilities to turn it into an expert security engineer and maintainer of any software system. And so I think what we’re on the verge of is this, I’ll call it, remediation revolution. I mentioned that the status quo is typically inventory, scan, list, triage, do your best. That’s a scan-first mode, I’ll call it, where mostly you’re just trying to get a comprehensive list of the vulnerabilities you have. It’s going to get flipped on its head with this kind of technique, where it’s going to be just fix everything first. And there’ll be outliers. There’ll be things that are kind of technically impossible to fix for a while. For instance, it could be a disclosure, but you really don’t know how it works. You don’t have CWEs. You don’t have all the details yet. So you can’t really know yet.
That gap will close very quickly once you know what code base it’s in and you understand it, maybe through a POC or something like that. But I think we’re gonna enter into the remediation revolution of vulnerability management where, at least for third party open source code, most of it will be fixed a priori.
Now, zero days will start to happen faster, there’ll be all of that, and there’ll be a long tail on this, and certainly probably things we can’t even imagine yet. But generally, I think vulnerability management as we know it will enter into this phase of fix first. And I think that’s really exciting, because today it creates a lot of work for teams to manage those lists and deal with the re-engineering cycle. It’s basically latent rework that you have to do. You don’t really know what’s coming. And I think that can go away, which is exciting because it frees up security practitioners and engineers to focus on, I’d say, more meaningful problems, less toil-heavy problems. And that’s good for software.
CRob (17:08)
It’s good for the security engineers.
John (17:09)
Correct.
CRob (17:10)
It’s good for the developers.
John (17:11)
It’s really good for developers. I think generally the shift left revolution in software really didn’t work the way people thought. Shifting that work left has two major frictions. One is it’s shifting new work to the engineering teams, who are already maximally busy.
CRob (17:29)
Correct.
John (17:29)
I didn’t have time to do a lot of other things when I was an engineer. And the second is software engineers aren’t security engineers. They really don’t like the work and maybe aren’t good at the work. And so what we really want is to not have that work land on their plate. I think we’re entering into an age, and this is a general statement for software, where software as a service and the idea of shift left is really going to be replaced with what I call shift out, which is having an agentic system do the work for you, especially if it’s work that is toilsome and difficult, low value, or even just security maintenance, right? A lot of this work is hard. Patching things is hard, especially for the engineer who doesn’t know the code. If you can make that work go away and make it secure, and agents can do that for you, I think there’s higher value work for engineers to be doing.
CRob (18:24)
Well, and especially with the trend in open source, where people are assembling and composing apps instead of creating them from whole cloth. It’s a very rare engineer indeed that’s going to understand every piece of code that’s in there.
John (18:37)
And they don’t. I don’t think it’s feasible. I don’t know anyone, except the folks who write Node, for instance, who knows how Node works internally. They don’t know. And if there’s a vulnerability down there, some of that stuff’s really esoteric. You have to know how that code works to fix it. As I said, luckily, existing LLM systems, with agents powering them or exploiting them, are really good at understanding big code bases. They have almost a perfect memory for how the code fits together. Humans don’t, and it takes a long time to learn that code.
CRob (19:11)
Yeah, absolutely. And I’ve been leveraging AI in my practice; there are certain specific tasks AI does very well. It’s great at analyzing large pools of data and providing you lists and kind of pointers and hints. It’s not so great at making things up on its own, but generally it’s the expert system. It’s nice to have a buddy there to assist you.
John (19:35)
It’s a pair programmer for me, and it’s a pair data analyst for you, and that’s how you use it. I think that’s perfect. We effectively have become cybernetic organisms, our organic capabilities augmented with this really powerful tool. I think it’s going to keep getting more and more excellent at the tasks that we need offloaded.
CRob (19:54)
That’s great. As we’re wrapping up here, do you have any closing thoughts or a call to action for the audience?
John (20:02)
Call to action for the audience: again, this is a passion play for me, vulnerability management and the security of open source. A couple of things, cut from the same cloth. I think we’re entering an age where security and vulnerability management can be disrupted. For anyone who’s struggling with that kind of high-effort work and that never-ending list, help is on the way, and there are techniques you can use with open source projects that can get you started. Just for instance, researching vulnerabilities: if you’re not using LLMs for that, you should start tomorrow. It is an amazing buddy for digging in and understanding how things work, what these exploits are, and what these risks are. There is tooling like mine and others out there that you can use to really take a lot of effort out of vulnerability management. I’d say that any open source maintainers out there can start using these tools as pair programmers and security analysts. And they’re pretty good. If you just learn some prompting techniques, you can probably secure your code at a level that you hadn’t before. They’re pretty good at figuring out where your security weaknesses are and telling you what to do about them. I think just these things can probably enhance open source security tremendously.
CRob (24:40)
That would be amazing to help kind of offload some of that burden from our maintainers and let them work on that excellent…
John (21:46)
Threat modeling, for instance: they’re actually pretty good at it. Yeah. Which is amazing. So start using the tools and make them your friend. And even if you don’t want to use them as a pair programmer, certainly use them as an adjunct SecOps engineer.
CRob (22:00)
Well, excellent. John from Root.io. I really appreciate you coming in here, sharing your vision and your wisdom with the audience. Thanks for showing up.
John (22:10)
Pleasure was mine. Thank you so much for having me.
CRob (22:12)
And thank you everybody. That is a wrap. Happy open sourcing everybody. We’ll talk to you soon.
 
        
        Welcome to the September 2025 edition of the OpenSSF Newsletter! Here’s a roundup of the latest developments, key events, and upcoming opportunities in the Open Source Security community.
🎉 Big week in Amsterdam: Recap of OpenSSF at OSSummit + OpenSSF Community Day Europe.
🥚 Golden Egg Awards shine on five amazing community leaders.
✨ Fresh resources: AI Code Assistant tips and SBOM whitepaper.
🤝 Trustify + GUAC = stronger supply chain security.
🌍 OpenSSF Community Day India: 230+ open source enthusiasts packed the room.
🎙 New podcasts: AI/ML security + post-quantum race.
🎓 Free courses to level up your security skills.
📅 Mark your calendar and join us for Community Events.
 From August 25–28, 2025, the Linux Foundation hosted Open Source Summit Europe and OpenSSF Community Day Europe in Amsterdam, bringing together developers, maintainers, researchers, and policymakers to strengthen software supply chain security and align on global regulations like the EU Cyber Resilience Act (CRA). The week included strong engagement at the OpenSSF booth and sessions on compliance, transparency, proactive security, SBOM accuracy, and CRA readiness.
OpenSSF Community Day Europe celebrated milestones in AI security, public sector engagement, and the launch of Model Signing v1.0, while also honoring five community leaders with the Golden Egg Awards. Attendees explored topics ranging from GUAC+Trustify integration and post-quantum readiness to securing GitHub Actions, with an interactive Tabletop Exercise simulating a real-world incident response.
These gatherings highlighted the community’s progress and ongoing commitment to strengthening open source security. Read more.
At OpenSSF Community Day Europe, the Open Source Security Foundation honored this year’s Golden Egg Award recipients. Congratulations to Ben Cotton (Kusari), Kairo de Araujo (Eclipse Foundation), Katherine Druckman (Independent), Eddie Knight (Sonatype), and Georg Kunz (Ericsson) for their inspiring contributions.
With exceptional community engagement across continents and strategic efforts to secure the AI/ML pipeline, OpenSSF continues to build trust in open source at every level.
Read the full press release to explore the achievements, inspiring voices, and what’s next for global open source security.
Here you will find a snapshot of what’s new on the OpenSSF blog. For more stories, ideas, and updates, visit the blog section on our website.
 On August 15, 2025, GitHub’s Open Source Friday series spotlighted the OpenSSF Global Cyber Policy Working Group (WG) and the OSPS Baseline in a live session hosted by Kevin Crosby, GitHub. The panel featured OpenSSF’s Madalin Neag (EU Policy Advisor), Christopher Robinson (CRob) (Chief Security Architect) and David A. Wheeler (Director of Open Source Supply Chain Security) who discussed how the Working Group helps developers, maintainers, and policymakers navigate global cybersecurity regulations like the EU Cyber Resilience Act (CRA).
The conversation highlighted why the WG was created, how global policies affect open source, and the resources available to the community, including free training courses, the CRA Brief Guide, and the Security Baseline Framework. Panelists emphasized challenges such as awareness gaps, fragmented policies, and closed standards, while underscoring opportunities for collaboration, education, and open tooling.
As the CRA shapes global standards, the Working Group continues to track regulations, engage policymakers, and provide practical support to ensure the open source community is prepared for evolving cybersecurity requirements. Learn more and watch the recording.
SBOMs are becoming part of everyday software practice, but many teams still ask the same question: how do we turn SBOM data into decisions we can trust?
Our new whitepaper, “Improving Risk Management Decisions with SBOM Data,” answers that by tying SBOM information to concrete risk-management outcomes across engineering, security, legal, and operations. It shows how to align SBOM work with real business motivations like resiliency, release confidence, and compliance. It also describes what “decision-ready” SBOMs look like, and how to judge data quality. To learn more, download the Whitepaper.
 GUAC and Trustify are combining under the GUAC umbrella to tackle the challenges of consuming, processing, and utilizing supply chain security metadata at scale. With Red Hat’s contribution of Trustify, the unified community will serve as the central hub within OpenSSF for building and using supply chain knowledge graphs, defining standards, developing shared infrastructure, and fostering collaboration. Read more.
 On August 4, 2025, OpenSSF hosted its second Community Day India in Hyderabad, co-located with KubeCon India. With 232 registrants and standing-room-only attendance, the event brought together open source enthusiasts, security experts, engineers, and students for a full day of learning, collaboration, and networking.
The event featured opening remarks from Ram Iyengar (OpenSSF Community Engagement Lead, India), followed by technical talks on container runtimes, AI-driven coding risks, post-quantum cryptography, supply chain security, SBOM compliance, and kernel-level enforcement. Sessions also highlighted tools for policy automation, malicious package detection, and vulnerability triage, as well as emerging approaches like chaos engineering and UEFI secure boot.
The event highlighted India’s growing role in global open source development and the importance of engaging local communities to address global security challenges. Read more.
In our recent blog, Avishay Balter, Principal SWE Lead at Microsoft, and David A. Wheeler, Director of Open Source Supply Chain Security at OpenSSF, introduce the OpenSSF “Security-Focused Guide for AI Code Assistant Instructions.” AI code assistants can speed development but can also generate insecure or incorrect results if prompts are poorly written. The guide, created by the OpenSSF Best Practices and AI/ML Working Groups with contributors from Microsoft, Google, and Red Hat, shows how clear, security-focused instructions improve outcomes. It stands as a practical resource for developers today, while OpenSSF also develops a broader course (LFEL1012) on using AI code assistants securely.
This effort marks a step toward ensuring AI helps improve security instead of undermining it. Read more.
 Public package registries and other shared services power modern software at global scale, but most costs are carried by a few stewards while commercial-scale users often contribute little. Our new open letter calls for practical models that align usage with responsibility — through partnerships, tiered access, and value-add options — so these systems remain strong, secure, and open to all.
Signed by: OpenSSF, Alpha-Omega, Eclipse Foundation (Open VSX), OpenJS Foundation, Packagist (Composer), Python Software Foundation (PyPI), Rust Foundation (crates.io), Sonatype (Maven Central).
#38 – S2E15 Securing AI: A Conversation with Sarah Evans on OpenSSF’s AI/ML Initiatives
In this episode of What’s in the SOSS, Sarah Evans, Distinguished Engineer at Dell Technologies, discusses extending secure software practices to AI. She highlights the AI Model Signing project, the MLSecOps whitepaper with Ericsson, and efforts to identify new personas in AI/ML operations. Tune in to hear how OpenSSF is shaping the future of AI security.
#39 – S2E16 Racing Against Quantum: The Urgent Migration to Post-Quantum Cryptography with KeyFactor’s Crypto Experts
In this episode of What’s in the SOSS, host Yesenia talks with David Hook and Tomas Gustavsson from Keyfactor about the race to post-quantum cryptography. They explain quantum-safe algorithms, the importance of crypto agility, and why sectors like finance and supply chains are leading the way. Tune in to learn the real costs of migration and why organizations must start preparing now before it’s too late.
The Open Source Security Foundation (OpenSSF), together with Linux Foundation Education, provides a selection of free e-learning courses to help the open source community build stronger software security expertise. Learners can earn digital badges by completing offerings such as:
These are just a few of the many courses available for developers, managers, and decision-makers aiming to integrate security throughout the software development lifecycle.
Join us at OpenSSF Community Day in South Korea!
OpenSSF Community Days bring together security and open source experts to drive innovation in software security.
Connect with the OpenSSF Community at these key events:
There are a number of ways for individuals and organizations to participate in OpenSSF. Learn more here.
You’re invited to…
We want to get you the information you most want to see in your inbox. Missed our previous newsletters? Read here!
Have ideas or suggestions for next month’s newsletter about the OpenSSF? Let us know at marketing@openssf.org, and see you next month!
Regards,
The OpenSSF Team
 
        
        The quantum threat is real, and the clock is ticking. With government deadlines set for 2030, organizations have just five years to migrate their cryptographic infrastructure before quantum computers can break current RSA and elliptic curve systems.
In this episode of “What’s in the SOSS,” join host Yesenia as she sits down with David Hook (VP Software Engineering) and Tomas Gustavsson (Chief PKI Officer) from Keyfactor to break down post-quantum cryptography, from ELI5 explanations of quantum-safe algorithms to the critical importance of crypto agility and entropy. Learn why the financial sector and supply chain security are leading the charge, discover the hidden costs of migration planning, and find out why your organization needs to start inventory and testing now because once quantum computers arrive, it’s too late.
00:00 Introduction
00:22 Podcast Welcome
00:01 – 01:22: Introductions and Setting the Stage
01:23 – 03:22: Post-Quantum 101 – The Quantum Threat Explained
03:23 – 06:38: Government Deadlines and Industry Readiness
06:39 – 09:14: Bouncy Castle’s Quantum-Safe Journey
09:15 – 10:46: The Power of Open Source Collaboration
10:47 – 13:32: Industry Sectors Leading the Migration
13:33 – 16:33: Planning Challenges and Crypto Agility
16:34 – 22:01: The Randomness Problem – Why Entropy Matters
22:02 – 26:44: Getting Started – Practical Migration Advice
26:45 – 28:05: Supply Chain and SBOMs
28:06 – 30:48: Rapid Fire Round
30:49 – 31:40: Final Thoughts and Call to Action
Intro Music + Promo Clip (00:00)
Yesenia (00:21)
Hello and welcome to What’s in the SOSS, OpenSSF’s podcast where we talk to interesting people throughout the open source ecosystem, sharing their journey, experiences and wisdom. Soy Yesenia Yser, one of your hosts. And today we have a very special treat. I have David and Tomas from Keyfactor here to talk to us about post-quantum. Ooh, this is a hot topic. It was definitely one that was mentioned a lot at RSA and upcoming conferences.
Tomas, David, I’ll hand it over to you. Tomas, I’ll hand it over to you first – introduce yourself.
Tomas Gustavsson (00:56)
Okay, I’m Tomas Gustavsson, Chief PKI Officer at Keyfactor. I’ve been a PKI nerd and geek, working with that for 30 years now. I would call it applied cryptography. So as compared to David, I take what he does and build PKI and digital signature software with it.
David Hook (01:17)
And I’m David Hook. My official title is VP Software Engineering at Keyfactor, but primarily I’m responsible for the care and feeding of the Bouncy Castle cryptography APIs, which basically form the core of the cryptography that Keyfactor and other people’s products actually use.
Yesenia (01:35)
Very nice. And for those that aren’t aware, like myself, who are kind of new to post-quantum cryptography, could you explain like I’m five what that is for our audience?
David Hook (01:46)
So one of the issues, basically, with the progress that’s been made in quantum computers is that there’s a particular algorithm called Shor’s algorithm which enables people to break conventional PKI systems built around RSA and elliptic curve, which are the two most common algorithms in use today. The idea of the post-quantum cryptography effort is to develop and deploy algorithms which are not susceptible to attack from quantum computers before we actually have a quantum computer attacking us. Not that I’m expecting the first quantum computer to get out of its box and, you know, go rampaging around the street with a knife or anything like that. But the reality is that good people and bad people will get access to quantum technology at about the same time. And it’s really the bad people we’re trying to protect people from.
Tomas Gustavsson (02:39)
Exactly, and since more or less the whole world as we know it runs on RSA and EC, that’s what makes it urgent and what has caused governments around the world to set timelines for the migration to post-quantum cryptography, or quantum-safe cryptography, as it’s also known.
David Hook (03:03)
Yeah, I was just going to say that quantum safe is in some ways a better way of describing it. One of the issues that people have with the term post-quantum is that in the industry, a lot of people hear the word post and they think, I can put this off until later. But the reality is that’s not possible, because once there is a quantum computer that’s cryptographically relevant, it’s too late.
Yesenia (03:23)
So from what I’m hearing, it sounds like post-quantum cryptography is gaining urgency. And as we’re standardizing these milestones, including our government regulations, what are you seeing from your work with Bouncy Castle, EJBCA, and SignServer, and of course other important ecosystem players like the HSM vendors, as they’re getting ready for these PQC deployments?
David Hook (03:49)
So I guess the first thing is, from the government point of view, the deadline is actually 2030, which is only about five years away. That certainly has got people’s attention. And that includes in Australia, where I’m from. Now, what we’re seeing at the moment, of course, is that a lot of people are waiting for certified implementations. But we are actually seeing people use pre-certification implementations in order to get some understanding of what the differences are between the post-quantum algorithms and the original RSA PKI algorithms that we’ve been using before. One of the issues, of course, is that the post-quantum algorithms require more resources. So the keys are generally bigger, the signature sizes are generally bigger, payloads are generally bigger as well. And also, the mechanism for doing key transport in post-quantum relies on a system called a KEM, which is a key encapsulation mechanism. Key encapsulation mechanisms in usage are also slightly different to how RSA or Diffie-Hellman works, or elliptic-curve Diffie-Hellman, which is what we’re currently used to using. So there’s going to have to be some adaptation in that too. What we’re seeing, certainly at the Bouncy Castle level, is a lot of people now starting to try new implementations of the protocols and everything they’re using, in order to find out what the scalability effects are, and also where there are these issues where they need to rephrase the way some processes are done, just because the algorithms no longer support the things they used to support.
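To make the KEM call pattern David describes concrete, here is a deliberately insecure toy sketch in Python. This is not a real KEM and must never be used for security (anyone holding the public key can recover the shared secret); a real deployment would use a standardized scheme such as ML-KEM. It only illustrates how the keygen/encapsulate/decapsulate flow differs in shape from RSA-style key transport.

```python
import hashlib
import os

# Toy illustration of the KEM call pattern: keygen / encapsulate / decapsulate.
# INSECURE by construction -- it only shows the shape of the API.

def keygen():
    sk = os.urandom(32)                       # secret key
    pk = hashlib.sha256(sk).digest()          # derived "public" key (toy!)
    return pk, sk

def encapsulate(pk):
    ct = os.urandom(32)                       # "ciphertext" sent to the peer
    ss = hashlib.sha256(pk + ct).digest()     # sender's copy of the shared secret
    return ct, ss

def decapsulate(sk, ct):
    pk = hashlib.sha256(sk).digest()          # re-derive the public key
    return hashlib.sha256(pk + ct).digest()   # receiver's copy of the shared secret

pk, sk = keygen()
ct, ss_sender = encapsulate(pk)
assert decapsulate(sk, ct) == ss_sender      # both ends now share a secret
```

Note the difference from RSA key transport: the sender never chooses the secret and encrypts it; instead, encapsulation produces the ciphertext and the shared secret together, which is why protocols built on RSA-style transport need reworking.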
Tomas Gustavsson (05:24)
I think it’s definitely encouraging that things have moved quite a lot. Of course, the cryptographic community has worked on this for many, many years, and we’ve now moved on from “what can we do?” to “when and how can we do it?” So that’s very encouraging. There are still a few final bits and pieces to be finished on the standardization and certification front, as David mentioned.
But things are, you know, dripping in one by one. For example, hardware security module or HSM vendors are coming in one by one. For the right kind of limited use cases today, selecting some ready vendors or open source projects, you can make things work today. That has really been, just in the last couple of months, a really big step forward, from planning to being able to execute.
Yesenia (06:27)
Very interesting. And we’ll jump over to Bouncy Castle. From my experience within the open source world, it’s been a trusted open source crypto library for a very long time. How do you approach supporting post-quantum algorithms while maintaining the trust and the interoperability? That’s a hard word for me.
David Hook (06:50)
Yeah, that’s all right. It’s not actually an easy operation to execute in real life either.
Yesenia (06:55)
Oh, so that works.
David Hook (06:57)
Yeah, so it works well. So with Bouncy Castle, what we were able to do is this: our original set of post-quantum algorithms was based on round three of the NIST post-quantum competition. And we actually got funding from the Australian government to work with a number of Australian universities to add those implementations, and one of the universities was also given funding to do formal validation on them. So one part of the process for us was, well, I guess there were three parts. One part was the implementation, which was done in Java and C#. In addition to that, we had somebody sit down and independently study the work that was done, to make sure that we hadn’t introduced any errors that were obvious, and to check for things like side channels, whether there were timing considerations that might have caused side-channel leakage.
And then finally, of course, with the interoperability, we’ve been actively involved with organizations like the IETF and also the OpenSSL mission. And that’s allowed us to work with other open source projects and also other vendors to determine that our certificates, for example, and our private keys and all that, have been encoded in a manner that actually allows them to be read and understood by the other vendors and other open source APIs. On top of that, we’ve also been active participants in working with NIST on the ACVP stuff, which is for algorithm validation testing, to make sure the actual implementations themselves are producing the correct results. And that’s obviously something we’ve worked on across the IETF and the OpenSSL mission as well. So, you know, part of actually generating a certificate, of course, is that you’ve got to be able to verify the signature on it. So that means you have to be able to understand the public key associated with it. That’s one checkbox, and then the second one, of course, is that the signature, for example, makes sense too.
Yesenia (08:52)
So, it sounds like there are a lot of layers to this that have to be checked off, and that gives it its foundation. Very nice.
Tomas Gustavsson (09:02)
I would say that what is so good about working in open source is that without collaboration, we wouldn’t have a chance to meet these tight deadlines that governments are setting up. And the great thing about the open source community is that a lot of things are transparent and easy to test.
Bouncy Castle is released open source; EJBCA and SignServer are released open source and early. And not only us, of course. Other people can also start testing, grabbing OpenSSL or OQS from the Linux Foundation. You can test interoperability and verify it. And you actually do find bugs in these early tests, which is why I think open source is the foundation to being able to do this.
Yesenia (09:58)
Yeah, open source gives us that nice foundation. And while we might have several years, I know the migration itself is going to take a while, especially trying to figure out how it’s going to be done. So I just wanted to look at what remains of 2025 and, of course, beyond. You know, we’re approaching a period where many organizations will need to start migrating, especially critical infrastructure and our software supply chains. What do you anticipate will be the most important post-quantum cryptographic milestones or shifts this year?
Tomas Gustavsson (10:32)
Definitely, we see a lot of interest from specific sectors. As I said, supply chain security is a really big one, because that was also, let’s say, definitely one of the first anticipated use cases for post-quantum cryptography. If you cannot secure the supply chain, with over-the-air updates and those kinds of things, then you won’t be in a good position to update or upgrade systems once a cryptographically relevant quantum computer is here. So everything about code signing and the software supply chain is a huge topic. And it’s actually one of the areas where you will be able to do production usage; people are starting to plan and test production usage already, and some actually have already gone there.
Then there are industries like the finance industry, which is encouraging, I guess, for all of us who have a bank that we work with: they are very early on the ball as well, planning the huge, complex systems they are running, doing actual practical tests now, and moving from a planning phase into an implementation phase.
And then there are more, I would say, forward-looking things, which are very long term. Telecom, for example, is looking to the next generation, 6G, where they are planning in post-quantum cryptography from the beginning. So there’s everything from, you know, right now, to what’s happening in the coming years, to what’s going to happen definitely past 2030. All of these things are ongoing.
While there is still, of course, a body of organizations and people out there who are completely ignorant. Not in a bad way, right? They just haven’t been reached by the news. There are a lot of things in this industry, so you can’t keep track of everything.
Yesenia (12:43)
Right, they’re very unaware potentially of what’s to come or even if they’re impacted.
Tomas Gustavsson (12:49)
Yes.
David Hook (12:50)
So the issue you run into of course for something like this is that it costs money. That tends to slow people down a bit.
Tomas Gustavsson (12:58)
Yeah, that’s one thing: when people or organizations start planning, they run into these non-obvious things. As a developer, you just develop it, then someone integrates it, and it’s going to work. But large organizations have to look into things like hardware depreciation periods, right? If they want to be ready by 2035 or 2030, they have to plan backwards to see when they can earliest start replacing hardware, whether it’s routers or VPNs and these kinds of things, and when they need to procure new software or start updating and planning their updates, because all these things are typically multi-year cycles in larger organizations. And that’s why the financial industry is trying to start planning early. And of course, we as suppliers are kind of at the bottom of the food chain, so we have to be ready early.
David Hook (14:02)
Actually, I guess there are a couple of areas where the money’s got to get spent too. So the first one really is that people need to properly understand what they’re doing. It’s surprising how many companies don’t actually understand what algorithms or certificates they’ve got deployed. So people actually need to have their inventory in place.
The second thing, of course, which we’ll probably talk about a couple of times, is just the issue of crypto agility. It’s been a bit of a convention in the industry to bolt security on at the last minute. And we generally get away with it, although we don’t necessarily produce the best results. But the difference between what we’ve seen in the past and now, where people really need to be designing crypto-agile implementations, meaning that they can replace certificates, keys, even whole algorithms in their implementations, is that you really have to design a system to deal with that upfront. And in the same way as we have disaster recovery testing, it’s actually the kind of thing that needs to become part of your development testing as well. Because, as one of the people pointed out on a NIST panel I was on recently, it’s very easy to design something which is crypto agile in theory. But it’s like most things: unless you actually try it and make sure that it really does work, you don’t find out that you’ve accidentally introduced a dependency on some old algorithm or something that you’re trying to get rid of.
So there’s those considerations as well that need to be made.
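The “design agility upfront” point can be sketched in a few lines: put the algorithm choice behind a lookup so it lives in configuration rather than being hard-coded. The algorithm names and HMAC-based stand-ins below are hypothetical placeholders for illustration, not a real PQC API.

```python
import hashlib
import hmac

# Minimal crypto-agility sketch: callers name an algorithm via configuration.
# The registry entries here are stand-ins, not real signature schemes.
SIGNERS = {
    "hmac-sha256":   lambda key, msg: hmac.new(key, msg, hashlib.sha256).digest(),
    "hmac-sha3-256": lambda key, msg: hmac.new(key, msg, hashlib.sha3_256).digest(),
}

def sign(algorithm: str, key: bytes, msg: bytes) -> bytes:
    try:
        return SIGNERS[algorithm](key, msg)
    except KeyError:
        raise ValueError(f"unknown algorithm: {algorithm}")

def verify(algorithm: str, key: bytes, msg: bytes, sig: bytes) -> bool:
    # Constant-time comparison of the recomputed tag against the supplied one.
    return hmac.compare_digest(sign(algorithm, key, msg), sig)

# Swapping algorithms is then a configuration change, not a code change:
sig = sign("hmac-sha256", b"key", b"payload")
assert verify("hmac-sha256", b"key", b"payload", sig)
```

Testing that the swap really works, exactly as David suggests, means running your test suite once per registry entry, so a hidden hard-coded dependency on one algorithm shows up before deployment.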
Yesenia (15:43)
Seems like a lot to be considered, especially with the migration and just the bountiful information on post-quantum as well. I want to shift gears just a little bit, throw in some randomness, and talk about the importance of randomness. With many companies promoting things like QRNGs, and research revealing breakable encryption keys mostly due to weak entropy – can you talk about why entropy can be hard to understand and how it fails?
David Hook (16:20)
Yeah, entropy is great. You talk to any physicist and usually what you’ll find out is they’re spending all their time trying to get rid of the noise in their measurement systems. And of course, what they’re talking about there is low entropy. What we want, of course, in cryptography, because we’re computer scientists, we do everything backwards, we actually are looking for high entropy. So high entropy really gives you good quality keys.
That is to say, you can’t predict what actual numbers or bit strings will appear in your keys. And if you can’t predict them, then there’s a pretty good chance nobody else can. That’s the first thing. Of course, one slight difference, again, because we’re computer scientists and we like to make things a bit more difficult than they need to be sometimes, is that in cryptography we actually talk about conditioned entropy, which is what’s defined in a recent NIST standard with the rather catchy name of SP 800-90B.
Yesenia (17:24)
Got you.
David Hook (17:25)
And that’s become, I guess, the current standard for how to do it properly, and it’s been adopted across the globe by a number of countries. Now, one of the interesting things about this, of course, is that quantum effects are actually very good for generating lots of entropy. So we’re now seeing people producing quantum random number generators. And the interesting thing about those is that they can provide a virtually infinite stream of entropy at high speed. This is good, because the other thing we usually do to get entropy is rely on what’s called opportunistic entropy.
So on a server, for example, you go: you know, how fast is my disk going? Where am I getting blocks from? What’s the operating system doing? How long is it taking the user to type something in? Is there network latency for this or that? All these operating system functions that are taking place, how long does it take me to scan a large amount of memory, these all contribute bits of randomness, really, because they’re characteristic of that system and that system only.
The issue, of course, that we’ve got is that nowadays a lot of systems run on what you’d call virtual architectures. So the actual machine that you’re running on is a virtual machine, and it doesn’t necessarily have access to all those hardware characteristics. And then there’s the other problem, you know, which is that when we do stuff fast now, we use high-speed RAM disks, gigabit Ethernet, all this sort of stuff. And suddenly a lot of things that used to introduce random-ish sorts of delays are no longer doing that, because the hardware is running so fast and so hot. Which is great for user response times, but for generating cryptographic keys, maybe not so nice. And this is really where the QRNGs, I think, are coming into their own at the moment, because they provide an independent way of producing entropy, where the opportunistic schemes we previously used are suddenly becoming ineffective.
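A small Python sketch of the predictability point: a deterministically seeded PRNG reproduces exactly the same “key” for anyone who guesses the seed, while the OS CSPRNG behind the `secrets` module draws on the kernel’s entropy pool and is the right tool for key material.

```python
import random
import secrets

# A fixed-seed PRNG stands in for a device that boots with too little entropy.
weak_device = random.Random(42)
attacker    = random.Random(42)   # an attacker who guesses the seed...
# ...derives exactly the same bits, so the "key" is predictable:
assert weak_device.getrandbits(256) == attacker.getrandbits(256)

# The OS CSPRNG is seeded from the kernel's entropy pool instead:
key_1 = secrets.token_bytes(32)   # suitable for cryptographic key material
key_2 = secrets.token_bytes(32)
assert key_1 != key_2             # independent draws, not reproducible
```

On a freshly booted or heavily virtualized machine, the practical advice is the same as David’s: take entropy from the platform’s CSPRNG (or a hardware/QRNG source feeding it), never from an application-level PRNG.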
Tomas Gustavsson (19:34)
I might add that history is kind of littered with severe breakages due to entropy failures. We have everything from the Debian weak keys, which we still suffer from even though it was ages ago, to the ROCA weakness, which led to the replacement of like a hundred million smart cards a bunch of years ago. And there’s still recent research that shows that out on the internet there are breakable RSA keys in active certificates, typically because they were generated, maybe on a constrained device, during the boot-up phase, before enough entropy had been gathered yet. So they become predictable. There’s a lot of bad history around this, and it’s not obvious how to do it correctly. Typically you rely on the platform to give it to you.
But then, when the platform isn’t reliable enough, it fails.
David Hook (20:37)
And the interesting thing about that is that, you know, with the RSA keys that Tomas was talking about, you don’t need a quantum computer to break them. I mean, it’d be nice to have one to break them with, because then you could claim you had a quantum computer. But the reality is you don’t need to wait for a quantum computer: because of the poor choices that have been made around entropy, the keys are breakable now, using conventional computers. So yeah, entropy is important.
Yesenia (21:04)
The TL;DR: entropy is important. And we are heading towards that time of the migration. As we mentioned earlier, a lot of companies just might not be aware. They might not feel like they fall under this migration and these standards that are coming along. So I just wanted to see if y’all can share some practical advice – for organizations that are beginning their post-quantum journey, what are one or two steps that you’d recommend they take now?
Tomas Gustavsson (21:35)
I think, yep, some things we touched on already, like this inventory. In order to migrate away from future-vulnerable cryptography, you have to know what you have and where you have it today. And there are a bunch of ways to do that. It’s typically thought of as kind of the first step, to allow you to do some planning for your migration. I mean, you can do technical testing as well. We’re computer geeks here, so we like the testing.
While you’re doing [unintelligible] and planning, you can test the obvious things that you already know you’ll have to migrate. So there are a bunch of things you can do in parallel. And then, as I think I mentioned, you have to think backwards to realize that even though 2030 or 2035 doesn’t sound like tomorrow, in a cryptographic migration scenario, or a software and hardware replacement cycle, it is virtually tomorrow. As they say, the best time to start was 10 years ago, but the second best time to start is now.
Yesenia (22:49)
I mean, it’s four and a half years away.
David Hook (22:51)
Yeah, and we’ve still got people trying to get off SHA-1. Those days are just gone. The other thing too, of course, is that people need to spend a bit of time looking at this issue of crypto agility. The algorithms that are coming down the pipe at the moment, while they’ve been quite well studied and well researched, are not necessarily going to stay the algorithms that we want to use. That might be because it turns out there are some issues with them that weren’t anticipated, and parameter sizes might need to be changed to make them more secure. Or, since there’s a lot of ongoing research in the area of post-quantum algorithms, it may turn out that there are algorithms that are a lot more efficient or offer smaller key sizes or smaller signature sizes, which certain applications will want to migrate to quite quickly.
So, you know, if you can imagine having a conversation with your boss where suddenly there’s some algorithm that’s going to make you twice as productive, and you have to explain to him that you’ve actually hard-coded the algorithm that you’re using – I don’t think a conversation like that’s going to go very well. So flexibility is required. But as I said, the flexibility needs to be designed into your system, and in the same way as you have disaster recovery testing, it needs to be tested before deployment, so we can actually change the algorithms when we need to.
Tomas Gustavsson (24:14)
Yeah, we often say that since you’re doing this migration work now, it’s an opportunity to look at crypto agility. If you’re changing something, make it crypto agile. And the same thing, you know, the classic advice is: if you rely on vendors, be they commercial or open source, ask them about their preparedness for quantum readiness, when they’re going to be ready. So you have to challenge everything, including us in our community, right? Among the different open source projects, the thing is to not start building any new things which are non-crypto-agile or not prepared for quantum-safe algorithms, and for old stuff, to actually plan. It’s going to take some man-hours to update it to be quantum safe in many cases – in most cases, really.
David Hook (25:10)
Yeah, don’t be afraid to ask the people that are selling you stuff what their agility story is and what their quantum-safe story is. I think all of us need to do that and respond to it.
Yesenia (25:21)
Yes, ask and respond. What would be the areas or organizations that folks – let’s say once they’re aware – could go ahead and ask if they’re getting started?
David Hook (25:30)
So internally, it’s probably your IT people. I would start by asking them, because they’re the people at the coal face. And then, as Tomas said before, it’s the vendors that you’re working with. Because this is one of the things about the whole supply chain – most of us, even in IT, are not using stuff that’s all in-house. We’ve usually got other people somewhere in our supply chain responsible for the systems that we’re making use of internally. And so people need to be asking everyone. And likewise, your suppliers need to be following the same basic principle, making sure that in terms of how their supply chains work, there’s this coverage of: what is the quantum-safe story, how crypto agile are the APIs or products that have been given to them, and what is required to change the things that need to be changed.
Tomas Gustavsson (26:30)
Now this is a great use case for your SBOMs and CBOMs.
David Hook (26:34)
Exactly, their time has arrived.
Yesenia (26:36)
There you go. It has arrived. Time for the BOMs. For those unaware, I just learned this, because I work with AI SBOMs and SBOMs: CBOMs are cryptographic BOMs. So in case someone was like, what is a CBOM? There you go. We dropped the BOM on you.
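For readers wondering what a CBOM looks like in practice, below is an illustrative fragment in the style of CycloneDX, which added cryptographic asset support in version 1.6. Treat the field names and values here as a simplified sketch rather than a definitive schema; the CycloneDX specification is the authoritative reference.

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.6",
  "components": [
    {
      "type": "cryptographic-asset",
      "name": "RSA-2048",
      "cryptoProperties": {
        "assetType": "algorithm",
        "algorithmProperties": {
          "primitive": "signature",
          "parameterSetIdentifier": "2048"
        }
      }
    }
  ]
}
```

Inventory entries like this are what make the planning discussed above tractable: you can query your whole estate for quantum-vulnerable primitives such as RSA and elliptic curve before deciding what to migrate first.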
Let’s move over now to our rapid-fire part of the interview. I’ll pose a few questions, and whoever answers first, answers. Or if you both answer at the same time, we’ll figure that out.
But our first question, Vim or Emacs?
David Hook (27:06)
Vim or Emacs? Vim! Good answer. I didn’t even know that was a question – I thought it was a joke. I’m sorry, I’m very old school.
Tomas Gustavsson (27:19)
I was totally Emacs 20 years ago.
Yesenia (27:22)
You know, we just got to start the first one of throwing you off a little bit. Make sure you’re awake, make sure I’m awake. I know we’re on very different time zones, but…
David Hook (27:29)
I was using VI in 1980. And I’ve never looked back.
Yesenia (27:33)
Our next one is Marvel or DC?
David Hook (27:36)
Yeah, what superheroes do I prefer? Oh yeah. I’m really more of a Godzilla person. You know, Mothra, [unintelligible], that kind of thing. Yeah. I don’t know if Marvel or DC has really captured that for me yet.
Tomas Gustavsson (27:56)
Yeah, I remember Zelda – that was a hero as well. That was in the early 90s, maybe the 80s even.
David Hook (28:05)
Yeah. There you go. Sorry.
Yesenia (28:07)
There you go. No, it’s OK – you got to answer. Sweet or sour?
Tomas Gustavsson (28:10)
Sour.
David Hook (28:11)
Yeah, I’d go sour.
Yesenia (28:12)
Sour. Favorite adult beverage?
Tomas Gustavsson (28:18)
Alcohol.
David Hook (28:22)
Probably malt whiskey, if I was going to be specific. But I have been known to act more broadly, as Tomas has indicated, so probably a more neutral answer.
Yesenia (28:35)
Thomas is like, skip the flavor, just throw in the alcohol.
Tomas Gustavsson (28:40)
Well, I think it has to be good, but it usually involves alcohol in some form or the other.
Yesenia (28:47)
Love it. Last one. Lord of the Rings or Game of Thrones?
David Hook (28:52)
Lord of the Rings. I have absolutely no doubt.
Tomas Gustavsson (28:55)
I have to agree on that one.
Yesenia (28:57)
There you go, there you have it folks, another rapid fire. Gentlemen, any last minute advice or thoughts that you want to leave with the audience?
David Hook (29:05)
Start now.
Tomas Gustavsson (29:07)
And for us, if you’re a computer geek, this is fun. So don’t miss out on the chance to have some fun.
David Hook (29:16)
Yeah, we pride ourselves on our ability to solve problems. So now is a good time to shine.
Yesenia (29:22)
There you have it. It’s time to start now and start with the fun. Thank you both so much for your time today, for your impact and contribution to our communities, and to those in our community helping drive these efforts forward. I look forward to seeing your efforts in 2025. Thank you.
David Hook & Tomas Gustavsson (29:41)
Thank you. Thank you.
 
        
        Foundation honors community achievements and strategic efforts to secure ML pipeline during community event in Amsterdam
AMSTERDAM – OpenSSF Community Day Europe – August 28, 2025 – The Open Source Security Foundation (OpenSSF), a cross-industry initiative of the Linux Foundation that focuses on sustainably securing open source software (OSS), presents the Golden Egg Award during OpenSSF Community Day Europe and celebrates notable momentum across the security industry. The Foundation’s milestones include achievements in AI/ML security, policy education, and global community engagement.
OpenSSF continues to shine a light on those who go above and beyond in our community with the Golden Egg Awards. The Golden Egg symbolizes gratitude for recipients’ selfless dedication to securing open source projects through community engagement, engineering, innovation, and thoughtful leadership. This year, we celebrate:
OpenSSF is supported by more than 118 member organizations and 1,519 technical contributors across OpenSSF projects, serving as a vendor-neutral partner to affiliated open source foundations and projects. As securing the global technology infrastructure continues to get more complex, OpenSSF will remain a trusted home to further the reliability, security, and universal trust of open source software.
Over the past quarter, OpenSSF has made several key achievements in its mission to sustainably secure open source software, including:
“Securing the AI and ML landscape requires a coordinated approach across the entire pipeline,” said Steve Fernandez, General Manager at OpenSSF. “Through our MLSecOps initiatives with OpenSSF members and policy education with our communities, we’re giving practitioners and their organizations actionable guidance to identify vulnerabilities, understand their role in the global regulatory ecosystem, and build a tapestry of trust from data to deployment.”
OpenSSF continues to expand its influence on the international stage. OpenSSF Community Days drew record attendance globally, including standing-room-only participation in India, strong engagement in Japan, and sustained presence in North America.
“As AI and ML adoption grows, so do the security risks. Visualizing Secure MLOps (MLSecOps): A Practical Guide for Building Robust AI/ML Pipeline Security is a practical guide that bridges the gap between ML innovation and security using open-source DevOps tools. It’s a valuable resource for anyone building and securing AI/ML pipelines.” Sarah Evans, Distinguished Engineer, Dell Technologies
“The whitepaper distills our collective expertise into a pragmatic roadmap, pairing open source controls with ML-security threats. Collaborating through the AI/ML Security WG proved that open, vendor-neutral teamwork can significantly accelerate the adoption of secure AI systems.” Andrey Shorov, Senior Security Technology Specialist at Product Security, Ericsson
“The Cybersecurity Skills Framework is more than a checklist — it’s a practical roadmap for embedding security into every layer of enterprise readiness, open source development, and workforce culture across international borders. By aligning skills with real-world global threats, it empowers teams worldwide to build secure software from the start.” Jamie Thomas, Chief Client Innovation Officer and the Enterprise Security Executive, IBM
“Open source is global by design, and so are the challenges we face with new regulations like the EU Cyber Resilience Act,” said Christopher “CRob” Robinson, Chief Security Architect, OpenSSF. “The Global Cyber Policy Working Group helps policymakers understand how open source is built and supports maintainers and manufacturers as they prepare for compliance.”
“The OpenSSF’s brief guide to the Cyber Resilience Act is a critical resource for the open source community, helping developers and contributors understand how the new EU law applies to their projects. It clarifies legal obligations and provides a roadmap for proactively enhancing their code’s security.” Dave Russo, Senior Principal Program Manager, Red Hat Product Security
New and existing OpenSSF members are gathering this week in Amsterdam at the annual OpenSSF Community Day Europe.
OpenSSF will continue its engagement across Europe this fall with participation in the Linux Foundation Europe Member Summit (October 28) and the Linux Foundation Europe Roadshow (October 29), both in Ghent, Belgium. At the Roadshow, OpenSSF will sponsor and host the CRA in Practice: Secure Maintenance track, building on last year’s standing-room-only CRA workshop. On October 30, OpenSSF will co-host the European Open Source Security Forum with CEPS in Brussels, bringing together open source leaders, European policymakers, and security experts to collaborate on the future of open source security policy. A landing page for this event will be available soon; check the OpenSSF events calendar for updates and registration details.
The Open Source Security Foundation (OpenSSF) is a cross-industry organization at the Linux Foundation that brings together the industry’s most important open source security initiatives and the individuals and companies that support them. The OpenSSF is committed to collaboration and working both upstream and with existing communities to advance open source security for all. For more information, please visit us at openssf.org.
Media Contact
Grace Lucier
The Linux Foundation
In this episode of “What’s in the SOSS,” Derek Zimmer and Amir Montezary from the Open Source Technology Improvement Fund (OSTIF) discuss their decade-long mission of providing security resources to open source projects. They focus on collaborative, maintainer-centric security audits that help projects improve their security posture through expert third-party reviews, without creating fear or overwhelming developers.
00:00 Introduction
00:22 Podcast Welcome
01:04 OSTIF Founders Introduction
02:31 OSTIF’s Mission and Approach
05:28 Relationship Management and Expertise
08:01 Evolution of Security Engagement Methods
12:15 Making Security Audits Less Intimidating
18:00 Rapid Fire Questions
20:45 Closing, Call to Action
CRob 0:22
Welcome, welcome, welcome to What’s in the SOSS, the OpenSSF podcast, where I get to talk to some of the amazing people on the planet who are helping secure the open source software we all know, use every day, and love. Today, I have some very special friends with us who are doing the yeoman’s work of helping projects improve their security posture. I have Amir and Derek from OSTIF. Can I give you guys just a brief moment to introduce yourselves?
Derek Zimmer: 0:54
Sure, I’m Derek Zimmer, founder of OSTIF. We’ve been doing this for 10 years now. Take it away, Amir.
Amir Montezary: 1:04
Thank you. Amir Montezary, Managing Director of OSTIF, the Open Source Technology Improvement Fund. Absolutely thrilled to be here on the podcast, to be talking with you, CRob, and to be talking about the work that we do. As Derek mentioned, this is our 10-year anniversary. So, coming up on 10 years of really developing this organization and its processes, and fine-tuning what we do and the value that we provide to the open source ecosystem. Absolutely thrilled to be here and to talk about it.
CRob 1:40
That’s amazing. So happy birthday, OSTIF. For our audience that might not be familiar with your work, could you tell us what OSTIF is and what you all do?
Derek 1:53
Sure. So we founded the organization 10 years ago on the idea that we needed a maintainer-centric organization that could bring security resources to projects. There were some efforts in the past to do something similar to what we do, but most of the time those were very corporate-centric. The ideas that circulated around them were very much dictating what open source should be doing, rather than “we’re here to help, and here are some resources.” That different perspective was the kickoff for why we wanted to create something different.
Amir 2:36
Yeah, absolutely. And still today we see that open source projects, because of their very nature, need a very strong, independent body to help them. We provide that platform: being a nonprofit organization, being vendor-neutral, being neutral in all senses of the word, and solely focused on, as Derek mentioned, helping projects and getting them the security resources that they need. Most importantly, being able to provide those resources in a way that directly impacts the project and its security posture was really what drove us to start this organization. Typically, open source developers and maintainers are not security experts, and that’s okay. Security is a very difficult topic, and like a lot of other things, it’s best left to the experts. So while there are of course things individual developers and maintainers can do to improve their hygiene, so to speak, and the security posture of their projects, we found that getting independent, third-party expert audit review really works when it’s meant to be collaborative: these auditors work with the maintainers, as opposed to dictating to them or just telling them things to do, and together they improve the holistic security posture of the project. We found that to be really successful, and a lot of research suggests that this is a very good practice. I come from a background in IT auditing, reviewing critical payment systems in the United States, and we saw that that level of independent, third-party review, that kind of due diligence, really helps improve the state or posture of a software project. So it was really founded on the need for it to exist.
We saw there was a big need for this: a mechanism to get security help to open source projects, working directly with maintainers, and doing it in a way that is inclusive, impactful, and, most importantly, efficient. That is what drove us to do what we do. In terms of how we do that, it’s largely a lot of relationship management. Over the last 10 years we’ve built a really vast network of security experts and researchers, many of whom are solely focused on the open source security space, so they understand some of the idiosyncrasies involved in open source software and can actually provide meaningful review work and collaboration. We essentially handle that whole process, because there are quite a lot of moving parts. Typically you have a separate body funding the work; you have a maintainer or contributor base that could be distributed around the world; and you don’t always have established decision-making structures, as you might see in a corporate setting or a more commercial environment. So we handle all of that: the goodwill building, relationship building, project management, contract management, basically all of the pieces. That way, a funder, for example someone who wants to fund security outcomes, or the project that would like to improve its security posture, can just focus on that, and we, as an independent body, handle all of the minutiae, administrivia, facilitation, and management to make it a very streamlined and efficient process. So that’s a high-level overview.
CRob 7:23
As you both are aware, you have been long-time participants and partners with our foundation and also our friends over at Alpha-Omega. From your perspective, with your 10 years of working in this particular space, what do you all see as the main value that projects get out of these types of engagements?
Derek 7:47
So actually, this has changed over time, because we started out experimentally trying things just to see what works and what doesn’t. Initially, we started out as a bug bounty organization. Our concept was that companies would donate money to us, we’d establish bug bounties for projects, and then those projects would get the security benefits. What we quickly found out was that this does not work well for projects that don’t have a lot of security resources, because they get buried in bunk reports, things that are not actually problems. Then there are also the bug bounties where some dependency has a vulnerability, and someone will go shop around to every project that depends on that dependency and try to get a bug bounty out of it, and so on and so forth. And increasingly, AI is also becoming a problem, because it is sending maintainers automated reports which are not accurate and have to be thrown away, and it can do that at a much greater pace than an individual could just a few years ago. So essentially, we abandoned that entire approach and went to the idea of having professionals come in, give all of the support they can give to the project, meet them where they are, and then extend their testing so that they get long-term benefit from the review as well. So it started out with skinned knees, finding stuff that didn’t really work, and then progressed over time, after a lot of feedback, to where we are now, which seems to be extremely helpful.
Amir 09:34
Yeah, and to echo that, I would say the main value of our engagements is that direct impact. We go directly to the project and work with the maintainers or contributors, actually going to the source: the source as in reviewing and improving the code of a project and its design. As Derek mentioned, one way we’ve added even more value as part of our engagements over time is creating or augmenting tooling for projects as well, so that they can continue to have security scrutiny and tools that help them in their development cycles and help the project mature. So I would say that direct focus on the projects, on their code base, and on the tried-and-true practice of expert third-party review is how we’re really delivering a lot of value through our engagements. We’re coming up, as I mentioned, on our 10-year anniversary next month, and I think we have found well over 100 high or critical vulnerabilities in these projects as part of our audits. We’re really proud of what we’ve been able to do and the positive impact we’ve been able to make. I think that really comes from sticking to our mission and our commitment to this best practice of expert third-party review, but doing it in a way that is collaborative and impactful. And we didn’t just find all of those vulnerabilities; those have all been fixed and remediated. A good portion of them were design bugs, or classes of bugs, that could very well eliminate future problems very effectively, unfortunately not in a very easy-to-measure way, but the feedback suggests that the projects are in fact in a much better state after our engagements. So we’re really happy to be able to do that.
CRob 12:15
That’s phenomenal. I love the fact that you all started off in one direction, then you learned a little bit and pivoted, so you’ve evolved. Thinking about your engagements over the last almost-decade, is there one thing you wish a project or a developer knew or did prior to coming into one of these engagements that would make the whole enterprise more successful or go more smoothly? What is one thing you wish people did or knew?
Derek 12:46
So the big takeaway is that if you do a security engagement with us, it’s not scary, because we are here to help. We will offer you any support and resources that we have. We’re not going to find a big pile of bugs that you don’t understand, dump a document on you, and walk away. The whole point of this is to help projects improve by giving them everything they need and meeting them where they are. The FAQs we usually get from maintainers are: how long is this going to take? How much time do I have to invest in this? And then there are always the questions about whether we are going to drop zero-days on them at the end of the engagement. Of course, we follow disclosure policies that everybody agrees on, and we are also very flexible. So if there’s a design-level problem that requires a big rewrite, we’re not going to just drop it on the internet in 90 days. We’re going to be forgiving. The pressure from us is very low, and I think that’s one thing maintainers would really like to hear about working with us.
Amir 14:07
Yeah, plus one to that, Derek. I would say it’s very much meant to be a collaboration, an engagement that is collaborative in nature. And I do wish more developers knew that. Again, to echo you, Derek, it’s not a scary thing; it’s not like you’re going in front of a tribunal. It’s very much: let’s work together to make this project better. I’ve observed personally that it’s one of those things where the more you put in, the more you get out. The more that developers, maintainers, and contributors are able to put into the engagement, in terms of providing audit teams with insight, feedback, or context, the better, because context is the piece that is really missing from, as Derek mentioned, the tooling and some of the other at-scale solutions. They really lack the context that is so important when it comes to security in a code base. So it definitely has a multiplier effect: the more engaged we’ve seen projects be in the audit, the better the results we’ve typically found. I can even give a direct case study from one engagement we were involved in. The audit team and the developer team happened to be a train ride apart, so they were able to arrange essentially an in-person orientation, really just to discuss and get to know each other. It was a really cool thing, and we learned that it led to a much better understanding of the code base as the team was auditing it, which allowed them to find more significant findings, because they had that greater understanding as a result of the context provided by the team. And actually, yesterday at one of our virtual meetups, we learned that that same audit team did something similar on another engagement.
In that case it wasn’t a train ride, it was a flight, but flights in Europe are shorter, and they were able to get together with the maintainers of the project and do a very similar thing: meet, discuss, and come away with a much better understanding of the project, which allowed the auditors to add that much more value as part of the audit. So to sum it up, I would say: add value. That’s how I would sum it up. I wish more developers knew that this is about adding value. It’s about collaborating. It’s not about making you feel bad for making mistakes or anything like that. Human beings will always have that human error, and it’s totally normal and fine. That’s why this practice is so important: independent review is such a common practice in software, and really in the greater landscape. So, yeah, it’s meant to be collaborative. It’s not a scary thing. It’s really more about, as Derek said, helping and giving you resources to make your project better than anything else.
CRob 17:53
That’s amazing, and I really appreciate the innovative ideas and the coming-to-where-the-project-is mentality, and that you guys are making sure security audits aren’t scary at all. But let’s move on to the rapid-fire part of the interview. Are you ready for rapid fire? I’ve got a couple of wacky questions; just give me the first thoughts that come out of your mouths. vi or Emacs?
Derek 18:22
Oh, vi.
Amir 18:25
Yeah, second that.
CRob 18:26
Excellent. There are no wrong answers, but there are better answers than others, right? What’s your favorite open source mascot?
Derek 18:36
Oh, I’d have to say the VLC cone, just because it’s nonsense, and they admit that it’s nonsense, and they constantly get asked about it and give nonsense answers. So it’s fantastic.
Amir 18:51
That’s a good point. And you can always tell who the VLC people are at FOSDEM, for example, because they have the big cone on their heads. That’s a really good question, and there are a lot of good ones out there. I’ve honestly found that the simpler the mascots are, the more I tend to remember them. But I’d say there are too many good ones to pick, so…
CRob 19:16
That’s a very diplomatic answer. I appreciate that. Spicy or mild food?
Derek 19:22
spicy all the way
CRob 19:28
Nice, that is always the right answer.
Amir 19:30
Some of our greatest ideas came over spicy food. So…
CRob 19:35
And finally, and most importantly, Star Trek, or Star Wars.
Derek 19:40
So I’d say I’m Star Trek. I like the idea of everybody working together toward, you know, a peaceful, wide-reaching society.
CRob 19:52
Very open source of you. That’s awesome.
Amir 19:54
I would also say Star Trek. I missed the Star Wars lore growing up. My experience with Star Wars: I had a high school teacher who, anytime he couldn’t make class, instead of getting a substitute teacher, would just play the beginning of the first Star Wars movie, I think Episode Four. So I’ve seen the first 30 minutes plenty of times, and maybe that left a bad taste in my mouth for Star Wars.
CRob 20:27
I see we’ve had very different life experiences. That’s great. Well, thank you, gentlemen, I really appreciate you putting up with the nonsense. And finally, as we wrap up, do you have a call to action for the community or developers as we close out?
Derek 20:45
Sure. I would say we really operate on the principles of spoon theory. Have you ever heard of that? It comes from psychology, and the principle is that you have only so many spoons of energy that you can devote to various things. The way we apply this to open source is by thinking about the security knowledge and the general energy available among open source communities. Some of them are very well supported: they have dedicated staff who are paid, and it’s their job to be there and be available. And then you have the complete opposite end of the spectrum, where a solo maintainer invented a thing, and that thing somehow became a really important piece of infrastructure. They don’t have any security knowledge, so they do what they can, reading documents and whatever, but they don’t have the available energy to invest in security. That’s where I’m coming from when I say meet projects where they are. The call to action would be: if you are a security researcher and you’re interacting with open source, what you need to consider is their position on that spectrum of knowledge and available energy. So…
CRob 22:09
Amir?
Amir 22:10
Yeah, plus one to that. And to add, I would just say that if there’s one thing I’ve learned from doing this for 10 years, it’s that this is important work, and there’s an almost unlimited demand for it. I was really shocked when I saw how some of the biggest names in open source, projects that are household names we hear every day, really needed almost the same security help as the smaller projects, if not more. For example, some of the really big projects, because they have so much more scrutiny, have a lot more noise to go through, or they could have huge backlogs of bugs that they just haven’t had the time or resources to work through. So my call to action would be: we are one tool in the toolkit, but I do think what we do really helps open source projects, and we could do more with more. We always try to do the most we can with what we have, but with more resources we could add more help for projects, more diligence, and more ongoing support. The work that we’ve been doing on tooling augmentations, for example, has been really successful. As a small organization, we are always happy and willing to take on more work, so we’re always open to new collaborations and new collaborators, and to helping however we can to fulfill our mission, which has been to help open source projects improve their security. So yeah, come talk to us. We’re involved in a lot of the Open Source Security Foundation working groups and events, and as you mentioned, we’ve been a strategic partner of the Linux Foundation and OpenSSF for some time now. We are always happy to collaborate and help however we can, in the nature of open source. And I’d say that’s all I have.
CRob 24:38
Derek and Amir from OSTIF, thank you both for your amazing work and for collaborating with our developer community. And that’s going to be a wrap. Happy open sourcing, everybody. We’ll talk to you all soon. Goodbye.
Amir
Cheers, everyone. Thanks.
Outro
Like what you’re hearing? Be sure to subscribe to What’s in the SOSS on Spotify, Apple Podcasts, AntennaPod, Pocket Casts, or wherever you get your podcasts. There’s a lot going on with the OpenSSF, and many ways to stay on top of it all. Check out the newsletter for open source news, upcoming events, and other happenings; go to openssf.org/newsletter to subscribe. Connect with us on LinkedIn for the most up-to-date OpenSSF news and insight, and be a part of the OpenSSF community at openssf.org/getinvolved. Thanks for listening, and we’ll talk to you next time on What’s in the SOSS.
