Monthly Archives

August 2025

OpenSSF Celebrates Global Momentum, AI/ML Security Initiatives and Golden Egg Award Winners at Community Day Europe


Foundation honors community achievements and strategic efforts to secure the ML pipeline during community event in Amsterdam

AMSTERDAM – OpenSSF Community Day Europe – August 28, 2025 – The Open Source Security Foundation (OpenSSF), a cross-industry initiative of the Linux Foundation that focuses on sustainably securing open source software (OSS), today presented the Golden Egg Awards during OpenSSF Community Day Europe and celebrated notable momentum across the security industry. The Foundation’s milestones include achievements in AI/ML security, policy education, and global community engagement.

Golden Egg Award Recipients

OpenSSF continues to shine a light on those who go above and beyond in our community with the Golden Egg Awards. The Golden Egg symbolizes gratitude for recipients’ selfless dedication to securing open source projects through community engagement, engineering, innovation, and thoughtful leadership. This year, we celebrate:

  • Ben Cotton (Kusari) – for work on GUAC and the Open Source Project Security Baseline (OSPS Baseline)
  • Kairo de Araujo (Eclipse Foundation) – for maintaining RSTUF and participation in the Securing Software Repositories Working Group
  • Katherine Druckman (Independent) – for dedication to community growth and developer relations (DevRel)
  • Eddie Knight (Sonatype) – for advancing OSPS Baseline and creating project courses that strengthen open source security education
  • Georg Kunz (Ericsson) – for leadership and contributions within the Best Practices Working Group

Achievements and Milestones

OpenSSF is supported by more than 118 member organizations and 1,519 technical contributors across OpenSSF projects, serving as a vendor-neutral partner to affiliated open source foundations and projects. As securing global technology infrastructure grows more complex, OpenSSF remains a trusted home for furthering the reliability, security, and universal trust of open source software.

Over the past quarter, OpenSSF has made several key achievements in its mission to sustainably secure open source software, including:

  • The release of a whitepaper by the AI/ML Security Working Group on securing the AI lifecycle, which maps OWASP ML Top 10 threats to MLOps stages and highlights tools like Sigstore and OpenSSF Scorecard.
  • Success at the AI Cyber Challenge (AIxCC) at DEF CON. OpenSSF participated as a challenge advisor and will be working with DARPA and ARPA-H to open source the winning systems, infrastructure, and data from the competition.
  • Co-launching the Cybersecurity Skills Framework, a global reference guide that helps organizations identify and address critical cybersecurity competencies across a broad range of IT job families.
  • Publishing the Cyber Resilience Act (CRA) Brief Guide for OSS Developers, a practical overview to help open source maintainers and contributors understand when CRA requirements apply, what obligations exist, and how to prepare — paired with the free express course Understanding the EU Cyber Resilience Act (CRA) (LFEL1001) for those who want deeper learning and a digital badge.
  • Co-launching the Global Cyber Policy Working Group to collaborate on global cybersecurity-related legislation, frameworks, and standards that facilitate conformance to regulatory requirements by open source projects and their consumers, with an initial focus on the EU’s CRA legislation.

“Securing the AI and ML landscape requires a coordinated approach across the entire pipeline,” said Steve Fernandez, General Manager at OpenSSF. “Through our MLSecOps initiatives with OpenSSF members and policy education with our communities, we’re giving practitioners and their organizations actionable guidance to identify vulnerabilities, understand their role in the global regulatory ecosystem, and build a tapestry of trust from data to deployment.”

Global Community Engagement

OpenSSF continues to expand its influence on the international stage. OpenSSF Community Days drew record attendance globally, including standing-room-only participation in India, strong engagement in Japan, and sustained presence in North America.

Supporting Quotes

“As AI and ML adoption grows, so do the security risks. Visualizing Secure MLOps (MLSecOps): A Practical Guide for Building Robust AI/ML Pipeline Security is a practical guide that bridges the gap between ML innovation and security using open-source DevOps tools. It’s a valuable resource for anyone building and securing AI/ML pipelines.” – Sarah Evans, Distinguished Engineer, Dell Technologies

“The whitepaper distills our collective expertise into a pragmatic roadmap, pairing open source controls with ML-security threats. Collaborating through the AI/ML Security WG proved that open, vendor-neutral teamwork can significantly accelerate the adoption of secure AI systems.” – Andrey Shorov, Senior Security Technology Specialist at Product Security, Ericsson

“The Cybersecurity Skills Framework is more than a checklist — it’s a practical roadmap for embedding security into every layer of enterprise readiness, open source development, and workforce culture across international borders. By aligning skills with real-world global threats, it empowers teams worldwide to build secure software from the start.” – Jamie Thomas, Chief Client Innovation Officer and Enterprise Security Executive, IBM

“Open source is global by design, and so are the challenges we face with new regulations like the EU Cyber Resilience Act,” said Christopher “CRob” Robinson, Chief Security Architect, OpenSSF. “The Global Cyber Policy Working Group helps policymakers understand how open source is built and supports maintainers and manufacturers as they prepare for compliance.”

“The OpenSSF’s brief guide to the Cyber Resilience Act is a critical resource for the open source community, helping developers and contributors understand how the new EU law applies to their projects. It clarifies legal obligations and provides a roadmap for proactively enhancing their code’s security.” – Dave Russo, Senior Principal Program Manager, Red Hat Product Security

Events and Gatherings

New and existing OpenSSF members are gathering this week in Amsterdam at the annual OpenSSF Community Day Europe.

OpenSSF will continue its engagement across Europe this fall with participation in the Linux Foundation Europe Member Summit (October 28) and the Linux Foundation Europe Roadshow (October 29), both in Ghent, Belgium. At the Roadshow, OpenSSF will sponsor and host the CRA in Practice: Secure Maintenance track, building on last year’s standing-room-only CRA workshop. On October 30, OpenSSF will co-host the European Open Source Security Forum with CEPS in Brussels, bringing together open source leaders, European policymakers, and security experts to collaborate on the future of open source security policy. A landing page for this event will be available soon; check the OpenSSF events calendar for updates and registration details.

Additional Resources

About the OpenSSF

The Open Source Security Foundation (OpenSSF) is a cross-industry organization at the Linux Foundation that brings together the industry’s most important open source security initiatives and the individuals and companies that support them. The OpenSSF is committed to collaboration and working both upstream and with existing communities to advance open source security for all. For more information, please visit us at openssf.org.

Media Contact
Grace Lucier
The Linux Foundation

pr@linuxfoundation.org 

Trustify joins GUAC


By Ben Cotton and Dejan Bosanac

The superpower of open source is multiple people working together on a common goal. That works for projects, too. GUAC and Trustify are two projects bringing visibility to the software supply chain. Today, they’re combining under the GUAC umbrella. With Red Hat’s contribution of Trustify to the GUAC project, the two combine to create a unified effort to address the challenges of consuming, processing, and utilizing supply chain security metadata at scale.

Why Join?

The Graph for Understanding Artifact Composition (GUAC) project was created to bring understanding to software supply chains. GUAC ingests software bills of materials (SBOMs) and enriches them with additional data to create a queryable graph of the software supply chain. Trustify also ingests and manages SBOMs, with a focus on security and compliance. With so much overlap, it makes sense to combine our efforts.
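The core idea is easy to picture in miniature: SBOM entries become nodes, dependency relationships become edges, and questions about the supply chain become graph queries. The sketch below is purely illustrative, with hypothetical package names; it is not GUAC's actual schema or API:

```python
# Illustrative sketch of a supply-chain knowledge graph: SBOM entries
# become nodes, dependency relationships become edges. This mirrors the
# idea behind GUAC, not its real data model.
from collections import defaultdict

def build_graph(sbom_entries):
    """Map each package to the packages it depends on."""
    graph = defaultdict(set)
    for entry in sbom_entries:
        graph[entry["name"]].update(entry.get("depends_on", []))
    return graph

def transitive_deps(graph, package):
    """Query: everything reachable from a package (its dependency closure)."""
    seen, stack = set(), [package]
    while stack:
        for dep in graph[stack.pop()]:
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen

sbom = [
    {"name": "my-app", "depends_on": ["libfoo", "libbar"]},
    {"name": "libfoo", "depends_on": ["libssl"]},
    {"name": "libbar", "depends_on": []},
]
print(sorted(transitive_deps(build_graph(sbom), "my-app")))
# prints ['libbar', 'libfoo', 'libssl']
```

A real deployment ingests thousands of SPDX or CycloneDX documents and enriches them with vulnerability and provenance data, but the query pattern is the same reachability question shown here.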

The grand vision for this evolved community is to become the central hub within OpenSSF for initiatives focused on building and using supply chain knowledge graphs. This includes: defining & promoting common standards, data models, & ontologies; developing shared infrastructure & libraries; improving the overall tooling ecosystem; fostering collaboration & knowledge sharing; and providing a clear & welcoming community for contributors.

What’s Next?

Right now, we’re working on the basic logistics: migrating repositories, updating websites, merging documentation. We have created a new GUAC Steering Committee that oversees two core projects, Graph for Understanding Artifact Composition (GUAC) and Trustify, as well as subprojects like sw-id-core and GUAC Visualizer. These projects have their own maintainers, but we expect to see a lot of cross-collaboration as everyone gets settled in.

If you’d like to learn more, join Ben Cotton and Dejan Bosanac at OpenSSF Community Day Europe for their talk on Thursday 28 August. If you can’t make it to Amsterdam, the community page has all of the ways you can engage with our community.

Author Bios

Ben Cotton is the open source community lead at Kusari, where he contributes to GUAC and leads the OSPS Baseline SIG. He has over a decade of leadership experience in Fedora and other open source communities. His career has taken him through the public and private sector in roles that include desktop support, high-performance computing administration, marketing, and program management. Ben is the author of Program Management for Open Source Projects and has contributed to the book Human at a Distance and to articles in The Next Platform, Opensource.com, Scientific Computing, and more.

Dejan Bosanac is a software engineer at Red Hat with an interest in open source and integrating systems. Over the years he’s been involved in various open source communities, tackling problems like software supply chain security, IoT cloud platforms, edge computing, and enterprise messaging.


What’s in the SOSS? Podcast #38 – Securing AI: A Conversation with Sarah Evans on OpenSSF’s AI/ML Initiatives


Summary

In this episode of “What’s in the SOSS,” we welcome back Sarah Evans, Distinguished Engineer at Dell Technologies and a key figure in the OpenSSF’s AI/ML Security working group. Sarah discusses the critical work being done to extend secure software development practices to the rapidly evolving field of AI. She dives into the AI Model Signing project, the groundbreaking MLSecOps whitepaper developed in partnership with Ericsson, and the crucial work of identifying and addressing new personas in AI/ML operations. Tune in to learn how OpenSSF is shaping the future of AI security and what challenges and opportunities lie ahead.

Conversation Highlights

0:00 Welcome and Introduction to Sarah Evans
0:48 Sarah Evans: Role at Dell Technologies and Involvement in OpenSSF
1:38 The OpenSSF AI/ML Working Group: Genesis and Goals
3:37 Deep Dive: The AI Model Signing Project with Sigstore
4:28 AI Model Signing: Benefits for Developers
5:20 Transition to the MLSecOps White Paper
5:49 The Mission of the MLSecOps White Paper: Addressing Industry Gaps
7:00 Collaboration with Ericsson on the MLSecOps White Paper
8:15 Identifying and Addressing New Personas in AI/ML Ops
10:04 The Power of Open Source in Extending Previous Work
10:15 Future Directions for OpenSSF’s AI/ML Strategy
11:21 OpenSSF’s Broader AI Security Focus
12:08 Sneak Peek: New Companion Video Podcast on AI Security
12:31 Sarah’s Personal Focus: The Year of the Agents (2025)
13:00 Security Concerns: Bringing Together Data Models and Code in AI Applications
14:00 Conclusion and Thanks

Transcript

0:00 Intro Music & Promo Clip: We have so much experience in applying secure software development to CI/CD and software, we can extend what we’ve learned to the data teams and to those AI/ML engineering teams because ultimately, I don’t think that we want a world where we have to do separate security governance across AI apps.

CRob:

0:20: Welcome, welcome, welcome to What’s in the SOSS, where we talk to interesting characters from around the open source security ecosystem: maintainers, engineers, thought leaders, contributors. I just get to talk to a lot of really great people along the way. Today we have a friend of the show; we’ve already had discussions with her in the past. I am so pleased and proud to introduce my friend Sarah Evans. Sarah, for our audience, could you maybe just remind them who you are, what you do, and what you’ve been up to since our last talk?

Sarah Evans:

0:57: Well, thanks for having me here. I’m a distinguished engineer at Dell Technologies, and I have two roles. One is I do security applied research for my company, looking at the future of security in our products and what innovation we need to explore to improve security by design. My second role is to activate my company to participate in OpenSSF, which I have thoroughly enjoyed, getting to work with friends such as yourself. I am very active and engaged in the AI/ML working group, trying to advocate for AI security.

CRob:

1:37: Awesome, yeah. And that actually brings us to our talk today. Our friends within your working group, the AI/ML working group, have had a flurry of activity lately. I would love to talk about that, but first off, let’s give the audience some context. What is this group, and what are some of its goals?

Sarah Evans:

1:58: Yeah, so the AI/ML working group really came into being about a year and a half ago, I think, and we needed a space where we could talk about how the work that software developers were doing would change as they started to build applications that had AI in them. So were there things that we were doing today that could apply to the way the technology was changing?

One of the initial concerns was: secure software development, we know a lot about that, but we may know less about AI. So is a home for AI in OpenSSF appropriate? Should we be deeply partnering with some of the other foundations that are creating these data sets, creating the tools and models? And so we started the working group, where our commitment to the tech was that we would deeply engage with the other groups around the ecosystem, which we have done. But then we’ve also been looking for those places that are uniquely in the OpenSSF wheelhouse, or swim lane of expertise, on extending software security to AI applications, and I think that we’ve done a really good job of exploring some of those places.

One of them has been a white paper that we partnered with another member, Ericsson, to deliver, and that is something that we’re very proud of sharing with the community.

CRob:

3:28: Great, I’m really excited to talk about these projects because I, for one, welcome our robot overlords. Let’s start off: you guys had a big announcement that really seems to have captured the imagination of the community. Let’s talk about the AI model signing project.

Sarah Evans:

3:47: Yes, so the model signing project, we worked that as a special interest group within our working group. We were approached by some folks who were working in partnership with Sigstore, and the idea was that if you can use Sigstore to sign code, could you extend Sigstore to sign a model and close a gap in the industry? And as you know, we were able to do that. There was a team of people that came together in open source fashion to extend a tool to a new use case. And that’s just been very exciting to watch evolve.

CRob

4:27: That’s awesome.

Sarah Evans:

4:28: So thinking about it from the developer perspective: I’m a developer working in AI, how does this help me?

CRob

4:36: Right?

Sarah Evans:

4:36: So right now, if you are pulling a model off of Hugging Face, as an example, you don’t have any cryptographic digital signature on that model that verifies it the way you would with code. And so if that model has been signed with the Sigstore components, then you have the information that you would use to validate code, and you can follow some of those similar processes to validate a signed model.

CRob

5:07: Pretty cool.

Sarah Evans

5:08: Yeah, it’s a really good use case for supply chain security, and for extending what we know about software to models and data that are part of our AI applications.
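The check Sarah describes boils down to comparing a downloaded model's digest against the digest the author signed. The sketch below illustrates only that comparison step; the actual Sigstore model-signing tooling also involves certificates, signer identities, and a transparency log:

```python
# Simplified sketch of digest-based model verification. The real
# Sigstore model-signing project signs a manifest of file hashes;
# here we illustrate only the hash-comparison step that verification
# ultimately relies on.
import hashlib

def model_digest(model_bytes: bytes) -> str:
    """Hash the serialized model, analogous to hashing a source artifact."""
    return hashlib.sha256(model_bytes).hexdigest()

def verify(model_bytes: bytes, signed_digest: str) -> bool:
    """A downloaded model is trusted only if its digest matches the signed one."""
    return model_digest(model_bytes) == signed_digest

published = b"weights-v1"                      # stand-in for real model weights
digest_at_signing = model_digest(published)    # what the author would sign

assert verify(published, digest_at_signing)                    # untampered model passes
assert not verify(b"weights-v1-tampered", digest_at_signing)   # tampering is detected
```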

CRob

5:20: This seems to be kind of a theme for you: taking classic software security and applying it to the newer technologies. So let’s move on to the white paper. You and I have collaborated around some graphics for this, and you’ve got a couple of folks you’re working with on the white paper that you’re shepherding through review and publication, and you should be able to read that now. So let’s talk about the white paper: what’s it about? What’s the mission of it?

Sarah Evans

5:49: When the AI/ML working group first kicked off, I knew that we had seen this evolution of developing on open source software and processes called DevOps and then those evolved to DevSecOps over time. And so with the disruptive technology around AI/ML, I wanted to know what were the processes that a data scientist or an AI/ML engineer used and did they have the security governance they needed in their operational processes.

So I started to look at what is DataOps, what is MLOps, what is LLMOps, all the alphabet soup of ops. And I couldn’t find a lot of information online. And so I thought: this is an industry gap that we have, and we have so much experience in applying secure software development to CI/CD and software.

We can extend what we’ve learned to the data teams and to those AI/ML engineering teams because ultimately I don’t think that we want a world where we have to do separate security governance across AI apps that have these different operational pieces in them.

I was doing my research and I found a white paper by Ericsson on MLSecOps in the telco environment. Ericsson being a fellow member of OpenSSF, I worked through their OSPO and through some of the connections that we have in OpenSSF and said, hey, can you introduce me to those authors? I would love to see if we could up-level that as a general resource to the community as an OpenSSF whitepaper. We were able to do that, and they have been a fantastic partner in collaboration.

And so now we have for the industry an MLSecOps white paper, a reference architecture, and some documentation about extending in two ways:

  1. One is if you’re a software developer now and you’re being asked to build an AI app, you have more information about what goes on in that MLOps environment.
  2. And if you are a person who’s creating an MLOps app and you haven’t had secure development training before, you now have a resource. So it really serves both existing members of our community and potential new members.

CRob

8:14: That’s really awesome. Congrats on that. Another area that we’ve collaborated on: the OpenSSF has a series of personas. We have five personas, and that kind of organizes and drives our work. We have a maintainer/developer persona, an OSPO persona, an executive persona, and so forth. But one thing you realized early on as you were developing this white paper is that there were some gaps. Could you maybe talk about those gaps and what we’ve done to address them?

Sarah Evans

8:46: Yeah, where we found the gaps was in sub-personas. Those main core personas that OpenSSF has been working with were just solid. We still have developers and maintainers, we still have security engineers, we still have folks working in our open source program offices, but the sub-personas were very software-developer focused.

They really didn’t include some of the personas that we were seeing related to curating data sets, putting together end-to-end architectures, or putting together a pipeline for machine learning as a data engineer. So I worked based off of the language in that original Ericsson white paper that we have up-leveled to an OpenSSF white paper, to take those personas that work in that MLOps space and add them as sub-personas within OpenSSF. So now we can all start to have the same language and understanding around who might be developing software applications, new members of our community that we want to be inclusive of, and have language to understand how to reach them and partner with them.

CRob

10:04: I just love the power of open source where you find some previous work, you get value out of it, and then you expand it.  Thank you so much for contributing that back.

Sarah Evans

10:13: Absolutely.

CRob

10:15: And where are you going from here? Where are the next steps around the white paper?

Sarah

10:19: I think we want to spend some time championing it and meeting with our community. We’ve discovered that potentially OpenSSF would like to have a broader AI/ML strategy or program, so really understanding how those strategic efforts will evolve and making sure that we can plug into those and provide resources that strategically move OpenSSF forward into this new space. Those could include an MLSecOps document or maybe even a converged enterprise view of multiple ops, but we’re also open to looking at some of the other areas that have been identified, such as dealing with potentially AI slop or other concerns related to AI/ML.

I think there’s a really great opportunity for OpenSSF to look through our stack of tools and processes and understand how we can extend those to AI/ML use cases and applications.

I know that there is an opportunity to have a strategic program around AI and securing AI applications, and I’m really excited and looking forward to what the future of OpenSSF tools, processes, procedures, best practices look like so we can really support our software developers as they’re developing secure AI applications.

CRob

11:12: That’s awesome. I’m really looking forward to collaborating with you all and kind of championing and showcasing the work going forward. So thank you very much.

Let’s move along. We will be creating a new companion video podcast focused on this amazing community of AI security experts we have here within OpenSSF and within the broader community, and we’ll be talking about AI security news and topics. And I’m going to take this opportunity to give the listeners a sneak peek of what we might be discussing very soon. So from your perspective, Sarah, beyond these cool projects that you’re working on, what are you personally keeping an eye on in this fast-moving AI space?

Sarah Evans

12:42: Well, I’ll tell you, 2025 is the year of the agents, and understanding the accelerated rate of agents and the impact they will have on AI applications has been something I’ve been spending a lot of time on.

CRob

12:56: Pretty cool. I’m looking forward to learning more with everyone together. And from your perspective again, what’s keeping you up at night in regards to this crazy AI/ML, LLM, GenAI, agentic, blah blah blah, machine space? What are you concerned about from a security perspective?

Sarah Evans

I think for me, from a security perspective, bringing together data and models and deploying them with code really makes an end-to-end AI application. It puts a lot of pressure on teams that may not have had to work tightly together before to begin to do so. And so that’s why the personas, the converged operations, and thinking about how we apply the security we know to new areas is so important, because we don’t have a moment to lose.

There’s such accelerated excitement around leveraging AI and leveraging agents that it’s going to be very important for us to have a common way to talk to each other and to begin to solve problems and challenges so that we can innovate with this technology.

CRob

13:59: Excellent. Well, Sarah, I really appreciate your time coming and talking to us about these amazing things going on and kind of giving us a sneak peek into the future. And I want to thank you again on behalf of the foundation, our community, and all the maintainers and enterprises that we serve. So thanks for showing up today.

Sarah Evans

14:17: Yeah, thanks, CRob.

CRob

14:18: Yeah, and that’s a wrap for today. Thank you for listening to What’s in the SOSS. Have a great day and happy open sourcing.

Outro

14:29: Like what you’re hearing? Be sure to subscribe to What’s in the SOSS on Spotify, Apple Podcasts, AntennaPod, Pocket Casts, or wherever you get your podcasts. There’s a lot going on with the OpenSSF and many ways to stay on top of it all. Check out the newsletter for open source news, upcoming events, and other happenings. Go to OpenSSF.org/newsletter to subscribe. Connect with us on LinkedIn for the most up-to-date OpenSSF news and insight, and be a part of the OpenSSF community at OpenSSF.org/getinvolved. Thanks for listening, and we’ll talk to you next time on What’s in the SOSS.

OpenSSF Newsletter – August 2025


Welcome to the August 2025 edition of the OpenSSF Newsletter! Here’s a roundup of the latest developments, key events, and upcoming opportunities in the Open Source Security community.

TL;DR:

🎉 OpenSSF Turns 5.

New MLSecOps whitepaper.

🔍 Case Study: GUAC security validated in <1hr w/Baseline.

📝 Blogs: OpenSSF Community and Working Groups, AI security, AIxCC wins.

🎙 Podcasts: OSTIF audits, CRA in Erlang Community.

🎓 Free security courses.

📅 Events: OpenSSF Community Day Europe, Linux Foundation Europe Member Summit, Open Source in Finance Forum New York, Linux Foundation Europe Roadshow, European Open Source Security Forum (link coming soon), OpenSSF Community Day Korea, Open Source SecurityCon 2025 

🎉 Celebrating Five Years of OpenSSF: A Journey Through Open Source Security

August 2025 marks five years since the official formation of the Open Source Security Foundation (OpenSSF). From uniting global efforts to secure open source software to launching initiatives like Sigstore, OpenSSF Scorecard, Alpha-Omega, SLSA, and the OSPS Baseline, OpenSSF has moved from ideas to impact, shaping the future of software supply chain security.

This milestone isn’t just a celebration of what we have accomplished, but of the community we have built together. Here’s to five years of uniting communities, hardening the software supply chain, and driving a safer digital future.

Read the full blog to explore the journey, voices, and vision that continue to shape OpenSSF’s impact.

✨Community Highlight: Whitepaper: Visualizing Secure MLOps (MLSecOps): A Practical Guide for Building Robust AI/ML Pipeline Security

We want to give a shout out to Sarah Evans (Dell Technologies), Andrey Shorov (Ericsson) and the entire AI/ML Security Working Group for their outstanding contributions through OpenSSF, advancing secure AI/ML practices and delivering industry leadership in building robust AI/ML pipeline security.

Their new whitepaper, “Visualizing Secure MLOps (MLSecOps): A Practical Guide for Building Robust AI/ML Pipeline Security,” expands Ericsson’s MLSecOps framework into a comprehensive, visual, layer-by-layer guide. It shows how to apply open source tools like SLSA, Sigstore, and OpenSSF Scorecard to secure the ML lifecycle, offering mapped risks, security controls, a reference architecture, and practical tools.

This is a must-read for anyone designing, developing, deploying, or securing AI/ML systems.

Read the whitepaper and the blog to see how OpenSSF members are shaping the future of trustworthy AI.

🔍Case Study: How LFX Insights and OSPS Baseline Validated GUAC’s Security in Under an Hour

How can a project like GUAC validate its strong security posture in under an hour?

Kusari used LFX Insights integrated with the OpenSSF OSPS Baseline to run a rapid, automated assessment of GUAC’s security posture. In less than an hour, evidence of strong security practices was compiled automatically, results were presented in a clear visual format, and findings were instantly aligned to major frameworks like NIST SSDF and the EU Cyber Resilience Act. The result was faster trust, reduced workload, and a smoother path for adoption.
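Conceptually, an automated assessment like this reduces to collecting facts about a repository and evaluating them against a list of controls. The sketch below is a toy illustration; the control names are simplified stand-ins, not the actual OSPS Baseline identifiers or the LFX Insights implementation:

```python
# Toy illustration of an automated, baseline-style security assessment:
# collect facts about a repository, evaluate each control, and report a
# pass/fail summary. Control names are simplified stand-ins, not real
# OSPS Baseline identifiers.

def assess(repo_facts: dict) -> dict:
    """Evaluate each control against the collected repository facts."""
    return {
        "has_security_policy": repo_facts.get("security_md", False),
        "branch_protection": repo_facts.get("protected_default_branch", False),
        "two_factor_required": repo_facts.get("org_2fa", False),
    }

# Facts that a collector (e.g. an API crawl of the repository) might produce.
facts = {"security_md": True, "protected_default_branch": True, "org_2fa": False}
results = assess(facts)
print(f"{sum(results.values())}/{len(results)} controls passed")  # prints "2/3 controls passed"
```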

Project leaders and community voices including Mike Lieberman (Kusari), Ben Cotton (Kusari), Eddie Knight (Sonatype), and Mihai Maruseac (Google) emphasized the value of this approach. They highlighted how OSPS Baseline makes security proof more visible, reduces repetitive effort, saves time for maintainers, and builds confidence among OSPO leads and end users.

Read the full case study to see how LFX Insights and OSPS Baseline created a blueprint for faster, more credible security assurance.

Blogs: What’s New at the OpenSSF Community?

Here you will find a snapshot of what’s new on the OpenSSF blog. For more stories, ideas, and updates, visit the blog section on our website.

Case Study: Google Secures Machine Learning Models with sigstore

As machine learning evolves, so do the threats: data poisoning, model tampering, and unverifiable origins are real risks. Google’s Open Source Security Team, sigstore, and OpenSSF created the OMS specification, integrating it into hubs like NVIDIA NGC and Kaggle. Models are automatically signed, tied to the author’s identity, verified for authenticity, and logged for a complete audit trail. This blueprint offers a path to a verified ML ecosystem.

“If we reach a state where all claims about ML systems and metadata are tamperproof, tied to identity, and verifiable by the tools ML developers already use, we can inspect the ML supply chain immediately in case of incidents.” – Mihai Maruseac, Staff Software Engineer, Google

Read the case study.

What’s it like to speak, volunteer, parent, and explore nature – all in one week at OSS Summit NA 2025?

Eman Abu Ishgair shares her experience attending the Open Source Summit North America in Denver as a speaker, volunteer, and new community member during OpenSSF Community Day. From co-presenting “The Open Source SDLC Control Plane: Building the Supply Chain Security Sandwich” with Michael Lieberman, CTO and Co-founder at Kusari and Governing Board member, to volunteering at the OpenSSF booth, connecting with collaborators, attending talks on SBOM, Signing, and Securing AI pipelines, and exploring Colorado’s natural wonders with her children, Eman’s week was full of learning, community, and inspiration.

Read the full blog to experience her journey and discover how you can get involved with OpenSSF.

How does the OpenSSF welcome maintainers, security engineers, students, and others to its open, global community?

Ejiro Oghenekome and Sal Kimmich share how OpenSSF serves as the global hub for collaborative work on securing the software supply chain, with no gatekeepers and open participation for all. The blog explains how to join Slack, attend meetings, contribute via GitHub, and explore working groups like AI/ML Security, BEAR, Global Cyber Policy, Security Tooling, Vulnerability Disclosures, Securing Software Repositories, ORBIT, Securing Critical Projects, and Supply Chain Integrity. Every OpenSSF group welcomes newcomers, with many paths to contribute, no matter your background.

Read the blog to discover where your skills fit and how to start contributing today.

Securing AI: The Next Cybersecurity Battleground

The AI wave is here, and it’s only getting bigger. It ushers in a pivotal new cybersecurity battleground: securing AI. In this blog, Hugo Huang, an expert in cloud computing and business models who spearheads joint innovation between Canonical and Google, shares findings from a security survey. The report highlights three top challenges in 2025: a lack of standardized frameworks, shadow AI, and the talent gap. Building resilient AI systems requires concrete security measures across the AI lifecycle, with open source as the pivotal enabler.

Read the full blog.

OpenSSF at Black Hat USA 2025 & DEF CON 33: AIxCC Highlights, Big Wins, and the Future of Securing Open Source


Image source: Christopher “CRob” Robinson (OpenSSF), Stephanie Domas (Canonical), and Anant Shrivastava (Cyfinoid Research) hosted a standing-room-only “Ask Me Anything About FOSS” panel at Black Hat USA 2025

The Open Source Security Foundation marked a strong presence at Black Hat USA 2025 and DEF CON 33, engaging with security leaders, showcasing initiatives, and fostering collaboration to advance open source security. At DEF CON, the spotlight was on the AI Cyber Challenge (AIxCC), a DARPA and ARPA-H competition to develop AI-enabled software that can identify and patch vulnerabilities. Trail of Bits, an OpenSSF General Member, earned second place with Buttercup, their open source Cyber Reasoning System. 

Read the full blog for more details.

What’s in the SOSS? An OpenSSF Podcast:

#37 – S2E14 Open Source Security: OSTIF’s 10-Year Journey of Collaborative Audits – Derek Zimmer and Amir Montezary, Open Source Technology Improvement Fund (OSTIF)

In this episode of What’s in the SOSS, Derek Zimmer and Amir Montezary from the Open Source Technology Improvement Fund (OSTIF) share their decade-long mission of providing security resources to open source projects. They focus on collaborative, maintainer-centric security audits that improve project security posture through expert third-party reviews. These engagements are designed to be supportive, impactful, and efficient. Listen to the full episode to hear OSTIF’s 10-year journey and how they help projects strengthen security.

#36 – S2E13 From Compliance to Community: Meeting CRA Requirements Together – Jonatan Männchen (CISO, Erlang Ecosystem Foundation), Ulf Riehm (Product Owner, Herrmann Ultraschall), and Michael Winser (Alpha-Omega)

In this episode of What’s in the SOSS?, CRob talks with Jonatan Männchen (CISO, Erlang Ecosystem Foundation), Ulf Riehm (Product Owner, Herrmann Ultraschall), and Michael Winser (Alpha-Omega). The conversation explores the critical importance of security in open source, especially with the CRA. Hear how the Erlang community brings in experts, fosters collaboration, and builds trust. Listen to the full episode to learn why manufacturers invest in upstream projects and how other ecosystems can follow this approach.

Education:

The Open Source Security Foundation (OpenSSF), together with Linux Foundation Education, provides a selection of free e-learning courses to help the open source community build stronger software security expertise. Learners can earn digital badges by completing offerings such as:

These are just a few of the many courses available for developers, managers, and decision-makers aiming to integrate security throughout the software development lifecycle.

News from OpenSSF Community Meetings and Projects:

In the News:

Meet OpenSSF at These Upcoming Events!

Join us at OpenSSF Community Day Events in Europe and South Korea!

OpenSSF Community Days bring together security and open source experts to drive innovation in software security.

Connect with the OpenSSF Community at these key events:

Ways to Participate:

There are a number of ways for individuals and organizations to participate in OpenSSF. Learn more here.

You’re invited to…

See You Next Month! 

We want to get you the information you most want to see in your inbox. Missed our previous newsletters? Read here!

Have ideas or suggestions for next month’s newsletter about the OpenSSF? Let us know at marketing@openssf.org, and see you next month! 

Regards,

The OpenSSF Team

What’s in the SOSS? Podcast #37 – S2E14 Open Source Security: OSTIF’s 10-Year Journey of Collaborative Audits

By Podcast

Summary

In this episode of “What’s in the SOSS,” Derek Zimmer and Amir Montezari from the Open Source Technology Improvement Fund (OSTIF) discuss their decade-long mission of providing security resources to open source projects. They focus on collaborative, maintainer-centric security audits that help projects improve their security posture through expert third-party reviews, without creating fear or overwhelming developers.

Conversation Highlights

00:00 Introduction
00:22 Podcast Welcome
01:04 OSTIF Founders Introduction
02:31 OSTIF’s Mission and Approach
05:28 Relationship Management and Expertise
08:01 Evolution of Security Engagement Methods
12:15 Making Security Audits Less Intimidating
18:00 Rapid Fire Questions
20:45 Closing, Call to Action

Transcript

CRob 0:22
Welcome, welcome, welcome to What’s in the SOSS, the OpenSSF podcast, where I get to talk to some of the most amazing people on the planet who are helping secure the open source software we all know, use every day, and love. Today I have some very special friends with us who are doing yeoman’s work, helping projects improve their security posture. I have Amir and Derek from OSTIF. Can I give you guys just a brief moment to introduce yourselves?

Derek Zimmer: 0:54
Sure, I’m Derek Zimmer, founder of OSTIF. We’ve been doing this for 10 years now. Take it away, Amir.

Amir Montezary: 1:04
Thank you. Amir Montezary, Managing Director of OSTIF, the Open Source Technology Improvement Fund. Yeah, absolutely thrilled to be here on the podcast, to be talking with you, CRob, and to be talking about the work that we do. As Derek mentioned, this is our 10-year anniversary, so we’re coming up on 10 years of really developing this organization and its processes, and really fine-tuning what we do and the value that we provide to the open source ecosystem. So absolutely thrilled to be here and to talk about it.

CRob 1:40
That’s amazing. So happy birthday, OSTIF! For our audience that might not be familiar with your work, could you tell us what OSTIF is and what you all do?

Derek 1:53
Sure. So we founded the organization 10 years ago on the idea that we needed a maintainer-centric organization that could bring security resources to projects. There were some efforts in the past to do something similar to what we do, but most of the time those were very corporate-centric. The ideas that circulated around them were about dictating what open source should be doing, not “we’re here to help, and here are some resources.” That different perspective was the kickoff for why we wanted to create something different.

Amir 2:36
Yeah, absolutely. And still today we see that open source projects, because of their very nature, need a strong, independent body to help them. We provide that platform: being a nonprofit organization, being vendor neutral, being neutral in all senses of the word, and solely focused on, as Derek mentioned, helping projects get the security resources they need. Most importantly, being able to provide those resources in a way that directly impacts the project and its security posture was really what drove us to start this organization. Typically, open source developers and maintainers are not security experts, and that’s okay. Security is a very difficult topic, and like a lot of other things, it’s best left to the experts. So while there are, of course, things individual developers and maintainers can do to improve their hygiene, so to speak, and improve the security posture of their projects, we found that getting independent, third-party expert audit review, done in a way that is meant to be collaborative, as in these auditors work with the maintainers rather than dictating to them, really improves the holistic security posture of a project, and we found that to be really successful. A lot of research suggests that this is a very good practice. I come from a background in IT auditing, reviewing critical payment systems in the United States, and we saw that that level of independent, third-party review, that kind of due diligence, really helps improve the state or posture of a software project. So it was really founded on the need for it to exist.

We saw there was a big need for this: a mechanism to get security help to open source projects, working directly with maintainers, and doing it in a way that is inclusive, impactful, and, most importantly, efficient. That is what drove us to do what we do. In terms of how we do that, it’s largely a lot of relationship management. Over the last 10 years we’ve built a really vast network of security experts and researchers, many of whom are solely focused on the open source security space, so they understand some of the idiosyncrasies involved in open source software and can provide meaningful review work and collaboration. We essentially handle that whole process, because there are quite a lot of moving parts: typically you have a separate body funding the work; you have the maintainers or contributor base, which could be distributed around the world; and you don’t always have established decision-making structures like you might see in a corporate setting or a more commercial environment. So we handle all of that goodwill building, relationship building, project management, and contract management, basically all of the pieces, so that a funder who wants to fund security outcomes, or the project that would like to improve its security posture, can just focus on that. We, as an independent body, essentially handle all of the minutiae, the administrivia, the facilitation, and the management to make it a very streamlined and efficient process. So that’s the high-level overview.

CRob 7:23
As you both are aware, you have been long-time participants in and partners with our foundation, and also with our friends over at Alpha-Omega. From your perspective, with your 10 years of working in this particular space, what do you all see as the main value projects get out of these types of engagements?

Derek 7:47
So actually, this has changed over time, because we started out experimentally trying things just to see what works and what doesn’t. Initially, we started out as a bug bounty organization. Our concept was that companies would donate money to us, we’d establish bug bounties for projects, and then those projects would get the security benefits. What we quickly found out was that this does not work well for projects that don’t have a lot of security resources, because they get buried in bunk reports, things that are not actually problems. And then there are also the bug bounties where some dependency has a vulnerability, and someone will shop it around to every project that depends on that dependency and try to get a bug bounty out of it, and so on and so forth. Increasingly, AI is also becoming a problem, because it is sending maintainers automated reports that are not accurate and have to be thrown away, and it can do so at a much greater pace than an individual could just a few years ago. So essentially, we abandoned that entire approach and went to the idea of having professionals come in, give all the support they can to the project, meet them where they are, and then extend their testing so that they get long-term benefit from the review as well. So it started out with skinned knees and finding stuff that didn’t really work, and then progressed over time, after a lot of feedback, to where we are now, which seems to be extremely helpful.

Amir 09:34
Yeah, and to echo that, I would say the main value of our engagements is that direct impact. We go directly to the project to work with the maintainers or contributors, actually going to the source, the source as in reviewing and improving the code and design of a project. And as Derek mentioned, one way we’ve added even more value over time is creating or augmenting tooling for projects as well, so that they can continue to have security scrutiny and tools that help them in their development cycles and help the projects mature. So I would say that direct focus on the projects, on their code base, and on the tried-and-true practice of an expert third-party review is how we’re really delivering a lot of value through our engagements. We’re coming up, as I mentioned, on our 10-year anniversary next month, and I think we have found well over 100 high or critical vulnerabilities in these projects as part of our audits. We’re really proud of what we’ve been able to do and the positive impact we’ve been able to make, and I think that really comes from sticking to our mission and to our commitment to this best practice of expert third-party review, done in a way that is collaborative and impactful. So we didn’t just find all of those vulnerabilities; they have all been fixed and remediated, and a good portion of them were design bugs, or classes of bugs, whose fixes could eliminate future problems very effectively. Unfortunately that’s not very easy to measure, but the feedback suggests that the projects are, in fact, in a much better state after our engagements. So we’re really happy to be able to do that.

CRob 12:15
That’s phenomenal. I love the fact that you all started off in one direction, learned a little bit, and pivoted, so you’ve evolved. Thinking about your engagements over the last almost-decade, is there one thing you wish a project or a developer knew or did prior to coming into one of these engagements that would make the whole enterprise more successful or go more smoothly? What is one thing you wish people did or knew?

Derek 12:46
So the big takeaway is that if you do a security engagement with us, it’s not scary, because we are here to help. We will offer you any support and resources that we have. We’re not going to find a big pile of bugs you don’t understand, dump a document on you, and walk away. The whole point of this is to help projects improve by giving them everything they need and meeting them where they are. The FAQs we usually get from maintainers are: how long is this going to take? How much time do I have to invest in this? And then always the questions about whether we’re going to drop zero-days on them at the end of the engagement. Of course, we follow disclosure policies that everybody agrees on, and we are also very flexible. If there’s a design-level problem that requires a big rewrite, we’re not going to just drop it on the internet in 90 days; we’re going to be forgiving. So the pressure from us is very low, and I think that’s one thing maintainers would really like to hear about working with us.

Amir 14:07
Yeah, plus one to that, Derek. It’s very much meant to be a collaboration, an engagement that is collaborative in nature, and I do wish more developers knew that. To echo you, Derek, it’s not a scary thing; it’s not like you’re going in front of a tribunal. It’s very much “let’s work together to make this project better.” And I’ve observed personally that it’s one of those things where the more you put in, the more you get out: the more that developers, maintainers, and contributors are able to provide audit teams with insight, feedback, or context, the better the outcome. Context is the piece that’s really missing from a lot of the tooling and other at-scale solutions Derek mentioned; they lack the context that is so important when it comes to security in a code base. So it definitely has a multiplier effect: the more engaged we’ve seen projects be in the audit, the better the results we’ve typically found. I can even give a direct case study example from one engagement we were involved in. The audit team and the developer team happened to be a train ride apart, so they were able to arrange an in-person orientation, essentially, to discuss and get to know each other. It was a really cool thing, and we learned that it led to a much better understanding of the code base as the team was auditing it, which allowed them to find more significant findings, because they had that greater understanding as a result of the context provided by the team. And actually, just yesterday, at one of our virtual meetups, we learned that the same audit team did something similar on another engagement.

This time the client wasn’t a train ride away but a flight, but flights in Europe are shorter, and they were able to get together with the main maintainers of the project and do a very similar thing: meet, discuss, and come away with a much better understanding of the project, which allowed the auditors to add that much more value as part of the audit. So to sum it up, I would say: add value. I wish more developers knew that this is about adding value and about collaborating. It’s not about making you feel bad about making mistakes or anything like that. Human beings will always have that human error, and it’s totally normal and fine. That’s why this practice is so important, because independent review is such a common practice in software and really in the greater landscape. So yeah, it’s meant to be collaborative; it’s not a scary thing. It’s really more about, as Derek said, helping and giving you resources to make your project better than anything else.

CRob 17:53
That’s amazing, and I really appreciate the innovative ideas and the meet-the-project-where-it-is mentality. You’re really making sure that security audits aren’t scary at all. But let’s move on to the rapid-fire part of the interview. Are you ready for rapid, rapid, rapid fire? I’ve got a couple of wacky questions; just give me the first thoughts that come out of your mouths. vi or Emacs?

Derek 18:22
oh, VI

Amir 18:25
Yeah, I second that.

CRob 18:26
Excellent. There are no wrong answers, but some answers are better than others, right? What’s your favorite open source mascot?

Derek 18:36
Oh, I’d have to say the VLC cone, just because it’s nonsense, and they admit that it’s nonsense, and they constantly get asked about it and give nonsense answers. So it’s fantastic.

Amir 18:51
That’s a good point. And you can always tell who the VLC people are at FOSDEM, for example, because they have the big cone on their heads. That’s a really good question; there are a lot of really good ones out there. I’ve honestly found that the simpler the mascots are, the more I tend to remember them, but for me there are too many good ones to pick, so…

CRob 19:16
That’s a very diplomatic answer. I appreciate that. Spicy or mild food?

Derek 19:22
spicy all the way

CRob 19:28
nice, that is always the right answer.

Amir 19:30
Some of our greatest ideas came over spicy food. So…

CRob 19:35
And finally, and most importantly, Star Trek, or Star Wars.

Derek 19:40
So I’d say I’m Star Trek. I like the idea of everybody working together toward, you know, a peaceful, wide-reaching society.

CRob 19:52
Very open source of you. That’s awesome.

Amir 19:54
I would also say Star Trek. I missed the Star Wars lore growing up. My experience with Star Wars: I had a high school teacher who, anytime he couldn’t make class, instead of getting a substitute teacher would just play the beginning of the first Star Wars movie, I think it was Episode IV, so I’ve seen the first 30 minutes plenty of times. Maybe that left a bad taste in my mouth with Star Wars.

CRob 20:27
I see we’ve had very different life experiences. That’s great. Well, thank you, gentlemen. I really appreciate you putting up with the nonsense. And finally, as we wrap up, do you have a call to action for the community or for developers?

Derek 20:45
Sure. I would say we really operate on the principles of spoon theory. Have you ever heard of that? It’s from psychology, and the principle is that you have only so many spoons of energy you can devote to various things. The way we apply this to open source is by thinking about the security knowledge and the general energy available among open source communities. Some of them are very well supported; they have dedicated staff who are paid, and it’s their job to be there and be available. And then you have the complete opposite end of the spectrum: a solo maintainer invented a thing, that thing somehow became a really important piece of infrastructure, and they don’t have any security knowledge, so they do what they can, you know, reading documents and whatever, but they don’t have the available energy to invest in security. That’s where I’m coming from when I say meet projects where they are. And the call to action would be: if you are a security researcher and you’re interacting with open source, what you need to consider is their position on that spectrum of knowledge and available energy. So…

CRob 22:09
Amir?

Amir 22:10
Yeah, plus one to that. And to add, I would just say that if there’s one thing I’ve learned from doing this for 10 years, it’s that this is important work, and there’s almost unlimited demand for it. I was really shocked when I saw how some of the biggest names in open source, household names that we hear every day, really needed almost the same, if not more, security help than the smaller projects. For example, some of the really big projects, because they have so much more scrutiny, have a lot more noise to go through, or they could have huge backlogs of bugs that they just haven’t had the time or resources to get through. So I think my call to action would be: we are one tool in the toolkit, but I do think what we do really helps open source projects, and we can do more with more. We always try to do the most we can with what we have, but we really could do more with more: more help for projects, more diligence for projects, more ongoing support for projects. The work we’ve been doing on tooling augmentations, for example, has been really successful. And as a small organization, we are always happy and willing to take on more work. So we’re always open to new collaborations and new collaborators, and to helping however we can to fulfill our mission, which has been to help open source projects improve their security. So yeah, come talk to us. We’re involved in a lot of the Open Source Security Foundation working groups and events, and as you mentioned, we’ve been a strategic partner of the Linux Foundation and OpenSSF for some time now. We are always happy to collaborate and help however we can, in the nature of open source. And I’d say that’s all I have.

CRob 24:38
Derek and Amir from OSTIF, thank you both for your amazing work and for collaborating with our developer community. That’s going to be a wrap. Happy open sourcing, everybody. We’ll talk to you all soon. Goodbye.

Amir
Cheers, everyone. Thanks.

Outro
Like what you’re hearing? Be sure to subscribe to What’s in the SOSS on Spotify, Apple Podcasts, AntennaPod, Pocket Casts, or wherever you get your podcasts. There’s a lot going on with the OpenSSF and many ways to stay on top of it all. Check out the newsletter for open source news, upcoming events, and other happenings; go to openssf.org/newsletter to subscribe. Connect with us on LinkedIn for the most up-to-date OpenSSF news and insights, and be a part of the OpenSSF community at openssf.org/getinvolved. Thanks for listening, and we’ll talk to you next time on What’s in the SOSS.