OpenSSF Celebrates New Members, No-Cost Tooling, and Project Milestones

By Blog, Press Release

Foundation welcomes Helvethink, Spectro Cloud, Quantrexion as members, offers Kusari Inspector for free to projects, and celebrates increased investment in AI security 

AMSTERDAM – Open Source SecurityCon Europe – March 23, 2026 – The Open Source Security Foundation (OpenSSF), a cross-industry initiative of the Linux Foundation that focuses on sustainably securing open source software (OSS), today announced new members and key project momentum during Open Source SecurityCon Europe. 

New OpenSSF members include Helvethink, Spectro Cloud, and Quantrexion, who join the Foundation as General Members. As members, these companies will engage with working groups, contribute to technical initiatives, and help guide the strategic direction of the OpenSSF. Together, members support open, transparent, and community-driven security innovation, and the long-term sustainability of the Foundation.

“Open source security continues to evolve significantly in the face of new, automated threats,” said Steve Fernandez, General Manager of OpenSSF. “Our member organizations are seeding a more secure future, built with longevity in mind, by working with the OpenSSF. This network of projects, maintainers, and thousands of contributors is key to reinforcing reliable, sustainable open source software for all.”

Foundation Updates and Milestones

In the past quarter, OpenSSF has furthered its mission to secure open source software with the following achievements:

  • A new partnership with Kusari to offer Kusari Inspector at no cost to OpenSSF projects – this offering provides maintainers with deeper visibility into their software supply chains and enables proactive security checks at the pull request level.
  • The SLSA (Supply-chain Levels for Software Artifacts) project achieved Graduated status – this recognition advances SLSA’s stability, maturity, and broad adoption as a critical framework for supply chain integrity.
  • The release of the Gemara Project’s inaugural white paper – the findings outline a new framework for integrating security-as-code principles directly into the software development lifecycle.
  • The launch of new Special Interest Groups focused on Model Lifecycle Provenance and GPU-Based Model Integrity – these groups, under the AI/ML Security Working Group, expand the Foundation’s focus on securing the rapidly evolving field of AI/ML software security.
  • OpenSSF is approved as a CEN / CENELEC Liaison Organization for cybersecurity – this designation, through the Linux Foundation Europe, strengthens OpenSSF’s position in global standards development and policy influence.
  • The official launch of the OpenSSF Ambassador Program – applications are now open for the initial cohort.
  • Over 7,300 learners enrolled in OpenSSF’s free course, “Understanding the EU Cyber Resilience Act (LFEL1001)” – the Foundation has had over 75,000 enrollments in OpenSSF training programs to date.

OpenSSF growth follows the announcement of $12.5 million in grant funding awarded to OpenSSF and Alpha-Omega from leading AI providers. Funding from these leaders underscores broad industry support for more sustainable AI security assistance that empowers maintainers. Learn more about how OpenSSF and Alpha-Omega are using this grant to build long-term, sustainable security solutions here.

Supporting Quotes

“At Helvethink, we work at the intersection of cloud architecture, platform engineering, and DevSecOps. Open source components are foundational to modern infrastructure from Kubernetes and IaC tooling to CI/CD pipelines and security automation. Strengthening this ecosystem requires measurable standards, robust software supply chain security practices, and active collaboration across the community. By joining OpenSSF, we are actively participating in several working groups to contribute to initiatives focused on supply chain integrity, secure-by-design principles, and the continuous improvement of cloud-native security practices.”

– José Goncalves, co-founder, Helvethink

“Quantrexion is proud to join OpenSSF and support its mission to strengthen the security, resilience, and trustworthiness of open source software. As a company focused on governance and human risk management, we see secure open ecosystems as a critical part of long-term digital resilience.”

– Dionysis Karamitopoulos, CEO, Quantrexion

“Open source is the foundation of modern infrastructure — and its security is a shared responsibility. By joining the OpenSSF, Spectro Cloud is investing directly in the community work that raises the bar for everyone. Just as importantly, it strengthens the standards and practices behind the software we ship, so our customers can deploy Kubernetes with confidence in the integrity of every component. We’re proud to support the OpenSSF mission and to keep translating that momentum into real product capabilities that make secure software a default, not a bolt-on.”

– Saad Malik, CTO and co-founder, Spectro Cloud

Events and Gatherings

OpenSSF members are gathering this week in Amsterdam at Open Source SecurityCon Europe. To get involved with the OpenSSF community, join us at upcoming events.

About the OpenSSF

The Open Source Security Foundation (OpenSSF) is a cross-industry organization at the Linux Foundation that brings together the industry’s most important open source security initiatives and the individuals and companies that support them. The OpenSSF is committed to collaboration and working both upstream and with existing communities to advance open source security for all. For more information, please visit us at openssf.org. 

Media Contact
Grace Lucier
The Linux Foundation

pr@linuxfoundation.org  

Leading Tech Coalition Invests $12.5 Million Through OpenSSF and Alpha-Omega to Strengthen Open Source Security

By Blog

Securing the open source software that underlies our digital infrastructure is a persistent and complex challenge that continues to evolve. The Linux Foundation announced a $12.5 million collective investment to be managed by Alpha-Omega and The Open Source Security Foundation (OpenSSF). This funding comes from key partners including Anthropic, Amazon Web Services (AWS), Google, Google DeepMind, GitHub, Microsoft, and OpenAI. The goal is to strengthen the security, resilience, and long-term sustainability of the open source ecosystem worldwide.

Building on Proven Success through OpenSSF Initiatives

This new investment provides critical support for OpenSSF’s proven, maintainer-centric initiatives. Targeted financial support is a key catalyst for sustained improvement in open source security. The results of the OpenSSF’s collective work in 2025 are clear:

  • Sustained Investment: Alpha-Omega invested $5.8 million in 14 critical open source projects and completed over 60 security audits and engagements.
  • Growing a Global Community: OpenSSF grew to 117 member organizations and was advanced by 267+ active contributors from 112 organizations, working across 10 Working Groups and 32 Technical Initiatives.
  • Driving Technical Impact: The OpenSSF Technical Advisory Council (TAC) awarded over $660,000 in funding across 14 Technical Initiatives, strengthening supply chain integrity, advancing transparency tools like Sigstore, and enabling community-driven security audits.
  • Measurable Security Uplift: Focused security engagements across critical projects resulted in 52 vulnerabilities fixed and 5 fuzzing frameworks implemented.
  • Expanding Education: Nearly 20,000 course enrollments across OpenSSF’s free training programs, with new courses like Security for Software Development Managers and Secure AI/ML-Driven Software Development empowering developers globally.
  • Global Policy Engagement: Launched the Global Cyber Policy Working Group and served as a challenge advisor for the Artificial Intelligence Cyber Challenge (AIxCC), ensuring the open source voice is heard in evolving regulations like the EU Cyber Resilience Act (CRA).

AI: A New Frontier in Security

The security landscape is changing fast. Artificial intelligence (AI) accelerates both software development and the discovery of vulnerabilities, which creates new demands on maintainers and security teams. However, OpenSSF recognizes that grant funding alone will not solve the problems AI tools are creating today for open source security teams. This moment also offers powerful new opportunities to improve how security work is completed.

This new funding will help the OpenSSF provide the active resources and dedicated projects needed to support overworked maintainers with the triage and processing of the increased AI-generated security reports they are currently receiving. Our response will feature global strategies tailored to the needs of maintainers and their communities.

“Open source software now underpins the majority of modern software systems, which means the security of that ecosystem affects nearly every organization and user worldwide,” said Christopher Robinson, CTO and Chief Security Architect at OpenSSF. “Investments like this allow the community to focus on what matters most: empowering maintainers, strengthening security practices across projects, and raising the overall security bar for the global software supply chain.”

Securing the Open Source Lifecycle

The true measure of success will be execution. Success is not about how much AI we introduce into open source. It is determined by whether maintainers can use it to reduce risk, remediate serious vulnerabilities faster, and strengthen the software supply chain long term. We are grateful to our funding partners for their commitment to this work, and we look forward to continuing it alongside the maintainers and communities that power the world’s digital systems.

“Our commitment remains focused: to sustainably secure the entire lifecycle of open source software,” said Steve Fernandez, General Manager of OpenSSF. “By directly empowering the maintainers, we have an extraordinary opportunity to ensure that those at the front lines of software security have the tools and standards to take preventative measures to stay ahead of issues and build a more resilient ecosystem for everyone.”

To learn more about open source security initiatives at the Linux Foundation, please visit openssf.org and alpha-omega.dev.

Linux Foundation Announces $12.5 Million in Grant Funding from Leading Organizations to Advance Open Source Security 

By Blog, Press Release

Anthropic, Amazon Web Services (AWS), GitHub, Google, Google DeepMind, Microsoft, and OpenAI Join Forces with the Foundation to Invest in Sustainable Security Solutions for the Open Source Ecosystem

SAN FRANCISCO – March 17, 2026 – The Linux Foundation, the nonprofit organization enabling mass innovation through open source, today announced $12.5 million in total grants from Anthropic, AWS, GitHub, Google, Google DeepMind, Microsoft, and OpenAI to strengthen the security of the open source software ecosystem. The funding will be managed by Alpha-Omega and the Open Source Security Foundation (OpenSSF), trusted security initiatives within the Linux Foundation, to develop long-term, sustainable security solutions that support open source communities worldwide.

As the security landscape grows more complex, advances in AI are dramatically increasing the speed and scale of vulnerability discovery in open source software. Maintainers are now facing an unprecedented influx of security findings, many of which are generated by automated systems, without the resources or tooling needed to triage and remediate them effectively. Through this investment, Alpha-Omega and OpenSSF will work directly with maintainers and their communities to make emerging security capabilities accessible, practical, and aligned with existing project workflows. The effort will support sustainable strategies that help maintainers manage growing security demands while improving the overall resilience of the open source ecosystem.

“Alpha-Omega was built on the idea that open source security should be both normal and achievable. By funding audits and embedding security experts directly into the ecosystem, we’ve proven that targeted investment works,” said Michael Winser, Co-Founder of Alpha-Omega. “Now, we’re scaling that expertise. We are excited to bring maintainer-centric AI security assistance to the hundreds of thousands of projects that power our world.”

“Grant funding alone is not going to help solve the problem that AI tools are causing today on open source security teams,” said Greg Kroah-Hartman of the Linux kernel project. “OpenSSF has the active resources needed to support numerous projects that will help these overworked maintainers with the triage and processing of the increased AI-generated security reports they are currently receiving.”

“Our commitment remains focused: to sustainably secure the entire lifecycle of open source software,” said Steve Fernandez, General Manager of OpenSSF. “By directly empowering the maintainers, we have an extraordinary opportunity to ensure that those at the front lines of software security have the tools and standards to take preventative measures to stay ahead of issues and build a more resilient ecosystem for everyone.”

To learn more about open source security initiatives at the Linux Foundation, please visit openssf.org and alpha-omega.dev. 

Supporting Quotes

“The open source ecosystem underpins nearly every software system in the world, and its security can’t be taken for granted. This investment reflects our belief that the best way to improve security outcomes is to work directly with maintainers and give them the resources and tooling to address threats at scale. Ensuring the world safely navigates the transition to transformative AI means investing in the foundations it runs on.” 

– Vitaly Gudanets, CISO, Anthropic

“Over the past four years, our work with Alpha-Omega has proven it can deliver real results for the open source ecosystem at scale—from helping the Rust Foundation deploy Trusted Publishing to enabling critical vulnerability fixes across Node.js and PyPI. We are excited to increase our investment in Alpha-Omega and to work with our collaborators and directly with maintainers to provide not just funding, but the right tools and expertise that projects actually need to handle AI-generated security reports at scale.” 

— Mark Ryland, Director, AWS Security 

“Building on our initial commitment alongside Google and Microsoft four years ago, we’re now confronting new security challenges as AI transforms vulnerability discovery. That’s why AWS is investing an additional $2.5 million in Alpha-Omega. We believe the same advanced models creating these challenges can also solve them through better tooling and automation, but only through collaboration between industry leaders and the open source security community.” 

— Stormy Peters, Head of Open Source Strategy and Marketing, Amazon Web Services  

“As the home for open source, GitHub knows that code is only as strong as the community behind it. Supporting the Linux Foundation’s Alpha-Omega initiative extends our longstanding commitment to securing the global software supply chain. Through funding, training, and AI-powered tools, we’re empowering maintainers to identify risks faster and prevent burnout.”


— Kyle Daigle, COO, GitHub

“Securing the open source ecosystem is a shared responsibility that requires more than just capital; it also requires giving maintainers the right tools to stay ahead of evolving threats. By combining AI-driven innovation with the proven frameworks of Alpha-Omega and OpenSSF, we are empowering the community to not just react to threats, but build systemic resilience.” 


— Evan Kotsovinos, Vice President of Privacy, Safety and Security, Google

“Securing open source is a shared responsibility, and we have to move as fast as the technology does. We’re focused on turning AI’s ability to find and patch vulnerabilities into a massive defensive advantage. Supporting Alpha-Omega and OpenSSF is an important step for us, right alongside our work on OSS-Fuzz, Big Sleep and CodeMender. We’re going to keep building on this to put these capabilities into the hands of maintainers, leveraging AI to help scale society’s collective resistance to cyber attacks.” 

— Four Flynn, VP, Security and Privacy, Google DeepMind

“Open source software is a critical part of the modern technology landscape. As AI accelerates both software development and the discovery of vulnerabilities, the industry must step up to protect this shared infrastructure. This collaboration represents an important step in democratizing AI-powered defenses, and we’re proud to support Alpha-Omega and the OpenSSF in delivering scalable, maintainer-first solutions that secure the code powering our digital society.” 


— Mark Russinovich, CTO, Deputy CISO and Technical Fellow, Microsoft Azure

“This is a critical moment for global cybersecurity that requires unprecedented levels of collaboration across the industry, and sustained commitment. For artificial intelligence to benefit us all, we need to listen closely to maintainers and strengthen the open source foundations we all depend on. Maintainers make an extraordinary contribution, and this program is an important step in providing them the support they need.”

— Dane Stuckey, CISO, OpenAI

About Alpha-Omega

Alpha-Omega protects society by funding and catalyzing sustainable security across open source software. With over 70 grants totaling over $20M across major ecosystems, package registries, and individual projects, Alpha-Omega has an established track record of “turning money into security.” Backed by Anthropic, AWS, Citi, GitHub, Google, Google DeepMind, Microsoft, and OpenAI, Alpha-Omega partners with maintainers, security experts, and communities to invest where it can have the greatest impact. For more information, visit us at alpha-omega.dev.

About the OpenSSF

The Open Source Security Foundation (OpenSSF) is a cross-industry organization at the Linux Foundation that brings together the industry’s most important open source security initiatives and the individuals and companies that support them. The OpenSSF is committed to collaboration and working both upstream and with existing communities to advance open source security for all. For more information, please visit us at openssf.org. 

About the Linux Foundation

The Linux Foundation is the world’s leading home for collaboration on open source software, hardware, standards, and data. Linux Foundation projects, including Linux, Kubernetes, Model Context Protocol (MCP), OpenChain, OpenSearch, OpenSSF, OpenStack, PyTorch, Ray, RISC-V, SPDX and Zephyr, provide the foundation for global infrastructure. The Linux Foundation is focused on leveraging best practices and addressing the needs of contributors, users, and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org. 

Media Contact
Grace Lucier
The Linux Foundation

pr@linuxfoundation.org

What’s in the SOSS? Podcast #55 – S3E7 The Gemara Project: GRC Engineering Model for Automated Risk Assessment

By Podcast

Summary

Hannah Braswell and Jenn Power, Security Engineers from Red Hat and contributors to the OpenSSF, join host Sally Cooper to discuss the Gemara project. Gemara, an acronym for GRC Engineering Model for Automated Risk Assessment, is a seven-layer logical model that aims to solve the problem of incompatibility in the GRC (Governance, Risk, and Compliance) stack. By outlining a separation of concerns, the project seeks to enable engineers to build secure and compliant systems without needing to be compliance experts. The speakers explain how Gemara grew organically to seven layers and is leveraged by other open source initiatives like the OpenSSF Security Baseline and FINOS Common Cloud Controls. They also touch on the ecosystem of tools being built, including CUE schemas and a Go SDK, and how new people can get involved.
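
To make the idea of machine-readable, schema-validated GRC documents a bit more concrete, here is a toy sketch in Python. Everything in it is illustrative: the real Gemara schemas are written in CUE and consumed through the project's Go SDK, and none of the field names below come from the actual model.

```python
# Illustrative only: a toy validator for a hypothetical "layer two"
# control-catalog document. The real Gemara project defines its
# schemas in CUE and ships a Go SDK; the field names here are invented.

catalog = {
    "id": "example-catalog",
    "title": "Example control catalog",
    "controls": [
        {"id": "AC-01", "objective": "Enforce branch protection on the default branch"},
        {"id": "VM-02", "objective": "Publish a security contact and disclosure policy"},
    ],
}

REQUIRED_CATALOG_KEYS = {"id", "title", "controls"}
REQUIRED_CONTROL_KEYS = {"id", "objective"}

def validate_catalog(doc: dict) -> list[str]:
    """Return a list of validation errors; an empty list means the doc passes."""
    errors = []
    missing = REQUIRED_CATALOG_KEYS - doc.keys()
    if missing:
        errors.append(f"catalog missing keys: {sorted(missing)}")
    for i, control in enumerate(doc.get("controls", [])):
        missing = REQUIRED_CONTROL_KEYS - control.keys()
        if missing:
            errors.append(f"control {i} missing keys: {sorted(missing)}")
    return errors

print(validate_catalog(catalog))  # prints [] when the document passes
```

The point of the real schemas is the same separation of concerns the episode describes: a catalog author only has to produce a document that validates, and downstream tooling can consume it without bespoke glue code.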

Conversation Highlights

00:00 Welcome music + promo clip
00:22 Introductions
02:17 What is Gemara and what problem does it address?
03:58 Why do we need a model for GRC engineering?
05:50 The seven-layer structure of Gemara
07:40 How Gemara connects to other open source projects
10:14 Tools available to help with Gemara model adoption
11:39 How to get involved in the Gemara project
13:59 Rapid Fire
16:03 Closing thoughts and call to action

Transcript

Sally Cooper (00:22)
Hello, hello, and welcome to What’s in the SOSS, where we talk to amazing people that make up the open source ecosystem. These are developers, security engineers, maintainers, researchers, and all manner of contributors that help make open source secure. I’m Sally, and today I have the pleasure of being joined by two fantastic security engineers from Red Hat. We have Hannah and Jenn.

Thank you both so much for joining me today and to get us started, can you tell us a little bit about yourselves and the work that you do at Red Hat? I’ll start with Jenn.

Jenn Power (00:58)
Sure. I am Jenn Power. I’m a principal product security engineer at Red Hat. My whole life is compliance automation, let’s say that. And outside of Red Hat, I participate in the OpenSSF Orbit Working Group, and I’m also a maintainer of the Gemara project.

Sally Cooper (01:18)
Amazing. Thank you, Jenn and Hannah. How about you? Hi.

Hannah Braswell (01:21)
Hey, Sally. Thanks for the nice introduction. I’m Hannah Braswell, and I’m an associate product security engineer at Red Hat. And I work with Jenn on the same team. And I primarily focus on compliance automation and enablement for compliance analysts to actually take advantage of that automation. Then within the OpenSSF, I’m involved in the Gemara project. I’m the community manager there. And then

I’m kind of a fly on the wall at a lot of the community meetings, whether it be the Gemara meeting or the Orbit Working Group. I like to go to a lot of them.

Sally Cooper (02:01)
We love to hear that. I heard Orbit Working Group from both of you. That’s exciting. And I also really want to dive into the project Gemara. So before we do dive into those details, let’s make sure that everyone’s starting from the same place. So for listeners who are hearing about Gemara for the first time, what is Gemara and what problem is it designed to address?

Jenn Power (02:23)
Sure, I can start there. It’s actually secretly an acronym. So it stands for GRC Engineering Model for Automated Risk Assessment. So that’s kind of a mouthful, so we just shorten it to Gemara. And the official description I’ll give, and then I can go into it like a little bit more of a relatable example, is that it provides a logical model for describing categories of compliance activities, how they interact,

And it has schemas to enable automated interoperability between them. So like, what does that mean? I think a good, if we anchor this in an analogy, we could call Gemara like the OSI model for the GRC stack. In fact, that was one of the kind of primary inspirations for the categorical layers of Gemara. And Gemara also happens to have seven categorical layers, just like the OSI model.

So if you think about it in networking, if I want to send an email, I don’t have to understand like packet routing. I can just send my email. So in GRC, we can’t really do that today. We have security engineers that also have to be compliance experts to be successful. And so with Gemara, we want to outline the separation of concerns within the GRC stack to make sure that like each specialist can contain their complexity in their own layer while allowing them to exchange information with different specialists completing activities in different layers.

So like if I could give one takeaway, we want to make it so engineers can build secure and compliant systems without having to understand the nuance of every single compliance framework out there.

Sally Cooper (04:14)
I love that. So we have a baseline now. Let’s talk about the problem and get a little bit deeper into that. So Gemara is responding to a problem that you touched upon. Why do we need a model for GRC engineering and what incompatibility issue are you trying to solve? If you could go a little deeper.

Jenn Power (04:34)
Sure. So I think sharing resources in GRC is just really hard today. Sharing content, sharing tools: none of those tools and content work together today, if I could say that. Engineers are typically having to reinterpret security controls. They’re having to create a lot of glue code to make sure that a tool like a GRC system and a vulnerability scanner can actually talk to each other.

So we’re trying to solve that incompatibility issue on the technical side. But this is also a human problem. And I think that’s kind of the sneakiest part about it. A lot of times, we’re not even saying the same things when we use the same terms. And so that’s another thing that we’re trying to solve within the Gemara project.

This one comes up all the time. Take the word policy. If you say that to an engineer, you’re thinking immediately, policy as code, like a rego file or something you’re going to use with your policy engine. But if you’re talking to someone in the compliance space, they’re thinking like, this is like my 40-page document that outlines my organizational objectives. So we created definitions within the Gemara project to go along with the model to solve the human problem while we’re also trying to solve the technical problem.

Sally Cooper (06:05)
That’s interesting. Okay, I heard you say something about a seven-layer structure. Can you tell me why you chose a seven-layer structure for Gemara?

Jenn Power (06:17)
So this actually stemmed from an initiative under the CNCF called the Automated Governance Maturity Model. And that started as four concepts, actually: policy, evaluation, enforcement, and audit. And that established the initial kind of lexicon that the project had been using.

And it initially got some adoption in the ecosystem, specifically in projects under the Linux Foundation, like FINOS Common Cloud Controls (CCC) and the Open Source Project Security Baseline (OSPS Baseline). And through the application of that lexicon, we found that there needed to be more granularity within that policy layer. So it expanded to two new layers called guidance and controls.

And I didn’t mention that we were creating a white paper yet, but we do have a white paper. And through the creation of that white paper, which Eddie Knight did so much work to create that initial draft there, we actually found that we were missing a layer. We had a seventh layer, and it was something that we had called sensitive activities. And it’s something kind of sandwiched in the middle of the Gemara layers. And so with that, we kind of organically grew to seven layers. So that I think is the kind of origin story on how the layers got to seven.

Sally Cooper (07:54)
I love that. And you’re really talking about how Gemara is not built in isolation, that you’re working with other open source projects. For example, you mentioned Baseline and the FINOS Common Cloud Controls. Can you tell me how Gemara connects to those projects?

Hannah Braswell (08:09)
Yeah. So in terms of Gemara connecting to the other open source projects, the first thing that comes to mind is really the CRA because of how prominent it is right now and just the future of its impact. And I really think that Gemara is going to be a catalyst for open source projects in general that are in need of some kind of mechanism to, you know, implement security controls and align with potential compliance requirements.

And the good thing about Gemara is that you don’t have to be a compliance expert to make sure that your open source project is secure. And so I would say that the OSPS Baseline is a great example of Gemara’s layer two, because it provides a set of security controls that engineers can actually implement. So in that case, other projects can reuse the baseline controls and then fit them to their needs.

And I think it also goes to say that anyone that is actually building a tool they want to sell or distribute in the European Union market that’s using open source components, they’re gonna have to think about what’s in scope. And having something like the OSPS Baseline to understand how to effectively assess your open source components and their risks is really, really valuable and just gonna be super useful. And then in terms of the FINOS Common Cloud Controls, I think that’s

Also another great example, just in terms of the use case and implementation of Gemara, because they have their core catalog, which has its own definitions of threats and controls that’s then imported to their technology specific catalogs. And yeah, so that’s a great implementation within the financial sector.

And then where we’re trying to expand the ecosystem for Gemara is in the Cloud Native Security Controls catalog refresh. And that’s actually an initiative that Jenn is leading. I’ve done a few contributions to it, but it’s essentially an effort to take the controls catalog that currently exists as a spreadsheet and make it available as a Gemara layer one machine-readable guidance document. So Gemara is really connecting to projects that are all great to have on your radar, especially with the CRA coming up.

Sally Cooper (10:26)
Wow, that sounds great. But I’m just thinking about our listeners. They’re probably wondering, like, what does this look like in practice? And I’m curious if there are any tools available to help with the Gemara model adoption.

Jenn Power (10:39)
So we’re actually working on an ecosystem of tools. So we want to bridge that theory that we’re creating within the Gemara white paper to things that are actually implementable just to make sure that you don’t have to start from scratch if you’re trying to implement the Gemara model.

So we have a couple tools within the ecosystem. One would be our implementation of the model. We’re using CUE schemas to allow users to create the models in YAML. For instance, if you wanted to create your layer two, you would create YAML, and you could use our CUE schemas to validate that your document is in fact a Gemara-compliant document. And then we’re also building SDKs. Right now we have a Go SDK, so you can build tooling around the programmatic access and manipulation of Gemara documents. A tool in the ecosystem that’s using this currently is a tool called Privateer that automates the layer five evaluations.

Sally Cooper (11:47)
Wow, that’s great. And of course, none of this works without the people. So I know you mentioned a few of them. How can new people get involved in the Gemara project?

Hannah Braswell (11:58)
So anyone that’s new and interested in getting involved in the Gemara project, my first piece of advice would just be to jump in a community meeting and listen in on what’s happening. I know I started out just by joining those meetings and I, you know, I didn’t necessarily have much to say, but I appreciated the culture and the open discussion, just like bouncing ideas back and forth off of one another.

And there’s also plenty of times when I joined a community meeting, still trying to understand the project, when there was some kind of group opinion being formed. Like I think it’s perfectly fine to say, you know, I don’t have the information right now. I don’t have an opinion. I’m still trying to learn about the project. But if something piques your interest and you want to contribute, then volunteer for it or show you’re interested, because people are not going to forget about your willingness to step up and be part of the community.

But I started joining those meetings before we rolled out the white paper, which brings me back to my first piece of advice: I’d really suggest reading the white paper first, because it describes the problem and the trajectory of the project so clearly, and that’s super important context for anyone who wants to start contributing.

From there, I mean, I’m the community manager now, but I started with small contributions that ended up supporting the community, in documentation and some other aspects of the project I was excited about and could contribute to. So I really think contributions depend on what you’re interested in. Even differences in opinion, perspective, or background can make a huge difference for the community, and anything from documentation to code, discussion, and collaboration counts as a valid contribution. So to anyone who wants to join the Gemara community and start contributing, I’d say find an area that truly interests you and gets you excited, and get involved.

Sally Cooper (14:02)
Oh, that’s great. Well, thanks so much. And before we wrap, we’re going to do a rapid-fire round. So I hope you’re ready, because this is the fun part. No overthinking, no explanations, just the first instinct that comes to you, okay? And I’m going to bounce. Yes, exactly. I’m going to bounce back and forth and ask you both some questions. Ready?

Jenn Power (14:17)
I’ll close my eyes then.

Sally Cooper (14:25)
Okay, Hannah, you’re up first. Star Wars or Star Trek?

Hannah Braswell (14:29)
Star Wars.

Sally Cooper (14:30)
Nice, I love it.
And Jenn, same question, Star Wars or Star Trek?

Jenn Power (14:35)
Star Wars.

Sally Cooper (14:36)
Okay, we’re all friends here.
Okay, back to Hannah, coffee or tea?

Hannah Braswell (14:42)
Definitely coffee.

Sally Cooper (14:43)
Yay, cheers. That’s solid.
Jenn, morning person or night owl?

Jenn Power (14:49)
Night Owl.

Sally Cooper (14:50)
Ohh that tracks. Hannah, beach vacation or mountains?

Hannah Braswell (14:56)
Hmm beach vacation.

Sally Cooper (14:58)
Nice choice. Jenn, books or movies?

Jenn Power (15:02)
Movies.

Sally Cooper (15:03)
Nice. All right, last round. Hannah, favorite open source mascot?

Hannah Braswell (15:08)
Oh… Zarf. I think it looks like an axolotl. I used to be obsessed with axolotls. And ever since I saw that, I was like, that’s the mascot.

Sally Cooper (15:18)
I love Zarf too. Cool. Okay. That’s a really strong pick.
Jenn, I’m going to give you the same question. Favorite open source mascot?

Jenn Power (15:26)
I actually love the OpenSSF goose. I think it’s so cute.

Sally Cooper (15:30)
Teehee, Honk, he’s the best. Okay, let’s bring it home, Hannah, sweet or savory.

Hannah Braswell (15:38)
Savory.

Sally Cooper (15:39)
Interesting. Okay, and Jenn? Spicy or mild?

Jenn Power (15:46)
Mild. I can’t handle any spice. I’m a baby.

Sally Cooper (15:51)
Love it. That’s amazing. Well, thank you both so much for playing along. And as we wind things down, do you have any other calls to action for our audience? If someone’s listening and wants to learn more or get involved, what’s the best next step for them?

Jenn Power (16:05)
I would say read the white paper. We’re looking for feedback on it, and it’s really the way to understand the philosophy and the architectural goals of Gemara. And if you’re thinking, hey, I want to learn GRC, that’s a good first step too. So I think that’s what I would say.

Sally Cooper (16:28)
Fantastic. Hannah, Jenn, thank you so much for your time today and for the work you’re doing for the open source security community. We appreciate you both. And to everyone listening, happy open sourcing and that’s a wrap.

OpenSSF Newsletter – February 2026

By Newsletter

TL;DR:

🇳🇱 Open Source SecurityCon Europe → Agenda live and registration open

🎙️ Securing Agentic AI in Practice → March 17 Tech Talk on AI/ML security in action

📖 Compiler Annotations Guide → Practical C/C++ hardening without rewrites

🏆 Security Slam 2026 → 30-day challenge to level up project security

🇪🇺 CRA in Practice @ FOSDEM → Turning regulation into actionable steps

📦 Package Repository Security Forum → Cross-ecosystem collaboration in action

🎙️ What’s in the SOSS? → CFP tips and a 4-part AIxCC deep dive

6 min read

Join Us at Open Source SecurityCon Europe 2026 in Amsterdam

Planning to attend KubeCon + CloudNativeCon Europe in March? Don’t miss OpenSSF’s co-located 1-day event! This gathering will bring together a diverse community, including software developers, security engineers, public sector experts, CISOs, CIOs, and tech pioneers, to explore challenges and opportunities in modern security. Collaborate with peers and discover the essential tools, knowledge, and strategies needed to ensure a safer, more secure future.

The agenda is live! Read the blog to learn what not to miss in Amsterdam and to see highlights from SecurityCon North America.

Read the blog | Register now | View the agenda

Mark Your Calendar For the Upcoming Tech Talk: Securing Agentic AI in Practice: From OpenSSF Guidance to Real-World Implementation

Join us for the first OpenSSF Tech Talk of the year, focusing on agentic artificial intelligence (AI) security.

In this session, we will explore how the OpenSSF AI/ML Security Working Group is developing open guidance and frameworks to help secure AI and machine learning systems, and how that work translates into real-world practice. Using SAFE MCP and other solutions from OpenSSF member companies as examples, we will highlight community-driven efforts to improve the security of agentic AI systems, the problems they address, the design tradeoffs involved, and the lessons learned so far.

We will also feature OpenSSF’s free course, Secure AI/ML Driven Software Development (LFEL1012), which gives attendees a clear path to build practical skills and contribute to this rapidly evolving field.

Register and mark your calendar for March 17 at 1:00 p.m. ET. Additional speaker information will be shared soon.

Fill Out All The Margins đź“–: OpenSSF Releases Compiler Annotations Guide for C and C++

OpenSSF has released a new Compiler Annotations Guide for C and C++ to help developers improve memory safety, diagnostics, and overall software security by using compiler-supported annotations. The guide explains how annotations in GCC and Clang/LLVM can make code intent explicit, strengthen static analysis, reduce false positives, and enable more effective compile-time and run-time protections. As memory-safety issues continue to drive a significant share of vulnerabilities in C and C++ systems, the guide offers practical, real-world guidance for applying low-friction hardening techniques that improve security without requiring large-scale rewrites of existing codebases. 

Read the blog

Security Slam 2026

Security Slam 2026 is a 30-day security hygiene challenge running from February 20 to March 20, culminating in an awards ceremony at KubeCon + CloudNativeCon Europe. Hosted by OpenSSF in partnership with CNCF TAG Security & Compliance and Sonatype, the event encourages projects to use practical security tools, including OpenSSF resources, to strengthen their security posture based on their maturity level. Participants can earn recognition, badges, and plaques for completing milestones, reinforcing a community-driven effort to improve open source software security at scale. 

Read the blog to learn more | Register now to receive reminders and instructions

EU Cyber Resilience Act (CRA) in Practice @ FOSDEM 2026: From Awareness to Action

At FOSDEM 2026, the CRA in Practice DevRoom brought together open source and industry leaders to turn the EU Cyber Resilience Act from policy discussion into practical action. Through case studies and panels, speakers shared concrete approaches to vulnerability management, SBOMs, VEX, risk assessment, and the steward role. 

Read the blog

Advancing Package Repository Security Through Collaboration

On February 2, OpenSSF convened the Package Manager Security Forum, bringing together maintainers and registry operators from major ecosystems to address shared challenges in package repository security. Discussions highlighted common concerns around identity and account security, governance and abuse handling, transparency, and long-term sustainability. The session reinforced that package ecosystem risks are interconnected and that improving security requires cross-ecosystem coordination, shared frameworks, and continued collaboration through OpenSSF’s neutral convening role.

Read the recap

Getting an OpenSSF Baseline Badge with the Best Practices Badge System

Is your open source project meeting the “minimum definition” of security? The OpenSSF has officially integrated the Open Source Project Security Baseline (OSPS Baseline) into its Best Practices Badge Program.

In our latest blog, David A. Wheeler explains how you can quickly identify and meet essential security requirements to earn a Baseline Badge.

What’s in the SOSS? An OpenSSF Podcast:

#50 – S3E2 Demystifying the CFP Process with KubeCon North America Keynote Speakers

Stacey Potter and Adolfo “Puerco” García Veytia share practical, behind-the-scenes advice on submitting conference talks, fresh off their KubeCon keynote. They break down how CFP review committees work, what makes an abstract stand out, common mistakes to avoid, and why authenticity matters more than polish. The episode also tackles imposter syndrome and encourages new and diverse voices to shape the future of open source through speaking.

#51 – S3E3 AIxCC Part 1: From Skepticism to Success with Andrew Carney

Andrew Carney from DARPA explains the vision and results behind the two-year AI Cyber Challenge (AIxCC), which tasked teams with building AI systems that can automatically find and patch vulnerabilities in open source software. Despite early skepticism, competitors identified more than 80% of seeded vulnerabilities and generated effective patches at surprisingly low compute costs. The episode looks at what comes next as these cyber reasoning systems move from competition to real-world adoption.

#52 – S3E4 AIxCC Part 2: How Team Atlanta Won by Blending Traditional Security and LLMs

Professor Taesoo Kim of Georgia Tech describes how Team Atlanta combined fuzzing, symbolic execution, and large language models to win AIxCC. Initially skeptical of AI, the team shifted its strategy mid-competition and discovered that hybrid approaches produced the strongest results. The conversation also covers commercialization efforts, integration with OSS-Fuzz, and how the experience reshaped academic security research.

#53 – S3E5 AIxCC Part 3: Trail of Bits’ Hybrid Approach with Buttercup

Michael Brown of Trail of Bits discusses Buttercup, the second-place AIxCC system that pairs large language models with conventional software analysis tools. The team focused on using AI for well-scoped tasks like patch generation while relying on fuzzers for proof-of-vulnerability. Now fully open source and able to run on a laptop, Buttercup is actively maintained and positioned for broader enterprise and community use.

#54 – S3E6 AIxCC Part 4: Cyber Reasoning Systems in the Real World

CRob and Jeff Diecks wrap up the AIxCC series by exploring how competition teams are applying their systems to real open source projects such as the Linux kernel and CUPS. They introduce the OSS-CRS initiative, which aims to standardize and combine components from multiple cyber reasoning systems, and share lessons learned about responsibly reporting AI-generated findings. The episode highlights how collaboration through OpenSSF’s AI/ML Security Working Group and Cyber Reasoning Systems SIG is shaping the next phase of AI-driven security.

News from OpenSSF Community Meetings and Projects:

Upcoming community meetings

In the News:

  • The OpenSSF was featured in a Technology Magazine Q&A. CRob discusses OpenSSF’s goals, OSSAfrica, the BEAR Working Group, Security Baseline, and much more. This conversation was also covered by AI Magazine. 

Meet OpenSSF at These Upcoming Events!

Connect with the OpenSSF Community at these key events:

Ways to Participate:

There are a number of ways for individuals and organizations to participate in OpenSSF. Learn more here.

You’re invited to…

See You Next Month! 

We want to get you the information you most want to see in your inbox. Missed our previous newsletters? Read here!

Have ideas or suggestions for next month’s newsletter about the OpenSSF? Let us know at marketing@openssf.org, and see you next month! 

Regards,

The OpenSSF Team

Getting an OpenSSF Baseline Badge with the Best Practices Badge System

By Blog

By David A. Wheeler

Many open source software (OSS) projects aim to develop software securely and want an easy way to communicate their security posture to others.

Overview

The OpenSSF developed the Open Source Project Security Baseline (OSPS Baseline) to act as a “minimum definition of requirements for a project relative to its maturity level”. It’s a three-level checklist (baseline-1 through baseline-3) to help OSS projects improve their security. The OpenSSF Best Practices Badge Program now supports the baseline criteria, making it easier for OSS projects to determine what they’ve already accomplished and what remains. OSS projects can then display their badge on their web pages, demonstrating what they’ve accomplished and making it easy for potential users to learn more.

This post introduces how to earn an OpenSSF baseline badge through the OpenSSF Best Practices Badge System.

Getting Started with the Best Practices Badge Program

First, visit https://www.bestpractices.dev. The site currently supports nine locales, and this URL automatically redirects you to your preferred language (e.g., https://www.bestpractices.dev/en for English).

Click on “Login” to add information. Most users prefer to log in with their GitHub account; you must grant permission during your first visit. You can also create an account specifically for the site.

Click on the “Projects” tab to see projects currently pursuing badges, then click either the “+ Add” tab or the “Add New Project” button. The “New badge” form allows you to enter your project’s repository URL and/or home page URL. You can also decide whether to begin with the “metal” series or the “baseline” series. The baseline series is a focused checklist that includes only MUST security requirements and draws in part from global cybersecurity regulations and frameworks. The metal series is a larger set of criteria that includes suggestions and quality issues impacting security derived in part from the experiences of secure Free/Libre and Open Source Software (FLOSS) projects. Both focus on security, and we encourage projects to eventually complete both; simply choose a starting point. For the purposes of this blog post, we’ll assume you chose the “baseline” series.

When you click on “Submit Project”, the system assigns a unique numeric ID to the project. The system will pause to examine the repository and attempt to automatically determine the answers to various questions. For many, this automation can save a lot of time. Once that’s done, you’ll see a form to update project information. Information highlighted in yellow with the robot symbol 🤖 indicates data entered by automation. We recommend double-checking automation results for accuracy.

Understanding and Completing the Baseline Criteria

You can now fill in the form. Each criterion can be “?” (unknown), “N/A” (not applicable), Unmet, or Met. By default, each is marked “?” (unknown). As you identify more and more items that are Met (or N/A), the % completion bar will increase. We’ve intentionally gamified this; when you reach 100% in baseline-1, you’ve earned a baseline-1 badge. You can also provide justification text; we recommend including it (even when it’s not required) to help others understand the project’s current status. Badge claims are mostly self-assertions. In some cases, automation can override false claims. The answers given are presented for public scrutiny, incentivizing correct answers.

The form shows the criterion requirements; click “show details” for more information. For example, baseline-1 criterion OSPS-AC-01.01 requires that, “When a user attempts to read or modify a sensitive resource in the project’s authoritative repository, the system MUST require the user to complete a multi-factor authentication process.” Any project hosted on GitHub automatically meets this requirement: GitHub has required multi-factor authentication since March 2023, and the system automatically fills in the required information. Projects hosted elsewhere must ensure they meet this criterion themselves.

When you’re done, you can select “Save and Continue” or “Save and Exit” to save your work to the website. The “Save and Continue” option not only lets you continue, but also reruns automations to fill in currently unknown information.

The Best Practices Badge site currently supports version v2025.10.10, but it will soon integrate the recently released v2026.02.19. New requirements will initially appear as “future” criteria, allowing maintainers to address updates without losing their current badge status. There is no reason to wait; projects should begin the process now, as the system will provide ample time to adapt to new criteria.

Displaying Your Baseline Badge

Once you’ve met the baseline-1 criteria, you can add some code to your repository to show off your badge. The site shows the code to add, and it follows the usual badge conventions. For example, in Markdown you would add this:

[![OpenSSF Baseline](https://www.bestpractices.dev/projects/ID/baseline)](https://www.bestpractices.dev/projects/ID)

If you’ve earned the baseline-1 badge, this Markdown code will display your baseline-1 badge image.

Advanced Integrations and Automation Options

We support various mechanisms to rapidly get information in and out of the badge system (replace “ID” with the project’s numerical ID), for example:

  • Project’s information (JSON): https://www.bestpractices.dev/projects/ID.json
  • Project’s baseline badge (SVG): https://www.bestpractices.dev/projects/ID/baseline
  • Proposed edit values: https://www.bestpractices.dev/projects/ID/SECTION/edit?PROPOSALS where PROPOSALS is a set of &-separated key=value pairs. This highlights those proposals with a robot icon, so you can review them before accepting them. For example, in section “baseline-1” you can use the proposal “osps_ac_01_01_status=met” to propose setting the status of OSPS-AC-01.01 to “Met”. For more information, see the documentation on automation proposals.
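The proposed-edit URL pattern above can be built programmatically. Here is a minimal sketch in Python; the project ID (42) is illustrative, and the `proposal_url` helper is a name introduced here for the example, not part of any official client library:

```python
from urllib.parse import urlencode

def proposal_url(project_id: int, section: str, proposals: dict) -> str:
    """Build a Best Practices Badge edit URL that pre-fills criterion statuses.

    The proposals dict holds key=value pairs like "osps_ac_01_01_status": "met";
    the site shows such proposed values with a robot icon for review.
    """
    base = f"https://www.bestpractices.dev/projects/{project_id}/{section}/edit"
    return f"{base}?{urlencode(proposals)}"

# Hypothetical project ID 42: propose marking OSPS-AC-01.01 as "Met"
# in the baseline-1 section.
print(proposal_url(42, "baseline-1", {"osps_ac_01_01_status": "met"}))
# → https://www.bestpractices.dev/projects/42/baseline-1/edit?osps_ac_01_01_status=met
```

Opening the resulting URL in a browser presents the proposal for human review rather than applying it silently, matching the review-before-accept flow described above.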

You can also include a “.bestpractices.json” file in the repository that contains proposed values for a badge. If present, these values will be treated as automation results and highlighted during editing so users can decide whether or not to accept them. The .bestpractices.json documentation provides more details.
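As a rough illustration only, such a file might look like the following. Both field names here are assumptions patterned after the key=value proposal format described above; consult the .bestpractices.json documentation for the authoritative schema:

```json
{
  "osps_ac_01_01_status": "met",
  "osps_ac_01_01_justification": "Hosted on GitHub, which enforces multi-factor authentication."
}
```

When the badge site examines the repository, values like these would appear highlighted for the editor to accept or reject, just like other automation results.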

Why the Baseline Badge Matters

Our goal is to help OSS projects identify next steps to improve security and provide clear guidance. These capabilities help projects demonstrate measurable progress.

If you maintain an OSS project, visit https://www.bestpractices.dev and start working on a badge. If you use OSS, support those projects on which you depend as they strengthen their security practices.

About the Author

Dr. David A. Wheeler is an expert on developing secure software and on open source software. He created the Open Source Security Foundation (OpenSSF) courses “Developing Secure Software” (LFD121) and “Understanding the EU Cyber Resilience Act (CRA)” (LFEL1001), and is completing the OpenSSF course “Secure AI/ML-Driven Software Development” (LFEL1012). His other contributions include “Fully Countering Trusting Trust through Diverse Double-Compiling (DDC)”. He is the Director of Open Source Supply Chain Security at the Linux Foundation and teaches a graduate course in developing secure software at George Mason University (GMU).