OpenSSF at DEF CON 33: AI Cyber Challenge (AIxCC), MLSecOps, and Securing Critical Infrastructure

By Jeff Diecks

The OpenSSF team will be attending DEF CON 33, where the winners of the AI Cyber Challenge (AIxCC) will be announced. We will also host a panel discussion at the AIxCC Village to introduce the concept of MLSecOps.

AIxCC, led by DARPA and ARPA-H, is a two-year competition focused on developing AI-enabled software to automatically identify and patch vulnerabilities in source code, particularly in open source software underpinning critical infrastructure.

OpenSSF is supporting AIxCC as a challenge advisor, guiding the competition to ensure its solutions benefit the open source community. We are actively working with DARPA and ARPA-H to open source the winning systems, infrastructure, and data from the competition, and are designing a program to facilitate their successful adoption and use by open source projects. At least four of the competitors’ Cyber Reasoning Systems (CRSs) will be open sourced on Friday, August 8 at DEF CON. The remaining CRSs will also be open sourced soon after the event.

Join Our Panel: Applying DevSecOps Lessons to MLSecOps

We will host a panel discussion at the AIxCC Village, “Applying DevSecOps Lessons to MLSecOps,” exploring the evolving security landscape brought on by AI/ML applications.

The panelists for this discussion will be:

  • Christopher “CRob” Robinson – Chief Security Architect, OpenSSF
  • Sarah Evans – Security Applied Research Program Lead, Dell Technologies
  • Eoin Wickens – Director of Threat Intelligence, HiddenLayer

Just as DevSecOps integrated security practices into the Software Development Life Cycle (SDLC) to address critical software security gaps, Machine Learning Operations (MLOps) now needs to evolve into MLSecOps. MLSecOps emphasizes integrating security practices throughout the ML development lifecycle, establishing security as a shared responsibility among ML developers, security practitioners, and operations teams. When securing MLOps using lessons learned from DevSecOps, the conversation includes open source tools from OpenSSF and other initiatives, such as Supply-Chain Levels for Software Artifacts (SLSA) and Sigstore, that can be extended to MLSecOps. This talk will explore some of those tools, as well as potential tooling gaps the community can partner to close. Embracing the MLSecOps methodology enables early identification and mitigation of security risks, facilitating the development of secure and trustworthy ML models.
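
As a concrete illustration of extending an existing SDLC tool to ML artifacts, here is a minimal sketch of signing a serialized model with Sigstore's cosign, exactly as one would sign any release artifact. The file names and identities are hypothetical, and this reflects general cosign usage rather than anything prescribed by the panel:

```bash
# Keyless signing of a model file (hypothetical names), just like any artifact.
# Add --yes to skip interactive confirmation in cosign v2.
cosign sign-blob model.safetensors \
  --output-signature model.sig \
  --output-certificate model.pem

# Consumers verify the blob against the signer's OIDC identity.
cosign verify-blob model.safetensors \
  --signature model.sig \
  --certificate model.pem \
  --certificate-identity dev@example.com \
  --certificate-oidc-issuer https://accounts.google.com
```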

We invite you to join us on Saturday, August 9, from 10:30-11:15 a.m. at the AIxCC Village Stage to learn more about how the lessons from DevSecOps can be applied to the unique challenges of securing AI/ML systems and to understand the importance of adopting an MLSecOps approach for a more secure future in open source software.

About the Author

Jeff Diecks is the Technical Program Manager for the AI Cyber Challenge (AIxCC) at the Open Source Security Foundation (OpenSSF). A participant in open source since 1999, he’s delivered digital products and applications for dozens of universities, six professional sports leagues, state governments, global media companies, non-profits, and corporate clients.

MLSecOps Whitepaper

Visualizing Secure MLOps (MLSecOps): A Practical Guide for Building Robust AI/ML Pipeline Security

By Sarah Evans and Andrey Shorov

The world of technology is constantly evolving, and with the rise of Artificial Intelligence (AI) and Machine Learning (ML), the demand for robust security measures has become more critical than ever. As organizations rush to deploy AI solutions, the gap between ML innovation and security practices has created unprecedented vulnerabilities we are only beginning to understand.

A new whitepaper, “Visualizing Secure MLOps (MLSecOps): A Practical Guide for Building Robust AI/ML Pipeline Security,” addresses this critical gap by providing a comprehensive framework for practitioners focused on building and securing machine learning pipelines.

Why MLSecOps, Why Now

AI/ML systems encompass unique components, such as training datasets, models, and inference pipelines, that introduce novel weaknesses demanding dedicated attention throughout the ML lifecycle.

The evolving responsibilities within organizations have led to an intersection of expertise:

  1. Software developers, who specialize in deploying applications with traditional code, are increasingly responsible for incorporating data sets and ML models into those applications.
  2. Data engineers and data scientists, who specialize in data sets and creating algorithms and models tailored to those data sets, are expected to integrate data sets and models into applications using code.

These trends have exposed a gap in security knowledge, leaving AI/ML pipelines susceptible to risks that neither discipline alone is fully equipped to manage. To resolve this, we investigated how to adapt the principles of secure DevOps to secure MLOps, creating an MLSecOps framework that empowers both software developers and AI-focused professionals with the tools and processes needed for end-to-end ML pipeline security. During our research, we identified a scarcity of practical guidance on securing ML pipelines using open source tools commonly employed by developers. This whitepaper aims to bridge that gap and provide a practical starting point.

What’s Inside the Whitepaper

This whitepaper is the result of a collaboration between Dell and Ericsson through our shared membership in the OpenSSF, building on a publication on MLSecOps for telecom environments authored by Ericsson researchers [https://www.ericsson.com/en/reports-and-papers/white-papers/mlsecops-protecting-the-ai-ml-lifecycle-in-telecom]. Together, we have expanded Ericsson’s original MLSecOps framework into a comprehensive guide that addresses the needs of diverse industry sectors.

We are proud to share this guide as an industry resource that demonstrates how to apply open source tools from secure DevOps to secure MLOps. It offers a progressive, visual learning experience in which concepts are layered upon one another, extending security beyond traditional code-centric approaches. The guide integrates insights from CI/CD, the ML lifecycle, various personas, a sample reference architecture, mapped risks, security controls, and practical tools.

The document introduces a visual, “layer-by-layer” approach to help practitioners securely adopt ML, leveraging open-source tools from OpenSSF initiatives such as Supply-Chain Levels for Software Artifacts (SLSA), Sigstore, and OpenSSF Scorecard. It further explores opportunities to extend these tools to secure the AI/ML lifecycle using MLSecOps practices, while identifying specific gaps in current tooling and offering recommendations for future development.
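
As one example of how a DevSecOps tool mentioned above can be pointed at an ML pipeline today, the following is a standard OpenSSF Scorecard invocation against a hypothetical repository hosting ML training and pipeline code; it is an illustration, not a recipe from the whitepaper:

```bash
# Scorecard needs a GitHub token for API access.
export GITHUB_AUTH_TOKEN="<token>"

# Assess a (hypothetical) repository that hosts ML pipeline code, limiting
# the run to a few checks relevant to supply chain integrity.
scorecard --repo=github.com/example-org/ml-pipeline \
  --checks=Pinned-Dependencies,Signed-Releases,Branch-Protection
```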

For practitioners involved in the design, development, deployment, operation, and security of AI/ML systems, this whitepaper provides a practical foundation for building robust and secure AI/ML pipelines and applications.

Join Us

Ready to help shape the future of secure AI and ML?

Read the Whitepaper

Join the AI/ML Security Working Group

Explore OpenSSF Membership

Author Bios

Sarah Evans delivers technical innovation for secure business outcomes through her role as the security research program lead in the Office of the CTO at Dell Technologies. She is an industry leader and advocate for extending secure operations and supply chain development principles in AI. Sarah also ensures the security research program explores the overlapping security impacts of emerging technologies in other research programs, such as quantum computing. Sarah leverages her extensive practical experience in security and IT, spanning small businesses, large enterprises (including the highly regulated financial services industry and a 21-year military career), and academia (computer information systems). She earned an MBA, holds an AI/ML professional certificate from MIT, and is a Certified Information Security Manager (CISM). Sarah is also a strategic and technical leader representing Dell in OpenSSF, a foundation for securing open source software.

Andrey Shorov is a Senior Security Technology Specialist at Product Security, Ericsson. He is a cybersecurity expert with more than 16 years of experience across corporate and academic environments. Specializing in AI/ML and network security, Andrey advances AI-driven cybersecurity strategies, leading the development of cutting-edge security architectures and practices at Ericsson and contributing research that shapes industry standards. He holds a Ph.D. in Computer Science and maintains CISSP and Security+ certifications.

🎉 Celebrating Five Years of OpenSSF: A Journey Through Open Source Security

August 2025 marks five years since the official formation of the Open Source Security Foundation (OpenSSF). Born out of a critical need to secure the software supply chains and open source ecosystems powering global technology infrastructure, OpenSSF quickly emerged as a community-driven leader in open source security.

“OpenSSF was founded to unify and strengthen global efforts around securing open source software. In five years, we’ve built a collaborative foundation that reaches across industries, governments, and ecosystems. Together, we’re building a world where open source is not only powerful—but trusted.” — Steve Fernandez, General Manager, OpenSSF

đŸŒ± Beginnings: Answering the Call

OpenSSF was launched on August 3, 2020, consolidating earlier initiatives into a unified, cross-industry effort to protect open source projects. The urgency was clear—high-profile vulnerabilities such as Heartbleed served as stark reminders that collective action was essential to safeguard the digital infrastructure everyone depends on.

“From day one, OpenSSF has been about action—empowering the community to build and adopt real-world security solutions. Five years in, we’ve moved from ideas to impact. The work isn’t done, but the momentum is real, and the future is wide open.” — Christopher “CRob” Robinson, Chief Security Architect, OpenSSF

🚀 Milestones & Major Initiatives

Over the past five years, OpenSSF has spearheaded critical initiatives that shaped the landscape of open source security:

2021 – Secure Software Development Fundamentals:
Launching free educational courses on edX, OpenSSF equipped developers globally with foundational security practices.

“When we launched our first free training course in secure software development, we had one goal: make security knowledge available to every software developer. Today, that same mission powers all of OpenSSF—equipping developers, maintainers, and communities with the tools they need to make open source software more secure for everyone.” — David A. Wheeler, Director, Open Source Supply Chain Security, Linux Foundation

2021 – Sigstore: Open Source Signing for Everyone:
Sigstore was launched to make cryptographic signing accessible to all open source developers, providing a free and automated way to verify the integrity and provenance of software artifacts and metadata.

“Being part of the OpenSSF has been crucial for the Sigstore project. It has allowed us to not only foster community growth, neutral governance, and engagement with the broader OSS ecosystem, but also given us the ability to coordinate with a myriad of in-house initiatives — like the securing software repos working group — to further our mission of software signing for everybody. As Sigstore continues to grow and become a core technology for software supply chain security, we believe that the OpenSSF is a great place to provide a stable, reliable, and mature service for the public benefit.”
— Santiago Torres-Arias, Assistant Professor at Purdue University and Sigstore TSC Chair

2021-2022 – Security with OpenSSF Scorecard & Criticality Score:
Innovative tools were introduced to automate and simplify the assessment of open source project security risks.

“The OpenSSF has been instrumental in transforming how the industry approaches open source security, particularly through initiatives like the Security Scorecard and Sigstore, which have improved software supply chain security for millions of developers. As we look ahead, AWS is committed to supporting OpenSSF’s mission of making open source software more secure by default, and we’re excited to help developers all over the world drive security innovation in their applications.” — Mark Ryland, Director, Amazon Security at AWS

2022 – Launch of Alpha-Omega:

Alpha-Omega (AO), an associated project of the OpenSSF launched in February 2022, is funded by Microsoft, Google, Amazon, and Citi. Its mission is to enhance the security of critical open source software by enabling sustainable improvements and ensuring vulnerabilities are identified and resolved quickly. Since its inception, the Alpha-Omega Fund has invested $14 million in open source security, supporting a range of projects including LLVM, Java, PHP, Jenkins, Airflow, OpenSSL, AI libraries, Homebrew, FreeBSD, Node.js, jQuery, RubyGems, and the Linux Kernel. It has also provided funding to key foundations and ecosystems such as the Apache Software Foundation (ASF), Eclipse Foundation, OpenJS Foundation, Python Software Foundation, and Rust Foundation.

2023 – SLSA v1.0 (Supply-chain Levels for Software Artifacts):
Setting clear and actionable standards for build integrity and provenance, SLSA was a turning point for software supply chain security and became essential in reducing vulnerabilities.
At the same time, community-driven tools like GUAC (Graph for Understanding Artifact Composition) built on SLSA’s principles, unlocking deep visibility into software metadata, making it more usable and actionable, and connecting the dots across provenance, SBOMs, and in-toto security attestations.
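
For readers unfamiliar with the format, a SLSA v1.0 provenance attestation is an in-toto statement that binds an artifact digest to details of the build that produced it. The skeleton below is purely illustrative, with placeholder names, URIs, and digest; it is not output from any particular builder:

```json
{
  "_type": "https://in-toto.io/Statement/v1",
  "subject": [
    { "name": "app-1.0.tar.gz", "digest": { "sha256": "<artifact digest>" } }
  ],
  "predicateType": "https://slsa.dev/provenance/v1",
  "predicate": {
    "buildDefinition": {
      "buildType": "https://example.com/build-types/ci@v1",
      "externalParameters": { "repository": "https://github.com/example-org/app" }
    },
    "runDetails": {
      "builder": { "id": "https://example.com/builders/ci-runner@v1" }
    }
  }
}
```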

“Projects like GUAC demonstrate how open source innovation can make software security both scalable and practical. Kusari is proud to have played a role in these milestones, helping to strengthen the resiliency of the open source software ecosystem.”

— Michael Lieberman, CTO and Co-founder at Kusari and Governing Board member

2024 – Principles for Package Repository Security:

Offering a voluntary, community-driven security maturity model to strengthen the resilience of software ecosystems.

“Developers around the world rely daily on package repositories for secure distribution of open source software. It’s critical that we listen to the maintainers of these systems and provide support in a way that works for them. We were happy to work with these maintainers to develop the Principles for Package Repository Security, to help them put together security roadmaps and provide a reference in funding requests.” — Zach Steindler, co-chair of Securing Software Repositories Working Group, Principal Engineer, GitHub

2025

OSPS Baseline:
This initiative brought tiered security requirements to open source projects and was quickly adopted by groundbreaking projects such as GUAC, OpenTelemetry, and bomctl.

“The Open Source Project Security Baseline was born from real use cases, with projects needing robust standardized guidance around how to best secure their development processes. OpenSSF has not only been the best topical location for contributors from around the world to gather — the foundation has gone above and beyond by providing project support to extend the content, promote the concept, and elevate Baseline from a simple control catalog into a robust community and ecosystem.” — Eddie Knight, OSPO Lead, Sonatype

AI/ML Security Working Group: 

The MLSecOps White Paper from the AI/ML Security Working Group marks a major step in securing machine learning pipelines and guiding the future of trustworthy AI.

“The AI/ML working group tackles problems at the confluence of security and AI. While the AI world is moving at a breakneck pace, the security problems that we are tackling in the traditional software world are also relevant. Given that AI can increase the impact of a security vulnerability, we need to handle them with determination. The working group has worked on securing LLM-generated code, model signing, and a new white paper for MLSecOps, among many other interesting things.” — Mihai Maruseac, co-chair of AI/ML Security Working Group, Staff Software Engineer, Google

🌐 Growing Community & Policy Impact

OpenSSF’s role rapidly expanded beyond tooling, becoming influential in global policy dialogues, including advising the White House on software security and contributing to critical policy conversations such as the EU’s Cyber Resilience Act (CRA).

OpenSSF also continues to invest in community-building and education initiatives. This year, the Foundation launched its inaugural Summer Mentorship Program, welcoming its first cohort of mentees working directly with technical project leads to gain hands-on experience in open source security.

The Foundation also supported the publication of the Compiler Options Hardening Guide for C and C++, originally contributed by Ericsson, to help developers and toolchains apply secure-by-default compilation practices—especially critical in memory-unsafe languages.
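
To give a flavor of the guide's recommendations, the compile line below applies a representative subset of its hardening options for GCC; the guide itself covers compiler-version caveats and many more flags:

```bash
# Representative hardening flags from the guide (a subset; see the guide for
# version-specific caveats). Note: _FORTIFY_SOURCE requires optimization (-O2).
gcc -O2 -Wall -Wformat=2 -Wtrampolines \
    -D_FORTIFY_SOURCE=3 -fstack-protector-strong -fstack-clash-protection \
    -fPIE -pie -Wl,-z,relro -Wl,-z,now \
    -o app app.c
```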

In addition, OpenSSF has contributed to improving vulnerability disclosure practices across the ecosystem, offering guidance and tools that support maintainers in navigating CVEs, responsible disclosure, and downstream communication.

“The OpenSSF is uniquely positioned to advise on considerations, technical elements, and community impact public policy decisions have not only on open source, but also on the complex reality of implementing cybersecurity to a diverse and global technical sector. In the past 5 years, OpenSSF has been building a community of well-informed open source security experts that can advise regulations but also challenge and adapt security frameworks, law, and regulation to support open source projects in raising their security posture through transparency and open collaboration; hallmarks of open source culture.” — Emily Fox, Portfolio Security Architect, Red Hat

✹ Voices from Our Community: Reflections & Hopes

Key community members, from long-standing contributors to new voices, have shaped OpenSSF’s journey:

OG Voices:

“Microsoft joined OpenSSF as a founding member, committed to advancing secure open source development. Over the past five years, OpenSSF has driven industry collaboration on security through initiatives like Alpha-Omega, SLSA, Scorecard, Secure Software Development training, and global policy efforts such as the Cyber Resilience Act. Together, we’ve improved memory safety, supply chain integrity, and secure-by-design practices, demonstrating that collaboration is key to security. We look forward to many more security advancements as we continue our partnership.” — Mark Russinovich, CTO, Deputy CISO, and Technical Fellow, Microsoft Azure

OpenSSF Leadership Perspective: 

“OpenSSF’s strength comes from the people behind it—builders, advocates, and champions from around the world working toward a safer open source future. This milestone isn’t just a celebration of what we’ve accomplished, but of the community we’ve built together.” — Adrianne Marcum, Chief of Staff, OpenSSF

Community Perspectives:

“After 5 years of hard work, the OpenSSF stands as a global force for securing the critical open source software that we all use. Here’s to five years of uniting communities, hardening the software supply chain, and driving a safer digital future.” — Tracy Ragan, CEO, DeployHub

“I found OpenSSF through my own curiosity, not by invitation, and I stayed because of the warmth, support, and shared mission I discovered. From contributing to the BEAR Working Group to receiving real backing for opportunities, the community consistently shows up for its members. It’s more than a project; it’s a space where people are supported, valued, and empowered to grow.” — Ijeoma Onwuka, Independent Contributor

🔼 Looking Forward

As we celebrate our fifth anniversary, OpenSSF is preparing for a future increasingly influenced by AI-driven tools and global collaboration. Community members across the globe envision greater adoption of secure AI practices, expanded policy influence, and deeper, inclusive international partnerships.

“As we celebrate OpenSSF’s 5th Anniversary, I’m energized by how our vision has grown into a thriving global movement of developers, maintainers, security researchers, and organizations all united by our shared mission. Looking ahead we’re hoping to cultivate our community’s knowledge and empower growth through stronger collaboration and more inclusive pathways for contributors.” — Stacey Potter, Community Manager, OpenSSF

📣 Join the Celebration

We invite you to share your memories, contribute your voice, and become part of the next chapter in securing open source software.

Here’s to many more years ahead! 🎉

Case Study: Google Secures Machine Learning Models with sigstore

As machine learning (ML) evolves at lightning speed, so do the threats. The rise of large models like LLMs has accelerated innovation—but also introduced serious vulnerabilities. Data poisoning, model tampering, and unverifiable origins are not theoretical—they’re real risks that impact the entire ML supply chain.

Model hubs, platforms for data scientists to share models and datasets, recognized the challenge: How could they ensure the models hosted on their platform were authentic and safe?

That’s where Google’s Open Source Security Team (GOSST), sigstore, and the Open Source Security Foundation (OpenSSF) stepped in. Together, we created the OpenSSF Model Signing (OMS) specification, an industry standard for signing AI models. We then integrated OMS into major model hubs such as NVIDIA’s NGC and Google’s Kaggle.

The Solution: Seamless Model Signing Built into Model Hubs

We partnered with Kaggle to experiment with how to make model signing easier without disrupting the publishing UX.

“The simplest solution to securing models is: sign the model when you train it and verify it every time you use it.”
— Mihai Maruseac, Staff Software Engineer, Google

Key features of the prototyped implementation:

  • Model authors could use the same model hub upload tools and processes to upload their models, but, behind the scenes, these models would be automatically signed during the upload process.
  • Each model is signed using the uploader’s identity on the model hub, via OpenID Connect (OIDC). Model hubs should become OIDC providers to ensure that they can sign the model during upload.
  • Model hubs use sigstore to obtain a short-lived certificate, sign the model, and store the signature alongside the model.
  • Verification is automatic and transparent—the model hub verifies the signature and displays its status. A “signed” status confirms the model’s authenticity.
  • Users can independently verify signatures using a notebook hosted on the model hub, or by downloading the model and its signature and verifying them with the `model_signing` CLI (see the sketch after this list).
  • Model repositories implement access controls (ACLs) to ensure that only authorized users can sign on behalf of specific organizations.
  • All signing events are logged in the sigstore transparency log, providing a complete audit trail.
  • Future plans include GUAC integration for generating AI-BOMs and inspecting ML supply chains for incident response and transparency.
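
To make the last-mile verification step concrete, here is a sketch of what an independent check might look like with the `model_signing` CLI. The subcommand and flag names are assumptions for illustration only; consult the model_signing documentation for the exact interface:

```bash
# Install the signing library (PyPI package name assumed to be model-signing).
pip install model-signing

# Hypothetical verification of a downloaded model against its signature;
# flag names are illustrative assumptions, not the documented interface.
model_signing verify ./downloaded-model \
  --signature model.sig \
  --identity uploader@example.com \
  --identity_provider https://accounts.google.com
```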

The process dramatically improves trust and provenance while remaining invisible to most users.

The Result: A Blueprint for Securing AI Models

With sigstore integrated, the experiment with Kaggle proved that model hubs can offer a verified ML ecosystem. Users know that what they download hasn’t been tampered with or misattributed. Each model is cryptographically signed and tied to the author’s identity—no more guessing whether a model came from “Meta” or a spoofed account.

“If we reach a state where all claims about ML systems and metadata are tamperproof, tied to identity, and verifiable by the tools ML developers already use—we can inspect the ML supply chain immediately in case of incidents.”
— Mihai Maruseac, Staff Software Engineer, Google

This solution serves as a model for the broader ecosystem. Platforms hosting datasets and models can adopt similar practices using open tools like sigstore, backed by community-driven standards through OpenSSF.

Get Involved & Learn More

Join the OpenSSF Community
Be part of the movement to secure open source software, including AI/ML systems. → Join the AI/ML Security WG

Explore sigstore
See how sigstore enables secure, transparent signing for software and models. → Visit sigstore

Learn About Google’s Open Source Security Efforts
Discover how Google is advancing supply chain security in open source and machine learning. → Google Open Source Security Team

Learn More about Kaggle
Explore how Kaggle is evolving into a secure hub for trustworthy ML models. → Visit Kaggle

Watch the Talk
Title: Taming the Wild West of ML: Practical Model Signing With sigstore on Kaggle
Speaker: Mihai Maruseac, Google
Event: OpenSSF Community Day North America – June 26, 2025
Watch the talk → YouTube