
OpenSSF at DEF CON 33: AI Cyber Challenge (AIxCC), MLSecOps, and Securing Critical Infrastructure

By Jeff Diecks

The OpenSSF team will be attending DEF CON 33, where the winners of the AI Cyber Challenge (AIxCC) will be announced. We will also host a panel discussion at the AIxCC Village to introduce the concept of MLSecOps.

AIxCC, led by DARPA and ARPA-H, is a two-year competition focused on developing AI-enabled software to automatically identify and patch vulnerabilities in source code, particularly in open source software underpinning critical infrastructure.

OpenSSF is supporting AIxCC as a challenge advisor, guiding the competition to ensure its solutions benefit the open source community. We are actively working with DARPA and ARPA-H to open source the winning systems, infrastructure, and data from the competition, and we are designing a program to facilitate their successful adoption and use by open source projects. At least four of the competitors’ Cyber Reasoning Systems (CRSs) will be open sourced on Friday, August 8 at DEF CON; the remaining CRSs will follow soon after the event.

Join Our Panel: Applying DevSecOps Lessons to MLSecOps

We will be hosting a panel talk at the AIxCC Village, “Applying DevSecOps Lessons to MLSecOps.” The presentation will examine how the security landscape is evolving with the advent of AI/ML applications.

The panelists for this discussion will be:

  • Christopher “CRob” Robinson – Chief Security Architect, OpenSSF
  • Sarah Evans – Security Applied Research Program Lead, Dell Technologies
  • Eoin Wickens – Director of Threat Intelligence, HiddenLayer

Just as DevSecOps integrated security practices into the Software Development Life Cycle (SDLC) to address critical software security gaps, Machine Learning Operations (MLOps) now needs to transition into MLSecOps. MLSecOps emphasizes integrating security practices throughout the ML development lifecycle, establishing security as a shared responsibility among ML developers, security practitioners, and operations teams. When applying lessons learned from DevSecOps to securing MLOps, the conversation includes open source tools from OpenSSF and other initiatives, such as Supply-chain Levels for Software Artifacts (SLSA) and Sigstore, that can be extended to MLSecOps. This talk will explore some of those tools and discuss potential tooling gaps the community can partner to close. Embracing the MLSecOps methodology enables early identification and mitigation of security risks, facilitating the development of secure and trustworthy ML models.

We invite you to join us on Saturday, August 9, from 10:30-11:15 a.m. at the AIxCC Village Stage to learn more about how the lessons from DevSecOps can be applied to the unique challenges of securing AI/ML systems and to understand the importance of adopting an MLSecOps approach for a more secure future in open source software.

About the Author

Jeff Diecks is the Technical Program Manager for the AI Cyber Challenge (AIxCC) at the Open Source Security Foundation (OpenSSF). A participant in open source since 1999, he’s delivered digital products and applications for dozens of universities, six professional sports leagues, state governments, global media companies, non-profits, and corporate clients.

Case Study: Google Secures Machine Learning Models with sigstore

As machine learning (ML) evolves at lightning speed, so do the threats. The rise of large models like LLMs has accelerated innovation—but also introduced serious vulnerabilities. Data poisoning, model tampering, and unverifiable origins are not theoretical—they’re real risks that impact the entire ML supply chain.

Model hubs, the platforms where data scientists share models and datasets, recognized the challenge: How could they ensure the models hosted on their platforms were authentic and safe?

That’s where Google’s Open Source Security Team (GOSST), sigstore, and the Open Source Security Foundation (OpenSSF) stepped in. Together, we created the OpenSSF Model Signing (OMS) specification, an industry standard for signing AI models. We then integrated OMS into major model hubs such as NVIDIA’s NGC and Google’s Kaggle.
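A model is rarely a single file: it is usually a directory of weight shards, tokenizer files, and configuration, so a useful signature has to cover all of them. Conceptually, OMS-style signing hashes every file into a manifest and signs the manifest. The sketch below illustrates just that hashing idea in plain Python; the helper is hypothetical, and the JSON output is only an illustration, not the actual OMS serialization (real signatures are produced by tools like `model_signing`, not hand-rolled).

```python
import hashlib
import json
from pathlib import Path


def digest_manifest(model_dir: str) -> dict[str, str]:
    """Hypothetical helper: map each file in a model directory to its SHA-256.

    Illustrates why model signing covers a manifest of per-file digests
    rather than a single blob; real OMS signatures are produced by tools
    like `model_signing`, not by hand-rolling JSON like this.
    """
    root = Path(model_dir)
    manifest = {}
    for path in sorted(root.rglob("*")):
        if path.is_file():
            digest = hashlib.sha256()
            with path.open("rb") as f:
                # Hash in 1 MiB chunks so multi-gigabyte weight shards
                # never need to fit in memory at once.
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    digest.update(chunk)
            manifest[path.relative_to(root).as_posix()] = digest.hexdigest()
    return manifest


if __name__ == "__main__":
    # Deterministic serialization, so identical files always sign identically.
    print(json.dumps(digest_manifest("./my-model"), indent=2, sort_keys=True))
```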

The Solution: Seamless Model Signing Built into Model Hubs

We partnered with Kaggle to experiment with making model signing easier without disrupting the publishing UX.

“The simplest solution to securing models is: sign the model when you train it and verify it every time you use it.”
— Mihai Maruseac, Staff Software Engineer, Google
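The `model_signing` package from the sigstore/model-transparency project puts that advice roughly one function call away. Below is a minimal sketch of the training-time half; the module and method names follow the project’s documentation as we understand it, so treat the exact API shape as an assumption rather than a definitive reference.

```python
import model_signing  # from the sigstore/model-transparency project

# Sign the model directory right after training. In the default Sigstore
# flow this triggers an OIDC login, obtains a short-lived certificate,
# and writes a signature bundle next to the model. (API shape assumed
# from the project's docs; check them for the current interface.)
model_signing.signing.Config().use_sigstore_signer().sign(
    "finbert/",     # path to the trained model directory
    "finbert.sig",  # where to write the signature bundle
)
```

Signing immediately after training keeps the window between producing the weights and attesting to them as short as possible; the verification half of the quote is sketched after the feature list below.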

Key features of the prototyped implementation:

  • Model authors could upload their models with the same model hub tools and processes as before; behind the scenes, the models would be automatically signed during upload.
  • Each model is signed using the uploader’s identity on the model hub, via OpenID Connect (OIDC). Model hubs should become OIDC providers to ensure that they can sign the model during upload.
  • Model hubs use sigstore to obtain a short-lived certificate, sign the model, and store the signature alongside the model.
  • Verification is automatic and transparent—the model hub verifies the signature and displays its status. A “signed” status confirms the model’s authenticity.
  • Users can independently verify signatures using a notebook hosted on the model hub, or by downloading the model and its signature and verifying them with the `model_signing` CLI (a Python sketch of this check follows the list).
  • Model repositories implement access controls (ACLs) to ensure that only authorized users can sign on behalf of specific organizations.
  • All signing events are logged in the sigstore transparency log, providing a complete audit trail.
  • Future plans include GUAC integration for generating AI-BOMs and inspecting ML supply chains for incident response and transparency.
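To complement the automatic hub-side verification above, a consumer can pin verification to the publisher’s identity, which is the check the `model_signing` CLI and hub-hosted notebooks wrap. A minimal sketch follows, with the same caveat that the exact API names are assumptions based on the project’s documentation, and the identity values are illustrative placeholders:

```python
import model_signing  # from the sigstore/model-transparency project

# Accept the model only if its signature was issued to the expected
# identity by the expected OIDC provider. The identity and issuer below
# are illustrative placeholders; a verification failure is assumed to
# raise, so a pipeline can run this as a gate before loading the model.
model_signing.verifying.Config().use_sigstore_verifier(
    identity="publisher@example.com",
    oidc_issuer="https://accounts.google.com",
).verify(
    "finbert/",     # the downloaded model directory
    "finbert.sig",  # the signature bundle fetched alongside it
)
```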

The process dramatically improves trust and provenance while remaining invisible to most users.

The Result: A Blueprint for Securing AI Models

With sigstore integrated, the experiment with Kaggle proved that model hubs can offer a verified ML ecosystem. Users know that what they download hasn’t been tampered with or misattributed. Each model is cryptographically signed and tied to the author’s identity—no more guessing whether a model came from “Meta” or a spoofed account.

“If we reach a state where all claims about ML systems and metadata are tamperproof, tied to identity, and verifiable by the tools ML developers already use—we can inspect the ML supply chain immediately in case of incidents.”
— Mihai Maruseac, Staff Software Engineer, Google

This solution serves as a model for the broader ecosystem. Platforms hosting datasets and models can adopt similar practices using open tools like sigstore, backed by community-driven standards through OpenSSF.

Get Involved & Learn More

Join the OpenSSF Community
Be part of the movement to secure open source software, including AI/ML systems. → Join the AI/ML Security WG

Explore sigstore
See how sigstore enables secure, transparent signing for software and models. → Visit sigstore

Learn About Google’s Open Source Security Efforts
Discover how Google is advancing supply chain security in open source and machine learning. → Google Open Source Security Team

Learn More about Kaggle
Explore how Kaggle is evolving into a secure hub for trustworthy ML models. → Visit Kaggle

Watch the Talk
Title: Taming the Wild West of ML: Practical Model Signing With sigstore on Kaggle
Speaker: Mihai Maruseac, Google
Event: OpenSSF Community Day North America – June 26, 2025
Watch the talk → YouTube