
As machine learning (ML) evolves at lightning speed, so do the threats against it. The rise of large models such as LLMs has accelerated innovation, but it has also introduced serious vulnerabilities. Data poisoning, model tampering, and unverifiable origins are not theoretical: they are real risks that affect the entire ML supply chain.
Model hubs, platforms where data scientists share models and datasets, recognized the challenge: how could they ensure the models hosted on their platforms were authentic and safe?
That’s where Google’s Open Source Security Team (GOSST), sigstore, and the Open Source Security Foundation (OpenSSF) stepped in. Together, we created the OpenSSF Model Signing (OMS) specification, an industry standard for signing AI models. We then integrated OMS into major model hubs such as NVIDIA’s NGC and Google’s Kaggle.
The Solution: Seamless Model Signing Built into Model Hubs
We partnered with Kaggle to experiment with making model signing easier without disrupting the publishing UX.
“The simplest solution to securing models is: sign the model when you train it and verify it every time you use it.”
— Mihai Maruseac, Staff Software Engineer, Google
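In code, that pattern is only a few lines. The sketch below uses the `model_signing` library from the sigstore model-transparency project (`pip install model-signing`); the model and signature paths are placeholders, and the API shown reflects the library's documented configuration interface, so check the project docs for the release you have installed.

```python
import model_signing

# Sign the model right after training. This runs a sigstore OIDC flow,
# obtains a short-lived signing certificate, and writes a detached
# signature alongside the model.
model_signing.signing.Config().use_sigstore_signer().sign(
    "path/to/trained_model",  # placeholder: directory containing the model files
    "path/to/model.sig",      # placeholder: where the signature is written
)
```

Verification at load time is the mirror image of this call; a sketch appears after the feature list below.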
Key features of the prototyped implementation:
- Model authors upload their models with the same model hub tools and processes as before, but, behind the scenes, each model is automatically signed during upload.
- Each model is signed with the uploader’s identity on the model hub via OpenID Connect (OIDC); model hubs should become OIDC providers so that they can sign models during upload.
- Model hubs use sigstore to obtain a short-lived certificate, sign the model, and store the signature alongside the model.
- Verification is automatic and transparent—the model hub verifies the signature and displays its status. A “signed” status confirms the model’s authenticity.
- Users can independently verify signatures, either with a notebook hosted on the model hub or by downloading the model and its signature and checking them with the `model_signing` CLI (see the verification sketch after this list).
- Model repositories implement access control lists (ACLs) to ensure that only authorized users can sign on behalf of specific organizations.
- All signing events are logged in the sigstore transparency log, providing a complete audit trail.
- Future plans include GUAC integration for generating AI-BOMs and inspecting ML supply chains for incident response and transparency.
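For the independent verification path mentioned in the list, here is an illustrative sketch using the same library's Python API (the `model_signing` CLI exposes equivalent functionality). The identity and issuer values are placeholders that would correspond to the uploader's account and the model hub's OIDC provider.

```python
import model_signing

# Verify a downloaded model against its detached signature.
# Verification fails unless the signature was produced by the expected
# identity and issued through the expected OIDC provider.
model_signing.verifying.Config().use_sigstore_verifier(
    identity="uploader@example.com",        # placeholder: expected signer identity
    oidc_issuer="https://oidc.example.com", # placeholder: the hub's OIDC issuer
).verify(
    "path/to/downloaded_model",  # placeholder: downloaded model directory
    "path/to/model.sig",         # placeholder: downloaded signature file
)
```

Pinning verification to both an identity and an issuer is what prevents a spoofed account on another provider from impersonating a legitimate publisher.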
The process dramatically improves trust and provenance while remaining invisible to most users.
The Result: A Blueprint for Securing AI Models
With sigstore integrated, the Kaggle experiment showed that model hubs can offer a verified ML ecosystem. Users know that what they download hasn’t been tampered with or misattributed. Each model is cryptographically signed and tied to the author’s identity: no more guessing whether a model came from “Meta” or a spoofed account.
“If we reach a state where all claims about ML systems and metadata are tamperproof, tied to identity, and verifiable by the tools ML developers already use—we can inspect the ML supply chain immediately in case of incidents.”
— Mihai Maruseac, Staff Software Engineer, Google
This solution serves as a model for the broader ecosystem. Platforms hosting datasets and models can adopt similar practices using open tools like sigstore, backed by community-driven standards through OpenSSF.
Get Involved & Learn More
Join the OpenSSF Community
Be part of the movement to secure open source software, including AI/ML systems. → Join the AI/ML Security WG
Explore sigstore
See how sigstore enables secure, transparent signing for software and models. → Visit sigstore
Learn About Google’s Open Source Security Efforts
Discover how Google is advancing supply chain security in open source and machine learning. → Google Open Source Security Team
Learn More About Kaggle
Explore how Kaggle is evolving into a secure hub for trustworthy ML models. → Visit Kaggle
Watch the Talk
Title: Taming the Wild West of ML: Practical Model Signing With sigstore on Kaggle
Speaker: Mihai Maruseac, Google
Event: OpenSSF Community Day North America – June 26, 2025
Watch the talk → YouTube