An Introduction to the OpenSSF Model Signing (OMS) Specification: Model Signing for Secure and Trusted AI Supply Chains

By Mihai Maruseac (Google), Eoin Wickens (HiddenLayer), Daniel Major (NVIDIA), Martin Sablotny (NVIDIA)

As AI adoption continues to accelerate, so does the need to secure the AI supply chain. Organizations want to be able to verify that the models they build, deploy, or consume are authentic, untampered, and compliant with internal policies and external regulations. From tampered models to poisoned datasets, the risks facing production AI systems are growing — and the industry is responding.

In collaboration with industry partners, the Open Source Security Foundation (OpenSSF)’s AI/ML Working Group recently delivered a model signing solution. Today, we are formalizing the signature format as OpenSSF Model Signing (OMS): a flexible and implementation-agnostic standard for model signing, purpose-built for the unique requirements of AI workflows.

What is Model Signing

Model signing is a cryptographic process that creates a verifiable record of the origin and integrity of machine learning models. Recipients can verify that a model was published by the expected source and has not subsequently been tampered with.

Signing AI artifacts is an essential step in building trust and accountability across the AI supply chain. Project teams that depend on open source foundation models can verify that the models they are building upon are the ones they trust. Organizations can trace the integrity of models, whether those models are developed in-house, shared between teams, or deployed into production.

Key stakeholders that benefit from model signing:

  • End users gain confidence that the models they are running are legitimate and unmodified.
  • Compliance and governance teams benefit from traceable metadata that supports audits and regulatory reporting.
  • Developers and MLOps teams are equipped to trace issues, improve incident response, and ensure reproducibility across experiments and deployments.

How Does Model Signing Work

Model signing uses cryptographic keys to ensure the integrity and authenticity of an AI model. A signing program uses a private key to generate a digital signature for the model. This signature can then be verified by anyone holding the corresponding public key. These keys can be generated a priori, obtained from signing certificates, or generated transparently during the Sigstore signing flow. If verification succeeds, the model is confirmed as untampered and authentic; if it fails, the model may have been altered or is untrusted.
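
For intuition, here is a minimal sketch of that hash-then-sign flow in Python, using the cryptography package. The file name is hypothetical, and the example covers only the primitive: OMS itself signs a manifest covering every file in a model, as described in the next section.

Python
# A minimal sketch of the sign/verify primitive described above, using the
# `cryptography` package. The model file name is hypothetical.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
import hashlib

# Signer side: generate a key pair and sign the model's SHA-256 digest.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

digest = hashlib.sha256(open("model.safetensors", "rb").read()).digest()
signature = private_key.sign(digest)

# Verifier side: recompute the digest and check it against the signature
# using the distributed public key.
try:
    public_key.verify(signature, digest)
    print("model is authentic and untampered")
except InvalidSignature:
    print("model was altered or signed by a different key")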

Figure 1: Model Signing Diagram

How Does OMS Work

OMS Signature Format

OMS is designed to handle the complexity of modern AI systems, supporting any type of model format and models of any size. Instead of treating each file independently, OMS uses a detached OMS Signature Format that can represent multiple related artifacts—such as model weights, configuration files, tokenizers, and datasets—in a single, verifiable unit.

The OMS Signature Format includes the following, illustrated in the sketch after this list:

  • A list of all files in the bundle, each referenced by its cryptographic hash (e.g., SHA-256)
  • An optional annotations section for custom, domain-specific fields (future support coming)
  • A digital signature that covers the entire manifest, ensuring tamper-evidence
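
To make this concrete, the sketch below builds a manifest of roughly this shape in Python. It is illustrative only: the field names are invented for readability, and the actual on-disk format is the Sigstore bundle described next.

Python
# Illustrative only: a hypothetical manifest like the one OMS signs over.
# Field names here are invented for readability and are not taken from the
# OMS specification.
import hashlib
import json
import pathlib

def manifest_for(model_dir: str) -> dict:
    """Hash every file under model_dir into a single manifest."""
    files = []
    for path in sorted(pathlib.Path(model_dir).rglob("*")):
        if path.is_file():
            files.append({
                "name": str(path.relative_to(model_dir)),
                "digest": {"sha256": hashlib.sha256(path.read_bytes()).hexdigest()},
            })
    # The digital signature covers this entire structure, so modifying,
    # adding, or removing any file invalidates it.
    return {"files": files, "annotations": {}}

print(json.dumps(manifest_for("my-model"), indent=2))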

The OMS Signature File follows the Sigstore Bundle Format, ensuring maximum compatibility with existing tooling from the Sigstore ecosystem (Sigstore is a graduated OpenSSF project). This detached format allows verification without modifying or repackaging the original content, making it easier to integrate into existing workflows and distribution systems.

OMS is PKI-agnostic, supporting a wide range of signing options, including:

  • Private or enterprise PKI systems
  • Self-signed certificates
  • Bare keys
  • Keyless signing with public or private Sigstore instances 

This flexibility enables organizations to adopt OMS without changing their existing key management or trust models.

Figure 2: OMS Signature Format

Signing and Verifying with OMS

To speed adoption, OMS ships with reference implementations: a command-line interface (CLI) for lightweight operational use and a Python library for deep integration into CI/CD pipelines, automated publishing flows, and model hubs. Other library integrations are planned.
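
As a sketch of what the library route might look like inside a pipeline, the example below signs and then verifies a model with the Sigstore flow. The method names follow the model-signing documentation at the time of writing, and the identity values are placeholders; consult the project docs for the current interface. The CLI equivalents follow.

Python
# A sketch of CI/CD integration with the model-signing Python library.
# Method names follow the library's documentation at the time of writing;
# treat this as illustrative rather than definitive.
import model_signing

# Publishing side: sign the model via the Sigstore keyless flow.
model_signing.signing.Config().use_sigstore_signer().sign(
    "path/to/model", "model.sig"
)

# Consuming side: verify signer identity and model integrity before loading.
# The identity and issuer values below are placeholders.
model_signing.verifying.Config().use_sigstore_verifier(
    identity="release-bot@example.com",
    oidc_issuer="https://accounts.example.com",
).verify("path/to/model", "model.sig")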

Signing and Verifying with Sigstore

Shell
# install model-signing package
$ pip install model-signing

# signing the model with Sigstore
$ model_signing sign <MODEL_PATH>

# verify a model signed with Sigstore
$ model_signing verify \
  <MODEL_PATH> \
  --signature <OMS_SIG_FILE> \
  --identity "<IDENTITY>" \
  --identity_provider "<OIDC_PROVIDER>"

Signing and Verifying with PKI Certificates

Shell
# install model-signing package
$ pip install model-signing

# signing the model with a PKI certificate
$ model_signing sign \
  <MODEL_PATH> \
  --certificate_chain <CERT_CHAIN> \
  --private_key <PRIVATE_KEY>

# verify a model signed with a PKI certificate
$ model_signing verify \
  <MODEL_PATH> \
  --signature <OMS_SIG_FILE> \
  --certificate_chain <ROOT_CERT>

Other examples, including signing using PKCS#11, can be found in the model-signing documentation.

This design enables better interoperability across tools and vendors, reduces manual steps in model validation, and helps establish a consistent trust foundation across the AI lifecycle.

Looking Ahead

The release of OMS marks a major step forward in securing the AI supply chain. By enabling organizations to verify the integrity, provenance, and trustworthiness of machine learning artifacts, OMS lays the foundation for safer, more transparent AI development and deployment.

Backed by broad industry collaboration and designed with real-world workflows in mind, OMS is ready for adoption today. Whether integrating model signing into CI/CD pipelines, enforcing provenance policies, or distributing models at scale, OMS provides the tools and flexibility to meet enterprise needs.

This is just the first step towards a future of secure AI supply chains. The OpenSSF AI/ML Working Group is engaging with the Coalition for Secure AI to incorporate richer AI metadata into the OMS Signature Format, such as training data sources, model version, hardware used, and compliance attributes.

To get started, explore the OMS specification, try the CLI and library, and join the OpenSSF AI/ML Working Group to help shape the future of trusted AI.

Special thanks to the contributors driving this effort forward, including Laurent Simon, Rich Harang, and the many others at Google, HiddenLayer, NVIDIA, Red Hat, Intel, Meta, IBM, Microsoft, and in the Sigstore, Coalition for Secure AI, and OpenSSF communities.

Mihai Maruseac is a member of the Google Open Source Security Team (GOSST), working on Supply Chain Security for ML. He is a co-lead on a Secure AI Framework (SAIF) workstream from Google. Under OpenSSF, Mihai chairs the AI/ML working group and the model signing project. Mihai is also a GUAC maintainer. Before joining GOSST, Mihai created the TensorFlow Security team and prior to Google, he worked on adding Differential Privacy to Machine Learning algorithms. Mihai has a PhD in Differential Privacy from UMass Boston.

Eoin Wickens, Director of Threat Intelligence at HiddenLayer, specializes in AI security, threat research, and malware reverse engineering. He has authored numerous articles on AI security, co-authored a book on cyber threat intelligence, and spoken at conferences such as SANS AI Cybersecurity Summit, BSides SF, LABSCON, and 44CON, and delivered the 2024 ACM SCORED opening keynote.

Daniel Major is a Distinguished Security Architect at NVIDIA, where he provides security leadership in areas such as code signing, device PKI, ML deployments and mobile operating systems. Previously, as Principal Security Architect at BlackBerry, he played a key role in leading the mobile phone division’s transition from BlackBerry 10 OS to Android. When not working, Daniel can be found planning his next travel adventure.

Martin Sablotny is a security architect for AI/ML at NVIDIA working on identifying existing gaps in AI security and researching solutions. He received his Ph.D. in computing science from the University of Glasgow in 2023. Before joining NVIDIA, he worked as a security researcher in the German military and conducted research in using AI for security at Google.

Member Spotlight: Datadog – Powering Open Source Security with Tools, Standards, and Community Leadership

Datadog, a leading cloud-scale observability and security platform, joined the Open Source Security Foundation (OpenSSF) as a Premier Member in July 2024. With both executive leadership and deep technical involvement, Datadog has rapidly become a force in advancing secure open source practices across the industry.

Key Contributions

GuardDog: Open Source Threat Detection

In early 2025, Datadog launched GuardDog, a Python-based open source tool that scans package ecosystems like npm, PyPI, and Go for signs of malicious behavior. GuardDog is backed by a publicly available threat dataset, giving developers and organizations real-time visibility into emerging supply chain risks.

This contribution directly supports OpenSSF’s mission to provide practical tools that harden open source ecosystems against common attack vectors—while promoting transparency and shared defense.

Datadog actively supports the open source security ecosystem through its engineering efforts, tooling contributions, and participation in the OpenSSF community:

  • SBOM Generation and Runtime Insights
    Datadog enhances the usability and value of Software Bills of Materials (SBOMs) through tools and educational content. Their blog, Enhance SBOMs with runtime security context, outlines how they combine SBOM data with runtime intelligence to identify real-world risks and vulnerabilities more effectively.
  • Open Source Tools Supporting SBOM Adoption
    Datadog maintains the SBOM Generator, an open source tool based on CycloneDX, which scans codebases to produce high-quality SBOMs. They also released the datadog-sca-github-action, a GitHub Action that automates SBOM generation and integrates results into the Datadog platform for improved visibility.
  • Sigstore and Software Signing
    As part of the OpenSSF ecosystem, Datadog supports efforts like Sigstore to bring cryptographic signing and verification to the software supply chain. These efforts align with Datadog’s broader commitment to improving software provenance and integrity, especially as part of secure build and deployment practices.
  • OpenSSF Membership
    As a Premier Member of OpenSSF, Datadog collaborates with industry leaders to advance best practices, contribute to strategic initiatives, and help shape the future of secure open source software.

These collaborations demonstrate Datadog’s investment in long-term, community-driven approaches to open source security.

What’s Next

Datadog takes the stage at OpenSSF Community Day North America on Thursday, June 26, 2025, in Denver, CO, co-located with Open Source Summit North America.

They’ll be presenting alongside Intel Labs in the session:

Talk Title: Harnessing In-toto Attestations for Security and Compliance With Next-gen Policies
Time: 3:10–3:30 PM MDT
Location: Bluebird Ballroom 3A
Speakers:

  • Trishank Karthik Kuppusamy, Staff Engineer, Datadog
  • Marcela Melara, Research Scientist, Intel Labs

This session dives into the evolution of the in-toto Attestation Framework, spotlighting new policy standards that make it easier for consumers and auditors to derive meaningful insights from authenticated metadata—such as SBOMs and SLSA Build Provenance. Attendees will see how the latest policy framework bridges gaps in compatibility and usability with a flexible, real-world-ready approach to securing complex software supply chains.

Register now and connect with Datadog, Intel Labs, and fellow open source security leaders in Denver.

Why It Matters

By contributing to secure development frameworks, creating open source tooling, and educating the broader community, Datadog exemplifies what it means to be an OpenSSF Premier Member. Their work is hands-on, standards-driven, and deeply collaborative—helping make open source safer for everyone.

Case Study: OSTIF Improves Security Posture of Critical Open Source Projects Through OpenSSF Membership

Organization: Open Source Technology Improvement Fund, Inc. (OSTIF)
Contributor: Amir Montazery, Managing Director
Website: ostif.org

Problem

Critical open source software (OSS) projects—especially those that are long-standing and widely adopted—often lack the resources and systematic support needed to regularly review and improve their security posture. Many of these projects are maintained by small teams with limited bandwidth, making it challenging to conduct comprehensive security audits and implement best practices. The risk of undetected vulnerabilities in these projects presents a growing concern for the broader software ecosystem.

Action

To address this gap, OSTIF leverages its OpenSSF membership to conduct rigorous security audits of critical OSS projects. Using a curated process rooted in industry best practices, OSTIF delivers structured security engagements that improve real-world outcomes for maintainers and users alike.

Through active participation in OpenSSF’s Securing Critical Projects working group and Alpha-Omega initiatives since their inception, and through strategic partnerships with organizations like the Eclipse Foundation, OSTIF receives targeted funding and support to carry out its mission. These collaborations help prioritize high-impact projects and streamline audit administration, despite the inherent complexity of managing funding approvals and coordination.

It’s pivotal that these important projects receive customized work. Each open source project is unique, and so are its security needs, making standardized audits difficult. OSTIF invests time and expertise in scoping and organizing engagements tailored to each project’s best interests, needs, and budget, generating effective investment in open source security.

OSTIF also incorporates other OpenSSF tools and services, such as the OpenSSF Scorecard and the broader Securing Critical Projects Set, which complement its audit methodology and offer additional layers of insight into project health. In a varied and complex ecosystem, security resources that can be applied contextually to any project, generating impactful and sustainable outcomes, are incredibly valuable to all stakeholders, OSTIF included.

Results

OSTIF’s work has demonstrated the effectiveness of formal security audits in strengthening OSS project resilience. As a member of OpenSSF, OSTIF has been able to expand its reach, increase audit throughput, and reinforce the security practices of some of the open source community’s most essential projects. Since 2021, OSTIF has facilitated numerous engagements funded by OpenSSF. In March 2025, OSTIF published the results of its audit of RSTUF (Repository Service for TUF), conducted with OpenSSF’s funding and support. Two more Alpha-Omega-funded engagements will be published later this year.

“OSTIF is grateful for the support from OpenSSF, particularly for funding security audits both directly and via Project Alpha-Omega, to help improve the security of critical OSS projects.”
— Amir Montazery, Managing Director, OSTIF

In addition to the technical improvements achieved through audits, OSTIF’s OpenSSF membership has fostered valuable connections with project maintainers, security experts, and funders, creating a collaborative ecosystem dedicated to open source security. Building a community around security audits is one of OSTIF’s goals: by sharing resources and providing a platform for researchers to present audit findings through meetups, OSTIF aims to grow the expertise of, and access to security knowledge for, the average open source user.

Key Benefits

  • Enhanced security posture of widely used OSS projects.
  • Strategic collaboration with OpenSSF working groups.
  • Access to funding and expert networks.
  • Improved audit administration through community support.

Biggest Challenge

  • Navigating administrative processes and funding approval cycles for new audit projects.
  • Funding multi-year programs and engagements. 

To learn more about OSTIF’s work, visit their 2024 Annual Report. Visit their website at ostif.org or follow them on LinkedIn to stay up to date with audit releases.