Case Study: Defending the Open Source Supply Chain in a New Regulatory Era

How Red Hat and OpenSSF are translating regulatory mandates into scalable open source community practices

Challenge

The European Union Cyber Resilience Act (CRA) introduces legally binding cybersecurity requirements for products with digital elements (including software) placed on the EU market. While designed to bolster digital safety, these requirements relied on standards historically shaped by proprietary software assumptions.

For Red Hat, whose products rely on thousands of upstream open source components, the risk was clear. If CRA standards failed to reflect the reality of how open source is built, the resulting compliance hurdles could increase cost and legal uncertainty for the enterprise while placing an unsustainable administrative burden on voluntary community maintainers.

Red Hat Security Communities Lead Roman Zhukov, along with fellow Red Hatters from Product Security and Public Policy (Jaroslav Reznik, Pavel Hruza, and James Lovegrove), shared insights from their work on the CRA standards:

“Working on traditional industry standardization ‘behind closed doors’ started as a big challenge for us, upstream-minded people, who used to openly share and collaborate on all the work that we do. But that was important. Because if those standards didn’t reflect how open source actually works, there would be a real risk of imposing corporate-level liability on the community, because of persistent compliance pressure by enterprise adopters.” 

Solution

As a Premier Member of the OpenSSF, Red Hat transitioned from collaboration to leadership, engaging with the European Commission to advocate for a clear understanding of open source development methods and helping shape CRA standards, policy, and implementation guidance.

Through OpenSSF and direct participation in European standards bodies, Red Hat has helped advance open source development practices into CRA standards and technical guidelines, including: 

  • Hardened development lifecycles: Advancing expectations that respect community workflows
  • SBOM and vulnerability handling: Streamlining how data is shared across the supply chain
  • Supply chain integrity: Promoting frameworks that can verify security without slowing innovation

Red Hat also championed OpenSSF frameworks, such as the OpenSSF Security Baseline and SLSA, as essential reference points for industry preparing for CRA compliance.

Together, these efforts provided regulators and manufacturers with practical, community-vetted guidance for implementing CRA requirements. This helps shift the responsibility back to manufacturers and stewards through consistent data discovery rather than placing the burden of evidence upon voluntary communities.

Red Hat’s Portfolio Security Architect Emily Fox expanded on her thoughts regarding stewardship and shared responsibility under the CRA:

“True stewardship shields open source creators from legislative burden. We don’t ask maintainers to become commercial suppliers; we step in to absorb the complexity, turning commercial compliance mandates into engagement opportunities that drive real security for everyone.”

Results

Red Hat’s leadership within OpenSSF helped deliver ecosystem-wide impact:

  • Standardization Alignment: State-of-the-art secure development practices were incorporated into EU CRA technical guidelines
  • Framework Recognition: The OpenSSF Security Baseline and SLSA are now recognized as reference frameworks for development
  • Reduced Friction: Lowered compliance barriers across thousands of upstream open source components
  • Increased Confidence: Bolstered regulator and enterprise trust in open source maturity

Why This Matters

Open source software underpins 90% of modern technology stacks. By leading through OpenSSF, Red Hat helped the CRA reinforce shared responsibility and practical security improvements rather than shifting administrative weight onto open source maintainers.

About

Roman Zhukov is a cybersecurity expert, engineer, and leader with over 17 years of hands-on experience securing complex systems and software products at scale. At Red Hat, Roman leads open source security strategy, upstream collaboration, and cross-industry initiatives focused on building trusted ecosystems. He is an active contributor to open source security and co-chair of the OpenSSF Global Cyber Policy WG.

 

Emily Fox is a visionary security leader whose sustained contributions have profoundly shaped both internal company strategy and the broader open source industry. With over 15 years of experience, she has consistently operated at the intersection of deep technical expertise and strategic leadership, driving critical initiatives in cloud native security, software supply chain integrity, post-quantum cryptography, and zero trust architecture at top-tier organizations including Red Hat, Apple, and the National Security Agency. Her career is marked by a rare ability to not only architect complex, cutting-edge solutions but also to lead global communities, influence industry standards, and mentor the next generation of technologists.

An Introduction to the OpenSSF Model Signing (OMS) Specification: Model Signing for Secure and Trusted AI Supply Chains

By Mihai Maruseac (Google), Eoin Wickens (HiddenLayer), Daniel Major (NVIDIA), Martin Sablotny (NVIDIA)

As AI adoption continues to accelerate, so does the need to secure the AI supply chain. Organizations want to be able to verify that the models they build, deploy, or consume are authentic, untampered, and compliant with internal policies and external regulations. From tampered models to poisoned datasets, the risks facing production AI systems are growing — and the industry is responding.

In collaboration with industry partners, the Open Source Security Foundation (OpenSSF)’s AI/ML Working Group recently delivered a model signing solution. Today, we are formalizing the signature format as OpenSSF Model Signing (OMS): a flexible and implementation-agnostic standard for model signing, purpose-built for the unique requirements of AI workflows.

What is Model Signing

Model signing is a cryptographic process that creates a verifiable record of the origin and integrity of machine learning models.  Recipients can verify that a model was published by the expected source, and has not subsequently been tampered with.  

Signing AI artifacts is an essential step in building trust and accountability across the AI supply chain.  For projects that depend on open source foundational models, project teams can verify the models they are building upon are the ones they trust.  Organizations can trace the integrity of models — whether models are developed in-house, shared between teams, or deployed into production.  

Key stakeholders that benefit from model signing:

  • End users gain confidence that the models they are running are legitimate and unmodified.
  • Compliance and governance teams benefit from traceable metadata that supports audits and regulatory reporting.
  • Developers and MLOps teams are equipped to trace issues, improve incident response, and ensure reproducibility across experiments and deployments.

How Does Model Signing Work

Model signing uses cryptographic keys to ensure the integrity and authenticity of an AI model. A signing program uses a private key to generate a digital signature for the model. This signature can then be verified by anyone using the corresponding public key. These keys can be generated a priori, obtained from signing certificates, or generated transparently during the Sigstore signing flow. If verification succeeds, the model is confirmed as untampered and authentic; if it fails, the model may have been altered or is untrusted.
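The sign-then-verify flow can be sketched with a bare Ed25519 key pair using Python's `cryptography` package. This is an illustrative stand-in only: the actual model-signing tooling manages keys, certificates, and Sigstore flows for you.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Generate a bare key pair (in practice the private key could come from
# enterprise PKI, a certificate, or a Sigstore keyless signing flow).
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

model_bytes = b"...serialized model weights..."  # placeholder content

# Signer: produce a detached digital signature over the model bytes.
signature = private_key.sign(model_bytes)

# Verifier: check authenticity and integrity using only the public key.
try:
    public_key.verify(signature, model_bytes)
    print("model is authentic and untampered")
except InvalidSignature:
    print("model was altered or is untrusted")
```

Note that the verifier never needs the private key; distributing the public key (or a certificate chain anchoring it) is what lets anyone downstream check the model independently.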

Figure 1:  Model Signing Diagram

How Does OMS Work

OMS Signature Format

OMS is designed to handle the complexity of modern AI systems, supporting any type of model format and models of any size. Instead of treating each file independently, OMS uses a detached OMS Signature Format that can represent multiple related artifacts—such as model weights, configuration files, tokenizers, and datasets—in a single, verifiable unit.

The OMS Signature Format includes: 

  • A list of all files in the bundle, each referenced by its cryptographic hash (e.g., SHA256)
  • An optional annotations section for custom, domain-specific fields (future support coming)
  • A digital signature that covers the entire manifest, ensuring tamper-evidence
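To make the manifest idea concrete, the stdlib-only sketch below hashes every file in a model directory into a single serialized manifest, so one signature over these bytes covers all artifacts. The JSON shape here is hypothetical for illustration; the real OMS Signature File follows the Sigstore Bundle Format.

```python
import hashlib
import json
from pathlib import Path

def build_manifest(model_dir: str) -> bytes:
    """Hash every file under model_dir into one manifest (illustrative only)."""
    entries = {}
    for path in sorted(Path(model_dir).rglob("*")):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            entries[str(path.relative_to(model_dir))] = f"sha256:{digest}"
    manifest = {"files": entries, "annotations": {}}
    # Serialize deterministically; the digital signature is computed over
    # these bytes, so changing any file invalidates the whole signature.
    return json.dumps(manifest, sort_keys=True).encode()
```

Because the signature covers the serialized manifest rather than each file separately, weights, tokenizers, and configuration files verify together as one unit, and the signature stays detached from the artifacts themselves.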

The OMS Signature File follows the Sigstore Bundle Format, ensuring maximum compatibility with existing Sigstore (a graduated OpenSSF project) ecosystem tooling.  This detached format allows verification without modifying or repackaging the original content, making it easier to integrate into existing workflows and distribution systems.

OMS is PKI-agnostic, supporting a wide range of signing options, including:

  • Private or enterprise PKI systems
  • Self-signed certificates
  • Bare keys
  • Keyless signing with public or private Sigstore instances 

This flexibility enables organizations to adopt OMS without changing their existing key management or trust models.

Figure 2. OMS Signature Format

Signing and Verifying with OMS

As reference implementations to speed adoption, OMS offers both a command-line interface (CLI) for lightweight operational use and a Python library for deep integration into CI/CD pipelines, automated publishing flows, and model hubs. Other library integrations are planned.

Signing and Verifying with Sigstore

Shell
# install model-signing package
$ pip install model-signing

# signing the model with Sigstore
$ model_signing sign <MODEL_PATH>

# verify that the model was signed with Sigstore
$ model_signing verify \
  <MODEL_PATH> \
  --signature <OMS_SIG_FILE> \
  --identity "<IDENTITY>" \
  --identity_provider "<OIDC_PROVIDER>"

 

Signing and Verifying with PKI Certificates

Shell
# install model-signing package
$ pip install model-signing

# signing the model with a PKI certificate
$ model_signing sign \
  <MODEL_PATH> \
  --certificate_chain <CERT_CHAIN> \
  --private_key <PRIVATE_KEY>

# verify that the model was signed with a PKI certificate
$ model_signing verify \
  <MODEL_PATH> \
  --signature <OMS_SIG_FILE> \
  --certificate_chain <ROOT_CERT>


 

Other examples, including signing using PKCS#11, can be found in the model-signing documentation.

This design enables better interoperability across tools and vendors, reduces manual steps in model validation, and helps establish a consistent trust foundation across the AI lifecycle.

Looking Ahead

The release of OMS marks a major step forward in securing the AI supply chain. By enabling organizations to verify the integrity, provenance, and trustworthiness of machine learning artifacts, OMS lays the foundation for safer, more transparent AI development and deployment.

Backed by broad industry collaboration and designed with real-world workflows in mind, OMS is ready for adoption today. Whether integrating model signing into CI/CD pipelines, enforcing provenance policies, or distributing models at scale, OMS provides the tools and flexibility to meet enterprise needs.

This is just the first step towards a future of secure AI supply chains. The OpenSSF AI/ML Working Group is engaging with the Coalition for Secure AI to incorporate additional metadata into the OMS Signature Format, such as training data sources, model version, hardware used, and compliance attributes.

To get started, explore the OMS specification, try the CLI and library, and join the OpenSSF AI/ML Working Group to help shape the future of trusted AI.

Special thanks to the contributors driving this effort forward, including Laurent Simon, Rich Harang, and the many others at Google, HiddenLayer, NVIDIA, Red Hat, Intel, Meta, IBM, Microsoft, and in the Sigstore, Coalition for Secure AI, and OpenSSF communities.

Mihai Maruseac is a member of the Google Open Source Security Team (GOSST), working on Supply Chain Security for ML. He is a co-lead on a Secure AI Framework (SAIF) workstream from Google. Under OpenSSF, Mihai chairs the AI/ML working group and the model signing project. Mihai is also a GUAC maintainer. Before joining GOSST, Mihai created the TensorFlow Security team and prior to Google, he worked on adding Differential Privacy to Machine Learning algorithms. Mihai has a PhD in Differential Privacy from UMass Boston.

Eoin Wickens, Director of Threat Intelligence at HiddenLayer, specializes in AI security, threat research, and malware reverse engineering. He has authored numerous articles on AI security, co-authored a book on cyber threat intelligence, and spoken at conferences such as SANS AI Cybersecurity Summit, BSides SF, LABSCON, and 44CON, and delivered the 2024 ACM SCORED opening keynote.

Daniel Major is a Distinguished Security Architect at NVIDIA, where he provides security leadership in areas such as code signing, device PKI, ML deployments and mobile operating systems. Previously, as Principal Security Architect at BlackBerry, he played a key role in leading the mobile phone division’s transition from BlackBerry 10 OS to Android. When not working, Daniel can be found planning his next travel adventure.

Martin Sablotny is a security architect for AI/ML at NVIDIA working on identifying existing gaps in AI security and researching solutions. He received his Ph.D. in computing science from the University of Glasgow in 2023. Before joining NVIDIA, he worked as a security researcher in the German military and conducted research in using AI for security at Google.