Trustify joins GUAC

By Ben Cotton and Dejan Bosanac

The superpower of open source is multiple people working together on a common goal. That works for projects, too. GUAC and Trustify are two projects bringing visibility to the software supply chain. Today, they’re combining under the GUAC umbrella. With Red Hat’s contribution of Trustify to the GUAC project, the two combine to create a unified effort to address the challenges of consuming, processing, and utilizing supply chain security metadata at scale.

Why Join?

The Graph for Understanding Artifact Composition (GUAC) project was created to bring understanding to software supply chains. GUAC ingests software bills of materials (SBOMs) and enriches them with additional data to create a queryable graph of the software supply chain. Trustify also ingests and manages SBOMs, with a focus on security and compliance. With so much overlap, it makes sense to combine our efforts.

The grand vision for this evolved community is to become the central hub within OpenSSF for initiatives focused on building and using supply chain knowledge graphs. This includes: defining & promoting common standards, data models, & ontologies; developing shared infrastructure & libraries; improving the overall tooling ecosystem; fostering collaboration & knowledge sharing; and providing a clear & welcoming community for contributors.

What’s Next?

Right now, we’re working on the basic logistics: migrating repositories, updating websites, merging documentation. We have created a new GUAC Steering Committee that oversees two core projects, Graph for Understanding Artifact Composition (GUAC) and Trustify, as well as subprojects like sw-id-core and GUAC Visualizer. These projects have their own maintainers, but we expect to see a lot of cross-collaboration as everyone gets settled in.

If you’d like to learn more, join Ben Cotton and Dejan Bosanac at OpenSSF Community Day Europe for their talk on Thursday 28 August. If you can’t make it to Amsterdam, the community page has all of the ways you can engage with our community.

Author Bios

Ben Cotton is the open source community lead at Kusari, where he contributes to GUAC and leads the OSPS Baseline SIG. He has over a decade of leadership experience in Fedora and other open source communities. His career has taken him through the public and private sector in roles that include desktop support, high-performance computing administration, marketing, and program management. Ben is the author of Program Management for Open Source Projects and has contributed to the book Human at a Distance and to articles in The Next Platform, Opensource.com, Scientific Computing, and more.

Dejan Bosanac is a software engineer at Red Hat with an interest in open source and systems integration. Over the years he has been involved in various open source communities, tackling problems such as software supply chain security, IoT cloud platforms, edge computing, and enterprise messaging.


Visualizing Secure MLOps (MLSecOps): A Practical Guide for Building Robust AI/ML Pipeline Security

By Sarah Evans and Andrey Shorov

The world of technology is constantly evolving, and with the rise of Artificial Intelligence (AI) and Machine Learning (ML), the demand for robust security measures has become more critical than ever. As organizations rush to deploy AI solutions, the gap between ML innovation and security practices has created unprecedented vulnerabilities we are only beginning to understand.

A new whitepaper, “Visualizing Secure MLOps (MLSecOps): A Practical Guide for Building Robust AI/ML Pipeline Security,” addresses this critical gap by providing a comprehensive framework for practitioners focused on building and securing machine learning pipelines.

Why MLSecOps, Why Now

Why this topic? Why now? 

AI/ML systems encompass unique components, such as training datasets, models, and inference pipelines, that introduce novel weaknesses demanding dedicated attention throughout the ML lifecycle.

The evolving responsibilities within organizations have led to an intersection of expertise:

  1. Software developers, who specialize in deploying applications with traditional code, are increasingly responsible for incorporating data sets and ML models into those applications.
  2. Data engineers and data scientists, who specialize in data sets and in creating algorithms and models tailored to those data sets, are expected to integrate data sets and models into applications using code.

These trends have exposed a gap in security knowledge, leaving AI/ML pipelines susceptible to risks that neither discipline alone is fully equipped to manage. To resolve this, we investigated how we could adapt the principles of secure DevOps to secure MLOps by creating an MLSecOps framework that empowers both software developers and AI-focused professionals with the tools and processes needed for end-to-end ML pipeline security. During our research, we identified a scarcity of practical guidance on securing ML pipelines using open-source tools commonly employed by developers. This white paper aims to bridge that gap and provide a practical starting point.

What’s Inside the Whitepaper

This whitepaper is the result of a collaboration between Dell and Ericsson, leveraging our shared membership in the OpenSSF and building on a publication on MLSecOps for telecom environments authored by Ericsson researchers [https://www.ericsson.com/en/reports-and-papers/white-papers/mlsecops-protecting-the-ai-ml-lifecycle-in-telecom]. Together, we have expanded upon Ericsson’s original MLSecOps framework to create a comprehensive guide that addresses the needs of diverse industry sectors.

We are proud to share this guide as an industry resource that demonstrates how to apply open-source tools from secure DevOps to secure MLOps. It offers a progressive, visual learning experience in which concepts are layered upon one another, extending security beyond traditional code-centric approaches. The guide integrates insights from CI/CD, the ML lifecycle, various personas, a sample reference architecture, mapped risks, security controls, and practical tools.

The document introduces a visual, “layer-by-layer” approach to help practitioners securely adopt ML, leveraging open-source tools from OpenSSF initiatives such as Supply-Chain Levels for Software Artifacts (SLSA), Sigstore, and OpenSSF Scorecard. It further explores opportunities to extend these tools to secure the AI/ML lifecycle using MLSecOps practices, while identifying specific gaps in current tooling and offering recommendations for future development.

For practitioners involved in the design, development, deployment, operation, and securing of AI/ML systems, this whitepaper provides a practical foundation for building robust and secure AI/ML pipelines and applications.

Join Us

Ready to help shape the future of secure AI and ML?

Read the Whitepaper

Join the AI/ML Security Working Group

Explore OpenSSF Membership

Author Bios

Sarah Evans delivers technical innovation for secure business outcomes through her role as the security research program lead in the Office of the CTO at Dell Technologies. She is an industry leader and advocate for extending secure operations and supply chain development principles in AI. Sarah also ensures the security research program explores the overlapping security impacts of emerging technologies in other research programs, such as quantum computing. Sarah leverages her extensive practical experience in security and IT, spanning small businesses, large enterprises (including the highly regulated financial services industry and a 21-year military career), and academia (computer information systems). She earned an MBA, an AIML professional certificate from MIT, and is a certified information security manager. Sarah is also a strategic and technical leader representing Dell in OpenSSF, a foundation for securing open source software.

Andrey Shorov is a Senior Security Technology Specialist at Product Security, Ericsson. He is a cybersecurity expert with more than 16 years of experience across corporate and academic environments. Specializing in AI/ML and network security, Andrey advances AI-driven cybersecurity strategies, leading the development of cutting-edge security architectures and practices at Ericsson and contributing research that shapes industry standards. He holds a Ph.D. in Computer Science and maintains CISSP and Security+ certifications.

Case Study: Google Secures Machine Learning Models with sigstore

As machine learning (ML) evolves at lightning speed, so do the threats. The rise of large models like LLMs has accelerated innovation—but also introduced serious vulnerabilities. Data poisoning, model tampering, and unverifiable origins are not theoretical—they’re real risks that impact the entire ML supply chain.

Model hubs, platforms for data scientists to share models and datasets, recognized the challenge: How could they ensure the models hosted on their platform were authentic and safe?

That’s where Google’s Open Source Security Team (GOSST), sigstore, and the Open Source Security Foundation (OpenSSF) stepped in. Together, we created the OpenSSF Model Signing (OMS) specification, an industry standard for signing AI models. We then integrated OMS into major model hubs such as NVIDIA’s NGC and Google’s Kaggle.

The Solution: Seamless Model Signing Built into Model Hubs

We partnered with Kaggle to experiment with making model signing easier without disrupting the publishing UX.

“The simplest solution to securing models is: sign the model when you train it and verify it every time you use it.”
— Mihai Maruseac, Staff Software Engineer, Google

Key features of the prototyped implementation:

  • Model authors could use the same model hub upload tools and processes to upload their models, but, behind the scenes, these models would be automatically signed during the upload process.
  • Each model is signed using the uploader’s identity on the model hub, via OpenID Connect (OIDC). Model hubs should become OIDC providers to ensure that they can sign the model during upload.
  • Model hubs use sigstore to obtain a short-lived certificate, sign the model, and store the signature alongside the model.
  • Verification is automatic and transparent—the model hub verifies the signature and displays its status. A “signed” status confirms the model’s authenticity.
  • Users can independently verify signatures by using a notebook hosted on the model hub, or downloading the model and the signature and verifying using the `model_signing` CLI.
  • Model repositories implement access controls (ACLs) to ensure that only authorized users can sign on behalf of specific organizations.
  • All signing events are logged in the sigstore transparency log, providing a complete audit trail.
  • Future plans include GUAC integration for generating AI-BOMs and inspecting ML supply chains for incident response and transparency.
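The sign-at-train, verify-at-use loop can be illustrated with a minimal sketch. In the real workflow, sigstore binds the model's digest to the uploader's OIDC identity via a short-lived certificate and logs the event; the stand-in below uses Python's stdlib HMAC purely to show the shape of the process, and the file names, key, and functions are illustrative, not part of the `model_signing` API.

```python
import hashlib
import hmac
import os

def sign_model(model_path: str, key: bytes) -> bytes:
    """Hash the serialized model and produce a signature (here, an HMAC tag).

    sigstore instead signs the digest with a short-lived certificate tied to
    the uploader's OIDC identity and records it in a transparency log.
    """
    with open(model_path, "rb") as f:
        digest = hashlib.sha256(f.read()).digest()
    return hmac.new(key, digest, hashlib.sha256).digest()

def verify_model(model_path: str, key: bytes, signature: bytes) -> bool:
    """Recompute the model's signature and compare it to the stored one."""
    return hmac.compare_digest(sign_model(model_path, key), signature)

# Sign at "training" time, verify at load time, and detect tampering.
key = os.urandom(32)                    # stand-in for a signing identity
with open("model.bin", "wb") as f:
    f.write(b"weights v1")
sig = sign_model("model.bin", key)
assert verify_model("model.bin", key, sig)      # untouched model verifies

with open("model.bin", "wb") as f:
    f.write(b"weights v1 + backdoor")           # simulate tampering
assert not verify_model("model.bin", key, sig)  # tampered model fails
```

The design point the quote makes survives the simplification: verification is cheap enough to run on every load, so a consumer never has to trust the storage layer between training and use.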

The process dramatically improves trust and provenance while remaining invisible to most users.

The Result: A Blueprint for Securing AI Models

With sigstore integrated, the experiment with Kaggle proved that model hubs can offer a verified ML ecosystem. Users know that what they download hasn’t been tampered with or misattributed. Each model is cryptographically signed and tied to the author’s identity—no more guessing whether a model came from “Meta” or a spoofed account.

“If we reach a state where all claims about ML systems and metadata are tamperproof, tied to identity, and verifiable by the tools ML developers already use—we can inspect the ML supply chain immediately in case of incidents.”
— Mihai Maruseac, Staff Software Engineer, Google

This solution serves as a model for the broader ecosystem. Platforms hosting datasets and models can adopt similar practices using open tools like sigstore, backed by community-driven standards through OpenSSF.

Get Involved & Learn More

Join the OpenSSF Community
Be part of the movement to secure open source software, including AI/ML systems. → Join the AI/ML Security WG

Explore sigstore
See how sigstore enables secure, transparent signing for software and models. → Visit sigstore

Learn About Google’s Open Source Security Efforts
Discover how Google is advancing supply chain security in open source and machine learning. → Google Open Source Security Team

Learn More about Kaggle
Explore how Kaggle is evolving into a secure hub for trustworthy ML models. → Visit Kaggle

Watch the Talk
Title: Taming the Wild West of ML: Practical Model Signing With sigstore on Kaggle
Speaker: Mihai Maruseac, Google
Event: OpenSSF Community Day North America – June 26, 2025
Watch the talk → YouTube

Securing Public Sector Supply Chains is a Team Sport

By Daniel Moch, Lockheed Martin

Everyone—from private companies to governments—is aware (or is quickly becoming aware) that the security of their software supply chain is critical to their broader security and continued success. The OpenSSF exists in part to help organizations grapple with the complexity of their supply chains, promoting standards and technologies that help an organization faced with a newly disclosed security vulnerability in a popular open source library answer the question, “Where do we use this library, so we can go update it?”

In my work in the public sector, I have an additional layer of complexity: the labyrinth of policies and procedures that I am required to follow to comply with security requirements imposed by my government customers. Don’t get me wrong, this is good complexity, put in place to protect critical infrastructure from advanced and evolving adversaries.

In this post I will describe some of the challenges public sector organizations face as they try to manage their supply chains, and how the OpenSSF, together with the broader open source community, can help address them. My hope is that meeting these challenges together, head-on, will make us all more secure.

Public Sector Challenges

Exposure

Even in the public sector, open source software is being used everywhere. According to Black Duck Auditing Services’ Open Source Security and Risk Analysis (OSSRA) report, as of 2024 open source software comprises at least part of 96% of commercial code bases, with the average code base containing more than 500 open source components. A vulnerability in any one of those components might present significant risk if left unpatched and unmitigated.

Assuming the figures in the public sector are in line with this report, this represents a significant amount of exposure. Unique to the public sector are the risks that come along with this exposure, which don’t just include lost opportunities or productivity, but may put lives in jeopardy. For example, if part of a nation’s power grid is brought down by a cyberattack in mid-winter, people might freeze to death. The added risks, particularly where critical infrastructure is concerned, heighten the need for effective supply chain security.

Identification

Another area where public sector organizations face increased scrutiny is around identification, or what NIST SP 800-63A calls identity proofing. That document describes the requirements the US government imposes on itself when answering the question, “How do we know a person is who they claim to be?”

To provide a satisfactory answer to that question, a person needs to do a lot more than demonstrate ownership of an email address. It is a safe bet that organizations working in the public sector are going to follow a more rigorous identification standard for employees operating on their behalf, even if they do not follow NIST’s guidance to the letter.

It should be obvious that systems supporting the development of open source software do not adhere to this kind of standard. GitHub, for example, does not ask to see your government-issued ID before allowing you to open an account. As a result, public sector actors must live with a double standard—proving to the government they are who they claim to be on the government’s terms but judging the identities of open source contributors by a different standard.

All that may not be a problem outright. Indeed, there are good reasons to allow open source development to happen without rigorous identification standards. It does, however, introduce some tensions that public sector organizations will need to deal with. For example, if a contractor is required to ensure none of the code in her product originated in a foreign country, how does she ensure that is true for any open source component she is using?

Approval Timelines

When I speak to others in aerospace and defense (part of the public sector, since our customers are governments), the conversation often turns to approval timelines to get software packages onto various closed networks. The security teams responsible for these approvals have an important job, protecting the critical information on these networks from malicious software. How do they go about this work? Beats me. And even if I could tell you how it worked for one classified network, it would likely be quite different for another. What we have today is a patchwork system, an archipelago of isolated networks protected by security teams doing the best they can with the tools available to them. Historically this has meant manually curated spreadsheets, and lots of them.

This problem is not limited to networks used within aerospace and defense, but keeping the plight of these security groups in mind puts into sharp relief the basic problem faced by every group charged with protecting a network. There might be sufficient information available to make an informed decision, but there has historically been little available in the way of tooling to help bring greater confidence, ease and speed to the decision-making process.

How The Open Source Community Can Help

I have outlined three basic problems that the public sector faces: the risks associated with security vulnerabilities, the limits of identifying where open source software originates, and the timelines associated with getting software approved for use on isolated networks. Now let’s consider some of the ways in which the open source community can help alleviate these problems.

While there’s clearly nothing the open source community can do to directly reduce the risk posed to public infrastructure by vulnerabilities, there are ways maintainers can help the public sector make more informed decisions. Providing a SLSA Provenance alongside build artifacts is a great way to give public sector organizations confidence that what they’re using is what maintainers actually released. What’s more, a Level 3 Provenance gives a high level of assurance that the build process wasn’t interfered with at all. It is possible to achieve SLSA Level 3 by using GitHub Actions.
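On the consuming side, the cheapest check a public sector organization can make against a provenance is that the artifact it downloaded matches a subject recorded in the provenance statement. The sketch below shows only that digest-matching step over the standard in-toto statement layout (`subject` entries carrying `sha256` digests); the file name and minimal provenance are illustrative, and real verification should use a tool such as slsa-verifier, which also validates the signature and the builder's identity.

```python
import hashlib

def artifact_matches_provenance(artifact_path: str, provenance: dict) -> bool:
    """Check that the artifact's sha256 appears among the provenance subjects.

    This is only the digest-matching step; a full verifier additionally
    checks the envelope signature and the identity of the builder.
    """
    with open(artifact_path, "rb") as f:
        actual = hashlib.sha256(f.read()).hexdigest()
    return any(
        subject.get("digest", {}).get("sha256") == actual
        for subject in provenance.get("subject", [])
    )

# Illustrative artifact and a minimal in-toto provenance statement.
with open("release.tar.gz", "wb") as f:
    f.write(b"release contents")
provenance = {
    "_type": "https://in-toto.io/Statement/v1",
    "predicateType": "https://slsa.dev/provenance/v1",
    "subject": [{
        "name": "release.tar.gz",
        "digest": {"sha256": hashlib.sha256(b"release contents").hexdigest()},
    }],
}
assert artifact_matches_provenance("release.tar.gz", provenance)
```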

SLSA Provenance also provides useful information to the groups charged with securing networks (our third problem above). Going further, maintainers can also provide VEX documents with their releases to describe the known vulnerabilities and their status. One interesting use case that VEX supports is the ability to declare a vulnerability in an upstream dependency and assert that the vulnerability does not affect your project. That is useful information for a security group to have, even if they take it with a grain of salt.
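A VEX document making exactly that kind of "not affected" assertion is small. The sketch below builds one as a Python dict following the OpenVEX v0.2.0 document shape; the CVE number, product identifier, document `@id`, and timestamp are placeholders.

```python
import json

# A minimal OpenVEX document asserting that a vulnerability in an upstream
# dependency does not affect this project. All identifiers are placeholders.
vex_doc = {
    "@context": "https://openvex.dev/ns/v0.2.0",
    "@id": "https://example.com/vex/example-project-2025-001",
    "author": "Example Project Security Team",
    "timestamp": "2025-01-15T00:00:00Z",
    "version": 1,
    "statements": [
        {
            "vulnerability": {"name": "CVE-2025-0001"},
            "products": [
                {"@id": "pkg:golang/example.com/example-project@1.4.2"}
            ],
            "status": "not_affected",
            "justification": "vulnerable_code_not_in_execute_path",
        }
    ],
}
print(json.dumps(vex_doc, indent=2))
```

The machine-readable `status` and `justification` fields are what let a security team's tooling triage the report automatically instead of chasing the maintainer for an answer.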

That second problem—the impossibility of confidently identifying origin—is one that public sector groups will need to learn to live with. We cannot expect every open source contributor to identify themselves and the country where they reside. In light of this, perhaps the best path forward is for the open source community to develop reputation-based ways to score individual contributors. One could imagine ways of doing this that would both respect individual privacy and provide on-ramps for new contributors to begin building trust. This is almost certainly being done informally and piecemeal already. Systematizing it would only bring more transparency to the process, something that everyone would benefit from.

These kinds of third-party systems would be beneficial beyond contributor reputation as well. There are a variety of data sets useful to supply chain security that are likely being collected by organizations already. When possible, these should be made publicly available so the entire ecosystem can contribute to, help curate and benefit from them. But we cannot stop there. These data sets should be supported by easy-to-use interfaces that help security teams build confidence in the software they are being asked to allow on privileged networks. In short, we should welcome ways to make supply chain security and transparency a team sport.

Conclusion

To sum up, we have considered three challenges that public sector organizations face when securing their supply chains: The high potential impact of supply chain risks, the lack of ability to identify country of origin for open source software, and the long approval times to get new software onto closed networks. We also discussed how the open source community can work to close these gaps. It is worth repeating that doing so would make all of us—not just the public sector—more secure.

It is also gratifying to see the ways the OpenSSF is already contributing to this work, primarily by laying the foundation upon which this work can proceed. SLSA and VEX (in the form of OpenVEX) are both OpenSSF projects. Getting projects to adopt these technologies will take time and should be a priority.

About the Author

For nearly 20 years, Daniel has worked as a software engineer in the Defense and Aerospace industry. His experience ranges from embedded device drivers to large logistics and information systems. In recent years, he has focused on helping legacy programs adopt modern DevOps practices. Daniel works with the open source community as part of Lockheed Martin’s Open Source Program Office.