

By Jeff Diecks
The OpenSSF team will be attending DEF CON 33, where the winners of the AI Cyber Challenge (AIxCC) will be announced. We will also host a panel discussion at the AIxCC Village to introduce the concept of MLSecOps.
AIxCC, led by DARPA and ARPA-H, is a two-year competition focused on developing AI-enabled software to automatically identify and patch vulnerabilities in source code, particularly in open source software underpinning critical infrastructure.
OpenSSF is supporting AIxCC as a challenge advisor, guiding the competition to ensure its solutions benefit the open source community. We are actively working with DARPA and ARPA-H to open source the winning systems, infrastructure, and data from the competition, and are designing a program to facilitate their successful adoption and use by open source projects. At least four of the competitors' Cyber Reasoning Systems (CRSs) will be open sourced on Friday, August 8 at DEF CON. The remaining CRSs will also be open sourced soon after the event.
We will be hosting a panel talk at the AIxCC Village, “Applying DevSecOps Lessons to MLSecOps.” This presentation will delve into the evolving landscape of security with the advent of AI/ML applications.
The panelists for this discussion will be:
Just as DevSecOps integrated security practices into the Software Development Life Cycle (SDLC) to address critical software security gaps, Machine Learning Operations (MLOps) now needs to make the same transition into MLSecOps. MLSecOps emphasizes integrating security practices throughout the ML development lifecycle, establishing security as a shared responsibility among ML developers, security practitioners, and operations teams. Applying lessons learned from DevSecOps to MLOps means drawing on open source tools from OpenSSF and other initiatives, such as Supply-chain Levels for Software Artifacts (SLSA) and Sigstore, that can be extended to MLSecOps. This talk will explore some of those tools and discuss potential tooling gaps the community can partner to close. Embracing the MLSecOps methodology enables early identification and mitigation of security risks, facilitating the development of secure and trustworthy ML models.
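To make the DevSecOps-to-MLSecOps parallel concrete, here is a minimal sketch (our illustration, not material from the talk) of the most basic control involved: pinning a model artifact to a known digest before it is loaded. It uses only the Python standard library, and the digest value and file path are placeholders. In a full MLSecOps pipeline, the expected digest would come from a signed source such as a Sigstore bundle or SLSA provenance rather than a hard-coded constant.

```python
import hashlib

# Placeholder: in practice this digest comes from a signed, trusted source
# (e.g., a Sigstore bundle), not a hard-coded constant.
EXPECTED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

def sha256_of(path: str) -> str:
    """Stream the file so multi-gigabyte model weights never sit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def load_model_bytes(path: str) -> bytes:
    """Refuse to load a model whose digest does not match the pinned value.

    Pickle-based model formats execute code during deserialization, so the
    integrity check must happen before any load call, not after.
    """
    actual = sha256_of(path)
    if actual != EXPECTED_SHA256:
        raise RuntimeError(f"Model digest mismatch for {path}: got {actual}")
    with open(path, "rb") as f:
        return f.read()  # stand-in for the framework-specific load step
```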
We invite you to join us on Saturday, August 9, from 10:30-11:15 a.m. at the AIxCC Village Stage to learn more about how the lessons from DevSecOps can be applied to the unique challenges of securing AI/ML systems and to understand the importance of adopting an MLSecOps approach for a more secure future in open source software.
Jeff Diecks is the Technical Program Manager for the AI Cyber Challenge (AIxCC) at the Open Source Security Foundation (OpenSSF). A participant in open source since 1999, he's delivered digital products and applications for dozens of universities, six professional sports leagues, state governments, global media companies, non-profits, and corporate clients.
By Sarah Evans and Andrey Shorov
The world of technology is constantly evolving, and with the rise of Artificial Intelligence (AI) and Machine Learning (ML), the demand for robust security measures has become more critical than ever. As organizations rush to deploy AI solutions, the gap between ML innovation and security practices has created unprecedented vulnerabilities we are only beginning to understand.
A new whitepaper, “Visualizing Secure MLOps (MLSecOps): A Practical Guide for Building Robust AI/ML Pipeline Security,” addresses this critical gap by providing a comprehensive framework for practitioners focused on building and securing machine learning pipelines.
Why this topic? Why now?
AI/ML systems encompass unique components, such as training datasets, models, and inference pipelines, that introduce novel weaknesses demanding dedicated attention throughout the ML lifecycle.
The evolving responsibilities within organizations have led to an intersection of expertise:
These trends have exposed a gap in security knowledge, leaving AI/ML pipelines susceptible to risks that neither discipline alone is fully equipped to manage. To resolve this, we investigated how we could adapt the principles of secure DevOps to secure MLOps by creating an MLSecOps framework that empowers both software developers and AI-focused professionals with the tools and processes needed for end-to-end ML pipeline security. During our research, we identified a scarcity of practical guidance on securing ML pipelines using open-source tools commonly employed by developers. This white paper aims to bridge that gap and provide a practical starting point.
This whitepaper is the result of a collaboration between Dell and Ericsson through our shared membership in the OpenSSF. It builds on a publication on MLSecOps for telecom environments authored by Ericsson researchers [https://www.ericsson.com/en/reports-and-papers/white-papers/mlsecops-protecting-the-ai-ml-lifecycle-in-telecom]. Together, we have expanded Ericsson's original MLSecOps framework into a comprehensive guide that addresses the needs of diverse industry sectors.
We are proud to share this guide as an industry resource that demonstrates how to apply open-source tools from secure DevOps to secure MLOps. It offers a progressive, visual learning experience in which concepts are layered upon one another, extending security beyond traditional code-centric approaches. This guide integrates insights from CI/CD, the ML lifecycle, various personas, a sample reference architecture, mapped risks, security controls, and practical tools.
The document introduces a visual, “layer-by-layer” approach to help practitioners securely adopt ML, leveraging open-source tools from OpenSSF initiatives such as Supply-Chain Levels for Software Artifacts (SLSA), Sigstore, and OpenSSF Scorecard. It further explores opportunities to extend these tools to secure the AI/ML lifecycle using MLSecOps practices, while identifying specific gaps in current tooling and offering recommendations for future development.
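To illustrate what extending these tools can look like in practice (our sketch, not an excerpt from the whitepaper), a SLSA provenance statement can treat the trained model, rather than a compiled binary, as the subject, with the training dataset recorded as a resolved dependency. The file names, buildType URI, and builder id below are hypothetical placeholders.

```python
import hashlib
import json

def sha256_hex(path: str) -> str:
    """Return the hex SHA-256 of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# An in-toto Statement carrying a SLSA v1 provenance predicate. The subject
# is the trained model; the training data appears as a resolved dependency,
# linking model and dataset by digest. All names and URIs are illustrative.
statement = {
    "_type": "https://in-toto.io/Statement/v1",
    "subject": [
        {"name": "model.safetensors",
         "digest": {"sha256": sha256_hex("model.safetensors")}},
    ],
    "predicateType": "https://slsa.dev/provenance/v1",
    "predicate": {
        "buildDefinition": {
            "buildType": "https://example.com/ml-training/v1",  # hypothetical
            "resolvedDependencies": [
                {"name": "train.csv",
                 "digest": {"sha256": sha256_hex("train.csv")}},
            ],
        },
        "runDetails": {
            "builder": {"id": "https://example.com/ci/trainer"},  # hypothetical
        },
    },
}

print(json.dumps(statement, indent=2))
```

Signed (for example with Sigstore) and published alongside the model, a statement like this lets a consumer verify both what they downloaded and what it was trained on.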
For practitioners involved in designing, developing, deploying, operating, and securing AI/ML systems, this whitepaper provides a practical foundation for building robust and secure AI/ML pipelines and applications.
Ready to help shape the future of secure AI and ML?
Join the AI/ML Security Working Group
Sarah Evans delivers technical innovation for secure business outcomes through her role as the security research program lead in the Office of the CTO at Dell Technologies. She is an industry leader and advocate for extending secure operations and supply chain development principles in AI. Sarah also ensures the security research program explores the overlapping security impacts of emerging technologies in other research programs, such as quantum computing. Sarah leverages her extensive practical experience in security and IT, spanning small businesses, large enterprises (including the highly regulated financial services industry and a 21-year military career), and academia (computer information systems). She earned an MBA, an AIML professional certificate from MIT, and is a certified information security manager. Sarah is also a strategic and technical leader representing Dell in OpenSSF, a foundation for securing open source software.
Andrey Shorov is a Senior Security Technology Specialist at Product Security, Ericsson. He is a cybersecurity expert with more than 16 years of experience across corporate and academic environments. Specializing in AI/ML and network security, Andrey advances AI-driven cybersecurity strategies, leading the development of cutting-edge security architectures and practices at Ericsson and contributing research that shapes industry standards. He holds a Ph.D. in Computer Science and maintains CISSP and Security+ certifications.
August 2025 marks five years since the official formation of the Open Source Security Foundation (OpenSSF). Born out of a critical need to secure the software supply chains and open source ecosystems powering global technology infrastructure, OpenSSF quickly emerged as a community-driven leader in open source security.
"OpenSSF was founded to unify and strengthen global efforts around securing open source software. In five years, we've built a collaborative foundation that reaches across industries, governments, and ecosystems. Together, we're building a world where open source is not only powerful, but trusted." – Steve Fernandez, General Manager, OpenSSF
OpenSSF was launched on August 3, 2020, consolidating earlier initiatives into a unified, cross-industry effort to protect open source projects. The urgency was clear: high-profile vulnerabilities such as Heartbleed served as stark reminders that collective action was essential to safeguard the digital infrastructure everyone depends on.
"From day one, OpenSSF has been about action: empowering the community to build and adopt real-world security solutions. Five years in, we've moved from ideas to impact. The work isn't done, but the momentum is real, and the future is wide open." – Christopher "CRob" Robinson, Chief Architect, OpenSSF
Over the past five years, OpenSSF has spearheaded critical initiatives that shaped the landscape of open source security:
2021 – Secure Software Development Fundamentals:
Launching free educational courses on edX, OpenSSF equipped developers globally with foundational security practices.
"When we launched our first free training course in secure software development, we had one goal: make security knowledge available to every software developer. Today, that same mission powers all of OpenSSF: equipping developers, maintainers, and communities with the tools they need to make open source software more secure for everyone." – David A. Wheeler, Director, Open Source Supply Chain Security, Linux Foundation
2021 – Sigstore: Open Source Signing for Everyone:
Sigstore was launched to make cryptographic signing accessible to all open source developers, providing a free and automated way to verify the integrity and provenance of software artifacts and metadata.
"Being part of the OpenSSF has been crucial for the Sigstore project. It has allowed us to not only foster community growth, neutral governance, and engagement with the broader OSS ecosystem, but also given us the ability to coordinate with a myriad of in-house initiatives, like the securing software repos working group, to further our mission of software signing for everybody. As Sigstore continues to grow and become a core technology for software supply chain security, we believe that the OpenSSF is a great place to provide a stable, reliable, and mature service for the public benefit."
– Santiago Torres-Arias, Assistant Professor at Purdue University and Sigstore TSC Chair
2021-2022 – Security with OpenSSF Scorecard & Criticality Score:
Innovative tools were introduced to automate and simplify assessing open source project security risks.
"The OpenSSF has been instrumental in transforming how the industry approaches open source security, particularly through initiatives like the Security Scorecard and Sigstore, which have improved software supply chain security for millions of developers. As we look ahead, AWS is committed to supporting OpenSSF's mission of making open source software more secure by default, and we're excited to help developers all over the world drive security innovation in their applications." – Mark Ryland, Director, Amazon Security at AWS
2022 – Launch of Alpha-Omega:
Alpha-Omega (AO), an associated project of the OpenSSF launched in February 2022, is funded by Microsoft, Google, Amazon, and Citi. Its mission is to enhance the security of critical open source software by enabling sustainable improvements and ensuring vulnerabilities are identified and resolved quickly. Since its inception, the Alpha-Omega Fund has invested $14 million in open source security, supporting a range of projects including LLVM, Java, PHP, Jenkins, Airflow, OpenSSL, AI libraries, Homebrew, FreeBSD, Node.js, jQuery, RubyGems, and the Linux Kernel. It has also provided funding to key foundations and ecosystems such as the Apache Software Foundation (ASF), Eclipse Foundation, OpenJS Foundation, Python Software Foundation, and Rust Foundation.
2023 – SLSA v1.0 (Supply-chain Levels for Software Artifacts):
Setting clear and actionable standards for build integrity and provenance, SLSA was a turning point for software supply chain security and became essential in reducing vulnerabilities.
At the same time, community-driven tools like GUAC (Graph for Understanding Artifact Composition) built on SLSA's principles, unlocking deep visibility into software metadata, making it more usable and actionable, and connecting the dots across provenance, SBOMs, and in-toto attestations.
“Projects like GUAC demonstrate how open source innovation can make software security both scalable and practical. Kusari is proud to have played a role in these milestones, helping to strengthen the resiliency of the open source software ecosystem.”
– Michael Lieberman, CTO and Co-founder at Kusari and Governing Board member
2024 – Principles for Package Repository Security:
Offering a voluntary, community-driven security maturity model to strengthen the resilience of software ecosystems.
"Developers around the world rely daily on package repositories for secure distribution of open source software. It's critical that we listen to the maintainers of these systems and provide support in a way that works for them. We were happy to work with these maintainers to develop the Principles for Package Repository Security, to help them put together security roadmaps and provide a reference in funding requests." – Zach Steindler, co-chair of Securing Software Repositories Working Group, Principal Engineer, GitHub
2025
OSPS Baseline:
This initiative brought tiered security requirements to open source projects, quickly adopted by groundbreaking projects such as GUAC, OpenTelemetry, and bomctl.
“The Open Source Project Security Baseline was born from real use cases, with projects needing robust standardized guidance around how to best secure their development processes. OpenSSF has not only been the best topical location for contributors from around the world to gather â the foundation has gone above and beyond by providing project support to extend the content, promote the concept, and elevate Baseline from a simple control catalog into a robust community and ecosystem.” â Eddie Knight, OSPO Lead, Sonatype
AI/ML Security Working Group:
The MLSecOps White Paper from the AI/ML Security Working Group marks a major step in securing machine learning pipelines and guiding the future of trustworthy AI.
"The AI/ML working group tackles problems at the confluence of security and AI. While the AI world is moving at a breakneck pace, the security problems that we are tackling in the traditional software world are also relevant. Given that AI can increase the impact of a security vulnerability, we need to handle them with determination. The working group has worked on securing LLM-generated code, model signing, and a new white paper for MLSecOps, among many other interesting things." – Mihai Maruseac, co-chair of AI/ML Security Working Group, Staff Software Engineer, Google
OpenSSF's role rapidly expanded beyond tooling, becoming influential in global policy dialogues, including advising the White House on software security and contributing to critical policy conversations such as the EU's Cyber Resilience Act (CRA).
OpenSSF also continues to invest in community-building and education initiatives. This year, the Foundation launched its inaugural Summer Mentorship Program, welcoming its first cohort of mentees working directly with technical project leads to gain hands-on experience in open source security.
The Foundation also supported the publication of the Compiler Options Hardening Guide for C and C++, originally contributed by Ericsson, to help developers and toolchains apply secure-by-default compilation practices, which are especially critical in memory-unsafe languages.
In addition, OpenSSF has contributed to improving vulnerability disclosure practices across the ecosystem, offering guidance and tools that support maintainers in navigating CVEs, responsible disclosure, and downstream communication.
"The OpenSSF is uniquely positioned to advise on considerations, technical elements, and community impact public policy decisions have not only on open source, but also on the complex reality of implementing cybersecurity to a diverse and global technical sector. In the past 5 years, OpenSSF has been building a community of well-informed open source security experts that can advise regulations but also challenge and adapt security frameworks, law, and regulation to support open source projects in raising their security posture through transparency and open collaboration; hallmarks of open source culture." – Emily Fox, Portfolio Security Architect, Red Hat
Key community members, from long-standing contributors to new voices, have shaped OpenSSF's journey:
OG Voices:
"Microsoft joined OpenSSF as a founding member, committed to advancing secure open source development. Over the past five years, OpenSSF has driven industry collaboration on security through initiatives like Alpha-Omega, SLSA, Scorecard, Secure Software Development training, and global policy efforts such as the Cyber Resilience Act. Together, we've improved memory safety, supply chain integrity, and secure-by-design practices, demonstrating that collaboration is key to security. We look forward to many more security advancements as we continue our partnership." – Mark Russinovich, CTO, Deputy CISO, and Technical Fellow, Microsoft Azure
OpenSSF Leadership Perspective:
“OpenSSFâs strength comes from the people behind itâbuilders, advocates, and champions from around the world working toward a safer open source future. This milestone isnât just a celebration of what weâve accomplished, but of the community weâve built together.” â Adrianne Marcum, Chief of Staff, OpenSSF
Community Perspectives:
“After 5 years of hard work, the OpenSSF stands as a global force for securing the critical open-source that we all use. Here’s to five years of uniting communities, hardening the software supply chain, and driving a safer digital future.” Tracy Ragan, CEO, DeployHub
“I found OpenSSF through my own curiosity, not by invitation, and I stayed because of the warmth, support, and shared mission I discovered. From contributing to the BEAR Working Group to receiving real backing for opportunities, the community consistently shows up for its members. Itâs more than a project; itâs a space where people are supported, valued, and empowered to grow.” Ijeoma Onwuka, Independent Contributor
As we celebrate our fifth anniversary, OpenSSF is preparing for a future increasingly influenced by AI-driven tools and global collaboration. Community members across the globe envision greater adoption of secure AI practices, expanded policy influence, and deeper, inclusive international partnerships.
"As we celebrate OpenSSF's 5th Anniversary, I'm energized by how our vision has grown into a thriving global movement of developers, maintainers, security researchers, and organizations all united by our shared mission. Looking ahead, we're hoping to cultivate our community's knowledge and empower growth through stronger collaboration and more inclusive pathways for contributors." – Stacey Potter, Community Manager, OpenSSF
We invite you to share your memories, contribute your voice, and become part of the next chapter in securing open source software.
Here's to many more years ahead!
As machine learning (ML) evolves at lightning speed, so do the threats. The rise of large models like LLMs has accelerated innovation, but it has also introduced serious vulnerabilities. Data poisoning, model tampering, and unverifiable origins are not theoretical; they are real risks that impact the entire ML supply chain.
Model hubs, platforms for data scientists to share models and datasets, recognized the challenge: How could they ensure the models hosted on their platform were authentic and safe?
That's where Google's Open Source Security Team (GOSST), sigstore, and the Open Source Security Foundation (OpenSSF) stepped in. Together, we created the OpenSSF Model Signing (OMS) specification, an industry standard for signing AI models. We then integrated OMS into major model hubs such as NVIDIA's NGC and Google's Kaggle.
We partnered with Kaggle to experiment with making model signing easier without disrupting the publishing UX.
“The simplest solution to securing models is: sign the model when you train it and verify it every time you use it.”
– Mihai Maruseac, Staff Software Engineer, Google
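That advice maps to very little code. The sketch below follows the usage published for the `model_signing` package from the sigstore/model-transparency project; exact method names can differ between releases, and the model path, signature path, and identity values are placeholders.

```python
import model_signing  # pip install model-signing

MODEL_PATH = "my-model/"   # placeholder: directory of weights and config
SIG_PATH = "my-model.sig"  # placeholder: where the signature is written

# Sign once, right after training. Keyless Sigstore signing runs an OIDC
# flow and binds the signature to the trainer's identity, with the signing
# certificate recorded in a transparency log.
model_signing.signing.Config().use_sigstore_signer().sign(MODEL_PATH, SIG_PATH)

# Verify every time the model is used, before loading any weights.
# Identity and issuer are placeholders; use the publisher's real values.
model_signing.verifying.Config().use_sigstore_verifier(
    identity="trainer@example.com",
    oidc_issuer="https://accounts.google.com",
).verify(MODEL_PATH, SIG_PATH)
```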
Key features of the prototyped implementation:
The process dramatically improves trust and provenance while remaining invisible to most users.
With sigstore integrated, the experiment with Kaggle proved that model hubs can offer a verified ML ecosystem. Users know that what they download hasn't been tampered with or misattributed. Each model is cryptographically signed and tied to the author's identity: no more guessing whether a model came from "Meta" or a spoofed account.
“If we reach a state where all claims about ML systems and metadata are tamperproof, tied to identity, and verifiable by the tools ML developers already useâwe can inspect the ML supply chain immediately in case of incidents.”
– Mihai Maruseac, Staff Software Engineer, Google
This solution serves as a model for the broader ecosystem. Platforms hosting datasets and models can adopt similar practices using open tools like sigstore, backed by community-driven standards through OpenSSF.
Join the OpenSSF Community
Be part of the movement to secure open source software, including AI/ML systems. → Join the AI/ML Security WG
Explore sigstore
See how sigstore enables secure, transparent signing for software and models. → Visit sigstore
Learn About Googleâs Open Source Security Efforts
Discover how Google is advancing supply chain security in open source and machine learning. → Google Open Source Security Team
Learn More about Kaggle
Explore how Kaggle is evolving into a secure hub for trustworthy ML models. → Visit Kaggle
Watch the Talk
Title: Taming the Wild West of ML: Practical Model Signing With sigstore on Kaggle
Speaker: Mihai Maruseac, Google
Event: OpenSSF Community Day North America – June 26, 2025
Watch the talk → YouTube