Securing the AI Lifecycle: Trust, Transparency & Tooling in Open Source

WED, SEPT 24, 2025 | 1:00 PM ET

AI is everywhere, and it’s moving fast. But innovation brings new security challenges. As developers, researchers, and policymakers race to define what “safe AI” looks like, the open source community has a critical role to play.

Join us for a 50-minute Tech Talk exploring how open source projects and contributors are helping build trust into the AI/ML supply chain through model signing, reproducibility, metadata, and secure development practices.

We’ll spotlight insights from a new industry resource:
Visualizing Secure MLOps (MLSecOps): A Practical Guide for Building Robust AI/ML Pipeline Security, a white paper that introduces a visual, open source-centric approach to integrating security across the AI/ML lifecycle, leveraging lessons learned from DevSecOps.

This session brings together security, cloud, and AI/ML practitioners to examine practical strategies, emerging standards, and real-world implementation patterns for secure AI, including how existing tools like SLSA, Sigstore, and OpenSSF Scorecard are being extended to meet the moment.
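To give a concrete flavor of what “building trust into the artifact” can mean in practice, here is a minimal, hypothetical sketch (not taken from the white paper): verifying a downloaded model file against a digest published in its accompanying metadata before loading it. The file name and digest below are placeholders; tools like Sigstore go further by binding such digests to a verifiable signer identity.

```python
import hashlib
from pathlib import Path

def sha256_digest(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file in chunks so large model weights need not fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Placeholder values for illustration only: a local weights file and the
# digest published alongside it (e.g., in a model card or signed manifest).
MODEL_PATH = Path("model.safetensors")
PUBLISHED_DIGEST = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"

actual = sha256_digest(MODEL_PATH)
if actual != PUBLISHED_DIGEST:
    raise SystemExit(f"Digest mismatch: got {actual}; refusing to load model")
print("Model artifact matches its published digest.")
```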

What you’ll learn:

✅ How model signing and reproducibility build trust in AI
✅ What’s real (and what’s hype) in secure AI metadata and governance
✅ How MLSecOps bridges the gap between DevSecOps and modern AI/ML pipelines
✅ A walk-through of visuals and open source tooling highlighted in the MLSecOps white paper
✅ How to integrate trust-by-design into your AI workflows using open standards

Register Today

Speakers:

Marcela Melara
Research Scientist, Intel Labs

Mihai Maruseac
Staff Software Engineer, Google

Sarah Evans
Distinguished Engineer, Dell Technologies

Christopher “CRob” Robinson
Chief Architect at OpenSSF, Linux Foundation