Agentic AI systems and AI-driven software workflows are evolving quickly, and more people are building on top of them. With that shift come new questions about trust, control, provenance, and secure interaction between models, tools, and users.
In this session, we will explore how the OpenSSF AI/ML Security Working Group is developing open guidance and frameworks to help secure AI and machine learning systems, and how that work translates into real-world practice. Using SAFE-MCP (Security Analysis Framework for Evaluation of Model Context Protocol) and other solutions from OpenSSF member companies as examples, we will highlight community-driven efforts to improve the security of agentic AI systems: the problems they address, the design tradeoffs involved, and the lessons learned so far.
The session will also introduce OpenSSF’s free course, Secure AI/ML-Driven Software Development (LFEL1012), giving attendees a clear path to build skills and contribute to this evolving space.