Securing Agentic AI in Practice: From OpenSSF Guidance to Real-World Implementation

March 13, 2026 | Blog

Agentic AI systems and AI-driven software workflows are evolving quickly, with more people building on top of them. With that shift comes new questions around trust, control, provenance, and secure interaction between models, tools, and users. Traditional cybersecurity models are being pushed to their limits, and the security stakes have never been higher.

Are you prepared to secure your AI workflow?

Join us on March 17th at 1:00 PM ET for an essential OpenSSF Tech Talk: “Securing Agentic AI in Practice: From OpenSSF Guidance to Real-World Implementation.”

Register Here for the Webinar

Why This Talk Matters

Agentic AI introduces unique vulnerabilities, including agent autonomy risks, trust issues in tool interactions, and threats to context integrity. This session bridges the gap between high-level security guidance and the gritty reality of production-level implementation.

What We’ll Cover:

  • The Problem Space: Angela McNeal (Thread AI) will dive into why agent autonomy and context integrity are the new frontiers of AI security.
  • Deep Dive into SAFE-MCP: Frederick Kautz (AI/ML Security Working Group, SAFE-MCP SIG Maintainer) will explore the Secure AI Framework Ecosystem (SAFE) and Model Context Protocol (MCP), discussing threat models and design tradeoffs.
  • Infrastructure Perspectives: Hugo Huang and Abdelrahman Hosny (Canonical) will share how these security considerations translate to the infrastructure layer.
  • Skills Development: Learn about the OpenSSF’s free course, Secure AI/ML-Driven Software Development (LFEL1012), to help you build your own expertise.

Meet the Speakers!

  • Moderator: Yesenia Yser (Senior Security Program Manager, Microsoft)
  • Angela McNeal (CEO & Co-Founder, Thread AI)
  • Frederick Kautz (AI/ML Security Working Group, SAFE-MCP SIG Maintainer)
  • Hugo Huang (Public Cloud Alliance Director, Canonical)
  • Abdelrahman Hosny (Silicon Alliance Manager, Canonical)

Event Details

Don’t miss this chance to hear from the experts building the frameworks that will keep AI safe and reliable.