OpenSSF Tech Talk Recap: Securing Agentic AI

April 8, 2026

The shift from traditional software to agentic Artificial Intelligence (AI) represents a fundamental change in the security landscape. While traditional systems are deterministic, AI agents are non-deterministic by design. This presents a new set of challenges: execution paths that change with the same input, runtime discovery of access surfaces, and the critical need for delegated identity.

At our recent Open Source Security Foundation (OpenSSF) Tech Talk, experts from Microsoft, Thread AI, Canonical, and the OpenSSF AI/ML Security Working Group joined forces to dismantle the “black box” of AI security. They provided a roadmap for securing the entire stack – from the user prompt down to the silicon. For those who missed the live session, the full recording and slide deck are now available on demand.

The New Threat Model: Why Agents Differ

Angela McNeal of Thread AI opened the discussion by highlighting why our existing frameworks need a major upgrade. “Traditional software processes are deterministic,” McNeal noted. “AI agents, however, follow the path of least resistance.”

Without explicit boundaries, an agent might pull records for an entire family plan just to process a single claim. This occurs not out of malice, but from a drive to be thorough. This “unbounded” nature makes least privilege an architectural requirement rather than a mere policy.

Key Problem Areas:

  • Agent Autonomy: The risk of “confused deputy” problems where agents delegate tasks to sub-agents without narrowing authorization scopes.
  • Tool-Model Trust: Every Application Programming Interface (API) and the Model Context Protocol (MCP) server is an untrusted boundary. Models cannot natively distinguish between data and instructions, a vulnerability known as prompt injection.
  • Context Integrity: Decision-making must be defensible. Organizations must capture the “reasoning chain” of an agent to satisfy regulators. Logging only the final output is insufficient.
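The delegation concern above can be sketched in a few lines. This is a hypothetical illustration, not code from any agent framework discussed in the talk: a sub-agent's grant is the intersection of what the parent holds and what the sub-task requests, so delegation can only narrow authority, never widen it.

```python
# Hypothetical sketch: narrowing authorization scopes on delegation,
# so a sub-agent never inherits more authority than its sub-task needs.

def delegate_scopes(parent_scopes: set[str], requested_scopes: set[str]) -> set[str]:
    """Grant only scopes the parent holds AND the sub-task requests."""
    granted = parent_scopes & requested_scopes
    denied = requested_scopes - parent_scopes
    if denied:
        # A confused-deputy attempt: the sub-task asked for authority the
        # parent never had. Record it and proceed with the narrowed grant.
        print(f"denied escalation: {sorted(denied)}")
    return granted

# A claims-processing agent holds read access to one member's records.
parent = {"claims:read:member-123", "documents:read:member-123"}
# A sub-agent asks for the whole family plan; the grant is narrowed.
sub = delegate_scopes(parent, {"claims:read:member-123", "claims:read:family-plan"})
```

Under this pattern, the "entire family plan" query from the earlier example is structurally impossible: the broader scope is dropped at delegation time rather than policed after the fact.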

Introducing SAFE-MCP: A Threat Catalog for the AI Era

To address these vulnerabilities, Frederick Kautz, a contributor to the OpenSSF AI/ML Security Working Group, introduced SAFE-MCP, one of the newest initiatives and Special Interest Groups (SIGs) within the OpenSSF. Inspired by the MITRE ATT&CK® framework, SAFE-MCP provides a standardized catalog of over 80 attack techniques specifically targeting tool-based Large Language Models (LLMs).

“The attacks often live within the changes of assumptions,” Kautz explained. “If we just do what we did two years ago, we are going to open ourselves up to attacks.”

By assigning specific identifiers to threats – such as SAFE-T1201 (MCP Rugpull Attack) – the community can communicate without ambiguity. This allows security teams to verify whether their architecture successfully mitigates specific, known risks such as context exfiltration or lateral movement.
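The value of stable identifiers can be shown with a toy coverage check. Everything below is hypothetical apart from SAFE-T1201, which the talk named; the mitigation notes and the placeholder ID stand in for a team's own threat-model inventory:

```python
# Toy sketch: mitigation-coverage tracking keyed by SAFE-MCP identifiers.
# SAFE-T1201 is from the catalog; the control text is a hypothetical example.
mitigations = {
    "SAFE-T1201": {  # MCP Rugpull Attack
        "mitigated": True,
        "control": "pin MCP server versions and verify tool manifests on load",
    },
}

def coverage_gaps(required_ids: set[str], catalog: dict) -> set[str]:
    """Return the threat IDs with no recorded, effective mitigation."""
    return {tid for tid in required_ids
            if not catalog.get(tid, {}).get("mitigated", False)}

# Audit: SAFE-T1201 is covered; "SAFE-T9999" is a placeholder ID that is
# absent from the catalog, so it surfaces as a gap.
gaps = coverage_gaps({"SAFE-T1201", "SAFE-T9999"}, mitigations)
```

Because each ID means the same thing to every team, the output of a check like this can be shared across organizations without re-explaining the underlying attack.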

The “Seven-Layer Cake” of AI Infrastructure

Hugo Huang and Abdelrahman Hosny of Canonical shifted the focus to the underlying infrastructure. They described AI security as a seven-layer stack, emphasizing that open source is the bedrock of every layer.

  1. User Interface: Web applications with thousands of open source dependencies.
  2. Orchestration: Tools such as Ollama or vLLM that manage model loading and memory.
  3. Inference Runtime: The math engine (e.g., Llama.cpp) that executes matrix multiplications.
  4. Model Format: Static files that can be poisoned to corrupt the sampling process.
  5. Hardware Drivers: The software communicating with the Graphics Processing Unit (GPU) or Language Processing Unit (LPU).
  6. Kernel: The Linux bedrock managing resource allocation.
  7. Silicon: The physical hardware (NVIDIA, Intel, AMD) that provides high-performance computing power.

With more than 3,000 open source dependencies in a typical AI stack, the panel stressed the importance of Software Bill of Materials (SBOM) visibility and enterprise-grade patch management.
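SBOM visibility at that scale starts with being able to machine-read the inventory. A minimal sketch, assuming an SPDX 2.3-style JSON document (the inline fragment and its package versions are illustrative; a real SBOM would be tool-generated and list thousands of entries):

```python
import json

# Illustrative SPDX 2.3-style SBOM fragment with two example packages.
sbom_json = """
{
  "spdxVersion": "SPDX-2.3",
  "packages": [
    {"name": "llama.cpp", "versionInfo": "example-build"},
    {"name": "numpy", "versionInfo": "1.26.4"}
  ]
}
"""

def list_components(sbom: dict) -> list[tuple[str, str]]:
    """Return (name, version) pairs for every package entry in the SBOM."""
    return [(pkg["name"], pkg.get("versionInfo", "NOASSERTION"))
            for pkg in sbom.get("packages", [])]

components = list_components(json.loads(sbom_json))
for name, version in components:
    print(f"{name} {version}")
```

An inventory in this shape is what makes enterprise patch management actionable: known-vulnerable versions can be matched against advisories mechanically instead of by manual audit.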

How to Get Involved

Securing the future of open source AI is a community-wide effort. The OpenSSF provides several resources for organizations to maintain security: