Skip to main content

A New Course on Secure AI/ML-Driven Software Development

By October 16, 2025Blog

By David A. Wheeler, OpenSSF

Software development is changing. Artificial intelligence (AI)/machine learning (ML) are becoming an integral part of how the world develops software. AI code assistants, in particular, are powerful tools that can help software developers create software more quickly and efficiently.

However, they also come with security risks. As we integrate AI/ML into software development, we must address these risks head-on. When AI systems read documentation, that documentation can include hidden malicious instructions that the AI systems may obey. AI models are also trained on vast amounts of code, much of which is insecure, often resulting in vulnerable results.  Developers using these tools should consider security from the very beginning.

To help with this, the Open Source Security Foundation (OpenSSF) is proud to announce the release of our new free course, Secure AI/ML-Driven Software Development (LFEL1012).

This course is designed for anyone developing software, either closed source or open source software. Our goal is to help you use AI for software development while maintaining security. The course will help you navigate the new security challenges introduced by AI, such as the “lethal trifecta”: exposure to untrusted content, access to private data, and the ability to externally communicate. The course provides a pragmatic approach to using AI in software development, with a focus on applying it in the real world. It’s expected to take about one hour to complete.

The course covers:

  • Introduction and Key Concepts: We’ll begin by setting the context and defining key terms; you don’t need to be an AI expert to learn from this course.
  • Security Risks of Using AI Assistants: This section breaks down two categories of security risks: those in the development environment and those in the final generated results. We’ll discuss how attackers can trick AI assistants, and the high risk of AI-generated code containing vulnerabilities, including the risk from slopsquatting. This section also discusses the risks of “vibe coding” (accepting AI-generated code without review or edit).
  • Best Practices for Secure Assistant Use: This explains ways to use assistants while countering attacks. You’ll learn how to practically apply least privilege to AI assistants, as well as how to cautiously use external data, limit their access to your private data, and limit their ability to externally communicate.
  • Writing More Secure Code with AI: This section shifts focus to how you can improve the security of the code generated with AI. This includes trusting less and engaging more with the assistant and how to use it to generate tests for your code. We cite and build on the OpenSSF Security-Focused Guide for AI Code Assistant Instructions.
  • Reviewing Changes in a World with AI: The final section focuses on how to review changes to software, now that AI is here. Developers are ultimately responsible for the software they develop, even with an AI assistant. This module covers how to effectively review proposed changes, whether they come from an assistant or another human. It discusses addressing challenges such as slop as purported contributions.

Knowing this is crucial for helping today’s software developers worldwide build more secure software.

Please take our new free course: Secure AI/ML-Driven Software Development (LFEL1012).

In addition, check out our growing list of other free educational materials, including our courses Developing Secure Software” (LFD121), Security for Software Development Managers (LFD125), and Understanding the EU Cyber Resilience Act (CRA) (LFEL1001).

About the Author

Dr. David A. Wheeler is an expert on developing secure software and on open source software.  He created the Open Source Security Foundation (OpenSSF) courses “Developing Secure Software” (LFD121) and “Understanding the EU Cyber Resilience Act (CRA)” (LFEL1001), and is completing creation of the OpenSSF course “Secure AI/ML-Driven Software Development” (LFEL1012).  His other contributions include “Fully Countering Trusting Trust through Diverse Double-Compiling (DDC)”. He is the Director of Open Source Supply Chain Security at the Linux Foundation and teaches a graduate course in developing secure software at George Mason University (GMU).