
By Avishay Balter & David A. Wheeler
AI code assistants are powerful tools. They can speed up development, suggest solutions, and help explore alternatives. But they also create security risks, because the results you get depend heavily on what you ask. These systems’ models are trained on vast amounts of code (much of it insecure), they don’t truly understand context, and they can confidently produce wrong or vulnerable results. If you want secure code, your instructions need to ask for it. Vague or careless prompts are more likely to lead to insecure results.
That’s why the Open Source Security Foundation (OpenSSF) created the “Security-Focused Guide for AI Code Assistant Instructions”. The guide was created by the OpenSSF Best Practices and AI/ML Working Groups, led by Avishay Balter (Microsoft, and a co-author of this blog), with contributors from organizations such as Microsoft, Google, and Red Hat.
This new guide focuses specifically on the prompts (human inputs) you give to these assistants. Clear, careful, and security-focused instructions can greatly increase the chance that the assistant produces code that’s correct and secure.
Developers are already using AI to generate code, and we want them to succeed when they do. The more security-focused your prompts, the more likely you are to get secure results. Assistants will still make mistakes, but better prompts make a difference.
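As a hedged illustration of what such security-focused instructions might look like, here is a short project-level instruction file an assistant could be asked to follow. The file name and wording below are examples we chose for this post, not text from the guide itself:

```markdown
<!-- Illustrative example: e.g., a repository instructions file your assistant reads -->
# AI assistant instructions: security requirements

- Treat all external input as untrusted; validate and sanitize it before use.
- Use parameterized queries; never build SQL (or shell commands) by string concatenation.
- Never hardcode secrets, tokens, or credentials; read them from the environment
  or a secrets manager.
- Prefer well-maintained, widely used libraries over hand-rolled implementations,
  especially for cryptography and authentication.
- When generating code that handles authentication, file paths, or deserialization,
  state the security assumptions you made so a human can review them.
```

Even a short file like this shifts the assistant’s defaults: instead of reproducing the most common (and often insecure) patterns from its training data, it is steered toward safer ones by every prompt in the project.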
This guide is only the beginning. OpenSSF is also developing a new course (LFEL1012) on using AI code assistants securely. That course will cover broader issues, including how to safely run these assistants. The course will reference this guide, but the guide stands on its own as a practical, lightweight resource for writing better prompts today.
AI code assistants are changing the process of software development, but they need human guidance. We want AI to help improve security instead of undermining it, and that requires considering security while we use these tools. This guide is one step toward that goal.
Are you using an AI code assistant? That’s great! Use this guide by providing instructions based on it, or have the assistant read the guide. Prepare to collaborate with a more security-minded assistant at your side.
About the Authors
Avishay Balter is a Principal SWE Lead at Microsoft with nearly 20 years of experience in building cutting-edge software and leading high-performing engineering teams. He is deeply involved in the open-source community as a co-chair of the Open Source Security Foundation (OpenSSF) Best Practices WG and Memory Safety SIG.
Dr. David A. Wheeler is an expert on developing secure software and on open source software. He created the Open Source Security Foundation (OpenSSF) courses “Developing Secure Software” (LFD121) and “Understanding the EU Cyber Resilience Act (CRA)” (LFEL1001), and is completing the OpenSSF course “Secure AI/ML-Driven Software Development” (LFEL1012). His other contributions include “Fully Countering Trusting Trust through Diverse Double-Compiling (DDC)”. He is the Director of Open Source Supply Chain Security at the Linux Foundation and teaches a graduate course in developing secure software at George Mason University (GMU).