
By David A. Wheeler
This is part 1 of a 2-part article discussing the impact of Artificial Intelligence (AI) on software development. In this part, I’ll note that AI use during software development is now the norm, despite frequent errors in AI-generated results, because productivity is king. I’ll then discuss its potential security implications. In part 2, I’ll offer tips for software developers (so they can better use AI) and look to the future. Part of the future has arrived! We need to adjust and be prepared for more.
AI is the norm now during software development
Using Artificial Intelligence (AI) during software development has become the norm; manual-only software development is the exception. Here’s some evidence:
- 51% of professional developers use AI tools daily [Stack Overflow 2025 survey]. This doesn’t mean they blindly trust AI tools; only 3.1% “highly trust” them, and their top complaint is “AI solutions that are almost right, but not quite”.
- 87.9% of developers use AI tools at least weekly (59.8% daily plus another 28.1% at least weekly) [2025 Qodo report]. This is despite 76.4% reporting that AI systems often produce incorrect or misleading code, and many reporting that these tools need to handle context better.
- 91% of web developers are using AI to generate code [dev.to “What Web Developers Really Think About AI in 2025”]. In this study, the top pain point was “hallucination & inaccuracies”, followed by “context & memory limitations”.
These studies’ data are from earlier in 2025. Given adoption trends, I believe these studies dramatically under-report current use.
Productivity is king
There’s a simple reason: AI can improve productivity. Stanford ran a study of roughly 100,000 software developers working in the real world. The study identified caveats with current AI:
- AI can generate lots of code, but that code must be reviewed and reworked. Productivity measurement must include this rework time to be meaningful.
- AI doesn’t replace humans; it’s an assistant to humans that needs constant guidance.
- AI uses machine learning (ML), and thus isn’t very helpful for programming languages with little publicly available training data (e.g., COBOL).
- AI provides extremely high productivity gains, 10-40% depending on complexity, for “greenfield” development of new, tiny programs. Such tasks involve a lot of boilerplate, and ML-based systems are good at filling in boilerplate. However, relatively little of today’s programming is like this; greenfield experiences have led many to form excessive expectations.
If you’re trying to make real-world improvements to an existing program, they estimate overall productivity gains of 15-20% for low-complexity tasks and 5-10% for high-complexity tasks. That is an extraordinarily large gain in productivity, especially since many tasks are low complexity.
The history of software development is full of fads that failed to provide adequate real-world productivity gains. The pursuit of such fads even has a name: “hype-driven development”. A few approaches succeeded and became the norm, for example, the use of high-level languages, structured programming, and open source software. AI has joined the ranks of successful approaches. Customers can pay a premium to have code always created manually, but that won’t be sustainable for many.
These results are consistent with my own experience. AI results are often wrong, but humans typically find it easier to recognize correct results and fix incorrect ones than to develop their own correct results from scratch. A mostly correct result should not be blindly accepted, but it can serve as a useful starting point for developing a correct one.
Security
There are two main kinds of security risks that can specifically arise when using AI assistants to help develop software:
- Development environment exploitation: Security failures can happen in the development environment due to the assistant itself. For example, attackers can insert malicious hidden instructions in documentation that the assistant might read, and valuable organizational data (such as trade secrets) might leak out. There are countermeasures, such as limiting the AI’s access to only selected data, filtering what the AI sees, and running these systems within virtual machines.
- Vulnerable generated results: The AI-generated results, particularly code, can have security vulnerabilities. AI-generated code is often insecure unless special steps are taken; for example, a Veracode study found that 45% of AI-generated code contains vulnerabilities. Over the past few years, AI’s ability to generate functional code has increased, but its rate of generating secure code has not. That shouldn’t be surprising; these systems are built with training sets full of insecure software! They also often try to add hallucinated dependencies with names attackers can predict and register, leading to “slopsquatting” attacks (a simple dependency-existence check is sketched just after this list). A compromised AI assistant might even insert malicious code into generated results. There are countermeasures, like providing specific instructions for security and performing human review.
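As one small, concrete countermeasure for slopsquatting, a build step can verify that every dependency an assistant suggests actually exists in the package registry before anything is installed. The sketch below is my own illustration, not from any of the studies above; it assumes Python dependencies and uses PyPI’s public JSON endpoint (https://pypi.org/pypi/<name>/json):

```python
# Hypothetical pre-install check (illustrative only): verify that each
# AI-suggested dependency actually exists on PyPI before installing it.
# This catches hallucinated package names that "slopsquatting" attackers
# could otherwise register and weaponize.
import urllib.error
import urllib.request


def package_exists_on_pypi(name: str) -> bool:
    """Return True if `name` is a published package on PyPI."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as response:
            return response.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:   # Unknown package: likely hallucinated.
            return False
        raise                 # Other HTTP errors: investigate, don't guess.


if __name__ == "__main__":
    # "flask-jwt-extendd" is a made-up, plausible-looking hallucination.
    suggested = ["requests", "flask-jwt-extendd"]
    for pkg in suggested:
        verdict = "found" if package_exists_on_pypi(pkg) else "NOT FOUND - review before installing"
        print(f"{pkg}: {verdict}")
```

Existence alone proves little, of course: an attacker may already have registered a hallucinated name, so a check like this is only a first filter alongside pinned versions, lockfiles, and human review of every new dependency.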
Any use of AI built on Large Language Models (LLMs), including as AI assistants in software development, needs to try to avoid the lethal trifecta (a tiny illustrative check follows this list):
- Access to your private data
- Exposure to untrusted content
- Ability to externally communicate
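To make the trifecta concrete, here is a purely illustrative review-time check; the field names and structure are my own, not from this article or any particular agent framework. It simply flags any tool or agent configuration that combines all three capabilities:

```python
# Illustrative only: flag AI tool/agent configurations that combine all three
# elements of the "lethal trifecta". Field names are hypothetical.
from dataclasses import dataclass


@dataclass
class ToolProfile:
    name: str
    reads_private_data: bool          # e.g., source code, secrets, internal docs
    sees_untrusted_content: bool      # e.g., web pages, issue comments, third-party READMEs
    can_communicate_externally: bool  # e.g., outbound HTTP, email, publishing packages


def has_lethal_trifecta(profile: ToolProfile) -> bool:
    """True if this configuration combines all three risk factors."""
    return (profile.reads_private_data
            and profile.sees_untrusted_content
            and profile.can_communicate_externally)


agent = ToolProfile("doc-summarizer",
                    reads_private_data=True,
                    sees_untrusted_content=True,
                    can_communicate_externally=True)
if has_lethal_trifecta(agent):
    print(f"WARNING: {agent.name} combines all three - remove at least one capability.")
```

In practice, the easiest leg to remove is often external communication, for example by denying the assistant outbound network access or tightly allow-listing it.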
If you use AI assistants to develop software, we strongly encourage you to take our free OpenSSF course Secure AI/ML-Driven Software Development (LFEL1012). It discusses these two main risks and how to manage them, including ways to reduce risks from the lethal trifecta.
Another way AI is affecting software development is that many software projects are embedding AI into the systems being produced. Sometimes this is great, and sometimes this is foolish. A general issue is that AI systems often make mistakes. A key security-specific challenge is that current AI systems are easily fooled by attackers. As Simon Willison wisely explained in 2023, “99% is a failing grade in security”. If your AI security counters attacks 99% of the time, attackers will simply attack the system a million times and laugh at you (a quick calculation below shows why).
Beware of snake oil: there are many published papers proposing AI security measures that don’t work (this presentation lists many), because they presume naive attackers who won’t read the papers. What we need are AI security measures that have withstood intensive adversarial review (see Carlini et al.’s 2019 paper on this) and that work even when the attacker knows about the defense. The CaMeL architecture is one of the few AI security architectures that actually works, though only for some situations; we need a lot more work like that! Before building AI into a system, make sure it makes sense given AI’s strengths and weaknesses. As noted in the movie Jurassic Park, too many people are so “preoccupied with whether or not they could, they didn’t stop to think if they should”.
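Here is the quick calculation promised above, a back-of-the-envelope illustration of why “99% is a failing grade”. The numbers are my own and assume (simplistically) that each automated attack attempt is independent and blocked with probability 0.99:

```python
# Back-of-the-envelope illustration (assumes independent attempts and a fixed
# 99% block rate): the chance that at least one automated attack gets through
# approaches certainty as the number of attempts grows.
p_block = 0.99
for attempts in (100, 10_000, 1_000_000):
    p_breach = 1 - p_block ** attempts
    print(f"{attempts:>9,} attempts -> P(at least one success) ≈ {p_breach:.6f}")
```

Even a 99.99% block rate succumbs to a million automated attempts with near certainty, which is why defenses must hold up against adversaries who know exactly how they work.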
AI is fundamentally an accelerant, including in security:
- For attackers, AI helps them carry out attacks. AI can help attackers find and exploit software vulnerabilities. AI can also help attackers research and exploit humans via phishing and other forms of social engineering.
- For defenders, AI helps them defend. AI can help defenders find and fix software vulnerabilities. AI can also help defend against attacks on people, e.g., by flagging suspicious communications. The OpenSSF worked with the AI Cyber Challenge (AIxCC), which rewarded the creation of AI systems that find and fix software vulnerabilities. This was a great success, and the winning systems are being released as open source software (OSS) so others can build on them. The OpenSSF AI/ML Working Group’s Cyber Reasoning SIG is working to improve the ability of AI systems to detect and fix software vulnerabilities.
There are many ways to attack machine-learning-based AI systems. NIST AI 100-2 E2025 provides a taxonomy of such attacks. While that paper identifies some countermeasures, it also acknowledges that current countermeasures have serious limitations. For example, it notes that “mitigating adversarial examples is a well-known challenge… and deserves additional research and investigation. The field has a history of publishing defenses evaluated under relatively weak adversarial models that are subsequently broken…”. Today, adding “=coffee” breaks many guardrails. Many AI systems don’t have strong defenses, and our current defensive abilities are limited. Before developing a system that embeds AI, consider what will happen when the attackers show up.
In part 2, I’ll continue by offering tips for software developers to better use AI and look to the future.