Skip to main content

What’s in the SOSS? Podcast #42 – S2E19 New Education Course: Secure AI/ML-Driven Software Development (LFEL1012) with David A. Wheeler

By October 16, 2025Podcast

Summary

In this episode of “What’s In The SOSS,” Yesenia interviews David A. Wheeler, the Director of Open Source Supply Chain Security at the Linux Foundation. They discuss the importance of secure software development, particularly in the context of AI and machine learning. David shares insights from his extensive experience in the field, emphasizing the need for both education and tools to ensure security. The conversation also touches on common misconceptions about AI, the relevance of digital badges for developers, and the structure of a new course aimed at teaching secure AI practices. David highlights the evolving nature of software development and the necessity for continuous learning in this rapidly changing landscape.

Conversation Highlights

00:00 Introduction to Open Source and Security
02:31 The Journey to Secure AI and ML Development
08:28 Understanding AI’s Impact on Software Development
12:14 Myths and Misconceptions about AI in Security
18:24 Connecting AI Security to Open Source and Closed Source
20:29 The Importance of Digital Badges for Developers
24:31 Course Structure and Learning Outcomes
28:18 Final Thoughts on AI and Software Security

Transcript

Yesenia (00:01)
Hello and welcome to What’s in the SOSS, OpenSSF podcast where we talk to interesting people throughout the open source ecosystem. They share their journey, expertise and wisdom. So yes, I need said one of your hosts and today we have the extraordinary experience of having David Wheeler on a welcome David. For those that may not know you, can you share a little bit about your row at the Linux Foundation OpenSSF?

David A. Wheeler (00:39)
Sure, my official title is actually probably not very illuminating. It says it’s the direct, I’m the director of open source supply chain security. But what that really means is that my job is to try to help other folks improve the security of open source software at large, all the way from it’s in someone’s head, they’re thinking about how to do it, developing it, putting it in repos, getting it packaged up, getting it distributed, receiving it just all the way through. We want to make sure that people get secure software and the software they actually intended to get.

Yesenia (01:16)
It’s always important, right? You don’t want to open up a Hershey bar that has no peanuts in the peanuts, right? So that was my analogy for the supply chain security in MySpace. Because I’m a little sensitive to peanuts. I was like, you know, you don’t want that.

David A. Wheeler (01:22)
You

David A. Wheeler (01:31)
You don’t want that. And although the food analogy is often pulled up, I think it’s still a good analogy. If you’re allergic to peanuts, you don’t want the peanuts. And unfortunately, it’s not just, hey, whether or not it’s got peanuts or not, but there was a scare involving Tylenol a while back. And to be fair, the manufacturer didn’t do anything wrong, but the bottles were tampered with by a third party.

Yesenia (01:40)
Mm-mm.

David A. Wheeler (01:57)
And so we don’t want tampered products. We want to make sure that when you request an open source program, it’s actually the one that was intended and not something else.

Yesenia (02:07)
So you have a very important job. Don’t play yourself there. We want to make sure the product you get is the one you get, right? So if you don’t know David, go ahead and message him on Slack, connect with him. Great gentleman in the open source space. And you’ve had a long time advocating for secure software development in the open source space. How did your journey lead to creating a course specifically on secure AI and ML driven development?

David A. Wheeler (02:36)
As with many journeys, it’s a complicated journey with lots of whens and ways. As you know, I’ve been interested in how do you develop secure software for a long time, decades now, frankly. And I have been collecting up over the years what are the common kinds of mistakes and more importantly, what are the systemic simple solutions you can make that would prevent that problem and eliminating it entirely ideally.

Um, and over the years it’s turned out that in fact, for a vast number, for the vast majority of problems that people have, there are well-known solutions, but they’re not well known by the developers. So a lot of this is really an education story of trying to make it so that software developers know how to do things. Now it’s a fair, you know, some would say, some would say, well, what about tools? Tools are valuable. Absolutely.

If to the extent that we can, we want to make it so that tools automatically do the secure thing. And that’s the right thing to do, but that’ll never be perfect. And people can always override tools. And so it’s not a matter of education or tools. I think that’s a false dichotomy. It’s you need tools and you need education. You need education or you can’t use the tools well as much as we can. We want to automate things so that they will handle things automatically, but you need both. You need both.

Now, to answer your specific question, I’ve actually been involved in and out with AI to some extent for literally decades as well. People have been interested in AI for years, me too. I did a lot more with symbolic based AI back in the day, wrote a lot of Lisp code. But since that time, really machine learning, although it’s not new, has really come into its own.

And all of a sudden it became quite apparent to me, and it’s not just me, to many people that software development is changing. And this is not a matter of what will happen someday in the future. This is the current reality for software development. And I’m going to give a quick shout out to some researchers in Stanford. I’ll have to go find the link. So who basically did some, I think some important studies related to this.

David A. Wheeler (04:59)
When you’re developing software from scratch and trying to create a tiny program, the AI tools are hard to beat because basically they’re just creating, know, they’re just reusing a template, but that’s a misleading measure, okay? That doesn’t really tell you what normal software development is like. However, let’s assume that you’re taking existing programming and improving it, and you’re using a language for which there’s a lot of examples for training. Okay, we’re talking Python and Java and, you know, various widely used languages, okay?

David A. Wheeler (05:28)
If you do those, turns out the AI tools can generate a lot of code. Some of it’s right. So that means you have to do more rework, frankly, when you use these tools. Once you take the rework into account, they’re coming up with a 20 % improvement in productivity. That is astounding. And I will argue that is the end of the argument. Seriously, there are definitely, there are companies where they have a single customer and the customer pays them to write some software. If the customer says never use AI, fine, the customer’s willing to pay 20 % improvement, I will charge that extra to them. But out in most commercial open source settings, you can’t throw, you can’t ignore a 20 % improvement. And that’s current tech, that’s not future tech. mean, the reality is that we haven’t seen improvements like this since the switch from hut, from assembly to high level languages, the use of, you know, the use of structure programming, I think was another case we got that kind. And you can make a very good case that open source software was that was a third case where you got that digital, productivity. Now you could also argue that’s a little unfair because open source didn’t improve your ability to write software. makes you didn’t have to write the software.

David A. Wheeler (06:53)
But that’s okay. That’s still an improvement, right? So I think that counts. But for the most part, we’ve had a lot of technologies that claim to improve productivity. I’ve worked many over the years. I’ve been very interested in how do you improve productivity? Most of them turned out not to be true. I don’t think that’s true for AI. It’s quite clear for multiple studies. mean, not all studies agree with this, by the way, but I think there’s enough studies that there’s a productivity improvement.

David A. Wheeler (07:21)
It does depend on how you employ these, that, but you know, and they’ll get better. But the big problem now is everyone is on the list. This is a case where everyone, even if you’re a professional and you’ve been doing software development for years, everybody’s new at this part of the game. These tools are new. And the problem here is that the good news is that they can help you. The bad news is they can harm you. They can do.

David A. Wheeler (07:50)
They can produce terribly insecure software. They can also end up being the security vulnerability themselves. And so we’re trying to get ahead of the game and looking around what’s the latest information, what can we learn? And it turns out there’s a lot that we can learn that we actually think is gonna stay on the test of time. And so that’s what this course is, those basics they’re gonna apply no matter what tool you use.

David A. Wheeler (08:17)
How do you make it say you’re using these tools, but you’re not immediately becoming a security vulnerability? How is it so that you’re less likely to produce vulnerable code? And that turns out to be harder. We can talk about why that is, but that’s what this course is in a nutshell.

Yesenia (08:33)
Yeah, I know I had a sneak preview at the slide deck and I was just like, this is fantastic. Definitely needed it. And I wanted to take a moment and give a kudos to the researchers because the engine, the industry wouldn’t be what it is today without the researchers. Like they’re the ones that are firsthand, like try and failing and then somebody picks it up and builds it and it is open source or industry. then boom, it becomes like this whole new field. So I know AI has been around for a minute.

David A. Wheeler (09:01)
Yeah, let me add that. I agree with you. Let me actually separate different researchers because we’re building on the first of course, the researchers who created these original AI and ML systems more generally, obviously a lot of the LLM based research. You’ve got the research specifically in developing tools for developing, for improving software development. And then you’ve got the researchers who are trying to figure out the security impacts of this. And those folks,

Yesenia (09:30)
Those are my favorite. Those are my favorite.

David A. Wheeler (09:31)
I’m Well, we need all of these folks. But the thing is, what concerns me is that remarkably, even though we’ve got this is really new tech, we do have some really interesting research results and publications about their security impacts. The problem is, most of these researchers are really good about, you know, doing the analysis, creating controls, doing the research, publishing a paper, but for most people, publishing a paper has no impact. People are not going to go out and read every paper on a topic. That’s, know, they have work to do basically. So if you’re a researcher makes these are very valuable, but what we’ve tried to do is take the research and boil it down to as a practitioner, what do you need to know? And we do cite the research because

David A. Wheeler (10:29)
You know, if you’re, if you’re interested or you say, Hey, I’m not sure I believe that. Well, great. Curiosity is fantastic. Go read the studies. there’s always limitations on studies. We don’t have infinite time and infinite money. but I think the research is actually pretty consistent. at least with Ted Hayes technology, we, can’t guess what the great grand future holds.

David A. Wheeler (10:55)
But I’m going to guess that at least for the next couple of years, we’re going to see a lot of LLMs, LLM use, they’re going to build on other tools. And so there’s things that we know just based on that, that we can say, well, given the direction of current technology, what’s okay, what’s to be concerned about? And most importantly, what can we do practically to make this work in our favor? So we get that 20 % because we’re going to want it.

Yesenia (11:24)
Yeah, at this point in time, we’re seedling within the AI ML piece. What you said is really, really important. It’s just like, so much more to this. There’s so much more that’s growing. And I want to take it back to something you had mentioned. You’re talking about the good that is coming from the AI ML. And there is the bad, of course. And for the course that you’re coming out, what is one misconception about AI in the software development security that you hope that this course will shatter? What myth are you busting?

David A. Wheeler (11:53)
What myth am I busting? I guess I’m going to cheat because I’m going to respond with two. It’s by the fact that I actually can count. guess, okay, I’m going to turn it into one, which is, guess, basically either over or under indexing on the value of A. Basically expecting too much or expecting too little. Okay, basically trying to figure out what the real expectations you should have are and not go outside that. So there’s my one. So let me talk about over and under. We’ve got people.

Yesenia (12:30)
Well, I’m going to give you another one because in software everything starts with zero. So I’ll give you another one.

David A. Wheeler (13:47)
Okay, all right, so let me talk about the under. There are some who have excessive expectations. We’ve got the, you know, I think vibe coding in particular is a symptom of this, okay? Now, there are some people who use the word vibe coding as another word for using AI. I think that’s not what the original creator of the term meant. And I actually think that’s not helpful because it’s a whole lot like talking about automated carriages.

Um, very soon we’re only going to be talking about carriages. Okay. Everybody’s going to be using automation AI except the very few who don’t. Okay. So, so there’s no point in having a special term for the normal case. Um, so what, what I mean by vibe coding is what the original creator of the term meant, which is, Hey, AI system creates some code. I’m never going to review it. I’m never going to look at it. I’m not going to do anything. I will just blindly accept it. This is a terrible idea. If it matters what the quality of the code is. Now there are cases where frankly the quality of the code is irrelevant. I’ve seen some awesome examples where you’ve got little kids, know, eight, nine year olds running around and telling a computer what to do and they get a program that seems to kind of do that. And that is great. I mean, if you want to do vibe coding with that, that’s fantastic. But if the code actually does something that matters, with current tech, this is a terrible idea. They’re not.

They can sometimes get it correct, but even humans struggle making perfect code every time and they’re not that good. The other case though is, man, we can’t ever use this stuff. I mean, again, if you’ve got a customer who’s paying you extra to never do it, that’s wonderful. Do what the customer asks and is willing to pay for it. For most of us, that’s not a reasonable industry position. What we’re going to need to do instead is learn together as an industry how to use this well. The good news is that although we will all be learning together, there’s some things already known now. So let’s run to the researchers, find out what they’ve learned, go to the practitioners, basically find what has been learned so far, start with that. And then we can build on and improve and go and other things. You don’t expect too much, don’t expect too little.

Yesenia (15:28)
Yeah, the five coding is an interesting one, because sometimes it spews out like correct code. But as somebody who’s written code and reviewed code and like done all this with the supply chain, I’m like. It’s like that extra work you gotta kind of add to it to make sure that you’re validating your testing it and it hasn’t just accidentally thrown in some security vulnerability in in that work. And I think that was. Go ahead.

David A. Wheeler (15:51)
What I can interrupt you, one of the studies that we cited, they basically created a whole bunch of functions that could be written either insecurely or securely as a test. Did this whole bunch of times. And they found that 45 % of the time using today’s current tooling, they chose the insecure approach. And there’s reason for this. ML systems are finally based on their training sets.

They’ve been trained on lots of insecure programs. What did you expect to get? You know, so this is actually going to be a challenge because when you’re trying to fight what the ML systems are training on, that is harder than going with the flow. That doesn’t mean it can’t be done, but it does require extra effort.

Yesenia (16:41)
We’re going extra left at that point. All right, so you your one and I gave you, know, one more because we started at zero. Any other misconception that is being bumped at the course.

David A. Wheeler (16:57)
Um, I guess the, uh, I guess the misconception sort is nothing can be done. And, uh, of course the whole course is a, uh, a, a, a stated disagreement with examples, uh, because in fact, there are things we can do right now. Now I would actually concede if somebody said, Hey, we don’t know everything. Well, sure. Uh, you know, I think all of us are in a life journey and we’re all learning things as we go. Uh, but that doesn’t mean that we have to, um, you know, just accept that nothing can be done. That’s a fatalistic approach that I think serves no one. There are things we can do. There are things that are known, though maybe not by you, but that’s okay. That’s what a course is for. We’ve worked to boil down, try to identify what is known, and with relatively little time, you’ll be far more prepared than you would be otherwise.

Yesenia (17:49)
It is a good course and I the course is aimed for developers, software engineers, open source contributors. So how does it connect to real world open source work like those that are working on closed source versus that open source software?

David A. Wheeler (18:04)
Well, okay, I should first quickly note that I work for the Open Source Security Foundation, Open Source is in the name, so we’re very interested in improving the security of open source software. That is our fundamental focus. That said, sometimes the materials that we create are not really unique to open source software. Where it can be applied by closed source software, we try to make that clear. Sometimes we don’t make it clear as we should, but we’re working on that.

Um, and frankly, in many cases, I think there’s also worth noting that, um, if you’re developing closed source software, the vast majority of the components you’re using are open source software. I mean, the average is 70 to 90 % of the software in a closed source system software system is actually open source software components. Uh, simple because it doesn’t make sense to rebuild everything from scratch today. That’s not a, an M E economically viable option for most folks. So.

in this particular case for the AI, it is applicable equally to open source and closed source. It applies to everybody. and this is actually true also for our LFD 121 course on how to develop secure software. And when you think about it, it makes sense. The attackers don’t care what your license is. just, you know, they just don’t care. They’re going to try to do bad things to you regardless of, of the licensing.

And so while certainly a number of things that we develop like, you know, the best practices badge are very focused on open source software, you know, other things like baseline, other things like, for example, this course on LFT 121, the general how to develop secure software course, they’re absolutely for open source and closed source. Because again, the attackers don’t care.

Yesenia (19:53)
Yeah, they just they just don’t they’re they’re actually just trying to go around all this like they’re trying to make sure they learn it so that they know what to do. Unfortunately, that’s the case. And this course you said it offers a digital badge. Why is this important for developers and employers?

David A. Wheeler (20:11)
Well, I think the short answer is that anybody can say, yeah, I learned something. But it’s, think, for, I guess I should start with the employers because that’s the easier one to answer. Employers like to see that people know things and having a digital badge is a super easy way for an employer to, to make sure that, yeah, they actually learn at least the basics of, you know, that topic. you know, certainly it’s the same for you know, university degrees and other things. You when you’re an employer, you want the, it’s very, important that people who are working for you actually know something that’s critically important. And while a degree or digital badge doesn’t guarantee it, it at least gives that additional evidence. For people, mean, obviously if you want, are trying to get employed by someone, it’s always nice to be able to prove that. But I think it’s also a way to both show you know something to others and frankly encourage others to learn this stuff. We have a situation now where way too many people don’t know how to do some what to me are pretty basic stuff. You know I’ll point back to the LFD 121 course which is how to develop secure software. Most colleges, most universities that’s not a required part of the degree. I think it should be.

David A. Wheeler (21:35)
But since it isn’t, it’s really, really helpful for everybody to know, wait, this person coming in, do they not, they’ve got this digital badge. That gives me much higher confidence going in as somebody I’m working with and that sort of thing, as well as just encouraging others to say, hey, look, I carried enough to take the time to learn this, you can too. And both LFD 121 and this new AI course are free, so that in there online so you can take it at your pace. Those roadblocks do not exist. We’re trying to get the word out because this is important.

Yesenia (22:16)
Yeah, I love that these courses are more accessible and how you touched on the students, like students applying for universities that might be more highly competitive. They’re like, hey, look, I’m taking this extra path to learn and take these courses. Here’s kind of like the proof. And it’s like Pokemon. It’s good to collect them all, know, between the badges and the certifications and the degrees.

I definitely that’s the security professional’s journeys. Collect them all at this point with the credibility and benefits.

David A. Wheeler (22:46)
Well, indeed, course, the real goal, of course, is to learn, not the badges. But I think that badges are frankly, you know, you collecting the gold star, there is nothing more human and nothing more okay than saying, Hey, I, you know, I got to I got a gold star for if you’re doing something that’s good. Yes, revel in that. Enjoy it. It’s fun.

David A. Wheeler (23:11)
And I don’t think that these are impossible courses by any means. And unlike some other things which are, know, I’m not against playing games, playing games is fun, but this is a little thing that’s both can be interesting and is going to be important long-term to not only yourself, but every who uses the code you make. Cause that’s gonna be all of us are the users of the software that all developers as a community make.

Yesenia (23:42)
Yeah, there’s a wide range impact from this, not just like even if you don’t create software, just understanding and learning about this, you’re a user to understanding that basic understanding of it. So I want to transition a little bit to the course because I know we’re spending the whole time about it. Let’s say I’m a friendly person. I signed up for this free LFEL 1012. Can you walk me through the course structure? Like what am I expected to take away from the course in that time period?

David A. Wheeler (24:09)
Okay, yeah, so let’s see here. So I think what I should do is kind of first walk through the outline. Basically, I mean, the first two parts unsurprisingly are introduction and some context. And then we jump immediately into key AI concepts for secure development. We do not assume that someone taking this course is already an expert in AI. I mean, if you are, that’s great. It doesn’t take, we’re not spending a lot of time on it, but we wanna make sure that you understand the basics, the key terms that matter for software development. And then we drill right into the security risks of using AI assistance. I want to make it clear, we’re not saying you can’t use just because something has a risk, everything has risks, okay? But understanding what the potential issues are is important because then you can start addressing them. And then we go through what I would call kind of the meat of the course, best practices for secure assistant use.

You know, how do you reduce the risk that the assistant itself doesn’t become subverted, it starts working against you, things like that. Writing more secure code with AI, if you just say, write some code, a lot of it’s gonna be insecure. There are ways to deal with that, but it’s not so simple or straightforward. For example, it’s pretty common to tell AI systems that, hey, I’m an expert in this topic and suddenly it gets better. That trick doesn’t work.

No, you may laugh, but honestly, that trick works in a lot of situations, but it doesn’t work here. And we’ve actually researched showing it doesn’t work. So there are things that work, but it’s more it’s, it’s more than that. And finally, reviewing code changes in a world with AI. Now, of course, involves reviewing proposed changes from others. And in some cases, trying to deal with the potential DDoS attacks as people start creating far more code than anybody can reasonably review. Okay. We’re have to deal with this. and frankly, biggest problem, frankly, the folks who are vibe coding, you know, they, they run some program. It tells them 10 things. I’ll just dump all 10 things at them. And no, that’s a terrible idea. you know, and the, the curl folks, for example, have had an interesting point where.

They complained bitterly about some inputs from AI systems, which were absolute garbage, wasted their time. And they’ve praised other AI submissions because somebody took the time to make sure that they were actually helpful and correct and so on. And then that’s fantastic. you know, basically you need to push back on the junk and then find and then welcome the good stuff. And then, of course, a little conclusion wrap up kind of thing.

Yesenia (27:01)
I love it. it was a good outline. was not seeing it. Is this like videos accomplished with it or is this just like a module click through?

David A. Wheeler (27:10)
Well, basically we, we group them into little chapters. forgot what their official term is. It’s chapters, section modules. I don’t remember what the right term is. I guess I should, but basically after you go that, then there’s a couple of quiz questions and then a little videos. Basically the idea is that we want people to get it quickly, but you know, if it’s just watch a video for an hour, people fall asleep, don’t remember anything. That’s the goal is to learn, not just, you know, sleep through a video.

David A. Wheeler (27:39)
So little snippets, little quiz questions, and at the end there’s a little final exam. And if you get your answers right, you get your badge. So it’s not that terribly hard. We estimate, it varies, but we estimate about an hour for people. So it’s not a massive time commitment. Do it on lunch break or something. I think this is going to be, as I said, I think this is going to be time well spent.

David A. Wheeler (28:07)
This is the world that we are all moving to, or frankly, have already arrived at.

Yesenia (28:12)
Yeah, I’m already here. think I said it’s a seedling. It’s about to grow into that big tree. Any last minute thoughts, takeaways that you want to share about the course, your experience, open source, supply chain security, all of the above.

David A. Wheeler (28:27)
My goodness. I’ll think of 20 things after we’re done with this, of course. well, no, problem is I’ll think about them later in French. believe it’s called the wisdom of the stairs. It’s as you leave the party, you come up with the point you should have made. so I guess I’ll just say that, you know, if you develop software, whether you’re not, whether you realize or not, it’s highly likely that the work that you do will influence many, many.

Yesenia (28:31)
You only get zero and one.

David A. Wheeler (28:54)
About many, many people, many more than you probably realize. So I think it’s important for all software developers to learn how to develop software, secure software in general, because whether or not you know how to do it, the attackers know how to attack it and they will attack it. So it’s important to know that in general, since we are moving and essentially have already arrived at the world of AI and software development, it’s important to learn the basics and yes.

Do keep learning. Well, all of us are going to keep learning throughout our lives. As long as we’re in this field, that’s not a bad thing. I think it’s an awesome thing. I wake up happy that I get to learn new stuff. But that means you actually have to go and learn the new stuff. And the underlying technology remark is, it’s actually remarkably stable in many things. This is a case though, where, yes, a lot of things change in the detail, but the fundamentals don’t. But this is something where, yeah, actually there is something fundamental that is changing. One time we didn’t use AI often to help us develop software. Now we do. So how do we do that wisely? And there’s a long list of specifics. The course goes through it. I’ll give a specific example so it’s not just this highfalutin, high level stuff. So for example,

Pretty much all these systems are based very much on LLMs, which is great. LLMs have some amazing stuff, but they also have some weaknesses. One is in particular, they are incredibly gullible. If they are told something, they will believe it. And if you tell them to read a document that gives them instructions on some tech, and the document includes malicious instructions, that’s what it’s going to do because it heard the malicious instructions.

David A. Wheeler (30:48)
Now that doesn’t mean you can’t use these technologies. I think that’s a road too far for most folks. But it does mean that there’s new risks that we never had to deal with before. And so there’s new techniques that we’re going to need to apply to do it well. And I don’t think they’re unreasonable. They’re just, you know, we now have a new situation and we’re have to make some changes because of new situation.

Yesenia (31:11)
Yeah, it’s like you mentioned earlier, like you can ask it to be an expert in something and then it’s like, oh, I’m an expert. That’s what I laughing. I was like, yeah, I use that a lot. I’m like, the prompt is you’re an expert in sales. You’re an expert in brand. You’re an expert in this. And it’s like, OK, once it gets in.

David A. Wheeler (31:25)
But the thing is, that really does work in some fields, remarkably. And of course, we can only speculate sometimes why LLMs do better in some areas than others. But I think in some areas, it’s quite easy to distinguish the work of experts from non-experts. And the work of experts is manifestly and obviously different. And at least so far, LLMs struggle to do that.

David A. Wheeler (31:54)
This differentiation in this particular domain. And we can speculate why but basically, the research says that doesn’t work. So don’t do that. Do there are other techniques that have far more success, do those instead. And I would say, hey, I’m sure we’ll learn more things, there’ll be more research, use those as we learn them. But that doesn’t mean that we get to excuse ourselves from ignoring the research we have now, even though we don’t know it.

David A. Wheeler (32:23)
We don’t know everything. We won’t know everything next year either. Find out what you need to know now and be prepared to learn more Seagull.

Yesenia (32:32)
It’s a journey. Always learning every year, every month, every day. It’s great. We’re going to transition into our rapid fire. All right, so I’m going to ask the question. You got to answer quiz and there’s no edit on this point. All right, favorite programming language to teach security with.

David A. Wheeler (32:56)
I don’t have a favorite language. It’s like asking what my children, know, which of my children are my favorite. I like lots of programming languages. That said, I often use Java, Python, C to teach different things more because they’re decent exemplars of those kinds of languages. But so there’s your answer.

Yesenia (33:19)
Those are good range because you have your memory based one, which is the see your Python, which more scripts in the Java, which is more object oriented. So you got a good diverse group.

David A. Wheeler (33:28)
Static type, right, you’ve got your static typing, you’ve got your scripting, you’ve got your lower level. But indeed, I love lots of different programming languages. I know over 100, I’m not exaggerating, I counted, there’s a list on my website. But that’s less impressive than you might think because after you’ve learned a couple, the others tend to not, they often are similar too. Yes, Haskell and Lisp are really different.

David A. Wheeler (33:55)
But most Burmese languages are not as different as you might think, especially after you’ve learned a few. So I hope can help.

Yesenia (34:01)
Yeah, the newer ones too are very similar in nature. Next question, Dungeon and Dragon or Lord of the Rings?

David A. Wheeler (34:11)
I love both. What are we doing? What are you doing to me? Yeah, so I play Dungeons and Dragons. I have watched, I’ve read the books and watched the movies many times. So yes.

Yesenia (34:24)
Yes, yes, yes. First open source project you ever contributed to.

David A. Wheeler (34:30)
Wow, that is too long ago. I don’t remember. Seriously, but it was before the term open source software was created because that was created much later. So it was called free software then. So I honestly don’t remember. I sure it was some small contribution to something somewhere like many folks do, but I’m sorry. It’s lost in the midst of times back in the eighties. Maybe. Yeah. The eighties somewhere, probably mid eighties.

Yesenia (35:00)
You’re going to go to sleep now. Like,

David A. Wheeler (35:01)
So, yeah, yeah, I’m sure somebody will do the research and tell me. thank you.

Yesenia (35:09)
There wasn’t GitHub, so you can’t go back to commits.

David A. Wheeler (35:11)
That’s right. That’s right. No, was long before get long before GitHub and so on. Yep. Carry on.

Yesenia (35:18)
When you’re writing code, coffee or tea?

David A. Wheeler (35:22)
Neither! Coke Zero is my preferred caffeine of choice.

Yesenia (35:26.769)
And this is not sponsored.

David A. Wheeler (35:28.984)
It is not sponsored. However, I have a whole lot of empty cans next to me.

Yesenia (35:35)
AI tool you find the most useful right now.

David A. Wheeler (35:39)
Ooh, that one’s hard. I actually use about seven or eight depending on what they’re good for. For actual code right now, I’m tending to use Claude Code. Claude Code’s really one of the best ones out there for code. And of course, five minutes later, it may change. GitHub’s not bad either. There’s some challenges I’ve had with them. They had some bugs earlier, which I suspect they fixed by now.

But in fact, I think this is an interesting thing. We’ve got a race going on between different big competitors, and this is in many ways good for all of us. The way you get good at anything is by competing with others. So I think that we’re seeing a lot of improvements because you’ve got competing. And it’s okay if the answer changes over time. That’s an awesome thing.

Yesenia (36:36)
That is awesome. That’s technology. And the last one, this is for chaos. GIF or GIF?

David A. Wheeler (36:42)
It’s GIF. Graphics. Graphics has a guck in it. And yes, I’m aware that the original perpetrator doesn’t pronounce it that way, but it’s still GIF. I did see a cartoon caption which said GIF or GIF. And of course I can hear it just reading it.

Yesenia (36:53)
There you have it.

Yesenia (37:05)
My notes is literally spelled the same.

David A. Wheeler (37:08)
Hahaha!

Yesenia (37:11)
All right, well there you have it folks, another rapid fire. David, thank you so much for your time today, for your impact contribution to open source in the past couple decades. Really appreciate your time and all the contributors that were part of this course. Check it out on the Linux Foundation website. then David, do you want to close it out with anything on how they can access the course?

David A. Wheeler (37:38)
Yeah, so basically the course is ecure AI/ML-Driven Software Development its numbers LFEL 1012 And I’m sure we’ll put a link in the show. No, I’m not gonna try to read out the URL But we’ll put a link in there to to get to it But please please take that course. We’ve got some other courses

Software development, you’re always learning and this is an easy way to get the information you most need.

Yesenia (38:14)
Thank you so much for your time today and those listening. We’ll catch you on the next episode.

David A. Wheeler (38:19)
Thank you.