What’s in the SOSS? Podcast #3 – Mark Russinovich and AI’s Impact on Software Engineering and Open Source Software Security

May 7, 2024 (updated August 5, 2024) | Podcast

Summary

In this episode, Omkhar talks to Mark Russinovich, CTO of Microsoft Azure. Mark oversees the technical strategy and architecture of Microsoft’s cloud computing platform. Mark is also on the Governing Board of the OpenSSF. He’s a widely recognized expert in distributed systems, operating system internals, and cybersecurity. Mark’s also the author of the Jeff Aiken cyberthriller novels Zero Day, Trojan Horse and Rogue Code, and co-author of the Microsoft Press Windows Internals books.

Conversation Highlights

  • 00:36 – Mark on his role at Azure
  • 01:30 – Where AI is headed and its impact on enterprises
  • 04:06 – The task of teaching a machine learning model to unlearn Harry Potter
  • 06:32 – The good and bad of AI hallucinations
  • 10:35 – The promise of more secure open source software via AI
  • 13:05 – Mark answers Omkhar’s “rapid-fire” questions: mild or spicy food; Vim, Emacs or VS Code; and tabs or spaces
  • 15:01 – Why aspiring software engineers should still learn to code

Transcript

Mark Russinovich soundbite (00:01)
I think we’re still a ways away from AI completely just taking over coding. Just like, you can have people that might be able to get away with never learning how to code, and just always prompting AI. When things go wrong, knowing what’s going on underneath will make you more effective than the person that doesn’t.

Omkhar Arasaratnam (00:18)
Hi everyone, and welcome to What’s in the SOSS? I’m your host Omkhar Arasaratnam, the general manager of the OpenSSF. Today we have a good friend of mine, Mr. Mark Russinovich, Azure CTO. What does it mean to be the Azure CTO? Let’s get into that.

Mark Russinovich (00:36)
What I tell people, the short version is, I lead technical strategy and architecture for the Azure platform. There’s a lot behind that, though. I work with engineering teams. I do work on architecture. I also, as part of it, focus on security, and I helped co-found the Open Source Security Foundation as part of looking at how we can improve our industry’s supply chain for open source.

Omkhar Arasaratnam (00:56)
And thank you for that. It is certainly a challenging mission we’re on. Now, you buried the lead a bit. You didn’t talk about the continued work that you’re doing on Sysinternals.

Mark Russinovich (01:04)
(Laughter) Yeah, I’m also known as Mr. Sysinternals. I still do occasional side work on Sysinternals. My favorite tool, by the way, if you haven’t seen it and you’re a Windows user, is ZoomIt, which lets you annotate the screen, and it’s great for demos and presentations.

Omkhar Arasaratnam (01:21)
And if I recall, for those whose eyesight has suffered over the years as mine has, it helps with that too.

Mark Russinovich (01:26)
Yeah, I use it frequently myself for that.

Omkhar Arasaratnam (01:28)
(Laughter) Absolutely. You know, other than the leadership that you’ve provided in security, one of the other areas that you’ve been focusing on in terms of the leading edge of our industry is in AI and machine learning. Generative AI, in particular, holds a lot of promise. Where do you see this heading?

Mark Russinovich (01:48)
First of all, it’s hard to predict where things are heading because the rise of generative AI and the capabilities that we see in it took just about everybody by surprise. And I think that there are probably more surprises in store for us. So there are going to be some discontinuities, but generally, the trajectory is that we’re going to have AI assistants that are our personal assistants, helping us in all aspects of our life. And then in the enterprise scenarios, we’ll have AI assistants that are automating a lot of the work that today humans are required to do and helping humans make decisions in all aspects of their work across enterprises.

Omkhar Arasaratnam (02:28)
On a personal level, if you don’t mind getting into it, how have you been using AI personally? How has that helped your day? What kind of toilsome tasks has AI been able to automate for you, and where are you seeing the limits? Like, where are we not quite there yet?

Mark Russinovich (02:42)
Well, there are basically three different ways that I use AI. One of them is if there’s a topic that I’m not that familiar with and I want to know more, rather than going to a web search for it, I just go and ask an AI assistant to teach me about it. And the nice thing is I can tell it, hey, teach me as if I’m a high school student, or teach me as if I’m an expert in this but have these other holes in my knowledge. And it crafts an appropriate response at the right altitude.

The other way is summarization. So, looking at lots of papers, saying to the AI, summarize this paper for me. That gives me a high-level view of what’s in it, and then I can go dive into specific sections that I want to learn more about. And then the final way is programming. Both for Sysinternals and the AI programming work that I do on the side, I use GitHub Copilot, and it has transformed the way that I code. It’s turned me into an expert at things that I don’t really know much about, like Python and PyTorch, and changed the way that I approach programming to the point that I really don’t want to have to type any code anymore. I want to just tell AI to type it for me.

Omkhar Arasaratnam (03:51)
Now, we were chatting a little while ago. You’d actually taken a sabbatical last year, and while you had a lot of wonderful quality time with the family, you also used some of that time to start picking up on generative AI. What kind of projects did you get into during that time?

Mark Russinovich (04:06)
So I wanted to get my hands dirty during the sabbatical, where I had more time, exploring something that was novel and where I’d learn a lot in the process. And so one of the things that I recognized from the early days of looking at the rise of generative AI and the cost of these large models — where they can cost millions to tens of millions of dollars or more to train — is the issue of problematic training data that you only discover after you finish training the model.

For example, copyrighted information, data covered by GDPR, poisoned data: you want to have a version of the model that reflects not having been trained on that, without spending, again, the millions of dollars to retrain it from scratch. And so I thought unlearning would be a fantastic tool for these kinds of scenarios. So Ronen Eldan, another researcher at Microsoft, and I decided to see if we could get a large language model, specifically Llama 7B from Meta, to forget Harry Potter, because these models know Harry Potter really deeply. That was the kind of summer project, and we succeeded in getting Llama 7B to forget Harry Potter. So when you ask it to complete a sentence like, “Harry went back to school that fall and saw his friends…” the pre-trained model would say, “Of course, Ron and Hermione,” even though there was no other Hogwarts reference there, just the indirect reference to school and the name Harry. That’s how deeply these models are trained on Harry Potter content. Now the version that we made, where it forgets the Harry Potter universe, will say something like, “Went back to school to see his friends Sarah and Joe and take a class from their favorite professor,” with generic names instead.

Omkhar Arasaratnam (05:48)
That’s really cool. That’s incredibly interesting. And I think it addresses some of the concerns I’ve heard, especially around adversarial use of improperly trained models and things of that nature. One of the things that I have a personal concern about — and it’s probably just being keenly aware of my own limitations — is that you’d mentioned one of the use cases you use AI for today is quickly coming up to speed on a subject that you’re unaware of.

What do you think about this notion of hallucinations and the possibility that the AI may not give you the most accurate information today? And I recognize it’s a point-in-time statement. How do you get by that?

Mark Russinovich (06:32)
It seems like you might have read my mind, or maybe we had a conversation about this that I don’t recall. Hallucination, I think, is actually the biggest challenge for use in high-impact scenarios, and many enterprise scenarios where you’d want to apply AI are high impact; if the model hallucinates something, you could have a big problem. Let me just also say that hallucination actually has positive attributes to it.

So when you want to be creative, when you want to write a document that actually flows nicely, hallucination, which is just the model going off its training distribution a bit, helps it be more creative and easier to read. Now, in the kind of enterprise scenario where you’re automating a workflow, a hallucination can cause a problem. And especially when you get into multi-agent systems, that can really pose a problem, where agents might have a dozen interactions, and somewhere along the line one of the agents hallucinates something, and the workflow continues with the others not being aware of what happened, making decisions and continuing orchestration based on the incorrect assumptions in that hallucination.

By the end of the workflow, you’ve got an output that is completely wrong, but you can’t tell why it’s wrong or where it went wrong. So that’s an example of where I think taming hallucination is key. And there are a few approaches to taming hallucination just by leveraging the existing capabilities of the models: grounding it with RAG content, meta prompts that tell it to check its work, having another model or itself go back and review its work with a separate prompt.
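
As an illustration of the grounding and “check its work” ideas Mark mentions, here is a minimal sketch in Python. The complete() helper is a hypothetical placeholder for whatever LLM client you use; nothing in this snippet comes from the episode itself.

```python
# Hypothetical sketch: ground an answer in retrieved context, then have the
# model (or a second model) review its own work before the answer is used.
# complete() is a placeholder, not a real API referenced in this episode.
def complete(prompt: str) -> str:
    raise NotImplementedError("wire this up to your LLM client of choice")

def answer_with_self_check(question: str, context: str) -> str:
    # Ground the model with retrieved context (the RAG idea).
    draft = complete(
        "Using only the context below, answer the question.\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    # Ask the model to review its own draft with a separate prompt.
    review = complete(
        f"Context:\n{context}\n\nAnswer:\n{draft}\n\n"
        "Does the answer contain any claim not supported by the context? "
        "Reply SUPPORTED or UNSUPPORTED with a short reason."
    )
    if review.strip().upper().startswith("UNSUPPORTED"):
        return "Needs human review: " + review
    return draft
```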

But I think we need other techniques, too, because even while that drives down hallucinations, you’ll still see hallucination rates between 5% and 20% or 30%, depending on what the model is being asked to do and whether it’s off its training distribution. So I think there’s a need for AI models that detect hallucinations, that correct hallucinations, and then even just kind of old-fashioned validation of what the model is producing. And code is a great example of this, where you could see the models generating code.

Now, a lot of times it’s going to be correct. A lot of times, though, it’s got bugs in it, like referencing packages that don’t exist, because it’s like, oh, there should be a package named this that does this. And so it’ll put it in. And it actually doesn’t exist. And so your code doesn’t even compile or run. So a simple validation is just compile it or run it. Are there any errors? A more sophisticated validation is don’t just compile and run it, but check to see if it actually produces output.

Actually, the first level of validation is just to look at it and see if there are any problems. The second would be to run it and see if it actually runs. And the third would be to create unit tests for it and then validate against those unit tests. So I think there’s this need for domain-specific validators, with degrees of validation based on how much cost you want to spend on validation relative to the impact of a hallucination.
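
To make those levels concrete, here is a minimal Python sketch of the kind of tiered validation Mark describes for AI-generated code. The generated snippet, the temp-file handling, and the final assertion are illustrative assumptions rather than anything from the episode.

```python
# Minimal sketch of tiered validation of AI-generated Python code:
# level 1 compiles it, level 2 runs it, level 3 checks behavior with a test.
import subprocess
import sys
import tempfile
import textwrap

# Stand-in for code a model produced (illustrative only).
generated_code = textwrap.dedent("""
    def add(a, b):
        return a + b
""")

# Level 1: does it even parse/compile?
try:
    compile(generated_code, "<generated>", "exec")
except SyntaxError as err:
    sys.exit(f"Level 1 failed: syntax error: {err}")

# Level 2: does it run at all (e.g., no imports of packages that don't exist)?
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write(generated_code)
    path = f.name
run = subprocess.run([sys.executable, path], capture_output=True, text=True)
if run.returncode != 0:
    sys.exit(f"Level 2 failed:\n{run.stderr}")

# Level 3: does it satisfy a unit test written against the intended behavior?
namespace: dict = {}
exec(compile(generated_code, "<generated>", "exec"), namespace)
assert namespace["add"](2, 3) == 5, "Level 3 failed: wrong behavior"
print("All validation levels passed.")
```

How far down this ladder you go is the cost trade-off Mark mentions: compiling is cheap, running is a bit more expensive, and maintaining unit tests costs the most but catches the most.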

Omkhar Arasaratnam (09:35)
Yeah, that makes a lot of sense. I think we’ve all heard about the Canadian airline a few months ago whose AI chatbot made a particular statement about a ticket somebody had purchased for bereavement travel. It had given them incorrect information, and the Canadian court system ended up finding in favor of the passenger. I mean, it was the airline’s chatbot, which they took as gospel.

It’s very interesting that you brought up the notion of different regression tests or unit tests that we could use when writing software. Turning now to how we might apply AI to a challenge that you and I both face on a daily basis: what are your thoughts about AI helping to secure open source software, whether it be challenges like the DARPA AI Cyber Challenge that we’re helping out on, or more generally?

Mark Russinovich (10:35)
So AI is going to do a tremendous amount over the next few years for open source software. And there are a few things that you can see right away that it can do, like code reviews: look for bugs. Already it’s good enough to detect certain kinds of bugs just by that, but we can continue to fine-tune it to learn how to better spot security vulnerabilities in software just through code reviews. The other one is through this kind of validation that I talked about.

And people have already started to explore this, like fuzz testing based on an AI-driven fuzzer that is more sophisticated about looking for problems by inspecting the code and then deciding the best way to fuzz it. So it’s kind of combining the human type of reasoning with automated fuzzing.
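
Here is a rough sketch of what that combination might look like in Python: a model reads the source and proposes inputs likely to expose edge cases, and a harness hammers the target with them. The parse_port target and the hard-coded candidate inputs are hypothetical stand-ins for what an LLM would actually propose.

```python
# Hypothetical sketch: combine "human-style" reasoning about the code with
# an automated fuzzing loop. propose_inputs() is a placeholder for an LLM
# call; the target function is an illustrative assumption.
import inspect
import json

def parse_port(value: str) -> int:
    """Illustrative target: parse a TCP port number from a string."""
    port = int(value)
    if not 0 < port < 65536:
        raise ValueError("port out of range")
    return port

def propose_inputs(source: str) -> list[str]:
    # In practice this would prompt a model with the source code and ask for
    # inputs likely to hit edge cases; hard-coded here as a stand-in.
    return ["80", "-1", "65536", "0x1F90", " 443 ", "", "None", "99999999999"]

for candidate in propose_inputs(inspect.getsource(parse_port)):
    try:
        parse_port(candidate)
    except ValueError:
        pass  # expected rejection of bad input
    except Exception as err:  # unexpected crash class, worth investigating
        print(f"suspicious input {json.dumps(candidate)}: "
              f"{type(err).__name__}: {err}")
```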

The other one is helping to generate the code, like we talked about. But I think one thing that is going to be a great boon, and can already be done today, is documenting code. There are tremendous amounts of code out there in Linux and elsewhere that have no documentation; the code is the documentation. So somebody that is new to the code comes and says, “I want to contribute to this, but I have no idea what’s going on. It takes me a long time to come up to speed and learn the code.”

An AI model can inspect the code and generate comments, in the header for a function or inline, to describe exactly what’s going on. And while that’s not rocket science, it can save tons of hours of work for somebody coming up to speed with a code base. Not just that, but you can have an AI chatbot that sees the code, and you can ask it questions about the code to learn how it works more quickly. I think that one is a very near, here-and-now capability that AI offers to help security and open source contributions.
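
A minimal sketch of that documentation workflow might look like the following, assuming a hypothetical complete() placeholder for an LLM client and a made-up legacy_checksum function standing in for undocumented code.

```python
# Hypothetical sketch: ask a model to document an uncommented function.
# complete() is a placeholder for whatever LLM client you use; the output
# should be reviewed by a maintainer before being committed.
import inspect
from typing import Callable

def complete(prompt: str) -> str:
    raise NotImplementedError("wire this up to your LLM client of choice")

def legacy_checksum(data: bytes) -> int:
    # Made-up example of code whose only documentation is the code itself.
    total = 0
    for i, b in enumerate(data):
        total = (total + b * (i + 1)) % 65521
    return total

def document_function(fn: Callable) -> str:
    """Return the function's source annotated with a docstring and comments."""
    source = inspect.getsource(fn)
    prompt = (
        "Add a docstring and inline comments explaining what this function "
        "does, without changing its behavior:\n\n" + source
    )
    return complete(prompt)  # review the result before committing it
```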

Omkhar Arasaratnam (12:24)
I think that makes a lot of sense. In fact, as we think about things like self-documenting code where previously the documentation was the code, I think use cases like that, to the point that you made earlier, also provide almost a semi-automatic method, right?

So even if the quality isn’t quite there today, and even if there’s a slight hallucination or imagination in terms of the LLM’s inference of what the code’s supposed to do, presumably you have enough familiarity with the syntax of the language that you would be able to judge the correctness of that interpretation. But even still, the error rate is probably lower than if you had to manually grok through all the code yourself. I see that.

I think we’re gonna move into the rapid-fire section now, Mark. And with any of these questions, I’m gonna give you an either-or, but the reality is there could be a third answer, which is, “No, Omkhar, I actually feel this way.” So I think the first one is quite binary, but: spicy or mild food?

Mark Russinovich (13:23)
Spicy.

Omkhar Arasaratnam (13:25)
All right. You know, some of our other guests have been leaning mild, and we have a dinner coming up and I’m going to find someplace with spicy food for you. I think I know the answer to this, but I’m going to ask it anyway. Vim, Emacs or VS Code?

Mark Russinovich (13:39)
VS Code. But I think the true question here, the pure question, is Emacs or Vim or vi?

Omkhar Arasaratnam (13:48)
You know, I’d like to say that, but I’ve been messing around with VS Code lately, and it’s not just because you’re on this recording, Mark. I’m starting to dig VS Code.

Mark Russinovich (13:57)
Now, I love VS Code; that’s what I use. But prior to VS Code, when there was very much the Vim camp and the Emacs camp, I was strongly in the Emacs camp. In fact, I used Emacs until the late 90s, before I started to use Microsoft Visual C and its own editor. Like, I just don’t understand the vi people at all; I’m just baffled.

Omkhar Arasaratnam (14:18)
Well, I’m definitely a VI guy and you know, Emacs is a great operating system that happens to have an editor attached to it. (Laughter) Last one for the rapid –

Mark Russinovich (14:31)
I still have trouble quitting vi.

Omkhar Arasaratnam (14:33)
(Laughter) I presume you can get Emacs key bindings for VS Code.

Mark Russinovich (14:38)
You know what? I grew out of Emacs key bindings. But speaking of Emacs key bindings, by the way, I just don’t know why Emacs key bindings aren’t the default for shells either. The key bindings that shells come with by default are just ridiculous.

Omkhar Arasaratnam (14:51)
They are. They are. Tabs or spaces?

Mark Russinovich (14:54)
I don’t really care that much. That’s one where I’m like, whatever, as long as it formats correctly visually.

Omkhar Arasaratnam (15:01)
Makes sense, as long as it’s consistent, I guess. So to close us out, Mark, what advice do you have for somebody entering our field today? We’ve both been in the field for quite a while, and we’ve seen a lot of stuff; a lot of stuff has changed. With that wealth of knowledge, how would you guide somebody that’s entering our field today?

Mark Russinovich (15:18)
Entering our field, meaning software engineering. I guess the elephant-in-the-room aspect of that question is: should they learn to code or not? Is that what’s going to happen? Do they need to? And I would actually say, yes, go ahead and learn how to code. And I’d say that for a couple of reasons. One, it’s a way to give you critical thinking about an end-to-end process, from the high-level objective down to how you actually implement it.

And that translates to other domains as well. And even if AI is going to be doing some of the low-level lifting, you still need to have the high-level sense of how things fit together and flow, which you’re going to get as you learn to code. The other reason is that I think we’re still a ways away from AI completely just taking over coding. And just like you can have people that might be able to get away with never learning how to code and just always prompting AI, when things go wrong, knowing what’s going on underneath will make you ten times more effective than the person that doesn’t. So for that reason, but certainly the first one that I mentioned, I would say you’re not wasting your time by learning how to code.

Omkhar Arasaratnam (16:31)
I think that’s great advice for people entering software engineering today. Last question for you. What’s your call to action for our listeners? What would you have them do immediately following this show?

Mark Russinovich (16:41)
Go check out all the learning materials that the Open Source Security Foundation offers and learn how to secure your open source supply chains.

Omkhar Arasaratnam (16:48)
Thanks very much, Mark. I can’t thank you enough for being a guest on our show. I look forward to catching up with you shortly, and thank you for joining What’s in the SOSS?

Mark Russinovich (17:00)
Yeah, thanks for having me. It was a great conversation.

Announcer (17:02)
Thank you for listening to What’s in the SOSS? An OpenSSF podcast. Be sure to subscribe to our series of conversations on Spotify, Apple, Amazon or wherever you get your podcasts. And to keep up to date on the Open Source Security Foundation community, join us online at OpenSSF.org/getinvolved. We’ll talk to you next time on What’s in the SOSS?