
What’s in the SOSS? Podcast #21 – Alpha-Omega’s Michael Winser and Catalyzing Sustainable Improvements in Open Source Security

By Podcast

Summary

In this episode, CRob talks to Michael Winser, Technical Strategist for Alpha-Omega, an associated project of the OpenSSF that works with open source software project maintainers to systematically find new, as-yet-undiscovered vulnerabilities in open source code – and get them fixed – to improve global software supply chain security.

Conversation Highlights

  • 01:00 – Michael shares his origin story into open source
  • 02:09 – How Alpha-Omega came to be
  • 03:48 – Alpha-Omega’s mission is catalyzing sustainable security improvements
  • 05:16 – The four types of investments Alpha-Omega makes to catalyze change
  • 11:33 – Michael expands on his “clean the beach” approach to impacting open source security
  • 16:41 – The 3F framework helps manage upstream dependencies effectively
  • 21:13 – Michael answers CRob’s rapid-fire questions
  • 23:06 – Michael’s advice to aspiring development and cybersecurity professionals
  • 24:44 – Michael’s call to action for listeners

Transcript

Michael Winser soundbite (00:01)
When some nice, well-meaning person shows up from a project that you can trust, it becomes a more interesting conversation. With that mindset, fascinating things happen. And if you imagine that playing itself out again and again and again, it becomes cultural.

CRob (00:18)
Hello, everybody, I’m CRob. I do security stuff on the internet. I’m also a community leader and the chief architect for the Open Source Security Foundation. One of the coolest things I get to do with the foundation is to host the OpenSSF’s “What’s in the SOSS?” podcast. In the podcast, we talk to leaders, maintainers and interesting people within the open source security ecosystem. This week we have a real treat. We’re talking with my pal, Michael Winser, AKA “one of the Michaels” from the Alpha-Omega project. Michael, welcome sir.

Michael Winser (00:52)
It’s great to be with you, CRob.

CRob (00:53)
So for those of us that may not be aware of you, sir, could you maybe give us your open source origin story?

Michael Winser (01:00)
I have to think about that because there’s so many different sort of forays, but I think that the origin-origin story is in 1985, I was at my first job. You know, I got the Minix book and it came with floppy disks of source code to an entire operating system and all the tools. And I’m like, wait, I get to do this? And I started compiling stuff and then I started porting it to different things and using the code and then just seeing how it worked. That was like a life-changing sort of beginning.

And then I think of my time at Google working in open source. You know, Google has a tremendous history of open source and a community and culture of embracing it. The last part of my work at Google was on open source supply chain security for Google’s vast supply chain, both in terms of producing and consuming. And so that’s really been another phase of the journey for me.

CRob (01:53)
So I bet things have changed quite a lot since 1985. And that’s not quite the beginning of everything. But speaking about beginnings and endings, you’re one of the leaders of a project called Alpha-Omega. Could you maybe tell us a little bit about that and kind of what AO is trying to do?

Michael Winser (02:09)
Sure. So Alpha-Omega started out as, sort of, two almost distinct things. One was at that moment of crisis when OpenSSF was created, and various companies like Microsoft and Google were like, we’ve got to do something about this. And both Microsoft and Google, who sort of never let a good crisis go to waste, put a chunk of money aside to say, whatever we do, however we figure this stuff out, it’s going to take some money to fix things. Let’s put some money in and figure out what we’ll do with it later.

Separately, Michael Scovetta had been thinking about the problem and had written a paper titled, surprisingly enough, Alpha-Omega, thinking about how one might address it. The Alpha is sort of the most significant, most critical projects that we can imagine. And then the Omega is, what about all the hundreds of thousands of other projects?

And so the confluence of those two thoughts sat unrealized, unfulfilled, until I joined the GOSST team at Google and someone said, you should talk to this guy, Michael Scovetta. And that’s really how Alpha-Omega started: two guys named Michael sitting in a room talking about what we might do. And there’s been a lot of evolution of the thinking and how to do it and lessons learned. And that’s what we’re here to talk about today, I think.

CRob (03:31)
I remember that paper quite well in the beginnings of the foundation. Thinking more broadly, how does one try to solve a problem with the open source software supply chain? From an AO perspective, how do you approach this problem?

Michael Winser (03:48)
There are so many ways to approach this question, but I’m actually just going to start by summarizing our mission, because we spend a lot of time on it. As you know, I’m a bit of a zealot on mission, vision, strategy and roadmap thinking. And so our mission is to protect society by, critical word here, catalyzing sustainable security improvements to the most critical open source projects and ecosystems.

The words here are super important. Catalyzing. With whatever money we have on tap, it’s still completely inadequate to the scale of the problem we have, right? Like I jokingly like to sort of describe the software supply chain problem as sort of like Y2K, but without the same clarity of problem, solution or date.

It’s big, it’s deep, it’s poorly understood. So our goal is not to be sort of this magical, huge and permanent endowment to fix all the problems of open source. And we’ll talk more about how it’s not about just putting money in there, right? But to catalyze change, and to catalyze change towards sustainable security. And so the word sustainable shows up in all the conversations, and it is really two sides of the same coin. When we talk about security and sustainability, they’re almost really the same thing.

CRob (05:03)
You mentioned money is a potential solution sometimes, but maybe could you talk about some of the techniques to try to achieve some better security outcomes with the projects you’ve worked with?

Michael Winser (05:16)
Some of it was sort of historically tripping over things and trying them out, right? And I think that was a key thing for us. But rather than trying to tell the origin stories of all the different strategies that we’ve evolved, I’ll summarize where we’ve arrived. Alpha-Omega has now come to mean not the most critical projects and then the rest, but the highest points of leverage and then scalable solutions. And so in those two words, Alpha effectively means leverage and Omega means scale. And in that context, we’ve developed essentially a four-pronged strategy, four kinds of investment that we make to create change, to catalyze change.

And in no particular order, Category A is essentially staffing engagements at organizations that are able to apply that kind of leverage, where adding somebody whose job it is to worry about security can have that kind of impact. It’s kind of crazy how, when you make it someone’s job to worry about security, you elevate something from a tragedy of the commons, where it’s everybody’s job and nobody’s job, nobody’s the expert, nobody can decide anything, to something where, well, that person said we should do X, I guess we’re gonna do X, whatever X was.

And then having somebody whose job it is to say, okay, with all the thousands of security problems we have, we’re gonna tackle these two this year, and developing that kind of theme, and then working with all those humans to create a culture around that change. Again, if it’s someone’s job to do so, it’s more likely to happen than if it’s nobody’s job to do it. So, Category A: staffing engagements at significant open source organizations that have both the resources to hire somebody and the leverage to have that person become effective. Right? And there’s a lot packaged up in that phrase, the resources to hire someone. Humans are humans. They want to have, you know, jobs, benefits, promotions, other crazy stuff, right? I’ve given up on all those things, but you know, that’s the world that people live in.

And we don’t want to be an employer, we want to be a catalyst, right? And so we’re not here to sort of create a giant organization of open source security. We’re here to cause other people to get there and ultimately to wean themselves from us and to have a sustainable model around that. And in fact, for those grants, we encourage, we discuss, we ask: how are you diversifying your funding so that you can start supporting this as a line item on your normal budget, as opposed to our sort of operational thing? And it’s a journey. So that’s category A.

Category B has some interesting overlap, but it really speaks to what I think of as the app stores of software development, the package ecosystems, right? There is no greater force on the internet than a developer working at some company with their boss breathing down their neck and a thing to get done. They’ve got to get tab A into slot B. They can’t figure it out. They Google something, and it says do not use this code, it is terrible, you should not use it, but if you were to use it, npm install foo will get you tab A into slot B, right?

At this point, you can’t give them enough warnings, it doesn’t matter, they’re under pressure, they need to get something done, they’re installing foo, right? How can we elevate these package ecosystems so that organizations, individuals, publishers, consumers can all have better metadata, better security practices, and trust the statements that these packages are making about themselves and that other entities are making about these packages to start making informed decisions or even policy-based decisions about what is allowed or not allowed into a developer environment in some organization, right?

And then that’s just a tiny part of it. So there’s the whole package app store concept where, essentially, I am installing this thing and I expect some truths: that the name is not a namesquatted thing, that the versions are reasonably accurate and not being changed underneath me. And there are a thousand little things that we just want to take for granted, even without worrying about somehow making it all be secure. It’s such a point of leverage and criticality that we find investing in it worthwhile. And so that’s a category for us.

Category C is actually where most of our conversations start. And perhaps I’m getting ahead of our conversation, but it’s audits. And we love paying for audits for an organization that essentially is ready to have an audit done. And there’s so much that gets wrapped up in: is an organization ready to have an audit? Do they want to have the audit? What are they going to do with the audit’s results? How do they handle it?

And so as an early engagement, it’s remarkably cost-effective to find out whether that organization is an entire giant, complicated ecosystem of thousands of projects and things like that, or three to five amazing hackers who work nights and weekends on some really important library. That audit tells everybody an awful lot about where that project is on their journey of security. And one of our underlying principles about doing grants is: make us want to do it again. And how an organization responds to that audit is a very key indicator of whether they’re ready for more, could they handle it, what are they going to do with it, et cetera.

And then category D, you could name it almost anything you want. This is us embracing the deep truth that we have no idea what we are doing. Collectively as an industry, nobody knows what we’re doing in this space. It’s hard, right? And so this is an area we think of as experimentation and innovation. And it’s a grab bag for things that we try to do. And one of our stakeholders pointed out that we weren’t failing often enough early in our life cycle. It was like, if you’re not trying enough things, you’re taking the easy bets and not learning important lessons. Okay, so we’re gonna screw some things up, get ready!

Again, it’s a journey of learning every step along the way. And it’s not like we’re recklessly throwing money around to see if you can just burn it into security. That doesn’t work; we tried. But we’re seeing what we can do, and those lessons are fun, too.

CRob (11:16)
Excellent. So in parallel with the four strategies you’ve described, you and I have talked a lot about your concept of “clean the beach.” Could you maybe talk a little bit more about your idea of cleaning the beach?

Michael Winser (11:33)
Absolutely. So one of our early engagements on the Omega side was to work with an organization called OpenRefactory that had developed some better-than-generic static analysis techniques. And I don’t even know exactly how it works; there are probably some humans in the loop to help manage the false positives and false negatives and things like that. They felt that they could scale this up to handle thousands of projects, to go off and scan the source code to look for vulnerabilities previously not found in that source code, and then also to generate patches, pull requests for fixes to those things as well.

And this is sort of the holy grail dream: we’ve got all this problem, if only we could just turn literally oil into energy into compute, into fixing all this stuff, right? And there are a lot of interesting things along the way there. So the first time, they went and did a scan of 3,000 projects, and came back and said, look at us, we scanned 3,000 projects, we found I don’t know how many vulnerabilities, we reported this many, this many were accepted. There’s a conversation there that we should get back to about the humans in the loop. And afterward, I’m like, okay, if I try to tell anybody about this work, I don’t know what difference it makes to anybody’s lives.

And I realized that it was the moral equivalent of taking a…some kind of boat, maybe a rowboat, maybe a giant barge, out to the Pacific garbage patch and picking up a lot of plastic and coming back and saying, look at all this plastic I brought back. And I’m like, that’s good, and maybe you’ve developed a technique for getting plastic at scale, but you’re orders of magnitude off. Literally there are gigatons, teratons of stuff out there, and you brought back a little bit. I need to be more short-term in terms of getting work and impact. And we care about continuous results and learnings, as opposed to, great, we found a way to turn the next trillion dollars into a lot of scans and things like that. And so we thought about this a lot.

And it was sort of around the same time as the somewhat terrifying XZ situation, right? And I realized that XZ showed us a lot about the frailty of projects because of the humanness of people involved. But it also showed us that, and I’m going to be kind of stern about this, open source projects that use upstream dependencies like XZ are treating those dependencies exactly the way that we complain about corporations using open source for free.

They assume that this source code is being taken care of by somebody else and that it comes down from the sky on a platter with unicorns and rainbows and whatever else. Like, how many people in these organizations that use XZ, whether they were for-profit entities or whatever, were paying attention upstream and saying, hey, I wonder if any of our upstream projects needs our help? I wonder if we should spend some more time working on our upstream. Said nobody ever.

And so coincidentally, we wanted to do some work with someone we met at PyCon, this gentleman named Jarek Potiuk who’s on the PMC for Apache Airflow. And he wanted us to talk about our security work at the Airflow conference. And I’m like, well, we’ve got to talk about something. And so we start talking about Airflow. And he was already down that road of looking at his dependencies and trying to analyze them a little bit. And we said, what can we do here?

And so, to bring this back to the Pacific garbage patch, right? We’d all love for the Pacific garbage patch to go away. But day to day, we go to the beach. And wouldn’t it be nice if we could talk about a section of the beach as being, not perfectly okay, but free of a common set of risks, right? So we thought, can we do that? And he’s like, well, I know exactly how many total dependencies Airflow has. It has 719 dependencies.

And we asked ourselves the question: has anybody ever done a complete audit across those dependencies? Where complete is across all 719, not a complete code analysis of every single piece of those projects. And the answer was no. And we said, well, we’re going to make the answer yes. And so we started a project to bring automatic scanning to that, so that OpenRefactory, instead of trying to scan 3,000 arbitrary projects or the top 3,000 dependencies, picked those 718 and scanned them. And Jarek and his team put together some scripts to go off and pull key facts about projects that can be used to assess risk on an ongoing basis, in terms of whether we need to get involved, or should we do something, or should we worry about this or not, right?

And it’s everything from understanding the governance to the size of the contribution pool to the project, to its vulnerability history, right? And just building up a picture where the goal is not to sort of audit the source code of each of these projects, because that’s actually not Airflow’s job, right? And they wouldn’t do a good job of it per se. But to understand across their dependencies where there is risk and where they might need to do something about it.

From that came another concept that I really like. Going back to: let’s not pretend that this code came down from the sky on a silver platter with unicorns. What are we supposed to do if we see risk in one of our upstream dependencies? And from that, the framework that came out was essentially the three F’s: you either need to fix, fork or forego those dependencies. There’s another way of saying forego, but we’ll stick with forego. There’s a fourth one, which is fund, and we’ll talk about why that is not actually something at the disposal of most projects.

The fix part is kind of interesting. The fork part is an expensive decision. It’s saying, you know, they’re not doing it, but we need this and we can’t get something else. We can’t forego it because it’s whatever. So I guess it’s ours now, right? And it’s taking responsibility for the code that you use, because every dependency you use, unless you’re using some very sophisticated sandboxing, has basically total access to your build environment and total access to your production environment. So it’s your code, it’s your responsibility.

So with that mindset, fascinating things happened. When an automated scan from OpenRefactory found a new vulnerability in one of the dependencies, they would report it through the project’s private vulnerability reporting, or we had some auditing that noticed that these people don’t have private vulnerability reporting.

And so one of the fixes was helping them turn on PVR, right? But let’s say they had PVR: they would file the vulnerability. And because it looked like it came from a machine, right? Unfortunately, open source maintainers have been overwhelmed by well-meaning people with bots and a desire to become security researchers, with a lot of, let’s just say, not the most important vulnerabilities on the planet.

And that’s a lot of noise for them to deal with. So some of these reports were getting ignored. But then an Apache Airflow maintainer would show up on the report and say, hey, my name is “Blah,” I’m from Apache, we depend upon you, would you be open to fixing this vulnerability, we would really greatly appreciate it. In other words, a human shows up and behaves like a human. You’d be amazed at what happened. People are like, my God, you know I exist? You’re from Apache Airflow, I’ve heard of you guys. How can I help? I’ll get right on it, right? The response changed dramatically. And that’s a key lesson, right?

And if I were to describe one of my goals for this sort of continued effort, it’s that within the Airflow community there’s an adopt-a-dependency mindset, where there’s at least one person for every dependency. And I mean transitively: it’s not just the top level, it’s the whole graph, because you can’t assume that your transitive dependencies are behaving the same way as you are. It’s easy when it’s not a crisis, but when it’s a crisis, right?

Having somebody you know talk to you about the situation and offer to help is very different than, oh my God, you’ve shown up on somebody’s radar as having a critical vulnerability and now everybody and their dog is asking you about it. Lawyer-grams are coming. We’ve seen that pattern, right? But then Jarek from Apache Airflow shows up and says, hey, Mary, sorry you’re under this stress. We’re actually keen to help you as well. You know, who’s going to say no to that kind of help when it’s somebody they already know? Whereas the XZ situation has effectively taught people to say, I don’t know you, why am I letting you into my project? How do I know you’re not some hacker from some bad actor, right?

That’s the mindset: let’s pick some beaches to focus on, understand the scope of that, and then take that 3F mindset, right? And so Airflow has changed their security roadmap for 2025, and it includes doing work with, on behalf of, and towards their dependencies. They’ve taken some dependencies out, so they’ve done the forego. And some of the things they’re asking dependencies to do is just turn on PVR, or maybe do some branch protection, some of the things that you might describe in the OpenSSF security baseline, right?

Things people don’t think they’re competent to do, or haven’t worried about yet, or whatever. But when some nice, well-meaning person shows up from a project that you can trust, it becomes a more interesting conversation. And if you imagine that playing itself out again and again and again, it becomes cultural.

CRob (21:01)
Yeah, that’s amazing. Thank you for sharing some amazing insights. Well, let’s move on to the rapid-fire section of the podcast! First hard question: spicy or mild food?

Michael Winser (21:13)
Oh, I think both. Like I don’t want to have spicy every single day, but I do enjoy a nice spicy pad Thai or something like that or whatever. I’m, you know, variety is the spice of life. So there you go.

CRob (21:25)
Excellent. Fair enough. Very contentious question: Vi or Emacs?

Michael Winser (21:32)
I confess to Vi as my default console editor. Back in that 1985 era, I did port Jove (Jonathan’s Own Version of Emacs), which is still alive today. I used that. And then, in my Microsoft days, I used a tool called Epsilon, an Emacs-derived editor for OS/2 and DOS. Its key bindings are all locked in my brain and worked really well. But then full-grown Emacs became available to me, and its key bindings were subtly different, and my brain skidded off the tracks. And then as I became a product manager, my needs became more casual, and Vi has become just convenient enough. I still use the Emacs key bindings on the macOS command line to move around.

CRob (22:19)
Oh, very nice. What’s your favorite adult beverage?

Michael Winser (22:23)
I think it’s beer. It really is.

CRob (22:25)
Beer’s great. A lot of variety, a lot of choices.

Michael Winser (22:28)
I think a good hefeweizen, a wheat beer, would be very nice.

CRob (22:33)
Okay, and our last most controversial question: tabs or spaces?

Michael Winser (22:39)
Oh, spaces. (Laughter) I’m not even like, like I am a pretty tolerant person, but there’s just no way it ends well with tabs.

CRob (22:50)
(Laughter) Fair enough, sir. Well, thank you for playing rapid fire. And as we close down, what advice do you have for someone that’s new or trying to get into this field today of development or cybersecurity?

Michael Winser (23:06)
The first piece of advice I would have is that it’s about human connections, right? So much of what we do is about transparency and trust. Transparency is about things that happen in the open, and trust is about behaving in ways that cause people to want to do things with you again, right? There’s a predictability to trust too, in terms of not doing randomly weird things and things like that.

And then there’s also, you know, trust is built through shared positive experiences, or non-fatal outcomes of challenges. So I think that for anybody wanting to get into this space: show up as a human being, be open about who you are and what you’re trying to do, get to know the people, and take that journey of humility of listening to people. You might think you know more than they do, and you might even be right, but it’s their work as well. So listen to them along the way. That’s personally one of my constant challenges. I’m an opinionated person with a lot of things to say. Really, it’s true.

That’s very generic guidance, but I think that if you want to just get started, it’s pretty easy. Pick something you know anything about, show up in some project, and listen, learn, ask questions, and then find some way to help. Taking notes in a working group meeting is a pretty powerful way to build trust: this person seems to take notes that accurately represent what we tried to say in this conversation, in fact better than what we said. We trust this person to represent our thoughts. That’s a pretty powerful first step.

CRob (24:32)
Excellent. I really appreciate you sharing that. And to close, what call to action do you have for our listeners? What would you like them to take away or do after they listen to this podcast?

Michael Winser (24:44)
I would like them to apply the 3F framework to their upstream dependencies. I would like them to look at their dependencies as if they were a giant pile of poorly understood risk and not just through the lens of how many vulnerabilities do I have unpatched in my current application because of some, you know SBOM analyzing tool telling me. But from a longer-term organizational and human risk perspective, go look at your dependencies and their dependencies and their dependencies and build up just a heat map of where you think you should go off and apply that 3F framework.

And if you truly feel like you can’t do any one of those things, right, because you’re not competent to go fix or fork and you have no choice but to use the thing so you can’t forego it, right, then think about funding somebody who can.

CRob (25:34)
Excellent words of wisdom. Michael, thank you for your time and all of your contributions through your history and now through the Alpha and Omega projects. So we really appreciate you stopping by today.

Michael Winser (25:45)
It was my pleasure, and thank you for having me. I’ve enjoyed this tremendously. It would be a foolish thing for me to let this conversation end without mentioning the three people at Alpha-Omega without whom we’d be nowhere: Bob Callaway, Michael Scovetta, and Henri Yandell. And then there’s a support crew of other people as well, without whom we wouldn’t get anything done, right?

I get to be, in many ways, the sort of first point of contact and the loud point of contact. We also have Mila from Amazon, and we have Michelle Martineau and Tracy Li, who are our LF people. And again, this is what makes it work for us, is that we can get things done. I get to be the sort of loud face of it, but there’s a really great team of people whose wisdom is critical to how we make decisions.

CRob (26:32)
That’s amazing. We have a community helping the community. Thank you.

Michael Winser (26:35)
Thank you.

Announcer (26:37)
Like what you’re hearing? Be sure to subscribe to What’s in the SOSS? on Spotify, Apple Podcasts, AntennaPod, Pocket Casts or wherever you get your podcasts. There’s lots going on with the OpenSSF and many ways to stay on top of it all! Check out the newsletter for open source news, upcoming events and other happenings. Go to OpenSSF dot org slash newsletter to subscribe. Connect with us on LinkedIn for the most up-to-date OpenSSF news and insight. And be a part of the OpenSSF community at OpenSSF dot org slash get involved. Thanks for listening, and we’ll talk to you next time on What’s in the SOSS?

What’s in the SOSS? Podcast #20 – Jack Cable of CISA and Zach Steindler of GitHub Dig Into Package Repository Security

By Podcast

Summary

CRob discusses package repository security with two people who know a lot about the topic. Zach Steindler is a principal engineer at GitHub, a member of the OpenSSF TAC, and co-chairs the OpenSSF Securing Software Repositories Working Group. Jack Cable is a senior technical advisor at CISA. Earlier this year, Zach and Jack published a helpful guide of best practices entitled “Principles for Package Repository Security.”

Conversation Highlights

  • 00:48 – Jack and Zach share their backgrounds
  • 02:59 – What package repositories are and why they’re important to open source users
  • 04:17 – The positive impact package security has on downstream users
  • 07:06 – Jack and Zach offer insight into the Principles for Package Repository Security
  • 11:18 – Future endeavors of the Securing Software Repositories Working Group
  • 17:32 – Jack and Zach answer CRob’s rapid-fire questions
  • 19:31 – Advice for those entering the industry
  • 21:28 – Jack and Zach share their calls to action

Transcript

Zach Steindler soundbite (00:01)
We absolutely are not looking to go in and say, OK all ecosystems must do X. But what we are is sort of this forum where these conversations can take place. People who operate these package repositories can say here’s what’s working for us, here’s what’s not working for us. Share those ideas, share those experiences and learn from each other.

CRob (00:17)
Hello everybody, I’m CRob. I do security stuff on the internet and I’m also a community leader within the OpenSSF. And one of the fun things I get to do is talk to amazing people that have input and are developing and working within upstream open source.

And today we have a real treat. I have two amazing people. I have Zach and Jack, and they’re here to talk to us a little bit about package repository security. So before we start, could I ask each of you to maybe give us a brief introduction?

Jack Cable (00:48)
Great. Thank you so much for having us on here, CRob. I am Jack Cable. I’m a senior technical advisor at CISA, where I help lead our agency’s work around open source software security and secure by design. For those unfamiliar, CISA, the Cybersecurity and Infrastructure Security Agency, is the nation’s cyber defense agency. So we help to protect both the federal civilian government and critical infrastructure, of which there are 16 sectors, ranging from water to power, financial services, healthcare, and so on. And probably as no surprise to anyone here, all of these sectors are heavily dependent on open source software, which is why we’re so eager about seeing how we can really be proactive in protecting the open source ecosystem.

I come from a background in security research, software development, spent some time doing bug bounty programs, finding vulnerabilities in companies. Gradually went over to the policy side of things, spent some time, for instance, in the Senate where I worked on legislation related to open source software security and then joined CISA about a year and a half ago.

CRob (02:04)
Awesome. Zach?

Zach Steindler (02:13)
Yeah, CRob, thanks so much for having us. My name is Zach Steindler. I’m a principal engineer at GitHub. I have a really amazing job that lets me work on open source supply chain security, both for GitHub’s enterprise customers, but also for the open source ecosystem. CRob, you and I are both on the OpenSSF TAC. And in addition to that, I co-chair the Securing Software Repositories Working Group, where we had a recent chance to collaborate on the Principles for Package Repository Security document.

CRob (02:40)
Excellent, which we will talk about in just a few moments. And, you know, thank you both for your past, current and future contributions to open source. We really appreciate it. So our first question. Would you tell us what a package repository is and why that’s something that’s important to open source users?

Zach Steindler (02:59)
Yeah, this is something that comes up a lot in the working group, and what we’ve discovered is that everyone has slightly different terminology that they prefer to use. Here when we’re talking about package repositories, we’re talking about systems like NPM, like PyPI, like RubyGems or Homebrew — places that people are going to download software that they then run on their machine. And that’s a little bit in contrast to other terminology you might hear around repositories.

So here we aren’t talking about, like, where you store your source code like in a Git repository or a Mercurial repository, that sort of thing. These package repositories are widely used. Many of them serve hundreds of millions to billions of downloads per day, and those downloads are being run on developers’ machines, they’re being run on build servers, and they’re being run on people’s computers, you know, whatever you’re doing on your mobile phone or your desktop device. And so the software that’s stored in these package repositories is really used globally by almost everyone daily.

CRob (04:07)
Thinking about kind of this critical space within critical software here, how does improving package repository security affect all the downstream folks?

Jack Cable (04:17)
Great. And really to what Zach was saying, that’s in part why we picked this as a priority area at CISA, recognizing that regardless, really, of what, say, critical infrastructure sector, regardless of whether you’re a small business, whether you’re a large company, whether you’re a government agency, you’re heavily dependent on open source software. And in all likelihood, that software is being integrated into the products you’re using through a package repository.

So we wanted to see, where are the places where we can have the biggest potential impact when it comes to security, and package repositories really stood out as central points where virtually everyone who consumes open source software goes to download, to integrate that software. So it is very central to essentially all of the software that our world relies on today. And we also recognize that many of these package repositories themselves are resource constrained, often nonprofits who operate these really critical, essential services, serving millions of developers, billions of users across the world.

So what kind of can be done to help strengthen their security? Because we’ve seen attacks both on package repositories themselves, whether it’s compromising developers’ accounts or kind of some of these underlying pervasive flaws in open source packages. How can package repositories really bolster their security to make the entire open source ecosystem more resilient? That’s what we set out to do, which I know we’ll get much more into with the Principles for Package Repository Security framework we created. But the goal is to really aggregate some of the best practices that perhaps one or two package repositories are doing today, but we’re not seeing across the board.

Things that can be as basic, for instance, as requiring multifactor authentication for developers of really critical projects to make sure that that developer’s account is much harder to compromise. So some of these actions that we know take time and resources to implement and we want to see how we can help package repositories prioritize these actions, advocate for them, get funding to do them so that we can all benefit.

CRob (06:52)
Well, we’ve touched on it a few times already. Let’s talk about the Principles for Package Repository Security. Could you maybe share a little bit about what this document’s about, how it came to be, and maybe a little bit about who helped collaborate on it?

Jack Cable (07:06)
I’ll kick it off, and then Zach can jump in. So really, as I was saying, we wanted to create kind of a common set of best practices that any package repository could look to to kind of guide their future actions. Because, kind of, what we’ve been seeing, and I’m sure Zach can get much more into it with the work he’s led through the Securing Software Repositories Working Group, is that there are many package repositories that do care significantly about security and really are taking a number of steps. Like we’ve seen, for instance, both Python and NPM requiring multi-factor authentication for their maintainers, Python even shipping security tokens to their developers. Some of these actions really have the potential to strengthen security.

So what the Principles for Package Repository Security Framework is, is really an aggregation of these security practices that we developed over the course of a few months collaboratively, both between CISA, Securing Software Repositories Working Group, and then many package repositories, and landed on a set of four buckets really around security best practices, including areas like authentication, authorization.

How are these package repositories, for instance, enforcing multi-factor authentication? What tiers of maturity might go into this? Then, for instance, if they have a command line interface utility, how can that make security really seamless for developers who are integrating packages?

Say, if there are vulnerabilities in those packages, is that at least flagged to the developer so they can make an informed decision around whether or not to integrate the version of the open source package they’re looking at? So maybe I’ll pass it over to Zach to cover what I missed.

Zach Steindler (09:08)
Yeah, the beauty of open source is that no one’s in charge. And people sometimes misunderstand the Securing Software Repositories Working Group, and they’re like, can I come to that and, sort of like, mandate all the package repositories implement MFA? And the answer is no, you can’t, first because it’s against the purpose of the group to like tell people what to do. But also, it’s not a policy-making group. It’s not a mandate-creating group, right? Participation is voluntary.

Even if we were to, you know, issue a mandate, each of these ecosystems has like a rich history of why they’ve developed certain capabilities, things they can and cannot do, things that are easy for them, things that are hard. So we absolutely are not looking to go in and say, OK, you know, all ecosystems must do X. But what we are is sort of this forum where these conversations take place.

People who operate these package repositories can say, here’s what’s working for us, here’s what’s not working for us. Share those ideas, share those experiences and learn from each other. And so when it came to writing the Principles for Package Repository Security document, the goal was not to say, here’s what you must do. But these different ecosystems are all very busy, very resource constrained, and one of the items often on their backlog is to create a security roadmap or to put together a request for funding for, like, a full-time security-in-residence position. But to do that, they need to have some idea of what that person is going to work on.

And that’s really where the principles document comes in, is where we’re creating this maturity model, this roadmap, whatever you want to call it, more as a menu that you can order off of and not a mandate that everyone must follow.

CRob (10:50)
That sounds like a really smart approach. I applaud your group for taking that tactic. The artifact itself is available today, so you can go out and review it and maybe start adopting a thing or two in there if you manage a repository, but it also took you a lot of time and effort to get there. So describe to us what’s next on your roadmap. What does the future hold for your group and the idea of trying to institute some better security practices across repos?

Zach Steindler (11:18)
Yeah, I can start out talking about the Securing Software Repositories Working Group. I’m not sure I would have had this grand plan at the time, but over time it sort of crystallized that the purpose of the working group is to put together roadmaps like the principles document that we published. I gotta plug that all the work that we do is on repos.openssf.org, so it’s a great place to review all these documents.

The second thing that the working group is focused on, other than just being this venue where people can have these conversations, is to take the individual security capabilities and publish specific guidance on how an ecosystem implemented it, and then give sort of a design and security overview to make it easier for other ecosystems to also implement that capability. We have a huge success story here with a capability called Trusted Publishing.

So to take a step back, the point of Trusted Publishing is that when you are building your software on a build server and you need to get it to the package registry, you have to authenticate that you have the permission to publish that package namespace. Usually in the past, this has been done by taking someone’s user account and taking their password and storing it in the build pipeline. Maybe you could use an API key instead, but these are really juicy targets for hackers.

So Trusted Publishing is a way to use the workload identity of the build system to authorize the publish. And then you don’t have an API key that can be exfiltrated and broadly used to upload a lot of malicious content. And so this capability was first implemented in PyPI, shortly thereafter in RubyGems.
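To make the contrast concrete, here is a minimal sketch, in Python with entirely hypothetical names and configuration, of the kind of claims check a registry might run after verifying a build system’s OIDC token, instead of accepting a long-lived API key:

```python
# Hypothetical sketch of a registry-side trusted-publisher check.
# All field and claim names here are illustrative; real registries such
# as PyPI also verify the OIDC token's signature against the identity
# provider before trusting any of its claims. That step is omitted.

# Trusted publishers a maintainer has configured for "example-pkg".
TRUSTED_PUBLISHERS = {
    "example-pkg": [
        {
            "issuer": "https://token.actions.githubusercontent.com",
            "repository": "example-org/example-pkg",
            "workflow": "release.yml",
        }
    ]
}

def may_publish(package: str, claims: dict) -> bool:
    """True if the (already signature-verified) token claims match any
    trusted publisher configured for this package."""
    for tp in TRUSTED_PUBLISHERS.get(package, []):
        if (claims.get("iss") == tp["issuer"]
                and claims.get("repository") == tp["repository"]
                and claims.get("workflow") == tp["workflow"]):
            return True
    return False

# A short-lived token minted by the right workflow in the right
# repository is accepted...
good = {"iss": "https://token.actions.githubusercontent.com",
        "repository": "example-org/example-pkg",
        "workflow": "release.yml"}
# ...while one minted from another repository is rejected.
bad = dict(good, repository="attacker/example-pkg")

print(may_publish("example-pkg", good))  # True
print(may_publish("example-pkg", bad))   # False
```

The security win Zach describes falls out of this flow: the token is short-lived and scoped to a single workflow run, so there is no durable credential sitting in the build pipeline to steal.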

And then we asked Seth Larson, who’s a member of the working group and the Python Software Foundation’s security developer-in-residence, to write up implementation guidance based on what his team at the PSF learned and also based on what the RubyGems team learned. And it so happened that NuGet, the package manager for the Microsoft .NET ecosystem, was also interested in this capability, and the timing just happened to work out perfectly where they started coming to the working group meetings.

We already had this drafted guidance on implementation, and they were able to take that and kind of accelerate their RFC process, adapting it so that it was relevant to the different concerns in their ecosystem. They’re much further along on this track of implementing this capability than they would otherwise have been if they had to start at square one. So in addition to roadmaps, I think we’re going to be focusing more in the near future on finding more of these security capabilities to publish guidance on, to help the package repositories learn from each other.

Jack Cable (14:08)
Yep, and just to add on to that, I think it’s super great to see some of the work that is coming out of the working group. We at CISA held a summit on open source software security in March, where as part of that we announced actions that five of the major package repositories, including for Python, JavaScript, Rust, Java and PHP, are taking in line with the Principles for Package Repository Security framework. And we know that this is going to be an ongoing journey for really all of the package repositories, but we’re encouraged to see alignment behind that. And we hope that can be a helpful resource for these package repositories to put together their roadmaps, to make funding requests and so on.

But I do want to talk about kind of one of the broader outcomes that we want to help achieve at CISA, and this is in line with our secure by design initiative, really where we want technology manufacturers to start taking ownership of, for instance, the security outcomes of their customers, because we know that they’re the ones who are best positioned to help drive down the constant stream of cyber attacks that we seem to be seeing.

As part of that, it’s essential that every technology manufacturer who is a consumer of open source software who integrates that into their products, who profits from that open source software is a responsible steward of the open source software that they depend upon. That means both having processes to responsibly consume that. It also means contributing back to those open source packages, whether financially or through developer time.

But what this also entails is making sure that there’s kind of a healthy ecosystem of the infrastructure supporting the open source communities of which package repositories are really a core part. So I encourage every software manufacturer to think about how they are helping to sustain these package repositories, helping to foster security improvements, because again, we know that many of these are nonprofits. They really do rely on their consumers to help sustain them, not just for security, but for operations more generally. So really we want to see how both we can help spur some of these developments directly, but then also how every company can help contribute to sustain this.

Zach Steindler (16:50)
Jack, I just wanted to say that we are sort of like maybe dancing around the elephant in the room, which is that a lot of this work is done by volunteers. Occasionally it is funded. I wanted to give a special shout out to Alpha-Omega, which is an associated project of the OpenSSF that has funded some of this work in individual package repositories. There’s also the Sovereign Tech Fund, which is funded by, I think, two different elements in the German government.

But, you know, this work doesn’t happen by itself. And part of the reason why we’re putting together this guidance, why we’re putting together these roadmaps is so that when funding is available, we’re making sure that we are conscious of where we can get the most results from that investment.

CRob (17:32)
Thank you both for your efforts in trying to help lead this, to help make this large change across our whole ecosystem. There’s a huge amount of downstream impact these types of efforts are going to have. But let’s move on to the rapid-fire section of our interview. (Sound effect: Rapid fire!) I have a couple of fun questions. We’re going to start off easy: spicy or mild food?

Jack Cable (17:55)
Spicy.

Zach Steindler (17:57)
In the area that I live, there’s quite a scale of what spicy or mild means, depending on what kind of restaurant you’re at. I’d say I tend towards spicy, though.

CRob (18:05)
(Sound effect: Oh, that’s spicy!) That’s awesome. All right. A harder question. Vi or Emacs?

Jack Cable (18:16)
I’m going to say nano — option number three.

CRob (18:20)
(Laughter) Also acceptable.

Zach Steindler (18:24)
CRob is always joking about college football rivalries, and I don’t feel a strong personal investment in my text editor. I do happen to use Vi most of the time.

CRob (18:37)
It is a religion in some parts of the community. So, that was a very diplomatic answer. Thank you. Another equally contentious issue: tabs or spaces?

Jack Cable (18:48)
Spaces all the way, two spaces.

Zach Steindler (18:52)
I’m also on team spaces, but I’ve had to set up my Go formatter and linter to make sure that it gets things just right for the agreed-upon ecosystem answer. That’s the real answer, right? It’s good tools, and everyone can be equally upset at the choices that the linter makes.

CRob (19:09)
That’s phenomenal. (Sound effect: The sauce is the boss!) I want to thank you two for playing along real quickly there. And as we close out, let’s think about, again, continuing on my last question about the future. What advice do either of you have for folks entering the industry today, whether they’re going to be an open source developer or maintainer, they’re into cybersecurity, or they’re just trying to help out? What advice do you have for them?

Jack Cable (19:31)
I can kick that off. I’d say first of all, I think there are lots of great areas and community projects to get involved with, particularly in the open source space. The beauty of that, of course, is that everything is out there and you can read up on it, you can use it, you can start contributing to it. And specifically from the security perspective, there is a real ability to make a difference because, as Zach was saying, this is primarily volunteers who are doing this, not because they’re going to make a lot of money from it or because they’re going to get a ton of recognition for it necessarily, but because they can make an actual difference.

And we know that this is sorely needed. We know that the security of open source software is only going to become more and more important. And it’s up to all of us really to step in and take matters into our own hands and drive these necessary improvements. So I think you’ll find that people are quite welcoming, that there’s a lot of great areas to get involved and encourage reading up on what’s going on and seeing what areas appeal to you most and start contributing.

Zach Steindler (20:51)
I have two pieces of maybe contradicting advice, because the two failure modes that I see are people being too afraid to start participating, or being like, I have to be an expert before I start participating, which is absolutely not the case. And then the other failure mode I see is people joining a 10-year-old project and being like, I have all the answers, I know what’s going on. So I think my contradictory advice would be to show up. And when you do show up, listen.

CRob (21:19)
Excellent advice. I think it’s not that big a contradiction. As we close out, do you gentlemen have a call to action? I think I might know part of it.

Zach Steindler (21:28)
Yeah, my call to action would be please go to repos.openssf.org. That is where we publish all of our content. That also links to our GitHub repository, where you can then find our past meeting minutes, upcoming meeting information, and our Slack channel in the OpenSSF Slack. Do be aware, I guess, that we’re very much the blue hats, the defenders, here. So sometimes people are like, do you need me to, you know, report more script kiddies uploading malware to NPM? It’s like...

Our audience is the folks who are operating these systems, and so we recognize it’s a small audience. That’s not to say that we don’t want input from the broader public. We absolutely do, but to my point earlier, you know, a lot of these folks have been running these systems for a decade plus. And so do come, but do be cognizant that there’s probably a lot of context that these operators have that you may not have as a user of these systems.

Jack Cable (22:17)
And please do check out the principles for package repository security framework. It’s on GitHub as well as the website Zach mentioned. We have an open ticket where you can leave feedback, comments, suggestions, changes. We’re very much open to new ideas, hearing how we can make this better, how we can continue iterating and how we can start to foster more adoption.

CRob (22:43)
Excellent. I want to thank Zach and Jack for joining us today, helping secure kind of the engine that most people use to interact with open source. So thank you all. I appreciate your time and thanks for joining us on What’s in the SOSS? (Sound effect: That’s saucy!)

Zach Steindler (23:00)
Thanks for having us, CRob. I’m a frequent listener, and it’s an honor to be here.

Jack Cable (23:04)
Thank you, CRob.

Announcer (23:05)
Like what you’re hearing? Be sure to subscribe to What’s in the SOSS? on Spotify, Apple Podcasts, AntennaPod, Pocket Casts or wherever you get your podcasts. There’s lots going on with the OpenSSF and many ways to stay on top of it all! Check out the newsletter for open source news, upcoming events and other happenings. Go to openssf.org/newsletter to subscribe. Connect with us on LinkedIn for the most up-to-date OpenSSF news and insight, and be a part of the OpenSSF community at openssf.org/getinvolved. Thanks for listening, and we’ll talk to you next time on What’s in the SOSS?

What’s in the SOSS? Podcast #19 – Red Hat’s Rodrigo Freire and the Impact of High-Profile Security Incidents

By Podcast

Summary

In this episode, CRob talks to Rodrigo Freire, Red Hat’s chief architect. They discuss high-profile incidents and vulnerability management in the open source community. Rodrigo has a distinguished track record of success and experience in several industries, especially high-performance and mission-critical environments in financial services.

Conversation Highlights

  • 01:08 – Rodrigo shares his entry into open source
  • 02:42 – Diving into the specifics of a high-profile incident
  • 06:22 – How security researchers coordinate a response to a high-profile incident
  • 10:33 – The benefits of a vulnerability disclosure program
  • 11:57 – Rodrigo answers CRob’s rapid-fire questions
  • 13:43 – Advice for anyone getting into the industry
  • 14:26 – Rodrigo’s call to action for listeners
  • 15:53 – The importance of the security community working together

Transcript

Rodrigo Freire soundbite (00:01)
Who do I ask and grab by the arm? Man, I need you to, right now, please assess this vulnerability! It’s a very important asset to have that Rolodex of contacts and to know the ones to ask for help. You don’t have to know the information, you have to know who knows.

CRob (00:18)
Hello everybody. Welcome to What’s in the SOSS? The OpenSSF’s podcast where I and Omkhar get to talk to some amazing people in the open source community. Today, I’ve got a really amazing treat for you. Very special guest. My friend Rodrigo from Red Hat. I’ve known Rodrigo for a while, and we’re here to talk about a really important topic that, kind of, both of us have worked a lot with.

Rodrigo Freire (00:44)
Thanks, Chris. Hello. Yes, I had the pleasure to work with CRob for a good number of years, and I was in charge of the vulnerability management team at Red Hat. Yes, it was definitely five fun and character-molding years.

CRob (01:01)
So maybe you could share with our audience a little bit about your open source origin story. How did you get into this amazing space?

Rodrigo Freire (01:08)
It’s funny. When you say that I worked with a Linux version 1-dot-something, well, that pretty much discloses the age, right? It was back in the 90s. I was working at an internet service provider, and there were those multi-port serial adapters for modems, and that was pretty much the backbone of the ISP. And then sendmail, the ISC BIND DNS server. And back in the day there was no RADIUS for authentication; it was Cisco TACACS, so yeah. (Laughter)

I started as the classic ISP admin back in the 90s. That’s when I got involved, and then I worked in the Brazilian government promoting open source. It was an interesting time, when the government was shifting from mainframes and going to the low platform, and then Linux as a security thing, and then Linux more focused on performance and security. So this is where I started wetting my toes in open source software.

CRob (02:22)
So let’s dive into the meat of our conversation today, my friend. We’ve all seen them, and maybe you could share with the audience from your perspective — what is a high profile incident? You know, sometimes it’s called celebrity vulnerability or a branded flaw. Could you maybe share like what is that?

Rodrigo Freire (02:42)
Yeah, definitely. I don’t know how that translates to English, actually. So I live all the way down here in Brazil, but I like to perceive them as creating commotion. So that’s going to attract media audience and Twitter clicks and engagement and, oh my God, look what I found! And in the end, that might be, somewhat, another Brazilian saying for you guys: trimming the pig. A lot of cries for very little actual hair.

So you create all that commotion, all that need, and then that comes escalating from CEOs, whoever, to security teams, for something that in the end might be of moderate impact, or sometimes even something that does not affect some customer systems. So it’s a lot of brouhaha, I would say. However, on the other hand, there are some security events that are definitely something you should pay close attention to.

So for example, we had Heartbleed, and then there was Shellshock and GHOST. There have been, over the course of the years, a number of glibc vulnerabilities that can elevate you to root, even to the extent that one was used as a tool to get root on a system where someone forgot the password. Yes, that happened once, to a customer that shall remain unnamed.

And then finally, I think the mother of all incidents that I worked with would be the XZ security incident that happened a couple of months ago. More often than not, this is something that just creates distress with the security people, with the good people managing the data center, without something that’s really putting the customer at risk. However, on the other hand, sometimes, less often, there will be something that’s really of concern, and the customer should pay close attention to that.

CRob (04:52)
So what do you think the motivation is? Last year there were like 25,000 vulnerabilities. What’s your perception of why some of these get the celebrity treatment and others that may be more severe don’t?

Rodrigo Freire (05:08)
I have read somewhere on the internet something along the lines of over-promoting something for personal gain. That resonated very well with me. In the security community, there’s a lot of effort put into building your portfolio, your reputation across the industry. And so someone shows on the resume, hey, I was the guy who found the Heartbleed or the Ghost vulnerability.

A lot of people are going to recognize you: oh my God, you found that vulnerability! So yeah, it might be something like that. Sometimes it might not be that intent, but in the end, Chris, I really don’t think that’s something that’s changed the tide on the security landscape for a good impact, I would say.

CRob (06:00)
Yeah, I would agree. Thinking about you managing some of these high-profile incidents, for our audience, maybe you could shed some light on what goes on behind the scenes when a security researcher comes to an open source project or a vendor like Red Hat. How do you get all the stakeholders together? How do you run these types of things? How do you keep the team focused?

Rodrigo Freire (06:22)
Internally at Red Hat, we have some internal prioritization of the CVE based on a scale. We use a four-point scale. We are not attached to the CVSS score or the ranking. We focus on the product rank for the security issue. Say, for example, I use an HTTP server, Apache HTTP Server, on my system. Alright, so there’s a vulnerability affecting it with a CVSS score of 10, a perfect 10 on CVSS.

However, this functionality is not exposed on my system, or it is not used, it is not enabled, it is not supported. Why would I score that as a 10, since it’s not a valid usage of my product? So yes, I would just flag something as not affected, or even affected but with low impact. Is it putting the customers at a heightened risk? We take that into account, so this is the Red Hat score as a product. I strongly believe that the way we rank these vulnerabilities on our product is what customers should actually be paying attention to, instead of taking the worst-case scenario in whatever possible use of the component.

I’m not saying that this is not important. It is, it is key. However, we do have people, we have a human operator, taking into account how that vulnerability is actually exposed on the product. So I think that’s something very important for vendors to do: take a general vulnerability and then issue a score for your product. How is that actually exposed on our product? So that said, this is how we select how and when to fix something.
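That product-contextual rescoring can be illustrated with a small sketch. This is a hypothetical simplification for readers, not Red Hat’s actual process, scale, or data:

```python
# Hypothetical sketch: mapping an upstream severity onto a product-level
# rating based on how the vulnerable functionality is actually shipped.
# A real assessment is done by a human analyst; this only captures the
# first-order logic described in the conversation.

def product_impact(upstream_severity: str,
                   feature_shipped: bool,
                   feature_enabled: bool) -> str:
    """Rate a vulnerability in the context of one product."""
    if not feature_shipped:
        # The vulnerable code isn't in the product at all.
        return "not affected"
    if not feature_enabled:
        # Present but off by default: exposure is reduced, and an
        # analyst would weigh the remaining risk case by case.
        return "low"
    return upstream_severity

# A "perfect 10" upstream flaw in functionality the product doesn't ship:
print(product_impact("critical", feature_shipped=False, feature_enabled=False))
# -> not affected
```

The point of the sketch is the shape of the decision, not the labels: the upstream score is an input, and the product’s actual exposure decides the rating that customers see.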

And then, let’s say, for example, in the case of a high-profile event: oh man, there was a very ugly vulnerability that showed up at the eve of 2022 to 2023. It was December the 21st, something like that. It was in the 20s. So we had the company at a freeze and I was working. Sorry, this still has to be taken care of, right? And then there was ksmbd, the kernel SMB server; it was a vulnerability there. Actually, it was a stream of them that was disclosed by the Zero Day Initiative.

That was an uphill battle, because in the end it was not affecting Red Hat, because we don’t enable ksmbd in our kernels. So it was not affecting us. However, I needed to get all the techies, all the specialists, to assure and ensure, because customers’ questions were starting to pile up. It’s not only Red Hat that runs 24-7; our customers as well were surprised. So we have to provide the answers. And then finding the right resources. This is one of the key abilities for everyone managing any security program: it’s this vast network of contacts, and who to ask and who to grab by the arm. Man, I need you, right now, to please assess this vulnerability.

It’s a very important asset to have, disclosing the age again, that Rolodex of contacts, and to know the ones to ask for help to get information. You don’t have to know the information, you have to know who knows.

CRob (09:55)
Right, and I think it’s really important that some people in the supply chain, like a commercial Linux vendor, are able to contextualize that. A vulnerability may be abstract or not applicable, and I love that a lot of folks do that within the supply chain. Thinking about a vulnerability disclosure program, what we colloquially refer to as a VDP: it’s important for large projects, and it’s required for a large commercial enterprise.

Could you maybe talk to some of our listeners about what the benefits to their downstreams would be to put the pieces in place to get some type of vulnerability disclosure program together?

Rodrigo Freire (10:33)
So Red Hat has a VDP in place. For every finder that comes to us disclosing a vulnerability, we’re going to acknowledge, we’re going to point towards, the person who found this CVE. This is an integral part of our workflow, giving credit to the finder. Of course, we ask the finder: would you like to be credited? How would you like to see that credit?

And that’s not only for CVEs but also for findings on our infrastructure. So for example, on the customer portal or on some catalog or webpage or wherever else they find something at Red Hat, we give credit to every finder. We don’t do bug bounties. However, we have this VDP, so if someone is working their way to having a portfolio as a finder, as a pen tester, as a CVE finder, that’s 100% fine. We will give credit.

And then, and this is getting adjusted, we will negotiate with the finder: how much time would you want to have that under embargo? So we have all this negotiation with the finder to make something that can accommodate everyone’s needs.

CRob (11:48)
Those are some good points. Well, let's move on to the rapid-fire part of the interview. (Sound effect: Rapid fire!) Yeah!

Rodrigo Freire (11:56)
Here we go!

CRob (11:57)
First question. Here we go! Are you ready? Spicy or mild food?

Rodrigo Freire (12:03)
Definitely spicy, man. I went to India in November, at the end of last year, man. It was the time of my life, eating spicy food to the point of sweating from my head, man! That was a trip!

CRob (12:20)
Nice! (Sound effect: Oh, that’s spicy!) What’s your favorite whiskey?

Rodrigo Freire (12:26)
It’s Talisker. And I tell you what, if you’re having a Talisker and then you drink Blue Label, I’m sorry, Blue Label, that’s going to fade. Blue Label is just going to fade away. Talisker for the win.

CRob (12:42)
Very nice. Next question, Vi or Emacs?

Rodrigo Freire (12:46)
Vi, come on man!

CRob (12:48)
(Laughter) Nice! Rodrigo, what’s your favorite type of hat?

Rodrigo Freire (12:55)
Type of hat? Man, well, my favorite one is actually a red hat, right? But after I made the decision to become a bald person, I actually liked being bald, and I seldom wear any kind of hat, right? So I'm a proud bald guy, I'd say. Otherwise, it would be just a baseball cap.

CRob (13:17)
OK, fair enough. And last question, tabs or spaces?

Rodrigo Freire (13:22)
Tabs! Show some finesse!

CRob (13:26)
Nice, excellent, excellent. Well, now. (Sound effect: That’s saucy!) As we wind up, do you have any advice for someone that’s looking to get into the field, whether it’s cybersecurity incident response or open source development? What advice do you have for these newcomers?

Rodrigo Freire (13:43)
First of all, play nice. Show respect and do your due diligence. I think everyone is going to embrace you wholeheartedly, because no one likes vulnerabilities. So if you're going to find new stuff, or even help to fix this stuff, show the right attitude. Be positive, and build your relationship network. That's important, because without it you're not going to succeed, or you're going to earn a bad reputation as well. Everyone's already fighting a hard battle, so play nice.

CRob (14:15)
Nice. That’s excellent, fantastic advice. And our last question, do you have a call to action that you want to inspire our listeners to go do as soon as they listen to this?

Rodrigo Freire (14:26)
Yeah, definitely. Take into account your environment. No one likes emergencies. Emergencies are expensive. No one likes emergency maintenance windows. So get to understand your environment. Is this CVE, is this vulnerability, really affecting you? Can you be that trusted advisor in your organization, so you can actually be the person who sets the expectations and the needs of the company?

There's pressure from these high-profile events, with the upper floor asking hard questions. So get to understand your real need, so you can actually schedule something that will not hurt your team or your availability, or even the stability of your environment. And finally, I would say, ask questions. Ask your vendor, or your account reps, or your consultants. If you're in doubt, go ask your questions. I am positive that they are going to assure you that you have a secure and stable environment.

CRob (15:38)
Excellent. That's, I think, some great advice from someone that's been there on the front lines, helping fight the good fight downstream and representing customers. Rodrigo, thank you for joining us today on What's in the SOSS? Really appreciate you coming and talking to us.

Rodrigo Freire (15:53)
Thank you, Chris. And one last word I would like to stress here. In the security discussion, there's no Red Hat. There's no Canonical. There's no Oracle. No. We all collaborate very closely when it comes to security issues. We are in close touch with everyone. Everyone knows each other. So there's no such thing as Red Hat playing ball alone. I've got to tell you guys, the XZ security incident was first disclosed to Debian, and then Debian got in touch with us, and then we started the coordination. So, yeah.

CRob (16:32)
I love that about our community, the fact that we all come together, able to put our colored hats to the side, and collaborate.

Rodrigo Freire (16:37)
Exactly, mister!

CRob (16:39)
Excellent. Well, thank you, Rodrigo. Have a great day.

Rodrigo Freire (16:42)
Thanks, Chris.

Announcer (16:43)
Thank you for listening to What’s in the SOSS? An OpenSSF podcast. Be sure to subscribe to our series of conversations on Spotify, Apple, Amazon or wherever you get your podcasts. And to keep up to date on the Open Source Security Foundation community, join us online at openssf.org/getinvolved. We’ll talk to you next time on What’s in the SOSS?

What’s in the SOSS? Podcast #18 – Canonical’s Stephanie Domas and Security Insight from a Self-Described “Tinkerer”

By Podcast

Summary

In this episode, CRob talks to Stephanie Domas, CISO at Canonical, the creators of the popular operating system Ubuntu. Having started her career with over 10 years of ethical hacking, reverse engineering and advanced vulnerability analysis, Stephanie has a deep knowledge and passion for the hacker mindset.

Conversation Highlights

  • 01:14: Stephanie shares how she got her start in security
  • 05:41: Interesting things Stephanie has discovered since becoming more directly involved with open source
  • 08:20: The challenge of instilling trust into those who consume open source
  • 12:42: Stephanie answers CRob’s rapid-fire questions
  • 14:07: Stephanie’s advice to those getting into cybersecurity
  • 15:43: Stephanie’s call to action for listeners

Transcript

Stephanie Domas soundbite (00:01)
For those who aren't in the security community yet, if you have that protector personality and you like to help and you like to make sure things are great when people use them, security may be for you, right? Those tinkerers and those protectors make such phenomenal security people. If that's you, we need you in security.

CRob (00:18)
Hello everybody, I’m CRob. I do security stuff on the internet. I’m also a community leader within the OpenSSF. And we have this nice little podcast you’re listening to called What’s in the SOSS? Where I get to talk to the most amazing people that work within and around open source software. And today we have a special treat. We have Stephanie Domas. She’s the CISO of Canonical. And she’s also a former teammate of mine and a fellow Ohioan. Stephanie, welcome to the show.

Stephanie Domas (00:52)
Thank you, CRob, it’s nice to see you again and thanks for inviting me.

CRob (00:55)
You're very welcome. It's nice to be seen. Got a couple questions here. We're going to have some fun questions later on, but let's start off. Why don't you describe to the audience kind of who you are? What's your origin story, and what led you to this point, where you're working with one of the major commercial open source distributions today?

Stephanie Domas (01:14)
Yeah, absolutely. So the story of Stephanie and so it all starts back in middle school. And I won’t go, I won’t make this a huge long story, but I do think it’s, it’s, I don’t know, it’s colorful background, right? So back in middle school, right? I started to get into PC gaming, like all good nerds were at that time. And I, you know, hypothetically started to get very interested in how the cracked versions of things that I was hypothetically downloading worked.

And so while I was a consumer of these things, I really wanted to understand, you know, how were people figuring out where to patch? How were people figuring out how to change these games so I had more money or I had new powers? And so this led me on this spiral of just really wanting to understand how computers worked, right? So it all started with just, how was this even happening? And I kept digging deeper and deeper. And before you knew it, I was in university studying electrical and computer engineering, and I was focused on processors.

And so I was very interested in essentially this, the brain of the computer, right? How is it doing it? Because at the end of the day, when I started to peel back the layers of the cracks and the keygens, it all came back to trying to manipulate how the computer worked. And so I found this super interesting. And so, you know, I went to college, and I started to get really interested in the cyber side of things. Even though my university didn't have a cyber program, I was still very interested in trying to peel back that onion.

And so I joined an ethical hacking team. I participated in capture the flags, or CTFs, and I was very fortunate that the first role I landed out of college was on a security research team. And so I got to spend seven years just doing really fascinating security research. And given my focus was processors, as you can guess, my focus was on x86. So I did a tremendous amount of x86 security research for a number of years. And while that was immensely fun, at a certain point I felt like I wanted to have a bigger impact on the world. And while my research was interesting, right, I didn't feel like I was having that big impact. And so I kind of did two things. One, I decided to go do a startup, and not just a startup, but I wanted to do it in an industry where I felt like cybersecurity was really, really weak. And so I went and did a medical device cybersecurity startup. I felt like that industry, because of the potential for patient harm, had this really high need for security, and yet not a lot of security people were focused in the area.

And so I did a startup that, to this day, is still having, I think, a profound impact on that community. And then I also started teaching, because I wanted to give back and have a bigger impact. And so I started to adjunct at my alma mater, which is the Ohio State University, teaching a bunch of software and security courses and assembly. And, you know, I eventually started transitioning, and I also started teaching at your traditional security conferences like Black Hat and DEF CON and DerbyCon. And then, given my background in processors, I obviously ended up at Intel, which is where we had the privilege of meeting.

And so I got to be there for three years as the chief security technology strategist. And that was a ton of fun, right? Given Intel's large impact across the world's compute, I got to sort of fulfill my desire of driving impact across the world's compute. And then last September, I had the honor of joining Canonical as their first CISO. And that's really exciting for me because, as we all know listening to this, open source is such this beautiful thing where we're capturing the world's creativity as code. And while Canonical is the maintainer of dozens of open source projects, we are obviously most commonly known for Ubuntu.

And I'm also a fundamental believer that while a lot of people think of security as sort of guarding gates or building fences, and it is all of that, I actually believe it's so much more: that security is also about building bridges and enabling compute that couldn't have happened otherwise without security. And so I'm still on that mission to improve the world's compute by doing amazing things in security. And I'm so excited to be at Canonical, to be a part of bringing that, how can security be an enabler to the world's compute through open source.

CRob (05:14)
Nice. Well, you said something interesting that I want to circle back to in a future episode. I want to talk about DerbyCon, which was one of my favorite conferences ever.

Stephanie Domas (05:23)
I was so sad when they closed down.

CRob (05:25)
I know! #TrevorForget! But, you know, being new to open source, what's one of the most interesting differences that you've encountered in your journey here, as you've gotten to know the culture around Canonical and the broader upstream open source?

Stephanie Domas (05:41)
Yeah, so this is a super fascinating thing for me, because before joining Canonical, while I had been a consumer of open source, and Ubuntu had been one of my daily drivers for, I don't know, 15 years, basically since I started doing security research, I wasn't actually that familiar with, or hadn't really dug into, the unique nuances of how you actually drive security into open source. And so that was obviously one of the first things that needed to happen in a transition here.

And so one of the really fascinating things to me was there are so many common practices in how you drive security into software, commonly captured as things like your SDLC and SDLC best practices. And a lot of that is, I don't know, relatively mature, right? Here are all the things you need to do. And so one of the things that was super fascinating to me, and is still a constant source of interest for me, is how you translate all of those SDLC practices into open source. There are so many nuances associated with, one, it being open source, right? The fact that there are so many contributors and community members. But also, one of the things that has been really eye-opening to me in the open source space is, because it's open source, you have much more complex dependency systems in the software, right?

Because it’s open source and because there’s a sense of community and because everyone sort of develops a library that does something and then everyone else consumes it, right? You get much more of these really complex interdependencies and upstreams and downstreams that just simply don’t exist in proprietary software. And so when you start trying to apply your traditional SDLC practices to this, a lot of it doesn’t fit. And so it’s an interesting paradigm of there are known good things to do and they don’t quite translate into open source. Some of them do, but a lot of them don’t. And so that’s been a really interesting journey for me to try and figure out what can we take, what doesn’t fit, how could we make it fit, how can we still achieve some of the same outcomes in this open source and really immensely complex dependency trees.

CRob (07:40)
It’s a great challenge that I’m glad that we have folks such as yourself helping try to drive this. And that kind of touches onto our next question. You’ve spent time within traditional large enterprises and generally with those types of companies, you’ve got well-defined boundaries and regulations and policies and whatnot. And part of Canonical’s job is making open source consumable for those types of customers. Talk a little bit about some of the processes that might work in an enterprise that can help instill trust into folks’ open source software consumption.

Stephanie Domas (08:20)
Yeah, so this one's super fascinating as well, because there's open source, and then there's open source that is enterprise-ready. And sometimes that means, at a high level, things like: it's resilient, it's been tested, maybe it's supported. But I would say that's actually just scratching the surface, right? At the end of the day, Canonical sits in this sort of in-between space between enterprise and open source. And one of the really interesting things, especially in the security space, is this desire for these companies to translate what they know as secure practices into the open source space.

And so I also mentioned in the last question, right, the SDLC, right? The number of questionnaires we get from customers that say, do you have an SDLC? Does it meet all these requirements? And it’s their standard questionnaire, right? It’s all those standard best practices I just talked about. And it’s really, really hard to say yes to those and feel like you can write like a real solid checkbox in that line. And so just giving like a super nerdy example is something that’s just been on my mind recently. So I’m going to throw out some nerd numbers here. So the OMB memorandum, M-22-18, right? I see you shaking your head and I know people can’t see this.

CRob (09:33)
Oh, I’m familiar with it.

Stephanie Domas (09:35)
The thing is, this is a real big thing right now, and it is requiring software manufacturers to fill out, sorry, a secure software development attestation form, to then file in the repository for software attestations and artifacts. This form is derived from the NIST SSDF, which is the NIST Secure Software Development Framework, SP 800-218. I'm throwing so many numbers at us right now, but the whole point is, right, this is an example of sort of what I talked about in the last question, where enterprises have these known ways of doing things. The SSDF is a commonly accepted way of doing a secure development lifecycle, but a lot of it, well, not all of it, translates cleanly to open source.

And so now you have these memorandums coming out asking software developers to fill out this form. And some of the questions in there, I would say at least half of the questions inside of it, are around the development machines, right? Was the development done on a machine that is isolated? Was the development done on a machine that follows security best practices? Well, how on earth am I supposed to answer that question for open source? Do I answer with the mindset of just Canonical developers, in which case I can give a straight answer? Do I answer for the community members? And the form that they've developed has no area for you to explain nuance. You're either in alignment with the form, or you answer no, they consider you out of alignment, and you're expected to put together a plan for how you get into alignment.

And so things like this are really interesting, sitting in that intersection between enterprises and open source, because a lot of these regulations, and the things these enterprises are looking for, right, the checkboxes they need in order to be able to then satisfy their customers, don't translate. And so we sit at that intersection of trying to, one, make it your traditional enterprise-ready, with resilience and testing and code coverage and all of those great things.

But there's also the really interesting part, the really complex part that I think a lot of community members maybe don't appreciate how chaotic it is: how you translate all of these regulations and these NIST frameworks that all the customers want checkboxes for into something you can meet in an open source space, in a way that you can say with confidence, yes, we meet this. That is really, really difficult to do. And yeah, so that memorandum is on my mind a lot right now, because we're attempting to go through that process right now. And again, it's like, how do I answer this question when I don't control community members' laptops, right?

CRob (12:11)
Yeah, it's a lot of really interesting challenges. I could spend hours talking about SDLC, too. I'm really excited, again, that we have folks who kind of live in both worlds. You're bridging the gap between enterprise and community and trying to help make a successful translation. I really appreciate that. And I also appreciate that we're at the time of the show where we're going to do the rapid-fire round!

Stephanie Domas (12:35)
Woo!

CRob (12:36)
All right, I got a series of fun questions and let’s see how you do. There are no wrong answers. First question, spicy or mild food?

Stephanie Domas (12:46)
My gosh, so mild. I am absolutely a wimp with spices.

CRob (12:49)
(Laughs) Alright, fair enough. Next question. What’s your favorite flavor of ice cream?

Stephanie Domas (12:55)
Vanilla.

CRob (12:56)
Vanilla? Alright. French vanilla? Vanilla bean?

Stephanie Domas (13:00)
(Laughs) I am not fancy enough for that one. My palate is not refined enough to know the difference.

CRob (13:06)
(Laughs) Very nice. All right. Vi or Emacs?

Stephanie Domas (13:12)
Vi, definitely.

CRob (13:14)
Yes, hooray! There are no wrong answers except if you pick Emacs.

Stephanie Domas (13:18)
Yes, my Vimrc file is complicated and every time I move computers and I haven’t moved it, it’s very painful. So it’s got a lot of customization, I won’t lie.

CRob (13:31)
Excellent. And last question from rapid-fire: tabs or spaces?

Stephanie Domas (13:36)
My gosh, I’m gonna get some enemies here. I’m a tabs person.

CRob (13:39)
Yeah? Very nice. Again, there are no wrong answers. Everyone has their own way of working. That's great. Thank you for sharing in our little fun segment. And as we wind down, what advice do you have? You mentioned that you've been a teacher, and you've given a lot of your time to try to help bring up the next generation of folks. What advice do you have for people that are getting into either open source development or cybersecurity?

Stephanie Domas (14:07)
Yeah, I guess so. I’ll focus on the cybersecurity one and I’m going to get a bit of like social emotional on us here instead of technical. But my advice, my high level advice is just assume the best in your community team members until you are given a reason to otherwise. I see so many times some new vulnerability comes out or some new incident or a breach happens and I see people in the community kind of they jump to assuming negligence or assuming that people are dumb and you see statements like how could they not have done X, like that’s so obvious and it makes me really sad because I feel like most people in the community actually are really trying to do the right thing.

They are on limited resources. They have to make tough decisions, and sometimes things literally just fall through the cracks. And so I see people get burnout, not because they're not trying to do the right things, but because they're trying to do the right thing and people aren't appreciating that. So it's going to be a high-level one: be good to your fellow security members. And if you're in a position to offer help to somebody who happens to be in the spotlight, who is firefighting, who is involved somehow in a breach or an incident, instead of sitting there trying to judge them, offer to help them. Even if it's just to offer a shoulder, to be somebody who doesn't yell at them for a moment, send them a digital coffee, something. So assume the best in your security team members until they give you a reason not to.

CRob (15:31)
I love it. More empathy for everybody, I think, will make the world a much happier place. And finally, do you have a call to action for our listeners? Something you want to inspire them to do next?

Stephanie Domas (15:43)
I know this one's also kind of cheesy. It's just, I don't know, just always be curious about how stuff works, right? I think there are so many different reasons why people get into security. I got into it because I was a tinkerer and because I'm curious. If that's your passion, right, follow that. I would also say the other really big one I see is people who have this protector personality. So for those who aren't in the security community yet, if you feel that protector personality and you like to help and you like to make sure things are great when people use them, right? Security may be for you, right? Those tinkerers and those protectors make such phenomenal security people. If that's you, right? We need you in security.

CRob (16:24)
That’s awesome. Such wise words. Thank you, Stephanie. I really appreciate your time. O-H…

Stephanie Domas (16:30)
I-O!

CRob (16:31)
Yes!

Announcer (16:32)
Thank you for listening to What’s in the SOSS? An OpenSSF podcast. Be sure to subscribe to our series of conversations on Spotify, Apple, Amazon or wherever you get your podcasts. And to keep up to date on the Open Source Security Foundation community, join us online at openssf.org/getinvolved. We’ll talk to you next time on What’s in the SOSS?

What’s in the SOSS? Podcast #17 – Intel’s Katherine Druckman and the Impact of Developer Relations

By Podcast

Summary

In this episode, CRob discusses the finer points of developer relations (DevRel) with Katherine Druckman, Open Source Evangelist at Intel and co-chair of the OpenSSF Marketing Advisory Council and DevRel Community. Katherine enjoys sharing her passion for a variety of open source topics and is a long-time open source advocate, developer and podcaster. She’s currently the host of Open at Intel and co-host of the FLOSS Weekly and Reality 2.0 podcasts. She spent over a decade at Linux Journal. A passionate Drupalist since she first downloaded a tarball in 2005, she has also been a Drupal contributor and engineer.

Additionally, Katherine will be a featured speaker at SOSS Fusion/24 in Atlanta on Oct. 22-23. SOSS Fusion/24 is a collaborative and forward-thinking initiative dedicated to securing open source software. This event will bring together a diverse community of professionals, from the public sector, software developers and security engineers to cybersecurity experts, CISOs, CIOs, founders and tech pioneers.

Katherine will be an active participant at SOSS Fusion/24 and will share her insight at the following presentations:

  • Roundtable: Building Developer Confidence in Software Security with the DevRel Community, with Lori Lorusso, Percona; Tabatha DiDomenico, G-Research. Oct 22, 11:30 a.m.
  • Keynote: Fireside Chat with Window Snyder, Founder & CEO, Thistle Technologies, Oct. 23, 9:30 a.m.
  • Keynote: Back to Security Basics: Evaluating, Consuming, and Contributing Open Source Software, Oct. 23, 9:55 a.m.

Check out the full schedule for SOSS Fusion/24.

Conversation Highlights

  • 01:42 Katherine shares her non-traditional journey into open source
  • 03:30 DevRel’s definition varies, depending on the organization
  • 06:11 Tips for making connections with developers
  • 08:23 How DevRel professionals can help integrate security practices and tooling into everyday maintainer activities
  • 09:38 Katherine answers CRob’s rapid-fire questions
  • 11:05 Katherine’s belief that all knowledge can be relevant — even if it’s outside of your field
  • 12:23 Developers and security folks should be working together

Transcript

Announcer (00:01)
Today’s guest on What’s in the SOSS? is Katherine Druckman, Open Source Evangelist at Intel. Katherine will be a featured speaker at SOSS Fusion/24 in Atlanta, October 22nd and 23rd. SOSS Fusion is a collaborative and forward-thinking initiative dedicated to securing open source software. The event will bring together a diverse community of professionals, from the public sector, software developers and security engineers to cybersecurity experts, CISOs, CIOs, founders and tech pioneers. To learn more, to register and to see the full schedule, visit openssf.org.

Katherine Druckman soundbite (00:36)
We solve technical problems with technical solutions, but there are also so many human problems with so many human solutions. And I think step one to effective engagement with open source maintainers is taking notes, finding out what they really, really need, and then trying to connect the dots.

CRob (00:54)
Hello, everybody. Welcome to What’s in the SOSS? I’m CRob. I do security stuff on the internet, and I do a lot of work with the Open Source Security Foundation. I work on the Technical Advisory Council, the governing board and a bunch of the technical groups. And one of the great things I get to do is co-host What’s in the SOSS?, our podcast about learning more about interesting topics and people within the open source ecosystem. And today we have a real treat. We have my friend from work, real work, not fun upstream work, Katherine Druckman from Intel. How are you doing today, Katherine?

Katherine Druckman (01:29)
I am doing well, thank you. I appreciate you having me. This is gonna be fun.

CRob (01:34)
It’s gonna be great. So for our listeners who may not get the opportunity to work with you all the time, could you maybe give us your open source origin story?

Katherine Druckman (01:42)
Oh yeah, sure. Wow, that's a long time ago. (Laughter) Yeah, so this is funny. I like to say that I have a non-traditional background. I have an art degree, and my graduate studies were in decorative arts history. It makes total sense why I would end up here, right? So at some point in there, I was doing some, let's call them art things, art and antiques and decorative things, and I decided I needed a website for these things.

And I had a lot of nerd friends who were very involved in some tech startup at the time. And this was in, gosh, I don't know, around 2002 to 2004, maybe. And I was always kind of a nerd, to be honest. Like, I had dabbled in a little Linux before that. So I asked one of my nerd friends, and I said, hey, I heard there's such a thing as an open source content management system. What's that, and can you recommend one? (Laughter) And he said, oh, here are a few. I tried a few. I settled on Drupal to build a website. And then I started building other websites, and then I started learning more and more. And anyway, long story short, I ended up at Linux Journal because I had learned Drupal. So that's the short-ish version of my origin story. And then I had a lot of adventures along the way, and somehow all of them led me here.

CRob (03:03)
I'm going to have to do a session sometime, because there are a lot of us who come from non-traditional backgrounds and work and live here in high tech. So that's interesting to hear. Let's talk about what you do with the Open Source Security Foundation. This really introduced me to a very interesting concept. So for our audience, could you maybe explain what DevRel is and why it's important?

Katherine Druckman (03:30)
Sure, yeah, yeah, yeah. So I co-chair the Marketing Advisory Council, which is, I believe, what we're calling it today. Apologies if I got that wrong. And as part of that, we created an initiative, a DevRel community, to do developer relations on behalf of the OpenSSF. And what that means, well, developer relations type work has a lot of names, right? Some people call it developer advocacy, or evangelism, and it really kind of depends on the organization where you're doing it.

For the OpenSSF specifically, we're there to raise awareness. The mission is to connect developers, users and consumers of open source software, and in particular maintainers of open source software, to all of the wonderful tools that brilliant people like you and all of our buddies are working on at the OpenSSF. So I got involved because, frankly, I was really into the mission of the OpenSSF even before I was at Intel.

When I heard about the formation of the OpenSSF, I was kind of following it, because one of the things I do in my small amount of free time is occasionally co-host podcasts, and at the time I was co-hosting FLOSS Weekly, another podcast. And when we were looking for news stories in the open source space, I came up with, oh, look at this! There's this new foundation. They're doing work. It was always a source of insecurity slash curiosity for me. I never felt, when I was a software engineer, like I was fully prepared from a security perspective. So it was something that I pursued. So that's where I jumped in.

But going back to the original question, which is: what is DevRel? The funny thing is, if you asked 20 different DevRel-type people, they would probably all give you a slightly different answer. Because at the end of the day, you really kind of need to connect the goals of the specific organization with the work that you do. Because it can vary. Generally speaking, it's whatever serves the needs of your organization. And it can be education. It can be being a catalyst between end users and a product. You might work with product teams, or you might be more educational and community focused, like I am. The meaning varies depending on the organization. Yeah, it's just not an obvious answer, I don't think.

CRob (05:49)
That makes sense. As you know, it’s very hard to quantify what the open source is. There’s so many different permutations, so I get that. Thinking about the role of DevRel and maybe in particular with the OpenSSF, from your perspective, what have you seen that works with trying to help get engaged with maintainers and then keeping them engaged?

Katherine Druckman (06:11)
I guess I’ve seen a lot (Laughter). So back to the thing about, you know, it varies, right? I think ultimately, developer advocates and developer relations people are there to identify with and advocate for the needs of developers, because we are them. Most people that are in the DevRel space were developers, were software engineers. And we’re kind of, we’re drawing on that on our personal experiences. And I think what works, if you want to engage, especially with open source maintainers, developers and maintainers just want to get things done. We’re ultimately, we’re makers, right? We’re makers and we’re creators. And I think we all crave resources to help with that.

Sometimes it’s education, sometimes it’s tools. Sometimes it’s just, being heard, I think. So something that’s resonated for me: I’ve started having some conversations recently about maintainer burnout that have gone unexpectedly well. And I did this, I think, for a lot of reasons, right? I like to talk to smart people about anything and everything. So any excuse to talk to a lot of really interesting open source maintainers, I’m all over. But this was a topic, I think, on my mind and on the minds of a lot of people on my team.

So I started talking to more and more people. And I think these conversations have resonated even more than I expected. And my suspicion is it’s just because people feel heard and understood and listened to. So, you know, I think if you want to engage with software maintainers, step one is listening to them. Forming those human connections. I think we get bogged down in the world of software, and we solve technical problems with technical solutions, but there are also so many human problems with very human solutions. And I think step one to effective engagement with open source maintainers is listening. Listening, taking notes, finding out what they really, really need, and then trying to connect the dots.

CRob (08:12)
Well, I’m going to put my listening ears on right now. From your perspective, how do you think DevRel can help get security practices and tooling better integrated into maintainer daily workflows?

Katherine Druckman (08:23)
Yeah, that’s such a good question and a complicated one to answer, but I’m going to give it a shot. I think it goes back to listening, right? I keep saying that, but I think with things like connecting tooling, it’s figuring out all the spots along the development lifecycle where maintainers and developers are stuck, right? Where in the process are things most difficult and where do they need the tools to unblock them along the process? I think so that’s part of it. Connecting people to the things that really, really help.

Tools that smooth processes and resources really of any kind, frankly that let them kind of unplug and sleep well at night, you know (Laughter). I also feel like I would caution people to not try and focus too much on ticking boxes that don’t necessarily help the developers and maintainers. I think when you’re on one side or other of a conversation, sometimes if you’re, let’s say, a tool creator, you kind of get in the mindset of ticking the boxes that you think that people need to solve. But it’s really important to make sure that you’re pursuing the right things that really do have a direct impact on just making developers and maintainers’ lives easier.

CRob (09:38)
Let’s move on to our rapid-fire section of the interview. (Sound effect “Rapid fire!”). I’ve got a couple questions for you. Are you ready?

Katherine Druckman (09:46)
Oh, I, sure.

CRob (09:48)
Do you like spicy or mild food?

Katherine Druckman (09:51)
Oh, I like spicy, but my stomach prefers mild.

CRob (09:54)
(Laughter) Fair. What’s your favorite cocktail?

Katherine Druckman (09:58)
Oh, gosh, lately a Paloma.

CRob (10:01)
Vi or Emacs?

Katherine Druckman (10:02)
Vi.

CRob (10:04)
Oh, thank you. Yay. There are no wrong answers, but Vi is always right. Being that you’re a fellow podcaster, what’s your favorite type of microphone?

Katherine Druckman (10:14)
Ahhh, ohhh. That’s a…I like Shure. I have a couple really good Shure mics.

CRob (10:19)
I love it too. So last question, rapid-fire, tabs or spaces?

Katherine Druckman (10:24)
Oh, God. Spaces. But I’m probably gonna get…

CRob (10:28)
(Laughter) This is very controversial.

Katherine Druckman (10:29)
I know. I’m probably gonna get yelled at for that, but I know I’m supposed to…I feel like I’m supposed to say tabs, but if I’m being honest, I’m probably gonna say spaces.

CRob (10:39)
That’s fair. Again, there are no wrong answers. It all comes down to personal style, especially working with developers. No two developers do their work the exact same way.

Katherine Druckman (10:48)
Fair.

CRob (10:49)
Thank you for those amazing insights. So as we wind down here and close out, what advice do you have for somebody that’s interested in starting a career, whether it’s as an open source developer or getting into like cybersecurity or anything? What advice do you have to the new next generation?

Katherine Druckman (11:05)
Sure, yeah. Well, as I mentioned when we first started, I have a very non-traditional path, right? And I would say don’t be afraid of that. Learn all the things because you would be surprised at what sort of obscure piece of knowledge you might dig up from all of your experiences that might help you. Something from another field. I really like kind of interdisciplinary thinking. The example I use a lot, probably too much, is ergonomics and design, German kitchens of the 1930s. Yeah, it’s a whole thing. That’s what happens when you go to grad school for design history. But it’s a thing.

And every now and then, I think back to it. And I think about just the effectiveness and the simplicity and the amount of attention to detail that people put into the evolution of the modern kitchen. And it comes out in unexpected ways. And that’s, you know, it’s kind of a random and possibly silly example, we are a whole people and we draw from our, from all of our experiences. So I would just recommend learn all the things. Nothing is, nothing is not relevant.

CRob (12:11)
Awesome advice and I really like the idea of kind of connecting your background to your passions. As our final question, what call to action do you have for our listeners? Is there anything you want to inspire them to go do?

Katherine Druckman (12:23)
Yeah, come join our OpenSSF DevRel community. That’s the biggest one. Yeah, we have office hours, we have meetings, this is open to anyone. We would love to see more developers and maintainers help get this thing off the ground. Have a really effective meeting of the security folks and the developers because I feel like sometimes we’re seen as almost like opposite sides, which doesn’t make sense to me because to me, I don’t think of it that way. I never have.

I’ve always been a developer who wanted to do the right thing from a security perspective. So I feel like we should all just be like me. (Laughter) But seriously, come to our meetings, come join us. You might have some fun. We’re solving important problems. And yeah, I look forward to seeing everyone. The other last piece of advice I would have is I just got a refrigerator that has a freezer that makes craft ice and it makes these balls, because we’re talking about cocktails, it makes spherical ice. So yeah, that’s my other piece of advice. Get your hands on one of those because it’s really cool. The cocktail question reminded me and I feel like I needed to mention that.

CRob (13:29)
(Sound effect: “That’s saucy!”) That’s awesome. Thank you so much, Katherine. I really appreciate our conversation and everything you do to help get developers engaged and help get them empowered to continue the amazing work they do. So thanks for joining us on What’s in the SOSS? And we look forward to seeing you next time. Thank you.

Announcer (13:48)
Thank you for listening to this episode of What’s In the SOSS? an OpenSSF Podcast. As a reminder, Katherine Druckman will be a featured speaker at SOSS Fusion/24 in Atlanta, October 22nd and 23rd. To learn more, to register and to see the full schedule, visit open ssf dot org. And subscribe to our series of conversations on Spotify, Apple Podcasts, Overcast, Pocket Casts or wherever you get your podcasts. We’ll talk to you next time on What’s in the SOSS?

What’s in the SOSS? Podcast #16 – Dell’s Sarah Evans and Lisa Bradley and Ensuring Secure Open Source Software at the Enterprise Level

By Podcast

Summary

In this episode, CRob sits down with Sarah Evans, security research technologist at Dell, and Lisa Bradley, senior director of product and application security at Dell. They dig into the challenges of implementing secure open source software at a complex enterprise.

Sarah sits on the OpenSSF Technical Advisory Council, and at Dell she has been instrumental in cybersecurity innovation, conducting research within the global CTO R&D organization. Her career spans pivotal roles, including being an enterprise security architect and engaging in Identity and Access Management and IT at prestigious organizations like Wells Fargo and the U.S. Air Force.

Dr. Lisa Bradley is a distinguished cybersecurity expert and visionary leader. She has earned her reputation as a trailblazer in the field of security and vulnerability management. In her current role, she oversees Dell’s Product Security Incident Response Team (PSIRT), Bug Bounty Program, SBOM initiative, Dependency Management, and Security Champion and Training Programs.

Conversation Highlights

  • 02:38 – How Dell is managing its ingestion and productization of open source software
  • 04:54 – The complex task of managing open source software for a company the size of Dell
  • 06:34 – The importance of executive support when implementing security initiatives
  • 10:40 – Lisa and Sarah answer CRob’s rapid-fire questions
  • 12:40 – Lisa and Sarah’s advice to aspiring developers and security professionals
  • 14:12 – Lisa and Sarah’s call to action

Transcript

Sarah Evans soundbite (00:02)
That’s a game-changer when you can go into some of these technical engineering and security conversations and say well, it’s on Dell dot com, and we have a commitment to do this by a certain date and that partnership and that collaborative spirit really increases with that common goal.

CRob (00:20)
I’m CRob and I do security stuff on the internet. And I’m also a community leader within the OpenSSF. And one of the cool things I get to do with the OpenSSF is host What’s in the SOSS? And it’s a podcast where we talk about people within the open source ecosystem. And with us this week, I have two wonderful people that I’m so very pleased to call my friends. They both work at Dell. And so I want to introduce you all to Lisa Bradley and Sarah Evans. Ladies, welcome.

Lisa Bradley (00:49)
Thanks for having us.

CRob (00:50)
Maybe each of you just take a brief couple seconds to introduce like who you are and what you do.

Lisa Bradley (00:55)
Sure, I’ll take a stab first. Lisa Bradley, I’m in the product and application security team. I’m a senior director for Dell Technologies. My focus is vulnerability response, otherwise known as PSIRT in the industry. And I have the Bug Bounty program underneath me and some part of the Security Champion program and our Dependency Management platform, where we get to protect against open source vulnerabilities and make sure that our customers are protected against the open source that we use in our products. And I also have a big part and role in the SBOM initiative for Dell.

Sarah Evans (01:28)
I’m Sarah Evans. I am a security innovation researcher at Dell Technologies. I work in our global CTO research and development team. And I’ve had the opportunity in that role to get involved with OpenSSF, which is a foundation that is helping secure the open source software supply chain. Some of my efforts there are around the technical advisory council. And as a governing board observer and governance committee member, I also participate in the AI/ML working group. So this is a great topic because it brings something I’m passionate about, which is open source software security together with industry leadership that we’re doing within our company to improve our product security vulnerability response by improving our ingestion of open source software. So that’s really exciting.

CRob (02:20)
Dell is a very large OEM supplier of hardware and software solutions and you all use open source within your portfolio. Could maybe you talk a little bit about Dell’s open source journey and kind of maybe give us some insight into how you’re managing your ingestion and productization of open source software.

Sarah Evans (02:38)
I’ve been at Dell for four years. And when I joined the company and got involved in OpenSSF and understanding our open source software supply chain, one of the things that really became obvious, especially through my work in partnership with Lisa and the security team, has been to kind of know your why. Why are we doing this? And so our journey to secure our consumption of open source is really around protecting Dell customers and ensuring that they have a fortified supply chain. One of the quotes that I like is from Thomas Reid, where he says the chain is only as strong as its weakest link, for if that fails, the chain fails. And so the start of our journey in open source is doing the backwards math to understand how we need to secure our ingestion of open source software.

Lisa Bradley (03:26)
And that’s where I come in on the other side, is that when we’ve already released products that have open source in it, my job is to make sure that we are aware of the vulnerabilities in the open source that we’re using, that we inform those product teams and that those product teams go and get that update from that open source and repackage whatever their product is with it and ship that security update out to our customers to deploy those fixes for the open source vulnerabilities.

Sarah Evans  (03:55)
And just to add to what Lisa said, why this is so important for our company and our customers is that open source software has an outsized impact on the upstream link in our vulnerability response because it’s everywhere. And Sonatype has done a really great research report where they showed that approximately 98% of all code bases contain open source software, and 92% contain outdated or vulnerable code. And so if we are able to improve our processes around open source software ingestion and then the associated incident response, it really has a big impact on the process as a whole.

CRob (04:34)
Dell is not a small provider. So this seems like this would be a really big task of trying to understand the scope of everything you’re using within open source and then like the vulnerability management piece. Could maybe the two of you shed a little insight into maybe some of the steps or how you started to implement some type of management of all this software?

Lisa Bradley (04:54)
A lot of it that we started with was SCA-type tools, doing scanning to make sure that we even knew about the inventory that existed, because there were products written before the thought of having to do security or even the thought of having to know what your inventory is. So we spent a lot of time building up and making sure that we had the right tooling for our teams to understand that inventory.

Then we started focusing on shifting it left. So while we’re actually building to making sure that we’re scanning and knowing about our inventory throughout the whole process, not just after the fact when that product was available and that release was available. This has allowed us to have a strong inventory, not only of our open source, but our vendor components that we utilize and our internal components that we utilize.

So we’ve been focusing very heavily on making sure that our product teams have an inventory. It’s part of our SDL process. It’s one of the controls that they need to have an inventory. And then what we’ve been focusing on recently is making sure that we integrate that inventory into what I call DMP, which is our dependency management platform. And basically, we ingest the inventory. We could produce a customer-facing SBOM so that we have consistency of the SBOMs that we’re giving our customers and that we could understand the different parts of that inventory, especially in the open source side, so that we could be aware about the vulnerabilities. 

It has been a long journey. Tooling support has been key. Making sure that our product teams understand the importance of knowing what they put in their software and keeping up their software. It’s been quite a journey. We’re still on it, especially with some of the new things with SBOM and the new fun twists coming out with some of the regulatory asks. But I feel quite positive of where we’re at and where we’re continuing to go.

Sarah Evans (06:34)
And everything that Lisa just described is a huge effort on behalf of the company. And one of the things that we have found to be very helpful is the strong executive and operational support that security teams receive when they are on this technical journey to help work with engineers to accomplish and achieve all of these goals. The tone from the top has been really important and the executive support that we’ve had here at Dell has really been a very helpful driving force in accomplishing some of these challenging technical security goals in partnership with engineering. 

One of the things right now on Dell dot com are our ESG goals, our environmental, social and governance goals that talk about how we are building trust with our customers. And so there is a section that talks about some of those key drivers. One of those is the software bill of materials associated with some of that regulation that Lisa was just talking about. Now by 2025, 100% of all actively sold Dell designed and branded products and offerings will publish a software bill of materials, providing transparency on third party and open source components.

So that’s a game-changer when you can go into some of these technical, engineering and security conversations and say, well, it’s on Dell dot com and we have a commitment to do this by a certain date. And that partnership and that collaborative spirit really increases with that common goal.

CRob (07:58)
Very impressive to hear. Putting another hat on for the moment as a security practitioner, I know it’s always a challenge as an internal security person to try to get that executive buy-in and that backing. Can you maybe share a little insight into how you all were able to get that? It sounds like it’s coming right from the very top of your organization. So that sounds amazing.

Lisa Bradley (08:20)
Our CISO is a big, strong supporter of pushing the goals. And so a few of us worked together and provided him the suggested goals for the ESG. He took them further up to get them published. I think also one of the things that’s been helping us is we put together a PSoC, and within the PSoC, we’re really focused on the security practices and the product portfolio. It has executive leadership extremely high up. And so we bring forward topics like the executive order and, you know, the EU CRA and other things like that. And what do we see from the industry? What do we see from our customers, and what are the right security practices we should be doing?

And so the open source, the inventory, all of that have been brought into that. And then what we’ve done as a security team is we work through that and we worked with the different governance, security governance teams within the different, I’m gonna call them brands just to make it easier or business units, within the company to partner on joint goals together. It’s not that we’re working across just the security team into the team, but then there’s people that are right in the business that are pushing those goals also.

Sarah Evans (09:28)
Yes, and the Product Security Operations Committee gives senior leaders who are responsible for different parts of the company an opportunity to align on, yes, this is a hard problem. They align on timelines. They talk about, you know, this is the right thing to be doing. And then through that alignment, then they are able to go and execute with leadership to their teams. So it’s happening in the engineering teams, in the IT teams, and of course in our security teams. At a company the size of Dell, having that executive leadership alignment is also a really big driving force behind the success.

Lisa Bradley (10:06)
And there’s another layer and then there’s like layers underneath of like who’s gonna drive the different work streams to make it happen and then layers underneath of the people actually doing the work. It all sort of comes together. It is making our job on the security team significantly easier when we can say things like there’s an ESG goal, there’s a regulatory ask and there is a PSoC-approved goal that came up from above. In the past we were always struggling to get that attention in the security space. And now a lot of these things that we put in place are helping get that attention and drive that awareness. And we’re hearing significantly less no’s than we ever used to.

CRob (10:40)
It’s really amazing to hear about the progress that you’ve made with your security programs and just how you’ve embraced a lot of kind of open source ideals and your integration with open source within your organization. So let’s move on to the rapid-fire section of the talk here. First question to both of you: spicy or mild food?

Lisa Bradley (11:01)
Spicy, I’m from Buffalo.

Sarah Evans (11:04)
Mild. (Laughter)

CRob (11:06)
Next question. Open or closed source?

Lisa Bradley (11:09)
Open.

Sarah Evans (11:10)
Open.

CRob (11:11)
Yeah, alright, right answer. Next question. This was predominantly focused at Lisa. Trigonometry or calculus?

Lisa Bradley (11:18)
Calculus. That was easy! I didn’t have to blink! (Laughter)

CRob (11:22)
Alright, next question to both of you, bourbon or Scotch?

Lisa Bradley (11:26)
Tequila. (Laughter)

Sarah Evans (11:27)
Bourbon.

CRob (11:28)
Fair, fair. And then because I know that Lisa was a developer back in her way past: tabs or spaces?

Lisa Bradley (11:34)
That’s a hard one. Tabs!

CRob (11:37)
And, Sarah, did you ever get the opportunity to do development in your past?

Sarah Evans (11:40)
No, I haven’t.

CRob (11:42)
You’re exempt from the tabs versus spaces debate.

Sarah Evans (11:46)
Actually, because I didn’t do development in my past, I had some real imposter syndrome about getting involved in open source software and the security of it. But I leaned in and I have been able to overcome that, especially with support from colleagues such as yourself, CRob.

CRob (12:04)
And that’s what I honestly love about the open source ecosystem: it allows people to contribute their best selves, however they best see fit. Some people write code. Some people help provide that translation from, like, regulatory speak or InfoSec speak to the development community. So yeah, I appreciate these different perspectives. So as we wind down here, a couple of questions for you. What advice might you two ladies be able to share with people that are new to the ecosystem? Someone that wants to get into open source development or cybersecurity.

Lisa Bradley (12:40)
One of the things that Sarah just sort of pointed out is that you don’t have to always be a technical person. There’s always passion and drive and there’s a lot of information out there that you could look at to learn. So don’t be afraid to learn and jump in. We all need all the help that we could get right now, I think. Making sure that we continue to fight the good fight is important. So don’t be afraid to jump in.

Sarah Evans (13:04)
And on the flip side of that, if I had a dollar for every time that I worked with a software developer who prefaced any conversation with me by saying, well, I’m not a security person, I would probably be on an island in Fiji right now. Security is one of these topics that you have been exposed to without even realizing it. And so you can definitely always build from what you know and launch into how what you know ties into security. I did a recent talk at Open Source Summit with my colleague Jay White, and we posited that being in security is sometimes like a driver’s license. There are people of all occupations and careers and lifestyles sitting behind the wheel on the highway when we’re sitting in rush hour traffic. But we all have this common driver’s license and rules of the road that we follow. And so understanding security principles is something that everyone and anyone can learn, and software developers, including open source software developers, are in a perfect position to bring it into their knowledge suite.

CRob (14:06)
And then finally, either of you have a call to action you want to share with our audience to help inspire them?

Lisa Bradley (14:12)
One of the things that is top of mind is AI and how you utilize AI now to be better at the job that you do in security, be safer and better protect your customers. And the other thing that comes to mind is you should always be thinking of your customers. They are always the most important thing to protect. And so have that viewpoint when you’re coding, when you’re developing: how long you’re taking to fix vulnerabilities, knowing about vulnerabilities, knowing about what you’re consuming in the first place, trusting what you consume. All of that sort of comes into play. And just think about the customer viewpoint when you’re doing all that work.

Sarah Evans (14:49)
And I would encourage and call on every open source developer, as you’re innovating with an emerging technology, to think about the lessons learned from the prior decades and bring those forward with you into a place where there are a lot of unknowns. So as Lisa pointed out, AI is a space in which we’re really leaning in around the innovation. But those software development and security lessons and system security lessons that we’ve learned over the past decades still very much apply going forward. Even though we still have a lot of unknowns that we haven’t figured out, I call on all open source software developers to continue leveraging security fundamentals. And if there is an opportunity to innovate to incorporate those more easily and smoothly, let’s figure out how to do that.

CRob (15:42)
Ladies, I really appreciate you both showing up today on What’s in the SOSS? And I think keep up the amazing work of your program at Dell and please keep being amazing contributors to our open source ecosystem. And that’s a wrap. Thanks folks.

Announcer (15:57)
Thank you for listening to What’s in the SOSS? An OpenSSF podcast. Be sure to subscribe to our series of conversations on Spotify, Apple, Amazon or wherever you get your podcasts. And to keep up to date on the Open Source Security Foundation community, join us online at OpenSSF dot org slash get involved. We’ll talk to you next time on What’s in the SOSS?

What’s in the SOSS? Podcast #15 – Bidding Adieu to Omkhar Arasaratnam

By Podcast

Summary

In this episode, CRob chats with Omkhar Arasaratnam, who has served as the general manager of the OpenSSF and was co-host of What’s in the SOSS? As Omkhar moves on to the next chapter of his occupational journey, he reflects on his tenure with the OpenSSF, shares his open source origin story and highlights the achievements of the OpenSSF and the tactics he used to engage different stakeholders.

Conversation Highlights

  • Omkhar shares his open source origin story
  • 02:14 – Things Omkhar is proud of during his tenure at the OpenSSF
  • 04:36 – The challenge of keeping myriad stakeholders engaged
  • 07:12 – Areas of open source supply chains that public policymakers and regulators should better understand
  • 09:44 – Some challenges ahead for the open source ecosystem
  • 14:58 – Omkhar answers CRob’s rapid-fire questions
  • 17:57 – Omkhar’s advice for people entering the open source community

Transcript

Omkhar Arasaratnam soundbite (00:01)
Finding a way to bring technical knowledge to somebody that may not be as technical or non-technical knowledge and options to those that are deeply technical is definitely an area where I chose to spend a lot of my time. And hopefully to some good effect.  

CRob (00:19)
Hello everybody, I’m CRob. I do security stuff on the internet and I’m also a community leader within the OpenSSF. And one of the cool things I get to do is I get to host “What’s in the SOSS?” the OpenSSF security podcast where we talk to amazing people within the open source ecosystem. And today we have a real treat. My dear friend and pal, Omkhar, is here with us. For those of you that don’t know, Omkhar has been with us for the last year and a half as the general manager of the Open Source Security Foundation. And today we’re going to talk a little bit about kind of reflecting upon his tenure here with the foundation. Maybe start us off, Omkhar. I don’t know if anyone has officially heard, but could you share with us kind of what your open source origin story was?

Omkhar Arasaratnam (01:04)
Absolutely. It’s a pleasure to be here on this side of the table. I began messing around in open source back in the late 90s. I actually began my career at IBM in very humble beginnings. I began doing tech support. So for all those that have seen the tropes about, you know, all the best folks start in tech support, I heartily endorse that. Going through the ranks of tech support, IBM was quite focused on ensuring that their PowerPC platform was well supported under Linux. And there was just this nexus of me having time on my hands, me being interested in the subject and dabbling around. So to answer your question, my earliest kind of foray into open source was back in the late 90s. So that’s like 25 years ago. Been a minute.

CRob (01:52)
We will not speak of dates, sir. That will not reflect well on either of us.

Omkhar Arasaratnam (01:56)
Indeed, indeed. I am noticing that in the camera that you guys can’t see, there may be a few grays in my hair that we’ll need to address. (laughter)

CRob (02:06)
Reflecting back on your tenure here within the OpenSSF, what are you most proud of that we’ve achieved over the last year?

Omkhar Arasaratnam (02:14)
That’s a really interesting question. I’ll bifurcate this into the behind the scenes versus what we’ve been able to provide kind of outwardly facing. When I joined May 1st, 2023, my goal was to make board meetings boring. I guess the preconditions that we needed to satisfy in order for that to be true included building up trust, building up predictability, building up a cadence and a rigor where people felt like even when they weren’t actively involved in a meeting, the right thing was being done. So I was quite proud that, I guess, our last board meeting was in August and seemed pretty boring to me. (laughter) That’s a good thing.

The thing I was most proud of from an outwardly-facing perspective was the work that we did around the technical initiative funding process. So for those that aren’t aware, we have a number of technical initiatives under the OpenSSF, both code hosting as well as specs and documentation and stuff like that. And in conjunction with the TAC and the board, we came up with a way of basically writing grants once a quarter for any of our technical initiatives. And I think that’s a great way of taking the funds that the foundation has and really deploying them to good use.

We have this really, really daunting task of securing all the open source. And any way that we can find to create these asymmetric opportunities where the amount of effort going in doesn’t necessarily scale linearly with the positive effect coming out, we need to take advantage of them. And, you know, there’s so much goodwill and elbow grease that engineers can put in, and eventually we may need to solve things with funds. So I was really proud of the work that we did. And thank you for the collaboration on that as the lead of the TAC, around putting in the technical initiative funding to support our technical initiatives at the OpenSSF.

CRob (04:16)
I agree. I was very proud of us being able to implement that. Thinking of it, we live in a really complex space. We have a lot of different personas and stakeholders we get to engage with. Could you maybe share with us some of the tactics that you use to help keep these different stakeholders engaged throughout your tenure?

Omkhar Arasaratnam (04:36)
I’ll first self-reflect and say I’m not sure that I did a perfect job at it, but I certainly tried my best, constrained by the number of hours in a day, the number of, you know, time zones that we have to cross and all that. What I try and anchor on is really trying to find a way to figure out what a win is for each person and, conversely, what they’re most concerned about. And in some cases, a win for one person may not be completely aligned with a win for others, and we need to be able to construct our goals in such a way that we’re achieving maximum happiness, which at times may disenfranchise others. But I think with any of these aspects of engagement, the best thing — the thing that always rings true — is to have a transparent and clear decision-making process.

And whether you agree with the outcome of the decision or not, nobody feels like they were done wrong. Like nobody feels like somebody achieved something through sleight of hand. And I think that building that trust is most important. I also think that, I mean it seems obvious, but everybody comes with a different background and a different perspective. So when we’re speaking to senior leaders within the government who are very adept at working through their legislative process in order to make stuff happen for citizens, that’s a very different set of skills than somebody that may be an expert in cryptography.

And being able to find a way to align between the two is an area I spent a lot of time in. And I admit this as a software engineer, right: there are people that are genuinely smart in their own way. Just because you don’t understand data structures and algorithms doesn’t mean that, you know, you’re automatically demoted to the bottom of the stack. Quite the contrary. Finding a way to bring technical knowledge to somebody that may not be as technical, or non-technical knowledge and options to those that are deeply technical, is definitely an area where I chose to spend a lot of my time, and hopefully to some good effect.

CRob (06:54)
One of the stakeholders we worked a lot with were broadly government bodies. So thinking about that, is there one thing that you wish these global public policymakers and regulators understood better about open source software and open source software supply chains?

Omkhar Arasaratnam (07:12)
Yes, and honestly, I think this applies not just to governments, but to all of the stakeholders, right, and those stakeholders could be members of the community, government, private corporations, foundations. Everybody needs to figure out what that Rosetta stone is so that we can move forward.

One of the tripping hazards that we had early on — to quote a specific and well-known issue — was with the CRA, the Cyber Resilience Act in Europe. There were some provisions in early drafts which weren’t necessarily best aligned with the open source community. And specifically there were some concerns around how, at least in very early drafts, open source maintainers may have liabilities associated with software defects, or contributors could have liabilities associated with their contributions. And it just didn’t fit the culture of open source.  

And, you know, when push came to shove, there was a point at which when certain executives within the European Commission were asked, well, why didn’t you ask? They said, well, you know, we asked and nobody turned up. And the mediation that needs to occur in order to provide a good outcome is that neither party can kind of sit in their corner with their arms crossed and be like, hey, you meet me, you meet me here. There has to be a meeting in the middle. 

The good news story coming out of this is, I think, through a lot of hard work within the OpenSSF, as well as other foundations, there has been a much more constructive discussion around the CRA. The government, I believe, has understood the best way to engage the community, and the community has also coalesced around certain forums in which they can express their concerns as well as provide an opportunity to provide feedback on how the implementation will go. 

To get back to it, I think this anecdote really provides a clear view of why we need to be able to meet in the middle. And as I said, that extends not only between government and the community, but also foundations and the community, the community and commercial entities, things of that nature. If we can really figure out how to collaborate rather than attempting to conform one party to the other party’s thinking, that’s what really gets us to the best outcome.

CRob (09:36)
It’s a great perspective. Let’s get out our crystal ball. What challenges do you see ahead of the open source ecosystem?

Omkhar Arasaratnam (09:44)
I want to start by acknowledging and fact-dropping. My good buddy Frank Nagle over at Harvard Business School produced a study which determined that the supply side of open source, that’s the amount of money that goes into building, sustaining and contributing to open source, is just over $4 billion. That’s a lot of money. But the demand side, the value provided through that investment, is about $8.8 trillion, with a T. I mean, I don’t even know how many commas are in that. I think that’s —

CRob (10:22)
I ran out of fingers. (laughter)

Omkhar Arasaratnam (10:24)
Gonna have to start counting toes soon. And of course there’s the oft-quoted study from Sonatype that cites that 90% of commercial software contains open source. And my genuine opinion, and I’m not questioning Sonatype’s calculation here, is that this is probably an undercount. So putting that together, the curation of open source and ensuring that it is secure isn’t just an intense desire by a bunch of geeks. It is securing a public good. It’s an incredibly important mission and something that we should take very seriously as we gaze into that crystal ball.

Some of the hurdles that I worry about in the future: we used to talk about this when I worked in corporate, that bad people don’t care about your risk acceptances, right? Like they’re not going to be like, oh, your record of compliance excluded that system, okay, I’m not going to attack that. That’s just not how things work.

And I think the analog in open source is less about like a particular control or a particular scope statement or a particular risk acceptance, but more around Balkanization. What I really worry about is that we’ll align into these little fiefdoms — be it in the community, be it in the commercial sector, be it in the public sector, like wherever, we’ll align to these little fiefdoms — and the bad person that’s trying to make a bad thing occur will be able to jump over these walls quite easily and trivially because that Balkanization results in a failure mode that they can exploit. 

I know there’s a lot of pride in open source that comes from meritocracy, in that anybody can contribute. Anybody can make a suggestion, and ultimately it’s up to the maintainers and community whether to accept that. But what I’ve noticed is there’s also the other side of it, which is if we drift too far, we get into a scenario where the project itself becomes Balkanized and is no longer accepting open ideas.

So there’s this balance between having this meritocracy and a healthy culture, and the culture devolving into something that’s unhealthy which causes concerns about safety in terms of people not wanting to contribute and various aspects of the community dwindling. The strength of the open source community is the community. And we produce some of our greatest code through community collaboration based on meritocracy.

I get really uncomfortable and concerned when discussions or debate drift outside of, you know, passionate debate over which text editor is better, and into, you know, thought police territory. I think that can be quite a negative. I think the other side of it is, back to the HBS study, the order-of-magnitude difference between investment and outcome. Y’all need to step up. And by y’all, I mean the private sector and the public sector. I think there are a lot of good actors within the ecosystem. I’ve seen a lot of great contributions from larger organizations and smaller organizations alike.

On the public sector side, we have organizations like the Sovereign Tech Fund in Germany. I’d love, as a taxpaying American citizen, to see our government put money behind open source. There has been great progress that has been made through various federal, state and local and tribal organizations within the US. But I would love to see something like the US equivalent of a Sovereign Tech Fund run within our government. I know this is a lofty goal as we record this two months before a presidential election, but I’d love to see that. 

I will state in closing that not all of our problems are financial. Through various OpenSSF programs, including the Technical Initiative Funding and Alpha-Omega, I can assure you that not all the problems are technical either. But if we can get those out of the way, focusing on some of the gnarlier, non-technical problems starts to become a bit easier.

CRob (14:58)
Very nice. Well said, sir. Thank you. Well, let’s move along to the rapid-fire part of the show. (sound effect: “Rapid fire!”) First question. Omkhar, what’s your go-to Linux distro?

Omkhar Arasaratnam (15:11)
I am currently a Debian guy, although I have a Clear Linux install from our good friends at Intel.

Back in my day, I used to be a Gentoo developer. I used to be a maintainer for the PowerPC64 platform as part of my duties at IBM. So that’ll always hold a place in my heart. And I need the audience to know: if you are running Google Chrome OS, that is literally built on Gentoo’s Portage. So you’re using Gentoo, in my opinion.

CRob (15:41)
Hahahaha! Very nice. Thinking back across your career, what’s your favorite programming language and why?

Omkhar Arasaratnam (15:48)
Oh, it’s C, because that’s where I cut my teeth and I can make all kinds of heinous mistakes in it. Although in my recently-found free time — Rust community, don’t hate me — I tried picking up Rust, and it’s hard to bend my brain around it. It’s a me problem, it’s not a Rust problem. But I did find, for whatever reason, Go to be very intuitive. I don’t know whether part of it was knowing Pascal in my past, but I picked up Go in a weekend. Rust? We’re having counseling sessions.

CRob (16:25)
Hahaha! Very nice. Thinking across the many things you’ve put in your mouth, what’s your favorite adult beverage?

Omkhar Arasaratnam (16:35)
Oof. I’m going to go with — my preferences around beer are in the extremes. I like really dark, heavy porters and stouts, and I like really light, crisp Pilsners, and I’m not a big hops fan. The in-between isn’t an area I dwell in much. So looking at the darker side, there is, I think it’s a stout, maybe a porter, called Mexican Cake, which is super dark, super heavy, really sweet and has a habanero back to it. And as a —

CRob (17:10)
What?

Omkhar Arasaratnam (17:12)
Yeah, it is, it is great! So that, that is currently top of mind.

(Sound effect: “Oh, that’s spicy!”)

Haha! That’s right. That’s right, it is.

CRob (17:20)
And potentially the most controversial question of all, what’s your favorite open source mascot?

Omkhar Arasaratnam (17:27)
Honk. I mean, without question. Close second is Tux the Penguin.

CRob (17:32)
Nice. Well said, sir. Thank you very much for playing. (Sound effect: “That’s saucy!”) As we wrap up today, I want to thank you for your partnership. It’s been incredible to work alongside you, helping the community and helping our different stakeholders. If there was an inspired listener out there, what advice do you have? How could you encourage them, or route them, to participate and join this amazing community?

Omkhar Arasaratnam (17:57)
I’ll do you one better: I’ll give you two bits of advice. The first is to the community itself: be welcoming and know your biases. We shouldn’t be increasing hurdles to entry for people to participate. Like, the intellectual hazing that sometimes occurs is unnecessary, and it leads to really bad outcomes and discourages people. For those that do want to participate: roll up your sleeves and join.

I mean, to take the OpenSSF community as an example, one of which I know quite well, join the Slack. Join the weekly meetings, show up on the mailing list, participate. There is no judgment. There is no downside. And by engaging in the community, you will get to contribute to making open source more secure for everyone. And we don’t just need software engineers. We need people that are community managers. We need people that are in marketing. We need people that are DevRel, we need everyone. So show up and find out the topic that’s most interesting. 

I’ll say the other thing to bear in mind, for those that are looking to participate for the first time: this is largely a homogenous group of volunteers that are only driven by contributing. So if you want to volunteer, don’t just show up and be like, hey, where can I help? Do a little bit of digging.

See which topic interests you the most. And hey, if you don’t see one, maybe that’s an opportunity to create a new work group or a SIG.

CRob (19:30)
Very nice, thank you. That was some excellent advice. And again, it has been a pleasure. I look forward to catching up with you in the future as you go off on your adventures and start your new journey. Thank you for everything you’ve done for us.

Omkhar Arasaratnam (19:43)
It’s been a pleasure. Thank you for giving me the opportunity to serve the community. I stand by the sidelines cheering y’all on. I really believe in the mission and can’t wait to see all the great things that y’all are going to accomplish moving forward. (Sound effect: “The SOSS is the boss!”)

CRob (19:59)
Thank you, sir. And with that, well, this is a wrap. Have a great day, everybody.

Omkhar Arasaratnam (20:02)
Thanks, you too, CRob.

Announcer (20:04)
Thank you for listening to What’s in the SOSS? An OpenSSF podcast. Be sure to subscribe to our series of conversations on Spotify, Apple, Amazon or wherever you get your podcasts. And to keep up to date on the Open Source Security Foundation community, join us online at openssf.org/getinvolved. We’ll talk to you next time on What’s in the SOSS?

What’s in the SOSS? Podcast #14 – CoSAI, OpenSSF and the Interesting Intersection of Secure AI and Open Source

By Podcast

Summary

Omkhar is joined by Dave LaBianca, security engineering director at Google, Mihai Maruseac, member of the Google Open Source Security Team, and Jay White, security principal program manager at Microsoft. David and Jay are on the Project Governing Board for the Coalition for Secure AI (CoSAI), an alliance of industry leaders, researchers and developers dedicated to enhancing the security of AI implementations. Additionally, Jay — along with Mihai — is a lead on the OpenSSF AI/ML Security Working Group. In this conversation, they dig into CoSAI’s goals and the potential partnership with the OpenSSF.

Conversation Highlights

  • 00:57 – Guest introductions
  • 01:56 – Dave and Jay offer insight into why CoSAI was necessary
  • 05:16 – Jay and Mihai explain the complementary nature of OpenSSF’s AI/ML Security Working Group and CoSAI
  • 07:21 – Mihai digs into the importance of proving model provenance
  • 08:50 – Dave shares his thoughts on future CoSAI/OpenSSF collaborations
  • 11:13 – Jay, Dave and Mihai answer Omkhar’s rapid-fire questions
  • 14:12 – The guests offer their advice to those entering the field today and their call to action for listeners

Transcript

Jay White soundbite (00:01)
We are always talking about building these tentacles that spread out from the AI/ML security working group and the OpenSSF. And how can we spread out across the other open source communities that are out there trying to tackle the same problem but from different angles? This is the right moment, the right time and we’re the right people to tackle it. 

Omkhar Arasaratnam (00:18)
Hi everyone, and welcome to What’s in the SOSS? I’m your host Omkhar Arasaratnam. I’m also the general manager of the OpenSSF. And today we’ve got a fun episode for y’all. We have not one, not two, but three friends on to talk about CoSAI, OpenSSF AI and ML, how they can be complementary, what they do together, how they will be focusing on different areas and what we have ahead in the exciting world of security and AI/ML. So to begin things, I’d like to turn to my friend David LaBianca. David, can you let the audience know what you do?

Dave LaBianca (00:57)
Yep, hey, so I’m David LaBianca. I’m at Google and I’m a security engineering director there and I do nowadays a lot of work in the secure AI space.

Omkhar Arasaratnam (01:06)
Thanks so much, David. Moving along to my friend Jay White. Jay, can you tell the audience what you do?

Jay White (01:16)
I’m Jay White. I work at Microsoft. I’m a security principal program manager. I cover the gamut across open source security strategy, supply chain security strategy, and AI security strategy.

Omkhar Arasaratnam (01:23)
Thank you, Jay. And last but not least, my good friend Mihai. Mihai, can you tell us a little bit about yourself and what you do?

Mihai Maruseac (01:30)
Hello, I am at Google and I’m working on the secure AI framework, mostly on model signing and supply chain integrity for models. And together with Jay, I co-lead the OpenSSF AI working group.

Omkhar Arasaratnam (01:43)
Amazing. Thank you so much and welcome to one and all. It is a pleasure to have you here. So to kick things off, who’d like to tell the audience a little bit about CoSAI, the goals, why did we need another forum?

Dave LaBianca (01:56)
I can definitely jump in on that one, Omkhar. I think it’s a great question. What we saw since, you know, ChatGPT becoming a big moment was a lot of new questions, queries, inbounds to a whole bunch of the founders of CoSAI surrounding, hey, how are you doing this securely? How do I do this securely? What are the lessons learned? How do I avoid the mistakes that you guys bumped into to get to your secure point today?

And as we all saw this groundswell of questions and need and desire for better insight, we saw a couple things really happening. One is we had an opportunity to really work towards democratizing the access to the information required, the intelligence required, the best practices required to secure AI. And then everybody can execute to their heart’s content at whatever level they’re able to, but it’s not about not knowing how to do it. So we knew that that was our goal. 

And then why another forum? It was because there’s amazing work going on in very precise domains. OpenSSF is an example, but also great work in Frontier Model Forum, in OWASP, in Cloud Security Alliance, on different aspects of what you do around AI security. And the gap we saw was, well, where’s the glue? Where’s the meta program that tells you how you use all these elements together? How do you address this if you’re an ENG director or a CTO looking at the risk to your company from the security elements of AI?

How do you approach tying together classical systems and AI systems when you’re thinking about secure software development, supply chain security? How do you build claims out of these things? So the intent here was, well, how do we make the ecosystem better by filling that gap, that meta gap, and then really working hand in hand with all of the different forums that are going to go deep in particular areas? And then wherever possible, you’ll figure out how we fill any other gaps we identify as we go along.

Omkhar Arasaratnam (04:00)
That’s great. Jay, Mihai, anything to add there?

Jay White (04:02)
Nothing to add, just a bit of a caveat. When David and I spoke way back early on in this year, I was extremely excited about it because as he said, what’s the glue that brings it all together? And you know, up to that point, Mihai and I had already started the AI/ML Security Working Group under the OpenSSF. We’re sitting here thinking about security from the standpoint of well, what’s happening now with these open large language models? How are we creating security apparatus around these models? How is that tying into the broader supply chain security apparatus? And what ways can we think about how to do that kind of stuff? 

And then of course, when I met David, I said, man, this is phenomenal. We are always talking about building these tentacles, right? The tentacles that spread out from the AI/ML security workgroup in the OpenSSF. How can we spread out across the other open source communities that are out there trying to tackle the same problem but from different angles? So, this is the right moment, the right time, and we’re the right people to tackle it.

Omkhar Arasaratnam (05:01)
That’s a great summary, Jay. It takes a village for sure. Now we have two of the OpenSSF AI work group leads on the podcast today. So, I mean, how does this relate to the work that we’re doing there, guys? Sounds very complementary, but could you add more color?

Jay White (05:17)
The way that we think about this is well, let’s start with the data. Let’s start with the models and see how we can build some sort of guidance or guideline or spec around how we sign models and how we think about model transparency. And then of course, bringing on a SIG, the model signing SIG, which actually built code. We have an API, a working API that right now we’re taking the next steps towards trying to market. I’ll let Mihai talk about that a little bit further. As a look forward into this conversation, I sit in both CoSAI and AI/ML Security Working Group. So, when we get to that level of discussion, the tie-in is amazing. But Mihai, please talk about the technical stuff that we got going on.

Mihai Maruseac (06:03)
We have two main technical approaches that we have tackled so far in the working group, and they are very related. One is model signing, and the other one is trying to move forward on some way of registering the supply chain for a model: the provenance, the SLSA provenance or similar. And I’m saying that they are both related because in the end, in both of these, we need to identify a model by its hash, so we need to compute the digest of the model.

As we worked on model signing, we discovered that simply hashing the model as a blob on disk is going to be very bad, because it’s going to take a lot of time; the model is large. So we are investigating different approaches to make hashing efficient. They can be reused both in model signing and in provenances and in any other statement that we can make about the AI supply chain. They would all be based on the same hashing scheme for models.
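A minimal sketch of the idea Mihai describes, assuming a simple two-level (Merkle-style) scheme: hash the model file in fixed-size chunks, then hash the concatenated chunk digests to get the model’s identity. This is illustrative only, not the actual implementation used by the model signing SIG; chunking is what lets the per-chunk work be parallelized or done incrementally instead of streaming a multi-gigabyte blob through one hash.

```python
import hashlib

def hash_model_chunked(path: str, chunk_size: int = 1024 * 1024) -> str:
    """Identify a large model file by a two-level digest.

    Level 1: SHA-256 over each fixed-size chunk of the file.
    Level 2: SHA-256 over the concatenation of all chunk digests.
    The final hex digest serves as the model's identity for signing
    or for referencing it in a provenance statement.
    """
    chunk_digests = []
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            chunk_digests.append(hashlib.sha256(chunk).digest())
    # The model's identity is the hash over all the chunk hashes.
    return hashlib.sha256(b"".join(chunk_digests)).hexdigest()
```

Because each chunk digest is independent, a real implementation could hash chunks on multiple cores and only combine the results at the end; the resulting identity is still deterministic for the same file contents.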

Omkhar Arasaratnam (07:02)
And Mihai, maybe to drill into that a little bit for the folks listening in the audience. So, we can use various hashing techniques to prove provenance, but what does that solve? Why is it that we want provenance of our models? What does that allow us to better reason over?

Mihai Maruseac (07:21)
Yeah, so there are two main categories that we can solve with the hashing of a model. One is making sure that the model has not been tampered with between training and using it in production. We have seen cases where model hubs got compromised and so on. So we can detect all of these compromises before we load the model. The other scenario is trying to determine a path from the model that gets used into an application to the model that got trained or to the data sets that have been used for training. 

When you train a model or when you have a training pipeline, you don’t just take the model from the first training job and put it directly into the application. In general, there are multiple steps, fine-tuning the model, combining multiple models, or you might do quantization. You might transform a model from one format to another.

For example, right now, a lot of people are moving from pickle formats to safetensors formats. So each of these steps should be recorded. In case there is some compromise in the environment, you will be able to detect it via the provenance.
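To illustrate what recording one of those pipeline steps might look like, here is a hypothetical, minimal provenance record for a quantization step, loosely shaped like an in-toto/SLSA attestation. The file names and digest placeholders are invented for illustration, and this is a sketch of the general shape, not the exact SLSA schema.

```python
import json

# A hypothetical provenance record for one pipeline step (quantization).
# The output model (the "subject") is tied by digest to the input model
# it was derived from, so a verifier can walk the chain of steps back
# to the originally trained model and its training data.
step_provenance = {
    "_type": "https://in-toto.io/Statement/v1",
    "subject": [
        {
            "name": "model-quantized.safetensors",
            "digest": {"sha256": "aa11" * 16},  # placeholder digest
        }
    ],
    "predicateType": "https://slsa.dev/provenance/v1",
    "predicate": {
        "buildType": "quantization",  # which transformation this step was
        "resolvedDependencies": [
            {
                "name": "model-finetuned.safetensors",
                "digest": {"sha256": "bb22" * 16},  # placeholder digest
            }
        ],
    },
}

print(json.dumps(step_provenance, indent=2))
```

If every fine-tuning, conversion and quantization step emits a record like this, a compromise anywhere in the pipeline shows up as a break in the digest chain.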

Omkhar Arasaratnam (08:27)
Got it. David, I know that your focus has been more on the leadership of CoSAI, but certainly, it’s not the first time you and I have spoken about OpenSSF. I’m curious as you look across the work at OpenSSF, if there’s other opportunities where we may be able to collaborate in CoSAI and how you see that collaboration evolving in the future.

Dave LaBianca (08:50)
I think it’s a great question. We have three main work streams. One of them is fundamentally around the AI software supply chain and secure software development frameworks. The other two are preparing the defender and AI security governance. But the beginning, the inception of this conversation, around CoSAI wanting to do something in the AI secure supply chain space, was conversations with Mihai and others at Google, and Jay, and realizing that there were actually lots of opportunities here.

You know, one of them that was really straightforward for everybody was: hey, nobody’s really just looking for a provenance statement when you’re a CTO, CIO, director or the like. They want a claim. There’s something they’re trying to prove to the outside world, or at least state to the outside world. How you compose all of those elements together, especially when it’s not just a model, it’s your entire application set that you’re binding this to.

It’s the way you did fine-tuning or the way you’re using large token prompts, pulling it all together and being able to make that claim. There needs to be guidance and best practices and things that you can start from so that you don’t have to figure this out all out yourself. So that was one key area. 

Another area was that there are truly amazing efforts going on in OpenSSF, in this element of the AI space, on provenance. One of the things that we feel a group like CoSAI can really help with is collaborating not on the technical bits of how you prove or create the provenance statement, but on what are the other things that over time a practitioner would want to see in a provenance statement, or be able to attest to with a provenance statement, so that the provenance statement actually ties more closely to a claim in the future? You know, things like, hey, should we have to state what geography the training data was allowed for as part of a statement as you go forward? Things like that.

So bringing that AI expertise, that ecosystem expertise around things that people want to do with these solutions. And then working with and collaborating with OpenSSF on what does that mean? How do you actually use that? Do you use that in a provenance statement? We see that that’s the type of amazing opportunity, especially because we have really wonderful overlap. Having Jay and Mihai as part of this and all of the other board members that are doing OpenSSF, we see this really great opportunity for collaboration. There’s always that possibility that teams bump heads on things, but like the idea that we’re all working towards the same mission, the same goal, it should be good challenges as we get to the weird edges of this in the future.

Omkhar Arasaratnam (11:13)
And that’s certainly one of the biggest benefits of open source, that we can all collaborate. We all bring our points and perspectives to a particular problem area. So at this part of the show, we go through three rapid-fire questions. This is the first time we’ve had three guests on, so I’m going to iterate through each of y’all. Feel free to elaborate. As with any of these answers, I’m going to give you a couple choices. But a valid answer is: no, Omkhar, actually, it’s choice number X and here’s why. Fair warning: on the first question I’m gonna pass judgment. And the first question is: spicy or mild food? Jay White, let’s begin with you.

Jay White (11:53)
You know what? Somewhere in the middle. I like some things spicy. I like some things mild. I like my salsa spicy, but I’m not a fan of spicy wings.

Omkhar Arasaratnam (12:02)
I mean that was a politically correct statement. Let’s see if Mihai has something a little more deterministic.

Mihai Maruseac (12:08)
I am kind of similar to Jay, except on the other side. I like the spicy wings, but the mild salsa.

Omkhar Arasaratnam (12:15)
Oh man, you guys, you guys, you guys need to run for office. You’re just trying to please the whole crowd. Dave from your days on Wall Street, I know you can be much more black and white with, are you a spicy guy or a mild guy? 

Dave LaBianca (12:29)
Always spicy. 

Omkhar Arasaratnam (12:30)
Always spicy. See, that’s why Dave and I are going to hang out and have dinner next week. All right. Moving into another controversial topic: text editors, Vim, VS Code, or Emacs? Let’s start with Mihai.

Mihai Maruseac (12:43)
I use Vim everywhere for no matter what I’m writing.

Dave LaBianca (12:45)
I mean, it’s the only right answer. I mean, there’s only one right answer in that list and Mihai just said it. So, I mean, like that’s easy.

Omkhar Arasaratnam (12:51)
Absolutely. How about you, Jay? What are you going to weigh in with? 

Jay White (12:54)
It’s Vim. It’s the only answer.

Omkhar Arasaratnam (12:59)
The last controversial question, and we’ll start with Mr. LaBianca for this. Tabs or spaces?

Dave LaBianca (13:09)
That we even have to have this argument is more of the fun of it. It’s got to be spaces. It’s got to be spaces. Otherwise somebody’s controlling your story with tabs. And like, I don’t want that. I want the flexibility. I want to know that I’m using three or I’m using four. It’s spaces.

Omkhar Arasaratnam (13:23)
There’s a statement about control embedded in there somewhere. Mihai, how about you?

Mihai Maruseac (13:28)
I prefer to configure a formatter and then just use what the formatter says.

Omkhar Arasaratnam (13:34)
Ahh, make the computer make things consistent. I like that. Jay?

Jay White (13:38)
I’m spaces. I’m with David. I had to use a typewriter early on in life.

Omkhar Arasaratnam (13:42)
Hahaha. I got it.

Dave LaBianca (13:47)
I have those same scars, Jay.

Jay White (13:48)
Hahaha. Yeah!

Omkhar Arasaratnam (13:52)
So the last part of our podcast is a bit of a reflection and a call to action. So I’m going to give each of you two questions. The first question is going to be: what advice do you have for somebody entering our field today? And the second question will be a call to action for our listeners. So Mihai, I’m going to start with you, then we’ll go to Jay and wrap up with Mr. LaBianca. Mihai, what advice do you have for somebody entering our field today?

Mihai Maruseac (16:52)
So I think right now the field is evolving very, very fast, but that shouldn’t be treated as a blocker or a reason to panic. There is a firehose of papers on arXiv, a firehose of new model formats and so on, but most of them have the same basis, and once you understand the basis it will be easier to understand the rest.

Omkhar Arasaratnam (14:35)
Thanks, Mihai. What’s your call to action for our listeners?

Mihai Maruseac (14:39)
I think the principal call to action would be to get involved in any of the forums where we are talking about AI and security. It doesn’t matter which one; start with one, and from there expand, as time allows, to all of the other ones.

Omkhar Arasaratnam (14:52)
Jay, what advice do you have for somebody entering our field today?

Jay White (14:56)
Fundamentals, fundamentals, fundamentals. I can’t stress it enough. Do not start running. Start crawling. Take a look at what was old, because what was old is what is new again, and the perspective of what is old is lost on today’s engineer focused on automation and what’s new. So your perspective might be very welcome, especially if you concentrate on the fundamentals. So, anyone coming in today: become a master of the fundamentals. It’s the easiest path in, and that way your conversation will start there, and then everyone else who’s a bit more advanced will plus you up immediately, because respect will be given to those fundamentals.

Omkhar Arasaratnam (15:39)
Completely agree. Fundamentals are fundamentals for a reason. And what is your call to action for our listeners?

Jay White (15:46)
So, my call to action is going to be a little different. I’m going to tackle this by making a statement for everyone but targeting underrepresented communities, because I also co-chair the DE&I working group inside of OpenSSF as well. I feel like this is an excellent opportunity not just for me to tackle it from this standpoint in terms of the AI/ML security working group, but also for CoSAI as well. Look, just walk into the room. I don’t care whether you sit as a fly on the wall for a couple of minutes or whether you open your mouth to speak. Get in the room, be seen in the room. If you don’t know anything, say, hey guys, I’m here, I’m interested, I don’t know much, but I want to learn. And be open, be ready to drink from the firehose, be ready to roll up your sleeves and get started. And that’s for the people in the underrepresented community.

And for everyone, I would generally say the same thing. These rooms are free. The game is free. This is free game we’re giving you. Come in and get this free game and hold us accountable. Information security in general, cybersecurity, is merging down into this great AI engine that’s spinning back up again. These worlds are colliding at such a rapid pace that it’s an excellent time to get in and not know anything. Because guess what? Nine times out of ten, the people in there with the most to talk about don’t know anything either. They’re just talking and talking and talking until they say something right. So get in and be ready and open to receive.

Omkhar Arasaratnam (17:28)
Those are extremely welcoming words, and that outreach is amazing. Wrapping things up, Mr. LaBianca, what advice do you have for somebody entering our field today?

Dave LaBianca (17:41)
So I think, honestly, it’s gotta be leaning into where Jay was going. For me, foundational security is the way you don’t speedrun the last 40 years of vulnerabilities in your product, right? Whether you like the academic approach of reading and you want to go read Ross Anderson’s Security Engineering (rest in peace), or you want to find it yourself, there is so much knowledge out there that’s hidden in silos.

And this doesn’t just go for people who are starting their career. Twenty years in, if you’ve never looked at what the information warfare side of the house has found, or what signals intelligence has found, if you haven’t looked across the lines and seen all the other ways these systems go wrong, you’re still working at a disadvantage. So it’s that foundational element, and learn the history of it. You don’t have to worry about some of that stuff anymore, but knowing how you got here and why it’s happening now is so critical to the story.

And then especially with AI, please, please don’t forget that, yes, there’s an ooh shiny of the AI and it’s got new security problems, but making sure you’ve prevented access to your host, that you’ve got really strong authorization between systems, that you know why you’re using it and what your data is. Like these things are fundamental, and then you can worry about the more serious newer threats in these domains. But if you don’t do that stuff, really, really hard to catch up.

Omkhar Arasaratnam (18:56)
Words of wisdom, and I completely agree. And your call to action for our listeners, Dave?

Dave LaBianca (19:02)
I’m gonna tie it together, both Jay and Mihai, because I think both stories are super important. CoSAI is based on this idea that we really wanna democratize security intelligence and knowledge around securing AI. You can’t do that when it’s a single voice in the room, when it’s just Google, or just tech companies in the US, or just, take your pick on what that just is. So my call to action is this: our work streams are starting, so please lean in, or lean into anybody’s work streams; it doesn’t have to be CoSAI’s. Lean in, bring your voice to the table, because we need the different viewpoints in the room. We need the 10th person in the room going, wait, but I need a cost-effective solution that works on this low-end device in this region. Nobody can fix that if we don’t make sure that folks are in the room.

Yes, you have to hold us accountable to make sure we make that space, as Jay was saying, but we then also need those voices in the room. And regardless of where you come from, we need that contribution and that community building, because there’s no way you can ever pick up anything, whether it’s from Microsoft or Anthropic or IBM, Intel or Google, and then say that’s gonna work for me. You need those diverse inputs, especially on the security side, right? They let you go, OK, well, my company thinks about it this way, or my entity thinks about it this way, and I need to then figure out how I find solutions that build to it. So I think, you know, get involved, bring your voice to the table, and help us all see the things that we’re missing, that we’re kind of blind to because, you know, we work at a big tech company or whatever.

Omkhar Arasaratnam (20:29)
Thanks so much, Dave. As we close out, I’d love a quick, how should folks that are interested get involved with CoSAI as well as the AI/ML workgroup at the OpenSSF? Maybe we’ll start with Mihai. Mihai, if somebody’s interested in getting involved at the OpenSSF AI ML workgroup, where do they go to? How do they start?

Mihai Maruseac (20:49)
So join a meeting and we can go from there. The meeting is on the OpenSSF calendar every other Monday. 

Omkhar Arasaratnam (20:55)
For anyone that’s looking for where to check that out go to openssf.org, and it’s all up there on the home page. Dave if folks want to get involved with CoSAI, per your generous invitation, where can they go to learn more and how can they get plugged into those work groups that are spinning up?

Dave LaBianca (21:11)
So first things first: go to coalitionforsecureai.org, all one word. That gets you your starting points around how to find us and how to see where we go. Look at the archives of our email lists. Look at our GitHub repo that shows what we’ve currently published around our governance and our controlling rules. And then in September, look out for the calls to action from our work streams around participation and the rest. And then it’ll be exactly as Mihai said. Join us. Come troll on a GitHub repo, whatever you want to do, but find a way to get engaged, because we’d love to have you there.

Omkhar Arasaratnam (21:40)
Amazing. Jay, anything to add in terms of new folks looking to join either project as you span both?

Jay White (21:45)
I’m accessible all over the place. You can find me on LinkedIn. But find me, find Mihai, find David, and contact us directly. We are more than happy to usher you in and bring you in, especially once the workstreams spin up in the Coalition for Secure AI. But Mihai and I are there every other Monday at 10 A.M. Pacific Standard Time.

Omkhar Arasaratnam (22:08)
All right, well, thanks so much, guys. Really appreciate you joining, and I look forward to y’all helping to secure and democratize secure AI for everyone.

Dave LaBianca (22:16)
Hey, Omkhar, thank you for having us.

Omkhar Arasaratnam (22:18)
It’s a pleasure.

Announcer (22:19)
Thank you for listening to What’s in the SOSS? An OpenSSF podcast. Be sure to subscribe to our series of conversations on Spotify, Apple, Amazon or wherever you get your podcasts. And to keep up to date on the Open Source Security Foundation community, join us online at openssf.org/getinvolved. We’ll talk to you next time on What’s in the SOSS?

What’s in the SOSS? Podcast #13 – GitHub’s Mike Hanley and Transforming the “Dept. of No” Into the Dept. of “Yes And…”

By Podcast

Summary

In this episode, Omkhar chats with Mike Hanley, Chief Security Officer and SVP of Engineering at GitHub. Prior to GitHub, Mike was the Vice President of Security at Duo Security, where he built and led the security research, development, and operations functions.

After Duo’s acquisition by Cisco for $2.35 billion in 2018, Mike led the transformation of Cisco’s cloud security framework and later served as CISO for the company. Mike also spent several years at CERT/CC as a Senior Member of the Technical Staff and security researcher focused on applied R&D programs for the US Department of Defense and the Intelligence Community.

When he’s not talking about security at GitHub, Mike can be found enjoying Ann Arbor, MI with his wife and nine kids.

Conversation Highlights

  • 01:21 – Mike shares insight into transporting a family of 11
  • 02:02 – Mike’s day-to-day at GitHub
  • 03:53 – Advice on communicating supply chain risk
  • 08:19 – Transforming the “Department of No” into the “Department of Yes And…”
  • 12:44 – AI’s potential impact on secure open software and, specifically, on software supply chains
  • 18:02 – Mike answers Omkhar’s rapid-fire questions
  • 19:26 – Advice Mike would give to aspiring security or software professionals
  • 20:38 – Mike’s call to action for listeners

Transcript

Mike Hanley soundbite (00:01)

Core to everything that we do is this thesis that good security really starts with the developer. Internally, for our engineers, that means the developers who are Hubbers. But it’s also all of our customers and all of the developers and all of the open source maintainers and communities that are using GitHub. 

Omkhar Arasaratnam (00:18)
Welcome to What’s in the SOSS? I am this week’s host, Omkhar Arasaratnam. I’m also the general manager of the OpenSSF. Joining us this week is Mike Hanley. Mike is the CSO and SVP of engineering at GitHub. Prior to joining GitHub, Mike was the vice president at Duo Security where he built and led the security research, development and operations function. After Duo’s acquisition by Cisco, for many billions of dollars in 2018, Mike led the transformation of Cisco’s cloud security framework and later served as CISO for the company. And when he’s not talking about security at GitHub, Mike can be found enjoying Ann Arbor, Michigan with his wife and eight kids. Hold on. Do we need a pull request, Mike?

Mike Hanley (01:02)
I think we do. I think we need to update that to nine kids.

Omkhar Arasaratnam (01:04)
Well, congratulations to the Hanley family on a ninth. 

Mike Hanley (01:08)
Thank you, Omkhar I appreciate it.

Omkhar Arasaratnam (01:11)
I’ve got to ask you a question that every parent has probably asked you. How do you transport your family around? I mean, logistically, I’m curious.

Mike Hanley (01:21)
We have one of those amazing vans that looks like we’re running a summer camp. We have a Ford Transit, and it’s all-wheel drive, because I’m in Ann Arbor, Michigan, so we’re a four-seasons kind of area, so we need to make sure we can get around in the snow. But it’s great. It’s just got the side door, so when we get to school, everybody just sort of files out in a line, and we throw the door shut, and we’re off to the races. So Ford Transit, 12 seats. I got space for one more. (Laughter) So we love that thing. It’s great for getting around with the whole fam.

Omkhar Arasaratnam (01:48)
That’s amazing, a feat of engineering. Speaking of engineering, you’re the CSO and SVP of engineering at GitHub. Can you walk me through your role overseeing both of these teams and what it means for you, what it means for how you think about secure software development at GitHub?

Mike Hanley (02:02)
Yeah, so I initially joined GitHub a little over three years ago to be the first chief security officer. So bringing all the security programs together and then sort of writing the next chapter of that story. And about a year after I got to GitHub, so a little over two years ago, I said yes to the opportunity to also take on running all of engineering at GitHub. And having those two hats may seem a little bit unique on the outside, but I think it’s actually very natural for us at GitHub because core to everything that we do is this thesis that good security really starts with the developer and internally for our engineers, that means the developers who are Hubbers at GitHub building everything related to GitHub. 

But it’s also all of our customers and all the developers and all the open source maintainers and communities that are using GitHub. We want security to start with them as well and start with in a way that works for them as well. So secure in their accounts. So things like the 2FA work that we’ve done, but also that it’s easy for them to get security value into the projects and communities that they are a part of.

Omkhar Arasaratnam (03:05)

You’re the home for the world’s developers, and GitHub has this unique role in helping to secure the software supply chain, which is an area that can be really difficult for leaders to understand in depth and breadth. How are you advising organizations that are on their journey to better understand their supply chain? What would you advise security leaders or developers tackling this issue? 

I mean, as, I consider myself a software engineer who’s been doing security for a long time. Not every security leader has a background like you or I, where it comes from a place of software engineering. I’m curious how you articulate the complexity to those that may be more oriented towards risk or infrastructure, things of that nature.

Mike Hanley (03:53)
Yeah, I think the biggest danger, Omkhar, in terms of organizations or teams understanding their supply chain risk is that they think too small, or they have too narrow a lens. It’s like, well, my supply chain is secure because I know or have an attestation as to what totally composes the software that I’m actually building, packaging and publishing. And while that’s good, and certainly it’s something that organizations should do — and if they’re doing it, great, pat on the back to them, because frankly they’re already a leg up over a lot of other places that are doing nothing — it’s important not to stop there.

We’ve seen this actually if you look at a lot of the mainstream incidents that have hit the news in the last few years (of course you’re very familiar with all of these). They’ve involved things like attacking build systems. Some of them have been backdoors in software.

Some of them have been insider threats, and you sort of have this range of potential attack vectors on the security of your software. So I think you need to consider things like, of course, having an inventory of what’s there and doing, you know, sort of the bread-and-butter vulnerability management and dependency management that you would or should be doing as a best practice.

But it’s also considering, you know, how do I trust or how do I assess the organizations that are actually producing the software that I depend on? Are they secure in their persons? How do you think about the accounts and communities that are actually contributing to that work? Do I understand all the third-party integrations and tools that I’m using? Many organizations, I would suggest, don’t have a full inventory of those. When we look at third-party integrations on GitHub, there is a vast sort of sea of those that customers and developers and organizations use.

Having an understanding of, like, what’s plugged in where? What do they have access to? The people who develop and operate that integration, or the service I’m integrating with: what does their security look like? And I think it’s really just understanding this very broad network of effects, which goes well beyond just the idea that somebody could potentially commit a backdoor, which is obviously an important thing to assess, or that there might be a vulnerability in a dependency that you have. These are actually important things to assess, but your threat model needs to be much, much broader than that.

And I think for every organization that’s really worried about their code getting backdoored: good, they should be thinking about that. But they also need to make sure they actually go look and ask, which third-party applications have our developers authorized full and unfettered access to our build systems? If you haven’t done that, you need to make sure that you’re looking at some of those things. And this has really informed a lot of the decisions that we’ve tried to make at GitHub over the course of the last few years.

I mean, the one that I hope will be one of the most impactful things that’s happened in supply chain security in the last several years was actually driving two-factor authentication adoption. And you’ll remember the sort of package hijacking attacks that happened on the NPM ecosystem in late 2021. And that was a really interesting set of learnings for us, because the adversary incentives are so clear. If I can just get one piece of code to run a Monero miner into a package that’s in the top 500 on NPM, I’m going to get at least 10,000 downloads an hour out of that.

And so the incentive is very high to go attack those accounts, but the vast majority of maintainers and developers that we found at the time actually weren’t using 2FA. And it’s sort of interesting to look and say, well, in the IT world, if you started a corporate job tomorrow, you would get a managed laptop, an account, and something that was enrolled in 2FA, because that’s now a best practice and a standard in IT. Yet we don’t have, or we haven’t quite caught up with those types of standards and best practices in the broader sort of developer ecosystem. 

So I think the 2FA work that we did — while it was hard to get many, many millions of developers enrolled in 2FA — that’s the kind of thing that just raises the bar for everybody and it substantially increases the attack cost of a supply chain attack because you’re kind of crowding out account takeover and sort of these other low hanging fruit that would most commonly be exploited by a bad guy. 

Omkhar Arasaratnam (07:38)
I think that’s a great point, and thanks for all the great work that you all have done to spread some of the security basics like 2FA more broadly. I want to jump back to your role at GitHub. As I mentioned before, as somebody who personally identifies as a software engineer who’s been doing security for a really long time, you know, software engineers the world over have oft complained about the security department coming in and saying, you can’t do this, blah, blah, blah. And then their velocity slows down and they can’t do the cool thing because the security person said no. Both teams roll up to you. How do you balance that? How do you see that impacting your day-to-day? Is that a good thing? A bad thing?

Mike Hanley (08:19)
What you described is what I often call the Department of No. And really, I think a modern security team, especially in a modern software-driven organization, of which the vast majority of companies are at this point, you have to be the Department of Yes And. Which is, yes, we want to do the new thing. We assume that our engineers, our finance people, our marketing people have the right goals for the company. And we want to figure out how to make that particular endeavor successful and contemplate the risk management needs that we have as a company. 

We want to make sure the good thing happens while making sure that the bad thing does not happen. I think for us internally, having both teams under one roof helps. While there’s traditionally this separation that you mentioned, that separation comes with challenges, because in many cases it actually encourages the security team to sort of sit in the ivory tower and come down and say, well, Omkhar, you can’t do that because reasons. And usually that engagement happens at the 11th hour, right? I mean, it’s very rare that you get sort of negative security feedback early in a model like that.

And I find that by having the teams together, I mean, literally all the security leaders are sitting in the same calls as the engineering leaders because they all are part of the same team. And we have this notion internally that really everybody in the company is on the security team. And obviously, that means something specific for engineering because they’re doing development work that impacts the security of the company, the security of our stakeholders. They’re building things to make sure that we can realize good security outcomes, especially for open source maintainers and our paying customers. So they kind of have a threefold mission there where they’re helping with security.

But it’s also true for the finance people who report a gift card scam from our CEO or the HR and recruiting folks who are looking out for fake potential employees who are trying to scam their way into the company. So that idea that everybody’s on the security team is really a cultural approach for us to make sure that everybody’s a part of that broader network. And so this is basically the antithesis of this idea that humans are the weakest link. In fact, we view them as the strongest part of our security program and actually an enhancement of all the tooling that we have by getting everybody engaged. 

But specific to engineering, I think it’s great because it actually makes sure the incentives are tightly aligned, right? Like, I’m responsible for making sure that the website is running and secure all the time. Those are the two things that are effectively my set of responsibilities to the business, and they are not in any way in tension. In fact, if you look at the vast majority of our investment profile, it is actually going toward those two things. Now, we’re actually doing a lot of net-new features, and we’re building new things all the time, and we’ve got, you know, experimentation on bets that we want to place in the future, but the overwhelming amount of our work goes to those two priorities that I mentioned a minute ago.

You know, we’re really geared toward making sure that security is a good experience for everybody, not a bad one, which doesn’t mean we don’t say no from time to time. But I think you minimize the number of no’s by having security as deeply engaged in what’s happening at the edge as possible, because the security team can’t actually be everywhere all at once. Our security team is nowhere near the size of our engineering organization. I’ve yet to meet anybody who can say that, and I don’t think I will in this lifetime. Certainly not with the jobs reports that we see that suggest there’s a vast shortage of security talent and expertise. And certainly you and I see this when we’re thinking about how to help open source maintainers. It’s just not out there. It doesn’t exist en masse, and it doesn’t exist in a broadly available sense.

But if you’re really leaning into the idea that we want to help the engineers and you recognize that security team is not going to be a part of great security work all the time, you actually want to make sure the engineers are the superheroes from a security standpoint or the finance folks are the superheroes from a security standpoint and you shift that mindset toward one that’s actually trying to drive that outside the security team. This actually works really, really well for us, and I think creates a really tight coupling between those two teams. And it also allows us to, I think, focus on like higher order concepts. 

So for example, the idea not just that we do security, but that we do it well, and we do it in a way that’s actually a great experience. So we talked a little bit about 2FA, spending the time with the design team and the product teams to figure out what is doing this well and in such a way that it’s not alienating, that it’s actually a good experience, that people actually adopt it at scale without feeling like it’s being foisted on them in a counterproductive way. 

Because when you end up in that scenario, people find a way to get around your security controls, whether they’re your internal developers or stakeholders that you’re affecting with an initiative like that. So really having everybody together under one roof driving those common goals, I think has actually been like very, very healthy for both the company’s internal security objectives, but also for our ability to affect broad change in the open source ecosystem as well with things like that 2FA initiative that we did.

Omkhar Arasaratnam (12:44)
I think you’re spot on. Humans are ingenious. Humans will want to do the thing that provides them the least friction, so let’s make the secure thing the thing that provides the least friction. Your story about security being everyone’s accountability, not just the security team’s but finance’s, engineering’s, et cetera, reminds me of something that sounds a little corny. Twenty years ago, when I was just getting into security, I was at this conference and everybody had this lanyard that said, I’m a firewall. And, you know, 20 years later it seems kind of corny, but the idea was that security was everyone’s accountability, and it didn’t need to be backstopped by SecOps. If somebody at the front line could stop that upstream, the benefit was even larger.

Now, if we switch gears a little bit and think about everyone’s favorite topic, AI. How do you think AI will impact open source security, and what ways do you see it helping us to secure the software supply chain?

Mike Hanley (13:42)
My view on this is very bullish and it is summed up as this. I think AI will be two things. One is it will redefine the idea of shifting left in software development, which is, you know, we’ve been talking about this for more than 10 years at this point. The idea obviously that you move security value as far left as possible. What that’s meant to date has been, you’re generally getting feedback sort of at CI/CD time, right? Like after you’ve written the code, submitted it for review and it’s going through testing, that’s typically when you’re getting feedback. So that’s left obviously of boom in most cases where boom is your thing has been deployed to production, you find a vulnerability or you get breached. 

AI is basically saying when you are in the editor, bringing your idea to code through the keyboard or other input device, at that moment, you don’t just have security expertise, you actually have an AI pair programmer who has a vast range of knowledge and expertise in a variety of topics, accessibility, security, scalability, the particular language or framework that you’re using, right there with you, giving you more secure suggestions, helping you check your work, and literally just paired with you the entire time. 

So I think that is the realization of what we really want with shift left, because you can’t actually go any further left than when you’re bringing your idea to code in the editor. And I think that’s the highest value place to add security value and it’s also the best experience for developers because you are in the flow doing the work in that moment, not, hey, I committed my changes up, I’m waiting for tests to run while I go get lunch and then I come back or maybe even it’s the next day depending on how slow your build systems are. 

The next day I get feedback and I gotta go, what was I thinking at that moment when I did this? So that shift left and adding security value right in the idea in real time is huge and that is uniquely available to us because of the advances in AI, certainly with GitHub Copilot, but there’s others out there as well that are trying to do similar things. So that’s one. 

The second is that AI gives us an opportunity not just to help address vulnerabilities in new and existing code, this idea that as I’m writing it I can get higher-quality, more secure code in real time, but also to add value to things like my CI/CD. I’m automatically getting suggested fixes from some of the tooling that I have now, instead of it just saying, hey, there’s a problem here. It’s, there’s a problem here and here’s how you should fix it. So there’s tons of value there, but it also enables the idea of retrospectively fixing things.

And I think this is one of the like grand challenges that we’ve had frankly in, in security generally, but especially in open source security Omkhar as you know, like a lot of the building blocks that we have for everything that we do and experience are maintained by a small number of people who don’t necessarily have robust access to security resources. And in many cases, fundamental building blocks of the internet are unmaintained at this point. Major projects that people depend on are unmaintained. And that is a huge challenge for the whole ecosystem. 

But the idea that, particularly through AI and agents, we might actually be able to ingest, process, and refactor or fix bugs at scale in open source projects is, to me, super exciting, super compelling. It’s a way to supercharge not just your internal developers who are writing first-party code, but to actually help supercharge open source developers and open source communities and really empower them to go do this work: work they may not be incentivized to do, may not have the resources to do, or may not have the expertise within their community of maintainers and their project to go do.

And you know, it’s interesting when you say like, well, we’re dependent on these things, if only we could figure out what to do about it. And often, we talk about, well, we can deploy people to go help. And yeah, that’s good. That helps for the short term. But that doesn’t scale. And it doesn’t necessarily help with these massive excavations that it takes to go back into projects that have 10, 20, or more years of history behind them. So I’m excited that AI can actually help us get the leverage that goes beyond what we can do with human scale, especially where we have tactically deployed things or sort of volunteer firefighter resources that can go help with a small number of things. AI is going to give us the lift, I think, to go scan and support and help fix and really renovate, maybe is a nice way to put it, some of the projects that we depend on for the next 10, 20, and 30 years.

Omkhar Arasaratnam (18:02)
That’s a great point, Mike. And it’s also why we look forward to programs like the AI Cyber Challenge from DARPA and seeing the work that’ll come out of that. Switching gears a bit, we’re going to go into rapid-fire mode. So I’m going to ask you a series of questions, give you a couple of possible answers. And of course, it’s up to you whether to choose one of them or say, hey, Omkhar, you got it wrong. Actually, this is how I feel. So are you ready to go?

Mike Hanley (18:30)
I’m ready, go.

Omkhar Arasaratnam (18:32)
Spicy vs. mild food?

Mike Hanley (18:35)
It’s basically always spicy for me.

Omkhar Arasaratnam (18:37)
Always spicy. I knew we got along well for a reason. Now this is a controversial one. We talked about bringing AI to the IDE, so let’s talk about text editors: Vim, VS Code, or Emacs?

Mike Hanley (18:48)
My tastes have changed over time. I would say I’m more of a Vim guy now, but at times I’ve actually just used Nano, which I don’t know that that’s a popular answer. (Laughter) I’m not proud of that, to be clear, but at times that was the preferred editor in a prior life.

Omkhar Arasaratnam (19:03)
Why does Nano bring such shame? (Laughter) All right, tabs or spaces?

Mike Hanley (19:09)
I’m generally a tabs kind of guy. Keep it simple.

Omkhar Arasaratnam (19:12)
Awesome. Alright, closing things out, Mike, you’ve had a wonderful, illustrious career so far. What advice do you have for somebody entering our field today, be it from a software engineering or security perspective?

Mike Hanley (19:26)
I think for either one, find an opportunity to meet people and get involved and do something, have something tangible. And the great thing about open source is this is actually one of the very best ways to build your resume because your work is easily visible. You can show people what you’ve done. You see a lot of resumes, a lot of experiences, a lot of schools.

But what can really differentiate, I think, is when you can say, like, here’s a thing that I did, that I built, and I learned something unique from that. And you don’t always necessarily get that from just going through things like coursework. When you’ve really had to, like, duct tape something together and deal with the messy reality of getting things to work the way you want them to work outside of a sandbox, I think there’s a lot of value and sort of real-world knowledge that comes from that that is impossible to…there’s no compression algorithm for experience is a way I’ve heard this put previously.

And just hacking on the weekends with some of those projects, or finding some people who you want to work with for the sole sake of just learning how to do something new, is incredibly valuable, and it’s a great way to stand out.

Omkhar Arasaratnam (20:25)
That’s great advice. A mentor of mine once said you should never test in prod; learning in prod is a useful lesson. Last question for you: what’s your call to action for our listeners?

Mike Hanley (20:38)
I think it’s similar. It’s getting involved in some meaningful way. But going back to some of the things that we talked about earlier, Omkhar, just to give a slightly different answer: ask some hard questions in your organization about your understanding of open source security, and particularly your supply chain. When I think about 20-plus years in the security space, it is one of those things that stands out to me as unique, in that many organizations in security still don’t do the basics. I mean, it is 2024, and we still have to tell people to turn on two-factor authentication. So that is what it is, I would say.

But it’s also true that not every organization does a great job with inventory management and with configuration management, or even just sort of mapping out their broader ecosystem and infrastructure. And I think just going back and saying, like, do we really understand our supply chain? Do we really understand our dependencies? Do we really have, like, provenance of our assets, our inventory, our vulnerability management? 

So I think, again, in the context of open source security, really just go back and ask: do we really know how we use this stuff? Challenge what the assumptions have been over the previous weeks and months and years. I guarantee you that whatever you’re looking at is far too narrow. And I think that can be an important conversation. It can really help your organization up-level its supply chain, because you might find healthy initiatives out there, low-hanging fruit, that you can take advantage of. So maybe the call to action is to have that conversation in your next security team meeting.

Omkhar Arasaratnam (22:02)
Great advice from Mike Hanley, the CSO and SVP of engineering at GitHub. Mike, thanks so much for joining us on What’s in the SOSS? and we look forward to having you on again soon, perhaps.

Mike Hanley (22:13)
Thank you Omkhar, great to be here.

Announcer (22:15)
Thank you for listening to What’s in the SOSS? An OpenSSF podcast. Be sure to subscribe to our series of conversations on Spotify, Apple, Amazon or wherever you get your podcasts. And to keep up to date on the Open Source Security Foundation community, join us online at OpenSSF.org/get involved. We’ll talk to you next time on What’s in the SOSS?

What’s in the SOSS? Podcast #12 – CISA’s Aeva Black and the Public Sector View of Open Source Security

By Podcast

Summary

In this episode, Omkhar Arasaratnam visits with Aeva Black, who currently serves as the Section Chief for Open Source Security at CISA, and is an open source hacker and international public speaker with 25 years of experience building open source software projects at large technology companies.

She previously led open source security strategy within the Microsoft Azure Office of the CTO, and served on the OpenSSF Technical Advisory Committee, the OpenStack Technical Committee, and the Kubernetes Code of Conduct Committee. In her spare time, Aeva enjoys riding motorcycles up and down the west coast.

Conversation Highlights

  • 01:37 – Aeva describes a day in the life at CISA
  • 02:38 – Details on the use of open source in the public sector
  • 04:27 – Why open source needs corporate investment to maintain security
  • 06:20 – Aeva shares what their second year at CISA looks like
  • 07:58 – Aeva answers Omkhar’s rapid-fire questions
  • 09:28 – Advice for people entering the world of security
  • 10:16 – Certs are nice to have, but they aren’t everything
  • 10:42 – Aeva’s call to action for listeners

Transcript

Aeva Black soundbite (00:01)
The burden of securing open source — its ongoing maintenance, its testing, quality assurance, getting signing — to make open source continue to be deserving of the trust we’ve all placed in it, that can’t rest solely on unfunded volunteers. Companies have to participate, shoulder up and help.

Omkhar Arasaratnam (00:19)
Welcome to What’s in the SOSS? I’m your host Omkhar Arasaratnam. I am the general manager of the OpenSSF. And today we have my good friend Aeva Black joining us. Hi Aeva!

Aeva Black (00:32)
Hi, Omkhar, thanks so much for having me on today.

Omkhar Arasaratnam (00:34)
It’s a pleasure. Now, to start things off, why don’t you tell our listeners a little bit about your title and what you do?

Aeva Black (00:43)
Sure. So my official title is Section Chief for Open Source Security. Sounds kind of anime. I like it. I’m also a technical advisor here at CISA, the US Cybersecurity and Infrastructure Security Agency. We’re so enthusiastic about security, we put it in our name twice. What I do day to day is work on solving the same open source security problems I was working on before, but now on this end of the fence.

Omkhar Arasaratnam (01:10)
Well, as I think I’ve told you in the past, my son is a huge anime fan. I literally had to bring a checked bag back with me from Tokyo with all the various paraphernalia. But aside from indulging my excitement about hearing CISA titles associated with anime, can you tell us a little bit more about the day-to-day? I mean, Section Chief sounds like a pretty cool role, and you have been involved in the community for a while. What does your day-to-day look like, Aeva?

Aeva Black (01:37)
Honestly, you know, in my previous careers I often wrote code. These days, day-to-day looks more like answering emails and hopping on meetings, whether they’re internal meetings, interagency meetings, or meetings with the open source community, but it’s a lot of talking and writing and speaking in public.

Omkhar Arasaratnam
When you announced your new role at CISA — I think it was late last summer, about August, if memory serves — I was incredibly excited, because I’ve seen CISA over the years take a stronger, better, more supportive approach when it comes to open source software. And I was really excited to see somebody like you, with such a long history of open source support and advocacy, join CISA. Can you talk to me about what it looks like on the inside? Is everybody sitting back with their developer keyboard, clickety-clacking, doing git commits all day? Has the government evolved into pure open source? How’s that going?

Aeva Black (02:38)
You and our listeners might be surprised to realize just how much both federal and state governments have always used open source. Our friend Deb Bryant — she’s been around, used to be at Red Hat, helped out at the Open Source Initiative — actually ran the open source programs office for the state of Oregon more than 10 years ago. So I think what it looks like today in CISA is pretty much what it has always looked like. There’s more clearance, more coverage, should we say, for folks who want to contribute to open source as part of their day job.

We’ve seen that get written down as guidance, both in the DoD CIO’s memo a couple of years ago and in the DHS CIO memo, which directs all DHS agencies, including groups like the Coast Guard, to use more open source, to contribute to open source, and to be good participants in the community. So we’re certainly seeing more support for that. But again, folks across government have always used open source. My first moment of realizing that was probably 2008, when I saw some folks from the US Navy give a talk on using MySQL in a cluster running on their ships for battlefield awareness. It was the best database they could find at the time for what they needed. So it’s really nothing new.

Omkhar Arasaratnam (04:00)
Thanks for letting us know. I hadn’t realized that. And it’s very encouraging to hear that not only are we seeing broad adoption of open source within private sector, but also within the public sector. Now, security is a really important mission with a near infinite problem space, especially when it comes to open source security. You’ve been doing this for a while, where should we start? Because it seems like we could start just about anywhere and still have a life’s work ahead of us.

Aeva Black (04:27)
Yeah. Like you said, I’ve been doing this a while, since the late 90s, and really as part of my job since the early 2000s. What hasn’t changed: the breadth and diversity of open source communities is our strength, as is the global participation in these communities. And so for today, in light of some of the recent threats against open source, and the pretty big compromises or vulnerabilities in open source that have affected products, we still need to recognize that open source is maintained mostly by volunteers in a participatory, community-driven approach.

And yes, of course, companies have a big role to play too. Money isn’t always the solution, but research and common sense have shown that it usually is part of the solution. The burden of securing open source, its ongoing maintenance, its testing, quality assurance, getting signing, all of those sorts of activities to make open source continue to be deserving of the trust we’ve all placed in it, that can’t rest solely on unfunded volunteers. Companies have to participate, shoulder up and help. And the transparency in open source, the promise that anyone can modify and study the source code, that transparency has to also be dialed up for the amount of code that’s out there today. There’s so much more code than there used to be in open source, and the ratio of the number of humans reviewing code to the amount of code published has changed. That increases the risk a bit.

Omkhar Arasaratnam (05:57)
That’s some great advice as to where to start. Now we can slowly see the holiday season approaching over the horizon. I know you’ve certainly had some great accomplishments, and we’ve had some great shared work that we’ve done together. As you look to your second year in your role, what are your priorities? What’s in front of you, and what would you like us to focus on?

Aeva Black (06:20)
Yeah, for myself and my team here at CISA, I’ll share that I knew things would be different in the public sector. It’s my first time in a public sector role. Hiring in any role is never as fast as we want it to be. We find a great candidate, and the machinery of the organization, private sector or public sector, is always slower than we wish. So one of my priorities is continuing to grow my team and to bring more knowledge about open source and from the open source community into roles in the public sector, not just on my team, but across the agency, supporting other teams that don’t yet have as much knowledge about open source. So a lot of internal awareness and training. In terms of outward work, there’s been a lot that I find really encouraging, like FreeBSD’s attestation to the NIST Secure Software Development Framework.

A year ago, I had thought that there was no way to make the SSDF work for open source. I was proven wrong, and I’m delighted by that. And now I’m seeing a number of additional foundations and projects working towards a similar goal, with their community and their funders working together to raise the bar on how and what secure assurances can be made about the process by which community-stewarded open source is developed. What’s interesting is not who’s writing it, but how it’s written. How is it tested? How is it assured? I’m really encouraged to see more of that, and I look forward to partnering with folks, including the OpenSSF, towards more of it.

Omkhar Arasaratnam (07:58)
And we look forward to working with you, Aeva. So now is the time in the podcast when we move to the rapid-fire section. I’m going to prompt you with a couple of different answers. There’s always a possibility that I’ve missed something, in which case you give me what you’d prefer your answer to be. Now, I feel like I have some insight into the first question, because we’ve eaten together several times.

Aeva Black (08:23)
That we have!

Omkhar Arasaratnam (8:24)
But spicy versus mild food, Aeva?

Aeva Black (08:27)
It depends. If it’s Indian food, spicy; if it’s Mexican, medium to mild.

Omkhar Arasaratnam (08:32)
And if it’s sushi, mild.

Aeva Black (08:34)
I mean, jalapenos on sushi can be really good.

Omkhar Arasaratnam (08:37)
Hmm. Yes. Yes, I agree. I take that back. Fair enough. Or a nice spicy salmon roll, perhaps.

Aeva Black (08:45)
True. Yeah.

Omkhar Arasaratnam (08:47)
Alright. Text editor of choice: Vi, VS Code, Emacs?

Aeva Black (08:52)
Easy, easy. Vim. I’ve always used Vim. I have my system set up; put me in Emacs and I usually have to use a different shell to kill it, because I get stuck.

Omkhar Arasaratnam (09:00)
(Laughter) Well, I mean, Emacs is an operating system on its own, to be fair. (Laughter)

Aeva Black (09:04)
Yeah, just not one that I’m comfortable in.

Omkhar Arasaratnam (09:06)
I am also a Vim person, so shared joy there. Tabs or spaces?

Aeva Black (09:13)
Spaces.

Omkhar Arasaratnam (09:14)
I knew it. Awesome. All right, Aeva, we’re wrapping up now. So in closing out, I have two final questions. The first one, what advice do you have for somebody entering our field today?

Aeva Black (09:28)
I wish I had an entire podcast on just this one, but really, find your hyper-focus. For a lot of us, we can get stuck on things; figuring out how to get stuck on the things that were good for my career helped me out early on. And build a T-shaped set of knowledge: go deep first, and once you’ve gone as far as you want to go, do it again on a different topic, and that builds breadth over time. Certs are nice to have to get past resume filters, but your network is everything. Maintain relationships across jobs. That’s the second big piece of advice I’d give.

Omkhar Arasaratnam (10:05)
I’ll let you in on a secret. I think the last cert that I got was as a Red Hat certified engineer in 2002. Do you want to share with the audience what last cert you got, if any?

Aeva Black (10:16)
It’s the “if any” part, yeah. (Laughter) I considered a couple of certs back in the old MySQL days, early in my career. I never bothered with the Linux certs or the networking certs, because I could just log into a system and show that I knew my stuff.

Omkhar Arasaratnam (10:35)
Absolutely agree. Last question, Aeva. What’s your call to action for our listeners?

Aeva Black (10:42)
Well, for the listeners who are or work at a company, be a responsible consumer of open source. That means participating in the project so you have insight. It means vetting the code and staging it appropriately locally. If you’re not a large corporation but a member of a community, then my advice is to make sure you’re building your community with stable governance and documented norms, so that companies can understand how to work with you and you behave as a community in a predictable way. Predictable release cycles, predictable vulnerability management, all of those sorts of activities as an open source developer help grow the project. And leave breadcrumbs, leave gaps for new contributors to fill, and make sure you’re passing the ladder down to the next generation of contributors.

Omkhar Arasaratnam (11:38)
Excellent advice as always. Aeva Black, thank you so much for joining us on What’s in the SOSS?

Aeva Black (11:43)
Thanks so much for having me, Omkhar. See you around.

Announcer (11:46)
Thank you for listening to What’s in the SOSS? An OpenSSF podcast. Be sure to subscribe to our series of conversations on Spotify, Apple, Amazon, or wherever you get your podcasts. And to keep up to date on the Open Source Security Foundation community, join us online at OpenSSF.org/get involved. We’ll talk to you next time on What’s in the SOSS?