

What’s in the SOSS? Podcast #21 – Alpha-Omega’s Michael Winser and Catalyzing Sustainable Improvements in Open Source Security

December 10, 2024 | Podcast

Summary

In this episode, CRob talks to Michael Winser, Technical Strategist for Alpha-Omega, an associated project of the OpenSSF that works with open source software project maintainers to systematically find new, as-yet-undiscovered vulnerabilities in open source code – and get them fixed – to improve global software supply chain security.

Conversation Highlights

  • 01:00 – Michael shares his origin story into open source
  • 02:09 – How Alpha-Omega came to be
  • 03:48 – Alpha-Omega’s mission is catalyzing sustainable security improvements
  • 05:16 – The four types of investments Alpha-Omega makes to catalyze change
  • 11:33 – Michael expands on his “clean the beach” approach to impacting open source security
  • 16:41 – The 3F framework helps manage upstream dependencies effectively
  • 21:13 – Michael answers CRob’s rapid-fire questions
  • 23:06 – Michael’s advice to aspiring development and cybersecurity professionals
  • 24:44 – Michael’s call to action for listeners

Transcript

Michael Winser soundbite (00:01)
When some nice, well-meaning person shows up from a project that you can trust, it becomes a more interesting conversation. With that mindset, fascinating things happen. And if you imagine that playing itself out again and again and again, it becomes cultural.

CRob (00:18)
Hello, everybody, I’m CRob. I do security stuff on the internet. I’m also a community leader and the chief architect for the Open Source Security Foundation. One of the coolest things I get to do with the foundation is to host the OpenSSF’s “What’s in the SOSS?” podcast. In the podcast, we talk to leaders, maintainers and interesting people within the open source security ecosystem. This week we have a real treat. We’re talking with my pal, Michael Winser, AKA “one of the Michaels” from the Alpha-Omega project. Michael, welcome sir.

Michael Winser (00:52)
It’s great to be with you, CRob.

CRob (00:53)
So for those of us that may not be aware of you, sir, could you maybe give us your open source origin story?

Michael Winser (01:00)
I have to think about that because there’s so many different sort of forays, but I think that the origin-origin story is in 1985, I was at my first job. You know, I got the Minix book and it came with floppy disks of source code to an entire operating system and all the tools. And I’m like, wait, I get to do this? And I started compiling stuff and then I started porting it to different things and using the code and then just seeing how it worked. That was like a life-changing sort of beginning.

And then there was Google. You know, Google has a tremendous history of open source and a community and culture of embracing it. The last part of my work at Google was on open source supply chain security for Google’s vast supply chain, both in terms of producing and consuming. And so that’s really been another phase of the journey for me.

CRob (01:53)
So I bet things have changed quite a lot since 1985. And that’s not quite the beginning of everything. But speaking of beginnings and endings, you’re one of the leaders of a project called Alpha-Omega. Could you maybe tell us a little bit about that and kind of what AO is trying to do?

Michael Winser (02:09)
Sure. So Alpha-Omega started out as, sort of, two almost distinct things. One was at that moment of crisis when OpenSSF was created and various companies like Microsoft and Google were like, we’ve got to do something about this. And both Microsoft and Google, never letting a good crisis go to waste, put a chunk of money aside to say, whatever we do, how do we figure this stuff out? It’s going to take some money to fix things. Let’s put some money in and figure out what we’ll do with it later.

Separately, Michael Scovetta had been thinking about the problem and had written a paper titled, surprisingly enough, Alpha-Omega, thinking about how one might tackle it. The Alpha is sort of the most significant, most critical projects that we can imagine. And then the Omega is, what about all the hundreds of thousands of other projects?

And so the confluence of those two thoughts sat unrealized, unfulfilled, until I joined GOSST, the Google Open Source Security Team, and someone said, you should talk to this guy, Michael Scovetta. And that’s really how Alpha-Omega started: two guys named Michael sitting in a room talking about what we might do. And there’s been a lot of evolution in the thinking, the how, and the lessons learned. And that’s what we’re here to talk about today, I think.

CRob (03:31)
I remember that paper quite well in the beginnings of the foundation. Thinking more broadly, how does one try to solve a problem with the open source software supply chain? From an AO perspective, how do you approach this problem?

Michael Winser (03:48)
There are so many ways to approach this question, but I’m actually just going to start by summarizing our mission, because we spend a lot of time on it. As you know, I’m a bit of a zealot on mission, vision, strategy and roadmap thinking. And so our mission is to protect society by, critical word here, catalyzing sustainable security improvements to the most critical open source projects and ecosystems.

The words here are super important. Catalyzing. With whatever money we have on tap, it’s still completely inadequate to the scale of the problem we have, right? Like I jokingly like to sort of describe the software supply chain problem as sort of like Y2K, but without the same clarity of problem, solution or date.

It’s big, it’s deep, it’s poorly understood. So our goal is not to be some magical, huge and permanent endowment to fix all the problems of open source. And we’ll talk more about how it’s not just about putting money in there, right? But to catalyze change, and to catalyze change towards sustainable security. And so the word sustainable shows up in all the conversations, and it is really two sides of the same coin. When we talk about security and sustainability, they’re almost really the same thing.

CRob (05:03)
You mentioned money as a potential solution sometimes, but could you maybe talk about some of the techniques you use to try to achieve better security outcomes with the projects you’ve worked with?

Michael Winser (05:16)
Some of it was historically tripping over things and trying them out, right? And I think that was a key thing for us. But rather than trying to tell the origin stories of all the different strategies we’ve evolved, I’ll summarize where we’ve arrived. Alpha-Omega has now come to mean not the most critical projects and then the rest, but the highest points of leverage and then scalable solutions. So Alpha effectively means leverage and Omega means scale. And in that context, we’ve developed essentially a four-pronged strategy, four kinds of investment that we make to create change, to catalyze change.

And in no particular order, Category A is essentially staffing engagements at organizations that are able to apply that kind of leverage, where adding somebody whose job it is to worry about security can have that kind of impact. It’s kind of crazy how, when you make it someone’s job to worry about security, you elevate something from a tragedy of the commons, where it’s everybody’s job and nobody’s job, nobody’s the expert, nobody can decide anything, to something where, well, that person said we should do X, I guess we’re gonna do X, whatever X was.

And then having somebody whose job it is to say, okay, with all the thousands of security problems we have, we’re gonna tackle these two this year, developing that kind of theme, and then working with all those humans to create a culture around that change. Again, if it’s someone’s job to do so, it’s more likely to happen than if it’s nobody’s job to do it. So that’s Category A: staffing engagements at significant open source organizations that have both the resources to hire somebody and the leverage to have that person become effective. Right? And there’s a lot packaged up in “resources to hire someone.” Humans are humans. They want to have, you know, jobs, benefits, promotions, other crazy stuff. Right? I’ve given up on all those things, but that’s the world people live in.

And we don’t want to be an employer, we want to be a catalyst, right? We’re not here to create a giant organization of open source security. We’re here to cause other people to get there, and ultimately to wean themselves from us and have a sustainable model around that. And in fact, for those grants, we encourage, we discuss, we ask: how are you diversifying your funding so that you can start supporting this as a line item on your normal budget, as opposed to our sort of operational thing? And it’s a journey. So that’s Category A.

Category B has some interesting overlap, but it really speaks to what I think of as the app stores of software development, the package ecosystems, right? There is no greater force on the internet than a developer at some company whose boss is breathing down their neck and who has a thing to get done. They’ve got to get tab A into slot B. They can’t figure it out. They Google something, and it says do not use this code, it is terrible, you should not use it, but if you were to use it, npm install foo will get you tab A into slot B, right?

At this point, you can’t give them enough warnings, it doesn’t matter, they’re under pressure, they need to get something done, they’re installing foo, right? How can we elevate these package ecosystems so that organizations, individuals, publishers, consumers can all have better metadata, better security practices, and trust the statements that these packages are making about themselves and that other entities are making about these packages to start making informed decisions or even policy-based decisions about what is allowed or not allowed into a developer environment in some organization, right?

And that’s just a tiny part of it. There’s the whole package app store concept, where I am installing this thing and I expect the name to not be a name-squatted thing. I expect the versions to be reasonably accurate about not being changed underneath me. And a thousand little things that we just want to take for granted, even before somehow making it all secure, are such a point of leverage and criticality that we find investing in them worthwhile. And so that’s a category for us.
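To make the “policy-based decisions about what is allowed into a developer environment” concrete, here is a minimal sketch of such a gate. All the field and policy names (`signed`, `known_vulns`, `deny`, and so on) are hypothetical, illustrative of the idea rather than any real registry’s schema:

```python
# Hypothetical package metadata and org policy; the keys below ("signed",
# "known_vulns", "deny", ...) are illustrative, not a real registry's schema.
def allowed(pkg: dict, policy: dict) -> bool:
    """Decide whether a package may enter a developer environment."""
    if pkg["name"] in policy.get("deny", set()):
        return False  # explicitly banned by name
    if policy.get("require_signed") and not pkg.get("signed", False):
        return False  # unsigned artifacts rejected
    if pkg.get("known_vulns", 0) > policy.get("max_known_vulns", 0):
        return False  # too many known vulnerabilities
    return True

# Example policy: deny "foo", require signatures, allow zero known vulns.
policy = {"deny": {"foo"}, "require_signed": True, "max_known_vulns": 0}
```

With that policy, `allowed({"name": "foo", "signed": True}, policy)` is denied by name, while a signed, vulnerability-free `"bar"` passes. The point of the sketch is only that trustworthy metadata makes such rules enforceable at all.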

Category C is actually where most of our conversations start. And perhaps I’m getting ahead of our conversation, but it’s audits. We love paying for audits for an organization that is essentially ready to have an audit done. And there’s so much wrapped up in that: is an organization ready to have an audit? Do they want to have the audit? What are they going to do with the audit’s results? How do they handle them?

And so, as an early engagement, it’s remarkably cost-effective to find out whether that organization is an entire giant, complicated ecosystem of thousands of projects, or three to five amazing hackers who work nights and weekends on some really important library. That audit tells everybody an awful lot about where that project is on its security journey. And one of our underlying principles about doing grants is: make us want to do it again. How an organization responds to that audit is a very key indicator of whether they’re ready for more, could they handle it, what are they going to do with it, et cetera.

And then Category D, which you could name almost anything you want. This is us embracing the deep truth that we have no idea what we are doing. Collectively as an industry, nobody knows what we’re doing in this space. It’s hard, right? And so we think of this area as experimentation and innovation, a grab bag for things that we try. One of our stakeholders pointed out that we weren’t failing often enough early on in our life cycle. If you’re not trying enough things, you’re taking the easy bets and not learning important lessons. Like, okay, we’re gonna screw some things up, get ready!

Again, it’s a journey of learning every step along the way. And it’s not like we are recklessly throwing money around to see if you can just burn it into security. That doesn’t work; we tried. But we’re seeing what we can do, and those lessons are fun, too.

CRob (11:16)
Excellent. So in parallel with the four strategies you’ve used, you and I have talked a lot about your concept of “cleaning the beach.” Could you maybe talk a little bit more about that idea?

Michael Winser (11:33)
Absolutely. So one of our early engagements on the Omega side was to work with an organization called OpenRefactory that had developed some better-than-generic static analysis techniques. I don’t even know exactly how it works, and there are probably some humans in the loop to help manage the false positives and false negatives and things like that. They felt that they could scale this up to handle thousands of projects: go off and scan the source code to look for vulnerabilities previously not found in that source code, and then also generate patches, pull requests with fixes for those things as well.

And this is sort of the holy grail dream: we’ve got all this problem, if only we can just turn literally oil into energy into compute, into fixing all this stuff, right? And there are a lot of interesting things along the way there. So the first time, they went and did a scan of 3,000 projects, and came back and said, look at us, we scanned 3,000 projects, we found I don’t know how many vulnerabilities, we reported this many, this many were accepted. There’s a conversation there that we should get back to about the humans in the loop. And afterwards, I’m like, okay, if I try to tell anybody about this work, I don’t know what difference it makes to anybody’s lives.

And I realized that it was the moral equivalent of taking a…some kind of boat, maybe a rowboat, maybe a giant barge, out to the Pacific garbage patch, picking up a lot of plastic, and coming back and saying, look at all this plastic I brought back. And I’m like, that’s good, and maybe you’ve developed a technique for getting plastic at scale, but you’re thousands of orders of magnitude off. Literally there are gigatons, teratons of stuff out there, and you brought back a little bit. I need to be more short-term in terms of getting work and impact. We care about continuous results and learnings, as opposed to, great, we found a way to turn the next trillion dollars into a lot of scans. And so we thought about this a lot.

And it was sort of around the same time as the somewhat terrifying XZ situation, right? And I realized that XZ showed us a lot about the frailty of projects because of the humanness of people involved. But it also showed us that, and I’m going to be kind of stern about this, open source projects that use upstream dependencies like XZ are treating those dependencies exactly the way that we complain about corporations using open source for free.

They assume that this source code is being taken care of by somebody else and that it comes down from the sky on a platter with unicorns and rainbows. Like, how many people in these organizations that use XZ, whether they were for-profit entities or whatever, were paying attention upstream and saying, hey, I wonder if any of our projects needs our help? I wonder if we should spend some more time working on our upstream? Said nobody ever.

And so coincidentally, we wanted to do some work with someone we met at PyCon, this gentleman named Jarek Potiuk who’s on the PMC for Apache Airflow. And he wanted us to talk about our security work at the Airflow conference. And I’m like, well, we’ve got to talk about something. And so we start talking about Airflow. And he was already down that road of looking at his dependencies and trying to analyze them a little bit. And we said, what can we do here?

And so, to bring this back to the Pacific garbage patch: we’d all love for the Pacific garbage patch to go away, right? But day to day, we go to the beach. And wouldn’t it be nice if we could talk about a section of the beach as being, not perfectly okay, but free of a common set of risks? So we asked ourselves, can we do that? And he’s like, well, I know exactly how many dependencies Airflow has in total. It has 719 dependencies.

And we asked ourselves the question: has anybody ever done a complete audit across those dependencies? Where complete means across all 719, not a complete code analysis of every single piece of those projects. And the answer was no. And we said, well, we’re going to make the answer yes. So we started a project to bring automatic scanning to that, so that OpenRefactory, instead of trying to scan 3,000 arbitrary projects or the top 3,000 dependencies, would pick those 718 and scan them. And Jarek and his team put together some scripts to go off and pull key facts about projects that can be used to assess risk on an ongoing basis, in terms of whether we need to get involved, or should we do something, or should we worry about this or not, right?

And it’s everything from understanding the governance, to the size of the project’s contribution pool, to its vulnerability history, right? Just building up a picture, where the goal is not to audit the source code of each of these projects, because that’s actually not Airflow’s job, and they wouldn’t do a good job of it per se. But to understand, across their dependencies, where there is risk and where they might need to do something about it.
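The kind of fact-gathering Michael describes could be sketched as a small script. This is a minimal illustration, not Airflow’s actual tooling: the fields and the scoring thresholds are assumptions chosen to mirror the signals he names (governance, contributor pool, vulnerability history, private reporting):

```python
from dataclasses import dataclass

@dataclass
class DepFacts:
    """Hypothetical per-dependency facts, mirroring the signals discussed."""
    name: str
    maintainer_count: int        # size of the contribution pool
    has_governance_doc: bool     # is governance documented?
    cves_last_3_years: int       # vulnerability history
    has_private_reporting: bool  # is PVR enabled?

def risk_score(d: DepFacts) -> int:
    """Crude additive risk heuristic; the weights are illustrative."""
    score = 0
    if d.maintainer_count <= 2:
        score += 3  # bus-factor risk, the XZ lesson
    if not d.has_governance_doc:
        score += 1
    score += min(d.cves_last_3_years, 5)  # cap so history doesn't dominate
    if not d.has_private_reporting:
        score += 2  # nowhere to report a vulnerability privately
    return score

def heat_map(deps):
    """Sort dependencies by descending risk so the riskiest surface first."""
    return sorted(deps, key=risk_score, reverse=True)
```

Run over a dependency graph, a list like this surfaces the handful of projects where a human should show up and offer help, which is exactly the triage Michael describes.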

From that came another concept that I really like. Going back to, let’s not pretend this code came down from the sky on a silver platter with unicorns: what are we supposed to do if we see risk in one of our upstream dependencies? And from that, the framework that came out was essentially the three F’s. You either need to fix, fork or forego those dependencies. There’s another way of saying forego, but we’ll stick with forego. There’s a fourth one, which is fund, and we’ll talk about why that is not actually something at the disposal of most projects.

The fix part is kind of interesting. The fork part is an expensive decision. It’s saying, you know, they’re not doing it, but we need this and we can’t get something else. We can’t forego it because it’s whatever. So I guess it’s ours now, right? It’s taking responsibility for the code that you use, because every dependency you use, unless you’re using some very sophisticated sandboxing, has basically total access to your build environment and total access to your production environment. So it’s your code, it’s your responsibility.

So with that mindset, fascinating things happened. When an automated scan from OpenRefactory found a new vulnerability in one of the dependencies, they would report it through the project’s private vulnerability reporting, or our vetting would notice that these people don’t have private vulnerability reporting at all.

And so one of the fixes was helping them turn on PVR, right? But let’s say they had PVR: they would file the vulnerability. And because it looked like it came from a machine, right? Unfortunately, open source maintainers have been overwhelmed by well-meaning people with bots and a desire to become security researchers, reporting, let’s just say, not the most important vulnerabilities on the planet.

And that’s a bad signal-to-noise ratio for them to deal with. So some of these reports were getting ignored. But then, when an Apache Airflow maintainer would show up on the report and say, hey, my name is Blah, I’m from Apache, we depend upon you, would you be open to fixing this vulnerability, we would really greatly appreciate it. In other words, a human shows up and behaves like a human. You’d be amazed at what happened. People are like, my God, you know I exist? You’re from Apache Airflow? I’ve heard of you guys. How can I help? I’ll get right on that. The response changed dramatically. And that’s a key lesson, right?

And if I were to describe one of my goals for this continued effort, it’s that within the Airflow community there’s an adopt-a-dependency mindset, where there’s at least one person for every dependency. And I mean transitively, not just the top level, the whole graph, because you can’t assume that your transitive dependencies are behaving the same way as you. It’s easy when it’s not a crisis, but when it’s a crisis, right?

Having somebody you know talk to you about the situation and offer to help is very different than, oh my God, you’ve shown up on somebody’s radar as having a critical vulnerability, and now everybody and their dog is asking you about it. Lawyer-grams are coming. We’ve seen that pattern, right? But then Jarek from Apache Airflow shows up and says, hey, Mary, sorry you’re under this stress, we’re actually keen to help you. Who’s going to say no to that kind of help when it’s somebody they already know? Whereas the XZ situation has effectively taught people to say, I don’t know you, why am I letting you into my project? How do I know you’re not some hacker from some bad actor, right?

That’s the mindset: let’s pick some beaches to focus on, understand the scope, and then take that 3F approach, right? And so Airflow has changed their security roadmap for 2025, and it includes doing work with, on behalf of, and towards their dependencies. They’ve taken some dependencies out, so they’ve done the forego. And some of the things they’re asking dependencies to do is just turn on PVR, or maybe do some branch protection, some of the things you might describe in the OpenSSF security baseline, right?

Things people don’t think they’re competent to do, or haven’t worried about yet, or whatever. But when some nice, well-meaning person shows up from a project that you can trust, it becomes a more interesting conversation. And if you imagine that playing itself out again and again and again, it becomes cultural.

CRob (21:01)
Yeah, that’s amazing. Thank you for sharing; those are some amazing insights. Well, let’s move on to the rapid-fire section of the podcast! First hard question. Spicy or mild food?

Michael Winser (21:13)
Oh, I think both. Like I don’t want to have spicy every single day, but I do enjoy a nice spicy pad Thai or something like that or whatever. I’m, you know, variety is the spice of life. So there you go.

CRob (21:25)
Excellent. Fair enough. Very contentious question: Vi or Emacs?

Michael Winser (21:32)
I confess to Vi as my default console editor. But back in that 1985 era, I did port Jove, Jonathan’s Own Version of Emacs, which is still alive today. I used that. And then, in my Microsoft days, I used this tool called Epsilon, an Emacs-derived editor for OS/2 and DOS. Its key bindings are all locked in my brain and worked really well. But then full-grown Emacs became available to me, and the key bindings were subtly different, and my brain skidded off the tracks. And then as I became a product manager, the need became more casual, and Vi has become just convenient enough. I still use the Emacs key bindings on the macOS command line to move around.

CRob (22:19)
Oh, very nice. What’s your favorite adult beverage?

Michael Winser (22:23)
I think it’s beer. It really is.

CRob (22:25)
Beer’s great. A lot of variety, a lot of choices.

Michael Winser (22:28)
I think a good hefeweizen, a wheat beer, would be very nice.

CRob (22:33)
Okay, and our last most controversial question: tabs or spaces?

Michael Winser (22:39)
Oh, spaces. (Laughter) I’m not even like, like I am a pretty tolerant person, but there’s just no way it ends well with tabs.

CRob (22:50)
(Laughter) Fair enough, sir. Well, thank you for playing rapid fire. And as we close down, what advice do you have for someone that’s new or trying to get into this field today of development or cybersecurity?

Michael Winser (23:06)
The first piece of advice I have is that it’s about human connections, right? So much of what we do is about transparency and trust. Transparency is about things that happen in the open, and trust is about behaving in ways that cause people to want to do things with you again. There’s a predictability to trust too, in terms of not doing randomly weird things.

And then there’s also, you know, trust is built through shared positive experiences, or non-fatal outcomes of challenges. So for anybody wanting to get into this space: show up as a human being, be open about who you are and what you’re trying to do, and get to know the people. And there’s that journey of humility, of listening to people even when you might think you know more than they do, and you might even be right, but it’s their work as well. So listen to them along the way. That’s personally one of my constant challenges. I’m an opinionated person with a lot of things to say. Really, it’s true.

That’s very generic guidance, but if you want to just get started, it’s pretty easy. Pick something you know anything about, show up in some project, and listen, learn, ask questions, and then find some way to help. Taking notes in a working group meeting is a pretty powerful way to build trust: this person seems to take notes that accurately represent what we tried to say in this conversation, in fact better than what we said. “We trust this person to represent our thoughts” is a pretty powerful first step.

CRob (24:32)
Excellent. I really appreciate you sharing that. And to close, what call to action do you have for our listeners? What would you like them to take away or do after they listen to this podcast?

Michael Winser (24:44)
I would like them to apply the 3F framework to their upstream dependencies. I would like them to look at their dependencies as a giant pile of poorly understood risk, and not just through the lens of how many unpatched vulnerabilities they have in their current application because some SBOM-analyzing tool told them. From a longer-term organizational and human risk perspective, go look at your dependencies and their dependencies and their dependencies, and build up a heat map of where you think you should go apply that 3F framework.

And if you truly feel like you can’t do any one of those things, right, because you’re not competent to go fix or fork and you have no choice but to use the thing so you can’t forego it, right, then think about funding somebody who can.
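The call to action can be sketched as a tiny decision helper. The ordering of checks below is one interpretation of Michael’s description (fix when upstream will engage, forego when the dependency is replaceable, fork as the expensive last resort, fund when none of those are open to you), not a definitive rule:

```python
def three_f_decision(upstream_responsive: bool,
                     replaceable: bool,
                     can_maintain_fork: bool) -> str:
    """One reading of the 3F framework for a risky upstream dependency:
    fix, forego, or fork, with 'fund' as the fallback when none apply."""
    if upstream_responsive:
        return "fix"     # show up as a human and help upstream land it
    if replaceable:
        return "forego"  # drop or swap out the dependency
    if can_maintain_fork:
        return "fork"    # expensive: it's ours now, our responsibility
    return "fund"        # pay someone who can do one of the above
```

For example, a dependency with responsive maintainers maps to "fix", while an irreplaceable, unmaintained one that you cannot fork yourself maps to "fund".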

CRob (25:34)
Excellent words of wisdom. Michael, thank you for your time and all of your contributions, through your history and now through Alpha-Omega. We really appreciate you stopping by today.

Michael Winser (25:45)
It was my pleasure and thank you for having me. I’ve enjoyed this tremendously. It would be a foolish thing for me to let this conversation end without mentioning the three people at Alpha-Omega who really, without whom we’d be nowhere, right? And so, you know, Bob Callaway, Michael Scovetta, and Henri Yandell. And then there’s a support crew of other people as well, without whom we wouldn’t get anything done, right?

I get to be, in many ways, the sort of first point of contact and the loud point of contact. We also have Mila from Amazon, and we have Michelle Martineau and Tracy Li, who are our LF people. And again, this is what makes it work for us, is that we can get things done. I get to be the sort of loud face of it, but there’s a really great team of people whose wisdom is critical to how we make decisions.

CRob (26:32)
That’s amazing. We have a community helping the community. Thank you.

Michael Winser (26:35)
Thank you.

Announcer (26:37)
Like what you’re hearing? Be sure to subscribe to What’s in the SOSS? on Spotify, Apple Podcasts, AntennaPod, Pocket Casts or wherever you get your podcasts. There’s lots going on with the OpenSSF and many ways to stay on top of it all! Check out the newsletter for open source news, upcoming events and other happenings. Go to OpenSSF dot org slash newsletter to subscribe. Connect with us on LinkedIn for the most up-to-date OpenSSF news and insight. And be a part of the OpenSSF community at OpenSSF dot org slash get involved. Thanks for listening, and we’ll talk to you next time on What’s in the SOSS?