What’s in the SOSS? Podcast #21 – Alpha-Omega’s Michael Winser and Catalyzing Sustainable Improvements in Open Source Security

By Podcast

Summary

In this episode, CRob talks to Michael Winser, Technical Strategist for Alpha-Omega, an associated project of the OpenSSF that works with open source software project maintainers to systematically find new, as-yet-undiscovered vulnerabilities in open source code – and get them fixed – to improve global software supply chain security.

Conversation Highlights

  • 01:00 – Michael shares his origin story into open source
  • 02:09 – How Alpha-Omega came to be
  • 03:48 – Alpha-Omega’s mission is catalyzing sustainable security improvements
  • 05:16 – The four types of investments Alpha-Omega makes to catalyze change
  • 11:33 – Michael expands on his “clean the beach” approach to impacting open source security
  • 16:41 – The 3F framework helps manage upstream dependencies effectively
  • 21:13 – Michael answers CRob’s rapid-fire questions
  • 23:06 – Michael’s advice to aspiring development and cybersecurity professionals
  • 24:44 – Michael’s call to action for listeners

Transcript

Michael Winser soundbite (00:01)
When some nice, well-meaning person shows up from a project that you can trust, it becomes a more interesting conversation. With that mindset, fascinating things happen. And if you imagine that playing itself out again and again and again, it becomes cultural.

CRob (00:18)
Hello, everybody, I’m CRob. I do security stuff on the internet. I’m also a community leader and the chief architect for the Open Source Security Foundation. One of the coolest things I get to do with the foundation is to host the OpenSSF’s “What’s in the SOSS?” podcast. In the podcast, we talk to leaders, maintainers and interesting people within the open source security ecosystem. This week we have a real treat. We’re talking with my pal, Michael Winser, AKA “one of the Michaels” from the Alpha-Omega project. Michael, welcome sir.

Michael Winser (00:52)
It’s great to be with you, CRob.

CRob (00:53)
So for those of us that may not be aware of you, sir, could you maybe give us your open source origin story?

Michael Winser (01:00)
I have to think about that because there’s so many different sort of forays, but I think that the origin-origin story is in 1985, I was at my first job. You know, I got the Minix book and it came with floppy disks of source code to an entire operating system and all the tools. And I’m like, wait, I get to do this? And I started compiling stuff and then I started porting it to different things and using the code and then just seeing how it worked. That was like a life-changing sort of beginning.

And then I think the next foray was at Google, working in open source. You know, Google has a tremendous history of open source and a community and culture of embracing it. And the last part of my work at Google was working on open source supply chain security for Google’s vast supply chain, both in terms of producing and consuming. And so that’s really been another phase of the journey for me.

CRob (01:53)
So I bet things have changed quite a lot since 1985. And that’s not quite the beginning of everything. But speaking about beginnings and endings, you’re one of the leaders of a project called Alpha-Omega. Could you maybe tell us a little bit about that and kind of what AO is trying to do?

Michael Winser (02:09)
Sure. So Alpha-Omega started out as, sort of, two almost distinct things. One was at that moment of crisis when OpenSSF was created and various companies like Microsoft and Google were like, we’ve got to do something about this. And both Microsoft and Google, never ones to let a good crisis go to waste, put a chunk of money aside to say, whatever we do, how do we figure this stuff out? It’s going to take some money to fix things. Let’s put some money in and figure out what we’ll do with it later.

Separately, Michael Scovetta had been thinking about the problem and had written a paper titled, surprisingly enough, Alpha-Omega, thinking about how one might tackle it. The how-one-might was looking at the Alpha, which is sort of like the most significant, most critical projects that we can imagine. And then the Omega is, what about all the hundreds of thousands of other projects?

And so that confluence of those two thoughts sat unrealized, unfulfilled, until I…I joined the GOSST team at Google and someone said, you should talk to this guy, Michael Scovetta. And that’s really how Alpha-Omega started: two guys named Michael sitting in a room talking about what we might do. And there’s been a lot of evolution of the thinking and how to do it and lessons learned. And that’s what we’re here to talk about today, I think.

CRob (03:31)
I remember that paper quite well from the beginnings of the foundation. Thinking more broadly, how does one try to solve a problem with the open source software supply chain? From an AO perspective, how do you approach this problem?

Michael Winser (03:48)
There’s so many ways to this question, but I’m actually just going to start with summarizing our mission, because I think it really, we spend a lot of time, as you know, I’m a bit of a zealot on the mission, vision, strategy and roadmap thinking. And so our mission is to protect society by, critical word here, catalyzing sustainable security improvements to the most critical open source projects and ecosystems.

The words here are super important. Catalyzing. With whatever money we have on tap, it’s still completely inadequate to the scale of the problem we have, right? Like I jokingly like to sort of describe the software supply chain problem as sort of like Y2K, but without the same clarity of problem, solution or date.

It’s big, it’s deep, it’s poorly understood. So our goal is not to be sort of this magical, huge and permanent endowment to fix all the problems of the open source. And we’ll talk more about how it’s not about just putting money in there, right? But to catalyze change and to catalyze change towards sustainable security. And so the word sustainable shows up in all the conversations and it is really two sides of the same coin. when we talk about security and sustainability, they’re almost really the same thing.

CRob (05:03)
You mentioned money is a potential solution sometimes, but maybe could you talk about some of the techniques to try to achieve some better security outcomes with the projects you’ve worked with?

Michael Winser (05:16)
Some of it was sort of historically tripping over things and trying them out, right? And I think that was a key thing for us. But rather than trying to tell all the origin stories of all the different strategies that we’ve evolved, I’ll summarize where we’ve arrived. Alpha-Omega has now come to mean not the most critical projects and then the rest, but the highest points of leverage and then scalable solutions. And so I use those two words: Alpha effectively means leverage and Omega means scale. And in that context, we’ve developed essentially a four-pronged strategy, four kinds of investment that we make to create change, to catalyze change.

And in no particular order, Category A is essentially staffing engagements at organizations that are able to apply that kind of leverage, where adding somebody whose job it is to worry about security can have that kind of impact. It’s kind of crazy how, when you make it someone’s job to worry about security, you elevate something from a tragedy of the commons, where it’s everybody’s job and nobody’s job, nobody’s the expert, nobody can decide anything, to something where, well, that person said we should do X, I guess we’re gonna do X, whatever X was.

And then having somebody whose job it is to say, okay, with all the thousands of security problems we have, we’re gonna tackle these two this year, and developing that kind of theme, and then working with all those humans to create a culture around that change. Again, if it’s someone’s job to do so, it’s more likely to happen than if it’s nobody’s job to do it. So Category A: staffing engagements at significant open source organizations that have both the resources to hire somebody and the leverage to have that person become effective. Right? And there’s a lot packaged up in that “resources to hire someone.” Like, humans are humans. They want to have, you know, jobs, benefits, promotions, other crazy stuff. Right? I’ve given up on all those things, but you know, that’s the world that people live in.

And we, we don’t want to be an employer, we want to be a catalyst, right? And so we’re not here to sort of create a giant organization of open source security. We’re here to cause other people to get there and ultimately to wean themselves from us and to have a sustainable model around that. And in fact, so for those grants, we encourage, we discuss, we ask, how are you diversifying your funding so that you can start supporting this as a line item on your normal budget as opposed to our sort of operational thing? And that’s, it’s a journey. So that’s category A.

Category B has some interesting overlap, but it really speaks to what I think of as the app stores of software development, the package ecosystems, right? There is no greater force on the internet than a developer working at some company with their boss breathing down their neck and they’ve got a thing to get done. They got to get tab A into slot B. They can’t figure it out. They Google something, and it says do not use this code, it is terrible, you should not use it, but if you were to use it, npm install foo will get you tab A into slot B, right?

At this point, you can’t give them enough warnings, it doesn’t matter, they’re under pressure, they need to get something done, they’re installing foo, right? How can we elevate these package ecosystems so that organizations, individuals, publishers, consumers can all have better metadata, better security practices, and trust the statements that these packages are making about themselves and that other entities are making about these packages to start making informed decisions or even policy-based decisions about what is allowed or not allowed into a developer environment in some organization, right?

And then that’s just a tiny part of it. So like the whole package app store concept where you essentially, I am installing this thing and I expect some truths about that name to be not a name name-squatted thing. I expect the versions to be reasonably accurate about not being changed underneath me. And a thousand little things that we just want to take for granted, even without worrying about them somehow making it all be secure, is such a point of leverage and criticality that we find investing in that worthwhile. And so that’s a category for us.

Category C is actually where most of our conversations start. And perhaps I’m getting ahead of our conversation, but it’s with audits. And we love paying for audits of an organization that essentially is ready to have an audit done. And there’s so much that gets wrapped up in: is an organization ready to have an audit? Do they want to have the audit? What are they going to do with the audit’s results? How do they handle them?

And so as an early engagement, it’s remarkably cost-effective to like find out whether that organization is an entire giant, complicated ecosystem of thousands of projects and things like that, or three to five amazing hackers who work nights and weekends on some really important library. That audit tells everybody an awful lot about where that project is on their journey of security. And one of our underlying principles about doing grants is make us want to do it again. And how an organization responds to that audit is a very key indicator, whether they’re ready for more, could they handle it, what are they going to do with it, et cetera.

And then category D, you could name it almost anything you want. This is us embracing the deep truth that we have no idea what we are doing. Collectively as an industry, nobody knows what we’re doing in this space. It’s hard, right? And so this is an area where we think of it as experimentation and innovation. And it’s a grab bucket for things that we try to do. And one of our stakeholders pointed out that we weren’t failing often enough early on in our life cycle. And it was like, if you’re not trying enough things, you’re taking the easy money, easy bets and not learning important lessons. Like, okay, we’re gonna screw some things up, get ready!

Again, it’s a journey of learning every step along the way. And it’s not like we are recklessly throwing money around to see if you can just burn it into security. That doesn’t work, we tried. But we’re seeing what we can do, and those lessons are fun, too.

CRob (11:16)
Excellent. So in parallel with the four strategies you’ve used, you and I have talked a lot about your concept of “clean the beach.” Could you maybe talk a little bit more about your idea of cleaning the beach?

Michael Winser (11:33)
Absolutely. So one of our early engagements on the Omega side was to work with an organization called OpenRefactory that had developed some better-than-generic static analysis techniques. And I don’t even know exactly how it works, and there are probably some humans in the loop to help manage the false positives and false negatives and things like that. They felt that they could scale this up to handle thousands of projects, to go off and scan the source code to look for vulnerabilities previously not found in that source code, and then also to generate patches, pull requests for fixes to those things as well.

And this is, sort of, the holy grail dream of: we’ve got all this problem, if only we could just turn literally oil into energy into compute, into fixing all this stuff, right? And there’s a lot of interesting things along the way there. So the first time, they went and did a scan of 3,000 projects, and came back and said, look at us, we scanned 3,000 projects, we found I don’t know how many vulnerabilities, we reported this many, this many were accepted. There’s a conversation there that we should get back to about the humans in the loop. And after it, I’m like, okay, if I try to tell anybody about this work, I don’t know what difference it makes to anybody’s lives.

And I realized that it was the moral equivalent of taking a…some kind of boat, maybe a rowboat, maybe a giant barge, out to the Pacific garbage patch and picking up a lot of plastic and coming back and saying, look at all this plastic I brought back. And I’m like, that’s good. And maybe you’ve developed a technique for getting plastic at scale, but thousands of orders of magnitude off, like literally it’s gigatons, teratons of stuff out there. And you brought back a little bit. I’m like, I need to be more short-term in terms of getting work and impact. And we care about continuous results and learnings as opposed to like, great, we found a way to turn the next trillion dollars into like a lot of scans and things like that. And so we thought about this a lot.

And it was sort of around the same time as the somewhat terrifying XZ situation, right? And I realized that XZ showed us a lot about the frailty of projects because of the humanness of people involved. But it also showed us that, and I’m going to be kind of stern about this, open source projects that use upstream dependencies like XZ are treating those dependencies exactly the way that we complain about corporations using open source for free.

They assume that this source code is being taken care of by somebody else and that it comes down from the sky on a platter with unicorns and rainbows and whatever else. Like, how many people in these organizations that use XZ, whether they were for-profit entities or whatever, were paying attention upstream and saying, hey, I wonder if any of our upstream projects needs our help? I wonder if we should spend some more time working on our upstream? Said nobody ever.

And so coincidentally, we wanted to do some work with someone we met at PyCon, this gentleman named Jarek Potiuk who’s on the PMC for Apache Airflow. And he wanted us to talk about our security work at the Airflow conference. And I’m like, well, we’ve got to talk about something. And so we start talking about Airflow. And he was already down that road of looking at his dependencies and trying to analyze them a little bit. And we said, what can we do here?

And so bring this back to Pacific garbage patch, right? We’d all love for the Pacific garbage patch to go away, right? But day to day, we go to the beach. And wouldn’t it be nice if we could talk about a section of the beach as being not perfectly okay, but free of a common set of risks, right? So we thought about, so can we do that? And he’s like, well, I know exactly how many dependencies total Airflow has. It has 719 dependencies.

And we asked ourselves the question, has anybody ever done a complete audit across those dependencies? Where complete is across all 719, not a complete code analysis of every single piece of those projects. And the answer was no. And we said, well, we’re going to make the answer yes. And so we started a project to go and bring automatic scanning to that so that OpenRefactory instead of trying to scan 3,000 arbitrary projects or the top 3,000 or the top 3,000 dependencies, they pick 718 and scan those. And Jarek and his team put together some scripts to go off and pull key facts about projects that can be used to assess risk on an ongoing basis in terms of whether we need to get involved or should we do something or should we worry about this or not, right?

And it’s everything from understanding the governance to the size of the contribution pool to the project, to its vulnerability history, right? And just building up a picture where the goal is not to sort of audit the source code of each of these projects, because that’s actually not Airflow’s job, right? And they wouldn’t do a good job of it per se. But to understand across their dependencies where there is risk and where they might need to do something about it.

From that came another concept that I really like. Going back to the, let’s not pretend that this code came down from the sky on a silver platter with unicorns: what are we supposed to do about it if we see risk in one of our upstream dependencies? And from that, the framework that came out was essentially the three F’s. You either need to fix, fork or forego those dependencies. There’s another way of saying forego, but we’ll stick with forego. There’s a fourth one, which is fund, and we’ll talk about why that is not actually something at the disposal of most projects.

The fix part is kind of interesting. The fork part is an expensive decision. It’s saying, you know, they’re not doing it, but we need this and we can’t get something else. We can’t forego it because it’s whatever. So I guess it’s ours now, right? And taking responsibility for the code that you use, because every dependency you use, right, unless you’re using some very sophisticated sandboxing, every dependency you use has basically total access to your build environment and total access to your production environment. So it’s your code, it’s your responsibility.

So with that mindset, fascinating things happened. When an automated scan from OpenRefactory found a new vulnerability in one of the dependencies, they would report it through that project’s private vulnerability reporting, or we would notice that these people don’t have private vulnerability reporting.

And so one of the fixes was helping them turn on PVR, right? But let’s say they had PVR: they would file the vulnerability. And because it looked like it came from a machine, right? Unfortunately, open source maintainers have been overwhelmed by well-meaning people with bots and a desire to become a security researcher, with a lot of, let’s just say, not the most important vulnerabilities on the planet.

And that’s a lot of signal-to-noise for them to deal with. So some of these reports were getting ignored, but then when an Apache Airflow maintainer would show up on the report and say, hey, my name is “Blah,” I’m from Apache, we depend upon you, would you be open to fixing this vulnerability, we would really greatly appreciate it. In other words, a human shows up and behave like a human. You’d be amazed at what happened. People are like, my God, you know I exist? You’re from Apache Airflow, I heard it, you guys. How can I help? I’ll put it right on like that, right? Like, the response changed dramatically. And that’s a key lesson, right?

And if I were to describe one of my goals for this sort of continued effort, it’s that within the Airflow community there’s an adopt-a-dependency mindset, where there’s somebody, at least one person, for every dependency. And I mean transitively; it’s not just the top level, it’s the whole graph, because you can’t assume that your transitive dependencies are behaving the same way as you. It’s easy when it’s not a crisis, but when it’s a crisis, right?

Having somebody you know talk to you about the situation and offer to help is very different than, oh my God, you’ve shown up on somebody’s radar as having a critical vulnerability and now everybody and their dog is asking you about this. Lawyer-grams are coming. We’ve seen that pattern, right? But then Jarek from Apache Airflow shows up and says, hey, Mary, sorry you’re under this stress. We’re actually keen to help you as well. You know, who’s going to say no to that kind of help when it’s somebody they already know? Whereas the XZ situation has effectively taught people to say, I don’t know you, why am I letting you into my project? How do I know you’re not some hacker working for some bad actor, right?

That mindset of let’s pick some beaches to focus on, understand the scope of that, and then take that 3F mindset, right? And so Airflow has changed their security roadmap for 2025 and that includes doing work with, on behalf for, towards their dependencies. They’ve taken some dependencies out, so they’ve done it forego. And some of the things they’re asking them to do is just turn on PDR or maybe do some branch protection, some of the things that you might describe in the open SSF space line for security, right?

That people don’t think they know they’re competent to do or haven’t worried about it yet or whatever. But when some nice, well-meaning person shows up from a project that you can trust, it becomes a more interesting conversation. And if you imagine that playing itself out again and again and again, it becomes cultural.

CRob (21:01)
Yeah, that’s amazing. Thank you for sharing. That’s some amazing insights. Well, let’s move on to the rapid fire section of podcast! First hard question. Spicy or mild food?

Michael Winser (21:13)
Oh, I think both. Like I don’t want to have spicy every single day, but I do enjoy a nice spicy pad Thai or something like that or whatever. I’m, you know, variety is the spice of life. So there you go.

CRob (21:25)
Excellent. Fair enough. Very contentious question: Vi or Emacs?

Michael Winser (21:32)
I confess to Vi as my default console editor. Back in that 1985 era, I did port Jove, Jonathan’s Own Version of Emacs, which is still alive today, and I used that. And then, in my Microsoft days, I used this tool called Epsilon, which was an OS/2 and DOS Emacs-derived editor. Its key bindings are all locked in my brain and worked really well. But then full-grown Emacs became available to me, and the key bindings were subtly different, and my brain skidded off the tracks. And then as I became a product manager, the need became more casual, and so Vi has become just convenient enough. I still use the Emacs key bindings on the macOS command line to move around.

CRob (22:19)
Oh, very nice. What’s your favorite adult beverage?

Michael Winser (22:23)
I think it’s beer. It really is.

CRob (22:25)
Beer’s great. A lot of variety, a lot of choices.

Michael Winser (22:28)
I think a good hefeweizen, a wheat beer, would be very nice.

CRob (22:33)
Okay, and our last most controversial question: tabs or spaces?

Michael Winser (22:39)
Oh, spaces. (Laughter) I’m not even like, like I am a pretty tolerant person, but there’s just no way it ends well with tabs.

CRob (22:50)
(Laughter) Fair enough, sir. Well, thank you for playing rapid fire. And as we close down, what advice do you have for someone that’s new or trying to get into this field today of development or cybersecurity?

Michael Winser (23:06)
The first piece of advice I would have is it’s about human connections, right? Like, so much of what we do is about transparency and trust, right? Transparency is about things that happen in the open, and trust is about behaving in ways that cause people to want to do things with you again, right? There’s a predictability to trust too, in terms of not doing randomly weird things and things like that.

And so, and then there’s also, you know, trust is built through shared positive experiences or non-fatal outcomes of challenges. So I think that anybody wanting to get into this space, showing up as a human being, being open about who you are and what you’re trying to do, and getting to know the people, and that sort of journey of humility of listening to people who you might think you know more than they do and you might even be right, but it’s their work as well. And so listening to them along the way, that’s personally one of my constant challenges. I’m an opinionated person with a lot of things to say. Really, it’s true.

It’s very generic guidance. I think that if you want to just get started, it’s pretty easy. Pick something you know anything about, show up there in some project and listen, learn, ask questions and then find some way to help. Taking notes in a working group meeting, it’s a pretty powerful way to build trust in terms of, this person seems to take notes that accurately represent what we tried to say in this conversation. In fact, better than what we said. We trust this person to represent our thoughts is a pretty powerful first step.

CRob (24:32)
Excellent. I really appreciate you sharing that. And to close, what call to action do you have for our listeners? What would you like them to take away or do after they listen to this podcast?

Michael Winser (24:44)
I would like them to apply the 3F framework to their upstream dependencies. I would like them to look at their dependencies as if they were a giant pile of poorly understood risk, and not just through the lens of how many vulnerabilities do I have unpatched in my current application because of some, you know, SBOM-analyzing tool telling me. But from a longer-term organizational and human risk perspective, go look at your dependencies and their dependencies and their dependencies, and build up a heat map of where you think you should go off and apply that 3F framework.

And if you truly feel like you can’t do any one of those things, right, because you’re not competent to go fix or fork and you have no choice but to use the thing so you can’t forego it, right, then think about funding somebody who can.
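
As one hedged illustration of where such a heat map could start, and not something prescribed in the episode, the sketch below batch-queries the public OSV.dev API for known advisories across a small, assumed list of Python dependencies. A fuller version would walk the transitive graph and layer in sustainability signals before deciding where to fix, fork, forego or fund.

    import json
    import urllib.request

    OSV_BATCH_URL = "https://api.osv.dev/v1/querybatch"

    # Assumed direct dependencies; a real run would read these from a lockfile
    # and then expand to the full transitive graph.
    DEPENDENCIES = [
        ("requests", "2.25.0"),
        ("pyyaml", "5.3.1"),
        ("jinja2", "2.11.2"),
    ]

    def known_advisories(deps):
        """Query OSV.dev in one batch and return {package_name: advisory_count}."""
        queries = [
            {"package": {"name": name, "ecosystem": "PyPI"}, "version": version}
            for name, version in deps
        ]
        body = json.dumps({"queries": queries}).encode()
        req = urllib.request.Request(
            OSV_BATCH_URL, data=body, headers={"Content-Type": "application/json"}
        )
        with urllib.request.urlopen(req) as resp:
            results = json.load(resp)["results"]
        return {deps[i][0]: len(results[i].get("vulns", [])) for i in range(len(deps))}

    if __name__ == "__main__":
        for package, count in known_advisories(DEPENDENCIES).items():
            # Known-advisory count is only one axis of the heat map; pair it with
            # maintenance and governance signals before picking a 3F action.
            print(f"{package}: {count} known advisories")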

CRob (25:34)
Excellent words of wisdom. Michael, thank you for your time and all of your contributions through your history and now through the Alpha and Omega projects. So we really appreciate you stopping by today.

Michael Winser (25:45)
It was my pleasure and thank you for having me. I’ve enjoyed this tremendously. It would be a foolish thing for me to let this conversation end without mentioning the three people at Alpha-Omega who really, without whom we’d be nowhere, right? And so, you know, Bob Callaway, Michael Scovetta, and Henri Yandell. And then there’s a support crew of other people as well, without whom we wouldn’t get anything done, right?

I get to be, in many ways, the sort of first point of contact and the loud point of contact. We also have Mila from Amazon, and we have Michelle Martineau and Tracy Li, who are our LF people. And again, this is what makes it work for us, is that we can get things done. I get to be the sort of loud face of it, but there’s a really great team of people whose wisdom is critical to how we make decisions.

CRob (26:32)
That’s amazing. We have a community helping the community. Thank you.

Michael Winser (26:35)
Thank you.

Announcer (26:37)
Like what you’re hearing? Be sure to subscribe to What’s in the SOSS? on Spotify, Apple Podcasts, AntennaPod, Pocket Casts or wherever you get your podcasts. There’s lots going on with the OpenSSF and many ways to stay on top of it all! Check out the newsletter for open source news, upcoming events and other happenings. Go to OpenSSF dot org slash newsletter to subscribe. Connect with us on LinkedIn for the most up-to-date OpenSSF news and insight. And be a part of the OpenSSF community at OpenSSF dot org slash get involved. Thanks for listening, and we’ll talk to you next time on What’s in the SOSS?

In the Face of Mounting Regulatory Oversight, Honda and Guidewire Join Industry Leaders Securing Software Development at the Open Source Security Foundation (OpenSSF)

By Blog, Press Release

Growing Member Base and Launch of SOSS Community Day India Continue to Advance Open Source Software Security

Delhi, India – December 10, 2024 – The Open Source Security Foundation (OpenSSF), a global cross-industry initiative of the Linux Foundation, helps individuals and organizations build secure software by providing guidance, tools, and best practices applicable to all software development. Today, the OpenSSF announced new members from the automotive and insurance technology industries at the first-of-its-kind Secure Open Source Software (SOSS) Community Day India. SOSS Community Day India brings together community members from across the security and open source ecosystem to share ideas and advance solutions for sustainably securing the software we all depend on, building a foundation for a more secure and innovative future.

New general member commitments come from Honda Motor Co., Ltd. and Guidewire Software, Inc. With support from these new organizations, the OpenSSF heads into the last month of 2024 with 126 members that together recognize the importance of backing, maintaining, and promoting secure open source software.

“We are excited to welcome our newest members and celebrate this milestone with the launch of the first SOSS Community Day in India,” said Arun Gupta, Vice President and General Manager of Developer Programs at Intel and OpenSSF Governing Board Chair. “India has an incredible open source ecosystem, and this event provides an opportunity to foster collaboration, address shared challenges, and ensure the security of the open source software powering the digital world. Together, we’re building a more secure and innovative future.”

SOSS Community Day India features a packed agenda with sessions led by top experts on topics like education, innovation, tooling, vulnerabilities, and threats. The event not only highlights the OpenSSF community’s ongoing work, but also provides an avenue to expand its reach through new partnerships and memberships, welcoming inquiries from potential collaborators. Participants will see how the OpenSSF community is driving improvements in open source software security and advancing its mission to create a more secure ecosystem for everyone.

General Member Quotes

Honda Motor Co., Ltd.

“Honda is pleased to be able to participate in the OpenSSF project as OSS security becomes increasingly important. In addition to contributing to the OpenSSF community, we look forward to working to strengthen OSS security across the industry in the future.” Yuichi Kusakabe, Chief Architect – IVI software PF/OSPO Tech Lead, Honda Motor Co., Ltd.

Guidewire Software, Inc.

“We’re excited to become a member of OpenSSF,” said Anoop Gopalakrishnan, vice president, Engineering, Guidewire. “This partnership reflects our continued commitment to advancing open source security and collaborating with like-minded innovators to create a more secure and resilient software ecosystem.” 

Additional Resources

  • View the complete list of OpenSSF members.
  • Explore the SOSS Community Day India program schedule to see the lineup of sessions and speakers.
  • To learn more about the OpenSSF community, including information about membership, contribution, project participation, and more, contact us here.

###

About the OpenSSF

The Open Source Security Foundation (OpenSSF) is a cross-industry initiative by the Linux Foundation that brings together the industry’s most important open source security initiatives and the individuals and companies that support them. The OpenSSF is committed to collaboration and working both upstream and with existing communities to advance open source security for all. For more information, please visit us at openssf.org.

About the Linux Foundation

The Linux Foundation is the world’s leading home for collaboration on open source software, hardware, standards, and data. Linux Foundation projects are critical to the world’s infrastructure, including Linux, Kubernetes, Node.js, ONAP, OpenChain, OpenSSF, PyTorch, RISC-V, SPDX, Zephyr, and more. The Linux Foundation focuses on leveraging best practices and addressing the needs of contributors, users, and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see its trademark usage page: www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.

Media Contact
Jennifer Tanner
Look Left Marketing
openssf@lookleftmarketing.com

What’s in the SOSS? Podcast #20 – Jack Cable of CISA and Zach Steindler of GitHub Dig Into Package Repository Security

By Podcast

Summary

CRob discusses package repository security with two people who know a lot about the topic. Zach Steindler is a principal engineer at GitHub, a member of the OpenSSF TAC, and co-chairs the OpenSSF Securing Software Repositories Working Group. Jack Cable is a senior technical advisor at CISA. Earlier this year, Zach and Jack published a helpful guide of best practices entitled “Principles for Package Repository Security.”

Conversation Highlights

  • 00:48 – Jack and Zach share their backgrounds
  • 02:59 – What package repositories are and why they’re important to open source users
  • 04:17 – The positive impact package security has on downstream users
  • 07:06 – Jack and Zach offer insight into the Principles for Package Repository Security
  • 11:18 – Future endeavors of the Securing Software Repositories Working Group
  • 17:32 – Jack and Zach answer CRob’s rapid-fire questions
  • 19:31 – Advice for those entering the industry
  • 21:28 – Jack and Zach share their calls to action

Transcript

Zach Steindler soundbite (00:01)
We absolutely are not looking to go in and say, OK all ecosystems must do X. But what we are is sort of this forum where these conversations can take place. People who operate these package repositories can say here’s what’s working for us, here’s what’s not working for us. Share those ideas, share those experiences and learn from each other.

CRob (00:17)
Hello everybody, I’m CRob. I do security stuff on the internet and I’m also a community leader within the OpenSSF. And one of the fun things I get to do is talk to amazing people that have input and are developing and working within upstream open source.

And today we have a real treat. I have two amazing people. I have Zach and Jack, and they’re here to talk to us a little bit about package repository security. So before we start, could I ask each of you to maybe give us a brief introduction?

Jack Cable (00:48)
Great. Thank you so much for having us on here, CRob. I am Jack Cable. I’m a senior technical advisor at CISA, where I help lead our agency’s work around open source software security and secure by design. For those unfamiliar with CISA, the Cybersecurity and Infrastructure Security Agency, is the nation’s cyber defense agency. So we help to protect both the federal civilian government and critical infrastructure, of which there’s 16 sectors ranging from everything like water to power, financial services, healthcare, and so on. And probably as no surprise to anyone here, all of these sectors are heavily dependent on open source software, which is why we’re so eager about seeing how we can really be proactive in protecting the open source ecosystem.

I come from a background in security research, software development, spent some time doing bug bounty programs, finding vulnerabilities in companies. Gradually went over to the policy side of things, spent some time, for instance, in the Senate where I worked on legislation related to open source software security and then joined CISA about a year and a half ago.

CRob (02:04)
Awesome. Zach?

Zach Steindler (02:13)
Yeah, CRob, thanks so much for having us. My name is Zach Steindler. I’m a principal engineer at GitHub. I have a really amazing job that lets me work on open source supply chain security, both for GitHub’s enterprise customers, but also for the open source ecosystem. CRob, you and I are both on the OpenSSF TAC. And in addition to that, I co-chair the Securing Software Repositories Working Group, where we recently had a chance to collaborate on the Principles for Package Repository Security document.

CRob (02:40)
Excellent, which we will talk about in just a few moments. And, you know, thank you both for your past, current and future contributions to open source. We really appreciate it. So our first question. Would you tell us what a package repository is and why that’s something that’s important to open source users?

Zach Steindler (02:59)
Yeah, this is something that comes up a lot in the working group, and what we’ve discovered is that everyone has slightly different terminology that they prefer to use. Here when we’re talking about package repositories, we’re talking about systems like NPM, like PyPI, like RubyGems or Homebrew — places that people are going to download software that they then run on their machine. And that’s a little bit in contrast to other terminology you might hear around repositories.

So here we aren’t talking about, like, where you store your source code like in a Git repository or a Mercurial repository, that sort of thing. These patch repositories are widely used. Many of them serve hundreds of millions to billions of downloads per day, and those downloads are being run on developer’s machines that are being run on build servers, and they’re being run on people’s computers who, know, whatever you’re doing on your mobile phone or your desktop device. And so the software that’s stored in these package repositories are really used globally by almost everyone daily.

CRob (04:07)
Thinking about kind of this critical space within critical software here, how does improving a package repository security affect all the downstream folks from that?

Jack Cable (04:17)
Great. And really, to what Zach was saying, that’s in part why we picked this as a priority area at CISA, recognizing that regardless, really, of what critical infrastructure sector you’re in, regardless of whether you’re a small business, whether you’re a large company, whether you’re a government agency, you’re heavily dependent on open source software. And in all likelihood, that software is being integrated into the products you’re using through a package repository.

So we wanted to see, where are the places where we can have the biggest potential impact when it comes to security? And package repositories really stood out as central points where virtually everyone who consumes open source software goes to download and to integrate that software. So it is very central to essentially all of the software that our world relies on today. And we also recognize that many of these package repositories themselves are resource constrained, often nonprofits who operate these really critical, essential services serving millions of developers, billions of users across the world.

So what can be done to help strengthen their security? Because we’ve seen attacks both on package repositories themselves, whether it’s compromising developers’ accounts or kind of some of these underlying pervasive flaws in open source packages. How can package repositories really bolster their security to make the entire open source ecosystem more resilient? That’s what we set out to do, and I know we’ll get much more into the Principles for Package Repository Security framework we created. But the goal is to really aggregate some of the best practices that perhaps one or two package repositories are doing today, but we’re not seeing across the board.

Things that can be as basic, for instance, as requiring multifactor authentication for developers of really critical projects to make sure that the developer’s account is much harder to compromise. These are actions that we know take time and resources to implement, and we want to see how we can help package repositories prioritize these actions, advocate for them, and get funding to do them so that we can all benefit.

CRob (06:52)
Well, we’ve touched on it a few times already. Let’s talk about the Principles of the Package Repository Security. Could maybe you share a little bit about what this document’s about, how it came to be, and maybe a little bit about who helped collaborate to do it?

Jack Cable (07:06)
I’ll kick it off, and then Zach can jump in. So really, as I was saying, we wanted to create kind of a common set of best practices that any package repository could look to to kind of guide their future actions, Because, kind of, what we’ve been seeing, and I’m sure Zach can get much more into it with the work he’s led through the Securing Software Repositories Working Group, is that there’s many software repositories that do care significantly about security that really are taking a number of steps that, like we’ve seen for instance, both Python and PM requiring multi-factor authentication for their maintainers, Python even, shipping security tokens to their developers. Some of these actions that really have the potential to strengthen security.

So what the Principles for Package Repository Security framework is, is really an aggregation of these security practices that we developed collaboratively over the course of a few months between CISA, the Securing Software Repositories Working Group, and many package repositories, and landed on a set of four buckets around security best practices, including areas like authentication and authorization.

How are these package repositories, for instance, enforcing multi-factor authentication? What tiers of maturity might go into this? And then, for instance, if they have a command line interface utility, how can that make security really seamless for developers who are integrating packages?

Say, if there are known vulnerabilities in those packages, is that at least flagged to the developer so they can make an informed decision around whether or not to integrate the version of the open source package they’re looking at? So maybe I’ll pass it over to Zach to cover what I missed.

Zach Steindler (09:08)
Yeah, the beauty of open source is that no one’s in charge. And people sometimes misunderstand the Securing Software Repositories Working Group, and they’re like, can I come to that and, sort of like, mandate all the package repositories implement MFA? And the answer is no, you can’t, first because it’s against the purpose of the group to like tell people what to do. But also, it’s not a policy-making group. It’s not a mandate-creating group, right? Participation is voluntary.

Even if we were to, you know, issue a mandate, each of these ecosystems has like a rich history of why they’ve developed certain capabilities, things they can and cannot do, things that are easy for them, things that are hard. So we absolutely are not looking to go in and say, OK, you know, all ecosystems must do X. But what we are is sort of this forum where these conversations take place.

People who operate these package repositories can say, here’s what’s working for us, here’s what’s not working for us. Share those ideas, share those experiences and learn from each other. And so when it came to writing the Principles for Package Repository Security document, the goal was not to say, here’s what you must do, but these different ecosystems are all very busy, very resource constrained. And one of the items often on their backlog is to create a security road map or to put together a request for funding for like a full time security in residence position. But to do that, they need to have some idea of what that person is going to work on.

And that’s really where the principles document comes in, is where we’re creating this maturity model, this roadmap, whatever you want to call it, more as a menu that you can order off of and not a mandate that everyone must follow.

CRob (10:50)
That sounds like a really smart approach. I applaud your group for taking that tactic. The artifact itself is available today. You can go out and review it and maybe start adopting a thing or two in there if you manage a repository, but it also took a lot of time and effort to get there. So describe to us what’s next on your roadmap. What does the future hold for your group and the idea of trying to institute some better security practices across repos?

Zach Steindler (11:18)
Yeah, I could start out to talk about the Securing Software Repositories Working Group. I’m not sure I would have had this grand plan at the time, but over time it sort of crystallized that the purpose of the working group is to put together roadmaps like the principles document that we published. I gotta plug that all the work that we do is on repos.openssf.org. So it’s a great place to review all these documents.

The second thing that the working group is focused on, other than just being this venue where people can have these conversations, is to take the individual security capabilities and publish specific guidance on how an ecosystem implemented it, and then give sort of a design and security overview to make it easier for other ecosystems to also implement that capability. We have a huge success story here with a capability called Trusted Publishing.

So to take a step back, the point of Trusted Publishing is that when you are building your software on a build server and you need to get it to the package registry, you have to authenticate that you have the permission to publish to that package namespace. Usually in the past, this has been done by taking someone’s user account and taking their password and storing it in the build pipeline. Maybe you could use an API key instead, but these are really juicy targets for hackers.

So Trusted Publishing is a way to use the workload identity of the build system to authorize the publish. And then you don’t have an API key that can be exfiltrated and broadly used to upload a lot of malicious content. And so this capability was first implemented in PyPI, shortly thereafter in RubyGems.
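
For readers who want to see roughly what that workload-identity exchange looks like, here is a minimal sketch of the flow from inside a GitHub Actions job, not an official client. The job requests its ambient OIDC identity token and exchanges it with the registry’s trusted-publishing endpoint for a short-lived publish token. The GitHub side uses the documented ACTIONS_ID_TOKEN_REQUEST_* environment variables; the exact PyPI mint-token URL and audience value shown here are assumptions, and in practice the maintained pypa/gh-action-pypi-publish action handles all of this for you.

    import json
    import os
    import urllib.parse
    import urllib.request

    # Assumption: PyPI's trusted-publishing token exchange endpoint and audience.
    # Real workflows normally rely on pypa/gh-action-pypi-publish instead.
    MINT_TOKEN_URL = "https://pypi.org/_/oidc/mint-token"
    AUDIENCE = "pypi"

    def fetch_workflow_oidc_token() -> str:
        """Ask GitHub Actions for the job's OIDC identity token (requires `id-token: write`)."""
        url = os.environ["ACTIONS_ID_TOKEN_REQUEST_URL"]
        bearer = os.environ["ACTIONS_ID_TOKEN_REQUEST_TOKEN"]
        req = urllib.request.Request(
            f"{url}&audience={urllib.parse.quote(AUDIENCE)}",
            headers={"Authorization": f"bearer {bearer}"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)["value"]

    def mint_publish_token(oidc_token: str) -> str:
        """Exchange the workload identity token for a short-lived upload token (endpoint assumed above)."""
        body = json.dumps({"token": oidc_token}).encode()
        req = urllib.request.Request(
            MINT_TOKEN_URL, data=body, headers={"Content-Type": "application/json"}
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)["token"]

    if __name__ == "__main__":
        # No long-lived password or API key is stored in the pipeline; the build's
        # identity is what authorizes the publish, which is the point Zach makes above.
        publish_token = mint_publish_token(fetch_workflow_oidc_token())
        print("minted a short-lived upload token of length", len(publish_token))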

And then we asked Seth Larson, who’s a member of the working group and the Python Software Foundation’s security developer-in-residence, to write up implementation guidance based on what his team at the PSF learned and also based on what the RubyGems team learned. And it so happened that NuGet, the package manager for the .NET Microsoft ecosystem, was also interested in this capability, and the timing just happened to work out perfectly where they started coming to the working group meetings.

We already had this drafted guidance on implementation, and they were able to take that and kind of accelerate their RFC process, adapt it so that it was relevant to the different concerns in their ecosystem. But they’re much further along on this track of implementing this capability than they would otherwise have been if they had to start at square one. So in addition to roadmaps, I think we’re going to be focusing more in the near future on finding more of these security capabilities to publish guidance on, to help the package repositories learn from each other.

Jack Cable (14:08)
Yep, and just to add on to that, I think it’s super great to see some of the work that is coming out of the working group. We at CISA held a summit on open source software security in March, where as part of that we announced actions that five of the major package repositories, including those for Python, JavaScript, Rust, Java and PHP, are taking in line with the Principles for Package Repository Security framework. And we know that this is going to be an ongoing journey for really all of the package repositories, but we’re encouraged to see alignment behind that. And we hope that can be a helpful resource for these package repositories to put together their roadmaps, to make funding requests and so on.

But I do want to talk about kind of one of the broader outcomes that we want to help achieve at CISA, and this is in line with our secure by design initiative, where we really want technology manufacturers to start taking ownership of, for instance, the security outcomes of their customers, because we know that they’re the ones who are best positioned to help drive down the constant stream of cyber attacks that we seem to be seeing.

As part of that, it’s essential that every technology manufacturer who is a consumer of open source software who integrates that into their products, who profits from that open source software is a responsible steward of the open source software that they depend upon. That means both having processes to responsibly consume that. It also means contributing back to those open source packages, whether financially or through developer time.

But what this also entails is making sure that there’s kind of a healthy ecosystem of the infrastructure supporting the open source communities of which package repositories are really a core part. So I encourage every software manufacturer to think about how they are helping to sustain these package repositories, helping to foster security improvements, because again, we know that many of these are nonprofits. They really do rely on their consumers to help sustain them, not just for security, but for operations more generally. So really we want to see how both we can help spur some of these developments directly, but then also how every company can help contribute to sustain this.

Zach Steindler (16:50)
Jack, I just wanted to say that we are sort of like maybe dancing around the elephant in the room, which is that a lot of this work is done by volunteers. Occasionally it is funded. I wanted to give a special shout out to Alpha-Omega, which is an associated project of the OpenSSF that has funded some of this work in individual package repositories. There’s also the Sovereign Tech Fund, which is funded by, I think, two different elements in the German government.

But, you know, this work doesn’t happen by itself. And part of the reason why we’re putting together this guidance, why we’re putting together these roadmaps is so that when funding is available, we’re making sure that we are conscious of where we can get the most results from that investment.

CRob (17:32)
Thank you both for your efforts in trying to help lead this, help make this large change across our whole ecosystem. Huge amount of downstream impact these types of efforts are going to have. But let’s move on to the rapid fire section of our interview. (Sound effect: Rapid fire!) I have a couple of fun questions. We’re going to start off easy, spicy or mild food?

Jack Cable (17:55)
Spicy.

Zach Steindler (17:57)
In the area where I live, there’s quite a scale of what spicy versus mild means, depending on what kind of restaurant you’re at. I’d say I tend towards spicy, though.

CRob (18:05)
(Sound effect: Oh, that’s spicy!) That’s awesome. All right. A harder question. Vi or Emacs?

Jack Cable (18:16)
I’m going to say nano — option number three.

CRob (18:20)
(Laughter) Also acceptable.

Zach Steindler (18:24)
CRob is always joking about college football rivalries, and I don’t feel a strong personal investment in my text editor. I do happen to use Vi most of the time.

CRob (18:37)
It is a religion in some parts of the community. So, that was a very diplomatic answer. Thank you. Another equally contentious issue: tabs or spaces?

Jack Cable (18:48)
Spaces all the way, two spaces.

Zach Steindler (18:52)
I’m also on team spaces, but I’ve had to set up my Go formatter and linter to make sure that it gets things just right for the agreed-upon ecosystem answer. That’s the real answer, right? It’s good tools, and everyone can be equally upset at the choices that the linter makes

CRob (19:09)
That’s phenomenal. (Sound effect: The sauce is the boss!) I want to thank you two for playing along real quickly there. And as we close out, let’s think about, again, continuing on my last question about the future. What advice do either of you have for folks entering the industry today, whether they’re going to be an open source developer maintainer, they’re into cybersecurity, they’re just trying to help out what advice do you have for them?

Jack Cable (19:31)
I can kick that off. I’d say, first of all, I think there’s lots of great areas and community projects to get involved with, particularly in the open source space. The beauty of that, of course, is that everything is out there and you can read up on it, you can use it, you can start contributing to it. And specifically from the security perspective, there is a real ability to make a difference because, as Zach was saying, this is primarily volunteers who are doing this, not because they’re going to make a lot of money from it or because they’re going to get a ton of recognition for it necessarily, but because they can make an actual difference.

And we know that this is sorely needed. We know that the security of open source software is only going to become more and more important. And it’s up to all of us really to step in and take matters into our own hands and drive these necessary improvements. So I think you’ll find that people are quite welcoming, that there are a lot of great areas to get involved in, and I encourage reading up on what’s going on, seeing what areas appeal to you most, and starting to contribute.

Zach Steindler (20:51)
I have two pieces of maybe contradicting advice, because the two failure modes that I see are people being too afraid to start participating or being like, I have to be an expert before I start participating, which is absolutely not the case. And then the other failure mode I see is people joining a 10-year old project and being like, I have all the answers. I know what’s going on. So I think my contradictory advice would be to show up. And when you do show up, listen.

CRob (21:19)
Excellent advice. I think it’s not that big a contradiction. As we close out, do you gentlemen have a call to action? I think I might know part of it.

Zach Steindler (21:28)
Yeah, my call to action would be please go to repos.openssf.org. That is where we publish all of our content. That also links to our GitHub repository where you can then find our past meeting minutes, upcoming meeting information, our Slack channel in the OpenSSF Slack. Do be aware, I guess, that we’re very much the blue hats, the defenders, here. So sometimes people are like, do you need me to, you know, report more script kiddies uploading malware to npm? It’s like…

We’re really here for the folks who are operating these systems, and so we recognize it’s a small audience. That’s not to say that we don’t want input from the broader public. We absolutely do, but to my point earlier, you know, a lot of these folks have been running these systems for a decade plus. And so do come, but do be cognizant that there’s probably a lot of context that these operators have that you may not have as a user of these systems.

Jack Cable (22:17)
And please do check out the principles for package repository security framework. It’s on GitHub as well as the website Zach mentioned. We have an open ticket where you can leave feedback, comments, suggestions, changes. We’re very much open to new ideas, hearing how we can make this better, how we can continue iterating and how we can start to foster more adoption.

CRob (22:43)
Excellent. I want to thank Zach and Jack for joining us today and helping secure the engine through which most people interact with open source. So thank you both. I appreciate your time and thanks for joining us on What’s in the SOSS? (Sound effect: That’s saucy!)

Zach Steindler (23:00)
Thanks for having us, CRob. I’m a frequent listener, and it’s an honor to be here.

Jack Cable (23:04)
Thank you, CRob.

Announcer (23:05)
Like what you’re hearing? Be sure to subscribe to What’s in the SOSS? on Spotify, Apple Podcasts, AntennaPod, Pocket Casts or wherever you get your podcasts. There’s lots going on with the OpenSSF and many ways to stay on top of it all! Check out the newsletter for open source news, upcoming events and other happenings. Go to openssf.org/newsletter to subscribe. Connect with us on LinkedIn for the most up-to-date OpenSSF news and insight, and be a part of the OpenSSF community at openssf.org/getinvolved. Thanks for listening, and we’ll talk to you next time on What’s in the SOSS?

OpenSSF Newsletter – November 2024

By Newsletter

Welcome to the November 2024 edition of the OpenSSF Newsletter! Here’s a roundup of the latest developments, key events, and upcoming opportunities in the Open Source Security community.

The SOSS Fusion 2024 Playlist is Live!

Catch up on the highlights from SOSS Fusion 2024, The Conference for Secure Open Source Software, with the full YouTube playlist. Explore keynotes, technical sessions, and workshops from industry leaders like Dan Lorenc and Cory Doctorow. Discover actionable insights and tools to secure open source software.

📺 Watch now: SOSS Fusion 2024 YouTube Playlist

Secure Your Software Supply Chain with Abhisek Datta

Join us for an insightful webinar, Policy, Security, and the Software Supply Chain, featuring security expert Abhisek Datta on November 27 from 2:00 PM – 3:00 PM. This event is hosted in the lead-up to SOSS Community Day, India, co-located with KubeCon + CloudNativeCon India 2024.

Mark your calendars and register today!

Join us in Delhi for SOSS Community Day India on December 10, 2024, co-located with KubeCon + CloudNativeCon India

Hosted by the OpenSSF, this event will bring together open source security enthusiasts to connect, collaborate, and share knowledge. Whether you’re an industry leader or a passionate technologist, this is your opportunity to dive deep into the latest open source security trends, learn from experts, and network with the vibrant open source community. Don’t miss out—register today and be part of the conversation on securing open source software!

Learn more

2025 Virtual Tech Talk Call for Proposals (CFP)

We are excited to invite proposals for the 2025 Virtual Tech Talk Series, providing a platform for in-depth discussions on critical initiatives to secure open source software within the OpenSSF community. These tech talks are designed to foster knowledge sharing, highlight innovative technical projects, and showcase efforts driving the future of open source security.

Have a topic or expertise you’d like to share? Submit your proposal by December 13, 2024, to ensure ample time for review and planning. This is your chance to contribute, connect with peers, and inspire others in the field.

Submit your CFP

Case Study: Kusari’s Implementation of OpenSSF Tools and Services


Kusari has tackled software supply chain challenges like transparency and inefficiencies by integrating OpenSSF tools such as AllStar, Scorecard, and GUAC, while adopting open standards like SLSA and OpenVEX. These solutions have enhanced their ability to manage risks and contribute actively to the OpenSSF community.

“Participating in open source communities allows us to shape the future of software supply chain technology,” says Parth Patel, Kusari’s Co-founder.

➡️ Read more about Kusari’s journey and the tools they use.

October was Cybersecurity Awareness Month!

This year, the focus was on collective action across sectors to enhance cybersecurity resilience. Organizations prioritized OSS governance, developers adopted secure coding practices, and academic institutions prepared the next generation of professionals—all contributing to safer digital ecosystems.

OpenSSF supported these efforts with resources like Developing Secure Software (LFD121) and events like SOSS Fusion, which fostered collaboration and knowledge sharing.

➡️ Read more about how we worked together to stay secure and informed.

OpenSSF Adds Minder as a Sandbox Project to Simplify the Integration and Use of Open Source Security Tools

Minder, contributed by Stacklok, simplifies the integration and use of open source security tools through a policy-based approach that spans the entire software development lifecycle. With features like noise reduction, auto-remediation, and integration with OpenSSF tools such as Sigstore, Minder empowers organizations to strengthen their security posture.

➡️ Explore Minder and see how it enhances open source security.

OpenSSF Expands Secure Development Course with Interactive Labs


The Open Source Security Foundation (OpenSSF) has enhanced its free “Developing Secure Software” course (LFD121) with hands-on labs and interactive activities. These new features provide developers with practical techniques to counter modern cyberattacks, improving engagement and knowledge retention.

With over 25,000 enrollments globally, this course offers a comprehensive learning experience covering secure design principles, implementation, and verification techniques. Developers can earn a completion certificate and access optional browser-based labs for an immersive learning experience.

➡️ Enroll in LFD121 and start building secure software today!

OpenSSF Welcomes New Members and Introduces New Initiatives at SOSS Community Day Japan

At SOSS Community Day Japan, OpenSSF celebrated its growing community with the addition of new members, including Arm, embraceable AI, Fujitsu, Ruby Central, and Trifecta Tech, furthering its mission to secure open source software.

In a recent press release, OpenSSF also announced new initiatives: Minder, a sandbox project simplifying security tool integration; bomctl, enhancing SBOM management; and Zarf, enabling secure software delivery in air-gapped environments.

➡️ Read more about our new members and initiatives.


Red Hat’s Collaboration with the OpenSSF and OSV.dev Yields Results: Red Hat Security Data Now Available in the OSV Format


Red Hat has partnered with OpenSSF and Google’s OSV.dev to make its security data available in the OSV format. This enhances transparency, accessibility, and integration with tools like OSV-Scanner, supporting better vulnerability management.

➡️ Learn more about this collaboration.
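For a concrete sense of what consuming security data in the OSV format can look like, here is a minimal, hypothetical sketch that reads a locally saved OSV record and lists where each affected package is fixed. The file name and the extraction logic are assumptions for illustration only; the field names follow the public OSV schema, and this is not Red Hat or OSV.dev tooling.

```python
# A minimal, illustrative reader for an advisory saved in the OSV format.
# The field names (id, summary, affected, package, ranges, events, fixed)
# follow the public OSV schema; the file name and the simple extraction
# logic below are assumptions for illustration.
import json


def fixed_versions(advisory: dict) -> dict:
    """Return {package name: [versions where the advisory says it is fixed]}."""
    result = {}
    for entry in advisory.get("affected", []):
        name = entry.get("package", {}).get("name", "<unknown>")
        fixes = [
            event["fixed"]
            for rng in entry.get("ranges", [])
            for event in rng.get("events", [])
            if "fixed" in event
        ]
        result[name] = fixes
    return result


with open("advisory.osv.json") as f:  # hypothetical local copy of one OSV record
    adv = json.load(f)

print(adv.get("id", "<no id>"), "-", adv.get("summary", ""))
for pkg, fixes in fixed_versions(adv).items():
    print(f"  {pkg}: fixed in {', '.join(fixes) or 'n/a'}")
```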


How We Can Learn from Open Source Software to Address the Challenges of AI


AI models bring transformative potential but also risks like deepfakes, bias, and misuse. Drawing from open source principles, we can address these challenges by fostering collaboration across industry, academia, and government, securing the AI supply chain, and building “secure by default” models.

OpenSSF’s work with agencies like CISA offers a roadmap for leveraging open source security principles to improve the safety and reliability of open foundation models.

➡️ Read how open source lessons can shape a secure AI future.


The OpenSSF Armored Goose “Honk”: Advancing Open Source Security


The Open Source Security Foundation’s (OpenSSF) logo features “Honk,” an armored goose holding a shield, embodying the foundation’s mission to protect open source software. Representing adaptability, resilience, and teamwork, Honk symbolizes the innovative approaches OpenSSF employs to enhance security in the open source ecosystem.

Discover the story behind Honk and how OpenSSF champions collaboration and defense in open source security.

➡️ Learn more about Honk and join the mission.

In the News

Meet OpenSSF at These Upcoming Events!

Get Involved in OpenSSF

You’re invited to…

See You Next Month

We want to get you the information you most want to see in your inbox. Have ideas or suggestions for next month’s newsletter about the OpenSSF? Let us know at marketing@openssf.org, and see you next month! 

Regards,

The OpenSSF Team

What’s in the SOSS? Podcast #19 – Red Hat’s Rodrigo Freire and the Impact of High-Profile Security Incidents

By Podcast

Summary

In this episode, CRob talks to Rodrigo Freire, Red Hat’s chief architect. They discuss high-profile incidents and vulnerability management in the open source community. Rodrigo has a distinguished track record of success and experience in several industries, especially high-performance and mission-critical environments in financial services.

Conversation Highlights

  • 01:08 – Rodrigo shares his entry into open source
  • 02:42 – Diving into the specifics of a high-profile incident
  • 06:22 – How security researchers coordinate a response to a high-profile incident
  • 10:33 – The benefits of a vulnerability disclosure program
  • 11:57 – Rodrigo answers CRob’s rapid-fire questions
  • 13:43 – Advice for anyone getting into the industry
  • 14:26 – Rodrigo’s call to action for listeners
  • 15:53 – The importance of the security community working together

Transcript

Rodrigo Freire soundbite (00:01)
Who do I ask and who do I grab by the arm? Man, I need you to, right now, please assess this vulnerability! It’s a very important asset to have that Rolodex of contacts and to know the ones to ask for help. You don’t have to know the information — you have to know who knows.

CRob (00:18)
Hello everybody. Welcome to What’s in the SOSS? The OpenSSF’s podcast where Omkhar and I get to talk to some amazing people in the open source community. Today, I’ve got a really amazing treat for you. A very special guest: my friend Rodrigo from Red Hat. I’ve known Rodrigo for a while, and we’re here to talk about a really important topic that both of us have worked a lot on.

Rodrigo Freire (00:44)
Thanks, Chris. Hello. Yes, I had the pleasure to work with CRob for a good number of years, and I was in charge of the vulnerability management team at Red Hat. Yes, it was definitely five fun and character-molding years.

CRob (01:01)
So maybe you could share with our audience a little bit about your open source origin story. How did you get into this amazing space?

Rodrigo Freire (01:08)
It’s funny. When you say that I worked with Linux version 1 dot something, well, that pretty much discloses my age, right? It was back in the 90s. I was working at an internet service provider, and there were those multi-port serial adapters for modems, and that was pretty much the backbone of the ISP. And then sendmail, the ISC BIND DNS server. And back in the day there was no RADIUS for authentication — it was Cisco TACACS, so yeah. (Laughter)

I started as a classic ISP admin back in the 90s. That’s when I got involved, and then I worked in the Brazilian government promoting open source, and it was an interesting time, when the government was shifting from mainframes to the low-end platforms. And then there was Linux as a security thing, and then Linux became more focused on performance and security. So this is where I started wetting my toes in open source software.

CRob (02:22)
So let’s dive into the meat of our conversation today, my friend. We’ve all seen them, and maybe you could share with the audience, from your perspective — what is a high-profile incident? You know, sometimes it’s called a celebrity vulnerability or a branded flaw. Could you maybe share what that is?

Rodrigo Freire (02:42)
Yeah, definitely. I don’t know how that translates to English, actually. So I live all the way down here in Brazil, but I like to perceive them as creating commotion. So that’s going to attract media audience and Twitter clicks and engagement and, oh my God, look what I found! And in the end, there might be another Brazilian saying for you guys: trimming the pig. A lot of cries for very little actual hair.

So you create all that commotion, all that need, and then that comes escalating from CEOs, security teams, whatsoever, for something that in the end might be of moderate impact, or sometimes even something that does not affect the customer’s systems. So it’s a lot of brouhaha, I would say. However, on the other hand, there are some security events that are definitely something you should pay close attention to.

So for example, we had Heartbleed, and then there was Shellshock and GHOST. There have been, over the course of the years, a number of glibc vulnerabilities that can elevate you to root, even to the extent that one was used as a tool to get root on a system where someone forgot the password. Yes, that happened once, to a customer that shall remain unnamed.

And then finally, I think the mother of all incidents that I worked on would be the XZ security incident that happened a couple of months ago. More often than not, these things just create distress for the security people, for the good people managing the data center, without really putting the customer at risk. However, on the other hand, sometimes, less often, there will be something that is definitely of real concern, and the customer should pay close attention to that.

CRob (04:52)
So what do you think the motivation is? Last year there were like 25,000 vulnerabilities. What’s your perception of why some of these get the celebrity treatment and others that may be more severe don’t?

Rodrigo Freire (05:08)
I read somewhere on the internet something along the lines of over-promoting something for personal gain. That resonated very well with me. In the community, in the industry, there’s a lot of effort put into building your portfolio, your reputation across the industry. And so someone shows on their resume, hey, I was the guy who found the Heartbleed or the GHOST vulnerability.

A lot of people are going to recognize you: oh my God, you found that vulnerability. So yeah, it might be something like that. Sometimes it might not be that intent, but in the end, Chris, I really don’t think it’s something that has changed the tide on the security landscape for the better, I would say.

CRob (06:00)
Yeah, I would agree. Thinking about you managing some of these high-profile incidents, for our audience, maybe you could shed some light on what goes on behind the scenes when a security researcher comes to an open source project or a vendor like Red Hat. How do you get all the stakeholders together? How do you run these types of things? How do you keep the team focused?

Rodrigo Freire (06:22)
Internally at Red Hat we have an internal prioritization of the CVE based on a scale. We use a four-point scale. We are not attached to the CVSS score or the ranking. We focus on the product rank for the security issue. Say, for example, I use an HTTP server, for example Apache HTTPD, on my system. Alright, so there’s a vulnerability with a CVSS score of 10, a perfect 10 on CVSS.

However, this functionality is not exposed on my system, or it is not used, it is not enabled, it is not supported. Why would I score that as a 10, since it’s not a valid usage of my product? So yes, I would just mark something as not affected, or as affected but with low impact. If it is putting the customers at heightened risk, we take that into account, so this is the Red Hat score for the product. I strongly believe that the way we rank these vulnerabilities on our product is what customers should actually be paying attention to, instead of taking the worst-case scenario in whatever possible use of the component.

I’m not saying that this is not important. It is, it is key. However, we do have people, we have a human operator, taking into account how that vulnerability is actually exposed on the product. So I think that’s something very important for vendors to do: take a general vulnerability and then issue a score for your product. How is that actually exposed on our product? So, that said, this is how we select how and when to fix something.
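For readers who want to see this idea concretely, here is a minimal, hypothetical sketch of mapping an upstream CVSS base score onto a product-level impact rating once exposure is taken into account. The class, function, rules, and thresholds are illustrative assumptions, not Red Hat’s actual process or tooling.

```python
# Hypothetical sketch of product-contextual severity rating, in the spirit of
# what Rodrigo describes: start from the upstream CVSS base score, then adjust
# for how the component is actually shipped and exposed in the product.
# The class, rules, and thresholds here are illustrative only.
from dataclasses import dataclass


@dataclass
class ComponentContext:
    shipped: bool          # is the vulnerable component shipped in the product?
    feature_enabled: bool  # is the vulnerable functionality built in / enabled?
    supported: bool        # is that usage supported in the product?


def product_impact(cvss_base: float, ctx: ComponentContext) -> str:
    """Map an upstream CVSS base score onto a four-point product impact scale."""
    if not ctx.shipped or not ctx.feature_enabled:
        return "not affected"   # e.g. a kernel CVE in code the product never enables
    if not ctx.supported:
        return "low"            # reachable in theory, but not a supported usage
    if cvss_base >= 9.0:
        return "critical"
    if cvss_base >= 7.0:
        return "important"
    if cvss_base >= 4.0:
        return "moderate"
    return "low"


# A "perfect 10" upstream score for functionality the product never enables:
ctx = ComponentContext(shipped=True, feature_enabled=False, supported=False)
print(product_impact(10.0, ctx))  # -> not affected
```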

And then, let’s say, for example, in the case of a high-profile event, oh man, there was a very ugly vulnerability that showed up at the eve of 2022 to 2023. It was December the 21st, something like that. It was in the 20s. So we had the company at a freeze and I was working. So the…sorry, this still has to be taken care of, right? And then there was ksmbd, a kernel SMB server vulnerability. Actually, it was a stream of them that was disclosed by the Zero Day Initiative.

That was an uphill battle because in the end it was not affecting RHEL, because we don’t enable ksmbd on our kernels. So it was not affecting us. However, I needed to get all the techies, all the specialists, to assure and ensure, because customer questions were starting to pile up. It’s not only Red Hat that runs 24-7; our customers as well were surprised. So we had to provide the answers. And then finding the right resources. This is one of the key abilities for everyone managing any security program. So it’s this vast network of contacts and who to ask and who to grab by the arm. Man, I need you right now to please assess this vulnerability.

It’s a very important asset to have that Rolodex of contacts (disclosing my age again) and to know the ones to ask for help to get information. You don’t have to know the information, you have to know who knows.

CRob (09:55)
Right, and I think it’s really important that some people in the supply chain, like a commercial Linux vendor, are able to contextualize that a vulnerability may be abstract or not applicable, and I love that a lot of folks do that within the supply chain. Thinking about a vulnerability disclosure program, what we colloquially refer to as a VDP, it’s important for large projects, and it’s required for a large commercial enterprise.

Could you maybe talk to some of our listeners about what the benefits to their downstreams would be to put the pieces in place to get some type of vulnerability disclosure program together?

Rodrigo Freire (10:33)
So Red Hat has a VDP in place, so for every finder that comes to us disclosing a vulnerability, we’re going to acknowledge, we’re going to point towards, the person who found this CVE. This is an integral part of our workflow for giving credit to the finder. Of course, we ask the finder: would you like to be credited? How would you like to see that credit given?

And also, that’s not only for CVEs, but for findings on our infrastructure as well. So for example, on the customer portal or on some catalog or webpage or wherever else they find something at Red Hat, we give credit to every finder. We don’t do bug bounties. However, we have this VDP, so if someone is working their way to build a portfolio as a finder, as a pen tester, as a CVE finder, that’s 100% fine. We will give credit.

And then, and this is getting adjusted, we will negotiate with the finder: how much time would you want to have that under embargo? So we have all this negotiation with the finder to make something that can accommodate everyone’s needs.

CRob (11:48)
So it’s some good points. Well, let’s move on to the rapid-fire part of the interview. (Sound effect: Rapid fire!) Yeah!

Rodrigo Freire (11:56)
Here we go!

CRob (11:57)
First question. Here we go! Are you ready? Spicy or mild food?

Rodrigo Freire (12:03)
Definitely spicy, man. I went to India in November, at the end of last year, man. It was the time of my life, eating spicy food to the point of sweating from my head, man! That was a trip!

CRob (12:20)
Nice! (Sound effect: Oh, that’s spicy!) What’s your favorite whiskey?

Rodrigo Freire (12:26)
It’s Talisker. And I tell you what, if you’re having a Talisker and then you drink Blue Label, I’m sorry, Blue Label, that’s going to fade. Blue Label is just going to fade away. Talisker for the win.

CRob (12:42)
Very nice. Next question, Vi or Emacs?

Rodrigo Freire (12:46)
Vi, come on man!

CRob (12:48)
(Laughter) Nice! Rodrigo, what’s your favorite type of hat?

Rodrigo Freire (12:55)
Type of hat? Man, well, my favorite one is actually a Red Hat, right? But after I made the decision to become a bald person, I actually liked being bald, and I seldom wear any kind of hat, right? So I’m proudly bald, I’d say. Other than that, it would be just a baseball cap.

CRob (13:17)
OK, fair enough. And last question, tabs or spaces?

Rodrigo Freire (13:22)
Tabs! Show some finesse!

CRob (13:26)
Nice, excellent, excellent. Well, now. (Sound effect: That’s saucy!) As we wind up, do you have any advice for someone that’s looking to get into the field, whether it’s cybersecurity incident response or open source development? What advice do you have for these newcomers?

Rodrigo Freire (13:43)
First of all, play nice. Show respect and do your due diligence. I think everyone is going to embrace you wholeheartedly, because no one likes vulnerabilities. So if you’re going to find new stuff or even help to fix stuff, show the right attitude. Be positive, and build your relationship network. That’s important, because without it you’re not going to succeed, or you’re going to earn a bad reputation. Everyone’s already fighting a hard battle, so play nice.

CRob (14:15)
Nice. That’s excellent, fantastic advice. And our last question, do you have a call to action that you want to inspire our listeners to go do as soon as they listen to this?

Rodrigo Freire (14:26)
Yeah, definitely. So, take into account your environment. No one likes emergencies. Emergencies are expensive. No one likes emergency maintenance windows. So get to understand your environment. Is this CVE, is this vulnerability, really affecting it? Can you be that trusted advisor in your organization, so you actually can be the person who sets the expectations and the needs of the company?

There’s some pressure from these high-profile events, from the upper floor asking hard questions. So get to understand your real need so you can actually schedule something that will not hurt your team or your availability or even the stability of your environment. And finally, I would say: ask questions. Ask your vendor or your account reps or your consultants. So yeah, if you’re in doubt, go ask your questions. And I am positive that they are going to assure you that you have a secure and stable environment.

CRob (15:38)
Excellent. That’s, I think, some great advice from someone that’s been there on the front lines, helping fight the good fight for downstreams and representing their customers. Rodrigo, thank you for joining us today on What’s in the SOSS? I really appreciate you coming and talking to us.

Rodrigo Freire (15:53)
Thank you, Chris. And one last word I would like to stress here. In the security discussion, there’s no Red Hat. There’s no Canonical. There’s no Oracle. No. We all collaborate very closely when it comes to security issues. We are in close touch with everyone. Everyone knows each other. So there’s no Red Hat playing ball alone. No such thing. I’ve got to tell you guys, the XZ security incident was first disclosed to Debian, and then Debian got in touch with us, and then we started the coordination. So, yeah.

CRob (16:32)
I love that about our community, the fact that we all come together, are able to put our colored hats to the side, and collaborate.

Rodrigo Freire (16:37)
Exactly, mister!

CRob (16:39)
Excellent. Well, thank you, Rodrigo. Have a great day.

Rodrigo Freire (16:42)
Thanks, Chris.

Announcer (16:43)
Thank you for listening to What’s in the SOSS? An OpenSSF podcast. Be sure to subscribe to our series of conversations on Spotify, Apple, Amazon or wherever you get your podcasts. And to keep up to date on the Open Source Security Foundation community, join us online at openssf.org/getinvolved. We’ll talk to you next time on What’s in the SOSS?