


OpenSSF Newsletter – December 2024

By Newsletter

Welcome to the December 2024 edition of the OpenSSF Newsletter! Here’s a roundup of the latest developments, key events, and upcoming opportunities in the Open Source Security community.

Thank You for an Amazing 2024!


As 2024 comes to a close, we want to take a moment to express our deepest gratitude for the dedication, collaboration, and innovation you have brought to the OpenSSF community this year. Together, we achieved remarkable milestones—from expanding our global membership and launching impactful education initiatives to advancing critical security projects and fostering collaborations with public and private sectors. Your contributions have strengthened our shared mission to secure the open source ecosystem and build a safer, more reliable digital future.

As we look forward to 2025, we’re excited to continue fostering a vibrant and inclusive community, deepening collaborations, and driving meaningful change together. We appreciate your role in this journey.

Wishing you a safe and joyful holiday season!

Download report

The Open Source Software Stewards and Manufacturers Workshop and the EU Cyber Resilience Act (CRA)

In December, the Linux Foundation Europe and the OpenSSF hosted the Open Source Software Stewards and Manufacturers Workshop in Amsterdam, focusing on the implications of the EU Cyber Resilience Act (CRA). The event brought together industry leaders, community experts, and government officials to align on CRA obligations and foster collaboration for compliance.

Key outcomes included the formation of the Global Cyber Policy Working Group and three workstreams: CRA Readiness & Awareness, CRA Tooling & Processes, and CRA Standardization.

Details on how to participate and learn more:

Understanding the CRA: OpenSSF’s Role in the Cyber Resilience Act Implementation – Part 1


Published as Regulation (EU) 2024/2847 in the Official Journal of the European Union, the Cyber Resilience Act (CRA) entered into force on December 10, 2024, and will fully apply three years later, on December 11, 2027. The CRA applies to all products with digital elements, including their remote data processing, placed on the European market. This new blog series covers the implementation of the CRA and its relevance to open source software.

In Part 1, we will provide a general overview of the CRA and highlight LF Europe and the OpenSSF’s current activities in relation to the implementation.

Learn more

Understanding the CRA: OpenSSF’s Role in the Cyber Resilience Act Implementation – Part 2

In Part 1, we provided a general overview of the CRA and highlighted OpenSSF’s current activities related to its implementation. In Part 2, we’ll take a closer look at the three-year implementation timeline and what lies ahead. 

Read more

Shaping the Future of Generative AI: A Focus on Security


The Shaping the Future of Generative AI report, sponsored by LF AI & Data and CNCF, highlights how organizations prioritize security, cost, and performance as they adopt GenAI. Security remains a top concern, particularly in sectors like finance and healthcare, where privacy and regulatory compliance are critical.

The Open Source Security Foundation (OpenSSF) AI/ML Working Group plays a vital role in this landscape, focusing on initiatives like model signing with Sigstore to enhance trust and security in AI systems. This blog ties together insights from the report and OpenSSF’s ongoing efforts to address security challenges in GenAI adoption.

Open Source Usage Trends and Security Challenges Revealed in New Study


The Linux Foundation and Harvard released Census III, a groundbreaking study analyzing Free and Open Source Software (FOSS) usage and security challenges. Findings reveal trends like the rise of cloud-specific packages, increased reliance on Rust, and the critical role of a small group of contributors.

Learn more

Download report

 

Honda and Guidewire Join the Open Source Security Foundation (OpenSSF)


At the inaugural SOSS Community Day India, OpenSSF welcomed Honda and Guidewire Software as new members, expanding its growing global network to 126 organizations. The event highlighted India’s thriving open source ecosystem and brought together leaders to collaborate on securing the software we all depend on.

Learn more

SigstoreCon 2024: Advancing Software Supply Chain Security


On November 12, 2024, the software security community gathered in Salt Lake City for SigstoreCon: Supply Chain Day, co-located with KubeCon North America 2024. The one-day conference brought together developers, maintainers, and security experts to explore how Sigstore is transforming software supply chain security through simplified signing and verification of digital artifacts.

Read more

News from OpenSSF Community Meetings and Projects:

In the News:

Meet OpenSSF at These Upcoming Events!

You’re invited to…

See You Next Year! 

We want to get you the information you most want to see in your inbox. Have ideas or suggestions for next month’s newsletter about the OpenSSF? Let us know at marketing@openssf.org, and see you in 2025! 

Regards,

The OpenSSF Team

What’s in the SOSS? Podcast #22 – Sovereign Tech Agency’s Tara Tarakiyee and Funding Important Open Source Projects

By Podcast

Summary

In this episode, CRob talks to Tara Tarakiyee, FOSS technologist at the Sovereign Tech Agency, which supports the development, improvement and maintenance of open digital infrastructure. The Sovereign Tech Agency’s goal is to sustainably strengthen the open source ecosystem, focusing on security, resilience, technological diversity and the people behind the code.

Conversation Highlights

  • 01:42 – Why the Sovereign Tech Fund became the Sovereign Tech Agency
  • 03:59 – The ways the Sovereign Tech Agency supports open source infrastructure initiatives
  • 04:42 – The four criteria for Sovereign Tech Agency funding: prevalence, relevance, vulnerability and public interest
  • 06:51 – Sovereign Tech Agency success stories
  • 09:09 – Plans for the Sovereign Tech Agency in 2025
  • 11:54 – Tara answers CRob’s rapid-fire questions
  • 13:54 – Advice for those entering the open source development or security field
  • 14:55 – Tara’s call to action for listeners

Transcript

Tara Tarakiyee soundbite (00:01)
You can actually hear the relief when we’re talking to maintainers about how can we sort of get this kickstarted? How can we get the ball rolling? Hopefully those maintainers can also show the benefits of investing in security, investing in resilience to people that depend on their software and get them to invest in it as well.

CRob (00:17)
Hello everybody, I’m CRob. I do security stuff on the internet. I’m a community leader and I’m also the chief architect within the Open Source Security Foundation. One of the coolest things I get to do as part of this role is to host “What’s in the SOSS?” podcast, where I talk to interesting people, maintainers, leaders and folks involved with upstream open source security and open source supply chain security.

Today, we have a real treat. We have Tara from the Sovereign Tech Agency, and they are here to talk about the amazing work within the upstream community for the last several years. So maybe could you introduce yourself and explain a little bit about the organization you’re working with?

Tara Tarakiyee (01:00)
Thank you. I work with the Sovereign Tech Agency. We are a GC that’s funded by the German government, specifically through the Ministry of Economy and Climate to essentially strengthen the open source ecosystem, which is our mission. And we do that by investing in the components of our open digital infrastructure that are, I’m sure as you know, like maintained by very few people, but relied upon by millions and millions, what we call the roads and bridges of our digital world.

CRob (01:33)
I like that. That’s nice phrasing. As I mentioned, you all went through a little bit of a rebranding recently. Could you maybe talk about the change for us?

Tara Tarakiyee (01:42)
Yeah. So we had the whole concept that was developed by our co-founders, Fiona Krakenbürger and Adriana Groh to provide like an investment fund to support this critical infrastructure. And that was sort of like our first, let’s say, vehicle of support for projects. But essentially, what we’re trying to do is meet the community where they are, providing what they need. And we know that, sure, investments are good, but support for something as complex as our open source infrastructure needs to come in different forms and factors.

So since then, we’ve also introduced two other programs, what was called the Bug Resilience Program, which is now called the Sovereign Tech Resilience, as part of the rebrand, and also our Sovereign Tech Fellowship. We provide services. We work with the vendors in this space who have experience with vulnerability management, with reducing technical debt, with doing code reviews and providing audits, and also with setting up and running bug bounty programs. And we provide those vulnerability management services to open source projects indirectly. So we pay for it, but the services go to the open source project.

And with the fellowships, we are looking for maintainers who are key people in their communities who support several projects that for them, like, it wouldn’t make sense to apply it through something like the Sovereign Tech Fund. Usually what we do with the Sovereign Tech Fund is these service agreements that are sort of like deliverable based.

With the fellowship, we’re providing, like, a different way of providing support for maintainers, where we support maintainers who are key in their communities by providing them either with a board contract or with a six-month or three-month fellowship.

Those are sort of a bundle of services that we’re providing under the banner of the Sovereign Tech Agency. We all have the same mission. We’re still doing the same things. The name change was just to reflect that there’s, like, a big house now where all these different programs live.

CRob (03:47)
Makes a lot of sense. Could you maybe just share a little about how the agency kind of executes on this mission? How does someone become aware of these programs and how does someone take advantage of them to participate?

Tara Tarakiyee (03:59)
For the fellowship, we issued a call on our website. Currently, the call is closed for this year as we sort of review the applications that came in. For the Sovereign Tech Fund, we are still accepting ongoing applications on our website. So if you go to sovereign dot tech, you will find our website, and there you can navigate to the apply section where you can learn about our criteria, what we look for in critical open source infrastructure, and from there it will take you to our application platform.

CRob (04:32)
If one of the programs is open, are there any kind of limits on who can qualify to participate? Does it have to be an EU citizen or can it be anywhere from around the world?

Tara Tarakiyee (04:42)
Anyone in the world can apply as long as you’re maintaining open source critical infrastructure. The way we, it’s hard to define something as open source critical infrastructure, you know? So for us, we take four criteria. So we look at sort of the relevance of your open source project. Is it used in different places, in many places, by many people?

We also look at…sorry, that was prevalence…then relevance: is it used in particular sectors that are particularly important? Like, it could not be used by many people, but if it’s used in, like, the energy sector or aviation or something that’s, like, highly critical, then that’s another factor that balances that out.

And then we look at vulnerability. So, I mean, it’s not a nice question, but like, what would happen if your software component were to disappear tomorrow? Would, like, people panic? That’s probably a good sign that it’s infrastructure. But we also balance the question out by looking at different aspects of, like, why is this not receiving funding?

Because I think that’s a fundamental thing for us. Like we exist to support infrastructure because in general, like those are things that are hard to fund. It’s something, it’s a resource that everyone depends upon, but very few people contribute to. And that’s, that’s sort of like our niche. So that’s also something we look at in vulnerability.

And finally, we do an evaluation, like, is this software that’s in the public interest? So is it being used in applications where it’s generally good for society? So, based on our evaluation of these four criteria, we also look at the activities: is it more maintenance activities, or generally, like, do you want to develop new features? We do occasionally fund or invest in new features, but that’s only when there’s, like, a strong sort of public interest argument for it and no one else would fund it. In general, we mostly focus on improving the maintainability and security of those critical software components.

CRob (06:35)
Thinking back, you all have been operating, whether it’s the fund or the agency, for a little over two years. And thinking back over that, are there any particularly interesting success stories or where you felt that the fund or the agency made a real difference?

Tara Tarakiyee (06:51)
I mean, it’s generally nice just to hear the feedback from the different projects. It’s hard for me to name, like, one particular example or pick a favorite. In general, I think, like, when I look back and see, like, projects where they struggled for a long time to get the people that depend on them interested in security. Even though, like, it’s a critical dependency for, like, many companies and stuff, nobody wants to fund, like, a security team.

People would rather fund new features, which just, like, sort of exacerbates the problem. Like, it just creates more pressure on the maintainer and creates more technical debt and more potential for things to go wrong. You can actually hear the relief when we’re talking to maintainers about, yeah, like, we’re interested in your security plans. Like, how can we sort of get this kickstarted? How can we get maybe those other people also interested? ’Cause again, like, it’s such a big lift sometimes with some software that we can’t do it all on our own.

So we try getting the ball rolling and then hopefully those maintainers can also show the benefits of investing in security, investing in resilience to the people that depend on their software and get them to invest in it as well. I’m also very proud of our investments in, for example, Fortran, where it’s a technology that’s still very important. Like, people hear about it and remember it, like, maybe from their university days or reading about it on Wikipedia, but it’s still there. There’s still lots of code written in it.

I think Fortran developers deserve the amenities that modern-day developers have, like a good package manager and good developer tooling. So I was very proud of our investment there because, again, like, also considering the state of the world right now, Fortran is very vital in climate modeling and us understanding the world around us. So it’s a very critical time for investment in such technology.

CRob (08:50)
Excellent. Yeah, the older languages deserve the same love that the newer ones do. I totally agree. Getting out your crystal ball, it’s towards the end of 2024 here. What’s in the future for the Sovereign Tech Agency and your programs for next year? Any big plans or anything you’re very excited about getting to work on next year?

Tara Tarakiyee (09:09)
So for work, we learned a lot from the past two years. So I think now it’s time for us to also start exploring ways of bringing in more people into the field of open source. I think, like, a common concern is looking at open source technology, like, there are very few maintainers and not so many are able to come in. Like there’s a high barrier for entry. So maybe I think looking at ways of opening up the field and getting more people, because I mean, the door is open, but that doesn’t mean that people automatically come in. Like, people need help to be able to get into open source.

And also we work with some very complicated projects because their infrastructure, because they’re written in sometimes like high-performance languages that are harder to get into. So I don’t want to compare, but like it’s not maybe as easy as, like, web development where sometimes the languages are a bit more accessible and there are already like a plethora of resources existing to help people get into them.

So I think just getting more people through the door, getting more, let’s say, communities that don’t have access to the resources to become open source developers, helping them get to the door, getting them to become the maintainers of the future, I would say, would be something I would be very interested in working on, or a problem to tackle.

With open source, it’s important to consider that interoperability needs standards because that’s how you create sort of like a healthy technology ecosystem. Because you don’t want like sort of a monoculture where like one software becomes a dominant thing and then that just creates lots of issues. So you want to have a variety of implementations around the standard to solve a particular problem. That just creates healthier software.

I think exploring how maintainers interact with the standards bodies that exist. Also, you have increasing regulation and standardization coming from governments. And finally, I think there are some bodies that aren’t official standards bodies, but that help certain technology communities or programming languages sort of improve their work. The maintainers know about these, but most people don’t. And I think getting more involved in sort of supporting the work that happens there to create better specifications, move technologies forward and get more maintainers involved in the conversations about the technologies that they’re developing at standards bodies will be another area of interest for us.

CRob (11:42)
Very nice. Yeah, that’s an interesting vision. A docket of things that I think we’ll probably be working on together next year. Well, let’s move on to the rapid-fire part of the interview.

(Musical sound effect: Rapid fire, rapid fire!)

All right, I have a couple quick questions. I want you to just answer right off the top of your head. Spicy or mild food?

Tara Tarakiyee: (12:06)
Spicy, but I have a limit.

(Sound effect: Ooh, that’s spicy!)

CRob (12:12)
Excellent. From your perspective, what’s your favorite open source mascot?

Tara Tarakiyee (12:17)
Oh, I mean, I have to give it to the penguin, like, the Linux penguin.

CRob (12:22)
Tux! Very nice!

Tara Tarakiyee (12:24)
I do sometimes get jealous of the FreeBSD devil, because it’s slightly cooler.

CRob (12:28)
Absolutely! Thinking back on your career with interacting with open source, what was your first open source project you remember using?

Tara Tarakiyee (12:37)
I mean, the first one I actively used knowing it was open source was Firefox. I was a big part of the Firefox community early on in university. So I think how I got my start in open source advocacy was by organizing. I think, back then, we were throwing these Firefox launch parties in Jordan. And from there, I got into Linux.

CRob (13:02)
That’s awesome. Well, thank you for sharing. As we wind down, do you have any advice that you would want to share with someone entering the open source development or security field, or who is currently a maintainer?

Tara Tarakiyee (13:15)
I think it’s important for people to start listening more to maintainers. From my experience, like for the past two years working with maintainers, they know what they want, know where the problems are. There are people who really care about all these critical pieces of infrastructure that we depend upon, and they do have a good sense of what the problems are.

It’s just that I think not that many people listen to them. Someone who really cares about software development in a way that’s…I compare it a bit to being an artisan, where it’s more about the craft of the software and you just want to create the best software ever, and sometimes, occasionally, they create things that are very important and used in many places. Sometimes not accidentally, sometimes intentionally as well, and then, yeah, when it gets to that scale.

I think my advice is also don’t be afraid to say you need help. I think many maintainers feel like they need to do it on their own or think that people don’t care about their issues, but there are people out there who care about giving the adequate support to maintainers and creating communities of care for them. Definitely don’t be afraid. My advice for maintainers is don’t be afraid to ask for help and people do care about the work that you do. And my advice for others is please listen to maintainers. They know what they’re doing.

CRob (14:42)
Excellent. That’s excellent advice. Thank you. And finally, do you have a call to action, whether it’s kind of personal, like you just mentioned about for maintainers or contributors, or kind of around the Sovereign Tech Agency?

Tara Tarakiyee (14:55)
We do see the significant need, or the significant undersupply, of the level of resources we need to put into our digital infrastructure. And there’s a huge gap between how many resources we’re putting in right now compared to what’s actually needed to create, like, a healthy, vibrant system.

Like, we’re still far off at the moment, and I don’t think that many people realize that. So my call to action would be, let’s take this problem more seriously. Let’s invest, like, real resources in solving, like, the very real problems. We can’t wait until the next Log4j to happen and then say, oh my God, this could have been avoided.

I’m sort of also…maybe because like I’ve been working, doing this work for like 15 years now, tired of like that cyclical nature of like something big happens, people start caring. And then two years later, things revert back. Yeah, let’s, let’s try to break that cycle a little and put, like, significant investment that’s more long-term into creating maintainable, like sustainable support systems for our open source infrastructure.

CRob (16:00)
Excellent. Thank you. I appreciate you coming in to share your wisdom and your experiences through the Sovereign Tech Agency. I wish you a great day.

Announcer (16:09)
Like what you’re hearing? Be sure to subscribe to “What’s in the SOSS?” on Spotify, Apple Podcasts, AntennaPod, Pocket Casts or wherever you get your podcasts. There’s a lot going on with the OpenSSF and many ways to stay on top of it all. Check out the newsletter for open source news, upcoming events and other happenings. Go to openssf.org slash newsletter to subscribe. Connect with us on LinkedIn for the most up-to-date OpenSSF news and insight and be a part of the OpenSSF community at openssf.org slash get involved. Thanks for listening, and we’ll talk to you next time on What’s in the SOSS?

What’s in the SOSS? Podcast #21 – Alpha-Omega’s Michael Winser and Catalyzing Sustainable Improvements in Open Source Security

By Podcast

Summary

In this episode, CRob talks to Michael Winser, Technical Strategist for Alpha-Omega, an associated project of the OpenSSF that works with open source software project maintainers to systematically find new, as-yet-undiscovered vulnerabilities in open source code – and get them fixed – to improve global software supply chain security.

Conversation Highlights

  • 01:00 – Michael shares his origin story into open source
  • 02:09 – How Alpha-Omega came to be
  • 03:48 – Alpha-Omega’s mission is catalyzing sustainable security improvements
  • 05:16 – The four types of investments Alpha-Omega makes to catalyze change
  • 11:33 – Michael expands on his “clean the beach” approach to impacting open source security
  • 16:41 – The 3F framework helps manage upstream dependencies effectively
  • 21:13 – Michael answers CRob’s rapid-fire questions
  • 23:06 – Michael’s advice to aspiring development and cybersecurity professionals
  • 24:44 – Michael’s call to action for listeners

Transcript

Michael Winser soundbite (00:01)
When some nice, well-meaning person shows up from a project that you can trust, it becomes a more interesting conversation. With that mindset, fascinating things happen. And if you imagine that playing itself out again and again and again, it becomes cultural.

CRob (00:18)
Hello, everybody, I’m CRob. I do security stuff on the internet. I’m also a community leader and the chief architect for the Open Source Security Foundation. One of the coolest things I get to do with the foundation is to host the OpenSSF’s “What’s in the SOSS?” podcast. In the podcast, we talk to leaders, maintainers and interesting people within the open source security ecosystem. This week we have a real treat. We’re talking with my pal, Michael Winser, AKA “one of the Michaels” from the Alpha-Omega project. Michael, welcome sir.

Michael Winser (00:52)
It’s great to be with you, CRob.

CRob (00:53)
So for those of us that may not be aware of you, sir, could you maybe give us your open source origin story?

Michael Winser (01:00)
I have to think about that because there’s so many different sort of forays, but I think that the origin-origin story is in 1985, I was at my first job. You know, I got the Minix book and it came with floppy disks of source code to an entire operating system and all the tools. And I’m like, wait, I get to do this? And I started compiling stuff and then I started porting it to different things and using the code and then just seeing how it worked. That was like a life-changing sort of beginning.

And then, I think, at Google, working in open source. You know, Google has a tremendous history of open source and a community and culture of it and embracing it. And the last part of my work at Google was working on open source supply chain security for Google’s vast supply chain, both in terms of producing and consuming. And so that’s really been another phase of the journey for me.

CRob (01:53)
So I bet things have changed quite a lot back from 1985. And that’s not quite the beginning of everything. But speaking about beginnings and endings, you’re one of the leaders of a project called Alpha-Omega. Could you maybe tell us a little bit about that and kind of what AO is trying to do?

Michael Winser (02:09)
Sure. So Alpha-Omega started out as, sort of, two almost distinct things. One was at that sort of moment of crisis when OpenSSF was created and various companies like Microsoft and Google were like, we got to do something about this. And both Microsoft and Google, sort of never letting a good crisis go to waste, put a chunk of money aside to say, whatever we do, how do we figure this stuff out? It’s going to take some money to fix things. Let’s put some money in and figure out what we’ll do with it later.

Separately, Michael Scovetta had been thinking about the problem, had written a paper titled, surprisingly enough, Alpha-Omega, and was thinking about how one might address it. The “how one might” was looking at the Alpha, which is sort of like the most significant, most critical projects that we can imagine. And then the Omega is, what about all the hundreds of thousands of other projects?

And so that confluence of those two thoughts sat unrealized, unfulfilled, until I…I joined the GOSST team at Google and someone said, you should talk to this guy, Michael Scovetta. And that’s really how Alpha-Omega was started: two guys named Michael sitting in a room talking about what we might do. And there’s been a lot of evolution of the thinking and how to do it and lessons learned. And that’s what we’re here to talk about today, I think.

CRob (03:31)
I remember that paper quite well in the beginnings of the foundation. Thinking more broadly, how does one try to solve a problem with the open source software supply chain? From an AO perspective, how do you approach this problem?

Michael Winser (03:48)
There are so many ways into this question, but I’m actually just going to start by summarizing our mission, because we spend a lot of time on it; as you know, I’m a bit of a zealot on the mission, vision, strategy and roadmap thinking. And so our mission is to protect society by, critical word here, catalyzing sustainable security improvements to the most critical open source projects and ecosystems.

The words here are super important. Catalyzing. With whatever money we have on tap, it’s still completely inadequate to the scale of the problem we have, right? Like I jokingly like to sort of describe the software supply chain problem as sort of like Y2K, but without the same clarity of problem, solution or date.

It’s big, it’s deep, it’s poorly understood. So our goal is not to be sort of this magical, huge and permanent endowment to fix all the problems of open source. And we’ll talk more about how it’s not about just putting money in there, right? But to catalyze change and to catalyze change towards sustainable security. And so the word sustainable shows up in all the conversations, and it is really two sides of the same coin. When we talk about security and sustainability, they’re almost really the same thing.

CRob (05:03)
You mentioned money is a potential solution sometimes, but maybe could you talk about some of the techniques to try to achieve some better security outcomes with the projects you’ve worked with?

Michael Winser (05:16)
Some of it was sort of historically tripping over things and trying them out, right? And I think that was a key thing for us. But where we’ve arrived, I think, rather than trying to tell all the origin stories of all the different strategies that we’ve evolved, I’ll summarize. So Alpha-Omega has now come to mean not the most critical projects and then the rest, but the highest points of leverage and then scalable solutions. And so I use those two words: Alpha effectively means leverage and Omega means scale. And in that context, we’ve developed essentially a four-pronged strategy, four kinds of investment that we make to create change, to catalyze change.

And in no particular order, Category A is essentially staffing engagements at organizations that are able to apply that kind of leverage, where adding somebody whose job it is to worry about security can have that kind of impact. It’s kind of crazy how when you make it someone’s job to worry about security, you elevate something from a tragedy of the commons, where it’s everybody’s job and nobody’s job, nobody’s the expert, nobody can decide anything, to something where, well, that person said we should do X, I guess we’re gonna do X, whatever X was.

And then having somebody whose job it is to say, okay, with all the thousands of security problems we have, we’re gonna tackle these two this year, and developing that kind of theme, and then working with all those humans to create a culture around that change. Again, if it’s someone’s job to do so, it’s more likely to happen than if it’s nobody’s job to do it. Category A: staffing engagements at significant open source organizations that have both the resources to hire somebody and the leverage to have that person become effective. Right? And there’s a lot, like, packaged up in that “resources to hire someone.” Like, you know, humans are humans. They want to have, you know, jobs, benefits, promotions, other crazy stuff. Right? I’ve given up on all those things, but you know, that’s the world that people live in.

And we, we don’t want to be an employer, we want to be a catalyst, right? And so we’re not here to sort of create a giant organization of open source security. We’re here to cause other people to get there and ultimately to wean themselves from us and to have a sustainable model around that. And in fact, so for those grants, we encourage, we discuss, we ask, how are you diversifying your funding so that you can start supporting this as a line item on your normal budget as opposed to our sort of operational thing? And that’s, it’s a journey. So that’s category A.

Category B has some interesting overlap, but it really speaks to what I think of as the app stores of software development, the package ecosystems, right? There is no greater force on the internet than a developer working at some company with their boss breathing down their neck and they’ve got a thing to get done. They got to get tab A into slot B. They can’t figure it out. They Google something, and it says do not use this code, it is terrible, you should not use it, but if you were to use it, npm install foo will get you tab A into slot B, right?

At this point, you can’t give them enough warnings, it doesn’t matter, they’re under pressure, they need to get something done, they’re installing foo, right? How can we elevate these package ecosystems so that organizations, individuals, publishers, consumers can all have better metadata, better security practices, and trust the statements that these packages are making about themselves and that other entities are making about these packages to start making informed decisions or even policy-based decisions about what is allowed or not allowed into a developer environment in some organization, right?

And then that’s just a tiny part of it. So like the whole package app store concept where you essentially, I am installing this thing and I expect some truths about that name, to be not a name-squatted thing. I expect the versions to be reasonably accurate about not being changed underneath me. And a thousand little things that we just want to take for granted, even without worrying about them somehow making it all be secure, is such a point of leverage and criticality that we find investing in that worthwhile. And so that’s a category for us.

Category C is actually where most of our conversations start. And perhaps I’m getting ahead of our conversation, but it’s with audits. And we love paying for audits into an organization that essentially is ready to have an audit done. And there’s so much that gets wrapped up in that: is an organization ready to have an audit? Do they want to have the audit? What are they going to do with the audit’s results? How do they handle it?

And so as an early engagement, it’s remarkably cost-effective to like find out whether that organization is an entire giant, complicated ecosystem of thousands of projects and things like that, or three to five amazing hackers who work nights and weekends on some really important library. That audit tells everybody an awful lot about where that project is on their journey of security. And one of our underlying principles about doing grants is make us want to do it again. And how an organization responds to that audit is a very key indicator, whether they’re ready for more, could they handle it, what are they going to do with it, et cetera.

And then category D, you could name it almost anything you want. This is us embracing the deep truth that we have no idea what we are doing. Collectively as an industry, nobody knows what we’re doing in this space. It’s hard, right? And so this is an area where we think of it as experimentation and innovation. And it’s a grab bucket for things that we try to do. And one of our stakeholders pointed out that we weren’t failing often enough early on in our life cycle. And it was like, if you’re not trying enough things, you’re taking the easy money, easy bets and not learning important lessons. Like, okay, we’re gonna screw some things up, get ready!

Again, the journey of learning every step along the way. And it’s not like we are, like, recklessly throwing money to see if you can just burn it into security. That doesn’t work — we tried. But we’re seeing, you know, what we can do, and those lessons are fun, too.

CRob (11:16)
Excellent. So in parallel with, kind of, the four strategies you’ve used, you and I have talked a lot about your concept of “clean the beach.” Could you maybe talk a little bit more about your idea of cleaning the beach?

Michael Winser (11:33)
Absolutely. So one of our early engagements on the Omega side was to work with an organization called OpenRefactory that had developed some better-than-generic static analysis techniques. And I don’t even know how it works, and there’s probably, you know, some humans in the loop to help manage the false positives and false negatives and things like that. They felt that they could scale this up to handle thousands of projects, to go off and scan the source code to look for vulnerabilities previously not found in that source code. And then also to generate patches, pull requests for fixes to those things as well.

And this is, sort of, the holy grail dream of, like, we’ve got all this problem, if only we could just turn literally oil into energy into compute, into fixing all this stuff, right? And there’s a lot of interesting things along the way there. So the first time they did it, they went and did a scan, 3,000 projects, and came back and said, look at us, we scanned 3,000 projects, we found I don’t know how many vulnerabilities, we reported this many, this many were accepted. There’s a conversation there that we should get back to about the humans in the loop. And after it, I’m like, okay, if I try to tell anybody about this work, I don’t know what difference it makes to anybody’s lives.

And I realized that it was the moral equivalent of taking a…some kind of boat, maybe a rowboat, maybe a giant barge, out to the Pacific garbage patch and picking up a lot of plastic and coming back and saying, look at all this plastic I brought back. And I’m like, that’s good. And maybe you’ve developed a technique for getting plastic at scale, but thousands of orders of magnitude off, like literally it’s gigatons, teratons of stuff out there. And you brought back a little bit. I’m like, I need to be more short-term in terms of getting work and impact. And we care about continuous results and learnings as opposed to like, great, we found a way to turn the next trillion dollars into like a lot of scans and things like that. And so we thought about this a lot.

And it was sort of around the same time as the somewhat terrifying XZ situation, right? And I realized that XZ showed us a lot about the frailty of projects because of the humanness of people involved. But it also showed us that, and I’m going to be kind of stern about this, open source projects that use upstream dependencies like XZ are treating those dependencies exactly the way that we complain about corporations using open source for free.

They assume that this source code is being taken care of by somebody else and that it comes down from the sky on a platter with unicorns and rainbows and whatever other…like, how many people in these organizations that use XZ, whether they were for-profit entities or whatever, were paying attention upstream and saying, hey, I wonder if any of our projects needs our help? I wonder if we should spend some more time working on our upstream. Said nobody ever.

And so coincidentally, we wanted to do some work with someone we met at PyCon, this gentleman named Jarek Potiuk who’s on the PMC for Apache Airflow. And he wanted us to talk about our security work at the Airflow conference. And I’m like, well, we’ve got to talk about something. And so we start talking about Airflow. And he was already down that road of looking at his dependencies and trying to analyze them a little bit. And we said, what can we do here?

And so bring this back to Pacific garbage patch, right? We’d all love for the Pacific garbage patch to go away, right? But day to day, we go to the beach. And wouldn’t it be nice if we could talk about a section of the beach as being not perfectly okay, but free of a common set of risks, right? So we thought about, so can we do that? And he’s like, well, I know exactly how many dependencies total Airflow has. It has 719 dependencies.

And we asked ourselves the question, has anybody ever done a complete audit across those dependencies? Where complete is across all 719, not a complete code analysis of every single piece of those projects. And the answer was no. And we said, well, we’re going to make the answer yes. And so we started a project to go and bring automatic scanning to that so that OpenRefactory instead of trying to scan 3,000 arbitrary projects or the top 3,000 or the top 3,000 dependencies, they pick 718 and scan those. And Jarek and his team put together some scripts to go off and pull key facts about projects that can be used to assess risk on an ongoing basis in terms of whether we need to get involved or should we do something or should we worry about this or not, right?

And it’s everything from understanding the governance to the size of the contribution pool to the project, to its vulnerability history, right? And just building up a picture where the goal is not to sort of audit the source code of each of these projects, because that’s actually not Airflow’s job, right? And they wouldn’t do a good job of it per se. But to understand across their dependencies where there is risk and where they might need to do something about it.

From that came another concept that is what I really like. Going back to the, let’s not pretend that this code came down from the sky on a silver platter with unicorns. What are we supposed to do about it if we see risk in one of our upstream dependencies? And from that, the framework that came out was essentially the three F’s. You either need to fix, fork or forego those dependencies. There’s another way of saying forego, but we’ll stick with forego. There’s a fourth one which is fund, and we’ll talk about why that is not actually something at the disposal of most projects.

The fix part is kind of interesting. The fork part is an expensive decision. It’s saying, you know, they’re not doing it, but we need this and we can’t get something else. We can’t forego it because it’s whatever. So I guess it’s ours now, right? And taking responsibility for the code that you use, because every dependency you use, right, unless you’re using some very sophisticated sandboxing, every dependency you use has basically total access to your build environment and total access to your production environment. So it’s your code, it’s your responsibility.

So with that mindset, fascinating things happened. When an automated scan from OpenRefactory found a new vulnerability in one of the dependencies, they would report it through their private vulnerability reporting, or we had some editing that noticed that these people don’t have private vulnerability reporting.

And so one of the fixes was helping them turn on PVR, right? But let’s say they had PVR, they would file the fix or file the vulnerability. And because it looked like it came from a machine, right? Unfortunately, open source maintainers have been overwhelmed by well-meaning people with bots and a desire to become a security researcher, with a lot of, let’s just say, not the most important vulnerabilities on the planet.

And that’s a poor signal-to-noise ratio for them to deal with. So some of these reports were getting ignored, but then when an Apache Airflow maintainer would show up on the report and say, hey, my name is “Blah,” I’m from Apache, we depend upon you, would you be open to fixing this vulnerability, we would really greatly appreciate it. In other words, a human shows up and behaves like a human. You’d be amazed at what happened. People are like, my God, you know I exist? You’re from Apache Airflow, I’ve heard of you guys. How can I help? I’ll get right on it, right? Like, the response changed dramatically. And that’s a key lesson, right?

And if I were to describe one of my goals for this sort of continued effort, right, is that within the Airflow community, there’s an adopt a dependency mindset where there’s somebody, at least one person for every dependency. And I mean, transitively, it’s not the top level. It’s the whole graph, because you can’t assume that your transitive people are behaving the same way as you and that. It’s easy when it’s like not a crisis, but when it’s a crisis, right?

Having somebody you know talk to you about the situation and offer to help is very different than, oh my God, you’ve shown up on somebody’s radar as having a critical vulnerability and now everybody and their dog is asking you about this. Lawyer-grams are coming. We’ve seen that pattern, right? But then Jarek from Apache Airflow shows up and says, hey, Mary, sorry you’re under this stress. We’re actually keen to help you as well. You know, who’s going to say no to that kind of help when it’s somebody they already know? Whereas the XZ situation has effectively taught people to say, I don’t know you, why am I letting you into my project? How do I know you’re not some hacker from some bad actor, right?

That mindset of let’s pick some beaches to focus on, understand the scope of that, and then take that 3F mindset, right? And so Airflow has changed their security roadmap for 2025, and that includes doing work with, on behalf of, and towards their dependencies. They’ve taken some dependencies out, so they’ve done the forego. And some of the things they’re asking them to do is just turn on PVR or maybe do some branch protection, some of the things that you might describe in the OpenSSF security baseline, right?

Things that people don’t think they’re competent to do, or haven’t worried about it yet, or whatever. But when some nice, well-meaning person shows up from a project that you can trust, it becomes a more interesting conversation. And if you imagine that playing itself out again and again and again, it becomes cultural.

CRob (21:01)
Yeah, that’s amazing. Thank you for sharing. That’s some amazing insights. Well, let’s move on to the rapid-fire section of the podcast! First hard question: spicy or mild food?

Michael Winser (21:13)
Oh, I think both. Like I don’t want to have spicy every single day, but I do enjoy a nice spicy pad Thai or something like that or whatever. I’m, you know, variety is the spice of life. So there you go.

CRob (21:25)
Excellent. Fair enough. Very contentious question: Vi or Emacs?

Michael Winser (21:32)
I confess to Vi as my default console editor. Back in that 1985 time, I did port Jove — Jonathan’s Own Version of Emacs. It’s still alive today. I used that. And then, in my Microsoft days, I used this tool called Epsilon that was an OS/2 and DOS Emacs-derived editor. And the key bindings are all locked in my brain and worked really well. But then full-grown Emacs became available to me, and the key bindings were actually nuancedly different, and my brain skid off the tracks. And then as I became a product manager, the need became more casual, and so Vi has become just convenient enough. I still use the Emacs key bindings on the macOS command line to move around.

CRob (21:19)
Oh, very nice. What’s your favorite adult beverage?

Michael Winser (22:23)
I think it’s beer. It really is.

CRob (22:25)
Beer’s great. A lot of variety, a lot of choices.

Michael Winser (22:28)
I think a good hefeweizen, a wheat beer, would be very nice.

CRob (22:23)
Okay, and our last most controversial question: tabs or spaces?

Michael Winser (22:39)
Oh, spaces. (Laughter) I’m not even like, like I am a pretty tolerant person, but there’s just no way it ends well with tabs.

CRob (22:50)
(Laughter) Fair enough, sir. Well, thank you for playing rapid fire. And as we close down, what advice do you have for someone that’s new or trying to get into this field today of development or cybersecurity?

Michael Winser (23:06)
The first piece of advice I would have is that it’s about human connections, right? Like, so much of what we do is about transparency and trust, right? Transparency is about things that happen in the open, and trust is about behaving in ways that cause people to want to do things with you again, right? There’s a predictability to trust, too, in terms of, like, not doing randomly weird things and things like that.

And so, and then there’s also, you know, trust is built through shared positive experiences or non-fatal outcomes of challenges. So I think that anybody wanting to get into this space, showing up as a human being, being open about who you are and what you’re trying to do, and getting to know the people, and that sort of journey of humility of listening to people who you might think you know more than they do and you might even be right, but it’s their work as well. And so listening to them along the way, that’s personally one of my constant challenges. I’m an opinionated person with a lot of things to say. Really, it’s true.

It’s very generic guidance. I think that if you want to just get started, it’s pretty easy. Pick something you know anything about, show up there in some project and listen, learn, ask questions and then find some way to help. Taking notes in a working group meeting, it’s a pretty powerful way to build trust in terms of, this person seems to take notes that accurately represent what we tried to say in this conversation. In fact, better than what we said. We trust this person to represent our thoughts is a pretty powerful first step.

CRob (24:32)
Excellent. I really appreciate you sharing that. And to close, what call to action do you have for our listeners? What would you like them to take away or do after they listen to this podcast?

Michael Winser (24:44)
I would like them to apply the 3F framework to their upstream dependencies. I would like them to look at their dependencies as if they were a giant pile of poorly understood risk and not just through the lens of how many vulnerabilities do I have unpatched in my current application because of some, you know SBOM analyzing tool telling me. But from a longer-term organizational and human risk perspective, go look at your dependencies and their dependencies and their dependencies and build up just a heat map of where you think you should go off and apply that 3F framework.

And if you truly feel like you can’t do any one of those things, right, because you’re not competent to go fix or fork and you have no choice but to use the thing so you can’t forego it, right, then think about funding somebody who can.

CRob (25:34)
Excellent words of wisdom. Michael, thank you for your time and all of your contributions through your history and now through the Alpha and Omega projects. So we really appreciate you stopping by today.

Michael Winser (25:45)
It was my pleasure and thank you for having me. I’ve enjoyed this tremendously. It would be a foolish thing for me to let this conversation end without mentioning the three people at Alpha-Omega who really, without whom we’d be nowhere, right? And so, you know, Bob Callaway, Michael Scovetta, and Henri Yandell. And then there’s a support crew of other people as well, without whom we wouldn’t get anything done, right?

I get to be, in many ways, the sort of first point of contact and the loud point of contact. We also have Mila from Amazon, and we have Michelle Martineau and Tracy Li, who are our LF people. And again, this is what makes it work for us, is that we can get things done. I get to be the sort of loud face of it, but there’s a really great team of people whose wisdom is critical to how we make decisions.

CRob (26:32)
That’s amazing. We have a community helping the community. Thank you.

Michael Winser (26:35)
Thank you.

Announcer (26:37)
Like what you’re hearing? Be sure to subscribe to What’s in the SOSS? on Spotify, Apple Podcasts, AntennaPod, Pocket Casts or wherever you get your podcasts. There’s lots going on with the OpenSSF and many ways to stay on top of it all! Check out the newsletter for open source news, upcoming events and other happenings. Go to OpenSSF dot org slash newsletter to subscribe. Connect with us on LinkedIn for the most up-to-date OpenSSF news and insight. And be a part of the OpenSSF community at OpenSSF dot org slash get involved. Thanks for listening, and we’ll talk to you next time on What’s in the SOSS?

In the Face of Mounting Regulatory Oversight, Honda and Guidewire Join Industry Leaders Securing Software Development at the Open Source Security Foundation (OpenSSF)

By Blog, Press Release

Growing Member Base and Launch of SOSS Community Day India Continue to Advance Open Source Software Security

Delhi, India – December 10, 2024 – The Open Source Security Foundation (OpenSSF), a global cross-industry initiative of the Linux Foundation, helps individuals and organizations build secure software by providing guidance, tools, and best practices applicable to all software development. Today, the OpenSSF announced new members from the automotive and insurance technology industries at the first-of-its-kind Secure Open Source Software (SOSS) Community Day India. SOSS Community Day India brings together community members from across the security and open source ecosystem to share ideas and advance solutions for sustainably securing the software we all depend on, building a foundation for a more secure and innovative future.

New general member commitments come from Honda Motor Co., Ltd. and Guidewire Software, Inc. With support from these new organizations, the OpenSSF heads into the last month of 2024 with 126 members that together recognize the importance of backing, maintaining, and promoting secure open source software.

“We are excited to welcome our newest members and celebrate this milestone with the launch of the first SOSS Community Day in India,” said Arun Gupta, Vice President and General Manager of Developer Programs at Intel and OpenSSF Governing Board Chair. “India has an incredible open source ecosystem, and this event provides an opportunity to foster collaboration, address shared challenges, and ensure the security of the open source software powering the digital world. Together, we’re building a more secure and innovative future.”

SOSS Community Day India features a packed agenda with sessions led by top experts on topics like education, innovation, tooling, vulnerabilities, and threats. The event not only highlights the OpenSSF community’s ongoing work, but also provides an avenue to expand its reach through new partnerships and memberships, welcoming inquiries from potential collaborators. Participants will see how the OpenSSF community is driving improvements in open source software security and advancing its mission to create a more secure ecosystem for everyone.

General Member Quotes

Honda Motor Co., Ltd.

“Honda is pleased to be able to participate in the OpenSSF project as OSS security becomes increasingly important. In addition to contributing to the OpenSSF community, we look forward to working to strengthen OSS security across the industry in the future.” – Yuichi Kusakabe, Chief Architect – IVI software PF/OSPO Tech Lead, Honda Motor Co., Ltd.

Guidewire Software, Inc.

“We’re excited to become a member of OpenSSF,” said Anoop Gopalakrishnan, vice president, Engineering, Guidewire. “This partnership reflects our continued commitment to advancing open source security and collaborating with like-minded innovators to create a more secure and resilient software ecosystem.” 

Additional Resources

  • View the complete list of OpenSSF members.
  • Explore the SOSS Community Day India program schedule to see the lineup of sessions and speakers.
  • To learn more about the OpenSSF community, including information about membership, contribution, project participation, and more, contact us here.

###

About the OpenSSF

The Open Source Security Foundation (OpenSSF) is a cross-industry initiative by the Linux Foundation that brings together the industry’s most important open source security initiatives and the individuals and companies that support them. The OpenSSF is committed to collaboration and working both upstream and with existing communities to advance open source security for all. For more information, please visit us at openssf.org.

About the Linux Foundation

The Linux Foundation is the world’s leading home for collaboration on open source software, hardware, standards, and data. Linux Foundation projects are critical to the world’s infrastructure, including Linux, Kubernetes, Node.js, ONAP, OpenChain, OpenSSF, PyTorch, RISC-V, SPDX, Zephyr, and more. The Linux Foundation focuses on leveraging best practices and addressing the needs of contributors, users, and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see its trademark usage page: www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.

Media Contact
Jennifer Tanner
Look Left Marketing
openssf@lookleftmarketing.com

What’s in the SOSS? Podcast #20 – Jack Cable of CISA and Zach Steindler of GitHub Dig Into Package Repository Security

By Podcast

Summary

CRob discusses package repository security with two people who know a lot about the topic. Zach Steindler is a principal engineer at GitHub, a member of the OpenSSF TAC, and co-chair of the OpenSSF Securing Software Repositories Working Group. Jack Cable is a senior technical advisor at CISA. Earlier this year, Zach and Jack published a helpful guide of best practices entitled “Principles for Package Repository Security.”

Conversation Highlights

  • 00:48 – Jack and Zach share their backgrounds
  • 02:59 – What package repositories are and why they’re important to open source users
  • 04:17 – The positive impact package security has on downstream users
  • 07:06 – Jack and Zach offer insight into the Principles for Package Repository Security
  • 11:18 – Future endeavors of the Securing Software Repositories Working Group
  • 17:32 – Jack and Zach answer CRob’s rapid-fire questions
  • 19:31 – Advice for those entering the industry
  • 21:28 – Jack and Zach share their calls to action

Transcript

Zach Steindler soundbite (00:01)
We absolutely are not looking to go in and say, OK all ecosystems must do X. But what we are is sort of this forum where these conversations can take place. People who operate these package repositories can say here’s what’s working for us, here’s what’s not working for us. Share those ideas, share those experiences and learn from each other.

CRob (00:17)
Hello everybody, I’m CRob. I do security stuff on the internet and I’m also a community leader within the OpenSSF. And one of the fun things I get to do is talk to amazing people that have input and are developing and working within upstream open source.

And today we have a real treat. I have two amazing people. I have Zach and Jack, and they’re here to talk to us a little bit about package repository security. So before we start, could I ask each of you to maybe give us a brief introduction?

Jack Cable (00:48)
Great. Thank you so much for having us on here, CRob. I am Jack Cable. I’m a senior technical advisor at CISA, where I help lead our agency’s work around open source software security and secure by design. For those unfamiliar with CISA, the Cybersecurity and Infrastructure Security Agency is the nation’s cyber defense agency. So we help to protect both the federal civilian government and critical infrastructure, of which there are 16 sectors ranging from everything like water to power, financial services, healthcare, and so on. And probably as no surprise to anyone here, all of these sectors are heavily dependent on open source software, which is why we’re so eager about seeing how we can really be proactive in protecting the open source ecosystem.

I come from a background in security research, software development, spent some time doing bug bounty programs, finding vulnerabilities in companies. Gradually went over to the policy side of things, spent some time, for instance, in the Senate where I worked on legislation related to open source software security and then joined CISA about a year and a half ago.

CRob (02:04)
Awesome. Zach?

Zach Steindler (02:13)
Yeah, CRob, thanks so much for having us. My name is Zach Steindler. I’m a principal engineer at GitHub. I have a really amazing job that lets me work on open source supply chain security, both for GitHub’s enterprise customers, but also for the open source ecosystem. CRob, you and I are both on the OpenSSF TAC. And in addition to that, I co-chair the Securing Software Repositories Working Group, where we recently had the chance to collaborate on the Principles for Package Repository Security document.

CRob (02:40)
Excellent, which we will talk about in just a few moments. And, you know, thank you both for your past, current and future contributions to open source. We really appreciate it. So our first question. Would you tell us what a package repository is and why that’s something that’s important to open source users?

Zach Steindler (02:59)
Yeah, this is something that comes up a lot in the working group, and what we’ve discovered is that everyone has slightly different terminology that they prefer to use. Here when we’re talking about package repositories, we’re talking about systems like NPM, like PyPI, like RubyGems or Homebrew — places that people are going to download software that they then run on their machine. And that’s a little bit in contrast to other terminology you might hear around repositories.

So here we aren’t talking about, like, where you store your source code, like in a Git repository or a Mercurial repository, that sort of thing. These package repositories are widely used. Many of them serve hundreds of millions to billions of downloads per day, and those downloads are being run on developers’ machines, they’re being run on build servers, and they’re being run on people’s computers, you know, whatever you’re doing on your mobile phone or your desktop device. And so the software that’s stored in these package repositories is really used globally by almost everyone daily.

CRob (04:07)
Thinking about kind of this critical space within critical software here, how does improving package repository security affect all the downstream folks from that?

Jack Cable (04:17)
Great. And really, to what Zach was saying, that’s in part why we picked this as a priority area at CISA, recognizing that regardless, really, of what critical infrastructure sector you’re in, regardless of whether you’re a small business, whether you’re a large company, whether you’re a government agency, you’re heavily dependent on open source software. And in all likelihood, that software is being integrated into the products you’re using through a package repository.

So we wanted to see, where are the places where we can have the biggest potential impact when it comes to security, and package repositories really stood out as central points where virtually everyone who consumes open source software goes to download and integrate that software. So it is very central to essentially all of the software that our world relies on today. And we also recognize that many of these package repositories themselves are resource constrained, often nonprofits who operate these really critical, essential services, serving millions of developers and billions of users across the world.

So what kind of can be done to help strengthen their security? Because we’ve seen attacks both on package repositories themselves, whether it’s compromising developers’ accounts or kind of some of these underlying pervasive flaws in open source packages. How can package repositories really bolster their security to make the entire open source ecosystem more resilient? That’s what we set out to do with the Principles for Package Repository Security framework we created, which I know we’ll get much more into. But the goal is to really aggregate some of the best practices that perhaps one or two package repositories are doing today, but that we’re not seeing across the board.

Things that can be as basic, for instance, as requiring multifactor authentication for developers of really critical projects to make sure that that developer’s account is much harder to compromise. So some of these actions that we know take time and resources to implement and we want to see how we can help package repositories prioritize these actions, advocate for them, get funding to do them so that we can all benefit.
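
To make that practice a bit more concrete, here is a minimal, hypothetical sketch in Python of the kind of publish-time policy check a package repository could apply. The download threshold, the names, and the policy itself are illustrative assumptions for this newsletter and are not drawn from any particular registry’s implementation.

# Hypothetical sketch of a registry-side publish check, not any real
# registry's policy: treat high-download projects as "critical" and
# require MFA for all of their maintainers before accepting an upload.

from dataclasses import dataclass

CRITICAL_DOWNLOADS_PER_WEEK = 1_000_000  # illustrative threshold

@dataclass
class Maintainer:
    username: str
    mfa_enabled: bool

@dataclass
class Project:
    name: str
    weekly_downloads: int
    maintainers: list[Maintainer]

def is_critical(project: Project) -> bool:
    return project.weekly_downloads >= CRITICAL_DOWNLOADS_PER_WEEK

def may_publish(project: Project) -> bool:
    # Non-critical projects: accept the upload (a real registry might
    # still nudge maintainers toward enabling MFA).
    if not is_critical(project):
        return True
    # Critical projects: every maintainer must have MFA enabled, since
    # any one compromised account can push a malicious release.
    return all(m.mfa_enabled for m in project.maintainers)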

CRob (06:52)
Well, we’ve touched on it a few times already. Let’s talk about the Principles for Package Repository Security. Could you maybe share a little bit about what this document’s about, how it came to be, and maybe a little bit about who helped collaborate to do it?

Jack Cable (07:06)
I’ll kick it off, and then Zach can jump in. So really, as I was saying, we wanted to create kind of a common set of best practices that any package repository could look to to kind of guide their future actions. Because kind of what we’ve been seeing, and I’m sure Zach can get much more into it with the work he’s led through the Securing Software Repositories Working Group, is that there are many software repositories that do care significantly about security and really are taking a number of steps. Like we’ve seen, for instance, both Python and npm requiring multi-factor authentication for their maintainers, Python even shipping security tokens to their developers. Some of these actions really have the potential to strengthen security.

So what the Principles for Package Repository Security framework is, is really an aggregation of these security practices that we developed over the course of a few months collaboratively between CISA, the Securing Software Repositories Working Group, and then many package repositories. We landed on a set of four buckets really around security best practices, including areas like authentication and authorization.

How are these package repositories, for instance, enforcing multi-factor authentication? What tiers of maturity might go into this? And then, for instance, if they have a command line interface utility, how can that make security really seamless for developers who are integrating packages?

Say, if there are known vulnerabilities in those packages, is that at least flagged to the developer so they can make an informed decision around whether or not to integrate the version of the open source package they’re looking at? So maybe I’ll pass it over to Zach to cover what I missed.
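
As an illustration of that kind of client-side flagging, here is a minimal Python sketch that asks the OSV.dev vulnerability database whether a specific package version has known advisories before it is installed. The OSV query endpoint is public and real, but the wrapper function and the example package and version are assumptions chosen for illustration; actual package-manager CLIs each wire this kind of check up in their own way.

# Minimal sketch: query OSV.dev for known vulnerabilities affecting a
# package version, so a CLI could warn the developer before installing.
# The endpoint is OSV's public query API; the helper is illustrative and
# not taken from any particular package manager.

import json
import urllib.request

OSV_QUERY_URL = "https://api.osv.dev/v1/query"

def known_vulnerabilities(name: str, version: str, ecosystem: str = "PyPI") -> list[str]:
    payload = json.dumps({
        "package": {"name": name, "ecosystem": ecosystem},
        "version": version,
    }).encode("utf-8")
    req = urllib.request.Request(
        OSV_QUERY_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        result = json.load(resp)
    # OSV returns {"vulns": [...]} when advisories exist, or {} when none do.
    return [vuln["id"] for vuln in result.get("vulns", [])]

if __name__ == "__main__":
    # Example package and version chosen only for illustration.
    advisories = known_vulnerabilities("requests", "2.25.0")
    if advisories:
        print("Known advisories:", ", ".join(advisories))
    else:
        print("No known advisories for this version.")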

Zach Steindler (09:08)
Yeah, the beauty of open source is that no one’s in charge. And people sometimes misunderstand the Securing Software Repositories Working Group, and they’re like, can I come to that and, sort of like, mandate all the package repositories implement MFA? And the answer is no, you can’t, first because it’s against the purpose of the group to like tell people what to do. But also, it’s not a policy-making group. It’s not a mandate-creating group, right? Participation is voluntary.

Even if we were to, you know, issue a mandate, each of these ecosystems has like a rich history of why they’ve developed certain capabilities, things they can and cannot do, things that are easy for them, things that are hard. So we absolutely are not looking to go in and say, OK, you know, all ecosystems must do X. But what we are is sort of this forum where these conversations take place.

People who operate these package repositories can say, here’s what’s working for us, here’s what’s not working for us. Share those ideas, share those experiences and learn from each other. And so when it came to writing the Principles for Package Repository Security document, the goal was not to say, here’s what you must do. But these different ecosystems are all very busy, very resource constrained, and one of the items often on their backlog is to create a security roadmap or to put together a request for funding for, like, a full-time security-in-residence position. But to do that, they need to have some idea of what that person is going to work on.

And that’s really where the principles document comes in, is where we’re creating this maturity model, this roadmap, whatever you want to call it, more as a menu that you can order off of and not a mandate that everyone must follow.

CRob (10:50)
That sounds like a really smart approach. I applaud your group for taking that tactic. The artifact itself is available today. You can go out and review it and maybe start adopting a thing or two in there if you manage a repository, but also it took you a lot of time and effort to get there. But describe to us what’s next on your roadmap. What does the future hold for your group and the idea of trying to institute some better security practices across repos?

Zach Steindler (11:18)
Yeah, I could start out to talk about the Securing Software Repositories Working Group. I’m not sure I would have had this grand plan at the time, but over time it sort of crystallized that the purpose of the working group is to put together roadmaps like the principles document that we published. I gotta plug that all the work that we do is on repos.openssf.org. So it’s a great place to review all these documents.

The second thing that the working group is focused on, other than just being this venue where people can have these conversations, is to take the individual security capabilities and publish specific guidance on how an ecosystem implemented it, and then give sort of a design and security overview to make it easier for other ecosystems to also implement that capability. We have a huge success story here with a capability called Trusted Publishing.

So to take a step back, the point of Trusted Publishing is that when you are building your software on a build server and you need to get it to the package registry, you have to authenticate that you have the permission to publish to that package namespace. Usually in the past, this has been done by taking someone’s user account and taking their password and storing it in the build pipeline. Maybe you could use an API key instead, but these are really juicy targets for hackers.

So Trusted Publishing is a way to use the workload identity of the build system to authorize the publish. And then you don’t have an API key that can be exfiltrated and broadly used to upload a lot of malicious content. And so this capability was first implemented in PyPI, shortly thereafter in RubyGems.

And then we asked Seth Larson, who’s a member of the working group and the Python Software Foundation’s security-in-residence, to write up implementation guidance based on what his team at the PSF learned and also based on what the RubyGems team learned. And it so happened that NuGet, the package manager for the .NET Microsoft ecosystem, was also interested in this capability, and the timing just happened to work out perfectly where they started coming to the working group meetings.

We already had this drafted guidance on implementation, and they were able to take that and kind of accelerate their RFC process, adapting it so that it was relevant to the different concerns in their ecosystem. They’re much further along on this track of implementing this capability than they would otherwise have been if they had to start at square one. So in addition to roadmaps, I think we’re going to be focusing more in the near future on finding more of these security capabilities to publish guidance on, to help the package repositories learn from each other.
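
To make the mechanics Zach describes a bit more concrete, here is a minimal Python sketch of what a Trusted Publishing token exchange can look like from inside a CI job: the build’s OIDC identity token is fetched from the CI runner and exchanged for a short-lived publish token, so no long-lived password or API key is stored in the pipeline. The environment variable names follow GitHub Actions’ OIDC setup and the mint-token URL follows PyPI’s implementation; treat those details as assumptions rather than a specification, since each CI system and registry wires this up slightly differently.

# Illustrative sketch of a Trusted Publishing exchange from a CI job.
# Assumes GitHub Actions-style OIDC environment variables and a
# PyPI-style mint-token endpoint; other systems differ in the details.

import json
import os
import urllib.parse
import urllib.request

MINT_TOKEN_URL = "https://pypi.org/_/oidc/mint-token"  # registry-specific

def fetch_ci_oidc_token(audience: str = "pypi") -> str:
    # The CI runner exposes a URL and bearer token for requesting a
    # short-lived OIDC identity token describing this workflow run.
    request_url = os.environ["ACTIONS_ID_TOKEN_REQUEST_URL"]
    request_token = os.environ["ACTIONS_ID_TOKEN_REQUEST_TOKEN"]
    url = request_url + "&audience=" + urllib.parse.quote(audience)
    req = urllib.request.Request(url, headers={"Authorization": "Bearer " + request_token})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["value"]

def mint_publish_token(oidc_token: str) -> str:
    # The registry verifies the OIDC token against the project's trusted
    # publisher configuration and returns a short-lived upload token.
    payload = json.dumps({"token": oidc_token}).encode("utf-8")
    req = urllib.request.Request(
        MINT_TOKEN_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["token"]

if __name__ == "__main__":
    publish_token = mint_publish_token(fetch_ci_oidc_token())
    # Hand the short-lived token to the upload tool (for example, twine)
    # instead of a stored password or long-lived API key.
    print("Received a publish token of length", len(publish_token))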

Jack Cable (14:08)
Yep, and just to add on to that, I think it’s super great to see some of the work that is coming out of the working group. We at CISA held a summit on open source software security in March, where as part of that we announced actions that five of the major package repositories, including for Python, JavaScript, Rust, Java and PHP, are taking in line with the Principles for Package Repository Security framework. And we know that this is going to be an ongoing journey for really all of the package repositories, but we’re encouraged to see alignment behind that. And we hope that can be a helpful resource for these package repositories to put together their roadmaps, to make funding requests and so on.

But I do want to talk about kind of one of the broader outcomes that we want to help achieve at CISA, and this is in line with our secure by design initiative, really where we want technology manufacturers to start taking ownership of, for instance, the security outcomes of their customers, because we know that they’re the ones who are best positioned to help drive down the constant stream of cyber attacks that we seem to be seeing.

As part of that, it’s essential that every technology manufacturer who is a consumer of open source software, who integrates that into their products, who profits from that open source software, is a responsible steward of the open source software that they depend upon. That means both having processes to responsibly consume that software. It also means contributing back to those open source packages, whether financially or through developer time.

But what this also entails is making sure that there’s kind of a healthy ecosystem of the infrastructure supporting the open source communities, of which package repositories are really a core part. So I encourage every software manufacturer to think about how they are helping to sustain these package repositories, helping to foster security improvements, because again, we know that many of these are nonprofits. They really do rely on their consumers to help sustain them, not just for security, but for operations more generally. So really we want to see both how we can help spur some of these developments directly, but then also how every company can help contribute to sustaining this.

Zach Steindler (16:50)
Jack, I just wanted to say that we are sort of like maybe dancing around the elephant in the room, which is that a lot of this work is done by volunteers. Occasionally it is funded. I wanted to give a special shout out to Alpha-Omega, which is an associated project of the OpenSSF that has funded some of this work in individual package repositories. There’s also the Sovereign Tech Fund, which is funded by, I think, two different elements in the German government.

But, you know, this work doesn’t happen by itself. And part of the reason why we’re putting together this guidance, why we’re putting together these roadmaps is so that when funding is available, we’re making sure that we are conscious of where we can get the most results from that investment.

CRob (17:32)
Thank you both for your efforts in trying to help lead this, help make this large change across our whole ecosystem. Huge amount of downstream impact these types of efforts are going to have. But let’s move on to the rapid fire section of our interview. (Sound effect: Rapid fire!) I have a couple of fun questions. We’re going to start off easy, spicy or mild food?

Jack Cable (17:55)
Spicy.

Zach Steindler (17:57)
In the area that I live, there’s quite a scale of what spicy to mild means, depending on what kind of restaurant that you’re at. I’d say I tend towards spicy, though.

CRob (18:05)
(Sound effect: Oh, that’s spicy!) That’s awesome. All right. A harder question. Vi or Emacs?

Jack Cable (18:16)
I’m going to say nano — option number three.

CRob (18:20)
(Laughter) Also acceptable.

Zach Steindler (18:24)
CRob, you’re always joking about college football rivalries, and I don’t feel a strong personal investment in my text editor. I do happen to use Vi most of the time.

CRob (18:37)
It is a religion in some parts of the community. So, that was a very diplomatic answer. Thank you. Another equally contentious issue: tabs or spaces?

Jack Cable (18:48)
Spaces all the way, two spaces.

Zach Steindler (18:52)
I’m also on team spaces, but I’ve had to set up my Go formatter and linter to make sure that it gets things just right for the agreed-upon ecosystem answer. That’s the real answer, right? It’s good tools, and everyone can be equally upset at the choices that the linter makes.

CRob (19:09)
That’s phenomenal. (Sound effect: The sauce is the boss!) I want to thank you two for playing along real quickly there. And as we close out, let’s think about, again, continuing on my last question about the future. What advice do either of you have for folks entering the industry today, whether they’re going to be an open source developer or maintainer, they’re into cybersecurity, or they’re just trying to help out? What advice do you have for them?

Jack Cable (19:31)
I can kick that off. I’d say first of all, I think there’s lots of great areas and community projects to get involved with, particularly in the open source space. The beauty of that, of course, is that everything is out there and you can read up on it, you can use it, you can start contributing to it. And specifically from the security perspective, there is a real ability to make a difference because, as Zach was saying, this is primarily volunteers who are doing this, not because they’re going to make a lot of money from it or because they’re going to get a ton of recognition for it necessarily, but because they can make an actual difference.

And we know that this is sorely needed. We know that the security of open source software is only going to become more and more important. And it’s up to all of us really to step in and take matters into our own hands and drive these necessary improvements. So I think you’ll find that people are quite welcoming, that there’s a lot of great areas to get involved, and I’d encourage reading up on what’s going on, seeing what areas appeal to you most, and starting to contribute.

Zach Steindler (20:51)
I have two pieces of maybe contradicting advice, because the two failure modes that I see are people being too afraid to start participating, or being like, I have to be an expert before I start participating, which is absolutely not the case. And then the other failure mode I see is people joining a 10-year-old project and being like, I have all the answers. I know what’s going on. So I think my contradictory advice would be to show up. And when you do show up, listen.

CRob (21:19)
Excellent advice. I think it’s not that big a contradiction. As we close out, do you gentlemen have a call to action? I think I might know part of it.

Zach Steindler (21:28)
Yeah, my call to action would be please go to repos.openssf.org. That is where we publish all of our content. That also links to our GitHub repository where you can then find our past meeting minutes, upcoming meeting information, and our Slack channel in the OpenSSF Slack. Do be aware, I guess, that we’re very much the blue hats, the defenders, here. So sometimes people are like, do you need me to, you know, report more script kiddies uploading malware to NPM? It’s like...

The folks we work with are sort of the folks operating these systems, and so we recognize it’s a small audience. That’s not to say that we don’t want input from the broader public. We absolutely do, but to my point earlier, you know, a lot of these folks have been running these systems for a decade plus. And so do come, but do be cognizant that there’s probably a lot of context that these operators have that you may not have as a user of these systems.

Jack Cable (22:17)
And please do check out the Principles for Package Repository Security framework. It’s on GitHub as well as the website Zach mentioned. We have an open ticket where you can leave feedback, comments, suggestions, changes. We’re very much open to new ideas, hearing how we can make this better, how we can continue iterating and how we can start to foster more adoption.

CRob (22:43)
Excellent. I want to thank Zach and Jack for joining us today, helping secure kind of the engine that most people interact with open source with. So thank you all. I appreciate your time and thanks for joining us on What’s in the SOSS? (Sound effect: That’s saucy!)

Zach Steindler (23:00)
Thanks for having us, CRob. I’m a frequent listener, and it’s an honor to be here.

Jack Cable (23:04)
Thank you, CRob.

Announcer (23:05)
Like what you’re hearing? Be sure to subscribe to What’s in the SOSS? on Spotify, Apple Podcasts, AntennaPod, Pocket Casts or wherever you get your podcasts. There’s lots going on with the OpenSSF and many ways to stay on top of it all! Check out the newsletter for open source news, upcoming events and other happenings. Go to openssf.org/newsletter to subscribe. Connect with us on LinkedIn for the most up-to-date OpenSSF news and insight, and be a part of the OpenSSF community at openssf.org/getinvolved. Thanks for listening, and we’ll talk to you next time on What’s in the SOSS?