
What’s in the SOSS? Podcast #2 – Christoph Kern and the Challenge of Keeping Google Secure


Summary

In this episode, Omkhar talks to Christoph Kern, Principal Software Engineer in Google’s Information Security Engineering organization. Christoph helps to keep Google’s products secure and users safe. His main focus is on developing scalable, principled approaches to software security.

Conversation Highlights

  • 00:42 – Christoph offers a rundown of his duties at Google
  • 01:38 – Google’s general approach to security
  • 03:02 – What Christoph describes as “stubborn vulnerabilities” and how to stop them
  • 06:42 – An overview of Google’s security ecosystem
  • 10:00 – Why memory safety is so important
  • 12:23 – Solving memory safety problems via languages
  • 16:23 – Omkhar’s rapid-fire questions
  • 18:28 – Why Christoph thinks this may be a great time for young professionals to enter the cybersecurity industry

Transcript

Christoph Kern soundbite (00:01)
The White House just put out a memo talking about memory safety and formal methods for security. I would have never believed this a couple of years ago, right? It’s becoming a more important table-stakes. It might be actually a very interesting time to get into this space without having to sort of swim upstream the whole time.

Omkhar Arasaratnam (00:17)
Hi everyone, it’s Omkhar Arasaratnam. I am the general manager of the OpenSSF and the host of the What’s in the SOSS? podcast. With us this week, we have Christoph Kern. Christoph, welcome.

Christoph Kern (00:30)
Thank you, Omkhar, for having me. It’s an honor to be here, and I’m looking forward to this conversation.

Omkhar Arasaratnam (00:34)
It’s a pleasure, Christoph. So, background. Tell us a little bit about where you work and what you do.

Christoph Kern (00:42)
I’m a principal engineer at Google. I’ve been there about 20 years and a bit, so quite a long while. I work in our information security engineering team, which is basically product security. So we look after the security posture of all the services and applications that Google offers to our users and customers. And a lot of that time, I focused on essentially trying to figure out scalable ways of providing security posture across hundreds of applications and to a high degree of assurance at that.

Omkhar Arasaratnam (01:13)
Well, I think if memory serves, we spoke a couple of times when I was at Google, a couple of times after Google. I mean, securing Google full stop, no caveat, no asterisk. That’s a lot of stuff. So what are some of the ways that y’all have thought about securing all the things within Google? I presume you just don’t have a fleet of security engineers that descend upon every project.

Christoph Kern (01:38)
Right, exactly. To make this scale, you really have to think about invariants that you want to hold for every application, and also classes of common defects that you want to avoid having in any of these hundreds of applications. And the traditional way of doing this has been to try to educate developers and to use sort of after-the-fact code reviews and testing and penetration testing. In our experience, this has not actually worked all that well. And we, over the years, sort of realized that we really need to think about the environments in which these applications are being built. And so usually there’s like many applications that are fairly similar, right? Like we have hundreds of web front ends and they have many aspects of their threat model that are actually the same for all of them, right?

Cross-site scripting, for instance, is an issue for every web app, irrespective of whether it’s a photo editor or a banking app or an online email system. And so we can kind of take advantage of this observation to scale the prevention of these types of problems by actually building that into the application framework and the underlying libraries and the entire developer ecosystem, really, that developers use to build applications. And that has turned out to work really quite well.

Omkhar Arasaratnam (02:53)
Now, in the past, you’ve referred to this class of stubborn vulnerabilities. Can you say a little bit more about stubborn vulnerabilities and what makes them so stubborn and hard to eliminate?

Christoph Kern (03:02)
Yeah, there’s a list of vulnerabilities that the folks who maintain the Common Weakness Enumeration, the CWE, put out. So they’ve been putting out the, sort of, top 25 most dangerous vulnerabilities list for years. And recently, they started also making a list of the ones that consistently appear near the top of these lists over many years. And those are then, evidently, classes of problems that are in principle well understood.

We know exactly why they happen and what the, sort of, individual root cause is, and yet it turns out to be extremely difficult to actually get rid of them at scale and consistently. And this is then evidenced in the fact that they just keep reappearing, even though there’s been guidance on how to, in principle, in theory, avoid them for years, right? And it’s well understood what you, in principle, need to do. But applying that consistently is very difficult.

Omkhar Arasaratnam (03:52)
Software engineer to software engineer: what’s the right way of fixing these vulnerabilities? I mean, we’ve thrown WAFs at them, we’ve tried all kinds of input validation techniques. What would you recommend? Like, how does Google stop those?

Christoph Kern (04:06)
I think the systemic root cause for these vulnerabilities being so prevalent is that there is an underlying API that developers use that puts the burden on developers to use it correctly and safely. Essentially, all of these APIs that are in this class of injection vulnerabilities consume a string, a sequence of characters, that is then interpreted in some language. It could be SQL in the case of SQL APIs, leading to SQL injection, or JavaScript embedded in HTML in the case of XSS, right?

And the burden is on developers to make sure that when they write code, they assemble strings that are then passed to one of those APIs, that the way they’re assembled is following secure coding guidelines. In this case, questions of how you would escape or sanitize an untrusted string that’s embedded in HTML markup, for instance. And you have to do this hundreds of times over in a large application because there’s lots of code in a typical web app that assembles smaller strings into HTML markup that is then shipped to a browser and rendered. And it’s extremely difficult to not forget in one of those places or apply the wrong rule or apply it inconsistently. And this is just really, really difficult, right? And this is why those vulnerabilities keep appearing.

Now, to get rid of them, what we found, the only thing that actually works, is to really rethink the design of the API and change it. And so we just went ahead effectively and changed the API so it no longer consumes a string, but rather consumes a specific type that is dedicated to that API and essentially holds the type contract, the promise, that its value is actually safe to use in that context. And then we provide libraries of builders and constructors that are written by experts, by security engineers, that actually follow safe coding rules.

And then as an application developer, you really don’t have the opportunity to incorrectly use that API anymore, because the only way to make a value that will be accepted by the API is to use those expert-built libraries, right? And then effectively the type system of the language just glues everything together. It then also makes sure that when a value is constructed in one module, there’s like some module, maybe even in a backend, that makes HTML markup or a snippet of HTML markup that’s shipped to a browser and then embedded into the DOM in the browser, the type system ties those two places that are otherwise very difficult to understand because they’re very far away. They might be written by different teams. The type system ties those two things together and actually makes sure that the underlying coding rules are actually followed consistently and always.
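The pattern Christoph describes can be sketched in a few lines of Rust. This is an illustrative toy only, not Google’s actual API; the names `SafeHtml`, `from_untrusted` and `render` are invented for the example. The key idea is that the sink consumes a dedicated type rather than a raw string, and the only public constructor upholds the type’s contract:

```rust
/// A value of this type carries the contract that its contents
/// are safe to interpolate into HTML. (Illustrative; not a real API.)
pub struct SafeHtml(String);

impl SafeHtml {
    /// The only public constructor escapes untrusted input,
    /// so every `SafeHtml` value upholds the contract.
    pub fn from_untrusted(input: &str) -> SafeHtml {
        // Escape `&` first so earlier replacements aren't re-escaped.
        let escaped = input
            .replace('&', "&amp;")
            .replace('<', "&lt;")
            .replace('>', "&gt;")
            .replace('"', "&quot;");
        SafeHtml(escaped)
    }

    pub fn as_str(&self) -> &str {
        &self.0
    }
}

/// The sink consumes the type, not a raw `String`, so callers
/// cannot pass unescaped data even by accident.
pub fn render(html: &SafeHtml) -> String {
    format!("<div>{}</div>", html.as_str())
}

fn main() {
    let user_input = "<script>alert(1)</script>";
    let safe = SafeHtml::from_untrusted(user_input);
    println!("{}", render(&safe));
    // Passing the raw string directly, render(user_input),
    // would not compile: type mismatch.
}
```

Because `render` only accepts a `SafeHtml`, the type checker, rather than a code reviewer, ties the place where the value is constructed to the place where it is used, however far apart they are.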

Omkhar Arasaratnam (06:42)
Other than SQL injection and cross-site scripting, can you provide any other practical examples or maybe just to reflect back on how this has shown up in the security properties of Google products? Has this been broadly adopted by Google developers? Has there been some resistance? Can you talk a little bit about that from a developer experience perspective?

Christoph Kern (07:06)
The way Google’s developer ecosystem evolved for different reasons, really for productivity and quality reasons, the design of that ecosystem actually helped us greatly, right? So Google has this single monorepo where all the common infrastructure, including compilers and toolchains and libraries, are provided by central teams to all the application developers. And there’s really no practical way for somebody to build code without using that. It would be just very expensive and outlandish to even think of. And so if we build these things into those centrally-provided components, and we do it in a way that doesn’t cause undue friction, most people just don’t even notice.

They’ll just use a different API to make strings that get sent to a SQL query, and it just works, right? If it doesn’t work, then they’ll read the document and say, “Oh, this API wants a trusted SQL string instead of a string, so I’ll have to figure out how to make that and here’s the docs.” And once they figure this out once, they’re on their way. And so we’ve actually seen fairly little resistance to that. And of course, we’ve designed it so that it’s easy to use, right, otherwise we would see complaints.

One interesting thing we’ve done, I think, that actually in hindsight helped a lot is that we’ve chosen to make the maintainers and developers of these APIs, the security engineers, the first-line customer support, so to speak, for developers using them. So we have this internal, sort of, equivalent of Stack Overflow, where people can ask questions. And our team actually monitors the questions about these APIs. And that inherently requires us, so we don’t get drowned in questions or problems, to design them and iterate on them on an ongoing basis to make them easier to use. So that in almost all use cases, developers can just be on their way by themselves without needing any help. And so that’s really helped to sort of tune these APIs and figure out the corner cases in their usability and make them both easy to use and secure at the same time.

Omkhar Arasaratnam (09:00)
That’s a wonderful overview. And just to summarize, by baking these kind of protections right into the tooling that the developers use, they don’t have to waste mental effort on trying to figure out how to sanitize a string. It’s already there. It’s already present. If you have a new developer coming in from the outside who maybe doesn’t have experience with using these trusted types, the actual API that they would call won’t accept a raw string. So they’re forced into it.

And I guess the counterbalance to ensure that you have a usable API for your tens of thousands of developers within Google is that essentially the people that write this also have to support it. So it’s in their best interest to make it as friction-free as possible for the average developer inside of Google. I think that’s, that’s excellent.

We’re going to switch gears. Google recently published a paper on memory safety of which you were one of the co-authors. So let’s talk about memory safety a little bit. Can you explain to the listeners why it is important?

Christoph Kern (10:00)
Yes, I think memory safety essentially, or the memory safety problem, is essentially an instance of this sort of problem we just talked about, right? It is due to the design, in this case, of a programming language or programming languages that have language primitives that are inherently unsafe to use, where the burden is on the developer to make sure that the surrounding code ensures the safety precondition for that primitive. So for instance, if you’re dereferencing a pointer in C or C++, it’s your responsibility as a programmer to be sure that anytime execution gets to that point, that pointer still points to validly allocated memory and it hasn’t been deallocated by some other part of the code before you got here, right?

And if that’s not the case, it leads to a temporal safety violation, because you have, for instance, a use-after-free vulnerability. Similarly, when you’re indexing into an array, it’s your responsibility to make sure that the index is in bounds and you’re not reading off the end of the array or before the beginning. Otherwise, you have a spatial safety issue.

And I think what makes memory safety particularly stubborn is that the density of uses of potentially unsafe APIs or language features is orders of magnitude higher than for some of these other vulnerability classes. So if you look at SQL injection, in a large program you might have maybe tens of places where the code is making a SQL query, versus in a large C or C++ program, you’ll have, you know, thousands or tens of thousands of places that dereference pointers. Like, every other line of code literally is a potential safety violation, right? And so with that density of potential mistakes, there will be mistakes.

There’s absolutely no way around it. And that sort of is borne out by experience in that code that is written in an unsafe language tends to have its vulnerabilities be memory safety vulnerabilities.
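The spatial-safety case Christoph describes can be made concrete in a memory-safe language. A minimal Rust sketch (illustrative only): an out-of-bounds access is either surfaced as a `None` or stops the program with a bounds error, but it never silently reads adjacent memory the way an unchecked C array access can.

```rust
fn main() {
    let data = vec![10, 20, 30];

    // Checked access: an out-of-bounds index yields None
    // instead of reading past the allocation.
    assert_eq!(data.get(1), Some(&20));
    assert_eq!(data.get(99), None);

    // Plain indexing is also checked: `data[99]` would panic
    // with a bounds error at runtime, never read bad memory.
    println!("second element: {}", data[1]);
}
```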

Omkhar Arasaratnam (11:50)
Many languages nowadays, be it Python, JavaScript or, for lower-level software development, Golang or Rust, all proclaim memory safety properties and are often referred to in absolutes, as in solving an entire class of problem. I think you and I have been around software engineering for long enough that such bold claims are often met with a bit of cynicism. Can you talk about how these languages are actually solving these entire classes of memory safety problems?

Christoph Kern (12:23)
Yes, I think the key to that is that if you use them to good effect, you can design your overall program to enable modular reasoning about the safety of the whole thing, and in particular, design the small fragments of your code that do need to use unsafe features. So in Rust, you might need a module that uses unsafe. For instance, if you want to implement a doubly linked list, you need unsafe, right? Or you need reference counting.

But what you can do is write this one module that uses potentially unsafe features so that it is self-contained and its correctness and safety can be reasoned about without having to think about the rest of the program. So basically when you write this linked list implementation, for instance, in Rust, you will write it in a way such that the assumptions it needs to make about the rest of the program are entirely captured in the type signatures of its API.

And you can then validate, by really thinking about it hard, but it’s a small piece of code, and you might get your experts in unsafe Rust to look at it, that that module will behave safely for any well-typed program it is embedded in, right? And once you are in that kind of a place, then you will actually get a very high assurance of safety of the whole program, just out of the fact that it type checks, right?

Because the components that use unsafe features are safe for any well-typed caller, and the rest of it is inherently safe due to the design of the language, there really is very little opportunity for a potential mistake. And that’s, I think, again, borne out of practice in that, like, in Java or JVM-based languages, memory safety really is a very rare problem. We’ve had some buffer overflows in, I don’t know, like, image parsers that use native code and stuff like that.

But it’s otherwise a relatively rare problem compared to the density of this type of bug in code that’s written in a language where unsafety is basically everywhere across the entire code base.
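The encapsulation pattern Christoph outlines can be sketched in a few lines. This is a toy illustration, not a real collection; `fixed_slot` and `FixedSlot` are invented names. The point is that the `unsafe` block is confined to one small module whose safety argument is local, while the public API is safe for any well-typed caller:

```rust
/// Toy illustration of confining `unsafe` behind a safe API.
/// (`fixed_slot` / `FixedSlot` are invented for this sketch.)
mod fixed_slot {
    pub struct FixedSlot {
        buf: [u8; 8],
        len: usize,
    }

    impl FixedSlot {
        pub fn new() -> Self {
            FixedSlot { buf: [0; 8], len: 0 }
        }

        /// Safe wrapper: refuses to overflow instead of
        /// writing past the buffer.
        pub fn push(&mut self, byte: u8) -> bool {
            if self.len >= self.buf.len() {
                return false;
            }
            // SAFETY: `self.len < 8` was just checked, so this
            // raw-pointer write stays inside `buf`. The safety
            // argument is entirely local to this module.
            unsafe {
                *self.buf.as_mut_ptr().add(self.len) = byte;
            }
            self.len += 1;
            true
        }

        pub fn as_bytes(&self) -> &[u8] {
            &self.buf[..self.len]
        }
    }
}

fn main() {
    let mut slot = fixed_slot::FixedSlot::new();
    assert!(slot.push(42));
    println!("{:?}", slot.as_bytes());
}
```

Callers outside the module can only go through `push` and `as_bytes`, so the whole program’s safety follows from type checking plus the local audit of this one module, which is exactly the modular-reasoning property described above.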

Omkhar Arasaratnam (14:23)
So, I mean, the obvious thing seems to be, OK, let’s wave our magic wands and rewrite everything in a memory-safe language. Obviously, things aren’t that simple. So what are the challenges with simply shifting languages, and how do you address large legacy code bases?

Christoph Kern (14:39)
Unless there is some breakthrough in, like, ML-based automated rewriting, I think we have to live with the assumption that the vast majority of C++ code that is in existence now will keep running as C++ until it reaches its natural end of life. And so we have to really think about, as we make this transition to memory safety, probably over a span of, like, decades, really, where do we put our energy to get the best benefit for our investments in terms of increased security posture?

And so I think there’s a couple of areas we can look at, right? So for instance, there are some types of code that are particularly risky, and it’s most likely very valuable to focus on those and replace them with a memory-safe implementation. So we might replace an image parser that’s written in C or C++ with one that’s written in Rust, for instance.

And then beyond that, if we have a large C++ code base that we can’t feasibly rewrite and can’t just stop using because we need it, we’ll have to look at incremental ways we can improve its security posture. And there are some interesting approaches. For instance, I think we are somewhat confident that it’s possible to achieve a reasonable assurance of spatial safety in C++ through approaches like safe buffers, by basically adding runtime checks.

For temporal safety, it’s much more difficult. There are some ideas, you know, there’s like some work in Chrome for instance, using these wrapper types for pointers called MiraclePtr. There might be some hardware mechanisms like MTE. And there’s a lot of trade-offs between cost and performance impact and achievable security improvement that will really probably take some time to shake out. But you know, we’ll get there at some point.

Omkhar Arasaratnam (16:23)
I’m glad to hear that the problem is at least tractable now. Moving over to the next part of our podcast, Christoph, we’re gonna go through a series of rapid-fire questions. Some of these will have one or two options, but the last option is always, “No, Omkhar. Actually, I think it’s this.” So we’re gonna start off with one that’s quite binary, which is spicy or mild food?

Christoph Kern (16:45)
I don’t think it’s actually that binary. In the winter, when it’s cold out, I tend to gravitate to more, like, sort of, you know, savory German-type cooking. That’s my cultural background. And then in the summer, I’m leaning more towards the, like, zesty, more spicy flavors, you know. So it maybe varies throughout the year.

Omkhar Arasaratnam (17:02)
Interesting. For me, I tend to gravitate to spicy as a default, but then when the weather gets cooler, I find that spicy is an even higher priority for me, as it helps me to feel a bit warm. OK, the next one’s a bit controversial based on some previous guests: VI, VS Code, Emacs?

Christoph Kern (17:22)
For me, it really depends on what I’m working on. I’ll use whatever code editor is best supported for the language. So for, like, say, Rust, it might be VS Code, but then in Google we have our own thing that’s supported by a central team. My muscle memory is definitely VI key bindings, but at the age of, like, 45 or something, I decided to finally learn Emacs so I could use org mode, though I do use it with VI key bindings, so.

Omkhar Arasaratnam (17:48)
Excellent. Tabs or spaces?

Christoph Kern (17:51)
You know, I haven’t thought about that in a long time. I think many years ago, the language platform teams at Google basically decided that all code needs to be automatically formatted. And so basically, you do whatever the thing does and what’s built into the editors. And it never really occurs even as a question anymore.

Omkhar Arasaratnam (18:09)
Makes it easier. One less thing to worry about. To close it out, what advice do you have for somebody entering our field today, somebody that’s just graduating with their undergrad in comp sci or maybe just transitioning into an engineering field from another field that’s interested in tackling this problem of security?

Christoph Kern (18:28)
You know, maybe it’s actually a particularly good time to get into this field. I think I’ve been very fortunate to have worked in an organization that really does make security a priority. And so usually when you approach somebody you want to work with on improving security posture of a product, it’s rarely a question of whether or not this should be done at all.

You don’t have to justify your existence, right? It’s really usually questions about the engineering of exactly how to do it. And that’s a very nice place to be, right? You’re not constantly arguing to even be there, right? At the table. And I think maybe I’m a little hopeful that this is now changing for other organizations where that’s not so obvious, right? Like, you hear a lot more talk about security and security design.

I mean, the White House just put out a memo talking about memory safety and formal methods for security. It was like, I would have never believed this if you’d told me this a couple of years ago, right? So I think it’s becoming a more important and sort of obvious table-stakes part of the conversation. And so it might be actually a very interesting time to get into this space without having to sort of swim upstream the whole time.

Omkhar Arasaratnam (19:34)
We’ve talked about stubborn vulnerabilities, safe coding, memory safety. What is your call to action for our listeners, having absorbed all this new information?

Christoph Kern (19:44)
I think it is well past time to no longer put the burden on developers and to really view these problems as systemic outcomes of the design of the application frameworks and production environments, the entire developer ecosystem. Like in the article we recently published, we kind of put this as: the security posture is an emergent property of this entire developer ecosystem, and you really can’t change the outcome by not focusing on that and only blaming developers. It’s not going to work.

Omkhar Arasaratnam (20:18)
Christoph, thank you for joining us, and it was a pleasure to have you. Look forward to speaking to you again soon.

Christoph Kern (20:23)
Thank you, yeah, it was a pleasure to be here.

Announcer (20:25)
Thank you for listening to What’s in the SOSS? An OpenSSF podcast. Be sure to subscribe to our series of conversations on Spotify, Apple, Amazon or wherever you get your podcasts. And to keep up to date on the Open Source Security Foundation community, join us online at OpenSSF.org/getinvolved. We’ll talk to you next time on What’s in the SOSS?

What’s in the SOSS? Podcast #1 – Vincent Danen and the Art of Vulnerability Management


Summary

In this episode, Omkhar talks to Vincent Danen, Vice President of Product Security at Red Hat, responsible for security and compliance activities for all Red Hat products and services. He’s also on the Governing Board of the OpenSSF. Vincent has been involved with open source and software security for over 20 years, leading security teams and participating in open source communities and development.

Conversation Highlights

  • 00:39 – Vincent shares his background in security and responsibilities at Red Hat
  • 03:36 – The importance of maintaining a sense of calm during security incidents
  • 05:18 – Omkhar and Vincent discuss their experiences learning about the infamous Heartbleed Bug
  • 09:05 – Vincent offers advice on how to address vulnerability management and the importance of trusting your vendors
  • 11:34 – Not every threat or vulnerability requires swift and immediate action
  • 12:46 – Pitfalls organizations should avoid in vulnerability management
  • 15:40 – Vincent answers Omkhar’s “rapid-fire” questions: mild vs. spicy food, text editor of choice and tabs vs. spaces
  • 16:32 – Advice Vincent would give to aspiring security professionals and the importance of being open-minded

Transcript

Vincent Danen soundbite (00:01)
I want somebody to come out and create a bug scanner. Go tell me all the bugs that are in the software that I have. Not the security issues but the bugs. Because that list is gonna be way longer. And I guarantee you that some of those bugs are far more impactful for you as a user than some of these security issues.

Omkhar Arasaratnam (00:17)
Welcome to What’s in the SOSS? I’m your host Omkhar Arasaratnam, and with me this week we have fellow Canadian Vincent Danen. Vincent, how are you doing, my friend?

Vincent Danen (00:28)
Good, Omkhar. How are you?

Omkhar Arasaratnam (00:29)
I’m doing just dandy. So for our audience, I would love to do a quick intro. Why don’t you give them your name, title and what you do?

Vincent Danen (00:39)
Sure, so Vincent Danen, Vice President of Product Security at Red Hat. I just actually celebrated 15 years at Red Hat a month ago.

Omkhar Arasaratnam (00:47)
Congratulations.

Vincent Danen (00:49)
Thank you. Prior to that, I was at Mandriva for those long-time listeners who know the history of Linux. I was doing security work for them for about eight years. So I’ve been knee-deep in open source security for over 20 years now, and it just makes me feel old.

Omkhar Arasaratnam (01:04)
You’re an O.G., as the kids say, and let me drop some street cred: you know, I used to be a Red Hat certified engineer on Red Hat 7.2. And I didn’t say RHEL, I said Red Hat 7.2 because I’m an old guy, too.

Vincent Danen (01:20)
Yeah, well you got some street cred for sure.

Omkhar Arasaratnam (01:23)
That’s a really cool title. Sounds incredibly important. Can you give our listeners a bit of an overview as to, you know, being the person in charge of product security? What does that mean at Red Hat?

Vincent Danen (01:35)
Yeah, I mean, product security at Red Hat, well, that name kind of gives it away, right? It is about the security of our products. Our remit is effectively all of the proactive and reactive security concerns around our portfolio of products. So if you think about it, you’d mentioned RHEL, that’s one. OpenShift, Ansible, Middleware, EAP, a ton of products. And of course, we like to support these things for a very long time, so multiple versions of the same product. So effectively, my team ingests a lot of vulnerability information. New CVEs are discovered, and they’re either reported directly to us, under embargo or not.

We get information from CVE, other reporters. You’re familiar with the Linux distros mailing list, so we get information that way as well. So we’re kind of ingesting all of these vulnerabilities. We triage them and determine whether and how they affect our products. And then we kind of go through the whole process of rating the vulnerability in terms of its severity and how it impacts the products.

And then kind of just follow that through with engineering, who are going to fix these things and test them, release them out to our customers. We provide a ton of information about CVEs because customers really like to know, “What does this thing do, and should I be sweating, or is this okay?” We also focus a lot on, say, our internal build pipelines, how we curate the open source, how we interact with upstream. We do a lot on the compliance front as well. So it’s like a very robust view of security, kind of from end to end, for all of our products.

Omkhar Arasaratnam (03:14)
That sounds like an incredibly broad scope. And at some point, you have to tell the listeners when you have time to sleep. It sounds like you’re on all the time, like most of us are in cybersecurity.

Vincent Danen (03:25)
Yes, although I do sleep, and actually sleep pretty well. One of the benefits of having a fantastic team to work with. So I don’t have to worry about everything; I have a great team, and they do a lot of the heavy lifting.

Omkhar Arasaratnam (03:36)
That’s wonderful to hear. And I certainly get that. Back in the day, when I first started in cybersecurity, incident response was one of the things that I had. And an old manager of mine often said, whenever we have to deal with an incident, there should be a sense of urgency, but it shouldn’t be panic. And what I’m hearing from you is you’ve got a team that’s really set up to handle that sense of urgency properly without the panic that could be a negative force.

Vincent Danen (04:03)
It’s actually interesting that you mentioned that, because one of our goals relates to that, particularly with a lot of these named vulnerabilities. Those have been a phenomenon for at least the last dozen years; Heartbleed actually just celebrated its 10-year anniversary, I think it was earlier this week or last week.

Omkhar Arasaratnam (04:19)
Yeah, I didn’t get a cake, but I remember.

Vincent Danen (04:22)
I didn’t get a cake either, but I do remember when it happened. I was hip-deep in that as well. But one of our goals is to maybe quell that sense of panic that our customers or other people in the industry have. So we really try to take a look at these vulnerabilities from the perspective of what does it actually do and do I need to be worried? And then convey that information as clearly and concisely as possible to our customers so that we’re not seeing undue panic.

I mean, there are certain things we should absolutely be panicking about, right? Like, these are things where if we produce a patch, I mean, we want you to apply it as quickly as possible. There is that sense of urgency. But when we’re looking and analyzing these things, I kind of think of it more akin to a firefighter. If you’re in the middle of a blaze trying to put that fire out and you’re panicking, you’re not going to be very effective, right? So we want to be as kind of calm, cool, collected, measured, as clear as possible.

Omkhar Arasaratnam (05:18)
The analog that I use often to describe that same concept: A neighbor of mine is a paramedic, and one of the things he pointed out to me was you’ll notice that paramedics never run at an accident scene. And it’s not that, I mean, they certainly move with urgency, but they don’t run, because they don’t want to cause more harm by rushing into the proverbial scene instead of acting in a stoic and measured manner. Of course, we see that on TV all the time, but you know, TV is not reality.

I do want to come back to the Heartbleed thing for just a moment. It’s said that when you look back on your life, there are certain key moments that everybody remembers. And for those that were maybe the generation prior to us, it was the JFK assassination. For our generation to betray our age to the viewers — or the listeners — it was probably the Challenger explosion. It was probably, you know, 9/11.

I have that indelible kind of memory of Heartbleed, and the reason I have that indelible memory is I have very poor discipline when it comes to turning off work. And I wish I had better discipline. My wife also wishes I had better discipline. But ten years ago, I had promised my wife and kids we were going to go to Hawaii for the first time. We were in Maui, and this was still back in the days when everybody had a Blackberry. I left my Blackberry at home, turned off, and we were on the beach in Maui. And I came back in and turned on the TV, and I was like, “Oh boy, what a day to be disconnected from work.” What was your experience?

Vincent Danen (07:04)
First, I’ll say you were one of the lucky ones to be disconnected from it.

Omkhar Arasaratnam (07:07)
By total coincidence.

Vincent Danen (07:10)
Yeah, yeah. No, the things that stick out for me the most, there's two. One is that our Red Hat Summit was about a month later, and all anyone wanted to talk about was Heartbleed. And that's not what I was there for, right? So that was interesting, and that kind of sticks in my head. The other one was I actually remember my mother phoning me, and she's completely, sorry Mom if you hear this, completely computer illiterate. Right? I have to go to her house to help her fix the remote because she did something to the TV, and it's literally one button, right? But she phones me, and she's like, "Hey, I heard about this computer thing on the radio."

And I was like, "What are you even talking about?" A, you picked up on this, and you knew it was somehow relevant to me, which was shocking. And then secondly, it was on the radio. Up to that point, I had never heard, like on the local news, I had never heard of a security issue in software ever getting that kind of airplay. This thing was really noisy.

Omkhar Arasaratnam (08:05)
My family's non-technical analog is my dad, and again, apologies to my dad if he hears this. Like, my dad likes to send me articles about, you know, the latest scam that's out there, you know, don't get title-scammed out of your house and stuff like that, and the odd meme, and not much else. But when Dad starts sending me stuff that's like, "Hey, do you know about this?" Yeah, then it really puts things in perspective as to how this affects society.

So I think the conclusion I'm drawing from all this is that vulnerability management is hard to do properly. And being able to filter signal from noise and get down to something that's actually actionable shouldn't be based on whether Vincent's mom hears it on the radio or Omkhar's dad finds it on a news website. What are some key considerations for our listeners? What should they be thinking about when they start thinking about vulnerability management?

Vincent Danen (09:05)
That’s a great question because it’s something I think about a lot. I actually talk about it a lot as well. The caveat here being I work for Red Hat, and so this is my day job, right? And so I deal with a lot of customers who have a lot of questions, particularly about this topic, right? So the first thing that I would say is you have to know your vendor before you pick them. There’s a fundamental trust factor that comes into play with your vendor. And I’m not even talking just from a security perspective, right? Like, you have to be able to trust the software that you’re using or the vendor who puts it out, right? And there’s a couple of reasons for that.

Vendors typically will assess a vulnerability themselves, right? I know we have things like NVD and OSV and, like, other kinds of CVE aggregation systems, but a vendor typically rates the severity of a vulnerability in terms of their product. I've heard in the past, I haven't heard it recently, but somebody actually accused me of lowballing a vulnerability because I didn't want to have to fix it. I was like, well, that's really weird. You know, like, you trust me to run your workloads, to do all this work that you're doing, to build value in your business, right? To run your platforms and whatnot. But you're not going to trust me when I say that this vulnerability doesn't matter for these particular reasons, right? Which is a little weird. You trust me for one thing, but you don't trust me for the other.

Omkhar Arasaratnam (10:20)
It is strange.

Vincent Danen (10:22)
So, I mean, there is a trust relationship with your vendor, and I think that extends to when they say something is impactful or not; you have to kind of believe that, right? And it's really important, because I was looking at, I think it was a GRUB vulnerability, a couple months ago.

Omkhar Arasaratnam (10:39)
The bootloader?

Vincent Danen (10:41)
Yeah, the bootloader. And when I was looking at the CVSS ratings for that GRUB vulnerability, we had it rated one way. I think Debian had it rated a different way. SUSE rated it the same. F5 rated it really low, right? In the context of their environment and how accessible it is in their devices. Right? So, I mean, they rated it in the context of the way that they use it and kind of the environment around it. And that's typically what vendors do. So I wouldn't sit there and say, "Yeah, go look at how Debian rates stuff, and that's exactly how it works for Red Hat." Because it's not true.

Omkhar Arasaratnam (11:16)
And presumably there may be some, I mean, it could be mitigations in your build chain that you include. It could be, to your point, is this an appliance? Is this a materially accessible vulnerability remotely, or something of that nature, based on your usage?

Vincent Danen (11:34)
A hundred percent. RHEL being an operating system, you can do whatever you want with it; we don't know. OpenShift is more of an appliance platform, and it's built a very specific way, and there's a limited amount that you can do with it, right? In terms of how you're messing around with the different components. The same component might be present in both. In RHEL, I can use it however I want. I can use it as part of my own application, I can use it on the system, whatever. In OpenShift, that might be one very specific piece of plumbing with one very specific use, where either the vulnerable code isn't being used, or there's literally no way for a user or an attacker to access it. So the fact that the vulnerability is there, I mean, OK, yes, technically it's there, but in any possible use of OpenShift, it's not going to be material. You'd have to break OpenShift really, really badly in order to even access it, and then you've got bigger problems.

Omkhar Arasaratnam (12:30)
Absolutely. So the notion of reachability or exploitability is obviously key and a huge part of how people should be triaging these vulnerabilities as they do come up. What are some other pitfalls that people should avoid in vulnerability management?

Vincent Danen (12:46)
Well, I think one of them is just the notion that, you know, as we were discussing here, that every vulnerability matters, right? Most of them don't. So I kind of look at it as: don't sweat the fact that your scanner is turning up a bunch of low or medium or moderate vulnerabilities. That's probably fine, right?

I would worry more about the critical and important, or high, vulnerabilities that it's showing, because those are the ones that are more likely to be exploited and more likely to be damaging if they are. Interestingly enough, Red Hat produces a risk report on an annual basis. Last year, out of the, what is it, about 1,600 vulnerabilities that impacted us, only 1.2% were actually known to be exploited. The prior year was at 0.4%. Now, the majority of those are in those critical and important vulnerabilities. And there was a handful in the moderate levels, like, I think, three.

So think about it: about a thousand moderates, and two of them are exploited. Like, why are we panicking over the other 998 that are effectively immaterial and not actually being used? Now, a little plug for Red Hat here: when we find out that something is being exploited, that kind of raises it to our level of, "OK, this is actually an issue." And if we hadn't fixed it already, we're going to fix it. So we'll always proactively do the criticals and the importants, because it could be any one of those that could be exploited and cause damage.

But we’re not worrying about all of them because, I mean, frankly, I actually had this thought the other day. Because I hear a lot about these vulnerability scanners, right? And they’re very noisy. Sometimes they’re not very accurate and they show a lot of things. I want somebody to come out and create a bug scanner. Go tell me all the bugs that are in the software that I have. Like not the security issues, but the bugs. Because that list is gonna be way longer. And I guarantee you that some of those bugs are far more impactful for you as a user than some of these security issues, particularly the low vulnerabilities.

Omkhar Arasaratnam (14:44)
Absolutely. I mean, security properties of a program are essentially an aspect of quality, and looking at them holistically in terms of all quality issues is an interesting view. One of the ways I've described this in the past is: security is this infinite problem space, and if you don't have a way of reasoning over what's actually important, you're going to be chasing down rabbit holes forever and a day. And some of the work that we're doing in the Security Toolbelt group within the OpenSSF is around doing these kinds of threat modeling and risk assessments to really pick up on, "Look, OK, in the fullness of time, we should address all the things, but what do I need to address now? And how do I need to address it?"

Vincent with all that said, I think we’re going to jump into the rapid-fire round. Are you ready?

Vincent Danen (15:39)
Absolutely.

Omkhar Arasaratnam (15:40)
All right. Spicy or mild food?

Vincent Danen (15:44)
Mild. Although my mother likes spicy food, and I think that turned me off as a youngster. I’m starting to get back into handling a little bit of heat.

Omkhar Arasaratnam (15:51)
I’d like to be the Sherpa on your journey.

Vincent Danen (15:54)
Thank you.

Omkhar Arasaratnam (15:55)
Text editor of choice: Vim, VS Code, Emacs or other. That’s an option as well.

Vincent Danen (16:01)
Vim.

Omkhar Arasaratnam (16:03)
Yes! All right. You know, you, you slipped on the spicy food. You redeemed yourself on the text editor. This next one is incredibly influential: tabs or spaces?

Vincent Danen (16:14)
I’m a spaces guy.

Omkhar Arasaratnam (16:15)
Yes! All right. We, we will continue to be good friends, Vincent.

Vincent Danen (16:20)
Awesome!

Omkhar Arasaratnam (16:21)
In closing out, thank you so much for all your great advice, but for somebody that’s entering our field today, what would you tell them? What sage wisdom would you impart?

Vincent Danen (16:32)
Probably two things. One, as you and I are both aware, I’ve been here for a long time. It’s very easy to be burnt out and stressed out and everything else by this work. Not to take anything away from the fantastic firefighters and paramedics and everything else, but it feels a lot like first responder-type work. So I say, take care of yourself first. If you don’t take care of yourself, you’re no good to anybody else. And we’re here to be good to other people, right?

And then the other part I would say that I think is actually really, really important is for people to stay curious. Right? If we think about this XZ Backdoor that we just had recently, it was curiosity that found it. I mean, at the end of the day, that’s what it was. This thing is a little bit weird, and I don’t understand it, so I’m gonna go digging. We have to be curious. I don’t really care how you build it, I wanna know how you break it. Right? And I think that’s a very important mindset for security people, so being curious is super important.

Omkhar Arasaratnam (17:25)
That’s some great advice. Last but not least, what’s your call to action for our audience?

Vincent Danen (17:32)
Be open-minded. Find a good, reputable vendor to enable you on your — I hate the term digital transformation — but your digital transformation journey, right? Find a reputable vendor to work with there, and then trust them, right? There's a lot of great software vendors out there, a lot of great open source communities, projects, et cetera, who are earnestly doing the right thing for those around them. And I think that should inspire trust, and has earned it. And we have to trust the people we work with.

Omkhar Arasaratnam (18:02)
Vincent, thank you so much for being generous with your time. Be safe, and thank you so much for coming on What’s in the SOSS?

Vincent Danen (18:10)
Thanks, Omkhar.

Announcer (18:11)
Thank you for listening to What’s in the SOSS? An OpenSSF podcast. Be sure to subscribe to our series of conversations on Spotify, Apple, Amazon or wherever you get your podcasts. And to keep up to date on the Open Source Security Foundation community, join us online at OpenSSF.org/getinvolved. We’ll talk to you next time on What’s in the SOSS?

OpenSSF Newsletter: January 2024

By Newsletter

Open Source Security Foundation (OpenSSF) – Who We Are

The OpenSSF is a diverse global community dedicated to making the world a better place through open source software. Join us in enhancing the security of open source, and together, let’s create a safer world. Check out our new video!

OpenSSF Election Results for Technical Advisory Council and Representatives to the Governing Board

We are thrilled to kick off 2024 by announcing the OpenSSF representatives to the Governing Board and the establishment of a new and expanded Technical Advisory Council elected by the community. Congratulations, and we look forward to a great year ahead!


Open Source Security Foundation Announces Education Courses and Participation Initiatives to Advance its Commitment to Securing the World’s Software Infrastructure

By Press Release

Free training opportunities, new member investments, consolidation with Core Infrastructure Initiative and new opportunities for anyone to contribute accelerate work on open source security

 

SAN FRANCISCO, Calif., Oct. 29, 2020 – OpenSSF, a cross-industry collaboration to secure the open source ecosystem, today announced free training for developing secure software, a new OpenSSF professional certificate program called Secure Software Development Fundamentals, and additional program and technical initiatives. It is also announcing new contributors to the Foundation and newly elected advisory council and governing board members.

Open source software has become pervasive across industries, and ensuring its security is of primary importance. The OpenSSF, hosted at the Linux Foundation, provides a structured forum for a collaborative, cross-industry effort. The foundation is committed to working both upstream and with existing communities to advance open source security for all.

Open Source Security Training and Education

OpenSSF has developed a set of three free courses on how to develop secure software on the non-profit edX learning platform. These courses are intended for software developers (including DevOps professionals, software engineers, and web application developers) and others interested in learning how to develop secure software. The courses are specifically designed to teach professionals how to develop secure software while reducing damage and increasing the speed of the response when a vulnerability is found.

The OpenSSF training program includes a Professional Certificate program, Secure Software Development Fundamentals, which can allow individuals to demonstrate they’ve mastered this material. Public enrollment for the courses and certificate is open now. Course content and the Professional Certificate program tests will become available on November 5.

“The OpenSSF has already demonstrated incredible momentum which underscores the increasing priorities placed on open source security,” said Mike Dolan, Senior VP and GM of Projects at The Linux Foundation. “We’re excited to offer the Secure Software Development Fundamentals professional certificate program to support an informed talent pool about open source security best practices.”

New Member Investments

Seventeen new contributors have joined as members of OpenSSF since earlier this year: Arduino; AuriStor; Canonical; Debricked; Facebook; Huawei Technologies; iExec Blockchain Tech; Laboratory for Innovation Science at Harvard (LISH); Open Source Technology Improvement Fund; Polyverse Corporation; Renesas; Samsung; Spectral; SUSE; Tencent; Uber; and WhiteSource. For more information on founding and new members, please visit: https://openssf.org/about/members/

Core Infrastructure Initiative Projects Integrate with OpenSSF

The OpenSSF is also bringing together existing projects from the Core Infrastructure Initiative (CII), including the CII Census (a quantitative analysis to identify critical OSS projects) and CII FOSS Contributor Survey (a quantitative survey of FOSS developers). Both will become part of the OpenSSF Securing Critical Projects working group. These two efforts will continue to be implemented by the Laboratory for Innovation Science at Harvard (LISH). The CII Best Practices badge project is also being transitioned into the OpenSSF.

OpenSSF Leadership

The OpenSSF has elected Kay Williams from Microsoft as Governing Board Chair. Newly elected Governing Board members include:

  • Jeffrey Eric Altman, AuriStor, Inc.;
  • Lech Sandecki, Canonical;
  • Anand Pashupathy, Intel Corporation; and
  • Dan Lorenc, Google, as Technical Advisory Council (TAC) representative.

An election for a Security Community Individual Representative to the Governing Board is currently underway and results will be announced by OpenSSF in November. Ryan Haning from Microsoft has been elected Chair of the Technical Advisory Council (TAC).

There will be an OpenSSF Town Hall on Monday, November 9, 2020, from 10:00 a.m. to 12:00 p.m. PT, to share updates and celebrate accomplishments during the first three months of the project. Attendees will hear from our Governing Board, Technical Advisory Council and Working Group leads, have an opportunity for Q&A and learn more about how to get involved in the project. Register here.

Membership is not required to participate in the OpenSSF. For more information and to learn how to get involved, including information about participating in working groups and advisory forums, please visit https://openssf.org/getinvolved.

 

New Member Comments

Arduino

“As an open-source company, Arduino always considered security as a top priority for us and for our community,” said Massimo Banzi, Arduino co-founder. “We are excited to join the Open Source Security Foundation and we look forward to collaborating with other members to improve the security of any open-source ecosystem.”

AuriStor

“One of the strengths of the open protocols and open source software ecosystems is the extensive reuse of code and APIs, which expands the spread of security vulnerabilities across software product boundaries. Tracking the impacted downstream software projects is a time-consuming and expensive process, often reaching into the tens of thousands of U.S. dollars. In Pixar’s Ratatouille, Auguste Gusteau was famous for his belief that ‘anyone can cook.’ The same is true for software: ‘anyone can code.’ But the vast majority of software developers have neither the resources nor the incentives to prioritize security-first development practices, nor to trace and notify impacted downstream projects. AuriStor joins the OpenSSF to voice the importance of providing resources to the independent developers responsible for so many critical software components.” – Jeffrey Altman, Founder and CEO of AuriStor.

Canonical Group

“It is our collective responsibility to constantly improve the security of the open source ecosystem, and we’re excited to join the Open Source Security Foundation,” said Lech Sandecki, Security Product Manager at Canonical. “As publishers of Ubuntu, the most popular Linux distribution, we deliver up to 10 years of security maintenance to millions of Ubuntu users worldwide. By sharing our knowledge and experience with the OpenSSF community, together we can make the whole open source ecosystem more secure.”

Debricked

“The essence of open source is collaboration, and we strongly believe that the OpenSSF initiative will improve open source security at large. With all of the members bringing something different to the table, we can create a diverse community where knowledge, experience and best practices can help shape this space for the better. Debricked has a strong background in research and extensive insight into tooling, knowledge which we hope will be a valuable contribution to the working groups,” said Daniel Wisenhoff, CEO and co-founder of Debricked.

Huawei

“With open source software becoming a crucial foundation in today’s world, how to ensure its security is the responsibility of every stakeholder. We believe the establishment of the Open Source Security Foundation will drive common understanding and best practices on the security of the open source supply chain and will benefit the whole industry,” said Peixin Hou, Chief Expert on Open System and Software, Huawei. “We look forward to making contributions to this collaboration and working with everybody in an open manner. This reaffirms Huawei’s long-standing commitment to make a better, connected and more secure and intelligent world.”

Laboratory for Innovation Science at Harvard

“We are excited to bring the Core Infrastructure Initiative’s research on the prevalence and current practices of open source into this broader network of industry and foundation partners,” said Frank Nagle, Assistant Professor at Harvard Business School and Co-Director of the Core Infrastructure Initiative at the Laboratory for Innovation Science at Harvard. “Only through coordinated, strategically targeted efforts – among competitors and collaborators alike – can we effectively address the challenges facing open source today.”

Open Source Technology Improvement Fund

“OSTIF is thrilled to collaborate with industry leaders and apply its methodology and broad expertise for securing open-source technology on a larger scale. The level of engagement across organizations and industries is inspiring, and we look forward to participating via the Securing Critical Projects Working Group,” said Chief Operating Officer Amir Montazery. “The Linux Foundation and OpenSSF have been instrumental in aligning efforts toward improving open-source software, and OSTIF is grateful to be involved in the process.”

Polyverse

“Polyverse is honored to be a member of OpenSSF. The popularity of open source as the ‘go-to’ option for mission critical data, systems and solutions has brought with it increased cyberattacks. Bringing together organizations to work on this problem collaboratively is exactly what open source is all about and we’re eager to accelerate progress in this area,” said Archis Gore, CTO, Polyverse.

Renesas

“Renesas provides embedded processors for various application segments, including automotive, industrial automation, and IoT. Renesas is committed to ensuring the integrity and confidentiality of systems and data while mitigating cybersecurity risks. To enable our customers to develop robust systems, it is essential to provide root-of-trust of the open source software that runs on our products,” said Shinichi Yoshioka, Senior Vice President and CTO of Renesas. “We are excited to join the Open Source Security Foundation and to collaborate with industry-leading security professionals to advance more secure computing environments for the society.”

Samsung

“Samsung is trying to provide best-in-class security with our technologies and activities. Not only are security risks reviewed and removed in all development phases of our products, but they are also monitored continuously and patched quickly,” said Yong Ho Hwang, Corporate Vice President and Head of Samsung Research Security Team, Samsung Electronics. “Open source is one of the best approaches to drive cross-industry effort in responding quickly and transparently to security threats. Samsung will continue to be a leader in providing high-level security by actively contributing and collaborating with the Open Source Security Foundation.”

Spectral

“Spectral’s mission is to enable developers to build and ship software at scale without worry. We feel that the OpenSSF initiative is the perfect venue to discuss and improve open source security, and is a natural platform that empowers developers. The Spectral team is happy to participate in the working groups and share their expertise in security analysis and research of technology stacks at scale, developer experience (DX) and tooling, open source codebase analysis and trends, and developer behavioral analysis, toward the ultimate goal of improving open source security and developer happiness,” said Dotan Nahum, CEO and co-founder of Spectral.

SUSE

“At SUSE, we power innovation in data centers, cars, phones, satellites and other devices. It has never been more critical to deliver trustworthy security from the core all the way to the edge,” said Markus Noga, VP Solutions Technology at SUSE. “We are committed to OpenSSF as the forum for the open source community to collaborate on vulnerability disclosures, security tooling, and to create best practices to keep all users of open source solutions safe.”

Tencent

“Tencent believes in the power of open source technology and collaboration to deliver incredible solutions to today’s challenges. As open source has become the de facto way to build software, its security has become a critical component for building and maintaining the software and infrastructure,” said Mark Shan, Chair of Tencent Open Source Alliance and Board Chair of the TARS Foundation. “By bringing different organizations together, OpenSSF provides a platform where developers can collaboratively build solutions needed to protect the open source supply chain. Tencent is very excited to join this collaborative effort as an OpenSSF member and contribute to its open source security initiatives and best practices.”

WhiteSource

“In today’s world, software development teams simply cannot develop software at today’s pace without using open source. Our goal has always been to empower teams to harness the power of open source easily and securely. We’re honored to get the opportunity to join the Open Source Security Foundation, where we can join forces with others to contribute, together, towards open source security best practices and initiatives,” said David Habusha, VP Product at WhiteSource.

About the Open Source Security Foundation (OpenSSF)

Hosted by the Linux Foundation, the OpenSSF (launched in August 2020) is a cross-industry organization that brings together the industry’s most important open source security initiatives and the individuals and companies that support them. It combines the Linux Foundation’s Core Infrastructure Initiative (CII), founded in response to the 2014 Heartbleed bug, and the Open Source Security Coalition, founded by the GitHub Security Lab, to build a community to support open source security for decades to come. The OpenSSF is committed to collaboration and working both upstream and with existing communities to advance open source security for all.

About the Linux Foundation

Founded in 2000, the Linux Foundation is supported by more than 1,000 members and is the world’s leading home for collaboration on open source software, open standards, open data, and open hardware. The Linux Foundation’s projects are critical to the world’s infrastructure, including Linux, Kubernetes, Node.js, and more. The Linux Foundation’s methodology focuses on leveraging best practices and addressing the needs of contributors, users and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org.

###

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our trademark usage page: https://www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.

Media Contact
Jennifer Cloer
Story Changes Culture
503-867-2304
jennifer@storychangesculture.com