Security Scorecards for Open Source Projects

November 6, 2020

Author: Kim Lewandowski, Google Product Manager

One of the first things I wanted to do when the OpenSSF launched was help people make better decisions about security when consuming open source projects, and draw more awareness to the health of these critical projects we all depend on. Some might argue that it’s almost too easy to introduce a new dependency into your software systems. I’m definitely guilty of this in my previous life as an engineer. I remember pulling in random Python packages when building my own websites and not putting any thought into security. It should be fine if so many other people are using the same package, right?

With the uptick in attacks on open source software, there is now broader awareness that pulling code you didn't write into your software supply chain warrants closer review. At large organizations this gets tricky: it is hard to scale automated analysis and trust decisions for every new dependency, and to stay on top of the hygiene of existing ones. These issues are what inspired the new "Scorecards" project, which we are releasing today with the OpenSSF.

The goal of Scorecards is to auto-generate a "security score" for open source projects to help users assess the trust, risk, and security posture of a project for their use case. This data can also feed automated decision making when new open source dependencies are introduced into a project or across an organization. For example, an organization may decide that any new dependency with a low score has to go through additional evaluation. Checks like these could help keep malicious dependencies from being deployed to production systems, as we've seen happen recently with malicious npm packages.
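
To make that concrete, here is a minimal sketch, in Go, of what such a dependency gate might look like. The scorecard binary name, its --repo and --format=json flags, the JSON output shape, and the dependency repository named below are all illustrative assumptions, not the tool's documented interface.

```go
// Hypothetical dependency gate: run Scorecards against a newly added
// dependency's repository and flag it for manual review if checks fail.
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

// checkResult mirrors one per-check entry in the assumed JSON output.
type checkResult struct {
	CheckName string `json:"CheckName"`
	Pass      bool   `json:"Pass"`
}

func main() {
	// Hypothetical new dependency being introduced.
	repo := "github.com/some-org/some-dependency"

	// Assumed invocation; the released CLI's flags may differ.
	out, err := exec.Command("scorecard", "--repo="+repo, "--format=json").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "scorecard run failed:", err)
		os.Exit(1)
	}

	var results []checkResult
	if err := json.Unmarshal(out, &results); err != nil {
		fmt.Fprintln(os.Stderr, "could not parse scorecard output:", err)
		os.Exit(1)
	}

	failed := 0
	for _, r := range results {
		if !r.Pass {
			failed++
			fmt.Printf("check failed: %s\n", r.CheckName)
		}
	}
	if failed > 0 {
		// A non-zero exit fails the CI job, forcing additional evaluation.
		fmt.Printf("%d checks failed; %s needs additional evaluation\n", failed, repo)
		os.Exit(1)
	}
	fmt.Println("all checks passed for", repo)
}
```

Run in CI whenever a dependency manifest changes, a gate like this surfaces low-scoring additions before they ever merge.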

We have defined an initial set of evaluation criteria that will be used to generate a scorecard for an open source project in a fully automated way. Currently the code only works with GitHub repositories, but we will extend it to cover repositories hosted elsewhere. The evaluation metrics include a well-defined security policy, a code review process, and continuous test coverage with fuzzing and static code analysis tools.
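
As an illustration of how one of these criteria can be checked automatically, here is a rough Go sketch of a security-policy probe that simply asks GitHub's contents API whether a repository has a SECURITY.md file at its root. The real Scorecards checks are more thorough; this only shows the shape of the approach.

```go
// Minimal security-policy probe: a repository "passes" if GitHub's
// contents API reports a SECURITY.md file at its root.
package main

import (
	"fmt"
	"net/http"
)

// hasSecurityPolicy reports whether SECURITY.md exists in the repo root.
// Unauthenticated API calls work but are rate-limited by GitHub.
func hasSecurityPolicy(owner, repo string) (bool, error) {
	url := fmt.Sprintf("https://api.github.com/repos/%s/%s/contents/SECURITY.md", owner, repo)
	resp, err := http.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	// 200 means the file exists; 404 means it does not.
	return resp.StatusCode == http.StatusOK, nil
}

func main() {
	ok, err := hasSecurityPolicy("ossf", "scorecard")
	if err != nil {
		fmt.Println("Security-Policy: error:", err)
		return
	}
	if ok {
		fmt.Println("Security-Policy: pass")
	} else {
		fmt.Println("Security-Policy: fail")
	}
}
```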

Using the scorecard data, we want to build a culture of security through improved visibility, and to work with the community to improve the security health of the critical projects we all depend on.

It's early days for this project. Though we have made some progress on the problem, we have not solved it, and we need the community's help to improve these security evaluation metrics and to add new ones. There's a small wishlist of issues already in the repo. Let's work together on a more secure future for open source software!