
How We Can Learn from Open Source Software to Address the Challenges of AI

November 4, 2024

By Ashwin Ramaswami

With the development of new artificial intelligence (AI) models and capabilities, attention has been drawn to their potential for harm and misuse: generating deepfakes and disinformation, encoding algorithmic bias, or being used to perpetuate other harms.

What does this have to do with open source software? Just as with “open” and “closed” software, there have been similar discussions and debates about “open” and “closed” AI models. At the core of the debate is a similar distinction: should AI models’ weights, training algorithms, and training data be openly accessible, or restricted to a few?

A recent post by the Cybersecurity and Infrastructure Security Agency (CISA) helps shed light on this connection. CISA’s role includes protecting critical infrastructure through cybersecurity and working with public and private sector partners to ensure the safety of software. Its post explains how lessons from the history of open source software can be applied to “open foundation models.”

Open and closed AI, and open source

As new AI foundation models have been launched, some have been released and developed in the open, while others have remained closed. “Open source” is often used as a term for some of these open models.

However, there is no consensus on what “open source AI” actually means: does it include the weights, the code, or something else? In fact, the Open Source Initiative has been leading a community process to define the term.

Most of the models referred to as open are really models with widely available weights, meaning that the weights of the final model are free for anyone to download and use as they please. Following the CISA blog post, we refer to these models as “open foundation models.”

Issues with open foundation models

In their post, the CISA team identifies two classes of harms from open foundation models. First are harms deliberately sought by the deployer of the model, such as using models to conduct cyberattacks. Second are unintended harms: for example, cybersecurity vulnerabilities in a model that may inadvertently leak private data.

These classes of harms mirror those in traditional software: just as with AI models, any library or software package can be developed maliciously to do harm, or an unintentional piece of code can introduce a major vulnerability.

How open source security principles apply to open foundation models

There has been an upsurge in work to strengthen open source software security in the past couple of years, and the OpenSSF has been at the center of it. From our collaborations with agencies such as CISA to help create their Open Source Software Security Roadmap, to projects such as Sigstore and Alpha-Omega, we have seen that a multi-stakeholder effort toward open source security is important.

Moreover, many open source security efforts have focused on the supply chain. For example, major package repositories have been working to support signed packages and secure-by-default coding practices, as sketched below.
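To make the signing idea concrete, here is a minimal sketch of how a publisher might sign a package and a consumer verify it, using Ed25519 from Python’s cryptography package. This is illustrative only; real repositories (and Sigstore, which uses keyless signing backed by short-lived certificates and a transparency log) layer identity and key management on top of this core step.

```python
# Minimal sketch of artifact signing and verification with Ed25519,
# using the "cryptography" package (pip install cryptography).
# Illustrative only: real package repositories add key distribution,
# identity, and revocation on top of this core step.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Publisher side: sign the package bytes before upload.
private_key = Ed25519PrivateKey.generate()
package = b"contents of the released package archive"
signature = private_key.sign(package)

# Consumer side: verify the download against the publisher's public key.
public_key = private_key.public_key()
try:
    public_key.verify(signature, package)
    print("signature OK: package matches what the publisher signed")
except InvalidSignature:
    print("signature mismatch: do not install this package")
```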

Similarly, these principles can be applied to open foundation models. All the different players, including industry, academia, and government, should work together in a collaborative process to identify problem areas and improvements worth investing in. And time and effort should be invested in the open foundation model supply chain: from datasets to labeling processes to the software used to build AI models.
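As a hypothetical illustration of what supply chain transparency could look like for a model, the sketch below records content digests for a dataset, its labels, and the resulting weights in a simple provenance manifest. The file names are placeholders, and this is a far simpler scheme than real provenance formats such as SLSA attestations.

```python
# Sketch: record content digests for each artifact in a model's supply
# chain, so downstream users can verify that the dataset, labels, and
# weights they received match what was published.
# The file names here are hypothetical placeholders.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 and return its hex digest."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

artifacts = ["train_dataset.jsonl", "labels.csv", "model_weights.bin"]
manifest = {name: sha256_of(Path(name)) for name in artifacts}

# Publish the manifest alongside the model; consumers recompute the
# digests and compare them before training or deploying.
Path("provenance.json").write_text(json.dumps(manifest, indent=2))
```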

Finally, “secure by default” should apply to open foundation models as well. They can be built with safeguards so that they are easier to protect from misuse and generally less susceptible to unintended uses. But this requires substantial investment; as we’ve seen, for example, Alpha-Omega has cost over $5 million just to get started, and it only addresses the tip of the iceberg.

Future steps

CISA’s response is encouraging and shows that government agencies have the resources and talent to address these problems head-on. The OpenSSF looks forward to continuing to collaborate with all stakeholders to address the further challenges posed by the development of these new technologies.