Spotlight on the OpenSSF AI/ML Working Group


By Mihai Maruseac and Jay White

What do open source software, security, and AI/ML have in common? Their intersection is what the OpenSSF AI/ML Working Group tackles. Almost a year ago, a group of people working at the confluence of security and AI/ML came together under the OpenSSF umbrella to create this working group, whose goal is securing AI/ML. The group was deemed a necessity given the rapid spread of AI technology, closely matched by the speed at which security incidents involving AI products or ML models are being discovered.

The AI/ML Working Group focuses on open source software security for AI/ML workloads. To that end, it aims both to tackle the new types of attacks that arise from the ways ML is developed and deployed, and to adapt security practices from the traditional software industry (i.e., software that does not use AI) to the field of AI. To achieve these goals, the group also collaborates with other communities, such as the Cloud Native Computing Foundation (CNCF), LF AI & Data, and others.

Highlights of the Past Few Months

We started the year with a presentation from David A. Wheeler, Director of Open Source Supply Chain Security at OpenSSF, about the security risks associated with AI-powered applications, including a discussion of what is known about what does and does not work. We also had a presentation from Dr. Christina Liaghati from MITRE about MITRE ATLAS, a knowledge base of adversary tactics and techniques against AI-enabled systems, and how it can be applied to large language models (LLMs).

Several members of the working group contributed to a response to a “Secure by Design” Request for Information (RFI) from the Cybersecurity and Infrastructure Security Agency (CISA), covering areas related to AI software development. We identified a large number of topics we could cover and decided to narrow the scope to which security threats we can defend against now and which topics should be covered in the future. Since we cannot solve every problem related to the security of AI (not even if we restrict ourselves to GenAI), we decided to focus on initiatives that move the security needle significantly and in a reasonable time. We also concluded that there is an urgent need to fund practical research on developing AI that is adequately secure against real-world attacks.

Members of the working group discussed the importance of signing models to ensure model integrity post-training, in a presentation with participants from Google, Nvidia, and HiddenLayer. Model integrity is ensured by reusing existing infrastructure from traditional software: models are signed via Sigstore. As a result of the presentation, the working group voted unanimously to create a Special Interest Group (SIG) that would focus on developing the signing technology and ensuring its adoption in OSS ML communities.
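To make the idea concrete, here is a minimal sketch of the producer side of model signing: hash the trained model artifact and sign the digest right after training. This sketch uses a locally generated Ed25519 key from the `cryptography` package purely for illustration; the working group’s actual approach relies on Sigstore (keyless signing with short-lived certificates) rather than long-lived local keys, and the file names below are hypothetical.

```python
# Conceptual sketch of post-training model signing (illustration only; the SIG's
# real tooling signs via Sigstore rather than a locally generated key).
import hashlib
from pathlib import Path

from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

MODEL_PATH = Path("model.safetensors")  # hypothetical trained model artifact


def digest_file(path: Path) -> bytes:
    """Hash the model in chunks so large artifacts need not fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.digest()


# Producer side: sign the model's digest immediately after training.
private_key = Ed25519PrivateKey.generate()
Path("model.sig").write_bytes(private_key.sign(digest_file(MODEL_PATH)))

# Publish the public key alongside the model so consumers can verify it.
Path("model.pub").write_bytes(
    private_key.public_key().public_bytes(
        serialization.Encoding.PEM,
        serialization.PublicFormat.SubjectPublicKeyInfo,
    )
)
```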

Finally, the working group has had several discussions about OpenSSF’s support for DARPA’s Artificial Intelligence Cyber Challenge (AIxCC). The AIxCC competition focuses on bringing together top AI and cybersecurity talent to create automated systems that can make software more secure at scale. OpenSSF serves as a challenge advisor to promote openness, fairness, and community benefit throughout the competition process. 

New and Upcoming Initiatives

The group has started discussions around disclosure policies for vulnerabilities in ML-powered applications. This is currently a work in progress. The goal is to evolve existing vulnerability disclosure practices from traditional software into a document that can be applied to AI.

We are looking forward to formally establishing the model signing SIG, where participants interested in further developing and deploying model signing technology can collaborate in weekly meetings. We have a repository under the Sigstore organization on GitHub and are planning to make a stable release in the coming weeks.
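For those wondering what adoption looks like on the consumer side, the sketch below continues the hypothetical example above: verify the signature over the downloaded artifact before loading it, and refuse to load on failure. Again, the raw PEM public key is only for illustration; the SIG’s tooling verifies Sigstore bundles and signer identities instead.

```python
# Conceptual verification sketch (illustration only; real tooling verifies
# Sigstore bundles and signer identities, not a raw PEM public key).
import hashlib
from pathlib import Path

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.serialization import load_pem_public_key


def verify_before_load(model_path: Path, sig_path: Path, pubkey_path: Path) -> bool:
    """Return True only if the signature matches the model's SHA-256 digest."""
    digest = hashlib.sha256(model_path.read_bytes()).digest()
    public_key = load_pem_public_key(pubkey_path.read_bytes())
    try:
        public_key.verify(sig_path.read_bytes(), digest)
        return True
    except InvalidSignature:
        return False


# Hypothetical usage: gate model deserialization on a successful verification.
if not verify_before_load(Path("model.safetensors"), Path("model.sig"), Path("model.pub")):
    raise RuntimeError("Model signature verification failed; refusing to load.")
```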

Get Involved

You can get involved in either of the two initiatives the working group currently has in progress: the vulnerability disclosure policies for ML or the model signing SIG. We are also accepting contributions in other areas at the intersection of AI and security. It is a vast field, and multiple parallel efforts can together create beneficial changes for the OSS community.

To learn more or contribute to our work, visit our repository on GitHub or join the mailing list. The working group also holds bi-weekly meetings on Zoom. We are looking forward to hearing from you!

About the Author(s)

Mihai Maruseac is a member of the Google Open Source Security Team (GOSST), working on Supply Chain Security for ML, as co-lead on a Secure AI Framework (SAIF) workstream. Previously, Mihai worked on GUAC. Before GOSST, Mihai created the TensorFlow Security team. Prior to Google, Mihai worked on adding Differential Privacy to Machine Learning algorithms. Mihai has a PhD in Differential Privacy from UMass Boston.

Jautau “Jay” White, PhD, MBA, MS, CISM, CISSP-ISSAP, OSCP, CDPSE, is a Security Principal Program Manager on the OSS Incubations + Ecosystem Team in the Azure Office of the CTO at Microsoft. Jay has over 20 years of experience building and leading teams through information security work spanning supply chain, cyber risk, and AI. He specializes in security, privacy, and compliance. He provides a combined tactical and strategic balance toward the implementation of enterprise and cyber risk management, security, and compliance requirements that align with an organization’s broader business strategy. Jay believes that companies should go beyond the status quo for their customers and partners and take a teamwork/community approach to understanding business unit needs. Jay is a friend, trusted advisor, and a proud US Army retiree.