

By Christopher “CRob” Robinson, Director of Security Communications, Intel Product Assurance and Security, Intel Corporation; and Bennett Pursell, Ecosystem Strategist, OpenSSF
In the ever-evolving landscape of cybersecurity threats, collaboration and information sharing are paramount. Now, more than ever, the open source community needs a centralized platform to exchange threat intelligence efficiently. Introducing Siren, a threat intelligence sharing list hosted by the Open Source Security Foundation (OpenSSF): a groundbreaking initiative aimed at fortifying the defenses of open source projects worldwide.
Open source code is estimated to make up as much as 90% of modern software, from web servers to mobile applications. With that widespread adoption, however, comes increased scrutiny from threat actors seeking to exploit vulnerabilities for their own gain. Recent attacks on projects such as XZ-Utils and the OpenJS community are stark reminders of the importance of proactive security measures.
While the community has proven channels for communicating vulnerabilities among its members, such as the oss-security mailing list, we do not have an efficient means of sharing information about exploits with the broader downstream audience.
While consumers and enterprises may have intelligence-sharing structures in place, those structures do not always extend to the upstream open source community. OpenSSF Siren is an open source resource that fills this gap.
The OpenSSF Siren is a collaborative effort to aggregate and disseminate threat intelligence specific to open source projects. Hosted by the OpenSSF, this platform provides a secure and transparent environment for sharing Tactics, Techniques, and Procedures (TTPs) and Indicators of Compromise (IOCs) associated with recent cyber attacks. Siren is intended to be a post-disclosure means of keeping the community informed of threats and activities after the initial sharing and coordination.
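Siren does not prescribe a wire format for this intelligence, but to make the idea concrete, here is a minimal hypothetical sketch of what a shared IOC might look like, expressed in Python as a STIX 2.1-style indicator object. The STIX shape is an assumption for illustration, not a Siren requirement, and every identifier, hash, and date below is a placeholder.

```python
import json

# A minimal, hypothetical sketch of an IOC a maintainer might share on
# Siren, borrowing the shape of a STIX 2.1 indicator object. Siren does
# not mandate this format; all IDs, hashes, and dates are placeholders.
indicator = {
    "type": "indicator",
    "spec_version": "2.1",
    "id": "indicator--00000000-0000-4000-8000-000000000000",
    "created": "2024-05-20T00:00:00Z",
    "modified": "2024-05-20T00:00:00Z",
    "name": "Tampered release tarball (example)",
    "description": "SHA-256 of a backdoored artifact observed post-disclosure.",
    "pattern": "[file:hashes.'SHA-256' = "
               "'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855']",
    "pattern_type": "stix",
    "valid_from": "2024-05-20T00:00:00Z",
}

print(json.dumps(indicator, indent=2))
```

Machine-readable indicators like this, shared alongside the narrative TTP discussion on the list, make it easier for downstream consumers to feed the intelligence into their own tooling.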
Key features of the OpenSSF Siren include community-driven intelligence sharing, post-disclosure updates on emerging threats, and a secure, transparent environment for exchanging TTPs and IOCs.
By leveraging the collective knowledge and expertise of the open source community and other security experts, the OpenSSF Siren empowers projects of all sizes to bolster their cybersecurity defenses and increase their overall awareness of malicious activities. Whether you’re a developer, maintainer, or security enthusiast, your participation is vital in safeguarding the integrity of open source software.
Join us in the fight against cyber threats by becoming a member of the OpenSSF Siren today. Together, we can build a more resilient and secure open source ecosystem for generations to come.
Ready to take action? You can contribute by joining the OpenSSF Siren mailing list and sharing the threat intelligence (TTPs and IOCs) that you observe.
Together, let’s make open source software secure for everyone. Join the OpenSSF Siren today and be part of the solution. You can also join the conversation in the OpenSSF’s Vulnerability Disclosure working group to engage with other community security experts who are helping to demystify vulnerabilities within our open source ecosystem.
By Mihai Maruseac and Jay White
What do open source software, security, and AI/ML have in common? The intersection of these topics is what the OpenSSF AI/ML Working Group tackles. Almost a year ago, a group of people at the confluence of security and AI/ML came together under the OpenSSF umbrella to create this working group, whose goal is securing AI/ML. This was deemed a necessity given the rapid spread of AI technology, closely matched by the speed at which security incidents are being discovered in products that use AI or ML models.
The AI/ML Working Group focuses on addressing open source software security for AI/ML workloads. To do so, it aims both to tackle the new types of attacks that can arise from the way ML is developed and deployed and to adapt the security practices of the traditional software industry (i.e., software that does not use AI) to the field of AI. To achieve these goals, the group will also collaborate with other communities, such as the Cloud Native Computing Foundation (CNCF), LF AI & Data, and others.
We started the year with a presentation from David A. Wheeler, Director of Open Source Supply Chain Security at OpenSSF, about the security risks associated with AI-powered applications, including a discussion of what is known about what does and does not work. We also had a presentation from Dr. Christina Liaghati of MITRE about MITRE ATLAS, a knowledge base of adversary tactics and techniques against AI-enabled systems, and how it can be applied to large language models (LLMs).
Several members of the working group contributed to a response to a “Secure by Design” Request for Information (RFI) from the Cybersecurity and Infrastructure Security Agency (CISA), covering areas related to AI software development. We identified a large number of topics we could cover and decided to narrow our scope to the security threats we can defend against now, leaving other topics for the future. Since we cannot solve every problem related to the security of AI (not even if we restrict ourselves to generative AI), we decided to focus on initiatives that move the security needle significantly and in a reasonable time. We concluded that there is an urgent need to fund practical research on developing AI that is adequately secure against real-world attacks.
Members of the working group discussed the importance of signing models to ensure model integrity post-training, in a presentation with participants from Google, NVIDIA, and HiddenLayer. The approach reuses existing infrastructure from traditional software: signing is done via Sigstore. As a result of the presentation, the working group voted unanimously to create a Special Interest Group (SIG) focused on developing the signing technology and ensuring its adoption in OSS ML communities.
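The presentation did not bind the SIG to one implementation, but the underlying idea is simple: hash the serialized model artifacts into a small manifest and sign that manifest with Sigstore’s existing keyless flow. The following is a rough sketch of the hashing step under those assumptions; the directory layout, manifest name, and path are illustrative, not part of any official tooling.

```python
import hashlib
import json
from pathlib import Path

def hash_model_directory(model_dir: str) -> dict:
    """Map each file under a model directory to its SHA-256 digest.

    Signing a small manifest of digests, rather than the raw weights,
    keeps the signature payload constant regardless of model size."""
    manifest = {}
    root = Path(model_dir)
    for path in sorted(root.rglob("*")):
        if path.is_file():
            h = hashlib.sha256()
            with path.open("rb") as f:
                # Read in 1 MiB chunks; model weights are often large.
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)
            manifest[str(path.relative_to(root))] = h.hexdigest()
    return manifest

if __name__ == "__main__":
    manifest = hash_model_directory("./my-model")  # hypothetical path
    Path("manifest.json").write_text(json.dumps(manifest, indent=2))
    # The manifest can then be signed with existing Sigstore tooling,
    # e.g. `sigstore sign manifest.json` via the sigstore-python CLI.
```

Verification is the mirror image: recompute the digests at load time, check them against the signed manifest, and reject the model if anything has changed since training.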
Finally, the working group has had several discussions about OpenSSF’s support for DARPA’s Artificial Intelligence Cyber Challenge (AIxCC). The AIxCC competition focuses on bringing together top AI and cybersecurity talent to create automated systems that can make software more secure at scale. OpenSSF serves as a challenge advisor to promote openness, fairness, and community benefit throughout the competition process.
The group started discussions around disclosure policies related to vulnerabilities in ML-powered applications. This is currently a work in progress. The goal is to evolve the existing traditional software vulnerability disclosure practices into a document that can be applied to AI.
We are looking forward to formally establishing the model signing SIG, where participants interested in further developing and deploying model signing technology can collaborate in weekly meetings. We have a repository under the Sigstore organization on GitHub and are planning to make a stable release in the coming weeks.
You can get involved in either of the two initiatives the working group currently has in progress: the vulnerability disclosure policies for ML or the model signing SIG. We also welcome contributions in other areas at the intersection of AI and security; it is a vast field, and multiple parallel efforts can together create beneficial changes for the OSS community.
To learn more or contribute to our work, visit our repository on GitHub or join the mailing list. The working group also meets every two weeks on Zoom. We look forward to hearing from you!
Mihai Maruseac is a member of the Google Open Source Security Team (GOSST), working on Supply Chain Security for ML, as co-lead on a Secure AI Framework (SAIF) workstream. Previously, Mihai worked on GUAC. Before GOSST, Mihai created the TensorFlow Security team. Prior to Google, Mihai worked on adding Differential Privacy to Machine Learning algorithms. Mihai has a PhD in Differential Privacy from UMass Boston.
Jautau “Jay” White, PhD, MBA, MS, CISM, CISSP-ISSAP, OSCP, CDPSE, is a Security Principal Program Manager on the OSS Incubations + Ecosystem Team in the Azure Office of the CTO at Microsoft. Jay has over 20 years of experience building and leading information security teams across supply chain, cyber risk, and AI. He specializes in security, privacy, and compliance, bringing a balance of tactical and strategic approaches to implementing enterprise and cyber risk management, security, and compliance requirements that align with an organization’s broader business strategy. Jay believes that companies should go beyond the status quo for their customers and partners and take a teamwork/community approach to understanding business unit needs. He is a friend, a trusted advisor, and a proud US Army retiree.