The Open Source Security Foundation (OpenSSF) is participating in the Biden-Harris Administration's first-ever consortium dedicated to AI safety, led by the U.S. Department of Commerce. We join more than 200 leading artificial intelligence (AI) stakeholders, along with other Linux Foundation (LF) projects including LF AI & Data, SPDX, and C2PA, in supporting the development and deployment of safe and trustworthy AI.
The U.S. AI Safety Institute Consortium (AISIC), established by the Department of Commerce’s National Institute of Standards and Technology (NIST), aims to bring together AI creators and users, academics, government and industry researchers, and civil society organizations to achieve this mission. This collaboration represents a pivotal step in fostering safe and trustworthy AI practices.
OpenSSF's Role in AISIC
OpenSSF's participation in AISIC underscores its commitment to strengthening AI security through open source collaboration. Building on its mission to improve the security of open source software, OpenSSF will emphasize secure coding practices and the establishment of AI security standards, contributing its expertise in securing software supply chains and advocating for a holistic approach to AI safety. This partnership highlights the pivotal role of open source security in addressing AI's complex challenges, fostering an ecosystem where trust, transparency, and innovation converge throughout development and deployment.
Recently, the OpenSSF formed the AI/ML Security Working Group to examine how today's software supply chain security risks carry over to two emerging areas: the use of AI/ML in the development of open source software, and the use of open source software, including public datasets, to build and train AI/ML systems. In this role, the OpenSSF acts as a bridge between open source communities, first working to understand the current AI/ML security landscape and then working hand in hand to help solve the very difficult problems identified. "The OpenSSF and its very dedicated member organizations are in lockstep in efforts to identify and mitigate risks so that all can safely and securely manage the nuances of open source software and AI/ML development and use," said Jay White, co-chair of the AI/ML Security WG. "As this continues to evolve and align with our efforts in AISIC, we will need all the expertise we can get!" The AI/ML Security WG meets every other Monday at 1 PM ET / 10 AM PT; the Zoom link can be found in the OpenSSF public calendar.
The Scope of AISIC: A Global Collective for Secure AI Systems
The U.S. AI Safety Institute Consortium, or AISIC, brings together AI developers, users, researchers, and organizations in what is the largest such collection in the world. Its diverse membership comprises Fortune 500 companies, academic teams, non-profit organizations, and various U.S. government agencies, all collaborating with a shared commitment. Together, members will focus on advancing research and development initiatives that facilitate the creation of secure and reliable AI systems, laying the groundwork for future standards and policies.
Members of the AISIC will play a vital role in assisting NIST in implementing, iterating on, sustaining, and extending priority projects related to research, testing, and guidance on AI safety. By harnessing the expertise of this collective, NIST aims to ensure that its AI safety initiatives are well-integrated with the broader AI safety community, both nationally and globally.
For more information, see the announcement by LF AI & Data and the full list of consortium participants on the NIST website.