The Biden-Harris Administration issued a landmark Executive Order on developing Artificial Intelligence (AI), harnessing the power of AI responsibly, and managing the risks of AI. Executive Order 14110 directs actions for new standards on AI safety, security, privacy protection, equity and civil rights advancement, consumer and worker protection, and more. As explained in the Fact Sheet released by the US White House, the Executive Order aims to bring the public and private sectors together to promote responsible innovation and healthy global competition, advancing AI safely and securely, with transparency.
Omkhar Arasaratnam, General Manager, Open Source Security Foundation (OpenSSF) says, “The OpenSSF supports the Biden-Harris Administration’s Executive Order (EO) on Artificial Intelligence. The EO provides a framework that fosters innovation in AI, like the DARPA AI Cyber Challenge, while ensuring secure and equitable outcomes for everyone.”
Key Actions
This EO mandates continuous assurance and verification of AI systems’ safety, reliability, and effectiveness. One key action requires companies developing the most powerful AI systems to report their development activities, and the cybersecurity measures used to protect those systems, to the government. The action also requires information about dependent infrastructure and its security to be shared. AI systems and their runtimes depend on open source software (OSS), and transparency and security in OSS are precisely what OpenSSF focuses on. Building provenance into OSS via SLSA and Sigstore, helping OSS consumers assess the security posture of projects via OpenSSF Scorecard, and providing consumable Software Bills of Materials (SBOMs) will all support this action.
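An SBOM makes a system’s dependency inventory machine-readable, which is what enables the kind of reporting the EO envisions. As a minimal sketch of how a consumer might inspect one, the following assumes an SPDX 2.3 JSON document; the package data here is invented purely for illustration:

```python
import json

# A tiny SPDX 2.3-style SBOM document. The packages below are
# hypothetical sample data, not a real project's inventory.
sbom_json = """
{
  "spdxVersion": "SPDX-2.3",
  "name": "example-ai-runtime",
  "packages": [
    {"SPDXID": "SPDXRef-Package-numpy", "name": "numpy", "versionInfo": "1.26.4"},
    {"SPDXID": "SPDXRef-Package-torch", "name": "torch", "versionInfo": "2.2.1"}
  ]
}
"""

def list_packages(doc: str) -> list[tuple[str, str]]:
    """Return (name, version) pairs for every package in an SPDX JSON SBOM."""
    sbom = json.loads(doc)
    return [(p["name"], p.get("versionInfo", "NOASSERTION"))
            for p in sbom.get("packages", [])]

for name, version in list_packages(sbom_json):
    print(f"{name} {version}")
```

Real SBOMs are generated by build tooling rather than written by hand, but even this small sketch shows why the format matters: a consumer can enumerate exactly which components an AI system runs on and check each one against known vulnerabilities.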
In terms of AI in critical infrastructure and cybersecurity, EO 14110 calls for agencies overseeing critical infrastructure to assess AI-related risks and provide recommendations, with specific deadlines for various sectors. The US Secretary of the Treasury is to issue a public report on best practices for managing AI-specific cybersecurity risks in the financial sector. Additionally, AI security guidelines are to be incorporated into safety and security protocols for critical infrastructure, with an AI Safety and Security Board established to provide advice and recommendations. In cybersecurity, the US Secretaries of Defense and Homeland Security are tasked with developing plans and conducting pilot projects to deploy AI for identifying and mitigating vulnerabilities in government systems, reporting on the outcomes and lessons learned. This is the area where OpenSSF will make a material impact. Open source software is the backbone of AI systems and essential to securing critical infrastructure, which is why OpenSSF brought together US government (USG) and industry leaders at the recent Secure Open Source Software Summit DC to address the challenges of OSS consumption by critical infrastructure and beyond.
A key directive in the EO is managing AI in critical infrastructure and in cybersecurity. The goal is to ensure the protection of critical infrastructure and to capitalize on AI’s potential to improve cyber defense. AI-specific security risk assessment, management, and incident response need to be established. The Administration’s ongoing AI Cyber Challenge (AIxCC) aligns with this directive. OpenSSF announced at Black Hat 2023 its collaboration with the Defense Advanced Research Projects Agency (DARPA) on AIxCC – a two-year competition aimed at driving innovation at the nexus of AI and cybersecurity to create a new generation of cybersecurity tools to secure open source software.
Overcoming AI and ML Security Challenges
OpenSSF’s mission is to make OSS more secure, including the OSS that is the foundation of many AI systems. The foundation convenes individuals, communities, and organizations across the public and private sectors globally to take a risk-based approach to addressing these security challenges, and it is adapting to the fast-changing landscape of AI by investing in overcoming AI and ML security challenges. We invite you to join the newly launched OpenSSF AI/ML Security Working Group to help make AI safe, secure, and trustworthy.