Skip to main content
Tag

Software Supply Chain Security

SBOMs in the Era of the CRA: Toward a Unified and Actionable Framework

SBOMs in the Era of the CRA: Toward a Unified and Actionable Framework

By Blog, Guest Blog

By Madalin Neag, Kate Stewart, and David A. Wheeler

In our previous blog post, we explored how the Software Bill of Materials (SBOM) should not be a static artifact created only to comply with some regulation, but should be a decision ready tool. In particular, SBOMs can support risk management. This understanding is increasing thanks to the many who are trying to build an interoperable and actionable SBOM ecosystem. Yet, fragmentation, across formats, standards, and compliance frameworks, remains the main obstacle preventing SBOMs from reaching their full potential as scalable cybersecurity tools. Building on the foundation established in our previous article, this post dives deeper into the concrete mechanisms shaping that evolution, from the regulatory frameworks driving SBOM adoption to the open source initiatives enabling their global interoperability. Organizations should now treat the SBOM not merely as a compliance artifact to be created and ignored, but as an operational tool to support security and augment asset management processes to ensure vulnerable components are identified and updated in a timely proactive way. This will require actions to unify various efforts worldwide into an actionable whole.

Accelerating Global Policy Mandates

The global adoption of the Software Bill of Materials (SBOM) was decisively accelerated by the U.S. Executive Order 14028 in 2021, which mandated SBOMs for all federal agencies and their software vendors. This established the SBOM as a cybersecurity and procurement baseline, reinforced by the initial NTIA (2021) Minimum Elements (which required the supplier, component name, version, and relationships for identified components). Building on this foundation, U.S. CISA (2025) subsequently updated these minimum elements, significantly expanding the required metadata to include fields essential for provenance, authenticity, and deeper cybersecurity integration. In parallel, European regulatory momentum is similarly mandating SBOMs for market access, driven by the EU Cyber Resilience Act (CRA). Germany’s BSI TR-03183-2 guideline complements the CRA by providing detailed technical and formal requirements, explicitly aiming to ensure software transparency and supply chain security ahead of the CRA’s full enforcement.

To prevent fragmentation and ensure these policy mandates translate into operational efficiency, a wide network of international standards organizations is driving technical convergence at multiple layers. ISO/IEC JTC 1/SC 27 formally standardizes and oversees the adoption of updates to ISO/IEC 5962 (SPDX), evaluating and approving revisions developed by the SPDX community under The Linux Foundation. The standard serves as a key international baseline, renowned for its rich data fields for licensing and provenance and support for automation of risk analysis of elements in a supply chain. Concurrently, OWASP and ECMA International maintain ECMA-424 (OWASP CycloneDX), a recognized standard optimized specifically for security automation and vulnerability linkage. Within Europe, ETSI TR 104 034, the “SBOM Compendium,” provides comprehensive guidance on the ecosystem, while CEN/CENELEC is actively developing the specific European standard (under the PT3 work stream) that will define some of the precise SBOM requirements needed to support the CRA’s vulnerability handling process for manufacturers and stewards.

Together, these initiatives show a clear global consensus: SBOMs must be machine-readable, verifiable, and interoperable, supporting both regulatory compliance over support windows and real-time security intelligence. This global momentum set the stage for the CRA, which now transforms transparency principles into concrete regulatory obligations.

EU Cyber Resilience Act (CRA): Establishing a Legal Requirement

The EU Cyber Resilience Act (CRA) (Regulation (EU) 2024/2847) introduces a legally binding obligation for manufacturers to create, 1maintain, and retain a Software Bill of Materials (SBOM) for all products with digital elements marketed within the European Union. This elevates the SBOM from a voluntary best practice to a legally required element of technical documentation, essential for conformity assessment, security assurance, and incident response throughout a product’s lifecycle. In essence, the CRA transforms this form of software transparency from a recommendation into a condition for market access.

Core Obligations for Manufacturers under the CRA include:

  • SBOM Creation – Manufacturers must prepare an SBOM in a commonly used, machine-readable format [CRA I(II)(1)], such as SPDX or CycloneDX.
  • Minimum Scope – The SBOM must cover at least the top-level dependencies of the product  [CRA I(II)(1)]. While this is the legal minimum, including deeper transitive dependencies is strongly encouraged.
  • Inclusion & Retention – The SBOM must form part of the mandatory technical documentation and be retained for at least ten years (Art.13) after the product has been placed on the market.
  • Non-Publication Clause – The CRA requires the creation and availability of an SBOM but does not mandate its public disclosure Recital 77. Manufacturers must provide the SBOM upon request to market surveillance authorities or conformity assessment bodies for validation, audit, or incident investigation purposes.
  • Lifecycle Maintenance – The SBOM must be kept up to date throughout the product’s maintenance and update cycles, ensuring that any component change or patch is reflected in the documentation Recital 90.
  • Vulnerability Handling – SBOMs provide the foundation for identifying component vulnerabilities under the CRA, while full risk assessment requires complementary context such as exploitability and remediation data. (Annex I)

The European Commission is empowered, via delegated acts under Article 13(24), to further specify the format and required data elements of SBOMs, relying on international standards wherever possible. To operationalize this, CEN/CENELEC is developing a European standard under the ongoing PT3 work stream, focused on vulnerability handling for products with digital elements and covering the essential requirements of Annex I, Part II of the CRA. Its preparation phase includes dedicated sub-chapters on formalizing SBOM structures, which will serve as the foundation for subsequent stages of identifying vulnerabilities and assessing related threats (see “CRA workshop ‘Deep dive session: Vulnerability Handling” 1h36m35s).

In parallel, the U.S. Cybersecurity and Infrastructure Security Agency (CISA) continues to shape global SBOM practices through its “Minimum Elements” framework and automation initiatives. These efforts directly influence Europe’s focus on interoperability and structured vulnerability handling under the CRA. This transatlantic alignment helps ensure SBOM data models and processes evolve toward a consistent, globally recognized baseline. CISA recently held a public comment window ending October 2, 2025 on a draft version of a revised set of minimum elements, and is expected to publish an update to the original NTIA Minimum Elements in the coming months.

Complementing these efforts, Germany’s BSI TR-03183-2 provides a more detailed technical specification than the original NTIA baseline, introducing requirements for cryptographic checksums, license identifiers, update policies, and signing mechanisms. It already serves as a key reference for manufacturers preparing to meet CRA compliance and will likely be referenced in the forthcoming CEN/CENELEC standard together with ISO/IEC and CISA frameworks. Together, the CRA and its supporting standards position Europe as a global benchmark for verifiable, lifecycle aware SBOM implementation, bridging policy compliance with operational security.

Defining the Unified Baseline: Convergence in Data Requirements

The SBOM has transitioned from a best practice into a legal and operational requirement due to the European Union’s Cyber Resilience Act (CRA). While the CRA mandates the SBOM as part of technical documentation for market access, the detailed implementation is guided by documents like BSI TR-03183-2. To ensure global compliance and maximum tool interoperability, stakeholders must understand the converging minimum data requirements. To illustrate this concept, the following comparison aligns the minimum SBOM data fields across the NTIA, CISA, BSI, and ETSI frameworks, revealing a shared move toward completeness, verifiability, and interoperability.

Data Field NTIA (U.S., 2021 Baseline) CISA’s Establishing a Common SBOM (2024) BSI TR-03183-2 (Germany/CRA Guidance) (2024) ETSI TR 104 034 (Compendium) (2025)
Component Name Required Required Required Required
Component Version Required Required Required Required
Supplier Required Required Required Required
Unique Identifier (e.g., PURL, CPE) Required Required  Required Required
Cryptographic Hash Recommended Required Required  Optional
License Information Recommended Required  Required Optional
Dependency Relationship Required  Required  Required Required
SBOM Author Required Required  Required  Required
Timestamp (Date of Creation) Required Required Required Required
Tool Name / Generation Context Not noted Not noted Required  Optional
Known Unknowns Declaration Optional Required Optional Optional
Common Format Required  Not noted Required  Required
Depth Not noted Not noted Not noted Optional

 

  • NTIA (2021): Established the basic inventory foundation necessary to identify components (who, what, and when).
  • CISA (2024): Framing Software Component Transparency establishes a structured maturity model by defining minimum, recommended, and aspirational SBOM elements, elevating SBOMs from simple component lists to verifiable security assets. CISA is currently developing further updates expected in 2025 to extend these principles into operational, risk-based implementation guidance.
  • BSI TR-03183-2: Mirrors the CISA/NTIA structure but mandates strong integrity requirements (Hash, Licenses) from the compliance perspective of the CRA, confirming the strong global convergence of expectations.
  • ETSI TR 104 034: As a technical compendium, it focuses less on specific minimum fields and more on the necessary capabilities of a functional SBOM ecosystem (e.g., trust, global discovery, interoperability, and lifecycle management).

The growing alignment across these frameworks shows that the SBOM is evolving into a globally shared data model, one capable of enabling automation, traceability, and trust across the international software supply chain.

Dual Standard Approach: SPDX and CycloneDX

The global SBOM ecosystem is underpinned by two major, robust, and mature open standards: SPDX and CycloneDX. Both provide a machine-processable format for SBOM data and support arbitrary ecosystems. These standards, while both supporting all the above frameworks, maintain distinct origins and strengths, making dual format support a strategic necessity for global commerce and comprehensive security.

The Software Package Data Exchange (SPDX), maintained by the Linux Foundation, is a comprehensive standard formally recognized by ISO/IEC 5962 in 2021. Originating with a focus on capturing open source licensing and intellectual property in a machine readable format, SPDX excels in providing rich, detailed metadata for compliance, provenance, legal due diligence, and supply chain risk analysis. Its strengths lie in capturing complex license expressions (using the SPDX License List and SPDX license expressions) and tracking component relationships in great depth, together with its extensions to support linkage to security advisories and vulnerability information, making it the preferred standard for rigorous regulatory audits and enterprise-grade software asset management. As the only ISO-approved standard, it carries significant weight in formal procurement processes and traditional compliance environments.  It supports multiple formats (JSON, XML, YAML, Tag/Value, and XLS) with free tools to convert between the formats and promote interoperability. 

The SPDX community has continuously evolved the specification since its inception in 2010, and most recently has extended it to a wider set of metadata to support modern supply chain elements, with the publication of SPDX 3.0 in 2024. This update to the specification contains additional fields & relationships to capture a much wider set of use cases found in modern supply chains including AI. These additional capabilities are captured as profiles, so that tooling only needs to understand the relevant sets, yet all are harmonized in a consistent framework, which is suitable for supporting product line management Fields are organized into a common “core”, and there are “software” and “licensing” profiles, which cover what was in the original specification ISO/IEC 5962. In addition there is now a “security” profile, which enables VEX and CSAF use cases to be contained directly in exported documents, as well as in databases  There is also a “build” profile which supports high fidelity tracking of relevant build information for “Build” type SBOMs. SPDX 3.0 also introduced a “Data” and “AI” related profiles, which made accurate tracking of AI BOMs possible, with support for all the requirements of the EU AI Act (see table in linked report).  As of writing, the SPDX 3.0 specification is in the final stages of being submitted to ISO/IEC for consideration. 

CycloneDX, maintained by OWASP and standardized as ECMA-424, is a lightweight, security-oriented specification for describing software components and their interdependencies. It was originally developed within the OWASP community to improve visibility into software supply chains. The specification provides a structured, machine-readable inventory of elements within an application, capturing metadata such as component versions, hierarchical dependencies, and provenance details. Designed to enhance software supply chain risk management, CycloneDX supports automated generation and validation in CI/CD environments and enables early identification of vulnerabilities, outdated components, or licensing issues. Besides its inclusion with SPDX in the U.S. federal government’s 2021 cybersecurity Executive Order, its formal recognition as an ECMA International standard in 2023 underscore its growing role as a globally trusted format for software transparency. Like SPDX, CycloneDX has continued to evolve since formal standardization and the current release is 1.7, released October 2025.

The CycloneDX specification continues to expand under active community development, regularly publishing revisions to address new use cases and interoperability needs. Today, CycloneDX extends beyond traditional SBOMs to support multiple bill-of-materials types, including Hardware (HBOM), Machine Learning (ML-BOM), and Cryptographic (CBOM), and can also describe relationships with external SaaS and API services. It integrates naturally with vulnerability management workflows through formats such as VEX, linking component data to exploitability and remediation context. With multi-format encoding options (JSON, XML, and Protocol Buffers) and a strong emphasis on automation.

OpenSSF and the Interoperability Toolkit

The OpenSSF has rapidly become a coordination hub uniting industry, government, and the open source community around cohesive SBOM development. Its mission is to bridge global regulatory requirements, from the EU’s Cyber Resilience Act (CRA) to CISA’s Minimum Elements and other global mandates, with practical, open source technical solutions. This coordination is primarily channeled through the “SBOM Everywhere” Special Interest Group (SIG), a neutral and open collaboration space that connects practitioners, regulators, and standards bodies. The SIG plays a critical role in maintaining consistent semantics and aligning development efforts across CISA, BSI, NIST, CEN/CENELEC, ETSI, and the communities implementing CRA-related guidance. Its work ensures that global policy drivers are directly translated into unified, implementable technical standards, helping prevent the fragmentation that so often accompanies fast-moving regulation.

A major focus of OpenSSF’s work is on delivering interoperability and automation tooling that turns SBOM policy into practical reality:

  • Protobom tackles one of the field’s toughest challenges – format fragmentation – by providing a format-agnostic data model capable of seamless, lossless conversion between SPDX, CycloneDX, and emerging schemas.
  • BomCTL builds on that foundation and offers a powerful, developer-friendly command line utility designed for CI/CD integration. It handles SBOM generation, validation, and transformation, allowing organizations to automate compliance and security workflows without sacrificing agility. Together, Protobom and Bomctl  embody the principles shared by CISA and the CRA: ensuring that SBOM data is modular, transparent, and portable across tools, supply chains, and regulatory environments worldwide.

Completing this ecosystem is SBOMit, which manages the end-to-end SBOM lifecycle. It provides built-in support for creation, secure storage, cryptographic signing, and controlled publication, embedding trust, provenance, and lifecycle integrity directly into the software supply chain process. These projects are maintained through an open, consensus-driven model, continuously refined by the global SBOM community. Central to that collaboration are OpenSSF’s informal yet influential “SBOM Coffee Club” meetings, held every Monday, where developers, vendors, and regulators exchange updates, resolve implementation challenges, and shape the strategic direction of the next generation of interoperable SBOM specifications.

OpenSSF’s strategic support for both standards – SPDX and CycloneDX – is vital for the entire ecosystem. By contributing to and utilizing both formats, most visibly through projects like Protobom and BomCTL which enable seamless, lossless translation between the two, OpenSSF ensures that organizations are not forced to choose between SPDX and CycloneDX. This dual format strategy satisfies the global requirement for using both formats  and maximizes interoperability, guaranteeing that SBOM data can be exchanged between all stakeholders, systems, and global regulatory jurisdictions effectively.

A Shared Vision for Action

Through this combination of open governance and pragmatic engineering, OpenSSF is defining not only how SBOMs are created and exchanged, but how the world collaborates on software transparency.

The collective regulatory momentum, anchored by the EU Cyber Resilience Act (CRA) and the U.S. Executive Order 14028, supported by the CISA 2025 Minimum Elements revisions, has cemented the global imperative for Software Bill of Materials (SBOM). These frameworks illustrate deep global alignment: both the CRA and CISA emphasize that SBOMs must be structured, interoperable, and operationally useful for both compliance and cybersecurity. The CRA establishes legally binding transparency requirements for market access in Europe, while CISA’s work encourages SBOMs within U.S. federal procurement, risk management, and vulnerability intelligence workflows. Together, they define the emerging global consensus: SBOMs must be complete enough to satisfy regulatory obligations, yet structured and standardized enough to enable automation, continuous assurance, and actionable risk insight. The remaining challenge is eliminating format and semantic fragmentation to transform the SBOM into a universal, enforceable cybersecurity control.

Achieving this global scalability requires a unified technical foundation that bridges legal mandates and operational realities. This begins with Core Schema Consensus, adopting the NTIA 2021 baseline and extending it with critical metadata for integrity (hashes), licensing, provenance, and generation context, as already mandated by BSI TR-03183-2 and anticipated in forthcoming CRA standards. To accommodate jurisdictional or sector-specific data needs, the CISA “Core + Extensions” model provides a sustainable path: a stable global core for interoperability, supplemented by modular extensions for CRA, telecom, AI, or contractual metadata. Dual support for SPDX and CycloneDX remains essential, satisfying the CRA’s “commonly used formats” clause and ensuring compatibility across regulatory zones, toolchains, and ecosystems.

Ultimately, the evolution toward global, actionable SBOMs depends on automation, lifecycle integrity, and intelligence linkage. Organizations should embed automated SBOM generation and validation (using tools such as Protobom, BomCTL, and SBOMit) into CI/CD workflows, ensuring continuous updates and cryptographic signing for traceable trust. By connecting SBOM information with vulnerability data in internal databases, the SBOM data becomes decision-ready, capable of helping identify exploitable or end-of-life components and driving proactive remediation. This operational model, mirrored in the initiatives of Japan (METI), South Korea (KISA/NCSC), and India (MeitY), reflects a decisive global movement toward a single, interoperable SBOM ecosystem. Continuous engagement in open governance forums, ISO/IEC JTC 1, CEN/CENELEC, ETSI, and the OpenSSF SBOM Everywhere SIG, will be essential to translate these practices into a permanent international standard for software supply chain transparency.

Conclusion: From Compliance to Resilient Ecosystem

The joint guidance A Shared Vision of SBOM for Cybersecurity insists on these global synergies under the endorsement of 21 international cybersecurity agencies. Describing the SBOM as a “software ingredients list,” the document positions SBOMs as essential for achieving visibility, building trust, and reducing systemic risk across global digital supply chains. That document’s central goal is to promote immediate and sustained international alignment on SBOM structure and usage, explicitly urging governments and industries to adopt compatible, unified systems rather than develop fragmented, country specific variants that could jeopardize scalability and interoperability.

The guidance organizes its vision around four key, actionable principles aimed at transforming SBOMs from static compliance documents into dynamic instruments of cybersecurity intelligence:

  • Modular Architecture – Design SBOMs around a Core Schema Baseline that satisfies essential minimum elements (component identifiers, supplier data, versioning) and expand it with optional extensions for domain specific or regulatory contexts (e.g., CRA compliance, sectoral risk requirements). Support and improve OSS tooling that enables processing and sharing of this data in a variety of formats.
  • Trust and Provenance – Strengthen authenticity and metadata transparency by including details about the generation tools, context, and version lineage, ensuring trust in the accuracy and origin of SBOM data.
  • Actionable Intelligence – Integrate SBOM data with vulnerability and incident-response frameworks such as VEX and CSAF, converting static component inventories into decision ready, risk aware security data.
  • Open Governance – Encourage sustained public–private collaboration through OpenSSF, ISO/IEC, CEN/CENELEC, and other international bodies to maintain consistent semantics and prevent fragmentation.

This Shared Vision complements regulatory frameworks like the EU Cyber Resilience Act (CRA) and reinforces the Open Source Security Foundation’s (OpenSSF) mission to achieve cross-ecosystem interoperability. Together, they anchor the future of SBOM governance in openness, modularity, and global collaboration, paving the way for a truly unified software transparency model.

The primary challenge to achieving scalable cyber resilience lies in the fragmentation of the SBOM landscape. Global policy drivers, such as the EU Cyber Resilience Act (CRA), the CISA-led Shared Vision of SBOM for Cybersecurity, and national guidelines like BSI TR-03183, have firmly established the mandate for transparency. However, divergence in formats, semantics, and compliance interpretations threatens to reduce SBOMs to static artifacts generated only because some regulation requires that they be created, rather than dynamic assets that can aid in security. Preventing this outcome requires a global commitment to a unified SBOM framework, a lingua franca capable of serving regulatory, operational, and security objectives simultaneously. This framework must balance policy diversity with technical capability universality, ensuring interoperability between European regulation, U.S. federal procurement mandates, and emerging initiatives in Asia and beyond. The collective engagement of ISO/IEC, ETSI, CEN/CENELEC, BSI, and the OpenSSF provides the necessary multistakeholder governance to sustain this alignment and accelerate convergence toward a common foundation.

Building such a framework depends on two complementary architectural pillars: Core Schema Consensus and Modular Extensions. The global core should harmonize essential SBOM elements, and CRA’s legal structure, into a single, mandatory baseline. Sectoral or regulatory needs (e.g., AI model metadata, critical infrastructure tagging, or crypto implementation details) should be layered through standardized modular extensions to prevent the ecosystem from forking into incompatible variants. To ensure practical interoperability, this architecture must rely on open tooling and universal machine-processable identifiers (such as PURL, CPE, SWID, and SWHID) that guarantee consistent and accurate linkage. Equally crucial are trust and provenance mechanisms: digitally signed SBOMs, verifiable generation context, and linkage with vulnerability data. These collectively transform the SBOM from a passive unused inventory into an actively maintained, actionable cybersecurity tool, enabling automation, real-time risk management, and genuine international trust in the digital supply chain, realizing the OpenSSF vision of “SBOMs everywhere.”

SBOMs have transitioned from a best practice to a requirement in many situations. The foundation established by the U.S. Executive Order 14028 has been legally codified by the EU’s Cyber Resilience Act (CRA), making SBOMs a non-negotiable legal requirement for accessing major markets. This legal framework is now guided by a collective mandate, notably by the Shared Vision issued by CISA, NSA, and 19 international cybersecurity agencies, which provides the critical roadmap for global alignment and action. Complementary work by BSI, ETSI, ISO/IEC, and OpenSSF now ensures these frameworks converge rather than compete.

To fully achieve global cyber resilience, SBOMs must not be merely considered as a compliance artifact to be created and ignored, but instead as an operational tool to support security and augment asset management processes. Organizations must:

  • Integrate and Automate SBOMs: Achieve full lifecycle automation for SBOM creation and continuous updates, making it a seamless part of the DevSecOps pipeline.
  • Maximize SBOM Interoperability: Mandate the adoption of both SPDX and CycloneDX to satisfy divergent global and regulatory requirements and ensure maximum tool compatibility.
  • Operationalize with Open Source Software Leverage OpenSSF tools (Protobom, BomCTL, SBOMit) to rapidly implement and scale technical best practices.
  • Drive Shared Governance for SBOMs: Actively engage in multistakeholder governance initiatives (CEN/CENELEC, ISO/IEC, CISA, ETSI,  OpenSSF) to unify technical standards and policy globally.
  • Enable Decision-Ready Processes that build on SBOMs: Implement advanced SBOM processes that link component data with exploitability and vulnerability context, transforming static reports into actionable security intelligence.

By embracing this shared vision, spanning among many others the CRA, CISA, METI, KISA, NTIA, ETSI, and BSI frameworks, we can definitively move from merely fulfilling compliance obligations to achieving verifiable confidence. This collective commitment to transparency and interoperability is the essential step in building a truly global, actionable, and resilient software ecosystem.

About the Authors

Madalin Neag works as an EU Policy Advisor at OpenSSF focusing on cybersecurity and open source software. He bridges OpenSSF (and its community), other technical communities, and policymakers, helping position OpenSSF as a trusted resource within the global and European policy landscape. His role is supported by a technical background in R&D, innovation, and standardization, with a focus on openness and interoperability.

Kate is VP of Dependable Embedded Systems at the Linux Foundation. She has been active in the SBOM formalization efforts since the NTIA initiative started, and was co-lead of the Formats & Tooling working group there. She was co-lead on the CISA Community Stakeholder working group to update the minimum set of Elements from the original NTIA set, which was published in 2024. She is currently co-lead of the SBOM Everywhere SIG.

Dr. David A. Wheeler is an expert on open source software (OSS) and on developing secure software. He is the Director of Open Source Supply Chain Security at the Linux Foundation and teaches a graduate course in developing secure software at George Mason University (GMU). Dr. Wheeler has a PhD in Information Technology, is a Certified Information Systems Security Professional (CISSP), and a Senior Member of the Institute of Electrical and Electronics Engineers (IEEE). He lives in Northern Virginia.

 

What’s in the SOSS? Podcast #42 – S2E19 New Education Course: Secure AI/ML-Driven Software Development (LFEL1012) with David A. Wheeler

By Podcast

Summary

In this episode of “What’s In The SOSS,” Yesenia interviews David A. Wheeler, the Director of Open Source Supply Chain Security at the Linux Foundation. They discuss the importance of secure software development, particularly in the context of AI and machine learning. David shares insights from his extensive experience in the field, emphasizing the need for both education and tools to ensure security. The conversation also touches on common misconceptions about AI, the relevance of digital badges for developers, and the structure of a new course aimed at teaching secure AI practices. David highlights the evolving nature of software development and the necessity for continuous learning in this rapidly changing landscape.

Conversation Highlights

00:00 Introduction to Open Source and Security
02:31 The Journey to Secure AI and ML Development
08:28 Understanding AI’s Impact on Software Development
12:14 Myths and Misconceptions about AI in Security
18:24 Connecting AI Security to Open Source and Closed Source
20:29 The Importance of Digital Badges for Developers
24:31 Course Structure and Learning Outcomes
28:18 Final Thoughts on AI and Software Security

Transcript

Yesenia (00:01)
Hello and welcome to What’s in the SOSS, OpenSSF podcast where we talk to interesting people throughout the open source ecosystem. They share their journey, expertise and wisdom. So yes, I need said one of your hosts and today we have the extraordinary experience of having David Wheeler on a welcome David. For those that may not know you, can you share a little bit about your row at the Linux Foundation OpenSSF?

David A. Wheeler (00:39)
Sure, my official title is actually probably not very illuminating. It says it’s the direct, I’m the director of open source supply chain security. But what that really means is that my job is to try to help other folks improve the security of open source software at large, all the way from it’s in someone’s head, they’re thinking about how to do it, developing it, putting it in repos, getting it packaged up, getting it distributed, receiving it just all the way through. We want to make sure that people get secure software and the software they actually intended to get.

Yesenia (01:16)
It’s always important, right? You don’t want to open up a Hershey bar that has no peanuts in the peanuts, right? So that was my analogy for the supply chain security in MySpace. Because I’m a little sensitive to peanuts. I was like, you know, you don’t want that.

David A. Wheeler (01:22)
You

David A. Wheeler (01:31)
You don’t want that. And although the food analogy is often pulled up, I think it’s still a good analogy. If you’re allergic to peanuts, you don’t want the peanuts. And unfortunately, it’s not just, hey, whether or not it’s got peanuts or not, but there was a scare involving Tylenol a while back. And to be fair, the manufacturer didn’t do anything wrong, but the bottles were tampered with by a third party.

Yesenia (01:40)
Mm-mm.

David A. Wheeler (01:57)
And so we don’t want tampered products. We want to make sure that when you request an open source program, it’s actually the one that was intended and not something else.

Yesenia (02:07)
So you have a very important job. Don’t play yourself there. We want to make sure the product you get is the one you get, right? So if you don’t know David, go ahead and message him on Slack, connect with him. Great gentleman in the open source space. And you’ve had a long time advocating for secure software development in the open source space. How did your journey lead to creating a course specifically on secure AI and ML driven development?

David A. Wheeler (02:36)
As with many journeys, it’s a complicated journey with lots of whens and ways. As you know, I’ve been interested in how do you develop secure software for a long time, decades now, frankly. And I have been collecting up over the years what are the common kinds of mistakes and more importantly, what are the systemic simple solutions you can make that would prevent that problem and eliminating it entirely ideally.

Um, and over the years it’s turned out that in fact, for a vast number, for the vast majority of problems that people have, there are well-known solutions, but they’re not well known by the developers. So a lot of this is really an education story of trying to make it so that software developers know how to do things. Now it’s a fair, you know, some would say, some would say, well, what about tools? Tools are valuable. Absolutely.

If to the extent that we can, we want to make it so that tools automatically do the secure thing. And that’s the right thing to do, but that’ll never be perfect. And people can always override tools. And so it’s not a matter of education or tools. I think that’s a false dichotomy. It’s you need tools and you need education. You need education or you can’t use the tools well as much as we can. We want to automate things so that they will handle things automatically, but you need both. You need both.

Now, to answer your specific question, I’ve actually been involved in and out with AI to some extent for literally decades as well. People have been interested in AI for years, me too. I did a lot more with symbolic based AI back in the day, wrote a lot of Lisp code. But since that time, really machine learning, although it’s not new, has really come into its own.

And all of a sudden it became quite apparent to me, and it’s not just me, to many people that software development is changing. And this is not a matter of what will happen someday in the future. This is the current reality for software development. And I’m going to give a quick shout out to some researchers in Stanford. I’ll have to go find the link. So who basically did some, I think some important studies related to this.

David A. Wheeler (04:59)
When you’re developing software from scratch and trying to create a tiny program, the AI tools are hard to beat because basically they’re just creating, know, they’re just reusing a template, but that’s a misleading measure, okay? That doesn’t really tell you what normal software development is like. However, let’s assume that you’re taking existing programming and improving it, and you’re using a language for which there’s a lot of examples for training. Okay, we’re talking Python and Java and, you know, various widely used languages, okay?

David A. Wheeler (05:28)
If you do those, turns out the AI tools can generate a lot of code. Some of it’s right. So that means you have to do more rework, frankly, when you use these tools. Once you take the rework into account, they’re coming up with a 20 % improvement in productivity. That is astounding. And I will argue that is the end of the argument. Seriously, there are definitely, there are companies where they have a single customer and the customer pays them to write some software. If the customer says never use AI, fine, the customer’s willing to pay 20 % improvement, I will charge that extra to them. But out in most commercial open source settings, you can’t throw, you can’t ignore a 20 % improvement. And that’s current tech, that’s not future tech. mean, the reality is that we haven’t seen improvements like this since the switch from hut, from assembly to high level languages, the use of, you know, the use of structure programming, I think was another case we got that kind. And you can make a very good case that open source software was that was a third case where you got that digital, productivity. Now you could also argue that’s a little unfair because open source didn’t improve your ability to write software. makes you didn’t have to write the software.

David A. Wheeler (06:53)
But that’s okay. That’s still an improvement, right? So I think that counts. But for the most part, we’ve had a lot of technologies that claim to improve productivity. I’ve worked many over the years. I’ve been very interested in how do you improve productivity? Most of them turned out not to be true. I don’t think that’s true for AI. It’s quite clear for multiple studies. mean, not all studies agree with this, by the way, but I think there’s enough studies that there’s a productivity improvement.

David A. Wheeler (07:21)
It does depend on how you employ these, that, but you know, and they’ll get better. But the big problem now is everyone is on the list. This is a case where everyone, even if you’re a professional and you’ve been doing software development for years, everybody’s new at this part of the game. These tools are new. And the problem here is that the good news is that they can help you. The bad news is they can harm you. They can do.

David A. Wheeler (07:50)
They can produce terribly insecure software. They can also end up being the security vulnerability themselves. And so we’re trying to get ahead of the game and looking around what’s the latest information, what can we learn? And it turns out there’s a lot that we can learn that we actually think is gonna stay on the test of time. And so that’s what this course is, those basics they’re gonna apply no matter what tool you use.

David A. Wheeler (08:17)
How do you make it say you’re using these tools, but you’re not immediately becoming a security vulnerability? How is it so that you’re less likely to produce vulnerable code? And that turns out to be harder. We can talk about why that is, but that’s what this course is in a nutshell.

Yesenia (08:33)
Yeah, I know I had a sneak preview at the slide deck and I was just like, this is fantastic. Definitely needed it. And I wanted to take a moment and give a kudos to the researchers because the engine, the industry wouldn’t be what it is today without the researchers. Like they’re the ones that are firsthand, like try and failing and then somebody picks it up and builds it and it is open source or industry. then boom, it becomes like this whole new field. So I know AI has been around for a minute.

David A. Wheeler (09:01)
Yeah, let me add that. I agree with you. Let me actually separate different researchers because we’re building on the first of course, the researchers who created these original AI and ML systems more generally, obviously a lot of the LLM based research. You’ve got the research specifically in developing tools for developing, for improving software development. And then you’ve got the researchers who are trying to figure out the security impacts of this. And those folks,

Yesenia (09:30)
Those are my favorite. Those are my favorite.

David A. Wheeler (09:31)
I’m Well, we need all of these folks. But the thing is, what concerns me is that remarkably, even though we’ve got this is really new tech, we do have some really interesting research results and publications about their security impacts. The problem is, most of these researchers are really good about, you know, doing the analysis, creating controls, doing the research, publishing a paper, but for most people, publishing a paper has no impact. People are not going to go out and read every paper on a topic. That’s, know, they have work to do basically. So if you’re a researcher makes these are very valuable, but what we’ve tried to do is take the research and boil it down to as a practitioner, what do you need to know? And we do cite the research because

David A. Wheeler (10:29)
You know, if you’re, if you’re interested or you say, Hey, I’m not sure I believe that. Well, great. Curiosity is fantastic. Go read the studies. there’s always limitations on studies. We don’t have infinite time and infinite money. but I think the research is actually pretty consistent. at least with Ted Hayes technology, we, can’t guess what the great grand future holds.

David A. Wheeler (10:55)
But I’m going to guess that at least for the next couple of years, we’re going to see a lot of LLMs, LLM use, they’re going to build on other tools. And so there’s things that we know just based on that, that we can say, well, given the direction of current technology, what’s okay, what’s to be concerned about? And most importantly, what can we do practically to make this work in our favor? So we get that 20 % because we’re going to want it.

Yesenia (11:24)
Yeah, at this point in time, we’re seedling within the AI ML piece. What you said is really, really important. It’s just like, so much more to this. There’s so much more that’s growing. And I want to take it back to something you had mentioned. You’re talking about the good that is coming from the AI ML. And there is the bad, of course. And for the course that you’re coming out, what is one misconception about AI in the software development security that you hope that this course will shatter? What myth are you busting?

David A. Wheeler (11:53)
What myth am I busting? I guess I’m going to cheat because I’m going to respond with two. It’s by the fact that I actually can count. guess, okay, I’m going to turn it into one, which is, guess, basically either over or under indexing on the value of A. Basically expecting too much or expecting too little. Okay, basically trying to figure out what the real expectations you should have are and not go outside that. So there’s my one. So let me talk about over and under. We’ve got people.

Yesenia (12:30)
Well, I’m going to give you another one because in software everything starts with zero. So I’ll give you another one.

David A. Wheeler (13:47)
Okay, all right, so let me talk about the under. There are some who have excessive expectations. We’ve got the, you know, I think vibe coding in particular is a symptom of this, okay? Now, there are some people who use the word vibe coding as another word for using AI. I think that’s not what the original creator of the term meant. And I actually think that’s not helpful because it’s a whole lot like talking about automated carriages.

Um, very soon we’re only going to be talking about carriages. Okay. Everybody’s going to be using automation AI except the very few who don’t. Okay. So, so there’s no point in having a special term for the normal case. Um, so what, what I mean by vibe coding is what the original creator of the term meant, which is, Hey, AI system creates some code. I’m never going to review it. I’m never going to look at it. I’m not going to do anything. I will just blindly accept it. This is a terrible idea. If it matters what the quality of the code is. Now there are cases where frankly the quality of the code is irrelevant. I’ve seen some awesome examples where you’ve got little kids, know, eight, nine year olds running around and telling a computer what to do and they get a program that seems to kind of do that. And that is great. I mean, if you want to do vibe coding with that, that’s fantastic. But if the code actually does something that matters, with current tech, this is a terrible idea. They’re not.

They can sometimes get it correct, but even humans struggle making perfect code every time and they’re not that good. The other case though is, man, we can’t ever use this stuff. I mean, again, if you’ve got a customer who’s paying you extra to never do it, that’s wonderful. Do what the customer asks and is willing to pay for it. For most of us, that’s not a reasonable industry position. What we’re going to need to do instead is learn together as an industry how to use this well. The good news is that although we will all be learning together, there’s some things already known now. So let’s run to the researchers, find out what they’ve learned, go to the practitioners, basically find what has been learned so far, start with that. And then we can build on and improve and go and other things. You don’t expect too much, don’t expect too little.

Yesenia (15:28)
Yeah, the five coding is an interesting one, because sometimes it spews out like correct code. But as somebody who’s written code and reviewed code and like done all this with the supply chain, I’m like. It’s like that extra work you gotta kind of add to it to make sure that you’re validating your testing it and it hasn’t just accidentally thrown in some security vulnerability in in that work. And I think that was. Go ahead.

David A. Wheeler (15:51)
What I can interrupt you, one of the studies that we cited, they basically created a whole bunch of functions that could be written either insecurely or securely as a test. Did this whole bunch of times. And they found that 45 % of the time using today’s current tooling, they chose the insecure approach. And there’s reason for this. ML systems are finally based on their training sets.

They’ve been trained on lots of insecure programs. What did you expect to get? You know, so this is actually going to be a challenge because when you’re trying to fight what the ML systems are training on, that is harder than going with the flow. That doesn’t mean it can’t be done, but it does require extra effort.

Yesenia (16:41)
We’re going extra left at that point. All right, so you your one and I gave you, know, one more because we started at zero. Any other misconception that is being bumped at the course.

David A. Wheeler (16:57)
Um, I guess the, uh, I guess the misconception sort is nothing can be done. And, uh, of course the whole course is a, uh, a, a, a stated disagreement with examples, uh, because in fact, there are things we can do right now. Now I would actually concede if somebody said, Hey, we don’t know everything. Well, sure. Uh, you know, I think all of us are in a life journey and we’re all learning things as we go. Uh, but that doesn’t mean that we have to, um, you know, just accept that nothing can be done. That’s a fatalistic approach that I think serves no one. There are things we can do. There are things that are known, though maybe not by you, but that’s okay. That’s what a course is for. We’ve worked to boil down, try to identify what is known, and with relatively little time, you’ll be far more prepared than you would be otherwise.

Yesenia (17:49)
It is a good course and I the course is aimed for developers, software engineers, open source contributors. So how does it connect to real world open source work like those that are working on closed source versus that open source software?

David A. Wheeler (18:04)
Well, okay, I should first quickly note that I work for the Open Source Security Foundation, Open Source is in the name, so we’re very interested in improving the security of open source software. That is our fundamental focus. That said, sometimes the materials that we create are not really unique to open source software. Where it can be applied by closed source software, we try to make that clear. Sometimes we don’t make it clear as we should, but we’re working on that.

Um, and frankly, in many cases, I think there’s also worth noting that, um, if you’re developing closed source software, the vast majority of the components you’re using are open source software. I mean, the average is 70 to 90 % of the software in a closed source system software system is actually open source software components. Uh, simple because it doesn’t make sense to rebuild everything from scratch today. That’s not a, an M E economically viable option for most folks. So.

in this particular case for the AI, it is applicable equally to open source and closed source. It applies to everybody. and this is actually true also for our LFD 121 course on how to develop secure software. And when you think about it, it makes sense. The attackers don’t care what your license is. just, you know, they just don’t care. They’re going to try to do bad things to you regardless of, of the licensing.

And so while certainly a number of things that we develop like, you know, the best practices badge are very focused on open source software, you know, other things like baseline, other things like, for example, this course on LFT 121, the general how to develop secure software course, they’re absolutely for open source and closed source. Because again, the attackers don’t care.

Yesenia (19:53)
Yeah, they just they just don’t they’re they’re actually just trying to go around all this like they’re trying to make sure they learn it so that they know what to do. Unfortunately, that’s the case. And this course you said it offers a digital badge. Why is this important for developers and employers?

David A. Wheeler (20:11)
Well, I think the short answer is that anybody can say, yeah, I learned something. But it’s, think, for, I guess I should start with the employers because that’s the easier one to answer. Employers like to see that people know things and having a digital badge is a super easy way for an employer to, to make sure that, yeah, they actually learn at least the basics of, you know, that topic. you know, certainly it’s the same for you know, university degrees and other things. You when you’re an employer, you want the, it’s very, important that people who are working for you actually know something that’s critically important. And while a degree or digital badge doesn’t guarantee it, it at least gives that additional evidence. For people, mean, obviously if you want, are trying to get employed by someone, it’s always nice to be able to prove that. But I think it’s also a way to both show you know something to others and frankly encourage others to learn this stuff. We have a situation now where way too many people don’t know how to do some what to me are pretty basic stuff. You know I’ll point back to the LFD 121 course which is how to develop secure software. Most colleges, most universities that’s not a required part of the degree. I think it should be.

David A. Wheeler (21:35)
But since it isn’t, it’s really, really helpful for everybody to know, wait, this person coming in, do they not, they’ve got this digital badge. That gives me much higher confidence going in as somebody I’m working with and that sort of thing, as well as just encouraging others to say, hey, look, I carried enough to take the time to learn this, you can too. And both LFD 121 and this new AI course are free, so that in there online so you can take it at your pace. Those roadblocks do not exist. We’re trying to get the word out because this is important.

Yesenia (22:16)
Yeah, I love that these courses are more accessible and how you touched on the students, like students applying for universities that might be more highly competitive. They’re like, hey, look, I’m taking this extra path to learn and take these courses. Here’s kind of like the proof. And it’s like Pokemon. It’s good to collect them all, know, between the badges and the certifications and the degrees.

I definitely that’s the security professional’s journeys. Collect them all at this point with the credibility and benefits.

David A. Wheeler (22:46)
Well, indeed, course, the real goal, of course, is to learn, not the badges. But I think that badges are frankly, you know, you collecting the gold star, there is nothing more human and nothing more okay than saying, Hey, I, you know, I got to I got a gold star for if you’re doing something that’s good. Yes, revel in that. Enjoy it. It’s fun.

David A. Wheeler (23:11)
And I don’t think that these are impossible courses by any means. And unlike some other things which are, know, I’m not against playing games, playing games is fun, but this is a little thing that’s both can be interesting and is going to be important long-term to not only yourself, but every who uses the code you make. Cause that’s gonna be all of us are the users of the software that all developers as a community make.

Yesenia (23:42)
Yeah, there’s a wide range impact from this, not just like even if you don’t create software, just understanding and learning about this, you’re a user to understanding that basic understanding of it. So I want to transition a little bit to the course because I know we’re spending the whole time about it. Let’s say I’m a friendly person. I signed up for this free LFEL 1012. Can you walk me through the course structure? Like what am I expected to take away from the course in that time period?

David A. Wheeler (24:09)
Okay, yeah, so let’s see here. So I think what I should do is kind of first walk through the outline. Basically, I mean, the first two parts unsurprisingly are introduction and some context. And then we jump immediately into key AI concepts for secure development. We do not assume that someone taking this course is already an expert in AI. I mean, if you are, that’s great. It doesn’t take, we’re not spending a lot of time on it, but we wanna make sure that you understand the basics, the key terms that matter for software development. And then we drill right into the security risks of using AI assistance. I want to make it clear, we’re not saying you can’t use just because something has a risk, everything has risks, okay? But understanding what the potential issues are is important because then you can start addressing them. And then we go through what I would call kind of the meat of the course, best practices for secure assistant use.

You know, how do you reduce the risk that the assistant itself doesn’t become subverted, it starts working against you, things like that. Writing more secure code with AI, if you just say, write some code, a lot of it’s gonna be insecure. There are ways to deal with that, but it’s not so simple or straightforward. For example, it’s pretty common to tell AI systems that, hey, I’m an expert in this topic and suddenly it gets better. That trick doesn’t work.

No, you may laugh, but honestly, that trick works in a lot of situations, but it doesn’t work here. And we’ve actually researched showing it doesn’t work. So there are things that work, but it’s more it’s, it’s more than that. And finally, reviewing code changes in a world with AI. Now, of course, involves reviewing proposed changes from others. And in some cases, trying to deal with the potential DDoS attacks as people start creating far more code than anybody can reasonably review. Okay. We’re have to deal with this. and frankly, biggest problem, frankly, the folks who are vibe coding, you know, they, they run some program. It tells them 10 things. I’ll just dump all 10 things at them. And no, that’s a terrible idea. you know, and the, the curl folks, for example, have had an interesting point where.

They complained bitterly about some inputs from AI systems, which were absolute garbage, wasted their time. And they’ve praised other AI submissions because somebody took the time to make sure that they were actually helpful and correct and so on. And then that’s fantastic. you know, basically you need to push back on the junk and then find and then welcome the good stuff. And then, of course, a little conclusion wrap up kind of thing.

Yesenia (27:01)
I love it. it was a good outline. was not seeing it. Is this like videos accomplished with it or is this just like a module click through?

David A. Wheeler (27:10)
Well, basically we, we group them into little chapters. forgot what their official term is. It’s chapters, section modules. I don’t remember what the right term is. I guess I should, but basically after you go that, then there’s a couple of quiz questions and then a little videos. Basically the idea is that we want people to get it quickly, but you know, if it’s just watch a video for an hour, people fall asleep, don’t remember anything. That’s the goal is to learn, not just, you know, sleep through a video.

David A. Wheeler (27:39)
So little snippets, little quiz questions, and at the end there’s a little final exam. And if you get your answers right, you get your badge. So it’s not that terribly hard. We estimate, it varies, but we estimate about an hour for people. So it’s not a massive time commitment. Do it on lunch break or something. I think this is going to be, as I said, I think this is going to be time well spent.

David A. Wheeler (28:07)
This is the world that we are all moving to, or frankly, have already arrived at.

Yesenia (28:12)
Yeah, I’m already here. think I said it’s a seedling. It’s about to grow into that big tree. Any last minute thoughts, takeaways that you want to share about the course, your experience, open source, supply chain security, all of the above.

David A. Wheeler (28:27)
My goodness. I’ll think of 20 things after we’re done with this, of course. well, no, problem is I’ll think about them later in French. believe it’s called the wisdom of the stairs. It’s as you leave the party, you come up with the point you should have made. so I guess I’ll just say that, you know, if you develop software, whether you’re not, whether you realize or not, it’s highly likely that the work that you do will influence many, many.

Yesenia (28:31)
You only get zero and one.

David A. Wheeler (28:54)
About many, many people, many more than you probably realize. So I think it’s important for all software developers to learn how to develop software, secure software in general, because whether or not you know how to do it, the attackers know how to attack it and they will attack it. So it’s important to know that in general, since we are moving and essentially have already arrived at the world of AI and software development, it’s important to learn the basics and yes.

Do keep learning. Well, all of us are going to keep learning throughout our lives. As long as we’re in this field, that’s not a bad thing. I think it’s an awesome thing. I wake up happy that I get to learn new stuff. But that means you actually have to go and learn the new stuff. And the underlying technology remark is, it’s actually remarkably stable in many things. This is a case though, where, yes, a lot of things change in the detail, but the fundamentals don’t. But this is something where, yeah, actually there is something fundamental that is changing. One time we didn’t use AI often to help us develop software. Now we do. So how do we do that wisely? And there’s a long list of specifics. The course goes through it. I’ll give a specific example so it’s not just this highfalutin, high level stuff. So for example,

Pretty much all these systems are based very much on LLMs, which is great. LLMs have some amazing stuff, but they also have some weaknesses. One is in particular, they are incredibly gullible. If they are told something, they will believe it. And if you tell them to read a document that gives them instructions on some tech, and the document includes malicious instructions, that’s what it’s going to do because it heard the malicious instructions.

David A. Wheeler (30:48)
Now that doesn’t mean you can’t use these technologies. I think that’s a road too far for most folks. But it does mean that there’s new risks that we never had to deal with before. And so there’s new techniques that we’re going to need to apply to do it well. And I don’t think they’re unreasonable. They’re just, you know, we now have a new situation and we’re have to make some changes because of new situation.

Yesenia (31:11)
Yeah, it’s like you mentioned earlier, like you can ask it to be an expert in something and then it’s like, oh, I’m an expert. That’s what I laughing. I was like, yeah, I use that a lot. I’m like, the prompt is you’re an expert in sales. You’re an expert in brand. You’re an expert in this. And it’s like, OK, once it gets in.

David A. Wheeler (31:25)
But the thing is, that really does work in some fields, remarkably. And of course, we can only speculate sometimes why LLMs do better in some areas than others. But I think in some areas, it’s quite easy to distinguish the work of experts from non-experts. And the work of experts is manifestly and obviously different. And at least so far, LLMs struggle to do that.

David A. Wheeler (31:54)
This differentiation in this particular domain. And we can speculate why but basically, the research says that doesn’t work. So don’t do that. Do there are other techniques that have far more success, do those instead. And I would say, hey, I’m sure we’ll learn more things, there’ll be more research, use those as we learn them. But that doesn’t mean that we get to excuse ourselves from ignoring the research we have now, even though we don’t know it.

David A. Wheeler (32:23)
We don’t know everything. We won’t know everything next year either. Find out what you need to know now and be prepared to learn more Seagull.

Yesenia (32:32)
It’s a journey. Always learning every year, every month, every day. It’s great. We’re going to transition into our rapid fire. All right, so I’m going to ask the question. You got to answer quiz and there’s no edit on this point. All right, favorite programming language to teach security with.

David A. Wheeler (32:56)
I don’t have a favorite language. It’s like asking what my children, know, which of my children are my favorite. I like lots of programming languages. That said, I often use Java, Python, C to teach different things more because they’re decent exemplars of those kinds of languages. But so there’s your answer.

Yesenia (33:19)
Those are good range because you have your memory based one, which is the see your Python, which more scripts in the Java, which is more object oriented. So you got a good diverse group.

David A. Wheeler (33:28)
Static type, right, you’ve got your static typing, you’ve got your scripting, you’ve got your lower level. But indeed, I love lots of different programming languages. I know over 100, I’m not exaggerating, I counted, there’s a list on my website. But that’s less impressive than you might think because after you’ve learned a couple, the others tend to not, they often are similar too. Yes, Haskell and Lisp are really different.

David A. Wheeler (33:55)
But most Burmese languages are not as different as you might think, especially after you’ve learned a few. So I hope can help.

Yesenia (34:01)
Yeah, the newer ones too are very similar in nature. Next question, Dungeon and Dragon or Lord of the Rings?

David A. Wheeler (34:11)
I love both. What are we doing? What are you doing to me? Yeah, so I play Dungeons and Dragons. I have watched, I’ve read the books and watched the movies many times. So yes.

Yesenia (34:24)
Yes, yes, yes. First open source project you ever contributed to.

David A. Wheeler (34:30)
Wow, that is too long ago. I don’t remember. Seriously, but it was before the term open source software was created because that was created much later. So it was called free software then. So I honestly don’t remember. I sure it was some small contribution to something somewhere like many folks do, but I’m sorry. It’s lost in the midst of times back in the eighties. Maybe. Yeah. The eighties somewhere, probably mid eighties.

Yesenia (35:00)
You’re going to go to sleep now. Like,

David A. Wheeler (35:01)
So, yeah, yeah, I’m sure somebody will do the research and tell me. thank you.

Yesenia (35:09)
There wasn’t GitHub, so you can’t go back to commits.

David A. Wheeler (35:11)
That’s right. That’s right. No, was long before get long before GitHub and so on. Yep. Carry on.

Yesenia (35:18)
When you’re writing code, coffee or tea?

David A. Wheeler (35:22)
Neither! Coke Zero is my preferred caffeine of choice.

Yesenia (35:26.769)
And this is not sponsored.

David A. Wheeler (35:28.984)
It is not sponsored. However, I have a whole lot of empty cans next to me.

Yesenia (35:35)
AI tool you find the most useful right now.

David A. Wheeler (35:39)
Ooh, that one’s hard. I actually use about seven or eight depending on what they’re good for. For actual code right now, I’m tending to use Claude Code. Claude Code’s really one of the best ones out there for code. And of course, five minutes later, it may change. GitHub’s not bad either. There’s some challenges I’ve had with them. They had some bugs earlier, which I suspect they fixed by now.

But in fact, I think this is an interesting thing. We’ve got a race going on between different big competitors, and this is in many ways good for all of us. The way you get good at anything is by competing with others. So I think that we’re seeing a lot of improvements because you’ve got competing. And it’s okay if the answer changes over time. That’s an awesome thing.

Yesenia (36:36)
That is awesome. That’s technology. And the last one, this is for chaos. GIF or GIF?

David A. Wheeler (36:42)
It’s GIF. Graphics. Graphics has a guck in it. And yes, I’m aware that the original perpetrator doesn’t pronounce it that way, but it’s still GIF. I did see a cartoon caption which said GIF or GIF. And of course I can hear it just reading it.

Yesenia (36:53)
There you have it.

Yesenia (37:05)
My notes is literally spelled the same.

David A. Wheeler (37:08)
Hahaha!

Yesenia (37:11)
All right, well there you have it folks, another rapid fire. David, thank you so much for your time today, for your impact contribution to open source in the past couple decades. Really appreciate your time and all the contributors that were part of this course. Check it out on the Linux Foundation website. then David, do you want to close it out with anything on how they can access the course?

David A. Wheeler (37:38)
Yeah, so basically the course is ecure AI/ML-Driven Software Development its numbers LFEL 1012 And I’m sure we’ll put a link in the show. No, I’m not gonna try to read out the URL But we’ll put a link in there to to get to it But please please take that course. We’ve got some other courses

Software development, you’re always learning and this is an easy way to get the information you most need.

Yesenia (38:14)
Thank you so much for your time today and those listening. We’ll catch you on the next episode.

David A. Wheeler (38:19)
Thank you.

OpenSSF Hosts 2025 Policy Summit in Washington, D.C. to Tackle Open Source Security Challenges

By Blog, Global Cyber Policy, Press Release

WASHINGTON, D.C. – March 11, 2025 – The Open Source Security Foundation (OpenSSF) successfully hosted its 2025 Policy Summit in Washington, D.C., on Tuesday, March 4. The summit brought together industry leaders and open source security experts to address key challenges in securing the software supply chain, with a focus on fostering harmonization for open source software (OSS) development and consumption in critical infrastructure sectors.

The event featured keynotes from OpenSSF leadership and industry experts, along with panel discussions and breakout sessions covering the latest policy developments, security frameworks, and industry best practices for open source software security. 

“The OpenSSF is committed to tackling the most pressing security challenges facing the consumption of open source software in critical infrastructure and beyond,” said Steve Fernandez, General Manager, OpenSSF. “Our recent Policy Summit highlighted the shared responsibility, common goals, and interest in strengthening the resilience of the open source ecosystem by bringing together the open source community, government, and industry leaders.” 

Key Themes and Discussions from the Summit

  1. AI, Open Source, and Security
  • AI security remains an emerging challenge: Unlike traditional software, AI has yet to experience a major security crisis akin to Heartbleed, leading to slower regulatory responses.
  • Avoid premature regulation: Experts advised policymakers to allow industry-led security improvements before introducing regulation.
  • Security guidance for AI developers: There is an increasing need for dedicated security frameworks for AI systems, akin to SLSA (Supply Chain Levels for Software Artifacts) in traditional software.
  1. Software Supply Chain Security and OSS Consumption
  • Balancing software repository governance: The summit explored whether package repositories should actively limit the use of outdated or vulnerable software, recognizing both the risks and ethical concerns of software curation.
  • Improving package security transparency: Participants discussed ways to provide better lifecycle risk information to software consumers and whether a standardized framework for package deprecation and security backports should be introduced.
  • Policy recommendations for secure OSS consumption: OpenSSF emphasized the need for cross-sector collaboration to align software security policies with global regulatory frameworks, such as the EU Cyber Resilience Act (CRA) and U.S. federal cybersecurity initiatives.

“The OpenSSF Policy Summit reaffirmed the importance of industry-led security initiatives,” said Jim Zemlin, Executive Director of the Linux Foundation. “By bringing together experts from across industries and open source communities, we are ensuring that open source security remains a collaborative effort, shaping development practices that drive both innovation and security.”

Following the summit, OpenSSF will continue to refine security guidance, best practices, and policy recommendations to enhance the security of open source software globally. The discussions from this event will inform ongoing initiatives, including the OSS Security Baseline, software repository security principles, and AI security frameworks.

For more information on OpenSSF’s policy initiatives and how to get involved, visit openssf.org.

Supporting Quotes

“The 2025 Policy Summit was an amazing day of mind share and collaboration across different teams, from security, to DevOps, and policy makers. By uniting these critical voices, the day resulted in meaningful progress toward a more secure and resilient software supply chain that supports innovation across IT Teams.” – Tracy Ragan, CEO and Co-Founder DeployHub

“I was pleased to join the Linux Foundation OpenSSF Policy Summit “Secure by Design” panel and share insights on improving the open source ecosystem via IBM’s history of creating secure technology solutions for our clients,” said Jamie Thomas, General Manager, Technology Lifecycle Services & IBM Enterprise Security Executive. “Open source has become an essential driver of innovation for artificial intelligence, hybrid cloud and quantum computing technologies, and we are pleased to see more regulators recognizing that the global open source community has become an essential digital public good.” – Jamie Thomas, General Manager, Technology Lifecycle Services & IBM Enterprise Security Executive

“I was delighted to join this year’s OpenSSF Summit on behalf of JFrog as I believe strongly in the critical role public/private partnerships and collaboration plays in securing the future of open source innovation. Building trust in open source software requires a dedicated focus on security and software maturity. Teams must be equipped with tools to understand and vet open source packages, ensuring we address potential vulnerabilities while recognizing the need for ongoing updates. As the value of open source grows, securing proper funding for these efforts becomes essential to mitigate risks effectively.” – Paul Davis, U.S. Field CISO, JFrog

“Great event. I really enjoyed the discussions and the idea exchange between speakers, panelists and the audience.  I especially liked the afternoon breakout discussion on AI, open source, and security.” Bob Martin, Senior Software and Supply Chain Assurance Principal Engineer at the MITRE Corporation

“The Internet is plagued by chronic security risks, with a majority of companies relying on outdated and unsupported open source software, putting consumer privacy and national security at risk. As explored at the OpenSSF Policy Summit, we are at an inflection point for open source security and sustainability, and it’s time to prioritize and invest in the open source projects that underpin our digital public infrastructure.” – Robin Bender Ginn, Executive Director, OpenJS Foundation

“It is always a privilege to speak at the OpenSSF Policy Summit in D.C. and converse with some of the brightest minds in security, government, and open source. The discussions we had about the evolving threat landscape, software supply chain security, and the policies needed to protect critical infrastructure were timely and essential. As the open source ecosystem expands with skyrocketing open source AI adoption, it’s vital that we work collaboratively across sectors to ensure the tools and frameworks developers rely on are secure and resilient. I look forward to continuing these important conversations and furthering our collective mission of keeping open source safe and secure.” – Brian Fox, CTO and Co-Founder, Sonatype

“The OpenSSF Policy Summit highlighted the critical intersection of policy, technical innovation, and collaborative security efforts needed to protect our software supply chains and address emerging AI security challenges. By bringing together policy makers and technical practitioners, we’re collectively building a more resilient open source ecosystem that benefits everyone, we look forward to future events and opportunities to collaborate with the OpenSSF to help strengthen this ecosystem.” – Jim Miller, Engineering Director of Blockchain and Cryptography, Trail of Bits

***

About the OpenSSF

The Open Source Security Foundation (OpenSSF) is a cross-industry initiative by the Linux Foundation that brings together the industry’s most important open source security initiatives and the individuals and companies that support them. The OpenSSF is committed to collaboration and working both upstream and with existing communities to advance open source security for all. For more information, please visit us at openssf.org.

Media Contact
Noah Lehman
The Linux Foundation
nlehman@linuxfoundation.org

OpenSSF Announces Initial Release of the Open Source Project Security Baseline

OpenSSF Announces Initial Release of the Open Source Project Security Baseline

By Blog, Press Release

New Initiative Aims to Enhance Open Source Software Security Through Tiered Best Practices

SAN FRANCISCO – February 25, 2025 – The Open Source Security Foundation (OpenSSF) is pleased to announce the initial release of the Open Source Project Security Baseline (OSPS Baseline). The Baseline initiative provides a structured set of security requirements aligned with international cybersecurity frameworks, standards, and regulations, aiming to bolster the security posture of open source software projects.

“The OSPS Baseline release is a significant milestone in advancing security initiatives within the open source ecosystem,” said Christopher Robinson, Chief Security Architect at OpenSSF. “We’re excited to roll out OSPS Baseline following community testing and validation — we are confident that these security best practices are both practical and impactful across open source projects.”

The OSPS Baseline offers a tiered framework of security practices that evolve with project maturity. It compiles existing guidance from OpenSSF and other expert groups, outlining tasks, processes, artifacts, and configurations that enhance software development and consumption security. By adhering to the Baseline, developers can lay a foundation that supports compliance with global cybersecurity regulations, such as the EU Cyber Resilience Act (CRA) and U.S. National Institute of Standards and Technology (NIST) Secure Software Development Framework (SSDF).

“We’ve gotten helpful feedback from projects involved in the pilot rollout, including adoption commitments from GUAC, OpenVEX, bomctl, and Open Telemetry,” said Stacey Potter, Independent Open Source Community Manager, after helping lead the OSPS Baseline pilot efforts. “We know it can be tough to navigate all the security standards out there, so we built a framework that grows with your project. Our goal is to take the guesswork out of it and help maintainers feel confident about where they stand, without adding extra stress. It’s all about empowering the community and making open source more secure for everyone!”

“I’m excited to see the release of OSPS Baseline,” said Ben Cotton, Open Source Community Lead at Kusari & OSPS Baseline co-maintainer. “This effort provides actionable, practical guidance to help developers achieve appropriate security levels for their projects. Too often, security advice is vague or impractical, but Baseline aims to change that. Every improvement to open source security strengthens the modern software ecosystem, making it safer for everyone.”

OpenSSF invites open source developers, maintainers, and organizations to make use of the OSPS Baseline. Through engaging with this initiative, stakeholders can also contribute to refining the framework and promoting widespread adoption of security best practices in the open source community.

For more information and to get involved, please visit the OSPS Baseline website or GitHub.

Supporting Quotes:

“The OSPS Baseline release is an important step toward efficiently addressing the security and resilience of open source projects. Open source stewards, manufacturers who rely on open source, and end users will all benefit long-term as this community-defined criteria shines light on project security best practices.”

– Eddie Knight, Open Source Program Office Lead at Sonatype and OSPS Baseline Project Lead

“We applaud the launch of the OSPS Baseline as a crucial initiative in bolstering the security landscape of open source projects. At TestifySec, we recognize the importance of robust security frameworks like the OSPS Baseline in safeguarding software integrity and enhancing resilience against evolving cyber threats. We look forward to leveraging these guidelines to further fortify our commitment to delivering secure solutions for our clients and the broader open source community.” 

– Cole Kennedy, Co-Founder and CEO of TestifySec

“Security is a fundamental priority for the cloud native ecosystem, and the OSPS Baseline represents a major step forward in providing clear, actionable guidance for projects of all sizes. By establishing a tiered framework that evolves with project maturity, OSPS Baseline empowers maintainers and contributors to adopt security best practices that are scalable and sustainable. The CNCF is proud to support efforts like this that strengthen open source software at every level of development and we look forward to collaborating with the OpenSSF on adoption.”

– Chris Aniszczyk, Chief Technology Officer, Cloud Native Computing Foundation

“As open source has become integral in most of our technology stacks, it has become increasingly critical to streamline and standardize the security expectations between open source maintainers and consumers.  By synthesizing the requirements and controls from a variety of laws, regulations, and standards, the OpenSSF Baseline provides a clear roadmap for open source consumers to understand their security foundations.”

– Evan Anderson, Principal Software Engineer at Stacklok and Open Source Maintainer

“The Open Source Project Security Baseline is a vital tool for enhancing the security of open source projects. By offering a comprehensive set of actionable measures, the Security Baseline provides effective guidance for all stakeholders in the open source ecosystem – manufacturers, stewards, and projects alike – to collaboratively assume responsibility and take meaningful steps to secure the open source supply chain on which we all rely.”

– Per Beming, Chief Standardization Officer at Ericsson

***

About the OpenSSF

The Open Source Security Foundation (OpenSSF) is a cross-industry initiative by the Linux Foundation that brings together the industry’s most important open source security initiatives and the individuals and companies that support them. The OpenSSF is committed to collaboration and working both upstream and with existing communities to advance open source security for all. For more information, please visit us at openssf.org.

Media Contact
Noah Lehman
The Linux Foundation
nlehman@linuxfoundation.org

OpenSSF Community Day NA 2025: Call for Proposals Now Open!

By Blog

The Call for Proposals (CFP) for OpenSSF Community Day North America is officially open through March 23, 2025! Co-located with Open Source Summit North America, this event will bring the open source community together in Denver, Colorado, on June 26, 2025, for a full day of engaging discussions and presentations focused on securing the open source software (OSS) supply chain.

Submit your proposal now!

Event Details:

  • When: June 26, 2025
  • Where: Denver, Colorado
  • CFP Deadline: Sunday, March 23, 2025 at 11:59 PM MDT/10:59 PM PDT
  • CFP Notifications: Tuesday, April 1, 2025
  • Types of Presentations: 5, 10, 15, or 20-minute presentations

This is your opportunity to share your expertise and innovative ideas with the community! We’re looking for sessions on topics like:

  • AI & ML in Security
  • Regulatory Compliance
  • Enhancing Security Tools
  • Cyber Resilience
  • Securing the Software Supply Chain
  • Case Studies & Real-World Experiences

*No product/vendor sales pitches — it’s a community-focused event!

For more information on the CFP, visit here. Submit your proposal today!

Interested in Sponsorship? 

We have exciting opportunities available to showcase your support for securing the open source ecosystem. By sponsoring OpenSSF Community Day NA, you’ll gain visibility among key industry leaders, security experts, and the open source community. Join us in driving forward the mission to strengthen the OSS supply chain. Email us at openssfevents@linuxfoundation.org to reserve your sponsorship.

Join Us in Denver! 

Don’t miss out on the opportunity to be part of this vital conversation. Whether you’re submitting a proposal, attending as a participant, or showcasing your support through sponsorship, OpenSSF Community Day NA is the place to connect, collaborate, and contribute to securing the open source software supply chain. We can’t wait to see you in Denver and work together to advance the future of OSS security!