Preserving Open Source Sustainability While Advancing Cybersecurity Compliance

Reflections on Voluntary Attestation Models Under the Cyber Resilience Act

By Madalin Neag

Executive Summary

The Cyber Resilience Act (CRA) represents a significant evolution in the European Union’s approach to product cybersecurity and software supply chain risk. Article 25 explicitly recognizes the unique role of free and open source software (FOSS) and seeks to facilitate compliance for manufacturers by enabling voluntary security attestation programmes for FOSS.

However, recent discussions around self-attestation models raise concerns that certain implementations may unintentionally undermine the very balance that Article 25 was designed to achieve. Specifically, approaches that rely on upstream attestations by FOSS developers, projects, or stewards risk reintroducing liability pressures, economic distortions, and operational burdens that were deliberately excised from the CRA during the legislative process.

While security transparency and due diligence are essential to achieving the goals of the CRA, attestation-based compliance mechanisms must not shift legal or financial responsibility upstream to open source developers or their stewards. Instead, compliance efforts should focus on automation, verifiable metadata, and downstream accountability of the manufacturers of commercial systems.

This post reflects the author’s analysis and does not constitute legal advice.

Self-attestation model

Article 25 of the CRA empowers the European Commission to establish voluntary security attestation programmes to facilitate due diligence, particularly for manufacturers integrating FOSS components. Crucially, the article is framed to support manufacturers, not to impose new obligations on open source developers.

The legislative record surrounding the CRA makes clear that:

  • Open source developers are not normally manufacturers under the CRA.
  • Liability and conformity obligations intentionally rest downstream, with those who place products with digital elements on the market (including where those products include FOSS).
  • Open source’s “no warranties, no liabilities” model is a foundational principle that enables its scale, diversity, and sustainability.

Any interpretation of Article 25 that creates pressure – legal, contractual, or economic – on upstream FOSS projects risks deviating from both the letter and spirit of the regulation. In practice, this risk materializes most clearly through proposals advocating upstream self-attestation as a mechanism for CRA compliance. Security attestation forms, even when described as “voluntary,” function as statements of assurance. In regulated supply chains, such statements inevitably become inputs into contractual risk allocation, audits, and enforcement discussions.

Proposals that rely on upstream attestations being made by FOSS developers introduce structural and legal tensions. Individual contributors and maintainers typically have no control over how their software is ingested, configured, or integrated into downstream products. Likewise, open source stewards rarely direct day-to-day development decisions, nor do they have direct control over the production or release of all artifacts that may later be incorporated into regulated products. In this context, attestation mechanisms risk blurring the line between transparency and warranty, undermining the long-established “no warranties, no liabilities” foundation of open source licensing.

This dynamic is not merely theoretical. During the CRA legislative process, significant effort was invested in preserving a clear separation between open source development and regulatory liability. Any approach that implicitly shifts responsibility upstream, whether through formal declarations, contractual language, or quasi-certifications – risks reopening debates that were explicitly resolved during those negotiations.

At the same time, the practical structure of the open source ecosystem further complicates the feasibility of upstream attestations. Modern software projects routinely depend on hundreds or thousands of open source components, many of which are maintained by individuals or informal communities, operate outside any formal foundation, and lack the resources or mandate to engage in compliance-oriented documentation. Expecting these projects to participate in attestation-based schemes introduces friction without materially improving security outcomes.

A compliance model that expects manufacturers to collect signed attestations from upstream projects is not only inefficient, it is operationally infeasible at scale. It also disproportionately disadvantages the “long tail” of small but critical open source projects that underpin much of today’s infrastructure.

Some members of the community have suggested that open source stewards should issue formal security or compliance statements for projects they support, including publishing signed files, providing declarations to selected parties, or making such statements available through registries or logs. While stewards can play a constructive role by improving transparency, documentation, and tooling, requiring them to issue formal attestations crosses a critical boundary by effectively converting stewardship into a form of third-party assurance. Attestation implies assurance and risk management, which stewards are neither positioned nor authorized to assume.

Encouraging stewards to issue attestations is another way to enable manufacturers to shift compliance responsibility upstream, effectively treating non-profit entities as liability intermediaries. This is inconsistent with the CRA’s allocation of obligations to those who integrate and place products on the market. Stewards should not, under any circumstances, be expected to certify, warrant, or attest to compliance on behalf of downstream users.

The dangers of ‘Pay-to-comply’ models

Outside the OpenSSF and Linux Foundation ecosystem, some have proposed that upstream attestations be paired, explicitly or implicitly, with financial contributions to foundations or sponsorship requirements. While foundations require sustainable funding, compliance pressure must not become a revenue mechanism. When voluntary programmes are tethered to specific foundation memberships or fee-based services, they effectively create a ‘compliance tax’ on open source. This ‘pay-to-comply’ model risks bifurcating the open source ecosystem into ‘certified’ well-funded projects and ‘unreliable’ community projects, regardless of actual code quality. Such models also exclude independent maintainers and smaller non-profits, centralizing power in a few well-funded entities and distorting the competitive neutrality of the FOSS ecosystem.

Using regulatory obligations as leverage to extract fees:

  • fragments the open source supply chain,
  • creates further inequities between funded and unfunded projects,
  • incentivizes stewards to support the regulatory compliance efforts of only their paying members at the expense of the broader community, and
  • risks eroding trust between consumers and upstream communities.

A few OSS projects and distributions operate as commercial entities: they charge for software or services, generate revenue, and consequently act as manufacturers under the CRA. The regulation already handles these cases directly, and does so based on economic activity, not on whether the software is open source. These projects are important, but they are the minority. Most open source software is built by volunteers, informal communities, or small non-profits that were never intended to generate profit, let alone carry manufacturer-level compliance obligations. Pay-to-comply attestation models blur this line, pushing regulatory pressure upstream onto non-commercial projects and treating the entire ecosystem as if it were commercially uniform.

The CRA was not intended to act as an indirect funding instrument for intermediaries in the open source ecosystem. Framing upstream attestations as a paid obligation effectively reclassifies non-commercial maintainers as quasi-manufacturers, contradicting the intent of the CRA.

Lessons learnt from prior attestation efforts

The concept of security attestation of software is not new. Earlier initiatives, such as the U.S. CISA Secure Software Development Attestation Form, offer instructive lessons.

Although originally designed for proprietary software vendors, these forms were later suggested for use in open source contexts, often inappropriately and in disregard of the forms’ own instructions. Resistance from foundations and community leaders was grounded in several realities:

  • Signed paper attestations did not measurably improve security outcomes.
  • They imposed administrative burden without addressing root causes.
  • They encouraged symbolic compliance rather than meaningful risk reduction.
  • They shifted blame and risk for a manufacturer’s product decisions involving open source software onto entities or individuals who were not involved in those decisions and were not compensated to assume that risk.
  • The actual developers often work for different companies or contribute in their personal time. Those employed elsewhere are understandably unwilling to make attestations covering another company’s product decisions.
  • Foundations host open source project communities, but they generally do not have direct control over the technical decision-making of FOSS developers. Without such control – which would itself be harmful to the FOSS ecosystem – attestations would be substantively meaningless.

Notably, even in these contexts, such attestations were optional and limited in scope, yet still proved controversial and were almost universally rejected by open source communities. Reintroducing similar mechanisms within the CRA ecosystem risks repeating these shortcomings without addressing the structural differences between proprietary software vendors and open source development. 

Contracts examples included in standards

The principles outlined above are particularly relevant when considering how emerging standards intended to support the CRA implementation are interpreted and applied in practice. While standards play an important role in translating regulatory requirements into operational guidance, their structure, terminology, and illustrative examples can have significant downstream effects, especially for free and open source software (FOSS) ecosystems. Even non-binding guidance may influence expectations across complex supply chains.

Within the European standardization landscape, CEN-CENELEC JTC 13 WG 9 has been tasked with developing horizontal cybersecurity standards to support the CRA. One of the resulting deliverables is the draft standard 40000-1-2, Cybersecurity for Products with Digital Elements – Principles for Cyber Resilience. The standard addresses high-level cybersecurity activities across the entire product life cycle by defining the goals to be achieved, identifying mandatory and optional inputs, and describing minimum expected outcomes to support CRA due diligence.

Although EN 40000-1-2 does not impose legal obligations, it is expected to serve as a key reference for manufacturers, auditors, and market surveillance authorities when interpreting CRA requirements. As a result, even informative or voluntary elements within the standard may acquire practical normative weight in procurement, compliance assessments, and contractual discussions. This makes it particularly important to examine how illustrative mechanisms interact with the realities of open source development.

Against this background, Annex B of EN 40000-1-2 in its current form, which provides an illustrative example of a Cybersecurity Supplier Agreement (CSSA), merits closer examination when applied beyond its intended context. The example is described as “mainly applicable in a business-to-business context,” but this phrasing may be insufficiently precise. Without clearer scoping, readers may reasonably assume that the model is acceptable for all third-party components, including non-commercial open source software, despite the example not being designed for such use.

In practice, CSSAs presume identifiable suppliers with formal contractual relationships, the ability to define support periods, audit rights, and documentation obligations, and a level of centralized accountability that generally does not exist for open source projects. These assumptions sit uneasily with the decentralized, community-driven, and often informal nature of FOSS development, where maintainers do not act as suppliers in a commercial sense and do not control downstream integration or deployment.

Moreover, the CSSA model conflicts with the foundational principles of most open source licenses, which are explicitly based on “no warranties” and “no liabilities.” Introducing supplier-style agreements into this context risks blurring the distinction between voluntary transparency and legally meaningful assurances, potentially undermining long-standing expectations that enable open source collaboration and reuse at scale.

The illustrative responsibility-allocation tables included in Annex B further amplify these concerns. Annex B’s reliance on “Responsible” (R) designations in RACI-style tables for upstream contributors is a fundamental mismatch for FOSS. By assigning “Responsible” roles to open source “suppliers” for activities such as vulnerability verification, risk documentation, or lifecycle-related reporting, the example may be read as implying upstream accountability for actions that open source projects are neither positioned nor obligated to perform. It attempts to map the governance of a commercial corporation onto a distributed community that often is not even a legal entity.

This presumption that all OSS projects could be “suppliers” creates a “phantom liability”: a maintainer is labeled “responsible” in a document they never signed, for a product they do not sell. While such allocations may be appropriate in commercial supplier relationships, they do not reflect how responsibility is realistically distributed in typical open source ecosystems.

When applied, directly or indirectly, to open source components, the Annex B example risks normalizing expectations that typical upstream projects cannot reasonably meet, particularly given the scale and diversity of modern software dependencies. Even when framed as informative guidance, this may place undue pressure on manufacturers to seek agreements or assurances that many open source projects cannot provide, resulting in fragmented and inconsistent compliance practices.

An alternative path

Upstream security attestations and contractual assurances, such as those illustrated in Annex B of the 40000-1-2 standard draft, have significant limitations when applied to open source software. They assume identifiable suppliers, formal contractual relationships, and the ability to define support periods, audit rights, and documentation obligations – conditions that rarely exist for non-commercial open source projects.

There is a constructive alternative that aligns more closely with both CRA objectives and the realities of open source development. By using machine-readable signals (such as SLSA levels, OpenSSF Scorecard, OpenSSF Best Practices Badge, and OpenSSF Baseline), the burden of due diligence remains with the manufacturer, where the CRA legally places it, while providing manufacturers with the high-fidelity data they need. Rather than signed declarations, compliance efforts should emphasize:

  • build-time SBOMs generated automatically,
  • machine-readable security metadata,
  • public, verifiable attestations tied to artifacts (e.g., via Sigstore),
  • vulnerability disclosure and remediation signals (e.g., CSAF, VEX).
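To make the contrast with signed paper forms concrete, the sketch below shows how a downstream integrator might consume a build-time SBOM automatically. The JSON structure loosely follows CycloneDX conventions, but the document, component names, and versions are invented for illustration.

```python
import json

# A minimal SBOM fragment in CycloneDX-style JSON. The component
# names and versions here are invented for illustration only.
SBOM_JSON = """
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "components": [
    {"type": "library", "name": "libexample", "version": "2.4.1",
     "purl": "pkg:generic/libexample@2.4.1"},
    {"type": "library", "name": "parserkit", "version": "0.9.0",
     "purl": "pkg:generic/parserkit@0.9.0"}
  ]
}
"""

def list_components(sbom_text: str) -> list:
    """Return (name, version) pairs for every component in the SBOM."""
    sbom = json.loads(sbom_text)
    return [(c["name"], c["version"]) for c in sbom.get("components", [])]

if __name__ == "__main__":
    for name, version in list_components(SBOM_JSON):
        print(f"{name} {version}")
```

Because such inventories are generated at build time by tooling rather than signed by upstream maintainers, they scale with the dependency graph and impose no obligations on the projects being inventoried.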

An upstream attestation is a “snapshot in time”: by the time a manufacturer integrates a component, the attestation will most likely be stale. Such attestations also do not reflect the productization work that the manufacturer puts into integrating an upstream project, library, or SDK; they merely help support the due diligence the manufacturer must perform when ingesting third-party components. They are also temporally decoupled from the product life cycle: upstream software moves and changes at a much faster cadence than downstream manufacturers’ products and delivery schedules. Continuous metadata (VEX/CSAF), by contrast, provides a real-time security posture. Relying on the former provides a false sense of security; relying on the latter provides actual resilience data.
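The difference between a stale snapshot and continuous metadata can be sketched with a minimal VEX-style status lookup, where later statements supersede earlier ones. The statement fields loosely follow OpenVEX conventions; the vulnerability identifier and package URL are hypothetical.

```python
from datetime import datetime

# VEX-style statements, loosely modeled on OpenVEX fields. The CVE
# identifier and purl below are hypothetical examples.
STATEMENTS = [
    {"vulnerability": "CVE-2025-0001",
     "product": "pkg:generic/libexample@2.4.1",
     "status": "under_investigation",
     "timestamp": "2025-01-10T00:00:00"},
    {"vulnerability": "CVE-2025-0001",
     "product": "pkg:generic/libexample@2.4.1",
     "status": "not_affected",
     "timestamp": "2025-01-17T00:00:00"},
]

def current_status(statements, vuln_id, product):
    """Return the most recent status for (vuln_id, product), or None.

    Later statements supersede earlier ones -- this supersession is
    what makes the metadata 'continuous' rather than a snapshot.
    """
    matching = [s for s in statements
                if s["vulnerability"] == vuln_id and s["product"] == product]
    if not matching:
        return None
    latest = max(matching, key=lambda s: datetime.fromisoformat(s["timestamp"]))
    return latest["status"]
```

A one-time signed form would have frozen the “under_investigation” assessment; the continuous feed lets the integrator see the current “not_affected” determination instead.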

Focusing on secure consumption practices, rather than upstream attestations, reduces manual toil, scales naturally, and avoids placing personal or organizational liability on open source maintainers. Because the source code is fully and transparently available, anyone in the downstream ecosystem can perform for themselves the verification work an attestation would claim to cover. The primary compliance challenge under the CRA lies with those who integrate and deploy software into products, not with upstream open source projects or individual contributors who publish source code. Improving processes such as dependency selection, risk assessment, continuous monitoring, and update strategies provides far greater security benefits than collecting upstream paperwork.

Taken together, these observations suggest that effective CRA compliance should prioritize enabling tools, guidance, and workflows that downstream integrators can adopt to analyze and manage risk. Rather than relying on upstream attestations, contractual assurances, or manual forms, a sustainable approach should combine automation, transparency, and practical guidance. Emphasizing automated, verifiable metadata – such as build-time SBOMs, cryptographically verifiable artifact provenance, and standardized vulnerability disclosures – allows organizations to achieve meaningful, auditable security improvements while respecting the legal and operational realities of open source software.

Cybersecurity compliance obligations under the CRA attach to those who integrate and place products with digital elements on the market. Any sustainable compliance approach should therefore focus on downstream processes and controls, rather than requiring upstream developers or open source maintainers to certify, warrant, or attest to development practices. Open source projects provide inputs, not finished regulated products, and this separation preserves the CRA’s intentional allocation of liability while still enabling manufacturers to perform meaningful due diligence.

Within this framework, the open source ecosystem already produces a wide range of security signals and artifacts. A sustainable approach should rationalize existing signals and evangelize their use upstream, map them to regulatory expectations, avoid duplication, and prioritize interoperability between tools and standards. This reduces fragmentation, lowers adoption barriers, and ensures smaller projects are not overwhelmed by conflicting requirements.

Most supply chain risk arises not from the existence of open source components, but from how they are selected, integrated, updated, and monitored. Effective practices therefore emphasize structured dependency evaluation, continuous monitoring of vulnerabilities and upstream changes, documented update and remediation strategies, and collaboration among downstream consumers who share common dependencies. This focus ensures effort is directed where it can meaningfully improve security outcomes.
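As a small illustration of structured dependency evaluation, the sketch below checks Scorecard-style signals for a candidate dependency against an integrator’s own policy thresholds. The check names mirror OpenSSF Scorecard conventions, but the scores and thresholds are invented, and a real policy would be set by the downstream integrator, where the CRA places the due-diligence responsibility.

```python
# Scorecard-style check results (0-10) for a candidate dependency.
# Check names mirror OpenSSF Scorecard conventions; scores are invented.
CHECKS = {"Maintained": 8, "Vulnerabilities": 10, "Code-Review": 6}

# Minimum acceptable scores set by the downstream integrator's own
# policy -- a hypothetical example, not a recommended baseline.
POLICY = {"Maintained": 5, "Vulnerabilities": 7}

def evaluate(checks, policy):
    """Return the names of policy checks the dependency fails.

    A missing check counts as a score of 0, so absent signals
    fail closed rather than passing silently.
    """
    return [name for name, floor in policy.items()
            if checks.get(name, 0) < floor]
```

Running `evaluate(CHECKS, POLICY)` here yields an empty list (the dependency passes), while a dependency with no “Vulnerabilities” signal would fail that check; the point is that the consuming organization, not the upstream project, defines and applies the bar.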

Finally, guidance and tooling for open source compliance should be open, accessible, and voluntary. They should be available as digital public goods, usable by organizations of all sizes, implementable without excessive burden on upstream developers, and aligned with existing workflows. Upstream projects that voluntarily choose to enhance security posture should be supported with tools and guidance that are easy to adopt and produce outputs consumable by downstream users. Participation must remain voluntary, and compliance should not become a mechanism for privileging well-funded projects or extracting rent from the open source supply chain.

Grounded in these principles, a compliance approach should emphasize secure consumption, automation, and transparency, delivering meaningful security improvements while respecting the legal, economic, and social foundations of the open source ecosystem. Such an approach is more consistent with CRA objectives than upstream attestation schemes, which risk burdening maintainers and misallocating responsibility.

Conclusion

Voluntary attestation programmes under the CRA have the potential to support cybersecurity due diligence but only if they are designed with a clear understanding of open source realities. Models that rely on upstream self-attestation, contractualization, or monetization risk undermining the open source ecosystem and reintroducing liabilities that were deliberately excluded from the CRA.

A compliance strategy grounded in automation, transparency, and downstream accountability offers a more scalable, equitable, and effective path forward. Initiatives that uplift secure consumption practices rather than burden upstream developers are not only kinder to open source; they are more likely to achieve the CRA’s ultimate goal: improving cybersecurity outcomes across the digital supply chain. 

About the Author 

Madalin works as an EU Policy Advisor at OpenSSF focusing on cybersecurity and open source software. He bridges OpenSSF (and its community), other technical communities, and policymakers, helping position OpenSSF as a trusted resource within the global and European policy landscape. His role is supported by a technical background in R&D, innovation, and standardization, with a focus on openness and interoperability.