📖 Scorecard v6: OSPS Baseline conformance proposal and 2026 roadmap#4952
justaugustus wants to merge 26 commits into ossf:main
Conversation
Codecov Report ✅ All modified and coverable lines are covered by tests.

```
@@            Coverage Diff             @@
##             main    #4952      +/-   ##
==========================================
+ Coverage   66.80%   69.67%   +2.87%
==========================================
  Files         230      251      +21
  Lines       16602    15654     -948
==========================================
- Hits        11091    10907     -184
+ Misses       4808     3873     -935
- Partials      703      874     +171
```
justaugustus left a comment:
@ossf/scorecard-maintainers @ossf/scorecard-fe-maintainers @eddie-knight @puerco @evankanderson @mlieberman85 — based on conversations from this week with various WG ORBIT-adjacent maintainers, I'm tossing this early draft up for review.
Feel free to comment away while I work through this!
- Add AGENTS.md with project overview, build/test commands, architecture
guide, contribution conventions, and AI agent collaboration guidelines
(co-authorship trailer, OpenSpec workflow, git hygiene rules)
- Bootstrap openspec/ directory structure with initial specs:
- openspec/specs/platform-clients/spec.md: platform client abstraction
- openspec/changes/pvtr-integration/specs/pvtr-baseline/spec.md:
OSPS Baseline integration requirements and scenarios
- Incorporate guidance from the OSPO Engineering Playbook into AGENTS.md
Signed-off-by: Stephen Augustus <foo@auggie.dev>
Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Stephen Augustus <foo@auggie.dev>
The scope of this work is OSPS Baseline conformance within the ORBIT ecosystem — Privateer/PVTR interoperability is one aspect, not the whole story.

Signed-off-by: Stephen Augustus <foo@auggie.dev>
Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Stephen Augustus <foo@auggie.dev>
Complete rewrite of the proposal and spec to cover the full scope of the 2026 roadmap, not just Privateer/PVTR interoperability:
- Conformance engine producing PASS/FAIL/UNKNOWN/NOT_APPLICABLE/ATTESTED
- OSPS output format (--format=osps)
- Versioned control-to-probe mapping files
- Applicability engine for precondition detection
- Security Insights ingestion for ORBIT ecosystem interop
- Attestation mechanism for non-automatable controls
- Gemara Layer 4 compatibility
- CI gating support
- Phased delivery aligned with quarterly milestones
- ORBIT ecosystem positioning (complement PVTR, don't duplicate)

Highlights Spencer's review notes as numbered open questions (OQ-1 through OQ-4):
- OQ-1: Attestation identity model (OIDC? tokens? workflows?)
- OQ-2: Enforcement detection vs. being an enforcement tool
- OQ-3: scan_scope field usefulness in output schema
- OQ-4: Evidence should be probe-based only, not check-based

Renames spec subdirectory from pvtr-baseline to osps-conformance.

Signed-off-by: Stephen Augustus <foo@auggie.dev>
Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Stephen Augustus <foo@auggie.dev>
- Add openspec/specs/core-checks/spec.md and openspec/specs/probes/spec.md documenting existing Scorecard architecture for spec-driven development
- Update .gitignore to exclude roadmap drafting notes

Signed-off-by: Stephen Augustus <foo@auggie.dev>
Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Stephen Augustus <foo@auggie.dev>
Stephen's responses to clarifying questions (CQ-1 through CQ-8) and feedback on the proposal draft:
- Both scoring and conformance modes coexist; no deprecation needed now
- Target OSPS Baseline v2026.02.19 (latest), align with maintenance cadence
- Provide degraded-but-useful evaluation without Security Insights
- Invest in Gemara SDK integration for multi-tool consumption
- Prioritize Level 1 conformance; consume external signals where possible
- Approval requires Stephen + Spencer + 1 non-Steering maintainer
- Q2 outcome should be OSPS Baseline Level 1 conformance
- Land capabilities across all surfaces (CLI, Action, API)

Key changes requested:
- Correct PVTR references (it's the Privateer plugin, not a separate tool)
- Add Darnit and AMPEL comparison
- Replace quarterly timelines with phase-based outcomes
- Plan to extract Scorecard's control catalog for other tools
- Use Mermaid for diagrams
- Create separate OSPS Baseline coverage analysis in docs/
- Create docs/ROADMAP.md for public consumption

Signed-off-by: Stephen Augustus <foo@auggie.dev>
Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Stephen Augustus <foo@auggie.dev>
Changes based on Stephen's review:
- Replace all "PVTR" references with "Privateer plugin for GitHub repositories" — it's the Privateer plugin, not a separate tool
- Add ecosystem tooling comparison section covering Darnit (compliance audit + remediation), AMPEL (attestation-based policy enforcement), Privateer plugin (Baseline evaluation), and Scorecard (measurement)
- Replace quarterly timeline (Q1-Q4) with phase-based delivery (Phase 1-3) focused on outcomes, not calendar dates
- Update OSPS Baseline version from v2025-10-10 to v2026.02.19
- Convert ASCII ecosystem diagram to Mermaid
- Add Scorecard control catalog extraction to scope
- Add Gemara SDK integration to scope
- Update coverage snapshot to reference docs/osps-baseline-coverage.md (to be created with fresh analysis)
- Add approval process section based on governance answers
- Update Security Insights requirement to degraded-but-useful mode
- Add integration pipeline diagram (Scorecard -> Darnit -> AMPEL)

Signed-off-by: Stephen Augustus <foo@auggie.dev>
Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Stephen Augustus <foo@auggie.dev>
Create docs/osps-baseline-coverage.md with a control-by-control analysis of Scorecard's current probe coverage against OSPS Baseline v2026.02.19. Coverage summary: 8 COVERED, 17 PARTIAL, 31 GAP, 3 NOT_OBSERVABLE across 59 controls.

Create docs/ROADMAP.md with a publicly-consumable 2026 roadmap organized into three phases: conformance foundation + Level 1, release integrity + Level 2, and enforcement detection + Level 3.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Stephen Augustus <foo@auggie.dev>
…h CQ-12

Remove reference to docs/roadmap-ideas.md from the coverage analysis document since it is not committed to the repo.

Add four new clarifying questions to the proposal: NOT_OBSERVABLE controls in Phase 1 (CQ-9), mapping file ownership (CQ-10), OSPS output schema stability guarantees (CQ-11), and Phase 1 probe gap prioritization (CQ-12).

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Stephen Augustus <foo@auggie.dev>
Replace \n with <br/> in Mermaid node labels so line breaks render correctly in GitHub's Mermaid renderer.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Stephen Augustus <foo@auggie.dev>
Replace remaining "Darn" references with "Darnit" throughout the proposal.

Add Minder to the ecosystem comparison table, integration diagram, and "What Scorecard must not do" section. Minder is an OpenSSF Sandbox project in the ORBIT WG that consumes Scorecard findings for policy enforcement and auto-remediation.

Add CQ-13 (Minder integration surface) and CQ-14 (Darnit vs. Minder delineation) as new clarifying questions. Update docs/ROADMAP.md ecosystem alignment to include Minder.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Stephen Augustus <foo@auggie.dev>
Add a new section to docs/osps-baseline-coverage.md listing existing Scorecard issues and PRs that are directly relevant to closing OSPS Baseline coverage gaps, including:
- ossf#2305 / ossf#2479 (Security Insights)
- #30 (secrets scanning)
- ossf#1476 / ossf#2605 (SBOM)
- ossf#4824 (changelog)
- ossf#2465 (private vulnerability reporting)
- ossf#4080 / ossf#4823 / ossf#2684 / ossf#1417 (signed releases)
- ossf#2142 (threat model)
- ossf#4723 (Minder/Rego integration, closed)

Add CQ-15 asking whether existing issues should be adopted as Phase 1 work items or whether new issues should reference them.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Stephen Augustus <foo@auggie.dev>
Remove openspec system specs (core-checks, platform-clients, probes) that were scaffolding for documenting existing Scorecard architecture. These are not part of the OSPS conformance proposal and can be recreated if needed.

Remove docs/roadmap-ideas.md from .gitignore.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Stephen Augustus <foo@auggie.dev>
Force-pushed from f0b229d to afe4c8d
Add Allstar (Scorecard sub-project) to the ecosystem comparison table, integration flow diagram, and ORBIT ecosystem diagram. Allstar continuously monitors GitHub orgs and enforces Scorecard checks as policies with auto-remediation, and already enforces controls aligned with OSPS Baseline (branch protection, security policy, binary artifacts, dangerous workflows).

Add Allstar to "Existing Scorecard surfaces that matter" section and to docs/ROADMAP.md ecosystem alignment.

Add CQ-16 asking whether Allstar should be an explicit Phase 1 consumer of OSPS conformance output, and whether it is considered part of the enforcement boundary Scorecard does not cross.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Stephen Augustus <foo@auggie.dev>
Update the commit guidelines to make the -s flag requirement unambiguous. Add a complete commit message format example showing how to combine the HEREDOC pattern with -s for DCO sign-off and the Co-Authored-By trailer.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Stephen Augustus <foo@auggie.dev>
Force-pushed from 4f5c6ce to bd76d94
Hey @justaugustus, thanks for leading this collaboration! Looking forward to hammering this out. Some things to clarify:
An alternative plan would be for us to spend a week consolidating checks/probes into the pvtr plugin (with relevant CODEOWNERS), then update Scorecard to selectively execute the plugin under the covers. This would allow us to:
justaugustus left a comment:
Reviewed this in today's Steering meeting and I think this is moving in the right direction, based on the initial feedback.
@spencerschrock and @jeffmendoza intend to leave reviews.
@GeauxJD @SecurityCRob — for awareness.
puerco left a comment:
Amazing, two nits and two comments but it LGTM.
> - **Duplicate policy enforcement or remediation.** Downstream tools — [Privateer](https://github.com/ossf/pvtr-github-repo-scanner), [Minder](https://github.com/mindersec/minder), [AMPEL](https://github.com/carabiner-dev/ampel), [Darnit](https://github.com/kusari-oss/darnit), and others — consume Scorecard evidence through published output formats. Scorecard *produces* findings and attestations; downstream tools enforce, remediate, and audit.
> - **Privilege any downstream consumer.** All tools consume Scorecard output on equal terms. No tool has a special integration relationship.
> - **Turn OSPS controls into Scorecard checks.** OSPS conformance is a layer that consumes existing Scorecard signals, not 59 new checks.
This point is key. Baseline looks for outcomes. Compliance can be supported by Scorecard probe data.
The baseline control can be a 1:1 map to a probe's data, other times it will be a composite set of probes. If you add new probes to look for something new that's useful to test a baseline control, we just need to add another composition definition to say OSPS-XX-XXX can be [probe X] or [probe set 1] or [probe set 2].
This is akin to the way checks work now, but by generalizing it, the probe data can inform other framework testing tools, beyond baseline.
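As a rough illustration of that composition idea, a versioned mapping file entry might express "OSPS-XX-XXX can be [probe X] or [probe set 1] or [probe set 2]" along these lines (the control ID, probe names, and schema below are placeholders for illustration, not a committed format or real Scorecard probes):

```yaml
# Sketch of one entry in a versioned control-to-probe mapping file.
# The control is satisfied if any one listed probe set is satisfied.
- control: OSPS-XX-XXX
  satisfied-by:
    - probes: [probeX]                 # 1:1 mapping
    - probes: [probeA, probeB]         # composite probe set 1
    - probes: [probeC, probeD, probeE] # composite probe set 2
```

Adding a new way to satisfy a control would then be a one-line mapping change rather than a code change.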
> This is akin to the way checks work now, but by generalizing it, the probe data can inform other framework testing tools, beyond baseline.
Agreed. The current composition of probes comprises the "Scorecard checks":
Lines 79 to 165 in 4dbf142
"probe sets" or "compositions" seems like the right way to approach it without introducing too much additional vocabulary (or layers of complexity).
> 4. Evaluation logic is self-contained — Scorecard can produce conformance results using its own probes and mappings, independent of external evaluation engines
I'm assuming conformance here means "framework compliance".
This is cool, but also ensure that Scorecard's view of the world can be used at the check and probe level to enable projects and organizations to evaluate adherence to other frameworks. Especially useful for internal/unpublished variants (profiles) of frameworks that organizations define.
Agreed — the conformance engine should support arbitrary frameworks and organizational profiles. The probe findings are framework-agnostic by design; OSPS Baseline is just the first (non-"checks") evaluation layer over them.
The same probe evidence can be composed differently for other frameworks, as you suggested above.
We'll make this explicit in the proposal.
> - In-toto predicates (SVR first; track [Baseline Predicate PR #502](https://github.com/in-toto/attestation/pull/502))
> - Gemara output (transitive dependency via security-baseline)
> - OSCAL Assessment Results (using [go-oscal](https://github.com/defenseunicorns/go-oscal))
> - Existing Scorecard predicate type (`scorecard.dev/result/v0.1`) preserved; new predicate types added as options
The current predicate type is the full scorecard run evaluation. For completeness' sake, it would be nice to have one type for a list of check evaluations and one for probe evaluations.
These are only useful, though, if they have more data than what an SVR has to offer, so I would wait until there is an actual need for them.
Right. Probe-level findings are available via --format=probe but have no in-toto wrapper today. Agree that dedicated predicate types for check and probe evaluations are worth considering once there's a concrete need beyond what SVR provides.
Maybe we assume that you want a check-based or probe-based predicate type depending on the run and output options the user provides?
This might suggest the need for a --framework or --evaluation option?
> 5. **Applicability engine** — detects preconditions (e.g., "has made a release") and outputs NOT_APPLICABLE
> 6. **Metadata ingestion layer** — supports Security Insights as one source among several for metadata-dependent controls (OSPS-BR-03.01, BR-03.02, QA-04.01). Architecture invites contributions for alternative sources (SBOMs, VEX, platform APIs). No single metadata file is required for meaningful results.
> 7. **Attestation mechanism (v1)** — accepts repo-local metadata for non-automatable controls (pending OQ-1 resolution)
> 8. **Scorecard control catalog extraction** — plan and mechanism to make Scorecard's control definitions consumable by other tools
From reading the proposal, wouldn't Scorecard rather become a consumer of control catalogs?
Both directions — Scorecard would consume the OSPS Baseline catalog (via security-baseline) for [one type of] conformance evaluation, and Scorecard's own probe definitions (probes/*/def.yml) are already machine-readable YAML with structured metadata.
The "extraction plan" is about packaging those existing definitions for consumption, so that other parts of the Scorecard codebase or external tools like AMPEL can discover what Scorecard evaluates and compose mappings against it, if needed.
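To make the conformance and applicability pieces being discussed concrete, here is a minimal Go sketch of how an evaluation step might label a control. Every type, probe name, and control ID here is hypothetical — the real engine, mapping schema, and probe vocabulary are exactly what the proposal leaves open:

```go
package main

import "fmt"

// Outcome is a conformance label, per the proposal's
// PASS/FAIL/UNKNOWN/NOT_APPLICABLE vocabulary.
type Outcome string

const (
	Pass          Outcome = "PASS"
	Fail          Outcome = "FAIL"
	Unknown       Outcome = "UNKNOWN"
	NotApplicable Outcome = "NOT_APPLICABLE"
)

// Finding is a simplified probe result: true/false, or nil when the
// probe could not observe anything (e.g. insufficient token scope).
type Finding *bool

// Mapping ties one control to the probes whose findings must all be
// true for the control to PASS, plus an optional precondition probe
// that the applicability engine checks first.
type Mapping struct {
	Control      string
	Probes       []string
	Precondition string // empty = always applicable
}

// Evaluate labels a single control from probe findings.
func Evaluate(m Mapping, findings map[string]Finding) Outcome {
	if m.Precondition != "" {
		pre, ok := findings[m.Precondition]
		if !ok || pre == nil || !*pre {
			return NotApplicable
		}
	}
	for _, p := range m.Probes {
		f, ok := findings[p]
		if !ok || f == nil {
			return Unknown // no evidence either way
		}
		if !*f {
			return Fail
		}
	}
	return Pass
}

func main() {
	yes, no := true, false
	findings := map[string]Finding{
		"hasRelease":        Finding(&yes),
		"releasesAreSigned": Finding(&no),
	}
	m := Mapping{
		Control:      "OSPS-XX-XXX", // hypothetical control ID
		Probes:       []string{"releasesAreSigned"},
		Precondition: "hasRelease",
	}
	// A release exists (applicable) but is unsigned, so the control fails.
	fmt.Println(m.Control, Evaluate(m, findings))
}
```

The point of the sketch is the separation of concerns: the applicability check happens before any probe evidence is weighed, and missing evidence yields UNKNOWN rather than FAIL.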
Add framework-agnostic conformance language, probe composition model (1:1 and many-to-1 mappings), bidirectional catalog framing, and future design concepts (framework CLI option, probe-level predicate type). Log feedback and responses in decisions.md.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Stephen Augustus <foo@auggie.dev>
FelixOliverLange left a comment:
Really cool to see in general, as it may also help make output more understandable for users (e.g., pass/fail statements). From my perspective, it would be important to preserve the valuable insights generated by Scorecard, which is also why I have a question about how checks with fewer mappings feed back into the ecosystem. But of course, I'm also only a user of Scorecard.
> - Multi-repo project-level conformance aggregation
> - Attestation integration GA
>
> ### Ecosystem alignment
When it comes to ecosystem alignment, will existing Scorecard checks like "Maintained" also feed back into other parts of the ecosystem (as right now, "Maintained" is mainly referenced as a precondition)? Checks like "Maintained" can help users of open-source projects to, e.g., identify abandoned projects, so it would be nice to preserve them in a prominent manner.
> But of course, I'm also only a user of Scorecard.
@FelixOliverLange — User feedback is important to let us know whether we're building the right thing. I appreciate you taking the time to comment! 🚀
Existing checks (like Maintained) would be fully preserved; check scores (0-10) and conformance labels (PASS/FAIL/UNKNOWN) are parallel evaluation layers over the same probe evidence. (See the three-tier evaluation model in the proposal.)
Checks that don't map to OSPS Baseline controls would continue to produce scores as they do today.
The Maintained check's probes also serve double duty: the conformance layer uses them as preconditions
(via the applicability engine) and as evidence toward maintenance-related controls.
No existing check would be deprioritized or removed.
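A toy sketch of that "parallel layers over the same evidence" idea, in Go. The aggregation formulas here are invented for illustration only — Scorecard's actual check scoring is considerably more nuanced:

```go
package main

import "fmt"

// score is a toy proportional 0-10 aggregation over probe findings.
func score(findings []bool) int {
	pass := 0
	for _, f := range findings {
		if f {
			pass++
		}
	}
	return pass * 10 / len(findings)
}

// label is a toy strict conformance evaluation over the same findings.
func label(findings []bool) string {
	for _, f := range findings {
		if !f {
			return "FAIL" // strict: any negative finding fails the control
		}
	}
	return "PASS"
}

func main() {
	// One probe run, two parallel evaluations of the same evidence:
	evidence := []bool{true, true, true, false}
	fmt.Printf("score=%d label=%s\n", score(evidence), label(evidence))
}
```

The same slice of evidence yields a partial-credit score and a strict pass/fail label, which is the sense in which the two layers are parallel rather than one replacing the other.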
Add Scorecard v6 framing with "Why v6" section, single-run architectural constraint, confidence scoring future concept, and Scorecard user feedback section (FL-1 through FL-4) from community meeting.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Stephen Augustus <foo@auggie.dev>
I am VERY supportive of this effort. Thanks for taking the time to think this through, and thanks to all the contributors for the thoughtful and respectful conversation! scorecard++
> ### Ecosystem alignment
>
> Scorecard operates within the ORBIT WG ecosystem as an evidence engine. All
Why does Scorecard need to operate within the ORBIT WG ecosystem? Perhaps add a bit about what the ORBIT WG ecosystem is - that may clear it up.
> ## Summary
>
> **Mission:** Scorecard produces trusted, structured security evidence for the open source ecosystem. _(Full MVVSR to be developed as a follow-up deliverable
Suggestion: Expand MVVSR here.
> proves this architecture, and the central initiative for Scorecard's 2026 roadmap.
>
> This is fundamentally a **product-level shift** — the defining change for
A bit of a general question: what are the reasons for adding framework conformance to Scorecard itself, instead of having a standalone tool to which we can feed Scorecard findings and which then gives a verdict about framework conformance?
> ### What Scorecard SHOULD NOT do
>
> Scorecard SHOULD NOT (per [RFC 2119](https://www.rfc-editor.org/rfc/rfc2119)) duplicate evaluation that downstream tools handle. There may be scenarios where
Would be good to have a definition of downstream tools here.
> ## Architecture
>
> ### Processing model
Is this ("Processing model") the current dataflow and the following section "Three-tier evaluation model" the intended?
spencerschrock left a comment:
As a whole, I support the evidence-based focus. I like the 6 listed design principles.
I didn't quite get a chance to go through all of the decisions.md file, as it's quite a lot of feedback to parse through.
> Draft PR that attempted to run Minder Rego rules within Scorecard, including OSPS-QA-05.01 and QA-03.01. Closed due to inactivity but demonstrates interest in deeper Minder/Scorecard integration.
Is this going to drift into policy/enforcement? Does this conflict with our goal of evidence only?
> | OSPS-AC-01.01 | MFA for sensitive resources | NOT_OBSERVABLE | None | Requires org admin API access; Scorecard tokens typically lack this. Must be UNKNOWN unless org-admin token is provided. |
> | OSPS-AC-02.01 | Least-privilege defaults for new collaborators | NOT_OBSERVABLE | None | Requires org-level permission visibility. Must be UNKNOWN. |
I'd say this could be observable if it just needs the right token. If this is run in the context of an OSPO self-observation I think it's fine.
> Check scores (0-10) and conformance labels (PASS/FAIL/UNKNOWN) are parallel evaluation layers over the same probe evidence, produced in a single run.
to be clear, in this situation "evaluation" or "conformance" layer, just means output format?
> - Two-layer mapping model for OSPS Baseline v2026.02.19:
>   - Check-level relations contributed upstream to security-baseline
>   - Probe-level mappings maintained in Scorecard
What's the value in upstreaming check-level relations? I think mapping probes to baseline controls is fine.
> - Secrets detection — consuming platform signals where available
> - Metadata ingestion layer — Security Insights as first supported source; architecture supports additional metadata sources
> - Scorecard control catalog extraction plan
>
> The current version of the OSPS Baseline is [v2026.02.19](https://baseline.openssf.org/versions/2026-02-19).
>
> We should align with the latest version at first and have a process for aligning with new versions on a defined cadence. We should understand the [OSPS Baseline maintenance process](https://baseline.openssf.org/maintenance.html) and align with it.
What sort of maintenance toil will this involve? Do existing controls get updated, or just new ones added?
> We need to land these capabilities for as much surface area as possible.
I would say the cron has additional barriers: the cost of writing/serving more data. I have no concerns with the Action.
> The versioned mapping file (e.g., `pkg/osps/mappings/v2026-02-19.yaml`) is a critical artifact that defines which probes satisfy which OSPS controls. Who should own this file? Options:
Which repo does this mapping file live in?
> ### Design principles
I agree with these principles
> # AGENTS.md
>
> This file provides guidance for AI coding agents working on the OpenSSF Scorecard project.
This seems unrelated to this change, other than I assume you used it to generate some of these docs?
> acceptable data dependency for control definitions (see Scope).
>
> **Flexibility:** Under this structure, scaling back to a fully independent model (Option A) remains straightforward — deprioritize or drop specific output

> applicability engine all live in Scorecard)
> 3. Interoperability is purely at the output layer — Gemara, in-toto, SARIF, OSCAL are presentation formats, not architectural dependencies
> 4. Evaluation logic is self-contained — Scorecard can produce conformance
Not sure if this is entirely correct. Currently, I wouldn't say that Scorecard can produce conformance results, but perhaps I am misunderstanding the context of "constraints" here: are these current constraints, or constraints that should exist once the conformance layer is in Scorecard?
> AMPEL's role), perform compliance auditing and remediation (Darnit's role), or guarantee compliance with any regulatory framework.
>
> ## Success criteria
It would be nice to make this more explicit: what are the success criteria here, for the proposal or for the implementation?
justaugustus left a comment:
@spencerschrock @AdamKorcz — thanks so much for the feedback!
I'm working on some changes locally and will push them up by tomorrow.
What kind of change does this PR introduce?
Documentation: Scorecard v6 proposal and 2026 roadmap.
Scorecard v6 evolves Scorecard from a scoring tool to an open source security
evidence engine. The primary initiative for 2026 is adding
OSPS Baseline conformance evaluation as the
first use case that proves this architecture.
Mission: Scorecard produces trusted, structured security evidence for the
open source ecosystem.
Scorecard accepts diverse inputs about a project's security practices,
normalizes them through probe-based analysis, and packages the resulting
evidence in interoperable formats for downstream tools to act on. Check scores
(0-10) and conformance labels (PASS/FAIL/UNKNOWN) are parallel evaluation
layers over the same probe evidence, produced in a single run. v6 is additive —
existing checks, probes, scores, and output formats are preserved.
The goal of this PR is to create a collaboration/decision-making nexus for
Scorecard and WG ORBIT tooling maintainers to ensure that we build interfaces
that easily interact with other tools and minimize duplication of work across
our maintainers and others in the OpenSSF ecosystem.
Key changes that warrant a major version:
OSPS Baseline or other frameworks via pluggable mapping definitions
alongside existing JSON and SARIF
artifact for external tools
Key decisions resolved:
and conformance evaluation; interoperability at the output layer only
Predicate), Gemara (via security-baseline), OSCAL Assessment Results
in security-baseline, probe-level mappings in Scorecard
AMPEL, Minder, Darnit) are equal
Open questions requiring Steering Committee resolution:
OQ-1/CQ-22: Attestation identity model (blocking)
OQ-2: Enforcement detection scope
PR title follows the guidelines defined in our pull request documentation
What is the current behavior?
Scorecard produces 0-10 check scores and structured probe findings. There is no
OSPS Baseline conformance evaluation capability and no public 2026 roadmap.
What is the new behavior (if this is a feature change)?
This PR adds documentation only (no code changes):
- docs/ROADMAP.md — Public 2026 roadmap with phased delivery plan
- openspec/changes/osps-baseline-conformance/proposal.md — Detailed proposal covering architecture, scope, phased delivery, and ecosystem positioning
- openspec/changes/osps-baseline-conformance/decisions.md — Reviewer feedback, open questions, maintainer responses, and decision priority analysis
Tests for the changes have been added (for bug fixes/features)
N/A — documentation only.
Which issue(s) this PR fixes
NONE
Special notes for your reviewer
This PR is structured as two companion documents:
scope, phased delivery, ecosystem positioning, success criteria)
responses, and the decision priority analysis
The control-by-control coverage analysis
is maintained separately.
Feedback from Eddie Knight (ORBIT WG TSC Chair), Adolfo García Veytia (AMPEL),
Mike Lieberman, and Felix Lange has been incorporated. See decisions.md for the
full feedback log and how it informed the proposal.
Does this PR introduce a user-facing change?