Replies: 3 comments
Update (March 2026): This discussion body was updated to connect the token schema enums to the design system registry.
Update (March 2026): Added cascade resolution and multi-dimensional modes sections; updated semantic complexity (non-default facets only); refreshed cross-links to #714, #715, and the coordination doc.
Update (April 2026): Status clarification on this RFC's relationship to the published spec. The analytical model proposed here (nested
The relationship is documented in Resolved open questions from this RFC.
This RFC remains open as a historical reference for the analytical model that shaped the spec.
RFC: Token Schema Structure and Validation System
Status: Draft - Implementation Complete
Author: Garth Braithwaite
DACI: [To be assigned]
Implementation: PR #644
Related: DNA-1485, RFC #714: Spectrum Design Data Specification (umbrella), RFC #715: Distributed Design Data Architecture, RFC #625, RFC #626, RFC #661: Spectrum Design System Glossary, Registry | Spectrum Design Data, RFC coordination doc
Executive Summary
This RFC proposes a comprehensive schema structure for all Spectrum design tokens, transforming hyphen-delimited token names into structured JSON objects with full validation capabilities. This provides the foundation for advanced tooling including token recommendations, automated documentation, and cross-platform transformation.
Problem: Current token structure uses hyphen-delimited names with implicit meaning, no validation of naming conventions, and limited ability to query or analyze tokens systematically. This makes it difficult to build tooling, enforce governance, or provide semantic guidance.
Solution: Implement structured token format with JSON Schema validation, controlled vocabularies (enums), semantic analysis capabilities, and perfect round-trip conversion. Complete implementation provided in PR #644.
Results: All 2,338 tokens across 8 files successfully parsed and validated with 100% regeneration rate and 82% schema validation coverage.
Background & Context
Origin
Current Token Format
```json
{
  "text-to-visual-50": {
    "$schema": "https://opensource.adobe.com/spectrum-design-data/schemas/token-types/dimension.json",
    "value": "4px",
    "uuid": "f1bc4c85-c0dc-44bf-a156-54707f3626e9"
  }
}
```

Limitations:
Design Data System Vision
From the onsite presentation:
Proposal
Structured Token Format
Transform token names into structured objects with full semantic information:
```json
{
  "id": "f1bc4c85-c0dc-44bf-a156-54707f3626e9",
  "$schema": "https://opensource.adobe.com/spectrum-design-data/schemas/token-types/dimension.json",
  "value": "4px",
  "name": {
    "original": "text-to-visual-50",
    "structure": {
      "category": "spacing",
      "property": "spacing",
      "spaceBetween": { "from": "text", "to": "visual" },
      "index": "50"
    },
    "semanticComplexity": 1
  },
  "validation": { "isValid": true, "errors": [] }
}
```

Token Categories
Nine primary token categories identified across all tokens, with representative example tokens:

- text-to-visual-50
- button-height-100
- corner-radius-100
- accent-color-100 → {blue-800}
- blue-800
- blue-800, transparent-blue-800
- gradient-stop-1-red
- bold-font-weight, sans-font-family

Schema Architecture
Base Schema Hierarchy
Enum Schemas (Controlled Vocabularies)
12 enum schemas define allowed values for token name parts:
Total controlled vocabulary: 800+ values ensuring consistency
Connection to Design System Registry: The @adobe/design-system-registry package and Registry docs provide the canonical source for Spectrum terminology (sizes, states, components, anatomy terms, platforms, etc.). These enum schemas should be validated against or derived from the registry where they overlap, so that token naming and glossary definitions share a single source of truth. See RFC #661 for the glossary proposal and alignment details.
Semantic Complexity Metric
Measures how much semantic context a token provides (0-3+).
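As a hedged sketch, the metric can be computed by counting only the structured facets that add semantic context; the facet names, the set of dimension defaults, and which keys count as "non-semantic" below are assumptions for illustration, not the spec's definitions.

```javascript
// Illustrative semantic-complexity count. DIMENSION_DEFAULTS and
// NON_SEMANTIC_KEYS are assumed values, not the actual spec enums.
const DIMENSION_DEFAULTS = { colorScheme: "light", scale: "desktop", contrast: "regular" };
const NON_SEMANTIC_KEYS = new Set(["category", "color", "index", "options"]);

function semanticComplexity(structure) {
  let score = 0;
  for (const [facet, value] of Object.entries(structure)) {
    if (NON_SEMANTIC_KEYS.has(facet)) continue; // base facets add no semantic context
    if (facet in DIMENSION_DEFAULTS && DIMENSION_DEFAULTS[facet] === value) continue; // default dimension value does not count
    score += 1;
  }
  return score;
}
```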
Rule (updated): Count only non-default structured facets that contribute to meaning, including dimensions when not at their declared default. Properties at default dimension values do not increase semanticComplexity.

Examples:
- gray-100: complexity 0 (base palette, no semantic context)
- background-color-default: complexity 1 (semantic alias with property)
- button-background-color-default: complexity 2 (component + property + alias)
- button-control-background-color-hover: complexity 3+ (component + anatomy + property + state)

Use Case: Token recommendation systems can suggest more semantically specific tokens:
- "Instead of blue-800, consider accent-color-100 (more semantic)"
- "button-background-color-default (most specific for this use case)"

Validation Strategy
Schema-Driven Validation
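A hand-rolled sketch of what enum-driven validation checks; the real implementation uses the JSON Schema files under packages/structured-tokens/schemas/enums/, and the vocabularies below are truncated samples, not the actual enums.

```javascript
// Illustrative controlled-vocabulary check. ENUMS is sample data, not the
// real 800+ value vocabulary shipped in the enum schemas.
const ENUMS = {
  component: ["button", "checkbox", "slider"],
  property: ["background-color", "size", "spacing"],
  state: ["default", "hover", "down", "focus"],
};

function validateStructure(structure) {
  const errors = [];
  for (const [facet, value] of Object.entries(structure)) {
    // only facets with a controlled vocabulary are checked here
    if (ENUMS[facet] && !ENUMS[facet].includes(value)) {
      errors.push(`${facet}: "${value}" is not in the controlled vocabulary`);
    }
  }
  return { isValid: errors.length === 0, errors };
}
```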
Validation Levels
Current Validation Results
Anonymous Token Array Structure
Tokens stored as array of objects (not keyed by name):
Why:
Before (keyed by name):
```json
{ "blue-800": { "value": "#1473E6", "uuid": "..." } }
```

After (anonymous array):
```json
[
  {
    "id": "550e8400-e29b-41d4-a716-446655440000",
    "value": "#1473E6",
    "name": {
      "original": "blue-800",
      "structure": { "category": "color-base", "color": "blue", "index": "800" },
      "semanticComplexity": 0
    }
  }
]
```

Round-Trip Verification
Critical Requirement: Structured format must perfectly regenerate original token names.
Implementation:
Results: 100% match rate (2,338/2,338 tokens) - zero data loss
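The round-trip check can be sketched as follows; the real regeneration uses per-category Handlebars templates, so the ordering rules in this hypothetical regenerateName are a simplified assumption.

```javascript
// Simplified name regeneration: rebuild the hyphenated name from the
// structure and compare against name.original. Part ordering is assumed.
function regenerateName(structure) {
  const parts = [];
  if (structure.spaceBetween) parts.push(structure.spaceBetween.from, "to", structure.spaceBetween.to);
  if (structure.component) parts.push(structure.component);
  if (structure.anatomyPart) parts.push(structure.anatomyPart);
  if (structure.color) parts.push(structure.color);
  if (structure.property && structure.property !== "spacing") parts.push(structure.property);
  if (structure.options) parts.push(...structure.options);
  if (structure.index) parts.push(structure.index);
  return parts.join("-");
}

function verifyRoundTrip(token) {
  return regenerateName(token.name.structure) === token.name.original;
}
```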
Example Template (spacing-token.hbs):
Proposed evolution: cascade resolution model
Token shape
Specificity (within a layer)
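As a hedged sketch, within-layer specificity counts the cascade dimensions set to a non-default value; the defaults object here is an assumption for illustration.

```javascript
// Illustrative specificity count: dimensions at their declared default do
// not contribute. DEFAULTS is assumed, not the spec's declaration data.
const DEFAULTS = { colorScheme: "light", scale: "desktop", contrast: "regular" };

function specificity(dimensions = {}) {
  return Object.entries(dimensions).filter(
    ([dim, value]) => dim in DEFAULTS && value !== DEFAULTS[dim]
  ).length;
}
```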
If contrast defaults to regular, then colorScheme: dark alone is more specific than the default, but dark + regular contrast does not count regular toward specificity.

Layers
Migrating from sets

A legacy color-set entry such as light/dark/wireframe becomes separate cascade tokens that differ only by structured colorScheme, each with a single value. Similarly, scale-set (desktop/mobile) becomes tokens differing by structured scale. This is a mechanical split; validation must ensure coverage per dimension declarations (below).

Note:
system-set (spectrum/express) is not present in current token data; removal of that schema is housekeeping, not a data migration.

Multi-dimensional modes
Dimension declarations
The foundation publishes dimensions (e.g. colorScheme, scale; future: contrast, language, motion). Each dimension declares its allowed values (colorScheme: light, dark, wireframe; scale: desktop, mobile).

Exact JSON for declarations is TBD; enums should stay aligned with @adobe/design-system-registry / #661 where terminology overlaps.
Cross-dimensional ambiguity
When teams author overrides in more than one dimension (e.g. both colorScheme and contrast), the spec requires explicit combination tokens for the full cross-product that is actually used; validators reject ambiguous partial grids. This keeps resolution implicit (no tie-break priority list) while remaining fast to validate (checks are local to a token family).

Validation performance
Coverage and combination checks can run per token family (tokens sharing the same base structure without non-default dimensions) and incrementally on file change, suitable for watch / on-save CLI workflows.
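A per-family ambiguity check along these lines is one way to implement the rule; this sketch assumes a hypothetical dimensions field and considers only colorScheme and contrast.

```javascript
// Illustrative full-grid check for one token family: if both colorScheme
// and contrast are used, every (colorScheme, contrast) pair that appears
// must be authored explicitly, so resolution never needs a tie-break.
function isUnambiguous(tokens) {
  const schemes = new Set(), contrasts = new Set(), pairs = new Set();
  for (const t of tokens) {
    const d = t.dimensions || {};
    if (d.colorScheme) schemes.add(d.colorScheme);
    if (d.contrast) contrasts.add(d.contrast);
    if (d.colorScheme && d.contrast) pairs.add(d.colorScheme + "|" + d.contrast);
  }
  // a family using only one dimension has nothing to cross
  if (schemes.size === 0 || contrasts.size === 0) return true;
  for (const s of schemes) {
    for (const c of contrasts) {
      if (!pairs.has(s + "|" + c)) return false; // missing explicit combination
    }
  }
  return true;
}
```

Because the check only looks at one family's tokens, it can run incrementally per file change, matching the watch / on-save workflow described above.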
Implementation
Complete Implementation: PR #644
Package 1: packages/structured-tokens/

Package 2: tools/token-name-parser/

Parser Capabilities
Pattern Detection:
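One of the patterns the parser detects, sketched as a standalone rule; parseSpaceBetween is a hypothetical helper covering only the indexed space-between form, whereas the real parser (tools/token-name-parser/src/parser.js) handles many more patterns.

```javascript
// Illustrative rule-based detection of the "<from>-to-<to>-<index>" pattern.
function parseSpaceBetween(name) {
  const m = name.match(/^([a-z]+)-to-([a-z]+)-(\d+)$/);
  if (!m) return null; // not a space-between token
  return {
    category: "spacing",
    property: "spacing",
    spaceBetween: { from: m[1], to: m[2] },
    index: m[3],
  };
}
```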
Example Parsing:
Input: checkbox-control-size-small

```json
{ "category": "component-property", "component": "checkbox", "anatomyPart": "control", "property": "size", "options": ["small"] }
```

Input: text-to-visual-compact-medium

```json
{ "category": "spacing", "property": "spacing", "spaceBetween": { "from": "text", "to": "visual" }, "options": ["compact", "medium"] }
```

Input: accent-color-100 (references {blue-800})

```json
{ "category": "semantic-alias", "property": "accent-color-100", "referencedToken": "blue-800", "notes": "Semantic alias providing contextual naming" }
```

Usage Examples
Query Tokens by Category
Find High-Complexity Tokens
Track Token References
Validate Token Names
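The kinds of queries these sections describe can run directly over the anonymous token array; the tokens below are sample data shaped like the structured format shown earlier, not real Spectrum data.

```javascript
// Sample structured tokens (field names follow the format above).
const tokens = [
  { id: "a", name: { original: "blue-800", structure: { category: "color-base" }, semanticComplexity: 0 } },
  { id: "b", name: { original: "accent-color-100", structure: { category: "semantic-alias", referencedToken: "blue-800" }, semanticComplexity: 1 } },
  { id: "c", name: { original: "button-background-color-default", structure: { category: "component-property" }, semanticComplexity: 2 } },
];

// Query tokens by category
const aliases = tokens.filter(t => t.name.structure.category === "semantic-alias");

// Find high-complexity (more semantically specific) tokens
const specific = tokens.filter(t => t.name.semanticComplexity >= 2);

// Track references to a given token
const refs = tokens.filter(t => t.name.structure.referencedToken === "blue-800");
```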
Benefits & Use Cases
1. Token Recommendation Systems
Enabled by semantic complexity metric and reference tracking
Use Case: IDE plugin suggests semantic alternatives
2. Automated Documentation Generation
Enabled by structured data and queryable format
Use Case: Generate token catalog by category
3. Design System Governance
Enabled by schema validation and controlled vocabularies
Use Case: CI/CD validation of token PRs
4. Cross-Platform Token Transformation
Enabled by structured format and perfect round-trip
Use Case: Transform tokens for different platforms
5. Token Migration & Deprecation
Enabled by reference tracking and semantic analysis
Use Case: Identify tokens to migrate
6. Foundation for Future RFCs
Directly enables other proposed RFCs
Alternatives Considered
Alternative 1: Keep Hyphenated Names Only
Pros: No change, existing tooling works
Cons: Can't build advanced tooling, no governance, limited querying
Decision: Rejected - doesn't meet future needs
Alternative 2: Use DTCG Format Directly
Pros: Standard format, external tool support
Cons: Doesn't capture Spectrum-specific semantics (anatomy, space-between), loses semantic complexity
Decision: Considered for future (RFC #627 proposes DTCG as additional output)
Alternative 3: Object with Names as Keys
Pros: Familiar structure, easy lookup by name
Cons: Can't have duplicate names across themes, harder round-trip
Decision: Rejected - anonymous array provides more flexibility
Alternative 4: AI/LLM-Based Parsing
Pros: Could handle more edge cases
Cons: Non-deterministic, harder to validate, slower
Decision: Rejected - rule-based parsing with schemas is more reliable
Migration & Adoption
Phase 1: Non-Breaking Addition (Complete in PR #644)
- packages/tokens/src/
- packages/structured-tokens/
- tools/token-name-parser/

Phase 2: Tooling Integration (Next)
Phase 3: Authoring Workflow (Future)
Phase 4: Platform Transformation (Future)
No breaking changes to existing token consumers.
Success Metrics
Achieved in PR #644:
Future Success Metrics:
Known Limitations & Future Work
455 Special Tokens (19.5%)
Tokens that regenerate correctly but need additional schemas:
Categories:
- component-xs-regular bundles multiple font properties
- drop-shadow-emphasized has complex structure
- swatch-border-opacity: direct opacity values
- button-minimum-width-multiplier: calculation-based
- android-elevation needs platform schema

Future Schemas Needed:
- typography-composite-token.json
- drop-shadow-composite-token.json
- multiplier-token.json

Impact: These tokens work correctly (100% regeneration) but show as "special" in validation reports.
Edge Cases
- Multi-word anatomy parts (focus-indicator, side-label-character-count)
- Compound option values (compact-extra-large)
- Migration from sets and exact declaration JSON still in progress

Performance Considerations
Open Questions
- Migrating color-set/scale-set JSON to cascade tokens without breaking existing consumers.

Resolved direction (was open): Multi-dimensional modes and platform extensions are specified at the spec level in #714, #715, and this RFC (cascade + dimensions); remaining work is schema + tooling implementation.
Related Work & References
GitHub Discussions
Jira Tickets
Implementation
Documentation (in PR #644)
- FINAL_PROJECT_SUMMARY.md - Complete project overview
- ICONS_RESULTS.md - Icons parsing results (100% validation)
- TYPOGRAPHY_RESULTS.md - Typography parsing results (95.2% validation)
- LAYOUT_COMPONENT_RESULTS.md - Layout component results (70.3% validation)
- COLOR_FINAL_RESULTS.md - All color files summary
- SEMANTIC_COMPLEXITY.md - Semantic complexity metric documentation
- ROUND_TRIP_VERIFICATION.md - Round-trip conversion verification

Decision Points
For Approval
For Discussion
Next Steps
Immediate (Post-Approval)
Short-term (1-2 months)
Medium-term (3-6 months)
Long-term (6-12 months)
Appendix
Appendix A: Complete Token Category Definitions
See full documentation in PR #644:
- packages/structured-tokens/schemas/ - All schema definitions
- packages/structured-tokens/schemas/enums/ - All enum definitions
- tools/token-name-parser/templates/ - Regeneration templates

Appendix B: Validation Reports
Complete validation reports available in PR #644:
- tools/token-name-parser/output/[filename]-validation-report.json

Appendix C: Parser Implementation
Full parser source:
- tools/token-name-parser/src/parser.js - Token name parsing logic (838 lines)
- tools/token-name-parser/src/validator.js - Schema validation (242 lines)
- tools/token-name-parser/src/name-regenerator.js - Name regeneration (98 lines)

Appendix D: Test Coverage
All tests passing:
- tools/token-name-parser/test/parser.test.js
- tools/token-name-parser/test/name-regenerator.test.js
- tools/token-name-parser/test/name-comparator.test.js
- tools/token-name-parser/test/semantic-complexity.test.js

Feedback & Discussion
Please provide feedback on:
This RFC is open for discussion and feedback before moving to approval.