
fix: correct Codex context window (266K) and default fallback order#5

Merged
dorukardahan merged 1 commit into main from fix/factual-corrections
Feb 15, 2026
Conversation

@dorukardahan
Owner

Changes

Codex context window: 200K → 266K

The Codex 5.3 context window was listed as 200,000 tokens across benchmarks.json and SKILL.md. The actual value is 266,000 tokens (per OpenAI docs and confirmed in production configs).

Updated in:

  • benchmarks.json — context_window field
  • SKILL.md — model tier table
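
A minimal sketch of the corrected benchmarks.json entry. Only the openai-codex context_window value (266000) and the Intelligence scores cited in this PR are confirmed; the surrounding key structure (a top-level "models" object, an "intelligence" field) is an assumption for illustration:

```json
{
  "models": {
    "openai-codex": {
      "context_window": 266000,
      "intelligence": 51.5
    },
    "gemini-pro": {
      "context_window": 1000000,
      "intelligence": 48.4
    }
  }
}
```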

Default fallback order: Codex before Pro

The default fallback chain was Opus → Pro → Codex → Kimi. Reordered to Opus → Codex → Pro → Kimi.

Rationale: Codex has a higher Intelligence score (51.5) than Pro (48.4), making it a better general-purpose fallback. Pro is a specialist (GPQA 0.908, 1M context) — great when you specifically need research or long context, but Codex is the safer all-rounder when Opus is unavailable.

Updated in:

  • examples/full-stack/openclaw.json — defaults and main agent
  • references/provider-config.md — reference config block
  • SKILL.md — Full Stack fallback table
  • README.md — Cross-Provider Fallback Chains table
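
For concreteness, the reordered default might look like the sketch below. Only the order (Opus → Codex → Pro → Kimi) comes from this PR; the key names ("defaults", "model", "fallbacks") and model identifiers are assumptions, not the actual openclaw.json schema:

```json
{
  "defaults": {
    "model": "claude-opus",
    "fallbacks": ["openai-codex", "gemini-pro", "kimi-k2.5"]
  }
}
```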

Impact

No behavioral change for users with custom fallback chains. Only affects the default/example configs.


@dorukardahan left a comment


Thanks for the factual corrections — this PR materially improves the reliability docs. I reviewed the diff scope for README.md, SKILL.md, benchmarks.json, examples/full-stack/openclaw.json, and references/provider-config.md.

What looks good:

  • Codex context window is now consistently 266K in both SKILL.md and benchmarks.json.
  • Main-agent fallback order (Opus -> Codex -> Gemini Pro -> K2.5) is now aligned across README table, SKILL full-stack routing table, examples/full-stack/openclaw.json, and references/provider-config.md.
  • No secrets/private paths/credentials introduced (good for public repo).

Detailed consistency check:

  • benchmarks.json: openai-codex context_window updated to 266000
  • SKILL.md model catalog: Codex context updated to 266K
  • README.md fallback matrix: main row reordered to Codex before Gemini Pro
  • examples/full-stack/openclaw.json defaults + main agent fallbacks reordered
  • references/provider-config.md recommended config reordered

Potential follow-up (non-blocking):

  • There may still be stale fallback-order wording in narrative prose outside the changed blocks. I did not see contradictions in this PR's touched lines, but a repo-wide grep for the old order string (Gemini Pro before Codex in reasoning/main fallbacks) would prevent future drift.
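
A grep along these lines could catch that drift. The pattern string 'Pro -> Codex' is an assumption about how the old order appears in prose; adjust it to the repo's actual wording. The snippet below is a self-contained demo that fabricates one stale and one fixed file so the check can be seen working:

```shell
# Demo of the suggested repo-wide check for stale fallback wording.
mkdir -p /tmp/fallback-check
printf 'fallbacks: Opus -> Pro -> Codex -> Kimi\n' > /tmp/fallback-check/stale.md
printf 'fallbacks: Opus -> Codex -> Pro -> Kimi\n' > /tmp/fallback-check/fixed.md
# -r recurses, -n prints line numbers; any match means the old order survives
grep -rn 'Pro -> Codex' /tmp/fallback-check || echo 'no stale order found'
```

In the real repo the last line would run against the working tree root instead of /tmp/fallback-check.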

Overall: LGTM from a factual accuracy + cross-file consistency perspective.

@dorukardahan dorukardahan merged commit 11c8876 into main Feb 15, 2026
1 check passed
@dorukardahan dorukardahan deleted the fix/factual-corrections branch February 15, 2026 17:51