feat: make learnings search limits configurable via gstack-config #1514
Open
MrAladdin wants to merge 1 commit into
Conversation
Add two new config keys to ~/.gstack/config.yaml:
- learnings_preamble_limit (default: 3) — controls how many learnings
are loaded during every skill preamble
- learnings_skill_limit (default: 10) — controls how many learnings
are loaded inside skill workflows via {{LEARNINGS_SEARCH}}
Previously both values were hardcoded. Users with large context models
(200K+) can now increase limits to load more project memory per session.
Usage:
gstack-config set learnings_preamble_limit 20
gstack-config set learnings_skill_limit 30
Changes:
- bin/gstack-config: add defaults, header comment, list/defaults enum
- scripts/resolvers/preamble/generate-preamble-bash.ts: read limit
from config instead of hardcoded 3
- scripts/resolvers/learnings.ts: read limit from config instead of
hardcoded 10
- test/gen-skill-docs.test.ts: update assertion for dynamic limit
- All SKILL.md files regenerated
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
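The substitution described above (reading a limit from `gstack-config` instead of hardcoding it) can be sketched as a small shell fallback. This is an illustrative sketch, not code from the PR: the `get_limit` helper name and the final example call are assumptions; `3` and `10` are the documented defaults.

```shell
#!/bin/sh
# Sketch of the dynamic-limit pattern from this PR: read a limit from
# gstack-config, falling back to the documented default when the tool is
# unavailable or the key is unset. get_limit is a hypothetical helper name.
get_limit() {
  key="$1"
  default="$2"
  # Swallow errors so a missing tool or key degrades to the default.
  val="$(gstack-config get "$key" 2>/dev/null || true)"
  case "$val" in
    ''|*[!0-9]*) printf '%s\n' "$default" ;;  # empty or non-numeric: use default
    *)           printf '%s\n' "$val" ;;      # valid number: use configured value
  esac
}

# A generated preamble would then use something like:
#   learnings search --limit "$(get_limit learnings_preamble_limit 3)"
get_limit learnings_preamble_limit 3
```

On a machine without `gstack-config` on the PATH, the helper simply prints the default, which mirrors the PR's backwards-compatibility claim that unconfigured setups keep the old behavior.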
Summary
Adds `learnings_preamble_limit` and `learnings_skill_limit` config keys to `gstack-config` (stored in `~/.gstack/config.yaml`) instead of using hardcoded values (3 and 10).
Motivation
The hardcoded `--limit 3` (preamble) and `--limit 10` (skill) were designed for the lowest-common-denominator model context window. Users running Claude Opus, GPT-4.5, or other large-context models have ample room to load 20-30 learnings without meaningful token pressure (~2000 tokens for 20 entries). Making this configurable lets power users get more value from accumulated project memory.
Changes
- `bin/gstack-config`: add `learnings_preamble_limit` (default: 3) and `learnings_skill_limit` (default: 10) to the defaults table, header comments, and list/defaults enumeration
- `scripts/resolvers/preamble/generate-preamble-bash.ts`: replace the hardcoded `--limit 3` with a dynamic `$(gstack-config get learnings_preamble_limit)`
- `scripts/resolvers/learnings.ts`: replace the hardcoded `--limit 10` with a dynamic `$(gstack-config get learnings_skill_limit)`
- `test/gen-skill-docs.test.ts`: update the assertion from `--limit 3` to `--limit "$_LEARN_PREAMBLE_LIMIT"`
- All `*/SKILL.md` files regenerated
Usage
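For illustration, assuming `~/.gstack/config.yaml` stores flat `key: value` pairs (the file layout is not shown in this PR), raising both limits with `gstack-config set` as in the commit message would leave the file looking roughly like:

```yaml
# ~/.gstack/config.yaml (sketch; other keys omitted, exact layout assumed)
learnings_preamble_limit: 20   # learnings loaded in every skill preamble (default: 3)
learnings_skill_limit: 30      # learnings loaded via {{LEARNINGS_SEARCH}} (default: 10)
```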
Backwards compatibility
- Adds one `gstack-config get` call per preamble (~5 ms, same as existing config reads)
Test plan
- `bun test test/gen-skill-docs.test.ts`: 385 pass, 0 fail
- `bun test browse/test/gstack-config.test.ts`: 17 pass, 3 fail (pre-existing, unrelated)
- `gstack-config get/set/list/defaults` all display the new keys correctly
- `--host hermes` generation produces correct `$GSTACK_BIN/gstack-config get` calls

🤖 Generated with Claude Code