We use the Developer Certificate of Origin (DCO) instead of a CLA.
All commits must include a Signed-off-by: line.
Use git commit -s to add this automatically.
We use Safe Trunk Based Development. Direct pushes to main are blocked.
If you don't have write access to the repo, use the fork-based workflow:
```shell
# 1. Fork the repo on Codeberg to your account

# 2. Clone YOUR fork (not upstream)
git clone https://codeberg.org/YOUR-USER/hypergumbo.git
cd hypergumbo

# 3. Add upstream remote
git remote add upstream https://codeberg.org/iterabloom/hypergumbo.git

# 4. Set credentials (in .env or exported)
export FORGEJO_USER=your-username
export FORGEJO_TOKEN=your-token
```

Then, for each contribution:
```shell
# Sync with upstream
git fetch upstream
git checkout dev
git merge upstream/dev

# Create feature branch
git checkout -b yourname/feat/description

# Make changes, commit with sign-off
git commit -s -m "feat: description"

# Create PR to upstream
./scripts/contribute
```

The contribute script pushes to your fork and creates a PR to upstream. Unlike auto-pr, it does not auto-merge; you'll need to wait for maintainer review.
| Aspect | Maintainer (auto-pr) | Contributor (contribute) |
|---|---|---|
| Push target | Upstream directly | Your fork |
| PR creation | `refs/for/dev/branch` | Fork → upstream/dev PR |
| CI polling | Waits and auto-merges | Exits after PR creation |
| Merge | Automatic on CI pass | Requires maintainer approval |
After your PR is merged, sync your fork:
```shell
git checkout dev
git fetch upstream
git merge upstream/dev
git push origin dev
```

The project uses a YAML-backed structured tracker (see ADR-0013) with three visibility tiers: canonical (shared with upstream), workspace (local to your fork), and stealth (never committed).
Initial setup (run once after cloning your fork):
```shell
# Set tracker scope to workspace so your agent isn't blocked by upstream items
scripts/tracker fork-setup
```

This sets `stop_hook.scope: workspace` in your local config, meaning the stop hook only counts your workspace items (not upstream's canonical items) when deciding whether the agent can stop.
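A rough model of what that scope setting changes (the item shape and `tier`/`open` field names here are illustrative assumptions, not the real tracker schema):

```python
def items_blocking_stop(items, scope="workspace"):
    """Return the open items the stop hook counts under the given scope."""
    if scope == "workspace":
        # Fork setup: only your own workspace items can block the agent.
        return [i for i in items if i["tier"] == "workspace" and i["open"]]
    # Default behavior: upstream's canonical items count too.
    return [i for i in items if i["tier"] in ("workspace", "canonical") and i["open"]]

items = [
    {"id": "INV-001", "tier": "canonical", "open": True},
    {"id": "WS-002", "tier": "workspace", "open": True},
    {"id": "WS-003", "tier": "workspace", "open": False},
]
print([i["id"] for i in items_blocking_stop(items, "workspace")])  # ['WS-002']
```

With `scope: workspace`, the open canonical item INV-001 no longer prevents the agent from stopping.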
How it works:
- Your agent writes tracker items to `.agent/tracker-workspace/` by default
- Upstream's canonical items in `.agent/tracker/` are read-only context
- `./scripts/contribute` automatically excludes workspace files from upstream PRs
- The pre-push hook warns if you accidentally push workspace files to upstream
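The exclusion and warning logic reduces to a path-prefix check over the outgoing files. A minimal illustrative sketch (the real hook gets the file list from git; here it is passed in, and the function name is ours):

```python
def workspace_leaks(changed_paths):
    """Paths that belong to the local workspace tier and should not go upstream."""
    return [p for p in changed_paths if p.startswith(".agent/tracker-workspace/")]

paths = [
    ".agent/tracker/INV-001.yaml",            # canonical: fine to push
    ".agent/tracker-workspace/WS-002.yaml",   # workspace: should stay local
    "src/main.py",
]
for leak in workspace_leaks(paths):
    print(f"WARNING: workspace file in outgoing push: {leak}")
```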
Promoting items to canonical:
If your workspace item should become part of upstream's tracker (e.g., a discovered invariant that affects the whole project), promote it:
```shell
scripts/tracker promote <ITEM-ID>
```

This moves the item from workspace to canonical. Include the promoted item in a separate PR.
Two-user permissions: If your deployment uses separate OS users for the human and the agent (see the tracker README setup), run `htrac setup` to diagnose permission issues and verify with the Permission Test Playbook.
We provide a script that handles pushing, waiting for CI, and merging automatically.
```shell
./scripts/auto-pr "feat: my change title"
```

See auto-pr Documentation below for details.
If you prefer manual control without installing CLI tools like tea:
- Commit changes to a feature branch.
- Push via AGit:

  ```shell
  # Replace 'feature-branch' with your actual branch name
  git push origin HEAD:refs/for/dev/feature-branch \
    -o title="feat: description" \
    -o description="Extended details..."
  ```

- Wait for CI (status check: `CI / pytest (pull_request)`).
- Merge via the Codeberg web UI.
This repository uses dual licensing:
- AGPL-3.0 for the core hypergumbo tool (`packages/hypergumbo-core/`, `packages/hypergumbo-lang-*/`)
- MPL-2.0 for the tracker package (`packages/hypergumbo-tracker/`)
SPDX headers: Every .py source file must include a license header as its first line:
```python
# SPDX-License-Identifier: AGPL-3.0-or-later
```

or for the tracker package:

```python
# SPDX-License-Identifier: MPL-2.0
```

DCO covers both licenses. Your Signed-off-by line certifies you have the right to submit under the applicable license. No CLA is required.
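If you want to verify the header rule locally, a small one-off script can do it. This is an illustrative sketch, not a project script (in particular, it does not know which packages are MPL vs AGPL, so it accepts either header):

```python
from pathlib import Path

ALLOWED_HEADERS = {
    "# SPDX-License-Identifier: AGPL-3.0-or-later",
    "# SPDX-License-Identifier: MPL-2.0",
}

def missing_spdx(root):
    """Return .py files under root whose first line is not an allowed SPDX header."""
    bad = []
    for path in sorted(Path(root).rglob("*.py")):
        lines = path.read_text(encoding="utf-8").splitlines()
        if not lines or lines[0].strip() not in ALLOWED_HEADERS:
            bad.append(path)
    return bad
```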
Codeberg is the source of truth for issues/PRs. GitHub is a mirror.
When you run pytest, you automatically get smart-test, which runs only tests affected by your changes. This speeds up the development cycle.
```shell
pytest                    # Runs affected tests only (via smart-test)
pytest --full             # Runs complete test suite
pytest tests/test_foo.py  # Specific file (still goes through smart-test)
pytest --version          # Fast-path, no smart-test overhead
```

```
pytest invocation
               │
               ▼
┌─────────────────────────────────────┐
│ .venv/bin/pytest (wrapper)          │
│ - Fast-path for --version, --help   │
│ - Checks SMART_TEST_ACTIVE env var  │
│ - Delegates to scripts/smart-test   │
└──────────────┬──────────────────────┘
               │
               ▼
┌─────────────────────────────────────┐
│ scripts/smart-test                  │
│ - Sets SMART_TEST_ACTIVE=1          │
│ - Finds affected tests via slice    │
│ - Calls pytest with affected tests  │
└──────────────┬──────────────────────┘
               │
               ▼
┌─────────────────────────────────────┐
│ .venv/bin/pytest (wrapper again)    │
│ - Sees SMART_TEST_ACTIVE=1          │
│ - Calls python -m pytest directly   │
└─────────────────────────────────────┘
```
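The routing decision in the first box can be modeled in a few lines. A simplified sketch (illustrative only; the real wrapper is a shell script, and the exact fast-path flag list is an assumption):

```python
import os

FAST_PATH_FLAGS = {"--version", "--help", "-h"}  # assumed fast-path flags

def dispatch(argv, env=None):
    """Model of the wrapper's routing decision for a pytest invocation."""
    env = os.environ if env is None else env
    if any(arg in FAST_PATH_FLAGS for arg in argv):
        return "real-pytest"  # fast path: skip smart-test entirely
    if env.get("SMART_TEST_ACTIVE") == "1":
        return "real-pytest"  # already inside smart-test: avoid recursion
    return "smart-test"       # normal case: delegate to scripts/smart-test

print(dispatch(["--version"], {}))                   # real-pytest
print(dispatch(["-q"], {"SMART_TEST_ACTIVE": "1"}))  # real-pytest
print(dispatch(["tests/test_foo.py"], {}))           # smart-test
```

The `SMART_TEST_ACTIVE` check is what lets the same wrapper serve as both the entry point and the inner pytest call without looping.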
If pip install pytest overwrites the wrapper, it repairs automatically on the next pytest run:
```
pip install pytest  →  Overwrites wrapper with pip's entry point
               │
               ▼
pytest invocation
               │
               ▼
┌─────────────────────────────────────┐
│ conftest.py (pytest_configure)      │
│ - Detects wrapper is broken         │
│ - Runs: install-hooks --repair      │
│ - Prints: "wrapper repaired"        │
│ - Re-execs through fixed wrapper    │
└─────────────────────────────────────┘
```
```shell
python -m pytest ...                    # Bypass wrapper completely
SMART_TEST_ACTIVE=1 pytest ...          # Skip smart-test, use real pytest
./scripts/install-hooks --repair-shims  # Manual repair
```

When you run install-hooks, you may see:
```
⚠️ No stable hypergumbo found
smart-test will fall back to running full test suite
```
This means smart-test can't compute affected tests, so it runs everything (~10 min). To enable fast test selection (~30 sec for small changes), install a stable hypergumbo release outside your dev venv:
```shell
# Install stable hypergumbo via pipx (recommended)
pipx install hypergumbo

# Or via pip --user (if pipx not available)
python3 -m pip install --user hypergumbo
```

This installs the PyPI release to `~/.local/bin/hypergumbo`, which smart-test uses for `slice --files` analysis.
Why a separate install? This is a bootstrap safety measure: we use a known-good release to analyze the code under development, avoiding the "testing hypergumbo with itself" paradox. See ADR-0010 for design rationale.
```shell
# Fast parallel run with coverage (goes through smart-test)
pytest -n auto --cov=src --cov-fail-under=100

# Sequential run (for debugging)
pytest --cov=src --cov-fail-under=100

# Bypass smart-test for full control
SMART_TEST_ACTIVE=1 pytest -n auto --cov=src --cov-fail-under=100
```

Some tests require sentence-transformers (which depends on PyTorch) for testing embedding-based features like `sketch_precomputed`.
CI behavior: CI does not attempt to install sentence-transformers. It only checks if it's already available. If not installed, embedding-dependent tests are skipped. This keeps CI fast and avoids flaky installs on resource-constrained runners.
Local development: To run embedding tests locally, install sentence-transformers:
```shell
pip install sentence-transformers
```

This works on most development machines where PyTorch wheels are available (Linux x86_64, macOS, Windows).
Test that requires embeddings: `test_run_behavior_map_stores_sketch_precomputed` in `packages/hypergumbo-core/tests/test_cli_run_behavior_map.py` exercises the `sketch_precomputed` feature, which uses embeddings. Other tests pass `include_sketch_precomputed=False` to skip embedding-dependent code paths.
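Gating a test on the optional dependency can follow the standard pytest skipif pattern; this sketch shows the shape of such a guard (the placeholder test body is not the real test):

```python
import importlib.util

import pytest

# True only when the optional embedding stack is importable.
HAS_EMBEDDINGS = importlib.util.find_spec("sentence_transformers") is not None

@pytest.mark.skipif(not HAS_EMBEDDINGS, reason="sentence-transformers not installed")
def test_sketch_precomputed_roundtrip():
    # Illustrative placeholder body; the real test exercises sketch_precomputed.
    assert HAS_EMBEDDINGS
```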
We maintain 100% test coverage. Two coverage configurations exist:
| Config | Used When | Omits |
|---|---|---|
| Default (`pyproject.toml`) | Local dev, CI with embeddings | Embedding-only modules |
| `.coveragerc.no-embeddings` | CI without embeddings (Python 3.10 matrix) | Embedding-only modules + `_embedding_data.py` |
When to use `# pragma: no cover`:

- Embedding-only code paths: code that only executes when sentence-transformers is available:

  ```python
  if embeddings_dir:  # pragma: no cover - only when embeddings available
      print(f"Embeddings cached: {embeddings_dir}")
  ```

- Python 3.11+ features: CI tests on Python 3.10, so 3.11+ features aren't covered:

  ```python
  try:
      import tomllib  # pragma: no cover - Python 3.11+
      data = tomllib.loads(content)  # pragma: no cover
  except ImportError:
      data = None  # fallback for Python 3.10
  ```

- Optional dependencies: code paths using non-required packages:

  ```python
  try:
      import tomli  # pragma: no cover - optional dependency
      data = tomli.loads(content)  # pragma: no cover
  except ImportError:
      data = None  # fallback
  ```
Release workflow note: Multi-Python testing (3.10-3.13) and integration tests run nightly (nightly.yml), not during release. release.yml checks whether nightly already covered the release SHA — if so, it skips those jobs; otherwise they run post-publish as verification. The 3.10 jobs use .coveragerc.no-embeddings since sentence-transformers may not be available. If you add new embedding-related code, ensure it's either omitted or marked with pragma.
The scripts/auto-pr script automates the entire PR lifecycle: push, CI polling, and merge.
- Push: pushes your branch using Forgejo's AGit workflow (`refs/for/dev/<branch>`)
- Create PR: the push automatically creates a PR on Codeberg
- Poll CI: waits for CI status checks to complete (polls every 10 seconds)
- Merge: uses fast-forward merge by default (preserves commit bodies + DCO)
- Cleanup: switches back to `dev`, pulls, and deletes the local feature branch
If the remote is unavailable (503 errors, network issues), the PR is queued locally and can be pushed later with ./scripts/auto-pr flush.
Default: fast-forward merge. Your original commits are preserved exactly as-is, including:

- Full commit message bodies
- DCO `Signed-off-by` lines
- Unchanged commit SHAs
If the branch has diverged, the script will prompt you to rebase first:

```shell
git fetch origin dev
git rebase origin/dev
./scripts/auto-pr
```

Emergency fallback: `--squash`. Use it only if fast-forward fails, rebasing fails, and it's an emergency:

```shell
./scripts/auto-pr --squash
```

This will:
- Warn and ask for confirmation
- Squash merge with `[from <sha>]` in the subject for traceability
- Attach the original commit body as a git note on the squash commit
- Push notes to remote
Some historical commits (Jan 9-22 2026) were squash-merged before fast-forward was the default. Their original bodies are preserved as git notes.
```shell
# Fetch notes from remote (quoted so the shell doesn't expand the glob)
git fetch origin 'refs/notes/*:refs/notes/*'

# View commits with notes
git log --show-notes

# Show note for specific commit
git notes show <sha>
```

Environment variables (set in shell or .env file in repo root):
| Variable | Required | Description |
|---|---|---|
| `FORGEJO_USER` | Yes | Your Codeberg username |
| `FORGEJO_TOKEN` | Yes | Codeberg API token with repo write access |

You can create a .env file (git-ignored) in the repo root:

```shell
FORGEJO_USER=your_username
FORGEJO_TOKEN=your_token_here
```

Prerequisites:
- Must be on a feature branch (not `main` or `dev`)
- Branch must have commits ahead of `dev`
- `python3`, `curl`, and `git` must be available
```shell
# Basic usage (uses last commit message as PR title)
./scripts/auto-pr

# Custom PR title
./scripts/auto-pr "feat: add new feature"

# Custom title and description
./scripts/auto-pr "feat: add new feature" "Detailed description here"

# Force squash merge (emergency only - use only if fast-forward and rebasing both fail)
./scripts/auto-pr --squash

# Queue management (when remote is unavailable)
./scripts/auto-pr list   # Show queued PRs
./scripts/auto-pr flush  # Push queued PRs when remote is back

./scripts/auto-pr --help # Show all options
```

When the script creates a PR, it writes the PR number to `.git/PR_PENDING`. This file signals to automated systems (like AI coding agents) that a PR is in flight and they should wait before starting new work.

For agents: check for this file before starting new tasks:

```shell
test -f .git/PR_PENDING && echo "PR in progress, wait..."
```

The file is automatically removed when the PR is merged.
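From Python-based tooling, the same gate is a one-line file check; a minimal sketch (the function name is ours, not part of the scripts):

```python
from pathlib import Path

def pr_in_flight(repo_root="."):
    """True while auto-pr has a PR open; the marker is removed on merge."""
    return (Path(repo_root) / ".git" / "PR_PENDING").exists()
```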
```shell
# 1. Start from dev
git checkout dev && git pull

# 2. Create a feature branch
git checkout -b my-feature

# 3. Make changes and commit
git add . && git commit -s -m "feat: my change"

# 4. Run auto-pr
./scripts/auto-pr

# 5. Script handles everything; you end up back on dev with changes merged
```

When Codeberg is unavailable, auto-pr automatically:
- Retries up to 3 times with 10-second delays
- If still failing, queues as a vPR (virtual PR) in `.git/PR_QUEUE`
- Exits cleanly so you can continue working
vPR workflow:
- vPRs form a linear chain (each branches from the previous)
- To add more changes: `git checkout -b new-branch $(./scripts/auto-pr status | grep tip | awk '{print $3}')`
- When Codeberg is back: `./scripts/auto-pr flush`
- Flush pushes ALL vPRs as a single atomic PR (individual commits preserved)
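The chain invariant can be pictured with a toy model (the dict shape is an assumption, not the actual .git/PR_QUEUE format):

```python
def enqueue(queue, branch, parent):
    """Append a vPR; each new branch must start from the current chain tip."""
    tip = queue[-1]["branch"] if queue else "dev"
    if parent != tip:
        raise ValueError("vPRs must form a linear chain")
    queue.append({"branch": branch, "parent": parent})

queue = []
enqueue(queue, "fix-a", "dev")
enqueue(queue, "fix-b", "fix-a")
# Flushing pushes the tip: its history already contains every queued commit.
print(queue[-1]["branch"])  # fix-b
```

Because each vPR branches from the previous tip, pushing the final branch carries the whole chain, which is why flush can open a single atomic PR.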
| Error | Cause | Solution |
|---|---|---|
| `FORGEJO_USER not set` | Missing env var | Add to .env or export in shell |
| `You are on 'main'` | Must use feature branch | `git checkout -b feature-name` |
| `Could not find open PR` | API timing issue | Wait and retry, or check Codeberg UI |
| `CI Failed` | Tests/linting failed | Fix issues, amend commit, re-run |
| `Merge failed` | Conflicts or permissions | Check PR on Codeberg for details |
| `PR queued locally` | Remote unavailable | Run `./scripts/auto-pr flush` when remote is back |
This script uses Forgejo's AGit flow, which creates PRs via specially-formatted push refs:
```shell
git push origin HEAD:refs/for/dev/<branch-name> -o title="..." -o description="..."
```

This is different from GitHub's flow, where you push a branch and then create a PR separately. With AGit, the push is the PR creation.
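The refspec shape is mechanical enough to express as a helper; a tiny illustrative sketch (not part of the project's scripts):

```python
def agit_refspec(target_branch, topic):
    """Build the AGit push refspec: the push itself opens the PR."""
    return f"HEAD:refs/for/{target_branch}/{topic}"

print(agit_refspec("dev", "my-feature"))  # HEAD:refs/for/dev/my-feature
```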
Hypergumbo includes a vendor-agnostic hook system for AI coding agents operating in autonomous mode. These hooks enforce reflection before stopping, helping prevent workarounds and ensuring structural fixes.
See ADR-0008 for full design rationale.
```
.agent/
├── hooks/
│   ├── claude-code/stop.sh        # Claude Code Stop hook
│   ├── gemini-cli/after-agent.sh  # Gemini CLI AfterAgent hook
│   ├── cursor/stop.sh             # Cursor stop hook
│   └── codex-cli/notify.sh        # Codex CLI notification (limited)
├── stop_reflect.md                # Reflection prompt (shared)
├── LOOP                           # Sentinel file (exists = continue)
└── tracker/                       # Structured tracker (ADR-0013)
```
Add to .claude/settings.json:
```json
{
  "hooks": {
    "Stop": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "\"$CLAUDE_PROJECT_DIR\"/.agent/hooks/claude-code/stop.sh"
          }
        ]
      }
    ]
  }
}
```

Add to `.gemini/config.yaml`:
```yaml
hooks:
  AfterAgent:
    - type: command
      command: .agent/hooks/gemini-cli/after-agent.sh
```

Add to `.cursor/hooks.json`:
```json
{
  "version": 1,
  "hooks": {
    "stop": [
      {
        "command": ".agent/hooks/cursor/stop.sh",
        "loop_limit": 10
      }
    ]
  }
}
```

Add to `~/.codex/config.toml`:

```toml
notify = [".agent/hooks/codex-cli/notify.sh"]
```

Note: Codex CLI has limited hook support; it can only notify, not block or inject prompts.
- Activation: hooks only engage when `AUTONOMOUS_MODE.txt` contains "TRUE" and `.agent/LOOP` exists
- Reflection: when the agent tries to stop, the hook injects `stop_reflect.md` as guidance
- Invariant tracking: discovered invariants are managed via `scripts/tracker` (ADR-0013)
- Structural fixes: the reflection protocol encourages fixing root causes, not workarounds
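The activation rule translates directly into code; a sketch of the check a hook performs (file locations from the layout above; the exact string matching is an assumption):

```python
from pathlib import Path

def hooks_active(repo_root="."):
    """Hooks engage only when autonomous mode is on AND the LOOP sentinel exists."""
    root = Path(repo_root)
    mode = root / "AUTONOMOUS_MODE.txt"
    loop = root / ".agent" / "LOOP"
    return mode.is_file() and mode.read_text().strip() == "TRUE" and loop.is_file()
```

Both conditions are required: deleting either file (which is what the toggle script does) disengages the governance hooks.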
To disable governance hooks temporarily, use the toggle script:
```shell
./scripts/loop-toggle off     # Disable autonomous mode
./scripts/loop-toggle on     # Re-enable autonomous mode
./scripts/loop-toggle status # Check current state
./scripts/loop-toggle        # Toggle current state
```

This manages both AUTONOMOUS_MODE.txt and the .agent/LOOP sentinel together.