- Commit as you go - Make small, focused commits after completing each feature or fix
- Push when done - Push all commits to remote when finishing a task or session
- No AI co-author - Never include `Co-Authored-By` lines in commits
- Rebuild when done - Run `cargo build --release` (the `jcode` symlink picks it up automatically)
- Promote to stable release - Run `scripts/install_release.sh` to update the stable/release binary
- Test before committing - Run `cargo test` to verify changes
- Bump version for releases - Update the version in `Cargo.toml` when making releases
- Remote builds available - Use `scripts/remote_build.sh` to offload cargo work to another machine
jcode uses auto-incrementing semantic versioning (v0.1.X).
Automatic (patch):
- Build number auto-increments on every `cargo build`
- Stored in `~/.jcode/build_number`
- Example: `v0.1.1` → `v0.1.2` → `v0.1.3` ...
Manual (major/minor):
- For big changes, manually update the major/minor version in `Cargo.toml`
- Minor (0.1.x → 0.2.0): New features, significant enhancements
- Major (0.x.x → 1.0.0): Breaking changes to CLI, config, or APIs
The build version also includes the git hash and a `-dev` suffix for uncommitted changes (e.g., `v0.1.47-dev (abc1234)`).
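The auto-increment step can be pictured with a short sketch (illustrative only; the real logic lives in jcode's build script, and the temp file below stands in for `~/.jcode/build_number`):

```python
# Illustrative sketch of the patch auto-increment described above.
# A temp file stands in for ~/.jcode/build_number.
import tempfile
from pathlib import Path

state = Path(tempfile.mkdtemp()) / "build_number"
state.write_text("1")            # previous build was v0.1.1

n = int(state.read_text()) + 1   # increment on every build
state.write_text(str(n))
print(f"v0.1.{n}")               # prints v0.1.2
```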
This repo has self-dev mode. When running jcode in this directory:
- It auto-detects the jcode repo and enables self-dev mode
- Builds and tests a canary version before running
- Use `/reload` to hot-reload after making changes
Manual testing - After making changes, manually test the feature in a real terminal to verify it works. Use kitty to launch test instances:
```bash
sock=$(ls /tmp/kitty.sock* | head -1)
kitten @ --to unix:$sock launch --type=os-window ./target/release/jcode
```

Programmatic testing - Use the debug socket for automated testing (see "Headless Testing via Debug Socket" section below).
```bash
cargo build --release              # Build latest (jcode on PATH picks it up)
scripts/install_release.sh         # Promote current build to stable/release
cargo test                         # Run all tests
cargo test --test e2e              # Run only e2e tests
scripts/remote_build.sh --release  # Build on remote machine, sync binary back
scripts/remote_build.sh test       # Run tests on remote machine
JCODE_REMOTE_CARGO=1 scripts/test_e2e.sh     # Route helper script cargo commands remotely
JCODE_REMOTE_CARGO=1 scripts/agent_trace.sh  # Route helper script build remotely
```

Logs are written to `~/.jcode/logs/` (daily files like `jcode-YYYY-MM-DD.log`).
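To locate the current day's log from a script, the path can be built from that naming pattern (a small sketch; only the `jcode-YYYY-MM-DD.log` pattern comes from this doc):

```python
# Build today's log path from the jcode-YYYY-MM-DD.log pattern noted above.
from datetime import date
from pathlib import Path

log_path = Path.home() / ".jcode" / "logs" / f"jcode-{date.today():%Y-%m-%d}.log"
print(log_path)
```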
- `~/.local/bin/jcode` is a symlink to `target/release/jcode` — always the latest build from source.
- `~/.jcode/builds/stable/jcode` is the stable/release binary, updated by `scripts/install_release.sh`.
- Ensure `~/.local/bin` is before `~/.cargo/bin` in `PATH`.
jcode supports multiple providers and authentication methods:
Uses your Claude Pro/Max subscription - no API key needed, included in subscription cost.
Direct Claude API transport is the default in jcode. The legacy `claude` subprocess mode is used only when explicitly enabled.
Setup:
- Install Claude Code CLI: `npm install -g @anthropic-ai/claude-code`
- Login: `claude login`
- Credentials are stored in `~/.claude/.credentials.json`
jcode automatically detects and uses these credentials. Token refresh is handled automatically.
How it works:
- jcode reads the OAuth token from `~/.claude/.credentials.json`
- Sends requests to `api.anthropic.com/v1/messages?beta=true`
- Includes the required headers: `anthropic-beta: oauth-2025-04-20,claude-code-20250219`
- The system prompt must include the Claude Code identity (handled automatically)
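The request shape described above can be sketched as follows. The URL and header values come from this document; the payload fields and the `build_oauth_request` helper are illustrative assumptions, not jcode's actual code:

```python
import json

def build_oauth_request(oauth_token: str, user_text: str) -> dict:
    """Sketch of the direct-API request described above (illustrative only)."""
    return {
        "url": "https://api.anthropic.com/v1/messages?beta=true",
        "headers": {
            "authorization": f"Bearer {oauth_token}",
            "anthropic-beta": "oauth-2025-04-20,claude-code-20250219",
            "content-type": "application/json",
        },
        # Real requests also carry the Claude Code system prompt (handled
        # automatically by jcode) plus model and max_tokens fields.
        "body": json.dumps({"messages": [{"role": "user", "content": user_text}]}),
    }

req = build_oauth_request("sk-ant-oat-...", "hello")
```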
Pay-per-token via Anthropic Console.
```bash
export ANTHROPIC_API_KEY=sk-ant-...
```

If `ANTHROPIC_API_KEY` is set, it takes priority over OAuth.
Both auth methods support prompt caching. The system prompt is cached for 5 minutes, reducing costs on subsequent turns. Token usage shows:

- `cache_write`: Tokens cached on the first request
- `cache_read`: Tokens read from cache (90% cheaper)
Direct Anthropic API is now the default Claude transport.
To use the Claude Code CLI as a subprocess (legacy rollback mode only):

```bash
export JCODE_USE_CLAUDE_CLI=1
```

This shells out to the `claude` binary instead of calling the API directly. It is retained only for legacy compatibility.
Access 200+ models from various providers (Anthropic, OpenAI, Google, Meta, etc.) via OpenRouter.
Setup:
- Get an API key from https://openrouter.ai/
- Set the environment variable: `export OPENROUTER_API_KEY=sk-or-v1-...`

Usage:
- Models use `provider/model` format (e.g., `anthropic/claude-sonnet-4`, `openai/gpt-4o`)
- Switch models with `/model anthropic/claude-sonnet-4`
- Available models are fetched dynamically from the OpenRouter API
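Since OpenRouter model IDs follow the `provider/model` format, splitting one is a one-liner (`split_model` is a hypothetical helper for illustration, not part of jcode):

```python
def split_model(model_id: str) -> tuple[str, str]:
    # Hypothetical helper: split an OpenRouter "provider/model" ID.
    provider, _, name = model_id.partition("/")
    return provider, name

print(split_model("anthropic/claude-sonnet-4"))  # ('anthropic', 'claude-sonnet-4')
```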
Features:
- Unified API for multiple providers
- Pay-per-token pricing
- Automatic provider routing and fallbacks
```bash
# Authentication
export ANTHROPIC_API_KEY=sk-ant-...      # Direct API key (overrides OAuth)
export OPENROUTER_API_KEY=sk-or-v1-...   # OpenRouter API key
export JCODE_USE_CLAUDE_CLI=1            # Deprecated: legacy Claude CLI subprocess mode

# Model selection
export JCODE_ANTHROPIC_MODEL=claude-opus-4-5-20251101
export JCODE_OPENROUTER_MODEL=anthropic/claude-sonnet-4  # Default OpenRouter model

# Debugging
export JCODE_ANTHROPIC_DEBUG=1           # Log API request payloads
```

- `src/main.rs` - Entry point, CLI, self-dev mode
- `src/tui/app.rs` - TUI application state and logic
- `src/tui/ui.rs` - UI rendering
- `src/tool/` - Tool implementations
- `src/id.rs` - Session naming and IDs
jcode has a debug socket for headless/automated testing. This allows external scripts to:
- Execute tools directly (bypass LLM)
- Send messages to the agent and get responses
- Query agent state and history
- Spawn and control test instances
```bash
# Option 1: File toggle (persists, no restart needed after reload)
touch ~/.jcode/debug_control

# Option 2: Environment variable
JCODE_DEBUG_CONTROL=1 jcode serve
```

- Main socket: `/run/user/$(id -u)/jcode.sock`
- Debug socket: `/run/user/$(id -u)/jcode-debug.sock`
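From Python, the same socket paths can be derived from the current UID rather than hardcoding `/run/user/1000` as the examples later in this doc do for brevity:

```python
import os

# Derive the jcode socket paths for the current user (Linux).
run_dir = f"/run/user/{os.getuid()}"
main_sock_path = f"{run_dir}/jcode.sock"
debug_sock_path = f"{run_dir}/jcode-debug.sock"
print(debug_sock_path)
```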
Commands can be namespaced with `server:`, `client:`, `tester:`, `swarm:`, or `events:` prefixes. Unnamespaced commands default to `server`.
Server Commands (agent/tools - default namespace):
| Command | Description |
|---|---|
| `state` | Agent state (session, model, canary) |
| `history` | Conversation history as JSON |
| `tools` | List available tools |
| `last_response` | Last assistant response |
| `message:<text>` | Send message, get LLM response |
| `tool:<name> <json>` | Execute tool directly |
| `sessions` | List all sessions |
| `create_session` | Create headless session |
| `create_session:<path>` | Create session with working directory |
| `destroy_session:<id>` | Destroy a session |
| `set_model:<model>` | Switch model (may change provider) |
| `set_provider:<name>` | Switch provider (claude/openai/openrouter) |
| `trigger_extraction` | Force end-of-session memory extraction |
| `available_models` | List all available models |
| `help` | List commands |
Client Commands (TUI/visual debug - client: prefix):
| Command | Description |
|---|---|
| `client:frame` | Get latest visual debug frame (JSON) |
| `client:frame-normalized` | Get normalized frame (for diffs) |
| `client:screen` | Dump visual debug frames to file |
| `client:enable` | Enable visual debug capture |
| `client:disable` | Disable visual debug capture |
| `client:status` | Get client debug status |
| `client:help` | Client command help |
Tester Commands (spawned instances - tester: prefix):
| Command | Description |
|---|---|
| `tester:spawn` | Spawn new tester instance |
| `tester:spawn {"cwd":"/path"}` | Spawn with options |
| `tester:list` | List active testers |
| `tester:<id>:frame` | Get frame from tester |
| `tester:<id>:state` | Get tester state |
| `tester:<id>:message:<text>` | Send message to tester |
| `tester:<id>:stop` | Stop tester |
Swarm Commands (multi-agent coordination - swarm: prefix):
| Command | Description |
|---|---|
| `swarm` / `swarm:members` | List all swarm members with details (includes timestamps, provider/model) |
| `swarm:list` | List all swarm IDs with member counts |
| `swarm:info:<swarm_id>` | Full info: members, coordinator, plan, context, conflicts |
| `swarm:coordinators` | List all coordinators (swarm_id -> session_id) |
| `swarm:coordinator:<swarm_id>` | Get coordinator for specific swarm |
| `swarm:plans` | List all swarm plans with item counts and participants |
| `swarm:plan:<swarm_id>` | Get plan items for specific swarm |
| `swarm:proposals` | List all pending plan proposals |
| `swarm:proposals:<swarm_id>` | List proposals for specific swarm (with items) |
| `swarm:proposals:<session_id>` | Get detailed proposal from a session |
| `swarm:context` | List all shared context entries (includes timestamps) |
| `swarm:context:<swarm_id>` | List context for specific swarm |
| `swarm:context:<swarm_id>:<key>` | Get specific context value |
| `swarm:touches` | List all file touches (path, session, op, age, timestamp_unix) |
| `swarm:touches:<path>` | Get touches for specific file |
| `swarm:touches:swarm:<swarm_id>` | Get touches filtered by swarm members |
| `swarm:conflicts` | Files touched by multiple sessions (with full access history) |
| `swarm:session:<id>` | Detailed session state (interrupts, provider, token usage) |
| `swarm:interrupts` | List pending interrupts across all sessions |
| `swarm:id:<path>` | Compute swarm_id for a path (shows git_root, is_git_repo) |
| `swarm:broadcast:<message>` | Broadcast message to all swarm members |
| `swarm:broadcast:<swarm_id> <message>` | Broadcast to specific swarm |
| `swarm:notify:<session_id> <message>` | Send DM to specific session |
| `swarm:help` | Full swarm command reference |
Event Commands (real-time event subscription - events: prefix):
| Command | Description |
|---|---|
| `events:recent` | Get recent 50 events |
| `events:recent:<N>` | Get recent N events |
| `events:since:<id>` | Get events since event ID (for polling) |
| `events:count` | Get event count and latest ID |
| `events:types` | List available event types |
Event types include: `file_touch`, `notification`, `plan_update`, `plan_proposal`, `context_update`, `status_change`, `member_change`. Events include timestamps (`age_secs`, `timestamp_unix`) for debugging timing issues.
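A polling step over `events:since:<id>` might look like this. This is a sketch: `send` stands for any function that submits a debug command and returns the response's `output` string, and the event-list shape (a JSON array of objects with an `id` field) is assumed from the fields above:

```python
import json

def poll_events(send, last_id: int):
    """One polling step: fetch events newer than last_id (illustrative sketch)."""
    events = json.loads(send(f"events:since:{last_id}"))
    if events:
        last_id = max(e["id"] for e in events)
    return events, last_id

# Usage with a stubbed transport:
fake = lambda cmd: json.dumps([{"id": 7, "type": "file_touch"}])
events, last_id = poll_events(fake, 5)
print(last_id)  # 7
```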
```python
import socket
import json

def debug_cmd(cmd, session_id, timeout=30):
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.connect('/run/user/1000/jcode-debug.sock')
    sock.settimeout(timeout)
    # Requests are newline-delimited JSON; sendall avoids partial writes
    req = {'type': 'debug_command', 'id': 1, 'command': cmd, 'session_id': session_id}
    sock.sendall((json.dumps(req) + '\n').encode())
    data = sock.recv(65536).decode()
    sock.close()
    return json.loads(data)

# Get a session_id first by subscribing to the main socket
main_sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
main_sock.connect('/run/user/1000/jcode.sock')
main_sock.settimeout(10.0)
req = json.dumps({'type': 'subscribe', 'id': 1,
                  'working_dir': '/home/jeremy/jcode', 'selfdev': True}) + '\n'
main_sock.sendall(req.encode())
# Parse response to get session_id...

# Server commands (default namespace)
result = debug_cmd('state', session_id)
result = debug_cmd('tool:bash {"command":"echo hello"}', session_id)
result = debug_cmd('message:What is 2+2?', session_id)

# Client commands (visual debug)
result = debug_cmd('client:enable', session_id)
result = debug_cmd('client:frame', session_id)

# Tester commands (spawn and control test instances)
result = debug_cmd('tester:spawn {"cwd":"/tmp"}', session_id)
result = debug_cmd('tester:list', session_id)
result = debug_cmd('tester:tester_abc123:frame', session_id)
```

Use the debug socket to test features with both Claude and OpenAI:
```python
import socket
import json

def send_cmd(sock, cmd, session_id=None, timeout=60):
    req = {"type": "debug_command", "id": 1, "command": cmd}
    if session_id:
        req["session_id"] = session_id
    sock.settimeout(timeout)
    sock.sendall((json.dumps(req) + '\n').encode())
    data = sock.recv(65536).decode()
    resp = json.loads(data)
    return resp.get('ok'), resp.get('output', '')

sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
sock.connect('/run/user/1000/jcode-debug.sock')

# Create a test session
ok, output = send_cmd(sock, "create_session:/tmp/test")
session_id = json.loads(output)['session_id']

# Test with Claude
send_cmd(sock, "set_provider:claude", session_id)
send_cmd(sock, "message:Remember I prefer dark mode", session_id)

# Test with OpenAI
send_cmd(sock, "set_provider:openai", session_id)
send_cmd(sock, "message:What are my preferences?", session_id)

# Test memory extraction
send_cmd(sock, "trigger_extraction", session_id)

# Check memories
send_cmd(sock, 'tool:memory {"action":"list"}', session_id)

# Cleanup
send_cmd(sock, f"destroy_session:{session_id}")
sock.close()
```

When in self-dev mode, the `selfdev` tool is available:
```python
# Check build status
debug_cmd('tool:selfdev {"action":"status"}', session_id)

# Spawn a test instance
debug_cmd('tool:selfdev {"action":"spawn-tester","cwd":"/tmp","args":["--help"]}', session_id)

# List testers
debug_cmd('tool:selfdev {"action":"tester","command":"list"}', session_id)

# Control a tester
debug_cmd('tool:selfdev {"action":"tester","command":"stop","id":"tester_xxx"}', session_id)
```

- Claude provider: The Claude model may claim it doesn't have access to the `selfdev` tool even when it's registered. Direct tool execution via the debug socket works. GPT models correctly see and use `selfdev`.