Paw 🐱

  /\_/\   Paw
 ( o.o )  Too lazy to pick one AI. So I use them all.
  > ^ <

The multi-provider AI coding agent for the terminal. Use Anthropic, OpenAI Codex, Ollama, and vLLM/OpenAI-compatible endpoints — with automatic fallback, parallel sub-agents, cross-provider verification, and built-in safety. Not tied to one model, not tied to one provider. Switch with /model — no code changes, no lock-in.

- **Multi-provider, zero lock-in** — Anthropic (Claude), Codex (ChatGPT subscription), Ollama (local/free), and vLLM/OpenAI-compatible endpoints, all behind one CLI. Rate limit on Claude? Auto-switches to Codex. Need self-hosted inference? Point Paw at your vLLM server.
- **Smart Router + Problem Classifier** — Just type naturally. Every prompt is instantly classified (security, debugging, architecture, performance, testing, and more) and routed to the optimal mode. No flags, no mode switching. Works in English, Korean, Japanese, and Chinese.
- **Autonomous agent** — /auto self-drives: analyze → plan → execute → verify → fix, until done. Shows every step, tool call, and AI reasoning in real time.
- **Parallel sub-agents** — Spawn independent agents that work in the background while you keep chatting. Each agent inherits your current model and session context.
- **Cross-session skill learning** — Successful auto-agent tasks are recorded across sessions. Repeated patterns auto-generate reusable skills. Bad patterns self-correct via confidence decay. Fully under user control via /memory.
- **Cross-provider verification** — AI writes code → a different AI reviews it. Paw also runs local checks (typecheck/build/test/lint), summarizes blockers inline, and keeps browsable verification logs.
- **Agent safety** — Every tool call is risk-classified in real time. Destructive commands (rm -rf, mkfs, curl|sh) are blocked before they execute. High-risk operations auto-checkpoint via git stash.
- **Cross-session memory** — PAW.md hierarchy (global, project, personal notes, and auto-learned context) injected on session start; survives compaction and persists across sessions.
- **Skills + Hooks** — 7 built-in slash commands plus unlimited custom skills. /review reviews AND fixes critical issues automatically. 10 lifecycle hook events with regex matchers, JSON stdin, and exit-code blocking.
- **Live Activity Display** — Every tool call shown in real time with color-coded icons (Read=cyan, Write=yellow, Bash=magenta). AI intermediate responses stream live between tool calls.

Disclaimer: Paw is an independent, third-party project. Not affiliated with Anthropic, OpenAI, or any AI provider.


Quick Install

git clone https://github.com/jhcdev/paw.git
cd paw
npm install
npm link

Works on Linux, macOS, and WSL2. Requires Node.js 22+ and at least one provider (Anthropic API key, Codex CLI, Ollama, or vLLM).

After installation:

paw                                    # Auto-detect providers and start
paw "explain this project"             # Direct prompt
paw --continue                         # Resume last session
paw --provider codex                   # Force specific provider
paw --provider vllm                    # Use your vLLM/OpenAI-compatible server

Getting Started

paw                          # Interactive REPL — start coding
paw --provider ollama        # Force a specific provider
paw --provider vllm          # Force vLLM
paw --continue               # Resume last session
paw --session abc123         # Join specific session
paw --help                   # All flags and MCP commands
paw mcp list                 # List connected MCP servers
paw --logout                 # Remove saved credentials

Providers

| Provider | Auth | Models | Cost |
|---|---|---|---|
| Anthropic | `ANTHROPIC_API_KEY` | Haiku 4.5, Sonnet 4/4.6, Opus 4/4.6 | Per-token |
| Codex | `codex login` | GPT-5.4, GPT-5.3, o4 Mini, o3 | ChatGPT subscription |
| Ollama | (none) | Any pulled model | Free (local) |
| vLLM | `VLLM_API_KEY` (optional) | Any model exposed via `/v1/models` | Self-hosted |
# Anthropic — set in .env or configure via /settings
ANTHROPIC_API_KEY=sk-ant-api03-...

# Codex — install CLI and login
npm install -g @openai/codex && codex login

# Ollama — pull a model and go
ollama pull qwen3

# vLLM — point Paw at your OpenAI-compatible endpoint
VLLM_BASE_URL=http://localhost:8000
VLLM_MODEL=auto
# optional
VLLM_API_KEY=dummy

Coming soon: Gemini, Groq, OpenRouter.


Smart Router + Problem Classifier

Just type naturally — Paw picks the best mode and auto-activates the right features:

| You type | Category detected | Routed to | Auto-activated |
|---|---|---|---|
| `npm test` | — | `/pipe` | — |
| fix the JWT auth vulnerability | Security | `/auto` + team | auto-verify ON |
| why does the app crash? | Debugging | `/auto` | auto-verify ON |
| design a microservice architecture | Architecture | team | team review |
| write unit tests for the auth module | Testing | `/test` skill | auto-verify ON |
| review this code | — | `/review` skill | — |
| 이 코드 리뷰해줘 (Korean: "review this code") | — | `/review` skill | — |
| 보안 취약점 찾아서 고쳐줘 (Korean: "find and fix security vulnerabilities") | Security | `/auto` + team | auto-verify ON |

Supports: English, Korean, Japanese, Chinese.

Categories: security · debugging · architecture · performance · testing · data · api · web · devops · refactoring · explanation


Agent Modes

Solo (default)

Single provider handles all messages. Switch models anytime with /model.

Team (/team)

5 agents collaborate on every message:

| Role | Job | Execution |
|---|---|---|
| Planner | Architecture & plan | Sequential |
| Coder | Implementation | Sequential |
| Reviewer | Bugs, security, correctness | Parallel |
| Tester | Test case generation | Parallel |
| Optimizer | Performance improvements | Sequential |

Roles auto-assigned by efficiency scores. Review → rework loop (MAJOR → recode → re-review, max 3×).
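
The review → rework loop amounts to a bounded retry. A minimal sketch in TypeScript (illustrative only, not Paw's implementation; `review` and `recode` stand in for the Reviewer and Coder agents):

```typescript
type Severity = "OK" | "MINOR" | "MAJOR";

// One review/rework round-trip: keep recoding while the reviewer
// flags MAJOR issues, up to maxRounds attempts.
function reviewLoop(
  code: string,
  review: (code: string) => Severity,
  recode: (code: string) => string,
  maxRounds = 3,
): string {
  for (let round = 0; round < maxRounds; round++) {
    if (review(code) !== "MAJOR") break; // only MAJOR triggers rework
    code = recode(code);
  }
  return code;
}
```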

/auto — Autonomous Agent

Self-driving agent: analyze → plan → execute → verify → fix, until done.

/auto add input validation to all API endpoints

◉ Analyzing project...
✓ Creating plan...
◉ Executing step 1/10...
◉ Verifying...
✗ Build error found
◉ Fixing errors...
✓ All checks passed
✓ COMPLETED (32.4s)

/spawn — Parallel Sub-Agents

Spawn independent agents that work in parallel — even while the main AI is thinking.

you  explain the architecture        ← main AI starts working
you  /spawn add tests for auth       ← runs immediately in background
you  /spawn update README            ← another agent, same or different provider
you  /agents                         ← check all agent progress and details

/agents — Agent Activity Browser

/agents               → summary + interactive browser
/agents search auth   → filter by keyword
/agents latest        → latest agent detail
/agents list          → print unified overview
/agents results       → completed spawn results
/agents clear         → clear completed tasks

/pipe — Shell Output → AI

/pipe npm test              → AI analyzes test failures
/pipe fix npm run build     → AI fixes errors, re-runs until clean (max 5)
/pipe watch npm start       → AI monitors startup output
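
The `fix` subcommand's run-until-clean behavior is essentially a bounded retry over the command's output. A sketch under that assumption (`runUntilClean` and its `fix` callback are illustrative names, not Paw's API):

```typescript
import { execSync } from "node:child_process";

// Run a shell command; on failure, hand the combined output to a fixer
// (in Paw this would be the AI), then re-run. Give up after maxTries.
function runUntilClean(
  cmd: string,
  fix: (log: string) => void,
  maxTries = 5,
): boolean {
  for (let attempt = 1; attempt <= maxTries; attempt++) {
    try {
      execSync(cmd, { stdio: "pipe" }); // throws on non-zero exit
      return true; // clean run
    } catch (err) {
      const e = err as { stdout?: Buffer; stderr?: Buffer };
      fix(`${e.stdout ?? ""}${e.stderr ?? ""}`); // let the fixer react to the log
    }
  }
  return false; // still failing after maxTries
}
```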

Cross-Session Skill Learning

Paw automatically learns from every successful /auto run across sessions.

How it works

1st JWT auth fix   → recorded in ~/.paw/learned-tasks.json
2nd JWT auth fix   → past context injected automatically
3rd JWT auth fix   → auto-skill created: /auto-security-auth

Every learned task carries a confidence score (0–1) that self-corrects:

| Event | Effect |
|---|---|
| Task succeeds | Similar patterns +0.1 confidence (cap 1.0) |
| Task fails | Similar patterns −0.3 confidence |
| Confidence < 0.15 | Pattern auto-pruned |
| Auto-skill has < 2 backing patterns | Skill file auto-deleted |

Only patterns with confidence ≥ 0.4 are injected as context.
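
The decay rules above reduce to a few lines of arithmetic. A sketch of the bookkeeping (types and names are illustrative, not Paw's internals):

```typescript
interface LearnedPattern {
  description: string;
  confidence: number; // 0–1
}

// Success nudges similar patterns up (capped at 1.0); failure hits harder.
function updateConfidence(p: LearnedPattern, succeeded: boolean): LearnedPattern {
  const delta = succeeded ? +0.1 : -0.3;
  return { ...p, confidence: Math.min(1.0, p.confidence + delta) };
}

// Patterns below 0.15 are pruned; only those at or above 0.4 are injected.
const shouldPrune = (p: LearnedPattern) => p.confidence < 0.15;
const shouldInject = (p: LearnedPattern) => p.confidence >= 0.4;
```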

User control — all via /memory

/memory              # PAW.md sources + learned pattern summary + current mode
/memory auto         # learn silently, create skills automatically (default)
/memory ask          # learn silently, ask before creating skills
/memory off          # disable learning and context injection entirely

/memory yes          # confirm pending skill creation (ask mode)
/memory no           # skip pending skill creation (ask mode)

/memory forget <skill>           # delete skill + backing patterns
/memory forget --category <cat>  # purge all patterns for a category
/memory forget --all             # wipe the entire learned store

Trust & Safety

/verify — Cross-Provider Verification

AI generates code → a different AI reviews it. Paw also runs local verification checks when available.

---
Verification (by codex/gpt-5.4):
  Status: BLOCKED
  Confidence: 85/100
  Blocking summary:
    - test: failing suite
  Checks:
    ✗ npm run --silent test
      ↳ failing suite
  [error] src/auth.ts: Potential SQL injection
---
/verify        # reviewer / effort settings
/verify logs   # browse recent verification runs

/safety — Risk Classification

| Level | Examples | Action |
|---|---|---|
| Low | `read_file`, `search_text`, `glob` | Execute immediately |
| Medium | `write_file`, `edit_file`, `npm run build` | Execute immediately |
| High | `rm`, `git reset`, `terraform destroy` | Blocked + git checkpoint |
| Critical | `rm -rf /`, `mkfs`, `curl \| sh` | Permanently blocked |

25+ dangerous patterns blocked. Symlink traversal protection. SSRF blocked. Shell injection prevented.
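
The tiers above can be implemented as a first-match, most-dangerous-first pattern scan. A sketch of the idea (the regex list is abbreviated and illustrative; Paw's actual rule set is larger):

```typescript
type Risk = "low" | "medium" | "high" | "critical";

// Ordered most-dangerous-first; the first tier whose pattern matches wins.
const RULES: Array<[Risk, RegExp[]]> = [
  ["critical", [/\brm\s+-rf\s+\//, /\bmkfs\b/, /curl[^|]*\|\s*(ba)?sh/]],
  ["high", [/\brm\b/, /\bgit\s+reset\b/, /\bterraform\s+destroy\b/]],
  ["medium", [/\bwrite_file\b/, /\bedit_file\b/, /\bnpm\s+run\s+build\b/]],
];

function classify(call: string): Risk {
  for (const [risk, patterns] of RULES) {
    if (patterns.some((re) => re.test(call))) return risk;
  }
  return "low"; // read-only tools fall through
}
```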


Memory

Cross-session memory via PAW.md hierarchy + learned task patterns:

| File | Scope | Shared |
|---|---|---|
| `~/.paw/PAW.md` | All projects | No |
| `./PAW.md` or `.paw/PAW.md` | This project | Yes (commit to repo) |
| `./PAW.local.md` | This project, personal | No (git-ignored) |
| `~/.paw/memory/` | Auto-learned context | No |
| `~/.paw/learned-tasks.json` | Cross-session task patterns | No |

/memory              # view memory sources + learned patterns
/remember <note>     # save note across sessions
/sessions <query>    # search and summarize past sessions
/compact [focus]     # AI-powered conversation compression
/export              # export full context as markdown
/export chat         # export conversation only

Skills

7 built-in + unlimited custom. $ARGUMENTS, !`command` injection, SKILL.md directories.

| Built-in | Description |
|---|---|
| `/review` | Review code + auto-fix critical/major issues found |
| `/refactor` | Refactoring improvements |
| `/test` | Generate test cases |
| `/explain` | Explain code in detail |
| `/optimize` | Performance optimization |
| `/document` | Generate documentation |
| `/commit` | Conventional commit from diff |

Custom skill — .paw/skills/deploy.md:

---
name: deploy
description: Deploy the application
argument-hint: [environment]
---

Deploy $ARGUMENTS to production.
Current branch: !`git branch --show-current`

Auto-learned skills — after 3 similar /auto tasks, a global skill is auto-created in ~/.paw/skills/auto-<category>-<keyword>.md. Manage via /memory.


Hooks

10 lifecycle events. Regex matchers. JSON stdin. Exit 2 = block.

| Event | When | Can block |
|---|---|---|
| `pre-turn` | Before sending to model | — |
| `post-turn` | After model responds | — |
| `pre-tool` | Before tool execution | Yes |
| `post-tool` | After tool succeeds | — |
| `post-tool-failure` | After tool fails | — |
| `on-error` | On any error | — |
| `session-start` | REPL starts | — |
| `session-end` | REPL ends | — |
| `stop` | AI finishes responding | Yes |
| `notification` | Notification sent | — |

JSON — .paw/settings.json:

{
  "hooks": {
    "post-tool": [{
      "matcher": "edit_file|write_file",
      "hooks": [{ "type": "command", "command": "npx prettier --write $(jq -r '.tool_input.path')" }]
    }]
  }
}

Exit 0 = proceed. Exit 2 = block. Env: PAW_EVENT, PAW_CWD, PAW_TOOL_NAME.
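
As a concrete example, a `pre-tool` hook script that blocks edits to sensitive files might look like the sketch below. The `tool_input.path` field is assumed from the `post-tool` example above, and the script path and protected patterns are illustrative:

```typescript
// scripts/paw-pretool.ts (illustrative path); register it in settings.json
// under "pre-tool" with a matcher like "edit_file|write_file".
import { readFileSync } from "node:fs";

// Files this project treats as off-limits for automated edits (examples).
const PROTECTED = [/package-lock\.json$/, /\.env(\..+)?$/];

export function shouldBlock(path: string): boolean {
  return PROTECTED.some((re) => re.test(path));
}

// Paw sets PAW_EVENT and pipes the event payload as JSON on stdin.
if (process.env.PAW_EVENT === "pre-tool") {
  const payload = JSON.parse(readFileSync(0, "utf8"));
  if (shouldBlock(payload.tool_input?.path ?? "")) {
    console.error("pre-tool hook: edit to protected file blocked");
    process.exit(2); // exit 2 = block the tool call
  }
}
```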


Tools & MCP

9 Built-in Tools

list_files · read_file · read_image · write_file · edit_file · search_text · run_shell · glob · web_fetch

MCP (Model Context Protocol)

paw mcp add --transport http github https://api.github.com/mcp
paw mcp add --transport stdio memory -- npx -y @modelcontextprotocol/server-memory
paw mcp list
paw mcp remove github

Interactive manager via /mcp. Supports stdio, HTTP, SSE. Tools auto-injected into all providers.
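
For project-pinned servers, `.mcp.json` (listed under Files) holds the equivalent configuration. A sketch assuming the common `mcpServers` shape used by MCP clients — Paw's exact schema may differ, so compare against a file written by `paw mcp add`:

```json
{
  "mcpServers": {
    "github": { "type": "http", "url": "https://api.github.com/mcp" },
    "memory": {
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-memory"]
    }
  }
}
```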


REPL Commands

| Command | Description |
|---|---|
| `/help` | All commands |
| `/status` | Providers, usage, cost |
| `/settings` | Provider API key management |
| `/model` | Model catalog & switch |
| `/team` | Team dashboard & collaboration |
| `/spawn` | Spawn parallel sub-agent |
| `/agents` | Unified agent activity & spawn status |
| `/auto <task>` | Autonomous agent mode |
| `/pipe <cmd>` | Shell output → AI (fix/watch subcommands) |
| `/verify` | Cross-provider verification settings |
| `/verify logs` | Browse verification history |
| `/safety` | Safety guard configuration |
| `/memory` | PAW.md + learned patterns + learning mode (auto\|ask\|off\|forget) |
| `/memory yes\|no` | Confirm/skip pending auto-skill creation (ask mode) |
| `/remember <note>` | Save note to memory |
| `/sessions` | List sessions + current ID |
| `/sessions <query>` | Search & summarize past sessions |
| `/export` | Export full context as markdown |
| `/export chat` | Export conversation only |
| `/compact [focus]` | AI-powered conversation compression |
| `/skills` | List all skills |
| `/hooks` | List configured hooks |
| `/ask <provider> <prompt>` | Query specific provider |
| `/tools` | Built-in + MCP tools |
| `/mcp` | MCP server manager |
| `/git` | Status + diff + log |
| `/init` | Generate CONTEXT.md |
| `/doctor` | Diagnostics |
| `/clear` | Reset conversation |
| `/exit` | Quit |

Keyboard: ↑↓ navigate · Enter select · Tab autocomplete · Esc back · Ctrl+C interrupt · Ctrl+L clear · Ctrl+K compact


Live Activity Display

Every tool call and AI response is shown in real time while the agent works:

=^.^= ◉ Executing step 2/5

  ✓ Read     src/cli.tsx             0.3s  ⎿  245 lines
  ✓ Search   "thinkMsg"              0.1s  ⎿  8 results
  ✓ Bash     npm run build           1.2s
  ◉ Write    src/output.ts

  I'll now update the render section to add the new...
| Icon | Meaning |
|---|---|
| ◉ | Tool running / step in progress |
| ✓ | Tool completed (elapsed time + result shown) |

| Color | Tools |
|---|---|
| Cyan | Read, List |
| Yellow | Write, Update |
| Magenta | Bash |
| Blue | Search, Glob |
| Green | Fetch |

AI intermediate responses stream between tool calls so you see reasoning as it happens, not just the final answer. Works in all modes: /auto, /pipe, team, skill, and solo.


Files

| File | Purpose |
|---|---|
| `~/.paw/credentials.json` | API keys (0600) |
| `~/.paw/sessions/*.json` | Session history + verification history |
| `~/.paw/team-scores.json` | Team performance scores |
| `~/.paw/PAW.md` | Global instructions |
| `~/.paw/memory/` | Auto-learned memory |
| `~/.paw/skills/*.md` | User-wide custom skills (incl. auto-generated `auto-*`) |
| `~/.paw/hooks/*.md` | User-wide hooks |
| `~/.paw/learned-tasks.json` | Cross-session task patterns with confidence scores |
| `~/.paw/learn-config.json` | Learning mode preference (auto/ask/off) |
| `PAW.md` | Project instructions |
| `PAW.local.md` | Personal project notes |
| `.paw/skills/*.md` | Project skills |
| `.paw/hooks/*.md` | Project hooks |
| `.paw/settings.json` | Project settings |
| `.mcp.json` | MCP server config |

Contributing

git clone https://github.com/jhcdev/paw.git
cd paw
npm install
npm test              # 390 tests
npm run build         # TypeScript → dist/
npm link              # Install 'paw' command globally

License

MIT — see LICENSE.
