Orca runs parallel AI coding agents (Claude Code, Codex, etc.) in sandboxed containers, with real-time streaming, job lifecycle management, and webhook notifications.
```
┌─────────┐       ┌─────────────┐       ┌──────────────┐
│   CLI   │──────▶│   Backend   │──────▶│  Scheduler   │
│  orca   │ HTTP  │  (Fastify)  │       │ Docker / K8s │
└─────────┘       └──────┬──────┘       └──────┬───────┘
                         │                     │
                  ┌──────┴──────┐       ┌──────┴───────┐
                  │ PostgreSQL  │       │    Worker    │
                  │   (jobs)    │       │  (sandboxed) │
                  └─────────────┘       └──────┬───────┘
                                               │
                  ┌─────────────┐              │
                  │    Redis    │◀─────────────┘
                  │  (streams)  │  events via Redis Streams
                  └─────────────┘
```
Monorepo workspaces:

| Workspace | Path | Purpose |
|---|---|---|
| `@orca/cli` | `cli/` | Developer CLI (`orca dev`, `orca job submit/status/logs/cancel/list`) |
| `@orca/backend` | `backend/` | Fastify API server, job manager, schedulers, WebSocket streaming |
| `@orca/worker` | `worker/` | Runs inside containers: clones repos, executes agents, streams events |
| `@orca/shared` | `packages/shared/` | Shared types (`AgentEvent`, job states) |
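The shared `AgentEvent` type is the contract between worker and backend: workers emit events, the backend relays them over WebSocket. As a rough illustration only (the real definition lives in `packages/shared/src/index.ts` and will differ), such an event pairs a job id with a typed payload:

```typescript
// Illustrative sketch of a streamed agent event; the field names here are
// assumptions, not the actual definition in packages/shared.
type AgentEvent =
  | { kind: "log"; jobId: string; message: string }
  | { kind: "status"; jobId: string; state: "running" | "succeeded" | "failed" };

// A discriminated union lets consumers switch exhaustively on `kind`.
function describe(event: AgentEvent): string {
  switch (event.kind) {
    case "log":
      return `[${event.jobId}] ${event.message}`;
    case "status":
      return `[${event.jobId}] state -> ${event.state}`;
  }
}

console.log(describe({ kind: "status", jobId: "job-1", state: "running" }));
// → [job-1] state -> running
```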
- Node.js >= 22
- Docker (for local dev with Docker Compose, and building worker images)
- k3d + kubectl (only if using `orca dev --k8s`)
- PostgreSQL 16 and Redis 7 (provided automatically by `orca dev`)
```shell
# 1. Clone and install
git clone <repo-url> orca && cd orca
npm install

# 2. Build the worker image
docker build -t orca-worker:latest -f docker/Dockerfile.worker .

# 3. Authenticate Claude Code (one-time setup)
npx tsx cli/src/index.ts auth
# This runs Claude Code in a container, prompts you to log in via browser,
# and saves your OAuth session to a persistent Docker volume.
# Works with Claude Pro, Max, Teams, and Enterprise plans — no API key needed.

# 4. Start the dev environment (Docker Compose)
npx tsx cli/src/index.ts dev
# This starts PostgreSQL, Redis, and the backend.
# Backend available at http://localhost:3000

# 5. Submit a job (in another terminal)
npx tsx cli/src/index.ts job submit \
  --repo https://github.com/your-org/your-repo \
  --prompt "Fix the failing test in src/utils.ts"

# 6. Check job status
npx tsx cli/src/index.ts job status <job-id>

# 7. Stream logs via WebSocket
npx tsx cli/src/index.ts job logs <job-id>
```

Worker containers need Claude Code credentials to run the `claude-code` agent. Orca supports two methods:
```shell
# One-time: authenticate and save credentials to a Docker volume
npx tsx cli/src/index.ts auth

# Check credential status
npx tsx cli/src/index.ts auth --status

# Remove stored credentials
npx tsx cli/src/index.ts auth --logout
```

This runs `claude` interactively in a container, shows a login URL, and persists the OAuth session to the `orca-claude-creds` Docker volume. All worker containers automatically mount this volume — no API key required.
If you have an Anthropic API key (from the Claude Console), set it in your environment before starting the dev stack:
```shell
export ANTHROPIC_API_KEY=sk-ant-...
npx tsx cli/src/index.ts dev
```

The key is passed through the backend to each worker container. If both OAuth credentials and an API key are available, the API key takes precedence.
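That precedence rule can be sketched as a pure function (illustrative only; the actual selection logic lives in the backend and may be organized differently):

```typescript
// Sketch of the credential precedence described above: an explicit
// API key wins over stored OAuth credentials. Names are illustrative.
type AuthSource = "api-key" | "oauth-volume" | "none";

function pickAuth(apiKey: string | undefined, hasOAuthVolume: boolean): AuthSource {
  if (apiKey) return "api-key";              // ANTHROPIC_API_KEY takes precedence
  if (hasOAuthVolume) return "oauth-volume"; // orca-claude-creds volume mount
  return "none";
}

console.log(pickAuth("sk-ant-example", true)); // → api-key
console.log(pickAuth(undefined, true));        // → oauth-volume
```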
| Variable | Default | Description |
|---|---|---|
| `DATABASE_URL` | `postgresql://orca:orca@localhost:5432/orca` | PostgreSQL connection string |
| `REDIS_URL` | `redis://localhost:6379` | Redis connection string |
| `PORT` | `3000` | HTTP server port |
| `HOST` | `0.0.0.0` | HTTP server bind address |
| `ORCA_SCHEDULER` | `docker` | Scheduler backend: `docker` or `k8s` |
| `WORKER_IMAGE` | `orca-worker:latest` | Docker image used for worker containers |
| `ORCA_NAMESPACE` | `orca` | Kubernetes namespace (when `ORCA_SCHEDULER=k8s`) |
| `DOCKER_NETWORK` | `orca_default` | Docker network for worker containers (when `ORCA_SCHEDULER=docker`) |
| `LOG_LEVEL` | `info` | Pino log level: `trace`, `debug`, `info`, `warn`, `error`, `fatal` |
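To see how the defaults above combine, here is a sketch of an environment loader (illustrative; the backend's actual config module may look different):

```typescript
// Illustrative config loader applying the defaults from the table above.
// Each variable falls back to its documented default when unset.
const config = {
  databaseUrl: process.env.DATABASE_URL ?? "postgresql://orca:orca@localhost:5432/orca",
  redisUrl: process.env.REDIS_URL ?? "redis://localhost:6379",
  port: Number(process.env.PORT ?? "3000"),
  host: process.env.HOST ?? "0.0.0.0",
  scheduler: (process.env.ORCA_SCHEDULER ?? "docker") as "docker" | "k8s",
  workerImage: process.env.WORKER_IMAGE ?? "orca-worker:latest",
  logLevel: process.env.LOG_LEVEL ?? "info",
};

console.log(`${config.scheduler} scheduler, image ${config.workerImage}`);
```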
These are set automatically by the scheduler when launching a container. You don't set these yourself unless debugging a worker manually.
| Variable | Required | Default | Description |
|---|---|---|---|
| `JOB_ID` | yes | — | Unique job identifier |
| `PROMPT` | yes | — | The prompt/instructions for the agent |
| `REDIS_URL` | yes | — | Redis connection for event streaming |
| `AGENT_TYPE` | no | `echo` | Agent to run: `echo`, `claude-code` |
| `WORK_DIR` | no | `/workspace` | Working directory inside the container |
| `REPO_URL` | no | — | Git repository to clone before running the agent |
| `GIT_TOKEN` | no | — | Token for private repo authentication (injected into the clone URL) |
| `BRANCH` | no | — | Git branch to check out |
| `TIMEOUT_SECONDS` | no | `1800` (30 min) | Hard timeout; a soft timeout fires at 80% of this value |
| `LOG_LEVEL` | no | `info` | Pino log level |
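Per the note above, these variables are normally injected by the scheduler; for manual debugging you can launch the worker image directly. A sketch, assuming the image was built as in the quick start and Redis is reachable from inside the container (the `host.docker.internal` address is an assumption for Docker Desktop setups):

```shell
# Manually launch one worker container for debugging (all values are examples)
docker run --rm \
  -e JOB_ID=debug-123 \
  -e PROMPT="Say hello" \
  -e AGENT_TYPE=echo \
  -e REDIS_URL=redis://host.docker.internal:6379 \
  orca-worker:latest
```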
| Variable | Default | Description |
|---|---|---|
| `ORCA_API_URL` | `http://localhost:3000` | Backend API base URL |
```shell
npx tsx cli/src/index.ts dev
```

Starts PostgreSQL, Redis, and the backend via Docker Compose (`docker/docker-compose.yml`). Workers are launched as individual Docker containers by the `DockerScheduler`.
What you need installed: Docker
```shell
npx tsx cli/src/index.ts dev --k8s
```

Creates a local k3d cluster (`orca-dev`), applies K8s manifests (namespace, RBAC, Postgres, Redis), builds and imports the worker image, then starts the backend configured with `ORCA_SCHEDULER=k8s`. Workers are launched as Kubernetes Jobs.
What you need installed: Docker, k3d, kubectl
Press Ctrl+C in either mode to tear down all resources.
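In `--k8s` mode, workers run as Kubernetes Jobs in the `orca` namespace, so standard `kubectl` commands can inspect them (the Job name placeholder below is whatever name the scheduler assigned):

```shell
# List worker Jobs and follow a worker pod's logs in the orca namespace
kubectl get jobs -n orca
kubectl logs -n orca -l job-name=<worker-job-name> --follow
```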
If you prefer to manage Postgres and Redis yourself:
```shell
# Start Postgres and Redis however you like, then:
cd backend
DATABASE_URL=postgresql://user:pass@localhost:5432/orca \
REDIS_URL=redis://localhost:6379 \
npx tsx src/index.ts
```

To push the Drizzle schema to the database:

```shell
cd backend
DATABASE_URL=postgresql://orca:orca@localhost:5432/orca npx drizzle-kit push
```

| Method | Path | Description |
|---|---|---|
| `POST` | `/api/v1/jobs` | Submit a new job |
| `GET` | `/api/v1/jobs/:id` | Get job status and metadata |
| `GET` | `/api/v1/jobs` | List jobs (filter by `?status=`, `?agentType=`, `?repoUrl=`, `?since=`, `?limit=`, `?offset=`) |
| `POST` | `/api/v1/jobs/:id/cancel` | Cancel a running/scheduled job |
| `POST` | `/api/v1/jobs/:id/retry` | Retry a terminal job (creates a new job) |
| `GET` | `/api/v1/health` | Health check with component status (Postgres, Redis) |
| `WS` | `/api/v1/jobs/:id/stream` | WebSocket for real-time event streaming (supports `?fromId=` for replay) |
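When a submitted job includes a `webhookUrl`, Orca's webhook sender POSTs a notification to that endpoint. A minimal receiver sketch using Node's built-in `http` module (the `jobId`/`status` payload shape here is an assumption for illustration; see `backend/src/webhooks/` for the real sender):

```typescript
// Minimal webhook receiver sketch; payload fields are illustrative.
import http from "node:http";
import { once } from "node:events";

const received: string[] = [];

const server = http.createServer((req, res) => {
  let body = "";
  req.on("data", (chunk) => (body += chunk));
  req.on("end", () => {
    const event = JSON.parse(body);
    received.push(`job ${event.jobId} -> ${event.status}`);
    res.writeHead(204).end(); // acknowledge with no content
  });
});

server.listen(0);
await once(server, "listening");
const { port } = server.address() as { port: number };

// Simulate Orca delivering a terminal-state notification to our endpoint
await fetch(`http://localhost:${port}/webhook`, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ jobId: "job-123", status: "succeeded" }),
});

server.close();
console.log(received[0]); // → job job-123 -> succeeded
```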
```shell
curl -X POST http://localhost:3000/api/v1/jobs \
  -H "Content-Type: application/json" \
  -d '{
    "repoUrl": "https://github.com/your-org/your-repo",
    "prompt": "Fix the failing test",
    "agentType": "claude-code",
    "config": { "gitToken": "ghp_..." },
    "timeoutSeconds": 3600,
    "webhookUrl": "https://your-app.com/webhook"
  }'
```

Run tests with npm workspaces:

```shell
# All workspaces
npm test --workspaces

# Individual workspace
npm test --workspace=backend
npm test --workspace=worker
npm test --workspace=cli
```

Redis must be running on `localhost:6379` for backend and worker integration tests. Quick way:
```shell
docker run -d --name orca-test-redis -p 6379:6379 redis:7-alpine
```

```
orca/
├── cli/                    # CLI tool
│   └── src/
│       ├── cli.ts          # Commander.js program setup
│       ├── commands/dev.ts # orca dev / orca dev --k8s
│       └── lib/            # API client, compose executor, health checks
├── backend/                # API server
│   └── src/
│       ├── api/            # Route handlers (jobs, health, streaming)
│       ├── db/             # Drizzle schema + client
│       ├── health/         # LiveHealthChecker (Postgres + Redis)
│       ├── jobs/           # JobManager, stores (in-memory, Postgres)
│       ├── scheduler/      # DockerScheduler, K8sScheduler
│       ├── streams/        # Redis Streams manager
│       ├── webhooks/       # Webhook sender with retry
│       ├── app.ts          # Fastify app builder
│       ├── index.ts        # Entry point
│       └── logger.ts       # Pino logger factory
├── worker/                 # Runs inside containers
│   └── src/
│       ├── agents/         # EchoAgent, ClaudeCodeAgent, registry
│       ├── git/            # GitCloner
│       ├── runtime.ts      # Main worker loop
│       ├── sigterm.ts      # Graceful shutdown handler
│       ├── index.ts        # Entry point
│       └── logger.ts       # Pino logger factory
├── packages/shared/        # Shared types
│   └── src/index.ts        # AgentEvent type definitions
├── docker/
│   ├── docker-compose.yml  # Local dev stack
│   ├── Dockerfile.backend
│   └── Dockerfile.worker
└── k8s/
    ├── namespace.yaml
    ├── rbac.yaml
    └── infra.yaml          # Postgres + Redis for k3d
```