Problem
Any npm install, mix deps.get, or cargo build can execute arbitrary code on the host machine. A single malicious package in the dependency tree gets full access to:
SSH keys (1Password agent socket)
AWS credentials (aws-vault session or IAM creds)
GitHub tokens
Browser cookies/sessions
Claude API keys
Everything else in $HOME
Real-world examples: event-stream (CVE-2018-16492), ua-parser-js (CVE-2021-27292), colors/faker sabotage, the eslint-scope npm token theft. This isn't theoretical — it's routine.
Goal: A single task dev <project-name> command that boots an isolated, credential-scoped container environment in seconds. Base image has zero baked-in secrets. Runtime credential injection is minimal, scoped, and gated behind hardware confirmation (passkey/Touch ID) where possible.
Design Principles
Zero-trust base image — no credentials, tokens, or secrets in the image. Ever.
Runtime-only credential injection — secrets passed as env vars or agent socket mounts at docker run time
Fast startup — must be <5s from cold, <1s warm. Docker + OrbStack is the path
Claude Code native — AI assistant must work seamlessly inside the container
Taskfile orchestration — task dev <project> is the only interface
Architecture
Runtime: Docker on OrbStack
Why Docker over alternatives:
| Option | Boot time | macOS FS perf | Tooling maturity | Isolation |
|---|---|---|---|---|
| Docker + OrbStack | ~1s | Near-native (VirtioFS) | Excellent | Good (namespaces + seccomp) |
| Apple Containers | ~1s | Unknown | Very new, limited | Better (full VM) |
| Vagrant/VMs | 30-60s | Poor (shared folders) | Mature but heavyweight | Best |
| K8s pods | ~2-5s | Volume-dependent | Overkill for local | Good |
Apple Containers (macOS container CLI, Virtualization.framework) are interesting but too immature for daily driving. Worth revisiting in 12 months. OrbStack gives us Docker with near-native perf and optional k8s if we want it later.
K8s option — don't dismiss entirely. OrbStack has built-in k8s. A future phase could define dev environments as k8s Deployments with PVCs, which would give us:
Resource limits (CPU/memory per project)
Network policies (restrict container egress)
PVC-backed persistent volumes (better than host mounts for isolation)
Same abstraction whether running locally or on a remote dev server
Base Image Strategy
Multi-stage Dockerfile, language-specific layers on a common base:
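A sketch of what the common base layer could look like, grounded in the Phase 1 plan (Ubuntu 24.04, mise, zsh, task, safe npm defaults); the exact package list, username, and paths are assumptions:

```dockerfile
# Dockerfile.base (sketch) -- common layer, no secrets ever baked in
FROM ubuntu:24.04

RUN apt-get update && apt-get install -y --no-install-recommends \
        ca-certificates curl git unzip zsh \
    && rm -rf /var/lib/apt/lists/*

# Non-root user; /workspace is the project mount point
RUN useradd -m -s /bin/zsh dev && mkdir /workspace && chown dev:dev /workspace
USER dev
WORKDIR /workspace

# mise manages per-language toolchains in the derived images
RUN curl -fsSL https://mise.run | sh
ENV PATH="/home/dev/.local/bin:/home/dev/.local/share/mise/shims:${PATH}"

# Safe-by-default npm: lifecycle scripts off unless a project opts in
RUN printf 'ignore-scripts=true\n' > /home/dev/.npmrc
```

Language images (Dockerfile.ts, Dockerfile.elixir, etc.) would build FROM this base and add toolchains via mise.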
All images are public-registry-safe — nothing proprietary, no auth tokens, no private package access.
Container Filesystem Layout
/workspace/ ← project source (bind mount or named volume)
/home/dev/ ← user home
/home/dev/.config/ ← tool configs (mounted read-only from host where safe)
/home/dev/.ssh/ ← EMPTY, ssh-agent socket mounted at runtime
/home/dev/.claude/ ← claude config (mounted at runtime)
/run/secrets/ ← runtime-injected secrets (tmpfs)
Persistence Strategy
Two modes, project-dependent:
Bind mount (-v $(pwd):/workspace) — for active development, host editor access. Tradeoff: host FS is exposed.
Named volume (-v project-data:/workspace) — for higher isolation. Code stays in the volume, accessed only via container. Use docker cp or git to sync.
Recommendation: bind mount for dev, named volume for untrusted dependency work (e.g., auditing a new package).
Credential Injection
SSH Agent Forwarding (1Password)
Every git push, ssh connection, etc. triggers a passkey/Touch ID prompt on the host. The container never sees a private key.
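One way to wire this up (a sketch; the in-container socket path and image name are illustrative, and the 1Password agent socket on macOS lives at a 1Password-specific path rather than the generic $SSH_AUTH_SOCK shown here):

```sh
# Mount the host ssh-agent socket into the container; no keys are copied.
docker run -it \
  -v "$SSH_AUTH_SOCK:/run/ssh-agent.sock" \
  -e SSH_AUTH_SOCK=/run/ssh-agent.sock \
  devbox-ts
```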
GitHub CLI
# Generate a scoped token and inject
GITHUB_TOKEN=$(gh auth token) docker run -e GITHUB_TOKEN devbox-ts
# Or mount gh config read-only
docker run -v "$HOME/.config/gh:/home/dev/.config/gh:ro" devbox-ts
AWS (via aws-vault)
Session tokens expire (typically 1hr). Even if exfiltrated, the damage window is limited.
npm / Private Registry Auth
Inside the container, .npmrc references the env var. The token is never written to the image or filesystem.
Hex (Elixir)
docker run -e HEX_API_KEY="$(op read 'op://Dev/hex-key/credential')" devbox-elixir
Claude Code
Claude Code can also operate from the host, talking to the container via docker exec. This is potentially the better model — Claude stays on the host with full context, but all code execution happens in the container.
Safe Package Manager Defaults
npm / pnpm (baked into base image .npmrc)
When a project legitimately needs lifecycle scripts (native modules etc.), explicitly opt in per-project:
# In the project's package.json or .npmrc
# pnpm v10+ has allowedScripts for granular control
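The baked-in defaults might look like this (a sketch; ignore-scripts=true is named in the plan, the remaining settings are assumptions):

```ini
# /home/dev/.npmrc -- baked into the base image
ignore-scripts=true   ; block postinstall and friends by default
audit=true            ; run npm audit on install
fund=false            ; less noise
; Registry auth comes from the environment at docker run time, never the image:
; //registry.npmjs.org/:_authToken=${NPM_TOKEN}
```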
Mix / Hex (baked into base image config)
# ~/.config/mix/config.exs or project config
# mix hex.audit on every deps.get
Key risks: Mix compiles dependencies, which can execute arbitrary Elixir at compile time. Container isolation is the primary mitigation here — there's no ignore-scripts equivalent.
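With no global kill switch, one lightweight guard is a Mix alias that chains hex.audit (which fails when any dependency has been retired on Hex) onto dependency fetching. A sketch for a project's mix.exs; the alias name is an assumption:

```elixir
# mix.exs (sketch) -- `mix deps.audit` fetches deps, then checks for
# retired packages via hex.audit.
defp aliases do
  [
    "deps.audit": ["deps.get", "hex.audit"]
  ]
end
```

The aliases/0 function would be referenced from project/0 via `aliases: aliases()`. This catches retired packages only; compile-time code execution still relies on container isolation.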
Cargo
# ~/.cargo/config.toml baked into image
[net]
git-fetch-with-cli = true  # uses ssh-agent, not stored keys

# Run cargo-audit as part of the build
# Run cargo-vet for supply chain review
build.rs scripts run during compilation — same risk as npm postinstall. Container isolation is the mitigation.
Go
# Baked into image environment
GONOSUMCHECK=""   # enforce checksum verification for ALL modules
GOFLAGS="-mod=readonly"   # prevent unexpected go.mod changes
GOPROXY="https://proxy.golang.org,direct"
GONOSUMDB=""   # use sumdb for everything
Container Security Hardening
Key hardening:
no-new-privileges — can't escalate via setuid/setgid
cap-drop=ALL — drop all Linux capabilities
read-only root filesystem with tmpfs for writable paths
Resource limits prevent crypto mining / DoS
Consider --network=none for pure offline builds, switch to bridge when needed
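Pulled together, the run flags might look like this (a sketch; the resource limits, tmpfs paths, and image name are illustrative):

```sh
docker run -it \
  --security-opt no-new-privileges \
  --cap-drop=ALL \
  --read-only --tmpfs /tmp --tmpfs /home/dev/.cache \
  --memory=4g --cpus=2 --pids-limit=512 \
  devbox-ts
```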
Claude Code Integration
Option A: Claude inside the container
Install Claude Code in the base image via curl -fsSL https://claude.ai/install.sh | bash
Mount ANTHROPIC_API_KEY at runtime
Full filesystem access within container only
MCP servers would need to be configured inside the container
Option B: Claude on host, exec into container (recommended)
Claude Code runs on host with full editor/context integration
Commands execute via docker exec into the running container
Host Claude config stays on host, container is just an execution sandbox
Configure via Claude Code hooks or custom shell wrapper
Option C: Hybrid
Claude on host for planning/editing
Claude inside container for execution-heavy tasks (tests, builds)
Use MCP or a simple HTTP bridge between host and container Claude instances
Recommendation: Start with Option B. It keeps the UX seamless (Zed/terminal Claude works normally) while sandboxing execution. The container just needs to be a running target for docker exec.
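A minimal wrapper for Option B might look like this (a sketch; the devbox-<project> container naming convention is an assumption):

```sh
#!/usr/bin/env bash
# dev-exec (sketch): run a command inside the project's running devbox container.
# Usage: dev-exec <project> <command...>
set -euo pipefail
project="$1"; shift
exec docker exec -it -w /workspace "devbox-${project}" "$@"
```

Pointing Claude Code's shell execution at a wrapper like this keeps editing on the host while every command runs sandboxed.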
Open Questions
Bind mount vs named volume as default? — Bind mount is more convenient (host editor access) but exposes the host FS. Named volume is more secure but requires docker cp or git for file access. Could default to bind mount and offer task dev:secure <project> for named-volume mode.
Network isolation granularity — Should the default allow outbound internet (for npm install, mix deps.get)? Probably yes for dev, but a --network=none mode for pure offline builds would be valuable.
How to handle ports? — Web dev needs port forwarding. Could auto-detect from package.json scripts or require explicit config.
Per-project config format? — A .devbox.yml in the project root? Or use the devcontainer spec for compatibility?
Image registry — Build locally only? Or push to GHCR for faster cold starts on new machines?
1Password CLI vs env vars — op read is elegant but requires 1Password CLI auth inside or on host. Env var injection from host op is simpler.
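On the per-project config question, a hypothetical .devbox.yml could be as small as this (every key shown is an assumption, for illustration only):

```yaml
# .devbox.yml (hypothetical per-project config)
image: devbox-ts
mount: bind          # or: volume, for the higher-isolation mode
ports:
  - "3000:3000"
env:
  - NODE_ENV=development
network: bridge      # or: none, for offline builds
```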
Taskfile Interface
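The tasks named in the plan (dev, dev:stop, dev:build, dev:list, dev:exec) could be wired up roughly like this; a sketch with the hardening and credential flags trimmed for brevity (note that go-task passes trailing arguments through {{.CLI_ARGS}} after a `--`):

```yaml
# Taskfile.yml (sketch)
version: "3"

tasks:
  dev:
    desc: Boot an isolated dev container for a project
    cmds:
      - docker run -d --name "devbox-{{.CLI_ARGS}}" -v "$PWD:/workspace" devbox-ts

  dev:stop:
    desc: Tear the container down
    cmds:
      - docker rm -f "devbox-{{.CLI_ARGS}}"

  dev:exec:
    desc: Shell into the running container (also the docker exec target for Claude)
    cmds:
      - docker exec -it -w /workspace "devbox-{{.CLI_ARGS}}" zsh
```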
Implementation Plan
Phase 1: Foundation
containers/ directory in dotfiles
Dockerfile.base — Ubuntu 24.04, mise, zsh, task, safe .npmrc, common tools
Dockerfile.ts — node, bun, pnpm via mise
Dockerfile.elixir — erlang, elixir via mise
Taskfile tasks: dev, dev:stop, dev:build, dev:list
ignore-scripts=true + token injection
Phase 2: Polish
Dockerfile.rust and Dockerfile.go
Dockerfile.full (all languages)
dev:exec for Claude Code host→container execution
Per-project config (.devbox.yml or similar) for custom env vars, ports, volumes
Port forwarding (-p 3000:3000)
Phase 3: Advanced
Devcontainer spec support (.devcontainer/devcontainer.json) for editor integration
Phase 4: Claude-Native Workflow
.claude/settings.json container-aware configuration