# AgentIncus

AgentIncus, inspired by Code in Incus (COI), is a set of shell scripts that automate the creation of Incus containers for AI agents and secure development. See COI's "Why Incus" for the case for Incus over Docker.
Why shell scripts? They introduce no dependencies, are ergonomic enough for simple systems administration tasks, and transparently convey their purpose.
- Prerequisites
- Install
- Quick Start
- Scripts
- incus.init Options
- The Development Workflow
- Runtime Management
- Linux Gotchas
## Prerequisites

- Linux: Incus installed and initialized (`incus admin init`)
- macOS: Homebrew installed — `incus.init` will automatically prompt to install Colima and the Incus CLI, then bootstrap a Colima VM with the Incus runtime
- `~/.local/bin` in your `PATH`
## Install

```bash
git clone <repo-url> agent_incus
cd agent_incus
./install_shortcuts
```

This symlinks the helper scripts into `~/.local/bin`.
## Quick Start

```bash
# Create a container with the current directory mounted as /workspace
incs -i my-project

# Open a shell
incs my-project

# Run a command (e.g. Claude Code)
incs my-project claude
```

## Scripts

| Script | Alias | Purpose |
|---|---|---|
| `incs` | — | Unified CLI (shell, init, network, update) |
| `incus.init` | `inci` | Create and provision a container |
| `incus.shell` | — | Open a login shell (or run a command) in a container |
| `incus.network` | `incn` | Manage port proxy devices |
| `incus.macos.setup` | — | Bootstrap Colima + Incus on macOS (called automatically by `incus.init`) |
| `install_shortcuts` | — | Symlink helpers and aliases into `~/.local/bin` |
`incs` is the main entrypoint. It routes to the underlying scripts:

```bash
incs my-project                     # Shell into container (default)
incs -s my-project                  # Shell (explicit)
incs -s my-project mix test         # Run a command in container
incs -i my-project                  # Create a new container
incs -i my-project --from base-dev  # Create from template
incs -n my-project 4000 3241        # Proxy ports 4000 and 3241 to localhost
incs -n my-project 4000:8080        # Host 4000 -> container 8080
incs -n my-project -b 10.0.0.5 4000 # Proxy on a specific address (e.g. Tailscale IP for remote access)
incs -n my-project -l               # List active proxies
incs -n my-project -r 4000          # Remove proxy for port 4000
incs -n my-project -r all           # Remove all proxies
incs -u my-project                  # Update packages in a container
incs -ua                            # Update all agent-incus containers
incs cron install                   # Install 7pm daily update cron
incs cron install 3                 # Install 3am daily update cron
incs cron status                    # Show current cron schedule
incs cron remove                    # Remove the update cron
```

The individual scripts and aliases (`inci`, `incn`) still work directly.
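The flag routing above can be sketched as a small dispatcher. This is an illustrative sketch of the pattern, not the actual `incs` source:

```shell
# Simplified sketch of how a flag-routing entrypoint like incs can
# dispatch to the underlying scripts. Illustrative only.
route() {
  case "${1:-}" in
    -i)   shift; echo "incus.init $*" ;;
    -n)   shift; echo "incus.network $*" ;;
    -s)   shift; echo "incus.shell $*" ;;
    -u)   shift; echo "update $*" ;;
    cron) shift; echo "cron ${1:-}" ;;
    -*)   echo "unknown flag: $1" >&2; return 1 ;;
    *)    echo "incus.shell $*" ;;   # default: shell into the container
  esac
}

route -i my-project        # -> incus.init my-project
route my-project claude    # -> incus.shell my-project claude
```

Because every subcommand is just another script on `PATH`, the dispatcher stays a thin `case` statement.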
## `incus.init` Options

```
Usage: incus.init [OPTIONS] <container-name>

Options:
  -p, --path PATH          Host directory to mount (default: current directory)
  -m, --mount-path PATH    Container mount point (default: /workspace)
  -f, --from TEMPLATE      Launch from a saved template (shorthand for --image incus-init/TEMPLATE)
  -i, --image IMAGE        Base image override (default: ubuntu/24.04)
  -t, --template           Save container as a reusable local template (implies --no-mount)
  --<component>            Pre-select a component (e.g. --1pass, --gh-token, --entire)
  --no-mount               Clone repo into container instead of mounting host directory
  --no-sudo                Do not grant sudo to the container user (for AI agents)
  --colima-cpus N          Colima VM CPUs (default: 4, macOS only)
  --colima-memory N        Colima VM memory in GB (default: 8, macOS only)
  --colima-disk N          Colima VM disk in GB (default: 100, macOS only)
  --dry-run                Show what would be done without doing it
```
- Launches an Ubuntu 24.04 container (override with `--image`)
- Installs build tools, dev libraries, Python, and Node.js
- Creates a user matching your host UID/GID with passwordless sudo
- Mounts your host directory into the container (tries `shift=true`, falls back to `raw.idmap`)
- Installs mise (runtime version manager) and Oh My Zsh
- Presents an interactive TUI to select optional components (see below)
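The mount step can be reproduced with plain `incus` commands. A sketch of the approach — `incus.init` does this for you, and the device name `workspace` here is illustrative:

```shell
# Sketch of incus.init's mount step: prefer an idmapped mount
# (shift=true), fall back to raw.idmap. Not the actual source.
INCUS=${INCUS:-incus}   # set INCUS=echo to dry-run

mount_workspace() {   # usage: mount_workspace <container> <host-dir>
  local name=$1 src=$2
  if ! $INCUS config device add "$name" workspace disk \
      source="$src" path=/workspace shift=true; then
    # Fallback: map the host UID/GID into the container's ID range
    $INCUS config set "$name" raw.idmap "both $(id -u) $(id -u)"
    $INCUS config device add "$name" workspace disk \
        source="$src" path=/workspace
  fi
}

# Dry-run demonstration:
INCUS=echo mount_workspace my-project "$PWD"
```

`shift=true` needs idmapped-mount support in the kernel and storage driver, which is why the `raw.idmap` fallback exists.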
Every container is provisioned with the following packages before any optional components are selected:
| Category | Packages |
|---|---|
| Core | bash, curl, git, wget, sudo, unzip, tmux, zsh |
| Build tools | build-essential, pkg-config, autoconf, automake, bison, cmake |
| Dev libraries | libssl-dev, libreadline-dev, libyaml-dev, libsqlite3-dev, libffi-dev, libncurses-dev, zlib1g-dev, and more |
| Runtimes | python3, python3-dev, python3-pip, python3-venv, nodejs, npm |
| Utilities | gpg, ca-certificates, psmisc, fontconfig, fzf, bat |
| Tools | mise (runtime version manager), Oh My Zsh (with zsh-autosuggestions), GitHub CLI |
The TUI lets you pick from optional components during container creation. Components are defined as standalone scripts in the components/ directory — the TUI discovers them automatically.
Included components:
| Component | Description |
|---|---|
| Docker | Container runtime & compose (enabled by default) |
| Chromium / Playwright | Headless browser for testing |
| OpenSpec | Spec-driven development CLI |
| rtk | High-performance CLI proxy that reduces LLM token consumption by 60-90% |
| fzf + bat | Interactive search & file preview |
| Claude Code | AI coding assistant |
| 1Password CLI | Password manager CLI |
| GitHub Auth | GitHub token & git credentials |
| just | Command runner for project tasks |
| Entire CLI | Entire CLI tool |
| Glow | Terminal markdown viewer |
| Codex | OpenAI coding agent |
| Tidewave | Tidewave CLI for agent-driven web app dev |
| Tailscale | Tailscale VPN client inside the container |
| Chadmux | Chad's tmux config + TPM plugins |
Skip the TUI with --no-tui to use defaults, or pre-select components via CLI flags (--1pass, --gh-token, --entire).
Drop a .sh file in the components/ directory. The filename prefix controls install order (e.g. 10- before 50-). Each file defines a simple contract:
```bash
# components/50-my-tool.sh
COMPONENT_ID="my-tool"
COMPONENT_NAME="My Tool"
COMPONENT_DESC="Does something useful"
COMPONENT_DEFAULT=0   # 0=off by default, 1=on

component_is_installed() {
  incus exec "$CONTAINER_NAME" -- command -v my-tool &>/dev/null
}

component_install() {
  log "Installing My Tool..."
  incus exec "$CONTAINER_NAME" -- sh -c 'curl -fsSL ... | sh'
}
```

Optional extras:

- `COMPONENT_CLI_FLAGS="--my-tool"` — adds a CLI flag to pre-select without the TUI
- `COMPONENT_NEEDS_PROMPT=1` + `component_prompt()` — collect user input before install
- `COMPONENT_RUN_ON_LAUNCH=1` + `component_on_launch()` — re-run setup when launching from a template (for symlinks, config that doesn't survive snapshots)
Component files are sourced in a separate bash process to safely extract metadata without executing install logic. The metadata (ID, name, description, default, CLI flags) is stashed into arrays that the TUI and arg parser use. Glob ordering (10-docker.sh before 50-just.sh) controls both TUI display order and install order — no dependency resolution needed. During install, each component file is sourced into the main process so its functions have access to globals like $CONTAINER_NAME and $HOST_USER.
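The metadata pass can be sketched like this — an illustrative sketch of the subshell technique, not the actual implementation:

```shell
# Sketch: extract component metadata by sourcing each file in a
# subshell, so install functions are defined but never executed.
# Illustrative only — not the actual incus.init source.
declare -a IDS NAMES DEFAULTS

load_components() {   # usage: load_components <dir>
  local dir=$1 f meta id name def
  for f in "$dir"/*.sh; do
    # Subshell prints only the metadata fields; install logic never runs
    meta=$(bash -c "source '$f' >/dev/null && printf '%s\t%s\t%s' \
        \"\$COMPONENT_ID\" \"\$COMPONENT_NAME\" \"\$COMPONENT_DEFAULT\"") || continue
    IFS=$'\t' read -r id name def <<<"$meta"
    IDS+=("$id"); NAMES+=("$name"); DEFAULTS+=("$def")
  done
}
```

Glob expansion of `"$dir"/*.sh` sorts lexically, which is what gives `10-` files priority over `50-` files with no extra ordering logic.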
## The Development Workflow

A recommended setup uses two containers sharing the same workspace. The agent container runs with --no-sudo so AI tools cannot escalate privileges, while the dev container has full access and credentials:
```mermaid
graph TB
    W["/workspace (app files)"]
    H["Host Machine"] --> W
    A["Agent Container"] --> W
    D["Dev Container"] --> W
```

```bash
# Agent container — no sudo, no credentials
incs -i --no-sudo project-agent

# Dev container — with credentials
incs -i --1pass --gh-token project-dev

# Save as reusable template, then spin up new containers instantly
incs -i --template project-base
incs -i --from project-base --no-sudo project-agent-2
```

The host, agent, and dev containers all read and write the same `/workspace` directory. Your editor, the AI agent, and your dev tools all see the same files.
Provisioning a container from scratch installs packages, build tools, mise, Oh My Zsh, and Docker. This takes a few minutes. You can skip that on subsequent containers by saving a template — a snapshot of a fully provisioned container with no secrets baked in, stored in the local Incus image store.
Build once:

```bash
incs -i --template my-base
```

This provisions the container (without mounting host files), scrubs any tokens, and saves it locally as `incus-init/my-base`. The original container keeps running with its tokens intact.
Reuse instantly:
```bash
# Spin up a new container from the base image — seconds, not minutes
incs -i --from my-base my-project

# Same base, different credentials
incs -i --from my-base --1pass --gh-token my-dev
```

When launching from a template, provisioning (packages, shell setup, Oh My Zsh) is skipped entirely. Only workspace mounting, user creation (if needed), and selected components run.
Manage templates:
```bash
incus image list                        # see saved templates
incus image delete incus-init/my-base   # remove one
```

To access a service running inside a container from your host:
```bash
# Proxy one or more ports to localhost
incs -n project-dev 4000 3241

# Map different host and container ports
incs -n project-dev 4000:8080

# Proxy to a specific address (e.g. Tailscale IP)
incs -n project-dev -b 100.69.177.88 4000

# List active proxies
incs -n project-dev -l

# Remove a proxy
incs -n project-dev -r 4000
```

Or use the container/VM IP directly — find it with `incus list` (Linux) or `colima list` (macOS). On macOS, the Colima VM IP (e.g. 192.168.64.6) is a private address only accessible from your Mac.
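Under the hood, `incs -n` manages Incus proxy devices. The equivalent raw commands look roughly like this — a sketch, with the `proxy-<port>` device name chosen for illustration:

```shell
# Sketch of the raw proxy-device commands behind incs -n.
# The device name "proxy-<port>" is illustrative.
INCUS=${INCUS:-incus}   # set INCUS=echo to dry-run

add_proxy() {   # usage: add_proxy <container> <host-port> <container-port>
  $INCUS config device add "$1" "proxy-$2" proxy \
      listen=tcp:127.0.0.1:"$2" connect=tcp:127.0.0.1:"$3"
}

remove_proxy() {   # usage: remove_proxy <container> <host-port>
  $INCUS config device remove "$1" "proxy-$2"
}

# Dry-run: print the commands instead of executing them
INCUS=echo add_proxy project-dev 4000 8080
INCUS=echo remove_proxy project-dev 4000
```

Binding to a different address (the `-b` flag) just changes the `listen=` host portion.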
Combine incs -n with tailscale serve to expose a container app over tailnet HTTPS (tailnet-only, respects ACLs):
```bash
# Forward host:4000 -> container:4000
incs -n project-dev 4000:4000

# On the host, terminate TLS with the tailnet cert and proxy to the local port
tailscale serve --bg --https=443 http://127.0.0.1:4000
```

Then reach the app from any tailnet device at `https://<host>.<tailnet>.ts.net/`. Requires MagicDNS and HTTPS certificates enabled in the tailnet admin console.
Important: bind to 0.0.0.0 — most dev servers bind to localhost by default, which blocks access from outside the container. You need to bind to all interfaces:
```bash
# Astro
npm run dev -- --host 0.0.0.0

# Next.js
npm run dev -- -H 0.0.0.0

# Rails
bin/rails server -b 0.0.0.0

# Phoenix
mix phx.server   # binds 0.0.0.0 by default, but check config/dev.exs for ip: {127, 0, 0, 1}

# FastAPI
uvicorn main:app --host 0.0.0.0

# Vite (Vue, Svelte, etc.)
npm run dev -- --host 0.0.0.0
```

## Runtime Management

Containers don't have sudo by default, so package updates run from the host via `incus exec`:
```bash
# Update a single container
incs -u my-project

# Update all agent-incus managed containers
incs -ua
```

`incs -ua` starts stopped containers, updates them, then stops them again. Containers that were already running are left running. Only containers tagged with `user.managed-by=agent-incus` (set automatically during `incs -i`) are updated. Non-Debian containers are skipped.
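That tag-based filtering can be sketched with plain `incus` commands — a sketch of the shape of the approach, not the actual script:

```shell
# Sketch: update every container tagged user.managed-by=agent-incus.
# Not the actual incs source.
INCUS=${INCUS:-incus}   # set INCUS=echo to dry-run

managed_containers() {   # names only, one per line
  $INCUS list -f csv -c n user.managed-by=agent-incus
}

update_one() {   # usage: update_one <container>
  $INCUS exec "$1" -- sh -c 'apt-get update && apt-get -y upgrade'
}

update_all() {
  local name
  for name in $(managed_containers); do
    update_one "$name"
  done
}
```

The real script additionally starts/stops containers as needed and skips non-Debian guests, as described above.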
Logs are written to ~/.local/state/incs/logs/ and cleaned up on success.
Install a daily cron to update all containers automatically:
```bash
incs cron install     # Daily at 7pm (default)
incs cron install 3   # Daily at 3am
incs cron status      # Show current schedule
incs cron remove      # Remove the cron
```

If an update fails, you'll see a notification the next time you open a terminal:
```
⚠️ incs update failed (2026-04-11 19:00): avex mix-claude
Run 'ls ~/.local/state/incs/logs/' to see logs
```
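A hook like this typically boils down to a marker-file check at shell startup. A hypothetical sketch of the shape — the real snippet comes from `incs notify-snippet`, and the `update-failed` filename here is made up for illustration:

```shell
# Hypothetical shape of a shell-startup notification hook: if the
# updater left a failure marker, print it once. The marker filename
# "update-failed" is illustrative, not the real one.
notify_check() {   # usage: notify_check [state-dir]
  local dir=${1:-${XDG_STATE_HOME:-$HOME/.local/state}/incs}
  if [ -f "$dir/update-failed" ]; then
    printf 'incs update failed: %s\n' "$(cat "$dir/update-failed")"
  fi
}
```

The check is cheap (one `stat`), so it adds no noticeable shell-startup latency.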
To enable this, add the notification hook to your shell:
```bash
incs notify-snippet >> ~/.zshrc
```

Capture environment state for rollback:
```bash
incus snapshot create project-dev before-refactor
incus snapshot restore project-dev before-refactor
incus info project-dev                              # list snapshots
```

Containers come with mise pre-installed. Install runtimes per-project:
```bash
cd /workspace
mise use python@3.12 node@20
```

Or add a `mise.toml` to your project — `incus.init` runs `mise install` automatically if one exists.
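A minimal `mise.toml` pinning the same runtimes might look like this (versions are illustrative):

```toml
# mise.toml — checked into the project root; `mise install` reads it
[tools]
python = "3.12"
node = "20"
```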
## Linux Gotchas

If UFW is enabled on the host, its default DROP policy will block traffic on the Incus bridge. See the Incus firewall documentation for setup instructions. The things you'll need to allow:

- DHCP + DNS — containers need these to get an IP address and resolve names
- Outbound forwarding — containers need a route through the host to reach the internet
- Optionally, if you use `--proxy`, the proxy port on the host must also accept connections from the bridge
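Under UFW, those allowances translate to rules roughly like the following — a sketch assuming the default bridge name `incusbr0`; check the Incus firewall documentation for the exact recommended rule set:

```shell
# Rules corresponding to the allowances above. Assumes the default
# Incus bridge name incusbr0. Set UFW=echo to print without applying.
UFW=${UFW:-sudo ufw}

allow_incus_bridge() {   # usage: allow_incus_bridge <bridge>
  $UFW allow in on "$1" to any port 67 proto udp   # DHCP
  $UFW allow in on "$1" to any port 53             # DNS
  $UFW route allow in on "$1"                      # outbound forwarding
  $UFW route allow out on "$1"
}

# Dry-run: print the rules without applying them
UFW=echo allow_incus_bridge incusbr0
```

Confirm the bridge name first with `incus network list` — it's only `incusbr0` if you accepted the defaults during `incus admin init`.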
If your host doesn't have IPv6 internet, disable it on the bridge:

```bash
incus network set incusbr0 ipv6.address none
```

Without this, containers get an IPv6 address from Incus and prefer it (per RFC 6724). The symptom is confusing: `ping` works (resolves to IPv4) but `apt-get update` hangs trying to reach mirrors over IPv6.