Composable agent runtime with enforced isolation boundaries
Design principle: Skills are declared capabilities.
Capabilities only exist when bound to an isolated execution boundary.
VoidBox = Agent(Skills) + Isolation
Architecture · Install · Quick Start (CLI · Rust) · OCI Support · Host Mounts · Snapshots · Observability
Local-first. Cloud-ready. Runs on any Linux host with /dev/kvm.
Status: v0 (early release). Production-ready architecture; APIs are still stabilizing.
- Isolated execution — Each stage runs inside its own micro-VM boundary, not shared-process containers.
- Policy-enforced runtime — Command allowlists, resource limits, seccomp-BPF, and controlled network egress.
- Skill-native model — MCP servers, SKILL files, and CLI tools mounted as declared capabilities.
- Composable pipelines — Sequential .pipe(), parallel .fan_out(), with explicit stage-level failure domains.
- Claude Code native runtime — Each stage runs claude-code, backed by Claude or Ollama via provider mode.
- OCI-native — Auto-pulls guest images from GHCR; mount container images as base OS or skill providers.
- Observability native — OTLP traces, metrics, structured logs, and stage-level telemetry emitted by design.
- Persistent host mounts — Share host directories into guest VMs via 9p/virtiofs with read-only or read-write mode.
- No root required — Usermode SLIRP networking via smoltcp (no TAP devices).
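
The persistent host mounts described above could be declared in a YAML spec. The fragment below is a hypothetical sketch: the `mounts` key names are illustrative, not the confirmed void-box schema.

```yaml
# Hypothetical mount declaration -- key names are illustrative,
# not the confirmed void-box schema.
sandbox:
  mode: auto
  mounts:
    - host: ./data           # host directory shared into the guest
      guest: /mnt/data       # mount point inside the micro-VM
      mode: ro               # read-only
    - host: ./workspace
      guest: /mnt/workspace
      mode: rw               # read-write
```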
Isolation is the primitive. Pipelines are compositions of bounded execution environments.
Containers share a host kernel — sufficient for general isolation, but AI agents executing tools, code, and external integrations create shared failure domains. VoidBox binds each agent stage to its own micro-VM boundary, enforced by hardware virtualization rather than advisory process controls. See Architecture (source) for the full security model.
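The composition model can be sketched in plain Rust. The `Pipeline` type below is a self-contained illustration of sequential `.pipe()` chaining with per-stage failure domains; it is not the void-box API, only the shape of the idea: each stage either produces output for the next stage or fails on its own, without corrupting its neighbors.

```rust
// Self-contained sketch of sequential composition with per-stage failure
// domains. Illustrative only: this is NOT the actual void-box API.
struct Pipeline {
    stages: Vec<Box<dyn Fn(String) -> Result<String, String>>>,
}

impl Pipeline {
    fn new() -> Self {
        Pipeline { stages: Vec::new() }
    }

    // Append a stage; each stage is its own failure domain.
    fn pipe(mut self, stage: impl Fn(String) -> Result<String, String> + 'static) -> Self {
        self.stages.push(Box::new(stage));
        self
    }

    // Run stages in order; the first failing stage stops the pipeline
    // and reports which stage failed.
    fn run(&self, input: String) -> Result<String, (usize, String)> {
        let mut acc = input;
        for (i, stage) in self.stages.iter().enumerate() {
            acc = stage(acc).map_err(|e| (i, e))?;
        }
        Ok(acc)
    }
}

fn main() {
    let pipeline = Pipeline::new()
        .pipe(|s| Ok(format!("fetched:{s}")))
        .pipe(|s| Ok(format!("analyzed:{s}")));

    assert_eq!(
        pipeline.run("top-stories".to_string()),
        Ok("analyzed:fetched:top-stories".to_string())
    );

    // A failing stage surfaces its index and error instead of
    // poisoning the stages around it.
    let failing = Pipeline::new()
        .pipe(|_| Err("network egress denied".to_string()));
    assert_eq!(
        failing.run(String::new()),
        Err((0, "network egress denied".to_string()))
    );
}
```

In VoidBox itself a stage would boot a micro-VM and run claude-code inside it; the sketch keeps stages as closures so the failure-domain behavior is visible on its own.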
Each release ships voidbox together with a kernel and initramfs so you can run workloads out of the box.
```sh
curl -fsSL https://raw.githubusercontent.com/the-void-ia/void-box/main/scripts/install.sh | sh
```

Installs to /usr/local/bin and /usr/local/lib/voidbox/. For a specific version: VERSION=v0.1.2 curl -fsSL ... | sh.
```sh
brew tap the-void-ia/tap
brew install voidbox
```

Download the .deb for your CPU (amd64 or arm64) from Releases. Example for v0.1.2 on amd64:

```sh
curl -fsSLO https://github.com/the-void-ia/void-box/releases/download/v0.1.2/voidbox_0.1.2_amd64.deb
sudo dpkg -i voidbox_0.1.2_amd64.deb
```

```sh
sudo rpm -i https://github.com/the-void-ia/void-box/releases/download/v0.1.2/voidbox-0.1.2-1.x86_64.rpm
```

Use the matching .rpm name from Releases for your version and architecture.
| Getting Started | First run, environment variables, API keys |
| Install (site) | Copy-paste install block and direct tarball links |
If you already use Rust, you can also cargo install void-box for the CLI only — pair it with the kernel and initramfs from a release tarball or another install method above.
With voidbox on your PATH, run an agent from a YAML spec. From a clone of this repository:
```sh
voidbox run --file examples/hackernews/hackernews_agent.yaml
```

CLI overview: voidbox run, validate, inspect, skills, snapshot, and config run locally and do not require a background server. For HTTP remote control, start voidbox serve (default 127.0.0.1:43100), then use status, logs, or tui against that daemon. Full command reference: CLI + TUI.
The full spec lives in examples/hackernews/hackernews_agent.yaml. A minimal shape looks like:
```yaml
api_version: v1
kind: agent
name: hn_researcher

sandbox:
  mode: auto
  memory_mb: 1024
  network: true

llm:
  provider: claude

agent:
  prompt: "Your task…"
  skills:
    - "file:examples/hackernews/skills/hackernews-api.md"
  timeout_secs: 600
```

Add the crate and build a VoidBox in code:
```sh
cargo add void-box
```

```rust
use void_box::agent_box::VoidBox;
use void_box::skill::Skill;
use void_box::llm::LlmProvider;

// Skills = declared capabilities
let hn_api = Skill::file("skills/hackernews-api.md")
    .description("HN API via curl + jq");

let reasoning = Skill::agent("claude-code")
    .description("Autonomous reasoning and code execution");

// VoidBox = Agent(Skills) + Isolation
let researcher = VoidBox::new("hn_researcher")
    .skill(hn_api)
    .skill(reasoning)
    .llm(LlmProvider::ollama("qwen3-coder"))
    .memory_mb(1024)
    .network(true)
    .prompt("Analyze top HN stories for AI engineering trends")
    .build()?;
```

| Architecture | Component diagram, data flow, security model |
| Runtime Model | Claude Code runtime, LLM providers, skill types |
| CLI + TUI | Command reference, daemon API endpoints |
| Events + Observability | Event types, OTLP traces, metrics |
| OCI Containers | Guest images, base images, OCI skills |
| Snapshots | Sub-second VM restore, snapshot types |
| Host Mounts | 9p/virtiofs host directory sharing |
| Security | Defense in depth, session auth, seccomp |
| Wire Protocol | vsock framing, message types |
| Getting Started | Install, first agent, first run |
| Running on Linux | KVM setup, manual build, mock mode, tests |
| Running on macOS | Apple Silicon, Virtualization.framework |
| Observability Setup | OTLP config, Grafana playground |
| AI Agent Sandboxing | Isolated micro-VM agent execution |
| Pipeline Composition | Multi-stage pipelines with .pipe() and .fan_out() |
| YAML Specs | Declarative agent/pipeline definitions |
| Local LLMs | Ollama integration via SLIRP networking |
VoidBox is evolving toward a durable, capability-bound execution platform.
- Session persistence — Durable run/session state with pluggable backends (filesystem, SQLite, Valkey).
- Terminal-native interactive experience — Panel-based, live-streaming interface powered by the event API.
- Codex-style backend support — Optional execution backend for code-first workflows.
- Language bindings — Python and Node.js SDKs for daemon-level integration.
Apache-2.0 · The Void Platform
