# Atlas Lab

Atlas Lab is a localhost-first, self-hosted platform made of a Node.js/TypeScript CLI, a layered Docker Compose stack, and an operational React dashboard served by the gateway. It provides Git hosting, optional automation agents, optional local LLM services, optional AI image and video generation, browser-based development workbenches, and structured image/volume backup workflows on a single machine.

Atlas Lab is built for a practical goal: run a repeatable local engineering platform without depending on custom DNS, hosts-file edits, scattered bootstrap scripts, or ad hoc reverse-proxy plumbing.
- An always-on core layer with Gitea, the gateway, and Atlas Dashboard
- An optional AI agents layer with n8n and external runners
- An optional AI LLM layer with Open WebUI and Ollama
- An optional AI image layer with InvokeAI and a script-managed local model set
- An optional AI video layer with ComfyUI and managed local video models
- An optional workbench layer with browser-based Node, Python, AI, and C++ environments plus shared PostgreSQL
- HTTPS-only ingress on `localhost`
- A self-contained npm package that can run without a local repository checkout
- Persistent state stored in named Docker volumes
- Single-file backup and restore for Docker images and volumes
In practice that means:

- no internal DNS
- no `hosts` file edits
- no disposable init containers in Compose
- no hard dependency on a checked-out repo
- one coherent operational flow across development, packaging, and day-to-day use
## Table of Contents

- Architecture
- Services, Ports, and URLs
- Docker Networks
- Persistence
- Host Requirements
- Central Configuration
- Quick Start
- CLI Workflows
- Atlas Dashboard
- Backup and Restore
- Default Credentials
- Repository Layout
- Troubleshooting
- Security Notes
- License
- Official References
## Architecture

Atlas Lab is split into six explicit layers:

| Layer | Status | Includes | Purpose |
|---|---|---|---|
| `core` | always on | gateway, Atlas Dashboard, Gitea, Gitea DB | baseline platform |
| `ai-agents` | optional | n8n, n8n runners, AI agents gateway | workflow automation and agent orchestration |
| `ai-llm` | optional | Open WebUI, Ollama, AI LLM gateway | local LLM workflows |
| `ai-image` | optional | InvokeAI, AI image gateway, managed model staging | local image generation |
| `ai-video` | optional | ComfyUI, AI video gateway, managed model staging | local video generation |
| `workbench` | optional | Node Forge, Python Grid, AI Reactor, C++ Foundry, shared PostgreSQL, workbench gateway | browser-based development |
The project went through three shapes:

- subpath-based reverse proxying
- custom hostnames such as `*.lab.home.arpa`
- the current `localhost` + dedicated HTTPS ports model

The current model is the most pragmatic for a single-machine lab:

- predictable URLs
- fewer frontend issues than subpath routing
- no local DNS to maintain
- no `hosts` file maintenance
Bootstrap is handled by the TypeScript CLI rather than by throwaway Compose init containers.
The CLI:

- starts the stack
- runs host preflight checks
- reconciles runtime state
- bootstraps Gitea
- bootstraps n8n only when the `ai-agents` layer is enabled
- reconciles Ollama only when the AI LLM layer is enabled
- cleans up legacy runtime artifacts
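The reconciliation steps above are idempotent: rerunning `up` or `bootstrap` converges to the same state rather than failing. A minimal POSIX-shell sketch of that guard pattern (the marker directory, step name, and `true` placeholder are illustrative stand-ins, not the CLI's real internals):

```shell
# Illustrative guard pattern: each bootstrap step records a marker when it
# succeeds, so reruns skip work that is already done. Paths and step names
# are stand-ins, not the CLI's real internals.
STATE_DIR=/tmp/atlas-bootstrap-demo
mkdir -p "$STATE_DIR"

run_once() {
  step="$1"; shift
  if [ -f "$STATE_DIR/$step.done" ]; then
    echo "skip: $step (already done)"
  else
    "$@" && touch "$STATE_DIR/$step.done" && echo "done: $step"
  fi
}

run_once gitea-admin true   # stand-in for the real Gitea admin bootstrap
run_once gitea-admin true   # rerunning the same step is a no-op
```

The real CLI keeps its state inside Docker volumes and the services themselves; the marker-file approach here only illustrates the convergence idea.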
## Services, Ports, and URLs

All public web entry points are exposed over HTTPS on `localhost`.
The only host-level TCP service exposed directly is PostgreSQL from the workbench layer.
| Service | Layer | URL / Endpoint | Notes |
|---|---|---|---|
| Atlas Dashboard | core | `https://localhost:8443/` | operational dashboard |
| Gitea | core | `https://localhost:8444/` | Git forge, issues, reviews |
| n8n | ai-agents | `https://localhost:8445/` | workflow automation |
| Open WebUI | ai-llm | `https://localhost:8446/` | only with `--with-ai-llm` |
| Ollama | ai-llm | `https://localhost:8447/` | HTTPS API |
| InvokeAI | ai-image | `https://localhost:8448/` | only with `--with-ai-image` |
| ComfyUI | ai-video | `https://localhost:8449/` | only with `--with-ai-video` |
| Node Forge | workbench | `https://localhost:8450/` | Node / TypeScript workspace |
| Python Grid | workbench | `https://localhost:8451/` | Python workspace |
| AI Reactor | workbench | `https://localhost:8452/` | AI / notebook workspace |
| C++ Foundry | workbench | `https://localhost:8453/` | C/C++ workspace |
| PostgreSQL | workbench | `localhost:15432` | host-side desktop access |
- browsers always go through the gateway
- optional layers never start implicitly
- host-side PostgreSQL clients must use `localhost:15432`, not `postgres-dev`
## Docker Networks

| Network | Type | Purpose |
|---|---|---|
| `edge-net` | exposed | published ingress ports |
| `apps-net` | internal | Gitea and shared browser-facing services |
| `ai-agents-net` | internal | n8n and external runners |
| `ai-llm-net` | internal | Open WebUI and Ollama |
| `ai-image-net` | internal | InvokeAI and image-generation runtime |
| `ai-video-net` | internal | ComfyUI and video-generation runtime |
| `data-net` | internal | data services and infrastructure databases |
| `workbench-net` | internal | workbenches and PostgreSQL |
| `workbench-host-net` | bridge | host-side PostgreSQL bind |
| `services-egress-net` | selective egress | outbound access for core services |
| `workbench-egress-net` | selective egress | outbound access for workbench services |
- `postgres-dev` exists only inside Docker networking
- desktop tools should connect to `localhost:15432`
- the gateway remains the only public browser entry point
## Persistence

Atlas Lab uses named Docker volumes for runtime state.
Key volumes include:

- `gateway-certs`
- `gateway-config`
- `gateway-site`
- `gateway-data`
- `gitea-data`
- `gitea-db`
- `n8n-data`
- `invokeai-data`
- `ollama-data`
- `open-webui-data`
- `postgres-dev-data`
- workbench home/workspace volumes for Node, Python, AI, and C++

Recreating containers does not wipe state. Removing the volumes does.
## Host Requirements

- Docker Engine with Docker Compose v2
- Node.js >= 20
- npm

The AI LLM and AI image layers require:

- an NVIDIA GPU
- a working `nvidia-smi` on the host
- Docker configured with NVIDIA GPU support

Recommended resources:

- CPU: 4 vCPU or better
- RAM: 8 GB minimum, 12-16 GB preferred
- disk: 20 GB free or more
- VRAM: 8 GB or more for comfortable Ollama usage

The following ports must be free:

- `8443`, `8444`, `8445`, `8446`, `8447`, `8448`, `8449`, `8450`, `8451`, `8452`, `8453`
- `15432` when `workbench` is enabled
On restrictive PowerShell setups, prefer the `.cmd` shims:

```
npm.cmd --version
atlas-lab.cmd status
```

The lab uses a self-signed certificate for `localhost`.
Certificate download URL: `https://localhost:8443/assets/lab.crt`

Git for Windows with `schannel` may require importing that certificate into the Windows trust store.
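Before importing a certificate into a trust store it is worth inspecting it. A hedged sketch using a throwaway self-signed certificate generated locally (the real certificate comes from the download URL above while the stack is running; the `/tmp` paths here are illustrative):

```shell
# Generate a throwaway self-signed localhost certificate, similar in shape
# to the one the gateway serves, then inspect its subject and expiry.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=localhost" \
  -keyout /tmp/demo-lab.key -out /tmp/demo-lab.crt 2>/dev/null

# Print the fields you would check before trusting the certificate.
openssl x509 -in /tmp/demo-lab.crt -noout -subject -enddate
```

To inspect the real certificate instead, fetch it first with something like `curl -k https://localhost:8443/assets/lab.crt -o lab.crt` and run the same `openssl x509` command against it.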
## Central Configuration

The main runtime configuration lives in `env/lab.env`.

Key variables include:

- `APP_VERSION`
- `LAB_HTTPS_PORT`, `GITEA_HTTPS_PORT`, `N8N_HTTPS_PORT`
- `OPENWEBUI_HTTPS_PORT`, `OLLAMA_HTTPS_PORT`
- `INVOKEAI_HTTPS_PORT`, `COMFYUI_HTTPS_PORT`
- `NODE_DEV_HTTPS_PORT`, `PYTHON_DEV_HTTPS_PORT`, `AI_DEV_HTTPS_PORT`, `CPP_DEV_HTTPS_PORT`
- `POSTGRES_DEV_HOST_PORT`
- `OLLAMA_CHAT_MODEL`, `OLLAMA_EMBEDDING_MODEL`, `OLLAMA_RUNTIME_MODELS`
- `INVOKEAI_MODEL_REPO`, `INVOKEAI_MODEL_REVISION`, `INVOKEAI_MODEL_TITLE`
- model manifests: `config/models/invokeai-models.json`, `config/models/comfyui-models.json`
- `GITEA_ROOT_USERNAME`, `GITEA_ROOT_PASSWORD`
- `N8N_ROOT_EMAIL`, `N8N_ROOT_PASSWORD`
- `OPENWEBUI_ROOT_EMAIL`, `OPENWEBUI_ROOT_PASSWORD`

Rule of thumb:

- change ports, versions, credentials, and models in `env/lab.env`
- change routing and runtime content in `config/gateway/templates/`
- change CLI behavior in `src/`
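Because all operational knobs live in one dotenv-style file, support scripts only ever need to read single values out of it. A small sketch of doing that without sourcing the whole file (the file below is a stand-in containing two of the documented keys, not the real `env/lab.env`):

```shell
# Stand-in for env/lab.env with two of the documented keys.
cat > /tmp/demo-lab.env <<'EOF'
LAB_HTTPS_PORT=8443
GITEA_HTTPS_PORT=8444
EOF

# Extract one value without sourcing the whole file (avoids executing
# anything and avoids polluting the current environment).
lab_port="$(grep '^LAB_HTTPS_PORT=' /tmp/demo-lab.env | cut -d= -f2)"
echo "dashboard will listen on https://localhost:${lab_port}/"
```

Sourcing the real file directly also works when you want every variable at once, but grep-and-cut is safer in scripts that need just one value.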
## Quick Start

Verify the host toolchain:

```
docker version
docker compose version
node --version
npm --version
```

Install dependencies and start the stack:

```
npm install
npm run dev -- up
npm run dev -- up --with-ai-llm
npm run dev -- up --with-workbench
npm run dev -- up --with-ai-image
npm run dev -- up --with-ai-llm --with-ai-image --with-workbench
```

Check and stop:

```
npm run dev -- status
npm run dev -- doctor --smoke
npm run dev -- doctor --with-ai-llm --with-ai-image --smoke
npm run dev -- down
```

## CLI Workflows

| Mode | Command | Purpose |
|---|---|---|
| dev mode | `npm run dev -- up` | runs the TypeScript source with `tsx` |
| CLI build | `npm run build` | bundles the CLI into `dist/` |
| dashboard build | `npm run build:atlas-dashboard` | typechecks and builds the dashboard |
| dashboard typecheck | `npm run typecheck:atlas-dashboard` | checks dashboard TypeScript |
| local dashboard dev | `npm run dev:atlas-dashboard` | starts local dashboard development |
| versioning | `npm run set:version` | updates managed version files and creates the release commit |
| local pack | `npm run pack:local` | creates a self-contained npm tarball |
| global install | `npm install -g .` | installs `atlas-lab` globally |
| Command | Role |
|---|---|
| `atlas-lab up` | starts core only |
| `atlas-lab up --with-ai-llm` | adds the AI LLM layer |
| `atlas-lab up --with-ai-image` | adds the AI image layer |
| `atlas-lab up --with-workbench` | adds the workbench layer |
| `atlas-lab up --with-ai-llm --with-ai-image --with-workbench` | starts the full lab |
| `atlas-lab bootstrap` | reruns core bootstrap |
| `atlas-lab bootstrap --with-ai-llm` | reruns bootstrap and Ollama reconciliation |
| `atlas-lab doctor` | runs host and configuration checks |
| `atlas-lab doctor --smoke` | adds smoke tests for the core layer |
| `atlas-lab doctor --with-ai-llm --with-ai-image --smoke` | adds smoke tests for the AI LLM and AI image layers |
| `atlas-lab status` | shows Compose/runtime status |
| `atlas-lab down` | stops the stack |
| `atlas-lab save-images` | exports Docker images to a single archive |
| `atlas-lab restore-images` | restores Docker images from an archive |
| `atlas-lab save-volumes` | exports Docker volumes to a single archive |
| `atlas-lab restore-volumes` | restores Docker volumes from an archive |
The global npm package already includes:

- Compose files
- `env/lab.env`
- gateway templates
- custom Dockerfiles
- dashboard sources
- bootstrap scripts

This allows `atlas-lab` to run without a local repository checkout.
```
npm run pack:local
npm install -g .\cli-node-docker-atlas-lab-<version>.tgz
atlas-lab status
```

## Atlas Dashboard

The dashboard frontend lives in `apps/atlas-dashboard/`.
Its toolchain config lives in `config/atlas-dashboard/`.
The dashboard can:

- visualize layer state
- surface operational links
- expose local markdown briefings
- show credentials and runtime notes
- support `it`/`en` localization

Local development:

```
npm run dev:atlas-dashboard
```

Optional layers can be simulated with:

- `ATLAS_DASHBOARD_DEV_AI_LLM_ENABLED`
- `ATLAS_DASHBOARD_DEV_AI_IMAGE_ENABLED`
- `ATLAS_DASHBOARD_DEV_WORKBENCH_ENABLED`
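A sketch of how those simulation flags might be set before starting the dev server. The `true` values are an assumption about the accepted format, not documented above; adjust to whatever the dashboard code actually checks:

```shell
# Export the dev flags before starting the dashboard dev server.
# The "true" values are an assumption about the accepted format.
export ATLAS_DASHBOARD_DEV_AI_LLM_ENABLED=true
export ATLAS_DASHBOARD_DEV_WORKBENCH_ENABLED=true

# Record what the dev server would inherit from the environment.
env | grep '^ATLAS_DASHBOARD_DEV_' | sort > /tmp/demo-dashboard-flags.txt
cat /tmp/demo-dashboard-flags.txt

# npm run dev:atlas-dashboard   # would now see both layers as enabled
```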
## Backup and Restore

Atlas Lab supports backup and restore for both Docker images and Docker volumes.

Features:

- one `.tar.gz` archive for selected images
- one `.tar.gz` archive for selected volumes
- embedded manifest metadata
- realtime progress logs during export and restore
- support for `core`, `ai-agents`, `ai-llm`, `ai-image`, `ai-video`, and `workbench` layer selection
Example workflow:

```
npm run dev -- save-images --with-ai-agents --with-ai-llm --with-ai-image --with-ai-video --with-workbench
npm run dev -- restore-images --input .\backups\images\atlas-lab-images.tar.gz
npm run dev -- down
npm run dev -- save-volumes --with-ai-agents --with-ai-llm --with-ai-image --with-ai-video --with-workbench
npm run dev -- restore-volumes --input .\backups\volumes\atlas-lab-volumes.tar.gz
```

Bootstrap is idempotent and reconciles Gitea, n8n when `ai-agents` is enabled, and Ollama when `ai-llm` is enabled.
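Since the backups are plain `.tar.gz` archives, standard tools can inspect them before a restore. A sketch against a locally built stand-in archive (the manifest filename and member layout are illustrative; the real archive layout may differ):

```shell
# Build a stand-in backup archive with a manifest, then inspect it the
# same way you could sanity-check a real atlas-lab archive.
work="$(mktemp -d)"
echo '{"kind":"volumes","layers":["core","ai-llm"]}' > "$work/manifest.json"
echo 'volume payload' > "$work/gitea-data.tar"
tar -czf /tmp/demo-atlas-backup.tar.gz -C "$work" manifest.json gitea-data.tar

# List the archive members without extracting anything.
tar -tzf /tmp/demo-atlas-backup.tar.gz

# Print a single member (the manifest) to stdout.
tar -xzf /tmp/demo-atlas-backup.tar.gz -O manifest.json
```

Listing the members and reading the manifest before running `restore-images` or `restore-volumes` is a cheap way to confirm you grabbed the right archive.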
## Default Credentials

> ⚠️ These credentials are intended for trusted local environments and are configurable through `env/lab.env`.

| Service | URL / Endpoint | Credentials |
|---|---|---|
| Atlas Dashboard | `https://localhost:8443/` | no dedicated login |
| Gitea | `https://localhost:8444/` | `root` / `RootGitea!2026` |
| n8n | `https://localhost:8445/` | `root@n8n.local` / `RootN8NApp!2026` |
| Open WebUI | `https://localhost:8446/` | `root@openwebui.local` / `RootOpenWebUI!2026` |
| Ollama | `https://localhost:8447/` | gateway basic auth `root` / `RootOllama!2026` |
| InvokeAI | `https://localhost:8448/` | gateway basic auth `root` / `RootInvokeAI!2026` |
| ComfyUI | `https://localhost:8449/` | gateway basic auth `root` / `RootComfyUI!2026` |
| PostgreSQL (host-side) | `localhost:15432` | `postgres` / `RootPostgresDev!2026` |
For DBeaver and other desktop PostgreSQL clients:

- host: `localhost`
- port: `15432`
- database: `lab`
- username: `postgres`
- password: `RootPostgresDev!2026`
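Those settings collapse into a single connection URL that most desktop and CLI clients accept. A sketch built from the documented defaults (the `psql` call is commented out because it needs the workbench layer running):

```shell
# Assemble the host-side connection URL from the documented defaults.
PG_URL='postgresql://postgres:RootPostgresDev!2026@localhost:15432/lab'
echo "$PG_URL" > /tmp/demo-pg-url.txt
cat /tmp/demo-pg-url.txt

# psql "$PG_URL" -c 'select 1'   # requires the workbench layer to be up
```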
## Repository Layout

| Area | Purpose | Paths |
|---|---|---|
| CLI | application logic and commands | `src/`, `bin/` |
| dashboard | React frontend | `apps/atlas-dashboard/`, `config/atlas-dashboard/` |
| Compose | layer orchestration | `infra/docker/compose*.yml` |
| images | Dockerfiles and startup scripts | `infra/docker/images/` |
| gateway | runtime templates and briefings | `config/gateway/templates/` |
| env | operational configuration | `env/lab.env` |
| repo scripts | versioning and support tooling | `scripts/` |
Key files:

- `package.json`
- `LICENSE`
- `env/lab.env`
- `infra/docker/compose.yml`
- `infra/docker/compose.ai-llm.yml`
- `infra/docker/compose.ai-image.yml`
- `infra/docker/compose.workbench.yml`
- `src/bin/atlas-lab.ts`
- `src/app/create-cli-app.ts`
- `src/services/`
- `config/gateway/templates/Caddyfile.template`
- `config/gateway/templates/runtime/lab-config.json.template`
- `infra/docker/images/gateway/bootstrap-gateway.sh`
## Troubleshooting

**The browser warns about the certificate**

Expected behavior. The lab uses a self-signed certificate.
**A lab port is already in use**

One of the configured lab ports (8443-8453 or 15432) is occupied or excluded by the system.
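A quick way to see which lab ports are already bound, using only bash's built-in `/dev/tcp` (a sketch; a connection that succeeds means something is already listening on that port):

```shell
# Probe each published lab port on 127.0.0.1; a successful connect means
# the port is already in use by some process.
for port in 8443 8444 8445 8446 8447 8448 8449 8450 8451 8452 8453 15432; do
  if (exec 3<>"/dev/tcp/127.0.0.1/$port") 2>/dev/null; then
    echo "port $port: in use"
  else
    echo "port $port: free"
  fi
done | tee /tmp/demo-port-report.txt
```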
```
atlas-lab status
docker ps --format "table {{.Names}}\t{{.Ports}}\t{{.Status}}"
```

**Ollama does not see the GPU**

This is usually a Docker daemon GPU pass-through issue, not an Ollama issue.
```
nvidia-smi -L
docker info
```

**InvokeAI is slow on first start**

Expected behavior. The AI image layer prepares the configured image model set inside persistent storage during service startup before InvokeAI becomes fully ready.
**Workbenches are unreachable**

Workbenches are not part of the core layer. Start them explicitly:

```
npm run dev -- up --with-workbench
```

**Open WebUI shows no models**

Verify:
- the AI LLM layer is enabled
- `OLLAMA_CHAT_MODEL`, `OLLAMA_EMBEDDING_MODEL`, and `OLLAMA_RUNTIME_MODELS` are set
- `https://localhost:8447/api/tags` responds
- the AI LLM bootstrap has run
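Because Ollama sits behind gateway basic auth, the `/api/tags` probe needs credentials. A sketch that builds the `Authorization` header from the documented defaults (the live `curl` is commented out since it needs the stack running and skips certificate verification with `-k` for the self-signed cert):

```shell
# Build the HTTP basic-auth header for the gateway-fronted Ollama API.
auth="$(printf '%s' 'root:RootOllama!2026' | base64)"
echo "Authorization: Basic $auth" > /tmp/demo-ollama-auth.txt
cat /tmp/demo-ollama-auth.txt

# curl -sk -H "Authorization: Basic $auth" https://localhost:8447/api/tags
```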
**Gitea skips the install wizard**

Expected behavior. The CLI creates the bootstrap owner, so the first-run wizard is skipped.
## Security Notes

Atlas Lab is intended for:
- local use
- technical lab environments
- trusted networks
- development and prototyping
It is not an internet-facing production deployment hardened out of the box.
If you want to harden it further:
- move secrets into an external secret-management system
- replace the default certificate with one signed by an internal CA
- tighten network segmentation further
- define recurring backup policies
- audit logs and default credentials
## License

This project is distributed under the MIT license.

- license file: `LICENSE`
- npm metadata: `package.json`
## Official References

**Docker**

- Compose startup order: https://docs.docker.com/compose/how-tos/startup-order/
- Compose profiles: https://docs.docker.com/compose/how-tos/profiles/
- Compose networks: https://docs.docker.com/reference/compose-file/networks/
- Docker networking drivers: https://docs.docker.com/engine/network/drivers/

**Caddy**

- Caddyfile concepts: https://caddyserver.com/docs/caddyfile
- Global options: https://caddyserver.com/docs/caddyfile/options
- Reverse proxy: https://caddyserver.com/docs/caddyfile/directives/reverse_proxy

**Gitea**

- Install with Docker: https://docs.gitea.com/installation/install-with-docker
- Admin CLI: https://docs.gitea.com/administration/command-line

**n8n**

- Environment variables: https://docs.n8n.io/hosting/configuration/environment-variables/
- Configuration methods: https://docs.n8n.io/hosting/configuration/configuration-methods/
- Hardening task runners: https://docs.n8n.io/hosting/securing/hardening-task-runners/
- SSL behind reverse proxy: https://docs.n8n.io/hosting/securing/set-up-ssl/

**Open WebUI**

- Environment configuration: https://docs.openwebui.com/getting-started/env-configuration/
- Reverse proxy notes: https://docs.openwebui.com/tutorials/integrations/unraid

**Ollama**

- FAQ: https://docs.ollama.com/faq
- API reference: https://github.com/ollama/ollama/blob/main/docs/api.md

**InvokeAI**

- Docker installation docs: https://invoke-ai.github.io/InvokeAI/installation/060_INSTALL_DOCKER/

**Model references**

- FLUX.2-klein-4b-fp8: https://huggingface.co/black-forest-labs/FLUX.2-klein-4b-fp8
- Z-Image GGUF: https://huggingface.co/gguf-org/z-image-gguf

**code-server**

- Official docs: https://coder.com/docs/code-server/latest

**npm and Node.js**

- package.json scripts: https://docs.npmjs.com/cli/v11/configuring-npm/package-json
- npm link: https://docs.npmjs.com/cli/v11/commands/npm-link
- npm pack: https://docs.npmjs.com/cli/v11/commands/npm-pack
- child_process: https://nodejs.org/api/child_process.html
Atlas Lab is a complete local platform with:
- an always-on core plane
- optional AI agents, AI LLM, AI image, AI video, and workbench layers
- a dark-first React dashboard
- a globally installable TypeScript CLI
- self-contained npm packaging
- structured backup and restore workflows
- MIT licensing