Overture is a local-first AI delivery control plane. It now supports two ways to start a project:
- the original quick path, where you already have a finished `plan.md`
- a new guided pipeline that helps you shape the prompt, run deep research, review the generated plan, execute it with Symphony, launch the result locally, and then deploy it
The current app is a working Next.js control plane with SQLite persistence, real Codex-backed prompt/workshop and planning flows, vendored Symphony orchestration, artifact storage, gate tracking, mutable findings, per-project security review, launch/deploy runs, runtime observability, project deletion, and a settings area for model, research, and runtime defaults.
- Ingests deep-research markdown plans
- Runs a guided six-stage flow: workshop, research, plan review, execution, launch, deploy
- Uses Codex App Server for a resumable prompt workshop with saved thread state
- Runs deep research separately from decomposition and keeps `plan.md` as the canonical handoff
- Uses Codex to produce a structured execution model
- Generates milestones, epics, dependency edges, findings, runs, artifacts, and audit events
- Injects mandatory QA, security, deployment, observability, documentation, and release gates
- Exposes a Linear-compatible tracker GraphQL surface for Symphony polling
- Launches Symphony against a per-project workflow contract
- Detects launch and deployment profiles from the target repository and records evidence for each supported run or publish path
- Tracks gate readiness, runtime state, artifacts, findings, and audit history in the UI
- Tracks both live-run token usage and the cumulative project token total across every run
- Supports hard deletion of failed or stale projects from the dashboard and project page
- Lets users choose planner and execution model defaults, research provider, and Codex thinking levels in `/settings`, with per-project overrides available in the intake flow
Overture now supports this full lifecycle:
- Prompt Workshop
- Deep Research Run
- Plan Review and Plan Ingestion
- Symphony Execution
- Launch Locally to Test
- Deploy
If you upload or paste a source brief in the guided path, Overture stores it as a project artifact and uses it to seed the first workshop turn automatically.
The original quick path remains intact. If you already have a strong `plan.md`, you can still paste or upload it from the home page and go straight to plan ingestion and execution.
There are two supported execution modes:
- `local_chatgpt`: uses the local Codex runtime already authenticated on the machine running Overture
- `hosted_api`: optional fallback mode that uses Codex with `OPENAI_API_KEY`
The automated local container deployment includes the Codex CLI, git, the Elixir runtime required by Symphony, and startup auth bootstrapping. On container startup, Overture copies the host machine's ChatGPT-backed Codex auth into the container's persisted CODEX_HOME under the platform runtime root so live planning and execution work without an extra login step.
Model selection works like this:
- If you leave the planner or execution model on `Codex default`, Overture lets the Codex CLI choose its default model
- If you want explicit control, set default model names in `/settings`
- If one project needs different model choices, open `Advanced project options` in the intake form and override them there
Thinking level works like this:
- Overture writes Codex `model_reasoning_effort` for both planning and Symphony ticket execution
- The settings page exposes dropdowns for `Low`, `Medium`, `High`, and `Extra High`
- `Extra High` is only offered for newer GPT-5 Codex-capable models; older selections automatically show the supported subset
Research-provider selection works like this:
- `codex_native` is the default local-first research provider
- `openai_responses` is the hosted/API fallback when `OPENAI_API_KEY` is available
- Next.js 16 + React 19
- SQLite via `better-sqlite3`
- Zod for validation
- Custom control-plane UI on the App Router
- Vitest for unit coverage
- Playwright for end-to-end verification
- Semgrep, Trivy, and ZAP wrappers for security checks
- Vendored Symphony runtime under `vendor/symphony`
- `src/app`: routes, pages, and API endpoints
- `src/components`: intake, workshop, research, dashboard, runtime, launch, deploy, and review UIs
- `src/lib/server`: persistence, planner, workshop, research, tracker shim, Symphony manager, launch, deploy, storage, and repository logic
- `scripts`: seeding, runner entrypoint, and security wrappers
- `tests/e2e`: browser-level product tests
- `infra`: Azure, AWS, Jetson, and Raspberry Pi deployment assets and notes
- `vendor/symphony`: vendored Symphony runtime used for execution
- `plan.md`: sample source blueprint
- Node.js 22+
- npm
- Docker Desktop for local container deployment and ZAP
- A working Codex runtime for live planning and execution
Optional local binaries:
- `semgrep`
- `trivy`
- `mix`, if you want to override the bundled Symphony build tool path
Start from the shipped example:
```
cp .env.example .env
```

Common variables:

- `CONTROL_PLANE_TRACKER_TOKEN`: token accepted by the tracker shim
- `SYMPHONY_TRACKER_TOKEN`: token used by Symphony against the tracker shim
- `NEXT_PUBLIC_DEFAULT_REPO`: default repo source shown in intake. For local Docker deployment this should normally stay `.` so Overture targets the checked-out app workspace.
- `OPENAI_API_KEY`: optional; only needed if you intentionally use `hosted_api`
- `PORT`: app port
- `OVERTURE_BIND_HOST`: bind host for `npm run start`
- `OVERTURE_ROOT`: optional runtime data root override; defaults to `<repo>/.overture`
- `CODEX_HOME`: optional Codex auth/state home; Docker defaults this under `.overture`
- `OVERTURE_CODEX_BIN`: optional Codex CLI override
- `OVERTURE_MIX_BIN`: optional `mix` override for Symphony builds
- `OVERTURE_SYMPHONY_BIN`: optional Symphony binary override
- `OVERTURE_SYMPHONY_PORT_BASE`: base port used for per-project Symphony runtimes
- `OVERTURE_ORIGIN`: override origin used by the runner script
- `OVERTURE_INTERNAL_ORIGIN`: optional internal loopback origin used by Symphony when it talks back to the control plane
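As a concrete starting point, a native-run `.env` might look like the fragment below. Every value is illustrative, not a shipped default; check `.env.example` for the real baseline:

```shell
# Illustrative .env fragment for a native local run (values are examples).
CONTROL_PLANE_TRACKER_TOKEN=local-tracker-token
SYMPHONY_TRACKER_TOKEN=local-tracker-token
PORT=3000
OVERTURE_BIND_HOST=127.0.0.1
# Only needed if you intentionally use the hosted_api fallback:
# OPENAI_API_KEY=sk-live-...
```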
Install dependencies:
```
npm install
```

Run the app in development:

```
npm run dev
```

Open http://127.0.0.1:3000.
For a production-style native launch:
```
npm run build
npm run start
```

This remains the simplest path for real project creation and execution when you want to use local ChatGPT-backed Codex auth.
This is the recommended production-like path on this machine because it automatically reuses your existing ChatGPT Codex login:
```
bash deploy.sh local
```

Then open http://127.0.0.1:3000.
What this does:
- Builds the Docker image with the Codex CLI and Symphony dependencies included
- Includes the Docker CLI and Compose plugin inside the app container for local docker-backed launch and deploy profiles
- Mounts your host Codex auth into the container
- Mounts the host Docker socket and host gateway so launch/deploy stages can manage sibling local stacks from inside the control plane
- Keeps project artifacts and runtime files under `.overture`
- Stores the live SQLite database and active Symphony workspaces in Docker-managed volumes instead of the macOS bind mount
- Migrates an existing host `.overture/data/overture.db*` into the Docker data volume on first launch
- Starts the app on port `3000`
Quick health check:
```
curl http://127.0.0.1:3000/api/health
```

The Azure and AWS helpers now publish the current repo as a single-instance cloud control plane instead of leaving only placeholder baselines behind.
Azure:
```
AZURE_RESOURCE_GROUP=my-overture-rg OPENAI_API_KEY=sk-live-... bash deploy.sh azure
```

AWS:

```
AWS_REGION=us-west-2 OPENAI_API_KEY=sk-live-... bash deploy.sh aws
```

Cloud deploy behavior:
- Both targets inject `OPENAI_API_KEY` and force the product default execution mode to `hosted_api`
- Azure builds remotely in ACR; AWS builds locally with Docker buildx and pushes to ECR
- Both targets run Overture as a single container on a dedicated VM/EC2 host with persistent `.overture` state on the instance filesystem
- Both targets expose port `3000` directly and print `OVERTURE_APP_URL` plus `OVERTURE_HEALTHCHECK_URL` when the deploy completes
- Overture can publish these targets from the handoff flow, but release verification still treats final cloud validation as operator-owned
Open `/settings` in the UI to control:
- Default planner model from the built-in Codex model dropdown
- Default execution model from the built-in Codex model dropdown
- Default research provider
- Planning thinking level from the built-in Codex reasoning dropdown
- Agent thinking level from the built-in Codex reasoning dropdown
- Default execution mode
- Default repository source
- QA and security strictness defaults
- Symphony parallelism and max-turn limits
The shipped default for new projects is now 5 simultaneous Symphony agents.
These settings apply to new projects by default. Existing projects keep the planner/execution settings captured when they were created until you edit them from the project page under Project settings and options.
If you leave either model on Codex default, Overture lets the installed Codex CLI choose the runtime default. Otherwise you can pick from the current built-in Codex model catalog in the dropdown.
You now have two supported starting paths.
Use this when you only have notes, a rough product idea, or an incomplete spec.
- Open `/`.
- In the guided card, enter a project name.
- Paste notes or upload a source brief if you already have one.
- Optionally adjust repo, model, or research defaults.
- Click `Start guided project`.
- If a source brief is present, Overture will use it to seed the first workshop turn automatically.
- Use the workshop page to refine the prompt until it is ready.
- Lock the prompt and run deep research.
- Review and edit the generated `plan.md`.
- Approve the plan and ingest it.
- Start Symphony from the build view.
- Use the `Launch` and `Deploy` pages to run later lifecycle stages.
Use this when you already have a finished `plan.md` or a strong markdown blueprint.
You can paste or upload a blueprint in the UI, or seed the sample plan.md from the command line:
```
npm run seed
```

That prints a project id. To launch Symphony for that project:

```
npm run runner -- <project-id>
```

The project page shows live Symphony runtime state, bootstrap logs, retry queues, tracker slices, artifacts, findings, and gate status. It also shows both the current live-run token count and the total token count accumulated across all runs for that project.
For a fresh project run:
- Delete the old project from the home dashboard or project page if you want a clean slate.
- Open `/`.
- Choose either the guided path or the quick path.
- Give the project a name.
- Start the next project.
You can also create a project programmatically with `POST /api/projects` in either draft or quick-path mode and then start execution with `POST /api/projects/:projectId/execute`.
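As a sketch of that programmatic path: the endpoint paths come from this README's API list, but the JSON field names below (`name`, `spec`) are illustrative guesses, not a confirmed request schema — check the actual API before relying on them.

```shell
# Build a hypothetical quick-path payload and validate the JSON locally
# before sending it anywhere. Field names are assumptions, not confirmed.
PAYLOAD='{"name": "demo-project", "spec": "# Plan"}'
echo "$PAYLOAD" | python3 -m json.tool > /dev/null && echo "payload ok"
# prints: payload ok

# Then, against a running instance:
#   curl -s -X POST http://127.0.0.1:3000/api/projects \
#     -H "Content-Type: application/json" -d "$PAYLOAD"
#   curl -s -X POST http://127.0.0.1:3000/api/projects/<project-id>/execute
```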
Project deletion is available in two places:
- The project dashboard cards on the home page
- The `Delete project` control on an individual project page
Deletion is a hard delete. It stops any active Symphony runtime for the project, removes the project row from SQLite, and deletes its runtime folders under `.overture/projects`, `.overture/artifacts`, and `.overture/workspaces`.
Equivalent API:
```
DELETE /api/projects/:projectId
```
- `npm run dev`: Next.js development server
- `npm run build`: production build
- `npm run start`: standalone production server bound via `OVERTURE_BIND_HOST` and `PORT`
- `npm run seed`: ingest the repo `plan.md`
- `npm run runner -- <project-id>`: launch or reattach Symphony for a project
- `npm run lint`: ESLint
- `npm run test`: Vitest
- `npm run e2e`: Playwright against an isolated `.overture-e2e` runtime
- `npm run qa`: lint, unit tests, and production build
- `npm run security`: Semgrep and Trivy against the Overture repo itself
- `npm run security:zap`: ZAP baseline against a running Overture app
- `npm run deploy:local`: Docker Compose local deployment helper
Project-level security review is available from the project Overview tab. It writes per-project findings and evidence artifacts instead of scanning the control-plane repo globally. If Semgrep, Trivy, or ZAP cannot run, or if the workspace sample is truncated for a very large repo, Overture records the review as partial instead of silently passing the security gate.
Supported deployment helper targets:
- `bash deploy.sh local`
- `bash deploy.sh jetson`
- `bash deploy.sh raspberry_pi`
- `bash deploy.sh azure`
- `bash deploy.sh aws`
- `bash deploy.sh ios_testflight`
- `bash deploy.sh ios_app_store`
The Azure and AWS targets now provision real single-instance cloud hosts. Overture can publish them directly, but the release gate still remains partial until you validate the live cloud environment yourself.
Recommended local verification:
```
npm run qa
npm run e2e
npm run security
ZAP_TARGET_URL=http://127.0.0.1:3000 npm run security:zap
npm audit --audit-level=high
```

For a fast manual smoke after launching the app:
- Open `/`.
- If needed, open `/settings` and confirm the default model, thinking level, and run mode.
- Create or open a project.
- Confirm the overview page shows the captured planning, agent, and run settings.
- Start the automated run from the project page.
- Verify `/api/health` returns `ok: true`.
- Confirm the `Live run` tab shows either active work or a clear waiting explanation.
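The health step of that smoke check is easy to script. The sketch below assumes only what this README states: that `/api/health` returns a body containing `ok: true`; the sample body stands in for a real `curl` call.

```shell
# Minimal health-gate helper. In a real smoke run you would use:
#   body="$(curl -s http://127.0.0.1:3000/api/health)"
body='{"ok": true}'

# Accept either spacing variant of the JSON key/value pair.
case "$body" in
  *'"ok": true'*|*'"ok":true'*) status="health: ok" ;;
  *) status="health: failed" ;;
esac
echo "$status"
# prints: health: ok
```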
For a beginner end-to-end run in the UI with the new guided flow:
- Open `/`.
- Enter a project name.
- Click `Start guided project`.
- Use the workshop page until the prompt looks right.
- Lock the prompt and run research.
- Review the generated `plan.md`.
- Approve the plan.
- Start the automated run.
- Use `Live run`, `Launch`, and `Deploy` as the project advances.
Bring up the containerized control plane:
```
npm run deploy:local
```

This requires a usable ChatGPT Codex login on the host machine. `deploy.sh local` exports that host auth directory into the container automatically.
The local Docker setup now:
- Builds the production image
- Installs the Codex CLI automatically
- Installs `git`, `curl`, the Docker CLI, and the Elixir runtime needed by Symphony
- Includes the vendored Symphony runtime
- Keeps host-visible artifacts and runtime files under `.overture`
- Moves the live SQLite database and active Symphony workspaces into Docker-managed volumes
- Persists Codex auth under `/app/.overture/codex-home`
- Copies host ChatGPT Codex auth into the container automatically
- Mounts `/var/run/docker.sock` and `host.docker.internal` so local docker launch/deploy profiles can orchestrate sibling stacks
- Exposes the app at http://127.0.0.1:3000
Health check:
```
curl http://127.0.0.1:3000/api/health
```

If the host machine is not already logged into Codex, `deploy.sh local` now fails before startup and the container entrypoint also fails fast. A successful container boot means the required runtime dependencies and ChatGPT-backed Codex auth bootstrap path are present.
Target repositories can optionally define these repo-local healthcheck hints in `.env` or `.env.example` so Overture knows how to validate launch/deploy runs:
- `OVERTURE_LAUNCH_HEALTHCHECK_URL`
- `OVERTURE_DOCKER_HEALTHCHECK_URL`
- `OVERTURE_DEPLOY_HEALTHCHECK_URL`
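For example, a target repo's `.env.example` might carry hints like the fragment below; the URLs are purely illustrative and should point at whatever endpoints the target app actually serves:

```shell
# Illustrative repo-local healthcheck hints (example URLs only).
OVERTURE_LAUNCH_HEALTHCHECK_URL=http://127.0.0.1:4000/healthz
OVERTURE_DOCKER_HEALTHCHECK_URL=http://127.0.0.1:4100/healthz
OVERTURE_DEPLOY_HEALTHCHECK_URL=https://staging.example.com/healthz
```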
Default runtime directories:
- Native runs: `.overture/data/overture.db` is the canonical SQLite database
- Docker runs: the canonical SQLite database is stored in the `overture_data` Docker volume and imported from host `.overture/data/overture.db*` on first boot if present
- `.overture/artifacts`: immutable evidence files
- `.overture/codex-home`: persisted Codex CLI auth and local Codex state for containerized runs
- `.overture/projects`: per-project workflow contracts and Symphony runtime files
- Native runs: `.overture/workspaces` holds per-project cloned workspaces used by Symphony
- Docker runs: active Symphony workspaces live in the `overture_workspaces` Docker volume for stability
- `.overture-e2e`: isolated Playwright runtime root
`OVERTURE_ROOT` changes where `.overture` is created, but source resolution still points at the actual app repository root.
- `Dockerfile`: local production image
- `docker-compose.yml`: local container orchestration with shared host artifacts plus Docker-managed data/workspace volumes
- `deploy.sh`: helper for local, Jetson, Raspberry Pi, Azure, AWS, and iOS preparation entrypoints
- `infra/jetson/README.md`: Jetson deployment notes
- `infra/raspberry-pi/README.md`: Raspberry Pi deployment notes
- `infra/azure/main.bicep`: Azure baseline container-hosting asset
- `infra/aws/template.yaml`: AWS baseline infrastructure template
- `GET /api/health`: health probe
- `GET /api/projects`: list project summaries
- `POST /api/projects`: create either a guided draft project or a quick-path project from spec content
- `POST /api/projects/drafts`: guided draft-project alias
- `DELETE /api/projects/:projectId`: hard-delete a project
- `PATCH /api/projects/:projectId`: rename a project
- `GET /api/projects/:projectId/snapshot`: fetch the full project snapshot
- `GET /api/projects/:projectId/artifacts`: list project artifacts
- `GET /api/projects/:projectId/workshop/thread`: fetch the current workshop thread and messages
- `POST /api/projects/:projectId/workshop`: send a workshop message or lock the prompt
- `POST /api/projects/:projectId/workshop/messages`: workshop-message alias
- `POST /api/projects/:projectId/workshop/fork`: create a fork of the current workshop thread
- `POST /api/projects/:projectId/research`: start deep research
- `POST /api/projects/:projectId/research/run`: deep-research alias
- `POST /api/projects/:projectId/research/approve`: approve the generated plan via the latest research artifact
- `POST /api/projects/:projectId/plan`: ingest an edited or generated `plan.md`
- `POST /api/projects/:projectId/plan/ingest`: plan-ingestion alias
- `POST /api/projects/:projectId/execute`: launch or refresh Symphony for the project
- `POST /api/projects/:projectId/launch`: run a launch profile
- `POST /api/projects/:projectId/launch/run`: launch alias
- `POST /api/projects/:projectId/deploy`: run a deploy profile
- `POST /api/projects/:projectId/deploy/run`: deploy alias
- `GET /api/settings`: read saved platform defaults
- `PATCH /api/settings`: update saved platform defaults
- `GET /api/artifacts/:artifactId`: stream a stored artifact
- `POST /api/tracker/graphql`: Linear-compatible tracker shim
- Artifact reads are boundary-checked before file access
- Security scans exclude vendored third-party runtime code from first-party policy failures
- The app ships response headers via `next.config.ts`
- The local Docker Compose deployment intentionally runs the container as `root` because the mounted Docker socket is the privilege boundary for local docker-backed launch and deploy profiles
- Native host execution and Docker deployment are both wired for real operation
- Docker local deployment now installs and boots the required Codex and Symphony runtime dependencies automatically
- Jetson and Raspberry Pi deployment helpers are real SSH-driven ARM64 container rollouts once remote-host settings are provided
- Azure and AWS deployments are real baseline infrastructure flows, but they still depend on the target cloud credentials and image references you provide