Releases: AitherLabs/AitherOS
v0.4.0 — Execution Intelligence, Streaming & Docker Environments
What's new in v0.4.0
Agent Intelligence Overhaul
The biggest leap in agent quality since launch. Agents now behave like capable autonomous workers rather than mechanical task executors:
- Tool roster in every task prompt — agents see all available tools (name + description) before starting, eliminating wasted rounds on tool discovery
- Parallel tool batching — agents are instructed to batch independent tool calls in a single response; batched calls already ran in parallel on the backend, and agents now actually use that capability
- Forced reasoning turn on repeated errors — when a tool fails twice, the agent gets a no-tool thinking pass to diagnose why before retrying with a different approach
- Tool result compression — `run_command` output is head+tail trimmed (conclusions survive, build noise drops); HTML is stripped to text; rolling history stays focused on signal
- Tool round budget raised 24 → 50 — agents no longer die mid-task on non-trivial work
Adaptive Replanning (Multi-Agent)
After each parallel subtask batch, the leader agent checks completed work and optionally adjusts pending subtask descriptions before the next batch runs. The plan evolves as new information surfaces — agents react, not just execute.
Live Token Streaming
LLM responses now stream token-by-token to the frontend. The execution timeline shows a live "Streaming" card with a blinking cursor while the model is generating, transitioning to the full reasoning display on completion.
Parallel Subtask Execution
Independent subtasks (no dependency chain) now execute concurrently. A workforce with 3 independent agents completes up to 3× faster. Results are applied sequentially after the fan-out completes — no race conditions.
Direct Mode (Single-Agent)
Single-agent workforces skip the planning and approval phases entirely. Execution starts immediately with zero overhead. Title is derived from the objective without an LLM call.
Docker Execution Environments
Workforces can now specify a Docker image (kalilinux/kali-rolling, python:3.12, ubuntu:22.04, etc.). The orchestrator spins up a container at execution start, mounts the workspace, and tears it down on completion. All agents in the workforce share one container — packages installed by one tool call are visible to subsequent calls.
Persistent PTY Shell
Non-Docker workforces get a long-running bash session per workspace. cd, export, and installed packages survive across run_command calls — no more re-navigating to the project directory every tool round.
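A minimal sketch of the persistent-session idea: one long-lived bash process receives successive commands on stdin, so `cd` and `export` state carries over between calls. The real feature allocates a PTY and keeps the session open across tool rounds; both are omitted here for brevity.

```go
package main

import (
	"bufio"
	"fmt"
	"io"
	"os/exec"
)

// runSession sends three successive "tool calls" to a single bash
// process and returns the final line of output, demonstrating that
// working directory and environment persist within the session.
func runSession() string {
	cmd := exec.Command("bash")
	stdin, _ := cmd.StdinPipe()
	stdout, _ := cmd.StdoutPipe()
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	io.WriteString(stdin, "cd /tmp\n")
	io.WriteString(stdin, "export GREETING=hello\n")
	io.WriteString(stdin, "echo \"$GREETING from $(pwd)\"\n")
	stdin.Close()

	line, _ := bufio.NewReader(stdout).ReadString('\n')
	cmd.Wait()
	return line
}

func main() {
	fmt.Print(runSession()) // cd and export both survived within the session
}
```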
Execution Timeline
- Agent chain-of-thought visible in the execution timeline — reasoning text between tool calls is rendered as thinking events
- Files panel shows only files touched in the current execution, not the entire workspace
- Delivery report panel lists files written to workspace + external actions (API calls, git pushes) taken during execution
Office & Kanban
- Full kanban task board on the Office page with drag-and-drop
- Agents can create and list tasks via `kanban_create_task`/`kanban_list_tasks` virtual tools
- Workspace file tracking across executions
- New office background assets
MCP Tools
- `run_command`: uses Docker exec (with cwd tracking via `.aither_cwd`) when a container is active, persistent PTY session otherwise
- `filesystem` + `media` tools: optional `format=text` output reduces JSON noise for agents
- Agent efficiency guardrails: workspace snapshot pre-loaded, tool dedup, error nudges, token budget warnings
Fixes
- Review and final result now correctly reflect workspace files written during execution
- Loop detection, `/workspace` alias resolution, `knowledge_search` result parsing
Upgrade Notes
Run the following migration before restarting:
```sql
-- scripts/025_docker_image.sql
ALTER TABLE workforces ADD COLUMN IF NOT EXISTS docker_image TEXT NOT NULL DEFAULT '';
```

v0.3.0 — Autonomous Mode, WebGL Landing & Workspace Context
What's new in v0.3.0
Autonomous Mode: Full Auto-Approve
Workforces with Autonomous Mode enabled now auto-approve the strategy phase without human intervention, allowing the full execution lifecycle to run unattended. The approval message explicitly acknowledges autonomous mode so it's clear in the execution log why no human review was required.
Kanban: File Attachments & Task References
Tasks on the kanban board now support two new context fields:
- File attachments — attach workspace files directly to a task; the executor agent receives their contents as part of the subtask context
- Task references — link related or dependent tasks to a task for cross-reference during execution
The Add Task dialog is fully scrollable with a fixed footer so the Save button is always accessible regardless of how many files are in the picker.
Workforce Chat: Workspace File Picker
The workforce chat composer now includes a file attachment picker that lets you inject workspace file contents directly into a chat message. Files are fetched client-side and appended as fenced code blocks in the context sent to the agent. Supports search and multi-select.
Workforce Chat: Auto-Collapse for All Models
Message history auto-collapse (previously only active for image/media model conversations) is now applied to all model types, keeping the chat panel clean during long conversations. A "Show N hidden groups" toggle expands the full history on demand.
Executions: File Links & Rich Preview
- File paths in execution results and leader review are now rendered as clickable links that open an inline preview panel
- Preview supports: Markdown rendering, JSON pretty-printing, syntax highlighting, and native image display
- Relative workspace paths are resolved automatically; large files are truncated with a notice
Landing Page: WebGL Revolution Hero
The landing page hero is now a full-screen WebGL canvas with AitherOS-branded fluid noise, Voronoi, and curl noise effects. GSAP-powered hover animations on the headline phrases. All emoji icons across the page replaced with professional text labels and numbered identifiers.
Closed Beta Waitlist
New /api/beta/signup endpoint stores waitlist submissions (name, email, company). Submissions are visible in the admin panel under Settings.
Fixes
- Hero top bar no longer overlaps the fixed navigation bar
- Kanban Add Task dialog layout no longer breaks when the file picker expands
- Inline file links in execution output inherit correct prose text size and color
v0.2.6 — Workforce media chat stability and UX
Highlights
- Fixed Workforce Chat image rendering pipeline with authenticated frontend media proxy route.
- Hardened backend workforce file serving for generated assets (including legacy/generated-path fallback resolution).
- Added backend tool-loop guardrails for Workforce Chat to avoid long-running (>2 min) waits.
- Improved media connector output path normalization to return stable relative generated paths.
- Fixed knowledge ingestion execution FK handling for chat-ingested entries.
- Improved Workforce Chat UX for image-generation agents by auto-collapsing older messages and keeping the latest visible by default.
Security
- Workforce media file endpoint access is protected by auth in both frontend proxy and backend API.
Notes
- Generated image artifacts under local `generated/` are intentionally not included in source control.
v0.2.5
This release updates Workforce execution controls, Workforce Chat UX/persistence, media-output handling, auth redirect reliability, and Office/overview improvements.
Highlights
- Added single-agent execution mode end-to-end (UI + API + orchestrator), including approval bypass for explicit single-agent runs.
- Added Workforce Chat with Teams-like grouping, date separators, richer composer, contextual attachments, persistent per-agent history, and inline media previews.
- Added media debug hardening so image outputs are saved under workforce workspace generated paths and can be served via protected API.
- Consolidated Workforce details feed into a single Execution Feed (approvals + executions).
- Improved auth reliability for sign-in/sign-out redirects and overview token hydration behavior.
New functions introduced in this session
Backend: /backend/internal/api/debug.go
- `sanitizeMediaFilename(name string) string` — Sanitizes user-provided media filenames, removes unsafe characters/path tricks, and enforces a file extension.
Backend: /backend/internal/api/workforces.go
- `func (h *WorkForceHandler) File(w http.ResponseWriter, r *http.Request)` — Protected file-serving endpoint for workforce workspace files; used for generated media previews/downloads via `GET /api/v1/workforces/{id}/files?path=...`.
Backend: /backend/internal/orchestrator/orchestrator.go
- `func (o *Orchestrator) normalizeMediaPlan(...) ([]models.ExecutionSubtask, bool)` — Expands/normalizes media subtasks into single-output file tasks.
- `func isMediaAgent(agent *models.Agent) bool` — Detects media agents by model type.
- `func buildSingleOutputMediaSubtask(path string, width, height int) string` — Builds a one-file media subtask payload.
- `func selectSubtaskMediaRequirements(objective, subtask string) []mediaOutputRequirement` — Selects required media outputs for a subtask.
- `func extractMediaOutputRequirements(text string) []mediaOutputRequirement` — Parses media output requirements from text/specs.
- `func mergeRequirementDimensions(primary, fallback []mediaOutputRequirement) []mediaOutputRequirement` — Merges dimension hints between requirement sets.
- `func normalizeMediaPath(path string) string` — Normalizes/sanitizes media output paths.
- `func buildSingleOutputMediaPrompt(objective, subtask, handoffCtx, outputPath string) string` — Generates a structured media prompt for one output file.
- `func buildMediaSpecMessage(prompt, outputPath, aspectRatio string) string` — Builds the canonical media spec message consumed by media connectors.
- `func inferAspectRatio(width, height int) string` — Computes an aspect ratio token from dimensions.
- `func extractGeneratedMediaPath(content string) string` — Extracts the generated file path from agent response text.
- `func validateRequiredMediaFile(workspacePath, outputPath string, width, height int) error` — Validates generated file existence (and dimensions when required).
- `func isImagePath(path string) bool` — Checks if the output path is an image-type file.
- `func (o *Orchestrator) StartExecutionWithOptions(...)` — Public start API supporting mode + optional single-agent target.
- `func (o *Orchestrator) startExecution(...)` — Core execution bootstrap with mode-aware behavior.
- `func (o *Orchestrator) runMediaSubtask(...)` — Media-specific subtask runner.
- `func (o *Orchestrator) resolveExecutionAgents(...) ([]*models.Agent, error)` — Resolves the effective agent set per execution mode.
- `func buildExecutionInputs(inputs map[string]string, mode models.ExecutionMode, agentID *uuid.UUID) map[string]string` — Stamps execution mode metadata into inputs.
- `func normalizeExecutionMode(mode models.ExecutionMode) models.ExecutionMode` — Normalizes the mode value to a supported enum.
- `func executionModeFromInputs(inputs map[string]string) models.ExecutionMode` — Reads the execution mode from execution inputs.
- `func selectedExecutionAgentID(inputs map[string]string) (uuid.UUID, bool)` — Extracts the selected single-agent ID from inputs.
- `func shouldAutoRunWithoutApproval(exec *models.Execution) bool` — Decides approval bypass for single-agent runs.
- `func buildKanbanCompletionKnowledgeEntry(result string, includeKnowledge bool, includeProjectFacts bool) string` — Builds a knowledge payload from Kanban completion.
- `func summarizeExecutionResultForKanban(result string) string` — Summarizes an execution result for task activity.
- `func extractWorkspaceArtifacts(text string, maxCount int) []string` — Extracts artifact/file references from execution output.
Frontend: /frontend/src/app/dashboard/executions/[id]/page.tsx
- `function normalizeWhitespace(text: string): string` — Cleans text for summary rendering.
- `function toSingleSentence(text: string, maxLen = 180): string` — Converts verbose content into concise one-liners.
- `function extractStrategyPayload(raw: string): { speakerHint?: string; payload: string }` — Parses strategy payload blocks.
- `function parseJsonCandidate(candidate: string): unknown | null` — Safe JSON candidate parser.
- `function parsePlanStepsFromStrategy(strategy: string): Array<Record<string, unknown>> | null` — Extracts plan steps from strategy text.
- `function formatDecisionSentence(agentName: string, detail: string): string` — Formats a human-readable decision sentence.
- `function summarizeStrategyForDisplay(raw: string, preferredSpeaker: string): string` — Converts a strategy payload to a concise display line.
- `function parseCompletionSignalFromContent(content: string): CompletionSignal | null` — Detects a completion signal from a content payload.
- `function humanizeCompletionStatus(status?: string): string` — Maps status codes to natural-language labels.
- `function describeToolActivity(toolName: string): string` — Human-readable activity text for tools.
- `function summarizeAgentActivityFromEvent(ev: LiveEvent, agentName: string): string | null` — Condenses the event stream to readable activity.
- `function AgentThread(...)` — Agent thread renderer for the execution chat panel.
Frontend: /frontend/src/app/dashboard/office/page.tsx
- `function clamp(value: number, min: number, max: number): number` — Utility clamp used by room layout logic.
- `function buildOfficeSeats(agents: Agent[]): OfficeSeat[]` — Produces the seat map for Office room rendering.
- `function getAgentStatusStyle(status: string | undefined): { label: string; color: string }` — Style map for agent status pills.
- `function getAgentAuraOpacity(status: string | undefined): number` — Computes visual aura intensity by status.
Frontend: /frontend/src/app/dashboard/workforces/[id]/page.tsx
- `function isMediaModelType(modelType?: string): boolean` — Detects media-capable chat target agents.
- `function encodeWorkforceChatContent(content: string, meta: WorkforceChatMeta): string` — Encodes the persisted chat payload with workforce-scoped metadata.
- `function decodeWorkforceChatContent(raw: string): { content: string; meta?: WorkforceChatMeta }` — Decodes a persisted workforce chat payload.
- `function toChatDayKey(date: string): string` — Date bucket key for grouping.
- `function formatChatDayLabel(date: string): string` — Renders Today/Yesterday/date labels.
- `function buildWorkforceChatGroups(messages: WorkforceChatMessage[]): WorkforceChatGroup[]` — Groups neighboring messages into Teams-style bubbles.
- `function mapAgentChatToWorkforceMessage(chat: AgentChat, workforceId: string): WorkforceChatMessage | null` — Maps persisted agent chat rows to the workforce chat view model.
- `function resolveWorkforceChatImageUrl(raw: string, workforceId?: string): string` — Resolves generated image refs via the protected workforce file API.
- `function resolveChatMediaUrl(raw: string): string` — Resolves absolute/relative media references.
- `function looksLikeImageRef(candidate: string): boolean` — Detects image-like URL/path references.
- `function collectImageRefs(value: unknown): string[]` — Extracts image refs from text/JSON/tool outputs.
- `function extractChatImages(content: string, toolCalls: WorkforceChatToolCall[] = []): string[]` — Consolidates image refs for chat preview cards.
Additional behavioral updates in this session
- Auth redirect hardening for sign-in/sign-out on current origin.
- Overview stats loading now waits for hydrated access token before API fan-out.
- Workforce chat media mode enforces required `prompt` + `filename` fields.
- Workforce chat height increased for better message/image visibility.
v0.2.4 — Cloudflare AI, Self-Hosting & Infrastructure
AitherOS v0.2.4
Cloudflare Workers AI
New provider type for image (and audio) generation powered by Cloudflare's free AI inference tier. Configure your Account ID and API Token in the Providers page, and AitherOS will automatically sync available models from your account — including Flux.1, Stable Diffusion, and other text-to-image models. No third-party API key required beyond your existing Cloudflare account.
Google Imagen & Gemini Image
Fixed image generation for Google providers:
- Google Imagen 4.0 (`imagen-4.0-generate-001`) now uses the correct `:predict` endpoint with the Vertex AI body format
- Gemini image models (`gemini-2.0-flash-preview-image-generation`, etc.) use `generateContent` with `responseModalities: ["IMAGE"]`
- Both bypass LiteLLM and call the Google APIs directly, which is required for these endpoints
- OpenAI image connectors no longer double-append `/v1` when the base URL already includes it
Self-Hosting Improvements
Path portability — All hardcoded /opt/AitherOS paths now read from the INSTALL_ROOT environment variable (default: /opt/AitherOS). Clone to any directory and set INSTALL_ROOT accordingly.
Auto-build on restart — The PM2 configuration now runs go build before starting the backend binary. Every pm2 restart aitherd picks up the latest code automatically — no separate build step.
Environment examples — .env.example updated with all required variables including INSTALL_ROOT, SERVICE_TOKEN, REGISTRATION_TOKEN, and AITHER_API_URL. New frontend/.env.example added with BACKEND_URL and NEXTAUTH_JWT_SECRET.
Self-hosting guide — README expanded with a full installation walkthrough: prerequisites, DB setup with pgvector, PM2 configuration, Cloudflare Tunnel setup (config.yml, DNS routing, systemd service), internal-only access option, and a complete environment variable reference.
Reliability
- Fixed `mcp_servers.icon` column overflow: widened from `VARCHAR(10)` to `TEXT` (migration 015), icon changed to emoji — Aither-Tools now provisions correctly on all new workforces
- Cloudflare model sync queries `/ai/models/search` with task filters for Text-to-Image, Speech-to-Text, and Text-to-Speech
v0.2.3 — Image Generation, The Floor & Execution Intelligence
AitherOS v0.2.3
Image Generation
Agents can now generate images as a native capability. A new generate_image tool ships in Aither-Tools and works with Google Imagen, OpenAI DALL-E, fal.ai, or any compatible endpoint. The orchestrator automatically injects image provider credentials from your provider configuration — no manual wiring required. Provider settings now support Image, Video, and Audio model types alongside the existing LLM/embedding types.
The Floor
The dashboard overview is now a live virtual workspace called The Floor — showing active agents, what they're doing right now, and live connections between them. Replaces the static stats grid with a view that makes your AI team feel present.
Execution Intelligence
- Executions auto-generate a descriptive title during the planning phase using the LLM
- Execution list has search, filters, and grouping by status/date
- Execution detail page exports to Markdown
- Expandable event cards show tool arguments, subtask detail, and error context
Knowledge Base
- Better entry titles: scans content for meaningful headings instead of truncating raw text
- Halted executions now ingest to the knowledge base (previously only completed ones did)
- Workforce knowledge UI has source-type filtering (Results / Agent messages / Manual) and pagination
Activity Feed
Rich activity feed on the workforce page with real agent avatars, date grouping, metadata, and type filters.
Credential UX
- Inline credential quick-add inside the launch dialog
- Server-aware credential checklist shows exactly which secrets each MCP server needs
- Smart checklist on the workforce detail page
Reliability
- Fixed nil pointer panic when LLM Submit fails mid tool-loop
- Aither-Tools now provisions synchronously on workforce creation — MCP server is visible in the UI immediately, no race condition
- Migration 013: `provider_models` type constraint now includes image/video/audio
- Rate-limit errors correctly surface as `needs_help` instead of deadlock
- Rich deadlock diagnostics and higher tool round limit
Branding
- New slogan: The Operating System for Autonomous AI Teams
- Marketing landing page at the root route
- Favicon replaces generic "A" logo in nav and footer
v0.2.2 — Auth, Multi-Provider, Agent Tools & Embedding
What's new in v0.2.2
Security & Auth
- JWT enforcement on all API routes — every endpoint is now protected; public routes (health, login, register, WebSocket) remain open
- Service token bypass — the `SERVICE_TOKEN` env var allows internal Aither-Tools → backend calls without a JWT, so agent tools keep working after auth enforcement
- Admin user management — admin users can list, create, and activate/deactivate accounts via `/api/v1/admin/users`
- Registration token gating — set `REGISTRATION_TOKEN` to restrict self-registration to invited beta users
Multi-Provider Support
- Provider connection test — test API key + base URL before saving a provider; shows live model list on success
- Auto-fill base URLs — selecting a provider type pre-fills the canonical base URL (OpenAI, OpenRouter, Gemini, Ollama, LiteLLM)
- Model picker on create — after a successful test, select which models to register; all pre-selected by default
- Dropped the `provider_type` DB constraint — any provider type string is now accepted (no more constraint violations on Gemini, etc.)
- Models visible on provider cards — `ListProviders` now batch-loads registered models; displayed on provider cards in the UI
- Agent model dropdown — agents with registered provider models get a dropdown selector; "Custom model ID..." fallback for ad-hoc use
Embedding & Knowledge
- Decoupled embedding config — `EMBEDDING_API_BASE`, `EMBEDDING_API_KEY`, and `EMBEDDING_MODEL` are independent from the LLM provider (OpenRouter doesn't support embeddings — point embeddings at Ollama or OpenAI separately)
- Live embedding status probe — providers page shows real-time embedding health with dimension count; suggests Ollama or OpenAI when broken
- Active knowledge search tool — agents can call `knowledge_search` mid-execution for semantic RAG queries, not just passive context injection
- Knowledge ingest tool — `knowledge_ingest_file` lets agents persist workspace files directly into the workforce knowledge base
Kanban & Agent Autonomy
- Kanban MCP tools — agents can now call `kanban_list`, `kanban_create`, and `kanban_update` to manage the board themselves — plan work, mark tasks in-progress, flag blockers
- Autonomous scheduler — workforces with autonomous mode enabled are scheduled without human triggers
- Replan on rejection — orchestrator retries with feedback when a plan is rejected
Workspace & Provisioning
- Auto-provisioned Aither-Tools — every new workforce automatically gets its own Aither-Tools MCP server with correct env vars injected (workspace path, workforce ID, API token)
- Workspace path injected into agent prompts — agents know where their workspace is without being told
- Duplicate Aither-Tools hidden — "Available Servers" no longer shows other workforces' Aither-Tools instances
- MCP server icon support — server cards render `icon` as an image URL or emoji; Aither-Tools now uses the app favicon
Bug Fixes
- Fixed 404 on OpenRouter and all `/v1`-suffixed base URLs — connector now strips a trailing `/v1` before appending `/v1/chat/completions`
- Fixed null tools crash on agent detail page
- Fixed N+1 query patterns across executions, agents, and providers pages
- Fixed plan JSON rendering in execution detail — shown as readable cards, not raw blobs
- Fixed frontend proxy (Next.js runtime catch-all instead of `next.config` rewrite)
- Fixed git tracking: the `workforces/` directory is now excluded
Configuration Reference
New env vars introduced in this release:
```
SERVICE_TOKEN=<random-hex>        # required — allows agent tools to call the internal API
REGISTRATION_TOKEN=<invite-token> # optional — restrict self-registration to invited users
EMBEDDING_API_BASE=http://...     # defaults to https://api.openai.com/v1
EMBEDDING_API_KEY=<key>           # defaults to LLM_API_KEY if unset
EMBEDDING_MODEL=nomic-embed-text  # or text-embedding-3-small, etc.
```
v0.2.1 — LLM Model Discovery & Routing
What's New
Live Model Discovery
- Providers page: New Fetch button in the Add Model dialog queries the provider's `/v1/models` endpoint and shows all available models as clickable pills
- Sync All: Registers every new model from the endpoint in one click — skips already-registered models, safe to re-run
- Backend: new `GET /api/v1/providers/:id/live-models` endpoint that proxies to the provider's models endpoint
Model Dropdown on Agents
- Agent create dialog and agent detail page: Model field is now a dropdown populated from the provider's registered LLM models — falls back to free-text input when no provider is selected or provider has no registered models
- Switching provider auto-selects the first available model so the field is never empty
LiteLLM Router Expansion
- All 14 models from the Codex CLI accounts are now mapped in the LiteLLM config, load-balanced across 3 OpenAI-compatible account proxies: `gpt-5.4`, `gpt-5.4-mini`, `gpt-5.3-codex`, `gpt-5.2-codex`, `gpt-5.2`, `gpt-5.1-codex-max`, `gpt-5.1-codex`, `gpt-5.1-codex-mini`, `gpt-5.1`, `gpt-5-codex`, `gpt-5-codex-mini`, `gpt-5`, `gpt-oss-120b`, `gpt-oss-20b`
Bug Fixes
- Fixed `Promise.all` in Sync All causing all-or-nothing failures on duplicate models — switched to `Promise.allSettled` with a fresh provider state fetch before filtering
v0.2.0 — Workspace Management, Credentials & Kanban
AitherOS v0.2.0 — March 2026
This release brings production-grade workspace management, per-workforce credentials, Kanban task boards with autonomous mode, and critical performance + reliability fixes.
🚀 Workspace Provisioning & Aither-Tools MCP Server
Every workforce now gets its own isolated workspace directory with automatic provisioning:
- Auto-provisioned workspaces: `/opt/AitherOS/workforces/{workforce-slug}/workspace/` created on workforce creation
- Aither-Tools MCP server: 46 built-in tools (filesystem, git, network, secrets, etc.) automatically registered and attached to each workforce
- Per-workforce tool environments: each Aither-Tools instance runs with `AITHER_WORKSPACE` and `AITHER_WORKFORCE_NAME` env vars
- Retroactive provisioning: `POST /api/v1/workforces/{id}/provision` endpoint for migrating old workforces
- Tool discovery caching: provisioner auto-discovers and caches all 46 tool definitions on workspace creation
🔐 Per-Workforce Encrypted Credentials
Agents can now use different API keys and secrets per workforce, stored securely and accessible via tools:
- AES-256-GCM encryption: all credentials encrypted at rest using the `ENCRYPTION_KEY` env var (32-byte base64)
- CRUD API: `GET/PUT/DELETE /api/v1/workforces/{id}/credentials` for managing secrets per workforce
- Automatic file export: `.secrets.json` written to `{workforce-root}/.secrets.json` (mode 0600) on every credential change
- Aither-Tools integration: new `get_secret(service, key_name)` and `list_secrets()` tools
- Frontend UI: credentials section on workforce detail page with inline add form, grouped by service, masked values
📋 Kanban Task Board + Autonomous Mode
Per-workforce task boards with execution linking and optional autonomous operation:
- 5-column board: Open → Todo → In Progress → Blocked → Done
- Execution linking: Tasks can be started directly from the board via "▶ Run" button, auto-updates status on completion/failure
- Autonomous mode toggle: Per-workforce setting for future scheduled leader review executions
- Priority coloring: Low/medium/high/critical tasks with color-coded left borders
- Agent assignment: Tasks can be assigned to specific agents with visual badges
- Full lifecycle: Orchestrator auto-marks tasks done/blocked when linked execution completes/fails
⚡ Performance Improvements
- N+1 agent loading eliminated: `GetAgentsBatch()` replaces per-agent queries — one `SELECT ... WHERE id = ANY($1)` instead of N queries
- Workforces page stats optimization: new `GET /api/v1/stats` endpoint reduces 7 API calls → 3 for a page with 5 workforces
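The `ANY($1)` batching pattern looks roughly like this; `dedupe` and the literal query string are illustrative helpers, not the real `GetAgentsBatch`:

```go
package main

import "fmt"

// dedupe collapses repeated IDs so the batch query's array parameter
// stays minimal before the single round trip.
func dedupe(ids []string) []string {
	seen := make(map[string]struct{}, len(ids))
	out := make([]string, 0, len(ids))
	for _, id := range ids {
		if _, ok := seen[id]; !ok {
			seen[id] = struct{}{}
			out = append(out, id)
		}
	}
	return out
}

func main() {
	// Before: N queries, i.e. `SELECT ... WHERE id = $1` inside a loop.
	// After: one query with the whole ID set as a single array parameter.
	ids := dedupe([]string{"a1", "b2", "a1", "c3"})
	query := "SELECT * FROM agents WHERE id = ANY($1)"
	fmt.Printf("%s -- $1 = %v\n", query, ids)
}
```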
🛠️ Backend Hardening & Bug Fixes
- Context cancellation fixes: provisioner and knowledge manager goroutines now use `context.Background()` — fixes silent failures when the request context is cancelled
- Circuit breaker recovery: embedder `failCount` resets to 0 on success (was permanently one-way)
- Safe type assertions: `HaltExecution` and `InjectIntervention` use the `, ok` form (prevents panics)
- Knowledge manager availability guards: all methods check `embedder.Available()` before HTTP calls
- Provisioner idempotency: checks for an existing Aither-Tools server before creating one, preventing duplicates
📚 API Additions
| Method | Path | Description |
|---|---|---|
| POST | `/api/v1/workforces/:id/provision` | Provision workspace + Aither-Tools |
| GET | `/api/v1/stats` | Global execution stats |
| GET/POST | `/api/v1/workforces/:id/kanban` | Kanban CRUD |
| PATCH/DELETE | `/api/v1/kanban/:taskID` | Task updates |
| GET/PUT/DELETE | `/api/v1/workforces/:id/credentials` | Credential management |
🗄️ Schema Migrations
Run these migrations if upgrading from v0.1.0:
```shell
psql -U aitheros -d aitheros -f scripts/006_kanban.sql
psql -U aitheros -d aitheros -f scripts/007_credentials.sql
```

Changes:
- `workforces` table: added `autonomous_mode`, `heartbeat_interval_m`
- `kanban_tasks` table (new): full task lifecycle with execution linking
- `workforce_credentials` table (new): encrypted secrets storage
🧩 MCP Server Updates
- Aither-Tools: Added `get_secret(service, key_name)` and `list_secrets()` tools for credential access
Breaking Changes
None. Fully backward compatible with v0.1.0.
Installation & Upgrade
New installation:
```shell
git clone https://github.com/AitherLabs/AitherOS.git
cd AitherOS
make setup-db
make build
pm2 start ecosystem.config.js
```

Upgrading from v0.1.0:

```shell
git pull origin main
make build
psql -U aitheros -d aitheros -f scripts/006_kanban.sql
psql -U aitheros -d aitheros -f scripts/007_credentials.sql
pm2 restart all
```

Post-upgrade: Click "Provision" on each existing workforce detail page to create workspaces and attach Aither-Tools.
Full Changelog: https://github.com/AitherLabs/AitherOS/commits/v0.2.0
Documentation: https://github.com/AitherLabs/AitherOS/blob/main/README.md