8 changes: 6 additions & 2 deletions README.md
@@ -8,9 +8,13 @@

**An open framework for agentic workflows. Bring your process, LLMs, scripts, agents, and infra — we handle the orchestration.**

Compose agent harnesses in YAML from LLM calls, scripts, validators, and transforms — wired with loops and conditionals. Mix deterministic and non-deterministic steps so your linter, tests, and schemas gate the LLM. Built-in support for Anthropic / OpenAI / Gemini / Ollama, coding-agent runtimes, SQLite / Postgres, OTel, and local / Docker / remote workers. Every layer has an extension point where the built-in doesn't fit. Scale agentic workflows like production infrastructure.
- **Compose** agent harnesses in YAML — LLM calls, scripts, validators, transforms, loops, and conditionals (sketched below).
- **Gate the LLM with your tools** — deterministic steps (linters, tests, schemas) wrap non-deterministic LLM calls, so output is checked on every run.
- **Plug in your stack** — Anthropic, OpenAI, Gemini, Ollama, coding-agent runtimes. Every layer has an extension point where the built-in doesn't fit.
- **Run anywhere** — local, Docker, or remote workers; SQLite or Postgres; OTel-native.
- **Scale like infra** — multi-worker scheduling, approval gates, cost ceilings, live dashboard.

Ships with a reference SDLC template so you can see an end-to-end pipeline running in minutes. The framework is domain-agnostic: point it at code review, content generation, ops runbooks, data pipelines — anything where multiple LLM calls need to be coordinated with humans in the loop.
Ships with a reference SDLC template — runnable end-to-end in minutes. Domain-agnostic: point it at code review, content generation, ops runbooks, data pipelines — anywhere multiple LLM calls need to be coordinated with humans in the loop.

> **For platform teams:** think of it as Kubernetes-style orchestration for agentic workloads — control plane, execution plane, declarative specs — without the cluster.

6 changes: 4 additions & 2 deletions packages/platform/README.md
@@ -1,8 +1,8 @@
# @mandarnilange/agentforge

**Production infrastructure for AgentForge.** Distributed execution, PostgreSQL, full observability, crash recovery.
**Production infrastructure for AgentForge.** Run agentic workflows at scale — distributed execution, PostgreSQL, full observability, crash recovery.

Extends [@mandarnilange/agentforge-core](https://www.npmjs.com/package/@mandarnilange/agentforge-core) with everything needed to run AI agent workflows at scale: Docker/remote executors, PostgreSQL persistence, OpenTelemetry tracing, rate limiting, and multi-node worker scheduling.
Extends [@mandarnilange/agentforge-core](https://www.npmjs.com/package/@mandarnilange/agentforge-core) — the open framework for agentic workflows — with everything needed to run it at scale: Docker / remote executors, PostgreSQL persistence, OpenTelemetry tracing, rate limiting, and multi-node worker scheduling. Bring your own LLMs, scripts, agents, and infra; this package is the Kubernetes-style orchestration plane underneath — control plane + capability-scheduled workers, declarative specs, no cluster required.

> **Note:** This package requires `@mandarnilange/agentforge-core` as a peer dependency. The platform listing depends on it directly, so a single install pulls both:
> ```bash
> npm install @mandarnilange/agentforge
> ```
@@ -118,6 +118,8 @@ docker compose -f packages/platform/docker-compose.worker.yml up -d

## Architecture

Platform borrows Kubernetes' separation of concerns — a **control plane** decides what runs where, an **execution plane** of nodes runs it. Both ship in this package; deploy them together on one host or split across machines (see *Distributed* in Quick Start above). Nodes advertise capabilities (`llm-access`, `docker`, `high-memory`, `git`, …) and the scheduler matches each agent's `nodeAffinity` to the pool — same mental model as Pod scheduling.
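
As a purely illustrative sketch (the keys below are hypothetical, not the package's real configuration format), capability matching could be expressed along these lines:

```yaml
# Hypothetical node and agent specs: keys are illustrative only.
node:
  id: worker-gpu-01
  capabilities: [llm-access, docker, high-memory]   # advertised to the control plane

agent:
  id: build-and-test
  nodeAffinity:
    required: [docker]        # only schedule on nodes advertising `docker`
    preferred: [high-memory]  # prefer, but do not require
```

The scheduler intersects each agent's requirements with the advertised capabilities of the node pool, the same way the Kubernetes scheduler matches Pods to Nodes.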

Platform extends core via its ports/adapters pattern:

```