## Overview: What AasF Is
AasF (Agent as a Function/Framework) is a framework concept aimed at designing, executing, and orchestrating AI agents that use large language models (LLMs).
Traditional chat agents accumulate conversation history along with an invisible dataset of tool-execution records that clutters the context window. Because this data is managed as a black box in the cloud, it is difficult for users to achieve reproducible LLM usage.
The key characteristic of AasF is that, unlike conventional stateful chat agents, agents are defined as disposable, stateless "functions" whose state (context) is managed entirely externally. This guarantees complete reproducibility of inputs, even if the LLM outputs themselves cannot be made fully deterministic.
AasF's design is based on three principles: statelessness, external context management, and the Unix philosophy.
### Statelessness

- Definition: Agents are designed as pure functions that accept a specific input (`context`) and return a result (`output`). The agent itself holds no internal state such as session information or history, and it is discarded immediately after execution.
- Mathematical expression:

$$f(\text{context}) \rightarrow \text{result}$$
### External Context Management

- Definition: The "state" or "history" an agent needs for processing is managed externally as local files (e.g., JSON files) rather than in the cloud.
- Complete transparency: Every context can be directly viewed, edited, and inspected by the user, achieving a complete escape from vendor black boxes (Clean Jailbreak).
- Flow: Users or parent agents build and edit the context locally, and this "state" is injected into the agent (a sketch of such a context follows this list).
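As a rough sketch, an externally managed context might be a plain local file like the object below. The field names here are illustrative assumptions, not pipe's actual schema.

```js
// Illustrative only: a locally stored, fully inspectable context.
// Field names are assumptions for this sketch, not pipe's actual schema.
const context = {
  sessionId: 'a1b2c3',
  role: 'engineer',
  background: 'React project',
  history: [
    { turn: 1, instruction: 'Review Button component', output: '...' }
  ]
}
```

Because the entire "state" lives in a file like this, the user, not the vendor, decides exactly what the agent sees on the next invocation.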
### The Unix Philosophy

- Do One Thing Well: Each agent specializes in a single role (`role`) and performs that one function well.
- Composable Intelligence: Agents are connected via standard text streams (JSON/Markdown), enabling pipeline configurations such as `cli | agent` or `agent | agent` (see the sketch after this list).
- Bottom-Heavy Architecture: Complexity is placed in the tools and the framework rather than in the LLM, following the maxim "the smaller the function, the smaller the deviation": the less an agent is asked to do, the less its behavior can drift.
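As a minimal sketch of Composable Intelligence (illustrative only, not pipe's actual API), each stage is a stateless text-to-text function, so stages compose exactly like a Unix pipeline:

```js
// Compose stateless stages into a pipeline: cli | agent | agent
function compose(stages) {
  return function (input) {
    return stages.reduce(function (text, stage) {
      return stage(text)
    }, input)
  }
}

// Stub stages standing in for real agents
function reviewer(text) { return 'review of: ' + text }
function summarizer(text) { return 'summary of: ' + text }

const pipeline = compose([reviewer, summarizer])
console.log(pipeline('diff from cli')) // "summary of: review of: diff from cli"
```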
pipe leverages its stateless, non-interactive mode to achieve fully controlled, reproducible prompting locally (a timestamp is sent by default, but it is easy to remove if undesired).
This eliminates all sources of variation except the inherent non-determinism of the LLM itself and knowledge cutoff changes due to model updates.
It follows the same spirit as Infrastructure as Code (IaC), which we benefited from with tools like Vagrant, Chef, and Docker: it brings a world where "the same input always produces the same result, no matter who runs it or when" to the domain of LLM agents.
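A minimal sketch of what this buys you, assuming nothing about pipe's internals: if the prompt is a pure function of a locally stored context, its serialized form is byte-identical across runs and machines, which can be verified with a hash.

```js
const crypto = require('crypto')

// Deterministic serialization: fix the key order so it never varies
function buildPrompt(context) {
  return JSON.stringify(context, Object.keys(context).sort())
}

// The hypothetical context contains no timestamp or other ambient state
const context = { role: 'engineer', instruction: 'Review Button component' }

const run1 = crypto.createHash('sha256').update(buildPrompt(context)).digest('hex')
const run2 = crypto.createHash('sha256').update(buildPrompt(context)).digest('hex')
console.log(run1 === run2) // true: same input, no matter who runs it or when
```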
pipe makes it easy to build multi-agent setups: you write procedures and roles in natural language and run them with the `takt` command. To embody the concept of AasF, this document intentionally adopts an ES5-era prototype-based notation, which makes the core ideas more intuitive.
This notation inherits the philosophy of "separating state and logic" found in Perl's inside-out objects and the prototype chain of early JavaScript.
| Concept | Intention of Prototype Notation | Connection to AasF Philosophy |
|---|---|---|
| Emphasis on pure functions | Write `function Agent() {}` separately from `Agent.prototype.invoke = function (...)`. | Emphasizes AasF's view of agents as collections of functions (`invoke`) that manipulate data. Agents are not complex OOP "actors" but simply containers for processing. |
| Lightweight structure | Simpler and more extensible code compared to `class` syntax. | Suggests that agents do not hold complex internal state, reinforcing AasF's design principle of delegating state management to external context. |
| Clarification of disposability | Create a new instance each time with `new Agent()`. | Visually conveys that agents are ephemeral (short-lived and disposable). This mirrors pipe's execution model of "reinjecting into a new agent each time." |
```js
// Stub dependencies so the sketch runs standalone
function generateSessionId() {
  return 'session-' + Date.now().toString(36)
}
const model = {
  request: function (context) {
    return '<LLM response for: ' + JSON.stringify(context) + '>'
  }
}

function Agent() {}

// Method to call an Agent as a "Function"
Agent.prototype.invoke = function (context) {
  // Session ID management is handled via the context; the ID is kept on
  // this instance only so that children can reference their parent
  this.sessionId = context.sessionId || generateSessionId()
  return {
    sessionId: this.sessionId,
    // Make a request to the model and return the result
    output: model.request(context)
  }
}

// Create and execute a child agent
Agent.prototype.invokeChild = function (context) {
  // Pass the parent's session ID in the context
  context.parent = this.sessionId
  return new Agent().invoke(context)
}

// Conductor orchestrates the multi-agent workflow
const conductor = new Agent()
conductor.procedure = function () {
  this.invokeChild({ /* child1 context */ })
  this.invokeChild({ /* child2 context */ })
}
const parent = conductor.invoke({ /* conductor context */ })
conductor.procedure()
```

Core of the code:
- `invoke(context)`: A single execution of an agent. A new instance is always used.
- `invokeChild(context)`: The parent agent creates and runs a child agent, building a hierarchical session structure.
- `procedure`: The Conductor role orchestrates multi-agent operations.
- `context` is everything: Agents remember nothing; all state is managed externally.
AasF’s stateless design and external context management provide significant advantages, especially when building systems that use LLMs.
Because the externally injected context makes inputs fully reproducible, this design addresses many LLM-specific challenges.
- Input reproducibility (constancy): Although LLM outputs are inherently probabilistic and unstable, the input to agents can be made fully reproducible, making debugging and quality control easier.
- Dry Run mode: The `--dry-run` flag allows verifying the final generated JSON prompt before calling the actual API. This is essential for improving the precision of context engineering.
- Easy model swapping: It's easy to test the exact same input (context) against different LLMs (Gemini, Claude, GPT, etc.). Changing `api_mode` is all that's needed to compare providers or migrate.
- Freedom from vendor lock-in: Because contexts are saved locally, you are not dependent on any particular cloud provider.
- Full visibility of context: All session histories, tool calls, and reference files are saved as local JSON files and can be viewed and edited at any time.
- Surgical precision in context manipulation: You can delete, edit, or compress individual turns to optimize the context with surgical precision (see the sketch below).
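A minimal sketch of such a surgical edit, assuming a session file with a `history` array (the path and field name are hypothetical; pipe's actual session schema may differ):

```js
const fs = require('fs')

// Load a locally saved session (path and schema are assumptions)
const path = 'sessions/example-session.json'
const session = JSON.parse(fs.readFileSync(path, 'utf8'))

// Surgically drop turn 3 (say, a noisy tool-execution dump), keep the rest
session.history = session.history.filter(function (turn, i) {
  return i !== 3
})

fs.writeFileSync(path, JSON.stringify(session, null, 2))
```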
The AasF framework provides mechanisms to coordinate stateless agents to perform complex tasks.
- Clarified parent-child relationships: The `--parent` option creates a child session under a specified parent session ID. A Conductor-role agent calls multiple specialized child agents (Engineer, Reviewer, Compressor, etc.) to split and delegate tasks (a sketch follows this list).
- Small contexts, small deviations: Because each child agent receives only a small, specialized context, LLM hallucinations and deviations are minimized.
- Direct editing of session files: Since the execution context is explicitly saved as local JSON files, users can directly edit files to diagnose and fix contexts.
- Verifier/Therapist/Doctor workflows: Before compression or editing operations, a dedicated Verifier agent checks changes to ensure a natural conversational flow is preserved.
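Continuing the prototype sketch from earlier (the roles and contexts here are illustrative, not pipe's actual schema), a Conductor delegating to specialized children looks like this; each child receives only the small context its role needs:

```js
// Each child gets a small, role-specific context; `parent` links the sessions
const conductor2 = new Agent()
conductor2.invoke({ role: 'conductor', instruction: 'Review and fix Button' })

const review = conductor2.invokeChild({
  role: 'reviewer',
  instruction: 'Review Button component'
})
const fix = conductor2.invokeChild({
  role: 'engineer',
  instruction: 'Fix the identified issues',
  input: review.output
})
// Both children carry conductor2.sessionId as `parent`, forming a session tree
```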
pipe is a framework that fully implements the AasF philosophy.
```sh
# Create a new session
takt --purpose "Code review" \
  --background "React project" \
  --roles roles/engineer.md \
  --instruction "Review Button component"

# Continue an existing session
takt --session <SESSION_ID> \
  --instruction "Fix the identified issues"

# Validate prompts with Dry Run
takt --dry-run --instruction "Show me the final JSON prompt"
```

- Session-based history management: Each conversation is saved as a self-contained JSON file.
- Structured JSON prompts: Final prompts sent to LLMs are constructed according to a detailed JSON schema.
- Extensible backend: Supports any LLM provider via backends such as `gemini-api`, `gemini-cli`, etc.
- WebUI: A comprehensive interface to manage the entire session lifecycle.
- MCP integration: Advanced tool execution and traceability via the Model Context Protocol.
AasF is a new paradigm for deterministically controlling LLM agents.
- Agent = disposable function: Stateless, accepts context and returns results.
- Context = everything: All state management is external, ensuring complete transparency.
- Inheritance of the Unix philosophy: A collection of small, specialized, composable agents.
- Freedom from vendors: Escape cloud black boxes so users retain full control.
"Make AI Agents Deterministic." — This is the world pipe and AasF aspire to. (See above for file contents. You may not need to search or read the file again.)