tiny_agent is a header-only C++20 agent framework for building LLM-powered applications with a small, composable API.
It includes:
- OpenAI, Anthropic, and Gemini providers
- local OpenAI-compatible providers such as Ollama, llama.cpp, and vLLM
- tool calling with JSON schemas
- multi-turn chat and conversation history
- middleware
- nested/sub-agent delegation
- MCP stdio integration
- multimodal messages
Building the project and running the examples requires:

- CMake 3.20+
- A C++20 compiler
- vcpkg with `VCPKG_ROOT` set
- Internet access plus provider API keys for cloud-backed examples
This repository uses a vcpkg.json manifest. Configuring the project installs:
- `nlohmann-json`
- `cpp-httplib` with OpenSSL support
- `doctest`
- `libenvpp`
- `json-schema-validator`
The core library target is header-only and primarily relies on nlohmann-json and cpp-httplib. The other dependencies are used by the repository's tests and validation helpers.
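For reference, such a manifest looks roughly like the following sketch; the package name, version field, and exact feature spelling here are illustrative rather than copied from the repository's `vcpkg.json`:

```json
{
  "name": "tiny-agent",
  "version-string": "0.1.0",
  "dependencies": [
    "nlohmann-json",
    { "name": "cpp-httplib", "features": ["openssl"] },
    "doctest",
    "libenvpp",
    "json-schema-validator"
  ]
}
```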
From the repository root:
```bash
cmake --preset default
cmake --build --preset default
```

To build an optimized version:
```bash
cmake --preset release
cmake --build --preset release
```

A complete setup and run on macOS or Linux (bash):

```bash
export VCPKG_ROOT="$HOME/src/vcpkg"
export OPENAI_API_KEY="your-key-here"
cmake --preset default
cmake --build --preset default
./build/examples/01_basic_chat
```

On Windows (PowerShell):

```powershell
$env:VCPKG_ROOT = "C:\src\vcpkg"
$env:OPENAI_API_KEY = "your-key-here"
cmake --preset default
cmake --build --preset default --config Debug
.\build\examples\Debug\01_basic_chat.exe
```

On single-config generators the binary may instead be under `build/examples/01_basic_chat` or `build/examples/01_basic_chat.exe`.
Environment variable names used by the code:
- OpenAI: `OPENAI_API_KEY`
- Anthropic: `CLAUDE_API_KEY`
- Gemini: `GEMINI_API_KEY`
The example binaries read API keys from the shell environment.
The integration test in `tests/test_agent.cpp` also loads a repo-local `.env` file, so the following works for the full test suite:

```
OPENAI_API_KEY=your-openai-key
CLAUDE_API_KEY=your-anthropic-key
GEMINI_API_KEY=your-gemini-key
```

A minimal chat call against OpenAI:

```cpp
#include <tiny_agent/tiny_agent.hpp>
#include <tiny_agent/providers/openai.hpp>
#include <cstdlib>
#include <iostream>
int main() {
    using namespace tiny_agent;

    const char* key = std::getenv("OPENAI_API_KEY");
    if (!key) {
        std::cerr << "OPENAI_API_KEY not set\n";
        return 1;
    }

    auto llm = LLM<openai>{"gpt-4o-mini", key};
    auto response = llm.chat({
        Message::system("You are a concise assistant."),
        Message::user("What is the capital of Japan?")
    });

    std::cout << response.message.text() << "\n";
}
```

The tool-calling example registers a function with a JSON-schema-style parameter description:

```cpp
#include <tiny_agent/tiny_agent.hpp>
#include <tiny_agent/providers/openai.hpp>
#include <cmath>
#include <cstdlib>
#include <iostream>
int main() {
    using namespace tiny_agent;

    const char* key = std::getenv("OPENAI_API_KEY");
    if (!key) {
        std::cerr << "OPENAI_API_KEY not set\n";
        return 1;
    }

    auto agent = Agent{
        LLM<openai>{"gpt-4o-mini", key},
        AgentConfig{
            .system_prompt = "Use tools for calculations.",
            .tools = {
                Tool::create(
                    "sqrt",
                    "Square root of a number",
                    [](const json& args) -> json {
                        return std::sqrt(args["x"].get<double>());
                    },
                    {
                        {"type", "object"},
                        {"properties", {{"x", {{"type", "number"}}}}},
                        {"required", {"x"}}
                    }
                )
            }
        },
        Log{std::cerr, LogLevel::info}
    };

    std::cout << agent.run("What is sqrt(144)?") << "\n";
}
```

You can also target a local or self-hosted OpenAI-compatible endpoint:

```cpp
#include <tiny_agent/tiny_agent.hpp>
#include <tiny_agent/providers/local.hpp>
#include <iostream>
int main() {
    using namespace tiny_agent;

    auto agent = Agent{
        local::ollama("llama3"),
        AgentConfig{
            .system_prompt = "Reply briefly."
        }
    };

    std::cout << agent.run("Give me one sentence about C++20.") << "\n";
}
```

Helpers available in `providers/local.hpp`:
- `tiny_agent::local::ollama()` with default base URL `http://localhost:11434`
- `tiny_agent::local::llamacpp()` with default base URL `http://localhost:8080`
- `tiny_agent::local::vllm()` with default base URL `http://localhost:8000`
- `tiny_agent::local::create()` for any other OpenAI-compatible endpoint (sketched below)
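As a rough sketch of targeting some other OpenAI-compatible server with `local::create()`: its exact parameters are not documented above, so the argument order (base URL, then model name) and the values themselves are assumptions to adapt to the actual signature in `providers/local.hpp`.

```cpp
#include <tiny_agent/tiny_agent.hpp>
#include <tiny_agent/providers/local.hpp>
#include <iostream>

int main() {
    using namespace tiny_agent;

    // Assumed arguments: an OpenAI-compatible base URL and a model name.
    // Check providers/local.hpp for the real parameter order and types.
    auto agent = Agent{
        local::create("http://192.168.1.50:8000/v1", "mistral"),
        AgentConfig{.system_prompt = "Reply briefly."}
    };

    std::cout << agent.run("Say hello.") << "\n";
}
```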
The library uses a single Log class with six levels. The default level is warn, so everything is quiet out of the box. Lower the level to trace framework internals.
Available levels (from most to least verbose):
| Level | What it shows |
|---|---|
| `trace` | Raw HTTP bodies, JSON-RPC messages, tool arguments/results, full message contents |
| `debug` | Agent loop iterations, LLM request/response summaries, token usage, MCP tool discovery |
| `info` | Tool calls, MCP connection events |
| `warn` | Max iterations reached, retry attempts |
| `error` | HTTP failures, tool execution errors, MCP errors |
| `off` | Silent |
Pass a `Log` when constructing the agent to change its level:

```cpp
auto agent = Agent{
LLM<openai>{"gpt-4o-mini", key},
AgentConfig{.name = "my_agent"},
Log{std::cerr, LogLevel::debug}
};
```

Output at `debug` level:

```
[DEBUG] [my_agent] initializing (max_iterations=10 tools=1 middlewares=0)
[DEBUG] [my_agent] run("What is sqrt(144)?")
[DEBUG] [my_agent] iteration 1/10 (messages=2)
[DEBUG] [my_agent] LLM requested 1 tool call(s)
[INFO] [my_agent] calling tool: sqrt
[DEBUG] [my_agent] iteration 2/10 (messages=4)
[DEBUG] [my_agent] done: stop
```

The `LLMConfig` has its own `Log` field for HTTP-level tracing:

```cpp
auto llm = LLM<openai>{"gpt-4o-mini", LLMConfig{
    .api_key = key,
    .log = Log{std::cerr, LogLevel::debug}
}};
```

At `debug` you see request summaries and token usage. At `trace` you see the full request/response JSON.
The MCP stdio client takes its own `Log` as well:

```cpp
auto mcp = mcp::connect_stdio(command, args, Log{std::cerr, LogLevel::debug});
```

You can change an agent's log level at runtime:

```cpp
agent.log().set_level(LogLevel::trace);
```

Timestamps can be enabled on any `Log`:

```cpp
Log log{std::cerr, LogLevel::debug};
log.set_timestamps(true);
// 14:30:05.123 [DEBUG] [my_agent] iteration 1/10 (messages=2)
```

The logging middleware also accepts a `Log`:

```cpp
middleware::logging(Log{std::cerr, LogLevel::debug})
```

The simplest way to use this project in another CMake build is to vendor it and add it as a subdirectory:

```cmake
cmake_minimum_required(VERSION 3.20)
project(my_app LANGUAGES CXX)
set(CMAKE_CXX_STANDARD 20)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
add_subdirectory(external/tiny_agent_cpp)
add_executable(my_app main.cpp)
target_link_libraries(my_app PRIVATE tiny_agent)
```

Then include the headers you need in your program:

```cpp
#include <tiny_agent/tiny_agent.hpp>
#include <tiny_agent/providers/openai.hpp>
```

This repository does not currently install/export a packaged CMake target, so `add_subdirectory(...)` is the easiest integration path.
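If you prefer not to vendor the sources, a `FetchContent`-based setup along these lines should also work, since the project supports being added as a subdirectory; the repository URL below is a placeholder, so point it at wherever you actually host or mirror the project:

```cmake
include(FetchContent)

# Placeholder URL: substitute your clone or fork of tiny_agent_cpp.
FetchContent_Declare(
  tiny_agent_cpp
  GIT_REPOSITORY https://example.com/your-org/tiny_agent_cpp.git
  GIT_TAG        main
)
FetchContent_MakeAvailable(tiny_agent_cpp)

add_executable(my_app main.cpp)
target_link_libraries(my_app PRIVATE tiny_agent)
```

The library's own dependencies (`nlohmann-json`, `cpp-httplib`) still need to be resolvable in the consuming build, for example through your project's vcpkg manifest.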
The top-level CMake options are:
- `TINY_AGENT_BUILD_EXAMPLES=ON`
- `TINY_AGENT_BUILD_TESTS=ON`
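When the project is consumed via `add_subdirectory`, the usual CMake pattern of setting these cache variables before adding the subdirectory should work; a minimal sketch:

```cmake
# Skip the repository's examples and tests in a parent build.
set(TINY_AGENT_BUILD_EXAMPLES OFF CACHE BOOL "" FORCE)
set(TINY_AGENT_BUILD_TESTS OFF CACHE BOOL "" FORCE)
add_subdirectory(external/tiny_agent_cpp)
```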
For a library-only build:
```bash
cmake --preset default -DTINY_AGENT_BUILD_EXAMPLES=OFF -DTINY_AGENT_BUILD_TESTS=OFF
cmake --build --preset default
```

The repository currently builds these examples:
- `01_basic_chat`: raw `LLM::chat(...)`
- `02_tool_calling`: function calling with JSON-schema-style parameters
- `03_nested_agents`: manager/researcher/writer composition
- `04_mcp_client`: MCP stdio client that exposes MCP tools to an agent
- `05_middleware`: logging, retry, trim-history, and custom middleware
- `06_deep_agent`: multi-level delegation with fact-checking
- `07_multimodal`: image + text message input
Most examples use `OPENAI_API_KEY`, so set that first. Run them from the repository root:

```bash
./build/examples/01_basic_chat
./build/examples/02_tool_calling
./build/examples/03_nested_agents
./build/examples/05_middleware
./build/examples/06_deep_agent
./build/examples/07_multimodal
```

The MCP example takes the server command on the command line:

```bash
./build/examples/04_mcp_client npx @modelcontextprotocol/server-filesystem .
```

That example requires Node.js plus the MCP server package you want to launch.
Run the full suite:
```bash
ctest --preset default
```

On Visual Studio or other multi-config generators:

```bash
ctest --preset default -C Debug
```

The offline tests (runnable on their own, as shown after the list) are:
- `test_types`
- `test_tool`
- `test_middleware`
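Because these three need no network access or API keys, you can run just them with a standard `ctest` name filter, for example:

```bash
# Run only the offline tests by name (regex filter).
ctest --preset default -R "test_types|test_tool|test_middleware"
```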
The integration test is:

- `test_agent`

`test_agent` exercises real providers and expects API keys. It loads `.env` from the repository root.
macOS:

- Install Xcode Command Line Tools, CMake, and vcpkg.
- HTTPS requests use the system certificate bundle path `/etc/ssl/cert.pem` on Apple platforms. If you hit TLS errors, make sure that file exists and points to a valid CA bundle.
Linux:

- Any recent GCC or Clang with C++20 support should work.
- Install the usual build tools first, for example `build-essential`, `cmake`, `git`, `curl`, `zip`, `unzip`, and `tar`.
Raspberry Pi:

- Use a 64-bit Raspberry Pi OS image if possible.
- Follow the same Linux build steps.
- Prefer `cmake --preset release` for smaller, faster binaries on Pi hardware.
- Cloud providers generally work well on Raspberry Pi because the heavy inference runs remotely.
- For local inference, point the `providers/local.hpp` helpers at any OpenAI-compatible server reachable from the Pi, whether it runs on the Pi itself or elsewhere on your network.
Windows:

- Use Visual Studio 2022 Build Tools or a full Visual Studio install with the C++ workload.
- `cmake --build` and `ctest` may need `--config Debug` or `--config Release` with multi-config generators, as shown below.
- PowerShell environment variables use the `$env:NAME = "value"` syntax shown above.
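For example, a Release build and test run with a multi-config generator looks like this (mirroring the Debug commands shown earlier):

```powershell
cmake --preset default
cmake --build --preset default --config Release
ctest --preset default -C Release
```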
The following repository commands were verified successfully on macOS:

```bash
cmake --preset default
cmake --build --preset default
ctest --preset default
```