
πŸ€– FlowLLM

Production-ready SDK for building AI agents with MCP tools

Quick Start β€’ Features β€’ Examples β€’ Architecture β€’ Documentation

What is FlowLLM?

FlowLLM is a production-ready SDK that makes building AI agents with Model Context Protocol (MCP) tools ridiculously easy.

Unlike existing frameworks that are either too low-level, too opinionated, or missing MCP support, FlowLLM gives you:

  • βœ… Model-agnostic by default (OpenAI, Anthropic, Gemini, local models)
  • βœ… MCP-native integration (use any MCP server as agent tools)
  • βœ… Production primitives (streaming, retries, error handling, cost tracking)
  • βœ… Deploy anywhere (works with any Node.js hosting platform)
import { defineAgent, openai } from '@targetly-labs/flowllm';

const agent = defineAgent({
  provider: openai('gpt-4o'),
  systemPrompt: 'You are a helpful assistant.',
});

const response = await agent.execute('What is the capital of France?');
console.log(response.content);

πŸš€ Quick Start

Installation

npm install @targetly-labs/flowllm

Basic Usage

import { defineAgent, openai } from '@targetly-labs/flowllm';

// 1. Define your agent
const agent = defineAgent({
  provider: openai('gpt-4o'),
  systemPrompt: 'You are a helpful assistant.',
  temperature: 0.7,
});

// 2. Execute a single query
const response = await agent.execute('Tell me a joke');
console.log(response.content);

// 3. Or stream responses
const stream = await agent.stream('Write a poem about TypeScript');
for await (const chunk of stream) {
  process.stdout.write(chunk.content || '');
}

Switch Providers Easily

import { openai, anthropic, gemini } from '@targetly-labs/flowllm/providers';

// OpenAI
const gptAgent = defineAgent({
  provider: openai('gpt-4o'),
  systemPrompt: 'You are a helpful assistant.',
});

// Anthropic
const claudeAgent = defineAgent({
  provider: anthropic('claude-3-5-sonnet-20240620'),
  systemPrompt: 'You are a helpful assistant.',
});

// Google Gemini
const geminiAgent = defineAgent({
  provider: gemini('gemini-pro'),
  systemPrompt: 'You are a helpful assistant.',
});

✨ Features

πŸ€– Agent Framework

  • Goal-driven agent execution
  • Multi-turn conversations with memory
  • Tool selection and orchestration
  • Streaming responses

🧠 LLM Client (Model-Agnostic)

  • Single API for multiple providers (OpenAI, Anthropic, Gemini)
  • Automatic retries and error handling
  • Built-in cost tracking
  • Token counting and management

πŸ”Œ MCP Integration (Coming Soon)

  • Native MCP protocol support
  • Automatic tool discovery from MCP servers
  • Type-safe tool calls
  • Works with any MCP server

πŸ’¬ Conversation & Memory

  • Short-term conversation memory
  • Token window management
  • Custom memory strategies
  • Context persistence

⚑ Streaming & Real-time

  • Token-by-token streaming
  • Server-sent events support
  • Progress updates
  • UI-friendly streaming APIs

πŸ› οΈ Tool / Function Calling

  • Type-safe tool definitions
  • Schema validation with Zod
  • Retry and fallback handling
  • Custom function support

πŸ“Š Production Features

  • Automatic retry logic for transient errors
  • Cost and token tracking (per request/session)
  • Request/response middleware
  • Structured logging with Pino
  • Performance monitoring

πŸ“š Examples

Basic Conversation

import { defineAgent, openai } from '@targetly-labs/flowllm';

const agent = defineAgent({
  provider: openai('gpt-4o'),
  systemPrompt: 'You are a helpful coding assistant.',
});

const response = await agent.execute('How do I reverse a string in JavaScript?');
console.log(response.content);

Streaming Responses

const agent = defineAgent({
  provider: openai('gpt-4o'),
  systemPrompt: 'You are a creative writer.',
});

const stream = await agent.stream('Write a short story about a robot');
for await (const chunk of stream) {
  process.stdout.write(chunk.content || '');
}

Custom Tools

import { defineAgent, defineTool, openai } from '@targetly-labs/flowllm';

const weatherTool = defineTool({
  name: 'get_weather',
  description: 'Get current weather for a location',
  parameters: {
    type: 'object',
    properties: {
      location: {
        type: 'string',
        description: 'City name',
      },
    },
    required: ['location'],
  },
  execute: async ({ location }) => {
    // Your weather API logic here
    return { temp: 72, condition: 'sunny', location };
  },
});

const agent = defineAgent({
  provider: openai('gpt-4o'),
  tools: [weatherTool],
  systemPrompt: 'You are a helpful weather assistant.',
});

const response = await agent.execute('What\'s the weather in Tokyo?');
console.log(response.content);

Multi-Provider Support

import { defineAgent, openai, anthropic } from '@targetly-labs/flowllm';

// Create agents with different providers
const agents = {
  gpt: defineAgent({
    provider: openai('gpt-4o'),
    systemPrompt: 'You are GPT-4.',
  }),
  claude: defineAgent({
    provider: anthropic('claude-3-5-sonnet-20240620'),
    systemPrompt: 'You are Claude.',
  }),
};

// Use the right agent for the job
const technicalResponse = await agents.claude.execute('Explain async/await');
const creativeResponse = await agents.gpt.execute('Write a haiku');

πŸ—οΈ Architecture

FlowLLM uses a layered architecture for maximum flexibility and maintainability:

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚             FlowLLM SDK                     β”‚
β”‚  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”   β”‚
β”‚  β”‚  Agent Framework                    β”‚   β”‚
β”‚  β”‚  - Conversation management          β”‚   β”‚
β”‚  β”‚  - Tool orchestration               β”‚   β”‚
β”‚  β”‚  - Memory handling                  β”‚   β”‚
β”‚  β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜   β”‚
β”‚                                             β”‚
β”‚  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚
β”‚  β”‚   LLM    β”‚  β”‚   MCP    β”‚  β”‚   Tools  β”‚ β”‚
β”‚  β”‚  Client  β”‚  β”‚Connector β”‚  β”‚  System  β”‚ β”‚
β”‚  β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜  β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜  β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ β”‚
β””β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
     β”‚              β”‚              β”‚
     β–Ό              β–Ό              β–Ό
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ OpenAI  β”‚  β”‚ MCP Server β”‚  β”‚  Custom  β”‚
β”‚Anthropicβ”‚  β”‚  (Future)  β”‚  β”‚ Function β”‚
β”‚ Gemini  β”‚  β”‚            β”‚  β”‚  Tools   β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜  β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜  β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

Core Components

  • Agent: Orchestrates conversation flow, memory, and tool execution
  • LLMClient: Unified interface for all LLM providers
  • ToolRegistry: Manages and executes custom tools and MCP tools
  • Memory: Handles conversation history and token management
  • CostTracker: Tracks token usage and costs across requests
  • RetryHandler: Implements exponential backoff for transient errors

For detailed architecture diagrams and data flows, see ARCHITECTURE.md.


🎯 Use Cases

πŸ’Ό AI-Powered SaaS Features

Build AI features into your product with ease.

const supportAgent = defineAgent({
  provider: openai('gpt-4o'),
  systemPrompt: 'Help customers with their support tickets.',
});

πŸ€– Autonomous Agents

Build agents that can take actions and make decisions.

const codeReviewAgent = defineAgent({
  provider: anthropic('claude-3-5-sonnet-20240620'),
  systemPrompt: 'Review code and suggest improvements.',
});

πŸ’¬ Conversational Apps

Build chatbots, Discord bots, and Slack bots with LLM capabilities.

const discordBot = defineAgent({
  provider: gemini('gemini-pro'),
  systemPrompt: 'Helpful Discord bot.',
});

πŸ”§ Internal Tools

Build internal AI assistants for your team.

const opsAgent = defineAgent({
  provider: openai('gpt-4o'),
  systemPrompt: 'Help with DevOps tasks and incident response.',
});

πŸ“– Documentation

See ARCHITECTURE.md for detailed architecture diagrams and data flows, and CONTRIBUTING.md for development setup and contribution guidelines.

πŸ›£οΈ Roadmap

βœ… Phase 1 – Core SDK (Current)

  • Core agent framework
  • Multi-provider LLM client (OpenAI, Anthropic, Gemini)
  • Conversation memory
  • Custom tool calling
  • Streaming responses
  • Error handling & retries
  • Cost tracking
  • TypeScript support

πŸ”„ Phase 2 – MCP Integration (Next)

  • Native MCP protocol support
  • Automatic tool discovery
  • Type-safe MCP tool calls
  • MCP server integration examples

πŸ“… Phase 3 – Advanced Features

  • Advanced memory systems (long-term, user profiles)
  • Multi-agent orchestration
  • Prompt versioning
  • Execution tracing and analytics

🀝 Contributing

We welcome contributions! Please see CONTRIBUTING.md for guidelines.

# Clone the repository
git clone https://github.com/targetly-labs/flowllm.git

# Install dependencies
cd flowllm
npm install

# Run tests
npm test

# Build the project
npm run build

πŸ“„ License

MIT License - See LICENSE for details


🌟 Show Your Support

If you find FlowLLM useful, please consider:

  • ⭐ Starring the repository
  • πŸ› Reporting bugs and issues
  • πŸ’‘ Suggesting new features
  • 🀝 Contributing code
  • πŸ“’ Sharing with others

πŸ”— Links

  β€’ GitHub: https://github.com/targetly-labs/flowllm
  β€’ npm: https://www.npmjs.com/package/@targetly-labs/flowllm

Built with ❀️ by the Targetly Labs team
