
# 🐍 FlowLLM Python SDK

[![Python 3.9+](https://img.shields.io/badge/python-3.9+-blue.svg)](https://www.python.org/downloads/)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)

> **Production-ready SDK for building AI agents with MCP tools in Python**

FlowLLM Python SDK is a **model-agnostic framework** for building AI agents with support for:

- βœ… Multiple LLM providers (OpenAI, Anthropic, Google Gemini)
- βœ… Native MCP (Model Context Protocol) integration
- βœ… Production features (streaming, retries, cost tracking)
- βœ… Type-safe with full type hints
- βœ… Async/await by default

---

## πŸš€ Quick Start

### Installation

```bash
pip install flowllm
```

### Basic Usage

```python
import asyncio

from flowllm import define_agent, define_tool
from flowllm.providers import OpenAIProvider

# Define a custom tool
@define_tool(name="get_weather", description="Get weather for a location")
async def get_weather(location: str) -> str:
    """Get weather for a location."""
    return f"Weather in {location}: Sunny, 72Β°F"

# Create an agent
agent = define_agent(
    provider=OpenAIProvider(model="gpt-4"),
    system_prompt="You are a helpful weather assistant.",
    tools=[get_weather],
    memory_type="buffer",
    max_messages=10
)

# Use the agent
async def main():
    response = await agent.execute("What's the weather in Tokyo?")
    print(response.content)
    print(f"Cost: ${response.cost['total_cost']:.4f}")

asyncio.run(main())
```
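Because the agent above is configured with buffer memory (`memory_type="buffer"`), follow-up calls in the same session can refer to earlier turns. A minimal sketch reusing `agent` and `asyncio` from above (the follow-up prompt is illustrative):

```python
# Follow-up turns share the agent's buffer memory, so the model can
# resolve references like "there" against the earlier question.
async def multi_turn():
    await agent.execute("What's the weather in Tokyo?")
    response = await agent.execute("Should I pack an umbrella there?")
    print(response.content)

asyncio.run(multi_turn())
```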


---

## ✨ Features

### πŸ€– Model-Agnostic

Switch between providers with one line:

```python
from flowllm.providers import OpenAIProvider, AnthropicProvider, GeminiProvider

# OpenAI
agent = define_agent(provider=OpenAIProvider(model="gpt-4"))

# Anthropic
agent = define_agent(provider=AnthropicProvider(model="claude-3-5-sonnet-20241022"))

# Google Gemini
agent = define_agent(provider=GeminiProvider(model="gemini-pro"))
```

### πŸ› οΈ Custom Tools

Define tools with type hints and validation:

```python
from pydantic import BaseModel, Field

class WeatherParams(BaseModel):
    location: str = Field(description="City name")
    units: str = Field(default="celsius", description="Temperature units")

@define_tool(params=WeatherParams)
async def get_weather(location: str, units: str = "celsius") -> dict:
    """Get weather for a location."""
    return {"temp": 20, "conditions": "sunny", "units": units}
```
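Registering a schema-backed tool works the same way as in the Quick Start; a minimal sketch, assuming the imports shown earlier:

```python
from flowllm import define_agent
from flowllm.providers import OpenAIProvider

# Expose get_weather to the agent; the WeatherParams schema describes
# and constrains the arguments the model may pass to it.
agent = define_agent(
    provider=OpenAIProvider(model="gpt-4"),
    tools=[get_weather]
)
```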

## πŸ› οΈ Development

---

- [Architecture](ARCHITECTURE.md)
- [Examples](examples/)
- [API Reference](docs/api_reference.md)
- [Getting Started](docs/getting_started.md)

## πŸ“š Documentation

### πŸ“Š Cost Tracking

Track costs automatically:

```python
from flowllm.core.cost_tracker import CostTracker

tracker = CostTracker()

agent = define_agent(
    provider=OpenAIProvider(model="gpt-4"),
    cost_tracker=tracker
)

await agent.execute("Hello!")

# Get cost summary
summary = tracker.get_summary()
print(f"Total cost: ${summary['total_cost']:.4f}")
print(f"Total tokens: {summary['total_tokens']}")
```

### πŸ”„ Retry Logic

Built-in retry with exponential backoff:

```python
from flowllm.core.retry import RetryConfig

agent = define_agent(
    provider=OpenAIProvider(model="gpt-4"),
    retry_config=RetryConfig(
        max_attempts=3,
        initial_delay=1.0,
        max_delay=60.0
    )
)
```
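With these settings, and assuming the textbook doubling schedule (the exact curve, including any jitter, is an implementation detail of `RetryConfig`), the wait before retry *n* is roughly `min(initial_delay * 2**(n-1), max_delay)`:

```python
# Illustrative only: assumes plain doubling capped at max_delay;
# RetryConfig's real schedule may differ (e.g., add jitter).
initial_delay, max_delay, max_attempts = 1.0, 60.0, 3
for n in range(1, max_attempts):
    delay = min(initial_delay * 2 ** (n - 1), max_delay)
    print(f"before retry {n}: wait {delay:.1f}s")  # 1.0s, then 2.0s
```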

### πŸ”Œ MCP Integration

Connect to MCP servers for tool discovery:

```python
from flowllm.mcp import connect_mcp

# Connect to MCP servers
mcp_client = await connect_mcp([
    "https://github-mcp.targetly.dev",
    "https://database-mcp.targetly.dev"
])

# Use MCP tools in agent
agent = define_agent(
    provider=OpenAIProvider(model="gpt-4"),
    mcp_client=mcp_client
)
```
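`connect_mcp` must be awaited, so in a standalone script wrap the connection and agent calls in an event loop. A minimal sketch using only the calls shown above (the prompt is illustrative):

```python
import asyncio

from flowllm import define_agent
from flowllm.mcp import connect_mcp
from flowllm.providers import OpenAIProvider

async def main():
    # Discover tools from an MCP server, then hand the client to the agent.
    mcp_client = await connect_mcp(["https://github-mcp.targetly.dev"])
    agent = define_agent(
        provider=OpenAIProvider(model="gpt-4"),
        mcp_client=mcp_client
    )
    response = await agent.execute("Summarize my open pull requests")
    print(response.content)

asyncio.run(main())
```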

---

## πŸ“š Documentation

- [Getting Started](docs/getting_started.md)
- [API Reference](docs/api_reference.md)
- [Examples](examples/)
- [Architecture](ARCHITECTURE.md)

---

## πŸ› οΈ Development

### Setup

```bash
# Clone repository
git clone https://github.com/flowllm/flowllm-python.git
cd flowllm-python

# Install Poetry
curl -sSL https://install.python-poetry.org | python3 -

# Install dependencies
poetry install

# Activate virtual environment
poetry shell
```

### Testing

```bash
# Run tests
poetry run pytest

# With coverage
poetry run pytest --cov

# Type checking
poetry run mypy flowllm

# Linting
poetry run ruff check .

# Formatting
poetry run black .
```

### Environment Setup

```bash
# Copy environment template
cp .env.example .env

# Add your API keys
nano .env
```
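What goes into `.env` depends on which providers you use. A plausible sketch, assuming the conventional variable names for the three supported providers (defer to `.env.example` for the names FlowLLM actually reads):

```bash
# Hypothetical .env contents; variable names follow common provider
# conventions and may differ from what .env.example defines.
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GOOGLE_API_KEY=your-gemini-key
```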

## πŸ“‹ Requirements

- Python 3.9+
- Poetry (for development)
- API keys for LLM providers

## 🀝 Contributing

Contributions are welcome! Please read our Contributing Guide first.

## πŸ“„ License

MIT License - see LICENSE file for details.

## 🌟 Acknowledgments

Let's build amazing AI agents together! πŸš€
