Chimeric Logo

# Chimeric


Unified Python interface for multiple LLM providers with automatic provider detection and seamless switching.

## 🚀 Supported Providers

OpenAI · Anthropic · Google AI · xAI (Grok) · Groq · Cohere · Cerebras · OpenRouter

## 📖 Documentation

For detailed usage examples, configuration options, and advanced features, visit our documentation.

## 📦 Installation

```bash
pip install chimeric
```

Set your API keys as environment variables:

```bash
export OPENAI_API_KEY="your-key-here"
export ANTHROPIC_API_KEY="your-key-here"
```

## ⚡ Quickstart

### Basic Usage

```python
from chimeric import Chimeric

client = Chimeric()  # Auto-detects API keys from environment

response = client.generate(
    model="gpt-4o",
    messages="Hello!"
)
print(response.content)
```

### Streaming Responses

```python
# Real-time streaming
stream = client.generate(
    model="claude-3-5-sonnet-latest",
    messages="Tell me a story about space exploration",
    stream=True
)

for chunk in stream:
    print(chunk.content, end="", flush=True)
```
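
When you need the complete text after streaming finishes, accumulate the chunks as you print them. A stdlib-only sketch of that consumption pattern, using a fake generator in place of a real provider stream:

```python
from dataclasses import dataclass

# Toy stand-in for a streaming response: real chunks come from the
# provider; this only illustrates printing incrementally while
# keeping the full text for later use.
@dataclass
class Chunk:
    content: str

def fake_stream():
    for piece in ["Once ", "upon ", "a ", "time..."]:
        yield Chunk(piece)

full_text = []
for chunk in fake_stream():
    print(chunk.content, end="", flush=True)
    full_text.append(chunk.content)
print()

assembled = "".join(full_text)  # "Once upon a time..."
```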

### Function Calling with Tools

```python
@client.tool()
def get_weather(city: str) -> str:
    """Get current weather for a city."""
    return f"Sunny, 72°F in {city}"

@client.tool()
def calculate_tip(bill_amount: float, tip_percentage: float = 18.0) -> dict:
    """Calculate tip and total amount for a restaurant bill."""
    tip = bill_amount * (tip_percentage / 100)
    total = bill_amount + tip
    return {"tip": tip, "total": total, "tip_percentage": tip_percentage}

response = client.generate(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": "What's the weather in NYC? Also calculate a tip for a $50 dinner bill.",
        }
    ]
)
print(response.content)
```
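
Chimeric builds the tool schema for you from each function's signature and docstring. Conceptually, a decorator can derive an OpenAI-style schema with `inspect`; the sketch below is a hypothetical stdlib-only illustration of the idea, not Chimeric's actual implementation:

```python
import inspect

# Hypothetical sketch: map Python annotations to JSON-schema types.
_PY_TO_JSON = {str: "string", float: "number", int: "integer", bool: "boolean", dict: "object"}

def tool_schema(fn):
    """Derive a tool schema from a function's signature and docstring."""
    sig = inspect.signature(fn)
    properties, required = {}, []
    for name, param in sig.parameters.items():
        properties[name] = {"type": _PY_TO_JSON.get(param.annotation, "string")}
        # Parameters without a default value are required.
        if param.default is inspect.Parameter.empty:
            required.append(name)
    return {
        "name": fn.__name__,
        "description": (fn.__doc__ or "").strip(),
        "parameters": {"type": "object", "properties": properties, "required": required},
    }

def calculate_tip(bill_amount: float, tip_percentage: float = 18.0) -> dict:
    """Calculate tip and total amount for a restaurant bill."""
    ...

schema = tool_schema(calculate_tip)
# Only bill_amount is required; tip_percentage has a default.
```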

### Structured Output

```python
from pydantic import BaseModel

class Sentiment(BaseModel):
    label: str
    score: float
    reasoning: str

response = client.generate(
    model="gpt-4o",
    messages="Analyse the sentiment: 'This library is fantastic!'",
    response_model=Sentiment,
)
print(response.parsed.label)    # e.g. "positive"
print(response.parsed.score)    # e.g. 0.98
```
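
Under the hood, structured output amounts to asking the model for JSON and validating it into a typed object. A simplified stdlib-only sketch of that final parsing step (Chimeric itself validates with Pydantic, which also coerces types and reports errors):

```python
import json
from dataclasses import dataclass

# Stdlib illustration of the step behind response_model: the provider
# returns JSON text, which is parsed into a typed object.
@dataclass
class Sentiment:
    label: str
    score: float
    reasoning: str

raw = '{"label": "positive", "score": 0.98, "reasoning": "Enthusiastic wording."}'
parsed = Sentiment(**json.loads(raw))
print(parsed.label, parsed.score)  # positive 0.98
```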

### Embeddings

```python
# Single text → result.embedding (list[float])
result = client.embed(
    model="text-embedding-3-small",
    input="Python developer with 5 years experience",
)
print(len(result.embedding))   # e.g. 1536

# Batch → result.embeddings (list[list[float]])
result = client.embed(
    model="text-embedding-3-small",
    input=["Python developer", "Go engineer", "React developer"],
)
print(len(result.embeddings))  # 3

# Also available via Google and Cohere
result = client.embed(model="gemini-embedding-001", input="Hello")
result = client.embed(model="embed-english-v3.0", input="Hello")
```
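
A common next step is ranking texts by similarity, and cosine similarity over the returned vectors is the usual choice. A stdlib-only sketch (the short vectors here are toy stand-ins, not real embedding output):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity: 1.0 for identical direction, 0.0 for orthogonal vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy vectors standing in for embeddings of two job descriptions.
python_dev = [0.9, 0.1, 0.3]
go_dev = [0.8, 0.2, 0.4]
print(round(cosine_similarity(python_dev, go_dev), 3))
```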

### Multi-Provider Switching

```python
# Seamlessly switch between providers
models = ["gpt-4o-mini", "claude-3-5-haiku-latest", "gemini-2.5-flash"]

for model in models:
    response = client.generate(
        model=model,
        messages="Explain quantum computing in one sentence"
    )
    print(f"{model}: {response.content}")
```
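
Provider detection from a model name can be as simple as a prefix lookup. The mapping below is a hypothetical stdlib-only sketch of the idea; Chimeric's actual routing logic lives inside the library and may differ:

```python
# Hypothetical prefix-to-provider table for illustration only.
MODEL_PREFIXES = {
    "gpt-": "openai",
    "claude-": "anthropic",
    "gemini-": "google",
    "grok-": "xai",
    "command-": "cohere",
}

def detect_provider(model: str) -> str:
    """Return the provider name for a model, based on its prefix."""
    for prefix, provider in MODEL_PREFIXES.items():
        if model.startswith(prefix):
            return provider
    raise ValueError(f"No provider registered for model {model!r}")

print(detect_provider("claude-3-5-haiku-latest"))  # anthropic
```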

## 🔧 Key Features

- **Multi-Provider Support:** Switch between 8 major AI providers seamlessly
- **Automatic Detection:** Auto-detects available API keys from the environment
- **Unified Interface:** Consistent API across all providers
- **Embeddings:** Single and batch text embeddings via OpenAI, Google, and Cohere
- **Structured Output:** Parse responses directly into Pydantic models
- **Streaming Support:** Real-time response streaming
- **Function Calling:** Tool integration with decorators
- **Async Support:** Full async/await compatibility
- **Local AI:** Connect to Ollama, LM Studio, or any OpenAI-compatible endpoint
πŸ› Issues & Feature Requests

## 📄 License

This project is licensed under the MIT License - see the LICENSE file for details.
