Unified Python interface for multiple LLM providers with automatic provider detection and seamless switching.
For detailed usage examples, configuration options, and advanced features, visit our documentation.
```bash
pip install chimeric
```

Set your API keys as environment variables:

```bash
export OPENAI_API_KEY="your-key-here"
export ANTHROPIC_API_KEY="your-key-here"
```

```python
from chimeric import Chimeric

client = Chimeric()  # Auto-detects API keys from environment

response = client.generate(
    model="gpt-4o",
    messages="Hello!"
)
print(response.content)
```

```python
# Real-time streaming
stream = client.generate(
    model="claude-3-5-sonnet-latest",
    messages="Tell me a story about space exploration",
    stream=True
)

for chunk in stream:
    print(chunk.content, end="", flush=True)
```

```python
@client.tool()
def get_weather(city: str) -> str:
    """Get current weather for a city."""
    return f"Sunny, 72°F in {city}"

@client.tool()
def calculate_tip(bill_amount: float, tip_percentage: float = 18.0) -> dict:
    """Calculate tip and total amount for a restaurant bill."""
    tip = bill_amount * (tip_percentage / 100)
    total = bill_amount + tip
    return {"tip": tip, "total": total, "tip_percentage": tip_percentage}

response = client.generate(
    model="gpt-4o",
    messages=[
        {"role": "user", "content": "What's the weather in NYC?"},
        {"role": "user", "content": "Also calculate a tip for a $50 dinner bill"}
    ]
)
print(response.content)
```

```python
from pydantic import BaseModel

class Sentiment(BaseModel):
    label: str
    score: float
    reasoning: str

response = client.generate(
    model="gpt-4o",
    messages="Analyse the sentiment: 'This library is fantastic!'",
    response_model=Sentiment,
)
print(response.parsed.label)  # "positive"
print(response.parsed.score)  # 0.98
```

```python
# Single text → result.embedding (list[float])
result = client.embed(
    model="text-embedding-3-small",
    input="Python developer with 5 years experience",
)
print(len(result.embedding))  # e.g. 1536

# Batch → result.embeddings (list[list[float]])
result = client.embed(
    model="text-embedding-3-small",
    input=["Python developer", "Go engineer", "React developer"],
)
print(len(result.embeddings))  # 3

# Also available via Google and Cohere
result = client.embed(model="gemini-embedding-001", input="Hello")
result = client.embed(model="embed-english-v3.0", input="Hello")
```

```python
# Seamlessly switch between providers
models = ["gpt-4o-mini", "claude-3-5-haiku-latest", "gemini-2.5-flash"]

for model in models:
    response = client.generate(
        model=model,
        messages="Explain quantum computing in one sentence"
    )
    print(f"{model}: {response.content}")
```

- Multi-Provider Support: Switch between 8 major AI providers seamlessly
- Automatic Detection: Auto-detects available API keys from environment
- Unified Interface: Consistent API across all providers
- Embeddings: Single and batch text embeddings via OpenAI, Google, and Cohere
- Structured Output: Parse responses directly into Pydantic models
- Streaming Support: Real-time response streaming
- Function Calling: Tool integration with decorators
- Async Support: Full async/await compatibility
- Local AI: Connect to Ollama, LM Studio, or any OpenAI-compatible endpoint
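Since `client.embed` returns plain `list[float]` vectors, downstream similarity math needs no extra dependencies. A minimal cosine-similarity sketch in pure Python — the helper name and the hard-coded vectors are illustrative, not part of chimeric's API:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Identical directions score 1.0, orthogonal directions score 0.0
print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # 0.0
```

In practice you would pass `result.embedding` values (for example, comparing a query embedding against each entry in `result.embeddings` from a batch call) instead of the toy vectors above.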
- Found a bug? Use our Bug Report template
- Want a feature? Use our Feature Request template
This project is licensed under the MIT License - see the LICENSE file for details.