
Supported Integrations

Niccanor Dhas edited this page Feb 22, 2026 · 1 revision

# Supported Integrations

tmam auto-instruments 40+ LLM providers, agent frameworks, and vector databases the moment they are detected in your environment. No extra code is required — just call init() and use your libraries as normal.
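A minimal setup might look like the sketch below. The `url` and key values are placeholders, and the parameter names mirror the `init()` call shown later on this page; check the SDK for the exact signature.

```python
from tmam import init

# One-time setup at application startup; every supported library
# imported afterwards is instrumented automatically.
init(
    url="...",          # your tmam endpoint (placeholder)
    public_key="...",   # project credentials (placeholders)
    secret_key="...",
)

# From here on, plain library usage is traced with no extra code, e.g.:
#   client = openai.OpenAI()
#   client.chat.completions.create(...)
```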


LLM Providers

| Provider | Package | Async Support |
| --- | --- | --- |
| OpenAI | openai | ✅ |
| Anthropic | anthropic | ✅ |
| Cohere | cohere | ✅ |
| Mistral | mistralai | ✅ |
| AWS Bedrock | boto3 | — |
| Google Vertex AI | vertexai | ✅ |
| Google AI Studio | google-genai | — |
| Azure AI Inference | azure-ai-inference | ✅ |
| Groq | groq | ✅ |
| Ollama | ollama | — |
| vLLM | vllm | — |
| Together AI | together | ✅ |
| GPT4All | gpt4all | — |
| LiteLLM | litellm | — |
| Reka | reka-api | ✅ |
| PremAI | premai | — |
| AI21 | ai21 | — |
| ElevenLabs | elevenlabs | ✅ |
| AssemblyAI | assemblyai | — |
| HuggingFace Transformers | transformers | — |

Agent Frameworks

| Framework | Package |
| --- | --- |
| LangChain | langchain |
| LlamaIndex | llama-index |
| CrewAI | crewai |
| AG2 (AutoGen) | ag2 |
| Haystack | haystack-ai |
| Phidata | phidata |
| Dynamiq | dynamiq |
| ControlFlow | controlflow |
| Julep | julep |
| Mem0 | mem0ai |
| EmbedChain | embedchain |
| MultiOn | multion |
| Letta | letta |
| OpenAI Agents | openai (agents SDK) |

Vector Databases

| Database | Package |
| --- | --- |
| Chroma | chromadb |
| Pinecone | pinecone |
| Qdrant | qdrant-client |
| Milvus | pymilvus |
| Astra DB | astrapy |

Web Crawling & Tools

| Tool | Package |
| --- | --- |
| Crawl4AI | crawl4ai |
| FireCrawl | firecrawl-py |

GPU Metrics

| GPU Vendor | Library |
| --- | --- |
| NVIDIA | pynvml |
| AMD | amdsmi |

Enable with collect_gpu_stats=True in init(). See [GPU Monitoring](GPU-Monitoring).
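GPU collection is a flag on the same `init()` call. A minimal sketch, with the endpoint and keys as placeholders:

```python
from tmam import init

# Collect GPU utilization and memory metrics alongside traces.
# Requires pynvml (NVIDIA) or amdsmi (AMD) to be installed.
init(
    url="...",
    public_key="...",
    secret_key="...",
    collect_gpu_stats=True,
)
```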


What Gets Captured Per Provider

LLM Providers

Every LLM call captures:

  • Model name and provider
  • Input prompt tokens, output completion tokens, total tokens
  • Estimated cost (based on published pricing)
  • Request duration (ms)
  • Time to first token (TTFT) for streaming calls
  • Time between tokens (TBT) for streaming calls
  • Finish reason
  • Response ID
  • Prompt and completion text (if capture_message_content=True)
  • Exceptions and error messages

Agent Frameworks

Agent spans capture:

  • Agent name and role
  • Tool calls made
  • Sub-span hierarchy (agent → tool → LLM)
  • Input/output at each step

Vector Databases

Vector DB spans capture:

  • Operation type (query, insert, delete, etc.)
  • Collection/index name
  • Number of results returned
  • Query duration

Disabling Specific Integrations

```python
from tmam import init

init(
    url="...",
    public_key="...",
    secret_key="...",
    disabled_instrumentors=["openai", "chroma"],  # skip these
)
```

Adding Custom Instrumentation

For code not covered by auto-instrumentation, use the @trace decorator or start_trace() context manager.
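A hedged sketch of how the two could be combined. The import path and exact signatures of `trace` and `start_trace` are assumptions based on the names mentioned above; consult the SDK for the real API.

```python
from tmam import trace, start_trace  # import path assumed; verify against the SDK

@trace
def summarize(text: str) -> str:
    # Custom business logic recorded as its own span.
    return text[:100]

# Group several traced calls under one parent span.
with start_trace("batch-summaries"):
    summarize("...")
    summarize("...")
```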
