Multi-provider routing for Claude Code CLI. Use your Copilot subscription, Ollama offline, or Anthropic Direct.
Updated Mar 2, 2026 - Shell
Self-hosted AI API gateway with per-client provider config (Gemini, OpenAI, Anthropic, Ollama, LM Studio). Rate limits, quotas, model whitelists, real-time dashboard.
CLI-first multi-model orchestrator — routes tasks to Codex, Gemini, and Claude
Run AI-powered community operations in chat: orchestrate Claude/OpenAI/Gemini/Ollama, trigger real integrations, and ship with production-ready self-hosted guardrails.
Platform-agnostic Rust library for AI routing on mobile devices
A sample architecture in Go that mimics a Mixture of Experts (MoE).
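The MoE-mimicking idea above amounts to a gate that dispatches each request to a specialized "expert." A minimal Python sketch of one possible gating scheme follows; the cue words and expert names are illustrative assumptions, not this repository's actual code (which is in Go):

```python
# Illustrative keyword-based gating, as one simple way to mimic MoE routing.
# Expert names and cue words below are hypothetical examples.
EXPERTS = {
    "code": ("def ", "func ", "class "),
    "math": ("solve", "equation", "integral"),
}

def pick_expert(prompt: str) -> str:
    """Route a prompt to the first expert whose cue words appear, else a generalist."""
    text = prompt.lower()
    for expert, cues in EXPERTS.items():
        if any(cue in text for cue in cues):
            return expert
    return "general"

print(pick_expert("Please solve this equation"))  # -> math
print(pick_expert("Tell me a story"))             # -> general
```

A real MoE gate would typically score experts with a learned model rather than keywords; the keyword version just makes the dispatch structure concrete.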
Astrai inference router skill for OpenClaw — 40-60% savings on agent LLM costs with intelligent routing and privacy controls
Production AI orchestration layer that routes requests between local models (Ollama) and cloud APIs (Claude/OpenAI), cutting AI costs 60-80% by running lightweight local models for simple tasks.
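The local-versus-cloud split described above can be sketched as a small routing function. The model names, the prompt-length threshold, and the `needs_tools` flag are illustrative assumptions, not this project's actual configuration:

```python
# Hypothetical sketch of local-vs-cloud routing: short, tool-free prompts go
# to a cheap local model; everything else escalates to a hosted model.
from dataclasses import dataclass

@dataclass
class Route:
    backend: str  # "ollama" (local) or "cloud"
    model: str

def route(prompt: str, needs_tools: bool = False) -> Route:
    """Pick a backend using a crude complexity heuristic (illustrative only)."""
    simple = len(prompt) < 500 and not needs_tools
    if simple:
        return Route("ollama", "llama3.2:3b")  # lightweight local model
    return Route("cloud", "claude-sonnet")     # stronger hosted model

print(route("Summarize this sentence.").backend)    # -> ollama
print(route("x" * 1000, needs_tools=True).backend)  # -> cloud
```

Real routers usually combine several signals (task type, context length, required tools, past failures); the single length check here just shows where the cost savings come from.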
The Smart Packet Identity Layer (SPID) is a universal, AI-readable identifier system that links every Smart Packet to its origin, context, and purpose — enabling secure, structured retrieval of voice-ready content across platforms.
Intelligent AI model routing for Claude Code - choose the best model for each task
Manage and route API requests between applications and multiple LLM providers with per-client keys, configs, and real-time usage tracking.
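A gateway like the one above needs, at minimum, a per-client lookup that maps an API key to a provider and enforces a model whitelist. A minimal sketch of that check follows; the client names, providers, and models are hypothetical, not taken from any of the listed projects:

```python
# Hypothetical per-client config: each client key maps to one provider
# and a whitelist of models it may call.
CLIENTS = {
    "client-a": {"provider": "openai", "model_whitelist": {"gpt-4o-mini"}},
    "client-b": {"provider": "ollama", "model_whitelist": {"llama3"}},
}

def resolve(client_key: str, model: str) -> str:
    """Return the provider for this client, rejecting unknown keys and
    models outside the client's whitelist."""
    cfg = CLIENTS.get(client_key)
    if cfg is None:
        raise PermissionError("unknown client key")
    if model not in cfg["model_whitelist"]:
        raise ValueError(f"model {model!r} not allowed for this client")
    return cfg["provider"]

print(resolve("client-a", "gpt-4o-mini"))  # -> openai
```

Rate limits, quotas, and usage tracking would layer on top of this lookup; keeping the config per-client is what lets one gateway serve many applications with different policies.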