blog: Barbacane vs Portkey and LiteLLM comparison #12
Second piece of the audit's Phase 1 comparison-content strategy. Targets the "Barbacane vs Portkey" / "Barbacane vs LiteLLM" / "inbound vs outbound AI gateway" search terms.

Thesis: Portkey and LiteLLM (outbound AI gateways, application -> LLM) and Barbacane (inbound AI gateway, agent -> your APIs) solve different problems, and most teams end up running both. The category distinction is the high-value takeaway; the specific feature comparison is the usual procurement grind.

Structure:

- Lede: the vocabulary-collision problem
- Two directions of traffic
- What Portkey and LiteLLM do, concretely
- What Barbacane does, concretely
- How the two categories compose in a production stack
- Where the overlap actually is (AI governance middleware)
- Decision table by situation
- Feature comparison table
- Procurement advice
- Closing, links back to the canonical MCP gateway post and /mcp/

Saved as `draft: true` with `publishDate` 2026-04-29 (six days out). Flip the draft flag in a follow-up commit when ready to publish.
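The draft flag and publish date described above would look something like this in the post's frontmatter (a sketch; the exact field names depend on the site generator and are not confirmed by this issue):

```yaml
---
title: "Barbacane vs Portkey and LiteLLM"
draft: true                 # flip to false in a follow-up commit to publish
publishDate: 2026-04-29     # six days out from the draft commit
---
```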
…e AI capability

Previous draft was factually wrong: it claimed Barbacane is inbound-only and does not talk to LLMs. In fact:

- `ai-proxy` dispatcher (shipped, docs.barbacane.dev/guide/dispatchers.html#ai-proxy) is an outbound AI proxy that routes to OpenAI, Anthropic, Ollama, and any OpenAI-compatible endpoint; it supports provider fallback, named targets for policy-driven routing, pinned provider API versions, and SSE streaming for OpenAI-compatible providers.
- ADR-0024 names Kong AI Gateway, LiteLLM, Portkey, and KrakenD as direct competitors and positions Barbacane's spec-driven + WASM-composable approach as the differentiator.
- PR #67 (feat: AI gateway middleware suite) ships the four AI governance middlewares that compose around ai-proxy.
- PR #69 (ADR-0030) extends ai-proxy with OpenAI Responses API support and dynamic model routing.

Rewrite:

- Title changed from "inbound vs outbound AI gateways" to "picking an AI gateway in 2026". Inbound/outbound is still a useful taxonomy, but the dichotomy is wrong for Barbacane.
- New thesis: all three ship a competent outbound LLM proxy; the axis that differentiates them is how the AI gateway relates to the rest of the gateway (monolithic AI proxy vs dispatcher + middlewares, config file vs OpenAPI spec, AI-only vs also-MCP).
- Decision table and feature table rebuilt honestly; Barbacane now shows Yes on outbound-related concerns.
- Kept MCP as an additional capability that Portkey/LiteLLM do not have, not as the entire reason to pick Barbacane.

Claims verified against:

- docs.barbacane.dev/guide/dispatchers.html (ai-proxy)
- barbacane-dev/barbacane adr/0024-ai-gateway-plugin.md
- barbacane-dev/barbacane#67 (AI gateway middleware suite)
- barbacane-dev/barbacane#69 (ADR-0030 Responses API)
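For the post, the ai-proxy capabilities listed in the comment above (provider fallback, named targets, pinned API versions, SSE streaming) could be illustrated with a config sketch like this. The syntax below is hypothetical, invented for illustration only; the real spec-driven configuration is documented at docs.barbacane.dev/guide/dispatchers.html#ai-proxy:

```yaml
# HYPOTHETICAL sketch of the ai-proxy capability set -- not real Barbacane syntax.
dispatcher: ai-proxy
targets:
  gpt-primary:                # named target, addressable by routing policy
    provider: openai
    api_version: "2024-10-01" # pinned provider API version
  local-fallback:             # provider fallback if the primary is unavailable
    provider: ollama          # any OpenAI-compatible endpoint works here
streaming: sse                # SSE streaming for OpenAI-compatible providers
```

Keeping the example clearly labeled as illustrative avoids re-introducing the factual errors the rewrite is meant to fix.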