Conversation
Add Avian (api.avian.io) as a new OpenAI-compatible inference provider with four models: DeepSeek V3.2, Kimi K2.5, GLM-5, and MiniMax M2.5.

Changes:
- New provider module (`web/src/llm-api/avian.ts`) with streaming and non-streaming support, per-model pricing, usage tracking, and billing
- Route `avian/*` models through the Avian provider in the chat completions API
- Add `AVIAN_API_KEY` to the server env schema
- Register avian models in model-config constants and agent type definitions
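For illustration, the prefix-based routing described in the changes might look roughly like the sketch below; the `providerForModel` function and the `Provider` type are hypothetical placeholders, not names from the actual codebase.

```typescript
// Illustrative sketch of routing avian/* model IDs to the Avian provider.
// All names here are assumptions for the example, not the real code.
type Provider = "avian" | "openai";

function providerForModel(model: string): Provider {
  // Any model ID beginning with the "avian/" prefix is handled by Avian.
  if (model.startsWith("avian/")) return "avian";
  // Everything else falls through to the default provider.
  return "openai";
}
```

In the real chat completions API this dispatch would select the provider module whose request/streaming logic is then invoked.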
One thing I'd double-check before merging: without that guard here, Avian could insert duplicate BigQuery rows and consume credits more than once for the same response. I'd be inclined to copy the CanopyWave/SiliconFlow pattern and strip …
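A guard of the kind this comment describes might be sketched as follows; `recordUsageOnce` and the in-memory set are illustrative assumptions, not the actual CanopyWave/SiliconFlow implementation.

```typescript
// Hypothetical sketch: record usage/billing for a response ID at most once,
// so a retried or duplicated completion cannot insert duplicate rows or
// consume credits twice for the same response.
const seenResponseIds = new Set<string>();

function recordUsageOnce(
  responseId: string,
  record: (id: string) => void,
): boolean {
  if (seenResponseIds.has(responseId)) return false; // already recorded; skip
  seenResponseIds.add(responseId);
  record(responseId); // e.g. insert the BigQuery row and debit credits
  return true;
}
```

In practice the dedupe key would live in the datastore rather than process memory, e.g. as a uniqueness check on the response ID, but the shape of the guard is the same.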
Addressed feedback: added …
Summary
- New provider module (`web/src/llm-api/avian.ts`, base URL `https://api.avian.io/v1`) following the same pattern as Fireworks/SiliconFlow, with streaming + non-streaming support, per-model pricing, usage/billing tracking, TTFT measurement, and error handling
- Route `avian/*` model IDs through the Avian provider in the chat completions API
- Add `AVIAN_API_KEY` to the server env schema (optional, like other third-party provider keys)
- Register avian models in `model-config.ts` constants, `providerModelNames`, `providerDomains`, and all three copies of the `ModelName` type

Models available via Avian
- `avian/deepseek-v3.2`
- `avian/kimi-k2.5`
- `avian/glm-5`
- `avian/minimax-m2.5`

Test plan
- `isAvianModel()` correctly matches only `avian/*` model IDs
- `avian/` …
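The `isAvianModel()` check in the test plan could be as simple as the following sketch; the real implementation may differ, and the function body here is an assumption inferred from the test description.

```typescript
// Minimal sketch: match only model IDs carrying the "avian/" prefix,
// mirroring what the test plan says isAvianModel() should do.
function isAvianModel(model: string): boolean {
  return model.startsWith("avian/");
}
```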