
feat: add Venice as LLM provider #8

Open
sabrinaaquino wants to merge 1 commit into Nunchi-trade:main from sabrinaaquino:feat/add-venice-provider

Conversation

@sabrinaaquino

Adds Venice as an LLM provider alongside Gemini, Claude, and OpenAI. Venice is OpenAI-compatible and routes through api.venice.ai. If VENICE_API_KEY is set, it takes priority.

@JaeLeex (Contributor) left a comment:

Nice clean refactor on the OpenAI client caching — the _openai_clients dict keyed by base_url is a good pattern. A few issues to address before merge.

```python
and routes any model name to the appropriate backend.
"""
# Venice: if API key is set, use it (handles all models)
if os.environ.get("VENICE_API_KEY"):
```

Issue (High): Venice hijacks all providers when `VENICE_API_KEY` is set

This makes Venice take priority over every other provider regardless of the user's intent. If someone has both `ANTHROPIC_API_KEY` and `VENICE_API_KEY` set (common in dev environments), running `--model claude-sonnet-4-20250514` silently routes to Venice instead of Anthropic directly.

Venice should only activate when explicitly chosen, not by env-var sniffing.

Suggested fix:

```python
def _detect_provider(model: str) -> str:
    if model.startswith("venice:"):
        return "venice"
    if model.startswith("gemini"):
        return "gemini"
    if model.startswith("claude"):
        return "claude"
    if model.startswith(("gpt", "o1", "o3", "o4")):
        return "openai"
    # Venice as fallback for unknown models when key is available
    if os.environ.get("VENICE_API_KEY"):
        return "venice"
    return "gemini"
```

| Provider | Models | API key |
| --- | --- | --- |
| Google Gemini | `gemini-2.0-flash` (default), `gemini-2.5-pro` | `GEMINI_API_KEY` |
| Anthropic Claude | `claude-haiku-4-5-20251001`, `claude-sonnet-4-20250514` | `ANTHROPIC_API_KEY` |
| OpenAI | `gpt-4o`, `gpt-4o-mini`, `o3-mini` | `OPENAI_API_KEY` |
| Venice | `claude-opus-4-6`, `kimi-k2-5`, `openai-gpt-54-pro`, `zai-org-glm-5` | `VENICE_API_KEY` |

Issue (Medium): Verify model names

`openai-gpt-54-pro` and `zai-org-glm-5` don't appear to be real model identifiers. Are these Venice-specific proxy names? If so, please link to Venice's model catalog so we can verify. We don't want to advertise model names that don't resolve.

Also worth clarifying that `claude-opus-4-6` here is Venice-proxied, not direct Anthropic API access.

```
export VENICE_API_KEY=...
hl run claude_agent -i ETH-PERP --tick 15 --model claude-opus-4-6
hl run claude_agent -i ETH-PERP --tick 15 --model kimi-k2-5
"""
```

Nit: The Venice examples should use an explicit prefix (e.g. `venice:claude-opus-4-6`) to match the detection fix above. Bare `claude-opus-4-6` would route to Anthropic without the prefix.
