v0.8.0: OpenAI-Compatible Provider + Agent Invoke #8
idapixl announced in Announcements
cortex-engine v0.8.0
Two features that change how agents use cortex:
1. OpenAI-Compatible LLM Provider
One adapter, entire ecosystem. Set an env var, pick a model:
- `DEEPSEEK_API_KEY` → `deepseek-chat`
- `HF_TOKEN` → `mistralai/Mistral-Nemo-Instruct-2407`
- `OPENROUTER_API_KEY` → `deepseek/deepseek-chat`
- `OPENAI_API_KEY` → `gpt-4o`

Auto-detects the provider from your env vars. No URL configuration needed for known providers.
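The detection logic above can be sketched roughly like this. This is an illustrative sketch, not cortex-engine's actual internals: the `Provider` shape, the precedence order, and the base URLs are all assumptions.

```typescript
// Hypothetical sketch of env-var-based provider auto-detection.
// Names, precedence order, and base URLs are illustrative only.
interface Provider {
  name: string;
  envVar: string;
  baseURL: string; // assumed endpoint; not confirmed by the release notes
}

const PROVIDERS: Provider[] = [
  { name: "deepseek", envVar: "DEEPSEEK_API_KEY", baseURL: "https://api.deepseek.com/v1" },
  { name: "huggingface", envVar: "HF_TOKEN", baseURL: "https://api-inference.huggingface.co/v1" },
  { name: "openrouter", envVar: "OPENROUTER_API_KEY", baseURL: "https://openrouter.ai/api/v1" },
  { name: "openai", envVar: "OPENAI_API_KEY", baseURL: "https://api.openai.com/v1" },
];

// Return the first provider whose API key is present in the environment.
function detectProvider(env: Record<string, string | undefined>): Provider | undefined {
  return PROVIDERS.find((p) => env[p.envVar]);
}
```

Because every provider speaks the OpenAI wire format, one adapter pointed at the detected `baseURL` covers all of them.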
2. agent_invoke Tool
Dispatch tasks to a cortex-backed agent. It queries cortex for existing knowledge, completes the task with your cheap LLM, and stores findings back. Knowledge compounds across sessions.
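The query-complete-store loop can be sketched as follows. This is a minimal sketch of the described flow, assuming an in-memory map standing in for cortex and a stubbed LLM callback; none of these names are cortex-engine's real API.

```typescript
// Illustrative agent_invoke loop: query prior knowledge, complete the
// task with a cheap LLM, store the finding back for future sessions.
// `KnowledgeStore` and `agentInvoke` are hypothetical names.
type KnowledgeStore = Map<string, string[]>;

async function agentInvoke(
  task: string,
  store: KnowledgeStore,
  llm: (prompt: string) => Promise<string>,
): Promise<string> {
  // 1. Query the store for findings from earlier sessions.
  const prior = store.get(task) ?? [];
  // 2. Complete the task, priming the prompt with what is already known.
  const answer = await llm(`Known:\n${prior.join("\n")}\n\nTask: ${task}`);
  // 3. Persist the new finding so the next invocation starts warm.
  store.set(task, [...prior, answer]);
  return answer;
}
```

The design point is step 1: each call starts from accumulated findings rather than a cold prompt, which is what lets knowledge compound across sessions.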
Before: Spawn an expensive subagent that starts cold every time.
After: Call agent_invoke — it knows what you researched last week.
What This Enables
Try it:
```
npm install cortex-engine@0.8.0
```

Release notes | Wiki