Turn any LLM API into an OpenAI-compatible endpoint in minutes.
A lightweight compatibility proxy that wraps your upstream model endpoint behind an OpenAI-style /v1/* API, so existing SDKs, clients, and workflows can connect with minimal changes.
Many AI apps already know how to talk to the OpenAI API. The hard part is not calling a model; it is preserving compatibility across SDKs, tools, deployments, and vendor changes.
This project helps you:
- keep an OpenAI-style API surface
- reduce downstream migration cost
- plug existing tools into a different upstream
- keep flexibility for private deployment, routing, and cost control
- Exposes `GET /v1/models`
- Proxies `GET`/`POST` `/v1/{path}` to your upstream
- Passes through request headers safely
- Optionally forces `stream=true` for `chat/completions`
- Supports environment-based configuration
- Includes Docker, Compose, systemd, and usage examples
- Wrapping a private or third-party LLM endpoint behind an OpenAI-style API
- Reusing existing OpenAI SDK integrations without rewriting client code
- Connecting tools like Open WebUI, Dify, Cherry Studio, or internal apps
- Building an internal AI gateway with minimal moving parts
Run locally:

```bash
cp .env.example .env
pip install -r requirements.txt
export $(grep -v '^#' .env | xargs)
uvicorn proxy:app --host 0.0.0.0 --port 9000
```

Run with Docker:

```bash
cp .env.example .env
docker build -t openai-compatible-proxy .
docker run --rm -p 9000:9000 --env-file .env openai-compatible-proxy
```

Run with Docker Compose:

```bash
cp .env.example .env
docker compose up -d --build
```

Main environment variables:

- `REAL_BASE`: upstream OpenAI-compatible base URL
- `PROXY_MODELS`: comma-separated model ids exposed by `/v1/models`
- `FORCE_CHAT_STREAM`: force `stream=true` for chat completions
- `PROXY_TIMEOUT`: upstream timeout in seconds
- `PROXY_TITLE`: title shown on `/`
See .env.example for defaults.
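For illustration, a filled-in `.env` might look like this (the values are placeholders, not the project's defaults):

```env
REAL_BASE=https://api.example.com/v1
PROXY_MODELS=gpt-4o-mini,gpt-4o
FORCE_CHAT_STREAM=false
PROXY_TIMEOUT=60
PROXY_TITLE=OpenAI-Compatible Proxy
```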
Endpoints:

- `GET /`: returns basic metadata about the proxy.
- Health check: returns a simple health check payload.
- `GET /v1/models`: returns a model list based on `PROXY_MODELS`.
- `GET`/`POST` `/v1/{path}`: forwards requests to `{REAL_BASE}/{path}`.
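The forwarding behavior above can be sketched in a few pure functions. These names are hypothetical (the real logic lives in `proxy.py`), and the hop-by-hop header set is an assumption about what "safely" passing headers through involves:

```python
# Headers that describe a single connection and must not be forwarded.
HOP_BY_HOP = {
    "host", "content-length", "connection", "keep-alive",
    "transfer-encoding", "upgrade", "proxy-authorization", "te", "trailer",
}

def models_response(model_ids):
    """OpenAI-style payload for GET /v1/models."""
    return {
        "object": "list",
        "data": [{"id": m, "object": "model", "owned_by": "proxy"}
                 for m in model_ids],
    }

def upstream_url(real_base, path):
    """Join REAL_BASE and the request path without doubling slashes."""
    return f"{real_base.rstrip('/')}/{path.lstrip('/')}"

def forwardable_headers(headers):
    """Drop hop-by-hop headers; pass the rest (e.g. Authorization) upstream."""
    return {k: v for k, v in headers.items() if k.lower() not in HOP_BY_HOP}
```

Keeping `Authorization` while dropping connection-level headers is what lets clients authenticate against the upstream through the proxy.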
List models:

```bash
curl http://127.0.0.1:9000/v1/models
```

Chat completion:

```bash
curl http://127.0.0.1:9000/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer your-key' \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "hello"}]
  }'
```

This pattern is useful when connecting tools that already expect OpenAI-compatible APIs, such as:
- OpenAI Python SDK
- OpenAI Node SDK
- Cherry Studio
- Open WebUI
- Dify
- Any app that accepts a custom `base_url` / `api_base`
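For example, the official OpenAI SDKs read their endpoint and key from environment variables, so many apps can be pointed at the proxy with no code changes (the values here are illustrative):

```shell
# Point OpenAI SDK clients at the proxy instead of api.openai.com.
export OPENAI_BASE_URL=http://127.0.0.1:9000/v1
export OPENAI_API_KEY=your-key
```

Apps that do not read these variables usually expose an equivalent `base_url` / `api_base` setting in their own configuration.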
See docs/architecture.md for the simplified request flow and positioning.
```
.
├── assets/
│   └── icon.svg
├── proxy.py
├── requirements.txt
├── .env.example
├── Dockerfile
├── docker-compose.yml
├── docs/
│   ├── compatibility.md
│   ├── deployment.md
│   ├── faq.md
│   └── troubleshooting.md
└── examples/
    ├── python-openai-sdk/
    ├── cherry-studio/
    ├── open-webui/
    └── dify/
```
- docs/compatibility.md
- docs/deployment.md
- docs/faq.md
- docs/troubleshooting.md
- README.zh-CN.md
- Wrap a non-OpenAI upstream behind a familiar API
- Switch model vendors without changing downstream clients
- Add a thin compatibility layer for internal AI tools
- Provide one stable endpoint to multiple teams or apps
- Developers who already rely on OpenAI SDKs
- Teams migrating away from a single model vendor
- Builders who want a simple compatibility layer before adopting a full AI gateway
- Anyone who needs a stable API surface for tools, automations, or internal platforms
See ROADMAP.md.
See CHANGELOG.md.
MIT