
Commit 07d595a

docs: add LICENSE, CONTRIBUTING guide, and GitHub issue templates
* feat: expand tests, add ESLint, document env vars
  - Add 17 new integration tests: CORS edge cases (disallowed origins, no-origin header, OPTIONS for a disallowed origin), auth (401 / pass-through), and error handling (400/502/404) for /v1/chat/completions. Closes #14, closes #16
  - Add ESLint with a flat config, an `npm run lint` script, and a Lint job in CI. Closes #15
  - Improve the README with a quickstart section, npm install instructions, and the corrected package name; add a type column to the env vars table. Closes #17

* feat: implement SSE streaming and support all opencode providers
  - Implement streaming for POST /v1/chat/completions (issue #11): subscribe to the opencode event stream, pipe message.part.updated deltas as SSE chat.completion.chunk events, and finish on session.idle
  - Implement streaming for POST /v1/responses (issue #11): emit response.created / output_text.delta / response.completed events
  - Fix the provider-agnostic system prompt hint (issue #12): remove the "OpenAI-compatible" wording so non-OpenAI models are not confused
  - Add TextEncoder and ReadableStream to the ESLint globals
  - Add streaming integration tests (happy path, unknown model, session.error)

* feat: refactor SSE queue, expand test coverage, fix package metadata
  - Extract a createSseQueue() helper, eliminating the duplicated SSE queue pattern in the /v1/chat/completions and /v1/responses streaming branches (closes #34)
  - Add tests for GET /v1/models: happy path, empty providers, and error path (closes #33)
  - Add tests for POST /v1/responses: happy path, validation, streaming, session.error (closes #32)
  - Fix the package.json description to be provider-agnostic (closes #35)
  - Add an engines field declaring the bun >=1.0.0 requirement (closes #35)
  - Line coverage: 55% -> 89%; function coverage: 83% -> 94%

* feat: add Anthropic Messages API and Google Gemini API endpoints
  - POST /v1/messages (Anthropic Messages API, with SSE streaming)
  - POST /v1beta/models/:model:generateContent (Gemini, non-streaming)
  - POST /v1beta/models/:model:streamGenerateContent (Gemini, NDJSON streaming)
  - New helpers: normalizeAnthropicMessages, normalizeGeminiContents, extractGeminiSystemInstruction, mapFinishReasonToAnthropic/Gemini
  - 35 new tests (77 -> 112 total, all passing)
  - Update the README to document all supported API formats. Closes #38, #39

* docs: rewrite README and expand package keywords for discoverability
  - Lead with the value proposition, an ASCII diagram, and a feature table
  - Quickstart reduced to 4 steps; works in under 60 seconds
  - SDK examples for OpenAI, Anthropic, Gemini (JS and Python), and LangChain
  - UI integration guides: Open WebUI, Chatbox, Continue, Zed
  - Reference section kept concise; full prose docs moved inline
  - package.json: sharper description, 20 keywords covering all search terms (openai-compatible, anthropic, gemini, ollama, langchain, open-webui, llm-proxy, ai-gateway, local-llm, github-copilot, model-router, …)

* fix: remove pretty-printing from JSON responses to reduce payload size

* fix: reflect the request Origin header in the CORS allow-origin header when a specific origin is configured

* docs: add LICENSE, CONTRIBUTING guide, and GitHub issue templates
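The SSE queue pattern these commits describe can be sketched roughly as follows. This is a hedged reconstruction: `sseFrame` and the exact shape of `createSseQueue()` are assumptions for illustration, not the actual code in `index.js`.

```javascript
// Format one server-sent event frame carrying a JSON payload.
function sseFrame(event) {
  return `data: ${JSON.stringify(event)}\n\n`;
}

// A minimal SSE queue in the spirit of the createSseQueue() helper the
// commit message describes: push JSON events in, get a ReadableStream
// of encoded SSE frames out, and terminate with the OpenAI-style [DONE].
function createSseQueue() {
  const encoder = new TextEncoder();
  let controller;
  const stream = new ReadableStream({
    start(c) {
      controller = c; // captured synchronously during construction
    },
  });
  return {
    stream,
    push(event) {
      controller.enqueue(encoder.encode(sseFrame(event)));
    },
    done() {
      controller.enqueue(encoder.encode("data: [DONE]\n\n"));
      controller.close();
    },
  };
}
```

A handler would `push()` one `chat.completion.chunk` object per streamed delta and call `done()` on session.idle, returning `stream` as the response body.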
1 parent a81e3eb commit 07d595a

4 files changed: 211 additions & 0 deletions

.github/ISSUE_TEMPLATE/bug_report.yml

Lines changed: 65 additions & 0 deletions
```yaml
name: Bug report
description: Something isn't working as expected
labels: ["bug"]
body:
  - type: markdown
    attributes:
      value: |
        Thanks for taking the time to report a bug. Please fill in as much detail as you can.

  - type: textarea
    id: description
    attributes:
      label: What happened?
      description: A clear description of the bug.
    validations:
      required: true

  - type: textarea
    id: reproduction
    attributes:
      label: Steps to reproduce
      description: How do we reproduce the issue?
      placeholder: |
        1. Start opencode with the plugin loaded
        2. Send a request to POST /v1/chat/completions with ...
        3. See error
    validations:
      required: true

  - type: textarea
    id: expected
    attributes:
      label: Expected behaviour
      description: What did you expect to happen?
    validations:
      required: true

  - type: textarea
    id: request
    attributes:
      label: Request / response (if applicable)
      description: Paste the curl command or request body and the response you received.
      render: bash

  - type: input
    id: version
    attributes:
      label: opencode-llm-proxy version
      placeholder: "e.g. 1.6.1"
    validations:
      required: true

  - type: input
    id: runtime
    attributes:
      label: Runtime and OS
      placeholder: "e.g. Node.js 22, macOS 14 / Bun 1.2, Ubuntu 24.04"
    validations:
      required: true

  - type: input
    id: provider
    attributes:
      label: Provider / model
      placeholder: "e.g. github-copilot/claude-sonnet-4.6"
```
.github/ISSUE_TEMPLATE/feature_request.yml

Lines changed: 43 additions & 0 deletions
```yaml
name: Feature request
description: Suggest a new feature or improvement
labels: ["enhancement"]
body:
  - type: markdown
    attributes:
      value: |
        Thanks for the suggestion! Please describe the use case clearly so we can understand what you need.

  - type: textarea
    id: problem
    attributes:
      label: What problem does this solve?
      description: Describe the situation where this would be useful.
      placeholder: "e.g. I use the Vercel AI SDK and currently have to..."
    validations:
      required: true

  - type: textarea
    id: solution
    attributes:
      label: Proposed solution
      description: What would you like to see added or changed?
    validations:
      required: true

  - type: textarea
    id: alternatives
    attributes:
      label: Alternatives you've considered
      description: Any workarounds you're using today?

  - type: dropdown
    id: api_format
    attributes:
      label: Which API format does this relate to? (if any)
      options:
        - OpenAI Chat Completions
        - OpenAI Responses API
        - Anthropic Messages API
        - Google Gemini
        - All / general
        - Not API-format specific
```

CONTRIBUTING.md

Lines changed: 82 additions & 0 deletions
# Contributing

Thanks for your interest in contributing to opencode-llm-proxy.

## Getting started

```bash
git clone https://github.com/KochC/opencode-llm-proxy.git
cd opencode-llm-proxy
npm install
```

Run the tests:

```bash
npm test
```

Run the linter:

```bash
npm run lint
```

## How to contribute

### Reporting bugs

Open a [bug report](https://github.com/KochC/opencode-llm-proxy/issues/new?template=bug_report.yml). Include:

- What you did
- What you expected
- What actually happened
- Your Node.js / Bun version and OS

### Suggesting features

Open a [feature request](https://github.com/KochC/opencode-llm-proxy/issues/new?template=feature_request.yml) describing the use case.

### Submitting a pull request

1. Fork the repo and create a branch from `dev` (not `main`)
2. Make your changes
3. Add or update tests in `index.test.js`; all 112+ tests must pass
4. Make sure `npm run lint` passes
5. Commit using [Conventional Commits](https://www.conventionalcommits.org/):
   - `fix:` for bug fixes (triggers a patch release)
   - `feat:` for new features (triggers a minor release)
   - `docs:` / `chore:` / `test:` for everything else (no release)
6. Open a PR against the `dev` branch

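The commit-type-to-release mapping above can be expressed as a small check. This snippet is purely illustrative; the actual release automation is handled by Release Please, not by any code in this repo.

```javascript
// Map a Conventional Commit subject line to the release it triggers
// under this project's rules (illustrative only).
function releaseFor(message) {
  if (message.startsWith("feat:")) return "minor";
  if (message.startsWith("fix:")) return "patch";
  return "none"; // docs: / chore: / test: trigger no release
}
```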
## Branch model

```
dev ──► main ──► npm (via Release Please)
```

- All work goes on `dev`
- `main` is release-only; only Release Please PRs merge directly there
- Do not open PRs against `main`

## Tests

Tests use the Node.js built-in test runner; no external framework is needed.

```bash
node --test                              # run once
node --test --watch                      # watch mode
node --test --experimental-test-coverage # with coverage
```

Tests mock the OpenCode SDK client entirely; no real LLM calls are made.
## Code style

ESLint enforces style. Run `npm run lint` before pushing. The config is in `eslint.config.js`.

Key conventions in the codebase:

- Pure functions are exported for testability (`normalizeMessages`, `buildPrompt`, etc.)
- Each API format (OpenAI, Anthropic, Gemini) has its own section in `index.js`
- Error responses mirror the format of the target API (OpenAI errors for `/v1/*`, Anthropic errors for `/v1/messages`, etc.)
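The error-mirroring convention can be illustrated like this. The body shapes follow the publicly documented OpenAI and Anthropic error formats, but the helper names are made up for this sketch and are not taken from `index.js`.

```javascript
// OpenAI-style error body, the shape /v1/* endpoints mirror.
function openAiError(message) {
  return { error: { message, type: "invalid_request_error" } };
}

// Anthropic-style error body, the shape /v1/messages mirrors.
function anthropicError(message) {
  return { type: "error", error: { type: "invalid_request_error", message } };
}
```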

LICENSE

Lines changed: 21 additions & 0 deletions
MIT License

Copyright (c) 2025 KochC

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
