2 changes: 1 addition & 1 deletion .github/workflows/ci.yml
@@ -9,7 +9,7 @@ on:
jobs:
quality:
name: Format, Lint, Typecheck, Test, Browser Test, Build
runs-on: blacksmith-4vcpu-ubuntu-2404
runs-on: ubuntu-24.04
steps:
- name: Checkout
uses: actions/checkout@v4
16 changes: 12 additions & 4 deletions README.md
@@ -1,8 +1,8 @@
# T3 Code + Copilot

This repo is a T3 Code fork that stays up to date with upstream and adds GitHub Copilot support.
This repo is a T3 Code fork that stays up to date with upstream and adds GitHub Copilot plus browser-local WebGPU support.

T3 Code is a minimal web GUI for coding agents. This fork supports both Codex and GitHub Copilot.
T3 Code is a minimal web GUI for coding agents. This fork supports Codex, GitHub Copilot, and a browser-side Local WebGPU adapter powered by Hugging Face Transformers.js.

## Preview

@@ -13,17 +13,18 @@ T3 Code is a minimal web GUI for coding agents. This fork supports both Codex an

- tracks upstream `pingdotgg/t3code`
- adds GitHub Copilot provider support
- adds a browser-side Local WebGPU provider for curated Hugging Face ONNX models
- keeps Codex support working too

## How to use

> [!WARNING]
> You need to have either [Codex CLI](https://github.com/openai/codex) or GitHub Copilot available and authorized for T3 Code to work.
> You need [Codex CLI](https://github.com/openai/codex), GitHub Copilot, or a WebGPU-capable browser (for the Local WebGPU adapter) for T3 Code to work.

The easiest way to use this fork is the desktop app.

- Download it from the [releases page](https://github.com/zortos293/t3code-copilot/releases)
- Launch the app and choose either `Codex` or `GitHub Copilot`
- Launch the app and choose `Codex`, `GitHub Copilot`, or `Local WebGPU`

You can also run it from source:

@@ -34,6 +35,13 @@ bun run dev

Open the app, connect your provider, and start chatting.

### Local WebGPU notes

- Local WebGPU runs entirely in the browser and does not use the server provider runtime.
- The first run downloads model files from Hugging Face/CDN endpoints and may take a while.
- Use the model browser in Settings to search compatible Hugging Face models; start with the smaller instruct variants, which are most likely to fit within browser memory limits.
- WebGPU availability depends on browser, OS, and GPU support. Recent Chromium-based browsers work best today.
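The availability caveat above can be checked at runtime before offering the provider. A minimal detection sketch — the helper name and the commented pipeline call are illustrative, not this app's actual API:

```typescript
// Hedged sketch: feature-detect WebGPU before enabling a Local WebGPU provider.
// `navigator.gpu` is only defined in WebGPU-capable browsers; the structural
// parameter type lets the check also run outside the browser (e.g. in tests).
function hasWebGpu(nav: { gpu?: unknown }): boolean {
  return nav.gpu !== undefined && nav.gpu !== null;
}

// In the browser you would pass the real navigator:
//   if (hasWebGpu(navigator)) {
//     // Transformers.js can then target WebGPU, roughly:
//     // const generator = await pipeline("text-generation", modelId, { device: "webgpu" });
//   }
```

Detecting up front lets the UI gray out the `Local WebGPU` option instead of failing mid-download on an unsupported browser.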

## Some notes

This project is still very early. Expect bugs.
110 changes: 110 additions & 0 deletions apps/server/src/huggingFaceModelSearch.test.ts
@@ -0,0 +1,110 @@
import { afterEach, describe, expect, it, vi } from "vitest";

import {
clearHuggingFaceModelSearchCache,
searchHuggingFaceModels,
} from "./huggingFaceModelSearch";

describe("huggingFaceModelSearch", () => {
afterEach(() => {
clearHuggingFaceModelSearchCache();
vi.restoreAllMocks();
});

it("returns featured recommended models and caches repeated requests", async () => {
const fetchSpy = vi.spyOn(globalThis, "fetch").mockResolvedValue(
new Response(
JSON.stringify([
{
id: "onnx-community/Qwen2.5-1.5B-Instruct",
likes: 12,
downloads: 845,
private: false,
tags: ["transformers.js", "onnx", "text-generation", "license:apache-2.0"],
pipeline_tag: "text-generation",
library_name: "transformers.js",
},
{
id: "community/SmallLM-Instruct",
likes: 4,
downloads: 120,
private: false,
tags: ["transformers.js", "text-generation"],
pipeline_tag: "text-generation",
library_name: "transformers.js",
},
{
id: "onnx-community/Skip-Me",
likes: 1,
downloads: 2,
private: false,
tags: ["transformers.js", "custom_code"],
pipeline_tag: "text-generation",
library_name: "transformers.js",
},
]),
{ status: 200, headers: { "content-type": "application/json" } },
),
);

const first = await searchHuggingFaceModels({ limit: 5 });
const second = await searchHuggingFaceModels({ limit: 5 });

expect(fetchSpy).toHaveBeenCalledTimes(1);
expect(first).toEqual({
mode: "featured",
models: [
{
id: "onnx-community/Qwen2.5-1.5B-Instruct",
author: "onnx-community",
name: "Qwen2.5-1.5B-Instruct",
downloads: 845,
likes: 12,
pipelineTag: "text-generation",
libraryName: "transformers.js",
license: "apache-2.0",
compatibility: "recommended",
},
],
truncated: false,
});
expect(second).toEqual(first);
});

it("returns compatible search results for explicit queries", async () => {
vi.spyOn(globalThis, "fetch").mockResolvedValue(
new Response(
JSON.stringify([
{
id: "community/Phi-lite-Instruct",
likes: 14,
downloads: 912,
private: false,
tags: ["transformers.js", "text-generation", "license:mit"],
pipeline_tag: "text-generation",
library_name: "transformers.js",
},
{
id: "onnx-community/Phi-3.5-mini-instruct-onnx-web",
likes: 36,
downloads: 1500,
private: false,
tags: ["transformers.js", "onnx", "text-generation", "license:mit"],
pipeline_tag: "text-generation",
library_name: "transformers.js",
},
]),
{ status: 200, headers: { "content-type": "application/json" } },
),
);

const result = await searchHuggingFaceModels({ query: "phi", limit: 10 });

expect(result.mode).toBe("search");
expect(result.query).toBe("phi");
expect(result.models.map((model) => model.id)).toEqual([
"onnx-community/Phi-3.5-mini-instruct-onnx-web",
"community/Phi-lite-Instruct",
]);
});
});
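The fixtures above encode a few conventions: a `license:<id>` tag, a `custom_code` tag that disqualifies a model from the featured list, and results ordered by download count. A minimal sketch of helpers consistent with those fixtures — the real `huggingFaceModelSearch` module may be structured differently:

```typescript
// Hedged sketch of helpers implied by the test fixtures above; names and
// shapes are assumptions, not the module's actual exports.
interface RawHfModel {
  id: string;
  likes: number;
  downloads: number;
  tags: string[];
}

// Extract a license id from Hugging Face-style "license:<id>" tags.
function parseLicense(tags: string[]): string | undefined {
  const tag = tags.find((t) => t.startsWith("license:"));
  return tag?.slice("license:".length);
}

// Models tagged "custom_code" are skipped (as with "onnx-community/Skip-Me").
function isCompatible(model: RawHfModel): boolean {
  return !model.tags.includes("custom_code");
}

// Search results are ordered by download count, highest first, which is why
// the second test expects the 1500-download model before the 912-download one.
function rankByDownloads<T extends { downloads: number }>(models: T[]): T[] {
  return [...models].sort((a, b) => b.downloads - a.downloads);
}
```

Keeping these as small pure functions is what makes the cache-plus-filter behavior in the tests straightforward to mock with a single `fetch` spy.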