In Codespaces, VyBox Lite auto-configures GitHub Copilot Chat and inline chat to use the same model as the terminal (GitHub Models / GPT-4.1) when the container starts. No extra setup is needed: open Copilot Chat and the model picker will include GPT-4.1 (GitHub Models); inline chat defaults to it. If you don’t see it, run `node scripts/configure-copilot-github-models.js` once after opening the Chat view.
In VyBox Lite, the terminal uses the `llm` CLI with the `llm-github-models` plugin, so commands like:

```shell
llm "Generate a Verilog module for an AXI4-Lite slave interface"
```

use GitHub Models (e.g. `github/gpt-4.1`) with your Codespaces `GITHUB_TOKEN`, with no extra API keys.
This page describes how you can use the same backend (GitHub Models) from the Copilot Chat window in VS Code.
| Entry point | Backend / auth | Same models? |
|---|---|---|
| Terminal | `llm` + `llm-github-models` + `GITHUB_TOKEN` → GitHub Models API | ✅ |
| Copilot Chat | GitHub Copilot (subscription); model picker may include GitHub-hosted models | Depends on plan and model list |
Copilot Chat and the llm CLI are different products. To use the same models (GitHub Models) in chat, use one of the options below.
### Option 1: Pick a matching model in the Copilot Chat dropdown

If you have a Copilot plan (Free, Pro, etc.):
- Open Copilot Chat (icon in the title bar or sidebar).
- At the bottom of the chat input, open the model dropdown (e.g. “Current model”).
- Choose a model that matches what you use in the terminal (e.g. GPT-4.1, or the same model name you get from `llm models list`).
When the same model name is available in both places, you are effectively using the same or equivalent model in the chat window. Model availability depends on your Copilot plan.
### Option 2: Add GitHub Models as a custom OpenAI-compatible endpoint

GitHub Models exposes an OpenAI-compatible API. In VS Code Insiders, you can add it as a custom model so Copilot Chat uses the same backend as the terminal.
- Base URL: `https://models.github.ai/inference` (the client will call `/chat/completions` on this base).
- API key: your `GITHUB_TOKEN` (in Codespaces this is set automatically) or a PAT with the `models` scope.
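The two values above are enough to sanity-check the endpoint outside VS Code. A minimal Python sketch, assuming the standard OpenAI-style `/chat/completions` contract and the `openai/gpt-4.1` model ID from the catalog (`build_request` is a hypothetical helper name):

```python
import json
import os
import urllib.request

BASE_URL = "https://models.github.ai/inference"

def build_request(prompt: str, model: str = "openai/gpt-4.1") -> urllib.request.Request:
    """Build an OpenAI-style chat completion request for GitHub Models."""
    payload = {"model": model, "messages": [{"role": "user", "content": prompt}]}
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {os.environ.get('GITHUB_TOKEN', '')}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# To actually send it (needs GITHUB_TOKEN or a PAT with the `models` scope):
# with urllib.request.urlopen(build_request("Say hello")) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

The same base URL and bearer token are what the custom-model UI below asks for; only the transport differs.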
Steps:
- Use VS Code Insiders (required for custom OpenAI-compatible models as of current docs).
- Open Copilot Chat → click the model dropdown at the bottom → Manage Models (or run Chat: Manage Language Models from the Command Palette).
- Add Models → choose the option for OpenAI-compatible or custom endpoint.
- Set:
  - Endpoint / base URL: `https://models.github.ai/inference`
  - API key: `$env:GITHUB_TOKEN` (PowerShell) or the value of `GITHUB_TOKEN` from your environment (in Codespaces, use the token from the environment or Secrets).
- Select the model ID (e.g. `openai/gpt-4.1`, as in the GitHub Models catalog).
After that, you can select this provider/model in the Copilot Chat model picker to use the same GitHub Models backend as the terminal.
Note: The exact UI and setting name may be under `github.copilot.chat.customOAIModels` or “Add model from provider” in the Language Models editor. If the UI changes, the key idea remains: an OpenAI-compatible base URL plus `GITHUB_TOKEN` (or a PAT with the `models` scope).
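For reference, a settings fragment might look like the following. This is a plausible shape only: the setting is experimental, and every field name inside the entry (`name`, `url`, `toolCalling`, `maxOutputTokens`) is an assumption to verify against the current VS Code docs before relying on it.

```jsonc
{
  // Hypothetical entry; verify the field names against current VS Code docs.
  "github.copilot.chat.customOAIModels": {
    "openai/gpt-4.1": {
      "name": "GPT-4.1 (GitHub Models)",
      "url": "https://models.github.ai/inference",
      "toolCalling": true,
      "maxOutputTokens": 4096
    }
  }
}
```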
### Option 3: Use the `llm` CLI from the chat flow

To use the exact same process as the terminal from within the chat flow:
- In Copilot Chat, ask it to run a command, for example: “Run in the terminal: `llm 'Your question here'`”.
- Or open the integrated terminal and run:

  ```shell
  source ~/vyges-venv/bin/activate  # if not already active
  llm "Your question here"
  ```
- Copy the reply back into the chat if you want to continue the conversation there.
This uses the same llm + GitHub Models stack as your normal terminal usage.
| Goal | Approach |
|---|---|
| Same model, chat UI | Option 1: Pick the matching model in the Copilot Chat dropdown. |
| Same API (GitHub Models) in chat | Option 2: Add a custom OpenAI-compatible model in VS Code Insiders with `https://models.github.ai/inference` + `GITHUB_TOKEN`. |
| Same CLI/backend, no setup | Option 3: Run `llm "..."` in the terminal or ask Copilot to run it. |
Auto-config (Codespaces): The devcontainer runs `scripts/configure-copilot-github-models.js` on start. It writes `github.copilot.chat.customOAIModels` and `inlineChat.defaultModel` into your VS Code User settings using `GITHUB_TOKEN`, so the terminal and Copilot stay in sync by default.
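The actual script is Node (`scripts/configure-copilot-github-models.js`); the merge it performs can be sketched in Python purely for illustration. The two key names come from this page, while the value shapes and the `merged_settings` / `sync_settings_file` helpers are assumptions:

```python
import json
from pathlib import Path

GITHUB_MODELS_URL = "https://models.github.ai/inference"

def merged_settings(existing: dict) -> dict:
    """Return User settings with the two keys the auto-config maintains.
    Key names come from this doc; the value shapes are illustrative."""
    settings = dict(existing)
    settings["github.copilot.chat.customOAIModels"] = {
        "openai/gpt-4.1": {"url": GITHUB_MODELS_URL},  # entry shape is an assumption
    }
    settings["inlineChat.defaultModel"] = "gpt-4.1"  # value is an assumption
    return settings

def sync_settings_file(settings_path: str) -> None:
    """Apply the merge to a settings.json on disk (strict JSON only;
    real VS Code settings may contain comments the stdlib parser rejects)."""
    path = Path(settings_path)
    existing = json.loads(path.read_text()) if path.exists() else {}
    path.write_text(json.dumps(merged_settings(existing), indent=2))
```

The merge-not-overwrite approach preserves any other User settings while keeping the two Copilot keys pinned to the GitHub Models backend.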
For more on the terminal setup, see the README (“Try AI Immediately”) and `scripts/welcome.sh`.