Nexus AI Code Assist Vscode — bringing AI capabilities directly into the VS Code development environment.
Topics: vscode-extension · developer-tools · ai-code-completion · coding-assistant · deep-learning · generative-ai · ide-integration · large-language-models · llm · typescript
Nexus AI Code Assist Vscode is a VS Code extension that integrates AI-powered tooling into the editor workflow, eliminating context switching between the editor and external AI tools. It provides completions, explanations, generation commands, and chat capabilities, all directly accessible from the Command Palette and keyboard shortcuts.
The extension is built with TypeScript using the VS Code Extension API and communicates with an AI backend (Google Gemini, OpenAI, or a local model via Ollama) to provide intelligent, context-aware assistance. All sensitive credentials are stored in VS Code's SecretStorage to prevent accidental exposure.
The architecture follows the VS Code provider model: language-specific providers handle completions and hover actions, while WebView panels provide richer chat and generation interfaces. Configuration is fully exposed through VS Code's settings system for a native user experience.
The best developer tools are the ones that stay out of the way while making hard things easier. This extension was built to make AI assistance feel native to the editor rather than bolted on — accessible with a keystroke, configured through familiar settings, and respectful of the developer's focus.
User Action (keystroke / command / hover)
│
VS Code Provider (InlineCompletion / Hover / Command)
│
Context extraction from active editor
│
AI Backend API call
│
Editor injection / WebView display
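The "context extraction" step above can be sketched as a pure function: gather the lines around the cursor, bounded by a character budget, before sending them to the backend. This is an illustrative sketch, not the extension's actual implementation; the function name and budget are assumptions.

```typescript
// Hypothetical sketch of context extraction: grow a window of lines outward
// from the cursor until a character budget is exhausted.
function extractContext(
  lines: string[],
  cursorLine: number,
  maxChars = 2000
): string {
  let lo = cursorLine;
  let hi = cursorLine;
  let size = lines[cursorLine]?.length ?? 0;
  // Expand symmetrically around the cursor while budget remains.
  while (size < maxChars && (lo > 0 || hi < lines.length - 1)) {
    if (lo > 0) { lo--; size += lines[lo].length; }
    if (hi < lines.length - 1 && size < maxChars) { hi++; size += lines[hi].length; }
  }
  return lines.slice(lo, hi + 1).join("\n");
}
```

In the real extension, the line array would come from the active `TextDocument` and the cursor position from the editor selection.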
All extension actions are registered as named VS Code commands, accessible from the Command Palette (Ctrl+Shift+P) without memorising keyboard shortcuts.
Right-click context menu items for common AI actions: Explain Selection, Refactor Selection, Add Docstring, Generate Tests.
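Commands and context menu entries like these are declared under `contributes` in package.json. The fragment below is illustrative only: the command IDs and titles are hypothetical, not the extension's actual identifiers.

```json
{
  "contributes": {
    "commands": [
      { "command": "nexusAi.explainSelection", "title": "Nexus AI: Explain Selection" },
      { "command": "nexusAi.refactorSelection", "title": "Nexus AI: Refactor Selection" }
    ],
    "menus": {
      "editor/context": [
        { "command": "nexusAi.explainSelection", "when": "editorHasSelection" }
      ]
    }
  }
}
```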
Switch between Google Gemini, OpenAI, or local Ollama endpoints via VS Code Settings without reinstalling the extension.
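Backend switching might be driven by the configured model name. The mapping below is a hedged sketch of one possible convention (gemini-* to Gemini, gpt-* to OpenAI, anything else to a local Ollama endpoint); the actual extension may route differently.

```typescript
type Backend = "gemini" | "openai" | "ollama";

// Assumed naming convention for illustration only.
function backendFromModel(model: string): Backend {
  if (model.startsWith("gemini-")) return "gemini";
  if (model.startsWith("gpt-")) return "openai";
  return "ollama"; // fall back to a local model
}
```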
API keys are stored in VS Code's encrypted SecretStorage rather than plaintext settings, preventing accidental exposure in dotfiles or version control.
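As a sketch of that flow: the interface below mirrors the `store`/`get` shape of `vscode.SecretStorage` so the logic can be shown (and tested) outside the extension host. The secret key name is hypothetical.

```typescript
// Mirrors the store/get shape of vscode.SecretStorage.
interface SecretStore {
  store(key: string, value: string): Promise<void>;
  get(key: string): Promise<string | undefined>;
}

const API_KEY_SECRET = "nexusAi.apiKey"; // hypothetical secret name

async function saveApiKey(secrets: SecretStore, apiKey: string): Promise<void> {
  await secrets.store(API_KEY_SECRET, apiKey); // never written to settings.json
}

async function readApiKey(secrets: SecretStore): Promise<string | undefined> {
  return secrets.get(API_KEY_SECRET);
}
```

In the real extension, `context.secrets` (the `ExtensionContext`'s SecretStorage) would be passed in.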
Rich HTML/CSS/JS panel for chat, generation history, and multi-turn interaction, rendered in VS Code's sidebar or editor area.
Real-time status bar item shows the current AI model name and a loading indicator during active API calls.
All primary actions have configurable keyboard shortcuts defined in package.json contributes.keybindings.
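A `contributes.keybindings` entry might look like the following; the command ID and key chords here are illustrative, not the extension's actual bindings.

```json
{
  "contributes": {
    "keybindings": [
      {
        "command": "nexusAi.explainSelection",
        "key": "ctrl+alt+e",
        "mac": "cmd+alt+e",
        "when": "editorHasSelection"
      }
    ]
  }
}
```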
Works on VS Code for Windows, macOS, and Linux; tested on VS Code 1.85+ and compatible with VS Code Insiders.
| Library / Tool | Role | Why This Choice |
|---|---|---|
| TypeScript | Extension language | Type-safe VS Code extension development |
| VS Code Extension API | Editor integration | Providers, commands, WebView, SecretStorage |
| esbuild | Bundler | Fast TypeScript → JavaScript bundling |
| AI SDK (Gemini / OpenAI) | AI backend | API client for LLM inference |
| vscode-test | Testing | Extension integration tests in an Extension Host |
Key packages detected in this repo:
node-fetch
- Node.js 18+ and the npm package manager
- Relevant API keys (see Configuration section)
git clone https://github.com/Devanik21/nexus-ai-code-assist-vscode.git
cd nexus-ai-code-assist-vscode
npm install
npm run watch # watch mode for development
# Press F5 in VS Code to open Extension Development Host
# Build distributable VSIX
npm run package
code --install-extension *.vsix

// settings.json
{
"extension.apiKey": "",
"extension.model": "gemini-2.0-flash",
"extension.triggerMode": "onDemand"
}

| Variable | Default | Description |
|---|---|---|
| extension.apiKey | (empty) | AI backend API key (use SecretStorage command to set) |
| extension.model | gemini-2.0-flash | AI model identifier |
| extension.triggerMode | onDemand | When to trigger AI: onType, onDemand, onSave |
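The triggerMode setting gates when the extension calls the backend. The mapping below is a hedged sketch of one plausible interpretation of the three values, not the extension's confirmed behavior:

```typescript
type TriggerMode = "onType" | "onDemand" | "onSave";
type EditorEvent = "type" | "command" | "save";

// Assumed mapping: explicit commands always fire; onType/onSave additionally
// react to their respective editor events.
function shouldTriggerAi(mode: TriggerMode, event: EditorEvent): boolean {
  switch (mode) {
    case "onType":   return event === "type" || event === "command";
    case "onSave":   return event === "save" || event === "command";
    case "onDemand": return event === "command"; // explicit user action only
  }
}
```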
Copy `.env.example` to `.env` and populate all required values before running.
nexus-ai-code-assist-vscode/
├── README.md
├── package-lock.json
├── package.json
├── tsconfig.json
└── ...
- Repository-wide context indexing for project-aware completions
- Offline local model support via Ollama REST API
- Test generation from function signatures
- Multi-file refactoring with diff preview
- Marketplace publication with automated release workflow
Contributions, issues, and feature requests are welcome. Please:
- Fork the repository
- Create a feature branch (`git checkout -b feature/your-feature`)
- Commit your changes (`git commit -m 'feat: add your feature'`)
- Push to your branch (`git push origin feature/your-feature`)
- Open a Pull Request
Please follow conventional commit messages and ensure any new code is documented.
A valid API key for the configured AI backend is required. The extension only sends code to the AI backend when a user action explicitly triggers it.
Devanik Debnath
B.Tech, Electronics & Communication Engineering
National Institute of Technology Agartala
This project is open source and available under the MIT License.
Crafted with curiosity, precision, and a belief that good software is worth building well.