# NonLinear AI Chat

A graph-based, multi-model LLM chat with branching conversations, bring-your-own-key (BYOK) provider support, and an "Ask the Council" feature that queries multiple models in parallel and synthesizes a single answer.
Built by VertxAI and released under the MIT License — free to use, modify, and distribute. We believe in open, hackable tools for AI workflows.
## Table of contents

- What is this?
- Why open source?
- Features
- Tech stack
- Prerequisites
- Quickstart
- Environment variables
- Core routes & API
- Security & encryption
- Extending the app
- Contributing & support
- License
## What is this?

NonLinear AI Chat turns conversations into nodes on a canvas. You can:
- Branch from any message and explore alternative replies without losing the original thread.
- Use multiple LLM providers (OpenAI, Anthropic, Google Gemini, optional OpenRouter) with your own API keys (BYOK).
- Ask the Council — send one prompt to several models at once, optionally get a critique, then a single synthesized answer.
All model calls happen server-side and your keys stay encrypted; the UI is a React Flow canvas behind auth (email/password + optional Google OAuth).
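The Council flow described above (parallel fan-out, then a single synthesized answer) can be sketched roughly as follows; the type and function names here are illustrative, not the app's actual code:

```typescript
// Illustrative sketch of the Council fan-out; names are assumptions,
// not the app's real implementation.
type CouncilMember = {
  model: string;
  ask: (prompt: string) => Promise<string>;
};

// Query every member in parallel; a failing member is dropped
// instead of failing the whole run.
async function askCouncil(
  members: CouncilMember[],
  prompt: string,
  synthesize: (answers: string[]) => Promise<string>,
): Promise<string> {
  const settled = await Promise.allSettled(members.map((m) => m.ask(prompt)));
  const answers = settled
    .filter((r): r is PromiseFulfilledResult<string> => r.status === "fulfilled")
    .map((r) => r.value);
  if (answers.length === 0) throw new Error("every council member failed");
  return synthesize(answers);
}
```

The optional critique step would slot in between the fan-out and the final `synthesize` call.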
## Why open source?

This project is fully open source under the MIT License. You can:
- Run it locally or self-host without vendor lock-in.
- Fork and adapt it for your team or product.
- Learn from a clean separation of auth, encryption, provider abstraction, and canvas UI.
- Contribute improvements back to the community.
VertxAI maintains it as a public good for developers building with LLMs.
## Features

| Feature | Description |
|---|---|
| Canvas chat | Conversations are nodes on a React Flow canvas; edges show branches and replies. |
| Branching history | Reply from any node to explore alternatives; previous paths stay intact. |
| Multi-provider | OpenAI, Anthropic, Google Gemini, optional OpenRouter — all via BYOK. |
| Ask the Council | Query multiple models in parallel, optional critique, then one synthesized answer. |
| Auth | Email/password (credentials) + optional Google OAuth via NextAuth. |
| BYOK key management | Encrypted provider keys per user; test, rotate, and delete from the UI. |
| Secure by default | Server-side model calls, AES-256-GCM for keys, rate limiting, Zod on all APIs. |
## Tech stack

| Layer | Technology |
|---|---|
| Framework | Next.js 14 (App Router), TypeScript (strict) |
| UI | React 18, Tailwind CSS, shadcn-style components, Lucide icons |
| Canvas | React Flow 11 |
| State | Zustand (client-side canvas state) |
| Auth | NextAuth (credentials + Google) |
| Database | PostgreSQL 16 + Prisma 5 |
| Secrets | AES-256-GCM (Node crypto), base64 32-byte key |
## Prerequisites

Before you start, ensure you have:
- Node.js 18+ (20+ recommended)
- npm or yarn
- Docker and Docker Compose (for PostgreSQL)
- Git
## Quickstart

Follow these steps to get the app running locally in a few minutes.

### Clone the repository

```bash
git clone https://github.com/your-org/NonLinear_AI_Chat.git
cd NonLinear_AI_Chat
```

(Replace with your actual repo URL if different.)

### Install dependencies

```bash
npm install
```

### Start PostgreSQL

```bash
docker compose up -d
```

This starts Postgres 16 on `localhost:5432` with:

- User: `postgres`
- Password: `postgres`
- Database: `nonlinear_ai_chat`
### Configure environment

```bash
cp .env.example .env
```

Edit `.env` and set at least:

| Variable | Required | Description |
|---|---|---|
| `DATABASE_URL` | Yes | Must match Docker Postgres, e.g. `postgresql://postgres:postgres@localhost:5432/nonlinear_ai_chat` |
| `NEXTAUTH_SECRET` | Yes | Random 32+ character string (e.g. `openssl rand -base64 32`) |
| `NEXTAUTH_URL` | Yes | In dev: `http://localhost:3000` |
| `KEY_ENCRYPTION_SECRET` | Yes | Base64-encoded 32-byte key for encrypting user API keys |

Generate a secure encryption key:

```bash
node -e "console.log(require('crypto').randomBytes(32).toString('base64'))"
```

Paste the output into `KEY_ENCRYPTION_SECRET` in `.env`.

Optional: set `GOOGLE_CLIENT_ID` and `GOOGLE_CLIENT_SECRET` for Google OAuth.
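Before starting the app, it can help to confirm that `KEY_ENCRYPTION_SECRET` really decodes to 32 bytes, the key size AES-256 requires. A small check (the function name is illustrative, not part of the app):

```typescript
import { randomBytes } from "node:crypto";

// KEY_ENCRYPTION_SECRET must base64-decode to exactly 32 bytes,
// the key size AES-256-GCM requires.
function isValidEncryptionSecret(secret: string): boolean {
  return Buffer.from(secret, "base64").length === 32;
}

// A freshly generated key, as in the quickstart command, always passes:
const fresh = randomBytes(32).toString("base64");
```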
### Set up the database

```bash
npx prisma generate
npx prisma migrate deploy
```

### Run the dev server

```bash
npm run dev
```

Open http://localhost:3000 in your browser.
### First run

1. Go to Sign up and create an account (or Log in if you already have one).
2. Go to Settings → Keys and add at least one provider key (e.g. OpenAI). Keys are encrypted and stored per user.
3. Open App to use the canvas: send a message, branch from any node, or use Ask the Council.
## Environment variables

| Variable | Required | Description |
|---|---|---|
| `DATABASE_URL` | Yes | PostgreSQL connection string |
| `NEXTAUTH_SECRET` | Yes | Secret for NextAuth session signing |
| `NEXTAUTH_URL` | Yes | Full URL of the app (e.g. `http://localhost:3000`) |
| `KEY_ENCRYPTION_SECRET` | Yes | Base64 32-byte key for AES-256-GCM |
| `GOOGLE_CLIENT_ID` | No | Google OAuth client ID |
| `GOOGLE_CLIENT_SECRET` | No | Google OAuth client secret |
| `OPENAI_API_BASE` | No | Override OpenAI API base URL (default: `https://api.openai.com/v1`) |
| `ANTHROPIC_API_BASE` | No | Override Anthropic API base |
| `GOOGLE_GEMINI_API_BASE` | No | Override Gemini API base |
| `OPENAI_ALLOWED_MODELS` | No | Comma-separated list to restrict OpenAI models in the UI |
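Putting it together, a minimal `.env` for local development might look like this (all values are placeholders; generate your own secrets):

```shell
# Database (matches the Docker Compose Postgres)
DATABASE_URL=postgresql://postgres:postgres@localhost:5432/nonlinear_ai_chat

# NextAuth
NEXTAUTH_SECRET=replace-with-output-of-openssl-rand-base64-32
NEXTAUTH_URL=http://localhost:3000

# BYOK key encryption (base64-encoded 32-byte key)
KEY_ENCRYPTION_SECRET=replace-with-base64-32-byte-key

# Optional: Google OAuth
# GOOGLE_CLIENT_ID=your-client-id
# GOOGLE_CLIENT_SECRET=your-client-secret
```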
## Core routes & API

### Pages

| Route | Description |
|---|---|
| `/` | Landing page |
| `/signup` | Email/password signup |
| `/login` | Login (credentials or Google) |
| `/app` | Main canvas (loads or creates the latest canvas) |
| `/app/[canvasId]` | Specific canvas |
| `/settings/keys` | Manage encrypted provider keys (BYOK) |
### API endpoints

| Method | Route | Description |
|---|---|---|
| POST | `/api/chat` | Single-model chat; creates user + assistant nodes and edges |
| POST | `/api/council` | Council run: multi-model + optional critique + synthesis |
| GET | `/api/council/[nodeId]` | Council run details for a node |
| POST / GET / DELETE | `/api/keys` | Manage encrypted provider keys |
| POST | `/api/keys/test` | Test a provider key |
| POST | `/api/nodes/position` | Persist node positions |
| POST | `/api/canvases` | Create a new canvas |
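As a rough illustration, a client call to `/api/chat` might look like the sketch below. The request body fields (`canvasId`, `parentNodeId`, `message`) are guesses based on the route descriptions, not the actual schema; check the route handler for the real contract.

```typescript
// Hypothetical client helper for POST /api/chat. Body fields are assumptions
// drawn from the route description above, not the app's Zod schema.
async function sendMessage(
  canvasId: string,
  parentNodeId: string | null,
  message: string,
): Promise<unknown> {
  const res = await fetch("/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ canvasId, parentNodeId, message }),
  });
  if (!res.ok) throw new Error(`chat request failed: ${res.status}`);
  return res.json();
}
```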
## Security & encryption

- Model calls are server-only — the client never talks to LLM providers directly.
- Provider keys are encrypted with AES-256-GCM before storage; decryption happens only on the server when calling a provider. Plaintext keys are never logged or sent to the client.
- Zod validates all JSON API inputs.
- Rate limiting (in-memory sliding window) applies to `/api/chat`, `/api/council`, and `/api/keys/test`.
- Auth: NextAuth protects `/app`, `/settings`, and all `/api/*`; endpoints verify resources belong to the authenticated user.
- CSRF: Handled by NextAuth for auth flows.
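A sketch of the key-encryption scheme described above, using Node's built-in crypto. The `iv:tag:ciphertext` encoding and function names are illustrative; the app's actual storage format may differ.

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

// Encrypt a provider API key with AES-256-GCM before it touches the database.
function encryptKey(plaintext: string, key: Buffer): string {
  const iv = randomBytes(12); // 96-bit nonce, the recommended size for GCM
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  const tag = cipher.getAuthTag(); // authenticates ciphertext integrity
  return [iv, tag, ciphertext].map((b) => b.toString("base64")).join(":");
}

// Decrypt server-side only, just before calling the provider.
function decryptKey(encoded: string, key: Buffer): string {
  const [iv, tag, ciphertext] = encoded.split(":").map((s) => Buffer.from(s, "base64"));
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag); // throws on tampered ciphertext
  return Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString("utf8");
}
```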
See SECURITY.md for more detail.
## Extending the app

- New providers/models: Implement the `Provider` interface in `lib/providers/*`, register it in `lib/providers/index.ts`, and add key-management UI in settings.
- Custom node types: Add React Flow node components under `components/app/nodes/*` and register them in `nodeTypes` in `CanvasApp`.
- Auto-layout / export: Use something like Dagre for layout; export can walk the node graph and emit markdown or an outline.
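A hypothetical shape for the `Provider` interface and its registry, mirroring the structure described above. The real definitions live in `lib/providers/*` and `lib/providers/index.ts` and may differ.

```typescript
// Illustrative only: the app's actual Provider interface may have a
// different shape (streaming, model metadata, error types, etc.).
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

interface Provider {
  id: string; // e.g. "openai", "anthropic"
  listModels(): string[];
  chat(apiKey: string, model: string, messages: ChatMessage[]): Promise<string>;
}

// A registry like the one lib/providers/index.ts would maintain.
const providers: Record<string, Provider> = {};

function registerProvider(p: Provider): void {
  providers[p.id] = p;
}
```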