
Spandan2022/VertxAI

VertxAI

A graph-based, multi-model LLM chat with branching conversations, bring-your-own-key (BYOK) provider support, and an "Ask the Council" feature that queries multiple models in parallel and synthesizes a single answer.

Built by VertxAI and released under the MIT License — free to use, modify, and distribute. We believe in open, hackable tools for AI workflows.



Table of contents

  • What is this?
  • Why open source?
  • Features
  • Tech stack
  • Prerequisites
  • Quickstart
  • Environment variables
  • Core routes & API
  • Security & encryption
  • Extending the app

What is this?

NonLinear AI Chat turns conversations into nodes on a canvas. You can:

  • Branch from any message and explore alternative replies without losing the original thread.
  • Use multiple LLM providers (OpenAI, Anthropic, Google Gemini, optional OpenRouter) with your own API keys (BYOK).
  • Ask the Council — send one prompt to several models at once, optionally get a critique, then a single synthesized answer.

All model calls happen server-side: your keys stay encrypted at rest and never reach the browser. The UI is a React Flow canvas behind authentication (email/password plus optional Google OAuth).


Why open source?

This project is fully open source under the MIT License. You can:

  • Run it locally or self-host without vendor lock-in.
  • Fork and adapt it for your team or product.
  • Learn from a clean separation of auth, encryption, provider abstraction, and canvas UI.
  • Contribute improvements back to the community.

VertxAI maintains it as a public good for developers building with LLMs.


Features

| Feature | Description |
| --- | --- |
| Canvas chat | Conversations are nodes on a React Flow canvas; edges show branches and replies. |
| Branching history | Reply from any node to explore alternatives; previous paths stay intact. |
| Multi-provider | OpenAI, Anthropic, Google Gemini, optional OpenRouter — all via BYOK. |
| Ask the Council | Query multiple models in parallel, optional critique, then one synthesized answer. |
| Auth | Email/password (credentials) + optional Google OAuth via NextAuth. |
| BYOK key management | Encrypted provider keys per user; test, rotate, and delete from the UI. |
| Secure by default | Server-side model calls, AES-256-GCM for keys, rate limiting, Zod on all APIs. |

Tech stack

| Layer | Technology |
| --- | --- |
| Framework | Next.js 14 (App Router), TypeScript (strict) |
| UI | React 18, Tailwind CSS, shadcn-style components, Lucide icons |
| Canvas | React Flow 11 |
| State | Zustand (client-side canvas state) |
| Auth | NextAuth (credentials + Google) |
| Database | PostgreSQL 16 + Prisma 5 |
| Secrets | AES-256-GCM (Node crypto), base64-encoded 32-byte key |
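The AES-256-GCM scheme in the table above can be sketched with Node's built-in crypto module. This is an illustrative sketch, not the app's actual code: the function names are ours, and the iv + tag + ciphertext layout is one common convention for packing GCM output into a single stored string.

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

// Stands in for KEY_ENCRYPTION_SECRET: a base64-encoded 32-byte key.
const keyB64 = randomBytes(32).toString("base64");
const key = Buffer.from(keyB64, "base64");

function encrypt(plaintext: string): string {
  const iv = randomBytes(12); // 96-bit IV, the recommended size for GCM
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  const tag = cipher.getAuthTag(); // 16-byte authentication tag
  // Store IV, tag, and ciphertext together as one base64 string.
  return Buffer.concat([iv, tag, ciphertext]).toString("base64");
}

function decrypt(encoded: string): string {
  const buf = Buffer.from(encoded, "base64");
  const iv = buf.subarray(0, 12);
  const tag = buf.subarray(12, 28);
  const ciphertext = buf.subarray(28);
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag); // must be set before final(); a bad tag throws
  return Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString("utf8");
}
```

GCM gives both confidentiality and integrity: a tampered ciphertext or wrong key fails authentication at `final()` instead of decrypting to garbage.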

Prerequisites

Before you start, ensure you have:

  • Node.js 18+ (20+ recommended)
  • npm or yarn
  • Docker and Docker Compose (for PostgreSQL)
  • Git

Quickstart

Follow these steps to get the app running locally in a few minutes.

1. Clone the repository

```bash
git clone https://github.com/your-org/NonLinear_AI_Chat.git
cd NonLinear_AI_Chat
```

(Replace with your actual repo URL if different.)

2. Install dependencies

```bash
npm install
```

3. Start PostgreSQL with Docker

```bash
docker compose up -d
```

This starts Postgres 16 on localhost:5432 with:

  • User: postgres
  • Password: postgres
  • Database: nonlinear_ai_chat

4. Set up environment variables

```bash
cp .env.example .env
```

Edit .env and set at least:

| Variable | Required | Description |
| --- | --- | --- |
| `DATABASE_URL` | Yes | Must match the Docker Postgres instance, e.g. `postgresql://postgres:postgres@localhost:5432/nonlinear_ai_chat` |
| `NEXTAUTH_SECRET` | Yes | Random 32+ character string (e.g. `openssl rand -base64 32`) |
| `NEXTAUTH_URL` | Yes | In dev: `http://localhost:3000` |
| `KEY_ENCRYPTION_SECRET` | Yes | Base64-encoded 32-byte key for encrypting user API keys |

Generate a secure encryption key:

```bash
node -e "console.log(require('crypto').randomBytes(32).toString('base64'))"
```

Paste the output into KEY_ENCRYPTION_SECRET in .env.

Optional: set GOOGLE_CLIENT_ID and GOOGLE_CLIENT_SECRET for Google OAuth.

5. Run database migrations

```bash
npx prisma generate
npx prisma migrate deploy
```

6. Start the development server

```bash
npm run dev
```

Open http://localhost:3000 in your browser.

7. Create an account and add a provider key

  • Go to Sign up and create an account (or Login if you already have one).
  • Go to Settings → Keys and add at least one provider key (e.g. OpenAI). Keys are encrypted and stored per user.
  • Open App to use the canvas: send a message, branch from any node, or use Ask the Council.

Environment variables

| Variable | Required | Description |
| --- | --- | --- |
| `DATABASE_URL` | Yes | PostgreSQL connection string |
| `NEXTAUTH_SECRET` | Yes | Secret for NextAuth session signing |
| `NEXTAUTH_URL` | Yes | Full URL of the app (e.g. `http://localhost:3000`) |
| `KEY_ENCRYPTION_SECRET` | Yes | Base64-encoded 32-byte key for AES-256-GCM |
| `GOOGLE_CLIENT_ID` | No | Google OAuth client ID |
| `GOOGLE_CLIENT_SECRET` | No | Google OAuth client secret |
| `OPENAI_API_BASE` | No | Override OpenAI API base URL (default: `https://api.openai.com/v1`) |
| `ANTHROPIC_API_BASE` | No | Override Anthropic API base URL |
| `GOOGLE_GEMINI_API_BASE` | No | Override Gemini API base URL |
| `OPENAI_ALLOWED_MODELS` | No | Comma-separated list restricting which OpenAI models appear in the UI |
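Since AES-256 requires exactly 32 bytes of key material, a startup sanity check on `KEY_ENCRYPTION_SECRET` catches misconfiguration early. The helper below is illustrative (not part of the app); it only checks that the value base64-decodes to 32 bytes.

```typescript
import { randomBytes } from "node:crypto";

// Returns true if the given string decodes to exactly 32 bytes of key material.
function isValidEncryptionSecret(secret: string): boolean {
  const decoded = Buffer.from(secret, "base64");
  return decoded.length === 32;
}

// A freshly generated secret, as produced by the node -e one-liner above.
const candidate = randomBytes(32).toString("base64");
```

A check like this, run once at boot, turns a cryptic "invalid key length" error deep in a request handler into an immediate, readable startup failure.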

Core routes & API

Pages

| Route | Description |
| --- | --- |
| `/` | Landing page |
| `/signup` | Email/password signup |
| `/login` | Login (credentials or Google) |
| `/app` | Main canvas (loads or creates the latest canvas) |
| `/app/[canvasId]` | Specific canvas |
| `/settings/keys` | Manage encrypted provider keys (BYOK) |

API routes (auth required)

| Method | Route | Description |
| --- | --- | --- |
| POST | `/api/chat` | Single-model chat; creates user and assistant nodes and edges |
| POST | `/api/council` | Council run: multi-model + optional critique + synthesis |
| GET | `/api/council/[nodeId]` | Council run details for a node |
| POST / GET / DELETE | `/api/keys` | Manage encrypted provider keys |
| POST | `/api/keys/test` | Test a provider key |
| POST | `/api/nodes/position` | Persist node positions |
| POST | `/api/canvases` | Create a new canvas |

Security & encryption

  • Model calls are server-only — the client never talks to LLM providers directly.
  • Provider keys are encrypted with AES-256-GCM before storage; decryption happens only on the server when calling a provider. Plaintext keys are never logged or sent to the client.
  • Zod validates all JSON API inputs.
  • Rate limiting (in-memory sliding window) applies to /api/chat, /api/council, and /api/keys/test.
  • Auth: NextAuth protects /app, /settings, and all /api/*; endpoints verify resources belong to the authenticated user.
  • CSRF: Handled by NextAuth for auth flows.
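An in-memory sliding-window limiter like the one mentioned above can be sketched in a few lines. The class name, parameters, and eviction strategy here are illustrative, not the app's actual implementation.

```typescript
// Tracks request timestamps per key (e.g. user ID) and allows at most
// `limit` requests within any rolling `windowMs`-millisecond window.
class SlidingWindowLimiter {
  private hits = new Map<string, number[]>();

  constructor(private limit: number, private windowMs: number) {}

  allow(key: string, now: number = Date.now()): boolean {
    const cutoff = now - this.windowMs;
    // Drop timestamps that have slid out of the window.
    const recent = (this.hits.get(key) ?? []).filter((t) => t > cutoff);
    if (recent.length >= this.limit) {
      this.hits.set(key, recent);
      return false; // over the limit; caller should respond 429
    }
    recent.push(now);
    this.hits.set(key, recent);
    return true;
  }
}
```

Being in-memory, state resets on restart and is not shared across instances; a multi-instance deployment would need a shared store such as Redis instead.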

See SECURITY.md for more detail.


Extending the app

  • New providers/models: Implement the Provider interface in lib/providers/*, register in lib/providers/index.ts, and add UI for keys in settings.
  • Custom node types: Add React Flow node components under components/app/nodes/* and register them in nodeTypes in CanvasApp.
  • Auto-layout / export: Use something like Dagre for layout; export can walk the node graph and emit markdown or outline.
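A provider abstraction along the lines described above might look like the sketch below. The interface shape, method names, and registry are hypothetical; consult `lib/providers/*` for the real contract.

```typescript
// Hypothetical shapes; the actual Provider interface in lib/providers/* may differ.
interface ChatMessage {
  role: "user" | "assistant" | "system";
  content: string;
}

interface Provider {
  id: string;                     // e.g. "openai"
  listModels(): string[];         // models to expose in the UI
  chat(apiKey: string, model: string, messages: ChatMessage[]): Promise<string>;
}

const registry = new Map<string, Provider>();

function registerProvider(p: Provider): void {
  registry.set(p.id, p);
}

// A stub provider that just echoes the last message, useful for testing the canvas.
registerProvider({
  id: "echo",
  listModels: () => ["echo-1"],
  chat: async (_apiKey, _model, messages) => messages[messages.length - 1].content,
});
```

Routing a request then reduces to `registry.get(providerId)` followed by a `chat(...)` call with the user's decrypted key, keeping provider-specific HTTP details out of the API routes.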

About

An infinite-canvas chat for holding multiple conversations with AI models in one place while preserving context.

Resources

  • License
  • Code of conduct
  • Contributing
  • Security policy
