> **Note:** This repository was archived by the owner on Apr 15, 2026 and is now read-only.


# Canvas

A visual node-based AI orchestration canvas by Lupo Studios.

Next.js 14 · TypeScript (strict mode) · React Flow · 68 tests passing · MIT License

Overview · Features · Quick Start · Architecture · API · Testing · Deployment

*(Figure: collaboration flow diagram showing the development process from architecture to shipping.)*


## Overview

Canvas is a multi-model AI orchestration platform that lets you visually compose, connect, and execute AI workflows using a node-based canvas interface. Think of it as a visual programming environment for AI -- connect GPT-5 and Gemini nodes, manage context flow between them, and track costs in real-time.

Built with production-grade TypeScript (strict mode), React Flow for the visual canvas, and Zustand for state management. Designed for users who want fine-grained control over AI model orchestration without writing code.

### What it does

- **Visual AI Composition** -- drag and drop nodes representing AI model calls onto a canvas
- **Multi-Model Support** -- use GPT-5 (OpenAI) and Gemini (Google) in the same workflow
- **Context Flow** -- connect nodes with edges to pass output from one model as context to another
- **Real-Time Cost Tracking** -- per-node and per-branch budget monitoring with heat-level indicators
- **Variant Generation** -- generate multiple responses per prompt for comparison
- **Snapshot System** -- save, load, and undo/redo canvas states
- **Pack Export/Import** -- share workflows as portable JSON files

## Features

### Canvas & Nodes

| Feature | Description |
| --- | --- |
| LLM Nodes | Cards representing individual AI calls, with model, prompt, status, and cost display |
| Edge Connections | Visual connections between nodes defining context flow |
| Snap-to-Grid | 20 px grid snapping for clean layouts |
| Keyboard Shortcuts | Cmd+K command palette; Delete to remove nodes |
| Focus Mode | Dim non-selected nodes to reduce visual noise |
| MiniMap | Bird's-eye navigation for large canvases |

### AI Models

| Model | Capabilities | Pricing (per 1K tokens) |
| --- | --- | --- |
| GPT-5 | Think, Expand | $0.005 input / $0.015 output |
| Gemini | Think, Expand, Vision | $0.0005 input / $0.0015 output |
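As a quick worked example, the per-1K rates in the table translate into a cost estimate like the following. The `RATES_PER_1K` constant and `costUSD` helper are illustrative names for this sketch, not the project's actual API:

```typescript
// Illustrative cost estimate from the per-1K-token rates above.
// These names are assumptions for the sketch, not the project's real API.
const RATES_PER_1K = {
  gpt5: { input: 0.005, output: 0.015 },
  gemini: { input: 0.0005, output: 0.0015 },
} as const;

function costUSD(
  model: keyof typeof RATES_PER_1K,
  inputTokens: number,
  outputTokens: number
): number {
  const r = RATES_PER_1K[model];
  return (inputTokens / 1000) * r.input + (outputTokens / 1000) * r.output;
}
```

For example, a GPT-5 call with 2,000 input and 500 output tokens costs about 2 × $0.005 + 0.5 × $0.015 = $0.0175.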

### Budget Management

The budget system tracks spending across branches with three heat levels:

- **Cool** (< 50%) -- green indicator, safe to run
- **Warm** (50-85%) -- yellow indicator, approaching the limit
- **Hot** (>= 85%) -- red indicator, budget nearly exhausted

Default soft cap: $3.00 per branch (configurable).
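The thresholds above reduce to a simple spend-to-cap ratio check. This is a hedged sketch with assumed names (`HeatLevel`, `heatFor`), not the project's actual `budget.ts` implementation:

```typescript
// Sketch of the heat-level mapping described above. HeatLevel and heatFor
// are assumed names for illustration, not the project's budget.ts API.
type HeatLevel = "cool" | "warm" | "hot";

const DEFAULT_SOFT_CAP_USD = 3.0;

function heatFor(spentUSD: number, capUSD: number = DEFAULT_SOFT_CAP_USD): HeatLevel {
  const ratio = spentUSD / capUSD;
  if (ratio >= 0.85) return "hot";  // >= 85%: budget nearly exhausted
  if (ratio >= 0.5) return "warm";  // 50-85%: approaching the limit
  return "cool";                    // < 50%: safe to run
}
```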

### Context Scoping

Nodes support three context scopes that determine what information flows into them:

| Scope | Behavior |
| --- | --- |
| None | No context from other nodes |
| Local | Context from upstream connected nodes (via edges) |
| Global | Context from all nodes, with optional TTL filtering |
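Roughly, the three scopes gate context collection like this. The types and the `contextNodes` helper are illustrative assumptions, not the project's actual `lib/context.ts` API, and the local case is kept to direct neighbors here (the real edge traversal may be transitive):

```typescript
// Hedged sketch of how the three scopes could gate context collection.
type Scope = "none" | "local" | "global";

interface CanvasNode { id: string; output?: string }
interface CanvasEdge { source: string; target: string }

function contextNodes(
  target: CanvasNode,
  scope: Scope,
  nodes: CanvasNode[],
  edges: CanvasEdge[]
): CanvasNode[] {
  switch (scope) {
    case "none":
      return []; // no context at all
    case "local": {
      // Direct upstream neighbors reached via incoming edges.
      const upstream = new Set(
        edges.filter((e) => e.target === target.id).map((e) => e.source)
      );
      return nodes.filter((n) => upstream.has(n.id));
    }
    case "global":
      // Every other node; TTL filtering is omitted in this sketch.
      return nodes.filter((n) => n.id !== target.id);
  }
}
```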

## Quick Start

### Prerequisites

- Node.js >= 18.0
- npm >= 9.0
- API keys (optional -- the app runs with stub responses if they are missing)

### Installation

```bash
# Clone the repository
git clone https://github.com/localwolfpackai/canvas-mvp.git
cd canvas-mvp

# Install dependencies
npm install

# Copy the environment template
cp .env.example .env.local

# (Optional) Add your API keys to .env.local
# OPENAI_API_KEY=sk-...
# GOOGLE_API_KEY=AIza...

# Start the development server
npm run dev
```

Open http://localhost:3000 to see the canvas.

### Available Scripts

| Script | Description |
| --- | --- |
| `npm run dev` | Start the development server with hot reload |
| `npm run build` | Create an optimized production build |
| `npm start` | Start the production server |
| `npm run lint` | Run ESLint checks |
| `npm run type-check` | Run TypeScript type checking (strict mode) |
| `npm test` | Run the test suite (68 tests) |
| `npm run test:watch` | Run tests in watch mode |
| `npm run test:coverage` | Run tests with a coverage report |

## Architecture

```text
src/
├── app/                    # Next.js App Router
│   ├── api/
│   │   ├── keys/           # API key management (GET/POST)
│   │   ├── test-keys/      # API key validation
│   │   └── llm/
│   │       ├── gpt5/       # GPT-5 API proxy with Zod validation
│   │       └── gemini/     # Gemini API proxy with Zod validation
│   ├── layout.tsx          # Root layout with metadata
│   └── page.tsx            # Home page with ErrorBoundary
│
├── components/
│   ├── Canvas.tsx           # React Flow canvas shell
│   ├── CommandK.tsx         # Cmd+K command palette
│   ├── ContextGlass.tsx     # Context preview & cost estimation
│   ├── ErrorBoundary.tsx    # React error boundary
│   ├── HeatStrips.tsx       # Budget heat visualization
│   ├── Snapshots.tsx        # Undo/redo & snapshot management
│   ├── VariantGrid.tsx      # Multi-variant display grid
│   ├── ApiKeyManager.tsx    # API key configuration UI
│   └── nodes/
│       └── LLMNode.tsx      # Individual LLM node component
│
├── lib/
│   ├── context.ts           # Context collection with edge traversal
│   ├── budget.ts            # Budget enforcement & heat tracking
│   ├── storage.ts           # IndexedDB persistence layer
│   ├── telemetry.ts         # Event logging & analytics
│   ├── jobManager.ts        # Job queue with concurrency control
│   ├── packs.ts             # Graph pack import/export
│   ├── collab.ts            # Yjs collaboration framework
│   └── providers/
│       ├── base.ts          # Provider registry & stub fallback
│       ├── gpt5.ts          # GPT-5 client-side handler
│       └── gemini.ts        # Gemini client-side handler
│
├── store/
│   └── canvas.ts            # Zustand state management with devtools
│
├── types.ts                 # TypeScript type definitions
├── schemas.ts               # Zod runtime validation schemas
└── constants.ts             # Centralized configuration
```

### Design Principles

1. **Type Safety First** -- TypeScript strict mode with `noUncheckedIndexedAccess`, plus Zod for runtime validation
2. **Separation of Concerns** -- UI components, state management, and business logic in distinct layers
3. **Provider Abstraction** -- AI models accessed through a registry pattern with stub fallbacks
4. **Budget Enforcement** -- spend tracking at the branch level with heat indicators
5. **Error Resilience** -- error boundaries, graceful fallbacks, and structured error handling
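The registry-with-stub-fallback pattern named in principle 3 can be sketched roughly as follows; the `Provider` interface and names are assumptions for illustration, not the actual `lib/providers/base.ts` code:

```typescript
// Minimal sketch of a provider registry with a stub fallback.
// The Provider shape and names are illustrative assumptions.
interface Provider {
  name: string;
  complete(prompt: string): Promise<string>;
}

// Used whenever no real provider (or API key) is available.
const stubProvider: Provider = {
  name: "stub",
  async complete(prompt) {
    return `[stub response for: ${prompt.slice(0, 40)}]`;
  },
};

const registry = new Map<string, Provider>();

function getProvider(model: string): Provider {
  // Fall back to the stub so the canvas stays usable without keys.
  return registry.get(model) ?? stubProvider;
}
```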

### State Management

The app uses Zustand with granular selectors to minimize re-renders:

```typescript
// Granular selectors -- components only re-render when their slice changes
const nodes = useNodes();           // Only node data
const edges = useEdges();           // Only edge data
const budgets = useBudgets();       // Only budget data
const actions = useCanvasActions(); // Stable action references
```

### Data Flow

```text
User Action → Canvas Component → Zustand Store → React Flow
                                       ↓
                              API Route (POST /api/llm/*)
                                       ↓
                              OpenAI / Google API
                                       ↓
                              Response → Store Update → UI
```

## API

### LLM Endpoints

#### POST /api/llm/gpt5

Proxy endpoint for GPT-5 calls with Zod request validation. Example request body:

```json
{
  "nodeId": "node-abc123",
  "model": "gpt5",
  "kind": "think",
  "prompt": "Analyze this problem...",
  "variantCount": 3,
  "context": { "nodes": [], "tokens": 0, "costUSD": 0, "scope": "local" },
  "scope": "local"
}
```

#### POST /api/llm/gemini

Proxy endpoint for Gemini calls with an identical request schema.
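A client-side call to either proxy might look like the following. This is a hedged sketch built from the request shape documented above; the `runNode` helper (and the example `nodeId`) are illustrative, not part of the codebase:

```typescript
// Illustrative client-side call to the LLM proxy endpoints.
// runNode is an assumed helper name, not part of the project.
async function runNode(model: "gpt5" | "gemini", prompt: string): Promise<unknown> {
  const res = await fetch(`/api/llm/${model}`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      nodeId: "node-abc123", // illustrative id
      model,
      kind: "think",
      prompt,
      variantCount: 1,
      context: { nodes: [], tokens: 0, costUSD: 0, scope: "local" },
      scope: "local",
    }),
  });
  if (!res.ok) throw new Error(`LLM call failed: ${res.status}`);
  return res.json();
}
```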

### Key Management

| Endpoint | Method | Description |
| --- | --- | --- |
| `/api/keys` | GET | Retrieve saved API keys |
| `/api/keys` | POST | Save/update API keys with format validation |
| `/api/test-keys` | POST | Validate keys against the provider APIs |

## Testing

The project uses Vitest with `@testing-library/react` for testing.

```bash
# Run all tests
npm test

# Watch mode
npm run test:watch

# With coverage
npm run test:coverage
```

### Test Suite Overview

| Test File | Tests | Coverage |
| --- | --- | --- |
| `context.test.ts` | 13 | Context collection, edge traversal, token estimation, cost calculation |
| `budget.test.ts` | 22 | Budget creation, spend tracking, heat levels, cap updates, export/import |
| `packs.test.ts` | 13 | Pack creation, export/import round-trip, validation, stats |
| `jobManager.test.ts` | 6 | Queue status, active jobs, cancellation, cleanup |
| `schemas.test.ts` | 14 | Zod schema validation for Node, Edge, Budget, Context, RunSpec, GraphPack |

**Total: 68 tests, all passing**


## Deployment

### Vercel (Recommended)

The project is configured for Vercel deployment:

1. Push to your GitHub repository
2. Import the project in Vercel
3. Add environment variables:
   - `OPENAI_API_KEY` (optional)
   - `GOOGLE_API_KEY` (optional)
4. Deploy

The build output is optimized:

- Static pages are pre-rendered at build time
- API routes are server-rendered on demand
- Security headers are configured (X-Frame-Options, CSP, etc.)

### Environment Variables

| Variable | Required | Description |
| --- | --- | --- |
| `OPENAI_API_KEY` | No | OpenAI API key for GPT-5 calls |
| `GOOGLE_API_KEY` | No | Google API key for Gemini calls |
| `LLM_DISABLE_REAL` | No | Set to `1` to force stub responses |

## Tech Stack

| Category | Technology |
| --- | --- |
| Framework | Next.js 14 (App Router) |
| Language | TypeScript 5.4 (strict mode) |
| UI | React 18, React Flow 11 |
| Styling | Tailwind CSS 3.4 |
| State | Zustand 4.5 |
| Validation | Zod 3.23 |
| Storage | IndexedDB (idb-keyval) |
| Collaboration | Yjs 13.6 |
| Icons | Lucide React |
| Testing | Vitest, Testing Library |
| IDs | nanoid |

## Security

- API keys are stored server-side only, never exposed to the client
- Input validation via Zod on all API endpoints
- Security headers configured (`X-Frame-Options: DENY`, `X-Content-Type-Options: nosniff`, `Referrer-Policy`, `Permissions-Policy`)
- No `X-Powered-By` header, to reduce fingerprinting
- API responses use `Cache-Control: no-store` to prevent caching of sensitive data
- React Strict Mode enabled for development safety

---

**Lupo Studios** -- building intelligent tools for creative professionals
