| title | Database Management |
|---|---|
| description | Drizzle ORM schema, migrations, dual-database setup (NeonDB + Supabase), and seeding |
| category | guide |
| audience | developer |
This guide describes the RevealUI database workflow and the underlying database scripts it orchestrates across different development environments.
Last Updated: 2026-01-31
- Overview
- Available Commands
- Command Details
- Environment Variables
- Environment-Specific Usage
- Database Locations
- Common Workflows
- Troubleshooting
- Best Practices
RevealUI now exposes a unified developer-facing database interface through the revealui CLI.
Use these commands first:
| Command | Purpose |
|---|---|
| `pnpm revealui doctor` | Check whether your local environment is ready |
| `pnpm revealui dev up --dry-run` | Preview the local startup plan |
| `pnpm revealui dev up` | Bootstrap the local environment |
| `pnpm revealui db status` | Check local database readiness |
| `pnpm revealui db start` | Start the local database |
| `pnpm revealui db migrate` | Run Drizzle migrations through the RevealUI workflow |
The older pnpm db:* scripts still exist and remain useful as lower-level building blocks, but the supported day-to-day workflow is now revealui dev ... and revealui db ....
| Command | Description |
|---|---|
| `pnpm revealui db init` | Initialize the local database data directory when needed |
| `pnpm revealui db start` | Start local Postgres using the current local environment |
| `pnpm revealui db stop` | Stop the local database |
| `pnpm revealui db status` | Report local database health |
| `pnpm revealui db migrate` | Run Drizzle migrations |
| `pnpm revealui dev up` | End-to-end local bootstrap, including database checks |
| Command | Description | File | Environment Variables Required |
|---|---|---|---|
| `pnpm db:init` | Initialize database connection and verify setup | `scripts/setup/database.ts` | `POSTGRES_URL`, `DATABASE_URL`, or `SUPABASE_DATABASE_URI` |
| `pnpm db:migrate` | Run Drizzle migrations | `scripts/setup/migrations.ts` | `POSTGRES_URL` or `DATABASE_URL` |
| `pnpm db:reset` | Drop all tables and recreate schema | `scripts/setup/reset-database.ts` | Same as init |
| `pnpm db:seed` | Seed database with sample data | `scripts/setup/seed-sample-content.ts` | Same as init |
| `pnpm db:backup` | Create JSON backup of all tables | `scripts/commands/database/backup.ts` | Same as init |
| `pnpm db:restore` | Restore from backup file | `scripts/commands/database/restore.ts` | Same as init |
| `pnpm db:status` | Check database connection and table count | `scripts/commands/database/status.ts` | Same as init |
| `pnpm db:setup-test` | Setup test database | `scripts/dev-tools/test-database.ts` | Test-specific vars |
Purpose: Initialize the local database data directory for the supported local Postgres workflow.

Usage:

```bash
pnpm revealui db init
```

Purpose: Start the local Postgres instance used by RevealUI local development.

Usage:

```bash
pnpm revealui db start
pnpm revealui db status
```

Purpose: Apply the current Drizzle migration set to the configured local database target.

Usage:

```bash
pnpm revealui db migrate
pnpm revealui dev up
```

Purpose: Verify database connection and initialize RevealUI tables.
What it does:

- Detects database provider (Neon, Supabase, Vercel, Generic)
- Tests connection with a `SELECT NOW()` query
- Checks PostgreSQL version
- Verifies RevealUI system tables exist: `revealui_locked_documents`, `revealui_locked_documents_rels`, `revealui_preferences`, `revealui_preferences_rels`, `revealui_migrations`
- Lists all RevealUI tables found

Environment Variables (priority order):

1. `DATABASE_URL` (primary)
2. `POSTGRES_URL` (fallback)
3. `SUPABASE_DATABASE_URI` (Supabase-specific)

Usage:

```bash
# Low-level verification
pnpm db:init

# Check what gets initialized
# (Safe to run multiple times - idempotent)
```

Exit codes:

- `0` - Success
- `1` - Configuration error (no connection string)
- `2` - Execution error (connection failed)
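The priority order above can be sketched as a small helper. This is a hypothetical illustration of the documented behavior, not the actual code in `scripts/setup/database.ts`:

```typescript
// Hypothetical helper illustrating the documented priority order;
// the real implementation in scripts/setup/database.ts may differ.
function resolveConnectionString(
  env: Record<string, string | undefined>,
): string | null {
  return (
    env.DATABASE_URL ?? // primary
    env.POSTGRES_URL ?? // fallback (Neon convention)
    env.SUPABASE_DATABASE_URI ?? // Supabase-specific
    null // no connection string configured (exit code 1)
  );
}
```

If this returns `null`, the script exits with code 1 (configuration error) rather than attempting a connection.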
Purpose: Run Drizzle schema migrations to update database structure.
What it does:

- Checks for `POSTGRES_URL` or `DATABASE_URL`
- Generates new migrations if the schema changed (`pnpm db:generate`)
- Pushes schema changes to the database (`pnpm db:push`)
- Verifies migration success:
  - Checks that the `node_id_mappings` table exists
  - Checks for the `embedding_metadata` column in `agent_memories`

Interactive: Yes (prompts before modifying database)

Usage:

```bash
# Low-level migration flow
pnpm db:migrate

# In CI (auto-confirm)
CI=true pnpm db:migrate
```

Safety:

- Shows a warning before modifying the database
- Requires confirmation unless in CI
- Uses transactions (rollback on error)
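The "prompt unless CI" rule can be sketched as follows. The helper name is hypothetical; the real migration script may structure this differently:

```typescript
// Hypothetical sketch of the "require confirmation unless in CI"
// behavior described above.
function shouldPromptForConfirmation(
  env: Record<string, string | undefined>,
): boolean {
  // CI=true skips interactive prompts, as documented
  return env.CI !== "true";
}
```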
Purpose: Complete database reset (drop all tables, recreate schema).
What it does:

- Validates database connections
- Creates a backup (unless `--no-backup`)
- Drops all tables and custom types (enums)
- Runs migrations to recreate the schema
- Optionally seeds sample data (`--seed`)

Flags:

- `--confirm` / `-y` - Skip confirmation prompt
- `--no-backup` - Skip backup creation
- `--seed` - Seed sample data after reset
- `--database=rest` - Reset only the REST database (Neon)
- `--database=vector` - Reset only the Vector database (Supabase)
- `--force` - Allow in CI environment

Safety Features:

- Interactive confirmation (unless `--confirm`)
- Automatic backups to `.revealui/backups/` (unless `--no-backup`)
- Transaction support (rollback on error)
- CI protection (requires `--force` in CI)
- Keeps the last 5 backups (auto-cleanup)
Usage:

```bash
# Interactive reset (safest)
pnpm db:reset

# Quick reset with backup
pnpm db:reset --confirm

# Reset without backup (faster)
pnpm db:reset --confirm --no-backup

# Reset and seed sample data
pnpm db:reset --seed

# Reset only REST database
pnpm db:reset --database=rest
```

Backup location: `.revealui/backups/db-backup-2026-01-30T12-34-56.json`
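Keeping only the last five backups could look roughly like this. This is a sketch of the documented retention rule, not the actual cleanup code in `scripts/commands/database/backup.ts`:

```typescript
// Sketch: given backup filenames like db-backup-2026-01-30T12-34-56.json,
// return the files to delete so only the newest `keep` remain.
// The timestamp format sorts lexicographically, so a plain sort suffices.
function backupsToDelete(filenames: string[], keep = 5): string[] {
  return [...filenames].sort().reverse().slice(keep);
}
```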
Purpose: Populate database with sample content for development.
What it does:
- Seeds sample users, posts, pages, media, etc.
- Creates test data for all collections
- Uses realistic sample content
Usage:

```bash
# Seed database
pnpm db:seed

# Often combined with reset
pnpm db:reset --seed
```

Purpose: Create JSON backup of all database tables.
What it does:
- Connects to the database
- Exports all tables to JSON format
- Saves to the `.revealui/backups/` directory
- Cleans up old backups (keeps the last 5)

Usage:

```bash
# Create backup
pnpm db:backup

# Backups saved to:
# .revealui/backups/db-backup-<timestamp>.json
```

Purpose: Restore database from backup file.

Usage:

```bash
# Restore from specific backup
pnpm db:restore .revealui/backups/db-backup-2026-01-30T12-34-56.json
```

Purpose: Check database connection and current state.
What it does:
- Tests database connection
- Shows PostgreSQL version
- Lists table count
- Shows database provider (Neon, Supabase, etc.)
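Provider detection of this kind can be sketched from the connection-string host. The hostname patterns below are illustrative assumptions; the real status script may use different heuristics:

```typescript
// Illustrative provider detection based on hostname patterns
// (the suffixes checked here are assumptions, not the script's actual logic).
type Provider = "Neon" | "Supabase" | "Vercel" | "Generic";

function detectProvider(connectionString: string): Provider {
  const host = new URL(connectionString).hostname;
  if (host.endsWith(".neon.tech")) return "Neon";
  if (host.endsWith(".supabase.co")) return "Supabase";
  if (host.endsWith(".vercel-storage.com")) return "Vercel";
  return "Generic";
}
```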
Usage:

```bash
# Quick health check
pnpm db:status
```

Primary (at least one required):

- `DATABASE_URL` - PostgreSQL connection string
- `POSTGRES_URL` - Alternative name (Neon convention)
- `SUPABASE_DATABASE_URI` - Supabase-specific

Format:

```bash
# Standard PostgreSQL
postgresql://user:password@host:port/database

# Neon (serverless)
postgresql://user:password@host.neon.tech/database?sslmode=require

# Supabase (connection pooling)
postgresql://postgres:password@db.project.supabase.co:5432/postgres
```

Optional:

- `NODE_ENV` - Affects logging level (`development`, `production`, `test`)
- `CI` - Skips interactive prompts when set to `true`
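A quick way to sanity-check a connection string before handing it to the scripts is Node's WHATWG URL parser. This validator is a sketch, not part of the RevealUI codebase:

```typescript
// Sketch: validate the basic shape of a PostgreSQL connection string.
function isLikelyPostgresUrl(value: string): boolean {
  try {
    const url = new URL(value);
    return (
      (url.protocol === "postgresql:" || url.protocol === "postgres:") &&
      url.hostname.length > 0 &&
      url.pathname.length > 1 // a database name must follow the "/"
    );
  } catch {
    return false; // not parseable as a URL at all
  }
}
```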
Server control (preferred RevealUI CLI):
```bash
revealui db init     # Initialize PostgreSQL data directory
revealui db start    # Start PostgreSQL server
revealui db stop     # Stop PostgreSQL server
revealui db status   # Check if the server is running
```

Application-level (use pnpm scripts):

```bash
pnpm db:init      # Verify connection, check tables
pnpm db:migrate   # Run schema migrations
pnpm db:seed      # Seed sample data
pnpm db:reset     # Reset database
```

Typical workflow:

```bash
# 1. Inspect the local bootstrap plan
revealui dev up --dry-run

# 2. Start PostgreSQL if needed
revealui db start

# 3. Initialize database (pnpm script)
pnpm db:init

# 4. Run migrations (pnpm script)
pnpm db:migrate

# 5. Seed data (pnpm script)
pnpm db:seed
```

Server control (Docker Compose):
```bash
# PostgreSQL runs automatically in container
# No manual start/stop needed

# If needed:
docker-compose restart db
docker-compose logs db
```

Application-level (use pnpm scripts):

```bash
pnpm db:init      # Same as Nix
pnpm db:migrate   # Same as Nix
pnpm db:seed      # Same as Nix
pnpm db:reset     # Same as Nix
```

Connection string:

```bash
# In Dev Container, the database host is "db" (service name)
DATABASE_URL=postgresql://postgres@db:5432/revealui
```

Server control (OS-specific):
```bash
# macOS (Homebrew)
brew services start postgresql@16
brew services stop postgresql@16

# Linux (systemd)
sudo systemctl start postgresql
sudo systemctl stop postgresql

# Windows (manual)
pg_ctl -D "C:\Program Files\PostgreSQL\16\data" start
pg_ctl -D "C:\Program Files\PostgreSQL\16\data" stop
```

Application-level (use pnpm scripts):

```bash
pnpm db:init      # Same as other environments
pnpm db:migrate   # Same as other environments
pnpm db:seed      # Same as other environments
pnpm db:reset     # Same as other environments
```

| Environment | PostgreSQL Data Directory | Gitignored? |
|---|---|---|
| Nix | `.pgdata/` | ✅ Yes |
| Dev Containers | Docker volume (unnamed) | N/A (container) |
| Manual | System-dependent | N/A (outside project) |
Backups (all environments):
- Directory: `.revealui/backups/`
- Format: `db-backup-<timestamp>.json`
- Retention: last 5 backups kept
- Gitignored: ✅ Yes (via `.revealui/`)
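The `db-backup-<timestamp>.json` naming can be derived from an ISO timestamp with the colons replaced. This is a sketch of the observed naming pattern; the actual backup script may format it differently:

```typescript
// Sketch: build a backup filename like db-backup-2026-01-30T12-34-56.json
function backupFilename(date: Date): string {
  // "2026-01-30T12:34:56.000Z" -> "2026-01-30T12-34-56"
  const stamp = date.toISOString().slice(0, 19).replace(/:/g, "-");
  return `db-backup-${stamp}.json`;
}
```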
```bash
# 1. Inspect the local bootstrap plan
revealui dev up --dry-run

# 2. Ensure PostgreSQL is running
# (RevealUI CLI for Nix/local, automatic in Docker, OS-specific for Manual)
revealui db start

# 3. Initialize database
pnpm db:init

# 4. Run migrations
pnpm db:migrate

# 5. (Optional) Seed sample data
pnpm db:seed

# 6. Start development
pnpm dev
```

```bash
# Interactive (safest)
pnpm db:reset

# Quick reset (no prompts, with backup)
pnpm db:reset --confirm

# Full reset with sample data
pnpm db:reset --confirm --seed
```

```bash
# Reset database to known state
pnpm db:reset --confirm --no-backup

# Run tests
pnpm test
```

```bash
# Backup production-like data
pnpm db:backup

# Verify migrations work
pnpm db:migrate

# Test with production-like data
pnpm dev
```

Solution:
```bash
# Check environment variables
echo $DATABASE_URL
echo $POSTGRES_URL

# Create .env.development.local if missing
cp .env.template .env.development.local

# Edit with your database URL
code .env.development.local
```

Nix:

```bash
# Check if PostgreSQL is running
revealui db status

# Start if not running
revealui db start

# Check logs
cat .pgdata/logfile
```

Dev Containers:

```bash
# Check database container
docker-compose ps

# View logs
docker-compose logs db

# Restart if needed
docker-compose restart db
```

Manual:

```bash
# Check if PostgreSQL is running
# macOS:
brew services list

# Linux:
sudo systemctl status postgresql

# Windows:
pg_ctl status -D "C:\Program Files\PostgreSQL\16\data"
```

Cause: User doesn't have permissions on the tables

Solution:

```bash
# Reset database with proper permissions
pnpm db:reset --confirm
```

Cause: Migrations already applied

Solution:

```bash
# Check migration status
pnpm db:status

# If needed, reset and migrate
pnpm db:reset --confirm
pnpm db:migrate
```

Solution:

```bash
# Reset database first
pnpm db:reset --confirm --no-backup

# Then restore
pnpm db:restore .revealui/backups/db-backup-<timestamp>.json
```

Use pnpm scripts for consistency across environments
```bash
pnpm db:init      # ✅ Works everywhere
pnpm db:migrate   # ✅ Works everywhere
```

Back up before destructive operations

```bash
pnpm db:backup
pnpm db:reset
```

Test migrations before deployment

```bash
pnpm db:backup
pnpm db:migrate
# Test thoroughly
pnpm db:restore backup.json  # If issues found
```

Use --confirm in scripts

```json
{
  "scripts": {
    "db:fresh": "pnpm db:reset --confirm --seed"
  }
}
```

Don't use environment-specific commands in shared scripts

```bash
# ❌ Bad (environment-specific)
revealui db start && pnpm dev

# ✅ Good (environment-agnostic)
revealui dev up
```

Don't skip backups in production-like environments

```bash
# ❌ Dangerous
pnpm db:reset --no-backup

# ✅ Safer
pnpm db:reset  # Creates backup automatically
```

Don't hardcode connection strings

```typescript
// ❌ Bad
const db = postgres("postgresql://postgres@localhost:5432/revealui");

// ✅ Good
const db = postgres(process.env.DATABASE_URL!);
```

GitHub Actions:
```yaml
- name: Setup database
  run: |
    pnpm db:init
    pnpm db:migrate
  env:
    DATABASE_URL: ${{ secrets.DATABASE_URL }}

- name: Run tests
  run: pnpm test
```

Docker Compose (CI):

```yaml
services:
  db:
    image: pgvector/pgvector:pg16
    environment:
      POSTGRES_DB: revealui_test
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres

  app:
    depends_on:
      - db
    environment:
      DATABASE_URL: postgresql://postgres:postgres@db:5432/revealui_test
    command: |
      pnpm db:init
      pnpm db:migrate
      pnpm test
```

From Devbox → Pure Nix:
```bash
# 1. Export from Devbox
devbox shell
pg_dump -d revealui > backup.sql
exit

# 2. Switch to Nix
rm -rf .devbox/
direnv allow

# 3. Import to Nix
revealui db start
pnpm db:init
psql -d revealui < backup.sql
```

Between Environments:

```bash
# 1. Export from the old environment
pnpm db:backup
# Or: pg_dump -d revealui > backup.sql

# 2. Set up the new environment
# (follow environment-specific setup)

# 3. Import to the new environment
pnpm db:restore backup.json
# Or: psql -d revealui < backup.sql
```

- CI/CD Guide - CI/CD pipeline and deployment
- Architecture - System architecture overview

Last Updated: 2026-01-31 Part of: Development Guide consolidation
Comprehensive guide to database performance optimization for RevealUI.
- Overview
- Query Monitoring
- Indexing Strategy
- N+1 Query Elimination
- Query Caching
- Connection Pooling
- Performance Benchmarks
- Best Practices
- Troubleshooting
Database performance is critical for application speed. This guide covers:
- Query Monitoring - Track slow queries and performance metrics
- Indexes - Strategic indexing for fast lookups
- N+1 Elimination - Optimize relationship queries
- Caching - In-memory Map-based caching for query results
- Connection Pool - Optimized pool configuration
- Query time: < 20ms (p95)
- Connection pool utilization: < 80%
- Cache hit rate: > 80%
- No N+1 queries
- All foreign keys indexed
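A p95 target like the one above is the 95th percentile of observed query durations. Computing it with the nearest-rank method can be sketched as:

```typescript
// Sketch: nearest-rank percentile over a list of query durations (ms).
function percentile(durations: number[], p: number): number {
  if (durations.length === 0) return 0;
  const sorted = [...durations].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, rank)];
}
```

This mirrors what a query monitor reports as `p95`/`p99`: with 100 samples, p95 is the 95th-smallest duration.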
```typescript
import {
  monitorQuery,
  logSlowQuery,
} from "@revealui/core/monitoring/query-monitor";

// Wrap queries with monitoring
const users = await monitorQuery("getUsersWithPosts", async () => {
  return db.query("SELECT * FROM users");
});

// Log slow query manually
logSlowQuery(
  "SELECT * FROM posts WHERE author_id = $1",
  150, // 150ms duration
  ["user-123"],
);
```

```typescript
import {
  getQueryStats,
  getQueryReport,
} from "@revealui/core/monitoring/query-monitor";

// Get summary stats
const stats = getQueryStats();
console.log({
  totalQueries: stats.totalQueries,
  avgDuration: stats.avgDuration,
  p95: stats.p95,
  slowQueries: stats.slowQueries,
});

// Get full report
const report = getQueryReport();
console.log(report);
```

```typescript
import { getQueryPercentiles } from "@revealui/core/monitoring/query-monitor";

const percentiles = getQueryPercentiles();
console.log({
  p50: percentiles.p50, // Median
  p95: percentiles.p95, // 95th percentile
  p99: percentiles.p99, // 99th percentile
});
```

```sql
-- Find foreign keys without indexes
SELECT
  tc.table_name,
  kcu.column_name
FROM information_schema.table_constraints tc
JOIN information_schema.key_column_usage kcu
  ON tc.constraint_name = kcu.constraint_name
WHERE tc.constraint_type = 'FOREIGN KEY'
  AND NOT EXISTS (
    SELECT 1 FROM pg_indexes
    WHERE tablename = tc.table_name
      AND indexdef LIKE '%' || kcu.column_name || '%'
  );
```

```sql
-- User email lookup (authentication)
CREATE INDEX CONCURRENTLY idx_users_email ON users(email);

-- Post slug lookup (public URLs)
CREATE INDEX CONCURRENTLY idx_posts_slug ON posts(slug);

-- Published posts sorted by date
CREATE INDEX CONCURRENTLY idx_posts_published_at
  ON posts(published_at DESC)
  WHERE published_at IS NOT NULL;

-- Composite index for common query
CREATE INDEX CONCURRENTLY idx_posts_author_status
  ON posts(author_id, status);
```

```sql
-- Index only published posts
CREATE INDEX CONCURRENTLY idx_posts_published
  ON posts(published_at DESC)
  WHERE status = 'published';

-- Index only verified users
CREATE INDEX CONCURRENTLY idx_users_verified
  ON users(email_verified_at)
  WHERE email_verified_at IS NOT NULL;
```

```sql
-- GIN index for full-text search
CREATE INDEX CONCURRENTLY idx_posts_search
  ON posts USING gin(
    to_tsvector('english', coalesce(title, '') || ' ' || coalesce(content, ''))
  );

-- Query using full-text search
SELECT * FROM posts
WHERE to_tsvector('english', coalesce(title, '') || ' ' || coalesce(content, ''))
  @@ plainto_tsquery('english', 'optimization')
ORDER BY ts_rank(
  to_tsvector('english', coalesce(title, '') || ' ' || coalesce(content, '')),
  plainto_tsquery('english', 'optimization')
) DESC;
```

```sql
-- Check index usage statistics
SELECT
  schemaname,
  tablename,
  indexname,
  idx_scan as scans,
  idx_tup_read as tuples_read,
  idx_tup_fetch as tuples_fetched
FROM pg_stat_user_indexes
ORDER BY idx_scan DESC;

-- Find unused indexes
SELECT
  schemaname,
  tablename,
  indexname,
  idx_scan
FROM pg_stat_user_indexes
WHERE idx_scan = 0
  AND indexname NOT LIKE 'pg_toast%'
ORDER BY tablename;
```

```typescript
// ❌ BAD: N+1 query pattern
const posts = await db.query("SELECT * FROM posts");
for (const post of posts.rows) {
  // Additional query for each post
  const author = await db.query("SELECT * FROM users WHERE id = $1", [
    post.author_id,
  ]);
  post.author = author.rows[0];
}
```

```typescript
// ✅ GOOD: Single query with JOIN
const posts = await db.query(`
  SELECT
    p.*,
    json_build_object(
      'id', u.id,
      'name', u.name,
      'email', u.email
    ) as author
  FROM posts p
  LEFT JOIN users u ON u.id = p.author_id
`);
```

```typescript
// ✅ GOOD: Batch load in single query
async function getUsersByIds(ids: string[]) {
  return db.query("SELECT * FROM users WHERE id = ANY($1)", [ids]);
}
```

```typescript
// ✅ GOOD: Aggregate related data
const query = `
  SELECT
    u.id,
    u.name,
    COALESCE(
      json_agg(
        json_build_object(
          'id', p.id,
          'title', p.title
        )
        ORDER BY p.published_at DESC
      ) FILTER (WHERE p.id IS NOT NULL),
      '[]'
    ) as posts
  FROM users u
  LEFT JOIN posts p ON p.author_id = u.id
  GROUP BY u.id
`;
```

```typescript
import { cacheQuery } from "@revealui/core/cache/query-cache";

// Cache for 5 minutes
const users = await cacheQuery(
  "users:all",
  () => db.query("SELECT * FROM users"),
  { ttl: 300 },
);
```

```typescript
import { cacheList } from "@revealui/core/cache/query-cache";

const posts = await cacheList(
  "posts",
  { status: "published", limit: 20 },
  () => getPublishedPosts(),
  300,
);
```

```typescript
import { cacheItem } from "@revealui/core/cache/query-cache";

const user = await cacheItem("users", userId, () => getUserById(userId), 300);
```

```typescript
import {
  invalidateCache,
  invalidateCachePattern,
  invalidateResource,
} from "@revealui/core/cache/query-cache";

// Invalidate specific key
await invalidateCache("users:all");

// Invalidate by pattern
await invalidateCachePattern("posts:*");

// Invalidate entire resource
await invalidateResource("users");
```

```typescript
import { cacheSWR } from "@revealui/core/cache/query-cache";

// Return stale data immediately, revalidate in background
const data = await cacheSWR("expensive-query", () => runExpensiveQuery(), {
  ttl: 300, // Fresh for 5 minutes
  staleTime: 60, // Stale data valid for 1 minute
});
```

```typescript
import { Pool } from "pg";

const pool = new Pool({
  max: 20, // Maximum pool size
  min: 5, // Minimum pool size
  idleTimeoutMillis: 30000, // 30 seconds
  connectionTimeoutMillis: 5000, // 5 seconds
  statement_timeout: 10000, // 10 seconds
  query_timeout: 10000, // 10 seconds
});
```

```typescript
import { getPoolStats, checkDatabaseHealth } from "@revealui/db/pool";

// Get pool statistics
const stats = getPoolStats();
console.log({
  totalCount: stats.totalCount,
  idleCount: stats.idleCount,
  utilization: stats.utilization,
});

// Check health
const health = await checkDatabaseHealth();
console.log(health);
```

```typescript
import { warmupPool } from "@revealui/db/pool";

// Pre-warm connections on startup
await warmupPool();
```

```bash
# Run all query benchmarks
pnpm benchmark:queries

# View results
cat benchmark-results.json
```

```typescript
import { benchmarkQuery } from "@/scripts/performance/benchmark-queries";

const result = await benchmarkQuery(
  "My Custom Query",
  () => db.query("SELECT * FROM posts LIMIT 100"),
  100, // iterations
);

console.log({
  avgDuration: result.avgDuration,
  p95: result.p95,
  qps: result.queriesPerSecond,
});
```

```sql
-- ✅ Index foreign keys
CREATE INDEX idx_posts_author_id ON posts(author_id);

-- ✅ Index frequently filtered columns
CREATE INDEX idx_posts_status ON posts(status);

-- ✅ Index sort columns
CREATE INDEX idx_posts_published_at ON posts(published_at DESC);
```

```sql
-- ❌ BAD: Fetches unnecessary data
SELECT * FROM users;

-- ✅ GOOD: Fetch only needed columns
SELECT id, name, email FROM users;
```

```sql
-- Analyze query performance
EXPLAIN ANALYZE
SELECT * FROM posts
WHERE status = 'published'
ORDER BY published_at DESC
LIMIT 20;
```

```typescript
// ❌ BAD: Individual inserts
for (const user of users) {
  await db.query("INSERT INTO users (name, email) VALUES ($1, $2)", [
    user.name,
    user.email,
  ]);
}

// ✅ GOOD: Batch insert
const values = users.map((u, i) => `($${i * 2 + 1}, $${i * 2 + 2})`).join(",");
const params = users.flatMap((u) => [u.name, u.email]);
await db.query(`INSERT INTO users (name, email) VALUES ${values}`, params);
```

```sql
-- Always use LIMIT for large result sets
SELECT * FROM posts
WHERE status = 'published'
ORDER BY published_at DESC
LIMIT 20;
```

```typescript
const client = await pool.connect();
try {
  await client.query("BEGIN");
  await client.query("INSERT INTO users (name) VALUES ($1)", ["Alice"]);
  await client.query("INSERT INTO posts (title, author_id) VALUES ($1, $2)", [
    "Post",
    1,
  ]);
  await client.query("COMMIT");
} catch (error) {
  await client.query("ROLLBACK");
  throw error;
} finally {
  client.release();
}
```

```sql
-- Enable slow query log
ALTER SYSTEM SET log_min_duration_statement = 100; -- Log queries > 100ms
SELECT pg_reload_conf();

-- View slow queries
SELECT * FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 10;
```

```sql
-- Find tables with sequential scans
SELECT
  schemaname,
  tablename,
  seq_scan,
  seq_tup_read,
  idx_scan
FROM pg_stat_user_tables
WHERE seq_scan > 1000
ORDER BY seq_scan DESC;
```

```sql
-- View blocking queries
SELECT
  blocked.pid AS blocked_pid,
  blocking.pid AS blocking_pid,
  blocked.query AS blocked_query,
  blocking.query AS blocking_query
FROM pg_stat_activity blocked
JOIN pg_stat_activity blocking
  ON blocking.pid = ANY(pg_blocking_pids(blocked.pid))
WHERE blocked.wait_event_type = 'Lock';
```

```typescript
// Monitor pool stats
import { getPoolStats } from "@revealui/db/pool";

setInterval(() => {
  const stats = getPoolStats();
  if (stats.utilization > 80) {
    console.warn("High pool utilization:", stats);
  }
  if (stats.waitingCount > 5) {
    console.warn("Many waiting requests:", stats.waitingCount);
  }
}, 60000);
```

- pg_stat_statements - Query performance statistics
- EXPLAIN ANALYZE - Query execution plan
- pgAdmin - Database administration
- PgHero - Performance dashboard
- Grafana - Metrics visualization
Last Updated: 2026-04-26 Version: 0.2 (working draft — pre-1.0 per the suite-wide versioning convention)