This guide covers running, debugging, and iterating on DiscoveryOS locally.
```bash
npm run dev
```

This starts the Next.js development server with hot module replacement (HMR). The application is available at http://localhost:3000.
- Hot Reload — Changes to components and pages are reflected immediately
- API Route Reload — Server-side API changes take effect on the next request
- Error Overlay — Build errors and runtime errors are displayed in the browser
Three shell scripts are provided for managing the development server in the background:
```bash
bash dev-start.sh
```

This script checks for port conflicts, installs dependencies if needed, runs database migrations, and starts the dev server in the background. Logs are written to logs/dev.log and the server PID is tracked for later use.
```bash
bash dev-status.sh
```

Reports whether the dev server is currently running.
```bash
bash dev-stop.sh
```

Stops the dev server using the stored PID.
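The stop script's PID-file approach can be sketched as follows. This is an illustrative reconstruction, not the actual script: the PID file path (logs/dev.pid) and the simulated server process are assumptions for demonstration.

```bash
#!/usr/bin/env bash
# Hypothetical sketch of the PID-file pattern a stop script might use.
# The path logs/dev.pid is illustrative, not taken from the real script.
PID_FILE="logs/dev.pid"
mkdir -p logs

# Simulate a running dev server with a long-lived background process.
sleep 60 &
echo $! > "$PID_FILE"

# Stop logic: read the stored PID, signal the process, clean up.
pid=$(cat "$PID_FILE")
if kill -0 "$pid" 2>/dev/null; then
  kill "$pid"
  rm -f "$PID_FILE"
  echo "Stopped dev server (PID $pid)"
else
  echo "Stale PID file; process $pid is not running"
  rm -f "$PID_FILE"
fi
```

Tracking the PID in a file is what lets dev-status.sh and dev-stop.sh find a server that was started in a different terminal session.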
```bash
npx tsc --noEmit
```

Runs the TypeScript compiler in check-only mode, reporting type errors without emitting output files.

```bash
npm run lint
```

Uses ESLint with the project's configured rules.
```bash
npm run build
```

Creates an optimized production build. This also performs type checking.
```bash
# Run migrations (apply schema changes)
npx drizzle-kit migrate

# Generate new migration files (after modifying schema.ts)
npx drizzle-kit generate

# Explore the database interactively
npx drizzle-kit studio
```

To debug server-side code with breakpoints:

- Start the dev server with the Node.js inspector:

  ```bash
  NODE_OPTIONS='--inspect' npm run dev
  ```

- Open chrome://inspect in Chrome and connect to the Node.js process
- Set breakpoints in API routes and server-side code
Use the browser's built-in developer tools:
- React DevTools — Inspect component hierarchy and state
- Network Tab — Monitor API requests and responses
- Console — View logs and errors
You can develop most features without configuring AI API keys:
- Workspace management, file browser, and UI components work without any API key
- AI chat and note generation features will show a "not configured" message
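A quick way to check which mode you are in before starting the server is to look at the environment. This helper is illustrative only (it is not part of the repository); it assumes the OPENAI_API_KEY variable shown in the configuration examples in this guide.

```bash
# Report whether AI features will be available, based on the environment.
# Workspace management and the file browser work either way.
ai_status() {
  if [ -n "${OPENAI_API_KEY:-}" ]; then
    echo "AI features enabled"
  else
    echo "AI features not configured"
  fi
}

ai_status
```

This mirrors the app's graceful degradation: only AI chat and note generation depend on the key being set.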
For testing workspace management with multiple roots:

```bash
WORKSPACE_ROOTS=/tmp/workspace-a,/tmp/workspace-b
```

If you run a local LLM (e.g., via Ollama or LM Studio) with an OpenAI-compatible API:
```bash
OPENAI_API_KEY=dummy-key
OPENAI_BASE_URL=http://localhost:11434/v1
```
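For the multi-root setup, you will typically want the directories to exist before pointing the server at them (whether DiscoveryOS creates missing roots itself is not specified here). A minimal sketch using the example paths from above:

```bash
# Create the example workspace roots, then export the setting for the
# dev server. Paths are the illustrative values from this guide.
WORKSPACE_ROOTS=/tmp/workspace-a,/tmp/workspace-b

# Split the comma-separated list and create each directory.
IFS=',' read -ra roots <<< "$WORKSPACE_ROOTS"
for root in "${roots[@]}"; do
  mkdir -p "$root"
done

export WORKSPACE_ROOTS
```

Splitting on commas with IFS keeps the snippet in plain bash with no external dependencies.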