This project lets you explore a PostgreSQL database that is running locally (for example inside Docker) by turning natural-language requests into SQL with a model hosted in LM Studio. The React frontend talks to a lightweight Node/Express backend that:
- Connects to your database safely, retrieves table metadata, and executes read-only queries.
- Proxies requests to the LM Studio HTTP API so the browser never has to deal with CORS or API details.
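To make the proxying step concrete, here is a minimal sketch of the request body the backend might send to LM Studio's OpenAI-compatible `/v1/chat/completions` endpoint. The prompt wording, the helper name, and the model id are illustrative assumptions, not the project's actual code.

```javascript
// Hypothetical helper: build a chat-completions payload that gives the
// model the selected table schemas plus the user's question.
function buildChatRequest(model, tableSchemas, question) {
  return {
    model,
    messages: [
      {
        role: 'system',
        content:
          'You translate natural-language questions into PostgreSQL SELECT queries. ' +
          'Use only these tables:\n' + tableSchemas.join('\n'),
      },
      { role: 'user', content: question },
    ],
    temperature: 0, // keep SQL generation deterministic
  };
}

const body = buildChatRequest(
  'qwen2.5-coder-7b-instruct',            // assumed model id
  ['users(id int, name text)'],
  'How many users are there?'
);
console.log(JSON.stringify(body, null, 2));
```

Because the endpoint is OpenAI-compatible, the same payload shape works whether the backend talks to LM Studio or to another provider behind the proxy.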
- Node.js 18 or newer
- A PostgreSQL instance you can reach from the host (e.g. `docker run --name mydb -p 5432:5432 -e POSTGRES_PASSWORD=password postgres:16`)
- Either:
- A Google AI Studio API key (for Gemini models), or
  - LM Studio with the local server enabled (Settings → Developer → Enable local server)
- Install dependencies: `npm install`
- Update `.env.local` if you run the backend on a different host/port. By default it expects the API on `http://localhost:4000`:

  ```
  VITE_API_BASE_URL=http://localhost:4000
  ```

- (Optional) Create a `.env` file for the backend to override defaults such as:

  ```
  SERVER_PORT=4000
  POSTGRES_SCHEMA=public    # change if your tables live in a different schema
  POSTGRES_SSL=false        # set to true when you need SSL (self-signed allowed)
  ALLOWED_ORIGINS=http://localhost:3000
  ```
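As a sketch of how the backend might resolve these settings, every value in `.env` is optional and falls back to the defaults listed above. The shape of this config object is illustrative, not the project's actual code.

```javascript
// Hypothetical config resolution: environment variables override defaults.
const config = {
  serverPort: Number(process.env.SERVER_PORT ?? 4000),
  schema: process.env.POSTGRES_SCHEMA ?? 'public',
  ssl: process.env.POSTGRES_SSL === 'true', // env vars are strings, not booleans
  allowedOrigins: (process.env.ALLOWED_ORIGINS ?? 'http://localhost:3000')
    .split(',')
    .map((origin) => origin.trim()),
};

console.log(config);
```

Note that `POSTGRES_SSL` must be compared as a string: `process.env` values are always strings, so `'false'` would be truthy if used directly.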
Open two terminals:
- Start the backend API: `npm run server`
- Start the Vite dev server: `npm run dev`
The application is available at http://localhost:3000.
- Enter your PostgreSQL connection string (for example `postgresql://postgres:password@localhost:5432/postgres`) and click Connect. The backend validates the credentials and loads table metadata from the configured schema.
- Select one or more tables whose structure should be shared with the model.
- Pick an LLM provider:
  - Google Gemini – paste your API key and optionally fetch the available models (defaults to `gemini-2.0-flash-2.5`).
  - LM Studio – provide the local server endpoint (defaults to `http://localhost:1234/v1`) and load the served models.
- Describe the question you want answered in natural language and click Generate SQL. The backend relays the request to the chosen provider and returns the generated SQL.
- Review the generated SQL and click Execute Query to run it. Only `SELECT`/`WITH` queries are allowed for safety. Query results appear in a table below.
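The read-only check described above can be sketched as follows. This is a minimal illustration of the idea (accept only statements that begin with `SELECT` or `WITH` after stripping comments); the project's actual validation may be stricter.

```javascript
// Hypothetical read-only guard: reject anything that is not a
// SELECT or WITH statement once comments and whitespace are removed.
function isReadOnlyQuery(sql) {
  const stripped = sql
    .replace(/--.*$/gm, '')            // strip line comments
    .replace(/\/\*[\s\S]*?\*\//g, '')  // strip block comments
    .trim();
  return /^(select|with)\b/i.test(stripped);
}

console.log(isReadOnlyQuery('SELECT * FROM users'));                   // true
console.log(isReadOnlyQuery('DROP TABLE users'));                      // false
console.log(isReadOnlyQuery('WITH t AS (SELECT 1) SELECT * FROM t'));  // true
```

A prefix check like this is a useful first gate, but on its own it does not catch every mutation (e.g. a `WITH` query wrapping a data-modifying CTE), which is one reason to also connect with a read-only database role.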
- All API calls stay on your machine; nothing leaves the local network.
- If Gemini returns an empty list, the UI falls back to `gemini-2.0-flash-2.5`; you can still type another model id manually.
- If LM Studio returns an empty list, make sure its local server is running and a model is loaded.
- For production use, add authentication and authorization around the backend endpoints.
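The model-list fallback behaviour described in the notes could be sketched like this; the helper and its signature are assumptions for illustration.

```javascript
// Hypothetical fallback: prefer the user's choice when the provider
// lists it, otherwise take the first listed model, otherwise the default.
const DEFAULT_GEMINI_MODEL = 'gemini-2.0-flash-2.5';

function pickModel(models, preferred) {
  if (preferred && models.includes(preferred)) return preferred;
  return models[0] ?? DEFAULT_GEMINI_MODEL;
}

console.log(pickModel([], undefined)); // empty list -> falls back to the default
```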