
Local PostgreSQL LLM Assistant

This project lets you explore a PostgreSQL database that is running locally (for example inside Docker) by turning natural language requests into SQL with either a Google Gemini model or a model hosted in LM Studio. The React frontend talks to a lightweight Node/Express backend that

  1. Connects to your database safely, retrieves table metadata, and executes read-only queries.
  2. Proxies requests to the LM Studio HTTP API so the browser never has to deal with CORS or API details.
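In outline, the proxying step amounts to relaying an OpenAI-compatible chat-completions payload to LM Studio's local server. A minimal sketch of building that payload (the function name and prompt wording are illustrative, not taken from the project's code):

```javascript
// Illustrative helper: build the chat payload a backend could relay to
// LM Studio's OpenAI-compatible /v1/chat/completions endpoint.
function buildChatPayload(model, schemaDescription, question) {
  return {
    model,
    messages: [
      {
        role: 'system',
        content:
          'You translate natural-language questions into PostgreSQL ' +
          'SELECT queries. Schema:\n' + schemaDescription,
      },
      { role: 'user', content: question },
    ],
    temperature: 0, // deterministic SQL generation
  };
}
```

Because the payload shape matches the OpenAI chat API, the same structure works for any OpenAI-compatible local server.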

Requirements

  • Node.js 18 or newer
  • A PostgreSQL instance you can reach from the host (e.g. docker run --name mydb -p 5432:5432 -e POSTGRES_PASSWORD=password postgres:16)
  • Either:
    • A Google AI Studio API key (for Gemini models), or
    • LM Studio with the local server enabled (Settings → Developer → Enable local server)

Setup

  1. Install dependencies
    npm install
  2. Update .env.local if you run the backend on a different host/port. By default it expects the API on http://localhost:4000:
    VITE_API_BASE_URL=http://localhost:4000
  3. (Optional) Create a .env file for the backend to override defaults such as
    SERVER_PORT=4000
    POSTGRES_SCHEMA=public      # change if your tables live in a different schema
    POSTGRES_SSL=false          # set to true when you need SSL (self-signed allowed)
    ALLOWED_ORIGINS=http://localhost:3000
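A backend reading these variables might resolve them with the defaults shown above. A minimal sketch (the variable names match the .env example; the helper itself is illustrative):

```javascript
// Illustrative helper: resolve backend settings from process.env,
// falling back to the defaults listed in the .env example.
function resolveConfig(env) {
  return {
    port: Number(env.SERVER_PORT ?? 4000),
    schema: env.POSTGRES_SCHEMA ?? 'public',
    // POSTGRES_SSL=true enables SSL but tolerates self-signed certificates.
    ssl: env.POSTGRES_SSL === 'true' ? { rejectUnauthorized: false } : false,
    allowedOrigins: (env.ALLOWED_ORIGINS ?? 'http://localhost:3000').split(','),
  };
}
```

Passing `process.env` through a single function like this keeps every default in one place.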

Running locally

Open two terminals:

  1. Start the backend API
    npm run server
  2. Start the Vite dev server
    npm run dev

The application is available at http://localhost:3000.

Usage

  1. Enter your PostgreSQL connection string (for example postgresql://postgres:password@localhost:5432/postgres) and click Connect. The backend validates the credentials and loads table metadata from the configured schema.
  2. Select one or more tables whose structure should be shared with the model.
  3. Pick an LLM provider:
    • Google Gemini – paste your API key, optionally fetch the available models (defaults to gemini-2.0-flash-2.5).
    • LM Studio – provide the local server endpoint (defaults to http://localhost:1234/v1) and load the served models.
  4. Describe the question you want answered in natural language and click Generate SQL. The backend relays the request to the chosen provider and returns the generated SQL.
  5. Review the generated SQL and click Execute Query to run it. Only SELECT/WITH queries are allowed for safety. Query results are shown in a table below.
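The SELECT/WITH restriction can be enforced with a simple check before the query ever reaches the database. A minimal sketch (the function name is assumed, not taken from the project):

```javascript
// Illustrative read-only guard: accept only statements that begin with
// SELECT or WITH, and reject multi-statement input.
function isReadOnlyQuery(sql) {
  const trimmed = sql.trim().replace(/;\s*$/, ''); // allow one trailing semicolon
  if (trimmed.includes(';')) return false;         // no stacked statements
  return /^(select|with)\b/i.test(trimmed);
}
```

A prefix check like this is a useful first gate, but running the queries under a database role with read-only privileges is the stronger guarantee.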

Notes

  • When using LM Studio, all API calls stay on your machine. When using Gemini, prompts (including the selected table schemas) are sent to Google's API.
  • If Gemini returns an empty list, the UI falls back to gemini-2.0-flash-2.5; you can still type another model id manually.
  • If LM Studio returns an empty list, make sure its local server is running and a model is loaded.
  • For production use you should add authentication/authorization around the backend endpoints.
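As a starting point for that hardening, here is a hedged sketch of a shared-token middleware for the Express backend (the token source and Bearer scheme are assumptions, not part of the project):

```javascript
// Illustrative Express-style middleware: require a shared bearer token
// (e.g. read from an API_TOKEN environment variable) on every request.
function requireToken(expectedToken) {
  return function (req, res, next) {
    const header = req.headers['authorization'] || '';
    const token = header.startsWith('Bearer ') ? header.slice(7) : null;
    if (!expectedToken || token !== expectedToken) {
      res.status(401).json({ error: 'unauthorized' });
      return;
    }
    next();
  };
}
```

Mounted with `app.use(requireToken(process.env.API_TOKEN))`, this rejects unauthenticated requests before they reach the database or LLM routes; a real deployment would likely want per-user auth instead of one shared secret.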
