Three service folders aligned to the PRD/TD:
- `frontend/` – React + TypeScript SPA with chat streaming UI, KB switcher, and document upload.
- `app-service/` – Go REST + SSE gateway (auth, sessions, docs, streaming to AI service).
- `ai-service/` – FastAPI stub handling `/internal/ingest` and `/internal/rag/query/stream`.
- Python AI Service
cd ai-service
python -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt
APP_PORT=9000 uvicorn main:app --host 0.0.0.0 --port 9000 --reload
- Go App Service
cd app-service
APP_PORT=8080 \
AI_SERVICE_URL=http://localhost:9000 \
LOCAL_STORAGE_PATH=./storage \
SERVICE_TOKEN=local-demo \
DB_DSN="postgres://aurora:aurora@localhost:5432/auroramind?sslmode=disable" \
go run main.go
Note: the Go service now stores absolute `storage_uri` paths and connects to Postgres for sessions/docs.
- Frontend
cd frontend
npm install
VITE_APP_API_BASE=http://localhost:8080 npm run dev -- --host --port 5173
Open http://localhost:5173, log in with any email/password, create a session, upload a doc, and send a chat. Go re-streams tokens from the Python stream as SSE.
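As an illustration of what the re-streaming step involves, here is a minimal Python sketch of parsing `data:` lines out of a raw SSE chunk. The framing is the standard SSE wire format; the exact event names the Go gateway emits are not specified in this README, so the sketch only extracts payloads:

```python
def parse_sse_chunk(chunk: str) -> list[str]:
    """Extract the payload of each `data:` line from a raw SSE chunk.

    SSE events are newline-delimited and separated by a blank line.
    This sketch ignores `event:`/`id:`/`retry:` fields and comment
    lines (those starting with ':').
    """
    tokens = []
    for line in chunk.splitlines():
        if line.startswith("data:"):
            value = line[len("data:"):]
            # Per the SSE spec, a single leading space after the
            # colon is stripped from the value.
            if value.startswith(" "):
                value = value[1:]
            tokens.append(value)
    return tokens
```

For example, `parse_sse_chunk("data: Hello\n\ndata: world\n\n")` returns `["Hello", "world"]`.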
The project defaults to Podman. Ensure your Podman machine is started:
podman machine init # if first time
podman machine start
make compose-up
To use Docker instead:
make compose-up ENGINE=docker
- Frontend: http://localhost:4173 (calls Go on :8080)
- Go App Service: http://localhost:8080
- Python AI Service: http://localhost:9000
- Postgres: localhost:5432 (user/pass/db = aurora/aurora/auroramind)
`.env.example` includes the main knobs (copy it to `.env` and adjust as needed).
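For reference, a plausible `.env` for local development, built from the variables mentioned in this README. Values not shown elsewhere in this document (the model and index names) are illustrative placeholders, not project defaults:

```shell
# Go app service
APP_PORT=8080
AI_SERVICE_URL=http://localhost:9000
SERVICE_TOKEN=local-demo
LOCAL_STORAGE_PATH=./storage
DB_DSN=postgres://aurora:aurora@localhost:5432/auroramind?sslmode=disable

# Python AI service
OPENAI_API_KEY=sk-...           # required for the LangChain path
OPENAI_CHAT_MODEL=gpt-4o-mini   # placeholder value
PINECONE_INDEX_NAME=auroramind  # placeholder value

# Frontend
VITE_APP_API_BASE=http://localhost:8080
```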
- `make ai-setup` – create venv + install deps for `ai-service`
- `make ai-run` – start FastAPI (needs `OPENAI_API_KEY` set if you want the LangChain path)
- `make go-run` – run Go app service pointing at the local AI service
- `make frontend-install` / `make frontend-dev` – install deps, then run the Vite dev server
- `make compose-up` / `make compose-down` – full stack via compose (Podman by default, Docker with `ENGINE=docker`)
Frontend tip: Enter sends the message; Cmd/Ctrl+Enter inserts a newline in the chat box.
- Frontend → Go
  - `POST /v1/auth/login`
  - `POST /v1/sessions` and `GET /v1/sessions`
  - `POST /v1/sessions/{id}/messages/stream` (SSE)
  - `POST /v1/kb/{id}/documents` (multipart upload)
  - `GET /v1/kb/{id}/documents`
- Go → Python
  - `POST /internal/ingest` after uploads
  - `POST /internal/rag/query/stream` for streaming completions
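Internal calls are authenticated with the shared `SERVICE_TOKEN`. As a hedged sketch, here is how a client of the ingest endpoint might assemble its request; the bearer-header scheme and the JSON field names (`kb_id`, `storage_uri`) are assumptions for illustration, not confirmed by this README:

```python
def build_ingest_request(base_url: str, service_token: str,
                         kb_id: str, storage_uri: str) -> dict:
    """Assemble the pieces of a POST /internal/ingest call.

    Returns a plain dict so the caller can hand it to any HTTP
    client. The Authorization scheme and JSON field names are
    illustrative assumptions.
    """
    return {
        "method": "POST",
        "url": f"{base_url}/internal/ingest",
        "headers": {
            "Authorization": f"Bearer {service_token}",
            "Content-Type": "application/json",
        },
        "json": {"kb_id": kb_id, "storage_uri": storage_uri},
    }
```

A caller could pass the resulting dict straight to an HTTP client, e.g. `requests.request(**req)`.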
- Go: `APP_PORT`, `AI_SERVICE_URL`, `SERVICE_TOKEN`, `LOCAL_STORAGE_PATH`
- Python: `APP_PORT`, `SERVICE_TOKEN`, `OPENAI_CHAT_MODEL`, `PINECONE_INDEX_NAME`
- Frontend: `VITE_APP_API_BASE`
- Harden auth with real JWT signing/verification.
- Add ingestion status polling and chunk provenance display in the UI.
- Deploy to production (e.g. Kubernetes/Fly.io).