```bash
# Option A: Docker (recommended)
docker run -d --name factorylm-postgres \
  -e POSTGRES_USER=factorylm \
  -e POSTGRES_PASSWORD=localdev \
  -e POSTGRES_DB=matrix \
  -p 5432:5432 \
  postgres:16

# Option B: Use Neon (serverless, free tier)
# Set DATABASE_URL env var to your Neon connection string

# Option C: Skip if you're only working on core/, plc-modbus, or cosmos
```
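Whichever option you pick, a quick stdlib-only sanity check can confirm your connection string is shaped correctly before any service tries to use it. This is a sketch, not part of the repo: the default URL below assumes the Docker credentials above, and the check only parses the string (it does not open a connection, so no driver is needed).

```python
import os
from urllib.parse import urlparse

# Default matches the Docker credentials above; Option B users should
# export DATABASE_URL with their Neon connection string instead.
url = os.environ.get(
    "DATABASE_URL",
    "postgresql://factorylm:localdev@localhost:5432/matrix",
)

parts = urlparse(url)
assert parts.scheme.startswith("postgres"), f"unexpected scheme: {parts.scheme}"
print(f"host={parts.hostname} port={parts.port} "
      f"db={parts.path.lstrip('/')} user={parts.username}")
```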
### Step 4: Run services

| Service | Command | Port | Notes |
|---------|---------|------|-------|
| PLC Modbus API | `cd services/plc-modbus && uvicorn backend.main:app --reload` | 8000 | Use `PLC_USE_MOCK=true` for simulator |
| My-Ralph API | `cd my-ralph && python -m uvicorn api.main:app --reload` | 8000 | Change port if running alongside PLC API |
| PLC Copilot bot | `cd services/plc-copilot && python photo_to_cmms_bot.py` | | |
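Since both APIs default to port 8000, you need to move one of them before running them side by side. A small stdlib helper (not part of the repo, just a convenience sketch) can find a free port to pass to uvicorn's `--port` flag:

```python
import socket

def port_is_free(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if nothing is currently listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        # connect_ex returns 0 on success, i.e. something answered -> port taken
        return s.connect_ex((host, port)) != 0

# Try 8000 first, then fall back so the second service doesn't collide.
port = next(p for p in (8000, 8001, 8002) if port_is_free(p))
print(f"start the second uvicorn with --port {port}")
```

For example, if the PLC Modbus API already holds 8000, this prints `--port 8001` and you would run `python -m uvicorn api.main:app --reload --port 8001` for My-Ralph.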
```bash
# Start PLC Modbus API with mock PLC
cd services/plc-modbus
PLC_USE_MOCK=true uvicorn backend.main:app --reload --port 8000

# The mock PLC simulates a Micro 820 with:
# - Coils (digital outputs)
# - Holding registers (analog values)
# - Input registers (sensor readings)
```
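To make those three Modbus data types concrete, here is a toy register map in the same shape. This is illustrative only: the repo's real mock lives in `services/plc-modbus`, and the names, addresses, and values below are assumptions, not its actual implementation.

```python
# Toy stand-in for a Micro 820-style register map (NOT the repo's mock).
mock_plc = {
    "coils": {0: False, 1: True},             # digital outputs: read/write bits
    "holding_registers": {0: 1500, 1: 2750},  # analog values, e.g. setpoints
    "input_registers": {0: 213, 1: 987},      # read-only sensor readings
}

def read_holding_register(addr: int) -> int:
    # Corresponds to Modbus function code 0x03 (Read Holding Registers).
    return mock_plc["holding_registers"][addr]

print(read_holding_register(0))  # → 1500
```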
### Step 7: Secrets via Doppler (optional)
```bash
# If you have Doppler set up:
doppler run --project factorylm-core --config dev -- pytest

# Otherwise, set env vars directly:

# Windows PowerShell:
$env:GROQ_API_KEY = "your-key"
$env:TELEGRAM_BOT_TOKEN = "your-token"

# Linux/macOS:
export GROQ_API_KEY="your-key"
export TELEGRAM_BOT_TOKEN="your-token"
```
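If you skip Doppler, a missing key usually surfaces as a confusing failure deep inside a service. A fail-fast check at startup gives a clearer message; this helper is a hypothetical sketch, not something the repo ships:

```python
import os

# Hypothetical startup check -- names match the vars exported above.
REQUIRED = ("GROQ_API_KEY", "TELEGRAM_BOT_TOKEN")

def check_env(env=os.environ) -> bool:
    missing = [name for name in REQUIRED if not env.get(name)]
    if missing:
        raise SystemExit(
            f"Missing env vars: {', '.join(missing)} "
            "(export them or wrap the command in `doppler run`)"
        )
    return True

# Passes with dummy values; with os.environ it enforces your real setup.
check_env({"GROQ_API_KEY": "x", "TELEGRAM_BOT_TOKEN": "y"})
```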
### Step 8: Ollama (local LLM, optional)
```bash
# Install Ollama: https://ollama.ai
# Pull a small model:
ollama pull qwen2.5:0.5b

# FactoryLM core can use it:
LLM_PROVIDER=flm LLM_API_KEY=unused python -c "from factorylm.config import get_config; print(get_config())"
```
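Ollama also exposes an HTTP API on `localhost:11434`, which is handy for checking the model directly, independent of FactoryLM's config layer. The sketch below only builds the request with the stdlib (it does not send it, so it works even before `ollama serve` is running); the prompt text is made up for illustration.

```python
import json
from urllib import request

# Ollama's /api/generate endpoint takes a JSON body with model/prompt/stream.
payload = json.dumps({
    "model": "qwen2.5:0.5b",              # the model pulled above
    "prompt": "Say hello in five words.", # arbitrary example prompt
    "stream": False,                      # return one JSON object, not a stream
}).encode()

req = request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)
print(req.full_url, len(payload), "bytes")
# To actually send it once Ollama is running:
#   body = json.load(request.urlopen(req))  # generated text is in body["response"]
```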