- OS: Linux (Ubuntu 20.04+ recommended)
- RAM: Minimum 8GB, Recommended 16GB+
- Storage: Minimum 20GB free space
- CPU: x86_64 architecture
- Network: Internet access for initial setup
- Docker: Version 24.0+ with Docker Compose plugin
- Git: For cloning the repository
- curl: For health checks
- Ollama: Local LLM engine (install separately)
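The prerequisites above can be checked by hand before running the installer. The following is a minimal sketch; the tool list and the 20 GB disk-space threshold mirror the requirements listed here, so adjust them if yours differ:

```shell
#!/usr/bin/env bash
# Quick manual check of the prerequisites listed above.
# Thresholds are taken from this guide; adjust to your own requirements.

ok=0; fail=0
check() {  # check <label> <command ...>
  if "${@:2}" >/dev/null 2>&1; then
    echo "OK   $1"; ok=$((ok + 1))
  else
    echo "MISS $1"; fail=$((fail + 1))
  fi
}

check "git"            command -v git
check "curl"           command -v curl
check "docker"         command -v docker
check "compose plugin" docker compose version
check "ollama"         command -v ollama

# At least 20 GB free in the current directory
free_kb=$(df -Pk . | awk 'NR==2 {print $4}')
[ "$free_kb" -ge $((20 * 1024 * 1024)) ] \
  && echo "OK   disk space" || echo "MISS disk space (<20GB free)"

echo "$ok found, $fail missing"
```

Anything reported as `MISS` should be installed before proceeding.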
git clone https://github.com/FlowTech-Lab/FlowTech-AI.git
cd FlowTech-AI
chmod +x init.sh
sudo ./init.sh
The init.sh script will:
- ✅ Verify system prerequisites
- ✅ Check available disk space (minimum 2GB)
- ✅ Download all Docker images
- ✅ Create data directories with proper permissions
- ✅ Configure environment variables
- ✅ Start all services in correct order
- ✅ Verify service health
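If you want to script the final "verify service health" step yourself, a simple polling loop works. This is a sketch: the port list mirrors the defaults used later in this guide, and the retry count is an assumption:

```shell
# Poll each published port a few times and report its status.
# Port list mirrors the defaults in this guide; retry count is an assumption.
checked=0
for port in 8081 5678 8082 3300 6333; do
  checked=$((checked + 1))
  for attempt in 1 2 3; do
    if curl -sf --max-time 2 "http://localhost:$port" >/dev/null; then
      echo "OK   port $port"
      continue 2   # next port
    fi
    sleep 1
  done
  echo "DOWN port $port"
done
```

Any port still reported `DOWN` after the stack has settled warrants a look at that service's logs.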
# Check all services are running
docker compose ps
# Test service endpoints
curl -s http://localhost:8081 # OpenWebUI
curl -s http://localhost:5678 # n8n
curl -s http://localhost:8082 # SearxNG
curl -s http://localhost:3300 # Langfuse
curl -s http://localhost:6333 # Qdrant
# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh
# Start Ollama service
ollama serve
# Pull recommended models (in another terminal)
ollama pull qwen3:4b # chat model for low VRAM
ollama pull qwen3:8b # chat model for 6GB+ VRAM
ollama pull bge-m3:567m # embedding model
The init.sh script creates a .env file with default values. Key variables:
# Database
POSTGRES_USER=n8n
POSTGRES_PASSWORD=<auto-generated>
POSTGRES_DB=n8n
# Services
OPENWEBUI_PORT=8081
N8N_PORT=5678
SEARXNG_PORT=8082
LANGFUSE_PORT=3300
# Langfuse
LANGFUSE_INIT_USER_EMAIL=admin@flowtech.local
LANGFUSE_INIT_USER_PASSWORD=<auto-generated>
OpenWebUI Setup:
- Access: http://localhost:8081
- Go to Admin Panel > Settings > Web Search
- Configure the web search query URL: http://searxng:8080/search
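Before wiring SearxNG into OpenWebUI, you can verify it from the host. Note that containers reach it at searxng:8080 while the host uses the published port 8082; JSON output also has to be enabled in SearxNG's settings (an assumption about your configuration):

```shell
# Query SearxNG from the host (published on 8082; containers use
# http://searxng:8080/search). JSON output works only if `json` is listed
# under `formats:` in SearxNG's settings.yml -- an assumption here.
searx_status=down
if resp=$(curl -sfG --max-time 5 "http://localhost:8082/search" \
            --data-urlencode "q=docker" \
            --data-urlencode "format=json"); then
  searx_status=up
  echo "$resp" | head -c 200; echo
else
  echo "SearxNG not reachable, or JSON format disabled"
fi
```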
Langfuse Setup:
- Access: http://localhost:3300
- Log in with the credentials from .env
- Create an organization and project
- Generate API keys in Project > Settings > API Keys
n8n Workflow Setup:
- Access: http://localhost:5678
- Import the main workflow: SRC/N8N-openwebui-workflow.json
- Import the function: SRC/function-N8N Pipe.json
- Configure the webhook URL in OpenWebUI: http://n8n:5678/webhook/invoke_n8n_agent
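Once the workflow is imported and activated, the webhook can be smoke-tested from the host (port 5678 is the published n8n port). The JSON payload shape below is a guess, not taken from the workflow; match it to what the imported workflow actually expects:

```shell
# Smoke-test the agent webhook from the host.
# The payload shape is an assumption -- adapt it to the imported workflow.
payload='{"chatInput": "Hello, agent"}'
if out=$(curl -sf --max-time 15 -X POST \
           "http://localhost:5678/webhook/invoke_n8n_agent" \
           -H "Content-Type: application/json" \
           -d "$payload"); then
  echo "webhook replied: $out"
else
  echo "webhook not reachable (is n8n up and the workflow active?)"
fi
```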
Services communicate using container names:
- PostgreSQL: postgres:5432
- Redis: redis:6379
- Qdrant: http://qdrant:6333
- MinIO: minio:9000
- ClickHouse: clickhouse:8123
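Container-to-container name resolution can be spot-checked from inside one of the containers. This sketch runs from the n8n container and assumes `nc` (netcat) is available in the image; swap in another tool if it is not:

```shell
# Check container-to-container DNS and ports from inside the n8n container.
# Assumes `nc` exists in the n8n image -- an assumption about the image.
for target in postgres:5432 redis:6379 qdrant:6333 minio:9000 clickhouse:8123; do
  host=${target%%:*}
  port=${target##*:}
  if docker compose exec -T n8n nc -z "$host" "$port" 2>/dev/null; then
    echo "OK   $target"
  else
    echo "FAIL $target (service down, or nc missing in the image)"
  fi
done
```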
For LAN access, replace localhost with your server IP:
- OpenWebUI: http://YOUR_IP:8081
- n8n: http://YOUR_IP:5678
- SearxNG: http://YOUR_IP:8082
- Langfuse: http://YOUR_IP:3300
If you encounter a "change mount propagation through procfd" error:
- The setup now uses Docker named volumes by default
- This issue should be resolved in the current version
# Fix AI_Data permissions
sudo chown -R $USER:$USER AI_Data/
chmod -R 755 AI_Data/
# Check service logs
docker compose logs [service-name]
# Restart specific service
docker compose restart [service-name]
# Complete reset (WARNING: deletes all data)
# Edit init.sh: set DEV_MODE=true
sudo ./init.sh
If default ports are in use, modify .env:
OPENWEBUI_PORT=8082
N8N_PORT=5679
# ... etc
# All services status
docker compose ps
# Individual service health
docker compose exec postgres pg_isready
docker compose exec redis redis-cli ping
curl -f http://localhost:6333/health # Qdrant
- Change default passwords in .env
- Enable HTTPS with a reverse proxy (nginx/traefik)
- Configure a firewall to restrict port access
- Set up backups for the AI_Data/ directory
- Enable monitoring with Prometheus/Grafana
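As a concrete example of the firewall point, ufw rules that limit the published ports to a single LAN subnet might look like the following. The 192.168.1.0/24 subnet is a placeholder; substitute your own network:

```shell
# Restrict the stack to one LAN subnet; 192.168.1.0/24 is a placeholder.
sudo ufw default deny incoming
sudo ufw allow 22/tcp                                          # keep SSH reachable
sudo ufw allow from 192.168.1.0/24 to any port 8081 proto tcp  # OpenWebUI
sudo ufw allow from 192.168.1.0/24 to any port 5678 proto tcp  # n8n
sudo ufw allow from 192.168.1.0/24 to any port 8082 proto tcp  # SearxNG
sudo ufw allow from 192.168.1.0/24 to any port 3300 proto tcp  # Langfuse
sudo ufw enable
```

Run these once, and verify with `sudo ufw status` before logging out of an SSH session.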
# Backup data directory
tar -czf flowtech-ai-backup-$(date +%Y%m%d).tar.gz AI_Data/
# Backup PostgreSQL
docker compose exec postgres pg_dump -U n8n n8n > backup-$(date +%Y%m%d).sql
# Backup Qdrant
docker compose exec qdrant qdrant-cli snapshot create
- Vertical scaling: Increase VM resources (RAM/CPU)
- Horizontal scaling: Consider Kubernetes for multiple nodes
- Storage scaling: Use external storage for AI_Data/
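The backup commands above can be combined into one dated, cron-friendly script. This is a sketch: the destination directory and the 7-day retention are assumptions, not part of the project:

```shell
#!/usr/bin/env bash
# Dated backup of the data directory and the PostgreSQL database.
# Destination path and retention count are assumptions -- adjust to taste.
set -u
stamp=$(date +%Y%m%d)
dest="backups/$stamp"
mkdir -p "$dest"

if [ -d AI_Data ]; then
  tar -czf "$dest/ai-data.tar.gz" AI_Data/
else
  echo "warning: AI_Data/ not found, skipping archive"
fi

if docker compose ps >/dev/null 2>&1; then
  docker compose exec -T postgres pg_dump -U n8n n8n > "$dest/n8n.sql"
else
  echo "warning: docker compose unavailable, skipping database dump"
fi

# Keep only the 7 most recent daily backups (GNU head)
ls -1d backups/*/ 2>/dev/null | sort | head -n -7 | xargs -r rm -rf
echo "backup written to $dest"
```

A crontab entry such as `0 3 * * * /path/to/backup.sh` would run it nightly at 03:00.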
- Application logs: logs/init-YYYYMMDD-HHMMSS.log
- Docker logs: docker compose logs [service]
- Service logs: inside containers at /var/log/
- Check logs first: docker compose logs [service]
- Verify prerequisites are met
- Review this documentation
- Check GitHub issues
- Create new issue with logs and system info
Next Steps: After successful setup, see USER_GUIDE.md for usage instructions.