Step-by-step guide for first-time installation
Before starting, ensure you have:
✅ Operating System: Ubuntu 20.04+ / Debian 11+ / RHEL 8+
✅ CPU: 4 cores minimum
✅ RAM: 16 GB minimum
✅ Disk Space: 50 GB free
✅ Docker: Version 24.0.0+
✅ Docker Compose: Version 2.20.0+
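The version requirements above can be checked from a script with a `sort -V` comparison (a sketch; assumes GNU `sort` with version-sort support):

```bash
# version_ge A B -> succeeds when dotted version A >= B
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Example: check a Docker version string against the 24.0.0 minimum
installed="24.0.7"   # e.g. parsed from `docker --version`
if version_ge "$installed" "24.0.0"; then
  echo "Docker version OK"
fi
```

The same helper works for the Compose 2.20.0 minimum.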
```bash
# Check Docker version
docker --version
# Should show: Docker version 24.0.0 or higher

# Check Docker Compose
docker compose version
# Should show: Docker Compose version v2.20.0 or higher

# Test Docker permissions
docker ps

# If error, add user to docker group:
sudo usermod -aG docker $USER
newgrp docker
```

```bash
git clone https://github.com/FlowTech-Lab/FlowTech-AI.git
cd FlowTech-AI

# Requires sudo for setting proper file permissions
sudo ./init.sh
```

Langfuse Configuration - User Email
Enter the email for the Langfuse administrator user:
What to enter: Your email address (e.g., admin@example.com)
- This creates the Langfuse admin account for LLM observability
- You'll use this to login at http://localhost:3300
Langfuse Configuration - Password
Enter the password for the Langfuse administrator user (or press Enter for auto-generation):
Options:
- Press Enter: Auto-generate a secure password ✅ Recommended
- Type password: Use your own password (min 8 characters)
Note: The password will be saved in .env file and displayed at the end of installation.
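For reference, a password along those lines can be generated by hand. This is a sketch of one common approach using OpenSSL, not necessarily what `init.sh` itself does:

```bash
# Draw 16 random bytes from the CSPRNG and hex-encode them (32 characters)
generate_password() {
  openssl rand -hex 16
}

PASSWORD="$(generate_password)"
echo "Password length: ${#PASSWORD}"
```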
The script will:
- ✅ Download Docker images (~5 minutes)
- ✅ Generate secure credentials
- ✅ Create directories and set permissions
- ✅ Start all services
- ✅ Configure ClickHouse, PostgreSQL, Redis
- ✅ Display final summary with credentials
Total time: ~5-7 minutes (depending on internet speed)
At the end, you'll see:
🔑 Default Credentials:
• Langfuse: admin@example.com / <auto-generated-password>
• N8N: admin / <auto-generated-password>
• N8N Bearer Token: <bearer-token>
• Samba Share: admin / <auto-generated-password>
- Save these credentials in a secure location (password manager)
- They are also stored in the `.env` file (keep it secure!)
- You'll need them to access the services
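If you later need one of these values in a script, a small grep helper can read it straight from the file (a sketch; `LANGFUSE_PASSWORD` is an example name, check your generated `.env` for the actual keys):

```bash
# Read a single KEY=value entry from an env file without sourcing it
get_env_var() {
  grep -m1 "^$1=" "$2" | cut -d= -f2-
}

# Demonstrate with a throwaway sample file; on a real install pass .env
sample="$(mktemp)"
printf 'LANGFUSE_PASSWORD=s3cret-example\nN8N_PASSWORD=other\n' > "$sample"
pw="$(get_env_var LANGFUSE_PASSWORD "$sample")"
echo "$pw"
rm -f "$sample"
```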
```bash
docker compose ps
```

Expected output: All services should show Up (healthy) or Up:

```
NAME                      STATUS
clickhouse                Up (healthy)
flowtech-ai-n8n-1         Up
flowtech-ai-openwebui-1   Up
flowtech-ai-postgres-1    Up (healthy)
flowtech-ai-searxng-1     Up
langfuse-web              Up
langfuse-worker           Up
mcp-qdrant                Up (healthy)
mcp-qdrant-knowledge      Up (healthy)
minio                     Up (healthy)
qdrant                    Up
redis                     Up (healthy)
```
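Rather than eyeballing the table, a small filter can count services that are not up (a sketch; it assumes the two-column NAME/STATUS layout shown above):

```bash
# Count rows whose STATUS column does not start with "Up"
count_unhealthy() {
  tail -n +2 | awk '$2 !~ /^Up/ { n++ } END { print n + 0 }'
}

# On a live install:  docker compose ps | count_unhealthy
sample='NAME STATUS
clickhouse Up (healthy)
redis Exited (1)'
echo "$sample" | count_unhealthy   # prints 1 (redis is down)
```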
Open in your browser:
| Service | URL | Expected Result |
|---|---|---|
| OpenWebUI | http://localhost:8081 | Chat interface loads |
| n8n | http://localhost:5678 | Login page (use N8N credentials) |
| Langfuse | http://localhost:3300 | Login page (use Langfuse email/password) |
| Qdrant | http://localhost:6333/dashboard | Qdrant dashboard |
| SearxNG | http://localhost:8082 | Search interface |
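The same round of checks can be scripted with curl (a sketch; `-f` turns HTTP errors into a non-zero exit code and `-m 5` caps each attempt at five seconds):

```bash
# Succeeds only when the URL answers without an HTTP error
check_url() {
  curl -fsS -m 5 -o /dev/null "$1"
}

for url in http://localhost:8081 http://localhost:5678 \
           http://localhost:3300 http://localhost:8082; do
  if check_url "$url"; then echo "OK   $url"; else echo "FAIL $url"; fi
done
```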
```bash
curl http://localhost:8000/sse
```

Expected: Should return event stream data (text/event-stream)
```bash
# Linux/Mac
cp cursor-mcp-config.json ~/.cursor/mcp.json

# Windows
copy cursor-mcp-config.json %USERPROFILE%\.cursor\mcp.json
```

Open the file:

```bash
nano ~/.cursor/mcp.json
```

Change:
```json
{
  "mcpServers": {
    "qdrant": {
      "url": "http://YOUR_SERVER_IP:8000/sse"
    }
  }
}
```

Replace YOUR_SERVER_IP with your server's IP address.

How to find your IP:

```bash
# Linux
hostname -I | awk '{print $1}'

# Windows
ipconfig

# macOS
ifconfig | grep "inet " | grep -v 127.0.0.1
```

Close and reopen Cursor IDE completely.
In Cursor chat:
@qdrant store "FlowTech-AI installation successful!"
Then:
@qdrant find installation
Expected: Should retrieve the stored message.
✅ Success! Cursor is now connected to your knowledge base.
Open http://localhost:8081 in your browser.
No login required - OpenWebUI uses your browser session.
1. Click "New Chat"
2. In the chat interface, click the "Knowledge" button (book icon)
3. Click "Upload Files"
4. Select documents:
   - PDF files
   - Markdown files (.md)
   - Text files (.txt)
   - Word documents (.docx)
   - Code files (.py, .js, .ts, etc.)
5. Wait for processing (progress bar)
6. Click "Close" when done
Enable Knowledge in Chat:
- The "Knowledge" button should be highlighted/active
- If not, click it to activate
Ask questions:
User: What information is in the uploaded documents?
AI: Based on the documents, I can see...
Example questions:
- "Summarize the API documentation"
- "How do I configure the database according to the docs?"
- "What are the main features described?"
✅ Success! RAG is working with your documents.
Use credentials from installation summary:
- Username: admin
- Password: (from `.env` or installation output)
- Click "Workflows" → "Add workflow"
- Click "⋮" (three dots) → "Import from File"
- Select: `SRC/FlowTech-AI-Complete-Workflow.json`
- Click "Save"
The workflow needs:
- Ollama Connection: Already configured (external server)
- Qdrant Connection: Automatic (uses localhost:6333)
Click "Active" toggle in top right.
✅ Success! Workflow is ready to use.
Scenario: Save a Python function for later reference
In Cursor:
```python
from typing import List

def calculate_embeddings(text: str) -> List[float]:
    """Calculate embeddings using the BGE model."""
    # Your code here
    pass
```

Store it:
@qdrant store "Python function for BGE embeddings calculation"
Later retrieve:
@qdrant find python bge embeddings
Scenario: Upload your project documentation and ask questions
- Upload `API_DOCS.pdf` to OpenWebUI
- Enable "Knowledge" in chat
- Ask: "How do I authenticate API requests?"
- Get an answer with citations from your docs
Scenario: Process incoming webhooks
- Create webhook trigger in n8n
- Add processing nodes
- Connect to Qdrant for context
- Send results to external API
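Once the workflow is active, the webhook trigger can be exercised from a terminal. The `/webhook/demo` path below is a hypothetical example; substitute the path shown on your trigger node:

```bash
# Compose a small JSON test payload for the webhook
PAYLOAD='{"event":"test","message":"hello from curl"}'
echo "$PAYLOAD"

# On a running install, POST it to the trigger (hypothetical path):
#   curl -X POST http://localhost:5678/webhook/demo \
#        -H 'Content-Type: application/json' \
#        -d "$PAYLOAD"
```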
Check:

```bash
docker compose logs
```

Common causes:
- Port already in use: change ports in `.env`
- Not enough RAM: close other applications
- Docker not running: `sudo systemctl start docker`
Solution:

```bash
docker compose down
sudo ./init.sh
```

Solution:

```bash
# Check firewall
sudo ufw allow 8081   # OpenWebUI
sudo ufw allow 8000   # MCP-Qdrant
sudo ufw allow 5678   # n8n
sudo ufw allow 6333   # Qdrant

# Or disable firewall temporarily for testing
sudo ufw disable
```

Check:
- Is `~/.cursor/mcp.json` configured with the correct IP?
- Can you reach the server: `curl http://YOUR_IP:8000/sse`?
- Is the firewall blocking port 8000?
Solution:

```bash
# Test from Cursor's computer
curl http://SERVER_IP:8000/sse
# Should return an event stream, not an error
```

Check:
- Is "Knowledge" button activated in chat?
- Did documents finish uploading/processing?
- Are documents actually uploaded?
Solution:

```bash
# Check Qdrant collections
curl http://localhost:6333/collections
# Should show collections with vectors
```

Check the `.env` file:

```bash
cat .env | grep PASSWORD
cat .env | grep AUTH
```

All passwords are stored there.
Current configuration is secure for local networks:
- ✅ Random passwords auto-generated
- ✅ Services bound to localhost (except when needed)
- ✅ Firewall-ready
Additional security needed:

- Reverse proxy (Nginx/Traefik)

  ```bash
  # Add SSL certificates
  # Add authentication
  # Add rate limiting
  ```

- Change default ports

  ```bash
  # Edit .env
  OPENWEBUI_PORT=8443
  N8N_PORT=5443
  # etc.
  ```

- Firewall rules

  ```bash
  # Only allow specific IPs
  sudo ufw allow from 192.168.1.0/24 to any port 8081
  ```

- Regular updates

  ```bash
  cd FlowTech-AI
  git pull
  docker compose pull
  docker compose up -d
  ```
Expected resource usage (after all services start):
- RAM: ~8-10 GB
- CPU: 10-20% idle, 50-80% during AI inference
- Disk: ~10 GB for Docker images + data growth
Monitor resources:

```bash
# Overall system
htop

# Docker resources
docker stats
```

```bash
# Restart all services
cd FlowTech-AI
docker compose restart

# Restart a single service
docker compose restart openwebui
docker compose restart mcp-qdrant
```

```bash
# Follow logs for all services
docker compose logs -f

# Single service
docker compose logs -f openwebui

# Last 100 lines
docker compose logs --tail=100 mcp-qdrant
```

```bash
# Stop all services
docker compose down

# Stop and remove volumes (deletes all data!)
docker compose down --volumes

# Update to the latest version
git pull origin main
docker compose pull
docker compose up -d
```

After successful installation:
- Configure Cursor - AI-enhanced coding
- Use OpenWebUI RAG - Document Q&A
- Create n8n Workflows - Automation
- Optional: Setup Obsidian - Note management
You now have a complete AI stack running:
- ✅ OpenWebUI for conversational AI
- ✅ Cursor integration for AI-enhanced coding
- ✅ n8n for automation
- ✅ Qdrant for vector storage
- ✅ Full observability with Langfuse
Start building! 🚀
- Issues: https://github.com/FlowTech-Lab/FlowTech-AI/issues
- Discussions: https://github.com/FlowTech-Lab/FlowTech-AI/discussions
- Troubleshooting: See above section
Made with ❤️ by FlowTech-Lab