
📦 FlowTech-AI - Complete Installation Guide

Step-by-step guide for first-time installation


📋 Prerequisites

Before starting, ensure you have:

Operating System: Ubuntu 20.04+ / Debian 11+ / RHEL 8+
CPU: 4 cores minimum
RAM: 16 GB minimum
Disk Space: 50 GB free
Docker: Version 24.0.0+
Docker Compose: Version 2.20.0+

Check Docker Installation

# Check Docker version
docker --version
# Should show: Docker version 24.0.0 or higher

# Check Docker Compose
docker compose version
# Should show: Docker Compose version v2.20.0 or higher

# Test Docker permissions
docker ps
# If error, add user to docker group:
sudo usermod -aG docker $USER
newgrp docker
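The checks above can be scripted. The sketch below compares dotted version strings with `sort -V`; the helper name `version_ge` is our own, not part of the repo:

```shell
# version_ge A B — succeeds if version A >= version B (dotted numeric versions)
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Example: extract the installed Docker version and compare to the minimum.
docker_version=$(docker --version 2>/dev/null | sed -E 's/^Docker version ([0-9.]+).*/\1/')
if version_ge "${docker_version:-0}" "24.0.0"; then
  echo "Docker version OK: $docker_version"
else
  echo "Docker 24.0.0+ required (found: ${docker_version:-none})" >&2
fi
```

The same helper works for the Docker Compose minimum (`version_ge "$compose_version" "2.20.0"`).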

🚀 Installation Steps

Step 1: Clone Repository

git clone https://github.com/FlowTech-Lab/FlowTech-AI.git
cd FlowTech-AI

Step 2: Run Installation Script

# Requires sudo for setting proper file permissions
sudo ./init.sh

⚠️ Important: The script will ask you two questions:

Question 1: Langfuse Administrator Email

Langfuse Configuration - User Email
Enter the email for the Langfuse administrator user: 

What to enter: Your email address (e.g., admin@example.com)

  • This creates the Langfuse admin account for LLM observability
  • You'll use this to login at http://localhost:3300

Question 2: Langfuse Administrator Password

Langfuse Configuration - Password
Enter the password for the Langfuse administrator user (or press Enter for auto-generation):

Options:

  • Press Enter: Auto-generate a secure password ✅ Recommended
  • Type password: Use your own password (min 8 characters)

Note: The password will be saved in the .env file and displayed at the end of installation.
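If you're curious how auto-generation typically works, here is an illustrative shell sketch. The repo's init.sh may use a different method, so treat this as an assumption, not the script's actual code:

```shell
# Illustrative only: draw 24 alphanumeric characters from /dev/urandom.
gen_password() {
  LC_ALL=C tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 24
}

gen_password; echo
```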

Step 3: Wait for Installation

The script will:

  1. ✅ Download Docker images (~5 minutes)
  2. ✅ Generate secure credentials
  3. ✅ Create directories and set permissions
  4. ✅ Start all services
  5. ✅ Configure ClickHouse, PostgreSQL, Redis
  6. ✅ Display final summary with credentials

Total time: ~5-7 minutes (depending on internet speed)

Step 4: Save Your Credentials

At the end, you'll see:

🔑 Default Credentials:
  • Langfuse: admin@example.com / <auto-generated-password>
  • N8N: admin / <auto-generated-password>
  • N8N Bearer Token: <bearer-token>
  • Samba Share: admin / <auto-generated-password>

⚠️ IMPORTANT:

  • Save these credentials in a secure location (password manager)
  • They are also stored in the .env file (keep it secure!)
  • You'll need them to access the services

✅ Verify Installation

Check All Services are Running

docker compose ps

Expected output: All services should show Up (healthy) or Up:

NAME                      STATUS
clickhouse                Up (healthy)
flowtech-ai-n8n-1         Up
flowtech-ai-openwebui-1   Up
flowtech-ai-postgres-1    Up (healthy)
flowtech-ai-searxng-1     Up
langfuse-web              Up
langfuse-worker           Up
mcp-qdrant                Up (healthy)
mcp-qdrant-knowledge      Up (healthy)
minio                     Up (healthy)
qdrant                    Up
redis                     Up (healthy)

Test Web Interfaces

Open in your browser:

| Service   | URL                             | Expected Result                          |
|-----------|---------------------------------|------------------------------------------|
| OpenWebUI | http://localhost:8081           | Chat interface loads                     |
| n8n       | http://localhost:5678           | Login page (use N8N credentials)         |
| Langfuse  | http://localhost:3300           | Login page (use Langfuse email/password) |
| Qdrant    | http://localhost:6333/dashboard | Qdrant dashboard                         |
| SearxNG   | http://localhost:8082           | Search interface                         |

Test MCP-Qdrant (Cursor Integration)

curl http://localhost:8000/sse

Expected: An event stream response (Content-Type: text/event-stream)


🔧 Configure Cursor IDE

Step 1: Copy MCP Configuration

# Linux/Mac
cp cursor-mcp-config.json ~/.cursor/mcp.json

# Windows
copy cursor-mcp-config.json %USERPROFILE%\.cursor\mcp.json

Step 2: Edit Server IP

# Open file
nano ~/.cursor/mcp.json

Change:

{
  "mcpServers": {
    "qdrant": {
      "url": "http://YOUR_SERVER_IP:8000/sse"  // ← Change to your server IP
    }
  }
}
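Rather than editing by hand, you can substitute the placeholder with `sed`. This assumes the template literally contains the string `YOUR_SERVER_IP`; adjust the pattern if your copy differs:

```shell
# Replace the placeholder on stdin with the IP passed as $1.
set_mcp_ip() {
  sed "s/YOUR_SERVER_IP/$1/g"
}

# usage (example IP — substitute your server's address):
#   set_mcp_ip 192.168.1.50 < cursor-mcp-config.json > ~/.cursor/mcp.json
```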

How to find your IP:

# Linux
hostname -I | awk '{print $1}'

# Windows
ipconfig

# macOS
ifconfig | grep "inet " | grep -v 127.0.0.1

Step 3: Restart Cursor

Close and reopen Cursor IDE completely.

Step 4: Test MCP Integration

In Cursor chat:

@qdrant store "FlowTech-AI installation successful!"

Then:

@qdrant find installation

Expected: Should retrieve the stored message.

Success! Cursor is now connected to your knowledge base.


📚 Configure OpenWebUI RAG

Step 1: Access OpenWebUI

Open http://localhost:8081 in your browser.

No login required - OpenWebUI uses your browser session.

Step 2: Upload Documents

  1. Click "New Chat"

  2. In the chat interface, click "Knowledge" button (book icon)

  3. Click "Upload Files"

  4. Select documents:

    • PDF files
    • Markdown files (.md)
    • Text files (.txt)
    • Word documents (.docx)
    • Code files (.py, .js, .ts, etc.)
  5. Wait for processing (progress bar)

  6. Click "Close" when done

Step 3: Test RAG

Enable Knowledge in Chat:

  • The "Knowledge" button should be highlighted/active
  • If not, click it to activate

Ask questions:

User: What information is in the uploaded documents?
AI: Based on the documents, I can see...

Example questions:

  • "Summarize the API documentation"
  • "How do I configure the database according to the docs?"
  • "What are the main features described?"

Success! RAG is working with your documents.


🔄 Configure n8n Workflows

Step 1: Access n8n

Open http://localhost:5678

Step 2: Login

Use credentials from installation summary:

  • Username: admin
  • Password: (from .env or installation output)

Step 3: Import Workflow

  1. Click "Workflows" → "Add workflow"
  2. Click "⋮" (three dots) → "Import from File"
  3. Select: SRC/FlowTech-AI-Complete-Workflow.json
  4. Click "Save"

Step 4: Configure Credentials

The workflow needs:

  • Ollama Connection: Already configured (external server)
  • Qdrant Connection: Automatic (uses localhost:6333)

Step 5: Activate Workflow

Click "Active" toggle in top right.

Success! Workflow is ready to use.


🎯 Quick Usage Examples

Example 1: Code Context in Cursor

Scenario: Save a Python function for later reference

In Cursor:

def calculate_embeddings(text: str) -> List[float]:
    """Calculate embeddings using BGE model"""
    # Your code here
    pass

Store it:

@qdrant store "Python function for BGE embeddings calculation"

Later retrieve:

@qdrant find python bge embeddings

Example 2: Documentation Q&A in OpenWebUI

Scenario: Upload your project documentation and ask questions

  1. Upload API_DOCS.pdf to OpenWebUI
  2. Enable "Knowledge" in chat
  3. Ask: "How do I authenticate API requests?"
  4. Get answer with citations from your docs

Example 3: Automated Workflow in n8n

Scenario: Process incoming webhooks

  1. Create webhook trigger in n8n
  2. Add processing nodes
  3. Connect to Qdrant for context
  4. Send results to external API

🐛 Troubleshooting

Problem: Services won't start

Check:

docker compose logs

Common causes:

  • Port already in use: Change ports in .env
  • Not enough RAM: Close other applications
  • Docker not running: sudo systemctl start docker

Solution:

docker compose down
sudo ./init.sh

Problem: Can't access services from other computers

Solution:

# Check firewall
sudo ufw allow 8081  # OpenWebUI
sudo ufw allow 8000  # MCP-Qdrant
sudo ufw allow 5678  # n8n
sudo ufw allow 6333  # Qdrant

# Or disable firewall temporarily for testing
sudo ufw disable

Problem: Cursor can't connect to MCP

Check:

  1. Is ~/.cursor/mcp.json configured with correct IP?
  2. Can you reach the server: curl http://YOUR_IP:8000/sse
  3. Is firewall blocking port 8000?

Solution:

# Test from Cursor's computer
curl http://SERVER_IP:8000/sse

# Should return event stream, not error

Problem: OpenWebUI RAG not finding documents

Check:

  1. Is "Knowledge" button activated in chat?
  2. Did documents finish uploading/processing?
  3. Are documents actually uploaded?

Solution:

# Check Qdrant collections
curl http://localhost:6333/collections

# Should show collections with vectors
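If `jq` isn't installed, a rough grep-based sketch (ours, tied to the shape of Qdrant's /collections response) can pull out the collection names:

```shell
# Extract collection names from the Qdrant /collections JSON response.
list_collections() {
  grep -o '"name": *"[^"]*"' | sed 's/.*"name": *"\(.*\)"/\1/'
}

# usage: curl -s http://localhost:6333/collections | list_collections
# Empty output means no collections exist yet — re-upload your documents.
```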

Problem: Forgot credentials

Check .env file:

grep -E 'PASSWORD|AUTH' .env

All passwords are stored there.


🔒 Security Recommendations

For Local/Development Use

Current configuration is secure for local networks:

  • ✅ Random passwords auto-generated
  • ✅ Services bound to localhost (except when needed)
  • ✅ Firewall-ready

For Production/Internet-Facing

Additional security needed:

  1. Reverse Proxy (Nginx/Traefik)

    # Add SSL certificates
    # Add authentication
    # Rate limiting
  2. Change Default Ports

    # Edit .env
    OPENWEBUI_PORT=8443
    N8N_PORT=5443
    # etc.
  3. Firewall Rules

    # Only allow specific IPs
    sudo ufw allow from 192.168.1.0/24 to any port 8081
  4. Regular Updates

    cd FlowTech-AI
    git pull
    docker compose pull
    docker compose up -d

📊 Resource Usage

Expected resource usage (after all services start):

  • RAM: ~8-10 GB
  • CPU: 10-20% idle, 50-80% during AI inference
  • Disk: ~10 GB for Docker images + data growth

Monitor resources:

# Overall system
htop

# Docker resources
docker stats
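To eyeball the stack's total memory footprint, you can sum the MEM USAGE column of `docker stats`. This is our own rough sketch; it assumes a `NAME MEMUSAGE / LIMIT` column layout, so verify it against your actual output:

```shell
# Sum a MiB/GiB mixed MEM USAGE column into one MiB total.
sum_mem() {
  awk '{
    v = $2
    if (v ~ /GiB/)      { sub(/GiB/, "", v); total += v * 1024 }
    else if (v ~ /MiB/) { sub(/MiB/, "", v); total += v }
  } END { printf "%.0f MiB total\n", total }'
}

# usage: docker stats --no-stream --format "{{.Name}} {{.MemUsage}}" | sum_mem
```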

🔄 Common Operations

Restart All Services

cd FlowTech-AI
docker compose restart

Restart Single Service

docker compose restart openwebui
docker compose restart mcp-qdrant

View Logs

# All services
docker compose logs -f

# Single service
docker compose logs -f openwebui

# Last 100 lines
docker compose logs --tail=100 mcp-qdrant
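When the combined logs are noisy, a simple filter (our own helper, adjust the pattern to taste) surfaces only the lines worth reading:

```shell
# Keep only warning/error/fatal lines from a log stream.
errors_only() {
  grep -iE 'error|warn|fatal'
}

# usage: docker compose logs --tail=500 | errors_only
```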

Stop Stack

docker compose down

Stop and Remove Data

docker compose down --volumes

Update to Latest Version

git pull origin main
docker compose pull
docker compose up -d

📝 Next Steps

After successful installation:

  1. Configure Cursor - AI-enhanced coding
  2. Use OpenWebUI RAG - Document Q&A
  3. Create n8n Workflows - Automation
  4. Optional: Setup Obsidian - Note management

🎉 Congratulations!

You now have a complete AI stack running:

  • ✅ OpenWebUI for conversational AI
  • ✅ Cursor integration for AI-enhanced coding
  • ✅ n8n for automation
  • ✅ Qdrant for vector storage
  • ✅ Full observability with Langfuse

Start building! 🚀


🆘 Need Help?


Made with ❤️ by FlowTech-Lab