- Node.js >= 18.0.0
- npm >= 9.0.0
- Docker & Docker Compose (recommended)
- (Optional) Ollama for local LLM
```bash
git clone <your-repo>
cd fileai
npm run setup
```

This script will:
- Install all dependencies
- Create `.env` files from examples
- Check for Docker
Option A: Using Docker (Recommended)
```bash
npm run services:start
```

This starts:
- MongoDB on `localhost:27017`
- Qdrant on `localhost:6333`
- MinIO on `localhost:9000` (Console: `localhost:9001`)
Option B: Manual Installation
Install and configure each service manually:
```bash
# Install Ollama from https://ollama.ai

# Pull models
ollama pull llama3
ollama pull nomic-embed-text
```

To use OpenAI instead, you'll configure the API key in the setup wizard.
```bash
npm run dev
```

This starts:
- Frontend: http://localhost:3000
- Backend API: http://localhost:3001
In the setup wizard, configure:
- Database connection
- Storage (local/S3/MinIO)
- Vector database (Qdrant)
- LLM provider (Ollama/OpenAI)
- Embedding provider

Then click "Finish Setup".
After setup, you'll be redirected to create the first user account.
```bash
# Build and start all services
docker-compose up -d

# View logs
docker-compose logs -f

# Stop all services
docker-compose down
```

Services will be available at:
- Frontend: http://localhost:3000
- Backend API: http://localhost:3001
- MongoDB: localhost:27017
- Qdrant: http://localhost:6333
- MinIO: http://localhost:9000

```env
# Database
MONGODB_URI=mongodb://localhost:27017/fileai
# JWT Secret (CHANGE THIS!)
JWT_SECRET=your-super-secret-key-change-this
# Server
PORT=3001
NODE_ENV=production
# Vector DB
QDRANT_URL=http://localhost:6333
QDRANT_API_KEY=
# Storage
STORAGE_TYPE=local # or s3, minio
LOCAL_STORAGE_PATH=./uploads
# S3/MinIO (if using)
S3_BUCKET=fileai
S3_REGION=us-east-1
S3_ACCESS_KEY_ID=
S3_SECRET_ACCESS_KEY=
S3_ENDPOINT= # For MinIO: http://localhost:9000
# LLM - Ollama
OLLAMA_BASE_URL=http://localhost:11434
OLLAMA_MODEL=llama3
OLLAMA_EMBEDDING_MODEL=nomic-embed-text
# LLM - OpenAI (optional)
OPENAI_API_KEY=
OPENAI_MODEL=gpt-4-turbo-preview
OPENAI_EMBEDDING_MODEL=text-embedding-3-small
# Default providers
LLM_PROVIDER=ollama
EMBEDDING_PROVIDER=ollama
```

Frontend:

```env
NEXT_PUBLIC_API_URL=http://localhost:3001/trpc
```

File ingestion flow (a code sketch follows the list):
- User uploads file (PDF, DOCX, XML, TXT)
- File is stored in configured storage (S3/MinIO/Local)
- Text is extracted using appropriate processor
- Text is chunked and embedded
- Embeddings are stored in Qdrant
- Metadata is stored in MongoDB
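A minimal TypeScript sketch of this pipeline, to make the moving parts concrete. Every name here (`IngestDeps`, `ingestFile`, the method signatures) is illustrative, not the project's actual API; the real adapters live under `apps/server/src/services/`:

```typescript
// Illustrative only: these shapes mirror the steps above, not real exports.
interface ProcessedDocument {
  text: string;
  metadata: Record<string, unknown>;
}

interface IngestDeps {
  storeFile(file: Buffer, filename: string): Promise<string>; // S3/MinIO/local
  extractText(file: Buffer, filename: string): Promise<ProcessedDocument>;
  chunk(text: string): string[]; // governed by CHUNK_SIZE / CHUNK_OVERLAP
  embed(chunk: string): Promise<number[]>; // Ollama or OpenAI embeddings
  upsertVectors(points: { vector: number[]; payload: object }[]): Promise<void>; // Qdrant
  saveMetadata(doc: object): Promise<void>; // MongoDB
}

async function ingestFile(deps: IngestDeps, file: Buffer, filename: string) {
  const storageKey = await deps.storeFile(file, filename);             // store the raw file
  const { text, metadata } = await deps.extractText(file, filename);   // extract text
  const chunks = deps.chunk(text);                                     // chunk...
  const vectors = await Promise.all(chunks.map((c) => deps.embed(c))); // ...and embed
  await deps.upsertVectors(
    chunks.map((chunk, i) => ({ vector: vectors[i], payload: { chunk, filename } }))
  );                                                                   // embeddings -> Qdrant
  await deps.saveMetadata({ filename, storageKey, metadata });         // metadata -> MongoDB
}
```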

Query flow (sketched after the list):
- User submits query
- Query is embedded using configured model
- Similar chunks are retrieved from Qdrant
- Context is built from relevant chunks
- LLM generates answer with sources
- Answer and sources are returned to user
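And a matching sketch of the query side; again, every name is a stand-in, not the project's real API:

```typescript
// Illustrative only; mirrors the query steps above.
interface QueryDeps {
  embed(text: string): Promise<number[]>;
  search(vector: number[], topK: number): Promise<{ chunk: string; filename: string }[]>; // Qdrant
  complete(prompt: string): Promise<string>; // configured LLM (Ollama or OpenAI)
}

async function answerQuery(deps: QueryDeps, query: string, topK = 5) {
  const queryVector = await deps.embed(query);              // embed with the configured model
  const hits = await deps.search(queryVector, topK);        // retrieve similar chunks
  const context = hits.map((h) => h.chunk).join("\n---\n"); // build context from relevant chunks
  const answer = await deps.complete(
    `Answer the question using only this context.\n\nContext:\n${context}\n\nQuestion: ${query}`
  );
  return { answer, sources: [...new Set(hits.map((h) => h.filename))] }; // answer with sources
}
```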
To add a file processor:
- Create a new processor in `apps/server/src/services/processors/`
- Implement the `FileProcessor` interface
- Register it in `processors/index.ts`
Example:

```typescript
export class NewFormatProcessor implements FileProcessor {
  supportedTypes = ["application/new-format"];

  async process(file: Buffer, filename: string): Promise<ProcessedDocument> {
    // Your processing logic, e.g. decode the buffer and extract text
    const extractedText = file.toString("utf-8");
    return {
      text: extractedText,
      metadata: { filename },
    };
  }
}
```

To add a storage adapter (see the sketch after this list):
- Create a new adapter in `apps/server/src/services/storage/`
- Implement the `StorageAdapter` interface
- Register it in `storage/index.ts`
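The `StorageAdapter` interface isn't reproduced in this README, so the shape below is an assumption (upload/download/delete methods), meant only to show where a new backend plugs in:

```typescript
// Assumed shape of StorageAdapter; the real interface in
// apps/server/src/services/storage/ may differ.
interface StorageAdapter {
  upload(file: Buffer, key: string): Promise<string>; // returns the storage key
  download(key: string): Promise<Buffer>;
  delete(key: string): Promise<void>;
}

// Example: an in-memory adapter, useful for tests.
class MemoryStorageAdapter implements StorageAdapter {
  private files = new Map<string, Buffer>();

  async upload(file: Buffer, key: string): Promise<string> {
    this.files.set(key, file);
    return key;
  }

  async download(key: string): Promise<Buffer> {
    const file = this.files.get(key);
    if (!file) throw new Error(`Not found: ${key}`);
    return file;
  }

  async delete(key: string): Promise<void> {
    this.files.delete(key);
  }
}
```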
To add a vector store adapter (see the sketch after this list):
- Create a new adapter in `apps/server/src/services/vector/`
- Implement the `VectorStoreAdapter` interface
- Register it in `vector/index.ts`
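Similarly, a guessed shape for `VectorStoreAdapter`; method names here are assumptions, so check `vector/index.ts` for the real contract:

```typescript
// Assumed shape of VectorStoreAdapter, not the project's actual signatures.
interface VectorPoint {
  id: string;
  vector: number[];
  payload: Record<string, unknown>;
}

interface VectorStoreAdapter {
  upsert(collection: string, points: VectorPoint[]): Promise<void>;
  search(collection: string, vector: number[], topK: number): Promise<VectorPoint[]>;
  deleteByFile(collection: string, filename: string): Promise<void>;
}
```

An adapter for another vector database would implement these methods and be registered alongside the existing Qdrant adapter.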
MongoDB connection issues:

```bash
# Check if MongoDB is running
docker-compose ps mongodb

# View MongoDB logs
docker-compose logs mongodb
```

Qdrant issues:

```bash
# Check Qdrant status
curl http://localhost:6333/health

# View Qdrant logs
docker-compose logs qdrant
```

Ollama issues:

```bash
# Check if Ollama is running
curl http://localhost:11434/api/version

# List available models
ollama list

# Pull missing models
ollama pull llama3
ollama pull nomic-embed-text
```
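The HTTP checks above can also be scripted. A minimal sketch using Node 18's built-in `fetch`, reusing the URLs from the `curl` commands (the backend URL is an assumption based on the ports listed earlier):

```typescript
// Probe each service; URLs taken from the curl commands and port list above.
const services: Record<string, string> = {
  qdrant: "http://localhost:6333/health",
  ollama: "http://localhost:11434/api/version",
  backend: "http://localhost:3001", // assumed to answer a plain GET
};

async function checkAll(): Promise<void> {
  for (const [name, url] of Object.entries(services)) {
    try {
      const res = await fetch(url);
      console.log(`${name}: ${res.ok ? "ok" : `HTTP ${res.status}`}`);
    } catch {
      console.log(`${name}: unreachable (${url})`);
    }
  }
}

checkAll();
```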
If file uploads fail:
- Check file size (max 50MB)
- Check supported formats (PDF, DOCX, XML, TXT)
- View backend logs for processing errors
- Ensure storage is configured correctly
```bash
# Development
npm run dev            # Start dev servers
npm run build          # Build all packages
npm run lint           # Run linters
npm run type-check     # Check TypeScript

# Docker Services
npm run services:start # Start MongoDB, Qdrant, MinIO
npm run services:stop  # Stop services
npm run docker:up      # Start all services (including app)
npm run docker:down    # Stop all services
npm run docker:logs    # View logs

# Cleanup
npm run clean          # Clean build artifacts
docker-compose down -v # Remove all containers and volumes
```

For issues and questions:
- Check the troubleshooting section above
- Review logs: `npm run docker:logs`
- Open an issue on GitHub
- Change the JWT_SECRET in production!
- Use strong passwords for services
- Enable authentication on Qdrant in production
- Use HTTPS in production
- Regularly update dependencies
- Don't commit `.env` files to git
- Use OpenAI embeddings for better quality (but they cost money)
- Adjust `CHUNK_SIZE` and `CHUNK_OVERLAP` for your documents (sketched below)
- Increase `topK` for more context (but slower queries)
- Use local Ollama for free inference
- Consider using a GPU for faster Ollama performance
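To make the chunking knobs concrete, here is a sketch of fixed-size chunking with overlap. The env names come from the tips above; the defaults and the assumption that sizes are measured in characters are illustrative, not the project's actual behavior:

```typescript
// Fixed-size chunking with overlap; assumes sizes are in characters.
// Defaults here are illustrative, not the project's actual defaults.
function chunkText(text: string, chunkSize = 1000, chunkOverlap = 200): string[] {
  if (chunkOverlap >= chunkSize) throw new Error("overlap must be smaller than chunk size");
  const chunks: string[] = [];
  const step = chunkSize - chunkOverlap; // each chunk starts this far after the previous one
  for (let start = 0; start < text.length; start += step) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break; // last chunk reached the end
  }
  return chunks;
}

// Larger CHUNK_SIZE = fewer, broader chunks; larger CHUNK_OVERLAP = more shared
// context between neighboring chunks (better recall, more storage).
```

Raising `topK` then controls how many of these chunks are retrieved per query.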
License: MIT