A fully self-hosted AI stack running entirely on your own hardware. Useful if you don't want any subscriptions and you value your privacy!
Combines Ollama (local LLM), Open WebUI (chat interface), and SearXNG (private web search) into a single Docker Compose setup.
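The wiring between the three services might look roughly like the sketch below. This is not the repository's actual `docker-compose.yml` (which may differ in image tags, ports, and volumes) — just a minimal illustration of how the containers could connect, using the projects' public images and Open WebUI's documented `OLLAMA_BASE_URL` variable:

```yaml
services:
  ollama:
    image: ollama/ollama            # serves the LLM API on port 11434 inside the network
    volumes:
      - ollama:/root/.ollama        # persist downloaded models across restarts

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3000:8080"                 # chat UI reachable at http://localhost:3000
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434  # point the UI at the ollama container
    depends_on:
      - ollama

  searxng:
    image: searxng/searxng          # private metasearch engine used for web search

volumes:
  ollama:
```

Because the services share one Compose network, Open WebUI reaches Ollama by its service name (`ollama`) rather than `localhost`, and only the ports you publish are exposed to the host — which is what keeps the containers isolated from each other.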
- Built to deepen my understanding of how AI works and how multi-container setups work
- Wanted to learn how to keep containers isolated from each other
- Made for people who just want to mess around with a local AI on their own device without dealing with the setup
- Might be something I add to the homelab I plan to build
- Local LLM inference via Ollama - you can run any Ollama model you want
- Chat interface via Open WebUI - a ChatGPT-like interface that makes the stack easy to use and navigate
- Private web search via SearXNG - lets your AI search the internet without your data being collected by third parties
- Custom layout - custom logo, favicon, and CSS so you can make it fit your style
- Easy setup and usage - One command to set everything up on any device
1. Clone the repository

   ```bash
   git clone https://github.com/lukas362/Selfhost-AI.git
   cd Selfhost-AI
   ```

2. Start the stack

   ```bash
   docker compose up -d
   ```

3. Open the UI

   Navigate to http://localhost:3000 in your browser.

4. Pull a model (first time only)

   ```bash
   docker exec -it ollama ollama pull llama3
   ```

   You can find models at ollama.com/library.
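Once a model is pulled, you can also talk to Ollama directly over its HTTP API (it listens on port 11434 by default) instead of going through the chat UI. A short Python sketch, assuming the stack above is running and the model name matches what you pulled; the helper names here are mine, not part of the project:

```python
import json
import urllib.request


def build_generate_request(model: str, prompt: str) -> bytes:
    """Build the JSON body for Ollama's /api/generate endpoint.

    "stream": False asks for a single JSON reply instead of a
    stream of partial responses.
    """
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()


def ask(prompt: str, model: str = "llama3",
        host: str = "http://localhost:11434") -> str:
    """Send a prompt to the local Ollama server and return the reply text."""
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=build_generate_request(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # Non-streaming responses carry the full answer in "response".
        return json.loads(resp.read())["response"]
```

With the containers up, `ask("Why is the sky blue?")` returns the model's answer as a plain string.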