Run AI models locally with minimal setup. The only requirement is Docker Compose.
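The compose file itself isn't shown here; a minimal sketch consistent with the commands below (container name `ollama`, Open WebUI on host port 3111) might look like the following. The image names are the official published ones, but the service layout, volume, and the 3111→8080 port mapping are assumptions based on Open WebUI's default internal port:

```yaml
services:
  ollama:
    image: ollama/ollama
    container_name: ollama        # matches the `docker exec ollama ...` command below
    volumes:
      - ollama:/root/.ollama      # persist pulled models across restarts

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3111:8080"               # Open WebUI listens on 8080 inside the container
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434   # Ollama's default API port
    depends_on:
      - ollama

volumes:
  ollama:
```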
Launch Ollama and Open WebUI:

```sh
docker compose up -d
```

Stop everything with:

```sh
docker compose down
```

Pull a model inside the Ollama container:

```sh
docker exec ollama ollama pull deepseek-r1:8b
```

Browse available models at https://ollama.com/search
Then visit http://localhost:3111 in your browser to open the web UI.