This project provides a Docker-based setup to self-host AI services using Ollama, OpenWebUI and Authelia (OIDC). Follow the instructions below to get started.
## Prerequisites

- Docker
- Docker Compose
- NVIDIA GPU with drivers installed (for GPU acceleration)
- NVIDIA Container Toolkit (required for GPU access inside containers)
## Setup

1. Clone the repository:

   ```shell
   git clone https://github.com/tdharris/ollama-openwebui.git
   cd ollama-openwebui
   ```
2. Copy the sample environment file and update it with your configuration:

   ```shell
   cp .env.sample .env
   ```

   Edit the `.env` file to set your specific configuration values.
3. Build and start the services:

   ```shell
   docker compose up -d
   ```

   This will start the Ollama and OpenWebUI services.
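For orientation, a minimal compose file for this stack might resemble the sketch below. This is illustrative only, not the repository's actual `docker-compose.yml`: the image tags, port mappings, and volume mount paths are assumptions based on the defaults documented in this README and on the upstream images' conventions.

```yaml
# Illustrative sketch, not the repository's compose file.
services:
  ollama:
    image: ollama/ollama:latest
    ports:
      - "11434:11434"
    volumes:
      - ollama:/root/.ollama
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]

  open-webui:
    image: ghcr.io/open-webui/open-webui:cuda
    ports:
      - "3002:8080"
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
    volumes:
      - open-webui:/app/backend/data
    depends_on:
      - ollama

volumes:
  ollama:
  open-webui:
```

The `deploy.resources.reservations.devices` stanza is what requires the NVIDIA Container Toolkit from the prerequisites; without it, the containers run CPU-only.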
## Environment Variables

- `OLLAMA_IMAGE_VERSION`: Version of the Ollama image to use (default: `latest`).
- `OLLAMA_PORT`: Port on which Ollama will be accessible (default: `11434`).
- `OPEN_WEBUI_IMAGE_VERSION`: Version of the OpenWebUI image to use (default: `cuda`).
- `OPEN_WEBUI_PORT`: Port on which OpenWebUI will be accessible (default: `3002`).
- `OAUTH_CLIENT_ID`: OAuth client ID for OpenWebUI.
- `OAUTH_CLIENT_SECRET`: OAuth client secret for OpenWebUI.
- `OPENID_PROVIDER_URL`: URL of the OpenID provider.
- `OAUTH_PROVIDER_NAME`: Name of the OAuth provider.
- `OPENID_REDIRECT_URI`: Redirect URI for OpenID authentication.
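To make the shape of these values concrete, an illustrative `.env` might look like the following. All hostnames and secrets are placeholders, and the callback path assumes Open WebUI's OIDC callback route; adjust everything to your own deployment.

```shell
# Illustrative values only -- replace hostnames and secrets.
OLLAMA_IMAGE_VERSION=latest
OLLAMA_PORT=11434
OPEN_WEBUI_IMAGE_VERSION=cuda
OPEN_WEBUI_PORT=3002
OAUTH_CLIENT_ID=open-webui
OAUTH_CLIENT_SECRET=change-me
OPENID_PROVIDER_URL=https://auth.example.com/.well-known/openid-configuration
OAUTH_PROVIDER_NAME=Authelia
OPENID_REDIRECT_URI=https://chat.example.com/oauth/oidc/callback
```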
Note: See Authelia OIDC Open WebUI for more information on how to configure Authelia with OpenWebUI.
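As a rough sketch, the matching client registration on the Authelia side might resemble the fragment below. This is an assumption, not configuration from this repository: exact field names vary between Authelia versions, so verify against the Authelia documentation for your release.

```yaml
# Illustrative Authelia OIDC client sketch; field names vary by Authelia version.
identity_providers:
  oidc:
    clients:
      - client_id: open-webui            # must match OAUTH_CLIENT_ID in .env
        client_name: Open WebUI
        client_secret: '...'             # hashed secret; the plaintext goes in OAUTH_CLIENT_SECRET
        public: false
        redirect_uris:
          - https://chat.example.com/oauth/oidc/callback   # must match OPENID_REDIRECT_URI
        scopes:
          - openid
          - profile
          - email
```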
## Volumes

- `ollama`: Stores Ollama data.
- `open-webui`: Stores OpenWebUI data.
## Accessing the Services

- Ollama: accessible at `http://localhost:${OLLAMA_PORT}`
- OpenWebUI: accessible at `http://localhost:${OPEN_WEBUI_PORT}`
## Stopping and Updating

To stop the services, run:

```shell
docker compose down
```

To update the services to the latest version, pull the latest images and restart them:

```shell
docker compose pull
docker compose up -d
```

## Models

More models can be found in the Ollama library.
To download and run a model, use the following command:

```shell
docker exec -it ollama ollama run llama3
```

Note: Replace `llama3` with the desired model name.
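Besides the CLI, Ollama also serves an HTTP API on `OLLAMA_PORT`. The sketch below builds the JSON body for Ollama's `/api/generate` endpoint; the actual POST is left commented out so the snippet runs without a live container, and the URL is an assumption based on the default port above.

```python
import json
from urllib import request  # only needed if you uncomment the POST below

OLLAMA_URL = "http://localhost:11434"  # adjust to your ${OLLAMA_PORT}

def build_generate_request(model: str, prompt: str, stream: bool = False) -> bytes:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return json.dumps({"model": model, "prompt": prompt, "stream": stream}).encode()

body = build_generate_request("llama3", "Why is the sky blue?")
print(body.decode())

# To actually send it (requires the ollama container to be running):
# req = request.Request(f"{OLLAMA_URL}/api/generate", data=body,
#                       headers={"Content-Type": "application/json"})
# print(request.urlopen(req).read().decode())
```

With `stream` set to `False`, Ollama returns the full completion in a single JSON response instead of newline-delimited chunks.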
## Troubleshooting

Check the logs of the services if you encounter any issues:

```shell
docker compose logs -f
```

## License

This project is licensed under the MIT License.