This project provides a `docker-compose.yml` file to run Ollama and Ollama WebUI on your machine.

Prerequisites:

- Docker
Copy the `.env.example` file to `.env` and set the `MODELS_PATH` variable to the path where your Ollama models are stored.
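After copying, the edited `.env` might look something like this (the path below is only an illustration; point it at wherever your models actually live):

```shell
# Host directory that will be mounted into the Ollama container
MODELS_PATH=/home/youruser/ollama/models
```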
```shell
cp .env.example .env
```

Start the services:

```shell
docker-compose up -d
```

Stop them:

```shell
docker-compose down
```

If you have an NVIDIA GPU and want to use it with Ollama, you can use the provided `docker-compose.nvidia.yml` file. Make sure you have the NVIDIA Container Toolkit installed on your machine.
To start Ollama with NVIDIA GPU support, run:
```shell
docker-compose -f docker-compose.nvidia.yml up -d
```

Make sure the `MODELS_PATH` environment variable is set to the path where your Ollama models are stored before running the command.
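The GPU variant of the compose file typically differs from the base file only in how the GPU is exposed to the `ollama` service. A minimal sketch of what such a file might contain — the actual `docker-compose.nvidia.yml` in this repo may differ in image, ports, and volume layout:

```yaml
services:
  ollama:
    image: ollama/ollama
    runtime: nvidia          # requires the NVIDIA Container Toolkit
    environment:
      - NVIDIA_VISIBLE_DEVICES=all
    volumes:
      - ${MODELS_PATH}:/root/.ollama
    ports:
      - "11434:11434"
```

The `runtime: nvidia` key is exactly what produces the "unknown or invalid runtime name: nvidia" error described below when the toolkit is missing.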
If you encounter this issue:
```
✘ Container ollama  Error response from daemon: unknown or invalid runtime name: nvidia    0.0s
Error response from daemon: unknown or invalid runtime name: nvidia
```

Make sure you have the NVIDIA Container Toolkit installed. You can find the installation instructions here.
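As a quick reference, on Ubuntu/Debian the installation boils down to something like the following (based on NVIDIA's published instructions at the time of writing; check their documentation for the current commands and for other distributions):

```shell
# Add NVIDIA's apt repository for the container toolkit
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | \
  sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
  sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
  sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list

# Install the toolkit itself
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit
```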
Then you might need to reconfigure Docker or containerd to use the `nvidia` runtime as the default. You can follow the instructions here, but long story short, execute the following:
```shell
# For Docker
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker

# For containerd
sudo nvidia-ctk runtime configure --runtime=containerd
sudo systemctl restart containerd
```
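To verify that the configuration took effect, you can check Docker's default runtime and run a throwaway GPU workload (the `ubuntu` image and `nvidia-smi` check follow NVIDIA's own sample workload; output will vary with your setup):

```shell
# Should print "nvidia" once the default runtime is switched
docker info --format '{{.DefaultRuntime}}'

# Sample GPU workload; should print your GPU table if everything works
sudo docker run --rm --runtime=nvidia --gpus all ubuntu nvidia-smi
```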