Run Ollama locally on your Home Assistant instance.
**Tip:** For detailed installation instructions and examples for the app, download my PDF: https://automatelike.pro/ollama-ha
**Warning (large download):** The app image is over 3.7 GB. Installation can take 25-30 minutes or more on a Raspberry Pi 5, depending on your internet speed and SD card performance. Please be patient and ensure you have sufficient disk space.
- Add the repository:
  - Go to Settings > Apps > App Store.
  - Click the three dots in the top right > Repositories.
  - Add the URL of this repository: https://github.com/peyanski/ollama-ha
- Install the app:
  - Scroll down or search for "Ollama".
  - Click Install. (This is the long step!)
- Start the app:
  - Once installed, toggle "Start on boot" and click Start.
  - Check the "Log" tab to ensure the server starts listening on port 11434.
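Once the log shows the server listening, you can double-check it from a terminal. A minimal sketch, assuming you run it from a shell on the same host as the add-on (e.g. the Advanced SSH & Web Terminal), so `localhost:11434` reaches the server:

```shell
# Query Ollama's version endpoint to confirm the server is up.
# localhost:11434 assumes this shell runs on the same host as the add-on.
OLLAMA_URL="http://localhost:11434"
if curl -fsS "$OLLAMA_URL/api/version"; then
  echo "Ollama is listening on $OLLAMA_URL"
else
  echo "Ollama is not reachable on $OLLAMA_URL (is the app started?)"
fi
```

A successful check prints a small JSON payload with the server version.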
By default, GPU support is disabled to ensure compatibility. If you have a supported GPU passed through to Home Assistant:
- Go to the Configuration tab.
- Toggle `gpu` to `true`.
- Restart the app.
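In the Configuration tab's YAML view, the resulting option is a single key. A sketch, assuming the option key matches the `gpu` toggle shown in the UI:

```yaml
# App configuration (Configuration tab, YAML view)
gpu: true
```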
To free up disk space or check what is installed, use the command line.
- Install the Advanced SSH & Web Terminal app.
- Start it and open the terminal.
- Find the container name: `docker ps | grep ollama-ha` (look for a name like `addon_ad7c61ed_ollama-ha`).
- Enter the container: `docker exec -it addon_ad7c61ed_ollama-ha bash` (replace `addon_ad7c61ed_ollama-ha` with your actual container name).
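The two steps above can be combined so you don't have to copy the name by hand. A sketch, assuming your shell has access to the Docker CLI (the hex prefix in the container name differs per install, which is why we look it up):

```shell
# Look up the full add-on container name, then open a shell inside it.
NAME=$(docker ps --filter "name=ollama-ha" --format '{{.Names}}' 2>/dev/null)
if [ -n "$NAME" ]; then
  echo "Found container: $NAME"
  docker exec -it "$NAME" bash
else
  echo "No running ollama-ha container found"
fi
```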
Once inside the container:
- List installed models to check which ones are using space: `ollama list`
- Delete a model by name: `ollama rm llama3.2`
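The same cleanup can be driven from the host shell without entering the container, by passing the `ollama` commands to `docker exec`. A sketch; `addon_ad7c61ed_ollama-ha` is the example name from above, so replace it with your actual container name:

```shell
# Run ollama commands inside the add-on container from the host shell.
CONTAINER="addon_ad7c61ed_ollama-ha"   # example name; yours will differ
if command -v docker >/dev/null 2>&1; then
  docker exec "$CONTAINER" ollama list        # show installed models and sizes
  docker exec "$CONTAINER" ollama rm llama3.2 # remove a model by name
else
  echo "docker CLI not available in this shell"
fi
```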