Previous: None | Next: Running Hatchling
This article is about:
- Setting up Docker Desktop for running Hatchling
- Configuring GPU support for LLM acceleration
- Installing and configuring Ollama Docker container
This document provides instructions for setting up and running Ollama in Docker to deploy and run local LLMs.
- Docker Desktop: install Docker Desktop (installation steps below)
[!Note] If you wish to leverage GPUs running on a server, then you must install docker on both the GPU server and your local computer that will access the GPU resources. Docker on the GPU server will be used to install Ollama (this guide), and Docker on the local computer will be used to install Hatchling (next guide).
- On Windows, install Windows Subsystem for Linux (WSL). The latest version is WSL 2: Official Microsoft Documentation
- GPU Support:
- For macOS users with Apple Silicon chips (typically the M series), follow the instructions for CPU and ignore the GPU-related sections
- For Windows & Linux with dedicated GPUs, we strongly recommend enabling GPU support to increase LLM output speed. We will be using the official documentation for each GPU type:
- NVIDIA GPUs: NVIDIA Container Toolkit Installation
- AMD GPUs: AMD ROCm Installation
- Install Docker Desktop:
  - Download and install Docker Desktop following the official instructions: https://docs.docker.com/get-docker/
- On Windows, connect Docker to WSL:
  - In Docker Desktop, follow the arrows numbered 1, 2, and 3 on the screenshot to navigate to Settings > Resources > WSL Integration.
  - Either enable integration with your default WSL distro (arrow 4.1) OR select a specific one (arrow 4.2).
  - Click "Apply & Restart" if you make changes (arrow 5).
- For GPU owners, set up GPU support:
  - Open a terminal on the computer with the GPU you want to use (for GPU servers, you will likely connect through SSH).
    - On Windows, launch the Linux distribution that was installed via WSL and that Docker is using. For example, in the previous image that would be Ubuntu-24.04; so, run `wsl -d Ubuntu-24.04` to start Ubuntu.
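If you are unsure which WSL distribution Docker is using, you can list the installed distributions first. The commands below run in PowerShell or CMD; `Ubuntu-24.04` is just the example name from the screenshot above, so substitute your own.

```shell
# List installed WSL distributions and their versions (run in PowerShell or CMD)
wsl -l -v

# Start the distribution that Docker Desktop is integrated with
# ("Ubuntu-24.04" is an example name; use the one from your list)
wsl -d Ubuntu-24.04
```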
- For NVIDIA GPU support, run:

```shell
# Add NVIDIA repository keys
curl -fsS https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg

# Add NVIDIA repository
curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
  sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
  sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list

# Update package lists
sudo apt-get update

# Install NVIDIA container toolkit
sudo apt-get install -y nvidia-container-toolkit

# Configure Docker runtime
sudo nvidia-ctk runtime configure --runtime=docker
```
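As a quick sanity check (adapted from NVIDIA's Container Toolkit documentation), you can restart Docker and run `nvidia-smi` inside a throwaway container; if the toolkit is configured correctly, it prints the same GPU table you would see on the host. On Windows/WSL, restart Docker Desktop instead of using `systemctl`.

```shell
# Restart Docker so the new runtime configuration takes effect
sudo systemctl restart docker

# Run nvidia-smi inside a disposable container; a GPU table indicates
# that containers can now access the GPU
sudo docker run --rm --runtime=nvidia --gpus all ubuntu nvidia-smi
```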
- For AMD GPU support, run:

```shell
# Install required packages
sudo apt install python3-setuptools python3-wheel

# Download and install the AMD GPU installer script (for Ubuntu 24.04)
sudo apt update
wget https://repo.radeon.com/amdgpu-install/6.4.2.1/ubuntu/noble/amdgpu-install_6.4.60402-1_all.deb
sudo apt install ./amdgpu-install_6.4.60402-1_all.deb

# Install graphics and ROCm support
sudo amdgpu-install -y --usecase=graphics,rocm

# Add current user to the render and video groups
sudo usermod -a -G render,video $LOGNAME
```
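To verify the ROCm installation took effect, a quick check might look like the following; note that the group change above only applies after you log out and back in.

```shell
# Confirm the current user is now in the render and video groups
# (log out and back in first for the change to apply)
groups

# List the agents ROCm can see; your discrete GPU should appear
# alongside the CPU (rocminfo is installed by the rocm usecase)
rocminfo
```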
- Close the terminal.
- Restart Docker.
- Pull the Ollama image:
- Open a terminal capable of running docker commands.
- Run `docker pull ollama/ollama` in the terminal and press Enter. It will download about 1.6 GB (as of May 2025).
- Once finished, click on the `Images` tab (arrow 1) of Docker Desktop, and check that `ollama/ollama` is available (arrow 2).
- Alternatively, to check that the image exists, run `docker images -a`. The output should include a line similar to `ollama/ollama latest d42df3fe2285 11 days ago 4.85GB` (May 2025).
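If you want to sanity-check the image before moving on to the next guide, the commands below (based on the Ollama image's published Docker usage, with `--gpus=all` assuming the NVIDIA setup above) start the server and confirm it responds. CPU-only users can drop the GPU flag; AMD users would use the `ollama/ollama:rocm` image with `--device /dev/kfd --device /dev/dri` instead.

```shell
# Start the Ollama server in the background, persisting models in a named volume
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

# The server listens on port 11434; it should reply "Ollama is running"
curl http://localhost:11434
```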


