How to Run Ollama Locally Using Docker

Prasad Khode
2 min read · Sep 2, 2024

Running AI models locally is a great way to use large language models without relying on cloud services. In this guide, I will show you how to set up and run Ollama locally using Docker. This covers pulling the necessary images, setting up containers, and running models. Let’s dive in!

Prerequisites

Before we begin, ensure you have Docker installed on your machine. If not, you can download it from the official Docker website.
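
To confirm that Docker is installed and the daemon is running, you can run a quick check in a terminal:

docker --version
docker info
  • docker --version: Prints the installed Docker version.
  • docker info: Shows details about the Docker daemon; it fails with an error if the daemon isn't running.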

Step 1: Pull the Ollama Docker Image

First, we need to pull the Ollama Docker image from Docker Hub. This image contains everything you need to run Ollama.

docker pull ollama/ollama
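
Once the pull finishes, you can confirm the image is available locally:

docker images ollama/ollama

This lists the ollama/ollama image along with its tag and size.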

Step 2: Run the Ollama Container

Once the image is pulled, we can run the container. This step will start Ollama and expose it on port 11434.

docker run -d -p 11434:11434 --name ollama ollama/ollama
  • -d: Runs the container in detached mode.
  • -p 11434:11434: Maps the container's port 11434 to your machine's port 11434.
  • --name ollama: Names the container "ollama".
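
To verify that the container started and the Ollama API is reachable, you can check the container status and hit the exposed port; the root endpoint simply reports that the server is up:

docker ps --filter name=ollama
curl http://localhost:11434/

The curl command should return the message "Ollama is running".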

Step 3: Execute Models within the Ollama Container

Now that the container is running, you can execute different models inside it. For example, to run the llama3 model, use the following command (the first time you run a model, Ollama downloads its weights, so it may take a few minutes):

docker exec -it ollama ollama run llama3

Similarly, to run the gemma model, use:

docker exec -it ollama ollama run gemma
  • docker exec -it: Executes a command inside the running container with an interactive terminal attached.
  • ollama run llama3: Starts an interactive session with the llama3 model, pulling the model first if it isn't already downloaded.
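
The container also exposes Ollama's HTTP API on port 11434, so you can call a model without attaching to the container at all. As a minimal sketch, this request assumes you already pulled llama3 in the step above:

curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Why is the sky blue?",
  "stream": false
}'

Setting "stream": false returns the full answer as a single JSON object instead of a token-by-token stream.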

Step 4: Set Up Ollama with a Web UI

If you prefer to interact with Ollama via a web interface, you can set up a web UI using Docker. There are two options:

Option 1: Open-WebUI

This option sets up Open-WebUI, a general-purpose web interface that works with Ollama as well as other OpenAI-compatible backends.

docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main

Option 2: Ollama-WebUI

This option sets up Ollama-WebUI, which is tailored specifically for Ollama (it is the earlier project that was later renamed to Open-WebUI).

docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v ollama-webui:/app/backend/data --name ollama-webui --restart always ghcr.io/ollama-webui/ollama-webui:main
  • -p 3000:8080: Maps the container's port 8080 to port 3000 on your machine.
  • --add-host=host.docker.internal:host-gateway: Lets the container reach the Ollama server running on your host via host.docker.internal.
  • -v ollama-webui:/app/backend/data: Mounts a named volume for persistent data storage.
  • --restart always: Ensures the container restarts automatically if it stops or when Docker restarts.

Open http://localhost:3000 in your browser and complete the initial setup to access your Ollama models through the web UI.
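
As an alternative to --add-host, you can also put both containers on a user-defined Docker network so the web UI reaches Ollama by container name. This is a minimal sketch, assuming Open-WebUI reads the Ollama address from the OLLAMA_BASE_URL environment variable (documented by the project); adjust names and ports as needed:

# Create a shared network for the two containers
docker network create ollama-net

# Start Ollama on that network (remove any existing "ollama" container first,
# or attach it with: docker network connect ollama-net ollama)
docker run -d --network ollama-net -p 11434:11434 --name ollama ollama/ollama

# Start Open-WebUI and point it at the Ollama container by name
docker run -d --network ollama-net -p 3000:8080 \
  -e OLLAMA_BASE_URL=http://ollama:11434 \
  -v open-webui:/app/backend/data \
  --name open-webui --restart always \
  ghcr.io/open-webui/open-webui:main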

Conclusion

By following these steps, you’ve successfully set up Ollama to run locally on your machine using Docker. Whether you’re running models directly or interacting through a web interface, you now have a powerful tool at your disposal. Feel free to explore the capabilities of Ollama and experiment with different models.
