How to Install the Latest Versions of Docker and Docker Compose on Ubuntu and Debian
Docker is no longer just a tool for developers. It’s widely used for test environments, production services, and even small self-hosted projects at home. Docker Compose makes this setup even more convenient: instead of manually running multiple containers, you can describe the entire setup in a single file and launch it with one command.
On Ubuntu and Debian, Docker can be installed in just a couple of commands from the default repositories. The problem is that this method doesn’t always provide the latest version.
Sometimes this leads to missing features, unstable application behavior, or incompatibility with newer configurations.
That’s why, if you need an up-to-date Docker and a modern version of Compose, it’s better to install them from Docker’s official sources rather than the distribution’s base repository. Below, we’ll walk through the entire process step by step.
What Exactly Gets Installed
Before starting, it’s worth clarifying a few terms. The word “Docker” is often used to refer to everything at once: the engine, the client, additional components, and build tools. In reality, these are separate parts.
Docker Engine is the core of the system. It runs containers and manages images, networks, volumes, and background processes. If you just need to run a single service (like Nginx, Redis, or a small app), this is usually enough.
Docker Compose is needed when one container isn’t enough. For example, if your setup includes a web application, a database, and a background worker, it’s much easier to manage them as a single unit. Compose allows you to define this setup in a YAML file and control it using familiar commands.
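As an illustration, here is a minimal sketch of such a YAML file (the service names and images are placeholder examples, not part of any real project); the heredoc writes it to compose.yaml in the current directory:

```shell
# Write a minimal compose.yaml describing two containers managed as one
# unit. nginx and redis here stand in for your own services.
cat > compose.yaml <<'EOF'
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
  cache:
    image: redis:alpine
EOF

# Once Docker and Compose are installed, the whole stack starts with:
#   docker compose up -d
```

A single `docker compose up -d` then creates and starts both containers together, which is the convenience the paragraph above describes.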
When installing on Ubuntu or Debian, you’ll typically encounter the following packages:
- docker.io — the package shipped in the default Ubuntu/Debian repositories; it may be outdated or differ from current documentation
- docker-ce — the up-to-date Community Edition maintained by the Docker team
- docker-compose-plugin — the official plugin that enables the docker compose command
- docker-buildx-plugin — an extension for more flexible image building, including multi-platform builds
- containerd.io — a low-level container runtime used by Docker Engine
This is why installing from official sources is usually preferable: you get a predictable set of components and are less dependent on how quickly your distribution updates its repositories.
Preparing the System
Before installing new packages, update the package list:
sudo apt update
Then install the required dependencies:
sudo apt install ca-certificates gnupg curl
Here’s what they’re for:
- ca-certificates — enables proper HTTPS support
- gnupg — used to verify repository signatures
- curl — needed to download keys and files via URL
This is a basic preparation step. Without it, the installation may fail or behave incorrectly.
Installing Docker on Ubuntu and Debian
The next step is adding Docker’s GPG key so that apt can verify package authenticity.
For Ubuntu:
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker.gpg
For Debian:
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker.gpg
Next, add the official Docker APT repository.
For Ubuntu:
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
For Debian:
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker.gpg] https://download.docker.com/linux/debian $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
This creates a separate repository file so Docker updates through apt while remaining isolated from system repositories.
Update package lists again:
sudo apt update
Now install Docker and related components (you could also add docker-compose-plugin here, but the next section installs Compose manually so you always get the latest release):
sudo apt install docker-ce docker-ce-cli containerd.io docker-buildx-plugin
This installs:
- Docker Engine
- CLI client
- containerd
- Buildx plugin
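As a quick sanity check, the loop below only confirms that the main binaries landed on your PATH; it does not test that the daemon is running (that comes next):

```shell
# Report whether each expected binary is reachable on PATH.
for cmd in docker containerd; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "$cmd: found"
  else
    echo "$cmd: missing"
  fi
done
```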
To verify the installation:
docker --version
If a version is displayed, the client is installed correctly. Next, check that the Docker service is running:
sudo systemctl status docker
If needed, enable it so it starts automatically at boot:
sudo systemctl enable docker
Adding the --now flag (sudo systemctl enable --now docker) also starts the service immediately.
This is especially useful on servers where Docker should start automatically after reboot.
Installing Docker Compose
There’s an important nuance here. The old standalone docker-compose command (Compose v1) is no longer maintained. The modern approach (Compose v2) ships as a Docker CLI plugin and is invoked as docker compose.
Docker searches for plugins in several locations. For a system-wide installation, it’s best to place Compose in a shared directory:
sudo mkdir -p /usr/local/lib/docker/cli-plugins/
Download the latest Docker Compose binary:
sudo curl -fSL https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m) -o /usr/local/lib/docker/cli-plugins/docker-compose
Here, -f makes curl fail on an HTTP error instead of saving the error page as the binary, and -L follows redirects.
Make it executable:
sudo chmod +x /usr/local/lib/docker/cli-plugins/docker-compose
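The download URL above is assembled from your kernel and architecture names. To preview exactly which release asset it will request on your machine, you can echo the same expression:

```shell
# Print the asset name the URL resolves to: uname -s is the kernel name
# and uname -m the machine architecture.
echo "docker-compose-$(uname -s)-$(uname -m)"
```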
Verify installation:
docker compose version
If a version is returned, Compose is installed correctly and recognized by Docker CLI.
Where Docker Looks for Plugins
To avoid confusion, it helps to understand how Docker finds plugins.
First, it checks user paths:
- $DOCKER_CLI_PLUGIN_PATH (if set manually)
- ~/.docker/cli-plugins/
Then system directories:
- /usr/local/lib/docker/cli-plugins/ — convenient for manual installs
- /usr/lib/docker/cli-plugins/ — typically used by package managers
That’s why /usr/local/lib/docker/cli-plugins/ is usually the best option for manual installation: the plugin becomes globally available and isn’t tied to a specific user.
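If you only need Compose for your own account, a per-user install into the home-directory path from the list above works without sudo. A minimal sketch (the download and chmod steps mirror the system-wide install and are shown commented out so nothing is fetched until you run them deliberately):

```shell
# Per-user install: Docker also searches ~/.docker/cli-plugins/.
mkdir -p "$HOME/.docker/cli-plugins"

# Then download the binary and mark it executable (same URL as above):
#   curl -fSL "https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m)" \
#     -o "$HOME/.docker/cli-plugins/docker-compose"
#   chmod +x "$HOME/.docker/cli-plugins/docker-compose"
```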
What to Check After Installation
Once Docker and Compose are installed, run a few quick checks.
Start with a test container:
sudo docker run hello-world
This downloads a test image and verifies that Docker Engine can run containers.
Another practical point is permissions. By default, many Docker commands require sudo. This is normal, but you can add your user to the docker group to run commands without elevated privileges:
sudo usermod -aG docker $USER
You’ll need to log out and back in for this to take effect. Keep in mind that Docker access effectively grants elevated system control, so use this carefully on production servers.
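A quick way to check whether the group change is active in your current session (remember that a fresh login is required after usermod):

```shell
# id -nG lists the groups of the current session; grep -w matches the
# whole word "docker" only.
if id -nG | grep -qw docker; then
  echo "docker group active; docker commands should work without sudo"
else
  echo "docker group not active yet; log out and back in"
fi
```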
Why Not Use the Default Repository
At first glance, installing Docker via apt seems easier: the package is available, the command works, and Docker gets installed.
The downside is that you depend on the version maintained by your distribution. Stable server distributions update packages conservatively, prioritizing stability over new features.
For core system components, this is a benefit. But with Docker, you often want newer features—especially if you rely on official documentation, use modern Compose features, build images with Buildx, or move configurations between machines.
Final Thoughts
After completing these steps, your server will have the latest versions of Docker Engine and Docker Compose installed. The system will be ready to run containers, build images, and deploy multi-service applications.
This approach is convenient because you work with up-to-date components, avoid limitations of outdated packages, and get predictable behavior aligned with current Docker documentation.
For test environments, personal projects, CI pipelines, and typical server workloads, this setup is more than sufficient. And if you later need something more complex—multiple services, custom networks, volumes, or automated builds—you’ll already have a solid and modern foundation to build on.