Docker_ Complete Guide to Docker for Beginners and Intermediates_ (Code Tutorials Book 6) [BooxRack]
Craig Berg
Introduction
In the early days of technological evolution, developers deployed
applications directly on physical machines, each equipped with its own
operating system. Because there was a single user space, applications shared
a runtime.
Although deployment on physical machines was stable, maintenance was
long and arduous, especially when each host used a different operating system.
There was no flexibility for developers or the hosted applications.
As you can imagine, this caused many issues when there was more than one
application to build, each requiring regular maintenance and a standalone
machine.
The system should also be running kernel version 3.8 or higher. Check the
kernel version using the command:
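The kernel version check is typically done with uname:

```shell
# Print the running kernel release, e.g. 4.19.0-6-amd64;
# the leading number must be 3.8 or higher
uname -r
```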
You can also check for supported storage backends such as DeviceMapper,
VFS, AUFS, ZFS, and Overlay filesystem. Systems such as Ubuntu may use
Overlay FS.
Most Linux distributions should have the device-mapper thin-provisioning
module for implementing the layers. To check whether you have device-mapper
installed on your distro, use the command:
dmsetup ls
Finally, you should ensure you’ve enabled support for namespaces and
cgroups . Since most Linux distributions have made them available and
supported for a while, your Linux distro should have that support built in. To
check cgroups and namespaces, check the kernel configuration file using the
command:
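One common way to check is to grep the kernel configuration for cgroup and namespace options; this is a sketch, and the config file path may differ on your distro:

```shell
# Look for CGROUP and namespace (NS) options in the running kernel's config;
# lines ending in =y or =m indicate the feature is enabled
grep -E 'CGROUP|CONFIG_(UTS|IPC|PID|NET|USER)_NS' /boot/config-$(uname -r)
```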
For smooth experiences, Mac users should have the following system
requirements.
Once downloaded, open the Docker.dmg file to open the installer and drag the
Docker icon to the Applications folder on your system.
To start Docker, click on the docker icon in the Applications folder and wait
for initialization. Once Docker is running, you will get a welcome window
with a starter tutorial.
Installing Docker On Linux (Debian Buster)
Before installing Docker on Linux, ensure you are using a system that meets
the requirements listed earlier.
Next, we need to remove all previous installations of Docker on the system.
The names of previous Docker installations might include: docker, docker.io,
docker-engine, containerd, and runc.
Open the terminal and enter the command below:
sudo apt-get remove docker docker-engine docker.io containerd runc -y
We can now install Docker without the probability of running into problems
caused by previous installations.
In the terminal, start by executing the command:
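The usual first steps, based on Docker's Debian install instructions, are updating the package index, installing the prerequisites, and adding Docker's GPG key; treat this as a sketch and consult the official documentation for the current commands:

```shell
# Refresh the package index and install packages needed to use an HTTPS repository
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl gnupg2 software-properties-common

# Download and add Docker's official GPG key
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -
```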
Next, verify that you are using the Docker official key with fingerprint:
Use the command below to search the last characters of the key and verify:
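Docker's documentation suggests searching by the last eight characters of the fingerprint; a sketch (0EBFCD88 is the documented suffix at the time of writing, but verify it against the official docs):

```shell
# List the key whose fingerprint ends in the documented characters
# and compare the full fingerprint against the official documentation
sudo apt-key fingerprint 0EBFCD88
```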
The next step is to add the docker apt repository to the stable repository
channel. Use the commands below:
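A typical way to add the stable repository on Debian, assuming an amd64 system; adjust the architecture and distribution codename for your host:

```shell
# Add the stable Docker repository for the current Debian release
sudo add-apt-repository \
   "deb [arch=amd64] https://download.docker.com/linux/debian \
   $(lsb_release -cs) \
   stable"
```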
If you prefer frequent updates, you can use the nightly test channel by
changing the stable value to test in the above command.
NOTE: Nightly channel may have a few bugs.
Now we can install Docker engine using the command below:
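The engine packages can then be installed from the new repository; a sketch:

```shell
# Refresh the index again so the new repository is visible, then install
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io
```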
Once the installation completes, verify that the installation is working using
the command:
docker container run hello-world
You can also configure the Docker daemon to start at boot time using
systemctl.
Start the docker service using the command: sudo systemctl start docker
Enable the docker service at startup using the command: sudo systemctl
enable docker
Stop the service with: sudo systemctl stop docker
How To Use A Script To Automate Docker Install
In most cases, you will need to set up Docker on a single host, which is a
relatively simple process. However, if you need to set up Docker on hundreds
of hosts, the task becomes repetitive and tedious. You can use a script to
automate this process.
Open the terminal and create a new bash file. Ensure you are
comfortable with what the script does before executing it. You will require
sudo or root permissions.
Navigate to the following URL:
https://get.docker.com
Once there, copy the script and save it to a file. Once saved, execute the file
using the command:
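A common way to fetch and run the convenience script with curl; review the downloaded script before executing it:

```shell
# Download the installation script and execute it with root privileges
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
```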
Section 3: How to Pull Docker Images and Run
Containers
This section intends to test whether Docker is running as expected, not to
explain the concepts. We will cover the entire Docker workflow in later
sections.
We will start by pulling a Docker image and running a container from that
image. Although Docker Desktop provides a graphical interface, we will use
the command line throughout this book. To avoid
potential errors, ensure that the Docker daemon is running. Open the terminal
and enter the command:
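A quick check that the daemon is reachable, followed by pulling the nginx image used as the example below; a sketch:

```shell
# docker version talks to the daemon; an error here means it is not running
docker version

# Pull the latest nginx image from Docker Hub
docker image pull nginx
```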
Once you have the image downloaded, you can view the list of images using
the command:
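Local images can be listed with:

```shell
# Show repository, tag, image ID, creation time, and size for local images
docker image ls
```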
For example, to create an Nginx container, we use the docker container run
command. We can then list the running containers with:
$ docker container ls
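A hedged sketch of the run command; the container name webserver and the port mapping are illustrative choices, not fixed requirements:

```shell
# Start an Nginx container in the background, mapping host port 8080
# to container port 80
docker container run -d --name webserver -p 8080:80 nginx
```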
Docker uses a client-server architecture. The Docker binary has the Docker
client and the Docker server daemon that exist in a single host.
The Docker client can communicate with a local or remote docker server
daemon via the network sockets or RESTful API clients. The docker server
daemon is responsible for performing tasks such as building, running, and
distributing containers.
The Docker client sends commands to the Docker server daemon
running on a remote host or localhost, which then connects to the Docker
registry to get the images requested by the Docker client.
In our simple example above, the docker client installed with the docker
binary communicates with the docker server daemon that then connects to the
docker registry requesting an NGINX image. Once downloaded or found, we
can use it to create containers.
A docker image refers to a read-only template used to create containers
during runtime. Docker image templates depend on the base image and all the
layers residing in it.
A Docker registry stores Docker images; the Docker daemon references
it when pulling images. Docker registries can be public or private, depending
on the specified settings and the location images are pulled from and pushed
to. Public Docker images are available on Docker Hub.
An image repository refers to a collection of related images
distinguished by their tags. For example, you can install various versions
of the nginx image by passing a tag, as in docker image pull nginx:latest ,
where latest can be substituted with the desired version.
Containers are lightweight "virtual machines" that run a base image and the
accompanying layers. Containers hold all the requirements for running the
applications inside them.
A Docker registry index manages accounts, searches, tags, permissions, etc.
in a public docker image registry.
This concept can be illustrated using the following image:
Let’s move on to the next section and learn how to work with containers:
Section 4: Working With Docker Containers
In this section, we are going to cover Docker containers in more detail. We
will cover how to search and pull images, list and manage containers, manage
container logs, remove containers, stop containers, and so much more.
In the previous section, we illustrated a simple process of creating containers
using Docker. As the primary goal of Docker is to create containers, this
section shall delve a bit deeper into that.
Getting comfortable with performing tasks such as creating, updating,
stopping, and deleting containers will allow you to utilize the full
functionalities of Docker.
Before getting started, let us make sure that Docker is up and running by
using the command-line utility to get the docker version. If not running, you
will get an error close to the one shown below:
Once Docker is running, you will get an output displaying the client and
server versions as well as other detailed information.
API version: 1.40 (minimum version 1.12)
Go version: go1.12.17
Git commit: afacb8b
How To Search And List Images In Docker
To start creating a docker container, we need an image to use. If you know
the name of the image, you can pull the image using the docker pull
command.
We can also search the docker registry that holds both private and public
images for the target we are looking for:
By default, docker search is executed against the Docker public registry
available at:
https://hub.docker.com
To search for an image in the Docker registry, we use the docker search
command, which uses the following syntax.
For example, if we want to search for a Debian or Nginx image, we use the
command:
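The general syntax and the two example searches look like this:

```shell
# General syntax: docker search [OPTIONS] TERM
docker search debian
docker search nginx
```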
The docker search output gives information related to the Debian images,
such as the names, descriptions, and number of star ratings given to the images. It
also provides information about whether an image is official or not, as well as
its automation status.
The name shows the official name allocated to the image. The name uses the
user/image-name naming convention.
The stars show how many users have liked the given image and how popular
it is. The official status, on the other hand, shows whether the listed image is
from a trusted source or not. The Automated status shows whether the image
is built automatically once it’s pushed into a version control system or not.
You can also pass the --filter option to show only automated images, images
with a rating in a particular range, or only official images.
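For example, filters can be combined; a sketch:

```shell
# Show only official debian images that have at least 10 stars
docker search --filter is-official=true --filter stars=10 debian
```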
Let us create a docker container using an image of our choice. How about a
CentOS 7 container?
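A sketch of creating an interactive CentOS 7 container:

```shell
# Pull centos:7 if it is absent, create a container, and drop into a bash shell
sudo docker container run -it centos:7 /bin/bash
```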
Merging all the layers from the image using a union filesystem
Allocating a unique identifier to the container
Allocating a filesystem and mounting the read/write layer for the
container
Provisioning a bridge network interface
Assigning an Internet Protocol address to the container
Executing the commands specified by the user
In our case, the command specified by the user is /bin/bash , which allows us
to interact with the system directly.
Container-specific information such as the hostname, logs, and configuration
details is stored under /var/lib/docker/containers
By default, the docker run command automatically initializes and starts the
docker container. However, you can create and start the docker container
later using the commands
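The two-step equivalent of docker run is docker container create followed by docker container start; a sketch using a hypothetical container name:

```shell
# Create the container without starting it
sudo docker container create -it --name myDebian debian

# Start it later; -a attaches to its output, -i keeps stdin open
sudo docker container start -a -i myDebian
```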
Using these commands starts the container in the background, which then
becomes attachable. You can also start a container in the background by
passing the -d flag in the docker run command.
You can also choose to delete a container automatically once it exits using
the --rm flag, as shown in the command below:
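For example:

```shell
# The --rm flag removes the container automatically when the shell exits
sudo docker container run -it --rm debian /bin/bash
```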
Once you exit the interactive shell, the container is destroyed
automatically.
To get more information on the docker run options, use the
Docker documentation or the docker run --help command in the terminal.
https://docs.docker.com/engine/reference/commandline/container_run/
How To List Containers In Docker
To view all the docker containers, both running and stopped, we use the
docker container ls command. The general syntax for the command is:
docker container ls [OPTIONS]
For example, to view all the containers in the host:
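For instance:

```shell
# -a (or --all) includes stopped containers as well as running ones
docker container ls -a
```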
Once we run the docker container ls command, the Docker daemon fetches
the metadata associated with the containers. Unless specified otherwise, the
docker container ls command returns the following metadata about the
containers.
To add more functionality to the logs output, use command flags such as -t ,
which displays timestamps for the logs. Another useful flag is -f , which
gives tail-like follow behavior.
To find more information about the docker container logs command, use the
documentation or docker container logs --help
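A sketch, assuming a container named debian exists:

```shell
# Show the container's logs with timestamps (-t) and keep following them (-f)
sudo docker container logs -t -f debian
```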
How To Stop And Destroy Docker Containers
We can stop one or more running containers at once using the docker
container stop command. The general syntax for the command is:
docker container stop [OPTIONS] [CONTAINER1] [CONTAINER2]
[CONTAINER…n]
To stop the Debian container we created in earlier sections, use the
command:
sudo docker container stop debian
debian
Once we call the stop command, Docker moves the container
from the running state to the stopped state by stopping all the
processes within the container.
To stop all running containers, execute the following command:
docker stop $(docker ps -q)
For example, list all the containers and then stop them:
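A sketch of the sequence:

```shell
# List all containers, then stop every running one;
# docker ps -q prints only the IDs of running containers
docker container ls
docker stop $(docker ps -q)
```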
You can find more information on docker stop command on the official
documentation available here:
https://docs.docker.com/engine/reference/commandline/container_stop/
We can completely remove a container using the docker rm command.
Before removing a container, you have to stop the container; alternatively,
you can use the force option. The standard syntax is:
sudo docker container rm [OPTIONS] CONTAINER [CONTAINER...]
For example, let us remove all the containers in our host. First, we will start
by listing all the containers.
sudo docker container ls -a
Next, call docker container rm with --force if the containers are running,
followed by the list of containers to remove, as shown below:
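A sketch, with illustrative container names:

```shell
# --force stops and removes the listed containers in one step
sudo docker container rm --force debian nginx
```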
You can also choose to remove the links and the volumes associated with the
container. The docker rm command works by removing the read/write layer
created at the first initialization of the container.
You can also remove all the stopped containers at once. Let us start by
creating containers, stopping them, and removing them all at once.
Next, ensure that the status of the containers is exited , then remove them
using the docker container prune command:
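A sketch of the full sequence:

```shell
# Confirm the containers show an Exited status, then remove all stopped ones
sudo docker container ls -a
sudo docker container prune
```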
For example, to start a container with a restart action of always, we use the
command:
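A sketch, using nginx as an illustrative image:

```shell
# Restart the container automatically whenever it stops, including at boot
sudo docker container run -d --restart=always nginx
```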
For example, if we want to mount a partition under /sda2 we can use the
command;
For example, we can inject bash on an apache container using the commands:
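A sketch, assuming a running container named apache:

```shell
# exec runs a new process (here, an interactive bash shell)
# inside the running container
sudo docker container exec -it apache /bin/bash
```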
Let us start by creating a new container from scratch; you can use any
image you prefer, such as debian, ubuntu, centos, arch, etc.
Once you have the container created, in the terminal, update the repositories
within the container using the container’s default package manager.
Next, install a package of your choice such as LAMP stack (Linux Apache
MySQL and PHP) as shown:
Step 1
Install the apache package and follow the configurations to set it up:
Step 2
Install MySQL packages on the system
Step 3
Finally, install a database management tool such as PhpMyAdmin
apt-get install phpmyadmin
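The package commands for the three steps might look like this inside a Debian-based container; package names vary by distribution (for example, newer Debian releases ship mariadb-server instead of mysql-server):

```shell
# Step 1: install the Apache web server
apt-get install -y apache2

# Step 2: install the MySQL database server
apt-get install -y mysql-server

# Step 3: install phpMyAdmin, which pulls in PHP as a dependency
apt-get install -y phpmyadmin
```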
Once completed and appropriately configured, we can create a new image
with the LAMP stack installed on the image.
Open a new terminal window and enter the commands as shown. The docker
container can be inactive or running. To create an image, use the container id
instead of the container name:
Next, you can view the images within the host:
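A sketch of the commit, with an illustrative author, message, and container id (replace d4c2f1e9a8b7 with your container's actual id):

```shell
# Create an image named ubuntu-lamp from the container's read-write layer
sudo docker container commit -a "Your Name" -m "LAMP stack installed" \
    d4c2f1e9a8b7 ubuntu-lamp

# Confirm the new image appears in the local image list
sudo docker image ls
```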
As you can see, Docker creates an image from the container and gives it the
name ubuntu-lamp. You pass your name as the author of the image and a
message providing information about the image.
Docker containers are created from image layers, where each layer is
inherited from its parent layer. Since the layers are in read-only
mode, a read-write layer is created on top, which allows us to perform
modifications on the system, such as installing packages.
Since this layer is automatically cleaned up upon stopping or destroying
the container, we used the docker commit command to preserve it and create
a Docker image stored alongside other Docker images.
To view all the changes of the container’s filesystem from its parent image,
we use the docker diff command:
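A sketch, with myContainer as a hypothetical container name:

```shell
# Show filesystem changes relative to the container's base image
sudo docker container diff myContainer
```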
This command displays all the changes that have occurred within the
filesystem. You can output to a text file for later inspection and debugging.
The docker diff command prepends a prefix to each modified file or
directory: A for added, C for changed, and D for deleted.
To log-in using a GitLab account, create an account on GitLab, and use the
log-in credentials to log into Docker.
To view all the stored login credentials, dump the contents of the config.json
file using the command:
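The client stores credentials in the current user's .docker directory by default:

```shell
# Dump the stored registry authentication entries
cat ~/.docker/config.json
```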
To logout on all the registries, use the docker logout command:
To find out more information about the docker logout command, use the --help
flag.
How To Publish Docker Images To The Registry
Throughout the previous sections, we have downloaded images from the
docker registry, which is a hub for sharing and publishing images.
In this section, we will see how to upload and publish our images to the
official docker image-registry.
Before pushing an image to the Docker registry, ensure you are logged in to
hub.docker.com using your local Docker client. If you are using a third-party
Docker registry provider, check its login process in the official
documentation.
To push a docker image to the registry, we use the docker push command.
The general syntax for the command is:
In some cases, Docker may deny access to the resource from the command
line; you can open the browser and create an image repository using the
command provided.
The general syntax involves two commands: docker tag followed by docker
push. The local-image refers to the name of the image you created or wish to
push, followed by its tag name; the docker push command then uploads it.
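A hedged sketch, with the username, repository, and tag names as placeholders:

```shell
# Name the local image for the registry:
# docker tag local-image:tagname username/repository:tagname
docker tag ubuntu-lamp:latest yourusername/ubuntu-lamp:latest

# Push the tagged image to Docker Hub
docker push yourusername/ubuntu-lamp:latest
```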
To find more information about working with Docker Hub images, check the
documentation:
https://docs.docker.com/engine/reference/commandline/image_push/
How To Remove Docker Images
Similar to Docker containers, we can remove images hosted locally if not
needed. To do this, we use the docker image rm command. The command
removes the image or images passed to it. You can specify an image using
its short or long id, its name and tag, or its digest value. If you specify the
image's name without a tag, the latest tag is assumed by default and the
image tagged latest is removed.
If an image within the local registry has more than one tag, the
tags must be removed before the docker image rm command can delete it.
Alternatively, you can use the -f or --force option, which force-removes the
image along with all of its tags.
The general syntax for the command is:
docker image rm [OPTIONS] [IMAGES]
Let us start by viewing all the images within the registry
docker image ls
Next, choose the image and remove it. You can use either of the image
properties specified above. For this example, we are going to use the image
id.
Performing the above task fails if the image is referenced by other tags. You
can remove all of the image's tags until the image itself is removed, or use
the force option.
When using the --force option, ensure that the Docker image you are trying
to remove does not have any containers spawned from it, as this will lead to
dangling images.
You can remove all images in the local registry using the command:
docker image rm $(docker image ls -q)
To get more information on the docker rm command, use the -h flag, or
check the official documentation:
https://docs.docker.com/engine/reference/commandline/image_rm/
How To Export Docker Images
Docker allows you to use tarballs and export the images for importation to
other machines. This feature comes in handy if you do not want to use public
images but export from a custom one. You can use the Docker save
command to perform the task. The general syntax for the command is:
docker image save --output [filename.tar] image [image name]
Let us start by pulling a new image. You can choose either. For this section,
we will pull a new Redis image. Redis is an open-source data structure (in
memory) used as a database cache or message broker.
Check more information about Redis from the source below:
https://redis.io/documentation
Once you have pulled an image of your choice, export the image as a tarball
using the command:
docker image save -o redis-image.tar redis
Executing the command creates a tarball image within the current directory;
the image can then be imported on another host. We will discuss importing a
docker image in later sections.
You can view the image using the ls -l command, as illustrated below:
The docker export command allows you to save a docker container's filesystem
using the command:
docker container export -o container_name.tar container_name
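The exported tarball can then be brought in as a new image with docker image import; a sketch matching the naming described below:

```shell
# Import the exported filesystem as a new image named
# redis-imported with the tag imported
docker image import container_name.tar redis-imported:imported
```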
In the command above, we imported the redis image under the name
redis-imported and gave it the tag imported to avoid conflicts.
The docker import -h command is useful to learn more about importing
images.
Working With Dockerfile
The Dockerfile is an important Docker feature. A Dockerfile is simply a text-
based build file that allows us to specify various properties and
automate the process of creating Docker images.
The Dockerfile is processed by the Docker engine line by line, which
performs the tasks specified in the file one at a time. Docker images created
from a specified Dockerfile are constant, which means they are immutable and
cannot be changed.
Let us start by creating docker images with dockerfile. Before we begin the
process, we need to perform various operations.
Next, create a file called Dockerfile. You can use your favorite text editor
to perform the operation.
touch Dockerfile && nano Dockerfile
Once we have entered into nano or your specified text editor, enter
the configuration as shown below:
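A minimal sketch of a Dockerfile; the base image, package, and command are illustrative choices, not the book's exact configuration:

```dockerfile
# Start from the official Debian base image
FROM debian:latest

# Record the image author
LABEL maintainer="you@example.com"

# Install nginx inside the image
RUN apt-get update && apt-get install -y nginx

# Default command executed when a container starts from this image
CMD ["nginx", "-g", "daemon off;"]
```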
It’s good to note that you can include more configuration to the Dockerfile;
for instance, tag names, repositories, env, expose, volume, copy, user,
onbuild, etc. For this tutorial, we stick to the basics.
Once you use an image and create a container from it, a new writeable layer,
also called a container layer, is added on top of the existing layers. The
writeable layers hold all the changes performed on the container, such as
deleting files, changing permissions, creating new files, etc.
Some of the guidelines and recommended best practices include:
Once created, open the repository and navigate to the build options. Select the
connected account and configure your build by selecting your
GitHub account and the source repository.
Once completed, select save and build. Ensure the GitHub repository contains
an uploaded Dockerfile. This will start the build process
using the Dockerfile configuration and triggers. You can also perform more
automated build configurations by selecting the configure automated builds option:
To check the status of the build, click on the Docker tag, then click on view
logs for tasks such as debugging. The logs below show a failed Docker build,
which can help you take the necessary actions, such as editing the
Dockerfile.
Section 6: Containers Network and Data
Management
In this section, we are going to cover how to manage container networking
and storage features. Among other things, we shall look at:
The NAT rule created by the docker engine has various configurations such
as:
Next, we launch a new container, attach it to mainContainer's network,
and view the IP address attached to the container.
sudo docker container run --rm --net container:mainContainer debian ip addr
As shown in the above outputs, the eth0 of the main container and
the transient container have the same network interface indexes and
share the same IP address of 172.17.0.3
This works because when one container is created and successive
containers use the network of the main (first) container, the Docker engine
creates a network namespace for the main container and then allocates that
same network namespace to the other containers.
It's good to note that the main container's network namespace should be
running before creating the other containers that share it.
If the main container stops before the 'slave' containers, it will put those
containers in an unusable state.
This is a common concept used by containers inside a Kubernetes pod to
share the IP address.
How To Work With User-Defined Network Bridges
Throughout the previous sections, we have used the default network bridge
created by Docker upon installation. It allows the containers that we create
to communicate with each other using their IP addresses, but not their
container names.
In a microservice architecture, linking containers by the IP addresses
assigned at startup is not convenient. Docker therefore provides ways to link
containers using user-defined network bridges.
User-defined network bridges are similar to the default docker network-
bridge, with the primary difference being that they provide extra features
such as:
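The bridge0 network inspected below must first be created; a sketch:

```shell
# Create a user-defined bridge network named bridge0
sudo docker network create bridge0
```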
We can inspect the bridge0 interface configuration using the docker network
inspect command as shown below:
sudo docker network inspect bridge0
From the configuration, we see that the network bridge gets assigned a subnet
of 172.18.0.0/16 and a default gateway of 172.18.0.1.
You can also view the bridge from the host using the ip addr command as:
ip addr
We use the --network-alias option to group multiple containers under a single
name, allowing us to load balance them with the embedded DNS, which uses
round-robin load balancing.
https://www.cloudflare.com/learning/dns/glossary/round-robin-dns/
https://www.nginx.com/resources/glossary/round-robin-load-balancing/
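A sketch of creating two containers behind the slaveAlias alias pinged below; the nginx image and container names are illustrative choices:

```shell
# Two containers on bridge0 sharing the network alias slaveAlias
sudo docker container run -d --name slave1 --net bridge0 \
    --network-alias slaveAlias nginx
sudo docker container run -d --name slave2 --net bridge0 \
    --network-alias slaveAlias nginx
```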
Next, we can inspect the two containers’ IP addresses using the inspect
command:
Finally, the containers can now communicate with each other using their
names, which is central to building multi-container applications.
Let us take DNS load balancing for a spin. We can ping the network alias
to illustrate this, as shown in the command below.
sudo docker container run --net bridge0 --rm debian ping -c4 slaveAlias
sudo docker container run --net bridge0 --rm debian ping -c4 slaveAlias
BOOM! It works!
As we can see from the above outputs, the two pings got responses from
different container IP addresses, which shows
that the DNS load balancer is working and resolves the slaveAlias alias using
a round-robin algorithm.
Once a container is created and connected to a user-defined network
bridge, Docker automatically adds the name of the container and its alias to
the DNS record of the user-defined network. It then propagates the details to
other containers connected to the same user-defined network bridge via the
embedded DNS on 127.0.0.11
Since it is a DNS server like any other, we can query its records using
tools such as dig or nslookup. The Debian base image does not ship with these
tools, so you may need to install them.
sudo docker container run --rm -it --net bridge0 debian
root@32ec297c52cc:/# apt-get update
apt-get install dnsutils -y
Finally, exit from the container and remove it using the docker rm command.
Next, create a new container and mount the dataVolume volume as we did in
the previous section
sudo docker container run -it --name debianMounted -v dataVolume:/usr/data debian
As we can see, the data is saved in the volume and mounted into new
containers as specified by the user. You can view the volume's save path using
the docker inspect command:
sudo docker volume inspect dataVolume
This feature is very useful when working with sensitive data that you do not
want preserved in either the host or the container's read/write layer.
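A sketch of a tmpfs mount:

```shell
# Mount an in-memory tmpfs at /tmp; its contents vanish when the container stops
sudo docker container run -it --tmpfs /tmp debian
```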
There are various limitations associated with running a container with tmpfs
mounts. These include: