Docker Containerization Cookbook
Preface
Docker is the world's leading software containerization platform. Docker containers wrap a piece of software in a complete filesystem that contains everything needed to run: code, runtime, system tools, system libraries - anything that can be installed on a server. This guarantees that the software will always run the same, regardless of its environment. (Source: https://www.docker.com/what-docker)
Docker provides an additional layer of abstraction and automation of operating-system-level virtualization on Linux. Docker
uses the resource isolation features of the Linux kernel such as cgroups and kernel namespaces, and a union-capable file system
such as OverlayFS and others to allow independent "containers" to run within a single Linux instance, avoiding the overhead of
starting and maintaining virtual machines (Source: https://en.wikipedia.org/wiki/Docker_(software)).
In this ebook, we provide a compilation of Docker examples that will help you kick-start your own automation projects. We
cover a wide range of topics, from installation and configuration, to DNS and commands. With our straightforward tutorials, you
will be able to get your own projects up and running in minimum time.
Chapter 1
1.1
Docker is a container manager, which means that it is able to create and execute containers that represent specific runtime environments for your software. In contrast with virtual machines like VirtualBox, Docker uses the resource isolation features of the Linux kernel to allow independent containers to run within a single Linux instance, avoiding the overhead of starting and maintaining virtual machines. A computer with Docker can run multiple containers at the same time.
Therefore, Docker is especially useful for automating integration tests and continuous delivery.
1.2
In order to use Docker, it is important to be clear about some vocabulary. Docker distinguishes three important concepts: Docker machines, Docker images and containers.
Docker containers are executions of a specific runtime environment, for example, to simulate a machine with a specific database up. These runtime environments are called Docker images, which are the result of executing the set of commands that appear in a specific script-like file called Dockerfile. Therefore, Docker allows having multiple containers of a specific Docker image. These concepts can be compared with programs and processes: containers, like processes, can be created, stopped, or running.
Docker machines are local (and virtual) machines or remote machines (e.g. in a cloud such as Amazon AWS or DigitalOcean) with a Docker daemon running. Like physical machines, Docker machines have a specific IP address. Each Docker machine can manage multiple Docker images and containers. From our own personal computer, the docker-machine command allows us to connect to all our Docker machines to manage their containers and images.
1.2.1
Download the Docker Machine binary and update your PATH. If you are running OS X or Linux:
Unix Docker Machine installation
$ curl -L https://github.com/docker/machine/releases/download/v0.6.0/docker-machine-$(uname -s)-$(uname -m) > /usr/local/bin/docker-machine && \
  chmod +x /usr/local/bin/docker-machine
Otherwise, simply use the installer from the Docker official releases.
1.3
There are many docker commands. In fact, if we simply run docker from our terminal, we can see all the available commands.
Docker commands
Commands:
attach
build
commit
cp
create
diff
events
exec
export
history
images
import
info
inspect
kill
load
login
logout
logs
pause
port
ps
pull
push
rename
restart
rm
rmi
run
save
search
start
stats
stop
tag
top
unpause
version
wait
In this section, we will see how to manage Docker machines, how to pull a Docker image using a specific Docker machine and how to create a container for such a Docker image.
1.3.1
Creating a machine
First of all, we will list the set of available docker machines with the following command:
List Docker Machines
docker-machine ls
If we do not have any machine yet, we can create a new one as follows:
Create a Docker Machine
docker-machine create --driver virtualbox default
Notice that in order to create a Docker machine, we need to specify a driver, which determines whether it is a virtual machine (in this case, VirtualBox) or a connection to Docker running on an external machine.
1.3.2
Creating an image
Docker images are created in a specific Docker machine. In order to define the Docker machine that we want to use, we need to run the following command:
Select docker machine
eval "$(docker-machine env default)"
In order to check the effects, list the available Docker machines again. At this moment, the new machine should appear with a new IP address (192.168.99.100) and an asterisk in the ACTIVE column.
Active Docker Machine
NAME      ACTIVE   DRIVER       STATE     URL                         SWARM
default   *        virtualbox   Running   tcp://192.168.99.100:2376
Docker images are the binary files of runtime environments. The contents of a docker image are the result of executing the set
of instructions that appear in a plain text file called Dockerfile. However, in order to avoid building docker images from
Dockerfiles over and over again, there is an online repository for docker images called Docker Hub.
For example, in order to pull a MongoDB image, we only need to execute the docker pull command as follows:
Docker pull command
docker pull mongo
In order to see the Docker images installed in our Docker machine, use the following command. Notice that a new entry appears with the Mongo image. Each image has a unique identifier in our machine, called IMAGE ID.
List Docker Images
docker images
If you are interested in understanding how to create a Dockerfile, we strongly recommend following this example.
1.3.3
Creating a container
Docker containers are executions of docker images. In order to create a container from an image, we use the command docker
run. For example:
Run Docker Container
docker run mongo
After that, we can check the list of containers, with the following instruction.
List docker containers
docker ps -a
Notice that our container has an identifier too, called CONTAINER ID. We will use it later.
The IP address to connect to this container from our application is the IP of our docker machine (192.168.99.100).
1.3.4
At this point, we have created a Docker machine, pulled a Docker image and started a Docker container. In order to stop and remove the container and the image, we need to execute the following instructions:
Stop the container:
Stop Docker Container
docker stop #containerId
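The removal commands are not shown in the original listing; using the identifiers reported by docker ps -a and docker images, they would be:
Remove Docker Container and Image
docker rm #containerId
docker rmi mongo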
1.4
There are different approaches to designing integration tests with Docker. In this example, we will design a Maven project which, before executing the JUnit tests, automatically starts a MongoDB Docker container.
1.4.1
Maven configuration
First of all, copy this pom.xml file. We are going to use the docker-maven-plugin to automatically load the defined
Docker images and create a container before running the integration tests. In order to support integration tests, we use the
maven-failsafe-plugin.
pom.xml
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>javacodegeeks</groupId>
<artifactId>javacodegeeks</artifactId>
<version>1.0-SNAPSHOT</version>
<packaging>jar</packaging>
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<project.build.resourceEncoding>UTF-8</project.build.resourceEncoding>
<maven.compile.encoding>UTF-8</maven.compile.encoding>
</properties>
<build>
<plugins>
<plugin>
<artifactId>maven-compiler-plugin</artifactId>
<version>2.1</version>
<configuration>
<source>1.8</source>
<target>1.8</target>
<encoding>UTF-8</encoding>
</configuration>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-failsafe-plugin</artifactId>
<version>2.19.1</version>
<configuration>
<includes>
<include>**/IntegrationTest.java</include>
</includes>
</configuration>
<executions>
<execution>
<id>integration-test</id>
<goals>
<goal>integration-test</goal>
</goals>
</execution>
<execution>
<id>verify</id>
<goals>
<goal>verify</goal>
</goals>
</execution>
</executions>
</plugin>
<!-- Tell surefire to skip test, we are using the failsafe plugin -->
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-surefire-plugin</artifactId>
<version>2.10</version>
<configuration>
<skip>true</skip>
</configuration>
</plugin>
<plugin>
<groupId>io.fabric8</groupId>
<artifactId>docker-maven-plugin</artifactId>
<version>0.15.0</version>
<configuration>
<logDate>default</logDate>
<autoPull>true</autoPull>
<dockerHost>tcp://192.168.99.100:2376</dockerHost>
<certPath>${user.home}/.docker/machine/certs/</certPath>
<images>
<image>
<name>mongo</name>
<run>
<ports>
<port>27017:27017</port>
</ports>
</run>
</image>
</images>
</configuration>
<!-- Hooking into the lifecycle -->
<executions>
<execution>
<id>start</id>
<phase>pre-integration-test</phase>
<goals>
<goal>start</goal>
</goals>
</execution>
<execution>
<id>stop</id>
<phase>post-integration-test</phase>
<goals>
<goal>stop</goal>
</goals>
</execution>
</executions>
</plugin>
</plugins>
</build>
<dependencies>
<dependency>
<groupId>org.mongodb</groupId>
<artifactId>mongo-java-driver</artifactId>
<version>3.2.2</version>
</dependency>
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>4.12</version>
<scope>test</scope>
</dependency>
</dependencies>
</project>
1.4.2
Integration tests are executed after the docker-maven-plugin creates a MongoDB container. We are going to create a test to validate the following Zoo component, which only has one operation, called addAnimal, that inserts an animal into MongoDB.
Zoo.java
package javacodegeeks;
import org.bson.Document;
import com.mongodb.MongoClient;
public class Zoo {
private MongoClient client;
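    // NOTE: the rest of Zoo.java is cut off in the original listing. The following
    // continuation is only a plausible sketch, assuming a "zoo" database with an
    // "animals" collection; the constructor, names and fields are illustrative.
    public Zoo(String host, int port) {
        client = new MongoClient(host, port);
    }

    public void addAnimal(String name) {
        Document animal = new Document("name", name);
        client.getDatabase("zoo").getCollection("animals").insertOne(animal);
    }
}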
To test the Zoo component, we just validate, with an empty database, that after invoking the addAnimal operation, MongoDB
contains a new animal.
IntegrationTest.java
package javacodegeeks;
import org.bson.Document;
import org.junit.AfterClass;
import org.junit.Assert;
import org.junit.BeforeClass;
import org.junit.Test;

import com.mongodb.MongoClient;
import com.mongodb.client.FindIterable;
import com.mongodb.client.MongoDatabase;
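// NOTE: the body of IntegrationTest.java is missing from the original listing. The sketch
// below only illustrates the idea described in the text: connect to the MongoDB container
// exposed by the Docker machine (IP assumed to be 192.168.99.100), call addAnimal and
// assert that the "animals" collection contains the new entry. Names mirror the hypothetical
// Zoo sketch above and are not taken verbatim from the source.
public class IntegrationTest {

    private static MongoClient client;

    @BeforeClass
    public static void setUp() {
        client = new MongoClient("192.168.99.100", 27017);
        client.getDatabase("zoo").getCollection("animals").drop();
    }

    @AfterClass
    public static void tearDown() {
        client.close();
    }

    @Test
    public void addAnimalInsertsADocument() {
        Zoo zoo = new Zoo("192.168.99.100", 27017);
        zoo.addAnimal("lion");

        MongoDatabase db = client.getDatabase("zoo");
        FindIterable<Document> animals = db.getCollection("animals").find();
        Assert.assertEquals("lion", animals.first().getString("name"));
    }
}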
Notice that the IP address of our MongoDB database is the IP of our Docker machine.
1.4.3
To run our integration tests, we simply need to execute the mvn verify command.
Run integration tests
mvn verify
1.5
Download
You can download the full source code of this example here: DockerTutorialForBeginners
Chapter 2
2.1
Requisites
According to Docker documentation, we must meet the following prerequisites for installing Docker:
A 64-bit distro. Obviously, we need a 64-bit microprocessor.
Kernel version 3.10 at least. In Ubuntu, these versions have shipped since 13.10, so it shouldn't be a problem.
Just in case, we can check the kernel version with the following command:
uname -r
Which will show the kernel release. For this case, the output is:
4.4.0-21-generic
2.2
Installation
We can install Docker simply via apt-get, without the need of adding any repository, just installing the docker.io package:
sudo apt-get update
sudo apt-get install docker.io
We can check that Docker has been successfully installed by executing the following:
docker -v
Finally, we can pull an image to check that Docker is working properly. For example, we will pull a BusyBox image:
docker pull busybox
If you get a permission error here, don't worry, we will see how to fix it in the following subsection.
2.2.1
Actually, it is not an error itself; it is just that the user we are trying to execute Docker with has no permission to access the Docker service.
This means that, if we try the same command with sudo permissions, we won't get the error:
sudo docker pull busybox
If we check our user's groups (for example, by running groups), we will see that the docker group is not in the list. To add it, we can simply execute:
sudo usermod $(whoami) -G docker -a
(The -G option is for secondary groups; and the -a is for appending to the existing ones).
If we check the groups again, we will see that the docker group is now in the list. But, before anything else, we have to restart the session to make the change take effect.
After logging in again, we will see that we can run Docker without the need of root permissions:
docker pull busybox # It works!
2.3
Basic usage
Now that we have Docker running, we will see the most basic things we can do with it.
2.3.1
Pulling images
The Docker community is huge. There is a vast number of preconfigured images, many of them maintained by the official developers of the software in question (e.g. MySQL, Nginx, Jenkins, and many more).
These images can be found at Docker Hub. You don't have to create an account to pull the images from there, but you will need
one if you want to host your images.
We already have seen how to pull images. For those official images, we just have to specify the name of the repository:
docker pull <repository>
By default, the latest image in the repository will be pulled (you have probably already seen the "Using default tag: latest" message if you have pulled the BusyBox image). But we can also pull a specific image, identified by a version tag. For that, we just have to append the tag in question after a colon:
docker pull <username>/<repository>:<tag>
For example:
docker pull busybox:1.25.0
Note: to see the images you have, you can use the images command:
docker images
2.3.2
Creating files known as Dockerfiles is the way to create our custom images, based on an existing one.
It simply consists of writing, in a file, the steps to execute.
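For instance, a minimal Dockerfile based on the BusyBox image pulled earlier could look like the following (the echoed message is purely illustrative); building it with docker build -t my-busybox . would produce a custom image:
FROM busybox
CMD ["echo", "Hello from my custom image"]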
2.3.3
Creating containers
We have seen how to pull images and create our own ones, but not how to use them.
When we use an image, i.e., we instantiate it, we create a container. We can have several containers from the same image. The
concept is almost the same as with class and objects in Object Oriented Programming.
For creating containers, we use the run command. The easiest way is to execute it just specifying the image:
docker run busybox
Nothing seemed to happen. But, behind the scenes, Docker has created a container from the image, executed it, and then ended it, as we didn't specify anything to do.
We can execute a command inside the container:
docker run busybox echo "Hello world!"
Not very impressive, right? But imagine doing so with a virtual machine, and compare how much time each one takes.
We can also get an interactive shell with the -it option:
docker run -it busybox
To see a more complete example, we are going to pull the Nginx image:
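The exact commands are not visible in the original capture; a minimal sketch matching the description (mapping host port 8080 to the container's port 80) would be:
docker pull nginx
docker run -d -p 8080:80 nginx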
And, if we browse to localhost:8080, we will see that we have a dockerized Nginx web server!
2.4
Summary
In this tutorial we have seen the installation of Docker in Ubuntu and its very basic configuration. Apart from that, we have also
seen the basic usage of Docker, pulling images from the Docker Hub and creating containers from these images.
Chapter 3
Introduction
Docker is a containerization technology that provides OS-level virtualization to applications. It isolates processes, storage, networking, and also provides security to the services running within its containers. To enable this, Docker depends on various features of the Linux kernel. Let us get introduced to these Docker kernel requirements in this post.
3.2
The dependencies on the Linux kernel can be broadly categorized into 4 classes - resource constraining, security, networking, and storage. Resource constraining features allow container creators to place restrictions on container environments, like memory usage, CPU shares, and so on. Security features allow security policies to be applied to containers. Networking features enable the SDN networking features provided by Docker. Storage features allow Docker to support volumes and various storage backends.
3.3
3.3.1
Control groups, or cgroups, is a kernel feature to constrain the resource usage of a process or a set of processes. This provides
Docker with 4 main features:
Limit resources (CPU, memory, network, disk I/O, ...) allocated to user-defined processes.
Prioritize resources to processes (a set of processes will get more resources than another set).
Measure resource usage for billing purposes.
Control a group of processes.
The docker run command is used to manipulate the resources allocated to a container. For instance, docker run --cpu-shares=<value> sets the CPU shares allocated to a container (every container gets 1024 shares by default), and docker run --cpuset-cpus=<value> sets the CPU cores on which the container may run. Do look at this insightful article for some examples of manipulating cgroups settings for Docker containers.
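As a quick illustration (the image and the limit values here are arbitrary, not taken from the original text), the following command starts a container restricted to 512 MB of memory, half the default CPU shares and the first CPU core:
docker run -it --memory=512m --cpu-shares=512 --cpuset-cpus=0 alpine:latest sh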
3.3.2
Namespaces
Namespaces are a kernel feature that provides lightweight process virtualization to containers. This helps Docker to isolate these resources for a container - process IDs, hostnames, user IDs, network access, IPC and filesystems. Docker combines namespaces and cgroups to isolate resources for containers and to place resource usage constraints. These namespaces are used to isolate containers - Process ID (pid), Network (net), Mount (mnt), Hostname (uts), Shared Memory (ipc).
A pid namespace provides processes running within containers with separate pids, isolated from other containers.
A net namespace creates separate network interfaces, IP addresses and such for each container.
A mnt namespace creates isolated mounts for each container. Mount points from the host OS may be carried into the container, but any additions to the container mounts are not propagated back to the host.
A uts namespace gives containers their own hostnames without affecting other containers or the rest of the system.
An ipc namespace creates an isolated shared memory space for each container and prevents access between the shared memory of different containers.
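As a brief illustration of these namespaces in practice (the container name my-container and the PID shown are purely illustrative), you can look up a container's main process on the host and list the namespaces it belongs to:
$ docker inspect --format '{{.State.Pid}}' my-container
4023
$ sudo ls /proc/4023/ns
ipc  mnt  net  pid  user  uts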
3.4
Security dependencies
3.4.1
AppArmor
AppArmor is a Mandatory Access Control (MAC) tool to restrict programs to a limited set of resources. Restriction policies are
set in a simple text file to administer storage, networking, capabilities of a program. A policy can run in enforcement or complain
mode. A policy running in enforcement mode will enforce the policy and report violations. A policy running in complain mode
will not enforce restrictions but only report violations.
Docker installs a default AppArmor profile - /etc/apparmor.d/docker - during installation. This profile is applied to all Docker containers. To apply a specific AppArmor profile to a container, use the --security-opt option of docker run, for example: docker run -it --security-opt apparmor=<profile-name> <image-name>.
Read this page for more details about Docker's usage of AppArmor.
3.4.2
SELinux, like AppArmor, enforces MAC policies on other subsystems of the Linux kernel. When compared to AppArmor,
SELinux follows a more elaborate multi-level security policy control. This is currently developed and maintained by RedHat.
3.4.3
Capabilities, as implemented in Linux (known as "POSIX capabilities"), partition the root user's privileges into distinct smaller units called "capabilities". These capabilities are enabled/disabled as a unit and assigned to individual threads. This allows a thread/process to perform some privileged operation with a minimal set of capabilities, but without assuming superuser permissions. See man capabilities on any Linux system for more details on capabilities. Docker uses capabilities to restrict the actual capabilities of the container while providing all possible features to the service within it. A root user within a Docker container may not have all the privileges of a root user in the actual host OS.
Read this post for more explanation on Docker's support of capabilities.
3.4.4
Secure Computing Mode, also called seccomp, provides a facility to place filters on the system calls available to a user-defined
process. This is combined with other tools to provide a secure computing sandbox to filter a thread from all available system
calls. When seccomp is applied to a thread, the thread can perform only 4 system calls - read(), write(), sigreturn() and exit().
The kernel will kill the process if it uses any other system call.
A seccomp profile is set to a Docker container with the security-opt option of docker run like so:
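The command itself is not shown in the original capture; a hedged sketch, assuming a custom profile stored at /path/to/seccomp-profile.json, would be:
$ docker run --rm -it --security-opt seccomp=/path/to/seccomp-profile.json alpine:latest sh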
Read this doc for more information about Docker's use of seccomp.
3.5
Networking dependencies
3.5.1
Netfilter
Netfilter is a framework provided by the Linux Kernel that allows network packets flowing through the machine to be manipulated. Features include stateless and stateful packet filtering of IPv4 and IPv6 packets, Network address translation, port address
translation, extensible APIs for 3rd party app developers. Docker uses Netfilter through its userspace counterpart IPTables.
3.5.2
IPTables
iptables is the user-space utility counterpart of netfilter. It interacts with netfilter and allows a system administrator to define tables of firewalling rules for packet filtering, network address translation (NAT), and so on. The Docker daemon automatically appends firewalling rules to iptables if it finds it installed in the system. For example, when we expose a container's port to the outside world, Docker adds a corresponding rule to iptables. To disable this behaviour, start the Docker daemon with the iptables option set to false, like so:
$ dockerd --iptables=false
See this blog post for a few examples of how Docker uses iptables.
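To get a feel for the rules Docker manages, you can list the DOCKER chain of the NAT table on the host (the exact output varies per system and is omitted here):
$ sudo iptables -t nat -L DOCKER -n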
3.5.3
Netlink
Netlink as a tool provides a mechanism for communication between kernel and userspace components using a socket interface.
Even userspace components can use this to communicate among one another. This is an alternative to ioctl and reduces dependence on direct system calls, ioctl calls, and such. Docker implements its own netlink libraries to talk to the kernel's netlink interface to create and configure network devices.
This excellent post has more details about how Docker uses Netlink.
3.6
Docker supports several storage drivers with a plug-in architecture. One can choose a storage driver to run the Docker daemon with. However, Docker Engine can support only one active storage driver at a time. A change in the storage driver requires the Docker daemon to be restarted.
3.6.1
Device mapper
The devicemapper framework is provided by the kernel to map physical block devices to virtual devices. It provides the foundation for features such as logical volume management, device encryption, copy-on-write files, etc. Docker uses this framework to support copy-on-write files in containers.
3.7
3.7.1
libcontainer (Now called opencontainers/RunC) - This is not exactly a kernel feature. Docker developed this as an execution
engine that exposes a consistent standardized Go API to work with Linux namespaces, cgroups, capabilities, AppArmor, security
profiles, network interfaces, firewalls and firewalling rules. RunC has replaced LXC as the default execution driver of the Docker
Engine.
3.7.2
LXC
Like libcontainer, LXC provides a userspace interface for the Linux kernel's container-supporting features. LXC was the initial execution engine before Docker moved to RunC.
3.8
Summary
In this post we were introduced to the key kernel features on which Docker depends and builds to enable containerization. Each
of the Kernel features in itself can be pursued further and understood more deeply to improve container security in Docker. The
Docker docs also contain more information about kernel level enablers.
Chapter 4
4.1
Introduction
Creating a Docker Hello World container and getting it to work verifies that you have the Docker infrastructure set up properly
in your development system. We will go about this in 4 different ways:
Run Docker hello world image provided by Docker
Get a "Hello, world" printed from another basic Docker image
Write a Simple "Hello, World" Program in Java and Run it Within a Docker Container
Execute a "Hello, World" Program in Java Using Gradle and Docker
You should have Docker installed already to follow the examples. Please go to the Docker home page to install the Docker engine
before proceeding further, if needed. On Linux, please do ensure that your user name is added into the "docker" group while
installing so that you need not invoke sudo to run the Docker client commands. You should also have Java and Gradle installed
to try examples 3 and 4. Install your favorite J2SE distribution to get Java and install Gradle from gradle.org.
So let us get started!
4.2
This is the simplest possible way to verify that you have Docker installed correctly. Just run the hello-world Docker image in a
terminal.
$ docker run hello-world
This command will download the hello-world Docker image from Docker Hub, if not already present, and run it. As a result, you should see the below output if it went well.
4.3
This too is simple. Docker provides a few base images that can be used directly to print a "Hello, World". This can be done as shown below, and it verifies that you have a successful installation of Docker.
$ docker run alpine:latest "echo" "Hello, World"
This command downloads the Alpine base image the first time and creates a Docker container. It then runs the container and executes the echo command. The echo command echoes the "Hello, World" string. As a result, you should see the output as below.
Figure 4.2: Use the Alpine Docker Image to Print Hello World
4.4
Write a Simple "Hello, World" Program in Java and Run it Within a Docker Container
Let us take it up a notch now. Let us write a simple hello world program in Java and execute it within a Docker container.
4.4.1
Create HelloWorld.java
First of all, let us create a simple Java program that prints "Hello, World". Open up your favorite text editor and enter the
following code:
public class HelloWorld {
public static void main(String[] args) {
System.out.println("Hello, World");
}
}
This is a standard HelloWorld program in Java very closely resembling the one provided in the Official Java tutorial. Save the
file as HelloWorld.java. Now, compile this file using the Java compiler.
$ javac HelloWorld.java
This should create the class file HelloWorld.class. Normally, we would use the java command to execute HelloWorld as below:
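The command itself is not visible in the original capture; it is just the standard invocation:
$ java HelloWorld
Hello, World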
4.4.2
Create a Dockerfile
To create a new Docker image we need to create a Dockerfile. A Dockerfile defines a docker image. Let us create this now.
Create a new file named Dockerfile and enter the following in it and save it.
FROM alpine:latest
ADD HelloWorld.class HelloWorld.class
RUN apk --update add openjdk8-jre
ENTRYPOINT ["java", "-Djava.security.egd=file:/dev/./urandom", "HelloWorld"]
The complete Dockerfile syntax can be found in the Docker docs, but here is briefly what we have done. We extend our image from the Alpine base image. We then add HelloWorld.class into the image with the same name. Next, we install a JRE environment using OpenJDK. Finally, we give the command to execute when this image is run - that is, to run our HelloWorld class in the JVM.
4.4.3
Now, build an image from this Dockerfile by executing the below command.
$ docker build --tag "docker-hello-world:latest" .
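The original capture ends with a screenshot of the build. To actually see the greeting, you would then run a container from the freshly built image (a straightforward follow-up, shown here for completeness):
$ docker run --rm docker-hello-world:latest
Hello, World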
4.5
Let us take it up another notch now. Real world programs are more complex than the ones shown above. So let us create another
hello world Java program and run it in a Docker container using a few best-practices of the Java ecosystem. Furthermore, let us
set up a Java project using Gradle, add the Docker plugin for Gradle into it, create a Docker image and run it using Gradle.
4.5.1
First of all, create a new folder called "docker-hello-world-example", open a terminal and change into this folder. Next, use Gradle to initialize a new Java project.
$ gradle init --type java-library
Once done, you should see the following folder structure created. You can freely delete the files Library.java and LibraryTest.java. We will not be needing them.
Figure 4.6: Folder structure for the Sample Java Project Created by Gradle
Now, create a new file src/main/java/com/javacodegeeks/examples/HelloWorld.java and enter the following in it. Save it once done.
package com.javacodegeeks.examples;
public class HelloWorld {
public static void main(String[] args) {
System.out.println("Hello, World");
}
}
4.5.2
Create a Dockerfile
Next, create a Docker file in src/main/resources called "src/main/resources/Dockerfile" and enter the following. Save and close
the file once done.
FROM alpine:latest
RUN apk --update add openjdk8-jre
COPY docker-hello-world-example.jar app.jar
ENTRYPOINT ["java", "-Djava.security.egd=file:/dev/./urandom", "-jar", "app.jar"]
Let us quickly go through what we did in the above Dockerfile. We derive our image from the Alpine Linux base image. Next, we install a JRE from OpenJDK. Then, we copy the jar file generated by the build process into the Docker image as app.jar. Finally, we set the command to be run when the Docker container starts, which is to execute the app.jar file.
4.5.3
Next, make the following changes into the build.gradle file to update the Jar manifest.
mainClassName = 'com.javacodegeeks.examples.HelloWorld'

jar {
    manifest {
        attributes(
            'Main-Class': mainClassName
        )
    }
}
We set the mainClassName to com.javacodegeeks.examples.HelloWorld and set the same attribute to be inserted
into the manifest file of the jar that will be created.
Next, insert the following changes at the very top of build.gradle file to configure and add the Gradle plugins for Docker.
buildscript {
    repositories {
        maven {
            url "https://plugins.gradle.org/m2/"
        }
    }
    dependencies {
        classpath "gradle.plugin.com.palantir.gradle.docker:gradle-docker:0.9.1"
    }
}

plugins {
    id 'com.palantir.docker' version '0.9.1'
    id 'com.palantir.docker-run' version '0.9.1'
}

apply plugin: 'com.palantir.docker'
Here we basically added the Gradle plugins called docker and docker-run provided by Palantir. These plugins enable us to create Docker containers and run them using Gradle. We added the URL to tell Gradle where to find the gradle-docker plugin (https://plugins.gradle.org/m2/). We also added the plugin to the classpath as a build dependency.
4.5.4
Next, insert the following changes at the bottom of build.gradle file to configure the details of the Docker image.
docker {
    name 'com.javacodegeeks.examples/docker-hello-world:1'
    tags 'latest'
    dockerfile 'src/main/docker/Dockerfile'
    dependsOn tasks.jar
}
Here we configured the details of the docker image. We assigned the image name com.javacodegeeks.examples/
docker-hello-world with version 1. We assigned it the tag latest and gave the path where to find the Docker file.
Finally, we made it depend on the Gradle task jar so that the output of the jar task will be fed to this task.
Finally, insert the following changes at the bottom of build.gradle file to configure the details of the Docker container.
dockerRun {
    name 'docker-hello-world-container'
    image 'com.javacodegeeks.examples/docker-hello-world:1'
    command 'java', '-Djava.security.egd=file:/dev/./urandom', '-jar', 'app.jar'
}
Here we configured the details of the docker container. We assigned the container name "docker-hello-world-container". We
assigned it the image from where to create the container and gave it a command to execute when running the container.
4.5.5
Now we are ready to build and run this. Use Gradle to build and run this program
$ ./gradlew clean build docker dockerRun
You will notice that the code was built, jar was created, Docker image and containers were created, and the container was run
too. However, there is no "Hello, World" actually printed on the screen here.
Figure 4.7: Run the Gradle Tasks clean, build, docker, and dockerrun
The last line simply says that the Docker container docker-hello-world-container is running. To see if it printed "Hello, World"
successfully, we need to check the logs created by the Docker container. This can be checked by executing the following
command.
$ docker logs docker-hello-world-container
Figure 4.8: Check the Docker Container Logs to Check if Hello world was Printed
There it is!
4.6
Summary
In this example we learned how to create Docker Hello world type containers in 4 different ways:
First, we learned how to run the hello-world image provided by Docker. This printed Hello World as soon as it was run
Next, we learned how to run the Alpine image and print a Hello World using the echo shell command
Next, we learned how to get the basic Hello World program into a Docker image and run it to print Hello World. The source
code for this can be downloaded from the links given below.
Finally, we learned how to set up a structured Java project using Gradle and generate Docker images and Containers using
Gradle plugins. We used this structure to create a Hello World Java project and got it to run in a container. Later, we verified
the Docker logs for the container to see that Hello World was printed when the container was run. The source code for this
also can be downloaded from the links given below.
4.7
Download
The source code for example 3 can be downloaded here: docker-hello-world.zip
The source code for example 4 can be downloaded here: docker-hello-world-example.zip
Chapter 5
Docker as a Service
In the previous posts we learned about Docker, containers, and working with containers. In this post, we will learn about Containers as a Service (CaaS) in general and how Docker realizes CaaS.
5.1
Take a typical developer workflow involving Docker. It will typically be as below. This is based on the reference workflow provided in the Docker docs.
5.2
So, what is CaaS? It is a cloud service model that enables users to order and use containerization infrastructure from a cloud
provider using a pay-as-you-go model. Docker CaaS is an example. Amazon EC2 Container Service (ECS) by AWS is another
example.
As a cloud service model, CaaS probably sits as a subset of IaaS in that it enables a container-based infrastructure for its users.
A CaaS offering usually seeks to provide the following features to its users.
Manage containers through an API or web interface.
Monitor compute, storage and network resources used by the containers.
Provide cluster management and orchestration capabilities for containers.
Provide security and governance controls.
Optionally, be cloud agnostic, so that the CaaS can be shifted across cloud providers or from on-premises to the cloud.
5.3
Docker CaaS
Docker's CaaS offering, called Docker Datacenter, packages Docker's tools to provide developers and IT with a consistent container infrastructure. This is a commercial offering aimed at enterprises and provides the following features.
Brings well-known open source Docker tools under a CaaS umbrella - Docker Engine, Docker Compose, Docker Registry, Docker Swarm.
Adds commercial offerings - Universal Control Plane and Trusted Registry.
Ability to deploy the CaaS on-premises or in the cloud.
A private trusted Docker registry for managing images.
Cluster management and orchestration capabilities through Swarm and Universal Control Plane.
Tools for the application development lifecycle, continuous integration and deployment.
Provides security in the application lifecycle, all the way from dev to production stages.
Refer to the Docker Datacenter pages by Docker for more information about providing Docker as a service.
5.4
Summary
In this short post, we understood the need for a "Docker-as-a-service" solution to handle containerized application development needs at scale for an organization. We briefly reviewed the features provided by a CaaS offering in general. Then we briefly discussed Docker's CaaS offering, Docker Datacenter, which offers Docker's tools as a service to developers and IT administrators.
Chapter 6
Introduction
Docker is a tool to avoid the usual headaches of conflicts, dependencies and inconsistent environments, which is an important
problem for distributed applications, where we need to install or upgrade several nodes with the same configuration.
Docker is a container manager, which means that it is able to create and execute containers that represent specific runtime environments for your software. A specific runtime environment is called a Docker image, and containers are specific executions of a Docker image. It is the same concept as classes and objects/instances, which allows defining, for example, different network configurations for each container.
Docker Hub is a repository of Docker images. Docker users can upload their Docker images and share them with the community. This means that users of software providers do not need to deal with installation guides; they just need to download the image and create as many containers as they need. In fact, it is especially useful when system administrators need to scale the same software in a cluster of machines.
The purpose of this tutorial is to explain how to build Docker images using the docker build command and the supported build options. As we will see below, Docker allows building images using different approaches.
6.2
The docker build command is available from your Docker installation, which depends on your operating system and is widely explained (here for Mac, here for Windows and here for Ubuntu). Once you have followed the installation instructions, you should be able to run the docker build command.
Docker usage command
docker build --help
6.3
Context definition
The context of a docker build command could be a local directory (PATH), a Git repository (URL) or the standard input (-). This
section explains the usage scenarios of these different contexts.
6.3.1
Probably, you are assuming that the Docker daemon is always running on your local machine. However, when you are building Docker images for a remote machine, this command sends the daemon a tar file with the contents of the local directory you have specified, so that it can proceed with the build. For example:
Docker build from path
docker build .
6.3.2
URLs are specially designed for Git repositories. In this case, the build command clones the Git repository and uses the cloned repository as the context. The Dockerfile at the root of the repository is used as the Dockerfile. As an alternative to http or https, you can specify an arbitrary Git repository by using the git:// or git@ schema. For a GitHub repository, we can use docker build as follows:
Docker build from git
docker build github.com/myuser/project
6.3.3
Build with -
This will read a Dockerfile from STDIN, without a context. It is commonly used to send a compressed build context. The supported compression formats are bzip2, gzip and xz. For example:
Docker build from the standard output
docker build - < context.tar.gz
6.4
Dockerfile location
By default, the Dockerfile is located in the root directory of the specified context and is called Dockerfile. However, you can specify another one with the -f option, but it must always be located inside the build context. For example:
Docker build with a specific Dockerfile
docker build -f Dockerfile.debug .
6.5
Example
Now, let's build an Ubuntu image with Java from a GitHub project used in a previous JavaCodeGeeks example. Run the following command:
Docker build command with a real project
docker build -t javacodegeeks/buildsample github.com/rpau/docker-example4j
Now, in order to check if the image is available, execute docker images. This command shows the available images for your
docker daemon. It should show something similar to the following output:
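The output itself is missing from the capture; it would look roughly like the listing below (image ID, timestamp and size are illustrative only):
REPOSITORY                  TAG      IMAGE ID       CREATED          SIZE
javacodegeeks/buildsample   latest   1f4a8ba6f7d2   2 minutes ago    650 MB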
6.6
Conclusion
This tutorial explains what a Docker image is and how to build one using the docker build command. This command requires specifying a context, which is the location of the files that we need to add into the image, along with the Dockerfile. Finally, our example shows how to build an image from a GitHub project and how to verify which images are available in our Docker daemon with the docker images command.
Remember that creating Docker images is extremely useful to set up a specific execution environment that needs to be installed in a cluster of machines. Therefore, it helps to avoid problems caused by differences between production and development environments.
Chapter 7
Introduction
Docker Compose is a tool for defining and running multi-container Docker applications. With Compose, you define in a configuration file the set of Docker containers that your application requires on a specific machine. Then, using a single command, you create and start all the services on a single host.
Docker Compose is specially useful for the following use case scenarios:
Create a development environment with all the required services started, using only your own machine. In other words, stop preparing your development environment manually over and over again on your own or other machines.
Automate test environments with the same characteristics as the production environments and run integration tests.
Single-host deployments - that is, using a machine with different services as Docker containers. The most recommended way to do so is using Docker Machine or Docker Swarm, because you can launch Docker Compose remotely from your laptop in a safe way, without requiring SSH.
To use Docker Compose, you just need to create a configuration file called docker-compose.yml, which contains the specification to create and run a set of Docker containers. Let's try to create an example with a web application that uses a Redis database server.
7.2
Docker Compose is one of the tools included in the Docker Toolbox. To install the Docker Toolbox, you just need to download the appropriate installer for your operating system from here and execute it.
Docker Toolbox contains the following tools:
Docker Compose : to create runtime environments with multiple docker containers in one machine.
Docker Machine : to dynamically run Docker commands against remote machines or the local/default one.
Docker Kitematic : to build and run containers through a graphical user interface (GUI).
VirtualBox : to run Docker.
Docker client: to create and run docker containers.
Once you have finished the installation procedure, you should be able to run docker-compose --help, which prints the
accepted subcommands.
7.3
This example is about a web application for creating a TODO list using Redis. It basically takes three steps:
Define how to run the web and database docker images in docker-compose.yml so they can be run together in an isolated
environment.
Create the web application and its Dockerfile to create the runtime environment anywhere.
Lastly, run docker-compose up and Compose will start and run your entire app.
7.3.1
Create an empty project called todos and copy the following docker-compose.yml inside the project directory.
docker-compose.yml
web:
build: .
ports:
- "8000:8000"
links:
- redis
redis:
image: redis
This file specifies two containers: web, which contains our Java code to launch a web server, and redis, the Redis server. Notice that in the first case the build property appears, whereas in the second case it is the image property. This means that for the web container Docker needs to build an image from a Dockerfile in our local file system, while for the Redis server the image is downloaded from the Docker Hub repository.
With this docker-compose.yml file, Compose simulates two machines with different IP addresses on the same local network, because they are linked (i.e. the links property). However, only the web container is exposed externally, through port 8000 of our docker-machine: Docker creates a binding between this port and the container's port 8000.
7.3.2
Now, let's create the Dockerfile that Compose requires, in the same directory as the docker-compose.yml file, with the following contents.
Dockerfile
FROM ubuntu:14.04
MAINTAINER javacodegeeks
RUN apt-get update && apt-get install -y python-software-properties software-properties-common
RUN add-apt-repository ppa:webupd8team/java
RUN echo "oracle-java8-installer shared/accepted-oracle-license-v1-1 boolean true" | debconf-set-selections
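The listing above is cut off in the original. A plausible continuation, matching the description that follows (install Java and Maven, copy the project, build it and start the server class - the paths and build command are assumptions, not taken from the source), might be:
RUN apt-get install -y oracle-java8-installer maven
ADD . /todos
WORKDIR /todos
RUN mvn package
EXPOSE 8000
CMD ["mvn", "exec:java", "-Dexec.mainClass=com.javacodegeeks.todolist.TodoServer"]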
This Dockerfile builds our todolist project with Maven and starts a specific Java class. Thus, it is necessary to convert our todolist project into a Maven project. To do so, copy the following pom.xml into the project directory (the same place as the Dockerfile and docker-compose.yml).
pom.xml
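The pom.xml content is not visible in the original capture. A minimal, hedged sketch of its dependencies section, based on the libraries named below (lettuce, Jackson and Commons IO; the version numbers are assumptions), could be:
<dependencies>
    <dependency>
        <groupId>biz.paluch.redis</groupId>
        <artifactId>lettuce</artifactId>
        <version>3.4.3.Final</version>
    </dependency>
    <dependency>
        <groupId>com.fasterxml.jackson.core</groupId>
        <artifactId>jackson-databind</artifactId>
        <version>2.7.4</version>
    </dependency>
    <dependency>
        <groupId>commons-io</groupId>
        <artifactId>commons-io</artifactId>
        <version>2.5</version>
    </dependency>
</dependencies>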
This POM file uses lettuce (a Java Redis client), and Jackson and Commons IO as utility libraries, for coding a basic HTTP server in only one Java class. Create this class with the following code:
src/main/java/com/javacodegeeks/todolist/TodoServer.java
package com.javacodegeeks.todolist;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.MalformedURLException;
import java.net.URL;
import java.util.List;

import org.apache.commons.io.IOUtils;

import com.fasterxml.jackson.databind.ObjectMapper;
import com.lambdaworks.redis.RedisClient;
import com.lambdaworks.redis.RedisConnection;
import com.lambdaworks.redis.ValueScanCursor;
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpHandler;
import com.sun.net.httpserver.HttpServer;
public class TodoServer {
public static void main(String[] args) throws Exception {
HttpServer server = HttpServer.create(new InetSocketAddress(8000), 0);
server.createContext("/", new MyHandler(System.getenv("REDIS_PORT")));
server.setExecutor(null); // creates a default executor
server.start();
}
static class MyHandler implements HttpHandler {
private RedisClient redisClient;
private RedisConnection connection;
private ObjectMapper mapper;
public MyHandler(String redisURL) throws MalformedURLException {
String hostPortURL = redisURL.substring("tcp://".length());
int separator = hostPortURL.indexOf(':');
redisClient = new RedisClient(hostPortURL.substring(0, separator),
        Integer.parseInt(hostPortURL.substring(separator + 1)));
connection = redisClient.connect();
mapper = new ObjectMapper();
}
public void handle(HttpExchange t) throws IOException {
String method = t.getRequestMethod();
OutputStream os = t.getResponseBody();
String response = "";
if (t.getRequestURI().getPath().equals("/todos")) {
if (method.equals("GET")) {
ValueScanCursor<String> cursor = connection.sscan("todos");
List<String> tasks = cursor.getValues();
response = mapper.writeValueAsString(tasks);
} else if (method.equals("PUT")) {
connection.sadd("todos", IOUtils.toString(t.getRequestBody()));
}
}
t.sendResponseHeaders(200, response.length());
os.write(response.getBytes());
os.close();
}
@Override
public void finalize() {
connection.close();
redisClient.shutdown();
}
}
}
Notice that the TodoServer is assuming that the Redis connection URL appears in an environment variable called REDIS_PORT.
However, how do we know the names of the available environment variables? Docker defines a standard way to do so here.
7.3.3
All the elements for the example are ready now. So, open your prompt, go to the project directory and run:
Docker-Compose up command
docker-compose up
Voila! At this moment, we have an HTTP server connected to Redis on our default docker-machine. In order to discover the IP of the default docker-machine, run the following command.
Docker-Machine ip command
$ docker-machine ip default
192.168.99.100
7.3.4
Using the output of the previous command, we can test the web server using the curl command. The available options are:
Create new tasks:
CURL command to create new tasks
curl -X PUT --data "new task" http://192.168.99.100:8000/todos
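The remaining options are cut off in the original; listing the stored tasks would simply be a GET against the same endpoint (shown here as a hedged sketch):
CURL command to list the tasks
curl http://192.168.99.100:8000/todos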
7.4
Chapter 8
Introduction
This post introduces Docker Engine's networking features in general, and specifically introduces configuring DNS in containers. This post assumes that you have the Docker Engine installed and that you know the basics of working with containers. We will not discuss service discovery from other networks here, since that needs a deeper knowledge of Docker tools like Docker Swarm. Since these posts focus on the basics of Docker, we will focus on networking containers within the same host. So let's get started with understanding the basics of networking in Docker.
8.2
The Docker Engine provides 3 networks by default - bridge, host and none. The command docker network ls lists
out all networks created by Docker. So on a default installation you will see something like this.
$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
36d51e62e518        bridge              bridge              local
18e8e68644ea        host                host                local
1f87f168df62        none                null                local
The bridge network is mapped to the docker0 network bridge on the Docker Engine's host. The ip address command
can be used to check the details of docker0.
$ ip address show label docker0
7: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
link/ether 02:45:01:c0:ba:c7 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:1ff:fec0:bac7/64 scope link
valid_lft forever preferred_lft forever
8.2.1
Every container created by the Docker engine is added to the default bridge network. The command docker network inspect can be used to inspect a network, like so:
$ docker network inspect bridge
[
{
"Name": "bridge",
"Id": "36d51e62e518d5430c798ce6eb68dc2737e9ad0555dd45097b04ef7597bce644",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.17.0.0/16",
"Gateway": "172.17.0.1"
}
]
},
"Internal": false,
"Containers": {},
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "1500"
},
"Labels": {}
}
]
Note the "Containers" section in the above picture. It lists the containers attached to the bridge network. This section will
be empty in a fresh Docker installation or when all containers are stopped.
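If we only care about that section, the inspect command also accepts a Go-template --format (-f) flag on recent Docker versions; for example, this prints just the "Containers" map (empty braces here, since nothing is attached yet):
$ docker network inspect bridge -f '{{json .Containers}}'
{}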
8.2.2
Creating a new network
Apart from the default networks, users can create new networks with the docker network create command. Let us create
a new network called example-network and inspect its contents now.
$ docker network create example-network && docker network inspect example-network
301ffd3fbc1e323d40d2bc687f2e5c6341c16011e91d57151577fa08c914a157
[
{
"Name": "example-network",
"Id": "301ffd3fbc1e323d40d2bc687f2e5c6341c16011e91d57151577fa08c914a157",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "172.18.0.0/16",
"Gateway": "172.18.0.1/16"
}
]
},
"Internal": false,
"Containers": {},
"Options": {},
"Labels": {}
}
]
Let us now add a container to this network and inspect again. The command docker run --network= can be used to
attach a container to a network of our choice, like below:
$ docker run -itd --name=infinite_loop --network=example-network enhariharan/infinite-loop
8da6157240e09d69c7490c1af4ce483b30ff6f7ba7804b45818c0975e7b8ba25
The container is added into the network example-network. Let us now investigate the network settings again.
$ docker network inspect example-network
[
{
"Name": "example-network",
"Id": "301ffd3fbc1e323d40d2bc687f2e5c6341c16011e91d57151577fa08c914a157",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "172.18.0.0/16",
"Gateway": "172.18.0.1/16"
}
]
},
"Internal": false,
"Containers": {
"8da6157240e09d69c7490c1af4ce483b30ff6f7ba7804b45818c0975e7b8ba25": {
"Name": "infinite_loop",
"EndpointID": " ccb0318b24dc3b0f1676f5aa16e5ea839adb715baa6bcd4ad16cdadb4aabd973",
"MacAddress": "02:42:ac:12:00:02",
"IPv4Address": "172.18.0.2/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
]
You can see that the "Containers" section now has the container infinite_loop created from the image.
8.2.3
Connecting containers
All containers are added to the bridge network by default. Containers within the same Docker network can connect to each other. Let us see that through a simple example. We will create two simple containers from the image enhariharan/infinite-loop in the custom network example-network created earlier. Then we'll inspect the network to see that the containers were added as expected.
$ docker run -itd --name=infinite_loop_1 --network=example-network enhariharan/infinite-loop
3c23fb845274e24ac8bdaa3216affc3b6d2755b58e6a8a4424c4008ca3c8022c
$
$ docker run -itd --name=infinite_loop_2 --network=example-network enhariharan/infinite-loop
9bd946658d4456618f60413e0cb41f6b4502eb60cfbff325a1bd7f218bffdd45
$
$ docker network inspect example-network
...
"Containers": {
"3c23fb845274e24ac8bdaa3216affc3b6d2755b58e6a8a4424c4008ca3c8022c": {
"Name": "infinite_loop_1",
"EndpointID": "0 c5fc22252e5d88ae746fb07d9dc9827ab6840a40db956eff2ed8bcfa042a099",
"MacAddress": "02:42:ac:12:00:02",
"IPv4Address": "172.18.0.2/16",
"IPv6Address": ""
},
"9bd946658d4456618f60413e0cb41f6b4502eb60cfbff325a1bd7f218bffdd45": {
"Name": "infinite_loop_2",
"EndpointID": "1 eecb3c8414026bfeaf63cfb23fb5319ca068c2c3c220ad486f19de02b2225bd",
"MacAddress": "02:42:ac:12:00:03",
"IPv4Address": "172.18.0.3/16",
"IPv6Address": ""
}
},
...
Containers infinite_loop_1 and infinite_loop_2 were created in detached mode. They have been added into example-network with the IP addresses 172.18.0.2 and 172.18.0.3 respectively, as confirmed above. If the --network= option is not provided, containers are added to the bridge network by default. Now, let us open a session into the container infinite_loop_1 and try to ping infinite_loop_2.
$ docker exec -it infinite_loop_1 sh
/ # ping -c 5 172.18.0.3
PING 172.18.0.3 (172.18.0.3): 56 data bytes
64 bytes from 172.18.0.3: seq=0 ttl=64 time=0.165 ms
64 bytes from 172.18.0.3: seq=1 ttl=64 time=0.124 ms
64 bytes from 172.18.0.3: seq=2 ttl=64 time=0.115 ms
64 bytes from 172.18.0.3: seq=3 ttl=64 time=0.115 ms
64 bytes from 172.18.0.3: seq=4 ttl=64 time=0.116 ms
--- 172.18.0.3 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max = 0.106/0.117/0.137 ms
So, as you can see, containers added into the same network can automatically find each other through their IP addresses. Next, let us see how to set up DNS in Docker so that we do not need to use the IP addresses to communicate.
8.3
Configuring DNS in containers
The Docker Engine uses these three files within every container to configure its DNS:
/etc/hostname - This file maps the container IP address to a name.
/etc/hosts - This file maps other containers' IP addresses to names.
/etc/resolv.conf - This file contains the IP addresses of the DNS servers to query if the container cannot resolve a name to an IP address.
These files can be set up and manipulated using the docker run command. The hostname for a container is set in /etc/hostname using the --hostname= option.
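For example, both files can be inspected in a throw-away container; the hostname demo-host and the DNS server 8.8.8.8 below are just illustrative values, and the resolv.conf may contain additional search or options lines:
$ docker run --rm --hostname=demo-host --dns=8.8.8.8 busybox sh -c 'cat /etc/hostname; cat /etc/resolv.conf'
demo-host
nameserver 8.8.8.8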
The entry for container1 can be put into the /etc/hosts file of container2 using the --link= option of docker run. Let us try one more example now: we will link infinite_loop_2 to infinite_loop_1 and ping container1 from within container2, using the hostname of infinite_loop_1.
$ docker run -itd --name=infinite_loop_1 --hostname=container1 enhariharan/infinite-loop
c97fe6577f1e8b2ca90b04157a299c45becadea5a5ccd61564c8d279d98c3719
$
$ docker run -itd --name=infinite_loop_2 --hostname=container2 --link=infinite_loop_1 enhariharan/infinite-loop
2a26adfae9fb06038a823e4fea9a5118875abbdb8e023502eda96c91d4c79d6c
$
$ docker exec -it infinite_loop_2 cat /etc/hosts
127.0.0.1       localhost
::1             localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.3      infinite_loop_1 container1
172.17.0.4      container2
$
$ docker exec -it infinite_loop_2 ping -c 5 container1
PING container1 (172.17.0.3): 56 data bytes
64 bytes from 172.17.0.3: seq=0 ttl=64 time=0.175 ms
64 bytes from 172.17.0.3: seq=1 ttl=64 time=0.118 ms
64 bytes from 172.17.0.3: seq=2 ttl=64 time=0.115 ms
64 bytes from 172.17.0.3: seq=3 ttl=64 time=0.123 ms
64 bytes from 172.17.0.3: seq=4 ttl=64 time=0.118 ms
--- container1 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max = 0.115/0.129/0.175 ms
But the fun with discovering containers in the bridge network sort of stops here. To connect to containers without using their IP addresses and without linking to them at docker run time as above, we need to use Docker's user-defined networks. Let us see the basics of that next.
8.4
User-defined networks
There are some important differences between connecting containers in user-defined networks and within the bridge network. One such difference is that the --link option of docker run will not work in user-defined networks. Let us see how we can get the same functionality as above using user-defined networks. We will add the containers infinite_loop_1 and infinite_loop_2 into a user-defined network called example_network and ping infinite_loop_1 from within infinite_loop_2.
$ docker run -itd --name=infinite_loop_1 --hostname=container1 --network=example_network
enhariharan/infinite-loop
d91757ae6e4dab01d9a37e73ec9a314c15ed195e5763fb6d9898ceb45dbc0964
$
$ docker run -itd --name=infinite_loop_2 --hostname=container2 --network=example_network
enhariharan/infinite-loop
3ab4dad71bad7114025db7330686cb23114c76f705f9e3489a2763aa74970f2b
$
$ docker exec -it infinite_loop_2 ping -c 4 infinite_loop_1
PING infinite_loop_1 (172.20.0.2): 56 data bytes
64 bytes from 172.20.0.2: seq=0 ttl=64 time=0.111 ms
64 bytes from 172.20.0.2: seq=1 ttl=64 time=0.122 ms
64 bytes from 172.20.0.2: seq=2 ttl=64 time=0.119 ms
64 bytes from 172.20.0.2: seq=3 ttl=64 time=0.131 ms
--- infinite_loop_1 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.111/0.120/0.131 ms
$
$ docker exec -it infinite_loop_2 ping -c 4 container1
ping: bad address container1
From the above snippet, it can be seen that containers in the same network are visible to each other by their container names, but not by their host names, since pinging the hostname container1 does not work.
The recommended way is to add a network alias to each container, apart from the hostname. This network alias can then be used by other containers to connect to it. Let us try the above steps again, but setting a network alias this time.
$ docker run -itd --name=infinite_loop_1 --hostname=container1 --network=example_network --network-alias=container1_alias enhariharan/infinite-loop
c80675335e3a235b84e317d5a160324e7add9421d197524a26ef2c461a679f1c
$
$ docker run -itd --name=infinite_loop_2 --hostname=container2 --network=example_network --network-alias=container2_alias enhariharan/infinite-loop
532b2d3d2694ed41c3abe44ae32671619b312b9a5c33604f8eaacd620b1d2874
$
$ docker exec -it infinite_loop_2 ping -c 4 container1_alias
PING container1_alias (172.20.0.2): 56 data bytes
64 bytes from 172.20.0.2: seq=0 ttl=64 time=0.127 ms
64 bytes from 172.20.0.2: seq=1 ttl=64 time=0.115 ms
64 bytes from 172.20.0.2: seq=2 ttl=64 time=0.119 ms
64 bytes from 172.20.0.2: seq=3 ttl=64 time=0.113 ms
--- container1_alias ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.113/0.118/0.127 ms
So this is how Docker Engine's embedded DNS can be configured so that containers can communicate with other containers within the same host and within the same network. To communicate across networks or across hosts, an overlay network must be configured. Also, a cluster of Docker hosts can be created using tools like Docker Swarm. These topics are outside the scope of this post and will be taken up in a future post.
8.5
Summary
In this post, we were introduced to the basics of Docker networking. We learned the types of networks supported and how to create a user-defined bridge network. We also learned how to add containers to specific networks. Then we saw how containers can communicate with one another through the embedded DNS server provided by the Docker Engine.
Chapter 9
9.1
Installation
Note: Docker requires a 64-bit system with a kernel version equal to or higher than 3.10.
We can install Docker simply via apt-get, without the need of adding any repository, just by installing the docker.io package:
sudo apt-get update
sudo apt-get install docker.io
For more details, you can follow the Install Docker on Ubuntu Tutorial.
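Once installed, we can verify that both the Docker client and the daemon respond (the version numbers shown will vary with your installation):
sudo docker version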
9.2
Pulling an image
If you have used Docker before, you (should) already know that a container is an instance of an image, so, first, we need one. The easiest way to obtain Docker images is to pull them from Docker Hub. We could pull, for example, an Nginx image:
docker pull nginx
You can check that you have the image on your system executing:
docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
nginx               latest              abf312888d13        2 weeks ago         181.5 MB
9.3
Running containers
Now that we have a Docker image, we can easily create a container from it.
As said before, a container is just an instance of an image. We can have several different containers from a single image, running
at the same time.
The docker run command creates the container, and then starts it. So, running a container from an image is as easy as executing:
docker run <image-name|image-id>[:tag]
Or, equivalently, using the image ID:
docker run abf312888d13
For the previous examples, Docker would create the container from the image that already exists on our disk. But we can also directly run containers for which we don't have the image on disk; Docker is clever enough to know that it has to pull the image if it cannot be found locally. So, we could execute:
docker run nginx:1.10
This is an image that we do not have locally, so Docker will do a pull followed by the run, after saying that it was unable to find the image:
Unable to find image nginx:1.10 locally
9.3.1
Running containers in detached mode
If you ran the container, you would have seen that it runs attached to the console and that, if you terminate the process, the container is stopped.
Usually, we want to run the containers in the background, i.e. detached. For that, we have to use the -d option:
docker run -d nginx
With this option, Docker will show the container ID, and then detach the container.
9.3.2
Naming the containers
The random names generated by Docker do not say anything about the container, so we should always give a meaningful name to the container. This is achieved with the --name option, e.g.:
docker run -d --name=nginx1 nginx
9.3.3
Publishing ports
An essential option for most containers is port publishing. If we are running a web server, as in this example, it is obviously essential.
For this, we have to use the -p (--publish) option, which follows this format:
docker run -p <host-port>:<container-port> <image>
So, for example, if we would want to map the port 80 of the container to the port 80 of the host, we would execute:
docker run -d --name=nginx2 -p 80:80 nginx
Now, if we open localhost (or 127.0.0.1) in a browser, we should see the Nginx welcome page. That is, we would be
accessing the web root of the web server running in the container.
Of course, we can use any (free) port in the host:
docker run -d --name=nginx3 -p 8080:80 nginx
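We can also check a published port from the host's command line instead of a browser; for example, assuming curl is installed on the host, this should return the headers of the Nginx welcome page:
curl -I http://localhost:8080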
9.3.4
Running containers interactively
On some occasions, we just want to get a command line inside the container. For this, we have to use two options, -i (--interactive) and -t (--tty), and also specify the shell:
docker run -i -t <image> </path/to/shell>
For example:
docker run --name=nginx4 -i -t nginx /bin/bash
9.3.5
Specifying a username
There is also the possibility of specifying the user the container has to be run as, with the -u (--user) option:
docker run -u <username> <image>
Applied to the previous example, for running the container not with root user, but with nginx:
docker run --name=nginx6 -u nginx -it nginx /bin/bash
9.4
Restarting containers
Now, the nginx1 container is stopped. If we try to create a container with the same name:
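The attempt would be rejected with an error similar to the following (the exact wording and the container ID vary with the Docker version):
docker run -d --name=nginx1 nginx
docker: Error response from daemon: Conflict. The container name "/nginx1" is already in use by container <id>. You have to remove (or rename) that container to be able to reuse that name.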
The message is quite explanatory: we cannot create a container with the name of another container (that's why we have used different names for the containers in this example), even if it is stopped. To reuse the name, we would have to delete or rename the existing container.
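For example, either of the following would free the name (the new name nginx1-old is just an example):
docker rm nginx1
docker rename nginx1 nginx1-old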
If what we want to do is run that container, that is, that specific instance of the nginx image that we created before, we do not have to use the run command again, but restart:
docker restart <container-name|container-id>
9.5
Summary
This example has shown how to run Docker containers, that is, instances of a Docker image. We have seen several options for doing so. Apart from that, we have also seen how to restart existing containers, which is slightly different from creating them.
Chapter 10
10.1
Installation
Note: Docker requires a 64-bit system with a kernel version equal to or higher than 3.10.
We can install Docker simply via apt-get, without the need of adding any repository, just installing the docker.io package:
sudo apt-get update
sudo apt-get install docker.io
For more details, you can follow the Install Docker on Ubuntu Tutorial.
10.2
Creating some containers
10.2.1
10.2.2
docker run busybox
docker run busybox echo "Hello world"
docker run --name=mybusybox busybox
docker run -it busybox
Note: after the last container execution, open a new terminal, since we have set the interactive mode for it.
10.3
Listing containers
The Docker command for listing containers is ps (yes, not a very good name). So, let's try it:
docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
e723bc39fdd9        busybox             "sh"                3 minutes ago       Up 3 minutes                            jolly_bhabha
The first thing that probably will come to our minds is that we are just seeing a single container, when we actually instantiated
several of them. This is because the ps command only shows the active, running containers. And, actually, we only have a single
container running: the one we instantiated interactively.
If we want to show all the containers, we have to pass the -a (--all) option to ps:
docker ps -a
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS                      PORTS               NAMES
e723bc39fdd9        busybox             "sh"                     3 minutes ago       Up 3 minutes                                    jolly_bhabha
642aec18d638        busybox             "sh"                     3 minutes ago       Exited (0) 3 minutes ago                        mybusybox
fe574b4fcca5        busybox             "echo Hello world"       3 minutes ago       Exited (0) 3 minutes ago                        grave_ramanujan
0583be5c5598        busybox             "sh"                     3 minutes ago       Exited (0) 3 minutes ago                        pedantic_hoover
e4ceead785b3        busybox             "sh"                     3 minutes ago       Exited (0) 3 minutes ago                        cranky_murdock
10.3.1
You may find the ps output not very pretty, since the lines can be wrapped if the terminal window is not wide enough. Fortunately, this command allows custom formatting. This is achieved with the --format option. The format is specified as a Go template, which is pretty easy:
docker ps --format "{{.<column-name1>}}[ {{.<column-nameN>}} ]"
specifying as many columns as we want. Of course, the formatting is compatible with the -a option.
For example, if we just wanted to see the names and statuses, we could execute:
docker ps -a --format "{{ .Names }} {{ .Status }}"
The field names are case sensitive, and for some cases the field name we have to use with --format is not very intuitive. So this is the list of the field names we have to use for each column:
CONTAINER ID: ID
IMAGE: Image
COMMAND: Command
CREATED: RunningFor
STATUS: Status
PORTS: Ports
NAMES: Names
Note that the fields named CONTAINER ID and CREATED use completely different names for the formatting. If we want to show the header of each column, we can add the table identifier before the format, for example:
docker ps -a --format "table {{ .ID }} {{ .Names }}"
CONTAINER ID NAMES
e723bc39fdd9 jolly_bhabha
642aec18d638 mybusybox
fe574b4fcca5 grave_ramanujan
0583be5c5598 pedantic_hoover
e4ceead785b3 cranky_murdock
And, to align each column, we can use the tab character (\t) between each field:
docker ps -a --format "table {{ .ID }}\t{{ .Names }}"
CONTAINER ID        NAMES
e723bc39fdd9        jolly_bhabha
642aec18d638        mybusybox
fe574b4fcca5        grave_ramanujan
0583be5c5598        pedantic_hoover
e4ceead785b3        cranky_murdock
10.3.2
We have seen how useful custom formatting can be when listing containers. But, on the other hand, it requires typing quite a lot, which may make it seem not worth the effort.
Fortunately, Docker allows us to define a default formatting configuration for the ps command, so that every time we execute it, it will show the output as defined, without the need of specifying the --format option.
For this, we just have to open the config.json file, located in the .docker/ folder of our home directory, with our favorite editor:
vim ~/.docker/config.json
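The key that controls the default ps output is psFormat. For example, an entry like the following (merged into whatever the file already contains) makes docker ps print only the names and statuses, which is the format used in the listing below:
{
  "psFormat": "table {{.Names}}\\t{{.Status}}"
}
After saving the file, a plain docker ps -a already uses this format: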
docker ps -a
NAMES               STATUS
jolly_bhabha        Exited (0) 10 minutes ago
mybusybox           Exited (0) 10 minutes ago
grave_ramanujan     Exited (0) 10 minutes ago
pedantic_hoover     Exited (0) 10 minutes ago
cranky_murdock      Exited (0) 10 minutes ago
10.3.3
Using filters
The remaining interesting feature for container listing is the filtering of the results. This is useful if we are dealing with several
containers, and we just want information about some of them.
For filtering the output, the -f (--filter) option is used. The format is the following:
docker ps -f "key1=value1" -f "key2=value2" -f "keyN=valueN"
As you can see, we can specify as many filters as we want, but, for each of them, we have to specify the filter option.
For example, we could filter by the container name:
docker ps -a -f "name=mybusybox"
NAMES               STATUS
mybusybox           Exited (0) 15 minutes ago
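Other filters can be combined in the same way; for instance, filtering by status (which accepts values such as running or exited) would list all the stopped containers shown above:
docker ps -a -f "status=exited"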
10.4
Summary
In this tutorial, we have seen how to list containers created with Docker. But we have gone further than that: we have seen the options that the command for listing containers provides to make the output prettier and more suitable for our needs, and we have also seen how to save that configuration so it is used by default.