Docker Container
● http://people.irisa.fr/Anthony.Baire/docker-tutorial.pdf
● https://docker-curriculum.com/
● https://docs.docker.com/get-started/overview/
● https://docs.docker.com/get-started/
● https://www.cse.wustl.edu/~jain/cse570-18/ftp/m_21cdk.pdf
● https://www.cse.iitb.ac.in/~puru//courses/spring19/cs695/
● https://www.simplilearn.com/tutorials/docker-tutorial/
Introduction
Advantages of Virtualization
● Minimize hardware costs (CapEx) - Multiple virtual servers on one physical
hardware.
● Easily move VMs to other data centers
○ Provide disaster recovery. Hardware maintenance.
○ Follow the sun (active users) or follow the moon (cheap power)
● Consolidate idle workloads. Usage is bursty and asynchronous. Increase
device utilization.
● Conserve power - Free up unused physical resources
● Easier automation (Lower OpEx) - Simplified provisioning/administration of
hardware and software
● Scalability and Flexibility: Multiple operating systems
Problems of Virtualization
[source: https://www.docker.com/whatisdocker/]
What is Docker?
● Docker is an open platform for developing, shipping, and running applications.
● Docker enables you to separate your applications from your infrastructure.
● Docker provides the ability to package and run an application in a loosely isolated
environment called a container.
● The isolation and security allow you to run many containers simultaneously on a
given host.
● Docker provides tooling and a platform to manage the lifecycle of your containers:
○ Develop your application and its supporting components using containers.
○ The container becomes the unit for distributing and testing your application.
○ When you’re ready, deploy your application into your production environment, as a
container or an orchestrated service. This works the same whether your production
environment is a local data center, a cloud provider, or a hybrid of the two.
Docker Architecture
● Docker uses a client-server architecture.
● The Docker client talks to the Docker daemon, which does the heavy lifting of
building, running, and distributing your Docker containers.
● The Docker client and daemon can run on the same system, or you can
connect a Docker client to a remote Docker daemon.
● The Docker client and daemon communicate using a REST API, over UNIX
sockets or a network interface.
● Docker uses a technology called namespaces to provide the isolated
workspace called the container. When you run a container, Docker creates a
set of namespaces for that container.
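As a small illustration of namespace isolation (a minimal sketch, assuming the public alpine image is available from Docker Hub), the commands below show that a container only sees its own processes and hostname:
# The container gets its own PID namespace, so ps inside it lists only the container's processes, not the host's:
$ docker run --rm alpine ps
# Each container also gets its own hostname (UTS namespace):
$ docker run --rm alpine hostname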
Docker Architecture
● The Docker daemon: The Docker daemon (dockerd) listens for Docker API
requests and manages Docker objects such as images, containers, networks,
and volumes. A daemon can also communicate with other daemons to
manage Docker services.
● The Docker client: The Docker client (docker) is the primary way that many
Docker users interact with Docker. When you use commands such as docker
run, the client sends these commands to dockerd, which carries them out.
The docker command uses the Docker API. The Docker client can
communicate with more than one daemon.
● Docker objects: When you use Docker, you are creating and using images,
containers, networks, volumes, plugins, and other objects. This section is a
brief overview of some of those objects.
○ Images: An image is a read-only template with instructions for creating a Docker
container. Often, an image is based on another image, with some additional
customization.
○ Containers: A container is a runnable instance of an image. You can create, start,
stop, move, or delete a container using the Docker API or CLI. You can connect a
container to one or more networks, attach storage to it, or even create a new
image based on its current state.
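To make the image/container relationship concrete, here is a short sketch (the container name demo and image tag my-ubuntu:customized are hypothetical) of running a container from an image and saving its current state as a new image:
# Start an interactive container from the ubuntu image (the "class"):
$ docker run -it --name demo ubuntu bash
# ...install packages or edit files inside, then exit...
# Save the container's current state as a new image:
$ docker commit demo my-ubuntu:customized
# Remove the stopped container; the new image remains:
$ docker rm demo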
Docker vs VM
Implementation and Usage
Installation
Link: https://docs.docker.com/get-docker/
$ curl -fsSL https://get.docker.com -o get-docker.sh
$ sudo sh get-docker.sh
Configure Docker to start on boot
1. To automatically start Docker and containerd on boot for other distros, use the commands
below:
$ sudo systemctl enable docker.service
$ sudo systemctl enable containerd.service
2. Check the Docker version:
$ docker version [OPTIONS]
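As a quick check that the installation works (a standard post-install step, assuming network access to Docker Hub), run the hello-world test image:
# Pulls a tiny test image and runs it; a short welcome message indicates that the
# daemon, the client, and registry access are all working:
$ sudo docker run hello-world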
Docker Daemon
● The Docker daemon is a service that runs on your host operating system.
● It currently only runs on Linux because it depends on a number of Linux
kernel features, but there are a few ways to run Docker on MacOS and
Windows too.
● Start the daemon manually: dockerd
● The daemon keeps all of its data (images, containers, volumes, networks) under one
directory. By default this directory is: /var/lib/docker
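A small sketch of how to check where the daemon keeps its data and whether it is running (the format string below assumes the DockerRootDir field of docker info; systemctl applies when the daemon is managed by systemd):
# Ask the daemon where its data root is (prints /var/lib/docker by default):
$ docker info --format '{{ .DockerRootDir }}'
# Check the daemon's status:
$ sudo systemctl status docker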
Docker Images
● A Docker image is shipped with the application code and can be run on any
platform where Docker Engine is installed.
● Let's take an example: a developer writes code, describes all of the code,
dependencies, and installables in one file called a Dockerfile, and builds an
image from it.
Docker Images
Local images can be listed with: docker images
Docker Images
● IMAGE ID is the first 12 characters of the true identifier for an image. You can
create many tags of a given image, but their IDs will all be the same.
● VIRTUAL SIZE is virtual because it's adding up the sizes of all the distinct underlying
layers. This means that the sum of all the values in that column is probably much
larger than the disk space used by all of those images.
● The value in the REPOSITORY column comes from the -t flag of the docker build
command, or from running docker tag on an existing image. You're free to tag images using a
nomenclature that makes sense to you, but know that docker will use the tag as the
registry location in a docker push or docker pull.
Docker Images
● The full form of a tag is [REGISTRYHOST/][USERNAME/]NAME[:TAG]. For ubuntu
above, REGISTRYHOST is inferred to be registry.hub.docker.com. So if you plan on
storing your image called my-application in a registry at docker.example.com, you
should tag that image docker.example.com/my-application.
● The TAG column is just the [:TAG] part of the full tag. This is unfortunate terminology.
● The latest tag is not magical, it's simply the default tag when you don't specify a tag.
● You can have untagged images only identifiable by their IMAGE IDs. These will show
<none> for TAG and REPOSITORY. It's easy to forget about them.
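Following the naming rules above, a short sketch (my-application and docker.example.com are the illustrative names already used in this section):
# Build and tag locally:
$ docker build -t my-application:1.0 .
# Re-tag for a private registry so that push knows where to send the image:
$ docker tag my-application:1.0 docker.example.com/my-application:1.0
$ docker push docker.example.com/my-application:1.0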
Container
To use a programming metaphor, if an image is a class, then a container is an
instance of a class—a runtime object. Containers are hopefully why you're using
Docker; they're lightweight and portable encapsulations of an environment in
which to run applications.
● docker ps only outputs running containers. You can view all containers
(running or stopped) with docker ps -a.
● The NAMES column identifies a container; you can set it when starting a container via the --name flag.
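A brief sketch of starting a named container and listing it (webserver is a hypothetical name; nginx is just an example image):
# Start a detached container with an explicit name:
$ docker run -d --name webserver nginx
# Only running containers:
$ docker ps
# Running and stopped containers:
$ docker ps -a
# Stop and remove it by name:
$ docker stop webserver && docker rm webserver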
DTR (Docker Trusted Registry)
● DTR can be installed on your own infrastructure, so you can store your Docker
images securely, behind your firewall.
● DTR has a user interface that allows authorized users in your organization to
browse Docker images and review repository events.
● It even allows you to see what Dockerfile lines were used to produce the
image and, if security scanning is enabled, to see a list of all of the software
installed in your images.
DTR (Docker Trusted Registry)
● Efficiency: DTR can clean up unreferenced manifests and also cache images
for faster pulls.
● Security scanning: Image scanning is a built-in feature provided out of the box by DTR.
● Image signing: DTR has Notary built in, so you can use Docker Content Trust to
sign and verify images.
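A minimal sketch of signing with Docker Content Trust (the registry address and repository name below are hypothetical; DOCKER_CONTENT_TRUST is the standard environment variable that turns signing and verification on):
# Enable content trust for this shell session:
$ export DOCKER_CONTENT_TRUST=1
# Pushes are now signed via Notary, and pulls verify signatures:
$ docker push dtr.example.com/engineering/my-application:1.0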
Docker Image Commands
1. Downloading Docker Image:
○ Search Docker Hub for an image: docker search ubuntu
○ Pull a specific version: docker pull ubuntu:18.04
○ Pull the latest version: docker pull ubuntu
2. Listing Images
○ docker images
○ docker images -a (show all images)
1. Docker gives you the capability to create your own Docker images, and it
can be done with the help of a Dockerfile.
2. A Dockerfile is a text file which contains a series of commands or
instructions.
3. These instructions are executed in the order in which they are written.
4. Execution of these instructions takes place on a base image.
5. On building the Dockerfile, the successive actions form a new image from
the base parent image.
Creating a Dockerfile
● Create a file called Dockerfile and edit it using any text editor.
● Please note that the name of the file has to be "Dockerfile" with "D" as capital.
● sudo gedit Dockerfile
● Write your Dockerfile using instructions such as the following.
FROM ubuntu
MAINTAINER [email protected]
RUN apt-get update
RUN apt-get install -y mininet
CMD ["echo", "Image created"]
Creating a Dockerfile
● Build the image from the Dockerfile with: docker build -t ImageName:TagName Dir
Options
● -t − assigns a tag to the image
● ImageName − the name you want to give to your image
● TagName − the tag you want to give to your image
● Dir − the directory where the Dockerfile is present
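Putting the options together, a short sketch of building and running the mininet Dockerfile above (mininet-img is a hypothetical image name; the trailing dot is the build-context directory containing the Dockerfile):
$ docker build -t mininet-img:1.0 .
# The CMD from the Dockerfile runs by default when the container starts:
$ docker run --rm mininet-img:1.0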
● The WORKDIR instruction sets the working directory for the instructions that follow, for example:
WORKDIR /newtemp
CMD pwd
Sample Dockerfile
FROM node:8.11-slim
ENV workdirectory /usr/node
WORKDIR $workdirectory
WORKDIR app
COPY package.json .
RUN ls -ll &&\
npm install
# command executable and version
ENTRYPOINT ["node"]
Sample Dockerfile
# NODE_VERSION must be declared with ARG before FROM so it can be substituted;
# the default here mirrors the previous example and can be overridden with --build-arg.
ARG NODE_VERSION=8.11-slim
FROM node:$NODE_VERSION
ENV workdirectory /usr/node
WORKDIR $workdirectory
WORKDIR app
COPY package.json .
RUN ls -ll &&\
npm install
RUN useradd abc
USER abc
ADD index.js .
RUN ls -l
EXPOSE 3070
ENTRYPOINT ["node"]
Pushing and Pulling to and from Docker Hub
● Log in to Docker Hub from the command line with docker login, just with your own
user name and email that you used for the account. Enter your password when prompted.
If everything worked you will get a message confirming that the login succeeded.
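A sketch of the full push workflow (yourhubusername is a placeholder; verse_gapminder is the image used in the save/load example below, and the :firsttry tag is illustrative):
$ docker login --username=yourhubusername
# Docker Hub repositories are namespaced by user, so tag the image accordingly:
$ docker tag verse_gapminder yourhubusername/verse_gapminder:firsttry
$ docker push yourhubusername/verse_gapminder:firsttry
# Anyone can then pull it back down:
$ docker pull yourhubusername/verse_gapminder:firsttry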
A solution to these problems can be to save the Docker image locally as a tar archive, which
you can then easily load back into an image when needed.
● To save a Docker image after you have pulled, committed or built it you use the
docker save command.
For example, let's save a local copy of the verse_gapminder docker image we
made:
docker save verse_gapminder > verse_gapminder.tar
● If we want to load that Docker image from the archived tar file in the future,
we can use the docker load command:
docker load --input verse_gapminder.tar
Docker Networking
● User-defined bridge networks are best when you need multiple containers
to communicate on the same Docker host.
● Host networks are best when the network stack should not be isolated from
the Docker host, but you want other aspects of the container to be isolated.
● Overlay networks are best when you need containers running on different
Docker hosts to communicate, or when multiple applications work together
using swarm services.
● Macvlan networks are best when you are migrating from a VM setup or need
your containers to look like physical hosts on your network, each with a
unique MAC address.
● Third-party network plugins allow you to integrate Docker with specialized
network stacks.
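As an illustration of the first case, a minimal sketch of a user-defined bridge network (the network and container names are hypothetical; containers on the same user-defined bridge can reach each other by name):
# Create the network and attach two containers to it:
$ docker network create my-bridge
$ docker run -d --name db --network my-bridge redis
$ docker run -d --name app --network my-bridge nginx
# From "app", the other container is reachable simply as the hostname "db".
# Inspect the network to see which containers are attached:
$ docker network inspect my-bridge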
More Reading on Container Networking (Not in Syllabus)