While virtual machines are enabled by virtualization software on the host, containers are enabled by the Linux kernel itself. You'll learn about container technologies later, when you can try them out for yourself. You'll need to have Docker installed for that, so let's look at how it fits into the container story.

2.1.2 Introducing the Docker container platform


While container technologies have existed for a long time, they only became widely known with the rise of Docker. Docker was the first container platform that made containers easily portable across different computers. It simplified the process of packaging up an application and all its libraries and other dependencies - even the entire OS file system - into a simple, portable package that can be used to deploy the application on any computer running Docker.

Introducing containers, images and registries

Docker is a platform for packaging, distributing and running applications. As mentioned earlier, it allows you to package your application along with its entire environment. This can be just a few dynamically linked libraries required by the app, or all the files that are usually shipped with an operating system. Docker allows you to distribute this package via a public repository to any other Docker-enabled computer.

Figure 2.4 The three main Docker concepts are images, registries and containers

Figure 2.4 shows the three main Docker concepts that appear in the process I've just described. Here's what each of them is:

Images—A container image is something you package your application and its environment into. It's like a zip file or a tarball. It contains the whole filesystem that the application will use and additional metadata, such as the path to the executable file to run when the image is executed, the ports the application listens on, and other information about the image. You can inspect this metadata yourself, as shown in the example after this list.
Registries—A registry is a repository of container images that enables the exchange of images between different people and computers. After you build your image, you can either run it on the same computer, or push (upload) the image to a registry and then pull (download) it to another computer. Some registries are public, allowing anyone to pull images from them, while others are private and only accessible to individuals, organizations or computers that have the required authentication credentials.
Containers—A container is instantiated from a container image. A running container is a normal process running in the host operating system, but its environment is isolated from that of the host and from the environments of other processes. The file system of the container originates from the container image, but additional file systems can also be mounted into the container. A container is usually resource-restricted, meaning it can only use the amount of resources, such as CPU and memory, that has been allocated to it.

Building, distributing, and running a container image

To understand how containers, images and registries relate to each other, let’s
look at how to build a container image, distribute it through a registry and
create a running container from the image. These three processes are shown
in figures 2.5 to 2.7.

Figure 2.5 Building a container image


As shown in figure 2.5, the developer first builds an image, and then pushes it
to a registry, as shown in figure 2.6. The image is now available to anyone
who can access the registry.

Figure 2.6 Uploading a container image to a registry
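
In practice, these first two steps might look like the following sketch. The image name registry.example.com/hello:1.0 and the application are hypothetical; docker build expects a Dockerfile in the directory you point it at (here, the current directory):

  $ docker build -t registry.example.com/hello:1.0 .   # package the app into an image
  $ docker push registry.example.com/hello:1.0         # upload the image to the registry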

As the next figure shows, another person can now pull the image to any other
computer running Docker and run it. Docker creates an isolated container
based on the image and invokes the executable file specified in the image.

Figure 2.7 Running a container on a different computer
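
On that other computer, the last two steps might look like this, reusing the hypothetical image name from the previous sketch:

  $ docker pull registry.example.com/hello:1.0   # download the image from the registry
  $ docker run registry.example.com/hello:1.0    # create a container and run the image's executable

Strictly speaking, the explicit pull is optional, because docker run automatically pulls an image that isn't present locally.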


Running the application on any computer is possible because the environment of the application is decoupled from the environment of the host.

Understanding the environment that the application sees

When you run an application in a container, it sees exactly the file system
content you bundled into the container image, as well as any additional file
systems you mount into the container. The application sees the same files
whether it’s running on your laptop or a full-fledged production server, even
if the production server uses a completely different Linux distribution. The
application typically has no access to the files in the host’s operating system,
so it doesn’t matter if the server has a completely different set of installed
libraries than your development computer.

For example, if you package your application with the files of the entire Red Hat Enterprise Linux (RHEL) operating system and then run it, the application will think it's running inside RHEL, whether you run it on a Fedora-based or a Debian-based computer. The Linux distribution installed on the host is irrelevant. The only thing that might be important is the kernel version and the kernel modules it loads. Later, I'll explain why.
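
You can see this for yourself with a sketch like the following, which uses Red Hat's freely available Universal Base Image (ubi8/ubi) as a stand-in for RHEL. The same command prints different results on the host and in the container:

  $ cat /etc/os-release                  # reports your host's distribution
  $ docker run --rm registry.access.redhat.com/ubi8/ubi \
      cat /etc/os-release                # reports Red Hat Enterprise Linux

Whatever distribution the host runs, the container reports the one whose files were packaged into the image.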

This is similar to creating a VM image by creating a new VM, installing an operating system and your app in it, and then distributing the whole VM image so that other people can run it on different hosts. Docker achieves the same effect, but instead of using VMs for app isolation, it uses Linux container technologies to achieve (almost) the same level of isolation.

Understanding image layers

Unlike virtual machine images, which are big blobs of the entire filesystem
required by the operating system installed in the VM, container images
consist of layers that are usually much smaller. These layers can be shared
and reused across multiple images. This means that only certain layers of an
image need to be downloaded if the rest were already downloaded to the host
as part of another image containing the same layers.

Layers make image distribution very efficient but also help to reduce the
storage footprint of images. Docker stores each layer only once. As you can
see in the following figure, two containers created from two images that
contain the same layers use the same files.
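
You can observe this layering with the docker image history command, which lists the layers an image consists of, along with the instruction that created each one and its size (a sketch; busybox again serves as an example image):

  $ docker image history busybox   # one line per layer, newest on top

Layer sharing is also visible when you pull images: layers you already have locally are reported as "Already exists" instead of being downloaded again.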

Figure 2.8 Containers can share image layers


The figure shows that containers A and B share an image layer, which means
that applications A and B read some of the same files. In addition, they also
share the underlying layer with container C. But if all three containers have
access to the same files, how can they be completely isolated from each
other? Are changes that application A makes to a file stored in the shared
layer not visible to application B? They aren’t. Here’s why.

The filesystems are isolated by the Copy-on-Write (CoW) mechanism. The filesystem of a container consists of read-only layers from the container image and an additional read/write layer stacked on top. When an application running in container A changes a file in one of the read-only layers, the entire file is copied into the container's read/write layer and the file contents are changed there. Since each container has its own writable layer, changes to shared files are not visible in any other container.

When you delete a file, it is only marked as deleted in the read/write layer, but it's still present in one or more of the layers below. It follows that deleting files never reduces the size of the image.
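
You can watch the copy-on-write mechanism in action with docker diff, which lists the files a container has added (A), changed (C) or deleted (D) relative to its image. A minimal sketch using the busybox example image:

  $ docker run -d --name cow-demo busybox sleep 300     # keep a container running
  $ docker exec cow-demo sh -c 'echo x >> /etc/passwd'  # change a file from a read-only layer
  $ docker diff cow-demo                                # the file now appears in the read/write layer
  $ docker rm -f cow-demo                               # clean up

Only the container's own read/write layer is affected; the image layer, and any other container built on it, still sees the original file.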

WARNING

Even seemingly harmless operations such as changing permissions or ownership of a file result in a new copy of the entire file being created in the read/write layer. If you perform this type of operation on a large file or many files, the image size may swell significantly.
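
As a hypothetical illustration, consider this Dockerfile fragment, where big-file.bin stands for any large file:

  FROM alpine
  COPY big-file.bin /data/big-file.bin   # first layer: contains the file
  RUN chmod 600 /data/big-file.bin       # second layer: a full copy of the file,
                                         # recorded only because its metadata changed

The resulting image carries the file twice, once in each layer, roughly doubling the space it occupies.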

Understanding the portability limitations of container images

In theory, a Docker-based container image can be run on any Linux computer running Docker, but one small caveat exists: containers don't have their own kernel. If a containerized application requires a particular kernel version, it may not work on every computer. If a computer is running a different version of the Linux kernel or doesn't load the required kernel modules, the app can't run on it. This scenario is illustrated in the following figure.

Figure 2.9 If a container requires specific kernel features or modules, it may not work
everywhere
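
You can easily confirm that a container uses the host's kernel rather than one of its own: uname -r prints the same kernel version inside and outside the container (again using busybox as an example image):

  $ uname -r                           # kernel version on the host
  $ docker run --rm busybox uname -r   # prints the same version inside a container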
