
The figure shows that containers A and B share an image layer, which means that applications A and B read some of the same files. In addition, they also share the underlying layer with container C. But if all three containers have access to the same files, how can they be completely isolated from each other? Are changes that application A makes to a file stored in the shared layer not visible to application B? They aren’t. Here’s why.

The filesystems are isolated by the Copy-on-Write (CoW) mechanism. The filesystem of a container consists of read-only layers from the container image and an additional read/write layer stacked on top. When an application running in container A changes a file in one of the read-only layers, the entire file is copied into the container’s read/write layer and the file contents are changed there. Since each container has its own writable layer, changes to shared files are not visible in any other container.
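The copy-up behavior can be illustrated with a tiny shell simulation of a two-layer filesystem. This is a simplified model using plain directories, not a real overlay filesystem; the file names and contents are made up:

```shell
#!/bin/sh
set -e
cd "$(mktemp -d)"

# "lower" plays the role of the shared read-only image layer;
# "upperA" and "upperB" are the per-container writable layers.
mkdir -p lower upperA upperB
echo "original content" > lower/config.txt

# Reading a file: the container's own writable layer wins if it has a copy.
read_file() {  # $1 = writable layer dir, $2 = file name
  if [ -f "$1/$2" ]; then cat "$1/$2"; else cat "lower/$2"; fi
}

# Writing a file: the whole file is first copied up into the
# container's writable layer, then modified there.
write_file() { # $1 = writable layer dir, $2 = file name, $3 = new content
  cp "lower/$2" "$1/$2"    # copy-up
  echo "$3" > "$1/$2"      # change the container's own copy
}

write_file upperA config.txt "changed by A"

read_file upperA config.txt   # prints: changed by A
read_file upperB config.txt   # prints: original content
```

Container A sees its modified copy, while container B still reads the untouched file from the shared layer, which mirrors the isolation described above.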

When you delete a file, it is only marked as deleted in the read/write layer, but it’s still present in one or more of the layers below. It follows that deleting files never reduces the size of the image.

WARNING

Even seemingly harmless operations such as changing permissions or ownership of a file result in a new copy of the entire file being created in the read/write layer. If you perform this type of operation on a large file or many files, the image size may swell significantly.
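As an illustration, a Dockerfile like the following (a hypothetical sketch; the file name and size are made up) produces an image that stores the large file twice, even though the final filesystem appears to contain it only once, or not at all:

```dockerfile
FROM busybox
# Layer 1: create a 100 MB file.
RUN dd if=/dev/zero of=/data.bin bs=1M count=100
# Layer 2: changing only the ownership still copies the whole file
# into this layer, roughly doubling the image size.
RUN chown nobody /data.bin
# Layer 3: deleting the file only marks it as deleted; both earlier
# copies remain part of the image.
RUN rm /data.bin
```

You can inspect the size contributed by each layer of an image with docker history &lt;image&gt;.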

Understanding the portability limitations of container images

In theory, a Docker-based container image can be run on any Linux computer running Docker, but one small caveat exists, because containers don’t have their own kernel. If a containerized application requires a particular kernel version, it may not work on every computer. If a computer is running a different version of the Linux kernel or doesn’t load the required kernel modules, the app can’t run on it. This scenario is illustrated in the following figure.

Figure 2.9 If a container requires specific kernel features or modules, it may not work
everywhere
Container B requires a specific kernel module to run properly. This module is
loaded in the kernel in the first computer, but not in the second. You can run
the container image on the second computer, but it will break when it tries to
use the missing module.

And it’s not just about the kernel and its modules. It should also be clear that
a containerized app built for a specific hardware architecture can only run on
computers with the same architecture. You can’t put an application compiled
for the x86 CPU architecture into a container and expect to run it on an
ARM-based computer just because Docker is available there. For this you
would need a VM to emulate the x86 architecture.

2.1.3 Installing Docker and running a Hello World container


You should now have a basic understanding of what a container is, so let’s
use Docker to run one. You’ll install Docker and run a Hello World
container.

Installing Docker

Ideally, you’ll install Docker directly on a Linux computer, so you won’t have to deal with the additional complexity of running containers inside a VM running within your host OS. But, if you’re using macOS or Windows and don’t know how to set up a Linux VM, the Docker Desktop application will set it up for you. The Docker command-line interface (CLI) tool that you’ll use to run containers will be installed in your host OS, but the Docker daemon will run inside the VM, as will all the containers it creates.

The Docker Platform consists of many components, but you only need to
install Docker Engine to run containers. If you use macOS or Windows,
install Docker Desktop. For details, follow the instructions at
http://docs.docker.com/install.

Note

Docker Desktop for Windows can run either Windows or Linux containers.
Make sure that you configure it to use Linux containers, as all the examples
in this book assume that’s the case.

Running a Hello World container

After the installation is complete, you use the docker CLI tool to run Docker
commands. Let’s try pulling and running an existing image from Docker
Hub, the public image registry that contains ready-to-use container images
for many well-known software packages. One of them is the busybox image,
which you’ll use to run a simple echo "Hello world" command in your first
container.

If you’re unfamiliar with busybox, it’s a single executable file that combines
many of the standard UNIX command-line tools, such as echo, ls, gzip, and
so on. Instead of the busybox image, you could also use any other full-
fledged OS container image like Fedora, Ubuntu, or any other image that
contains the echo executable file.

Once you’ve got Docker installed, you don’t need to download or install
anything else to run the busybox image. You can do everything with a single
docker run command, by specifying the image to download and the
command to run in it. To run the Hello World container, the command and its
output are as follows:
$ docker run busybox echo "Hello World"
Unable to find image 'busybox:latest' locally #A
latest: Pulling from library/busybox #A
7c9d20b9b6cd: Pull complete #A
Digest: sha256:fe301db49df08c384001ed752dff6d52b4... #A
Status: Downloaded newer image for busybox:latest #A
Hello World #B

With this single command, you told Docker what image to create the
container from and what command to run in the container. This may not look
so impressive, but keep in mind that the entire “application” was downloaded
and executed with a single command, without you having to install the
application or any of its dependencies.

In this example, the application was just a single executable file, but it could
also have been a complex application with dozens of libraries and additional
files. The entire process of setting up and running the application would be
the same. What isn’t obvious is that it ran in a container, isolated from the
other processes on the computer. You’ll see that this is true in the remaining
exercises in this chapter.

Understanding what happens when you run a container

Figure 2.10 shows exactly what happens when you execute the docker run
command.

Figure 2.10 Running echo “Hello world” in a container based on the busybox container image
The docker CLI tool sends an instruction to run the container to the Docker
daemon, which checks whether the busybox image is already present in its
local image cache. If it isn’t, the daemon pulls it from the Docker Hub
registry.

After downloading the image to your computer, the Docker daemon creates a container from that image and executes the echo command in it. The command prints the text to standard output; the process then terminates and the container stops.

If your local computer runs a Linux OS, the Docker CLI tool and the daemon
both run in this OS. If it runs macOS or Windows, the daemon and the
containers run in the Linux VM.

Running other images

Running other existing container images is much the same as running the
busybox image. In fact, it’s often even simpler, since you don’t normally
need to specify what command to execute, as with the echo command in the
previous example. The command that should be executed is usually written in
the image itself, but you can override it when you run it.
For example, if you want to run the Redis datastore, you can find the image
name on http://hub.docker.com or another public registry. In the case of
Redis, one of the images is called redis:alpine, so you’d run it like this:
$ docker run redis:alpine

To stop and exit the container, press Control-C.

Note

If you want to run an image from a different registry, you must specify the
registry along with the image name. For example, if you want to run an
image from the Quay.io registry, which is another publicly accessible image
registry, run it as follows: docker run quay.io/some/image.

Understanding image tags

If you’ve searched for the Redis image on Docker Hub, you’ve noticed that
there are many image tags you can choose from. For Redis, the tags are
latest, buster, alpine, but also 5.0.7-buster, 5.0.7-alpine, and so on.

Docker allows you to have multiple versions or variants of the same image
under the same name. Each variant has a unique tag. If you refer to images
without explicitly specifying the tag, Docker assumes that you’re referring to
the special latest tag. When uploading a new version of an image, image
authors usually tag it with both the actual version number and with latest.
When you want to run the latest version of an image, use the latest tag
instead of specifying the version.
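The naming rules described above, together with the registry prefix mentioned earlier, can be sketched as a small shell function that normalizes an image reference roughly the way Docker does. This is a simplified model for illustration only, not Docker’s actual resolution logic (it ignores edge cases such as registries with port numbers):

```shell
#!/bin/sh
# Normalize an image reference: add the default "latest" tag and the
# default "docker.io/library" registry/namespace when they are omitted.
normalize_image() {
  ref=$1
  case $ref in
    *:*) ;;                      # a tag is already present
    *)   ref="$ref:latest" ;;    # no tag -> assume "latest"
  esac
  case $ref in
    */*) ;;                              # registry or namespace present
    *)   ref="docker.io/library/$ref" ;; # official image on Docker Hub
  esac
  echo "$ref"
}

normalize_image redis                # -> docker.io/library/redis:latest
normalize_image redis:5.0.7-alpine   # -> docker.io/library/redis:5.0.7-alpine
normalize_image quay.io/some/image   # -> quay.io/some/image:latest
```

So running docker run redis is shorthand for pulling and running docker.io/library/redis:latest.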

Note

The docker run command only pulls the image if it hasn’t already pulled it
before. Using the latest tag ensures that you get the latest version when you
first run the image. The locally cached image is used from that point on.

Even for a single version, there are usually several variants of an image. For Redis I mentioned 5.0.7-buster and 5.0.7-alpine. They both contain the same version of Redis, but differ in the base image they are built on. 5.0.7-buster is based on Debian version “Buster”, while 5.0.7-alpine is based on the Alpine Linux base image, a very stripped-down image that is only 5MB in total – it contains only a small set of the binaries you’d find in a typical Linux distribution.

To run a specific version and/or variant of the image, specify the tag in the
image name. For example, to run the 5.0.7-alpine tag, you’d execute the
following command:
$ docker run redis:5.0.7-alpine

These days, you can find container images for virtually all popular applications. You can use Docker to run those images with a simple one-line docker run command.

2.1.4 Introducing the Open Container Initiative and Docker alternatives

Docker was the first container platform to make containers mainstream. I hope I’ve made it clear that Docker itself is not what provides the process isolation. The actual isolation of containers takes place at the Linux kernel level using the mechanisms it provides. Docker is merely a tool that uses these mechanisms to make running containers almost trivial. But it’s by no means the only one.

Introducing the Open Container Initiative (OCI)

After the success of Docker, the Open Container Initiative (OCI) was born to
create open industry standards around container formats and runtime. Docker
is part of this initiative, as are other container runtimes and a number of
organizations with interest in container technologies.

OCI members created the OCI Image Format Specification, which prescribes a standard format for container images, and the OCI Runtime Specification, which defines a standard interface for container runtimes with the aim of standardizing the creation, configuration and execution of containers.

Introducing the Container Runtime Interface (CRI) and its implementation (CRI-O)

This book focuses on using Docker as the container runtime for Kubernetes,
as it was initially the only one supported by Kubernetes and is still the most
widely used. But Kubernetes now supports many other container runtimes
through the Container Runtime Interface (CRI).

One implementation of CRI is CRI-O, a lightweight alternative to Docker that allows you to leverage any OCI-compliant container runtime with Kubernetes. Examples of OCI-compliant runtimes include rkt (pronounced Rocket), runC, and Kata Containers.

2.2 Deploying Kiada—the Kubernetes in Action Demo Application

Now that you’ve got a working Docker setup, you can start building a more complex application. You’ll build a microservices-based application called Kiada - the Kubernetes in Action Demo Application.

In this chapter, you’ll use Docker to run this application. In the next and
remaining chapters, you’ll run the application in Kubernetes. Over the course
of this book, you’ll iteratively expand it and learn about individual
Kubernetes features that help you solve the typical problems you face when
running applications.

2.2.1 Introducing the Kiada Suite

The Kubernetes in Action Demo Application is a web-based application that shows quotes from this book, asks you Kubernetes-related questions to help you check how your knowledge is progressing, and provides a list of hyperlinks to external websites related to Kubernetes or this book. It also prints out information about the container that processed the browser’s request. You’ll soon see why this is important.

The look and operation of the application


A screenshot of the web application is presented in the following figure.

Figure 2.11 A screenshot of the Kubernetes in Action Demo Application (Kiada)

The architecture of the Kiada application is shown in the next figure. The
HTML is served by a web application running in a Node.js server. The client-
side JavaScript code then retrieves the quote and question from the Quote and
the Quiz RESTful services. The Node.js application and the services
comprise the complete Kiada Suite.

Figure 2.12 The architecture and operation of the Kiada Suite


The web browser talks directly to three different services. If you’re familiar
with microservice architectures, you might wonder why no API gateway
exists in the system. This is so that I can demonstrate the issues and solutions
associated with cases where many different services are deployed in
Kubernetes (services that may not belong behind the same API gateway). But
chapter 11 will also explain how to introduce Kubernetes-native API
gateways into the system.

The look and operation of the plain-text version

You’ll spend a lot of time interacting with Kubernetes via a terminal, so you
may not want to go back and forth between it and a web browser when you
perform the exercises. For this reason, the application can also be used in
plain-text mode.

The plain-text mode allows you to use the application directly from the
terminal using a tool such as curl. In that case, the response sent by the
application looks like the following:
==== TIP OF THE MINUTE
Liveness probes can only be used in the pod’s regular containers.
They can’t be defined in init containers.
==== POP QUIZ
Third question
0) First answer
1) Second answer
2) Third answer

Submit your answer to /question/0/answers/<index of answer> using the POST m

==== REQUEST INFO
Request processed by Kubia 1.0 running in pod "kiada-ssl" on node "kind-work
Pod hostname: kiada-ssl; Pod IP: 10.244.2.188; Node IP: 172.18.0.2; Client I

The HTML version is accessible at the request URI /html, whereas the text
version is at /text. If the client requests the root URI path /, the application
inspects the Accept request header to guess whether the client is a graphical
web browser, in which case it redirects it to /html, or a text-based tool like
curl, in which case it sends the plain-text response.
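This content-negotiation logic can be sketched as a short shell function. This is a hypothetical model of the behavior described above; the actual Kiada application is written in Node.js:

```shell
#!/bin/sh
# Decide where a request to the root URI path "/" should go,
# based on the value of the Accept request header.
route_for_accept() {
  case $1 in
    *text/html*) echo "/html" ;;  # graphical browser -> redirect to HTML
    *)           echo "/text" ;;  # curl and similar tools -> plain text
  esac
}

route_for_accept "text/html,application/xhtml+xml"   # -> /html
route_for_accept "*/*"                               # -> /text
```

A browser typically sends text/html in its Accept header, while curl sends */* by default, so each client lands on the appropriate version.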

In this mode of operation, it’s the Node.js application that calls the Quote and
the Quiz services, as shown in the next figure.

Figure 2.13 The operation when the client requests the text version

From a networking standpoint, this mode of operation is quite different from the one described previously. In this case, the Quote and the Quiz service are invoked within the cluster, whereas previously, they were invoked directly from the browser.
