
CHAPTER 3 Containers are Linux

- A transportable unit to move applications around. This is a typical developer's
  answer.
- A fancy Linux process (one of our personal favorites).
- A more effective way to isolate processes on a Linux system. This is a more
  operations-centered answer.

What we need to untangle is the fact that they're all correct, depending on your point
of view.
In chapter 1, we talked about how OpenShift uses Kubernetes and docker to
orchestrate and deploy applications in containers in your cluster. But we haven’t
talked much about which application component is created by each of these services.
Before we move forward, it’s important for you to understand these responsibilities as
you begin interacting with application components directly.

3.2 How OpenShift components work together


When you deploy an application in OpenShift, the request starts in the OpenShift
API. We discussed this process at a high level in chapter 2. To really understand how
containers isolate the processes within them, we need to take a more detailed look at
how these services work together to deploy your application. The relationship
between OpenShift, Kubernetes, docker, and, ultimately, the Linux kernel is a chain
of dependencies.
When you deploy an application in OpenShift, the process starts with the OpenShift
services.

3.2.1 OpenShift manages deployments


Deploying applications begins with application components that are unique to
OpenShift. The process is as follows:
1 OpenShift creates a custom container image using your source code and the
builder image template you specified. For example, app-cli and app-gui use the
PHP builder image.
2 This image is uploaded to the OpenShift container image registry.
3 OpenShift creates a build config to document how your application is built.
This includes which image was created, the builder image used, the location of
the source code, and other information.
4 OpenShift creates a deployment config to control deployments and deploy and
update your applications. Information in deployment configs includes the
number of replicas, the upgrade method, and application-specific variables and
mounted volumes.
5 OpenShift creates a deployment, which represents a single deployed version of
an application. Each unique application deployment is associated with your
application’s deployment config component.

6 The OpenShift internal load balancer is updated with an entry for the DNS
record for the application. This entry will be linked to a component that's
created by Kubernetes, which we'll get to shortly.
7 OpenShift creates an image stream component. In OpenShift, an image stream
monitors the builder image, deployment config, and other components for
changes. If a change is detected, image streams can trigger application
redeployments to reflect changes.
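If you want to see these components for yourself, the oc command-line tool can list
them. The following is only a sketch: it assumes the app-cli example from chapter 2,
that you're logged in to its project, and that oc new-app applied its default
app=app-cli label (bc, dc, and is are the short names oc accepts for build config,
deployment config, and image stream):

    # List the OpenShift-specific components created for app-cli
    oc get bc,dc,is -l app=app-cli

    # The build config records the builder image and source code location
    oc describe bc app-cli

    # Each completed build of the custom image is recorded as well
    oc get builds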
Figure 3.1 shows how these components are linked together. When a developer
creates source code and triggers a new application deployment (in this case, using
the oc command-line tool), OpenShift creates the deployment config, image stream,
and build config components.

Figure 3.1 Application components created by OpenShift during application deployment



The build config creates an application-specific custom container image using the
specified builder image and source code. This image is stored in the OpenShift image
registry. The deployment config component creates an application deployment that’s
unique for each version of the application. The image stream is created and monitors
for changes to the deployment config and related images in the internal registry. The
DNS route is also created and will be linked to a Kubernetes object.
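You can also drive these relationships by hand. The sketch below assumes the app-cli
example and its default object names; it simply triggers the chain described above:

    # Start a new build from the existing build config and stream its logs
    oc start-build app-cli --follow

    # The resulting image is pushed to the internal registry and shows up
    # as a new tag in the image stream
    oc describe is app-cli

    # An image change normally triggers a fresh deployment from the
    # deployment config
    oc get dc app-cli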
In figure 3.1, notice that the users are sitting by themselves with no access to the
application. There is no application. OpenShift depends on Kubernetes, as well as
docker, to get the deployed application to the user. Next, we’ll look at Kubernetes’
responsibilities in OpenShift.

3.2.2 Kubernetes schedules applications across nodes


Kubernetes is the orchestration engine at the heart of OpenShift. In many ways, an
OpenShift cluster is a Kubernetes cluster. When you initially deployed app-cli,
Kubernetes created several application components:
- Replication controller—Scales the application as needed in Kubernetes. This
  component also ensures that the desired number of replicas in the deployment
  config is maintained at all times.
- Service—Exposes the application. A Kubernetes service is a single IP address
  that's used to access all the active pods for an application deployment. When
  you scale an application up or down, the number of pods changes, but they're
  all accessed through a single service.
- Pods—Represent the smallest scalable unit in OpenShift.

NOTE Typically, a single pod is made up of a single container. But in some
situations, it makes sense to have a single pod consist of multiple containers.
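The Kubernetes-level components can be listed the same way. A minimal sketch, again
assuming the app-cli deployment and its default app=app-cli label:

    # Replication controllers, services, and pods created for app-cli
    oc get rc,svc,pods -l app=app-cli

    # The replication controller shows the desired and current replica counts
    oc describe rc -l app=app-cli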

Figure 3.2 illustrates the relationships between the Kubernetes components that are
created. The replication controller dictates how many pods are created for an initial
application deployment and is linked to the OpenShift deployment component.
Also linked to the pod component is a Kubernetes service. The service represents
all the pods deployed by a replication controller. It provides a single IP address in
OpenShift to access your application as it’s scaled up and down on different nodes in
your cluster. The service is the internal IP address that's referenced in the route
created in the OpenShift load balancer.
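If you want to trace that link yourself, both ends of it are visible through oc. A
sketch, assuming the route and the service are both named app-cli:

    # The route created by OpenShift points at a Kubernetes service...
    oc get route app-cli -o jsonpath='{.spec.to.name}'

    # ...and that service owns the single internal IP used to reach the pods
    oc get svc app-cli -o jsonpath='{.spec.clusterIP}'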

NOTE The relationship between deployments and replication controllers is how
applications are deployed, scaled, and upgraded. When changes are made to a
deployment config, a new deployment is created, which in turn creates a new
replication controller. The replication controller then creates the desired number
of pods within the cluster, which is where your application is actually deployed.
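You can watch that chain in action by forcing a new deployment. The following sketch
assumes the app-cli deployment config; the exact replication controller names depend
on how many deployments have already run:

    # Manually roll out a new deployment from the deployment config
    oc rollout latest dc/app-cli

    # Each deployment version gets its own replication controller
    # (typically named <dc-name>-<version>), which in turn owns the pods
    oc get rc
    oc get pods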

Figure 3.2 Kubernetes components that are created when applications are deployed

We’re getting closer to the application itself, but we haven’t gotten there yet. Kuberne-
tes is used to orchestrate containers in an OpenShift cluster. But on each application
node, Kubernetes depends on docker to create the containers for each application
deployment.

3.2.3 Docker creates containers


Docker is a container runtime. A container runtime is the application on a server that
creates, maintains, and removes containers. A container runtime can act as a
standalone tool on a laptop or a single server, but it's at its most powerful when being
orchestrated across a cluster by a tool like Kubernetes.

NOTE Docker is currently the container runtime for OpenShift. But a new
runtime is supported as of OpenShift 3.9. It's called cri-o, and you can find
more information at http://cri-o.io.

Kubernetes controls docker to create containers that house the application. These
containers use the custom base image as the starting point for the files that are visible
to applications in the container. Finally, the docker container is associated with the
Kubernetes pod (see figure 3.3).
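You can see those containers directly on the node. This is only a sketch: it assumes
SSH access to the application node running the app-cli pod and that the cluster is
using the docker runtime (with cri-o the commands would differ):

    # List the running containers that belong to a given pod; Kubernetes
    # labels each container with the name of its pod
    sudo docker ps --filter label=io.kubernetes.pod.name=<app-cli-pod-name>

    # Inspect one of them to see the image it was created from
    sudo docker inspect <container-id>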
To isolate the libraries and applications in the container image, along with other
server resources, docker uses Linux kernel components. These kernel-level resources
are the components that isolate the applications in your container from everything
else on the application node. Let’s look at these next.

Figure 3.3 Docker containers are associated with Kubernetes pods.

3.2.4 Linux isolates and limits resources


We’re down to the core of what makes a container a container in OpenShift and
Linux. Docker uses three Linux kernel components to isolate the applications run-
ning in containers it creates and limit their access to resources on the host:
Linux namespaces—Provide isolation for the resources running in the container.
Although the term is the same, this is a different concept than Kubernetes
namespaces (http://mng.bz/X8yz), which are roughly analogous to an Open-
Shift project. We’ll discuss these in more depth in chapter 7. For the sake of
brevity, in this chapter, when we reference namespaces, we’re talking about
Linux namespaces.
Control groups (cgroups)—Provide maximum, guaranteed access limits for CPU
and memory on the application node. We’ll look at cgroups in depth in chapter 9.
SELinux contexts—Prevent the container applications from improperly access-
ing resources on the host or in other containers. An SELinux context is a
unique label that’s applied to a container’s resources on the application node.
This unique label prevents the container from accessing anything that doesn’t
have a matching label on the host. We’ll discuss SELinux contexts in more
depth in chapter 11.
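All three are visible from the application node once you know the process ID of a
containerized application (shown here as <pid>); this is a sketch of where to look
rather than a complete walkthrough:

    # Linux namespaces: each entry is a namespace the process belongs to
    sudo ls -l /proc/<pid>/ns

    # Control groups: the cgroup hierarchy limiting the process's CPU and memory
    sudo cat /proc/<pid>/cgroup

    # SELinux context: the label applied to the containerized process
    ps -Z -p <pid>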
The docker daemon creates these kernel resources dynamically when the container is
created. These resources are associated with the applications that are launched for the
corresponding container; your application is now running in a container (figure 3.4).
Applications in OpenShift are run and associated with these kernel components.
They provide the isolation that you see from inside a container.

Figure 3.4 Linux kernel components used to isolate containers

we’ll discuss how you can investigate a container from the application node. From the
point of view of being inside the container, an application only has the resources allo-
cated to it that are included in its unique namespaces. Let’s confirm that next.

Userspace and kernelspace


A Linux server is separated into two primary resource groups: the userspace and the
kernelspace. The userspace is where applications run. Any process that isn’t part of
the kernel is considered part of the userspace on a Linux server.
The kernelspace is the kernel itself. Without special administrator privileges like
those the root user has, users can't make changes to code that's running in the
kernelspace.
The applications in a container run in the userspace, but the components that isolate
the applications in the container run in the kernelspace. That means containers are
isolated using kernel components that can’t be modified from inside the container.

In the previous sections, we looked at each individual layer of OpenShift. Let’s put all
of these together before we dive down into the weeds of the Linux kernel.

3.2.5 Putting it all together


The automated workflow that's executed when you deploy an application in OpenShift
includes OpenShift, Kubernetes, docker, and the Linux kernel. The interactions and
dependencies stretch across multiple services, as outlined in figure 3.5.

Developers

Users

Source oc new-app
code ...

External

Builder Custom Image Deployment


Build config
image image stream config

Image registry Load


balancer
Deployment
DNS
route

OpenShift

Kubernetes

Replication
Pod Service
controller

docker
Container

Linux

Control SELinux
Application Namespaces
groups contexts

User space Kernel space

Figure 3.5 OpenShift deployment including components that make up the container
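A quick way to see most of this stack in one place is oc itself; the kernel-level
pieces aren't API objects, so they're only visible on the node. A sketch, assuming
you're in the project that contains app-cli:

    # Summarize the OpenShift and Kubernetes components in the current project
    oc status

    # Or list them explicitly: image streams, build configs, builds, deployment
    # configs, replication controllers, routes, services, and pods
    oc get all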
