Tutorials
This section of the Kubernetes documentation contains tutorials. A tutorial
shows how to accomplish a goal that is larger than a single task. Typically a
tutorial has several sections, each of which has a sequence of steps. Before
walking through each tutorial, you may want to bookmark the Standardized
Glossary page for later reference.
• Basics
• Configuration
• Stateless Applications
• Stateful Applications
• CI/CD Pipeline
• Clusters
• Services
• What's next
Basics
• Kubernetes Basics is an in-depth interactive tutorial that helps you
understand the Kubernetes system and try out some basic Kubernetes
features.
• Hello Minikube
Configuration
• Configuring Redis Using a ConfigMap
Stateless Applications
• Exposing an External IP Address to Access an Application in a Cluster
Stateful Applications
• StatefulSet Basics
CI/CD Pipeline
• Set Up a CI/CD Pipeline with Kubernetes Part 1: Overview
Clusters
• AppArmor
Services
• Using Source IP
What's next
If you would like to write a tutorial, see Using Page Templates for
information about the tutorial page type and the tutorial template.
Hello Minikube
This tutorial shows you how to run a simple Hello World Node.js app on
Kubernetes using Minikube and Katacoda. Katacoda provides a free, in-
browser Kubernetes environment.
Note: You can also follow this tutorial if you've installed Minikube
locally.
• Objectives
• Before you begin
• Create a Minikube cluster
• Create a Deployment
• Create a Service
• Enable addons
• Clean up
• What's next
Objectives
• Deploy a hello world application to Minikube.
• Run the app.
• View application logs.
minikube/server.js
minikube/Dockerfile
FROM node:6.14.2
EXPOSE 8080
COPY server.js .
CMD [ "node", "server.js" ]
For more information on the docker build command, read the Docker
documentation.
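The build command itself is not shown in this excerpt. A minimal sketch of building the image against Minikube's Docker daemon (the hello-node:v1 tag is an assumption):

# Point your shell at Minikube's Docker daemon so the image is available in-cluster.
eval $(minikube docker-env)
# Build the image from the Dockerfile above.
docker build -t hello-node:v1 .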
Kubernetes Basics
This tutorial provides a walkthrough of the basics of the Kubernetes
cluster orchestration system. Each module contains some background
information on major Kubernetes features and concepts, and includes
an interactive online tutorial. These interactive tutorials let you manage
a simple cluster and its containerized applications for yourself.
Using the interactive tutorials, you can learn to deploy a containerized
app on a cluster, explore and expose it, scale it, and update it with a
new software version.
Objectives
Kubernetes Clusters
Summary:
◦ Kubernetes cluster
◦ Minikube
Masters manage the cluster, and the nodes host the running applications.
Now that you know what Kubernetes is, let's go to the online tutorial
and start our first cluster!
Objectives
Once you have a running Kubernetes cluster, you can deploy your
containerized applications on top of it. To do so, you create a
Kubernetes Deployment configuration. The Deployment instructs
Kubernetes how to create and update instances of your application.
Once you've created a Deployment, the Kubernetes master schedules
the application instances onto individual Nodes in the cluster.
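The exact commands used in the interactive tutorial are not reproduced here; as a rough sketch, a Deployment can be created and inspected with kubectl (the name and image below are only examples):

kubectl create deployment kubernetes-bootcamp --image=gcr.io/google-samples/kubernetes-bootcamp:v1
kubectl get deployments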
Summary:
◦ Deployments
◦ Kubectl
Objectives
Kubernetes Pods
When you created a Deployment in Module 2, Kubernetes created a
Pod to host your application instance. A Pod is a Kubernetes
abstraction that represents a group of one or more application
containers (such as Docker or rkt), and some shared resources for
those containers. Those resources include:
Pods are the atomic unit on the Kubernetes platform. When we create a
Deployment on Kubernetes, that Deployment creates Pods with
containers inside them (as opposed to creating containers directly).
Each Pod is tied to the Node where it is scheduled, and remains there
until termination (according to restart policy) or deletion. In case of a
Node failure, identical Pods are scheduled on other available Nodes in
the cluster.
Summary:
◦ Pods
◦ Nodes
◦ Kubectl main commands
Nodes
A Pod always runs on a Node. A Node is a worker machine in
Kubernetes and may be either a virtual or a physical machine,
depending on the cluster. Each Node is managed by the Master. A Node
can have multiple pods, and the Kubernetes master automatically
handles scheduling the pods across the Nodes in the cluster. The
Master's automatic scheduling takes into account the available
resources on each Node.
You can use these commands to see when applications were deployed,
what their current statuses are, where they are running and what their
configurations are.
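The commands referred to here are the main kubectl commands; for example (the Pod name is a placeholder):

kubectl get pods                  # list resources
kubectl describe pod <pod-name>   # show detailed information about a resource
kubectl logs <pod-name>           # print the logs from a container in a pod
kubectl exec <pod-name> -- env    # execute a command in a container in a pod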
Now that we know more about our cluster components and the
command line, let's explore our application.
A node is a worker machine in Kubernetes and may be a VM or physical
machine, depending on the cluster. Multiple Pods can run on one Node.
Objectives
Although each Pod has a unique IP address, those IPs are not exposed
outside the cluster without a Service. Services allow your applications
to receive traffic. Services can be exposed in different ways by
specifying a type in the ServiceSpec:
Additionally, note that there are some use cases with Services that
involve not defining a selector in the spec. A Service created without a
selector will also not create the corresponding Endpoints object. This
allows users to manually map a Service to specific endpoints. Another
reason to omit the selector is that you are strictly using type:
ExternalName.
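As a hedged illustration of the selector-less case, a Service can be mapped to endpoints you manage yourself; the name, IP address, and port below are placeholders:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: my-external-service
spec:
  ports:
    - port: 80
---
apiVersion: v1
kind: Endpoints
metadata:
  name: my-external-service
subsets:
  - addresses:
      - ip: 192.0.2.42
    ports:
      - port: 80
EOF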
Summary
Pod
Node
You can create a Service at the same time you create a Deployment by
using
--expose in kubectl.
Labels can be attached to objects at creation time or later on. They can
be modified at any time. Let's expose our application now using a
Service and apply some labels.
Objectives
Scaling an application
Summary:
◦ Scaling a Deployment
You can create a Deployment with multiple instances from the start by
using the --replicas parameter with the kubectl run command.
Scaling overview
Scaling out a Deployment will ensure new Pods are created and
scheduled to Nodes with available resources. Scaling will increase the
number of Pods to the new desired state. Kubernetes also supports
autoscaling of Pods, but it is outside of the scope of this tutorial.
Scaling to zero is also possible, and it will terminate all Pods of the
specified Deployment.
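The scaling commands used in the interactive tutorial are not reproduced here; a rough sketch of scaling a Deployment and checking the result (the Deployment name is a placeholder):

kubectl scale deployments/<deployment-name> --replicas=4
kubectl get deployments
kubectl get pods -o wide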
Objectives
Updating an application
Summary:
◦ Updating an app
Configuring Redis Using a ConfigMap
◦ Objectives
◦ Before you begin
◦ Real World Example: Configuring Redis using a ConfigMap
◦ What's next
Objectives
◦ Create a kustomization.yaml file containing:
▪ a ConfigMap generator
▪ a Pod resource config using the ConfigMap
◦ Apply the directory by running kubectl apply -k ./
◦ Verify that the configuration was correctly applied.
◦ Katacoda
◦ Play with Kubernetes
◦ The example shown on this page works with kubectl 1.14 and
above.
◦ Understand Configure Containers Using a ConfigMap.
pods/config/redis-config
maxmemory 2mb
maxmemory-policy allkeys-lru
apiVersion: v1
kind: Pod
metadata:
  name: redis
spec:
  containers:
  - name: redis
    image: redis:5.0.4
    command:
      - redis-server
      - "/redis-master/redis.conf"
    env:
    - name: MASTER
      value: "true"
    ports:
    - containerPort: 6379
    resources:
      limits:
        cpu: "0.1"
    volumeMounts:
    - mountPath: /redis-master-data
      name: data
    - mountPath: /redis-master
      name: config
  volumes:
    - name: data
      emptyDir: {}
    - name: config
      configMap:
        name: example-redis-config
        items:
        - key: redis-config
          path: redis.conf
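The kustomization.yaml that ties the ConfigMap generator and the Pod together is not reproduced in this excerpt. A minimal sketch, assuming the config file shown earlier is saved as redis-config and the Pod manifest as redis-pod.yaml:

cat <<EOF > kustomization.yaml
configMapGenerator:
- name: example-redis-config
  files:
  - redis-config
resources:
- redis-pod.yaml
EOF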
kubectl apply -k .
Use kubectl exec to enter the pod and run the redis-cli tool to
verify that the configuration was correctly applied:
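A sketch of that verification (the values returned should match the redis-config file above):

kubectl exec -it redis -- redis-cli
# then, at the redis prompt:
#   CONFIG GET maxmemory          -> "maxmemory" "2097152"
#   CONFIG GET maxmemory-policy   -> "maxmemory-policy" "allkeys-lru"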
What's next
◦ Learn more about ConfigMaps.
Exposing an External IP Address to Access an Application in a Cluster
◦ Objectives
◦ Before you begin
◦ Creating a service for an application running in five pods
◦ Cleaning up
◦ What's next
Objectives
◦ Run five instances of a Hello World application.
◦ Create a Service object that exposes an external IP address.
◦ Use the Service object to access the running application.
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/name: load-balancer-example
  name: hello-world
spec:
  replicas: 5
  selector:
    matchLabels:
      app.kubernetes.io/name: load-balancer-example
  template:
    metadata:
      labels:
        app.kubernetes.io/name: load-balancer-example
    spec:
      containers:
      - image: gcr.io/google-samples/node-hello:1.0
        name: hello-world
        ports:
        - containerPort: 8080
kubectl apply -f https://k8s.io/examples/service/load-balancer-example.yaml
Name:                     my-service
Namespace:                default
Labels:                   app.kubernetes.io/name=load-balancer-example
Annotations:              <none>
Selector:                 app.kubernetes.io/name=load-balancer-example
Type:                     LoadBalancer
IP:                       10.3.245.137
LoadBalancer Ingress:     104.198.205.71
Port:                     <unset>  8080/TCP
NodePort:                 <unset>  32377/TCP
Endpoints:                10.0.0.6:8080,10.0.1.6:8080,10.0.1.7:8080 + 2 more...
Session Affinity:         None
Events:                   <none>
6. In the preceding output, you can see that the service has several
endpoints: 10.0.0.6:8080,10.0.1.6:8080,10.0.1.7:8080 + 2 more.
These are internal addresses of the pods that are running the
Hello World application. To verify these are pod addresses, enter
this command:
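The verification command is not shown in this excerpt; listing the Pods with their internal IP addresses would look like:

kubectl get pods --output=wide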
curl http://<external-ip>:<port>
Hello Kubernetes!
Cleaning up
To delete the Service, enter this command:
To delete the Deployment, the ReplicaSet, and the Pods that are
running the Hello World application, enter this command:
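The cleanup commands are not reproduced above; assuming the names used in this tutorial (my-service and hello-world), they would be:

kubectl delete services my-service
kubectl delete deployment hello-world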
What's next
Learn more about connecting applications with services.
Example: Deploying PHP Guestbook application with Redis
Objectives
◦ Start up a Redis master.
◦ Start up Redis slaves.
◦ Start up the guestbook frontend.
◦ Expose and view the Frontend Service.
◦ Clean up.
◦ Katacoda
◦ Play with Kubernetes
3. Query the list of Pods to verify that the Redis Master Pod is
running:
4. Run the following command to view the logs from the Redis
Master Pod:
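Neither command is reproduced in this excerpt; they would look roughly like this (replace the Pod name with the one reported for your cluster):

kubectl get pods
kubectl logs -f <redis-master-pod-name>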
application/guestbook/redis-master-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend
2. Query the list of Services to verify that the Redis Master Service is
running:
If there are no replicas running, this Deployment would start the two
replicas on your container cluster. Conversely, if there are more than
two replicas running, it would scale down until two replicas are
running.
application/guestbook/redis-slave-deployment.yaml
2. Query the list of Pods to verify that the Redis Slave Pods are
running:
NAME                            READY   STATUS              RESTARTS   AGE
redis-master-1068406935-3lswp   1/1     Running             0          1m
redis-slave-2005841000-fpvqc    0/1     ContainerCreating   0          6s
redis-slave-2005841000-phfv9    0/1     ContainerCreating   0          6s
application/guestbook/redis-slave-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend
application/guestbook/frontend-deployment.yaml
2. Query the list of Pods to verify that the three frontend replicas are
running:
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # comment or delete the following line if you want to use a LoadBalancer
  type: NodePort
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
1. Run the following command to get the IP address for the frontend
Service.
http://192.168.99.100:31323
2. Copy the IP address, and load the page in your browser to view
your guestbook.
1. Run the following command to get the IP address for the frontend
Service.
2. Copy the external IP address, and load the page in your browser to
view your guestbook.
Cleaning up
Deleting the Deployments and Services also deletes any running Pods.
Use labels to delete multiple resources with one command.
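The label-based cleanup commands are not reproduced here; a sketch using the app labels defined in the manifests above:

kubectl delete deployment -l app=redis
kubectl delete service -l app=redis
kubectl delete deployment -l app=guestbook
kubectl delete service -l app=guestbook

After a short delay, querying the Pods again should report:

kubectl get pods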
No resources found.
What's next
◦ Add ELK logging and monitoring to your Guestbook application
◦ Complete the Kubernetes Basics Interactive Tutorials
◦ Use Kubernetes to create a blog using Persistent Volumes for
MySQL and Wordpress
◦ Read more about connecting applications
◦ Read more about Managing Resources
Example: Add logging and metrics to the PHP / Redis Guestbook example
Objectives
◦ Start up the PHP Guestbook with Redis.
◦ Install kube-state-metrics.
◦ Create a Kubernetes secret.
◦ Deploy the Beats.
◦ View dashboards of your logs and metrics.
◦ Katacoda
◦ Play with Kubernetes
Install kube-state-metrics
Kubernetes kube-state-metrics is a simple service that listens to the
Kubernetes API server and generates metrics about the state of the
objects. Metricbeat reports these metrics. Add kube-state-metrics to
the Kubernetes cluster that the guestbook is running in.
Output:
cd examples/beats-k8s-send-anywhere
Note: There are two sets of steps here, one for self managed
Elasticsearch and Kibana (running on your servers or using
the Elastic Helm Charts), and a second separate set for the
managed service Elasticsearch Service in Elastic Cloud. Only
create the secret for the type of Elasticsearch and Kibana
system that you will use for this tutorial.
◦ Self Managed
◦ Managed service
Self managed
There are four files to edit to create a k8s secret when you are
connecting to self managed Elasticsearch and Kibana (self managed is
effectively anything other than the managed Elasticsearch Service in
Elastic Cloud). The files are:
1. ELASTICSEARCH_HOSTS
2. ELASTICSEARCH_PASSWORD
3. ELASTICSEARCH_USERNAME
4. KIBANA_HOST
Set these with the information for your Elasticsearch cluster and your
Kibana host. Here are some examples:
ELASTICSEARCH_HOSTS
["http://host.docker.internal:9200"]
["http://host1.example.com:9200", "http://
host2.example.com:9200"]
Edit ELASTICSEARCH_HOSTS
vi ELASTICSEARCH_HOSTS
ELASTICSEARCH_PASSWORD
<yoursecretpassword>
Edit ELASTICSEARCH_PASSWORD
vi ELASTICSEARCH_PASSWORD
ELASTICSEARCH_USERNAME
Edit ELASTICSEARCH_USERNAME
vi ELASTICSEARCH_USERNAME
KIBANA_HOST
1. The Kibana instance from the Elastic Kibana Helm Chart. The
subdomain default refers to the default namespace. If you have
deployed the Helm Chart using a different namespace, then your
subdomain will be different:
"kibana-kibana.default.svc.cluster.local:5601"
"host.docker.internal:5601"
Edit KIBANA_HOST
vi KIBANA_HOST
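After editing the four files, they are combined into a Kubernetes secret. The command is not reproduced in this excerpt; the secret name and namespace below are assumptions and should match what the Beats manifests expect:

kubectl create secret generic dynamic-logging \
  --from-file=ELASTICSEARCH_HOSTS \
  --from-file=ELASTICSEARCH_PASSWORD \
  --from-file=ELASTICSEARCH_USERNAME \
  --from-file=KIBANA_HOST \
  --namespace=kube-system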
Managed service
This tab is for Elasticsearch Service in Elastic Cloud only. If you have
already created a secret for a self managed Elasticsearch and Kibana
deployment, then continue with Deploy the Beats.
There are two files to edit to create a k8s secret when you are
connecting to the managed Elasticsearch Service in Elastic Cloud. The
files are:
1. ELASTIC_CLOUD_AUTH
2. ELASTIC_CLOUD_ID
Set these with the information provided to you from the Elasticsearch
Service console when you created the deployment. Here are some
examples:
ELASTIC_CLOUD_ID
devk8s:ABC123def456ghi789jkl123mno456pqr789stu123vwx456yza789
bcd012efg345hijj678klm901nop345zEwOTJjMTc5YWQ0YzQ5OThlN2U5MjA
wYTg4NTIzZQ==
ELASTIC_CLOUD_AUTH
elastic:VFxJJf9Tjwer90wnfTghsn8w
Edit the required files:
vi ELASTIC_CLOUD_ID
vi ELASTIC_CLOUD_AUTH
StatefulSet Basics
This tutorial provides an introduction to managing applications with
StatefulSets. It demonstrates how to create, delete, scale, and update
the Pods of StatefulSets.
◦ Objectives
◦ Before you begin
◦ Creating a StatefulSet
◦ Pods in a StatefulSet
◦ Scaling a StatefulSet
◦ Updating StatefulSets
◦ Deleting StatefulSets
◦ Pod Management Policy
◦ Cleaning up
Objectives
StatefulSets are intended to be used with stateful applications and
distributed systems. However, the administration of stateful
applications and distributed systems on Kubernetes is a broad, complex
topic. In order to demonstrate the basic features of a StatefulSet, and
not to conflate the former topic with the latter, you will deploy a simple
web application using a StatefulSet.
◦ Pods
◦ Cluster DNS
◦ Headless Services
◦ PersistentVolumes
◦ PersistentVolume Provisioning
◦ StatefulSets
◦ kubectl CLI
Creating a StatefulSet
Begin by creating a StatefulSet using the example below. It is similar to
the example presented in the StatefulSets concept. It creates a
Headless Service, nginx, to publish the IP addresses of Pods in the
StatefulSet, web.
application/web/web.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx"
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: k8s.gcr.io/nginx-slim:0.8
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
Download the example above, and save it to a file named web.yaml.
You will need to use two terminal windows. In the first terminal, use
kubectl get to watch the creation of the StatefulSet's Pods.
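The two commands are not reproduced in this excerpt; they would look roughly like this:

# first terminal: watch the Pods as the controller creates them
kubectl get pods -w -l app=nginx
# second terminal: create the Headless Service and the StatefulSet
kubectl apply -f web.yaml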
For a StatefulSet with N replicas, when Pods are being deployed, they
are created sequentially, in order from {0..N-1}. Examine the output of
the kubectl get command in the first terminal. Eventually, the output
will look like the example below.
Notice that the web-1 Pod is not launched until the web-0 Pod is
Running and Ready.
Pods in a StatefulSet
Pods in a StatefulSet have a unique ordinal index and a stable network
identity.
Each Pod has a stable hostname based on its ordinal index. Use
kubectl exec to execute the hostname command in each Pod.
Name: web-0.nginx
Address 1: 10.244.1.6
nslookup web-1.nginx
Server: 10.0.0.10
Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local
Name: web-1.nginx
Address 1: 10.244.2.6
The CNAME of the headless service points to SRV records (one for each
Pod that is Running and Ready). The SRV records point to A record
entries that contain the Pods' IP addresses.
In a second terminal, use kubectl delete to delete all the Pods in the
StatefulSet.
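The command is omitted here; it would be:

kubectl delete pod -l app=nginx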
Wait for the StatefulSet to restart them, and for both Pods to transition
to Running and Ready.
Use kubectl exec and kubectl run to view the Pods' hostnames and
in-cluster DNS entries.
Name: web-0.nginx
Address 1: 10.244.1.7
nslookup web-1.nginx
Server: 10.0.0.10
Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local
Name: web-1.nginx
Address 1: 10.244.2.8
The Pods' ordinals, hostnames, SRV records, and A record names have
not changed, but the IP addresses associated with the Pods may have
changed. In the cluster used for this tutorial, they have. This is why it is
important not to configure other applications to connect to Pods in a
StatefulSet by IP address.
Write the Pods' hostnames to their index.html files and verify that the
NGINX webservers serve the hostnames.
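The commands are omitted from this excerpt; a sketch of writing each Pod's hostname into its index.html and reading it back:

for i in 0 1; do
  kubectl exec "web-$i" -- sh -c 'echo "$(hostname)" > /usr/share/nginx/html/index.html'
done

for i in 0 1; do
  kubectl exec -i -t "web-$i" -- curl http://localhost/
done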
Note:
If you instead see 403 Forbidden responses for the above curl
command, you will need to fix the permissions of the
directory mounted by the volumeMounts (due to a bug when
using hostPath volumes) with:
Examine the output of the kubectl get command in the first terminal,
and wait for all of the Pods to transition to Running and Ready.
Even though web-0 and web-1 were rescheduled, they continue to serve
their hostnames because the PersistentVolumes associated with their
PersistentVolumeClaims are remounted to their volumeMounts. No
matter what node web-0 and web-1 are scheduled on, their
PersistentVolumes will be mounted to the appropriate mount points.
Scaling a StatefulSet
Scaling a StatefulSet refers to increasing or decreasing the number of
replicas. This is accomplished by updating the replicas field. You can
use either kubectl scale or kubectl patch to scale a StatefulSet.
Scaling Up
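The scale command itself is omitted from this excerpt; in a second terminal you might run:

kubectl scale sts web --replicas=5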
Examine the output of the kubectl get command in the first terminal,
and wait for the three additional Pods to transition to Running and
Ready.
Scaling Down
The controller deleted one Pod at a time, in reverse order with respect
to its ordinal index, and it waited for each to be completely shut down
before deleting the next.
Updating StatefulSets
In Kubernetes 1.7 and later, the StatefulSet controller supports
automated updates. The strategy used is determined by the
spec.updateStrategy field of the StatefulSet API object. This feature can be used
to upgrade the container images, resource requests and/or limits,
labels, and annotations of the Pods in a StatefulSet. There are two valid
update strategies, RollingUpdate and OnDelete.
RollingUpdate update strategy is the default for StatefulSets.
Rolling Update
The Pods in the StatefulSet are updated in reverse ordinal order. The
StatefulSet controller terminates each Pod, and waits for it to transition
to Running and Ready prior to updating the next Pod. Note that, even
though the StatefulSet controller will not proceed to update the next
Pod until its ordinal successor is Running and Ready, it will restore any
Pod that fails during the update to its current version. Pods that have
already received the update will be restored to the updated version,
and Pods that have not yet received the update will be restored to the
previous version. In this way, the controller attempts to continue to
keep the application healthy and the update consistent in the presence
of intermittent failures.
All the Pods in the StatefulSet are now running the previous container
image.
Tip You can also use kubectl rollout status sts/<name> to view the
status of a rolling update.
Staging an Update
You can roll out a canary to test a modification by decrementing the
partition you specified above.
Wait for all of the Pods in the StatefulSet to become Running and
Ready.
kubectl get po -l app=nginx -w
NAME READY STATUS RESTARTS AGE
web-0 1/1 Running 0 3m
web-1 0/1 ContainerCreating 0 11s
web-2 1/1 Running 0 2m
web-1 1/1 Running 0 18s
web-0 1/1 Terminating 0 3m
web-0 1/1 Terminating 0 3m
web-0 0/1 Terminating 0 3m
web-0 0/1 Terminating 0 3m
web-0 0/1 Terminating 0 3m
web-0 0/1 Terminating 0 3m
web-0 0/1 Pending 0 0s
web-0 0/1 Pending 0 0s
web-0 0/1 ContainerCreating 0 0s
web-0 1/1 Running 0 3s
On Delete
The OnDelete update strategy implements the legacy (1.6 and prior)
behavior. When you select this update strategy, the StatefulSet
controller will not automatically update Pods when a modification is
made to the StatefulSet's .spec.template field. This strategy can be
selected by setting .spec.updateStrategy.type to OnDelete.
Deleting StatefulSets
StatefulSet supports both Non-Cascading and Cascading deletion. In a
Non-Cascading Delete, the StatefulSet's Pods are not deleted when the
StatefulSet is deleted. In a Cascading Delete, both the StatefulSet and
its Pods are deleted.
Non-Cascading Delete
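The delete command is not reproduced in this excerpt; a non-cascading delete of the StatefulSet would look like the following (newer kubectl releases use --cascade=orphan instead of --cascade=false):

kubectl delete statefulset web --cascade=false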
Even though web has been deleted, all of the Pods are still Running and
Ready. Delete web-0.
As the web StatefulSet has been deleted, web-0 has not been
relaunched.
Ignore the error. It only indicates that an attempt was made to create
the nginx Headless Service even though that Service already exists.
Examine the output of the kubectl get command running in the first
terminal.
Let's take another look at the contents of the index.html file served by
the Pods' webservers.
Even though you deleted both the StatefulSet and the web-0 Pod, it still
serves the hostname originally entered into its index.html file. This is
because the StatefulSet never deletes the PersistentVolumes associated
with a Pod. When you recreated the StatefulSet and it relaunched web-
0, its original PersistentVolume was remounted.
Cascading Delete
In another terminal, delete the StatefulSet again. This time, omit the
--cascade=false parameter.
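The command is omitted here; it would simply be:

kubectl delete statefulset web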
Examine the output of the kubectl get command running in the first
terminal, and wait for all of the Pods to transition to Terminating.
As you saw in the Scaling Down section, the Pods are terminated one at
a time, with respect to the reverse order of their ordinal indices. Before
terminating a Pod, the StatefulSet controller waits for the Pod's
successor to be completely terminated.
Note that, while a cascading delete will delete the StatefulSet and its
Pods, it will not delete the Headless Service associated with the
StatefulSet. You must delete the nginx Service manually.
Even though you completely deleted the StatefulSet, and all of its Pods,
the Pods are recreated with their PersistentVolumes mounted, and web-
0 and web-1 will still serve their hostnames.
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx"
  podManagementPolicy: "Parallel"
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: k8s.gcr.io/nginx-slim:0.8
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
Download the example above, and save it to a file named web-
parallel.yaml
This manifest is identical to the one you downloaded above except that
the .spec.podManagementPolicy of the web StatefulSet is set to Parallel.
Examine the output of the kubectl get command that you executed in
the first terminal.
The StatefulSet controller launched both web-0 and web-1 at the same
time.
Keep the second terminal open, and, in another terminal window scale
the StatefulSet.
Examine the output of the terminal where the kubectl get command is
running.
The StatefulSet controller launched two new Pods, and it did not wait
for the first to become Running and Ready prior to launching the
second.
Keep this terminal open, and in another terminal delete the web
StatefulSet.
Again, examine the output of the kubectl get command running in the
other terminal.
The StatefulSet controller deletes all Pods concurrently; it does not
wait for a Pod's ordinal successor to terminate prior to deleting that Pod.
Close the terminal where the kubectl get command is running and
delete the nginx Service.
Cleaning up
You will need to delete the persistent storage media for the
PersistentVolumes used in this tutorial. Follow the necessary steps,
based on your environment, storage configuration, and provisioning
method, to ensure that all storage is reclaimed.
Example: Deploying WordPress and MySQL with Persistent Volumes
◦ Objectives
◦ Before you begin
◦ Create PersistentVolumeClaims and PersistentVolumes
◦ Create a kustomization.yaml
◦ Add resource configs for MySQL and WordPress
◦ Apply and Verify
◦ Cleaning up
◦ What's next
Objectives
◦ Create PersistentVolumeClaims and PersistentVolumes
◦ Create a kustomization.yaml with
▪ a Secret generator
▪ MySQL resource configs
▪ WordPress resource configs
◦ Apply the kustomization directory by kubectl apply -k ./
◦ Clean up
Before you begin
You need to have a Kubernetes cluster, and the kubectl command-line
tool must be configured to communicate with your cluster. If you do not
already have a cluster, you can create one by using Minikube, or you
can use one of these Kubernetes playgrounds:
◦ Katacoda
◦ Play with Kubernetes
The example shown on this page works with kubectl 1.14 and above.
1. mysql-deployment.yaml
2. wordpress-deployment.yaml
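The kustomization.yaml that pulls these together is not reproduced in this excerpt; a minimal sketch, with a placeholder password, might look like:

cat <<EOF > kustomization.yaml
secretGenerator:
- name: mysql-pass
  literals:
  - password=YOUR_PASSWORD
resources:
  - mysql-deployment.yaml
  - wordpress-deployment.yaml
EOF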
kubectl apply -k ./
NAME                    TYPE     DATA   AGE
mysql-pass-c57bb4t7mf   Opaque   1      9s

NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mysql-pv-claim   Bound    pvc-8cbd7b2e-4044-11e9-b2bb-42010a800002   20Gi       RWO            standard       77s
wp-pv-claim      Bound    pvc-8cd0df54-4044-11e9-b2bb-42010a800002   20Gi       RWO            standard       77s

NAME                               READY   STATUS    RESTARTS   AGE
wordpress-mysql-1894417608-x5dzt   1/1     Running   0          40s
http://1.2.3.4:32406
6. Copy the IP address, and load the page in your browser to view
your site.
You should see the WordPress set up page similar to the following
screenshot.
Warning: Do not leave your WordPress installation on this
page. If another user finds it, they can set up a website on
your instance and use it to serve malicious content.
Either install WordPress by creating a username and
password or delete your instance.
Cleaning up
1. Run the following command to delete your Secret, Deployments,
Services and PersistentVolumeClaims:
kubectl delete -k ./
What's next
◦ Learn more about Introspection and Debugging
◦ Learn more about Jobs
◦ Learn more about Port Forwarding
◦ Learn how to Get a Shell to a Container
Example: Deploying Cassandra with a StatefulSet
Objectives
◦ Create and validate a Cassandra headless Service.
◦ Use a StatefulSet to create a Cassandra ring.
◦ Validate the StatefulSet.
◦ Modify the StatefulSet.
◦ Delete the StatefulSet and its Pods.
Caution:
Minikube defaults to 1024MB of memory and 1 CPU. Running
Minikube with the default resource configuration results in
insufficient resource errors during this tutorial. To avoid
these errors, start Minikube with the following settings:
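(The settings themselves are not preserved in this excerpt; a typical invocation is shown below, and the exact values are an assumption.)

minikube start --memory 5120 --cpus=4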
application/cassandra/cassandra-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: cassandra
  name: cassandra
spec:
  clusterIP: None
  ports:
  - port: 9042
  selector:
    app: cassandra
Validating (optional)
The response is
It can take several minutes for all three Pods to deploy. Once they
are deployed, the same command returns:
Datacenter: DC1-K8Demo
======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address     Load        Tokens   Owns (effective)   Host ID                                Rack
UN  172.17.0.5  83.57 KiB   32       74.0%              e2dd09e6-d9d3-477e-96c5-45094c08db0f   Rack1-K8Demo
UN  172.17.0.4  101.04 KiB  32       58.8%              f89d6835-3a42-4419-92b3-0e62cae1479c   Rack1-K8Demo
UN  172.17.0.6  84.74 KiB   32       67.1%              a6a1e8c2-3dc5-4417-b1a0-26507af2aaad   Rack1-K8Demo
This command opens an editor in your terminal. The line you need
to change is the replicas field. The following sample is an excerpt
of the StatefulSet file:
Cleaning up
Deleting or scaling a StatefulSet down does not delete the volumes
associated with the StatefulSet. This setting is for your safety because
your data is more valuable than automatically purging all related
StatefulSet resources.
What's next
◦ Learn how to Scale a StatefulSet.
◦ Learn more about the KubernetesSeedProvider
◦ See more custom Seed Provider Configurations
Running ZooKeeper, A
Distributed System Coordinator
This tutorial demonstrates running Apache Zookeeper on Kubernetes
using StatefulSets, PodDisruptionBudgets, and PodAntiAffinity.
◦ Objectives
◦ Before you begin
◦ Creating a ZooKeeper Ensemble
◦ Ensuring Consistent Configuration
◦ Managing the ZooKeeper Process
◦ Tolerating Node Failure
◦ Surviving Maintenance
◦ Cleaning up
Objectives
After this tutorial, you will know the following.
◦ Pods
◦ Cluster DNS
◦ Headless Services
◦ PersistentVolumes
◦ PersistentVolume Provisioning
◦ StatefulSets
◦ PodDisruptionBudgets
◦ PodAntiAffinity
◦ kubectl CLI
You will require a cluster with at least four nodes, and each node
requires at least 2 CPUs and 4 GiB of memory. In this tutorial you will
cordon and drain the cluster's nodes. This means that the cluster
will terminate and evict all Pods on its nodes, and the nodes will
temporarily become unschedulable. You should use a dedicated
cluster for this tutorial, or you should ensure that the disruption you
cause will not interfere with other tenants.
ZooKeeper Basics
The ensemble uses the Zab protocol to elect a leader, and the ensemble
cannot write data until that election is complete. Once complete, the
ensemble uses Zab to ensure that it replicates all writes to a quorum
before it acknowledges and makes them visible to clients. Without
respect to weighted quorums, a quorum is a majority component of the
ensemble containing the current leader. For instance, if the ensemble
has three servers, a component that contains the leader and one other
server constitutes a quorum. If the ensemble can not achieve a quorum,
the ensemble cannot write data.
This creates the zk-hs Headless Service, the zk-cs Service, the zk-pdb
PodDisruptionBudget, and the zk StatefulSet.
service/zk-hs created
service/zk-cs created
poddisruptionbudget.policy/zk-pdb created
statefulset.apps/zk created
Once the zk-2 Pod is Running and Ready, use CTRL-C to terminate
kubectl.
The StatefulSet controller creates three Pods, and each Pod has a
container with a ZooKeeper server.
zk-0
zk-1
zk-2
To examine the contents of the myid file for each server use the
following command.
Because the identifiers are natural numbers and the ordinal indices are
non-negative integers, you can generate an identifier by adding 1 to the
ordinal.
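The command is omitted from this excerpt; the loop below (whose output follows) reads each server's myid file from the data directory configured above:

for i in 0 1 2; do echo "myid zk-$i"; kubectl exec zk-$i -- cat /var/lib/zookeeper/data/myid; done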
myid zk-0
1
myid zk-1
2
myid zk-2
3
To get the Fully Qualified Domain Name (FQDN) of each Pod in the zk
StatefulSet use the following command.
The zk-hs Service creates a domain for all of the Pods,
zk-hs.default.svc.cluster.local.
zk-0.zk-hs.default.svc.cluster.local
zk-1.zk-hs.default.svc.cluster.local
zk-2.zk-hs.default.svc.cluster.local
clientPort=2181
dataDir=/var/lib/zookeeper/data
dataLogDir=/var/lib/zookeeper/log
tickTime=2000
initLimit=10
syncLimit=2000
maxClientCnxns=60
minSessionTimeout= 4000
maxSessionTimeout= 40000
autopurge.snapRetainCount=3
autopurge.purgeInterval=0
server.1=zk-0.zk-hs.default.svc.cluster.local:2888:3888
server.2=zk-1.zk-hs.default.svc.cluster.local:2888:3888
server.3=zk-2.zk-hs.default.svc.cluster.local:2888:3888
Achieving Consensus
The A records for each Pod are entered when the Pod becomes Ready.
Therefore, the FQDNs of the ZooKeeper servers will resolve to a single
endpoint, and that endpoint will be the unique ZooKeeper server
claiming the identity configured in its myid file.
zk-0.zk-hs.default.svc.cluster.local
zk-1.zk-hs.default.svc.cluster.local
zk-2.zk-hs.default.svc.cluster.local
server.1=zk-0.zk-hs.default.svc.cluster.local:2888:3888
server.2=zk-1.zk-hs.default.svc.cluster.local:2888:3888
server.3=zk-2.zk-hs.default.svc.cluster.local:2888:3888
When the servers use the Zab protocol to attempt to commit a value,
they will either achieve consensus and commit the value (if leader
election has succeeded and at least two of the Pods are Running and
Ready), or they will fail to do so (if either of the conditions are not met).
No state will arise where one server acknowledges a write on behalf of
another.
The most basic sanity test is to write data to one ZooKeeper server and
to read the data from another.
The command below executes the zkCli.sh script to write world to the
path /hello on the zk-0 Pod in the ensemble.
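(The command itself is not preserved in this excerpt; it would be roughly the following.)

kubectl exec zk-0 -- zkCli.sh create /hello world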
WATCHER::
To get the data from the zk-1 Pod use the following command.
The data that you created on zk-0 is available on all the servers in the
ensemble.
WATCHER::
This creates the zk StatefulSet object, but the other API objects in the
manifest are not modified because they already exist.
Once the zk-2 Pod is Running and Ready, use CTRL-C to terminate
kubectl.
NAME READY STATUS RESTARTS AGE
zk-0 0/1 Pending 0 0s
zk-0 0/1 Pending 0 0s
zk-0 0/1 ContainerCreating 0 0s
zk-0 0/1 Running 0 19s
zk-0 1/1 Running 0 40s
zk-1 0/1 Pending 0 0s
zk-1 0/1 Pending 0 0s
zk-1 0/1 ContainerCreating 0 0s
zk-1 0/1 Running 0 18s
zk-1 1/1 Running 0 40s
zk-2 0/1 Pending 0 0s
zk-2 0/1 Pending 0 0s
zk-2 0/1 ContainerCreating 0 0s
zk-2 0/1 Running 0 19s
zk-2 1/1 Running 0 40s
Use the command below to get the value you entered during the sanity
test, from the zk-2 Pod.
Even though you terminated and recreated all of the Pods in the zk
StatefulSet, the ensemble still serves the original value.
WATCHER::
volumeClaimTemplates:
  - metadata:
      name: datadir
      annotations:
        volume.alpha.kubernetes.io/storage-class: anything
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 20Gi
NAME           STATUS   VOLUME                                     CAPACITY   ACCESSMODES   AGE
datadir-zk-0   Bound    pvc-bed742cd-bcb1-11e6-994f-42010a800002   20Gi       RWO           1h
datadir-zk-1   Bound    pvc-bedd27d2-bcb1-11e6-994f-42010a800002   20Gi       RWO           1h
datadir-zk-2   Bound    pvc-bee0817e-bcb1-11e6-994f-42010a800002   20Gi       RWO           1h
volumeMounts:
- name: datadir
  mountPath: /var/lib/zookeeper
Configuring Logging
Use the command below to get the logging configuration from one of the
Pods in the zk StatefulSet.
zookeeper.root.logger=CONSOLE
zookeeper.console.threshold=INFO
log4j.rootLogger=${zookeeper.root.logger}
log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender
log4j.appender.CONSOLE.Threshold=${zookeeper.console.threshold}
log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout
log4j.appender.CONSOLE.layout.ConversionPattern=%d{ISO8601} [myid:%X{myid}] - %-5p [%t:%C{1}@%L] - %m%n
This is the simplest possible way to safely log inside the container.
Because the applications write logs to standard out, Kubernetes will
handle log rotation for you. Kubernetes also implements a sane
retention policy that ensures application logs written to standard out
and standard error do not exhaust local storage media.
Use kubectl logs to retrieve the last 20 log lines from one of the Pods.
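A sketch of that command:

kubectl logs zk-0 --tail 20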
You can view application logs written to standard out or standard error
using kubectl logs and from the Kubernetes Dashboard.
securityContext:
  runAsUser: 1000
  fsGroup: 1000
Use the command below to get the file permissions of the ZooKeeper
data directory on the zk-0 Pod.
You can use kubectl patch to update the number of cpus allocated to
the servers.
kubectl patch sts zk --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/resources/requests/cpu", "value":"0.3"}]'
statefulset.apps/zk patched
This terminates the Pods, one at a time, in reverse ordinal order, and
recreates them with the new configuration. This ensures that quorum is
maintained during a rolling update.
statefulsets "zk"
REVISION
1
2
Use the kubectl rollout undo command to roll back the modification.
The command used as the container's entry point has PID 1, and the
ZooKeeper process, a child of the entry point, has PID 27.
livenessProbe:
  exec:
    command:
    - sh
    - -c
    - "zookeeper-ready 2181"
  initialDelaySeconds: 15
  timeoutSeconds: 5
The probe calls a bash script that uses the ZooKeeper ruok four letter
word to test the server's health.
In one terminal window, use the following command to watch the Pods
in the zk StatefulSet.
When the liveness probe for the ZooKeeper process fails, Kubernetes
will automatically restart the process for you, ensuring that unhealthy
processes in the ensemble are restarted.
readinessProbe:
  exec:
    command:
    - sh
    - -c
    - "zookeeper-ready 2181"
  initialDelaySeconds: 15
  timeoutSeconds: 5
Use the command below to get the nodes for Pods in the zk StatefulSet.
kubernetes-node-cxpk
kubernetes-node-a5aq
kubernetes-node-2g2d
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
            - key: "app"
              operator: In
              values:
                - zk
        topologyKey: "kubernetes.io/hostname"
Surviving Maintenance
In this section you will cordon and drain nodes. If you are using
this tutorial on a shared cluster, be sure that this will not
adversely affect other tenants.
The previous section showed you how to spread your Pods across nodes
to survive unplanned node failures, but you also need to plan for
temporary node failures that occur due to planned maintenance.
Use kubectl cordon to cordon all but four of the nodes in your cluster.
kubectl cordon <node-name>
In one terminal, use this command to watch the Pods in the zk StatefulSet.
In another terminal, use this command to get the nodes that the Pods
are currently scheduled on.
kubernetes-node-pb41
kubernetes-node-ixsl
kubernetes-node-i4c4
Use kubectl drain to cordon and drain the node on which the zk-0
Pod is scheduled.
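The drain command is not reproduced above; a sketch (the flags for handling local data and DaemonSets may differ between kubectl versions):

kubectl drain $(kubectl get pod zk-0 -o jsonpath='{.spec.nodeName}') --ignore-daemonsets --force --delete-local-data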
As there are four nodes in your cluster, kubectl drain succeeds and the
zk-0 Pod is rescheduled to another node.
Keep watching the StatefulSet's Pods in the first terminal and drain
the node on which zk-1 is scheduled.
You cannot drain the third node because evicting zk-2 would violate
zk-budget. However, the node will remain cordoned.
Use zkCli.sh to retrieve the value you entered during the sanity test
from zk-0.
The output:
Cleaning up
◦ Use kubectl uncordon to uncordon all the nodes in your cluster.
◦ You will need to delete the persistent storage media for the
PersistentVolumes used in this tutorial. Follow the necessary steps,
based on your environment, storage configuration, and
provisioning method, to ensure that all storage is reclaimed.
AppArmor
FEATURE STATE: Kubernetes v1.4 beta
This feature is currently in a beta state, meaning:
Using Source IP
Applications running in a Kubernetes cluster find and communicate
with each other, and the outside world, through the Service abstraction.
This document explains what happens to the source IP of packets sent
to different types of Services, and how you can toggle this behavior
according to your needs.
◦ Objectives
◦ Before you begin
◦ Terminology
◦ Prerequisites
◦ Source IP for Services with Type=ClusterIP
◦ Source IP for Services with Type=NodePort
◦ Source IP for Services with Type=LoadBalancer
◦ Cleaning up
◦ What's next
Objectives
◦ Expose a simple application through various types of Services
◦ Understand how each Service type handles source IP NAT
◦ Understand the tradeoffs involved in preserving source IP
◦ Katacoda
◦ Play with Kubernetes
Terminology
This document makes use of the following terms:
Prerequisites
You must have a working Kubernetes 1.5 cluster to run the examples in
this document. The examples use a small nginx webserver that echoes
back the source IP of requests it receives through an HTTP header. You
can create it as follows:
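(The exact command is not preserved in this excerpt; given the output below, it was presumably equivalent to the following, where the echoserver image tag is an assumption.)

kubectl create deployment source-ip-app --image=k8s.gcr.io/echoserver:1.4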
deployment.apps/source-ip-app created
iptables
service/clusterip exposed
# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
3: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc
noqueue
link/ether 0a:58:0a:f4:03:08 brd ff:ff:ff:ff:ff:ff
inet 10.244.3.8/24 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::188a:84ff:feb0:26a5/64 scope link
valid_lft forever preferred_lft forever
service/nodeport exposed
NODEPORT=$(kubectl get -o jsonpath="{.spec.ports[0].nodePort}" services nodeport)
NODES=$(kubectl get nodes -o jsonpath='{ $.items[*].status.addresses[?(@.type=="ExternalIP")].address }')
If you're running on a cloudprovider, you may need to open up a
firewall-rule for the nodes:nodeport reported above. Now you can try
reaching the Service from outside the cluster through the node port
allocated above.
client_address=10.180.1.1
client_address=10.240.0.5
client_address=10.240.0.3
Note that these are not the correct client IPs; they're cluster-internal
IPs. This is what happens:
Visually:
client
\ ^
\ \
v \
node 1 <--- node 2
| ^ SNAT
| | --->
v |
endpoint
client_address=104.132.1.79
Note that you only got one reply, with the right client IP, from the one
node on which the endpoint pod is running.
Visually:
client
^ / \
/ / \
/ v X
node 1 node 2
^ |
| |
| v
endpoint
service/loadbalancer exposed
Print IPs of the Service:
curl 104.198.149.140
CLIENT VALUES:
client_address=10.240.0.5
...
Visually:
client
|
lb VIP
/ ^
v /
health check ---> node 1 node 2 <--- health check
200 <--- ^ | ---> 500
| V
endpoint
healthCheckNodePort: 32122
curl 104.198.149.140
CLIENT VALUES:
client_address=104.132.1.79
...
2. With a packet forwarder, such that requests from the client sent to
the loadbalancer VIP end up at the node with the source IP of the
client, not an intermediate proxy.
Cleaning up
Delete the Services:
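The commands are not shown here; a sketch that removes the Services and the Deployment created above (the label selector is an assumption based on how the Deployment was created):

kubectl delete svc -l app=source-ip-app
kubectl delete deployment source-ip-app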
What's next
◦ Learn more about connecting applications via services
◦ Learn more about loadbalancing