Cloud Sect9 Kubernetes F23PA2

Copyright © All Rights Reserved

Cloud Computing: Theory and Practice
INSY 5345 & INSY 4307
Dr. Santoso Budiman
Topics

• Section 1: Cloud Computing Introduction
• Section 2: Computer Basics
• Section 3: AWS Elastic Compute Cloud
• Section 4: Virtual Private Cloud
• Section 5: Load Balancing & Auto Scaling
• Section 6: AWS Storage
• Section 7: AWS Databases
• Section 8: Containers (IAM, aws cli, Dockerfile, Docker image, Docker Container, ECR)
• Section 9: Kubernetes (YAML, AWS EKS, kubectl, eksctl, Namespace, IaC)
• Section 10: Serverless

Topics
• Kubernetes
• Kubernetes Cluster
• YAML for Kubernetes
• kubectl
• eksctl
• EKS Demo
• Namespace
• Kubernetes Cluster Design
Microservices, Containers, and Container Orchestration

• Microservices: a software architecture that breaks down an application into services/modules. Easy to scale.
• Containers: a standard unit of software that packages up code and all its dependencies, so the application runs quickly and reliably from one computing environment to another.
• Container orchestration: a system for automating deployment, scaling, and management of containerized applications. Ex: Kubernetes.
Container Orchestration Tools

• Docker Swarm
• Apache Mesos (an orchestration tool for containers and non-containers)
• Kubernetes (open source)
• AWS EKS (Kubernetes on AWS)
• AWS ECS (AWS-native container orchestration)
• AWS Fargate (serverless compute engine for EKS/ECS)
• Red Hat OpenShift (a Kubernetes distribution)
• Others
Kubernetes & AWS EKS Market Share

Self-managed Kubernetes and AWS EKS are the most popular (the reason they are covered in this class). Remember, containers and Kubernetes can be deployed on-premises as well.

• Based on a survey conducted by StackRox.
• https://www.stackrox.com/kubernetes-adoption-security-and-market-share-for-containers/
Kubernetes

Kubernetes (K8s) is an open-source system for automating deployment, scaling, and management of containerized applications.
▪ It abstracts the hardware infrastructure as one huge computational resource.
▪ Based on Linux container technology.
▪ It groups containers that make up an application into logical units for easy management and discovery.
▪ Orchestrates many containers on many hosts, scales, and deploys rollouts and rollbacks.
▪ Originally designed by Google, building on 15 years of experience (initial release June 2014).
▪ Now maintained by the Cloud Native Computing Foundation (CNCF).

• https://kubernetes.io/
From a user's point of view, a Kubernetes cluster looks like one huge machine on which to deploy services.
Traditional Deployment to Kubernetes (Monoliths → Microservices)

• Traditional deployment: apps run on physical servers; no boundaries between apps (resource allocation issues); need additional physical servers to scale up an app.
• Virtualized deployment: allows multiple VMs to run on one physical server; each VM has its own OS; isolation between VMs; better resource utilization; scalability.
• Container deployment: lightweight compared to VMs; each container has its own filesystem, binaries, and needed libraries; containers can be deployed on a physical server or a VM.
• Kubernetes: provides a framework to manage containers and run distributed systems resiliently.
Kubernetes Functions

• Service discovery and load balancing: Kubernetes can expose a container using DNS or its own IP address; can load balance and distribute traffic.
• Storage orchestration: allows users to automatically mount a storage system (local storage, public cloud providers).
• Automated rollouts and rollbacks: can be programmed to create/remove containers.
• Automatic bin packing: can specify how much CPU and memory each container needs.
• Self-healing: will restart failed containers, replace containers, or kill containers that don't respond to a user-defined health check.
• Security and configuration management: lets users store and manage sensitive information such as passwords, auth tokens, and SSH keys.
Kubernetes – What It Does

• A Kubernetes cluster is a set of nodes that run containerized applications. It contains:
  • a Control Plane
  • one or more Nodes (Minions)
• A node can be a physical server or a VM (ex. AWS EC2).
• A node can have one or more Pods.
• A Pod consists of one or more containers.
• Kubernetes orchestrates containers on nodes, scales, and deploys rollouts and rollbacks.

https://kubernetes.io/docs/contribute/style/diagram-guide/
Pod – Container

• A Pod is a wrapper around one or more containers.
• A Pod is the smallest unit in Kubernetes that a user creates or deploys.
• Typically, one container per Pod.
• A Pod has:
  • a unique network IP address in the Kubernetes cluster, and
  • a set of ports for its containers.
• Containers inside a Pod can communicate with one another using localhost and ports.
• A Deployment represents a group of replicas of the same Pod.
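A single-container Pod as described above can be sketched as a minimal manifest (the names and the nginx image below are illustrative, not from the course materials):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod          # Pod name (illustrative)
  labels:
    app: myapp         # label used by Services/Deployments to select this Pod
spec:
  containers:
  - name: web
    image: nginx       # container image to run (illustrative)
    ports:
    - containerPort: 80   # port the container listens on inside the Pod
```

In practice you rarely create bare Pods; a Deployment (shown later in this section) wraps a Pod template like this one.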
Multi-container Pods

• Rare case; mostly one container per Pod.
• Can have multiple containers with a hard dependency.
• Cannot have multiple containers of the same kind in a Pod.
• Helper containers (sidecars):
  • Data pullers: pull data for the main container
  • Data pushers: push data from the main container
  • Proxies
Kubernetes Cluster

[Diagram: Pods in the cluster, each with its own IP address; containers inside each Pod are reached via localhost and a port number.]

• A Pod is a host (localhost), just like a laptop.
• A Pod has an internal IP address.
• A container in a Pod does not have its own IP address but is assigned a port number.
• Note that this differs from a container running directly in a VM (Docker) without Kubernetes.
Kubernetes Components

https://kubernetes.io/docs/concepts/overview/components/
Kubernetes Control Plane

The Control Plane is responsible for managing the entire cluster. Components: kube-apiserver, kube-controller-manager, kube-scheduler, etcd. A cluster can have 1-3 Control Plane instances (for HA purposes).

• kube-apiserver: all communication between cluster components goes through the API server. It lets clients interact with the Control Plane using a REST API.
• kube-controller-manager: a daemon that runs a series of controllers. Ex: the ReplicationController ensures the set number of replicas in a ReplicaSet are running; the Deployment controller manages rolling updates and rollbacks.
• kube-scheduler: schedules Pods onto available worker nodes.
• etcd: the Kubernetes cluster data store. It is a strongly consistent, distributed key-value store that provides a reliable way to store cluster data.
• cloud-controller-manager: cloud-specific control logic. The cloud-controller-manager links the cluster into the cloud provider's API.
Kubernetes Nodes (Minions)

• Node (Minion): a node can be a VM or a physical machine. Each node contains the services necessary to run Pods:
  • a container runtime
  • kubelet
  • kube-proxy
  • Add-ons can be installed, such as CoreDNS.
• kubelet: responsible for communication between the Kubernetes Control Plane and the nodes; it receives instructions from the Control Plane on which Pods to run and manages the state of the Pods.
• Container runtime: responsible for pulling the container image from a registry and running the application. Docker used to be the popular one but was deprecated starting with Kubernetes v1.24.
• kube-proxy: a network proxy that runs on each node. It maintains network rules for Services. These rules allow network communication to the Pods from network sessions inside or outside the cluster (routing traffic to the Pods).
Maximum Cluster Configuration

A cluster is a set of nodes (physical or virtual machines) running Kubernetes agents, managed by the Control Plane. From the user's perspective, it is one huge computing machine.

Kubernetes v1.27 supports clusters with:
• Up to 5,000 nodes
• Up to 110 pods per node
• Up to 150,000 total pods
• Up to 300,000 total containers

• https://kubernetes.io/docs/setup/best-practices/cluster-large/
Deployment & Service

A Deployment is a group of replicas of the same Pod.
▪ A Deployment creates a ReplicaSet, which creates a defined number of replicated Pods.

A Service is a logical abstraction for a deployed group of Pods in a cluster.
▪ A software router to the Pods.
▪ Enables network access to this set of Pods.
▪ Has a static IP address (unlike Pods).
▪ Common types:
  • internal: ClusterIP
  • external: LoadBalancer
▪ Each Pod has an internal (ephemeral) IP address.
[Diagram: two AWS Regions, each containing a VPC with a Kubernetes cluster: a Control Plane plus EC2 worker nodes running the Pods of a Deployment.]
Kubernetes Manifest Files

• A Kubernetes manifest file is a configuration file, written in YAML or JSON, that describes the resources you want to create in a cluster.
• When a manifest is applied to a Kubernetes cluster, Kubernetes creates objects based on the configuration.
• Some manifest kinds: Deployment, Service, Namespace, Pod, ConfigMap, Secret, etc.
• Manifest file structure:
  • apiVersion – Kubernetes API version
  • kind – object type to be created (ex: Deployment)
  • metadata – defines the object name, labels, and annotations
  • spec – the actual resource configuration
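The four-part structure can be sketched in a minimal manifest (the names here are made up for illustration):

```yaml
apiVersion: v1          # Kubernetes API version
kind: Namespace         # object type to be created
metadata:               # object name, labels, annotations
  name: demo
  labels:
    team: example
# a spec: section follows for kinds that require one (Deployment, Service, Pod, ...)
```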
Deployment & Service Summary

Deployment
• A Deployment is a group of replicas of the same Pod.
• Provides declarative updates for Pods and ReplicaSets.
• Includes:
  • the number of Pods
  • what container(s) run inside a Pod
  • the container image (ex. from ECR)
  • the container port number (ex. 80)
• https://kubernetes.io/docs/concepts/workloads/controllers/deployment/

Service
• A Service is an abstract way to expose an application running on a set of Pods as a network service.
• Includes:
  • the Pod name/label (to select)
  • the type: ex. LoadBalancer
  • the port: the abstracted service port (ex: 80)
• https://kubernetes.io/docs/concepts/services-networking/service/

https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/
Kubernetes Service Types

• In Kubernetes, workloads run in containers, containers run in Pods, Pods are managed by Deployments (with the help of other Kubernetes objects), and Deployments are exposed via Services.
• Pods have ephemeral, internal IPs.
• A Service exposes a Deployment. It is a resource created to provide a single, constant point of entry to a group of Pods providing the same service. Every Service has a permanent IP address and port number.
• Service types:
  • ClusterIP (the default if not explicitly stated): exposes the Service on a cluster-internal IP (the Service is only accessible within the Kubernetes cluster).
  • LoadBalancer: exposes the Service externally using the cloud provider's load balancer.
  • NodePort: exposes the Service on each node's IP at a static port (an open port on every node of the cluster).
  • ExternalName: works as a proxy that redirects requests to a service sitting outside/inside the cluster.
• https://kubernetes.io/docs/concepts/services-networking/service/
Deployment – Pod – YAML File

[Diagram: a node running two Pods; each Pod has its own IP address and container ports A and B.]

• Each Pod has an internal IP address (ephemeral).
• The ports of a Pod are defined in the Deployment YAML file (containerPort).
• You can expose more than one container in a Pod.
• In this example, only one Pod port is exposed (80).
Service

• A Service is an abstraction layer.
• Every Service is assigned an IP address (the cluster IP) and a port.
• targetPort is the TCP port a Pod listens on (the containerPort).
• A Service can map any incoming port to a targetPort.
• By default, the targetPort is set to the same value as the port field.
• Pods are identified by selectors.
• This spec creates a Service object named "my-service", which targets TCP port 9376 on any Pod with the app.kubernetes.io/name=MyApp label.
• Kubernetes assigns this Service an IP address (sometimes called the "cluster IP").
• https://kubernetes.io/docs/concepts/services-networking/service/
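The spec in question, from the Kubernetes Service documentation, is:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app.kubernetes.io/name: MyApp   # targets Pods carrying this label
  ports:
  - protocol: TCP
    port: 80            # the Service's own (cluster IP) port
    targetPort: 9376    # the port the selected Pods listen on
```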
Port Terms in YAML Files

Deployment file:
• containerPort: the Pod port assigned to the container.
• Note: targetPort in the Service file must match this containerPort.

Service file:
• port: the Service (cluster IP) port.
• targetPort: the containerPort.
• nodePort: the worker node port number (30000-32767).

Note: in the majority of cases, there is one container per Pod.

Cluster IP
• Each Service has a ClusterIP (an internal fixed IP address) and a port.
• To access a Service from outside the cluster, use service types:
  • LoadBalancer (the load balancer is outside the cluster)
  • NodePort (using the IP address of any of the worker nodes and the port number).
Service: ClusterIP (Internal IP) for Internal Clients

[Diagram: an internal client reaches the Service's cluster IP and port; the Service distributes traffic to Pods (each with its own IP address and ports) across Nodes 1-3 of the cluster.]
NodePort

• Built on top of ClusterIP; a ClusterIP is created automatically.
• The same port number is used on every node.
• The port number is between 30000 and 32767.
• If no port number is specified, Kubernetes selects a free port.
• Exposes the ClusterIP Service outside of the cluster using any node's IP address and the port number.
• The Service is accessible through the IP address and the static port of any cluster node.
• Every node in the cluster listens on this port; traffic is forwarded to the ClusterIP.
• If you use NodePort, don't forget to allow traffic in the Security Group.
• Every container in a Pod shares the network namespace, including the IP address and network ports.
Service: NodePort

[Diagram: an external client connects to the NodePort on any node; traffic is forwarded to the Service's cluster IP, which distributes it to Pods across Nodes 1-3.]
NodePort Service YAML example

apiVersion: v1
kind: Service
metadata:
  name: deployment-xyz-service
spec:
  type: NodePort
  selector:
    app: myapp
  ports:
  - port: 80
    targetPort: 80
    nodePort: 31233
Load Balancer

• Cloud providers typically provide a load balancer automatically for a Kubernetes cluster.
• External to the cluster. AWS: ELB.
• The service type must be set to LoadBalancer.
• Type LoadBalancer is an extension of NodePort.
• A NodePort and a ClusterIP are created automatically.
• If the environment does not support LoadBalancer, the service behaves as a NodePort service.

Ex:
apiVersion: v1
kind: Service
metadata:
  name: test-loadbalance
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30010  # optional; Kubernetes will assign if not set
  selector:
    app: xxx
Kubernetes Service: Load Balancer

Note: AWS EKS uses the Network Load Balancer and the Classic Load Balancer for Pods running on EC2 instance worker nodes through the LoadBalancer service type.
https://aws.amazon.com/premiumsupport/knowledge-center/eks-kubernetes-services-cluster/
Service: LoadBalancer

[Diagram: a client outside the cluster connects to the load balancer; the load balancer forwards traffic to the NodePort on the nodes, which forwards it to the Service's cluster IP, which distributes it to Pods across Nodes 1-3.]
Deployment and Service YAML – explanation

Deployment:
apiVersion: apps/v1
kind: Deployment                # object is a Deployment
metadata:
  name: mynginxdeply            # Deployment name
spec:                           # specification of the Deployment
  replicas: 2                   # number of Pods in this Deployment
  selector:                     # the Pods to select
    matchLabels:                # match the Pods with this label
      app: mynginxpod
  template:                     # Pod template
    metadata:
      labels:                   # Pod label; must match matchLabels
        app: mynginxpod
    spec:
      containers:               # container spec below is a data sequence
      - name: mynginx
        image: public.ecr.aws/n3b9k8l8/budiman-nginx   # in this case, the URI of the image in ECR
        ports:
        - containerPort: 80

Service:
apiVersion: v1
kind: Service                   # object is a Service
metadata:
  name: mynginxserv             # name of the Service
spec:                           # Service spec
  selector:
    app: mynginxpod             # Pod label; must match the Pod label in the Deployment
  ports:
  - protocol: TCP
    port: 80                    # the load balancer will listen on this port
  type: LoadBalancer            # external
Kubernetes Auto-Scaling

• Kubernetes scaling:
  • Scaling Deployments (number of replicas)
    • Horizontal Pod Autoscaling
    • https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/
  • Scaling clusters (number of nodes)
    • Horizontal node autoscaling
    • https://kubernetes.io/blog/2016/07/autoscaling-in-kubernetes/
• Note: you can also scale in AWS when creating a Node Group manually or using eksctl.
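As an illustration of Horizontal Pod Autoscaling, a HorizontalPodAutoscaler that scales the mynginxdeply Deployment from the earlier example might look like this (a sketch, assuming a metrics server is installed in the cluster; the HPA name and thresholds are made up):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: mynginx-hpa
spec:
  scaleTargetRef:               # the Deployment to scale
    apiVersion: apps/v1
    kind: Deployment
    name: mynginxdeply
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50  # add/remove Pods to keep average CPU near 50%
```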
Kubernetes Networking

• Every Pod gets its own internal IP address. It communicates using its IP address; no link between Pods is necessary.
• Kubernetes offers the following four networking models for container communication:
  • Container-to-Container communication (within a Pod)
  • Pod-to-Pod communication
  • Pod-to-Service communication
  • External-to-internal (Service) communication
• https://kubernetes.io/docs/concepts/cluster-administration/networking/
Example: A Solution with Multiple Services in the Same Cluster

• A solution with two microservices:
  • Service A interfaces with the internet. Type: LoadBalancer/Ingress/NodePort. It fronts the Pods of Deployment A.
  • Service B receives requests from Service A. Type: ClusterIP. It fronts the Pods of Deployment B.
  • A Service of type ExternalName can point to a resource outside the cluster, e.g., a database.
• https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/
Container Runtime

• A container runtime is a low-level software component that pulls images and creates and runs containers.
• A container runtime is needed on every K8s node for Pods to run.
• Several container runtimes:
  • containerd
    • created by Docker and was part of Docker Engine; donated to the CNCF in 2017
  • Docker Engine
    • has a suite of functions similar to Kubernetes and uses containerd as its runtime
    • CLI, API, volumes, networking
    • needs dockershim to work with CRI
  • Mirantis Container Runtime
  • CRI-O
Kubernetes & Docker

• The Container Runtime Interface (CRI) is a plugin interface that enables the kubelet to use a variety of container runtimes, without needing to recompile cluster components.
• Kubernetes deprecated Docker as a container runtime in favor of runtimes that use the CRI created for Kubernetes.
• CRI makes it flexible to use different container runtimes.
• Docker was a popular container runtime used by K8s, but Docker was not designed to run inside Kubernetes.
• Docker isn't compliant with CRI and needs dockershim.
• Dockershim was removed from the kubelet in the v1.24 release.
• Docker is still a useful tool for building containers, and the images that result from running docker build can still run in a Kubernetes cluster.
• Docker-produced images will continue to work in clusters with all runtimes.
• You just need to change the container runtime from Docker to another supported container runtime.
CRI makes it flexible to use different container runtimes. CRI-containerd is an implementation of CRI that allows containers to be directly created and managed by containerd at the kubelet's request.

• https://kubernetes.io/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-removal-affects-you/
Kubernetes Namespaces

Kubernetes supports multiple virtual clusters backed by the same physical cluster.
• These virtual clusters are called namespaces.
• Namespaces are a way to divide cluster resources between multiple users (via resource quotas).
• Kubernetes namespaces help different projects/teams/customers share a cluster.
• Creation and deletion of namespaces are described in the Admin Guide documentation for namespaces.
  • Command: kubectl create namespace, or
  • a configuration file

https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/
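The configuration-file route can be sketched as follows (the namespace name is illustrative):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: development
```

Apply it with kubectl apply -f <file>, or equivalently run kubectl create namespace development.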
Example
• Create development and production namespaces.
• Note: if no namespace is defined, components are created in "default".
Cluster – Namespace – Deployment

A Kubernetes namespace is a virtual Kubernetes cluster. You can have:
• multiple namespaces in a cluster
• multiple Deployments in a namespace
• multiple Pod replicas (of the same Pod) in a Deployment

[Diagram: a Kubernetes cluster containing several namespaces; each namespace contains several Deployments, and each Deployment runs several Pod replicas.]
Namespace Use Cases

[Diagram: four example clusters:
• a cluster using only the default namespace (all resources);
• a cluster split into namespaces by function, e.g., Database, Business Logic, Web, Monitoring, Processing;
• a cluster split into Development, Test, and Production namespaces;
• a cluster split into Project A, Project B, and Project C namespaces.]
AWS Container Orchestration Services

• Open source / third party: Docker Swarm, Apache Mesos, Kubernetes, others
• AWS: AWS EKS, AWS ECS, AWS Fargate
AWS Container Orchestration Services (ECS and EKS)

• The control plane: AWS ECS or AWS EKS.
• The AWS compute engine where the containers run: AWS EC2 instances, or AWS Fargate (serverless).
AWS ECS vs EKS

• AWS ECS is an AWS-native service (proprietary).
• Simpler architecture than Kubernetes.
• https://spotinst.com/blog/amazon-ecs-vs-eks-container-orchestration-simplified/
• Note: ECS is cheaper than EKS for obvious reasons, but we will use EKS in this class since Kubernetes is more widespread.
AWS Fargate

AWS Fargate is a serverless compute engine for containers. It works with both ECS and EKS.
• Serverless (no need to provision and manage servers).
• Allocates the right amount of compute, eliminating the need to choose instances and scale cluster capacity.
• Pay only for the resources required to run the containers (not the EC2 instances).

https://aws.amazon.com/fargate/
https://aws.amazon.com/ecs/?whats-new-cards.sort-by=item.additionalFields.postDateTime&whats-new-cards.sort-order=desc&ecs-blogs.sort-
by=item.additionalFields.createdDate&ecs-blogs.sort-order=desc

EKS and ECS

https://aws.amazon.com/eks/

51
Amazon ECS

Amazon Elastic Container Service (Amazon ECS):
• Orchestrates when containers run.
• Maintains and scales the fleet of instances that run your containers.
• Removes the complexity of standing up the infrastructure.
Amazon ECS Orchestrates Containers

[Diagram: in an AWS Region and VPC, an Amazon ECS cluster spans Availability Zones A and B, each with EC2 instances; task definitions reference container images in a container registry, and a service description drives the service that places tasks on the instances.]
AWS Fargate

• Is a fully managed container service.
• Works with Amazon Elastic Container Service (Amazon ECS) and Amazon Elastic Kubernetes Service (Amazon EKS).
• Provisions, manages, and scales your container clusters.
• Manages the runtime environment.
• Provides automatic scaling.
EKS Pricing

• $0.10/hour per EKS cluster (not free tier), or $2.40/day.
• EKS can run on:
  • EC2 – charges for the EC2 instances and EBS volumes used. The EC2s in a cluster cannot be stopped.
  • AWS Fargate (serverless) – pricing is calculated based on the vCPU and memory resources used from the time you start to download your container image until the Amazon EKS Pod terminates, rounded up to the nearest second.
  • On-premises, using AWS Outposts.
• https://aws.amazon.com/eks/pricing/
Service Discovery

• To find a Service, clients running inside a cluster can use:
  • environment variables, and
  • DNS
Service Discovery – Environment Variables

• When a Pod is run on a node, the kubelet adds a set of environment variables for each active Service. It adds {SVCNAME}_SERVICE_HOST and {SVCNAME}_SERVICE_PORT variables.
• Services must be created before the client Pods.
• Example: the Service redis-primary exposes TCP port 6379 and is allocated cluster IP address 10.0.0.11, producing a set of environment variables.
• https://kubernetes.io/docs/concepts/services-networking/service/
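Per the Kubernetes Service documentation, the variables produced for the redis-primary example include:

```
REDIS_PRIMARY_SERVICE_HOST=10.0.0.11
REDIS_PRIMARY_SERVICE_PORT=6379
REDIS_PRIMARY_PORT=tcp://10.0.0.11:6379
REDIS_PRIMARY_PORT_6379_TCP=tcp://10.0.0.11:6379
REDIS_PRIMARY_PORT_6379_TCP_PROTO=tcp
REDIS_PRIMARY_PORT_6379_TCP_PORT=6379
REDIS_PRIMARY_PORT_6379_TCP_ADDR=10.0.0.11
```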
Service Discovery – DNS

• A DNS service can be set up for a Kubernetes cluster using an add-on.
• A cluster-aware DNS server, such as CoreDNS, watches the Kubernetes API for new Services and creates a set of DNS records for each one.
• Pods should be able to resolve Services by their DNS names.
• For example:
  • a Service called my-service is created in the Kubernetes namespace my-ns.
  • The Control Plane and the DNS service acting together create a DNS record for my-service.my-ns.
  • Pods in the my-ns namespace should be able to find the Service by doing a name lookup for my-service (my-service.my-ns would also work).
• CoreDNS is a general-purpose authoritative DNS server that can serve as the Kubernetes cluster DNS.
• In Kubernetes version 1.21, kubeadm removed its support for kube-dns as a DNS application. For kubeadm v1.27, the only supported cluster DNS application is CoreDNS.
• When an AWS EKS cluster with at least one node is launched, two replicas of the CoreDNS image are deployed by default.
• https://kubernetes.io/docs/tasks/administer-cluster/coredns/
• https://docs.aws.amazon.com/eks/latest/userguide/managing-coredns.html
ConfigMaps and Secrets

• A ConfigMap is an API object used to store non-confidential data in key-value pairs.
• Pods can consume ConfigMaps as environment variables, command-line arguments, or as configuration files in a volume.
• A ConfigMap is a namespaced object.
• https://kubernetes.io/docs/concepts/configuration/configmap/

• A Secret is an object that contains a small amount of sensitive data such as a password, a token, or a key.
• Secrets are similar to ConfigMaps but are specifically intended to hold confidential data.
• A Secret is a namespaced object.
• https://kubernetes.io/docs/concepts/configuration/secret/
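As a sketch (all names and values below are illustrative), a ConfigMap and a Secret look very similar:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: "info"                # non-confidential key-value data
  DB_HOST: "db.example.internal"
---
apiVersion: v1
kind: Secret
metadata:
  name: app-secret
type: Opaque
stringData:
  DB_PASSWORD: "changeme"          # confidential data; stored base64-encoded by Kubernetes
```

A Pod can then consume these as environment variables or mounted files, as described above.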
Demos

• Demo 1: Create a cluster, then create a Deployment and a Service.
• Demo 2: Service discovery (environment variables and CoreDNS).
• Demo 3: Namespaces.

Demo 1: Using the AWS Academy Environment
Demo Steps

Create an EC2 instance as manager and install awscli, eksctl, and kubectl → Launch the cluster → Create kubeconfig → Launch a Managed Node Group → Launch the Deployment → Launch the Services
The 3 Tools Used in This Section

• awscli: CLI tools for working with AWS services, including AWS EKS.
• eksctl: a command-line tool for creating and managing clusters on EKS. Simplifies EKS cluster creation (automating tasks). Written in Go; uses CloudFormation (AWS-native IaC).
• kubectl: a command-line tool for working with Kubernetes clusters.

• Note: starting in F23 we use the AWS Academy sandbox, which has limitations:
  • We cannot create an IAM user. Hence, we need to use the provided access key credentials for the aws cli.
  • eksctl does not have the proper permissions, so we can't use it.
  • We cannot write to ECR (no defined permission), so we can't use it either.

https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html
Spin Up the Control EC2

Spin up an EC2 to control the Kubernetes cluster (set LabRole) → Install the AWS CLI → Configure the AWS CLI → Install kubectl

https://docs.aws.amazon.com/eks/latest/userguide/getting-started-console.html
Steps to Create an EKS Cluster

1. Create a VPC and its subnets
   • where the EKS cluster will be in AWS
   • create public/private subnets for the worker nodes
   • Security Group
   • Note: the Control Plane is in a separate AWS VPC.
2. Create the EKS cluster
   • Create an IAM Role for the cluster to access the necessary resources.
   • Create and configure the cluster (assign the IAM Role, select the K8s version).
   • Specify networking (VPC and subnets).
3. Create the kubeconfig file
   • on the machine where you want to use the kubectl command
   • can use an aws cli command
   • .kube/config
   • kubectl will use the kubeconfig file to access the cluster.
4. Launch a Managed Node Group in the subnets created
   • An EKS managed node group is an autoscaling group and associated EC2 instances, managed by AWS for an EKS cluster.
   • Create an IAM role for the EC2s with the necessary permissions.
   • Configure the Node Group.
Demo Setup

[Diagram: in Region us-west-2, a VPC contains an EC2 instance used to manage the Kubernetes cluster (running awscli and kubectl, which needs kubeconfig), plus the Kubernetes Control Plane and EC2 worker nodes running Pods.]
awscli, eksctl, kubectl

• Need to install:
  • awscli – CLI tools for working with AWS services, including AWS EKS
  • kubectl – a command-line tool for working with Kubernetes clusters

• https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html
Install the aws cli on the EC2 Created Before

Go to "Install the AWS CLI":
https://docs.aws.amazon.com/eks/latest/userguide/getting-started-console.html
Since we are using Ubuntu, use the Linux steps to install AWS CLI v2. There are more steps in this slide than in the instructions.

# Install the AWS CLI
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
sudo apt-get update
sudo apt-get install unzip
unzip awscliv2.zip
sudo ./aws/install
aws --version
aws configure
Complete the AWS CLI v2 Installation and Configure

• Don't forget to run aws configure and enter the Access Key ID and Secret Access Key you got when you created the IAM User (if you use a regular AWS account).
Install kubectl on Linux

https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html

curl -o kubectl https://amazon-eks.s3.us-west-2.amazonaws.com/1.21.2/2021-07-05/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin
kubectl version --short --client
Kubeconfig
kubectl

• kubectl is the command-line tool that controls Kubernetes clusters.
• Syntax: kubectl [command] [TYPE] [NAME] [flags]
• https://kubernetes.io/docs/reference/kubectl/overview/
• https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands
• https://kubernetes.io/docs/reference/kubectl/cheatsheet/
• Note: in the sandbox, you need to put sudo in front of kubectl commands.
kubeconfig file

• A kubeconfig file is a file used to configure access to Kubernetes when used in conjunction with
the kubectl command line tool (or other clients). It contains information about clusters, users,
namespaces, and authentication mechanisms.
• The actual file name is config, not kubeconfig (kubeconfig is the generic name for this kind of file).
• A YAML file.
• The kubectl command-line tool uses kubeconfig files to find the information it needs to choose a
cluster and communicate with the API server of a cluster.
• By default, the resulting configuration file (config) is created at the kubeconfig path (.kube/) in
the home directory
• kubectl looks for a file named config in that directory.
• Kubeconfig file can contain multiple clusters or a file for each cluster.
• https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/

73
Create kubeconfig file
The kubectl command-line tool uses kubeconfig files to find the information it needs to
choose a cluster and communicate with the API server of a cluster.
You need to have kubeconfig file (.kube/config) to use kubectl.

[Diagram: on the EC2 instance, kubectl reads the config file (.kube/config) to communicate with the Kubernetes clusters]

74
Kubeconfig created

1. Clusters: information about cluster(s)
2. Users: information about users
3. Contexts: mapping clusters and users (who has access to which clusters)

https://kubernetes.io/docs/reference/config-api/kubeconfig.v1/

75
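
The three sections above can be sketched as a minimal kubeconfig; the cluster name, server URL, certificate data, and token below are hypothetical placeholders, not values from the demo (EKS normally fills the user entry with an exec authentication plugin):

```yaml
apiVersion: v1
kind: Config
clusters:
- name: my-eks-cluster        # hypothetical cluster name
  cluster:
    server: https://EXAMPLE.gr7.us-west-2.eks.amazonaws.com  # API server endpoint
    certificate-authority-data: BASE64-ENCODED-CA-CERT       # placeholder
users:
- name: my-user               # hypothetical user entry
  user:
    token: PLACEHOLDER-TOKEN  # EKS normally uses an exec plugin here instead
contexts:
- name: my-context            # a context binds a user to a cluster
  context:
    cluster: my-eks-cluster
    user: my-user
current-context: my-context   # the context kubectl uses by default
```

kubectl reads this file from .kube/config in the home directory unless told otherwise.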
Test if Config works

• If the command works, then the config (kubeconfig) is fine

76
YAML
• YAML = YAML Ain’t Markup Language
• a human-readable data serialization standard (to transfer data) that can be used in conjunction with
programming languages and is often used to write configuration files.
• object-based data format
• Key-value pairs (Hash) – string, Boolean, int, float, list, date and time (ISO 8601)
• intended to be read and written in streams
• Document extension: .yml or .yaml (.yaml is preferred)
• use cases:
• configuration files,
• messages between applications, and
• saving application state
• Editor (there are others) – use this to check https://onlineyamltools.com/edit-yaml
• https://yaml.org/
• https://onlineyamltools.com/highlight-yaml
• https://yaml.org/spec/1.2/spec.html

77
YAML Syntax

https://yaml.org/spec/1.2/spec.html
YAML’s block collections use indentation for scope and begin each entry on its own line.
▪ Don’t use tabs – use spaces (2 is suggested; indentation must be consistent)
▪ Entries must align
Mappings use a colon and space (“: ”) to mark each key: value pair (there must be a space after the colon).
Block sequences indicate each entry with a dash and space (“- ”).
Comments begin with “#”.

78
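
A small illustration of these syntax rules (the keys and values here are made up for the example, not from any Kubernetes manifest):

```yaml
# Comments begin with "#"
server:              # a mapping: key, colon, space, then value or nested block
  host: example.com  # nested mapping, indented with spaces (never tabs)
  port: 8080         # integer value
  tls: true          # boolean value
regions:             # a block sequence: each entry starts with "- "
  - us-west-2
  - us-east-1
```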
YAML
• Use YAML to create Kubernetes Objects.
• Fields (details are different per object):
• apiVersion - Which version of the Kubernetes API you're using to create this object
• kind - What kind of object you want to create (Deployment, Service, POD, etc.)
• metadata - Data that helps uniquely identify the object, including a name string, UID, and optional
namespace
• spec - What state you desire for the object
• Deployment:
• https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/#deploymentspec-v1-
apps
• Service:
• https://kubernetes.io/docs/concepts/services-networking/service/
• https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/

79
YAML for Kubernetes
Kubernetes objects are represented in the Kubernetes API and can be expressed in .yaml format.

Required Fields
In the .yaml file for the Kubernetes object you want to create, you'll need to set values for the following fields:
• apiVersion - Which version of the Kubernetes API you're using to create this object
• kind - What kind of object you want to create (Deployment, Service)
• metadata - Data that helps uniquely identify the object, including a name string, UID, and optional namespace
• spec - What state you desire for the object

The format of the object spec is different for every Kubernetes object and contains nested fields specific to that object. The Kubernetes API Reference can help you find the spec format for all of the objects you can create using Kubernetes. For example, the spec format for a Pod can be found in PodSpec v1 core, and the spec format for a Deployment can be found in DeploymentSpec v1 apps.
https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/
https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/
80
Kubectl with JSON and YAML

https://jamesdefabia.github.io/docs/user-guide/kubectl/kubectl_create/

Although both JSON and YAML formats are accepted, we will use YAML in this class as it is best practice.

81
SOME JSON YAML DIFFERENCES

YAML
• Better for configuration
• Better human-readability
• YAML is a superset of JSON: a YAML parser can parse JSON (different format, ex: JSON uses {})
• Features include:
  • the ability to self reference
  • support for complex datatypes
  • embedded block literals
  • comments

JSON
• Better as a serialization format or for serving up data for APIs
• More explicit and strict than YAML
• Popular format for transmitting data over HTTP
82
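
The same data in both formats, to illustrate the difference (the example data is made up):

```yaml
# YAML block style
app:
  name: demo
  replicas: 2
# The equivalent JSON, which is also valid YAML since YAML is a superset:
# {"app": {"name": "demo", "replicas": 2}}
```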
Comparison Example

https://www.json2yaml.com/convert-yaml-to-json
83
Deployment and Service YAML (for copy and paste) – spaces are important – check with a YAML tool

Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mynginxdeply
spec:
  replicas: 2
  selector:
    matchLabels:
      app: mynginxpod
  template:
    metadata:
      labels:
        app: mynginxpod
    spec:
      containers:
      - name: mynginx
        image: public.ecr.aws/n3b9k8l8/budiman-nginx
        ports:
        - containerPort: 80

Service:

apiVersion: v1
kind: Service
metadata:
  name: mynginxserv
spec:
  selector:
    app: mynginxpod
  ports:
  - protocol: TCP
    port: 80
  type: LoadBalancer

84
Note about copy and paste
• YAML is very sensitive with characters, it does not like tabs.
• If you copy and paste from this slide to a word document, sometimes
it copies characters it doesn’t like.
• Check https://onlineyamltools.com/edit-yaml

85
Deployment and Service YAML – explanation

Deployment:

apiVersion: apps/v1
kind: Deployment              # Object is a Deployment
metadata:
  name: mynginxdeply          # Deployment name
spec:                         # Specification of the deployment
  replicas: 2                 # number of PODs in this deployment
  selector:                   # the POD to select
    matchLabels:              # match the POD with this label
      app: mynginxpod
  template:                   # POD template
    metadata:
      labels:                 # POD label, must match the matchLabels
        app: mynginxpod
    spec:
      containers:             # container spec; the entries below are a sequence
      - name: mynginx
        image: public.ecr.aws/n3b9k8l8/budiman-nginx   # in this case the URI of the image in ECR
        ports:
        - containerPort: 80

Service:

apiVersion: v1
kind: Service                 # Object is a Service
metadata:
  name: mynginxserv           # Name of the service
spec:                         # Service spec
  selector:
    app: mynginxpod           # POD label, must match the POD label in the deployment
  ports:
  - protocol: TCP
    port: 80
  type: LoadBalancer          # external

86
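
As a side note, the Deployment and Service can also live in a single file separated by a --- line, so one kubectl apply -f creates both; this combined form is a sketch of that convention, not the exact file used in the demo:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mynginxdeply
spec:
  replicas: 2
  selector:
    matchLabels:
      app: mynginxpod
  template:
    metadata:
      labels:
        app: mynginxpod
    spec:
      containers:
      - name: mynginx
        image: public.ecr.aws/n3b9k8l8/budiman-nginx
        ports:
        - containerPort: 80
---
# "---" starts a new YAML document: the Service begins here
apiVersion: v1
kind: Service
metadata:
  name: mynginxserv
spec:
  selector:
    app: mynginxpod
  ports:
  - protocol: TCP
    port: 80
  type: LoadBalancer
```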
Kubernetes Cheat Sheet

https://intellipaat.com/mediaFiles/2019/03/Kubernetes-Cheat-Sheet.jpg

87
Steps to create a Kubernetes cluster in aws
academy sandbox
• Follow the steps (with some modifications) -
https://docs.aws.amazon.com/eks/latest/userguide/getting-started-console.html
• Create an EC2 in the default VPC.
• Install aws cli and copy/paste the credentials (you may have to redo this every time you restart the lab)
• Install kubectl. Kubeconfig file must be created/updated once the cluster is created.
• We have limited IAM access, we must use IAM role: labuser
• We don’t need to install eksctl since it won’t work in the sandbox
• Create VPC and subnets for the cluster.
• https://docs.aws.amazon.com/eks/latest/userguide/creating-a-vpc.html
• Download the CloudFormation template that creates “Public and private subnets”.
• This VPC has two public and two private subnets (resembling eksctl).
• https://s3.us-west-2.amazonaws.com/amazon-eks/cloudformation/2020-10-29/amazon-eks-vpc-private-subnets.yaml
• Create eks cluster from the eks console.
• Create or update a kubeconfig file for your cluster.
• Create a node group from eks console.

88
Create VPC with 2 public subnets and 2
private subnets

89
After CloudFormation is finished, check
output

90
Create EKS Cluster

91
92
93
Wait until the cluster is created (could take 15
minutes or more).

94
Create node group

95
96
97
Select Public Subnets

98
Wait until the status is active

99
[Diagram: the EKS Control Plane manages a VPC spanning AZ1 and AZ2; each AZ has a Public Subnet (with worker EC2 nodes) and a Private Subnet]

Public and Private subnets are created.

Now you have a Kubernetes Cluster.
Next is to create Deployments and Services.
A Cluster can serve multiple deployments and services.

100
Create EC2 in Default VPC
• Create EC2
• Set IAM role: LabInstanceprofile
• Install aws cli (+configure) and kubectl
• Test both installations

101
102
103
Configure aws cli

104
Configure kubectl

105
106
• The sandbox limitations limit what we can do. We cannot create deployments or services.

107
The rest of Demo 1 – Demo 3 cannot be performed in the sandbox

108
Create a deployment & service (image is fetched from Docker Hub) – this is not possible in the sandbox

109
Check ELB

110
111
Describe Services

Pod IPs

112
113
114
Check the security group

115
116
117
Demo 2: Service Discovery
• Use the nginx service created in Demo 1.
• Start an alpine linux pod.
• List the environment variables to get the Cluster IP of the nginx service.
• Download the html file of the nginx service using:
  • the Cluster IP address (environment variable)
  • the service name (CoreDNS)

[Diagram: inside a Kubernetes cluster, a Client reaches Service: nginx backed by Deployment PODs; an Alpine POD resolves the service via CoreDNS]

118
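
Instead of kubectl run, the alpine pod could also be declared as a manifest; this is a sketch with an assumed pod name, not the exact command used in the demo:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: alpine-test              # assumed name, not from the demo
spec:
  containers:
  - name: alpine
    image: alpine                # official alpine image from Docker Hub
    command: ["sleep", "3600"]   # keep the pod alive so we can exec into it
```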
The nginx Service: Cluster IP and Service
name

119
Start an alpine pod and do printenv

120
Service Discovery using Env Variable (Cluster
IP address)

121
Service Discovery using the service name
(CoreDNS)

122
nslookup

Type exit if you want to get out of the alpine pod.

123
Demo 3: Namespaces
• We have created a deployment and a service in the default namespace.
• Create a namespace called production.
• Create an nginx deployment and a service in the production namespace.
• Test service discovery using CoreDNS by calling the service in the default namespace from the production namespace (use wget).
• Cluster IP is a virtual IP and must be used together with a port number, hence ping will not work.

[Diagram: one Kubernetes Cluster containing two namespaces, Default and Production]

124
Create a Production namespace
• You can create a manifest file or
use the command to create a
namespace.
• Use lower case characters.

125
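
As the slide notes, the namespace can come from a manifest file instead of the command line; a minimal sketch:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: production   # use lower case characters
```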
Create nginx deployment and service in the
production namespace

126
127
128
Compare to the one in the default namespace

129
Test service discovery in another namespace

• Run an alpine pod in the production namespace.

• Do wget on the nginx service in the default namespace using the name (servicename.namespace.svc.cluster.local)

130
K8s Cluster
• Namespaces (virtual clusters)
• Deployments (manage ReplicaSets)
• Services
• Secrets (store sensitive information, ex: passwords)
• ConfigMaps (store non-sensitive configuration)
• Persistent Storage (ebs, efs)
• Databases
• Service Discovery (CoreDNS/Environment Variables)

131
[Diagram: a Cluster with CoreDNS; namespace 1 and namespace 2 each contain Services A, B, …, X backed by Pods, plus Secrets and ConfigMaps; the Pods run on Node 1, Node 2, … up to 5000 nodes; Persistent Volumes and Databases underpin the cluster]

132
Deleting resources
• Delete all objects (check all deployment and services are deleted)
• Delete production namespace
• Delete node group (eks console)
• Delete eks cluster
• Delete VPC stack

133
Delete all objects
• kubectl delete all --all
  • all: all resource types
  • --all: all objects of those types
• Delete the production namespace

134
Delete Node group (EKS console)

135
Delete eks cluster

136
End Of Lecture

137