Mar 2022-7-30AM K8s Running Notes
Installation
============
KOPS --> Kubernetes Operations is a tool with which we can create production-ready,
highly available Kubernetes clusters in a cloud such as AWS. KOPS leverages cloud
services like AWS Auto Scaling & Launch Configurations to set up the K8s master and
workers. It creates two ASGs & Launch Configurations, one for the master and one for
the workers. These Auto Scaling Groups manage the EC2 instances.
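A typical cluster-creation run looks like the sketch below; the cluster name, the S3 state-store bucket, and the zone are placeholder values, not values from these notes:

```shell
# Assumed example values -- replace the state store, name, and zone with your own.
export KOPS_STATE_STORE=s3://my-kops-state-bucket   # hypothetical S3 bucket
kops create cluster \
  --name=demo.k8s.local \
  --zones=us-east-1a \
  --node-count=2 \
  --yes
kops validate cluster   # repeat until the cluster reports ready
```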
https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#-strong-getting-started-strong-
Namespaces
==========
ex:
# Create a namespace using a command (imperative)
kubectl create ns test-ns
ex:
kubectl label namespaces test-ns team=testingteam
apiVersion: v1
kind: Namespace
metadata:
name: <NameSpaceName>
labels: # Labels are key-value pairs (metadata)
<key>: <value>
<key>: <value>
# Example
apiVersion: v1
kind: Namespace
metadata:
name: test-ns
labels:
team: testingteam
# Command to apply
kubectl apply -f <fileName>.yaml
ex:
# If we don't mention a namespace, it will be created in the default (current) namespace.
kubectl run javawebapp --image=dockerhandson/java-web-app:1 --labels app=javawebapp
--port=8080
ex:
kubectl get pods -n test-ns
POD
Replication Controller
Replica Set
DaemonSet
Deployment
StatefulSet
Service
PersistentVolume
PersistentVolumeClaim
ConfigMap
Secret ..etc
# POD Manifest
apiVersion: v1
kind: Pod
metadata:
name: <PodName>
labels:
<Key>: <value>
namespace: <nameSpaceName>
spec:
containers:
- name: <NameOfTheContainer>
image: <imageName>
ports:
- containerPort: <portOfContainer>
Example:
---
apiVersion: v1
kind: Pod
metadata:
name: mavenwebapppod
labels:
app: mavenwebapp
spec:
containers:
- name: mavenwebappcontainer
image: dockerhandson/maven-web-appliction:1
ports:
- containerPort: 8080
Service
========
apiVersion: v1
kind: Service
metadata:
name: <serviceName>
namespace: <nameSpace>
spec:
type: <ClusterIP/NodePort>
selector:
<key>: <value>
ports:
- port: <servicePort> # defaults to 80 if omitted
targetPort: <containerPort>
apiVersion: v1
kind: Pod
metadata:
name: nodejspod
namespace: test-ns
labels:
app: nodeapp
spec:
containers:
- name: nodeappcontainer
image: dockerhandson/node-app-mss:1
ports:
- containerPort: 9981
---
apiVersion: v1
kind: Service
metadata:
name: nodejsappsvc
namespace: test-ns
spec:
type: NodePort
selector:
app: nodeapp
ports:
- port: 80
targetPort: 9981
Within the cluster, one application (Pod) can access other applications (Pods) using
the service name.
POD --> A Pod is the smallest building block which we can deploy in K8s. A Pod
represents a running process. A Pod contains one or more containers; these containers
share the same network, storage, and any other specifications. Each Pod has a unique
IP address in the K8s cluster.
Pods
SingleContainerPods --> Pod has only one container.
MultiContainerPods --> Pod has more than one container; the example below runs a
Node.js container and an nginx container in the same Pod.
apiVersion: v1
kind: Pod
metadata:
name: nodeapppod
namespace: test-ns
labels:
app: nodeapp
spec:
containers:
- name: nodeapp
image: dockerhandson/nodejs-app-mss:2
ports:
- containerPort: 9981
- name: ngnixapp
image: nginx
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: nodeappsvc
namespace: test-ns
spec:
type: NodePort
selector:
app: nodeapp
ports:
- port: 80
name: nginxport
targetPort: 80
- port: 9981
name: nodeport
targetPort: 9981
What is FQDN?
Fully Qualified Domain Name.
If one Pod needs access to a service in a different namespace, we have to use the
FQDN of the service.
Syntax: <serviceName>.<namespace>.svc.cluster.local
ex: mavenwebappsvc.test-ns.svc.cluster.local
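For example, from a Pod in another namespace (the pod name below is a placeholder; any pod with curl installed works):

```shell
# Hypothetical pod name "mypod" -- replace with a real pod in your cluster.
kubectl exec -it mypod -- curl http://mavenwebappsvc.test-ns.svc.cluster.local
```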
We should not create Pods directly for deploying applications. If a Pod goes down, it
won't be rescheduled.
We have to create Pods with the help of controllers, which manage the Pod life cycle.
Controllers
===========
ReplicationController
ReplicaSet
DaemonSet
Deployment
StatefulSet
# Replication Controller
apiVersion: v1
kind: ReplicationController
metadata:
name: <replicationControllerName>
namespace: <nameSpaceName>
spec:
replicas: <noOfReplicas>
selector:
<key>: <value>
template: # POD Template
metadata:
name: <PODName>
labels:
<key>: <value>
spec:
containers:
- name: <nameOfTheContainer>
image: <imageName>
ports:
- containerPort: <containerPort>
Example:
========
apiVersion: v1
kind: ReplicationController
metadata:
name: javawebapprc
namespace: test-ns
spec:
replicas: 1
selector:
app: javawebapp
template:
metadata:
name: javawebapppod
labels:
app: javawebapp
spec:
containers:
- name: javawebappcontainer
image: dockerhandson/java-web-application:1
ports:
- containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
name: javawebappsvc
namespace: test-ns
spec:
type: NodePort
selector:
app: javawebapp
ports:
- port: 80
targetPort: 8080
# Another Appplication
apiVersion: v1
kind: ReplicationController
metadata:
name: pythonrc
spec:
replicas: 1
selector:
app: pythonapp
template: # Pod template
metadata:
name: pythonapppod
labels:
app: pythonapp
spec:
containers:
- name: pythonappcontainer
image: dockerhandson/python-app:1
ports:
- containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
name: pythonsvc
spec:
type: NodePort
selector:
app: pythonapp
ports:
- port: 80
targetPort: 5000
# Another Appplication
apiVersion: v1
kind: ReplicationController
metadata:
namespace: test-ns
name: mavenwebrc
spec:
replicas: 1
template:
metadata:
name: mavenwebapppod
labels:
app: mavenwebapp
spec:
containers:
- name: mavenwebappcontainer
image: dockerhandson/maven-web-application:1
ports:
- containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
name: mavenwebappsvc
namespace: test-ns
spec:
type: NodePort
selector:
app: mavenwebapp
ports:
- port: 80
targetPort: 8080
ReplicaSet:
It's the next generation of ReplicationController. Both manage pod replicas. The
only difference as of now is selector support.
RS --> Supports equality-based selectors and also set-based selectors.
Set Based
key in (value1,value2,value3)
key notin (value1)
selector:
matchLabels: # Equality Based
key: value
matchExpressions: # Set Based
- key: app
operator: In
values:
- javawebapp
- javawebapplication
# Manifest File RS
apiVersion: apps/v1
kind: ReplicaSet
metadata:
name: <RSName>
spec:
replicas: <noOfPODReplicas>
selector: # To Match POD Labels.
matchLabels: # Equality Based Selector
<key>: <value>
matchExpressions: # Set Based Selector
- key: <key>
operator: <In/NotIn>
values:
- <value1>
- <value2>
template:
metadata:
name: <PODName>
labels:
<key>: <value>
spec:
containers:
- name: <nameOfTheContainer>
image: <imageName>
ports:
- containerPort: <containerPort>
Example:
apiVersion: apps/v1
kind: ReplicaSet
metadata:
name: javawebapprs
spec:
replicas: 1
selector:
matchLabels:
app: javawebapp
template:
metadata:
name: javawebapppod
labels:
app: javawebapp
spec:
containers:
- image: dockerhandson/java-web-app:1
name: javawebappcontainer
ports:
- containerPort: 8080
kubectl get rs
kubectl get rs -n <namespace>
kubectl get all
kubectl scale rs <rsName> --replicas <noOfReplicas>
Create will create an object if it's not already created. Apply will perform a
create if the object doesn't exist yet; if it already exists, it will update it.
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: <DSName>
spec:
selector: # To Match POD Labels.
matchLabels: # Equality Based Selector
<key>: <value>
matchExpressions: # Set Based Selector
- key: <key>
operator: <In/NotIn>
values:
- <value1>
- <value2>
template:
metadata:
name: <PODName>
labels:
<key>: <value>
spec:
containers:
- name: <nameOfTheContainer>
image: <imageName>
ports:
- containerPort: <containerPort>
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: nginxds
namespace: test-ns
spec:
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx
ports:
- containerPort: 80
kubectl get ds
kubectl get ds -n <namespace>
kubectl get all
# Deployment ReCreate
apiVersion: apps/v1
kind: Deployment
metadata:
name: javawebappdeployment
namespace: test-ns
spec:
replicas: 2
selector:
matchLabels:
app: javawebapp
strategy:
type: Recreate
template:
metadata:
name: javawebapppod
labels:
app: javawebapp
spec:
containers:
- name: javawebappcontainer
image: dockerhandson/java-web-app:1
ports:
- containerPort: 8080
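The usual commands for inspecting and rolling back a Deployment's rollout (the names below match the example above):

```shell
kubectl rollout status deployment javawebappdeployment -n test-ns
kubectl rollout history deployment javawebappdeployment -n test-ns
# Roll back to the previous revision
kubectl rollout undo deployment javawebappdeployment -n test-ns
```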
POD AutoScaler
==============
What is the difference b/w Kubernetes autoscaling (Pod autoscaling) & AWS autoscaling?
Pod autoscaling --> Kubernetes Pod autoscaling will make sure you have a minimum
number of pod replicas available at any time, and based on the observed CPU/memory
utilization of the pods it can scale Pods. HPA will scale pod replicas of a
Deployment/ReplicaSet/ReplicationController up or down based on the observed CPU &
memory utilization against the specified target.
AWS autoscaling --> It will make sure you have enough nodes (servers). It always
maintains a minimum number of nodes, and based on the observed CPU/memory
utilization of a node it can scale nodes.
Note: Deploy the metrics server as a K8s addon, which will fetch metrics. Follow the
below link to deploy the metrics server.
https://github.com/MithunTechnologiesDevOps/metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: hpadeployment
spec:
replicas: 2
selector:
matchLabels:
name: hpapod
template:
metadata:
labels:
name: hpapod
spec:
containers:
- name: hpacontainer
image: k8s.gcr.io/hpa-example
ports:
- name: http
containerPort: 80
resources:
requests:
cpu: "100m"
memory: "64Mi"
limits:
cpu: "100m"
memory: "256Mi"
---
apiVersion: v1
kind: Service
metadata:
name: hpaclusterservice
labels:
name: hpaservice
spec:
ports:
- port: 80
targetPort: 80
selector:
name: hpapod
type: NodePort
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: hpadeploymentautoscaler
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: hpadeployment
minReplicas: 2
maxReplicas: 5
metrics:
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: 40
- type: Resource
resource:
name: memory
target:
type: Utilization
averageUtilization: 40
# Create a temp Pod interactively using the below command and increase the load on
the demo app by accessing the service.
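A common way to generate that load, following the upstream HPA walkthrough pattern (the target URL uses the hpaclusterservice Service defined above; adjust it to your own service):

```shell
kubectl run -i --tty load-generator --rm --image=busybox:1.28 --restart=Never \
  -- /bin/sh -c "while sleep 0.01; do wget -q -O- http://hpaclusterservice; done"
# In another terminal, watch the autoscaler react:
kubectl get hpa -w
```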
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: javawebappdeploymenthpa
namespace: test-ns
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: javawebappdeployment
minReplicas: 2
maxReplicas: 5
metrics:
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: 80
- type: Resource
resource:
name: memory
target:
type: Utilization
averageUtilization: 85
---
apiVersion: v1
kind: Service
metadata:
name: javawebappsvc
namespace: test-ns
spec:
type: NodePort
selector:
app: javawebapp
ports:
- port: 80
targetPort: 8080
# Spring App & MongoDB as Pods without volumes
apiVersion: apps/v1
kind: Deployment
metadata:
name: springapp
namespace: test-ns
spec:
replicas: 2
selector:
matchLabels:
app: springapp
template:
metadata:
labels:
app: springapp
spec:
containers:
- name: springappcontainer
image: dockerhandson/spring-boot-mongo:1
resources:
requests:
cpu: 200m
memory: 256Mi
limits:
memory: "512Mi"
cpu: "500m"
ports:
- containerPort: 8080
env:
- name: MONGO_DB_HOSTNAME
value: mongosvc
- name: MONGO_DB_USERNAME
value: devdb
- name: MONGO_DB_PASSWORD
value: devdb@123
---
apiVersion: v1
kind: Service
metadata:
name: springappsvc
namespace: test-ns
spec:
type: NodePort
selector:
app: springapp
ports:
- port: 80
targetPort: 8080
---
apiVersion: apps/v1
kind: ReplicaSet
metadata:
name: mongodb
namespace: test-ns
spec:
selector:
matchLabels:
app: mongo
template:
metadata:
name: myapp
labels:
app: mongo
spec:
containers:
- name: mongodbcontainer
image: mongo
ports:
- containerPort: 27017
env:
- name: MONGO_INITDB_ROOT_USERNAME
value: devdb
- name: MONGO_INITDB_ROOT_PASSWORD
value: devdb@123
---
apiVersion: v1
kind: Service
metadata:
name: mongosvc
namespace: test-ns
spec:
type: ClusterIP
selector:
app: mongo
ports:
- port: 27017
targetPort: 27017
apiVersion: apps/v1
kind: Deployment
metadata:
name: springapp
namespace: test-ns
spec:
replicas: 2
selector:
matchLabels:
app: springapp
template:
metadata:
labels:
app: springapp
spec:
containers:
- name: springappcontainer
image: dockerhandson/spring-boot-mongo:1
resources:
requests:
cpu: 200m
memory: 256Mi
limits:
memory: "512Mi"
cpu: "500m"
ports:
- containerPort: 8080
env:
- name: MONGO_DB_HOSTNAME
value: mongosvc
- name: MONGO_DB_USERNAME
value: devdb
- name: MONGO_DB_PASSWORD
value: devdb@123
---
apiVersion: v1
kind: Service
metadata:
name: springappsvc
namespace: test-ns
spec:
type: NodePort
selector:
app: springapp
ports:
- port: 80
targetPort: 8080
---
apiVersion: apps/v1
kind: ReplicaSet
metadata:
name: mongodb
namespace: test-ns
spec:
selector:
matchLabels:
app: mongo
template:
metadata:
name: myapp
labels:
app: mongo
spec:
containers:
- name: mongodbcontainer
image: mongo
ports:
- containerPort: 27017
env:
- name: MONGO_INITDB_ROOT_USERNAME
value: devdb
- name: MONGO_INITDB_ROOT_PASSWORD
value: devdb@123
volumeMounts:
- name: mogodbhostvol
mountPath: /data/db
volumes:
- name: mogodbhostvol
hostPath:
path: /mongodata
---
apiVersion: v1
kind: Service
metadata:
name: mongosvc
namespace: test-ns
spec:
type: ClusterIP
selector:
app: mongo
ports:
- port: 27017
targetPort: 27017
Step 1:
Update the package index so we install the latest available version of the software
through the Ubuntu repositories, then install the NFS kernel server on the NFS host:
sudo apt update
sudo apt install nfs-kernel-server
Step 2: Define the share by editing the exports file:
sudo vi /etc/exports
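A typical export entry and the commands to apply it; the share path matches the NFS path used later in these notes, while the subnet is an assumed example:

```shell
# Example line for /etc/exports (subnet is a placeholder -- use your cluster subnet):
#   /mnt/nfs_share 172.31.0.0/16(rw,sync,no_subtree_check,no_root_squash)
sudo mkdir -p /mnt/nfs_share
sudo exportfs -a                          # re-export everything in /etc/exports
sudo systemctl restart nfs-kernel-server  # pick up the new share
```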
apiVersion: apps/v1
kind: Deployment
metadata:
name: springapp
namespace: test-ns
spec:
replicas: 2
selector:
matchLabels:
app: springapp
template:
metadata:
labels:
app: springapp
spec:
containers:
- name: springappcontainer
image: dockerhandson/spring-boot-mongo:1
resources:
requests:
cpu: 200m
memory: 256Mi
limits:
memory: "512Mi"
cpu: "500m"
ports:
- containerPort: 8080
env:
- name: MONGO_DB_HOSTNAME
value: mongosvc
- name: MONGO_DB_USERNAME
value: devdb
- name: MONGO_DB_PASSWORD
value: devdb@123
---
apiVersion: v1
kind: Service
metadata:
name: springappsvc
namespace: test-ns
spec:
type: NodePort
selector:
app: springapp
ports:
- port: 80
targetPort: 8080
---
apiVersion: apps/v1
kind: ReplicaSet
metadata:
name: mongodb
namespace: test-ns
spec:
selector:
matchLabels:
app: mongo
template:
metadata:
name: myapp
labels:
app: mongo
spec:
containers:
- name: mongodbcontainer
image: mongo
ports:
- containerPort: 27017
env:
- name: MONGO_INITDB_ROOT_USERNAME
value: devdb
- name: MONGO_INITDB_ROOT_PASSWORD
value: devdb@123
volumeMounts:
- name: mogodbvol
mountPath: /data/db
volumes:
- name: mogodbvol
nfs:
server: 172.31.47.141
path: /mnt/nfs_share
---
apiVersion: v1
kind: Service
metadata:
name: mongosvc
namespace: test-ns
spec:
type: ClusterIP
selector:
app: mongo
ports:
- port: 27017
targetPort: 27017
PVC
If a Pod requires access to storage (PV), it gets access using a PVC. The PVC will be
bound to a PV.
Access modes:
RWO - ReadWriteOnce
ROX - ReadOnlyMany
RWX - ReadWriteMany
Reclaim Policies
A PersistentVolume can have several different reclaim policies associated with it,
including Retain, Recycle, and Delete.
Commands
kubectl get pv
kubectl get pvc
kubectl get storageclass
kubectl describe pvc <pvcName>
kubectl describe pv <pvName>
https://github.com/MithunTechnologiesDevOps/Kubernates-Manifests/tree/master/pv-pvc
Static Volumes
1) Create PV
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-hostpath
namespace: test-ns
spec:
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/mongodata"
2) Create PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mongopvc
namespace: test-ns
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
Commands
========
kubectl get pv
kubectl get pvc
kubectl describe pv <pvName>
kubectl describe pvc <pvcName>
Note: Configure a StorageClass for dynamic volumes based on your infrastructure. Make
that one the default storage class.
NFS Provisioner
Prerequisites:
1) NFS server
2) Install the NFS client software on all K8s nodes.
https://raw.githubusercontent.com/MithunTechnologiesDevOps/Kubernates-Manifests/master/pv-pvc/nfsstorageclass.yml
$ curl https://raw.githubusercontent.com/MithunTechnologiesDevOps/Kubernates-Manifests/master/pv-pvc/nfsstorageclass.yml >> nfsstorageclass.yml
Update your NFS server IP address (you need to update the IP address in 2 places) and
the path of the NFS share, then apply.
Dynamic Volumes
https://raw.githubusercontent.com/MithunTechnologiesDevOps/Kubernates-Manifests/master/SpringBoot-Mongo-DynamicPV.yml
1) Create a PVC. (If we don't mention a storageclass name, it will use the default
storage class which is configured.) It will create a PV.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mongodb-nfs-pvc
namespace: test-ns
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 1Gi
# Complete manifest: in a single yml we define the Deployment & Service for the
Spring app, a PVC (with the NFS dynamic StorageClass), and a ReplicaSet & Service
for Mongo.
apiVersion: apps/v1
kind: Deployment
metadata:
name: springappdeployment
namespace: test-ns
spec:
replicas: 2
selector:
matchLabels:
app: springapp
template:
metadata:
name: springapppod
labels:
app: springapp
spec:
containers:
- name: springappcontainer
image: dockerhandson/spring-boot-mongo
ports:
- containerPort: 8080
env:
- name: MONGO_DB_USERNAME
value: devdb
- name: MONGO_DB_PASSWORD
value: devdb@123
- name: MONGO_DB_HOSTNAME
value: mongo
---
apiVersion: v1
kind: Service
metadata:
name: springapp
namespace: test-ns
spec:
selector:
app: springapp
ports:
- port: 80
targetPort: 8080
nodePort: 30032
type: NodePort
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mongodbpvc
namespace: test-ns
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 100Mi
---
apiVersion: apps/v1
kind: ReplicaSet
metadata:
name: mongodb
namespace: test-ns
spec:
selector:
matchLabels:
app: mongodb
template:
metadata:
name: mongodbpod
labels:
app: mongodb
spec:
volumes:
- name: pvc
persistentVolumeClaim:
claimName: mongodbpvc
containers:
- name: mongodbcontainer
image: mongo
ports:
- containerPort: 27017
env:
- name: MONGO_INITDB_ROOT_USERNAME
value: devdb
- name: MONGO_INITDB_ROOT_PASSWORD
value: devdb@123
volumeMounts:
- name: pvc
mountPath: /data/db
---
apiVersion: v1
kind: Service
metadata:
name: mongo
namespace: test-ns
spec:
type: ClusterIP
selector:
app: mongodb
ports:
- port: 27017
targetPort: 27017
We can create ConfigMaps & Secrets in the cluster using commands or using yml.
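The imperative equivalents look like this; the literal keys and values mirror the yml manifests below:

```shell
kubectl create configmap springappconfig -n test-ns \
  --from-literal=mongodbusername=devdb
kubectl create secret generic springappsecret -n test-ns \
  --from-literal=mongodbpassword=devdb@123
```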
Or Using yml
---
apiVersion: v1
kind: ConfigMap
metadata:
name: springappconfig
namespace: test-ns
data: # We can define multiple key-value pairs.
mongodbusername: devdb
Using Yml:
apiVersion: v1
kind: Secret
metadata:
name: springappsecret
namespace: test-ns
type: Opaque
stringData: # We can define multiple key value pairs.
mongodbpassword: devdb@123
apiVersion: v1
kind: ConfigMap
metadata:
name: springappconfig
namespace: test-ns
data: # We can define multiple key value pairs.
mongodbusername: proddb
Using Yml:
apiVersion: v1
kind: Secret
metadata:
name: springappsecret
namespace: test-ns
type: Opaque
stringData: # We can define multiple key value pairs.
mongodbpassword: prodb@123
apiVersion: apps/v1
kind: Deployment
metadata:
name: springappdeployment
namespace: test-ns
spec:
replicas: 2
selector:
matchLabels:
app: springapp
template:
metadata:
labels:
app: springapp
spec:
containers:
- name: springappcontainer
image: dockerhandson/spring-boot-mongo:1
env:
- name: MONGO_DB_HOSTNAME
value: mongosvc
- name: MONGO_DB_USERNAME
valueFrom:
configMapKeyRef:
name: springappconfig
key: mongodbusername
- name: MONGO_DB_PASSWORD
valueFrom:
secretKeyRef:
name: springappsecret
key: mongodbpassword
resources:
limits:
memory: "512Mi"
cpu: "500m"
requests:
cpu: "200m"
memory: "256Mi"
ports:
- containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
name: springappsvc
namespace: test-ns
spec:
type: NodePort
selector:
app: springapp
ports:
- port: 80
targetPort: 8080
---
apiVersion: apps/v1
kind: ReplicaSet
metadata:
name: mongodb
namespace: test-ns
spec:
replicas: 1
selector:
matchLabels:
app: mongo
template:
metadata:
labels:
app: mongo
spec:
containers:
- name: mongocontainer
image: mongo
ports:
- containerPort: 27017
env:
- name: MONGO_INITDB_ROOT_USERNAME
valueFrom:
configMapKeyRef:
name: springappconfig
key: mongodbusername
- name: MONGO_INITDB_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: springappsecret
key: mongodbpassword
volumeMounts:
- name: mongodbhostvol
mountPath: /data/db
volumes:
- name: mongodbhostvol
hostPath:
path: /tmp/mongo
---
apiVersion: v1
kind: Service
metadata:
name: mongosvc
namespace: test-ns
spec:
selector:
app: mongo
ports:
- port: 27017
targetPort: 27017
apiVersion: v1
kind: ConfigMap
metadata:
name: javawebappconfig
data:
tomcat-users.xml: |
<?xml version='1.0' encoding='utf-8'?>
<tomcat-users xmlns="http://tomcat.apache.org/xml"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://tomcat.apache.org/xml tomcat-
users.xsd"
version="1.0">
<user username="tomcat" password="tomcat" roles="admin-gui,manager-gui"/>
</tomcat-users>
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: javawebdeployment
spec:
replicas: 2
strategy:
type: Recreate
selector:
matchLabels:
app: javawebapp
template:
metadata:
name: javawebappod
labels:
app: javawebapp
spec:
containers:
- name: javawebappcontainer
image: dockerhandson/java-web-app:3
ports:
- containerPort: 8080
volumeMounts:
- name: tomcatusersconfig
mountPath: "/usr/local/tomcat/conf/tomcat-users.xml"
subPath: "tomcat-users.xml"
volumes:
- name: tomcatusersconfig
configMap:
name: javawebappconfig
items:
- key: "tomcat-users.xml"
path: "tomcat-users.xml"
ex:
Docker Hub: --docker-server is optional in the case of Docker Hub.
ECR # Get the ECR password using the AWS CLI and use that password below. If it's an
EKS cluster we just need to attach the ECR policies (permissions) to an IAM role and
attach that role to the EKS nodes. No need to create a secret and use it as an
imagePullSecret.
# Nexus
kubectl create secret docker-registry nexuscred --docker-server=172.31.106.247:8083
--docker-username=admin --docker-password=admin123
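The secret is then referenced from the Pod template so the kubelet can pull from the private registry; the image name below is a placeholder:

```yaml
# Fragment of a pod template spec; image name is hypothetical.
spec:
  imagePullSecrets:
  - name: nexuscred
  containers:
  - name: appcontainer
    image: 172.31.106.247:8083/myapp:1
```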
apiVersion: apps/v1
kind: Deployment
metadata:
name: javawebappdeployment
spec:
replicas: 2
revisionHistoryLimit: 10
strategy:
type: Recreate
selector:
matchLabels:
app: javawebapp
template:
metadata:
name: javawebapppod
labels:
app: javawebapp
spec:
containers:
- name: javawebappcontainer
image: dockerhandson/java-web-app:1
ports:
- containerPort: 8080
resources:
requests:
cpu: 200m
memory: 256Mi
limits:
cpu: 1
memory: 1Gi
livenessProbe:
httpGet:
path: /java-web-app
port: 8080
initialDelaySeconds: 60
periodSeconds: 30
timeoutSeconds: 5
readinessProbe:
httpGet:
path: /java-web-app
port: 8080
initialDelaySeconds: 60
periodSeconds: 10
timeoutSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
name: javawebappsvc
spec:
type: NodePort
selector:
app: javawebapp
ports:
- port: 80
targetPort: 8080
StatefulSet:
Manages the deployment and scaling of a set of Pods, and provides guarantees about
the ordering and uniqueness of these Pods.
#######MongoDB StatefulSet###########
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: mongod
namespace: test-ns
spec:
selector:
matchLabels:
app: mongod
serviceName: mongodb-service
replicas: 3
template:
metadata:
labels:
app: mongod
spec:
terminationGracePeriodSeconds: 10
containers:
- name: mongodbcontainer
image: mongo
command:
- "mongod"
- "--bind_ip"
- "0.0.0.0"
- "--replSet"
- "MainRepSet"
resources:
requests:
cpu: 200m
memory: 128Mi
limits:
cpu: 200m
memory: 256Mi
ports:
- containerPort: 27017
volumeMounts:
- name: mongodb-persistent-storage-claim
mountPath: "/data/db"
volumeClaimTemplates:
- metadata:
name: mongodb-persistent-storage-claim
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
name: mongodb-service
namespace: test-ns
spec:
clusterIP: None # Headless Service
selector:
app: mongod
ports:
- port: 27017
targetPort: 27017
# Set up the MongoDB replica set, add members, and create the administrator for
MongoDB
mongo
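Inside the mongo shell of the first member, a typical initiation looks like the sketch below. The host names follow the StatefulSet/headless-service naming above (mongod-0..2, mongodb-service, test-ns); the admin user/password match the env values the Spring app uses, but the exact commands are illustrative, not taken from these notes:

```javascript
rs.initiate({
  _id: "MainRepSet",   // must match the --replSet flag in the StatefulSet
  members: [
    { _id: 0, host: "mongod-0.mongodb-service.test-ns.svc.cluster.local:27017" },
    { _id: 1, host: "mongod-1.mongodb-service.test-ns.svc.cluster.local:27017" },
    { _id: 2, host: "mongod-2.mongodb-service.test-ns.svc.cluster.local:27017" }
  ]
});
rs.status();  // verify all members joined
db.getSiblingDB("admin").createUser({
  user: "devdb", pwd: "devdb123", roles: [{ role: "root", db: "admin" }]
});
```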
######Spring App#######
apiVersion: apps/v1
kind: Deployment
metadata:
name: springappdeployment
namespace: test-ns
spec:
replicas: 2
selector:
matchLabels:
app: springapp
template:
metadata:
name: springapppod
labels:
app: springapp
spec:
containers:
- name: springappcontainer
image: dockerhandson/spring-boot-mongo:1
ports:
- containerPort: 8080
resources:
requests:
cpu: 200m
memory: 256Mi
limits:
cpu: 300m
memory: 256Mi
env:
- name: MONGO_DB_USERNAME
value: devdb
- name: MONGO_DB_PASSWORD
value: devdb123
- name: MONGO_DB_HOSTNAME
value: mongodb-service
---
apiVersion: v1
kind: Service
metadata:
name: springappsvc
namespace: test-ns
spec:
selector:
app: springapp
ports:
- port: 80
targetPort: 8080
type: NodePort
# Node Selector
apiVersion: apps/v1
kind: Deployment
metadata:
name: javawebapp
spec:
replicas: 2
selector:
matchLabels:
app: javawebapp
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
minReadySeconds: 60
template:
metadata:
name: javawebapp
labels:
app: javawebapp
spec:
nodeSelector:
name: workerOne
containers:
- name: javawebapp
image: dockerhandson/java-web-app:3
ports:
- containerPort: 8080
---
# requiredDuringSchedulingIgnoredDuringExecution(HardRule)
apiVersion: apps/v1
kind: Deployment
metadata:
name: javawebapp
spec:
replicas: 2
selector:
matchLabels:
app: javawebapp
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
minReadySeconds: 60
template:
metadata:
name: javawebapp
labels:
app: javawebapp
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: "node"
operator: In
values:
- workerOne
containers:
- name: javawebapp
image: dockerhandson/java-web-app:3
ports:
- containerPort: 8080
---
# preferredDuringSchedulingIgnoredDuringExecution(Soft Rule)
apiVersion: apps/v1
kind: Deployment
metadata:
name: javawebapp
spec:
replicas: 2
selector:
matchLabels:
app: javawebapp
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
minReadySeconds: 60
template:
metadata:
name: javawebapp
labels:
app: javawebapp
spec:
affinity:
nodeAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 1
preference:
matchExpressions:
- key: name
operator: In
values:
- workerone
containers:
- name: javawebapp
image: dockerhandson/java-web-app:3
ports:
- containerPort: 8080
Pod Affinity
------------
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginxdeployment
spec:
replicas: 2
selector:
matchLabels:
app: nginx
template:
metadata:
name: nginxpod
labels:
app: nginx
spec:
containers:
- name: nginxcontainer
image: nginx
ports:
- containerPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: javawebappdeployment
spec:
replicas: 1
selector:
matchLabels:
app: javawebapp
strategy:
type: Recreate
template:
metadata:
name: javawebappod
labels:
app: javawebapp
spec:
affinity:
podAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- nginx
topologyKey: "kubernetes.io/hostname"
containers:
- name: javawebappcontainer
image: dockerhandson/java-web-app:4
ports:
- containerPort: 8080
Pod AntiAffinity
----------------
apiVersion: apps/v1
kind: Deployment
metadata:
name: javawebappdeployment
spec:
replicas: 2
strategy:
type: Recreate
selector:
matchLabels:
app: javawebapp
template:
metadata:
name: javawebapppod
labels:
app: javawebapp
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- nginx
topologyKey: "kubernetes.io/hostname"
containers:
- name: javawebapp
image: dockerhandson/java-web-app:3
ports:
- containerPort: 8080
resources:
requests:
cpu: 100m
memory: 128Mi
limits:
cpu: 1
memory: 1Gi
---
apiVersion: v1
kind: Service
metadata:
name: javawebappsvc
spec:
type: NodePort
selector:
app: javawebapp
ports:
- port: 80
targetPort: 8080
# Taint a node
kubectl taint nodes <nodeId/Name> <key>=<value>:<effect>
Example:
=======
kubectl taint nodes ip-172-31-34-69 node=HatesPods:NoSchedule
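A Pod will only be scheduled onto that tainted node if its spec carries a matching toleration, e.g.:

```yaml
# Fragment of a pod template spec tolerating the taint above.
spec:
  tolerations:
  - key: "node"
    operator: "Equal"
    value: "HatesPods"
    effect: "NoSchedule"
```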
Change/Switch Context (Namespace)
=================================
Note: If we don't mention -n <namespace>, kubectl refers to the default namespace.
If required we can change the namespace of the current context.
# Change/Switch namespace
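The switch itself (test-ns is the namespace used throughout these notes):

```shell
kubectl config set-context --current --namespace=test-ns
# Verify which namespace the current context now points at
kubectl config view --minify | grep namespace:
```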
Resource Quotas:
===============
When several users or teams share a cluster with a fixed number of nodes, there is
a concern that one team could use more than its fair share of resources.
1) Users create resources (pods, services, etc.) in the namespace, and the quota
system tracks usage to ensure it does not exceed hard resource limits defined in a
ResourceQuota.
2) If quota is enabled in a namespace for compute resources like cpu and memory,
users must specify requests or limits for those values; otherwise, the quota system
may reject pod creation.
Hint: Use the LimitRange admission controller to force defaults for pods that make
no compute resource requirements.
apiVersion: v1
kind: Namespace
metadata:
name: test-ns
---
#Resource Quota
apiVersion: v1
kind: ResourceQuota
metadata:
name: testns-qs-quota
namespace: test-ns
spec:
hard:
requests.cpu: "1"
requests.memory: 1Gi
limits.cpu: "1"
limits.memory: 2Gi
pods: 3
---
# LimitRange
apiVersion: v1
kind: LimitRange
metadata:
name: testns-limit-range
namespace: test-ns
spec:
limits:
- default:
cpu: 200m
memory: 256Mi
defaultRequest:
cpu: 100m
memory: 128Mi
type: Container
Network policies
=================
Network policies are Kubernetes resources that control the traffic between pods
and/or network endpoints. They use labels to select pods and specify the traffic
that is directed toward those pods using rules. Most CNI plugins support the
implementation of network policies; however, if they don't and we create a
NetworkPolicy, then that resource will be ignored.
The most popular CNI plugins with network policy support are:
Weave
Calico
Cilium
Romana
In Kubernetes, pods are capable of communicating with each other and will accept
traffic from any source, by default. With NetworkPolicy we can add traffic
restrictions to any number of selected pods, while other pods in the namespace
(those that go unselected) will continue to accept traffic from anywhere. The
NetworkPolicy resource has mandatory fields such as apiVersion, kind, metadata and
spec. Its spec field contains all those settings which define network restrictions
within a given namespace.
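A minimal sketch of such a policy, assuming we only want the springapp pods (labels as used earlier in these notes) to reach mongo on port 27017; the policy name is hypothetical:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: mongo-allow-springapp   # hypothetical name
  namespace: test-ns
spec:
  podSelector:            # the pods this policy protects
    matchLabels:
      app: mongo
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:        # only pods with this label may connect
        matchLabels:
          app: springapp
    ports:
    - protocol: TCP
      port: 27017
```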