Kubernetes Must-Know Features
1. Autoscaling
• Question: What is Kubernetes autoscaling, and how does it help with workload
management?
• Answer: Kubernetes autoscaling automatically adjusts the number of pods and resources
based on real-time application demand, ensuring optimal resource utilization. It includes
Horizontal Pod Autoscaler (HPA) for scaling pods, Vertical Pod Autoscaler (VPA) for adjusting
pod resources, and Cluster Autoscaler for scaling nodes.
Example: This HPA scales the my-app deployment between 2 & 10 replicas based on CPU usage.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
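The Vertical Pod Autoscaler is not part of core Kubernetes; it is installed separately as a set of CRDs and controllers. A minimal sketch, assuming the community VPA project's autoscaling.k8s.io/v1 API is available in the cluster:
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-app-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  updatePolicy:
    updateMode: "Auto"   # VPA evicts pods and recreates them with updated resource requests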
2. Helm Charts
Helm charts package Kubernetes manifests as versioned, reusable templates whose values can be overridden per environment. A typical chart layout:
my-helm-chart/
├── charts/             # Subcharts (optional)
├── templates/          # Kubernetes manifests (YAML) with templating
│   ├── deployment.yaml
│   ├── service.yaml
│   ├── ingress.yaml
│   └── _helpers.tpl
├── values.yaml         # Default values for the templates
├── Chart.yaml          # Metadata about the chart
└── README.md           # Documentation
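A rough sketch of how values.yaml feeds the templates; the values and names below are illustrative placeholders, not taken from any real chart:
values.yaml:
replicaCount: 2
image:
  repository: nginx
  tag: "1.25"
templates/deployment.yaml (excerpt):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-web
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Release.Name }}-web
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}-web
    spec:
      containers:
        - name: web
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
Running helm install my-release ./my-helm-chart renders these templates with the values; helm upgrade and helm rollback then manage the release over time.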
3. Network Policies
Example: Only frontend pods can communicate with backend pods on port 8080.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
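Network policies are additive, so a common pattern is to start from a default-deny policy in the namespace and then allow specific traffic such as the frontend-to-backend rule above. A minimal sketch:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}      # Empty selector matches every pod in the namespace
  policyTypes:
    - Ingress          # No ingress rules are listed, so all inbound traffic is denied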
4. Persistent Volumes & Persistent Volume Claims
• Question: How does Kubernetes handle persistent storage for stateful applications?
• Answer: Persistent Volumes (PVs) provide stable storage resources, while Persistent Volume Claims (PVCs) let applications request storage. This ensures that critical data remains available even if pods restart or are deleted.
PV:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: manual   # Must match the PVC's storageClassName so the claim can bind
  hostPath:
    path: "/mnt/data"
PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi             # Requesting 5Gi storage
  storageClassName: manual
Pod mounting the PVC:
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-pvc
spec:
  containers:
    - name: my-app
      image: nginx
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: storage-volume
  volumes:
    - name: storage-volume
      persistentVolumeClaim:
        claimName: my-pvc
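The PV/PVC pair above uses static provisioning. For dynamic provisioning, a StorageClass lets PVCs get volumes created on demand. A sketch in which the class name is made up and the provisioner is an assumption that must match whatever CSI driver the cluster actually runs:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd                  # Hypothetical class name
provisioner: ebs.csi.aws.com      # Assumed CSI driver (AWS EBS); replace with the cluster's driver
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
A PVC that sets storageClassName: fast-ssd would then get a volume provisioned automatically instead of binding to a pre-created PV.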
5. Ingress Controllers
Example: Route requests for myapp.example.com to different backend Services based on path.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: api-service
                port:
                  number: 80
          - path: /web
            pathType: Prefix
            backend:
              service:
                name: web-service
                port:
                  number: 80
Ingress with TLS termination:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tls-ingress
spec:
  tls:
    - hosts:
        - secure.example.com
      secretName: my-tls-secret
  rules:
    - host: secure.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: secure-service
                port:
                  number: 443
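The my-tls-secret referenced above must exist in the same namespace as the Ingress and hold the certificate and key. A sketch of its shape (the data values are placeholders, not real base64-encoded material):
apiVersion: v1
kind: Secret
metadata:
  name: my-tls-secret
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>   # placeholder
  tls.key: <base64-encoded private key>   # placeholder
In practice this is usually created with kubectl create secret tls my-tls-secret --cert=path/to/tls.crt --key=path/to/tls.key, or managed by a tool such as cert-manager.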
The api-service referenced by the first Ingress:
apiVersion: v1
kind: Service
metadata:
  name: api-service
spec:
  selector:
    app: api
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
6. ConfigMaps & Secrets
ConfigMaps hold non-sensitive configuration, while Secrets store sensitive values such as passwords and API keys.
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  DATABASE_HOST: "mysql-service"
  DATABASE_PORT: "3306"
  LOG_LEVEL: "debug"
Consuming the ConfigMap as environment variables:
apiVersion: v1
kind: Pod
metadata:
  name: pod-using-configmap
spec:
  containers:
    - name: my-app
      image: nginx
      envFrom:
        - configMapRef:
            name: app-config
apiVersion: v1
kind: Secret
metadata:
  name: app-secret
type: Opaque
data:
  DATABASE_PASSWORD: bXlzZWNyZXQ=   # Base64 encoded "mysecret"
  API_KEY: c3VwZXJzZWNyZXQ=         # Base64 encoded "supersecret"
Mounting the Secret as files inside the pod:
apiVersion: v1
kind: Pod
metadata:
  name: pod-mounting-secret
spec:
  volumes:
    - name: secret-volume
      secret:
        secretName: app-secret
  containers:
    - name: my-app
      image: nginx
      volumeMounts:
        - name: secret-volume
          mountPath: /etc/secret
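Secrets can also be injected as environment variables rather than files; a minimal sketch reusing app-secret:
apiVersion: v1
kind: Pod
metadata:
  name: pod-using-secret-env
spec:
  containers:
    - name: my-app
      image: nginx
      env:
        - name: DATABASE_PASSWORD
          valueFrom:
            secretKeyRef:
              name: app-secret          # Secret defined above
              key: DATABASE_PASSWORD    # Key inside the Secret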
7. Service Mesh
A service mesh (such as Istio or Linkerd) layers traffic management, mutual TLS, and observability on top of pod-to-pod communication without changing application code.
8. Role-Based Access Control (RBAC)
Example: Grant dev-user read-only access to pods in the dev namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: dev
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list"]
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: dev
subjects:
  - kind: User
    name: dev-user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
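Roles and RoleBindings are namespaced; for cluster-wide permissions the same pattern uses ClusterRole and ClusterRoleBinding. A sketch granting read access to nodes (the ops-team group is a hypothetical subject):
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: node-reader
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: read-nodes
subjects:
  - kind: Group
    name: ops-team                     # Hypothetical group
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: node-reader
  apiGroup: rbac.authorization.k8s.io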
9. Pod Disruption Budgets (PDB)
• Question: Why are Pod Disruption Budgets essential for maintaining application availability?
• Answer: PDBs define the minimum number of pods that must remain available during voluntary disruptions such as node drains, upgrades, or maintenance. This prevents downtime and ensures continuous service availability.
• For stateful applications like databases (MySQL, MongoDB, Redis), use a PDB with StatefulSets to prevent disruption.
Example: PDB for Redis StatefulSet
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: redis-pdb
  namespace: default
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: redis
The Redis StatefulSet protected by the PDB:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis
spec:
  serviceName: "redis"
  replicas: 3
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
        - name: redis
          image: redis:latest
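The serviceName field above refers to a headless Service, which gives each Redis pod a stable DNS name (redis-0.redis, redis-1.redis, ...). A minimal sketch:
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  clusterIP: None      # Headless: DNS resolves to individual pod IPs instead of a virtual IP
  selector:
    app: redis
  ports:
    - port: 6379
      targetPort: 6379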
10. StatefulSets
StatefulSets manage stateful workloads that need stable network identities, ordered rollout, and per-pod persistent storage; the Redis StatefulSet above is a typical example.
11. Jobs
Example: A one-off Job that runs a backup task to completion.
apiVersion: batch/v1
kind: Job
metadata:
  name: backup-job
spec:
  template:
    spec:
      containers:
        - name: backup
          image: alpine
          command: ["sh", "-c", "echo 'Backing up database'"]
      restartPolicy: Never
12. CronJobs
Example: A CronJob runs a Job on a schedule, here every 5 minutes.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: scheduled-job
spec:
  schedule: "*/5 * * * *"        # Every 5 minutes
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: cron-job
              image: busybox
              command: ["sh", "-c", "echo 'Running scheduled task'"]
          restartPolicy: OnFailure
13. Kubernetes Namespaces
Creating a Namespace
apiVersion: v1
kind: Namespace
metadata:
  name: dev
Init containers run to completion before the application containers start, which makes them useful for waiting on dependencies. Here an init container blocks until the database service is reachable:
apiVersion: v1
kind: Pod
metadata:
  name: init-container-example
spec:
  initContainers:
    - name: init-db
      image: busybox
      command: ["sh", "-c", "until nc -z db-service 3306; do echo waiting for database; sleep 2; done"]
  containers:
    - name: my-app
      image: nginx
• Question: What is the purpose of Kubernetes probes, and how do they improve application
stability?
• Answer: Kubernetes uses probes to check the health of pods. Liveness probes restart failing containers, readiness probes route traffic only to healthy pods, and startup probes give slow-starting containers time to initialize before the other probes begin.
Example:
LivenessProbe: Restarts the container if it becomes unresponsive.
ReadinessProbe: Ensures the pod is ready before sending traffic.
StartupProbe: Holds off liveness and readiness checks until the application has finished starting.
apiVersion: v1
kind: Pod
metadata:
  name: probe-demo
spec:
  containers:
    - name: my-app
      image: my-app:latest
      livenessProbe:
        httpGet:
          path: /healthz
          port: 8080
        initialDelaySeconds: 5
        periodSeconds: 10
      readinessProbe:
        httpGet:
          path: /ready
          port: 8080
      startupProbe:
        httpGet:
          path: /startup
          port: 8080
        failureThreshold: 30
        periodSeconds: 5
• Question: How do Resource Quotas and Limits help manage cluster resources?
• Answer: Resource Quotas restrict overall resource consumption per namespace, while
Limits define the maximum CPU and memory that a pod or container can use, ensuring fair
resource distribution.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-quota
  namespace: dev
spec:
  hard:
    pods: "10"              # Max 10 pods in the namespace
    requests.cpu: "4"       # Max 4 CPU cores requested in total
    requests.memory: 8Gi    # Max 8Gi of memory requested in total
    limits.cpu: "8"         # Max 8 CPU cores in limits
    limits.memory: 16Gi     # Max 16Gi of memory in limits
apiVersion: v1
kind: Pod
metadata:
  name: resource-limited-pod
  namespace: dev
spec:
  containers:
    - name: my-app
      image: nginx
      resources:
        requests:
          cpu: "250m"        # Guarantees 0.25 CPU core
          memory: "256Mi"    # Guarantees 256Mi memory
        limits:
          cpu: "500m"        # Maximum 0.5 CPU core
          memory: "512Mi"    # Maximum 512Mi memory
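A LimitRange complements the ResourceQuota by giving containers that omit their own requests and limits sensible defaults; a minimal sketch for the dev namespace (the values are illustrative):
apiVersion: v1
kind: LimitRange
metadata:
  name: dev-limits
  namespace: dev
spec:
  limits:
    - type: Container
      defaultRequest:        # Applied when a container sets no requests
        cpu: "100m"
        memory: "128Mi"
      default:               # Applied when a container sets no limits
        cpu: "500m"
        memory: "512Mi"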
• Question: What are the different types of Kubernetes services, and when should they be
used?
• Answer: Kubernetes offers four service types (the non-default types are sketched after this list):
o ClusterIP (default) for internal communication.
o NodePort for exposing services on a static port of each node.
o LoadBalancer for exposing services externally via cloud provider load balancers.
o ExternalName for aliasing services outside the cluster.
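The api-service shown in the Ingress section is a ClusterIP Service. Sketches of a NodePort and an ExternalName Service follow; the node port number and external hostname are placeholders:
apiVersion: v1
kind: Service
metadata:
  name: web-nodeport
spec:
  type: NodePort
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30080        # Must fall in the cluster's NodePort range (30000-32767 by default)
apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  type: ExternalName
  externalName: db.example.com   # Placeholder hostname outside the cluster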
• Question: How do affinity and anti-affinity rules improve pod scheduling in Kubernetes?
• Answer: Node affinity constrains which nodes a pod can run on based on node labels, while pod affinity and anti-affinity place pods near to, or away from, other pods based on pod labels. This helps optimize resource utilization and improve fault tolerance; for example, spreading replicas of the same application across nodes avoids a single point of failure. The Deployment below uses pod affinity to co-locate web pods with backend pods.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: backend
              topologyKey: "kubernetes.io/hostname"
      containers:
        - name: web-app
          image: nginx
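To spread replicas of the same application across nodes, as described in the answer above, pod anti-affinity inverts the rule; a minimal sketch:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app-spread
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: web
              topologyKey: "kubernetes.io/hostname"   # No two web pods share a node
      containers:
        - name: web-app
          image: nginx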
• Question: What is the difference between Horizontal and Vertical Scaling in Kubernetes?
• Answer:
o Horizontal Scaling (HPA): Increases or decreases the number of pods based on
demand.
o Vertical Scaling (VPA): Adjusts pod CPU/memory resources dynamically.
• Question: What are Sidecar Containers, and how are they used in Kubernetes?
• Answer: Sidecar Containers run alongside the main application container in a pod,
performing auxiliary functions like logging, monitoring, or security.
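A minimal sketch of a logging sidecar that tails a log file the main container writes to a shared volume; the image names and paths are illustrative:
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  volumes:
    - name: logs
      emptyDir: {}           # Shared scratch volume for both containers
  containers:
    - name: my-app
      image: my-app:latest   # Main application, assumed to write /var/log/app/app.log
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
    - name: log-tailer       # Sidecar streaming the shared log file to stdout
      image: busybox
      command: ["sh", "-c", "touch /var/log/app/app.log; tail -f /var/log/app/app.log"]
      volumeMounts:
        - name: logs
          mountPath: /var/log/app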
• Question: What are effective strategies to optimize Kubernetes cost in cloud environments?
• Answer:
o Use Spot Instances for non-critical workloads (see the sketch after this list).
o Enable Cluster Autoscaler to scale down unused nodes.
o Implement Resource Requests & Limits to prevent over-provisioning.
o Use Vertical Pod Autoscaler (VPA) to optimize pod resource allocation dynamically.
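For the spot-instance point above, non-critical workloads can be steered onto a spot node pool with a node selector and toleration. A sketch in which the node label and taint keys are assumptions that depend on the cloud provider and how the node pool is configured:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: batch-worker                  # Hypothetical non-critical workload
spec:
  replicas: 2
  selector:
    matchLabels:
      app: batch-worker
  template:
    metadata:
      labels:
        app: batch-worker
    spec:
      nodeSelector:
        node-lifecycle: spot          # Assumed label set on spot nodes
      tolerations:
        - key: "spot"                 # Assumed taint applied to spot nodes
          operator: "Exists"
          effect: "NoSchedule"
      containers:
        - name: worker
          image: busybox
          command: ["sh", "-c", "echo processing non-critical batch work; sleep 3600"]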
Example: a Deployment whose containers declare resource requests and limits, so the scheduler can bin-pack nodes efficiently and the Cluster Autoscaler can scale down unused capacity.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:latest
          ports:
            - containerPort: 8000
          resources:
            requests:
              cpu: "250m"
              memory: "256Mi"
            limits:
              cpu: "500m"
              memory: "512Mi"