05 Devops

Table of Contents

1. DevOps - Tools used?
2. DevOps - Phases and CI vs CD
3. Jenkins - CI/CD - Jenkins file
4. Jenkins - CI/CD - Jenkinsfile syntax (declarative vs scripted)
5. Docker - Virtualization vs. Containerization
6. Docker - OS level virtualization vs types of Hypervisors
7. Docker - Architecture
8. Docker - How to generate images
9. Docker - CMD vs entrypoint
10. Docker - Docker-compose?
11. Podman - Docker vs Podman architecture
12. Podman - Benefits (3 reasons)
13. Docker - Docker vs containerd
14. K8S - Theory - Architecture
15. K8S - Theory - Controller Objects
16. K8S - Theory - Deployment Strategy
17. K8S - Theory - Service
18. K8S - Theory - Volumes/Storage
19. K8S - Theory - Security
20. K8S - Theory - Monitoring
21. K8S - Multi container pods?
22. K8S - How to upgrade K8S cluster?
23. K8S - Why kubelet on master node, if its worker node component?
24. K8S - What is static pods?
25. K8S - Which tool used to create K8S cluster?
26. K8S - How pod is assigned to a node?
27. K8S - 3 main parts of any k8s config?
28. K8S - Which part of yaml is blueprint for pod creation?
29. K8S - How to do workload processing?
30. K8S - Why to use kubernetes?
31. K8S - Replicaset vs Deployment - which is higher?
32. K8S - Theory - Example of Deployment yaml
33. K8S - What is maxSurge, minReadySeconds?
34. K8S - Which properties are available in Deployment but not ReplicaSet?
35. K8S - Service, how does service know which pod to manage?
36. K8S - Imperative vs Declarative approach in managing resources?
37. K8S - helm charts?
38. K8S - Scaling/Availability
39. K8S - Scaling - HPA
40. K8S - Scaling - Deploy Metrics Server
41. K8S - Disaster Recovery
42. Helm - What is helm?
43. Helm - Structure
44. Terraform - Workflow in terraform
45. Terraform - Where terraform stores its state?
46. Terraform - How can we retrieve information from external systems e.g. AWS latest AMI?

1. DevOps - Tools used?


 Git: repository
 Jenkins: CI/CD
 JUnit: automation testing
 Docker: environment
 K8s or Spring Cloud: orchestration
 IaC: Terraform, Ansible

2. DevOps - Phases and CI vs CD


 Continuous Delivery
 Code is automatically tested, but deployment to production is a manual decision
 Continuous Deployment
 Code is not only automatically tested, but also deployed automatically
 Jenkins = CI/CD tool to implement DevOps phases
 Open source, most popular CI/CD tool
 Numerous plugins available
 Customization of pipeline
 Integration with multiple platforms, any source, Cloud agnostic
 Security, has RBAC
 Huge, vibrant, active community
3. Jenkins - CI/CD - Jenkins file
 Jenkins
 CI/CD tool
 Post code commit, defines what steps to follow: build, test, deploy, etc.
 Jenkinsfile
 PaC (Pipeline as Code), like IaC
 Consists of all stages
 Commit to GitHub, create a webhook on GitHub; whenever a commit happens, it triggers the Jenkins job on the server

 Replicate Jenkins in multiple environment


 Defines structure of pipeline - stages/steps/tasks.
 Example - 3 microservices (MS): should there be 1 Jenkinsfile or 3 Jenkinsfiles?
 Easy to have 3 Jenkinsfiles, one per microservice: easy to configure, isolation, flexibility, separate versioning; OR
 1 Jenkinsfile for 3 MS. Configure each MS in GitHub to send webhook notifications to Jenkins when changes from
any MS are pushed to GitHub. Centralized control

4. Jenkins - CI/CD - Jenkinsfile syntax (declarative vs scripted)


 Jenkinsfile can be written in either - declarative or scripted syntax
 Declarative
 High level abstraction for defining pipelines
 Create as many “stage” as required.
pipeline {
    agent any   // or specify where the pipeline should run (e.g., a Docker image, a specific node)
    stages {
        stage('Checkout') {
            steps {
                // Checkout the source code from your version control system.
                echo 'Checkout'
            }
        }
        stage('Build') {
            steps {
                // Build your application (e.g., compile code, package artifacts).
                echo 'Build'
            }
        }
        stage('Test') {
            steps {
                // Run tests (e.g., unit tests, integration tests).
                echo 'Test'
            }
        }
        stage('Deploy') {
            steps {
                // Deploy your application (e.g., to a development or production environment).
                echo 'Deploy'
            }
        }
    }
    post {
        success {
            // Actions to perform on pipeline success.
            echo 'Success'
        }
        failure {
            // Actions to perform on pipeline failure.
            echo 'Failure'
        }
    }
}
 Scripted
 Low level, flexible way to define pipelines using groovy
node {
// Define the node where the pipeline should run.
try {
// Start of the 'try' block
// Checkout the source code from your version control system.
// Build your application (e.g., compile code, package artifacts).
// Run tests (e.g., unit tests, integration tests).
// Deploy your application (e.g., to a development or production environment).
} catch (Exception e) {
// Handle exceptions and failures.
} finally {
// Cleanup or post-processing steps (e.g., artifact archiving, notifications).
}
}

5. Docker - Virtualization vs. Containerization


 Isolation
 Virtualization creates a separate OS per VM
 Containers use the underlying OS
 Resource Utilization
 Virtualization has high overhead, as each VM has a full OS; VMs have a larger memory footprint.
 Containers are resource efficient
 Portability
 Virtualization less portable
 Containers highly portable
 Speed
 Virtualization slow to start
 Containers faster to start

6. Docker - OS level virtualizations vs types of Hypervisors


 A hypervisor is a VM monitor (h/w or s/w component) that acts as an abstraction layer allowing multiple OSes to run on a single physical
machine
 Type 1 Hypervisor (Bare Metal)
 Runs directly on host hardware
 Type 2 Hypervisor (Hosted)
 Runs on a host OS
 e.g., VirtualBox, VMware
 Do containers use virtualization (host OS level virtualization)?
 NO
 Containers use OS-level virtualization, which is different from VM-based virtualization
 Containers use the host OS kernel
 Containers are isolated by control groups (cgroups) and namespaces
 Both containers and VMs use virtualization, but rather than a hypervisor, containers rely on host OS-level virtualization

7. Docker - Architecture
 Components
 Docker client
 Docker server/host
 Docker registry
 Architecture
 Client-server (C-S) architecture
 Docker client talks with the Docker daemon
 Based on images, containers are created on the Docker server/host
 Docker also includes registries (Docker Hub or a private registry) for storing and sharing images

8. Docker - How to generate images


 Dockerfile
 Dockerfile with list of instructions
 Cons
- Complex
- Why should a developer need to know the steps/instructions of a Dockerfile? (A minimal Dockerfile sketch is shown after this list.)
 Buildpacks
 By Heroku
 Build container images without Dockerfiles or other build configuration files manually.
 Automatically detect the language, framework, and dependencies of an application and use that information to
create a container image optimized for deployment.
 3 main components:
=> Detection: Buildpacks analyze the application source code to determine its language and dependencies.
This process is known as detection.
=> Build: After detection, buildpacks compile the application and its dependencies into a runnable artifact.
This process is known as the build phase.
=> Runtime: Finally, the buildpacks provide a runtime environment for the application to run in a
containerized environment.
 Google Jib
 By Google
 Open-source Java containerization tool developed by Google.
 It's designed to simplify the process of building and packaging Java applications into container images for
deployment in Docker or Kubernetes environments. Jib focuses on Java-specific optimizations, such as
incremental builds and layering, to generate smaller and more efficient container images.
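 For reference, a minimal Dockerfile sketch (illustrative base image and file paths, assuming a pre-built Java jar) - the kind of file Buildpacks and Jib let you avoid writing by hand:
# Base image providing the Java runtime (illustrative choice)
FROM eclipse-temurin:17-jre
# Copy the built artifact into the image (path is a placeholder)
COPY target/app.jar /app/app.jar
# Default command executed when the container starts
CMD ["java", "-jar", "/app/app.jar"]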

9. Docker - CMD vs entrypoint


 Both are instructions that tell Docker what to execute when the container starts
 CMD
 Default command to run when the container starts, which can be overridden at runtime
 Can be overridden by providing arguments
 If a command is provided when running the container (docker run ... command), it overrides the CMD
instruction. The main purpose of a CMD is to provide defaults for an executing container. If you specify a CMD
and also provide additional arguments to the docker run command, then the CMD will be overridden by the
arguments provided.
 Entrypoint
 Specifies the executable that should always be run when the container starts.
 Unlike CMD, the ENTRYPOINT instruction does not get overridden by the command provided when running the
container.
 If a command is provided when running the container (docker run ... command), it gets appended as arguments
to the ENTRYPOINT.
 Usecase
 ENTRYPOINT is used to define the primary purpose of the container, while CMD is used to provide default
arguments that can be easily overridden.
 ENTRYPOINT suitable for creating wrapper scripts or executables to set up the container environment before
executing the main application or for containerizing command-line tools. ENTRYPOINT provides more control
over command execution and arguments, ensuring consistency across deployments.
 Use CMD when you need flexibility in command execution; use ENTRYPOINT to enforce specific behavior.
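 A minimal sketch illustrating the difference (image name and arguments are hypothetical):
# Dockerfile
FROM alpine:3.19
ENTRYPOINT ["echo"]                 # always runs; not overridden by docker run arguments
CMD ["hello from default CMD"]      # default arguments; overridden by docker run arguments

# docker run my-image             -> prints "hello from default CMD"
# docker run my-image custom msg  -> prints "custom msg" (CMD overridden, ENTRYPOINT kept)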

10.Docker - Docker-compose?
 Tool to define and run multi-container docker applications
 One yaml file has all definitions
 One file, multiple images, all in single yaml file
 One command to start/stop everything (docker-compose up / docker-compose down)
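 A minimal sketch of a docker-compose.yml (service names and images are illustrative):
version: "3.8"
services:
  web:
    image: nginx:1.25            # front-end container
    ports:
      - "8080:80"
  db:
    image: postgres:16           # database container
    environment:
      POSTGRES_PASSWORD: example
 docker-compose up -d starts both containers; docker-compose down stops and removes them.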

11.Podman - Docker vs Podman architecture


 Docker uses C-S architecture. Daemon, server.
 Podman uses fork-exec model
 Podman has one process and has no daemon to coordinate requests between C<->S
 Podman runs in rootless mode; Docker needs root support or the user must be in the 'docker' group

12. Podman - Benefits (3 reasons)


 1. No daemon process (like dockerD)
 2. No root access
 By default installation, 'docker' group is created
 If we want to execute docker command via non root users UserA, then add UserA to 'docker' group and execute
docker run
 UserB will not be able to run docker command unless it is in 'docker' group. sudo usermod -aG docker UserB
 Root user by default added to the 'docker' group, but it is recommended to create a separate docker user
 3. Supports CRI (container runtime interface)
 K8S 1.23 onwards removed dockershim
 K8S (dockershim) -> docker
 All communication to docker was via dockershim
 But to make it more generic, K8S created CRI and implemented it. Any container runtime implementing CRI will work with K8S
 Docker itself does not implement CRI; we need a CRI implementation such as the lightweight CRI-O, and then K8S will connect to
Docker through it.
 Docker can still be used as the container runtime with K8S, but it requires a CRI like CRI-O to integrate Docker and K8S. containerd
is already included in Docker, so you can use it as the CRI without installing any additional software.
 K8S (CRI) -> CRI-O -> docker
 Podman implements CRI, hence we need not install CRI-O
 K8S (CRI) -> podman

13. Docker - Docker vs containerd


 Docker is the full container platform: CLI, REST API, image build tooling, and the dockerd daemon
 containerd is the lower-level container runtime that Docker itself uses under the hood to create, start and stop containers
 containerd implements CRI, so Kubernetes can use containerd directly, without Docker or dockershim
14.K8S - Theory - Architecture
14.K8S - Theory - Architecture
 Rise of microservices, rise of containers, and hence problem of managing containers
 Master Slave
 Master
 The master components manage the state of the cluster.
 This includes accepting client requests (describing the desired state), scheduling containers and running control
loops to drive the actual cluster state towards the desired state.
 Apiserver
- REST API supporting basic CRUD operations on API objects (such as pods, deployments, and services).
- This is the endpoint that a cluster admin communicates with, for example, using kubectl.
- The API server is stateless; instead, it uses a distributed key-value storage system (etcd) as its backend
 Etcd
- Storing all cluster state.
 Controller managers (State change detector)
- Provides control loops i.e., watch actual vs desired state and tries to move towards desired state
- Many different controllers e.g., replication controller ensures right number of replica pods are running for
each deployment.
 Scheduler
- In which node, we need to create pod? pod placement across the set of available nodes
- Striving to balance resource consumption to not place excessive load on any cluster node.
- It also takes user scheduling restrictions into account, such as (anti- )affinity rules.
 Worker
 The node components run on every cluster node.
 Container runtime
- E.g., docker, to execute containers on the node.
 Kubelet (the real slave/captain on each node)
- Executes instructions (runs pods) on the node as dictated by the control plane's API server
- The API server communicates with the Kubelet.
- The Kubelet then manages containers by starting, stopping and monitoring them as per the spec
 kube-proxy
- Agent on all worker nodes
- Networking and load balancing, interacts with MasterNode.APIServer
- Manages routing tables, network routing
- Pod-Pod communication and service access to cluster
 Who talks to whom?
 Cluster Admin
- Admins connect to Master Node/ Api server
 Regular user
- External clients connect to Worker Node/ kube proxy
15.K8S - Theory - Controller Objects
 Deployment CO (stateless apps)
 Provides declarative updates for Pods and ReplicaSet
 Blueprint for stateless app.
 In general, we don't work with Pods directly but with Deployments.
 + Easy to scale
 StatefulSet CO (stable, sticky, predictable identifiers)
 Stateful deployment
 Stable identifiers, network identity
 Ordered pod initialization
 Headless service. StatefulSet provides a headless service i.e., no need for an LB or proxy to connect to a pod. Instead,
connect directly with pods, as each pod has a stable name
 DNS records are created with the pod's hostname.
 E.g. stateful pods.
 NOTE: DB in itself is usually kept outside pod
 ReplicaSet CO
 Maintain pod count same as in config at any given time
 DaemonSet CO
 Ensures one pod of a particular workload runs on each node
 Background processes, security scans, antivirus
 E.g., fluentd logging

16.K8S - Theory - Deployment Strategy


 Recreate DS
 All ReplicaSet pods are scaled down at once
 For dev environments
 Rolling Update DS
 ReplicaSet pods are scaled down one by one
 Pods are recreated one by one
 At a given time, both v1 and v2 can be running
 NOTE: during an upgrade, a new ReplicaSet RS2 (P1v2, P2v2) is created. The old RS1 is kept as-is, which is good in case of
rollback
 Canary DS
 New version deployed to subset of users
 Expose new version to small % of users
 E.g., 99% customer traffic to v1, 1% customer traffic to v2
 Use nginx-ingress to divert traffic or adjust replicas
 Blue Green DS
 In a rolling update, we have v1 and v2 running simultaneously.
 But in BG, we create 2 environments and start deploying v2 in Green.
 Once all pods in Green are up and ready to receive traffic, we do the LB switch.

17.K8S - Theory - Service


 Provides network connectivity to group of pods
 PodA on IP1 - what if PodA dies and restarts on IP2? Should MS1 know about IP addresses? No
 Features
 Helps achieve scalability; instead of an IP, rely on the service name.
 Service discovery and routing traffic to pods
 Load balancing
 Provides internal as well as external connectivity
 Types of services
 ClusterIP
- Only internal accessibility within cluster. Default
- No external IP
 Node Port
- Allows external connectivity to outside cluster
- Exposes service on ports on a given node, e.g., PodA on IP:8081, PodB on IP:8082 etc
- Creates cluster wide port
 Load Balancer
- Allows external connectivity to outside cluster
- Single external IP for application vs IP per node of NodePort.

 Ingress
- API gateway, define routing rule to handle external traffic
 Other types
 Headless
- No IP or LB properties
- Stable identifiers
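 A minimal Service sketch (label, ports and nodePort are illustrative), here of type NodePort to allow external access:
apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  type: NodePort               # ClusterIP is the default; LoadBalancer gives a single external IP
  selector:
    app: my-app                # traffic is routed to pods carrying this label
  ports:
  - port: 80                   # port exposed inside the cluster
    targetPort: 8080           # container port
    nodePort: 30080            # cluster-wide port opened on every node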

18.K8S - Theory - Volumes/Storage


 Pods are transient, need mechanism to store data independent of pod lifecycle
 Volumes
 Separate storage from container
 Persistent Volume
 A volume separates storage from the container, but not from the pod; its lifecycle is still attached to the pod
 PV is to manage storage separately from pods
 Admins create pool of PV, users can use it
 Persistent Volume Claim
 PV created by admins
 PVC created by users who wants to access PV
 How PVC binds to PV? Static provisioning of Volume i.e., create PV, create PVC, bind both
 Storage class
 Another abstraction layer, abstracts provider
 If 100 claims, 100 PV created
 Better to provision PVs dynamically via the provisioner attribute
 SC will create PV-PVC pair dynamically. Storage class has provisioners e.g., aws, gcloud
 When PVC is created by user with SC, K8S auto creates PV.
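 A small sketch of dynamic provisioning (provisioner and size are assumptions): a StorageClass plus a PVC that references it; K8S then creates the matching PV automatically:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-storage
provisioner: kubernetes.io/aws-ebs   # cloud-specific provisioner (illustrative)
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-app-pvc
spec:
  storageClassName: fast-storage
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi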

19.K8S - Theory - Security


 Authorization on the API server
 The Node Authorizer is part of the API server i.e., control plane. Responsible for authorizing requests coming from worker
nodes to the API server.
 By default, K8S uses RBAC for authorization.
 The Node Authorizer consults cluster RBAC policies to determine whether the node is authorized
 Attribute based access control (ABAC)
 Access based on attributes assigned to users, resources; fine grained access, focus on attributes rather roles
 (-) Less abstraction, direct reliance on attributes i.e. strongly coupled with user and its attributes. Difficult to
maintain consistency
 Role based access control (RBAC)
 Instead of directly associating users with permissions, create a middle layer i.e., roles.
 Users assigned roles
 Webhook
 Manage authz externally
 Don't use inbuilt authz; instead, the API server consults an external tool
 Namespace
 Organize and isolate resources
 4 namespaces by default - kube-system, kube-public, kube-node-lease, default
 Secure images
 An image has multiple layers; avoid unnecessary libs
 Run kube-bench on nodes periodically
 kube-bench checks that the K8S deployment follows the CIS Kubernetes Benchmark
 Run containers as non-root
 Build images with a dedicated user
 Use ServiceAccounts
 Restrict communication between pods
 Encrypt communication
 Secure secret data, e.g., with Vault
 Automated backups
 Enforce network policies

20.K8S - Theory - Monitoring


 Liveness Probe
 Pod is up and running
 Readiness Probe
 Pod is up and running + ready to accept traffic
 Alternative tools
 Prometheus - collects metrics, alert generation, querying
 AWS CloudWatch
 Nagios - monitors network services, hosts, devices
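 A pod spec fragment sketching both probes (paths, port and timings are illustrative):
containers:
- name: my-app
  image: my-image:1.0
  livenessProbe:               # is the container still alive? restart it if not
    httpGet:
      path: /healthz
      port: 8080
    initialDelaySeconds: 10
    periodSeconds: 5
  readinessProbe:              # is the container ready to receive traffic?
    httpGet:
      path: /ready
      port: 8080
    initialDelaySeconds: 5
    periodSeconds: 5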
21.K8S - Multi container pods?
 Sidecar
 logging agents + main app
 Init containers
 Specialized container to run before main container starts
 E.g., pull code from repo on startup
 E.g., big file download
 Unlike a sidecar, an init container is not lifelong; it exits once its task completes
 Ephemeral containers
 Run temporarily for a special function e.g., troubleshooting
 E.g., to debug a pod when the main container has crashed
 Otherwise, Istio (which injects sidecar proxies) can be used as well.

22.K8S - How to upgrade K8S cluster?


 AWS EKS automatically does it
 Manually
 Cordon Node1 i.e., mark Node1 as unschedulable so that no new pods are scheduled on Node1
 Drain Node1 i.e., evict pods from Node1 so they are recreated on Node2
 Update Node1
 Uncordon Node1 i.e., allow new pods to be scheduled on Node1 again (see the example commands below)
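 The manual steps map to kubectl commands like the following (node name is a placeholder):
kubectl cordon node1                                             # mark unschedulable
kubectl drain node1 --ignore-daemonsets --delete-emptydir-data   # evict pods to other nodes
# ...upgrade the node (e.g., kubeadm / OS packages), then:
kubectl uncordon node1                                           # allow scheduling again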

23.K8S - Why kubelet on master node, if its worker node component?


 Kubelet is basically for managing pods
 And some pods run on the master as well, e.g., networking pods
 When kubeadm creates a cluster, it creates static pods for the etcd and apiserver components, hence we need kubelet on the master too

24.K8S - What is static pods?


 A feature of K8s allowing pods to be managed directly by the kubelet rather than the API server
 Hence, they are bound to a particular node and cannot be moved to another node
apiVersion: v1
kind: Pod
metadata:
  name: nginx-static-pod
  namespace: kube-system
spec:
  containers:
  - name: nginx          # illustrative container spec, implied by the pod name
    image: nginx
 Usually used in bootstrapping Kubernetes, with no user involvement, e.g., in the kube-system, kube-public namespaces

25.K8S - Which tool used to create K8S cluster?


 Kubeadm
 Not for prod
 Lightweight tool to setup cluster
 Need manual backup of etcd
 kubeadm cannot turn machines off when not in use; it relies on external tools
 EKS

26.K8S - How pod is assigned to a node?


 Pod requirements stored on etcd
 The Scheduler reads the metadata, does sorting and filtering, and finds a good match
 The Scheduler assigns the pod to a node. Resource quotas apply to pods.
 The Controller is the state manager and keeps an eye on current state vs desired state

27.K8S - 3 main parts of any k8s config?


 metadata
 spec
 status (auto generated)

28.K8S - Which part of yaml is blueprint for pod creation?


 Spec.template

29.K8S - How to do workload processing?


 Jobs/CronJobs
 The usual workload of pods is to serve a webapp or DB
 But some workloads require running a small task for a certain time, e.g., batch processing, computation jobs on images,
etc. Perform the task and finish (see the example below).
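 A minimal Job sketch (image and command are hypothetical) that runs a task to completion; a CronJob wraps the same template in a jobTemplate with a schedule:
apiVersion: batch/v1
kind: Job
metadata:
  name: image-processing-job
spec:
  backoffLimit: 3              # retries before the Job is marked failed
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: my-batch-image:1.0          # hypothetical batch image
        command: ["python", "process.py"]  # hypothetical task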

30.K8S - Why to use kubernetes?


 1. Self healing - 99% memory full; declarative way of desired state - replica set/deployment
 2. Container management.
 3. Native support to LB and service discovery. Each service can be used as a DNS name. K8S will resolve the
name(to IP) and ultimately forward request to the pod.
 4. Automation, quickly updates, agility e.g., Terraform, ansible etc.

31.K8S - Replicaset vs Deployment - which is higher?


 Deployment (updates and rollback)
 Deployment creates all 3 resources (deploy, rs, pod). Deployment restores RS
 ReplicaSet (self-healing, scalable, desired state). RS restores Pod
 Identified by labels
ReplicaSet:
 Controller object
 Ensures a fixed number of replica pods, but lacks deployment features
 Updates and rollbacks are not directly supported
 E.g., PodA with version v1, need 10 pods - easily done with a ReplicaSet. But what if PodA needs v2? What to do?

Deployment:
 Controller object, higher level than ReplicaSet
 Manages and controls application deployments with updates and rollbacks
 Deployments are higher-level controllers with advanced deployment features that manage ReplicaSets, providing
rolling updates, rollbacks and declarative configuration for app deployments with minimal downtime
 E.g., upgrading PodA to v2 is done using a Deployment

32.K8S - Theory - Example of Deployment yaml


 3 levels: Deployment, ReplicaSet (via the selector), and the Pod template
 Each level is tied together by labels (see the sketch below)
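 A minimal sketch (names are placeholders) showing the 3 levels tied together by labels:
apiVersion: apps/v1
kind: Deployment               # level 1: the Deployment itself
metadata:
  name: my-deployment
  labels:
    app: my-app
spec:
  replicas: 3
  selector:                    # level 2: how the generated ReplicaSet finds its pods
    matchLabels:
      app: my-app
  template:                    # level 3: blueprint for the pods
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-container
        image: my-image:1.0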
33.K8S - What is maxSurge, minReadySeconds?
 maxSurge = maximum extra pods allowed during updates.
 minReadySeconds = minimum time that each new v2 pod must be healthy before terminating v1 pod. During this
time both v1 and v2 coexist. Allows old pod to finish pending tasks. After this time, v1 pod is terminated.
 Deployment properties
 During upgrade, a new ReplicaSet is created
 Maximum 4 pods at a time (3 replicas + 1 maxSurge)

34.K8S - Which properties are available in Deployment but not ReplicaSet?


 Strategy
 E.g., rolling update strategy which allows for controlled rolling updates; options maxUnavailable, maxSurge.
 MinReadySeconds
 Minimum X seconds that newly created pods must be ready. After X seconds, old pod is scaled down.
ReplicaSet:
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-replicaset
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-container
        image: my-image:1.0

Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  minReadySeconds: 30
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-container
        image: my-image:1.0

35.K8S - Service, how does service know which pod to manage?


 Labels
 MS1 => service => pods

36.K8S - Imperative vs Declarative approach in managing resources?


 Declarative
 Use manifests file e.g., deployment.yaml etc
 Imperative
 Use direct commands or imperative tools like kubectl to perform specific operations directly
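 For example (resource names are illustrative):
# Imperative - tell K8S exactly what to do
kubectl create deployment my-app --image=my-image:1.0
kubectl scale deployment my-app --replicas=3

# Declarative - describe the desired state in a manifest and (re-)apply it
kubectl apply -f deployment.yaml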

37.K8S - helm charts?


 Package manager of kubernetes resources
 Each MS has a templates folder; templates are used to parameterize values
 Values.yaml contain default values
 Chart.yaml contains metadata about helm chart.
 Manage and deploy multiple microservices as a single helm release
 Helm install will package and deploy all resources defined in helm chart, taking into account values.yaml
helm install my-app-release ./my-app-chart
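 A small sketch of how templates and values.yaml work together (file contents are illustrative):
# templates/deployment.yaml (fragment)
metadata:
  name: {{ .Release.Name }}-web
spec:
  replicas: {{ .Values.replicaCount }}
  # ...
      containers:
      - image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"

# values.yaml (defaults, can be overridden at install time)
replicaCount: 2
image:
  repository: my-app
  tag: "1.0"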
38.K8S - Scaling/Availability
 Probes
 Liveness, readiness probes
 Network policy
 Restrict unwanted communication between pods
 There are 2 major available solutions to scale Kubernetes cluster based on demanded load.
 Horizontal Pod Autoscaler (HPA) - native Kubernetes component to scale Deployment or ReplicaSet based on
CPU or other metrics
 Cluster Autoscaler (CA) - plugin to auto-scale worker-nodes of Kubernetes cluster

 PDB (Pod Disruption Budget)


 Defines the minimum pods required for an application to function normally
 E.g., a deployment has 5 replicas and we always want 4 to run
 So, create a PDB with minAvailable=4
 PDB guarantees safe functioning of the application during disruptions
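 A minimal PDB sketch for the example above (label is illustrative):
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb
spec:
  minAvailable: 4            # at least 4 pods must stay up during voluntary disruptions
  selector:
    matchLabels:
      app: my-app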

39.K8S - Scaling - HPA


 ASG and TG for EC2
 Who gives metrics ? CW metrics
 HPA
 Horizontal pod autoscaler.
 Who gives metrics ? Metrics Server
 Control loop implemented via a dedicated controller within Control Plane of cluster
 HPA feeds itself from Metrics Server
 Install Metrics Server
 Automatically scales replicas of pods based on resource metrics e.g., CPU utilization
 HPA continuously monitors metrics
- Calculates the desired pod count, and updates it, accordingly, optimizing resource allocation and application
availability
 HPA has desired metric count e.g., CPU utilization of 50%
 If utilization increases, replicas go up e.g. 5, and if utilization decreases replica count dynamically reduces to 1

 How HPA works - Install Metrics Server


 Metrics Server is a cluster-wide aggregator of resource usage data.
 It collects resource metrics from the kubelet running on each WN and exposes them in the Kubernetes API
server (through the Kubernetes Metrics API) to the HPA
 HPA continuously monitors Metrics Server
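 A minimal HPA sketch targeting the CPU example above (names and limits are illustrative); the same can be done imperatively with kubectl autoscale:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-deployment
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50

# Imperative equivalent
kubectl autoscale deployment my-deployment --cpu-percent=50 --min=1 --max=5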

40.K8S - Scaling - Deploy Metrics Server


 Metrics Server
 In kube-system namespace
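 A typical way to deploy and verify it (assuming the upstream components.yaml published by the metrics-server project):
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
kubectl top nodes     # works once the Metrics Server is serving metrics
kubectl top pods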
41.K8S - Disaster Recovery
 Take cluster snapshots to external storage like S3
 Take snapshot/recovery using tools like Velero. Velero is a popular backup and recovery tool to back up the entire
cluster

42.Helm - What is helm?


 Package manager for Kubernetes applications/yaml files, simplifying complex K8S deployments.
 Like the brew package manager for installing applications on a Mac; Helm is the same for K8S.
 Package manager = Software for installing/upgrading/uninstalling etc
 Helm Charts = collection of k8s resources

 Advantages
 Easy packaging/installation of yaml/k8s resources
 Versioned, easy rollback, go back to any revision
 Dynamic provison, placeholders inside yaml, override using values.yaml
 Before
 E.g. 3 microservices, 3 services file largely similar, except for minor changes in ports/version etc.
 Maintain 3 yaml files separately.
 User creates separate k8s files describing resources
 After
 Simplify template management
 Get already created, versioned charts
 Simple flow
helm install mydb bitnami/mysql
 Load the chart and dependencies
 Parse values.yaml to update placeholders
 Generate yaml
 Parse the generated yaml into kube objects and validate
 Generate final yaml to send to kube
 Commands
 helm install
 helm repo list
 helm repo add
 helm history
 helm rollback

43.Helm - Structure
 Helm chart structure.
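 Typical layout produced by helm create (file names under templates/ are illustrative):
my-app-chart/
  Chart.yaml        # metadata about the chart (name, version, description)
  values.yaml       # default values for the templates
  charts/           # chart dependencies (subcharts)
  templates/        # templated K8S manifests
    deployment.yaml
    service.yaml
    _helpers.tpl    # reusable template helpers
  .helmignore       # patterns to exclude from packaging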

44.Terraform - Workflow in terraform


 1. Initialize (`terraform init`)
 First step is to initialize Terraform in your working directory.
 Downloads the necessary provider plugins (AWS, Azure, etc.) + modules + backend
 Terraform stores its state file (by default, it's stored locally as `terraform.tfstate`).
 2. Validate (`terraform validate`), optional
 After initializing, validate your configuration to catch syntax errors and potential issues before proceeding
further.
 3. Plan (`terraform plan`), optional
 Examines configuration and determines what changes, if any, need to be made to your infrastructure to match
the desired state specified in your configuration.
 Shows results as +/-: what will be added (+) and what will be destroyed (-)
 4. Apply (`terraform apply`)
 Executes the changes specified in the plan and updates your infrastructure accordingly.
 5. Destroy (`terraform destroy`)
 Reads your Terraform configuration and destroys all the resources it manages.
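 A minimal sketch of the workflow on a tiny configuration (provider, AMI ID and instance type are placeholders):
# main.tf
provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "web" {
  ami           = "ami-12345678"   # placeholder AMI ID
  instance_type = "t2.micro"
}

# Workflow
#   terraform init       # download provider plugins, set up backend
#   terraform validate   # optional syntax/consistency check
#   terraform plan       # show what would change (+/-)
#   terraform apply      # create/update the resources
#   terraform destroy    # tear everything down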

45.Terraform - Where terraform stores its state?


 .tfstate = stores metadata of resources
 Stores resource status, attributes, dependencies, and metadata for terraform-managed infrastructure.
 Stores what is created in cloud e.g., in aws
 Created when execute `terraform apply` command for the first time after `terraform init`.
During apply terraform provisions the specified infrastructure resources and records their current state in the
tfstate file.
 After the initial creation, Terraform updates and manages the tfstate file with each subsequent `apply`, `plan`,
or `refresh` operation.
 tfstate keeps track of the state of infrastructure changes

 Current state vs desired state


 Desired state is what is stored in configuration files e.g., .tf files
 Current state is what is created in provider e.g., aws EC2
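 Beyond the local default, state is commonly kept in a remote backend such as S3; a sketch (bucket/key names are placeholders):
terraform {
  backend "s3" {
    bucket = "my-terraform-state"      # placeholder bucket
    key    = "prod/terraform.tfstate"  # placeholder path inside the bucket
    region = "us-east-1"
  }
}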

46.Terraform - How can we retrieve information from external systems e.g. AWS latest AMI?
 Datasources
 Retrieve data from external systems like AWS/Database/API etc
 Use it within terraform configuration
 Accessed via special kind of resource “data resource”, declared using a data block
 One data block => one data resource
 Example
 Fetch the latest Amazon Linux 2 AMI instead of hardcoding the AMI ID in the configuration
data "aws_ami" "amz_linux2" {
  most_recent = true
  owners      = ["amazon"]
  filter {
    name   = "name"
    values = ["amzn2-ami-kernel-5.10-hvm-*-gp2"]
  }
}
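 The data source can then be referenced elsewhere in the configuration, e.g. in an illustrative EC2 resource:
resource "aws_instance" "web" {
  ami           = data.aws_ami.amz_linux2.id   # latest matching AMI, resolved at plan time
  instance_type = "t2.micro"
}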
