K8s Notes 1
*Containers*
- Like rooms in a single house: the rooms share one structure (the host OS), but each room has its own key (isolation)
*Key differences*
1. *Operating System*: VMs have their own OS, while containers share the host OS.
2. *Isolation*: VMs provide strong isolation, while containers provide lightweight isolation.
3. *Overhead*: VMs have higher overhead due to the need to run multiple OSes.
Cgroups (Control Groups) is a Linux feature that helps manage resources like CPU, memory, and I/O for a
group of processes. Think of it like a manager who ensures each team (processes) gets the resources they
need to work efficiently.
Cgroups can:
- Set limits on resource usage (e.g., team A can use only 2 desks)
- Prioritize which groups get resources first when resources are scarce
- Track (account for) how much CPU, memory, and I/O each group uses
In summary, cgroups help manage resources for groups of processes, ensuring efficient use and preventing
resource contention.
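In Kubernetes, cgroups are what enforce the CPU and memory limits you declare on a container: the kubelet translates them into cgroup settings on the node. A minimal sketch (the Pod name and image are hypothetical examples):

```yaml
# Hypothetical Pod: the kubelet turns these requests/limits
# into cgroup settings on the Worker Node.
apiVersion: v1
kind: Pod
metadata:
  name: limited-app        # hypothetical name
spec:
  containers:
    - name: app
      image: nginx:1.25    # example image
      resources:
        requests:          # guaranteed minimum ("2 desks reserved")
          cpu: "250m"
          memory: "128Mi"
        limits:            # hard cap, enforced via cgroups
          cpu: "500m"
          memory: "256Mi"
```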
Containers are a way to package and run software applications in an isolated environment, called a "container".
Here's what that means and why it matters:
*What is a container?*
Imagine a shipping container that holds everything an application needs to run, like a mini-data center. This
container includes:
1. Application code
2. Runtime and system libraries (the dependencies)
3. Configuration and settings
1. *Isolation*: Containers keep applications separate from each other and the host system, preventing conflicts
and improving security.
2. *Portability*: Containers are lightweight and portable, making it easy to move applications between
environments (like development, testing, and production).
3. *Efficiency*: Containers use fewer resources than traditional virtual machines, making them more efficient.
4. *Consistency*: Containers ensure consistency across environments, reducing errors and making deployment
easier.
*Analogy*
Think of containers like a lunchbox. You pack everything you need for lunch (application, dependencies,
settings) into a single container (lunchbox). This way, you can easily carry and run your "lunch" (application)
anywhere, without worrying about what's already in the fridge (host system).
Containers are a powerful tool for developing, deploying, and managing software applications, making them
more efficient, secure, and scalable!
*Step 2: Worker Nodes*
- They run the applications and provide resources like CPU and memory.
*Step 3: Pods*
- They run on Worker Nodes and are managed by the Master Node.
*Step 4: Deployments*
- They ensure the right number of Pods are running and healthy.
*Step 5: Services*
- They give a stable address (IP and DNS name) to a set of Pods, so other applications can reach them.
*Step 6: Networking*
- The Control Plane is like the "control tower" for cluster decisions.
- It includes the Master Node, Deployments, and Services, working together to manage the cluster.
- The Data Plane is like the "highway" for your application data.
- It includes the Worker Nodes, Pods, and Persistent Storage, handling data processing and storage.
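The steps above can be sketched with a minimal Pod and a Service in front of it (all names and the image are hypothetical):

```yaml
# Step 3: a Pod - the smallest deployable unit, runs on a Worker Node
apiVersion: v1
kind: Pod
metadata:
  name: web-pod             # hypothetical name
  labels:
    app: web
spec:
  containers:
    - name: web
      image: nginx:1.25     # example image
      ports:
        - containerPort: 80
---
# Step 5: a Service - a stable address for Pods matching the selector
apiVersion: v1
kind: Service
metadata:
  name: web-svc             # hypothetical name
spec:
  selector:
    app: web                # routes traffic to Pods with this label
  ports:
    - port: 80
      targetPort: 80
```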
Here's an explanation of Kubernetes Master Node and Worker Node components, in layman's terms, step by
step:
*Master Node:*
1. *API Server*: The "Receptionist" - handles requests and communication between components.
2. *Scheduler*: The "Traffic Cop" - decides which Worker Node to run Pods on.
3. *Controller Manager*: The "Maintenance Crew" - ensures the cluster is running correctly.
4. *etcd*: The "Record Keeper" - stores the cluster's state and configuration.
*Worker Node:*
1. *Kubelet*: The "Pod Manager" - runs and manages Pods on the Worker Node.
2. *Kube-proxy*: The "Network Router" - routes traffic to the right Pods.
3. *Container Runtime*: The "Engine" - actually runs the containers (e.g., CRI-O or containerd).
In simple terms:
- The Master Node is like the "brain" of the cluster, making decisions and controlling the cluster.
- The Worker Node is like the "hands" of the cluster, running the applications and providing resources.
The Master Node components work together to manage the cluster, while the Worker Node components work
together to run the applications.
*What is CRI-O?*
CRI-O is a container runtime that helps Kubernetes manage containers. Think of it like a "container engine"
that runs your applications.
*How does CRI-O work?*
1. *Kubernetes requests a container*: the kubelet asks CRI-O, over the Container Runtime Interface (CRI), to
start a container.
2. *CRI-O creates the container*: CRI-O pulls the image and prepares the container according to the OCI (Open
Container Initiative) specifications.
3. *CRI-O runs the container*: CRI-O runs the container using a runtime like runc.
4. *CRI-O manages the container*: CRI-O monitors the container's performance, restarts it if it fails, and cleans
up when it's deleted.
*Why CRI-O?*
1. *Lightweight*: CRI-O is a minimal runtime built specifically for Kubernetes.
2. *Secure*: CRI-O uses OCI images and runc to ensure secure container execution.
CRI-O is like a "container manager" that helps Kubernetes run and manage containers. It's a crucial
component that ensures your applications run smoothly and efficiently in a Kubernetes cluster!
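To use CRI-O, the kubelet is pointed at CRI-O's CRI socket. A hedged sketch of the relevant KubeletConfiguration field (assuming kubelet 1.27+, where this moved into the config file; the socket path may differ by distribution):

```yaml
# KubeletConfiguration fragment (assumption: kubelet >= 1.27)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock  # CRI-O's default socket path
```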
Here's an explanation of Kubernetes Labels, Annotations, Selectors, and Set-based Selectors in layman's terms:
_Labels_
Labels are like "tags" or "keywords" that you attach to objects (like Pods or Nodes) in your Kubernetes cluster.
They help you organize and filter objects based on specific characteristics.
_Annotations_
Annotations are like "notes" or "comments" that you add to objects in your Kubernetes cluster. They provide
additional information about the object, but don't affect its behavior.
_Selectors_
Selectors are like "filters" that help you select objects based on their labels. You can use selectors to:
- Find all objects with a specific label (e.g., "app=nginx")
- Tell a Service or Deployment which Pods it should manage
_Set-based Selectors_
Set-based Selectors are like "advanced filters" that allow you to select objects based on sets of labels. You can
use set-based selectors to:
- Match objects whose label value is any of a set (e.g., "env in (production, staging)")
- Match objects that satisfy all of several requirements (e.g., "env in (production), app in (nginx)")
- Match objects whose label value is not in a set (e.g., "env notin (production)")
In simple terms:
- Labels are like tags on items (e.g., a book's genre)
- Annotations are like sticky notes with extra details
- Selectors are like simple search filters (e.g., find books by one genre)
- Set-based Selectors are like advanced search filters (e.g., find books by multiple categories)
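A sketch of how labels, annotations, and a set-based selector look in practice (all names and the image are hypothetical):

```yaml
# Pod with labels (used for selection) and an annotation (informational only)
apiVersion: v1
kind: Pod
metadata:
  name: web-1                          # hypothetical name
  labels:
    env: production
    app: nginx
  annotations:
    team-contact: "alice@example.com"  # extra info; does not affect behavior
spec:
  containers:
    - name: nginx
      image: nginx:1.25
---
# Deployment whose selector uses set-based expressions
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 1
  selector:
    matchExpressions:
      - { key: env, operator: In, values: [production, staging] }  # "any of"
      - { key: app, operator: Exists }                             # label must be present
  template:
    metadata:
      labels:
        env: production
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.25
```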
_Workloads_
Workloads are like "jobs" that you want to run in your Kubernetes cluster. They represent the
applications or tasks that you want to execute.
_Deployments_
Deployments are like "recipes" for managing Workloads. They define:
- Which container image to run
- How many replicas (copies) to keep running
- How to roll out updates safely
Deployments ensure that your Workload is running correctly and can scale or update as needed.
_ReplicaSets_
ReplicaSets are like "teams" of identical Workloads. They ensure that a specified number of
replicas (copies) of a Workload are running at any given time.
Example: If you want to run 3 replicas of a web server, a ReplicaSet ensures that 3 copies are
always running, even if one fails.
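The 3-replica web server example above, written as a Deployment (the Deployment creates and manages the ReplicaSet for you; the names and image are hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-server           # hypothetical name
spec:
  replicas: 3                # the ReplicaSet keeps 3 Pods running at all times
  selector:
    matchLabels:
      app: web-server
  template:
    metadata:
      labels:
        app: web-server
    spec:
      containers:
        - name: web
          image: nginx:1.25  # example web server image
```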
In simple terms:
- Workloads are the jobs you want done
- Deployments are the recipes describing how to run those jobs
- ReplicaSets are the teams that keep the right number of copies running