Kubernetes Deployment Strategies
Last Updated: 08 Aug, 2024
Kubernetes is an open-source container orchestration tool. It helps you run and manage thousands of containerized applications across different deployment environments, and it provides core capabilities such as auto-scaling and auto-healing. One of its key strengths is its flexibility in deployment methodologies, which makes it easy to update applications while keeping them widely available. In this article, we will explore the main Kubernetes deployment strategies, covering their terminology, methods, and practical examples.
Key Terminologies
- Pod: The smallest deployable unit in Kubernetes, representing a single instance of a running process in the cluster.
- ReplicaSet: Ensures that the specified number of pod replicas are running at any given time.
- Deployment: Provides declarative updates to applications and ensures that the desired state of the application is maintained.
- Rolling update: Gradually replaces the old version of the application with a new version, ensuring minimal downtime.
- Blue-green deployment: Maintains two identical environments, one running the current version and another running the new version.
- Canary deployment: Releases a new version to a small subset of users before it is fully rolled out.
- A/B testing: Runs two versions of the application in parallel to test and compare functionality and user experience.
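To see how the first three objects relate in practice, the commands below are a minimal sketch (assuming a Deployment named myapp with the label app: myapp, as in the examples later in this article) that lists the Deployment, the ReplicaSet it creates, and the Pods that ReplicaSet keeps running:
kubectl get deployment myapp          # the Deployment object
kubectl get replicaset -l app=myapp   # the ReplicaSet created by the Deployment
kubectl get pods -l app=myapp         # the Pods managed by that ReplicaSet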
Deployment Strategies
1. Rolling updates
Description: Rolling updates allow you to update the application incrementally, without downtime. Kubernetes gradually replaces old pods with new ones, ensuring that some pods are always available throughout the update.
Steps to follow:
- Create a Deployment: Define the desired state of your application in a Deployment YAML file.
- Apply the Deployment: Use the kubectl apply -f deployment.yaml command to create it.
- Update the Deployment: Change the image version in the Deployment YAML file and apply it again to start the rolling update.
- Check progress: Use kubectl rollout status deployment/my-app to check progress.
- Roll back if necessary: Use kubectl rollout undo deployment/my-app to revert to the previous version if issues are found.
Example:
Let’s go through a detailed example of how rolling updates work in Kubernetes, a popular container orchestration platform.
1. Initial setup
You start with an application deployed through a Kubernetes Deployment. Let's assume the application has a Deployment configuration that looks like this:
Initial Deployment (Version 1.0):
app-deployment-v1.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp:1.0
        ports:
        - containerPort: 80
In this configuration, three replicas of the application run version 1.0.
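To create this Deployment and confirm that all three replicas are running, you might apply the manifest and check the pods, as in this small sketch (assuming the file is saved as app-deployment-v1.yaml):
kubectl apply -f app-deployment-v1.yaml
kubectl get pods -l app=myapp   # should list 3 running pods created by the Deployment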
2. Rolling update process
To perform a rolling update, you modify the Deployment specification to reference the new version of the application. For example, if you are upgrading to version 1.1:
Updated deployment (version 1.1):
app-deployment-v2.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp:1.1
        ports:
        - containerPort: 80
3. Applying the rolling update
To trigger the rolling update, apply the updated manifest with kubectl:
kubectl apply -f app-deployment-v2.yaml
4. Configuring the update strategy
You can fine-tune the rolling update behavior with additional fields in the Deployment specification:
Rolling update strategy:
app-deployment-config.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp:1.1
        ports:
        - containerPort: 80
- maxSurge: The maximum number of Pods that can be created above the desired count during the update. In this example, Kubernetes can create one extra Pod while updating.
- maxUnavailable: The maximum number of Pods that can be unavailable during the update. In this example, at most one Pod can be unavailable at any time.
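While the rolling update is in progress, you can watch it, inspect its history, and roll back if something goes wrong. These are standard kubectl rollout commands, shown here for the myapp Deployment from the example:
kubectl rollout status deployment/myapp    # watch the update progress
kubectl rollout history deployment/myapp   # list previous revisions
kubectl rollout undo deployment/myapp      # revert to the previous revision if issues appear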
2. Blue-Green Deployment
Description: Blue-green deployment maintains two identical environments: one running the current version (blue) and the other running the new version (green). Once the new version has been verified, traffic is switched from blue to green.
Steps to follow:
- Deploy the green environment: Deploy the new version of the application to a separate (green) environment.
- Test the green environment: Test the new version thoroughly and make sure it works as expected.
- Switch traffic: Update the Service selector to point to the green environment, directing all traffic to the new version.
- Monitor: Watch the new environment closely for any issues.
- Roll back if necessary: If problems arise, switch traffic back to the blue environment.
Example:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app-green
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
Explanation: The example above switches traffic from the blue environment (v1.0) to the green environment (v2.0) by pointing the Service selector at the green pods. If problems are identified, traffic can be switched back to blue. A fuller sketch of both environments and the traffic switch is shown below.
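For completeness, here is a hedged sketch of what the two environments and the traffic switch could look like. The Deployment names, labels, and image tags (my-app-blue, my-app-green, my-app:1.0, my-app:2.0) are illustrative assumptions, not fixed conventions:
# blue-deployment.yaml (current version, selected by label app: my-app-blue)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-blue
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app-blue
  template:
    metadata:
      labels:
        app: my-app-blue
    spec:
      containers:
      - name: my-app
        image: my-app:1.0
        ports:
        - containerPort: 8080
---
# green-deployment.yaml (new version, selected by label app: my-app-green)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-green
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app-green
  template:
    metadata:
      labels:
        app: my-app-green
    spec:
      containers:
      - name: my-app
        image: my-app:2.0
        ports:
        - containerPort: 8080
Once the green environment has been verified, the Service selector can be flipped with a patch, and flipped back if problems appear:
kubectl patch service my-service -p '{"spec":{"selector":{"app":"my-app-green"}}}'   # switch traffic to green
kubectl patch service my-service -p '{"spec":{"selector":{"app":"my-app-blue"}}}'    # roll back to blue if needed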
3. Canary Deployment
Description: A canary deployment releases the new version to a small subset of users before rolling it out fully. This allows for real-world testing and feedback, reducing the risk associated with releasing new versions.
Steps to follow:
- Deploy the canary version: Deploy the new version to a small percentage (e.g. 10%) of the pods.
- Monitor performance: Watch metrics and user feedback from the canary deployment.
- Gradual rollout: Gradually increase the number of pods running the new version based on feedback and performance metrics.
- Full rollout: Once confident, update all remaining pods to the new version.
- Roll back if necessary: Scale down or remove the canary pods if you find issues, and investigate the problem.
Example: Canary Deployment with Kubernetes
1. Prepare your Docker images
Assuming you have a Docker image for your application, you have two versions:
- myapp:1.0 (current stable version)
- myapp:1.1 (new canary version)
2. Set up the canary release in Kubernetes
Here's how you can configure a canary deployment in Kubernetes:
a. Define your deployments
Create Kubernetes Deployment manifests for both the canary and production versions.
canary-deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-canary
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
      version: canary
  template:
    metadata:
      labels:
        app: myapp
        version: canary
    spec:
      containers:
      - name: myapp
        image: myapp:1.1
        ports:
        - containerPort: 80
production-deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-production
spec:
  replicas: 10
  selector:
    matchLabels:
      app: myapp
      version: stable
  template:
    metadata:
      labels:
        app: myapp
        version: stable
    spec:
      containers:
      - name: myapp
        image: myapp:1.0
        ports:
        - containerPort: 80
b. Create a Service to Expose the Application
myapp-service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  selector:
    app: myapp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
c. Apply the Configurations
Deploy the canary and production versions, and then create the service.
kubectl apply -f canary-deployment.yaml
kubectl apply -f production-deployment.yaml
kubectl apply -f myapp-service.yaml
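After applying the manifests, a quick sanity check (using the labels defined above) confirms that both versions sit behind the same Service:
kubectl get pods -l app=myapp -L version   # should show 10 stable pods and 2 canary pods
kubectl get endpoints myapp-service        # both sets of pod IPs are listed behind the Service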
3. Establish traffic controls
You can use a service mesh or an ingress controller to route a small percentage of traffic to the canary deployment. For example, if you use Istio as a service mesh, you can configure it as follows.
canary-routing.yaml:
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: myapp
spec:
  hosts:
  - "myapp.example.com"
  http:
  - route:
    - destination:
        host: myapp
        subset: stable
      weight: 90
    - destination:
        host: myapp
        subset: canary
      weight: 10
Create a destination rule for the subsets:
destination-rule.yaml:
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: myapp
spec:
  host: myapp
  subsets:
  - name: stable
    labels:
      version: stable
  - name: canary
    labels:
      version: canary
Apply the Routing Configurations:
kubectl apply -f canary-routing.yaml
kubectl apply -f destination-rule.yaml
In this example:
- Deployment manifests: Separate configurations for the canary and production deployments.
- Service configuration: A Service that selects pods from both deployments.
- Traffic routing: Istio routing rules that control how traffic is split between the stable and canary versions.
- Note: This example uses Kubernetes and Istio for demonstration purposes, but canary deployments can be implemented with other tools and cloud platforms. The basic principle of releasing to a subset of users and rolling out gradually remains the same.
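If the canary proves healthy, promotion can be done by shifting the Istio weights toward the canary subset and then moving the production Deployment to the new image. The following commands are a sketch based on the names used in this example:
# Shift traffic gradually, e.g. edit canary-routing.yaml to 50/50 and then 0/100, re-applying each time
kubectl apply -f canary-routing.yaml

# Promote: update the production Deployment to the new image and watch the rollout
kubectl set image deployment/myapp-production myapp=myapp:1.1
kubectl rollout status deployment/myapp-production

# Retire the canary once production is fully on 1.1
kubectl scale deployment myapp-canary --replicas=0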
Conclusion
Kubernetes deployment methods such as rolling updates, blue-green deployments, and canary deployments provide robust ways to update applications with minimal downtime and risk. Each strategy offers unique benefits, from incremental updates to real-world testing, helping keep applications reliable and users satisfied. By using these techniques, developers can maintain continuous service availability and respond to issues quickly, improving overall application performance and user experience. Understanding and implementing these strategies is essential for managing Kubernetes applications effectively.