Kubernetes Deployment Strategies

Last Updated : 08 Aug, 2024

Kubernetes is an open-source container orchestration tool. It helps you run and manage thousands of containerized applications across different deployment environments, and provides two key capabilities: auto-scaling and auto-healing. One of its main strengths is its flexibility in deployment strategies, which makes it possible to update applications easily while keeping them highly available. In this article, we will explore the main Kubernetes deployment strategies, their terminology, and practical examples of each.

Key Terminologies

  • Pod: The smallest deployable unit in Kubernetes, representing a single instance of a running process in the cluster.
  • ReplicaSet: Ensures that the specified number of pod replicas are running at any given time.
  • Deployment: Provides declarative updates to applications and ensures that the desired state of the application is maintained.
  • Rolling update: Gradually replaces the old version of the application with a new version, ensuring minimal downtime.
  • Blue-green deployment: Maintains two identical environments, one running the current version (blue) and the other the new version (green).
  • Canary deployment: Rolls a new version out to a small subset of users before it is fully deployed.
  • A/B testing: Runs two versions of the application in parallel to compare functionality and user experience.

Deployment Strategies

1. Rolling updates

Description: Rolling updates let you update an application incrementally, without downtime. During the rollout, Kubernetes replaces old pods with new ones a few at a time, ensuring that some pods are always available to serve traffic.

Steps to follow:

  1. Create a deployment: Define the desired state of your application in a Deployment YAML file.
  2. Apply the deployment: Use the kubectl apply -f deployment.yaml command to create the Deployment.
  3. Update the deployment: Change the image version in the Deployment YAML file and re-apply it to start the rolling update.
  4. Check the update: Use kubectl rollout status deployment/my-app to check progress.
  5. Roll back if necessary: Use kubectl rollout undo deployment/my-app to revert to the previous version if issues are found.

Example:

Let’s go through a detailed example of how rolling updates work in Kubernetes.

1. Starting point

You start with an application managed by a Kubernetes Deployment. Let’s assume the deployment configuration looks like this:

Initial Deployment (Version 1.0):

app-deployment-v1.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp:1.0
        ports:
        - containerPort: 80

In this configuration, three replicas of the application are running version 1.0.

2. Rolling update process

To perform a rolling update, you modify the Deployment spec to reference the new version of the application. For example, if you are upgrading to version 1.1:

Updated deployment (version 1.1):

app-deployment-v2.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp:1.1
        ports:
        - containerPort: 80

3. Apply the rolling update

To trigger the rolling update, apply the updated manifest with the kubectl apply command:

kubectl apply -f app-deployment-v2.yaml
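
After applying the new manifest, Kubernetes replaces the old pods gradually. As a quick sketch of how you might monitor the rollout and, if needed, revert it (assuming the Deployment is named myapp as in the manifests above):

kubectl rollout status deployment/myapp     # watch the update until it completes
kubectl rollout history deployment/myapp    # list previous revisions
kubectl rollout undo deployment/myapp       # revert to the previous revision if issues appear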

4. Configure the rolling update strategy

You can fine-tune the rolling update behavior with additional fields in the Deployment specification:

Rolling Update Method:

app-deployment-config.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp:1.1
        ports:
        - containerPort: 80

maxSurge: Specifies the maximum number of Pods that can be created above the desired replica count during the update. In this example, Kubernetes can create one extra Pod beyond the desired count.

maxUnavailable: Specifies the maximum number of Pods that can be unavailable during the update. In this example, at most one Pod can be unavailable at any time.
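
As a quick sanity check, you can read back the strategy that was applied. A minimal sketch, assuming the Deployment is named myapp as above:

kubectl get deployment myapp -o jsonpath='{.spec.strategy.rollingUpdate}'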

2. Blue-Green Deployment

Description: A blue-green deployment maintains two identical environments, one running the current version (blue) and the other the new version (green). Once the new version has been verified, traffic is switched from blue to green.

Steps to follow:

  1. Deploy the green environment: Deploy the new version of the application to a separate (green) environment.
  2. Test the green environment: Test the new version thoroughly and make sure it works as expected.
  3. Switch traffic: Update the Service configuration to select the green environment, directing all traffic to the new version.
  4. Monitor: Watch the new environment closely for any issues.
  5. Roll back if necessary: If problems arise, switch traffic back to the blue environment.

Example:

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app-green
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080

Explanation: The example above shows the Service after traffic has been switched from the blue environment (v1.0) to the green environment (v2.0) by changing the selector. If problems are identified, traffic can be switched back to blue by reverting the selector. A sketch of the corresponding Deployments and the selector switch follows.
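
For completeness, here is a hedged sketch of what the two environments behind that Service might look like. The Deployment names, labels, and image tags (my-app-blue, my-app-green, my-app:1.0, my-app:2.0) are illustrative assumptions, not part of the original example:

# Blue environment: current version (assumed names and image tags)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-blue
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app-blue
  template:
    metadata:
      labels:
        app: my-app-blue
    spec:
      containers:
      - name: my-app
        image: my-app:1.0
        ports:
        - containerPort: 8080
---
# Green environment: new version (assumed names and image tags)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-green
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app-green
  template:
    metadata:
      labels:
        app: my-app-green
    spec:
      containers:
      - name: my-app
        image: my-app:2.0
        ports:
        - containerPort: 8080

Once the green environment has been verified, the switch in step 3 can be performed by pointing the Service selector at the green labels, for example:

kubectl patch service my-service -p '{"spec":{"selector":{"app":"my-app-green"}}}'

Reverting the selector to app: my-app-blue switches traffic back if a rollback is needed.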

3. Canary Deployment

Description: A canary deployment releases the new version to a small subset of users before rolling it out fully. This allows real-world testing and feedback, reducing the risk associated with releasing new versions.

Steps to follow:

  • Deploy the canary version: Deploy the new version to a small percentage (e.g. 10%) of the pods.
  • Monitor performance: Watch metrics, errors, and user feedback from the canary deployment.
  • Gradual rollout: Gradually increase the number of pods running the new version based on the observed metrics.
  • Full rollout: Once trusted, update all remaining pods to the new version.
  • Roll back if necessary: Scale down or remove the canary pods if you find issues, and investigate the problem.

Example: Canary Deployment with Kubernetes

1. Prepare your Docker images

Assuming you have a Docker image for your application, there are two versions:

  • myapp:1.0 (the current stable version)
  • myapp:1.1 (the new version to roll out)

2. Configure the canary release in Kubernetes

Here is how you can configure a canary deployment in Kubernetes:

a. Define your deployments

Create a Kubernetes Deployment manifest for both the canary and production versions.

canary-deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-canary
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
      version: canary
  template:
    metadata:
      labels:
        app: myapp
        version: canary
    spec:
      containers:
      - name: myapp
        image: myapp:1.1
        ports:
        - containerPort: 80

production-deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-production
spec:
  replicas: 10
  selector:
    matchLabels:
      app: myapp
      version: stable
  template:
    metadata:
      labels:
        app: myapp
        version: stable
    spec:
      containers:
      - name: myapp
        image: myapp:1.0
        ports:
        - containerPort: 80

b. Create a Service to Expose the Application

myapp-service.yaml:

apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  selector:
    app: myapp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80

c. Apply the Configurations

Deploy the canary and production versions, and then create the service.

kubectl apply -f canary-deployment.yaml
kubectl apply -f production-deployment.yaml
kubectl apply -f myapp-service.yaml
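
After applying the manifests, both the stable and canary pods carry the app: myapp label that the Service selects on, so traffic is load-balanced roughly in proportion to the pod counts (about 10 stable to 2 canary). You can verify this with standard label selectors:

kubectl get pods -l app=myapp --show-labels     # all pods behind the service
kubectl get pods -l app=myapp,version=canary    # only the canary pods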

3. Set up traffic routing

You can use a service mesh or ingress controller to route a small percentage of traffic to the canary deployment. For example, with Istio as the service mesh, you can configure it as follows.

canary-routing.yaml:

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: myapp
spec:
  hosts:
  - "myapp.example.com"
  http:
  - route:
    - destination:
        host: myapp
        subset: stable
      weight: 90
    - destination:
        host: myapp
        subset: canary
      weight: 10

Create a destination rule for the subsets:

destination-rule.yaml:

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: myapp
spec:
  host: myapp
  subsets:
  - name: stable
    labels:
      version: stable
  - name: canary
    labels:
      version: canary

Apply the Routing Configurations:

kubectl apply -f canary-routing.yaml
kubectl apply -f destination-rule.yaml
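
As confidence in the canary grows, you can promote it by shifting more traffic its way and scaling the canary Deployment up. A hedged sketch of one intermediate step, reusing the VirtualService above with the split moved to 50/50 (the exact percentages and pacing are your choice):

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: myapp
spec:
  hosts:
  - "myapp.example.com"
  http:
  - route:
    - destination:
        host: myapp
        subset: stable
      weight: 50
    - destination:
        host: myapp
        subset: canary
      weight: 50

Re-apply the file with kubectl apply -f canary-routing.yaml, and scale the canary Deployment so it can handle its larger share of traffic, for example kubectl scale deployment myapp-canary --replicas=5.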

In this example:

  • Deployment manifests: Separate configurations for the canary and production deployments.
  • Service configuration: A single Service that sends traffic to both deployments.
  • Traffic routing: Istio routing rules that control how traffic is split between the stable and canary versions.
  • Note: This example uses Kubernetes and Istio for demonstration purposes, but canary deployments can be implemented with other tools and cloud platforms. The basic principles of releasing to a subset of users and rolling out gradually remain the same.

Conclusion

Kubernetes deployment strategies such as rolling updates, blue-green deployments, and canary deployments provide robust ways to update applications with minimal downtime and risk. Each strategy offers distinct benefits, from incremental replacement of pods to real-world testing on a subset of users, helping keep applications reliable and users satisfied. By using these techniques, developers can maintain continuous service availability and respond to issues quickly, improving overall application performance and user experience. Understanding and implementing these strategies is essential for effectively managing applications on Kubernetes.


Next Article

Similar Reads