Adv DevOps Viva Questions

Linux Operating System

Linux is a versatile and widely used open-source operating system that serves as the foundation
for numerous distributions (distros) such as Ubuntu, CentOS, and Fedora. Developed by Linus
Torvalds in 1991, Linux is based on the Unix operating system principles and offers several key
features:

1. Open Source: The source code is freely available for anyone to view, modify, and
distribute. This fosters a collaborative development environment.
2. Multi-user and Multi-tasking: Linux allows multiple users to access the system
simultaneously and supports running multiple processes at once.
3. Portability: Linux can run on various hardware platforms, from servers and desktops to
embedded systems and mobile devices.
4. Security: With built-in security features, such as user permissions and access control,
Linux is known for its robust security architecture.
5. Command-Line Interface (CLI): While many Linux distributions offer graphical user
interfaces (GUIs), the command line is a powerful tool for managing the system and
automating tasks.
6. Package Management: Linux distributions use package managers to install, update,
and remove software easily. Popular package management systems include APT
(Debian/Ubuntu) and YUM/DNF (CentOS/Fedora); a short example follows this list.
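
A minimal sketch of point 6, using nginx purely as an example package name:

    # Debian/Ubuntu (APT): refresh the package index, then install
    sudo apt update
    sudo apt install nginx

    # CentOS/Fedora (DNF): the equivalent install step
    sudo dnf install nginx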

Software Development Life Cycle

The Software Development Life Cycle (SDLC) is a structured process used for developing
software applications. It outlines various phases that a software project goes through from initial
conception to final deployment and maintenance. Here are the key stages of the SDLC:

1. Requirement Analysis:
○ Objective: Gather and analyze requirements from stakeholders to understand
their needs.
○ Activities: Conduct interviews, surveys, and workshops to document functional
and non-functional requirements.
2. Planning:
○ Objective: Define the scope, resources, timeline, and budget for the project.
○ Activities: Create a project plan that outlines milestones, deliverables, and risk
assessments.
3. Design:
○ Objective: Architect the software based on the gathered requirements.
○ Activities: Develop system architecture, design user interfaces, and create
database schemas. This phase may include creating both high-level and detailed
design specifications.
4. Implementation (or Coding):
○ Objective: Translate design documents into actual code.
○ Activities: Developers write code according to the design specifications,
following coding standards and best practices. This phase may involve unit
testing to ensure individual components work as intended.
5. Testing:
○ Objective: Identify and fix defects in the software before deployment.
○ Activities: Conduct various types of testing, including unit testing, integration
testing, system testing, and user acceptance testing (UAT). The goal is to ensure
the software meets the specified requirements.
6. Deployment:
○ Objective: Make the software available to users.
○ Activities: Deploy the application to production environments, which may involve
installing it on servers or distributing it to end-users. Documentation and training
may also be provided during this phase.
7. Maintenance:
○ Objective: Ensure the software remains functional and up-to-date.
○ Activities: Address issues reported by users, implement updates, and add new
features as necessary. This phase continues throughout the software's lifecycle.

Explain different types of Cloud services

Cloud services are typically categorized into three main types, each offering different levels of
control, flexibility, and management. Here’s an overview of the different types of cloud services:

1. Infrastructure as a Service (IaaS)

● Definition: IaaS provides virtualized computing resources over the internet. It allows
users to rent IT infrastructure, such as servers, storage, and networking, from a cloud
provider.
● Key Features:
○ Scalability: Easily scale resources up or down based on demand.
○ Control: Users have significant control over the infrastructure, including
operating systems and applications.
○ Cost-Effective: Pay-as-you-go pricing model reduces upfront costs.
● Examples: Amazon Web Services (AWS) EC2, Microsoft Azure Virtual Machines,
Google Cloud Compute Engine.

2. Platform as a Service (PaaS)

● Definition: PaaS provides a platform that allows developers to build, deploy, and
manage applications without worrying about the underlying infrastructure.
● Key Features:
○ Development Tools: Includes tools for development, testing, and deployment,
simplifying the development process.
○ Middleware: Provides middleware services for database management,
application hosting, and messaging.
○ Collaboration: Supports collaboration among development teams with
integrated development environments (IDEs).
● Examples: Google App Engine, Microsoft Azure App Service, Heroku.

3. Software as a Service (SaaS)

● Definition: SaaS delivers software applications over the internet on a subscription basis,
eliminating the need for installation and maintenance.
● Key Features:
○ Accessibility: Accessible from any device with an internet connection, typically
through a web browser.
○ Automatic Updates: Providers handle software updates, ensuring users always
have the latest version.
○ Multi-Tenancy: Multiple users share the same application instance, making it
cost-effective.
● Examples: Google Workspace (formerly G Suite), Microsoft 365, Salesforce, Zoom.

4. Function as a Service (FaaS)

● Definition: FaaS, also known as serverless computing, allows developers to execute
code in response to events without managing servers.
● Key Features:
○ Event-Driven: Functions are triggered by specific events, such as HTTP
requests, database changes, or message queue events.
○ Automatic Scaling: The platform automatically scales resources based on the
number of requests.
○ Cost-Efficiency: Users only pay for the compute time used when the function is
executed.
● Examples: AWS Lambda, Azure Functions, Google Cloud Functions.
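
As a small illustration of the event-driven, pay-per-invocation model, a deployed
function can be invoked directly with the AWS CLI; the function name hello-fn and the
payload below are placeholders, not part of the examples above:

    # Synchronously invoke a (hypothetical) function and print its response.
    # --cli-binary-format lets AWS CLI v2 accept a raw JSON payload.
    aws lambda invoke --function-name hello-fn \
        --cli-binary-format raw-in-base64-out \
        --payload '{"name": "world"}' response.json
    cat response.json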

What is DevOps

DevOps is a set of practices, principles, and cultural philosophies that aims to improve
collaboration between development (Dev) and operations (Ops) teams in software development
and IT operations. The goal of DevOps is to shorten the software development lifecycle while
delivering high-quality software that meets customer needs.

Key Concepts of DevOps

1. Collaboration:
○ DevOps fosters a culture of collaboration and communication between
development, operations, quality assurance, and other stakeholders. This
reduces silos and improves teamwork, leading to faster problem-solving and
decision-making.
2. Automation:
○ Automation is a core principle of DevOps. It involves automating repetitive tasks
such as code integration, testing, deployment, and infrastructure management.
Tools like Jenkins, GitLab CI/CD, and Docker are commonly used to streamline
these processes.
3. Continuous Integration (CI):
○ CI is the practice of automatically integrating code changes from multiple
contributors into a shared repository several times a day. This helps catch bugs
early and ensures that new code integrates smoothly with existing code.
4. Continuous Delivery (CD):
○ CD extends CI by automating the deployment process, enabling teams to deliver
software updates to production environments quickly and reliably. It ensures that
the software is always in a deployable state.
5. Monitoring and Feedback:
○ Continuous monitoring of applications and infrastructure allows teams to gather
feedback on performance, security, and user experience. This information helps
identify issues and inform future development.
6. Infrastructure as Code (IaC):
○ IaC involves managing and provisioning infrastructure through code and
automation, allowing for consistent and repeatable deployments. Tools like
Terraform and Ansible are commonly used in this practice.

Explain AWS Cloud9

AWS Cloud9 is a cloud-based integrated development environment (IDE) provided by Amazon
Web Services (AWS). It enables developers to write, run, and debug their code using just a web
browser. Cloud9 is designed to enhance collaboration and streamline the development process,
making it an ideal choice for building and managing applications in the cloud.

Key Features of AWS Cloud9

1. Cloud-Based IDE:
○ AWS Cloud9 provides a fully managed IDE that runs in the cloud, allowing
developers to access their development environment from anywhere with an
internet connection. There’s no need for local installations or configurations.
2. Preconfigured Environments:
○ Cloud9 comes with preconfigured environments that include essential tools and
programming languages (like Python, JavaScript, PHP, and more). This allows
developers to get started quickly without spending time setting up their
workspace.
3. Collaboration:
○ The IDE supports real-time collaboration, allowing multiple developers to work on
the same code simultaneously. Users can see each other's changes in real time,
making it easier to collaborate on projects, share feedback, and conduct pair
programming.
4. Built-In Terminal:
○ Cloud9 includes a built-in terminal that provides access to the command line,
enabling developers to run shell commands and interact with AWS services
directly from the IDE.
5. Debugging Tools:
○ The IDE offers debugging capabilities, including breakpoints, stack traces, and
variable inspection. This helps developers identify and fix issues in their code
more efficiently.
6. Integration with AWS Services:
○ AWS Cloud9 is tightly integrated with other AWS services, such as AWS
Lambda, Amazon S3, and AWS CodeStar. This allows developers to build and
deploy serverless applications and manage cloud resources directly from the
IDE.
7. Customization:
○ Developers can customize their IDE environment, including themes, keyboard
shortcuts, and code editor settings, to suit their preferences and workflow.
8. Cost-Effective:
○ AWS Cloud9 has a pay-as-you-go pricing model. Users only pay for the
underlying AWS resources they consume (like EC2 instances) while using
Cloud9, making it cost-effective for different development needs.

What is EC2

Amazon Elastic Compute Cloud (EC2) is a core service of Amazon Web Services (AWS) that
provides scalable computing capacity in the cloud. It allows users to run virtual servers, known
as instances, on-demand, enabling them to deploy and manage applications without needing
physical hardware.
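
A minimal sketch of launching an instance with the AWS CLI; the AMI ID and key pair
name are placeholders:

    # Launch one t3.micro virtual server from a (placeholder) machine image.
    aws ec2 run-instances \
        --image-id ami-0abcdef1234567890 \
        --instance-type t3.micro \
        --count 1 \
        --key-name my-key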

What is S3

Amazon Simple Storage Service (S3) is a scalable, highly durable object storage
service offered by Amazon Web Services (AWS). S3 is designed to store and retrieve any
amount of data from anywhere on the web, making it ideal for various use cases, including
backup, archiving, data lakes, and content distribution.
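
A minimal sketch with the AWS CLI; the bucket name is a placeholder and must be
globally unique:

    # Create a bucket, upload a file into it, then list its contents.
    aws s3 mb s3://my-example-bucket-12345
    aws s3 cp backup.tar.gz s3://my-example-bucket-12345/
    aws s3 ls s3://my-example-bucket-12345/
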
What is CodeDeploy

AWS CodeDeploy is a fully managed deployment service offered by Amazon Web Services
(AWS) that automates the process of deploying applications to a variety of compute services,
including Amazon EC2, AWS Lambda, and on-premises servers. CodeDeploy helps streamline
and standardize application deployments, making it easier to release new features and updates
reliably and quickly.
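
For EC2 and on-premises deployments, CodeDeploy is driven by an appspec.yml file at
the root of the application bundle. A minimal sketch, with placeholder paths and
script names:

    cat > appspec.yml <<'EOF'
    # Minimal CodeDeploy appspec for an EC2/on-premises deployment.
    version: 0.0
    os: linux
    files:
      - source: /app
        destination: /var/www/app
    hooks:
      ApplicationStart:
        - location: scripts/start_server.sh
          timeout: 60
    EOF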

What is Kubernetes

Kubernetes is an open-source container orchestration platform designed to automate the
deployment, scaling, management, and networking of containerized applications. Originally
developed by Google, Kubernetes has become the de facto standard for managing containers
in production environments, facilitating the development of cloud-native applications. Here are
the key features and concepts associated with Kubernetes:

Key Features of Kubernetes

1. Container Orchestration:
○ Kubernetes manages the lifecycle of containers across a cluster of machines,
ensuring that the right number of instances are running and that they are
distributed across the infrastructure.
2. Automatic Scaling:
○ Kubernetes can automatically scale applications up or down based on demand.
This includes scaling the number of container instances to handle increased
loads and downscaling during periods of low usage.
3. Load Balancing:
○ Kubernetes provides built-in load balancing to distribute traffic among containers,
ensuring even resource utilization and improving application reliability and
availability.
4. Service Discovery and Networking:
○ Kubernetes facilitates service discovery by providing stable endpoints for
applications and automating the routing of traffic to the correct containers. It also
includes features for managing internal and external networking.
5. Self-Healing:
○ Kubernetes automatically replaces and reschedules containers that fail or
become unresponsive, ensuring that applications remain available and resilient to
failures.
6. Declarative Configuration:
○ Kubernetes uses declarative configuration, allowing users to define the desired
state of the system through configuration files (YAML or JSON). The system
continuously monitors the current state and makes adjustments to achieve the
desired state.
7. Storage Orchestration:
○ Kubernetes can automatically mount the storage system of your choice, whether
it be local storage, a public cloud provider (e.g., Amazon EBS, Google Cloud
Storage), or a networked storage system (e.g., NFS).
8. Rolling Updates and Rollbacks:
○ Kubernetes supports rolling updates to deploy new application versions without
downtime. If an update fails, it can roll back to the previous stable version
automatically.

Core Components of Kubernetes Architecture

1. Master Node:
○ The master node is the control plane of the Kubernetes cluster, responsible for
managing the cluster and its components. It runs several key components:
■ Kube-apiserver: The API server that exposes the Kubernetes API and
acts as the central management point for the cluster. It handles all REST
requests and updates the state of the cluster.
■ etcd: A distributed key-value store that holds the cluster's configuration
data and state information. It serves as the source of truth for the cluster.
■ Kube-scheduler: Responsible for scheduling pods onto nodes based on
resource availability and constraints, ensuring that workloads are
efficiently distributed across the cluster.
■ Kube-controller-manager: Runs various controllers that manage the
state of the cluster, including replication controllers (to maintain the
desired number of pod replicas) and deployment controllers.
2. Worker Nodes:
○ Worker nodes (also known as minions) are the machines that run the
containerized applications. Each worker node contains several key components:
■ Kubelet: An agent that runs on each worker node, responsible for
communicating with the master node and managing the lifecycle of the
containers running in pods.
■ Kube-proxy: A network proxy that maintains network rules on the nodes,
enabling communication between pods and services. It also
load-balances traffic across pods.
■ Container Runtime: The software responsible for running the containers,
such as Docker, containerd, or CRI-O.
3. Pods:
○ The smallest deployable unit in Kubernetes, a pod can contain one or more
containers that share the same network namespace and storage resources. Pods
are designed to run a single instance of a service or application.
4. Services:
○ Services abstract a set of pods and provide a stable endpoint for accessing them.
They define policies for how to access the pods, enabling load balancing and
service discovery within the cluster.
5. Namespaces:
○ Namespaces provide a mechanism for isolating resources within the cluster,
allowing multiple teams or applications to share the same cluster while keeping
their resources separate.
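
The concepts above come together in a declarative manifest. A minimal sketch of a
Deployment that keeps three nginx replicas running, applied through a shell heredoc
(the names are placeholders):

    kubectl apply -f - <<'EOF'
    # Desired state: three replicas of a pod running nginx on port 80.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
          - name: web
            image: nginx:1.25
            ports:
            - containerPort: 80
    EOF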

What is kubectl

kubectl is the command-line interface (CLI) tool for interacting with Kubernetes clusters. It
provides a set of commands that allow users to deploy applications, manage cluster resources,
and perform various administrative tasks. With kubectl, users can control and automate the
deployment of containerized applications, making it a crucial tool for Kubernetes operations.
Here’s an overview of its features, commands, and usage:

Key Features of kubectl

1. Resource Management:
○ kubectl allows users to create, update, delete, and get information about
Kubernetes resources such as pods, services, deployments, replicasets, and
more.
2. Configuration Management:
○ Users can apply configuration changes to the cluster using YAML or JSON files,
allowing for declarative management of resources.
3. Viewing Logs:
○ kubectl provides commands to view logs from containers running within pods,
aiding in debugging and monitoring applications.
4. Port Forwarding:
○ It supports port forwarding, enabling access to services running in pods directly
from the local machine.
5. Shell Access:
○ Users can gain shell access to a container running in a pod using kubectl
exec, allowing for troubleshooting and diagnostics.
6. Namespace Management:
○ kubectl allows users to work within specific namespaces, helping organize
resources and isolate different environments within the same cluster.
7. Context Switching:
○ Users can easily switch between different clusters and contexts, facilitating
management of multiple Kubernetes environments.
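
A few of the features above mapped to commands; my-pod, web, and staging are
placeholder names:

    kubectl get pods -n default            # list pods in a namespace
    kubectl logs my-pod                    # view a pod's container logs
    kubectl exec -it my-pod -- /bin/sh     # open a shell inside the container
    kubectl port-forward svc/web 8080:80   # forward local port 8080 to a service
    kubectl config use-context staging     # switch to another cluster/context
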
What is Terraform

Terraform is an open-source infrastructure as code (IaC) tool created by HashiCorp that allows
users to define, provision, and manage infrastructure resources across various cloud providers
and services using a declarative configuration language. With Terraform, you can automate the
setup and management of your infrastructure, making it easier to provision resources
consistently and reliably. Here are the key features, concepts, and use cases of Terraform:

Key Features of Terraform

1. Infrastructure as Code:
○ Terraform enables users to define their infrastructure using configuration files
written in HashiCorp Configuration Language (HCL) or JSON. This allows for
version control, sharing, and collaboration.
2. Provider Support:
○ Terraform supports a wide range of cloud providers (such as AWS, Azure,
Google Cloud) as well as on-premises solutions (like VMware, OpenStack)
through the use of providers. This allows you to manage resources across
different platforms using a single tool.
3. Execution Plan:
○ Before applying any changes, Terraform generates an execution plan that shows
what actions will be taken (create, update, delete) to achieve the desired state of
your infrastructure. This helps prevent unintentional changes.
4. State Management:
○ Terraform maintains a state file that records the current state of the infrastructure.
This state file is essential for tracking resources and determining what changes
need to be made during subsequent runs.
5. Modules:
○ Terraform allows you to create reusable components called modules. Modules
enable you to organize your configuration files and share them across projects,
promoting best practices and reducing duplication.
6. Immutable Infrastructure:
○ Terraform encourages an immutable infrastructure approach, where changes are
made by replacing resources rather than modifying them in place. This leads to
greater reliability and easier rollback strategies.
7. Collaboration:
○ Terraform supports collaboration through remote state storage options (like AWS
S3, HashiCorp Consul, or Terraform Cloud), allowing teams to share and manage
state files in a secure and consistent manner.

Common Commands

● terraform init: Initializes a new or existing Terraform configuration, downloading
necessary provider plugins and setting up the working directory.
● terraform plan: Creates an execution plan, showing the changes that Terraform will
make to the infrastructure based on the current state and configuration.
● terraform apply: Applies the changes required to reach the desired state of the
infrastructure as defined in the configuration files.
● terraform destroy: Removes all the resources defined in the configuration, cleaning up
the infrastructure.
● terraform fmt: Formats Terraform configuration files to a canonical format and style.
● terraform validate: Validates the configuration files to ensure they are syntactically valid
and internally consistent.
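
A minimal sketch tying the commands to an actual configuration; the AMI ID is a
placeholder:

    cat > main.tf <<'EOF'
    # Declare a provider and one managed resource in HCL.
    provider "aws" {
      region = "us-east-1"
    }

    resource "aws_instance" "example" {
      ami           = "ami-0abcdef1234567890"   # placeholder AMI ID
      instance_type = "t3.micro"
    }
    EOF

    terraform init    # download the AWS provider plugin
    terraform plan    # preview the create/update/delete actions
    terraform apply   # provision the declared resources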

What is SAST

SAST (Static Application Security Testing) is a software security testing methodology that
analyzes an application's source code or binary code to identify potential vulnerabilities and
security flaws without executing the program. It is a key part of the software development
lifecycle (SDLC) and helps developers detect issues early in the development process, reducing
the cost and effort associated with fixing vulnerabilities later on.

How SAST Works

1. Source Code Analysis:
○ SAST tools examine the application's source code, looking for patterns and
structures that may indicate security vulnerabilities (e.g., SQL injection, cross-site
scripting).
2. Control Flow Analysis:
○ SAST analyzes the control flow of the application, tracing how data moves
through the application to identify potential security issues in logic and data
handling.
3. Data Flow Analysis:
○ This involves examining how data is processed, stored, and transmitted, helping
to detect issues related to improper data handling, such as unvalidated input.
4. Reporting:
○ After scanning, SAST tools generate reports that detail identified vulnerabilities,
including their severity, location in the code, and recommendations for
remediation.
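
As an illustration of the kind of pattern a SAST tool flags, the snippet below builds
an SQL query by string concatenation (a classic injection risk) and then scans the
file with Bandit, an open-source SAST tool for Python; the file and function names are
placeholders:

    cat > vulnerable.py <<'EOF'
    import sqlite3

    def find_user(conn, username):
        # Unvalidated input concatenated into SQL: a SAST tool flags this as a
        # potential SQL injection; a parameterized query ("?") would be the fix.
        return conn.execute("SELECT * FROM users WHERE name = '" + username + "'")
    EOF

    bandit vulnerable.py   # static scan; reports issues without running the code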

What is Jenkins

Jenkins is an open-source automation server widely used for continuous integration (CI) and
continuous delivery (CD) in software development. It enables developers to automate the
building, testing, and deployment of applications, thereby enhancing productivity and
collaboration within development teams.
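
A minimal sketch of a declarative Jenkins pipeline; the build and test commands are
placeholders for a real project's steps:

    cat > Jenkinsfile <<'EOF'
    // Declarative pipeline: Jenkins runs these stages on each code change.
    pipeline {
        agent any
        stages {
            stage('Build') {
                steps { sh 'make build' }   // placeholder build command
            }
            stage('Test') {
                steps { sh 'make test' }    // placeholder test command
            }
        }
    }
    EOF
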
What is SonarQube

SonarQube is an open-source platform used for continuous inspection of code quality and
security. It helps developers and organizations manage technical debt by providing automated
code reviews, ensuring compliance with coding standards, and identifying vulnerabilities in their
codebase. SonarQube supports multiple programming languages and integrates seamlessly
with various development and CI/CD tools.
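
A typical analysis run uses the sonar-scanner CLI against a running SonarQube server;
the project key, server URL, and token below are placeholders (property names vary
slightly across versions):

    # Analyze the current directory and push the results to the server.
    sonar-scanner \
      -Dsonar.projectKey=my-project \
      -Dsonar.host.url=http://localhost:9000 \
      -Dsonar.token=REPLACE_WITH_TOKEN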

What is Nagios

Nagios is an open-source monitoring system that provides comprehensive monitoring of
network services, host resources, and various other applications. It is widely used in IT
environments to ensure that systems are functioning correctly and to alert administrators when
issues arise.

Use Cases

1. Infrastructure Monitoring:
○ Monitoring the health and performance of servers, databases, and network
devices to ensure uptime and reliability.
2. Application Monitoring:
○ Tracking the performance and availability of critical applications, including web
servers, email servers, and custom software.
3. Service Monitoring:
○ Monitoring network services such as HTTP, FTP, SMTP, and DNS to ensure they
are operational and meeting performance standards.
4. Compliance and Reporting:
○ Providing compliance reports and historical data to support audits and ensure
that infrastructure meets industry standards.
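
Monitoring targets are declared as objects in Nagios configuration files. A minimal
sketch of one host and one HTTP service check; the file path, host name, address, and
templates are placeholders:

    cat > /usr/local/nagios/etc/objects/web.cfg <<'EOF'
    ; One monitored host and a service check on it.
    define host {
        use        linux-server       ; inherit a standard host template
        host_name  web01
        address    192.0.2.10         ; placeholder (documentation) IP
    }

    define service {
        use                 generic-service
        host_name           web01
        service_description HTTP
        check_command       check_http
    }
    EOF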

Explain AWS Lambda

AWS Lambda is a serverless computing service provided by Amazon Web Services (AWS) that
allows developers to run code without provisioning or managing servers. It automatically
handles the computing resources required for the code execution, scaling automatically in
response to incoming requests.

Architecture of AWS Lambda

1. Lambda Function:
○ A Lambda function is the main code that gets executed in response to events. It
consists of the code itself, a handler, and configuration settings (like memory
size, timeout, and environment variables).
2. Event Sources:
○ Various AWS services and custom applications can act as event sources that
trigger the Lambda function. For example, an S3 upload can trigger a Lambda
function to process the file.
3. Execution Role:
○ Each Lambda function runs with an associated execution role that defines
permissions to access other AWS services. This ensures secure and controlled
access to resources.
4. Runtime Environment:
○ AWS Lambda manages the runtime environment in which the function executes,
including the necessary libraries and dependencies. Users only need to upload
their code along with any additional packages, as sketched below.
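
A minimal sketch of the pieces above: a Python handler, zipped and registered with the
AWS CLI; the function name, runtime version, and IAM role ARN are placeholders:

    cat > lambda_function.py <<'EOF'
    # The handler Lambda invokes once per event.
    def lambda_handler(event, context):
        return {"statusCode": 200, "body": "Hello from Lambda"}
    EOF

    zip function.zip lambda_function.py

    # Register the function; the execution role ARN below is a placeholder.
    aws lambda create-function \
        --function-name hello-fn \
        --runtime python3.12 \
        --handler lambda_function.lambda_handler \
        --zip-file fileb://function.zip \
        --role arn:aws:iam::123456789012:role/lambda-exec-role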

Use Cases

1. Data Processing:
○ AWS Lambda can be used to process data in real-time, such as transforming
data uploaded to S3 or processing logs from CloudWatch.
2. API Backend:
○ Developers can use AWS Lambda in conjunction with API Gateway to create
serverless APIs that scale automatically based on traffic.
3. Event-Driven Applications:
○ Lambda can be integrated with various AWS services to create event-driven
architectures, responding to changes in data and triggering workflows.
4. Scheduled Tasks:
○ AWS Lambda can run tasks on a scheduled basis using Amazon CloudWatch
Events, making it suitable for cron-like jobs.
5. Chatbots and Voice Assistants:
○ AWS Lambda can be used to build back-end services for chatbots and voice
assistants, handling requests and responses dynamically.

Explain AWS CodeStar

AWS CodeStar is a cloud-based service provided by Amazon Web Services (AWS) that
enables teams to quickly develop, build, and deploy applications on AWS. It streamlines the
development process by integrating various AWS services into a unified development toolchain.
CodeStar provides pre-configured project templates, collaboration tools, and a user-friendly
dashboard to help developers and DevOps teams work efficiently on software projects.

Key Components of AWS CodeStar

1. AWS CodeCommit:
○ A fully managed source control service used to host Git repositories.
CodeCommit integrates with CodeStar to store and manage the source code,
making it accessible to the development team.
2. AWS CodeBuild:
○ A fully managed build service that compiles source code, runs tests, and
produces deployable artifacts. CodeBuild is used in CodeStar projects to
automatically build code whenever changes are pushed to the repository.
3. AWS CodeDeploy:
○ A service that automates the deployment of applications to a variety of
environments, including Amazon EC2 instances, on-premises servers, and
Lambda functions. In a CodeStar project, CodeDeploy manages the deployment
process.
4. AWS CodePipeline:
○ A continuous delivery service that automates the entire release process, from
source code changes to production deployment. CodePipeline orchestrates the
integration of CodeCommit, CodeBuild, and CodeDeploy to create a CI/CD
pipeline for the project.
5. AWS CloudFormation:
○ AWS CloudFormation is used by CodeStar to create and manage the
infrastructure required for the project. This allows teams to provision and manage
resources, such as EC2 instances, databases, and networking configurations, in
an automated manner.
6. AWS Lambda (for serverless projects):
○ For serverless projects, CodeStar integrates with AWS Lambda, enabling
developers to build and deploy serverless applications with ease.
7. IAM Role Management:
○ AWS IAM is used within CodeStar to define user roles and permissions. Each
team member can be assigned a role (e.g., owner, contributor, viewer) with
different levels of access to the project.
8. Jira and Other Issue Tracking Tools:
○ CodeStar integrates with Jira and other issue-tracking tools to enable project
management and tracking of development tasks, bugs, and feature requests.
9. Amazon CloudWatch:
○ For monitoring and logging, CodeStar integrates with Amazon CloudWatch to
provide insights into application performance and resource usage. Developers
can set alarms and monitor metrics in real time.

What is Docker

Docker is an open-source platform that automates the deployment, scaling, and management of
applications inside lightweight, portable containers. It enables developers to package
applications and their dependencies into a standardized unit called a container, which can run
consistently across various environments, including development, testing, and production.
Docker provides a solution to the "works on my machine" problem by ensuring that the
environment in which the code runs is consistent, regardless of where it is deployed.

Key Concepts of Docker

1. Container:
○ A Docker container is a lightweight, standalone, and executable package that
includes everything needed to run an application: the code, runtime, libraries,
environment variables, and system tools. Containers are isolated from each other
and the host system, but they can communicate through well-defined channels.
2. Docker Image:
○ A Docker image is a read-only template used to create containers. It contains the
application code, libraries, dependencies, and configuration files required for the
application to run. Docker images are built from Dockerfiles, which define the
steps needed to assemble an image.
3. Dockerfile:
○ A Dockerfile is a script that contains a series of instructions to assemble a Docker
image. It specifies the base image, environment setup, application code, and any
necessary dependencies. Dockerfiles are used to automate the process of
building Docker images (see the sketch after this list).
4. Docker Hub:
○ Docker Hub is a cloud-based registry service where users can store and share
Docker images. It hosts official images, community-contributed images, and
custom images created by users. Docker Hub allows for the easy distribution of
Docker images.
5. Docker Engine:
○ Docker Engine is the core component of Docker. It is a client-server application
that builds, runs, and manages Docker containers. The Docker Engine uses the
Docker CLI (Command Line Interface) to interact with the Docker daemon
(server), which does the heavy lifting of container management.
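
As referenced in concept 3 above, a minimal sketch of building and running an image;
the base image, file names, and tag are placeholders:

    cat > Dockerfile <<'EOF'
    # Start from an official base image, copy the app in, declare the command.
    FROM python:3.12-slim
    WORKDIR /app
    COPY app.py .
    CMD ["python", "app.py"]
    EOF

    docker build -t myapp:1.0 .   # assemble the image from the Dockerfile
    docker run --rm myapp:1.0     # start a container from that image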

Docker vs. Virtual Machines (VMs)

● Containers: Lightweight, share the host OS kernel, and start in seconds. They
encapsulate applications and their dependencies but are isolated from the host and
other containers.
● Virtual Machines: Heavyweight, each runs a full OS instance, and take longer to start.
VMs virtualize the entire physical machine, including the hardware and operating
system.

Use Cases of Docker

1. Application Deployment:
○ Docker enables easy and consistent application deployment across different
environments, ensuring that the code runs the same in development, testing, and
production environments.
2. Microservices Architecture:
○ Docker is commonly used in microservices architectures, where each service is
containerized and deployed independently. This ensures that services are
decoupled and can be scaled and updated individually.
3. Continuous Integration/Continuous Delivery (CI/CD):
○ Docker automates the build, test, and deployment process in CI/CD pipelines.
Developers can test code in isolated containers, ensuring consistent behavior
before deploying to production.
4. Development Environments:
○ Developers use Docker to create consistent development environments. Docker
eliminates the need to install dependencies and configurations on the local
machine by encapsulating everything in containers.
5. Hybrid Cloud and Multi-Cloud Deployments:
○ Docker enables applications to run consistently across different cloud platforms
or a combination of on-premises and cloud environments, making it ideal for
hybrid cloud strategies.
