III & IV Unit DevOps

SHORT ANSWERS

i) What is the purpose of the Docker Hub?


Ans:
The purpose of Docker Hub is to serve as a cloud-based repository for
storing, sharing, and distributing Docker images, allowing users to easily
access and pull pre-built images or upload their own for use in
containerized applications.

ii) What is Docker Engine?


Ans:
Docker Engine is the core component of Docker that enables the
building, running, and managing of containers. It consists of a server
(Docker daemon), a REST API, and a command-line interface (CLI) for
interacting with the daemon.
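
A quick way to see this client/daemon split in practice is the docker version command, which reports the CLI ("Client") and the daemon ("Server: Docker Engine") in separate sections:

$ docker version   # prints Client and Server sections separately
$ docker info      # asks the daemon for runtime details (storage driver, containers, etc.)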

iii) Differentiate between a Git repository and a Git working directory.


Ans:
A Git repository is a version-controlled directory that stores the project's
history, including all commits, branches, and tags. A Git working
directory is the local directory where you make changes to the files,
reflecting the current state of the project, which can be staged and
committed to the repository.
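
As a small illustration (the project name is just a placeholder), the repository is the hidden .git directory inside the project folder, and everything around it is the working directory:

$ ls -a my-project   # .git is the repository; the rest is the working directory
$ git status         # compares the working directory and staging area against the repository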

iv) Differentiate between a Docker container and a Docker image.


Ans:
A Docker container is a running instance of a Docker image, with its
own environment and state. A Docker image is a static, read-only
blueprint that defines the application's code, libraries, and
dependencies, which can be used to create containers.
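
A minimal sketch using the public nginx image from Docker Hub (the container name is illustrative):

$ docker images                     # lists static, read-only images stored locally
$ docker run -d --name web nginx    # creates and starts a container from the nginx image
$ docker ps                         # lists running containers, i.e. live instances of images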

v) What is the purpose of the docker pull command?


Ans:
The docker pull command is used to download a Docker image from a
remote registry (like Docker Hub) to the local machine, making it
available for creating containers.
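
For example, pulling a specific tagged image from Docker Hub:

$ docker pull ubuntu:22.04   # downloads the image from Docker Hub to the local machine
$ docker images ubuntu       # the image is now available locally for docker run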

vi) What is the purpose of a Docker Container?


Ans:
The purpose of a Docker container is to provide a lightweight, portable,
and consistent environment for running applications. It encapsulates
an application and its dependencies, ensuring it runs uniformly across
different systems without conflicts or inconsistencies.

vii) Differentiate between Git and GitHub


Ans:
Git is a version control system used to track changes in code, enabling
collaboration and managing code history. GitHub is a cloud-based
platform that hosts Git repositories, allowing developers to share,
collaborate, and manage Git-based projects online.
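
A short sketch of the relationship (the remote URL is a placeholder):

$ git init                                                       # Git: create a local repository
$ git remote add origin https://github.com/<user>/<repo>.git    # GitHub: attach a hosted remote
$ git push -u origin main                                        # publish local commits to GitHub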

viii) Distinguish between Git repository and Git working directory.


Ans:
A Git repository is a directory that contains all the project's version-controlled files, including the history of commits. A Git working directory, on the other hand, is the local folder where the files are currently being edited or worked on, reflecting the latest state of the project as per the repository.

ix) Define Jenkins pipeline.


Ans:
A Jenkins pipeline is a series of automated steps or stages in Jenkins
that define the process of building, testing, and deploying code. It is
used to automate the continuous integration and continuous delivery
(CI/CD) workflow.

x) What is the purpose of Jenkins plugins?


Ans:
The purpose of Jenkins plugins is to extend Jenkins' functionality by
adding new features or integrating with other tools and systems, such
as version control, build tools, and deployment platforms, to enhance
the automation of the CI/CD pipeline.

xi) Which programming languages can you use to define Jenkins pipelines?

Ans:
Jenkins pipelines are defined in a Jenkinsfile, using either the Scripted Pipeline DSL or the Declarative Pipeline syntax, both of which are based on the Groovy programming language.
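
As a minimal sketch, a Declarative Pipeline is usually checked into the repository as a Jenkinsfile; the stage contents below are placeholders:

$ cat > Jenkinsfile <<'EOF'
pipeline {
    agent any
    stages {
        stage('Build') {
            steps { sh 'echo building...' }   // placeholder build step
        }
        stage('Test') {
            steps { sh 'echo testing...' }    // placeholder test step
        }
    }
}
EOF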

xii) What is the function of a Kubernetes node?


Ans:
A Kubernetes node is a physical or virtual machine that runs
containerized applications in a Kubernetes cluster. It provides the
necessary resources (CPU, memory, storage) to run pods and is
managed by the Kubernetes control plane.

xiii) What does kubectl stand for, and what is its purpose?
Ans:
kubectl stands for Kubernetes control. It is a command-line tool used to
interact with and manage Kubernetes clusters, allowing users to deploy
applications, manage cluster resources, and view logs.
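
Typical kubectl commands (the manifest file and pod name are placeholders):

$ kubectl get pods                    # list pods in the current namespace
$ kubectl apply -f deployment.yaml    # create or update resources from a manifest
$ kubectl logs my-pod                 # view logs from the pod named my-pod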

xiv) Define a pod in Kubernetes.


Ans:
A pod in Kubernetes is the smallest deployable unit, which can contain
one or more containers that share the same network and storage
resources. Pods are used to run applications and ensure they are
managed together as a single unit.
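
A minimal single-container pod manifest, as a sketch (the names and image tag are illustrative):

$ cat > nginx-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
spec:
  containers:
  - name: nginx        # containers in this list share the pod's network and storage
    image: nginx:1.27
EOF
$ kubectl apply -f nginx-pod.yaml   # schedule the pod onto a node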

ESSAYS

1. What are the key benefits of using containers for application deployment?
Ans:
Containers have become an essential technology for modern application
deployment. They provide several key benefits, which include:

Portability: Containers encapsulate applications and their dependencies into a single unit, ensuring that they can run consistently across different environments. Whether on a developer’s local machine, a test environment, or a production server, containers eliminate issues related to differing software configurations, making it easier to deploy applications across diverse platforms.

Scalability: Containers are lightweight and can be rapidly scaled up or down depending on demand. With orchestration tools like Kubernetes, containerized applications can automatically adjust resources based on traffic, ensuring efficient use of infrastructure while maintaining performance.

Isolation and Security: Each container runs in its own isolated environment,
which helps to prevent conflicts between applications or different versions of
dependencies. This isolation also enhances security, as vulnerabilities in one
container are less likely to impact others.

Resource Efficiency: Unlike virtual machines, containers share the host system's operating system kernel, allowing them to use fewer resources and start up more quickly. This leads to improved performance and more efficient utilization of server resources.

Consistency across Environments: Containers ensure that an application behaves the same way in different stages of the development pipeline, from development to testing to production. This consistency reduces the likelihood of "it works on my machine" problems and makes it easier to debug and test.
2. Explain how to install Docker on a Linux-based operating system,
including package management tools used in the installation process.
Ans:
Installing Docker on Linux
If you are running Linux, you will need to install Docker directly. You should be logged in as a user with sudo privileges. First, you will need to ensure that you have the command-line utility cURL. Do this by opening a terminal and typing:
Step 1
$ which curl

• If cURL is not installed, update your package index and install it using:
Step 1.1
$ sudo apt-get update
$ sudo apt-get install curl

Now that you have cURL, you can use it to download and run the Docker installation script:
Step 2
$ curl -fsSL https://get.docker.com/ | sh

Add your account to the docker group:
Step 3
$ sudo usermod -aG docker <your_username>
This step is required to be able to run Docker commands as a non-root user.
You will have to log out and log back in for the change to take effect.
Now you should have Docker! Verify that it is installed by running the hello-world container:
Step 4

$ docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
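
If the hello-world container prints its welcome message, the installation works end to end. As an optional sanity check, you can also confirm that both the client and the daemon respond:

$ docker --version   # client version only
$ docker info        # errors out if the daemon is unreachable or permissions are wrong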

3. Explain the 3-tree architecture in Git. Discuss the role of the working directory, staging area, and local repository in managing changes.
Ans:
1. 3-Tree Architecture
In Git, the concept of 3-tree architecture is central to how the system
organizes and manages its data. This architecture consists of three key
components:
i. Working Directory
The working directory is where you actively work on your project
files. It reflects the current state of the files that you're editing.
Changes made here are not tracked until you stage them. This
directory allows you to make edits, create new files, or delete
existing ones.
ii. Staging Area (Index)
The staging area, also known as the index, is a critical intermediary
between your working directory and the repository. When you run
the git add command, you stage changes, preparing them for
the next commit. This allows you to select which changes you want
to include, enabling a more controlled commit process.
iii. Repository (Commit History)
The repository is where Git stores all the commits. It contains the
complete history of your project, represented as a series of
snapshots. Each commit points to its parent commit(s), forming a
directed acyclic graph (DAG). This structure enables efficient
tracking of changes and allows you to revert to previous versions if
necessary.
Workflow Using the 3-Tree Architecture
i. Modify Files: You edit files in your working directory.
ii. Stage Changes: Use git add to move changes to the staging area.
This allows you to review and organize what will be included in your
next commit.
iii. Commit Changes: Run git commit to save the staged changes to
the repository. This creates a new commit, which captures the state
of your project at that moment.
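The workflow above maps directly onto everyday commands (the file name is illustrative):

$ echo "draft" > notes.txt    # 1. modify a file in the working directory
$ git add notes.txt           # 2. stage it (working directory -> staging area)
$ git commit -m "Add notes"   # 3. record it (staging area -> repository)
$ git status                  # shows which tree each change currently sits in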
Benefits of the 3-Tree Architecture
• Separation of Concerns: Each layer (working directory, staging area,
and repository) has a distinct role, making it easier to manage your
project's state.
• Controlled Commits: The staging area allows you to prepare and
curate commits, combining changes as needed without
immediately affecting the repository.
• Efficient Data Management: By separating the different states, Git
can efficiently manage and track changes without unnecessary
overhead.
The 3-tree architecture in Git—comprising the working directory,
staging area, and repository—provides a structured approach to
version control. It enhances flexibility, control over changes, and
efficient data management, making Git a powerful tool for developers.

4. Discuss the structure of a Docker Image and its components


Ans:
A Docker image is a lightweight, stand-alone, and executable package that
contains everything needed to run a piece of software, including the code,
runtime, libraries, environment variables, and configurations. Understanding
the structure of a Docker image is essential for working effectively with
Docker. Here's a simplified breakdown of its components and structure:
i) Layers
Docker images are built using layers. Each layer represents a set of
changes made to the image, typically corresponding to a single
command in the Dockerfile (e.g., RUN, COPY, ADD). These layers
are stacked on top of each other, with each layer building on the
one below it.
• Base Layer: The bottom-most layer, often a minimal operating
system (e.g., Ubuntu, Alpine), is usually specified in the FROM
instruction of the Dockerfile.
• Intermediate Layers: These layers contain changes like installing
dependencies, setting environment variables, or copying files.
• Final Layer: The topmost layer where your application or service
is ready to run.
ii) Image Layers and the Union File System
Docker uses a Union File System (UFS), which allows it to combine
multiple layers into a single unified view. The benefit of this is:
• Efficient storage: Layers are shared between images, so
common layers don't take up space in each image.
• Layer caching: Docker can cache layers from previous builds,
speeding up image creation.
iii) File system Snapshot
Each layer is essentially a file system snapshot containing changes
that happened during the build. For example:
• Adding files.
• Installing packages.
• Modifying configurations.
These layers are immutable, meaning they cannot be changed once
created. When a layer is updated, a new layer is created on top of it.
iv) Dockerfile Instructions
A Docker image is typically built using a Dockerfile, which is a script
containing various instructions to assemble the image. Some
common instructions in a Dockerfile include:
• FROM: Specifies the base image.
• RUN: Executes commands inside the image (e.g.,
install software).
• COPY / ADD: Copies files from the host into the image.
• WORKDIR: Defines the working directory for running
commands.
• CMD: Specifies the default command to run when the
container starts.
v) Tags
Each image can have one or more tags (e.g., myapp:v1.0), which
represent different versions or configurations of an image. Tags
make it easier to refer to specific versions.
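
Putting these pieces together, here is a sketch of a small Dockerfile and a tagged build; app.sh is a hypothetical script and the base image choice is illustrative:

$ cat > Dockerfile <<'EOF'
# Base layer: a minimal operating system image
FROM alpine:3.19
# Intermediate layer: install a dependency
RUN apk add --no-cache curl
# Set the working directory for later instructions
WORKDIR /app
# Copy a (hypothetical) script from the build context
COPY app.sh .
# Default command when a container starts
CMD ["./app.sh"]
EOF
$ docker build -t myapp:v1.0 .   # each instruction above becomes a layer; the result is tagged myapp:v1.0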

5. Differentiate between git merge and git rebase.


Ans:
Merge vs. Rebase:
• Purpose: git merge combines the histories of two branches; git rebase integrates the changes from one branch into another by replaying its commits on top.
• Logs: Merge logs show the complete history of how branches came together, including a merge commit; rebase logs are linear, because the history is rewritten as the commits are replayed.
• Commits added: A merge ties the feature branch into the target branch with a single merge commit; a rebase re-applies each feature-branch commit, so the same number of commits lands on top of the target branch.
• When to use: Merge is best used when the target branch is shared; rebase is best used on private, local branches.
• Effect on history: Merge preserves history; rebase rewrites it.
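
The corresponding commands, with illustrative branch names:

$ git checkout main
$ git merge feature    # ties feature into main with a merge commit; history is preserved
$ git checkout feature
$ git rebase main      # replays feature's commits on top of main; history becomes linear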

6. Discuss the process of scheduling a Jenkins build job.


Ans:
Scheduling build jobs in Jenkins allows you to automate the execution of jobs
at specific times or intervals, which can help with regular tasks like nightly
builds or periodic testing.
Step 1: Access the Job Configuration
1. Open Jenkins Dashboard: Navigate to your Jenkins instance in a web
browser.
2. Select Your Job: Find and click on the job you want to schedule.
3. Configure the Job: Click on Configure in the left sidebar to access the
job configuration settings.
Step 2: Set Up Build Triggers
1. Scroll to the Build Triggers Section: In the job configuration page, look
for the Build Triggers section.
2. Enable a schedule:
o Check the box for Build periodically. This allows you to specify a
schedule using a cron-like syntax.
o In the text box that appears, enter a schedule. For example:
▪ H 2 * * * - runs the job once a day, during the 2 AM hour.
▪ H/15 * * * * - runs the job every 15 minutes.
Cron Syntax Breakdown:
o The schedule format is: MINUTE HOUR DOM MONTH DOW
▪ MINUTE: 0-59
▪ HOUR: 0-23
▪ DOM: Day of Month (1-31)
▪ MONTH: Month (1-12)
▪ DOW: Day of Week (0-7, where both 0 and 7 represent
Sunday)
o You can use:
▪ * for "every"
▪ H to distribute load evenly (Jenkins hashes the job name to pick a consistent value within the field's range, so scheduled jobs do not all fire at the same moment)
3. Alternative Scheduling Options:
o Poll SCM: If you want Jenkins to check for changes in your source
code management system (like Git) at specified intervals, check
the Poll SCM option and set the schedule similarly.
Step 3: Save Configuration
• After configuring the build triggers, scroll down and click the Save
button to apply your changes.
Step 4: Monitor Scheduled Builds
• Build History: Once scheduled, you can monitor when builds are
triggered from the job’s build history.
• Logs: After each scheduled build, check the build logs to ensure
everything runs as expected.

Example Cron Expressions
Here are some common cron expressions for scheduling:
• Every day at midnight:
H 0 * * *
• Every hour:
H * * * *
• Every Monday at 10 AM:
H 10 * * 1
• Every 5 minutes:
H/5 * * * *
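
The same cron syntax also works inside a Declarative Pipeline; a minimal sketch (the build step is a placeholder):

$ cat > Jenkinsfile <<'EOF'
pipeline {
    agent any
    triggers {
        cron('H 2 * * *')   // once a day during the 2 AM hour
    }
    stages {
        stage('Build') {
            steps { sh 'echo nightly build' }   // placeholder step
        }
    }
}
EOF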

Scheduling builds in Jenkins is a straightforward process that enhances automation and efficiency. By using cron expressions, you can customize when your jobs run, helping ensure timely builds and tests without manual intervention.
7. Describe the architecture of Kubernetes.
Ans:
Kubernetes is an open-source container orchestration platform
designed to automate the deployment, scaling, and management of
containerized applications. It operates in a highly distributed manner
and utilizes a control plane/worker-node architecture, consisting of several components
that work together to manage the lifecycle of containers effectively.
The architecture of Kubernetes can be broadly divided into two main
sections: the Control Plane and the Node (or Worker Node). Each of
these has distinct roles and responsibilities in managing applications in
a Kubernetes cluster.

1. Control Plane
The Control Plane is the brain of the Kubernetes cluster. It is responsible
for managing the overall state of the system, ensuring the desired state
of applications, and orchestrating various tasks within the cluster. The
Control Plane consists of the following key components:
• API Server (kube-apiserver): The API Server acts as the entry point
to the Kubernetes cluster. It exposes the Kubernetes API, which
allows users and clients to interact with the cluster. All requests,
such as creating or managing pods, services, deployments, etc.,
are sent to the API server, which validates and processes them.
• Controller Manager (kube-controller-manager): The Controller
Manager is responsible for ensuring that the current state of the
cluster matches the desired state defined by the user. It runs a set
of controllers that monitor the state of the cluster and take
corrective actions. For example, if a pod goes down, the
Replication Controller will create a new pod to maintain the
desired number of replicas.
• Scheduler (kube-scheduler): The Scheduler is responsible for
selecting which node (worker machine) will run the newly created
pods. It takes into account factors such as resource requirements,
hardware constraints, and other policies to make optimal
scheduling decisions.
• etcd: etcd is a distributed key-value store that Kubernetes uses to store all cluster data, including configuration details, state data, and metadata. It is the source of truth for all cluster-related information and is essential for maintaining cluster consistency.
2. Nodes (Worker Nodes)
The Node (also known as a Worker Node) is where the actual execution of
containerized applications happens. Each node runs the necessary
components to run and manage containers. A Kubernetes cluster can
have multiple worker nodes. The key components on a node include:
• Kubelet: The Kubelet is an agent that runs on each worker node
and ensures that containers are running as expected. It
communicates with the API server to receive instructions on what
containers should run on the node and ensures that the containers
are healthy and functioning correctly.
• Kube Proxy: The Kube Proxy is responsible for managing network
rules for pod communication. It facilitates internal networking and
load balancing, ensuring that requests to services are correctly
routed to the appropriate pods running on the worker nodes.
• Container Runtime: The container runtime is the software
responsible for running the containers. Kubernetes supports various
container runtimes, such as Docker, containerd, and CRI-O. The
runtime handles the creation, execution, and management of
containers within the node.
3. Pod
A Pod is the smallest and most basic deployable unit in Kubernetes. It
can contain one or more containers, which share the same network
namespace, storage, and other resources. Pods are the units that
Kubernetes manages, schedules, and deploys across nodes. While
Kubernetes handles individual containers, it works primarily with pods to
manage deployments and scaling.
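
You can see both halves of this architecture from the command line; on many clusters the control-plane components themselves run as pods in the kube-system namespace (the node name is a placeholder):

$ kubectl get nodes                   # worker nodes and their status
$ kubectl get pods -n kube-system     # control-plane and system components (API server, scheduler, etcd, kube-proxy)
$ kubectl describe node <node-name>   # capacity, kubelet status, and the pods scheduled on one node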
