DevOps Questions
DevOps Lifecycle
DevOps defines an agile relationship between development and operations. It is a process practiced by the development team and operations engineers together, from the beginning of the product to its final stage.
Learning DevOps is not complete without understanding the DevOps lifecycle phases.
The DevOps lifecycle includes seven phases as given below:
1) Continuous Development
This phase involves the planning and coding of the software. The vision of the project is decided during planning, and the developers then begin writing the application code. No specific DevOps tools are required for planning, but several tools are used for maintaining the code.
2) Continuous Integration
This stage is the heart of the entire DevOps lifecycle. It is a software development practice in which developers commit changes to the source code frequently, often on a daily or weekly basis. Every commit is then built, which allows problems to be detected early if they are present. Building the code involves not only compilation but also unit testing, integration testing, code review, and packaging.
The code supporting new functionality is continuously integrated with the existing
code. Therefore, there is continuous development of software. The updated code
needs to be integrated continuously and smoothly with the systems to reflect changes
to the end-users.
Jenkins is a popular tool used in this phase. Whenever there is a change in the Git repository, Jenkins fetches the updated code and prepares a build of it, an executable artifact such as a WAR or JAR file. This build is then forwarded to the test server or the production server.
3) Continuous Testing
In this phase, the developed software is continuously tested for bugs. For constant testing, test automation tools such as TestNG, JUnit, and Selenium are used. These tools allow QA engineers to test multiple code bases thoroughly in parallel to ensure that there are no flaws in the functionality. In this phase, Docker containers can be used to simulate the test environment.
Selenium does the automation testing, and TestNG generates the reports. This entire testing phase can be automated with the help of a continuous integration tool such as Jenkins.
Automated testing saves a lot of the time and effort of executing tests manually. Apart from that, report generation is a big plus: the task of evaluating the test cases that failed in a test suite gets simpler. We can also schedule the execution of the test cases at predefined times. After testing, the code is continuously integrated with the existing code.
4) Continuous Monitoring
Monitoring is a phase that involves all the operational factors of the entire DevOps
process, where important information about the use of the software is recorded and
carefully processed to find out trends and identify problem areas. Usually, the
monitoring is integrated within the operational capabilities of the software application.
It may take the form of documentation files, or it may produce large-scale data about application parameters while the application is in continuous use. System errors such as an unreachable server or low memory are resolved in this phase, which maintains the security and availability of the service.
5) Continuous Feedback
The application development is consistently improved by analyzing the results from
the operations of the software. This is carried out by placing the critical phase of
constant feedback between the operations and the development of the next version
of the current software application.
Continuity is the essential factor in DevOps, as it removes the unnecessary steps otherwise required to take a software application from development, through using it to find its issues, to producing a better version. Without continuous feedback, the efficiency the application could achieve is lost and the number of interested customers shrinks.
6) Continuous Deployment
In this phase, the code is deployed to the production servers. It is also essential to ensure that the code is correctly deployed on all the servers.
The new code is deployed continuously, and configuration management tools play an essential role in executing tasks frequently and quickly. Popular tools used in this phase include Chef, Puppet, Ansible, and SaltStack.
7) Continuous Operations
All DevOps operations are based on continuity, with complete automation of the release process, allowing the organization to continually accelerate its overall time to market.
It is clear from this discussion that continuity is the critical factor in DevOps: it removes the steps that distract development, lengthen the time needed to detect issues, and delay a better version of the product by several months. With DevOps, we can make any software product more efficient and increase the overall number of customers interested in the product.
b) What is a distributed VCS?
What is a “version control system”?
Version control systems are a category of software tools that help record changes made to files by keeping track of modifications done to the code.
Why is a Version Control System so Important?
A software product is developed in collaboration by a group of developers who may be located at different places, each contributing some specific functionality or feature. In order to contribute to the product, they make modifications to the source code (either by adding or removing code). A version control system is a kind of software that helps the developer team efficiently communicate and manage (track) all the changes that have been made to the source code, along with information such as who made each change and what was changed. A separate branch is created for every contributor who makes changes, and the changes are not merged into the original source code until they have been analyzed; as soon as the changes are given the green signal, they are merged into the main source code. This not only keeps the source code organized but also improves productivity by making the development process smooth.
Basically, a version control system keeps track of the changes made to a particular piece of software and takes a snapshot of every modification. Suppose a team of developers adds new functionality to an application and the updated version does not work properly; because the version control system keeps track of our work, we can discard the new changes and continue with the previous version.
Benefits of the version control system:
• Enhances the project development speed by providing efficient collaboration,
• Improves productivity, expedites product delivery, and develops the skills of the employees through better communication and assistance,
• Reduces the possibility of errors and conflicts during project development through traceability of every small change,
• Employees or contributors of the project can contribute from anywhere, irrespective of their geographical locations, through the VCS,
• For each contributor to the project, a different working copy is maintained and is not merged into the main file until the working copy is validated. Popular examples are Git, Helix Core, and Microsoft TFS,
• Helps in recovery in case of any disaster or contingent situation,
• Tells us who made which changes, when, and why.
Use of Version Control System:
• A repository: It can be thought of as a database of changes. It contains
all the edits and historical versions (snapshots) of the project.
• Copy of work (sometimes called a checkout): It is the personal copy of all the files in a project. You can edit this copy without affecting the work of others, and you can finally commit your changes to a repository when you are done making them.
• Working in a group: Consider yourself working in a company where you are asked to work on a live project. You can't change the main code as it is in production, and any change may cause inconvenience to the users; you are also working in a team, so you need to collaborate with your team and adapt to their changes. Version control helps with this by merging the different requests into the main repository without introducing undesirable changes. You can test functionality without putting it live, and you don't need to download and set everything up each time: just pull the changes, make your own changes, test them, and merge them back.
Types of Version Control Systems:
• Local Version Control Systems
• Centralized Version Control Systems
• Distributed Version Control Systems
Local Version Control Systems: This is one of the simplest forms; it has a database that keeps all the changes to files under revision control. RCS is one
of the most common VCS tools. It keeps patch sets (differences between files)
in a special format on disk. By adding up all the patches it can then re-create
what any file looked like at any point in time.
Centralized Version Control Systems: Centralized version control systems contain just one repository globally, and every user needs to commit in order to reflect their changes in the repository. It is possible for others to see your changes by updating.
Two things are required to make your changes visible to others which are:
• You commit
• They update
The benefit of a CVCS (Centralized Version Control System) is that it enables collaboration among developers and provides insight, to a certain extent, into what everyone else is doing on the project. It also allows administrators fine-grained control over who can do what.
It has some downsides as well, which led to the development of DVCS. The most obvious is the single point of failure that the centralized repository represents: if it goes down, collaboration and saving versioned changes are not possible during that period. And what if the hard disk of the central database becomes corrupted and proper backups haven't been kept? You lose absolutely everything.
Distributed Version Control Systems: Distributed version control systems
contain multiple repositories. Each user has their own repository and working
copy. Just committing your changes will not give others access to your
changes. This is because commit will reflect those changes in your local
repository and you need to push them in order to make them visible on the
central repository. Similarly, when you update, you do not get others'
changes unless you have first pulled those changes into your repository.
To make your changes visible to others, 4 things are required:
• You commit
• You push
• They pull
• They update
The most popular distributed version control systems are Git and Mercurial.
They help us overcome the problem of single point of failure.
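To make the sequence concrete in Git terms, here is a small sketch; the remote name `origin` and branch `main` are common defaults and only illustrative:
```
# You commit the change to your local repository
git commit -am "Describe the change"

# You push so the change becomes visible on the central repository
git push origin main

# They pull (fetch and merge) your change into their repository and working copy
git pull origin main
```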
Purpose of Version Control:
• Multiple people can work simultaneously on a single project. Everyone
works on and edits their own copy of the files and it is up to them
when they wish to share the changes made by them with the rest of
the team.
• It also enables one person to use multiple computers to work on a
project, so it is valuable even if you are working by yourself.
• It integrates the work that is done simultaneously by different
members of the team. In some rare cases, when conflicting edits are
made by two people to the same line of a file, then human assistance is
requested by the version control system in deciding what should be
done.
• Version control provides access to the historical versions of a project.
This is insurance against computer crashes or data loss. If any mistake is
made, you can easily roll back to a previous version. It is also possible
to undo specific edits without losing the work done in the meantime.
a) What are the fundamental differences between DevOps & Agile?
Agile is an iterative approach to developing and managing software: it focuses on delivering small increments of working software and typically concludes once development and testing are done. DevOps is a development-and-operations approach that extends this mindset through deployment, operations, and monitoring of the product in production.
In DevOps, at the time of requirement any of the members across the development and operations groups can lend a hand, instead of waiting for the team leads or specialists, which removes impediments and hand-offs between the teams.
• Chef is used to deploy and manage both on-premises servers and those
hosted in the cloud.
• It uses Ruby as its reference language.
Chef Terminology
Before proceeding toward its components and basic commands, let’s first
understand the basic terms used in Chef.
• Node: A managed machine that executes the configuration for the node
when the client runs
• Client: An authorized user in the Chef API
• Cookbook: A collection of recipes, resources, attributes, and definitions to
configure a service or an application.
• Recipe: A list of resources to be added to a node. As it is written in Ruby, it
gives us control over anything we would do in Ruby.
Chef Components
Now, let’s check out the important components used in Chef.
• Knife: A system admin tool used to interact with the server to take
cookbooks and custom config and load them into the server. Bootstrapping
certain servers is also possible with this component.
o Running knife with no arguments shows a list of the commands that are supported.
• Chef client: It runs on managed servers, gathers information about itself,
syncs the cookbooks, and compiles the collection of resources and
converges it with the machine state.
• Web UI: A web-based interface that allows us to browse and edit cookbooks,
nodes, and clients.
• Server/API: The heart of the system; it exposes a REST API that is used by the other components and manages the knife tool, web interfaces, and nodes.
Knife
The Knife command-line tool is the primary way that a workstation communicates the
contents of its chef-repo directory with a Chef server. It also provides an interface to
manage nodes, cookbooks, roles, environments, and databags.
• A Knife command executed from the workstation uses the following format:
• knife subcommand [ARGUMENT] (options)
• For example, to view the details of a Chef user, execute the following command:
• knife user show USER_NAME
File: ~/chef-repo/.chef/knife.rb
log_level :info
log_location STDOUT
node_name 'username'
client_key '~/chef-repo/.chef/username.pem'
validation_client_name 'shortname-validator'
validation_key '~/chef-repo/.chef/shortname.pem'
chef_server_url 'https://123.45.67.89/organizations/shortname'
syntax_check_cache_path '~/chef-repo/.chef/syntax_check_cache'
cookbook_path [ '~/chef-repo/cookbooks' ]
• log_level: The amount of logging to be stored in the log file. The default
value, :info, notes that any informational messages will be logged. Other values
include :debug, :warn, :error, and :fatal.
• log_location: The location of the log file. The default value, STDOUT, is for standard
output logging. If set to another value then standard output logging will still be
performed.
• node_name: The username of the person using the workstation. This user
requires a valid authorization key located on the workstation.
• client_key: The location of the user’s authorization key.
• validation_client_name: The name for the server validation key determining
whether a node is registered with the Chef server. These values must match
during a chef-client run.
• validation_key: The path to your organization’s validation key.
• chef_server_url: The URL of the Chef server, with shortname being the defined
shortname of your organization (this can also be an IP
address). /organizations/shortname must be included in the URL.
• syntax_check_cache_path: The location in which knife stores information about
files checked for appropriate Ruby syntax.
• cookbook_path: The path to the cookbook directory.
Knife allows for a variety of other useful operations on the Chef server and nodes. View
Chef’s Knife documentation for a full list of all available commands.
b) List the types of handlers in Chef and explain them.
Use the chef_handler resource to enable handlers during a Chef Infra Client run. The
resource allows arguments to be passed to Chef Infra Client, which then applies the
conditions defined by the custom handler to the node attribute data collected during a
Chef Infra Client run, and then processes the handler based on that data.
The chef_handler resource is typically defined early in a node’s run-list (often being the
first item). This ensures that all of the handlers will be available for the entire Chef Infra
Client run.
Handler Types
• exception: An exception handler is used to identify situations that have caused a Chef Infra Client run to fail. An exception handler can be loaded at the start of a Chef Infra Client run by adding a recipe that contains the chef_handler resource to a node's run-list. An exception handler runs when the failed? property for the run_status object returns true.
• report: A report handler is used when a Chef Infra Client run succeeds and reports back on certain details about that Chef Infra Client run. A report handler can be loaded at the start of a Chef Infra Client run by adding a recipe that contains the chef_handler resource to a node's run-list. A report handler runs when the success? property for the run_status object returns true.
• start: A start handler is used to run events at the beginning of a Chef Infra Client run. A start handler can be loaded at the start of a Chef Infra Client run by adding the start handler to the start_handlers setting in the client.rb file, or by installing the gem that contains the start handler by using the chef_gem resource in a recipe in the chef-client cookbook. (A start handler may not be loaded using the chef_handler resource.)
b) Explain Maven build life cycle.
A Build Lifecycle is a well-defined sequence of phases, which define the order in
which the goals are to be executed. Here phase represents a stage in life cycle. As
an example, a typical Maven Build Lifecycle consists of the following sequence of
phases.
• test (testing): Tests the compiled source code using a suitable unit testing framework.
• package (packaging): This phase creates the JAR/WAR package as mentioned in the packaging element of POM.xml.
Maven repositories are of three types:
• local
• central
• remote
Local Repository
Maven local repository is a folder location on your machine. It gets created when you
run any maven command for the first time.
Maven local repository keeps all your project's dependencies (library jars, plugin jars, etc.). When you run a Maven build, Maven automatically downloads all the dependency jars into the local repository. It helps avoid referencing dependencies stored on a remote machine every time a project is built.
By default, Maven creates the local repository in the %USER_HOME%/.m2 directory. To override the default location, mention another path in the Maven settings.xml file available in the %M2_HOME%\conf directory.
Central Repository
Maven central repository is a repository provided by the Maven community. It contains a large number of commonly used libraries.
When Maven does not find a dependency in the local repository, it starts searching the central repository using the following URL − https://repo1.maven.org/maven2/
To browse the content of the central Maven repository, the Maven community provides a URL − https://search.maven.org/#browse. Using this site, a developer can search all the libraries available in the central repository.
Remote Repository
Sometimes, Maven does not find a mentioned dependency in the central repository either. It then stops the build process and outputs an error message to the console. To prevent such a situation, Maven provides the concept of a remote repository, which is the developer's own custom repository containing the required libraries or other project jars.
Remote repositories are declared in the project's pom.xml, and Maven will download from them any dependency that is not available in the central repository.
Maven Global and Local Repository
In Maven, there are two main types of repositories: the global repository and the
local repository. Here's an explanation of each:
The local repository lives on the developer's machine and is shared by all the Maven projects built there. It caches downloaded dependencies locally, which keeps builds fast and reproducible even if the global repository is inaccessible.
By default, Maven will check the local repository first for dependencies and, if
not found, will then search the global repository. If a dependency is not found in
either repository, Maven will fail to build the project.
Managing and configuring repositories is an essential part of working with
Maven, as it determines where dependencies are retrieved from and stored.
How do you configure a Git repository to run code sanity checking tools right before making commits, and prevent the commits if the tests fail?
To configure a Git repository to run code sanity checking tools before making commits and
prevent commits if the tests fail, you can utilize Git hooks. Git hooks are scripts that are
executed automatically at certain predefined points in the Git workflow.
1. Navigate to the root directory of your Git repository.
2. Locate the `.git` directory. If it is not visible, make sure you have enabled the display of hidden files in your file explorer.
3. Inside the `.git` directory, find the `hooks` directory. This directory contains various hook
scripts that Git can execute.
4. Choose the appropriate hook script to use. In this case, you can use the `pre-commit` hook,
which runs before a commit is made.
5. Create a file named `pre-commit` (with no file extension) in the `hooks` directory, or rename the sample `pre-commit.sample` file provided by Git.
6. Add your code sanity checking commands or scripts to the `pre-commit` file. These commands should include the tests or checks you want to run on the code.
You can include multiple commands or scripts to perform various checks like code
formatting, linting, unit tests, or any other checks relevant to your project.
7. Save the `pre-commit` file.
8. Make the `pre-commit` file executable by running the following command in the terminal:
```
chmod +x .git/hooks/pre-commit
```
9. Now, when you attempt to make a commit, the `pre-commit` hook script will be triggered.
It will run the defined code sanity checks. If any of the tests or checks fail, the commit will be
aborted, preventing the changes from being committed.
By configuring the `pre-commit` hook script in this way, you can ensure that code sanity
checks are performed automatically before making commits. This helps maintain the quality
and consistency of your codebase.
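For illustration, a minimal `pre-commit` hook might look like the sketch below. It assumes a Python project that uses flake8 for linting and pytest for unit tests; substitute whatever checks fit your own toolchain:
```
#!/bin/sh
# Hypothetical pre-commit hook: lint and test before allowing the commit.

echo "Running lint checks..."
flake8 . || exit 1      # a non-zero exit status aborts the commit

echo "Running unit tests..."
pytest -q || exit 1

exit 0
```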
Default (or Build) Lifecycle
This is the primary life cycle of Maven and is used to build the application. It has the
following 21 phases.
1. validate: Validates whether the project is correct and all necessary information is available to complete the build process.
2. initialize: Initializes build state, for example setting properties.
3. generate-sources: Generates any source code to be included in the compilation phase.
4. process-sources: Processes the source code, for example to filter any values.
5. generate-resources: Generates resources to be included in the package.
6. process-resources: Copies and processes the resources into the destination directory, ready for the packaging phase.
7. compile: Compiles the source code of the project.
8. process-classes: Post-processes the generated files from compilation, for example to do bytecode enhancement/optimization on Java classes.
9. generate-test-sources: Generates any test source code to be included in the compilation phase.
10. process-test-sources: Processes the test source code, for example to filter any values.
11. test-compile: Compiles the test source code into the test destination directory.
12. process-test-classes: Processes the generated files from test code compilation.
13. test: Runs tests using a suitable unit testing framework (JUnit is one).
14. prepare-package: Performs any operations necessary to prepare a package before the actual packaging.
15. package: Takes the compiled code and packages it in its distributable format, such as a JAR, WAR, or EAR file.
16. pre-integration-test: Performs actions required before integration tests are executed, for example setting up the required environment.
17. integration-test: Processes and deploys the package, if necessary, into an environment where integration tests can be run.
18. post-integration-test: Performs actions required after integration tests have been executed, for example cleaning up the environment.
19. verify: Runs any checks to verify the package is valid and meets quality criteria.
20. install: Installs the package into the local repository, which can be used as a dependency in other projects locally.
21. deploy: Copies the final package to the remote repository for sharing with other developers and projects.
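Invoking any phase from the command line runs every earlier phase of the default lifecycle first. The following invocations are standard Maven usage, shown here only as a quick illustration:
```
# Runs validate, compile, test, ... up to and including package
mvn package

# Cleans previous output, then runs the default lifecycle up to install,
# placing the built artifact in the local repository
mvn clean install
```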
The Maven Release Plugin is a plugin for Apache Maven, a popular build automation tool for
Java projects. The Release Plugin automates the release process by providing a set of goals
and phases to manage the versioning, tagging, and deployment of software releases.
1. Versioning: The Release Plugin helps manage the versioning of your project. It enables you
to change the version of your project's POM (Project Object Model) and update the SCM
(Source Code Management) with the new version.
2. Preparing a release: When you execute the Release Plugin, it goes through a series of
steps to prepare a release. It creates a new branch for the release and updates the POM files
with the release version (e.g., from 1.0.0-SNAPSHOT to 1.0.0). It also verifies that the project
builds successfully and that all necessary tests pass.
3. Tagging the release: Once the release is prepared, the Release Plugin creates a tag in the
SCM with the release version. This tag represents a snapshot of the project at that particular
release point.
4. Performing the release: After the release is tagged, the Release Plugin performs the actual
release process. It builds the project, creates distribution packages (if configured), and
deploys artifacts to a remote repository. This could be a local repository manager or a
remote Maven repository like Maven Central.
5. Updating the version: After the release is performed, the Release Plugin increments the
project version to the next development version (e.g., from 1.0.0 to 1.0.1-SNAPSHOT). This
ensures that subsequent development work uses an updated version.
The Maven Release Plugin simplifies the process of releasing software by automating many
of the manual steps involved. It ensures consistency in versioning, tags releases in the SCM,
and facilitates the deployment of artifacts to repositories. This helps streamline the release
workflow and reduces the potential for human error.
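In practice, the plugin is usually driven by two goals, release:prepare and release:perform. A typical (illustrative) invocation from the project root looks like this:
```
# Update the POM to the release version, run the build, and tag the release in the SCM
mvn release:prepare

# Check out the tag, build it, and deploy the release artifacts to the remote repository
mvn release:perform
```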
In Chef, environments are used to define the specific configuration settings for different stages or
environments of your infrastructure, such as development, testing, staging, and production.
Environments allow you to manage and control the configuration of nodes (servers) based on their
assigned environment.
Here's how you can create environments and add servers to environments in Chef:
1. Create an Environment:
- Open your Chef management console or use the Chef Development Kit (ChefDK) command-line
tools.
- Define the environment attributes, such as name, description, and default cookbook versions, in a
JSON file. For example, create a file named `myenvironment.json` with the following content:
```json
"name": "myenvironment",
"cookbook_versions": {
```
This example specifies the environment name as "myenvironment", provides a description, and
sets the desired version of the "mycookbook" to 1.0.0.
- Create the environment using the Chef management console or the `knife` command-line tool:
```shell
knife environment from file myenvironment.json
```
- Identify the nodes (servers) that you want to add to the environment. Each node in Chef has a
unique identifier, such as its hostname or IP address.
- Set the environment for each node either through the Chef management console or by using the
`knife` command-line tool.
- Using the `knife` command-line tool, you can assign an environment to a node with the following
command:
```shell
knife node environment_set NODE_NAME ENVIRONMENT_NAME
```
Replace `NODE_NAME` with the name of the node (server) and `ENVIRONMENT_NAME` with the
name of the desired environment.
- Alternatively, you can edit the node's attributes directly in the Chef management console and set
the `chef_environment` attribute to the desired environment name.
- Repeat the above steps for each node you want to add to the environment.
Once you have created the environments and assigned nodes to them, you can apply environment-
specific configuration settings by using cookbooks, roles, or attributes specific to each environment.
This allows you to manage the desired state of your infrastructure based on the assigned
environment, making it easier to maintain consistency and control throughout your infrastructure.
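To confirm the setup, you can list the environments and check which environment a node belongs to. These are standard knife subcommands; `NODE_NAME` is a placeholder:
```shell
# List all environments known to the Chef server
knife environment list

# Show the environment currently assigned to a node
knife node show NODE_NAME -a chef_environment
```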
14. How do you configure knife? Execute some commands to test the connection between knife and the workstation.
To configure `knife` and test the connection between `knife` and the workstation, you need to
perform the following steps:
- Download and install ChefDK on your workstation. ChefDK provides the necessary tools, including
`knife`, for managing Chef infrastructure.
- Follow the installation instructions specific to your operating system from the official Chef
website.
2. Configure `knife`:
```
knife configure
```
- Client name: Enter the name of the client associated with `knife`.
- Validation key path: Specify the path to the validation key file provided by the Chef server.
- Knife configuration file: Choose a path for the `knife.rb` configuration file.
- Once `knife` is configured, you can test the connection to the Chef server by executing a simple
command.
- Run the following command to list the available nodes in your Chef server:
```
knife node list
```
- If the connection is successful, you should see a list of nodes registered with the Chef server.
- If the connection fails, double-check your `knife.rb` configuration file and ensure that the
provided server URL, client name, and validation key path are correct.
By following these steps, you can configure `knife` and test the connection between `knife` and your
workstation. This allows you to interact with the Chef server, manage nodes, roles, environments,
cookbooks, and perform various other operations using the `knife` command-line tool.
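A couple of additional sanity checks are often run at this point; these are standard knife subcommands, shown here only as a sketch:
```
# Verify that the workstation trusts the Chef server's SSL certificate
knife ssl check

# List the API clients registered on the Chef server
knife client list
```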
1. `apt`: Manage packages on Debian-based systems using the Advanced Packaging Tool (APT).
Example: `sudo apt update` - Update the package lists using administrative privileges.
Example: `apt install packageName` - Install a package.
2. `yum`: Manage packages on Red Hat-based systems using the Yellowdog Updater Modified (YUM).
Example: `yum install packageName` - Install a package.
3. `systemctl`: Control system services and manage the system's systemd service manager.
4. `passwd`: Change user account passwords.
Example: `passwd username` - Change the password for the user "username".
5. `usermod`: Modify user account properties.
6. `chown`: Change the ownership of files and directories.
Example: `chown owner:group filename` - Change the owner and group of a file.
7. `chmod`: Change file and directory permissions.
Example: `chmod 644 filename` - Set read and write permissions for the owner and read permissions for the group and others.
8. `top`: Monitor processes and system resource usage.
Example: `top` - Display real-time system information, CPU usage, and memory usage.
These are just a few examples of Linux administration commands. Linux offers a vast array of
commands and utilities for system administration, allowing administrators to efficiently manage and
configure various aspects of the operating system.
1. Download the RPM package: Obtain the RPM package file from a trusted source or your software
provider.
2. Open a terminal window.
3. Navigate to the directory containing the RPM package: Use the `cd` command to change the directory to the location where the RPM package is saved.
4. Install the RPM package: Run the following command to install the RPM package:
```
rpm -i package.rpm
```
Replace "package.rpm" with the actual filename of the RPM package you downloaded.
5. Follow the installation prompts: The installation process will begin, and you may be prompted to
provide necessary information or confirm the installation. Follow the prompts to complete the
installation.
6. Verify the installation: After the installation is complete, you can verify it by running commands
associated with the installed software or checking the system's package management tools.
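For example, you might query the RPM database for the newly installed package; `package` below is a placeholder for the package name (not the .rpm filename):
```
# Show metadata for the installed package, confirming the installation
rpm -qi package
```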
2. Run the YUM command: YUM simplifies the installation process by automatically resolving dependencies.
```
sudo yum install package_name
```
Replace "package_name" with the name of the software package you want to install.
3. Follow the installation prompts: YUM will handle the installation process, including downloading
and installing the required packages and dependencies.
4. Verify the installation: After the installation is complete, you can verify it by running commands
associated with the installed software or checking the system's package management tools.
Note: For YUM to work, your system must have internet connectivity and be configured to access the
appropriate software repositories.
By following these steps, you can install software using RPM and YUM package management
systems. These steps ensure that the software is correctly installed on your Linux system, allowing
you to utilize the installed packages as needed.
1. Open-source nature: Linux is an open-source operating system, which aligns well with the
principles of DevOps. The open-source nature allows for flexibility, customization, and collaboration,
enabling DevOps teams to adapt Linux to their specific needs.
2. Compatibility and support: Linux is compatible with a wide range of software tools and platforms
commonly used in DevOps, such as Docker, Kubernetes, Git, Jenkins, Ansible, and many others. Linux
also has extensive community support and a large user base, which means you can find resources,
documentation, and assistance easily.
3. Stability and reliability: Linux is known for its stability and reliability, making it an ideal choice for
hosting critical infrastructure and services in a DevOps environment. The robust architecture and
efficient resource management of Linux contribute to high availability and uptime, minimizing
disruptions in the deployment and delivery processes.
4. Command-line interface (CLI) and scripting capabilities: Linux provides a powerful command-line
interface and scripting capabilities, allowing DevOps engineers to automate tasks, write scripts, and
create complex workflows. This enables streamlined and efficient management of infrastructure,
deployments, and configurations.
5. Package management: Linux distributions offer package management systems like APT (Advanced
Package Tool) and YUM (Yellowdog Updater Modified), which simplify the installation, updating, and
removal of software packages. This facilitates the deployment and management of dependencies
and ensures consistency across environments.
6. Security: Linux is known for its strong security features and access control mechanisms. DevOps
teams can leverage Linux's security features, such as user and group permissions, firewall settings,
SELinux, and encryption, to enhance the security posture of their infrastructure and applications.
7. Scalability and performance: Linux is highly scalable and performs well even on resource-
constrained systems. This scalability allows DevOps teams to easily scale their infrastructure
horizontally or vertically based on demand, ensuring efficient resource utilization and optimal
performance.
8. DevOps tool integration: Linux is the preferred platform for many popular DevOps tools, which are
often developed and optimized specifically for Linux environments. DevOps teams can take
advantage of the extensive tooling ecosystem available on Linux to automate processes, manage
configurations, orchestrate deployments, and monitor systems effectively.
Overall, Linux provides a solid foundation for DevOps practices by offering compatibility, flexibility,
stability, security, and powerful automation capabilities. It has become the de facto operating system
for many DevOps teams, empowering them to build and manage robust, scalable, and efficient
infrastructure and applications.
1. Update package lists: Open a terminal on your Linux system and update the local package lists to ensure you have the latest versions of packages.
- For Debian-based distributions:
```
sudo apt update
```
- For Red Hat-based distributions:
```
sudo yum check-update
```
2. Install Docker:
- For Debian-based distributions, use the following command to install Docker:
```
sudo apt install docker.io
```
- For Red Hat-based distributions, use the following command to install Docker:
```
sudo yum install docker
```
3. Start the Docker service and (on systemd-based systems) enable it to start on boot:
```
sudo systemctl start docker
```
```
sudo systemctl enable docker
```
4. Verify the installation:
```
docker version
```
- This command will display the version of Docker installed on your system.
5. (Optional) Run Docker without sudo:
- By default, Docker requires root (sudo) access to run commands. If you want to run Docker commands without using sudo, you can add your user to the `docker` group:
```
sudo usermod -aG docker $USER
```
- After adding your user to the `docker` group, you may need to log out and log back in for the changes to take effect.
Architecture of Docker
Docker makes use of a client-server architecture. The Docker client talks with the Docker daemon, which handles building, running, and distributing the Docker containers. The Docker client can run with the daemon on the same system, or we can connect the Docker client to a Docker daemon remotely. The Docker client and daemon interact with each other through a REST API, over a UNIX socket or a network.
Docker daemon manages all the services by communicating with other daemons. It manages
docker objects such as images, containers, networks, and volumes with the help of the API
requests of Docker.
Docker Client
With the help of the docker client, the docker users can interact with the docker. The docker
command uses the Docker API. The Docker client can communicate with multiple daemons.
When the Docker client runs any docker command in the terminal, the terminal sends instructions to the daemon. The Docker daemon receives those instructions from the Docker client in the form of commands and REST API requests.
The main objective of the docker client is to provide a way to direct the pull of images from
the docker registry and run them on the docker host. The common commands which are
used by clients are docker build, docker pull, and docker run.
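For example, those three commands map directly to everyday client usage; the image names and tags below are illustrative placeholders:
```
# Build an image from the Dockerfile in the current directory
docker build -t myapp:1.0 .

# Pull an image from the configured registry (Docker Hub by default)
docker pull nginx:latest

# Run a container from an image, mapping host port 8080 to port 80 in the container
docker run -d -p 8080:80 nginx:latest
```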
Docker Host
A Docker host is a type of machine that is responsible for running more than one container.
It comprises the Docker daemon, Images, Containers, Networks, and Storage.
Docker Registry
All the Docker images are stored in a Docker registry. There is a public registry known as Docker Hub that can be used by anyone, and we can also run our own private registry. With the help of the docker run or docker pull commands, we can pull the required images from our configured registry. Images are pushed into the configured registry with the help of the docker push command.
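As a sketch, pushing an image to a private registry typically means tagging it with the registry's address first; the registry host and image name below are made-up placeholders:
```
# Tag a local image for a private registry at registry.example.com:5000
docker tag myapp:1.0 registry.example.com:5000/myapp:1.0

# Push the tagged image to that registry
docker push registry.example.com:5000/myapp:1.0
```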
Docker Objects
Whenever we use Docker, we create and use images, containers, volumes, networks, and other objects. Now, let's discuss these Docker objects:
Docker Images
An image contains instructions for creating a docker container. It is just a read-only template.
It is used to store and ship applications. Images are an important part of the Docker experience, as they enable collaboration between developers in ways that were not possible earlier.
Docker Containers
Containers are created from Docker images; they are ready-to-run applications. With the help of the Docker API or CLI, we can start, stop, delete, or move a container (see the commands sketched below). A container can access only those resources that are defined in its image, unless additional access is configured when the image is built into a container.
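A minimal container lifecycle, using placeholder names, might look like this:
```
# Start a container in the background and give it a name
docker run -d --name web nginx:latest

# List the running containers
docker ps

# Stop and then remove the container
docker stop web
docker rm web
```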
Docker Storage
We can store data within the writable layer of the container, but it requires a storage driver. The storage driver controls and manages the images and containers on our Docker host.
Types of Docker Storage
1. Data Volumes: Data Volumes can be mounted directly into the filesystem of the
container and are essentially directories or files on the Docker Host filesystem.
2. Volume Container: In order to maintain the state (data) produced by a running container, Docker volume filesystems are mounted on Docker containers. Because volumes have a life cycle independent of the container and are stored on the host, it is simple for users to exchange file systems among containers and back up data.
3. Directory Mounts: A host directory can be specified and mounted as a volume in your container (see the example after this list).
4. Storage Plugins: Docker volume plugins enable us to integrate Docker containers with external volumes such as Amazon EBS, so we can maintain the state of the container.
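To make the volume and directory-mount options concrete, here is a small sketch; the volume, image, and path names are placeholders:
```
# Create a named data volume and mount it into a container
docker volume create app_data
docker run -d -v app_data:/var/lib/app myapp:1.0

# Bind-mount a host directory into a container (directory mount)
docker run -d -v /srv/app/config:/etc/app myapp:1.0
```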
Docker Networking
Docker networking provides complete isolation for Docker containers. This means a user can attach a Docker container to many networks. It requires very few OS instances to run the workload.
Types of Docker Network
1. Bridge: This is the default network driver. We can use it when different containers on the same Docker host need to communicate with each other (see the example after this list).
2. Host: Used when you don't need any network isolation between the container and the host.
3. Overlay: Enables swarm services running on different Docker hosts to communicate with each other.
4. None: It disables all networking.
5. macvlan: This driver assigns a MAC (Media Access Control) address to each container, making it appear as a physical device on the network.
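For instance, a user-defined bridge network can be created and used as follows; the network, container, and image names are placeholders:
```
# Create a user-defined bridge network
docker network create mynet

# Run two containers attached to it; they can reach each other by container name
docker run -d --name cache --network mynet redis:7
docker run -d --name app --network mynet myapp:1.0
```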