
MC4203 - CLOUD COMPUTING TECHNOLOGIES

UNIT V: MICROSERVICES AND DEVOPS
Defining Microservices - Emergence of Microservice Architecture – Design
patterns of Microservices – The Mini web service architecture –Microservice
dependency tree – Challenges with Microservices - SOA vs Microservice –
Microservice and API – Deploying and maintaining Microservices – Reason
for having DevOps – Overview of DevOps – Core elements of DevOps – Life
cycle of DevOps – Adoption of DevOps- DevOps Tools – Build, Promotion
and Deployment in DevOps - DevOps in Business Enterprises.

DEFINING MICROSERVICES
Microservice Architecture is a form of Service-Oriented Architecture. In the microservice architecture, there are a large number of microservices. By combining all the microservices, it constructs a big service. In the microservice architecture, all the services communicate with each other.
A microservice defines an approach to architecture that divides an application into a pool of loosely coupled services that implement business requirements. It is closely related to Service-Oriented Architecture (SOA). The most important feature of the microservice-based architecture is that it enables continuous delivery of a large and complex application.
Microservices help break an application into logically independent, smaller applications. For example, we can build a cloud application with the help of Amazon AWS with minimal effort.
EMERGENCE OF MICROSERVICE ARCHITECTURE
A typical Microservice Architecture (MSA) should consist of the following components:
1. Clients
2. Identity Providers
3. API Gateway
4. Messaging Formats
5. Databases
6. Static Content
7. Management
8. Service Discovery
1. Clients
The architecture starts with different types of clients, on different devices, trying to perform various management capabilities such as search, build, and configure.
2. Identity Providers
These requests from the clients are passed on to identity providers, who authenticate the client requests and communicate them to the API Gateway. The requests are then communicated to the internal services via the well-defined API Gateway.
3. API Gateway
Since clients don’t call the services directly, API Gateway acts as an entry
point for the clients to forward requests to appropriate microservices.

The advantages of using an API gateway include:
• All the services can be updated without the clients knowing.
• Services can also use messaging protocols that are not web-friendly.
• The API Gateway can perform cross-cutting functions such as providing security, load balancing, etc.
After receiving the requests of clients, the internal architecture consists of
microservices which communicate with each other through messages to handle
client requests.
4. Messaging Formats
There are two types of messages through which they communicate:
Synchronous Messages: In situations where clients wait for the responses from a service, microservices usually tend to use REST (Representational State Transfer), as it relies on a stateless, client-server model and the HTTP protocol. REST is used because, in a distributed environment, each and every functionality is represented as a resource on which operations are carried out.
Asynchronous Messages: In situations where clients do not wait for the responses from a service, microservices usually tend to use protocols such as AMQP, STOMP, or MQTT. These protocols are used in this type of communication since the nature of the messages is well defined and the messages have to be interoperable between implementations.
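A minimal sketch of the synchronous style in Python, assuming the third-party requests library and a hypothetical inventory-service URL:

    import requests

    # Synchronous REST: the client blocks until the service responds.
    response = requests.get("http://inventory-service/api/items/42", timeout=5)
    response.raise_for_status()     # fail fast on HTTP errors
    item = response.json()          # the resource representation (JSON)
    print(item)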
5. Databases
Each microservice owns a private database to capture its data and implement the respective business functionality. Also, the databases of microservices are updated through their service API only. The services provided by microservices are carried forward to any remote service which supports inter-process communication for different technology stacks.

6. Static Content
After the microservices communicate among themselves, they deploy the static content to a cloud-based storage service that can deliver it directly to the clients via Content Delivery Networks (CDNs).
Apart from the above components, there are some other components that appear in a typical Microservices Architecture:
7. Management
This component is responsible for balancing the services on nodes and identifying failures.
8. Service Discovery
This component acts as a guide for microservices to find the route of communication between them, as it maintains a list of services and the nodes on which they are located.
DESIGN PATTERNS OF MICROSERVICES
• Aggregator
• API Gateway
• Chained or Chain of Responsibility
• Asynchronous Messaging
• Database or Shared Data
• Event Sourcing
• Branch
• Command Query Responsibility Segregator
• Circuit Breaker
• Decomposition

i) Aggregator Pattern
Aggregator in the computing world refers to a website or program that collects related items of data and displays them. So, even in microservices patterns, the Aggregator is a basic web page which invokes various services to get the required information or achieve the required functionality.
Also, since the source of output gets divided on breaking the monolithic architecture into microservices, this pattern proves to be beneficial when you need an output combining data from multiple services. So, if we have two services, each having its own database, then an aggregator with a unique transaction ID would collect the data from each individual microservice, apply the business logic, and finally publish it as a REST endpoint. Later on, the data collected can be consumed by the respective services which require it.
The Aggregator Design Pattern is based on the DRY principle. Based on this principle, you can abstract the logic into a composite microservice and aggregate that particular business logic into one service.
So, for example, if you consider two services, Service A and Service B, then you can individually scale these services while providing the data to the composite microservice.
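A minimal sketch of an aggregator in Python using the third-party Flask and requests libraries; the service names and URLs are hypothetical:

    import requests
    from flask import Flask, jsonify

    app = Flask(__name__)

    @app.route("/customer-summary/<customer_id>")
    def customer_summary(customer_id):
        # Invoke the individual microservices that own the data.
        profile = requests.get(f"http://customer-service/customers/{customer_id}", timeout=5).json()
        orders = requests.get(f"http://order-service/orders?customer={customer_id}", timeout=5).json()
        # Apply the business logic and publish the result as one REST endpoint.
        return jsonify({"profile": profile, "orders": orders})

    if __name__ == "__main__":
        app.run(port=8080)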

ii) API Gateway Design Pattern


Microservices are built in such a way that each service has its own functionality. But when an application is broken down into small autonomous services, there are a few problems that a developer might face:
a. How can I request information from multiple microservices?
b. Different UIs require different data from the same backend database service.
c. How do I transform data according to consumer requirements from reusable microservices?
d. How do I handle multiple protocol requests?
Well, the solution to these kinds of problems could be the API Gateway Design Pattern. The API Gateway Design Pattern addresses not only the concerns mentioned above but solves many other problems. This microservice design pattern can also be considered a proxy service that routes a request to the concerned microservice. Being a variation of the Aggregator service, it can send the request to multiple services and similarly aggregate the results back to the composite or consumer service. The API Gateway also acts as the entry point for all the microservices and creates fine-grained APIs for different types of clients.

With the help of the API Gateway design pattern, the API gateway can convert a protocol request from one type to another. Similarly, it can also offload the authentication/authorization responsibility of the microservice.
So, once the client sends a request, the request is passed to the API Gateway, which acts as an entry point to forward the client's request to the appropriate microservice. Then, with the help of a load balancer, the load of the request is handled and the request is sent to the respective service. Microservices use Service Discovery, which acts as a guide to find the route of communication between each of them. Microservices then communicate with each other via a stateless server, i.e., either by HTTP request or a message bus.
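A minimal sketch of the routing role of a gateway in Python with Flask and requests; the route table and backend hosts are hypothetical, and a production gateway would add authentication, load balancing, and protocol translation:

    import requests
    from flask import Flask, Response, request

    app = Flask(__name__)

    # Map path prefixes to the internal microservices (hypothetical hosts).
    ROUTES = {
        "orders": "http://order-service:8000",
        "customers": "http://customer-service:8000",
    }

    @app.route("/<service>/<path:rest>", methods=["GET", "POST"])
    def proxy(service, rest):
        backend = ROUTES.get(service)
        if backend is None:
            return Response("Unknown service", status=404)
        # Forward the client's request to the appropriate microservice.
        upstream = requests.request(
            method=request.method,
            url=f"{backend}/{rest}",
            params=request.args,
            data=request.get_data(),
            timeout=5,
        )
        return Response(upstream.content, status=upstream.status_code)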

iii) Chained or Chain of Responsibility Pattern


The Chained or Chain of Responsibility design pattern produces a single output which is a combination of multiple chained outputs. So, if you have three services lined up in a chain, the request from the client is first received by Service A. Then, this service communicates with the next service, Service B, and collects data. Finally, Service B communicates with the third service, Service C, to generate the consolidated output. All these services use synchronous HTTP request/response messaging. Also, until the request passes through all the services and the respective responses are generated, the client does not get any output. So, it is always recommended not to make a long chain, as the client will wait until the chain is completed.
One more important aspect to understand is that the request from Service A to Service B may look different from the request from Service B to Service C. Similarly, the response from Service C to Service B may look completely different from the response from Service B to Service A.
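A minimal sketch of one hop of such a chain in Python with requests; the services and URLs are hypothetical, and each call blocks until the next service in the chain replies:

    import requests

    def handle_request(order_id):
        # Service A enriches the request by calling Service B...
        b = requests.get(f"http://service-b/orders/{order_id}", timeout=5).json()
        # ...and Service B (inlined here for brevity) would call Service C.
        c = requests.get(f"http://service-c/invoices/{b['invoice_id']}", timeout=5).json()
        # The consolidated output travels back down the chain to the client.
        return {"order": b, "invoice": c}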

iv) Asynchronous Messaging Design Pattern


From the above pattern, it is quite obvious that the client gets blocked or has to wait for a long time in synchronous messaging. But if you do not want the consumer to wait for a long time, then you can opt for Asynchronous Messaging. In this type of microservice design pattern, all the services can communicate with each other, but they do not have to communicate with each other sequentially. So, if you consider three services, Service A, Service B, and Service C, the request from the client can be sent directly to Service C and Service B simultaneously. These requests will be in a queue. Apart from this, the request can also be sent to Service A, whose response need not be sent back through the same service through which the request came.
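A minimal sketch of asynchronous messaging in Python using the third-party pika client for AMQP; a RabbitMQ broker on localhost and the queue name are assumptions:

    import pika

    # Publish a message to a queue and return immediately: the producer
    # does not wait for whichever service consumes the message.
    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()
    channel.queue_declare(queue="order-events")
    channel.basic_publish(exchange="", routing_key="order-events",
                          body='{"event": "order_placed", "order_id": 42}')
    connection.close()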

v) Database or Shared Data Pattern


For every application, there is a humongous amount of data present. So, when we break down an application from its monolithic architecture into microservices, it is very important to note that each microservice has a sufficient amount of data to process a request. So, the system can either have a database per service or a shared database per service. You can use database per service and shared database per service to solve various problems, such as:
• Duplication of data and inconsistency
• Different services having different kinds of storage requirements
• Business transactions that need to query data from multiple services
• De-normalization of data

vi) Event Sourcing Design Pattern


The event sourcing design pattern creates events for the changes in the application state. Also, these events are stored as a sequence of events to help developers track which change was made when. So, with the help of this, you can always adjust the application state to cope with past changes. You can also query these events for any data change and simultaneously publish these events from the event store. Once the events are published, you can see the changes of the application state on the presentation layer.
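A minimal in-memory sketch of event sourcing in Python; a production event store would be durable, but the append-and-replay mechanics are the same:

    event_store = []          # ordered sequence of change events

    def record(event_type, data):
        event_store.append({"type": event_type, "data": data})

    def current_state():
        # Rebuild (or adjust) the application state by replaying events.
        balance = 0
        for event in event_store:
            if event["type"] == "deposited":
                balance += event["data"]
            elif event["type"] == "withdrawn":
                balance -= event["data"]
        return balance

    record("deposited", 100)
    record("withdrawn", 30)
    print(current_state())    # 70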

vii) Branch Pattern


The branch microservice design pattern is a pattern in which you can simultaneously process the requests and responses from two or more independent microservices. So, unlike the chained design pattern, the request is not passed in a sequence, but is passed to two or more mutually exclusive chains of microservices. This design pattern extends the Aggregator design pattern and provides the flexibility to produce responses from multiple chains or a single chain. For example, if you consider an e-commerce application, then you may need to retrieve data from multiple sources, and this data could be a collaborated output of data from various services. So, you can use the branch pattern to retrieve data from multiple sources, as sketched below.
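A minimal sketch of the branch pattern in Python, invoking two independent service chains in parallel with the standard-library ThreadPoolExecutor; the URLs are hypothetical:

    from concurrent.futures import ThreadPoolExecutor
    import requests

    def fetch(url):
        return requests.get(url, timeout=5).json()

    # Unlike the chained pattern, both chains are invoked simultaneously.
    with ThreadPoolExecutor() as pool:
        products = pool.submit(fetch, "http://catalog-service/products")
        reviews = pool.submit(fetch, "http://review-service/reviews")
        response = {"products": products.result(), "reviews": reviews.result()}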

viii) Command Query Responsibility Segregator (CQRS) Design Pattern


Every microservice design has either the database-per-service model or the shared-database-per-service model. But in the database-per-service model, we cannot implement a query that spans services, as data access is limited to one single database. So, in such a scenario, you can use the CQRS pattern. According to this pattern, the application will be divided into two parts: Command and Query. The command part will handle all the requests related to CREATE, UPDATE, and DELETE, while the query part will take care of the materialized views. The materialized views are updated through a sequence of events which are created using the event sourcing pattern discussed above.
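A minimal sketch of CQRS in Python, reusing the event-sourcing idea above: commands append events on the write side, while a separate materialized view serves queries on the read side:

    events = []               # write side: sequence of change events
    order_view = {}           # read side: materialized view for queries

    def handle_command(command, order_id, payload=None):
        # Command part: CREATE / UPDATE / DELETE only.
        events.append((command, order_id, payload))
        apply_to_view(command, order_id, payload)

    def apply_to_view(command, order_id, payload):
        # Query part: keep the materialized view in sync with the events.
        if command in ("create", "update"):
            order_view[order_id] = payload
        elif command == "delete":
            order_view.pop(order_id, None)

    handle_command("create", 1, {"status": "placed"})
    print(order_view[1])      # queries read the view, not the event log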
ix) Circuit Breaker Pattern
As the name suggests, the Circuit Breaker design pattern is used to stop the request-and-response process if a service is not working. So, for example, let's say a client is sending a request to retrieve data from multiple services, but, due to some issue, one of the services is down. Now, there are mainly two problems you will face: first, since the client will not have any knowledge about a particular service being down, requests will be continuously sent to that service. The second problem is that the network resources will be exhausted, with low performance and a bad user experience.
So, to avoid such problems, you can use the Circuit Breaker Design Pattern. With the help of this pattern, the client will invoke a remote service via a proxy. This proxy will basically behave as a circuit breaker. So, when the number of failures crosses the threshold, the circuit breaker trips for a particular time period. Then, all attempts to invoke the remote service will fail during this timeout period. Once that time period is over, the circuit breaker allows a limited number of test requests to pass through, and if those requests succeed, the circuit breaker resumes normal operation. Else, if there is a failure, the timeout period begins again.
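A minimal sketch of such a proxy in Python; the threshold and timeout values are arbitrary, chosen only for illustration:

    import time

    class CircuitBreaker:
        def __init__(self, threshold=3, reset_timeout=30):
            self.threshold = threshold        # failures before tripping
            self.reset_timeout = reset_timeout
            self.failures = 0
            self.opened_at = None

        def call(self, remote_service, *args):
            if self.opened_at is not None:
                if time.time() - self.opened_at < self.reset_timeout:
                    raise RuntimeError("circuit open: failing fast")
                self.opened_at = None         # timeout over: allow a trial call
            try:
                result = remote_service(*args)
            except Exception:
                self.failures += 1
                if self.failures >= self.threshold:
                    self.opened_at = time.time()   # trip the breaker
                raise
            self.failures = 0                 # success: resume normal operation
            return result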
x) Decomposition Design Pattern
Microservices are developed with the idea of creating small services, each having its own functionality. But breaking an application into small autonomous units has to be done logically. So, to decompose a small or big application into small services, you can use the Decomposition patterns.
With the help of this pattern, you can decompose an application based either on business capability or on sub-domains. For example, if you consider an e-commerce application, then you can have separate services for orders, payments, customers, and products if you decompose by business capability.

But, in the same scenario, if you design the application by decomposing by sub-domains, then you can have services for each and every class. Here, in this example, if you consider the customer as a class, then this class will be used in customer management, customer support, etc. So, to decompose, you can use Domain-Driven Design, through which the whole domain model is broken down into sub-domains. Then, each of these sub-domains will have its own specific model and scope (bounded context). Now, when a developer designs microservices, he/she will design those services around the scope or bounded context.
Though these patterns may sound feasible to you, they are not feasible for big monolithic applications. This is because identifying sub-domains and business capabilities is not an easy task for big applications. So, the only way to decompose big monolithic applications is by following the Vine Pattern or the Strangler Pattern.
Strangler Pattern or Vine Pattern
The Strangler Pattern or the Vine Pattern is based on the analogy of a vine which strangles the tree that it is wrapped around. So, when this pattern is applied to web applications, a call goes back and forth for each URI, and the services are broken down into different domains. These domains are hosted as separate services.
According to the strangler pattern, two separate applications live side by side in the same URI space, and one domain is taken into account at an instance of time. So, eventually, the new refactored application wraps around, or strangles, or replaces the original application until you can shut down the monolithic application.
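A minimal sketch of strangler-style routing in Python with Flask and requests; here the "orders" domain has been carved out into a new service while everything else still goes to the monolith (hosts and the carved-out domain are hypothetical):

    import requests
    from flask import Flask, Response, request

    app = Flask(__name__)
    NEW_SERVICE = "http://orders-service:8000"     # refactored domain
    MONOLITH = "http://legacy-app:8000"            # everything else, for now

    @app.route("/<path:uri>")
    def route(uri):
        # Both applications live side by side in the same URI space;
        # domains move to the new service one at a time.
        backend = NEW_SERVICE if uri.startswith("orders") else MONOLITH
        upstream = requests.get(f"{backend}/{uri}", params=request.args, timeout=5)
        return Response(upstream.content, status=upstream.status_code)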

CHALLENGES OF MICROSERVICES ARCHITECTURE
Microservice architecture is more complex than a legacy system. The microservice environment becomes more complicated because the team has to manage and support many moving parts. Here are some of the top challenges that an organization faces in its microservices journey:
• Bounded Context
• Dynamic Scale up and Scale Down
• Monitoring
• Fault Tolerance
• Cyclic dependencies
• DevOps Culture
Bounded context: The bounded context concept originated in Domain-Driven Design (DDD) circles. It promotes an object-model-first approach to a service, defining a data model that the service is responsible for and is bound to. A bounded context clarifies, encapsulates, and defines the specific responsibility of the model. It ensures that the domain will not be disturbed from the outside. Each model must have a context implicitly defined within a sub-domain, and every context defines boundaries.
In other words, the service owns its data and is responsible for its integrity and mutability. It supports the most important features of microservices, which are independence and decoupling.

Dynamic scale up and scale down: The loads on different microservices may be different at any given instant. As well as auto-scaling up, your microservices should auto-scale down. This reduces the cost of the microservices. The load can be distributed dynamically.
Monitoring: The traditional way of monitoring will not align well with microservices, because we now have multiple services making up the same functionality previously supported by a single application. When an error arises in the application, finding the root cause can be challenging.
Fault Tolerance: Fault tolerance means that a failure in an individual service does not bring down the overall system. The application can operate at a certain degree of satisfaction when a failure occurs. Without fault tolerance, a single failure in the system may cause a total breakdown. Fault tolerance can be achieved with a circuit breaker: a pattern that wraps requests to an external service and detects when it is faulty. Microservices need to tolerate both internal and external failures.
Cyclic Dependency: Dependency management across different services and their functionality is very important. Cyclic dependencies can create problems if not identified and resolved promptly.
DevOps Culture: Microservices fit perfectly into the DevOps culture. They provide faster delivery of services, visibility across data, and cost-effectiveness. Organizations can extend their use of containerization while switching from Service-Oriented Architecture (SOA) to Microservice Architecture (MSA).

DIFFERENCE BETWEEN MICROSERVICES ARCHITECTURE (MSA) AND SERVICE-ORIENTED ARCHITECTURE (SOA)

Microservice Based Architecture (MSA) | Service-Oriented Architecture (SOA)
• Microservices use lightweight protocols such as REST and HTTP. | SOA supports multi-message protocols.
• It focuses on decoupling. | It focuses on application service reusability.
• It uses a simple messaging system for communication. | It uses an Enterprise Service Bus (ESB) for communication.
• Microservices follow a "share as little as possible" architecture approach. | SOA follows a "share as much as possible" architecture approach.
• Microservices are much better at fault tolerance than SOA. | SOA is weaker at fault tolerance than MSA.
• Each microservice has an independent database. | SOA services share the whole data storage.
• MSA uses modern relational databases. | SOA uses traditional relational databases.
• MSA tries to minimize sharing through bounded context (the coupling of a component and its data as a single unit with minimal dependencies). | SOA enhances component sharing.
• It is better suited for smaller, well-partitioned, web-based systems. | It is better for large and complex business application environments.

DIFFERENCE BETWEEN APIS AND MICROSERVICES

The main differences between APIs and microservices:
• An API is a contract that provides guidance for a consumer to use the underlying service.
• A microservice is an architectural design that separates portions of a (usually monolithic) application into small, self-contained services.
By definition, this means an API is usually a portion of a microservice, allowing for interaction with the microservice itself. Another way to think about this is that the API serves as a contract for interactions with the microservice, presenting the options available for interacting with it.
However, if we look at a typical microservices architecture, we can see that each microservice is built slightly differently based on its needs. Here are a few examples of different functions a microservice can have:
• Providing CRUD operations for a particular entity type, such as a customer, event, etc. This service would hold the ability to persist data in a database.
• Providing a means to accept parameters and return results based on (potentially intense) computations. The billing microservice above may take information on an event or customer and return the billing information required, without needing to store data.
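A minimal sketch of the first kind, a CRUD microservice exposing its API with Flask; an in-memory dictionary stands in for the service's private database:

    from flask import Flask, jsonify, request

    app = Flask(__name__)
    customers = {}            # stand-in for the service's private database

    @app.route("/customers/<int:cid>", methods=["GET"])
    def read(cid):
        return jsonify(customers.get(cid, {}))

    @app.route("/customers/<int:cid>", methods=["PUT"])
    def create_or_update(cid):
        customers[cid] = request.get_json()
        return jsonify(customers[cid]), 201

    @app.route("/customers/<int:cid>", methods=["DELETE"])
    def delete(cid):
        customers.pop(cid, None)
        return "", 204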
With the above example, you can probably see that a microservice
is capable of being more than just an API for a system. An entire application
can encompass a series of microservices that use their own APIs for
communication with each other. In addition, each of these microservices can
abstract its own functionality, drawing logical boundaries for responsibility in
the application and separating concerns to make for a more maintainable
codebase.

DEPLOYING AND MAINTAINING MICROSERVICES

Deployment of Microservices
Deployment is very crucial for the functioning of microservices. Here are the points to keep in mind while deploying microservices:
• Modularize the self-contained microservices, making them standalone components that can be reused across the application using automation.
• Connect the microservices using bindings that can be manipulated easily.
• Deploy the microservices independently to ensure agility and lower the impact on the application.
How to Deploy:
 You can use Docker to deploy microservices efficiently. Each microservice can be further broken down into processes that can be run in separate Docker containers. These can be specified with Dockerfiles and Docker Compose configuration files.
 You can use a provisioning tool such as Kubernetes to manage and run a cluster of Docker containers across multiple hosts, with co-location, service discovery, and replication control features making the deployment powerful and efficient in the case of large-scale deployments.
Kubernetes defines resources as objects such as Pods, Services, Volumes, and Namespaces.
A Pod, the basic unit in Kubernetes, consists of one or more containers that are co-located on the host machine and share the same resources. Each pod has a unique IP address and can see other pods within the cluster.
A Service combines a set of pods that work together inside a cluster. A Service is generally not exposed outside the cluster, but can be exposed onto an external IP address using one of the behaviors: ClusterIP, NodePort, LoadBalancer, and ExternalName.

Volumes are the persistent storage that exists for the lifetime of their respective pods and are mounted at specific mount points within the container. They are defined by the pod configuration and cannot mount onto or link to other volumes.

Namespaces are non-overlapping sets of resources which are intended to be used in environments with many users spread across multiple teams, projects, or environments.
Replication and scaling in Kubernetes are done by running a specified number of a pod's copies across the cluster using a ReplicationController, which also takes care of pod replacement.
Apache Mesos is another engine that could be used to deploy
Microservices apart from Docker and Kubernetes.
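A minimal sketch using the official kubernetes Python client to create a Deployment that runs a specified number of a pod's copies across the cluster; the image name, replica count, and a working local kubeconfig are assumptions:

    from kubernetes import client, config

    config.load_kube_config()                 # assumes a local kubeconfig

    deployment = {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": "order-service"},
        "spec": {
            "replicas": 3,                    # replication across the cluster
            "selector": {"matchLabels": {"app": "order-service"}},
            "template": {
                "metadata": {"labels": {"app": "order-service"}},
                "spec": {"containers": [{
                    "name": "order-service",
                    "image": "example/order-service:1.0",   # hypothetical image
                    "ports": [{"containerPort": 8000}],
                }]},
            },
        },
    }

    client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)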

Post-Deployment Concerns of Microservices


Unlike the monolithic architecture, where everything was centralized, microservices have everything decentralized at the microservice level, making each microservice entirely responsible for all of its end-to-end concerns. This includes not just the design and deployment aspects but also post-deployment concerns such as security, data integrity, and failures. In microservices, security and data integrity are considered at the microservice level, where the independent databases follow uniform compliance and security practices.
Failures are addressed by lowering the impact of a single microservice failure on the entire application; this makes the architecture more fault tolerant compared to the monolithic architecture.
With these concerns addressed at the microservice level, it is essential to make individual microservices reliable in order to make the application robust as a whole.
DEVOPS
REASON FOR HAVING DEVOPS
• Before DevOps, the development and operations teams worked in complete isolation.
• Testing and deployment were isolated activities done after design and build; hence they consumed more time than actual build cycles.
• Without DevOps, team members spend a large amount of their time testing, deploying, and designing instead of building the project.
• Manual code deployment leads to human errors in production.
• Coding and operations teams have their separate timelines and are not in sync, causing further delays.
• There is demand from business stakeholders to increase the rate of software delivery. As per a Forrester Consulting study, only 17% of teams can deliver software fast enough. This proves the pain point.

OVERVIEW OF DEVOPS
Definition:
DevOps is a combination of two words: software Development and Operations. It allows a single team to handle the entire application lifecycle, from development to testing, deployment, and operations. DevOps helps you to reduce the disconnect between software developers, quality assurance (QA) engineers, and system administrators.
Advantages of DevOps:
 DevOps promotes collaboration between the Development and Operations teams to deploy code to production faster in an automated and repeatable way.
 DevOps helps to increase an organization's speed in delivering applications and services. It also allows organizations to serve their customers better and compete more strongly in the market.
 DevOps can also be defined as a sequence of development and IT operations with better communication and collaboration.
 DevOps has become one of the most valuable business disciplines for enterprises and organizations. With the help of DevOps, the quality and speed of application delivery have improved to a great extent.
 DevOps is nothing but a practice or methodology of making "Developers" and "Operations" folks work together.
 DevOps represents a change in IT culture, with a complete focus on rapid IT service delivery through the adoption of agile practices in the context of a system-oriented approach.
 DevOps is all about the integration of the operations and development processes. Organizations that have adopted DevOps have noticed a 22% improvement in software quality, a 17% improvement in application deployment frequency, a 22% hike in customer satisfaction, and a 19% hike in revenue as a result of successful DevOps implementation.

HISTORY OF DEVOPS
In 2009, the first conference, named DevOpsdays, was held in Ghent, Belgium. Belgian consultant Patrick Debois founded the conference.
In 2012, the State of DevOps report was launched, conceived by Alanna Brown at Puppet.
In 2014, the annual State of DevOps report was published by Nicole Forsgren, Jez Humble, Gene Kim, and others. They found that DevOps adoption was accelerating in 2014.
In 2015, Nicole Forsgren, Gene Kim, and Jez Humble founded DORA (DevOps Research and Assessment).
In 2017, Nicole Forsgren, Gene Kim, and Jez Humble published "Accelerate: Building and Scaling High Performing Technology Organizations".
CORE ELEMENTS OF DEVOPS
The following elements are needed for a good DevOps process to work:
1. Telemetry / Alerting
Telemetry can come in many forms, but operationally, you should focus on two main kinds:
Application logs: these can tell you about what kinds of data are flowing through the system and the kinds of errors you are encountering. These should help you quickly find out where problems are in the system. Ideally, you are using a system that allows you to track logs across the various components of your system, so you can trace a single data event as it travels through the system (e.g. Kibana).
Metrics: these count and measure things. You can use these to track when various components are being called (ones owned by the development team as well as ones external to the team) and how long they take to respond. Or, you can count the number of times certain kinds of errors are thrown. Many times, it is useful to also have a "heartbeat" metric, which simply indicates that your application is actually running and not hung or crashed. You should be able to create graphs of this data in a "system dashboard" (e.g. Grafana) and make it available to the Operations or Development teams.
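A minimal sketch of such metrics in Python using the third-party prometheus_client library, including the "heartbeat" idea mentioned above; the metric names are made up for illustration:

    import time
    from prometheus_client import Counter, Gauge, start_http_server

    requests_total = Counter("app_requests_total", "Requests handled")
    errors_total = Counter("app_errors_total", "Errors thrown")
    heartbeat = Gauge("app_heartbeat", "1 while the application is alive")

    start_http_server(9100)          # exposes /metrics for a scraper to graph
    heartbeat.set(1)
    while True:
        requests_total.inc()         # count each unit of work
        time.sleep(1)                # stand-in for real request handling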
2. An Agile Framework
As your Development team works on new features, they should be
prepared to take on unplanned tasks based on information gathered from
telemetry. This might include fixing applications, adjusting configurations, or
adding capacity. An Agile Framework such as Scrum or Kanban gives you the
flexibility to change plans quickly. If using a framework such as Scrum, then
you should allocate time in every Sprint to take on production changes.
3. Automated Testing
In cases where a configuration or code change is necessary, having good
test automation allows the team to know that their change did not inadvertently
break something else. There can sometimes be hundreds of tests that can be
run over a complex piece of code. If manual testing takes days, then the team
cannot move quickly to fix problems. Automated testing can often determine if a fix is okay within minutes.
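A minimal sketch of an automated test using Python's standard unittest module, the kind of fast check that tells the team a change did not break something else; the discount function is hypothetical:

    import unittest

    def apply_discount(price, percent):
        return round(price * (1 - percent / 100), 2)

    class DiscountTest(unittest.TestCase):
        def test_ten_percent_off(self):
            self.assertEqual(apply_discount(200.0, 10), 180.0)

        def test_zero_discount_leaves_price_unchanged(self):
            self.assertEqual(apply_discount(99.99, 0), 99.99)

    if __name__ == "__main__":
        unittest.main()              # the whole suite runs in seconds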
4. Continuous Integration / Continuous Deployment
The Development team should be using a code repository with a good
branching process to support Continuous Integration and Continuous
Deployment best practices. The team should be able to make the needed
changes, run an automated unit test, merge it into a production branch, run an
automated system test, and release the change into a production environment
within a relatively short amount of time. And, in the worst case, if the
change does not solve the problem (or creates other ones), the team
should be able to quickly roll back the change.

LIFE CYCLE OF DEVOPS

DevOps defines an agile relationship between operations and development. It is a process that is practiced by the development team and operations engineers together, from the beginning to the final stage of the product.
Learning DevOps is not complete without understanding the DevOps lifecycle phases. The DevOps lifecycle includes seven phases, as given below:
1) Continuous Development
This phase involves the planning and coding of the software. The vision
of the project is decided during the planning phase. And the developers begin
developing the code for the application. There are no DevOps tools that are
required for planning, but there are several tools for maintaining the code.

2) Continuous Integration
This stage is the heart of the entire DevOps lifecycle. It is a software development practice in which the developers commit changes to the source code frequently, which may be on a daily or weekly basis. Every commit is then built, and this allows early detection of problems if they are present. Building code involves not only compilation but also unit testing, integration testing, code review, and packaging.
The code supporting new functionality is continuously integrated with the existing code. Therefore, there is continuous development of software. The updated code needs to be integrated continuously and smoothly with the systems to reflect changes to the end-users.
Jenkins is a popular tool used in this phase. Whenever there is a change in the Git repository, Jenkins fetches the updated code and prepares a build of that code, which is an executable file in the form of a WAR or JAR. This build is then forwarded to the test server or the production server.

3) Continuous Testing
In this phase, the developed software is continuously tested for bugs. For constant testing, automation testing tools such as TestNG, JUnit, Selenium, etc. are used. These tools allow QAs to test multiple code-bases thoroughly in parallel to ensure that there are no flaws in the functionality. In this phase, Docker containers can be used for simulating the test environment.
Selenium does the automation testing, and TestNG generates the reports. This entire testing phase can be automated with the help of a Continuous Integration tool called Jenkins.
Automation testing saves a lot of time and effort in executing the tests compared to doing this manually. Apart from that, report generation is a big plus. The task of evaluating the test cases that failed in a test suite gets simpler. Also, we can schedule the execution of the test cases at predefined times. After testing, the code is continuously integrated with the existing code.
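A minimal sketch of a Selenium automation test in Python; the third-party selenium package and an installed Chrome driver are assumed, and the URL and page title are hypothetical:

    from selenium import webdriver

    driver = webdriver.Chrome()                  # driver binary must be available
    try:
        driver.get("http://test-server:8080/login")
        assert "Login" in driver.title           # a simple functional check
    finally:
        driver.quit()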
4) Continuous Monitoring
Monitoring is a phase that involves all the operational factors of the entire
DevOps process, where important information about the use of the software
is recorded and carefully processed to find out trends and identify problem
areas. Usually, the monitoring is integrated within the operational capabilities
of the software application.
It may occur in the form of documentation files, or it may produce large-scale data about the application parameters when the application is in continuous use. System errors such as "server not reachable" or low memory are resolved in this phase. Monitoring maintains the security and availability of the service.
5) Continuous Feedback
The application development is consistently improved by analyzing the
results from the operations of the software. This is carried out by placing the
critical phase of constant feedback between the operations and the
development of the next version of the current software application.
Continuity is the essential factor in DevOps, as it removes the unnecessary steps otherwise required to take a software application through development, use it to find its issues, and then produce a better version. Those extra steps kill the efficiency that may be possible with the app and reduce the number of interested customers.
6) Continuous Deployment
In this phase, the code is deployed to the production servers. Also, it is
essential to ensure that the code is correctly used on all the servers.

The new code is deployed continuously, and configuration management tools play an essential role in executing tasks frequently and quickly. Some popular tools used in this phase are Chef, Puppet, Ansible, and SaltStack.
Containerization tools also play an essential role in the deployment phase. Vagrant and Docker are popular tools used for this purpose. These tools help to produce consistency across the development, staging, testing, and production environments. They also help in scaling instances up and down smoothly.
Containerization tools help to maintain consistency across the
environments where the application is tested, developed, and deployed. There
is no chance of errors or failure in the production environment as they package
and replicate the same dependencies and packages used in the testing,
development, and staging environment. It makes the application easy to run on
different computers.

7) Continuous Operations
All DevOps operations are based on continuity, with complete automation of the release process, allowing the organization to continually accelerate the overall time to market.

ADOPTION OF DEVOPS
DevOps is not a tool, technology, or framework. Instead, it is a set of practices that help bridge the gap between the development and operations teams in an enterprise. By bridging the gap, DevOps eliminates communication barriers and makes collaboration easier.
Enterprises should adopt DevOps for the following reasons. By resolving collaboration problems and creating a unified delivery ecosystem, DevOps enables organizations to:
a. Increase the overall productivity of the delivery process
b. Bring transparency across teams and communication channels
c. Accelerate delivery time by eliminating unnecessary processes and operational overheads
d. Receive and process continuous feedback
e. Accelerate time-to-resolution for issues
f. Create a reliable delivery ecosystem, which leads to better software quality
g. Reduce overall cost by streamlining the end-to-end delivery process
How to get DevOps right
Most organizations fail in getting DevOps right because they treat it like
a framework, not a set of processes or practices. DevOps is more than
implementing the right tools and technologies. The success of DevOps depends
upon driving technical and cultural shifts together.
In order to fuel better collaboration, improve delivery and make a
DevOps project successful, three other factors play a crucial role.
These factors are people, tools and processes. People involved in DevOps
implementation must be trained together and digitization technologies, cloud
infrastructure and modern data management techniques must be leveraged in
accelerating the adoption of DevOps in an organization.
Apart from the factors stated above, continuous integration and
continuous delivery (CI/CD) are also cited as pillars of successful DevOps
implementation.
To establish and optimize the CI/CD model and reap the benefits,
organizations need to build an effective pipeline to automate their build,
integration and testing processes. Selecting the right tools is critical for
building that pipeline.
Choosing the right DevOps tools
There is no one single tool that meets all DevOps requirements. The
smartest move is to choose a set of tools that best suit the organization’s
software delivery environment, team and application.
The right tools enable organizations to create a strong DevOps foundation,
achieve a continuous process right from development to deployment, help
optimize resources and costs, facilitate seamless execution of the processes and
eventually meet organizational goals.
While choosing the right DevOps tools, organizations must consider the
following factors:
 The tools should have enterprise-class automation capability. It will help
scale the organizational workflows and continuously enhance the processes
without adding extra work.
 DevOps creates a need for integration of the entire delivery ecosystem. Therefore, the tools should have integration capabilities.

DEVOPS TOOLS

Here are some of the most popular DevOps tools, with brief explanations:

1) Puppet
Puppet is one of the most widely used DevOps tools. It allows the delivery and release of technology changes quickly and frequently. It has features of versioning, automated testing, and continuous delivery. It enables managing the entire infrastructure as code without expanding the size of the team.
Features
 Real-time context-aware reporting.
 Model and manage the entire environment.
 Defined and continually enforced infrastructure.
 Desired state conflict detection and remediation.
 It inspects and reports on packages running across the infrastructure.
 It eliminates manual work in the software delivery process.
 It helps the developer to deliver great software quickly.

2) Ansible
Ansible is a leading DevOps tool. Ansible is an open-source IT engine that automates application deployment, cloud provisioning, intra-service orchestration, and other IT processes. It makes it easier for DevOps teams to scale automation and speed up productivity.
Ansible is easy to deploy because it does not use any agents or custom security infrastructure on the client side; it works by pushing modules to the clients. These modules are executed locally on the client side, and the output is pushed back to the Ansible server.
Features
 It is an easy-to-use open-source tool for deploying applications.
 It helps in avoiding complexity in the software development process and eliminates repetitive tasks.
 It manages complex deployments and speeds up the development process.

3) Docker

Docker is a high-end DevOps tool that allows building, shipping, and running distributed applications on multiple systems. It also helps to assemble apps quickly from their components, and it is typically suitable for container management.
Features
o It makes configuring the system more comfortable and faster.
o It increases productivity.
o It provides containers that are used to run the application in an isolated environment.
o It routes incoming requests for published ports on available nodes to an active container. This feature enables the connection even if there is no task running on the node.
o It allows saving secrets into the swarm itself.

4) Nagios
Nagios is one of the most useful tools for DevOps. It can determine errors and rectify them with the help of network, infrastructure, server, and log monitoring systems.
Features
o It provides complete monitoring of desktop and server operating systems.
o The network analyzer helps to identify bottlenecks and optimize bandwidth utilization.
o It helps to monitor components such as services, applications, OS, and network protocols.
o It also provides complete monitoring of Java Management Extensions (JMX).
5) Chef
Chef is a useful tool for achieving scale, speed, and consistency. Chef is a cloud-based, open-source technology. It uses Ruby encoding to develop essential building blocks such as recipes and cookbooks. Chef is used in infrastructure automation and helps in reducing manual and repetitive tasks for infrastructure management.
Chef has its own conventions for the different building blocks required to manage and automate infrastructure.
Features
o It maintains high availability.
o It can manage multiple cloud environments.
o It uses the popular Ruby language to create a domain-specific language.
o Chef does not make any assumptions about the current status of a node. It uses its own mechanism to get the current state of the machine.
6) Jenkins
Jenkins is a DevOps tool for monitoring the execution of repeated tasks. Jenkins is software that enables continuous integration. Jenkins is installed on a server where the central build takes place. It helps to integrate project changes more efficiently by finding issues quickly.
Features
o Jenkins increases the scale of automation.
o It can easily be set up and configured via a web interface.
o It can distribute tasks across multiple machines, thereby increasing concurrency.
o It supports continuous integration and continuous delivery.
o It offers over 400 plugins to support building and testing virtually any project.
o It requires little maintenance and has a built-in GUI tool for easy updates.
7) Git
Git is an open-source distributed version control system that is freely available for everyone. It is designed to handle minor to major projects with speed and efficiency. It is developed to coordinate the work among programmers. Version control allows you to track and work together with your team members in the same workspace. Git is used as a critical distributed version-control tool in DevOps.
Features
o It is a free open-source tool.
o It allows distributed development.
o It supports pull requests.
o It enables a faster release cycle.
o Git is very scalable.
o It is very secure and completes tasks very fast.
8) SaltStack
SaltStack is an open-source, Python-based configuration management and orchestration tool. It is an ideal solution for intelligent orchestration of the software-defined data center.
Features
o It eliminates messy configuration or data changes.
o It allows tracing the details of all types of web requests.
o It allows us to find and fix bugs before production.
o It provides secure access and configures image caches.
o It secures multi-tenancy with granular role-based access control.
o Flexible image management with a private registry to store and manage images.
9) Splunk
Splunk is a tool that makes machine data usable, accessible, and valuable to everyone. It delivers operational intelligence to DevOps teams. It helps companies to be more secure, productive, and competitive.
Features
o It has next-generation monitoring and analytics solutions.
o It delivers a single, unified view of different IT services.
o It extends the Splunk platform with purpose-built solutions for security.
o Data-driven analytics with actionable insights.
10) Selenium
Selenium is a portable software testing framework for web applications. It provides an easy interface for developing automated tests.
Features
 It is a free open-source tool.
 It supports multiple platforms for testing, such as Android and iOS.
 It is easy to build a keyword-driven framework for WebDriver.
 It creates robust browser-based regression automation suites and tests.

BUILD, PROMOTION AND DEPLOYMENT IN DEVOPS

Build in DevOps:
Build automation is the process of automating the retrieval of source code, compiling it into binary code, executing automated tests, and publishing it into a shared, centralized repository.
Build automation is critical to successful DevOps processes.
Build means compiling the project. Deploy means compiling the project and publishing the output. For web applications, nothing needs to be done on the client side except running a simple browser with the URL.
5 Benefits of Build Automation:
Increases Productivity:
Build automation ensures fast feedback. This means your developers increase productivity. They'll spend less time dealing with tools and processes and more time delivering value.
Accelerates Delivery:
Build automation helps you accelerate delivery. That's because it eliminates redundant tasks and ensures you find issues faster, so you can release faster.
Improves Quality:
Build automation helps your team move faster. That means you'll be able to find issues faster and resolve them to improve the overall quality of your product and avoid bad builds.
Maintains a Complete History:
Build automation maintains a complete history of files and changes. That means you'll be able to track issues back to their source.
Saves Time and Money:
Build automation saves time and money. That's because build automation sets you up for CI/CD, increases productivity, accelerates delivery, and improves quality.
Automate the build process:
1. Write the code.
2. Commit code to a shared, centralized repository such as Perforce Helix
Core.
3. Scan the code using tools such as static analysis.
4. Start a code review.
5. Compile code and files.
6. Run automated testing.
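A minimal sketch of scripting such a pipeline in Python with the standard subprocess module; the exact commands are hypothetical and depend on your project and toolchain:

    import subprocess, sys

    steps = [
        ["git", "pull"],                    # retrieve the latest source code
        ["python", "-m", "pytest"],         # run the automated tests
        ["python", "-m", "build"],          # package the project (needs the build package)
    ]

    for step in steps:
        print("Running:", " ".join(step))
        if subprocess.run(step).returncode != 0:
            sys.exit(f"Build failed at: {' '.join(step)}")   # fast feedback
    print("Build succeeded")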
Promotion and Deployment in DevOps:
Deployment automation is what enables you to deploy your software to testing and production environments with the push of a button. Automation is essential to reduce the risk of production deployments.
An automated deployment process has the following inputs:
• Packages created by the continuous integration (CI) process (these packages should be deployable to any environment, including production).
• Scripts to configure the environment, deploy the packages, and perform a deployment test (sometimes known as a smoke test).
• Environment-specific configuration information.
