Unit V
DEFINING MICROSERVICES
Microservice Architecture is a form of Service-Oriented Architecture. In a
microservice architecture there are a large number of microservices; combined,
these microservices make up one larger service, and all of the services
communicate with each other.
Microservices define an approach to architecture that divides an
application into a pool of loosely coupled services, each implementing a business
requirement. The approach is closely related to Service-Oriented Architecture (SOA). The most
important feature of a microservice-based architecture is that it enables
continuous delivery of a large and complex application.
Microservices help in breaking an application into logically
independent smaller applications. For example, we can build a cloud application
with the help of Amazon AWS with minimal effort.
EMERGENCE OF MICROSERVICE ARCHITECTURE
A typical Microservice Architecture (MSA) should consist of the following
components:
1. Clients
2. Identity Providers
3. API Gateway
4. Messaging Formats
5. Databases
6. Static Content
7. Management
8. Service Discovery
1. Clients
The architecture starts with different types of clients, on different
devices, trying to perform various operations such as search, build,
configure, etc.
2. Identity Providers
Requests from the clients are first passed on to identity providers,
which authenticate the requests and communicate them to the API
Gateway. The requests are then routed to the internal services via the well-
defined API Gateway.
3. API Gateway
Since clients don’t call the services directly, API Gateway acts as an entry
point for the clients to forward requests to appropriate microservices.
i) Aggregator Pattern
Aggregator in the computing world refers to a website or program that
collects related items of data and displays them. Likewise, in microservice
patterns, an Aggregator is a basic web page or service that invokes various services
to get the required information or achieve the required functionality.
Since the source of output gets divided when a monolithic
architecture is broken into microservices, this pattern proves beneficial when you need
an output that combines data from multiple services. So, if we have two services,
each having its own database, then an aggregator having a unique transaction
ID would collect the data from each individual microservice, apply the business
logic, and finally publish it as a REST endpoint. Later on, the collected data
can be consumed by the respective services that require it.
The Aggregator design pattern is based on the DRY (Don't Repeat
Yourself) principle: you can abstract shared logic into a composite microservice and
aggregate that particular business logic into one service.
So, for example, if you consider two services, Service A and Service B, you
can scale each of them independently while both provide data to the
composite microservice.
With the help of the API Gateway design pattern, the API gateway can
convert a protocol request from one type to another. Similarly, it can also offload
the authentication/authorization responsibility from the microservices.
Once a client sends a request, the request is passed to the API
Gateway, which acts as an entry point and forwards the client's request to the
appropriate microservice. Then, with the help of a load balancer, the load of
the request is handled and the request is sent to the respective service.
Microservices use Service Discovery, which acts as a guide to find the route of
communication between each of them. Microservices then communicate with
each other via a stateless server, i.e. by HTTP requests or a message bus.
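The aggregator flow described above can be sketched in a few lines. This is a minimal illustration, not a real framework: the two downstream services, their field names, and the returned values are all hypothetical stand-ins for REST calls to Service A and Service B.

```python
# Minimal sketch of the Aggregator pattern: one composite service
# calls two (hypothetical) downstream services, tags the result with
# a unique transaction ID, and merges the data into one payload.
import uuid

def customer_service(customer_id):
    # Stand-in for an HTTP call to Service A's REST endpoint.
    return {"customer_id": customer_id, "name": "Alice"}

def billing_service(customer_id):
    # Stand-in for an HTTP call to Service B's REST endpoint.
    return {"customer_id": customer_id, "balance": 42.50}

def aggregate(customer_id):
    transaction_id = str(uuid.uuid4())   # unique ID for this aggregation
    customer = customer_service(customer_id)
    billing = billing_service(customer_id)
    # Apply the (trivial) business logic: merge both payloads.
    return {
        "transaction_id": transaction_id,
        "name": customer["name"],
        "balance": billing["balance"],
    }

result = aggregate("c-1")
```

In a real system `aggregate` would sit behind the API Gateway and be published as a REST endpoint of the composite service.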
CHALLENGES OF MICROSERVICES
ARCHITECTURE
A microservice architecture is more complex than a legacy system. The
microservice environment becomes more complicated because the team has to
manage and support many moving parts. Here are some of the top challenges
that organizations face in their microservices journey:
• Bounded Context
• Dynamic Scale up and Scale Down
• Monitoring
• Fault Tolerance
• Cyclic dependencies
• DevOps Culture
Bounded context: The bounded context concept originated in Domain-Driven
Design (DDD) circles. It promotes an object-model-first approach to a service,
defining the data model that the service is responsible for and bound to. A bounded
context clarifies, encapsulates, and defines the specific responsibility of the
model. It ensures that the domain is not distracted from the outside. Each
model has a context implicitly defined within a sub-domain, and every
context defines boundaries.
In other words, the service owns its data and is responsible for its integrity
and mutability. It supports the most important feature of micro services, which is
independence and decoupling.
Dynamic scale up and scale down: The loads on different microservices
may differ at any given instant. As well as auto-scaling up, your microservices
should auto-scale down, which reduces their cost. The load can then be
distributed dynamically.
Monitoring: The traditional way of monitoring will not align well with micro
services because we have multiple services making up the same functionality
previously supported by a single application. When an error arises in the
application, finding the root cause can be challenging.
Fault Tolerance: Fault tolerance means that the failure of an individual service does not bring
down the overall system; the application can still operate at a certain degree of
satisfaction when a failure occurs. Without fault tolerance, a single failure in
the system may cause a total breakdown. Fault tolerance can be achieved with
a circuit breaker: a pattern that wraps requests to an external
service and detects when they are faulty. Microservices need to tolerate both
internal and external failures.
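The circuit-breaker idea can be sketched as follows. This is a deliberately minimal version (real implementations such as Hystrix or resilience4j also add timeouts and a "half-open" recovery state); the class and threshold here are illustrative.

```python
# Minimal circuit-breaker sketch: after a threshold of consecutive
# failures the breaker "opens" and rejects further calls immediately,
# instead of continuing to hammer the faulty external service.
class CircuitBreaker:
    def __init__(self, threshold=3):
        self.threshold = threshold
        self.failures = 0

    def call(self, func, *args):
        if self.failures >= self.threshold:
            # Circuit is open: fail fast without calling the service.
            raise RuntimeError("circuit open: external service marked faulty")
        try:
            result = func(*args)
            self.failures = 0   # a success resets the failure count
            return result
        except Exception:
            self.failures += 1  # record the failure and re-raise
            raise
```

A caller wraps every request to the external service in `breaker.call(...)`; once the service has failed `threshold` times in a row, subsequent calls fail fast until the breaker is reset.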
Cyclic Dependency: Dependency management across different services and
their functionality is very important. A cyclic dependency can create problems
if not identified and resolved promptly.
DevOps Culture: Microservices fit perfectly into the DevOps culture. They
provide faster delivery of services, visibility across data, and cost-effectiveness.
Organizations can extend their use of containerization when switching from
Service-Oriented Architecture (SOA) to Microservice Architecture (MSA):
SOA focuses on application service reusability, whereas MSA focuses on
decoupling.
Each microservice is built around specific business needs. Here are a few examples of different functions a microservice can have:
• Providing CRUD operations for a particular entity type, such as a customer,
event, etc. This service would hold the ability to persist data in a database.
• Providing a means to accept parameters and return results based on
(potentially intense) computations. The billing microservice above may take
information on an event or customer and return the billing information
required, without needing to store data.
With the above example, you can probably see that a microservice
is capable of being more than just an API for a system. An entire application
can encompass a series of microservices that use their own APIs for
communication with each other. In addition, each of these microservices can
abstract its own functionality, drawing logical boundaries for responsibility in
the application and separating concerns to make for a more maintainable
codebase.
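The CRUD-style microservice described above can be sketched with an in-memory dictionary standing in for the database. The `CustomerService` class and its method names are illustrative, not part of any real framework; a real service would expose these operations over HTTP and persist to an actual database.

```python
# Minimal sketch of a CRUD microservice for a "customer" entity.
# The dict below is a stand-in for persistent database storage.
class CustomerService:
    def __init__(self):
        self._db = {}       # maps customer id -> customer record
        self._next_id = 1

    def create(self, name):
        cid = self._next_id
        self._next_id += 1
        self._db[cid] = {"id": cid, "name": name}
        return cid

    def read(self, cid):
        return self._db.get(cid)          # None if not found

    def update(self, cid, name):
        if cid in self._db:
            self._db[cid]["name"] = name
            return True
        return False

    def delete(self, cid):
        return self._db.pop(cid, None) is not None
```

Because the service owns its data entirely, other microservices would call these operations through its API rather than touching the customer database directly, which is exactly the decoupling the text describes.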
Volumes are persistent storage that exists for the lifetime of their
respective pods and that is mounted at specific mount points within the container.
They are defined by the pod configuration and cannot mount onto or
link to other volumes.
OVERVIEW OF DEVOPS
Definition:
DevOps is a combination of two words: software
Development and Operations. It allows a single team to handle
the entire application lifecycle, from development to testing, deployment, and
operations. DevOps helps to reduce the disconnect between software
developers, quality assurance (QA) engineers, and system administrators.
Advantages of Devops:
DevOps promotes collaboration between Development and Operations
teams to deploy code to production faster in an automated and repeatable
way.
DevOps has become one of the most valuable business disciplines for
enterprises and organizations. With the help of DevOps, the quality and
speed of application delivery have improved to a great extent.
HISTORY OF DEVOPS
In 2009, the first conference named DevOpsdays was held in Ghent,
Belgium. Belgian consultant Patrick Debois founded the conference.
In 2012, the State of DevOps report was launched, conceived by Alanna
Brown at Puppet.
In 2014, the annual State of DevOps report was published by Nicole
Forsgren, Jez Humble, Gene Kim, and others. They found that DevOps
adoption was accelerating.
In 2015, Nicole Forsgren, Gene Kim, and Jez Humble founded DORA
(DevOps Research and Assessment).
In 2017, Nicole Forsgren, Gene Kim, and Jez Humble published
"Accelerate: Building and Scaling High Performing Technology
Organizations".
CORE ELEMENTS OF DEVOPS
The following elements are required for a good DevOps process to work:
1. Telemetry / Alerting
Telemetry can come in many forms, but operationally, you should focus on two
main kinds:
Application logs: these can tell you about what kinds of data are flowing
through the system, and the kinds of errors you are
encountering. These should be able to help you quickly find out where
problems are in the system. Ideally, you are using a system that allows
you to track logs across the various components of your system, so you can
trace a single data event as it travels through the system (e.g. Kibana).
Metrics: these count and measure things. You can use these to track when
various components are being called (ones owned by the Development team as
well as ones external to the team), and how long they are taking to respond. Or,
you can count the number of times certain kinds of errors are thrown. Many
times, it's useful to also have a "heartbeat" metric, which simply indicates that
your application is actually running and not hung or crashed. You should be
able to create graphs of this data in a “system dashboard” (e.g. Grafana) and
make it available to the Operations or Development teams.
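The counter and "heartbeat" metrics described above can be sketched with a tiny in-process registry. This is only an illustration: real systems export such metrics to a backend like Prometheus and graph them in Grafana, and the class and metric names here are made up.

```python
# Sketch of a minimal in-process metrics registry with counters
# and a "heartbeat" metric, as described in the text.
import time

class Metrics:
    def __init__(self):
        self.counters = {}
        self.last_heartbeat = None

    def incr(self, name):
        # Count an event, e.g. a particular kind of error being thrown.
        self.counters[name] = self.counters.get(name, 0) + 1

    def heartbeat(self):
        # Record that the application is alive right now; a dashboard
        # alert would fire if this timestamp stops advancing.
        self.last_heartbeat = time.time()

metrics = Metrics()
metrics.heartbeat()
metrics.incr("orders.failed")
metrics.incr("orders.failed")
```

The application would call `heartbeat()` on a timer and `incr()` at each interesting event; the dashboard then graphs the counters and alerts when the heartbeat goes stale.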
2. An Agile Framework
As your Development team works on new features, they should be
prepared to take on unplanned tasks based on information gathered from
telemetry. This might include fixing applications, adjusting configurations, or
adding capacity. An Agile Framework such as Scrum or Kanban gives you the
flexibility to change plans quickly. If using a framework such as Scrum, then
you should allocate time in every Sprint to take on production changes.
3. Automated Testing
In cases where a configuration or code change is necessary, having good
test automation allows the team to know that their change did not inadvertently
break something else. There can sometimes be hundreds of tests that can be
run over a complex piece of code. If manual testing takes days, then the team
cannot move quickly to fix problems. Automated testing can often determine
if a fix is okay within minutes.
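A minimal example of the kind of fast automated check described above, using Python's built-in unittest module. The function under test, `apply_discount`, is a hypothetical example, not taken from the text.

```python
# Sketch of a small automated test suite. The function under test
# is a made-up example of business logic a service might contain.
import unittest

def apply_discount(price, percent):
    """Apply a percentage discount to a price."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    def test_basic_discount(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

# Run the suite programmatically (equivalent to `python -m unittest`).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestApplyDiscount)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

A CI server would run hundreds of tests like these on every commit, so the team knows within minutes whether a fix broke something else.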
4. Continuous Integration / Continuous Deployment
The Development team should be using a code repository with a good
branching process to support Continuous Integration and Continuous
Deployment best practices. The team should be able to make the needed
changes, run an automated unit test, merge it into a production branch, run an
automated system test, and release the change into a production environment
within a relatively short amount of time. And, in the worst case, if the
change does not solve the problem (or creates other ones), the team
should be able to quickly roll back the change.
2) Continuous Integration
This stage is the heart of the entire DevOps lifecycle. It is a
software development practice in which developers are required to commit
changes to the source code more frequently. This may be on a daily or
weekly basis. Every commit is then built, which allows early detection of
problems if they are present. Building code involves not only compilation
but also unit testing, integration testing, code review, and
packaging.
The code supporting new functionality is continuously integrated with the
existing code. Therefore, there is continuous development of software. The
updated code needs to be integrated continuously and smoothly with the
systems to reflect changes to the end-users.
3) Continuous Testing
In this phase, the developed software is continuously tested for
bugs. For constant testing, automation testing tools such as TestNG,
JUnit, Selenium, etc. are used. These tools allow QAs to test multiple code-
bases thoroughly in parallel to ensure that there are no flaws in the functionality.
In this phase, Docker containers can be used for simulating the test
environment.
Selenium does the automation testing, and TestNG generates the reports.
This entire testing phase can be automated with the help of a Continuous
Integration tool called Jenkins.
Automation testing saves a lot of time and effort compared to executing the tests
manually. Apart from that, report generation is a big plus:
the task of evaluating the test cases that failed in a test suite gets simpler.
Also, we can schedule the execution of the test cases at predefined times.
After testing, the code is continuously integrated with the existing code.
4) Continuous Monitoring
Monitoring is a phase that involves all the operational factors of the entire
DevOps process, where important information about the use of the software
is recorded and carefully processed to find out trends and identify problem
areas. Usually, the monitoring is integrated within the operational capabilities
of the software application.
It may occur in the form of documentation files or maybe produce large-
scale data about the application parameters when it is in a continuous use
position. The system errors such as server not reachable, low memory, etc are
resolved in this phase. It maintains the security and availability of the service.
5) Continuous Feedback
The application development is consistently improved by analyzing the
results from the operations of the software. This is carried out by placing the
critical phase of constant feedback between the operations and the
development of the next version of the current software application.
Continuity is the essential factor in DevOps, as it removes the
unnecessary steps otherwise required to take a software application from
development, use it to find its issues, and then produce a better version.
Without continuity, the efficiency possible with the app suffers and the number
of interested customers may fall.
6) Continuous Deployment
In this phase, the code is deployed to the production servers. Also, it is
essential to ensure that the code is correctly used on all the servers.
7) Continuous Operations
All DevOps operations are based on continuity, with complete
automation of the release process, allowing the organization to continually
accelerate its overall time to market.
ADOPTION OF DEVOPS
DevOps is not a tool, technology, or framework. Instead, it is a set of
practices that help bridge the gap between development and operations teams in
an enterprise. By bridging the gap, DevOps eliminates communication barriers
and makes collaboration easier.
Enterprises should adopt DevOps for the following reasons.
By resolving collaboration problems and creating a unified
delivery ecosystem, DevOps enables organizations to:
a. Increase the overall productivity of the delivery process
b. Bring transparency across teams and communication channels
c. Accelerate delivery time by eliminating unnecessary processes and
operation overheads
d. Receive and process continuous feedback
e. Accelerate time-to-resolution for issues
f. Create a reliable delivery ecosystem, which leads to better software
quality
g. Reduce overall cost by streamlining the end-to-end delivery process
How to get DevOps right
Most organizations fail in getting DevOps right because they treat it like
a framework, not a set of processes or practices. DevOps is more than
implementing the right tools and technologies. The success of DevOps depends
upon driving technical and cultural shifts together.
In order to fuel better collaboration, improve delivery and make a
DevOps project successful, three other factors play a crucial role.
These factors are people, tools, and processes. People involved in DevOps
implementation must be trained together, and digitization technologies, cloud
infrastructure, and modern data management techniques must be leveraged to
accelerate the adoption of DevOps in an organization.
Apart from the factors stated above, continuous integration and
continuous delivery (CI/CD) are also cited as pillars of successful DevOps
implementation.
To establish and optimize the CI/CD model and reap the benefits,
organizations need to build an effective pipeline to automate their build,
integration and testing processes. Selecting the right tools is critical for
building that pipeline.
Choosing the right DevOps tools
There is no one single tool that meets all DevOps requirements. The
smartest move is to choose a set of tools that best suit the organization’s
software delivery environment, team and application.
The right tools enable organizations to create a strong DevOps foundation,
achieve a continuous process right from development to deployment, help
optimize resources and costs, facilitate seamless execution of the processes and
eventually meet organizational goals.
While choosing the right DevOps tools, organizations must consider the
following factors:
The tools should have enterprise-class automation capability. It will help
scale the organizational workflows and continuously enhance the processes
without adding extra work.
DevOps creates a need for integration across the entire delivery ecosystem.
Therefore, the tools should have integration capabilities.
DEVOPS TOOLS
Here are some of the most popular DevOps tools, with brief explanations:
1) Puppet
Puppet is the most widely used DevOps tool. It allows the delivery and release
of technology changes quickly and frequently. It has features of versioning,
automated testing, and continuous delivery. It enables managing the entire
infrastructure as code without expanding the size of the team.
2) Ansible
Ansible is a leading DevOps tool. Ansible is an open-source IT engine
that automates application deployment, cloud provisioning, intra service
orchestration, and other IT tools. It makes it easier for DevOps teams to scale
automation and speed up productivity.
Ansible is easy to deploy because it does not use any
agents or custom security infrastructure on the client side; instead, it pushes
modules to the clients. These modules are executed locally on the client side,
and the output is pushed back to the Ansible server.
Features
It is an easy-to-use, open-source tool for deploying applications.
3) Docker
Docker is a high-end DevOps tool that allows building, shipping, and running
distributed applications on multiple systems. It also helps to assemble apps
quickly from their components, and it is typically suitable for container
management.
4) Nagios
Nagios is one of the more useful tools for DevOps. It can determine the
errors and rectify them with the help of network, infrastructure, server, and log
monitoring systems.
Features
o It provides complete monitoring of desktop and server operating
systems.
Build in DevOps:
Build automation is the process of automating the retrieval of source code,
compiling it into binary code, executing automated tests, and publishing it into a
shared, centralized repository
Build automation is critical to successful DevOps processes.
Build means to compile the project.
Deploy means to compile the project and publish the output.
For web applications, nothing needs to be deployed on the client side
except a simple browser pointed at the URL.
5 Benefits of Build Automation:
Increases Productivity:
Build automation ensures fast feedback. This means your developers increase
productivity.
They’ll spend less time dealing with tools and processes and more time
delivering value.
Accelerates Delivery:
Build automation helps you accelerate delivery. That's because it eliminates
redundant tasks and ensures you find issues faster, so you can release faster.
Improves Quality:
Build automation helps your team move faster. That means you’ll be able to
find issues faster and resolve them to improve the overall quality of your product
and avoid bad builds.
Maintains a Complete History:
Build automation maintains a complete history of files and changes. That
means you’ll be able to track issues back to their source.
Saves Time and Money:
Build automation saves time and money. That’s because build automation
sets you up for CI/CD, increases productivity, accelerates delivery, and
improves quality.
Automate the build process:
1. Write the code.
2. Commit code to a shared, centralized repository such as Perforce Helix
Core.
3. Scan the code using tools such as static analysis.
4. Start a code review.
5. Compile code and files.
6. Run automated testing.
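The steps above can be sketched as a simple fail-fast pipeline driver. Each step here is a stub returning True or False; in a real pipeline each would shell out to the actual tool (version control client, static analyzer, compiler, test runner), and the step names are illustrative.

```python
# Sketch of a build-automation driver that runs pipeline steps in
# order and stops at the first failure. Each step is a stub.
def fetch_source():     return True   # retrieve code from the shared repository
def static_analysis():  return True   # scan the code
def compile_project():  return True   # compile code and files
def run_tests():        return True   # execute automated tests
def publish_artifact(): return True   # publish the build output

PIPELINE = [fetch_source, static_analysis, compile_project,
            run_tests, publish_artifact]

def run_build():
    for step in PIPELINE:
        if not step():
            # Fail fast: stop the build and report the failing step.
            return f"FAILED at {step.__name__}"
    return "SUCCESS"
```

The fail-fast loop is the key design choice: a broken scan or failing test stops the build before anything is published, which is what gives developers the fast feedback described earlier.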
Promotion and Deployment in DevOps:
Deployment automation is what enables you to deploy
your software to testing and production environments with the push of a button.
Automation is essential to reduce the risk of production deployments.
An automated deployment process has the following inputs:
Packages created by the continuous integration (CI) process (these packages
should be deployable to any environment, including production).
Scripts to configure the environment, deploy the packages, and perform a
deployment test (sometimes known as a smoke test).
Environment-specific configuration information.
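The deployment test (smoke test) mentioned above can be sketched as a check over a few endpoints of the freshly deployed service. The endpoint list and the checker are hypothetical stand-ins; a real version would make actual HTTP calls with urllib or requests.

```python
# Sketch of a post-deployment smoke test: probe a handful of
# endpoints and fail the deployment if any of them is unhealthy.
def check_endpoint(path):
    # Stand-in for an HTTP GET against the deployed environment;
    # here we simply pretend these two paths respond correctly.
    healthy = {"/health", "/version"}
    return path in healthy

def smoke_test(paths):
    failures = [p for p in paths if not check_endpoint(p)]
    return (len(failures) == 0, failures)

ok, failures = smoke_test(["/health", "/version"])
```

An automated deployment script would run `smoke_test` as its final step and trigger a rollback if `ok` is False.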