
UNIT V

CLOUD-NATIVE DEVELOPMENT

LTP: 3-0-0

Textbook: Mastering Cloud-Native Microservices by Chetan Walia, BPB publications


Reference: Building Microservices by Sam Newman, O'Reilly
UNIT – V: INTRODUCTION

Table of Contents
 Cloud Native Architecture
 Loosely Coupled Services
 Service Discovery
 Load Balancing
 Auto Scaling
 Data Management
 The Twelve-factor App Methodology
 Serverless Architecture
 Case Studies of Netflix, Amazon, Uber, etc.
 Cloud-Native Deployment



DEPLOYING MICROSERVICES
Introduction to Cloud Native Applications
1. Imagine you are a new employee in a design session, architecting a major e-commerce application.
2. The question is: "How will you build it?"
3. If you follow the guidance of the past 15 years, you will most likely build the system shown in the figure.
4. You construct a large core application containing all of your domain logic. It includes modules such as Identity, Catalog, Ordering, and more. The modules communicate with each other directly within a single server process and share a large relational database. The core exposes functionality via an HTML interface and a mobile app.
5. Congratulations! You just created a monolithic application.

6. Monoliths offer some distinct advantages: they are straightforward to
a. BUILD
b. TEST
c. DEPLOY
d. TROUBLESHOOT
e. SCALE VERTICALLY
7. Many successful apps that exist today were created as monoliths. The app is a hit and continues to evolve, iteration after iteration, adding more functionality.
8. At some point, however, one begins to feel uncomfortable. You find yourself losing control of the application. As time goes on, the feeling becomes more intense, and you eventually enter a state known as the Fear Cycle:
a. The app has become so overwhelmingly complicated that no single person understands it.
b. You fear making changes: each change has unintended and costly side effects.
c. New features/fixes become tricky, time-consuming, and expensive to implement.

d. Each release becomes as small as possible and requires a full deployment of the entire application.
e. One unstable component can crash the entire system.
f. New technologies and frameworks aren't an option.
g. It is difficult to implement agile delivery methodologies.
SOLUTION
Many organizations have addressed this monolithic fear cycle by adopting a cloud-native approach to building systems.



What is CLOUD NATIVE?
1. Cloud-native architecture and technologies are an approach to designing, constructing, and operating workloads that are built in the cloud and take full advantage of the cloud computing model.
2. The Cloud Native Computing Foundation provides the official definition:
“Cloud-native technologies empower organizations to build and run scalable applications in modern, dynamic
environments such as public, private, and hybrid clouds. Containers, service meshes, microservices, immutable
infrastructure, and declarative APIs exemplify this approach.”
3. These techniques enable loosely coupled systems that are resilient, manageable, and observable. Combined with robust
automation, they allow engineers to make high-impact changes frequently and predictably with minimal toil.
4. Cloud native is about speed and agility. Business systems are evolving from enabling business capabilities to weapons of
strategic transformation that accelerate business velocity and growth. It’s imperative to get new ideas to market immediately.
5. At the same time, business systems have also become increasingly complex with users demanding more. They expect rapid
responsiveness, innovative features, and zero downtime.

6. Some of the companies that have implemented cloud-native techniques are:
Company   Experience
Netflix   600+ services in production; deploys 100 times per day.
Uber      1,000+ services in production; deploys several thousand times each week.
WeChat    3,000+ services in production; deploys 1,000 times a day.

7. As you can see, Netflix, Uber, and WeChat run cloud-native systems that consist of many independent services.
8. This architectural style enables them to respond rapidly to market conditions.
9. They instantaneously update small areas of a live, complex application without a full redeployment.
10. They individually scale services as needed.

The Pillars of CLOUD NATIVE
1. The speed and agility of cloud native derive from many factors. Foremost is cloud infrastructure.
2. But there's more: five other foundational pillars, shown in the figure, provide the bedrock for cloud-native systems.

THE CLOUD
1. Cloud-native systems take full advantage of the cloud service model.
2. Designed to thrive in a dynamic, virtualized cloud environment, these systems make extensive use of Platform as a Service (PaaS) compute infrastructure and managed services.
3. They treat the underlying infrastructure as disposable: provisioned in minutes and resized, scaled, or destroyed on demand via automation.
Example: Consider the widely accepted DevOps concept of Pets vs. Cattle.
1. In a traditional data center, servers are treated as pets: each is a physical machine, given a meaningful name, and cared for. You scale by adding more resources to the same machine (scaling up). If a server becomes sick, you nurse it back to health. Should the server become unavailable, everyone notices.
2. The cattle service model is different. You provision each instance as a virtual machine or container. The instances are identical and are assigned a system identifier such as Service-01, Service-02, and so on. You scale by creating more of them (scaling out). When one becomes unavailable, nobody notices.



DEPLOYING MICROSERVICES
TWELVE-FACTOR APPLICATION
Modern Design  How would you design a cloud-native app? What would your architecture look like?
Solution  The Twelve-Factor Application
1. It is the widely accepted methodology for constructing cloud-based applications
2. It describes a set of principles and practices that developers follow to construct applications optimized for modern cloud
environments.
3. Special attention is given to portability across environments and declarative automation.
4. While applicable to any web-based application, many practitioners consider Twelve-Factor a solid foundation for building
cloud-native apps.
5. Systems built upon these principles can deploy and scale rapidly and add features to react quickly to market changes.

The following summarizes the Twelve-Factor methodology:
1. Code Base  A single code base for each microservice, stored in its own repository. Tracked with version control, it can deploy to multiple environments (QA, Staging, Production).
2. Dependencies  Each microservice isolates and packages its own dependencies, embracing changes without impacting the entire system.
3. Configurations  Configuration information is moved out of the microservice and externalized through a configuration management tool outside of the code. The same deployment can propagate across environments with the correct configuration applied.
4. Backing Services  Ancillary resources (data stores, caches, message brokers) should be exposed via an addressable URL. Doing so decouples the resource from the application, enabling it to be interchangeable.
5. Build, Release, Run  Each release must enforce a strict separation across the build, release, and run stages. Each should be tagged with a unique ID and support the ability to roll back. Modern CI/CD systems help fulfill this principle.
6. Processes  Each microservice should execute in its own process, isolated from other running services. Externalize required state to a backing service such as a distributed cache or data store.
7. Port Binding  Each microservice should be self-contained, with its interfaces and functionality exposed on its own port. Doing so provides isolation from other microservices.
8. Concurrency  When capacity needs to increase, scale out services horizontally across multiple identical processes (copies) rather than scaling up a single large instance on the most powerful machine available. Develop the application to be concurrent, making scaling out in cloud environments seamless.
9. Disposability  Service instances should be disposable. Favor fast startup to increase scalability opportunities and graceful shutdowns to leave the system in a correct state. Docker containers along with an orchestrator inherently satisfy this requirement.
10. Dev/Prod Parity  Keep environments across the application lifecycle as similar as possible, avoiding costly shortcuts. Here, the adoption of containers can greatly contribute by promoting the same execution environment.
11. Logging  Treat logs generated by microservices as event streams. Process them with an event aggregator. Propagate log data to data-mining/log-management tools like Azure Monitor or Splunk, and eventually to long-term archival.
12. Admin Processes  Run administrative/management tasks, such as data cleanup or computing analytics, as one-off processes. Use independent tools to invoke these tasks from the production environment, but separately from the application.
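To make factor 3 (and the factor 4 idea of addressable backing services) concrete, here is a minimal Python sketch of externalized configuration. Reading settings from environment variables is one common way to keep configuration out of the code; the variable names (CATALOG_DB_URL, CACHE_HOST, LOG_LEVEL) are illustrative assumptions, not part of the methodology.

    import os

    # Factor 3: configuration lives in the environment, not the code, so the
    # same build can run in QA, Staging, or Production unchanged.
    DB_URL = os.environ["CATALOG_DB_URL"]                   # required: fail fast if missing
    CACHE_HOST = os.environ.get("CACHE_HOST", "localhost")  # optional, with a default
    LOG_LEVEL = os.environ.get("LOG_LEVEL", "INFO")

    def describe_deployment() -> str:
        # Factor 4: the backing data store is reached via an addressable URL,
        # so swapping one resource for another is purely a configuration change.
        return f"db={DB_URL} cache={CACHE_HOST} log_level={LOG_LEVEL}"

Promoting the same artifact from Staging to Production then changes only the environment, not the code or the build.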
DEPLOYING MICROSERVICES
LOAD BALANCING
1. A load balancer is a runtime agent whose logic is fundamentally based on employing horizontal scaling to balance a workload across two or more IT resources, increasing performance and capacity beyond what a single IT resource can provide.
2. Load balancers can perform a range of specialized runtime workload distribution functions (a sketch follows this list), including:
 Asymmetric Distribution  larger workloads are issued to IT resources with higher processing capacities.
 Workload Prioritization  workloads are scheduled, queued, discarded, and distributed according to their priority levels.
 Content-Aware Distribution  requests are distributed to different IT resources as dictated by the requested content.
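As a rough illustration of these three functions, here is a small Python sketch of the routing logic only; the node names, capacities, and routing table are assumptions for illustration, not the API of any real load balancer.

    import heapq

    class Dispatcher:
        def __init__(self, nodes):
            self.nodes = nodes     # list of (name, capacity) pairs
            self.pending = []      # priority queue of waiting workloads

        def submit(self, priority, size, content_type):
            # Workload prioritization: lower number = dispatched first.
            heapq.heappush(self.pending, (priority, size, content_type))

        def dispatch(self, routes):
            # routes maps content_type -> node name (content-aware distribution).
            while self.pending:
                priority, size, content_type = heapq.heappop(self.pending)
                if content_type in routes:
                    target = routes[content_type]
                else:
                    # Asymmetric distribution: send the workload to the node
                    # with the highest processing capacity.
                    target = max(self.nodes, key=lambda n: n[1])[0]
                print(f"prio={priority} size={size} type={content_type} -> {target}")

    dispatcher = Dispatcher([("node-small", 4), ("node-large", 16)])
    dispatcher.submit(priority=2, size=10, content_type="html")
    dispatcher.submit(priority=1, size=500, content_type="video")
    dispatcher.dispatch(routes={"html": "node-small"})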

1. A load balancer implemented as a service agent transparently distributes incoming workload request messages across two redundant cloud service implementations, which in turn maximizes performance for cloud service consumers.
2. A load balancer is programmed or configured with a set of performance and QoS rules and parameters, with the general objectives of optimizing IT resource usage, avoiding overloads, and maximizing throughput.
3. The load balancer mechanism can exist as a:
 Multi-Layer Network Switch
 Dedicated Hardware Appliance
 Dedicated Software-Based System (common in server operating systems)
 Service Agent (controlled by cloud management software)
4. The load balancer is typically located on the communication path between the IT resources generating the workload and the IT resources performing the workload processing.
5. This mechanism can be designed as a transparent agent that remains hidden from the cloud service consumers, or as a proxy component that abstracts the IT resources performing the workload.


1. New instances of the cloud services are automatically created to meet increasing usage requests.
2. The load balancer uses round-robin scheduling to ensure that traffic is distributed evenly among the active cloud services, as sketched below.
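A minimal sketch of round-robin scheduling in Python: each incoming request is handed to the next active instance in a fixed rotation, so traffic spreads evenly. The instance names are illustrative.

    import itertools

    class RoundRobinBalancer:
        def __init__(self, instances):
            # Cycle endlessly over the active instances in order.
            self._rotation = itertools.cycle(list(instances))

        def next_instance(self):
            return next(self._rotation)

    lb = RoundRobinBalancer(["service-01", "service-02", "service-03"])
    for request_id in range(6):
        print(f"request {request_id} -> {lb.next_instance()}")

The six requests land on service-01 through service-03 twice each; when new instances are created to absorb demand, the balancer is rebuilt with the enlarged pool.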
DEPLOYING MICROSERVICES
AUTO-SCALING LISTENER
1. The automated scaling listener mechanism is a service agent that monitors and tracks communications between cloud
service consumers and cloud services for dynamic scaling purposes.
2. Automated scaling listeners are deployed within the cloud, typically near the firewall, from where they automatically track
workload status information.
3. Workloads can be determined by the volume of cloud consumer-generated requests or via back-end processing demands
triggered by certain types of requests.
4. For example, a small amount of incoming data can result in a large amount of processing.
5. Automated scaling listeners can provide different types of responses to workload fluctuations (a decision sketch follows this list), such as:
 Automatically scaling IT resources out or in, based on parameters previously defined by the cloud consumer (commonly referred to as auto-scaling).
 Automatically notifying the cloud consumer when workloads exceed current thresholds or fall below allocated resources, so that the cloud consumer can choose to adjust its current IT resource allocation.
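The decision logic behind these two responses can be sketched as a small Python function. The thresholds and the instance cap are illustrative parameters a cloud consumer might predefine, not values from the course material.

    def scaling_decision(current_load, instances, scale_out_at=0.75,
                         scale_in_at=0.25, max_instances=3):
        # Threshold-based response of an automated scaling listener.
        utilization = current_load / instances
        if utilization > scale_out_at:
            if instances < max_instances:
                return "scale-out"        # create another identical instance
            return "notify-consumer"      # cap reached: notify instead of scaling
        if utilization < scale_in_at and instances > 1:
            return "scale-in"             # release a surplus instance
        return "steady"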

6. Cloud provider vendors have different names for service agents acting as automated scaling listeners.
Example (figure walkthrough):
1. Three cloud service consumers attempt to access one cloud service simultaneously.
2. The automated scaling listener scales out and initiates the creation of three redundant instances of the service.
3. A fourth cloud service consumer attempts to use the cloud service.
4. Programmed to allow up to only three instances of the cloud service, the automated scaling listener rejects the fourth attempt and notifies the cloud consumer that the requested workload limit has been exceeded.
5. The cloud consumer's cloud resource administrator accesses the remote administration environment to adjust the provisioning setup and increase the redundant instance limit.
DEPLOYING MICROSERVICES
SERVICE DISCOVERY
Service Discovery
1. It is a technology that automatically detects services and devices on a computer network.
2. It is how applications and microservices locate different components on a network.
3. A service discovery protocol is a network protocol that implements this technology and reduces manual configuration tasks for administrators and users.
4. Service discovery can be implemented on a central server or on the client side. It simply requires a common language that allows one piece of software to use another's services without any additional intervention on the user's part.
Service Registry
1. It is the database that tracks the availability of each microservice in an application.
2. The service registry must be updated whenever a new service is added to the system or a service is taken offline or becomes unavailable.
3. This can be accomplished through self-registration or third-party registration. A minimal registry sketch follows.
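Below is a minimal Python sketch of such a registry, combining self-registration, deregistration, and a heartbeat with a time-to-live so that stale entries drop out. The class and method names are illustrative, not a real registry product's API.

    import time

    class ServiceRegistry:
        def __init__(self, ttl_seconds=30):
            self.ttl = ttl_seconds
            self._entries = {}  # service name -> {instance_id: (address, last_beat)}

        def register(self, service, instance_id, address):
            # Called by (or on behalf of) an instance when it starts.
            self._entries.setdefault(service, {})[instance_id] = (address, time.time())

        def heartbeat(self, service, instance_id):
            # Periodic refresh; an instance that stops beating ages out.
            address, _ = self._entries[service][instance_id]
            self._entries[service][instance_id] = (address, time.time())

        def deregister(self, service, instance_id):
            # Called when an instance shuts down cleanly.
            self._entries.get(service, {}).pop(instance_id, None)

        def lookup(self, service):
            # Return the addresses of instances whose heartbeat is still fresh.
            now = time.time()
            return [address for address, beat in self._entries.get(service, {}).values()
                    if now - beat <= self.ttl]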

Why do we need Service Discovery?
1. In a microservice application, you deal with service instances whose network locations are dynamically allocated.
2. The set of instances also changes at runtime, due to factors such as autoscaling, service upgrades, and failures. In such scenarios, the services that depend on these instances must be made aware of the changes.
Example:
1. Suppose you are working on code that invokes a service through a REST API. The code needs the IP address and port of a service instance to make a request. If your application runs on fixed hardware, these locations remain static and do not need to be looked up on every call.
2. However, in a microservice architecture, the number of instances varies and their locations are not recorded in a configuration file, so it is difficult to know the set of instances at any point in time. You need service discovery to locate service instances at dynamically assigned network locations in a cloud-based microservices environment.

Components of Service Discovery
Service discovery has three components:
1. Service Provider  provides the services across a network.
2. Service Registry  a database containing the locations of all available service instances.
3. Service Consumer  retrieves the service provider's location from the service registry and communicates with the service instance.

Types of Service Discovery
1. There are two primary patterns of service discovery:
 client-side discovery, and
 server-side discovery.
2. Both have their uses, advantages, and disadvantages.
3. The primary difference is that in client-side discovery the responsibility for finding available service instances lies with the client, while in server-side discovery the responsibility lies with the server.
Client-Side Service Discovery

1. In this type of service discovery, the service client or consumer searches the service registry to locate a service provider.
2. The client then selects a suitable, available service instance through a load-balancing algorithm and makes the request.
3. In this pattern, the service instance's location is registered with the service registry as soon as the service starts, and the entry is deleted when the instance terminates. The registration is refreshed periodically using a heartbeat mechanism.
Example: Netflix OSS, where Eureka serves as the service registry and Ribbon performs client-side load balancing. A sketch of the pattern follows.
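A hedged sketch of the client-side pattern, reusing the ServiceRegistry sketched earlier: the consumer queries the registry itself and applies its own balancing choice (random selection here stands in for any load-balancing algorithm, the role Ribbon plays in the Netflix OSS stack). The service and instance names are illustrative.

    import random

    def call_service(registry, service_name):
        # Client-side discovery: the consumer queries the registry directly...
        instances = registry.lookup(service_name)
        if not instances:
            raise RuntimeError(f"no healthy instance of {service_name}")
        # ...then picks an instance with its own load-balancing logic.
        target = random.choice(instances)
        print(f"GET http://{target}/api ...")  # illustrative request, not actually sent
        return target

    registry = ServiceRegistry()
    registry.register("catalog", "catalog-01", "10.0.0.5:8080")
    call_service(registry, "catalog")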
