INT363_Unit_5
CLOUD-NATIVE DEVELOPMENT
LTP:300
Table of Contents
Cloud Native Architecture
Loosely Coupled Services
Service Discovery
Load Balancing
Auto Scaling
Data Management
The Twelve-factor App Methodology
Serverless Architecture
Case Studies of Netflix, Amazon, Uber etc.
Cloud-Native Deployment
4. You construct a large core application containing all of your domain logic. It includes modules such as Identity, Catalog,
Ordering, and more. The modules communicate directly with each other within a single server process and share a large
relational database. The core exposes functionality via an HTML interface and a mobile app.
5. Congratulations! You have just created a monolithic application.
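The monolith described above can be sketched in a few lines. This is a minimal illustration, not a real application: the class and method names (SharedDatabase, Identity, Catalog, Ordering) are assumptions chosen to mirror the modules named in point 4. The key property shown is that every module runs in one process and reads and writes the same database.

```python
# Minimal sketch of a monolith: all modules live in one server process and
# share one large relational database. Names are illustrative only.

class SharedDatabase:
    """Stands in for the single shared relational database."""
    def __init__(self):
        self.tables = {"users": {}, "products": {}, "orders": []}

class Identity:
    def __init__(self, db):
        self.db = db
    def register(self, user):
        self.db.tables["users"][user] = {"active": True}

class Catalog:
    def __init__(self, db):
        self.db = db
    def add_product(self, name, price):
        self.db.tables["products"][name] = price

class Ordering:
    # Modules call each other directly, in-process: no network hop involved.
    def __init__(self, db, identity, catalog):
        self.db, self.identity, self.catalog = db, identity, catalog
    def place_order(self, user, product):
        assert user in self.db.tables["users"]          # direct module coupling
        self.db.tables["orders"].append((user, product))

db = SharedDatabase()
identity, catalog = Identity(db), Catalog(db)
ordering = Ordering(db, identity, catalog)
identity.register("alice")
catalog.add_product("widget", 9.99)
ordering.place_order("alice", "widget")
print(db.tables["orders"])   # all state ends up in the one shared database
```

Because every module shares one process and one database, a change to any module forces a rebuild and redeployment of the whole application, which is exactly the limitation the microservice case studies that follow address.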
6. Uber has 1,000+ services in production and deploys several thousand times each week.
7. As you can see, Netflix, Uber, and WeChat run cloud-native systems that consist of many independent services.
8. This architectural style enables them to rapidly respond to market conditions.
9. They instantaneously update small areas of a live, complex application, without a full redeployment.
10. They individually scale services as needed.
1. New instances of the cloud service are automatically created to meet increasing usage requests.
2. The load balancer uses round-robin scheduling to distribute the traffic evenly among the active cloud
service instances.
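Round-robin scheduling can be sketched with Python's standard library. This is a simplified illustration, assuming three hypothetical service instances; a real load balancer would also handle health checks and instance churn.

```python
from itertools import cycle

# Hypothetical active cloud service instances (names are illustrative).
instances = ["service-a", "service-b", "service-c"]

# itertools.cycle yields the instances in a repeating round-robin order.
rotation = cycle(instances)

def route_request(request_id):
    """Assign the next instance in the rotation to this request."""
    return next(rotation)

# Six requests are spread evenly: each instance handles exactly two.
assignments = [route_request(i) for i in range(6)]
print(assignments)
```

Round-robin is the simplest even-distribution policy; it assumes requests cost roughly the same, which is why weighted or least-connections variants exist for uneven workloads.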
www.lpu.in LOVELY PROFESSIONAL UNIVERSITY
DEPLOYING MICROSERVICES
AUTO SCALING LISTENER
1. The automated scaling listener mechanism is a service agent that monitors and tracks communications between cloud
service consumers and cloud services for dynamic scaling purposes.
2. Automated scaling listeners are deployed within the cloud, typically near the firewall, from where they automatically track
workload status information.
3. Workloads can be determined by the volume of cloud consumer-generated requests or via back-end processing demands
triggered by certain types of requests.
4. For example, a small amount of incoming data can result in a large amount of processing.
5. Automated scaling listeners can provide different types of responses to workload fluctuation conditions, such as:
Automatically scaling IT resources out or in based on parameters previously defined by the cloud consumer
(commonly referred to as auto-scaling).
Automatic notification of the cloud consumer when workloads exceed current thresholds or fall below allocated
resources. This way, the cloud consumer can choose to adjust its current IT resource allocation.
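The decision logic behind points 1-5 above can be sketched as a simple threshold check. This is a minimal sketch, not a real listener: the function name, the request-rate metric, and the threshold values are assumptions standing in for parameters the cloud consumer would define in advance.

```python
def scaling_decision(requests_per_second, scale_out_threshold, scale_in_threshold):
    """Return the action an automated scaling listener might take.

    Thresholds are assumed to be predefined by the cloud consumer, as
    described in the text; the metric here (requests per second) stands
    in for any workload measure, including back-end processing demand.
    """
    if requests_per_second > scale_out_threshold:
        return "scale-out"   # add instances, or notify the consumer
    if requests_per_second < scale_in_threshold:
        return "scale-in"    # release idle instances
    return "no-action"       # workload is within the allocated band

# Illustrative thresholds: scale out above 100 req/s, in below 20 req/s.
print(scaling_decision(120, scale_out_threshold=100, scale_in_threshold=20))
print(scaling_decision(5,   scale_out_threshold=100, scale_in_threshold=20))
print(scaling_decision(50,  scale_out_threshold=100, scale_in_threshold=20))
```

Whether the returned action triggers automatic scaling or only a notification to the cloud consumer corresponds to the two response types listed in point 5.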