Session 2: Cloud Computing

Cloud computing, originating in the 1960s, has evolved to enable applications to be designed as decentralized services that scale horizontally. It addresses challenges like peak load provisioning and resource underutilization through on-demand resource provisioning, known as elasticity. Major cloud providers like AWS, Azure, and Google Cloud dominate the market, each offering unique services and deployment models, while the architecture of applications is shifting towards microservices and requires careful consideration of design, management, and security.


Session 2: Cloud Computing

Understanding Cloud Computing

The concept was born in the 1960s from the ideas of pioneers like
J.C.R. Licklider, who envisioned a global network of computers, and
John McCarthy, who framed computation as a public utility. Fast
forward to 1997, when the term "Cloud Computing" was first used by
information systems professor Ramnath Chellappa. Within just a few
years, companies began switching from hardware to cloud services.

• Changing Application Design in the Cloud:


o Applications are decomposed into smaller,
decentralized services.
o Services communicate through APIs or asynchronous
messaging/eventing.
o Applications scale horizontally by adding new
instances as demand increases.
• Challenges in Cloud-Based Applications:
o Application state is distributed.
o Operations are performed in parallel and
asynchronously.
o Systems must be resilient to failures.
o Deployments need to be automated and predictable.
o Monitoring and telemetry are essential for system
insights.
• Azure Application Architecture Guide:
o Designed to help navigate the changes in application
design and architecture.
Why do we need Cloud?

If we consider an online shopping application on-premises, it has a
few challenges:
• Peak usage during holidays and weekends
• Less load during the rest of the time
Without cloud, the solution was peak load provisioning - procuring
more resources for peak usage. However, the rest of the time, those
resources remain underutilized.

On the other hand, consider a startup that has become famous and
started getting popular, so more load is generated on its servers for
incoming and outgoing data.

This may lead the startup to buy more infrastructure anticipating
future need. However, the infrastructure remains idle or
underutilized if the load is reduced.

So, without the cloud, a solution needs the following:

• Procure more infrastructure, which may lead to high capital
investment
• High maintenance of infrastructure and applications, which needs a
dedicated infrastructure maintenance team
• Ahead-of-time planning, which makes it difficult to rightly plan
for future infrastructure needs
• Low infrastructure utilization during non-peak load
With the cloud, we can deal with the above challenges through
on-demand resource provisioning - when a resource is needed, we can
get it on a rental (pay-as-you-go) basis, releasing it when it is not
used. This is known as Elasticity.

Advantages:
• Trade capital expenses for variable expenses
• Benefit from massive economies of scale
• No guessing capacity - infrastructure can be adjusted with the
number of users
• No investment in maintenance of servers and data centres
• Go global in minutes - the cloud enables deploying applications in
multiple regions
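The pay-as-you-go elasticity described above can be sketched as a simple scaling rule: run just enough instances for the current load and release the rest. The threshold of 100 users per instance below is an illustrative assumption, not a figure from any provider.

```python
import math

def instances_needed(active_users: int, users_per_instance: int = 100,
                     min_instances: int = 1) -> int:
    """Return how many instances to run for the current load.

    Pay-as-you-go elasticity: capacity follows demand up and down,
    instead of being fixed at the anticipated peak.
    """
    return max(min_instances, math.ceil(active_users / users_per_instance))

# Demand rises during a sale, then falls back; capacity follows it.
for users in (50, 450, 2000, 120):
    print(users, "users ->", instances_needed(users), "instances")
```

Compare this with peak load provisioning, which would keep 20 instances running even when only 50 users are active.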

Cloud's Impact on Application Design:


• Applications are transitioning from monolithic to
decentralized architectures.
• Applications are decomposed into smaller, independent
services.
• Services communicate via APIs, asynchronous messaging,
or eventing.
• Horizontal scaling allows applications to add new instances
as demand increases.
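Horizontal scaling means adding identical instances behind a load balancer rather than growing one machine. A minimal round-robin dispatcher over a growable instance pool might look like this (the instance names are illustrative):

```python
class InstancePool:
    """Round-robin dispatch over a pool of identical service instances."""

    def __init__(self, instances):
        self.instances = list(instances)
        self._cursor = 0

    def add_instance(self, name: str) -> None:
        # Scaling out is just adding another peer to the pool.
        self.instances.append(name)

    def route(self) -> str:
        # Send the next request to the next instance in turn.
        target = self.instances[self._cursor % len(self.instances)]
        self._cursor += 1
        return target

pool = InstancePool(["web-1", "web-2"])
pool.add_instance("web-3")   # demand increased, so a third instance was added
print([pool.route() for _ in range(4)])   # ['web-1', 'web-2', 'web-3', 'web-1']
```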

These trends bring new challenges. Application state is
distributed. Operations are done in parallel and asynchronously.
The system as a whole must be resilient when failures occur.
Deployments must be automated and predictable. Monitoring
and telemetry are critical for gaining insight into the system.
Three Deployment Models for Cloud:
1. Public Cloud
2. Private Cloud
3. Hybrid Cloud
Three Service Models:

1. IaaS
a. Users need to manage the application, data, runtime,
middleware, and OS
b. Cloud providers manage virtualization, servers,
storage, and networking
2. PaaS
a. Users need to manage the application and data
b. Cloud providers manage the runtime, middleware, OS,
virtualization, servers, storage, and networking
3. SaaS
a. Users need not manage anything
b. Cloud providers manage everything
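Since the stack of layers is fixed and each service model only moves the dividing line between user-managed and provider-managed layers, the split above can be encoded in a few lines (a sketch; the layer names follow the list above):

```python
STACK = ["Application", "Data", "Runtime", "Middleware", "OS",
         "Virtualization", "Servers", "Storage", "Networking"]

# Index of the first layer the *provider* manages, per service model.
PROVIDER_FROM = {"IaaS": 5, "PaaS": 2, "SaaS": 0}

def responsibilities(model: str) -> dict:
    """Split the stack into user-managed and provider-managed layers."""
    cut = PROVIDER_FROM[model]
    return {"user": STACK[:cut], "provider": STACK[cut:]}

print(responsibilities("PaaS")["user"])   # ['Application', 'Data']
```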
Three Types of Virtualization:

- CPU Virtualization
- Memory Virtualization
- I/O Device Virtualization

• Key Decisions in Designing a Cloud Application:


o Start by selecting the appropriate architecture for the
application.
o Consider factors like application complexity, domain
type, IaaS vs. PaaS model, and application
functionality.
o Assess the skills of the developer and DevOps teams.
o Account for whether the application has an existing
architecture.
• Understanding Architecture Styles:
o Architecture styles impose design constraints, shaping
the overall architecture.
o Constraints guide design choices and influence
outcomes.
o Each style offers both benefits and challenges.
o Evaluate trade-offs when adopting any specific
architecture style.

Lifecycle of Cloud Computing Solution:

• Define purpose – Get a proper understanding of the requirements
• Define hardware – Choose a compute service that will provide the
right support to run the application programs
• Define the storage – Choose storage where you can back up and
archive data over the internet
• Define the network – Define and design a network architecture
that securely delivers data and applications with low latency
and high transfer speed
• Define security – Set up security services that enable user
authentication or limit access to AWS resources using role-based
access
• Define management processes and tools – Choose the right
monitoring and process management tools, which can completely
manage the cloud environment
• Define the testing process – Choose tools to build, test, and
deploy code
• Define analytics – Finally, choose the right analytics services to
query data and generate reports

AWS, Google Cloud, and Microsoft Azure are the dominant
players in the cloud industry, often seen as the "rockstars."
However, the cloud landscape extends beyond these top three,
and it’s worth exploring the alternatives.
To make informed decisions, it’s essential to evaluate market
share, services, costs, and reputations—not only for the major
providers but also for lesser-known competitors. A skilled cloud
architect or engineer understands the value of exploring diverse
options and avoiding vendor lock-in.
Let’s dive in and uncover why these alternative platforms could
be worth your attention, investment, and data.

1. Amazon Web Services (AWS)
2. Microsoft Azure
3. Google Cloud
4. Alibaba Cloud
5. Oracle Cloud
6. Salesforce Cloud
7. IBM Cloud
8. Tencent Cloud
9. Huawei Cloud
10. DigitalOcean
11. Vercel
12. Linode/Akamai Connected Cloud
13. OVHcloud
14. Hetzner

Amazon Web Services (AWS)


AWS holds the largest cloud market share globally at 33%. It
offers a comprehensive range of services, including IaaS, PaaS,
and AI capabilities.
https://aws.amazon.com
Documentation: https://docs.aws.amazon.com/

• Highly scalable infrastructure with a vast global
network of data centers.
• Extensive service catalog covering storage, computing, AI,
and machine learning. Tons of different services.
• Strong ecosystem of third-party integrations and
developer tools.
• Can be complex and overwhelming for newer or less
technical users due to the vast array of services.
• Amazon S3 is widely used for scalable, durable storage
of large data sets, making it a popular choice for data
lakes and content delivery.
• Average to higher costs, with a robust free tier but
premium pricing for enterprise-level services and
extensive data transfer.
Popular services:
• EC2. Scalable virtual servers for flexible computing power.
• S3. Highly durable object storage with broad use cases.
• Lambda. Serverless compute service for event-driven
workloads.
• RDS. Managed relational databases supporting multiple
engines.
• VPC. Virtual network provisioning for secure cloud
environments.
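As a taste of the serverless model behind Lambda, a Python Lambda function is essentially a handler that takes an event and a context and returns a response. The event shape below is a made-up example rather than a real trigger payload:

```python
import json

def lambda_handler(event, context):
    """Minimal Lambda-style handler: event in, response out.

    Lambda runs this per invocation; you manage no servers and pay
    per execution rather than for idle capacity.
    """
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Local simulation of an invocation (the context object is unused here).
print(lambda_handler({"name": "cloud"}, None))
```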
Microsoft Azure
Azure holds a 20% market share and is particularly strong
in hybrid cloud solutions and integration with Microsoft
products.
https://azure.microsoft.com/en-us/
Documentation: https://learn.microsoft.com/en-
us/azure/developer/
• Seamless integration with Microsoft products like Office
365 and Windows Server.
• Expansive hybrid cloud capabilities with Azure Stack for on-
premises integration.
• Strong AI and analytics offerings, particularly with Azure
Machine Learning.
• Less extensive global data center coverage compared to
AWS.
• Azure Active Directory is highly valued for seamless
integration with Microsoft environments, making it a go-to
for organizations needing strong identity and access
management.
• Azure Pipelines for DevOps and deployments is a popular
service.
• Typically priced similarly to AWS, with competitive rates for
hybrid cloud services but premium costs for some AI and
advanced analytics offerings.
Popular services:
• Virtual Machines. Scalable cloud-based virtual servers.
• Azure Blob Storage. Object storage optimized for
unstructured data.
• Azure Functions. Serverless computing for event-driven
tasks.
• Azure SQL Database. Managed relational database as a
service.
• Azure Active Directory. Identity and access management
service.


It is a cloud computing platform and online portal to manage


resource and services provided by Microsoft.
- One leading Cloud Provider
- Provides 200+ services
- Relaiable, Secure and Cost Effective
Google Cloud
With 10% market share, Google Cloud is well-regarded for its AI
and data analytics services, often used by data-centric
organizations.
https://cloud.google.com/
Documentation: https://cloud.google.com/docs

• Powerful data and analytics tools, including BigQuery and
TensorFlow.
• Advanced AI services and machine learning tools with
Google AI Platform.
• Competitive pricing, especially for storage and data
processing.
• Lags behind AWS and Azure in terms of global reach and
enterprise adoption.
• BigQuery is particularly popular for fast, serverless data
warehousing, ideal for organizations focusing on data
analytics and large-scale processing.
• Generally considered to have average to lower costs; however,
it depends a lot on the service and usage, with competitive
pricing on storage and data processing, especially for
long-term usage.

Popular services:
• Compute Engine. Flexible virtual machines for compute
workloads.
• Cloud Storage. Unified object storage for various data
types.
• BigQuery. Serverless data warehouse for analytics.
• App Engine. Managed platform for web applications.
• Kubernetes Engine. Managed Kubernetes for containerized
applications.
| Component/Service Name | Purpose | Short Description | Deployment Model | Indicative Price |
|---|---|---|---|---|
| Azure Data Factory | Data Ingestion | Managed service for orchestrating and automating data workflows; supports ETL processes and data movement. | PaaS | $0.25 per hour + activity-based costs |
| AWS Glue | Data Ingestion | Fully managed ETL service for data preparation and movement. Supports serverless execution and job scheduling. | PaaS | $0.44 per DPU-hour |
| Azure Synapse Analytics | Data Processing | Integrated analytics platform combining big data and data warehousing; supports SQL-based querying and integration with Spark. | PaaS | $5 per DWU-hour |
| Amazon Redshift | Data Processing | Fully managed cloud data warehouse for SQL-based analytics; supports petabyte-scale processing and machine learning integration. | PaaS | $0.25 per DC2.large/hour |
| Azure Blob Storage | Storage | Object storage for unstructured data such as text and binary files; supports hot, cool, and archive tiers for cost management. | PaaS | $0.0184 per GB/month |
| Amazon S3 | Storage | Scalable object storage service for backup, archiving, and application data. Offers various storage classes for cost optimization. | PaaS | $0.023 per GB/month |
| Azure Functions | Serverless | Event-driven serverless compute service for lightweight workloads and integrations; scales automatically based on demand. | PaaS | $0.20 per 1M executions |
| AWS Lambda | Serverless | Event-driven serverless compute platform for running code in response to triggers; supports multiple runtime environments. | PaaS | $0.20 per 1M executions |
| Azure Databricks | Compute | Unified analytics platform based on Apache Spark; supports data science, AI, and big data processing. | PaaS | $0.40 per DBU-hour |
| Amazon EMR | Compute | Managed big data platform for running frameworks such as Apache Spark, Hadoop, and Presto on a scalable cluster. | PaaS | $0.11 per EC2 instance-hour |
| Azure Cosmos DB | Database | Globally distributed, multi-model database service designed for high availability, low latency, and scalability. | PaaS | $0.008 per RU + storage |
| Amazon DynamoDB | Database | Fully managed NoSQL database service for key-value and document-based workloads; optimized for low-latency performance. | PaaS | $1.25 per WCU/month |
| Azure Monitor | Monitoring | Comprehensive monitoring and logging solution for Azure resources; integrates with Application Insights and Log Analytics. | PaaS | $2.30 per GB (ingested data) |
| Amazon CloudWatch | Monitoring | Monitoring and observability service for AWS resources and custom metrics; supports alarms, logs, and dashboards. | PaaS | $0.30 per 1M API calls |
| Azure Machine Learning | Machine Learning | End-to-end service for building, deploying, and managing machine learning models with support for AutoML and scalable compute. | PaaS | $0.005/hour for inference |
| Amazon SageMaker | Machine Learning | Comprehensive service for building, training, and deploying ML models at scale with integrated Jupyter notebooks. | PaaS | $0.058/hour for notebook |
| Azure DevOps | Dev Studio | Cloud service for managing software development pipelines; includes CI/CD, repositories, and Agile tools. | SaaS | $6 per user/month |
| AWS CodePipeline | Dev Studio | Fully managed continuous integration and delivery service for automating application updates across environments. | PaaS | $1 per active pipeline/month |
| Azure Virtual Machines | Virtual Machines (VMs) | Scalable IaaS offering for running virtualized compute workloads; supports Windows and Linux OS with custom configurations. | IaaS | $0.011/hour (B1s instance) |
| Amazon EC2 | Virtual Machines (VMs) | Scalable IaaS service for hosting virtual machines with diverse instance types and operating system support. | IaaS | $0.012/hour (t4g.micro) |
Appendix (Extra Material)

N-tier:
N-tier is a traditional architecture for enterprise
applications. Dependencies are managed by dividing the
application into layers that perform logical functions, such as
presentation, business logic, and data access. A layer can only
call into layers that sit below it. However, this horizontal layering
can be a liability. It can be hard to introduce changes in one part
of the application without touching the rest of the application.
That makes frequent updates a challenge, limiting how quickly
new features can be added. N-tier is a natural fit for migrating
existing applications that already use a layered architecture. For
that reason, N-tier is most often seen in infrastructure as a
service (IaaS) solutions, or applications that use a mix of IaaS and
managed services.
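The layering rule above, where each layer calls only into the layer below it, can be illustrated with three tiny functions standing in for the presentation, business logic, and data access layers (all names and data are illustrative):

```python
# Data access layer: the only code that touches storage.
_DB = {"42": {"id": "42", "total": 99.5}}

def data_get_order(order_id: str) -> dict:
    return _DB[order_id]

# Business logic layer: calls only into the data layer below it.
def logic_order_summary(order_id: str) -> str:
    order = data_get_order(order_id)
    return f"Order {order['id']}: ${order['total']:.2f}"

# Presentation layer: calls only into the business layer below it.
def present_order_page(order_id: str) -> str:
    return f"<h1>{logic_order_summary(order_id)}</h1>"

print(present_order_page("42"))
```

Note the liability the text mentions: renaming a field in the data layer ripples upward through every layer above it.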

Web-Queue-Worker:
For a purely PaaS solution, consider a Web-Queue-Worker
architecture. In this style, the application has a web front end that
handles HTTP requests and a back-end worker that performs
CPU-intensive tasks or long-running operations. The front end
communicates to the worker through an asynchronous message
queue.
Web-queue-worker is suitable for relatively simple domains with
some resource-intensive tasks. Like N-tier, the architecture is
easy to understand. The use of managed services simplifies
deployment and operations. But with complex domains, it can be
hard to manage dependencies. The front end and the worker can
easily become large, monolithic components that are hard to
maintain and update. As with N-tier, this can reduce the
frequency of updates and limit innovation.
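The web-queue-worker split above can be sketched with a thread and an in-process queue standing in for the web front end, the message queue, and the worker; in a real deployment the queue would be a managed service such as Azure Queue Storage or Amazon SQS:

```python
import queue
import threading

jobs: "queue.Queue[dict]" = queue.Queue()
results: "queue.Queue[tuple]" = queue.Queue()

def worker() -> None:
    """Back-end worker: drains the queue and does the slow work."""
    while True:
        job = jobs.get()
        if job is None:          # sentinel value: shut down
            break
        # Stand-in for a CPU-intensive or long-running operation.
        results.put((job["id"], sum(range(job["n"]))))

def handle_http_request(job_id: str, n: int) -> str:
    """Web front end: enqueue the work and return immediately."""
    jobs.put({"id": job_id, "n": n})
    return f"202 Accepted: {job_id}"

t = threading.Thread(target=worker)
t.start()
print(handle_http_request("job-1", 1000))   # returns before the work is done
jobs.put(None)
t.join()
print(results.get())
```

The front end never blocks on the worker; the queue absorbs bursts of requests, which is exactly the decoupling the pattern is after.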

Microservices:
If your application has a more complex domain,
consider moving to a microservices architecture. A microservices
application is composed of many small, independent services.
Each service implements a single business capability. Services
are loosely coupled, communicating through API contracts. Each
service can be built by a small, focused development team.
Individual services can be deployed without a lot of coordination
between teams, which encourages frequent updates. A
microservice architecture is more complex to build and manage
than either N-tier or web-queue-worker. It requires a mature
development and DevOps culture. But done right, this style can
lead to higher release velocity, faster innovation, and a more
resilient architecture.

CQRS:
The CQRS (Command and Query Responsibility
Segregation) style separates read and write operations into
separate models. This isolates the parts of the system that
update data from the parts that read the data. Moreover, reads
can be executed against a materialized view that is physically
separate from the write database. That lets you scale the read
and write workloads independently, and optimize the
materialized view for queries.
CQRS makes the most sense when it’s applied to a subsystem of
a larger architecture. Generally, you shouldn’t impose it across
the entire application, as that will just create unneeded
complexity. Consider it for collaborative domains where many
users access the same data.
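A minimal CQRS sketch: commands update the write model and refresh a separate, query-optimized materialized view, and queries read only from that view. Everything is in memory here; in practice the view could live in a physically separate store:

```python
# Write side: commands mutate the system of record.
orders: dict = {}

# Read side: a materialized view optimized for one query pattern.
totals_by_customer: dict = {}

def place_order(order_id: str, customer: str, amount: float) -> None:
    """Command: update the write model, then refresh the read view."""
    orders[order_id] = {"customer": customer, "amount": amount}
    totals_by_customer[customer] = totals_by_customer.get(customer, 0.0) + amount

def customer_total(customer: str) -> float:
    """Query: served entirely from the view, never from the write model."""
    return totals_by_customer.get(customer, 0.0)

place_order("o1", "alice", 30.0)
place_order("o2", "alice", 12.5)
print(customer_total("alice"))   # 42.5
```

Because reads never touch `orders`, the two sides can be scaled and optimized independently, which is the point of the pattern.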

Event-Driven Architecture:
Event-Driven Architectures use a publish-subscribe (pub-sub)
model, where producers publish events, and consumers
subscribe to them. The producers are independent from the
consumers, and consumers are independent from each other.
Consider an event-driven architecture for applications that ingest
and process a large volume of data with very low latency, such as
IoT solutions. This style is also useful when different subsystems
must perform different types of processing on the same event
data.
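The pub-sub decoupling described above fits in a few lines: producers publish to a topic without knowing who consumes it, and each subscriber reacts independently. This is an in-process stand-in for a broker such as Amazon SNS or Azure Event Grid:

```python
from collections import defaultdict
from typing import Callable

_subscribers: dict = defaultdict(list)

def subscribe(topic: str, handler: Callable[[dict], None]) -> None:
    """Register a consumer for a topic."""
    _subscribers[topic].append(handler)

def publish(topic: str, event: dict) -> None:
    """Producer side: emit the event; consumers are unknown to the caller."""
    for handler in _subscribers[topic]:
        handler(event)

seen: list = []
# Two independent consumers of the same event stream.
subscribe("sensor/temp", lambda e: seen.append(f"alert:{e['c']}") if e["c"] > 30 else None)
subscribe("sensor/temp", lambda e: seen.append(f"log:{e['c']}"))

publish("sensor/temp", {"c": 35})
publish("sensor/temp", {"c": 21})
print(seen)   # ['alert:35', 'log:35', 'log:21']
```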
Big Data, Big Compute:
Big Data and Big Compute are specialized architecture styles for
workloads that fit certain specific profiles. Big data divides a very
large dataset into chunks, performing parallel processing
across the entire set, for analysis and reporting. Big compute,
also called high-performance computing (HPC), makes parallel
computations across a large number (thousands) of cores.
Domains include simulations, modeling, and 3-D rendering.
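The big data pattern of chunking a large dataset and processing the chunks in parallel can be sketched with a thread pool; real systems such as Spark or Hadoop distribute the chunks across many machines, but the map-then-reduce shape is the same:

```python
from concurrent.futures import ThreadPoolExecutor

def chunked(data, size):
    """Split a large dataset into fixed-size chunks."""
    for i in range(0, len(data), size):
        yield data[i:i + size]

def process_chunk(chunk):
    """Per-chunk work (the map step); here just a partial sum."""
    return sum(chunk)

data = list(range(1_000_000))
with ThreadPoolExecutor(max_workers=4) as pool:
    partials = list(pool.map(process_chunk, chunked(data, 100_000)))

total = sum(partials)        # the reduce step combines partial results
print(len(partials), total)  # 10 chunks, 499999500000
```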

N-Tier Architecture :
• Layers:
o Layers separate responsibilities and manage
dependencies in a system.
o Each layer has a specific responsibility.
o A higher layer can use services in a lower layer, but
lower layers cannot depend on higher layers.
• Tiers:
o Tiers are physically separated and run on separate
machines.
o Communication between tiers can be direct or through
asynchronous messaging (e.g., message queues).
o Layers can be hosted on the same tier or distributed
across multiple tiers.
• Advantages of Physical Tier Separation:
o Improves scalability and resiliency by isolating
resources.
o Adds latency due to additional network
communication.
• Traditional Three-Tier Architecture:
o Presentation tier: User interface layer.
o Middle tier (optional): Business logic and processing
layer.
o Database tier: Data storage and management layer.
• Complex Applications:
o Can include more than three tiers.
o Example: Two middle tiers encapsulating different
areas of functionality.
When to use N-tier architecture
• N-tier Architectures:

o Typically implemented as IaaS applications, with
each tier running on separate VMs.
o Can include managed services for caching,
messaging, and data storage for efficiency.
• Ideal Use Cases:
o Simple web applications.
o Migrating on-premises applications to Azure with
minimal changes.
o Unified development for on-premises and cloud
applications.
• Advantages:
o Common in traditional on-premises applications,
making them a natural fit for migrating workloads to
Azure.
Benefits
• Portability between cloud and on-premises, and between
cloud platforms.
• Lower learning curve for most developers.
• Natural evolution from the traditional application model.
• Open to heterogeneous environments (Windows/Linux)

Challenges
• It’s easy to end up with a middle tier that just does CRUD
operations on the database, adding extra latency without
doing any useful work.
• Monolithic design prevents independent deployment of
features.
• Managing an IaaS application is more work than an
application that uses only managed services.
• It can be difficult to manage network security in a large
system.
Traditional on-premises:
• Monolithic, centralized
• Design for predictable scalability
• Relational database
• Strong consistency
• Serial and synchronized processing
• Design to avoid failures (MTBF)
• Occasional big updates
• Manual management
• Snowflake servers

Modern cloud:
• Decomposed, de-centralized
• Design for elastic scale
• Polyglot persistence (mix of storage technologies)
• Eventual consistency
• Parallel and asynchronous processing
• Design for failure (MTTR)
• Frequent small updates
• Automated self-management
• Immutable infrastructure
Let's consider a couple of use cases.
Let's start with an online shopping application.
What is the challenge that an online shopping application faces?
Online shopping applications typically have peak usage during
holidays and weekends.
For example, during the Christmas period and the New Year
period, you'd have a lot of load on the application
and the rest of the time you'll have low load on the application.
Now, what was the solution before the cloud?
The solution before the Cloud was to do peak load provisioning.
What is peak load provisioning?
It is to buy infrastructure or procure infrastructure for the peak
load.
So, if this is the peak load you expect, you'd buy infrastructure to
support that kind of a load.
Now, think about this.
What would that infrastructure be doing in periods of low load?
It would just be sitting idle. Now,
let's consider another example.
Take a startup.
What is a challenge that it faces?
This challenge is also kind of a good news.
The startup suddenly becomes popular.
But what you need is more infrastructure to support the load.
How do you handle the sudden increase in the load?
What was the solution before the Cloud?
Again, it was to procure infrastructure, assuming that you would
be successful.
And what if you are not really successful?
All the infrastructure that you bought is wasted.
So, the typical challenges before the emergence of Cloud were
the high cost of procuring infrastructure.
If you want to buy infrastructure ahead of time, it is very
expensive and it also needs ahead of time
planning.
Can you really guess the future?
Can you accurately estimate the peak load?
The other challenge is low infrastructure utilization.
You can do peak load provisioning and buy infrastructure for the
peak load, but during the rest of
the time, the infrastructure is wasted.
You have low infrastructure utilization and you need a dedicated
infrastructure maintenance team to
maintain this infrastructure.
Think about a startup.
Can they afford a dedicated infrastructure maintenance team?
And because of these challenges, we started moving towards the
Cloud.
Whenever we use the cloud, we are talking about a simple
question.
How about provisioning or renting resources when we want them
and releasing them back to the Cloud when
you do not need them?
This is also called On-demand resource provisioning.
If you have high load on your application, if you have huge number
of users using the application,
you'll provision or you'll rent a lot of resources from the Cloud.
And once the load goes down, once the number of users on your
application goes down, you'll release the resources back to the
Cloud.
You can see that the number of resources you use increases with
the number of users who are using the application, and that's why
this is also called elasticity.
Now, what are the advantages of this approach?
You're trading capital expense for variable expense.
