Full Cloud Computing
(IT)
SEMESTER - I (CBCS)
CLOUD COMPUTING
SUBJECT CODE : PSIT103
© UNIVERSITY OF MUMBAI
Prof. Suhas Pednekar
Vice Chancellor
University of Mumbai, Mumbai.
Prof. Ravindra D. Kulkarni
Pro Vice-Chancellor,
University of Mumbai.

Prof. Prakash Mahanwar
Director,
IDOL, University of Mumbai.
Published by : Director,
Institute of Distance and Open Learning,
University of Mumbai,
Vidyanagari, Mumbai - 400 098.
DTP Composed : Varda Offset and Typesetters,
Andheri (W), Mumbai - 400 053.
Pace Computronics,
"Samridhi" Paranjpe 'B' Scheme, Vile Parle (E), Mumbai - 57.
Printed by : ipin Enterprises,
Tantia Jogani Industrial Estate, Unit No. 2,
Ground Floor, Sitaram Mill Compound,
J.R. Boricha Marg, Mumbai - 400 011.
CONTENTS
Chapter No. Title Page No.
Unit I
1. Introduction to Cloud Computing 1
2. Principles of Parallel and Distributed Computing 43
3. Virtualization 92
Unit II
4. Cloud Computing Architecture 134
5. Fundamental Cloud Security and Industrial Platforms and New Developments 154
Unit III
6. Specialized Cloud Mechanisms 174
7. Cloud Management Mechanisms and Cloud Security Mechanisms 194
Unit IV
8. Fundamental Cloud Architectures 213
9. Advanced Cloud Architectures 225
Unit V
10. Cloud Delivery Model Consideration 238
11. Cost Metrics and Pricing Models and Service Quality Metrics and SLAs 251
*****
Syllabus
M. Sc (Information Technology) Semester – I
Course Name: Cloud Computing Course Code: PSIT103
Periods per week (1 Period is 60 minutes): 4
Credits: 4

Evaluation System          Hours    Marks
Theory Examination         2½       60
Theory Internal            -        25

Practical No    Details
1 - 10          Practical based on above syllabus, covering entire syllabus
1
INTRODUCTION TO CLOUD
COMPUTING
Unit Structure
1.0 Objective
1.1 Introduction
1.2 Cloud computing at a glance
1.2.1 The vision of cloud computing
1.2.2 Defining a cloud
1.2.3 A closer look
1.2.4 The cloud computing reference model
1.2.5 Characteristics and benefits
1.2.6 Challenges ahead
1.3 Historical developments
1.3.1 Distributed systems
1.3.2 Virtualization
1.3.3 Service-oriented computing
1.3.4 Utility-oriented computing
1.4 Building cloud computing environments
1.4.1 Application development
1.4.2 Infrastructure and system development
1.4.3 Computing platforms and technologies
1.4.3.1 Amazon web services (AWS)
1.4.3.2 Google AppEngine
1.4.3.3 Microsoft Azure
1.4.3.4 Hadoop
1.4.3.5 Force.com and Salesforce.com
1.4.3.6 Manjrasoft Aneka
1.5 Summary
1.6 Unit End questions
1.7 Reference for further reading
1.0 OBJECTIVE
This chapter will help you understand the following concepts:
What is cloud computing?
The characteristics and benefits of cloud computing.
Its challenges.
The historical development of technologies that led to the growth of cloud computing.
Types of cloud computing models.
The different types of services in cloud computing.
Application development, infrastructure, and system development technologies for cloud computing.
An overview of the different cloud service providers.
1.1 INTRODUCTION
FIGURE 1.1. Convergence of various advances
leading to the advent of cloud computing
1.2.1 The vision of cloud computing:
Cloud computing virtually provides hardware, runtime environments, and resources to users who pay for what they use. These resources can be used for as long as the user needs them, with no upfront commitment required. The entire collection of computing resources is turned into a set of utilities that can be provisioned and composed in hours rather than days, and deployed without any maintenance costs. The long-term vision of cloud computing is that IT services will be traded as utilities on an open market, without technological or legal barriers.

In the near future we can expect to be able to identify the solution that best satisfies our needs by entering our requirements on a global digital market for cloud computing services. Such a market will make it possible to automate the discovery of services and their integration with existing software systems. The availability of a digital cloud trading platform will also enable service providers to increase their revenue. A cloud service may even become a customer of a competing cloud service in order to meet its own commitments to consumers.
improve, making it even safer through a wide variety of techniques. We will no longer regard "cloud" as the most relevant technology in itself; what will matter are the services and applications it enables. The combination of wearable devices and bring-your-own-device (BYOD) practices with cloud technology and the Internet of Things (IoT) will become such a common part of personal and working life that cloud technology will be taken for granted as an enabler.
FIGURE 1.3 Cloud computing technologies, concepts, and ideas.
(Reference from “Mastering Cloud Computing Foundations and
Applications Programming” by Rajkumar Buyya)
New York Times:
The New York Times was able to use 100 servers for 24 hours at the low standard cost of ten cents an hour per server. If the New York Times had bought even a single server for this task, the likely expense would have exceeded $890 for the hardware alone, and they would also have had to consider the cost of administration, power, and cooling. Likewise, the processing would have taken more than three months on one server. Even if the New York Times had bought four servers, as Derek Gottfrid had considered, the computation would still have taken almost a month. The quick turnaround time (fast enough to run the job twice) and vastly lower cost emphatically illustrate the superior value of cloud services.
Washington Post:
In a related but more recent event, the Washington Post was able to transform 17,481 pages of scanned document images into a searchable database in just a day using Amazon EC2. On March 19th at 10 a.m., Hillary Clinton's official White House schedule from 1993-2001 was released to the public as a large collection of scanned images (in PDF format, but not searchable). Washington Post programmer Peter Harkins used 200 Amazon EC2 instances to run OCR (Optical Character Recognition) on the scanned files and produce searchable text: "I used 1,407 hours of virtual machine time with a total cost of $144.62. We find it a positive proof of concept."
DISA:
SmugMug :
Eli Lilly:
TC3:
TC3 (Total Claims Capture & Control) is a healthcare services company providing claims management solutions. TC3 now makes use of Amazon's cloud services to allow on-demand scaling of resources and lower infrastructure costs. TC3's CTO notes, "We're making use of Amazon S3, EC2, and SQS to permit our claims processing capacity to grow and shrink as required to satisfy our service-level agreements (SLAs). There are times we require massive quantities of computing resources that far exceed our machine capacities; when these conditions occurred in the past, our natural response was to call our hardware vendor for a quote. Now, by using AWS products, we can dramatically reduce our processing time from weeks or months down to days or hours and pay much less than buying, housing, and maintaining the servers ourselves." Another particular feature of TC3's activities is that, because it provides US health-related services, it is obliged to comply with HIPAA (the Health Insurance Portability and Accountability Act). Regulatory compliance is one of the main obstacles facing corporate adoption of cloud infrastructure, so the fact that TC3 is able to comply with HIPAA on Amazon's platform is significant.
How is all of this computing made possible? In the same way in each case: IT services such as computing power, storage, and runtime environments for application development are provided on demand, on a pay-as-you-go basis. Cloud computing not only provides an opportunity to easily access IT services on demand, it also introduces a new way of thinking about IT services and resources: as utilities. Figure 1.4 provides a bird's-eye view of cloud computing.
There are three deployment models for accessing the services of a cloud computing environment: public, private, and hybrid clouds (see Figure 1.5).

The public cloud is one of the most common deployment models, in which computing services are offered by third-party vendors and consumers are able to access and purchase resources from the public cloud via the public Internet. These services can be free or on demand, meaning that consumers pay per use for the CPU cycles, storage, or bandwidth they consume. Public clouds save companies from the expensive procurement, management, and on-site maintenance of hardware and application infrastructure; all management and maintenance of the system is the responsibility of the cloud service provider. Public clouds can also be deployed faster than on-site infrastructure, on a platform that is almost infinitely scalable. Although public cloud implementations have raised security concerns, when implemented correctly the public cloud can be as secure as the most efficiently operated private cloud deployment.

A private cloud is essentially a cloud service dedicated to a single organization. With a private cloud, the advantages of cloud computing are experienced without sharing resources with other organizations. A private cloud can reside within an organization, or be operated remotely by a third party and accessed via the Internet (but, unlike a public cloud, it is not shared with others). A private cloud combines many of the advantages of cloud computing, including elasticity, scalability, and easy service delivery, with on-site control, security, and resource customization. Many companies choose a private cloud over the public cloud (cloud computing services delivered over multi-customer infrastructure) because it is an easier way, or the only way, to satisfy their regulatory compliance requirements. Others prefer a private cloud because their workloads deal with confidential information, intellectual property, personally identifiable information (PII), medical records, financial data, and other sensitive data.

A hybrid cloud is an infrastructure that contains links between a cloud managed by the user (typically referred to as a "private cloud") and a third-party cloud (typically referred to as a "public cloud"). Although the private and public parts of the hybrid cloud are linked, they remain distinct. This allows a hybrid cloud to offer the advantages of several deployment models at once. Hybrid clouds vary greatly in sophistication: some, for example, only connect the on-site infrastructure to public clouds, leaving the operations and application teams responsible for all the difficulties inherent in the two different infrastructures.
FIGURE 1.5 Major deployment models for cloud computing.
(Reference from “Mastering Cloud Computing Foundations and
Applications Programming” by Rajkumar Buyya)
Figure 1.6 The Cloud Computing Reference Model.
(Reference from “Mastering Cloud Computing Foundations and
Applications Programming” by Rajkumar Buyya)
Cloud computing is an all-encompassing term for all resources that
are hosted on the Internet. These services are classified under three main
categories: infrastructure as a service (IaaS), platform as a service (PaaS)
and software as a service (SaaS).These categories are mutually related as
outlined in Figure 1.6 which gives an organic view of cloud computing.
The model structures the broad variety of cloud computing services in a
layered view from the base to the top of the computing stack.
The big difference between PaaS and IaaS is how much control users get. Essentially, PaaS lets the provider manage everything, whereas IaaS requires more management by the customer. In general, a company that already has a software package or application for a specific purpose should prefer to install and run it in the cloud on IaaS rather than PaaS.
2. On-Demand Self-Service:
This is one of the main and useful advantages of Cloud Computing
as the user can track server uptimes, capability and network storage on an
ongoing basis. The user can also monitor computing functionalities with
this feature.
3. Easy Maintenance:
The servers are easy to manage and downtime is low; there is almost no downtime except in some cases. Cloud computing is updated frequently, and each update progressively enhances it. The updates are more system-friendly, run faster, and have their bugs patched compared with the older versions.
5. Availability:
The cloud capabilities can be changed and expanded according to
the usage. This review helps the consumer to buy additional cloud storage
for a very small price, if necessary.
6. Automatic System:
Cloud computing analyzes the data required automatically and
supports a certain service level of measuring capabilities. It is possible to
track, manage and report the usage. It provides both the host and the
customer with accountability.
7. Economical:
It is a one-off investment since the company (host) is required to
buy the storage, which can be made available to many companies, which
save the host from monthly or annual costs. Only the amount spent on the
basic maintenance and some additional costs are much smaller.
8. Security:
Cloud Security is one of cloud computing's best features. It
provides a snapshot of the data stored so that even if one of the servers is
damaged, the data cannot get lost. The information is stored on the storage
devices, which no other person can hack or use. The service of storage is
fast and reliable.
and provide customer trustworthiness. To overcome this challenge, third-
party services should be monitored and the performance, robustness, and
dependence of companies supervised.
4. Cost:
5. Downtime:
6. Lack of resources:
Figure 1.8 RightScale 2019 report revelation
8. Cloud Migration:
While it is very simple to release a new app in the cloud, transferring an existing app to a cloud computing environment is harder. According to the report, 62% said their cloud migration projects were harder than they expected. In addition, 64% of migration projects took longer than expected and 55% exceeded their budgets. In particular, organizations that migrated their applications to the cloud reported downtime during migration (37%), data synchronization issues before cutover (40%), trouble getting migration tooling to work well (40%), slow migration of data (44%), security configuration issues (40%), and time-consuming troubleshooting (47%). To solve these problems, close to 42% of IT experts said they would like to see their budgets increased, around 45% wanted to work with an in-house professional, 50% wanted a longer project schedule, and 56% wanted more pre-migration testing.
9. Vendor lock-in:
IN 1969:
J.C.R. Licklider, who was instrumental in the development of the Advanced Research Projects Agency Network (ARPANET), proposed the idea of an "Intergalactic Computer Network" or "Galactic Network" (a computer networking concept similar to today's Internet). His vision was to connect everyone around the world and enable access to programs and data from anywhere.
IN 1970:
Virtualization came into use: more than one operating system could be run simultaneously, each in a separate environment, and each virtual machine behaved like a completely different computer. (Tools such as VMware later popularized this approach.)
IN 1997:
Prof. Ramnath Chellappa, in Dallas in 1997, gave what seems to be the first known definition of "cloud computing": "a paradigm in which computing boundaries are determined by economic rationale rather than technical limits alone."
IN 1999: Salesforce.com was launched in 1999 as the pioneer of delivering client applications through a simple website. The firm showed that applications could be provided over the Internet by both specialist and mainstream software companies.
IN 2003: The first public release of Xen appeared. Xen is a software system that enables multiple virtual guest operating systems to run simultaneously on a single machine; such a system is also known as a Virtual Machine Monitor (VMM), or hypervisor.
IN 2006: The Amazon cloud service was launched. First, its Elastic Compute Cloud (EC2) allowed people to access computers in the cloud and run their own applications on them. Simple Storage Service (S3) was released next. This incorporated the pay-as-you-go model, which has become the standard pricing procedure for both users and the industry as a whole.
IN 2013: The worldwide market for public cloud services, a total of £78 billion, grew by 18.5% over 2012, with IaaS one of the fastest-growing services on the market.
Distributed computing is a computing concept that, most of the time, refers to multiple computer systems working on a single problem. In distributed computing, a single problem is broken down into many parts, and a different computer solves each part. Because the computers are interconnected, they can communicate with each other to resolve the problem. If done properly, the computers work together as a single entity.
Mainframes:
PVM and MPI are the two methods most widely used in cluster
communication.
Grid computing:
Data Grid: a system that manages large distributed data sets, used to control data and to share it among users. It builds virtual environments that facilitate dispersed, organized research. An example of a data grid is the Southern California Earthquake Center, which uses a middleware framework to construct a digital library, a distributed filesystem, and a persistent archive.
1.3.2 Virtualization:
Virtualization involves creating a virtual version of something, including virtual computer hardware, virtual storage devices, and virtual computer networks.
Web 2.0:
"Websites which emphasize user-generated content, user-
friendliness, participatory culture, and interoperability for end users" or
participatory, or participative / activist and social websites. Web 2.0 is a
new concept that was first used in common usage in 1999 about 20 years
ago. It was first coined by Darcy DiNucci and later popularized during a
conference held in 2004 by Tim O'Reilly and Dale Doughtery. It is
necessary to remember that Web 2.0 frameworks deal only with website
design and use without placing the designers with technical requirements.
Web 2.0 represents the evolution of the World Wide Web toward web apps that enable interactive data sharing, user-centered design, and worldwide collaboration. Web 2.0 is a collective term for Web-based technologies that include blogs and wikis, online networking platforms, podcasting, social networks, social bookmarking websites, and Really Simple Syndication (RSS) feeds. The main idea behind Web 2.0 is to enhance the connectivity of Web applications and enable users to access the Web easily and efficiently. Cloud computing services are, in essence, Web applications that deliver computing services over the Internet on demand. Cloud computing therefore follows the Web 2.0 approach: it provides core infrastructure for Web 2.0 applications, and it both facilitates and is improved by the Web 2.0 framework. Beneath Web 2.0 lies a set of web technologies that have recently appeared or moved to a new stage of maturity, namely Rich Internet Applications (RIAs). One of the Web's most prominent technologies, and a quasi-standard among them, is AJAX (Asynchronous JavaScript and XML). Other technologies include RSS (Really Simple Syndication), widgets (plug-in modular components), and Web services (e.g., SOAP and REST).
Figure 1.17 Service-oriented Architecture
SOA benefits:
Scalability:
Utility computing ensures that adequate IT resources are available in all situations; increased demand for a service does not degrade its quality (e.g., response time).
Price of demand:
Automation:
on demand is provided by cloud computing. Web applications are one of the classes of applications that benefit most from this feature, since their performance is mostly influenced by the workloads generated by varying user demands on the cloud services they use. Several factors have facilitated the rapid diffusion of Web 2.0. First, Web 2.0 builds on a variety of technological developments and advancements that allow users to easily create rich and complex applications, including enterprise applications, by leveraging the Internet as the main utility and user-interaction platform. Such applications are characterized by significant complexity in the processes triggered by user interactions and by the interaction among multiple steps behind the Web front end. These are the applications most sensitive to improper sizing of infrastructure and service deployment, and to workload variability.

These are all factors that affect the way we program cloud-based applications and systems. Cloud computing offers effective mechanisms to respond to rises in demand by replicating the necessary components of stressed (i.e., highly loaded) computing systems. Dynamism, scale, and volatility are the key elements that should guide the development of such systems.
While Amazon EC2 also offers bare-metal EC2 instances, most Amazon EC2 server instances are virtual machines hosted on Amazon's infrastructure. The servers are operated by the cloud provider, and you do not need to set up or maintain the hardware. A vast number of EC2 instance types are available at different prices; generally speaking, the more computing capacity you need, the larger the EC2 instance you choose. (Bare-metal cloud instances let you host a workload on a physical computer rather than a virtual machine.) Certain Amazon EC2 instance types are optimized for particular kinds of applications, for example GPU instances for the parallel processing of big data workloads.
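As a hedged illustration (not from the textbook), the short Python sketch below shows how an EC2 instance might be launched and terminated programmatically with the AWS boto3 SDK; the region, instance type, and AMI ID are placeholders, and configured AWS credentials are assumed.

# Sketch: launch and then terminate an EC2 instance with boto3 (pay-as-you-go, in code).
# Assumptions: AWS credentials are configured locally; the AMI ID below is a placeholder.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder image ID for the chosen region
    InstanceType="t2.micro",           # small, inexpensive instance type
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]
print("Launched:", instance_id)
# Charges stop accruing for the instance once it is terminated.
ec2.terminate_instances(InstanceIds=[instance_id])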
A Worker Role is any Azure role that runs applications and services that do not usually require IIS; IIS is not enabled by default in Worker Roles. Worker Roles are mainly used to support background processing for web applications and to perform tasks such as automatically compressing uploaded images, running scripts when something changes in the database, getting new messages out of a queue and processing them, and more.

In some cases, Web Role and Worker Role instances work together and are used concurrently by an application. For example, a Web Role instance can accept requests from users and then pass them on to a Worker Role instance for processing against the database.
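As a generic sketch of this Web Role / Worker Role pattern (it uses Python's standard queue and threading modules, not the Azure SDK; all names are illustrative), a front end accepts a request, enqueues it, and a background worker processes it:

# Front end (Web Role) enqueues work; background worker (Worker Role) processes it.
# A real Azure deployment would typically use an Azure Storage queue instead of queue.Queue.
import queue
import threading

work_queue = queue.Queue()

def web_role(request):
    work_queue.put(request)        # accept the user request, enqueue it, return quickly
    return "accepted"

def worker_role():
    while True:
        item = work_queue.get()    # pick up background work (e.g., compress an image)
        if item is None:           # shutdown signal
            break
        print("processed:", item)

worker = threading.Thread(target=worker_role, daemon=True)
worker.start()
web_role("image-42.png")
work_queue.put(None)               # tell the worker to stop
worker.join()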
1.4.3.4 Hadoop:
SUMMARY
*****
2
PRINCIPLES OF
PARALLEL AND DISTRIBUTED
COMPUTING
Unit Structure
2.0 Objective
2.1 Eras of computing
2.2 Parallel vs. distributed computing
2.3 Elements of parallel computing
2.3.1 What is parallel processing?
2.3.2 Hardware architectures for parallel processing
2.3.2.1 Single-instruction, single-data (SISD) systems
2.3.2.2 Single-instruction, multiple-data (SIMD) systems
2.3.2.3 Multiple-instruction, single-data (MISD) systems
2.3.2.4 Multiple-instruction, multiple-data (MIMD) systems
2.3.3 Approaches to parallel programming
2.3.4 Levels of parallelism
2.3.5 Laws of caution
2.4 Elements of distributed computing
2.4.1 General concepts and definitions
2.4.2 Components of a distributed system
2.4.3 Architectural styles for distributed computing
2.4.3.1 Component and connectors
2.4.3.2 Software architectural styles
2.4.3.3 System architectural styles
2.4.4 Models for interprocess communication
2.4.4.1 Message-based communication
2.4.4.2 Models for message-based communication
2.5 Technologies for distributed computing
2.5.1 Remote procedure call
2.5.2 Distributed object frameworks
2.5.2.1 Examples of distributed object frameworks
2.5.3 Service-oriented computing
2.5.3.1 What is a service?
2.5.3.2 Service-oriented architecture (SOA)
2.5.3.3 Web services
2.5.3.4 Service orientation and cloud computing
2.6 Summary
2.7 Review questions
2.8 Reference for further reading
2.0 OBJECTIVE
The computing components (hardware, software, and infrastructure) that enable the delivery of cloud computing services are collectively referred to as a cloud system or cloud computing technology.
For example:
In a narrow sense, distributed computing is limited to programs whose components are shared among computers in a geographically limited area. Broader definitions include both shared tasks and shared program components. In its broadest sense, distributed computing simply means that something is shared among many systems, which may also be located in different places.
5. There is indeed extensive R&D work on development tools and
environments and parallel processing technology is mature, and
commercially exploitable.
6. Essential networking technology advancement paves the way for
heterogeneous computing.
Examples:
Vector Pipelines: IBM 9000, Cray X-MP, Y-MP & C90, Fujitsu VP,
NEC SX-2, Hitachi S820, ETA10
Figure 2.6 : Single-instruction, multiple-data (SIMD) architecture.
Single Data: A single stream of data is fed into multiple processing units.
Example Z = sin(x)+cos(x)+tan(x)
On the same data set the system performs various operations. For most
applications, machines designed using MISD are not useful, some are
designed, but none of them are commercially available.
Figure 2.9 shared (left) and distributed (right)
memory MIMD architecture.
A distributed system is one in which components located at
networked computers communicate and coordinate their actions only by
passing messages.
Architectural Styles:
The first class is about the software's logical structure; the second
class contains all types representing the physical structure of the software
systems represented by their major components.
2.4.3.1 Component and connectors:
Component-and-connector views describe models consisting of elements that have a presence at runtime, such as processes, objects, clients, servers, and data stores. In addition, component-and-connector models include, as elements, the interaction mechanisms such as communication links and protocols, information flows, and shared storage access. These interactions are often carried out across complex infrastructure, such as middleware systems, communication channels, and process schedulers. A component is a unit of behavior: its description defines what it can do and what it needs. A connector indicates how components are linked, typically through relationships such as data flow or control flow; a connector is an interaction mechanism.
2.4.3.2 Software architectural styles:
Styles and patterns in software architecture define how to organize
the system components to build a complete system and to satisfy the
customer's requirements. A number of software architectural styles and
patterns are available in the software industry, so that it is necessary to
understand the special design of the project.
These models form the basis on which distributed systems are built
logically and discussed in the following sections.
Data centered architectures:
At the center of this architecture is a data store that is accessed by all the other components, which update, add, delete, or modify the data present in the store. Figure 2.14 shows a typical data-centered style, in which a central repository is accessed by client software. A variation of this approach turns the repository into a blackboard, which notifies registered clients whenever data they are interested in changes. This data-centered architecture facilitates integrability: existing components can be changed and new client components can be added to the architecture without the permission or concern of the other clients. Clients may use the blackboard mechanism to transfer data among themselves.
Batch Sequential
Batch sequential processing, typical of the 1970s, treats a computation as a sequence of separate programs: each program runs to completion and its data is transferred to the next program as an aggregate. This is a typical paradigm for data processing.
The figure above shows the sequence of the pipe-and-filter style. All filters are processes that run concurrently, which means they can run as separate threads or coroutines, or be located on different machines entirely.
Every pipe is connected to a filter and has its own role in the filter's operation. The filters are robust, and pipes can be added and removed at runtime.
A filter reads data from its input pipes, performs its function on that data, and places the result on all of its output pipes. If there is not enough data on the input pipes, the filter simply waits.
Filter:
A filter is a component.
Its interfaces take in a variety of inputs and emit a variety of outputs.
It processes and refines the input data.
Filters are independent entities.
There are two ways to create a filter:
1. Active Filter
2. Passive Filter
An active filter drives the data flow on its pipes.
A passive filter has its data flow driven by the pipes (data is pushed or pulled by its neighbors).
A filter does not share state with other filters.
A filter does not know the identity of its upstream and downstream filters.
Filters run in separate threads; these may be hardware or software threads or coroutines.
The style has low coupling and offers flexibility through both sequential and parallel execution.
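A minimal sketch of the pipe-and-filter style (illustrative only; names and data are not from the textbook), using Python generators as filters connected by implicit pipes:

# Each filter reads from its input "pipe", transforms the data, and yields to its output.
def source(lines):                 # producer filter
    for line in lines:
        yield line

def strip_blanks(pipe):            # filter: discard empty lines
    for line in pipe:
        if line.strip():
            yield line

def to_upper(pipe):                # filter: transform the data
    for line in pipe:
        yield line.upper()

def sink(pipe):                    # consumer filter
    for line in pipe:
        print(line)

raw = ["hello", "", "pipe and filter"]
sink(to_upper(strip_blanks(source(raw))))   # prints: HELLO, then PIPE AND FILTER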
Interpreter Style:
The interpreter is an architectural style that suits applications for which the most appropriate language or machine to execute the solution is not directly available. The style comprises a few parts: the program we are trying to run, the interpreter that executes it, the program's current state, and the memory areas that hold the program, its current state, and the interpreter's internal state. Procedure calls between elements and direct memory access are the connectors for the interpreter architectural style.
The interpreter has four components:
Interpretation engine: does the interpreter's work.
Program memory area: contains the pseudo-code to be interpreted.
Engine state area: records the current state of the interpretation engine.
Program state area: tracks the progress of the source code being interpreted.
Output: the output of the program is placed in the program state area as the data is interpreted, and passed to the interfacing part of the system. This model is quite useful for designing virtual machines for high-level programming languages (Java, C#) and scripting languages (Awk, Perl, and so on).
Application portability and flexibility across different platforms.
Virtualization: machine code for one hardware architecture can be executed on another via a virtual machine.
System behavior defined by a custom language or data structure, which facilitates the development and comprehension of software.
Support for dynamic change (efficiency): usually the interpreter only has to translate the code to an intermediate representation (or not translate it at all), so it takes considerably less time to test a change.
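A minimal interpreter sketch in Python (illustrative only): an interpretation engine fetches pseudo-code instructions from a program memory area and updates the program state, mirroring the components listed above.

# A tiny stack-based interpreter: program memory holds pseudo-code; the state holds
# the stack and the program counter maintained by the interpretation engine.
def interpret(program):
    state = {"stack": [], "pc": 0}
    while state["pc"] < len(program):
        op, *args = program[state["pc"]]      # fetch the next pseudo-code instruction
        if op == "PUSH":
            state["stack"].append(args[0])
        elif op == "ADD":
            b, a = state["stack"].pop(), state["stack"].pop()
            state["stack"].append(a + b)
        elif op == "PRINT":
            print(state["stack"][-1])
        state["pc"] += 1                      # advance the engine
    return state

interpret([("PUSH", 2), ("PUSH", 3), ("ADD",), ("PRINT",)])   # prints 5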
Top-Down Style:
The top-down approach is basically the breakdown of a system in order to gain insight into its compositional sub-structures, in a reverse-engineering manner (it is also known as stepwise design or stepwise refinement, and is sometimes applied as decomposition). In a top-down approach, an overview of the system is formulated and all first-level subsystems are identified, but not detailed. Each subsystem is then refined further, often across several additional subsystem levels, until the full specification has been reduced to small elements. The "black boxes" defined along the way make a top-down design easier to manipulate. Nonetheless, black boxes may not be specified precisely enough to validate the model effectively. The top-down approach starts with the big picture and divides it into smaller pieces.
Object-Oriented Style:
Object-oriented programming is a programming-language paradigm structured around objects and data rather than actions and logic. A traditional procedural program is organized to take input data, process it, and produce results; such a program is centered on logic rather than data. Object-oriented programming instead focuses on objects and their manipulation rather than on the logic that manipulates them.
Event systems:
Communication process:
The communicating-processes architectural style is also known as the Client-Server architecture.
Client: initiates a call to the server, requesting some service.
Server: provides the requested data to the client.
When the server operates synchronously, it returns the result of the data access directly to the caller.
Advantages:
Easier to Build and Maintain
Better Security
Stable
Disadvantages:
Single point of failure
Less scalable
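A minimal client-server sketch using Python sockets (illustrative; the address, port, and the trivial upper-casing "service" are assumptions, not from the textbook): the client initiates a request and the server returns the result synchronously.

# One-shot client-server exchange over a local TCP socket.
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 5050     # assumed local address for the example

def server():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen()
        conn, _ = srv.accept()               # wait for one client
        with conn:
            request = conn.recv(1024)        # receive the client's request
            conn.sendall(request.upper())    # reply synchronously with the result

threading.Thread(target=server, daemon=True).start()
time.sleep(0.5)                              # give the server time to start listening

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, PORT))
    cli.sendall(b"hello from the client")
    print(cli.recv(1024).decode())           # prints: HELLO FROM THE CLIENT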
Any new node first joins the network. Upon joining, it may either request or provide a service. A node's initiation phase (how it joins) can vary depending on the network's implementation. There are two ways a new node can learn what the other nodes provide.

In the centralized approach, the new node must register with the centralized lookup server and list the services it provides on the network. Any node that needs a service simply contacts the centralized lookup server, which directs it to the appropriate service provider.
Decentralized System:
Message Passing:
In this model the message is the main abstraction: the entities involved exchange explicitly encoded data and information in the form of messages. The structure and content of a message vary according to the model. The Message Passing Interface (MPI) and OpenMP are significant examples of this type of model.
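A small message-passing sketch (illustrative; it uses Python's multiprocessing queues rather than MPI itself): two processes exchange explicitly encoded messages in the send/receive style described above.

# Parent and worker processes communicate only by exchanging messages through queues.
from multiprocessing import Process, Queue

def worker(inbox, outbox):
    msg = inbox.get()                         # blocking receive
    outbox.put({"reply": msg["data"] * 2})    # send an explicitly encoded reply

if __name__ == "__main__":
    to_worker, from_worker = Queue(), Queue()
    p = Process(target=worker, args=(to_worker, from_worker))
    p.start()
    to_worker.put({"data": 21})               # send a structured message
    print(from_worker.get())                  # prints: {'reply': 42}
    p.join()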
Distributed Objects:
This is an implementation of the Remote Procedure Call (RPC) model in an object-oriented context: remote invocation is extended from procedures to the methods of objects. Each process registers a set of interfaces that are accessible remotely, and client processes can request and invoke the methods exposed through these interfaces. The underlying runtime infrastructure transforms the local method invocation into a request to the remote process and collects the execution results. The interaction between the caller and the remote process takes place via messages. Whereas the RPC model is stateless by design, distributed object models add the complexity of object state management and lifetime. Common Object Request Broker Architecture (CORBA), Component Object Model (COM, DCOM, and COM+), Java Remote Method Invocation (RMI), and .NET Remoting are some of the most important examples of distributed object infrastructures.
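As an illustration of this style (a sketch using Python's standard xmlrpc modules rather than CORBA, RMI, or .NET Remoting; the object, methods, and address are assumptions), a server exposes an object's methods and a client invokes them through a proxy, with the runtime turning the local-looking call into a remote request:

# Distributed-object flavor of RPC: register an object remotely, invoke it via a proxy.
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy
import threading
import time

class Calculator:
    def add(self, a, b):
        return a + b

def serve():
    server = SimpleXMLRPCServer(("127.0.0.1", 8000), logRequests=False)
    server.register_instance(Calculator())    # expose the object's methods remotely
    server.serve_forever()

threading.Thread(target=serve, daemon=True).start()
time.sleep(0.5)                                # give the server time to start

proxy = ServerProxy("http://127.0.0.1:8000")   # client-side stub/proxy
print(proxy.add(2, 3))                         # looks local, executes remotely -> 5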
Active objects:
Web Services:
Web service technology offers an alternative to RPC frameworks by working over HTTP, thus allowing components built with different technologies to interact. A web service is exposed as a remote object hosted on a web server, and invocations are converted into HTTP requests packaged using a specific protocol. It should be remembered that the concept of a message is the basic abstraction of interprocess communication and is used either implicitly or explicitly.
2.4.4.2 Models for message-based communication
Point-to-point message model:
Figure 2.20 Publish-and-subscribe message model
Both publishing and subscribing clients "register" with the broker, which manages the communication paths, authenticates clients, and performs other housekeeping activities.
2. When the procedure is completed and results are produced, its results
are returned to the calling environment where it resumes to execute as
if back from a regular procedure call.
Working of RPC:
7. The client stub unpacks the return parameters and returns execution to the caller.
The server program creates a remote object and provides the client
with a reference to that object (using the registry).
The client program requests remote objects and tries to invoke its methods
on the server.
The following diagram shows the architecture of an RMI application.
Transport Layer: this layer connects the client with the server. It maintains existing connections and also creates new ones.
Stub: the stub is the client-side proxy of the remote object. It resides in the client system and serves as the client's gateway.
Skeleton: this is the corresponding object on the server side. The stub communicates with the skeleton to pass requests on to the remote object.
RRL (Remote Reference Layer): this is the layer that manages the references to remote objects held by the client.
On the server side, the packed parameters are unbundled and the appropriate method is invoked; this process is referred to as unmarshalling.

The RMI registry is a namespace in which all server objects are placed. Each time an object is created, the server registers it with the RMI registry (using the bind() or rebind() methods). Registered objects are identified by a unique name known as the bind name.

To invoke a remote object, the client needs a reference to that object. The client retrieves the object from the registry by its bind name (using the lookup() method).
The basic workflow of .NET Remoting can be seen in the figure above. When a client calls a remote method, it does not call the method directly; the client receives a proxy for the remote object and uses the proxy to call the remote object's method. When the proxy receives the method call from the client, the call is encoded with the corresponding formatter (binary formatter or SOAP formatter) specified in the configuration file and sent to the server over the selected channel (TcpChannel or HttpChannel). The server-side channel accepts the request from the proxy and forwards it to the Remoting system on the server, where the remote object's methods are located and invoked on the remote object. Once the remote procedure has executed, the result of the call is returned to the client in the same way. Before an object instance of a remotable type can be accessed, it must be created and initialized in a process known as activation. Activation is classified into two types: client-activated objects and server-activated objects.
The architecture reduces interaction between clients, which makes it easier to scale.

Using Service-Oriented Architecture to reduce costs: with a Service-Oriented Architecture it is possible to reduce costs while still "maintaining a desired performance." It allows businesses to limit the amount of analysis required to create custom solutions.
You can search for web services across the network and invoke them as appropriate. When invoked, a web service provides the client with the functionality it exposes.

The primary component of a web service is the data transmitted between the client and the server, namely XML. XML is a counterpart to HTML: an intermediate language that many programming languages can easily understand, so that when applications talk to each other, they talk in XML. This gives applications written in different programming languages a common interface for interacting with one another. Web services use SOAP (Simple Object Access Protocol) to transfer XML data between applications, and the data is transmitted over standard HTTP. The data transmitted between the web service and the application is packaged in a SOAP message, and a SOAP message is just XML. Because this document is written in XML, the client application calling the web service can be written in any programming language.

Every SOAP document needs a root element called the <Envelope>; the root element is the first element of an XML document. The envelope is in turn divided into two parts: the first is the header and the second is the body.
The header contains the routing data, that is, the information about the destination to which the XML document should be sent.

A web service cannot be used if it cannot be found; the client invoking the web service must know where the web service is located.
Second, the client application needs to know what the web service does in order to invoke the right web service. This is achieved using WSDL, the Web Services Description Language. The WSDL file is another XML file that essentially tells the client application what the web service does. By using the WSDL document, the client application understands where the web service is located and how to use it.
<definitions>
<message name="TutorialRequest">
<part name="TutorialID" type="xsd:string"/>
</message>
<message name="TutorialResponse">
<part name="TutorialName" type="xsd:string"/>
</message>
<portType name="Tutorial_PortType">
<operation name="Tutorial">
<input message="tns:TutorialRequest"/>
<output message="tns:TutorialResponse"/>
</operation>
</portType>
<!-- binding/operation opening elements reconstructed (missing in the source); names are illustrative -->
<binding name="Tutorial_Binding" type="tns:Tutorial_PortType">
<soap:binding style="rpc"
transport="http://schemas.xmlsoap.org/soap/http"/>
<operation name="Tutorial">
<input>
<soap:body
encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"
namespace="urn:examples:Tutorialservice"
use="encoded"/>
</input>
<output>
<soap:body
encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"
namespace="urn:examples:Tutorialservice"
use="encoded"/>
</output>
</operation>
</binding>
</definitions>
The main aspects of the above WSDL declaration are the following:
<portType>: this defines the Web service operation, which in our case is called Tutorial. The operation uses two messages: one input and one output.
<binding>: this element specifies the protocol that is used, which in our case is SOAP over HTTP (http://schemas.xmlsoap.org/soap/http). Additional details about the body of the operation are also specified, including the namespace and the encoding of the message.
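As a hedged sketch, assuming the Tutorial service described by this WSDL were actually deployed at some reachable URL, a Python client could consume it with the third-party zeep SOAP library, which reads the WSDL and builds the SOAP envelope automatically:

# Hypothetical client for the Tutorial operation declared in the WSDL above.
# Assumptions: the zeep package is installed and the WSDL URL below is a placeholder.
from zeep import Client

client = Client("http://example.com/tutorialservice?wsdl")   # placeholder WSDL location
tutorial_name = client.service.Tutorial("1")                 # sends TutorialID, receives TutorialName
print(tutorial_name)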
Universal Description, Discovery, and Integration (UDDI):
We now also realize why web services first came about, which
were to provide a platform to talk to each other with different applications.
But let's discuss some other advantages as to why web services are
relevant.
2.5.3.4 Service orientation and cloud computing:
SUMMARY
In this chapter we introduced parallel and distributed computing as a framework within which cloud computing can be properly described. Parallel and distributed computing emerged as a way to solve large problems, first by using several processing elements and later by using multiple networked computing nodes.
3. Explain the major categories of parallel computing systems.
4. Explain the different levels of parallelism that can be obtained in a
computing system
5. What is a distributed system? What are the components that
characterize it?
6. What is an architectural style and how does it handle a distributed
system?
7. List the most important software architectural styles.
8. What are the fundamental system architectural styles?
9. Describe the most important model for message-based communication.
10. Discuss RPC and how it enables interprocess communication.
11. What is CORBA?
12. What is service-oriented computing?
13. What is market-oriented cloud computing?
*****
3
VIRTUALIZATION
Unit Structure
3.0 Objective
3.1 Introduction
3.2 Major Components of Virtualization Environment
3.2.1 Characteristics of Virtualization
3.3 Taxonomy of virtualization techniques
3.3.1 Execution virtualization
3.3.2 Machine reference model
3.3.2.1 Instruction Set Architecture (ISA)
3.3.2.2 Application Binary Interface
3.4 Security Rings and Privileged Mode
3.4.1 Ring 0 (most privileged) and 3 (least privileged)
3.4.2 Rings 1 and 2
3.5 Hardware-level virtualization
3.6 Hypervisors
3.6.1 Type 1 Hypervisor
3.6.2 Type 2 Hypervisor
3.6.3 Choosing the right hypervisor
3.6.7 Hypervisor Reference Model
3.7 Hardware virtualization techniques
3.7.1 Advantages of Hardware-Assisted Virtualization
3.8 Full virtualization
3.9 Paravirtualization
3.10 Programming language-level virtualization
3.10.1 Application-level virtualization
3.11 Other types of virtualization
3.11.1 Storage virtualization
3.11.2 Network Virtualization
3.11.3 Desktop virtualization
3.11.4 Application server virtualization
3.12 Virtualization and cloud computing
3.12.1 Pros and cons of virtualization
3.12.1.1 Advantages of virtualization
3.12.1.2 Disadvantages of virtualization
3.13 Technology examples
3.13.1 Xen: paravirtualization
3.13.2 VMware: full virtualization
3.13.3 Full Virtualization and Binary Translation
3.13.4 Virtualization solutions
3.13.5 End-user (desktop) virtualization
3.13.6 Server virtualization
3.14 Microsoft Hyper-V
3.14.1 Architecture
3.15 Summary
3.16 Unit End Questions
3.17 Reference for further reading
3.0 OBJECTIVE
Virtualization abstracts the hardware so that common resources can be shared among multiple workloads. A variety of workloads can be co-located on shared virtualized hardware while maintaining full isolation, migrating freely across infrastructures, and scaling as required.
3.1 INTRODUCTION
Cloud virtualization turns server operating systems and storage devices into a virtual platform. It enables a single physical resource instance or application to be shared among several users by presenting each of them with their own virtual machine. Cloud virtualization also transforms how work is administered, improving the scalability, economics, and efficiency of traditional computing.
Increased performance and computing capacity:
A unique corporate data center is, in most instances, unable to
compete in terms of security, performance, speed and cost - effectiveness
with the network of data centers provided by service provider. Since the
majority of services are available on demand, in a short period of time
users can also have large amounts of computing resources, with
tremendous ease and flexibility and without any costly investment.
In turn, Cloud services offer you the ability to free up memory and
computing power on your individual computers through remote hosting of
platforms, software and databases. The obvious result, in fact, is a
significant performance improvement.
Lack of space:
Data centers are continuously expanding to meet the need for extra infrastructure, be it storage or computing power. Organizations such as Google and Microsoft expand their infrastructure by constructing data centers as large as football grounds that contain thousands of nodes. While this is feasible for the big IT players, other companies are often unable to build an additional data center to accommodate extra resource capacity. This situation, together with underused hardware resources, led to the spread of server consolidation, for which virtualization is the fundamental technique.
Greening initiatives:
Virtualization is a core technology for the deployment of a cloud-
based infrastructure to run multiple operating system images
simultaneously on a single physical server. As a consolidation enabler,
server virtualization reduces the overall physical server size, with the
green benefits inherent.
1. GUEST:
As usual, the guest denotes the system component that interacts with the virtualization layer rather than with the host machine directly. Guests are usually presented as one or more virtual disks and a VM definition file. Virtual machines are centrally managed by a host application that sees and manages each virtual machine as a separate application.
2. Hosts:
The host is the original environment in which the guest runs and is managed. Each guest uses a share of the common resources that the host provides. The operating system works as the host, managing the physical resources and providing device support.
3. Virtualization Layer
The virtualization layer ensures that the same, or a different, environment in which the guest operates is recreated. It is an extra layer of abstraction between the network and storage hardware, the computing resources, and the applications running on them. Traditionally, a single operating system runs per machine, which is very inflexible compared with virtualization.
1. Increased Security:
2. Managed Execution:
Figure: 3.2 Functions enabled by managed execution
(Reference from “Mastering Cloud Computing Foundations and
Applications Programming” by Rajkumar Buyya)
1. Sharing:
2. Aggregation:
3. Emulation:
4. Isolation:
In addition to these features, performance tuning is another important capability enabled by virtualization. It has become a reality owing to the considerable progress in the software and hardware that support virtualization. By finely adjusting the properties of the resources exposed to the virtual environment, it becomes easier to control the guests' performance. This offers a means of effectively implementing a quality-of-service (QoS) infrastructure.
5. Portability:
FIGURE 3.3 A taxonomy of virtualization techniques.
(Reference from “Mastering Cloud Computing Foundations and
Applications Programming” by Rajkumar Buyya)
FIGURE 3.4 A machine reference model
(Reference from “Mastering Cloud Computing Foundations and
Applications Programming” by Rajkumar Buyya)
An ABI is defined by the troika of kernel, toolchain, and architecture, and everybody involved must agree on it. Architectures generally define a preferred or standardized ABI, and operating systems abide by that standard more or less closely. Such information is usually documented in the architecture's reference manual. For instance, x86-64,

There are two main advantages to the layered model. First, it protects against system crashes: errors in the higher rings (which have less access) can usually be recovered from. Because only Ring 0 has direct access to the memory and the CPU, a process crashing in an outer ring can be restarted without data loss or a CPU fault. Second, it provides enhanced security. A process must request permission from the operating system to execute instructions that require greater access to resources, and the OS can then decide whether or not to grant the request. This selection process helps prevent unwanted or malicious behavior on the system.
FIGURE 3.5 Security rings and privilege modes (Reference from Mastering Cloud
Computing Foundations and Applications Programming” by Rajkumar Buyya)
3.4.1 Ring 0 (most privileged) and 3 (least privileged):
3.6 HYPERVISORS
A hypervisor is the key piece of software that enables virtualization. It abstracts the guest machines, and the operating systems they run, from the actual hardware.

The hypervisor creates a virtualization layer over the CPU/processor, RAM, and other physical resources, separating the virtual machines you create from the physical hardware.

The machine on which we install the hypervisor is called the host machine, as opposed to the virtual guest machines that run on top of it. The hypervisor emulates the resources made available to the guest machines. Whichever operating system boots inside a virtual machine believes that real physical hardware is at its disposal; from the VM's viewpoint, there is no difference between the physical and the virtual environment. Guest machines do not know that they were created by a hypervisor or that they share the available computing power. The VMs run simultaneously on the hardware that powers them and are therefore fully dependent on its stable operation.
Type 1 Hypervisor (also called bare metal or native)
Type 2 Hypervisor (also known as hosted hypervisors)
3.6.1 Type 1 Hypervisor:
A bare-metal hypervisor (type 1) is a software layer which is
installed directly above a physical server and its underlying hardware.
Examples of Type 1 hypervisors include VMware ESXi, Citrix XenServer
and Microsoft Hyper-V hypervisor.
There is no intermediate software or operating system, hence the name bare-metal hypervisor. Because a Type 1 hypervisor does not run inside Windows or any other operating system, it is proven to provide excellent performance and stability.

Type 1 hypervisors are themselves a very basic OS on which virtual machines run. The physical machine hosting the hypervisor is used for server virtualization only; you cannot use it for anything else. Type 1 hypervisors are mostly found in enterprise environments.
1. Understand your needs: the data center (and your job) for the
company and its applications. In addition to the requirements of your
company, you (and your IT staff) also have your own requirements.
a. Flexibility
b. Scalability
c. Usability
d. Availability
e. Reliability
f. Efficiency
g. Reliable support
3.6.7 Hypervisor Reference Model:
Dispatcher:
The dispatcher acts as the monitor entry point, rerouting virtual machine
instance instructions to one of the other two modules.
ALLOCATOR:
The allocator is responsible for deciding the system resources to be given
to the virtual machine instance. It indicates that the dispatcher invokes the
allocator whenever virtual machine attempts to execute instructions that
modify the machine resources associated with the virtual machine.
Interpreter:
Equivalence / Fidelity:
A program running under the VMM should exhibit behavior essentially identical to that shown when running directly on an equivalent machine.

Efficiency / Performance:
A statistically dominant fraction of machine instructions must be executed without VMM intervention.

Privileged instructions:
Those that trap when the processor is in user mode and do not trap when it is in system (supervisor) mode.

The main result of Popek and Goldberg's analysis can then be stated. Intuitively, the theorem says that, for a VMM to be built, it is sufficient that all instructions that could affect the correct functioning of the VMM (the sensitive instructions) always trap and transfer control to the VMM. This guarantees the resource-control property. Non-privileged instructions must instead be executed natively (i.e., efficiently), which also preserves the equivalence property.
With hardware-assisted virtualization, the VMM can efficiently virtualize the entire x86 instruction set by handling these sensitive instructions using the classic hardware trap-and-emulate model rather than software techniques. With hypervisors that support this technology, guest OSes can access the CPU in the same way as they would when running on a physical host. This makes it possible to virtualize guest OSes without any changes.
FIGURE 3.10 New level of privilege in x86 architecture
3.7.1 Advantages of Hardware-Assisted Virtualization:
This implies that OS kernels need not be tweaked, as they must be in paravirtualization, and can run as is. At the same time, the hypervisor no longer has to perform the inefficient binary translation of the sensitive instructions. Thus, hardware-assisted virtualization not only satisfies the Popek and Goldberg criteria for full virtualization, but also improves efficiency, because the sensitive instructions are now trapped and emulated directly in the hardware.
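As a small practical aside (a convenience check that assumes a Linux host, not part of any hypervisor), one can see whether the CPU advertises hardware-assisted virtualization by looking for the vmx (Intel VT-x) or svm (AMD-V) flags in /proc/cpuinfo:

# Report whether /proc/cpuinfo advertises hardware virtualization extensions (Linux only).
def hw_virtualization_flags(cpuinfo_path="/proc/cpuinfo"):
    flags = set()
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                flags.update(line.split(":", 1)[1].split())
    return {"vmx", "svm"} & flags

found = hw_virtualization_flags()
print("Hardware-assisted virtualization:", ", ".join(sorted(found)) if found else "not advertised")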
3.9 PARAVIRTUALIZATION
FIGURE 3.12 Paravirtualization
FIGURE 3.13 Hypercalls to virtualization in Paravirtualization
Paravirtualization Advantages:
By diverting an application's files into a single package rather than several files scattered around the OS, the app can easily run on another device, and apps that were previously incompatible can now operate side by side.
Multiple Desktops: Users can control multiple desktops from the
same client PC, which can handle different tasks.
3.12.1.1 Advantages of virtualization:
Scalability:
A virtual machine is just as scalable as any other solution, if not more so. One of the key benefits of virtualization is that several systems can be consolidated. It offers a degree of flexibility that is not possible with a physical, bare-metal system. This flexibility has a direct impact on how quickly and efficiently companies can grow. Virtualization allows data migration, upgrades, and instant performance improvements to be delivered into new VMs in a short time.
Consolidation of servers:
Virtual machines can replace physical machines at a ratio of almost 10:1. This reduces the need for physical computers while keeping systems operating efficiently and within specification. Such consolidation minimizes costs and the physical space required for computer systems.
Virtual WorkStations:
Virtualization provides the global versatility to allow multiple systems to
be run on a single computer to operate the systems remotely. VM also
reduces all hardware and desktop footprint.
Virtualization does not work well for any applications that require
physical hardware. An example is something using a dongle or other
hardware attached. Since the program needs to be a physical piece,
virtualization would cause more headache than remaining on a physical
system.
Testing is Critical:
Here, Ring 0 is the most privileged level and Ring 3 the least privileged. Nearly every OS, except OS/2, uses only two of these levels: Ring 3 for user programs and non-privileged OS code, and Ring 0 for kernel code. This gives Xen the opportunity to implement paravirtualization, which makes it possible to leave the Application Binary Interface (ABI) unchanged and thus, from an application's point of view, to switch transparently to Xen-virtualized solutions.
The structure of the x86 instruction set allows code executing in Ring 3 to switch to Ring 0 (kernel mode). Such an operation is performed at the hardware level and, in a virtualized system, can therefore lead to a trap or a silent fault, preventing the guest OS, which runs in Ring 1, from operating correctly.
In practice, this condition arises with a subset of the system calls. To eliminate it, the operating system implementation must be modified and all of the critical system calls re-implemented as hypercalls. Hypercalls are the special calls exposed by the Xen virtual machine interface; through them, the Xen hypervisor catches the call, handles it, and returns control to the guest OS with the aid of the supplied handler.
Paravirtualization requires changes to the OS code base; as a result, not every operating system can be used as a guest in a Xen-based environment. More precisely, this condition holds where hardware-assisted virtualization cannot be leveraged, which would otherwise allow the hypervisor to run at a privilege level above Ring 0 while the guest OS remains in Ring 0. Xen therefore demonstrates some drawbacks with respect to legacy hardware and legacy operating systems.
In reality, legacy operating systems cannot reasonably be changed to run in Ring 1, since their code base is inaccessible and, at the same time, the underlying hardware offers no support for running them in a mode more privileged than Ring 0. Open-source operating systems such as Linux can be modified easily because their code is open, and Xen provides full support for their virtualization, whereas Windows components are essentially not compatible with Xen unless hardware-assisted virtualization is available. With the introduction of new OS releases and new hardware that supports x86 virtualization, the issue has been resolved.
Advantages of Binary Translation:
This method of virtualization provides virtual machines with the best isolation and security.
Many guest operating systems can run concurrently on the same hardware in a highly isolated way.
It is the only approach that virtualizes sensitive and privileged instructions without requiring hardware support or operating system support.
3.14.1 Architecture:
Hyper-V implements virtual machine isolation in terms of partitions. A partition, supported by the hypervisor, is a logical isolation unit in which each guest operating system executes. A hypervisor instance must have at least one parent partition running Windows Server (2008 or later). The virtualization software runs in the parent partition and has direct access to the hardware devices. The parent partition creates the child partitions that host the guest operating systems, using the hypercall API, the application programming interface exposed by Hyper-V.
A child partition has no access to the physical processor, nor does it handle its real interrupts. Instead, it has a virtual view of the processor and runs in a guest virtual address space that, depending on the configuration of the hypervisor, may not be the entire virtual address space. Hyper-V may expose only a subset of the processors to each partition, depending on the VM configuration. The hypervisor manages the interrupts to the processor and redirects them to the respective partition with the aid of a logical Synthetic Interrupt Controller (SynIC).
SUMMARY
*****
UNIT II
4
CLOUD COMPUTING ARCHITECTURE
Unit Structure
4.0. Objective
4.1 Introduction to Cloud Computing Architecture
4.1.1 Architecture
4.2 Fundamental concepts and models
4.3 Roles and Boundaries
4.3.1 Cloud Provider
4.3.2 Cloud Consumer
4.3.3 Cloud Service Owner
4.3.4 Cloud Resource Administrator
4.4 Cloud Characteristics
4.5 Cloud Delivery models.
4.5.1 On-Demand Usage
4.5.2 Ubiquitous Access
4.5.3 Multitenancy
4.5.4 Elasticity
4.5.5 Measured Usage
4.5.6 Resiliency
4.6 Cloud Deployment models
4.6.1 Infrastructure-as-a-Service (IaaS)
4.6.2 Platform-as-a-Service (PaaS)
4.6.3 Software-as-a-Service (SaaS)
4.7 Economics of the cloud
4.7.1 Public Clouds
4.7.2 Community Clouds
4.7.3 Private Clouds
4.7.4 Hybrid Clouds
4.8 Open challenges
4.9 Unit End Questions
4.10 References
4.0. OBJECTIVE
4.1.1 Architecture:
It is possible to organize all the concrete realizations of cloud computing into a layered view covering the entire stack (see Figure 4.1), from hardware appliances to software systems. Cloud resources are harnessed to offer the "computing horsepower" required for providing services. Often, this layer is implemented using a datacenter in which hundreds or thousands of nodes are stacked together. Cloud infrastructure can be heterogeneous in nature, because a variety of resources, such as clusters and even networked PCs, can be used to build it. Moreover, database systems and other storage services can also be part of the infrastructure.
The physical infrastructure is managed by the core middleware, the objectives of which are to provide an appropriate runtime environment for applications and to make the best use of resources. At the bottom of the stack, virtualization technologies are used to guarantee runtime environment customization, application isolation, sandboxing, and quality of service.
The top layer of the reference model depicted in Figure 4.1 covers services delivered at the application level. These are commonly referred to as Software-as-a-Service (SaaS). In most cases these are Web-based applications that rely on the cloud to provide services to end users. The horsepower of the cloud provided by IaaS and PaaS solutions allows independent software vendors to deliver their application services over the Internet. Other applications belonging to this layer are those that strongly leverage the Internet for their core functionality and rely on the cloud to sustain a larger number of users; this is the case of gaming portals and, in general, social networking websites.
4.3 ROLES AND BOUNDARIES
Organizations and humans can assume different types of pre-defined roles, depending on how they relate to and/or interact with a cloud and its hosted IT resources. Each of the upcoming roles participates in and carries out responsibilities in relation to cloud-based activity. The following sections describe these roles and identify their main interactions.
Figure 4.4. A cloud resource administrator can belong to a cloud consumer organization and administer remotely accessible IT resources that belong to the cloud consumer.
(Reference :Cloud Computing(Concepts, Technology & Architecture) by
Thomas Erl,Zaigham Mahmood, and Ricardo Puttini)
4.5.3 Multitenancy (and Resource Pooling):
4.5.4 Elasticity:
4.5.6 Resiliency:
Resilient computing is a form of failover that distributes redundant implementations of IT resources across physical locations. IT resources can be pre-configured so that if one becomes deficient, processing is automatically handed over to another redundant implementation. Within cloud computing, the characteristic of resiliency can refer to redundant IT resources within the same cloud (but in different physical locations) or across multiple clouds. Cloud consumers can increase both the reliability and the availability of their applications by leveraging the resiliency of cloud-based IT resources.
Fig 4.10 A cloud consumer is using a virtual server within an IaaS environment. Cloud consumers are provided with a range of contractual guarantees by the cloud provider, relating to characteristics such as capacity, performance, and availability.
(Reference :Cloud Computing(Concepts, Technology & Architecture) by
Thomas Erl,Zaigham Mahmood, and Ricardo Puttini)
Common reasons a cloud consumer would use and deploy resources in a PaaS environment include:
• The cloud consumer wants to extend on-premise environments into the cloud for scalability and economic purposes.
• The cloud consumer uses the ready-made environment to entirely substitute an on-premise environment.
• The cloud consumer wants to become a cloud provider and deploys its own cloud services to be made available to other external cloud consumers.
(Reference :Cloud Computing(Concepts, Technology & Architecture) by
Thomas Erl,Zaigham Mahmood, and Ricardo Puttini)
Figure 4.12. The cloud service consumer is given access to the cloud service contract, but not to any underlying IT resources or implementation details. (Reference :Cloud Computing(Concepts, Technology & Architecture) by Thomas Erl,Zaigham Mahmood, and Ricardo Puttini)
A cloud consumer is generally granted limited administrative control over a SaaS implementation. It is most often provisioned by the cloud provider, but it can be formally owned by whichever entity assumes the cloud service owner role. For example, an organization acting as a cloud consumer while using and working with a PaaS environment can build a cloud service that it decides to deploy in that same environment as a SaaS offering. The same organization then effectively assumes the cloud provider role, because the SaaS-based cloud service is made available to other organizations that act as cloud consumers when using that cloud service.
4.7 CLOUD DEPLOYMENT MODELS
A cloud deployment model represents a specific type of cloud environment, primarily distinguished by ownership, size, and access. There are four common cloud deployment models:
• Public cloud
• Community cloud
• Private cloud
• Hybrid cloud
The following sections describe each.
Figure 4.15 A cloud service consumer within the organization's on-premise environment accesses a cloud service hosted on the same organization's private cloud via a virtual private network.
(Reference :Cloud Computing(Concepts, Technology & Architecture) by
Thomas Erl,Zaigham Mahmood, and Ricardo Puttini)
Figure 4.16 An organization employing a hybrid cloud architecture that uses both a private and a public cloud.
(Reference :Cloud Computing(Concepts, Technology & Architecture) by
Thomas Erl,Zaigham Mahmood, and Ricardo Puttini)
SUMMARY
In this chapter we studied the concept of cloud computing architecture, its fundamental concepts and models, roles and boundaries, cloud characteristics, the types of cloud delivery models, cloud deployment models, the economics of the cloud, and the open challenges of cloud computing.
REVIEW QUESTION
1) Explain the Cloud Computing Architecture.
2) What are the Fundamental concepts of cloud computing?
3) Write a short note on models in cloud computing.
4) What are the Roles and boundaries of cloud computing?
5) What are the Characteristics of cloud computing?
6) What is meant by Cloud Deployment models?
REFERENCES
Cloud Computing(Concepts, Technology & Architecture) by Thomas
Erl,Zaigham Mahmood, and Ricardo Puttini
*****
5
FUNDAMENTAL CLOUD SECURITY AND
INDUSTRIAL PLATFORMS AND NEW
DEVELOPMENTS
Unit Structure
5.1 Fundamental Cloud Security
5.1.1 Confidentiality
5.1.2 Integrity
5.1.3 Authenticity
5.1.4 Availability
5.1.5 Threat
5.1.6 Vulnerability
5.1.7 Risk
5.1.8 Security Controls
5.1.9 Security Mechanisms
5.1.10 Security Policies
5.2 Basics
5.2.1 Threat Agents
5.2.2 Anonymous Attacker
5.2.3 Malicious Service Agent
5.2.4 Trusted Attacker
5.2.5 Malicious Insider
5.3 Threat agents
5.3.1 Traffic Eavesdropping
5.3.2 Malicious Intermediary
5.3.3 Denial of Service
5.3.4 Insufficient Authorization
5.3.5 Virtualization Attack
5.3.6 Overlapping Trust Boundaries
5.3.7 Risk Management
5.4 Cloud security threats
5.5 Additional considerations
5.5.1 Proposition of AWS
5.5.2 Understating Amazon Web Services
5.5.3 Component and Web Services of AWS
5.5.4 Elastic Cloud Compute
5.6 Industrial Platforms and New Developments
5.7 Amazon Web Services
5.7.1 More on MS Cloud
5.7.2 Azure Virtual Machines
5.7.3 Element of Microsoft Azure
5.7.4 Access Control of MS Cloud
5.8 Google App Engine
5.9 Microsoft Azure
Summary
5.10 Unit End Questions
5.11 References
5.1.1 Confidentiality:
Figure 5.1.: The message issued by the cloud consumer to the cloud
service is considered confidential only if it is not accessed or read by
an unauthorized party.
(Reference :Cloud Computing(Concepts, Technology & Architecture) by
Thomas Erl,Zaigham Mahmood, and Ricardo Puttini)
5.1.2 Integrity:
Figure 5.2.: The message issued by the cloud consumer to the cloud
service is considered to have integrity if it has not been altered.
(Reference :Cloud Computing(Concepts, Technology & Architecture) by
Thomas Erl,Zaigham Mahmood, and Ricardo Puttini)
5.1.3 Authenticity:
5.1.4 Availability:
5.1.5 Threat:
5.1.6 Vulnerability:
A vulnerability is a weakness that can be exploited either because it is protected by insufficient security controls, or because existing security controls are overcome by an attack. IT resource vulnerabilities can have a range of causes, including configuration deficiencies, security policy weaknesses, user errors, hardware or firmware flaws, software bugs, and poor security architecture.
5.1.7 Risk:
Risk is the possibility of loss or harm arising from performing an activity. Risk is typically measured according to its threat level and the number of possible or known vulnerabilities. Two metrics that can be used to determine risk for an IT resource are:
• The probability of a threat occurring to exploit vulnerabilities in the IT resource
• The expectation of loss upon the IT resource being compromised
Details about risk management are covered later in this chapter.
5.2.2 Anonymous Attacker:
Figure 5.8 An externally positioned malicious service agent carries
out a traffic eavesdropping attack by intercepting a message sent by
the cloud service consumer to the cloud service. The service agent
makes an unauthorized copy of the message before it is sent along its
original path to the cloud service.
(Reference :Cloud Computing(Concepts, Technology & Architecture) by
Thomas Erl,Zaigham Mahmood, and Ricardo Puttini)
5.3.2 Malicious Intermediary:
(Reference :Cloud Computing(Concepts, Technology & Architecture) by
Thomas Erl,Zaigham Mahmood, and Ricardo Puttini)
5.3.3 Denial of Service:
The objective of the denial of service (DoS) attack is to overload IT resources to the point where they cannot function properly. This form of attack is commonly launched in one of the following ways:
• The workload on cloud services is artificially increased with imitation messages or repeated communication requests.
• The network is overloaded with traffic to reduce its responsiveness and cripple its performance.
• Multiple cloud service requests are sent, each of which is designed to consume excessive memory and processing resources.
Successful DoS attacks produce server degradation and/or failure, as illustrated in Figure 5.10.
Figure 5.11 Cloud service consumer A gains access to a database that was
implemented under the assumption that it would only be accessed through a
web service with a published service contract (as per cloud service consumer
B)(Reference :Cloud Computing(Concepts, Technology &
Architecture) by Thomas Erl,Zaigham Mahmood, and Ricardo
Puttini)
A variation of this attack, known as weak authentication, can result when weak passwords or shared accounts are used to protect IT resources. Within cloud environments, these types of attacks can lead to significant impacts, depending on the range of IT resources and the range of access to those IT resources that the attacker gains.
Figure 5.13 illustrates an example in which two cloud service consumers share
virtual servers hosted by the same physical server and, resultantly, their respective
trust boundaries overlap.
(Reference :Cloud Computing(Concepts, Technology & Architecture) by Thomas
Erl,Zaigham Mahmood, and Ricardo Puttini)
Figure 5.14 The on-going risk management process, which can be initiated
from any of the three stages.
(Reference :Cloud Computing(Concepts, Technology & Architecture) by
Thomas Erl,Zaigham Mahmood, and Ricardo Puttini)
• Risk Treatment: Mitigation policies and plans are designed during the risk treatment stage with the intent of successfully treating the risks that were discovered during risk assessment. Some risks can be eliminated, others can be mitigated, while others can be dealt with via outsourcing or even incorporated into the insurance and/or operating loss budgets. The cloud provider itself may agree to assume responsibility as part of its contractual obligations.
• Risk Control: The risk control stage is related to risk monitoring, a three-step process that consists of surveying related events, reviewing these events to determine the effectiveness of previous assessments and treatments, and identifying any policy adjustment needs. Depending on the nature of the monitoring required, this stage may be carried out or shared by the cloud provider.
Figure 5.15
(Reference :Cloud Computing(Concepts, Technology & Architecture) by
Thomas Erl,Zaigham Mahmood, and Ricardo Puttini)
5.5.1 Proposition of AWS:
AWS has a tremendous offering. Clients need to pay only for what they use, which can save a significant amount of money. AWS has in excess of seventy services, including storage, compute, database, networking, application services, mobile, management, developer tools, and IoT.
• Amazon Elastic Compute Cloud (EC2; http://aws.amazon.com/ec2/) is the central service of AWS, which facilitates the management and use of virtual private servers that can run Windows and Linux-based platforms over the Xen hypervisor. Various tools are used to support Amazon's web services. These are:
Amazon Simple Queue Service: a message queue and transaction framework for distributed Internet-based applications.
Amazon Simple Notification Service: used to publish messages from an application.
Amazon CloudWatch: used for monitoring EC2, providing a console or command-line view of the resources in use.
Elastic Load Balancing: used to detect whether an instance is failing and whether the traffic is healthy or not.
Amazon Simple Storage Service (S3): an online storage and backup system with a high-speed data transfer feature called AWS Import/Export.
Additional web-service mechanisms are:
Amazon's Elastic Block Store
Amazon's Simple Database (DB)
Amazon's Relational Database Service
Amazon Cloudfront
A large number of services and utilities also support Amazon partners, i.e.,
the AWS infrastructure itself. These are:
Alexa Web Information Service
Amazon Associates Web Services (A2S)
Amazon DevPay
Elastic Map-Reduce
Amazon's Mechanical Turk
AWS Multi-factor Authentication
Amazon's Flexible payment Service (FPS)
Amazon's Fulfillment Web-Service (FWS)
Amazon Virtual Private Cloud
The term "elastic" describes the ability to resize your capacity quickly as needed. Implementing a service may require the following components (a brief provisioning sketch follows the list):
Application server (with a large RAM allocation)
A load balancer
Database server
Firewall and network switches
Additional rack capacity
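The following minimal Python sketch, assuming the boto3 package is installed and AWS credentials are already configured, launches a virtual server on EC2 and stops it when it is no longer needed; the AMI ID and instance type are placeholders, not values taken from the text.

# Minimal sketch: launch and stop an EC2 virtual server with boto3.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI ID
    InstanceType="t2.micro",           # placeholder instance type
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]
print("Launched:", instance_id)

# Pay only for what is used: stop the instance when it is no longer needed.
ec2.stop_instances(InstanceIds=[instance_id])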
For example, storage and bandwidth are measured by the gigabyte.
Azure Services:
(Reference :Cloud Computing(Concepts, Technology & Architecture) by
Thomas Erl,Zaigham Mahmood, and Ricardo Puttini)
5.7.4 Access Control of MS Cloud:
It allows an application to trust the identity of another application, and this approach can work together with identity providers such as ADFS to build distributed systems based on SOA. The steps involved in Access Control are listed below, followed by an illustrative sketch:
• The client sends a request for authentication to AC (Access Control).
• Access Control creates a token based on the stored rules for the server application.
• The token is signed and returned to the client application.
• The client presents the received token to the service application.
• Finally, the signature is verified and the token is used to decide whether the call to the cloud application is permitted or not.
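The sketch below is purely illustrative of the token-based flow listed above; the endpoints, parameters, and response fields are hypothetical and do not correspond to a real Azure Access Control API. It assumes the third-party requests package is installed.

# Illustrative sketch of a generic token-based access control flow.
import requests

AC_ENDPOINT = "https://access-control.example.com/token"    # hypothetical URL
SERVICE_URL = "https://cloud-app.example.com/api/resource"  # hypothetical URL

# 1. The client requests a token from the access control service.
token_response = requests.post(
    AC_ENDPOINT,
    data={"client_id": "my-client", "client_secret": "my-secret"},
)
token = token_response.json()["access_token"]   # hypothetical field name

# 2. The client presents the signed token to the service application.
result = requests.get(SERVICE_URL, headers={"Authorization": f"Bearer {token}"})

# 3. The service verifies the token's signature and allows or denies the call.
print(result.status_code)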
SUMMARY
In this chapter we studied fundamental cloud security concepts such as confidentiality, integrity, authenticity, and availability; basic concepts of threat agents, including the anonymous attacker, malicious service agent, trusted attacker, and malicious insider; the main cloud security threats; the purpose of AWS and its components and web services; industrial platforms and new developments; and a detailed study of Amazon Web Services, Google App Engine, and Microsoft Azure.
REFERENCE
*****
UNIT III
6
SPECIALIZED CLOUD MECHANISMS
Unit Structure
6.1 Objectives
6.2 Introduction
6.3 Automated Scaling Listener
6.3.1 Case of DTGOV
6.3.2 A Scaling-Down
6.3.3 B Scaling-Up
6.4 Load Balancer
6.4.1 How does load balancing work?
6.5 SLA Monitor
6.6 Pay-per-use monitor
6.7 Audit monitor
6.8 Failover System
6.8.1 Failover systems come in two basic configurations
6.8.2 A Active-Active
6.8.3 B Active-Passive
6.9 Hypervisor
6.9.1 Hypervisors Are Divided Into Two Types
6.9.2 A Type one is the bare-metal hypervisor
6.9.3 B Type two is a hosted hypervisor that runs as a software layer
6.10 Resource Cluster
6.10.1 Common resource cluster types
6.10.2 A Server Cluster
6.10.3 B Database Cluster
6.10.4 C Large Dataset Cluster
6.10.5 There are two basic types of resource clusters
6.10.6 A Load Balanced Cluster
6.10.7 B High-Availability (HA) Cluster
6.11 Multidevice broker
6.12 State Management Database
6.13 Unit End Question
6.14 References
6.1 OBJECTIVE
6.2 INTRODUCTION
Fig 6.1
Three cloud service consumers attempt to access one cloud service
simultaneously (1). The automated scaling listener scales out and initiates
the creation of three redundant instances of the service (2). A fourth cloud
service consumer attempts to use the cloud service (3). Programmed to
allow up to only three instances of the cloud service, the automated
scaling listener rejects the fourth attempt and notifies the cloud consumer
that the requested workload limit has been exceeded (4). The cloud
consumer’s cloud resource administrator accesses the remote
administration environment to adjust the provisioning setup and increase
the redundant instance limit (5).
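As a rough illustration of the scenario in Figure 6.1 (this sketch is not from the original text), the Python snippet below models an automated scaling listener that scales out up to a configured instance limit and rejects further requests once the limit is reached; the class and its names are invented for the example.

# Minimal sketch of an automated scaling listener with an instance limit.
class ScalingListener:
    def __init__(self, instance_limit=3):
        self.instance_limit = instance_limit
        self.instances = []

    def handle_request(self, consumer):
        if len(self.instances) < self.instance_limit:
            instance = f"service-instance-{len(self.instances) + 1}"
            self.instances.append(instance)          # scale out
            return f"consumer {consumer} served by {instance}"
        return f"consumer {consumer} rejected: limit of {self.instance_limit} exceeded"

listener = ScalingListener(instance_limit=3)
for consumer in ["A", "B", "C", "D"]:
    print(listener.handle_request(consumer))         # the fourth consumer is rejected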
Fig 6.3
6.4 LOAD BALANCER
Workload Prioritization:
Workloads are scheduled, queued, discarded, and distributed according to their priority levels.
Here, load refers not only to website traffic but also to the CPU load, network load, and memory capacity of each server. A load balancing technique makes sure that each system in the network has the same amount of work at any instant of time, meaning that none of them is excessively overloaded or under-utilized. [1]
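A minimal sketch of the idea, assuming a simple round-robin policy (one of several possible load balancing algorithms); server names and requests are invented for the example.

# Round-robin load balancing: each request goes to the next server in turn,
# so no server is excessively overloaded or left under-utilized.
import itertools

servers = ["server-1", "server-2", "server-3"]
next_server = itertools.cycle(servers)

def dispatch(request):
    target = next(next_server)
    return f"{request} -> {target}"

for request_id in range(6):
    print(dispatch(f"request-{request_id}"))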
Fig 6.4
The load balancer mechanisms can exist as a:
Multi-layer network switch
Dedicated hardware appliance
Dedicated software-based system
Service agent
Fig 6.5
Figure 6.5 – A cloud service consumer interacts with a cloud service (1). An
SLA monitor intercepts the exchanged messages, evaluates the interaction, and
collects relevant runtime data in relation to quality-of-service guarantees defined
in the cloud service’s SLA (2A). The data collected is stored in a repository (2B)
that is part of the SLA management system (3). Queries can be issued and reports
can be generated for an external cloud resource administrator via a usage and
administration portal (4) or for an internal cloud resource administrator via the
SLA management system’s native user-interface (5).
Fig 6.6
Figure 6.7 A cloud service consumer sends a request message to the
cloud service (1). The pay-per-use monitor intercepts the message (2),
forwards it to the cloud service (3a), and stores the usage information in
accordance with its monitoring metrics (3b). The cloud service forwards
the response messages back to the cloud service consumer to provide the
requested service (4).
Fig 6.8
A cloud service consumer requests access to a cloud service by
sending a login request message with security credentials (1). The audit
monitor intercepts the message (2) and forwards it to the authentication
service (3). The authentication service processes the security credentials.
A response message is generated for the cloud service consumer, in
addition to the results from the login attempt (4). The audit monitor
intercepts the response message and stores the entire collected login event
details in the log database, as per the organization’s audit policy
requirements (5). Access has been granted, and a response is sent back to
the cloud service consumer (6).
6.8 FAILOVER SYSTEM
The failover system mechanism is used to increase the reliability
and availability of IT resources by using established clustering technology
to provide redundant implementations. A failover system is configured to
automatically switch over to a redundant or standby IT resource instance
whenever the currently active IT resource becomes unavailable.
Failover systems are commonly used for mission-critical programs
or for reusable services that can introduce a single point of failure for
multiple applications. A failover system can span more than one
geographical region so that each location hosts one or more redundant
implementations of the same IT resource.
This mechanism may rely on the resource replication mechanism
to supply the redundant IT resource instances, which are actively
monitored for the detection of errors and unavailability conditions.
6.8.1 Failover systems come in two basic configurations:
A. Active-Active: In an active-active configuration, redundant
implementations of the IT resource actively serve the workload
synchronously (Figure 6.8.1). Load balancing among active instances is
required. When a failure is detected, the failed instance is removed from
the load balancing scheduler (Figure 6.8.2). Whichever IT resource
remains operational when a failure is detected takes over the processing
(Figure 6.8.3).
Fig 6.9 The failover system monitors the operational status of Cloud Service A.
Figure 6.10 When a failure is detected in one Cloud Service A implementation,
the failover system commands the load balancer to switch over the workload to
the redundant Cloud Service A implementation.
B. Active-Passive:
In an active-passive configuration, a standby or inactive
implementation is activated to take over the processing from the IT
resource that becomes unavailable, and the corresponding workload is
redirected to the instance taking over the operation (Figures 4 to 5).
Some failover systems are designed to redirect workloads to active
IT resources that rely on specialized load balancers that detect failure
conditions and exclude failed IT resource instances from the workload
distribution. This type of failover system is suitable for IT resources that
do not require execution state management and provide stateless
processing capabilities. In technology architectures that are typically based
on clustering and virtualization technologies, the redundant or standby IT
resource implementations are also required to share their state and
execution context. A complex task that was executed on a failed IT resource can remain operational in one of its redundant implementations.
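A minimal sketch of an active-passive failover decision, not taken from the text: traffic goes to the active instance, and when its health check fails the standby is promoted and the workload is redirected. The health flag stands in for a real monitoring probe.

# Minimal active-passive failover sketch.
class FailoverSystem:
    def __init__(self, active, standby):
        self.active, self.standby = active, standby

    def route(self, request):
        if not self.active["healthy"]:
            # Promote the standby instance and redirect the workload.
            self.active, self.standby = self.standby, self.active
            print("Failover: workload redirected to", self.active["name"])
        return f"{request} handled by {self.active['name']}"

active = {"name": "Cloud Service A (active)", "healthy": True}
standby = {"name": "Cloud Service A (standby)", "healthy": True}
failover = FailoverSystem(active, standby)

print(failover.route("request-1"))
active["healthy"] = False            # simulate a failure of the active instance
print(failover.route("request-2"))   # served by the promoted standby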
Figure 6.12 The failover system monitors the operational status of Cloud
Service A. The Cloud Service A implementation acting as the active
instance is receiving cloud service consumer requests.
6.9 HYPERVISOR
A hypervisor is a hardware virtualization technique that allows
multiple guest operating systems (OS) to run on a single host system at the
same time. The guest OS shares the hardware of the host computer, such
that each OS appears to have its own processor, memory and other
hardware resources.
A hypervisor is also known as a virtual machine manager (VMM).
The hypervisor isolates the operating systems from the primary host
machine. The job of a hypervisor is to cater to the needs of a guest
operating system and to manage it efficiently. Each virtual machine is
independent and does not interfere with the others, although they run on the same host machine. [2]
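As an optional illustration (assuming the libvirt-python bindings are installed and a local QEMU/KVM hypervisor is reachable at the qemu:///system URI), the sketch below asks a hypervisor for the guest virtual machines it manages.

# Minimal sketch: list the guests managed by a hypervisor via libvirt.
import libvirt

conn = libvirt.open("qemu:///system")
try:
    for domain in conn.listAllDomains():
        state = "running" if domain.isActive() else "stopped"
        print(f"Guest {domain.name()} is {state}")
finally:
    conn.close()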
Fig 6.15
6.9.1 Hypervisors Are Divided Into Two Types:
A. Type one is the bare-metal hypervisor, which is deployed directly on the host system's hardware without any underlying operating system or software. Some examples of Type 1 hypervisors are Microsoft Hyper-V, VMware ESXi, and Citrix XenServer.
Fig 6.16
B. Type two is a hosted hypervisor that runs as a software layer within
a physical operating system. The hypervisor runs as a separate second
layer over the hardware while the operating system runs as a third layer.
The hosted hypervisors include Parallels Desktop and VMware Player.
Fig 6.17
6.10 RESOURCE CLUSTER
Cloud-based IT resources that are geographically diverse can be
logically combined into groups to improve their allocation and use. The
resource cluster mechanism is used to group multiple IT resource
instances so that they can be operated as a single IT resource. This
increases the combined computing capacity, load balancing, and
availability of the clustered IT resources.
Figure 6.18 Load balancing and resource replication are implemented through a cluster
enabled hypervisor. A dedicated storage area network is used to connect the clustered
storage and the clustered servers, which are able to share common cloud storage devices.
This simplifies the storage replication process, which is independently carried out at the
storage cluster.
Figure 6.19
A loosely coupled server cluster that incorporates a load balancer. There is
no shared storage. Resource replication is used to replicate cloud storage
devices through the network by the cluster software.
Figure 6.21
During the lifespan of a cloud service instance, it may be required to remain
stateful and keep state data cached in memory, even when idle.
SUMMARY
REFERENCES
https://www.znetlive.com/blog/what-is-load-balancing-in-cloud-computing-and-its-advantages/
https://www.cloudoye.com/kb/general/what-is-hypervisor-in-cloud-computing-and-its-types
https://patterns.arcitura.com/cloud-computing-patterns/mechanisms/state_management_database
*****
7
CLOUD MANAGEMENT MECHANISMS
AND CLOUD SECURITY MECHANISMS
Unit Structure
7.1 Objective
7.2 Introduction
7.3 Remote Administration System
7.4 Resource Management System
7.5 SLA Management System
7.6 Billing Management System
7.7 Encryption
7.7.1 Symmetric Encryption
7.7.2 Asymmetric Encryption
7.8 Hashing
7.9 Digital Signature
7.10 Public Key Infrastructure (PKI)
7.11 Identity and Access Management (IAM)
7.12 Single Sign-On (SSO)
7.13 Cloud-Based Security Groups
7.14 Hardened Virtual Server Images
7.15 Unit End Question
7.16 References
7.1 OBJECTIVE
7.2 INTRODUCTION
Fig 7.1
Figure 7.1 – The remote administration system abstracts underlying
management systems to expose and centralize administration controls to
external cloud resource administrators. The system provides a
customizable user console, while programmatically interfacing with
underlying management systems via their APIs.
The tools and APIs provided by a remote administration system are
generally used by the cloud provider to develop and customize online
portals that provide cloud consumers with a variety of administrative
controls.
The following are the two primary types of portals that are created with
the remote administration system:
Usage and Administration Portal: A general-purpose portal that centralizes management controls for different cloud-based IT resources and can further provide IT resource usage reports.
Self-Service Portal: This is essentially a shopping portal that allows
cloud consumers to search an up-to-date list of cloud services and IT
resources that are available from a cloud provider (usually for lease).
The cloud consumer submits its chosen items to the cloud provider for
provisioning.
Fig 7.2
Figure 7.2: A cloud resource administrator uses the usage and
administration portal to configure an already leased virtual server (not
shown) to prepare it for hosting (1). The cloud resource administrator then
uses the self-service portal to select and request the provisioning of a new
cloud service (2). The cloud resource administrator then accesses the
usage and administration portal again to configure the newly provisioned
cloud service that is hosted on the virtual server (3). Throughout these
steps, the remote administration system interacts with the necessary
management systems to perform the requested actions (4).
Depending on:
the type of cloud product or cloud delivery model the cloud consumer
is leasing or using from the cloud provider,
the level of access control granted by the cloud provider to the cloud
consumer, and
further depending on which underlying management systems the
remote administration system interfaces with,
Fig 7.3.
Figure 7.3 Standardized APIs published by remote administration
systems from different clouds enable a cloud consumer to develop a
custom portal that centralizes a single IT resource management portal for
both cloud-based and on-premise IT resources.
Tasks that are typically automated and implemented through the resource
management system include:
managing virtual IT resource templates that are used to create pre-built
instances, such as virtual server images
allocating and releasing virtual IT resources into the available physical
infrastructure in response to the starting, pausing, resuming, and
termination of virtual IT resource instances
coordinating IT resources in relation to the involvement of other
mechanisms, such as resource replication, load balancer, and failover
system
enforcing usage and security policies throughout the lifecycle of cloud
service instances
monitoring operational conditions of IT resources
Fig 7.4
Figure 7.4 – The cloud consumer’s cloud resource administrator accesses
a usage and administration portal externally to administer a leased IT
resource (1). The cloud provider’s cloud resource administrator uses the
native user-interface provided by the VIM to perform internal resource
management tasks (2).
Fig 7.5
Figure 7.5 – The SLA monitor polls the cloud service by sending over polling request messages (MREQ1 to MREQN). The monitor receives polling response messages (MREP1 to MREPN) that report that the service was "up" at each polling cycle (1a). The SLA monitor stores the "up" time—time period of all polling cycles 1 to N—in the log database (1b). The SLA monitor polls the cloud service by sending polling request messages (MREQN+1 to MREQN+M). Polling response messages are not received (2a). The response messages continue to time out, so the SLA monitor stores the "down" time—time period of all polling cycles N+1 to N+M—in the log database (2b). The SLA monitor sends a polling request message (MREQN+M+1) and receives the polling response message (MREPN+M+1) (3a). The SLA monitor stores the "up" time in the log database (3b).
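The polling behaviour in Figure 7.5 can be approximated with a few lines of Python. The sketch below is illustrative only: the health-check URL is a placeholder, the in-memory list stands in for the SLA log database, and a real SLA monitor would poll continuously rather than for three cycles.

# Minimal sketch of an SLA monitor polling loop.
import time
import urllib.request

SERVICE_URL = "https://cloud-service.example.com/health"   # placeholder URL

def poll(url, timeout=2):
    try:
        with urllib.request.urlopen(url, timeout=timeout) as response:
            return response.status == 200
    except Exception:
        return False   # timeouts and connection errors count as "down"

log = []                               # stands in for the SLA log database
for cycle in range(3):                 # a handful of polling cycles
    status = "up" if poll(SERVICE_URL) else "down"
    log.append((time.time(), status))
    time.sleep(1)

uptime = sum(1 for _, s in log if s == "up") / len(log)
print(f"Measured availability over {len(log)} cycles: {uptime:.0%}")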
Fig 7.6
Figure 7.6 – A cloud service consumer exchanges messages with a cloud service
(1). A pay-per-use monitor keeps track of the usage and collects data relevant to
billing (2A), which is forwarded to a repository that is part of the billing
management system (2B). The system periodically calculates the consolidated
cloud service usage fees and generates an invoice for the cloud consumer (3).
The invoice may be provided to the cloud consumer through the usage and
administration portal (4).
7.7 ENCRYPTION
Fig 7.7
Figure 7.7 A malicious service agent is unable to retrieve data from an
encrypted message. The retrieval attempt may furthermore be revealed to
the cloud service consumer.
Fig 7.8
Figure 7.7.2 The encryption mechanism is added to the communication
channel between outside users and Innovartus’ User Registration Portal.
This safeguards message confidentiality via the use of HTTPS.
7.8 HASHING
The hashing mechanism is used when a one-way, non-reversible form of data protection is required. Once hashing has been applied to a message, it is locked and no key is provided for the message to be unlocked.
A common application of this mechanism is the storage of passwords.
Hashing technology can be used to derive a hashing code or message
digest from a message, which is often of a fixed length and smaller
than the original message.
The message sender can then utilize the hashing mechanism to attach
the message digest to the message. The recipient applies the same hash
function to the message to verify that the produced message digest is
identical to the one that accompanied the message.
Any alteration to the original data results in an entirely different
message digest and clearly indicates that tampering has occurred.
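A minimal sketch of this mechanism using Python's standard hashlib module; the messages are invented for the example, and for password storage a salted, deliberately slow hash would be used in practice rather than a plain digest.

# Derive a fixed-length message digest and use it to detect tampering.
import hashlib

def digest(message: bytes) -> str:
    return hashlib.sha256(message).hexdigest()

message = b"transfer 100 credits to account 42"
attached_digest = digest(message)               # the sender attaches this digest

tampered = b"transfer 900 credits to account 13"
print(digest(message) == attached_digest)       # True  -> message unchanged
print(digest(tampered) == attached_digest)      # False -> tampering detected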
Fig 7.9
Figure 7.8. A hashing function is applied to protect the integrity of a
message that is intercepted and altered by a malicious service agent,
before it is forwarded. The firewall can be configured to determine that the
message has been altered, thereby enabling it to reject the message before
it can proceed to the cloud service.
Figure 7.11 Cloud Service Consumer B sends a message that was digitally signed but
was altered by trusted attacker Cloud Service Consumer A. Virtual Server B is
configured to verify digital signatures before processing incoming messages even if they
are within its trust boundary. The message is revealed as illegitimate due to its invalid
digital signature, and is therefore rejected by Virtual Server B.
Figure 7.12 Whenever a cloud consumer performs a management action that is related to
IT resources provisioned by DTGOV, the cloud service consumer program must include
a digital signature in the message request to prove the legitimacy of its user.
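The signing and verification step shown in Figure 7.11 can be sketched as follows, assuming the third-party cryptography package is installed; the key size, padding scheme, and message are example choices, not a configuration prescribed by the text.

# Illustrative sketch: sign a message and verify the signature before processing it.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.exceptions import InvalidSignature

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"provision virtual server vs-17"
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)
signature = private_key.sign(message, pss, hashes.SHA256())

# The receiving virtual server verifies the signature before processing.
try:
    public_key.verify(signature, message, pss, hashes.SHA256())
    print("Signature valid: message accepted")
except InvalidSignature:
    print("Invalid signature: message rejected")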
7.10 PUBLIC KEY INFRASTRUCTURE (PKI)
Figure 7.14 An external cloud resource administrator uses a digital
certificate to access the Web-based management environment. DTGOV’s
digital certificate is used in the HTTPS connection and then signed by a
trusted CA.
Figure 7.15 A cloud service consumer provides the security broker with
login credentials (1). The security broker responds with an authentication
token (message with small lock symbol) upon successful authentication,
which contains cloud service consumer identity information (2) that is
used to automatically authenticate the cloud service consumer across
Cloud Services A, B, and C (3).
Figure 7.16 The credentials received by the security broker are
propagated to ready-made environments across two different clouds. The
security broker is responsible for selecting the appropriate security
procedure with which to contact each cloud.
Fig 7.13.1
Figure 7.17 Cloud-Based Security Group A encompasses Virtual Servers
A and D and is assigned to Cloud Consumer A. Cloud-Based Security
Group B is comprised of Virtual Servers B, C, and E and is assigned to
Cloud Consumer B. If Cloud Service Consumer A’s credentials are
compromised, the attacker would only be able to access and damage the
virtual servers in Cloud-Based Security Group A, thereby protecting
Virtual Servers B, C, and E.
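One concrete way to realize such a cloud-based security group is shown in the hedged sketch below, which uses boto3 to create an AWS security group and restrict inbound traffic; it assumes AWS credentials are configured, and the VPC ID and CIDR range are placeholders.

# Illustrative sketch: define a cloud-based security group with boto3.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

group = ec2.create_security_group(
    GroupName="cloud-consumer-a-group",
    Description="Isolates Cloud Consumer A's virtual servers",
    VpcId="vpc-0123456789abcdef0",               # placeholder VPC ID
)

# Only allow HTTPS traffic from Cloud Consumer A's network range.
ec2.authorize_security_group_ingress(
    GroupId=group["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "203.0.113.0/24"}],   # placeholder range
    }],
)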
Fig 7.18
Figure 7.13.2 When an external cloud resource administrator accesses the Web
portal to allocate a virtual server, the requested security credentials are assessed
and mapped to an internal security policy that assigns a corresponding cloud-
based security group to the new virtual server.
Figure 7.19 A cloud provider applies its security policies to harden its standard
virtual server images. The hardened image template is saved in the VM images
repository as part of a resource management system.
Hardened virtual server images help counter the denial of service,
insufficient authorization, and overlapping trust boundaries threats.
Figure 7.20 The cloud resource administrator chooses the hardened virtual
server image option for the virtual servers provisioned for Cloud-Based
Security Group B.
SUMMARY
REFERENCES
https://patterns.arcitura.com/cloud-computing-patterns/mechanisms/remote_administration_system
Cloud Computing: Concepts, Technology & Architecture by Thomas Erl, Zaigham Mahmood, and Ricardo Puttini, Prentice Hall, 2013.
*****
UNIT IV
8
FUNDAMENTAL CLOUD
ARCHITECTURES
Unit Structure
8.0 Objectives
8.1 Introduction
8.2 Workload Distribution Architecture
8.3 Resource Pooling Architecture
8.4 Dynamic Scalability Architecture
8.5 Elastic Resource Capacity Architecture
8.6 Service Load Balancing Architecture
8.7 Cloud Bursting Architecture
8.8 Elastic Disk Provisioning Architecture
8.9 Redundant Storage Architecture
8.10 Summary
8.11. Questions
8.12 References
8.0 OBJECTIVE
8.1 INTRODUCTION
3) Storage pools, or cloud storage device pools: are groups of file-
based or block-based storage structures that contain empty and/or
filled cloud storage devices.
Figure: Different Pool Architecture
2. Dynamic Vertical Scaling: in this type the resource instances are
scaled up and down when there is a need to adjust the processing capacity
of a single resource. For example, a virtual server that is being overloaded
can have its memory dynamically increased or it may have a processing
core added.
Resource pools are used by scaling technology that interacts with the
hypervisor and/or VIM to retrieve and return CPU and RAM resources
at runtime.
The runtime processing of the virtual server is monitored so that
additional processing power can be leveraged from the resource pool
via dynamic allocation, before capacity thresholds are met.
The virtual server and its hosted applications and resources are
vertically scaled in response.
This type of cloud architecture can be designed so that the intelligent
automation engine script sends its scaling request via the VIM instead
of to the hypervisor directly.
Virtual servers that participate in elastic resource allocation systems
may require rebooting in order for the dynamic resource allocation to
take effect.
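The scaling decision described in the list above can be sketched as follows; the VIM interface is a hypothetical in-memory stand-in for a real hypervisor/VIM API, and the thresholds and step size are example values only.

# Illustrative sketch of a dynamic vertical scaling decision.
MEMORY_HIGH_WATERMARK = 0.85   # scale up above 85% memory usage
MEMORY_LOW_WATERMARK = 0.30    # scale down below 30% memory usage
STEP_MB = 1024                 # amount of RAM to add or return per adjustment

class StubVim:
    """In-memory stand-in for a VIM; a real system would call the hypervisor."""
    def __init__(self, allocated_mb, used_mb):
        self.allocated_mb, self.used_mb = allocated_mb, used_mb
    def memory_usage(self, server_id):
        return self.used_mb / self.allocated_mb
    def add_memory(self, server_id, mb):
        self.allocated_mb += mb        # leverage RAM from the resource pool
    def remove_memory(self, server_id, mb):
        self.allocated_mb -= mb        # return RAM to the resource pool

def adjust(vim, server_id):
    usage = vim.memory_usage(server_id)
    if usage > MEMORY_HIGH_WATERMARK:
        vim.add_memory(server_id, STEP_MB)
    elif usage < MEMORY_LOW_WATERMARK:
        vim.remove_memory(server_id, STEP_MB)

vim = StubVim(allocated_mb=4096, used_mb=3800)   # roughly 93% used
adjust(vim, "virtual-server-1")
print("Memory now allocated:", vim.allocated_mb, "MB")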
8.8 ELASTIC DISK PROVISIONING ARCHITECTURE
Thin-provisioning software is installed on virtual servers that process
dynamic storage allocation via the hypervisor, while the pay-per-use
monitor tracks and reports granular billing-related disk usage data.
This cloud architecture primarily relies on a storage replication system
that keeps the primary cloud storage device synchronized with its
duplicate secondary cloud storage devices.
Cloud providers may locate secondary cloud storage devices in a
different geographical region than the primary cloud storage device,
usually for economic reasons.
The location of the secondary cloud storage devices can dictate the protocol and method used for synchronization, as some replication transport protocols have distance restrictions.
SUMMARY
QUESTIONS
REFERENCES
*****
9
ADVANCED CLOUD ARCHITECTURES
Unit Structure
9.0 Objective
9.1 Hypervisor Clustering Architecture
9.2 Load Balanced Virtual Server Instances Architecture
9.3 Non-Disruptive Service Relocation Architecture
9.4 Zero Downtime Architecture
9.5 Cloud Balancing Architecture
9.6 Resource Reservation Architecture
9.7 Dynamic Failure Detection and Recovery Architecture
9.8 Bare-Metal Provisioning Architecture
9.9 Rapid Provisioning Architecture
9.10 Storage Workload Management Architecture
9.11. Summary
9.12 Questions
9.13 References
9.0 OBJECTIVE
The hypervisor cluster is controlled via a central VIM, which sends
regular heartbeat messages to the hypervisors to confirm that they are
up and running.
Unacknowledged heartbeat messages cause the VIM to initiate the live
VM migration program, in order to dynamically move the affected
virtual servers to a new host.
The capacity watchdog monitor tracks physical and virtual server
usage and reports any significant fluctuations to the capacity planner,
which is responsible for dynamically calculating physical server
computing capacities against virtual server capacity requirements.
If the capacity planner decides to move a virtual server to another host
to distribute the workload, the live VM migration program is signaled
to move the virtual server.
Instead of scaling cloud services in or out with redundant
implementations, cloud service activity can be temporarily diverted to
another hosting environment at runtime by adding a duplicate
implementation onto a new host.
Similarly, cloud service consumer requests can be temporarily
redirected to a duplicate implementation when the original
implementation needs to undergo a maintenance outage.
The relocation of the cloud service implementation and any cloud
service activity can also be permanent to accommodate cloud service
migrations to new physical server hosts.
A key aspect of the underlying architecture is that the new cloud
service implementation is guaranteed to be successfully receiving and
responding to cloud service consumer requests before the original
cloud service implementation is deactivated or removed.
A common approach is for live VM migration to move the entire
virtual server instance that is hosting the cloud service.
The automated scaling listener and/or load balancer mechanisms can
be used to trigger a temporary redirection of cloud service consumer
requests, in response to scaling and workload distribution
requirements. Either mechanism can contact the VIM to initiate the
live VM migration process.
Virtual server migration can occur in one of the following two ways,
depending on the location of the virtual server’s disks and
configuration:
A copy of the virtual server disks is created on the destination host, if
the virtual server disks are stored on a local storage device or non-
shared remote storage devices attached to the source host. After the
copy has been created, both virtual server instances are synchronized
and virtual server files are removed from the origin host.
Copying the virtual server disks is unnecessary if the virtual server’s
files are stored on a remote storage device that is shared between
origin and destination hosts. Ownership of the virtual server is simply
transferred from the origin to the destination physical server host, and
the virtual server’s state is automatically synchronized.
9.7 DYNAMIC FAILURE DETECTION AND
RECOVERY ARCHITECTURE
9.8 BARE-METAL PROVISIONING ARCHITECTURE
6. The sequence manager updates and sends the logs to the sequence
logger for storage.
7. The sequence manager requests that the deployment engine apply the
operating system baseline to the provisioned operating system.
8. The deployment engine applies the requested operating system
baseline.
9. The deployment engine informs the sequence manager that the
operating system baseline has been applied.
10. The sequence manager updates and sends the logs of completed steps
to the sequence logger for storage.
11. The sequence manager requests that the deployment engine install the
applications.
12. The deployment engine deploys the applications on the provisioned
server.
13. The deployment engine informs the sequence manager that the
applications have been installed.
14. The sequence manager updates and sends the logs of completed steps
to the sequence logger for storage.
15. The sequence manager requests that the deployment engine apply the
application’s configuration baseline.
16. The deployment engine applies the configuration baseline.
17. The deployment engine informs the sequence manager that the
configuration baseline has been applied.
18. The sequence manager updates and sends the logs of completed steps
to the sequence logger for storage.
Combining cloud storage devices into a group allows LUN data to
be distributed between available storage hosts equally. A storage
management system is configured and an automated scaling listener is
positioned to monitor and equalize runtime workloads among the grouped
cloud storage devices.
9.11 GLOSSARY
LUN migration: LUN migration is a specialized storage program that
is used to move LUNs from one storage device to another without
interruption, while remaining transparent to cloud consumers.
SUMMARY
QUESTIONS
REFERENCES
*****
UNIT V
10
CLOUD DELIVERY MODEL
CONSIDERATION
Unit Structure
10.0 Objectives
10.1 Introduction
10.2 Cloud Delivery Models: The Cloud Provider Perspective
10.2.1 Building IaaS Environments
10.2.2 Equipping PaaS Environments
10.2.3 Optimizing SaaS Environments
10.3 Cloud Delivery Models: The Cloud Consumer Perspective
10.3.1 Working with IaaS Environments
10.3.2 Working with PaaS Environments
10.3.3 Working with SaaS Services
10.4 Unit End Exercise
10.0 OBJECTIVES
Describe cloud delivery models for PaaS
Describe cloud delivery models for SaaS
Describe different ways in which cloud delivery models are
administered and utilized by cloud consumers
Working with IaaS Environments
Working with PaaS Environments
Working with SaaS Environments
10.1 INTRODUCTION
Figure 10.1
Data Centers:
Monitoring:
Security:
• cloud-based security groups for isolating virtual environments through
hypervisors and network segments via network management software
• hardened virtual server images for internal and externally available
virtual server environments
• various cloud usage monitors to track provisioned virtual IT resources
to detect abnormal usage patterns.
Now consider that many of the previously listed cloud services are offered
in one or more of the following implementation mediums:
• Mobile application
• REST service
• Web service
Each of these SaaS implementation mediums provide Web-based APIs for
interfacing by cloud consumers. Examples of online SaaS-based cloud
services with Web-based APIs include:
• Electronic payment services (PayPal)
• Mapping and routing services (Google Maps)
• Publishing tools (WordPress)
Mobile-enabled SaaS implementations are commonly supported by
the multidevice broker mechanism, unless the cloud service is intended
exclusively for access by specific mobile devices.
• Tenant Application Functional Module: This metric is used by pay-per-use monitors for function-based billing. Cloud services can have different functionality tiers according to whether the cloud consumer is a free-tier or a paid subscriber.
Figure 10.3 illustrates a typical usage scenario for virtual servers that are
being offered as IaaS services after having been created with management
interfaces
Cloud consumers have a high degree of control over how and to what
extent IT resources are provisioned as part of their IaaS environments.
For example:
Controlling scalability features (automated scaling, load balancing)
Controlling the lifecycle of virtual IT resources (shutting down,
restarting, powering up of virtual devices)
Controlling the virtual network environment and network access rules
(firewalls, logical network perimeters)
Establishing and displaying service provisioning agreements (account
conditions, usage terms)
Managing the attachment of cloud storage devices
Managing the pre-allocation of cloud-based IT resources (resource
reservation)
Managing credentials and passwords for cloud resource administrators
Managing credentials for cloud-based security groups that access
virtualized IT resources through an IAM
Managing security-related configurations
Managing customized virtual server image storage (importing,
exporting, backup)
Selecting high-availability options (failover, IT resource clustering)
Selecting and monitoring SLA metrics
Selecting basic software configurations (operating system, pre-installed software for new virtual servers)
Selecting IaaS resource instances from a number of available hardware-related configurations and options (processing capabilities, RAM, storage)
Selecting the geographical regions in which cloud-based IT resources
should be hosted
Tracking and managing costs
For example:
Establishing and displaying service provisioning agreements, such as
account conditions and usage terms
Selecting software platform and development frameworks for ready-
made environments
Selecting instance types, which are most commonly frontend or
backend instances
Selecting cloud storage devices for use in ready-made environments
Controlling the lifecycle of PaaS-developed applications (deployment,
starting, shutdown, restarting, and release)
Controlling the versioning of deployed applications and modules
Configuring availability and reliability-related mechanisms
Managing credentials for developers and cloud resource administrators
using IAM
Managing general security settings, such as accessible network ports
Selecting and monitoring PaaS-related SLA metrics
Managing and monitoring usage and IT resource costs
Controlling scalability features such as usage quotas, active instance
thresholds, and the configuration and deployment of the automated
scaling listener and load balancer mechanisms
For example:
• Managing security-related configurations
• Managing select availability and reliability options
• Managing usage costs
• Managing user accounts, profiles, and access authorization
• Selecting and monitoring SLAs
• Setting manual and automated scalability options and limitations.
QUESTIONS
REFERENCES
*****
11
COST METRICS AND PRICING MODELS
AND SERVICE QUALITY METRICS
AND SLAS
Unit Structure
11.0 Objectives
11.1 Introduction
11.2 Business Cost Metrics
11.3 Cloud Usage Cost Metrics
11.3.1 Network Usage
11.3.2 Server Usage
11.3.4 Cloud Service Usage
11.4 Cost Management Considerations
11.4.1 Pricing Models
11.4.2 Additional Considerations
11.5 Service-level agreements (SLAs)
11.6 Service Quality Metrics
11.6.1 Service Availability Metrics
11.6.2 Service Reliability Metrics
11.6.3 Service Performance Metrics
11.6.4 Service Scalability Metrics
11.6.5 Service Resiliency Metrics
11.7 SLA Guidelines
11.8 Unit End Questions
11.0 OBJECTIVES
11.1 INTRODUCTION
Additional Costs:
For example:
• Cost of Capital: The cost of capital is a value that represents the cost
incurred by raising required funds. For example, it will generally be
more expensive to raise an initial investment of $150,000 than it will be
to raise this amount over a period of three years. The relevancy of this
cost depends on how the organization goes about gathering the funds it
requires. If the cost of capital for an initial investment is high, then it
further helps justify the leasing of cloud-based IT resources.
• Sunk Costs: An organization will often have existing IT resources that
are already paid for and operational. The prior investment that has been
made in these on-premise IT resources is referred to as sunk costs.
When comparing up-front costs together with significant sunk costs, it
can be more difficult to justify the leasing of cloud-based IT resources
as an alternative.
• Integration Costs: Integration testing is a form of testing required to
measure the effort required to make IT resources compatible and
interoperable within a foreign environment, such as a new cloud
platform. Depending on the cloud deployment model and cloud
delivery model being considered by an organization, there may be a need to allocate further funds to carry out integration testing and the additional labor required to enable interoperability between cloud service consumers and cloud services. These expenses are referred to as
integration costs. High integration costs can make the option of leasing
cloud-based IT resources less appealing.
• Locked-in Costs: As explained in the Risks and Challenges section in
Chapter 3. cloud environments can impose portability limitations.
When performing a metrics analysis over a longer period of time, it
may be necessary to take into consideration the possibility of having to
move from one cloud provider to another. Due to the fact that cloud
service consumers can become dependent on proprietary characteristics
of a cloud environment, there are locked-in costs associated with this
type of move. Locked-in costs can further decrease the long-term
business value of leasing cloud-based IT resources.
The following sections describe a set of usage cost metrics for calculating
costs associated with cloud-based IT resource usage measurements:
• Network Usage: inbound and outbound network traffic, as well as
intracloud network traffic
• Server Usage: virtual server allocation (and resource reservation)
• Cloud Storage Device: storage capacity allocation
• Cloud Service: subscription duration, number of nominated users,
number of transactions (of cloud services and cloud-based
applications)
On-Demand Storage Space Allocation Metric:
• Description: duration and size of on-demand storage space allocation
in bytes
• Measurement: Σ, date of storage release / reallocation to date of storage allocation (resets upon change in storage size)
• Frequency: continuous
• Cloud Delivery Model: IaaS, PaaS, SaaS
• Example: $0.01/GB per hour (typically expressed as GB/month)
I/O Data Transferred Metric:
• Description - amount of transferred I/O data
• Measurement - Σ, I/O data in bytes
• Frequency - continuous
• Cloud Delivery Model - IaaS, PaaS
• Example - $0.10/TB
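The two metrics above can be combined into a simple charge calculation. In the sketch below, the rates follow the examples given in the text, while the usage figures are invented for illustration.

# Minimal sketch: turning usage cost metrics into a billed amount.
STORAGE_RATE_PER_GB_HOUR = 0.01    # $0.01/GB per hour (from the example above)
IO_RATE_PER_TB = 0.10              # $0.10/TB transferred (from the example above)

def storage_cost(gb_allocated, hours_allocated):
    return gb_allocated * hours_allocated * STORAGE_RATE_PER_GB_HOUR

def io_transfer_cost(tb_transferred):
    return tb_transferred * IO_RATE_PER_TB

storage = storage_cost(gb_allocated=50, hours_allocated=730)   # roughly one month
io = io_transfer_cost(tb_transferred=2)
print(f"Storage: ${storage:.2f}, I/O: ${io:.2f}, total: ${storage + io:.2f}")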
11.4 COST MANAGEMENT CONSIDERATIONS
Cost management is often centered around the lifecycle phases of cloud
services, as follows:
• Cloud Service Design and Development: During this stage, the
vanilla pricing models and cost templates are typically defined by the
organization delivering the cloud service.
• Cloud Service Deployment: Prior to and during the deployment of a
cloud service, the backend architecture for usage measurement and
billing-related data collection is determined and implemented,
including the positioning of pay-per-use monitor and billing
management system mechanisms.
• Cloud Service Contracting: This phase consists of negotiations
between the cloud consumer and cloud provider with the goal of
reaching a mutual agreement on rates based on usage cost metrics.
• Cloud Service Offering: This stage entails the concrete offering of a
cloud service’s pricing models through cost templates, and any
available customization options.
• Cloud Service Provisioning: Cloud service usage and instance
creation thresholds may be imposed by the cloud provider or set by the
cloud consumer. Either way, these and other provisioning options can
impact usage costs and other fees.
• Cloud Service Operation: This is the phase during which active
usage of the cloud service produces usage cost metric data.
• Cloud Service Decommissioning: When a cloud service is
temporarily or permanently deactivated, statistical cost data may be
archived.
Figure 11.1 Common cloud service lifecycle stages as they relate to cost
management considerations.
11.4.1 Pricing Models:
The pricing models used by cloud providers are defined using templates
that specify unit costs for fine-grained resource usage according to usage
cost metrics. Various factors can influence a pricing model, such as:
• Market competition and regulatory requirements
• Overhead incurred during the design, development, deployment, and
operation of cloud services and other IT resources
• Opportunities to reduce expenses via IT resource sharing and data
center optimization
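As a rough illustration of the cost templates described above, the sketch below maps usage cost metrics to unit prices and totals a month of usage. The metric names and prices are hypothetical assumptions, not figures from any actual provider.

# A minimal cost template: unit prices keyed by usage cost metric.
# Metric names and prices are hypothetical.
cost_template = {
    "on_demand_storage_gb_hour": 0.01,   # $/GB per hour of allocated storage
    "io_data_transferred_tb":    0.10,   # $/TB of I/O data transferred
    "outbound_network_gb":       0.09,   # $/GB of outbound traffic
    "virtual_server_hour":       0.046,  # $/hour of allocated virtual server
}

def bill(usage: dict) -> float:
    # Sum unit price x measured quantity for each metric in the template.
    return sum(cost_template[metric] * quantity
               for metric, quantity in usage.items())

monthly_usage = {"virtual_server_hour": 720, "outbound_network_gb": 50}
print(f"Monthly charge: ${bill(monthly_usage):.2f}")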
Beyond pricing, SLAs define quality-of-service guarantees that are typically based on service quality metrics quantifying characteristics such as:
• Availability - up-time, outages, service duration
• Reliability - minimum time between failures, guaranteed rate of
successful responses
• Performance - capacity, response time, and delivery time guarantees
• Scalability - capacity fluctuation and responsiveness guarantees
• Resiliency - mean-time to switchover and recovery
SLA management systems use these metrics to perform periodic
measurements that verify compliance with SLA guarantees, in addition to
collecting SLA-related data for various types of statistical analyses.
Each service quality metric is ideally defined using the following
characteristics:
• Quantifiable: The unit of measure is clearly set, absolute, and
appropriate so that the metric can be based on quantitative
measurements.
• Repeatable: The methods of measuring the metric need to yield
identical results when repeated under identical conditions.
• Comparable: The units of measure used by a metric need to be
standardized and comparable. For example, a service quality metric
cannot measure smaller quantities of data in bits and larger quantities
in bytes.
• Easily Obtainable: The metric needs to be based on a non-
proprietary, common form of measurement that can be easily obtained
and understood by cloud consumers.
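As a concrete example of a quantifiable, repeatable service quality metric, the sketch below computes an availability rate from measured up-time and checks it against a guaranteed threshold. The 99.9% guarantee, the 30-day measurement period, and the downtime figure are hypothetical values.

# Availability rate = total up-time / total measurement period, as a percentage.
# The guarantee and the measured downtime below are hypothetical.

def availability_rate(total_minutes: float, downtime_minutes: float) -> float:
    return 100.0 * (total_minutes - downtime_minutes) / total_minutes

GUARANTEED_AVAILABILITY = 99.9        # percent, as stated in the SLA
MINUTES_IN_MONTH = 30 * 24 * 60       # 30-day measurement period

measured = availability_rate(MINUTES_IN_MONTH, downtime_minutes=50)
print(f"Measured availability: {measured:.3f}%")
print("SLA met" if measured >= GUARANTEED_AVAILABILITY else "SLA violated")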
• Understanding the Scope of SLA Monitoring:
SLAs need to specify where monitoring is performed and where
measurements are calculated, primarily in relation to the cloud’s
firewall. For example, monitoring within the cloud firewall is not
always advantageous or relevant to the cloud consumer’s required QoS
guarantees. Even the most efficient firewalls have a measurable degree
of influence on performance and can further present a point of failure.
• Documenting Guarantees at Appropriate Granularity:
SLA templates used by cloud providers sometimes define guarantees
in broad terms. If a cloud consumer has specific requirements, the
corresponding level of detail should be used to describe the
guarantees. For example, if data replication needs to take place across
particular geographic locations, then these need to be specified directly
within the SLA.
• Defining Penalties for Non-Compliance:
If a cloud provider is unable to follow through on the QoS guarantees
promised within the SLAs, recourse can be formally documented in
terms of compensation, penalties, reimbursements, or otherwise.
• Incorporating Non-Measurable Requirements:
Some guarantees cannot be easily measured using service quality
metrics, but are relevant to QoS nonetheless, and should therefore still
be documented within the SLA. For example, a cloud consumer may
have specific security and privacy requirements for data hosted by the
cloud provider that can be addressed by assurances in the SLA for the
cloud storage device being leased.
• Disclosure of Compliance Verification and Management:
Cloud providers are often responsible for monitoring IT resources to
ensure compliance with their own SLAs. In this case, the SLAs
themselves should state what tools and practices are being used to
carry out the compliance checking process, in addition to any legal-
related auditing that may be occurring.
• Inclusion of Specific Metric Formulas:
Some cloud providers do not mention common SLA metrics or the
metrics-related calculations in their SLAs, instead focusing on service-
level descriptions that highlight the use of best practices and customer
support. The metrics used to measure SLAs should be part of the
SLA document itself, including the formulas and calculations that the
metrics are based upon (see the sketch following these guidelines).
• Considering Independent SLA Monitoring:
Although cloud providers will often have sophisticated SLA
management systems and SLA monitors, it may be in the best interest
of a cloud consumer to hire a third-party organization to perform
independent monitoring as well, especially if there are suspicions that
SLA guarantees are not always being met by the cloud provider
(despite the results shown on periodically issued monitoring reports).
• Archiving SLA Data:
The SLA-related statistics collected by SLA monitors are commonly
stored and archived by the cloud provider for future reporting
purposes. If a cloud provider intends to keep SLA data specific to a
cloud consumer even after the cloud consumer no longer continues its
business relationship with the cloud provider, then this should be
disclosed. The cloud consumer may have data privacy requirements
that disallow the unauthorized storage of this type of information.
Similarly, during and after its engagement with a cloud provider, the
cloud consumer may want to keep a copy of historical SLA-related data
as well, which can be especially useful for comparing cloud providers
in the future.
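To illustrate the "Inclusion of Specific Metric Formulas" and "Defining Penalties for Non-Compliance" guidelines above, the sketch below spells out two common SLA formulas in code: a mean time between failures calculation and a service-credit penalty schedule. The thresholds and credit percentages are hypothetical assumptions, not terms from any particular provider's SLA.

# Two SLA formulas written out explicitly, as the guidelines above recommend.
# Thresholds and credit percentages are hypothetical.

def mean_time_between_failures(operating_hours: float, failures: int) -> float:
    # Reliability metric: total in-service time divided by number of failures.
    return operating_hours / failures if failures else float("inf")

def service_credit_percent(measured_availability: float) -> float:
    # Penalty schedule: credit owed when availability drops below the guarantee.
    if measured_availability >= 99.9:
        return 0.0      # guarantee met, no credit
    if measured_availability >= 99.0:
        return 10.0     # partial outage band
    return 25.0         # severe outage band

print(mean_time_between_failures(720, failures=2))   # 360.0 hours
print(service_credit_percent(99.5))                  # 10.0 (% of monthly fee)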
QUESTIONS
13. Explain Service Resiliency Metrics.
14. What are different guidelines for Service-Level Agreements?
REFERENCES
Thomas Erl, Zaigham Mahmood, and Ricardo Puttini, Cloud Computing: Concepts, Technology & Architecture, The Prentice Hall Service Technology Series, ISBN-13: 978-0133387520.
https://www.studocu.com/in/document/srm-institute-of-science-and-technology/cloud-computing/unit-4-cloud/9139371
https://www.coursehero.com/file/18595964/103940-Lec13/
*****