Cloud Computing
1.1 Introduction
Cloud computing has spawned start-ups in many new industry
verticals and has forced existing conglomerates to adapt quickly to
survive in this innovative environment. It is a set of approaches that
can help organizations quickly and effectively add and subtract
resources in almost real time. Cloud computing is a business and
economic model, and it is the next stage in the evolution of the
Internet; yet cloud computing is still in its infancy. The term 'cloud'
in cloud computing refers to the means through which everything
(from computing power to computing infrastructure, and from
applications and business processes to personal collaboration) can be
delivered to you as a service wherever and whenever you need it [6].
A cloud is a group of interconnected network servers or PCs that may
be private or public. The data and the applications served by the
cloud are accessible to a group of users through the network, but the
cloud infrastructure and technology are not visible to the end users.
Cloud services include software delivery, infrastructure and storage
over the Internet based on end-user demand. The cloud assembles
large networks of virtualized services: hardware services (compute,
storage and network) and infrastructure services (web servers,
databases, message queuing systems, monitoring systems and so on).
Cloud computing is like a fluid that can expand and contract
depending on the customer's business needs; that is, users can add or
remove resources as per their needs, and this is what makes cloud
computing elastic. This can be done either manually or using
automated tools.
Cloud computing is shifting computing from physical hardware and
locally managed, software-enabled platforms to virtualized, cloud-
hosted services. Cloud providers like Microsoft Azure, Amazon Web
Services (AWS), Rackspace and GoGrid give users the option to
deploy their applications over a pool of virtually infinite resources
with practically no upfront expenditure. It is the elasticity, cost
effectiveness and large availability of resources that motivate and
encourage these companies to shift from enterprise applications to
cloud computing.
Recent surveys and reports from different organizations make
similar observations:
Gartner Research (2014) observed that cloud computing would
become a $150 billion business.
According to AMI Partners, small and medium enterprises
(SMEs) are expected to spend more than $100 billion on cloud
computing.
IDC predicted that spending on public cloud-hosted applications
would grow from $16.5 billion to over $55 billion in 2014.
Software companies are flocking to the cloud to reap its
benefits.
A recent McKinsey and Co. report states that "clouds are
hardware-based services offering compute, network and storage
capacity where hardware management is highly abstracted from
the buyer, buyers incur infrastructure costs as variable OPEX
and infrastructure capacity is highly elastic."
A report from the University of California, Berkeley lists the
key features of cloud computing as follows:
o The illusion of infinite computing resources.
o The elimination of an up-front commitment by cloud
users.
o The ability to pay for use...as needed.
Several researchers have given different definitions of the cloud, but
the basic implication is the same. Some of them are as follows:
"Cloud Computing is a paradigm in which information is
permanently stored in servers on the Internet and cached
temporarily on clients that include desktops, entertainment
centres, tablet computers, notebooks, wall computers, handhelds,
sensors, monitors etc." [Carl Hewitt, IEEE 2008]
Or
"It is an information processing model in which centrally
administered computing capabilities are delivered as services, on
an as-needed basis, across the network to a variety of user-facing
devices." [Brian et al. 2014]
Or
“It is a model for enabling convenient, on-demand network access
to a shared pool of configurable computing resources (e.g.
networks, servers, storage, applications and services) that can be
rapidly provisioned and released with minimal management
effort or service provider interaction.” [NIST, USA, 800-145]
Or
“It is an umbrella term to describe a category of sophisticated on-
demand computing services initially offered by commercial
providers like Amazon, Google and Microsoft.”
Or
“It denotes a model on which a computing infrastructure is
viewed as a ‘cloud’ from which businesses and individuals access
applications from anywhere in the world on demand.”
Or
"Cloud is a parallel and distributed computing system consisting
of a collection of inter-connected and virtualized computers that
are dynamically provisioned and presented as one or more unified
computing resources based on service-level agreements (SLA)
established through negotiation between the service provider and
consumers." [Buyya et al.]
Or
“Clouds are a large pool of easily usable and accessible
virtualized resources (such as hardware, development platforms
and/or services). These resources can be dynamically
reconfigured to adjust to a variable load (scale), allowing also for
an optimum resource utilization. This pool of resources is
typically exploited by a pay-per-use model in which guarantees
are offered by the Infrastructure Provider by means of
customized Service Level Agreements.” [Vaquero et al.]
Or
“Data centre hardware and software that provides services”.
[Armbrust et al.]
Or
“Cloud is more often used to refer to IT infrastructure deployed
on an Infrastructure as a Service provider data center.”
[Sotomayor et al.]
The following sum of enabling advances makes it clearer:
Cloud computing = Hardware advances (like virtualization of
hardware, multi-core chips) + Internet technologies + Distributed
computing (grids) + Systems management (autonomic computing)
We shall be studying these a bit later.
Actors (players) of Cloud Computing
Three players make the world of cloud computing possible:
1. Vendors, who provide applications and the enabling
technology, infrastructure, hardware and integration.
2. Partners of these vendors, who create cloud services for the
users/customers.
3. Business leaders, who use or evaluate these cloud computing
services.
The point is that cloud services should enable multi-tenancy, i.e.
different companies should be able to share the same available
resources online. Cloud computing cuts down space, time, power and
cost extensively. For example, cloud services like Facebook or
LinkedIn and collaboration tools like video conferencing, document
management and webinars are greatly affecting how businesses
function.
The cloud offers not only raw computing and storage but also
software services of different types, such as APIs (Application
Programming Interfaces) and development tools that allow web
developers to build scalable projects. Please note that the ultimate
goal is to run the everyday IT infrastructure in the cloud. It is thus
possible to define the umbrella term 'cloud computing' as
"The cloud services that are made available to the users on
demand via Internet from a cloud computing provider's servers
like Microsoft Azure." [Rajiv, 2016]
1.2 Overview of Parallel Computing
The term parallel computing is different from cloud computing.
Parallel computing means running several computers, possibly kept
in one room, that are made to solve one problem together. Such
architectures are called advanced computer architectures, and the
computers are known as parallel computers or supercomputers
because they use parallel programming constructs. Examples include
CRAY-XMP, CRAY-YMP, PARAGON, PARAM, JUGENE and so
on. On the other hand, cloud computing refers to the use of resources
available on the Internet in a time- and cost-effective way, which is
possible due to sharing of the resources. The cloud thus provides
software as a service, infrastructure as a service and platform as a
service. We will discuss these a bit later.
1.3 Grid Computing
Let us first see the definition of grid computing as given in
Wikipedia (a free online encyclopedia):
“Grid computing is a form of distributed computing whereby a
“super and virtual computer” is composed of a cluster of
networked, loosely-coupled computers, acting in concert to
perform very large tasks. This technology has been applied to
computationally-intensive scientific, mathematical and academic
problems through volunteer computing and it is used in
commercial enterprises for such diverse applications as drug
discovery, economic forecasting, seismic analysis and back-office
data processing in support of e-commerce and web services.”
Grids are more loosely coupled, heterogeneous and geographically
dispersed [7].
According to [6], grid computing is a step beyond distributed
processing, involving a large number of networked computers that
are harnessed to solve a common problem. Clouds are usually
organized as a computer grid.
According to Carl Kesselman and Ian Foster, grid computing is a
cluster of computers that are geographically distributed but work
together to perform a common task. Please understand that in grid
computing a cluster of loosely coupled computers works together
to solve a single problem that involves massive amounts of
numerical calculations and compute cycles. The concept is similar
to that of an electric grid, where we can connect and use power at
any time. Grid computing uses grid-controlling software that divides
the work into smaller pieces and assigns each piece to a pool of
thousands of computers; the controlling unit (CU) then assembles the
results to build the output. So, just as we have electric grids to
harness electric power, we have grid computing to harness computing
power that would otherwise sit idle. For example, SETI (Search for
Extraterrestrial Intelligence) runs a grid computing system: people all
over the world contribute idle CPU cycles of their computers to the
SETI project. A minimal sketch of this divide-assign-assemble
pattern follows.
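In the Python sketch below, a local process pool stands in for the
grid's pool of machines, and analyse_chunk is an invented stand-in
for the compute-heavy task handed to one node; it is an illustration
of the pattern, not any real grid middleware.

    from multiprocessing import Pool

    def analyse_chunk(chunk):
        # Stand-in for the compute-heavy piece assigned to one grid node,
        # e.g. scanning one slice of radio-telescope data as SETI does.
        return sum(x * x for x in chunk)

    if __name__ == "__main__":
        data = list(range(1_000_000))
        # The controlling software divides the work into smaller pieces...
        chunks = [data[i:i + 100_000] for i in range(0, len(data), 100_000)]
        with Pool() as pool:
            # ...assigns each piece to a worker in the pool...
            partials = pool.map(analyse_chunk, chunks)
        # ...and the controlling unit assembles the results into the output.
        print(sum(partials))

On a real grid the workers are thousands of geographically dispersed
computers rather than local processes, but the control flow is the same.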
1.4 Distributed Computing and its Variants-MANETS, PEER-
TO-PEER, CLOUD
Distributed computing refers to different tasks being distributed
among separate nodes in a network. It includes:
Grid computing.
Peer-to-peer architecture.
Client-server architecture.
We have already seen what grid computing is. Let us now compare
peer-to-peer architecture and the cloud.
Peer-to-Peer (P2P) architectures compared to cloud
In a peer-to-peer network of hosts, resource sharing, processing and
communications control are fully decentralized. Each host acts as a
server (provider) of some services but depends on the other nodes in
the network for other services; all peers are equal on the network. On
one hand, cloud computing is elastic and scalable in terms of resource
sharing; on the other hand, peer-to-peer architectures are cheaper and
simpler to manage. Cloud computing needs a heavy initial investment
and good technology expertise, while peer-to-peer deployments have
limited extensibility.
Client-Server architecture compared to cloud
Client-server architecture is a form of distributed computing wherein
clients depend on a number of servers to provide them services. Its
scalability therefore involves more cost (processing power,
management and administrative costs). The cloud, on the other hand,
saves cost, time and manpower, since resources are shared among
customers and there is no additional cost of the kind involved in
client-server architectures. Also note that in client-server
deployments a minimum of one dedicated server is a must; hence
more costs are involved. Cloud is, therefore, cheaper.
MANETs
MANET stands for Mobile Ad hoc Network. According to the
routing strategy, MANET routing protocols can be classified as
table-driven and source-initiated protocols. Based on the network
structure, they can instead be classified as flat routing, hierarchical
routing and geographical position assisted routing protocols.
Table-driven / Proactive Protocols: The table-driven protocols are
also called proactive protocols because they maintain routing
information even before it is needed. Each and every node in the
network maintains routing information to every other node. In
general, routing information is kept in routing tables and is
periodically updated as the network topology changes. Many of these
protocols derive from link-state routing. Also note that these
protocols are not suitable for larger networks, as every node must
maintain an entry for each and every other node in its routing table.
This causes more overhead in the routing table, leading to
consumption of more bandwidth. Examples are the Fisheye State
Routing Protocol (FSR) and the Optimized Link State Routing
Protocol (OLSR).
There are also on-demand (reactive) routing protocols.
On-demand routing protocols / reactive protocols: These are called
reactive protocols because they maintain no routing information or
routing activity at the network nodes when there is no
communication. If a node wants to send a packet to another node,
the protocol searches for a route in an on-demand manner and
establishes the connection in order to transmit and receive the packet.
Route discovery occurs by flooding route request packets throughout
the network, as the sketch below illustrates. Examples are Dynamic
Source Routing (DSR) and Ad hoc On-demand Distance Vector
(AODV) routing.
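The sketch is a plain breadth-first flood over a graph of neighbour
lists, not the actual AODV or DSR packet handling: a route request
spreads hop by hop until it reaches the destination, and the path is
then recovered from the recorded predecessors.

    from collections import deque

    def discover_route(neighbours, source, destination):
        """Flood a route request (RREQ) from source; return the first path found."""
        predecessor = {source: None}        # doubles as the visited set
        frontier = deque([source])
        while frontier:
            node = frontier.popleft()
            if node == destination:         # the RREQ reached the target
                path = []
                while node is not None:
                    path.append(node)
                    node = predecessor[node]
                return path[::-1]
            for nxt in neighbours[node]:
                if nxt not in predecessor:  # each node rebroadcasts only once
                    predecessor[nxt] = node
                    frontier.append(nxt)
        return None                         # destination unreachable

    topology = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
    print(discover_route(topology, "A", "D"))  # ['A', 'B', 'D']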
MANETs may also use Hierarchical State Routing (HSR)
protocols. HSR maintains a hierarchical topology, where elected
cluster heads at the lowest level become members of the next higher
level; on the higher level, super clusters are formed. Please
understand that nodes which want to communicate with a node
outside their cluster ask their cluster head to forward the packet
to the next level, until a cluster head of the destination node is in
the same cluster; the packet then travels down to the destination
node. Also note that HSR proposes to cluster nodes in a logical
way rather than a geographical way.
MANETs can also use the Zone Routing Protocol (ZRP), known as
a hybrid reactive/proactive routing protocol. As the name implies,
ZRP is based on the concept of zones. A routing zone is defined for
each node separately, and the zones of neighbouring nodes overlap.
The routing zone has a radius expressed in hops, so the zone includes
the nodes whose distance from the node in question is at most the
zone radius. Please note that the number of nodes in the routing
zone can be regulated by adjusting the transmission power of the
nodes: lowering the power reduces the number of nodes within
direct reach, and vice versa. The number of neighbouring nodes
should be sufficient to provide adequate reachability and redundancy;
on the other hand, too large a coverage results in many zone members
and the update traffic becomes excessive. A small sketch of
computing a routing zone follows.
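The helper below is a hypothetical illustration, not part of any ZRP
implementation; it returns the set of nodes at most radius hops away
using a bounded breadth-first traversal.

    def routing_zone(neighbours, node, radius):
        # Bounded BFS: collect every node within `radius` hops of `node`.
        zone, frontier = {node}, {node}
        for _ in range(radius):
            frontier = {n for f in frontier for n in neighbours[f]} - zone
            zone |= frontier
        return zone

    topology = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
    print(routing_zone(topology, "A", 2))  # {'A', 'B', 'C'}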
MANETs may also use geographic position assisted routing. It
includes protocols like Location-Aided Routing (LAR) and the
Distance Routing Effect Algorithm for Mobility (DREAM).
1.5 Autonomic Computing
Autonomic systems manage themselves automatically. In the field of
computer science, too, we have the concept of autonomic computing:
such systems should be able to handle events like malicious attacks,
hardware and software faults, power shutdowns, software updates
and so on autonomously. IBM introduced this concept with the
following features.
Features of autonomic systems:
1. Self-Aware: they know themselves very well.
2. Self-Configuring: the system should be able to configure and
reconfigure itself under varying conditions.
3. Self-Optimizing: it should be able to optimize itself to improve
its execution.
4. Self-Healing: it should be able to detect and correct problems
and continue functioning.
5. Self-Protecting: it should be able to protect itself from both
internal and external security attacks.
6. Open: it should be developed using standard and open
protocols and interfaces.
Please note that the basic concept of autonomic systems is their
self-management, and that the objective of the autonomous self-
healing process is to keep the elements working as per their
design specifications. In a nutshell, we can say that autonomic
computing "is a set of self-managing, self-healing, self-configuration,
self-optimization and self-protection features of distributed
computing resources that operate on the basis of a set of pre-defined
policies." A minimal monitor-and-heal loop is sketched below.
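The loop follows the common monitor-analyse-act pattern;
is_healthy() and restart() are hypothetical probe and repair hooks
supplied by the monitored service, not a real library API.

    import time

    def watchdog(service, interval=5.0):
        # Monitor-analyse-act loop: probe the service and heal it on failure.
        while True:
            if not service.is_healthy():   # monitor: read the sensor/probe
                service.restart()          # act: restore design-spec operation
            time.sleep(interval)           # then keep watching

A production autonomic manager would also log events, escalate
repeated failures and apply the pre-defined policies mentioned above,
but this control loop is the essence.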
1.6 Historical Development and Evolution of Cloud Computing
History of Cloud
Initially cloud computing was thought of as being public only, so it
was named the public cloud. But for security reasons, organizations
shifted from public clouds to private clouds: the focus was to make
the cloud more secure while providing the same services and resource
sharing. Cloud infrastructures then naturally evolved into what is
known as the hybrid cloud. Hybrid clouds can be explained with the
help of an equation:
Hybrid Cloud = Public Cloud + Private Cloud
This means that you can take the benefits of both internal network
storage and a public data cloud that can be accessed from anywhere
in the world using the Internet. Using broadband services along with
the cloud, companies can connect to larger networks to make use of
available resources. There is no need for a huge computer now to
handle complex tasks like database indexing.
Evolution of Cloud
In the 1960s:
a) Joseph Licklider, a professor at MIT, described the idea of cloud
computing and resource sharing.
b) Professor John McCarthy, at MIT and Stanford, focussed on the
concepts of time-sharing, computing power and applications
being used and sold as a utility, and online social networking.
c) In 1966, Douglas F. Parkhill published the book 'The Challenge
of the Computer Utility', wherein he described utility-like
features of cloud computing such as dynamic provisioning, the
illusion of infinite supply and being always online.
In the 1970s:
a) In 1979, Dun and Bradstreet bought National CSS, which sold
time-sharing services.
b) BBN Technologies, founded by MIT researchers, also marketed
time-sharing in the 1970s.
In the 1980s:
a) In 1985, DEC introduced VAX clusters, where several VAX
machines were grouped together for resource sharing.
b) In 1980, Tim Berners-Lee worked on hypertext; he is known
today as the father of the World Wide Web.
Note: All these advancements were pre-cloud phases of cloud development.
In the 1990s:
a) Ian Foster and Carl Kesselman wrote the book 'The Grid:
Blueprint for a New Computing Infrastructure', explaining the
concept of grids of computers that can work cohesively on
computationally intensive tasks.
b) In 1998, the Data Protection Act of the UK had a long-term
impact on cloud computing; it covered data collection,
protection and sharing in a multi-tenant environment.
c) In 1999, Salesforce.com, a pioneer in Software-as-a-Service
(SaaS) CRM, became operational.
d) In the mid-1990s, Yahoo too offered cloud-based email
services.
e) Also in the 1990s, server virtualization was introduced (based
on x86 microprocessors). This became the foundation for cloud
resource sharing.
f) In 1998, VMware was founded by Mendel Rosenblum and
colleagues.
In the 2000s:
a) In 2001, SIIA (Software and Information Industry Association)
used the acronym SaaS and compared it with ASP (Application
Service Provider).
b) In 2002, Amazon launched its web services to permit users to
integrate their websites with Amazon's online content. This
later became IaaS with EC2 (Elastic Compute Cloud) and S3
(Storage-as-a-Service). Amazon introduced pay-per-use pricing,
and it soon became standard with other companies too.
c) In 2003, Nicholas Carr published an article in Harvard Business
Review entitled 'IT Doesn't Matter', wherein he argued that
corporates would start purchasing IT resources as and when
needed from external providers.
d) In 2008, Gartner described cloud computing as an emerging
technology still in its infancy.
1.7 Vision of Cloud Computing
Cloud computing can save money and time; this is the major goal of
the cloud. Big companies that provide their customers with cloud
services also provide SLAs, i.e. Service Level Agreements. We
define an SLA as a contract in which the service-providing
company agrees to a specified level of service (or uptime). An
SLA gives potential customers the confidence to use cloud computing
services. A system administrator has a role here: to ensure that this
uptime is constantly met, which the cloud can achieve because of its
redundancy. Several SLAs promise an uptime level of 99.999% but
cannot always provide data redundancy at that level. This problem
can be solved by making sure that data integrity is written into the
SLA itself to prevent any kind of confusion.
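To see what such an uptime figure means in practice, the permitted
downtime is simple arithmetic: five nines (99.999%) leave only about
5.3 minutes of downtime per year.

    for label, uptime in [("three nines", 99.9), ("four nines", 99.99), ("five nines", 99.999)]:
        downtime_minutes = (1 - uptime / 100) * 365 * 24 * 60
        print(f"{uptime}% ({label}): about {downtime_minutes:.1f} minutes of downtime per year")

This prints roughly 525.6, 52.6 and 5.3 minutes respectively, which
shows why each extra nine promised in an SLA is so much harder to
deliver.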
1.8 Properties and Characteristics of Cloud Computing
Some of the key characteristics of cloud computing are as follows:
1. Cloud service providers like MS AZURE, Amazon Web
Services (AWS), IBM and Google provide on-demand self-
service. The cloud includes a set of approaches that can help
organizations quickly and effectively add and subtract resources
in almost real time.
2. Cloud services can be used even on mobile phones, so they
have broad network access.
3. Resources like memory, network bandwidth and virtual
machines can easily be shared; according to Gartner, pooling
resources like this builds economies of scale. The cloud also
focuses on maximizing the effectiveness of the shared
resources. Please understand that cloud resources are
usually not only shared by multiple users but are also
dynamically reallocated per demand. Also note that with
cloud computing, multiple users can access a single server
to retrieve and update their data without purchasing
licenses for different applications.
4. The cloud is elastic. This means that, if needed, you can scale
resources in or out easily.
5. It is possible to measure, manage and control cloud computing
resource usage. The cloud works on the "pay-as-you-go"
principle, just as our electricity meters work, so you are charged
only for the time you are using cloud services (a small metering
sketch follows this list).
6. Multi-tenancy is another feature of the cloud. It refers to
different companies sharing the same underlying resources.
7. The cloud adopts features from SOA (Service-Oriented
Architecture) that help the user break problems into services
that can then be integrated to provide a solution. Please note
that cloud computing provides all of its resources as
services and makes use of well-established standards.
8. Cloud computing is also a marketing term. It refers to a
model of network computing where a program or
application runs on a connected server or servers rather
than on a local computing device like a PC, tablet or
smartphone.
9. Like the traditional client-server model or legacy mainframe
computing, a user connects with a server to perform a task. The
difference with cloud computing is that the computing process
may run on one or many connected computers at the same time,
utilizing the concept of virtualization.
10. Cloud computing is NOT a quick-fix solution. It requires a
lot of thinking before implementing it in an organization.
11. It requires a strong foundation of best practices in software
development, software architecture and service management.
12. It is user-centric, task-centric, document-centric, powerful,
accessible, intelligent and programmable.
13. Cloud computing is not network computing, nor is it
traditional outsourcing.
14. It should facilitate a shift from remote data to current data,
from applications to tasks and from the computer to the user,
with the objective of access from any place and sharing with
anyone. Authorized users have instant access.
15. The cloud is more beneficial when integrated with existing
IT than when used in isolation.
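As a small illustration of the metering in point 5, the sketch below
bills a customer only for the units actually consumed. The resource
names and per-unit rates are invented for illustration; real providers
publish their own price lists.

    # Hypothetical per-unit rates (per hour or per GB); not any vendor's prices.
    RATES = {"vm_small": 0.05, "storage_gb": 0.002, "bandwidth_gb": 0.01}

    def monthly_bill(usage):
        """usage: dict of resource -> metered units consumed this month."""
        return sum(RATES[resource] * units for resource, units in usage.items())

    print(monthly_bill({"vm_small": 720, "storage_gb": 100, "bandwidth_gb": 50}))  # 36.7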
Some of the cloud vendors include MS Azure, Amazon, IBM, Oracle
Cloud, Salesforce and Google.
For example, Windows Azure is Microsoft's cloud-based application
platform for developing, managing and hosting applications off-site.
MS AZURE consists of several components: the cloud operating
system itself, SQL Azure (which provides database services in the
cloud) and .NET services. Please note that Azure runs on
computers that are physically located in Microsoft data centres.
We shall discuss these data centres a bit later.
Let us now compare the three service models in tabular form (see
Table-1).

Table-1: Comparison of the three cloud service models

IaaS
Explanation: The customer gets resources like processing power,
storage, network bandwidth, CPU and power. Once the user gets the
infrastructure, he controls the OS, data, applications, services, host-
based security etc.
Examples: Amazon Web Services (AWS), RackSpace, GoGrid,
Verizon, IBM, AT&T.

PaaS
Explanation: The customer is provided the hardware infrastructure,
network and operating system, which form a hosting environment.
The user can install his applications and activate services from the
hosting environment.
Examples: MS AZURE, Google App Engine, Force.com,
Informatica-on-demand, Keynote Systems, Caspio, Tibco and
WaveMaker.

SaaS
Explanation: The customer/user is provided access to an application.
He has no control over the hardware, network, security or OS.
Examples: SalesForce.com, Google, MS, Ramco and Zoho.
[Figure-1: Public Cloud (Company-1, Company-2 and Company-3
sharing one public cloud)]
10. No direct connectivity is provided by public cloud service
providers like Amazon AWS, MS and Google.
[Figure-2: Private Clouds (Company-1 and Company-2, each with its
own cloud)]
[Figure-3: Community Cloud]
to handle complex tasks like database indexing. Please remember
the following points regarding hybrid clouds:
1. Better scalability and reliability, as the hybrid cloud allows
companies to move between public and private clouds.
2. Better sharing of resources on demand.
3. It is an approach for extending the infrastructure beyond the
organizational firewall with more security.
4. More important applications are kept on the private side of the
hybrid, while less important applications and data are stored on
a public cloud.
5. An example of hybrid usage involves data like a patient's
records or financial matters that cannot be put on public cloud
servers, as the information is sensitive; such organizations can
make use of hybrid clouds.
6. This type of cloud is used during cloud bursting. In this case,
an organization generally uses its own computing infrastructure,
but in case of higher load requirements it can access public
clouds. Please understand that this means that a company
using a hybrid cloud can manage an internal/private cloud
for its general usage and migrate the application to the
public cloud during heavy peak hours (see the sketch after
figure-4).
7. This can be shown diagrammatically as in figure-4.
[Figure-4: Hybrid Cloud (an application of Company-1/Company-2
migrated from the private cloud to the public cloud)]
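Point 6 above can be sketched as a simple placement decision: serve
from the private cloud while its capacity lasts, and burst the overflow
to the public cloud at peak. The capacity figure and load values are
hypothetical.

    PRIVATE_CAPACITY = 100  # hypothetical peak load the private cloud can absorb

    def choose_cloud(current_load):
        # Cloud bursting: prefer the internal/private cloud for general usage,
        # and overflow to the public cloud during heavy peak hours.
        return "private" if current_load < PRIVATE_CAPACITY else "public"

    print(choose_cloud(80))   # private: a normal day
    print(choose_cloud(140))  # public: peak load bursts out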
1.10 Cloud Computing Environments
As per a 2012 DELL report, the cloud is not just a technology; it is
rather a corporate strategy based on business outcomes. The real
benefit of the cloud comes when it is integrated with IT and leveraged
across all environments. Please understand that service
consultants should work to understand the business and help
customers plan, build, deploy, manage and access clouds that
meet their specific needs. Also note that the need is to create
virtualization environments and to develop and implement
applications on cloud platforms.
The NIST Cloud Computing Standards Roadmap working group
(NIST SP 500-292), an effort of the US Department of Commerce,
has surveyed the existing standards landscape for security, portability
and interoperability standards, models, studies, use cases etc. relevant
to cloud computing.
The overview of the cloud Reference Architecture lists five major
actors: Cloud Consumer, Cloud Provider, Cloud Broker, Cloud
Auditor and Cloud Carrier [Source: NIST Executive Summary].
We have already seen the role of the cloud provider. A cloud auditor
has the job of security and privacy audit, while a cloud broker has the
job of service intermediation, service aggregation and service
arbitrage.
NIST also identifies the deployment models (public, private,
community and hybrid clouds); the main differences between them
are based on how exclusively the computing services and resources
are made available to a cloud consumer.
1.11 Cloud Services Requirements
We list some of the best practices that every successful cloud
computing platform should follow:
1. Better Security: provide the best security at every level.
2. Better Transparency: provide transparent, real-time, accurate
service performance and availability information.
3. True Multi-tenancy: deliver maximum scalability and
performance to customers with a true multi-tenant architecture.
4. Proven Scale: support millions of users with proven scalability.
5. Better Performance: deliver consistent, high-speed
performance globally.
6. Better Disaster Recovery: protect customer data by running
the service on multiple geographically dispersed data centres
with extensive back-up, data archive and failover capabilities.
7. Better Availability: equip world-class facilities with proven
high-availability infrastructure and application software.
8. Resource Reservation: the cloud should assure that at the
needed time the resources or services will positively be
available to the customer.
9. Self-Service Portal: the cloud should offer a self-service
facility to its customers. Just as in a fast-food restaurant, where,
if there is no one to serve you, you opt for self-service, cloud
users should be able to manage their services using a web-based
self-service portal.
10. Dynamic Resource Allocation: it should be possible for the
cloud to distribute and redistribute resources easily. Such
dynamic resource allocation and de-allocation illustrates the
efficiency of SaaS (Software-as-a-Service).
11. The resource distribution and actual cloud utilization must
be reported in an accounting database.
12. Dynamic workload management, resource automation and
metering of these resources are also desired essentials of the
cloud.
1.12 Cloud and Dynamic Infrastructure
We define dynamic infrastructure as an information technology
paradigm concerning the design of data centres so that the
underlying hardware and software can respond dynamically to
changing levels of demand in more fundamental and efficient
ways than before. This paradigm is also known as Infrastructure 2.0
and Next Generation Data Centre.
Principle (of dynamic infrastructures): "To leverage pooled IT
resources to provide flexible IT capacity, enabling seamless, real-
time allocation of IT resources in line with demand from business
processes."
This is achieved by using server virtualization technology to pool
computing resources wherever possible and allocating these
resources on demand using automated tools. This provides load
balancing, as it also avoids under-utilization of resources. A minimal
sketch of the pooling idea follows.
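In the sketch below, virtualized capacity sits in one shared pool, and
an automated tool grants and reclaims it as demand changes. The
class, service names and numbers are illustrative only, not any
vendor's API.

    class ResourcePool:
        """Pooled, virtualized capacity allocated on demand and reclaimed after use."""

        def __init__(self, total_units):
            self.free = total_units
            self.allocated = {}

        def allocate(self, service, units):
            if units > self.free:
                raise RuntimeError("pool exhausted: add capacity or queue the request")
            self.free -= units
            self.allocated[service] = self.allocated.get(service, 0) + units

        def release(self, service):
            # Reclaiming capacity avoids the under-utilization that a static,
            # per-application assignment of servers would cause.
            self.free += self.allocated.pop(service, 0)

    pool = ResourcePool(total_units=64)
    pool.allocate("sap-erp", 16)   # a demand peak for one service...
    pool.release("sap-erp")        # ...returned to the pool when it subsides

Real dynamic infrastructures allocate virtual machines on physical
hosts rather than abstract units, but the allocate-on-demand, release-
when-idle cycle is the same.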
In practice, Flex Frame for SAP is an example of a server-level
dynamic infrastructure; another is Flex Frame for Oracle Solutions by
Fujitsu Siemens Computers.
Fujitsu states that dynamic infrastructures enable customers to
assign IT resources dynamically to services as required and to
choose the sourcing models that best fit their businesses, bringing
IT flexibility and efficiency to the next level.
IBM states that a dynamic infrastructure integrates business and
IT assets and aligns them with the overall goals of the business,
while taking a smarter, new and more streamlined approach to
helping improve service, reduce cost and manage risk.
The approach in all of these is to dynamically assign servers to
applications on demand, levelling peaks and enabling companies to
maximize the benefit of their IT investments (ROI, Return on
Investment). Please note that if an enterprise switches to dynamic
infrastructures, it also reduces costs, improves quality of service
and makes more efficient use of energy by reducing the number
of standby or under-utilized machines in its data centres. Also
note that dynamic infrastructures provide failover from a
smaller pool of spare machines. By reducing redundant capacity,
organizations can make more efficient use of their IT budgets and
devote greater proportions of their budget to physical and virtual
production servers.
Dynamic infrastructures may also be used to provide security and
data protection when workloads are moved during migrations,
provisioning, performance enhancement or the building of co-
location facilities.
Benefits of dynamic infrastructures:
Enhanced performance.
Scalability.
System availability and uptime.
Better server utilization.
Routine maintenance of physical or virtual systems with less
disruption.
Mitigating interruptions to business operations.
Reduced IT costs.
Business continuity.
1. Cloud security is of paramount importance today, because data
is shared on the cloud and this makes the data as well as the
information more vulnerable to cyber attacks. Armbrust et al.
note that "current cloud offerings are essentially
public...exposing the system to more attacks."
2. Not having better-quality services in the cloud can make
organizations decline.
3. The cloud should provide better interoperability and
portability, as industry heavily needs both.
4. Resource sharing and moving complex data over the network
need sufficient bandwidth. More costs are involved, and this is
not acceptable to many companies.
5. The cloud sees failures quite regularly. Cloud reliability
means failure-free operation of the cloud, and this happens
to be a very big issue.
6. Parallel data access by multiple customers at all times, and the
mix of hardware types, make data protection in any cloud very
complex. Data must be made redundant
(duplicated/replicated), stored at different locations and still be
easily accessible. But having data redundancy also means
keeping a check on data location, latency, user workload,
backup, report generation, application testing etc. Thus, it is
not an easy task.
7. Cloud disaster recovery is very significant when we evaluate
cloud providers.
8. An issue also arises when you back up cloud data. If you
download data to your pen drives, you need to pay for the
bandwidth, and you also need to store the backup in a place
with stronger security.
9. Data recovery to a cloud-based service site is tough, slow and
error-prone, all the more so if you upload a large amount of
data to the cloud over a WAN connection.
10. Consumers of cloud services are not aware of where the
primary or replicated data copies reside. User data is usually
distributed across many data centres, and a company's cloud
data may not even reside within its operating or registered
country.
11. Service reliability is also a big challenge, given
heterogeneous hardware and software components, connectivity
over multi-vendor WANs, user-friendliness requirements etc.
12. Several users work simultaneously on different data sets in
the cloud, so the data is split or fragmented into many pieces
and stored in various storage locations. This is called data
fragmentation. This data spreading leads to inefficiency and
reduces read/write performance.
13. Data integration is itself a challenge, as data that has been
distributed across different data centres cannot be easily
integrated.
14. Cloud data can be accessed only when both the user and
the services are online. This needs bandwidth, which further
depends on the amount of workload.
15. Data transformation is also an issue. The process of
converting cloud data's format into a format that can easily
be used by other cloud applications is known as data
transformation. It is an issue because the transformed data
may not be compatible with different environments; moreover,
data transformation creates multiple copies, and managing these
is a big task.
16. Cloud standardization is also an issue today. The Cloud
Computing Interoperability Forum (CCIF) was formed by
companies like Intel, Sun and Cisco "in order to enable a global
cloud computing ecosystem whereby organizations are able to
seamlessly work together for the purpose of wider industry
adoption of cloud computing technology." As another
standardization effort, the Unified Cloud Interface (UCI) was
formed by CCIF, which aims to create a standard programmatic
point of access to an entire cloud infrastructure. Also, the Open
Virtualization Format (OVF) aims at the packaging and
distribution of software to be run on virtual machines, so that
virtual appliances can be made portable. Thus, efficient
management of cloud service providers means efficient
management of virtualized resource pools. Please understand
that the multi-dimensional nature of virtual machines
complicates the process of finding a good mapping of virtual
machines onto available physical hosts while maximizing
user utility. Management of this data is also an issue.
17. Data centres also consume huge amounts of electricity. As
per a report published by HP, "100 server racks can consume
1.3 MW of power and another 1.3 MW are required by the
cooling system. This costs US $2.6 million per year." Besides
this monetary cost, data centres also impact the environment in
terms of CO2 emissions from the cooling systems.
18. The need is to optimize application performance. Dynamic
resource management can improve utilization and thus reduce
energy consumption in data centres. This can be achieved by
consolidating workload onto a smaller number of servers and
turning off idle resources (a small bin-packing sketch follows
this list).
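The consolidation idea in point 18 is essentially bin packing: place
the workloads onto as few servers as possible and switch the rest off.
Below is a first-fit-decreasing sketch with invented capacity units; it
illustrates the technique, not any particular data-centre product.

    def consolidate(workloads, server_capacity):
        """First-fit-decreasing: pack workloads onto the fewest servers possible."""
        servers = []  # each entry is the remaining capacity of one powered-on server
        for load in sorted(workloads, reverse=True):
            for i, free in enumerate(servers):
                if load <= free:
                    servers[i] -= load
                    break
            else:
                servers.append(server_capacity - load)  # power on one more server
        return len(servers)

    # These eight workloads (36 units in total) pack onto 4 servers of capacity 10,
    # so any further idle machines can be turned off to save energy.
    print(consolidate([2, 5, 4, 7, 1, 3, 8, 6], server_capacity=10))  # 4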
resources, a cloud like MS AZURE allows you to easily adjust
your resource utilization to match the load.
3. High Availability and Durability: cloud vendors like MS
AZURE provide a platform for applications that can reliably
store and access server data through their storage services; MS
AZURE also offers MS AZURE SQL DATABASE for the
same purpose, and it ensures high availability of compute
resources. For websites, you can meet the requirements of the
Service Level Agreement (SLA) with only a single instance.
Please note that for cloud services and virtual machines, you
can meet the SLA requirements by having at least two
instances per role or machine type; for virtual machines, the
instances must be interchangeable and load balanced. It is the
cloud vendor, such as MS AZURE, that monitors the actual
hardware that hosts these virtual machines and instances. Also
note that a vendor like MS AZURE is able to respond
quickly to hardware restarts or failures by deploying new
instances or moving application code and processing to
other working hardware. A cloud vendor like AZURE
ensures high availability and durability for data stored by one
of its storage services: MS AZURE storage services replicate
all data to at least three different servers, and by default this
storage also replicates to a secondary MS AZURE region.
Similarly, MS AZURE SQL DATABASE replicates all data to
guarantee availability and durability.
4. Highly Available Services: say there is an online store
deployed in MS AZURE. As this online store is a revenue
generator, it is critical to keep it running. To achieve this
objective, the AZURE data centre performs service monitoring
and automatic instance management. The online store must also
stay responsive to customer demand, which the elastic scaling
ability of MS AZURE accomplishes: during peak shopping
times, new instances can come online to handle the
increased usage. The online store must also not lose orders.
Please understand that both MS AZURE storage and
AZURE SQL DATABASE provide highly available and
durable storage options to hold the order details and state
throughout the order life cycle. For the highest level of
availability, you can deploy the same application to multiple
MS AZURE regions. Also note that it is possible to design a
service that remains available even if an entire MS AZURE
region experiences a temporary failure; doing this requires
proper synchronization architecture and procedures for routing
users.
5. Periodic Workloads: some applications, like a demo or a
utility application, need to be available only for several days or
weeks and need not run continuously. MS AZURE allows you
to easily create, deploy and share such an application; once its
purpose is achieved, you can remove the application, and you
are charged only for the time it was deployed.
Case Study: consider a big company that runs a complex data
analysis of sales numbers at the end of each month. Although
processing-intensive, the total time required to complete the
analysis is at most two days. In an on-premises scenario, the
server required for this work would be under-utilized for the
majority of the month. In MS AZURE, the business would pay
only for the time the analysis application is running in the
cloud. Assuming the application architecture is designed for
parallel processing, the scale-out features of MS AZURE
would allow the company to create large numbers of worker
role instances or virtual machines; working together, these can
complete more complex work in less time. In this case study,
you would use code or scripting to automatically deploy the
application at the appropriate time every month (a small
scheduling sketch follows the note below).
Note: Remove the deployment when the work is done; just suspending the
application is not sufficient, as suspension does not avoid charges for compute time.
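The "deploy at the appropriate time every month" step from the case
study could be scripted in many ways. Below is a hedged, standard-
library-only sketch in which deploy() and remove_deployment() are
hypothetical stand-ins for the provider's deployment API or
command-line tools.

    import datetime

    def deploy():             # hypothetical: push the analysis app to the cloud
        print("deploying analysis application")

    def remove_deployment():  # hypothetical: delete (not merely suspend) the deployment
        print("removing deployment to stop compute charges")

    def month_end_window(today):
        """True on the last two days of the month, when the analysis must run."""
        next_month = (today.replace(day=28) + datetime.timedelta(days=4)).replace(day=1)
        last_day = next_month - datetime.timedelta(days=1)
        return (last_day - today).days < 2

    today = datetime.date.today()
    if month_end_window(today):
        deploy()                # pay only while the analysis is actually running
    else:
        remove_deployment()

Run daily from any scheduler, this keeps the application deployed
only for the two-day analysis window.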
out how you can manage your computing costs. You can decide to
implement automatic scaling through the AUTOSCALE feature or
through the Autoscaling Application Block, which can add or
remove instances based on custom rules (a pre-determined
amount). For example, you might have 8 instances during business
hours and 4 instances during non-business hours. Alternatively, you
can keep the number of instances constant and increase them
manually through the web portal as demand grows over time. MS
AZURE gives you the flexibility to make the decisions that are right
for your business. A sketch of such a custom rule follows.
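This is a generic sketch of the idea, not the actual AUTOSCALE
feature or the Autoscaling Application Block configuration:

    def desired_instances(hour, weekday):
        """Custom rule: 8 instances during business hours, 4 otherwise."""
        business_hours = weekday < 5 and 9 <= hour < 17
        return 8 if business_hours else 4

    # A scaler would poll this rule and add or remove instances to match.
    for hour in (8, 10, 18):
        print(hour, desired_instances(hour, weekday=2))  # prints 4, 8, 4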
7. Workload Spikes: this workload pattern also works on the
principle of elastic scale, as explained earlier. Consider the
example of the sports news portal once again: even as its
business grows steadily, there is still the possibility of
temporary spikes or bursts of activity. For example, if another
popular news outlet refers to the site, the number of visitors
could dramatically increase in a single day. As a second
example, consider a service that processes daily reports at the
end of the day: when the business day closes, each office sends
in a report that the company headquarters processes. Please
note that because the process is only active a few hours each
day, it is also a candidate for elastic scaling and deployment.
MS AZURE is well suited for temporarily scaling out an
application to handle load spikes and then scaling back after
the event has passed.
8. Infrastructure Offloading: most cloud scenarios make use of
the elastic scaling of MS AZURE, but even applications with
steady workload patterns can achieve significant cost savings
using MS AZURE cloud services. Please note that it is
difficult and costly to manage your own data centre, in
terms of energy, people, skills, hardware, software licensing
and facilities; it is also difficult to understand how costs are
tied to individual applications. MS AZURE, however, keeps
those costs to a minimum and makes them more transparent.
For example, MS AZURE VIRTUAL MACHINES (VM) and
VIRTUAL NETWORK (VN) provide an easier method of
migrating on-premises servers and networks to the cloud, while
transitioning on-premises applications to cloud services or
websites also alleviates pressure on the on-premises data
centre; MS AZURE, not those data centres, is then responsible
for providing the required computing and storage resources for
those applications. MS AZURE also provides a pricing
calculator for understanding specific costs, as well as a TOTAL
COST OF OWNERSHIP (TCO) calculator for estimating the
overall cost reduction that could occur by adopting MS
AZURE.
9. Resource management, dynamic scaling, and high availability
and durability are some of the main advantages of running
applications in the cloud.
10. MS AZURE is preferred for ensuring the highest levels of
availability, for managing unpredictable growth and for
handling workload spikes.
11. Quick service, safe and secure service, multiple-user
access, a ready development environment and effectively
unlimited storage are some of its benefits.
12. It also brings fewer operational issues, more reliability,
more flexibility, innovation, and easier communication among
teams and customers.
1.15 Disadvantages of Cloud Computing
1. Cloud services are more complex than traditional services.
2. Cloud-based software may not be a silver bullet for the
customers using it or the companies deploying it.
3. A company that uses the cloud and its services will certainly
rely on technology, so the cloud is technology-based
technology; if the technology fails somewhere, the cloud fails
too.
4. Data on the cloud is quite insecure and needs to be tested
extensively.
5. Since data on the cloud is made redundant, there is a need for
redundancy tools.
6. There is no physical back-up.
7. On one hand the cloud has increased business opportunities,
while on the other hand it has disrupted several well-
established IT businesses.
8. Transitions to cloud services must be cautious and calculated.
9. For critical applications, factors like data security,
compliance, availability and performance must also be
considered.
10. Standards for cloud deployment are still in their infancy.
This makes portability from one provider to another quite
complex and unpredictable.
11. The cloud environment itself requires a strong foundation
of best practices in software development, software
architecture and service management.
12. The cloud uses data centres that consume large amounts of
electricity. As per the HP report, 100 server racks can
consume 1.3 MW of power and another 1.3 MW are required
by the cooling system, which costs $2.6 million per year.
These data centres also impact the environment in terms of
CO2 emissions from the cooling systems. Thus, the need is to
minimize energy consumption in data centres.
1.16 Applications of Cloud Computing
Cloud computing has several applications in IT today. Some of them
are as follows:
1. The cloud works easily with web and mobile applications, as
these applications are easily scalable.
2. Cloud testing can be done using constantly configured
resources, lower expenditure and shorter release cycles.
3. Gaming applications can easily be implemented in clouds.
4. ECG analysis can easily be done in the cloud.
5. Studying protein structures.
6. Satellite image processing.
7. The cloud takes CRM (Customer Relationship Management)
and ERP (Enterprise Resource Planning) to the next level.
8. Social networking is very common nowadays, and a social
cloud architecture has been presented in the literature. In the
social cloud, services can be mapped to particular users through
Facebook identification.
Summary
Cloud computing has deep ramifications in almost every field now,
and cloud engineering is not far off: cloud analysis, cloud design,
cloud coding, cloud testing and cloud maintenance are all current hot
branches of the cloud. Mobile cloud computing, cloud security and
cloud energy efficiency are some of the potential areas of research
today. Research in the cloud computing field has changed the way IT
services are invented, developed, scaled and maintained. Information
and services may be programmatically aggregated; both act as
building blocks of complex compositions called service mashups.
Many service providers like Amazon, Facebook and Google have
made their service APIs public using standard protocols like SOAP
and REST. So, a fully functional web application can be developed
easily just by gluing pieces together with a few LOC (lines of code).
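As a tiny illustration of such gluing, the sketch below combines two
REST endpoints into one composite result. The URLs are
hypothetical placeholders, and requests is a third-party HTTP client
library (pip install requests).

    import requests

    def product_mashup(product_id):
        """Glue two (hypothetical) REST services into one mashup response."""
        catalogue = requests.get(f"https://api.example.com/products/{product_id}").json()
        reviews = requests.get(f"https://reviews.example.org/items/{product_id}").json()
        return {
            "name": catalogue["name"],            # from the catalogue service
            "price": catalogue["price"],
            "rating": reviews["average_rating"],  # from the review service
        }

A few lines like these, wrapped in a web framework, are exactly the
kind of composition with a few LOC that the summary describes.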
a) Shared resources.
b) Improved Manageability.
c) Standardisation.
d) All of the above.
a) Solaris.
b) Linux.
c) Windows Azure.
d) None of the above.
a) Service mashups.
b) Data mashups.
c) Information mashups.
d) None of the above.
5. Systems that rely on monitoring probes and sensors and do self-
management are known as-
a) Automatic systems.
b) Autonomic systems.
c) Atmospheric systems.
d) None of the above.
c) Amazon Web Site.
d) None of the above.
b) Cloud showering.
c) Cloud computing.
d) None of the above.
17. A web interface that allows customers to access virtual machines is-
a) Amazon EC2.
b) Amazon MC2.
c) Amazon PC2.
d) None of the above.
18. The cloud operating system built on top of Microsoft data centre’s
infrastructure is-
a) MS AZURE.
b) AWS.
c) SALESFORCE.COM.
d) None of the above.
Answers:
Ans. 1 Cloud data centres have several servers, and this increases energy
consumption. These servers are deliberately overdesigned for better reliability:
they must support redundancy, error-correcting RAM, parity disk drives, (n + 1)
power supplies and so on. All of this needs energy to cool and power the
equipment, light the data centre, provide security etc. This concept of purposely
overdesigning a server for constant, reliable operation is known as the duty
cycle.
a) Amazon.
b) Google App Engine.
c) IBM.
d) Salesforce.com
e) MS AZURE.
a) Mosso.
b) Nirvanix
c) Skytap.
d) StrikeIron.
e) 3tera
f) 10gen
g) Cohesive Flexible Technologies.
h) Joyent.
a) Technical issues.
b) Business model issues.
c) Internet issues.
d) Security issues
e) Compatibility issues.
f) Social issues.
Ans. 6 A blog is a personal journal put up on the net (a web log). It focuses on
one topic, does not require much formatting and is simple to create. As each
blog has its own URL, it is an easy way of adding new URLs, which increases a
site's popularity with search engines. A blog is an ideal vehicle for advertising
your business and its products: the syndication built into blog management
means that an advertisement can reach a wide audience, and the blog will
attract visitors to your site who wouldn't otherwise find it. If we use blogs as an
e-marketing tool, they should be updated regularly.
[Figure: a user accessing a web application hosted on a server]
the server can use the server's processing power to run an application,
store data or perform any other computing task. Therefore, instead of
using a PC every time to run an application, the individual can now
run the application from anywhere in the world, as the server
provides the processing power to the application, and the server is
also connected to a network via the Internet or other connection
platforms so that it can be accessed from anywhere. This has become
possible due to the increased computer processing power available to
mankind.
Ans. 11 It refers to the capability to both scale out and scale back your
application depending on resource requirements. This is also called
elastic scale.
Q13. Cloud computing architecture consists of a front end and a back end.
Explain.
Ans. 13 The front end is the side the client sees, and the back end is the
cloud section of the system. A cloud infrastructure consists of storage,
network and computing components.
Exercise Questions
Q1. Define cloud computing. Discuss its benefits, challenges, issues and
characteristics.
a) Elasticity.
b) Capacity planning.
c) Horizontal and vertical scaling.
Q6. List some web-based presentation programs.
[Hint: Google presentations, Preezo and Zoho Show.]
a) Mashups.
b) Duty cycle.
Q15. Public cloud is like the Internet and the private cloud is like Intranet.
Explain.
Q18. What are cloud data centres (CDC)? Also discuss some of the issues
related to them.