Inter Cloud Resource Management Notes
The Intercloud is a connection of global “standalone clouds,” similar to the Internet, which is a
connection of “networks.” The Intercloud is motivated by the fact that no single cloud has
infinite physical resources or a ubiquitous geographic footprint. If a cloud saturates the compute
and storage resources of its infrastructure, or is asked to use resources in a geography where
it has no footprint, it would still be able to satisfy such requests for service allocations sent from
its clients.
The Intercloud is architected to enable standalone clouds to work as one; it involves not just
connecting clouds but also accelerating cloud application development and unifying workload
management across clouds.
It ensures that network and security policies follow the workload, harnessing an expanded
ecosystem of best-of-breed partners and service offerings.
It takes advantage of global data while meeting local and regional requirements.
The Intercloud will deliver choice with compliance and control to empower customers to
innovate and transform their businesses.
The Intercloud scenario addresses such situations: each cloud can use the computational,
storage, or other resources of other clouds’ infrastructures (through semantic resource
descriptions and open federation).
Problem
This blueprint tries to address the following problems.
Multi-Region Capability
Enterprises are adopting cloud-based services to handle their growing computing workloads, and
they often want resources across multiple regions; these regions sometimes span geographical
boundaries.
A single provider may not be able to satisfy such requests, since setting up a huge cloud across
multiple regions is a costly and risky investment. Providers therefore often want to lease
resources from other providers across regions to fulfill customer workloads.
Cloud Bursting
In private, in-house cloud deployments, enterprises want to burst into a public (or otherwise
separate) cloud when the demand for computing capacity spikes.
In cloud computing, cloud bursting is a configuration which is set up between a private cloud
and a public cloud to deal with peaks in IT demand. If an organization using a private
cloud reaches 100 percent of its resource capacity, the overflow traffic is directed to a public
cloud so there is no interruption of services.
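The bursting rule itself is simple enough to sketch in code. The following is a minimal illustration, assuming hypothetical private_cloud / public_cloud objects that expose utilization() and place(); it is not any vendor's API.

```python
# Minimal sketch of a cloud-bursting policy, assuming hypothetical
# private_cloud / public_cloud clients that expose utilization()
# and place(request); this is not any vendor's API.

BURST_THRESHOLD = 1.0  # burst once the private cloud hits 100% capacity

def place_workload(request, private_cloud, public_cloud):
    """Route a request to the private cloud, overflowing to the
    public cloud when the burst threshold has been reached, so
    there is no interruption of service."""
    if private_cloud.utilization() < BURST_THRESHOLD:
        return private_cloud.place(request)
    # Capacity exhausted: direct the overflow traffic to the public cloud.
    return public_cloud.place(request)
```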
Expensive Architecture
Building a multi-region architecture that joins multiple data centers over dedicated/leased lines
is costly and creates tight coupling. It also does not scale for incremental growth.
By realizing InterCloud architectural principles in their offerings, cloud providers will be able
to:
• dynamically expand or resize their provisioning capability in response to sudden spikes in
workload demand, by leasing available computational and storage capacity from other cloud
service providers;
• operate as part of a market-driven resource-leasing federation, where application service
providers such as Salesforce.com host their services under negotiated SLA contracts driven by
competitive market prices; and
• deliver on-demand, reliable, cost-effective, QoS-aware services based on virtualization
technologies, while ensuring high QoS standards and minimizing service costs.
They also need to be able to use market-based utility models as the basis for provisioning
virtualized software services and federated hardware infrastructure among users with
heterogeneous applications.
Architecture of InterCloud
InterCloud Framework: Buyya et al. (2010) presented the architecture of a federated Cloud
computing environment, named InterCloud, which supports the scaling of applications across
multiple Cloud providers. The main idea of the InterCloud framework is to enhance Cloud
providers’ provisioning capabilities by leasing computational and storage capacity from other
providers.
The proposed architecture consists of a Cloud Broker, a Cloud Exchange and a Cloud
Coordinator.
Clients approach a Cloud Broker to help them meet their specified QoS requirements. Cloud
Coordinators act as gateways between a provider’s internal data centers and external Clouds,
and publish the provider’s services to the Cloud Exchange.
The Cloud Exchange (CEx) acts as a market maker for bringing together service producers and
consumers. It aggregates the infrastructure demands from the application brokers and evaluates
them against the available supply currently published by the Cloud Coordinators.
CEx allows the participants (Cloud Coordinators and Cloud Brokers) to locate providers and
consumers with fitting offers.
• The SLA module stores the service terms and conditions that the Cloud supports for each
respective Cloud Broker, on a per-user basis.
• Based on these terms and conditions, the Pricing module determines how service requests are
charged, given the available supply of and demand for computing resources within the Cloud.
• The Accounting module stores the actual resource usage of each request so that the total
usage cost of each user can be calculated.
• The Billing module then charges the usage costs to users accordingly.
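A minimal sketch of how these four modules could fit together is given below. The class and method names, and the linear supply/demand pricing rule, are illustrative assumptions; the InterCloud work does not prescribe a concrete formula.

```python
from dataclasses import dataclass

@dataclass
class UsageRecord:
    user: str
    cpu_hours: float

class Pricing:
    """Determines a unit price from current supply and demand
    (a simple linear markup, assumed here for illustration)."""
    def __init__(self, base_rate: float):
        self.base_rate = base_rate

    def unit_price(self, demand: float, supply: float) -> float:
        # Price rises when demand outstrips supply, never below base rate.
        return self.base_rate * max(1.0, demand / supply)

class Accounting:
    """Stores the actual resource usage of each request so that
    per-user totals can be derived."""
    def __init__(self):
        self.records = []

    def record(self, user: str, cpu_hours: float) -> None:
        self.records.append(UsageRecord(user, cpu_hours))

    def total_usage(self, user: str) -> float:
        return sum(r.cpu_hours for r in self.records if r.user == user)

class Billing:
    """Charges each user: total usage (from Accounting) times the
    current unit price (from Pricing)."""
    def __init__(self, pricing: Pricing, accounting: Accounting):
        self.pricing = pricing
        self.accounting = accounting

    def invoice(self, user: str, demand: float, supply: float) -> float:
        return (self.accounting.total_usage(user)
                * self.pricing.unit_price(demand, supply))
```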
Cloud customers can normally associate two or more conflicting QoS targets with their
application services. In such cases, it is necessary to trade off one or more QoS targets to
find a superior solution.
Due to such diverse QoS targets and varying optimization objectives, we end up with a
Multi-dimensional Optimization Problem (MOP).
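One standard way to handle such a trade-off is to scalarize the conflicting targets into a single score, for example with a weighted sum. The sketch below is illustrative only: the weights, the metric names, and the assumption that all metrics are normalized to [0, 1] are not part of the InterCloud specification.

```python
# Trading off conflicting QoS targets with a weighted sum, one common
# scalarization for a multi-dimensional optimization problem (MOP).
# Weights and metrics are illustrative; metrics assumed in [0, 1].

WEIGHTS = {"cost": 0.5, "latency": 0.3, "availability": 0.2}

def score(offer: dict) -> float:
    # Lower cost and latency are better; higher availability is better,
    # so availability enters the score with a negative sign.
    return (WEIGHTS["cost"] * offer["cost"]
            + WEIGHTS["latency"] * offer["latency"]
            - WEIGHTS["availability"] * offer["availability"])

offers = [
    {"name": "A", "cost": 0.60, "latency": 0.20, "availability": 0.99},
    {"name": "B", "cost": 0.30, "latency": 0.70, "availability": 0.95},
]
best = min(offers, key=score)  # the trade-off picks one superior offer
print(best["name"])
```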
Virtualization:
VMs support flexible and utility driven configurations that control the share of processing
power they can consume based on the time criticality of the underlying application.
However, the current approaches to VM-based Cloud computing are limited to rather
inflexible configurations within a Cloud.
This limitation can be addressed by developing mechanisms for the transparent migration of
VMs across service boundaries, with the aim of minimizing the cost of service delivery (e.g.,
by migrating to a Cloud located in a region where the energy cost is low) while still
meeting the SLAs.
The Mobility Manager is responsible for dynamic migration of VMs based on the real-
time feedback given by the Sensor service.
Currently, hypervisors such as VMware and Xen have a limitation that VMs can only be
migrated between hypervisors that are within the same subnet and share common storage.
Clearly, this is a serious bottleneck to achieve adaptive migration of VMs in federated
Cloud environments.
This limitation has to be addressed in order to support utility driven, power-aware
migration of VMs across service domains.
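As a rough illustration, a migration decision of this kind might combine sensor feedback, energy prices, and the SLA, as in the sketch below. The attributes on host and candidate and the thresholds are assumptions for illustration, not Xen or VMware functionality.

```python
# Sketch of a Mobility Manager migration decision. The attributes
# (utilization, temperature_c, energy_price, expected_latency_ms)
# and the thresholds are illustrative assumptions.

def should_migrate(vm, host, candidate, sla_latency_ms: float) -> bool:
    overloaded = host.utilization > 0.9 or host.temperature_c > 75
    cheaper = candidate.energy_price < host.energy_price
    meets_sla = candidate.expected_latency_ms(vm) <= sla_latency_ms
    # Migrate only when it relieves an overloaded host or lowers the
    # cost of service delivery, and the SLA would still be met.
    return (overloaded or cheaper) and meets_sla
```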
Sensor:
Sensor infrastructure will monitor the power consumption, heat dissipation, and
utilization of computing nodes in a virtualized Cloud environment.
To this end, we will extend our Service Oriented Sensor Web software system. Sensor
Web provides a middleware infrastructure and programming model for creating,
accessing, and utilizing tiny sensor devices that are deployed within a Cloud.
The Cloud Coordinator service makes use of Sensor Web services for dynamic sensing of
Cloud nodes and surrounding temperature.
The output data reported by the sensors are fed back to the Coordinator’s Virtualization and
Scheduling components to optimize the placement, migration, and allocation of VMs in
the Cloud.
Such sensor-based real time monitoring of the Cloud operating environment aids in
avoiding server breakdown and achieving optimal throughput out of the available
computing and storage nodes.
In a massive pool of resources, these parameters must be monitored continuously.
Further, system components will need to share scalable methods for collecting and representing
monitored data.
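The feedback loop between the sensors and the Coordinator can be sketched as a simple polling loop. The sensor and coordinator interfaces below are assumptions for illustration; Sensor Web's actual programming model is not shown here.

```python
import time

def monitoring_loop(sensors, coordinator, period_s: float = 30.0) -> None:
    """Poll node sensors and feed the readings back to the Cloud
    Coordinator's Virtualization and Scheduling components
    (interfaces assumed for illustration)."""
    while True:
        readings = [{"node": s.node_id,
                     "power_w": s.power(),
                     "temp_c": s.temperature(),
                     "utilization": s.utilization()}
                    for s in sensors]
        # The Coordinator uses the readings to optimize VM placement,
        # migration, and allocation.
        coordinator.update_placement(readings)
        time.sleep(period_s)
```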
User Interface: This provides the access linkage between a user application interface and the
broker.
The Application Interpreter translates the execution requirements of a user application which
include what is to be executed, the description of task inputs including remote data files (if
required), the information about task outputs (if present), and the desired QoS.
The Service Interpreter understands the service requirements needed for the execution which
comprise service location, service type, and specific details such as remote batch job submission
systems for computational services.
The Credential Interpreter reads the credentials for accessing necessary services.
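Taken together, the three interpreters produce something like the record below. The field names are a hypothetical schema chosen for illustration, not the framework's actual data model.

```python
from dataclasses import dataclass, field

@dataclass
class ApplicationRequest:
    """What the broker's interpreters extract from a user application
    (a hypothetical schema for illustration)."""
    executable: str                                   # what is to be executed
    input_files: list = field(default_factory=list)   # remote data files, if required
    output_files: list = field(default_factory=list)  # task outputs, if present
    qos: dict = field(default_factory=dict)           # e.g. {"deadline_s": 3600, "budget": 5.0}
    service_type: str = "compute"                     # from the Service Interpreter
    service_location: str = ""                        # from the Service Interpreter
    credentials: str = ""                             # from the Credential Interpreter
```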
Core Services:
They enable the main functionality of the broker.
The Service Negotiator bargains for Cloud services from the Cloud Exchange.
The Scheduler determines the most appropriate Cloud services for the user application
based on its application and service requirements.
The Service Monitor maintains the status of Cloud services by periodically checking the
availability of known Cloud services and discovering new services that are available.
If the local Cloud is unable to satisfy application requirements, a Cloud Broker lookup
request that encapsulates the user’s QoS parameter is submitted to the Cloud Exchange,
which matches the lookup request against the available offers.
The matching procedure considers two main system performance metrics:
1. first, the user specified QoS targets must be satisfied within acceptable bounds
2. second, the allocation should not lead to overloading (in terms of utilization, power
consumption) of the nodes.
If a match occurs, the quote is forwarded to the requester (the Scheduler). Following
that, the Scheduling and Allocation component deploys the application with the Cloud
that was suggested by the Cloud market.
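The two-metric matching procedure can be sketched as a filter over the published offers. The field names, the utilization cap, and the power cap below are assumptions for illustration.

```python
def match(offers: list, qos: dict,
          utilization_cap: float = 0.9, power_cap_w: float = 400.0):
    """Return the first offer that (1) satisfies the user's QoS targets
    within acceptable bounds and (2) would not overload the node in
    terms of utilization or power consumption (caps assumed)."""
    for offer in offers:
        meets_qos = (offer["price"] <= qos["budget"]
                     and offer["latency_ms"] <= qos["max_latency_ms"])
        not_overloaded = (offer["projected_utilization"] <= utilization_cap
                          and offer["projected_power_w"] <= power_cap_w)
        if meets_qos and not_overloaded:
            return offer  # the quote is forwarded to the Scheduler
    return None  # no match: the lookup request cannot be satisfied
```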
Execution Interface:
This provides execution support for the user application.
The Job Dispatcher creates the necessary broker agent and requests data files (if any) to
be dispatched with the user application to the remote Cloud resources for execution.
The Job Monitor observes the execution status of the job so that the results of the job are
returned to the user upon job completion.
Persistence: This maintains the state of the User Interface, Core Services, and Execution
Interface in a database. This facilitates recovery when the broker fails and assists in user-level
accounting.
• Directory: The market directory allows the global CEx participants to locate providers or
consumers with the appropriate bids/offers.
Cloud providers can publish the available supply of resources and their offered prices. Cloud
consumers can then search for suitable providers and submit their bids for required resources.
Standard interfaces need to be provided so that both providers and consumers can access
resource information from one another readily and seamlessly.
• Auctioneer: Auctioneers periodically clear bids and asks received from the global CEx
participants. Auctioneers are third-party controllers that do not represent any providers or
consumers. Since the auctioneers are in total control of the entire trading process, they need to be
trusted by participants. (A simple clearing sketch follows this list.)
• Bank: The banking system enforces the financial transactions pertaining to agreements
between the global CEx participants.
The banks are also independent and not controlled by any provider or consumer, which fosters
impartiality and gives all Cloud market participants confidence that financial transactions are
conducted correctly and without bias.
This can be realized by integrating online payment management services, such as PayPal, with
the Clouds’ accounting services.
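As referenced above under Auctioneer, a periodic clearing round can be sketched as a simple double auction, matching the highest bids against the lowest asks. This particular clearing rule and the midpoint pricing are illustrative assumptions, not the CEx's prescribed economic model.

```python
def clear(bids: list, asks: list) -> list:
    """One periodic clearing round of a simple double auction.
    bids/asks are (participant, price) pairs; trades are struck
    while the best bid still meets or exceeds the best ask."""
    bids = sorted(bids, key=lambda b: -b[1])  # highest bid first
    asks = sorted(asks, key=lambda a: a[1])   # lowest ask first
    trades = []
    while bids and asks and bids[0][1] >= asks[0][1]:
        buyer, bid = bids.pop(0)
        seller, ask = asks.pop(0)
        trades.append((buyer, seller, (bid + ask) / 2))  # midpoint price
    return trades
```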
The CEx also supports trading of cloud services based on competitive economic models, such as
commodity markets.
Such markets enable services to be commoditized and thus pave the way for the creation of a
dynamic market infrastructure for trading based on SLAs.
An SLA specifies the details of the service to be provided in terms of metrics agreed upon by all
parties, and incentives and penalties for meeting and violating the expectations, respectively.
The availability of a banking system within the market ensures that financial transactions
pertaining to SLAs between participants are carried out in a secure and dependable environment.
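In code, such an SLA might be captured as a record like the one below; the field names and units are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class SLA:
    """An SLA in the sense used above: agreed metrics plus incentives
    and penalties (field names and units are illustrative assumptions)."""
    provider: str
    consumer: str
    uptime_target: float          # e.g. 0.999, an agreed availability metric
    max_latency_ms: float         # an agreed performance metric
    price_per_hour: float         # charge while expectations are met
    penalty_per_violation: float  # owed to the consumer when a target is missed
    bonus_for_exceeding: float    # incentive for beating the agreed targets
```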
Advantages of InterCloud:
1. Protection: Utilizing multiple providers protects against catastrophes (damage or
disasters) that could impact a specific provider’s physical facility.
2. Robust, Informative Marketplace: In today’s market, with its many pricing options and
performance variables, it is extremely difficult to choose a provider that fits your needs.
With cooperation and standardization across all providers, the cloud industry could
function much like the airline industry: a customer could visit a virtual marketplace,
enter price preferences and required specifications for virtual instances, and receive
recommendations that fit their needs.
3. Expands the Global Reach of Cloud Providers: Although many cloud providers have
datacenters all over the globe, a customer might need a datacenter in a location that one
provider does not offer but another does. One provider would then be able to serve this
customer using the datacenter of another provider.