Cloud Computing Unit I and II

UNIT - 1

Scalable Computing over the Internet

High-Throughput Computing (HTC)
The HTC paradigm pays more attention to high-flux computing. The main applications of high-flux computing are Internet searches and web services used by millions or more users simultaneously. Performance is measured by throughput, that is, the number of tasks completed per unit of time. HTC technology needs to improve batch processing speed, and also address the acute problems of cost, energy savings, security, and reliability at many data and enterprise computing centers.

Computing Paradigm Distinctions


 Centralized computing
o This is a computing paradigm by which all computer resources are centralized in
one physical system.
o All resources (processors, memory, and storage) are fully shared and tightly coupled
within one integrated OS.
o Many data centers and supercomputers are centralized systems, but they are used in parallel,
distributed, and cloud computing applications.

• Parallel computing
 In parallel computing, all processors are either tightly coupled with centralized shared memory or loosely coupled with distributed memory.
 Interprocessor communication is accomplished through shared memory or via message passing.
 A computer system capable of parallel computing is commonly known as a parallel computer.
 Programs running in a parallel computer are called parallel programs. The process of writing parallel programs is often referred to as parallel programming.

• Distributed computing
 A distributed system consists of multiple autonomous computers, each having its own private memory, communicating through a computer network.
 Information exchange in a distributed system is accomplished through message passing.
 A computer program that runs in a distributed system is known as a distributed program.
 The process of writing distributed programs is referred to as distributed programming.
 A distributed computing system uses multiple computers to solve large-scale problems over the Internet, rather than relying on a single centralized computer.

• Cloud computing
 An Internet cloud of resources can be either a centralized or a distributed computing system.
The cloud applies parallel or distributed computing, or both.
 Clouds can be built with physical or virtualized resources over large data centers that
are centralized or distributed.
 Cloud computing can also be a form of utility computing or service computing.

Degrees of Parallelism
 Bit-level parallelism (BLP):
o converts bit-serial processing to word-level processing gradually.
 Instruction-level parallelism (ILP):
o the processor executes multiple instructions simultaneously rather than only one instruction at a time.
o ILP is executed through pipelining, superscalar computing, VLIW (very long instruction word) architectures, and multithreading.
o ILP requires branch prediction, dynamic scheduling, speculation, and compiler support to work efficiently.
 Data-level parallelism (DLP):
o DLP is exploited through SIMD (single instruction, multiple data) and vector machines using vector or array types of instructions.
o DLP requires even more hardware support and compiler assistance to work properly.
 Task-level parallelism (TLP):
o Ever since the introduction of multicore processors and chip multiprocessors (CMPs), we have been exploring TLP (a sketch contrasting DLP and TLP follows this list).
o TLP is far from being very successful due to the difficulty of programming and compiling code for efficient execution on multicore CMPs.
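To make the DLP and TLP distinctions concrete, here is a minimal Python sketch; the array size and the four-process pool are illustrative assumptions, not values from the text.

import numpy as np
from multiprocessing import Pool

def square(x):
    # An independent task: TLP assigns many of these to separate cores.
    return x * x

if __name__ == "__main__":
    # Data-level parallelism: one array-wide operation (SIMD in spirit),
    # the same instruction applied across many data elements.
    a = np.arange(1_000_000)
    b = np.arange(1_000_000)
    c = a + b  # element-wise add over the whole array at once

    # Task-level parallelism: independent tasks dispatched to a pool of
    # worker processes, which the OS schedules onto separate cores.
    with Pool(processes=4) as pool:
        results = pool.map(square, range(8))
    print(c[:3], results)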

 Utility Computing
o Utility computing focuses on a business model in which customers receive computing
resources from a paid service provider. All grid/cloud platforms are regarded as utility
service providers.

 The Internet of Things (IoT)


o The traditional Internet connects machines to machines or web pages to web pages.
o The term IoT was introduced in 1999 at MIT.
o IoT refers to the networked interconnection of everyday objects, tools, devices, or computers.
o It can be viewed as a wireless network of sensors that interconnect all things in our daily life.
o Three communication patterns co-exist: H2H (human-to-human), H2T (human-to-thing), and T2T (thing-to-thing).
o The goal is to connect things (including human and machine objects) at any time and any place intelligently with low cost.
o With the IPv6 protocol, 2^128 IP addresses are available to distinguish all the objects on Earth, including all computers and pervasive devices (a quick check follows this list).
o IoT needs to be designed to track 100 trillion static or moving objects simultaneously.
o IoT demands universal addressability of all of the objects or things.
o The dynamic connections will grow exponentially into a new dynamic network of networks, called the Internet of Things (IoT).
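As a quick arithmetic check of the 2^128 figure, Python's standard ipaddress module can count the IPv6 address space; this snippet is only a sketch of that calculation.

import ipaddress

# The full IPv6 address space ::/0 contains exactly 2^128 addresses,
# roughly 3.4 x 10^38, enough to address objects at planetary scale.
total = ipaddress.IPv6Network("::/0").num_addresses
print(total == 2**128)   # True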

Cyber-Physical Systems
o A cyber-physical system (CPS) is the result of interaction between computational
processes and the physical world.
o CPS integrates “cyber” (heterogeneous, asynchronous) with “physical” (concurrent and
information-dense) objects
o CPS merges the “3C” technologies of computation, communication, and control into
an intelligent closed feedback system
o IoT emphasizes various networking connections among physical objects, while CPS emphasizes exploration of virtual reality (VR) applications in the physical world.

SYSTEM MODELS FOR DISTRIBUTED AND CLOUD COMPUTING


o Distributed and cloud computing systems are built over a large number of autonomous computer nodes. These node machines are interconnected by SANs, LANs, or WANs.
o A massive system is one with millions of computers connected to edge networks.
o Massive systems are considered highly scalable.
o Massive systems are classified into four groups: clusters, P2P networks, computing grids, and Internet clouds.

Computing cluster
o A computing cluster consists of interconnected stand-alone computers which work
cooperatively as a single integrated computing resource.

Cluster Architecture
o The architecture consists of a typical server cluster built around a low-latency, high-bandwidth interconnection network.
o To build a larger cluster with more nodes, the interconnection network can be built with multiple levels of Gigabit Ethernet, Myrinet, or InfiniBand switches.
o Through hierarchical construction using a SAN, LAN, or WAN, one can build scalable clusters with an increasing number of nodes.
o The cluster is connected to the Internet via a virtual private network (VPN) gateway, whose IP address locates the cluster.
o Clusters have loosely coupled node computers; all resources of a server node are managed by its own OS.
o Most clusters have multiple system images as a result of having many autonomous nodes under different OS control.

Single-System Image -Cluster


o An ideal cluster should merge multiple system images into a single-system image (SSI).
o A cluster operating system or some middleware has to support SSI at various levels, including the sharing of CPUs, memory, and I/O across all cluster nodes.
o SSI is an illusion created by software or hardware that presents a collection of resources as one integrated, powerful resource.
o SSI makes the cluster appear like a single machine to the user.
o A cluster with multiple system images is nothing but a collection of independent computers.

Hardware, Software, and Middleware Support –Cluster


o Clusters exploiting massive parallelism are commonly known as MPPs (massively parallel processing systems).
o The building blocks are computer nodes (PCs, workstations, servers, or SMPs), special communication software such as PVM or MPI, and a network interface card in each computer node.
o Most clusters run under the Linux OS.
o The nodes are interconnected by a high-bandwidth network.
o Special cluster middleware support is needed to create SSI or high availability (HA).
o Distributed shared memory (DSM) allows all distributed memory to be shared by all servers.
o SSI features are expensive to achieve; consequently, many clusters are loosely coupled machines.
o Virtual clusters are created dynamically, upon user demand.
Grid Computing
 A web service such as HTTP enables remote access to remote web pages.
 A computing grid offers an infrastructure that couples computers, software/middleware, special instruments, people, and sensors together.
 Enterprises or organizations present grids as integrated computing resources. They can also be viewed as virtual platforms to support virtual organizations.
 The computers used in a grid are primarily workstations, servers, clusters, and supercomputers.

Peer-to-Peer Network-P2P
 P2P architecture offers a distributed model of networked systems.
 A P2P network is client-oriented instead of server-oriented.
 In a P2P system, every node acts as both a client and a server
 Peer machines are simply client computers connected to the Internet.
 All client machines act autonomously to join or leave the system freely. This implies that
no master-slave relationship exists among the peers.
 No central coordination or central database is needed. The system is self-organizing
with distributed control.
 A P2P system has two layers of abstraction, as given in the figure: the physical network and the overlay network.

 Each peer machine joins or leaves the P2P network voluntarily.
 Only the participating peers form the physical network at any time.
 The physical network is simply an ad hoc network formed at various Internet domains randomly using the TCP/IP and NAI protocols.

Peer-to-Peer Network-Overlay network

 Data items or files are distributed in the participating peers.


 Based on communication or file-sharing needs, the peer IDs form an overlay network at
the logical level.
 When a new peer joins the system, its peer ID is added as a node in the overlay network.
 When an existing peer leaves the system, its peer ID is removed from the overlay network
automatically.
 An unstructured overlay network is characterized by a random graph. There is no fixed route to send messages or files among the nodes. Often, flooding is applied to send a query to all nodes in an unstructured overlay, thus resulting in heavy network traffic and nondeterministic search results (a small sketch follows this list).
 Structured overlay networks follow certain connectivity topologies and rules for inserting and removing nodes (peer IDs) from the overlay graph.
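The following minimal Python sketch models query flooding in an unstructured overlay as a random graph; the peer count, node degree, and TTL are illustrative assumptions.

import random

def build_overlay(num_peers=10, degree=3):
    # Unstructured overlay: each peer links to a few random peers.
    peers = {pid: set() for pid in range(num_peers)}
    for pid in peers:
        while len(peers[pid]) < degree:
            other = random.randrange(num_peers)
            if other != pid:
                peers[pid].add(other)
                peers[other].add(pid)
    return peers

def flood_query(overlay, start, ttl=3):
    # Flooding: forward the query to every neighbor until the TTL
    # expires; it reaches many nodes but generates heavy redundant
    # traffic and gives nondeterministic coverage.
    visited, frontier = {start}, [start]
    for _ in range(ttl):
        frontier = [n for p in frontier for n in overlay[p] if n not in visited]
        visited.update(frontier)
    return visited

overlay = build_overlay()
print(sorted(flood_query(overlay, start=0)))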

Cloud Computing
 A cloud is a pool of virtualized computer resources.
 A cloud can host a variety of different workloads, including batch-style backend jobs and interactive, user-facing applications.
 Cloud computing applies a virtualized platform with elastic resources on demand by provisioning hardware, software, and data sets dynamically.

The Cloud Landscape


Infrastructure as a Service (IaaS)
 This model puts together infrastructures demanded by users—namely servers, storage,
networks, and the data center fabric.
 The user can deploy and run multiple VMs running guest OSes for specific applications.
 The user does not manage or control the underlying cloud infrastructure, but can specify when to request and release the needed resources (a hedged sketch follows).
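As one hedged illustration of this request/release pattern, the sketch below uses the AWS boto3 SDK as an example provider API; the region, image ID, and instance type are placeholder assumptions, and valid AWS credentials are assumed.

import boto3

# Connect to one example IaaS provider API (AWS EC2 via boto3).
ec2 = boto3.client("ec2", region_name="us-east-1")

# Request a VM: the user picks the guest image and machine size but
# never manages the underlying physical infrastructure.
resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical image ID
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
)
instance_id = resp["Instances"][0]["InstanceId"]

# Release the resource when it is no longer needed.
ec2.terminate_instances(InstanceIds=[instance_id])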
Platform as a Service (PaaS)
 This model enables the user to deploy user-built applications onto a virtualized cloud
platform.
 PaaS includes middleware, databases, development tools, and some runtime support such as Web 2.0 and Java.
 The platform includes both hardware and software integrated with specific programming interfaces.
 The provider supplies the API and software tools (e.g., Java, Python, Web 2.0, .NET). The user is freed from managing the cloud infrastructure.
Software as a Service (SaaS)
 This refers to browser-initiated application software serving thousands of paying cloud customers.
 The SaaS model applies to business processes, industry applications, customer relationship management (CRM), enterprise resource planning (ERP), human resources (HR), and collaborative applications.
 On the customer side, there is no upfront investment in servers or software licensing.
 On the provider side, costs are rather low, compared with conventional hosting of user applications.
Internet clouds offer four deployment modes: private, public, managed, and hybrid.

SOFTWARE ENVIRONMENTS FOR DISTRIBUTED SYSTEMS AND CLOUDS

Service-Oriented Architecture (SOA)


 In grids/web services, Java, and CORBA, an entity is, respectively, a service, a Java
object, and a CORBA distributed object in a variety of languages.
 These architectures build on the traditional seven Open Systems Interconnection (OSI)
layers that provide the base networking abstractions.
 On top of this we have a base software environment, which would be
o .NET or Apache Axis for web services,
o the Java Virtual Machine for Java, and a broker network for CORBA
 On top of this base environment one would build a higher level environment reflecting
the special features of the distributed computing environment.
 SOA applies to building grids, clouds, grids of clouds, clouds of grids, and clouds of clouds (also known as interclouds).
 SS (sensor service): a large number of sensors provide data-collection services (a ZigBee device, a Bluetooth device, a WiFi access point, a personal computer, a GPS device, a wireless phone, etc.).
 Filter services: eliminate unwanted raw data in order to respond to specific requests from the web, the grid, or web services.

Layered Architecture for Web Services and Grids


 Entity interfaces: Java method interfaces correspond to the Web Services Description Language (WSDL); CORBA interfaces are specified in its Interface Definition Language (IDL).
 These interfaces are linked with customized, high-level communication systems: SOAP, RMI, and IIOP.
 These communication systems support features including particular message patterns (such as Remote Procedure Call, or RPC), fault recovery, and specialized routing.
 Communication systems are built on message-oriented middleware (enterprise bus) infrastructure such as WebSphere MQ or the Java Message Service (JMS).

Fault tolerance and related features in Web Services

 Fault tolerance: provided by the features in Web Services Reliable Messaging (WSRM).
 Security: reimplements the capabilities seen in concepts such as Internet Protocol Security (IPsec).
 Discovery and directory services: several models exist, with, for example, JNDI (Jini and Java Naming and Directory Interface) illustrating different approaches within the Java distributed object model; others include the CORBA Trading Service, UDDI (Universal Description, Discovery, and Integration), LDAP (Lightweight Directory Access Protocol), and ebXML (Electronic Business using eXtensible Markup Language).
 In earlier years, CORBA and Java approaches were used in distributed systems rather than today's SOAP, XML, or REST (Representational State Transfer).

Web Services and Tools


REST approach:
 REST delegates most of the difficult problems to application (implementation-specific) software.
 A REST message keeps minimal information in the header, and the message body (which is opaque to generic message processing) carries all the needed information.
 REST architectures are clearly more appropriate for rapidly evolving technology environments.
 REST can use XML schemas, but not those that are part of SOAP; "XML over HTTP" is a popular design choice in this regard (a minimal sketch follows this list).
 Above the communication and management layers, we have the ability to compose new entities or distributed programs by integrating several entities together.
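A minimal sketch of the "XML over HTTP" style described above, using only the Python standard library; the endpoint URL is a hypothetical placeholder.

import urllib.request
import xml.etree.ElementTree as ET

# Minimal header information; the body carries everything the
# application needs and is opaque to generic message processing.
req = urllib.request.Request(
    "http://example.com/api/items/42",      # hypothetical REST resource
    headers={"Accept": "application/xml"},
)
with urllib.request.urlopen(req) as resp:
    body = resp.read()

# Only the application, not the messaging layer, interprets the payload.
root = ET.fromstring(body)
print(root.tag)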

CORBA and Java:


 The distributed entities are linked with RPCs, and the simplest way to build composite applications is to view the entities as objects and use the traditional ways of linking them together.
 For Java, this could be as simple as writing a Java program with method calls replaced by Remote Method Invocation (RMI).
 CORBA supports a similar model, with a syntax reflecting the C++ style of its entity (object) interfaces.
Parallel and Distributed Programming Models

PERFORMANCE, SECURITY, AND ENERGY EFFICIENCY

Performance Metrics:

 In a distributed system, performance is attributed to a large number of factors.
 System throughput is often measured in MIPS, Tflops (tera floating-point operations per second), or TPS (transactions per second).
 System overhead is often attributed to OS boot time, compile time, I/O data rate, and the runtime support system used.
 Other performance-related metrics include the QoS for Internet and web services; system availability and dependability; and security resilience for system defense against network attacks.

Dimensions of Scalability

Any resource upgrade in a system should be backward compatible with existing hardware and software resources. System scaling can increase or decrease resources depending on many practical factors.

Size scalability
 This refers to achieving higher performance or more functionality by increasing the machine size.
 The word "size" refers to adding processors, cache, memory, storage, or I/O channels. The most obvious way to determine size scalability is to simply count the number of processors installed.
 Not all parallel computer or distributed architectures are equally size-scalable.
 For example, the IBM S2 was scaled up to 512 processors in 1997, but in 2008 the IBM BlueGene/L system scaled up to 65,000 processors.
• Software scalability
 This refers to upgrades in the OS or compilers, adding mathematical and engineering libraries, porting new application software, and installing more user-friendly programming environments.
 Some software upgrades may not work with large system configurations.
 Testing and fine-tuning of new software on larger systems is a nontrivial job.
• Application scalability
 This refers to matching problem size scalability with machine size scalability.
 Problem size affects the size of the data set or the workload increase. Instead of increasing machine size, users can enlarge the problem size to enhance system efficiency or cost-effectiveness.
• Technology scalability
 This refers to a system that can adapt to changes in building technologies, such as component and networking technologies.
 When scaling a system design with new technology, one must consider three aspects: time, space, and heterogeneity.
 (1) Time refers to generation scalability. When changing to new-generation processors, one must consider the impact on the motherboard, power supply, packaging and cooling, and so forth. Based on past experience, most systems upgrade their commodity processors every three to five years.
 (2) Space is related to packaging and energy concerns. Technology scalability demands harmony and portability among suppliers.
 (3) Heterogeneity refers to the use of hardware components or software packages from different vendors. Heterogeneity may limit scalability.

Amdahl’s Law

 Suppose the program has been parallelized or partitioned for parallel execution on a cluster of many processing nodes.
 Assume that a fraction α of the code must be executed sequentially, called the sequential bottleneck.
 Therefore, (1 − α) of the code can be compiled for parallel execution by n processors. The total execution time of the program is αT + (1 − α)T/n, where the first term is the sequential execution time on a single processor and the second term is the parallel execution time on n processing nodes.
 I/O time and exception-handling time are not included in the following speedup analysis.
 Amdahl's Law states that the speedup factor S of using the n-processor system over the use of a single processor is expressed by:

S = T / [αT + (1 − α)T/n] = 1 / [α + (1 − α)/n]

 In the ideal case, the code is fully parallelizable with α = 0, so S = n. As the cluster becomes sufficiently large, that is, n → ∞, S approaches 1/α, an upper bound on the speedup S (a numeric check follows this section).
 This upper bound is independent of the cluster size n. The sequential bottleneck is the portion of the code that cannot be parallelized.
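A direct transcription of the formula above into Python; the 5% sequential fraction and 1,024-node cluster are illustrative numbers.

def amdahl_speedup(alpha, n):
    # S = 1 / (alpha + (1 - alpha) / n), from the definition above.
    return 1.0 / (alpha + (1.0 - alpha) / n)

# Even with 1,024 processors, a 5% sequential bottleneck caps the
# speedup near the 1/alpha = 20 upper bound.
print(amdahl_speedup(0.05, 1024))   # about 19.6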

Gustafson’s Law

 To achieve higher efficiency when using a large cluster, we must consider scaling the problem size to match the cluster capability. This leads to the speedup law proposed by John Gustafson (1988), referred to as scaled-workload speedup.
 Let W be the workload in a given program.
 When using an n-processor system, the user scales the workload to W′ = αW + (1 − α)nW. Scaled workload W′ is essentially the sequential execution time of the scaled problem on a single processor. The speedup of executing the scaled workload W′ on n processors is then (a numeric check follows):

S′ = W′/W = [αW + (1 − α)nW]/W = α + (1 − α)n
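Transcribing the scaled-workload speedup into Python for contrast with Amdahl's fixed-workload case; the same illustrative numbers are used.

def gustafson_speedup(alpha, n):
    # S' = W'/W = alpha + (1 - alpha) * n, from the definition above.
    return alpha + (1.0 - alpha) * n

# With the workload scaled to the cluster, the same 5% sequential
# fraction now yields near-linear speedup instead of saturating at 20.
print(gustafson_speedup(0.05, 1024))   # about 972.9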

Network Threats and Data Integrity


UNIT - 2

Levels of Virtualization Implementation


 Virtualization is a computer architecture technology by which multiple virtual machines (VMs) are multiplexed in the same hardware machine.
 After virtualization, different user applications managed by their own operating systems (guest OSes) can run on the same hardware, independent of the host OS.
 This is done by adding an additional software layer, called a virtualization layer.
 The virtualization layer is known as the hypervisor or virtual machine monitor (VMM).
 The function of the virtualization layer is to virtualize the physical hardware of a host machine into virtual resources to be used by the VMs.
 Common virtualization levels include the instruction set architecture (ISA) level, hardware level, operating system level, library support level, and application level.

Instruction Set Architecture Level


 At the ISA level, virtualization is performed by emulating a given ISA by the ISA of the host
machine. For example, MIPS binary code can run on an x86-based host machine with the
help of ISA emulation. With this approach, it is possible to run a large amount of legacy
binary code written for various processors on any given new hardware host machine.
 Instruction set emulation leads to virtual ISAs created on any hardware machine. The basic emulation method is through code interpretation. An interpreter program interprets the source instructions to target instructions one by one. One source instruction may require tens or hundreds of native target instructions to perform its function. Obviously, this process is relatively slow (see the toy sketch after this list). For better performance, dynamic binary translation is desired.
 This approach translates basic blocks of dynamic source instructions to target instructions.
The basic blocks can also be extended to program traces or super blocks to increase
translation efficiency.
 Instruction set emulation requires binary translation and optimization. A virtual instruction
set architecture (V-ISA) thus requires adding a processor-specific software translation layer
to the compiler.
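Here is a toy Python sketch of emulation by code interpretation; the three-register guest machine and its opcodes are invented for illustration only.

def interpret(program):
    # Decode and carry out one guest instruction at a time; each one
    # costs several host-level steps (decode, dispatch, register access),
    # which is why interpretation is slow and why translating whole
    # basic blocks (dynamic binary translation) pays off.
    regs = {"r0": 0, "r1": 0, "r2": 0}
    for op, dst, src in program:
        if op == "LI":        # load immediate into a register
            regs[dst] = src
        elif op == "ADD":     # dst = dst + value of register src
            regs[dst] += regs[src]
        elif op == "MUL":     # dst = dst * value of register src
            regs[dst] *= regs[src]
    return regs

print(interpret([("LI", "r0", 6), ("LI", "r1", 7), ("MUL", "r0", "r1")]))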

Hardware Abstraction Level


 Hardware-level virtualization is performed right on top of the bare hardware.
 This approach generates a virtual hardware environment for a VM.
 The process manages the underlying hardware through virtualization. The idea is to virtualize a computer's resources, such as its processors, memory, and I/O devices.
 The intention is to improve the hardware utilization rate by multiple users concurrently. The idea was implemented in the IBM VM/370 in the 1960s.
 More recently, the Xen hypervisor has been applied to virtualize x86-based machines to run Linux or other guest OS applications.



Operating System Level
 This refers to an abstraction layer between the traditional OS and user applications.
 OS-level virtualization creates isolated containers on a single physical server, with OS instances utilizing the hardware and software in data centers.
 The containers behave like real servers.
 OS-level virtualization is commonly used in creating virtual hosting environments to allocate hardware resources among a large number of mutually distrusting users.
 It is also used, to a lesser extent, in consolidating server hardware by moving services on separate hosts into containers or VMs on one server.

Library Support Level


 Most applications use APIs exported by user-level libraries rather than using lengthy system calls to the OS.
 Since most systems provide well-documented APIs, such an interface becomes another candidate for virtualization.
 Virtualization with library interfaces is possible by controlling the communication link between applications and the rest of a system through API hooks, as sketched below.
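A minimal Python sketch of an API hook: a library call is intercepted and mediated without touching the application or the OS. Using time.time as the target and a one-hour offset are illustrative choices.

import time

_real_time = time.time   # keep a handle on the real library routine

def hooked_time():
    # The hook mediates the call; here it simply shifts the clock the
    # application observes (a virtualized view of one API).
    return _real_time() - 3600.0

time.time = hooked_time   # install the hook
print(time.time())        # the application now sees the mediated value
time.time = _real_time    # remove the hook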

User-Application Level
 Virtualization at the application level virtualizes an application as a VM.
 On a traditional OS, an application often runs as a process. Therefore, application-level virtualization is also known as process-level virtualization.
 The most popular approach is to deploy high-level language (HLL) VMs. In this scenario, the virtualization layer sits as an application program on top of the operating system.
 The layer exports an abstraction of a VM that can run programs written and compiled for a particular abstract machine definition.
 Any program written in the HLL and compiled for this VM will be able to run on it. The Microsoft .NET CLR and the Java Virtual Machine (JVM) are two good examples of this class of VM.

VMM Design Requirements and Providers


 The VMM is a layer between the real hardware and traditional operating systems. This layer is commonly called the virtual machine monitor (VMM).
 There are three requirements for a VMM:
 A VMM should provide an environment for programs which is essentially identical to the original machine.
 Programs running in this environment should show, at worst, only minor decreases in speed.
 The VMM should be in complete control of the system resources.
 Complete control includes the following aspects:
 (1) The VMM is responsible for allocating hardware resources for programs;
 (2) it is not possible for a program to access any resource not explicitly allocated to it;
 (3) it is possible under certain circumstances for the VMM to regain control of resources already allocated.



Virtualization Support at the OS Level
 Why OS-level virtualization?
o It is slow to initialize a hardware-level VM because each VM creates its own image from scratch.
 OS virtualization inserts a virtualization layer inside an operating system to partition a
machine’s physical resources.
 It enables multiple isolated VMs within a single operating system kernel.
 This kind of VM is often called a virtual execution environment (VE), a Virtual Private System (VPS), or simply a container.
 The benefits of OS extensions are twofold:
o (1) VMs at the operating system level have minimal startup/shutdown costs,
low resource requirements, and high scalability;
o (2) for an OS-level VM, it is possible for a VM and its host environment
to synchronize state changes when necessary.

Middleware Support for Virtualization


 Library-level virtualization is also known as user-level Application Binary
Interface (ABI) or API emulation.
 This type of virtualization can create execution environments for running alien
programs on a platform

Hypervisor and Xen Architecture


 The hypervisor software sits directly between the physical hardware and its OS.
 This virtualization layer is referred to as either the VMM or the hypervisor

Xen Architecture
 Xen is an open source hypervisor program developed by Cambridge University.
 Xen is a microkernel hypervisor.
 The core components of a Xen system are the hypervisor, kernel, and applications.
 The guest OS, which has control ability, is called Domain 0, and the others are called
Domain U
 Domain 0 is designed to access hardware directly and manage devices



• The VM state is akin to a tree: the current state of the machine is a point that progresses monotonically as the software executes.
• VMs are allowed to roll back to previous states in their execution (e.g., to fix configuration errors) or rerun from the same point many times.

Full virtualization
 In full virtualization, noncritical instructions run on the hardware directly, while critical instructions are discovered and replaced with traps into the VMM to be emulated by software.
 VMware puts the VMM at Ring 0 and the guest OS at Ring 1.
 The VMM scans the instruction stream and identifies the privileged, control- and
behavior-sensitive instructions.
 When these instructions are identified, they are trapped into the VMM, which
emulates the behavior of these instructions.
 The method used in this emulation is called binary translation.
 Therefore, full virtualization combines binary translation and direct execution, as the toy sketch below illustrates.
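The toy Python sketch below illustrates the scan-and-trap idea; the instruction names and the VMM's emulation behavior are invented for illustration.

SENSITIVE = {"HLT", "OUT", "LGDT"}   # assumed privileged/sensitive ops

def vmm_emulate(instr):
    # The VMM emulates the effect of a trapped instruction in software.
    return "emulated " + instr

def run_guest(stream):
    # Scan the guest instruction stream: sensitive instructions trap
    # into the VMM; noncritical ones run directly on the hardware.
    results = []
    for instr in stream:
        if instr in SENSITIVE:
            results.append(vmm_emulate(instr))
        else:
            results.append("executed " + instr + " directly")
    return results

for line in run_guest(["MOV", "ADD", "OUT", "MOV", "HLT"]):
    print(line)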



Para-Virtualization
• Para-virtualization needs to modify the guest operating systems.
• A para-virtualized VM provides special APIs requiring substantial OS modifications in user applications.

CPU Virtualization
• A CPU architecture is virtualizable if it supports the ability to run the VM’s privileged and
unprivileged instructions in the CPU’s user mode while the VMM runs in supervisor
mode.
• Hardware-assisted CPU virtualization: this technique attempts to simplify virtualization because full virtualization or para-virtualization is complicated.

Memory Virtualization
• In memory virtualization, the guest operating system maintains mappings of virtual memory to machine memory using page tables.
• All modern x86 CPUs include a memory management unit (MMU) and a translation lookaside buffer (TLB) to optimize virtual memory performance.
• A two-stage mapping process is maintained by the guest OS and the VMM, respectively: virtual memory to physical memory, and physical memory to machine memory (see the toy sketch below).
• The VMM is responsible for mapping the guest physical memory to the actual machine memory.
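The two-stage mapping can be pictured with a toy Python sketch; the page numbers below are invented for illustration.

# Stage 1 (guest OS): guest virtual page -> guest "physical" page.
guest_page_table = {0: 5, 1: 7}
# Stage 2 (VMM): guest physical page -> actual machine page.
vmm_page_table = {5: 42, 7: 13}

def translate(guest_virtual_page):
    guest_physical = guest_page_table[guest_virtual_page]  # guest OS mapping
    return vmm_page_table[guest_physical]                  # VMM mapping

print(translate(0))   # guest virtual page 0 lands in machine page 42

In practice, VMMs collapse these two lookups with shadow page tables or hardware nested paging, so the TLB can cache direct virtual-to-machine translations.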



I/O Virtualization
• I/O virtualization manages the routing of I/O requests between virtual devices and the shared physical hardware.
• Full device emulation emulates well-known, real-world devices. All the functions of a device or bus infrastructure, such as device enumeration, identification, interrupts, and DMA, are replicated in software. This software is located in the VMM and acts as a virtual device.

Virtualization in Multi-Core Processors


• Multi-core virtualization has raised some new challenges.
• There are two difficulties: application programs must be parallelized to use all cores fully, and software must explicitly assign tasks to the cores, which is a very complex problem.
• For the first challenge, new programming models, languages, and libraries are needed to make parallel programming easier.
• The second challenge has spawned research involving scheduling algorithms and resource management policies.
• Dynamic heterogeneity is emerging, mixing fat CPU cores and thin GPU cores on the same chip.



• In many-core chip multiprocessors (CMPs), instead of supporting time-sharing jobs on one or a few cores, the abundant cores can be used for space-sharing, where single-threaded or multithreaded jobs are simultaneously assigned to separate groups of cores (a hedged sketch follows).
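A hedged sketch of space-sharing using CPU affinity; os.sched_setaffinity is Linux-only, and the two disjoint core groups assume a machine with at least four cores.

import os
from multiprocessing import Process

def job(name, cores):
    # Pin this job to its own group of cores (space-sharing), instead
    # of time-sharing one core with other jobs.
    os.sched_setaffinity(0, cores)
    total = sum(range(10_000_000))   # stand-in for real work
    print(name, "ran on cores", sorted(os.sched_getaffinity(0)), total)

if __name__ == "__main__":
    a = Process(target=job, args=("job-A", {0, 1}))
    b = Process(target=job, args=("job-B", {2, 3}))
    a.start(); b.start()
    a.join(); b.join()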

Physical versus Virtual Clusters


• Virtual clusters are built with VMs installed at distributed servers from one or more physical clusters.
• Their benefits include fast deployment, high-performance virtual storage, and the reduction of duplicated blocks.
Virtual Clusters
• There are four ways to manage a virtual cluster.
• First, you can use a guest-based manager, by which the cluster manager resides on a guest system.
• Second, you can use a host-based manager, which supervises the guest systems and can restart a guest system on another physical machine.
• A third way is to use an independent cluster manager on both the host and guest systems.
• Finally, you can use an integrated cluster manager on the guest and host systems.
• In all cases, the manager must be designed to distinguish between virtualized resources and physical resources.

Virtualization for data-center automation

• Data-center automation means that huge volumes of hardware, software, and database resources in data centers can be allocated dynamically to millions of Internet users simultaneously, with guaranteed QoS and cost-effectiveness.
• This automation process is triggered by the growth of virtualization products and cloud computing services.



• The latest virtualization developments highlight high availability (HA), backup services, workload balancing, and further increases in client bases.

Server Consolidation in Data Centers
• Data centers run heterogeneous workloads: chatty (interactive) workloads and noninteractive workloads.
• Server consolidation is an approach to improving the low utilization ratio of hardware resources by reducing the number of physical servers.

Virtual Storage Management


• Storage virtualization has a different meaning in a system virtualization environment.
• In system virtualization, virtual storage includes the storage managed by VMMs and guest OSes.
• Data stored in this environment can be classified into two categories: VM images and application data.

Cloud OS for Virtualized Data Centers


• Data centers must be virtualized to serve as cloud providers.
• Eucalyptus for virtual networking of private clouds:
o Eucalyptus is an open source software system intended mainly for supporting Infrastructure as a Service (IaaS) clouds.
o The system primarily supports virtual networking and the management of VMs; virtual storage is not supported.
o Its purpose is to build private clouds.
o It has three resource managers:
 Instance Manager
 Group Manager
 Cloud Manager
