CC Unit2 Notes2



Topic 1: Implementation levels of virtualization.


Virtualization is a computer architecture technology by which multiple virtual machines
(VMs) are multiplexed in the same hardware machine. The idea is to separate the hardware
from the software to yield better system efficiency.
Levels of Virtualization Implementation: A traditional computer runs with a host operating
system specially tailored for its hardware architecture, as shown in Figure 3.1(a). After
virtualization, different user applications managed by their own operating systems (guest OS)
can run on the same hardware, independent of the host OS.

Instruction Set Architecture Level At the ISA level, virtualization is performed by
emulating a given ISA by the ISA of the host machine. For example, MIPS binary code can
run on an x86-based host machine with the help of ISA emulation. With this approach, it is
possible to run a large amount of legacy binary code written for various processors on any
given new hardware host machine. Instruction set emulation leads to virtual ISAs created on
any hardware machine.
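
To make the idea concrete, the following toy interpreter is a minimal sketch of ISA emulation: a host program fetches, decodes, and executes guest instructions in software. The four-byte instruction encoding and the opcodes are invented purely for illustration and do not correspond to any real ISA.

#include <stdint.h>
#include <stdio.h>

/* Toy "guest ISA": each instruction is 4 bytes:
 *   byte 0 = opcode, byte 1 = dest reg, bytes 2-3 = immediate or src regs.
 * This encoding is invented only to illustrate software emulation. */
enum { OP_HALT = 0, OP_LOADI = 1, OP_ADD = 2, OP_PRINT = 3 };

static uint32_t regs[8];               /* emulated guest registers */

/* Interpret guest binary code one instruction at a time on the host. */
static void emulate(const uint8_t *code, size_t len)
{
    size_t pc = 0;                     /* emulated program counter */
    while (pc + 4 <= len) {
        uint8_t op = code[pc], rd = code[pc + 1];
        uint8_t a = code[pc + 2], b = code[pc + 3];
        pc += 4;
        switch (op) {
        case OP_LOADI: regs[rd] = (uint32_t)(a << 8 | b); break;
        case OP_ADD:   regs[rd] = regs[a] + regs[b];      break;
        case OP_PRINT: printf("r%u = %u\n", (unsigned)rd, (unsigned)regs[rd]); break;
        case OP_HALT:  return;
        }
    }
}

int main(void)
{
    /* r1 = 2; r2 = 3; r0 = r1 + r2; print r0; halt */
    const uint8_t prog[] = {
        OP_LOADI, 1, 0, 2,  OP_LOADI, 2, 0, 3,
        OP_ADD,   0, 1, 2,  OP_PRINT, 0, 0, 0,
        OP_HALT,  0, 0, 0,
    };
    emulate(prog, sizeof prog);        /* prints r0 = 5 */
    return 0;
}
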
Hardware Abstraction Level Hardware-level virtualization is performed right on top of the
bare hardware. On the one hand, this approach generates a virtual hardware environment for a
VM; on the other hand, the process manages the underlying hardware through virtualization.
Operating System Level This refers to an abstraction layer between the traditional OS and user
applications. OS-level virtualization creates isolated containers on a single physical server,
and these OS instances utilize the hardware and software in data centers.
Library Support Level Most applications use APIs exported by user-level libraries rather
than using lengthy system calls by the OS. Since most systems provide well-documented
APIs, such an interface becomes another candidate for virtualization.
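
A common way to realize library-level virtualization on Linux is to interpose a substitute library ahead of the real one. The sketch below is an illustration of the general technique, not any particular product: it uses LD_PRELOAD and dlsym(RTLD_NEXT, ...) to intercept one API call (malloc is chosen only as an example) and forward it to the real implementation after observing it.

/* intercept.c: minimal sketch of library-level API interception.
 * Build:  gcc -shared -fPIC intercept.c -o intercept.so -ldl
 * Use:    LD_PRELOAD=./intercept.so ./some_program
 * A real library-level virtualization layer would remap or redirect the
 * request here; this sketch only logs it and forwards it. */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stddef.h>
#include <unistd.h>

void *malloc(size_t size)
{
    /* Look up the "real" malloc in the next library on the search path. */
    static void *(*real_malloc)(size_t);
    if (!real_malloc)
        real_malloc = (void *(*)(size_t))dlsym(RTLD_NEXT, "malloc");

    /* write() is used instead of printf to avoid re-entering malloc. */
    static const char msg[] = "intercepted malloc\n";
    write(2, msg, sizeof msg - 1);

    return real_malloc(size);          /* forward the intercepted call */
}
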
User-Application Level Virtualization at the application level virtualizes an application as a
VM. On a traditional OS, an application often runs as a process. Therefore, application-level
virtualization is also known as process-level virtualization.
Relative Merits of Different Approaches The relative merits of implementing virtualization at
the various levels can be compared using four technical merits. "Higher Performance" and
"Application Flexibility" are self-explanatory. "Implementation Complexity" refers to the cost
of implementing that particular virtualization level. "Application Isolation" refers to the effort
required to isolate resources committed to different VMs. In such a comparison, each column
heading corresponds to one of these merits and each row to a particular level of virtualization.

VMM Design Requirements and Providers: Hardware-level virtualization inserts a layer
between real hardware and traditional operating systems. This layer is commonly called the
Virtual Machine Monitor (VMM), and it manages the hardware resources of a computing
system.

There are three requirements for a VMM.


• First, a VMM should provide an environment for programs which is essentially
identical to the original machine.
• Second, programs run in this environment should show, at worst, only minor
decreases in speed.
• Third, a VMM should be in complete control of the system resources.

Virtualization Support at the OS Level With the help of VM technology, a new computing
mode known as cloud computing is emerging. Cloud computing is transforming the
computing landscape by shifting the hardware and staffing costs of managing a
computational center to third parties, just like banks.
Why OS-Level Virtualization As mentioned earlier, it is slow to initialize a hardware-level
VM because each VM creates its own image from scratch. In a cloud computing
environment, perhaps thousands of VMs need to be initialized simultaneously. Besides slow
operation, storing the VM images also becomes an issue. As a matter of fact, there is
considerable repeated content among VM images.

Advantages of OS Extensions Compared to hardware-level virtualization, the benefits of OS


extensions are twofold: (1) VMs at the operating system level have minimal startup/shutdown
costs, low resource requirements, and high scalability; and
(2) for an OS-level VM, it is possible for a VM and its host environment to synchronize state
changes when necessary.
These benefits can be achieved via two mechanisms of OS-level virtualization:
(1) All OS-level VMs on the same physical machine share a single operating system kernel;
and
(2) the virtualization layer can be designed in a way that allows processes in VMs to access
as many resources of the host machine as possible, but never to modify them.
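
On Linux, these two mechanisms are visible in the namespace facilities of the single shared kernel. The following minimal sketch (assumes Linux; run as root, or add a user namespace, for the flags to be permitted) creates a child that shares the host kernel yet sees its own hostname and PID space, which is the essence of an OS-level container.

/* Minimal sketch of OS-level isolation on Linux: the child gets its own
 * UTS (hostname) and PID namespaces, yet parent and child share one kernel. */
#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

static char child_stack[1024 * 1024];

static int child_main(void *arg)
{
    (void)arg;
    sethostname("container0", 10);              /* visible only inside this container */
    printf("inside: pid=%d\n", (int)getpid());  /* prints pid=1 in the new PID space  */
    return 0;
}

int main(void)
{
    pid_t pid = clone(child_main, child_stack + sizeof child_stack,
                      CLONE_NEWUTS | CLONE_NEWPID | SIGCHLD, NULL);
    if (pid < 0) { perror("clone"); return 1; }
    waitpid(pid, NULL, 0);
    return 0;
}
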

Disadvantages of OS Extensions The main disadvantage of OS extensions is that all the
VMs at the operating system level on a single container must have the same kind of guest
operating system. That is, although different OS-level VMs may have different operating
system distributions, they must pertain to the same operating system family.
Virtualization on Linux or Windows Platforms By far, most reported OS-level
virtualization systems are Linux-based. Virtualization support on the Windows-based
platform is still in the research stage. The Linux kernel offers an abstraction layer to allow
software processes to work with and operate on resources without knowing the hardware
details. New hardware may need a new Linux kernel to support it.
Middleware Support for Virtualization Library-level virtualization is also known as user-
level Application Binary Interface (ABI) or API emulation. This type of virtualization can
create execution environments for running alien programs on a platform rather than creating a
VM to run the entire operating system. API call interception and remapping are the key
functions performed.
The vCUDA for Virtualization of General-Purpose GPUs:
The vCUDA employs a client-server model to implement CUDA virtualization. It
consists of three user space components: the vCUDA library, a virtual GPU in the guest OS
(which acts as a client), and the vCUDA stub in the host OS (which acts as a server). The
vCUDA library resides in the guest OS as a substitute for the standard CUDA library. It is
responsible for intercepting and redirecting API calls from the client to the stub. Besides
these tasks, vCUDA also creates vGPUs and manages them.
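
A conceptual sketch of the guest-side substitute library is given below. The cudaMalloc signature follows the standard CUDA runtime API, but the transport helper send_to_stub and the message layout are hypothetical; the sketch only illustrates intercepting a CUDA call in the guest and redirecting it to the stub in the host OS.

/* Conceptual sketch of a vCUDA-style substitute library in the guest OS. */
#include <stddef.h>

typedef int cudaError_t;                 /* stand-in for the real CUDA enum  */

/* Hypothetical RPC helper of the vCUDA client: marshals the request, ships
 * it to the stub in the host OS, and waits for the reply. */
extern cudaError_t send_to_stub(const char *api, void *args, size_t len);

struct malloc_args { void *dev_ptr; size_t size; };

cudaError_t cudaMalloc(void **devPtr, size_t size)
{
    struct malloc_args a = { NULL, size };
    cudaError_t err = send_to_stub("cudaMalloc", &a, sizeof a);
    *devPtr = a.dev_ptr;                 /* handle chosen by the host-side stub */
    return err;                          /* caller sees an ordinary CUDA call   */
}
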
Basic Concept of the vCUDA Architecture:

Topic 2: VIRTUALIZATION STRUCTURES/TOOLS AND MECHANISMS

This topic examines the architectures of a machine before and after virtualization. Before
virtualization, the operating system manages the hardware. After virtualization, a virtualization
layer is inserted between the hardware and the operating system. In such a case, the
virtualization layer is responsible for converting portions of the real hardware into virtual
hardware.
Hypervisor and Xen Architecture The hypervisor supports hardware-level virtualization on
bare metal devices like CPU, memory, disk and network interfaces. The hypervisor software
sits directly between the physical hardware and its OS.
The Xen Architecture Xen is an open source hypervisor program developed by Cambridge
University. Xen is a microkernel hypervisor, which separates the policy from the mechanism.

Binary Translation with Full Virtualization


Depending on implementation technologies, hardware virtualization can be classified into
two categories: full virtualization and host-based virtualization. Full virtualization does not
need to modify the host OS. It relies on binary translation to trap and to virtualize the
execution of certain sensitive, nonvirtualizable instructions.
Full Virtualization With full virtualization, noncritical instructions run on the hardware
directly while critical instructions are discovered and replaced with traps into the VMM to be
emulated by software.
Binary Translation of Guest OS Requests Using a VMM This approach was implemented
by VMware and many other software companies. As shown in Figure 3.6, VMware puts the
VMM at Ring 0 and the guest OS at Ring 1. The VMM scans the instruction stream and
identifies the privileged, control- and behavior-sensitive instructions.
Host-Based Virtualization An alternative VM architecture is to install a virtualization layer
on top of the host OS. This host OS is still responsible for managing the hardware. The guest
OSes are installed and run on top of the virtualization layer. Dedicated applications may run
on the VMs. Certainly, some other applications can also run with the host OS directly.

Para-Virtualization with Compiler Support Para-virtualization needs to modify the guest
operating systems. A para-virtualized VM provides special APIs requiring substantial OS
modifications in user applications. Performance degradation is a critical issue of a virtualized
system.
Para-Virtualization Architecture When the x86 processor is virtualized, a virtualization
layer is inserted between the hardware and the OS. According to the x86 ring definition, the
virtualization layer should also be installed at Ring 0. Different instructions at Ring 0 may
cause some problems. In Figure 3.8, we show that para-virtualization replaces
nonvirtualizable instructions with hypercalls that communicate directly with the hypervisor or
VMM. However, when the guest OS kernel is modified for virtualization, it can no longer run
on the hardware directly.

KVM (Kernel-Based VM) This is a Linux para-virtualization system—a part of the Linux
version 2.6.20 kernel. Memory management and scheduling activities are carried out by the
existing Linux kernel. The KVM does the rest, which makes it simpler than the hypervisor
that controls the entire machine.
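
In practice the KVM module is driven from user space through /dev/kvm and a set of ioctls. The sketch below shows only the basic sequence of creating a VM, registering guest memory, and creating a vCPU; loading guest code, setting registers, and the KVM_RUN loop are omitted, and error handling is minimal.

/* Minimal sketch of the KVM userspace API (/dev/kvm and its ioctls). */
#include <fcntl.h>
#include <linux/kvm.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int kvm = open("/dev/kvm", O_RDWR);
    if (kvm < 0) { perror("open /dev/kvm"); return 1; }
    printf("KVM API version: %d\n", ioctl(kvm, KVM_GET_API_VERSION, 0));

    int vm = ioctl(kvm, KVM_CREATE_VM, 0);          /* file descriptor for one VM */

    /* Back 64 KiB of guest "physical" memory with anonymous host memory. */
    void *mem = mmap(NULL, 0x10000, PROT_READ | PROT_WRITE,
                     MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    struct kvm_userspace_memory_region region = {
        .slot = 0,
        .guest_phys_addr = 0x0,
        .memory_size = 0x10000,
        .userspace_addr = (unsigned long)mem,
    };
    ioctl(vm, KVM_SET_USER_MEMORY_REGION, &region);

    int vcpu = ioctl(vm, KVM_CREATE_VCPU, 0);       /* one virtual CPU */

    /* The shared kvm_run structure is where exit reasons would be read
     * after each KVM_RUN; mapping it is shown, the run loop is not. */
    int run_size = ioctl(kvm, KVM_GET_VCPU_MMAP_SIZE, 0);
    struct kvm_run *run = mmap(NULL, run_size, PROT_READ | PROT_WRITE,
                               MAP_SHARED, vcpu, 0);
    (void)run;

    close(vcpu); close(vm); close(kvm);
    return 0;
}
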
Para-Virtualization with Compiler Support: Unlike the full virtualization architecture, which
intercepts and emulates privileged and sensitive instructions at runtime, para-virtualization
handles these instructions at compile time. The guest OS kernel is modified to replace the
privileged and sensitive instructions with hypercalls to the hypervisor or VMM. Xen assumes
such a para-virtualization architecture.
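
The following fragment sketches the idea at the source level. The hypercall() wrapper and the HC_SET_PAGE_TABLE number are hypothetical; a real para-virtualized kernel (for example, a Xen PV guest) would use the hypervisor's actual hypercall ABI.

/* Conceptual sketch: a privileged operation in the guest kernel is rewritten
 * at compile time to call the hypervisor instead of executing the sensitive
 * instruction itself. */
#include <stdint.h>

extern long hypercall(int number, uint64_t arg);   /* traps into the VMM (hypothetical) */
#define HC_SET_PAGE_TABLE 1

/* Native kernel: would execute a privileged instruction directly, e.g.
 *   asm volatile("mov %0, %%cr3" :: "r"(pgd_phys));
 * Para-virtualized kernel: asks the hypervisor to do it on its behalf. */
static void set_page_table_base(uint64_t pgd_phys)
{
    hypercall(HC_SET_PAGE_TABLE, pgd_phys);
}
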

Topic 3: VIRTUALIZATION OF CPU, MEMORY, AND I/O DEVICES


Hardware Support for Virtualization Modern operating systems and processors permit
multiple processes to run simultaneously. If there is no protection mechanism in a processor,
all instructions from different processes will access the hardware directly and cause a system
crash.

CPU Virtualization: A VM is a duplicate of an existing computer system in which a majority
of the VM instructions are executed on the host processor in native mode. Thus, unprivileged
instructions of VMs run directly on the host machine for higher efficiency.
Hardware-Assisted CPU Virtualization: This technique attempts to simplify virtualization
because full or para virtualization is complicated. Intel and AMD add an additional mode
called privilege mode level (some people call it Ring-1) to x86 processors.
To save and restore the CPU state for VMs, a set of additional instructions is added. At the time of this writing, Xen,
VMware, and the Microsoft Virtual PC all implement their hypervisors by using the VT-x
technology. Generally, hardware-assisted virtualization should have high efficiency.
However, since the transition from the hypervisor to the guest OS incurs high overhead
switches between processor modes, it sometimes cannot outperform binary translation.
Memory Virtualization:
Virtual memory virtualization is similar to the virtual memory support provided by modern
operating systems. In a traditional execution environment, the operating system maintains
mappings of virtual memory to machine memory using page tables, which is a one-stage
mapping from virtual memory to machine memory. All modern x86 CPUs include a memory
management unit (MMU) and a translation lookaside buffer (TLB) to optimize virtual
memory performance.

Since each page table of the guest OSes has a separate page table in the VMM corresponding
to it, the VMM page table is called the shadow page table. Nested page tables add another
layer of indirection to virtual memory. The MMU already handles virtual-to-physical
translations as defined by the OS. Then the physical memory addresses are translated to
machine addresses using another set of page tables defined by the hypervisor.
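
This two-stage mapping can be sketched as the composition of two lookups, as below. Real page tables are multi-level structures walked by the MMU; here each stage is reduced to a flat array indexed by page number, purely for illustration. A shadow page table would pre-compose the two stages into a single table maintained by the VMM.

/* Simplified sketch of the two-stage mapping behind memory virtualization. */
#include <stdint.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1u << PAGE_SHIFT)
#define NPAGES     256

/* Stage 1: maintained by the guest OS (guest virtual -> guest physical). */
static uint64_t guest_page_table[NPAGES];
/* Stage 2: maintained by the hypervisor (guest physical -> machine). */
static uint64_t hyper_page_table[NPAGES];

static uint64_t translate(uint64_t guest_virtual)
{
    uint64_t offset = guest_virtual & (PAGE_SIZE - 1);
    uint64_t gvpn   = guest_virtual >> PAGE_SHIFT;     /* guest virtual page  */
    uint64_t gppn   = guest_page_table[gvpn];          /* guest physical page */
    uint64_t mpn    = hyper_page_table[gppn];          /* machine page        */
    return (mpn << PAGE_SHIFT) | offset;               /* machine address     */
}
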
I/O Virtualization I/O virtualization involves managing the routing of I/O requests
between virtual devices and the shared physical hardware. At the time of this writing, there
are three ways to implement I/O virtualization: full device emulation, para-virtualization, and
direct I/O. Full device emulation is the first approach for I/O virtualization.
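
Para-virtualized (split-driver) I/O typically passes requests through a ring shared between a guest front-end driver and a host back-end driver. The structure below is a simplified, hypothetical ring, not the actual virtio or Xen layout; it only illustrates how the guest posts a request descriptor that the host later services.

/* Simplified sketch of a shared request ring for para-virtualized I/O. */
#include <stdint.h>

#define RING_SIZE 16

struct io_desc {
    uint64_t guest_phys_addr;   /* buffer the back-end should read or write */
    uint32_t len;
    uint32_t write;             /* 1 = device writes into the buffer        */
};

struct io_ring {
    struct io_desc desc[RING_SIZE];
    volatile uint32_t head;     /* advanced by the guest front-end */
    volatile uint32_t tail;     /* advanced by the host back-end   */
};

/* Guest side: post a request; a real front-end would then notify the host,
 * e.g. via a hypercall or a doorbell register. */
static int frontend_submit(struct io_ring *r, struct io_desc d)
{
    if (r->head - r->tail == RING_SIZE)
        return -1;                          /* ring full */
    r->desc[r->head % RING_SIZE] = d;
    r->head++;
    return 0;
}
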

Virtualization in Multi-Core Processors Virtualizing a multi-core processor is relatively
more complicated than virtualizing a uni-core processor. Though multicore processors are
claimed to have higher performance by integrating multiple processor cores in a single chip,
multi-core virtualization has raised some new challenges for computer architects, compiler
constructors, system designers, and application programmers.
Physical versus Virtual Processor Cores Wells et al. proposed a multicore virtualization
method to allow hardware designers to get an abstraction of the low-level details of the
processor cores. This technique alleviates the burden and inefficiency of managing hardware
resources by software.
Virtual Hierarchy The emerging many-core chip multiprocessors (CMPs) provide a new
computing landscape. Instead of supporting time-sharing jobs on one or a few cores, we can
use the abundant cores in a space-sharing manner, where single-threaded or multithreaded jobs
are simultaneously assigned to separate groups of cores for long time intervals.

The idea is illustrated in Figure 3.17(a). Space sharing is applied to assign three workloads to
three clusters of virtual cores: namely VM0 and VM3 for database workload, VM1 and VM2
for web server workload, and VM4–VM7 for middleware workload. The basic assumption is
that each workload runs in its own VM. However, space sharing applies equally within a
single operating system. Statically distributing the directory among tiles can do much better,
provided operating systems or hypervisors carefully map virtual pages to physical frames.
Marty and Hill suggested a two-level virtual coherence and caching hierarchy that
harmonizes with the assignment of tiles to the virtual clusters of VMs.

Figure 3.17(b) illustrates a logical view of such a virtual cluster hierarchy in two levels. Each
VM operates in an isolated fashion at the first level. This will minimize both miss access time
and performance interference with other workloads or VMs. Moreover, the shared resources
of cache capacity, inter-connect links, and miss handling are mostly isolated between VMs.
The second level maintains a globally shared memory. This facilitates dynamically
repartitioning resources without costly cache flushes. Furthermore, maintaining globally
shared memory minimizes changes to existing system software and allows virtualization
features such as content-based page sharing. A virtual hierarchy adapts to space-shared
workloads like multiprogramming and server consolidation.
