CC Unit2 Notes2
Virtualization Support at the OS Level With the help of VM technology, a new computing
mode known as cloud computing is emerging. Cloud computing is transforming the
computing landscape by shifting the hardware and staffing costs of managing a
computational center to third parties, much as people entrust their money to banks
rather than safeguarding it themselves.
Why OS-Level Virtualization As mentioned earlier, it is slow to initialize a hardware-level
VM because each VM creates its own image from scratch. In a cloud computing
environment, perhaps thousands of VMs need to be initialized simultaneously. Besides
this slow start-up, storing the VM images also becomes an issue, since in practice there is
considerable repeated content among VM images.
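The repeated content among images suggests content-addressed storage: hash fixed-size blocks and store each distinct block only once. A minimal Python sketch, assuming toy 4-byte blocks in place of real 4 KB pages (the function and variable names are illustrative, not any real image store's API):

```python
import hashlib

def dedup_images(images, block_size=4):
    """Store each distinct block once; each image becomes a list of block hashes."""
    store = {}    # block hash -> block content, shared across all images
    recipes = []  # one recipe (list of hashes) per image
    for image in images:
        recipe = []
        for i in range(0, len(image), block_size):
            block = image[i:i + block_size]
            h = hashlib.sha256(block).hexdigest()
            store.setdefault(h, block)  # common blocks are stored only once
            recipe.append(h)
        recipes.append(recipe)
    return store, recipes

# Two VM images that share a common "base OS" prefix.
images = [b"KERNLIBSAPP1", b"KERNLIBSAPP2"]
store, recipes = dedup_images(images)
```

Here the two images contain six blocks in total, but only four distinct blocks are stored; the shared kernel and library blocks appear once.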
Consider the architectures of a machine before and after virtualization. Before virtualization, the
operating system manages the hardware. After virtualization, a virtualization layer is inserted
between the hardware and the operating system. In such a case, the virtualization layer is
responsible for converting portions of the real hardware into virtual hardware.
Hypervisor and Xen Architecture The hypervisor supports hardware-level virtualization on
bare-metal devices such as the CPU, memory, disk, and network interfaces. The hypervisor
software sits directly between the physical hardware and the operating system.
The Xen Architecture Xen is an open source hypervisor program developed by Cambridge
University. Xen is a microkernel hypervisor, which separates the policy from the mechanism.
Xen runs the VMM at Ring 0 and the guest OS at Ring 1. The VMM scans the instruction stream and
identifies the privileged, control- and behavior-sensitive instructions.
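A toy sketch of how a VMM might classify an instruction stream before execution; the instruction sets below are small illustrative samples (real x86 classification covers far more instructions):

```python
# Illustrative instruction sets; real x86 classification is far larger.
PRIVILEGED = {"hlt", "lgdt", "invlpg"}   # trap when executed at Ring 1
SENSITIVE = {"popf", "sgdt", "smsw"}     # behave differently in user mode but do NOT trap

def scan(stream):
    """Decide how a scanning VMM handles each instruction."""
    plan = []
    for insn in stream:
        if insn in PRIVILEGED:
            plan.append((insn, "trap-and-emulate"))
        elif insn in SENSITIVE:
            plan.append((insn, "rewrite"))   # must be patched: it will not trap
        else:
            plan.append((insn, "execute"))   # safe to run natively
    return plan

plan = scan(["mov", "popf", "hlt"])
```

The sensitive-but-unprivileged instructions are the troublesome case: they cannot be caught by a trap, so the VMM must find and rewrite them before they run.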
Host-Based Virtualization An alternative VM architecture is to install a virtualization layer
on top of the host OS. This host OS is still responsible for managing the hardware. The guest
OSes are installed and run on top of the virtualization layer. Dedicated applications may run
on the VMs. Certainly, some other applications can also run with the host OS directly.
KVM (Kernel-Based VM) This is a Linux para-virtualization system, included as part of the
Linux kernel since version 2.6.20. Memory management and scheduling activities are carried out by the
existing Linux kernel. The KVM does the rest, which makes it simpler than the hypervisor
that controls the entire machine.
Para-Virtualization with Compiler Support Unlike the full virtualization architecture, which
intercepts and emulates privileged and sensitive instructions at runtime, para-virtualization
handles these instructions at compile time. The guest OS kernel is modified to replace the
privileged and sensitive instructions with hypercalls to the hypervisor or VMM. Xen assumes
such a para-virtualization architecture.
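The compile-time replacement can be sketched as a toy source-to-source pass. The hypercall names (`hv_*`) are made up for illustration; they are not Xen's real hypercall ABI:

```python
# Illustrative mapping from privileged instructions to hypercalls;
# the hv_* names are hypothetical, not Xen's actual interface.
HYPERCALLS = {
    "mov cr3": "hv_set_pagetable",   # page-table switch
    "cli": "hv_disable_events",      # interrupt masking
}

def paravirtualize(kernel_source):
    """Toy pass: replace privileged instructions with explicit hypercalls,
    so they need not be trapped and emulated at runtime."""
    out = []
    for line in kernel_source:
        for insn, call in HYPERCALLS.items():
            if line.startswith(insn):
                out.append(f"call {call}")
                break
        else:
            out.append(line)   # unprivileged instruction: left unchanged
    return out

rewritten = paravirtualize(["mov eax, 1", "mov cr3, ebx", "cli"])
```

This is why the guest OS kernel must be modified: the replacement happens in the kernel's source or binary before it ever runs, trading portability for lower runtime overhead.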
Hardware-Assisted CPU Virtualization To save and restore the CPU state for VMs, a set of
additional instructions is added. At the time of this writing, Xen,
VMware, and the Microsoft Virtual PC all implement their hypervisors by using the VT-x
technology. Generally, hardware-assisted virtualization should have high efficiency.
However, since the transition from the hypervisor to the guest OS incurs high overhead
switches between processor modes, it sometimes cannot outperform binary translation.
Memory Virtualization:
Virtual memory virtualization is similar to the virtual memory support provided by modern
operating systems. In a traditional execution environment, the operating system maintains
mappings of virtual memory to machine memory using page tables, which is a one-stage
mapping from virtual memory to machine memory. All modern x86 CPUs include a memory
management unit (MMU) and a translation lookaside buffer (TLB) to optimize virtual
memory performance.
Since each guest OS page table has a corresponding page table maintained by the VMM,
the VMM's copy is called the shadow page table. Nested page tables add another
layer of indirection to virtual memory. The MMU already handles virtual-to-physical
translations as defined by the OS. Then the physical memory addresses are translated to
machine addresses using another set of page tables defined by the hypervisor.
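The two-stage mapping, and the shadow page table as its precomputed composition, can be sketched as follows. This assumes toy single-level page tables represented as dictionaries; real hardware walks multi-level tables and caches the results in the TLB:

```python
def translate(vaddr, guest_pt, host_pt, page=4096):
    """Two-stage translation: guest virtual -> guest physical -> machine."""
    vpn, offset = divmod(vaddr, page)
    gpn = guest_pt[vpn]   # stage 1: guest OS page table
    mpn = host_pt[gpn]    # stage 2: hypervisor (nested) page table
    return mpn * page + offset

guest_pt = {0: 5, 1: 7}  # guest virtual page  -> guest physical page
host_pt = {5: 2, 7: 9}   # guest physical page -> machine page

# A shadow page table is the VMM's precomputed composition of both stages,
# so the MMU can translate guest-virtual to machine in a single step:
shadow_pt = {vpn: host_pt[gpn] for vpn, gpn in guest_pt.items()}
```

Looking up an address through the shadow table gives the same machine address as walking both stages, which is the point: the VMM pays the composition cost up front (and must keep the shadow in sync with guest page-table updates), while nested paging pays it on every TLB miss instead.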
I/O Virtualization I/O virtualization involves managing the routing of I/O requests
between virtual devices and the shared physical hardware. At the time of this writing, there
are three ways to implement I/O virtualization: full device emulation, para-virtualization, and
direct I/O. Full device emulation is the first approach for I/O virtualization.
Virtual Hierarchy The idea is illustrated in Figure 3.17(a). Space sharing is applied to assign three workloads to
three clusters of virtual cores: namely VM0 and VM3 for database workload, VM1 and VM2
for web server workload, and VM4–VM7 for middleware workload. The basic assumption is
that each workload runs in its own VM. However, space sharing applies equally within a
single operating system. Statically distributing the directory among tiles can do much better,
provided operating systems or hypervisors carefully map virtual pages to physical frames.
Marty and Hill suggested a two-level virtual coherence and caching hierarchy that
harmonizes with the assignment of tiles to the virtual clusters of VMs.
Figure 3.17(b) illustrates a logical view of such a virtual cluster hierarchy in two levels. Each
VM operates in an isolated fashion at the first level. This minimizes both miss access time
and performance interference with other workloads or VMs. Moreover, the shared resources
of cache capacity, interconnect links, and miss handling are mostly isolated between VMs.
The second level maintains a globally shared memory. This facilitates dynamically
repartitioning resources without costly cache flushes. Furthermore, maintaining globally
shared memory minimizes changes to existing system software and allows virtualization
features such as content-based page sharing. A virtual hierarchy adapts to space-shared
workloads like multiprogramming and server consolidation.
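A minimal sketch of the two-level idea: each VM gets a private first level, while a shared second level keeps one global memory image. The class names and return values are illustrative, not the actual coherence protocol:

```python
class VirtualHierarchy:
    """Toy two-level lookup: a private first level isolates each VM;
    a shared second level maintains one globally shared memory image."""

    def __init__(self, global_mem):
        self.global_mem = global_mem  # level 2: globally shared
        self.l1 = {}                  # level 1: vm_id -> private cache

    def read(self, vm, addr):
        cache = self.l1.setdefault(vm, {})
        if addr in cache:
            # Served within the VM's own tiles: no interference with other VMs.
            return cache[addr], "level-1 hit"
        value = self.global_mem[addr]  # falls through to the shared level
        cache[addr] = value
        return value, "level-2 access"
```

Most accesses stay within a VM's own first-level region, isolating performance, while the shared second level lets the hypervisor repartition tiles or share identical pages across VMs without flushing caches.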