Unit III
Virtual Machines and Virtualization
A traditional computer runs with a host operating system specially tailored for its hardware
architecture, as shown in the figure.
After virtualization, different user applications managed by their own operating systems
(guest OSes) can run on the same hardware, independent of the host OS. This is often done by
adding a layer of software called a virtualization layer. This virtualization layer is known as
the hypervisor or virtual machine monitor (VMM).
Hardware Abstraction Level:
Hardware-level virtualization is performed right on top of the bare hardware. On the one hand,
this approach generates a virtual hardware environment for each VM. On the other hand, it
manages the underlying hardware through virtualization.
The idea is to virtualize a computer’s resources, such as its processors, memory, and I/O
devices. The intention is to improve the hardware utilization rate by allowing multiple users
to share the hardware concurrently.
Operating System Level:
This refers to an abstraction layer between the traditional OS and user applications. OS-level
virtualization creates isolated containers on a single physical server, and these OS instances
utilize the hardware and software in data centers.
The containers behave like real servers. OS-level virtualization is commonly used in creating
virtual hosting environments to allocate hardware resources among a large number of
mutually distrusting users.
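As an illustration only, the sketch below uses Linux namespaces to create something container-like on a shared kernel. It assumes a Linux host and root privileges, and the hostname "container0" is an arbitrary example; it shows the general idea of OS-level isolation, not how any particular container product is built.

/* Minimal sketch of OS-level isolation using Linux namespaces.
 * Illustration only: not the implementation of any container product.
 * Build: gcc isolate.c -o isolate    Run (as root): ./isolate
 */
#define _GNU_SOURCE
#include <sched.h>      /* unshare(), CLONE_* flags */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void)
{
    /* Give this process its own hostname (UTS) and PID namespaces,
     * so it is isolated from other "containers" on the same kernel. */
    if (unshare(CLONE_NEWUTS | CLONE_NEWPID) == -1) {
        perror("unshare");
        return EXIT_FAILURE;
    }

    /* The new hostname is visible only inside this UTS namespace. */
    if (sethostname("container0", 10) == -1) {
        perror("sethostname");
        return EXIT_FAILURE;
    }

    /* Children forked from here get PIDs starting at 1 in the new
     * PID namespace, just like processes on a "real" server. */
    pid_t child = fork();
    if (child == 0) {
        execlp("sh", "sh", "-c",
               "hostname; echo PID inside namespace: $$", (char *)NULL);
        perror("execlp");
        _exit(EXIT_FAILURE);
    }
    waitpid(child, NULL, 0);
    return EXIT_SUCCESS;
}

Run as root, this prints the hostname set inside the namespace and a shell PID of 1: the process sees its own small "server" while still sharing the host kernel with everything else.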
Library Support Level:
Most applications use APIs exported by user-level libraries rather than using lengthy system
calls to the OS.
Virtualization with library interfaces is possible by controlling the communication link
between applications and the rest of a system through API hooks.
The software tool WINE has implemented this approach to support Windows applications on
top of UNIX hosts.
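To make the API-hook idea concrete, here is a minimal sketch using the Linux dynamic linker's LD_PRELOAD mechanism to interpose on a single C library call. This is only an illustration of controlling the application/library interface; WINE itself works by providing its own implementations of the Windows APIs rather than preloading hooks like this.

/* hook.c: a minimal sketch of API hooking at the library level.
 * Compile as a shared object and preload it so calls to the C
 * library's puts() are intercepted before reaching the real one:
 *   gcc -shared -fPIC hook.c -o hook.so -ldl
 *   LD_PRELOAD=./hook.so ./some_program
 */
#define _GNU_SOURCE
#include <dlfcn.h>   /* dlsym(), RTLD_NEXT */
#include <stdio.h>

int puts(const char *s)
{
    /* Look up the next (real) definition of puts in the link chain. */
    int (*real_puts)(const char *) =
        (int (*)(const char *))dlsym(RTLD_NEXT, "puts");

    /* A virtualization layer would do its work here: translate
     * arguments, redirect to emulated resources, enforce policy. */
    fprintf(stderr, "[hook] puts() intercepted\n");

    return real_puts(s);
}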
User-Application Level:
At this level, the virtualization layer sits as an application program on top of the operating
system and exports an abstraction of a VM that can run programs written and compiled to a
particular abstract machine definition (for example, the Java Virtual Machine).
As mentioned earlier, hardware-level virtualization inserts a layer between real hardware and
traditional operating systems.
This layer is commonly called the Virtual Machine Monitor (VMM) and it manages the
hardware resources of a computing system. Each time programs access the hardware, the
VMM captures the process.
In this sense, the VMM acts as a traditional OS.
One hardware component, such as the CPU, can be virtualized as several virtual copies.
Therefore, several traditional operating systems (the same OS or different ones) can run on
the same set of hardware simultaneously.
Three main modules (dispatcher, allocator, and interpreter) coordinate their activity in order
to emulate the underlying hardware.
The dispatcher constitutes the entry point of the monitor and reroutes the instructions issued
by the virtual machine instance to one of the two other modules.
The allocator is responsible for deciding the system resources to be provided to the VM:
whenever a virtual machine tries to execute an instruction that results in changing the
machine resources associated with that VM, the allocator is invoked by the dispatcher.
The interpreter module consists of interpreter routines. These are executed whenever a virtual
machine executes a privileged instruction: a trap is triggered and the corresponding routine is
executed.
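This division of labor can be sketched in a few lines of C. Everything below (the trap structure, the opcodes, the routine names) is invented for illustration; a real monitor dispatches actual hardware traps, but the routing of each trap to either the allocator or an interpreter routine follows the same shape.

/* Toy sketch of the dispatcher/allocator/interpreter split in a VMM.
 * All types and routines are invented; real monitors are far more
 * involved.
 */
#include <stdio.h>

/* Why the guest trapped into the monitor (illustrative). */
typedef enum { TRAP_RESOURCE_CHANGE, TRAP_PRIVILEGED_INSN } trap_kind;

typedef struct {
    trap_kind kind;
    int       vm_id;   /* which virtual machine instance trapped */
    unsigned  opcode;  /* the instruction that caused the trap   */
} trap_info;

/* Allocator: decides which machine resources back the VM. */
static void allocator(const trap_info *t)
{
    printf("VM %d: reassigning resources for opcode 0x%x\n",
           t->vm_id, t->opcode);
}

/* Interpreter: one routine per privileged instruction, emulating it. */
static void interpreter(const trap_info *t)
{
    printf("VM %d: emulating privileged opcode 0x%x\n",
           t->vm_id, t->opcode);
}

/* Dispatcher: the monitor's entry point; reroutes every trap. */
static void dispatcher(const trap_info *t)
{
    if (t->kind == TRAP_RESOURCE_CHANGE)
        allocator(t);
    else
        interpreter(t);
}

int main(void)
{
    trap_info t1 = { TRAP_RESOURCE_CHANGE, 0, 0x0f01 };
    trap_info t2 = { TRAP_PRIVILEGED_INSN, 1, 0xfa   };
    dispatcher(&t1);   /* routed to the allocator   */
    dispatcher(&t2);   /* routed to the interpreter */
    return 0;
}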
Popek and Goldberg identified three properties that a virtual machine manager should satisfy:
Equivalence: A guest running under the control of a virtual machine manager should exhibit
the same behavior as when it is executed directly on the physical host.
Resource control: The virtual machine manager should be in complete control of virtualized
resources.
Efficiency: A statistically dominant fraction of the machine instructions should be executed
without intervention from the virtual machine manager.
With the help of VM technology, a new computing mode known as cloud computing is
emerging.
Cloud computing is transforming the computing landscape by shifting the hardware and
staffing costs of managing a computational center to third parties, just like banks.
However, cloud computing has at least two challenges.
The first is the ability to use a variable number of physical machines and VM instances
depending on the needs of a problem.
For example, a task may need only a single CPU during some phases of execution but may
need hundreds of CPUs at other times.
The second challenge concerns the slow operation of instantiating new VMs.
Advantages of OS Extensions:
(1) VMs at the operating system level have minimal startup/shutdown costs, low resource
requirements, and high scalability.
(2) For an OS-level VM, it is possible for the VM and its host environment to synchronize state
changes when necessary.
These benefits can be achieved via two mechanisms of OS-level virtualization:
(1) All OS-level VMs on the same physical machine share a single operating system kernel.
(2) The virtualization layer can be designed in a way that allows processes in VMs to access
as many resources of the host machine as possible, but never to modify them.
Disadvantages of OS Extensions:
The main disadvantage of OS extensions is that all the VMs at operating system level on a
single container must have the same kind of guest operating system.
The virtualization layer is inserted inside the OS to partition the hardware resources for
multiple VMs to run their applications in multiple virtual environments. To implement OS-level
virtualization, isolated execution environments (VMs) are created based on a single OS kernel.
The hypervisor supports hardware-level virtualization on bare metal devices like CPU,
memory, disk and network interfaces.
The hypervisor software sits directly between the physical hardware and its OS. This
virtualization layer is referred to as either the VMM or the hypervisor.
The hypervisor provides hypercalls for the guest OSes and applications. Depending on the
functionality, a hypervisor can assume a micro-kernel architecture like Microsoft Hyper-V, or
a monolithic hypervisor architecture like VMware ESX for server virtualization. A micro-kernel
hypervisor includes only the basic and unchanging functions.
Unfortunately, virtualization also brings a series of security problems during the software
life cycle and data lifetime. Traditionally, a machine’s lifetime can be envisioned as a
straight line where the current state of the machine is a point that progresses monotonically
as the software executes.
During this time, configuration changes are made, software is installed, and patches are
applied. In a virtual environment, however, the VM state is akin to a tree: at any point,
execution can go into N different branches, and multiple instances of a VM can exist at
different points in this tree at any given time.
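The tree-shaped state can be pictured with a small data structure. The sketch below is purely illustrative (the snapshot names and fields are invented); it only shows how one saved state can branch into several children, which is what makes managing patches and configurations harder than on a physical machine with a single, linear history.

/* Toy model of VM snapshots forming a tree rather than a straight
 * line.  All names and fields are invented for illustration.
 */
#include <stdio.h>

#define MAX_CHILDREN 4

typedef struct snapshot {
    const char      *label;                  /* e.g. "patched"            */
    struct snapshot *children[MAX_CHILDREN]; /* states branched from here */
    int              nchildren;
} snapshot;

/* Branch a new state off an existing snapshot. */
static void branch(snapshot *parent, snapshot *child)
{
    if (parent->nchildren < MAX_CHILDREN)
        parent->children[parent->nchildren++] = child;
}

static void print_tree(const snapshot *s, int depth)
{
    printf("%*s%s\n", depth * 2, "", s->label);
    for (int i = 0; i < s->nchildren; i++)
        print_tree(s->children[i], depth + 1);
}

int main(void)
{
    snapshot base    = { "base install", {0}, 0 };
    snapshot patched = { "patched",      {0}, 0 };
    snapshot rolled  = { "rolled back",  {0}, 0 };
    snapshot testing = { "test config",  {0}, 0 };

    branch(&base, &patched);    /* one branch: patches applied        */
    branch(&base, &rolled);     /* another: rolled back to old state  */
    branch(&patched, &testing); /* branches can branch again          */

    print_tree(&base, 0);
    return 0;
}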
Full Virtualization with Binary Translation:
This approach was implemented by VMware and many other software companies. VMware
puts the VMM at Ring 0 and the guest OS at Ring 1. The VMM scans the instruction stream
and identifies the privileged, control-sensitive, and behavior-sensitive instructions.
When these instructions are identified, they are trapped into the VMM, which emulates the
behavior of these instructions. The method used in this emulation is called binary translation.
Therefore, full virtualization combines binary translation and direct execution.
The guest OS is completely decoupled from the underlying hardware. Consequently, the
guest OS is unaware that it is being virtualized. The performance of full virtualization may
not be ideal, because it involves binary translation which is rather time-consuming.
Binary translation employs a code cache to store translated hot instructions to improve
performance, but it increases the cost of memory usage.
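The role of the code cache can be illustrated with a toy lookup table keyed by the guest program counter. The translation step below is only a stub and the cache layout is invented; the point is that a hot guest block is translated once on a miss and reused on every later hit, trading extra memory for less translation work.

/* Toy sketch of a binary-translation code cache keyed by guest PC. */
#include <stdio.h>
#include <stdlib.h>

#define CACHE_SLOTS 256

typedef struct {
    unsigned long guest_pc;      /* address of the guest basic block */
    void         *host_code;     /* translated host code (stub here) */
    int           valid;
} cache_entry;

static cache_entry code_cache[CACHE_SLOTS];

/* Stand-in for the (expensive) translator. */
static void *translate_block(unsigned long guest_pc)
{
    printf("translating guest block at 0x%lx\n", guest_pc);
    return malloc(64);           /* pretend this holds host code */
}

/* Return host code for a guest block, translating only on a miss. */
static void *lookup_or_translate(unsigned long guest_pc)
{
    cache_entry *e = &code_cache[guest_pc % CACHE_SLOTS];
    if (e->valid && e->guest_pc == guest_pc)
        return e->host_code;              /* hit: reuse translation   */

    e->guest_pc  = guest_pc;              /* miss: translate and keep */
    e->host_code = translate_block(guest_pc);
    e->valid     = 1;
    return e->host_code;
}

int main(void)
{
    lookup_or_translate(0x401000);  /* miss: translated once     */
    lookup_or_translate(0x401000);  /* hit: no second translation */
    lookup_or_translate(0x402000);  /* miss: another block        */
    return 0;
}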
Host-Based Virtualization:
An alternative VM architecture is to install a virtualization layer on top of the host OS. This
host OS is still responsible for managing the hardware. The guest OSes are installed and run
on top of the virtualization layer.
Dedicated applications may run on the VMs. Certainly, some other applications can also run
with the host OS directly. This host-based architecture has some distinct advantages, as
enumerated next.
First, the user can install this VM architecture without modifying the host OS. The
virtualizing software can rely on the host OS to provide device drivers and other low-level
services.
Second, the host-based approach appeals to many host machine configurations. Compared to
the hypervisor/VMM architecture, however, the performance of the host-based architecture may
be low.
When an application requests hardware access, it involves four layers of mapping which
downgrades performance significantly.
When the ISA of a guest OS is different from the ISA of the underlying hardware, binary
translation must be adopted.
(Figure: x86 protection rings under full virtualization, with user apps at Ring 3, the guest
OS at Ring 1, and the VMM at Ring 0 above the host hardware; OS requests go through binary
translation while user requests execute directly.)
Para-Virtualization Architecture:
When the x86 processor is virtualized, a virtualization layer is inserted between the hardware
and the OS. According to the x86 ring definition, the virtualization layer should also be
installed at Ring 0.
Different instructions at Ring 0 may cause some problems. In Figure 3.8, we show that para-
virtualization replaces nonvirtualizable instructions with hypercalls that communicate directly
with the hypervisor or VMM.
However, when the guest OS kernel is modified for virtualization, it can no longer run on the
hardware directly.
The popular Xen, KVM, and VMware ESX are good examples.
KVM (Kernel-Based VM):
This is a Linux para-virtualization system and a part of the Linux kernel since version 2.6.20.
Memory management and scheduling activities are carried out by the existing Linux kernel.
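As a small, concrete touchpoint with KVM, the sketch below opens the real /dev/kvm device, reads the API version, and asks the kernel to create an empty VM. Setting up guest memory and virtual CPUs is deliberately omitted; the sketch assumes a Linux host with KVM enabled and permission to access /dev/kvm, and only shows that VM creation goes through the host kernel.

/* Minimal probe of the Linux KVM interface. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/kvm.h>

int main(void)
{
    int kvm = open("/dev/kvm", O_RDWR | O_CLOEXEC);
    if (kvm < 0) {
        perror("open /dev/kvm");
        return 1;
    }

    /* Ask the kernel which KVM API version it speaks. */
    int version = ioctl(kvm, KVM_GET_API_VERSION, 0);
    printf("KVM API version: %d\n", version);

    /* Ask the kernel to create a new, empty virtual machine. */
    int vm = ioctl(kvm, KVM_CREATE_VM, 0UL);
    if (vm < 0)
        perror("KVM_CREATE_VM");
    else
        printf("created VM file descriptor %d\n", vm);

    if (vm >= 0)
        close(vm);
    close(kvm);
    return 0;
}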
Para-Virtualization with Compiler Support:
Unlike the full virtualization architecture, which intercepts and emulates privileged and
sensitive instructions at runtime, para-virtualization handles these instructions at compile
time.
The guest OS kernel is modified to replace the privileged and sensitive instructions with
hypercalls to the hypervisor or VMM. Xen assumes such a para-virtualization architecture.
The guest OS running in a guest domain may run at Ring 1 instead of at Ring 0. This implies
that the guest OS may not be able to execute some privileged and sensitive instructions.
The privileged instructions are implemented by hypercalls to the hypervisor. After replacing
the instructions with hypercalls, the modified guest OS emulates the behavior of the original
guest OS.
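The replacement of privileged instructions with hypercalls can be pictured with an operations table that the guest kernel fills in at boot, in the spirit of (but much simpler than) the paravirt-ops mechanism in the Linux kernel. The function names, the hypercall number, and the printf "hypervisor" below are all invented; on real hardware the native path would execute the privileged instruction and the para-virtualized path would trap into the actual hypervisor.

/* Simplified sketch of how a para-virtualized guest kernel swaps
 * privileged instructions for hypercalls.  Everything here is
 * invented for illustration; it is not Xen's or Linux's actual code.
 */
#include <stdio.h>

/* One "privileged" operation the kernel needs: disabling interrupts. */
struct pv_cpu_ops {
    void (*irq_disable)(void);
};

/* Native path: on bare metal this would execute the privileged
 * instruction (e.g. x86 CLI) directly. */
static void native_irq_disable(void)
{
    printf("native: executing privileged instruction directly\n");
}

/* Para-virtualized path: issue a hypercall instead; the hypervisor
 * performs the operation on the guest's behalf (faked with printf). */
static void hypercall(int nr)
{
    printf("hypercall %d -> handled by hypervisor/VMM\n", nr);
}

#define HCALL_IRQ_DISABLE 1   /* invented hypercall number */

static void pv_irq_disable(void)
{
    hypercall(HCALL_IRQ_DISABLE);
}

/* At boot the guest kernel picks one implementation for the table. */
static struct pv_cpu_ops cpu_ops;

int main(void)
{
    int running_on_hypervisor = 1;  /* pretend detection result */

    cpu_ops.irq_disable = running_on_hypervisor ? pv_irq_disable
                                                : native_irq_disable;

    /* Kernel code simply calls through the table, unaware of which
     * implementation is behind it. */
    cpu_ops.irq_disable();
    return 0;
}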