PPT - CC - UNIT-2
UNIT- II: Virtual Machines and Virtualization of Clusters and Data Centers
3. Virtualization of CPU, Memory, and I/O Devices
Virtualization at Hardware Abstraction level
• A hardware abstraction layer (HAL) is a logical division of code that serves as an abstraction
layer between a computer's physical hardware and its software.
• It provides a device driver interface allowing a program to communicate with the hardware.
• The main purpose of a HAL is to conceal different hardware architectures from the OS by
providing a uniform interface to the system peripherals.
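• As a rough illustration of the "uniform interface" idea, the following Python sketch (all names hypothetical) defines one abstract device interface that hides two different hardware back ends from the device-independent code that uses them:

    # Minimal sketch of a HAL-style uniform device interface (hypothetical names).
    from abc import ABC, abstractmethod

    class BlockDevice(ABC):
        """Uniform interface that programs and the OS code against."""
        @abstractmethod
        def read_block(self, lba: int) -> bytes: ...
        @abstractmethod
        def write_block(self, lba: int, data: bytes) -> None: ...

    class SataDisk(BlockDevice):
        """One 'hardware architecture' hidden behind the interface."""
        def __init__(self):
            self.blocks = {}
        def read_block(self, lba):
            return self.blocks.get(lba, b"\x00" * 512)
        def write_block(self, lba, data):
            self.blocks[lba] = data[:512]

    class NvmeDisk(BlockDevice):
        """A different device type, same interface."""
        def __init__(self):
            self.namespace = {}
        def read_block(self, lba):
            return self.namespace.get(lba, b"\x00" * 512)
        def write_block(self, lba, data):
            self.namespace[lba] = data[:512]

    def copy_block(src: BlockDevice, dst: BlockDevice, lba: int) -> None:
        # Device-independent code: works for any BlockDevice implementation.
        dst.write_block(lba, src.read_block(lba))

    if __name__ == "__main__":
        a, b = SataDisk(), NvmeDisk()
        a.write_block(0, b"boot sector")
        copy_block(a, b, 0)
        print(b.read_block(0)[:11])   # b'boot sector'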
The HAL provides the following benefits:
• Allowing applications to extract as much performance out of the hardware devices as possible
• Enabling the OS to perform regardless of the hardware architecture
• Enabling device drivers to provide direct access to each hardware device, which allows programs
to be device-independent
• Allowing software programs to communicate with the hardware devices at a general level
• Facilitating portability
Virtualization at Operating System (OS) level
Advantage:
• Has minimal startup/shutdown cost, low resource requirements, and high
scalability; the VM and host can synchronize state changes.
Shortcoming & limitation:
• All VMs at the operating system level must have the same kind of guest OS
• Poor application flexibility and isolation.
Library Support Level
• Most applications use APIs exported by user-level libraries rather than using lengthy system calls by the
OS.
• Since most systems provide well-documented APIs, such an interface becomes another candidate for
virtualization.
• Virtualization with library interfaces is possible by controlling the communication link between applications
and the rest of a system through API hooks.
• The software tool WINE has implemented this approach to support Windows applications on top of UNIX
hosts.
• Another example is vCUDA, which allows applications executing within VMs to leverage GPU hardware
acceleration.
• The most popular approach is to deploy high-level language (HLL) VMs.
• In this scenario, the virtualization layer sits as an application program on top of the operating system,
and the layer exports an abstraction of a VM that can run programs written and compiled to a particular
abstract machine definition.
• Any program written in the HLL and compiled for this VM will be able to run on it.
• The Microsoft .NET CLR and Java Virtual Machine (JVM) are two good examples of this class of VM.
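• A minimal sketch of the API-hook idea (hypothetical names; not how WINE or vCUDA are actually implemented): an application calls a library function, and a virtualization layer intercepts the call at the library interface and forwards it to its own handler:

    # Sketch of library-level virtualization via API hooks (illustrative only).
    import math

    def app():
        # The application only knows the library API, not what sits behind it.
        return math.sqrt(2.0)

    # Virtualization layer: hook the API and route calls through our own handler.
    _real_sqrt = math.sqrt

    def hooked_sqrt(x):
        print(f"[hook] sqrt({x}) intercepted, forwarding to host implementation")
        return _real_sqrt(x)      # here the layer could also emulate or offload the call

    math.sqrt = hooked_sqrt       # install the hook at the library interface

    if __name__ == "__main__":
        print(app())              # application code is unchanged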
Virtualization for Linux
Advantages of OS Extensions
• VMs at the OS level have minimal start-up/shutdown costs, low resource
requirements, and high scalability.
• For an OS-level VM, the VM and its host environment can synchronize state changes.
• Library-level virtualization, by contrast, is also known as user-level
Application Binary Interface (ABI) or API emulation.
• This type of virtualization creates execution environments for running alien
(new/unknown) programs on a platform, rather than creating a VM to run an entire OS.
Hypervisor and Xen Architecture
▪ Like other virtualization systems, many guest OSes can run on top of the hypervisor.
▪ The guest OS, which has control ability, is called Domain 0, and the others are called Domain U.
▪ Domain 0 is a privileged guest OS of Xen.
▪ It is first loaded when Xen boots without any file system drivers being available.
▪ Domain 0 is designed to access hardware directly and manage devices. Therefore, one of the
responsibilities of Domain 0 is to allocate and map hardware resources for the guest domains (the
Domain U domains).
1. Full virtualization
• Full virtualization does not need to modify the host OS. It relies on binary translation to trap and virtualize the execution of
certain sensitive, non-virtualizable instructions.
• The guest OSes and their applications consist of noncritical and critical instructions.
• Noncritical instructions do not control hardware or threaten the security of the system, but critical instructions do.
• Therefore, running noncritical instructions directly on hardware not only promotes efficiency but also ensures system security.
• Both the hypervisor and VMM approaches are considered full virtualization.
2. Host-based virtualization
• In a host-based system, both a host OS and a guest OS are used. A virtualization software layer is built between the host OS and
the guest OS.
• The host OS is still responsible for managing the hardware. The guest OSes are installed and run on top of
the virtualization layer.
• The virtualizing software can rely on the host OS to provide device drivers and other low-level services.
• The virtualization layer can be inserted at different positions in a machine's software stack.
• The virtualization layer can be inserted at different positions in a machine software stack.
• However, para-virtualization attempts to reduce the virtualization overhead, and thus improve
performance by modifying only the guest OS kernel.
• A hypercall is based on the same concept as a system call. System calls are used by an application to request
services from the OS and provide the interface between the application or process and the OS. Hypercalls
work the same way, except that the guest OS requests services from the hypervisor.
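• The hypercall idea can be pictured like a system-call table whose handlers live in the hypervisor. The toy Python dispatcher below (all names hypothetical; real hypercalls are CPU instructions) shows a para-virtualized guest kernel requesting a service instead of executing a privileged operation itself:

    # Toy hypercall dispatcher (illustrative sketch, not a real hypervisor API).
    class Hypervisor:
        def __init__(self):
            # Hypercall number -> handler, analogous to a syscall table.
            self.hypercall_table = {
                1: self.update_page_table,
                2: self.send_virtual_irq,
            }

        def hypercall(self, number, *args):
            return self.hypercall_table[number](*args)

        def update_page_table(self, guest_pfn, machine_pfn):
            print(f"[hv] map guest frame {guest_pfn} -> machine frame {machine_pfn}")
            return 0

        def send_virtual_irq(self, vcpu, vector):
            print(f"[hv] inject IRQ {vector} into vCPU {vcpu}")
            return 0

    class ParavirtGuestKernel:
        def __init__(self, hv):
            self.hv = hv
        def map_page(self, gpfn, mpfn):
            # A para-virtualized kernel calls the hypervisor instead of
            # writing the page table with a privileged instruction.
            return self.hv.hypercall(1, gpfn, mpfn)

    if __name__ == "__main__":
        guest = ParavirtGuestKernel(Hypervisor())
        guest.map_page(0x10, 0x8F)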
Examples of Para-Virtualization
• To support virtualization, processors such as the x86 employ a special running mode and
instructions, known as hardware-assisted virtualization.
• If there is no protection mechanism in a processor, all instructions from different processes will access the hardware
directly and cause a system crash.
• All processors have at least two modes, user mode and supervisor mode, to ensure controlled access of critical
hardware.
• Instructions running in supervisor mode are called privileged instructions. Other instructions are unprivileged
instructions.
• In a virtualized environment, it is more difficult to make OSes and applications run correctly because there are more
layers in the machine stack
• Thus, unprivileged instructions of VMs run directly on the host machine for higher efficiency.
• Other critical instructions should be handled carefully for correctness and stability.
• The critical instructions are divided into three categories: privileged instructions, control-sensitive instructions, and behavior-
sensitive instructions.
• Privileged instructions execute in a privileged mode and will be trapped if executed outside this mode.
• Control-sensitive instructions attempt to change the configuration of the resources used.
• Behavior-sensitive instructions have different behaviors depending on the configuration of resources, including the load and
store operations over the virtual memory.
• Intel and AMD add an additional mode, called privilege mode level (some people call it
Ring -1), to x86 processors.
• Therefore, operating systems can still run at Ring 0 and the hypervisor can run at Ring -1.
• All the privileged and sensitive instructions are trapped in the hypervisor automatically.
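• A rough sketch of trap-and-emulate (purely illustrative; real trapping is done by the CPU): unprivileged instructions run "directly", while privileged/sensitive instructions raise a trap that the hypervisor handles on the guest's behalf:

    # Toy trap-and-emulate loop (illustrative only; instruction names are made up).
    PRIVILEGED = {"HLT", "LGDT", "OUT"}        # pretend these are privileged ops

    class TrapToHypervisor(Exception):
        pass

    def run_on_cpu(instr):
        # Hardware raises a trap when a privileged instruction is executed
        # outside supervisor (or non-root) mode.
        if instr in PRIVILEGED:
            raise TrapToHypervisor(instr)
        return f"executed {instr} directly"

    def hypervisor_emulate(instr):
        return f"hypervisor emulated {instr} on behalf of the guest"

    def run_guest(program):
        for instr in program:
            try:
                print(run_on_cpu(instr))
            except TrapToHypervisor as trap:
                print(hypervisor_emulate(str(trap)))

    if __name__ == "__main__":
        run_guest(["ADD", "MOV", "OUT", "SUB", "HLT"])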
• In a traditional execution environment, the operating system maintains mappings of virtual memory to machine
memory using page tables, which is a one-stage mapping from virtual memory to machine memory.
• All modern x86 CPUs include a memory management unit (MMU) and a translation lookaside buffer (TLB) to optimize
virtual memory performance.
• A translation lookaside buffer (TLB) is a memory cache that is used to reduce the time taken to access a user memory
location. It is a part of the chip's memory-management unit (MMU). The TLB stores the recent translations of virtual
memory to physical memory and can be called an address-translation cache.
• However, in a virtual execution environment, virtual memory virtualization involves sharing the physical system
memory in RAM and dynamically allocating it to the physical memory of the VMs.
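• A small Python sketch of the extra translation stage (page numbers are made up; real MMUs walk page tables in hardware): guest virtual pages map to guest "physical" pages, which the hypervisor then maps to machine pages, with a tiny TLB-like cache in front:

    # Two-stage address translation sketch: guest virtual -> guest physical -> machine.
    guest_page_table   = {0: 5, 1: 7}      # guest virtual page  -> guest physical page
    hypervisor_p2m_map = {5: 42, 7: 99}    # guest physical page -> machine page
    tlb = {}                               # caches the combined translation

    def translate(guest_vpage):
        if guest_vpage in tlb:                      # TLB hit: skip both table walks
            return tlb[guest_vpage]
        gppage = guest_page_table[guest_vpage]      # stage 1: guest page table
        mpage  = hypervisor_p2m_map[gppage]         # stage 2: hypervisor's P2M map
        tlb[guest_vpage] = mpage
        return mpage

    if __name__ == "__main__":
        print(translate(0))   # 42 (walks both tables)
        print(translate(0))   # 42 (TLB hit)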
• Emulation is using software to provide a different execution environment or architecture. For example, you might have an Android
emulator run on a Windows box. The Windows box doesn't have the same processor that an Android device does so the emulator
actually executes the Android application through software.
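• The "execute through software" idea can be sketched as a fetch-decode-execute loop for a made-up instruction set (entirely hypothetical; real emulators such as the Android emulator are far more elaborate):

    # Minimal software emulator for a toy instruction set (hypothetical ISA).
    def emulate(program):
        regs = {"A": 0, "B": 0}
        pc = 0
        while pc < len(program):
            op, *args = program[pc]                  # fetch + decode
            if op == "LOAD":                         # execute in software
                regs[args[0]] = args[1]
            elif op == "ADD":
                regs[args[0]] += regs[args[1]]
            elif op == "PRINT":
                print(regs[args[0]])
            pc += 1
        return regs

    if __name__ == "__main__":
        emulate([("LOAD", "A", 2), ("LOAD", "B", 3), ("ADD", "A", "B"), ("PRINT", "A")])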
• In para-virtualized I/O, the frontend driver runs in Domain U and manages the requests of the guest OS. The backend
driver runs in Domain 0 and is responsible for managing the real I/O devices. This methodology gives
better performance but has a higher CPU overhead.
• Direct I/O virtualization lets the VM access devices directly; it achieves high performance with lower CPU cost. Currently, it is used
mainly for mainframes.
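• A toy sketch of the split-driver model (hypothetical names, in the spirit of Xen's approach): the frontend driver in Domain U places I/O requests on a shared ring, and the backend driver in Domain 0 services them against the real device:

    # Split-driver (frontend/backend) I/O sketch.
    from collections import deque

    shared_ring = deque()        # shared memory ring between Domain U and Domain 0

    class FrontendDriver:
        """Runs in Domain U; forwards guest I/O requests instead of touching hardware."""
        def read_block(self, lba):
            shared_ring.append({"op": "read", "lba": lba})

    class BackendDriver:
        """Runs in Domain 0; owns the real device and services queued requests."""
        def __init__(self):
            self.disk = {3: b"guest data"}
        def service(self):
            while shared_ring:
                req = shared_ring.popleft()
                data = self.disk.get(req["lba"], b"")
                print(f"[dom0] {req['op']} lba={req['lba']} -> {data!r}")

    if __name__ == "__main__":
        FrontendDriver().read_block(3)   # guest issues request
        BackendDriver().service()        # Domain 0 completes it on the real device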
• The idea behind virtual clusters is that they consist of many instances of the same server image, all divided up the same way.
Big jobs are handled by adding more virtual instances to the workflow.
• The virtual cluster nodes can be either physical or virtual (VMs) with different operating systems.
• A VM runs with a guest OS that manages the resources in the physical machine.
• The purpose of using VMs is to consolidate multiple functionalities on the same server.
• VMs can be replicated in multiple servers to promote parallelism, fault tolerance, and disaster recovery.
• The failure of some physical nodes will slow the work but the failure of VMs will cause no harm (fault
tolerance is high).
• It also has a drawback: a VM must stop working if its host node fails. This can
be lessened by migrating the VM from the failing node to another node.
• We can use a guest-based manager, by which the cluster manager resides inside a guest OS.
• We can use a host-based manager, which is itself a cluster manager residing on the host systems.
• An independent cluster manager, which can be used on both the host and the guest – making the
infrastructure complex.
• Finally, we might also use an integrated cluster (manager), on the guest and host operating systems; here the
manager must clearly distinguish between physical and virtual resources.
• Active State: This refers to a VM that has been instantiated at the VZ platform to perform a task.
• Paused State: A VM has been instantiated but is temporarily disabled from processing tasks, or is in a
waiting state.
• Suspended State: A VM enters this state if its machine file and virtual resources are stored back to the disk.
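• These states can be pictured as a small state machine; the sketch below (transition names are assumptions, not any particular hypervisor's API) only allows the moves described above:

    # Toy VM lifecycle state machine (states from the slide; transitions assumed).
    ALLOWED = {
        ("active", "pause"):     "paused",
        ("paused", "resume"):    "active",
        ("active", "suspend"):   "suspended",   # state and resources saved to disk
        ("suspended", "resume"): "active",
    }

    class VM:
        def __init__(self):
            self.state = "active"     # instantiated on the virtualization platform
        def transition(self, event):
            key = (self.state, event)
            if key not in ALLOWED:
                raise ValueError(f"cannot {event} a {self.state} VM")
            self.state = ALLOWED[key]
            return self.state

    if __name__ == "__main__":
        vm = VM()
        print(vm.transition("pause"))     # paused
        print(vm.transition("resume"))    # active
        print(vm.transition("suspend"))   # suspended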
• Steps 0 and 1: Start migration. This is usually triggered automatically by load-balancing or server-consolidation strategies.
• Step 2: Transfer memory (transfer the memory data, then recopy any data that changed during the process). This goes on iteratively until
the remaining changed (dirty) memory is small enough; a sketch of this pre-copy loop follows the steps below.
• Step 3: Suspend the VM and copy the last portion of the data.
• Steps 4 and 5: Commit and activate the new host. Here, all the data is recovered, and the VM is started from exactly the place where it
was suspended on the source.
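• A compact sketch of the iterative pre-copy loop behind steps 2 to 5 (page counts, dirty rates, and thresholds are made up for illustration):

    # Pre-copy live migration sketch: copy memory while the VM runs, then
    # suspend briefly to transfer whatever was dirtied last.
    import random

    def live_migrate(total_pages=1000, stop_threshold=20, max_rounds=10):
        to_copy = set(range(total_pages))            # round 0: all memory
        for rnd in range(max_rounds):
            copied = len(to_copy)
            # While we copy, the still-running VM dirties some pages again;
            # fewer pages to copy means less time to dirty new ones.
            rate = 0.05 * copied / total_pages
            to_copy = {p for p in range(total_pages) if random.random() < rate}
            print(f"round {rnd}: copied {copied} pages, {len(to_copy)} dirtied")
            if len(to_copy) <= stop_threshold:
                break
        # Step 3: suspend the VM and copy the small remaining dirty set.
        print(f"suspend VM, copy last {len(to_copy)} pages")
        # Steps 4 and 5: commit and activate the VM on the destination host.
        print("activate VM on destination host")

    if __name__ == "__main__":
        live_migrate()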
• File system migration refers to the system management operations related to stopping
access to a file system, and then restarting these operations to access the file system
from a different computer system.
• Network migration involves transferring the data and programs from an old
network to a new network.
• Data-center automation means that huge volumes of hardware, software, and database resources in
these data centers can be allocated dynamically to millions of Internet users simultaneously.
• Server consolidation is an approach to improving the low utilization of hardware resources by reducing the
number of physical servers.
• Put differently, it is an approach to the efficient usage of computer server resources in order to reduce
the total number of servers or server locations that an organization requires (see the placement sketch below).
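• Consolidation can be viewed as a bin-packing problem; the sketch below uses a simple first-fit-decreasing heuristic (made-up workloads and capacities) to place VM loads onto as few physical servers as possible:

    # First-fit-decreasing placement of VM workloads onto physical servers.
    def consolidate(vm_loads, server_capacity):
        servers = []                                  # each entry: remaining capacity
        placement = {}
        for vm, load in sorted(vm_loads.items(), key=lambda kv: -kv[1]):
            for i, free in enumerate(servers):
                if load <= free:                      # fits on an existing server
                    servers[i] -= load
                    placement[vm] = i
                    break
            else:                                     # open a new physical server
                servers.append(server_capacity - load)
                placement[vm] = len(servers) - 1
        return placement, len(servers)

    if __name__ == "__main__":
        loads = {"web": 30, "db": 55, "cache": 20, "batch": 45, "mail": 25}
        placement, used = consolidate(loads, server_capacity=100)
        print(placement, f"-> {used} servers instead of {len(loads)}")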
a) Chatty (Interactive) Workloads: These workloads may peak at particular times and be silent at
others.
b) Non-Interactive Workloads: These don’t require any users’ efforts to make progress after they have been
submitted.
• One approach moves all servers to a centralized location. This greatly simplifies maintenance duties for IT staff, as they can
immediately access all systems without traveling; it also simplifies security and data backup.
• Physical Consolidation - An organization reduces the total number of servers by merging the workload onto
fewer servers. The new setup retains a homogeneous environment in that it is still running on a single
platform.
• A further approach runs multiple platforms and diverse applications on a single server (or cluster).
• This technique uses partitioning and virtualization to run many "virtual servers" on a single machine,
making efficient use of system resources while minimizing upkeep tasks.
• Content-addressed storage (CAS) is a method of providing fast access to fixed content (data
that is not expected to be updated) by assigning it a permanent place on disk.
• CAS makes data retrieval straightforward by storing it in such a way that an object cannot
be duplicated or modified once it has been stored; thus, its location is unambiguous.
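• A minimal content-addressed store can be sketched with a cryptographic hash as the address (a toy in-memory version; real CAS systems persist objects to disk):

    # Toy content-addressed storage: the object's address is the hash of its content.
    import hashlib

    class ContentStore:
        def __init__(self):
            self.objects = {}
        def put(self, data: bytes) -> str:
            address = hashlib.sha256(data).hexdigest()
            # Identical content always hashes to the same address, so it is
            # never duplicated; stored objects are treated as immutable.
            self.objects.setdefault(address, data)
            return address
        def get(self, address: str) -> bytes:
            return self.objects[address]

    if __name__ == "__main__":
        store = ContentStore()
        a1 = store.put(b"fixed content")
        a2 = store.put(b"fixed content")      # duplicate collapses to the same address
        print(a1 == a2, store.get(a1))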
• Private cloud platforms built on such virtualization support aim to interact with end users through Ethernet or the Internet.
• These systems also support interaction with other private clouds or public clouds over the Internet.
• An intruder is a person who attempts to gain unauthorized access to a system, to damage that system, or to
disturb data on that system.
• Intrusions are unauthorized accesses to a computer by local or network users, and intrusion detection monitors a
network or devices for suspicious activities and helps detect such intrusions. Typically, an IDS
is connected to a Security Information and Event Management (SIEM) system, which collects
outputs from various security systems, filters out malicious activities, and reports them.
• An intrusion detection system (IDS) is a system that monitors network traffic for suspicious
activity and issues alerts when such activity is discovered. While anomaly detection and
reporting is the primary function, some intrusion detection systems are capable of taking
action when malicious activity or anomalous traffic is detected, including blocking the suspicious traffic.
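• As a toy illustration (hypothetical signatures, not a real IDS), a signature-based detector can scan events for known-bad patterns and raise alerts:

    # Toy signature-based intrusion detection over a stream of log lines.
    import re

    SIGNATURES = {
        "ssh brute force": re.compile(r"Failed password .* from ([\d.]+)"),
        "sql injection":   re.compile(r"(?i)union\s+select"),
    }

    def inspect(log_lines):
        alerts = []
        for line in log_lines:
            for name, pattern in SIGNATURES.items():
                if pattern.search(line):
                    alerts.append((name, line.strip()))   # report; an IPS might
                                                          # also block the source
        return alerts

    if __name__ == "__main__":
        sample = [
            "Accepted password for alice from 10.0.0.5",
            "Failed password for root from 203.0.113.7",
            "GET /item?id=1 UNION SELECT password FROM users",
        ]
        for alert in inspect(sample):
            print("ALERT:", alert)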
• A VM-based IDS can be built in two ways: (1) the IDS runs as an independent process in each VM, or in a high-privileged VM on the VMM; or (2) the IDS is integrated into the VMM, where it has the same privilege to access the hardware as the VMM.
• The policy framework can monitor events in different guest VMs through an operating system interface library;
PTrace is used to trace activity against the security policy of the monitored host.
• Therefore, an analysis of the intrusion action is extremely important after an intrusion occurs.
• Thus, even when an operating system is compromised by an attacker, the log service should remain unaffected.
• A honeypot is a purposely defective system that simulates an operating system to cheat and monitor the
actions of an attacker.
• A honeynet is a network set up with intentional vulnerabilities; its purpose is to invite attack, so that an
attacker's activities and methods can be studied and that information used to increase network security.
• The concept of the honeypot is sometimes extended to a network of honeypots, known as a honeynet.
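• A very small low-interaction honeypot can be sketched as a fake service that accepts connections and only records what the attacker does (illustrative only; port, banner, and behavior are assumptions, and it should only ever be run on a disposable host):

    # Minimal low-interaction honeypot: pretend to be a service, log every attempt.
    import socket, datetime

    def honeypot(host="0.0.0.0", port=2222, banner=b"SSH-2.0-OpenSSH_7.4\r\n"):
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen(5)
        print(f"honeypot listening on {port}")
        while True:
            conn, addr = srv.accept()
            stamp = datetime.datetime.now().isoformat()
            conn.sendall(banner)                      # look like a real service
            data = conn.recv(1024)                    # record the attacker's input
            print(f"{stamp} connection from {addr[0]}: {data!r}")
            conn.close()                              # never grant real access

    if __name__ == "__main__":
        honeypot()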