Unit 3: Virtualization

The document discusses various levels of virtualization, including instruction set architecture, hardware, operating system, library support, and application levels, detailing how each enables multiple operating systems or applications to run on the same physical hardware. It explains the role of hypervisors and virtual machine monitors in managing resources and ensuring isolation among virtual machines. Additionally, it covers the benefits and challenges of OS-level virtualization, particularly in cloud computing environments.

Virtual Machines and Virtualization of Clusters and Data Centers
Contents taken from various sources
IMPLEMENTATION LEVELS OF
VIRTUALIZATION

● A traditional computer runs a single host OS built specially for its hardware architecture.
● After virtualization, different user applications managed by their own OS (guest OS) can run on the same hardware, independent of the host OS.
● The virtualization layer is the virtualization software, known as a hypervisor or virtual machine monitor (VMM).
Chapter 3 IMPLEMENTATION
LEVELS OF VIRTUALIZATION

Virtualization can be implemented at various operational levels.


Common virtualization layers
1. Instruction Set Architecture (ISA) Level: Changes how
instructions are processed.
2. Hardware Level: Uses a hypervisor to manage virtual machines.
3. Operating System Level: Creates multiple isolated environments
within one OS.
4. Library Support Level: Provides virtualization at the software
library level.
5. Application Level: Runs software in an isolated virtual
environment.
Instruction Set Architecture
(ISA) Level Virtualization
At the ISA level, virtualization allows a computer to run programs made for a different type of processor (CPU).
This is done using emulation, where one CPU (host) pretends to be another CPU (guest).
Emulation Methods:

1. Code Interpretation (Slower)

● The computer translates each instruction from the original CPU type (source) to the new CPU type (target)
one by one.
● This process is slow because one source instruction may require dozens or even hundreds of target instructions.

2. Dynamic Binary Translation (Faster)

● Instead of translating one instruction at a time, the system translates entire blocks of code at once.
● It can also optimize the translation, making execution much faster.
● These blocks of translated code are stored and reused, reducing processing time.
E.g. Imagine you have a program that was designed to run on an old MIPS processor, but you only have a modern x86-based
computer. Normally, that program wouldn’t work. However, with ISA emulation, the x86 computer can "pretend" to be a MIPS
processor, allowing the program to run.
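Below is a minimal, purely illustrative Python sketch of the two emulation methods, using a made-up guest instruction set (ADD/SUB on named registers); it is not how a production emulator such as QEMU is implemented.

# Conceptual sketch of ISA emulation: per-instruction interpretation versus
# dynamic binary translation with a reusable translation cache.
GUEST_BLOCK = [("ADD", "r0", 5), ("ADD", "r1", 7), ("SUB", "r0", 2)]  # hypothetical guest code

def interpret(block, regs):
    """Code interpretation: decode and execute one guest instruction at a time."""
    for op, reg, imm in block:
        regs[reg] = regs[reg] + imm if op == "ADD" else regs[reg] - imm
    return regs

TRANSLATION_CACHE = {}  # block id -> translated host code, reused on later executions

def translate_block(block_id, block):
    """Dynamic binary translation: translate a whole block once, cache it, reuse it."""
    if block_id not in TRANSLATION_CACHE:
        lines = ["def translated(regs):"]
        for op, reg, imm in block:
            sign = "+" if op == "ADD" else "-"
            lines.append(f"    regs['{reg}'] {sign}= {imm}")
        lines.append("    return regs")
        scope = {}
        exec("\n".join(lines), scope)  # build host-native code for the whole block
        TRANSLATION_CACHE[block_id] = scope["translated"]
    return TRANSLATION_CACHE[block_id]

print(interpret(GUEST_BLOCK, {"r0": 0, "r1": 0}))              # slower, one-by-one path
print(translate_block("b0", GUEST_BLOCK)({"r0": 0, "r1": 0}))  # faster, cached path

On a second execution of the same block, the translator skips retranslation entirely and reuses the cached version, which is where the speedup over pure interpretation comes from.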
Hardware Abstraction Level
Virtualization

Hardware-level virtualization creates virtual versions of computer hardware (such as processors, memory, and storage) so that multiple users or operating systems can share the same physical machine efficiently.

● A special software layer, called a hypervisor, runs directly on the physical hardware.
● The hypervisor creates multiple virtual machines (VMs), each acting like a separate computer.
● Each VM can have its own operating system (OS) and applications.

E.g. run two different operating systems (Windows & Linux) at the same time on one computer.
The hypervisor splits the computer into two virtual computers (VMs), letting both OSs run on the same machine.
Operating System (OS) Level
Virtualization

OS-level virtualization creates multiple isolated environments (called containers) on a single operating system.

These containers act like separate computers, but they share the same OS kernel.

● Instead of creating full virtual machines (VMs) with their own OS, containers share the host OS but remain
isolated from each other.
● Each container has its own files, applications, and system resources, acting like a real server but using
fewer system resources.
● This is commonly used in data centers to efficiently allocate resources among multiple users.

E.g. Imagine you have a restaurant kitchen (the host OS) where different chefs (containers) work independently.
They share the same kitchen equipment (hardware and OS), but each chef (container) prepares their own dish (application)
separately.
Library Support Level
• In library support level virtualization, programs don’t talk directly to the operating system.

• Instead, they communicate through software libraries (APIs) that act as a middle layer between the application and the system.

● Many programs use APIs (Application Programming Interfaces) to request system resources instead of making direct OS
calls.
● Virtualization happens at this API level by modifying or redirecting these API calls to create a virtual environment.
● This allows applications designed for one system to run on another without modifying the OS itself.

Example:

1. WINE – Lets Windows applications run on Linux by replacing Windows system calls with Linux equivalents.
○ Without WINE: Windows apps wouldn’t work on Linux.
○ With WINE: The app thinks it’s running on Windows, but it’s actually using Linux!
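A tiny Python sketch of the idea, with invented call names and signatures (the real Win32 and POSIX APIs that WINE translates are far richer):

# Conceptual sketch of library-level virtualization: the application invokes a
# "guest" API name, and a shim layer redirects the call to a host implementation.
import pathlib, tempfile

def host_create_file(name):
    """Host-side implementation backed by the host OS's own facilities."""
    path = pathlib.Path(tempfile.gettempdir()) / name
    path.touch()
    return f"host created {path}"

API_SHIM = {"CreateFile": host_create_file}  # guest API name -> host implementation

def guest_call(api_name, *args):
    """The application believes it is calling its native API; the shim redirects it."""
    return API_SHIM[api_name](*args)

print(guest_call("CreateFile", "demo.txt"))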
User-Application Level

Application-level virtualization creates a virtual environment where applications can run independently from the
operating system. Instead of modifying the OS, it virtualizes just the application.

• It is a technology that isolates applications from the underlying operating system, allowing them to run independently on different platforms or environments without requiring specific installations.
User-Application Level

Types of Application-Level Virtualization

1. High-Level Language (HLL) Virtual Machines

● Some applications don’t run directly on the OS but inside a virtual runtime environment.
● This allows apps to be cross-platform (work on different operating systems).
● Examples:
○ Java Virtual Machine (JVM) – Runs Java programs on any OS.
○ Microsoft .NET CLR – Runs .NET applications on Windows.

2. Application Isolation & Sandboxing

● Each app is wrapped in its own environment, separate from the OS.
● The app doesn’t interfere with other programs or system settings.
● Example:
○ LANDesk Application Virtualization – Allows apps to run without installation or modifying system files.
VMM Design Requirements

• A VMM is a special software layer that sits between the hardware and operating systems.
• It allows multiple operating systems to run simultaneously on the same physical machine by creating virtual copies of hardware components (like CPU, memory, and storage).
• Controls hardware resources (CPU, memory, storage, etc.).
• Manages multiple virtual machines (VMs) running different operating systems.
• Ensures each VM runs as if it has its own dedicated hardware.

• E.g. Think of a VMM as a hotel manager who assigns rooms (hardware resources) to guests (operating systems).
Each guest (OS) feels like they have their own private space, but they are actually sharing the hotel (hardware)
with others.
VMM Design Requirements

• Three Key Requirements of a VMM


1. Same Environment as the Real Machine
• Programs running on a VM should behave exactly as they would on a real computer.
• The only exceptions are performance changes due to resource sharing and time delays.
2. Minimal Performance Loss
• Running programs on a VM should be almost as fast as running them on a real computer.
• A good VMM minimizes slowdowns.
3. Full Control of System Resources
• The VMM decides how much CPU, memory, and storage each VM gets.
• It ensures fair resource sharing when multiple VMs run on the same hardware.
VMM Design Requirements
• A VMM is responsible for managing and controlling hardware resources when running multiple virtual machines (VMs).
• To do this, the VMM follows three key rules:
1. The VMM Allocates Resources
• The VMM decides how much CPU, memory, and storage each virtual machine gets.
• Each VM thinks it has its own hardware, but in reality, the VMM manages resource sharing.
2. Programs Cannot Access Unassigned Resources
• A virtual machine can only use the resources assigned to it.
• One VM cannot interfere with another VM or access hardware directly.
3. The VMM Can Take Back Resources
• In some cases, the VMM can reclaim hardware resources from a VM and reassign them to another.
• This happens when a VM is using more than needed, or the system needs to balance workloads.
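The three rules can be summarised in a toy Python sketch; the class, resource amounts, and VM names are illustrative only, not an actual hypervisor interface.

# Toy model of the three VMM resource rules: allocation, isolation, and reclaim.
class ToyVMM:
    def __init__(self, total_mem_mb):
        self.free_mem = total_mem_mb
        self.allocations = {}               # VM name -> memory assigned by the VMM

    def allocate(self, vm, mem_mb):         # Rule 1: the VMM decides what each VM gets
        if mem_mb > self.free_mem:
            raise RuntimeError("not enough physical memory")
        self.free_mem -= mem_mb
        self.allocations[vm] = self.allocations.get(vm, 0) + mem_mb

    def access(self, vm, mem_mb):           # Rule 2: a VM cannot use unassigned resources
        if mem_mb > self.allocations.get(vm, 0):
            raise PermissionError(f"{vm} tried to use more memory than assigned")
        return True

    def reclaim(self, vm, mem_mb):          # Rule 3: the VMM can take resources back
        taken = min(mem_mb, self.allocations.get(vm, 0))
        self.allocations[vm] -= taken
        self.free_mem += taken

vmm = ToyVMM(total_mem_mb=8192)
vmm.allocate("vm1", 2048)
print(vmm.access("vm1", 1024))              # within its allocation -> True
vmm.reclaim("vm1", 1024)
print(vmm.allocations, "free:", vmm.free_mem)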
Virtualization support at OS level

• OS Level Virtualization
• Virtualization allows multiple virtual machines (VMs) to run on a single physical computer.
• This helps cloud computing, where businesses rent computing power instead of buying expensive hardware.
• Use of OS-Level Virtualization
• Traditional virtualization (hardware-level) creates a new OS for each VM, which slows down performance and needs a lot
of memory.
• OS-level virtualization solves this problem by sharing the same OS kernel among multiple VMs.
• Each Virtual Environment (VE) or Container looks like a separate server but actually runs on the same OS.
OS Level Virtualization

• Challenges in Cloud Computing


• Changing Resource Needs: Sometimes, a task needs only one CPU, but at other times, it may need hundreds of CPUs.
Managing this is difficult.
• Slow VM Creation: Each new VM starts from scratch, which takes time. Also, storing many VM images requires a lot of
space.
• Benefits of OS-Level Virtualization
• Faster VM Startup: No need to load a new OS every time.
• Less Storage Required: Shared OS kernel avoids duplication.
• Better Performance: No need for extra hardware modifications.
OS Level Virtualization
VIRTUALIZATION STRUCTURES/TOOLS AND MECHANISMS

• Virtualization allows multiple operating systems (like Windows and Linux) to run on the same physical machine at the
same time.
• To achieve this, a virtualization layer is added between the hardware and the operating system.
• This layer creates virtual hardware that different operating systems can use.
• There are three types of virtualization architectures, based on where this virtualization layer is placed:
1. Hypervisor Architecture
2. Paravirtualization
3. Host-Based Virtualization
Hypervisor and Xen Architecture

• A hypervisor, also called a Virtual Machine Monitor (VMM), is software that enables virtualization by managing virtual
machines (VMs).
• The hypervisor sits directly on the physical hardware (like CPU, memory, and network).
• It creates and manages VMs, ensuring that each VM gets a fair share of the hardware.
• The hypervisor allows multiple OSes to run on one machine without interfering with each other.
Hypervisor and Xen Architecture

• Types of Hypervisors
1. Micro-Kernel Hypervisor
• Only handles the basic unchanging functions like memory management and CPU scheduling.
• Device drivers and other changeable components are outside the hypervisor.
• Example: Microsoft Hyper-V
• Advantage: Smaller in size, more secure, and stable.
2. Monolithic Hypervisor
• Includes all components, such as device drivers, inside the hypervisor.
• Example: VMware ESX
• Advantage: More features and better performance.
Xen Architecture
• Xen is an open-source hypervisor developed at the University of Cambridge.
• It is a micro-kernel hypervisor, meaning it has a small and lightweight core.
• Instead of handling everything, Xen separates policy (decision-making) from mechanism (execution).

• Key Components of Xen Architecture


1. Hypervisor (Xen itself)
○ Sits between the hardware and the operating system.
○ Provides a virtual environment for running multiple OSes.
○ Does not include device drivers; instead, it allows guest OSes to access physical devices directly.
2. Domain 0 (Dom0) - The Boss OS
○ A special guest OS with extra privileges.
○ The first OS that runs when Xen starts.
○ Manages hardware and allocates resources (CPU, memory, storage) to other VMs.
○ Acts like a control center for managing VMs.
3. Domain U (DomU) - Regular VMs
○ Guest operating systems that run inside Xen.
○ Do not have direct access to hardware; instead, they rely on Dom0 to handle hardware interactions.
Working of Xen

• When Xen starts, Dom0 is loaded first (even before any file system drivers).
• Dom0 controls everything – it creates and manages all other VMs (DomU).
• Guest VMs (DomU) rely on Dom0 for resource allocation (CPU, memory, devices).
• If Dom0 is hacked, the entire Xen system is at risk. That’s why security policies for Dom0 are crucial.
Types of Hardware Virtualization

• Hardware virtualization allows multiple operating systems (OS) to run on a single physical machine.
• It is classified into two categories based on how it is implemented:
• Full Virtualization
• The virtual machines (VMs) run without modifying the original (guest) operating system.
• The guest OS does not know that it is running in a virtualized environment.
• A technique called binary translation is used to manage certain sensitive instructions and make them work in the virtual
environment.
• Working:
• When an application or OS sends a command to the hardware, the hypervisor (virtualization software) translates and
manages it.
• It ensures that critical instructions are handled securely, while normal instructions run as usual.
Types of Hardware Virtualization

• Host-Based Virtualization
• There are two OS layers: the host OS (installed directly on the hardware) and the guest OS (which runs inside a virtual machine).
• A virtualization software layer sits between the host OS and the guest OS to manage resource sharing.
• Working:
• The host OS handles all hardware communication, and the virtualization software creates virtual machines.
• The guest OS runs inside these virtual machines but relies on the host OS for hardware access.
Full Virtualization (Type-1
Hypervisor)

• Full virtualization is a method that allows a guest operating system (OS) to run without any modifications on a virtual
machine (VM).
• Working:
• Noncritical instructions (safe commands that don’t affect hardware or security) run directly on the physical hardware to
ensure fast performance.
• Critical instructions (commands that control hardware or impact security) are trapped and handled by the Virtual Machine
Monitor (VMM) or hypervisor using software emulation.
Binary Translation of Guest OS
Requests Using a VMM

• Binary translation is a technique used in full virtualization where the Virtual Machine Monitor (VMM) translates certain
guest OS instructions into a format that can be executed safely on the physical hardware.
• Working:
• Ring Levels & VMM Placement
• The VMM (Virtual Machine Monitor) is placed at Ring 0, which has the highest privilege level (full control over hardware).
• The Guest OS is placed at Ring 1, meaning it does not directly control hardware but interacts through the VMM.
• User applications run at Ring 3, just like in a normal system.
• Instruction Handling
• The VMM scans all instructions from the Guest OS.
• Non-privileged instructions run directly on hardware (fast execution).
• Privileged and sensitive instructions (which can control hardware) are trapped by the VMM.
• These instructions are translated (binary translation) and emulated by the VMM to ensure safe execution.
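The control flow can be pictured with a small Python sketch; the instruction names are just illustrative labels (a real VMM works on decoded machine code, not strings).

# Minimal sketch of the scan-and-trap idea: unprivileged guest instructions run
# directly, while privileged/sensitive ones are trapped and emulated by the VMM.
PRIVILEGED_OR_SENSITIVE = {"HLT", "LGDT", "OUT"}   # illustrative privileged ops

def vmm_emulate(op, vmm_log):
    """The VMM (at the highest privilege level) emulates the instruction safely."""
    vmm_log.append(op)
    return f"{op}: trapped, translated and emulated by the VMM"

def execute_guest_instruction(op, vmm_log):
    if op in PRIVILEGED_OR_SENSITIVE:
        return vmm_emulate(op, vmm_log)
    return f"{op}: runs directly on hardware"

trapped = []
for op in ["MOV", "ADD", "OUT", "LGDT"]:
    print(execute_guest_instruction(op, trapped))
print("instructions handled by the VMM:", trapped)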
Binary Translation of Guest OS
Requests Using a VMM
Host-Based Virtualization (type 2
hypervisor)

• a way to run virtual machines (VMs) on an existing operating system (host OS) without directly controlling the
hardware.
• Instead of replacing the host OS, a virtualization layer (software like VMware Workstation, VirtualBox, or Microsoft
Hyper-V) is installed on top of it to manage virtual machines.
• Working:
• Host OS → Manages the hardware (CPU, memory, storage, etc.).
• Virtualization Layer → Installed as software on the host OS to create and manage VMs.
• Guest OS → Runs inside the virtualization software as a virtual machine.
• Applications → Can run either inside the VM or directly on the host OS.
Host-Based Virtualization

• Host OS
• The host OS (Windows, Linux, or macOS) is the main operating system that controls the hardware resources.
• The user first installs virtualization software (such as VMware Workstation or VirtualBox) on the host OS.
• The host OS continues to manage system hardware such as CPU, memory, storage, and networking.
• The Virtualization Software
• It is installed on top of the host OS.
• This hypervisor acts as an intermediary between the guest OS and the host OS.
• It creates and manages virtual machines (VMs) by allocating system resources.
Host-Based Virtualization
• The Guest OS Layer
• Each virtual machine runs its own Guest OS, which can be different from the Host OS.
• The guest OS thinks it is running on real hardware, but in reality, it is interacting with virtualized hardware managed by the
hypervisor.
• Virtualization of Hardware Components
• The hypervisor provides virtualized versions of hardware components to the guest OS, such as:
Virtual CPU (vCPU) – A portion of the host CPU is assigned to the VM.
Virtual Memory – Part of the host’s RAM is allocated to the VM.
Virtual Storage – The VM uses a file (VHD, VMDK, etc.) that acts like a hard disk.
Virtual Network Interface – The VM can connect to the internet or other VMs via a virtual switch.
• Execution of Applications on the Virtual Machine
• Applications can be installed and executed inside the VM, just like on a real computer.
• Some applications run directly on the host OS, bypassing the virtualization layer.
• Performance is affected because every instruction must go through multiple layers (Guest OS → Hypervisor → Host OS
→ Hardware).
Host-Based Virtualization

• Step-by-Step Working of Host-Based Virtualization


• User installs virtualization software (e.g., VMware Workstation) on the host OS.
• A new virtual machine (VM) is created using the hypervisor.
• The guest OS is installed on the VM (e.g., installing Ubuntu on a Windows laptop).
• The guest OS requests hardware resources (CPU, memory, storage, network).
• The hypervisor translates these requests and passes them to the host OS.
• The host OS interacts with the physical hardware and returns results to the guest OS.
• The guest OS processes user applications as if it were running on physical hardware.
Type 1 and Type 2 Hypervisors
Para-Virtualization

• A type of virtualization where the guest operating system (OS) is modified to work better with the hypervisor.
• Instead of running an unmodified OS as in full virtualization, para-virtualization
requires changes to the guest OS so that it can communicate efficiently with the
hypervisor.
• Para-virtualization (video): https://www.youtube.com/watch?v=GAuDZ1sBPjA
Para-Virtualization
Hardware support for Virtualization

• What is hardware assisted virtualization? - https://www.youtube.com/watch?v=_tYvGTCskx8


• Modern processors, like x86 (Intel & AMD), support hardware-assisted virtualization, which improves the efficiency of virtual
machines by allowing virtualization at the hardware level.
• This eliminates the need for complex binary translation (used in full virtualization) and improves performance.
• In a virtualized environment, multiple operating systems (OSes) and applications run simultaneously.
• If there is no protection mechanism, they would all directly access the hardware, which could cause conflicts, crashes, or
security issues.
• To prevent this, processors have different operating modes that control access to hardware.
Hardware support for Virtualization

Modern x86 processors (Intel & AMD) divide instructions into two categories:
• Privileged Instructions (require full control over hardware, e.g., memory management, CPU scheduling).
• Unprivileged Instructions (general-purpose tasks, e.g., running user applications).

Before Hardware-Assisted Virtualization:


• The guest OS would try to execute privileged instructions at Ring 0.
• The hypervisor (VMM) had to intercept and translate them (binary translation), which slowed down performance.
🔹 After Hardware-Assisted Virtualization:
• The CPU traps these privileged instructions automatically and hands them to the hypervisor (Ring -1).
• This improves efficiency and makes virtualization much faster and more secure.
CPU Virtualization

• CPU virtualization allows a virtual machine (VM) to act like a real computer, using the host machine's processor to run most of
its instructions.
• Most VM instructions run directly on the host CPU – This makes virtualization fast and efficient.
• Some critical instructions cannot be executed directly – These are trapped by the Virtual Machine Monitor (VMM) (or hypervisor)
to ensure system stability and security.
CPU Virtualization
• Types of critical instructions:
1. Privileged Instructions
• These only run in high-privilege (Ring 0) mode, such as controlling hardware or managing memory.
• If a VM tries to execute them, they are trapped by the VMM to prevent direct access.
• Example: Instructions for changing CPU mode or controlling interrupts.
2. Control-Sensitive Instructions
• These modify important system settings, like how resources are allocated.
• The VMM must control and validate these changes.
• Example: Changing memory access permissions.
3. Behavior-Sensitive Instructions
• These instructions behave differently depending on system conditions.
• If the VM executes these without control, the system might become unstable.
• Example: Loading or storing data in memory that may depend on CPU settings.
CPU Virtualization

Role of the VMM (Virtual Machine Monitor)


• The VMM (or hypervisor) sits between the VM and the hardware, acting as a security guard.
• It traps and handles privileged, control-sensitive, and behavior-sensitive instructions.
• This ensures stability and prevents VMs from interfering with each other.
Memory Virtualization

• Memory virtualization allows multiple Virtual Machines (VMs) to share the physical memory (RAM) of a computer while making
each VM believe it has its own dedicated memory.
In a regular computer (without virtualization):
• The Operating System (OS) manages memory using page tables (a kind of memory map).
• Virtual memory (used by applications) is translated into physical memory (RAM) by the Memory Management Unit (MMU).
• Translation Lookaside Buffer (TLB) helps speed up memory access by caching these mappings.
• If a program requests data from memory, the OS quickly translates the virtual address (used by software) to a physical address
(actual location in RAM).
Memory Virtualization

When running multiple Virtual Machines (VMs), each guest OS believes it controls the memory.
• But in reality, the hypervisor (VMM) controls the actual machine memory.
• Since VMs don’t directly access real memory, two-step mapping is needed:
Virtual Memory → Guest Physical Memory (handled by the guest OS).
Guest Physical Memory → Actual Machine Memory (handled by the VMM).
The VMM (hypervisor) ensures that each VM only accesses its assigned memory.
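A toy Python sketch of the two-step mapping, with invented page tables and a 4 KB page size:

# Toy sketch of the two-step address mapping in memory virtualization.
# Page numbers and table contents are invented for illustration only.
PAGE_SIZE = 4096

guest_page_table = {0: 10, 1: 11}     # guest virtual page  -> guest "physical" page (guest OS)
vmm_page_table   = {10: 70, 11: 71}   # guest physical page -> real machine page    (VMM)

def translate(guest_virtual_addr):
    vpn, offset = divmod(guest_virtual_addr, PAGE_SIZE)
    guest_physical_page = guest_page_table[vpn]            # step 1: guest OS mapping
    machine_page = vmm_page_table[guest_physical_page]     # step 2: VMM mapping
    return machine_page * PAGE_SIZE + offset

print(hex(translate(0x1234)))   # guest virtual address -> actual machine address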
I/O Virtualization

• I/O (Input/Output) virtualization allows multiple virtual machines (VMs) to share physical I/O devices like network cards, storage
devices, and USB ports.
• Each VM thinks it has its own devices, but in reality, they all share the same physical hardware.
• A single computer may have multiple virtual machines (VMs) running at the same time.
• If each VM needed its own physical device (keyboard, network card, hard drive, etc.), we would need multiple hardware copies,
which is not practical.
• I/O virtualization allows VMs to share the same physical devices safely and efficiently without knowing they are sharing.
I/O Virtualization

• There are three types:
• Full Device Emulation (Software-Based)
• The VMM (hypervisor) pretends to be a real device by emulating (simulating) it in software.
• The guest OS thinks it is interacting with a real physical device, but in reality, it is talking to a virtual device created by the
hypervisor.
• The hypervisor then translates the request and sends it to the actual hardware.
I/O Virtualization

• Para-Virtualized I/O (Optimized Software Approach)


• Instead of fully emulating a device, the guest OS is modified to work better with the hypervisor by using a special driver.
• There are two special drivers:
• Frontend driver (inside the guest OS) → Collects I/O requests.
• Backend driver (inside the hypervisor) → Communicates with the actual hardware.
• Both drivers share memory to exchange data instead of relying on slow device emulation.
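A minimal Python sketch of the split-driver idea, where a plain queue stands in for the shared-memory ring and all request fields are illustrative:

# Sketch of para-virtualized I/O: the frontend driver in the guest queues requests
# into shared memory; the backend driver in the hypervisor domain drains them and
# talks to the real hardware.
from collections import deque

shared_ring = deque()                      # stands in for a shared-memory I/O ring

def frontend_submit(request):
    """Frontend driver (inside the guest OS): collect and queue an I/O request."""
    shared_ring.append(request)

def backend_process():
    """Backend driver (inside the hypervisor/Dom0): complete requests on real devices."""
    results = []
    while shared_ring:
        req = shared_ring.popleft()
        results.append(f"completed {req['op']} on {req['device']} at sector {req['sector']}")
    return results

frontend_submit({"op": "read", "device": "vda", "sector": 2048})
frontend_submit({"op": "write", "device": "vda", "sector": 4096})
print(backend_process())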
I/O Virtualization

• Direct I/O Access (Near-Native Performance)


• The VM directly communicates with the physical device instead of using the hypervisor.
• The VM gets exclusive control over a physical device, like a network card or GPU.
• This method gives almost the same speed as using a physical machine.
VIRTUAL CLUSTERS AND RESOURCE MANAGEMENT - Physical vs Virtual Clusters

• Physical Cluster: a group of physical servers (computers) that are connected through a physical network, such as a Local Area Network (LAN). These servers work together to perform computing tasks.
• Virtual Cluster: a group of virtual machines (VMs) that run on physical servers. These VMs are connected using a virtual network, which allows them to communicate as if they were in the same physical location, even if they are distributed across different physical clusters.
Physical vs Virtual Clusters
• Key Properties of Virtual Clusters:
1) Virtual Machines (VMs) can be created on physical machines
• A virtual cluster consists of either physical machines or VMs.
• Multiple VMs can run on a single physical machine.
• Each VM can have a different operating system (OS).
2) Guest OS vs. Host OS
• Each VM has its own OS (guest OS), which can be different from the OS of the physical machine (host OS).
• The host OS controls the hardware resources and manages the VMs.
3) Dynamic Scaling
• Virtual clusters can expand or shrink depending on demand.
• This is similar to how a peer-to-peer (P2P) network grows dynamically.
Physical vs Virtual Clusters

4) Failure Management
• If a physical machine fails, the VMs on it stop working.
• However, a VM failure does not affect the physical machine it runs on.
5) Managing Virtual Clusters
Since many VMs run on different physical machines, managing them efficiently is essential. Key management tasks include:
• Deployment and monitoring of virtual clusters
• Resource scheduling and load balancing
• Server consolidation (optimizing the use of physical servers)
• Ensuring fault tolerance (handling failures without downtime)
Physical vs Virtual Clusters - Virtual Clusters Based on Application Partitioning
Physical vs Virtual Clusters - Fast Deployment and Effective Scheduling

When running applications on cloud or virtual environments, we need a system that can quickly deploy software and efficiently
manage resources.
This ensures that virtual machines (VMs) are used optimally, improving performance and reducing waste.
1. Fast Deployment:
Deployment refers to two main tasks:
• Setting up software (OS, libraries, applications) on physical machines inside a cluster as fast as possible.
• Switching runtime environments quickly from one user's virtual cluster to another.
• Consider multiple users needing virtual machines (VMs) for their applications:
• If a user finishes using their VM, the system should immediately shut it down or suspend it to free up resources for other users.
• This ensures that computing resources are not wasted and can be reused efficiently.
Physical vs Virtual Clusters - Fast Deployment and Effective Scheduling

2. Green Computing and Energy Efficiency


• Green computing means using computing resources efficiently to reduce power consumption and environmental impact.
• Traditional energy-saving techniques focus only on individual computers, not the entire cluster.
• Some energy-saving methods only work on specific types of computers (homogeneous workstations).
• Live migration (moving VMs from one machine to another) helps save energy, but it has overhead (extra processing costs).
• If too many live migrations happen, it can slow down the system and affect performance.
• The challenge is to create smart migration strategies that save power without reducing the performance of the system.
Physical vs Virtual Clusters - Fast Deployment and Effective Scheduling
3. Load Balancing in Virtual Clusters
• Virtualization also helps with load balancing—distributing workload evenly across all servers.
• The system monitors the load (workload) on each VM and adjusts resources accordingly.
• User login frequency and other data are used to determine if more resources are needed.
Auto scale-up & scale-down:
• If a VM is overloaded, more resources are allocated to handle the demand.
• If a VM has low usage, unnecessary resources are freed to be used elsewhere.
• The system tries to place VMs on the best physical machines for optimal performance.
Dynamic Load Balancing Using VM Migration
If some servers are too busy while others are idle, the system can:
• Move VMs from overloaded machines to less busy machines using live migration.
• This helps improve system performance and reduces response time for applications.
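A toy Python sketch of this rebalancing decision; the hosts, load figures, and threshold are invented for illustration.

# Toy sketch of load-based live migration: move one VM from the busiest host to
# the least-loaded one when the imbalance crosses a threshold.
host_load = {"hostA": 0.92, "hostB": 0.35, "hostC": 0.50}   # CPU utilisation per host
vm_on_host = {"hostA": ["vm1", "vm2", "vm3"], "hostB": ["vm4"], "hostC": ["vm5"]}

def rebalance(threshold=0.4):
    busiest = max(host_load, key=host_load.get)
    idlest = min(host_load, key=host_load.get)
    if host_load[busiest] - host_load[idlest] > threshold and vm_on_host[busiest]:
        vm = vm_on_host[busiest].pop()            # candidate for live migration
        vm_on_host[idlest].append(vm)
        return f"live-migrate {vm}: {busiest} -> {idlest}"
    return "load is balanced; no migration needed"

print(rebalance())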
Physical vs Virtual Clusters - High-Performance Virtual Storage

When using virtual machines (VMs) in a cloud or cluster environment, storage plays a crucial role.
A good virtual storage system ensures that VMs are deployed quickly, use less disk space, and can be managed efficiently.
Virtual storage refers to the way disk space is managed in a virtualized system.
• A template VM is a preconfigured virtual machine image that can be duplicated and distributed across different physical servers.
• This saves time because instead of installing everything from scratch, users can just copy an existing template.
• Efficient storage management ensures that the same data is not stored multiple times unnecessarily.
Physical vs Virtual Clusters - High-Performance Virtual Storage

Virtual Storage to Reduce Redundant Data


A distributed file system is used in virtual clusters, meaning data is stored across multiple servers.
• Sometimes, multiple VMs use the same files and software.
• Instead of storing duplicate copies of the same data, the system uses hash values to check if a file block already exists.
• If the data block exists, it reuses the existing one instead of creating a new copy.
• This reduces storage space and improves performance.
Each user has a profile that keeps track of which data blocks belong to their VMs.
• If a user modifies a file, a new block is created and recorded in their profile.
• This means only newly created or modified data takes up extra space.
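A small Python sketch of hash-based block deduplication; block contents and user names are invented.

# Sketch of hash-based block deduplication for virtual cluster storage:
# identical blocks are stored once and shared; only new or modified blocks
# consume extra space, tracked per user profile.
import hashlib

block_store = {}       # hash -> block data, shared across all VMs
user_profiles = {}     # user -> list of block hashes making up that user's VM image

def store_block(user, data: bytes):
    digest = hashlib.sha256(data).hexdigest()
    if digest not in block_store:          # reuse an existing block if it already exists
        block_store[digest] = data
    user_profiles.setdefault(user, []).append(digest)

store_block("alice", b"common OS block")
store_block("bob",   b"common OS block")          # deduplicated: reuses the existing block
store_block("bob",   b"bob's modified file block")
print(len(block_store), "unique blocks stored for", len(user_profiles), "users")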
High-Performance Virtual Storage
Steps to Deploy a Group of VMs:
Deploying multiple VMs onto a cluster involves four main steps:
1. Preparing the disk image
• A disk image is a file that contains an entire operating system and applications.
• Instead of creating a new VM from scratch, a template is used.
2. Configuring the VMs
• Each VM is given a name, network settings, CPU, and memory allocation.
• The configurations are recorded in a file.
3. Choosing the destination nodes
• The system must decide which physical server will run each VM.
• The goal is to balance workloads across servers.
4. Executing the deployment command
• The VM is finally created and launched on the selected physical server.
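The four steps can be strung together in a short Python sketch; the template name, VM configurations, and host capacities are hypothetical.

# Illustrative walk-through of the four VM deployment steps.
template_image = "ubuntu-template.qcow2"           # step 1: prepared disk image (template)

vm_configs = [                                     # step 2: per-VM configuration records
    {"name": "vm1", "cpus": 2, "mem_mb": 2048, "image": template_image},
    {"name": "vm2", "cpus": 4, "mem_mb": 4096, "image": template_image},
]

hosts = {"hostA": 8, "hostB": 16}                  # free CPU cores per physical host

def choose_host(cfg):                              # step 3: pick the node with most free cores
    name = max(hosts, key=hosts.get)
    hosts[name] -= cfg["cpus"]
    return name

for cfg in vm_configs:                             # step 4: issue the deployment command
    host = choose_host(cfg)
    print(f"deploy {cfg['name']} ({cfg['cpus']} vCPU, {cfg['mem_mb']} MB) on {host}")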
Live VM Migration Steps and Performance Effects

1. VM Running on Host A
● The VM is actively running on Host A and providing services.

2. Stage 0: Pre-Migration

● The system selects an alternative host (Host B) for migration.


● Required resources (CPU, memory, storage) are prepared.
● Block devices (disk storage) are mirrored.

3. Stage 1: Reservation
● Request issued to migrate an OS from host A to host B
● A container is initialized on the target host (Host B) to receive the migrating VM.

4. Stage 2: Iterative Pre-Copy (Overhead Due to Copying)

● The VM’s memory is copied in multiple rounds.


● In the first round, all memory pages are transferred.
● In subsequent rounds, only the “dirty” (changed) pages are copied to reduce transfer time.
● This process continues until the remaining memory to be copied is small enough for a final quick transfer.
Live VM Migration Steps and Performance Effects
5. Stage 3: Stop and Copy (Downtime)

● The VM is temporarily stopped on Host A.


● The last portion of memory, along with CPU and network states, is transferred.
● The system updates the network (using ARP) to redirect traffic to Host B.
● This downtime should be minimal to avoid noticeable service disruption.

6. Stage 4: Commitment

● The VM state on Host A is released.


● Host B officially takes over the VM.

7. Stage 5: Activation

● The VM starts running on Host B.


● It reconnects to storage, network, and other devices.
● Normal operations resume.

8. VM Running on Host B

● The migration is complete, and the VM is fully operational on the new host.
VM Migration of Memory, Files, and Network Resources
1. Memory Migration

Memory migration is one of the most critical parts of VM migration. Since a VM uses RAM (memory) to store its running state, all of
that data needs to be moved efficiently to the new server.

Common Techniques for Memory Migration

a) Internet Suspend-Resume (ISR)

● This technique takes advantage of temporal locality, meaning that most of the memory contents before and after migration
remain the same.
● Instead of transferring everything, only the changes in memory are sent.
● It organizes files into a tree structure, so only the modified parts need to be copied.
● Downside: It causes longer downtime since the VM must be completely stopped before being resumed.
VM Migration of Memory, Files, and Network Resources
b) Precopy Approach (Used in Live Migration)

● Step 1: All memory pages are copied from the source to the destination while the VM is still running.
● Step 2: Only the changed (dirty) pages are copied in multiple rounds to minimize data transfer.
● Step 3: The VM is paused briefly, and the final memory changes are copied.
● Step 4: The VM resumes on the new host.
● Advantage: The VM remains functional for most of the migration.
● Disadvantage: It consumes high network bandwidth, which may slow down other applications.

c) Postcopy Approach

● The VM first moves to the new host without its memory.


● When the VM starts on the new host, it fetches memory pages from the old host on demand.
● Advantage: The total migration time is lower.
● Disadvantage: If the network is slow, the VM may crash because it cannot fetch memory pages fast enough.
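A minimal Python simulation of the precopy rounds described in (b) above; the page count, dirtying rate, and stop threshold are invented purely to show how the dirty set shrinks round by round.

# Simulation of iterative precopy: copy everything once, then repeatedly copy only
# the pages dirtied since the last round, and stop-and-copy when few pages remain.
import random

TOTAL_PAGES = 1000
STOP_THRESHOLD = 20        # switch to stop-and-copy when this few pages remain dirty

def pages_dirtied_during(copied):
    """Pretend the running VM re-dirties about 10% of the pages just copied."""
    return set(random.sample(sorted(copied), k=max(1, len(copied) // 10)))

dirty = set(range(TOTAL_PAGES))            # round 1: every memory page is transferred
round_no = 0
while len(dirty) > STOP_THRESHOLD:
    round_no += 1
    transferred = dirty
    dirty = pages_dirtied_during(transferred)
    print(f"round {round_no}: copied {len(transferred)} pages, {len(dirty)} dirtied again")

print(f"stop-and-copy: VM paused, final {len(dirty)} pages plus CPU state transferred")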
VM Migration of Memory, Files, and Network Resources
d) Memory Compression

● Instead of sending full memory pages, the system compresses them before transfer.
● This reduces the amount of data that needs to be moved.
● Advantage: Less network usage, faster migration.
● Disadvantage: Requires extra CPU power for compression.
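As a small illustration, zlib can stand in for whatever compressor a real migration engine would use:

# Sketch of compressing a memory page before transfer to reduce network traffic.
import zlib

page = bytes(4096)                           # a zero-filled 4 KB page compresses very well
compressed = zlib.compress(page)
print(f"page on the wire: {len(page)} B -> {len(compressed)} B")
assert zlib.decompress(compressed) == page   # the destination restores the original page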
File System Migration

A VM’s files include its operating system, applications, and user data. Moving these files efficiently is essential to avoid long
downtimes.

Common Techniques for File Migration

a) Virtual Disk Migration

● Each VM has a virtual disk that stores all its data.


● The system moves the entire disk along with the VM.
● Problem: This takes a lot of time and bandwidth, making it impractical for large files.

b) Network-Accessible Global File System

● Instead of moving files, all machines in the cluster share a common file system (like a cloud drive).
● When the VM migrates, it simply accesses its files over the network.
● Advantage: No need to copy files, making migration faster.
● Disadvantage: Requires a high-speed network for seamless access.
File System Migration

c) Smart Copying

● Uses spatial locality, meaning that files at the new location are almost the same as the files at the old location.
● Only the differences between the two file systems are copied.
● Advantage: Saves bandwidth and time.

d) Proactive State Transfer

● The system predicts where the VM will move in advance and copies necessary files before the actual migration.
● Advantage: Reduces downtime, as most files are already in place.
Network Migration

When a VM moves, its network connections must also move so that users and other systems can continue communicating with it.

Common Techniques for Network Migration

a) Virtual IP Addresses

● Each VM is assigned a virtual IP that stays the same, no matter where the VM is running.
● The system updates the network so that traffic reaches the VM’s new location.
● Advantage: No need for changes in applications or user configurations.

b) Address Resolution Protocol (ARP) Update

● When the VM moves, the new host sends an ARP reply to update all devices on the network about its new location.
● Advantage: Fast and automatic redirection.
● Disadvantage: Some data packets may be lost during the transition.
Network Migration

c) Keeping the Same MAC Address

● The migrating VM keeps its MAC address (network card identity).


● The network switch detects the new location and updates routing information.
● Advantage: No changes are needed in the network settings.
