Levels of Virtualization in Cloud Computing
Virtualization Implementation Levels (Detailed Explanation)

1. Instruction Set Architecture (ISA) Level

 Here virtualization happens at the processor instruction level.

 One processor architecture (say Intel) can emulate another (say ARM).

 Basic method = code interpretation → each instruction is converted step by step.

o Problem: Very slow (1 source instruction → 100 native instructions).

 Better method = Dynamic Binary Translation

o Instead of translating line by line, it translates blocks of instructions (traces/superblocks).

o This makes execution faster.

• 👉 Example: Running old 32-bit software on modern 64-bit processors using emulation.
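The interpretation-vs-translation trade-off above can be sketched in Python. This is a toy, assuming a made-up three-instruction stack "ISA" (not any real instruction set): the interpreter decodes and dispatches every instruction each time, while the dynamic binary translator compiles a whole block once into a native (here, Python) function and reuses it.

```python
# Toy illustration of ISA-level emulation: interpretation vs. dynamic
# binary translation. The 3-instruction "ISA" is an invented example.

PROGRAM = [("PUSH", 2), ("PUSH", 3), ("ADD", None)] * 1000

def interpret(program):
    """Code interpretation: decode and dispatch one instruction at a time."""
    stack = []
    for op, arg in program:
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
    return stack

def translate_block(block):
    """Dynamic binary translation: compile a whole block (trace) of guest
    instructions into one host function, once, instead of re-decoding."""
    lines = ["def run(stack):"]
    for op, arg in block:
        if op == "PUSH":
            lines.append(f"    stack.append({arg})")
        elif op == "ADD":
            lines.append("    b, a = stack.pop(), stack.pop()")
            lines.append("    stack.append(a + b)")
    lines.append("    return stack")
    ns = {}
    exec("\n".join(lines), ns)
    return ns["run"]

compiled = translate_block(PROGRAM)        # translated once, then cached
assert interpret(PROGRAM) == compiled([])  # same result, far fewer decode steps
```

The translated function produces the same result with no per-instruction decode overhead, which is why real translators cache traces/superblocks.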

2. Hardware Abstraction Level

 Virtualization happens directly on hardware using a Hypervisor (VMM).

 Hypervisor creates virtual hardware (CPU, memory, I/O) for each VM.

 Each VM thinks → “I have my own complete machine.”

 👉 Example: VMware, KVM, Xen Hypervisor.

• Advantage: Improves hardware utilization (many users share the same physical server).

3. Operating System (OS) Level

 OS kernel itself provides virtualization.

 It creates containers (isolated user environments).

 Containers share the same OS but behave like separate servers.

 👉 Example: Docker, LXC (Linux Containers).

• Use Case: Hosting providers → one server → many customers (who mutually distrust each other).
4. Library Support Level

 Virtualization done at library/API layer.

• Apps don’t talk to the OS directly; instead they use virtual libraries.

 Example 1: WINE → Runs Windows apps on Linux by replacing library calls.

 Example 2: vCUDA → Allows VMs to use GPU acceleration.

• 👉 In simple terms: like giving the app “fake” library functions that redirect its calls to another OS.

5. User / Application Level

 Virtualization done at application process level.

 Apps run in a virtual environment created by a software runtime.

 Example 1: Java Virtual Machine (JVM) → Runs Java bytecode anywhere.

 Example 2: .NET CLR → Runs .NET apps.

 Other forms: Application sandboxing, isolation, streaming.

o App is “wrapped” → doesn’t touch host OS directly → easy to install/remove.

• 👉 In simple terms: like a plastic cover around an app → safe, isolated, portable.

🔑 Quick Memory Aid

 ISA Level → Instructions emulated/translated.

 Hardware Level → Hypervisor creates fake hardware.

 OS Level → Containers on one kernel.

 Library Level → Fake libraries (WINE, vCUDA).

 User Level → JVM, CLR, sandbox apps.

✨ Exam Tip:

 Write short intro line (“Virtualization can be implemented at 5 levels...”)

 Then explain 5 points (3–4 lines each) with examples.

 Always draw the diagram (stack from ISA → Hardware → OS → Library → User).
🌐 Role of Virtual Cluster

• Normally, a physical cluster means multiple real computers interconnected to work as one.

• A virtual cluster means that instead of real machines, we use virtual machines (VMs) connected through a network.

• In simple terms: think of it like a college project team. Instead of real students, we use “avatars” of students online, but they still work together as a team.

👉 Benefits (Roles):

• Uses hardware efficiently.

• Supports parallel computing (tasks are split and completed faster).

• Easy to create, manage, and reconfigure (no need to move physical machines around).

• Cloud providers like AWS / Azure build big clusters using only VMs.

VMM (Virtual Machine Manager / Hypervisor)

 VMM = software layer sitting on top of hardware.

 Its job: create & manage VMs.

• In simple terms: like a traffic police officer.

o One road (hardware) carries many vehicles (VMs).

o The traffic police (VMM) allocates space, prevents accidents, and makes sure every vehicle gets a fair turn.

👉 Roles:

 Creates multiple VMs on one hardware.

 Allocates resources (CPU, memory, disk) fairly.

• Ensures isolation → even if one VM crashes, the others stay safe.

💻 VM (Virtual Machine)

 VM = software computer that acts like a real computer.

 Each VM has its own OS + apps.


• In simple terms: like each student in a hostel having their own room → independent life, but all sharing the same building (hardware).

👉 Benefits:

 Run multiple OS on same system (Windows + Linux together).

 Cost effective → no need for separate hardware.

• Security → each VM is isolated from the others.

✅ So final picture:

 Virtual Cluster = many VMs connected like a team.

 VMM = manager/traffic police controlling VMs.

 VM = virtual computers (rooms for students).

🌐 Virtualization of CPU, Memory, and I/O Devices

1. CPU Virtualization

 Idea: Multiple VMs share one CPU.

 How? → VMM (hypervisor) gives each VM a time slice of CPU.

 Normal instructions (unprivileged) → run directly on hardware (fast).

• Special instructions (privileged) → trapped and handled by the VMM (to prevent crashes).

• Hardware-assisted virtualization (Intel VT, AMD-V) makes this easier → it adds an extra mode (Ring -1) so the hypervisor can run safely below the OS.

 CPU virtualization in cloud computing allows a single physical CPU to host multiple
virtual machines (VMs), each with its own operating system and applications. This
technology is fundamental to cloud computing, enabling efficient resource
utilization, scalability, and cost-effectiveness by allowing multiple users to share the
same hardware.

👉 Analogy: Think of the CPU like a cricket pitch 🏏. Many teams (VMs) want to bat. The umpire (VMM) gives turns (time slices). Normal play is direct, but if someone breaks the rules, the umpire intervenes.
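The trap-and-emulate idea described above can be sketched as follows. This is a minimal illustration, not a real hypervisor: the `VM` and `VMM` classes, the instruction names, and the `Trap` exception are all invented for the example.

```python
# Minimal trap-and-emulate sketch. Unprivileged instructions run
# "directly"; privileged ones trap and are handled by the VMM so a
# misbehaving guest cannot take down the shared hardware.

class Trap(Exception):
    """Raised when a guest issues a privileged instruction."""

class VM:
    def __init__(self, name):
        self.name = name
        self.acc = 0            # toy accumulator register

    def execute(self, op, arg=None):
        if op == "ADD":         # unprivileged: runs at full speed
            self.acc += arg
        elif op == "HLT":       # privileged: would stop the real CPU
            raise Trap(op)

class VMM:
    """Gives each VM its turn and emulates trapped instructions."""
    def __init__(self, vms):
        self.vms = vms
        self.halted = set()

    def run(self, schedule):
        for vm, op, arg in schedule:   # schedule = CPU time slices
            try:
                vm.execute(op, arg)
            except Trap:
                # Emulate: halt only this VM, never the shared hardware.
                self.halted.add(vm.name)

vm1, vm2 = VM("vm1"), VM("vm2")
vmm = VMM([vm1, vm2])
vmm.run([(vm1, "ADD", 5), (vm2, "ADD", 7), (vm1, "HLT", None)])
assert vmm.halted == {"vm1"} and vm2.acc == 7  # vm2 unaffected by vm1's trap
```

The key point the sketch shows: the common (unprivileged) path has no VMM overhead, while the dangerous path is intercepted, which is exactly what Intel VT / AMD-V accelerate in hardware.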

🧠 2. Memory Virtualization

 Each VM feels like it has its own private memory, but actually they share the same
RAM.

 Mapping is done in two stages:

1. Guest OS: Virtual memory → Guest physical memory.

2. VMM: Guest physical memory → Actual machine memory.

 To speed up, VMM uses Shadow Page Tables + hardware MMU & TLB.

 Ensures isolation → one VM can’t see or corrupt another’s memory.

👉 Analogy: Like hostel rooms. Each student (VM) feels they have their own cupboard
(memory), but the warden (VMM) actually assigns space from the big storeroom (RAM).
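The two-stage mapping above can be sketched with plain dictionaries. All page numbers here are illustrative assumptions; real page tables are hardware structures, but the composition idea is the same.

```python
# Two-stage address translation sketch (illustrative page numbers).
# Stage 1 (guest OS): virtual page        -> guest-physical page.
# Stage 2 (VMM):      guest-physical page -> machine page.

guest_page_table = {0: 10, 1: 11}        # per-VM, managed by the guest OS
vmm_page_table   = {10: 200, 11: 201}    # per-VM, managed by the VMM

def translate(vpage):
    """Walk both stages: virtual -> guest-physical -> machine."""
    return vmm_page_table[guest_page_table[vpage]]

# A shadow page table caches the composed mapping so the hardware
# (MMU/TLB) can translate virtual -> machine in a single step.
shadow_page_table = {v: vmm_page_table[g] for v, g in guest_page_table.items()}

assert translate(0) == 200
assert shadow_page_table == {0: 200, 1: 201}
```

Isolation falls out of this design: since each VM gets its own `vmm_page_table`, no guest mapping can ever reach another VM's machine pages.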

🔌 3. I/O Virtualization

 VMs need devices (disk, network, USB). Only limited real devices exist, so they are
virtualized.

 Three main methods:

1. Full device emulation – VMM completely simulates the device in software (slow but compatible).
2. Para-virtualization – Split driver model (frontend in VM, backend in host).
Faster, but higher CPU usage.

3. Direct I/O – VM talks to device directly. Very fast, almost native performance,
but less flexible.

• SV-IO (Self-Virtualized I/O): Uses multicore processors to create Virtual Interfaces (VIFs) for each VM (like a virtual NIC or virtual disk).

👉 Analogy:

 Full emulation = Teacher writes notes for every student individually.

 Para-virtualization = Assistant + teacher share work.

 Direct I/O = Students directly access library without middleman.
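The para-virtualized split-driver model can be sketched as a shared request ring. The names and the queue protocol here are illustrative assumptions, loosely modeled on the frontend-in-guest / backend-in-host split described above.

```python
# Split-driver (para-virtualization) sketch: the frontend driver in the
# guest queues I/O requests on a shared ring; the backend driver in the
# host drains the ring and touches the real device.

from collections import deque

ring = deque()  # shared ring buffer between guest and host

def frontend_write(block, data):
    """Guest-side driver: enqueue a request instead of touching hardware."""
    ring.append(("write", block, data))

def backend_service(disk):
    """Host-side driver: drain the ring and perform real device access."""
    while ring:
        op, block, data = ring.popleft()
        if op == "write":
            disk[block] = data

disk = {}                        # stands in for the physical disk
frontend_write(0, b"hello")
frontend_write(1, b"world")
backend_service(disk)
assert disk == {0: b"hello", 1: b"world"}
```

Batching requests on the ring is what makes this faster than full emulation (no per-access trap), at the cost of the extra CPU work of running both driver halves.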

System Model for Distributed & Cloud Computing

1️⃣ Idea:

 Distributed & cloud systems = multiple computers (nodes) working together.

 Distributed System: Nodes coordinate to share tasks.

 Cloud System: Nodes provide on-demand, scalable, virtualized services.

2️⃣ Components with Example:

 Clients/Users: Request services.

o Example: Your laptop accessing Google Drive.

 Servers/Nodes: Do processing.
o Example: AWS EC2 instances running your app.

 Middleware: Connects client & servers.

o Example: APIs, load balancers that manage traffic.

 Network: Links everything.

o Example: Internet or LAN.

 Resources: CPU, memory, storage, apps.

o Example: S3 storage, virtual machines, databases.

3️⃣ Key Features:

 Transparency: User doesn’t know where resources are physically.

o Example: Uploading file to Google Drive – you don’t see which server stores
it.

 Scalability: Nodes can be added or removed as needed.

o Example: Netflix adds more servers during high traffic.

 Fault Tolerance: If one node fails, others continue.

o Example: If one AWS server goes down, others serve your request.

 Resource Sharing: Efficient sharing of CPU, memory, storage.

o Example: Multiple users using same database server without conflict.

4️⃣ Difference Between Distributed & Cloud:

 Distributed = fixed nodes, mostly computation, manual scaling

• Cloud = on-demand, computation + storage + SaaS, automatic scaling, virtualization mandatory

💡 Exam Tip:

 Write points like above, give 1-line example for each.

 Optional: draw tiny diagram if you have time.

Compare & Contrast: Centralized vs Decentralized vs Distributed Systems

1️⃣ Centralized System:

 Definition: One main server/node controls everything.


 Example: Bank ATM network where central server processes all transactions.

 Features:

o Single point of control → easy to manage

o Single point of failure → if server fails, whole system stops

o Easier security and updates

2️⃣ Decentralized System:

 Definition: Multiple servers, each controlling its own part, but not fully connected.

 Example: University with multiple departments, each having its own server for
student records.

 Features:

o No single point of failure (if one server fails, others continue)

o Nodes work independently

o Harder to manage globally

3️⃣ Distributed System:

 Definition: Multiple interconnected nodes working together as a single system.

 Example: Google Search Engine – many servers process queries together.

 Features:

o Fault-tolerant → if one node fails, others take over

o Scalable → can add more nodes

o Resources are shared efficiently

Quick Comparison Table (exam-style, easy to write)

Feature     | Centralized                 | Decentralized              | Distributed
Control     | Single node                 | Multiple independent nodes | Multiple interconnected nodes
Failure     | Single point → system stops | Only part fails            | Fault-tolerant, system continues
Management  | Easy                        | Medium                     | Complex
Example     | Bank ATM network            | University departments     | Google Search, cloud services

💡 Exam Tip:

 Write definition + 1 example + 2 key features for each.

 Table is optional, but teachers love it for comparisons.

9.a) Virtualization (Full Explanation)

Definition:

• Virtualization means creating virtual versions of real physical hardware.

• Idea: One physical machine → multiple virtual machines (VMs) or resources.

• Example: Running 3 VMs on one big server – Windows, Linux, Ubuntu – at the same time.

Why we use it:

• Cost-cutting → less hardware used

• Easy management → software updates in one place

• Isolation → if one VM crashes, the others stay safe

Types of Virtualization with Simple Example:

1️⃣ Hardware Virtualization:

• Dividing physical hardware to run multiple VMs.

• Example: Running a Windows VM on a Linux server using VMware.

2️⃣ OS-level Virtualization / Containers:

• Running multiple apps in isolated containers on a single OS kernel.

• Example: Running a web app inside a Docker container.

3️⃣ Storage Virtualization:


• Presenting multiple storage devices as a single storage system.

• Example: SAN (Storage Area Network) – 4 disks → 1 single storage pool.

4️⃣ Network Virtualization:

• Creating network resources virtually.

• Example: VLAN – one physical network → 3 virtual networks.

5️⃣ Desktop Virtualization:

• Running user desktops on a central server.

• Example: VDI – accessing your office desktop from a home laptop.

9.b) Scalable Computing & Fault Tolerance / Reliability (Full Explanation)

Step 1: Scalable Computing

• In cloud computing, resources are added or removed on demand.

• Example: Netflix automatically adds servers during peak time → smooth streaming for users.

Step 2: Fault Tolerance

• Multiple servers/nodes run together. If one node fails, the workload shifts to the other servers.

• Example: AWS EC2 – if one instance crashes, auto-scaling replaces it.

Step 3: Reliability

• Redundancy → data/services are stored across multiple servers/data centers.

• Even if one fails, the service continues.

• Example: Google Drive stores files in multiple data centers → even if one center fails, your files stay safe.
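The scale-out/scale-in decision behind auto-scaling can be sketched as a small policy function. The capacity figure and the server bounds are illustrative assumptions, not any provider's real policy.

```python
# Minimal auto-scaling sketch: pick a server count from current load.
# capacity_per_server, min_servers, max_servers are invented numbers.

import math

def desired_servers(requests_per_sec, capacity_per_server=100,
                    min_servers=2, max_servers=20):
    """Scale out under load, scale in when idle, within fixed bounds."""
    needed = math.ceil(requests_per_sec / capacity_per_server)
    # The floor keeps redundancy (fault tolerance even when idle);
    # the ceiling is a cost guardrail.
    return max(min_servers, min(max_servers, needed))

assert desired_servers(950) == 10      # peak traffic -> scale out
assert desired_servers(50) == 2        # quiet period -> floor keeps 2 up
assert desired_servers(10_000) == 20   # guardrail caps the fleet
```

Note how the minimum of 2 servers is itself a fault-tolerance choice: even at zero load, one node can fail and the service continues.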

Step 4: Why Important

 Users always get service without interruption

 Businesses don’t lose money or data during failures

 Cloud providers guarantee high availability

Exam Writing Tip:

 Write definition first, then how it works step by step, then examples.
 Always mention fault tolerance + reliability separately.

 Examples = marks guaranteed.

HTC vs HPC in the Age of Internet Computing

HTC (High Throughput Computing):

• In the internet age, many computers are connected to process many small tasks.

• Focus = total work done over time

• Example: Folding@Home – millions of volunteers’ computers run protein folding simulations

• Simple way to remember: “Many small jobs done by connected PCs over the internet”

HPC (High Performance Computing):

• In the internet age, supercomputers / clusters are used to complete a few heavy tasks fast

• Focus = speed / performance

• Example: Weather prediction – a supercomputer crunches huge calculations quickly

• Simple way to remember: “One big job done fast using powerful nodes”

Key Differences:

 HTC = Many small jobs, throughput main focus, uses many internet-connected PCs

 HPC = Few big jobs, speed main focus, uses supercomputers or clusters

💡 Exam Tip:

 Definition + example + 1 line difference = full marks.

 Mention “age of Internet computing” → shows understanding of modern context.

Comparison Table: HTC vs HPC

Feature              | HTC (High Throughput Computing)                        | HPC (High Performance Computing)
Focus                | Complete many small tasks over time                    | Complete few big tasks very fast
Goal                 | Maximize total work done (throughput)                  | Maximize speed / performance
Resources            | Many internet-connected PCs                            | Supercomputers / clusters
Example              | Folding@Home – millions of protein folding simulations | Weather prediction – supercomputer crunches huge data
Easy way to remember | “Many small jobs over internet”                        | “One big job, very fast”

Common questions

Virtualization supports parallel computing in cloud environments by allowing multiple virtual machines (VMs) to run concurrently on a single physical server under a Virtual Machine Manager (VMM). This configuration enables parallel tasks by distributing workloads across VMs efficiently. Benefits include increased resource utilization, since multiple processes execute simultaneously; scalable resource allocation based on demand; and improved cost-efficiency by maximizing the usage of existing infrastructure without additional physical machines.

There are five levels of virtualization implementation: Instruction Set Architecture (ISA) Level, Hardware Abstraction Level, Operating System (OS) Level, Library Support Level, and User/Application Level. Hardware-level virtualization, in particular, uses a Hypervisor (VMM) to create virtual hardware for each VM, allowing each to operate as though it has its own complete machine. This improves hardware utilization by enabling multiple users to share the same physical server efficiently. Creating virtual machines on one hardware platform allows sharing and effective management of resources such as CPU, memory, and disk space.

I/O virtualization is implemented through three main methods: full device emulation, para-virtualization, and direct I/O. Full device emulation uses the VMM to completely simulate a device in software, which maximizes compatibility at the cost of slower execution. Para-virtualization uses a split driver model, with parts of the driver in both the virtual machine and the host, leading to faster performance but higher CPU usage and less compatibility with non-optimized applications. Direct I/O allows VMs to access devices directly, resulting in near-native performance; however, it offers less flexibility and requires specific hardware support.

The VMM, or Hypervisor, is a software layer that sits on top of the hardware and is responsible for creating and managing virtual machines (VMs). It ensures isolation by allocating resources such as CPU, memory, and disk to each VM independently. If one VM crashes, the others remain unaffected because of this isolated allocation and management. Furthermore, the VMM handles special instructions to prevent crashes, such as trapping privileged instructions that could disrupt the system.

Cloud computing enhances scalability by allowing resources to be added or removed on demand, such as Netflix adding more servers during peak streaming times, which keeps the experience smooth for users despite high demand. Fault tolerance is achieved through redundancy and auto-scaling: multiple servers handle workloads, and operations shift seamlessly to available servers if any node fails. This ensures continuous service availability and minimizes interruptions. These features are crucial for modern applications to maintain high availability and reliability, minimizing downtime and protecting business continuity and user experience.

OS-level virtualization is preferred over hardware-level virtualization when the goal is to run multiple isolated user environments on the same OS kernel, which is especially useful for hosting providers that need to efficiently allocate resources to many mutually distrusting customers with varying needs. This approach provides cost-effective isolation and quicker deployment of environments compared to hardware-level virtualization, which carries more overhead from running a full guest OS in each VM.

Hardware-assisted virtualization, such as Intel VT and AMD-V, adds an extra mode known as Ring -1, allowing the hypervisor to run safely below the operating system. This technology lets unprivileged instructions execute directly on the hardware for faster performance, while special or privileged instructions are still managed by the VMM to maintain system stability. This support improves performance by reducing the need for software-based instruction emulation and allows more efficient sharing of the CPU among multiple VMs in cloud environments.

High Throughput Computing (HTC) focuses on maximizing the total amount of work done over time, typically involving many small tasks processed across numerous internet-connected PCs. An example is Folding@Home, where millions of computers perform protein folding simulations collaboratively. In contrast, High Performance Computing (HPC) emphasizes speed and performance, seeking to quickly complete a few large tasks using supercomputers or clusters. An example is weather prediction, where significant computational resources process high volumes of data efficiently. The primary goal of HTC is throughput, while HPC aims for rapid execution of complex processes.

Memory virtualization is important because it allows each virtual machine (VM) to operate as if it has its own dedicated memory space, even though all VMs share the same physical RAM. This is achieved through a two-stage mapping process: the guest OS maps virtual memory to guest physical memory, and the VMM maps this guest physical memory to actual machine memory. Memory virtualization thus contributes to security by ensuring isolation between VMs; one VM cannot see or corrupt another's memory, as the mappings are managed independently. Techniques such as shadow page tables, along with the hardware Memory Management Unit (MMU) and Translation Lookaside Buffer (TLB), further optimize this process by speeding up translation.

A virtual cluster comprises interconnected virtual machines (VMs) instead of the real, physical machines found in a physical cluster. The benefits of virtual clusters over physical ones include more efficient hardware utilization and ease of management. Since physical machines do not need to be relocated for scaling or configuration changes, virtual clusters can be created, managed, and reconfigured more readily. This flexibility supports parallel computing and simplifies operations for cloud providers such as AWS and Azure, who build large clusters using only VMs, achieving efficient resource usage and cost savings.
