
Unit-4

Virtualization for Cloud


4.1.1 Introduction:

Virtualization uses software to create an abstraction layer over computer hardware that allows the
hardware elements of a single computer—processors, memory, storage and more—to be divided
into multiple virtual computers, commonly called virtual machines (VMs). Each VM runs its own
operating system (OS) and behaves like an independent computer, even though it is running on
just a portion of the actual underlying computer hardware.

Definition- Virtualization is the process of creating a software-based, or virtual, representation of something, such as virtual applications, servers, storage, and networks. It is the single most effective way to reduce IT expenses while boosting efficiency and agility for businesses of all sizes.

4.1.2 Brief History:

While virtualization technology can be traced back to the 1960s, it wasn't widely adopted until the early 2000s. The technologies that enabled virtualization, like hypervisors, were developed decades ago to give multiple users simultaneous access to computers that performed batch processing. Batch processing was a popular computing style in the business sector that ran routine tasks thousands of times very quickly (like payroll).

But over the next few decades, other solutions to the many-users/single-machine problem grew in popularity while virtualization didn't. One of those other solutions was time-sharing, which isolated users within operating systems, inadvertently leading to other operating systems like UNIX, which eventually gave way to Linux®. All the while, virtualization remained a largely unadopted, niche technology.

Fast forward to the 1990s. Most enterprises had physical servers and single-vendor IT stacks, which didn't allow legacy apps to run on a different vendor's hardware. As companies updated their IT environments with less-expensive commodity servers, operating systems, and applications from a variety of vendors, they were bound to underused physical hardware: each server could only run one vendor-specific task.

This is where virtualization really took off. It was the natural solution to two problems: companies could partition their servers and run legacy apps on multiple operating system types and versions, and servers started being used more efficiently (or retired entirely), thereby reducing the costs associated with purchase, setup, cooling, and maintenance.

Virtualization’s widespread applicability helped reduce vendor lock-in and made it the
foundation of cloud computing. It’s so prevalent across enterprises today that specialized
virtualization management software is often needed to help keep track of it all.
4.1.3 Need of Virtualization:

a. Enhanced Performance:
Today the end user's system, the PC, is powerful enough to fulfill all of the user's basic computation needs, along with additional capabilities that are rarely used. Most of these systems have sufficient resources to host a virtual machine manager and run a virtual machine with acceptable performance.

b. Limited Use of Hardware and Software Resources:


Limited use of resources leads to under-utilization of hardware and software. Because users' PCs are already capable of meeting their regular computational needs, these machines sit idle much of the time, even though they could run 24/7 without interruption. The efficiency of the IT infrastructure could be increased by using these idle resources after hours for other purposes. Virtualization makes such an environment possible.

c. Shortage of Space:
The regular requirement for additional capacity, whether memory, storage, or compute power, causes data centers to grow rapidly. Companies like Google, Microsoft, and Amazon build data centers to match their needs, but most enterprises cannot afford to build another data center to accommodate additional capacity. This has driven the spread of a technique known as server consolidation.

d. Eco-Friendly Initiatives:
Corporations are actively seeking ways to reduce the power consumed by their systems. Data centers are major power consumers: operating one requires a continuous power supply, and a good amount of additional energy is needed to keep the equipment cool. Server consolidation reduces both the power consumed and the cooling load by cutting the number of servers. Virtualization provides a sophisticated way to achieve server consolidation.

e. Administrative Costs:
Furthermore, the growing demand for surplus capacity, which translates into more servers in a data center, is responsible for a significant increase in administrative costs. Common system administration tasks include hardware monitoring, server setup and updates, defective hardware replacement, server resource monitoring, and backups. These are personnel-intensive operations, and administrative costs grow with the number of servers. Virtualization decreases the number of servers required for a given workload, and hence reduces the cost of administrative staff.

4.2.1 Pros and Cons of Virtualization:

a. Benefits / Pros of Virtualization:

i. Efficiency:

Virtualization lets you have one machine serve as many virtual machines. This not only means
you need fewer servers, but you can use the ones you have to their fullest capacity. These
efficiency gains translate into cost savings on hardware, cooling, and maintenance—not to
mention the environmental benefit of a lower carbon footprint.

Virtualization also allows you to run multiple types of apps, desktops, and operating systems on
a single machine, instead of requiring separate servers for different vendors. This frees you from
relying on specific vendors and makes the management of your IT resources much less time-
consuming, allowing your IT team to be more productive.

ii. Reliability:

Virtualization technology allows you to easily back up and recover your data using virtual
machine snapshots of existing servers. It’s also simple to automate this backup process to keep
all your data up to date. If an emergency happens and you need to restore from a backed up
virtual machine, it’s easy to migrate this virtual machine to a new location in a few minutes. This
results in greater reliability and business continuity because it’s easier to recover from disaster or
loss.

iii. Business Strategy:

Virtualization software gives your organization more flexibility in how you test and allocate
resources. Because of how easy it is to back up and restore virtual machines, your IT team can
test and experiment with new technology easily. Virtualization also lets you create a cloud
strategy by allocating virtual machine resources into a shared pool for your organization. This
cloud-based infrastructure gives your IT team control over who accesses which resources and
from which devices, improving security and flexibility.

iv. Minimal Downtime:

OS and application crashes can cause downtime and disrupt user productivity. Admins can run multiple redundant virtual machines alongside each other and fail over between them when problems arise. Running multiple redundant physical servers is far more expensive.

v. Faster Provisioning:
Buying, installing, and configuring hardware for each application is time-consuming. Provided
that the hardware is already in place, provisioning virtual machines to run all your applications is
significantly faster. You can even automate it using management software and build it into
existing workflows.

vi. Easier Management:
Replacing physical computers with software-defined VMs makes it easier to use and manage
policies written in software. This allows you to create automated IT service management
workflows. For example, automated deployment and configuration tools enable administrators to
define collections of virtual machines and applications as services, in software templates. This
means that they can install those services repeatedly and consistently, without cumbersome, time-consuming, and error-prone manual setup. Admins can use virtualization security policies to
mandate certain security configurations based on the role of the virtual machine. Policies can even
increase resource efficiency by retiring unused virtual machines to save on space and computing
power.

b. Disadvantages / Cons of Virtualization:

i. High Initial Investment: While virtualization reduces costs in the long run, the initial setup costs for
storage and servers can be higher than a traditional setup.

ii. Complexity: Managing virtualized environments can be complex, especially as the number of VMs
increases.
iii. Security Risks: Virtualization introduces additional layers, which may pose security risks if
not properly configured and monitored.

iv. Learning New Infrastructure: As organizations shift from physical servers to the cloud, they need staff skilled in working with cloud environments. They must either hire new IT staff with the relevant skills or train existing staff, which increases the company's costs.

v. Data Can Be at Risk: Working on virtual instances on shared resources means that our data is hosted on third-party infrastructure, which leaves it vulnerable. An attacker could target the data or attempt unauthorized access; without a proper security solution, the data remains under threat.

4.2.2 Types of Virtualization:


 Desktop virtualization
 Network Virtualization
 Storage Virtualization
 Data Virtualization
 Application Virtualization
 Data Center Virtualization
 CPU Virtualization
 GPU Virtualization
 Linux Virtualization
 Cloud Virtualization

4.2.2.1 Desktop Virtualization

Desktop virtualization lets you run multiple desktop operating systems, each in its own VM
on the same computer.

There are two types of desktop virtualization:

 Virtual Desktop Infrastructure (VDI) runs multiple desktops in VMs on a central server and streams them to users who log in on thin-client devices. In this way, VDI lets an organization provide its users access to a variety of OSs from any device, without installing an OS on any of those devices.
 Local Desktop Virtualization runs a hypervisor on a local computer, enabling the
user to run one or more additional OSs on that computer and switch from one OS to
another as needed without changing anything about the primary OS.

4.2.2.2 Network Virtualization:

Network virtualization uses software to create a "view" of the network that an administrator can use to manage the network from a single console. It abstracts hardware elements and functions (e.g., connections, switches, routers) into software running on a hypervisor. The network administrator can modify and control these elements without touching the underlying physical components, which dramatically simplifies network management.

Types of network virtualization include software-defined networking (SDN), which virtualizes the hardware that controls network traffic routing (the "control plane"), and network function virtualization (NFV), which virtualizes one or more hardware appliances that provide a specific network function (e.g., a firewall, load balancer, or traffic analyzer), making those appliances easier to configure, provision, and manage.

4.3.1 Storage Virtualization:

Storage virtualization enables all the storage devices on the network, whether installed on individual servers or stand-alone storage units, to be accessed and managed as a single storage device. Specifically, storage virtualization amasses all blocks of storage into a single shared pool from which they can be assigned to any VM on the network as needed.
Storage virtualization makes it easier to provision storage for VMs and makes maximum use
of all available storage on the network.
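The pooling idea can be sketched in a few lines of Python. This is purely illustrative: the class and method names are invented, and real storage virtualization operates at the block-device layer, not on Python lists.

```python
# Minimal sketch of storage pooling: blocks from several physical devices
# are gathered into one shared pool and handed out to VMs on demand.
# Names (StoragePool, register_device, allocate) are illustrative only.

class StoragePool:
    def __init__(self):
        self.free_blocks = []        # (device_name, block_id) pairs
        self.allocations = {}        # vm_name -> list of assigned blocks

    def register_device(self, device_name, num_blocks):
        """Add every block of a physical device to the shared pool."""
        self.free_blocks += [(device_name, b) for b in range(num_blocks)]

    def allocate(self, vm_name, num_blocks):
        """Assign blocks to a VM, drawing from any device in the pool."""
        if num_blocks > len(self.free_blocks):
            raise RuntimeError("pool exhausted")
        blocks = [self.free_blocks.pop() for _ in range(num_blocks)]
        self.allocations.setdefault(vm_name, []).extend(blocks)
        return blocks

pool = StoragePool()
pool.register_device("server1-disk", 1000)   # blocks on one server
pool.register_device("san-unit", 5000)       # blocks on a stand-alone unit
blocks = pool.allocate("vm-web", 1200)       # may span both devices transparently
print(len(blocks), blocks[:2])
```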

4.3.2 Data Virtualization

Modern enterprises store data from multiple applications, using multiple file formats, in multiple locations, ranging from the cloud to on-premises hardware and software systems.
Data virtualization lets any application access all of that data—irrespective of source,
format, or location.

Data virtualization tools create a software layer between the applications accessing the data and the systems storing it. The layer translates an application's data request or query as needed and returns results that can span multiple systems. Data virtualization can help break down data silos when other types of integration aren't feasible, desirable, or affordable.
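A toy Python sketch of this translation layer, with two invented in-memory "sources" (one JSON, one CSV) standing in for cloud and on-premises systems. Everything here is illustrative, not a real data virtualization product's API.

```python
# Toy data virtualization layer: one query interface in front of two
# differently-formatted sources. Source contents are invented examples.

import json, csv, io

cloud_source = '[{"id": 1, "name": "Alice"}, {"id": 2, "name": "Bob"}]'  # JSON "cloud" data
onprem_source = "id,name\n3,Carol\n4,Dave\n"                             # CSV "on-premises" data

def query_all(predicate):
    """Translate one logical query into per-source reads and merge results."""
    rows = json.loads(cloud_source)                                        # source 1: JSON
    rows += [dict(r) for r in csv.DictReader(io.StringIO(onprem_source))]  # source 2: CSV
    return [r for r in rows if predicate(r)]

# The application never sees source, format, or location:
print(query_all(lambda r: int(r["id"]) > 1))
```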

4.3.3 Application Virtualization

Application virtualization runs application software without installing it directly on the user's OS. This differs from complete desktop virtualization (mentioned above) because only the application runs in a virtual environment; the OS on the end user's device runs as usual. There are three types of application virtualization:

 Local application virtualization: The entire application runs on the endpoint device but executes in a runtime environment instead of on the native hardware.
 Application Streaming: The application lives on a server which sends small
components of the software to run on the end user's device when needed.
 Server-based application virtualization: The application runs entirely on a server
that sends only its user interface to the client device.

4.3.4 Datacenter Virtualization:


Data center virtualization abstracts most of a data center’s hardware into software,
effectively enabling an administrator to divide a single physical data center into multiple
virtual data centers for different clients.

Each client can access its own infrastructure as a service (IaaS), which would run on the
same underlying physical hardware. Virtual data centers offer an easy on-ramp into cloud-
based computing, letting a company quickly set up a complete data center environment
without purchasing infrastructure hardware.

4.3.5 CPU Virtualization:

CPU (central processing unit) virtualization is the fundamental technology that makes
hypervisors, virtual machines, and operating systems possible. It allows a single CPU to be
divided into multiple virtual CPUs for use by multiple VMs.

At first, CPU virtualization was entirely software-defined, but many of today’s processors
include extended instruction sets that support CPU virtualization, which improves VM
performance.
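The presence of these processor extensions can be checked directly on a Linux host. The following is a minimal sketch, assuming Linux, that reads /proc/cpuinfo and looks for the "vmx" (Intel VT-x) or "svm" (AMD-V) CPU flags.

```python
# Linux-only sketch: check /proc/cpuinfo for the processor flags that
# advertise hardware CPU virtualization support.

def has_cpu_virtualization():
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                flags = line.split(":")[1].split()
                # "vmx" = Intel VT-x, "svm" = AMD-V
                return "vmx" in flags or "svm" in flags
    return False

print("hardware virtualization supported:", has_cpu_virtualization())
```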

4.3.6 GPU Virtualization:

A GPU (graphics processing unit) is a special multi-core processor that improves overall
computing performance by taking over heavy-duty graphic or mathematical processing.
GPU virtualization lets multiple VMs use all or some of a single GPU’s processing power
for faster video, artificial intelligence (AI), and other graphic- or math-intensive
applications.
 Pass-through GPUs make the entire GPU available to a single guest OS.
 Shared vGPUs divide physical GPU cores among several virtual GPUs (vGPUs) for use by server-based VMs.

4.3.7 Linux Virtualization:

Linux includes its own hypervisor, called the kernel-based virtual machine (KVM), which
supports Intel and AMD’s virtualization processor extensions so you can create x86-based
VMs from within a Linux host OS.

As an open source OS, Linux is highly customizable. You can create VMs running versions
of Linux tailored for specific workloads or security-hardened versions for more sensitive
applications.

4.3.8 Cloud Virtualization:

As noted above, the cloud computing model depends on virtualization. By virtualizing servers, storage, and other physical data center resources, cloud computing providers can offer a range of services to customers, including the following:
 Infrastructure as a Service (IaaS): Virtualized server, storage, and network resources you can configure based on your requirements.
 Platform as a Service (PaaS): Virtualized development tools, databases, and other cloud-based services you can use to build your own cloud-based applications and solutions.
 Software as a Service (SaaS): Software applications you use via the cloud. SaaS is the cloud-based service most abstracted from the hardware.
4.4.1 Virtual Machine:

A virtual machine is a computer file, typically called an image, that behaves like an actual computer; in other words, it creates a computer within a computer. It runs in a window, much like any other program, giving the end user the same experience on a virtual machine as they would have on the host operating system itself. The virtual machine is sandboxed from the rest of the system, meaning that the software inside a virtual machine cannot escape or tamper with the computer itself. This produces an ideal environment for testing other operating systems (including beta releases), accessing virus-infected data, creating operating system backups, and running software or applications on operating systems for which they were not originally intended.

Multiple virtual machines can run simultaneously on the same physical computer. For servers, the multiple operating systems run side by side, with a piece of software called a hypervisor to manage them, while desktop computers typically employ one operating system that runs the other operating systems within its program windows. Each virtual machine provides its own virtual hardware, including CPUs, memory, hard drives, network interfaces, and other devices. The virtual hardware is then mapped to the real hardware on the physical machine, which saves costs by reducing the need for physical hardware systems along with the associated maintenance costs, and reduces power and cooling demand.

4.4.2 How Virtualization Works:

Virtualization creates several virtual machines (also known as virtual computers, virtual
instances, virtual versions or VMs) from one physical machine using software called a
hypervisor. Because these virtual machines perform just like physical machines while only
relying on the resources of one computer system, virtualization allows IT organizations to run
multiple operating systems on a single server (also known as a host). During these operations,
the hypervisor allocates computing resources to each virtual computer as needed. This makes IT
operations much more efficient and cost-effective. Flexible resource allocation like this made
virtualization the foundation of cloud computing.

Virtualization methods can change based on the user’s operating system. For example, Linux
machines offer a unique open-source hypervisor known as the kernel-based virtual machine
(KVM). Because KVM is part of Linux, it allows the host machine to run multiple VMs without a separate hypervisor. However, KVM is not supported by all IT solution providers and requires Linux expertise to implement.

4.4.2.1 The Virtualization Process Follows the Steps Listed Below:

 Hypervisors detach the physical resources from their physical environments.


 Resources are taken and divided, as needed, from the physical environment to the various virtual environments.
 System users work with and perform computations within the virtual environment.
 Once the virtual environment is running, a user or program can send an instruction that requires extra resources from the physical environment. In response, the hypervisor relays the message to the physical system and stores the changes. This process happens at almost native speed.
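As a rough illustration of these steps, the following toy Python sketch models a hypervisor that pools the host's physical resources, divides them among virtual environments, and grants extra resources on request. All class and method names are invented; this is not any real hypervisor's API.

```python
# Toy "hypervisor": detach resources into a pool (step 1), divide them
# among VMs (step 2), and relay requests for extra resources (step 4).

class ToyHypervisor:
    def __init__(self, total_cpus, total_mem_gb):
        self.free = {"cpus": total_cpus, "mem_gb": total_mem_gb}  # detached pool
        self.vms = {}

    def create_vm(self, name, cpus, mem_gb):
        """Divide resources from the pool into a new virtual environment."""
        self._take(cpus, mem_gb)
        self.vms[name] = {"cpus": cpus, "mem_gb": mem_gb}

    def request_extra(self, name, cpus=0, mem_gb=0):
        """Relay a running VM's request for more physical resources."""
        self._take(cpus, mem_gb)
        self.vms[name]["cpus"] += cpus
        self.vms[name]["mem_gb"] += mem_gb

    def _take(self, cpus, mem_gb):
        if cpus > self.free["cpus"] or mem_gb > self.free["mem_gb"]:
            raise RuntimeError("insufficient physical resources")
        self.free["cpus"] -= cpus
        self.free["mem_gb"] -= mem_gb

hv = ToyHypervisor(total_cpus=16, total_mem_gb=64)
hv.create_vm("web", cpus=4, mem_gb=8)
hv.create_vm("db", cpus=4, mem_gb=16)
hv.request_extra("db", mem_gb=8)   # granted from the remaining pool
print(hv.free)                     # {'cpus': 8, 'mem_gb': 32}
```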

4.4.3 Virtualization Reference Model:

Three major components make up a virtualized environment:


a. Guest:
The guest represents the system component that interacts with the virtualization layer rather than with the host, as would normally happen. Guests usually consist of one or more virtual disk files and a VM definition file. Virtual machines are centrally managed by a host application that sees and manages each virtual machine as a different application.
b. Host:
The host represents the original environment where the guest is supposed to be managed. Each guest runs on the host using shared resources donated to it by the host. The operating system works as the host and manages the physical resource management and the device support.
c. Virtualization Layer:
The virtualization layer is responsible for recreating the same or a different environment where the guest will operate. It is an additional abstraction layer between the network, storage, and computing hardware and the applications running on it. Without it, a machine usually runs a single operating system, which is very inflexible compared with virtualization.
Fig. Virtualization Reference Model

4.5.1 Virtual Machine Monitor:


IBM introduced another level of indirection in the form of a Virtual Machine Monitor (VMM) (also called
a hypervisor). Specifically, the monitor sits between one or more operating systems and the hardware and
gives the illusion to each running OS that it controls the machine. Behind the scenes, however, the monitor
actually is in control of the hardware, and must multiplex running OSes across the physical resources of the
machine. Indeed, the VMM serves as an operating system for operating systems, but at a much lower level;
the OS must still think it is interacting with the physical hardware. Thus, transparency is a major goal of
VMMs. The VM Monitor (VMM) is an interface between the guest OS and the hardware. It
intercepts calls to the peripheral devices and memory tables from each guest OS and intercedes on
its behalf. In reverse, when a disk or SSD write creates an interrupt, the VM monitor injects that
interrupt into the appropriate guest OS.
4.5.2 Virtual Machine Properties:

4.5.2.1 Increased Security:

The ability to control the execution of a guest program in a completely transparent manner opens
new possibilities for delivering a secure, controlled execution environment. All the operations of
the guest programs are generally performed against the virtual machine, which then translates
and applies them to the host programs.
A virtual machine manager can control and filter the activity of the guest programs, thus
preventing some harmful operations from being performed. Resources exposed by the host can
then be hidden or simply protected from the guest. Increased security is a requirement when
dealing with untrusted code.
Example-1: Untrusted code can be analyzed in a Cuckoo Sandbox environment.
The term sandbox identifies an isolated execution environment where instructions can be filtered
and blocked before being translated and executed in the real execution environment.
Example-2: The expression sandboxed version of the Java Virtual Machine (JVM) refers to a
particular configuration of the JVM where, by means of security policy, instructions that are
considered potentially harmful can be blocked.
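To make the sandbox idea concrete, here is a toy Python sketch in which guest "instructions" are filtered against a security policy before being executed, in the spirit of the filtering described above. The instruction names and policy are invented for illustration.

```python
# Toy sandbox: guest operations are checked against a policy before
# being applied to the host. Entirely illustrative.

BLOCKED = {"write_disk", "open_socket"}   # policy: operations the guest may not run

def run_sandboxed(instructions):
    for op, arg in instructions:
        if op in BLOCKED:
            print(f"blocked harmful operation: {op}({arg!r})")
            continue                       # filtered out before reaching the host
        print(f"executed: {op}({arg!r})")  # translated and applied to the host

run_sandboxed([("compute", "2+2"), ("write_disk", "/etc/passwd"), ("compute", "3*3")])
```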

4.5.2.2 Managed Execution:


Virtualization of the execution environment not only allows increased security but also enables a wider range of features. In particular, sharing, aggregation, emulation, and isolation are the most relevant.
Fig. Functions Enabled by a Managed Execution

4.5.2.3 Sharing:
Virtualization allows the creation of a separate computing environment within the same host. This
basic feature is used to reduce the number of active servers and limit power consumption.
4.5.2.4 Aggregation:
It is possible to share physical resources among several guests, but virtualization also allows
aggregation, which is the opposite process. A group of separate hosts can be tied together and
represented to guests as a single virtual host. This functionality is implemented with cluster
management software, which harnesses the physical resources of a homogeneous group of machines
and represents them as a single resource.
4.5.2.5 Emulation:
Guest programs are executed within an environment that is controlled by the virtualization layer,
which ultimately is a program. Also, a completely different environment with respect to the host can
be emulated, thus allowing the execution of guest programs requiring specific characteristics that
are not present in the physical host.
4.5.2.6 Isolation:
Virtualization allows providing guests—whether they are operating systems, applications, or other
entities—with a completely separate environment, in which they are executed. The guest program
performs its activity by interacting with an abstraction layer, which provides access to the
underlying resources. The virtual machine can filter the activity of the guest and prevent harmful
operations against the host.
Besides these characteristics, another important capability enabled by virtualization is performance
tuning. This feature is a reality at present, given the considerable advances in hardware and software
supporting virtualization. It becomes easier to control the performance of the guest by finely tuning
the properties of the resources exposed through the virtual environment. This capability provides a
means to effectively implement a quality-of-service (QoS) infrastructure.
4.5.2.7. Portability:
The concept of portability applies in different ways according to the specific type of virtualization
considered.
In the case of a hardware virtualization solution, the guest is packaged into a virtual image that, in
most cases, can be safely moved and executed on top of different virtual machines.
In the case of programming-level virtualization, as implemented by the JVM or the .NET runtime,
the binary code representing application components (jars or assemblies) can run without any
recompilation on any implementation of the corresponding virtual machine.

4.5.2.8 Resource Sharing:

Virtualization allows multiple virtual machines to share the resources of a single physical machine,
such as CPU, memory, storage, and network bandwidth. This improves hardware utilization and
reduces the need for additional physical servers.
4.5.2.9 Cloud Migration:
Virtualization can be a stepping stone for organizations looking to migrate to the cloud. By
virtualizing their existing infrastructure, organizations can make it easier to move workloads to the
cloud and take advantage of cloud-based services.
4.6.1 Virtualization Structures:
Before virtualization, the operating system manages the hardware. After virtualization, a virtualization layer is
inserted between the hardware and the operating system. In such a case, the virtualization layer is responsible
for converting portions of the real hardware into virtual hardware. Therefore, different operating systems such
as Linux and Windows can run on the same physical machine, simultaneously.
Depending on the position of the virtualization layer, there are several classes of VM
architectures, namely-
1. Hypervisor and Xen Architecture
2. Binary Translation with Full Virtualization
3. Para-Virtualization with Compiler Support

1. Hypervisor and Xen Architecture


The hypervisor supports hardware-level virtualization on bare-metal devices like the CPU, memory, disk, and network interfaces. The hypervisor software sits directly between the physical hardware and its OS. This virtualization layer is referred to as either the VMM or the hypervisor. The hypervisor provides hypercalls for the guest OS and applications.
Xen is an open-source hypervisor program developed at Cambridge University. Xen is a micro-kernel hypervisor, which separates the policy from the mechanism: the Xen hypervisor implements all the mechanisms, leaving the policy to be handled by Domain 0. Xen provides a virtual environment located between the hardware and the OS.

The guest OS that has control ability is called Domain 0, and the others are called Domain U. Domain 0 is a privileged guest OS of Xen. It is loaded first, when Xen boots, without any file system drivers being available. Domain 0 is designed to access hardware directly and manage devices. Therefore, one of the responsibilities of Domain 0 is to allocate and map hardware resources for the guest domains (the Domain U domains).

2. Binary Translation with Full Virtualization

Depending on implementation technologies, hardware virtualization can be classified into two categories: full virtualization and host-based virtualization. Full virtualization does not need to modify the host OS; it relies on binary translation to trap and virtualize the execution of certain sensitive, nonvirtualizable instructions. The guest OSes and their applications consist of noncritical and critical instructions. In a host-based system, both a host OS and a guest OS are used, and a virtualization software layer is built between the host OS and the guest OS.
i. Full Virtualization
With full virtualization, noncritical instructions run on the hardware directly, while critical instructions are discovered and replaced with traps into the VMM to be emulated by software. Both the hypervisor and VMM approaches are considered full virtualization. Why are only critical instructions trapped into the VMM? Because binary translation can incur a large performance overhead. Noncritical instructions do not control hardware or threaten the security of the system, but critical instructions do. Therefore, running noncritical instructions directly on hardware not only promotes efficiency but also ensures system security.
ii. Binary Translation of Guest OS Requests Using a VMM
This approach was implemented by VMware and many other software companies. VMware
puts the VMM at Ring 0 and the guest OS at Ring 1. The VMM scans the instruction stream
and identifies the privileged, control- and behavior-sensitive instructions. When these
instructions are identified, they are trapped into the VMM, which emulates the behavior of
these instructions. The method used in this emulation is called binary translation. Therefore,
full virtualization combines binary translation and direct execution. The guest OS is
completely decoupled from the underlying hardware. Consequently, the guest OS is unaware
that it is being virtualized.
The performance of full virtualization may not be ideal, because it involves binary translation, which is rather time-consuming. In particular, full virtualization of I/O-intensive applications is a big challenge. Binary translation employs a code cache to store translated hot instructions to improve performance, but this increases the cost of memory usage. At the time of this writing, the performance of full virtualization on the x86 architecture is typically 80 to 97 percent that of the host machine.
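The trap-and-emulate logic behind this approach can be sketched in a few lines of Python. This toy model is illustrative only: the instruction names are invented, and a real VMM operates on actual binary code rather than strings.

```python
# Toy model of full virtualization: the "VMM" scans a guest instruction
# stream, runs noncritical instructions directly, and traps critical
# (privileged) ones into software emulation.

CRITICAL = {"DISABLE_INTERRUPTS", "WRITE_PAGE_TABLE"}  # sensitive, nonvirtualizable ops

def emulate(instr):
    print(f"VMM emulates {instr} against the VM's virtual state")

def execute_directly(instr):
    print(f"hardware executes {instr} at native speed")

def vmm_run(guest_stream):
    for instr in guest_stream:
        if instr in CRITICAL:
            emulate(instr)           # trap into the VMM: emulated in software
        else:
            execute_directly(instr)  # direct execution on the hardware

vmm_run(["ADD", "LOAD", "WRITE_PAGE_TABLE", "STORE", "DISABLE_INTERRUPTS"])
```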
iii. Host-Based Virtualization
An alternative VM architecture is to install a virtualization layer on top of the host OS. The host OS is still responsible for managing the hardware. The guest OSes are installed and run on top of the virtualization layer. This host-based architecture has some distinct advantages. First, the user can install this VM architecture without modifying the host OS. The virtualizing software can rely on the host OS to provide device drivers and other low-level services, which simplifies the VM design and eases its deployment.
3. Para-Virtualization with Compiler Support
Para-virtualization needs to modify the guest operating system. A para-virtualized VM provides special APIs that require substantial OS modifications in user applications. Performance degradation is a critical issue for a virtualized system: no one wants to use a VM if it is much slower than a physical machine. The virtualization layer can be inserted at different positions in a machine's software stack, and para-virtualization attempts to reduce the virtualization overhead, and thus improve performance, by modifying only the guest OS kernel.
4.7.1 HLL VM:

HLL VM stands for High Level Language Virtual Machine, and it is a type of virtual machine that is
used in cloud computing to run high-level programming languages, such as Java, Python, and Ruby,
among others.
In cloud computing, a virtual machine is a software abstraction of a physical machine that runs an
operating system and applications. A virtual machine allows users to run multiple operating systems
and applications on a single physical machine, which makes it a popular technology for cloud
computing.
HLL VMs, such as the Java Virtual Machine (JVM), are designed to run high-level programming languages that are compiled into bytecode. The HLL VM translates the bytecode into machine language that can be executed by the underlying hardware. This provides a layer of abstraction between the high-level programming language and the hardware, which makes it easier to write and deploy applications.
One of the benefits of using HLL VMs in cloud computing is that they provide a consistent and reliable
environment for running applications, regardless of the underlying hardware. HLL VMs are also
portable, which means that applications can be moved between different cloud providers without the
need for significant code changes.
In summary, HLL VMs are an important technology in cloud computing that enable the running of
high-level programming languages in a virtualized environment, providing a layer of abstraction
between the application and the underlying hardware, and allowing for portability and scalability of
applications.
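CPython offers a convenient way to see an HLL VM's bytecode first-hand: the standard dis module disassembles the bytecode that the Python VM executes. A minimal example:

```python
# CPython is itself an HLL VM: source is compiled to bytecode, which the
# VM then executes on any underlying hardware. The dis module makes that
# bytecode visible.

import dis

def add(a, b):
    return a + b

dis.dis(add)   # prints instructions such as LOAD_FAST and BINARY_ADD
               # (BINARY_OP on Python 3.11+)
```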
KVM stands for Kernel-based Virtual Machine. It’s a technology that allows you to run multiple,
separate “virtual” computers on a single physical machine.

Imagine your computer as a large apartment building. Normally, it’s like having one big apartment that
takes up the whole building. With KVM, you can divide this big apartment into several smaller
apartments (these are the “virtual” computers), allowing different people (or different computer tasks) to
live in their own spaces without interfering with each other.

KVM is a versatile and powerful virtualization solution that leverages the Linux kernel to create and
manage virtual machines efficiently.
Integration with Linux: KVM is part of the Linux kernel, which makes it a robust and integrated
solution for virtualization on systems running the Linux operating system.

Hardware Virtualization Support: It leverages the hardware virtualization features of modern processors, like Intel VT or AMD-V, to provide a performant and efficient virtualization environment.

Flexibility and Compatibility: Supports various guest operating systems including Linux, Windows,
and others. VMs can be easily migrated between hosts running KVM without needing any conversion.

Key Features and Concepts of KVM

1. Hypervisor: KVM acts as a hypervisor, a software layer that enables multiple operating systems
to run concurrently on the same hardware. It leverages hardware virtualization extensions (Intel
VT-x and AMD-V) to provide efficient and secure virtualization.
2. Full Virtualization: KVM allows you to run guest VMs with different operating systems, such
as Linux, Windows, and others, as if they were running on dedicated physical hardware. This
provides isolation and flexibility.
3. Hardware Emulation: KVM can emulate a range of hardware components for VMs, including
CPUs, memory, network adapters, and storage devices. This enables compatibility with various
guest OSes.
4. Performance: KVM offers high performance, as it directly utilizes the host machine’s CPU and
memory resources. This makes it well-suited for running resource-intensive workloads.
5. Management Tools: Various management tools and interfaces, like virt-manager, libvirt, and virsh, help you create, configure, and manage VMs on KVM-enabled hosts.
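As a small illustration of programmatic management, the following sketch uses the libvirt Python bindings (installable as libvirt-python) to connect to a local KVM/QEMU hypervisor and list its VMs. It assumes a running libvirt daemon; the connection URI and output are environment-specific.

```python
# Sketch: connect read-only to the local KVM/QEMU hypervisor via libvirt
# and list each VM (domain) with its state.

import libvirt

conn = libvirt.openReadOnly("qemu:///system")   # local KVM connection URI
for dom in conn.listAllDomains():
    state = "running" if dom.isActive() else "shut off"
    print(f"{dom.name()}: {state}")
conn.close()
```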

4.7.2 VMware:

VMware is a software company that specializes in virtualization and cloud computing. It provides an alternative to dedicated hosts. In late 2023, VMware was acquired by Broadcom, raising questions about the future of its products and uncertainty about future licensing costs. This has led many organizations to consider alternatives such as Nutanix.

VMware's solution categories include:

 Virtualization: Creates a software layer that allows a computer's hardware to be divided into
multiple virtual machines (VMs). Each VM can run its own operating system and act like a
separate computer.
 Networking: Simplifies application delivery and automates operations with network
virtualization and load balancing.
 Cloud infrastructure: Deploys private cloud infrastructure solutions.
 Software-defined data center (SDDC): Virtualizes almost every computing function into a
software-defined data center.
 Storage software: Allows IT departments to place application workloads on the most cost-
effective compute resource.
4.8.1 Virtual Box:
VirtualBox (VB) is a hypervisor for x86 computers from Oracle Corporation. It was first developed by Innotek GmbH and released in 2007 as an open-source software package. The company was later acquired by Sun Microsystems in 2008, and Oracle has continued development of VirtualBox since 2010 under the name Oracle VM VirtualBox. VirtualBox comes in different flavours depending on the operating system for which it is being configured. VirtualBox for Ubuntu is widely preferred, though VirtualBox for Windows is equally popular, and with the advent of Android phones, VirtualBox for Android is becoming the new face of VMs on smartphones.

Use of VirtualBox: In general, VirtualBox is a software virtualization package that can be installed on any operating system as an application. It allows additional operating systems to be installed on it as guest OSes.
It can then create and manage guest virtual machines, each with a guest operating system and its own virtual environment. VirtualBox is supported by many host operating systems, such as Windows XP, Windows Vista, Windows 7, Linux, Mac OS X, Solaris, and OpenSolaris. Supported guest operating systems include versions and derivatives of Windows, Linux, OS/2, BSD, Haiku, etc. VirtualBox gets a lot of support, primarily because it is free and open source. It also allows unlimited snapshots, a feature only available in VMware Pro. VMware, on the other hand, is great for drag-and-drop functionality between the host and the VM, but many of its features come only in the paid version.

4.8.2 Hypervisor:

A fundamental element of hardware virtualization is the hypervisor, or virtual machine manager (VMM). It recreates a hardware environment in which guest operating systems are installed. There are two major types of hypervisors: Type I and Type II.

Type I hypervisors run directly on top of the hardware. They take the place of the operating system and interact directly with the ISA interface exposed by the underlying hardware, emulating this interface in order to manage guest operating systems. This type of hypervisor is also called a native virtual machine, since it runs natively on hardware.

Type II hypervisors require the support of an operating system to provide virtualization services. This means they are programs managed by the operating system, which interact with it through the ABI and emulate the ISA of virtual hardware for guest operating systems. This type of hypervisor is also called a hosted virtual machine, since it is hosted within an operating system.
Fig. Hosted (left) and Native (right) Virtual Machines
