Lecture Notes _Unit 4_Cloud Computing_KCA014
Virtualization uses software to create an abstraction layer over computer hardware that allows the
hardware elements of a single computer—processors, memory, storage and more—to be divided
into multiple virtual computers, commonly called virtual machines (VMs). Each VM runs its own
operating system (OS) and behaves like an independent computer, even though it is running on
just a portion of the actual underlying computer hardware.
While virtualization technology can be traced back to the 1960s, it wasn't widely adopted until
the early 2000s. The technologies that enabled virtualization—like hypervisors—were developed
decades ago to give multiple users simultaneous access to computers that performed batch
processing. Batch processing was a popular computing style in the business sector that ran
routine tasks (like payroll) thousands of times very quickly.
But over the next few decades, other solutions to the many-users/single-machine problem grew
in popularity while virtualization did not. One of those solutions was time-sharing, which
isolated users within operating systems—work that led to operating systems like UNIX, which
eventually gave way to Linux®. All the while, virtualization remained a largely unadopted,
niche technology.
Fast forward to the 1990s. Most enterprises had physical servers and single-vendor IT stacks,
which didn't allow legacy apps to run on a different vendor's hardware. As companies updated
their IT environments with less-expensive commodity servers, operating systems, and
applications from a variety of vendors, they found themselves with underused physical
hardware: each server could run only one vendor-specific task.
This is where virtualization really took off. It was the natural solution to two problems: companies
could partition their servers and run legacy apps on multiple operating system types and
versions. Servers could be used more efficiently, or retired altogether, reducing the costs
associated with purchase, setup, cooling, and maintenance.
Virtualization’s widespread applicability helped reduce vendor lock-in and made it the
foundation of cloud computing. It’s so prevalent across enterprises today that specialized
virtualization management software is often needed to help keep track of it all.
4.1.3 Need of Virtualization:
a. Enhanced Performance:
Currently, the end-user system (i.e., the PC) is powerful enough to fulfill all the basic
computation requirements of the user, along with various additional capabilities that are rarely
used. Most such systems have sufficient resources to host a virtual machine manager and run a
virtual machine with acceptable performance.
c. Shortage of Space:
The regular requirement for additional capacity, whether storage or compute power, makes
data centers grow rapidly. Companies like Google, Microsoft, and Amazon expand their
infrastructure by building data centers as their needs grow. Most enterprises, however, cannot
afford to build another data center to accommodate additional resource capacity. This has
driven the adoption of a technique known as server consolidation.
d. Eco-Friendly Initiatives:
At this time, corporations are actively seeking ways to reduce the power consumed by their
systems. Data centers are major power consumers: keeping one operational requires a
continuous power supply, and a good amount of additional energy is needed to keep the
equipment cool. Server consolidation reduces both the power consumed and the cooling load
by lowering the number of servers. Virtualization provides a sophisticated means of server
consolidation.
e. Administrative Costs:
Furthermore, the rising demand for surplus capacity, which translates into more servers in a
data center, is responsible for a significant increase in administrative costs. Common system
administration tasks include hardware monitoring, server setup and updates, defective
hardware replacement, server resource monitoring, and backups. These are personnel-intensive
operations, so administrative costs grow with the number of servers. Virtualization decreases
the number of servers required for a given workload and hence reduces the cost of
administrative staff.
i. Efficiency:
Virtualization lets you have one machine serve as many virtual machines. This not only means
you need fewer servers, but you can use the ones you have to their fullest capacity. These
efficiency gains translate into cost savings on hardware, cooling, and maintenance—not to
mention the environmental benefit of a lower carbon footprint.
Virtualization also allows you to run multiple types of apps, desktops, and operating systems on
a single machine, instead of requiring separate servers for different vendors. This frees you from
relying on specific vendors and makes the management of your IT resources much less time-
consuming, allowing your IT team to be more productive.
ii. Reliability:
Virtualization technology allows you to easily back up and recover your data using virtual
machine snapshots of existing servers. It’s also simple to automate this backup process to keep
all your data up to date. If an emergency happens and you need to restore from a backed up
virtual machine, it’s easy to migrate this virtual machine to a new location in a few minutes. This
results in greater reliability and business continuity because it’s easier to recover from disaster or
loss.
Virtualization software gives your organization more flexibility in how you test and allocate
resources. Because of how easy it is to back up and restore virtual machines, your IT team can
test and experiment with new technology easily. Virtualization also lets you create a cloud
strategy by allocating virtual machine resources into a shared pool for your organization. This
cloud-based infrastructure gives your IT team control over who accesses which resources and
from which devices, improving security and flexibility.
OS and application crashes can cause downtime and disrupt user productivity. Admins can run
multiple redundant virtual machines alongside each other and failover between them when
problems arise. Running multiple redundant physical servers is more expensive.
v. Faster Provisioning:
Buying, installing, and configuring hardware for each application is time-consuming. Provided
that the hardware is already in place, provisioning virtual machines to run all your applications is
significantly faster. You can even automate it using management software and build it into
existing workflows.
vi. Easier Management:
Replacing physical computers with software-defined VMs makes it easier to use and manage
policies written in software. This allows you to create automated IT service management
workflows. For example, automated deployment and configuration tools enable administrators to
define collections of virtual machines and applications as services, in software templates. This
means that they can install those services repeatedly and consistently without cumbersome,
time-consuming, and error-prone manual setup. Admins can use virtualization security policies to
mandate certain security configurations based on the role of the virtual machine. Policies can even
increase resource efficiency by retiring unused virtual machines to save on space and computing
power.
i. High Initial Investment: While virtualization reduces costs in the long run, the initial setup costs for
storage and servers can be higher than a traditional setup.
ii. Complexity: Managing virtualized environments can be complex, especially as the number of VMs
increases.
iii. Security Risks: Virtualization introduces additional layers, which may pose security risks if
not properly configured and monitored.
iv. Learning New Infrastructure: As organizations shift from on-premise servers to the cloud,
they need staff skilled in working with cloud platforms. They must either hire new IT staff
with the relevant skills or train existing staff, which increases costs for the company.
v. Data can be at Risk: Running virtual instances on shared resources means that our data is
hosted on third-party infrastructure, which leaves it in a vulnerable position. An attacker may
target the data or attempt unauthorized access. Without a proper security solution, the data
remains under threat.
Desktop virtualization lets you run multiple desktop operating systems, each in its own VM
on the same computer.
Network virtualization uses software to create a "view" of the network that an administrator
can use to manage the network from a single console. It abstracts hardware elements and
functions (e.g., connections, switches, routers, etc.) into software running on a hypervisor.
The network administrator can modify and control these elements without touching the
underlying physical components, which dramatically simplifies network management.
Storage virtualization enables all the storage devices on the network—whether they're
installed on individual servers or are stand-alone storage units—to be accessed and managed as
a single storage device. Specifically, storage virtualization pools all blocks of storage into a
single shared pool from which they can be assigned to any VM on the network as needed.
Storage virtualization makes it easier to provision storage for VMs and makes maximum use
of all available storage on the network.
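The pooling idea above can be sketched as a toy model. Note that the class and method names below are invented purely for illustration; real storage virtualization is implemented in the storage or hypervisor layer, not in application code like this:

```python
# Toy model of storage virtualization: blocks from several physical
# devices join one shared pool and are handed out to VMs on demand.
class StoragePool:
    def __init__(self):
        self.free_blocks = []      # (device_name, block_id) pairs
        self.allocations = {}      # vm_name -> list of assigned blocks

    def add_device(self, device_name, num_blocks):
        # Every device's blocks are absorbed into the single shared pool.
        self.free_blocks += [(device_name, b) for b in range(num_blocks)]

    def allocate(self, vm_name, num_blocks):
        if num_blocks > len(self.free_blocks):
            raise RuntimeError("pool exhausted")
        taken = [self.free_blocks.pop() for _ in range(num_blocks)]
        self.allocations.setdefault(vm_name, []).extend(taken)
        return taken

pool = StoragePool()
pool.add_device("server1-disk", 100)
pool.add_device("standalone-array", 200)
pool.allocate("vm-a", 150)       # may span both physical devices
print(len(pool.free_blocks))     # blocks remaining in the shared pool
```

The VM requesting storage never needs to know which physical device its blocks came from; the pool hides that detail, which is exactly the abstraction storage virtualization provides.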
Modern enterprises store data from multiple applications, using multiple file formats, in
multiple locations, ranging from the cloud to on-premise hardware and software systems.
Data virtualization lets any application access all of that data—irrespective of source,
format, or location.
Data virtualization tools create a software layer between the applications accessing the data
and the systems storing it. The layer translates an application’s data request or query as
needed and returns results that can span multiple systems. Data virtualization can help break
down data silos when other types of integration aren’t feasible, desirable, or affordable.
Local application virtualization: The entire application runs on the end point
device but runs in a runtime environment instead of on the native hardware.
Application Streaming: The application lives on a server which sends small
components of the software to run on the end user's device when needed.
Server-based application virtualization: The application runs entirely on a server
that sends only its user interface to the client device.
Each client can access its own infrastructure as a service (IaaS), which would run on the
same underlying physical hardware. Virtual data centers offer an easy on-ramp into cloud-
based computing, letting a company quickly set up a complete data center environment
without purchasing infrastructure hardware.
CPU (central processing unit) virtualization is the fundamental technology that makes
hypervisors, virtual machines, and operating systems possible. It allows a single CPU to be
divided into multiple virtual CPUs for use by multiple VMs.
At first, CPU virtualization was entirely software-defined, but many of today’s processors
include extended instruction sets that support CPU virtualization, which improves VM
performance.
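The division of one physical CPU among several virtual CPUs can be illustrated with a toy round-robin scheduler. This is a deliberately simplified sketch (real hypervisor schedulers account for priorities, pinning, and hardware virtualization extensions):

```python
# Toy illustration of CPU virtualization: one physical CPU's time is
# divided into slices dealt out round-robin to virtual CPUs.
def schedule_vcpus(vcpu_names, total_slices):
    slices = {name: 0 for name in vcpu_names}
    for i in range(total_slices):
        # Each vCPU gets the physical CPU in turn.
        slices[vcpu_names[i % len(vcpu_names)]] += 1
    return slices

# Three VMs' vCPUs share 30 time slices of a single physical core.
print(schedule_vcpus(["vm1-vcpu0", "vm2-vcpu0", "vm3-vcpu0"], 30))
```

Each VM perceives a dedicated CPU, while in reality all three are multiplexed onto the same physical core.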
A GPU (graphical processing unit) is a special multi-core processor that improves overall
computing performance by taking over heavy-duty graphic or mathematical processing.
GPU virtualization lets multiple VMs use all or some of a single GPU’s processing power
for faster video, artificial intelligence (AI), and other graphic- or math-intensive
applications.
Pass-through GPUs make the entire GPU available to a single guest OS.
Shared vGPUs divide physical GPU cores among several virtual GPUs (vGPUs)
for use by server-based VMs.
Linux includes its own hypervisor, called the kernel-based virtual machine (KVM), which
supports Intel and AMD’s virtualization processor extensions so you can create x86-based
VMs from within a Linux host OS.
As an open source OS, Linux is highly customizable. You can create VMs running versions
of Linux tailored for specific workloads or security-hardened versions for more sensitive
applications.
A virtual machine is a computer file, typically called an image, which behaves like an actual
computer. In other words, creating a computer within a computer. It runs in a window, much like
any other programme, giving the end user the same experience on a virtual machine as they
would have on the host operating system itself. The virtual machine is sandboxed from the rest
of the system, meaning that the software inside a virtual machine can not escape or tamper with
the computer itself. This produces an ideal environment for testing other operating systems
including beta releases, accessing virus-infected data, creating operating system backups and
running software or applications on operating systems for which they were not originally
intended.
Multiple virtual machines can run simultaneously on the same physical computer. For servers,
the multiple operating systems run side-by-side with a piece of software called a hypervisor to
manage them, while desktop computers typically employ one operating system to run the other
operating systems within its programme windows. Each virtual machine provides its own virtual
hardware, including CPUs, memory, hard drives, network interfaces and other devices. The
virtual hardware is then mapped to the real hardware on the physical machine which saves costs
by reducing the need for physical hardware systems alongwith the associated maintenance costs
that go with it, plus reduces power and cooling demand.
Virtualization creates several virtual machines (also known as virtual computers, virtual
instances, virtual versions or VMs) from one physical machine using software called a
hypervisor. Because these virtual machines perform just like physical machines while only
relying on the resources of one computer system, virtualization allows IT organizations to run
multiple operating systems on a single server (also known as a host). During these operations,
the hypervisor allocates computing resources to each virtual computer as needed. This makes IT
operations much more efficient and cost-effective. Flexible resource allocation like this made
virtualization the foundation of cloud computing.
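The hypervisor's role of allocating resources to each virtual computer can be sketched as a toy accounting model. The class below is hypothetical and only tracks memory; a real hypervisor also manages CPU time, devices, and isolation:

```python
# Toy hypervisor resource accounting: one host's memory is carved up
# among VMs, and a request exceeding the remainder is refused.
class ToyHypervisor:
    def __init__(self, host_memory_mb):
        self.free_mb = host_memory_mb
        self.vms = {}

    def create_vm(self, name, memory_mb):
        if memory_mb > self.free_mb:
            return False           # not enough physical memory left
        self.free_mb -= memory_mb
        self.vms[name] = memory_mb
        return True

hv = ToyHypervisor(host_memory_mb=16384)
hv.create_vm("linux-vm", 8192)     # e.g. a Linux guest
hv.create_vm("windows-vm", 4096)   # e.g. a Windows guest
print(hv.free_mb)                  # memory left for further guests
```

The point is the bookkeeping: multiple operating systems coexist because the hypervisor tracks and enforces each guest's share of the single host's resources.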
Virtualization methods can change based on the user’s operating system. For example, Linux
machines offer a unique open-source hypervisor known as the kernel-based virtual machine
(KVM). Because KVM is part of Linux, it allows the host machine to run multiple VMs without
a separate hypervisor. However, KVM is not supported by all IT solution providers and requires
Linux expertise in order to implement it.
The ability to control the execution of a guest program in a completely transparent manner opens
new possibilities for delivering a secure, controlled execution environment. All the operations of
the guest programs are generally performed against the virtual machine, which then translates
and applies them to the host programs.
A virtual machine manager can control and filter the activity of the guest programs, thus
preventing some harmful operations from being performed. Resources exposed by the host can
then be hidden or simply protected from the guest. Increased security is a requirement when
dealing with untrusted code.
Example-1: Untrusted code can be analyzed in a Cuckoo Sandbox environment.
The term sandbox identifies an isolated execution environment where instructions can be filtered
and blocked before being translated and executed in the real execution environment.
Example-2: The expression sandboxed version of the Java Virtual Machine (JVM) refers to a
particular configuration of the JVM where, by means of security policy, instructions that are
considered potentially harmful can be blocked.
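The filtering behaviour described in both examples can be mimicked with a toy function. This is not how the JVM or Cuckoo actually work internally; it is only a hypothetical sketch of the idea that guest operations are inspected against a policy before being executed:

```python
# Toy sandbox: guest "instructions" are checked against a block list
# before execution, mimicking how a sandbox intercepts operations.
def run_sandboxed(instructions, blocked):
    executed, refused = [], []
    for op in instructions:
        # A real sandbox would translate allowed operations to the
        # host; here we just record the decision.
        (refused if op in blocked else executed).append(op)
    return executed, refused

guest_program = ["read_file", "compute", "delete_system_dir", "compute"]
executed, refused = run_sandboxed(guest_program,
                                  blocked={"delete_system_dir"})
print(executed, refused)
```

The harmful operation never reaches the "host"; everything else runs normally, which is precisely the controlled-execution property virtualization enables.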
4.5.2.3 Sharing:
Virtualization allows the creation of a separate computing environment within the same host. This
basic feature is used to reduce the number of active servers and limit power consumption.
4.5.2.4 Aggregation:
It is possible to share physical resources among several guests, but virtualization also allows
aggregation, which is the opposite process. A group of separate hosts can be tied together and
represented to guests as a single virtual host. This functionality is implemented with cluster
management software, which harnesses the physical resources of a homogeneous group of machines
and represents them as a single resource.
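Aggregation can be sketched numerically: the cluster management software presents the sum of the members' resources as one virtual host. The function and field names below are invented for illustration:

```python
# Toy aggregation: several physical hosts are presented to guests as
# one virtual host whose capacity is the sum of its members.
def aggregate(hosts):
    return {
        "cpus":      sum(h["cpus"] for h in hosts),
        "memory_gb": sum(h["memory_gb"] for h in hosts),
    }

cluster = [
    {"name": "node1", "cpus": 16, "memory_gb": 64},
    {"name": "node2", "cpus": 16, "memory_gb": 64},
    {"name": "node3", "cpus": 16, "memory_gb": 64},
]
print(aggregate(cluster))   # guests see a single large virtual host
```

This is the inverse of partitioning: instead of one host appearing as many guests, many hosts appear as one.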
4.5.2.5 Emulation:
Guest programs are executed within an environment that is controlled by the virtualization layer,
which ultimately is a program. Also, a completely different environment with respect to the host can
be emulated, thus allowing the execution of guest programs requiring specific characteristics that
are not present in the physical host.
4.5.2.6 Isolation:
Virtualization allows providing guests—whether they are operating systems, applications, or other
entities—with a completely separate environment, in which they are executed. The guest program
performs its activity by interacting with an abstraction layer, which provides access to the
underlying resources. The virtual machine can filter the activity of the guest and prevent harmful
operations against the host.
Besides these characteristics, another important capability enabled by virtualization is performance
tuning. This feature is a reality at present, given the considerable advances in hardware and software
supporting virtualization. It becomes easier to control the performance of the guest by finely tuning
the properties of the resources exposed through the virtual environment. This capability provides a
means to effectively implement a quality-of-service (QoS) infrastructure.
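The QoS idea can be reduced to a one-line policy: each guest's demand is clipped to the share configured for it. The sketch below is a hypothetical simplification (real QoS infrastructure also handles bursting, reservations, and scheduling):

```python
# Toy QoS cap: each guest's memory demand (MB) is clipped to its
# configured limit, so one guest cannot starve the others.
def apply_qos(demands, caps):
    return {guest: min(mb, caps[guest]) for guest, mb in demands.items()}

demands = {"web-vm": 6000, "db-vm": 3000}   # what each guest asks for
caps    = {"web-vm": 4096, "db-vm": 4096}   # configured limits
print(apply_qos(demands, caps))
```

Because the virtualization layer mediates every resource request, enforcing such a policy requires no cooperation from the guest itself.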
4.5.2.7. Portability:
The concept of portability applies in different ways according to the specific type of virtualization
considered.
In the case of a hardware virtualization solution, the guest is packaged into a virtual image that, in
most cases, can be safely moved and executed on top of different virtual machines.
In the case of programming-level virtualization, as implemented by the JVM or the .NET runtime,
the binary code representing application components (jars or assemblies) can run without any
recompilation on any implementation of the corresponding virtual machine.
Virtualization allows multiple virtual machines to share the resources of a single physical machine,
such as CPU, memory, storage, and network bandwidth. This improves hardware utilization and
reduces the need for additional physical servers.
4.5.2.9 Cloud Migration:
Virtualization can be a stepping stone for organizations looking to migrate to the cloud. By
virtualizing their existing infrastructure, organizations can make it easier to move workloads to the
cloud and take advantage of cloud-based services
4.6.1 Virtualization Structures:
Before virtualization, the operating system manages the hardware. After virtualization, a virtualization layer is
inserted between the hardware and the operating system. In such a case, the virtualization layer is responsible
for converting portions of the real hardware into virtual hardware. Therefore, different operating systems such
as Linux and Windows can run on the same physical machine, simultaneously.
Depending on the position of the virtualization layer, there are several classes of VM
architectures, namely-
1. Hypervisor and Xen Architecture
2. Binary Translation with Full Virtualization
3. Para-Virtualization with Compiler Support
The guest OS, which has control ability, is called Domain 0, and the others are called
Domain U. Domain 0 is a privileged guest OS of Xen. It is first loaded when Xen boots
without any file system drivers being available. Domain 0 is designed to access hardware
directly and manage devices. Therefore, one of the responsibilities of Domain 0 is to
allocate and map hardware resources for the guest domains (the Domain U domains).
HLL VM stands for High Level Language Virtual Machine, and it is a type of virtual machine that is
used in cloud computing to run high-level programming languages, such as Java, Python, and Ruby,
among others.
In cloud computing, a virtual machine is a software abstraction of a physical machine that runs an
operating system and applications. A virtual machine allows users to run multiple operating systems
and applications on a single physical machine, which makes it a popular technology for cloud
computing.
HLL VMs, such as the Java Virtual Machine (JVM), are designed to run high-level programming
languages that are compiled into byte code. The HLL VM translates the byte code into machine
language that can be executed by the underlying hardware. This provides a layer of abstraction between
the high-level programming language and the hardware, which makes it easier to write and deploy
applications.
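Python itself is an example of an HLL VM language: source code is compiled to bytecode, which the CPython virtual machine interprets. The standard `dis` module makes that bytecode visible (the exact instruction names vary between Python versions):

```python
# Inspect the bytecode that the CPython VM executes for a tiny
# function, using the standard-library dis module.
import dis

def add(a, b):
    return a + b

# Each Instruction is one VM-level operation behind `a + b`.
opnames = [ins.opname for ins in dis.get_instructions(add)]
print(opnames)
```

Running this shows load, binary-add, and return instructions: the machine-independent representation that the VM later maps onto the real hardware, which is exactly the layer of abstraction described above.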
One of the benefits of using HLL VMs in cloud computing is that they provide a consistent and reliable
environment for running applications, regardless of the underlying hardware. HLL VMs are also
portable, which means that applications can be moved between different cloud providers without the
need for significant code changes.
In summary, HLL VMs are an important technology in cloud computing that enable the running of
high-level programming languages in a virtualized environment, providing a layer of abstraction
between the application and the underlying hardware, and allowing for portability and scalability of
applications.
KVM stands for Kernel-based Virtual Machine. It’s a technology that allows you to run multiple,
separate “virtual” computers on a single physical machine.
Imagine your computer as a large apartment building. Normally, it’s like having one big apartment that
takes up the whole building. With KVM, you can divide this big apartment into several smaller
apartments (these are the “virtual” computers), allowing different people (or different computer tasks) to
live in their own spaces without interfering with each other.
KVM is a versatile and powerful virtualization solution that leverages the Linux kernel to create and
manage virtual machines efficiently.
Integration with Linux: KVM is part of the Linux kernel, which makes it a robust and integrated
solution for virtualization on systems running the Linux operating system.
Flexibility and Compatibility: Supports various guest operating systems including Linux, Windows,
and others. VMs can be easily migrated between hosts running KVM without needing any conversion.
1. Hypervisor: KVM acts as a hypervisor, a software layer that enables multiple operating systems
to run concurrently on the same hardware. It leverages hardware virtualization extensions (Intel
VT-x and AMD-V) to provide efficient and secure virtualization.
2. Full Virtualization: KVM allows you to run guest VMs with different operating systems, such
as Linux, Windows, and others, as if they were running on dedicated physical hardware. This
provides isolation and flexibility.
3. Hardware Emulation: KVM can emulate a range of hardware components for VMs, including
CPUs, memory, network adapters, and storage devices. This enables compatibility with various
guest OSes.
4. Performance: KVM offers high performance, as it directly utilizes the host machine’s CPU and
memory resources. This makes it well-suited for running resource-intensive workloads.
5. Management Tools: Various management tools and interfaces, like virt-manager, libvirt, and
virsh, help you create, configure, and manage VMs on KVM-enabled hosts.
4.7.2 VMware:
VMware is a software company that specializes in virtualization and cloud computing. It provides an
alternative to dedicated hosts. In late 2023, VMware was acquired by Broadcom, raising questions about the
future of its products and uncertainty about future licensing costs. This has led many organizations to
consider alternatives such as Nutanix.
Virtualization: Creates a software layer that allows a computer's hardware to be divided into
multiple virtual machines (VMs). Each VM can run its own operating system and act like a
separate computer.
Networking: Simplifies application delivery and automates operations with network
virtualization and load balancing.
Cloud infrastructure: Deploys private cloud infrastructure solutions.
Software-defined data center (SDDC): Virtualizes almost every computing function into a
software-defined data center.
Storage software: Allows IT departments to place application workloads on the most cost-
effective compute resource.
4.8.1 VirtualBox:
VirtualBox (VB) is a hypervisor for x86 computers from Oracle Corporation. It was first
developed by Innotek GmbH and released in 2007 as an open-source software package. The
company was acquired by Sun Microsystems in 2008, and Oracle has continued the development
of VirtualBox since 2010 under the name Oracle VM VirtualBox. VirtualBox comes in different
flavours depending upon the operating system for which it is configured. VirtualBox on Ubuntu
is often preferred, though VirtualBox for Windows is equally popular. With the advent of
Android phones, VirtualBox for Android is becoming the new face of VMs on smartphones.
In general, VirtualBox is a software virtualization package that can be installed on an operating
system as an application. It allows additional operating systems to be installed on it, each as a
guest OS.
It can then create and manage guest virtual machines, each with a guest operating system and its
own virtual environment. VirtualBox runs on many host operating systems, including Windows XP,
Windows Vista, Windows 7, Linux, macOS, Solaris, and OpenSolaris. Supported guest operating
systems are versions and derivatives of Windows, Linux, OS/2, BSD, Haiku, etc. VirtualBox enjoys
wide support, primarily because it is free and open source. It also allows unlimited snapshots, a
feature only available in VMware's paid tiers. VMware, on the other hand, offers excellent
drag-and-drop functionality between host and VM, but many of its features come only in the paid
version.
4.8.2 Hypervisor:
Type I hypervisors run directly on top of the hardware. They take the place of the operating
system and interact directly with the ISA interface exposed by the underlying hardware,
emulating this interface in order to allow the management of guest operating systems. This
type of hypervisor is also called a native virtual machine, since it runs natively on hardware.
Type II hypervisors require the support of an operating system to provide virtualization services.
This means they are programs managed by the operating system, interacting with it through the
ABI and emulating the ISA of virtual hardware for guest operating systems. This type of
hypervisor is also called a hosted virtual machine, since it is hosted within an operating system.
Fig-Hosted (left) and Native (right) Virtual Machine