Abhi Finalreport (20CS03)
Submitted By
Abhishek
(20CS03)
Under Supervision of
2024
Candidate’s Declaration
I certify that the above statement made by the candidate is true to the best of my
knowledge.
ACKNOWLEDGEMENT
The satisfaction that accompanies the successful completion of any task would be
incomplete without the mention of people whose ceaseless cooperation made it
possible, whose constant guidance and encouragement crown all efforts with
success. We are grateful to our seminar guide, Dr. Pankaj Roy, for his
guidance, inspiration and constructive suggestions that helped us in the
preparation of this seminar. We are also thankful to our colleagues who helped
us in the successful completion of this seminar.
Abstract
Contents
1. Introduction
2. How Does Virtualization Work?
3. Virtual Machine
4. Virtual Infrastructure
5. History of Virtualization
6. Virtual Machine & Hypervisor
7. Classification of Virtualization
8. Resource Virtualization
9. Cluster Computing
10. Desktop virtualization
11. VMware Workstation
12. Conclusion
13. References
1. Introduction
Virtualization is a proven software technology that is rapidly transforming the IT
landscape and fundamentally changing the way that people compute. Today's
powerful x86 computer hardware was designed to run a single operating system
and a single application. This leaves most machines vastly underutilized.
Virtualization lets you run multiple virtual machines on a single physical machine,
sharing the resources of that single computer across multiple environments, so that
different operating systems and multiple applications can run on the same physical
computer.
Virtualization can also involve execution of guest software cross-compiled for a
different instruction set or CPU architecture; such emulation or simulation
environments help developers bring up new processors and cross-debug
embedded hardware.
A virtual machine provides a software environment in which software runs as if it
were on bare hardware. This environment is created by a virtual-machine monitor,
also known as a hypervisor. A virtual machine is an efficient, isolated duplicate of the
real machine. The hypervisor presents an interface that looks like hardware to the
"guest" operating system. It allows multiple operating system instances to run
concurrently on a single computer; it is a means of separating hardware from a
single operating system. It can control the guests' use of CPU, memory, and
storage, even allowing a guest OS to migrate from one machine to another. It is
also a method of partitioning one physical server into multiple "virtual"
servers, giving each the appearance and capabilities of running on its own
dedicated machine. Each virtual server functions as a full-fledged server and can
be independently rebooted.
2. How Does Virtualization Work?
3. Virtual Machine
A virtual machine is a tightly isolated software container that can run its own operating
system and applications as if it were a physical computer. A virtual machine behaves
exactly like a physical computer and contains its own virtual (i.e., software-based)
CPU, RAM, hard disk and network interface card (NIC).
An operating system can't tell the difference between a virtual machine and a physical
machine, nor can applications or other computers on a network. Even the virtual
machine thinks it is a "real" computer. Nevertheless, a virtual machine is composed
entirely of software and contains no hardware components whatsoever. As a result,
virtual machines offer a number of distinct advantages over physical hardware.
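As a concrete, hedged illustration of these virtual components, the sketch below launches a guest with the open-source QEMU emulator (not otherwise covered in this report) from Python, giving it two virtual CPUs, 2 GB of virtual RAM, a file-backed virtual disk and a virtual NIC. The disk image name is illustrative and must be created beforehand (e.g. with qemu-img), and qemu-system-x86_64 must be installed.

    import subprocess

    # Illustrative disk image; create it first, e.g. "qemu-img create -f qcow2 guest.qcow2 10G".
    DISK_IMAGE = "guest.qcow2"

    # Each flag maps to one piece of virtual hardware described above:
    #   -smp 2      -> two virtual CPUs
    #   -m 2048     -> 2048 MB of virtual RAM
    #   -drive ...  -> a virtual hard disk backed by an ordinary file
    #   -nic user   -> a virtual NIC behind user-mode NAT
    subprocess.run([
        "qemu-system-x86_64",
        "-smp", "2",
        "-m", "2048",
        "-drive", f"file={DISK_IMAGE},format=qcow2",
        "-nic", "user",
    ], check=True)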
4. Virtual Infrastructure
A virtual infrastructure lets you share the physical resources of multiple
machines across your entire infrastructure, just as a virtual machine lets you share
the resources of a single physical computer across multiple virtual machines for
maximum efficiency. Resources are shared across multiple virtual machines and
applications. This resource optimization drives greater flexibility in the
organization and results in lower capital and operational costs.
5. History of Virtualization
Virtualization is a proven concept that was first developed in the 1960s by IBM as
a way to logically partition large mainframe hardware into separate virtual
machines. These partitions allowed mainframes to "multitask": to run multiple
applications and processes at the same time.
Virtualization was effectively abandoned during the 1980s and 1990s, when client-
server applications and inexpensive x86 servers and desktops established the
model of distributed computing. The growth in x86 server and desktop
deployments has since introduced new IT infrastructure and operational challenges.
Present Day
Today, computers based on the x86 architecture face the same
problems of rigidity and underutilization that mainframes faced in the
1960s. Today's powerful x86 computer hardware was originally designed
to run only a single operating system and a single application, but
virtualization breaks that bond, making it possible to run multiple
operating systems and multiple applications on the same computer at the
same time, increasing the utilization and flexibility of hardware.
Virtual machines have many uses:
• Virtual machines can be used to create operating systems or execution
environments with resource limits and, given the right schedulers, resource
guarantees.
• Virtual machines can provide the illusion of hardware, or of a hardware
configuration that you do not have (such as SCSI devices or multiple
processors). Virtualization can also be used to simulate networks of
independent computers.
• Virtual machines can be used to run multiple operating systems
simultaneously: different versions, or even entirely different systems, which
can be on hot standby. Some such systems may be hard or impossible to run on
newer real hardware.
• Virtual machines allow for powerful debugging and performance monitoring.
• Virtual machines can isolate what they run, so they provide fault and error
containment. You can inject faults proactively into software to study its
subsequent behavior.
• Virtual machines are great tools for research and academic experiments. Since
they provide isolation, they are safer to work with. They encapsulate the entire
state of a running system: you can save the state, examine it, modify it, reload
it, and so on. The state also provides an abstraction of the workload being run.
• Virtualization can enable existing operating systems to run on shared-memory
multiprocessors.
The broader benefits of virtualization include:
• Driving out the cost of IT infrastructure through more efficient use of available
resources.
• Simplifying the infrastructure.
• Increasing system availability.
• Delivering consistently good performance.
• Centralizing systems, data, and infrastructure.
6. Virtual Machine & Hypervisor
A virtual machine (VM) is a software implementation of a machine (a computer) that
executes programs like a real machine.
The main advantages of system VMs are:
• multiple OS environments can co-exist on the same computer, in strong
isolation from each other.
• the virtual machine can provide an instruction set architecture (ISA) that is
somewhat different from that of the real machine.
The guest OSes do not all have to be the same, making it possible to run different
operating systems on the same computer (e.g., Microsoft Windows and Linux, or older
versions of an OS in order to support software that has not yet been ported to the
latest version).
Process virtual machines:
A process VM, sometimes called an application virtual machine,
runs as a normal application inside an OS and supports a single process. It is
created when that process is started and destroyed when it exits. Its purpose is to
provide a platform-independent programming environment that abstracts away
details of the underlying hardware or operating system, and allows a program to
execute in the same way on any platform.
A process VM provides a high-level abstraction, that of a high-level
programming language (compared with the low-level ISA abstraction of the system
VM). Process VMs are implemented using an interpreter; performance comparable
to compiled programming languages is achieved by the use of just-in-time
compilation. This type of VM has become popular with the Java language, which
runs on the Java Virtual Machine (JVM), and the .NET Framework, which runs on a
VM called the Common Language Runtime (CLR).
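CPython is itself a process VM in this sense: Python source is compiled to bytecode for a virtual instruction set, which an interpreter then executes (other implementations add just-in-time compilation). The tiny sketch below, using only the standard dis module, makes that virtual instruction set visible.

    import dis

    def add(a, b):
        return a + b

    # Print the virtual-ISA instructions (CPython bytecode) that the process VM
    # executes in place of native machine code for this function.
    dis.dis(add)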
Techniques
Emulation of the underlying raw hardware (native execution)
Fig. 5: VMware Workstation running Ubuntu on Windows Vista
Hypervisor
A hypervisor, also called a virtual machine monitor (VMM), is computer software that
virtualizes a hardware platform and allows multiple operating systems to run on a
host computer concurrently.
Classifications
Hypervisors are classified into two types:
Type 1 (or native, bare-metal) hypervisors are software systems that run directly
on the host's hardware as a hardware control and guest operating system
monitor. A guest operating system thus runs at the level above the
hypervisor.
Type 2 (or hosted) hypervisors are software applications running within a
conventional operating system environment. With the hypervisor counted as a
distinct software layer, guest operating systems thus run at the third
level above the hardware.
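On Linux, a guest OS can often tell that it is running on top of a hypervisor because hypervisors expose a "hypervisor" CPU feature flag. The sketch below is a minimal, Linux- and x86-specific illustration of that guest/hypervisor relationship, not a general detection tool.

    def running_under_hypervisor(cpuinfo_path="/proc/cpuinfo"):
        """Return True if the 'hypervisor' CPU flag is present (Linux x86 guests)."""
        try:
            with open(cpuinfo_path) as f:
                for line in f:
                    if line.startswith("flags") and "hypervisor" in line.split():
                        return True
        except OSError:
            pass  # not Linux, or /proc unreadable; result unknown
        return False

    if __name__ == "__main__":
        print("Running as a guest OS:", running_under_hypervisor())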
7. Classification of Virtualization
Here we discuss the different types of virtualization.
Platform Virtualization, which separates an operating system from the
underlying platform resources:
• Full virtualization
• Hardware-assisted virtualization
• Partial virtualization
• Paravirtualization
• Operating system-level virtualization
• Hosted environment
Resource Virtualization, the virtualization of specific system resources, such
as storage volumes, namespaces, and network resources:
o Storage virtualization, the process of completely abstracting logical storage
from physical storage
• RAID (redundant array of independent disks)
• Disk partitioning
o Network virtualization, the creation of a virtualized network addressing space
within or across network subnets
o Computer clusters and grid computing, the combination of
multiple discrete computers into larger metacomputers
Desktop Virtualization, the remote manipulation of a computer desktop.
Application virtualization, the hosting of individual applications on
alien hardware/software:
• Portable application
• Cross-platform virtualization
• Emulation or simulation
Platform Virtualization
Platform virtualization is the virtualization of computers or operating systems. It hides the
physical characteristics of a computing platform from users, instead presenting an
abstract, emulated computing platform.
Concept
The creation and management of virtual machines has been called platform virtualization, or
server virtualization. Platform virtualization is performed on a given hardware platform by
host software (a control program), which creates a simulated computer environment, a
virtual machine, for its guest software. The guest software, which is often itself a complete
operating system, runs just as if it were installed on a stand-alone hardware platform.
Typically, many such virtual machines are simulated on a single physical machine, their
number limited by the host's hardware resources. There is usually no requirement for a
guest OS to be the same as the host one. The guest system often requires access to specific
peripheral devices to function, so the simulation must support the guest's interfaces to those
devices. Trivial examples of such devices are the hard disk drive and the network interface
card.
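To make the host-software/guest relationship concrete, the hedged sketch below uses the libvirt Python bindings (a management API, not discussed elsewhere in this report) to list the virtual machines that a host's control program currently manages. It assumes the libvirt-python package is installed and a libvirt daemon with QEMU/KVM is running on the host.

    import libvirt  # pip install libvirt-python; needs a running libvirt daemon

    # Connect to the local system hypervisor (QEMU/KVM in this sketch).
    conn = libvirt.open("qemu:///system")

    # Each "domain" is one guest virtual machine managed by the host software.
    for dom in conn.listAllDomains():
        state = "running" if dom.isActive() else "shut off"
        print(f"{dom.name():20s} {state}")

    conn.close()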
There are several approaches to platform virtualization.
Full virtualization
In full virtualization, the virtual machine simulates enough hardware to allow an
unmodified "guest" OS (one designed for the same instruction set) to be run in
isolation. This approach was pioneered in 1966 with the IBM CP-40 and CP-67,
predecessors of the VM family.
Hardware-assisted virtualization
In hardware-assisted virtualization, the hardware provides architectural support that
facilitates building a virtual machine monitor and allows guest OSes to be run in
isolation. In 2005 and 2006, Intel and AMD added hardware support for virtualization
to their processors. Examples of hypervisors that use it include Linux KVM, VMware
Workstation, VMware Fusion, Microsoft Virtual PC, Xen, Parallels Desktop for Mac,
VirtualBox and Parallels Workstation. Hardware virtualization technologies include
(a short detection sketch follows the list):
• AMD-V x86 virtualization (previously known as Pacifica)
• IBM Advanced POWER Virtualization
• Intel VT x86 virtualization (previously known as Vanderpool)
• UltraSPARC T1 and UltraSPARC T2 processors from Sun Microsystems,
which have a hyper-privileged execution mode
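On Linux, support for Intel VT-x and AMD-V shows up as the vmx and svm CPU flags in /proc/cpuinfo; the snippet below is a minimal, Linux-specific detection sketch and nothing more.

    def hardware_virtualization_flags(cpuinfo_path="/proc/cpuinfo"):
        """Return the hardware-assist flags found: 'vmx' = Intel VT-x, 'svm' = AMD-V."""
        found = set()
        try:
            with open(cpuinfo_path) as f:
                for line in f:
                    if line.startswith("flags"):
                        found |= set(line.split()) & {"vmx", "svm"}
        except OSError:
            pass  # not Linux, or /proc unavailable
        return found

    if __name__ == "__main__":
        flags = hardware_virtualization_flags()
        print("Hardware-assisted virtualization:", ", ".join(sorted(flags)) or "not detected")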
Partial virtualization
In partial virtualization (also called "address space virtualization"), the virtual machine
simulates multiple instances of much (but not all) of an underlying hardware
environment, particularly address spaces. Such an environment supports resource
sharing and process isolation, but does not allow separate "guest" operating system
instances.
Paravirtualization
In paravirtualization, the virtual machine does not necessarily simulate hardware, but
instead (or in addition) offers a special API that can only be used by modifying the
"guest" OS. Such a call from the guest into the hypervisor is called a "hypercall" in
TRANGO and Xen.
Operating system-level virtualization
In operating system-level virtualization, a physical server is virtualized at the operating
system level, enabling multiple isolated and secure virtualized servers to run on a single
physical server. The "guest" OS environments share the same OS as the host system,
i.e., the same OS kernel is used to implement the "guest" environments. Applications
running in a given "guest" environment view it as a stand-alone system.
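A very rough way to see that guest environments share the host kernel is the classic chroot jail sketched below; real OS-level virtualization (containers, zones, OpenVZ and the like) adds namespaces and resource controls on top of this idea. The sketch assumes a prepared minimal root filesystem at the illustrative path /srv/guestroot and must be run as root.

    import os

    # Illustrative directory holding a minimal root filesystem (bin/, lib/, ...).
    GUEST_ROOT = "/srv/guestroot"

    def run_in_guest(command):
        """Run a command that sees GUEST_ROOT as its root directory (requires root)."""
        pid = os.fork()
        if pid == 0:                        # child: becomes the "guest" process
            os.chroot(GUEST_ROOT)           # the guest now sees GUEST_ROOT as "/"
            os.chdir("/")
            os.execvp(command[0], command)  # note: still running on the *host* kernel
        else:                               # parent: wait for the guest process
            os.waitpid(pid, 0)

    if __name__ == "__main__":
        run_in_guest(["/bin/sh", "-c", "ls /"])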
Hosted environment
Applications that are hosted on third-party servers and that can be called or used
from a remote system's environment.
8. Resource Virtualization
Resource virtualization is the virtualization of specific system resources, such as
storage volumes, namespaces, and network resources.
Storage Virtualization
Storage virtualization is the pooling of multiple physical storage resources into
what appears to be a single storage resource that is centrally managed. Storage virtualization
automates tedious and extremely time-consuming storage administration tasks. This means
the storage administrator can perform the tasks of backup, archiving, and recovery more
easily and in less time, because the overall complexity of the storage infrastructure is
disguised. Storage virtualization is commonly used in file systems, storage area networks
(SANs), switches and virtual tape systems. Users can implement storage virtualization with
software, or with hybrid hardware/software appliances. Virtualization hides the physical
complexity of storage from storage administrators and applications, making it possible to
manage all storage as a single resource. In addition to easing the storage management burden,
this approach dramatically improves efficiency and cuts overall costs.
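The hedged sketch below illustrates only the core pooling idea: several physical backing files are presented as one linear, centrally managed volume, and the mapping from logical offsets to physical devices stays hidden from the caller. Real products (LVM, SAN virtualizers, virtual tape systems) do far more; all names here are illustrative, and I/O that crosses a device boundary is deliberately not handled.

    class StoragePool:
        """Present several backing files as one logical, linearly addressed volume."""

        def __init__(self, backing_paths, size_per_device):
            self.devices = [open(p, "r+b") for p in backing_paths]
            self.size_per_device = size_per_device

        def _locate(self, logical_offset):
            # Hide the physical layout: map a logical offset to (device, local offset).
            index, local = divmod(logical_offset, self.size_per_device)
            return self.devices[index], local

        def write(self, logical_offset, data):
            dev, local = self._locate(logical_offset)
            dev.seek(local)
            dev.write(data)

        def read(self, logical_offset, length):
            dev, local = self._locate(logical_offset)
            dev.seek(local)
            return dev.read(length)

A caller addresses only one logical volume; which backing file actually holds a given offset is the pool's concern, which is the sense in which the complexity of the storage infrastructure is disguised.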
The Advantages of Storage Virtualization
Storage virtualization provides many advantages. First, it enables the pooling
of multiple physical resources into a smaller number of resources or even a single resource,
which reduces complexity. Many environments have become complex, which increases the
storage management gap. With regard to resources, pooling is an important way to achieve
simplicity. A second advantage of using storage virtualization is that it automates many time-
consuming tasks. In other words, policy-driven virtualization tools take people out of the loop
of addressing each alert or interrupt in the storage business. A third advantage of storage
virtualization is that it can be used to disguise the overall complexity of the infrastructure.
Network virtualization
Network virtualization is the process of combining hardware and software
network resources and network functionality into a single, software-based administrative
entity, a virtual network. Network virtualization involves platform virtualization, often
combined with resource virtualization. Network virtualization is categorized as either
external, combining many networks, or parts of networks, into a virtual unit, or internal,
providing network-like functionality to the software containers on a single system. Whether
virtualization is internal or external depends on the implementation provided by vendors that
support the technology.
Components of a virtual network
Various equipment and software vendors offer network virtualization by combining any of the
following (a minimal Linux sketch follows the list):
• Network hardware, such as switches and network adapters, also known as network
interface cards (NICs)
• Networks, such as virtual LANs (VLANs), and containers such as virtual machines
and Solaris Containers
• Network storage devices
• Network media, such as Ethernet and Fibre Channel
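As a minimal, Linux-specific illustration of internal network virtualization, the sketch below drives the standard iproute2 commands from Python (root privileges required) to create a virtual network namespace and a virtual Ethernet pair connecting it to the host; the namespace name and addresses are purely illustrative.

    import subprocess

    def sh(cmd):
        """Run one iproute2 command, raising on failure (requires root)."""
        subprocess.run(cmd.split(), check=True)

    # A virtual "computer" from the network's point of view: its own namespace,
    # interfaces and routing table, implemented entirely in software on one host.
    sh("ip netns add blue")

    # A virtual Ethernet cable: two linked software NICs, veth-host and veth-blue.
    sh("ip link add veth-host type veth peer name veth-blue")
    sh("ip link set veth-blue netns blue")

    # Address each end and bring the links up.
    sh("ip addr add 10.0.0.1/24 dev veth-host")
    sh("ip link set veth-host up")
    sh("ip netns exec blue ip addr add 10.0.0.2/24 dev veth-blue")
    sh("ip netns exec blue ip link set veth-blue up")

    # The host can now reach the software-only network node it just created.
    sh("ping -c 1 10.0.0.2")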
9. Cluster Computing
Compute clusters
Clusters are used primarily for computational purposes, rather than for I/O-oriented
operations such as web service or databases. For instance, a cluster might support
computational simulations of weather or vehicle crashes.
Grid computing
Grids are usually compute clusters, but more focused on throughput, like a computing
utility, rather than on running fewer, tightly coupled jobs. Grids often incorporate
heterogeneous collections of computers, possibly geographically distributed, and are
sometimes administered by unrelated organizations.
Grid computing is optimized for workloads which consist of many independent jobs or
packets of work, which do not have to share data between the jobs during the computation
process. Grids serve to manage the allocation of jobs to computers which will perform the
work independently of the rest of the grid cluster. Resources such as storage may be
shared by all the nodes, but intermediate results of one job do not affect other jobs in
progress on other nodes of the grid.
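That defining property, many independent jobs that never share intermediate results, can be sketched in miniature with Python's multiprocessing pool; a real grid scheduler hands such jobs to machines across sites and organizations rather than to local worker processes.

    from multiprocessing import Pool

    def independent_job(work_packet):
        """One self-contained unit of work; it never sees other jobs' intermediate results."""
        return sum(i * i for i in range(work_packet))

    if __name__ == "__main__":
        work_packets = [10_000, 20_000, 30_000, 40_000]
        # The pool plays the role of the grid: each packet goes to whichever worker
        # is free, and the independent results are simply collected at the end.
        with Pool(processes=4) as pool:
            results = pool.map(independent_job, work_packets)
        print(results)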
Application Virtualization
Application virtualization is a term that describes software
technologies that improve portability, manageability and compatibility of applications
by encapsulating them from the underlying operating system on which they are
executed. A fully virtualized application is not installed in the traditional sense,
although it is still executed as if it is. The application is fooled at runtime into
believing that it is directly interfacing with the original operating system and all the
resources managed by it, when in reality it is not. Application virtualization differs
from operating system virtualization in that in the latter case, the whole operating
system is virtualized rather than only specific applications.
Description
Limited application virtualization is used in modern operating systems such as
Microsoft Windows and Linux. For example, INI file mappings were introduced with
Windows NT to virtualize (into the Registry) the legacy INI files of applications
originally written for Windows 3.1.
Full application virtualization requires a virtualization layer. This layer must be
installed on a machine to intercept all file and Registry operations of virtualized
applications and transparently redirect these operations into a virtualized location. The
application performing the file operations never knows that it's not accessing the
physical resource it believes it is. In this way, applications with many dependent files
and settings can be made portable by redirecting all their input/output to a single
physical file, and traditionally incompatible applications can be executed side-by-side.
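A real virtualization layer hooks file and Registry operations system-wide; the toy sketch below only mimics the redirection idea in pure Python, mapping the paths an application asks for into a private, per-application location. All paths and names are illustrative.

    import os

    class VirtualizedFileLayer:
        """Toy redirection layer: the application thinks it uses its usual paths,
        but every file it touches lands in a private per-application directory."""

        def __init__(self, app_name, sandbox_root="/tmp/appvirt"):
            self.sandbox = os.path.join(sandbox_root, app_name)
            os.makedirs(self.sandbox, exist_ok=True)

        def _redirect(self, requested_path):
            # Map the requested path into the sandbox, flattening the hierarchy.
            return os.path.join(self.sandbox, requested_path.lstrip("/").replace("/", "_"))

        def open(self, requested_path, mode="r"):
            return open(self._redirect(requested_path), mode)

    # The "application" believes it is writing a global settings file.
    layer = VirtualizedFileLayer("legacy-app")
    with layer.open("/etc/legacy-app/settings.ini", "w") as f:
        f.write("colour=blue\n")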
Benefits of application virtualization
i. Allows applications to run in environments that do not suit the native
application (e.g. Wine allows Microsoft Windows applications to run on Linux).
ii. Uses fewer resources than a separate virtual machine.
iii. Run incompatible applications side-by-side, at the same time and with minimal
regression testing against one another.
iv. Implement the security principle of least privilege by removing the requirement
for end-users to have Administrator privileges in order to run poorly written
applications.
v. Simplified operating system migrations.
Disadvantages of application virtualization
i. Applications have to be "packaged" or "sequenced" before they will run in a
virtualized way.
ii. A minimal increase in resource requirements (memory and disk storage). Not all
software can be virtualized; examples include applications that require a
device driver and 16-bit applications that need to run in shared memory space.
iii. Some compatibility issues between legacy applications and newer operating
systems cannot be addressed by application virtualization (although they can still
be run on an older operating system under a virtual machine).
Cross-platform virtualization
Cross-platform virtualization is a form of computer virtualization that
allows software compiled for a specific CPU and operating system to run unmodified
on computers with different CPUs and/or operating systems, through a combination
of dynamic binary translation and operating system call mapping. Since the software
runs on a virtualized equivalent of the original computer, it does not require
recompilation or porting, thus saving time and development resources. However, the
processing overhead of binary translation and call mapping imposes a performance
penalty when compared to natively compiled software. For this reason, cross-platform
virtualization may be used as a temporary solution until resources are
available to port the software.
By creating an abstraction layer capable of running software compiled
for a different computer system, cross-platform virtualization is characterized by the
Popek and Goldberg virtualization requirements. Cross-platform virtualization is
distinct from simple emulation and binary translation, which involve the direct
translation of one CPU instruction set to another, since the inclusion of operating
system call mapping provides a more complete virtualized environment. Cross-platform
virtualization is also complementary to server virtualization and desktop
virtualization solutions, since these are typically constrained to a single CPU type,
such as x86 or POWER.
Emulation
Emulation refers to the imitation of the behavior of a computer or other
electronic system with the help of another type of computer or system. A console
emulator is a program that allows a computer or a modern console to emulate another
video game console. Hardware emulation is the use of special-purpose hardware to
emulate the behavior of a yet-to-be-built system, with greater speed than pure
software emulation.
Simulation
Simulation is the imitation of some real thing, state of affairs, or
process. The act of simulating something generally entails representing certain key
characteristics or behaviors of a selected physical or abstract system. A computer
simulation (or "sim") is an attempt to model a real-life or hypothetical situation on a
computer so that it can be studied to see how the system works. By changing
variables, predictions may be made about the behavior of the system. Computer
simulation has become a useful part of modeling many natural systems in physics,
chemistry and biology, and human systems in economics as well as in engineering
to gain insight into the operation of those systems.
10. Desktop virtualization
Desktop virtualization or Virtual desktop infrastructure (VDI) is a
server-centric computing model that borrows from the traditional thin-client model
but is designed to give system administrators and end-users the ability to host and
centrally manage desktop virtual machines in the data center while giving end users
a full PC desktop experience.
Rationale
Installing and maintaining separate PC workstations is complex, and
traditionally users have almost unlimited ability to install or remove software.
Desktop virtualization provides many of the advantages of a terminal server, but (if
so desired and configured by system administrators) can give users much more
flexibility. Each user, for instance, might be allowed to install and configure their own
applications. Users also gain the ability to access their server-based virtual desktop
from other locations.
Advantages
Instant provisioning of new desktops
Near-zero downtime in the event of hardware failures
Significant reduction in the cost of new application deployment
Robust desktop image management capabilities
Existing desktop-like performance including multiple monitors, bidirectional
audio/video, streaming video, USB support etc.
11. VMware Workstation
VMware Workstation is a virtual machine software suite for x86 and x86-64
computers from VMware, a division of EMC Corporation. This software suite allows users
to set up multiple x86 and x86-64 virtual computers and to use one or more of these virtual
machines simultaneously with the hosting operating system. Each virtual machine instance
can execute its own guest operating system, such as Windows, Linux, BSD variants, or
others. In simple terms, VMware Workstation allows one physical machine to run multiple
operating systems simultaneously.
Microsoft Virtual Server
Microsoft Virtual Server is a virtualization solution that facilitates the
creation of virtual machines on the Windows XP, Windows Vista and Windows Server
2003 operating systems. Originally developed by Connectix, it was acquired by Microsoft
prior to release. Virtual PC is Microsoft's related desktop virtualization software package.
Virtual machines are created and managed through an IIS web-based interface or through a
Windows client application tool called VMRC Plus. The current version is Microsoft
Virtual Server 2005 R2 SP1. New features in R2 SP1 include Linux guest operating system
support, a Virtual Disk Precompactor, SMP on the host (but not for the guest OS), x86-64
(x64) host OS support (but not guest OS support), the ability to mount virtual hard drives
on the host OS, and additional supported host operating systems, including Windows Vista.
It also provides a Volume Shadow Copy writer which enables live backups of the
Guest OS on a Windows Server 2003 or Windows Server 2008 Host. A utility to
mount VHD images is also included since SP1. Officially supported Linux guest
operating systems include Red Hat Enterprise Linux versions 2.1-5.0, Red Hat Linux
9.0, and SUSE Linux and SUSE Linux Enterprise Server versions 9 and 10.
Microsoft Virtual PC
Microsoft Virtual PC is a virtualization suite for Microsoft Windows
operating systems, and an emulation suite for Mac OS X on PowerPC-based systems.
The software was originally written by Connectix, and was subsequently acquired by
Microsoft. In July 2006 Microsoft released the Windows-hosted version as a free
product. In August 2006 Microsoft announced the Macintosh-hosted version would
not be ported to Intel-based Macintoshes, effectively discontinuing the product as
PowerPC-based Macintoshes are no longer manufactured. Virtual PC virtualizes a
standard PC and its associated hardware. Supported Windows operating systems can
run inside Virtual PC. However, other operating systems like Linux may run, but are
not officially supported (for example, Ubuntu, a popular Linux distribution can get
past the boot screen of the Live CD (and function fully) when using
Safe Graphics Mode).
VirtualBox
VirtualBox is an x86 virtualization software package, originally created by the German
software company innotek and now developed by Sun Microsystems as part of its Sun
xVM virtualization platform. It is installed on an existing host operating system;
within this application, additional operating systems, each known as a Guest OS, can
be loaded and run, each with its own virtual environment. Supported host operating
systems include Linux, Mac OS X, OS/2 Warp, Windows XP or Vista, and Solaris,
while supported guest operating systems include FreeBSD, Linux, OpenBSD, OS/2
Warp, Windows and Solaris. According to a 2007 survey, VirtualBox is the third
most popular software package for running Windows programs on Linux desktops.
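VirtualBox also ships with a command-line front end, VBoxManage, which the hedged sketch below drives from Python to register and boot a guest headlessly; the VM name and OS type are illustrative, VirtualBox must be installed, and a disk plus installation medium would still have to be attached (omitted here).

    import subprocess

    VM_NAME = "demo-guest"  # illustrative name

    def vbox(*args):
        """Invoke the VBoxManage command-line tool that ships with VirtualBox."""
        subprocess.run(["VBoxManage", *args], check=True)

    # Register a new guest, give it RAM and a NAT NIC, then boot it without a GUI.
    vbox("createvm", "--name", VM_NAME, "--ostype", "Ubuntu_64", "--register")
    vbox("modifyvm", VM_NAME, "--memory", "2048", "--nic1", "nat")
    vbox("startvm", VM_NAME, "--type", "headless")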
Xen
Xen is a type 1 hypervisor. The first guest operating system, called "domain 0" (dom0),
is booted automatically when the Xen hypervisor boots and is given special management
privileges and direct access to the physical hardware. The system administrator logs into
dom0 in order to start any further guest operating systems, called "domain U" (domU) in
Xen terminology.
12. Conclusion
Virtualization dramatically improves the efficiency and
availability of resources and applications. Earlier, internal resources were
underutilized under the old "one server, one application" model, and users
spent too much time managing servers rather than innovating. With a
virtualization platform, users can respond faster and more efficiently than ever
before. Users can save 50-70% on overall IT costs by consolidating their
resource pools and delivering highly available machines.
Other major improvements from using virtualization are that users can:
Reduce capital costs by requiring less hardware, and lower operational
costs while increasing the server-to-admin ratio.
Ensure enterprise applications perform with the highest availability and
performance.
Build business continuity through improved disaster recovery
solutions and deliver high availability throughout the datacenter.
Improve desktop management with faster deployment of desktops and
fewer support calls due to application conflicts.
Even after the implementation of distributed computing and
other technologies, virtualization has proved to be effective at using the
available resources of a system fully and efficiently.
13. References
Websites:
[1.] www.wikipedia.com
[2.] http://www.vmware.com
[3.] http://www.kernalthread.com.
[4.] www.virtualizationadmin.com
[5.] www.virtualization.org
[6.] www.microsft.com/virtualization.aspx
Books:
[1.] Virtualization: From Beginners to Professionals, Apress Publications.
[2.] Operating System Concepts, Silberschatz, Galvin, Gagne, Wiley Publications.
[3.] VMware White Paper, VMware Official Digital Repository.