
VMware Virtualization

A Seminar Report Submitted in partial fulfilment for the award of


Degree of Bachelor of Technology

In

Computer Science & Information Technology

Submitted By

Abhishek
(20CS03)

Under Supervision of

Prof. Ravindra Singh

Department of Computer Science & Information Technology

Faculty of Engineering and Technology

M.J.P. Rohilkhand University, Bareilly

2024
Candidate’s Declaration

I hereby declare that the seminar report titled “VMware Virtualization”, is


prepared by me based on available literature and I have not submitted it anywhere
else for the award of any degree or diploma.

Date: / / 2024 Abhishek (20CS03)

Certificate from Supervisor

I certify that the above statement made by the candidate is true to the best of my
knowledge.

Date: / / 2024 Prof. Ravindra Singh

ACKNOWLEDGEMENT

The satisfaction that accompanies the successful completion of any task would be
incomplete without the mention of people whose ceaseless cooperation made it
possible, whose constant guidance and encouragement crown all efforts with
success. We are grateful to our seminar topic guide Dr. Pankaj Roy for his
guidance, inspiration and constructive suggestions that helped us in the
preparation of this seminar topic. We are also thankful to our colleagues who have
helped us in successful completion of the seminar topic.

Abstract

VMware pioneered x86-based virtualization in 1998 and continues to be the innovator in that market, providing the fundamental virtualization technologies for all leading x86-based hardware suppliers. The company offers a variety of software-based partitioning approaches, utilizing both hosted (Workstation and VMware Server) and hypervisor (ESX Server) architectures.

VMware's virtual machine (VM) approach creates a uniform hardware image platform. VMware's VirtualCenter provides management and provisioning of virtual machines, continuous workload consolidation across physical servers, and VMotion technology for virtual machine mobility.

VirtualCenter is virtual infrastructure management software that centrally manages an enterprise's virtual machines as a single, logical pool of resources. With VirtualCenter, an administrator can manage thousands of Windows NT, Windows 2000, Windows 2003, Linux, and NetWare servers from a single point of control.

Keywords: VM (virtual machine), ESX Server (hypervisor), VMotion (virtual machine migration technology).

Content
1. Introduction

2. How Does Virtualization Work

3. Virtual Machine

4. Virtual Infrastructure

5. History of Virtualization

6. Virtual Machine & Hypervisor

7. Classification of Virtualization

8. Resource Virtualization

9. Cluster Computing

10. Desktop Virtualization

11. VMware Workstation

12. Conclusion

13. References

1. Introduction
Virtualization is a proven software technology that is rapidly transforming the IT landscape and fundamentally changing the way that people compute. Today's powerful x86 computer hardware was designed to run a single operating system and a single application. This leaves most machines vastly underutilized. Virtualization lets you run multiple virtual machines on a single physical machine, sharing the resources of that single computer across multiple environments, so that different operating systems and multiple applications can run on the same physical computer.

Virtualization is a framework or methodology of dividing the resources of a computer into multiple execution environments, by applying one or more concepts or technologies such as hardware and software partitioning, time-sharing, complete machine simulation, emulation, quality of service, and many others.

Virtualization is a technology for supporting execution of computer program code, from applications to entire operating systems, in a software-controlled environment. Such a virtual machine (VM) environment abstracts the available system resources (memory, storage, CPU cores, I/O, etc.) and presents them in a regular fashion, such that guest software cannot distinguish VM-based execution from running on bare physical hardware.

Fig. 1 Virtual Machine

Virtualization commonly refers to native virtualization, where the VM platform and the guest software target the same microprocessor instruction set and comparable system architectures.

Virtualization can also involve execution of guest software cross-compiled for a different instruction set or CPU architecture; such emulation or simulation environments help developers bring up new processors and cross-debug embedded hardware.

A virtual machine provides a software environment that allows software to run as if it were on bare hardware. This environment is created by a virtual-machine monitor, also known as a hypervisor. A virtual machine is an efficient, isolated duplicate of the real machine. The hypervisor presents an interface that looks like hardware to the "guest" operating system. It allows multiple operating system instances to run concurrently on a single computer; it is a means of separating hardware from a single operating system. It can control the guests' use of CPU, memory, and storage, even allowing a guest OS to migrate from one machine to another. It is also a method of partitioning one physical server into multiple "virtual" servers, giving each the appearance and capabilities of running on its own dedicated machine. Each virtual server functions as a full-fledged server and can be independently rebooted.
2. How Does Virtualization Work?

Virtualization platforms transform, or "virtualize", the hardware resources of an x86-based computer, including the CPU, RAM, hard disk, and network controller, to create a fully functional virtual machine that can run its own operating system and applications just like a "real" computer. Each virtual machine contains a complete system, eliminating potential conflicts. Virtualization works by inserting a thin layer of software directly on the computer hardware or on a host operating system. This layer contains a virtual machine monitor, or "hypervisor", that allocates hardware resources dynamically and transparently. Multiple operating systems run concurrently on a single physical computer and share hardware resources with each other. By encapsulating an entire machine, including CPU, memory, operating system, and network devices, a virtual machine is completely compatible with all standard x86 operating systems, applications, and device drivers. You can safely run several operating systems and applications at the same time on a single computer, with each having access to the resources it needs when it needs them.
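The scheduling role of that thin software layer can be illustrated with a toy model. The sketch below is not a real hypervisor (which multiplexes guests via hardware traps, not Python calls); it only shows the idea of a layer that owns the physical CPU and hands out time slices to guests in turn. All names here (`ToyMonitor`, `add_guest`) are invented for illustration.

```python
from collections import deque

class ToyMonitor:
    """Toy model of a virtual machine monitor: it owns the physical
    CPU and hands out time slices to guests in turn. Each "guest" is
    just a queue of zero-argument callables standing in for
    instructions; a real hypervisor multiplexes via hardware traps."""

    def __init__(self):
        self.guests = {}                  # name -> deque of callables

    def add_guest(self, name, instructions):
        self.guests[name] = deque(instructions)

    def run(self, slice_size=1):
        trace = []                        # which guest ran at each step
        while any(self.guests.values()):
            for name, queue in self.guests.items():
                for _ in range(min(slice_size, len(queue))):
                    queue.popleft()()     # execute one guest "instruction"
                    trace.append(name)
        return trace

log = []
monitor = ToyMonitor()
monitor.add_guest("vm1", [lambda: log.append("vm1 work")] * 2)
monitor.add_guest("vm2", [lambda: log.append("vm2 work")] * 2)
trace = monitor.run()   # the guests share the CPU in alternating slices
```

Running this interleaves the two guests' work, which is the essence of "allocating hardware resources dynamically and transparently".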

3. Virtual Machine

Fig. 2

A virtual machine is a tightly isolated software container that can run its own operating system and applications as if it were a physical computer. A virtual machine behaves exactly like a physical computer and contains its own virtual (i.e. software-based) CPU, RAM, hard disk and network interface card (NIC).
An operating system can't tell the difference between a virtual machine and a physical
machine, nor can applications or other computers on a network. Even the virtual
machine thinks it is a "real" computer. Nevertheless, a virtual machine is composed
entirely of software and contains no hardware components whatsoever. As a result,
virtual machines offer a number of distinct advantages over physical hardware.

Virtual Machine Benefits:


Virtual machines possess four key characteristics that benefit the user.
• Compatibility: Virtual machines are compatible with all standard x86 computers.
• Isolation: Virtual machines are isolated from each other as if physically separated.
• Encapsulation: Virtual machines encapsulate a complete computing environment.
• Hardware independence: Virtual machines run independently of underlying
hardware.

4. Virtual Infrastructure
A virtual infrastructure lets you share the physical resources of multiple machines across your entire infrastructure. A virtual machine lets you share the resources of a single physical computer across multiple virtual machines for maximum efficiency. Resources are shared across multiple virtual machines and applications. This resource optimization drives greater flexibility in the organization and results in lower capital and operational costs.

Fig. 3 Virtual infrastructure

A virtual infrastructure consists of the following components:


• Bare-metal hypervisors to enable full virtualization of each x86 Computer.
• Virtual infrastructure services such as resource management and
consolidated backup to optimize available resources among virtual
machines.

5. History of Virtualization
Virtualization is a proven concept that was first developed in the 1960s by IBM as
a way to logically partition large, mainframe hardware into separate virtual
machines. These partitions allowed mainframes to "multitask"; run multiple
applications and processes at the same time.

Virtualization was effectively abandoned during the 1980s and 1990s when client-
server applications and inexpensive x86 servers and desktops established the
model of distributed computing. The growth in x86 server and desktop
deployments has introduced new IT infrastructure and operational challenges.
These challenges include:

 Low Infrastructure Utilization: Typical x86 server deployments achieve an average utilization of only 10% to 15% of total capacity. Organizations typically run one application per server to avoid the risk of vulnerabilities in one application affecting the availability of another application on the same server.
 Increasing Physical Infrastructure Costs: The operational costs to support growing physical infrastructure have steadily increased. Most computing infrastructure must remain operational at all times, resulting in power consumption, cooling and facilities costs that do not vary with utilization levels.
 Increasing IT Management Costs: As computing environments become more complex, the level of specialized education and experience required for infrastructure management personnel, and the associated costs of such personnel, have increased. Organizations spend disproportionate time and resources on manual tasks associated with server maintenance, and thus require more personnel to complete these tasks.
 Insufficient Failover and Disaster Protection: Organizations are increasingly affected by the downtime of critical server applications and inaccessibility of critical end-user desktops. The threat of security attacks, natural disasters, health pandemics and terrorism has elevated the importance of business continuity planning for both desktops and servers.
 High-Maintenance End-User Desktops: Managing and securing enterprise desktops presents numerous challenges. Controlling a distributed desktop environment and enforcing management, access and security policies without impairing users' ability to work effectively is complex and expensive.

Present Day
Today, computers based on x86 architecture are faced with the same problems of rigidity and underutilization that mainframes faced in the 1960s. Today's powerful x86 computer hardware was originally designed to run only a single operating system and a single application, but virtualization breaks that bond, making it possible to run multiple operating systems and multiple applications on the same computer at the same time, increasing the utilization and flexibility of hardware.

Why Virtualization? A List of Reasons


Following are some reasons for and benefits of virtualization:
 Virtual machines can be used to consolidate the workloads of several underutilized servers to fewer machines, perhaps a single machine (server consolidation). Related benefits are savings on hardware, environmental costs, management, and administration of the server infrastructure.
 The need to run legacy applications is served well by virtual machines. A legacy application might simply not run on newer hardware and/or operating systems. Even if it does, it may underutilize the server.
 Virtual machines can be used to provide secure, isolated sandboxes for
running untrusted applications. You could even create such an
execution environment dynamically on the fly as you download
something from the Internet and run it.

 Virtual machines can be used to create operating systems or execution environments with resource limits and, given the right schedulers, resource guarantees.
 Virtual machines can provide the illusion of hardware, or of a hardware configuration that you do not have (such as SCSI devices or multiple processors). Virtualization can also be used to simulate networks of independent computers.
 Virtual machines can be used to run multiple operating systems
simultaneously: different versions, or even entirely different systems, which
can be on hot standby. Some such systems may be hard or impossible to run on
newer real hardware.
 Virtual machines allow for powerful debugging and performance monitoring.
 Virtual machines can isolate what they run, so they provide fault and error containment. You can inject faults proactively into software to study its subsequent behavior.
 Virtual machines are great tools for research and academic experiments. Since
they provide isolation, they are safer to work with. They encapsulate the entire
state of a running system: you can save the state, examine it, modify it, reload
it, and so on. The state also provides an abstraction of the workload being run.
 Virtualization can enable existing operating systems to run on shared memory
multiprocessors.
 Driving out the cost of IT infrastructure through more efficient use of available
resources.
 Simplifying the infrastructure.
 Increasing system availability.
 Delivering consistently good performance.
 Centralizing systems, data, and infrastructure.

6. Virtual Machine & Hypervisor
A virtual machine (VM) is a software implementation of a machine (computer) that executes programs like a real machine.

Fig. 4 Connectix Virtual PC Version 3 in Mac OS 9


A virtual machine was originally defined by Popek and Goldberg as "an efficient, isolated duplicate of a real machine". Virtual machines are separated into two major categories, based on their use and degree of correspondence to any real machine. A system virtual machine provides a complete system platform which supports the execution of a complete operating system (OS). A process virtual machine is designed to run a single program, which means that it supports a single process. An essential characteristic of a virtual machine is that the software running inside is limited to the resources and abstractions provided by the virtual machine; it cannot break out of its virtual world.

System virtual machines


System virtual machines (sometimes called hardware virtual machines) allow the
sharing of the underlying physical machine resources between different virtual
machines, each running its own operating system. The software layer providing the
virtualization is called a virtual machine monitor or hypervisor. A hypervisor can
run on bare hardware (Type 1 or native VM) or on top of an operating system
(Type 2 or hosted VM).

The main advantages of system VMs are:
• multiple OS environments can co-exist on the same computer, in strong
isolation from each other.
• the virtual machine can provide an instruction set architecture (ISA) that is
somewhat different from that of the real machine.
The guest OSes do not all have to be the same, making it possible to run different OSes on the same computer (e.g., Microsoft Windows and Linux, or older versions of an OS in order to support software that has not yet been ported to the latest version).
Process virtual machines:
A process VM, sometimes called an application virtual machine,
runs as a normal application inside an OS and supports a single process. It is
created when that process is started and destroyed when it exits. Its purpose is to
provide a platform- independent programming environment that abstracts away
details of the underlying hardware or operating system, and allows a program to
execute in the same way on any platform.
A process VM provides a high-level abstraction: that of a high-level programming language (compared to the low-level ISA abstraction of the system VM). Process VMs are implemented using an interpreter; performance comparable to compiled programming languages is achieved by the use of just-in-time compilation. This type of VM has become popular with the Java Virtual Machine (JVM) and the .NET Framework, which runs on a VM called the Common Language Runtime.
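The core idea of a process VM can be sketched as a minimal stack-based interpreter that executes platform-independent "bytecode". The instruction set below (PUSH/ADD/MUL/RET) is invented for illustration; real process VMs such as the JVM add bytecode verification, garbage collection and JIT compilation.

```python
def run_bytecode(program):
    """Minimal stack-based process VM. Executes a list of (opcode,
    operand...) tuples; the instruction set is invented for this
    sketch and is not any real VM's bytecode."""
    stack = []
    for op, *args in program:
        if op == "PUSH":
            stack.append(args[0])
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == "RET":
            return stack.pop()
    raise ValueError("program ended without RET")

# (2 + 3) * 4; the same bytecode runs unchanged on any host platform.
result = run_bytecode([
    ("PUSH", 2), ("PUSH", 3), ("ADD",),
    ("PUSH", 4), ("MUL",), ("RET",),
])
```

Because the program is data interpreted by the VM, it runs identically wherever the interpreter runs, which is exactly the platform independence the section describes.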
Techniques
Emulation of the underlying raw hardware (native execution)

Fig. 5 VMWare Workstation running Ubuntu on Windows Vista

This approach is described as full virtualization of the hardware, and can be implemented using a Type 1 or Type 2 hypervisor. Each virtual machine can run any operating system supported by the underlying hardware. Users can thus run two or more different "guest" operating systems simultaneously, in separate "private" virtual computers.
Full virtualization is particularly helpful in operating system
development, when experimental new code can be run at the same time as older,
more stable, versions, each in a separate virtual machine.
Emulation of a non-native system
Virtual machines can also perform the role of an emulator, allowing software
applications and operating systems written for another computer processor
architecture to be run. Some virtual machines emulate hardware that only exists as
a detailed specification. For example:
• The specification of the Java virtual machine.
• The Common Language Infrastructure virtual machine at the heart of the
Microsoft .NET initiative.
• Open Firmware allows plug-in hardware to include boot-time diagnostics,
configuration code, and device drivers that will run on any kind of CPU.

Hypervisor
A hypervisor, also called a virtual machine monitor (VMM), is computer hardware-platform virtualization software that allows multiple operating systems to run on a host computer concurrently.
Classifications
Hypervisors are classified in two types:
 Type 1 (or native, bare-metal) hypervisors are software systems that run directly
on the host's hardware as a hardware control and guest operating system
monitor. A guest operating system thus runs on another level above the
hypervisor.
 Type 2 (or hosted) hypervisors are software applications running within a conventional operating system environment. With the hypervisor counted as a distinct software layer, guest operating systems thus run at the third level above the hardware.

7. Classification of Virtualization
Here we discuss the different types of virtualization:
 Platform Virtualization, which separates an operating system from the
underlying platform resources
• Full virtualization
• Hardware-assisted virtualization
• Partial virtualization
• Para virtualization
• Operating system-level virtualization
• Hosted environment
 Resource Virtualization, the virtualization of specific system resources, such
as storage volumes, name spaces, and network resources
o Storage virtualization, the process of completely abstracting logical storage
from physical storage
• RAID - redundant array of independent disks
• Disk partitioning
o Network virtualization, creation of a virtualized network addressing space within or across network subnets
 Computer clusters and grid computing, the combination of multiple discrete computers into larger metacomputers.
 Desktop Virtualization, the remote manipulation of a computer desktop.
 Application virtualization, the hosting of individual applications on
alien hardware/software
• Portable application
• Cross-platform virtualization
• Emulation or simulation

Platform Virtualization
Platform virtualization is the virtualization of computers or operating systems. It hides the physical characteristics of the computing platform from users, instead showing another abstract, emulated computing platform.

Fig. 6 VMware Workstation Ubuntu on Windows, an example of platform Virtualization

Concept
The creation and management of virtual machines has been called platform virtualization, or
server virtualization. Platform virtualization is performed on a given hardware platform by
host software (a control program), which creates a simulated computer environment, a
virtual machine, for its guest software. The guest software, which is often itself a complete
operating system, runs just as if it were installed on a stand-alone hardware platform.
Typically, many such virtual machines are simulated on a single physical machine, their
number limited by the host's hardware resources. Typically there is no requirement for a
guest OS to be the same as the host one. The guest system often requires access to specific
peripheral devices to function, so the simulation must support the guest's interfaces to those
devices.
Trivial examples of such devices are a hard disk drive or a network interface card.
There are several approaches to platform virtualization.

Full virtualization
In full virtualization, the virtual machine simulates enough hardware to allow an unmodified "guest" OS (one designed for the same instruction set) to be run in isolation. This approach was pioneered in 1966 with the IBM CP-40 and CP-67, predecessors of the VM family.
Hardware-assisted virtualization
In hardware-assisted virtualization, the hardware provides architectural support that
facilitates building a virtual machine monitor and allows guest OSes to be run in
isolation. In 2005 and 2006, Intel and AMD provided additional hardware to support
virtualization. Examples include Linux KVM, VMware Workstation, VMware Fusion,
Microsoft Virtual PC, Xen, Parallels Desktop for Mac, VirtualBox and Parallels
Workstation. Hardware virtualization technologies include:
• AMD-V x86 virtualization (previously known as Pacifica)
• IBM Advanced POWER virtualization
• Intel VT x86 virtualization (previously known as Vanderpool)
• UltraSPARC T1 and UltraSPARC T2 processors from Sun Microsystems have the Hyper-Privileged execution mode
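On a Linux host you can check whether the CPU advertises these extensions by looking for the `vmx` (Intel VT-x) or `svm` (AMD-V) flag in `/proc/cpuinfo`. A small sketch of that check (the function name is ours):

```python
def hw_virt_flag(cpuinfo_text):
    """Return "vmx" (Intel VT-x), "svm" (AMD-V), or None, based on
    the CPU flags listed in /proc/cpuinfo-style text (Linux)."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = set(line.split(":", 1)[1].split())
            if "vmx" in flags:
                return "vmx"
            if "svm" in flags:
                return "svm"
    return None

# On a real Linux host you would call:
#   flag = hw_virt_flag(open("/proc/cpuinfo").read())
```

If neither flag is present (or virtualization is disabled in firmware), hypervisors fall back to software techniques such as binary translation.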

Partial virtualization
In partial virtualization (including "address space virtualization"), the virtual machine simulates multiple instances of much (but not all) of an underlying hardware environment, particularly address spaces. Such an environment supports resource sharing and process isolation, but does not allow separate "guest" operating system instances.
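The address-space case can be sketched with a toy single-level page table: each guest sees a private linear address space that the monitor maps onto physical frames. Real MMUs use multi-level tables and TLB caches; the names and numbers here are illustrative only.

```python
PAGE_SIZE = 4096

def translate(page_table, virtual_addr):
    """Translate a guest-virtual address to a host-physical address
    using a toy single-level page table {virtual_page: physical_frame}.
    Raises MemoryError to stand in for a page fault."""
    vpage, offset = divmod(virtual_addr, PAGE_SIZE)
    if vpage not in page_table:
        raise MemoryError(f"page fault at address {virtual_addr:#x}")
    return page_table[vpage] * PAGE_SIZE + offset

# Two "guests" map the same virtual page 0 onto different physical
# frames, so each sees a private address space over shared memory.
guest_a = {0: 7}
guest_b = {0: 9}
```

Both guests can use address 0 freely; the translation layer keeps their memory disjoint, which is exactly the resource sharing with isolation described above.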
Paravirtualization
In paravirtualization, the virtual machine does not necessarily simulate hardware; instead (or in addition), it offers a special API that can only be used by modifying the "guest" OS. This system call to the hypervisor is called a "hypercall" in TRANGO and Xen.
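The hypercall idea can be sketched as follows. The hypercall names (`console_write`, `get_time`) are invented; Xen's real interface is a binary ABI reached through a trap instruction, not Python method calls.

```python
class ToyHypervisor:
    """Paravirtualization sketch: a modified guest invokes hypercalls
    by name instead of executing privileged instructions. The call
    names here are invented for illustration."""

    def __init__(self):
        self.console = []

    def hypercall(self, name, *args):
        handlers = {
            "console_write": self.console.append,
            "get_time": lambda: 1234567890,   # fixed clock for the demo
        }
        if name not in handlers:
            raise ValueError(f"unknown hypercall: {name}")
        return handlers[name](*args)

# A paravirtualized "guest" driver writes to the console by asking
# the hypervisor, never by touching console hardware directly:
hv = ToyHypervisor()
hv.hypercall("console_write", "guest booted")
```

The point of the modification is that the guest asks the hypervisor for services explicitly, avoiding the cost of trapping and emulating privileged instructions.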

Operating system-level virtualization
In operating system-level virtualization, a physical server is virtualized at the operating system level, enabling multiple isolated and secure virtualized servers to run on a single physical server. The "guest" OS environments share the same OS as the host system; i.e., the same OS kernel is used to implement the "guest" environments. Applications running in a given "guest" environment view it as a stand-alone system.
Hosted environment
Applications that are hosted on third-party servers and that can be called or used from a remote system's environment.

8. Resource Virtualization
Resource virtualization is the virtualization of specific system resources, such as storage volumes, namespaces, and network resources.

Storage Virtualization
Storage virtualization is the pooling of multiple physical storage resources into
what appears to be a single storage resource that is centrally managed. Storage virtualization
automates tedious and extremely time-consuming storage administration tasks. This means
the storage administrator can perform the tasks of backup, archiving, and recovery more
easily and in less time, because the overall complexity of the storage infrastructure is
disguised. Storage virtualization is commonly used in file systems, storage area networks
(SANs), switches and virtual tape systems. Users can implement storage virtualization with
software, hybrid hardware or software appliances. Virtualization hides the physical
complexity of storage from storage administrators and applications, making it possible to
manage all storage as a single resource. In addition to easing the storage management burden,
this approach dramatically improves the efficiency and cuts overall costs.
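Block-address translation, the core trick behind such pooling, can be sketched in a few lines: several physical "disks" (plain bytearrays here) are concatenated into one logical block address space. This is a toy model of the concept, not how any particular product implements it.

```python
class LogicalVolume:
    """Toy storage-virtualization sketch: concatenate several physical
    'disks' into one logical block address space, translating each
    logical block number to a (disk, byte offset) pair."""

    def __init__(self, disks, block_size=512):
        self.disks = disks
        self.block_size = block_size

    def _locate(self, logical_block):
        for disk in self.disks:
            blocks = len(disk) // self.block_size
            if logical_block < blocks:
                return disk, logical_block * self.block_size
            logical_block -= blocks
        raise IndexError("logical block out of range")

    def write_block(self, logical_block, data):
        disk, offset = self._locate(logical_block)
        disk[offset:offset + self.block_size] = data.ljust(self.block_size, b"\0")

    def read_block(self, logical_block):
        disk, offset = self._locate(logical_block)
        return disk[offset:offset + self.block_size]

# Two 2-block "disks" appear to the user as one 4-block volume.
vol = LogicalVolume([bytearray(1024), bytearray(1024)])
vol.write_block(3, b"hello")          # silently lands on the second disk
```

The caller addresses one uniform volume; which physical disk actually holds a block is hidden, which is the "single, centrally managed resource" view described above.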
The Advantages of Storage Virtualization
Storage virtualization provides many advantages. First, it enables the pooling
of multiple physical resources into a smaller number of resources or even a single resource,
which reduces complexity. Many environments have become complex, which increases the
storage management gap. With regard to resources, pooling is an important way to achieve
simplicity. A second advantage of using storage virtualization is that it automates many time-
consuming tasks. In other words, policy-driven virtualization tools take people out of the loop
of addressing each alert or interrupt in the storage business. A third advantage of storage
virtualization is that it can be used to disguise the overall complexity of the infrastructure.
Network virtualization
Network virtualization is the process of combining hardware and software
network resources and network functionality into a single, software-based administrative
entity, a virtual network. Network virtualization involves platform virtualization, often
combined with resource virtualization. Network virtualization is categorized as either
external, combining many networks, or parts of networks, into a virtual unit, or internal,
providing network-like functionality to the software containers on a single system. Whether
virtualization is internal or external depends on the implementation provided by vendors that
support the technology.

Components of a virtual network
Various equipment and software vendors offer network virtualization by combining any of the
following:
 Network hardware, such as switches and network adapters, also known as network
interface cards (NICs)
 Networks, such as virtual LANs (VLANs), and containers such as virtual machines and Solaris Containers.
 Network storage devices
 Network media, such as Ethernet and Fibre Channel

External network virtualization


In external network virtualization, one or more local networks are combined or subdivided into virtual networks, with the goal of improving the efficiency of a large corporate network or data center. The key components of an external virtual network are the VLAN and the network switch. Using VLAN and switch technology, the system administrator can configure systems physically attached to the same local network into different virtual networks. Conversely, VLAN technology enables the system administrator to combine systems on separate local networks into a VLAN spanning the segments of a large corporate network.
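At the packet level, this rests on IEEE 802.1Q VLAN tagging: a 4-byte tag after the source MAC carries a 12-bit VLAN ID, and switches use it to keep the virtual networks separate. A sketch of extracting that ID from a raw Ethernet frame:

```python
import struct

def vlan_id(frame):
    """Return the 802.1Q VLAN ID of a raw Ethernet frame, or None if
    the frame is untagged. Layout: 6-byte dst MAC, 6-byte src MAC,
    then either the EtherType or the 0x8100 tag protocol identifier
    followed by a 16-bit TCI whose low 12 bits are the VLAN ID."""
    if len(frame) < 16:
        return None
    (tpid,) = struct.unpack_from("!H", frame, 12)
    if tpid != 0x8100:          # not 802.1Q-tagged
        return None
    (tci,) = struct.unpack_from("!H", frame, 14)
    return tci & 0x0FFF

# Minimal tagged frame for VLAN 42 (MAC addresses zeroed for brevity):
frame = bytes(12) + struct.pack("!HH", 0x8100, 42)
```

A switch performing external virtualization forwards a frame only to ports belonging to the VLAN its tag names.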
Internal network virtualization
In internal network virtualization, a single system is configured with containers, such as
the Xen domain, combined with hypervisor control programs or pseudo-interfaces such as the
VNIC, to create a network in a box. This solution improves overall efficiency of a single
system by isolating applications to separate containers and/or pseudo interfaces.
Combined internal and external network virtualization
Some VMMs offer both internal and external network virtualization. The basic approach is a network in the box on a single system, using virtual machines that are managed by hypervisor software. Infrastructure software then connects and combines networks in multiple boxes into an external virtualization scenario.

9. Cluster Computing

Fig. 7 An Example of computer Cluster

A computer cluster is a group of linked computers, working together closely so that in many respects they form a single computer. The components of a cluster are commonly, but not always, connected to each other through fast local area networks. Clusters are usually deployed to improve performance and/or availability over that provided by a single computer, while typically being much more cost-effective than single computers of comparable speed or availability.
Cluster categorizations
High-availability (HA) clusters
High-availability clusters (also known as failover clusters) are
implemented primarily for the purpose of improving the availability of services which
the cluster provides. They operate by having redundant nodes, which are then used to
provide service when system components fail.
Load-balancing clusters
Load-balancing clusters operate by distributing a workload evenly over
multiple back end nodes. Typically the cluster will be configured with multiple
redundant load-balancing front ends.
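The front-end rotation can be sketched as a simple round-robin dispatcher. This is a toy: real balancers also health-check nodes and keep session affinity, and the class and node names are invented.

```python
from itertools import cycle

class RoundRobinBalancer:
    """Toy load-balancing front end: requests are spread evenly over
    the back-end nodes in rotation."""

    def __init__(self, nodes):
        self._rotation = cycle(nodes)

    def route(self, request):
        # Pick the next node in the rotation and hand it the request.
        return next(self._rotation), request

lb = RoundRobinBalancer(["node-a", "node-b", "node-c"])
assignments = [lb.route(f"req-{i}")[0] for i in range(6)]
```

Six requests land on the three nodes twice each, in order, which is the even spreading of workload the section describes.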

Compute clusters
Compute clusters are used primarily for computational purposes, rather than for handling IO-oriented operations such as web service or databases. For instance, a cluster might support computational simulations of weather or vehicle crashes.

Grid computing
Grids are usually compute clusters, but are more focused on throughput, like a computing utility, rather than on running fewer, tightly coupled jobs. Grids incorporate heterogeneous collections of computers, possibly geographically distributed, sometimes administered by unrelated organizations.
Grid computing is optimized for workloads which consist of many independent jobs or
packets of work, which do not have to share data between the jobs during the computation
process. Grids serve to manage the allocation of jobs to computers which will perform the
work independently of the rest of the grid cluster. Resources such as storage may be
shared by all the nodes, but intermediate results of one job do not affect other jobs in
progress on other nodes of the grid.
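Because grid jobs are independent and share no intermediate state, they map directly onto a pool of workers. The sketch below uses threads in place of grid nodes purely for illustration; a real grid dispatches to machines.

```python
from concurrent.futures import ThreadPoolExecutor

def run_grid(jobs, workers=4):
    """Toy grid scheduler: each job is an independent zero-argument
    callable, so any free worker can take any job without
    coordinating with the others."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # map() returns results in submission order.
        return list(pool.map(lambda job: job(), jobs))

# Five independent work packets, each squaring one number.
results = run_grid([lambda n=n: n * n for n in range(5)])
```

Nothing one job computes is needed by another, so adding workers scales throughput linearly, which is exactly the workload shape grids are optimized for.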
Application Virtualization
Application virtualization is a term that describes software
technologies that improve portability, manageability and compatibility of applications
by encapsulating them from the underlying operating system on which they are
executed. A fully virtualized application is not installed in the traditional sense,
although it is still executed as if it is. The application is fooled at runtime into
believing that it is directly interfacing with the original operating system and all the
resources managed by it, when in reality it is not. Application virtualization differs
from operating system virtualization in that in the latter case, the whole operating
system is virtualized rather than only specific applications.
Description
Limited application virtualization is used in modern operating systems such as
Microsoft Windows and Linux. For example, INI file mapping was introduced with
Windows NT to virtualize (into the Registry) the legacy INI files of applications
originally written for Windows 3.1.
Full application virtualization requires a virtualization layer. This layer must be
installed on a machine to intercept all file and Registry operations of virtualized
applications and transparently redirect these operations into a virtualized location. The
application performing the file operations never knows that it's not accessing the
physical resource it believes it is. In this way, applications with many dependent files
and settings can be made portable by redirecting all their input/output to a single
physical file, and traditionally incompatible applications can be executed side-by-side.
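The redirection described above can be illustrated with a tiny path-rewriting sketch. The sandbox path and application name are hypothetical, and a real virtualization layer would intercept file and Registry operations at the OS level rather than in application code.

```python
import os

# Hypothetical per-application virtual location.
VIRTUAL_ROOT = "/tmp/appvirt/wordproc"

def redirect(path):
    """Map a path the application believes it is using into the sandbox."""
    return os.path.join(VIRTUAL_ROOT, path.lstrip("/"))

# The application asks for a system-wide settings file...
requested = "/etc/wordproc/settings.ini"
# ...but the virtualization layer transparently serves the sandboxed copy.
print(redirect(requested))  # -> /tmp/appvirt/wordproc/etc/wordproc/settings.ini
```

Since every application gets its own `VIRTUAL_ROOT`, two applications that would normally clash over the same settings file can run side by side, each seeing only its own copy.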
Benefits of application virtualization
i. Allows applications to run in environments that do not suit the native ap-
plication (e.g. Wine allows Microsoft Windows applications to run on Linux).
ii. Uses fewer resources than a separate virtual machine.
iii. Run incompatible applications side-by-side, at the same time and with minimal
regression testing against one another.
iv. Implement the security principle of least privilege by removing the requirement
for end-users to have Administrator privileges in order to run poorly written
applications.
v. Simplified operating system migrations.
Disadvantages of application virtualization
i. Applications have to be "packaged" or "sequenced" before they will run in a
virtualized way.
ii. Slightly increased resource requirements (memory and disk storage).
iii. Not all software can be virtualized. Some examples include applications that
require a device driver and 16-bit applications that need to run in shared
memory space.
iv. Some compatibility issues between legacy applications and newer operating
systems cannot be addressed by application virtualization (although such
applications can still be run on an older operating system under a virtual machine).
Cross-platform virtualization
Cross-platform virtualization is a form of computer virtualization that
allows software compiled for a specific CPU and operating system to run unmodified
on computers with different CPUs and/or operating systems, through a combination
of dynamic binary translation and operating system call mapping. Since the software
runs on a virtualized equivalent of the original computer, it does not require
recompilation or porting, thus saving time and development resources. However, the
processing overhead of binary translation and call mapping imposes a performance
penalty, when compared to natively-compiled software. For this reason, cross-
platform virtualization may be used as a temporary solution until resources are
available to port the software.
By creating an abstraction layer capable of running software compiled
for a different computer system, cross-platform virtualization satisfies the
virtualization requirements outlined by Popek and Goldberg. Cross-platform virtualization is
distinct from simple emulation and binary translation, which involve the direct translation of
one CPU instruction set to another, since the inclusion of operating system call
mapping provides a more complete virtualized environment. Cross-platform
virtualization is also complementary to server virtualization and desktop
virtualization solutions, since these are typically constrained to a single CPU type,
such as x86 or POWER.
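The two ingredients named above, instruction translation and OS-call mapping, can be illustrated with a toy interpreter. The "guest" instruction set and the call table are invented for illustration; real systems translate native machine code dynamically rather than interpreting it.

```python
output = []

# Call mapping: a guest "system call" is routed to a host equivalent.
host_calls = {
    "guest_write": lambda text: output.append(text),  # maps to host I/O
}

def run(guest_program):
    """Interpret hypothetical guest instructions on the host, one at a time."""
    stack = []
    for op, arg in guest_program:
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            stack.append(stack.pop() + stack.pop())
        elif op == "SYSCALL":
            host_calls[arg](str(stack.pop()))  # the call-mapping step
    return stack

run([("PUSH", 2), ("PUSH", 3), ("ADD", None), ("SYSCALL", "guest_write")])
print(output)  # -> ['5']
```

Interpreting each instruction this way also makes the performance penalty mentioned above concrete: every guest operation costs several host operations.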
Emulation
Emulation refers to the imitation of the behavior of a
computer or other electronic system by another type of
computer or system. A console emulator, for example, is a program that allows a computer or modern
console to emulate another video game console. Hardware emulation is the use of
special-purpose hardware to emulate the behavior of a yet-to-be-built system, with
greater speed than pure software emulation.
Simulation
Simulation is the imitation of some real thing, state of affairs, or
process. The act of simulating something generally entails representing certain key
characteristics or behaviors of a selected physical or abstract system. A computer
simulation (or "sim") is an attempt to model a real-life or hypothetical situation on a
computer so that it can be studied to see how the system works. By changing
variables, predictions may be made about the behavior of the system. Computer
simulation has become a useful part of modeling many natural systems in physics,
chemistry and biology, and human systems in economics as well as in engineering
to gain insight into the operation of those systems.
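As a minimal example of the "change variables, predict behavior" workflow above, here is a tiny computer simulation of an object cooling toward ambient temperature (Newton's law of cooling, stepped forward in discrete time). The parameter values are arbitrary illustrations.

```python
def simulate_cooling(temp, ambient, k, steps, dt=1.0):
    """Step the model forward: dT/dt = -k * (T - ambient)."""
    for _ in range(steps):
        temp += -k * (temp - ambient) * dt
    return temp

# Varying the cooling constant k lets us predict how, e.g., insulation
# changes the system's behavior without building a physical experiment.
for k in (0.1, 0.3):
    print(k, round(simulate_cooling(90.0, 20.0, k, steps=10), 1))
```

Running the model with several values of `k` and comparing the outcomes is the essence of using simulation to study a system.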
10. Desktop virtualization
Desktop virtualization or Virtual desktop infrastructure (VDI) is a
server-centric computing model that borrows from the traditional thin-client model
but is designed to give system administrators and end-users the ability to host and
centrally manage desktop virtual machines in the data center while giving end users
a full PC desktop experience.
Rationale
Installing and maintaining separate PC workstations is complex, and
traditionally users have almost unlimited ability to install or remove software.
Desktop virtualization provides many of the advantages of a terminal server, but (if
so desired and configured by system administrators) can give users much more
flexibility. Each user, for instance, might be allowed to install and configure their own
applications. Users also gain the ability to access their server-based virtual desktop
from other locations.
Advantages
 Instant provisioning of new desktops
 Near-zero downtime in the event of hardware failures
 Significant reduction in the cost of new application deployment
 Robust desktop image management capabilities
 Desktop-like performance, including multiple monitors, bidirectional
audio/video, streaming video, USB support, etc.
11. VMware Workstation
Fig. 8 VMware Workstation 6.5 running Ubuntu
Fig. 9 Snapshot Manager in VMware Workstation 6
VMware Workstation is a virtual machine software suite for x86 and x86-64
computers from VMware, a division of EMC Corporation. This software suite allows users
to set up multiple x86 and x86-64 virtual computers and to use one or more of these virtual
machines simultaneously with the hosting operating system. Each virtual machine instance
can execute its own guest operating system, such as Windows, Linux, BSD variants, or
others. In simple terms, VMware Workstation allows one physical machine to run multiple
operating systems simultaneously.
Microsoft Virtual Server
Microsoft Virtual Server is a virtualization solution that facilitates the
creation of virtual machines on the Windows XP, Windows Vista and Windows Server
2003 operating systems. Originally developed by Connectix, it was acquired by Microsoft
prior to release. Virtual PC is Microsoft's related desktop virtualization software package.
Virtual machines are created and managed through an IIS web-based interface or through a
Windows client application tool called VMRC plus. The current version is Microsoft
Virtual Server 2005 R2 SP1. New features in R2 SP1 include Linux guest operating system
support, Virtual Disk Precompactor, SMP (but not for the guest OS), x86-64 (x64) host
OS support (but not guest OS support), the ability to mount virtual hard drives on the host
OS, and additional operating systems including Windows Vista.
It also provides a Volume Shadow Copy writer which enables live backups of the
Guest OS on a Windows Server 2003 or Windows Server 2008 Host. A utility to
mount VHD images is also included since SP1. Officially supported Linux guest
operating systems include Red Hat Enterprise Linux versions 2.1-5.0, Red Hat Linux
9.0, SUSE Linux, and SUSE Linux Enterprise Server versions 9 and 10.
Microsoft Virtual PC
Microsoft Virtual PC is a virtualization suite for Microsoft Windows
operating systems, and an emulation suite for Mac OS X on PowerPC-based systems.
The software was originally written by Connectix, and was subsequently acquired by
Microsoft. In July 2006 Microsoft released the Windows-hosted version as a free
product. In August 2006 Microsoft announced the Macintosh-hosted version would
not be ported to Intel-based Macintoshes, effectively discontinuing the product as
PowerPC-based Macintoshes are no longer manufactured. Virtual PC virtualizes a
standard PC and its associated hardware. Supported Windows operating systems can
run inside Virtual PC. However, other operating systems like Linux may run but are
not officially supported (for example, Ubuntu, a popular Linux distribution, can get
past the boot screen of the Live CD and function fully when using
Safe Graphics Mode).
Virtual Box
Virtual Box is an x86 virtualization software package, originally created by the German
software company innotek and now developed by Sun Microsystems as part of its Sun
xVM virtualization platform. It is installed on an existing host operating system;
within this application, additional operating systems, each known as a Guest OS, can
be loaded and run, each with its own virtual environment. Supported host operating
systems include Linux, Mac OS X, OS/2 Warp, Windows XP or Vista, and Solaris,
while supported guest operating systems include FreeBSD, Linux, OpenBSD, OS/2
Warp, Windows and Solaris. According to a 2007 survey, Virtual Box is the third
most popular software package for running Windows programs on Linux desktops.
Xen
In Xen, the first guest operating system, called "domain 0" (dom0), boots
automatically when the hypervisor boots and is given special management privileges
and direct access to the physical hardware. The system administrator logs into dom0
in order to start any further guest operating systems, called "domain U" (domU) in
Xen terminology.
12. Conclusion
Virtualization dramatically improves the efficiency and
availability of resources and applications. Under the old "one server, one
application" model, internal resources are underutilized, and users
spend too much time managing servers rather than innovating. With a virtualization
platform, users can respond faster and more efficiently than ever before.
Users can save 50-70% on overall IT costs by consolidating their resource
pools and delivering highly available machines.
Other major improvements by using virtualization are that they can:
 Reduce capital costs by requiring less hardware and lowering operational
costs while increasing your server to admin ratio.
 Ensure enterprise applications perform with the highest availability and
performance.
 Build up business continuity through improved disaster recovery
solutions and deliver high availability throughout the datacenter.
 Improve desktop management with faster deployment of desktops and
fewer support calls due to application conflicts.
Even after the implementation of distributed computing and
other technologies, virtualization has proved effective at using the
available resources of a system fully and efficiently.
13. References
Websites:
[1.] www.wikipedia.com
[2.] http://www.vmware.com
[3.] http://www.kernelthread.com
[4.] www.virtualizationadmin.com
[5.] www.virtualization.org
[6.] www.microsoft.com/virtualization.aspx
Books:
[1.] Virtualization: From Beginners to Professionals, A press Publications.
[2.] Operating System Concepts: Silberschatz, Galvin, and Gagne, Wiley Publications.
[3.] VMware White Paper: VMware Official Digital Repository.