Virtualization and Virtualization Infrastructure
Course Objective: To gain expertise in virtualization and virtual machines, and to deploy practical virtualization solutions.
Virtualization can be defined as a process that enables the creation of a virtual version of a desktop, operating system, network resource, or server. Virtualization plays a key role in cloud computing: it separates the delivery of a resource or application from the physical resource itself, which helps reduce the space and cost involved. The technique enables the end user to run multiple desktop operating systems and applications simultaneously on the same hardware. It also allows products or services to be emulated virtually on the same machine without slowing down or impacting the system's efficiency.
Virtualization is essential to cloud computing. It helps in transferring data easily, protects against system failures, reduces the cost of operations, and provides security for data. Virtualization also increases the efficiency of development and operations teams, who need not create physical systems for their tasks: they can use virtual machines and servers for testing applications or software. There are five major types of virtualization, described below.
Hypervisor
The hypervisor is a firmware or low-level program that acts as a Virtual Machine Manager.
Virtualization is generally achieved through the hypervisor. A hypervisor enables the
separation of operating systems with the underlying hardware. It enables the host machine to
run many virtual machines simultaneously and share the same physical computer resources.
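As a concrete aside, whether a host CPU offers hardware-assisted virtualization (Intel VT-x or AMD-V) can be checked on Linux by looking for the vmx or svm flags in /proc/cpuinfo. Below is a minimal sketch assuming Linux's cpuinfo format; the helper name and sample text are illustrative, not part of any standard API.

```python
def has_virt_flags(cpuinfo_text: str) -> bool:
    """Return True if the flags line lists Intel VT-x (vmx) or AMD-V (svm)."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            return "vmx" in flags or "svm" in flags
    return False

# A real host would read /proc/cpuinfo instead of this sample text.
sample = "processor : 0\nflags\t\t: fpu vme de pse tsc msr vmx ept\n"
print(has_virt_flags(sample))
```

If the flag is present, a Type-1 or Type-2 hypervisor can use the CPU's hardware virtualization extensions rather than relying purely on software techniques.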
Hypervisor Classifications:
Type-1 (Bare-Metal) Hypervisor
Software systems that run directly on the host's hardware as a hardware control and guest operating system monitor. A guest operating system thus runs on another level above the hypervisor. This is the classic implementation of virtual machine architectures. A variation of this is embedding the hypervisor in the firmware of the platform, as is done in the case of Hitachi's Virtage hypervisor and VMware ESXi. Examples of this virtual machine architecture are Oracle VM, Microsoft Hyper-V, VMware ESX, and Xen.
Advantages
Such hypervisors are very efficient because they have direct access to the physical hardware resources (CPU, memory, network, physical storage). This also strengthens security, because there is no third-party software layer in between for an attacker to compromise.
Disadvantages
One problem with Type-1 hypervisors is that they usually need a dedicated machine on which to perform their operations, instruct the different VMs, and control the host hardware resources.
Type-2 (Hosted) Hypervisor
Software systems that run on top of a conventional host operating system, with guest operating systems running above the hypervisor. Examples of this architecture are VMware Workstation and Oracle VirtualBox.
Advantages
Such hypervisors allow quick and easy access to a guest operating system alongside the running host machine. They usually come with additional useful features for the guest machine, and such tools enhance coordination between the host machine and the guest machine.
Disadvantages
There is no direct access to the physical hardware resources, so these hypervisors lag behind Type-1 hypervisors in performance. Potential security risks also exist: an attacker who gains access to the host operating system can exploit its weaknesses to reach the guest operating systems.
System Virtual Machine
A hardware (system) virtual machine provides a complete system platform environment that supports the execution of a complete operating system; examples are VMware, Xen, and VirtualBox. These virtual machines give us a complete system platform on which a complete virtual operating system executes. Just like VirtualBox, a system virtual machine provides an environment in which an OS can be installed completely. As the image below shows, the hardware of the real machine is distributed between two simulated operating systems by the virtual machine monitor, and programs and processes then run separately on the distributed hardware of each simulated machine.
Types of Virtualization
Application Virtualization
Network Virtualization
Desktop Virtualization
Storage Virtualization
Server Virtualization
Application Virtualization
Application virtualization abstracts an application from the underlying operating system so it can run in an encapsulated environment without being installed natively on the host.
Network Virtualization
Network virtualization helps manage and monitor the entire computer network as a single
administrative entity. Admins can keep track of various elements of network infrastructure
such as routers and switches from a single software-based administrator’s console. Network
virtualization helps network optimization for data transfer rates, flexibility, reliability,
security, and scalability. It improves the overall network’s productivity and efficiency. It
becomes easier for administrators to allocate and distribute resources conveniently and ensure
high and stable network performance.
Desktop Virtualization
Desktop virtualization is when the host server runs virtual machines using a hypervisor (a software program). A hypervisor can be installed directly on the host machine or over an operating system (such as Windows, macOS, or Linux). Virtualized desktops don't use the host system's hard drive; instead, they run on a remote central server.
This type of virtualization is useful for development and testing teams who need to develop
or test applications on different operating systems.
Storage Virtualization
Storage virtualization is the process of pooling physical storage of multiple network storage
devices so it looks like a single storage device. Storage virtualization facilitates archiving,
easy backup, and recovery tasks. It helps administrators allocate, move, change and set up
resources efficiently across the organizational infrastructure.
Server Virtualization
Server virtualization is a process of partitioning the resources of a single server into multiple
virtual servers. These virtual servers can run as separate machines. Server virtualization
allows businesses to run multiple independent OSs (guests or virtual) all with
different configurations using a single (host) server. The process also saves the hardware
cost involved in keeping a host of physical servers, so businesses can make their server
infrastructure more streamlined.
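The consolidation idea behind server virtualization can be illustrated with a toy packing heuristic: given VM memory demands and identical server capacities, a first-fit pass shows how several underutilized physical servers collapse into fewer hosts. This is a minimal sketch; the function name and numbers are invented for illustration, and first-fit is a heuristic, not an optimal packing.

```python
def first_fit(vms, capacity):
    """Assign VM memory demands (e.g. in GB) to the fewest servers of equal capacity.

    Returns a list of servers, each a list of the VM demands placed on it.
    """
    servers = []
    for demand in vms:
        for server in servers:
            if sum(server) + demand <= capacity:
                server.append(demand)  # fits on an existing server
                break
        else:
            servers.append([demand])   # provision a new server
    return servers

# Five VMs that would naively need five hosts fit on two 16 GB servers.
print(first_fit([8, 4, 4, 2, 6], capacity=16))
```

Real consolidation managers must additionally weigh CPU, I/O, and QoS constraints, which is exactly the resource-management complexity noted above.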
Containerization
Efficiency and Scalability: Containers share the host operating system kernel
and utilize fewer resources than VMs, enabling higher density and faster
deployment of applications.
Virtualization Management
Virtual Server: Virtual Server is a generic term used to describe a physical box running
virtualization software (hypervisor) on it. A new virtual server can be provisioned by
installing Oracle VM server software on a bare metal physical box. "Virtual Server" is a
target type in Enterprise Manager that represents the Oracle VM targets.
Every Oracle VM server can perform one or more of the functions described below:
Master Server function: The Master Server is the core of the server pool operations. It acts
as the contact point of the server pool to the outside world, and also as the dispatcher to other
servers within the server pool. The load balancing is implemented by the Master Server. For
example, when you start a guest virtual machine, the Master Server will choose a Guest VM
Server with the maximum resources available to run the guest virtual machine. There can be
only one Master Server in a server pool. The state of a virtual server pool is equivalent to the
state of the master virtual server.
Utility Server function: The Utility Server is responsible for I/O intensive operations such
as copying or moving files. Its function focuses on the creation and removal operations of
guest virtual machines, virtual servers, and server pools. There can be one or more Utility
Servers in a server pool. When there are several Utility Servers, the Server Pool Master chooses
the Utility Server with the maximum CPU resources available to conduct the task.
Guest VM Server function: The primary function of the Guest VM Server is to run guest
virtual machines, thus acting as a hypervisor. There can be one or more Guest VM Servers in
a server pool. When there are several Guest VM Servers, the Master Server chooses the Guest
VM Server with the maximum resources available (including memory and CPU) to start and
run the virtual machine.
Monitoring Server: The monitoring server monitors the virtual servers remotely. Multiple virtual servers are monitored by one agent; the Enterprise Manager agent must not be installed on the virtual servers themselves.
Virtual Server Pool: A Server Pool is a logical grouping of one or more virtual servers that
share common storage. A virtual server can belong to only one virtual server pool at a time.
Guest virtual machines and resources are also associated with server pools. Oracle VM
Server Pool is an aggregate target type in Enterprise Manager to represent the server pool of
Oracle VM Servers. When the Oracle VM Server Pool is created, the user is asked to provide
the details of the Master Server for that pool. By default, this Oracle VM Server also
performs the functions of the Utility Server and Guest VM Server. The user can later change
the Utility Server and Guest VM Server functions using the Edit Virtual Server action.
Guest Virtual Machine: Guest Virtual Machine (also known as Guest VM) is the container
running on top of a virtual server. Multiple guest virtual machines can run on a single virtual
server. Guest virtual machines can be created from Oracle VM templates. Oracle VM
templates provide pre-installed and pre-configured software images to deploy a fully
configured software stack.
Levels of Virtualization Implementation
ISA Level
ISA virtualization can work through ISA emulation. This is used to run many legacy codes
that were written for a different configuration of hardware. These codes run on any virtual
machine using the ISA. With this, a binary code that originally needed some additional
layers to run is now capable of running on the x86 machines. It can also be tweaked to run on
the x64 machine. With ISA, it is possible to make the virtual machine hardware agnostic. For
the basic emulation, an interpreter is needed, which interprets the source code and
then converts it into a hardware format that can be read. This then allows processing. This is
one of the five implementation levels of virtualization in cloud computing.
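The interpreter-based emulation described above can be sketched with a toy ISA: a fetch-decode-dispatch loop that executes "legacy" instructions in software, the way a basic ISA emulator would. The opcode set and register names are invented for illustration only.

```python
def interpret(program, registers=None):
    """Interpret a toy legacy ISA in which each instruction is (op, dst, src).

    Supported ops: LOAD (immediate), ADD, SUB. This mirrors basic emulation:
    fetch an instruction, decode its opcode, and dispatch to a software handler.
    """
    regs = dict(registers or {})
    for op, dst, src in program:
        if op == "LOAD":
            regs[dst] = src
        elif op == "ADD":
            regs[dst] = regs[dst] + regs[src]
        elif op == "SUB":
            regs[dst] = regs[dst] - regs[src]
        else:
            raise ValueError(f"unknown opcode {op}")
    return regs

prog = [("LOAD", "r1", 10), ("LOAD", "r2", 4), ("ADD", "r1", "r2")]
print(interpret(prog))  # r1 ends up holding 14
```

Real emulators add binary translation and caching on top of this loop precisely because pure interpretation is slow.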
Hardware Abstraction Level (HAL)
True to its name, HAL lets virtualization operate at the level of the hardware. It makes use of a hypervisor for its functioning. At this level, the virtual machine is formed, and it manages the hardware through the process of virtualization. It allows the virtualization of each hardware component: the input/output devices, the memory, the processor, and so on. Multiple users can share the same hardware and run multiple virtualization instances at the same time. This level is mostly used in cloud-based infrastructure.
Operating System Level
At the level of the operating system, the virtualization model creates an abstraction layer between the operating system and the applications. It is an isolated container on the operating system and the physical server that makes use of the software and hardware, and each such container functions as a separate server. This virtualization level is used when there are several users and no one wants to share hardware: every user gets a dedicated virtual environment with dedicated virtual hardware resources, so there is no question of conflict.
Library Level
The operating system is cumbersome, and this is when applications make use of APIs from user-level libraries. These APIs are well documented, which is why the library virtualization level is preferred in such scenarios. API hooks make this possible, as they control the communication link from the application to the system.
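API hooking of this kind can be illustrated in Python, where a library function is swapped for a wrapper that controls the communication link between the application and the underlying call. This is a minimal sketch using the standard math module; the hook and wrapper names are illustrative.

```python
import math

def hook(module, name, wrapper):
    """Replace module.name with wrapper(original) -- a user-level API hook."""
    original = getattr(module, name)
    setattr(module, name, wrapper(original))
    return original

calls = []

def logging_wrapper(original):
    def hooked(x):
        calls.append(x)          # intercept: record the argument
        return original(x)       # then forward to the real library call
    return hooked

hook(math, "sqrt", logging_wrapper)
result = math.sqrt(9.0)          # goes through the hook transparently
print(result, calls)
```

Library-level virtualization systems apply the same interposition idea to entire API surfaces (for example, translating one graphics or system API into another).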
Application Level
At this level, an application itself is virtualized as a process-level virtual machine; the JVM, for example, runs compiled Java bytecode as an ordinary process on the host.
Virtual machine architectures fall into the following classes:
Hypervisor architecture
Paravirtualization
Host-based virtualization
Full Virtualization
With full virtualization, noncritical instructions run on the hardware directly while critical
instructions are discovered and replaced with traps into the VMM to be emulated by
software. Both the hypervisor and VMM approaches are considered full virtualization.
Noncritical instructions do not control hardware or threaten the security of the system, but
critical instructions do. Therefore, running noncritical instructions on hardware not only promotes efficiency, but also ensures system security.
Host-Based Virtualization
An alternative VM architecture is to install a virtualization layer on top of the host OS. This
host OS is still responsible for managing the hardware. The guest OSes are installed and run
on top of the virtualization layer. Dedicated applications may run on the VMs. Certainly,
some other applications can also run with the host OS directly. This host based architecture
has some distinct advantages, as enumerated next. First, the user can install this VM
architecture without modifying the host OS. Second, the host-based approach appeals to
many host machine configurations.
Para-Virtualization
Para-virtualization requires modifying the guest operating system. A para-virtualized VM provides special APIs that require substantial OS modifications in user applications. Performance degradation is a critical issue of a virtualized system. The figure illustrates the concept of a para-virtualized VM architecture: the guest OSes are para-virtualized, assisted by an intelligent compiler that replaces nonvirtualizable OS instructions with hypercalls. The traditional x86 processor offers four instruction execution rings: Rings 0, 1, 2, and 3. The lower the ring number, the higher the privilege of the instructions being executed. The OS is responsible for managing the hardware and for the privileged instructions executing at Ring 0, while user-level applications run at Ring 3. Although para-virtualization reduces the overhead, it incurs problems of compatibility and portability, because it must support the unmodified OS as well. Second, the cost is high, because it may require deep OS kernel modifications. Finally, the performance advantage of para-virtualization varies greatly with workload.
Modern operating systems and processors permit multiple processes to run simultaneously. If
there is no protection mechanism in a processor, all instructions from different processes will
access the hardware directly and cause a system crash. Therefore, all processors have at least
two modes, user mode and supervisor mode, to ensure controlled access to critical
hardware. Instructions running in supervisor mode are called privileged instructions. Other
instructions are unprivileged instructions. In a virtualized environment, it is more difficult to
make OSes and applications run correctly because there are more layers in the machine stack.
Figure shows the hardware support by Intel.
CPU Virtualization
Unprivileged instructions of VMs run directly on the host machine for higher efficiency.
Other critical instructions should be handled carefully for correctness and stability. The
critical instructions are divided into three categories: privileged instructions, control-sensitive instructions, and behavior-sensitive instructions. Privileged instructions execute in
a privileged mode and will be trapped if executed outside this mode. Control-sensitive
instructions attempt to change the configuration of resources used. Behavior-sensitive
instructions have different behaviors depending on the configuration of resources, including
the load and store operations over the virtual memory. CPU architecture is virtualizable if it
supports the ability to run the VM’s privileged and unprivileged instructions in the CPU’s
user mode while the VMM runs in supervisor mode. When the privileged instructions
including control- and behavior-sensitive instructions of a VM are executed, they are trapped
in the VMM. RISC CPU architectures can be naturally virtualized because all control and
behavior-sensitive instructions are privileged instructions. On the contrary, x86 CPU
architectures are not primarily designed to support virtualization.
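The trap-and-emulate behavior described above can be sketched as a classifier: sensitive and privileged instructions trap into the VMM, while unprivileged ones run directly on hardware. The instruction names below are a hypothetical mix chosen for illustration only.

```python
# Hypothetical instruction sets illustrating trap-and-emulate on a
# virtualizable CPU; real ISAs define these categories precisely.
PRIVILEGED = {"HLT", "LGDT", "OUT"}          # trap when executed in user mode
CONTROL_SENSITIVE = {"CLTS", "MOV_TO_CR3"}   # change resource configuration
BEHAVIOR_SENSITIVE = {"POPF", "SGDT"}        # behave differently per mode

def dispatch(instr):
    """Return where a VM's instruction is handled."""
    if instr in PRIVILEGED | CONTROL_SENSITIVE | BEHAVIOR_SENSITIVE:
        return "trap-to-VMM"    # VMM emulates the effect for the VM
    return "run-on-hardware"    # unprivileged: executes directly

print(dispatch("ADD"), dispatch("HLT"))
```

The classic x86 problem is that some sensitive instructions (POPF is the textbook example) silently do nothing in user mode instead of trapping, which is why x86 was not naturally virtualizable before hardware extensions.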
Memory Virtualization
Virtual memory virtualization is similar to the virtual memory support provided by modern operating systems. In a traditional environment, the OS maintains a page table for mappings of
virtual memory to machine memory, which is a one-stage mapping. All modern x86 CPUs
include a memory management unit (MMU) and a translation lookaside buffer (TLB) to
optimize virtual memory performance. However, in a virtual execution environment, virtual
memory virtualization involves sharing the physical system memory in RAM and
dynamically allocating it to the physical memory of the VMs. A two-stage mapping process
should be maintained by the guest OS and the VMM, respectively: virtual memory to
physical memory and physical memory to machine memory. The VMM is responsible for
mapping the guest physical memory to the actual machine memory. Since each
page table of the guest OSes has a separate page table in the VMM corresponding to it, the
VMM page table is called the shadow page table. VMware uses shadow page tables to
perform virtual-memory-to-machine-memory address translation. Processors use TLB
hardware to map the virtual memory directly to the machine memory to avoid the two levels
of translation on every access. When the guest OS changes the virtual memory to a physical
memory mapping, the VMM updates the shadow page tables to enable a direct lookup.
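The two-stage mapping and its shadow-page-table shortcut can be sketched with plain dictionaries: the guest page table maps virtual to guest-physical pages, the VMM's table maps guest-physical to machine pages, and the shadow table collapses the two so a lookup takes one step. Page numbers are invented for illustration.

```python
class ShadowPageTable:
    """Collapse guest-virtual -> guest-physical -> machine into one mapping.

    guest_pt: the guest OS page table (virtual page -> guest-physical page)
    p2m:      the VMM's guest-physical -> machine page mapping
    """
    def __init__(self, guest_pt, p2m):
        self.guest_pt = guest_pt
        self.p2m = p2m
        self.rebuild()

    def rebuild(self):
        # The VMM recomputes the direct virtual->machine shadow entries
        # whenever the guest changes its own page table.
        self.shadow = {v: self.p2m[p] for v, p in self.guest_pt.items()}

    def translate(self, vpage):
        return self.shadow[vpage]  # one lookup instead of two

spt = ShadowPageTable(guest_pt={0: 5, 1: 7}, p2m={5: 100, 7: 212})
print(spt.translate(0), spt.translate(1))
```

The rebuild step models the VMM updating the shadow table when the guest changes a mapping, enabling the direct lookup described above.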
I/O Virtualization
I/O virtualization involves managing the routing of I/O requests between virtual devices and the shared physical hardware. There are three ways to implement it:
Full device emulation
Para-virtualization
Direct I/O
Full Device Emulation
All the functions of a device, such as device enumeration, identification, interrupts, and DMA, are replicated in software, which is located in the VMM and acts as a virtual device. The I/O access requests of the guest OS are trapped in the VMM, which interacts with the I/O devices.
Para-virtualization
It is a split driver model consisting of a frontend driver and a backend driver. The frontend
driver is running in Domain U and the backend driver is running in Domain 0. They interact
with each other via a block of shared memory. The frontend driver manages the I/O requests
of the guest OSes and the backend driver is responsible for managing the real I/O devices and
multiplexing the I/O data of different VMs. Although para-I/O-virtualization achieves better
device performance than full device emulation, it comes with a higher CPU overhead.
Direct I/O
Direct I/O lets the VM access devices directly. It can achieve close-to-native performance without
high CPU costs. However, current direct I/O virtualization implementations focus on
networking for mainframes. Another way to help I/O virtualization is via self-virtualized I/O
(SV-IO). The key idea is to harness the rich resources of a multicore processor. All tasks
associated with virtualizing an I/O device are encapsulated in SV-IO. SV-IO defines one
virtual interface (VIF) for every kind of virtualized I/O device, such as virtual network
interfaces, virtual block devices (disk), virtual camera devices, and others. The guest OS
interacts with the VIFs via VIF device drivers. Each VIF consists of two message queues.
One is for outgoing messages to the devices and the other is for incoming messages from the
devices. In addition, each VIF has a unique ID for identifying it in SV-IO.
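The VIF abstraction, with its unique ID and paired message queues, can be sketched as a small class; the message strings and method names are illustrative, not an actual SV-IO API.

```python
import itertools
from collections import deque

class VIF:
    """A self-virtualized I/O virtual interface: two queues plus a unique ID."""
    _ids = itertools.count(1)

    def __init__(self, kind):
        self.kind = kind                 # e.g. "net", "block", "camera"
        self.vif_id = next(VIF._ids)     # unique ID identifying it in SV-IO
        self.outgoing = deque()          # messages toward the device
        self.incoming = deque()          # messages back from the device

    def send(self, msg):
        self.outgoing.append(msg)

    def receive(self):
        return self.incoming.popleft() if self.incoming else None

nic = VIF("net")
nic.send("tx-frame-1")
nic.incoming.append("rx-frame-1")    # device side delivers a frame
print(nic.vif_id, nic.outgoing[0], nic.receive())
```

A guest's VIF device driver would drain the incoming queue and fill the outgoing one, while SV-IO cores on the multicore processor service the device side.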
Virtual Hierarchy
A virtual hierarchy is a cache hierarchy that can adapt to fit the workload or mix of
workloads. The hierarchy’s first level locates data blocks close to the cores needing them for
faster access, establishes a shared-cache domain, and establishes a point of coherence for
faster communication. The first level can also provide isolation between independent
workloads. A miss at the L1 cache can invoke the L2 access. The following figure illustrates
a logical view of such a virtual cluster hierarchy in two levels. Each VM operates in an isolated fashion at the first level, which minimizes both miss access time and performance interference with other workloads or VMs. The second level maintains globally shared memory, which facilitates dynamically repartitioning resources without costly cache flushes.
Virtual clusters can be built based on application partitioning or customization. The most important thing
is to determine how to store those images in the system efficiently. There are common
installations for most users or applications, such as operating systems or user-level
programming libraries. These software packages can be preinstalled as templates (called
template VMs). With these templates, users can build their own software stacks. New OS
instances can be copied from the template VM.
Deployment means two things: to construct and distribute software stacks (OS, libraries,
applications) to a physical node inside clusters as fast as possible, and to quickly switch
runtime environments from one user’s virtual cluster to another user’s virtual cluster. If one
user finishes using his system, the corresponding virtual cluster should shut down or suspend
quickly to save the resources to run other VMs for other users
Basically, there are four steps to deploy a group of VMs onto a target cluster: preparing the
disk image, configuring the VMs, choosing the destination nodes, and executing the VM
deployment command on every host. Many systems use templates to simplify the disk image
preparation process. A template is a disk image that includes a preinstalled operating system
with or without certain application software. Templates can use the COW (copy-on-write) format. A new COW backup file is very small and easy to create and transfer.
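The COW template idea can be sketched with a block map: reads fall through to the shared template image, and only written blocks occupy space in the per-VM backup file. The block contents here are illustrative.

```python
class COWDisk:
    """Copy-on-write disk: reads fall through to the template image,
    writes land in a small private overlay (the per-VM 'backup file')."""
    def __init__(self, template):
        self.template = template   # shared, read-only base image
        self.overlay = {}          # only this VM's modified blocks

    def read(self, block):
        return self.overlay.get(block, self.template.get(block, b"\0"))

    def write(self, block, data):
        self.overlay[block] = data  # the template is never modified

template = {0: b"boot", 1: b"libs"}
vm_disk = COWDisk(template)
vm_disk.write(1, b"patched")
print(vm_disk.read(0), vm_disk.read(1), len(vm_disk.overlay))
```

This is why a new COW file is tiny and fast to transfer: it contains only the blocks that differ from the template, which is the property the deployment steps above rely on.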
There are four ways to manage a virtual cluster. First, we can use a guest-based manager, by
which the cluster manager resides on a guest system; in this case, multiple VMs form a virtual cluster. Second, we can build a cluster manager on the host systems; the host-based manager supervises the guest systems and can restart a guest system on another physical machine. A third way is to use an independent cluster manager on both the host and guest systems. Finally, you can use an integrated cluster manager on the guest and host
systems. This means the manager must be designed to distinguish between virtualized
resources and physical resources. A VM can be in one of the following four states.
An inactive state is defined by the virtualization platform, under which the VM is not
enabled.
An active state refers to a VM that has been instantiated at the virtualization platform
to perform a real task.
A paused state corresponds to a VM that has been instantiated but disabled to process
a task or paused in a waiting state.
A VM enters the suspended state if its machine file and virtual resources are stored
back to the disk.
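The four states above can be sketched as a small state machine. The transition set chosen here is one plausible reading; the exact legal transitions are defined by each virtualization platform.

```python
# Assumed transition set for illustration; platforms differ in detail.
TRANSITIONS = {
    "inactive":  {"active"},
    "active":    {"paused", "suspended", "inactive"},
    "paused":    {"active", "inactive"},
    "suspended": {"active", "inactive"},
}

class VM:
    def __init__(self):
        self.state = "inactive"   # defined by the virtualization platform

    def move_to(self, new_state):
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

vm = VM()
vm.move_to("active")
vm.move_to("suspended")   # machine file and virtual resources go to disk
print(vm.state)
```

Encoding the states explicitly makes illegal operations (e.g. pausing a VM that was never instantiated) fail loudly rather than silently.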
Live migration of a VM from one machine to another consists of the following six steps:
Steps 0 and 1: Start migration. This step makes preparations for the migration,
including determining the migrating VM and the destination host.
Step 2: Transfer memory. Since the whole execution state of the VM is stored in memory, sending the VM's memory to the destination node ensures continuity of the service provided by the VM. All of the memory data is transferred in the first round; later rounds resend only the pages dirtied in the meantime.
Step 3: Suspend the VM and copy the last portion of the data. The migrating VM’s
execution is suspended when the last round’s memory data is transferred.
Steps 4 and 5: Commit and activate the new host. After all the needed data is copied,
on the destination host, the VM reloads the states and recovers the execution of
programs in it, and the service provided by this VM continues.
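The precopy steps above can be sketched as an iterative loop: a full first-round copy, re-sends of dirtied pages in later rounds, and a final stop-and-copy (elided here). The dirty-page callback and data are invented for illustration.

```python
def precopy_migrate(memory, get_dirty, max_rounds=3):
    """Iterative precopy: copy all pages, then re-copy pages dirtied during
    each round; the VM is suspended only for the final residue.

    memory:    dict of page -> contents on the source host
    get_dirty: callable(round) -> set of pages dirtied during that round
    """
    dest = dict(memory)                 # round 0: full copy (step 2)
    for rnd in range(1, max_rounds):
        dirty = get_dirty(rnd)
        if not dirty:
            break                       # converged: nothing left to resend
        for page in dirty:
            dest[page] = memory[page]   # resend only the dirtied pages
    # step 3 would suspend the VM here and copy the last dirty set
    return dest

mem = {0: "a", 1: "b", 2: "c"}
dirty_per_round = {1: {1}, 2: set()}
print(precopy_migrate(mem, lambda r: dirty_per_round.get(r, set())))
```

The weakness this exposes is also the one noted later in the text: a write-heavy VM keeps dirtying pages, so precopy transfers a large amount of data before it converges.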
When one system migrates to another physical node, we should consider the following issues:
Memory Migration
Memory migration can range from hundreds of megabytes to a few gigabytes in a typical system today, and it needs to be done in an efficient manner. The Internet Suspend-Resume
(ISR) technique exploits temporal locality as memory states are likely to have considerable
overlap in the suspended and the resumed instances of a VM. To exploit temporal locality,
each file in the file system is represented as a tree of small subfiles. A copy of this tree exists
in both the suspended and resumed VM instances.
File System Migration
Each VM should have a location-independent view of the file system that is available on all hosts. A simple way to achieve this is to provide each VM with its own virtual disk to which the file system is mapped, and to transport the contents of this virtual disk along with the other states of the VM. A distributed file system is used in ISR as a transport mechanism for propagating a suspended VM state; the actual file systems themselves are not mapped onto the distributed file system.
Network Migration
To enable remote systems to locate and communicate with a VM, each VM must be assigned
a virtual IP address known to other entities. This address can be distinct from the IP address
of the host machine where the VM is currently located. Each VM can also have its own
distinct virtual MAC address. The VMM maintains a mapping of the virtual IP and MAC
addresses to their corresponding VMs. Live migration is a key feature of system
virtualization technologies. Here, we focus on VM migration within a cluster environment
where a network-accessible storage system, such as storage area network (SAN) or network
attached storage (NAS), is employed. Only memory and CPU status needs to be transferred
from the source node to the target node. The main issues with the precopy approach are caused by the large amount of data transferred during the whole migration process. A checkpointing/recovery and trace/replay approach (CR/TR-Motion) has been proposed to provide fast VM migration. Another strategy, postcopy, has been introduced for live migration of VMs.
Here, all memory pages are transferred only once during the whole migration process and the
baseline total migration time is reduced.
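The VMM's mapping of virtual IP and MAC addresses to VMs, described above, can be sketched as a small table in which migration changes only the hosting node, never the VM's addresses. The names and addresses below are illustrative.

```python
class VMMNetMap:
    """VMM-side table mapping each VM's virtual IP and MAC to the VM,
    so remote systems can still reach a VM after it migrates."""
    def __init__(self):
        self.by_ip = {}
        self.by_mac = {}
        self.host_of = {}

    def register(self, vm, vip, vmac, host):
        self.by_ip[vip] = vm
        self.by_mac[vmac] = vm
        self.host_of[vm] = host

    def migrate(self, vm, new_host):
        # The virtual IP/MAC stay with the VM; only the hosting node changes.
        self.host_of[vm] = new_host

net = VMMNetMap()
net.register("vm1", "10.0.0.5", "52:54:00:aa:bb:cc", "hostA")
net.migrate("vm1", "hostB")
print(net.by_ip["10.0.0.5"], net.host_of["vm1"])
```

Because the SAN/NAS holds the disks, a migration in this model really does reduce to moving memory and CPU state plus updating this one table.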
In data centers, a large number of heterogeneous workloads can run on servers at various
times. These heterogeneous workloads can be roughly divided into two categories: chatty
workloads and noninteractive workloads. Chatty workloads may burst at some point and
return to a silent state at some other point. A web video service is an example of this,
whereby a lot of people use it at night and few people use it during the day. Noninteractive
workloads do not require people’s efforts to make progress after they are submitted. Server
consolidation is an approach to improve the low utility ratio of hardware resources by
reducing the number of physical servers. The use of VMs increases resource management
complexity. This causes a challenge in terms of how to improve resource utilization as well
as guarantee QoS in data centers.
Consolidation enhances hardware utilization. Many underutilized servers are
consolidated into fewer servers to enhance resource utilization. Consolidation also
facilitates backup services and disaster recovery.
This approach enables more agile provisioning and deployment of resources. In a
virtual environment, the images of the guest OSes and their applications are readily
cloned and reused.
The total cost of ownership is reduced. In this sense, server virtualization causes
deferred purchases of new servers, a smaller data-center footprint, lower maintenance
costs, and lower power, cooling, and cabling requirements.
This approach improves availability and business continuity. The crash of a guest OS
has no effect on the host OS or any other guest OS.
In system virtualization, virtual storage includes the storage managed by VMMs and guest
OSes. Generally, the data stored in this environment can be classified into two categories:
VM images and application data. The VM images are special to the virtual environment,
while application data includes all other data which is the same as the data in traditional OS
environments. In data centers, there are often thousands of VMs, which causes the number of VM images to balloon. Research in this field aims to make management easy while enhancing performance and reducing the amount of storage occupied by the VM images.
Parallax is a distributed storage system customized for virtualization environments. Content
Addressable Storage (CAS) is a solution to reduce the total size of VM images, and therefore
supports a large set of VM-based systems in data centers.
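The CAS idea can be sketched by keying blocks on their content hash, so identical blocks shared by many VM images are stored only once. The block contents are illustrative.

```python
import hashlib

class CASStore:
    """Content Addressable Storage: identical blocks across VM images are
    stored once, keyed by their SHA-256 digest."""
    def __init__(self):
        self.blocks = {}

    def put(self, data: bytes) -> str:
        digest = hashlib.sha256(data).hexdigest()
        self.blocks.setdefault(digest, data)   # duplicate blocks cost nothing
        return digest

    def get(self, digest: str) -> bytes:
        return self.blocks[digest]

cas = CASStore()
# Two VM images sharing a kernel and libraries; an image is a list of digests.
image1 = [cas.put(b) for b in (b"kernel", b"libs", b"app1")]
image2 = [cas.put(b) for b in (b"kernel", b"libs", b"app2")]
print(len(cas.blocks))  # 4 unique blocks stored for 6 logical blocks
```

This is exactly how CAS shrinks the total size of a large set of VM images: the common OS and library blocks are deduplicated across every image that references them.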
Parallax designs a novel architecture in which storage features that have traditionally been
implemented directly on high-end storage arrays and switches are relocated into a federation
of storage VMs. These storage VMs share the same physical hosts as the VMs that they
serve. Figure provides an overview of the Parallax system architecture. It supports all popular
system virtualization techniques, such as paravirtualization and full virtualization. For each
physical machine, Parallax customizes a special storage appliance VM. The storage appliance
VM acts as a block virtualization layer between individual VMs and the physical storage
device. It provides a virtual disk for each VM on the same physical machine.
Cloud OS for Virtualized Data Centers
Data centers must be virtualized to serve as cloud providers. Virtual infrastructure (VI)
managers and OSes are specially tailored for virtualizing data centers which often own a
large number of servers in clusters. Nimbus, Eucalyptus, and OpenNebula are all open source software available to the public. Only vSphere 4 is a proprietary OS for cloud
resource virtualization and management over data centers. These VI managers are used to
create VMs and aggregate them into virtual clusters as elastic resources.
Eucalyptus is an open source software system (Figure 3.27) intended mainly for supporting
Infrastructure as a Service (IaaS) clouds. The system primarily supports virtual networking
and the management of VMs.
vSphere 4
vSphere extends earlier virtualization software products by VMware, namely the VMware
Workstation, ESX for server virtualization, and Virtual Infrastructure for server clusters. The system interacts with user applications via an interface layer, called
vCenter. It is primarily intended to offer virtualization support and resource management of
data-center resources in building private clouds. VMware claims the system is the first cloud
OS that supports availability, security, and scalability in providing cloud computing services.
vSphere 4 is built with two functional software suites: infrastructure services and application services. The infrastructure services contain three component packages intended mainly for virtualization purposes: vCompute (supported by the ESX, ESXi, and DRS virtualization libraries from VMware), vStorage (VMFS and thin provisioning), and vNetwork (the distributed switch). The application services are divided into three groups: availability, security,
and scalability. To fully understand the use of vSphere 4, users must also learn how to use the
vCenter interfaces in order to link with existing applications or to develop new applications.
A VM in the host machine entirely encapsulates the state of the guest operating system
running inside it. Encapsulated machine state can be copied and shared over the network and
removed like a normal file, which poses a challenge to VM security. In general, a
VMM can provide secure isolation and a VM accesses hardware resources through the
control of the VMM, so the VMM is the base of the security of a virtual system. Normally,
one VM is taken as a management VM to have some privileges such as creating, suspending,
resuming, or deleting a VM.
Intrusions are unauthorized accesses to a computer by local or network users, and intrusion detection is used to recognize such unauthorized access. An intrusion detection system (IDS) is built on the operating system and is based on the characteristics of intrusion actions. A typical IDS is classified as either a host-based IDS (HIDS) or a network-based IDS (NIDS), depending on the data source. A HIDS can be implemented on the monitored system, while a NIDS is based on the flow of network traffic and cannot detect fake actions.
Virtualization-based intrusion detection can isolate guest VMs on the same hardware
platform. VMM monitors and audits access requests for hardware and system software. There
are two different methods for implementing a VM-based IDS: Either the IDS is an
independent process in each VM or a high-privileged VM on the VMM; or the IDS is
integrated into the VMM and has the same privilege to access the hardware as well as the
VMM. The proposed IDS to run on a VMM as a high-privileged VM is depicted in the
following figure.
The VM-based IDS contains a policy engine and a policy module. The policy framework can monitor events in different guest VMs through an operating system interface library, and PTrace traces the monitored host against the security policy. It is difficult to predict and prevent all
intrusions without delay. At the time of this writing, most computer systems use logs to
analyse attack actions, but it is hard to ensure the credibility and integrity of a log. The IDS
log service is based on the operating system kernel. Thus, when an operating system is
invaded by attackers, the log service should be unaffected. Besides IDS, honeynets are also prevalent in intrusion detection. They attract attackers and provide them a fake system view in
order to protect the real system. In addition, the attack action can be analysed, and a secure
IDS can be built. A honeypot is a purposely defective system that simulates an operating
system to cheat and monitor the actions of an attacker. A honeypot can be divided into
physical and virtual forms. A guest operating system and the applications running on it
constitute a VM. The host operating system and VMM must be guaranteed to prevent attacks
from the VM in a virtual honeypot.
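The log-integrity concern raised above can be illustrated with a hash chain, in which each entry's digest covers its predecessor, so editing any earlier entry invalidates everything after it. This is a minimal sketch, not a production log service.

```python
import hashlib

def append_entry(chain, message):
    """Append a log entry whose digest covers the previous entry's digest,
    so later tampering breaks every subsequent link."""
    prev = chain[-1][1] if chain else "0" * 64
    digest = hashlib.sha256((prev + message).encode()).hexdigest()
    chain.append((message, digest))

def verify(chain):
    """Recompute the chain from the start; any edited entry fails the check."""
    prev = "0" * 64
    for message, digest in chain:
        if hashlib.sha256((prev + message).encode()).hexdigest() != digest:
            return False
        prev = digest
    return True

log = []
append_entry(log, "vm1 started")
append_entry(log, "login root from 10.0.0.9")
print(verify(log))
log[0] = ("vm1 started late", log[0][1])   # an attacker edits an entry
print(verify(log))
```

Moving such a tamper-evident log service out of the guest kernel (for example, into the VMM or a privileged VM) is what keeps it credible even when the guest OS is compromised.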