Virtualization
This chapter gives an overview of how the combination of network virtualization and Software-Defined Networking provides full network virtualization, including the control of individual features such as forwarding rules inside a network.
1.1. Introduction
Communication networks such as the Internet, data center networks or enterprise
networks have become a critical infrastructure of our society. Although these commu-
nications networks and their protocols have been a great success, they have been de-
signed for providing connectivity in a best-effort manner. However, given the current and emerging application demands, best-effort connectivity is no longer sufficient. The reasons are manifold: future applications like the Internet-of-Things or robotics demand predictable performance while they are sharing the same underlying network infras-
tructure. Whereas traditional communication networks have been planned and oper-
ated by humans, resulting in rather slow operation and updates, modern applications require fast and automatic adaptation to new requirements, such as those of future networking applications.
Indeed, traditionally it has been assumed that communication networks serve ap-
plications with homogeneous network resource requirements not changing over time. In contrast, today's application demands can change within seconds or even milliseconds and are possibly highly diverse with respect to their required network
resources [Benson et al., 2010, Erman and Ramakrishnan, 2013, Garcia-Dorado et al.,
2012, Gehlen et al., 2012]. For example, for Cyber-Physical Systems (CPSs), communication networks must serve latency-critical control loops where resource adaptations must be put into effect within milliseconds. Moreover, many of these heterogeneous applications share the same physical infrastructures. As a consequence, they rely on the same protocol stack. However, communication network infrastructures with their current protocol stacks cannot react to such diverse and changing demands in a timely manner. Hence, today's communication networks lack the flexibility in pro-
viding efficient resource sharing with a high level of adaptability, needed to support
demands with diverse network resource requirements changing over time. Overall, this
results in a performance that is far from optimal for both the network operators and the users.
Two paradigms, namely Network Virtualization (NV) [Anderson et al., 2005] and
Software-Defined Networking (SDN) [McKeown et al., 2008], are expected to cope with
those requirements for flexible network resource sharing and adaptability. Whereas NV enables flexible resource sharing [Anderson et al., 2005], SDN introduces a new way of flexibly programming network resources. NV allows Infrastructure Providers (InPs) to offer their physical networks to multiple tenants, i.e., Service Providers (SPs), to use virtual resources according to their users' demands. Due to NV, InPs and SPs can control physical and virtual resources re-
spectively in a dynamic and independent manner. To gain the highest efficiency out
of virtualized networks, InPs need mechanisms that quickly provide (virtual) network
resources in a predictable and isolated manner. On the other hand, SPs should be able to flexibly request and control their resources with a high degree of freedom. Hence, NV
opens a new path towards communication systems hosting multiple virtual networks
of SPs.
SDN decouples control planes of network devices, such as routers and switches,
from their data planes. Using open interfaces such as OpenFlow [McKeown et al., 2008], SDN provides new means of network programmability.
With networks being completely programmable, SDN can realize Network Operating Systems (NOSs) integrating new emerging concepts; for instance, NOSs can be tailored to integrate emerging mechanisms from the research field of Artificial Intelligence (AI). This might
lead to future NOSs and communication networks that self-adapt to unforeseen events, e.g., based on knowledge that is inferred at runtime from the behavior of network users.
Combining NV and SDN offers the advantages of both worlds: a flexible and dynamic sharing of network resources through NV and the programmability of those resources through SDN. This is called the virtualization of software-defined networks, resulting in virtual SDNs (vSDNs) sharing one infrastructure [Sherwood et al., 2009, 2010, Al-Shabibi et al., 2014a,b].
With both paradigms, it is expected that multiple vSDNs coexist while each one is in-
dividually managed by its own NOS. The combination makes it possible to implement,
test, and even introduce new NOSs at runtime into existing networking infrastructures.
Similar to the hypervisor in server virtualization, which manages virtual machines (VMs) and their physical resource access [Barham et al., 2003], a so-called virtualiza-
tion layer realizes the virtualization of SDNs [Sherwood et al., 2009]. The virtualization
layer assigns, manages, and controls the physical network resources, while coordinating
the access of virtual network tenants. The virtualization layer in SDN-based networks
is realized by one or many network hypervisors [Koponen et al., 2014]. They imple-
ment the control logic needed for virtualizing software-defined networks. They act as
proxies between the NOSs of tenants and the shared physical infrastructure, where
the vSDN networks reside. Due to their key position, a deep understanding of the design of network hypervisors is needed in order to provide SPs with guaranteed and predictable network performance - a critical obstacle for the adoption of NV in practice. The remainder of this chapter first introduces the foundations of network virtualization and the SDN network hypervisor concept, i.e., NV and SDN.
Figure 1.1: (a) Legacy network where the control plane (CP) and the data plane (DP) are integrated into a device. (b) Software-defined network where the control plane (CP) is decoupled from the data plane (DP) of the devices.
Afterwards, we explain the main functions of SDN network hypervisors. Finally, we briefly survey the first SDN network hypervisor, namely FlowVisor.
1.2. Background
In legacy networks, the control plane of a device is tightly integrated with its data plane. Fig. 1.1a shows an example network where the control plane is distributed
among the devices. The control plane is responsible for control decisions, e.g., to
populate the routing tables of Internet Protocol (IP) routers for effective packet forwarding. As the control logic is distributed among the devices, an agreement on the available functions and protocols is always needed in case of adaptations. This,
however, may hinder the innovation of communication networks: in order to let oper-
ating systems cooperate, basic agreements need to be established first. Achieving such agreements, e.g., through standardization, can be a slow process.
In order to overcome such problems, SDN decouples the control plane from the
data plane, which allows a centralized logical control of distributed devices [McKeown
et al., 2008]. Fig. 1.1b illustrates an example where the control is decoupled from the
devices. The control plane logic is centralized in the SDN controller, which operates
the SDN switches. The centralized control maintains the global network state, which is
distributed across the data plane devices. While the data plane devices still carry out
the forwarding of data packets, the centralized control now instructs the data plane via the Data-Controller Plane Interface (D-CPI), earlier also known as the Southbound interface, of the networking devices. The D-CPI is used between the physical data plane and the (log-
ically centralized) control plane. The connection between SDN devices and SDN controllers is realized via control channels. Despite these external control channels, SDN switches still need to implement logic, so-called agents, which receive and
execute commands from the external control plane. To provide a common operation
and control among heterogeneous SDN devices, instruction sets that abstract the
physical data plane hardware are needed. The most popular instruction set is the OpenFlow (OF) protocol [McKeown et al., 2008]. One of the first SDN controllers has been NOX [Gude et al., 2008]. Many SDN controllers
have followed, e.g., Ryu [ryu], ONOS [Berde et al., 2014], Beacon [Erickson, 2013], and OpenDaylight [OpenDaylight, 2013]. This variety allows operators to freely design control plane architectures along the whole spectrum from fully centralized to fully distributed control.
SDN applies a match and action paradigm to realize packet forwarding decisions.
The SDN controller pushes instruction sets to the data plane, which include a match
and action specification. In SDN, match specifications define a flow of packets, i.e., net-
work traffic flow. A match is determined by the header values of packets of a network
flow. For example, OF defines a set of header fields including, e.g., the Transmission
Control Protocol (TCP) header and the IP header. An SDN controller instructs SDN
switches to match on the specified fields and apply actions. The main actions are for-
ward, drop, or modify network packets. Each version of the OF specification extended
the set of header fields that can be matched on as well as the available actions.
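To make the match-and-action paradigm concrete, the following minimal sketch installs one rule using the Ryu controller framework and OpenFlow 1.3; the destination address, TCP port, output port, and priority are purely illustrative assumptions.

```python
# Minimal Ryu sketch (OpenFlow 1.3 assumed): install a match-and-action rule
# that forwards TCP traffic towards an illustrative web server out of port 2.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class MatchActionExample(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def install_rule(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser

        # Match: header fields that define the network flow.
        match = parser.OFPMatch(eth_type=0x0800,      # IPv4
                                ip_proto=6,           # TCP
                                ipv4_dst='10.0.0.1',  # hypothetical server
                                tcp_dst=80)

        # Action: forward matching packets out of physical port 2.
        actions = [parser.OFPActionOutput(2)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]

        # The rule is pushed into the switch's flow table via OFPT_FLOW_MOD.
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=10,
                                      match=match, instructions=inst))
```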
An SDN switch stores the instructions on how to handle network packets in one or
multiple flow tables. The size available for storing flow table entries is a defining characteristic of an SDN switch.
Current OF switches use Ternary Content Addressable Memories (TCAMs) for hard-
ware tables, which are fast but costly and limited. In contrast, software tables provide
more space but are slower than hardware tables in storing and complex matching
of network flows [Kuźniar et al., 2015]. Accordingly, for predictable isolation, such differences between table types need to be taken into account. Besides the D-CPI towards the data plane, SDN controllers offer an Application-Controller Plane Interface (A-CPI) towards network applications. Networking applications, like firewalls or load balancers, reside in the application control plane and use such A-CPIs. Networking applications can be developed upon the programmatic interfaces of the controllers: while application designers and operators can again freely develop in any programming lan-
guage, they are dependent on the A-CPI protocol of the respective controller, e.g.,
the REST API of ONOS [Berde et al., 2014] in OF-based networks. Unfortunately, no
common instruction set for the A-CPI has been defined yet.
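As an illustration of such a controller-specific A-CPI, the sketch below queries the REST interface of an ONOS instance for its device list; the URL, port, and credentials are assumed defaults and may differ per deployment and ONOS version.

```python
# Sketch of a network application using a controller's A-CPI (ONOS REST API
# assumed); endpoint path, port, and credentials are assumptions.
import requests

ONOS_API = "http://127.0.0.1:8181/onos/v1"   # assumed local ONOS instance
AUTH = ("onos", "rocks")                     # assumed default credentials


def list_devices():
    """Return the switches currently known to the controller."""
    resp = requests.get(f"{ONOS_API}/devices", auth=AUTH, timeout=5)
    resp.raise_for_status()
    return resp.json().get("devices", [])


if __name__ == "__main__":
    for dev in list_devices():
        print(dev.get("id"), "available:", dev.get("available"))
```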
This book chapter targets network hypervisors for OpenFlow (OF)-based SDN net-
works. Hence, this section introduces background information on the OF protocol, its
implementation aspects and message types, which are defined by the Open Networking Foundation (ONF) [Open Networking Foundation (ONF), 2011a,b, 2012, 2013, 2014a]. Moreover, it introduces several SDN network hypervisors.
Since OF is widely adopted, many OF controllers, switches, and network hypervisors exist. Accordingly, this chapter discusses virtualization in the context of OF. In OF networks, switches are connected via one or multiple OpenFlow (OF) control channels with the controller.
An OF switch typically has one control channel for one controller; auxiliary (par-
allel) control channels are possible, e.g., to improve redundancy. The OF specification
does not specify whether the control channel is realized as an out-of-band network, i.e., with dedicated switches for the control traffic, or as an in-band one, where control traffic is transmitted through the switches of the data plane.
Literature sometimes calls the entity that implements the OF specification on the
switch side the OpenFlow (OF) agent [Kreutz et al., 2015]. Given the IP address
of the controller, OF agents initiate TCP connections to the controller. A controller needs to establish connections to all switches before starting network control and operation. OF messages, i.e., commands, are then exchanged over these control channels. The OF specification distinguishes Controller-to-Switch, Asynchronous, and Symmetric messages. Controller-to-Switch messages are initiated by the controller, e.g., to request features or to send a packet out on the data path of the switch. Only
switches send asynchronous messages. Switches use these messages to report network
events to controllers or changes of their states. Both controllers and switches can send Symmetric messages, which are sent without solicitation. The following message types are of particular relevance in this chapter.

Asynchronous messages:
OFPT_PACKET_IN: Switches send OFPT_PACKET_IN messages to controllers to transfer the control of a packet. They are either triggered by a flow entry (a rule that explicitly forwards matching packets to the controller) or by a table miss (no matching flow entry could be found and the switch then sends OFPT_PACKET_IN messages as its default behavior).
Controller-to-Switch messages:
OFPT_FEATURES_REQUEST and OFPT_FEATURES_REPLY: This message request/reply
pattern exchanges the main information on switch identities and on switch capa-
bilities. A controller normally requests features once when a new control channel
connection is established.
OFPT_FLOW_MOD: This message modifies flow table entries; it adds, modifies, or removes flow entries in the flow tables of a switch.
OFMP_PORT_STATS: The controller sends this message to request statistics about one or many ports of the switch. Statistics include, e.g., counters of received and transmitted packets and bytes.
OFPT_PACKET_OUT: A controller uses this message type to send a packet out through the datapath of a switch. The controller sends this message, for instance, to discover the network topology by injecting Link Layer Discovery Protocol (LLDP) packets.
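The following Ryu sketch (OpenFlow 1.3 assumed) ties two of these message types together: the switch reports an unmatched packet via OFPT_PACKET_IN and the controller answers with an OFPT_PACKET_OUT; flooding is used here only for illustration, a real controller would compute a forwarding decision.

```python
# Minimal Ryu sketch: handle OFPT_PACKET_IN and reply with OFPT_PACKET_OUT.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class PacketInExample(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def packet_in_handler(self, ev):
        msg = ev.msg                       # the OFPT_PACKET_IN message
        dp = msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser

        # Resend the packet on all ports (illustrative flooding decision).
        actions = [parser.OFPActionOutput(ofp.OFPP_FLOOD)]
        data = msg.data if msg.buffer_id == ofp.OFP_NO_BUFFER else None
        out = parser.OFPPacketOut(datapath=dp, buffer_id=msg.buffer_id,
                                  in_port=msg.match['in_port'],
                                  actions=actions, data=data)
        dp.send_msg(out)                   # the OFPT_PACKET_OUT message
```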
Afterwards, we connect both worlds, i.e., SDN and NV, through the notion of network
hypervisors.
Figure 1.2: Traditional business model with ISP versus NV business model with InP. (a) Traditional role: the ISP provides network access to service providers and customers; SPs cannot impact network decisions and resource allocation. (b) NV business model: SPs request virtual networks; customers connect to services via virtual networks.
1.3. Network Virtualization
The virtualization of computing and storage resources has been one of the main drivers for the deployment of data centers and clouds [Goldberg,
1974, Li et al., 2010, Sahoo et al., 2010, Douglis and Krieger, 2013, Smith and Nair,
2005, Zhang et al., 2014]. Inspired by this successful development, NV has initially
been investigated for testbed deployments [Anderson et al., 2005, Turner and Taylor,
2005, Feamster et al., 2007]. The idea of sharing physical networking resources among multiple virtual networks has subsequently been extended to networks serving production network traffic: NV is seen as the key enabler for overcoming the
ossification of the Internet [Anderson et al., 2005, Feamster et al., 2007, Turner and
Taylor, 2005]. As the idea of virtual networks is not new in general (e.g., VLAN de-
fines layer 2 virtual networks), different network virtualization definitions and models
have been proposed [Chowdhury and Boutaba, 2008, Bari et al., 2013, Casado et al.,
2010, Koponen et al., 2015, 2014], e.g., based on the networking domain (data centers, wide area networks) or the applied virtualization technology.
NV has led to new business models, which are seen as main drivers for innovation
for communication network technologies. One business model for network virtualization distinguishes between infrastructure providers and service providers, as explained in the following. Fig. 1.2 compares the traditional roles with the NV roles. Traditionally, an
Internet Service Provider (ISP) provides Internet access for its customers towards a
Service Provider (SP), such as Google, Netflix, Amazon, etc., which host their ser-
vices in data centers (Fig. 1.2a). In the business models of network virtualization, as
illustrated in Fig. 1.2b, the traditional role of an Internet Service Provider (ISP) of
managing and operating networks is split into an SP role and an InP role [Feamster
et al., 2007]. The SP’s role can be enriched with network control. They become the
operators of virtual networks. Thus, SPs can use their knowledge about their services
and applications to implement advanced network control algorithms, which are de-
signed to meet the service and application requirements. It is then the task of the
InP to provide virtual networks to the SPs. SPs (tenants) might even create virtual networks on top of other virtual networks, leading to a hierarchy of roles [Chowdhury and Boutaba, 2009]. Hence, different networking domains use varying technolo-
gies to realize virtual networks. For instance, VXLAN [Mahalingam et al., 2014],
GRE [Farinacci et al., 2000], or GRE’s NV variant NVGRE [Garg and Wang, 2015]
are used in data centers to interconnect virtual machines of tenants. Techniques such
as Multiprotocol Label Switching (MPLS) [Xiao et al., 2000, Rosen et al., 2001] create virtual paths across wide area networks.
Full network virtualization comprises all physical resources that are needed to
provide virtual networks with guaranteed and predictable performance. The ability to
program virtual networks, e.g., by using SDN, is a further important key aspect of (full)
network virtualization [Koponen et al., 2014]. Taking a look at Virtual Local Area Networks (VLANs), for example, they separate traffic but provide neither such programmability nor resource guarantees. To fully benefit from NV opportunities, tenants should obtain virtual network resources,
including full views of network topologies and allocated networking resources, involving
link data rates and network node resources, such as Central Processing Unit (CPU)
or memory.
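A full vSDN request could therefore look like the following illustrative sketch; the attribute names and values are hypothetical and only indicate which resources a tenant specifies.

```python
# Hypothetical vSDN request: topology plus node and link resource demands.
vsdn_request = {
    "tenant": "tenant_a",
    "nodes": {
        "v1": {"cpu_cores": 2, "memory_mb": 512, "flow_table_entries": 2000},
        "v2": {"cpu_cores": 1, "memory_mb": 256, "flow_table_entries": 1000},
    },
    "links": [
        {"src": "v1", "dst": "v2", "data_rate_mbps": 500, "max_latency_ms": 5},
    ],
}
```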
Providing isolated and programmable virtual networks has manifold advantages:
first, network operators can design, develop, and test novel networking paradigms without changing the existing Internet architecture [Anderson et al., 2005]. Second, network systems that are designed to meet the demands of the served applications or users do not suffer from the overhead of functionality they do not need. Third, SPs should be enabled to offer new services over existing infrastructures much faster and with higher flexibility, i.e., with the ease to adapt their networks to changing user and service
demands [Keller et al., 2012]. One way to offer this kind of feature is to use so-called network hypervisors. They provide the main network functions for the virtualization of SDN networks.
In this section, we explain how to virtualize SDN networks through a network hyper-
visor and its virtualization functions. We highlight the main functions that need to be provided by a network hypervisor in order to realize NV.
Figure 1.3: Comparison of virtual networks, SDN network, and virtual SDN networks. Dotted lines in Fig. 1.3a indicate the embedding of the virtual nodes. Dashed lines in Fig. 1.3b illustrate the control connections between the SDN controller and the physical SDN network. Black dashed lines in Fig. 1.3c show the connection between tenant controllers and the virtualization layer, while dashed colored lines show the connections between the virtualization layer and the physical SDN network. Dotted lines between virtual nodes on the physical network indicate the virtual paths between them.
With SDN providing network programmability, virtual resource programmability towards tenants can now be put into effect [Jain
and Paul, 2013]. Accordingly, NV is seen as one killer application of SDN [Drutskoy
et al., 2013, Feamster et al., 2014], that is, it provides the programmability of virtual
network resources. The result of combining NV and SDN are vSDNs sharing the same
infrastructure.
Fig. 1.3 compares traditional virtual networks, SDN networks, and virtual software-defined networks. Figure 1.3a shows the traditional
view of virtual networks. Two virtual networks are hosted on a substrate network.
The dashed lines illustrate the location of the virtual resources. The interconnection
of the virtual nodes is determined by the path embedding or routing concept of the
InP. A clear way how a tenant can control and configure its virtual network is not
given. SDN provides one option for providing virtual network resource control and
even configuration to tenants. Figure 1.3b illustrates how an SDN controller operates
on top of a physical SDN network. The SDN controller is located outside of the
network elements. It controls the network based on a logically centralized view. Fig-
ure 1.3c shows the combination of NV and SDN. A virtualization layer is responsible
for managing the physical network. Besides, the virtualization layer orchestrates the
control access among SDN controllers (here SDN C1 and C2) of tenants. As an ex-
ample, tenant 1 (VN 1) has access to three network elements while tenant 2 (VN 2)
has access to two network elements. Note the virtualization layer in the middle that is realized by one or more network hypervisors.
By adding a virtualization layer, i.e., a network hypervisor, on top of the networking
hardware, multiple vSDN operating systems are enabled to control resources of the
same substrate network. This concept has been proposed by [Sherwood et al., 2009,
2010]. The network hypervisor interacts with the networking hardware via the D-CPI
through an SDN protocol, e.g., OF. In case of NV, the hypervisor provides on top the
same D-CPI interface towards virtual network tenants. This feature of the hypervisor
to interface through multiple D-CPIs with multiple virtual SDN controllers is seen as the defining characteristic of SDN network hypervisors.
Fig. 1.4 illustrates the difference between SDN networks and vSDNs in terms of
their interfaces to tenants. In SDN networks (Fig. 1.4a), network applications are
running on top of an SDN controller. The applications use the A-CPI of the controller to realize their network functions.
For vSDNs, as depicted in Fig. 1.4b, the network hypervisor implements the vir-
tualization layer. The tenants now communicate again via a D-CPI with the network
hypervisor. Still, on top of the tenant controllers, applications communicate via the
controllers’ A-CPI interfaces during runtime. The hypervisor acts as a proxy: it in-
tercepts the control messages between tenants and the physical SDN network. The
hypervisor acts as the SDN controller towards the physical SDN network. It trans-
lates the control plane messages between the tenant SDN controllers and the physical
Figure 1.4: Comparison of SDN and Virtual Software-Defined Network (vSDN) setups, and respective interfaces. (a) Applications interact via the A-CPI with the SDN controller; the SDN controller interacts via the D-CPI with the physical SDN network. (b) The tenant controllers communicate through the SDN network hypervisor with their virtual switches.
SDN network. Message translation is the main functional task a hypervisor has to
accomplish.
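The translation idea can be sketched as follows; the classes and mappings are purely illustrative and do not correspond to the API of any existing hypervisor.

```python
# Illustrative sketch of the hypervisor proxy role: control messages of a
# tenant reference virtual switch/port identifiers, which are translated to
# the physical identifiers before the message is relayed to the substrate.
from dataclasses import dataclass


@dataclass
class FlowMod:
    dpid: int        # switch (datapath) identifier
    in_port: int
    match: dict
    actions: list


class NetworkHypervisorProxy:
    def __init__(self):
        # Per-tenant translation maps: virtual -> physical identifiers.
        self.dpid_map = {}   # (tenant, virtual_dpid) -> physical_dpid
        self.port_map = {}   # (tenant, virtual_dpid, virtual_port) -> physical_port

    def to_physical(self, tenant: str, vmsg: FlowMod) -> FlowMod:
        """Translate a tenant FlowMod into a message for the physical switch."""
        pdpid = self.dpid_map[(tenant, vmsg.dpid)]
        pport = self.port_map[(tenant, vmsg.dpid, vmsg.in_port)]
        return FlowMod(dpid=pdpid, in_port=pport,
                       match=dict(vmsg.match), actions=list(vmsg.actions))
```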
Network hypervisors have to fulfill several requirements when virtualizing SDN networks: they need to grant access to tenants, isolate the virtual networks
on the data plane, avoid interference on the control plane, guarantee predictable net-
work operation, grant adaptation capabilities, etc. In the following, we outline the two main functions of SDN network hypervisors: abstraction and isolation.
Figure 1.5: Comparison between switch partitioning and switch partitioning & aggregation. Fig. 1.5a shows a physical switch partitioned into two virtual switches. Fig. 1.5b illustrates partitioning and aggregation; here, the right virtual switch represents an aggregation of two physical switches.
1.4.3.1. Abstraction
Overall, many tasks are affected by the realization of the main feature that hypervisors
need to offer — the abstraction feature. Abstraction means "the act of considering something as a general quality or characteristic, apart from concrete realities, specific objects, or actual instances". To gain the full advantage of NV and SDN [Casado et al., 2014, 2010, Douglis and Krieger, 2013], an SDN network hypervisor should be able to abstract details of the physical SDN network. The level of detail that is abstracted is determined by the network hypervisor. The available features and capabilities are directly communicated to the tenants.
Three SDN network abstraction features are seen as the basic building blocks of a
virtualization layer for SDN: topology abstraction, physical node resource abstraction, and physical link resource abstraction.
Topology Abstraction
Topology abstraction involves the abstraction of topology information, i.e., the in-
formation about the physical nodes and links that tenants receive as their view of
the topology. The actual view of tenants is defined by the mapping of the requested
Figure 1.6: Comparison between link abstraction procedures. On top the requested virtual network; in the middle the provided view based on the embedding shown at the bottom. (a) View abstraction with a 1-to-N link mapping and 1-to-1 node mapping, which involves the intermediate node; no path abstraction and no path splitting. (b) View abstraction with 1-to-N link mapping and 1-to-1 node mapping, which does not involve the intermediate node; path abstraction and no path splitting. (c) Link abstraction with 1-to-N link mapping and 1-to-1 node mapping, which does not involve the intermediate nodes; path abstraction and path splitting.
nodes/links to the physical network and the abstraction level provided by the virtualization layer. We denote the mapping of one virtual node/link to multiple physical nodes/links as a "1-to-N" mapping. A virtual node, for instance, can span
across many physical nodes. In case a tenant receives a ”1-to-N” mapping without
abstraction, he has to do additional work; the tenant has to implement the forwarding across the intermediate physical elements himself. For example, if a tenant requests only a virtual link between two nodes, he receives a view also containing intermediate nodes to be managed by the tenant. In case nodes and links are abstracted, the tenant is relieved from managing these intermediate resources.
The provided information about nodes involves their locations and their intercon-
nections through links. A virtual node can be realized on one (”1-to-1”) or across many
physical nodes (”1-to-N”). Fig. 1.5 illustrates the two cases. Fig. 1.5a shows an exam-
ple where a switch is partitioned into multiple virtual instances. Each virtual switch is
running on one physical switch only. As an example for node aggregation, i.e., where
two physical instances are abstracted as one, a tenant operating a secure SDN network
wants to operate incoming and outgoing nodes of a topology only. Thus, a physical
topology consisting of many nodes might be represented via ”one big switch” [Mon-
santo et al., 2013, Chowdhury and Boutaba, 2008, Casado et al., 2010, Jin et al., 2015].
As illustrated in Fig. 1.5b, the right green switch is spanning two physical instances.
The network hypervisor aggregates the information of both physical switches into one
virtual switch.
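The "one big switch" aggregation can be illustrated with the following sketch; all data structures are hypothetical and only show how the externally visible ports of several physical switches are relabeled into a single virtual port space.

```python
# Illustrative "one big switch" aggregation: two physical switches are exposed
# as one virtual switch; ports of their internal interconnection stay hidden.
PHYSICAL_SWITCHES = {
    'p1': {'ports': [1, 2, 3]},
    'p2': {'ports': [1, 2]},
}


def build_big_switch(view_name, members, hidden_ports):
    """Aggregate several physical switches into one virtual switch view."""
    virtual_ports = {}
    next_vport = 1
    for sw in members:
        for port in PHYSICAL_SWITCHES[sw]['ports']:
            if (sw, port) in hidden_ports:
                continue                             # internal link, hidden
            virtual_ports[next_vport] = (sw, port)   # vport -> physical port
            next_vport += 1
    return {'name': view_name, 'ports': virtual_ports}


# Ports p1:3 and p2:1 form the internal link between the two members.
big_switch = build_big_switch('vswitch-green', ['p1', 'p2'],
                              hidden_ports={('p1', 3), ('p2', 1)})
```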
Similarly, different mapping and abstraction options for virtual links or paths ex-
ist. Physical links and paths are abstracted as virtual links. Realizations of virtual
links can consist of multiple hops, i.e., physical nodes. A tenant’s view might contain
the intermediate nodes or not. An example where intermediate nodes are not hidden
from the view is shown by Fig. 1.6a. The request involves two nodes and a virtual
path connecting them. As the physical path realization of the virtual link spans an
intermediate node, the view depicts this node. Fig. 1.6b shows a view that hides the
intermediate node. Besides, a virtual path can also be realized via multiple physical
paths, which is illustrated by Fig. 1.6c. For this, the infrastructure needs to provide multiple physical paths between the corresponding nodes.
Physical Node Resource Abstraction
Physical node resources mainly comprise CPU and memory resources. For SDN networks, memory is additionally differentiated from the space to
store matching entries for network flows, i.e., flow table space. While flow table space
is only used for realizing the match-and-action paradigm, memory might be needed for additional state information (such as a byte counter for charging). CPU resource information can be abstracted in different
ways: it can be represented by the number of available CPU cores or by the share of CPU time, and similarly for the available amount of memory. Flow table resources involve, e.g., the number of flow tables or the number
of TCAMs [Pagiamtzis and Sheikholeslami, 2006, Panigrahy and Sharma, 2002]. For
Figure 1.7: Network hypervisors isolate three network virtualization attributes: control plane, data plane, and vSDN addressing.
instance, if switches provide multiple table types such as hardware and software tables,
network hypervisors need to reserve parts of these tables according to the demanded
performance. Again, based on the level of abstraction, the different table types (soft-
ware or hardware) are abstracted from tenants' views in order to lower operational complexity for the tenants.
Physical Link Resource Abstraction
Physical link resources comprise the data rates of the links and the available queues, such as different priorities in case of a priority queuing discipline, as well as the link
buffers. Tenants might request virtual networks with delay or loss guarantees. To
guarantee delay and upper loss bounds, substrate operators need to operate queues
and buffers for the tenants. That is, the operation of queues and buffers is abstracted
from the tenant. However, tenants might even request to operate queues and buffers
themselves. For instance, the operation of meters, which rely on buffers, is a funda-
mental feature of recent OF versions. To provide tenants with their requested modes
of operations, like metering, substrate operators need to carefully manage the physical
resources.
1.4.3.2. Isolation
Network hypervisors should provide isolated virtual networks for tenants while trying
to achieve the best possible resource efficiency out of the physical infrastructure.
While tenants do not only demand physical resources for their operations, physical
resources are also needed to put virtualization into effect, e.g., to realize isolation.
Physical resources can be classified into three main categories as depicted in Fig. 1.7: the provided addressing space, the control plane resources, and the data plane resources.
Addressing Space Isolation
Tenants should be able to use the whole addressing space of an SDN network. This means tenants should be free to address flows according to their needs. In non-virtualized SDN networks, the usable addressing space is determined by the deployed setting, i.e., the type of network (e.g., layer 2 only) and the used protocols (e.g., MPLS as the
only tunneling protocol). The available headers and their configuration possibilities
determine the amount of available flows. In virtualized SDN networks, more possibili-
ties are available: e.g., if the vSDN topologies of two tenants do not overlap physically,
the same address space in both vSDNs would be available. Or if tenants request lower
OF versions than the deployed ones, extra header fields added by higher OF versions
could also be used for differentiation. For instance, OF 1.1 introduced MPLS labels that can be used to separate the traffic of different tenants.
Control Plane Isolation
The resources of the servers (i.e., their CPU, memory, etc.), which host SDN controllers, directly influence the
control plane performance [Kuzniar et al., 2014, Rotsos et al., 2012, Tootoonchian
and Ganjali, 2010, Tootoonchian et al., 2012]. For instance, when an SDN controller
is under heavy load, i.e., its available CPUs run at high utilization, the processing of OF control packets might be delayed. On the other hand, also the resources of the control channels and switches
can impact the data plane performance. In traditional routers, the routing processor
running the control plane logic communicates with the data plane elements (e.g.,
Forwarding Information Base (FIB)) over a separate bus (e.g., a PCI bus). In SDN,
controllers are running as external entities with their control channels at the mercy of
the network. If a control channel is currently utilized by many OF packets, delay and
even loss might occur, which can lead to delayed forwarding decisions. On switches, so
called agents manage the connection towards the external control plane. Thus, switch
resources consumed by agents (node CPU & memory, and link buffer & data rate) can also affect the control plane performance. In vSDNs, the control plane performance additionally depends on several factors: the physical network, i.e., node and link capabilities; the processing speed
of a hypervisor being determined by the CPU of its host; the control plane latency
between tenant controllers and hypervisors is determined by the available data rate of
the connecting network. What comes in addition is the potential drawback of sharing
infrastructures with too many control messages. Without isolation, the overload gen-
erated by a tenant can then degrade the perceived performance of other tenants, which
degrades the data plane performance. To provide predictable and guaranteed perfor-
mance, a resource isolation scheme for tenants needs to carefully allocate all involved resources.
Data Plane Isolation
The main resources of the data plane are node CPUs, node hardware accelerators, node
flow table space, link buffers, link queues, and link data rates. Node resources need to
be reserved and isolated between tenants for efficient forwarding and processing of the
tenants’ data plane traffic. On switches, for instance, different sources can utilize their
CPU resources: (1) generation of SDN messages, (2) processing data plane packets
on the switches’ CPUs, i.e., their ”slow paths”, and (3) switch state monitoring and
storing [Sherwood et al., 2009]. As there is an overhead of control plane processing due
to involved networking operations, i.e., exchanging control messages with the external control plane, data plane isolation needs to take into account all involved resources. Besides, the utilization of the resources might change under
varying workloads. Such variations also need to be taken into account when allocating
resources.
1.5. FlowVisor: The First SDN Network Hypervisor
FlowVisor (FV) [Sherwood et al., 2009] has been the first hypervisor for virtualizing SDN networks, slicing the resources of an OF network between multiple SDN controllers. In this section, we will briefly describe how FlowVisor addresses some of the aforementioned attributes of network hypervisors. For a more detailed description, the reader is referred to [Sherwood et al., 2009, 2010].
Architecture. FV is a software network hypervisor and can run stand-alone on
any commodity server or inside a virtual machine. Sitting between the tenant SDN
controllers and the SDN networking hardware, FV processes the control traffic between
tenants from and to the SDN switches. FV further controls the view of the network
towards the tenants, i.e., it can abstract switch resources. FV supports OF 1.0 OpenFlow switches.
Flowspace Isolation. FV defines flowspaces for tenants, i.e., the part of the packet header space a tenant is allowed to control. If tenants try to address flows outside their flowspaces, FV rewrites or rejects the respective control messages. Packet header fields are thus used for flowspace isolation. For shared switches, FV ensures that tenants cannot share flowspaces, i.e., packet headers. For non-shared switches, flowspaces can be reused among tenants.
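A flowspace check could, in a strongly simplified form, look like the following sketch; the per-tenant prefixes and the restriction to destination IP addresses are assumptions made for illustration only.

```python
# Simplified flowspace isolation: a tenant's rule is only accepted if its
# match stays inside the tenant's assigned slice of the header space.
import ipaddress

TENANT_FLOWSPACE = {
    'tenant_a': '10.1.0.0/16',   # assumed destination-IP slice of tenant A
    'tenant_b': '10.2.0.0/16',   # assumed destination-IP slice of tenant B
}


def within_flowspace(tenant: str, match: dict) -> bool:
    """Check whether a match stays inside the tenant's flowspace."""
    allowed = ipaddress.ip_network(TENANT_FLOWSPACE[tenant])
    requested = ipaddress.ip_network(match.get('ipv4_dst', '0.0.0.0/0'),
                                     strict=False)
    return requested.subnet_of(allowed)


assert within_flowspace('tenant_a', {'ipv4_dst': '10.1.2.0/24'})
assert not within_flowspace('tenant_a', {'ipv4_dst': '10.2.0.0/16'})
```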
Bandwidth Isolation. While OF in its original version has not provided any QoS
techniques for data plane isolation, FV realized data plane isolation by using VLAN
priority bits in data packets. Switches are configured out-of-band by a network ad-
ministrator to make use, if available, of priority queues. FV, then, rewrites tenant
rules to further set the VLAN priorities of the tenants' data packets [Sofia, 2009]. The so-called VLAN Priority Code Point (PCP) is a 3-bit field in the VLAN header that maps to eight distinct priorities. As a result, data packets can be mapped to the priority queues configured on the switches.
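In a strongly simplified form, the rewriting of tenant rules can be sketched as follows; the message format and the slice-to-PCP mapping are hypothetical.

```python
# Simplified bandwidth isolation: before a tenant rule reaches the switch,
# an action is prepended that stamps the slice's VLAN PCP value, so that the
# switch's (out-of-band configured) priority queues can separate the slices.
TENANT_PCP = {'tenant_a': 5, 'tenant_b': 1}   # assumed slice-to-PCP mapping


def enforce_pcp(tenant: str, flow_mod: dict) -> dict:
    """Rewrite a tenant FlowMod so its packets carry the slice's VLAN PCP."""
    rewritten = dict(flow_mod)
    rewritten['actions'] = [{'type': 'SET_VLAN_PCP',
                             'pcp': TENANT_PCP[tenant]}]
    rewritten['actions'] += list(flow_mod.get('actions', []))
    return rewritten
```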
Topology Isolation. FV isolates the topology in a way that tenants only see the
ports and switches that are part of their slices. For this, FV edits and forwards only those OF messages, e.g., topology discovery messages, that belong to the resources of a slice.
Switch CPU Isolation. In contrast to legacy switches, where control does not need to be exchanged with an external entity, the CPUs of SDN switches are additionally loaded by the OF messages they need to process. The reason is that the processing needed for external connec-
tions adds overhead on switch CPUs. In detail, OF agents need to encapsulate and
decapsulate control messages from and to TCP packets. Thus, in case a switch has
to process too many OF messages, its CPU might become overloaded. In order to
ensure that switch CPUs are shared efficiently among all slices, FV limits the rate of
messages sent between SDN switches and tenant controllers. For this, FV implements per-slice rate limiting of the exchanged control messages.
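Per-slice rate limiting can be realized, for example, with a token bucket per slice, as in the following sketch; rates, burst sizes, and the drop-on-exceed policy are illustrative assumptions, not FlowVisor's actual parameters.

```python
# Illustrative per-slice rate limiting of control messages via token buckets.
import time


class TokenBucket:
    def __init__(self, rate_per_s: float, burst: float):
        self.rate = rate_per_s          # refill rate in messages per second
        self.capacity = burst           # maximum burst size
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Return True if one more OF message may be forwarded now."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False


# One bucket per slice; messages beyond the budget are queued or dropped.
slice_limiters = {'tenant_a': TokenBucket(rate_per_s=100, burst=20),
                  'tenant_b': TokenBucket(rate_per_s=50, burst=10)}
```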
Flow Table Isolation. Packet matching needs to happen at line rate; a memory technology that operates in such timescales is TCAM [Fang Yu et al., 2004]. However, due to its cost, TCAM space is limited. FV reserves for each tenant parts of the table space for
operation. For this, FV keeps track of the used table space per tenant. Tenants cannot
use table space of other tenants in case they demand more table space. FV sends tenants that exceed their capacities an indicator message telling them that the flowspace is full.
Finally, FV isolates OF transaction identifiers: each tenant controller uses its distinct transaction identifiers. If tenants use the same identifier, FV rewrites the identifiers so that the control messages of different slices do not interfere.
1.6. Summary
In this chapter we have described the necessity of network abstraction to address heterogeneous and changing application demands. In particular, network virtualization provides an abstraction of the network topology and of the net-
work node and link resources to offer an individual network view, a virtual network,
to a network application provider. Several virtual networks may run on the same net-
work substrate and have to be isolated from each other. A network hypervisor as an
intermediary layer takes care of providing the abstracted view and of the isolation.
As the combination of network virtualization and SDN realizes full network abstraction and programmability, this
chapter has used this combination to illustrate and explain network virtualization.
Without loss of generality, the network virtualization concept based on the described
hypervisor concepts can also be applied without SDN. However, when dealing with
Bibliography
Patrick Kwadwo Agyapong, Mikio Iwamura, Dirk Staehle, Wolfgang Kiess, and Anass
Ali Al-Shabibi, Marc De Leenheer, Matteo Gerola, Ayaka Koshibe, William Snow,
Netw. Summit (ONS), pages 1–2, Santa Clara, CA, March 2014a.
Ali Al-Shabibi, Marc De Leenheer, and Bill Snow. OpenVirteX: Make your virtual
Netw., HotSDN ’14, pages 25–30. ACM, August 2014b. ISBN 978-1-4503-2989-7.
Thomas Anderson, Larry Peterson, Scott Shenker, and Jonathan Turner. Overcoming
Hitesh Ballani, Paolo Costa, Thomas Karagiannis, and Ant Rowstron. Towards pre-
Paul Barham, Boris Dragovic, Keir Fraser, Steven Hand, Tim Harris, Alex Ho,
Rolf Neugebauer, Ian Pratt, and Andrew Warfield. Xen and the art of virtu-
alization. In Proc. ACM Symp. on Operating Systems, SOSP ’03, pages 164–
177. ACM, 2003. ISBN 1-58113-757-5. doi: 10.1145/945445.945462. URL http:
//doi.acm.org/10.1145/945445.945462.
Maxim Podlesny, Md Golam Rabbani, Qi Zhang, and Mohamed Faten Zhani. Data
Theophilus Benson, Aditya Akella, and David a. Maltz. Network traffic characteristics
of data centers in the wild. In Proc. ACM SIGCOMM IMC, page 267, New York,
New York, USA, November 2010. ACM Press. ISBN 9781450304832. doi: 10.1145/
Pankaj Berde, Matteo Gerola, Jonathan Hart, Yuta Higuchi, Masayoshi Kobayashi,
Toshio Koide, Bob Lantz, William Snow, Guru Parulkar, Brian O’Connor, and
Pavlin Radoslavov. ONOS: Towards an open, distributed SDN OS. In Proc. ACM
Workshop on Hot Topics in Softw. Defined Netw., pages 1–6, Chicago, Illinois, USA,
Andreas Blenk, Arsany Basta, Martin Reisslein, and Wolfgang Kellerer. Survey on network virtualization hypervisors for software defined networking. IEEE Commun. Surveys & Tutorials, 18(1):655–685, 2016. URL http://ieeexplore.ieee.org/document/7295561/.
Martı́n Casado, Teemu Koponen, Rajiv Ramanathan, and Scott Shenker. Virtualizing
for Extensible Services of Tomorrow (PRESTO), PRESTO ’10, pages 8:1–8:6. ACM,
http://doi.acm.org/10.1145/1921151.1921162.
Martin Casado, Nate Foster, and Arjun Guha. Abstractions for software-defined
00010782. doi: 10.1145/2661061.2661063. URL http://dl.acm.org/citation.cfm?
doid=2661061.2661063.
of the art and research challenges. IEEE Commun. Mag., 47(7):20–26, July 2009.
org/lpdocs/epic03/wrapper.htm?arnumber=5183468.
Fred Douglis and Orran Krieger. Virtualization. IEEE Internet Computing, 17(2):
Dmitry Drutskoy, Eric Keller, and Jennifer Rexford. Scalable network virtualization
David Erickson. The beacon openflow controller. In Proc. ACM Workshop on Hot
Topics in Softw. Defined Netw., HotSDN ’13, pages 13–18. ACM, August 2013.
10.1145/2491185.2491189.
Jeffrey Erman and K.K. Ramakrishnan. Understanding the super-sized traffic of the
super bowl. In Proc. ACM SIGCOMM IMC, IMC ’13, pages 353–360. ACM, 2013.
10.1145/2504730.2504770.
Fang Yu, R.H. Katz, and T.V. Lakshman. Gigabit rate packet pattern-matching using
TCAM. In Proc. IEEE ICNP, pages 174–183, October 2004. doi: 10.1109/ICNP.
2004.1348108.
Dino Farinacci, Tony Li, Stan Hanks, David Meyer, and Paul Traina. Generic
routing encapsulation (GRE). RFC 2784, RFC Editor, March 2000. URL http://www.rfc-editor.org/rfc/rfc2784.txt.
Nick Feamster, Lixin Gao, and Jennifer Rexford. How to lease the internet in
your spare time. ACM SIGCOMM Computer Commun. Rev., 37(1):61, January
citation.cfm?doid=1198255.1198265.
Nick Feamster, Jennifer Rexford, and Ellen Zegura. The road to SDN: An intellectual
Jose Luis Garcia-Dorado, Alessandro Finamore, Marco Mellia, Michela Meo, and
Access Technology Impact. IEEE Trans. on Netw. and Serv. Manage., 9(2):142–
http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=6158423.
Pankaj Garg and Yu-Shun Wang. Nvgre: Network virtualization using generic routing
Vinicius Gehlen, Alessandro Finamore, Marco Mellia, and Maurizio M. Munafò. Un-
covering the Big Players of the Web, pages 15–28. Springer Berlin Heidelberg, Berlin,
https://doi.org/10.1007/978-3-642-28534-9 2.
1974.
Natasha Gude, Teemu Koponen, Justin Pettit, Ben Pfaff, Martı́n Casado, Nick McK-
eown, and Scott Shenker. NOX: towards an operating system for networks. ACM
Raj Jain and Subharthi Paul. Network virtualization and software defined networking
for cloud computing: a survey. IEEE Commun. Mag., 51(11):24–31, November 2013.
Xin Jin, Jennifer Gossels, Jennifer Rexford, and David Walker. CoVisor: A compo-
URL http://dl.acm.org/citation.cfm?id=2789770.2789777.
Eric Keller, Dushyant Arora, Soudeh Ghorbani, Matt Caesar, and Jennifer Rex-
ford. Live Migration of an Entire Network (and its Hosts). Princeton Uni-
versity Computer Science Technical Report, pages 109–114, 2012. doi: 10.1145/
//ftp.cs.princeton.edu/techreports/2012/926.pdf.
T. Koponen, M. Casado, P.S. Ingram, W.A. Lambeth, P.J. Balland, K.E. Amidon,
Teemu Koponen, Martin Casado, Natasha Gude, Jeremy Stribling, Leon Poutievski,
Min Zhu, Rajiv Ramanathan, Yuichiro Iwata, Hiroaki Inoue, Takayuki Hama, and
Scott Shenker. Onix: A distributed control platform for large-scale production net-
works. In Proc. USENIX Conf. OSDI, OSDI’10, pages 351–364. USENIX Associa-
Teemu Koponen, Keith Amidon, Peter Balland, Martin Casado, Anupam Chanda,
Bryan Fulton, Igor Ganichev, Jesse Gross, Paul Ingram, Ethan Jackson, Andrew
Lambeth, Romain Lenglet, Shih-Hao Li, Amar Padmanabhan, Justin Pettit, Ben
Pfaff, Rajiv Ramanathan, Scott Shenker, Alan Shieh, Jeremy Stribling, Pankaj
Thakkar, Dan Wendlandt, Alexander Yip, and Ronghua Zhang. Network virtu-
http://dl.acm.org/citation.cfm?id=2616448.2616468.
Azodolmolky, and Steve Uhlig. Software-defined networking: A comprehensive sur-
Maciej Kuzniar, Peter Peresini, and Dejan Kostic. What you need to know about
SDN control and data planes. Technical report, EPFL, TR 199497, 2014.
Maciej Kuźniar, Peter Perešı́ni, and Dejan Kostić. What You Need to Know About
SDN Flow Tables, pages 347–359. Springer International Publishing, Cham, 2015.
10.1007/978-3-319-15509-8 26.
Yunfa Li, Wanqing Li, and Congfeng Jiang. A survey of virtual machine system:
Current technology and future trends. In Proc. IEEE Int. Symp. on Electronic
Commerce and Security (ISECS), pages 332–336, July 2010. doi: 10.1109/ISECS.
2010.80.
C. Wright. Virtual extensible local area network (vxlan): A framework for overlaying
virtualized layer 2 networks over layer 3 networks. RFC 7348, RFC Editor, August
rfc/rfc7348.txt.
Nick McKeown, Tom Anderson, Hari Balakrishnan, Guru Parulkar, Larry Peterson,
Jennifer Rexford, Scott Shenker, and Jonathan Turner. OpenFlow: enabling innova-
2008.
Christopher Monsanto, Joshua Reich, Nate Foster, Jennifer Rexford, and David
NSDI, pages 1–13, Lombard, IL, April 2013. USENIX Association. ISBN 978-1-
presentation/monsanto.
Open Networking Foundation (ONF). OpenFlow Switch Specifications 1.0 (ONF
2013/04/openflow-spec-v1.0.0.pdf.
Open Networking Foundation (ONF). OpenFlow Switch Specifications 1.1.0 (ONF TS-
com/wp-content/uploads/2014/10/openflow-spec-v1.1.0.pdf.
Open Networking Foundation (ONF). OpenFlow Switch Specifications 1.2 (ONF TS-
sdn-resources/onf-specifications/openflow/openflow-spec-v1.2.pdf.
com/wp-content/uploads/2014/10/openflow-spec-v1.3.0.pdf.
Open Networking Foundation (ONF). OpenFlow Switch Specifications 1.4 (ONF TS-
sdn-resources/onf-specifications/openflow/openflow-spec-v1.4.0.pdf.
downloads/sdn-resources/onf-specifications/openflow/openflow-switch-v1.5.0.
noipr.pdf.
Open Networking Foundation (ONF). SDN Architecture Overview, Version 1.0, ONF
stories/downloads/sdn-resources/technical-reports/TR SDN-ARCH-Overview-1.
1-11112014.02.pdf.
OpenDaylight. A linux foundation collaborative project, 2013. URL http://www.
opendaylight.org.
cuits and architectures: A tutorial and survey. IEEE J. Solid-State Circuits, 41(3):
712–727, 2006.
Rina Panigrahy and Samar Sharma. Reducing TCAM power consumption and increas-
ing throughput. In Proc. IEEE High Perf. Interconnects, pages 107–112, August
txt. http://www.rfc-editor.org/rfc/rfc3031.txt.
Charalampos Rotsos, Nadi Sarrar, Steve Uhlig, Rob Sherwood, and Andrew W. Moore.
on concepts, taxonomy and associated security issues. In Proc. IEEE Int. Conf. on
Computer and Netw. Techn. (ICCNT), pages 222–226, April 2010. doi: 10.1109/
ICCNT.2010.49.
Rob Sherwood, Glen Gibb, Kok-Kiong Yap, Guido Appenzeller, Martin Casado, Nick
Rob Sherwood, Jad Naous, Srinivasan Seetharaman, David Underhill, Tatsuya Yabe,
Kok-Kiong Yap, Yiannis Yiakoumis, Hongyi Zeng, Guido Appenzeller, Ramesh Jo-
hari, Nick McKeown, Michael Chan, Guru Parulkar, Adam Covington, Glen Gibb,
Mario Flajslik, Nikhil Handigol, Te-Yuan Huang, Peyman Kazemian, and Masayoshi
Kobayashi. Carving research slices out of your production networks with OpenFlow.
James E Smith and Ravi Nair. The architecture of virtual machines. IEEE Computer,
38(5):32–38, 2005.
ICE’12, pages 10–10, Berkeley, CA, USA, April 2012. USENIX Association. URL
http://dl.acm.org/citation.cfm?id=2228283.2228297.
Amin Tootoonchian and Yashar Ganjali. HyperFlow: A distributed control plane for
URL http://dl.acm.org/citation.cfm?id=1863133.1863136.
J.S. Turner and D.E. Taylor. Diversifying the Internet. In Proc. IEEE Globecom,
Xipeng Xiao, Alan Hannan, Brook Bailey, and Lionel M Ni. Traffic engineering with
Minlan Yu, Jennifer Rexford, Michael J. Freedman, and Jia Wang. Scalable flow-
Zhaoning Zhang, Ziyang Li, Kui Wu, Dongsheng Li, Huiba Li, Yuxing Peng, and
IEEE Trans. Parallel and Distr. Systems, 25(12):3328–3338, December 2014. ISSN