Computers 12 00180
Article
Developing a Novel Hierarchical VPLS Architecture Using
Q-in-Q Tunneling in Router and Switch Design
Morteza Biabani 1, * , Nasser Yazdani 1 and Hossein Fotouhi 2
Abstract: Virtual Private LAN Services (VPLS) is an ethernet-based Virtual Private Network (VPN)
service that provides multipoint-to-multipoint Layer 2 VPN service, where each site is geographically
dispersed across a Wide Area Network (WAN). The adaptability and scalability of VPLS are limited
despite the fact that they provide a flexible solution for connecting geographically dispersed sites.
Furthermore, the construction of tunnels connecting customer locations that are separated by great
distances adds a substantial amount of latency to the user traffic transportation. To address these
issues, a novel Hierarchical VPLS (H-VPLS) architecture has been developed using 802.1Q tunneling
(also known as Q-in-Q) on high-speed and commodity routers to satisfy the additional requirements of
new VPLS applications. The Vector Packet Processing (VPP) performs as the router’s data plane, and
FRRouting (FRR), an open-source network routing software suite, acts as the router’s control plane.
The router is designed to seamlessly forward VPLS packets using the Request For Comments (RFCs)
4762, 4446, 4447, 4448, and 4385 from The Internet Engineering Task Force (IETF) integrated with
VPP. In addition, the Label Distribution Protocol (LDP) is used for Multi-Protocol Label Switching
(MPLS) Pseudo-Wire (PW) signaling in FRR. The proposed mechanism has been implemented on a
software-based router in the Linux environment and tested for its functionality, signaling, and control
plane processes. The router is also implemented on commodity hardware for testing the functionality
of VPLS in the real world. Finally, the analysis of the results verifies the efficiency of the proposed
mechanism in terms of throughput, latency, and packet loss ratio.
Keywords: Virtual Private LAN Service (VPLS); Vector Packet Processing (VPP); FRRouting (FRR); Q-in-Q tunneling; MPLS; LDP
Citation: Biabani, M.; Yazdani, N.; Fotouhi, H. Developing a Novel Hierarchical VPLS Architecture Using Q-in-Q Tunneling in Router and Switch Design. Computers 2023, 12, 180. https://doi.org/10.3390/computers12090180
big layer 2 switches. To date, VPLS implementations are vendor-specific: to support
VPLS on a vendor's device, the service provider must purchase VPLS modules that are
compatible only with that vendor's product, and it is unclear whether such a module
inter-operates with the devices of other vendors deployed elsewhere.
This gives us the inspiration to implement a vendor-agnostic VPLS on high-speed and
commodity routers [4,12,14]. The development of new VPLS applications necessitates the
incorporation of new operational needs, such as improved security, streamlined service
provisioning, better network resource use, improved scalability, and autonomous network
management support [15,16]. However, it is difficult for such functionality to be provided
for new applications in the outdated VPLS architectures that are now in use.
When VPLS was first introduced, it was described as a flat design. Nevertheless,
because a full mesh of Pseudo-Wires (PWs) was necessary, flat topologies encountered
significant scaling problems in both the data and control planes for more extensive net-
works. The Hierarchical VPLS (H-VPLS) architecture was suggested as a solution to this
problem [10]. By reducing the number of PWs, the H-VPLS design offers a workable
solution to the scaling problem. One of the hierarchical versions is to use Software-Defined
Network (SDN) structures. According to the authors of reference [17], SDN might be used
to address the VPLS scalability issue where the control channel is the communication path
between the control plane and the data plane.
FRRouting (simply FRR) [18] is an open-source control plane protocol suite for Linux
and Unix platforms. It consists of various protocols and features, especially MPLS and
Label Distribution Protocol (LDP) [19,20]. A system with FRR installed acts as a single
router and can inter-operate seamlessly with routers of other vendors. In addition, Pseudo-
Wire (PW) technology [21], based on a Layer 2 VPN Enabler, is tuned to transmit ethernet
packets, and these packets are transferred independently of other users’ traffic on the
customer’s private network. FRR, running on a Linux or Unix platform, makes the router
rely on the kernel’s features; in effect, the kernel is the router’s data plane [22].
Having a Linux or Unix kernel as the data plane has some disadvantages. Firstly, the Linux
kernel does not support VPLS forwarding and, secondly, its performance is quite low
compared to that of other vendors’ products. Hence, we use Vector Packet
Processing (VPP) [22–25] for our data plane. VPP is a fast network stack running in Linux
Userspace on multiple architectures, including x86. High performance, rich features, and
modularity have led developers to integrate VPP with other products such as OpenStack and
Kubernetes [25,26]. VPP is built on top of DPDK (Data Plane Development Kit) [27], a
set of libraries and tools to accelerate packet processing workloads by offloading packet
processing from the kernel to a user process [28]. Although DPDK is fast at packet
processing, it treats packets serially. VPP, by contrast, takes a batch-processing
approach, applying a single set of instructions to a vector of packets. As a result,
cache misses are reduced significantly [23,24]. Thus, due to the features provided in FRR and VPP for the VPLS control
plane and data plane specifications, they are suitable alternatives since other open-source
frameworks do not support either the VPLS control plane or data plane specifications.
Hence, we will cover one of the most efficient VPLS implementation scenarios in
terms of scalability and simplicity, which allows the service provider to set a specific VLAN
tag for each customer on the network. In fact, we use FRR as a control plane and VPP
as a data plane. We modified both FRR and VPP separately to support VPLS. With
Q-in-Q (802.1Q tunneling), if a customer adds a tag to its frames, the service provider adds
a second, outer tag on top [29]. In essence, service providers extend trunk connections
to the customer at all branches and network connection centers; doing this with plain
VLANs would be virtually impossible owing to tag conflicts and interference, security issues, and
the added management load. Cisco therefore recommends a Q-in-Q tunnel for
such conditions [29]. Double tagging lets the service provider encapsulate the customer’s VLAN tags in
a dedicated VLAN. After that, we integrated them as a single VPLS service over high-speed
routers [5,30,31]. In order to achieve the above-mentioned goals and also to implement
the fundamental functions, we are inspired by important Internet Engineering Task Force
(IETF) RFCs [32] such as 4762 [12], 4446 [33], 4447 [34], 4448 [9], and 4385 [35]. Afterward,
we evaluated the proposed service including two Cisco 7600 series routers [36] as well as
two high-speed Nexcom NSA 7136 routers [37] with three Cisco 3750-x series switches in
the real world [38,39]. The major contributions of our work are listed as follows:
• Proposed a novel H-VPLS architecture that integrates FRR as the control plane and
VPP as the data plane for router and switch design.
• Developed VPLS via Q-in-Q tunneling, which significantly enhances traffic engineering.
• Conducted a thorough review of existing VPLS configuration scenarios.
• Implemented the fundamental control plane and data plane functions by focusing on
RFCs 4762 [12], 4446 [33], 4447 [34], 4448 [9], and 4385 [35].
• Demonstrated that VPP and FRR provide transparent multipoint-to-multipoint ser-
vices and interoperability with other vendors’ products using VPLS.
The rest of the paper is organized as follows. Sections 2 and 3 discuss the preliminaries and related
works, respectively. Section 4 presents the design and implementation of the proposed
architecture. Section 5 contains our evaluation and results. Finally, the conclusion is
presented in Section 6.
2. Preliminary
In this section, we first present the concepts and terminologies, followed by the existing
VPLS architectures.
(Figure 1: CE devices connect through Attachment Circuits to routers/switches PE1 and PE2, which exchange PDUs over a Pseudo-Wire across the IP/MPLS core.)
• Customer Site: A geographically distinct location of the customer’s network.
• Customer Edge device or simply CE: Customer network which is directly connected
to the ISP.
• Provider Edge device or simply PE: This connects one or multiple CEs to the ISP. PE is
responsible for converting IP to MPLS packets.
• Virtual Switch Instances or simply VSI: This is an ethernet bridge function entity of
a VPLS instance on a PE. It forwards Layer 2 frames based on MAC addresses and
VLAN tags.
• Network Provider device or simply P: The ISP’s devices inside the core network,
which run MPLS.
• Pseudo-Wire or simply PW: This is a bidirectional virtual connection between VSIs
on two PEs. It consists of two unidirectional MPLS virtual circuits.
• Attachment Circuit or simply AC: The connection between CE and PE.
• The tunnel is a direct channel between PEs for data transmission. In this work, we use
an MPLS tunnel.
A Packet Switched Network (PSN) tunnel must first be created between the two edge
routers, and then a PW will be created inside the tunnel. The common physical link is
called the Attachment Circuit [35]. As shown in Figure 2, the router can receive frames
from two sides. When the router receives a frame from the shared physical link, it performs a
MAC learning procedure just like a bridge module.
Figure 2. The router’s two sides for receiving frames when establishing tunnels: a virtual bridge module faces the customer network and an emulated LAN/Virtual Forwarding Instance faces the core, sharing one L2 forwarding table [4,12].
When the router receives a frame from the network side, it performs MAC learning via the PW
(the Virtual Forwarding Instance (VFI) operation). Whether the PE router learns a MAC
from the shared physical link or from a PW, it places the MAC in a shared MAC
table and performs Layer 2 forwarding using the Layer 2 forwarding table accordingly.
So, when the router receives a frame from the shared physical link (Attachment Circuit), it
maps the source MAC to that link; and when it receives a frame from the
network, it maps the frame’s source MAC to the PW label. As a result, when a frame
reaches the router, it can determine over which PW the packet should be sent to its destination [12].
• Hub and Spoke [40]: In this model, as shown in Figure 4, instead of having PWs
between all PE routers, each PE router has a PW to a central P router.
The advantages of this model are its simplicity and the smaller number of PWs required.
Also, since there is no redundant path, connections are loop-free. Its drawback
is the single point of failure: if the central router fails, all VPLS
communications fail. Therefore, this model is not used in the real world.
• Partial Mesh [40]: This model, as shown in Figure 5, adds redundancy
to the Hub and Spoke model. Its structure requires disabling
split horizon, and the redundant paths require activating the Spanning
Tree Protocol (STP). Since providers do not want to run STP in the network core, this
model is also rare in practice.
• H-VPLS with MPLS Access Network [40]: In this model, as shown in Figure 6, the
network is divided into two parts: a Top Tier with a full mesh, and a Bottom Tier with Hub
and Spoke. This architecture reduces the number of PWs and yields a hierarchical
model. Its disadvantage is the single point of failure: since each
U-PE router is connected to the Top Tier network by a single link, if this link
fails, the entire Bottom Tier loses its connection to the Top Tier network.
• H-VPLS with Q-in-Q Access Network: In the H-VPLS model proposed by the authors
of reference [40], the Provider network is partitioned into ethernet-based islands, as
depicted in Figure 7. These islands are interconnected using MPLS. Specifically, in the
Bottom Tier of this architecture, PW Q-in-Q is utilized as the type of PW. The advan-
tages of this topology, including easy availability through ethernet and hierarchical
backing via Q-in-Q access, scalable customer VLANs (4K × 4K), and supporting 4K
customers per access domain, have encouraged us to implement this model.
(Topology diagrams for Figures 4–7: full and partial meshes of PE devices with attached CEs, and two-tier H-VPLS layouts in which U-PE devices connect through N-PE devices to the Top Tier.)
3. Related Works
In this section, we delve into an exploration of recent advancements in VPLS and
highlight their distinct characteristics.
When VPLS was initially proposed, it was described as possessing a flat topology
that was well-suited for small to medium-sized networks [10,41]. However, these flat
topologies encountered significant scaling challenges in both the data and control planes as
larger networks necessitated a complete mesh of PWs. To mitigate this issue, the H-VPLS
design was introduced as a viable solution by reducing the number of PWs and effectively
addressing the scalability concerns [15]. In response to the early obstacles pertaining to
scalability, security, and neighbor discovery, MPLS (Multiprotocol Label Switching) was
employed instead of VPLS. Subsequently, the Border Gateway Protocol (BGP) and the Label
Distribution Protocol (LDP) were suggested as two standard implementations for signaling
and automatic neighbor detection [16]. Consequently, multiple architectural designs were
proposed to enhance the operational capabilities of these frameworks.
The authors of reference [41] provide an in-depth analysis of VPLS and its design,
elucidating its capabilities, such as MAC addressing, packet encapsulation, loop avoidance,
auto-discovery, and signaling. They also briefly touch upon the advantages of H-VPLS over
flat topologies. However, with the increase in users, scalability and security have emerged
as critical concerns in VPLS. In reference [42], the architecture and applications of SDN
are explained, while the authors of reference [43] explore novel approaches to enhance
VPLS security and scalability, leveraging the potential of SDN. According to reference [42],
SDN has the potential to address the scalability issues in VPLS. The communication path
between the control plane and the data plane is referred to as the control channel, which is
established using SDN control protocols. Network control functionalities and services are
implemented as software applications in the application plane [17]. The controller sends
flow entries to the data plane devices to govern their forwarding behavior, which is stored
in local flow tables. If there is no matching entry, the packet is forwarded to the controller
for further processing. In reference [17], an SD-VPLS architecture is described where each
LAN is represented as an island, with the Island Controller responsible for controlling the
OpenFlow switch, packet acceptance, and forwarding to the provider’s network.
The majority of VPLS topologies utilize MPLS as the underlying network infrastruc-
ture. MPLS acts as a virtual bridge within the network core, connecting various attachment
circuits on each PE device and creating a unified broadcast domain [44]. BGP is a funda-
mental component of VPLS networks. BGP consists of speakers, peers, connections, and
border routers. A BGP speaker is a host responsible for implementing the BGP protocol
within the network. BGP peers are pairs of BGP speakers that establish connections and
communicate with each other. An external BGP peer is located in a different autonomous
system than the reference BGP speaker, while an internal BGP peer is within the same
autonomous system [45]. Reliable BGP links are established using TCP. Signaling plays
a crucial role in VPLS and involves the exchange of demultiplexers to establish and tear
down PWs. Specific PW characteristics are transmitted during signaling, allowing a PE to
establish a particular VPLS. Signaling occurs after the auto-discovery process is completed.
The exchange of demultiplexers among PEs enables the transmission of update messages
to other PEs in the same VPLS instance, thereby increasing the load on both the PE and the
control plane [10]. MPLS relies on the agreement between LSRs on how to interpret labels,
which is essential for forwarding traffic between and through LSRs. LDP is a set of protocols
used for this purpose. Each LSR shares its bindings with other LSRs to ensure consistent
label interpretation [12]. In VPLS, a PE acts as an edge router capable of executing LDP
signaling and/or routing protocols to establish PWs. Additionally, a PE can create tunnels
to communicate with other PEs and forward traffic through PWs.
Any modification to the VPLS topology, such as the addition or removal of a PE
device, necessitates updating the configuration of every PE within that particular VPLS
instance. In a flat VPLS architecture, the control plane is primarily responsible for auto-
discovery and the establishment and termination of PWs. However, scalability issues can
arise due to the requirement of a complete mesh of PWs between VPLS peers. Additionally,
broadcasting messages to every speaker can impair the scalability of VPLS [46]. The
hierarchical architecture of VPLS largely resolves scalability challenges. VPLS designs,
such as H-VPLS and SD-VPLS, exhibit superior control plane scalability compared to flat
systems, such as BGP-based VPLS and LDP-based VPLS. To enhance the scalability of VPLS,
the authors of reference [47] proposed a method that enables the utilization of the same
tunnels or PWs for multiple clients. In the data plane of VPLS, two key responsibilities are
routing packets and encapsulating Ethernet frames. One significant scalability issue in the
data plane is the MAC table explosion [12]. For instance, if each PE router is connected
to N customer terminals for each service instance and there are M sites in the customer
network, the total number of entries stored in each PE is N × M. When dealing with a
large number of terminals (e.g., 25,000) and multiple locations (e.g., 10), each PE router
would need to handle 250,000 entries. This quantity is typically beyond the capacity of
most ethernet switches [48].
Several studies, including references [44,49], have investigated and compared the
scalability of different VPLS implementations. Additionally, reference [50] proposed a
scalable virtual Layer 2 implementation, and reference [6] suggested various methods
to enhance the scalability of VPLS. Each of these studies is discussed below. In their
work, the authors of reference [49] conducted a comparative analysis of the control plane
scalability of L2VPN and L3VPN based on MPLS. This research examined and compared
the performance of L2VPN and L3VPN using various metrics, including creation and
deletion times, control plane memory usage, and overall memory usage. The findings of
this study serve as a foundation for future scalability comparisons between traditional
VPNs and their Software-Defined Networking (SDN)-based counterparts. Reference [50]
proposed a scalable virtual Layer 2 implementation. Their study focused on analyzing
the scalability of virtual Layer 2 for applications that utilize server clusters, such as data
centers, as well as globally distributed applications. The research explored both hardware
and software factors that can potentially impact VPLS compatibility. Given that VPLS
deployments often involve equipment from different suppliers, compatibility issues may
arise. To address this challenge, reducing reliance on vendor-specific hardware is crucial.
These studies contribute to the understanding of VPLS scalability and propose solutions
to address compatibility and performance issues. Future research in this area can build
upon these findings to further enhance the scalability of VPLS deployments and explore
the potential benefits of SDN-based approaches.
In general, the current H-VPLS architectures often involve complex network man-
agement. However, the use of the Q-in-Q technique presents an opportunity to extend
VLANs across a service provider’s network by adding an extra VLAN tag to the primary
ethernet frame. By combining VPLS with Q-in-Q tunneling, it becomes possible to create a
virtual network that spans multiple sites and service provider networks [10]. Moreover,
the combination of VPLS and Q-in-Q tunneling provides flexibility in network design and
enables the integration of diverse network environments. Implementing and managing
VPLS with Q-in-Q tunneling can be complex, requiring coordination between different
network devices and service providers. Compatibility issues may arise, as the effectiveness
of Q-in-Q tunneling can vary across different network equipment and vendors [40].
Therefore, our objective is to propose an innovative H-VPLS architecture utilizing
Q-in-Q tunneling, specifically on high-speed and commodity routers, to meet the additional
requirements posed by emerging VPLS applications.
4. Proposed Architecture
In this section, we elaborate on the overall design architecture, followed by demonstrat-
ing the relevant codes and pseudo codes. We have developed a novel H-VPLS architecture
via Q-in-Q tunneling on high-speed and commodity routers to satisfy the additional re-
quirements of new VPLS applications, in which VPP performs as the router’s data plane
and FRR, an open-source network routing software suite, acts as the router’s control plane.
The LDP is used for MPLS PW signaling in FRR. The most
important incentive for implementing VPLS is to connect multiple LANs
dispersed across a WAN and to create the illusion that all
sites are on the same LAN [4,12]. In this paper, as shown in
Figure 8, the LDP provided by Cisco is used for PW signaling in our implementation. To
forward packets along all label switched paths (LSPs) in the MPLS network, all LSRs must run
LDP and exchange labels [19,28,33].
(Figure 8: a remote LDP session between PE routers carrying L2 packets.)
The use of IP routing protocols instead of the Spanning tree protocol and MPLS tags
instead of VLAN identifiers in the infrastructure will significantly improve the development
of the VPLS service. The difference between VPLS and L3VPN is in the Customer Edge
(CE) side equipment.
These packets then traverse the graph in a pipeline manner. The dispatch function in VPP
takes up to 256 packets and forms them into a single vector. When the first packet
enters the core, the I-cache is warmed with the code of the first node, so,
for the rest of the packets, there are no cache misses when executing that node.
(Figure 9: the VPP processing graph on core 0: packet data arrives from DPDK into huge-page memory and traverses nodes such as ethernet-input, ipv4, ipv4-local, ipv4-icmp, ipv4-rewrite, ethernet-output, and TX.)
There are a lot of workflow graphs for packet processing inside VPP, and also the user
can attach their workflow to the main graph of VPP. The fundamental gain VPP brings is
cache efficiency. In particular, there are two distinct caches: Instruction Cache and Data
Cache. The former caches instructions and the latter caches data. According
to Figure 10a, when the first ethernet header is about to be processed, the node
holding the ethernet-processing instructions is loaded into the instruction
cache on a cache miss. After that, for the rest of the headers, the instruction code
remains in the I-cache. Once batch processing of the vector of packets finishes, the
IP headers are processed (Figure 10b): VPP batches the IP headers
and puts the vector into the D-cache, and the node holding the instructions
for processing the IP header is loaded into the I-cache on a single cache miss. In essence, the
cache misses for instructions are reduced significantly.
Figure 10. VPP Process: (a) Ethernet header; (b) IP header [23].
Daemons in FRR can be managed through a single user interface called vtysh, which
connects to each daemon over a UNIX domain socket and acts as a proxy for user input.
directly without the addition of intrusive routing protocols and at high speed. While in a
hierarchical structure, traffic is transferred to another layer through routing protocols, and
this can reduce network performance.
(Figure: router software architecture: the Zebra daemon and a router plugin in Linux userspace exchange state over netlink/librtnl (route add/delete, address, FIB, and ARP learning), VPP acts as the data plane, and the NICs sit in hardware below the kernel space.)
• Ldpd(): This function enables LDP on all active MPLS interfaces. LDP is used for
distributing labels between PE routers and establishing LSPs in MPLS networks.
• L2vpn(): This function is responsible for creating, deleting, and updating PWs. PWs
are used to provide point-to-point connectivity between two CE devices over a service
provider’s MPLS network. L2VPN is a technology that enables the transport of Layer
2 traffic between different locations over an MPLS network.
• Ldpe(): This function sends hello packets periodically and creates sessions with
other LDP neighbors. LDP neighbors are directly connected routers that
exchange label information with each other. The hello packets are used to discover
and establish LDP sessions with other routers.
• Lde(): This function is responsible for the distribution of marked labels. When an LSP
is established between two routers, labels are assigned to the traffic that is being trans-
ported over the MPLS network. The LDE (Label Distribution Entity) is responsible for
distributing these labels to other routers in the network.
In order to implement the proposed VPLS architecture in FRR, changes
must be made in the code related to L2VPN. In the L2vpn() section, VPLS-related
functions should be added, including functions to create, delete, and update VPLSs, as
well as VPLS database management functions. We also need to define VPLS parameters in
the LDP Virtual Teletype (VTY) configuration file; for example, the VLAN ID and VLAN mapping
must be specified for each VPLS. Moreover, functions for sending and receiving
VPLS data at Layer 2 should be added to LDP. To achieve this, parts of the
code related to L2TP (Layer 2 Tunneling Protocol) and MPLS message processing may
be modified. In summary, the PW-related implementation in FRR
includes enabling LDP on all active MPLS interfaces; creating, deleting, and updating PWs;
sending hello packets periodically; and distributing labels. These functions are
essential for establishing and maintaining PW connectivity between CE devices over an
MPLS network. In the FRR package, the ldpd directory contains the implementation code
for the LDP daemon. The LDP daemon is responsible for managing the LDP protocol and
establishing LSPs between routers in an MPLS network.
The LDP Virtual Teletype (VTY) configuration file is used to configure
the LDP daemon. By activating L2vpn in this file, the LDP daemon is instructed to enable
support for L2VPNs and to create, delete, and update PWs. In the code implementation,
the LDP daemon is written in the C programming language. The LDP daemon source code
is located in the ldpd directory of the FRR package. The ldpd code is organized into several
files, including the following:
• ldpd.c: This file contains the main function for the LDP daemon. It sets up the LDP
protocol, initializes the LDP database, and starts the LDP event loop.
• ldp_interface.c: This file contains the code for managing LDP interfaces. It handles in-
terface events, such as interface up/down, and enables LDP on active MPLS interfaces.
• ldp_l2vpn.c: This file contains the code for managing Layer 2 VPNs. It handles the
creation, deletion, and updating of PWs, and also manages the L2VPN database.
• ldp_ldp.c: This file contains the code for managing LDP sessions and exchanging label
information with other LDP routers. It implements the LDP protocol and handles
LDP messages.
• ldp_label.c: This file contains the code for managing LDP labels. It handles label
distribution, label assignment, and label retention.
Overall, the ldpd directory in the FRR package contains the implementation code for
the LDP daemon, which is responsible for managing the LDP protocol and establishing LSPs
in an MPLS network. By activating L2vpn in the LDP Virtual Teletype (VTY) configuration
file, the LDP daemon is instructed to enable support for Layer 2 VPNs and to manage PWs.
The implementation code is written in the C programming language and is organized into
several files based on their functionality.
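As a rough sketch, a vtysh configuration enabling an LDP-signaled VPLS instance in FRR might look like the following; the interface names, LSR IDs, VPLS name, and PW ID are placeholders, and the exact syntax should be checked against the FRR ldpd documentation.

```
mpls ldp
 router-id 1.1.1.1
 !
 address-family ipv4
  discovery transport-address 1.1.1.1
  !
  interface eth0
  !
 !
!
l2vpn customer-a type vpls
 bridge br0
 member interface eth1
 !
 member pseudowire mpw0
  neighbor lsr-id 2.2.2.2
  pw-id 100
 !
!
```

Here the mpls ldp block corresponds to Ldpd()/Ldpe() (label distribution and session setup), while the l2vpn block drives the L2vpn() functions that create and maintain the PW toward the peer PE.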
In the following, using the LDP Virtual Teletype (VTY) configuration file and activating
L2vpn [56], the sub-functions below are called, summarized as follows:
In the l2vpn_Configure file, the following sub-functions are used inside the l2vpn function
to perform the searches or changes needed to attach a PW. In line 2 of Algorithm 2,
PwNew() creates the PW given by the admin. PwFind(pw, InterfaceName) searches for the
admin-given PW in the corresponding tree; if such a PW exists, it can be activated
in the next steps, provided it is currently disabled and assigned to the l2vpn. In the following two functions,
PwFindActive(pw) and PwFindinActive(pw), the active or inactive state of the given PW is
specified. PwUpdateInfo(pw) updates the index whenever a change occurs.
PwInit(pw, fec) is called in ldpd(); when the LDE_ENGINE process runs,
PW information is transferred to the kernel. By calling PwExit(pw) in the ldpd() function,
the necessary updates to the PW are made. As shown in line 8 of Algorithm 2, two
sub-functions appear:
Regarding kernel_remove(), when a path changes, zebra advertises the new path without
removing the previous one, so a further process in zebra must identify the next
hop that was removed along the way and withdraw its label. Regarding
kernel_update(), recently deleted local labels should not be reused immediately, because this
causes network instability. The garbage collector timer must be
restarted, and the labels are reused once the network is stable.
Before calling PwNegotiate(nbr, fec) in line 19, steps 10–18 must be checked
(by calling PwOk(pw, fec)): LSP formation towards the remote endpoint, labeling, MTU
size, and PW status. The validity of LSP formation is checked by zebra. However, if
the PW status TLV is not supported by the remote peer, the peer automatically deletes
this field. In this case, the PEs must invoke the label withdrawal process to change the
signaling status.
According to RFC 4447 [34], if the PW status is not declared in the initial_label_mapping
message, SendPwStatus(nbr, nm) returns zero to execute the label withdrawal
process. When the PW flags change, the new changes must be advertised to
the neighbors assigned by the lde. These changes are sent to each neighbor in notification
messages. After receiving a notification message via the RecvPwStatus(nbr, nm) function,
the recipient PE updates the label to the correct value. If the PW has not been removed,
then the related configurations are fixed, otherwise the package is discarded by calling
RecvPwWildCard (nbr, nm, fec). If the pw status has changed from up to down, the assigned
labels should be discarded from its neighbors’ table by calling PwStatusUpdate(l2vpn, fec).
Contrarily, if it returns to the up status, it should be rewritten. However, in l2vpn defined
with the help of PwCtl(pwctl), if PW is a subset of its member, the state of that PW sets to
1. By calling BindingCtl(fn, fec, pwctl), the tags specified in the label mapping for PW are
bounded to the local and remote PW tags and, after running this function, the values of the
PW tags are completely allocated.
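The up/down handling in this exchange can be modeled compactly (a sketch with illustrative names and status codes; only the behavior mirrors the SendPwStatus/RecvPwStatus/PwStatusUpdate logic described above, not FRR's implementation):

```python
# Status code values here are illustrative, not the RFC 4447 code points.
PW_FORWARDING = 0
PW_NOT_FORWARDING = 1

class PwSession:
    """Toy model of PW status handling: a received notification either
    brings the PW up (remote label rewritten) or takes it down (remote
    label discarded), as described in the text."""

    def __init__(self, local_label, remote_label):
        self.local_label = local_label
        self.remote_label = remote_label
        self._saved_remote = remote_label  # kept so the label can be rewritten
        self.status = PW_NOT_FORWARDING

    def recv_pw_status(self, code):
        if code == PW_FORWARDING:
            self.status = PW_FORWARDING
            self.remote_label = self._saved_remote   # rewrite the label
        else:
            self.status = PW_NOT_FORWARDING
            self.remote_label = None                 # discard the label

    def is_up(self):
        return self.status == PW_FORWARDING
```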
In line 30 of the algorithm, when the administrator defines an l2vpn configuration
file, the neighbor information has been received and added to the neighborhood table.
This is done using ldpe_l2vpn_init(l2vpn, pw) and ldpe_l2vpn_exit(pw). Additionally,
by calling ldpe_l2vpn_pw_exit(pw, tnbr), if the PW defined for the l2vpn has already
been used, the duplicate is detected and prevented from being repeated.
Algorithm 2 l2vpn_PW ()
1: Input: [struct l2vpn_pw *pw; struct fec fec; struct notify_msg nm; struct fec_node *fn; struct fec_nh *fnh; struct tnbr *tnbr; static struct ctl_pw pwctl;]
2: Call struct l2vpn_pw *PwNew();
3: Call struct l2vpn_pw *PwFind(pw, InterfaceName);
4: Call struct l2vpn_pw *PwFindActive(pw);
5: Call struct l2vpn_pw *PwFindInActive(pw);
6: Call void PwUpdateInfo(pw);
7: Call void PwInit(pw, fec);
8: Call PwExit(pw); {comment: lde_kernel_remove and lde_kernel_update}
9: Call int PwOk(pw, fec);
10: if fnh->remote_label == NO_LABEL then
11: return (0); {comment: /* check for a remote label */}
12: end if
13: if pw->l2vpn->mtu != pw->remote_mtu then
14: return (0); {comment: /* MTUs must match */}
15: end if
16: if (pw->flags & F_PW_STATUSTLV) && pw->remote_status != PW_FORWARDING then
17: return (0); {comment: /* check pw status if applicable */}
18: end if
19: Call int PwNegotiate(nbr, fec);
20: if pw == NULL then
21: return (0); {comment: pw not configured; return and record the mapping later}
22: end if
23: {comment: /* RFC 4447—pseudowire status negotiation */}
24: Call void SendPwStatus(nbr, nm);
25: Call void RecvPwStatus(nbr, nm);
26: Call void RecvPwWildCard(nbr, nm, fec); {/* RFC 4447 PwID group wildcard */}
27: Call int PwStatusUpdate(l2vpn, fec);
28: Call void PwCtl(pwctl);
29: Call void BindingCtl(fn, fec, pwctl);
30: Call void ldpe_l2vpn_init(l2vpn, pw);
31: Call void ldpe_l2vpn_exit(pw);
32: Call void ldpe_l2vpn_pw_exit(pw, tnbr);
released, and if it is not found, it will be registered in the database. In the
MergeActivePw() function, if an active PW is no longer valid, it is removed from the
database and released. In UpdateExistingActivePw(), if the LDP address of a PW
changes, it is sufficient to reinstall the targeted neighbors; however, the session
must be restarted under the following conditions: if the PW flags or the configured
status TLV have changed, all neighbors must be reset, and if the PW type, maximum
transfer unit (MTU), PW ID, or LSR ID has changed, the PW FEC must be reinstalled.
Finally, MergeInActivePw() behaves like the active-PW case, except that the operation
is performed on passive PWs.
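These restart rules can be summarized as a small decision function (illustrative Python; the dictionary field names are ours, chosen to mirror the prose, not FRR's structures):

```python
def classify_pw_change(old, new):
    """Sketch of the UpdateExistingActivePw() decision described above:
    returns which reconfiguration a changed pseudowire requires."""
    # Changed flags or status-TLV configuration: all neighbors must be reset.
    if old["flags"] != new["flags"] or old["status_tlv"] != new["status_tlv"]:
        return "reset-neighbors"
    # Changed PW type, MTU, PW ID, or LSR ID: the PW FEC must be reinstalled.
    for key in ("pw_type", "mtu", "pw_id", "lsr_id"):
        if old[key] != new[key]:
            return "reinstall-fec"
    # Only the LDP address changed: reinstalling the targeted neighbor suffices.
    if old["ldp_addr"] != new["ldp_addr"]:
        return "reinstall-targeted-neighbor"
    return "no-change"
```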
Finally, after applying the above algorithms, our control plane will be ready to provide
the proposed service.
Algorithm 3 Ldpd ()
1: Input: [struct l2vpn_pw *pw; union ldpd_addr addr; struct nbr *nbr; struct ldpd_conf *xconf; struct l2vpn *l2vpn]
2: previous_pw_type = l2vpn->pw_type;
3: previous_mtu = l2vpn->mtu;
4: Call static void MergeInterface(xconf, l2vpn);
5: Call MergeActivePw(pw);
6: {comment: /* find deleted active pseudowires and also find new active pseudowires */}
7: Call UpdateExistingActivePw(pw, addr); {/* changes that require a session restart */}
8: Call MergeInActivePw(pw);
Data Plane (VPP)
In this section, we explain the algorithms implemented in VPP [23] as our fast data
plane to create VPLS, based on its Python APIs. The logical process for the VPLS
code on top of the Data Plane Development Kit (DPDK) is as follows:
(a) Creation of the L2 tunnel using the VppMPLSTunnelInterface algorithm;
(b) Addressing routes using VppMplsRoute;
(c) Creating the bridge domain using the structured set_l2_bridge;
(d) Obtaining the packets and addressing them;
(e) Packet encapsulation methods;
(f) Learning and forwarding;
(g) Streaming the packets in each direction;
(h) Disabling the bridge domain after finishing work.
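The bridge-domain part of this process (creating the bridge domain, wiring the MPLS tunnel and customer interface into it, and disabling it afterwards) can be sketched against VPP's binary API. This is only an outline: real code would use vpp_papi, whose argument names vary between VPP releases, so a recording stub stands in for the API object here; bridge_domain_add_del and sw_interface_set_l2_bridge are the VPP message names we assume for the 19.08 release used in this work.

```python
class FakeVapi:
    """Recording stand-in for VPP's Python API object (vpp_papi), so the
    call sequence can be shown without a running VPP instance."""
    def __init__(self):
        self.calls = []
    def __getattr__(self, name):
        def record(**kwargs):
            self.calls.append((name, kwargs))
        return record

def build_vpls_bridge(vapi, bd_id, tunnel_sw_if_index, customer_sw_if_index):
    # Create a bridge domain with MAC learning and forwarding enabled,
    # then enslave the MPLS tunnel and the customer-facing interface to it.
    vapi.bridge_domain_add_del(bd_id=bd_id, learn=1, forward=1, flood=1, is_add=1)
    vapi.sw_interface_set_l2_bridge(rx_sw_if_index=tunnel_sw_if_index,
                                    bd_id=bd_id, enable=1)
    vapi.sw_interface_set_l2_bridge(rx_sw_if_index=customer_sw_if_index,
                                    bd_id=bd_id, enable=1)

def teardown_vpls_bridge(vapi, bd_id):
    # Disable the bridge domain after finishing work.
    vapi.bridge_domain_add_del(bd_id=bd_id, is_add=0)
```

With a real connection, the FakeVapi instance would be replaced by the connected vpp_papi client, and the MPLS tunnel index would come from VppMPLSTunnelInterface.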
In the following, the above items are described along with the structure in the corre-
sponding platform.
In process (a): The VppMPLSTunnelInterface algorithm is called from the main
algorithm VPP_Interface. The goal of this algorithm is to establish L2 over MPLS.
First, the VPP_Interface algorithm is checked, then VppMPLSTunnelInterface, and
finally VPP_Neighbor. The ABCMeta, util, and VPP_Neighbor items need to be imported
into the VPP_Interface algorithm. The abc module provides the infrastructure in
Python for defining abstract base classes (ABCs), and ABCMeta is the metaclass used
to define and create them. Regarding the util item, the util algorithm must be
activated to establish the test structure in VPP. The algorithm VPP_Interface
creates VPP interfaces and the related objects. Many properties are defined in this
algorithm, but the ones required for the VPLS service include assigning interface
indices, selecting names for interfaces, and activating MPLS on the VPP interface.
VppMPLSTunnelInterface itself imports the following: from vpp_ip_route import
VppRoutePath and from VPP_Interface import VppInterface. The last item is the
VPP_Neighbor algorithm, which is required to establish VPP_Interface. In Algorithm 4,
we have the function for finding neighbors. It should be noted that this section
works with the inet_pton, inet_ntop, AF_INET, and AF_INET6 socket facilities. Here,
the VppMPLSTunnelInterface code is as follows:
4: super(VppMPLSTunnelInterface,self).__init__(test)
5: self._test = test
6: self.t_paths = paths
7: self.is_multicast = is_multicast
8: self.is_l2 = is_l2
9: def add_vpp_config(self):
10: self._sw_if_index = 0xffffffff
In process (b), calling the vpp_ip_route algorithm carries out the routing. Before
defining this function, the following rules must be satisfied:
1: {comment: # from vnet / vnet / mpls / mpls _types.h}
2: MPLS_IETF_MAX_LABEL = 0xfffff
3: MPLS_LABEL_INVALID = MPLS_IETF_MAX_LABEL + 1
This section sets the maximum label value to 0xfffff and derives the invalid MPLS
label value from it. An important part of the VPLS performance depends on this, as
the key VppMplsRoute function is defined in vpp_ip_route.
In Algorithm 5, the local label, EOS bit, and table ID are set. If the EOS bit is
equal to one, the label is the last in the stack, i.e., there are no further labels.
The route-finding function is also defined in line 9.
Algorithm 5 VppMplsRoute(VppObject):
1: {comment: MPLS Route/LSP}
2: def __init__(self, test, local_label, eos_bit, paths, table_id=0, is_multicast=0):
3: self._test = test
4: self.paths = paths
5: self.local_label = local_label
6: self.eos_bit = eos_bit
7: self.table_id = table_id
8: self.is_multicast = is_multicast
9: def find_route(test, ip_addr, len, table_id=0, inet=AF_INET)
5. Simulation Results
In this section, we outline two test approaches: lightweight and practical scenarios.
The lightweight scenario refers to the implementation of the proposed protocol on the
software-based router in the Linux environment, through which we are trying to show the
performance, signaling, and control plane processes. To the best of our knowledge, since
we face simulation constraints at this level, we configured a high level of testing based on
the practical scenario. At this level, the router is implemented on the commodity hardware
to test the functionality of VPLS in the real world. Finally, based on the analysis of results,
we evaluate the efficiency of this service in the three factors of throughput, latency, and
packet loss ratio. Also, Table 1 demonstrates the test specifications.
Table 1. Test specifications.

Specification              Value
OS                         Ubuntu 22.04.2 LTS
FRR                        7.0 stable
Vector Packet Processing   19.08
VPP plugin                 Sandbox/router
Hardware                   Nexcom NSA 7136
CPU                        Dual Intel® Xeon® Processor E5-2600 v4
Number of cores            32
RAM                        96 GB
Number of interfaces       16 × 1 Gbps and 6 × 10 Gbps
Tester                     Ixia
Emulator                   GNS3
Cisco routers              7600 Series
Cisco switches             Catalyst 3750-X Series
The following are the configurations performed in the lightweight scenario:
• The R1 and R2 router configurations are illustrated in Figure 14;
• Configurations for the Rahyab routers: the settings for the scenario in Figure 13
are given for each router as follows:
1. Enable the IP forwarding feature in the Linux operating system for both IP
versions.
2. Enable MPLS on each interface.
3. Define a new bridge and PW in the kernel.
4. Enable bridge and PW.
5. Connect the bridge and PW created in the previous steps to the ethernet link. It
is worth mentioning that the settings for steps 1 to 5 are done in the Linux
command-line environment, and these settings are the same for all Rahyab routers.
6. Turn on and give IP addresses to the interfaces, enable OSPF and MPLS, and perform
configurations related to each of them (these are not explained in this document
because they are thoroughly explained in other documents).
7. Enable VPLS on the PE routers (routers 1 and 3 in Figure 13). As a result of
running the command, the corresponding commands are activated under VPLS, and the
interface ens32 and the PW mpw0 are recognized as its members. By specifying the
address of the router connected to the other end of the PW, the VPLS neighborhood
of the current router is determined.
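Steps 1–5 above can be outlined with standard Linux commands (a sketch, assuming kernel MPLS support and the interface/PW names of this scenario; the exact way mpw0 is instantiated depends on the LDP/VPLS integration in use, so treat this as an illustration rather than a verbatim recipe):

```shell
# 1. Enable IP forwarding for both IP versions
sysctl -w net.ipv4.ip_forward=1
sysctl -w net.ipv6.conf.all.forwarding=1

# 2. Enable MPLS on each core-facing interface
modprobe mpls_router
sysctl -w net.mpls.conf.ens33.input=1
sysctl -w net.mpls.platform_labels=100000

# 3. Define a new bridge and attach the customer-facing interface
ip link add br0 type bridge
ip link set ens32 master br0

# 4-5. Enable the bridge and enslave the pseudowire interface to it
ip link set br0 up
ip link set mpw0 master br0
ip link set mpw0 up
```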
To verify the VPLS and topology commands, we check the output of the binding and VC
commands. For instance, the result of executing these two commands on router 3 is
shown below. If a binding is done correctly, each VC is assigned a local label and a
remote label. Figure 15 confirms this goal.
Figure 14. R1 and R2 configurations based on the simulation scenario in Figure 13.
The settings in sub-figures (a,b) should be considered.
(a) Rahyab1 (R1):

Rahyab# show running-config
interface ens33
 ip address 10.0.1.1/24
!
interface lo
 ip address 1.1.1.1/32
!
router ospf
 ospf router-id 1.1.1.1
 network 1.1.1.1/32 area 0.0.0.0
 network 10.0.1.0/24 area 0.0.0.0
!
mpls ldp
 router-id 1.1.1.1
 dual-stack cisco-interop
 !
 address-family ipv4
  discovery transport-address 1.1.1.1
  !
  interface ens33
  !
 exit-address-family
!
l2vpn ENG type vpls
 bridge br0
 member interface ens32
 !
 member pseudowire mpw0
  neighbor lsr-id 3.3.3.3
  pw-id 100
 !
!

(b) Rahyab2 (R2):

Rahyab# show running-config
interface ens33
 ip address 10.0.1.2/24
!
interface ens32
 ip address 10.0.2.1/24
!
interface lo
 ip address 2.2.2.2/32
!
router ospf
 ospf router-id 2.2.2.2
 network 2.2.2.2/32 area 0.0.0.0
 network 10.0.1.0/24 area 0.0.0.0
 network 10.0.2.0/24 area 0.0.0.0
!
mpls ldp
 router-id 2.2.2.2
 dual-stack cisco-interop
 !
 address-family ipv4
  discovery transport-address 2.2.2.2
  !
  interface ens33
  !
  interface ens32
  !
 exit-address-family
!

(c) Rahyab3 (R3):

Rahyab# show running-config
interface ens32
 ip address 10.0.2.2/24
!
interface lo
 ip address 3.3.3.3/32
!
router ospf
 ospf router-id 3.3.3.3
 network 3.3.3.3/32 area 0.0.0.0
 network 10.0.2.0/24 area 0.0.0.0
!
mpls ldp
 router-id 3.3.3.3
 dual-stack cisco-interop
 !
 address-family ipv4
  discovery transport-address 3.3.3.3
  !
  interface ens32
  !
 exit-address-family
!
l2vpn ENG type vpls
 bridge br0
 member interface ens33
 !
 member pseudowire mpw0
  neighbor lsr-id 1.1.1.1
  pw-id 100
 !
!

(d) Binding output on router 3:

Rahyab# show l2vpn atom binding
destination address: 1.1.1.1, VC: 100
  Local Label: 25
    Cbit: 1, VC Type: Ethernet, GroupID: 0, MTU: 1500
  Remote Label: 16

(e) VC output on router 3:

Rahyab# show l2vpn atom vc
Interface  Peer ID  VC ID  Name  Status
---------  -------  -----  ----  ------
mpw0       3.3.3.3  100    ENG   UP
Figure 15. Results of the lightweight H-VPLS scenario in Figure 13. The settings in
sub-figures (a–e) should be considered.
As can be seen in Figure 15e, when PW signaling works properly, the state of the PW
is UP.
According to our simulation, although the control plane works in this scenario, the
data plane in the PEs does not, since the PEs employ bridges for transferring packets
from one customer to another. Therefore, we continued our experiments in the real
world.
Figure 16. Our practical H-VPLS simulation scenario via Q-in-Q tunneling.
On the PE routers, we set the Q-in-Q tunnel tag to 110. The purpose is to verify that
the ARP tables of the switches are able to learn the MAC addresses of the remote
hosts, and that the hosts can ping each other. In this scenario, Rahyab1 connects to
Cisco1 through the virtual circuit mpw3. Rahyab1 assigns 21 as its local label,
meaning that packets arriving from Cisco1 carry label 21; its remote label is 29,
meaning that Rahyab1 sends packets to Cisco1 with label 29. Rahyab1 knows that these
labels belong to Cisco1 by leveraging VPN ID 110 and router-id 1.1.1.1. It can
therefore distinguish packets received from Cisco1 or Rahyab2 based on the local and
remote labels together with the router-id.
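This demultiplexing amounts to a label-keyed lookup; the following toy Python table mirrors the scenario's values (the table layout and function are ours, for illustration only):

```python
# Label-to-peer bindings for Rahyab1 in the scenario above: a PW label,
# together with the stored (VPN id, router-id) binding, identifies which
# peer a packet belongs to.  Only the Cisco1 entry comes from the text.
PW_BINDINGS = {
    # label: (peer, vpn_id, router_id)
    21: ("Cisco1", 110, "1.1.1.1"),
}

def peer_for_label(label, bindings=PW_BINDINGS):
    """Return the peer bound to a PW label, or None if the label is unknown."""
    entry = bindings.get(label)
    return entry[0] if entry else None
```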
Figure 17. Throughput rate (frames per second) versus the theoretical throughput per
various frame sizes (64–1518 bytes) for our practical model.
In addition, as shown in Figures 18 and 19, the average latency for small frames is
approximately in the same range (around 20 µs), while the average latency for large
frames is around four times that of small frames. This is mainly due to the time it
takes the router to process large frames: the larger the frame, the more time it
takes to process and, as a result, the higher the latency.
Moreover, the average jitter for both small and large frames is less than 1 µs, as
shown in Figure 19. It is important to note that the average jitter for frames
larger than 1024 bytes is in the order of nanoseconds, roughly three orders of
magnitude smaller than that of small frames. Furthermore, as shown in Figure 20, we
observed queue build-up with small frames. One of the causes of long queues is tail
packet loss: because the frames are short, a huge number of them arrive, building a
long input queue and increasing the probability of frame drops at the tail of the
queue. Frame size can also affect the distribution of traffic in the network: if
frames of different sizes enter the network at random, smaller frames (such as 64
and 128 bytes) may act as a noise load and cause more interference, leading to an
increased packet loss ratio.
Figure 18. Average latency (µs) per various frame sizes for our practical model.
Figure 19. Average jitter (µs) per various frame sizes for our practical model.
Figure 20. Maximum packet loss ratio (%) per various frame sizes for our practical
model.
6. Conclusions
We developed a novel H-VPLS architecture via Q-in-Q tunneling on a commodity
router. Our work is based on utilizing and enhancing two well-known open-source pack-
ages: VPP as the router’s fast data plane and FRR, a modular control plane protocol suite, to
implement VPLS. Both VPP and FRR have active and dynamic communities, and they are
the only open-source frameworks that support VPLS. FRR in the control plane
implements the control messages and relevant signaling, while VPP on the data plane
side provides the capability of forwarding and labeling VPLS packets. We have tested the
implementation in both simulation and real physical scenarios, and the results indicate
that the VPLS control plane works in compliance with the RFCs mentioned in the previous
sections, as well as seamless interoperability with other vendors’ VPLS. Additionally, VPLS
can be implemented in software without the need for specific support from the underlying
hardware. Finally, the overall performance of the router shows that our proposed approach,
based on open-source frameworks, is applicable in the real world.
In future work, we plan to test and compare the performance of our software-based
VPLS with other vendors' implementations. There are two types of H-VPLS: the Q-in-Q
tunnel type and the MPLS PW type. As we have implemented the first type, we will
also consider the design and implementation of the second type in future work.
References
1. Vallet, J.; Brun, O. Online OSPF weights optimization in IP networks. Comput. Netw. 2014, 60, 1–12. [CrossRef]
2. Bocci, M.; Cowburn, I.; Guillet, J. Network high availability for ethernet services using IP/MPLS networks. IEEE Commun. Mag.
2008, 46, 90–96. [CrossRef]
3. Ben-Yacoub, L.L. On managing traffic over virtual private network links. J. Commun. Netw. 2000, 2, 138–146. [CrossRef]
4. Sajassi, A. Comprehensive Model for VPLS. US Patent 8,213,435, 3 July 2012.
5. Liyanage, M.; Ylianttila, M.; Gurtov, A. Improving the tunnel management performance of secure VPLS architectures with SDN.
In Proceedings of the 2016 13th IEEE Annual Consumer Communications & Networking Conference (CCNC), Las Vegas, NV,
USA, 9–12 January 2016; pp. 530–536.
6. Liyanage, M.; Ylianttila, M.; Gurtov, A. Fast Transmission Mechanism for Secure VPLS Architectures. In Proceedings of the
2017 IEEE International Conference on Computer and Information Technology (CIT), Helsinki, Finland, 21–23 August 2017;
pp. 192–196.
7. Filsfils, C.; Evans, J. Engineering a multiservice IP backbone to support tight SLAs. Comput. Netw. 2002, 40, 131–148. [CrossRef]
8. Bensalah, F.; El Kamoun, N. A novel approach for improving MPLS VPN security by adopting the software defined network
paradigm. Procedia Comput. Sci. 2019, 160, 831–836. [CrossRef]
9. Martini, L.; Rosen, E.; El-Aawar, N.; Heron, G. Encapsulation Methods for Transport of Ethernet over MPLS Networks. RFC 4448,
April 2006. Available online: https://www.rfc-editor.org/rfc/rfc4448 (accessed on 8 August 2023).
10. Gaur, K.; Kalla, A.; Grover, J.; Borhani, M.; Gurtov, A.; Liyanage, M. A survey of virtual private LAN services (VPLS): Past,
present and future. Comput. Netw. 2021, 196, 108245. [CrossRef]
11. Hernandez-Valencia, E.J.; Koppol, P.; Lau, W.C. Managed virtual private LAN services. Bell Labs Tech. J. 2003, 7, 61–76. [CrossRef]
12. Lasserre, M.; Kompella, V. IETF RFC 4762: Virtual Private LAN Service (VPLS) Using Label Distribution Protocol (LDP) Signaling,
2007. Available online: https://www.rfc-editor.org/rfc/rfc4762.html (accessed on 8 August 2023).
13. Biabani, M.; Yazdani, N.; Fotouhi, H. REFIT: Robustness Enhancement Against Cascading Failure in IoT Networks. IEEE Access
2021, 9, 40768–40782. [CrossRef]
14. Wirtgen, T.; Dénos, C.; De Coninck, Q.; Jadin, M.; Bonaventure, O. The Case for Pluginized Routing Protocols. In Proceedings of
the 2019 IEEE 27th International Conference on Network Protocols (ICNP), Chicago, IL, USA, 8–10 October 2019; pp. 1–12.
15. Liyanage, M.; Ylianttila, M.; Gurtov, A. Enhancing security, scalability and flexibility of virtual private LAN services. In
Proceedings of the 2017 IEEE International Conference on Computer and Information Technology (CIT), Helsinki, Finland, 21–23
August 2017; pp. 286–291.
16. Di Battista, G.; Rimondini, M.; Sadolfo, G. Monitoring the status of MPLS VPN and VPLS based on BGP signaling information. In
Proceedings of the 2012 IEEE Network Operations and Management Symposium, Maui, HI, USA, 16–20 April 2012; pp. 237–244.
17. Liyanage, M.; Ylianttila, M.; Gurtov, A. Software defined VPLS architectures: Opportunities and challenges. In Proceedings of
the 2017 IEEE 28th Annual International Symposium on Personal, Indoor, and Mobile Radio Communications (PIMRC), Montreal,
QC, Canada, 8–13 October 2017; pp. 1–7.
18. The FRRouting Community. Available online: https://frrouting.org/ (accessed on 18 June 2020).
19. Minei, I.; Marques, P.R. Automatic Traffic Mapping for Multi-Protocol Label Switching networks. US Patent 10,193,801, 29
January 2019.
20. Andersson, L.; Minei, I.; Thomas, B. (Eds.) LDP Specification. Technical Report, RFC 5036, 2007. Available online:
https://dl.acm.org/doi/abs/10.17487/RFC5036 (accessed on 8 August 2023).
21. Stoll, D.; Thomas, W.; Belzner, M. The role of pseudo-wires for layer 2 services in intelligent transport networks. Bell Labs Tech. J.
2007, 12, 207–220. [CrossRef]
22. Kaushalram, A.S.; Budiu, M.; Kim, C. Data-Plane Stateful Processing Units in Packet Processing Pipelines, US Patent 10,523,764,
31 December 2019.
23. Vector Packet Processing (VPP) Platform. Available online: https://wiki.fd.io/view/VPP (accessed on 18 June 2020).
24. Linguaglossa, L.; Rossi, D.; Pontarelli, S.; Barach, D.; Marjon, D.; Pfister, P. High-Speed Data Plane and Network Functions
Virtualization by Vectorizing Packet Processing. Comput. Netw. 2019, 149, 187–199. [CrossRef]
25. Daly, J.; Bruschi, V.; Linguaglossa, L.; Pontarelli, S.; Rossi, D.; Tollet, J.; Torng, E.; Yourtchenko, A. Tuplemerge: Fast software
packet processing for online packet classification. IEEE/ACM Trans. Netw. 2019, 27, 1417–1431. [CrossRef]
26. Shukla, S.K.; et al. Low power hardware implementations for network packet processing elements. Integration 2018, 62, 170–181.
27. Data Plane Development Kit. Available online: http://dpdk.org (accessed on 18 June 2020).
28. Zhang, T.; Linguaglossa, L.; Gallo, M.; Giaccone, P.; Rossi, D. FloWatcher-DPDK: Lightweight line-rate flow-level monitoring in
software. IEEE Trans. Netw. Serv. Manag. 2019, 16, 1143–1156. [CrossRef]
29. IEEE Std 802.1 Q-2011; IEEE Standard for Local and Metropolitan Area Networks—Media Access Control (MAC) Bridges and
Virtual Bridge Local Area Networks. IEEE SA: Piscataway, NJ, USA, 2011.
30. Barach, D.; Linguaglossa, L.; Marion, D.; Pfister, P.; Pontarelli, S.; Rossi, D. High-speed software data plane via vectorized packet
processing. IEEE Commun. Mag. 2018, 56, 97–103. [CrossRef]
31. Liyanage, M.; Ylianttila, M.; Gurtov, A. Secure hierarchical VPLS architecture for provider provisioned networks. IEEE Access
2015, 3, 967–984. [CrossRef]
32. Standard IETF RFCs. Available online: https://www.ietf.org/standards/rfcs/ (accessed on 18 June 2020).
33. Martini, L. IANA Allocations for Pseudowire Edge to Edge Emulation (PWE3). Technical Report, RFC 4446, 2006. Available
online: https://datatracker.ietf.org/doc/html/rfc4446 (accessed on 8 August 2023).
34. Martini, L.; Rosen, E.; El-Aawar, N.; Smith, T.; Heron, G. RFC 4447: Pseudowire Setup and Maintenance Using the Label
Distribution Protocol (LDP). The Internet Society. 2006. Available online: https://www.rfc-editor.org/rfc/rfc8077 (accessed on
8 August 2023).
35. Bryant, S.; Swallow, G.; Martini, L.; McPherson, D. Pseudowire Emulation Edge-to-Edge (PWE3) Control Word for Use over an
MPLS PSN. IETF RFC 4385, 2006. Available online: https://www.rfc-editor.org/rfc/rfc4385 (accessed on 8 August 2023).
36. Cisco. Available online: https://www.cisco.com/ (accessed on 18 June 2020).
37. Nexcom. Available online: https://www.nexcom.com/ (accessed on 18 June 2020).
38. Manzoor, A.; Hussain, M.; Mehrban, S. Performance Analysis and Route Optimization: Redistribution between EIGRP, OSPF &
BGP Routing Protocols. Comput. Stand. Interfaces 2020, 68, 103391.
39. Holterbach, T.; Bühler, T.; Rellstab, T.; Vanbever, L. An open platform to teach how the internet practically works. ACM
SIGCOMM Comput. Commun. Rev. 2020, 50, 45–52. [CrossRef]
40. Tiso, J.; Hutton, K.T.; Teare, D.; Schofield, M.D. Designing Cisco Network Service Architectures (ARCH): Foundation Learning Guide;
Cisco Press: Indianapolis, IN, USA, 2011.
41. Dong, X.; Yu, S. VPLS: An effective technology for building scalable transparent LAN services. In Network Architectures,
Management, and Applications II, Proceedings of the Asia-Pacific Optical Communications, Beijing, China, 7–11 November 2004; SPIE
Digital Library: Bellingham, WA, USA, 2005; Volume 5626, pp. 137–147.
42. Xia, W.; Wen, Y.; Foh, C.H.; Niyato, D.; Xie, H. A survey on software-defined networking. IEEE Commun. Surv. Tutor. 2014,
17, 27–51. [CrossRef]
43. Ahmad, I.; Namal, S.; Ylianttila, M.; Gurtov, A. Security in software defined networks: A survey. IEEE Commun. Surv. Tutor. 2015,
17, 2317–2346. [CrossRef]
44. Palmieri, F. VPN scalability over high performance backbones evaluating MPLS VPN against traditional approaches. In
Proceedings of the Proceedings of the Eighth IEEE Symposium on Computers and Communications—ISCC 2003, Kiris-Kemer,
Turkey, 30 June–3 July 2003; pp. 975–981.
45. Rekhter, Y.; Li, T.; Hares, S. A Border Gateway Protocol 4 (BGP-4). Technical Report, RFC 4271, 2006. Available online:
https://www.rfc-editor.org/rfc/rfc4271 (accessed on 8 August 2023).
46. Khandekar, S.; Kompella, V.; Regan, J.; Tingle, N.; Menezes, P.; Lassere, M.; Kompella, K.; Borden, M.; Soon, T.; Heron, G.; et al.
Hierarchical Virtual Private LAN Service; Internet Draft; IETF: Wilmington, DE, USA, 2002.
47. Martini, L.; Sajassi, A.; Townsley, W.M.; Pruss, R.M. Scalable Virtual Private Local Area Network Service. US Patent 7,751,399, 6
July 2010.
48. Chiruvolu, G.; Ge, A.; Elie-Dit-Cosaque, D.; Ali, M.; Rouyer, J. Issues and approaches on extending Ethernet beyond LANs. IEEE
Commun. Mag. 2004, 42, 80–86. [CrossRef]
49. López, G.; Grampín, E. Scalability testing of legacy MPLS-based Virtual Private Networks. In Proceedings of the 2017 IEEE
URUCON, Montevideo, Uruguay, 23–25 October 2017; pp. 1–4.
50. Dunbar, L.; Mack-Crane, T.B.; Hares, S.; Sultan, R.; Ashwood-Smith, P.; Yin, G. Virtual Layer 2 and Mechanism to Make It
Scalable. US Patent 9,160,609, 13 October 2015.
51. Fahad, M.; Khan, B.M.; Bilal, R.; Young, R.C.; Beard, C.; Zaidi, S.S.H. Multibillion packet lookup for next generation networks.
Comput. Electr. Eng. 2020, 84, 106612. [CrossRef]
52. Valenti, A.; Pompei, S.; Matera, F.; Beleffi, G.T.; Forin, D. Quality of service control in Ethernet passive optical networks based on
virtual private LAN service technique. Electron. Lett. 2009, 45, 992–993. [CrossRef]
53. Peter, S.; Li, J.; Zhang, I.; Ports, D.R.; Woos, D.; Krishnamurthy, A.; Anderson, T.; Roscoe, T. Arrakis: The operating system is the
control plane. ACM Trans. Comput. Syst. (TOCS) 2015, 33, 1–30. [CrossRef]
54. Quagga Is a Routing Software Suite. Available online: https://www.quagga.net/ (accessed on 18 June 2020).
55. Lim, L.K.; Gao, J.; Ng, T.E.; Chandra, P.R.; Steenkiste, P.; Zhang, H. Customizable virtual private network service with QoS.
Comput. Netw. 2001, 36, 137–151.
56. Dhaini, A.R.; Ho, P.H.; Jiang, X. WiMAX-VPON: A framework of layer-2 VPNs for next-generation access networks. IEEE/OSA J.
Opt. Commun. Netw. 2010, 2, 400–414. [CrossRef]
57. Ixia Tester. Available online: https://www.ixiacom.com/solutions/network-test-solutions (accessed on 18 June 2020).