IP Router Testing, Isolation and Automation
Peddireddy Divya
Faculty of Computing
Blekinge Institute of Technology
SE-371 79 Karlskrona Sweden
This thesis is submitted to the Faculty of Computing at Blekinge Institute of Technology in
partial fulfillment of the requirements for the degree of Master of Science in Electrical
Engineering with emphasis on Telecommunication Systems. The thesis is equivalent to 20
weeks of full time studies.
Contact Information:
Author(s):
Peddireddy Divya
E-mail: [email protected]
External advisor:
Hamed Ordibehesht
E-mail: [email protected]
University advisor:
Professor Dr. Kurt Tutschku
Department of Communication Systems
ABSTRACT
Context. Test automation is a technique followed by present-day software development industries to
reduce the time and effort invested in manual testing. The process of automating existing manual
tests has now gained popularity in the Telecommunications industry as well. Telecom companies
are looking for ways to improve their existing test methods with automation and to express the benefit of
introducing test automation.
At the same time, the existing methods for throughput calculation in industry involve
measurements on a larger timescale, such as one second. The possibility to measure the throughput of
network elements like routers on smaller timescales gives a better understanding of the forwarding
capabilities, resource sharing and traffic isolation in these network devices.
Objectives. In this research, we develop a framework for automatically evaluating the performance of
routers on multiple timescales: one second, one millisecond and below. The benefit of introducing test
automation is expressed in terms of Return on Investment, by comparing the costs of manual and
automated testing. The performance of a physical router, in terms of throughput, is measured for
varying frame sizes and at multiple timescales.
Methods. The method followed for expressing the benefit of test automation is quantitative. At the
same time, the methodology followed for evaluating the throughput of a router on multiple timescales
is experimental and quantitative, using passive measurements. A framework is developed for
automatically conducting the given test, which enables the user to test the performance of network
devices with minimum user intervention and with improved accuracy.
Results. The results of this thesis work include the benefit of test automation, in terms of Return on
Investment when compared to manual testing, followed by the performance of a router on multiple
timescales. The results indicate that test automation can improve the existing manual testing methods
by introducing greater accuracy in testing. The throughput results indicate that the performance of a
physical router varies on multiple timescales, such as one second and one millisecond. The throughput of
the router is evaluated for varying frame sizes. It is observed that the difference in the coefficient of
variation at the egress and ingress of the router is larger for smaller frame sizes than for
larger frame sizes. The difference is also larger on smaller timescales than on larger
timescales.
Conclusions. This thesis work concludes that the developed test automation framework can be used
and extended for automating several test cases at the network layer. The automation framework
reduces the execution time and introduces accuracy when compared to manual testing. The benefit of
test automation is expressed in terms of Return on Investment. The throughput results are in line with
the hypothesis that the performance of a physical router varies on multiple timescales. The
performance, in terms of throughput, is expressed using a previously suggested performance metric. It
is observed that there is a greater difference in the Coefficient of Variation values (at the egress and
ingress of a router) on smaller timescales than on larger timescales. This difference is larger
for smaller frame sizes than for larger frame sizes.
ACKNOWLEDGEMENTS
I sincerely thank my supervisor, Dr. Kurt Tutschku, for his encouragement, constant support and
guidance throughout my thesis work. Working under his supervision has greatly improved my
knowledge and problem-solving ability. His ideas and ideals have played a crucial role in the
successful completion of my degree. He has taught me to be accurate in every minute step I take and
to never give up on the tasks no matter how hard they seem.
I would also like to thank Mr. Hamed Ordibehesht for giving me an opportunity to work with Ericsson.
He is very encouraging, understanding and patient. I am deeply indebted to my mentor and guide,
Mr. Hans Engberg, who has been very patient in answering all my questions, even the minute ones. He
has played a crucial role in building my networking knowledge. No matter how many times I
shut down the server or the network interface card, he has always restarted them again patiently. He
has taught me everything I know about testing of networking equipment. I am also thankful to
Mr. Neil Navneet for his valuable guidance. He has given the most appropriate suggestions whenever I
was stuck in my thesis, and his support helped me publish my thesis results in time.
Overall, I am indebted to Ericsson and the entire department of IP Networks, for greatly
improving my knowledge and helping me throughout my thesis work in Stockholm. The work culture
at Ericsson has taught me lessons for life. It automatically generated interest in me to learn about the
latest technologies from the brightest minds of the industry. It is a beautiful place to learn, work and
grow. Ericsson provides a platform for students to put forward their innovative ideas.
I am grateful to my parents for their endless love, support and encouragement. I would like to thank
my elder sister for helping in every way possible for the successful completion of my thesis work. The
loving face of my pet dog always makes me smile whenever I am sad. They have done their best in
every way possible to help me complete my studies in Sweden. I am indebted to them for life and
treasure them the most. Their happiness means everything to me.
Finally, I thank the almighty God for giving me the strength to overcome all the hurdles and to
complete my studies with flying colors. I hope that I am forgiven for my mistakes one day. There
were moments when I just felt like giving up or was scared to take a step forward. I believe it is his
invincible force that kept me going somehow during the difficult times and still helps me to move
forward.
CONTENTS
ABSTRACT ...........................................................................................................................................I
ACKNOWLEDGEMENTS ................................................................................................................ II
ACRONYMS..................................................................................................................................... VII
1 INTRODUCTION ....................................................................................................................... 1
1.1 MOTIVATION ......................................................................................................................... 2
1.2 PROBLEM STATEMENT AND HYPOTHESIS............................................................................... 2
1.3 RESEARCH QUESTIONS .......................................................................................................... 2
1.4 CONTRIBUTION ...................................................................................................................... 3
1.5 THESIS OUTLINE .................................................................................................................... 3
2 FUNDAMENTAL CONCEPTS AND AIMS ............................................................................ 4
2.1 MAJOR TELECOMMUNICATION SYSTEM DESIGN AIMS .......................................................... 4
2.2 NEED FOR TEST AUTOMATION IN TELECOM INDUSTRY ......................................................... 5
2.3 NETWORK FUNDAMENTALS ................................................................................................... 8
2.3.1 OSI Model ......................................................................................................................... 8
2.3.2 IP Protocol........................................................................................................................ 9
2.4 THROUGHPUT ...................................................................................................................... 11
2.5 STATISTICAL DEFINITIONS ................................................................................................... 12
2.6 IXIA .................................................................................................................................... 12
2.6.1 IxNetwork ........................................................................................................................ 13
2.6.2 Traffic Generation .......................................................................................................... 13
2.6.3 Automation with IXIA ..................................................................................................... 14
3 RELATED WORK .................................................................................................................... 15
3.1 TEST AUTOMATION.............................................................................................................. 15
3.2 THROUGHPUT CALCULATION............................................................................................... 16
4 METHODOLOGY .................................................................................................................... 18
4.1 AUTOMATION METHOD ....................................................................................................... 18
4.1.1 Test Automation using IXIA ............................................................................................ 18
4.1.2 Return on Investment ...................................................................................................... 20
4.1.3 Test Automation Cost Calculation .................................................................................. 20
4.2 THROUGHPUT EVALUATION................................................................................................. 22
4.3 ANTICIPATED BEHAVIOR FOR DUT ...................................................................................... 23
4.4 DATA EXTRACTION.............................................................................................................. 24
4.5 TEST-BED SET-UP ............................................................................................................... 27
5 IMPLEMENTATION AND EXPERIMENT .......................................................................... 29
5.1 IXIA SPECIFICATIONS .......................................................................................................... 29
5.2 DUT SPECIFICATIONS .......................................................................................................... 29
5.3 OTHER SPECIFICATIONS ....................................................................................................... 30
5.3.1 Layer 2 Switch ................................................................................................................ 30
5.3.2 Physical Link Characteristics ......................................................................................... 30
5.4 PACKET CAPTURE ................................................................................................................ 31
5.5 EXPERIMENTAL PROCEDURE................................................................................................ 32
5.6 AUTOMATION IN EXPERIMENTATION ................................................................................... 33
6 RESULTS AND ANALYSIS..................................................................................................... 35
6.1 RETURN ON INVESTMENT..................................................................................................... 35
6.2 THROUGHPUT ...................................................................................................................... 36
7 CONCLUSION AND FUTURE WORK ................................................................................. 38
7.1 ANSWERING RESEARCH QUESTIONS .................................................................................... 38
7.2 LIMITATIONS AND CHALLENGES .......................................................................................... 39
7.3 FUTURE WORK .................................................................................................................... 40
REFERENCES ................................................................................................................................... 42
APPENDIX A...................................................................................................................................... 45
LIST OF TABLES
Table 1: IxNetwork Traffic Wizard ........................................................................................ 14
Table 2: IXIA Specifications .................................................................................................. 27
Table 3: DuT Specifications ................................................................................................... 27
Table 4: Layer 2 Switch Specifications .................................................................................. 28
Table 5: Physical Link Characteristics ................................................................................... 28
Table 6: Optical Switch Characteristics ................................................................................. 29
Table 7: Manual-Automated Steps ......................................................................................... 32
LIST OF FIGURES
Figure 1: Number of Tests vs Length of Input Parameters....................................................... 6
Figure 2: Efforts for Manual and Automated Testing .............................................................. 7
Figure 3: OSI Reference Model ................................................................................................ 9
Figure 4: IPv4 Datagram Header ............................................................................................ 10
Figure 5: IXIA Client-Server Relationship............................................................................. 19
Figure 6: Obtained Linear Curve for ROI .............................................................................. 22
Figure 7: Unequal Resource Sharing on Smaller Timescales ................................................. 24
Figure 8: Traffic Flow for Simultaneous Streams .................................................................. 24
Figure 9: Calculation of Throughput for 1 second ................................................................. 25
Figure 10: Calculation of Throughput for 1 millisecond and less .......................................... 26
Figure 11: Packet Location in Different Scenarios ................................................................. 27
Figure 12: Test-Bed Set-Up .................................................................................................... 28
Figure 13: Splitting at Network Tap ....................................................................................... 32
Figure 14: Difference in CoV per Second .............................................................................. 36
Figure 15: Difference in CoV per Millisecond ....................................................................... 37
Figure 16: Difference in CoV on Multiple Timescales .......................................................... 37
ACRONYMS
4G Fourth Generation
5G Fifth Generation
AC Alternating Current
ARP Address Resolution Protocol
ATM Asynchronous Transfer Mode
BFD Bidirectional Forwarding Detection
BGP Border Gateway Protocol
CDF Cumulative Distribution Function
CoV Coefficient of Variation
DC Direct Current
DPMI Distributed Passive Measurement Infrastructure
DRR Deficit Round Robin
DUT Device under Test
DWDM Dense Wavelength Division Multiplexing
EAPS Ethernet Automatic Protection Switching
FTP File Transfer Protocol
GA General Availability
Gbps Gigabit per Second
GE Gigabit Ethernet
HDLC High-Level Data Link Control
HTTP Hypertext Transfer Protocol
ICMP Internet Control Message Protocol
IETF Internet Engineering Task Force
IGMP Internet Group Management Protocol
IP Internet Protocol
ISO International Organization for Standardization
IS-IS Intermediate System-Intermediate System Protocol
L2 VPN Layer 2 Virtual Private Network
L3 VPN Layer 3 Virtual Private Network
LACP Link Aggregation Control Protocol
LDP Label Distribution Protocol
LTE Long Term Evolution
MAC Media Access Control
Mbps Megabit per Second
MPLS Multi-Protocol Label Switching
MSTP Multiple Spanning Tree Protocol
NFV Network Functions Virtualization
NGN Next Generation Network
OSI Open Systems Interconnection
OSPF Open Shortest Path First
PSTN Public Switched Telephone Network
QoS Quality of Service
RADIUS Remote Authentication Dial-In User Service
RARP Reverse Address Resolution Protocol
RIP Routing Information Protocol
RJ45 Registered Jack 45
RMON Remote Monitoring Protocol
RSTP Rapid Spanning Tree Protocol
RSVP Resource Reservation Protocol
SDN Software Defined Network
SFP Small-Form-factor Pluggable
SNMP Simple Network Management Protocol
SSH v2c Secure Shell Protocol Version 2C
STP Spanning Tree Protocol
TACACS Terminal Access Controller Access-Control System
TCP Transmission Control Protocol
VLAN Virtual Local Area Network
VM Virtual Machine
VMAN Virtual Metropolitan Area Networks
VRRP Virtual Router Redundancy Protocol
VRF Virtual Routing and Forwarding
WRED Weighted Random Early Detection
WRR Weighted Round Robin
1 INTRODUCTION
The future of telecommunication networks lies in providing a “single converged
IP-based infrastructure” [1], acting as a “packet highway”, thereby providing a
platform to support both present and future services. Such networks are termed as
“Next Generation Networks” (NGN) [2] which aim to provide a smooth transition to
an “all-IP” based network. The aim of such an evolution is to replace the existing
networks such as PSTN and cable TV with services such as IPTV, VOIP and many
more. Other NGN features include interoperability with the current networks,
separation of services from the transport technology and enhancing security. The
factors which need to be looked at when implementing such networks are Quality of
Service (QoS), Security and Reliability [2]. Multi-Protocol Label Switching (MPLS)
has been found to be the most encouraging technology for implementing such
networks and meet the QoS requirements [2].
The Internet Protocol (IP) is the dominant protocol for the transmission of data in
present-day telecommunication networks [3]. IP is a connectionless protocol; as a result, a
connection-oriented protocol, ATM, was suggested. ATM was, however, not preferred due to its
higher cost and greater complexity [4]. Consequently, the Internet Engineering Task Force
(IETF) proposed MPLS as a combination of both IP and ATM to yield the desired levels of QoS
in the network [4]. This protocol
enables feasible shaping of the network’s traffic and improves the distribution of
traffic flows in the network. It supports the Layer 3 protocols, like IPv4, IPv6 along
with Layer 2 protocols, like Ethernet, ATM and HDLC. The main advantage of MPLS
when compared to IP is the reduced complexity of switching or routing look-up tables.
Thus, MPLS/IP technology based routers are used by most of the service providers in
their backbone networks [3]. These routers also support extensive exterior, interior
gateway protocols, high-performance multicast routing and synchronization
capabilities. Apart from the traditional router functionalities, the routers nowadays are
being manufactured to support SDN, 4G and other advanced radio functionalities like
advanced LTE and 5G. Such advanced routers have very high forwarding capabilities,
on the order of 100 Gbps; they greatly improve network performance, enable optimal usage of
network resources, facilitate application-aware traffic engineering and enable the
deployment of scalable networks.
At the same time, automated testing has become crucial for present software
development processes due to the establishment of “test driven development” and
“continuous integration” [5]. It is feasible to carry out more tests with test automation
when compared to manual testing and guarantee the quality of a given system [6].
While companies such as IXIA, Spirent and Fluke provide equipment for network
testing, open source software products have also been proposed for testing [7]. There
are some tests which are required to be executed frequently and therefore need to go through
"test automation" [8]. Test automation increases the speed of work, facilitates repeated
testing, conserves resources and helps to perform more tests in less time [5].
This thesis focuses on developing a framework for automatically evaluating the
performance of routers on multiple timescales, based on a performance metric which
was proposed for comparing different NFV elements [9]. The aim is to provide a
proof-of-concept for the suggested test automation method and throughput calculation
methodology. The aim is also to express the benefit of test automation using a suitable
metric. The test automation framework can help one to conduct the manual tests
automatically and with greater accuracy. The throughput methodology can alleviate the
need to compare different routers based on traditional performance evaluation
methods and hardware descriptions, as such comparison requires detailed knowledge about the
scheduling and queuing policies and the internal routing and switching schemes of a router. The
performance is instead described by performing external measurements and calculations.
1.1 Motivation
The physical routers which are manufactured today by various vendors are multi-functional
and capable of supporting various applications. Such routers have very high forwarding
capabilities, in the range of gigabits per second, to meet the growing demand for high-speed
internet and carrier investments, and to create space for forthcoming IP services.
With the growing focus on developing new technologies, these routers are tested with
the existing testing methodologies. These testing methodologies calculate the
throughput on a larger duration, i.e. per second. The study about the impact of various
traffic streams and further jitter analysis on smaller timescales is a field which requires
more research, as it focuses on the timely delivery of packets. This is the first factor of
motivation.
At the same time, network devices are tested manually, which consumes a lot of
time and resources for configuring the devices, storing the results and performing
individual analysis. Tests can be conducted more accurately if they are performed with
minimum human intervention, as this can greatly reduce the errors introduced by
manual testing. This is the second factor of motivation, which drives the author to
develop an automatic testing framework for testing various network devices,
especially routers, and to propose a framework which is capable of performing automatic
calculation of throughput on smaller timescales.
time scales and developing an automated framework for testing the same. These
questions are answered based on the experimental results obtained.
1.4 Contribution
The main contribution of this thesis is to develop an automated testing framework
to test the methodology which was suggested for performance evaluation of virtual
elements, on actual physical routers. The tests performed during the implementation
phase are automated by identifying and developing suitable automation techniques.
This thesis gives an insight about the degree of traffic isolation in physical routers
on smaller time scales when sending multiple traffic flows simultaneously. The impact
of one traffic stream over the other on smaller time scales is also understood. These are
expressed by calculating the throughput on smaller time scales by using the statistical
method suggested. Such analysis is crucial for time critical applications, where the
packets must arrive within well-defined time frames. At the same time, this thesis
provides an idea about how to develop an automated framework for performance
evaluation. A proof-of-concept of the developed test automation framework is
provided and the benefit of test automation is expressed. Such an automation framework
can greatly reduce the time and resources consumed during manual testing.
The methodology suggested can be used to automatically test and compare the
performance of various physical routers, irrespective of their hardware specifications,
solely based on external statistical calculations and their default forwarding behavior.
Such testing can provide an understanding about the forwarding (scheduling) schemes
inside the routers. One can improve the forwarding schemes further and focus on
hardware changes if performance issues are identified in a router.
2 FUNDAMENTAL CONCEPTS AND AIMS
This section describes the major objectives for designing various systems in
current networks and the need for test automation in the Telecommunications industry.
It is later followed by some general fundamental concepts relevant for this thesis work.
There is a need to develop network architectures and services that are powerful
when it comes to addressing the above design feature. The aim of Future
Internet is to develop network services with “robustness, survivability, and
collaborative properties” [12].
Thus, when a new system is being developed, one must make sure that it
provides and supports the anticipated functions and behavior. It needs to be
engineered properly to support the desired mechanisms.
The concept of reliability ensures that the quality of the media content
delivered over the internet, like video, gaming, etc. [12], is kept intact in the
Future Internet. This means that a system works as expected under all test
conditions. There is a need to develop "experimentation test-beds for new
architectures” [14]. These new test-beds can be implemented considering real-
time scenarios for thorough experimentation and validation of various
functions and use (test) cases [14].
As stated above, such high reliability and availability of the system can only
be confirmed by testing it under numerous real-world conditions. When it
comes to routers, their reliability and availability are measured under different
configurations, router settings and network load scenarios.
The goal of Future Internet is “lowering the complexity for the same level of
performance and functionality at a given cost” [12]. Hence,
telecommunications equipment manufacturers need to design high
performance equipment at low cost. However, increasing complexity and the
relationships between various features make it difficult to conduct tests and
in turn increase the cost of design.
This thesis is an effort to address the issues B) and C) of the system design
process. The reliability and availability of the system should be measured by execution
of multiple tests. The collective testing of typical and untypical use cases multiple
times helps an experimenter to obtain statistically tangible results. One method to reduce
the design and production costs is to develop an automation environment to test the
reliability of such a system. Such an automation environment should have low overhead
in usage and design. This low overhead leads to high Return on Investment (ROI)
values for testing even a smaller number of use cases and speeds up the design
process.
the Telecommunications industry in this section. Some of the general reasons
highlighting the advantage of test automation are mentioned below:
A) Brute force and rigorous tests are not feasible and are too costly in large
network systems with many input parameters.
For example, let us consider a System under Test (SuT) with binary input
parameters and a total length of “n” bits for all parameters. Typically, the
number of tests, “y” is proportional to two to the power of n, y = 2n. From
Figure 1, it can be observed that the number of test cases increases
(exponentially) with an increase in the number of input parameters.
Figure 1: Number of Tests vs Length of Input Parameters
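As a concrete illustration of this growth (the parameter lengths are chosen only for illustration):

$$y = 2^{n}: \qquad n = 10 \Rightarrow y = 1024, \qquad n = 20 \Rightarrow y = 1\,048\,576$$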
This can be done by “repetition of tests” and “structured selection and change
of parameters”. “Repetition” as the name suggests stands for repeating the
tests multiple times. The major advantage of test automation is that it
introduces “reusability and repeatability” [21]. For testing, we can state
reliability as the ratio of the number of passed tests to the total number of tests
conducted. One reason for conducting repeated tests is to obtain statistical
significance for the test results. "Structured Selection" stands for testing a
certain range of the test parameters. An example of structured selection is the set of
frame sizes considered in this thesis for experimentation,
Packet_Size_Range ∈ [128, 256, 512, 1024, 1518], as sketched below.
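A minimal sketch of how "repetition" and "structured selection" can be combined in an automated run; the function run_throughput_test and the repetition count are hypothetical placeholders, not part of the framework developed in this thesis.

```python
# Hypothetical sketch: structured selection of frame sizes combined with
# repeated execution to gain statistical significance for the results.
FRAME_SIZES = [128, 256, 512, 1024, 1518]   # bytes; the range used in this thesis
REPETITIONS = 10                            # hypothetical repetition count

def run_throughput_test(frame_size: int) -> bool:
    """Placeholder for one automated test run; returns True if the run passed."""
    return True  # replace with the actual automated test invocation

def reliability(results) -> float:
    """Reliability as defined above: passed tests divided by total tests conducted."""
    return sum(results) / len(results)

if __name__ == "__main__":
    for size in FRAME_SIZES:
        outcomes = [run_throughput_test(size) for _ in range(REPETITIONS)]
        print("frame size %4d B: reliability = %.2f" % (size, reliability(outcomes)))
```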
C) Reduce the costs involved for testing.
One of the major reasons for introducing automation in testing is to reduce the
costs, either in the form of time or money invested in performing manual tests.
With the introduction of automation, for example, the experimenter does not
need to wait in front of the experiment while the test is in progress [6].
Figure 2: Efforts for Manual and Automated Testing (Effort vs Number of Tests; Net Income = D - C)
Prior to point B, the ROI is negative because the effort (or cost) involved in
performing an automated test is more than that of a manual test. As more test
cases are automated, the cost reduces and for test cases beyond point B, the
ROI becomes increasingly positive.
Thus, looking for the number of test cases where both ROI_automated > 0 and
ROI_automated > c * (cost_of_automated_framework_development) is useful
when developing new frameworks for test automation. Here, c is a
depreciation factor according to the "matching principle" used in depreciation
[23]. The idea is to express the benefit of developing the test automation
framework over the entire period it was in use. For example, the test
automation equipment and hardware can be bought for a cost, say A, and is
estimated to work for 5 years. It is a good approach to express the return on
investment using the revenue and expenditure on a yearly (time) basis
instead of attributing the entire cost to the very first year, when the automation framework
is deployed.
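For illustration only (the cost figure and the straight-line depreciation scheme below are assumptions, not values from this thesis): if the automation equipment is bought for a cost A and is expected to be used for 5 years, the matching principle spreads that cost evenly over its useful life,

$$\text{yearly cost} = \frac{A}{5}, \qquad \text{e.g. } A = 50\,000 \Rightarrow 10\,000 \text{ per year},$$

so the yearly ROI is computed against this yearly share rather than against the full purchase price.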
The tests can be conducted more accurately with reduced (or no) human
interaction. Such automation eliminates the source of error introduced due to
manual testing [24]. Testing for high availability of 99.999%, for example, can introduce
numerous errors or differences in results when the testing and configuration settings are
handled manually.
reference model and the functions related to each layer. When the data moves from
the upper to the lower layers, a header or trailer is added at each layer to ensure interoperability.
Similarly, as the data moves from the lower to the upper layers, the header (or trailer) is
removed and the data is processed accordingly at the corresponding layer.
No two devices in the network can have the same IP address; thus, an IPv4 address is "unique
and universal" [25]. The address space is defined as "the total number of addresses
used by the protocol" [25]. For the IPv4 protocol it is more than 4 billion addresses. The address
space is larger for the IPv6 protocol, as it uses 128-bit addresses. These addresses are most
commonly described using the "Dotted-Decimal Notation" [25]. Class C private
addresses were used for identifying the devices in the test-bed of this thesis.
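As a small illustration of dotted-decimal notation (a sketch using only the Python standard library; the example address is an arbitrary Class C private address, not necessarily one used in the test-bed):

```python
import socket
import struct

def dotted_to_int(addr: str) -> int:
    """Convert an IPv4 address in dotted-decimal notation to its 32-bit integer value."""
    return struct.unpack("!I", socket.inet_aton(addr))[0]

def int_to_dotted(value: int) -> str:
    """Convert a 32-bit integer back to dotted-decimal notation."""
    return socket.inet_ntoa(struct.pack("!I", value))

if __name__ == "__main__":
    example = "192.168.1.10"   # arbitrary Class C private address, for illustration only
    as_int = dotted_to_int(example)
    print(example, "->", as_int, "->", int_to_dotted(as_int))
```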
As mentioned before, one of the reasons for introducing MPLS technology is the
connectionless nature and unreliability of the IP protocol [4]. It is also challenging to
operate “traffic engineering” methods in networks implemented using IP protocol [26].
The problem of congestion is often introduced in IP networks because an individual
decision is made for every incoming packet which arrives at the router interfaces. The
routing decisions are made without taking into consideration the physical link capacity
and the traffic specifications. It often leads to dropping of packets on the physical
links. On the whole, it can be said that using traditional IP routing leads to congestion
(or over-utilization) on some network links and under-utilization on the remaining
links [26].
The authors of [26] describe MPLS as “a set of procedures” which integrates both
the performance, QoS and traffic management of the “Layer 2 label-swapping
paradigm” and the “Layer 3 routing services”. An MPLS network is divided into the
core and edge network. The routers in the edge are termed as Label Edge Routers, i.e.
LERs while the routers in the core are termed as Label Switch Routers, LSRs. The
core routers are connected to only “MPLS capable” routers, whereas the edge routers
are connected to both “MPLS capable and incapable” routers [26]. The first LER
which receives the IP packets and transforms them into MPLS packets for forwarding
in the MPLS domain is known as the “ingress LER”. Similarly, the LER which finally
removes the MPLS labels and sends it to the outside network is known as the “egress
LER” [26]. An MPLS packet is assigned a label at the edge of the MPLS network and
a path is fixed for routing a particular packet [3]. These paths are termed as LSPs,
Label-Switched Paths, which make it feasible for the service providers to forward
specific packet types through their backbone network without taking into account the
IP headers. The labels are changed by the routers for each incoming packet when
forwarding them to the adjacent router. The forwarding is faster as there is no need to
send packets based on IP forwarding tables.
2.4 Throughput
The performance of an IP Network is most commonly expressed in terms of
Throughput, Packet Loss and Delay [28]. Among the three metrics, throughput
measurement is most preferred, as it gives a measure of the transmission capacity and
perceived service quality for the end user. It should be noted that the selected protocol
has an impact on the measurements [28]. The “throughput measurement results
obtained using one protocol cannot generally be assumed to be transferrable to other
protocols” [28]. In general, the following equation is used for measuring throughput:
This parameter is “a highly variable stochastic parameter that varies in both small
and large timescales” [28]. Throughput can be calculated at various layers in a network
and the relation between them is given as:
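Qualitatively, a measurement at a lower layer also counts the headers and trailers added by that layer, so, under the simplifying assumption that adjacent layers differ only in per-packet header overhead:

$$\text{Throughput}_{\text{layer }n} = \text{Throughput}_{\text{layer }n+1} + \frac{\text{per-packet overhead of layer }n\ (\text{bits}) \times \text{number of packets}}{\text{measurement interval}}$$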
At the same time, throughput can be viewed from different perspectives: the network's
point of view and the user's point of view. The network perspective deals with the
lower-layer protocols, whereas the end user deals with the entire end-to-end
communication, i.e. the "full protocol stack". A difference in throughput
measurements is also obtained when the measurement is performed at different points
in the network. For example, consider a TCP connection: the end user sees the
connection from the web browser on his computer to a web server. From a network
perspective, however, there are many intermediate devices like routers, switches, firewalls, etc.
The throughput can be calculated through active as well as passive measurements.
Active measurement is done for access networks, while passive measurement is done
in core networks. There are two methods for calculating throughput, “best effort
approach” and “windowed approach”. With the first method, it is possible to extract
from a particular test file, maximum measurement samples within a given time. In the
second method, a predetermined duration is selected for sending and receiving data.
The throughput is affected with the client/server hardware being used for
measurements, the operating system of the measurement system, nature of the shared
medium etc. Some of the statistical features which are considered during throughput
measurements are mean, median, CDF and variance. Finally, throughput can also be
based on “predefined time period or predefined amount of traffic” [28].
Throughput is usually expressed in frames per second or in bits (or bytes) per
second. Some basic guidelines for performing a throughput test on a DuT are given in
[29]. In [30], throughput is defined as “the maximum rate at which none of the offered
frames are dropped by the device”. “Absolute offered throughput is the number of data
units transmitted and absolute delivered throughput is the number of data units
received. Throughput rate is the number of data units sent or received per unit time”
[32, p. 63].
In [25], throughput is described as “a measure of how fast we can actually send
data through a network”. It is worth noting the difference between bandwidth and
throughput. “Bandwidth is a potential measurement of a link”, whereas throughput is
“an actual measurement of how fast we can send data” [25]. Even though the physical
link has its bandwidth labelled as 1 Gbps, the devices on this link may handle data up
to only 500 Mbps. Thus, it would not be possible to send more than 500 Mbps data on
the physical link. The authors in [25] use throughput as a measure for performance
management. Such measurement focuses on administering the throughput levels of
both the network devices (like routers) and network links, so that they do not fall
below the specified levels.
2.5 Statistical Definitions
The mean (or expected value) of a discrete random variable X is given by:
E[X] = Σ_i x_i · P(x_i)
where x_i is the value of the random variable for an outcome i and P(x_i) is the
probability that the random variable takes the value x_i. For a continuous random
variable with probability density function f(x), the equation is:
E[X] = ∫ x · f(x) dx
The mean is a measure of “central tendency”. Also, median and mode are
calculated as alternate measures of central tendency.
The variance of a random variable X, expressed as σ² or Var(X), is given by the
following equation:
Var(X) = E[(X − E[X])²] = E[X²] − (E[X])²
It is the measure of “dispersion of a random variable about its mean” [32]. “The
larger the variance, the more likely the random variable is to take on values far from its
mean” [32].
The standard deviation of a random variable X is defined as the square root of the
variance, i.e. σ = √Var(X).
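The statistic used throughout this thesis, the Coefficient of Variation (CoV), combines the two measures above; following [9] (see also Section 3.2), the difference of the CoV values observed at the egress and at the ingress of the router is then used as the performance indicator:

$$\mathrm{CoV}(X) = \frac{\sigma}{E[X]}, \qquad \Delta\mathrm{CoV} = \mathrm{CoV}_{\text{egress}} - \mathrm{CoV}_{\text{ingress}}$$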
2.6 IXIA
IXIA is a company which provides “application performance and security
resilience solutions to validate, secure, and optimize businesses’ physical and virtual
networks” [34]. IXIA’s Network Test Solutions [35] enables the network equipment
builders to perform “pre-deployment testing” by testing their newly developed
equipment through simulation in a complex network dealing with real-time traffic. It
provides organizations with an "end-to-end approach" to validate their network
equipment and determine the performance of their "networks and data centers" [35].
IXIA has released three products, namely IxNetwork, IxLoad and IxChariot, for testing
networks in a test bed at various layers. All three products are sold as
physical and virtual load modules housed in a chassis.
IxLoad is used for testing the application layer services [36]. IxLoad can simulate
application layer traffic, like voice, video, storage, internet, etc. It provides “virtual
environment testing” for testing the cloud, virtual servers and virtual machines.
IxChariot provides a way for “assessing and troubleshooting networks and applications
before and after deployment” [37]. It consists of a server and end-points which are
mainly the PCs, mobiles and data centers, and “real-world applications” are imitated to
evaluate the performance of these systems under load conditions. IxNetwork provides
testing features at the network layer and is described in detail in the following section.
2.6.1 IxNetwork
IxNetwork facilitates network layer testing by simulating MPLS, SDN, carrier
Ethernet and other Layer 2-3 protocols. It has the capability to emulate and design very large
numbers of traffic flows and routes, which enables network equipment manufacturers to perform stress
tests to evaluate the data plane performance [38]. It allows testing of IGMP scenarios,
convergence testing during live migration, and LACP. IXIA provides this emulation of
protocols through “CPU-based test port” in an IXIA chassis [38]. Each port has the
capacity to imitate a large number of routers, bridges and hosts in thousands of
networks. This tool comes with a user-friendly Graphical User Interface (GUI) and
wizard for configuring the routes, traffic flows and other features in the test bed. It also
provides real-time statistical analysis in terms of QoS parameters like latency, delay
variation, packet loss and inter-arrival time. IxNetwork comes with an advanced
feature of report generation which enables the user to create customizable reports by
creating graphs, data analysis, and PDF/HTML reports. There is also a feature known
as QuickTests for implementing "industry-standard test methodologies" like ITU-T
Y.1564, RFC 2889, etc. IxNetwork comes with a "Resource Manager" to manage the
developed complex topologies, the configuration changes, compare and integrate
various resources for testing [38]. At the same time, IxNetwork has the feature to
capture the packets which can be used for further analysis.
2.6.2 Traffic Generation
As mentioned before, IxNetwork provides “dynamic traffic support” for testing the
network layer services [38]. It comes with “line-rate traffic generation” capability [38].
With IxNetwork installed, one can change frame sizes, packet rate, line rate and layer
2 bit rate using the GUI. It is also possible to generate and design multiple traffic
streams, configure each packet header by modifying the header fields as per the
requirements, and generate “dynamically-updated MPLS and PPP traffic” [38]. The
wizard of IXIA enables the user to start, stop and pause the traffic.
IxNetwork also has the feature of “Multi-field ingress tracking” and “Multi-egress
Tracking”. The ingress tracking allows the user to record flows by making use of the
“user-defined fields” whereas the egress tracking provides a comparison between the
traffic sent and the traffic received. This tracking helps to check the changes made to
the packets as they flow from source to destination, as well as QoS marking [38].
IxNetwork comes with basic, advanced traffic wizard along with the feature of
Quick Flow groups. It is possible to generate 4 million traffic streams and track them,
configure 16,000 distinct flow groups, generate 4,096 hardware streams from each port
and 4 million MPLS labels. It is possible to perform traffic measurements in terms of
loss, rate, latency, jitter, inter-arrival time, sequence, timestamps, TrueView
Convergence, packet loss duration, misdirected packets, late packets, re-ordered
packets, duplicate packets and in-order packets [38]. The generated statistics can be
obtained per IXIA port and per CPU, and include Tx-Rx frame rates, data plane performance per port,
flow-level measurements and a user-defined flow detection mechanism [38]. Some of
the features of IxNetwork traffic generator are given in Table 1.
Feature: Specification
Traffic: ATM, Ethernet, HDLC, IPv4, IPv6, MPLS, MPLS VPN, Multicast, VLAN
Port Mapping: One-one, many-many, fully meshed source/destination port mapping
Flow Groups: Built based on VLAN ID or QoS
Traffic Profile: Supports ARP, Auto Re-ARP. QoS based on TOS, DSCP, MPLS EXP. Rate based on line rate, packet rate, layer 2 bit rate. Frame size can be fixed, increment, IMIX. Payload is increment/decrement byte, random, and can be customized as well.
Packet Error: Can inject bad CRC/no CRC
Flow Tracking: Tracking of flows based on QoS, VLAN, source/destination MAC/IP, MPLS label
Flow Filtering and Detection: Based on user-defined criteria. Filtering based on latency, packet loss
Packet Editor: Can edit packet header and payload. Tracking can be enabled for user-defined flows. Payload can be fixed, repeating, etc.
Table 1: IxNetwork Traffic Wizard [38]
2.6.3 Automation with IXIA
There is a strong potential to perform test automation using IXIA. The APIs of
IXIA provide all the required modules to successfully automate a test. Automation
with IXIA can be performed by using the GUI containing the “Test Composer” and
“Quick Tests” option or by using the “ScriptGen” module of IXIA.
With Test Composer, one can write a script consisting of the steps needed to
successfully execute a test while with Quick Tests, one can perform testing based on
the industry-defined standards along with custom tests defined by the user. Both these
features are based on the GUI. These two modules are more user-friendly, as they do
not require a detailed understanding of IxNetwork's APIs and their functions. It is
possible to execute multiple test suites in regression and collect the test results
automatically. The test engineers need to learn about the various IXIA commands for
generating traffic, collecting statistics and emulating protocols. The IXIA commands
are given in the form of a sequence of steps with the desired input parameters. On the
other hand, with “ScriptGen”, one can generate a script in TCL, Perl or Python of the
current test configuration which can be re-used and modified to conduct more tests as
per the desired configurations.
ScriptGen is an additional supporting module which is available as a part of the
TCL client installation. It creates a TCL program of the current configuration of the
IXIA ports connected to the network devices and the configured traffic. It is also
possible to create a TCL script for any test script that is opened in TestComposer. The
resulting TCL script is used as a base for creating automated tests. In this thesis, we
reuse the script generated from the “ScriptGen” module from IxNetwork GUI and
develop it further for performing test automation.
3 RELATED WORK
This section describes the work done so far in the areas of Test Automation and
Throughput Calculation.
at 307% and test evaluation at 41%. A higher ROI value for a testing task indicates
that it possesses a greater potential for automation.
The authors of [7] developed an open source automatic testing framework, while
indicating commercial testing frameworks like Spirent, IXIA, Fluke and Endace as
expensive. This framework could generate traffic at a rate of 4 x 10 Gbps. Additional
features like packet capture, timestamping with a precision of 6.25 nanoseconds, and
GPS-based synchronization were provided. This framework was implemented using
NetFPGA-10G. The OSNT architecture consists of a traffic generator with 10 Gigabit
Ethernet interfaces, a traffic monitoring module for capturing packets, a module for
high precision timestamping and a feature to provide a scalable system. This paper
focuses on the need for “high-precision traffic characterization” [7] which is usually
provided by Endace DAG cards. It can generate TCP, UDP, IP, BGP traffic and
various other standard protocols at line rate. It is possible to edit packet header fields,
perform “per-packet traffic shaping” and acquire “per-flow” statistics. This also
provides the feature of packet filtering and testing with malformed packets, which can
be configured as per the test case. This framework is intended for testing in education and
research fields, while for industrial testing, using IXIA is feasible.
The testing staff are posed with a challenge to reduce the cost and time for
hardware and software testing and at the same time maintain the quality and accuracy
of a test [20]. The testing process usually involves testing a product or testing a “multi-
faceted network”. The testing personnel thus need to imitate the real-world scenario
and evaluate the performance. A method is therefore needed for developing,
configuring and managing a test automation framework. The various steps which can
be followed to develop a test automation framework are depicted in [20]. Some of the
notable steps include specifying the test item and the associated network technology,
specifying the client and test server, and the test variables. Once all of the above are defined,
one can schedule and run the test, look for possible errors, handle these errors and
store the results for further analysis. An advanced approach to provide an end-to-end
solution in test automation using various modules is given in [24]. The method
involves selecting one or more test scripts based on a particular network service by one
or more users, selecting a suitable network topology, scheduling, executing the test and
storing and analyzing the generated log report. The code for the selected test scripts is
generated from predefined libraries present in one or more external device libraries.
The developed method also has a feature for simultaneously executing numerous tests
and alerting the users about errors. For an end-to-end solution, all these tests are stored in
an execution server and can be configured based on user inputs for testing voice, video
and data services in a communication network.
identification of core performance issues of advanced IP network applications. A
passive measurement set-up for observation of packet streams is also provided in this
paper. It also presents a method for calculating the throughput of an application on
smaller timescales and is an extension of the theoretical concepts of the “Fluid Flow
Model” suggested in the above paper. For supporting the idea, a video conference
based on H.323 over UDP/IP was conducted between Wurzburg, Germany and
Karlskrona, Sweden through European Research Networks. A bottleneck between the
two links was introduced by sending additional UDP packet streams. When a
disturbance of 8 Mbps was introduced, disturbances in the video occurred. A
disturbance of 10 Mbps demolished the video conference session. For performance
analysis, a timing window of 1 minute with 100 milliseconds of resolution was chosen.
Histogram difference plots for both video and voice were plotted. In the case of video
streaming, huge differences were observed in throughput statistics indicating potential
bottlenecks in the network. The possible reasons for these deviations were the “jitter”
introduced by the network or due to the additional UDP stream of 8 Mbps. At the same
time, the QoS reduced below the desired levels when this additional disturbance of
UDP streams was introduced. On the whole, this paper highlights the use of "passive
measurement” and “throughput histogram difference plots” for alerting both end users
and network operators about hidden performance issues on smaller timescales.
In “A Performance Evaluation Metric for NFV Elements on Multiple Timescales”
[9], the above ideas were extended to a virtualized environment. The authors proposed
a performance metric, independent of the virtualization technology which expresses
the performance in terms of throughput on multiple timescales. Their evaluation
method could successfully express the “transparency” and “degree of isolation” of a
virtual environment. A proof-of-concept of their suggested methodology was given,
where the performance of a XEN virtual router was observed on multiple timescales.
The metric expresses the coefficient of throughput variation by considering the inter-
packet time for each traffic flow. At first, a comparison was performed between the
coefficient of throughput variation and the suggested metric by considering a scenario
of four virtual routers and a round-robin input packet stream. Next, a comparison was
performed when the virtual router was deployed on hypervisors, Xen and VirtualBox.
Later, a live demo was conducted where the impact of one traffic flow on the other
was identified when simultaneous traffic flows were being sent through a router. The
experiment used a capturing duration of 25 seconds and a jumping window of duration
one second. A huge variation in throughput was observed on smaller time scales, which
highlighted unfair sharing of resources and a change in packet order from the ingress to
the egress, thus facilitating the identification of performance degradation in a virtual
environment. This paper received an award at Globecom 2013.
Through the thesis, “Analysis of Resource Isolation and Resource Management in
Network Virtualization” [40], the author further strengthened the above research. The
coefficient of variation was calculated as the difference of CoV at the egress and
ingress for two experiments. The experiments involved sending N traffic streams, from
N sources to N destinations through a single physical system with a virtual bridge,
running N virtual machines. A passive measurement infrastructure equipped with
DAG cards, known as DPMI [41] was used for capturing these N traffic streams.
Initially, when only one VM was used, the effect of the hypervisor on performance
was not significantly visible. However, the addition of one more VM and additional UDP
traffic of 5 Mbps showed significant variation in the throughput at timescales of
0.0025 and 0.005, shown in the form of histograms. On the whole, this research highlighted the
identification of potential bottlenecks using the above suggested methodology and the
need for analyzing the various dependencies in a virtual system and improving the
scheduling mechanisms in such systems.
4 METHODOLOGY
As mentioned before, the aim of this thesis is to validate the methodology
suggested for evaluating NFV elements by applying it to the performance evaluation of a
physical router, and to propose a test automation framework for testing the same. This
section describes the methodologies followed for performing test automation with
IXIA, calculation of Return on Investment (ROI) which highlights the benefit of
using test automation in the telecommunications industry and calculating
throughput on multiple timescales. A detailed literature review is performed in all
the three cases before starting the experiments and performing the calculations.
The methodology for automating the given test with IXIA uses the High-Level
(HLT) API for automatically configuring the ports on a chassis and for starting and
stopping the traffic. It deals with the APIs used for developing the TCL script and
reusing the TCL script from the ScriptGen module of IXIA. The ROI is calculated
based on the formula given in [6], as the reduction in execution time due to automation.
Also, a linear ROI curve is proposed to extend the idea of ROI calculation for “N”
use cases.
The methodology adopted to calculate the throughput when sending multiple
traffic streams is based on [9]; histograms are used to display the performed
statistical analysis and to compare the performance when sending simultaneous
traffic streams [10]. The throughput is calculated by subjecting the router to
different traffic loads and comparing the epoch times of the packets at the ingress
and egress of the router. The scripts provided in [40] are used as a reference, and a
new script is developed further to perform the statistical analysis. The throughput is
calculated at the network layer, using a passive measurement set-up and adopting a
jumping window approach.
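A minimal sketch of this analysis step, assuming the ingress and egress captures are available as lists of (epoch timestamp in seconds, frame length in bytes) pairs; the window length, the function names and the input format are illustrative choices, not the actual scripts developed in this thesis or in [40].

```python
import statistics

def windowed_throughput(packets, window, t_start, t_end):
    """Jumping-window throughput: bits observed in each non-overlapping window.

    packets -- iterable of (epoch_time_seconds, frame_length_bytes)
    window  -- window length in seconds (e.g. 1.0 for one second, 0.001 for one millisecond)
    """
    n_windows = int((t_end - t_start) / window)
    bits = [0.0] * n_windows
    for ts, length in packets:
        idx = int((ts - t_start) / window)
        if 0 <= idx < n_windows:
            bits[idx] += 8 * length
    return [b / window for b in bits]        # bits per second within each window

def cov(samples):
    """Coefficient of variation: standard deviation divided by the mean."""
    mean = statistics.mean(samples)
    return statistics.pstdev(samples) / mean if mean else float("nan")

def delta_cov(ingress, egress, window, t_start, t_end):
    """Difference of CoV at the egress and ingress, the indicator used in this thesis."""
    cov_in = cov(windowed_throughput(ingress, window, t_start, t_end))
    cov_out = cov(windowed_throughput(egress, window, t_start, t_end))
    return cov_out - cov_in
```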
communicates with the ports of the chassis. The chassis and the IXIA TCL Server
have the same IP address. The commands in the TCL script running on the client
machine are run on these servers for automatically configuring IXIA for a given test.
IXIA has a powerful set of APIs (Application Programming Interfaces) which are used
for communicating with the IxNetwork TCL Server and IXIA TCL Server. There are
mainly two APIs which can be used for automating a given test, namely, IxOS and
HLT API. With IxOS, one can directly communicate with the IXIA TCL Server,
without the need for the IxNetwork TCL Server, through IxTclHAL commands. On the
other hand, the HLT API has Perl, Python and TCL APIs to communicate with the
IxNetwork TCL Server. Each API (IxOS, HLT API) has a different set of commands
for communicating with the IXIA TCL server on the chassis. It is possible to schedule
tests and store the results automatically using the IXIA TCL Server, so that a system
can be tested for a longer duration without human intervention. Figure 5 describes the
various modules of IXIA communicating with each other.
In this thesis, we install the IXIA client software on a Linux platform and use the
HLT (TCL) API for performing the test automation. This is because the same Linux client
is used both for automatically capturing the packets required for the throughput calculation
and for configuring the routers and switches through the command line. The HLT API is used
because the configuration script generated by the IxNetwork ScriptGen GUI uses this
API for communicating with IXIA. Thus, the IxNetwork client software is installed on
the Linux platform and an external IxNetwork TCL Server is used. The basic
configuration script was first generated using the IxNetwork GUI on the Windows
platform. This script was then modified and developed further according to the given test
and executed in the HLT API console on the Linux platform.
The various commands of the HLT API that were used for developing the major
backend of the TCL script are described in Appendix A(1). It should be noted that the
entire script is not published because IXIA is not open-source software. Also, the
versions of the IXIA client software, the IxNetwork TCL Server and the TCL Server must
be the same due to compatibility issues; if the versions differ, a version-mismatch
error is raised and testing is stopped.
4.1.2 Return on Investment
The methodology used for highlighting the benefit of test automation in the
telecommunications industry is based on [6]. The Return on Investment is calculated
based on the formula given in that paper, shown below. A common practice is to
calculate ROI for software testing processes, but not for individual test cases in the
telecommunications industry. An effort is made in this thesis to address this issue,
along with providing a proof-of-concept for the throughput calculation
methodology.
The testing department uses IXIA to perform testing of the System under Test
(SuT). This system can either be a network of devices like routers, switches or an
individual device (networking equipment), Device under Test (DuT). This test bed is
simultaneously accessed by various users across the globe. Hence, there is a constraint
on time to execute a particular test. Thus, the testing team has to execute a given test in
a given time frame, thereby highlighting the need for automatically executing the tests.
As per [6], there are four major testing tasks, namely, “Test-case design”, “Test-
scripting”, “Test-execution”, “Test-evaluation”. As the primary goal of this thesis is to
automate the test case for throughput calculation and show the advantage of
automation in terms of ROI, compared to manual testing, the “Test Automation
Decision Matrix” (TADM) is used. It should be noted that the number of use cases is
one, as we are automating a single test. The value of 0 indicates that the given test
phase is manual, and the value of 1 indicates that it is automated. The value for test
design is set to 0, as we have considered only one test case for automation, i.e.
throughput calculation, and the test cases were also designed manually. Test scripting
is set to 0, as IXIA's ScriptGen module was used to generate the TCL script of the
current configuration, which was then developed further to include the additional
requirements of the present test case; ScriptGen produces scripts from the manual
configuration that can be reused for further testing. The value for test execution is 1, as the developed test script is
executed automatically to configure IXIA ports, traffic items, start and stop traffic and
calculate the throughput on multiple timescales. Test evaluation is set to 0, as the user
makes the decision about the final result of execution of the test. The cost and benefit
for manual and automatic testing for the given use case are calculated in hours (time),
and the total ROI is expressed in the form of a percentage. Also, even though there are
various phases in testing, the focus is made on test execution, as the remaining test
phases were difficult to evaluate and express for the given test case. The ROI was
calculated manually (only one use case was considered), unlike the program
Automated-Testing Decision Support System which was developed using Java in [6].
4.1.3 Test Automation Cost Calculation
The formula for calculating the ROI in terms of the reduction in execution time was
described in the previous section, Section 4.1.2. Since calculating the ROI for a
single use case is not sufficient, the obtained reduction in execution time is used to
propose an ROI curve that extends the developed test automation framework to “N” use
cases and describes the effect of the number of use cases on the ROI. In a general
management scenario, Return on Investment is expressed as:
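A commonly used general form of this ratio, assumed in the discussion that follows, is

\[
\mathrm{ROI} \;=\; \frac{\text{Gain from Investment} \;-\; \text{Cost of Investment}}{\text{Cost of Investment}} .
\]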
Gain from Investment is the financial gain obtained from the initial investment
cost. In the current test automation scenario, we define the gain from investment as the
cost saved in performing the tests; it can also be seen as the benefit (profit) obtained by
using the developed test automation tool. Cost of Investment is the initial cost invested
at the beginning for developing a new product. In this thesis, we define the cost of
investment as the overhead involved in setting up and using the developed test
automation tool. In an actual business scenario, both are usually expressed in units of
money; in this thesis, money is expressed in terms of manpower (man hours) or
execution time.
For both non-automated and automated testing, the time required to configure the
test set-up is the upper bound. Let “N” be the number of tests (or the number of use
cases). Different variables are used to describe the times involved in performing
the various testing tasks; the following variables are used in the equations below:
In both cases, we propose a standard unit to express the costs involved in testing
and describe it as Money Units (MU) per hour:
The gain from investment is calculated in this thesis as the difference between the
test costs involved in performing the non-automated and the automated tests. As
mentioned above, it is expressed as the difference between the variables A and B:
The cost of investment is denoted by the variable R, which expresses the time taken to
configure the automated test tool. This cost is the initial investment for developing
the test automation framework:
In this thesis, “P” is the time taken to set up the use case in a non-automated test
environment. In practice, this was found to be 16 hours. The values of the other variables
are R = C = 4 hours, Q = 0.25 hours and S = 0.1 hours.
Thus, the cost of conducting “N” non-automated tests, A, is given as:
Similarly, the cost of conducting “N” automated tests, B, is given as:
Hence, the ROI for automating “N” tests (or use cases) can be expressed by the
following equation:
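One set of cost expressions that reproduces the plotted line y = 4.0375N − 5 with the
values quoted above is the following reconstruction, given for illustration only; the
exact grouping of the terms in the original equations may differ. Here Q and S are taken
to be the per-test execution times of a non-automated and an automated test, respectively:

\[
A = N\,(P + Q), \qquad B = P + N\,S,
\]
\[
\mathrm{ROI}(N) \;=\; \frac{A - B - R}{R} \;=\; \frac{(P + Q - S)\,N - (P + R)}{R} \;=\; 4.0375\,N - 5
\]

for P = 16 h, Q = 0.25 h, S = 0.1 h and R = C = 4 h.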
A graph is drawn by varying the value of “N” to illustrate the resulting straight line,
shown in Figure 6 below. This graph shows the effect of the number of use cases on the
ROI. The proposed line has a negative intercept, and the ROI becomes
positive as the value of “N” (the number of tests or use cases) increases.
Figure 6: Linear ROI curve, y = 4.0375N − 5 (x-axis: N, the number of use cases; y-axis: Return on Investment)
The processing time is denoted by c_p. In the ideal case this value is zero, but in
practical scenarios it is assumed to be constant. The output event T_i^out is delayed by c_p.
Thus, in an isolated environment, the events T_i^in and T_i^out are related as:
The coefficient of variation, CoV, of this random variable is given by the
equation below:
The difference between the CoV values at the egress and the ingress expresses
the degree of isolation, in this scenario traffic isolation:
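Written out with the standard definition of the coefficient of variation (standard
deviation divided by the mean) and with R_{k,Δ} denoting the throughput in the k-th
interval of length Δ, these relations read as follows; the notation is an assumption
following [9]:

\[
T_i^{\mathrm{out}} = T_i^{\mathrm{in}} + c_p, \qquad
\mathrm{CoV} = \frac{\sigma\!\left(R_{k,\Delta}\right)}{\mathrm{E}\!\left[R_{k,\Delta}\right]}, \qquad
\Delta_{\mathrm{CoV}} = \mathrm{CoV}_{\mathrm{egress}} - \mathrm{CoV}_{\mathrm{ingress}} .
\]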
The above described method will be used to answer the research questions.
A scenario indicating a change in the packet order in a virtualized
environment is depicted in Figure 7:
As seen in the figure, there is a change in the order of packets (events) at the
egress of the router. The reason for this is the scheduling mechanism in the
hypervisor, which affects the packet order, thereby leading to unfair resource
sharing on smaller timescales. It should be noted that even though resources appear
to be shared equally on larger timescales, this is not true on smaller timescales in a
virtualized environment and is often not noticed. This change is expressed
mathematically using the variables R_{k,Δ}^{in/out} and Δ described above, as the
coefficient of throughput variation. This idea of a change in the order of packets in a
virtualized environment is extended to a physical environment for testing the
performance of physical routers on multiple timescales.
The three traffic streams flow simultaneously for a duration of 15 seconds. The
total duration for sending the traffic is 25 seconds, with each traffic stream starting at a
gap of 5 seconds. To state more clearly, the first traffic stream starts at the beginning,
say at time 0. After the first traffic stream flows for a duration of 5 seconds, the second
traffic stream is started, at time 5. Now, the two traffic streams are flowing
simultaneously. When the two traffic streams flow simultaneously for a duration of 5
seconds, the third traffic stream is started at time 10. The three traffic streams flow
simultaneously for a duration of 15 seconds. The three simultaneous flowing traffic
streams are stopped after a duration of 15 seconds, at time 25. It should be noted that
the time of flow for each traffic stream is different, like traffic stream 1 flows for the
entire duration of 25 seconds, traffic stream 2 flows for 20 seconds and traffic stream 3
flows for 15 seconds. The duration of the simultaneous traffic flow of the three streams
is of importance for this thesis work. The sending of multiple traffic streams can be
understood in detail in Section 5.5, Experimental Procedure and from Figure 8
described above.
In Figure 8, the packets belonging to each stream are shown in a different
color, namely blue, magenta and green respectively. Only the packets belonging to
stream 1 (blue) flow for the first 5 seconds. After they have flowed for 5 seconds,
stream 2 (magenta) is started alongside. The two streams flow together for 5 seconds,
and then stream 3 (green) is started. Thus, the packets of stream 1 flow for the entire
duration of 25 seconds, stream 2 for 20 seconds and stream 3 for 15 seconds. Also, the
three streams flow simultaneously for a duration of 15 seconds. The aim here is to
calculate the throughput over a duration of 1 second and on timescales smaller than
1 second, like 1 millisecond.
When the user wants to calculate the throughput on a timescale of 1 second, we
consider the window of 5 seconds during which all three streams are flowing
simultaneously and calculate the throughput within it. This window of 5 seconds is
divided into smaller intervals, each of size 1 second, and the throughput is
calculated per second. This scenario is depicted in Figure 9.
As seen in the above figure, an ICMP packet, in the form of a ping request, was
sent before the traffic streams were started. This was done to identify the start of
the IPv4 traffic (the traffic streams). This starting packet is later discarded once the
required traffic streams have been identified and is not used for the throughput
calculation. Next, the starting IP packet marking the beginning of the simultaneous
flow of all three traffic streams is also identified. In the figure, each packet color –
blue, magenta, green – indicates the traffic stream the packet belongs to.
Theoretically, it is assumed that the Layer 2 switch in the test bed forwards the
packets of the three traffic streams in a round-robin fashion. It is not a problem if this
does not occur in reality; it is shown this way only for illustration. All the traffic
streams flow on a physical link with 10 Gbps link capacity. The user can also consider
a window of a duration other than 5 seconds, like 15 seconds, for calculating the
throughput per second; the value of 5 seconds is used here only for explanation.
Similarly, when a user wishes to calculate the throughput on a timescale of less than
one second, say 1 millisecond or 0.1 millisecond, we consider a window of 1 second.
This window of 1 second is divided into smaller intervals, where each interval has a
size of, say, 1 millisecond (or 0.1 millisecond). It should be noted that when the
timescale value is 1 millisecond and the window is 1 second, the number of intervals
becomes 1000. The throughput is then calculated for each interval of 1 millisecond,
as depicted in Figure 10.
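One common way to write the per-interval throughput used in this jumping-window
scheme (the notation is an assumption in the spirit of [9]; Δ is the interval length in
seconds and L_i are the frame sizes in bytes of the packets falling in interval k) is

\[
R_{k,\Delta} \;=\; \frac{8}{\Delta} \sum_{i \,\in\, \text{interval } k} L_i \quad [\mathrm{bit/s}],
\]

where a packet that straddles an interval boundary is split proportionally between the
two neighbouring intervals, as described in the extraction rules below.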
than the end time of the current interval being considered for throughput calculation,
but its start time is less than the end time of that interval.
Finally, the packet belongs to the next interval if none of the above conditions is
satisfied. For the next interval, the interval end time of the previous interval becomes
the interval start time, and so on. This method is followed to analyze the packet data
obtained from the log files of the Wireshark server. The above extraction method is
implemented in the form of a Perl script to calculate the throughput on multiple
timescales; the script is given in Appendix A(3). The user needs to provide the larger
window size, like 5 seconds or 1 second, and the smaller interval size, which is the
timescale on which the throughput is calculated, as inputs to the Perl script.
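To make the extraction concrete, a minimal Perl sketch of this jumping-window binning is
shown below. It is not the Appendix A(3) script: the field order, the 10 Gbps line speed
and the file handling are assumptions for illustration, and a packet is simply assigned to
the interval containing its start time, whereas the actual script splits packets that
straddle an interval boundary proportionally between the two intervals.

#!/usr/bin/perl
# Minimal sketch (not the Appendix A(3) script): bin packets from a T-Shark
# log into fixed intervals and compute the per-interval throughput and its
# coefficient of variation. Interval size, window size and line speed are
# illustrative assumptions.
use strict;
use warnings;

my $interval   = 0.001;   # timescale, e.g. 1 millisecond
my $window     = 1.0;     # larger window, e.g. 1 second
my $line_speed = 10e9;    # 10 Gbps link at the capture point (assumed)

my @bytes = (0) x int($window / $interval);
my $window_start;

while (my $line = <>) {
    # expected fields: packet number, epoch time, frame size, src IP, dst IP
    my ($no, $epoch, $len, $src, $dst) = split ' ', $line;
    next unless defined $dst and $len =~ /^\d+$/;
    $window_start //= $epoch;                      # first packet opens the window
    my $end = $epoch + ($len * 8) / $line_speed;   # time the packet leaves the wire
    last if $end > $window_start + $window;        # outside the analysed window
    my $i = int(($epoch - $window_start) / $interval);
    $bytes[$i] += $len;                            # jumping (non-overlapping) windows
}

# per-interval throughput and CoV = standard deviation / mean
my ($sum, $sumsq) = (0, 0);
for my $b (@bytes) {
    my $tp = ($b * 8) / $interval;                 # bits per second in this interval
    $sum   += $tp;
    $sumsq += $tp * $tp;
}
my $n    = scalar @bytes;
my $mean = $sum / $n;
my $var  = $sumsq / $n - $mean * $mean;
$var = 0 if $var < 0;                              # guard against rounding error
my $cov  = $mean > 0 ? sqrt($var) / $mean : 0;
printf "mean = %.3g bit/s, CoV = %.6f\n", $mean, $cov;

Run with a T-Shark log file on standard input; the CoV values obtained at the ingress and
egress can then be subtracted as described above.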
through the router. Thus, we use a passive measurement system for storing the
packet information. The traffic from all three IXIA ports is directed to only one
IXIA port at the receiver end, i.e. only one destination port. It is due to the lack of
resources in the laboratory that the experiment was conducted under these limited
conditions. Detailed information about the specifications of the devices used in
this experiment is given in Section 5. It is worth mentioning that each of the three
IXIA ports at the source side has a maximum physical link capacity of 1 Gbps and
is connected to a 1 Gbps optical cable. The IXIA port at the destination has a
maximum physical link capacity of 10 Gbps and is therefore connected to the 10
Gbps optical cable.
It should be noted that all cables used in the test-bed are optical. The physical
link capacity between the three traffic sources (IXIA ports) and the Layer 2 switch
is 1 Gbps. The physical link capacity in the rest of the test-bed, i.e. between the
Layer 2 switch and the optical Layer 1 switch, between the network taps and the
Wireshark server, and between the router, the Layer 1 switch and the destination, is
10 Gbps. The entire test is controlled by the Linux client on which the IXIA client
software is installed. It is also through this Linux client that the various network
elements in the test-bed, like the Layer 2 switch, the optical Layer 1 switch and the
router, are configured.
5 IMPLEMENTATION AND EXPERIMENT
This section gives an overview of the physical environment of the test bed which
was used for conducting the given experiment. It describes some of the general
specifications of the network equipment used for performing the given test, like the
Layer 2 switch, the DuT, the Layer 1 switch and the network taps. It also describes the
IXIA software version which was used for developing the TCL scripts for test
automation, the characteristics of the physical cables interconnecting the devices, and
the packet capture set-up. The steps for successfully conducting the test, crucial details
of the experiment and potential areas of automation for the given test case are mentioned as well.
Software Version
IxNetwork 7.40 GA (7.40.929.28)
IxOS 6.80 GA (6.80.110.12)
HLT API 4.95 GA (4.95.117.44)
TCL Interpreter 8.5
Table 2: IXIA Specifications
Port Configuration: Up to 2x 10GE ports, dedicated SFP, RJ45 ports
Performance Management: IP performance monitoring, ping and traceroute, BFD
IPv4 Protocols: OSPF, IS-IS, BGP, LDP, RIP, VRF
MPLS Protocols: L3VPN, L2VPN, RSVP
Layer 2 Properties: IEEE 802.1Q Virtual LAN
Network Management: SNMP, RADIUS, RMON, TACACS+
QoS: Policing up to 1 Gbps, WRED queuing; scheduling is a combination of Strict Priority and/or Deficit Round Robin
Operating Environment: Operating temperature -40 C to 65 C; humidity 0-95% non-condensing; power -48 VDC, 110-240 VAC
Table 3: DuT Specifications
5.3 Other Specifications
This section describes some of the general characteristics of the Layer 2 Switch
which was used in the test-bed and the properties of the physical links interconnecting
the various network devices.
5.3.1 Layer 2 Switch
The sole function of the Layer 2 switch used in this test bed is to forward the multiple
packet streams from the source IXIA ports to the destination IXIA port through the DuT.
This switch has both Layer 2 and Layer 3 Gigabit Ethernet switching capability, and it
supports stacked VLAN and MPLS services. Some of the general specifications of this
switch are given in Table 4.
Power Supply: Dual power inlet; nominal input voltage -48 V
Ports Configuration: GE and 10 GE interface ports supporting SFP, DWDM
Management: SSHv2, TACACS+, SNMP v2/v3, Secure FTP, RADIUS authentication, HTTP for management via web interface
QoS: Strict scheduling, WRR, DRR; DiffServ precedence; policy-based routing
Protocols: EAPS, STP, RSTP, MSTP, RIP, OSPF, VRRP, VMAN, multicast routing
Operating Temperature: -5 C to 85 C
Capacity: 32K MAC addresses, jumbo frames, 12K Layer 3 route table size, 32 virtual routers
Table 4: Layer 2 Switch Specifications
5.3.2 Physical Link Characteristics
This subsection describes the capacity of the cables used for interconnecting the
various network devices in the test-bed. It is important as line rate (line speed/link
speed) is one of the parameters which was considered for calculating throughput. All
the cables used for interconnecting the devices are optical cables. Table 5 describes the
source-destination pair and the capacity of the physical link interconnecting them.
5.4 Packet Capture
As mentioned previously in Section 4.5, a passive measurement system is used to
capture the packets at the ingress and egress of the router. The packet capture set-up
consists of a Layer 1 Optical Switch, Optical Network Taps and a Wireshark Server.
The traffic at the ingress and egress flows from the Layer 1 switch, through the
network taps to finally reach the capturing interfaces of the Wireshark Server.
The Layer 1 switch is, more specifically, an “S-Series Optical Circuit Switch” [42]
developed by CALIENT. This switch is capable of providing interconnectivity in
networks with speeds ranging from 10 Gbps to 100 Gbps and beyond. The switching
module is based on MEMS (Micro-Electro-Mechanical Systems) technology. It
has a GUI for configuring the various interfaces of the switch. At the same time, it also
supports TL1 commands and SSH, so the user can manually log into the switch
and execute the configuration commands. The general specifications of this switch are
described in Table 6:
Power Supply: 12 V DC, 24 V DC, -48 V dual redundant power options
Ports Configuration: 320 ports (Tx/Rx pairs)
Power Dissipation: Less than 45 Watts
Temperature: -5 C to 50 C (operating); -40 C to 70 C (non-operating)
Loss: Maximum insertion loss 3 dB
Features: GUI-driven, EMS-ready; supports TL1, SNMP, CORBA and OpenFlow
Table 6: Optical Switch Characteristics
The network taps used were “All-Optical Taps” manufactured by VSS
Monitoring [43]. The optical split ratio was 70:30, which means that, at each tap, 70%
of the light energy continued on the network path (towards the DuT or the destination)
and the remaining 30% was sent to the Wireshark Server, where the packets were
captured at the respective interfaces. The split ratio is determined by the mechanism
described by the tap manufacturer: the optical loss in the system is calculated in order
to determine the allowable split ratio of a new tap installation, which was found to be
70:30 for the selected network tap. This is because an optical signal degrades as it
propagates through the network; it is attenuated by network components like switches,
fibre cables and splitters, and the attenuation also depends on the wavelength of the
optical signal being used.
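As a rough sanity check on such a ratio (generic optics arithmetic, not figures taken
from the tap datasheet), a 70:30 power split corresponds to insertion losses of about

\[
10\log_{10}\!\left(\tfrac{1}{0.7}\right) \approx 1.5\ \mathrm{dB}
\qquad\text{and}\qquad
10\log_{10}\!\left(\tfrac{1}{0.3}\right) \approx 5.2\ \mathrm{dB}
\]

on the network and monitor legs respectively, before any excess loss of the tap itself.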
As shown in Figure 13 below, the traffic flows from the DuT to the destination IXIA
port, with the Layer 1 switch and a network tap in between. The tap has four ports,
namely NetA, NetB, MonA and MonB. The ports NetA and NetB are connected to
Layer 1 switch ports, and the ports MonA and MonB are connected to the two
Wireshark interfaces. In our test case we consider only unidirectional traffic flow, i.e.
from the DuT to IXIA and not vice versa; thus the relevant ports are NetA, MonA and
p2p1. The traffic reaches the source port of the Layer 1 switch (2.3.4) and from there it is
directed to the optical network tap at port NetA. As mentioned above, the tap is
configured with a 70:30 optical split ratio. 70% of the light reaches port NetB and from
there it is directed to the Layer 1 switch port (2.3.8) and finally to the destination IXIA
port. The remaining 30% of the light reaches port MonA of the network tap and finally
the port p2p1 of the Wireshark server. In brief, the traffic received at port NetA of
the network tap is split for transmission on port NetB and port MonA.
Figure 13: Splitting at Network Tap
obtained. When capturing packets at the two Wireshark interfaces, the user
needs to make sure that the number of packets at the ingress and egress is the
same and that no packets are dropped at the interfaces when saving the
“.pcap” file.
8. The “.pcap” files are converted into text (log) files using a T-Shark filter,
where fields like packet number (frame number), epoch time, frame size, and
source and destination IP addresses are extracted (a sketch of such a conversion
command is given after this list).
9. The two log files containing the packet information at the ingress and egress of
the router are given as input to the developed Perl script to evaluate the
throughput on multiple timescales. The user specifies the timescale at which the
throughput is to be measured as arguments when running the Perl script; the
timescale can be 1 second, 1 millisecond or 0.1 millisecond. It should be noted
that the log files are fetched manually from the Wireshark Server to the Linux
client over the File Transfer Protocol via the command line. The Perl script is
executed on the Linux client, where the IXIA client software is also installed.
10. The Perl script calculates the Coefficient of Variation (CoV) at the ingress and
egress of the router. The CoV at the ingress is subtracted from the CoV at the
egress. A positive and larger difference indicates a greater amount of variation
in the throughput when multiple traffic streams flow simultaneously through
the Device under Test.
The same experiment is repeated for varying frame sizes of 128 Bytes, 256 Bytes,
512 Bytes, 1024 Bytes and 1518 Bytes at 100 Mbps. All the three traffic items are sent
at the same frame size and speed (layer 2 bit rate). The CoV is calculated for each case
at timescales of 1 second and 1 millisecond. The experiment is repeated five times for
each frame size (5 iterations for each frame size) and the average CoV is calculated.
Manual steps:
- Configuration of the router
- Cabling in the test bed
- The user needs to manually give the following parameters when performing the test automatically: chassis IP, TCL Server IP, IxNetwork TCL Server IP, IP address and default gateway for the IXIA ports, frame size and speed for each of the three traffic items, and the timescale value for the throughput calculation
- Launching of the automated script: the user manually launches the script developed for automatic testing from the terminal of the Linux client machine
- The user manually subtracts the CoV values obtained at the ingress and egress of the router to determine the variation in throughput
- The user manually performs the iterations for the varying frame sizes

Automated steps:
- Connection to the chassis
- Selection of the desired IXIA ports
- Configuration of the IXIA ports – IP address and default gateway
- Configuration of the traffic items – frame size, speed
- Starting of the simultaneous traffic streams – each traffic stream at a gap of 5 seconds
- Stopping of the traffic
- Releasing the IXIA ports from the current configuration
- The parameters for the traffic configuration only need to be written in a text file; the format of the text file is given in Appendix A(5)
- Automatic capture at the Wireshark interfaces simultaneously
- Automatic fetching of the log files from the Wireshark Server to the Linux client
- Automatic execution of the Perl script on the two log files fetched from the Wireshark Server for the CoV calculation

Table 7: Manual-Automated Steps
6 RESULTS AND ANALYSIS
This chapter presents the benefit of test automation in terms of Return on Investment
and the coefficient of variation of the throughput on multiple timescales. An automated
testing framework was developed to evaluate the throughput of a router on multiple
timescales. First, a discussion is provided of the Return on Investment calculation used
in this thesis to express the benefit of test automation: the ROI was calculated for a
single use case by taking into consideration the reduction in execution time obtained by
automating that use case, and an ROI curve was then proposed to extend this idea to “N”
use cases. Finally, the throughput is evaluated on two timescales, namely 1 second and
1 millisecond, for varying frame sizes.
test automation method of using IXIA on a Linux client needs to be developed further
and extended to larger use cases, so that the overall ROI can be calculated by taking
all possible use cases into consideration.
6.2 Throughput
The experiment of throughput evaluation on multiple timescales was conducted for
the standard frame sizes of 128, 256, 512, 1024 and 1518 Bytes. The CoV was
calculated at the ingress and the egress of the router, and its value at the ingress was
subtracted from the value at the egress. The experiment involved sending three
simultaneous traffic streams through the DuT (router), from three different sources to
one common destination, and capturing the packets at the ingress and the egress of the
DuT using a Wireshark server. The CoV was calculated at timescales of 1 second and
1 millisecond, and the average was taken over five iterations.
A comparison of the difference in the CoV values at a timescale of 1 second for
varying frame sizes is provided in Figure 14. There is no difference in the CoV values
for the larger frame sizes of 1024 and 1518 Bytes: the difference between the CoV
values at the egress and ingress was found to be zero, i.e. the variation at the egress and
ingress was equal for 1024 and 1518 Bytes. A difference in CoV was observed for the
smaller frame sizes of 128, 256 and 512 Bytes. It can be noticed that the difference in
the CoV values decreases as we move from the smaller frame sizes to the larger frame
sizes. Also, for 128 Bytes, this difference is on the order of 10^-6. The difference is
very small, but it is highlighted in this thesis. The reason for not conducting the
experiment at 64 Bytes is given in Section 7.2.
Figure 15 presents the difference in the CoV values for the same frame sizes at a
timescale of one millisecond. The trend across the frame sizes is similar to that
observed at a timescale of one second: the difference in the CoV values decreases as we
move from the smaller frame sizes to the larger frame sizes. Compared to the 1 second
timescale, the difference in the CoV values for the largest frame size of 1518 Bytes
increased from zero to the order of 10^-4. Again, the increase is very small, but it is
worth noticing. The increase is much more clearly noticeable for the smallest frame
size of 128 Bytes, where the value increased from the order of 10^-6 to the order
of 10^-3.
Figure 15: Difference in CoV per Millisecond
In Figure 16, the difference in the CoV values for the varying frame sizes is shown
at timescales of 1 second and 1 millisecond. This difference is largest for the smallest
frame size, 128 Bytes, when compared to the largest frame size, 1518 Bytes, at both
1 millisecond and 1 second. The difference in CoV values is larger at the timescale of
1 millisecond than at the timescale of 1 second, and it is larger for the smaller frame
sizes than for the larger frame sizes. This result supports the hypothesis assumed in this
thesis, meaning that there is an impact of one traffic stream on the others, which leads
to a difference in the CoV values. It can also be said that the amount of data in each
smaller interval (of duration 1 millisecond) is not always the same, which leads to more
variation; in other words, a small amount of jitter is added by the DuT. This behaviour
is usually not noticed when the throughput is measured on larger timescales. Even
though these values are very small, they are reported to support the hypothesis assumed
in this thesis.
7 CONCLUSION AND FUTURE WORK
The purpose of this thesis work was to develop an automated testing framework for
measuring the performance of routers on multiple timescales. Routers from multiple
vendors can be tested automatically with the suggested automation method and
throughput calculation methodology. This thesis work was carried out by using IXIA
for the automatic generation of traffic, together with a passive measurement
infrastructure consisting of a Layer 1 switch, network taps and a Wireshark server. The
performance was expressed in terms of throughput on multiple timescales, namely
1 second and 1 millisecond. Finally, the benefit of developing the given test automation
framework, using IXIA on a Linux client, was described in terms of Return on
Investment by taking into consideration the reduction in execution time for the test.
The developed test automation framework greatly reduced the complexity of
conducting tests with IXIA. It introduced greater flexibility and speed in terms of
execution time while testing, saving time and resources. A proof-of-concept of the
developed test automation method was successfully implemented and tested for
evaluating the performance of a router.
compared to 1518 Bytes. It was zero for 1518 Bytes, whereas for 128 Bytes it
was slightly greater (on the order of 10^-6). This trend continued at the
timescale of 1 millisecond, where the difference was clearly noticeable, as seen
in Figure 15: the difference increased from zero to the order of 10^-4 for
1518 Bytes, and there was a greater increase in CoV for the smaller frame
size, to the order of 10^-3. From these two figures it is concluded that packet
streams with smaller frame sizes led to a larger difference in the CoV values
than streams with larger frame sizes when passing through the router.
It should be noted that the ROI and throughput calculation values are all anonymized.
Each value is divided by a constant factor, so that the raw data is not made public while
the relative trends are preserved.
It should be noted that there are methods for tuning the Wireshark server to avoid
packet loss, but this was not done, as it would have required stalling the remaining
experiments in the lab while the server was fixed.
3. Another major challenge was the time taken to execute the Perl script for
calculating the throughput on smaller timescales. For example, the size of the
T-Shark log file at the ingress (or egress) for 128 Bytes was 260 Megabytes, and
the complete execution of the script for a timescale of 1 millisecond took around
6 hours. The smaller the timescale value, the more time was consumed in script
execution. In some instances, the script execution process was killed by the
Linux operating system due to insufficient memory. As a result, the experiments
were conducted at a 100 Mbps line speed and timescale values of 1 second and
1 millisecond; the CoV at a timescale of 0.1 millisecond was not calculated.
4. The most important parameter affecting the throughput results is the physical
link capacity of 10 Gbps. This made the calculations in the script complex, and
such high forwarding capacity tremendously increased the size of the log files.
5. There were numerous scenarios in which the difference between the CoV values
at the egress and ingress was negative. It is believed that this negative variation
in throughput might be due to the manner in which the two network taps were
connected to the Wireshark Server: one tap was connected to a network interface
card (NIC) integrated directly on the motherboard of the Wireshark Server,
while the other was connected to an external NIC. These negative values were
therefore discarded, and only the iterations with positive CoV differences were
considered.
6. The ROI calculation can be expressed more clearly by considering the various
external parameters and costs, different use cases and projects.
These limitations were not tackled because of a lack of resources, timing constraints
and other external factors. In the end, the experiments could not be repeated for more
iterations due to recurring license and server issues with IXIA, packet loss in
Wireshark, and a power glitch that led to equipment failure; the resulting reinstallation
further delayed the successful completion of the thesis work. As both automation and
experimentation had to be performed simultaneously, a more thorough analysis of the
results could not be carried out.
The ROI needs to be calculated in an actual business scenario, where the investment
cost is taken into consideration, and this test automation methodology needs to be
extended to different use cases and implemented in larger testing projects. Also, one
needs to consider the other parameters, like test case design, test scripting and test
evaluation, to express the complete benefit of test automation. A comparison of the ROI
for different sub-cases of a given use case, with partial and complete test automation,
can also be presented.
REFERENCES
[1] S. Baraković and J. Baraković, “Traffic performances improvement using DiffServ and
MPLS networks,” in Information, Communication and Automation Technologies,
2009. ICAT 2009. XXII International Symposium on, 2009, pp. 1–8.
[2] U. M. Mir, A. H. Mir, A. Bashir, and M. A. Chishti, “DiffServ-Aware Multi Protocol
Label Switching Based Quality of Service in Next Generation Networks,” in Advance
Computing Conference (IACC), 2014 IEEE International, 2014, pp. 233–238.
[3] M. Hlozak, J. Frnda, Z. Chmelikova, and M. Voznak, “Analysis of Cisco and Huawei
routers cooperation for MPLS network design,” in Telecommunications Forum Telfor
(TELFOR), 2014 22nd, 2014, pp. 115–118.
[4] V. Kher, A. Arman, and D. S. Saini, “Hybrid evolutionary MPLS Tunneling Algorithm
based on high priority bits,” in Futuristic Trends on Computational Analysis and
Knowledge Management (ABLAZE), 2015 International Conference on, 2015, pp. 495–
499.
[5] K. Wiklund, D. Sundmark, S. Eldh, and K. Lundvist, “Impediments for Automated
Testing -- An Empirical Analysis of a User Support Discussion Board,” 2014, pp. 113–
122.
[6] Y. Amannejad, V. Garousi, R. Irving, and Z. Sahaf, “A Search-Based Approach for
Cost-Effective Software Test Automation Decision Support and an Industrial Case
Study,” 2014, pp. 302–311.
[7] G. Antichi, M. Shahbaz, Y. Geng, N. Zilberman, A. Covington, M. Bruyere, N.
McKeown, N. Feamster, B. Felderman, M. Blott, and others, “Osnt: Open source
network tester,” Netw. IEEE, vol. 28, no. 5, pp. 6–12, 2014.
[8] S. Thummalapenta, S. Sinha, N. Singhania, and S. Chandra, “Automating test
automation,” in Software Engineering (ICSE), 2012 34th International Conference on,
2012, pp. 881–891.
[9] D. Stezenbach, K. Tutschku, and M. Fiedler, “A Performance Evaluation Metric for
NFV Elements on Multiple Timescales,” 2013.
[10] M. Fiedler, K. Tutschku, P. Carlsson, and A. Nilsson, “Identification of performance
degradation in IP networks using throughput statistics,” in Teletraffic Science and
Engineering, vol. 5, Elsevier, 2003, pp. 399–408.
[11] N. Feamster, J. Rexford, and E. Zegura, “The road to SDN,” Queue, vol. 11, no. 12, p.
20, 2013.
[12] T. Zahariadis, D. Papadimitriou, H. Tschofenig, S. Haller, P. Daras, G. D. Stamoulis,
and M. Hauswirth, “Towards a future internet architecture,” in The Future Internet
Assembly, 2011, pp. 7–18.
[13] “Federal Standard 1037C: Glossary of Telecommunications Terms.” [Online].
Available: http://www.its.bldrdoc.gov/fs-1037/fs-1037c.htm. [Accessed: 12-Sep-2016].
[14] J. Pan, S. Paul, and R. Jain, “A survey of the research on future internet architectures,”
IEEE Commun. Mag., vol. 49, no. 7, pp. 26–36, 2011.
[15] K. Benz and T. Bohnert, “Dependability modeling framework: a test procedure for
high availability in cloud operating systems,” in Vehicular Technology Conference
(VTC Fall), 2013 IEEE 78th, 2013, pp. 1–8.
[16] U. Franke, P. Johnson, J. König, and L. Marcks von Würtemberg, “Availability of
enterprise IT systems: an expert-based Bayesian framework,” Softw. Qual. J., vol. 20,
no. 2, pp. 369–394, Jun. 2012.
[17] M. Vogt, R. Martens, and T. Andvaag, “Availability modeling of services in IP
networks,” in Design of Reliable Communication Networks, 2003.(DRCN 2003).
Proceedings. Fourth International Workshop on, 2003, pp. 167–172.
[18] E. Marcus and H. Stern, Blueprints for high availability, 2nd ed. Indianapolis, Ind:
Wiley Pub, 2003.
[19] M. Rausand and A. Høyland, System reliability theory: models, statistical methods,
and applications, 2nd ed. Hoboken, NJ: Wiley-Interscience, 2004.
[20] S. C. Maffre, R. V. Alligala, J. K. Epps, and R. L. Halstead, Method and system for test
automation and dynamic test environment configuration. Google Patents, 2013.
[21] D. M. Rafi, K. R. K. Moses, K. Petersen, and M. V. Mäntylä, “Benefits and limitations
of automated software testing: Systematic literature review and practitioner survey,” in
Proceedings of the 7th International Workshop on Automation of Software Test, 2012,
pp. 36–42.
[22] L.-O. Damm and L. Lundberg, “Results from introducing component-level test
automation and Test-Driven Development,” J. Syst. Softw., vol. 79, no. 7, pp. 1001–
1014, Jul. 2006.
[23] S. James, “The relationship between accounting and taxation,” 2002.
[24] P. K. Kurapati, S. K. Misra, and S. Mohan, Method and system for an end-to-end
solution in a test automation framework. Google Patents, 2014.
[25] B. A. Forouzan and S. C. Fegan, Data communications and networking, 4th ed. New
York: McGraw-Hill Higher Education, 2007.
[26] M. K. Porwal, A. Yadav, and S. V. Charhate, “Traffic Analysis of MPLS and Non
MPLS Network including MPLS Signaling Protocols and Traffic Distribution in OSPF
and MPLS,” 2008, pp. 187–192.
[27] J. Postel, “Internet Protocol.” [Online]. Available: https://tools.ietf.org/html/rfc791.
[Accessed: 12-Sep-2016].
[28] “EG 203 165 - V1.1.1 - Speech and multimedia Transmission Quality (STQ);
Throughput Measurement Guidelines - eg_203165v010101m.pdf.” [Online].
Available:
http://www.etsi.org/deliver/etsi_eg/203100_203199/203165/01.01.01_50/eg_203165v0
10101m.pdf. [Accessed: 12-Sep-2016].
[29] J. McQuaid and S. Bradner, “Benchmarking Methodology for Network Interconnect
Devices.” [Online]. Available: https://tools.ietf.org/html/rfc2544. [Accessed: 12-Sep-
2016].
[30] S. Bradner, “Benchmarking Terminology for Network Interconnection Devices,” RFC 1242.
[Online]. Available: https://tools.ietf.org/html/rfc1242. [Accessed: 12-Sep-
2016].
[31] “RFC 6374 - Packet Loss and Delay Measurement for MPLS Networks.” [Online].
Available: https://tools.ietf.org/html/rfc6374. [Accessed: 12-Sep-2016].
[32] W. D. Kelton and A. M. Law, Simulation modeling and analysis. McGraw Hill Boston,
2000.
[33] B. Everitt and A. Skrondal, The Cambridge dictionary of statistics. Cambridge; New
York: Cambridge University Press, 2010.
[34] “About Us.” [Online]. Available: http://www.ixiacom.com/about-us. [Accessed: 12-
Sep-2016].
[35] “Ixia Network Testing Solutions.” [Online]. Available:
http://www.ixiacom.com/solutions/network-test-solutions. [Accessed: 12-Sep-2016].
[36] “IxLoad.” [Online]. Available: http://www.ixiacom.com/products/ixload. [Accessed:
12-Sep-2016].
[37] “IxChariot.” [Online]. Available: http://www.ixiacom.com/products/ixchariot.
[Accessed: 12-Sep-2016].
[38] “IxNetwork.” [Online]. Available: http://www.ixiacom.com/products/ixnetwork.
[Accessed: 12-Sep-2016].
[39] M. Fiedler and K. Tutschku, “Application of the stochastic fluid flow model for
bottleneck identification and classification,” in SCS Conference on Design, Analysis,
and Simulation of Distributed Systems (DASD 2003), 2003.
[40] R. Lindholm, “Analysis of Resource Isolation and Resource Management in Network
Virtualization,” 2016.
[41] P. Arlos, M. Fiedler, and A. A. Nilsson, “A distributed passive measurement
infrastructure,” in Passive and Active Network Measurement, Springer, 2005, pp. 215–
227.
[42] “S Series Optical Circuit Switch | CALIENT Technologies.” [Online]. Available:
http://www.calient.net/products/s-series-photonic-switch/. [Accessed: 12-Sep-2016].
[43] “Network TAPs | Products | VSS Monitoring.” [Online]. Available:
http://www.vssmonitoring.com/taps/. [Accessed: 12-Sep-2016].
APPENDIX A
This section describes the scripts which were developed for automating the given
test case. The scripts were developed using TCL, Perl and shell scripting. The TCL
script is not described in full, as scripts developed using IXIA are copyrighted; only
some of its modules are shown. The throughput on multiple timescales was calculated
using the Perl script, which is published in full. The shell scripts were used for
launching multiple scripts simultaneously. The format of the text file which is given as
user input is also shown.
close $fd
# //vport/interface
$::ixnHLT_log {interface_config://vport:<1>/interface:<1>...}
set _result_ [::ixia::interface_config \
    -mode modify \
    -port_handle $ixnHLT(PORT-HANDLE,//vport:<1>) \
    -gateway $gate1 \
    -intf_ip_addr $ip1 \
    -netmask $sub1 \
    -check_opposite_ip_version 0 \
    -src_mac_addr 0000.2322.4fc4 \
    -arp_on_linkup 1 \
    -ns_on_linkup 1 \
    -single_arp_per_gateway 1 \
    -single_ns_per_gateway 1 \
    -mtu 1500 \
    -vlan 0 \
    -l23_config_type protocol_interface \
]
# Check status
if {[keylget _result_ status] != $::SUCCESS} {
    $::ixnHLT_errorHandler [info script] $_result_
}
catch {
    set ixnHLT(HANDLE,//vport:<1>/interface:<1>) [keylget _result_ interface_handle]
    lappend ixnHLT(VPORT-CONFIG-HANDLES,//vport:<1>,interface_config) \
        $ixnHLT(HANDLE,//vport:<1>/interface:<1>)
}
-number_of_packets_per_stream 1 \
-loop_count 1 \
-min_gap_bytes 12 \
]
# Check status
if {[keylget _result_ status] != $::SUCCESS} {
    $::ixnHLT_errorHandler [info script] $_result_
}
# -- Post Options
$::ixnHLT_log {Configuring post options for config elem: //traffic/trafficItem:<1>/configElement:<1>}
set _result_ [::ixia::traffic_config \
-mode modify \
-traffic_generator ixnetwork_540 \
-stream_id $current_config_element \
-transmit_distribution none \
]
# Check status
if {[keylget _result_ status] != $::SUCCESS} {
    $::ixnHLT_errorHandler [info script] $_result_
}
# -- Post Options
$::ixnHLT_log {Configuring post options for config elem: //traffic/trafficItem:<2>/configElement:<1>}
set _result_ [::ixia::traffic_config \
-mode modify \
-traffic_generator ixnetwork_540 \
-stream_id $current_config_element \
-transmit_distribution none \
]
# Check status
if {[keylget _result_ status] != $::SUCCESS} {
    $::ixnHLT_errorHandler [info script] $_result_
}
# -- Post Options
$::ixnHLT_log {Configuring post options for config elem: //traffic/trafficItem:<3>/configElement:<1>}
set _result_ [::ixia::traffic_config \
    -mode modify \
    -traffic_generator ixnetwork_540 \
    -stream_id $current_config_element \
    -transmit_distribution none \
]
# Check status
if {[keylget _result_ status] != $::SUCCESS} {
    $::ixnHLT_errorHandler [info script] $_result_
}
after 5000
set r [::ixia::traffic_control \
-action run \
-traffic_generator ixnetwork_540 \
-handle $stream2 \
-max_wait_timer 0 \
-type l23 \
]
after 5000
if {[keylget r status] != $::SUCCESS} {
$::ixnHLT_errorHandler [info script] $r
}
after 15000
# ######################
# stop phase of the test
# ######################
# Check status
if {[keylget _result_ status] != $::SUCCESS} {
$::ixnHLT_errorHandler [info script] $_result_
}
A.2. Automatic Capture at Wireshark Interfaces:
The Perl scripts used to simultaneously start the capture at the two Wireshark interfaces
are given below. Two scripts, “test1.pl” and “test2.pl”, were written and launched
simultaneously to perform the live capture; the files saved on the Wireshark Server are
afterwards deleted automatically using “delete.pl”. The command strings $cmd1 and
$cmd2, which hold the remote commands executed over SSH on the Wireshark Server,
are not reproduced here.
A.2.1. test1.pl
#!/usr/bin/perl
use strict;
use warnings;
use Net::OpenSSH;
my $user = 'edivped';
my $password = 'passwd';
my $host = 'wireshark';
my $ssh = Net::OpenSSH->new(host=>"$host",
user=>"$user", port=>22, password=>"$password");
$ssh->system($cmd1);
$ssh->system($cmd2);
A.2.2. test2.pl
#!/usr/bin/perl
use strict;
use warnings;
use Net::OpenSSH;
my $user = 'edivped';
my $password = 'passwd';
my $host = 'wireshark';
my $ssh = Net::OpenSSH->new(host=>"$host",
user=>"$user", port=>22, password=>"$password");
$ssh->system($cmd1);
$ssh->system($cmd2);
A.2.3. delete.pl
#!/usr/bin/perl
use strict;
use warnings;
use Net::OpenSSH;
my $user = 'edivped';
my $password = 'passwd';
my $host = 'wireshark';
my $ssh = Net::OpenSSH->new(host=>"$host",
user=>"$user", port=>22, password=>"$password");
$ssh->system($cmd1);
#!/usr/bin/env perl
use Getopt::Long;
use POSIX;
use Data::Dumper;
use Math::BigFloat;

GetOptions (
    "delta_time=f"    => \$arg_delta_time,    # in seconds
    "interval_time=f" => \$arg_interval_time  # in seconds
);
my %pkt_data=();
while (<>) {
    chomp;    # strip record separator
    # fields: packet number, epoch time, frame size, source IP, destination IP
    if (/(\d+)\s+(\d+\.\d+)\s+(\d+)\s+(\d+\.\d+\.\d+\.\d+)\s+(\d+\.\d+\.\d+\.\d+)/) {
        $pkt_no           = $1;
        $epoch_time_stamp = $2;    # in secs
        $pkt_size         = $3;    # in bytes
        $src_ip_addr      = $4;
        $dst_ip_addr      = $5;
        $pkt_start_time   = $epoch_time_stamp;
        $pkt_end_time     = $epoch_time_stamp + (($pkt_size * 8) / $line_speed);
        $pkt_data{$pkt_start_time}{$src_ip_addr}{$dst_ip_addr}{$pkt_no} = $pkt_size;
        # remember the first packet time per source (start of that stream)
        if (!defined $pkt_start_time{$src_ip_addr}) {
            $pkt_start_time{$src_ip_addr} = $pkt_start_time;
        }
        $pkt_end_time{$src_ip_addr} = $pkt_end_time;
        $window_end_time = $pkt_end_time{$src_ip_addr};
        print "\nWindow End Time:$window_end_time\n";
        last;
    }
    $sub_window_start = Math::BigFloat->new($window_start_time);
    $sub_window_end   = Math::BigFloat->new($sub_window_start + $arg_delta_time);
    #$sub_window_start = $window_start_time;
    #$sub_window_end   = $sub_window_start + $arg_delta_time;
    $count  = 0;
    @amount = ();
    $count1 = 0;
    $y1     = 0;
    $interval_start = $sub_window_start + ($i * $arg_interval_time);
    $interval_end   = $interval_start + $arg_interval_time;
    $src    = $ip_src_tp;
    $length = $pkt_data{$time_tp}{$ip_src_tp}{$ip_dst_tp}{$no_pkt_tp};
    $kdiv2  = ($length * 8) / $line_speed;        # transmission time of the packet
    $a       = Math::BigFloat->new($time_tp);     # packet start time
    $endtime = Math::BigFloat->new($a + $kdiv2);  # packet end time
    $interval_end1   = Math::BigFloat->new($interval_end);
    $interval_start1 = Math::BigFloat->new($interval_start);
    # packet lies completely within the current interval
    if ($a->bge($interval_start1) && $endtime->ble($interval_end1) && $a->blt($interval_end1)) {
        $amount[$i] = $amount[$i] + $length;
        $count1++;
    }
    else {
        # packet starts in this interval but ends in the next one:
        # split its bytes proportionally between the two intervals
        if ($a->bge($interval_start1) && $endtime->bgt($interval_end1) && $a->blt($interval_end1)) {
            $add   = 1;
            $part1 = 0;
            $part2 = 0;
            $i1    = 0;
            $i1    = $add + $i;
            $part1 = (($interval_end1 - $a) * $line_speed) / 8;
            $part2 = (($endtime - $interval_end1) * $line_speed) / 8;
            $amount[$i]  = $amount[$i]  + $part1;
            $amount[$i1] = $amount[$i1] + $part2;
            $y1++;
        }
    }
}
}
}
}
# Throughput Calculation
for ($j = 0; $j < $no_of_intervals; $j++) {
    $throughput[$j] = 0;
    $throughput[$j] = $amount[$j];
    $sum_of_throughput = $sum_of_throughput + $throughput[$j];
    print "\nThroughput in Interval:$j is $throughput[$j]";
    print "\nSum of Throughput in Interval:$j is $sum_of_throughput";
    $sum_of_squared_throughput = $sum_of_squared_throughput + ($throughput[$j] * $throughput[$j]);
}
$mean = $sum_of_throughput / $j;
$mean_squared = ($mean * $mean);
$sum_of_squared_throughput = $sum_of_squared_throughput / $j;
$var = $sum_of_squared_throughput - $mean_squared;
$sigma = sqrt($var);
#!/bin/bash
sh ixiatcl div11.tcl & perl test1.pl & perl test2.pl
perl delete.pl
10.64.213.40
10.64.213.83
192.168.5.4
192.168.184.1
192.168.5.1
192.168.5.2
192.168.5.3
255.255.255.0
192.168.184.1
255.255.255.0
1518
100
1518
100
1518
100