
Thesis no: MSEE-2016-48

IP Router Testing, Isolation and Automation

Peddireddy Divya

Faculty of Computing
Blekinge Institute of Technology
SE-371 79 Karlskrona Sweden
This thesis is submitted to the Faculty of Computing at Blekinge Institute of Technology in
partial fulfillment of the requirements for the degree of Master of Science in Electrical
Engineering with emphasis on Telecommunication Systems. The thesis is equivalent to 20
weeks of full time studies.

Contact Information:
Author(s):
Peddireddy Divya
E-mail: [email protected]

External advisor:
Hamed Ordibehesht
E-mail: [email protected]

University advisor:
Professor Dr. Kurt Tutschku
Department of Communication Systems

Faculty of Computing
Blekinge Institute of Technology
SE-371 79 Karlskrona, Sweden

Internet : www.bth.se
Phone : +46 455 38 50 00
Fax : +46 455 38 50 57

ABSTRACT

Context. Test Automation is a technique followed by the present software development industries to
reduce the time and effort invested for manual testing. The process of automating the existing manual
tests has now gained popularity in the Telecommunications industry as well. The Telecom industries
are looking for ways to improve their existing test methods with automation and express the benefit of
introducing test automation.
At the same time, the existing methods of testing for throughput calculation in industries involve
measurements on a larger timescale, like one second. The possibility to measure the throughput of
network elements like routers on smaller timescales gives a better understanding about the forwarding
capabilities, resource sharing and traffic isolation in these network devices.

Objectives. In this research, we develop a framework for automatically evaluating the performance of
routers on multiple timescales, one second, one millisecond and less. The benefit of introducing test
automation is expressed in terms of Return on Investment, by comparing the benefit of manual and
automated testing. The performance of a physical router, in terms of throughput is measured for
varying frame sizes and at multiple timescales.

Methods. The method followed for expressing the benefit of test automation is quantitative. At the
same time, the methodology followed for evaluating the throughput of a router on multiple timescales
is experimental and quantitative, using passive measurements. A framework is developed for
automatically conducting the given test, which enables the user to test the performance of network
devices with minimum user intervention and with improved accuracy.

Results. The results of this thesis work include the benefit of test automation, in terms of Return on
Investment when compared to manual testing; followed by the performance of router on multiple
timescales. The results indicate that test automation can improve the existing manual testing methods
by introducing greater accuracy in testing. The throughput results indicate that the performance of a physical router varies on multiple timescales, like one second and one millisecond. The throughput of the router is evaluated for varying frame sizes. It is observed that the difference in the coefficient of variation at the egress and ingress of the router is larger for smaller frame sizes than for larger frame sizes. The difference is also larger on smaller timescales than on larger timescales.

Conclusions. This thesis work concludes that the developed test automation framework can be used
and extended for automating several test cases at the network layer. The automation framework
reduces the execution time and introduces accuracy when compared to manual testing. The benefit of
test automation is expressed in terms of Return on Investment. The throughput results are in line with
the hypothesis that the performance of a physical router varies on multiple timescales. The
performance, in terms of throughput, is expressed using a previously suggested performance metric. It is observed that there is a greater difference in the Coefficient of Variation values (at the egress and ingress of a router) on smaller timescales when compared to larger timescales. This difference is greater for smaller frame sizes than for larger frame sizes.

Keywords: Automation, IP, Measurements, Performance, Router, Testing, Throughput.

ACKNOWLEDGEMENTS

I sincerely thank my supervisor, Dr. Kurt Tutschku, for his encouragement, constant support and
guidance throughout my thesis work. Working under his supervision has greatly improved my
knowledge and problem solving ability. His ideas and ideals have played a crucial role in the
successful completion of my degree. He has taught me to be accurate in every minute step I take and
to never give up on the tasks no matter how hard they seem.

I would also like to thank Mr. Hamed Ordibehesht for giving me an opportunity to work with Ericsson. He is very encouraging, understanding and patient. I am deeply indebted to my mentor and guide, Mr. Hans Engberg, who has been very patient in answering all my questions, even the minute ones. He has played a crucial role in building my networking knowledge. No matter how many times I shut down the server or the network interface card, he always restarted them patiently. He has taught me everything I know about testing networking equipment. I am also thankful to Mr. Neil Navneet for his valuable guidance. He has given the most appropriate suggestions whenever I
was stuck in my thesis and his support helped me publish my thesis results in time.
On the whole, I am indebted to Ericsson and the entire department of IP Networks for greatly improving my knowledge and helping me throughout my thesis work in Stockholm. The work culture at Ericsson has taught me lessons for life. It naturally sparked an interest in me to learn about the
latest technologies from the brightest minds of the industry. It is a beautiful place to learn, work and
grow. Ericsson provides a platform for students to put forward their innovative ideas.

I am grateful to my parents for their endless love, support and encouragement. I would like to thank
my elder sister for helping in every way possible for the successful completion of my thesis work. The
loving face of my pet dog always makes me smile whenever I am sad. They have done their best in
every way possible to help me complete my studies in Sweden. I am indebted to them for life and
treasure them the most. Their happiness means everything to me.

Finally, I thank the almighty God for giving me the strength to overcome all the hurdles and to
complete my studies with flying colors. I hope that I am forgiven for my mistakes one day. There
were moments when I just felt like giving up or was scared to take a step forward. I believe it is his
invincible force that kept me going somehow during the difficult times and still helps me to move
forward.

CONTENTS
ABSTRACT ...........................................................................................................................................I

ACKNOWLEDGEMENTS ................................................................................................................ II

CONTENTS ....................................................................................................................................... III

LIST OF TABLES ............................................................................................................................... V

LIST OF FIGURES ............................................................................................................................VI

ACRONYMS..................................................................................................................................... VII

1 INTRODUCTION ....................................................................................................................... 1
1.1 MOTIVATION ......................................................................................................................... 2
1.2 PROBLEM STATEMENT AND HYPOTHESIS............................................................................... 2
1.3 RESEARCH QUESTIONS .......................................................................................................... 2
1.4 CONTRIBUTION ...................................................................................................................... 3
1.5 THESIS OUTLINE .................................................................................................................... 3
2 FUNDAMENTAL CONCEPTS AND AIMS ............................................................................ 4
2.1 MAJOR TELECOMMUNICATION SYSTEM DESIGN AIMS .......................................................... 4
2.2 NEED FOR TEST AUTOMATION IN TELECOM INDUSTRY ......................................................... 5
2.3 NETWORK FUNDAMENTALS ................................................................................................... 8
2.3.1 OSI Model ......................................................................................................................... 8
2.3.2 IP Protocol........................................................................................................................ 9
2.4 THROUGHPUT ...................................................................................................................... 11
2.5 STATISTICAL DEFINITIONS ................................................................................................... 12
2.6 IXIA .................................................................................................................................... 12
2.6.1 IxNetwork ........................................................................................................................ 13
2.6.2 Traffic Generation .......................................................................................................... 13
2.6.3 Automation with IXIA ..................................................................................................... 14
3 RELATED WORK .................................................................................................................... 15
3.1 TEST AUTOMATION.............................................................................................................. 15
3.2 THROUGHPUT CALCULATION............................................................................................... 16
4 METHODOLOGY .................................................................................................................... 18
4.1 AUTOMATION METHOD ....................................................................................................... 18
4.1.1 Test Automation using IXIA ............................................................................................ 18
4.1.2 Return on Investment ...................................................................................................... 20
4.1.3 Test Automation Cost Calculation .................................................................................. 20
4.2 THROUGHPUT EVALUATION................................................................................................. 22
4.3 ANTICIPATED BEHAVIOR FOR DUT ...................................................................................... 23
4.4 DATA EXTRACTION.............................................................................................................. 24
4.5 TEST-BED SET-UP ............................................................................................................... 27
5 IMPLEMENTATION AND EXPERIMENT .......................................................................... 29
5.1 IXIA SPECIFICATIONS .......................................................................................................... 29
5.2 DUT SPECIFICATIONS .......................................................................................................... 29
5.3 OTHER SPECIFICATIONS ....................................................................................................... 30
5.3.1 Layer 2 Switch ................................................................................................................ 30
5.3.2 Physical Link Characteristics ......................................................................................... 30
5.4 PACKET CAPTURE ................................................................................................................ 31
5.5 EXPERIMENTAL PROCEDURE................................................................................................ 32
5.6 AUTOMATION IN EXPERIMENTATION ................................................................................... 33
6 RESULTS AND ANALYSIS..................................................................................................... 35
6.1 RETURN ON INVESTMENT..................................................................................................... 35
6.2 THROUGHPUT ...................................................................................................................... 36
7 CONCLUSION AND FUTURE WORK ................................................................................. 38
7.1 ANSWERING RESEARCH QUESTIONS .................................................................................... 38
7.2 LIMITATIONS AND CHALLENGES .......................................................................................... 39
7.3 FUTURE WORK .................................................................................................................... 40
REFERENCES ................................................................................................................................... 42

APPENDIX A...................................................................................................................................... 45

LIST OF TABLES
Table 1: IxNetwork Traffic Wizard ........................................................................................ 14
Table 2: IXIA Specifications .................................................................................................. 27
Table 3: DuT Specifications ................................................................................................... 27
Table 4: Layer 2 Switch Specifications .................................................................................. 28
Table 5: Physical Link Characteristics ................................................................................... 28
Table 6: Optical Switch Characteristics ................................................................................. 29
Table 7: Manual-Automated Steps ......................................................................................... 32

LIST OF FIGURES
Figure 1: Number of Tests vs Length of Input Parameters....................................................... 6
Figure 2: Efforts for Manual and Automated Testing .............................................................. 7
Figure 3: OSI Reference Model ................................................................................................ 9
Figure 4: IPv4 Datagram Header ............................................................................................ 10
Figure 5: IXIA Client-Server Relationship............................................................................. 19
Figure 6: Obtained Linear Curve for ROI .............................................................................. 22
Figure 7: Unequal Resource Sharing on Smaller Timescales ................................................. 24
Figure 8: Traffic Flow for Simultaneous Streams .................................................................. 24
Figure 9: Calculation of Throughput for 1 second ................................................................. 25
Figure 10: Calculation of Throughput for 1 millisecond and less .......................................... 26
Figure 11: Packet Location in Different Scenarios ................................................................. 27
Figure 12: Test-Bed Set-Up .................................................................................................... 28
Figure 13: Splitting at Network Tap ....................................................................................... 32
Figure 14: Difference in CoV per Second .............................................................................. 36
Figure 15: Difference in CoV per Millisecond ....................................................................... 37
Figure 16: Difference in CoV on Multiple Timescales .......................................................... 37

ACRONYMS
4G Fourth Generation
5G Fifth Generation
AC Alternating Current
ARP Address Resolution Protocol
ATM Asynchronous Transfer Mode
BFD Bidirectional Forwarding Detection
BGP Border Gateway Protocol
CDF Cumulative Distribution Function
CoV Coefficient of Variation
DC Direct Current
DPMI Distributed Passive Measurement Infrastructure
DRR Deficit Round Robin
DUT Device under Test
DWDM Dense Wavelength Division Multiplexing
EAPS Ethernet Automatic Protection Switching
FTP File Transfer Protocol
GA General Availability
Gbps Gigabit per Second
GE Gigabit Ethernet
HDLC High-Level Data Link Control
HTTP Hypertext Transfer Protocol
ICMP Internet Control Message Protocol
IETF Internet Engineering Task Force
IGMP Internet Group Management Protocol
IP Internet Protocol
ISO International Organization for Standardization
IS-IS Intermediate System-Intermediate System Protocol
L2 VPN Layer 2 Virtual Private Network
L3 VPN Layer 3 Virtual Private Network
LACP Link Aggregation Control Protocol
LDP Label Distribution Protocol
LTE Long Term Evolution

MAC Media Access Control
Mbps Megabit per Second
MPLS Multi-Protocol Label Switching
MSTP Multiple Spanning Tree Protocol
NFV Network Functions Virtualization
NGN Next Generation Network
OSI Open Systems Interconnection
OSPF Open Shortest Path First
PSTN Public Switched Telephone Network
QoS Quality of Service
RADIUS Remote Authentication Dial-In User Service
RARP Reverse Address Resolution Protocol
RIP Routing Information Protocol
RJ45 Registered Jack 45
RMON Remote Monitoring Protocol
RSTP Rapid Spanning Tree Protocol
RSVP Resource Reservation Protocol
SDN Software Defined Network
SFP Small-Form-factor Pluggable
SNMP Simple Network Management Protocol
SSH v2c Secure Shell Protocol Version 2C
STP Spanning Tree Protocol
TACACS Terminal Access Controller Access-Control System
TCP Transmission Control Protocol
VLAN Virtual Local Area Network
VM Virtual Machine
VMAN Virtual Metropolitan Area Networks
VRRP Virtual Router Redundancy Protocol
VRF Virtual Routing and Forwarding
WRED Weighted Random Early Detection
WRR Weighted Round Robin

1 INTRODUCTION
The future of telecommunication networks lies in providing a “single converged
IP-based infrastructure” [1], acting as a “packet highway”, thereby providing a
platform to support both present and future services. Such networks are termed as
“Next Generation Networks” (NGN) [2] which aim to provide a smooth transition to
an “all-IP” based network. The aim of such an evolution is to replace the existing
networks such as PSTN and cable TV with services such as IPTV, VOIP and many
more. Other NGN features include interoperability with the current networks,
separation of services from the transport technology and enhancing security. The
factors which need to be looked at when implementing such networks are Quality of
Service (QoS), Security and Reliability [2]. Multi-Protocol Label Switching (MPLS)
has been found to be the most encouraging technology for implementing such
networks and meet the QoS requirements [2].
The Internet Protocol (IP) is the dominant protocol for transmission of data in the present day telecommunication networks [3]. IP is a connectionless protocol, and as an alternative, a connection-oriented protocol called ATM was suggested. This protocol was not preferred due to its higher cost and greater complexity [4]. Consequently, the Internet Engineering Task Force (IETF) proposed MPLS as a combination of both IP and ATM to yield the desired levels of QoS in the network [4]. This protocol
enables feasible shaping of the network’s traffic and improves the distribution of
traffic flows in the network. It supports the Layer 3 protocols, like IPv4, IPv6 along
with Layer 2 protocols, like Ethernet, ATM and HDLC. The main advantage of MPLS
when compared to IP is the reduced complexity of switching or routing look-up tables.
Thus, MPLS/IP technology based routers are used by most of the service providers in
their backbone networks [3]. These routers also support extensive exterior, interior
gateway protocols, high-performance multicast routing and synchronization
capabilities. Apart from the traditional router functionalities, routers nowadays are being manufactured to support SDN, 4G and other advanced radio functionalities like advanced LTE and 5G. Such advanced routers have very high forwarding capabilities, on the order of 100 Gbps; they greatly improve network performance, enable optimal usage of network resources, facilitate application-aware traffic engineering and enable the deployment of scalable networks.
At the same time, automated testing has become crucial for present software
development processes due to the establishment of “test driven development” and
“continuous integration” [5]. It is feasible to carry out more tests with test automation
when compared to manual testing and guarantee the quality of a given system [6].
While companies such as IXIA, Spirent and Fluke provide equipment for network
testing, open source software products have also been proposed for testing [7]. There
are some tests which are required to be executed frequently and need to go through
“test automation” [8]. Test automation increases the speed of work, facilitates repeated
testing, conservation of resources and helps to perform more tests in less time [5].
This thesis focuses on developing a framework for automatically evaluating the
performance of routers on multiple timescales, based on a performance metric which
was proposed for comparing different NFV elements [9]. The aim is to provide a
proof-of-concept for the suggested test automation method and throughput calculation
methodology. The aim is to also express the benefit of test automation using a suitable
metric. The test automation framework can help one to conduct the manual tests
automatically with greater accuracy. The throughput methodology can help alleviate the need to compare different routers based on traditional performance evaluation methods and hardware descriptions, since such comparisons require detailed knowledge about the scheduling and queuing policies and the internal routing and switching schemes of a router. The performance is instead described through external measurements and calculations.

1.1 Motivation
The physical routers which are manufactured today by various vendors are multi-
functional and capable of supporting various applications. Such routers have very high forwarding capabilities, in the gigabit-per-second range, to meet the growing demand for high speed internet and carrier investments, and to create space for forthcoming IP services.
With the growing focus on developing new technologies, these routers are tested with
the existing testing methodologies. These testing methodologies calculate the
throughput on a larger duration, i.e. per second. The study about the impact of various
traffic streams and further jitter analysis on smaller timescales is a field which requires
more research, as it focuses on the timely delivery of packets. This is the first factor of
motivation.
At the same time, network devices are tested manually, which consumes a lot of time and resources for configuring the devices, storing the results and performing individual analysis. Tests can be conducted more accurately if they are performed with minimum human intervention, as this greatly reduces the errors introduced by manual testing. This is the second factor of motivation: to develop an automatic testing framework for testing various network devices, especially routers, and to propose a framework which is capable of performing automatic calculation of throughput on smaller timescales.

1.2 Problem Statement and Hypothesis


Testing the performance of IP Networks using conventional throughput statistics may not always be sufficient, especially when considering time critical network applications. The traditional throughput statistics are “averages on intervals in the range of minutes and more”, such that packets with a very short lifetime affecting a real-time application might be overlooked [10].
There can be scenarios where multiple traffic streams arrive at the router to be forwarded at the same time. One packet stream might affect the other, which on the whole affects the overall delivery of packets in well-defined time frames. In such scenarios, the packets might not reach the destination in well-defined time frames; rather, they might arrive arbitrarily. This concept of multiple traffic streams being independent of each other is the basis for traffic isolation in this thesis. The performance is expressed in terms of the variation in the statistical behavior of the packets at the egress and ingress of the DuT. The difference in the Coefficient of Variation (CoV) at the egress and ingress is an indication of the performance of the router. If the difference in the CoV values is small, it means that the forwarding in the router is rapid and there is no change in the packet order. Otherwise, if there is a large difference in the CoV values at the egress and ingress, it means that the traffic streams are affecting each other, which leads to a change in the packet order. This serves as the hypothesis for this thesis work and a detailed explanation is given in Section 4.3.
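As a minimal illustration of this hypothesis (a sketch only; the per-interval packet counts and the interval length are hypothetical and not the measurement procedure used later in this thesis), the CoV at the ingress and egress can be compared as follows:

# Minimal sketch: compare the Coefficient of Variation (CoV) of per-interval
# packet counts observed at the ingress and egress of the DuT.
from statistics import mean, pstdev

def cov(samples):
    # CoV = standard deviation / mean
    mu = mean(samples)
    return pstdev(samples) / mu if mu else float("nan")

# Hypothetical per-millisecond packet counts for one traffic stream.
ingress = [100, 101, 99, 100, 100, 102, 98, 100]   # before the DuT
egress = [100, 95, 105, 92, 108, 101, 99, 100]     # after the DuT

delta = cov(egress) - cov(ingress)
print(f"CoV ingress = {cov(ingress):.4f}, CoV egress = {cov(egress):.4f}, difference = {delta:.4f}")
# A small difference suggests rapid, undisturbed forwarding; a large difference
# suggests that simultaneous streams are affecting each other.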
Thus, there arises a need for evaluating the performance of hardware routers, in
terms of throughput, on multiple time scales, especially with a focus on smaller time
scales. One of the best ways to test the throughput performance of routers is by
performing test automation. An automated framework for testing the performance of
network elements for time critical applications can greatly contribute towards
improving current testing mechanisms and help us to identify performance degradation
issues.

1.3 Research Questions


The aim of this thesis is to answer the research questions listed below. These
questions are regarding the performance of routers, in terms of throughput on multiple time scales, and developing an automated framework for testing the same. These
questions are answered based on the experimental results obtained.

Q1. What is the performance delivered by a router on multiple timescales?


Q2. How do varying frame sizes affect the throughput of a router on multiple time
scales?
Q3. How to develop an automated testing framework for evaluating throughput? What
is the benefit of developing an automated testing framework in terms of Return on
Investment, ROI?

1.4 Contribution
The main contribution of this thesis is to develop an automated testing framework
to test the methodology which was suggested for performance evaluation of virtual
elements, on actual physical routers. The tests performed during the implementation
phase are automated by identifying and developing suitable automation techniques.
This thesis gives an insight about the degree of traffic isolation in physical routers
on smaller time scales when sending multiple traffic flows simultaneously. The impact
of one traffic stream over the other on smaller time scales is also understood. These are
expressed by calculating the throughput on smaller time scales by using the statistical
method suggested. Such analysis is crucial for time critical applications, where the
packets must arrive within well-defined time frames. At the same time, this thesis
provides an idea about how to develop an automated framework for performance
evaluation. A proof-of-concept of the developed test automation framework is
provided and the benefit of test automation is expressed. Such an automation framework can greatly reduce the time and resources consumed during manual testing.
The methodology suggested can be used to automatically test and compare the
performance of various physical routers, irrespective of their hardware specifications,
solely based on external statistical calculations and their default forwarding behavior.
Such testing can provide an understanding about the forwarding (scheduling) schemes
inside the routers. One can improve the forwarding schemes further and focus on
hardware changes if performance issues are identified in a router.

1.5 Thesis Outline


Chapter 1, which is the current chapter, gives an introduction to the current technologies employed in developing modern-day physical routers. It also describes
the primary motivation for performing this thesis and the problem at hand. This
chapter also discusses the research questions which will be addressed later in this
thesis and the main contribution for performing this thesis work. Chapter 2 focuses on
giving an understanding about the various fundamental concepts which are crucial for
test automation and throughput evaluation. Chapter 3 describes the related work in the
areas of test automation and throughput evaluation. Chapter 4 discusses the methodology that has been adopted to perform test automation and to express the benefit of automation using a suitable metric. It also describes the method used for
performance evaluation of routers on multiple time scales. Chapter 5 illustrates the
implementation and experimental test bed for testing the suggested methodology. It
also describes the potential automation areas for experimentation. Chapter 6 deals with
the results and analysis. The benefit of test automation is expressed after identifying
suitable areas for test automation and automating them. The results concerned with the
throughput calculation methodology are also discussed. Chapter 7 consists of
conclusion and future work. It is in this chapter that the research questions are
answered, the limitations which were identified while performing this thesis are
reported and possible future work is suggested.

2 FUNDAMENTAL CONCEPTS AND AIMS
This section describes the major objectives for designing various systems in
current networks and the need for test automation in the Telecommunications industry.
It is later followed by some general fundamental concepts relevant for this thesis work.

2.1 Major Telecommunication System Design Aims


The networks today are complex and distributed. The software incorporated in the
networking equipment like routers is “closed and proprietary”. Networks still work
under conventional circumstances like “individual protocols, mechanisms, and
configurations interfaces” [11]. This leads to high functional costs and complexity. As such, there is a rise in concepts like the “Future Internet Architecture” [12] and Next Generation Networks. Some of the general features that must be looked at when
designing a networking infrastructure (or equipment) for meeting the demands of
future internet are described below. This idea is later extended to the design of high
performance routers and their testing.

A) Advanced Networking Functionality including Interoperability.

There is a need to develop network architectures and services that are powerful
when it comes to addressing the above design feature. The aim of Future
Internet is to develop network services with “robustness, survivability, and
collaborative properties” [12].

When manufacturing routers, it is essential to design them in such a way that they support the default forwarding and routing functionality and the MPLS features needed for supporting backbone networks [1]. It is also important that the system under design, i.e. the router, is given additional new features and routing algorithms to improve bandwidth utilization and reduce packet loss, which can lead to “high network utilization” [4]. The implementation of such new
features requires innovation and changes in the actual physical (or hardware)
design of the routers. Thus, we refer to “functionality” for describing the
various features that define the characteristics of the system under design.

“Interoperability” is defined as “the ability of systems, units, or forces to provide services to and accept services from other systems, units or forces and to use the services so exchanged to enable them to operate effectively together” [13]. It is essential because when new equipment is designed, all
the developed features must be compatible with each other and should work
effectively with the remaining nodes in the overall network.

Thus, when a new system is being developed, one must make sure that it
provides and supports the anticipated functions and behavior. It needs to be
engineered properly to support the desired mechanisms.

B) Provide reliability and availability for the system.

The concept of reliability ensures that the quality of the media content delivered over the internet, like video, gaming, etc. [12], is kept intact in the future internet. This means that a system is working as expected under all test
conditions. There is a need to develop “experimentation test-beds for new architectures” [14]. These new test-beds can be implemented considering real-
time scenarios for thorough experimentation and validation of various
functions and use (test) cases [14].

According to [15], the “ability of a Configuration Item or IT Service to perform its agreed Function when required” is defined as “Availability”. It is most commonly expressed as the ratio MTTF / (MTTF + MTTR), where MTTF is the “mean time to failure” and MTTR is the “mean time to repair”. It
can also be portrayed as the time that a system is available as a fraction of all
time [16]. The aim is to achieve a high availability of 99.999% (five nines
concept) in the performance of such networking equipment as per the Carrier
Grade Standard [17].
This is applicable to the network provided by the service provider as a whole
as well as networking equipment. Statistically speaking, the downtime per year
for 99.999 percent availability is 5.25 minutes [18].
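As a short worked example using the definitions above:

    Availability A = MTTF / (MTTF + MTTR)
    Permitted downtime per year for A = 99.999%: (1 − 0.99999) × 365 × 24 × 60 minutes = 5.256 minutes ≈ 5.25 minutes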

As described in [15], “Reliability” is often expressed synonymously with availability; it is defined as “the ability of an item to perform a required function, under given environmental and operational conditions and for a stated period of time” [19].

As stated above, such high reliability and availability of the system can only
be confirmed by testing it under numerous real-world conditions. When it comes to routers, their reliability and availability are measured under different configurations, router settings and network load scenarios.

C) Reduce design costs.

The goal of Future Internet is “lowering the complexity for the same level of
performance and functionality at a given cost” [12]. Hence,
telecommunications equipment manufacturers need to design high performance equipment at low cost. However, the increasing complexity and the interdependencies between various features make it difficult to conduct tests and in turn increase the cost of design.

This thesis is an effort to address the issues B) and C) of the system design
process. The reliability and availability of the system should be measured by execution
of multiple tests. The collective testing of typical and atypical use cases multiple times helps an experimenter to obtain statistically tangible results. One method to reduce the design and production costs is to develop an automation environment to test the reliability of such a system. Such an automation environment should have low overhead in usage and design. This low overhead leads to high Return on Investment (ROI)
values for testing even a smaller number of use cases and speeds up the design
process.

2.2 Need for Test Automation in Telecom Industry


At present, there is an increased demand to achieve test automation in the
Telecommunications sector. Test Automation is a concept usually associated with the software development industry. In this section, an effort is made to extend this idea to the Telecommunications industry. Some of the general reasons
highlighting the advantage of test automation are mentioned below:

A) Brute force and rigorous tests are not feasible and are too costly in large
network systems with many input parameters.

The major challenge involved when it comes to testing in networking is to accurately perform a given test under “complex configurations” [20].
According to [20], such testing involves “multiple components” and
“significant configuration of hardware, software and network resources”.

For example, let us consider a System under Test (SuT) with binary input parameters and a total length of “n” bits for all parameters. Typically, the number of tests, “y”, is proportional to two to the power of n, i.e. y = 2^n. From Figure 1, it can be observed that the number of test cases increases (exponentially) with an increase in the number of input parameters.
Figure 1: Number of Tests vs Length of Input Parameters (n)

Hence, it is difficult to test a given system with all possible combinations of the input parameters, especially when the input parameters are large. Thus,
there arises a need to identify the suitable input parameters at which the
system can be tested successfully. If the input parameters are large and there is
a definite need to test all of them, one needs to develop suitable automation
methods for testing such a large system, as manually testing is not feasible.

B) Increase test reliability.

This can be done by “repetition of tests” and “structured selection and change
of parameters”. “Repetition” as the name suggests stands for repeating the
tests multiple times. The major advantage of test automation is that it
introduces “reusability and repeatability” [21]. For testing, we can state
reliability as the ratio of number of passed tests to the total number of tests to
be conducted. One reason for conducting repeated tests is to obtain statistical
significance for the test results. “Structured Selection” stands for testing a certain range of the test parameters. An example of structured selection is the varying packet size range considered in this thesis for experimentation, Packet_Size_Range ϵ [128, 256, 512, 1024, 1518], as sketched below.
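A minimal sketch of how structured selection and repetition could be combined in an automated test loop is shown below (the run_throughput_test helper and the repetition count are illustrative assumptions, not the actual framework of this thesis):

# Sketch: "structured selection" of frame sizes combined with "repetition".
FRAME_SIZES = [128, 256, 512, 1024, 1518]   # bytes, the range used in this thesis
REPETITIONS = 10                            # repeated runs for statistical significance (assumed)

def run_throughput_test(frame_size):
    # Placeholder: configure the traffic generator for this frame size, start the
    # traffic, capture at ingress/egress and return the measured throughput.
    return 0.0

results = {}
for size in FRAME_SIZES:                    # structured selection of the parameter
    results[size] = [run_throughput_test(size) for _ in range(REPETITIONS)]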

C) Reduce the costs involved for testing.

One of the major reasons for introducing automation in testing is to reduce the
costs, either in the form of time or money invested in performing manual tests.
With the introduction of automation, for example, the experimenter does not
need to wait in front of the experiment while the test is in progress [6].

The testing for high availability is cumbersome and expensive.


For example, let us say that we have 100,000 different test cases, each with a
different configuration that differs slightly from the other. It is also assumed
that each test has equal importance for the developed system. In order to
achieve 99.999% system availability, it needs to be tested 100,000 times and should work as expected in 99,999 of those tests. It would become expensive to
conduct such numerous tests manually.

At the same time, the development of a new test automation framework requires an additional investment cost (and effort) for its design,
implementation and development. In Figure 2, the light grey curve indicates
the effort involved for manual testing. It is assumed that the efforts required
for conducting a manual test increase linearly with the number of tests that
need to be conducted. The dark grey curve indicates the efforts involved for
conducting the tests automatically. The preliminary investment cost required
for developing a new test automation framework is depicted by point A in the
figure. Initially, the cost for testing in an automated environment is greater
than that of manual testing. The two curves intersect at point B, where the cost
involved for conducting both manual and automated tests is the same. It can be
observed that beyond point B, the cost for doing automated tests is less when
compared to manual tests. This point B is assumed as the threshold limit,
beyond which there is a reduction in effort and gain in income due to the
investment involved for developing a test automation framework. Thus, the
Return on Investment (ROI) or the net income can be considered beyond this
point, by calculating the difference between points (D,C) and (F,E).

Figure 2: Efforts for Manual and Automated Testing (Effort vs. Number of Tests; the figure annotation reads Net Income = D - C)

The ROI is calculated as [22]:
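In its commonly used general form (stated here as an assumption, since the exact expression of [22] may differ):

    ROI = (gain from automation − cost of automation) / cost of automation

In terms of Figure 2, the gain corresponds to the manual effort saved once the number of tests has passed the break-even point B.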

Prior to point B, the ROI is negative because the effort (or cost) involved in
performing an automated test is more than that of a manual test. As more test
cases are automated, the cost reduces and for test cases beyond point B, the
ROI becomes increasingly positive.

Thus, looking for the number of test cases where both ROI_automated > 0 and ROI_automated > c * (cost_of_automated_framework_development) is useful
when developing new frameworks for test automation. Here, c is the
depreciation cost according to the “matching principle” used in depreciation
[23]. The idea is to express the benefit of developing the test automation
framework over the entire period it was in use. For example, the test
automation equipment and hardware can be bought for a cost, say A, and is
estimated to work for 5 years. It is a good approach to express the return on
investment using the revenue and expenditure on a yearly (time-based) basis
instead of expressing it in the very first year, when the automation framework
is deployed.
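The break-even point B and the resulting ROI can be illustrated with a small sketch (the cost figures and the linear cost model are purely hypothetical):

# Hypothetical linear cost model for Figure 2.
manual_cost_per_test = 2.0       # effort per manual test (assumed, e.g. hours)
automated_cost_per_test = 0.1    # effort per automated run (assumed)
automation_investment = 80.0     # one-time framework development effort (assumed, point A)

# Break-even number of tests (point B): investment + n*automated == n*manual
break_even = automation_investment / (manual_cost_per_test - automated_cost_per_test)
print(f"Break-even at roughly {break_even:.0f} tests")

def roi(num_tests):
    # ROI = (manual effort saved - investment) / investment
    saved = num_tests * (manual_cost_per_test - automated_cost_per_test)
    return (saved - automation_investment) / automation_investment

for n in (10, 50, 100, 500):
    print(f"ROI after {n} tests: {roi(n):+.2f}")   # negative before B, positive beyond it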

D) Increase test accuracy

The tests can be conducted more accurately with reduced (or no) human
interaction. Such automation eliminates the source of error introduced due to
manual testing [24]. With manual testing and configuration settings, testing for a high availability of 99.999% can introduce numerous errors or differences in the results.

2.3 Network Fundamentals


The word “telecommunication” stands for “communication at a distance” and
“data” for “information”. Thus, “data communication” stands for exchange of data
through a transmission medium between devices [25]. The efficiency of such a system
depends on “delivery, accuracy, timeliness and jitter”. There are five basic components
of a communication system as defined by [25]. A data communication system consists
of a sender, receiver, message (data), transmission medium and a protocol. A protocol
is defined as a “set of rules” controlling a communication. It is like an “agreement”
among the two communication parties, the sender and the receiver, for successful
transmission and reception of data. A network is defined as “a set of devices connected
by communication links” [25]. The success of a network, in a broader sense, is
expressed in terms of performance, security and reliability. Today, the Internet is a “hierarchical structure” consisting of local and wide area networks interconnected through various connecting devices like routers and switches. The Open Systems
Interconnection (OSI) model was developed by ISO to describe the various layers of
communication in a network. In practice, this model was superseded by TCP/IP, but it is still used to express the basic idea and function of a data communications network.
2.3.1 OSI Model
The OSI Model is a seven layered framework, describing the mechanism of data
transmission across a network. The seven layers are physical, data link, network,
transport, session, presentation and application layer. Each layer in this framework
performs a specific function, different from the remaining layers. Layers 1-3 are
described as “network support layers”, which are responsible for the “physical aspects of moving data from one device to another” [25]. Layers 5-7 are “user support layers” for
providing interactions among various “software systems”. The transport layer (layer 4)
is responsible for interconnecting these two groups. Figure 3 describes the OSI reference model and the functions related to each layer. When the data is moving from the upper to the lower layers, a header or trailer is added to ensure interoperability.
Similarly, as the data moves from the lower to upper layers, the header (or trailer) is
removed and data is processed accordingly at the corresponding layers.

Figure 3: OSI Reference Model [25]

In this thesis, we focus on the network layer, whose primary function is “source-to-destination delivery of a packet” [25]. It performs the functions of routing and logical addressing. When systems are connected in the same network, there is no function for the network layer. It is only for communication among systems belonging to different
networks. The network layer was designed “to solve the delivery problem through
several links” [25]. The functionality at this layer can be described in three different
scenarios, namely at the source, at the router (or switch) and the destination. At the
source of the network layer, a packet is generated using the information from the
transport layer. The function of this layer at the router (or switch) is to determine the
interface to which the packet should be sent next based on its routing tables. The
functionality of the network layer at the destination is described by its ability to verify
the addresses. The reassembly of the packets is performed at the destination, before
delivering the entire data to the transport layer. Data transmission at the network layer
takes place in the form of “packets”. It should be noted that this layer is equivalent to
the internet (internetwork) layer in the TCP/IP protocol suite. The protocols used in
this layer are IP, ARP, RARP, IGMP and ICMP.
2.3.2 IP Protocol
IP stands for “Internetworking Protocol”. It is described as “unreliable and
connectionless protocol-a best effort delivery service” for a “packet-switching
network” [25]. It does not provide any type of flow or error control. IP is combined
with TCP protocol for reliable delivery of data. Packets at this layer are described as
“datagrams”. It is not necessary that each and every datagram originating from the
same source and about to reach the same destination must follow the same path. The
datagrams can travel different paths and reach the destination out of order. IP relies on
upper layer protocols to resolve these problems. Figure 4 shows the format of an IPv4
datagram header. It consists of header and data. The length of the header varies
between 20-60 bytes and the rest is the data. The maximum limit for the data in an
IPv4 datagram is the difference between 65,535 and the header length.
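For example, with the minimum header length of 20 bytes the data field can carry at most 65,535 − 20 = 65,515 bytes, and with the maximum header length of 60 bytes it can carry at most 65,535 − 60 = 65,475 bytes.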
An “IP Address” is used to identify each device (a host or a router) uniquely at the
network layer. There are two versions, IPv4 and IPv6. In this thesis, we focus on IPv4 addresses, as IPv4 is the version currently being used in the internet. An IPv4 address is 32 bits in length, and no two devices in the network can have the same IP address. Thus, an IPv4 address is “unique
and universal” [25]. The address space is defined as “the total number of addresses
used by the protocol” [25]. It is more than 4 billion for IPv4 protocol. The address
space is larger for the IPv6 protocol as it uses 128-bit addresses. These addresses are most
commonly described using the “Dotted-Decimal Notation” [25]. Class C private
addresses were used for identifying the devices in the test-bed of this thesis.
As mentioned before, one of the reasons for introducing MPLS technology is the
connectionless nature and unreliability of the IP protocol [4]. It is also challenging to
operate “traffic engineering” methods in networks implemented using IP protocol [26].
The problem of congestion is often introduced in IP networks because an individual
decision is made for every incoming packet which arrives at the router interfaces. The
routing decisions are made without taking into consideration the physical link capacity
and the traffic specifications. It often leads to dropping of packets on the physical
links. On a whole, it can be said that using traditional IP routing leads to congestion
(or over-utilization) on some network links and under-utilization on the remaining
links [26].
The authors of [26] describe MPLS as “a set of procedures” which integrates both
the performance, QoS and traffic management of the “Layer 2 label-swapping
paradigm” and the “Layer 3 routing services”. An MPLS network is divided into the
core and edge network. The routers in the edge are termed as Label Edge Routers, i.e.
LERs while the routers in the core are termed as Label Switch Routers, LSRs. The
core routers are connected to only “MPLS capable” routers, whereas the edge routers
are connected to both “MPLS capable and incapable” routers [26]. The first LER
which receives the IP packets and transforms them into MPLS packets for forwarding
in the MPLS domain is known as the “ingress LER”. Similarly, the LER which finally removes the MPLS labels and sends the packets to the outside network is known as the “egress LER” [26]. An MPLS packet is assigned a label at the edge of the MPLS network and
a path is fixed for routing a particular packet [3]. These paths are termed as LSPs,
Label-Switched Paths, which make it feasible for the service providers to forward
specific packet types through their backbone network without taking into account the
IP headers. The labels are changed by the routers for each incoming packet when
forwarding them to the adjacent router. The forwarding is faster as there is no need to
send packets based on IP forwarding tables.
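The label-swapping idea can be illustrated with a small sketch (the label values and interface names are invented for illustration; a real LSR performs this lookup in its hardware forwarding table):

# Toy label forwarding table of a single LSR: incoming label -> (outgoing label, interface).
label_forwarding_table = {
    17: (25, "ge-0/0/1"),
    18: (30, "ge-0/0/2"),
}

def forward_mpls_packet(incoming_label):
    # Swap the label and choose the outgoing interface without inspecting the IP header.
    outgoing_label, out_interface = label_forwarding_table[incoming_label]
    return outgoing_label, out_interface

print(forward_mpls_packet(17))   # -> (25, 'ge-0/0/1')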

Figure 4: IPv4 Datagram Header [27]

2.4 Throughput
The performance of an IP Network is most commonly expressed in terms of
Throughput, Packet Loss and Delay [28]. Among the three metrics, throughput measurement is most preferred, as it gives a measure of the transmission capacity and perceived service quality for the end user. It should be noted that the selected protocol
has an impact on the measurements [28]. The “throughput measurement results
obtained using one protocol cannot generally be assumed to be transferrable to other
protocols” [28]. In general, the following equation is used for measuring throughput:
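In its generic form (assumed here, since the exact expression of [28] is not reproduced), throughput is the amount of data transferred divided by the time taken to transfer it:

    Throughput = amount of transferred data / transfer time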

This parameter is “a highly variable stochastic parameter that varies in both small
and large timescales” [28]. Throughput can be calculated at various layers in a network
and the relation between them is given as:
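A plausible form of this relation, stated here only as an assumption, is that the throughput observed at a lower layer equals the throughput of the layer above it plus the header and framing overhead added at that layer:

    Throughput(layer N) = Throughput(layer N + 1) + protocol overhead added at layer N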

At the same time, the idea of throughput is conceived under various perspectives,
the network’s point of view and user’s point of view. The network layer deals with the
lower layer protocols whereas the end user deals with the entire end-to-end
communication, i.e. “full protocol stack”. A difference in the throughput
measurements is also obtained when the measurement is performed at different points
in the network. For example, consider a TCP connection, an end user sees the
connection from his computer, on his web browser to a web server. But, in a network
perspective, there are many intermediate devices like routers, switches, firewalls, etc.
The throughput can be calculated through active as well as passive measurements.
Active measurement is done for access networks, while passive measurement is done
in core networks. There are two methods for calculating throughput, “best effort
approach” and “windowed approach”. With the first method, it is possible to extract
from a particular test file, maximum measurement samples within a given time. In the
second method, a predetermined duration is selected for sending and receiving data.
The throughput is affected by the client/server hardware being used for measurements, the operating system of the measurement system, the nature of the shared medium, etc. Some of the statistical features which are considered during throughput
measurements are mean, median, CDF and variance. Finally, throughput can also be
based on “predefined time period or predefined amount of traffic” [28].
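As an illustration of the windowed approach on small timescales (a sketch only; the 1 ms window length and the (timestamp, frame size) input format are assumptions, not the exact procedure of [28]):

# Sketch: bin captured packets into fixed windows and compute per-window throughput.
WINDOW = 0.001   # window length in seconds (1 ms, assumed)

def windowed_throughput(packets, window=WINDOW):
    # packets: iterable of (timestamp_in_seconds, frame_size_in_bytes)
    bins = {}
    for ts, size in packets:
        idx = int(ts // window)
        bins[idx] = bins.get(idx, 0) + size * 8              # bits observed in this window
    return {idx: bits / window for idx, bits in bins.items()}   # bits per second

# Hypothetical capture: three frames within the first two milliseconds.
capture = [(0.0002, 512), (0.0007, 512), (0.0013, 1518)]
print(windowed_throughput(capture))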
Throughput is usually expressed in frames per second or in bits (or bytes) per second. Some basic guidelines for performing a throughput test on a DuT are given in
[29]. In [30], throughput is defined as “the maximum rate at which none of the offered
frames are dropped by the device”. “Absolute offered throughput is the number of data
units transmitted and absolute delivered throughput is the number of data units
received. Throughput rate is the number of data units sent or received per unit time”
[32, p. 63].
In [25], throughput is described as “a measure of how fast we can actually send
data through a network”. It is worth noting the difference between bandwidth and
throughput. “Bandwidth is a potential measurement of a link”, whereas throughput is
“an actual measurement of how fast we can send data” [25]. Even though the physical
link has its bandwidth labelled as 1 Gbps, the devices on this link may handle data up
to only 500 Mbps. Thus, it would not be possible to send more than 500 Mbps data on
the physical link. The authors in [25] use throughput as a measure for performance
management. Such measurement focuses on administering the throughput levels of both the network devices (like routers) and network links, so that they do not fall
below the specified levels.

2.5 Statistical Definitions


The definitions described in this section are as per the book “Simulation, Modeling
and Analysis” [32]. These definitions are useful for explaining the methodology
described in Section 4 and for the statistical analysis for calculating the throughput.
“An experiment is a process whose outcome is not known with certainty. The set
of all possible outcomes of an experiment is called the sample space” [32]. “A random
variable is a function that assigns a real number to each point in the sample space S”
[32]. A random variable can be discrete or continuous in nature.
A discrete random variable takes on “at most a countable number of values”. A
“probability mass function” is defined to express all the probability statements which
can be computed from the probability, p(x). A continuous random variable takes
“uncountably infinite number of different values”. The “probability density function”
expresses all the probability statements pertaining to this continuous random variable.
The mean of a random variable X, also denoted as the expected value, is expressed by E(X) or µ and is given by the following equation for a discrete random variable:

E(X) = Σ xi · P(xi), summed over all outcomes i,

where xi is the value of the random variable for outcome i and P(xi) is the probability that the random variable takes the value xi. For a continuous random variable with probability density function f(x), the equation is:

E(X) = ∫ x · f(x) dx

The mean is a measure of “central tendency”. Also, median and mode are
calculated as alternate measures of central tendency.
The variance of a random variable X, expressed as σ² or Var(X), is given by the following equation:

Var(X) = E[(X − E(X))²]

It is the measure of “dispersion of a random variable about its mean” [32]. “The
larger the variance, the more likely the random variable is to take on values far from its
mean” [32].
The standard deviation of a random variable X is defined as the square root of the variance, i.e.

σ = √Var(X)

The coefficient of variation is defined as the ratio of σ, the standard deviation, to E(X), the mean:

CoV = σ / E(X)

It is a “measure of spread for a set of data” and is often expressed as the percentage of this ratio [33].
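As a short numeric example of these definitions, consider the sample {8, 10, 12}: the mean is 10, the (population) variance is ((8 − 10)² + (10 − 10)² + (12 − 10)²) / 3 = 8/3 ≈ 2.67, the standard deviation is ≈ 1.63, and the coefficient of variation is ≈ 1.63 / 10 = 0.163, i.e. about 16.3%.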

2.6 IXIA
IXIA is a company which provides “application performance and security
resilience solutions to validate, secure, and optimize businesses’ physical and virtual
networks” [34]. IXIA’s Network Test Solutions [35] enables the network equipment
builders to perform “pre-deployment testing” by testing their newly developed

equipment through simulation in a complex network dealing with real-time traffic. It
provides the organizations with an “end-to-end approach” to justify their built network
equipment, determine the performance of their built “networks and data centers” [35].
IXIA has released three products, namely IxNetwork, IxLoad and IxChariot, for testing
networks in a test bed at various layers. All three products are sold as physical and
virtual load modules housed in a chassis.
IxLoad is used for testing the application layer services [36]. IxLoad can simulate
application layer traffic, like voice, video, storage, internet, etc. It provides “virtual
environment testing” for testing the cloud, virtual servers and virtual machines.
IxChariot provides a way for “assessing and troubleshooting networks and applications
before and after deployment” [37]. It consists of a server and end-points which are
mainly the PCs, mobiles and data centers, and “real-world applications” are imitated to
evaluate the performance of these systems under load conditions. IxNetwork provides
testing features at the network layer and is described in detail in the following section.
2.6.1 IxNetwork
IxNetwork facilitates network layer testing by simulating MPLS, SDN, carrier
Ethernet and other Layer 2-3 protocols. It has the capability to emulate and design very
large numbers of traffic flows and routes, which enables network manufacturers to perform stress
tests to evaluate the data plane performance [38].
convergence testing during live migration, and LACP. IXIA provides this emulation of
protocols through “CPU-based test port” in an IXIA chassis [38]. Each port has the
capacity to imitate a large number of routers, bridges and hosts in thousands of
networks. This tool comes with a user-friendly Graphical User Interface (GUI) and
wizard for configuring the routes, traffic flows and other features in the test bed. It also
provides real-time statistical analysis in terms of QoS parameters like latency, delay
variation, packet loss and inter-arrival time. IxNetwork comes with an advanced
report generation feature which enables the user to create customizable reports
containing graphs, data analysis and PDF/HTML output. There is also a feature known
as QuickTests for implementing “industry-standard test methodologies” like ITU-T
Y.1564, RFC 2889, etc. IxNetwork comes with a “Resource Manager” to manage the
developed complex topologies, the configuration changes, compare and integrate
various resources for testing [38]. At the same time, IxNetwork has the feature to
capture the packets which can be used for further analysis.
2.6.2 Traffic Generation
As mentioned before, IxNetwork provides “dynamic traffic support” for testing the
network layer services [38]. It comes with “line-rate traffic generation” capability [38].
With IxNetwork installed, one can change frame sizes, packet rate, line rate and layer
2 bit rate using the GUI. It is also possible to generate and design multiple traffic
streams, configure each packet header by modifying the header fields as per the
requirements, and generate “dynamically-updated MPLS and PPP traffic” [38]. The
wizard of IXIA enables the user to start, stop and pause the traffic.
IxNetwork also has the feature of “Multi-field ingress tracking” and “Multi-egress
Tracking”. The ingress tracking allows the user to record flows by making use of the
“user-defined fields” whereas the egress tracking provides a comparison between the
traffic sent and the traffic received. This tracking helps to check the changes made in
the packets when flowing from source to destination and QoS marking [38].
IxNetwork comes with basic, advanced traffic wizard along with the feature of
Quick Flow groups. It is possible to generate 4 million traffic streams and track them,
configure 16,000 distinct flow groups, generate 4,096 hardware streams from each port
and 4 million MPLS labels. It is possible to perform traffic measurements in terms of
loss, rate, latency, jitter, inter-arrival time, sequence, timestamps, TrueView

Convergence, packet loss duration, misdirected packets, late packets, re-ordered
packets, duplicate packets and in-order packets [38]. Statistics can be generated per
IXIA port and per CPU, covering Tx-Rx frame rates, data plane performance per port,
flow-level measurements and a user-defined flow detection mechanism [38]. Some of
the features of IxNetwork traffic generator are given in Table 1.

Feature Specification
Traffic ATM, Ethernet, HDLC, IPv4, IPv6,
MPLS, MPLS VPN, Multicast, VLAN
Port Mapping One-One, many-many, fully meshed
source/destination port mapping
Flow Groups Built based on VLAN ID or QoS
Traffic Profile Supports ARP, Auto Re-ARP.
QoS based on TOS, DSCP, MPLS EXP.
Rate based on line rate, packet rate, layer
2 bit rate.
Frame size can be fixed, increment, IMIX
Payload is increment/decrement byte,
random, can be customized as well.
Packet Error Can inject bad CRC/no CRC
Flow Tracking Tracking of flows based on QoS, VLAN,
source/destination MAC/IP, MPLS label
Flow Filtering and Detection Based on user-defined criteria.
Filtering based on latency, packet loss
Packet Editor Can edit packet header and payload.
Tracking enabling for user-defined flows.
Payload can be fixed, repeating, etc.
Table 1: IxNetwork Traffic Wizard [38]
2.6.3 Automation with IXIA
There is a strong potential to perform test automation using IXIA. The APIs of
IXIA provide all the required modules to successfully automate a test. Automation
with IXIA can be performed by using the GUI containing the “Test Composer” and
“Quick Tests” option or by using the “ScriptGen” module of IXIA.
With Test Composer, one can write a script consisting of the steps needed to
successfully execute a test while with Quick Tests, one can perform testing based on
the industry-defined standards along with custom tests defined by the user. Both of
these features are GUI-based. These two modules are more user friendly, as they do
not require a detailed understanding of IxNetwork’s APIs and their functions. It is
possible to execute multiple test suites in regression and collect the test results
automatically. The test engineers need to learn about the various IXIA commands for
generating traffic, collecting statistics and emulating protocols. The IXIA commands
are given in the form of a sequence of steps with the desired input parameters. On the
other hand, with “ScriptGen”, one can generate a script in TCL, Perl or Python of the
current test configuration which can be re-used and modified to conduct more tests as
per the desired configurations.
ScriptGen is an additional supporting module which is available as a part of the
TCL client installation. It creates a TCL program of the current configuration of the
IXIA ports connected to the network devices and the configured traffic. It is also
possible to create a TCL script for any test script that is opened in TestComposer. The
resulting TCL script is used as a base for creating automated tests. In this thesis, we
reuse the script generated from the “ScriptGen” module from IxNetwork GUI and
develop it further for performing test automation.

3 RELATED WORK
This section describes the work done so far in the areas of Test Automation and
Throughput Calculation.

3.1 Test Automation


The authors of “Impediments for Automated Testing – An Empirical Analysis of a
User Support Discussion Board” [5] consider automated testing as a “corner-stone” of a
software development process. It was described that the testers need to have good
skills while dealing with the testing tools. Test Automation is “tool-dependent”, as it
depends on the test environment, the test execution steps and investigation of
outcomes. As a result, the scripts designed for automated testing tend to become very
large and must be developed carefully. The common automation steps as described by
this paper are: “configuring the system under test, controlling stimuli generators and
monitoring equipment, generating a verdict and sending the results to a database” [5].
It identifies the flaws when the testers work with test automation. The major areas
where the authors found a difficulty during test automation were establishing a system
for automatic testing and using the APIs for automation. The testers require more skill
and support for developing automation scripts by using the provided APIs.
The paper “Automating Test Automation” [8] suggests the steps which can be
followed for automating a given test case. This paper proposes using “natural
language” for first describing the steps for test automation, and then developing
specifics for these described steps in terms of test scripts. In this paper, test automation
is defined as “the task of creating a mechanically interpretable representation of a
manual test case” [8]. This is usually in the form of a “programming or scripting
language” or “a sequence of calls to executable subroutines with the relevant
arguments” [8]. The authors developed a tool, which could take the “natural language”
as input and could generate a mechanically interpretable output. This tool was tested
on web applications. The important point to note from this paper is how an automated
testing framework can be developed using “natural language”.
The authors of “Results of Introducing Component-Level Test Automation and
Test-Driven Development” [22] highlighted the need for “early fault detection” in an
industrial software development process. The concept of “Test Driven Development
(TDD)” was applied at a component level for test automation. A case study was
carried out where the proposed concept was deployed in an industrial environment for
comparing two projects. The results indicated a reduction in the FST, i.e. ‘Faults-Slip-
Through’ and AFC, i.e. Avoidable Fault Cost. A decrease of about 30% was found in
the total cost. The Return on Investment, ROI, describing the benefits of the test
automating methodology was positive. On the whole, the suggested methodology could
bring about 5-6% reduction in the whole project budget.
In [6], the authors develop an approach describing how to identify potential areas
of automation while testing a given system. It is mentioned that not all parts of a test
can be automated and completely replaced with test automation. The testers need to
carefully examine and determine “what to automate” and “which parts should remain
manual” [6]. They divide the testing task into four types, namely “test-case design, test
scripting, test execution and test evaluation” [6]. When performing a test, each of the
above steps can be executed manually or automatically. The authors thus use the terms
“full automation”, “manual work” and “partial automation”. “Return on Investment”
(ROI) was calculated for each of the above described testing tasks. The experiments
showed that test execution had the largest ROI at 675%, followed by test-case design

at 307% and test evaluation at 41%. A higher ROI value for a testing task indicates
that it possesses a greater potential for automation.
The authors of [7] developed an open source automatic testing framework, while
indicating commercial testing frameworks like Spirent, IXIA, Fluke and Endace as
expensive. This framework could generate traffic at a rate of 4x 10 Gbps. Additional
features like packet capture, timestamping with a precision of 6.25 nanoseconds, and
GPS-based synchronization were provided. This framework was implemented by using
NetFPGA-10G. The OSNT architecture consists of a traffic generator with 10 Gigabit
Ethernet interfaces, a traffic monitoring module for capturing packets, a module for
high precision timestamping and a feature to provide a scalable system. This paper
focuses on the need for “high-precision traffic characterization” [7] which is usually
provided by Endace DAG cards. It can generate TCP, UDP, IP, BGP traffic and
various other standard protocols at line rate. It is possible to edit packet header fields,
perform “per-packet traffic shaping” and acquire “per-flow” statistics. This also
provides the feature of packet filtering and testing with deformed packets, which can
be configured as per the test case. This framework is intended for testing in education
and research fields, whereas for industrial testing, using IXIA is more feasible.
The testing staff are posed with a challenge to reduce the cost and time for
hardware and software testing and at the same time maintain the quality and accuracy
of a test [20]. The testing process usually involves testing a product or testing a “multi-
faceted network”. The testing personnel thus need to imitate the real-world scenario
and evaluate the performance. A method is therefore needed for developing,
configuring and managing a test automation framework. The various steps which can
be followed to develop a test automation framework are depicted in [20]. Some of the
notable steps include specifying the test item and the associated network technology,
specifying the client and the test server, and the test variables. Once all of the above are defined,
one can schedule and run the test, look for possible errors, handle these errors and
store the results for further analysis. An advanced approach to provide an end-to-end
solution in test automation using various modules is given in [24]. The method
involves selecting one or more test scripts based on a particular network service by one
or more users, selecting a suitable network topology, scheduling, executing the test and
storing and analyzing the generated log report. The code for the selected test scripts is
generated from predefined libraries present in one or more external device libraries.
The developed method also has a feature for simultaneously executing numerous tests
and alerting the users about errors. For an end-to-end solution, all these tests are stored in
an execution server and can be configured based on user inputs for testing voice, video
and data services in a communication network.

3.2 Throughput Calculation


In “Application of the Stochastic Fluid Flow Model for Bottleneck Identification
and Classification” [39], the authors presented an analytical method for identifying the
performance bottlenecks in a network, in terms of throughput. The method is based on
the “stochastic fluid flow” which gives the average bit rate values for various traffic
streams. A numerical example was given where a bottleneck of capacity C Mbps was
subjected to variable and constant bit rate sources. The aggregated bit rate distribution
at the output of the bottleneck was determined for varying capacities and buffer sizes.
Also, a comparison of the bit rate statistics between the constant bit stream and one of
the variable bit streams was presented. The comparison showed a huge resemblance in
their bit rate characteristics. In both the cases, infinite buffer and bufferless scenarios
were considered.
The authors of “Identification of Performance Degradation in IP Networks Using
Throughput Statistics” [10] make use of histograms to express the performance of a
video conferencing application. Such indication provides feasible comparison and

identification of core performance issues of advanced IP network applications. A
passive measurement set-up for observation of packet streams is also provided in this
paper. It also presents a method for calculating the throughput of an application on
smaller timescales and is an extension of the theoretical concepts of the “Fluid Flow
Model” suggested in the above paper. For supporting the idea, a video conference
based on H.323 over UDP/IP was conducted between Wurzburg, Germany and
Karlskrona, Sweden through European Research Networks. A bottleneck between the
two links was introduced by sending additional UDP packet streams. When a
disturbing stream of 8 Mbps was introduced, disturbances in the video occurred, and a
disturbance of 10 Mbps broke down the video conference session completely. For performance
analysis, a timing window of 1 minute with 100 milliseconds of resolution was chosen.
Histogram difference plots for both video and voice were plotted. In the case of video
streaming, huge differences were observed in throughput statistics indicating potential
bottlenecks in the network. The possible reasons for these deviations were the “jitter”
introduced by the network or due to the additional UDP stream of 8 Mbps. At the same
time, the QoS reduced below the desired levels when this additional disturbance of
UDP streams was introduced. On the whole, this paper highlights the use of “passive
measurement” and “throughput histogram difference plots” for alerting both end users
and network operators about hidden performance issues on smaller timescales.
In “A Performance Evaluation Metric for NFV Elements on Multiple Timescales”
[9], the above ideas were extended to a virtualized environment. The authors proposed
a performance metric, independent of the virtualization technology which expresses
the performance in terms of throughput on multiple timescales. Their evaluation
method could successfully express the “transparency” and “degree of isolation” of a
virtual environment. A proof-of-concept of their suggested methodology was given,
where the performance of a XEN virtual router was observed on multiple timescales.
The metric expresses the coefficient of throughput variation by considering the inter-
packet time for each traffic flow. At first, a comparison was performed between the
coefficient of throughput variation and the suggested metric by considering a scenario
of four virtual routers and a round-robin input packet stream. Next, a comparison was
performed when the virtual router was deployed on hypervisors, Xen and VirtualBox.
Later, a live demo was conducted where the impact of one traffic flow on the other
was identified when simultaneous traffic flows were being sent through a router. The
experiment used a capturing duration of 25 seconds and a jumping window of duration
one second. A huge variation in throughput was observed on smaller time scales which
highlighted unfair sharing of resources and change in packet order from the ingress to
the egress, thus, facilitating the identification of performance decrease in a virtual
environment. This paper received an award at Globecom 2013.
Through the thesis, “Analysis of Resource Isolation and Resource Management in
Network Virtualization” [40], the author further strengthened the above research. The
coefficient of variation was calculated as the difference of CoV at the egress and
ingress for two experiments. The experiments involved sending N traffic streams, from
N sources to N destinations through a single physical system with a virtual bridge,
running N virtual machines. A passive measurement infrastructure equipped with
DAG cards, known as DPMI [41] was used for capturing these N traffic streams.
Initially, when only one VM was used, the effect of the hypervisor on performance
was not significantly visible. But, the addition of one more VM and additional UDP
traffic of 5 Mbps showed significant variation in the throughput at timescales of
0.0025 and 0.005 in the form of histograms. On the whole, this research highlighted the
identification of potential bottlenecks using the above suggested methodology and the
need for analyzing the various dependencies in a virtual system and improving the
scheduling mechanisms in such systems.

4 METHODOLOGY
As mentioned before, the aim of this thesis is to validate the methodology
suggested for evaluating NFV elements by applying it to the performance evaluation of a
physical router, and to propose a test automation framework for testing the same. This
section describes the methodologies followed for performing test automation with
IXIA, calculation of Return on Investment (ROI) which highlights the benefit of
using test automation in the telecommunications industry and calculating
throughput on multiple timescales. A detailed literature review is performed in all
the three cases before starting the experiments and performing the calculations.
The methodology for automating the given test with IXIA uses the high-level
HLT API for automatically configuring the ports on a chassis and starting and
stopping the traffic. It deals with the APIs used for developing the TCL script and with
reusing the TCL script generated by the ScriptGen module of IXIA. The ROI is calculated
based on the formula given in [6] as reduction in execution time due to automation.
Also, a linear ROI curve is proposed to extend the idea of ROI calculation for “N”
use cases.
The methodology adopted to calculate the throughput when sending multiple
traffic streams is based on [9]; histograms are used to display the performed
statistical analysis and to compare the performance when simultaneous traffic streams
are sent [10]. The throughput is calculated by subjecting the router to
different traffic loads and comparing the epoch times of the packets at the ingress
and egress of the router. The scripts provided in [40] are used as a reference and
new script is developed further to perform statistical analysis. The throughput is
calculated at the network layer, using a passive measurement set-up and adopting a
jumping window approach.

4.1 Automation Method


4.1.1 Test Automation using IXIA
IXIA comes in the form of a client-server package. One needs to have an IXIA
account to download the IXIA client software. The industry purchases IXIA software
and hardware for testing its network equipment. The hardware consists of a chassis,
which in turn consists of IXIA ports and a TCL Server. The software is a set of APIs
for interacting with the IXIA chassis and ports. The IXIA ports are connected to the
system under test for simulating the real-time end-to-end enterprise traffic.
The IXIA client software can be installed on both Windows and Linux platforms.
When installed on a Windows platform, it comes with
an IxNetwork GUI, where the user manually configures each node (or port) by
assigning an IP address, default gateway and traffic item. The user needs to manually start
and stop the traffic and release the ports after the test is complete. The GUI comes with
TestComposer and ScriptGen options (mentioned previously in Section 2.6.3).
IXIA client connects to an IxNetwork TCL Server for configuring the ports on a
chassis. On Windows platform, this server is installed on the client machine, making
the testing process smoother. On the other hand, when IxNetwork client software is
installed on a Linux machine, there is no GUI for selecting the ports and configuring
the traffic. The user needs to run the TCL script in command line for automatically
configuring the ports, traffic items, starting, sending and stopping the traffic. Unlike
the Windows client, for Linux client, the IxNetwork TCL Server is not installed on the
client machine. The client needs to connect to an external IxNetwork TCL Server.
Once the client is connected to the IxNetwork TCL Server, it can communicate
with the IXIA TCL Server. The IXIA TCL Server is the major module which

communicates with the ports of the chassis. The chassis and the IXIA TCL Server
have the same IP address. The commands in the TCL script running on the client
machine are run on these servers for automatically configuring IXIA for a given test.
IXIA has a powerful set of APIs (Application Programming Interfaces) which are used
for communicating with the IxNetwork TCL Server and IXIA TCL Server. There are
mainly two APIs which can be used for automating a given test, namely, IxOS and
HLT API. With IxOS, one can directly communicate with the IXIA TCL Server,
without the need of IxNetwork TCL Server through IxTclHAL commands. On the
other hand, the HLT API has Perl, Python and TCL APIs to communicate with the
IxNetwork TCL Server. Each API (IxOS, HLT API) has a different set of commands
for communicating with the IXIA TCL server on the chassis. It is possible to schedule
tests and store the results automatically using the IXIA TCL Server, so that a system
can be tested for a longer duration without human intervention. Figure 5 describes the
various modules of IXIA communicating with each other.
In this thesis, we install the IXIA client software on a Linux platform and use the
HLT (TCL) API for performing test automation. This is because the same Linux client
is used for automatically capturing packets required for throughput calculation, and
configuring the routers and switches through command line. The HLT API is used
because the configuration script generated by the IxNetwork ScriptGen GUI uses this
API for communicating with IXIA. Thus, the IxNetwork client software is installed on
the Linux platform and an external IxNetwork TCL server is used. The basic
configuration script was first generated using the GUI of IxNetwork on the Windows
platform. This script was modified and developed further according to the given test
and executed in the HLT API console in the Linux platform.

Figure 5: IXIA Client-Server Relationship

The various commands of the HLT API that were used for developing the major
backend of the TCL script are described in Appendix A(1). It is to be noted that the
entire script is not published because IXIA is not open source software. Also, the
versions of the IXIA client software, the IxNetwork TCL Server and the IXIA TCL Server must
be the same due to compatibility issues. If the IXIA versions are different, one receives
an error of version mismatch and testing is stopped.
4.1.2 Return on Investment
The methodology used for highlighting the benefit of test automation in the
telecommunications industry is based on [6]. The Return on Investment is calculated
based on the formula given in this paper, given below. A common practice is to
calculate ROI for software testing processes, but not for test cases in the
telecommunications industry. An effort is being made to address this issue in this
thesis, along with providing a proof-of-concept for the throughput calculation
methodology.

The testing department uses IXIA to perform testing of the System under Test
(SuT). This system can either be a network of devices like routers, switches or an
individual device (networking equipment), Device under Test (DuT). This test bed is
simultaneously accessed by various users across the globe. Hence, there is a constraint
on time to execute a particular test. Thus, the testing team has to execute a given test in
a given time frame, thereby highlighting the need for automatically executing the tests.
As per [6], there are four major testing tasks, namely, “Test-case design”, “Test-
scripting”, “Test-execution”, “Test-evaluation”. As the primary goal of this thesis is to
automate the test case for throughput calculation and show the advantage of
automation in terms of ROI, compared to manual testing, the “Test Automation
Decision Matrix” (TADM) is used. It should be noted that the number of use cases is
one, as we are automating a single test. The value of 0 indicates that the given test
phase is manual, and the value of 1 indicates that it is automated. The value for test
design is set to 0, as we have considered only one test case for automation, i.e.
throughput calculation, and the test cases were also designed manually. Test scripting
is set to 0, as IXIA’s ScriptGen module was used to generate the TCL script of the
current configuration and was developed further to include additional requirements as
per the present test case. IXIA generates manual scripts which can be reused for
further testing. The value for test execution is 1, as the developed test script is
executed automatically to configure IXIA ports, traffic items, start and stop traffic and
calculate the throughput on multiple timescales. Test evaluation is set to 0, as the user
makes the decision about the final result of execution of the test. The cost and benefit
for manual and automatic testing for the given use case are calculated in hours (time),
and the total ROI is expressed in the form of a percentage. Also, even though there are
various phases in testing, the focus is made on test execution, as the remaining test
phases were difficult to evaluate and express for the given test case. The ROI was
calculated manually (only one use case was considered), unlike the program
Automated-Testing Decision Support System which was developed using Java in [6].
4.1.3 Test Automation Cost Calculation
The formula for calculating the ROI in terms of reduction in execution time has
been described in the above section, namely Section 4.1.2. Since calculating ROI for a
single use case is not sufficient, the obtained reduction in execution times are used to
propose an ROI curve to extend the developed test automation framework for “N” use

cases and describe the effect of the number of use cases on ROI. In a general management
scenario, Return on Investment is expressed as:

ROI = (Gain from Investment − Cost of Investment) / Cost of Investment
Gain from Investment is the financial gain obtained from the initial investment
cost. In the current thesis scenario of test automation, we define the gain from
investment as the reduction in the cost of doing the tests, i.e. the benefit (profit)
obtained by using the developed test automation tool. Cost of investment is the cost
invested initially, at the beginning, for developing a new product. In this thesis, we
define the cost of investment as the overhead work required for setting up the developed
test automation tool. In both cases, cost is usually expressed in units of money in an
actual business scenario. We express money in terms of manpower (or man hours) or
execution time in this thesis.
For both non-automated and automated testing, the time required to configure the
test set-up is the upper bound. Let “N” be the number of tests (or the number of use
cases). We assume different variables to describe the times involved for performing
the various testing tasks. The following variables are used in the equations below:

P – time taken to set up the use case (test-bed and test configuration) in a non-automated test environment
Q – time taken to execute the use case once in a non-automated test environment
C – time taken to configure the developed test automation tool for the use case
S – time taken to execute the use case once with the test automation tool
R – cost of investment, i.e. the time invested in configuring the test automation tool (R = C)
A) For conducting “N” non-automated tests, the cost, A, is calculated as:

A = N · (P + Q)

B) For conducting “N” automated tests, the cost, B, is given as:

B = P + N · S
In both cases, we propose a standard unit to express the costs involved in testing
and describe it as Money Units (MU) per hour:

The Gain from Investment is calculated in this thesis as the difference between the test
costs involved in performing the non-automated tests and the automated tests. As
mentioned above, the gain from investment is therefore expressed as the difference
between the variables A and B:

Gain from Investment = A − B
The Cost of Investment is denoted by the variable R, which expresses the time taken to
configure the automated test tool. This cost is the initial investment cost for developing
the test automation framework:

Cost of Investment = R = C
In this thesis, “P” is the time taken to set-up the use case in a non-automated test
environment. In real time, this was found to be 16 hours. The values for other variables
are R = C = 4 hours, Q = 0.25 hours and S = 0.1 hours.

Thus, the cost for conducting “N” non-automated tests is given by A as:

A = N · (P + Q) = N · (16 + 0.25) = 16.25 · N

Similarly, the cost for conducting “N” automated tests, B, is given as:

B = P + N · S = 16 + 0.1 · N

Hence, the ROI for automating “N” tests (or use cases) can be expressed by the
following equation:

ROI = (Gain from Investment − Cost of Investment) / Cost of Investment = (A − B − R) / R

Substituting the above numeric values, the above equation becomes:

ROI = (16.25 · N − (16 + 0.1 · N) − 4) / 4 = (16.15 · N − 20) / 4
Finally, we obtain a linear equation to express the benefit of test automation in
terms of ROI as:

ROI(N) = 4.0375 · N − 5
A graph is drawn by varying the value of “N” to illustrate this linear relationship; the
resulting line is shown in Figure 6 below. The graph shows the effect of the number of
use cases on the ROI. The line has a positive slope and a negative intercept, so the ROI
is negative for a single use case and becomes positive as the value of “N” (number of
tests or use cases) increases.

(Plot: the line y = 4.0375·N − 5, with N, the number of use cases, on the x-axis and the Return on Investment on the y-axis.)

Figure 6: Obtained Linear Curve for ROI
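
To make the arithmetic concrete, the short Python snippet below (illustrative only)
evaluates the ROI expression derived above, using the cost values assumed in this
section; it reproduces the line plotted in Figure 6 and shows that the ROI turns positive
from the second use case onwards.

# Illustrative evaluation of the linear ROI model derived above (all times in hours).
P = 16.0   # set-up time of the use case in a non-automated environment
Q = 0.25   # execution time of one non-automated test
C = 4.0    # time to configure the automated test tool
S = 0.1    # execution time of one automated test
R = C      # cost of investment

def roi(n_tests):
    cost_manual = n_tests * (P + Q)       # A: each non-automated run repeats the set-up
    cost_automated = P + n_tests * S      # B: set-up once, then automated executions
    gain = cost_manual - cost_automated
    return (gain - R) / R                 # equals 4.0375 * n_tests - 5

for n in (1, 2, 5, 10):
    print("N = {:2d}: ROI = {:.3f}".format(n, roi(n)))
# N = 1 gives a slightly negative ROI; from N = 2 onward the ROI is positive.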

4.2 Throughput Evaluation


The throughput is calculated on multiple timescales based on the following
methodology:
The performance metric used for calculating the throughput is the Coefficient of
Variation, CoV. The input for this metric is the inter-packet time [9]. The inter-packet
time, T_i, at the ingress and the egress is expressed as:

T_i^in/out = t_i^in/out − t_(i−1)^in/out

where t_i^in and t_i^out denote the observation times of the i-th packet at the ingress
and the egress, respectively.
The processing time is denoted by c_p. In the ideal case, this value is zero; in
practical scenarios, it is assumed to be constant. The output event time, t_i^out, is
delayed by c_p, i.e. t_i^out = t_i^in + c_p. Thus, in an isolated environment, the events
T_i^in and T_i^out are related as:

T_i^out = T_i^in
X^in/out is a random variable which denotes the amount of processed events at the
ingress and egress up to a particular time instant, say k·Δ. The variable R_(k,Δ)^in/out
denotes the ratio of the difference between the values X_(k,Δ)^in/out and
X_(k−1,Δ)^in/out and the interval duration Δ. It is given by the equation below:

R_(k,Δ)^in/out = (X_(k,Δ)^in/out − X_(k−1,Δ)^in/out) / Δ
The coefficient of variation, CoV, for this random variable is given by the
equation below:

CoV = σ(R_(k,Δ)) / E(R_(k,Δ))
The difference between the CoV values at the egress and the ingress expresses
the degree of isolation, in this scenario traffic isolation:

ΔCoV = CoV^out − CoV^in
Let n be the number of intervals and Δ the duration of each interval; the total
window size, Z, is then given by the product of these two variables as:

Z = n · Δ
The above described method will be used to answer the research questions.
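
A minimal sketch of this metric in Python is given below; it assumes that the number
of bits observed in each interval of the jumping window has already been extracted from
the ingress and egress traces (Section 4.4 describes how this extraction is done in this
thesis).

# Minimal sketch: per-interval rate R_k over a jumping window of n intervals of
# duration delta (seconds), and the coefficient of variation of these rates.
from math import sqrt

def cov_of_rates(bits_per_interval, delta):
    rates = [b / delta for b in bits_per_interval]        # R_k in bit/s
    n = len(rates)
    mean = sum(rates) / n                                  # E(R)
    variance = sum((r - mean) ** 2 for r in rates) / n     # Var(R)
    return sqrt(variance) / mean                           # CoV

# Degree of (traffic) isolation for one flow:
#   delta_cov = cov_of_rates(bits_out, delta) - cov_of_rates(bits_in, delta)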

4.3 Anticipated Behavior for DuT


This section provides a detailed explanation for the need of evaluating
throughput on multiple timescales, especially smaller timescales based on the
assumed hypothesis. The primary assumed hypothesis is that in a system
forwarding packets from multiple packet streams, there is always an impact of one
traffic stream over the other which affects the overall scheduling of the packets.
There is a change in the order of the packets due to the scheduling
mechanism of the system. This behavior is indicated mathematically by the
difference in the Coefficient of Variation (CoV) at the egress and the ingress of the
system. The system assumed here is a physical router, which is the DuT.
Thus, the DuT (system) is said to have optimal performance when there is
minimal change in the packet order of the traffic streams when they are flowing
from the ingress to the egress of the router. This means that when there are
simultaneous traffic streams arriving to the router to be forwarded at the same time,
the scheduling and forwarding mechanism is fast enough, so that the packets from
these streams are forwarded independently of each other, and they arrive at their
respective destinations in well-defined time frames. As previously mentioned, the
performance is expressed in terms of variation in the statistical behavior of the
packets at the egress and ingress of the DuT. If the difference in the CoV values at
the egress and ingress is small, the DuT has good performance. Otherwise, if there
is a huge variation in the CoV values at the egress and ingress, it means that
multiple traffic streams are affecting each other and the forwarding (or scheduling)
mechanism of the DuT needs to be improved further.

A scenario indicating a change in the packet order in a virtualized
environment is depicted in Figure 7:

Figure 7: Unequal Resource Sharing on Smaller Timescales [9]

As seen in the figure, there is a change in the order of packets (events) at the
egress of the router. The reason for this is the scheduling mechanism in the
hypervisor, which affects the packet order, thereby leading to unfair resource
sharing on smaller timescales. It should be noted that even though we see equal
sharing of resources on larger timescales, this is not true on smaller timescales in a
virtualized environment and is often not noticed. This change is expressed
mathematically by using the variables R_(k,Δ)^in/out and Δ described above as
coefficient of throughput variation. This idea of change in order of packets in a
virtualized environment is extended to a physical environment for testing the
performance of physical routers on multiple timescales.

4.4 Data Extraction


This section describes how the data (or packet information) was extracted from the
obtained log files of the Wireshark Server to calculate the throughput on multiple
timescales. It should be noted that there are two log files, one at ingress and other at
the egress of the router (Device under Test), where each log file contains the packet
information like Packet Number, Epoch Time, Frame Size, Source and Destination IP
address.

Figure 8: Traffic Flow for Simultaneous Streams

The three traffic streams flow simultaneously for a duration of 15 seconds. The
total duration for sending the traffic is 25 seconds, with each traffic stream starting at a
gap of 5 seconds. To state more clearly, the first traffic stream starts at the beginning,
say at time 0. After the first traffic stream flows for a duration of 5 seconds, the second
traffic stream is started, at time 5. Now, the two traffic streams are flowing
simultaneously. When the two traffic streams flow simultaneously for a duration of 5
seconds, the third traffic stream is started at time 10. The three traffic streams flow
simultaneously for a duration of 15 seconds. The three simultaneous flowing traffic
streams are stopped after a duration of 15 seconds, at time 25. It should be noted that
the time of flow for each traffic stream is different, like traffic stream 1 flows for the
entire duration of 25 seconds, traffic stream 2 flows for 20 seconds and traffic stream 3
flows for 15 seconds. The duration of the simultaneous traffic flow of the three streams
is of importance for this thesis work. The sending of multiple traffic streams can be
understood in detail in Section 5.5, Experimental Procedure and from Figure 8
described above.
In Figure 8, the packets belonging to each stream are specified with a different
color, namely blue, magenta and green respectively. Only the packets belonging to
stream 1 (blue color) flow for a duration of 5 seconds at the very beginning. After they
flow for a duration of 5 seconds, stream 2 is started simultaneously (magenta color).
The two streams flow for a duration of 5 seconds. Then, stream 3 is started (green
color). Thus, packets of stream 1 flow for the entire duration of 25 seconds, stream 2
for 20 seconds and stream 3 for 15 seconds. Also, the three streams flow
simultaneously for a duration of 15 seconds. The aim here is to calculate throughput
for a duration of 1 second and on timescales less than 1 second, like 1 millisecond.
When the user wants to calculate the throughput on a timescale of 1 second, let us
consider the window of duration of 5 seconds, when all the three streams are flowing
simultaneously and calculate the throughput. This window of 5 seconds duration is
divided into smaller intervals, with each interval having the size of 1 second and the
throughput is calculated per second. This scenario is depicted in Figure 9.

Figure 9: Calculation of Throughput for 1 second

As seen in the above figure, an ICMP packet, in the form of a ping request was
sent before starting to send the traffic streams. This was done to identify the starting of
IPv4 traffic (or the traffic streams). This starting packet is later discarded once the
required traffic streams are identified and is not used for throughput calculation. Next,
the starting IP packet marking the beginning of simultaneous flow of all three traffic
streams is also identified. In the above figure, each different color of the packet – blue,
magenta, green indicates a packet belonging to a particular traffic stream.

Theoretically, it is assumed that the Layer 2 switch in the test bed forwards the
packets of the three traffic streams in a round-robin fashion. It is not a problem if this
does not occur in practice; the round-robin pattern is shown only for illustration. All the traffic streams are
flowing on a physical link with 10 Gbps link capacity. The user can also consider a
window size of duration other than 5 seconds, like 15 seconds for calculating
throughput per second. The value of 5 seconds is shown here for explanation purpose.
Similarly, when a user wishes to calculate the throughput for a timescale less than
one second, say 1 millisecond, 0.1 millisecond or less, we consider a window of
duration 1 second. This window of 1 second duration is divided into smaller intervals
where each interval has a size of say 1 millisecond (or 0.1 millisecond). It should be
noted here that when the timescale value is 1 millisecond, for a window of 1 second,
the number of intervals becomes 1000. Thus, the throughput is calculated for each
interval of duration 1 millisecond as depicted in Figure 10.

Figure 10: Calculation of Throughput for 1 millisecond and less

Furthermore, a timescale of 0.1 millisecond leads to 10,000 intervals when the window
size is one second. It should be noted that the packets which do not belong to the given
window duration are rejected, i.e. those which fall outside of the 5 second (or 1 second)
window being considered for analysis. Another important aspect to which one must pay
attention during this multiple timescale analysis is the case when the packet lies in
between two intervals, as shown in Figure
11. In such a scenario, the packet is not rejected and the portion of the packet
belonging to each interval is calculated.
As shown in Figure 11, the interval start time (and the window start time) is the
epoch time of the very first packet which is identified when all the three traffic streams
have started to flow simultaneously. Once this packet is identified, the considered time
window (of 5 sec or 1 sec) is divided into smaller intervals, depending on the timescale
value (1 sec, 1 msec). The interval end time is calculated as the sum of the epoch time
of the starting packet and the timescale value. This is how the given time window is
divided into smaller intervals. At the same time, the epoch time of each packet is taken
as the packet start time, and the packet end time is calculated as the sum of the packet
start time and the time the packet takes to travel on the 10 Gbps link (line speed), i.e.
packet end time = packet start time + (packet size / line speed). A packet belongs to a
particular interval if its start time is greater than or equal to the interval start time and,
at the same time, its end time is less than or equal to the interval end time. There can be
scenarios where a packet lies in between two intervals when dividing the window into
smaller timescales. A packet is said to lie in between two intervals when its start time is
greater than the interval start time and its end time is greater

than the end time of the current interval being considered for throughput calculation.
But, its start time is less than the interval end time of the current interval.

Figure 11: Packet Location in Different Scenarios

Finally, the packet belongs to the next interval, if all the above conditions are not
satisfied. For the next interval, the interval end time of the previous interval becomes
the interval start time, and so on. This method is followed to analyze the packet data
obtained from log files of the Wireshark server. The above extraction method is
implemented in the form of a Perl script to calculate the throughput on multiple
timescales. The script is given in Appendix A(3). The user needs to give the input of
the larger window size, like 5 seconds or 1 second and the smaller window size which
depicts the value for calculating the throughput on smaller timescales in the Perl script.

4.5 Test-Bed Set-Up


The test-bed for conducting the experiment is shown in Figure 12. The goal is
to send three simultaneous traffic streams, started with a gap of 5 seconds between them.
A more detailed explanation is given in the above section, Section 4.4, regarding traffic
generation. As mentioned before, IXIA is the traffic generator used for generating
IPv4 traffic in this experiment. Three ports of IXIA are used, which act as three
different sources of traffic. These three IXIA ports are present on the IXIA chassis,
which is connected to the IxNetwork TCL server, depicted in Figure 5. The traffic
from these three sources is forwarded to a Layer 2 Switch. The aim of using a layer
2 switch is to generate a round robin stream of packets from the three different
traffic sources. The need for a round robin packet stream is not a necessity, and it is
feasible to perform the test even if round robin stream of packets is not achieved.
Since the aim is to capture the traffic from these three streams at the ingress and
egress of the routers, Network Taps are used. The network taps are optical in
nature. It can be seen that the traffic first arrives to the Layer 1 Switch, and then it
passes through the network taps. It is worth mentioning that the Layer 1 Switch is
an Optical Cross Bar Switch, such that it adds zero jitter to the traffic streams. The
use of Layer 1 switch is necessary because it is the only way to move taps in the
test-bed from one place to another, as the entire experiment was conducted
remotely. The traffic then passes through the router, which is the Device under
Test (DuT). As seen above, two network taps are used to capture the traffic at the
ingress and the egress of the router. The traffic from the network taps is directed to
the Wireshark Server, which stores the information about each packet passing

through the router. Thus, we use a passive measurement system for storing the
packet information. The traffic from all the three IXIA ports is directed to only one
IXIA port at the receiver end, i.e., only one destination port. It is due to the lack of
resources at the laboratory that the experiment was conducted under limited
conditions. A detailed information about the specifications of the devices used in
this experiment is given in Section 5. It is worth mentioning that each of the three
IXIA ports at the source side has a maximum physical link capacity of 1 Gbps and
they are connected to the 1 Gbps optical cables. The IXIA port at the destination
has a maximum physical link capacity of 10 Gbps and so it is connected to the 10
Gbps optical cable.
It should be noted that the cables used in the entire test-bed are optical in
nature. The physical link capacity between the three different traffic sources (or
IXIA ports) and the Layer 2 switch is 1 Gbps. The physical link capacity in the rest
of the test-bed, i.e. between layer 2 switch and optical layer 1 switch; between
network taps and Wireshark server; between the router and layer 1 switch and the
destination is 10 Gbps. The entire test is controlled by the Linux client on which
the IXIA client software. It is also through the same Linux client that the various
network elements are configured in the test-bed like Layer 2 Switch, Optical Layer
1 Switch and the router.

Figure 12: Test-Bed Set-Up

5 IMPLEMENTATION AND EXPERIMENT
This section gives an overview of the physical environment of the test bed which
was used for conducting the given experiment. It describes some of the general
specifications of network equipment used for performing the given test, like Layer 2
switch, DuT, Layer 1 switch and network taps. It also describes IXIA software version
which was used for developing TCL scripts for test automation, the characteristics of
the physical cables interconnecting the devices and the packet capture set-up. The
steps for successfully conducting the test, crucial details of the experiment and
potential areas of automation for the given test case are mentioned as well.

5.1 IXIA Specifications


As mentioned previously, IxNetwork was used for performing test automation.
The IXIA client software was installed on an Ubuntu 14.04 LTS Client. The release
version at the client and the IXIA servers is “IxNetwork 7.40 GA”. The software consists of
various packages, and each of the following packages needs to be correctly installed to
successfully run the scripts. Using different versions at the IXIA client and server results
in compatibility warnings and error messages. More specifically, the versions for each
package are described in Table 2:

Software Version
IxNetwork 7.40 GA (7.40.929.28)
IxOS 6.80 GA (6.80.110.12)
HLT API 4.95 GA (4.95.117.44)
TCL Interpreter 8.5
Table 2: IXIA Specifications

5.2 DuT Specifications


The DuT is a physical router, supporting both IP and MPLS forwarding. It should
be noted that no special configurations were made to the router, only IP addresses and
default gateways were assigned. All the remaining configurations were kept default.
Some of the general specifications are described in Table 3:

Feature Value
Port Configuration Up to 2x 10GE ports, Dedicated SFP,
RJ45 Ports
Performance Management IP Performance Monitoring
Ping and Traceroute, BFD
IPv4 Protocols OSPF, IS-IS, BGP, LDP, RIP, VRF
MPLS Protocols L3VPN, L2VPN, RSVP
Layer 2 Properties IEEE 802.1 Q Virtual LAN
Network Management SNMP, RADIUS, RMON, TACACS+
QoS Policing up to 1Gbps, WRED queuing,
Scheduling is combination of Strict
Priority or/and Deficit Round Robin
Operating Environment Operating Temperature: -40C to 65C
Humidity: 0-95% Non-condensing
Power: -48 VDC, 110-240 AC
Table 3: DuT Specifications

5.3 Other Specifications
This section describes some of the general characteristics of the Layer 2 Switch
which was used in the test-bed and the properties of the physical links interconnecting
the various network devices.
5.3.1 Layer 2 Switch
The sole function of the layer 2 switch which is used in this test bed is to perform
forwarding of the multiple packet streams from the source, i.e. IXIA ports to the
destination IXIA port through the DuT. This switch has both Layer 2 and Layer 3
Gigabit Ethernet switching capability and it supports stacked VLAN and MPLS
services. Some of the general specifications of this switch are given in Table 4.

Feature Value
Power Supply Dual power inlet
Nominal input voltage: -48 V
Ports Configuration GE, 10 GE Interface ports supporting
SFP, DWDM
Management SSHv2c, TACACS+
SNMP v2/v3
Secure FTP
RADIUS authentication
HTTP for management via web interface
QoS Strict Scheduling, WRR, DRR
DiffServ precedence
Policy-based routing
Protocols EAPS, STP, RSTP, MSTP, RIP, OSPF,
VRRP, VMAN, multicast routing
Operating Temperature -5C to 85C
Capacity 32K MAC addresses, Jumbo frames
12K Layer 3 route table size
32 Virtual Router
Table 4: Layer 2 Switch Specifications
5.3.2 Physical Link Characteristics
This subsection describes the capacity of the cables used for interconnecting the
various network devices in the test-bed. It is important as line rate (line speed/link
speed) is one of the parameters which was considered for calculating throughput. All
the cables used for interconnecting the devices are optical cables. Table 5 describes the
source-destination pair and the capacity of the physical link interconnecting them.

Source Destination Capacity


IXIA Layer 2 switch 1 Gbps
(3 source ports) (3x 1Gbps i.e. 3 cables, each 1Gbps)
Layer 2 switch Layer 1 switch 10 Gbps
Layer 1 switch DuT 10 Gbps
DuT Layer 1 switch 10 Gbps
Layer 1 switch IXIA (1 destination port) 10 Gbps
Layer 1 switch Network Taps 10 Gbps
Network Taps Wireshark Server 10 Gbps
Table 5: Physical Link Characteristics

5.4 Packet Capture
As mentioned previously in Section 4.5, a passive measurement system is used to
capture the packets at the ingress and egress of the router. The packet capture set-up
consists of a Layer 1 Optical Switch, Optical Network Taps and a Wireshark Server.
The traffic at the ingress and egress flows from the Layer 1 switch, through the
network taps to finally reach the capturing interfaces of the Wireshark Server.
The Layer 1 Switch, more specifically, is an “S Series Optical Circuit Switch” [42]
developed by Calient. This switch is capable of providing interconnectivity in
networks with speeds ranging from 10 Gbps to 100 Gbps, and more. The switching
module is based on MEMS technology, i.e. Micro-Electrical-Mechanical System. It
has a GUI for configuring the various interfaces of the switch. At the same time, it also
supports TL1 commands and SSH, where the user can manually log into this switch
and execute the configuration commands. The general specifications of this switch are
described in Table 6:

Feature Value
Power Supply 12V DC, 24V DC,
-48V dual redundant power options
Ports Configuration 320 Ports (Tx/Rx Pairs)
Power Dissipation Less than 45 Watts
Temperature -5C to 50C (Operating)
-40 to 70C (Non-Operating)
Loss Maximum Insertion Loss is 3 dB
Features GUI-driven, EMS-Ready, supports TL1,
SNMP, CORBA and OpenFlow
Table 6: Optical Switch Characteristics

The network taps which were used were “All-Optical Taps” manufactured by VSS
Monitoring [43]. The optical split ratio was 70:30, which means that on reaching a
network tap, 70% of the light energy continues on the network path (towards the DuT or
the destination port), and the remaining 30% of the light is sent to the Wireshark Server,
where packets are captured at the respective interfaces. The split ratio is determined by
the mechanism described by the company which manufactures the taps: the optical loss
in the system is calculated so as to determine the allowable split ratio of a new tap
installation, which was found to be 70:30 for the selected network taps. This is necessary
because an optical signal degrades as it propagates through the network; the attenuation
depends on network components like switches, fibre cables and splitters, as well as on
the wavelength of the optical signal being used.
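
As a rough worked example (ignoring the excess loss of the tap itself), the nominal
attenuation corresponding to a 70:30 power split follows directly from the split ratio:

L_network = −10 · log10(0.70) ≈ 1.5 dB on the network leg, and
L_monitor = −10 · log10(0.30) ≈ 5.2 dB on the monitor leg.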
As shown in Figure 13 below, the traffic is flowing from the DuT to the
destination IXIA port, with a set-up of Layer 1 switch and network tap in between. The
tap has four ports, namely, NetA, NetB, MonA, MonB. The ports NetA, NetB are
connected to the Layer 1 switch ports and ports MonA, MonB are connected to the two
Wireshark interfaces. In our test case, we consider only unidirectional traffic flow, i.e.
from the DuT to IXIA and not vice versa. Thus, the relevant ports are NetA, MonA and
p2p1. The traffic reaches the source port of Layer 1 switch (2.3.4) and from there it is
directed to the optical network tap at port NetA. As mentioned above, the tap is
configured at 70:30 optical split ratio. The 70% of light reaches NetB port and from
there it is directed to the port of Layer 1 switch (2.3.8) and finally the destination IXIA
port. The remaining 30% of light reaches port MonA of the network tap and finally to
the port p2p1 of the Wireshark server. In brief, traffic being received at port NetA of
the network tap is split for transmission on port NetB and port MonA.

Figure 13: Splitting at Network Tap

5.5 Experimental Procedure


The following sequence of steps were followed to conduct the test manually:
1. Configure the Router (Device under test) by assigning the IP addresses to its
interfaces.
2. Connect all the equipment in the test-bed with cables (optical cables), i.e.
IXIA ports, router, L2 switch, network taps, Wireshark server. Check for
connectivity across all the devices, perform fault management (check for
faults).
3. Open the GUI of IxNetwork in Windows client machine, connect to chassis,
select and configure the IXIA ports (assign an IP address and a default gateway to
each port), and configure a traffic item for each source-destination pair by setting the
frame size and speed. Now, the traffic is ready to be sent from IXIA.
4. Start capture on the desired Wireshark interfaces simultaneously by logging
into the Wireshark server and manually starting the live capture at desired
interfaces.
5. Now, in the GUI of IxNetwork, send an ICMP packet for identifying the
beginning of the traffic streams. After sending this packet, we immediately
start our traffic streams. We are sending 3 traffic streams, described as Traffic
Item 1, Traffic Item 2 and Traffic Item 3. Start traffic item 1 at time 0 seconds
(as soon as you have sent the ICMP packet). After 5 seconds, start traffic item
2, such that the two traffic items are flowing simultaneously. Again, after 5
seconds, start traffic item 3. The total duration of traffic flow is 25 seconds
(starting from traffic item 1), in which all the three streams are flowing
simultaneously for 15 seconds. The user needs to manually check for the time
duration when sending traffic. The traffic is stopped after 25 seconds, by
manually checking the time being displayed in the GUI and the selected IXIA
ports are manually released from the current configuration.
6. As the traffic is flowing simultaneously, packets are being captured live in the
Wireshark server at the two selected interfaces, one at the ingress of the router
and the other at the egress.
7. The user needs to wait for a few seconds after stopping the traffic in the GUI
to stop the live capture at Wireshark. The “.pcap” Wireshark file at each
capturing interface is saved. It is to be noted that two “.pcap” files are

obtained. When capturing packets at the two Wireshark interfaces, the user
needs to make sure that the number of packets at the ingress and the egress is
the same and that no packets are dropped at the interfaces when saving the
“.pcap” files.
8. The “.pcap” files are converted into text (log) files by using a T-Shark filter,
where fields like Packet Number (Frame Number), Epoch Time, Frame Size, and
Source and Destination IP addresses are extracted (an example command is
shown after this list).
9. The two log files containing the packet information at the ingress and egress of
the router are given as input to the developed Perl script to evaluate the
throughput on multiple timescales. The user can specify the timescale with
which he would like to measure the throughput in the form of arguments when
running the Perl script. The timescale can be 1 second, 1 millisecond or 0.1
millisecond. It should be noted that the log files are fetched from the
Wireshark Server to the Linux Client through File Transfer Protocol manually
through command line. The Perl script is executed in the Linux Client, where
the IXIA client software is also installed.
10. The Perl script calculates the Coefficient of Variation (CoV) at the ingress and
at the egress of the router. The CoV at the ingress is subtracted from the CoV at
the egress. A positive and larger difference indicates a greater amount of
variation in the throughput when multiple traffic streams are flowing
simultaneously through the Device under Test.
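
An example of such a T-Shark field-extraction command, in the same form as the capture scripts listed in Appendix A(2) (the interface and file names follow that appendix), is:

tshark -r p5p1-ingress.pcap -T fields -e frame.number -e frame.time_epoch -e frame.len -e ip.src -e ip.dst > p5p1.log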

The same experiment is repeated for varying frame sizes of 128 Bytes, 256 Bytes,
512 Bytes, 1024 Bytes and 1518 Bytes at 100 Mbps. All the three traffic items are sent
at the same frame size and speed (layer 2 bit rate). The CoV is calculated for each case
at timescales of 1 second and 1 millisecond. The experiment is repeated five times for
each frame size (5 iterations for each frame size) and the average CoV is calculated.
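
For clarity, the metric computed from the log files can be summarised as follows (the symbol ΔCoV is introduced here only for illustration; μ and σ denote the mean and the standard deviation of the per-interval throughput over the chosen timescale, as computed by the Perl script in Appendix A(3)):

CoV = σ / μ,    ΔCoV = CoV(egress) - CoV(ingress)

A positive ΔCoV thus indicates that the DuT has added variation to the traffic.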

5.6 Automation in Experimentation


As mentioned in Section 5.5, the user had to manually configure the IXIA ports and
traffic items, and start and stop traffic in the GUI while checking the time. At the same time,
the user had to manually log into the Wireshark server, simultaneously start and stop the
capture at the two Wireshark interfaces, and fetch the log files from the server to the Linux
client. These activities consumed the user’s time while conducting
this particular test and at the same time affected the accuracy of the results. The
experimental results are more accurate if the user is able to perform
all of these tasks automatically, with minimum input. As a result, potential areas for
automation were identified for the given test case, after which the user no longer needed
to use the IXIA GUI to perform the configurations or to log into the Wireshark
Server to capture traffic and extract the log files. Table 7 below
describes the steps which were automated and which were kept manual for the given
test case (or use case).
The TCL script depicting the current IXIA configuration was generated from the
“ScriptGen” module of the IxNetwork GUI. The script was studied and understood. The
modules (APIs) required for connecting to the chassis, configuring traffic items, starting
and stopping traffic were identified from this script. The variables which could be
given as user input were then determined and included in the script. Also, the
traffic configuration for simultaneously sending all traffic streams with a gap of 5
seconds had to be included in the script. The module for releasing the ports from the
chassis and disconnecting from the TCL Server was included as well. In brief, the
major HLT TCL APIs which were used for developing the modified script were
Session APIs and Traffic APIs. A description of these APIs as used while developing
the script is given in Appendix A(1).

Steps kept manual:
- Configuration of the Router (Device under Test).
- Cabling in the test bed.
- Providing the test parameters: Chassis IP, TCL Server IP, IxNetwork TCL Server IP, IP address and Default Gateway for the IXIA ports, frame size and speed for each of the three traffic items, and the timescale value for the throughput calculation. These parameters simply need to be written in a text file; the format of the text file is given in Appendix A(5).
- Launching of the automated script: the user launches the script developed for automatic testing from the terminal of the Linux Client machine.
- Subtracting the CoV values obtained at the ingress and egress of the router to determine the variation in throughput.
- Performing the iterations for varying frame sizes.

Steps automated:
- Connection to the chassis.
- Selection of the desired IXIA ports.
- Configuration of the IXIA ports (IP address and default gateway).
- Configuration of the traffic items (frame size, speed).
- Starting of the simultaneous traffic streams, each traffic stream at a gap of 5 seconds.
- Stopping of traffic.
- Releasing the IXIA ports from the current configuration.
- Automatic capture at the two Wireshark interfaces simultaneously.
- Automatic fetching of the log files from the Wireshark Server to the Linux Client.
- Automatic execution of the Perl script on the two log files fetched from the Wireshark Server for the CoV calculation.

Table 7: Manual-Automated Steps

With automation in experimentation implemented, it was possible to conduct the test
faster and with greater accuracy. It was no longer required to use the IxNetwork
GUI. At the same time, the traffic was sent for an exact duration of 25 seconds, which
provided more uniformity when comparing the results of numerous
iterations. It was mentioned in Section 5.5 that the user had to manually watch the time in
the GUI to start the traffic streams at a gap of 5 seconds and stop them after 25 seconds;
this was no longer required as the test execution was made automatic. On the other
hand, some steps were kept manual, as some user interaction was necessary while
conducting the given test and verifying the test results. It should be noted that it is also
possible to develop a script which enables the user to automatically perform the
iterations for varying frame sizes; a sketch is given below. This was not implemented, as the primary goal was to
develop a framework for automatically conducting the given test case.
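
A minimal sketch of such an iteration wrapper is given below. It is a hypothetical example, not part of the implemented framework: it assumes the shell script and input file of Appendix A(4) and A(5), where lines 11, 13 and 15 of “file.txt” hold the three frame-size values.

#!/bin/bash
# Hypothetical wrapper: repeat the automated test for each standard frame size.
for size in 128 256 512 1024 1518; do
    # overwrite the three frame-size lines of file.txt (lines 11, 13 and 15)
    sed -i "11s/.*/$size/;13s/.*/$size/;15s/.*/$size/" file.txt
    sh shell.sh
    # keep the per-frame-size CoV outputs for later comparison
    mv output1.log output1_${size}B.log
    mv output2.log output2_${size}B.log
done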

6 RESULTS AND ANALYSIS
This chapter presents the benefit of test automation in terms of Return
on Investment and the coefficient of variation of the throughput on multiple timescales. An
automation testing framework was developed to evaluate the throughput of a router on
multiple timescales. In the beginning, a discussion is provided for the return on
investment calculation which has been used in the thesis work to express the benefit of
test automation. ROI was calculated for a single use case by taking into consideration
the reduction in execution time obtained by automating the given use case. Later, an
ROI curve is proposed to extend this idea for “N” use cases. Finally, the throughput is
evaluated on two timescales, namely 1 second and 1 millisecond for varying frame
sizes.

6.1 Return on Investment


The proposed test automation method of using a Linux client with IXIA greatly
reduced the complexity while conducting the given test with reduced human
intervention. The test automation method was successfully tested for only one use case
in this thesis. The considered use case was throughput calculation on multiple
timescales. First, a test automation framework was theoretically designed and then, it
was implemented and successfully tested on a single use case.
It is worth mentioning that there are several test (or use) cases to be automated.
Also, there are several other parameters which need to be considered while calculating
ROI. Some of them include the actual benefit and investment cost in terms of money,
time and several other external factors for big projects in software companies.
The ROI with the developed test automation framework was 2.77%, when
compared with manual testing. It is solely described in terms of reduction in execution
time of the given test. In other terms, only the test execution variable is set to 1 in the
TADM and the other variables are set to 0. The cost was described in terms of time
taken (in hours/minutes) for manual test execution with IXIA GUI in the Windows
environment. The benefit was also expressed in time, where a single test script was
executed for successfully conducting the given test automatically in the Linux client.
The ROI calculation curve, described in terms of a linear equation, is proposed by
taking into consideration this single use case. The curve is given in Figure 6. It is
drawn for the equation y = 4.0375N - 5. The aim of describing this curve, apart from
expressing the above benefit as a percentage, is to highlight that even though
automating the single use case improved the ROI due to the reduction in execution time, it
is important to express the ROI by taking into consideration all the available use cases for
testing. The ROI curve for this thesis work is linear with a negative intercept. This
can be interpreted as the ROI being negative when test automation is performed
for only a small number of use cases. In such cases, the investment cost for developing the
test automation tool is larger than the gain obtained from automatically
testing these few use cases. Hence, the ROI is larger when test automation is performed
for a larger number of use cases. Thus, there is a trade-off between the ROI being positive
and the number of use cases to be automated. This shows that not all test cases can be
automated, even though there is scope for one hundred percent automation. An effort
must be made to identify the value of N at which the ROI becomes positive.
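
As a simple illustration (taking the anonymized curve equation at face value and assuming the linear model holds unchanged), the break-even point follows directly from setting y = 0:

0 = 4.0375 N - 5  =>  N = 5 / 4.0375 ≈ 1.24

so, under this model, the ROI first becomes positive from the second automated use case onwards; different set-up and execution times shift this break-even point accordingly.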
It should be noted that the slope and intercept of the ROI curve vary according to the set-up
and execution times for the use cases; the intercept can also be positive. It is expected that there
will be around 15-20 % variation in the execution times for automating different use
cases with the proposed test automation framework. The slope in this thesis is
determined based on the time taken for automating a single use case. The developed

test automation method of using IXIA in Linux client needs to be developed further
and extended to perform testing for larger use cases and calculate the overall ROI, by
taking into consideration all the possible use cases.

6.2 Throughput
The experiment of throughput evaluation on multiple timescales was conducted for
the standard frame sizes – 128, 256, 512, 1024 and 1518 Bytes. The CoV was
calculated at the ingress and the egress of the router and its value at the ingress was
subtracted from the egress value. The experiment involved sending three simultaneous
traffic streams through the DuT (router), from three different sources to one common
destination and capturing the packets at the egress and the ingress of the DuT using a
Wireshark server. The CoV was calculated for 1 second and 1 millisecond. The
average was calculated after performing five iterations.
A comparison of the difference in the CoV values at a timescale of 1 second for
varying frame sizes is provided in Figure 14. It can be understood that there is no
difference in the CoV values for the larger frame sizes of 1024 and 1518 Bytes. The
difference between the CoV values at the egress and ingress was found to be zero. In
other words, the variance at the egress and ingress was equal for 1024 and 1518 Bytes.
A difference in CoV was observed for the smaller frame sizes – 128, 256 and 512 Bytes. It
can be noticed that the difference in the CoV values decreases as we move from the smaller
frame size to the larger frame size. Also, for 128 Bytes, this difference is on the order
of 10^-6. This difference is very small, but it is highlighted
in this thesis. The reason for not conducting the experiment at 64 Bytes is given in
Section 7.2.

Figure 14: Difference in CoV per Second

Figure 15 presents the difference in the CoV values for the same frame sizes at a
timescale of one millisecond. The trend for the varying frame sizes is similar to the
trend observed at a timescale of one second: the difference in the
CoV values decreases as we move from the smaller frame size to the larger frame size.
For the largest frame size of 1518 Bytes, the difference in the CoV values increased from 0
to the order of 10^-4. Again, the increase is very small, but it is
worth noticing. The increase is much more clearly noticeable for the smaller frame size of
128 Bytes, where the value increased from the order of 10^-6 to the order of
10^-3.

Figure 15: Difference in CoV per Millisecond

In Figure 16, the difference in the CoV values for the varying frame sizes is shown
at timescales of 1 second and 1 millisecond. This difference is largest for the smallest
frame size, 128 Bytes, and smallest for the largest frame size, 1518 Bytes, both at
1 millisecond and at 1 second. The difference in CoV values is larger at the timescale
of 1 millisecond than at the timescale of 1 second, and this effect is
stronger for the smaller frame sizes than for the larger ones. This result
supports the hypothesis assumed in this thesis, namely that there is an
impact of one traffic stream on the others, which leads to a difference in the CoV
values. It can also be said that the amount of data in each smaller interval (of duration
1 millisecond) is not always the same, which leads to more variation. This means that a
small amount of jitter is added by the DuT. Such behaviour usually goes unnoticed when the
throughput is measured on larger timescales. Even though these values are very
small, they are mentioned to support the hypothesis assumed in this thesis.

Figure 16: Difference in CoV on Multiple Timescales

7 CONCLUSION AND FUTURE WORK
The purpose of this thesis work is to develop an automatic testing framework for
measuring the performance of routers on multiple timescales. The routers from
multiple vendors can be tested automatically with the suggested automation method
and throughput calculation methodology. This thesis work was carried out by using
IXIA for automatic generation of traffic. A passive measurement infrastructure
consisting of Layer 1 Switch, Network Taps and a Wireshark Server was used. The
performance was expressed in terms of throughput on multiple timescales, like 1
second and 1 millisecond. Finally, the benefit of developing the given test automation
framework by using IXIA in a Linux Client was described in terms of Return on
Investment by taking into consideration the reduction in execution time for the test.
The developed test automation framework greatly reduced the complexity when
conducting tests using IXIA. It introduced greater flexibility and speed, in terms of
execution time while testing, saving time and resources. A proof-of-concept of the
developed test automation method was successfully implemented and tested for
evaluating the performance of a router.

7.1 Answering Research Questions


The research questions that were framed in the context of throughput measurement
and test automation are answered as follows:

Q1. What is the performance delivered by a router on multiple timescales?
This research question is answered by Figure 16 in Section 6.2. Figure 16
provides a comparison for the difference in the CoV values for varying
frame sizes. The CoV at egress and ingress of a router for different frame
sizes is calculated at two timescale values, namely one second and one
millisecond.
The performance of the router is thus described by measuring the
throughput (amount of data) on multiple timescales - one second and one
millisecond. The CoV value at the ingress is subtracted from the CoV value
obtained at the egress of the router. The results are in line with the
hypothesis that the variance is larger on smaller timescales (one
millisecond) when compared to the larger timescale (one second). This
difference in variance usually goes unnoticed on larger timescales. It is
worth mentioning that these values are very small, in the range of negative
powers of 10. But, since the aim was to check the validity of the
hypothesis with the suggested throughput calculation methodology, it is
worth considering these minute difference values.

Q2. How do varying frame sizes affect the throughput of a router on multiple timescales?
The figures described in Section 6.2, namely Figure 14 and Figure 15
answer this research question. They present a comparison of the difference
in CoV values for varying frame sizes at timescales of one second and one
millisecond respectively. From both figures, it can be said that there is
a reduction in the CoV values as we move from smaller to larger frame
size. A similar trend can be observed for one millisecond as well.
More specifically, in Figure 14, when considering the smallest and the
largest frame size, the difference in CoV was more for 128 Bytes when

compared to 1518 Bytes. It was zero for 1518 Bytes, whereas for 128
Bytes it was slightly greater (on the order of 10^-6).
This trend continued at the timescale value of 1 millisecond,
where the difference was clearly noticeable, as seen from Figure 15. The
difference increased from zero to the order of 10^-4
for 1518 Bytes. There was a greater increase in CoV for the smaller frame
size, to the order of 10^-3. With these two
figures, it is concluded that the packet streams with smaller frame sizes led
to more difference in the CoV values when compared to larger frame sizes
when passing through the router.

Q3. How to develop an automated testing framework for evaluating throughput? What is
the benefit of developing an automated testing framework, in terms of Return on
Investment?
The test automation method described in Section 4.1.1 answers the first
part of the research question. IXIA was used for automatically generating
simultaneous traffic streams each with a gap of 5 seconds. Automation
was performed by writing test scripts which could be executed with
reduced user interaction, all at once. With the developed test automation
framework, it was no longer needed to manually configure each IXIA port
and traffic item and capture traffic using Wireshark.
The second part of the question is answered based on Section 4.1.3 and
Section 6.1. The benefit of test automation over manual testing was
described in terms of Return on Investment. The benefit was 2.77% when
compared to manual testing for the given use case. The ROI was
calculated manually, by considering the matrices of TADM, Benefit and
Cost. The developed test automation framework reduced the execution
time, introduced more accuracy in testing and automatically generated the
throughput calculation result on multiple timescales. Also, a linear curve
with a negative intercept depicting the impact of the number of use cases on
the ROI is shown in Figure 6. It can be said that a trade-off is required between
the number of use cases and the ROI when performing automation.

It should be noted that the ROI and throughput calculation values are
all anonymized. Each value is divided by a constant factor, to make sure that the
complete data is not made public while its integrity is maintained.

7.2 Limitations and Challenges


1. Due to the limitation of resources in the Lab, only four IXIA ports were used.
Otherwise, the experiment would have been conducted for multiple source
ports and multiple destination ports, like three source ports and three
destination ports for better analysis of results.
2. The issue of packet loss in Wireshark Server at smaller frame sizes made it
difficult to conduct the experiment at 64 bytes. There was loss in Wireshark
for higher line speeds, i.e. when each traffic stream was sent at 150 Mbps or
more. Even if the physical link capacity was sufficient, packets were dropped
at the Wireshark interfaces. There were scenarios where packets were dropped
for 128 and 256 Bytes even at 100 Mbps. Only the iterations without
packet loss were considered for the calculations. All these factors
required the traffic streams to be adjusted accordingly when conducting the
experiments.

It should be noted that there are methods for tuning the Wireshark server to avoid
packet loss, but this was not done, as it would have required stalling the remaining
experiments in the lab while the server was fixed.
3. Another major challenge was the time taken for the execution of the Perl
script for calculating the throughput on smaller timescales. For example, the
size of the T-Shark log file at the ingress (or egress) for 128 Bytes was 260
Megabytes, and it took around 6 hours for the complete execution of the script for a
timescale of 1 millisecond. The smaller the timescale value, the more time was
consumed in script execution. In some instances, the script execution process
was killed by the Linux operating system due to insufficient memory. As a
result, the experiments were conducted at 100 Mbps line speed and timescale
values of 1 second and 1 millisecond. The CoV at a timescale of 0.1
millisecond was thus not calculated.
4. The most important parameter affecting the throughput results is the physical
link capacity of 10 Gbps. This made the calculations in the script complex, and
such high forwarding capacity tremendously increased the size of the log files.
5. There were numerous scenarios when the difference in the CoV values at the
egress and ingress was negative. It is believed that the negative variation in
throughput might be because of the manner in which the two network taps
were connected to the Wireshark Server. One network tap was connected to a
network interface card (NIC) which was directly integrated to the motherboard
of the Wireshark Server, while the other tap was connected to an external NIC
on Wireshark. As such, these negative values were discarded and only the
iterations with positive CoV values were considered.
6. The ROI calculation can be expressed more clearly by considering the various
external parameters and costs, different use cases and projects.

These limitations were not tackled because of a lack of resources, timing constraints
and other external factors. In the end, the experiments could not be performed again
for more iterations due to repeated license and server issues with IXIA, packet loss in
Wireshark, and a power glitch which led to equipment failure; the subsequent reinstallation
introduced further delay in the successful completion of the thesis work. As both automation
and experimentation had to be performed simultaneously, a better analysis of the results
could not be performed.

7.3 Future Work


An effort to overcome the above mentioned limitations and challenges is suggested
in the form of future work. The experiment needs to be verified for multiple sources
and destinations for physical routers. At the same time, one should also try to calculate
throughput at even smaller timescale values, like 0.1 millisecond or less. It is
suggested to identify the limitations of the suggested throughput calculation
methodology, as it is predicted that the physical routers will have further enhanced
forwarding capabilities in the future. For instance, it will be very difficult to calculate
the throughput per millisecond at a physical link capacity of 100 Gbps or more with
higher traffic loads for a physical router. Such high capacities will further increase the
size of the log files. At the same time, the developed Perl script can be improved
further for faster execution and can be implemented in different programming
languages like Python, with better libraries. The script needs to be executed on servers
with high computational capability and memory for correct analysis of the results. Popular
automation tools like Ansible and Puppet, for automating the configuration of
network elements such as routers and switches in the test-bed, are also another interesting
area worth exploring.

The ROI needs to be calculated in the actual real-time business scenario, where
investment cost is taken into consideration, and this test automation methodology
needs to be extended further for different use cases and needs to be implemented for
bigger testing projects. Also, one needs to consider the other parameters like test case
design, test scripting and test evaluation for expressing the complete benefit of test
automation. A comparison of ROI for different sub-cases of a given use case with
partial and complete test automation can also be presented.

APPENDIX A
This section describes the scripts which were developed for performing
automation of the given test case. The scripts were developed using TCL, Perl and
Shell scripting. The TCL script is not described in full as the scripts developed using
IXIA are copyrighted. Only some of the modules of this script are mentioned. The
throughput on multiple timescales was calculated using the Perl script, which is
published in full. The shell scripts were used for simultaneously calling multiple
scripts for execution. The format of the text file which was given as user input is also
mentioned.

A.1. IXIA Script:


The following contains the TCL Script Modules and APIs for automatic
connection to chassis, TCL Server, IxNetwork TCL Server; configuration of IXIA
ports, configuration of traffic items, starting and stopping of traffic, releasing ports.
The file is named “div11.tcl”.

package require Ixia;

set fd [open "file.txt" "r"]


set a [read $fd]
set fields [split $a "\n"]
set gate1 [lindex $fields 2]
set gate2 [lindex $fields 3]
set ip1 [lindex $fields 4]
set ip2 [lindex $fields 5]
set ip3 [lindex $fields 6]
set sub1 [lindex $fields 7]
set ip4 [lindex $fields 8]
set sub2 [lindex $fields 9]

close $fd

# //vport/interface
$::ixnHLT_log
interface_config://vport:<1>/interface:<1>...
set _result_ [::ixia::interface_config \
-mode modify \
-port_handle $ixnHLT(PORT-HANDLE,//vport:<1>) \
-gateway $gate1 \
-intf_ip_addr $ip1 \
-netmask $sub1 \
-check_opposite_ip_version 0 \
-src_mac_addr 0000.2322.4fc4 \
-arp_on_linkup 1 \
-ns_on_linkup 1 \
-single_arp_per_gateway 1 \
-single_ns_per_gateway 1 \
-mtu 1500 \
-vlan 0 \

-l23_config_type protocol_interface \
]

# Check status
if {[keylget _result_ status] != $::SUCCESS} {
$::ixnHLT_errorHandler [info script]
$_result_
}

catch {
    set ixnHLT(HANDLE,//vport:<1>/interface:<1>) [keylget _result_ interface_handle]
    lappend ixnHLT(VPORT-CONFIG-HANDLES,//vport:<1>, interface_config) \
        $ixnHLT(HANDLE,//vport:<1>/interface:<1>)
}

$::ixnHLT_log {COMPLETED: interface_config}

set fd [open "file.txt" "r"]


set a [read $fd]
set fields [split $a "\n"]
set frame_size1 [lindex $fields 10]
set rate1 [lindex $fields 11]
set frame_size2 [lindex $fields 12]
set rate2 [lindex $fields 13]
set frame_size3 [lindex $fields 14]
set rate3 [lindex $fields 15]
close $fd

$::ixnHLT_log {Configuring options for config


elem: //traffic/trafficItem:<1>/configElement:<1>}
# -- Options
set _result_ [::ixia::traffic_config \
-mode modify \
-traffic_generator ixnetwork_540 \
-stream_id $current_config_element \
-preamble_size_mode auto \
-preamble_custom_size 8 \
-data_pattern {} \
-data_pattern_mode incr_byte \
-enforce_min_gap 0 \
-rate_mbps $rate1 \
-frame_rate_distribution_port apply_to_all \
-frame_rate_distribution_stream apply_to_all \
-frame_size $frame_size1 \
-length_mode fixed \
-tx_mode advanced \
-transmit_mode continuous \
-pkts_per_burst 1 \
-tx_delay 0 \
-tx_delay_unit bytes \

-number_of_packets_per_stream 1 \
-loop_count 1 \
-min_gap_bytes 12 \
]

# Check status
if {[keylget _result_ status] != $::SUCCESS} {
$::ixnHLT_errorHandler [info script]
$_result_
}
# -- Post Options
$::ixnHLT_log {Configuring post options for
config elem:
//traffic/trafficItem:<1>/configElement:<1>}
set _result_ [::ixia::traffic_config \
-mode modify \
-traffic_generator ixnetwork_540 \
-stream_id $current_config_element \
-transmit_distribution none \
]

# Check status
if {[keylget _result_ status] != $::SUCCESS} {
$::ixnHLT_errorHandler [info script]
$_result_
}

set stream1 $current_config_element

# -- Post Options
$::ixnHLT_log {Configuring post options for
config elem:
//traffic/trafficItem:<2>/configElement:<1>}
set _result_ [::ixia::traffic_config \
-mode modify \
-traffic_generator ixnetwork_540 \
-stream_id $current_config_element \
-transmit_distribution none \
]

# Check status
if {[keylget _result_ status] != $::SUCCESS} {
$::ixnHLT_errorHandler [info script]
$_result_
}

set stream2 $current_config_element

# -- Post Options
$::ixnHLT_log {Configuring post options for
config elem:
//traffic/trafficItem:<3>/configElement:<1>}
set _result_ [::ixia::traffic_config \
-mode modify \
-traffic_generator ixnetwork_540 \

-stream_id $current_config_element \
-transmit_distribution none \
]

# Check status
if {[keylget _result_ status] != $::SUCCESS} {
$::ixnHLT_errorHandler [info script]
$_result_
}

set stream3 $current_config_element

$::ixnHLT_log "Running Traffic Item 1..."


set r [::ixia::traffic_control \
-action run \
-traffic_generator ixnetwork_540
\
-handle $stream1 \
-max_wait_timer 0 \
-type l23 \
]

if {[keylget r status] != $::SUCCESS} {


$::ixnHLT_errorHandler [info script] $r
}

after 5000

$::ixnHLT_log "Running Traffic Item 1 and


2..."

set r [::ixia::traffic_control \
-action run \
-traffic_generator ixnetwork_540 \
-handle $stream2 \
-max_wait_timer 0 \
-type l23 \
]

if {[keylget r status] != $::SUCCESS} {


$::ixnHLT_errorHandler [info script] $r
}

after 5000

$::ixnHLT_log "Running Traffic Item 1, 2 and


3..."
set r [::ixia::traffic_control \
-action run \
-traffic_generator ixnetwork_540
\
-handle $stream3 \
-max_wait_timer 0 \
-type l23 \
]

if {[keylget r status] != $::SUCCESS} {
$::ixnHLT_errorHandler [info script] $r
}

after 15000

# ######################
# stop phase of the test
# ######################

$::ixnHLT_log "Stopping Traffic..."


set r [::ixia::traffic_control \
-action stop \
-traffic_generator ixnetwork_540
\
-max_wait_timer 0 \
-type l23 ]

if {[keylget r status] != $::SUCCESS} {


$::ixnHLT_errorHandler [info script] $r
}

set fd [open "file.txt" "r"]


set a [read $fd]
set fields [split $a "\n"]

set chassis [lindex $fields 0]


set server [lindex $fields 1]
close $fd

set _result_ [::ixia::connect \


-reset 1 \
-device $chassis \
-aggregation_mode $aggregation_mode \
-aggregation_resource_mode $aggregation_resource_mode \
-port_list $port_list \
-ixnetwork_tcl_server $server \
-tcl_server $tcl_server \
-guard_rail $guard_rail \
-return_detailed_handles 0 \
]

set _result_ [::ixia::cleanup_session \


-maintain_lock 0 \
]

# Check status
if {[keylget _result_ status] != $::SUCCESS} {
$::ixnHLT_errorHandler [info script] $_result_
}

A.2. Automatic Capture at Wireshark Interfaces:
The Perl script to simultaneously start capture at the two Wireshark interfaces is
given below. It is to be noted that two scripts were written, namely “test1.pl” and
“test2.pl”, which were launched simultaneously to perform the live capture. The files saved
on the Wireshark Server are automatically deleted using “delete.pl”.
A.2.1. test1.pl
#!/usr/bin/perl
use strict;
use warnings;
use Net::OpenSSH;

my $user = 'edivped';
my $password = 'passwd';
my $host = 'wireshark';

my $cmd1="tshark -i p5p1 -w p5p1-ingress.pcap -


a duration:300";

my $cmd2="tshark -r p5p1-ingress.pcap -T fields


-e frame.number -e frame.time_epoch -e frame.len -
e ip.src -e ip.dst > p5p1.log";

my $ssh = Net::OpenSSH->new(host=>"$host",
user=>"$user", port=>22, password=>"$password");

$ssh->system($cmd1);
$ssh->system($cmd2);
A.2.2. test2.pl
#!/usr/bin/perl
use strict;
use warnings;
use Net::OpenSSH;

my $user = 'edivped';
my $password = 'passwd';
my $host = 'wireshark';

my $cmd1="tshark -i em2 -w em2-egress.pcap -a


duration:300";

my $cmd2="tshark -r em2-egress.pcap -T fields -


e frame.number -e frame.time_epoch -e frame.len -e
ip.src -e ip.dst > em2.log";

my $ssh = Net::OpenSSH->new(host=>"$host",
user=>"$user", port=>22, password=>"$password");

$ssh->system($cmd1);

$ssh->system($cmd2);

A.2.3. delete.pl
#!/usr/bin/perl
use strict;
use warnings;
use Net::OpenSSH;

my $user = 'edivped';
my $password = 'passwd';
my $host = 'wireshark';

system("rm -f p5p1.log em2.log");

my $cmd1="rm -f p5p1.log em2.log p5p1-ingress.pcap em2-egress.pcap";

my $ssh = Net::OpenSSH->new(host=>"$host",
user=>"$user", port=>22, password=>"$password");

$ssh->system($cmd1);

A.3. Throughput Evaluation:


The Perl script for evaluating throughput on multiple timescales is given below.
The script is executed with command line arguments: delta_time stands for
the window (observation) time, which can be 5 seconds or 1 second, and interval_time
stands for the timescale value, which can be 1 second or 1 millisecond. The file
“em2.log” is the Wireshark log file containing the packet information and
“output2.log” contains the CoV value. This script needs to be executed twice, once for
the ingress packet capture, and once for the egress packet capture, i.e. for log files
obtained at the two Wireshark interfaces.

perl perf_gnb_findThroughput.pl --delta_time=1 --interval_time=0.001 p5p1.log > output1.log

perl perf_gnb_findThroughput.pl --delta_time=1 --interval_time=0.001 em2.log > output2.log

#!/usr/bin/env perl
use Getopt::Long;
use POSIX;
use Data::Dumper;
use Math::BigFloat;

print "\n\n$0 @ARGV\n\n";

GetOptions (
"delta_time=f" => \$arg_delta_time,
#in seconds
"interval_time=f" =>
\$arg_interval_time #in seconds
);

my $line_speed = 10000000000; #10Gbps

my %pkt_data=();

while (<>) {
chomp; # strip record separator

    if (/(\d+)\s+(\d+\.\d+)\s+(\d+)\s+(\d+\.\d+\.\d+\.\d+)\s+(\d+\.\d+\.\d+\.\d+)/) {
$pkt_no = $1;
$epoch_time_stamp = $2; #in secs
$pkt_size = $3; #in bytes
$src_ip_addr = $4;
$dst_ip_addr = $5;

$pkt_start_time = $epoch_time_stamp;
$pkt_end_time =
$epoch_time_stamp+(($pkt_size*8)/$line_speed);
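        # (The end time computed above is the capture timestamp plus the frame's
        # serialization time at the 10 Gbps line speed.)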

        #Preparing a data structure of captured packets (for generating separate
        #logs for each stream & Throughput Calculation)

        $pkt_data{$pkt_start_time}{$src_ip_addr}{$dst_ip_addr}{$pkt_no}=$pkt_size;

        if (!defined $pkt_start_time{$src_ip_addr}) {
            $pkt_start_time{$src_ip_addr} = $pkt_start_time;
        }
        $pkt_end_time{$src_ip_addr} = $pkt_end_time;
    }
}

#Sort the Start Times in descending order for determining the Throughput Window
foreach my $src_ip ( sort { $pkt_start_time{$b} <=> $pkt_start_time{$a} } keys %pkt_start_time ) {
$window_start_time = $pkt_start_time{$src_ip};
print "\nWindow Start
Time:$window_start_time\n";
last;
}

#Sort the End Times in ascending order for determining the Throughput Window
foreach my $src_ip_address ( sort {
$pkt_end_time{$a} <=> $pkt_end_time{$b} } keys
%pkt_end_time ) {

$window_end_time =
$pkt_end_time{$src_ip_address};
print "\nWindow End Time:$window_end_time\n";
last;
}

#Calculating the Start & End Time of Sub-Window

$sub_window_start = Math::BigFloat->new($window_start_time);
$sub_window_end = Math::BigFloat->new($sub_window_start+$arg_delta_time);

#$sub_window_start = $window_start_time;
#$sub_window_end = $sub_window_start+$arg_delta_time;

print "\nSub-Window Start


Time:$sub_window_start\tEnd Time:$sub_window_end";

$count=0;

foreach my $time ( sort {$a <=> $b} keys %pkt_data) {
if(($time >= $sub_window_start) && ($time <=
$sub_window_end)) {
foreach my $ip_src (keys
%{$pkt_data{$time}}) {
foreach my $ip_dst (keys
%{$pkt_data{$time}{$ip_src}}) {
foreach my $no_pkt (keys
%{$pkt_data{$time}{$ip_src}{$ip_dst}}) {
$count++;
}
}
}
}
}

print "\nTotal number of packets in the


window:$count\n";

#Dividing the sub-window into smaller intervals of size arg_interval_time
$no_of_intervals = $arg_delta_time/$arg_interval_time;
print "\nNo. of Intervals:$no_of_intervals";

@amount=();
$count1=0;
$y1=0;

#Calculating the Data Transfer per interval


for ($i=0; $i < $no_of_intervals; $i++) {

$interval_start =
$sub_window_start+($i*($arg_interval_time));
$interval_end =
$interval_start+$arg_interval_time;

#Sorting in ascending order based on Start Timestamp
foreach my $time_tp ( sort {$a <=> $b} keys
%pkt_data) {
foreach my $ip_src_tp (keys
%{$pkt_data{$time_tp}}) {
foreach my $ip_dst_tp (keys
%{$pkt_data{$time_tp}{$ip_src_tp}}) {
foreach my $no_pkt_tp (keys
%{$pkt_data{$time_tp}{$ip_src_tp}{$ip_dst_tp}}) {

$src=0; $length=0; $endtime=0; $a=0;


$kdiv2=0; $interval_end1=0; $interval_start1=0;
#Clearing the variable values

$src = $ip_src_tp;

$length=$pkt_data{$time_tp}{$ip_src_tp}{$ip_dst_tp}{$no_pkt_tp};
$kdiv2=($length*8)/$line_speed;
$a=Math::BigFloat->new($time_tp); $endtime=Math::BigFloat->new($a+$kdiv2);

$interval_end1=Math::BigFloat->new($interval_end);
$interval_start1=Math::BigFloat->new($interval_start);

if ($a->bge($interval_start1) && $endtime->ble($interval_end1) && $a->blt($interval_end1)) {

$amount[$i]=$amount[$i]+$length;
$count1++;
}
else {
if ($a->bge($interval_start1) && $endtime->bgt($interval_end1) && $a->blt($interval_end1)) {

$add=1;
$part1=0;
$part2=0;
$i1=0;

$i1=$add+$i;

$part1=(($interval_end1-
$a)*$line_speed)/8; $part2=(($endtime-
$interval_end1)*$line_speed)/8;

$amount[$i]=$amount[$i]+$part1;

$amount[$i1]=$amount[$i1]+$part2;

$y1++;
}
}
}
}
}
}

print "\ncount1 value:$count1\ty1 value:$y1\n";

#Throughput Calculation
for ($j=0; $j < $no_of_intervals; $j++) {
$throughput[$j]=0;
$throughput[$j] = $amount[$j];
$sum_of_throughput =
$sum_of_throughput+$throughput[$j];
print "\nThroughput in Interval:$j is
$throughput[$j]";
print "\nSum of Throughput in Interval:$j is
$sum_of_throughput";
$sum_of_squared_throughput=$sum_of_squared_throughput+($throughput[$j]*$throughput[$j]);
}

$mean=$sum_of_throughput/$j;
$mean_squared=($mean*$mean);
$sum_of_squared_throughput=$sum_of_squared_throughput/$j;
$var=$sum_of_squared_throughput-$mean_squared;
$sigma=sqrt($var);
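
# Coefficient of Variation of the per-interval throughput:
# CoV = standard deviation / mean (printed below).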

if($sigma && $mean) {
$cov = $sigma/$mean;
print "\nCoV:$cov\n";
}

A.4. Shell Script:


The shell script that runs the TCL script for connecting to IXIA and sending
traffic, and the Perl scripts for automatically starting the capture on the Wireshark server
and for the subsequent throughput calculation, is given below. The script is “shell.sh”. It is executed in the
command line (terminal) as “sh shell.sh”. Once the below script is executed, the user
needs to check the log files “output1.log” and “output2.log” to obtain the CoV values
at ingress and egress of the router respectively. All the above scripts are located and
executed in the directory “/opt/ixia/ixos6.80/bin/”

#!/bin/bash
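# Launch the IXIA traffic script and the two Wireshark capture scripts
# concurrently; the shell waits for the foreground capture (test2.pl) to finish
# before fetching the log files below.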
sh ixiatcl div11.tcl & perl test1.pl & perl test2.pl

sshpass -p 'passwd' scp edivped@wireshark:/home/edivped/p5p1.log /opt/ixia/ixos6.80/bin/p5p1.log

sshpass -p 'passwd' scp edivped@wireshark:/home/edivped/em2.log /opt/ixia/ixos6.80/bin/em2.log

chmod 777 p5p1.log

chmod 777 em2.log

perl perf_gnb_findThroughput.pl --delta_time=1 --interval_time=0.001 p5p1.log > output1.log

perl perf_gnb_findThroughput.pl --delta_time=1 --interval_time=0.001 em2.log > output2.log

perl delete.pl

A.5. Input Text File:


The format of the text file which is given as input for the parameters in IXIA script
is given below. This text file needs to be edited by the user before starting to execute
the test. The file is “file.txt”. Each parameter is written on a separate line, i.e.
each line consists of exactly one parameter.

10.64.213.40
10.64.213.83
192.168.5.4
192.168.184.1
192.168.5.1
192.168.5.2
192.168.5.3
255.255.255.0
192.168.184.1
255.255.255.0
1518
100
1518
100
1518
100

