Lecture 8
Lecture topics
What to measure in applications
Application traffic analysis
Protocol analysis
– RTP / RTCP
– TCP
– what about secure encapsulation?
Host-based diagnostics
After this lecture you should know how to
– do application-specific measurements
– extract quality information from protocol headers
– analyse application logs
How to test for throughput
Just transfer a large file
– time wget http://site.example/latest.iso
Benefits
– easy to do and analyse
– 640 MiB in 3701 seconds ⇒ 1.45 Mbit/s
Problems
– depends on other systems and the network
⇒ tells very little about the network itself
– shows only the performance currently available to additional traffic
– causes additional load on the network
– depends on the TCP implementations used
Reno, Vegas, BIC, Westwood, . . .
window size
– does not pinpoint problem locations
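The throughput arithmetic above (640 MiB in 3701 seconds) can be sketched as a small helper; the function name is illustrative:

```python
def throughput_mbit_s(size_bytes: int, seconds: float) -> float:
    """Average throughput in Mbit/s (decimal megabits, as usual for link rates)."""
    return size_bytes * 8 / seconds / 1e6

# the example above: 640 MiB transferred in 3701 seconds
size = 640 * 1024 * 1024
print(round(throughput_mbit_s(size, 3701), 2))  # 1.45
```

Note the unit mismatch this makes explicit: file sizes are usually reported in binary units (MiB) while link rates use decimal megabits.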
Active measurement classes
SO (Sender Only): measurements depend on standard functionality
ICMP echoes and diagnostic methods
depends on the other system functioning properly
SRP (Sender and Receiver Paired) measurements
possible to use measurement-specific packets
accurate time stamps, sequence numbers
RO (Recipient Only): most limited functionality
depends on the sender behaving as expected
packet-pair sending
passive analysis
[Figure: packet-pair measurement between sender and receiver; the bottleneck
bandwidth Pb disperses the pair, and As, Ar and Ab denote the bandwidth
estimates involved] [10, 5]
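The packet-pair idea behind tools like pathchar [10, 5]: two packets sent back-to-back are spread apart by the bottleneck link, so the arrival-time gap yields a capacity estimate. A minimal sketch (function name illustrative):

```python
def bottleneck_capacity_bps(packet_size_bytes: int, dispersion_s: float) -> float:
    """Packet-pair capacity estimate: C = L / delta, where delta is the
    arrival-time gap the bottleneck link imposes on back-to-back packets."""
    return packet_size_bytes * 8 / dispersion_s

# two 1500-byte packets arriving 1.2 ms apart suggest a ~10 Mbit/s bottleneck
print(round(bottleneck_capacity_bps(1500, 0.0012)))
```

In practice many pairs are sent and the estimates filtered, since cross traffic and queueing distort individual samples.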
Delay measurements
Active RTT measurements are easy to do
– no need to synchronise clocks
– ICMP echo
– TCP handshake
– UDP response
– take end system delay into account
One-way delays more difficult
– software support
– clock synchronisation or clock skew estimation
Take possible traffic classification into account
Low bandwidth requirements
– get sufficient number of samples
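One of the active RTT methods listed above, the TCP handshake, can be sketched with plain sockets: `connect()` returns once the SYN-ACK has arrived, so the elapsed local time approximates one RTT. No clock synchronisation is needed because both timestamps are taken on the same host (function name illustrative):

```python
import socket
import time

def tcp_handshake_rtt(host: str, port: int = 80, timeout: float = 2.0) -> float:
    """Estimate RTT from the TCP three-way handshake.

    connect() completes when the SYN-ACK arrives, so the elapsed time is
    roughly one RTT plus some end-system overhead, which should be
    accounted for as noted above."""
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=timeout):
        rtt = time.monotonic() - start
    return rtt
```

Repeating the measurement and taking the minimum reduces the effect of scheduling delay on the end systems.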
Passive delay measurements
Packet-level measurements
TCP
– time difference between a data segment and the corresponding ACK
– the ACK may be lost or delayed, too
– end system characteristics[14]
RTP [17]
– has a timestamp for the samples: with standard PCM voice, the timestamp counter is
incremented by 8000 every second
– if monitored on far end, delay variation can be identified
– note possible clock skew
Flow-based analysis possible
– short flows
– 1st packet of flow
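The RTP-based technique above can be sketched directly: since the PCM timestamp clock runs at 8000 Hz, comparing timestamp progression against local arrival times reveals delay variation at the monitoring point. Clock skew, noted above as a caveat, is ignored in this sketch (names illustrative):

```python
RTP_CLOCK_HZ = 8000  # standard PCM voice: timestamp advances by 8000 per second

def delay_variation(arrival_times_s, rtp_timestamps):
    """One-way delay variation of each packet relative to the first one,
    in seconds (sender/monitor clock skew is ignored here)."""
    a0, t0 = arrival_times_s[0], rtp_timestamps[0]
    return [(a - a0) - (t - t0) / RTP_CLOCK_HZ
            for a, t in zip(arrival_times_s, rtp_timestamps)]

# 20 ms packetisation (160 timestamp units); the second packet is 5 ms late
print([round(d, 3) for d in delay_variation([0.0, 0.025, 0.040], [0, 160, 320])])
# [0.0, 0.005, 0.0]
```

Only relative delay variation is recovered; the absolute one-way delay would require synchronised clocks.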
Loss measurements
Active loss measurements are done similarly to delay measurements
Passive measurements
– TCP retransmissions
– RTP sequence number monitoring
– RTCP receiver reports
– other implementation problems [14]
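RTP sequence number monitoring, listed above, reduces to comparing how many packets were expected against how many arrived. A minimal sketch that ignores the 16-bit wrap handling a real monitor would need (function name illustrative):

```python
def rtp_loss(seq_numbers):
    """Packet loss count from observed RTP sequence numbers.
    Simplified: ignores 16-bit sequence number wrap-around and duplicates."""
    expected = max(seq_numbers) - min(seq_numbers) + 1
    return expected - len(seq_numbers)

print(rtp_loss([1000, 1001, 1003, 1004, 1006]))  # 2 (1002 and 1005 missing)
```

RTCP receiver reports carry essentially this statistic (cumulative lost and fraction lost) back to the sender.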
TCP header (RFC 793):

 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|          Source Port          |       Destination Port        |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                        Sequence Number                        |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                     Acknowledgment Number                     |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|  Data |           |U|A|P|R|S|F|                               |
| Offset| Reserved  |R|C|S|S|Y|I|            Window             |
|       |           |G|K|H|T|N|N|                               |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|           Checksum            |         Urgent Pointer        |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                    Options                    |    Padding    |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                            payload                            |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
RTP header [17]:

 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|V=2|P|X|  CC   |M|     PT      |        sequence number        |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                           timestamp                           |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                             SSRC                              |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                           CSRC list                           |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                    extension header, data                     |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
Voice performance
Human tests: MOS (Mean Opinion Score)
– a set of test users
– for each test, a grade 1 – 5 is given
1. bad
2. poor
3. fair
4. good
5. excellent
– expensive and time-consuming
– codecs and systems are language-dependent: there are significant differences between
languages, and between genders, in how well each compression method performs
Automated tests
– characterise network performance
– estimate MOS based on those parameters
E-model
A computational model for use in transmission planning [9]
Takes a set of parameters
Ro basic signal-to-noise ratio
Is simultaneous impairment factor
Id delay impairment factor
Ie equipment impairment factor: for example, the voice codec and bit rate used have an
effect here. PCM at 64 kbit/s has a value of 0, while the GSM full-rate codec has a
value of 20 (half-rate 23)
A advantage factor taking the user's expectations into account (0 – 20); corresponds to
a MOS difference of approximately one unit

R = Ro − Is − Id − Ie + A
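The rating R produced by the E-model equation above is commonly converted to an estimated MOS with the mapping from ITU-T G.107; a sketch (the 93.2 example value is the usual default R for unimpaired narrowband PCM):

```python
def r_to_mos(r: float) -> float:
    """Estimated MOS from the E-model rating R (ITU-T G.107 mapping)."""
    if r <= 0:
        return 1.0
    if r >= 100:
        return 4.5
    return 1 + 0.035 * r + 7e-6 * r * (r - 60) * (100 - r)

# with default G.107 parameters and no impairments, R is about 93.2
print(round(r_to_mos(93.2), 2))  # 4.41
```

Note the mapping saturates: even a perfect narrowband channel cannot reach MOS 5.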
[Figure: objective voice quality measurement: a reference signal and the output of
the system under test are aligned into fixed intervals, passed through disturbance
processing, and scored by cognitive modelling]
The IPPM working group charter states:
"The IPPM WG will develop a set of standard metrics that can be applied to the
quality, performance, and reliability of Internet data delivery services. These metrics
will be designed such that they can be performed by network operators, end users,
or independent testing groups. It is important that the metrics not represent a value
judgement (i.e. define “good” and “bad”), but rather provide unbiased quantitative
measures of performance."
Flow data
Cisco's NetFlow export format has been widely used
– incompatibilities between vendors
IPFIX (IP Flow Information Export)
– based on Netflow v9
– specification mostly done
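Flow export of the NetFlow/IPFIX kind amounts to grouping packets by a flow key, classically the 5-tuple, and keeping per-flow counters. A minimal sketch (record layout and names illustrative):

```python
from collections import defaultdict

def aggregate_flows(packets):
    """NetFlow/IPFIX-style aggregation: group packets by the 5-tuple
    (src IP, dst IP, src port, dst port, protocol) into per-flow
    packet and byte counts."""
    flows = defaultdict(lambda: [0, 0])
    for *key, size in packets:
        entry = flows[tuple(key)]
        entry[0] += 1        # packet count
        entry[1] += size     # byte count
    return {k: tuple(v) for k, v in flows.items()}

pkts = [
    ("10.0.0.1", "10.0.0.2", 12345, 80, "tcp", 1500),
    ("10.0.0.1", "10.0.0.2", 12345, 80, "tcp", 400),
    ("10.0.0.2", "10.0.0.1", 80, 12345, "tcp", 60),
]
for flow, (pkt_count, byte_count) in aggregate_flows(pkts).items():
    print(flow, pkt_count, byte_count)
```

Real exporters also track timestamps and expire flows on inactivity; the vendor incompatibilities mentioned above concern exactly how such records are encoded on the wire.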
Accounting and AAA information
Accounting systems collect information about network traffic
– CRANE [19]
AAA systems
– Diameter [4]
Highly aggregated data
– total bytes, packets
– can be used to estimate traffic demand
Non-network measurements
Network application logs
– http servers
client IP address and user agent
document size
date, transfer time
request correlation
72.30.110.140 - - [06/Apr/2006:07:58:46 +0300] "GET /korso2005/ HTTP/1.0" 200
737 "-" "Mozilla/5.0"
– mail servers
message sizes
service times: some email servers deliberately wait before accepting mail in order to
identify spammer software; blacklist lookups and similar checks may also add delay
– ftp servers
Response time of an application, for example to monitor a database server; this includes
both network and application delays. Such measurements can be used as part of SLA
verification tools, especially if the “whole service” (i.e. both the network and the server)
is provided by a single service provider.
Mostly appropriate for estimating traffic demand
Application performance
In addition to data transmission QoS
Call setup time
– PDD (Post Dialling Delay) [8]
Channel change time for IPTV
System responsiveness
These are best measured on end systems
– instrumented application
– test equipment
Statistics from end systems
End systems collect protocol statistics
– OS dependent
– counter wrap
Provides indication of network quality
– TCP retransmits
– TCP reorders
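On Linux, retransmission counters such as RetransSegs can be read from /proc/net/snmp; and since such counters are often 32-bit, deltas between samples must tolerate wrap-around, as noted above. A sketch (the snmp text below is a truncated, illustrative sample of the real file's format):

```python
def counter_delta(prev: int, curr: int, width_bits: int = 32) -> int:
    """Delta between successive counter samples, tolerating one wrap."""
    return (curr - prev) % (1 << width_bits)

def tcp_retrans_segs(snmp_text: str) -> int:
    """Extract RetransSegs from /proc/net/snmp-style text: the 'Tcp:'
    header line names the columns, the next 'Tcp:' line holds values."""
    tcp_lines = [l.split() for l in snmp_text.splitlines()
                 if l.startswith("Tcp:")]
    header, values = tcp_lines[0], tcp_lines[1]
    return int(values[header.index("RetransSegs")])

snmp = ("Tcp: RtoAlgorithm RtoMin RtoMax RetransSegs\n"
        "Tcp: 1 200 120000 23084\n")
print(tcp_retrans_segs(snmp))            # 23084
print(counter_delta(4294967290, 10))     # 16: the counter wrapped
```

The same header/value-line pairing applies to the other protocol groups in the file (Ip:, Icmp:, Udp:).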
312 predicted acknowledgments
1 congestion windows recovered after partial ack
0 TCP data loss events
5 other TCP timeouts
341 packets collapsed in receive queue due to low socket buffer
6 connections reset due to unexpected data
6 connections reset due to early user close
1 connections aborted due to timeout
Busy system
TcpExt:
267353 resets received for embryonic SYN_RECV sockets
45 ICMP packets dropped because they were out-of-window
77499 TCP sockets finished time wait in fast timer
3 time wait sockets recycled by time stamp
36 packets rejects in established connections because of timestamp
385649 delayed acks sent
2925 delayed acks further delayed because of locked socket
Quick ack mode was activated 13198 times
646595 packets directly queued to recvmsg prequeue.
3271571 of bytes directly received from backlog
549815762 of bytes directly received from prequeue
5340998 packet headers predicted
401429 packets header predicted and directly queued to user
2676410 acknowledgments not containing data received
14962075 predicted acknowledgments
127 times recovered from packet loss due to fast retransmit
10782 times recovered from packet loss due to SACK data
Detected reordering 20 times using reno fast retransmit
Busy system
TCPDSACKUndo: 12
4141 congestion windows recovered after partial ack
9406 TCP data loss events
TCPLostRetransmit: 1
117 timeouts after reno fast retransmit
4379 timeouts after SACK recovery
337 timeouts in loss state
23084 fast retransmits
489 forward retransmits
4864 retransmits in slow start
52291 other TCP timeouts
TCPRenoRecoveryFail: 37
863 sack retransmits failed
367 times receiver scheduled too late for direct processing
16671 DSACKs sent for old packets
1482 DSACKs sent for out of order packets
2845 DSACKs received
900 connections reset due to unexpected data
615 connections reset due to early user close
2366 connections aborted due to timeout
tcpOutWinProbe =151290 tcpOutControl =35759699
tcpOutRsts =864119 tcpOutFastRetrans =542751
Busy system, Solaris
IGMP:
222062 messages received
0 messages received with too few bytes
0 messages received with bad checksum
222014 membership queries received
0 membership queries received with invalid field(s)
32 membership reports received
0 membership reports received with invalid field(s)
32 membership reports received for groups to which we belong
48 membership reports sent
Conclusion
Possible to estimate application throughput using network measurements
Applications can collect performance data
Perceived quality estimation
– Quality of Experience
– data, voice, video quality
References
[1] G. Almes, S. Kalidindi, and M. Zekauskas. A One-way Delay Metric for IPPM. Request for
Comments RFC 2679, Internet Engineering Task Force, September 1999. (Internet Proposed
Standard). URL:http://www.ietf.org/rfc/rfc2679.txt.
[2] G. Almes, S. Kalidindi, and M. Zekauskas. A One-way Packet Loss Metric for IPPM. Re-
quest for Comments RFC 2680, Internet Engineering Task Force, September 1999. (Internet
Proposed Standard). URL:http://www.ietf.org/rfc/rfc2680.txt.
[3] G. Almes, S. Kalidindi, and M. Zekauskas. A Round-trip Delay Metric for IPPM. Request for
Comments RFC 2681, Internet Engineering Task Force, September 1999. (Internet Proposed
Standard). URL:http://www.ietf.org/rfc/rfc2681.txt.
[4] P. Calhoun, J. Loughney, E. Guttman, G. Zorn, and J. Arkko. Diameter Base Protocol. Re-
quest for Comments RFC 3588, Internet Engineering Task Force, September 2003. (Internet
Proposed Standard). URL:http://www.ietf.org/rfc/rfc3588.txt.
[5] Robert L. Carter and Mark E. Crovella. Measuring bottleneck link speed in packet-switched
networks. Performance Evaluation, 27&28:297–318, 1996.
[6] Adrian E. Conway and Yali Zhu. A simulation-based methodology and tool for automating
the modeling and analysis of voice-over-IP perceptual quality. Performance Evaluation 54
(2003) 129 147, 54(2):129–147, October 2003.
[7] C. Demichelis and P. Chimento. IP Packet Delay Variation Metric for IP Performance Met-
rics (IPPM). Request for Comments RFC 3393, Internet Engineering Task Force, November
2002. (Internet Proposed Standard). URL:http://www.ietf.org/rfc/rfc3393.txt.
[8] Service quality assessment for connection set-up and release delays. ITU-T Recommendation
E.431, International Telecommunication Union, 1992.
[9] The e-model, a computational model for use in transmission planning. ITU-T Recommen-
dation G.107, International Telecommunication Union, 2000.
[10] Van Jacobson. Pathchar: How to infer the characteristics of internet
paths. Lecture at Mathematical Sciences Research Institute, April 1997.
URL:ftp://ftp.ee.lbl.gov/pathchar/msri-talk.pdf.
[11] R. Koodli and R. Ravikanth. One-way Loss Pattern Sample Metrics. Request for
Comments RFC 3357, Internet Engineering Task Force, August 2002. (Informational).
URL:http://www.ietf.org/rfc/rfc3357.txt.
[12] J. Mahdavi and V. Paxson. IPPM Metrics for Measuring Connectivity. Request for Com-
ments RFC 2678, Internet Engineering Task Force, September 1999. (Internet Proposed
Standard) (Obsoletes RFC2498). URL:http://www.ietf.org/rfc/rfc2678.txt.
[13] M. Mathis and M. Allman. A Framework for Defining Empirical Bulk Transfer Capacity
Metrics. Request for Comments RFC 3148, Internet Engineering Task Force, July 2001.
(Informational). URL:http://www.ietf.org/rfc/rfc3148.txt.
[14] V. Paxson, M. Allman, S. Dawson, W. Fenner, J. Griner, I. Heavens, K. Lahey,
J. Semke, and B. Volz. Known TCP Implementation Problems. Request for Com-
ments RFC 2525, Internet Engineering Task Force, March 1999. (Informational).
URL:http://www.ietf.org/rfc/rfc2525.txt.
[15] V. Paxson, G. Almes, J. Mahdavi, and M. Mathis. Framework for IP Performance Metrics.
Request for Comments RFC 2330, Internet Engineering Task Force, May 1998. (Informa-
tional). URL:http://www.ietf.org/rfc/rfc2330.txt.
[16] V. Raisanen, G. Grotefeld, and A. Morton. Network performance measurement with periodic
streams. Request for Comments RFC 3432, Internet Engineering Task Force, November 2002.
(Internet Proposed Standard). URL:http://www.ietf.org/rfc/rfc3432.txt.
[17] H. Schulzrinne, S. Casner, R. Frederick, and V. Jacobson. RTP: A Transport Proto-
col for Real-Time Applications. Request for Comments RFC 3550, Internet Engineer-
ing Task Force, July 2003. (Internet Standard) (Obsoletes RFC1889) (Also STD0064).
URL:http://www.ietf.org/rfc/rfc3550.txt.
[18] Matti Siekkinen, Guillaume Urvoy-Keller, Ernst W Biersack, and Taoufik En-Najjary. Root
cause analysis for long-lived TCP connections. In Co-NEXT 2005, 1st ACM/e-NEXT In-
ternational Conference on Future Networking Technologies, 24-27 October, 2005, Toulouse,
France, October 2005.
[19] K. Zhang and E. Elkin. XACCT’s Common Reliable Accounting for Network
Element (CRANE) Protocol Specification Version 1.0. Request for Comments
RFC 3423, Internet Engineering Task Force, November 2002. (Informational).
URL:http://www.ietf.org/rfc/rfc3423.txt.