# TCPDUMP 4.x.y by [The Tcpdump Group](https://www.tcpdump.org/)

[Build status (Travis CI)](https://travis-ci.com/github/the-tcpdump-group/tcpdump)
[Build status (AppVeyor)](https://ci.appveyor.com/project/guyharris/tcpdump)

**To report a security issue please send an e-mail to security@tcpdump.org.**

To report bugs and other problems, contribute patches, request a
feature, provide generic feedback, etc., please see the
[guidelines for contributing](CONTRIBUTING) in the tcpdump source tree root.

Anonymous Git is available via:

    https://github.com/the-tcpdump-group/tcpdump.git

This directory contains source code for tcpdump, a tool for network
monitoring and data acquisition.

Over the past few years, tcpdump has been steadily improved by the
excellent contributions from the Internet community (just browse
through the [change log](CHANGES)). We are grateful for all the input.

### Dependency on libpcap

Tcpdump uses libpcap, a system-independent interface for user-level
packet capture. Before building tcpdump, you must first retrieve and
build libpcap.

Once libpcap is built (either install it or make sure it's in
`../libpcap`), you can build tcpdump using the procedure in the
[installation guide](INSTALL.txt).

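For example, one common route is sketched below, assuming the autoconf
build described in the installation guide and sibling source
directories; the libpcap repository URL is inferred by analogy with the
tcpdump repository above, and paths and options are illustrative only:

    # Fetch and build libpcap, then tcpdump, from sibling directories.
    git clone https://github.com/the-tcpdump-group/libpcap.git
    git clone https://github.com/the-tcpdump-group/tcpdump.git
    (cd libpcap && ./configure && make)
    # tcpdump's configure picks up the libpcap just built in ../libpcap
    (cd tcpdump && ./configure && make)
    ./tcpdump/tcpdump --version

If libpcap is already installed system-wide, the `../libpcap` checkout
is unnecessary.
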
### Origins of tcpdump

The program is loosely based on SMI's "etherfind" although none of the
etherfind code remains. It was originally written by Van Jacobson as
part of an ongoing research project to investigate and improve TCP and
Internet gateway performance. The parts of the program originally
taken from Sun's etherfind were later re-written by Steven McCanne of
LBL. To ensure that there would be no vestige of proprietary code in
tcpdump, Steve wrote these pieces from the specification given by the
manual entry, with no access to the source of tcpdump or etherfind.

    formerly from   Lawrence Berkeley National Laboratory
                    Network Research Group <tcpdump@ee.lbl.gov>
                    ftp://ftp.ee.lbl.gov/old/tcpdump.tar.Z (3.4)

Richard Stevens gives an excellent treatment of the Internet protocols
in his book *"TCP/IP Illustrated, Volume 1"*. If you want to learn more
about tcpdump and how to interpret its output, pick up this book.

Some tools for viewing and analyzing tcpdump trace files are available
from the [Internet Traffic Archive](http://ita.ee.lbl.gov/).

Another tool that tcpdump users might find useful is
[tcpslice](https://github.com/the-tcpdump-group/tcpslice),
a program that can be used to extract portions of tcpdump binary
trace files.

### The original LBL README by Steve McCanne, Craig Leres and Van Jacobson

This directory also contains some short awk programs intended as
examples of ways to reduce tcpdump data when you're tracking
particular network problems:

`send-ack.awk`

Simplifies the tcpdump trace for an ftp (or other unidirectional
tcp transfer). Since we assume that one host only sends and
the other only acks, all address information is left off and
we just note if the packet is a "send" or an "ack".

There is one output line per line of the original trace.
Field 1 is the packet time in decimal seconds, relative
to the start of the conversation. Field 2 is delta-time
from last packet. Field 3 is packet type/direction.
"Send" means data going from sender to receiver, "ack"
means an ack going from the receiver to the sender. A
preceding "*" indicates that the data is a retransmission.
A preceding "-" indicates a hole in the sequence space
(i.e., missing packet(s)), a "#" means an odd-size (not max
seg size) packet. Field 4 has the packet flags
(same format as raw trace). Field 5 is the sequence
number (start seq. num for sender, next expected seq number
for acks). The number in parens following an ack is
the delta-time from the first send of the packet to the
ack. A number in parens following a send is the
delta-time from the first send of the packet to the
current send (on duplicate packets only). Duplicate
sends or acks have a number in square brackets showing
the number of duplicates so far.

Here is a short sample from near the start of an ftp:

    3.20 0.20 ack . 1024 (0.20)
    3.40 0.20 ack . 1536 (0.20)
    3.80 0.40 * send . 0 (3.80) [2]
    3.82 0.02 * ack . 1536 (0.62) [2]

Three seconds into the conversation, bytes 512 through 1023
were sent. 200ms later they were acked. Shortly thereafter
bytes 1024-1535 were sent and again acked after 200ms.
Then, for no apparent reason, 0-511 is retransmitted, 3.8
seconds after its initial send (the round trip time for this
ftp was 1sec, ±500ms). Since the receiver is expecting
1536, 1536 is re-acked when 0 arrives.

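A reduced trace like the sample above is produced by running the script
over a text-mode tcpdump trace with `packetsize` set to the transfer's
maximum segment size; this mirrors the invocation shown in the
measurement steps near the end of this README (the trace file name and
the 512-byte segment size here are hypothetical):

    awk -f send-ack.awk packetsize=512 ftp.trace >sa
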
`packetdat.awk`

Computes chunk summary data for an ftp (or similar
unidirectional tcp transfer). [A "chunk" refers to
a chunk of the sequence space -- essentially the packet
sequence number divided by the max segment size.]

A summary line is printed showing the number of chunks,
the number of packets it took to send that many chunks
(if there are no lost or duplicated packets, the number
of packets should equal the number of chunks) and the
number of acks.

Following the summary line is one line of information
per chunk. The line contains eight fields:

    1 - the chunk number
    2 - the start sequence number for this chunk
    3 - time of first send
    4 - time of last send
    5 - time of first ack
    6 - time of last ack
    7 - number of times chunk was sent
    8 - number of times chunk was acked

(all times are in decimal seconds, relative to the start
of the conversation.)

As an example, here is the first part of the output for
an ftp transfer:

    # 134 chunks. 536 packets sent. 508 acks.
    1 1 0.00 5.80 0.20 0.20 4 1
    2 513 0.28 6.20 0.40 0.40 4 1
    3 1025 1.16 6.32 1.20 1.20 4 1
    4 1561 1.86 15.00 2.00 2.00 6 1
    5 2049 2.16 15.44 2.20 2.20 5 1
    6 2585 2.64 16.44 2.80 2.80 5 1
    7 3073 3.00 16.66 3.20 3.20 4 1
    8 3609 3.20 17.24 3.40 5.82 4 11
    9 4097 6.02 6.58 6.20 6.80 2 5

This says that 134 chunks were transferred (about 70K
since the average packet size was 512 bytes). It took
536 packets to transfer the data (i.e., on the average
each chunk was transmitted four times). Looking at,
say, chunk 4, we see it represents the 512 bytes of
sequence space from 1561 to 2048. It was first sent
1.86 seconds into the conversation. It was last
sent 15 seconds into the conversation and was sent
a total of 6 times (i.e., it was retransmitted every
2 seconds on the average). It was acked once, 140ms
after it first arrived.

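Once the chunk summary is in a file (written as `pd` in the commands
near the end of this README), ordinary awk one-liners are handy for
picking lines out of it; for example, with the field numbers defined
above and a hypothetical summary file name:

    awk '$1 == 4' pd               # the line for chunk 4, discussed above
    awk '$1 != "#" && $7 > 4' pd   # chunks that were sent more than 4 times
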
`stime.awk`, `atime.awk`

Output one line per send or ack, respectively, in the form

    <time> <seq. number>

where <time> is the time in seconds since the start of the
transfer and <seq. number> is the sequence number being sent
or acked. I typically plot this data looking for suspicious
patterns.

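If xgraph (used in the measurement steps below) isn't available, the
same two-column `<time> <seq. number>` output can be plotted with any
tool that reads such pairs. A hypothetical gnuplot equivalent, with
made-up file names, might look like:

    # Reduce the trace to send and ack plot data, then plot both series.
    awk -f stime.awk tracefile >sends
    awk -f atime.awk tracefile >acks
    gnuplot -e "plot 'sends' with points, 'acks' with points; pause -1"
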
The problem I was looking at was the bulk-data-transfer
throughput of medium delay network paths (1-6 sec. round trip
time) under typical DARPA Internet conditions. The trace of the
ftp transfer of a large file was used as the raw data source.
The method was:

- On a local host (but not the Sun running tcpdump), connect to
  a remote host.

- On the monitor Sun, start the trace going. E.g.,

      tcpdump host local-host and remote-host and port ftp-data >tracefile

- On local, do either a get or put of a large file (~500KB),
  preferably to the null device (to minimize effects like
  closing the receive window while waiting for a disk write).

- When the transfer is finished, stop tcpdump. Use awk to make up
  two files of summary data (maxsize is the maximum packet size,
  tracedata is the file of tcpdump trace data); the whole pipeline
  is collected in the sketch after this list:

      awk -f send-ack.awk packetsize=maxsize tracedata >sa
      awk -f packetdat.awk packetsize=maxsize tracedata >pd

- While the summary data files are printing, take a look at
  how the transfer behaved:

      awk -f stime.awk tracedata | xgraph

  (90% of what you learn seems to happen in this step).

- Do all of the above steps several times, both directions,
  at different times of day, with different protocol
  implementations on the other end.

- Using one of the Unix data analysis packages (in my case,
  S and Gary Perlman's Unix|Stat), spend a few months staring
  at the data.

- Change something in the local protocol implementation and
  redo the steps above.

- Once a week, tell your funding agent that you're discovering
  wonderful things and you'll write up that research report
  "real soon now".
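
For convenience, here is the measurement pipeline from the steps above
condensed into one sketch; the host names, the trace file name, and the
512-byte segment size are placeholders:

    # Capture the text-mode trace on the monitoring host.
    tcpdump host local-host and remote-host and port ftp-data >tracedata
    # ... run the large ftp transfer on local-host, then stop tcpdump ...

    # Reduce the trace with the awk programs described above.
    awk -f send-ack.awk packetsize=512 tracedata >sa
    awk -f packetdat.awk packetsize=512 tracedata >pd

    # Quick visual check of how the transfer behaved.
    awk -f stime.awk tracedata | xgraph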