
Chapter 3

Transport Layer


Transport services and protocols

 provide logical communication between app processes running on different hosts
 transport protocols run in end systems
  • send side: breaks app messages into segments, passes to network layer
  • rcv side: reassembles segments into messages, passes to app layer
 more than one transport protocol available to apps
  • Internet: TCP and UDP
3-2

Transport vs. network layer

 network layer: logical communication between hosts
 transport layer: logical communication between processes
  • relies on, enhances, network layer services

household analogy: 12 kids in Ali’s house sending letters to 12 kids in Ahmed’s house:
 hosts = houses
 processes = kids
 app messages = letters in envelopes
 transport protocol = Ali and Ahmed who demux to in-house siblings
 network-layer protocol = postal service
3-3

Internet transport-layer protocols

 reliable, in-order delivery (TCP)
  • congestion control
  • flow control
  • connection setup
 unreliable, unordered delivery: UDP
  • no-frills extension of “best-effort” IP
 services not available:
  • delay guarantees
  • bandwidth guarantees

3-4

Multiplexing/demultiplexing

multiplexing at sender:
handle data from multiple sockets, add transport header (later used for demultiplexing)

demultiplexing at receiver:
use header info to deliver received segments to correct socket

3-5

How demultiplexing works

 host receives IP datagrams
  • each datagram has source IP address, destination IP address
  • each datagram carries one transport-layer segment
  • each segment has source, destination port number
 host uses IP addresses & port numbers to direct segment to appropriate socket

TCP/UDP segment format (32 bits wide): source port #, dest port #, other header fields, application data (payload)

3-6

Connectionless demux: example

three UDP sockets, one per process:

DatagramSocket mySocket2 = new DatagramSocket(9157);    // client process P3
DatagramSocket serverSocket = new DatagramSocket(6428); // server process P1
DatagramSocket mySocket1 = new DatagramSocket(5775);    // client process P4

segment from P3 to server:   source port 9157, dest port 6428
segment from server to P3:   source port 6428, dest port 9157
segment from P4 to server:   source port ?,    dest port ?
segment from server to P4:   source port ?,    dest port ?
3-7
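The slide’s socket calls can be exercised directly. Below is a minimal sketch of the server side (port 6428 is from the slide; the buffer size and class name are made up for illustration) showing that a UDP socket is identified by its destination port alone, and that a reply is addressed using the source IP and port carried in the arriving datagram.

import java.net.DatagramPacket;
import java.net.DatagramSocket;

public class UdpDemuxServer {
    public static void main(String[] args) throws Exception {
        // all datagrams addressed to port 6428 are demultiplexed to this one socket,
        // regardless of which host or source port they come from
        DatagramSocket serverSocket = new DatagramSocket(6428);

        byte[] buf = new byte[1024];
        DatagramPacket pkt = new DatagramPacket(buf, buf.length);
        serverSocket.receive(pkt);   // e.g. a segment with source port 9157 or 5775
        System.out.println("datagram from " + pkt.getAddress() + ":" + pkt.getPort());

        // reply: dest IP/port taken from the received datagram's source IP/port;
        // the reply's source port is implicitly 6428
        DatagramPacket reply = new DatagramPacket(pkt.getData(), pkt.getLength(),
                pkt.getAddress(), pkt.getPort());
        serverSocket.send(reply);
        serverSocket.close();
    }
}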

Connection-oriented demux

 TCP socket identified by 4-tuple:
  • source IP address
  • source port number
  • dest IP address
  • dest port number
 demux: receiver uses all four values to direct segment to appropriate socket
 server host may support many simultaneous TCP sockets:
  • each socket identified by its own 4-tuple
 web servers have different sockets for each connecting client
  • non-persistent HTTP will have different socket for each request

3-8

Connection-oriented demux: example

web server at IP address B listening on port 80; clients at IP addresses A and C; each connection handled at the server by its own process (P4, P5, P6)

segments arriving at server B:
  • from A: source IP,port = A,9157   dest IP,port = B,80
  • from C: source IP,port = C,5775   dest IP,port = B,80
  • from C: source IP,port = C,9157   dest IP,port = B,80

three segments, all destined to IP address B, dest port 80, are demultiplexed to different sockets

3-9

Connection-oriented demux: example

threaded server: the same three segments as above (A,9157 and C,5775 and C,9157, all to dest IP,port B,80) arrive at server B, but a single process (P4) now handles all connections; each connection still has its own socket, so the segments are still demultiplexed separately

3-10
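A minimal sketch of the threaded-server pattern above (port 8080 stands in for 80, which normally needs privileges; the per-connection handler is elided): every accepted connection gets its own socket, and arriving segments are demultiplexed to those sockets by the full 4-tuple even though all clients send to the same destination port.

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class ThreadedTcpServer {
    public static void main(String[] args) throws IOException {
        ServerSocket welcomeSocket = new ServerSocket(8080);   // one listening socket
        while (true) {
            // accept() returns a new socket per connection; segments from
            // (A,9157), (C,5775), (C,9157) would each land on a different one
            Socket connSocket = welcomeSocket.accept();
            System.out.println("connection " + connSocket.getRemoteSocketAddress()
                    + " -> " + connSocket.getLocalSocketAddress());
            new Thread(() -> {                                  // one thread per connection
                try { connSocket.close(); } catch (IOException ignored) { }
            }).start();
        }
    }
}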

UDP: User Datagram Protocol [RFC 768]

 “no frills,” “bare bones” Internet transport protocol
 “best effort” service; UDP segments may be:
  • lost
  • delivered out-of-order to app
 connectionless:
  • no handshaking between UDP sender, receiver
  • each UDP segment handled independently of others

UDP use:
 streaming multimedia apps (loss tolerant, rate sensitive)
 DNS
 SNMP
 reliable transfer over UDP:
  • add reliability at application layer
  • application-specific error recovery!
3-11

UDP: segment header

UDP segment format (32 bits wide): source port #, dest port #, length, checksum, then application data (payload)
  • length: length, in bytes, of UDP segment, including header

why is there a UDP?
 no connection establishment (which can add delay)
 simple: no connection state at sender, receiver
 small header size
 no congestion control: UDP can blast away as fast as desired
3-12

UDP checksum

Goal: detect “errors” (e.g., flipped bits) in transmitted segment

sender:
 treat segment contents, including header fields, as sequence of 16-bit integers
 checksum: addition (one’s complement sum) of segment contents
 sender puts checksum value into UDP checksum field

receiver:
 compute checksum of received segment
 check if computed checksum equals checksum field value:
  • NO - error detected
  • YES - no error detected. But maybe errors nonetheless? More later ….
3-13

Internet checksum: example

example: add two 16-bit integers

              1 1 1 0 0 1 1 0 0 1 1 0 0 1 1 0
              1 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1
wraparound  1 1 0 1 1 1 0 1 1 1 0 1 1 1 0 1 1

sum           1 0 1 1 1 0 1 1 1 0 1 1 1 1 0 0
checksum      0 1 0 0 0 1 0 0 0 1 0 0 0 0 1 1

Note: when adding numbers, a carryout from the most significant bit needs to be added to the result

3-14
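The same addition can be reproduced in a few lines; the sketch below (class and method names are just for illustration) folds the carry-out back into the low 16 bits and complements the result, matching the checksum shown above.

public class InternetChecksum {
    // one's-complement sum of 16-bit words, carry-out wrapped back in
    static int checksum(int[] words) {
        int sum = 0;
        for (int w : words) {
            sum += w & 0xFFFF;
            if ((sum & 0x10000) != 0) {       // carry out of the most significant bit
                sum = (sum & 0xFFFF) + 1;     // wrap it around and add to the result
            }
        }
        return (~sum) & 0xFFFF;               // checksum = one's complement of the sum
    }

    public static void main(String[] args) {
        // the two 16-bit integers from the example above
        int[] words = { 0b1110011001100110, 0b1101010101010101 };
        // prints 100010001000011, i.e. 0100010001000011 with the leading zero dropped
        System.out.println(Integer.toBinaryString(checksum(words)));
    }
}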

rdt3.0 in action

(a) no loss:
  send pkt0 → rcv pkt0, send ack0 → rcv ack0
  send pkt1 → rcv pkt1, send ack1 → rcv ack1
  send pkt0 → rcv pkt0, send ack0 → …

(b) packet loss:
  send pkt0 → rcv pkt0, send ack0 → rcv ack0
  send pkt1 → X (pkt1 lost)
  timeout → resend pkt1 → rcv pkt1, send ack1 → rcv ack1
  send pkt0 → rcv pkt0, send ack0 → …
3-15

rdt3.0 in action

(c) ACK loss:
  send pkt0 → rcv pkt0, send ack0 → rcv ack0
  send pkt1 → rcv pkt1, send ack1 → X (ack1 lost)
  timeout → resend pkt1 → rcv pkt1 (detect duplicate), send ack1 → rcv ack1
  send pkt0 → rcv pkt0, send ack0 → …

(d) premature timeout / delayed ACK:
  send pkt0 → rcv pkt0, send ack0 → rcv ack0
  send pkt1 → rcv pkt1, send ack1 (ack1 delayed)
  timeout → resend pkt1 → rcv pkt1 (detect duplicate), send ack1
  rcv (delayed) ack1 → send pkt0 → rcv pkt0, send ack0
  rcv (duplicate) ack1; rcv ack0 → send pkt0 → rcv pkt0, send ack0
3-16

Pipelined protocols

pipelining: sender allows multiple, “in-flight”, yet-to-be-acknowledged pkts
  • range of sequence numbers must be increased
  • buffering at sender and/or receiver

 two generic forms of pipelined protocols: go-back-N, selective repeat
3-17

Pipelined protocols: overview

Go-back-N:
 sender can have up to N unacked packets in pipeline
 receiver only sends cumulative ack
  • doesn’t ack packet if there’s a gap
 sender has timer for oldest unacked packet
  • when timer expires, retransmit all unacked packets

Selective Repeat:
 sender can have up to N unack’ed packets in pipeline
 rcvr sends individual ack for each packet
 sender maintains timer for each unacked packet
  • when timer expires, retransmit only that unacked packet
3-18

GBN in action
sender window (N=4)

  sender: send pkt0, pkt1, pkt2, pkt3   (pkt2 lost)
  receiver: receive pkt0, send ack0; receive pkt1, send ack1
  receiver: receive pkt3, discard, (re)send ack1
  sender: rcv ack0, send pkt4; rcv ack1, send pkt5; ignore duplicate ACKs
  receiver: receive pkt4, discard, (re)send ack1; receive pkt5, discard, (re)send ack1
  sender: pkt2 timeout: resend pkt2, pkt3, pkt4, pkt5
  receiver: rcv pkt2, deliver, send ack2; rcv pkt3, deliver, send ack3;
            rcv pkt4, deliver, send ack4; rcv pkt5, deliver, send ack5

3-19
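A minimal single-host sketch of the sender-side Go-Back-N state in the trace above (no real network: sends just print, and acks and the timeout are invoked by hand), with window size N = 4.

public class GbnSender {
    static final int N = 4;      // window size
    int base = 0;                // seq # of oldest unacked packet
    int nextSeqNum = 0;          // next seq # to use

    void send() {
        if (nextSeqNum < base + N) {
            System.out.println("send pkt" + nextSeqNum++);
        } else {
            System.out.println("window full: refuse data");
        }
    }

    void onAck(int ackNum) {     // cumulative ACK: ackNum and everything before it
        base = Math.max(base, ackNum + 1);
    }

    void onTimeout() {           // timer for oldest unacked pkt expired: resend all unacked
        for (int i = base; i < nextSeqNum; i++) {
            System.out.println("resend pkt" + i);
        }
    }

    public static void main(String[] args) {
        GbnSender s = new GbnSender();
        for (int i = 0; i < 4; i++) s.send();  // pkt0..pkt3 (pkt2 is lost in the trace)
        s.onAck(0); s.send();                  // ack0 slides the window, send pkt4
        s.onAck(1); s.send();                  // ack1 slides the window, send pkt5
        s.onTimeout();                         // pkt2 timeout: resend pkt2..pkt5
    }
}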


Selective repeat
 receiver individually acknowledges all correctly
received pkts
• buffers pkts, as needed, for eventual in-order delivery
to upper layer
 sender only resends pkts for which ACK not
received
• sender timer for each unACKed pkt
 sender window
• N consecutive seq #’s
• limits seq #s of sent, unACKed pkts

3-20

Selective repeat in action
sender window (N=4)

  sender: send pkt0, pkt1, pkt2, pkt3   (pkt2 lost)
  receiver: receive pkt0, send ack0; receive pkt1, send ack1
  receiver: receive pkt3, buffer, send ack3
  sender: rcv ack0, send pkt4; rcv ack1, send pkt5
  receiver: receive pkt4, buffer, send ack4; receive pkt5, buffer, send ack5
  sender: record ack3 arrived; record ack4 arrived; record ack5 arrived
  sender: pkt2 timeout: resend pkt2
  receiver: rcv pkt2; deliver pkt2, pkt3, pkt4, pkt5; send ack2

Q: what happens when ack2 arrives?

3-21
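A complementary sketch of the receiver side of selective repeat (again no network; payloads are placeholder strings): each packet is acked individually and buffered, and any in-order run is then delivered to the upper layer, mirroring the trace above.

import java.util.TreeMap;

public class SrReceiver {
    int expected = 0;                                 // next seq # owed to the upper layer
    final TreeMap<Integer, String> buffer = new TreeMap<>();

    void onPacket(int seq) {
        System.out.println("receive pkt" + seq + ", send ack" + seq);  // individual ack
        buffer.put(seq, "data" + seq);                // buffer even if out of order
        while (buffer.containsKey(expected)) {        // deliver any in-order run
            System.out.println("deliver " + buffer.remove(expected));
            expected++;
        }
    }

    public static void main(String[] args) {
        SrReceiver r = new SrReceiver();
        r.onPacket(0); r.onPacket(1);                 // delivered immediately
        r.onPacket(3); r.onPacket(4); r.onPacket(5);  // pkt2 lost: these are buffered
        r.onPacket(2);                                // retransmitted pkt2: deliver 2,3,4,5
    }
}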

TCP: Overview   RFCs: 793, 1122, 1323, 2018, 2581

 point-to-point:
  • one sender, one receiver
 reliable, in-order byte stream:
  • no “message boundaries”
 pipelined:
  • TCP congestion and flow control set window size
 full duplex data:
  • bi-directional data flow in same connection
  • MSS: maximum segment size
 connection-oriented:
  • handshaking (exchange of control msgs) inits sender, receiver state before data exchange
 flow controlled:
  • sender will not overwhelm receiver
3-22

TCP segment structure

TCP segment format (32 bits wide):
 source port #, dest port #
 sequence number, acknowledgement number
  • counting by bytes of data (not segments!)
 head len, unused bits, flag bits U A P R S F, receive window
  • URG: urgent data (generally, not used)
  • ACK: ACK # valid
  • PSH: push data now (generally, not used)
  • RST, SYN, FIN: connection estab (setup, teardown commands)
  • receive window: # bytes rcvr willing to accept
 checksum (Internet checksum, as in UDP), urg data pointer
 options (variable length)
 application data (variable length)

3-23

TCP round trip time, timeout

Q: how to set TCP timeout value?
 longer than RTT
  • but RTT varies
 too short: premature timeout, unnecessary retransmissions
 too long: slow reaction to segment loss

Q: how to estimate RTT?
 SampleRTT: measured time from segment transmission until ACK receipt
  • ignore retransmissions
 SampleRTT will vary, want estimated RTT “smoother”
  • average several recent measurements, not just current SampleRTT
3-24

TCP round trip time, timeout

EstimatedRTT = (1 - α)·EstimatedRTT + α·SampleRTT

 exponential weighted moving average
 influence of past sample decreases exponentially fast
 typical value: α = 0.125
3-25
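A minimal sketch of this EWMA with α = 0.125 (the SampleRTT values and the initial estimate below are made up, only to show the smoothing effect):

public class RttEstimator {
    static final double ALPHA = 0.125;
    static double estimatedRtt = 100.0;   // initial estimate, in ms (arbitrary)

    static void onSample(double sampleRtt) {
        // EstimatedRTT = (1 - α)·EstimatedRTT + α·SampleRTT
        estimatedRtt = (1 - ALPHA) * estimatedRtt + ALPHA * sampleRtt;
    }

    public static void main(String[] args) {
        double[] samples = { 120, 90, 150, 110 };      // hypothetical SampleRTT measurements
        for (double s : samples) {
            onSample(s);
            System.out.printf("SampleRTT=%5.1f  EstimatedRTT=%6.2f%n", s, estimatedRtt);
        }
    }
}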

TCP reliable data transfer

 TCP creates rdt service on top of IP’s unreliable service
  • pipelined segments
  • cumulative acks
  • single retransmission timer
 retransmissions triggered by:
  • timeout events
  • duplicate acks

let’s initially consider simplified TCP sender:
  • ignore duplicate acks
  • ignore flow control, congestion control
3-26

TCP sender events:

data rcvd from app:
 create segment with seq #
 seq # is byte-stream number of first data byte in segment
 start timer if not already running
  • think of timer as for oldest unacked segment
  • expiration interval: TimeOutInterval

timeout:
 retransmit segment that caused timeout
 restart timer

ack rcvd:
 if ack acknowledges previously unacked segments
  • update what is known to be ACKed
  • start timer if there are still unacked segments
3-27

TCP: retransmission scenarios

lost ACK scenario:
  Host A: SendBase=92, send Seq=92, 8 bytes of data
  Host B: rcv, send ACK=100 → X (ACK lost)
  Host A: timeout, resend Seq=92, 8 bytes of data
  Host B: rcv duplicate, send ACK=100; Host A: SendBase=100

premature timeout:
  Host A: send Seq=92, 8 bytes of data; send Seq=100, 20 bytes of data
  Host B: send ACK=100, then ACK=120
  Host A: timeout before the ACKs arrive, resend Seq=92, 8 bytes of data
  Host A: rcv ACK=100 (SendBase=100), rcv ACK=120 (SendBase=120)
  Host B: rcv duplicate Seq=92, send ACK=120; Host A: SendBase=120
3-28

TCP: retransmission scenarios

cumulative ACK:
  Host A: send Seq=92, 8 bytes of data; send Seq=100, 20 bytes of data
  Host B: send ACK=100 → X (lost); send ACK=120
  Host A: rcv ACK=120 before timeout; the cumulative ACK covers both segments, so nothing is retransmitted
  Host A: send Seq=120, 15 bytes of data
3-29

TCP fast retransmit

 time-out period often relatively long:
  • long delay before resending lost packet
 detect lost segments via duplicate ACKs
  • sender often sends many segments back-to-back
  • if segment is lost, there will likely be many duplicate ACKs

TCP fast retransmit: if sender receives 3 ACKs for same data (“triple duplicate ACKs”), resend unacked segment with smallest seq #
 likely that unacked segment lost, so don’t wait for timeout
3-30

TCP fast retransmit

  Host A: send Seq=92, 8 bytes of data; send Seq=100, 20 bytes of data → X (lost); keep sending later segments
  Host B: send ACK=100, then duplicate ACK=100 for each later segment that arrives
  Host A: on the triple duplicate ACK=100, resend Seq=100, 20 bytes of data before the timeout expires

fast retransmit after sender receipt of triple duplicate ACK
3-31
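A minimal sketch of the trigger logic (the retransmission itself is just a print; sendBase and the ACK numbers follow the figure): an ACK that does not advance sendBase counts as a duplicate, and the third duplicate causes an immediate resend.

public class FastRetransmitSender {
    int sendBase = 92;       // smallest unacked byte, as in the figure
    int dupAckCount = 0;

    void onAck(int ackNum) {
        if (ackNum > sendBase) {          // new data acknowledged
            sendBase = ackNum;
            dupAckCount = 0;
        } else {                          // duplicate ACK
            dupAckCount++;
            if (dupAckCount == 3) {       // "triple duplicate ACK"
                System.out.println("fast retransmit: resend segment starting at seq " + sendBase);
            }
        }
    }

    public static void main(String[] args) {
        FastRetransmitSender s = new FastRetransmitSender();
        s.onAck(100);                               // ACK=100 advances sendBase to 100
        s.onAck(100); s.onAck(100); s.onAck(100);   // three duplicates: resend seq 100
    }
}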

TCP flow control

application may remove data from TCP socket buffers … slower than TCP receiver is delivering (sender is sending)

flow control: receiver controls sender, so sender won’t overflow receiver’s buffer by transmitting too much, too fast

(figure: receiver protocol stack; data from the sender passes through IP and TCP code into the TCP socket receiver buffers in the OS, from which the application process reads)
3-32

TCP 3-way handshake

client state: LISTEN; server state: LISTEN

1. client: choose init seq num x, send TCP SYN msg (SYNbit=1, Seq=x); client state: SYNSENT
2. server: choose init seq num y, send TCP SYNACK msg, acking SYN (SYNbit=1, Seq=y, ACKbit=1, ACKnum=x+1); server state: SYN RCVD
3. client: received SYNACK(x) indicates server is live; send ACK for SYNACK (ACKbit=1, ACKnum=y+1); this segment may contain client-to-server data; client state: ESTAB
4. server: received ACK(y) indicates client is live; server state: ESTAB
3-33
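In the socket API the whole exchange happens inside the connect and accept calls; a minimal sketch (port 6789 chosen arbitrarily, both ends in one process only for demonstration):

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class HandshakeDemo {
    public static void main(String[] args) throws IOException {
        ServerSocket welcome = new ServerSocket(6789);      // server enters LISTEN

        // client: new Socket() blocks until SYN, SYNACK, ACK have completed
        Socket client = new Socket("localhost", 6789);      // client side now ESTAB

        Socket conn = welcome.accept();                     // server-side connection socket, ESTAB
        System.out.println("established with " + conn.getRemoteSocketAddress());

        conn.close();                                       // teardown via FIN/ACK (next slide)
        client.close();
        welcome.close();
    }
}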


TCP: closing a connection


 client, server each close their side of connection
• send TCP segment with FIN bit = 1
 respond to received FIN with ACK
• on receiving FIN, ACK can be combined with own FIN
 simultaneous FIN exchanges can be handled

3-34

Principles of congestion control
congestion:
 informally: “too many sources sending too much
data too fast for network to handle”
 different from flow control!
 manifestations:
• lost packets (buffer overflow at routers)
• long delays (queueing in router buffers)
 a top-10 problem!

3-35

Causes/costs of congestion

idealization: perfect knowledge
 sender sends only when router buffers available

(figure: Host A offers λin (original data) and λ'in (original data, plus retransmitted data) into a router with finite shared output link buffers; Host B receives λout; with free buffer space always available, λout rises with λin and levels off at R/2)
3-36

Causes/costs of congestion

idealization: known loss; packets can be lost, dropped at router due to full buffers
 sender only resends if packet known to be lost

(figure: Host A now sends λ'in, original data plus retransmitted data; when the router has no buffer space, packets are dropped and copies must be retransmitted)
3-37

Causes/costs of congestion

idealization: known loss; packets can be lost, dropped at router due to full buffers
 sender only resends if packet known to be lost

when sending at R/2, some packets are retransmissions, but asymptotic goodput is still R/2 (why?)
3-38

Causes/costs of congestion

realistic: duplicates
 packets can be lost, dropped at router due to full buffers
 sender times out prematurely, sending two copies, both of which are delivered

when sending at R/2, some packets are retransmissions, including duplicates that are delivered!
3-39

Causes/costs of congestion

realistic: duplicates
 packets can be lost, dropped at router due to full buffers
 sender times out prematurely, sending two copies, both of which are delivered

“costs” of congestion:
 more work (retransmissions) for given “goodput”
 unneeded retransmissions: link carries multiple copies of pkt
  • decreasing goodput
3-40

TCP congestion control: additive increase, multiplicative decrease (AIMD)

 approach: sender increases transmission rate (window size), probing for usable bandwidth, until loss occurs
  • additive increase: increase cwnd by 1 MSS every RTT until loss detected
  • multiplicative decrease: cut cwnd in half after loss

AIMD saw tooth behavior: probing for bandwidth; additively increase window size … until loss occurs (then cut window in half), so the congestion window size cwnd traces a saw tooth over time
3-41

TCP Slow Start

 when connection begins, increase rate exponentially until first loss event:
  • initially cwnd = 1 MSS
  • double cwnd every RTT
  • done by incrementing cwnd for every ACK received
 summary: initial rate is slow but ramps up exponentially fast

(figure: Host A sends one segment in the first RTT, two in the next, four in the next, …)
3-42

TCP: detecting, reacting to loss

 loss indicated by timeout:
  • cwnd set to 1 MSS;
  • window then grows exponentially (as in slow start) to threshold, then grows linearly
 loss indicated by 3 duplicate ACKs: TCP RENO
  • dup ACKs indicate network capable of delivering some segments
  • cwnd is cut in half, window then grows linearly
 TCP Tahoe always sets cwnd to 1 (timeout or 3 duplicate ACKs)

3-43


TCP: switching from slow start to CA


Q: when should the
exponential
increase switch to
linear?
A: when cwnd gets
to 1/2 of its value
before timeout.

Implementation:
 variable ssthresh
 on loss event, ssthresh
is set to 1/2 of cwnd just
before loss event

3-44
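Pulling the last few slides together, a minimal sketch of the window logic (units are MSS; the initial ssthresh value is arbitrary, and this is only the slides' simplified model, not a full TCP implementation):

public class CongestionWindow {
    double cwnd = 1;          // congestion window, in MSS (slow start begins at 1 MSS)
    double ssthresh = 64;     // slow-start threshold, in MSS (arbitrary initial value)

    void onAck() {
        if (cwnd < ssthresh) {
            cwnd += 1;             // slow start: +1 MSS per ACK, doubling cwnd every RTT
        } else {
            cwnd += 1.0 / cwnd;    // congestion avoidance: roughly +1 MSS per RTT
        }
    }

    void onTripleDupAck() {        // TCP Reno: loss signalled by 3 duplicate ACKs
        ssthresh = cwnd / 2;       // ssthresh = 1/2 of cwnd just before loss
        cwnd = ssthresh;           // cut cwnd in half, then grow linearly
    }

    void onTimeout() {             // loss signalled by timeout (Tahoe reacts this way always)
        ssthresh = cwnd / 2;
        cwnd = 1;                  // back to 1 MSS and slow start
    }
}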

TCP throughput

 avg. TCP thruput as function of window size, RTT?
  • ignore slow start, assume always data to send
 W: window size (measured in bytes) where loss occurs
  • avg. window size (# in-flight bytes) is ¾ W
  • avg. thruput is ¾ W per RTT

    avg TCP thruput = (3/4) · W / RTT  bytes/sec

(the window oscillates between W/2 and W, which is why the average is ¾ W)
3-45
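As a quick plug-in with made-up numbers: if loss occurs at a window of W = 100 Kbytes and RTT = 100 ms, then avg thruput ≈ 0.75 × 100 Kbytes / 0.1 s = 750 Kbytes/sec, i.e. about 6 Mbits/sec.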

TCP Fairness

fairness goal: if K TCP sessions share same bottleneck link of bandwidth R, each should have average rate of R/K

(figure: TCP connection 1 and TCP connection 2 share a bottleneck router of capacity R)
3-46
