
Congestion Control and Quality of Service

Data Traffic

 The main focus of congestion control and quality of service is data traffic.
 Average data rate: the number of bits sent during a period of time, divided by the number of seconds in that period. It indicates the average bandwidth needed by the traffic.
 Peak data rate: the maximum data rate of the traffic. It indicates the peak bandwidth the network needs for the traffic to pass through without changing its flow.
 Maximum burst size: the peak data rate can be ignored if the duration of the peak is very short. Maximum burst size refers to the maximum length of time the traffic is generated at the peak rate.
 Effective bandwidth: the bandwidth that the network needs to allocate for the flow of traffic. It depends on the average data rate, the peak data rate, and the maximum burst size.
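As a quick illustration of the first two parameters, here is a minimal sketch, assuming a flow is described by the number of bits sent in each second; the function names and sample values are illustrative, not from the slides:

    def average_data_rate(bits_per_second: list[int]) -> float:
        """Number of bits sent in a period divided by the seconds in it."""
        return sum(bits_per_second) / len(bits_per_second)

    def peak_data_rate(bits_per_second: list[int]) -> int:
        """Maximum instantaneous (per-second) rate of the traffic."""
        return max(bits_per_second)

    flow = [100, 200, 800, 800, 100]   # bits sent in each second (example)
    print(average_data_rate(flow))     # 400.0 bps: average bandwidth needed
    print(peak_data_rate(flow))        # 800 bps: peak bandwidth needed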
Traffic Profile
1. CBR
2. VBR
3. Bursty traffic

 Constant bit rate (CBR):
 Fixed-rate traffic.
 The data rate does not change over time.
 The average and peak data rates are the same.

 Variable bit rate (VBR):
 The rate of the data flow changes over time, with the changes smooth rather than sudden and sharp.
 The average and peak data rates are different.
 The maximum burst size is usually a small value.

Traffic Profile

 Bursty traffic:
 The data rate changes in a very short period of time.
 The average and peak bit rates are very different in this type of flow.
 The maximum burst size is significant.
 This is the most difficult type of traffic to handle, because its profile is very unpredictable.
 It is one of the main causes of congestion.

Congestion

 Congestion may occur if the load on the network – the number of packets sent to the network – is greater than the capacity of the network – the number of packets a network can handle.
 Congestion control refers to mechanisms and techniques to control the congestion and keep the load below the capacity.
 Congestion in a network or internetwork occurs because routers and switches have queues – buffers that hold the packets before and after processing. A router has an input queue and an output queue for each interface.

Congestion

 When a packet arrives at an incoming interface, it goes through three steps:
1. The packet is put at the end of the input queue while it waits to be checked.
2. The processing module of the router removes the packet from the front of the input queue and makes a routing decision using the routing table.
3. The packet is put into the appropriate output queue and waits its turn to be sent.
 If the rate of packet arrival is higher than the packet processing rate, the input queues grow longer.
 If the packet processing rate is higher than the packet departure rate, the output queues grow longer.

Network Performance
 Congestion control involves two factors that measure the performance of the network: delay and throughput.
 Delay versus load:
 When the load is much less than the capacity, the delay is at a minimum. This minimum delay is composed of propagation delay and processing delay.
 When the load reaches the network capacity, the delay increases sharply because waiting time in the queues is added.
 Delay has a negative effect on the load and, consequently, on congestion: when a packet is delayed, the source, not receiving the acknowledgement, retransmits the packet, which makes the delay, and the congestion, worse.

Network Performance

[Figure: packet delay as a function of network load]
Performance: Throughput vs network load

 We can define throughput in a network as the number of packets passing through the network in a unit of time.
 When the load is below the capacity of the network, the throughput increases proportionally with the load.
 After the load reaches the capacity, the throughput declines sharply because routers begin discarding packets.
 When the load exceeds the capacity, the queues become full and routers have to discard some packets.
 Discarding packets does not reduce the number of packets in the network, because the sources, using time-out mechanisms, retransmit packets that do not reach the destination.

[Figure: throughput as a function of network load]
Congestion Control

 Congestion control refers to techniques and mechanisms that can either prevent congestion before it happens or remove congestion after it has happened:
1. Open-loop congestion control [prevention]
2. Closed-loop congestion control [removal]

 Open-loop congestion control [prevention]
 Policies are applied to prevent congestion before it happens.
 Congestion control is handled by either the source or the destination.
 Retransmission policy: a good retransmission policy and well-tuned retransmission timers optimize efficiency and reduce congestion.

Congestion Control

 Open-loop congestion control [prevention] (continued)
 Window policy: the type of window used at the sender may also affect congestion. Selective Repeat is better than Go-Back-N.
 Acknowledgement policy: the policy set by the receiver may also affect congestion. If the receiver does not acknowledge every packet it receives, it may slow down the sender and help prevent congestion.
 Discard policy: routers may discard less sensitive packets (for example, in audio transmission) to prevent congestion.
 Admission policy: switches in a flow first check the resource requirements of a flow before admitting it to the network.
Closed-loop congestion control [Removal]

 Back pressure: when a router is congested, it can inform the upstream router to reduce the rate of outgoing packets. The action can be recursive, all the way back to the router just after the source.
 Choke packet: a packet sent by a router directly to the source to inform it of congestion. This type of control is similar to ICMP's source-quench message.

Choke Packets
 In this approach, the router sends a choke packet back to the source host, giving it the destination found in the packet.
 The original packet is tagged (a header bit is turned on) so that it will not generate any more choke packets farther along the path, and is then forwarded in the usual way.

• When the source host gets the choke packet, it is required to reduce the traffic sent to the specified destination by X percent.
• Since other packets aimed at the same destination are probably already under way and will generate yet more choke packets, the host should ignore choke packets referring to that destination for a fixed time interval.
• After that period has expired, the host listens for more choke packets for another interval.
• If one arrives, the line is still congested, so the host reduces the flow still more and begins ignoring choke packets again.
• If no choke packets arrive during the listening period, the host may increase the flow again.
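A minimal sketch of this reduce/ignore/listen cycle from the host's point of view. All names and values are illustrative: the 50% reduction factor, the interval lengths, and the chokes_in helper are assumptions, since the slides do not fix them exactly:

    import time

    REDUCE = 0.5       # illustrative: cut the rate in half each congested round
    INCREASE = 1.25    # illustrative: gentle increase once congestion clears
    IGNORE = LISTEN = 1.0   # interval lengths in seconds (not fixed by the slides)

    def react_to_chokes(rate: float, chokes_in) -> float:
        """chokes_in(seconds) is an assumed helper: did any choke packet for
        this destination arrive during a window of that length?"""
        rate *= REDUCE                  # required cut after the first choke packet
        while True:
            time.sleep(IGNORE)          # chokes from packets already in flight are ignored
            if chokes_in(LISTEN):       # listen for more chokes for another interval
                rate *= REDUCE          # line still congested: reduce the flow further
            else:
                return rate * INCREASE  # quiet listening period: increase the flow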
Hop-by-Hop Choke Packets
 At high speeds or over long distances, sending a choke packet back to the source host does not work well because the reaction is too slow.
 Consider, for example, a host S (behind router A in Fig (a)) sending traffic to a host N (behind router D in Fig (a)) at 155 Mbps.
 If host N begins to run out of buffers, it will take about 30 msec for a choke packet to get back to S to tell it to slow down.
 The choke packet propagation is shown as the second, third, and fourth steps in Fig (a).
 In those 30 msec, another 4.6 megabits will have been sent (155 Mbps × 30 msec ≈ 4.6 Mb).
 Even if host S shuts down completely and immediately, those 4.6 megabits already in the pipe will continue to pour in and must be dealt with.
 Only in the seventh diagram in Fig (a) will router D notice a slower flow.
[Figure: (a) A choke packet that affects only the source. (b) A choke packet that affects each hop it passes through.]


 Implicit signaling: the source can detect an implicit signal of congestion and slow down its sending rate. For example, the mere delay in receiving an acknowledgement can be a signal that the network is congested.
 Explicit signaling: routers that experience congestion can send an explicit signal, the setting of a bit in a packet, for example, to inform the sender or the receiver of the congestion.
 Backward signaling: a bit can be set in a packet moving in the direction opposite to the congestion, to warn the source about the congestion.
 Forward signaling: a bit can be set in a packet moving in the direction of the congestion, to warn the destination about the congestion.



Congestion control in TCP

 A packet from a sender may pass through several routers before reaching its final destination.
 Each router has a buffer that stores incoming packets, processes them, and forwards them.
 If a router receives packets faster than it can process them, congestion might occur and some packets could be dropped.
 When a packet does not reach the destination, no acknowledgement is sent for it.

 The sender has no choice but to retransmit the lost packet. This may create more congestion and more dropped packets, which in turn mean more retransmissions and more congestion.
 A point may then be reached at which the whole system collapses and no more data can be sent. TCP therefore needs a way to avoid this situation.
 If the network cannot deliver the data as fast as the sender creates them, it needs to tell the sender to slow down. In other words, in addition to the receiver, the network is a second entity that determines the size of the sender's window in TCP.
 If the cause of a lost segment is congestion, retransmitting the segment does not remove the cause – it aggravates it.
Congestion Window
 In TCP, the sender's window size is determined not only by the receiver but also by congestion in the network:
Actual window size = minimum (rwnd, congestion window size)
 Congestion avoidance:
1. Slow start and additive increase
2. Multiplicative decrease
 Slow start:
 At the start of a connection, TCP sets the congestion window to one maximum segment size.
 For each segment that is acknowledged, TCP increases the congestion window by one maximum segment size, until the window reaches a threshold of one-half of the allowable window size; the window size therefore increases exponentially.
 The sender sends one segment, receives one ACK, and increases the window to two segments; it sends two segments, receives their ACKs, and increases the window to four segments; it sends four segments, receives their ACKs, and so on.

Congestion Window

 Additive increase:
 After the window size reaches the threshold, the size is increased by one segment for each acknowledgement, even if an acknowledgement is for several segments.
 The additive-increase strategy continues as long as the acknowledgements arrive before their corresponding time-outs, or until the congestion window size reaches the receiver window size.

Congestion Window

 Multiplicative decrease:
 If congestion occurs, the congestion window size must be decreased.
 If the sender does not receive an acknowledgement for a segment before its retransmission timer expires, it assumes that there is congestion.
 If a time-out occurs, the threshold is set to one-half of the last congestion window size, and the congestion window starts again from one maximum segment size. In other words, the sender returns to the slow-start phase.
 Note that the threshold is reduced to one-half of the current congestion window size each time a time-out occurs, so repeated time-outs reduce it exponentially (multiplicative decrease).
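The three phases can be made concrete with a small round-based sketch, counting the window in whole segments. This is an illustrative model, not TCP's actual implementation: the function name, the initial threshold of 16, the receiver window of 64, and the simulated time-out in round 7 are all assumed example values:

    def next_window(cwnd: int, ssthresh: int, rwnd: int, timeout: bool):
        """One round of TCP-style window adjustment (illustrative model)."""
        if timeout:                          # multiplicative decrease:
            return 1, max(cwnd // 2, 1)      # halve threshold, restart slow start
        if cwnd < ssthresh:                  # slow start: exponential growth,
            cwnd = min(cwnd * 2, ssthresh)   # capped at the threshold
        else:                                # additive increase:
            cwnd += 1                        # one segment per round of ACKs
        return min(cwnd, rwnd), ssthresh     # actual window = min(cwnd, rwnd)

    cwnd, ssthresh, rwnd = 1, 16, 64         # assumed example values
    for rnd in range(12):
        cwnd, ssthresh = next_window(cwnd, ssthresh, rwnd, timeout=(rnd == 7))
        print(rnd, cwnd, ssthresh)           # 2, 4, 8, 16, 17, 18, 19, then drop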

Multiplicative decrease

[Figure: multiplicative decrease of the congestion window]
Quality of Service: Flow Demands

 Reliability: lack of reliability means losing a packet or an acknowledgement, which entails retransmission. Different application programs need different levels of reliability.
 Delay: source-to-destination delay. Delay tolerance varies between applications.
 Jitter: the variation in delay for packets belonging to the same flow. Real-time audio and video applications cannot tolerate high jitter.
 Bandwidth: different applications need different amounts of bandwidth (bits per second).
 Flow classes: depending on the flow characteristics, we can classify flows into groups, e.g., CBR, UBR, etc.

Jitter Control
 For applications such as audio and video streaming, it does not matter much whether the packets take 20 msec or 30 msec to be delivered, as long as the transit time is constant.
 The variation (i.e., standard deviation) in the packet arrival times is called jitter. High jitter, for example, some packets taking 20 msec and others taking 30 msec to arrive, will give an uneven quality to the sound or movie.
 Jitter is illustrated in the figure on the next slide.
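A minimal sketch of measuring jitter this way, assuming the per-packet transit times in msec are already known; the sample values are illustrative:

    import statistics

    low_jitter  = [20, 21, 20, 22, 21, 20]   # msec: nearly constant transit time
    high_jitter = [20, 30, 21, 29, 20, 30]   # msec: same mean, uneven arrivals

    print(statistics.stdev(low_jitter))    # small value -> smooth playback
    print(statistics.stdev(high_jitter))   # large value -> uneven audio/video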



[Figure: (a) High jitter. (b) Low jitter.]
QoS Requirements

[Table: how stringent the QoS requirements are for various applications]
Techniques to Improve QoS
 Scheduling
 FIFO Queuing
 Priority Queuing
 Weighted Fair Queuing
 Traffic Shaping
 Leaky Bucket
 Token Bucket
 Combination of Leaky Bucket and Token Bucket.
 Resource Reservations
 Integrated Services
 Differentiated Services
 Admission Control

Techniques to Improve QoS
 Scheduling: the method of processing the flows. A good scheduling technique treats the different flows in a fair and appropriate manner.
 Three types of scheduling algorithms are discussed here.
 FIFO queuing:
 First-in, first-out queuing.
 Packets wait in a buffer (queue) until the node (router or switch) is ready to process them.
 If the average arrival rate is higher than the average processing rate, the queue will fill up and new packets will be discarded.
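A minimal drop-tail sketch of this behavior; the capacity and the callback names are illustrative assumptions:

    from collections import deque

    CAPACITY = 8            # illustrative buffer size (packets)
    fifo = deque()

    def on_arrival(pkt):
        """New packets join the tail, or are discarded if the queue is full."""
        if len(fifo) < CAPACITY:
            fifo.append(pkt)
        # else: queue full because arrivals outpace processing; packet dropped

    def on_node_ready(send):
        """The node processes packets strictly in arrival order."""
        if fifo:
            send(fifo.popleft())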

Scheduling: Priority Queuing
 Packets are first assigned to a priority class.
 Each priority class has its own queue.
 Packets in the highest-priority queue are processed first; packets in the lowest-priority queue are processed last.
 The system does not stop serving a queue until it is empty.
 Good for multimedia traffic.
 Starvation is possible: if there is a continuous flow in a high-priority queue, the packets in the lower-priority queues will never have a chance to be processed.
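A strict-priority sketch with two illustrative classes (the names are hypothetical); it also makes the starvation risk visible, since the low queue is served only when the high queue is empty:

    from collections import deque

    high, low = deque(), deque()   # one queue per priority class (illustrative)

    def next_packet():
        """Serve the higher class until it is empty, then the lower one."""
        if high:
            return high.popleft()  # low starves while high keeps arriving
        if low:
            return low.popleft()
        return None                # both queues empty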

Scheduling: Weighted Fair Queuing

 Packets are assigned to different classes and admitted to different queues.
 The system processes packets in each queue in round-robin fashion, with the number of packets selected from each queue based on the corresponding weight.
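A minimal weighted round-robin sketch of this packet-count interpretation, with hypothetical class names and weights; production weighted fair queuing schedulers are more elaborate (e.g., byte-based virtual finish times):

    from collections import deque

    queues = {"gold": deque(), "silver": deque(), "bronze": deque()}
    weights = {"gold": 3, "silver": 2, "bronze": 1}   # illustrative weights

    def serve_one_round(send):
        """Visit each class in turn; a weight of w allows w packets per visit."""
        for cls, q in queues.items():
            for _ in range(weights[cls]):
                if not q:
                    break          # an empty queue gives up the rest of its turn
                send(q.popleft())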

Traffic Shaping
• Traffic shaping is about regulating the average rate (and burstiness) of data transmission.
• When a connection is set up, the user and the subnet (i.e., the customer and the carrier) agree on a certain traffic pattern (i.e., shape) for that circuit.
• Sometimes this is called a service level agreement.
• As long as the customer fulfills her part of the bargain and only sends packets according to the agreed-on contract, the carrier promises to deliver them all in a timely fashion.
• Traffic shaping reduces congestion and thus helps the carrier live up to its promise.
• Such agreements are not so important for file transfers but are of great importance for real-time data, such as audio and video connections, which have stringent quality-of-service requirements.
The Leaky Bucket Algorithm

[Figure: (a) A leaky bucket with water. (b) A leaky bucket with packets.]


• Imagine a bucket with a small hole in the bottom, as illustrated in Fig (a).
• No matter the rate at which water enters the bucket, the outflow is at a constant rate r when there is any water in the bucket, and zero when the bucket is empty.
• Also, once the bucket is full, any additional water entering it spills over the sides and is lost.
• The same idea can be applied to packets, as shown in Fig (b).
• Conceptually, each host is connected to the network by an interface containing a leaky bucket, that is, a finite internal queue.
• If a packet arrives at the queue when it is full, the packet is discarded.
• In other words, if one or more processes within the host try to send a packet when the maximum number is already queued, the new packet is unceremoniously discarded.
• The host is allowed to put one packet per clock tick onto the network.
• The leaky bucket consists of a finite queue.
• When a packet arrives, if there is room on the queue it is appended to the queue; otherwise, it is discarded.
• At every clock tick, one packet is transmitted (unless the queue is empty).

Leaky bucket implementation

[Figure: implementation of the leaky bucket algorithm]


• The byte-counting leaky bucket is implemented almost the same way.
• At each tick, a counter is initialized to n.
• If the first packet on the queue has fewer bytes than the current value of the counter, it is transmitted, and the counter is decremented by that number of bytes.
• Additional packets may also be sent, as long as the counter is high enough.
• When the counter drops below the length of the next packet on the queue, transmission stops until the next tick, at which time the residual byte count is reset and the flow can continue.
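A hedged sketch of this byte-counting discipline, assuming each queued packet exposes a size in bytes and that some clock calls on_tick every tick; n and the helper names are illustrative:

    from collections import deque

    N = 1500                       # illustrative per-tick byte allowance (n)
    queue = deque()                # packets, each assumed to have a .size in bytes

    def on_tick(send):
        counter = N                # the counter is reset at every tick
        while queue and queue[0].size <= counter:
            pkt = queue.popleft()  # head packet fits within the remaining count
            counter -= pkt.size    # decrement by the bytes just transmitted
            send(pkt)
        # if a head packet remains, it is longer than the residual count:
        # transmission stops here until the next tick resets the counter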
The Token Bucket Algorithm
• The leaky bucket algorithm enforces a rigid output pattern at the average rate, no matter how bursty the traffic is.
• For many applications, it is better to allow the output to speed up somewhat when large bursts arrive, so a more flexible algorithm is needed, preferably one that never loses data.
• One such algorithm is the token bucket algorithm.

The Token Bucket Algorithm
In this algorithm, the leaky bucket holds tokens, generated by a clock at the rate of one token every ΔT sec.
In Fig (a) we see a bucket holding three tokens, with five packets waiting to be transmitted.
For a packet to be transmitted, it must capture and destroy one token.
In Fig (b), we see that three of the five packets have gotten through, but the other two are stuck waiting for two more tokens to be generated.

[Figure: (a) Before. (b) After.]
[Figure: token bucket]


• The leaky bucket algorithm does not allow idle hosts to save up permission to send large bursts later.
• The token bucket algorithm does allow saving, up to the maximum size of the bucket, n. This property means that bursts of up to n packets can be sent at once, allowing some burstiness in the output stream and giving faster response to sudden bursts of input.
• Another difference between the two algorithms is that the token bucket algorithm throws away tokens (i.e., transmission capacity) when the bucket fills up but never discards packets.
• In contrast, the leaky bucket algorithm discards packets when the bucket fills up.



• The implementation of the basic token bucket algorithm is just a variable that counts tokens.
• The counter is incremented by one every ΔT and decremented by one whenever a packet is sent.
• When the counter hits zero, no packets may be sent.
• A minor variant is possible, in which each token represents the right to send not one packet, but k bytes.
• In the byte-count variant, the counter is incremented by k bytes every ΔT and decremented by the length of each packet sent.
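A minimal sketch of the packet-count variant under illustrative assumptions; the bucket cap, clock hook, and send hook are hypothetical:

    tokens = 0
    BUCKET_SIZE = 8        # illustrative cap n: tokens beyond this are thrown away

    def on_tick():
        """Called every ΔT by an assumed clock source."""
        global tokens
        tokens = min(tokens + 1, BUCKET_SIZE)   # excess tokens are discarded

    def try_send(pkt, send) -> bool:
        global tokens
        if tokens == 0:
            return False       # no tokens: the packet must wait (never dropped)
        tokens -= 1            # capture and destroy one token
        send(pkt)
        return True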



• Essentially, what the token bucket does is allow bursts, but up to a regulated maximum length.
• Calculating the length of the maximum-rate burst is slightly tricky. It is not just the size of the burst divided by the output rate, because while the burst is being output, more tokens arrive.
• If we call the burst length S sec, the token bucket capacity C bytes, the token arrival rate ρ bytes/sec, and the maximum output rate M bytes/sec, we see that an output burst contains a maximum of C + ρS bytes.
• We also know that the number of bytes in a maximum-speed burst of length S seconds is MS. Therefore:

C + ρS = MS
• We can solve this equation to get S = C/(M − ρ).
• Look at Fig (c) for an example. Here we have a token bucket with a capacity of 250 KB, and tokens arrive at a rate allowing output at 2 MB/sec. Assuming the token bucket is full when the 1-MB burst arrives, the bucket can drain at the full 25 MB/sec for about 11 msec; then it has to cut back to 2 MB/sec until the entire input burst has been sent.
• For our parameters of C = 250 KB, M = 25 MB/sec, and ρ = 2 MB/sec, we get a burst time of about 11 msec.
• Fig (d) and Fig (e) show the token bucket for capacities of 500 KB and 750 KB, respectively.
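To check the arithmetic, a quick evaluation of S = C/(M − ρ) with these example numbers:

    C = 250e3     # token bucket capacity, bytes
    M = 25e6      # maximum output rate, bytes/sec
    rho = 2e6     # token arrival rate, bytes/sec

    S = C / (M - rho)           # maximum-rate burst length, seconds
    print(S * 1e3)              # ~10.87 msec, i.e. "about 11 msec"
    print(C + rho * S, M * S)   # both ~271.7 KB: the burst's byte count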



[Figure: (a) Input to a leaky bucket. (b) Output from a leaky bucket. Output from a token bucket with capacities of (c) 250 KB, (d) 500 KB, (e) 750 KB. (f) Output from a 500-KB token bucket feeding a 10-MB/sec leaky bucket.]
• One way to get smoother traffic is to insert a leaky bucket after the token bucket.
• The rate of the leaky bucket should be higher than the token bucket's ρ but lower than the maximum rate of the network.
• Fig (f) shows the output for a 500-KB token bucket followed by a 10-MB/sec leaky bucket.



Animated Illustrations

 http://webmuseum.mi.fh-ffenburg.de/index.htm

 http://www.net-seal.net/index.php

