T14: Congestion Control and Quality of Service
Data Traffic
The main focus of congestion control and quality of service is data traffic.
Average data rate: the number of bits sent during a period of time, divided by the length of that period (in seconds).
Peak data rate: the maximum data rate of the traffic.
Maximum burst size: the maximum length of time the traffic is generated at the peak rate.
Constant bit rate (fixed rate):
The data rate does not change.
Average and peak data rates are the same.
Variable bit rate:
The rate of the data flow changes in time, with the changes smooth rather than sudden and sharp.
Average and peak data rates are different.
The maximum burst size is usually a small value.
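As a small illustration (not from the slides), the Python sketch below computes average and peak data rates from a per-second trace; the sample numbers are made up.

# Illustration: average vs. peak data rate for a traffic trace (assumed values).
bits_per_second = [1000, 1000, 1000, 9000, 1000]   # bits sent in each 1-second interval

average_rate = sum(bits_per_second) / len(bits_per_second)   # total bits / total time
peak_rate = max(bits_per_second)                             # highest instantaneous rate

print(f"average data rate = {average_rate:.0f} bps")   # 2600 bps
print(f"peak data rate    = {peak_rate} bps")          # 9000 bps
# Constant bit-rate traffic would have average == peak; for bursty traffic they differ widely.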
Traffic Profile
Bursty traffic
The data rate changes in a very short period of time.
Average and peak bit rates are very different in this type of flow.
The maximum burst size is significant.
This is the most difficult type of traffic to handle because the profile is very unpredictable.
Bursty traffic is one of the main causes of congestion.
Congestion
Congestion may occur if the load on the network (the number of packets sent to the network) is greater than the capacity of the network (the number of packets the network can handle).
Congestion control refers to mechanisms and techniques to control the congestion and keep the load below the capacity.
Congestion in a network or internetwork occurs because routers and switches have queues (buffers) that hold the packets before and after processing. A router has an input queue and an output queue for each interface.
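As an illustration of the queue model just described, here is a minimal Python sketch with one input queue and one output queue per interface; the class names and queue capacity are assumptions, not part of the slides.

from collections import deque

class Interface:
    """One router interface with an input queue and an output queue."""
    def __init__(self, capacity: int = 64):      # capacity is an assumed value
        self.capacity = capacity
        self.input_queue = deque()                # packets waiting to be processed (routed)
        self.output_queue = deque()               # packets processed and waiting to be sent

    def receive(self, packet) -> bool:
        """Enqueue an arriving packet; drop it if the input queue is full."""
        if len(self.input_queue) >= self.capacity:
            return False                          # congestion: packet dropped
        self.input_queue.append(packet)
        return True

class Router:
    """A router holds one Interface object per attached link."""
    def __init__(self, num_interfaces: int):
        self.interfaces = [Interface() for _ in range(num_interfaces)]

    def process_one(self, in_if: int, out_if: int) -> None:
        """Move one packet from an input queue to the chosen output queue."""
        if self.interfaces[in_if].input_queue:
            packet = self.interfaces[in_if].input_queue.popleft()
            self.interfaces[out_if].output_queue.append(packet)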
Network Performance
Congestion control involves two factors that measure the performance of the network: delay and throughput.
Delay versus load:
When the load is much less than the capacity, the delay is at a minimum. This minimum delay is composed of propagation delay and processing delay.
When the load reaches the network capacity, the delay increases sharply because waiting time in the queues is added.
Delay has a negative effect on the load and, consequently, on the congestion. When a packet is delayed, the source, not receiving the acknowledgement, retransmits the packet, which makes the delay, and the congestion, worse.
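The slides give no formula, but a simple single-queue (M/M/1) approximation, used here only as an illustration, shows why the delay grows sharply as the load approaches the capacity.

# Illustration only: M/M/1 queueing delay approximation (not from the slides).
# delay = 1 / (service_rate - arrival_rate), valid while arrival_rate < service_rate.

def average_delay(arrival_rate: float, service_rate: float) -> float:
    """Average time a packet spends in the system (waiting + service), in seconds."""
    if arrival_rate >= service_rate:
        return float("inf")       # load at or above capacity: the queue grows without bound
    return 1.0 / (service_rate - arrival_rate)

capacity = 1000.0                 # packets per second the router can process (assumed value)
for load_fraction in (0.1, 0.5, 0.9, 0.99):
    d = average_delay(load_fraction * capacity, capacity)
    print(f"load = {load_fraction:4.0%}  average delay = {d * 1000:.2f} ms")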
Network Performance (figures: delay versus network load; throughput versus network load)
Congestion Control
Choke Packets
In this approach, the router sends a choke packet back to the source host, giving it the destination found in the packet; the source is expected to slow down the traffic it sends toward that destination.
The original packet is tagged (a header bit is turned on) so that it will not generate any more choke packets farther along the path, and is then forwarded in the usual way.
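A minimal Python sketch of the router-side rule described above; the Packet and OutputLine types, the utilization threshold, and the send_choke_packet callback are assumptions made for illustration.

from dataclasses import dataclass, field
from collections import deque

UTILIZATION_THRESHOLD = 0.8   # assumed value: above this the output line counts as congested

@dataclass
class Packet:
    source: str
    destination: str
    choke_sent: bool = False  # the "already warned" header bit described above

@dataclass
class OutputLine:
    utilization: float
    queue: deque = field(default_factory=deque)

def handle_packet(packet: Packet, line: OutputLine, send_choke_packet) -> None:
    """Forward a packet; if the output line is congested, warn the source once."""
    if line.utilization > UTILIZATION_THRESHOLD and not packet.choke_sent:
        # Send a choke packet back to the source, naming the destination found
        # in the packet, so the source can slow the traffic it sends toward it.
        send_choke_packet(packet.source, packet.destination)
        # Tag the packet so routers farther along the path do not generate
        # additional choke packets for it.
        packet.choke_sent = True
    # The original packet is then forwarded in the usual way.
    line.queue.append(packet)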
Congestion Control in TCP
If the cause of a lost segment is congestion, retransmission of the segment does not remove the cause; it aggravates it.
The sender has no choice but to retransmit the lost packet. This may create more congestion and more dropping of packets, which means more retransmission and more congestion. A point may then be reached at which the whole system collapses and no more data can be sent. TCP therefore needs to find some way to avoid this situation.
If the network cannot deliver the data as fast as they are created by the sender, it needs to tell the sender to slow down. In other words, in addition to the receiver, the network is a second entity that determines the size of the sender's window in TCP.
Congestion Window
In TCP, the sender's window size is determined not only by the receiver but also by congestion in the network.
Actual window size = minimum (rwnd size, congestion window size)
Congestion avoidance:
1. Slow start and additive increase
2. Multiplicative decrease
Slow start:
At the start of a connection, TCP sets the congestion window to one maximum segment size (MSS).
For each segment that is acknowledged, TCP increases the congestion window by one maximum segment size, so the congestion window grows exponentially until it reaches a threshold of one-half of the allowable window size.
The sender sends one segment, receives one ACK, and increases the window to two segments; it sends two segments, receives ACKs for both, and increases the window to four segments; sends four segments, receives ACKs for them; and so on. (A minimal sketch of this growth follows the list.)
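A minimal Python sketch of the slow start growth described above, with window sizes counted in maximum segment sizes; the parameter values are assumptions.

def slow_start_growth(ssthresh: int, rwnd: int, rounds: int) -> None:
    """Show the congestion window doubling each round trip during slow start (sizes in MSS units)."""
    cwnd = 1                                   # start of connection: one maximum segment size
    for rtt in range(rounds):
        effective_window = min(rwnd, cwnd)     # actual window = min(rwnd, cwnd)
        print(f"RTT {rtt}: cwnd = {cwnd}, send up to {effective_window} segments")
        if cwnd >= ssthresh:
            print("threshold reached: switch to additive increase")
            break
        cwnd *= 2                              # one ACK per segment -> window doubles per round trip

slow_start_growth(ssthresh=8, rwnd=16, rounds=6)   # assumed example values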
Additive increase:
After the congestion window size reaches the threshold, the window is increased by one segment for each acknowledgement, even if an acknowledgement covers several segments.
The additive-increase strategy continues as long as acknowledgements arrive before their corresponding time-outs, or until the congestion window size reaches the receiver window size.
Multiplicative decrease:
If congestion occurs, the congestion window size must be decreased.
If the sender does not receive an acknowledgement for a segment before its retransmission timer expires, it assumes that there is congestion.
If a time-out occurs, the threshold is set to one-half of the last congestion window size, and the congestion window size starts from one again. In other words, the sender returns to the slow start phase.
Note that the threshold is reduced to one-half of the current congestion window size each time a time-out occurs, so the threshold decreases exponentially (multiplicative decrease). A combined sketch of these rules follows.
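Putting the three rules together, here is a minimal sender-side Python sketch, with window sizes in MSS units; the class and method names are assumptions, and real TCP implementations differ in detail.

class TcpCongestionControl:
    """Sender-side sketch of the rules above; window sizes counted in MSS units."""

    def __init__(self, rwnd: int, ssthresh: int):
        self.rwnd = rwnd              # receiver-advertised window
        self.ssthresh = ssthresh      # threshold between slow start and additive increase
        self.cwnd = 1                 # congestion window starts at one MSS

    def effective_window(self) -> int:
        # Actual window size = minimum (rwnd size, congestion window size)
        return min(self.rwnd, self.cwnd)

    def on_ack(self) -> None:
        """Called for each acknowledgement that arrives before its time-out."""
        if self.cwnd < self.ssthresh:
            self.cwnd += 1            # slow start: exponential growth
        elif self.cwnd < self.rwnd:
            self.cwnd += 1            # additive increase, as stated on the slide
                                      # (real TCP grows by roughly one MSS per round trip)

    def on_timeout(self) -> None:
        """Retransmission timer expired: congestion is assumed."""
        self.ssthresh = max(self.cwnd // 2, 1)   # multiplicative decrease of the threshold
        self.cwnd = 1                            # return to slow start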
Quality of Service: Flow Demands
Jitter Control
For applications such as audio and video streaming, it does not matter much whether the packets take 20 msec or 30 msec to be delivered, as long as the transit time is constant.
The variation (i.e., standard deviation) in the packet arrival times is called jitter. High jitter, for example some packets taking 20 msec and others taking 30 msec to arrive, gives an uneven quality to the sound or movie.
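A minimal Python sketch that computes jitter as defined above (the standard deviation of packet transit times); the sample values are made up.

import statistics

# Transit times (ms) for a sequence of packets; values assumed for illustration.
transit_times_ms = [20, 30, 22, 29, 21, 30, 20, 28]

jitter_ms = statistics.pstdev(transit_times_ms)   # standard deviation of transit times
print(f"mean transit time = {statistics.mean(transit_times_ms):.1f} ms")
print(f"jitter (std dev)  = {jitter_ms:.1f} ms")
# A constant transit time (e.g., every packet taking 25 ms) would give zero jitter.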
Techniques to Improve QoS
Scheduling: the method of processing the flows. A good scheduling technique treats the different flows in a fair and appropriate manner.
Three types of scheduling algorithms are covered here: FIFO queuing, priority queuing, and weighted fair queuing.
FIFO queuing:
First-in, first-out queuing: packets wait in a single queue and are processed in arrival order (a minimal sketch follows).
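A minimal Python sketch of FIFO queuing with a single queue; the capacity value is an assumption.

from collections import deque

class FifoQueue:
    """Single first-in, first-out queue: packets leave in arrival order."""
    def __init__(self, capacity: int = 100):   # capacity is an assumed value
        self.capacity = capacity
        self.queue = deque()

    def enqueue(self, packet) -> bool:
        if len(self.queue) >= self.capacity:
            return False                        # queue full: packet is dropped
        self.queue.append(packet)
        return True

    def dequeue(self):
        return self.queue.popleft() if self.queue else None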
Scheduling: Priority Queuing
Packets are first assigned to a priority class; each priority class has its own queue.
Packets in the highest-priority queue are processed first, and packets in the lowest-priority queue are processed last.
The system does not stop serving a queue until it is empty.
Good for multimedia traffic, which can reach its destination with less delay.
Starvation is possible: if there is a continuous flow in a high-priority queue, the packets in the lower-priority queues will never have a chance to be processed. A minimal sketch follows the list.
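A minimal Python sketch of priority queuing as described above, with one FIFO queue per priority class; the number of classes is an assumption.

from collections import deque

class PriorityQueuing:
    """One FIFO queue per priority class; class 0 is the highest priority."""
    def __init__(self, num_classes: int = 3):   # number of classes is an assumed value
        self.queues = [deque() for _ in range(num_classes)]

    def enqueue(self, packet, priority_class: int) -> None:
        self.queues[priority_class].append(packet)

    def dequeue(self):
        """Serve the highest-priority non-empty queue first.
        Note the starvation risk: a continuous flow in a high-priority queue
        keeps the lower-priority queues from ever being served."""
        for queue in self.queues:
            if queue:
                return queue.popleft()
        return None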
Scheduling: Weighted Fair Queuing
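The weighted fair queuing slide is a figure only. Assuming the usual textbook description (each queue is assigned a weight and the scheduler serves the queues round-robin, taking a number of packets from each in proportion to its weight), a rough Python sketch could look like this; the names and weights are assumptions.

from collections import deque

class WeightedFairQueuing:
    """Round-robin over per-class queues, taking 'weight' packets from each per turn."""
    def __init__(self, weights):
        self.weights = list(weights)             # e.g., [3, 2, 1] (assumed values)
        self.queues = [deque() for _ in weights]

    def enqueue(self, packet, class_index: int) -> None:
        self.queues[class_index].append(packet)

    def one_round(self):
        """Return the packets sent in one scheduling round."""
        sent = []
        for queue, weight in zip(self.queues, self.weights):
            for _ in range(weight):              # higher weight -> more packets per round
                if not queue:
                    break
                sent.append(queue.popleft())
        return sent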
Traffic Shaping
• Traffic shaping is about regulating the average rate (and
burstiness) of data transmission.
• When a connection is set up, the user and the subnet (i.e.,
the customer and the carrier) agree on a certain traffic
pattern (i.e., shape) for that circuit.
• Sometimes this is called a service level agreement.
• As long as the customer fulfills her part of the bargain and
only sends packets according to the agreed-on contract,
the carrier promises to deliver them all in a timely fashion.
• Traffic shaping reduces congestion and thus helps the
carrier live up to its promise.
• Such agreements are not so important for file transfers but
are of great importance for real-time data, such as audio
and video connections, which have stringent quality-of-
service requirements.
The Leaky Bucket Algorithm
(a) A leaky bucket with water. (b) A leaky bucket with packets.
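A minimal Python sketch of the leaky bucket idea shown in the figure: bursty arrivals are stored in a finite bucket and released at a constant rate, and packets arriving at a full bucket are discarded. The packet-based variant and the parameter names are assumptions.

from collections import deque

class LeakyBucket:
    """Shape bursty traffic into a constant output rate (packet-based variant)."""
    def __init__(self, bucket_size: int, output_rate: int):
        self.bucket_size = bucket_size      # maximum packets the bucket can hold
        self.output_rate = output_rate      # packets released per clock tick
        self.bucket = deque()

    def arrive(self, packet) -> bool:
        """Add an arriving packet; drop it if the bucket is full (overflow)."""
        if len(self.bucket) >= self.bucket_size:
            return False
        self.bucket.append(packet)
        return True

    def tick(self):
        """Once per clock tick, release at most output_rate packets."""
        released = []
        for _ in range(min(self.output_rate, len(self.bucket))):
            released.append(self.bucket.popleft())
        return released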
http://webmuseum.mi.fh-ffenburg.de/index.htm
http://www.net-seal.net/index.php