21-Traffic Management-TCP Congestion-24-10-2024

Uploaded by Divyansh Pansari

Chapter 24

Congestion Control and


Quality of Service

24.1 Copyright © The McGraw-Hill Companies, Inc. Permission required for reproduction or display.
24-1 DATA TRAFFIC

The main focus of congestion control and quality of


service is data traffic. In congestion control we try to
avoid traffic congestion. In quality of service, we try to
create an appropriate environment for the traffic. So,
before talking about congestion control and quality of
service, we discuss the data traffic itself.

Topics discussed in this section:


Traffic Descriptor
Traffic Profiles

24.2
Figure 24.1 Traffic descriptors

24.3
Average data rate

The average data rate is the number of bits


sent during a period of time, divided by the
number of seconds in that period.

Average data rate = amount of data / time
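The definition above is a single division; a minimal sketch with hypothetical numbers:

```python
def average_data_rate(bits_sent: int, seconds: float) -> float:
    """Average data rate = amount of data / time, in bits per second."""
    return bits_sent / seconds

# Hypothetical example: 10,000,000 bits sent over 5 seconds -> 2,000,000 bps (2 Mbps)
rate = average_data_rate(10_000_000, 5)
```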

24.4
Figure 24.2 Three traffic profiles

24.5
24-2 CONGESTION

Congestion in a network may occur if the load on the


network—the number of packets sent to the network—
is greater than the capacity of the network—the
number of packets a network can handle. Congestion
control refers to the mechanisms and techniques to
control the congestion and keep the load below the
capacity.

Topics discussed in this section:


Network Performance

24.6
Figure 24.3 Queues in a router

24.7
Network Performance
Figure Packet delay and throughput as functions of load

24.8
24-3 CONGESTION CONTROL

Congestion control refers to techniques and


mechanisms that can either prevent congestion, before
it happens, or remove congestion, after it has
happened. In general, we can divide congestion
control mechanisms into two broad categories: open-
loop congestion control (prevention) and closed-loop
congestion control (removal).

Topics discussed in this section:


Open-Loop Congestion Control
Closed-Loop Congestion Control

24.9
Figure 24.5 Congestion control categories

24.10
Open-Loop Congestion Control
Open-loop congestion control policies are applied to prevent congestion
before it happens. Congestion control is handled by either the source or
the destination.

Retransmission Policy:
This policy governs how retransmission of packets is handled. If the
sender believes that a sent packet is lost or corrupted, the packet needs
to be retransmitted. Retransmission may increase congestion in the
network. To prevent this, retransmission timers must be designed both to
avoid congestion and to optimize efficiency.

Window Policy:
The type of window at the sender's side may also affect congestion. In a
Go-Back-N window, several packets are resent even though some of them may
have been received successfully at the receiver. This duplication can make
congestion worse. A Selective Repeat window should therefore be adopted,
since it resends only the specific packets that may have been lost.
24.11
Discarding Policy: A good discarding policy
lets routers prevent congestion by partially
discarding corrupted or less-sensitive
packets while still maintaining the quality
of the message.

Acknowledgment Policy: Since
acknowledgments are also part of the load on
the network, the acknowledgment policy
imposed by the receiver may also affect
congestion. Several approaches can be used to
prevent congestion related to
acknowledgments; for example, the receiver
can send one cumulative acknowledgment for N
packets rather than acknowledging each packet
individually.
24.12
Admission Policy:
In the admission policy, a mechanism is used
to prevent congestion before it develops.
Switches in a flow should first check the
resource requirements of a network flow
before forwarding it further. If there is
congestion, or a risk of congestion, the
router should refuse to establish the
virtual-circuit connection to prevent
further congestion.

24.13
Closed-Loop Congestion Control
Closed-loop congestion control techniques are
used to treat or alleviate congestion after it
happens. Several techniques are used by
different protocols; some of them are:
Backpressure:
Backpressure is a technique in which a
congested node stops receiving packets from
the upstream node. This may cause the upstream
node or nodes to become congested, so that they
in turn reject data from the nodes above them.
Backpressure is a node-to-node congestion
control technique that propagates in the
direction opposite to the flow of data. It can
be applied only to virtual-circuit networks, in
which each node knows the upstream node from
which a flow of data is coming.
24.14
Figure 24.6 Backpressure method for alleviating congestion

In the figure above, node III is congested and
stops receiving packets; as a result, node II
may become congested because its output data
flow has slowed down. Similarly, node I may
become congested and inform the source to slow
down.
24.15
Figure 24.7 Choke packet

24.16
Choke Packet Technique: The choke packet
technique is applicable to both virtual-circuit
networks and datagram subnets. A choke packet is
a packet sent by a node to the source to inform
it of congestion. Each router monitors its
resources and the utilization of each of its
output lines. Whenever the utilization exceeds a
threshold value set by the administrator, the
router sends a choke packet directly to the
source as feedback to reduce the traffic. The
intermediate nodes through which the packet has
traveled are not warned about the congestion.

24.17
Implicit Signaling: In implicit
signaling, there is no communication
between the congested node or nodes
and the source. The source guesses
that there is congestion in the
network. For example, when a sender
sends several packets and receives no
acknowledgment for a while, one
assumption is that the network is
congested.

24.18
Explicit Signaling: In explicit
signaling, a node that experiences
congestion can explicitly send a
packet to the source or destination
to inform it of the congestion. The
difference from the choke packet
technique is that the signal is
included in the packets that carry
data rather than in a separate
packet. Explicit signaling can occur
in either the forward or the backward
direction.
24.19
• Forward Signaling: In forward
signaling, a signal is sent in the
direction of the congestion. The
destination is warned about the
congestion and can adopt policies to
prevent it from getting worse.
• Backward Signaling: In backward
signaling, a signal is sent in the
direction opposite to the congestion.
The source is warned about the
congestion and needs to slow down.

24.20
24-4 TWO EXAMPLES

To better understand the concept of congestion


control, let us give two examples: one in TCP and the
other in Frame Relay.

Topics discussed in this section:


Congestion Control in TCP
Congestion Control in Frame Relay

24.21
Congestion Control
Mechanisms
 Additive Increase/Multiplicative Decrease

 Slow start

 Fast retransmit & Fast recovery

Computer Networks: TCP Congestion Control 22


Congestion control in TCP

Actual window size = min (rwnd, cwnd)

The congestion window (cwnd) is a TCP state
variable that limits the amount of data TCP
can send into the network before receiving
an ACK.

The receiver window (rwnd) is a variable
that advertises the amount of data that
the destination side can receive.
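The rule above is simply the minimum of the two windows; a minimal sketch with hypothetical byte counts:

```python
def actual_window(rwnd: int, cwnd: int) -> int:
    """The sender's usable window is limited by both the receiver
    (rwnd) and the network (cwnd): actual window = min(rwnd, cwnd)."""
    return min(rwnd, cwnd)

# Hypothetical values: here the network (cwnd) is the bottleneck.
w = actual_window(rwnd=65_535, cwnd=8_192)
```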
24.23
Figure 24.8 Slow start, exponential increase

24.24
Note

In the slow-start algorithm, the size of


the congestion window increases
exponentially until it reaches a
threshold.

24.25
Figure 24.9 Congestion avoidance, additive increase

24.26
Note

In the congestion avoidance algorithm,


the size of the congestion window
increases additively until
congestion is detected.

24.27
Note

An implementation reacts to congestion


detection in one of the following ways:
❏ If detection is by time-out, a new slow
start phase starts.
❏ If detection is by three ACKs, a new
congestion avoidance phase starts.

24.28
Figure 24.10 TCP congestion policy summary

Max segment size

24.29
Figure 24.11 Congestion example

24.30
Congestion in Frame Relay decreases throughput and
increases delay, whereas high throughput and low delay
are the main goals of the Frame Relay protocol.
Frame Relay has no flow control and allows the user to
transmit bursty data.
This means that a Frame Relay network can easily become
congested with traffic, so congestion control is
required.
Frame Relay uses congestion avoidance by means of two
bit fields in the Frame Relay frame that explicitly
warn the source and destination of the presence of
congestion:
24.31
BECN:
The Backward Explicit Congestion Notification (BECN) bit warns
the sender of congestion in the network. This is achieved by the
switches in the network sending a frame in the reverse direction.
The sender can respond to this warning by reducing its
transmission rate, thus reducing the effects of congestion.
FECN:
The Forward Explicit Congestion Notification (FECN) bit is used
to warn the receiver of congestion in the network. It might
appear that the receiver cannot do anything to relieve the
congestion; however, the Frame Relay protocol assumes that the
sender and receiver are communicating with each other, and when
the receiver sees the FECN bit set to 1 it delays its
acknowledgments. This forces the sender to slow down, reducing
the effects of congestion in the network.
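As a small illustration of how these single-bit flags are read, the sketch below assumes the standard two-octet Q.922 Frame Relay address field, in which the second octet carries the FECN, BECN, and DE bits. The mask values reflect that assumed layout.

```python
# Bit masks for the second octet of a two-octet Frame Relay address
# field (assumed Q.922 layout: DLCI low bits, FECN, BECN, DE, EA).
FECN = 0x08
BECN = 0x04
DE   = 0x02

def congestion_bits(second_octet: int) -> dict:
    """Return which congestion-related bits are set in the octet."""
    return {
        "fecn": bool(second_octet & FECN),
        "becn": bool(second_octet & BECN),
        "de":   bool(second_octet & DE),
    }

# Hypothetical octet with both FECN and BECN set (0x08 | 0x04 = 0x0C).
bits = congestion_bits(0x0C)
```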

24.32
Figure 24.12 BECN

24.33
Figure 24.13 FECN

24.34
Figure 24.14 Four cases of congestion

24.35
24-5 QUALITY OF SERVICE

Quality of service (QoS) is an internetworking issue


that has been discussed more than defined. We can
informally define quality of service as something a
flow seeks to attain.

Topics discussed in this section:


Flow Characteristics
Flow Classes

24.36
Figure 24.15 Flow characteristics

24.37
QoS Parameters
Reliability
Lack of reliability means losing a packet or an
acknowledgment (sent to confirm a packet's
successful arrival at the destination), which
entails retransmission.
However, application programs differ in their
sensitivity to reliability. For example, file
transfer and e-mail require reliable service,
unlike telephony or audio conferencing.

Transit Delay
This is the time between a message being sent by
the transport user on the source machine and its
being received by the transport user on the
destination machine.
24.38
Jitter is the variation in delay for packets belonging
to the same flow. For applications such as audio and
video, it does not matter whether the packets arrive
with a short or a long delay, as long as the delay is
the same for all packets.
High jitter means the difference between the delays of
packets is large; low jitter means the variation is
small.

Bandwidth: The effective bandwidth is the
bandwidth that the network needs to allocate for a
flow of traffic. It is a function of three values:
the average data rate, the peak data rate, and the
maximum burst size.
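The jitter idea above can be made concrete with one simple measure of delay variation: the spread between the largest and smallest per-packet delay. The delay values below are hypothetical, in milliseconds.

```python
def jitter_ms(delays_ms: list) -> int:
    """A simple jitter measure for one flow: the spread between the
    largest and smallest per-packet delay (max - min)."""
    return max(delays_ms) - min(delays_ms)

# Hypothetical per-packet delays in milliseconds for two flows.
low  = jitter_ms([21, 22, 21, 23])   # nearly uniform delay -> low jitter
high = jitter_ms([21, 45, 90, 23])   # widely varying delay -> high jitter
```

Note that other jitter definitions exist (e.g. averaging consecutive delay differences); max minus min is used here only because it matches the slide's "difference between delays" wording.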

24.39
24-6 TECHNIQUES TO IMPROVE QoS

In Section 24.5 we tried to define QoS in terms of its


characteristics. In this section, we discuss some
techniques that can be used to improve the quality of
service. We briefly discuss four common methods:
scheduling, traffic shaping, admission control, and
resource reservation.
Topics discussed in this section:
Scheduling
Traffic Shaping
Resource Reservation
Admission Control

24.40
Scheduling
Figure 24.16 FIFO queue

24.41
Figure 24.17 Priority queuing

24.42
Figure 24.18 Weighted fair queuing

24.43
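The weighted fair queuing idea in Figure 24.18 can be sketched as a round-robin scheduler that, in each round, takes up to `weight` packets from each queue. This is a simplified illustration (real WFQ computes per-packet finish times); the function name and packet labels are hypothetical.

```python
from collections import deque

def wfq_round_robin(queues, weights):
    """Yield packets from the deques in `queues`, taking up to the
    corresponding weight from each queue per round."""
    while any(queues):
        for q, w in zip(queues, weights):
            for _ in range(w):
                if q:
                    yield q.popleft()

# A weight-3 queue gets three packets served for every one from a
# weight-1 queue.
q_hi = deque(["H1", "H2", "H3", "H4"])
q_lo = deque(["L1", "L2"])
order = list(wfq_round_robin([q_hi, q_lo], [3, 1]))
```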
Traffic shaping
Figure 24.19 Leaky bucket

24.44
24.45
Figure 24.20 Leaky bucket implementation

24.46
Note

A leaky bucket algorithm shapes bursty


traffic into fixed-rate traffic by averaging
the data rate. It may drop the packets if
the bucket is full.
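The note above can be sketched directly: bursty arrivals fill a finite bucket, the bucket drains at a fixed rate, and arrivals that overflow the bucket are dropped. Tick-based units and the function name are illustrative assumptions.

```python
def leaky_bucket(arrivals, capacity, rate):
    """Shape bursty `arrivals` (packets per tick) through a bucket of
    size `capacity` drained at `rate` packets per tick.
    Returns (packets sent, packets dropped)."""
    level = 0
    sent = dropped = 0
    for a in arrivals:
        accepted = min(a, capacity - level)  # overflow is dropped
        dropped += a - accepted
        level += accepted
        out = min(level, rate)               # fixed-rate output
        sent += out
        level -= out
    while level > 0:                         # drain the remainder
        out = min(level, rate)
        sent += out
        level -= out
    return sent, dropped
```

A burst of 5 packets into a bucket of capacity 3 with rate 1 sends 3 packets (one per tick) and drops 2, illustrating both the averaging and the possible loss.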

24.47
Note

The token bucket allows bursty traffic at


a regulated maximum rate.
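In contrast to the leaky bucket, the token bucket described above permits a burst up to the number of saved tokens. The sketch below is a simplified tick-based model (one token per packet); names and units are illustrative.

```python
def token_bucket(arrivals, rate, capacity):
    """Regulate `arrivals` (packets per tick) with a token bucket:
    tokens accumulate at `rate` per tick up to `capacity`, and each
    sent packet spends one token. Returns packets sent per tick;
    excess packets wait in a backlog."""
    tokens = capacity      # bucket starts full, allowing an initial burst
    backlog = 0
    sent = []
    for a in arrivals:
        backlog += a
        out = min(backlog, tokens)             # spend tokens on packets
        tokens -= out
        backlog -= out
        sent.append(out)
        tokens = min(capacity, tokens + rate)  # refill for the next tick
    return sent
```

With a full bucket of 3 tokens and a refill rate of 1, a burst of 5 packets goes out as 3 at once (the allowed burst) followed by 1 per tick.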

24.48
Figure 24.21 Token bucket

24.49
24.50
24-9 QoS IN SWITCHED NETWORKS

Let us now discuss QoS as used in two switched


networks: Frame Relay and Asynchronous Transfer
Mode. These two networks are virtual-circuit networks
that need a signaling protocol such as Resource
reservation protocol.

Topics discussed in this section:


QoS in Frame Relay
QoS in ATM

24.51
QoS in Frame Relay
Figure 24.28 Relationship between traffic control attributes

Committed information rate (CIR)
Committed burst size (Bc)
Excess burst size (Be)
CIR = Bc / T bps
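The CIR formula is a single division; a minimal sketch with hypothetical numbers:

```python
def cir_bps(bc_bits: int, t_seconds: float) -> float:
    """Committed information rate: CIR = Bc / T, in bits per second."""
    return bc_bits / t_seconds

# Hypothetical example: Bc = 400,000 bits committed over T = 4 s
# gives a CIR of 100,000 bps (100 kbps).
cir = cir_bps(400_000, 4)
```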

24.52
Figure 24.29 User rate in relation to Bc and Bc + Be

24.53
QoS in ATM
Figure 24.30 Service classes

CBR- Constant bit rate


VBR-Variable bit rate
ABR-Available bit rate
UBR- Unspecified bit rate

24.54
The QoS feature is used when there is
traffic congestion in the network; it
gives priority to certain real-time
media. A high level of QoS is used when
transmitting real-time multimedia in
order to reduce latency and dropouts.
Asynchronous Transfer Mode (ATM) is
a networking technology that uses a
certain level of QoS in data
transmission.
Quality of service in ATM is based on
the following: service classes,
user-related attributes, and
network-related attributes.
24.55
Classes:
The ATM Forum defines four service classes, explained
below.
1. Constant Bit Rate (CBR) –
CBR is mainly for users who want real-time audio or
video services. The service is similar to that provided
by a dedicated line; for example, a T-line is comparable
to CBR class service.
2. Variable Bit Rate (VBR) –
The VBR class is divided into two subclasses:
(i) Real-time (VBR-RT):
Users who need real-time transmission services such as
audio and video, and who use compression techniques to
create a variable bit rate, use the VBR-RT service
class.
(ii) Non-real-time (VBR-NRT):
Users who do not need real-time transmission services
but use compression techniques to create a variable bit
rate use the VBR-NRT service class.
24.56
3. Available Bit Rate (ABR) – ABR
delivers cells at a specified minimum
rate; if more network capacity is
available, this minimum rate can be
exceeded. ABR is well suited to
applications with bursty traffic.
4. Unspecified Bit Rate (UBR) – UBR
is a best-effort delivery service that
guarantees nothing.

24.57
Figure 24.31 Relationship of service classes to the total capacity of the network

24.58
