21-Traffic Management-TCP Congestion-24-10-2024
24.1 Copyright © The McGraw-Hill Companies, Inc. Permission required for reproduction or display.
24-1 DATA TRAFFIC
24.2
Figure 24.1 Traffic descriptors
24.3
Average data rate
24.4
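The two basic traffic descriptors can be computed directly from a traffic profile. Below is a minimal Python sketch (the function names and the sample profile are mine, not from the slides), assuming the profile is sampled in fixed-length intervals:

```python
# Traffic descriptors from a sampled traffic profile (illustrative sketch).
def average_rate(bits_per_interval, interval_s):
    """Average data rate = total data sent / total elapsed time (bps)."""
    return sum(bits_per_interval) / (len(bits_per_interval) * interval_s)

def peak_rate(bits_per_interval, interval_s):
    """Peak data rate = highest rate seen in any single interval (bps)."""
    return max(bits_per_interval) / interval_s

profile = [1_000_000, 0, 2_000_000]  # bits sent in three 1-second intervals
print(average_rate(profile, 1.0))    # 1000000.0 bps on average
print(peak_rate(profile, 1.0))       # 2000000.0 bps during the burst
```

The gap between the two numbers is what distinguishes the three traffic profiles in Figure 24.2: a constant-rate flow has them equal, a bursty flow has a peak far above its average.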
Figure 24.2 Three traffic profiles
24.5
24-2 CONGESTION
24.6
Figure 24.3 Queues in a router
24.7
Network Performance
Figure 24.4 Packet delay and throughput as functions of load
24.8
24-3 CONGESTION CONTROL
24.9
Figure 24.5 Congestion control categories
24.10
Open Loop Congestion Control
In open-loop congestion control, policies are applied to prevent congestion
before it happens. Congestion control is handled by either the source or
the destination.
Retransmission Policy :
Retransmission is sometimes unavoidable: if the sender believes a sent
packet is lost or corrupted, the packet must be retransmitted.
Retransmission, however, may itself increase congestion in the network.
Therefore, the retransmission policy and retransmission timers must be
designed to prevent congestion while still optimizing efficiency.
Window Policy :
The type of window used at the sender's side may also affect congestion.
With a Go-Back-N window, several packets are resent even though some of
them may have been received successfully at the receiver. This duplication
may increase congestion in the network and make it worse.
The Selective Repeat window should therefore be adopted, as it resends
only the specific packets that may have been lost.
24.11
Discarding Policy :
With a good discarding policy, routers may prevent congestion and at the
same time partially discard corrupted or less sensitive packets while
still maintaining the quality of the message.
24.13
Closed Loop Congestion Control
Closed-loop congestion control techniques are used to treat or alleviate
congestion after it happens. Several techniques are used by different
protocols; some of them are:
Backpressure :
Backpressure is a technique in which a congested node stops receiving
packets from the upstream node. This may cause the upstream node or nodes
to become congested and, in turn, refuse data from the nodes above them.
Backpressure is a node-to-node congestion control technique in which the
congestion signal propagates in the direction opposite to the data flow.
The backpressure technique can be applied only to virtual-circuit
networks, where each node knows the upstream node from which its flow of
data is coming.
24.14
Figure 24.6 Backpressure method for alleviating congestion
24.16
Choke Packet Technique :
The choke packet technique is applicable to both virtual-circuit networks
and datagram subnets. A choke packet is a packet sent by a node to the
source to inform it of congestion. Each router monitors its resources and
the utilization of each of its output lines. Whenever the resource
utilization exceeds a threshold value set by the administrator, the
router sends a choke packet directly to the source, giving it feedback to
reduce the traffic. The intermediate nodes through which the packet has
traveled are not warned about the congestion.
24.17
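The rule above can be sketched in a few lines of Python. The threshold value and all names here are my own illustrative choices; the slides only state the rule that an over-threshold output line triggers a choke packet straight to the source:

```python
# Hypothetical router-side choke-packet check (names and threshold assumed).
THRESHOLD = 0.8  # administrator-configured utilization threshold

def choke_target(utilization, source_addr):
    """Return the address a choke packet should be sent to, or None."""
    if utilization > THRESHOLD:
        return source_addr  # feedback goes directly to the source
    return None             # under threshold: intermediate nodes stay silent

print(choke_target(0.95, "10.0.0.1"))  # 10.0.0.1
print(choke_target(0.40, "10.0.0.1"))  # None
```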
Implicit Signaling : In implicit
signaling, there is no communication
between the congested nodes and
the source. The source guesses that
there is congestion in the network. For
example, when a sender sends several
packets and receives no
acknowledgment for a while, one
assumption is that the network is
congested.
24.18
Explicit Signaling : In explicit
signaling, a node that experiences
congestion can explicitly send a
packet to the source or destination to
inform it of the congestion. The
difference between the choke packet
technique and explicit signaling is
that here the signal is included in
packets that carry data, rather than
in a separate packet as in the choke
packet technique. Explicit signaling
can occur in either the forward or the
backward direction.
24.19
• Forward Signaling : In forward
signaling, a signal is sent in the
direction of the congestion, so the
destination is warned. The receiver in
this case adopts policies to prevent
further congestion.
• Backward Signaling : In backward
signaling, a signal is sent in the
direction opposite to the congestion,
so the source is warned that it needs
to slow down.
24.20
24-4 TWO EXAMPLES
24.21
Congestion Control Mechanisms
Additive Increase/Multiplicative Decrease
Slow start
24.24
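The interplay of slow start and additive increase can be illustrated with a short Python sketch (a simplification assuming the classic Tahoe-style behavior, with the congestion window measured in MSS units and updated once per RTT; all names are mine):

```python
# Simplified trace of TCP's congestion window over successful RTTs.
def tcp_cwnd_trace(ssthresh, rounds):
    """Slow start doubles cwnd each RTT until ssthresh is reached,
    then congestion avoidance adds 1 MSS per RTT (additive increase)."""
    cwnd, trace = 1, []
    for _ in range(rounds):
        trace.append(cwnd)
        if cwnd < ssthresh:
            cwnd *= 2   # slow start: exponential growth
        else:
            cwnd += 1   # congestion avoidance: additive increase
    return trace

# On a loss, TCP applies multiplicative decrease: ssthresh is set to
# half the current cwnd, and (in Tahoe) cwnd restarts from 1 MSS.
print(tcp_cwnd_trace(ssthresh=8, rounds=6))  # [1, 2, 4, 8, 9, 10]
```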
24.25
Figure 24.9 Congestion avoidance, additive increase
24.26
24.27
24.28
Figure 24.10 TCP congestion policy summary
24.29
Figure 24.11 Congestion example
24.30
Congestion in Frame Relay decreases throughput and increases delay,
while high throughput and low delay are the main goals of the Frame
Relay protocol.
Frame Relay does not have flow control, and it allows users to transmit
bursty data.
This means that a Frame Relay network has the potential to become
seriously congested with traffic, requiring congestion control.
Frame Relay uses congestion avoidance by means of two bit fields in the
Frame Relay frame that explicitly warn the source and destination of the
presence of congestion:
24.31
BECN:
Backward Explicit Congestion Notification (BECN) warns the sender of
congestion in the network. This is achieved by the switches in the
network sending a frame in the reverse direction. The sender can respond
to this warning by reducing its transmission data rate, thus reducing the
effects of congestion in the network.
FECN:
Forward Explicit Congestion Notification (FECN) is used to warn the
receiver of congestion in the network. It might appear that the receiver
cannot do anything to relieve the congestion; however, the Frame Relay
protocol assumes that sender and receiver are communicating with each
other, and when the receiver sees the FECN bit set to 1 it delays its
acknowledgments. This forces the sender to slow down, reducing the
effects of congestion in the network.
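As a sketch, the two bits can be read out of the second octet of the common two-byte Q.922 address field, where FECN and BECN occupy the 0x08 and 0x04 bit positions. The function names and the receiver-reaction string are my own illustration:

```python
# Reading the congestion bits from octet 2 of a two-byte Frame Relay
# (Q.922) address field: FECN = 0x08, BECN = 0x04.
def congestion_bits(addr_octet2):
    fecn = bool(addr_octet2 & 0x08)  # forward: warns the receiver
    becn = bool(addr_octet2 & 0x04)  # backward: warns the sender
    return fecn, becn

def receiver_reaction(fecn):
    # The receiver cannot reduce traffic itself, so on FECN = 1 it
    # delays its acknowledgment, which indirectly slows the sender.
    return "delay acknowledgement" if fecn else "ack normally"

print(congestion_bits(0x08))  # (True, False)
print(congestion_bits(0x04))  # (False, True)
```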
24.32
Figure 24.12 BECN
24.33
Figure 24.13 FECN
24.34
Figure 24.14 Four cases of congestion
24.35
24-5 QUALITY OF SERVICE
24.36
Figure 24.15 Flow characteristics
24.37
QoS Parameters
Reliability
Lack of reliability means losing a packet or an
acknowledgment (sent on a packet's successful
arrival at the destination), which entails
retransmission.
However, the sensitivity of application
programs to reliability is not the same. For
example, file transfer and e-mail require
reliable service, unlike telephony or audio
conferencing.
Transit Delay
Transit delay is the time between a message being
sent by the transport user on the source machine
and its being received by the transport user on
the destination machine.
24.38
Jitter is the variation in delay for packets belonging
to the same flow. For applications such as audio
and video, it does not matter whether the
packets arrive with a short or a long delay, as long as
the delay is the same for all packets.
High jitter means the difference between the delays of
packets is large; low jitter means the
variation is small.
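One simple (unsmoothed) way to quantify this is the mean absolute difference between consecutive packet delays; the sketch below assumes that definition, and the function name and sample delays are mine:

```python
# Jitter as the mean absolute difference between consecutive delays.
def jitter(delays_ms):
    """delays_ms: per-packet one-way delays for one flow, in order."""
    diffs = [abs(b - a) for a, b in zip(delays_ms, delays_ms[1:])]
    return sum(diffs) / len(diffs)

print(jitter([20, 20, 20, 20]))      # 0.0  -> constant delay, no jitter
print(jitter([21, 21, 21, 28, 22]))  # 3.25 -> noticeable variation
```

Production protocols use a smoothed running estimate rather than this raw average, but the idea is the same: what matters is the variation, not the absolute delay.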
24.39
24-6 TECHNIQUES TO IMPROVE QoS
24.40
Scheduling
Figure 24.16 FIFO queue
24.41
Figure 24.17 Priority queuing
24.42
Figure 24.18 Weighted fair queuing
24.43
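The priority-queuing discipline in Figure 24.17 can be sketched as follows (a minimal Python illustration with names of my choosing; real schedulers serve ongoing queues, not a fixed batch):

```python
import heapq

# Priority queuing: the highest-priority queue is always served first;
# a lower-priority queue is served only when all higher ones are empty,
# which can starve low-priority traffic (the problem WFQ addresses).
def schedule(packets):
    """packets: (priority, arrival_order, name); lower priority number wins."""
    heap = list(packets)
    heapq.heapify(heap)
    order = []
    while heap:
        _, _, name = heapq.heappop(heap)
        order.append(name)
    return order

sent = schedule([(2, 0, "bulk"), (1, 1, "voice"), (2, 2, "mail"), (1, 3, "video")])
print(sent)  # ['voice', 'video', 'bulk', 'mail']
```

Weighted fair queuing instead serves every queue each cycle, taking a number of packets proportional to its weight, so no class is starved outright.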
Traffic shaping
Figure 24.19 Leaky bucket
24.44
24.45
Figure 24.20 Leaky bucket implementation
24.46
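The leaky bucket of Figures 24.19 and 24.20 can be sketched as a discrete-time simulation (names and sample values are mine; this assumes fixed-size packets and one drain opportunity per tick):

```python
# Leaky bucket: bursts are absorbed in a finite queue and drained at a
# constant rate, so the output profile is smooth regardless of the input.
def leaky_bucket(arrivals, capacity, out_rate):
    """arrivals[t] = packets arriving at tick t; returns packets sent per tick."""
    queued, sent = 0, []
    for n in arrivals:
        queued = min(capacity, queued + n)  # overflow packets are discarded
        out = min(out_rate, queued)         # constant drain rate
        queued -= out
        sent.append(out)
    return sent

# A burst of 5 packets leaves at a steady 2 per tick:
print(leaky_bucket([5, 0, 0, 3, 0], capacity=10, out_rate=2))  # [2, 2, 1, 2, 1]
```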
24.47
24.48
Figure 24.21 Token bucket
24.49
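The token bucket of Figure 24.21 differs in that idle time is "saved up": tokens accumulate at a fixed rate up to the bucket size, and a burst may be sent as long as tokens remain. A sketch under the same assumptions as the leaky-bucket example (one token per packet, names mine):

```python
# Token bucket: permits bursts up to the accumulated token count while
# bounding the long-term average rate to the token arrival rate.
def token_bucket(arrivals, rate, bucket_size):
    """arrivals[t] = packets wanting to go at tick t; returns packets sent."""
    tokens, sent = 0, []
    for n in arrivals:
        tokens = min(bucket_size, tokens + rate)  # refill, capped at bucket size
        out = min(n, tokens)                      # spend one token per packet
        tokens -= out
        sent.append(out)
    return sent

# Two idle ticks bank 4 tokens, so a burst of 6 gets 4 through at once:
print(token_bucket([0, 0, 6, 0, 1], rate=2, bucket_size=4))  # [0, 0, 4, 0, 1]
```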
24.50
24-9 QoS IN SWITCHED NETWORKS
24.51
QoS in Frame Relay
Figure 24.28 Relationship between traffic control attributes
24.52
Figure 24.29 User rate in relation to Bc and Bc + Be
24.53
QoS in ATM
Figure 24.30 Service classes
24.54
The QoS feature is used when there is
traffic congestion in the network; it gives
priority to certain real-time media. A
high level of QoS is used while
transmitting real-time multimedia to
eliminate latency and dropouts.
Asynchronous Transfer Mode (ATM) is
a networking technology that uses a
certain level of QoS in data
transmission.
The Quality of Service in ATM is
based on the following: service classes,
user-related attributes, and
network-related attributes.
24.55
Classes :
The ATM Forum defines four service classes, explained below:
1. Constant Bit Rate (CBR) –
CBR is mainly for users who want real-time audio or video
services. The service is similar to that provided by a dedicated
line; for example, a T-line is similar to CBR class service.
2. Variable Bit Rate (VBR) –
The VBR class is divided into two subclasses:
(i) Real-time (VBR-RT) :
Users who need real-time transmission services such as audio
and video, and who also use compression techniques to create a
variable bit rate, use the VBR-RT service class.
(ii) Non-real-time (VBR-NRT) :
Users who do not need real-time transmission services but use
compression techniques to create a variable bit rate use the
VBR-NRT service class.
24.56
3. Available Bit Rate (ABR) –
ABR is used to deliver cells at a specified
minimum rate; if more network
capacity is available, this minimum
rate can be exceeded. ABR is well
suited to applications with bursty
traffic.
4. Unspecified Bit Rate (UBR) –
UBR is a best-effort delivery
service that does not guarantee
anything.
24.57
Figure 24.31 Relationship of service classes to the total capacity of the network
24.58