Lecture 6
TCP flow control operates at the transport layer of the OSI (Open Systems Interconnection) model.
The transport layer is responsible for end-to-end communication and ensures the reliable and
orderly delivery of data between two devices across a network.
Throughput in the context of TCP (Transmission Control Protocol) refers to the actual amount of
data transferred successfully over a TCP connection in a given period.
It is a measure of the effective data transfer rate and is influenced by various factors including
network conditions, latency, and the efficiency of the TCP protocol.
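As a rough back-of-the-envelope illustration (not taken from the lecture), the throughput of a TCP connection is often bounded by the window size divided by the round-trip time. The sketch below uses assumed example values for both.

```python
# Rough upper bound on TCP throughput: window size divided by round-trip time.
# The 64 KB window and 50 ms RTT are illustrative assumptions, not values from
# the lecture.
window_bytes = 64 * 1024       # window in bytes
rtt_seconds = 0.050            # round-trip time in seconds

throughput_bps = (window_bytes * 8) / rtt_seconds
print(f"~{throughput_bps / 1e6:.1f} Mbit/s")   # ~10.5 Mbit/s
```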
Flow control governs the rate at which the sender transmits data so that it does not exceed what the receiver can accept.
TCP (Transmission Control Protocol) uses a mechanism called flow control to manage the pace at
which data is transmitted between two devices over a network.
Flow control is essential to prevent congestion, ensure efficient data transfer, and avoid
overwhelming the receiving device.
If the TCP socket's receive buffer is already full because the application is slow to read and assemble the incoming data, newly arriving packets are dropped and have to be transmitted again.
There is a receive window that specifies how many bytes the receiver can currently accept. rwnd is the receive-window field in the TCP header; it advertises how much free buffer space the receiver has.
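As a rough illustration of how a sender honours the advertised window, here is a minimal Python sketch; the function name and the numbers are assumptions for the example, not part of any real TCP stack.

```python
# Minimal sketch of sender-side flow control: never keep more unacknowledged
# bytes in flight than the receiver's advertised window (rwnd).

def bytes_sendable(rwnd: int, bytes_in_flight: int) -> int:
    """Return how many more bytes the sender may transmit right now."""
    return max(0, rwnd - bytes_in_flight)

# Receiver advertises 8 KB of free buffer space; 5 KB is already unacknowledged.
print(bytes_sendable(rwnd=8192, bytes_in_flight=5120))  # 3072 bytes may still be sent

# If rwnd drops to 0 (buffer full), the sender must stop and wait for a window update.
print(bytes_sendable(rwnd=0, bytes_in_flight=0))        # 0
```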
SYN (Synchronize): The client sends a TCP segment with the SYN flag set to the server, indicating its
intention to establish a connection.
SYN-ACK (Synchronize-Acknowledge): The server responds with a TCP segment that has both the
SYN and ACK (acknowledge) flags set, indicating its acknowledgment of the client's request to
establish a connection and its intention to establish a connection as well.
ACK (Acknowledge): Finally, the client sends a TCP segment with the ACK flag set. This segment
acknowledges the server's response, and at this point, the connection is considered established.
After a time limit, the receiving server gives up waiting for the packet; even if the packet eventually arrives, the connection attempt has already been lost.
Duplicate packets can also occur: when a packet is delayed, TCP retransmits it, and if both copies eventually arrive, the receiver has to detect and discard one of them, which is extra work.
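One way a receiver can cope with such duplicates is to remember which sequence numbers it has already accepted and silently drop repeats. The sketch below is an illustrative simplification, not real TCP code.

```python
# Illustrative sketch: a receiver that discards duplicate segments by
# remembering which sequence numbers it has already accepted.
seen_sequence_numbers = set()

def deliver(seq: int, payload: bytes) -> bool:
    """Accept a segment only the first time its sequence number is seen."""
    if seq in seen_sequence_numbers:
        return False          # duplicate (e.g. an unnecessary retransmission) - drop it
    seen_sequence_numbers.add(seq)
    # ...hand payload to the application here...
    return True

print(deliver(1000, b"hello"))  # True  - first copy is accepted
print(deliver(1000, b"hello"))  # False - delayed duplicate is discarded
```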
The client sends a TCP segment to the server with the SYN (synchronize) flag set.
This segment includes an initial sequence number (ISN) that the client chooses. The ISN is a
randomly selected number used to identify the beginning of the data stream.
Upon receiving the SYN segment, the server responds with a TCP segment that has both the SYN and
ACK (acknowledge) flags set.
The server also selects its own initial sequence number (ISN) for the communication.
The client acknowledges the server's response by sending a TCP segment with the ACK flag set.
The acknowledgment number is set to one more than the received sequence number.
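The exchange can be traced with made-up initial sequence numbers to see how the acknowledgment number ends up one more than the received sequence number. The snippet below only prints the three segments; it does not open a real connection.

```python
# Walk-through of the three-way handshake with made-up initial sequence numbers.
import random

client_isn = random.randint(0, 2**32 - 1)   # client picks a random ISN
server_isn = random.randint(0, 2**32 - 1)   # server picks its own ISN

# 1. SYN: client -> server, carrying the client's ISN.
print(f"SYN      seq={client_isn}")

# 2. SYN-ACK: server -> client, acknowledging client_isn + 1 and sending its own ISN.
print(f"SYN-ACK  seq={server_isn} ack={client_isn + 1}")

# 3. ACK: client -> server, acknowledging server_isn + 1; the connection is established.
print(f"ACK      seq={client_isn + 1} ack={server_isn + 1}")
```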
The client, wishing to close the connection, sends a TCP segment with the FIN (Finish) flag set
to the server.
This indicates to the server that the client has finished sending data.
The server acknowledges the client's FIN by sending a TCP segment with the ACK flag set.
At this point, the server can continue to send any remaining data it has before closing its end of the
connection.
Once the server has completed its data transfer, it sends a TCP segment with the FIN flag set to the
client.
This indicates to the client that the server has finished sending data.
The client acknowledges the server's FIN by sending a TCP segment with the ACK flag set.
The client then enters a TIME_WAIT state for a short duration to ensure that any delayed packets
are not misinterpreted as part of a new connection.
After these four steps, the connection is considered closed. Both the client and the server have
indicated their intention to terminate the connection, and both have acknowledged each other's
requests.
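The four segments can be summarised as a simple trace; the snippet below is purely illustrative and uses no real sockets.

```python
# Illustrative trace of the four-way connection termination.
teardown = [
    ("client -> server", "FIN", "client has finished sending data"),
    ("server -> client", "ACK", "server acknowledges the FIN; may still send remaining data"),
    ("server -> client", "FIN", "server has finished sending data"),
    ("client -> server", "ACK", "client acknowledges; then enters TIME_WAIT before closing"),
]
for direction, flag, meaning in teardown:
    print(f"{direction:17} {flag:4} {meaning}")
```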
TCP congestion control is a set of mechanisms and algorithms implemented in the Transmission
Control Protocol (TCP) to manage the flow of data over a network and prevent congestion.
Congestion occurs when the demand for network resources exceeds its capacity, leading to packet
loss, delays, and degraded performance.
Flow control operates between a single sender and receiver, whereas congestion control encompasses the whole network.
If the rate at which a host sends data matches the rate at which the receiver can process it, there is no problem; if the receiver's rate is lower, congestion will build up.
Buffer Overflow:
If the buffers (temporary storage areas) of network devices such as routers become full, they may start dropping packets, leading to congestion. The dropped packets must be retransmitted, so the host ends up using more bandwidth. Duplicate transmissions can also occur: if the sender does not receive an acknowledgment from the receiver, it may assume the packet is lost and transmit it again.
An increase in the number of users and devices accessing the network simultaneously can lead to
congestion.
Events such as sudden spikes in traffic, like during peak hours or specific events, can overwhelm the
network infrastructure.
Bandwidth Limitations:
Limited network bandwidth is a primary cause of congestion. If the available bandwidth cannot
handle the volume of data being transmitted, delays and packet loss may occur.
DDoS attacks involve flooding a network with a large volume of traffic, overwhelming its capacity
and causing congestion.
TCP uses an AIMD (Additive Increase, Multiplicative Decrease) algorithm to control congestion.
When the network is not congested (as inferred by the absence of packet loss), TCP increases its
sending rate linearly (additive increase).
In the presence of congestion (detected through packet loss or other signals), TCP decreases its
sending rate multiplicatively to alleviate congestion (multiplicative decrease).
Congestion control is embedded in TCP itself; only the sender and the receiver have to do the work, not the routers.
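A minimal sketch of the AIMD rule, assuming an illustrative MSS and counting one additive step per round-trip time, might look like this:

```python
# Minimal AIMD sketch: the congestion window grows by one segment per RTT when
# no loss is detected, and is halved when loss is detected. Values are illustrative.
MSS = 1460  # assumed maximum segment size in bytes

def aimd_update(cwnd: int, loss_detected: bool) -> int:
    if loss_detected:
        return max(MSS, cwnd // 2)   # multiplicative decrease
    return cwnd + MSS                # additive increase (one MSS per RTT)

cwnd = 10 * MSS
for loss in [False, False, True, False]:
    cwnd = aimd_update(cwnd, loss)
    print(cwnd // MSS, "segments")   # 11, 12, 6, 7
```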
Slow Start:
TCP employs the slow start algorithm when initiating a connection or recovering from congestion.
During slow start, the sender starts with a small congestion window and exponentially increases it
until reaching a congestion threshold.
Once the congestion window surpasses the threshold, TCP transitions to congestion avoidance.
Congestion Avoidance:
In congestion avoidance, TCP increases its congestion window more conservatively, using an additive
increase strategy.
The congestion window controls the amount of unacknowledged data that can be in transit at any
given time.
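The transition between the two phases can be sketched as follows, with an assumed threshold (ssthresh) and segment size; the numbers are illustrative only.

```python
# Sketch of the slow-start / congestion-avoidance transition.
MSS = 1460
ssthresh = 8 * MSS               # assumed congestion threshold
cwnd = 1 * MSS                   # slow start begins with a small window

for rtt in range(1, 7):
    if cwnd < ssthresh:
        cwnd *= 2                # slow start: exponential growth per RTT
        phase = "slow start"
    else:
        cwnd += MSS              # congestion avoidance: additive increase
        phase = "congestion avoidance"
    print(f"RTT {rtt}: cwnd = {cwnd // MSS:2d} segments  ({phase})")
```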
If a sender detects packet loss, it assumes that congestion occurred and initiates fast retransmit and
fast recovery mechanisms.
Fast retransmit prompts the sender to retransmit the presumed lost packet without waiting for a
timeout.
Fast recovery lets the sender keep transmitting at a reduced rate after a fast retransmit, rather than falling back to slow start, until the congestion window approaches its size from before the loss.
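A simplified sketch of the fast-retransmit trigger (three duplicate ACKs) and the accompanying window reduction, with illustrative constants, could look like this:

```python
# Sketch of the fast-retransmit trigger: three duplicate ACKs for the same
# sequence number prompt an immediate retransmission instead of waiting for a
# timeout, and the congestion window is roughly halved (fast recovery).
# Constants and structure are illustrative, not a real TCP implementation.
DUP_ACK_THRESHOLD = 3

def on_ack(ack_seq, last_ack, dup_count, cwnd):
    if ack_seq == last_ack:                       # duplicate ACK received
        dup_count += 1
        if dup_count == DUP_ACK_THRESHOLD:
            print(f"fast retransmit of segment {ack_seq}")
            cwnd = max(1, cwnd // 2)              # fast recovery: shrink, don't restart
    else:                                         # new data acknowledged
        last_ack, dup_count = ack_seq, 0
    return last_ack, dup_count, cwnd

state = (1000, 0, 16)                             # last_ack, dup_count, cwnd (segments)
for ack in [1000, 1000, 1000, 2000]:              # three duplicates, then new data
    state = on_ack(ack, *state)
print("cwnd after recovery:", state[2], "segments")  # 8
```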
ECN is a mechanism that allows network devices to notify the sender of impending congestion
without dropping packets.
ECN is indicated by setting flags in the IP header, and the sender adjusts its behavior accordingly
when it receives ECN signals.
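A minimal sketch of how a sender might react to an ECN-Echo signal, assuming it simply backs off as it would for a loss, without touching real IP/TCP headers:

```python
# Sketch of a sender reacting to an ECN congestion signal: when the receiver
# echoes the congestion mark (ECN-Echo), the sender reduces its window as if a
# loss had occurred, even though no packet was actually dropped.
# This is an illustration only; it does not manipulate real headers.

def on_ecn_echo(cwnd: int) -> int:
    return max(1, cwnd // 2)      # back off roughly as for a loss event

cwnd = 20                          # congestion window in segments
cwnd = on_ecn_echo(cwnd)
print("cwnd after ECN signal:", cwnd, "segments")   # 10
```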