Module 3 CN

The document discusses higher layer protocols, focusing on the network layer's role in data transmission, including logical addressing, host-to-host delivery, fragmentation, congestion control, and routing. It details various network layer protocols such as IP, ARP, RARP, ICMP, and IGMP, explaining their functions and differences. Additionally, it covers the TCP 3-Way Handshake process and TCP congestion control mechanisms to ensure reliable communication over networks.


MODULE 3

Higher Layer Protocols


Network Layer Protocols
The network layer is responsible for transmitting data from one host to another across a network. Rather than describing how data is transferred, it implements the techniques for transferring it efficiently. To provide efficient communication, protocols are used at the network layer. Data is grouped into packets, and extremely large data is divided into smaller packets. Each protocol has specific features and advantages. This section covers the protocols used at the network layer in detail.

Functions of Network Layer

The network layer is responsible for the following tasks:

● Logical Addressing: Each device on the network must be identified uniquely, so the network layer provides an addressing scheme to identify each device. It places the IP addresses of the sender and the receiver in the packet header. Each address consists of a network ID and a host ID.
● Host-to-Host Delivery of Data: The network layer ensures that each packet is delivered successfully from the sender to the receiver, and that the packet reaches the intended recipient only.
● Fragmentation: To transmit large data from sender to receiver, the network layer fragments it into smaller packets. Fragmentation is required because every node and link has a fixed capacity for receiving data.
● Congestion Control: Congestion is a situation in which routers cannot route packets properly, resulting in an accumulation of packets in the network. It occurs when a large number of packets flood the network. The network layer therefore controls the congestion of data packets in the network.
● Routing and Forwarding: Routing is the process that decides the route for transmitting packets from sender to receiver, usually choosing the shortest path between them. Commonly used routing approaches include path vector, distance vector, and link state routing.

Network Layer Protocols

There are various protocols used at the network layer, each serving a different task. The main ones are:



1. IP (Internet Protocol)

IP stands for Internet Protocol. IP uniquely identifies each device on the network and is responsible for transferring data from one node to another. Because IP is a connectionless protocol, it does not guarantee the delivery of data; higher-level protocols such as TCP are used on top of it when guaranteed delivery is needed. The Internet Protocol exists in two versions:

● IPv4: IPv4 provides a 32-bit address scheme. An IPv4 address has four numeric fields separated by dots. IPv4 can be configured either via DHCP or manually. IPv4 itself provides few security features, as it does not mandate authentication or encryption. IPv4 addresses are divided into five classes: Class A, Class B, Class C, Class D and Class E.
● IPv6: IPv6 is the most recent version of IP. It provides a 128-bit addressing scheme: an address has eight fields separated by colons, and the fields are written in hexadecimal. IPv6 provides more security features, with built-in support for authentication and encryption, and it supports end-to-end connection integrity. IPv6 offers a far larger range of IP addresses than IPv4.

The key differences between IPv4 and IPv6 are thus address length (32 vs 128 bits), notation (dotted decimal vs colon-separated hexadecimal), and built-in security support.
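The two address formats can be explored with Python's standard `ipaddress` module; a small illustrative sketch:

```python
import ipaddress

# An IPv4 address: four decimal fields separated by dots, 32 bits total.
v4 = ipaddress.ip_address("192.168.1.10")
print(v4.version, v4.max_prefixlen)   # 4 32

# An IPv6 address: eight hexadecimal fields separated by colons, 128 bits.
v6 = ipaddress.ip_address("2001:db8::8a2e:370:7334")
print(v6.version, v6.max_prefixlen)   # 6 128

# The module exposes both the compressed and the full (exploded) notation.
print(v6.exploded)   # 2001:0db8:0000:0000:0000:8a2e:0370:7334
```

Note how `::` in the compressed IPv6 form stands for one or more all-zero fields, which the exploded form writes out in full.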

2. ARP (Address Resolution Protocol)


ARP stands for Address Resolution Protocol. ARP is used to convert a logical address (IP address) into a physical address (MAC address). To communicate with another node, a host must know the MAC address of the destination node. If a node wants to learn the physical address of another node on the same network, it sends an ARP query packet. This query packet contains the IP and MAC addresses of the source host and only the IP address of the destination host. The packet is broadcast to every node on the network; the node that recognizes its own IP address replies with its MAC address to the requesting node. Because broadcasting such packets for every transmission would increase the traffic load, systems that use ARP maintain a cache of recently acquired IP-to-MAC address bindings to reduce this traffic and improve performance.

How Does ARP Work?

● The host broadcasts an ARP inquiry packet containing the IP address over
the network in order to find out the physical address of another computer on
its network.
● The ARP packet is received and processed by all hosts on the network;
however, only the intended recipient can identify the IP address and reply
with the physical address.
● After receiving the reply, the requesting host adds the physical address both to the datagram header and to its cache memory, and then transmits the datagram to its destination.

Types of ARP Entries

● Static Entry: Created when a user manually enters an IP-to-MAC address association using the ARP command utility.
● Dynamic Entry: Created automatically when a host resolves an address through an ARP request/reply exchange on the network. Dynamic entries are not permanent and are removed periodically.
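The two entry types above can be sketched as a toy ARP cache in Python (the class and its TTL value are illustrative, not a real implementation):

```python
import time

class ArpCache:
    """Toy ARP cache: static entries never expire, dynamic entries age out."""

    def __init__(self, ttl=60.0):
        self.ttl = ttl          # dynamic entries expire after ttl seconds
        self.entries = {}       # ip -> (mac, timestamp or None for static)

    def add_static(self, ip, mac):
        self.entries[ip] = (mac, None)               # never expires

    def add_dynamic(self, ip, mac):
        self.entries[ip] = (mac, time.monotonic())   # timestamped

    def lookup(self, ip):
        entry = self.entries.get(ip)
        if entry is None:
            return None                  # miss: would trigger an ARP broadcast
        mac, ts = entry
        if ts is not None and time.monotonic() - ts > self.ttl:
            del self.entries[ip]         # dynamic entry expired
            return None
        return mac

cache = ArpCache(ttl=60.0)
cache.add_static("10.0.0.1", "aa:bb:cc:dd:ee:01")
cache.add_dynamic("10.0.0.2", "aa:bb:cc:dd:ee:02")
print(cache.lookup("10.0.0.2"))   # aa:bb:cc:dd:ee:02
```

A cache miss is exactly the situation in which a real host would fall back to broadcasting an ARP query.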

3. RARP

RARP stands for Reverse Address Resolution Protocol. RARP works in the opposite direction to ARP: it is used to obtain the logical address (IP address) corresponding to a known physical address (MAC address). RARP allows a machine, typically a diskless workstation, to obtain its own IP address from a RARP server on the network. Because RARP works at a low level, it requires direct access to network addresses. The reply from the server carries only a small amount of information, the 32-bit internet address, so it does not exploit the full capacity of a network such as Ethernet.

How Does RARP Work?

● Data is sent between two places in a network using the RARP, which is on
the Network Access Layer.
● Every user on the network has two distinct addresses: their MAC (physical)
address and their IP (logical) address.
● Software assigns the IP address, and the hardware then builds the MAC
address into the device.
● Any regular computer connected to the network can function as the RARP server, answering RARP queries; however, it must store the IP addresses associated with all the MAC addresses. Only these RARP servers can respond to RARP requests received on the network. The request must be transmitted over the network's lowest layers as a link-layer broadcast.
● Using both its physical address and Ethernet broadcast address, the client
transmits a RARP request. In response, the server gives the client its IP
address.

In short, ARP maps an IP address to a MAC address, while RARP does the reverse, mapping a MAC address back to an IP address.

4. ICMP

ICMP stands for Internet Control Message Protocol. ICMP is part of the IP protocol suite and is an error-reporting and network diagnostic protocol. When an error occurs while a datagram is being processed, ICMP reports it back to the source host. ICMP defines many error-reporting and diagnostic messages, covering situations such as time exceeded, redirection, source quench, destination unreachable and parameter problems. ICMP messages are divided into two types:

● Error Message: Reports the issues or problems that a host or router faces while processing an IP packet.
● Query Message: Used by a host to obtain information from a router or another host.

How Does ICMP Work?

● ICMP is one of the main and most significant protocols in the IP suite. Unlike TCP, ICMP is a connectionless protocol: it does not require a connection to be established with the target device before a message can be transmitted.
● TCP and ICMP therefore operate differently: TCP is connection-oriented, requiring both devices to complete a handshake before any message is sent, while ICMP sends messages without any connection setup.
● ICMP messages are carried inside IP datagrams: each ICMP packet consists of an IP header followed by the ICMP data. An ICMP datagram is comparable to an independent data item such as a packet.
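As a concrete illustration, an ICMP echo-request (ping) message can be built by hand with Python's `struct` module; the checksum routine follows the standard internet checksum (RFC 1071). This is a sketch of the message format only; actually sending it would require a raw socket and elevated privileges:

```python
import struct

def internet_checksum(data: bytes) -> int:
    """RFC 1071 one's-complement checksum used by ICMP."""
    if len(data) % 2:
        data += b"\x00"                       # pad to an even length
    total = sum(struct.unpack(f"!{len(data)//2}H", data))
    total = (total & 0xFFFF) + (total >> 16)  # fold carries back in
    total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def icmp_echo_request(ident: int, seq: int, payload: bytes = b"ping") -> bytes:
    # Type 8 (echo request), code 0; checksum computed over the whole message.
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)
    csum = internet_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, csum, ident, seq) + payload

pkt = icmp_echo_request(ident=1, seq=1)
# Property of the internet checksum: a correct message re-checksums to 0.
print(internet_checksum(pkt))   # 0
```

The receiver validates an incoming ICMP message by running the same checksum over it and checking that the result is zero.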

5. IGMP

IGMP stands for Internet Group Management Protocol. IGMP is a multicast communication protocol that uses network resources efficiently when delivering messages and data packets to groups of hosts. IGMP is part of the TCP/IP suite; hosts and routers on IP networks use it to manage multicast communication. In many networks, multicast routers are used to transmit messages to groups of nodes. These routers receive a large number of packets to forward, and simply broadcasting them all would increase the overall network load. IGMP helps multicast routers by letting hosts tell them which multicast groups they belong to, so packets are forwarded only where needed. Since multicast communication involves one or more senders and many receivers, IGMP is widely used in applications such as streaming media, web conferencing tools, and games.

How Does IGMP Work?


● IGMP is used by devices that support dynamic multicasting and multicast groups.
● Using IGMP, a host can join or leave a multicast group, and routers can add and remove group members.
● The protocol operates between a host and its local multicast router. When a multicast group is created, packets for the group are addressed to the multicast group address, which falls within the class D IP address range.
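The join operation described above is exposed to applications through the `IP_ADD_MEMBERSHIP` socket option, which takes an `ip_mreq` structure (group address plus local interface). A minimal sketch, with the group address chosen arbitrarily from the class D range:

```python
import socket
import struct

def make_membership_request(group: str, interface: str = "0.0.0.0") -> bytes:
    """Build the ip_mreq structure used to ask the kernel (and, via IGMP,
    the local multicast router) to join a class D group address."""
    return struct.pack("4s4s",
                       socket.inet_aton(group),
                       socket.inet_aton(interface))

mreq = make_membership_request("224.1.1.1")
print(mreq.hex())   # e001010100000000

# On a real host the join would be issued like this (not run here):
#   sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#   sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
# ...after which the kernel sends an IGMP membership report for 224.1.1.1.
```

Note that the first byte of the packed structure is `0xe0` (224 decimal), the start of the class D range.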

TCP 3-Way Handshake Process


The TCP 3-Way Handshake is a fundamental process that establishes a reliable
connection between two devices over a TCP/IP network. It involves three steps: SYN
(Synchronize), SYN-ACK (Synchronize-Acknowledge), and ACK (Acknowledge).
During the handshake, the client and server exchange initial sequence numbers and
confirm the connection establishment. In this article, we will discuss the TCP 3-Way
Handshake Process.

What is the TCP 3-Way Handshake?

The TCP 3-Way Handshake is a fundamental process used in the Transmission Control Protocol (TCP) to establish a reliable connection between a client and a server before data transmission begins. This handshake ensures that both parties are synchronized and ready for communication.

TCP Segment Structure


A TCP segment consists of the data bytes to be sent and a header that TCP adds to the data, as shown:

The header of a TCP segment can range from 20 to 60 bytes, of which up to 40 bytes are for options. If there are no options, the header is 20 bytes; otherwise it can be at most 60 bytes. Header fields:

● Source Port Address: A 16-bit field that holds the port address of the application sending the data segment.
● Destination Port Address: A 16-bit field that holds the port address of the application on the host receiving the data segment.
● Sequence Number: A 32-bit field that holds the sequence number, i.e., the byte number of the first byte sent in that particular segment. It is used to reassemble the message at the receiving end when segments arrive out of order.
● Acknowledgement Number: A 32-bit field that holds the acknowledgement number, i.e., the byte number that the receiver expects to receive next. It acknowledges that the previous bytes were received successfully.
● Header Length (HLEN): A 4-bit field that indicates the length of the TCP header in 4-byte words. If the header is 20 bytes (the minimum length of a TCP header), this field holds 5 (because 5 x 4 = 20); at the maximum length of 60 bytes it holds 15 (because 15 x 4 = 60). Hence the value of this field is always between 5 and 15.
● Control Flags: Six 1-bit control bits that manage connection establishment, connection termination, connection abortion, flow control, mode of transfer, etc. Their functions are:
○ URG: Urgent pointer is valid
○ ACK: Acknowledgement number is valid (used for cumulative acknowledgement)
○ PSH: Request for push
○ RST: Reset the connection
○ SYN: Synchronize sequence numbers
○ FIN: Terminate the connection
● Window Size: This field gives the window size of the sending TCP in bytes.
● Checksum: This field holds the checksum for error control. It is mandatory in TCP, as opposed to UDP.
● Urgent Pointer: This field (valid only if the URG flag is set) points to urgent data that must reach the receiving process at the earliest. Its value is added to the sequence number to get the byte number of the last urgent byte.
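The field layout above can be made concrete by packing a minimal 20-byte header with Python's `struct` module (a sketch for illustration; the checksum is left at zero because computing it also requires a pseudo-header and the data):

```python
import struct

# Flag bits within the 6-bit control field.
FIN, SYN, RST, PSH, ACK, URG = 0x01, 0x02, 0x04, 0x08, 0x10, 0x20

def tcp_header(src_port, dst_port, seq, ack, flags, window,
               checksum=0, urg_ptr=0):
    """Pack a minimal 20-byte TCP header with no options."""
    offset = 5                               # HLEN: 5 four-byte words = 20 bytes
    offset_flags = (offset << 12) | flags    # 4-bit HLEN, 6 reserved/flag bits
    return struct.pack("!HHIIHHHH", src_port, dst_port, seq, ack,
                       offset_flags, window, checksum, urg_ptr)

syn = tcp_header(50000, 80, seq=1000, ack=0, flags=SYN, window=65535)
print(len(syn))                    # 20
fields = struct.unpack("!HHIIHHHH", syn)
print(fields[4] >> 12)             # 5  -> the HLEN field
print(bool(fields[4] & SYN))       # True -> the SYN flag is set
```

Unpacking the header back into its fields shows how the 4-bit HLEN and the control flags share a single 16-bit word.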


TCP 3-way Handshake Process

Communication between devices over the internet follows the TCP/IP suite model (a stripped-down version of the OSI reference model). The application layer sits at the top of the TCP/IP stack; network-facing applications such as web browsers on the client side use it to establish a connection with the server. From the application layer, the information is passed to the transport layer, where our topic comes into the picture. The two important protocols of this layer are TCP and UDP (User Datagram Protocol), of which TCP is prevalent, since it provides reliability for the established connection. UDP is used, for example, when querying a DNS server to resolve a domain name.
TCP provides reliable communication using Positive Acknowledgement with Re-transmission (PAR). The Protocol Data Unit (PDU) of the transport layer is called a segment. A device using PAR resends the data unit until it receives an acknowledgement. If the data unit received at the receiver's end is damaged (the receiver checks the data with the checksum functionality of the transport layer used for error detection), the receiver discards the segment, so the sender has to resend any data unit for which a positive acknowledgement is not received. Three segments are exchanged between the sender (client) and receiver (server) for a reliable TCP connection to be established. Let us look at how this mechanism works:

● Step 1 (SYN): The client wants to establish a connection with the server, so it sends a segment with the SYN (Synchronize Sequence Number) flag set, which informs the server that the client intends to start communication and with what sequence number it will start its segments.
● Step 2 (SYN + ACK): The server responds to the client's request with the SYN and ACK bits set. The ACK acknowledges the segment it received, and the SYN signifies with what sequence number the server will start its own segments.
● Step 3 (ACK): Finally, the client acknowledges the server's response, and both sides establish a reliable connection over which the actual data transfer can begin.
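The handshake itself is performed by the operating system kernel; an application only triggers it via `connect()`. A minimal loopback sketch in Python:

```python
import socket

# Server side: bind to loopback and enter the LISTEN state.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
srv.listen(1)

# The client's connect() triggers the SYN / SYN-ACK / ACK exchange inside
# the kernel and returns only once the handshake has completed.
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(srv.getsockname())

conn, peer = srv.accept()           # picks up the already-established connection
cli.sendall(b"hello")               # actual data transfer after the handshake
data = conn.recv(5)
print(data)                         # b'hello'

cli.close(); conn.close(); srv.close()
```

A packet capture on the loopback interface during `connect()` would show exactly the three segments described above.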

TCP Congestion Control


Last Updated : 01 Jul, 2024

TCP congestion control is a method used by the TCP protocol to manage data flow
over a network and prevent congestion. TCP uses a congestion window and
congestion policy that avoids congestion. Previously, we assumed that only the
receiver could dictate the sender’s window size. We ignored another entity here, the
network. If the network cannot deliver the data as fast as it is created by the sender, it
must tell the sender to slow down. In other words, in addition to the receiver, the
network is a second entity that determines the size of the sender’s window.

Congestion Policy in TCP

● Slow Start Phase: Starts slowly, but the congestion window grows exponentially until it reaches the threshold.

● Congestion Avoidance Phase: After reaching the threshold, the window is incremented by 1 per round trip.
● Congestion Detection Phase: On detecting congestion, the sender falls back to the slow start phase or the congestion avoidance phase.

Slow Start Phase

Exponential Increment: In this phase after every RTT the congestion window size
increments exponentially.

Example: If the initial congestion window size is 1 segment, and the first segment is
successfully acknowledged, the congestion window size becomes 2 segments. If the
next transmission is also acknowledged, the congestion window size doubles to 4
segments. This exponential growth continues as long as all segments are successfully
acknowledged.

Initially cwnd = 1

After 1 RTT, cwnd = 2^(1) = 2

2 RTT, cwnd = 2^(2) = 4

3 RTT, cwnd = 2^(3) = 8

Congestion Avoidance Phase

Additive Increment: This phase starts once the congestion window reaches the threshold value, denoted ssthresh. The size of cwnd (congestion window) then increases additively: after each RTT, cwnd = cwnd + 1.

For example: if the congestion window size is 20 segments and all 20 segments are
successfully acknowledged within an RTT, the congestion window size would be
increased to 21 segments in the next RTT. If all 21 segments are again successfully
acknowledged, the congestion window size will be increased to 22 segments, and so
on.

Initially cwnd = i

After 1 RTT, cwnd = i+1

2 RTT, cwnd = i+2

3 RTT, cwnd = i+3

Congestion Detection Phase

Multiplicative Decrement: If congestion occurs, the congestion window size is decreased. The only way a sender can infer that congestion has happened is the need to retransmit a segment, since retransmission is needed to recover a packet assumed to have been dropped by a router due to congestion. Retransmission can occur in one of two cases: when the RTO timer times out, or when three duplicate ACKs are received.

Case 1: Retransmission due to Timeout – In this case, the congestion possibility is high.

(a) ssthresh is reduced to half of the current window size.

(b) set cwnd = 1

(c) start with the slow start phase again.

Case 2: Retransmission due to 3 Duplicate Acknowledgements – In this case, the congestion possibility is lower.

(a) ssthresh is reduced to half of the current window size.

(b) set cwnd = ssthresh

(c) start with the congestion avoidance phase.

Example

Assume a TCP connection exhibiting slow start behaviour. At the 5th transmission round, with a threshold (ssthresh) value of 32, it enters the congestion avoidance phase and continues until the 10th transmission round. At the 10th round, 3 duplicate ACKs are received and the sender re-enters additive increase mode. A timeout occurs at the 16th transmission round. Plot transmission round (time) versus congestion window size of the TCP segments.
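A small simulator makes it easy to generate such a plot. The function below follows one common textbook convention (window doubles per RTT in slow start, +1 per RTT in congestion avoidance, halves on 3 duplicate ACKs, resets to 1 on timeout); exact round numbering varies between texts:

```python
def simulate_cwnd(rounds, ssthresh, dup_ack_rounds=(), timeout_rounds=()):
    """Congestion window size per transmission round under one common
    textbook model of TCP congestion control."""
    cwnd = 1
    history = []
    for r in range(1, rounds + 1):
        history.append(cwnd)
        if r in timeout_rounds:                # case 1: timeout
            ssthresh = max(cwnd // 2, 1)
            cwnd = 1                           # back to slow start
        elif r in dup_ack_rounds:              # case 2: 3 duplicate ACKs
            ssthresh = max(cwnd // 2, 1)
            cwnd = ssthresh                    # continue in congestion avoidance
        elif cwnd < ssthresh:
            cwnd = min(cwnd * 2, ssthresh)     # slow start: exponential growth
        else:
            cwnd += 1                          # congestion avoidance: +1 per RTT

    return history

# Scenario similar to the example: ssthresh = 32, 3 duplicate ACKs at round 10.
print(simulate_cwnd(12, 32, dup_ack_rounds={10}))
# [1, 2, 4, 8, 16, 32, 33, 34, 35, 36, 18, 19]
```

The printed history is exactly the sawtooth curve the exercise asks you to plot: exponential growth to the threshold, linear growth, then a drop to half the window on the duplicate ACKs.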

TCP Timers
TCP uses several timers to ensure that excessive delays are not encountered during
communications. Several of these timers are elegant, handling problems that are not
immediately obvious at first analysis. Each of the timers used by TCP is examined in
the following sections, which reveal its role in ensuring data is properly sent from one
connection to another.

TCP implementation uses four timers –

● Retransmission Timer – To retransmit lost segments, TCP uses a retransmission timeout (RTO). When TCP sends a segment the timer starts, and it stops when the acknowledgement is received. If the timer expires, a timeout occurs and the segment is retransmitted. To calculate the retransmission timeout we first need to calculate the round-trip time (RTT). Three RTT values are used:
○ Measured RTT (RTTm) – The measured round-trip time for a segment is the time required for the segment to reach the destination and be acknowledged, although the acknowledgement may include other segments.
○ Smoothed RTT (RTTs) – The weighted average of RTTm. RTTm is likely to change, and its fluctuation is so high that a single measurement cannot be used to calculate RTO.

Initially -> no value
After the first measurement -> RTTs = RTTm
After each measurement -> RTTs = (1 - t) * RTTs + t * RTTm
Note: t = 1/8 (default if not given)
○ Deviated RTT (RTTd) – Most implementations do not use RTTs alone, so the RTT deviation is also calculated to find the RTO.

Initially -> no value
After the first measurement -> RTTd = RTTm / 2
After each measurement -> RTTd = (1 - k) * RTTd + k * |RTTm - RTTs|
Note: k = 1/4 (default if not given)

The retransmission timeout is then computed as RTO = RTTs + 4 * RTTd.
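The update rules above translate directly into code. A sketch (the class name is ours; values may be in milliseconds or seconds, as long as they are consistent):

```python
class RtoEstimator:
    """RTT smoothing and RTO computation following the formulas above:
    t = 1/8 for RTTs, k = 1/4 for RTTd, and RTO = RTTs + 4 * RTTd."""

    def __init__(self, t=1 / 8, k=1 / 4):
        self.t, self.k = t, k
        self.rtts = None          # smoothed RTT
        self.rttd = None          # RTT deviation

    def measure(self, rttm):
        if self.rtts is None:                  # first measurement
            self.rtts = rttm
            self.rttd = rttm / 2
        else:
            self.rtts = (1 - self.t) * self.rtts + self.t * rttm
            self.rttd = (1 - self.k) * self.rttd + self.k * abs(rttm - self.rtts)
        return self.rtts + 4 * self.rttd       # current RTO

est = RtoEstimator()
print(est.measure(100))   # 300.0  (RTTs = 100, RTTd = 50)
print(est.measure(120))   # 270.0  (RTTs = 102.5, RTTd = 41.875)
```

Note how a single noisy measurement (120 vs 100) moves the RTO only moderately, which is exactly the point of the weighted averages.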

● Persistent Timer – To deal with a zero-window-size deadlock situation, TCP uses a persistence timer. When the sending TCP receives an acknowledgment with a window size of zero, it starts a persistence timer.
When the persistence timer goes off, the sending TCP sends a special
segment called a probe. This segment contains only 1 byte of new data. It
has a sequence number, but its sequence number is never acknowledged; it
is even ignored in calculating the sequence number for the rest of the data.
The probe causes the receiving TCP to resend the acknowledgment which
was lost.
● Keepalive Timer – A keepalive timer is used to prevent a long idle connection between two TCPs. Suppose a client opens a TCP connection to a server, transfers some data, and then crashes or becomes silent; the connection would otherwise remain open forever. So a keepalive timer is used: each time the server hears from the client, it resets this timer. The timeout is usually 2 hours. If the server does not hear from the client for 2 hours, it sends a probe segment. If there is no response after 10 probes, each 75 seconds apart, it assumes the client is down and terminates the connection.
● Time Wait Timer – This timer is used during TCP connection termination. It starts after the last ACK for the second FIN is sent and the connection is closed.
After a TCP connection is closed, it is possible for datagrams that are still making their way through the network to attempt to access the closed port. The time-wait (quiet) timer is intended to prevent the just-closed port from reopening quickly and receiving these last datagrams.
The quiet timer is usually set to twice the maximum segment lifetime, ensuring that all segments still heading for the port have been discarded.

Service Primitives
A service generally includes a set of primitives. A primitive simply means an operation. A service is specified by the set of primitives that are available to a user or other entities to access that service. These primitives tell the service to perform some action or to report on an action taken by a peer entity. Each protocol in a layered architecture communicates in a peer-to-peer manner with its remote protocol entity. Primitives can be thought of as calling functions between the layers, used to manage communication among adjacent protocol layers, i.e., within the same communication node. The set of primitives that are available generally depends on the nature of the service being provided.

Classification of Service Primitives:

● Request – An entity wants the service to perform some action (e.g., requesting a connection to a remote computer).
● Indication – An entity is to be informed about an event (the receiver has just received a connection request).
● Response – An entity responds to an event (the receiver sends permission, allowing the connection).
● Confirm – An entity acknowledges the response to its earlier request (the sender learns that the permission to connect to the remote host was granted).

These four primitives work as follows:

● Request – This primitive is sent to Layer N by Layer (N+1) to request a service.
● Indication – This primitive is returned by Layer N to Layer (N+1) to advise of the activation of the requested service, or of an action initiated by the service of Layer N.
● Response – This primitive is provided by Layer (N+1) in reply to an indication primitive. It may acknowledge or complete an action previously invoked by the indication primitive.
● Confirm – This primitive is returned by Layer N to the requesting Layer (N+1) to acknowledge or complete an action previously invoked by the request primitive.
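The four primitives can be traced as a toy connection setup between machine A (the initiator) and machine B (the responder); the names and layout below are illustrative only, not a real API:

```python
def connection_setup():
    """Trace of the four service primitives crossing the layer-N /
    layer-(N+1) boundary on both machines during connection setup."""
    trace = []
    trace.append("A: layer N+1 -> layer N  : CONNECT.request")     # A asks for service
    trace.append("B: layer N   -> layer N+1: CONNECT.indication")  # B is informed
    trace.append("B: layer N+1 -> layer N  : CONNECT.response")    # B accepts
    trace.append("A: layer N   -> layer N+1: CONNECT.confirm")     # A learns the result
    return trace

for step in connection_setup():
    print(step)
```

Reading the trace top to bottom shows the request/indication pair travelling from A to B, and the response/confirm pair travelling back.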

Parameters of Service Primitives: Some service primitives carry parameters. Examples are given below:
● Connect.Request – Issued by the initiating entity. It specifies the machine to connect to, the type of service desired, and the maximum packet or message size to be used on the connection.
● Connect.Indication – Received by the receiver. It specifies the caller's identity, the service to be used (such as FTP or Telnet), and the maximum size of the packets to be exchanged.
● Connect.Response – Specifies whether the receiver wants to accept or reject the requested connection.
● Connect.Confirm – Lets the entity that issued the initial Connect.Request find out what happened.

Primitives of Connection-Oriented Service:

● Listen – When a server is ready to accept an incoming connection request, it puts this primitive into action: it blocks, waiting for an incoming connection request.
● Connect – Used to connect to the server, establishing a connection with the waiting peer.
● Accept – Accepts an incoming connection from the peer.
● Receive – Blocks the caller, waiting for an incoming message.
● Send – Put into action by the client to transmit its request, typically followed by a Receive to get the reply; it sends the message to the peer.
● Disconnect – Terminates the connection, after which no one can send any further messages.

Primitives of Connectionless Service:

● Unitdata – Required to send a packet of data or information.
● Facility, Report – Required for obtaining details about the performance and working of the network, such as delivery statistics or reports.

All of us are familiar with the two protocols called TCP and UDP. Our topic here is the TCP sliding window, but before that it is important to know a little about TCP and UDP. Both are used for the transmission of data, but which one is more reliable and can be trusted? The answer is TCP, and below we will see how TCP works differently from UDP so that data transmission can be assured. TCP (Transmission Control Protocol) and UDP (User Datagram Protocol) are both protocols used for data transmission over networks, but they have different characteristics and use cases. Here's a comparison of TCP and UDP:

TCP (Transmission Control Protocol):


Reliable Transmission:
TCP ensures reliable data transmission by establishing a connection-oriented session between the
sender and receiver. It provides mechanisms for error detection, acknowledgement of received
data, and retransmission of lost or corrupted packets.
Ordered Data Delivery:
TCP guarantees that data sent from one end will be received in the same order by the other end.
This is essential for applications where data integrity and sequence are critical, such as file
transfers and web browsing.
Flow Control:
TCP employs flow control mechanisms to manage the rate of data transmission between sender
and receiver. This prevents the receiver from being overwhelmed with data it can't process
quickly enough.
Connection Establishment and Termination:
TCP uses a three-way handshake to establish a connection between two endpoints and a four-
way handshake to gracefully terminate the connection.
High Overhead:
The reliability and ordering features of TCP come at the cost of higher overhead due to the need
for acknowledgements, sequence numbers, and connection management.
Applications:
TCP is commonly used for applications that require accurate and reliable data delivery, such as
web browsing, email, file transfers, and database transactions.
UDP (User Datagram Protocol):
Connectionless:
UDP is connectionless, meaning there is no formal connection setup before sending data. Each
packet is treated as an independent unit.
Unreliable Transmission:
Unlike TCP, UDP does not guarantee delivery of data packets. Packets may be lost, duplicated,
or arrive out of order. There is no mechanism for retransmission or acknowledgement.
Low Overhead:
UDP has lower overhead compared to TCP since it lacks the error checking, acknowledgement,
and flow control mechanisms.
Faster Data Transmission:
Due to the lack of reliability features, UDP can be faster for transmitting data, especially in
scenarios where occasional data loss is acceptable.
Applications:
UDP is suitable for applications that prioritize speed and real-time performance over reliability.
Examples include online gaming, streaming media (audio and video), VoIP (Voice over IP), and
DNS (Domain Name System) lookups.
TCP Sliding Window Protocol:
TCP sliding window, often referred to as TCP sliding window protocol, is a fundamental
mechanism within the Transmission Control Protocol (TCP) used for reliable and efficient data
transfer over computer networks. It plays a critical role in ensuring efficient utilization of
network resources and reliable data transmission.
The TCP sliding window protocol is essentially a flow control mechanism that allows the sender
and receiver to manage the amount of data that can be in transit at any given time. It helps
prevent issues like congestion and ensures that data is transmitted and received in a balanced
manner, without overwhelming either the sender or the receiver.
Network devices such as Huawei switches utilize TCP for reliable data transmission, flow control, and error recovery, maintaining sending and receiving windows to optimize data transmission and handle varying network conditions.

Here's how the TCP sliding window works and how it ensures reliable, balanced data transmission:
Sender's Perspective:
The sender maintains a window of packets it can send before waiting for an acknowledgment
from the receiver.
This window is called the sending window or the transmission window.
The size of the sending window is determined by several factors, including available buffer
space at the receiver, network conditions, and congestion avoidance strategies.
As the sender sends packets, it slides the window forward. As acknowledgments are received
from the receiver, the sender can send new packets to fill the vacant space in the window.

Receiver's Perspective:
The receiver maintains a window of expected packets.
This window is called the receiving window or the receiving buffer.
The size of the receiving window determines how many out-of-order packets the receiver can
tolerate.
As packets arrive, the receiver acknowledges them and slides the receiving window forward to
reflect the new range of expected sequence numbers.

Flow Control:
The TCP sliding window protocol provides a form of flow control. The receiver can control the
rate at which data is sent by adjusting the size of its receiving window in acknowledgments.
If the receiver's buffer is close to being full, it can advertise a smaller window size in its
acknowledgments. This signals the sender to slow down its transmission rate, preventing
congestion and potential packet loss.
Here's a step-by-step breakdown of how TCP windowing works:

Connection Establishment:
Before data transfer can begin, a TCP connection is established between the sender (client) and
receiver (server).
During the connection establishment, both parties negotiate initial parameters, including the
initial sequence numbers and window sizes.

Initial Sequence Number (ISN):


Each TCP segment is assigned a sequence number by the sender.
The initial sequence number (ISN) is a random value chosen at the start of the connection. It
helps prevent old, duplicate packets from being misinterpreted as new packets.
Sliding Window Initialization:

Both the sender and receiver maintain a sliding window for flow control.
The sender's sliding window is the "sending window," and the receiver's sliding window is the
"receiving window."
The size of these windows is negotiated during connection establishment and can be adjusted
during data transmission.

Sending Data:
The sender divides the data into segments, each with a sequence number.
The sender can send multiple segments before waiting for acknowledgments.
The number of segments sent without waiting for acknowledgments is determined by the
sender's sending window size.

Receiving Data:
The receiver receives the segments and checks their sequence numbers.
If a segment arrives with the expected sequence number, it's accepted and passed to the higher
layers for processing.
If a segment arrives out of order but falls within the receiver's receiving window, it's stored in a
buffer until the missing segments arrive.

Acknowledgments (ACKs):
The receiver sends acknowledgments (ACKs) back to the sender.
The ACK includes the next expected sequence number (acknowledging receipt of all previous
segments).
The ACK also includes the current size of the receiver's receiving window, indicating how much
more data the sender can send without overflowing the receiver's buffer.
Sliding the Windows:
As acknowledgments arrive at the sender, the sending window slides forward.
This means the sender can send new segments to fill the empty slots in the sending window.
The sender's sending window size is dynamic and can be adjusted based on ACKs and network
conditions.

Flow Control:
The receiver's receiving window size in the ACKs acts as a form of flow control.
If the receiver's buffer is close to being full, it advertises a smaller window size in its ACKs.
The sender receives this information and adjusts its sending rate accordingly to avoid
overwhelming the receiver.

Handling Retransmissions and Errors:


If the expected ACK does not arrive at the sender within a certain timeout period, the
sender assumes the segment was lost and retransmits it.
The receiver may also send duplicate ACKs for segments it has already received, which
handles cases where the ACKs themselves are lost.

Connection Termination:
Once data transfer is complete, the connection is terminated using a similar handshake process.
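The sender-side behaviour described in the steps above can be sketched as a toy, single-threaded simulation: the sender may have at most `window` unacknowledged segments in flight, and each cumulative ACK slides the window forward. This is a teaching sketch, not real TCP (no timers, no loss, and byte counting is simplified to whole segments; one ACK arrives per outstanding segment).

```python
# Toy simulation of the sliding-window idea: segments are numbered 0..n-1,
# the window limits how many may be unacknowledged at once, and each
# cumulative ACK advances the window base by one.
def simulate_sliding_window(total_segments: int, window: int):
    base = 0          # oldest unacknowledged segment
    next_seq = 0      # next segment to send
    log = []
    while base < total_segments:
        # Send everything the current window allows.
        while next_seq < base + window and next_seq < total_segments:
            log.append(("send", next_seq))
            next_seq += 1
        # Receiver cumulatively ACKs the oldest outstanding segment,
        # which slides the window forward by one.
        log.append(("ack", base))
        base += 1
    return log
```

With 4 segments and a window of 2, the first two segments go out back-to-back before any ACK is needed; each subsequent ACK frees a slot for one more segment.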

Domain Name System (DNS) in Application Layer


The Domain Name System (DNS) is like the internet’s phone book. It helps you find
websites by translating easy-to-remember names (like www.example.com) into the
numerical IP addresses (like 192.0.2.1) that computers use to locate each other on the
internet. Without DNS, you would have to remember long strings of numbers to visit
your favorite websites.

The Domain Name System (DNS) provides hostname-to-IP-address translation services.
DNS is a distributed database implemented in a hierarchy of name servers. It is an
application layer protocol for message exchange between clients and servers, and it
is required for the functioning of the Internet.

What is the Need for DNS?

Every host is identified by an IP address, but remembering numbers is very difficult
for people, and IP addresses are not static. Therefore, a mapping is required from
domain names to IP addresses: DNS converts the domain name of a website into its
numerical IP address.

Types of Domain

There are various kinds of domains:

● Generic Domains: .com (commercial), .edu (educational), .mil (military),
.org (nonprofit organization), and .net (similar to commercial) are all
generic domains.
● Country Domains: .in (India), .us (United States), .uk (United Kingdom).
● Inverse Domain: used when we know an IP address and want to find the
corresponding domain name (IP-to-domain-name mapping). DNS provides both
directions of mapping. For example, to find the IP address of
geeksforgeeks.org we can type

nslookup www.geeksforgeeks.org


Organization of Domain

It is difficult to find the IP address associated with a website because there are
millions of websites, and we should be able to resolve any of them to an IP address
immediately, without much delay. For that to happen, the organization of the
database is very important.
Root DNS Server

● DNS Record: the domain name, its IP address, the record's validity, its
time to live, and all other information related to that domain name. These
records are stored in a tree-like structure.
● Namespace: Set of possible names, flat or hierarchical. The naming system
maintains a collection of bindings of names to values – given a name, a
resolution mechanism returns the corresponding value.
● Name Server: It is an implementation of the resolution mechanism.

Name-to-Address Resolution
The host requests the DNS name server to resolve the domain name, and the name
server returns the IP address corresponding to that domain name so that the host
can then connect to that IP address.
Name-to-Address Resolution

Hierarchy of Name Servers

● Root Name Servers: contacted by name servers that cannot resolve a name.
If the name mapping is not known, the root server directs the query toward
the authoritative name server; the mapping is then obtained and the IP
address is returned to the host.
● Top-level Domain (TLD) Server: It is responsible for com, org, edu, etc,
and all top-level country domains like uk, fr, ca, in, etc. They have info
about authoritative domain servers and know the names and IP addresses of
each authoritative name server for the second-level domains.
● Authoritative Name Servers are the organization’s DNS servers, providing
authoritative hostnames to IP mapping for organization servers. It can be
maintained by an organization or service provider. In order to reach
cse.dtu.in we have to ask the root DNS server, then it will point out to the
top-level domain server and then to the authoritative domain name server
which actually contains the IP address. So the authoritative domain server
will return the associative IP address.

Domain Name Server


The client machine sends a request to the local name server. If the local name
server does not find the address in its database, it sends a request to the root
name server, which in turn routes the query to a top-level domain (TLD) or
authoritative name server. The root name server can also contain some
hostname-to-IP-address mappings. The top-level domain (TLD) server always knows who
the authoritative name server is. Finally, the IP address is returned to the local
name server, which in turn returns the IP address to the host.

Domain Name Server

How Does DNS Work?


The working of DNS starts with converting a hostname into an IP address. A domain
name serves as a distinctive identifier for a website. It is used in place of an IP
address to make it simpler for users to visit websites. The Domain Name System
works by querying a distributed database that stores the names of hosts available
on the Internet. The top-level domain servers store address information for
top-level domains such as .com, .net, .org, and so on. When a client sends a
request, the DNS resolver sends a request to a DNS server to fetch the IP address.
If that server does not hold the mapping for the requested hostname, it forwards
the request to another DNS server. When the IP address arrives at the resolver, the
request is completed over the Internet Protocol.
For more, you can refer to Working of DNS Server.
How Does DNS Works?

Authoritative DNS Server Vs Recursive DNS Resolver

Parameters            Authoritative DNS Server        Recursive DNS Resolver

Function              Holds the official DNS          Resolves DNS queries
                      records for a domain            on behalf of clients

Role                  Provides answers to             Actively looks up
                      specific DNS queries            information for clients

Query Handling        Responds with                   Queries other DNS
                      authoritative DNS data          servers for DNS data

Client Interaction    Doesn't directly interact       Serves end-users or
                      with end-users                  client applications

Data Source           Stores the DNS records          Looks up data from
                      for specific domains            other DNS servers

Caching               Generally doesn't               Caches DNS responses
                      perform caching                 for faster lookups

Hierarchical          Does not participate in         Actively performs
Resolution            recursive resolution            recursive name resolution

IP Address            Has a fixed, known IP           IP address may vary
                      address                         depending on ISP

Zone Authority        Manages a specific DNS          Does not manage any
                      zone (domain)                   specific DNS zone

What is DNS Lookup?


DNS Lookup, or DNS Resolution, is the process that allows devices and applications
to translate human-readable domain names into the corresponding IP addresses that
computers use to communicate over the web.

What Are The Steps in a DNS Lookup?


Often, DNS lookup information is stored temporarily either on your own computer or
within the DNS system itself. There are usually 8 steps involved in a DNS lookup. If
the information is already stored (cached), some of these steps can be skipped, making
the process faster. Here is an example of all 8 steps when nothing is cached:
1. A user types “example.com” into a web browser.

2. The request goes to a DNS resolver.

3. The resolver asks a root server where to find the top-level domain (TLD)
server for .com.
4. The root server tells the resolver to contact the .com TLD server.

5. The resolver then asks the .com TLD server for the IP address of
“example.com.”
6. The .com TLD server gives the resolver the IP address of the domain’s
nameserver.
7. The resolver then asks the domain’s nameserver for the IP address of
“example.com.”
8. The domain’s nameserver returns the IP address to the resolver.
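The eight steps above are what a stub resolver triggers behind a single library call. As a sketch, Python's standard `socket.getaddrinfo` performs the whole lookup through the operating system's resolver (and whatever caches sit in between); the helper name `lookup` is illustrative.

```python
# Resolve a hostname to its IP address(es) via the OS resolver.
import socket

def lookup(hostname: str):
    # getaddrinfo returns (family, type, proto, canonname, sockaddr) tuples;
    # sockaddr[0] is the resolved IP address. Deduplicate and sort them.
    results = socket.getaddrinfo(hostname, None)
    return sorted({entry[4][0] for entry in results})
```

`lookup("localhost")` typically returns `127.0.0.1` (and `::1` where IPv6 is enabled) straight from the local host table, while a name like `example.com` would walk the resolver chain described in the steps above.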

Working of DNS

DNS Servers Involved in Loading a Webpage


Upon loading the webpage, several DNS Servers are responsible for translating the
domain name into the corresponding IP Address of the web server hosting the
website. Here is the list of main DNS servers involved in loading a Webpage.
● Local DNS Resolver

● Root DNS Servers


● Top-Level Domain (TLD) DNS Servers

● Authoritative DNS Servers

● Web Server

This hierarchical system of DNS servers ensures that when you type a domain name
into your web browser, it can be translated into the correct IP address, allowing you to
access the desired webpage on the internet.
For more information you can refer DNS Look-Up article.

What is DNS Resolver?


A DNS Resolver, also called a DNS client, initiates the process of DNS Lookup
(DNS Resolution). Using the resolver, applications can access websites and services
on the Internet through user-friendly domain names, which removes the need to
remember IP addresses.

What Are The Types of DNS Queries?


There are basically three types of DNS Queries that occur in DNS Lookup. These are
stated below.
● Recursive Query: the DNS client asks the server to fully resolve the
name on its behalf. The server must respond either with the requested
resource record or with an error message, and may itself query other
servers to find the answer.
● Iterative Query: the DNS client asks the server for the best answer it
can give. If the server does not know the final answer, it returns a
referral to another server closer to the answer, and the client continues
the lookup itself.
● Non-Recursive Query: occurs when a DNS resolver queries a server that
can answer immediately, either because it is authoritative for the record
or because the record already exists in its cache.
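The difference between the query styles can be sketched with a toy model in which nested dictionaries stand in for real name servers. All of the zone data here (`com`, `example.com`, `93.184.216.34`, the server names) is illustrative only.

```python
# Toy name-server hierarchy: root knows TLD servers, TLD servers know
# authoritative servers, authoritative servers hold the actual records.
ROOT = {"com": "tld-com"}
TLD = {"tld-com": {"example.com": "ns1"}}
AUTH = {"ns1": {"example.com": "93.184.216.34"}}

def resolve_iteratively(name: str) -> str:
    """The client itself walks root -> TLD -> authoritative (iterative queries)."""
    tld_server = ROOT[name.split(".")[-1]]   # 1. ask root for the TLD server
    auth_server = TLD[tld_server][name]      # 2. ask TLD for the authoritative server
    return AUTH[auth_server][name]           # 3. ask authoritative for the record

def resolve_recursively(name: str, cache: dict) -> str:
    """A recursive resolver does the walking on the client's behalf and caches."""
    if name not in cache:
        cache[name] = resolve_iteratively(name)
    return cache[name]
```

In the iterative style the client follows each referral itself; in the recursive style it makes one request and the resolver performs the whole chain, keeping the answer in its cache for the next client.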

What is DNS Caching?


DNS Caching is the process by which DNS resolvers store previously resolved DNS
information (domain names and their IP addresses) for some time. The purpose of DNS
caching is to speed up future DNS lookups and reduce the overall time of DNS
resolution.
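The caching idea can be sketched in a few lines: store each answer with an expiry time derived from its TTL, and re-resolve only after it expires. The `resolve` callable below stands in for a real upstream lookup; the class name, default TTL, and the `now` parameter (injected to make the behaviour easy to demonstrate) are all illustrative.

```python
# Minimal TTL-based cache sketch for DNS answers.
import time

class DnsCache:
    def __init__(self):
        self._store = {}   # name -> (ip, expires_at)

    def get(self, name, resolve, ttl=300, now=None):
        now = time.monotonic() if now is None else now
        entry = self._store.get(name)
        if entry and entry[1] > now:          # cache hit, still fresh
            return entry[0]
        ip = resolve(name)                    # miss or expired: ask upstream
        self._store[name] = (ip, now + ttl)
        return ip
```

A second lookup for the same name within the TTL is answered from the cache without touching the upstream resolver; once the TTL elapses, the next lookup resolves again, which is exactly the speedup-versus-freshness trade-off described above.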

Protocols in Application Layer



The Application Layer is the topmost layer in the Open System Interconnection (OSI)
model. This layer provides several ways for manipulating the data which enables any
type of user to access the network with ease. The Application Layer interface directly
interacts with the application and provides common web application services. The
application layer performs several kinds of functions that are required in any kind of
application or communication process. In this article, we will discuss various
application layer protocols.

What are Application Layer Protocols?

Application layer protocols are those protocols utilized at the application layer of the
OSI (Open Systems Interconnection) and TCP/IP models. They facilitate
communication and data sharing between software applications on various network
devices. These protocols define the rules and standards that allow applications to
interact and communicate quickly and effectively over a network.

Application Layer Protocol in Computer Network

1. TELNET
Telnet stands for the TELetype NETwork. It helps in terminal emulation. It allows
Telnet clients to access the resources of the Telnet server. It is used for managing files
on the Internet. It is used for the initial setup of devices like switches. The telnet
command is a command that uses the Telnet protocol to communicate with a remote
device or system. The port number of the telnet is 23.

Command

telnet [\\RemoteServer]

\\RemoteServer : Specifies the name of the server to which you want to connect

2. FTP

FTP stands for File Transfer Protocol. It is the protocol that actually lets us
transfer files, and it can facilitate this between any two machines using it. FTP
is not just a protocol but also a program. FTP promotes sharing of files via remote
computers with reliable and efficient data transfer. The port numbers for FTP are
20 for data and 21 for control.

Command

ftp machinename

3. TFTP

The Trivial File Transfer Protocol (TFTP) is the stripped-down, stock version of FTP,
but it’s the protocol of choice if you know exactly what you want and where to find it.
It’s a technology for transferring files between network devices and is a simplified
version of FTP. The Port number for TFTP is 69.

Command

tftp [ options... ] [host [port]] [-c command]

4. NFS

It stands for a Network File System. It allows remote hosts to mount file systems over
a network and interact with those file systems as though they are mounted locally.
This enables system administrators to consolidate resources onto centralized servers
on the network. The Port number for NFS is 2049.

Command

service nfs start

5. SMTP

It stands for Simple Mail Transfer Protocol. It is a part of the TCP/IP protocol. Using
a process called “store and forward,” SMTP moves your email on and across
networks. It works closely with something called the Mail Transfer Agent (MTA) to
send your communication to the right computer and email inbox. The Port number for
SMTP is 25.

Command

MAIL FROM:<sender@example.com>

6. LPD
It stands for Line Printer Daemon. It is designed for printer sharing. It is the part that
receives and processes the request. A “daemon” is a server or agent. The Port number
for LPD is 515.

Command

lpd [ -d ] [ -l ] [ -D DebugOutputFile]

7. X window

It defines a protocol for writing graphical user interface-based client/server
applications. The idea is to allow a program, called a client, to run on one
computer while its display is handled by another. It is primarily used in networks
of interconnected mainframes. Port numbers for X Window start at 6000 and increase
by 1 for each additional server.

Command

Run xdm in runlevel 5

8. SNMP

It stands for Simple Network Management Protocol. It gathers data by polling the
devices on the network from a management station at fixed or random intervals,
requiring them to disclose certain information. It is a way for servers to share
information about their current state, and also a channel through which an
administrator can modify pre-defined values. SNMP uses port 161 (agent) and port
162 (traps), both over UDP.

Command

snmpget -mALL -v1 -cpublic snmp_agent_Ip_address sysName.0


9. DNS

It stands for Domain Name System. Every time you use a domain name, a DNS service
must translate the name into the corresponding IP address. For example, the domain
name www.abc.com might translate to 198.105.232.4.
The Port number for DNS is 53.

Command

ipconfig /flushdns

10. DHCP

It stands for Dynamic Host Configuration Protocol (DHCP). It gives IP addresses to


hosts. There is a lot of information a DHCP server can provide to a host when the host
is registering for an IP address with the DHCP server. Port number for DHCP is 67,
68.

Command

clear ip dhcp binding {address | * }

11. HTTP/HTTPS

HTTP stands for Hypertext Transfer Protocol, and HTTPS (Hypertext Transfer Protocol
Secure) is its more secure version. This protocol is used to access data on the
World Wide Web. Hypertext is a well-organized documentation system in which pages
of text are linked to one another.

● HTTP is based on the client-server model.


● It uses TCP for establishing connections.
● HTTP is a stateless protocol, which means the server doesn’t maintain any
information about the previous request from the client.
● HTTP uses port number 80 for establishing the connection.
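The client-server, request/response pattern in the bullets above can be demonstrated end to end with only the standard library: `http.server` provides a throwaway server and `http.client` plays the stateless client. The handler class, path, and response body below are purely illustrative, and the server runs on a loopback port chosen by the OS rather than port 80.

```python
# One complete HTTP GET round trip over loopback, stdlib only.
import http.client
import http.server
import threading

class Hello(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"hello"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):   # silence per-request logging
        pass

def fetch_once(path="/"):
    server = http.server.HTTPServer(("127.0.0.1", 0), Hello)
    port = server.server_address[1]
    t = threading.Thread(target=server.handle_request)  # serve exactly one request
    t.start()
    conn = http.client.HTTPConnection("127.0.0.1", port)
    conn.request("GET", path)            # each request is independent (stateless)
    resp = conn.getresponse()
    status, body = resp.status, resp.read()
    conn.close()
    t.join()
    server.server_close()
    return status, body
```

The server keeps no memory of the request once it has answered it, which is the statelessness the bullet list refers to; any continuity (sessions, logins) has to be layered on top with cookies or tokens.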

12. POP

POP stands for Post Office Protocol and the latest version is known as POP3 (Post
Office Protocol version 3). This is a simple protocol used by User agents for message
retrieval from mail servers.

● POP protocol work with Port number 110.


● It uses TCP for establishing connections.

POP works in two modes: Delete mode and Keep mode.

In Delete mode, it deletes the message from the mail server once they are downloaded
to the local system.

In Keep mode, it doesn’t delete the message from the mail server and also facilitates
the users to access the mails later from the mail server.

13. IRC

IRC stands for Internet Relay Chat. It is a text-based instant messaging/chatting


system. IRC is used for group or one-to-one communication. It also supports file,
media, data sharing within the chat. It works upon the client-server model. Where
users connect to IRC server or IRC network via some web/ standalone application
program.
● It uses TCP or TLS for connection establishment.
● It makes use of port number 6667.

14. MIME

MIME stands for Multipurpose Internet Mail Extension. This protocol is designed to
extend the capabilities of the existing Internet email protocol like SMTP. MIME
allows non-ASCII data to be sent via SMTP. It allows users to send/receive various
kinds of files over the Internet like audio, video, programs, etc. MIME is not a
standalone protocol it works in collaboration with other protocols to extend their
capabilities.
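The core MIME idea can be shown in three lines: arbitrary binary content is encoded into a 7-bit-safe alphabet that classic SMTP can carry, then decoded at the receiver. base64 is one of MIME's standard content-transfer-encodings; the payload bytes below are arbitrary examples.

```python
# Encoding binary data into 7-bit-safe ASCII, as MIME does for email.
import base64

payload = bytes([0x00, 0xFF, 0x7F, 0x80])   # binary, not valid 7-bit ASCII
encoded = base64.b64encode(payload)         # b'AP9/gA==' - pure ASCII
decoded = base64.b64decode(encoded)         # original bytes restored
```

Every byte of `encoded` is below 128, so it can pass through a transport that only guarantees 7-bit ASCII, and the receiver recovers the original payload exactly.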

Application layer protocols are required to enable communication and data exchange
between software applications on different network devices. These protocols, which
include HTTP, FTP, SMTP, and DNS, specify the rules and standards that enable
applications to communicate easily across a network. Each protocol serves a distinct
purpose, ranging from file transfer and email management to network device
configuration and web page access, providing efficient and effective network
connection.

Difference Between SMTP and HTTP



Pre-requisites: SMTP, HTTP


A network protocol is an accepted set of rules that govern data communication
between different devices in the network. In this article, we will see the difference
between SMTP and HTTP protocols.

SMTP

SMTP (Simple Mail Transfer Protocol) is a protocol for managing the Internet's


electronic mail. It is an application layer protocol. It uses TCP due to its reliable data
transfer service. TCP establishes SMTP connections at port 25. SMTP uses persistent
connections. The same TCP connection can be used to send multiple emails, once the
connection has been established. Only 7-bit ASCII content is to be directly sent. Other
content needs to be encoded to 7-bit ASCII and then decoded at the receiving end.

DNS uses distributed servers so that data remains distributed across many places
and the per-server load decreases. SMTP, by contrast, does not use intermediate
mail servers: mail sent by user A to user B goes directly from A's mail server to
B's mail server, with nothing in between.
HTTP

HTTP is a client-server protocol. It is an IP-based communication protocol used to
deliver data from server to client or vice versa. Any type of content can be
exchanged as long as the server and client are compatible with it.

Difference between SMTP and HTTP

SMTP                                      HTTP

SMTP is used for mail services.           HTTP is mainly used for data and file
                                          transfer.

It uses port 25.                          It uses port 80.

It is primarily a push protocol.          It is primarily a pull protocol.

It imposes a 7-bit ASCII restriction      It does not impose a 7-bit ASCII
on the content to be transferred.         restriction. Can transfer multimedia,
                                          hyperlinks, etc.

SMTP transfers emails via mail            HTTP transfers files between the Web
servers.                                  server and the Web client.

SMTP uses persistent TCP                  It can use both persistent and
connections.                              non-persistent connections.

Uses base64 encoding for                  Uses different methods of
authentication.                           authentication such as basic, digest,
                                          and OAuth.

Does not support session management       Supports session management and
or cookies.                               cookies to maintain state.

Has a smaller message size limit          Has a larger message size limit
compared to HTTP.                         compared to SMTP.

Requires authentication for sending       Does not require authentication for
emails.                                   browsing web pages.

Supports both plain text and              Supports both plain text and
encrypted communication (SMTPS or         encrypted communication (HTTPS).
STARTTLS).

UDP Protocol
In computer networking, UDP stands for User Datagram Protocol. David P. Reed
developed the UDP protocol in 1980. It is defined in RFC 768, and it is a part of the
TCP/IP protocol suite, so it is a standard protocol over the internet. The UDP
protocol allows computer applications to send messages in the form of datagrams from
one machine to another over an Internet Protocol (IP) network. UDP is an alternative
communication protocol to TCP (Transmission Control Protocol). Like TCP, UDP
provides a set of rules that governs how data should be exchanged over the internet.
UDP works by encapsulating the data into a packet and providing its own header
information to the packet. This UDP packet is then encapsulated in an IP packet and
sent off to its destination. Both the TCP and UDP protocols send data over the
internet protocol network, so they are also known as TCP/IP and UDP/IP. There are
many differences between these two protocols. Both extend IP's host-to-host delivery
to process-to-process communication. Since UDP sends messages in the form of
independent datagrams, it is considered a best-effort mode of communication. TCP
acknowledges and retransmits individual segments, so it is a reliable transport
medium. Another difference is that TCP is a connection-oriented protocol, whereas
UDP is a connectionless protocol, as it does not require any virtual circuit to
transfer the data.

UDP also provides a different port number to distinguish different user requests and also
provides the checksum capability to verify whether the complete data has arrived or not; the IP
layer does not provide these two services.

Features of UDP protocol

The following are the features of the UDP protocol:

Transport layer protocol

UDP is the simplest transport layer communication protocol. It contains a minimum amount of
communication mechanisms. It is considered an unreliable protocol, and it is based on best-effort
delivery services. UDP provides no acknowledgment mechanism, which means that the receiver
does not send the acknowledgment for the received packet, and the sender also does not wait for
the acknowledgment for the packet that it has sent.

○ Connectionless

UDP is a connectionless protocol, as it does not create a virtual path to transfer
the data. Because there is no virtual path, packets may travel along different paths
between the sender and the receiver, which can lead to packet loss or out-of-order
delivery.

Ordered delivery of data is not guaranteed.

In the case of UDP, there is no guarantee that datagrams sent in a given order will
be received in that order, as the datagrams are not numbered.

○ Ports

The UDP protocol uses port numbers so that data can be sent to the correct
destination. Port numbers are 16-bit values ranging from 0 to 65,535; the
well-known ports are those between 0 and 1023.

○ Faster transmission

UDP enables faster transmission as it is a connectionless protocol, i.e., no virtual path is required
to transfer the data. But there is a chance that the individual packet is lost, which affects the
transmission quality. On the other hand, if the packet is lost in TCP connection, that packet will
be resent, so it guarantees the delivery of the data packets.

○ Acknowledgment mechanism
UDP does not have any acknowledgment mechanism, i.e., there is no handshaking
between the UDP sender and the UDP receiver. With TCP, the receiver first signals
that it is ready, and only then does the sender transmit the data; in TCP,
handshaking occurs between the sender and the receiver, whereas in UDP there is no
handshaking at all.

○ Segments are handled independently.

Each UDP segment is handled independently of the others, as each segment may take a
different path to reach the destination. UDP segments can be lost or delivered out
of order because there is no connection setup between the sender and the receiver.

○ Stateless

It is a stateless protocol, meaning that neither endpoint maintains connection
state; the sender does not even receive an acknowledgement for the packets it has
sent.

Why do we require the UDP protocol?

As we know, UDP is an unreliable protocol, but we still require it in some cases.
UDP is deployed where acknowledging every packet would consume a large amount of
bandwidth alongside the actual data. For example, in video streaming, acknowledging
thousands of packets is troublesome and wastes a lot of bandwidth. In video
streaming, the loss of some packets does not create a problem and can simply be
ignored.
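The connectionless, fire-and-forget behaviour described above is visible in a minimal datagram exchange over the loopback interface: no connection setup, no ACKs, each datagram sent and read in one shot. The helper name and message are illustrative, and the port is chosen by the OS.

```python
# One UDP datagram round trip over loopback: no handshake, no acknowledgment.
import socket

def udp_round_trip(message: bytes) -> bytes:
    receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    receiver.bind(("127.0.0.1", 0))      # OS assigns a free port
    receiver.settimeout(5.0)             # avoid hanging if the datagram is lost
    port = receiver.getsockname()[1]

    sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sender.sendto(message, ("127.0.0.1", port))   # fire and forget

    data, _addr = receiver.recvfrom(2048)  # one datagram, delivered whole
    sender.close()
    receiver.close()
    return data
```

Note the contrast with the TCP example earlier: there is no `connect`/`accept` handshake, and if the datagram were dropped the sender would never know, which is exactly the best-effort trade-off streaming applications accept.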

UDP Header Format


In UDP, the header size is 8 bytes, and the maximum packet size is 65,535 bytes.
This full size is not achievable in practice, because the datagram must be
encapsulated in an IP packet whose header takes at least 20 bytes; the maximum UDP
packet is therefore 65,535 minus 20 = 65,515 bytes. The maximum data a UDP packet
can carry is 65,535 minus 28 = 65,507 bytes, accounting for the 8-byte UDP header
and the 20-byte IP header.

The UDP header contains four fields:

○ Source port number: It is a 16-bit field that identifies which port is going to
send the packet.

○ Destination port number: It identifies which port is going to accept the information. It
is 16-bit information which is used to identify application-level service on the destination
machine.

○ Length: It is a 16-bit field that specifies the entire length of the UDP packet,
including the header. The minimum value is 8 bytes, the size of the header alone.

○ Checksum: It is a 16-bit, optional field. The checksum verifies whether the
information is accurate, as data can be corrupted during transmission. Being
optional means the application decides whether to compute the checksum; if it
does not, all 16 bits are set to zero. In UDP, the checksum covers the entire
packet, i.e., the header as well as the data part, whereas in IP the checksum
covers only the header.
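The four 16-bit fields just described can be packed and unpacked with Python's `struct` module, and the checksum computed with the standard one's-complement internet checksum. As a simplifying assumption, the checksum below is shown over an arbitrary byte string rather than the full pseudo-header-plus-data that real UDP covers; the port numbers are illustrative.

```python
# The 8-byte UDP header: "!HHHH" = four 16-bit fields in network byte order
# (source port, destination port, length, checksum).
import struct

def pack_udp_header(src_port, dst_port, payload_len, checksum=0):
    return struct.pack("!HHHH", src_port, dst_port, 8 + payload_len, checksum)

def internet_checksum(data: bytes) -> int:
    """One's-complement sum of 16-bit words, with end-around carry."""
    if len(data) % 2:
        data += b"\x00"                 # pad odd-length input with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold carries back in
    return ~total & 0xFFFF

header = pack_udp_header(src_port=5000, dst_port=53, payload_len=12)
src, dst, length, csum = struct.unpack("!HHHH", header)   # (5000, 53, 20, 0)
```

The defining property of the internet checksum is that appending the computed checksum to the data and summing again yields zero, which is how the receiver verifies the packet.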

Concept of Queuing in UDP protocol


In the UDP protocol, port numbers are used to distinguish the different processes
on a server and a client. UDP provides process-to-process communication: the client
generates the processes that request services, while the server generates the
processes that provide them. Queues are available for both, i.e., two queues for
each process: an incoming queue that receives messages and an outgoing queue that
sends them. A queue functions while its process is running; if the process
terminates, its queues are destroyed as well.

UDP handles the sending and receiving of the UDP packets with the help of the following
components:

○ Input queue: UDP maintains a set of queues for each process.

○ Input module: This module takes the user datagram from the IP, and then it finds the
information from the control block table of the same port. If it finds the entry in the
control block table with the same port as the user datagram, it enqueues the data.

○ Control Block Module: It manages the control block table.

○ Control Block Table: The control block table contains the entry of open ports.

○ Output module: The output module creates and sends the user datagram.

Several processes want to use the services of UDP. The UDP multiplexes and demultiplexes the
processes so that the multiple processes can run on a single host.
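The input module and control block table described above can be sketched as a small demultiplexer: a dict maps each open destination port to that process's input queue, and datagrams for ports with no entry are discarded. The port numbers and payloads here are illustrative.

```python
# Toy model of UDP demultiplexing by destination port.
from collections import deque

# Control block table: open ports mapped to their processes' input queues.
control_block_table = {53: deque(), 123: deque()}

def input_module(dst_port: int, datagram: bytes) -> bool:
    """Enqueue a datagram for the process that owns dst_port, if any."""
    queue = control_block_table.get(dst_port)
    if queue is None:
        return False            # no process has this port open: discard
    queue.append(datagram)      # enqueue for the owning process to consume
    return True
```

This is the multiplexing/demultiplexing role in miniature: many processes share one host address, and the destination port alone decides which input queue a datagram lands in.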

Limitations

○ It provides an unreliable connection delivery service. It does not provide any services of
IP except that it provides process-to-process communication.

○ The UDP message can be lost, delayed, duplicated, or can be out of order.

○ It does not provide a reliable transport delivery service: there is no
acknowledgment or flow control mechanism. However, the optional checksum does
provide error detection to some extent.

Advantages

○ It produces a minimal number of overheads.


Authoritative DNS Server Vs Recursive DNS Resolver

| Parameters | Authoritative DNS Server | Recursive DNS Resolver |
| --- | --- | --- |
| Function | Holds the official DNS records for a domain | Resolves DNS queries on behalf of clients |
| Role | Provides answers to specific DNS queries | Actively looks up information for clients |
| Query Handling | Responds with authoritative DNS data | Queries other DNS servers for DNS data |
| Client Interaction | Doesn't directly interact with end-users | Serves end-users or client applications |
| Data Source | Stores the DNS records for specific domains | Looks up data from other DNS servers |
| Caching | Generally doesn't perform caching | Caches DNS responses for faster lookups |
| Hierarchical Resolution | Does not participate in the recursive resolution | Actively performs recursive name resolution |
| IP Address | Has a fixed, known IP address | IP address may vary depending on ISP |
| Zone Authority | Manages a specific DNS zone (domain) | Does not manage any specific DNS zone |

What is DNS Lookup?

DNS Lookup (or DNS Resolution) is the process by which devices and applications
translate human-readable domain names into the corresponding IP addresses that
computers use to communicate over the web.
What Are The Steps in a DNS Lookup?

Often, DNS lookup information is stored temporarily either on your own computer or
within the DNS system itself. There are usually 8 steps involved in a DNS lookup. If
the information is already stored (cached), some of these steps can be skipped, making
the process faster. Here is an example of all 8 steps when nothing is cached:

1. A user types “example.com” into a web browser.


2. The request goes to a DNS resolver.
3. The resolver asks a root server where to find the top-level domain (TLD)
server for .com.
4. The root server tells the resolver to contact the .com TLD server.
5. The resolver then asks the .com TLD server for the IP address of
“example.com.”
6. The .com TLD server gives the resolver the IP address of the domain’s
nameserver.
7. The resolver then asks the domain’s nameserver for the IP address of
“example.com.”
8. The domain’s nameserver returns the IP address to the resolver.
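The eight steps above can be sketched as a toy simulation. All of the server data here (the root, TLD, and authoritative record dictionaries, and the IP address) is made up for illustration; a real resolver sends UDP queries over the network at each hop:

```python
def resolve(domain, resolver_cache=None):
    """Toy walk through the 8-step DNS lookup with hypothetical records."""
    # Hypothetical data each class of server would hold:
    root = {".com": "tld-server"}                     # steps 3-4: root refers to TLD
    tld = {"example.com": "ns1.example.com"}          # steps 5-6: TLD refers to nameserver
    authoritative = {"example.com": "93.184.216.34"}  # steps 7-8: example IP address

    cache = resolver_cache if resolver_cache is not None else {}
    if domain in cache:
        return cache[domain]          # a cached answer skips the walk entirely

    tld_server = root["." + domain.rsplit(".", 1)[-1]]  # ask root for the TLD server
    nameserver = tld[domain]                            # ask TLD for the nameserver
    ip = authoritative[domain]                          # ask nameserver for the record
    cache[domain] = ip                                  # remember it for next time
    return ip

cache = {}
resolve("example.com", cache)   # full walk: root -> TLD -> authoritative
resolve("example.com", cache)   # second call is answered from the cache
```

The intermediate `tld_server` and `nameserver` lookups stand in for the referral chain; in practice each one is a separate query-and-response round trip.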
Working of DNS

DNS Servers Involved in Loading a Webpage

Upon loading the webpage, several DNS Servers are responsible for translating the
domain name into the corresponding IP Address of the web server hosting the
website. Here is the list of main DNS servers involved in loading a Webpage.

● Local DNS Resolver


● Root DNS Servers
● Top-Level Domain (TLD) DNS Servers
● Authoritative DNS Servers
● Web Server
This hierarchical system of DNS servers ensures that when you type a domain name
into your web browser, it can be translated into the correct IP address, allowing you to
access the desired webpage on the internet.

For more information, you can refer to the DNS Look-Up article.

What is DNS Resolver?

A DNS Resolver, also simply called a DNS client, initiates the process of DNS
Lookup (DNS Resolution). Through the resolver, applications can reach websites and
services on the Internet using user-friendly domain names, which solves the problem
of having to remember IP addresses.

What Are The Types of DNS Queries?

There are basically three types of DNS Queries that occur in DNS Lookup. These are
stated below.

● Recursive Query: The DNS client asks the server to take full responsibility
for the answer: the server must respond either with the requested resource
record or with an error message; it may not refer the client elsewhere.
● Iterative Query: The DNS client asks the server for the best answer it can
give; if the server does not know the answer, it returns a referral to
another DNS server instead of resolving the name itself.
● Non-Recursive Query: This query occurs when a DNS resolver asks a server for
a record the server can answer immediately, either because it is
authoritative for the record or because the record exists in its cache.
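On the wire, a recursive query differs from an iterative one by a single header bit: the RD (Recursion Desired) flag defined in RFC 1035. The sketch below builds a minimal DNS query message by hand; the function name and default transaction ID are invented for this example:

```python
import struct

def build_dns_query(domain: str, txid: int = 0x1234,
                    recursion_desired: bool = True) -> bytes:
    """Build a minimal DNS query for an A record (RFC 1035 wire format)."""
    # Header: ID, FLAGS, QDCOUNT=1, ANCOUNT=0, NSCOUNT=0, ARCOUNT=0
    flags = 0x0100 if recursion_desired else 0x0000  # RD is bit 8 of FLAGS
    header = struct.pack(">HHHHHH", txid, flags, 1, 0, 0, 0)
    # QNAME: each label is length-prefixed; the name ends with a zero byte
    qname = b"".join(bytes([len(label)]) + label.encode("ascii")
                     for label in domain.split(".")) + b"\x00"
    question = qname + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN
    return header + question

recursive = build_dns_query("example.com", recursion_desired=True)
iterative = build_dns_query("example.com", recursion_desired=False)
# The RD bit in the FLAGS field is the only difference between the two queries
assert recursive[2:4] == b"\x01\x00"
assert iterative[2:4] == b"\x00\x00"
```

A stub resolver on an end host typically sets RD and lets its configured recursive resolver do the walking; the resolver itself clears RD when it queries root, TLD, and authoritative servers iteratively.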
What is DNS Caching?

DNS Caching is the process by which DNS resolvers store previously resolved
information (domain names and their IP addresses) for a limited time. Caching
speeds up future DNS lookups and reduces the overall DNS resolution time.
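In real resolvers, how long a record may be cached is controlled by the TTL (time to live) that the authoritative server attaches to each record. A minimal TTL-based cache can be sketched as follows; the class name and the records used are hypothetical:

```python
import time

class DNSCache:
    """Minimal TTL-based resolver cache (a sketch, not a real resolver)."""
    def __init__(self):
        self._store = {}  # name -> (ip, expiry timestamp)

    def put(self, name, ip, ttl):
        # Store the record along with the moment it stops being valid
        self._store[name] = (ip, time.monotonic() + ttl)

    def get(self, name):
        entry = self._store.get(name)
        if entry is None:
            return None                  # cache miss: a full lookup is needed
        ip, expiry = entry
        if time.monotonic() >= expiry:   # record expired: evict and report a miss
            del self._store[name]
            return None
        return ip

cache = DNSCache()
cache.put("example.com", "93.184.216.34", ttl=300)  # hypothetical record
assert cache.get("example.com") == "93.184.216.34"
cache.put("old.example", "10.0.0.1", ttl=-1)        # already expired on insert
assert cache.get("old.example") is None
```

Evicting on read keeps the sketch simple; production caches also bound total size and may serve slightly stale records under specific failure conditions.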
