
The Network Layer

Design Issues
The Network Layer is a critical layer in the
OSI (Open Systems Interconnection) model
that focuses on the routing and forwarding of
data packets across multiple networks to
ensure that they reach their intended
destination. Its main function is to manage and
facilitate data transmission between two
devices on potentially different networks. The
Network Layer is responsible for logical
addressing, routing, and forwarding, which are
essential for the establishment of
communication paths in large, interconnected
networks.
Key Design Issues in the
Network Layer
Designing an efficient and effective network layer
involves addressing several key issues to ensure
reliable and accurate data transmission. Here are the
main design issues:
1) Routing
• Routing is the process of determining the
optimal path for data packets to travel from the
source to the destination across different
networks. The network layer uses routing
algorithms to select paths based on various
metrics (e.g., shortest path, least cost, fastest
route).
• Challenges: The dynamic nature of networks,
such as changes in topology or traffic load, makes
it essential to have adaptive and efficient routing
protocols. Examples include Link State Routing
and Distance Vector Routing.
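As an illustration of a link-state style routing computation, here is a minimal sketch of Dijkstra's shortest-path algorithm in Python; the topology, node names, and link costs are invented for the example.

```python
import heapq

def shortest_paths(graph, source):
    """Dijkstra's algorithm: compute least-cost paths from `source`.

    `graph` maps each node to a dict of {neighbor: link_cost}.
    Returns {node: (total_cost, previous_hop)} for every reachable node.
    """
    dist = {source: (0, None)}
    pq = [(0, source)]                      # priority queue of (cost, node)
    while pq:
        cost, node = heapq.heappop(pq)
        if cost > dist[node][0]:
            continue                        # stale queue entry
        for neighbor, link_cost in graph[node].items():
            new_cost = cost + link_cost
            if neighbor not in dist or new_cost < dist[neighbor][0]:
                dist[neighbor] = (new_cost, node)
                heapq.heappush(pq, (new_cost, neighbor))
    return dist

# Hypothetical topology: routers A-D with symmetric link costs.
topology = {
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 2, "D": 5},
    "C": {"A": 4, "B": 2, "D": 1},
    "D": {"B": 5, "C": 1},
}
print(shortest_paths(topology, "A"))   # D is reached via A->B->C->D at total cost 4
```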
2) Packet Forwarding
Forwarding refers to the actual process of moving
packets from one router to another along the path
toward the destination. Forwarding tables, often
maintained in each router, are used to look up the
next hop for each incoming packet.
Challenges: Forwarding must be fast to avoid
delays in packet delivery, and mechanisms are
needed to handle forwarding errors and loops that
could lead to packet loss or delays.
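The sketch below illustrates the basic idea of a forwarding-table lookup using longest-prefix matching; it is not a real router implementation, and the prefixes and next-hop names are hypothetical.

```python
import ipaddress

# Hypothetical forwarding table: (prefix, next hop). Real routers use
# specialized structures (e.g. tries) for speed; a list is enough to show the idea.
forwarding_table = [
    (ipaddress.ip_network("10.0.0.0/8"),  "router-B"),
    (ipaddress.ip_network("10.1.0.0/16"), "router-C"),
    (ipaddress.ip_network("0.0.0.0/0"),   "default-gateway"),
]

def next_hop(destination: str) -> str:
    """Return the next hop for the most specific (longest) matching prefix."""
    addr = ipaddress.ip_address(destination)
    matches = [(net.prefixlen, hop) for net, hop in forwarding_table if addr in net]
    return max(matches)[1]   # longest prefix wins

print(next_hop("10.1.2.3"))    # -> router-C (the /16 is more specific than the /8)
print(next_hop("192.0.2.7"))   # -> default-gateway
```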
3) Addressing
The network layer must uniquely identify each
device on a network, typically using logical
addresses (IP addresses).
Challenges: Address management is a major
concern, particularly with the finite IPv4 address
space. Techniques like IPv6, Network Address
Translation (NAT), and hierarchical addressing help
mitigate this limitation.
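As a small illustration of logical addressing, the sketch below uses Python's standard ipaddress module to contrast IPv4 and IPv6 addresses and to show hierarchical (prefix-based) addressing; the specific addresses are arbitrary examples.

```python
import ipaddress

# IPv4: 32-bit addresses, roughly 4.3 billion in total -- the finite space noted above.
v4 = ipaddress.ip_address("192.0.2.10")
print(v4.version, int(v4))            # 4  3221225994

# IPv6: 128-bit addresses, vastly enlarging the address space.
v6 = ipaddress.ip_address("2001:db8::1")
print(v6.version, v6.exploded)        # 6  2001:0db8:0000:0000:0000:0000:0000:0001

# Hierarchical addressing: a network prefix groups many host addresses.
subnet = ipaddress.ip_network("192.0.2.0/24")
print(subnet.num_addresses, v4 in subnet)   # 256  True
```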
4) Error Handling and Diagnostics
The network layer is responsible for detecting and
sometimes correcting errors that occur during packet
transmission. The network layer often provides
diagnostics to inform about errors such as packet loss,
TTL (Time to Live) expiration, or unreachable
destinations.
Challenges: Providing timely and accurate
diagnostics, as well as managing errors that could
impact the performance of the network.
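To show how a corrupted packet can be detected, here is a minimal sketch of a 16-bit ones'-complement checksum in the style of the IPv4 header checksum; the packet contents are arbitrary and this is not a full protocol implementation.

```python
def internet_checksum(data: bytes) -> int:
    """16-bit ones'-complement sum, the style used for IPv4 header checksums."""
    if len(data) % 2:
        data += b"\x00"                            # pad to whole 16-bit words
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return ~total & 0xFFFF

packet = b"example network layer payload"
checksum = internet_checksum(packet)

# Receiver side: recompute and compare. A single corrupted byte changes the result.
corrupted = b"exbmple network layer payload"
print(internet_checksum(packet) == checksum)      # True  -> accept
print(internet_checksum(corrupted) == checksum)   # False -> discard or report
```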
5) Congestion Control
Congestion occurs when the network is overwhelmed
with too much data, causing packet delays, packet
loss, and reduced network performance.
Challenges: Effective congestion control requires real-
time monitoring and adaptive strategies, which can be
difficult in complex and dynamic networks. Techniques
like packet prioritization, traffic shaping, and queue
management help mitigate congestion.
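As one example of the traffic-shaping techniques mentioned above, the following is a simplified sketch of a token-bucket shaper; the rate and bucket size are arbitrary illustration values.

```python
import time

class TokenBucket:
    """Simple token-bucket shaper: packets may be sent only while tokens remain."""
    def __init__(self, rate_bytes_per_sec: float, bucket_size: float):
        self.rate = rate_bytes_per_sec
        self.capacity = bucket_size
        self.tokens = bucket_size
        self.last = time.monotonic()

    def allow(self, packet_size: int) -> bool:
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, up to the bucket capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_size <= self.tokens:
            self.tokens -= packet_size
            return True          # send now
        return False             # queue or drop: sending would exceed the allowed rate

shaper = TokenBucket(rate_bytes_per_sec=1000, bucket_size=1500)
print(shaper.allow(1200))   # True: the bucket starts full
print(shaper.allow(1200))   # False: tokens exhausted until the bucket refills
```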
6) Quality of Service (QoS)
Quality of Service refers to the network
layer’s ability to provide different levels of
service to different types of traffic, ensuring
that high-priority or time-sensitive data (such
as VoIP or streaming video) receives
preferential treatment.
Challenges: Implementing QoS requires a
sophisticated approach to resource
allocation, scheduling, and traffic
prioritization, which can be complex,
especially on large networks with diverse traffic types.
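Below is a minimal sketch of priority-based scheduling, one way a router might give time-sensitive traffic such as VoIP preferential treatment; the traffic classes and packets are invented for the example.

```python
import heapq
from itertools import count

# Lower number = higher priority. Hypothetical traffic classes for illustration.
PRIORITY = {"voip": 0, "video": 1, "bulk": 2}

queue, order = [], count()   # `order` keeps FIFO behaviour within a class

def enqueue(traffic_class: str, packet: str) -> None:
    heapq.heappush(queue, (PRIORITY[traffic_class], next(order), packet))

def transmit_next() -> str:
    """Always transmit the highest-priority packet that is waiting."""
    _, _, packet = heapq.heappop(queue)
    return packet

enqueue("bulk", "file chunk 1")
enqueue("voip", "voice frame 7")
enqueue("video", "video segment 3")
print(transmit_next())   # voice frame 7 -- VoIP jumps ahead of earlier bulk traffic
print(transmit_next())   # video segment 3
print(transmit_next())   # file chunk 1
```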
Store and Forward Packet Switching Service
Store-and-Forward Packet Switching is a method
of data transmission used in computer networks where
each router or intermediate device receives, stores,
and then forwards packets to the next node on the
route towards their destination. In this model, packets
are temporarily stored in memory before being sent to
the next hop, allowing the network to handle data
transmission more efficiently. This method is
fundamental in networks because it ensures data
integrity, even if there is network congestion or other
temporary issues in the transmission path.
How Store-and-Forward Packet Switching Works
• Packet Reception and Storage: When a data
packet arrives at a router or intermediate device, it
is first fully received and stored in a buffer. This
means that each router waits until the entire packet
is received before processing or forwarding it.
• Error Checking: Once stored, the router can
perform error checking on the packet. If an error is
detected (e.g., a checksum mismatch), the router
may discard the packet or request a retransmission,
depending on the protocol in use. This error-
checking step ensures that corrupted packets are
not forwarded further, helping to maintain data
integrity.
• Forwarding Decision: After the packet is stored
and verified, the router examines the packet header
to determine its destination address and looks up
the best route in its routing table. Based on this
information, it forwards the packet to the
appropriate next hop along the path to its
destination.
• Transmission to the Next Node: The packet is
transmitted to the next router or device in the path,
where the same store-and-forward process repeats
until the packet reaches its final destination.
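The steps above can be summarized in code. The following is a highly simplified sketch of a store-and-forward loop; the Packet class, the trivial checksum, and the forwarding table are placeholders standing in for real link, error-checking, and routing machinery.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    destination: str
    payload: bytes
    checksum: int

# Hypothetical forwarding table used only for this illustration.
FORWARDING_TABLE = {"hostB": "router2", "hostC": "router3"}

def verify(packet: Packet) -> bool:
    """Stand-in error check: recompute a trivial checksum over the payload."""
    return sum(packet.payload) % 65536 == packet.checksum

def store_and_forward(packet: Packet) -> None:
    # 1. Reception and storage: the packet is assumed to have been fully
    #    received and buffered before any processing happens.
    buffered = packet

    # 2. Error checking: corrupted packets are discarded, not forwarded.
    if not verify(buffered):
        print("checksum mismatch - packet discarded")
        return

    # 3. Forwarding decision: consult the forwarding table for the next hop.
    next_hop = FORWARDING_TABLE.get(buffered.destination, "default-gateway")

    # 4. Transmission to the next node, where the same process repeats.
    print(f"forwarding packet for {buffered.destination} to {next_hop}")

payload = b"hello"
store_and_forward(Packet("hostB", payload, sum(payload) % 65536))
```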
Store-and-Forward Packet Switching
Service for the Transport Layer
The Transport Layer (Layer 4 in the OSI model) relies
on the Network Layer (Layer 3) to deliver data across
networks. Store-and-forward packet switching plays a
vital role in this service in the following ways:
• Reliable Data Transfer: By storing and checking
each packet for errors, store-and-forward switching
helps maintain the integrity of the data being
transmitted. This is crucial for the transport layer,
which is responsible for providing end-to-end reliable
data transfer.
• Congestion Management: When a router’s buffer
fills up due to network congestion, it can hold
packets temporarily or, if necessary, drop packets
selectively. This congestion management at the
network layer helps prevent the transport layer from
being overwhelmed.
• Efficient Routing and Load Balancing: Store-
and-forward allows routers to make intelligent
routing decisions based on the current state of the
network. This flexibility in routing supports the
transport layer’s need to deliver data quickly and
reliably.
• Packet Sequencing and Reordering: Store-and-
forward packet switching allows packets to be
processed individually. This characteristic can help
in reordering packets that may arrive out of
sequence, which is a common task of the transport
layer, especially in protocols like TCP.
Implementation of Connectionless Service
In a connectionless service, data is transmitted
without establishing a dedicated end-to-end
connection between the sender and receiver. This
approach is often compared to sending a letter through
the postal service, where each message is sent
independently without any prior arrangement between
sender and receiver. In computer networks, the
connectionless model is commonly implemented using
the User Datagram Protocol (UDP) at the Transport
Layer and Internet Protocol (IP) at the Network Layer.
How Connectionless Service Works
• Independent Data Transmission: Each packet, often
referred to as a "datagram" in connectionless
communication, is sent independently of others. There
is no predefined path; each datagram contains all
necessary destination addressing information, which
allows it to be delivered without relying on other packets.
• No Session Establishment: Unlike connection-
oriented services, there is no need to establish or
terminate a connection between the sender and
receiver. This removes the overhead associated with
setting up, managing, and tearing down a session.
• Best-Effort Delivery: Connectionless services are
typically "best-effort," meaning they do not
guarantee delivery, order, or error correction. Packets
may arrive out of order, be duplicated, or be lost,
depending on network conditions. Error handling and
reordering, if necessary, are managed by higher
layers or the application itself.
• Stateless: The network and devices involved do not
keep track of the state of the connection or
previously sent data. Each packet is independent,
meaning that routers and devices only need to know
the destination address to forward each packet.
Implementation of Connectionless Service
The implementation of a connectionless service
involves several key components and principles:
1. Internet Protocol (IP)
• Role of IP: The IP protocol at the Network Layer
provides the foundation for a connectionless service in
many networks, especially in the Internet. IP assigns
unique addresses to devices and is responsible for
packet forwarding based on these addresses.
• Packet Handling: Each IP packet (datagram) includes
a header with information such as the source and
destination IP addresses, but there is no sequence
number or control information connecting it to other
packets.
• Routing Decisions: Routers use the destination
address in each packet to determine the best path for
forwarding it toward the final destination. Because each
packet is independent, it may take a different path from
other packets sent to the same destination.
2. User Datagram Protocol (UDP)
• Role of UDP: At the Transport Layer, UDP is the
primary protocol for implementing a connectionless
service. It allows applications to send messages
without establishing a formal connection, making it
faster and more lightweight than TCP.
• Minimal Overhead: UDP headers are simple and
include only essential fields like source port,
destination port, length, and checksum. This
minimalism reduces latency and overhead but
sacrifices reliability.
• No Flow Control or Retransmission: UDP does
not have mechanisms for flow control, error
correction, or retransmission. Applications using UDP
need to handle these features if necessary or accept
potential packet loss.
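The sketch below shows connectionless communication with UDP using Python's standard socket module; the port number and message are arbitrary, and a real application would add its own loss handling if needed.

```python
import socket

# --- Receiver: bind to a local port and wait for datagrams (no connection needed).
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 50007))          # arbitrary example port

# --- Sender: just send; there is no handshake and no session state.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello over UDP", ("127.0.0.1", 50007))

# Each datagram arrives (if it arrives at all) as an independent message.
data, addr = receiver.recvfrom(2048)
print(data, addr)

sender.close()
receiver.close()
```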
3. Applications of Connectionless Services
• Real-Time Applications: Applications like video
streaming, online gaming, and VoIP (Voice over IP)
often use UDP because they prioritize low latency
over perfect data integrity. The occasional loss of a
packet is preferable to the delay that would be
caused by retransmissions.
• Simple Request-Response Services: Protocols like
DNS (Domain Name System) use UDP to send small,
single-request, single-response messages, as the
overhead of establishing a connection is unnecessary
for these quick exchanges.
Implementation of Connection-Oriented
Service
In a connection-oriented service, a dedicated
connection is established between the sender and
receiver before any data is transmitted. This type of
service guarantees reliable data transfer, ordered
delivery, and other features that ensure data integrity,
making it ideal for applications where accuracy and
reliability are critical.
Connection-oriented services are primarily implemented
using the Transmission Control Protocol (TCP) at
the Transport Layer, with support from various protocols
at lower layers, such as the Internet Protocol (IP) at the
Network Layer.
How Connection-Oriented Service Works
• Connection Establishment: Before data
transmission begins, a connection setup process,
often called a "handshake," is performed between the
sender and receiver. During this handshake, the two
endpoints agree on communication parameters, such
as sequence numbers and initial flow control settings.
• Data Transmission: Once the connection is
established, data packets are sent along the path
determined during the handshake. The sender and
receiver track the sequence of packets, ensuring that
they are received in the correct order and that none
are missing. If any packet is lost or corrupted, it can
be retransmitted.
• Connection Termination: After data transfer is
complete, the connection is explicitly terminated. This
termination involves sending control packets to close
the connection, releasing network resources, and
ending the session.
Implementation of Connection-Oriented Service
The implementation of a connection-oriented service
involves a few key protocols and mechanisms at the
Transport and Network Layers:
1. Transmission Control Protocol (TCP)
• Role of TCP: TCP, a protocol at the Transport Layer,
provides the primary framework for implementing
connection-oriented services. It establishes a reliable
connection between two endpoints, ensuring ordered,
error-free delivery of data.
• Three-Way Handshake for Connection
Establishment: TCP uses a "three-way handshake"
process to establish a connection:
1. The sender sends a SYN packet to initiate a
connection.
2. The receiver responds with a SYN-ACK packet to
acknowledge the request.
3. The sender replies with an ACK packet, confirming
that the connection is established.
• Data Transmission with Reliability Mechanisms:
After the handshake, data is transmitted in segments.
Each segment is numbered sequentially, and the
receiver acknowledges each segment. If an
acknowledgment (ACK) for a segment is not received
within a specific timeframe, TCP retransmits the
segment, ensuring that no data is lost.
• Flow Control: TCP uses a flow control mechanism to
match the sender's transmission rate to the receiver's
processing capability. This prevents overwhelming the
receiver or causing network congestion.
• Congestion Control: TCP implements congestion
control algorithms to adjust the transmission rate
based on network congestion. This prevents excessive
packet loss and ensures efficient use of network
resources.
• Connection Termination: When data transfer is
complete, TCP performs a "four-way termination" in which
each side sends a FIN segment and receives an ACK,
closing the connection in both directions.
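To ground the description above, here is a small sketch using Python's standard socket module. The operating system's TCP stack performs the three-way handshake, reliable in-order delivery, and termination; the application only opens, uses, and closes the connection. The host, port, and messages are illustrative.

```python
import socket, threading

HOST, PORT = "127.0.0.1", 50080              # arbitrary example endpoint

# Listening socket set up first so the client cannot connect too early.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind((HOST, PORT))
srv.listen()

def server() -> None:
    conn, addr = srv.accept()                # three-way handshake completes here
    with conn:
        data = conn.recv(1024)               # delivered reliably and in order
        conn.sendall(b"ack: " + data)        # closing triggers TCP's FIN/ACK termination

threading.Thread(target=server, daemon=True).start()

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as client:
    client.connect((HOST, PORT))             # SYN, SYN-ACK, ACK exchanged here
    client.sendall(b"hello over TCP")
    print(client.recv(1024))                 # b'ack: hello over TCP'

srv.close()
```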
2. Internet Protocol (IP)
• Role of IP: While IP is typically connectionless, it
supports connection-oriented services when combined
with TCP. IP is responsible for addressing and routing
packets toward their destination.
• Logical Addressing and Routing: IP packets carry
addressing information but are passed in an order
dictated by TCP’s sequencing, effectively creating a
connection-oriented pathway.
3. Applications of Connection-Oriented Services
• File Transfer Protocols (e.g., FTP): For file
transfers, where reliable and accurate data delivery is
essential, TCP is used to ensure files are transferred
without errors.
• Web Browsing (HTTP/HTTPS): HTTP over TCP
provides ordered and reliable delivery of web content,
making it suitable for accessing web pages.
• Email Protocols (e.g., SMTP, POP3, IMAP): Email
protocols run over TCP so that messages are delivered
completely and in order.
Comparison of Virtual Circuit and Datagram
Networks
Path Setup
• Virtual Circuit (VC) Networks: Require a predefined path (virtual circuit) to be established before data transmission begins. All packets follow this same path.
• Datagram Networks: No path setup required; each packet is routed independently based on the destination address.
Connection Type
• Virtual Circuit (VC) Networks: Connection-oriented, with a setup phase before data is sent.
• Datagram Networks: Connectionless; there is no need for an initial connection setup phase.
Routing
• Virtual Circuit (VC) Networks: Routing is done only once, during the setup phase. Afterward, each packet follows the established path.
• Datagram Networks: Routing is performed individually for each packet, so different packets can take different routes.
Addressing
• Virtual Circuit (VC) Networks: Each packet typically carries a virtual circuit identifier instead of the full destination address, making packet headers smaller.
• Datagram Networks: Each packet must contain the full destination address, which increases the size of the packet header.
Reliability and Ordering
• Virtual Circuit (VC) Networks: Since all packets follow the same path, ordering is preserved and reliability is generally easier to manage.
• Datagram Networks: Packets can arrive out of order and may require reordering at the destination if order is important.
Resource Allocation
• Virtual Circuit (VC) Networks: Network resources (e.g., bandwidth, buffer space) can be reserved along the path for the duration of the virtual circuit, which can make performance more predictable.
• Datagram Networks: No resource reservation; each packet is treated independently, which can lead to variations in latency and packet loss.
Routing Algorithms
Routing Algorithms are essential components of
network systems that determine the optimal path for
data packets to travel from a source to a destination
across a network. These algorithms are used in routers
and other network devices to make real-time routing
decisions that ensure efficient, fast, and reliable data
transmission, even in large and complex networks.
Goals of Routing Algorithms
• Find the Optimal Path: Identify the best route from
the source to the destination based on specific criteria.
• Ensure Network Efficiency: Maximize network
performance by balancing traffic loads, minimizing
congestion, and avoiding network bottlenecks.
• Adapt to Network Changes: Adjust routes
dynamically in response to changes in network
topology, such as new nodes, link failures, or traffic
fluctuations.
Types of Routing Algorithms
Routing algorithms are generally classified based on
their operation and structure. The primary types
include:
1) Static vs. Dynamic Routing
• Static Routing: Routes are manually set by the
network administrator and do not change unless
manually updated. Static routing is simple and
consumes fewer resources, but lacks flexibility
and cannot adapt to network changes.
• Dynamic Routing: Routes are automatically
determined and adjusted by the algorithm based
on real-time network conditions. Dynamic routing
is more flexible and adaptive, which makes it
suitable for larger networks.
2) Adaptive vs. Non-Adaptive Routing
• Adaptive Routing: These algorithms continuously
monitor network traffic and topology to adapt routes
dynamically.
• Non-Adaptive Routing: Non-adaptive (or fixed)
routing algorithms determine routes in advance
based on a predefined algorithm. Routes remain fixed
until manually changed, which makes them simpler
but less adaptable.
3) Centralized vs. Distributed Routing
• Centralized Routing: In centralized routing, a single
central node has all network information and makes
routing decisions for the entire network. It is easy to
control but less scalable.
• Distributed Routing: In distributed routing, each
node independently makes routing decisions based
on local or partial network information. Distributed
routing is more resilient to failures and can scale well
with larger networks.
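To contrast with the link-state (Dijkstra) sketch earlier, here is a simplified sketch of a distance-vector style computation, in which each node repeatedly improves its cost estimates using its neighbors' advertised distances; the topology is a made-up example.

```python
# Hypothetical topology: each node knows only the cost to its direct neighbours.
links = {
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 2, "D": 5},
    "C": {"A": 4, "B": 2, "D": 1},
    "D": {"B": 5, "C": 1},
}
INF = float("inf")
nodes = list(links)

# Each node starts with a distance vector: 0 to itself, infinity elsewhere.
vectors = {n: {m: (0 if m == n else INF) for m in nodes} for n in nodes}

# Repeatedly exchange vectors with neighbours until nothing changes (convergence).
changed = True
while changed:
    changed = False
    for node in nodes:
        for neighbor, cost in links[node].items():
            for dest in nodes:
                candidate = cost + vectors[neighbor][dest]   # Bellman-Ford relaxation
                if candidate < vectors[node][dest]:
                    vectors[node][dest] = candidate
                    changed = True

print(vectors["A"])   # {'A': 0, 'B': 1, 'C': 3, 'D': 4}
```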