
CN & NP Unit 2

The document provides an overview of a B.Tech course in Cyber Security at the Noida Institute of Engineering and Technology, detailing the faculty introduction, course objectives, outcomes, prerequisites, and syllabus content. It covers essential networking concepts such as TCP/IP protocols, error detection and correction methods, flow control mechanisms, and medium access control protocols. Additionally, it includes practical examples and techniques like Hamming Code and various framing methods used in data transmission.


Noida Institute of Engineering and Technology,

Greater Noida

B.Tech – CSE (Cyber Security), 4th Semester
Mr. Sumit Sharma, Assistant Professor, CSE – Cyber Security
Faculty Introduction

 Name: Mr. Sumit Sharma


 Qualification: Pursuing Ph.D. (Artificial Intelligence &
Machine Learning) from Amity University, Greater
Noida
 Area of Specialization: AI / ML
Syllabus
Video Links

 https://onlinecourses.nptel.ac.in/noc21_cs18/preview
 https://nptel.ac.in/courses/106105081
 https://elearn.nptel.ac.in/shop/nptel/computer-networks-and-internet-protocol/
 https://archive.nptel.ac.in/courses/106/105/106105080/
Course Objective

 The objective of the course is to present an introduction to the TCP/IP protocol suite, packet switching and message switching, sliding window protocols, CDMA, network layer protocols (IPv4, ARP, RARP), routing, TCP and UDP, congestion control, quality of service, and network applications such as DNS, FTP, TELNET, and remote logging.
Course Outcome
CO1: Build an understanding of the layered architecture of computer networking and the physical layer. (K2)

CO2: Understand the properties of the link and network layers and analyze the solutions for error control, flow control, and addressing in networks. (K4)

CO3: Understand the duties of the transport layer and the addressing and functions of sockets. (K2)

CO4: Implement and analyze network connections using programming skills. (K4)

CO5: Understand and analyze the different protocols used at the application layer. (K4)
Pre-requisites

 The student should have a basic knowledge of data communication and programming.
Unit 2: Content

 Data Link Layer:
 Framing
 Error Detection & Correction
 Flow Control (Elementary Data Link Protocols, Sliding Window Protocols)
 Medium Access Control & Local Area Networks
 Channel Allocation
 Multiple Access Protocols
 LAN Standards
 Link Layer Switches & Bridges

 Network Layer:
 Point-to-Point Networks
 Logical Addressing
 Basic Networking (IP, CIDR, ARP, RARP, DHCP, ICMP)
 IPv4, IPv6
 Routing: Static & Dynamic Routing
 Forwarding & Delivery
 Routing Algorithms & Protocols
 Congestion Control Algorithms
What is Framing?
 Data Transmission in the physical layer means moving bits in the form of a signal from the source to the destination.
 The data link layer, on the other hand, needs to pack bits into frames, so that each frame is distinguishable from another.
 Framing in the data link layer separates a message from one source to a destination, or from other messages to other
destinations, by adding a sender address and a destination address.
 The destination address defines where the packet is to go.
 The sender’s address helps the recipient acknowledge the receipt.
 Key Components of Framing:
 Start and End Delimiters: Each frame begins and ends with special characters or bit sequences known as delimiters. These
delimiters indicate the start and end points of a frame.
 Addressing Information: Framing includes addressing information that identifies the destination of the frame. This
address helps network devices recognize which frames are intended for them.
 Control Information: Framing incorporates additional bits for control purposes, such as error checking, flow control, and
frame sequencing.
Types of Framing in Computer Networks:

 Byte-oriented Framing:
 In byte-oriented framing, data is organized into frames using sequences of bytes.
 Each frame begins and ends with specific byte patterns, which define the boundaries of the frame.

 Bit-oriented Framing:
 Bit-oriented framing involves framing data using individual bits.
 Delimiters are identified based on specific bit patterns, indicating the start and end of each frame.

 Fixed-Size Framing:
 In fixed-size framing, each frame is of a predetermined and constant size.
 Regardless of the amount of data being transmitted, each frame occupies the same amount of space.

 Variable-Size Framing:
 Variable-size framing allows frames to vary in size based on the amount of data being transmitted.
 Frames may be larger or smaller depending on the size of the payload.
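As a concrete illustration of byte-oriented framing, the short Python sketch below marks frame boundaries with a FLAG byte and escapes any FLAG or ESC byte that happens to appear in the payload (byte stuffing). The specific byte values 0x7E and 0x7D are assumptions chosen for illustration, not values prescribed by the slides.

# Minimal byte-stuffing sketch (illustrative byte values, not a specific standard).
FLAG = 0x7E   # marks the start and end of a frame
ESC = 0x7D    # escape byte inserted before any FLAG/ESC found in the payload

def frame(payload: bytes) -> bytes:
    stuffed = bytearray([FLAG])
    for b in payload:
        if b in (FLAG, ESC):
            stuffed.append(ESC)      # escape bytes that look like delimiters
        stuffed.append(b)
    stuffed.append(FLAG)             # closing delimiter
    return bytes(stuffed)

def deframe(frame_bytes: bytes) -> bytes:
    body = frame_bytes[1:-1]         # drop the two FLAG delimiters
    payload, escaped = bytearray(), False
    for b in body:
        if escaped:
            payload.append(b)
            escaped = False
        elif b == ESC:
            escaped = True           # next byte is data, not a delimiter
        else:
            payload.append(b)
    return bytes(payload)

print(frame(bytes([0x01, 0x7E, 0x02])).hex())            # 7e017d7e027e
print(deframe(frame(bytes([0x01, 0x7E, 0x02]))).hex())   # 017e02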
What is an Error?

 A condition when the receiver’s information does not match with the sender’s information.

 During transmission, digital signals suffer from noise that can introduce errors in the binary bits traveling from sender to

receiver.

 That means a 0 bit may change to 1 or a 1 bit may change to 0.


Error Detection
 Error detection is the detection of errors caused by noise or other impairments during transmission from the transmitter to
the receiver.
 Error detection ensures reliable delivery of data across vulnerable networks.
 Error detection minimizes the probability that a corrupted frame is passed to the destination without being noticed (the undetected
error probability).
Error Detection Techniques are:
1. Vertical Redundancy Check (VRC) or Parity Check:
 The oldest method of error detection involves using parity.
 It works by adding a bit to each character word transmitted.
 Blocks of data from the source pass through a check-bit or parity-bit generator, where a parity bit of:
 1 is added to the block if it contains an odd number of 1’s, and
 0 is added to the block if it contains an even number of 1’s.
Error Detection
Limitations of Parity Checking:
 It is not suitable for the detection of errors if the number of bits changed is even.
 If an error is noticed, it cannot be corrected.
 It cannot reveal the location of the erroneous bit.
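A minimal sketch of the parity (VRC) check described above, including the limitation that an even number of bit flips goes undetected; the 7-bit data word used is an arbitrary example.

# Even-parity check (VRC) sketch: append a bit so the total number of 1s is even.
def add_even_parity(bits: str) -> str:
    parity = bits.count("1") % 2          # 1 if the word has an odd number of 1s
    return bits + str(parity)

def check_even_parity(codeword: str) -> bool:
    return codeword.count("1") % 2 == 0   # valid if the total count of 1s is even

word = "1100001"                          # example 7-bit data word
sent = add_even_parity(word)              # "11000011"
print(sent, check_even_parity(sent))      # 11000011 True
# A single-bit error is detected, but an even number of flipped bits is not:
print(check_even_parity("01000011"))      # False (one bit flipped)
print(check_even_parity("00000011"))      # True  (two bits flipped -> undetected)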

2. Longitudinal Redundancy Check (LRC):


 LRC is another error-detection method, used to determine the correctness of transmitted and stored data.
 In this method, the block of bits is organized in a matrix or table of rows and columns.
 Then the parity bit for each column is calculated and a new row of eight bits, which are the parity bits for the whole block, is
created.
 After that, the new calculated parity bits are attached to the original data and sent to the receiver.
Error Detection
3. Checksum:
 In the checksum error detection scheme, the data is divided into k segments each of m bits.
 In the sender’s end, the segments are added using 1’s complement arithmetic to get the sum. The sum is complemented to get
the checksum.
 The checksum segment is sent along with the data segments.
 At the receiver’s end, all received segments are added using 1’s complement arithmetic to get the sum. The sum is
complemented.
 If the result is zero, the received data is accepted; otherwise, it is discarded.
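A minimal sketch of the 1’s complement checksum described above, assuming 16-bit segments; the segment values are arbitrary examples.

# 1's complement checksum sketch over 16-bit segments.
def ones_complement_sum(segments, bits=16):
    mask = (1 << bits) - 1
    total = 0
    for seg in segments:
        total += seg
        total = (total & mask) + (total >> bits)   # wrap any carry back in (end-around carry)
    return total

def make_checksum(segments, bits=16):
    return (~ones_complement_sum(segments, bits)) & ((1 << bits) - 1)

data = [0x4500, 0x003C, 0x1C46, 0x4000]            # example 16-bit segments (arbitrary values)
checksum = make_checksum(data)

# Receiver adds all segments including the checksum and complements the result:
check = (~ones_complement_sum(data + [checksum])) & 0xFFFF
print(hex(checksum), "accepted" if check == 0 else "discarded")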
Error Detection

4. Cyclic Redundancy Check (CRC):

 Unlike the checksum scheme, which is based on addition, CRC is based on binary division.

 In CRC, a sequence of redundant bits, called cyclic redundancy check bits, is appended to the end of the data unit so that the

resulting data unit becomes exactly divisible by a second, predetermined binary number.

 At the destination, the incoming data unit is divided by the same number. If at this step there is no remainder, the data unit is

assumed to be correct and is therefore accepted.

 A remainder indicates that the data unit has been damaged in transit and therefore must be rejected.
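A sketch of the CRC computation as modulo-2 binary division, assuming the small generator 1011 (x^3 + x + 1) purely for illustration; real links use standardized generators such as CRC-32.

# CRC sketch: modulo-2 division by an assumed generator x^3 + x + 1 ("1011").
def mod2_div(dividend: str, generator: str) -> str:
    r = len(generator) - 1
    bits = list(dividend)
    for i in range(len(bits) - r):
        if bits[i] == "1":                       # XOR the generator in wherever a 1 leads
            for j, g in enumerate(generator):
                bits[i + j] = str(int(bits[i + j]) ^ int(g))
    return "".join(bits[-r:])                    # the last r bits are the remainder

data, gen = "100100", "1011"
crc = mod2_div(data + "0" * (len(gen) - 1), gen) # append r zeros, keep the remainder
codeword = data + crc
print(codeword)                                  # 100100101
print(mod2_div(codeword, gen))                   # 000 -> accepted; non-zero -> damaged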
Error Correction
 Error correction is the process of detecting errors in transmitted messages and reconstructing the original, error-free data.
 Error correction ensures that corrected, error-free messages are obtained at the receiver side.
 In Error correction, we need to know the exact number of bits that are corrupted as well as their location in the message.

Error Correction Approaches are:


 Backward Error Correction (BEC):
 It is also called Automatic Repeat-Request.
 It is an error control (error correction) method that uses error-detection codes and positive and negative acknowledgments.
 When the transmitter either receives a negative acknowledgment or a timeout happens before an acknowledgment is
received, the ARQ makes the transmitter resend the message.

 Forward Error Correction (FEC):


 It is a method that involves adding parity data bits to the message.
 These parity bits will be read by the receiver to determine whether an error happened during transmission or storage.
 In this case, the receiver checks and corrects errors when they occur.
 It does not ask the transmitter to resend the frame or message.
Hamming Code

 Hamming Code is an error-correcting code technique used in computer networks and communication systems to detect and correct

errors that may occur during data transmission.

 It was developed by Richard Hamming in the 1940s.

 It is widely used in digital communication systems, including Ethernet networks, Wi-Fi, and satellite communication.

 Hamming Code is a powerful error-correction technique that helps improve the reliability and accuracy of data transmission in

communication systems.

 It is commonly used in conjunction with other error detection and correction methods to ensure robust communication over noisy

or unreliable channels.
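The sketch below encodes a 4-bit data word into a seven-bit Hamming codeword and locates a single-bit error; it can be used to check the numerical questions on the next slide. Even parity with parity bits at positions 1, 2, and 4 is the common textbook convention assumed here; a particular textbook's bit ordering may differ.

# Hamming(7,4) sketch, even parity, parity bits at positions 1, 2 and 4 (1-indexed).
def hamming_encode(d1, d2, d3, d4):
    p1 = d1 ^ d2 ^ d4          # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4          # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming_check(code):
    c1 = code[0] ^ code[2] ^ code[4] ^ code[6]
    c2 = code[1] ^ code[2] ^ code[5] ^ code[6]
    c3 = code[3] ^ code[4] ^ code[5] ^ code[6]
    return c3 * 4 + c2 * 2 + c1            # 0 = no error, else the 1-indexed error position

code = hamming_encode(1, 0, 1, 1)          # data word 1011
print(code)                                # even-parity codeword
code[4] ^= 1                               # flip one bit in transit
print(hamming_check(code))                 # 5 -> the corrupted position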
Hamming Code Numerical

Question 1: A 4-bit data word 1011 is to be transmitted. Construct the even-parity and odd-parity seven-bit Hamming codes for this data.

Question 2: A 4-bit data word 1010 is to be transmitted. Construct the odd-parity and even-parity seven-bit Hamming codes for this data.


Flow Control
 Flow control is the mechanism that ensures the rate at which a sender is transmitting is in proportion to the receiver’s receiving
capabilities.
 Flow control is the management of data flow between computers or devices or between nodes in a network.
 Too much data arriving before a device can handle it causes data overflow, meaning the data is either lost or must be
retransmitted.
 Flow control is utilized in data communications to manage the flow of data/packets among two different nodes, especially in
cases where the sending device can send data much faster than the receiver can digest.

Methods used for Flow Control:


 Stop and Wait Protocol:
 This is the simplest flow control protocol, in which the sender transmits a frame and then waits for an acknowledgment,
either positive or negative, from the receiver before proceeding.
 If a positive acknowledgment is received, the sender transmits the next frame; otherwise, it retransmits the same frame.
 Advantages:
 It is a simple method.
 There is little chance of data being lost to receiver overflow, since only one frame is outstanding at a time.
 The transmission is highly accurate as the second frame is only sent when the acknowledgment of the first is received.
Flow Control

 Disadvantages:

 It is highly inefficient due to the large waiting time.

 Only one frame can be sent at a time.

 Sliding Window :

 In sliding window protocols the sender's data link layer maintains a 'sending window' which consists of a set of sequence

numbers corresponding to the frames it is permitted to send.

 Similarly, the receiver maintains a 'receiving window' corresponding to the set of frames it is permitted to accept.

 The window size is dependent on the retransmission policy and it may differ in values for the receiver's and the sender's

window.
Elementary Data Link Protocols
 Elementary data link protocols are basic communication protocols used in computer networks to establish a reliable
communication link between devices.
 These protocols typically operate at the data link layer of the OSI model and are responsible for framing, error detection, and
flow control.
 In elementary data link protocols, flow control is often achieved using a simple mechanism such as stop-and-wait ARQ
(Automatic Repeat reQuest).
 In this mechanism:
 The sender transmits a frame to the receiver and waits for an acknowledgment (ACK) from the receiver.
 If the sender receives the ACK within a specified timeout period, it sends the next frame. Otherwise, it retransmits the same
frame.
 The receiver sends an ACK to confirm the successful receipt of the frame. If the receiver detects an error, it sends a negative
acknowledgment (NAK), prompting the sender to retransmit the frame.
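A minimal sketch of the stop-and-wait ARQ behaviour described above, with loss simulated by a random drop; the loss probability and the in-memory "channel" are assumptions made purely for illustration.

# Stop-and-wait ARQ sketch: send one frame, wait for ACK, retransmit on timeout.
import random

def unreliable_send(frame, loss_prob=0.3):
    """Simulated channel: returns an ACK unless the frame or ACK is 'lost'."""
    return None if random.random() < loss_prob else ("ACK", frame["seq"])

def stop_and_wait(frames):
    seq = 0
    for payload in frames:
        while True:                                   # keep retransmitting until ACKed
            ack = unreliable_send({"seq": seq, "data": payload})
            if ack == ("ACK", seq):
                print(f"frame {seq} delivered")
                break
            print(f"frame {seq} timed out, retransmitting")
        seq ^= 1                                      # a 1-bit sequence number is enough

stop_and_wait(["A", "B", "C"])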
Sliding Window Protocols
 Sliding window protocols are more sophisticated flow control mechanisms used in computer networks, especially in data link
layer protocols like HDLC (High-Level Data Link Control) and TCP (Transmission Control Protocol).
 In sliding window protocols, both the sender and receiver maintain a "window" of frames that can be transmitted or received at
any given time.
 The window size determines the number of frames that can be outstanding at once.
 There are two main types of sliding window protocols:
 Go-Back-N (GBN): In the Go-Back-N protocol, the sender is allowed to transmit multiple frames without waiting for
individual acknowledgments. However, if an acknowledgment is not received within a certain timeout period, the sender
retransmits all unacknowledged frames from the beginning of the window.
 Selective Repeat: In the Selective Repeat protocol, the sender is allowed to transmit multiple frames, and the receiver
acknowledges each frame individually. If an acknowledgment for a particular frame is not received, only that frame is
retransmitted, while the other frames remain in the window.
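To make the Go-Back-N sender behaviour concrete, here is a simplified sketch: up to N frames may be outstanding, acknowledgments are cumulative, and a lost acknowledgment causes the whole window to be resent. The window size, frame data, and simulated channel are assumptions for illustration.

# Go-Back-N sender sketch: window of N outstanding frames, cumulative ACKs,
# and retransmission of the whole window when an ACK is lost (timeout).
import random

N = 4                                         # assumed window size
frames = [f"frame-{i}" for i in range(8)]

def channel_ack(seq, loss_prob=0.25):
    """Simulated receiver: cumulatively ACKs up to 'seq' unless the exchange was lost."""
    return seq if random.random() >= loss_prob else None

base, next_seq = 0, 0
while base < len(frames):
    # Fill the window with up to N outstanding frames.
    while next_seq < base + N and next_seq < len(frames):
        print("send", frames[next_seq])
        next_seq += 1
    # Wait for one cumulative ACK (simplified); on loss, go back N.
    ack = channel_ack(next_seq - 1)
    if ack is None:
        print(f"timeout: resending frames {base}..{next_seq - 1}")
        next_seq = base                       # go back and resend the whole window
    else:
        base = ack + 1                        # slide the window forward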
Medium Access Control
 Medium Access Control (MAC) is a sublayer of the Data Link Layer (Layer 2) of the OSI model in computer networking.
 It is responsible for controlling access to the transmission medium, such as a shared communication channel, in a network
where multiple devices need to communicate concurrently.
 The primary function of the MAC sublayer is to coordinate the transmission of data frames between devices sharing the same
physical medium, ensuring that data is transmitted efficiently and without collisions.
 MAC protocols define rules and procedures for how devices contend for access to the medium, how collisions are detected and
managed, and how data frames are addressed and delivered.

Medium Access Control includes:


 Access Control Methods:
 MAC protocols define methods for devices to access the transmission medium, such as Carrier Sense Multiple Access
(CSMA), CSMA with Collision Detection (CSMA/CD), CSMA with Collision Avoidance (CSMA/CA), Time Division Multiple
Access (TDMA), and others. These methods determine how devices contend for access and how collisions are handled.
 Collision Detection and Avoidance:
 In shared media networks, collisions can occur when two or more devices attempt to transmit data simultaneously. MAC
protocols incorporate mechanisms for detecting collisions and taking appropriate actions to avoid or resolve them, such as
retransmission strategies and back-off algorithms.
Medium Access Control
 Frame Addressing:
 MAC protocols define rules for addressing data frames to ensure they are delivered to the intended recipient. This typically
involves assigning unique MAC addresses to network interfaces and using these addresses to identify the source and
destination of each frame.

 Frame Synchronization:
 MAC protocols ensure that data frames are synchronized and correctly formatted for transmission over the physical
medium. This involves adding framing information, such as start and end delimiters, to each frame to delineate its
boundaries and facilitate accurate reception by the receiving device.

 Priority and Quality of Service (QoS):


 Some MAC protocols support mechanisms for prioritizing traffic and providing different levels of service quality to different
types of data, such as real-time or latency-sensitive traffic. These mechanisms help optimize network performance and
ensure that critical data is transmitted with minimal delay.
Multiple Access Protocol
 Multiple Access Protocols are communication protocols used in computer networks to allow multiple devices to share a common
communication medium such as a transmission line or a radio channel.
 These protocols define rules and procedures for coordinating access to the shared medium, ensuring that data transmissions from
different devices do not interfere with each other.
 These multiple access protocols enable efficient and fair sharing of network resources among multiple devices, facilitating reliable
communication in various types of networks, including LANs, WANs, and wireless networks.
 The choice of protocol depends on factors such as network topology, bandwidth requirements, and the number of devices
accessing the network.
Random Access Protocol
 It is a set of rules that allows stations in the network to contend for the shared medium without central coordination.
 In this, all stations have the same priority; no station has more priority than another station.
 Any station can send data depending on the medium’s state (idle or busy).
 It has two features:
 There is no fixed time for sending data.
 There is no fixed sequence of stations sending data.

Types of Random Access Protocols:


a. ALOHA:
 It was designed for wireless LAN but is also applicable for shared media.
 In this, multiple stations can transmit data at the same time and can hence lead to collision and data being garbled.
 Types of ALOHA are as below:
 Pure ALOHA:
 When a station sends data it waits for an acknowledgement.
 If the acknowledgment doesn’t come within the allotted time then the station waits for a random amount of time
called back-off time (Tb) and re-sends the data.
 Since different stations wait for different amounts of time, the probability of further collision decreases.
Random Access Protocol

 Advantages of Pure ALOHA:

 The main advantage of pure ALOHA is its simplicity in implementation.

 It adapts to a varying number of stations.

 It is superior to fixed assignment when there is a large number of bursty stations.

 Disadvantages of Pure ALOHA:

 Its performance becomes worse as the data traffic on the channel increases.

 At high loads, collisions are very frequent.

 It requires queuing buffers for the retransmission of packets.

 Its theoretically proven maximum throughput is only 18.4% (see the note after the Slotted ALOHA list below).


Random Access Protocol
 Slotted ALOHA:
 It is similar to pure aloha, except that we divide time into slots, and the sending of data is allowed only at the beginning of
these slots.
 If a station misses out on the allowed time, it must wait for the next slot.
 This reduces the probability of collision.

 Advantages of Slotted ALOHA:


 The big advantage of Slotted ALOHA is the increase in channel utilization.
 Simple to implement.
 It doubles the efficiency of ALOHA.
 Disadvantages of Slotted ALOHA:
 It requires queuing buffers for the retransmission of frames.
 Synchronization required.
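The 18.4% figure quoted for pure ALOHA, and the claim that slotted ALOHA doubles it, come from the standard throughput analysis. With G the average number of transmission attempts per frame time:

Pure ALOHA: S = G · e^(-2G), which is maximised at G = 0.5, giving S = 1/(2e) ≈ 0.184 (18.4%).
Slotted ALOHA: S = G · e^(-G), which is maximised at G = 1, giving S = 1/e ≈ 0.368 (36.8%), i.e. double the efficiency of pure ALOHA.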
Random Access Protocol
b. CSMA:
 Carrier Sense Multiple Access ensures fewer collisions as the station is required to first sense the medium (for idle or busy) before
transmitting data.
 If it is idle then it sends data, otherwise, it waits till the channel becomes idle.
 However, there is still a chance of collision in CSMA due to propagation delay.
 There are three main approaches to CSMA protocols, as below (contrasted in the sketch after this list):
 1-Persistent CSMA: The node senses the channel; if idle, it sends the data, otherwise it continuously keeps checking the
medium for being idle and transmits unconditionally (with probability 1) as soon as the channel becomes idle.
 Non-Persistent CSMA: The node senses the channel; if idle, it sends the data, otherwise it checks the medium again after a random
amount of time (not continuously) and transmits when the medium is found idle.
 P-Persistent CSMA: The node senses the medium; if idle, it sends the data with probability p. If the data is not transmitted (with
probability 1-p), it waits for some time and checks the medium again; if the medium is then found idle, it again sends with probability p.
This repeats until the frame is sent. It is used in Wi-Fi and packet radio systems.
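The three persistence strategies differ only in what a station does when it senses the channel busy or idle; the sketch below contrasts them. The channel-sensing function here is simulated, and the probability p and delays are assumed values.

# Sketch of the three CSMA persistence strategies (channel sensing is simulated).
import random, time

def channel_idle():
    return random.random() < 0.5              # stand-in for real carrier sensing

def one_persistent():
    while not channel_idle():
        pass                                  # keep sensing continuously while busy
    return "transmit"                         # transmit with probability 1 once idle

def non_persistent():
    while not channel_idle():
        time.sleep(random.uniform(0, 0.01))   # back off a random time, then sense again
    return "transmit"

def p_persistent(p=0.1):
    while True:
        if channel_idle():
            if random.random() < p:           # transmit with probability p
                return "transmit"
            time.sleep(0.001)                 # with probability 1-p, defer to the next slot
        else:
            time.sleep(0.001)                 # busy: wait and sense again

print(one_persistent(), non_persistent(), p_persistent())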
Random Access Protocol

c. CSMA with Collision Detection (CSMA/CD):

 It is used to detect collisions on the network which happen when two stations attempt to transmit at the same time.

 CSMA/CD is a set of rules that determines how network devices respond when two devices attempt to use a data channel

simultaneously.

d. CSMA with Collision Avoidance (CSMA/CA):

 CSMA/CA is a multiple access protocol used in wireless networks to regulate access to the shared communication medium, such as

a wireless channel.

 Unlike CSMA/CD (Carrier Sense Multiple Access with Collision Detection), which is used in wired Ethernet networks, CSMA/CA is

specifically designed for wireless communication where collision detection is not feasible due to the hidden terminal problem and

the inability to sense collisions.


Controlled Access Protocol
 Controlled access protocols, also known as channel allocation protocols, are a category of network protocols used to regulate
access to a shared communication medium in computer networks.
 Unlike contention-based protocols like CSMA/CD (Carrier Sense Multiple Access with Collision Detection) and CSMA/CA (Carrier
Sense Multiple Access with Collision Avoidance), controlled access protocols use predetermined rules and mechanisms to
allocate access to the communication channel.

There are three main types of controlled access protocols:


 Reservation:
 Reservation-based protocols allocate specific time slots or frequencies to devices for transmitting data.
 Devices make reservations for access to the communication channel in advance.
 Polling:
 Polling-based protocols involve a central controller, known as the master or polling station, that controls access to the
communication channel.
 Token Passing:
 The token passing protocol is a network access control method used in computer networks to regulate access to a
shared communication medium.
 In this protocol, a special token is passed sequentially between the devices in the network, allowing each device to
access the medium when it possesses the token.
Channel Allocation

 Channel allocation refers to the process of assigning communication channels to different devices within a LAN to facilitate

communication.

 There are several methods for channel allocation, including:

 Frequency Division Multiple Access (FDMA): Divides the available frequency spectrum into multiple non-overlapping

channels, with each channel assigned to a different device for communication.

 Time Division Multiple Access (TDMA): Divides time into frames and each frame into multiple time slots, with each

time slot assigned to a different device for communication.

 Code Division Multiple Access (CDMA): Assigns a unique spreading code to each device, allowing multiple devices to

transmit simultaneously over the same frequency spectrum.


LAN Standards

 LAN standards define the protocols and specifications for LAN technologies, ensuring interoperability and compatibility among

devices from different manufacturers.

 Common LAN standards include:

 Ethernet:

 The most widely used LAN technology, which specifies the physical and data link layers of the OSI model.

 Ethernet standards include 10BASE-T, 100BASE-TX, and Gigabit Ethernet (IEEE 802.3 standards).

 Wi-Fi (Wireless LAN):

 Defines wireless LAN technologies based on IEEE 802.11 standards.

 Wi-Fi standards include 802.11a, 802.11b, 802.11g, 802.11n, 802.11ac, and 802.11ax (Wi-Fi 6).
Link Layer Switches & Bridges
 Link layer switches and bridges are network devices used to connect multiple devices within a LAN and forward data frames
between them.
 They operate at the data link layer of the OSI model and use MAC addresses to forward frames to their destination.
 Bridges:
 Connects multiple LAN segments or network segments together and forwards frames between them based on MAC
addresses.
 Bridges help reduce network collisions and improve network performance.
 Switches:
 Similar to bridges but with multiple ports, switches have the ability to learn MAC addresses by inspecting the source
address of incoming frames.
 They build and maintain a MAC address table (also known as a forwarding table) to efficiently forward frames only to the
intended recipient, reducing network congestion and improving bandwidth utilization.
Network Layer

 The network layer is a crucial part of the internet where data packets are routed between different devices.

 It acts as an intermediary between the lower, hardware-oriented layers and the upper, application-oriented layers.

 Its primary job is to manage the addressing, routing, and forwarding of data packets across various networks.

 Think of it as a post office sorting and directing letters to their destinations based on addresses.

 Protocols like IP (Internet Protocol) operate at this layer, ensuring that data travels efficiently and securely across different

networks, regardless of the specific technologies used by those networks.


Point to Point Networks

 Point-to-point networks, also referred to as point-to-point connections or links, are a type of network topology where two

devices are directly connected to each other through a dedicated communication link.

 In a point-to-point network:

 Two Devices: There are only two devices involved: a sender (source) and a receiver (destination).

 Dedicated Link: The communication link between the sender and receiver is dedicated exclusively to their communication,

meaning that no other devices share the link.

 Physical Connection: The physical connection between the sender and receiver can take various forms, including wired

connections (e.g., Ethernet, serial connections) or wireless connections (e.g., point-to-point microwave links).

 Simple Configuration: Point-to-point networks are relatively simple to configure and manage compared to other network

topologies, as there are only two devices involved.


Logical Addressing
 Logical addressing is a network layer (Layer 3) concept that involves assigning unique identifiers, known as logical addresses or
network addresses, to devices connected to a network.
 Logical addresses are used to uniquely identify devices within a network or subnet, enabling them to communicate with each
other.
 Logical addresses are independent of the underlying network hardware and are used for routing and addressing purposes.
 The most common type of logical addressing used in computer networks is Internet Protocol (IP) addressing.
 In the IP addressing scheme:
 IPv4: IPv4 addresses are 32-bit numerical addresses represented in dotted-decimal notation (e.g., 192.168.0.1). IPv4 addresses
are divided into network and host portions, with the network portion identifying the network to which the device is connected
and the host portion identifying the specific device within that network.
 IPv6: IPv6 addresses are 128-bit hexadecimal addresses represented in colon-separated notation (e.g.,
2001:0db8:85a3:0000:0000:8a2e:0370:7334). IPv6 addresses provide a much larger address space than IPv4, allowing for a virtually
unlimited number of unique addresses and additional functionality such as hierarchical addressing and auto-configuration.
Basic Networking Protocols
 Internet Protocol (IP):
 IP is a network layer (Layer 3) protocol responsible for routing packets between networks.
 It provides an addressing scheme (IPv4 or IPv6) to uniquely identify devices on a network.
 IP addresses are hierarchical and structured, allowing routers to forward packets to their destination based on their IP
addresses.
 Classless Inter-Domain Routing (CIDR):
 CIDR is a method used to allocate IP addresses and route IP packets more efficiently.
 It allows for the aggregation of IP addresses into larger blocks, reducing the size of routing tables and improving routing
efficiency.
 CIDR notation represents IP address ranges with a prefix length (e.g., 192.168.0.0/24), as shown in the sketch after this list.
 Address Resolution Protocol (ARP):
 ARP is a protocol used to map IP addresses to MAC addresses in local area networks.
 When a device needs to send a packet to another device on the same network, it uses ARP to determine the MAC address of
the destination device corresponding to its IP address.
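To make the CIDR notation above concrete, Python's standard ipaddress module can split a block into its network and host parts and aggregate adjacent blocks; the addresses reuse the example value from the slide.

# CIDR sketch using Python's standard ipaddress module.
import ipaddress

net = ipaddress.ip_network("192.168.0.0/24")
print(net.network_address)    # 192.168.0.0  (network portion)
print(net.netmask)            # 255.255.255.0
print(net.num_addresses)      # 256 addresses in a /24 block
print(ipaddress.ip_address("192.168.0.17") in net)    # True: this host lies in the block

# Aggregation: two adjacent /25 blocks collapse into a single /24 route.
blocks = [ipaddress.ip_network("192.168.0.0/25"), ipaddress.ip_network("192.168.0.128/25")]
print(list(ipaddress.collapse_addresses(blocks)))     # [IPv4Network('192.168.0.0/24')]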
Basic Networking Protocols
 Reverse Address Resolution Protocol (RARP):
 RARP is a protocol used to map MAC addresses to IP addresses.
 It allows a device to discover its IP address when it knows only its MAC address, typically used during network bootstrapping.
 Dynamic Host Configuration Protocol (DHCP):
 DHCP is a network protocol used to automatically assign IP addresses and other network configuration parameters to devices
on a network.
 It allows devices to obtain IP addresses dynamically without manual configuration, simplifying network administration and
management.
 Internet Control Message Protocol (ICMP):
 ICMP is a network layer protocol used for diagnostic and control purposes in IP networks.
 It is used to send error messages and control messages, such as ping requests and responses for testing connectivity between
devices.
 ICMP messages are encapsulated within IP packets and are typically used by network utilities and diagnostic tools.
IPv4

 IPv4, or Internet Protocol version 4, is a foundational protocol in computer networking that facilitates communication between

devices on the Internet and other IP-based networks.

 It operates at the network layer (Layer 3) of the OSI model.

 It provides the addressing and routing mechanisms necessary for data transmission.

 IPv4 addresses are 32-bit numerical identifiers.

 It uniquely identifies devices on a network.

 These addresses are represented in dotted-decimal notation (e.g., 192.168.0.1) and consist of a network portion and a host

portion.

 IPv4 uses routing tables maintained by routers to determine the optimal path for forwarding packets between networks.

 IPv4 faces challenges due to its limited address space, allowing for approximately 4.3 billion unique addresses.
IPv6

 IPv6, or Internet Protocol version 6, is the successor to IPv4.

 It is designed to address the limitations of IPv4, particularly the scarcity of available IP addresses.

 IPv6 operates at the network layer (Layer 3) of the OSI model.

 It provides the foundation for communication in modern computer networks.

 IPv6 addresses are 128 bits long.

 They offer a significantly larger address space compared to IPv4.

 These addresses are represented in hexadecimal notation (e.g., 2001:0db8:85a3:0000:0000:8a2e:0370:7334) and provide

approximately 340 undecillion unique addresses, ensuring an abundance of addresses for future growth.
Routing

 Routing refers to the process of determining the optimal path for data packets to travel from a source device to a destination

device across interconnected networks.

 In other words, it is the process of selecting a path in a network along which to send network traffic.

 It involves making decisions about the network segment to forward packets towards their final destination.

 It is performed for many kinds of networks, including telephone networks, electronic data networks like the Internet, and

transportation networks.

 Routing is mainly concerned with electronic data networks using packet-switching technology.

 Routing forwards logically addressed packets from their source to their destination via intermediate nodes in packet-

switching networks.
Forwarding & Delivery Process
In computer networks, forwarding and delivery are two essential processes involved in routing packets from a source device to a
destination device:

1. Forwarding:
 Forwarding refers to the process of passing packets from one network interface to another within the same router or network
device.
 When a router receives an incoming packet, it examines the packet's destination IP address and consults its routing table to
determine the appropriate outbound interface or next-hop router for forwarding the packet towards its destination.
 Within the router, this forwarding decision is made at the network layer (Layer 3); the packet is then re-encapsulated in a new
Layer 2 frame carrying the MAC address of the next hop.

2. Delivery:
 Delivery, on the other hand, refers to the ultimate receipt of packets by the destination device.
 Once a packet is forwarded to the correct network segment or next-hop router, it traverses through the network until it reaches
the destination device.
 The final delivery of the packet involves the process of receiving and processing the packet at the destination's network interface.
 This process typically occurs at the data link layer (Layer 2) of the OSI model, where the destination device extracts the packet's
payload and processes the data contained within.
Routing Algorithms & Protocols
 Routing algorithms are algorithms used by routers to determine the best path for forwarding packets from a source to a
destination across interconnected networks.
 These algorithms calculate the optimal route based on various factors such as network topology, link cost, and routing metrics.
 Different routing algorithms employ different strategies for route calculation and packet forwarding.

 Here are some common types of routing algorithms:


 Static (Non-Adaptive) Routing Algorithms:
 In static routing algorithms, the route between a source and destination does not adapt to any current network
parameter.
 The routing decision is made offline, based on estimates of traffic and topology.
 The pre-calculated paths are then loaded into the routing table and remain fixed for a long period.
 Static Algorithms are the following:
 Shortest Path Routing, Flooding, Flow-Based Routing

 Dynamic (Adaptive) Routing Algorithms:


 Dynamic Routing Algorithms change their routing decision if there is a change in topology or traffic.
 Each router continuously checks the network status by communicating with neighbors.
 Thus a change in network topology is eventually propagated to all the routers.
 Based on this information gathered, each router computes the suitable path to the destination.
 Dynamic Algorithms are the following:
 Distance Vector Routing, Link State Routing, Hierarchical Routing, Broadcast Routing
Shortest Path Routing Algorithm
 It is a non-adaptive routing algorithm that is very simple, easy to understand, and easy to implement.
 In this algorithm, a graph of subnet is drawn, with each node of the graph representing a router and each arc of the graph
representing a communication link.
 To choose a route between a pair of routers this algorithm finds the shortest path between them on the graph.
 The path length between each node is measured as a function of distance, average traffic, communication cost, bandwidth, etc.
 There are many algorithms to compute the shortest path between two nodes of a graph. The main are:
 Dijkstra’s Shortest Path Algorithm
 Bellman-Ford Algorithm
 Solved Numerical on Dijkstra’s Shortest Path Algorithm:
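The solved numerical itself is not reproduced here, but the sketch below runs Dijkstra's algorithm on a small assumed graph, so the same steps can be followed for any numerical.

# Dijkstra's shortest-path sketch on a small assumed example graph.
import heapq

graph = {                     # adjacency list: node -> [(neighbour, link cost), ...]
    "A": [("B", 2), ("C", 5)],
    "B": [("A", 2), ("C", 1), ("D", 4)],
    "C": [("A", 5), ("B", 1), ("D", 2)],
    "D": [("B", 4), ("C", 2)],
}

def dijkstra(source):
    dist = {node: float("inf") for node in graph}
    dist[source] = 0
    pq = [(0, source)]                       # min-heap of (distance, node)
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue                         # stale heap entry, skip it
        for v, cost in graph[u]:
            if d + cost < dist[v]:           # relax the edge u -> v
                dist[v] = d + cost
                heapq.heappush(pq, (dist[v], v))
    return dist

print(dijkstra("A"))          # {'A': 0, 'B': 2, 'C': 3, 'D': 5}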
Flooding Routing Algorithm
 It is a simple and non-adaptive routing algorithm.
 In this algorithm, a packet is sent by a source node to all its adjacent nodes.
 At each node, every incoming packet is retransmitted on every outgoing line, except the line that it arrived from.
 This technique requires no network information.

 Advantages of Flooding:
 It requires no network information.
 It is highly robust.
 This property finds applications in the military where the robustness of flooding is very much desirable.
 It is also useful in distributed databases where it is necessary to update the database concurrently.

 Disadvantages of Flooding:
 The total traffic load that this technique generates is directly proportional to the connectivity of the network.
 Flooding requires much larger bandwidth.
Flow-Based Routing Algorithm
 Flow-based routing algorithm is a method used to manage the flow of data packets based on specific criteria such as available
bandwidth, congestion levels, or quality of service requirements.
 Think of it like managing traffic on a highway: different vehicles have different needs and priorities - some may require faster lanes,
some may need to avoid congested areas, and some may have special requirements like wider lanes.
 Flow-based routing algorithm works similarly by directing data packets along paths that best suit their needs.
 In this algorithm, routers or switches make routing decisions based on the current state of the network and the requirements of
the data packets.
 They consider factors like the available bandwidth on different paths, the delay along each route, and any constraints imposed by
network policies.
 For example, if a data packet requires high bandwidth and low latency, the router may choose a path with minimal congestion and
the highest available bandwidth.
 Alternatively, if the packet requires a secure connection, the router may prioritize paths with encryption capabilities.
 Flow-based routing algorithms aim to optimize the performance and efficiency of data transmission in computer networks by
dynamically adapting routing decisions to the changing conditions of the network and the specific needs of the data packets.
Distance Vector Routing Algorithm
 It is the type of routing protocol that was originally used on the internet.
 In distance vector routing, each router periodically shares its knowledge about the entire network with its neighbors.
 In this routing, each router maintains a routing table containing one entry for each router in the subnet.
 A router using a distance vector routing protocol does not know the entire path to a destination network.
 Advantages of Distance Vector Routing:
 Simple and efficient in small networks.
 The algorithm required to calculate routing tables from the routing information received from other routers is simple.
 Simple implementation and maintenance.
 Low resource requirements.
 Disadvantages of Distance Vector Routing:
 Slow Convergence.
 Limited Scalability.
 Routing Loops.
 Heavy Administrative Burden.
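As a sketch of the distance-vector idea, each router below repeatedly improves its table from its neighbours' tables (the Bellman-Ford relaxation) until nothing changes; the three-node topology and link costs are assumed example values.

# Distance-vector sketch: each node updates its table from its neighbours' tables.
INF = float("inf")
links = {("A", "B"): 1, ("B", "C"): 2, ("A", "C"): 7}   # assumed topology and costs
nodes = ["A", "B", "C"]

def cost(u, v):
    return links.get((u, v)) or links.get((v, u)) or INF

# Each router starts knowing only the cost to its direct neighbours.
table = {u: {v: (0 if u == v else cost(u, v)) for v in nodes} for u in nodes}

changed = True
while changed:                              # iterate until no table changes (convergence)
    changed = False
    for u in nodes:
        for v in nodes:
            via = min(cost(u, n) + table[n][v] for n in nodes if n != u)
            if via < table[u][v]:
                table[u][v] = via           # a neighbour offers a cheaper path to v
                changed = True

print(table["A"])                           # {'A': 0, 'B': 1, 'C': 3}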
Link State Routing Algorithm
 It is more robust and used more widely in larger networks.
 Link State Routing Protocols can transmit routing information to all other routers running the same protocol, not just directly
connected neighbors.

 Link state routing is conceptually simple; each router performs the following functions:
 Discover its neighbors and obtain their network addresses.
 Measures the delay or cost to each of its neighbors.
 Construct a packet containing the network addresses and the cost/delay of all the neighbors.
 Send this packet to all other routers.
 Compute the shortest path to every other router; Dijkstra’s algorithm is used for this purpose.
 Advantages of Link State Routing:
 Link State Routing reacts more quickly, and in a bounded amount of time, to connectivity changes.
 Low network overhead.
 Low convergence time.
 The ability to scale to large and very large networks.
 Changes in topology can be sent out immediately, so convergence can be quicker.
 Disadvantages of Link State Routing:
 More complex and difficult to configure.
 Requires additional memory to create and maintain the link state database.
 Requires more CPU processing than distance vector routing protocol.
 Requires more storage and more computing to run.
Hierarchical Routing Algorithm
 It is a technique commonly used while building large networks.
 As a network grows, the resource requirements for the network’s management and control functions also grow.
 With the increase in the growth of the network size, the size of the routing table also increases and the router can’t handle
network traffic as efficiently.
 In Hierarchical Routing, the network is divided into regions and every router maintains the details of those routers which are
present in the same region.
 It does not know anything about the internal structure of other regions.
 Routers just save one record in their table for every other region.
 Hierarchical Routing is a natural way for routing to scale size, network administration, and governance.
 It is important because the internet is an interconnection of unequal networks.

 Advantages of Hierarchical Routing:


 It reduces the overhead by structuring the network on more levels.
 It needs fewer calculations and updates of routing tables.
 It saves the routing table space.

 Disadvantages of Hierarchical Routing:


 The complexity of maintaining the hierarchy may compromise the performance of the routing protocol.
 The hierarchical routing may face implementation difficulty, because a node selected as a cluster head may not necessarily
have higher processing capability than the other nodes.
Broadcast Routing Algorithm
 The Broadcast Routing algorithm is a method used to send data from one sender to all other nodes within the network.
 It's like sending a message to everyone in a room by speaking loudly so that everyone can hear.
 In this algorithm, when a node wants to transmit data to all other nodes in the network, it sends the data packet to all its
neighboring nodes.
 Each receiving node then forwards the data packet to all of its neighbors, except for the node from which it received the packet.
 This process continues until every node in the network has received the packet.
 Broadcast Routing is commonly used in scenarios where information needs to be disseminated to multiple destinations
simultaneously, such as in announcements or updates that are relevant to all nodes in the network.
 While Broadcast Routing ensures that the message reaches all nodes in the network, it can lead to issues such as network
congestion and unnecessary duplication of data.
 Therefore, it's essential to use this algorithm judiciously and implement optimizations to minimize its drawbacks, such as limiting
the frequency of broadcasts or employing techniques to reduce redundant transmissions.
Routing Protocols
 Routing protocols are sets of rules used by routers to determine the best path for forwarding data packets.
 They enable routers to exchange information, update routing tables, and make decisions about the most efficient routes for data
transmission within the network.
 Routing protocols are essential in enabling communication between different devices in a network.
 They facilitate the exchange of routing information, enabling routers to dynamically adapt to changes in the network topology.
 Examples include RIP, OSPF, BGP, etc. protocols.

 Here are some routing protocols:


 Routing Information Protocol (RIP)
 RIP is one of the earliest routing protocols developed and is an interior gateway protocol.
 It can be used with local area networks (LANs), which link computers over a short range, or wide area networks
(WANs), which are telecom networks that cover a large range.
 RIP uses hop count as its metric to calculate the shortest path between networks.
Routing Protocols
 Interior Gateway Routing Protocol (IGRP)
 IGRP was developed by the multinational technology corporation Cisco.
 It makes use of many of the core features of RIP but raises the maximum number of supported hops to 100.
 It might therefore function better on larger networks.
 IGRP is a distance-vector interior gateway protocol.
 Its routing metric is computed from indicators such as load, reliability, and bandwidth (network capacity).
 Additionally, it updates its routes automatically when the topology changes.
 This aids in the prevention of routing loops, which are errors that result in an unending data transfer cycle.

 Exterior Gateway Protocol (EGP)


 Exterior gateway protocols, such as EGP, help transfer data or information between several gateway hosts in autonomous
systems.
 In particular, it aids in giving routers the room they need to exchange data between domains, such as the Internet.
Routing Protocols
 Enhanced Interior Gateway Routing Protocol (EIGRP)
 EIGRP is categorized as a classless, interior gateway, distance-vector routing protocol.
 To maximize efficiency, it makes use of the Diffusing Update Algorithm (DUAL) and a reliable transport protocol.
 A router can use the tables of other routers to obtain information and store it for later use.
 Every router communicates with its neighbors when something changes, so that every router is aware of which data paths are
active.
 This stops routers from miscommunicating with one another.
 (The only exterior gateway protocol in common use today is the Border Gateway Protocol (BGP), described below.)

 Open shortest path first (OSPF)


 OSPF is an interior gateway, link-state, classless protocol that makes use of the shortest path first (SPF) algorithm to
guarantee effective data transfer.
 It maintains multiple databases containing topology tables and details about the network as a whole.
 Its link-state advertisements, which resemble reports, describe each advertised link’s cost and potential resource requirements.
 When topology changes, OSPF recalculates paths using the Dijkstra algorithm.
 To guarantee that its data is safe from modifications or network intrusions, it also employs authentication procedures.
 Using OSPF can be advantageous for both large and small network organizations because of its scalability features.
Routing Protocols

 Border gateway protocol (BGP)

 BGP is another exterior gateway protocol, first created to take over the role of EGP.

 It is a path-vector protocol (a refinement of distance vector) that chooses routes for data packet transfers using a best-path selection process.

 BGP defines communication over the Internet.

 The Internet is a vast network of interconnected autonomous systems.

 Every autonomous system has an autonomous system number (ASN) that it receives by registering with the Internet Assigned

Numbers Authority.
Congestion
 Congestion:
 When too many packets are rushed to a node or a part of the network, the network performance may degrade.
 Such a situation is called congestion.
 Congestion in a network may occur if the load on the network is greater than the capacity of the network.

 Congestion Control:
 It refers to the mechanisms and techniques to control the congestion and keep the load below the capacity.

Different Algorithms to control the congestion are:


 Leaky Bucket Algorithm:
 The leaky bucket algorithm is used to control the rate at which traffic is sent to the network.
 It provides a mechanism by which burst traffic can be shaped to present a steady stream of traffic to the network, as opposed
to traffic with irregular bursts of low-volume and high-volume flows.
 The Algorithm is as follows:
 Arriving packets are placed in a bucket with a hole in the bottom.
 The bucket can queue at most b bytes.
 A packet that arrives when the bucket is full is discarded.
 Packets drain through the hole in the bucket into the network at a constant rate of r bytes per second, thus smoothing
traffic bursts.
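A minimal sketch of the leaky bucket described above: bursty arrivals are queued in a bucket of at most b bytes and drained at a constant r bytes per tick; the arrival pattern and parameters are assumed for illustration.

# Leaky bucket sketch: bursty arrivals in, constant-rate output, overflow discarded.
def leaky_bucket(arrivals, b, r):
    """arrivals: bytes arriving per tick; b: bucket size (bytes); r: drain rate (bytes/tick)."""
    level = 0
    for tick, arriving in enumerate(arrivals):
        accepted = min(arriving, b - level)       # anything beyond the bucket size is dropped
        dropped = arriving - accepted
        level += accepted
        sent = min(level, r)                      # drain at the constant rate r
        level -= sent
        print(f"tick {tick}: in={arriving} sent={sent} queued={level} dropped={dropped}")

leaky_bucket(arrivals=[10, 0, 0, 25, 0, 0, 0], b=20, r=5)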
Congestion
 Token Bucket Algorithm:
 Token Bucket consists of a bucket that can hold up to b tokens.
 Tokens are generated at a constant rate of r tokens per second and added to the bucket as long as it is not filled.
 Each packet transmitted into the network must first remove a token from the token bucket.
 If the bucket is empty, the packet must wait for a new token, or it is dropped or marked with a lower priority.
 A Token Bucket can control a bursty data stream with the average rate r and a maximum burst size of b packets.

 The Algorithm is as follows: (Assume each token = 1 byte)


 A Token is added to the bucket every 1/r second.
 The bucket can hold at most b tokens.
 If a token arrives when the bucket is full, it is discarded.
 When a packet of n bytes arrives, n tokens are removed from the bucket and the packet is sent to the network.
 If fewer than n tokens are available, no tokens are removed from the bucket and the packet is considered to be non-
conformant.
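And a matching sketch of the token bucket algorithm above (1 token = 1 byte): tokens accumulate at r per tick up to b, and a packet of n bytes is conformant only if n tokens are available; the parameters and packet sizes are assumed for illustration.

# Token bucket sketch: tokens accumulate at r tokens/tick up to b; a packet of n bytes
# consumes n tokens or is treated as non-conformant (here 1 token = 1 byte).
def token_bucket(packets, b, r):
    tokens = b                                    # start with a full bucket
    for tick, n in enumerate(packets):
        tokens = min(b, tokens + r)               # add new tokens, discard any overflow
        if n <= tokens:
            tokens -= n
            verdict = "sent"
        else:
            verdict = "non-conformant"            # wait, drop, or mark with lower priority
        print(f"tick {tick}: packet={n}B tokens_left={tokens} -> {verdict}")

token_bucket(packets=[100, 700, 200, 900, 50], b=1000, r=200)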
