CN Module-II
DATA LINK LAYER DESIGN ISSUES
The data link layer uses the services of the physical layer to send and receive bits over
communication channels. Functions of the data link layer include:
o Providing a well-defined service interface to the network layer.
o Dealing with transmission errors.
o Regulating the flow of data so that slow receivers are not swamped by fast senders.
To accomplish these goals, the data link layer takes the packets it gets from the network layer
and encapsulates them into frames for transmission. Each frame contains a frame header, a
payload field for holding the packet, and a frame trailer, as illustrated in Fig. 3-1. Frame
management forms the heart of what the data link layer does.
The physical layer delivers bits of information to and from the data link layer.
On the source machine, an entity (a process) in the network layer hands bits to the data link layer for transmission to the destination.
Services provided to the network layer:
1. Unacknowledged connectionless service: no logical connection between source and destination; lost frames are not recovered.
FRAMING
A way for a sender to transmit a set of bits that is meaningful to the receiver.
The data link layer breaks the bit stream up into discrete frames, computes a checksum for each frame, and includes the checksum in the frame when it is transmitted.
Framing methods:
Flag bytes with byte stuffing (Character Stuffing)
Each frame starts and ends with a special byte, the flag byte.
If a flag byte already occurs inside the frame, an extra escape byte (ESC) is stuffed before it.
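As a sketch, byte stuffing can be implemented as below. The flag and escape values 0x7E/0x7D are assumed for illustration (real protocols such as PPP additionally XOR the escaped byte with 0x20, which is omitted here for simplicity):

```python
FLAG, ESC = 0x7E, 0x7D  # example flag/escape values, assumed for illustration

def byte_stuff(payload: bytes) -> bytes:
    """Stuff an ESC before any FLAG or ESC in the payload, then add framing flags."""
    out = bytearray([FLAG])
    for b in payload:
        if b in (FLAG, ESC):
            out.append(ESC)          # escape the special byte
        out.append(b)
    out.append(FLAG)
    return bytes(out)

def byte_unstuff(frame: bytes) -> bytes:
    """Strip the framing flags and remove the stuffed escape bytes."""
    body = frame[1:-1]
    out = bytearray()
    i = 0
    while i < len(body):
        if body[i] == ESC:           # the next byte is literal data
            i += 1
        out.append(body[i])
        i += 1
    return bytes(out)
```

Round-tripping any payload through `byte_stuff` and `byte_unstuff` returns the original data, even when it contains flag or escape bytes.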
Flag bits with bit stuffing
Each frame begins and ends with a special bit pattern, the flag byte 01111110. Whenever the sender's data contains five consecutive 1s, it stuffs a 0 bit into the outgoing stream so the flag pattern can never appear inside the frame; the receiver removes the stuffed 0s.
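A minimal sketch of bit stuffing, representing the bit stream as a string of '0'/'1' characters (a representation chosen here for illustration):

```python
def bit_stuff(bits: str) -> str:
    """After five consecutive 1s, insert a 0 so the flag 01111110 never appears."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == '1' else 0
        if run == 5:
            out.append('0')          # stuffed bit
            run = 0
    return ''.join(out)

def bit_unstuff(bits: str) -> str:
    """Remove the 0 that follows every run of five 1s."""
    out, run, i = [], 0, 0
    while i < len(bits):
        out.append(bits[i])
        run = run + 1 if bits[i] == '1' else 0
        if run == 5:
            i += 1                   # skip the stuffed 0
            run = 0
        i += 1
    return ''.join(out)
```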
ERROR CONTROL (EC)
If the sender receives a positive ack, it knows the frame has arrived safely.
On the other hand, a negative ack means that something has gone wrong and the frame must
be transmitted again.
But certain frames can go missing due to noise on the channel.
If an acknowledgement is lost, a timer event triggers retransmission, which can duplicate a frame; sequence numbers allow the receiver to detect duplicates.
1. Forward error correction: when the receiver detects an error in the received data, it executes an error-correcting code to correct the error and recover automatically.
2. Backward error correction: when the receiver detects an error in the received data, it requests the sender to retransmit the data unit.
ERROR DETECTION
1. Parity
2. Checksum
PARITY
A single parity bit is appended to the data.
The parity bit is chosen so that the number of 1 bits in the codeword is even (even parity) or odd (odd parity).
Even parity is used for asynchronous transmission.
Odd parity is used for synchronous (continuous) transmission.
Now an error will be detected, since the number of 1s received is odd.
If two bits are inverted, the received data is wrong even though the number of 1s is still even, so the error cannot be detected.
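A minimal sketch of single-bit parity (even parity by default):

```python
def add_parity(bits: str, even: bool = True) -> str:
    """Append a parity bit so the total number of 1s is even (or odd)."""
    ones = bits.count('1')
    parity = ones % 2 if even else (ones + 1) % 2
    return bits + str(parity)

def check_parity(codeword: str, even: bool = True) -> bool:
    """True if the codeword's 1-count matches the chosen parity."""
    ones = codeword.count('1')
    return (ones % 2 == 0) if even else (ones % 2 == 1)
```

Flipping one bit of a valid codeword makes `check_parity` fail, but flipping two bits leaves the parity unchanged, matching the limitation noted above.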
CHECKSUM
The structure of encoder and decoder
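The notes do not fix a particular checksum algorithm; as one concrete example, a sketch of the 16-bit ones'-complement (Internet-style) checksum, with the receiver verifying that the sum over data plus checksum is all 1s:

```python
def ones_sum(data: bytes) -> int:
    """16-bit ones'-complement sum with end-around carry."""
    if len(data) % 2:
        data += b'\x00'                              # pad to whole 16-bit words
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)     # wrap the carry around
    return total

def checksum16(data: bytes) -> int:
    """Sender side: ones' complement of the ones'-complement sum."""
    return (~ones_sum(data)) & 0xFFFF

def verify(data: bytes, cksum: int) -> bool:
    """Receiver side: the sum over data + checksum should be 0xFFFF."""
    if len(data) % 2:
        data += b'\x00'
    return ones_sum(data + cksum.to_bytes(2, 'big')) == 0xFFFF
```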
ERROR CORRECTION
Hamming Distance: The number of positions at which the corresponding bits of two equal-length words differ, d(x, y).
It can be found using XOR operation on the two words and counting the number of 1s in the
result.
Ex: d(10101, 11110) is 3 because 10101 ⊕ 11110 = 01011 (three 1s).
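The XOR-and-count procedure can be sketched directly:

```python
def hamming_distance(x: str, y: str) -> int:
    """Count the positions where equal-length bit strings differ (the 1s in x XOR y)."""
    assert len(x) == len(y)
    return sum(a != b for a, b in zip(x, y))
```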
Hamming code:
The Hamming code, designed by R. W. Hamming, is a technique for detecting damage and errors during data transmission between network channels. It is one of the most effective ways to detect single-bit errors in the original data at the receiver end.
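As a sketch, the classic Hamming(7,4) construction places parity bits at positions 1, 2 and 4 of the 7-bit codeword; the bit layout below follows that standard construction (assumed here, since the notes do not fix one):

```python
def hamming74_encode(d: list[int]) -> list[int]:
    """Encode 4 data bits as [p1, p2, d1, p4, d2, d3, d4] (positions 1..7)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # covers codeword positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # covers codeword positions 2, 3, 6, 7
    p4 = d2 ^ d3 ^ d4          # covers codeword positions 4, 5, 6, 7
    return [p1, p2, d1, p4, d2, d3, d4]

def hamming74_correct(c: list[int]) -> list[int]:
    """Recompute the parity checks; the syndrome is the 1-based error position."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s4 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s4
    if syndrome:
        c = c.copy()
        c[syndrome - 1] ^= 1   # flip the erroneous bit
    return c
```

Flipping any single bit of an encoded word and running `hamming74_correct` recovers the original codeword.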
ELEMENTARY DATA LINK PROTOCOLS:
Protocols in DLL perform: framing, error control and flow control.
Framing is the process of dividing the bit stream from the physical layer into data frames (a few hundred to a few thousand bytes).
Simplex Protocol for a noisy channel
Practically, data is transmitted in both directions. This can be achieved by full-duplex transmission over a 'forward' (data) channel and a 'reverse' (acknowledgement) channel. In both cases, the reverse channel is almost wasted. To overcome this problem, a technique called piggybacking is used: the acknowledgement is attached to the next outgoing data frame.
Pipelining: increased utilization
We first look at the sliding window protocol. The sliding window protocol differs from the stop-and-wait protocol: in stop-and-wait, the sender can send only one frame at a time and cannot send the next frame without receiving the acknowledgment of the previously sent frame, whereas in a sliding window protocol multiple frames can be in flight at a time.
The variations of sliding window protocol are Go-Back-N ARQ and Selective Repeat ARQ. Let's understand
'what is Go-Back-N ARQ'.
It uses the principle of protocol pipelining, in which multiple frames can be sent before receiving the acknowledgment of the first frame. If we have five frames and the scheme is Go-Back-3, then three frames (frame 1, frame 2, frame 3) can be sent before an acknowledgment of frame 1 is expected.
In Go-Back-N ARQ, the frames are numbered sequentially: because multiple frames are in flight at a time, a numbering scheme is required to distinguish one frame from another, and these numbers are known as sequence numbers.
Example: Suppose there are a sender and a receiver, and assume there are 11 frames to be sent. These frames are represented as 0,1,2,3,4,5,6,7,8,9,10, and these are the sequence numbers of the frames. Mainly, the sequence-number space is decided by the sender's window size, but for better understanding we use running sequence numbers 0 through 10. Let's consider the window size as 4, which means that four frames can be sent at a time before expecting the acknowledgment of the first frame.
Important points related to Go-Back-N ARQ:
o In Go-Back-N, N determines the sender's window size, and the size of the receiver's window
is always 1.
o Corrupted frames are not considered; they are simply discarded.
o Frames that arrive out of order are not accepted; they are discarded.
o If the sender does not receive an acknowledgment, it retransmits all the frames in the
current window.
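The behaviour above can be sketched as a toy simulation. Frame losses are listed up front and each loss happens once, an assumption made purely for illustration; real senders rely on timeouts and explicit acknowledgements:

```python
def go_back_n(num_frames: int, window: int, lost: set[int]) -> list[str]:
    """Simulate a Go-Back-N sender with cumulative ACKs over a lossy channel."""
    log = []
    base = 0              # oldest unacknowledged frame (left edge of the window)
    delivered = 0         # next frame the receiver expects
    consumed = set()      # losses already consumed (each frame is lost at most once)
    while base < num_frames:
        upper = min(base + window, num_frames)
        for seq in range(base, upper):       # send every frame in the window
            if seq in lost and seq not in consumed:
                consumed.add(seq)
                log.append(f"frame {seq} lost")
                break     # the receiver discards everything after the gap
            if seq == delivered:             # in-order frame is accepted
                delivered += 1
                log.append(f"frame {seq} delivered")
        base = delivered  # cumulative ACK slides the window forward
    return log
```

With 6 frames, window 3 and frame 2 lost, frames 0 and 1 are delivered, frame 2 is lost, and the sender then goes back and resends from frame 2 onward.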
Selective repeat:
Selective-repeat Automatic Repeat Request (ARQ) is one of the techniques a data link
layer may deploy to control errors together with flow control. In selective-reject ARQ, the
only frames retransmitted are those that receive a NAK or that time out.
There are some requirements for error control mechanisms, and they are as follows −
Error detection − The sender and receiver must be able to ascertain that an error occurred
in transit.
Positive ACK − Whenever a receiver receives a correct frame, it should acknowledge it.
Negative ACK − Whenever the receiver receives a damaged or duplicate frame, it
sends a NACK back to the sender, and the sender must retransmit the correct frame.
Retransmission − The sender maintains a clock and sets a timeout period. If an ACK
of a previously transmitted data frame does not arrive before the timeout, the sender
retransmits the frame, assuming that the frame or its ACK was lost in transit.
Selective-repeat ARQ is a type of sliding window protocol used for error detection
and control in the data link layer.
In selective repeat, the sender sends several frames specified by a window size without
waiting for individual acknowledgements from the receiver, as in Go-Back-N ARQ. In
the selective repeat protocol, a retransmitted frame may be received out of sequence.
In Selective Repeat ARQ only the lost or error frames are retransmitted, whereas correct
frames are received and buffered.
The receiver keeps track of sequence numbers, buffers the frames in memory, and
sends a NACK only for frames that are missing or damaged. The sender retransmits
only the packets for which a NACK is received.
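The receiver-side buffering described above can be sketched as follows; the arrival order is an assumed example in which frame 2 arrives late as a retransmission:

```python
def sr_receive(arrivals: list[int]) -> list[int]:
    """Selective Repeat receiver: buffer out-of-order frames, deliver in order.

    arrivals: frame sequence numbers in the order they arrive on the channel.
    Returns the order in which frames are delivered to the network layer.
    """
    expected, buffer, delivered = 0, set(), []
    for seq in arrivals:
        buffer.add(seq)               # accept and buffer even if out of order
        while expected in buffer:     # deliver any in-order run upward
            delivered.append(expected)
            expected += 1
    return delivered
```

Frames 3 and 4 sit in the buffer until the retransmitted frame 2 arrives, and then the whole run 2, 3, 4 is delivered at once.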
Selective repeat in action
The Channel Allocation Problem
The central theme of this chapter is how to allocate a single broadcast channel among competing
users. The channel might be a portion of the wireless spectrum in a geographic region, or a single
wire or optical fiber to which multiple nodes are connected. It does not matter. In both cases, the
channel connects each user to all other users and any user who makes full use of the channel
interferes with other users who also wish to use the channel.
The traditional way of allocating a single channel, such as a telephone trunk, among multiple
competing users is to chop up its capacity by using one of the multiplexing schemes. If there are N
users, the bandwidth is divided into N equal-sized portions, with each user being assigned one
portion. Since each user has a private frequency band, there is now no interference among users.
When there is only a small and constant number of users, each of which has a steady stream or a
heavy load of traffic, this division is a simple and efficient allocation mechanism.
A wireless example is FM radio stations. Each station gets a portion of the FM band and uses it most
of the time to broadcast its signal. The poor performance of static FDM can easily be seen with a
simple queueing theory calculation. Let us start by finding the mean time delay, T, to send a frame
onto a channel of capacity C bps. We assume that the frames arrive randomly with an average arrival
rate of λ frames/sec, and that the frames vary in length with an average length of 1/μ bits. With these
parameters, the service rate of the channel is μC frames/sec. A standard queueing theory result is
T = 1/(μC − λ).
If the channel is divided into N static subchannels, each has capacity C/N and carries an arrival rate of λ/N, so the mean delay becomes
T_N = 1/(μ(C/N) − λ/N) = N/(μC − λ) = NT.
The mean delay for the divided channel is N times worse than if all the frames were somehow magically arranged orderly in a big central queue. Since none of the traditional static channel allocation methods work well at all with bursty traffic, we will now explore dynamic methods.
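The result T_N = NT can be checked numerically; the channel and traffic parameters below are assumed purely for illustration:

```python
def mean_delay(mu: float, C: float, lam: float) -> float:
    """Mean delay T = 1/(mu*C - lambda) for an M/M/1-style channel."""
    return 1.0 / (mu * C - lam)

mu = 1 / 10_000        # frames average 10,000 bits, so 1/mu = 10,000 bits
C = 100e6              # 100 Mbps channel
lam = 5000.0           # 5000 frames/sec total arrival rate
N = 10                 # number of static FDM subchannels

T = mean_delay(mu, C, lam)              # one big shared channel
T_fdm = mean_delay(mu, C / N, lam / N)  # each user gets C/N capacity, lam/N load
assert abs(T_fdm - N * T) < 1e-12       # static FDM is exactly N times worse
```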
Dynamic Channel Allocation
1. Independent Traffic: The model consists of N independent stations (e.g., computers, telephones),
each with a program or user that generates frames for transmission. The expected number of frames
generated in an interval of length Δt is λΔt, where λ is a constant (the arrival rate of new frames).
Once a frame has been generated, the station is blocked and does nothing until the frame has been
successfully transmitted.
2. Single Channel: A single channel is available for all communication. All stations can transmit on
it and all can receive from it. The stations are assumed to be equally capable, though protocols may
assign them different roles (e.g., priorities)
3. Observable Collisions: If two frames are transmitted simultaneously, they overlap in time and the
resulting signal is garbled. This event is called a collision. All stations can detect that a collision has occurred.
A collided frame must be transmitted again later. No errors other than those generated by collisions occur.
4. Continuous or Slotted Time: Time may be assumed continuous, in which case frame
transmission can begin at any instant. Alternatively, time may be slotted or divided into discrete
intervals (called slots). Frame transmissions must then begin at the start of a slot. A slot may contain
0, 1, or more frames, corresponding to an idle slot, a successful transmission, or a collision,
respectively.
5. Carrier Sense or No Carrier Sense: With the carrier sense assumption, stations can tell if the
channel is in use before trying to use it. No station will attempt to use the channel while it is sensed
as busy. If there is no carrier sense, stations cannot sense the channel before trying to use it. They just
go ahead and transmit. Only later can they determine whether the transmission was successful.
Multiple access problem: how to coordinate the access of multiple sending and receiving stations
to a shared broadcast channel.
The Channel Allocation problem
i) FDM
ii) TDM
Assumptions for Dynamic Channel Allocation: observable collisions.
Random-access protocols:
i) ALOHA
ii) CSMA
iii) CSMA/CA
iv) CSMA/CD
ALOHA:
Aloha Rules
Pure ALOHA-Original ALOHA
Each station sends a frame whenever it has a frame to send, then waits for an acknowledgement.
Collision: occurs if two stations transmit at the same time.
The waiting time before retransmission is random (back-off time).
Throughput vs. offered traffic for an ALOHA system
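The classic throughput formulas behind this curve are S = G·e^(−2G) for pure ALOHA and S = G·e^(−G) for slotted ALOHA, where G is the offered load in frames per frame time; they can be computed directly:

```python
import math

def pure_aloha(G: float) -> float:
    """Throughput of pure ALOHA: the vulnerable period is two frame times."""
    return G * math.exp(-2 * G)

def slotted_aloha(G: float) -> float:
    """Throughput of slotted ALOHA: the vulnerable period is one slot."""
    return G * math.exp(-G)

# Scan the offered load to locate the peaks:
best_pure = max(pure_aloha(g / 100) for g in range(1, 300))      # ~18.4% at G = 0.5
best_slotted = max(slotted_aloha(g / 100) for g in range(1, 300))  # ~36.8% at G = 1
```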
1-Persistent: In this mode of CSMA, each node first senses the shared channel; if the channel is
idle, it immediately sends the data. Otherwise, it keeps monitoring the channel and transmits the
frame unconditionally (with probability 1) as soon as the channel becomes idle.
Non-Persistent: In this mode, each node must sense the channel before transmitting; if the channel
is idle, it immediately sends the data. Otherwise, the station waits for a random time (it does not
sense continuously) and senses again; when the channel is found to be idle, it transmits the frame.
P-Persistent: A combination of the 1-persistent and non-persistent modes. Each node senses the
channel, and if the channel is idle, it sends a frame with probability p. With probability q = 1 − p,
it defers to the next time slot and repeats the process.
Collision-Free Protocols
Almost all collisions can be avoided in CSMA/CD, but they can still occur during the contention
period. Collisions during the contention period adversely affect system performance, especially
when the cable is long and frames are short.
Token Passing
The token represents permission to send. If a station has a frame queued for transmission when it
receives the token, it can send that frame before it passes the token to the next station. If it has no
queued frame, it simply passes the token.
Frames are transmitted in the direction of the token, circulating around the ring.
After sending a frame, each station must wait for the token to circulate, giving all stations a
chance to send their frames in one cycle, before it can transmit again.
Token Ring (IEEE 802.5) popular in the 1980s as an alternative to classic Ethernet.
FDDI (Fiber Distributed Data Interface) in the 1990s, a faster token ring, was surpassed by switched
Ethernet.
RPR (Resilient Packet Ring) defined as IEEE 802.17 in the 2000s for metropolitan area rings used by
ISPs.
Binary Countdown Protocol
A problem with the basic bit-map protocol, and by extension token passing, is that the overhead is 1
bit per station, so it does not scale well to networks with thousands of stations. We can do better than
that by using binary station addresses with a channel that combines transmissions
All addresses are assumed to be the same length. The bits in each address position from different
stations are BOOLEAN ORed together by the channel when they are sent at the same time.
It implicitly assumes that the transmission delays are negligible, so that all stations see asserted
bits essentially instantaneously.
To avoid conflicts, an arbitration rule must be applied: as soon as a station sees that a high-order bit
position that is 0 in its address has been overwritten with a 1, it gives up.
In the below example S10 will be the first station to transmit the data followed by S9,S4 and S2.
Channel efficiency :
d/(d + log N), where d is the address length and N is the number of stations.
Example:
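A small simulation of this arbitration, using the S2, S4, S9 and S10 stations from the example above with 4-bit binary addresses (an encoding assumed for illustration):

```python
def binary_countdown(addresses: list[str]) -> str:
    """All contenders broadcast their address bits, high-order bit first; the
    channel ORs the bits together. A station drops out as soon as it sees a 1
    where its own bit is 0. The highest address wins the channel."""
    contenders = list(addresses)          # all addresses must be the same length
    for pos in range(len(addresses[0])):
        channel = max(int(a[pos]) for a in contenders)   # wired-OR of asserted bits
        if channel == 1:
            contenders = [a for a in contenders if a[pos] == '1']
    return contenders[0]
```

With addresses 0010 (S2), 0100 (S4), 1001 (S9) and 1010 (S10) contending, S10 wins, matching the example in the text.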
Limited-Contention Protocols:
Aim to achieve low delay at low load and improved channel efficiency at high load.
They combine the advantages of both contention and collision-free protocols.
Adaptive Tree Walk Protocol
Example:
WIRELESS LAN PROTOCOLS (802.11)
Wireless LANs communicate via radio waves and operate as broadcast channels, differing
from wired LANs.
Access points (APs) connect wireless clients to the wired network.
Provides megabit/sec bandwidths, up to 600 Mbps.
Detection of collisions in wireless LANs:
Acknowledgements are used after transmission to identify collisions and other errors.
Challenges in Coverage Regions:
• In practice, coverage regions are irregular due to environmental factors like walls and
obstacles.
Hidden Terminal Problem
Feature: Definition
  Hidden terminal: a situation where a terminal cannot hear other terminals, even though they may
  be within range, leading to potential transmission collisions.
  Exposed terminal: a situation where a terminal can hear other terminals but cannot directly
  communicate with them, potentially causing interference.
Feature: Scenario
  Hidden terminal: common in wireless networks with obstacles or signal attenuation.
  Exposed terminal: common in wireless networks where terminals are close to each other, causing
  overlapping signals.
Feature: Communication issues
  Hidden terminal: a terminal transmits without knowing that another terminal is already
  transmitting, causing collisions.
  Exposed terminal: a terminal may refrain from transmitting even when it could transmit without
  interference.
Feature: Solutions
  Hidden terminal: use of the RTS/CTS (Request to Send / Clear to Send) mechanism; Carrier Sense
  Multiple Access with Collision Avoidance (CSMA/CA) protocols.
  Exposed terminal: adjusting transmission power levels to reduce interference; implementing
  CSMA/CA protocols.
MACA (Multiple Access with Collision Avoidance)
Multiple Access with Collision Avoidance (MACA) is a media access control protocol used in
wireless LAN data transmission to avoid collisions caused by the hidden-station problem and to
mitigate the exposed-station problem.
Working:
The main condition for MACA to work is that the stations are in sync with frame sizes and data speed.
It involves the transmission of two control frames, RTS and CTS, preceding the data transmission.
RTS means Request to Send and CTS means Clear to Send. Stations near the transmitting station
hear the RTS and remain silent long enough for the CTS to come back; stations near the receiver
hear the CTS and remain silent during the data transmission.