
COMPUTER NETWORKS

Dr.R.U.Anitha

1
Unit - 2

Text Book: David J. Wetherall, Andrew S. Tanenbaum, "Computer Networks", 5th Edition, Pearson Education, 2012.

2
Major Topics

Data Link Layer: Data Link Layer Design Issues
Error Detection and Correction
Elementary protocols
Sliding Window Protocols
MAC sub layer: Channel allocation problem
Multiple access protocols

3
Data Link Layer
• The data link layer transforms the physical layer, a raw
transmission facility, into a link responsible for node-to-node
(hop-to-hop) communication.
• Specific responsibilities of the data link layer include
framing, addressing, flow control, error control, and media
access control.

4
 DATA LINK LAYER FUNCTIONS (SERVICES)

 1. Providing services to the network layer:

1. Unacknowledged connectionless service.

Appropriate for low error-rate links and real-time traffic. Example: Ethernet.

2. Acknowledged connectionless service.

Useful over unreliable channels such as WiFi (ACK/timer/resend).

3. Acknowledged connection-oriented service.

Guarantees that frames are received exactly once and in the right order.

Appropriate over long, unreliable links such as a satellite channel or a long-distance

telephone circuit.

 2. Framing:

Framing divides the stream of bits received from the network layer into manageable

data units called frames. This division of the bit stream is done by the data link layer.

 3. Physical Addressing:

The data link layer adds a header to the frame in order to define the physical
addresses of the sender and/or receiver of the frame.
5
 4. Flow Control:

 A receiving node can receive the frames at a faster rate than it can process the frame.

Without flow control, the receiver's buffer can overflow, and frames can get lost. To

overcome this problem, the data link layer uses the flow control to prevent the

sending node on one side of the link from overwhelming the receiving node on

another side of the link. This prevents traffic jams at the receiver side.

 5. Error Control:

 Error control is achieved by adding a trailer at the end of the frame. The data link

layer also adds mechanisms to prevent duplication of frames.

 Error detection:

 Errors can be introduced by signal attenuation and noise. Data Link Layer protocol

provides a mechanism to detect one or more errors. This is achieved by adding error

detection bits in the frame and then receiving node can perform an error check.

 Error correction:

 Error correction is similar to error detection, except that the receiving node not only

detects the errors but also corrects them.
6
 6. Access Control:

 Protocols of this layer determine which of the devices has control over the

link at any given time, when two or more devices are connected to the same

link.

 7. Reliable delivery:

 Data Link Layer provides a reliable delivery service, i.e., transmits the

network layer datagram without any error. A reliable delivery service is

accomplished with retransmissions and acknowledgements. A data link layer

mainly provides the reliable delivery service over links with higher error

rates (such as wireless links), so that an error can be corrected locally, on the

link at which it occurs, rather than forcing the end systems to retransmit the data.

 8. Half-Duplex & Full-Duplex:

 In a Full-Duplex mode, both the nodes can transmit the data at the same

time. In a Half-Duplex mode, only one node can transmit the data at a time.
7
DATA LINK LAYER DESIGN ISSUES
 Providing a well-defined service interface to the network
layer.
 Dealing with transmission errors.
 Regulating the flow of data so that slow receivers are not
swamped by fast senders

For this, the data link layer takes the packets it gets from the
network layer and encapsulates them into frames for
transmission. Each frame contains a frame header, a payload
field for holding the packet, and a frame trailer

8
SERVICES PROVIDED TO THE NETWORK LAYER
 The function of the data link layer is to provide services to
the network layer.
 The principal service is transferring data from the network
layer on the source machine to the network layer on the
destination machine.
 The data link layer can be designed to offer various services.
 The actual services offered can vary from system to system.
 Three reasonable possibilities that are commonly provided
are
1) Unacknowledged Connectionless service
2) Acknowledged Connectionless service
3) Acknowledged Connection-Oriented service


9
UNACKNOWLEDGED CONNECTIONLESS SERVICE

 Unacknowledged connectionless service consists of having


the source machine send independent frames to the
destination machine without having the destination
machine acknowledge them.
 No logical connection is established beforehand or released
afterward.
 If a frame is lost due to noise on the line, no attempt is
made to detect the loss or recover from it in the data link
layer.
 This class of service is appropriate when the error rate is
very low so that recovery is left to higher layers.
 It is also appropriate for real-time traffic, such as voice, in
which late data are worse than bad data.
10
ACKNOWLEDGED CONNECTIONLESS SERVICE
 When this service is offered, there are still no logical
connections used, but each frame sent is individually
acknowledged.
 In this way, the sender knows whether a frame has arrived
correctly. If it has not arrived within a specified time
interval, it can be sent again.
 This service is useful over unreliable channels, such as
wireless systems.
 Adding Ack in the DLL rather than in the Network Layer is
just an optimization and not a requirement.
 If individual frames are acknowledged and retransmitted,
entire packets get through much faster.
 On reliable channels, such as fiber, the overhead of a heavyweight
data link protocol may be unnecessary, but over wireless channels,
with their inherent unreliability, it is well worth the cost.
11
ACKNOWLEDGED CONNECTION-ORIENTED SERVICE

 Here, the source and destination machines establish a


connection before any data are transferred. Each frame sent
over the connection is numbered, and the data link layer
guarantees that each frame sent is indeed received. Furthermore,
it guarantees that each frame is received exactly once and that
all frames are received in the right order.
 When connection-oriented service is used, transfers
go through three distinct phases.
⚫ In the first phase, the connection is established by having both
sides initialize variables and counters needed to keep track of
which frames have been received and which ones have not.
⚫ In the second phase, one or more frames are actually transmitted.
⚫ In the third phase, the connection is released, freeing up the
variables, buffers, and other resources used to maintain the connection.
12
FRAMING
 DLL translates the physical layer's raw bit stream into discrete
units (messages) called frames.
 How can a frame be transmitted so that the receiver can detect
frame boundaries? That is, how can the receiver recognize the
start and end of a frame?

1. Byte count.

2. Flag bytes with byte stuffing.

3. Flag bits with bit stuffing.

4. Physical layer coding violations.

13
FRAMING – Byte count
 The first framing method uses a field in the header to specify the
number of characters in the frame.
 When the data link layer at the destination sees the character
count, it knows how many characters follow and hence where the
end of the frame is.

The trouble with this algorithm is that the count can be garbled
by a transmission error.
14
FRAMING – Flag bytes with byte stuffing.
 Use reserved characters to indicate the start and end of a frame. For instance,
use the two-character sequence DLE STX (Data-Link Escape, Start of TeXt) to
signal the beginning of a frame, and the sequence DLE ETX (End of TeXt) to
flag the frame's end.
 The second framing method, Starting and ending character stuffing, gets
around the problem of resynchronization after an error by having each frame
start with the ASCII character sequence DLE STX and end with the sequence
DLE ETX.
 Problem: What happens if the two-character sequence DLE ETX

happens to appear in the frame itself?


 Solution: Use character stuffing; within the frame, replace every occurrence
of DLE with the two-character sequence DLE DLE. The receiver reverses the
processes, replacing every occurrence of DLE DLE with a single DLE.
 Example: If the frame contained ``A B DLE D E DLE'', the characters
transmitted over the channel would be ``DLE STX A B DLE DLE D E DLE DLE
DLE ETX''.
15
16
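The DLE-based stuffing described above can be sketched in a few lines of Python. This is a minimal illustration, not a full protocol implementation; it assumes the standard ASCII codes DLE = 0x10, STX = 0x02, ETX = 0x03.

```python
# Minimal sketch of DLE STX / DLE ETX framing with character (byte) stuffing.
DLE, STX, ETX = 0x10, 0x02, 0x03     # standard ASCII control codes

def frame(payload: bytes) -> bytes:
    """Double every DLE in the payload, then wrap it in DLE STX ... DLE ETX."""
    stuffed = payload.replace(bytes([DLE]), bytes([DLE, DLE]))
    return bytes([DLE, STX]) + stuffed + bytes([DLE, ETX])

def unframe(data: bytes) -> bytes:
    """Strip the delimiters and collapse DLE DLE back to a single DLE."""
    body = data[2:-2]                                # drop DLE STX ... DLE ETX
    return body.replace(bytes([DLE, DLE]), bytes([DLE]))

# The slide's example payload: A B DLE D E DLE
payload = bytes([ord('A'), ord('B'), DLE, ord('D'), ord('E'), DLE])
assert unframe(frame(payload)) == payload            # stuffing is reversible
```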
FRAMING – Flag bits with bit stuffing
 This technique allows data frames to contain an arbitrary number of bits
and allows character codes with an arbitrary number of bits per character.
It works like this.
 Each frame begins and ends with a special bit pattern, 01111110 (in fact,
a flag byte).
 Whenever the sender's data link layer encounters five consecutive 1s in the
data, it automatically stuffs a 0 bit into the outgoing bit stream.
 This bit stuffing is analogous to byte stuffing, in which an escape byte is
stuffed into the outgoing character stream before a flag byte in the data.
 When the receiver sees five consecutive incoming 1 bits, followed by a 0
bit, it automatically destuffs (i.e., deletes) the 0 bit

17
BIT STUFFING EXAMPLE

18
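The original figure for this example is not reproduced here; as a stand-in, the stuffing and destuffing rules can be shown with a minimal Python sketch (bits are represented as a string of '0'/'1' characters purely for readability).

```python
FLAG = "01111110"

def bit_stuff(bits: str) -> str:
    """After five consecutive 1s, insert a 0 into the outgoing stream."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")          # stuffed bit
            run = 0
    return "".join(out)

def bit_destuff(bits: str) -> str:
    """Delete the 0 that follows five consecutive incoming 1s."""
    out, run, i = [], 0, 0
    while i < len(bits):
        out.append(bits[i])
        run = run + 1 if bits[i] == "1" else 0
        if run == 5:
            if i + 1 < len(bits) and bits[i + 1] == "0":
                i += 1               # skip the stuffed 0
            run = 0
        i += 1
    return "".join(out)

data = "011011111111111111110010"            # contains long runs of 1s
framed = FLAG + bit_stuff(data) + FLAG
assert bit_destuff(framed[8:-8]) == data     # strip flags, then destuff
```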
PHYSICAL LAYER CODING VIOLATIONS
 This Framing Method is used only in those networks in which Encoding on
the Physical Medium contains some redundancy.
 Some LANs encode each bit of data by using two physical bits,
i.e., Manchester coding is used. Here, bit 1 is encoded into a
high-low (10) pair and bit 0 is encoded into a low-high (01) pair.
 The scheme means that every data bit has a transition in the middle,
making it easy for the receiver to locate the bit boundaries. The
combinations high-high and low-low are not used for data but are used for
delimiting frames in some protocols.

19
ERROR CONTROL

• Error control is concerned with ensuring that all frames are


eventually delivered (possibly in order) to a destination. How?
Three items are required.
• Acknowledgements:
• Typically, reliable delivery is achieved using the
“acknowledgments with retransmission“ paradigm, whereby the
receiver returns a special acknowledgment (ACK) frame
to the sender indicating the correct receipt of a frame.
⚫ In some systems, the receiver also returns a negative
acknowledgment (NACK) for incorrectly-received frames.
⚫ This is nothing more than a hint to the sender so that it can
retransmit a frame right away without waiting for a timer to
expire. 20
 Timers: One problem that simple ACK/NACK schemes fail to
address is recovering from a frame that is lost, and as a result,
fails to solicit an ACK or NACK.
 What happens if an ACK or NACK becomes lost?
⚫ Retransmission timers are used to resend frames that don't
produce an ACK.
⚫ When sending a frame, schedule a timer to expire at some
time after the ACK should have been returned. If the timer
goes off, retransmit the frame.

 Sequence Numbers:

 Retransmissions introduce the possibility of duplicate frames.

 To suppress duplicates, add sequence numbers to each frame, so

that a receiver can distinguish between new frames and old
(retransmitted) ones.
21
FLOW CONTROL
 Flow control deals with throttling the speed of the sender to match
that of the receiver.
 Two Approaches:
⚫ feedback-based flow control, the receiver sends back
information to the sender giving it permission to send more data
or at least telling the sender how the receiver is doing
⚫ rate-based flow control, the protocol has a built-in mechanism
that limits the rate at which senders may transmit data, without
using feedback from the receiver.
 Various flow control schemes use a common protocol
that contains well-defined rules about when a sender may transmit
the next frame. These rules often prohibit frames from being sent
until the receiver has granted permission, either implicitly or
explicitly.
22
ERROR CORRECTION AND DETECTION

 It is physically impossible for any data recording or transmission


medium to be 100% perfect 100% of the time over its entire
expected useful life.

⚫ In data communication, line noise is a fact of life (e.g., signal


attenuation, natural phenomena such as lightning, and the
telephone repairman).

 As more bits are packed onto a square centimeter of disk storage,


as communications transmission speeds increase, the likelihood
of error increases-- sometimes geometrically.

 Thus, error detection and correction is critical to accurate data


transmission, storage and retrieval.
23

 Detecting and correcting errors requires redundancy -- sending
extra (redundant) bits along with the data.

TYPES OF ERRORS
 There are two main types of errors in transmissions:

1. Single bit error : It means only one bit of data unit is changed from 1
to 0 or from 0 to 1.

2. Burst error : It means two or more bits in the data unit are changed from
1 to 0 or from 0 to 1. In a burst error, it is not necessary that only
consecutive bits are changed. The length of a burst error is measured
from the first changed bit to the last changed bit.

24
ERROR DETECTION VS ERROR CORRECTION

 There are two ways of dealing with errors:


 Error Detecting Codes: Include enough redundancy bits to detect
errors and use ACKs and retransmissions to recover from the errors.
 Error Correcting Codes: Include enough redundancy to detect and
correct errors. The use of error-correcting codes is often referred
to as forward error correction.

25
Error Correcting Codes
 Error Correction codes are used to detect and correct the errors
when data is transmitted from the sender to the receiver.
 Error Correction can be handled in two ways:

1. Backward error correction: Once the error is discovered, the


receiver requests the sender to retransmit the entire data unit.

2. Forward error correction: In this case, the receiver uses the


error-correcting code which automatically corrects the errors. A
single additional bit can detect the error, but cannot correct it
 The four different error-correcting codes:
 1. Hamming codes.
 2. Binary convolutional codes.
 3. Reed-Solomon codes.
 4. Low-Density Parity Check codes 26
ERROR CORRECTION
 For correcting the errors, one has to know the exact position of
the error. For example, If we want to calculate a single-bit error,
the error correction code will determine which one of seven bits is
in error.
 To achieve this, we have to add some additional redundant bits.
 Suppose r is the number of redundant bits and d is the total
number of the data bits.
 The number of redundant bits r can be calculated by using the
formula:
 2^r >= d + r + 1. The value of r is calculated by using the above
formula.
 For example, if the value of d is 4, then the smallest value of r
that satisfies the above relation is 3.
 To determine the position of the bit which is in error, a technique
developed by R. W. Hamming, known as the Hamming code, is used.
27
ERROR CORRECTION
 Hamming Code Parity bits: The bit which is appended to the
original data of binary bits so that the total number of 1s is even
or odd.
 Even parity: To check for even parity, if the total number of 1s is
even, then the value of the parity bit is 0. If the total number of 1s
occurrences is odd, then the value of the parity bit is 1.
 Odd Parity: To check for odd parity, if the total number of 1s is
even, then the value of parity bit is 1. If the total number of 1s is
odd, then the value of parity bit is 0.
 Algorithm of Hamming code: An information of 'd' bits is
added to the redundant bits 'r' to form d+r. The location of each of
the (d+r) digits is assigned a decimal value. The 'r' bits are placed
in the positions 1, 2, 4, ..., 2^(k-1). At the receiving end, the parity
bits are recalculated. The decimal value of the recalculated parity
bits determines the position of an error.
28
Relationship between error position and binary number

Error Position   Binary Number
0                000
1                001
2                010
3                011
4                100
5                101
6                110
7                111
 Let's understand the concept of Hamming code through an example:

 Suppose the original data is 1010 which is to be sent.

 Total number of data bits 'd' = 4

 Number of redundant bits r: 2^r >= d + r + 1, i.e., 2^r >= 4 + r + 1

 Therefore, the value of r is 3 that satisfies the above relation.

 Total number of bits = d+r = 4+3 = 7;


29
Determining the position of the redundant
bits
 The number of redundant bits is 3.

 The three bits are represented by r1, r2, r4.

 The position of the redundant bits corresponds to the powers of 2.

 Therefore, their corresponding positions are 2^0 = 1, 2^1 = 2, and 2^2 = 4. The
position of r1 = 1, the position of r2 = 2, and the position of r4 = 4.


Representation of Data on the addition of parity bits:

30
Determining the Parity bits

 Determining the r1 bit:

 The r1 bit is calculated by performing a parity check on

 the bit positions whose binary representation includes 1 in the first

position.

• We observe from the above figure that the bit positions that
include 1 in the first position are 1, 3, 5, 7.
• Now, we perform the even-parity check at these bit positions.
• The total number of 1s at these bit positions corresponding to r1
is even; therefore, the value of the r1 bit is 0.
31
Determining the Parity bits

 Determining r2 bit:

 The r2 bit is calculated by performing a parity check on the

 bit positions whose binary representation includes 1 in the second

position

 We observe from the above figure that the bit positions that
include 1 in the second position are 2, 3, 6, 7. Now, we perform
the even-parity check at these bit positions. The total number of
1s at these bit positions corresponding to r2 is odd; therefore,
the value of the r2 bit is 1.
32
Determining the Parity bits

Determining r4 bit:
 The r4 bit is calculated by performing a parity check on the

 bit positions whose binary representation includes 1 in the third

position

• We observe from the above figure that the bit positions that include 1 in
the third position are 4, 5, 6, 7.
• Now, we perform the even-parity check at these bit positions.
• The total number of 1s at these bit positions corresponding to r4 is even;
therefore, the value of the r4 bit is 0.
33
Data transferred is given below:

 Suppose the 4th bit is changed from 0 to 1 at the receiving end, then parity bits

are recalculated.

R1 bit
 The bit positions of the r1 bit are 1, 3, 5, 7.
• We observe from the above figure that the bits at these positions are 1100.
• Now, we perform the even-parity check: the total number of 1s appearing at
the r1 positions is an even number. Therefore, the value of r1 is 0.
R2 bit
• The bit positions of the r2 bit are 2, 3, 6, 7.
• We observe from the above figure that the bits at these positions are 1001.
• Now, we perform the even-parity check: the total number of 1s appearing at
the r2 positions is an even number. Therefore, the value of r2 is 0.
34
R4 bit
The bit positions of r4 bit are 4,5,6,7.

• We observe from the above figure that the bits at these positions are 1011.
• Now, we perform the even-parity check: the total number of 1s appearing
at the r4 positions is an odd number. Therefore, the value of r4 is 1.
• The binary representation of redundant bits, i.e., r4r2r1 is 100, and its
corresponding decimal value is 4.
• Therefore, the error occurs in a 4th bit position.
• The bit value must be changed from 1 to 0 to correct the error.
35
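The worked example above can be checked with a short sketch. This is a minimal Python illustration of Hamming(7,4) with even parity and parity bits at positions 1, 2, and 4, as in the slides; the function names are only illustrative.

```python
def hamming74_encode(d: str) -> dict:
    """Encode 4 data bits (MSB first, e.g. '1010') into bit positions 1..7."""
    bit = {7: int(d[0]), 6: int(d[1]), 5: int(d[2]), 3: int(d[3])}
    bit[1] = (bit[3] + bit[5] + bit[7]) % 2    # r1 covers positions 1, 3, 5, 7
    bit[2] = (bit[3] + bit[6] + bit[7]) % 2    # r2 covers positions 2, 3, 6, 7
    bit[4] = (bit[5] + bit[6] + bit[7]) % 2    # r4 covers positions 4, 5, 6, 7
    return bit

def hamming74_check(bit: dict) -> int:
    """Recompute the parity checks; the result is the error position (0 = none)."""
    c1 = (bit[1] + bit[3] + bit[5] + bit[7]) % 2
    c2 = (bit[2] + bit[3] + bit[6] + bit[7]) % 2
    c4 = (bit[4] + bit[5] + bit[6] + bit[7]) % 2
    return c4 * 4 + c2 * 2 + c1                # read as binary r4 r2 r1

code = hamming74_encode("1010")                    # the slides' data word
assert (code[1], code[2], code[4]) == (0, 1, 0)    # r1 = 0, r2 = 1, r4 = 0

code[4] ^= 1                       # flip the 4th bit, as in the example
assert hamming74_check(code) == 4  # the recalculated parity bits point at position 4
code[4] ^= 1                       # change the bit back to correct the error
```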
Error Detection

Error
 A condition when the receiver's information does not match the sender's
information.
 During transmission, digital signals suffer from noise that can introduce errors

in the binary bits travelling from sender to receiver. That means a 0 bit may
change to 1 or a 1 bit may change to 0.
Error Detecting Codes
 Whenever a message is transmitted, it may get scrambled by noise or data may

get corrupted.
 To avoid this, we use error-detecting codes which are additional data added to a

given digital message to help us detect if any error has occurred during
transmission of the message.
 Basic approach used for error detection is the use of redundancy bits, where

additional bits are added to facilitate detection of errors.


 Some popular techniques for error detection are:

 1. Simple parity check
 2. Two-dimensional parity check
 3. Checksum
 4. Cyclic redundancy check (CRC)
36

Simple Parity check

 Blocks of data from the source are passed through a parity bit
generator, where a parity bit of 1 is added to the block if it contains
an odd number of 1s, and 0 is added if it contains an even number
of 1s.
 This scheme makes the total number of 1s even, which is why it is
called even parity checking.

37
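A minimal even-parity sketch in Python (bits as a '0'/'1' string; purely illustrative):

```python
def add_even_parity(block: str) -> str:
    """Append a parity bit so the total number of 1s becomes even."""
    parity = block.count("1") % 2        # 1 if the block has an odd number of 1s
    return block + str(parity)

def check_even_parity(codeword: str) -> bool:
    """Accept only if the total number of 1s (data + parity bit) is even."""
    return codeword.count("1") % 2 == 0

sent = add_even_parity("1100101")        # four 1s -> parity bit 0
assert check_even_parity(sent)
corrupted = "0" + sent[1:]               # a single-bit error is detected
assert not check_even_parity(corrupted)
```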
Two-dimensional Parity check
 Parity check bits are calculated for each row, which is equivalent

to a simple parity check bit.


 Parity check bits are also calculated for all columns, then both are

sent along with the data. At the receiving end these are compared
with the parity bits calculated on the received data.

38
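A brief sketch of the two-dimensional check (an even-parity bit per row plus a parity row over the columns); this is only an illustration of the idea, not a transmission format.

```python
def two_d_parity(rows):
    """Append an even-parity bit to each row, then add a column-parity row."""
    with_row_parity = [r + str(r.count("1") % 2) for r in rows]
    width = len(with_row_parity[0])
    column_row = "".join(
        str(sum(int(r[c]) for r in with_row_parity) % 2) for c in range(width)
    )
    return with_row_parity + [column_row]

block = two_d_parity(["1100111", "1011101", "0111001"])
for row in block:
    print(row)   # last bit of each row and the whole last row are parity bits
```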
Checksum

 In checksum error detection scheme,

the data is divided into k segments each


of m bits.
 In the sender’s end the segments are

added using 1’s complement arithmetic


to get the sum. The sum is
complemented to get the checksum.
 The checksum segment is sent along

with the data segments.


 At the receiver’s end, all received
segments are added using 1’s
complement arithmetic to get the sum.
 The sum is complemented.

 If the result is zero, the received data is

accepted; otherwise discarded

39
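A minimal sketch of the sender and receiver checksum steps. The slide leaves the segment size as m bits; 8-bit segments and the sample values below are assumptions made only for illustration.

```python
def ones_complement_sum(segments, bits=8):
    """Add segments using 1's complement arithmetic (end-around carry)."""
    mask = (1 << bits) - 1
    total = 0
    for s in segments:
        total += s
        total = (total & mask) + (total >> bits)   # wrap the carry around
    return total

def make_checksum(segments, bits=8):
    """Sender: complement the 1's complement sum of all segments."""
    return (~ones_complement_sum(segments, bits)) & ((1 << bits) - 1)

data = [0x25, 0x62, 0x3F, 0x52]                 # k = 4 segments of m = 8 bits
checksum = make_checksum(data)

# Receiver: add all received segments plus the checksum, then complement.
result = (~ones_complement_sum(data + [checksum])) & 0xFF
assert result == 0                               # zero -> accept the data
```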
Cyclic redundancy check (CRC)
 Unlike checksum scheme, which is based on addition,

CRC is based on binary division.


 In CRC, a sequence of redundant bits, called cyclic

redundancy check bits, are appended to the end of data


unit so that the resulting data unit becomes exactly
divisible by a second, predetermined binary number.
 At the destination, the incoming data unit is divided by

the same number.


 If at this step there is no remainder, the data unit is

assumed to be correct and is therefore accepted.


 A remainder indicates that the data unit has been

damaged in transit and therefore must be rejected. 40


A bit stream 1101011011 is transmitted using
the standard CRC method. The generator
polynomial is x4+x+1. What is the actual bit
string transmitted?

The generator polynomial G(x) = x^4 + x + 1 is
encoded as 10011:
1.x^4 + 0.x^3 + 0.x^2 + 1.x^1 + 1.x^0  ->  1 0 0 1 1
Clearly, the generator polynomial consists of 5
bits.
So, a string of 4 zeroes is appended to the bit
stream to be transmitted.
The resulting bit stream is 11010110110000.
This is divided by the generator using XOR
(modulo-2) division.
41
From here, CRC = 1110.
42
Now,
The code word to be transmitted is obtained by
replacing the last 4 zeroes of
11010110110000 with the CRC.
Thus, the code word transmitted to the receiver
= 11010110111110.

CRC checker at receiver's end:


1. Divide the received data word by the same
generator.
2. If the remainder is zero, the data is not
erroneous; otherwise it contains an error.

43
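The division in the worked example can be reproduced with a short sketch (modulo-2 long division over bit strings; illustrative only):

```python
def mod2_remainder(dividend: str, generator: str) -> str:
    """Modulo-2 (XOR) long division; returns the remainder as a bit string."""
    k = len(generator) - 1
    bits = list(dividend)
    for i in range(len(dividend) - k):
        if bits[i] == "1":                       # XOR the generator in at this position
            for j, g in enumerate(generator):
                bits[i + j] = str(int(bits[i + j]) ^ int(g))
    return "".join(bits[-k:])

data, gen = "1101011011", "10011"                # G(x) = x^4 + x + 1
crc = mod2_remainder(data + "0" * 4, gen)        # append 4 zeros, then divide
assert crc == "1110"
codeword = data + crc                            # 11010110111110, as on the slide
assert mod2_remainder(codeword, gen) == "0000"   # receiver: zero remainder -> accept
```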
44
ELEMENTARY DATA LINK
PROTOCOLS
• The protocols are normally implemented in software by
using one of the common programming languages.
1.An Unrestricted Simplex Protocol
2.A Simplex Stop-and-Wait Protocol
3.A Simplex Protocol for a Noisy Channel

45
SIMPLEX PROTOCOL
 It is very simple. The sender sends a sequence of frames without
even thinking about the receiver.
 Data are transmitted in one direction only.
 Both sender and receiver are always ready. Processing time can be
ignored.
 Infinite buffer space is available. And best of all, the communication
channel between the data link layers never damages or loses
frames.
 This thoroughly unrealistic protocol, which we will nickname
‘‘Utopia,’’.
 The utopia protocol is unrealistic because it does not handle either
flow control or error correction

46
Stop-and-wait Protocol
• The sender sends one frame and waits for feedback from the
receiver.
• When the ACK arrives, the sender sends the next frame.
• It is Stop-and-Wait Protocol because the sender sends one frame,
stops until it receives confirmation from the receiver (okay to go
ahead), and then sends the next frame.
• It has unidirectional communication for data frames, but auxiliary
ACK frames (simple tokens of acknowledgment) travel from the other
direction.
• It adds flow control to our previous protocol.

47
Sliding Window Protocols
1 . Stop-and-Wait Automatic Repeat Request

2. Go-Back-N Automatic Repeat Request

3. Selective Repeat Automatic Repeat Request

1. Stop-and-Wait Automatic Repeat Request

 To detect and correct corrupted frames, we need to add redundancy bits to our

data frame. When the frame arrives at the receiver site, it is checked and if it

is corrupted, it is silently discarded.

 The detection of errors in this protocol is manifested by the silence of the

receiver.

 Lost frames are more difficult to handle than corrupted ones.

 In the previous protocols, there was no way to identify a frame. The received

frame could be the correct one, or a duplicate, or a frame out of order.

 The solution is to number the frames. When the receiver receives a data frame

that is out of order, this means that frames were either lost or duplicated
48
Sliding Window Protocols
 The lost frames need to be resent in this protocol. If the receiver does not

respond when there is an error, how can the sender know which frame to
resend?
 To remedy this problem, the sender keeps a copy of the sent frame.

 At the same time, it starts a timer. If the timer expires and there is no ACK

for the sent frame, the frame is resent, the copy is held, and the timer is
restarted.
 Since the protocol uses the stop-and-wait mechanism, there is only one

specific frame that needs an ACK.


 Error correction in Stop-and-Wait ARQ is done by keeping a copy of the sent

frame and retransmitting of the frame when the timer expires.


 In Stop-and-Wait ARQ, use sequence numbers to number the frames.

 The sequence numbers are based on modulo-2 arithmetic.

 In Stop-and-Wait ARQ, the acknowledgment number always


announces, in modulo-2 arithmetic, the sequence number of the next
frame expected.
49
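A compact sketch of the sender side of Stop-and-Wait ARQ with modulo-2 sequence numbers. This is a minimal illustration: send() and recv_ack() are assumed placeholder callables, not a real API, and timing details are simplified.

```python
class StopAndWaitSender:
    """Keeps one outstanding frame; the sequence number alternates 0/1."""

    def __init__(self, send, recv_ack, timeout=1.0):
        # send(frame) transmits a frame; recv_ack(timeout) blocks up to
        # `timeout` seconds and returns the received ackNo, or None on expiry.
        self.send, self.recv_ack, self.timeout = send, recv_ack, timeout
        self.seq = 0

    def transmit(self, payload):
        frame = (self.seq, payload)          # keep a copy of the sent frame
        while True:
            self.send(frame)                 # (re)send and restart the timer
            ack = self.recv_ack(self.timeout)
            # The ACK announces (modulo 2) the sequence number of the
            # NEXT frame expected by the receiver.
            if ack == (self.seq + 1) % 2:
                self.seq = (self.seq + 1) % 2
                return                       # delivered; ready for the next payload
            # timer expired or unexpected ackNo: resend the stored copy
```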
Bandwidth Delay Product :
 Assume that, in a Stop-and-Wait
ARQ system, the bandwidth of
the line is 1 Mbps, and 1 bit
takes 20 ms to make a round trip.
 What is the bandwidth-delay
product?
 If the system data frames are
1000 bits in length, what is the
utilization percentage of the
link?
 The bandwidth-delay product is
1 Mbps x 20 ms = 20,000 bits,
but only 1000 bits are in flight
at any time.
 The link utilization is only
1000/20,000, or 5 percent.
 For this reason, for a link with

a high bandwidth or long delay,


the use of Stop-and-Wait ARQ
wastes the capacity of the link.

50
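The numbers on this slide can be reproduced directly with a small worked calculation:

```python
bandwidth = 1_000_000          # 1 Mbps, in bits per second
round_trip = 20e-3             # 20 ms round-trip time
frame_len = 1000               # bits per data frame

bdp = bandwidth * round_trip   # bits the link could carry during one round trip
utilization = frame_len / bdp

print(bdp)                     # 20000.0 bits
print(f"{utilization:.0%}")    # 5%, as stated on the slide
```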
2. Go-Back-N Automatic Repeat Request
 To improve the efficiency of transmission (filling the pipe), multiple

frames must be in transition while waiting for acknowledgment.

 In other words, we need to let more than one frame be outstanding to

keep the channel busy while the sender is waiting for

acknowledgment.

 The first is called Go-Back-N Automatic Repeat.

 In this protocol we can send several frames before receiving

acknowledgments; we keep a copy of these frames until the

acknowledgments arrive.

 In the Go-Back-N Protocol, the sequence numbers are modulo

2^m, where m is the size of the sequence number field in bits.

 The sequence numbers range from 0 to 2^m - 1. For example, if
m = 4, the sequence numbers are 0 through 15.
51
Sender window
 The sender window at any time divides the possible sequence

numbers into four regions.


 The first region, from the far left to the left wall of the window,

defines the sequence numbers belonging to frames that are already
acknowledged. The sender does not worry about these frames and
keeps no copies of them.
52
 The second region, colored in Figure (a), defines the range of

sequence numbers belonging to the frames that are sent and


have an unknown status. The sender needs to wait to find out
if these frames have been received or were lost. We call these
outstanding frames.
 The third range, white in the figure, defines the range of

sequence numbers for frames that can be sent; however, the


corresponding data packets have not yet been received from
the network layer.
 Finally, the fourth region defines sequence numbers that
cannot be used until the window slides.
 The send window is an abstract concept defining an imaginary
box of size 2^m - 1 with three variables: Sf, Sn, and Ssize.
 The variable Sf defines the sequence number of the first
(oldest) outstanding frame, Sn holds the sequence number that
will be assigned to the next frame to be sent, and Ssize defines
the size of the window.
53
 Figure (b) shows how a send window can slide one or more

slots to the right when an acknowledgment arrives from the


other end.
 The acknowledgments in this protocol are cumulative,
meaning that more than one frame can be acknowledged by
an ACK frame.
 In the figure, frames 0, 1, and 2 are acknowledged, so the
window has slid to the right three slots.


 Note that the value of Sf is 3 because frame 3 is now the first

outstanding frame.
 The send window can slide one or more slots when a valid

acknowledgment arrives.

54
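The window variables and the effect of a cumulative ACK can be sketched as follows. This is a minimal illustration of the bookkeeping only; actual frame transmission, timers, and the receiver are omitted.

```python
class GoBackNSenderWindow:
    """Tracks Sf, Sn and window size 2**m - 1 with cumulative ACKs."""

    def __init__(self, m=3):
        self.m = m
        self.size = 2 ** m - 1        # Ssize
        self.Sf = 0                   # first (oldest) outstanding frame
        self.Sn = 0                   # next frame to be sent
        self.buffer = {}              # copies of outstanding frames

    def can_send(self):
        # the number of outstanding frames must stay below the window size
        return (self.Sn - self.Sf) % (2 ** self.m) < self.size

    def send(self, payload):
        assert self.can_send()
        self.buffer[self.Sn] = payload
        self.Sn = (self.Sn + 1) % (2 ** self.m)

    def on_ack(self, ack_no):
        """Cumulative ACK: ack_no is the next frame the receiver expects."""
        while self.Sf != ack_no and self.Sf in self.buffer:
            del self.buffer[self.Sf]                 # acknowledged; drop the copy
            self.Sf = (self.Sf + 1) % (2 ** self.m)  # slide the window

    def on_timeout(self):
        """Go-Back-N: resend every outstanding frame, from Sf up to Sn."""
        return [self.buffer[(self.Sf + i) % (2 ** self.m)]
                for i in range((self.Sn - self.Sf) % (2 ** self.m))]

w = GoBackNSenderWindow(m=3)
for p in ["f0", "f1", "f2", "f3"]:
    w.send(p)
w.on_ack(3)                           # frames 0, 1, 2 acknowledged; Sf slides to 3
assert w.Sf == 3 and w.on_timeout() == ["f3"]
```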
 Receiver window: variable Rn (receive window, next frame

expected) .
 The sequence numbers to the left of the window belong to the

frames already received and acknowledged; the sequence


numbers to the right of this window define the frames that
cannot be received.
 Any received frame with a sequence number in these two regions

is discarded.
 Only a frame with a sequence number matching the value of Rn

is accepted and acknowledged.


 The receive window also slides, but only one slot at a time. When
a correct frame is received (and a frame is received only one at a
time), the window slides one slot.
55
Timers
• Although there can be a timer for each frame that is sent, in our
protocol we use only one. The reason is that the timer for the first
outstanding frame always expires first; we send all outstanding
frames when this timer expires.

56
Acknowledgment

 The receiver sends a positive acknowledgment if a frame has arrived safe and

sound and in order.

 If a frame is damaged or is received out of order, the receiver is silent and will

discard all subsequent frames until it receives the one it is expecting.

 The silence of the receiver causes the timer of the unacknowledged frame at the

sender side to expire. This, in turn, causes the sender to go back and resend all

frames, beginning with the one with the expired timer.

 The receiver does not have to acknowledge each frame received. It can send one

cumulative acknowledgment for several frames.

Resending a Frame
 When the timer expires, the sender resends all outstanding frames. For example,

suppose the sender has already sent frame 6, but the timer for frame 3 expires.

 This means that frame 3 has not been acknowledged; the sender goes back and sends

frames 3,4,5, and 6 again. That is why the protocol is called Go-Back-N ARQ.
57
 Below figure is an example (ACK lost) of a case where the
forward channel is reliable, but the reverse is not.


 No data frames are lost, but some ACKs are delayed and one is

lost.
 The example also shows how cumulative acknowledgments can

help if acknowledgments are delayed or lost

58
Below figure is an example (frame lost).
 Stop-and-Wait ARQ is a special case of Go-Back-N ARQ in

which the size of the send window is 1.

59
3 Selective Repeat Automatic Repeat Request

 In Go-Back-N ARQ, The receiver keeps track of only one

variable, and there is no need to buffer out-of- order frames;

they are simply discarded.

 However, this protocol is very inefficient for a noisy link.

 In a noisy link a frame has a higher probability of damage,

which means the resending of multiple frames. This resending

uses up the bandwidth and slows down the transmission.

 For noisy links, there is another mechanism that does not

resend N frames when just one frame is damaged; only the

damaged frame is resent.

 This mechanism is called Selective Repeat ARQ.


60
Sender Window
 (The Go-Back-N sender window concept (before and after sliding)
applies here as well.
 The only difference in the sender window between Go-Back-N and
Selective Repeat is the window size.)

61
Receiver window

 The receiver window in Selective Repeat is totally different from the one

in Go-Back-N. First, the size of the receive window is the same as the

size of the send window (2^(m-1)).

 The Selective Repeat Protocol allows as many frames as the size of the

receiver window to arrive out of order and be kept until there is a set of

in order frames to be delivered to the network layer. Because the sizes

of the send window and receive window are the same, all the frames in

the send window can arrive out of order and be stored until they can be

delivered.

 However the receiver never delivers packets out of order to the network

layer.

 Above figure shows the receive window. Those slots inside the window
that are colored define frames that have arrived out of order and are
waiting for their earlier neighbors to arrive before delivery to the
network layer.
62
Delivery of Data in Selective Repeat ARQ

Flow Diagram

63
Differences between Go-Back N & Selective Repeat

Piggybacking
• A technique called piggybacking is used to improve the
efficiency of the bidirectional protocols. When a frame is
carrying data from A to B, it can also carry control information
about arrived (or lost) frames from B; when a frame is carrying
data from B to A, it can also carry control information about the
arrived (or lost) frames from A.
64
MAC sub layer
 The medium access sublayer, which is part of the data link layer, deals with how to

determine who may use the network next when the network consists of a single

shared channel, as in most networks. This layer is also known as the Medium Access

Control Sub-layer.

 Networks can be divided into two categories: those using point-to-point connections

and those using broadcast channels.

 In any broadcast network, the key issue is how to determine who gets to use the

channel when there is competition for it.

 To make this point clearer, consider a conference call in which six people, on six

different telephones, are all connected so that each one can hear and talk to all the

others. It is very likely that when one of them stops speaking, two or more will start

talking at once, leading to chaos. When only a single channel is available, determining

who should go next is much harder.

 The protocols used to determine who goes next on a multi-access channel belong to a

sub-layer of the data link layer called the MAC (Medium Access Control) sub-layer.
65
The Channel Allocation Problem

 The central theme of this chapter is how to allocate a single broadcast

channel among competing users. There are 2 types of Channel allocation.

1. Static Channel Allocation


 The traditional way of allocating a single channel, eg. a telephone line,

among multiple competing users is Frequency Division Multiplexing


(FDM).
 If there are N users, the bandwidth is divided into N equal-sized portions,

each user being assigned one portion. Since each user has a private
frequency band, there is no interference between users. When there are
only a small and constant number of users, each of which has a heavy load
of traffic, FDM is a simple and efficient allocation mechanism.
 When the number of senders is large and continuously varying or the

traffic is bursty, FDM presents some problems.

66
 If the spectrum is cut up into N regions and fewer than N users are

currently interested in communicating, a large piece of valuable


spectrum will be wasted. If more than N users want to
 communicate, some of them will be denied permission for lack of

bandwidth, even if some of the users who have been assigned a


frequency band hardly ever transmit or receive anything.
 Even assuming that the number of users could somehow be held

constant at N, dividing the single available channel into static sub-


channels is inherently inefficient.
 The basic problem is that when some users are quiescent, their

bandwidth is simply lost. They are not using it, and no one else is
allowed to use it either.
 The poor performance of static FDM can easily be seen from a

simple queuing theory calculation.


67
 Let us start with the mean time delay, T, for a channel of capacity

C bps, with an arrival rate of F frames/sec, each frame having a


length drawn from an exponential probability density function
with mean 1/μ bits/frame.
 With these parameters the arrival rate is F frames/sec and the

service rate is μC frames/sec. From queuing theory it can be


shown that for Poisson arrival and service times.
 T = 1/(μC - F)    (i)

 For example, if C is 100 Mbps, the mean frame length, 1/μ, is

10,000 bits, and the frame arrival rate, F, is 5000 frames/sec, then
T = 200 μsec.

 Note that if we ignored the queuing delay and just asked how long

it takes to send a 10,000 bit frame on a 100-Mbps network, we

would get the (incorrect) answer of 100 μsec.

 That result only holds when there is no contention for the channel.68
 Now let us divide the single channel into N independent sub-channels, each

with capacity C/N bps. The mean input rate on each of the sub-channels

will now be F/N. Re-computing T we get, Equation (ii)

T_FDM = 1/(μ(C/N) - (F/N)) = N/(μC - F) = N·T    (ii)

 The mean delay using FDM is N times worse than if all the frames were

somehow magically arranged orderly in a big central queue.

 Precisely the same arguments that apply to FDM also apply to time division

multiplexing (TDM). Each user is statically allocated every Nth time slot.

 If a user does not use the allocated slot, it just lies fallow. The same holds if

we split up the networks physically. Using our previous example again, if

we were to replace the 100-Mbps networks with 10 networks of 10 Mbps

each and statically allocate each user to one of them, the mean delay would

jump from 200 μsec to 2 msec


69
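The two formulas can be checked numerically with a small worked calculation matching the slide's example:

```python
C = 100e6            # channel capacity, bits per second
mean_len = 10_000    # mean frame length 1/mu, in bits
mu = 1 / mean_len    # frames per bit
F = 5000             # arrival rate, frames per second
N = 10               # number of static FDM sub-channels

T = 1 / (mu * C - F)                    # single shared channel
T_fdm = 1 / (mu * (C / N) - F / N)      # one of N static sub-channels

print(T * 1e6)       # 200 microseconds
print(T_fdm * 1e3)   # 2 milliseconds, i.e. N * T
```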
2. Dynamic Channel Allocation
 Some key assumptions about this are described below.

1. Independent Traffic.
 The model consists of N independent stations (e.g., computers,

telephones), each with a program or user that generates frames for


transmission.
 The expected number of frames generated in an interval of length Δt

is λΔt, where λ is a constant (the arrival rate of new frames).


 Once a frame has been generated, the station is blocked and does

nothing until the frame has been successfully transmitted.

2. Single Channel.
 A single channel is available for all communication.

 All stations can transmit on it and all can receive from it.

 The stations are assumed to be equally capable, though protocols

may assign them different roles (e.g., priorities). 70


3. Observable Collisions.
 If two frames are transmitted simultaneously, they overlap in time and the

resulting signal is garbled. This event is called a collision. All stations can
detect that a collision has occurred. A collided frame must be transmitted
again later. No errors other than those generated by collisions occur.
4. Continuous or Slotted Time.
 Time may be assumed continuous, in which case frame transmission can

begin at any instant. Alternatively, time may be slotted or divided into


discrete intervals (called slots). Frame transmissions must then begin at the
start of a slot. A slot may contain 0, 1, or more frames, corresponding to an
idle slot, a successful transmission, or a collision, respectively.
5. Carrier Sense or No Carrier Sense.
 With the carrier sense assumption, stations can tell if the channel is in use

before trying to use it. No station will attempt to use the channel while it is
sensed as busy. If there is no carrier sense, stations cannot sense the
channel before trying to use it. They just go ahead and transmit. Only
later can they determine whether the transmission was successful.
71
Multiple Access Protocol

 The data link layer has two sub layers.

 The upper sub layer is responsible for data link control, and the lower sub

layer is responsible for resolving access to the shared media.


 The upper sub layer that is responsible for flow and error control is

called the logical link control (LLC) layer; the lower sub layer that is
mostly responsible for multiple access resolution is called the media
access control (MAC) layer.
 When nodes or stations are connected and use a common link, called a

multipoint or broadcast link, we need a multiple-access protocol to


coordinate access to the link.

72
Taxonomy of multiple-access protocols

1. RANDOM ACCESS

 In random access or contention methods, no station is superior to

another station and none is assigned the control over another.

 Two features give this method its name. First, there is no scheduled

time for a station to transmit. Transmission is random among the

stations. That is why these methods are called random access.

 Second, no rules specify which station should send next. Stations

compete with one another to access the medium. That is why these
methods are also called contention methods.
73
ALOHA

1 Pure ALOHA
 The original ALOHA protocol is called pure ALOHA. This is a

simple, but elegant protocol. The idea is that each station sends a
frame whenever it has a frame to send. However, since there is
only one channel to share, there is the possibility of collision
between frames from different stations. Below Figure shows an
example of frame collisions in pure ALOHA

74
 In pure ALOHA, the stations transmit frames whenever they have data to send.

 When two or more stations transmit simultaneously, there is collision and the

frames are destroyed.

 In pure ALOHA, whenever any station transmits a frame, it expects the

acknowledgement from the receiver.

 If acknowledgement is not received within specified time, the station assumes

that the frame (or acknowledgement) has been destroyed.

 If the frame is destroyed because of collision, the station waits for a random

amount of time and sends it again. This waiting time must be random; otherwise the

same frames will collide again and again.

 Therefore pure ALOHA dictates that when time-out period passes, each station

must wait for a random amount of time before resending its frame. This

randomness will help avoid more collisions.

75
Vulnerable time

 Let us find the length of time, the vulnerable time, in

which there is a possibility of collision. We assume that the

stations send fixed-length frames, with each frame taking Tfr seconds

(the frame transmission time) to send.

 Below Figure shows the vulnerable time for station A.

76
 Station A sends a frame at time t.

 Now imagine station B has already sent a frame between t - Tfr

and t.

 This leads to a collision between the frames from station A and

station B. The end of B's frame collides with the beginning of A's

frame. On the other hand, suppose that station C sends a frame

between t and t + Tfr

 Here, there is a collision between frames from station A and station

C. The beginning of C's frame collides with the end of A's frame

 Looking at Figure, we see that the vulnerable time, during which a

collision may occur in pure ALOHA, is 2 times the frame

transmission time.

 Pure ALOHA vulnerable time = 2 x Tfr
77


2. Slotted ALOHA
 Pure ALOHA has a vulnerable time of 2 x Tfr .

 This is so because there is no rule that defines when the station can

send. A station may send soon after another station has started or
soon before another station has finished.
 Slotted ALOHA was invented to improve the efficiency of pure

ALOHA.
 In slotted ALOHA we divide the time into slots of Tfr s and force the

station to send only at the beginning of the time slot. Figure 3 shows
an example of frame collisions in slotted ALOHA

78
 Because a station is allowed to send only at the beginning of the synchronized

time slot, if a station misses this moment, it must wait until the beginning of
the next time slot.
 This means that the station which started at the beginning of this slot has

already finished sending its frame. Of course, there is still the possibility of
collision if two stations try to send at the beginning of the same time slot.
 However, the vulnerable time is now reduced to one-half, equal to Tfr

 Figure 4 shows the situation

 Below fig shows that the vulnerable time for slotted ALOHA is one-half that of

pure ALOHA.
 Slotted ALOHA vulnerable time = Tfr

The throughput for slotted ALOHA is S = G × e^(-G). The maximum

throughput S_max = 0.368 when G = 1.
79
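The throughput expression can be evaluated quickly to confirm the maximum (a small check using only the standard library):

```python
import math

def slotted_aloha_throughput(G):
    return G * math.exp(-G)        # S = G * e^(-G)

best_G = max((g / 100 for g in range(1, 300)), key=slotted_aloha_throughput)
print(best_G)                                  # ~1.0
print(round(slotted_aloha_throughput(1), 3))   # 0.368
```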
Comparison between Pure Aloha & Slotted Aloha

80
Carrier Sense Multiple Access (CSMA)
 To minimize the chance of collision and, therefore, increase the

performance, the CSMA method was developed. The chance of collision

can be reduced if a station senses the medium before trying to use it.

 Carrier sense multiple access (CSMA) requires that each station first

listen to the medium (or check the state of the medium) before sending.

In other words, CSMA is based on the principle "sense before transmit"

or "listen before talk."

 CSMA can reduce the possibility of collision, but it cannot eliminate it.

The reason for this is shown in below Figure.

81
 Stations are connected to a shared channel (usually a

dedicated medium).

 The possibility of collision still exists because of propagation

delay; station may sense the medium and find it idle, only

because the first bit sent by another station has not yet been

received.

 At time t1, station B senses the medium and finds it idle, so it

sends a frame.

 At time t2 (t2 > t1), station C senses the medium and finds it

idle because, at this time, the first bits from station B have

not reached station C. Station C also sends a frame.

 The two signals collide and both frames are destroyed.


82
Vulnerable Time

 The vulnerable time for CSMA is the propagation time Tp.

 This is the time needed for a signal to propagate from one end of the

medium to the other.

 When a station sends a frame, and any other station tries to send a

frame during this time, a collision will result. But if the first bit of the

frame reaches the end of the medium, every station will already have

heard the bit and will refrain from sending.

83
Persistence
Methods
 What should a station do

if the channel is busy?


 What should a station do

if the channel is idle?


 Three methods have
been devised to answer
these questions:

1. The 1-persistent
method

2. The non-persistent
Method

3. p-persistent method

84
1. 1-Persistent: In this method, after the station finds the line idle, it sends

its frame immediately (with probability 1). This method has the highest

chance of collision because two or more stations may find the line idle and

send their frames immediately.

2. Non-persistent: a station that has a frame to send senses the line. If the

line is idle, it sends immediately. If the line is not idle, it waits a random

amount of time and then senses the line again. This approach reduces the

chance of collision because it is unlikely that two or more stations will wait

the same amount of time and retry to send simultaneously. However, this

method reduces the efficiency of the network because the medium remains

idle when there may be stations with frames to send.

3. p-Persistent: This is used if the channel has time slots with a slot

duration equal to or greater than the maximum propagation time. The p-

persistent approach combines the advantages of the other two strategies. It

reduces the chance of collision and improves efficiency


85
 In this method, after the

station finds the line idle it

follows these steps:

 1. With probability p, the

station sends its frame.

 2.With probability q = 1 - p,

the station waits for the

beginning of the next time

slot and checks the line again.

 a. If the line is idle, it goes to

step 1.

 b. If the line is busy, it acts as

though a collision has

occurred and uses the

backoff procedure.
86
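The decision procedure on this slide can be written as a short sketch. It is a minimal illustration; channel_idle(), wait_for_next_slot(), transmit(), and backoff() are assumed placeholder functions, not a real API.

```python
import random

def p_persistent_send(frame, p, channel_idle, wait_for_next_slot,
                      transmit, backoff):
    """Apply the p-persistent rules once the line has been found idle."""
    while True:
        if random.random() < p:      # step 1: with probability p, send the frame
            transmit(frame)
            return
        wait_for_next_slot()         # step 2: with probability q = 1 - p, wait a slot
        if not channel_idle():       # 2b: line busy -> act as if a collision occurred
            backoff()
            return
        # 2a: line still idle -> go back to step 1
```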
Carrier Sense Multiple Access with Collision Detection (CSMA/CD)

 The CSMA method does not specify the procedure following a

collision.

 Carrier sense multiple access with collision detection (CSMA/CD)

augments the algorithm to handle the collision.

 In this method, a station monitors the medium after it sends a

frame to see if the transmission was successful. If so, the station is

finished. If, however, there is a collision, the frame is sent again.

 To better understand CSMA/CD, let us look at the first bits

transmitted by the two stations involved in the collision.

 Although each station continues to send bits in the frame until it

detects the collision, we show what happens as the first bits collide.
87
 In below Figure, stations A and C are involved in the

collision.

88
 Collision of the first bit in CSMA/CD

 At time t 1, station A has executed its persistence procedure and starts

sending the bits of its frame.

 At time t2, station C has not yet sensed the first bit sent by A. Station C

executes its persistence procedure and starts sending the bits in its

frame, which propagate both to the left and to the right.

 collision occurs sometime after time t2.Station C detects a collision at

time t3 when it receives the first bit of A's frame.

 Station C immediately (or after a short time, but we assume

immediately) aborts transmission.

 Station A detects collision at time t4 when it receives the first bit of C's

frame; it also immediately aborts transmission.

 Looking at the figure, we see that A transmits for the duration t4 - t1; C

transmits for the duration t3 - t2.
89


 Minimum Frame Size

 For CSMA/CD to work, we need a restriction on the frame size. Before

sending the last bit of the frame, the sending station must detect a

collision, if any, and abort the transmission.

 This is so because the station, once the entire frame is sent, does not keep

a copy of the frame and does not monitor the line for collision detection.

 Therefore, the frame transmission time Tfr must be at least two times the

maximum propagation time Tp.

 To understand the reason, let us think about the worst-case scenario.

 If the two stations involved in a collision are the maximum distance apart,

the signal from the first takes time Tp to reach the second, and the effect

of the collision takes another time Tp to reach the first.

 So the requirement is that the first station must still be transmitting after

2Tp
90
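As an illustrative calculation of the Tfr >= 2Tp requirement: the 10 Mbps bandwidth and 51.2 microsecond worst-case round trip used below are the classic Ethernet figures, included here only as an example, not as something stated on the slide.

```python
bandwidth = 10e6          # bits per second
two_Tp = 51.2e-6          # assumed worst-case round-trip propagation time, 2 * Tp

min_frame_bits = bandwidth * two_Tp   # Tfr >= 2*Tp  ->  frame >= bandwidth * 2*Tp
print(min_frame_bits)                 # 512.0 bits
print(min_frame_bits / 8)             # 64 bytes, the classic Ethernet minimum frame
```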
91
DIFFERENCES BETWEEN ALOHA & CSMA/CD
 The first difference is the addition of the persistence

process. We need to sense the channel before we start


sending the frame by using one of the persistence
processes
 The second difference is the frame transmission. In

ALOHA, we first transmit the entire frame and then wait


for an acknowledgment. In CSMA/CD, transmission and
collision detection is a continuous process. We do not send
the entire frame and then look for a collision. The station
transmits and receives continuously and simultaneously.
 The third difference is the sending of a short jamming signal that
enforces the collision in case other stations have not yet sensed it.
92
Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA)

 We need to avoid collisions on wireless networks because

they cannot be detected.


 Carrier sense multiple access with collision avoidance

(CSMA/CA) was invented for wireless networks.


 Collisions are avoided through the use of CSMA/CA's three

strategies: the inter frame space, the contention window,


and acknowledgments, as shown in Figure

93
Inter frame Space (IFS)

 First, collisions are avoided by deferring transmission even if the channel is

found idle.

 When an idle channel is found, the station does not send immediately.

 It waits for a period of time called the inter frame space or IFS.

 Even though the channel may appear idle when it is sensed, a distant station

may have already started transmitting. The distant station's signal has not yet

reached this station. The IFS time allows the front of the transmitted signal by

the distant station to reach this station.

 If after the IFS time the channel is still idle, the station can send, but it still

needs to wait a time equal to the contention time.

 The IFS variable can also be used to prioritize stations or frame types. For

example, a station that is assigned shorter IFS has a higher priority.

 In CSMA/CA, the IFS can also be used to define the priority of a station or a

frame.
94
Contention Window

 The contention window is an amount of time divided into slots. A station that is

ready to send chooses a random number of slots as its wait time.

 The number of slots in the window changes according to the binary exponential

back-off strategy.

 This means that it is set to one slot the first time and then doubles each time the

station cannot detect an idle channel after the IFS time.

 This is very similar to the p-persistent method except that a random outcome

defines the number of slots taken by the waiting station.

 One interesting point about the contention window is that the station needs to

sense the channel after each time slot.

 However, if the station finds the channel busy, it does not restart the process; it

just stops the timer and restarts it when the channel is sensed as idle.

 This gives priority to the station with the longest waiting time. In CSMA/CA, if the

station finds the channel busy, it does not restart the timer of the contention

window; it stops the timer and restarts it when the channel becomes idle.
95
Acknowledgment
 With all these
precautions, there still
may be a collision
resulting in destroyed
data. In addition, the
data may be corrupted
during the
transmission.
 The positive
acknowledgment and
the time-out timer can
help guarantee that
the receiver has
received the frame. 96
This is the CSMA protocol with collision avoidance.

 The station ready to transmit, senses the line by using one of the

persistent strategies.

 As soon as it finds the line to be idle, the station waits for an IFS

(Inter frame space) amount of time.

 It then waits for some random time and sends the frame.

 After sending the frame, it sets a timer and waits for the

acknowledgement from the receiver.

 If the acknowledgement is received before expiry of the timer, then

the transmission is successful.

 But if the transmitting station does not receive the expected

acknowledgement before the timer expiry, then it increments the back-off

parameter, waits for the back-off time, and re-senses the line.
97
Controlled Access Protocols
 In controlled access, the stations seek information from one another

to find which station has the right to send. It allows only one node to
send at a time, to avoid collision of messages on shared medium.
 The three controlled-access methods are:

 1 Reservation

 2 Polling

 3 Token Passing

Reservation
 In the reservation method, a station needs to make a reservation

before sending data.


 The time line has two kinds of periods:

 1. Reservation interval of fixed time length

 2. Data transmission period of variable frames.


98
 If there are M stations, the reservation interval is divided into M slots, and

each station has one slot.

 Suppose station 1 has a frame to send; it transmits 1 bit during slot 1.

 No other station is allowed to transmit during this slot.

 In general, the ith station may announce that it has a frame to send by

inserting a 1 bit into the ith slot.

 After all the slots have been checked, each station knows which stations

wish to transmit.

 The stations which have reserved their slots transfer their frames in that

order.

 After data transmission period, next reservation interval begins.

 Since everyone agrees on who goes next, there will never be any

collisions.
99
 The following figure shows a situation with five stations

and a five slot reservation frame.


 In the first interval, only stations 1, 3, and 4 have made

reservations.
 In the second interval, only station 1 has made a
reservation.

100
Polling

 Polling process is similar to the roll-call performed in class. Just like the

teacher, a controller sends a message to each node in turn.

 In this, one acts as a primary station(controller) and the others are

secondary stations. All data exchanges must be made through the

controller.

 The message sent by the controller contains the address of the node

being selected for granting access.

 Although all nodes receive the message but the addressed one responds

to it and sends data, if any.

 If there is no data, usually a “poll reject”(NAK) message is sent back.

 Problems include high overhead of the polling messages and high

dependence on the reliability of the controller.


101
102
Token Passing

 In token passing scheme, the stations are connected logically to each

other in form of ring and access of stations is governed by tokens.

 A token is a special bit pattern or a small message, which circulate

from one station to the next in the some predefined order.

 In Token ring, token is passed from one station to another adjacent

station in the ring whereas incase of Token bus, each station uses

the bus to send the token to the next station in some predefined

order.

 In both cases, token represents permission to send. If a station has a

frame queued for transmission when it receives the token, it can

send that frame before it passes the token to the next station. If it

has no queued frame, it passes the token simply. 103


After sending a frame, each station must wait for all N stations

(including itself) to send the token to their neighbors and the other N

– 1 stations to send a frame, if they have one.

There exist problems such as duplication of the token, loss of the token,

insertion of a new station, and removal of a station, which need to be tackled

for correct and reliable operation of this scheme.

104
