Error Control in DLL

The document discusses data link layer protocols and error detection and correction techniques. It describes framing, flow control, error control and different data link layer protocols. It also covers topics like cyclic redundancy check, block coding techniques and minimum Hamming distance.

Error detection and correction

Data link layer


Data link layer design issues

• Services provided to the network layer
• Framing
• Error control
• Flow control
• Synchronization
• Link configuration control
Services provided to the network layer

• Virtual communication
• Actual communication
• Types of services provided to the network layer
– Unacknowledged connectionless service
– Acknowledged connectionless service
– Acknowledged connection-oriented service
11-1 FRAMING

The data link layer needs to pack bits into frames, so


that each frame is distinguishable from another. Our
postal system practices a type of framing. The simple
act of inserting a letter into an envelope separates one
piece of information from another; the envelope serves
as the delimiter.
Topics discussed in this section:
Fixed-Size Framing
Variable-Size Framing

11.4
• Framing is the translation of the physical layer's raw bit stream into larger aggregates or discrete units called frames.
• The beginning and end of the data are marked so that the receiver can recognize each frame; this marking is also used for synchronization.
• Framing can be:
– Fixed-size framing (e.g., ATM WAN; Asynchronous Transfer Mode)
– Variable-size framing (LAN)
• Character-oriented protocols (e.g., ASCII-based)
• Bit-oriented protocols
Figure 11.1 A frame in a character-oriented protocol

11.6
Figure 11.2 Byte stuffing and unstuffing

11.7
Note

Byte stuffing is the process of adding 1 extra byte


whenever there is a flag or escape character in the text.

11.8
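As a rough illustration of the rule in the note above, the following Python sketch stuffs and unstuffs a payload; the FLAG and ESC byte values are arbitrary placeholders rather than the values of any particular protocol.

```python
# Byte stuffing: escape any FLAG or ESC byte that appears in the payload.
# FLAG and ESC values here are illustrative placeholders.
FLAG = 0x7E
ESC = 0x7D

def byte_stuff(payload: bytes) -> bytes:
    """Insert an ESC byte before every FLAG or ESC byte in the payload."""
    stuffed = bytearray()
    for b in payload:
        if b in (FLAG, ESC):
            stuffed.append(ESC)       # the 1 extra byte mentioned in the note
        stuffed.append(b)
    return bytes(stuffed)

def byte_unstuff(stuffed: bytes) -> bytes:
    """Remove the ESC bytes inserted by byte_stuff."""
    payload = bytearray()
    escaped = False                   # True when the previous byte was a stuffed ESC
    for b in stuffed:
        if not escaped and b == ESC:
            escaped = True            # drop the ESC, take the next byte literally
            continue
        payload.append(b)
        escaped = False
    return bytes(payload)

data = bytes([0x41, FLAG, 0x42, ESC, 0x43])
assert byte_unstuff(byte_stuff(data)) == data
```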
Figure 11.3 A frame in a bit-oriented protocol

11.9
Note

Bit stuffing is the process of adding one extra 0


whenever five consecutive 1s follow a 0 in the data, so
that the receiver does not mistake
the pattern 0111110 for a flag.

11.10
Figure 11.4 Bit stuffing and unstuffing

11.11
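A minimal Python sketch of the bit-stuffing rule above; bits are represented as a string of '0'/'1' characters purely for readability, so this is an illustration rather than a production framing routine.

```python
def bit_stuff(bits: str) -> str:
    """Insert a '0' after every run of five consecutive '1' bits."""
    out = []
    ones = 0
    for b in bits:
        out.append(b)
        ones = ones + 1 if b == '1' else 0
        if ones == 5:
            out.append('0')   # stuffed bit
            ones = 0
    return ''.join(out)

def bit_unstuff(bits: str) -> str:
    """Remove the '0' that follows every run of five consecutive '1' bits."""
    out = []
    ones = 0
    skip = False
    for b in bits:
        if skip:
            skip = False      # discard the stuffed '0'
            ones = 0
            continue
        out.append(b)
        ones = ones + 1 if b == '1' else 0
        if ones == 5:
            skip = True       # the next bit is a stuffed '0'
            ones = 0
    return ''.join(out)

data = "0111110111111"
assert bit_unstuff(bit_stuff(data)) == data
```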
11-2 FLOW AND ERROR CONTROL

The most important responsibilities of the data link


layer are flow control and error control. Collectively,
these functions are known as data link control.

Topics discussed in this section:


Flow Control
Error Control

11.12
Note

Flow control refers to a set of procedures used to


restrict the amount of data
that the sender can send before
waiting for acknowledgment.

11.13
Note

Error control in the data link layer is based on


automatic repeat request, which is the
retransmission of data.

11.14
Note

Data can be corrupted


during transmission.

Some applications require that


errors be detected and corrected.

10.15
Types of errors

• Whenever bits flow from one point to another, they are subject to unpredictable changes because of interference.
• This interference can change the shape of the signal.
• There are two types of errors:
– Single-bit error
– Burst error
Note

In a single-bit error, only 1 bit in the data unit


has changed.

10.17
Figure 10.1 Single-bit error

10.18
Note

A burst error means that 2 or more bits in the data unit have changed.
This does not necessarily mean that the errors occur in consecutive bits.

10.19
Figure 10.2 Burst error of length 8

10.20
Note

Redundancy

To detect or correct errors, we need to send


extra (redundant) bits with data.

10.21
Detection vs. Correction

• The correction of errors is more difficult than detection.
• In error detection, we only need to know whether any error has occurred.
• In error correction, we need to know the exact number of corrupted bits and their positions in the message.
• There are two methods of error correction:
– Forward error correction (the receiver guesses the original message by using the redundant bits)
– Retransmission
Coding

• Redundancy is achieved through various coding schemes.
• We can divide coding schemes into two categories:
– Block coding
– Convolution coding (more complex)
Figure 10.3 The structure of encoder and decoder

10.24
Note

In modulo-N arithmetic, we use only the integers in the


range 0 to N −1, inclusive.

10.25
Figure 10.4 XORing of two single bits or two words

10.26
Block coding

• In block coding, we divide our message into blocks,


each of k bits, called datawords. We add r redundant
bits to each block to make the length n = k + r. The
resulting n-bit blocks are called codewords.
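As a concrete, hedged illustration of the k/r/n structure, the sketch below builds a trivial block code that appends a single even-parity bit (r = 1) to each k-bit dataword; it only shows the dataword-to-codeword mapping, not any specific code from the tables referenced later.

```python
def encode_parity(dataword: str) -> str:
    """Append one even-parity redundant bit: n = k + 1."""
    parity = str(dataword.count('1') % 2)
    return dataword + parity

def check_parity(codeword: str) -> bool:
    """A codeword is valid if its total number of 1s is even."""
    return codeword.count('1') % 2 == 0

# k = 2 datawords -> n = 3 codewords
for dw in ("00", "01", "10", "11"):
    cw = encode_parity(dw)
    print(dw, "->", cw, "valid:", check_parity(cw))
```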
Note

An error-detecting code can detect


only the types of errors for which it is designed;
other types of errors may remain undetected.

10.28
Note

The Hamming distance between two words is


the number of differences between
corresponding bits.

10.29
Example 10.4

Let us find the Hamming distance between two pairs of words.

1. The Hamming distance d(000, 011) is 2 because 000 ⊕ 011 = 011, which contains two 1s.

2. The Hamming distance d(10101, 11110) is 3 because 10101 ⊕ 11110 = 01011, which contains three 1s.
10.30
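A minimal Python helper that reproduces the two distances in Example 10.4:

```python
def hamming_distance(a: str, b: str) -> int:
    """Number of bit positions in which the two words differ."""
    assert len(a) == len(b), "words must be the same length"
    return sum(x != y for x, y in zip(a, b))

print(hamming_distance("000", "011"))       # 2
print(hamming_distance("10101", "11110"))   # 3
```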
Note

The minimum Hamming distance is the


smallest Hamming distance between
all possible pairs in a set of words.

10.31
Example 10.5

Find the minimum Hamming distance of the coding scheme


in Table 10.1.
Solution
We first find all Hamming distances.

The dmin in this case is 2.

10.32
Example 10.6

Find the minimum Hamming distance of the coding scheme


in Table 10.2.

Solution
We first find all the Hamming distances.

The dmin in this case is 3.

10.33
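Since Tables 10.1 and 10.2 are not reproduced in this text-only version, the sketch below computes d_min for an assumed codeword set (a simple even-parity code, which gives d_min = 2 as in Example 10.5); the actual tables may differ.

```python
from itertools import combinations

def hamming_distance(a, b):
    return sum(x != y for x, y in zip(a, b))

def minimum_distance(codewords):
    """Smallest Hamming distance over all possible pairs of codewords."""
    return min(hamming_distance(a, b) for a, b in combinations(codewords, 2))

# Assumed codeword set (even-parity code); d_min here is 2.
code = ["000", "011", "101", "110"]
print(minimum_distance(code))   # 2
```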
Figure 10.11 Two-dimensional parity-check code

10.34
Cyclic redundancy check

• Cyclic codes are special linear block codes


with one extra property. In a cyclic code, if a
codeword is cyclically shifted (rotated), the
result is another codeword.
Figure 10.15 Division in CRC encoder

10.36
Figure 10.16 Division in the CRC decoder for two cases

10.37
Note

The divisor in a cyclic code is normally called the


generator polynomial
or simply the generator.

10.38
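The division shown in Figures 10.15 and 10.16 is modulo-2 (XOR) division by the generator. The following Python sketch implements both the encoder and the decoder check for a generator given as a bit string; the dataword 1001 and generator 1011 used in the demo are a common textbook example assumed here, not read from the figures.

```python
def crc_remainder(dataword: str, generator: str) -> str:
    """Modulo-2 division: return the CRC remainder (len(generator) - 1 bits)."""
    r = len(generator) - 1
    # Augment the dataword with r zero bits, then divide using XOR.
    dividend = list(dataword + "0" * r)
    for i in range(len(dataword)):
        if dividend[i] == "1":                      # divide only when the leading bit is 1
            for j in range(len(generator)):
                dividend[i + j] = str(int(dividend[i + j]) ^ int(generator[j]))
    return "".join(dividend[-r:])

def crc_check(codeword: str, generator: str) -> bool:
    """Receiver side: the remainder of the whole codeword must be all zeros."""
    r = len(generator) - 1
    dividend = list(codeword)
    for i in range(len(codeword) - r):
        if dividend[i] == "1":
            for j in range(len(generator)):
                dividend[i + j] = str(int(dividend[i + j]) ^ int(generator[j]))
    return set(dividend[-r:]) == {"0"}

# Assumed example: 4-bit dataword, generator 1011 (degree 3, so 3 CRC bits)
data, gen = "1001", "1011"
crc = crc_remainder(data, gen)          # "110"
codeword = data + crc                   # "1001110"
print(codeword, crc_check(codeword, gen))   # sent codeword, True at the receiver
```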
11-3 PROTOCOLS

Now let us see how the data link layer can combine
framing, flow control, and error control to achieve the
delivery of data from one node to another. The protocols
are normally implemented in software by using one of
the common programming languages. To make our
discussions language-free, we have written in
pseudocode a version of each protocol that concentrates
mostly on the procedure instead of delving into the
details of language rules.

11.39
Figure 11.5 Taxonomy of protocols discussed in this chapter

11.40
11-4 NOISELESS CHANNELS

Let us first assume we have an ideal channel in which


no frames are lost, duplicated, or corrupted. We
introduce two protocols for this type of channel.

Topics discussed in this section:


Simplest Protocol
Stop-and-Wait Protocol

11.41
Elementary data link layer protocols
An unrestricted simplest protocol
• In order to appreciate the step-by-step development of efficient and complex protocols such as SDLC, HDLC, etc., we will begin with a simple but unrealistic protocol.
In this protocol:
• Data are transmitted in one direction only
• The transmitting (Tx) and receiving (Rx) hosts are always ready
• Processing time can be ignored
• Infinite buffer space is available
• No errors occur; i.e. no damaged frames and no lost frames (perfect channel)
Difference between HDLC & SDLC
• HDLC: High-level Data Link Control protocol
• SDLC: Synchronous Data Link Control protocol
• Both are protocols that provide point-to-multipoint interconnection between computers.
• The main difference between HDLC and SDLC is actually their origin.
• SDLC was developed by IBM for use with their computers.
• IBM eventually submitted SDLC for standardization by governing bodies like ISO and ANSI. ISO adopted SDLC but renamed it HDLC and introduced a number of changes that make it distinct.
• Because of this, HDLC is a standard protocol used by many hardware makers, while SDLC is not a standard but is still used in some IBM hardware.

1. HDLC is adopted from SDLC
2. HDLC is a standard protocol while SDLC is not
3. HDLC has the Asynchronous Balanced Mode feature while SDLC does not
4. HDLC supports frames that are not a multiple of an octet, while SDLC does not (SDLC frame sizes must be multiples of 8 bits, e.g. 8, 16, 32)
5. HDLC removed some procedures that were present in SDLC


Figure 11.6 The design of the simplest protocol with no flow or error control

11.45
Algorithm 11.1 Sender-site algorithm for the simplest protocol

11.46
Algorithm 11.2 Receiver-site algorithm for the simplest protocol

11.47
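The bodies of Algorithms 11.1 and 11.2 are not reproduced in this text-only version, so the following runnable Python toy model only approximates their structure: the sender transmits whenever it has data and the receiver delivers whatever arrives, with no flow or error control. All names (channel, delivered, etc.) are illustrative assumptions.

```python
from collections import deque

channel = deque()            # the ideal (noiseless) link
delivered = []               # packets handed to the receiver's network layer

def sender_send(packet: str) -> None:
    frame = {"data": packet}            # framing: just wrap the packet
    channel.append(frame)               # transmit; no waiting, no ACKs

def receiver_poll() -> None:
    while channel:                      # event: frame arrival
        frame = channel.popleft()
        delivered.append(frame["data"]) # deliver data to the network layer

for pkt in ["frame 0", "frame 1", "frame 2"]:
    sender_send(pkt)
receiver_poll()
print(delivered)   # ['frame 0', 'frame 1', 'frame 2']
```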
Example 11.1

Figure 11.7 shows an example of communication using this


protocol. It is very simple. The sender sends a sequence of
frames without even thinking about the receiver. To send
three frames, three events occur at the sender site and three
events at the receiver site. Note that the data frames are
shown by tilted boxes; the height of the box defines the
transmission time difference between
the first bit and the last bit in the frame.

11.48
Figure 11.7 Flow diagram for Example 11.1

11.49
A simplex stop-and-wait protocol

In this protocol:
• We assume that data are transmitted in one direction only
• No errors occur (perfect channel)
• The receiver can only process the received information at a finite rate
• These assumptions imply that the transmitter cannot send frames at a rate faster than the receiver can process them. The problem here is how to prevent the sender from flooding the receiver.
• A general solution to this problem is to have the receiver provide some sort of feedback to the sender. The process could be as follows: the receiver sends an acknowledgement frame back to the sender telling the sender that the last received frame has been processed and passed to the host; permission to send the next frame is granted. The sender, after having sent a frame, must wait for the acknowledgement frame from the receiver before sending another frame. This protocol is known as stop-and-wait.
Figure 11.8 Design of Stop-and-Wait Protocol

11.51
Algorithm 11.3 Sender-site algorithm for Stop-and-Wait Protocol

11.52
Algorithm 11.4 Receiver-site algorithm for Stop-and-Wait Protocol

11.53
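Algorithms 11.3 and 11.4 are likewise not reproduced here; the sketch below is a simplified, runnable model of Stop-and-Wait on a perfect channel, in which the sender blocks after every frame until an ACK comes back. The queue-based "channel" is an assumption made purely for illustration.

```python
from collections import deque

forward = deque()    # data frames, sender -> receiver
reverse = deque()    # ACK frames, receiver -> sender

def receiver_step() -> None:
    frame = forward.popleft()           # event: frame arrival
    print("receiver delivered:", frame)
    reverse.append("ACK")               # feedback: ready for the next frame

def stop_and_wait_send(packets) -> None:
    for pkt in packets:
        forward.append(pkt)                 # send one frame ...
        receiver_step()                     # (channel + receiver run here)
        assert reverse.popleft() == "ACK"   # ... then wait for the ACK

stop_and_wait_send(["frame 0", "frame 1"])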
Example 11.2

Figure 11.9 shows an example of communication using this


protocol. It is still very simple. The sender sends one frame
and waits for feedback from the receiver. When the ACK
arrives, the sender sends the next frame. Note that sending
two frames in the protocol involves the sender in four
events and the receiver in two events.

11.54
Figure 11.9 Flow diagram for Example 11.2

11.55
Noisy channels

Stop-and-wait Automatic Repeat Request

• In this protocol the unrealistic "error free" assumption of the previous protocol is dropped. Frames may be either damaged or lost completely. We assume that transmission errors in the frame are detected by a hardware checksum.

• One suggestion is that the sender sends a frame and the receiver sends an ACK frame only if the frame is received correctly. If the frame is in error, the receiver simply ignores it; the transmitter times out and retransmits the frame.

• One fatal flaw in the above scheme is that if the ACK frame is lost or damaged, duplicate frames are accepted at the receiver without the receiver knowing it.
• Imagine a situation where the receiver has just sent an ACK
frame back to the sender saying that it correctly received and
already passed a frame to its host. However, the ACK frame
gets lost completely, the sender times out and retransmits the
frame. There is no way for the receiver to tell whether this
frame is a retransmitted frame or a new frame, so the receiver
accepts this duplicate happily and transfers it to the host. The
protocol thus fails in this aspect.

• To overcome this problem it is required that the receiver be


able to distinguish a frame that it is seeing for the first time
from a retransmission. One way to achieve this is to have the
sender put a sequence number in the header of each frame it
sends. The receiver then can check the sequence number of
each arriving frame to see if it is a new frame or a duplicate to
be discarded.
• The receiver needs to distinguish only two possibilities: a new frame or a duplicate, so a 1-bit sequence number is sufficient. At any instant the receiver expects a particular sequence number. Any frame arriving with the wrong sequence number is rejected as a duplicate. A correctly numbered frame arriving at the receiver is accepted, passed to the host, and the expected sequence number is incremented by 1 (modulo 2).

This protocol can handle lost frames by timing out. The timeout interval has to be long enough to prevent premature timeouts, which could cause a "deadlock" situation.
Note

Error correction in Stop-and-Wait ARQ is done by keeping a copy of the sent frame and retransmitting the frame when the timer expires.

11.59
Note

In Stop-and-Wait ARQ, we use sequence numbers to


number the frames.
The sequence numbers are based on modulo-2
arithmetic.

11.60
Note

In Stop-and-Wait ARQ, the acknowledgment number


always announces in modulo-2 arithmetic the sequence
number of the next frame expected.

11.61
Figure 11.10 Design of the Stop-and-Wait ARQ Protocol

11.62
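Putting the three notes above together, here is a simplified, runnable Python model of Stop-and-Wait ARQ: a 1-bit sequence number, a copy of the frame kept until it is acknowledged, and retransmission standing in for a real timeout. The scripted lossy channel is purely an illustrative assumption.

```python
# Toy Stop-and-Wait ARQ: 1-bit sequence numbers, retransmit on "timeout".
drop_script = iter([False, True, False, False])   # True = frame lost in transit
expected_seq = 0                                   # receiver state (R_n)
delivered = []

def channel_deliver(frame):
    """Return the frame, or None if the channel 'loses' it."""
    return None if next(drop_script, False) else frame

def receiver(frame):
    """Accept only the expected sequence number; always ACK the next frame wanted."""
    global expected_seq
    if frame["seq"] == expected_seq:
        delivered.append(frame["data"])
        expected_seq = (expected_seq + 1) % 2      # modulo-2 arithmetic
    return {"ack": expected_seq}                   # ACK names the next expected frame

def send_reliably(packets):
    seq = 0                                        # sender state (S_n)
    for pkt in packets:
        while True:
            frame = {"seq": seq, "data": pkt}      # copy kept until ACKed
            arrived = channel_deliver(frame)
            if arrived is None:
                continue                           # timer expired: retransmit
            ack = receiver(arrived)
            if ack["ack"] == (seq + 1) % 2:
                seq = (seq + 1) % 2                # ACK for this frame: advance
                break

send_reliably(["frame 0", "frame 1", "frame 2"])
print(delivered)   # ['frame 0', 'frame 1', 'frame 2']
```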
Example 11.3

Figure 11.11 shows an example of Stop-and-Wait ARQ.


Frame 0 is sent and acknowledged. Frame 1 is lost and
resent after the time-out. The resent frame 1 is
acknowledged and the timer stops. Frame 0 is sent and
acknowledged, but the acknowledgment is lost. The sender
has no idea if the frame or the acknowledgment is lost, so
after the time-out, it resends frame 0, which is
acknowledged.

11.63
Figure 11.11 Flow diagram for Example 11.3

11.64
Note

In the Go-Back-N Protocol, the sequence numbers are modulo 2^m, where m is the size of the sequence number field in bits.

11.65
Note

The send window is an abstract concept defining an imaginary box of size 2^m − 1 with three variables: Sf, Sn, and Ssize.

11.66
Figure 11.12 Send window for Go-Back-N ARQ

11.67
Note

The send window can slide one


or more slots when a valid acknowledgment arrives.

11.68
Figure 11.13 Receive window for Go-Back-N ARQ

11.69
Note

The receive window is an abstract concept defining an


imaginary box
of size 1 with one single variable Rn.
The window slides
when a correct frame has arrived; sliding occurs one
slot at a time.

11.70
Figure 11.14 Design of Go-Back-N ARQ

11.71
Figure 11.15 Window size for Go-Back-N ARQ

11.72
Note
In Go-Back-N ARQ, the size of the send window must be less than 2^m; the size of the receiver window is always 1.

11.73
Note

Stop-and-Wait ARQ is a special case of Go-Back-N


ARQ in which the size of the send window is 1.

11.74
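The send-window bookkeeping described in the notes above can be sketched as follows; Sf, Sn and the modulo-2^m arithmetic follow the text, while the helper names and the cumulative-ACK handling are illustrative assumptions.

```python
# Go-Back-N send-window bookkeeping (m-bit sequence numbers, window < 2**m).
m = 3
MODULUS = 2 ** m
S_size = MODULUS - 1      # maximum send-window size for Go-Back-N
S_f = 0                   # sequence number of the first outstanding frame
S_n = 0                   # sequence number of the next frame to send

def can_send() -> bool:
    """True while fewer than S_size frames are outstanding."""
    outstanding = (S_n - S_f) % MODULUS
    return outstanding < S_size

def send_frame() -> int:
    global S_n
    seq = S_n
    S_n = (S_n + 1) % MODULUS     # sequence numbers are modulo 2**m
    return seq

def ack_received(ack_no: int) -> None:
    """A cumulative ACK slides the window: ack_no is the next frame expected."""
    global S_f
    S_f = ack_no

sent = []
for _ in range(5):
    if can_send():
        sent.append(send_frame())
print(sent)          # [0, 1, 2, 3, 4]
ack_received(3)      # frames 0..2 acknowledged; the window slides
print(can_send())    # True: room for more outstanding frames
```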
Selective Repeat Automatic Repeat Request

• Go-Back-N ARQ simplifies the process at the receiver site.
• The receiver keeps track of only one variable, and there is no need to buffer out-of-order frames; they are simply discarded.
• Go-Back-N ARQ is inefficient for noisy channels.
• On a noisy link a frame has a higher probability of damage, which means the resending of multiple frames.
• This resending uses bandwidth and slows down transmission.
• In Selective Repeat ARQ, only the damaged frame is resent, instead of N frames.
• It is more efficient for noisy links, but the processing at the receiver is more complex.
• The Selective Repeat protocol also uses two windows, a send window and a receive window.
• They differ from Go-Back-N (see the window-size sketch after this list):
– The size of the send window is much smaller: it is 2^(m−1). For m = 4 the sequence numbers go from 0 to 15, but the window size is just 8, whereas it is 15 in Go-Back-N.
– The receive window is the same size as the send window.
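A small sketch of the window-size arithmetic contrasted in the list above, using m = 4 as in the example:

```python
# Window-size limits for m-bit sequence numbers (m = 4 as in the example above).
m = 4
sequence_numbers = 2 ** m                # 16 numbers: 0..15

go_back_n_send_window = 2 ** m - 1       # 15; the receive window is always 1
selective_repeat_window = 2 ** (m - 1)   # 8; send and receive windows are equal

print(sequence_numbers, go_back_n_send_window, selective_repeat_window)
# 16 15 8
```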
Sliding window protocol
•Piggybacking technique
•In most practical situations there is a need for transmitting data in both directions
(i.e. between 2 computers). A full duplex circuit is required for the operation.

•If protocol 2 or 3 is used in these situations the data frames and ACK (control)
frames in the reverse direction have to be interleaved. This method is acceptable but
not efficient. An efficient method is to absorb the ACK frame into the header of the
data frame going in the same direction. This technique is known as piggybacking.

•When a data frame arrives at an IMP (receiver or station), instead of immediately


sending a separate ACK frame, the IMP restrains itself and waits until the host
passes it the next message. The acknowledgement is then attached to the outgoing
data frame using the ACK field in the frame header. In effect, the acknowledgement
gets a free ride in the next outgoing data frame.
• This technique makes better use of the channel bandwidth. The
ACK field costs only a few bits, whereas a separate frame
would need a header, the acknowledgement, and a checksum.

An issue arising here is the time period that the IMP waits for a
message onto which to piggyback the ACK. Obviously the IMP
cannot wait forever and there is no way to tell exactly when the
next message is available. For these reasons the waiting period
is usually a fixed period. If a new host packet arrives quickly
the acknowledgement is piggybacked onto it; otherwise, the
IMP just sends a separate ACK frame.
• Sliding window
When one host sends traffic to another it is desirable that the traffic should arrive in the
same sequence as that in which it is dispatched. It is also desirable that a data link should deliver
frames in the order sent.

A flexible concept of sequencing is referred to as the sliding window concept and the next three
protocols are all sliding window protocols.

In all sliding window protocols, each outgoing frame contains a sequence number SN ranging from 0 to 2^n − 1 (where n is the number of bits reserved for the sequence number field).

• At any instant of time the sender maintains a list of consecutive sequence numbers corresponding to
frames it is permitted to send. These frames are said to fall within the sending window. Similarly, the
receiver maintains a receiving window corresponding to frames it is permitted to accept.
• At the receiving node, any frame falling outside the window is discarded. Frames falling
within the receiving window are accepted and arranged into sequence. Once sequenced,
the frames at the left of the window are delivered to the host and an acknowledgement of
the delivered frames is transmitted to their sender. The window is then rotated to the
position where the left edge corresponds to the next expected frame, RN.

Whenever a new frame arrives from the host, it is given the next highest
sequence number, and the upper edge of the sending window is advanced by
one. The sequence numbers within the sender's window represent frames sent
but as yet not acknowledged. When an acknowledgement comes in, it gives the
position of the receiving left window edge which indicates what frame the
receiver expects to receive next. The sender then rotates its window to this
position, thus making buffers available for continuous transmission.
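The receiving-window behaviour described above can be modelled with a short, runnable Python sketch; RN follows the naming in the text, while the window size and the width of the sequence-number field are illustrative assumptions.

```python
# Receiving-window sketch: accept frames inside the window, reorder them,
# deliver in-sequence frames from the left edge, then slide the window.
n = 3
MODULUS = 2 ** n
WINDOW = 4                     # receiving-window size (illustrative)

R_N = 0                        # left edge: next frame expected
buffered = {}                  # out-of-order frames accepted but not yet delivered
delivered = []

def in_window(seq: int) -> bool:
    return (seq - R_N) % MODULUS < WINDOW

def frame_arrives(seq: int, data: str) -> None:
    global R_N
    if not in_window(seq):
        return                         # outside the window: discard
    buffered[seq] = data
    while R_N in buffered:             # deliver from the left edge, in sequence
        delivered.append(buffered.pop(R_N))
        R_N = (R_N + 1) % MODULUS      # slide (rotate) the window

frame_arrives(1, "frame 1")            # out of order: buffered, not delivered
frame_arrives(0, "frame 0")            # fills the gap: both delivered
print(delivered, "next expected:", R_N)   # ['frame 0', 'frame 1'] next expected: 2
```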
