Data Link Layer
Data link layer design issues highlight the problems that network designers must address while designing
the data link layer of any networking model. But before understanding these design issues, let us first
understand the need for the data link layer.
The data link layer is a layer in the networking model that transforms the raw transmission facility (Hub,
Repeater, Cables, Modem, and Connectors) provided by the physical layer into a reliable transmission
link that is responsible for node-to-node or hop-to-hop communication.
The data link layer accepts the stream of bits from the network layer and divides them into manageable
data units that we refer to as frames. Now to each frame, the data link layer adds a header that specifies
the addresses of the sender and receiver of the frame.
The data link layer uses the services of the physical layer to send and receive bits over communication
channels. It has a number of functions, including:
1. Providing services to the network layer
2. Framing
3. Error Control
4. Flow Control
To accomplish these goals, the data link layer takes the packets it gets from the network layer and
encapsulates them into frames for transmission. Each frame contains a frame header, a payload field for
holding the packet, and a frame trailer.
The data link layer provides services to the network layer. One of the main services is to transfer data
from the source machine’s network layer to the destination machine’s network layer.
The network layer at the source machine transfers some data bits (packets) to the data link layer.
Now the data link layer transmits these bits to the data link layer at the destination machine. The
destination machine’s data link layer hands over the bits to the network layer at the destination
machine.
Here it seems as if two data link layers of the source and destination machine are communicating. But this
is not the case in reality. Instead, the data bits travel through the source machine’s physical layer to the
destination machine’s physical layer.
Then the physical layer of the destination machine passes these data bits to the data link layer there so
that they can be handed over to the network layer.
[Figure: (a) virtual data path, in which the peer data link layers appear to communicate directly; (b) actual data path, down through layers 4, 3, 2, 1 on the source machine and back up on the destination machine.]
The services offered by the data link layer vary from protocol to protocol. The principal ones are discussed below.
Unacknowledged connectionless service: the source machine delivers data frames to the destination machine without expecting any acknowledgement from it. The most common example of unacknowledged connectionless service is Ethernet.
Connection-oriented service: with this kind of service, the sender and receiver first establish a logical connection between them. Then each outgoing frame is sequentially numbered, guaranteeing that each frame is delivered exactly once and in the right order. Once all the frames have been delivered, the sender and receiver release the connection. The most popular example of this kind of service is a long-distance telephone circuit.
2. Framing
Framing is one of the design issues of the data link layer. We have discussed earlier that the data link
layer breaks the stream of bits from the network layer into discrete frames. But do you know why it does
so?
Need of Framing
We know that the data link layer uses the services of the physical layer. The physical layer simply accepts the raw bit stream and delivers it to the destination. The underlying wired or wireless links may be noisy, which increases the bit error rate.
To reduce the bit error rate, the physical layer adds redundancy to its signals. But that does not guarantee that the bit stream received at the destination's data link layer is error-free. So it becomes the responsibility of the data link layer to detect and correct the errors.
To detect and rectify errors in the bit stream, the data link layer uses the concept of framing. While framing, the data link layer breaks the bit stream into discrete frames, and for each frame, it calculates a checksum.
The data link layer includes this checksum into the frame while transmitting it to the destination. When
the frame arrives at the destination, the receiver recalculates this checksum, and if it comes out to be
different, it indicates errors in the arrived frame.
Now, these frames can be of two types: fixed-size and variable-size frames.
Types of frame
In fixed-size frames, we do not have to define any boundaries in the frame, as its size itself acts as a
delimiter.
With a variable-length frame, we have to define the frame boundaries, i.e. end of a frame and the
beginning of the next frame. For defining boundaries of variable length frame, we have two methods as
discussed below:
The character-oriented approach was popular when the data link layer was used mainly to exchange text. In this approach, the data exchanged consisted of 8-bit characters, and the information in the header and trailer was also a multiple of 8 bits.
To mark the beginning and end of a frame, the sender's data link layer adds an 8-bit flag to the start and end of the frame. This flag can be any character that is not part of the text being communicated.
Nowadays, it is not limited to the exchange of text. We are exchanging graphs, audio and video. The
character used for the flag could be a part of the information exchanged. So, we came up with a new
strategy, byte stuffing.
We also refer to byte stuffing as character stuffing. Here if a character in the data section has the same
pattern as that of the flag, then the data link layer stuffs that part of the data section with an extra byte.
We refer to this extra byte as an escape character. The escape character has a predefined bit pattern.
Now when the receiver encounters this escape character, it simply removes it and treats the next character as data, continuing until it encounters the final delimiter, i.e. the flag.
But this approach has a problem: what if the data section contains a character with the same pattern as the escape character? To eliminate this problem, such a character is itself preceded by another escape character.
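The byte-stuffing rules above can be sketched in Python. This is a minimal illustration, not a real protocol implementation: the flag and escape values 0x7E and 0x7D are borrowed from PPP, but real PPP additionally XORs the escaped byte with 0x20, a step we skip here for simplicity.

```python
FLAG = 0x7E  # frame delimiter (value borrowed from PPP; any reserved byte works)
ESC = 0x7D   # escape character

def byte_stuff(payload: bytes) -> bytes:
    """Frame a payload, escaping any data byte that looks like FLAG or ESC."""
    out = bytearray([FLAG])
    for b in payload:
        if b in (FLAG, ESC):
            out.append(ESC)          # stuff an escape byte before it
        out.append(b)
    out.append(FLAG)
    return bytes(out)

def byte_unstuff(frame: bytes) -> bytes:
    """Recover the payload from a stuffed frame."""
    body = frame[1:-1]               # drop the two flag bytes
    out = bytearray()
    escaped = False
    for b in body:
        if not escaped and b == ESC:
            escaped = True           # the next byte is literal data
            continue
        out.append(b)
        escaped = False
    return bytes(out)

payload = bytes([0x01, 0x7E, 0x7D, 0x02])   # contains both reserved bytes
assert byte_unstuff(byte_stuff(payload)) == payload
```

Whatever bytes appear in the payload, the receiver can always recover them, because a literal FLAG or ESC inside the data is always preceded by a stuffed ESC.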
However, the character-oriented approach is losing its grip over framing concepts, as nowadays, the
universal coding system has 16-bit or 32-bit characters. That’s why we have to move towards the bit-
oriented approach.
In the character-oriented approach, we stuff an entire byte (an 8-bit flag or escape character) to identify the start and end of the frame. In the bit-oriented approach, we stuff a single bit: to prevent a part of the data section from looking like a flag, the sender stuffs a 0 bit after every five consecutive 1s in the data.
When the receiver sees five consecutive 1s followed by a 0, it removes the stuffed 0 and restores the original bit stream.
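As a sketch, the bit-stuffing rule (stuff a 0 after every run of five 1s, as in HDLC) looks like this in Python, with the bit stream represented as a string of "0"/"1" characters for readability:

```python
def bit_stuff(bits: str) -> str:
    """Insert a 0 after every run of five consecutive 1s."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")   # the stuffed bit
            run = 0
    return "".join(out)

def bit_unstuff(bits: str) -> str:
    """Remove the 0 that follows every run of five consecutive 1s."""
    out, run, skip = [], 0, False
    for b in bits:
        if skip:              # this is the stuffed 0: drop it
            skip, run = False, 0
            continue
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            skip = True
    return "".join(out)

assert bit_stuff("111111") == "1111101"
assert bit_unstuff(bit_stuff("0111110111111")) == "0111110111111"
```

Because the data can never contain six 1s in a row after stuffing, the flag pattern 01111110 can only ever appear at frame boundaries.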
3. Error Control
Error control ensures that frames are delivered safely to the destination without duplication. In addition, positive and negative acknowledgements are sent about the incoming frames: a positive acknowledgement means the frame arrived safely, while a negative acknowledgement means something went wrong and the frame must be retransmitted.
A timer is also kept at the sender's end, so that a frame (or its acknowledgement) lost in transit does not stall the protocol forever. Besides this, a sequence number is assigned to each outgoing frame, so that the receiver can easily identify a retransmitted frame. Error control is one of the main responsibilities of the data link layer.
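The acknowledgement, timeout, and sequence-number machinery can be illustrated with a small simulation of stop-and-wait ARQ (the simplest such protocol, with a 1-bit alternating sequence number). This is a sketch under simplifying assumptions: the channel is modeled as a coin flip that drops frames and ACKs, and a lost frame or ACK simply shows up as a "timeout" that triggers retransmission.

```python
import random

def stop_and_wait(frames, loss_rate=0.3, seed=42):
    """Simulate stop-and-wait ARQ over a channel that drops frames and ACKs."""
    rng = random.Random(seed)
    delivered = []          # what the receiver hands to its network layer
    last_seq = None         # sequence number of the last accepted frame
    seq = 0
    for payload in frames:
        acked = False
        while not acked:                      # retransmit until an ACK arrives
            if rng.random() > loss_rate:      # the frame survived the channel
                if seq != last_seq:           # new frame, not a duplicate
                    delivered.append(payload)
                    last_seq = seq
                if rng.random() > loss_rate:  # the ACK survived the channel
                    acked = True
            # otherwise: the sender's timer expires and the loop retransmits
        seq ^= 1                              # alternate the sequence number
    return delivered

print(stop_and_wait(["f0", "f1", "f2"]))      # ['f0', 'f1', 'f2']
```

Note how the sequence number earns its keep: when a frame arrives but its ACK is lost, the sender retransmits, and the receiver uses the repeated sequence number to discard the duplicate rather than deliver it twice.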
4. Flow Control
It may happen that a sender sends the frame at a faster rate compared to the rate at which the receiver
can receive the frames. Well, this may be the case where the sender is running on a much faster machine
than the receiver. So how to resolve such a design issue?
In such a situation, even if the transmission is error-free, it may happen that the receiver is unable to
handle the frames at a faster rate and may lose some of them in the process.
The receiver grants the sender permission to send more data, or at least informs the sender of how the receiver is doing.
This feedback limits the rate at which the sender can transmit, so that it does not overwhelm the receiver with frames.
So, these are the data link layer design issues that network designers address while designing the data
link layer. The data link layer is responsible for providing services to the network layer, framing, error
control, and flow control.
ERROR DETECTION AND CORRECTION
Data-link layer uses error control techniques to ensure that frames, i.e. bit streams of data, are
transmitted from the source to the destination with a certain extent of accuracy.
Errors
Error is a condition when the receiver’s information does not match the sender’s. Digital signals suffer
from noise during transmission that can introduce errors in the binary bits traveling from sender to
receiver. That means a 0 bit may change to 1 or a 1 bit may change to 0.
Data (Implemented either at the Data link layer or Transport Layer of the OSI Model) may get scrambled
by noise or get corrupted whenever a message is transmitted. To prevent such errors, error-detection
codes are added as extra data to digital messages. This helps in detecting any errors that may have
occurred during message transmission.
Types of Errors
1. Single-Bit Error
A single-bit error refers to a type of data transmission error that occurs when one bit (i.e., a single binary
digit) of a transmitted data unit is altered during transmission, resulting in an incorrect or corrupted data
unit.
Single-Bit Error
2. Multiple-Bit Error
A multiple-bit error is an error type that arises when more than one bit in a data transmission is affected.
Although multiple-bit errors are relatively rare when compared to single-bit errors, they can still occur,
particularly in high-noise or high-interference digital environments.
Multiple-Bit Error
3. Burst Error
When several consecutive bits are flipped mistakenly in digital transmission, it creates a burst error. This
error causes a sequence of consecutive incorrect values.
Burst Error
Simple-bit parity is a simple error detection method that involves adding an extra bit to a data
transmission. It works as:
This scheme makes the total number of 1’s even, that is why it is called even parity checking.
Advantages of Simple Parity Check
For example, the Data to be transmitted is 101010. Codeword transmitted to the receiver is
1010101 (we have used even parity).
Two-dimensional Parity check bits are calculated for each row, which is equivalent to a simple parity
check bit. Parity check bits are also calculated for all columns, and then both are sent along with the
data. At the receiving end, these are compared with the parity bits calculated on the received data.
Two-Dimensional Parity Check can detect and correct all single bit error.
Two-Dimensional Parity Check can detect two or three bit error that occurs anywhere in the
matrix.
Two-Dimensional Parity Check cannot correct two or three bit error. It can only detect two or three
bit error.
If we have a error in the parity bit then this scheme will not work.
3. Checksum
Checksum error detection is a method used to identify errors in transmitted data. The process involves
dividing the data into equally sized segments and using a 1’s complement to calculate the sum of these
segments. The calculated sum is then sent along with the data to the receiver. At the receiver’s end, the
same process is repeated and if all zeroes are obtained in the sum, it means that the data is correct.
On the sender’s end, the segments are added using 1’s complement arithmetic to get the sum. The
sum is complemented to get the checksum.
At the receiver’s end, all received segments are added using 1’s complement arithmetic to get the
sum. The sum is complemented.
In CRC, a sequence of redundant bits, called cyclic redundancy check bits, are appended to the end of the
data unit so that the resulting data unit becomes exactly divisible by a second, predetermined binary
number.
At the destination, the incoming data unit is divided by the same number. If at this step there is no
remainder, the data unit is assumed to be correct and is therefore accepted.
A remainder indicates that the data unit has been damaged in transit and therefore must be rejected.
CRC Working
Note:
CRC must be k-1 bits
Length of Code word = n+k-1 bits
Example: Let’s data to be send is 1010000 and divisor in the form of polynomial is x3+1. CRC method discussed
below.
Increased Data Reliability: Error detection ensures that the data transmitted over the network is reliable,
accurate, and free from errors. This ensures that the recipient receives the same data that was transmitted
by the sender.
Improved Network Performance: Error detection mechanisms can help to identify and isolate network
issues that are causing errors. This can help to improve the overall performance of the network and reduce
downtime.
Enhanced Data Security: Error detection can also help to ensure that the data transmitted over the network
is secure and has not been tampered with.
Overhead: Error detection requires additional resources and processing power, which can lead to
increased overhead on the network. This can result in slower network performance and increased
latency.
False Positives: Error detection mechanisms can sometimes generate false positives, which can
result in unnecessary retransmission of data. This can further increase the overhead on the
network.
Limited Error Correction: Error detection can only identify errors but cannot correct them. This
means that the recipient must rely on the sender to retransmit the data, which can lead to further
delays and increased network overhead.
Forward Error Correction: In this Error Correction Scenario, the receiving end is responsible for
correcting the network error. There is no need for retransmission of the data from the sender’s
side.
In Backward Error Correction, the sender is responsible for retransmitting the data if errors are
detected by the receiver. The receiver signals the sender to resend the corrupted data or the entire
message to ensure accurate delivery.
1. Hamming codes.
2. Binary convolutional codes.
3. Reed-Solomon codes.
4. Low-Density Parity Check codes.
Total of redundant bits(r) = 4 (This is because the message has four 1’s in it)
Also, by convention, the redundant bits are always placed in the places which are powers of 2. Now,
this message will take the format as shown below:
Therefore, we have R1, R2, R3, and R4 as redundant bits which will be calculated according to the
following rules:
R1 includes all the positions whose binary representation has 1 in their least significant
bit. Thus, R1 covers positions 1, 3, 5, 7, 9, 11.
R2 includes all the positions whose binary representation has 1 in the second position
from the least significant bit. Thus, R2 covers positions 2, 3, 6, 7, 10, 11.
R3 includes all the positions whose binary representation has 1 in the third position from
the least significant bit. Hence, R3 covers positions 4, 5, 6, 7.
R4 includes all the positions whose binary representation has 1 in the fourth position from
the least significant bit due to which R4 covers positions 8, 9, 10, 11.
Since the total number of 1s in all the bit positions corresponding to R2 is an odd number, R2= 1.
Since the total number of 1s in all the bit positions corresponding to R3 is an odd number, R3= 1.
Since the total number of 1s in all the bit positions corresponding to R4 is even, R4 = 0.
For R1: bits 1, 3, 5, 7, 9, and 11 are checked. We can see that the number of 1’s in these bit
positions is 4(even) so R1 = 0.
For R2: bits 2, 3, 6, 7, 10, 11 are checked. You can observe that the number of 1’s in these bit
positions is 5(odd) so we get a R2 = 1.
For R3: bits 4, 5, 6, and 7 are checked. We see that the number of 1’s in these bit positions is
3(odd). Hence, R3 = 1.
For R8: bits 8, 9, 10, 11 are observed. Here, the number of 1’s in these bit positions is 2 and that’s
even so we get R4 = 0.
If we observe the redundant bits, they give the binary number 0110 whose decimal representation is 6.
Thus, bit 6 contains an error. To correct the error the 6th bit is changed from 1 to 0 to correct the error
S1 S2 S3 S4 S5 S6
Input bit
Outpu
t bit
2
Figure 3-7. The NASA binary convolutional code used in 802.11.
In Fig. 3-7, each input bit on the left-hand side produces two output bits on the right-hand side that
are XOR sums of the input and internal state. Since it deals with bits and performs linear operations,
this is a binary, linear convolutional code. Since 1 input bit produces 2 output bits, the code rate is 1/2.
It is not systematic since none of the output bits is simply the input bit.
The internal state is kept in six memory registers. Each time another bit is in- put the values in the
registers are shifted to the right. For example, if 111 is input and the initial state is all zeros, the
internal state, written left to right, will become 100000, 110000, and 111000 after the first, second,
and third bits have been input. The output bits will be 11, followed by 10, and then 01. It takes seven
shifts to flush an input completely so that it does not affect the output. The constraint length of this
code is thus k = 7.
A convolutional code is decoded by finding the sequence of input bits that is most likely to have
produced the observed sequence of output bits (which includes any errors). For small values of k, this
is done with a widely used algorithm de- veloped by Viterbi (Forney, 1973). The algorithm walks the
observed sequence, keeping for each step and for each possible internal state the input sequence that
would have produced the observed sequence with the fewest errors. The input sequence requiring the
fewest errors at the end is the most likely message.
Convolutional codes have been popular in practice because it is easy to factor the uncertainty of a bit
being a 0 or a 1 into the decoding. For example, suppose
-1V is the logical 0 level and +1V is the logical 1 level, we might receive 0.9V and - 0.1V for 2 bits.
Instead of mapping these signals to 1 and 0 right away, we would like to treat 0.9V as ‘‘very likely a 1’’
and -0.1V as ‘‘maybe a 0’’ and correct the sequence as a whole. Extensions of the Viterbi algorithm
can work with these uncertainties to provide stronger error correction. This approach of working with
the uncertainty of a bit is called soft-decision decoding. Conversely, deciding whether each bit is a 0 or
a 1 before subsequent error correction is called hard-decision decoding.
Reed - Solomon error correcting codes are one of the oldest codes that were introduced in 1960s by
Irving S. Reed and Gustave Solomon. It is a subclass of non - binary BCH codes. BCH codes (Bose-
Chaudhuri-Hocquenghem codes) are cyclic ECCs that is constructed using polynomials over data blocks.
A Reed - Solomon encoder accepts a block of data and adds redundant bits (parity bits) before
transmitting it over noisy channels. On receiving the data, a decoder corrects the error depending upon
the code characteristics.
Explore our latest online courses and learn new skills at your own pace. Enroll and become a certified
expert to boost your career.
Application Areas of Reed-Solomon Codes
In coding systems with block codes, valid code words consist of polynomials that are divisible by another
fixed polynomial of short length. This fixed polynomial is called generator polynomial.
In Reed Solomon code, generator polynomial with factors is constructed where each root is a consecutive
element in the Galois field. The polynomial is of the form −
Encoding
The method of encoding in Reed Solomon code has the following steps −
The message is represented as a polynomial p(x), and then multiplied with the generator
polynomial g(x).
The message vector [x1,x2,x3.....xk] is mapped to a polynomial of degree less than k such
that px(αi) = xi for all i = 1,...k
The polynomial is evaluated using interpolation methods like Lagrange Interpolation.
Using this polynomial, the other points αk + 1....αn, are evaluated.
The encoded message is calculated as s(x) = p(x) * g(x). The sender sends this encoded message
along with the generator polynomial g(x).
Decoding
The receiver receives the message r(x) and divides it by the generator polynomial g(x).
If r(x)/g(x)=0, then it implies no error.
If r(x)/g(x)≠0, then the error polynomial is evaluated using the expression: r(x) = p(x) * g(x) +
e(x)
The error polynomial gives the error positions.
A low - density parity check (LFPC) code is specified by a parity-check matrix containing mostly 0s and a
low density of 1s. The rows of the matrix represent the equations and the columns represent the bits in
the codeword, i.e. code symbols.
A LDPC code is represented by, where is the block length, is the number of 1s in each column and is the
number of 1s in each row, holding the following properties −
The following parity check matrix Hamming code having n = 7, with 4 information bits followed by 3
even parity bits. The check digits are diagonally 1. The parity equations are given alongside −
Example 2 − Low - Density Parity Check Matrix
This examples illustrates an (12, 3, 4) LDPC matrix, i.e. n = 12, j = 3 and k = 4. This implies that each
equation operates on 4 code symbols and each code symbol appears in 3 equations. Unlike parity check
matrix of the Hamming code, this code does not have any diagonal 1s in parity bits.
In the first technique, the decoder does all the parity checks as per the parity equations. If any
bit is contained in more than a fixed number of unsatisfied parity equations, the value of that
bit is reversed. Once the new values are obtained, parity equations are recomputed using the
new values. The procedure is repeated until all the parity equations are satisfied.
This decoding procedure is simple and but is applicable only when the parity-check sets are
small.
The second method performs probabilistic algorithms on LDPC graphs. The graph is a sparse
bipartite graph that contains two sets of nodes, one set representing the parity equations and
the other set representing the code symbols. A line connects node in first set to the second if
a code symbol is present in the equation. Decoding is done by passing messages along the
lines of the graph. Messages are passed from message nodes to check nodes, and from check
nodes back to message nodes and their parity values are calculated.
The two subclasses of these methods are belief propagation and maximum likelihood
decoding. Though these decoding algorithms are complex, they yield better results than the
former.