Unit 2

Computer Networking

Data Link Layer Design Issues


• The data link layer uses the services of the physical layer to send and receive bits over communication
channels.
• It has a number of functions, including:
1. Providing a well-defined service interface to the network layer.
2. Dealing with transmission errors.
3. Regulating the flow of data so that slow receivers are not swamped by fast senders.
• To accomplish these goals, the data link layer takes the packets it gets from the network layer and
encapsulates them into frames for transmission.
• Each frame contains a frame header, a payload field for holding the packet, and a frame trailer, as
illustrated in Fig.
• Frame management forms the heart of what the data link layer does.
Services Provided to the Network Layer
• The function of the data link layer is to provide services to the network layer.
• The principal service is transferring data from the network layer on the source machine to the network
layer on the destination machine.
• On the source machine is an entity, call it a process, in the network layer that hands some bits to the data
link layer for transmission to the destination.
• The job of the data link layer is to transmit the bits to the destination machine so they can be handed
over to the network layer there, as shown in Fig.(a).
• The actual transmission follows the path of Fig.(b), but it is easier to think in terms of two data link layer
processes communicating using a data link protocol.
• The data link layer can be designed to offer various services.
• The actual services that are offered vary from protocol to protocol.
• Three reasonable possibilities that we will consider in turn are:
1. Unacknowledged connectionless service.
2. Acknowledged connectionless service.
3. Acknowledged connection-oriented service.
Unacknowledged connectionless service
• Unacknowledged connectionless service consists of having the source machine send independent frames
to the destination machine without having the destination machine acknowledge them.
• Ethernet is a good example of a data link layer that provides this class of service.
• No logical connection is established beforehand or released afterward.
• If a frame is lost due to noise on the line, no attempt is made to detect the loss or recover from it in the
data link layer.
• This class of service is appropriate when the error rate is very low, so recovery is left to higher layers.
• It is also appropriate for real-time traffic, such as voice, in which late data are worse than bad data.
Acknowledged connectionless service
• The next step up in terms of reliability is acknowledged connectionless service.
• When this service is offered, there are still no logical connections used, but each frame sent is individually
acknowledged.
• In this way, the sender knows whether a frame has arrived correctly or been lost.
• If it has not arrived within a specified time interval, it can be sent again.
• This service is useful over unreliable channels, such as wireless systems. 802.11 (WiFi) is a good example
of this class of service.
Acknowledged connection-oriented service
• With this service, the source and destination machines establish a connection before any data are
transferred.
• Each frame sent over the connection is numbered, and the data link layer guarantees that each frame
sent is indeed received.
• Furthermore, it guarantees that each frame is received exactly once and that all frames are received in
the right order.
• Connection-oriented service thus provides the network layer processes with the equivalent of a reliable
bit stream.
• It is appropriate over long, unreliable links such as a satellite channel or a long-distance telephone circuit.
• If acknowledged connectionless service were used, it is conceivable that lost acknowledgements could
cause a frame to be sent and received several times, wasting bandwidth
• When connection-oriented service is used, transfers go through three distinct phases.
• In the first phase, the connection is established by having both sides initialize variables and
counters needed to keep track of which frames have been received and which ones have not.
• In the second phase, one or more frames are actually transmitted.
• In the third and final phase, the connection is released, freeing up the variables, buffers, and
other resources used to maintain the connection.
Framing
• To provide service to the network layer, the data link layer must use the service provided to it
by the physical layer.
• What the physical layer does is accept a raw bit stream and attempt to deliver it to the
destination.
• If the channel is noisy, as it is for most wireless and some wired links, the physical layer will
add some redundancy to its signals to reduce the bit error rate to a tolerable level.
• However, the bit stream received by the data link layer is not guaranteed to be error free.
• Some bits may have different values and the number of bits received may be less than, equal
to, or more than the number of bits transmitted.
• It is up to the data link layer to detect and, if necessary, correct errors.
• The usual approach is for the data link layer to break up the bit stream into discrete frames,
compute a short token called a checksum for each frame, and include the checksum in the
frame when it is transmitted.
• When a frame arrives at the destination, the checksum is recomputed.
• If the newly computed checksum is different from the one contained in the frame, the data
link layer knows that an error has occurred and takes steps to deal with it (e.g., discarding the
bad frame and possibly also sending back an error report).
• Breaking up the bit stream into frames is more difficult than it at first appears.
• A good design must make it easy for a receiver to find the start of new frames while using little
of the channel bandwidth.
• We will look at four methods:
1. Byte count.
2. Flag bytes with byte stuffing.
3. Flag bits with bit stuffing.
4. Physical layer coding violations.
Byte Count
• The first framing method uses a field in the header to specify the number of bytes in the
frame.
• When the data link layer at the destination sees the byte count, it knows how many bytes
follow and hence where the end of the frame is.
• This technique is shown in Fig. for four small example frames of sizes 5, 5, 8, and 8 bytes,
respectively.
• The trouble with this algorithm is that the count can be garbled by a transmission error. For
example, if the byte count of 5 in the second frame of Fig. becomes a 7 due to a single bit flip,
the destination will get out of synchronization. It will then be unable to locate the correct start
of the next frame.
• Even if the checksum is incorrect so the destination knows that the frame is bad, it still has no
way of telling where the next frame starts.
• Sending a frame back to the source asking for a retransmission does not help either, since the
destination does not know how many bytes to skip over to get to the start of the
retransmission. For this reason, the byte count method is rarely used by itself.
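To make the byte-count idea concrete, here is a minimal sketch (not taken from the text) of how a receiver splits a stream into frames using a one-byte count that includes the count byte itself; a single corrupted count misaligns every frame that follows.

def parse_byte_count_frames(stream: bytes):
    frames = []
    i = 0
    while i < len(stream):
        count = stream[i]                        # header byte = total frame length, count byte included
        frames.append(stream[i + 1:i + count])   # payload = the remaining count - 1 bytes
        i += count                               # jump to where the next frame should start
    return frames

data = bytes([5, 1, 2, 3, 4, 5, 6, 7, 8, 9])     # two 5-byte frames, as in the first two frames of the figure
print(parse_byte_count_frames(data))             # [b'\x01\x02\x03\x04', b'\x06\x07\x08\t']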
Flag Byte with Byte Stuffing
• The second framing method gets around the problem of resynchronization after an error by
having each frame start and end with special bytes.
• Often the same byte, called a flag byte, is used as both the starting and ending delimiter.
• This byte is shown in Fig. as FLAG.
• Two consecutive flag bytes indicate the end of one frame and the start of the next.
• Thus, if the receiver ever loses synchronization it can just search for two flag bytes to find the
end of the current frame and the start of the next frame.
• It may happen that the flag byte occurs in the data, especially when binary data such as
photographs or songs are being transmitted.
• This situation would interfere with the framing.
• One way to solve this problem is to have the sender’s data link layer insert a special escape
byte (ESC) just before each ‘‘accidental’’ flag byte in the data.
• Thus, a framing flag byte can be distinguished from one in the data by the absence or presence
of an escape byte before it.
• The data link layer on the receiving end removes the escape bytes before giving the data to
the network layer. This technique is called byte stuffing.
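A small sketch of byte stuffing, assuming illustrative FLAG and ESC values (0x7E and 0x7D; the text does not fix particular values): the sender escapes any accidental delimiter byte in the payload and the receiver strips the escapes again.

FLAG, ESC = 0x7E, 0x7D                      # assumed example values

def byte_stuff(payload: bytes) -> bytes:
    out = bytearray([FLAG])                 # opening flag
    for b in payload:
        if b in (FLAG, ESC):
            out.append(ESC)                 # escape accidental delimiter bytes
        out.append(b)
    out.append(FLAG)                        # closing flag
    return bytes(out)

def byte_unstuff(frame: bytes) -> bytes:
    body, out, i = frame[1:-1], bytearray(), 0
    while i < len(body):
        if body[i] == ESC:
            i += 1                          # drop the escape, keep the byte that follows
        out.append(body[i])
        i += 1
    return bytes(out)

msg = bytes([0x41, FLAG, 0x42])             # payload that happens to contain a flag byte
assert byte_unstuff(byte_stuff(msg)) == msg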
Flag bits with bit stuffing
• The third method of delimiting the bit stream gets around a disadvantage of byte stuffing, which is that it is tied to the use of 8-bit bytes. Framing can also be done at the bit level, so frames can contain an arbitrary number of bits made up of units of any size.
• It was developed for the once very popular HDLC (High-level Data Link Control) protocol. Each frame begins and ends with a special bit pattern, 01111110 or 0x7E in hexadecimal.
• This pattern is a flag byte.
• Whenever the sender’s data link layer encounters five consecutive 1s in the data, it
automatically stuffs a 0 bit into the outgoing bit stream.
• This bit stuffing is analogous to byte stuffing, in which an escape byte is stuffed into the
outgoing character stream before a flag byte in the data.
• It also ensures a minimum density of transitions that help the physical layer maintain
synchronization.
• USB (Universal Serial Bus) uses bit stuffing for this reason.
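The rule "stuff a 0 after five consecutive 1s" can be sketched as follows (payload bits only; generating and spotting the 01111110 flag is assumed to happen separately). Bits are modelled as a string of '0'/'1' characters for readability.

def bit_stuff(bits: str) -> str:
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == '1' else 0
        if run == 5:
            out.append('0')     # stuffed bit after five consecutive 1s
            run = 0
    return ''.join(out)

def bit_unstuff(bits: str) -> str:
    out, run, skip = [], 0, False
    for b in bits:
        if skip:                # this is the stuffed 0, drop it
            skip, run = False, 0
            continue
        out.append(b)
        run = run + 1 if b == '1' else 0
        if run == 5:
            skip = True
    return ''.join(out)

data = '011111101111110'
assert bit_unstuff(bit_stuff(data)) == data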
Error Control
• The question is how to make sure that all frames are eventually delivered to the network layer at the destination, and in the proper order.
• For unacknowledged connectionless service it might be fine if the sender just kept outputting
frames without regard to whether they were arriving properly.
• But for reliable, connection-oriented service it would not be fine at all.
• The usual way to ensure reliable delivery is to provide the sender with some feedback about
what is happening at the other end of the line.
• Typically, the protocol calls for the receiver to send back special control frames bearing
positive or negative acknowledgements about the incoming frames.
• If the sender receives a positive acknowledgement about a frame, it knows the frame has
arrived safely.
• On the other hand, a negative acknowledgement means that something has gone wrong and
the frame must be transmitted again.
• An additional complication comes from the possibility that hardware troubles may cause a
frame to vanish completely (e.g., in a noise burst).
• In this case, the receiver will not react at all, since it has no reason to react.
• Similarly, if the acknowledgement frame is lost, the sender will not know how to proceed.
• It should be clear that a protocol in which the sender transmits a frame and then waits for an
acknowledgement, positive or negative, will hang forever if a frame is ever lost due to, for
example, malfunctioning hardware or a faulty communication channel.
• This possibility is dealt with by introducing timers into the data link layer.
• When the sender transmits a frame, it generally also starts a timer.
• The timer is set to expire after an interval long enough for the frame to reach the destination,
be processed there, and have the acknowledgement propagate back to the sender.
• Normally, the frame will be correctly received and the acknowledgement will get back before
the timer runs out, in which case the timer will be canceled.
• However, if either the frame or the acknowledgement is lost, the timer will go off, alerting the
sender to a potential problem.
• The obvious solution is to just transmit the frame again.
• However, when frames may be transmitted multiple times there is a danger that the receiver
will accept the same frame two or more times and pass it to the network layer more than
once.
• To prevent this from happening, it is generally necessary to assign sequence numbers to
outgoing frames, so that the receiver can distinguish retransmissions from originals.
Flow Control
• Another important design issue that occurs in the data link layer (and higher layers as well) is
what to do with a sender that systematically wants to transmit frames faster than the receiver
can accept them.
• This situation can occur when the sender is running on a fast, powerful computer and the
receiver is running on a slow, low-end machine.
• A common situation is when a smart phone requests a Web page from a far more powerful
server, which then turns on the fire hose and blasts the data at the poor helpless phone until it
is completely swamped.
• Even if the transmission is error free, the receiver may be unable to handle the frames as fast
as they arrive and will lose some.
• Two approaches are commonly used. In the first one, feedback-based flow control, the
receiver sends back information to the sender giving it permission to send more data, or at
least telling the sender how the receiver is doing.
• In the second one, rate-based flow control, the protocol has a built-in mechanism that limits
the rate at which senders may transmit data, without using feedback from the receiver.
Error Detection and Correction
• Network designers have developed two basic strategies for dealing with errors.
• Both add redundant information to the data that is sent.
• One strategy is to include enough redundant information to enable the receiver to deduce
what the transmitted data must have been.
• The other is to include only enough redundancy to allow the receiver to deduce that an error
has occurred (but not which error) and have it request a retransmission.
• The former strategy uses error-correcting codes and the latter uses error-detecting codes.
• The use of error-correcting codes is often referred to as FEC (Forward Error Correction).
Error Correcting Codes
HAMMING CODES
• Hamming code is a family of error-correcting codes that can detect and correct errors that occur while data is moved or stored from the sender to the receiver.
• It is a technique developed by R.W. Hamming for error correction.
• Redundant bits – Redundant bits are extra binary bits that are generated and added to the information-carrying bits of the data so that errors introduced during the transfer can be detected and corrected.
• A frame consists of m data (i.e., message) bits and r redundant (i.e. check) bits.
• Let the total length of a block be n (i.e., n = m + r).
• We will describe this as an (n,m) code. An n-bit unit containing data and check bits is referred
to as an n-bit codeword.
• The code rate, or simply rate, is the fraction of the codeword that carries information that is
not redundant, or m/n.
• The rates used in practice vary widely.
• They might be 1/2 for a noisy channel, in which case half of the received information is
redundant, or close to 1 for a high-quality channel, with only a small number of check bits
added to a large message.
• The number of redundant bits can be calculated using the formula 2^r ≥ m + r + 1, where r is the number of redundant bits and m is the number of data bits.
• For example, if the number of data bits is 7, then 2^4 = 16 ≥ 7 + 4 + 1 = 12, so the number of redundant bits is 4.
Parity bits
• A parity bit is a bit appended to a block of binary data to make the total number of 1’s in the data either even or odd.
• Parity bits are used for error detection.
There are two types of parity bits:
1. Even parity bit: In the case of even parity, for a given set of bits, the number of 1’s are
counted. If that count is odd, the parity bit value is set to 1, making the total count of
occurrences of 1’s an even number. If the total number of 1’s in a given set of bits is already
even, the parity bit’s value is 0.
2. Odd Parity bit: In the case of odd parity, for a given set of bits, the number of 1’s are
counted. If that count is even, the parity bit value is set to 1, making the total count of
occurrences of 1’s an odd number. If the total number of 1’s in a given set of bits is already
odd, the parity bit’s value is 0.
To encode and check data with the Hamming code method, the following steps are used:
Step 1 - Determine the positions of the data bits and the number of redundant bits in the original data. The number of redundant bits is deduced from the expression 2^r ≥ m + r + 1.
Step 2 - Place the redundant bits at the positions that are powers of two (positions 2^p for p = 0, 1, 2, ...), fill in the data bits, and compute each parity bit value.
Step 3 - Insert the parity bits obtained into the original data and transmit the data to the receiver side.
Step 4 - Check the received data using the parity bits and detect any error in the data; if damage is present, use the parity values to locate and correct the error.
• The data word to be transmitted is 1011010; it is encoded using the Hamming code method.
• Determining the Number of Redundant Bits and Position in
the Data,
• The data bits = 7
• The redundant bit,
• 2^r >= d+r+1
• 2^4 >= 7+4+1
• 16 >= 12, [So, the value of r = 4.]
• Position of the redundant bit, applying the 2^p
expression:
• 2^0 - P1
• 2^1 - P2
• 2^2 - P4
• 2^3 - P8
• Finding the parity bits, using even parity:
1. P1 is deduced by checking all the bit positions whose binary index has a 1 in the least significant place.
P1: 1, 3, 5, 7, 9, 11
• P1 - P1, 0, 1, 1, 1, 1
• P1 = 0
2. P2 is deduced by checking all the bit positions whose binary index has a 1 in the second place.
P2: 2, 3, 6, 7, 10, 11
• P2 - P2, 0, 0, 1, 0, 1
• P2 = 0
3. P4 is deduced by checking all the bit positions whose binary index has a 1 in the third place.
P4: 4, 5, 6, 7
• P4 - P4, 1, 0, 1
• P4 = 0
4. P8 is deduced by checking all the bit positions whose binary index has a 1 in the fourth place.
P8: 8, 9, 10, 11
• P8 - P8, 1, 0, 1
• P8 = 0
So, the codeword transmitted to the receiver side (bit positions 11 down to 1) is 1 0 1 0 1 0 1 0 0 0 0.
• Error detection and correction of the received data:
• Assume that during transmission, the data bit at position 7 is changed from 1 to 0.
• Then by applying the parity bit technique, we can identify the error:
• The parity values recomputed at the receiver differ from the originally deduced parity values, proving that an error occurred during data transmission.
• To identify the position of the error bit, read the new parity values as a binary number: 0·2^3 + 1·2^2 + 1·2^1 + 1·2^0 = 7, i.e., the same as the assumed error position.
• To correct the error, simply reverse the error bit to its complement, i.e., for this case, change 0
to 1, to obtain the original data bit.
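The worked example above can be reproduced with a short sketch of the (11,7) even-parity Hamming scheme; the bit placement (data in positions 3, 5, 6, 7, 9, 10 and 11, parity bits at 1, 2, 4 and 8) follows the slides, while the function names themselves are illustrative.

def hamming11_encode(data_bits):
    """data_bits: list of 7 bits, MSB first, e.g. [1, 0, 1, 1, 0, 1, 0]."""
    code = [0] * 12                                # positions 1..11 used; index 0 unused
    for pos, bit in zip([11, 10, 9, 7, 6, 5, 3], data_bits):
        code[pos] = bit                            # data fills the non-power-of-two positions
    for p in (1, 2, 4, 8):                         # even parity over the covered positions
        code[p] = sum(code[i] for i in range(1, 12) if i & p) % 2
    return code[1:]                                # codeword, position 1 first

def hamming11_syndrome(codeword):
    code = [0] + list(codeword)
    return sum(p for p in (1, 2, 4, 8)
               if sum(code[i] for i in range(1, 12) if i & p) % 2)

cw = hamming11_encode([1, 0, 1, 1, 0, 1, 0])       # all four parity bits come out 0, as above
cw[7 - 1] ^= 1                                     # corrupt the bit at position 7
pos = hamming11_syndrome(cw)                       # -> 7, the error position
cw[pos - 1] ^= 1                                   # flip it back to correct the error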
Error Detecting Codes


• Error-correcting codes are widely used on wireless links, which are notoriously noisy and error
prone when compared to optical fibers.
• Without error-correcting codes, it would be hard to get anything through.
• However, over fiber or high-quality copper, the error rate is much lower, so error detection
and retransmission is usually more efficient there for dealing with the occasional error.
• We will examine three different error-detecting codes. They are all linear, systematic block
codes:
1. Parity.
2. Checksums.
3. Cyclic Redundancy Checks (CRCs).
1. Parity.
• A parity bit is an extra bit included in a binary message to make the total number of 1’s either odd or even.
• The parity of a binary string refers to whether the number of 1’s in it is even or odd.
• There are two parity systems – even and odd parity checks.
• 1. Even Parity Check: the total number of 1’s in the transmitted data should be even.
• So if the total number of 1’s in the data bits is odd, a single 1 is appended to make the total number of 1’s even; otherwise a 0 is appended (if the total number of 1’s is already even).
• Hence, if a single-bit error occurs, the parity check circuit will detect it at the receiver’s end.
• Odd Parity Check: in the odd parity system, if the total number of 1’s in the given binary string (or data bits) is even, then a 1 is appended to make the total count of 1’s odd; otherwise a 0 is appended.
• The receiver knows whether the sender is an odd parity generator or an even parity generator.
• If the sender is an odd parity generator, then there must be an odd number of 1’s in the received binary string.
• If a single-bit error occurs, i.e. a bit is changed from 1 to 0 or from 0 to 1, the received binary string will have an even number of 1’s, which will indicate an error.
• Limitations:
1. Only an error in a single bit can be identified, and the exact location of the error in the data cannot be determined.
2. If the number of 1’s changes by an even amount (the data changed but the count of 1’s remains even), the even parity check cannot detect the error; the same goes for the odd parity check.
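A minimal sketch of single-bit even parity as described above: append one bit so that the count of 1's is even, and re-count at the receiver.

def add_even_parity(bits: str) -> str:
    return bits + ('1' if bits.count('1') % 2 else '0')

def check_even_parity(word: str) -> bool:
    return word.count('1') % 2 == 0                # True means no error detected

word = add_even_parity('1011010')                  # four 1's already -> parity bit 0
assert check_even_parity(word)
corrupted = '0' + word[1:]                         # flip the first bit
assert not check_even_parity(corrupted)            # a single-bit error is caught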
2. Checksum
• The second kind of error-detecting code, the checksum, is closely related to groups of parity
bits.
• The word ‘‘checksum’’ is often used to mean a group of check bits associated with a message, regardless of how they are calculated.
• A group of parity bits is one example of a checksum.
• However, there are other, stronger checksums based on a running sum of the data bits of
the message.
• The checksum is usually placed at the end of the message, as the complement of the sum
function.
• This way, errors may be detected by summing the entire received codeword, both data bits
and checksum.
• If the result comes out to be zero, no error has been detected.
• At the Sender side, the data is divided into equal subunits of n-bit length by the checksum generator.
• These subunits are generally 16 bits long.
• These subunits are then added together using one’s complement method.
• This sum is of n bits.
• The resultant bit is then complemented.
• This complemented sum which is called checksum is appended to the end of original data
unit and is then transmitted to Receiver.
• The Receiver after receiving data + checksum passes it to checksum checker.
• Checksum checker divides this data unit into various subunits of equal length and adds all
these subunits.
• These subunits also contain checksum as one of the subunits.
• The resultant sum is then complemented.
• If the complemented result is zero, it means the data is error-free.
• If the result is non-zero it means the data contains an error and Receiver rejects it.
• Example –
If the data unit to be transmitted is 10101001 00111001, the following procedure is used at
Sender site and Receiver site.
• Sender Site:
10101001 + 00111001 = 11100010 (one’s complement sum)
Checksum = complement of 11100010 = 00011101
The transmitted data unit is 10101001 00111001 00011101.

• Receiver Site:
10101001 + 00111001 + 00011101 = 11111111 (one’s complement sum)
Complement of 11111111 = 00000000
• The sum is zero, which means no error.
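The example can be reproduced with a small one's-complement checksum sketch; 8-bit subunits are used here to match the example above, although 16-bit subunits are the more common choice mentioned earlier.

def ones_complement_sum(words, bits=8):
    total, mask = 0, (1 << bits) - 1
    for w in words:
        total += w
        total = (total & mask) + (total >> bits)   # wrap the end-around carry back in
    return total

def make_checksum(words, bits=8):
    return ~ones_complement_sum(words, bits) & ((1 << bits) - 1)

data = [0b10101001, 0b00111001]
chk = make_checksum(data)                          # 0b00011101
received = data + [chk]
assert ones_complement_sum(received) == 0xFF       # all-ones sum, so its complement is zero: no error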



• Advantage:
• The checksum detects all errors involving an odd number of bits, as well as most errors involving an even number of bits.
• Disadvantage:
• The main problem is that an error goes undetected if one or more bits of a subunit are damaged and the corresponding bit or bits of opposite value in another subunit are also damaged.
• This is because the sum of those columns remains unchanged.
• CRC
• CRC or Cyclic Redundancy Check is a method of detecting accidental changes/errors in the
communication channel.
• CRC uses Generator Polynomial which is available on both sender and receiver side.
• An example generator polynomial is x^3 + x + 1.
• This generator polynomial represents the key 1011.
• Another example is x^2 + 1, which represents the key 101.
Sender Side (generation of the encoded data from the data and the generator polynomial, or key):

1. The binary data is first augmented by appending k-1 zeros to the end of the data, where k is the number of bits in the key.

2. Use modulo-2 binary division to divide the augmented data by the key and store the remainder of the division.

3. Append the remainder to the end of the original data to form the encoded data, and send it.
• Receiver Side (Check if there are errors introduced in transmission)
Perform modulo-2 division again and if the remainder is 0, then there are no errors.
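A sketch of the sender- and receiver-side steps using modulo-2 (XOR) long division with the key 1011 (x^3 + x + 1) from above; the data word 100100 is an arbitrary illustration, not taken from the slides.

def crc_remainder(data: str, key: str) -> str:
    k = len(key)
    buf = list(data + '0' * (k - 1))          # step 1: append k-1 zeros
    for i in range(len(data)):                # step 2: modulo-2 long division
        if buf[i] == '1':                     # XOR the key in whenever the leading bit is 1
            for j in range(k):
                buf[i + j] = str(int(buf[i + j]) ^ int(key[j]))
    return ''.join(buf[-(k - 1):])            # remainder = last k-1 bits

data, key = '100100', '1011'
rem = crc_remainder(data, key)                # '101'
codeword = data + rem                         # step 3: append the remainder and send
assert crc_remainder(codeword, key) == '000'  # receiver: remainder 0 means no error detected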

Flow Control- Stop and Wait Protocol


• Flow control addresses the problem of preventing the sender from flooding the receiver with frames faster than the latter is able to process them.
• This situation can easily happen in practice so being able to prevent it is of great importance.
• One solution is to build the receiver to be powerful enough to process a continuous stream of
back-to-back frames (or, equivalently, define the link layer to be slow enough that the
receiver can keep up).
• It must have sufficient buffering and processing abilities to run at the line rate and must be
able to pass the frames that are received to the network layer quickly enough.
• However, this is a worst-case solution. It requires dedicated hardware and can be wasteful of
resources if the utilization of the link is mostly low.

• A more general solution to this problem is to have the receiver provide feedback to the sender.
• After having passed a packet to its network layer, the receiver sends a little dummy frame
back to the sender which, in effect, gives the sender permission to transmit the next frame.
• After having sent a frame, the sender is required by the protocol to bide its time until the
little dummy (i.e., acknowledgement) frame arrives.
• This delay is a simple example of a flow control protocol.
Flow Control- Simplex Stop and Wait Protocol


• Here, stop and wait means that the sender transmits one frame of data to the receiver.
• After sending the data, it stops and waits until it receives the acknowledgment from the receiver.
• The stop and wait protocol is a flow control protocol where flow control is one of the services of the data
link layer.
• It is a data-link layer protocol which is used for transmitting the data over the noiseless channels.
• It provides unidirectional data transmission which means that either sending or receiving of data will take
place at a time.
• It provides flow-control mechanism but does not provide any error control mechanism.
• The idea is that after sending a frame, the sender waits for its acknowledgment before sending the next frame.

Primitives of Stop and Wait Protocol
• The primitives of stop and wait protocol are:
• Sender side
• Rule 1: Sender sends one data packet at a time.
• Rule 2: Sender sends the next packet only when it receives the acknowledgment of
the previous packet.
• Therefore, the idea of stop and wait protocol in the sender's side is very simple, i.e., send
one packet at a time, and do not send another packet before receiving the
acknowledgment.

• Receiver side
• Rule 1: Receive and then consume the data packet.
• Rule 2: When the data packet is consumed, receiver sends the acknowledgment to the
sender.
• Therefore, the idea of stop and wait protocol in the receiver's side is also very simple, i.e.,
consume the packet, and once the packet is consumed, the acknowledgment is sent.
• This is known as a flow control mechanism.

Working of Stop and Wait protocol
• The main advantage of this protocol is its simplicity
but it has some disadvantages also.
• For example, if there are 1000 data packets to be sent,
then all the 1000 packets cannot be sent at a time as
in Stop and Wait protocol, one packet is sent at a time.
Disadvantages of Stop and Wait protocol
1. Problems occur due to lost data
• Suppose the sender sends the data and the data is lost.
• The receiver is waiting for the data for a long time.
• Since the data is not received by the receiver, so it does
not send any acknowledgment.
• Since the sender does not receive any acknowledgment
so it will not send the next packet.
• This problem occurs due to the lost data.
• In this case, two problems occur:
• Sender waits for an infinite amount of time for an acknowledgment.
• Receiver waits for an infinite amount of time for the data.
2. Problems occur due to lost acknowledgment
• Suppose the sender sends the data and it has also been received
by the receiver.
• On receiving the packet, the receiver sends the acknowledgment.
• In this case, the acknowledgment is lost in a network, so there is
no chance for the sender to receive the acknowledgment.
• There is also no chance for the sender to send the next packet as
in stop and wait protocol, the next packet cannot be sent until
the acknowledgment of the previous packet is received.
• In this case, one problem occurs:
• Sender waits for an infinite amount of time for an
acknowledgment.
3. Problem due to the delayed data or acknowledgment
• Suppose the sender sends the data and it has also been received
by the receiver.
• The receiver then sends the acknowledgment but the
acknowledgment is received after the timeout period on the
sender's side.
• As the acknowledgment is received late, it can be wrongly considered as the acknowledgment of some other data packet.

Stop and Wait ARQ (Automatic Repeat Request)
• The above three problems are resolved by Stop and Wait ARQ (Automatic Repeat Request), which does both error control and flow control.
• Time Out: if the acknowledgment does not arrive before the sender's timer expires, the frame is retransmitted.
• Sequence Number (Data): sequence numbers on data frames let the receiver recognize and discard duplicates caused by retransmission.
• Delayed Acknowledgement: this is resolved by introducing sequence numbers for the acknowledgements also.


• Working of Stop and Wait ARQ:
1. Sender A sends a data frame or packet with sequence number 0.
2. Receiver B, after receiving the data frame, sends an acknowledgement with sequence number 1 (the sequence number of the next expected data frame or packet).
3. There is only a one-bit sequence number, which implies that both sender and receiver have a buffer for one frame or packet only.
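A compact sketch (assumed structure, not from the slides) of the sender loop: alternate a one-bit sequence number, wait for the ACK carrying the next expected number, and retransmit on a timeout or an unexpected ACK. channel_send and wait_for_ack stand in for a real link.

TIMEOUT = 2.0                                          # long enough for frame + processing + ACK

def stop_and_wait_send(packets, channel_send, wait_for_ack):
    seq = 0
    for data in packets:
        while True:
            channel_send({'seq': seq, 'data': data})
            ack = wait_for_ack(timeout=TIMEOUT)        # returns None on timeout / loss
            if ack is not None and ack == 1 - seq:     # ACK carries the next expected sequence number
                break                                  # delivered; move to the next packet
            # otherwise: lost frame, lost ACK or wrong ACK -> retransmit the same frame
        seq = 1 - seq                                  # alternate 0 / 1

sent = []                                              # trivial demo over a perfect in-memory "channel"
stop_and_wait_send(['a', 'b'], channel_send=sent.append,
                   wait_for_ack=lambda timeout: 1 - sent[-1]['seq'])
print(sent)   # [{'seq': 0, 'data': 'a'}, {'seq': 1, 'data': 'b'}]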

• Advantages of Stop and Wait ARQ :
• Simple Implementation: Stop and Wait ARQ is a simple protocol that is easy to implement in both
hardware and software. It does not require complex algorithms or hardware components, making it an
inexpensive and efficient option.
• Error Detection: Stop and Wait ARQ detects errors in the transmitted data by using checksums or cyclic
redundancy checks (CRC). If an error is detected, the receiver sends a negative acknowledgment (NAK) to
the sender, indicating that the data needs to be retransmitted.
• Reliable: Stop and Wait ARQ ensures that the data is transmitted reliably and in order. The receiver cannot
move on to the next data packet until it receives the current one. This ensures that the data is received in
the correct order and eliminates the possibility of data corruption.
• Flow Control: Stop and Wait ARQ can be used for flow control, where the receiver can control the rate at
which the sender transmits data. This is useful in situations where the receiver has limited buffer space or
processing power.
• Backward Compatibility: Stop and Wait ARQ is compatible with many existing systems and protocols,
making it a popular choice for communication over unreliable channels.
• Disadvantages of Stop and Wait ARQ :
• Low Efficiency: Stop and Wait ARQ has low efficiency as it requires the sender to wait for an
acknowledgment from the receiver before sending the next data packet. This results in a low data
transmission rate, especially for large data sets.
• High Latency: Stop and Wait ARQ introduces additional latency in the transmission of data, as the sender
must wait for an acknowledgment before sending the next packet. This can be a problem for real-time
applications such as video streaming or online gaming.
• Limited Bandwidth Utilization: Stop and Wait ARQ does not utilize the available bandwidth efficiently, as
the sender can transmit only one data packet at a time. This results in underutilization of the channel,
which can be a problem in situations where the available bandwidth is limited.
• Limited Error Recovery: Stop and Wait ARQ has limited error recovery capabilities. If a data packet is lost or
corrupted, the sender must retransmit the entire packet, which can be time-consuming and can result in
further delays.
• Vulnerable to Channel Noise: Stop and Wait ARQ is vulnerable to channel noise, which can cause errors in
the transmitted data. This can result in frequent retransmissions and can impact the overall efficiency of
the protocol.
Sliding Window Protocol


• Sliding window protocols are data link layer protocols for reliable and sequential delivery of data
frames.
• The sliding window technique is also used in the Transmission Control Protocol (TCP).
• In this protocol, multiple frames can be sent by a sender at a time before receiving an
acknowledgment from the receiver.
• The term sliding window refers to the imaginary boxes to hold frames.
• Sliding window method is also known as windowing.

Working Principle
• In these protocols, the sender has a buffer called the sending window and the receiver has a buffer called the receiving window.
• The size of the sending window determines the sequence numbers of the outbound frames.
• If the sequence number of the frames is an n-bit field, then the range of sequence numbers that can be assigned is 0 to 2^n - 1.
• Consequently, the maximum size of the sending window is 2^n - 1.
• Thus, in order to accommodate a sending window of size 2^n - 1, an n-bit sequence number is chosen.
• The sequence numbers are assigned modulo 2^n.
• For example, with a 2-bit sequence number the frames are numbered 0, 1, 2, 3, 0, 1, 2, 3, 0, 1, and so on (binary 00, 01, 10, 11).
• The size of the receiving window is the maximum number of frames that the receiver can accept at a
time.
• It determines the maximum number of frames that the sender can send before receiving
acknowledgment.
Example
• Suppose that we have sender window
and receiver window each of size 4.
• So the sequence numbering of both the
windows will be 0,1,2,3,0,1,2 and so on.
• The following diagram shows the
positions of the windows after sending
the frames and receiving
acknowledgments.
Types of Sliding Window Protocols
• The Sliding Window ARQ (Automatic Repeat reQuest) protocols are of two categories: Go-Back-N ARQ and Selective Repeat ARQ.

Go – Back – N ARQ
• Go – Back – N ARQ provides for sending multiple frames before receiving the
acknowledgment for the first frame.
• It uses the concept of sliding window, and so is also called sliding window
protocol.
• The frames are sequentially numbered and a finite number of frames are sent.
• If the acknowledgment of a frame is not received within the time period, all
frames starting from that frame are retransmitted.
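A simplified sketch (assumed, not from the slides) of the Go-Back-N sender: keep at most N unacknowledged frames outstanding, slide the window on a cumulative acknowledgment, and resend the whole window on a timeout.

def go_back_n_send(frames, N, send, recv_ack):
    base, next_seq = 0, 0                        # base = oldest unacknowledged frame
    while base < len(frames):
        while next_seq < base + N and next_seq < len(frames):
            send(next_seq, frames[next_seq])     # fill the window
            next_seq += 1
        ack = recv_ack()                         # cumulative ACK, or None on timeout
        if ack is None:
            next_seq = base                      # go back N: resend everything from base
        else:
            base = max(base, ack)                # ACK a acknowledges all frames < a

log = []                                         # demo over a perfect channel
go_back_n_send(list('abcd'), N=2, send=lambda s, f: log.append((s, f)),
               recv_ack=lambda: log[-1][0] + 1)
print(log)   # [(0, 'a'), (1, 'b'), (2, 'c'), (3, 'd')]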
Selective Repeat ARQ
• This protocol also provides for sending multiple frames before receiving the
acknowledgment for the first frame.
• However, here only the erroneous or lost frames are retransmitted, while the
good frames are received and buffered.
• In SRP, the sender's window size starts at 0 and it grows to some predefined
maximum.
• The receiver's window is always fixed in size and equal to the predetermined
maximum.
• The receiver has the buffer reserved for each sequence number within its fixed
window.
• The sender and the receiver maintain a buffer of their window size.
• If a frame is lost, the lower edge of the receiver's window stays at the sequence number of the missing frame until that frame arrives.
• The receiver continues to receive and acknowledge incoming frames.
• The sender maintains a timeout clock for the unacknowledged frame number
and retransmits that frame after the timeout.
• A special timer times out for the ACK so that the ACK is sent back as an independent packet.
• If the receiver suspects that the transmission has an error, it immediately sends back a negative acknowledgment (NAK) to the sender.

Aloha
• Aloha is a type of random access protocol developed at the University of Hawaii in the early 1970s.
• It is a LAN-based protocol in which collisions can occur during the transmission of data from a source to a destination.
• Aloha has two variants: Pure Aloha and Slotted Aloha.
Pure Aloha
• Pure Aloha can be termed as the main Aloha or the original Aloha.
• Whenever any frame is available, each station sends it, and due to the
presence of only one channel for communication, it can lead to the chance of
collision.
• In pure Aloha, the user transmits a frame and waits until the receiver acknowledges it; if the receiver does not send the acknowledgment, the sender assumes the frame was not received and retransmits it.

Slotted Aloha
• Slotted Aloha is simply an advanced version of pure Aloha that helps in
improving the communication network.
• A station is required to wait for the beginning of the next slot to transmit.
• The vulnerable period is halved as opposed to Pure Aloha.
• Slotted Aloha helps reduce the number of collisions by utilizing the channel properly, at the cost of some added delay for the users.
• In Slotted Aloha, the channel time is separated into particular time slots.
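Halving the vulnerable period is what separates the classical throughput formulas of the two variants (standard results, not derived in these slides): S = G*e^(-2G) for pure Aloha versus S = G*e^(-G) for slotted Aloha, where G is the offered load in frames per frame time.

import math

def pure_aloha_throughput(G):
    return G * math.exp(-2 * G)        # vulnerable period = 2 frame times

def slotted_aloha_throughput(G):
    return G * math.exp(-G)            # vulnerable period halved to 1 slot

print(round(pure_aloha_throughput(0.5), 3))      # ~0.184, the pure Aloha maximum
print(round(slotted_aloha_throughput(1.0), 3))   # ~0.368, the slotted Aloha maximum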

CSMA
• In Carrier Sense Multiple Access (CSMA) protocol, the station will
sense the channel before the transmission of data.
• CSMA reduces the chances of collision in the network but it does
not eliminate the collision from the channel.
• 1-Persistent, Non-Persistent, P-Persistent, and O-Persistent are the four access methods of CSMA.
• CSMA stands for Carrier Sense Multiple Access.
• CSMA is one of the network protocols which works on the principle of ‘carrier
sense’.
• CSMA is a protocol developed to increase the performance of the network and
reduce the chance of collision in the network.
• If any device wants to send data then the device first senses or listens to the
network medium to check whether the shared network is free or not.
• If the channel is found idle then the device will transmit its data.
• This sense reduces the chance of collision in the network but this method is not
able to eliminate the collision.
• CSMA is used in Ethernet networks where two or more network devices are
connected.
• CSMA works on the principle of "Listen before Talking" or "Sense before
Transmit".
• When the device on the shared medium wants to transmit a data frame, then
the device first detects the channel to check the presence of any carrier signal
from other connected devices on the network.
• In this situation, if the device senses any carrier signal on the shared medium,
then this means that there is another transmission on the channel.
• The device will wait until the channel becomes idle and the transmission that is
in progress is currently completed.
• When the channel becomes idle the station starts its transmission.
• All other stations connected in the network receive the transmission of the
station.
• In CSMA, the station senses or detects the channel before the transmission of
data so it reduces the chance of collision in the transmission.
• But there may be a situation where two stations detected the channel idle at
the same time and they both start data transmission simultaneously so in this,
there is a chance of collision.
• So CSMA reduces the chance of collision in data transmission but it does not
eliminate the collision.
Types of CSMA Access Modes
1-Persistent
• This method is considered the straightforward and simplest method of CSMA.
• In this method, if the station finds the medium idle, then the station immediately sends the data frame with probability 1.
• In this, if the station wants to transmit the data. Then the station first senses the
medium.
• If the medium is busy then the station waits until the channel becomes idle, and
the station continuously senses the channel until the medium becomes idle.
• If the station detects the channel as idle, it immediately sends the data frame with probability 1, which is why this method is called 1-persistent.
• In this method there is a high possibility of collision, as two or more stations may sense the channel idle at the same time and transmit data simultaneously, which leads to a collision.
• This is one of the most straightforward methods.
• In this method, once the station finds that the medium is idle then it
immediately sends the frame.
• By using this method there are higher chances for collision because it is possible
that two or more stations find the shared medium idle at the same time and
then they send their frames immediately.
Non-Persistent
• In this method of CSMA, if the station finds the channel busy then it will wait for
a random amount of time before sensing the channel again.
• If the station wants to transmit the data then first of all it will sense the
medium.
• If the medium is idle then the station will immediately send the data.
• Otherwise, if the medium is busy then the station waits for a random amount of
time and then again senses the channel after waiting for a random amount of
time.
• In non-persistent CSMA there is less chance of collision in comparison to the 1-persistent method, because the station does not sense the channel continuously but senses it again only after waiting for a random amount of time.
• The random waiting time is unlikely to be the same for two stations, which is why this method reduces the chance of collision.
P-Persistent
• The p-persistent method of CSMA is used when the channel is divided into
multiple time slots and the duration of time slots is greater than or equal to the
maximum propagation time.
• This method is designed as a combination of the advantages of 1-Persistent and
Non-Persistent CSMA.
• The p-persistent method of CSMA reduces the chance of collision in the
network and there is an increment in the efficiency of the network.
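The three persistence decisions can be contrasted with a small illustrative sketch; channel_idle, wait_random and wait_slot are placeholders for real carrier sensing and timing, and p is the transmission probability used by p-persistent CSMA.

import random

def one_persistent(channel_idle, send):
    while not channel_idle():
        pass                          # sense continuously while the channel is busy
    send()                            # then transmit immediately (probability 1)

def non_persistent(channel_idle, send, wait_random):
    while not channel_idle():
        wait_random()                 # busy -> wait a random time before sensing again
    send()

def p_persistent(channel_idle, send, wait_slot, p=0.1):
    while True:
        if channel_idle() and random.random() < p:
            return send()             # transmit with probability p in an idle slot
        wait_slot()                   # otherwise defer to the next time slot and sense again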
O-Persistent
• In this method of CSMA, a supervisory node assigns a transmission order to each node in the network.
• When the channel is idle, instead of sending data immediately, each station waits for its assigned transmission turn.
• This mode of CSMA thus defines the priority of each station before data is transmitted on the medium.
• In this mode, if the channel is inactive, all stations wait for their turn to transmit the data.
• Every station in the channel transmits the data in its turn.

Variations of CSMA Protocol
Carrier Sense Multiple Access with Collision Detection (CSMA/CD)
• Carrier sense multiple access/ collision detection is one of the network
protocols for transmission.
• CSMA/CD protocol works with the medium access control layer of the network.
• The station senses the channel before transmitting data; if it finds the channel idle, it transmits its data frame and then checks whether the transmission was successful in the network or not.
• If the data frame was sent successfully, the station will then send the next frame.
• If the station detects a collision in the network, then in CSMA/CD the station will
send the stop/jam signal to all the stations connected in the network to
terminate their transmission of data. Then the station waits for a random
amount of time for the transmission of data.
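The "random amount of time" after a collision is typically drawn with truncated binary exponential backoff, the rule classic Ethernet uses (the slide does not spell it out): after the i-th successive collision a station waits a random number of slot times in the range 0 to 2^min(i,10) - 1, and gives up after 16 attempts.

import random

def backoff_slots(collision_count: int) -> int:
    if collision_count > 16:
        raise RuntimeError("too many collisions, abort transmission")
    k = min(collision_count, 10)                  # cap the exponent at 10
    return random.randint(0, 2 ** k - 1)          # number of slot times to wait

print([backoff_slots(i) for i in (1, 2, 3)])      # drawn from [0,1], [0,3], [0,7]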
Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA)
• Carrier sense multiple access/collision avoidance is one of the network
protocols for data frame transmission.
• When the station sends a data frame on the channel, it listens to the response to the sent data frame to test whether the transmission got through without a collision.
• When the station receives a single signal (i.e., its own signal), this means that there is no collision and the data has been successfully received by the receiver.
• But in case of a collision, the station receives two signals: its own signal and a second signal sent by the other station.
• In CSMA/CA collision is avoided by using the following three strategies.
• Interframe space
• Contention window
• Acknowledgement
• Interframe Space or IFS:
• If a station wants to transmit data, it waits until the channel becomes idle; even when the channel becomes idle, the station does not send the data immediately but waits for some time.
• This period is known as the Interframe Space or IFS.
• IFS can also define the priority of the frame or station.
• Acknowledgement:
• There may be a chance of collision or data may be corrupted during the
transmission.
• A positive acknowledgment and a time-out are used in addition, to ensure that the receiver has successfully received the data.
Advantages of CSMA:
1. Increased efficiency: CSMA ensures that only one device communicates on the
network at a time, reducing collisions and improving network efficiency.
2. Simplicity: CSMA is a simple protocol that is easy to implement and does not
require complex hardware or software.
3. Flexibility: CSMA is a flexible protocol that can be used in a wide range of
network environments, including wired and wireless networks.
4. Low cost: CSMA does not require expensive hardware or software, making it a
cost-effective solution for network communication.

CSMA
Disadvantages of CSMA:
1. Limited scalability: CSMA is not a scalable protocol and can become inefficient
as the number of devices on the network increases.
2. Delay: In busy networks, the requirement to sense the medium and wait for an
available channel can result in delays and increased latency.
3. Limited reliability: CSMA can be affected by interference, noise, and other
factors, resulting in unreliable communication.
4. Vulnerability to attacks: CSMA can be vulnerable to certain types of attacks,
such as jamming and denial-of-service attacks, which can disrupt network
communication.

Comparison of various protocols



IEEE 802.3 and Ethernet


• Ethernet is a set of technologies and protocols that are used primarily in LANs.
• It was first standardized in the 1980s as the IEEE 802.3 standard.
• IEEE 802.3 defines the physical layer and the medium access control (MAC) sub-layer of the data link layer
for wired Ethernet networks.
• Ethernet is classified into two categories: classic Ethernet and switched Ethernet.
• Classic Ethernet is the original form of Ethernet and provides data rates between 3 and 10 Mbps.
• Its varieties are commonly referred to as 10BASE-X.
• Here, 10 is the maximum throughput, i.e. 10 Mbps, BASE denotes the use of baseband transmission, and X
indicates the type of medium used.
• Most varieties of classic Ethernet are now obsolete.
• Switched Ethernet uses switches to connect the stations in the LAN.
• It replaces the repeaters used in classic Ethernet and allows full bandwidth utilization.

IEEE 802.3 and Ethernet


IEEE 802.3 Popular Versions
• There are a number of versions of IEEE 802.3 protocol. The most popular ones are -
• IEEE 802.3:
• This was the original standard given for 10BASE-5.
• It used a single thick coaxial cable into which a connection could be tapped by drilling into the cable
down to its core.
• Here, 10 is the maximum throughput, i.e. 10 Mbps, BASE denoted use of baseband transmission, and
5 refers to the maximum segment length of 500m.
• IEEE 802.3a:
• This gave the standard for thin coax (10BASE-2), which is a thinner variety where the segments of
coaxial cables are connected by BNC connectors.
• The 2 refers to the maximum segment length of about 200m (185m to be precise).
• IEEE 802.3i:
• This gave the standard for twisted pair (10BASE-T) that uses unshielded twisted pair (UTP) copper
wires as physical layer medium.
• The further variations were given by IEEE 802.3u for 100BASE-TX, 100BASE-T4 and 100BASE-FX.
• IEEE 802.3j:
• This gave the standard for Ethernet over Fiber (10BASE-F) that uses fiber optic cables as medium of
transmission.


IEEE 802.3 and Ethernet


Frame Format of Classic Ethernet and IEEE 802.3
The main fields of a classic Ethernet / IEEE 802.3 frame are listed below; a sketch of assembling such a frame follows the list.
• Preamble: It is the starting field that provides an alert and timing pulse for transmission. In classic
Ethernet it is an 8-byte field, and in IEEE 802.3 it is 7 bytes.
• Start of Frame Delimiter: It is a 1-byte field in an IEEE 802.3 frame that contains an alternating pattern of
ones and zeros ending with two ones.
• Destination Address: It is a 6-byte field containing the physical address of the destination station.
• Source Address: It is a 6-byte field containing the physical address of the sending station.
• Length: It is a 2-byte field that stores the number of bytes in the data field.
• Data: This is a variable-sized field that carries the data from the upper layers. The maximum size of the
data field is 1500 bytes.
• Padding: This is added to the data to bring its length up to the minimum requirement of 46 bytes.
• CRC: CRC stands for cyclic redundancy check. It is a 4-byte field that contains the error detection information.
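A minimal sketch, in Python, of assembling the frame body described above: padding the data field to 46 bytes, a 2-byte length field, and a CRC-32 trailer computed with zlib. The preamble/SFD and bit-level wire ordering are ignored here, and the addresses are made-up example values.

    import struct, zlib

    def build_8023_frame(dst, src, data):
        """Toy assembly of an IEEE 802.3 frame body (preamble and SFD omitted)."""
        length = struct.pack('!H', len(data))        # 2-byte length field: bytes of real data
        if len(data) < 46:
            data = data + bytes(46 - len(data))      # pad the data field up to the 46-byte minimum
        body = dst + src + length + data
        fcs = struct.pack('<I', zlib.crc32(body))    # 4-byte CRC over addresses, length and data
        return body + fcs

    frame = build_8023_frame(bytes.fromhex('ffffffffffff'),   # example broadcast destination
                             bytes.fromhex('020000000001'),   # example source address
                             b'hello')
    print(len(frame))   # 6 + 6 + 2 + 46 + 4 = 64 bytes, the minimum Ethernet frame size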


IEEE 802.3 and Ethernet
Advantages:
Simple format: The Ethernet frame format is simple and easy to understand, making it easy to
implement and troubleshoot Ethernet networks.
Flexibility: The Ethernet frame format is flexible and can accommodate different data sizes and
network topologies, making it suitable for a wide range of network applications.
Widely adopted: The Ethernet frame format is widely adopted and supported by a large number
of vendors and network devices, ensuring compatibility and interoperability.
Error detection: The Ethernet frame format includes a cyclic redundancy check (CRC) field for
error detection, which helps to ensure data integrity during transmission.
Support for VLANs: The Ethernet frame format supports virtual local area networks (VLANs),
which allows network administrators to logically partition a physical LAN into multiple smaller
virtual LANs for improved network management and security.

IEEE 802.3 and Ethernet


Disadvantages:
Limited frame size: The Ethernet frame format allows at most 1500 bytes of data in a frame, which
can limit the amount of data that can be transmitted in a single frame and can result in increased
overhead due to fragmentation and reassembly of larger packets.
Broadcast storms: Ethernet networks use broadcast transmissions to send frames to all devices
on the network, which can lead to broadcast storms if too many devices send broadcast frames
simultaneously, resulting in network congestion and performance issues.
Security vulnerabilities: The Ethernet frame format does not include built-in security features,
making Ethernet networks vulnerable to security threats such as eavesdropping and spoofing.
Limited speed: Ethernet networks have a limited maximum speed, which may not be sufficient for
high-speed applications or large-scale networks.
Limited distance: The maximum distance between two devices on an Ethernet network is limited,
which can restrict the physical coverage of the network.

Token Bus (IEEE 802.4)


• Token Bus (IEEE 802.4) is a popular standard for token passing LANs.
• In a token bus LAN, the physical medium is a bus or a tree (typically coaxial cable), and a
logical ring is created on top of it.
• The token is passed from one station to another in a fixed sequence (clockwise or
anticlockwise) around this logical ring.
• Each station knows the address of the station to its “left” and “right” as per
the sequence in the logical ring.
• A station can transmit data only when it holds the token.
• The working of a token bus is somewhat similar to Token Ring; a toy model of the logical ring is sketched below.
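A toy model of the logical ring, assuming (for illustration) that the token visits stations in descending address order; the class and method names are invented for this sketch.

    from collections import deque

    class TokenBusRing:
        """Toy IEEE 802.4 logical ring: stations share a bus, but the token
        visits them in a fixed logical order."""

        def __init__(self, station_addresses):
            # Assumed ordering: the token is passed from higher to lower addresses.
            self.ring = deque(sorted(station_addresses, reverse=True))

        @property
        def token_holder(self):
            return self.ring[0]                  # only this station may transmit

        def pass_token(self):
            self.ring.rotate(-1)                 # hand the token to the logical successor
            return self.token_holder

    ring = TokenBusRing([10, 40, 20, 30])
    print(ring.token_holder)                         # 40 holds the token first
    print([ring.pass_token() for _ in range(4)])     # then 30, 20, 10 and back to 40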


Token Bus (IEEE 802.4)


Frame Format:
The various fields of the frame format are:

1. Preamble – It is used for bit synchronization. It is a 1-byte field.
2. Start Delimiter – These bits mark the beginning of the frame. It is a 1-byte field.
3. Frame Control – This field specifies the type of frame – data frame and control frames. It is a 1-byte
field.
4. Destination Address – This field contains the destination address. It is a 2 to 6 bytes field.
5. Source Address – This field contains the source address. It is a 2 to 6 bytes field.
6. Data – If 2-byte addresses are used, the data field may be up to 8182 bytes; with 6-byte addresses it may
be up to 8174 bytes (the arithmetic behind these limits is checked below).
7. Checksum – This field contains the checksum bits, which are used to detect errors in the transmitted
data. It is a 4-byte field.
8. End Delimiter – This field marks the end of a frame. It is a 1-byte field.
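The 8182/8174-byte figures follow if one assumes that the frame body (frame control, the two addresses, data and checksum) may be at most 8191 bytes; the quick check below only illustrates that arithmetic.

    # Assumption for illustration: frame control + DA + SA + data + checksum <= 8191 bytes.
    MAX_BODY = 8191
    for addr_len in (2, 6):
        overhead = 1 + 2 * addr_len + 4            # frame control + two addresses + checksum
        print(addr_len, MAX_BODY - overhead)       # prints: 2 8182  and  6 8174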

Token Bus (IEEE 802.4)


Characteristics:-
1. Bus Topology: Token Bus uses a bus topology, where all devices on the network are
connected to a single cable or “bus”.
2. Token Passing: A “token” is passed around the network, which gives permission for a device
to transmit data.
3. Priority Levels: Token Bus uses three priority levels to prioritize data transmission. The
highest priority level is reserved for control messages and the lowest for data transmission.
4. Collision Detection: Token Bus employs a collision detection mechanism to ensure that two
devices do not transmit data at the same time.
5. Maximum Cable Length: The maximum cable length for Token Bus is limited to 1000 meters.

6. Data Transmission Rates: Token Bus can transmit data at speeds of up to 10 Mbps.
7. Limited Network Size: Token Bus is typically used for small to medium-sized networks with up
to 72 devices.
8. No Centralized Control: Token Bus does not require a central controller to manage network
access, which can make it more flexible and easier to implement.
9. Vulnerable to Network Failure: If the token is lost or a device fails, the network can become
congested or fail altogether.
10. Security: Token Bus has limited security features, and unauthorized devices can potentially
gain access to the network.

Token Bus (IEEE 802.4)


Advantages:
1. Token Bus provides a fair access mechanism, which ensures that each device gets an equal
opportunity to transmit data.
2. It supports a large number of nodes (up to 72 nodes), which makes it suitable for use in large
networks.
3. Token Bus is a deterministic protocol, which means that the time required for a device to
access the network is predictable.
4. It provides a high level of reliability and fault tolerance, as a single point of failure (such as a
broken cable) does not affect the entire network.
5. Token Bus is a standardized protocol, which ensures interoperability between devices from
different vendors.

Token Bus (IEEE 802.4)


Disadvantages:
1. Token Bus has a relatively low data transfer rate compared to other LAN protocols, such as
Ethernet.
2. It requires a strict physical layout of the network, with a maximum length of 2500 meters and
a maximum of 8 taps (devices) between any two active nodes.
3. Token Bus has a complex protocol, which requires a high level of expertise to design and
implement.
4. It is susceptible to collisions, as multiple devices may try to access the network at the same
time.
5. Token Bus requires a dedicated token, which can result in increased latency and decreased
network performance.
Token Ring Network (IEEE Standard 802.5)
• In a token ring, a special bit pattern, known as a token, circulates around the ring when all the
stations are idle. Token Ring is formed by the nodes connected in ring format.
• The principle used in the token ring network is that a token is circulating in the ring, and
whichever node grabs that token will have the right to transmit the data.
• Whenever a station wants to transmit a frame, it inverts a single bit of the 3-byte token, which
instantaneously changes it into a normal data packet.
• As there is only one token, there can be only one transmission at a time.
• Since the token rotates in the ring, it is guaranteed that every node gets the token within some
specified time.
• So there is an upper bound on the time of waiting to grab the token so that starvation is
avoided.
• There is also an upper limit of 250 on the number of nodes in the network.
Token Ring Network (IEEE Standard 802.5)
Modes of Operation
There are various modes of operation, which are as follows −
• Listen Mode − In listen mode, the incoming bits are simply copied to the output line, with no
further action taken.
• Talk or Transmit Mode − The ring interface is set to talk (transmit) mode when the station
connected to it has acquired the token. The direct input-to-output connection through the
single-bit buffer is disconnected so the station can insert its own data.
• Bypass Mode − This mode is used when the node is down. Any data is simply bypassed, and there is
no one-bit delay in this mode. A toy model of these modes follows.
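A toy model of the three interface modes, written only to make the description concrete; the class, constants and method are invented for this sketch.

    class RingInterface:
        """Toy token ring interface with its three operating modes."""
        LISTEN, TRANSMIT, BYPASS = 'listen', 'transmit', 'bypass'

        def __init__(self):
            self.mode = RingInterface.LISTEN

        def forward(self, incoming_bit, outgoing_bits=None):
            if self.mode == RingInterface.BYPASS:
                return incoming_bit                 # station is down: bits pass straight through
            if self.mode == RingInterface.LISTEN:
                return incoming_bit                 # copy input to output (after the 1-bit buffer)
            # TRANSMIT: the input-to-output path is broken; the station injects its own bits.
            return outgoing_bits.pop(0) if outgoing_bits else 0

    iface = RingInterface()
    iface.mode = RingInterface.TRANSMIT
    print(iface.forward(1, [0, 1, 1]))              # station sends its own bit (0) instead of relaying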
Token Ring Network (IEEE Standard 802.5)
• Handling Breakage
• The main problem with a ring network is that the whole network goes down if the ring cable
breaks or is tampered with.
• The solution to this problem is the use of a wire centre, as shown in the figure.
• This wire centre bypasses the stations that have gone down, removing them from the ring.
• This is done by connecting the bypass relay for that station.
• These relays are generally controlled by the software that operates automatically in case of
station failure.
• The use of a wire center improves the reliability and maintainability of the ring network.
Token Ring Network (IEEE Standard 802.5)
Token Ring Frame Formats
• There are three types of frame formats supported on a Token Ring network: token, abort, and data frame.
• The token format is the mechanism by which access to the ring is passed from one computer
attached to the network to another device connected to the network.
• Here, the token format consists of three bytes, of which the starting and ending delimiters are
used to indicate the beginning and end of a token frame. The middle byte of a token frame is
an access control byte.
• Three bits are used as a priority indicator, three bits are used as a reservation indicator, one bit is
the token bit, and the remaining bit position functions as the monitor bit; a sketch that unpacks this
byte is given below.
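A small sketch that unpacks the access control byte, assuming the usual bit layout of three priority bits, the token bit, the monitor bit and three reservation bits (most significant bit first); the function name is invented here.

    def parse_access_control(ac_byte):
        """Unpack the 1-byte Access Control field of a token ring frame (assumed layout PPP T M RRR)."""
        return {
            'priority':    (ac_byte >> 5) & 0b111,  # 3 priority bits
            'token_bit':   (ac_byte >> 4) & 0b1,    # distinguishes a token from a data/command frame
            'monitor_bit': (ac_byte >> 3) & 0b1,    # set by the monitor to catch endlessly circulating frames
            'reservation': ac_byte & 0b111,         # 3 reservation bits
        }

    print(parse_access_control(0b01010011))
    # {'priority': 2, 'token_bit': 1, 'monitor_bit': 0, 'reservation': 3}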
The components of the Token Ring Frame Format are as follows −
• Start Delimiter (SD) − The first field of the data/command frame, SD, is one byte long and is
used to alert the receiving station to the arrival of a frame as well as to allow it to synchronize
its retrieval timing.
• Access Control (AC) − The AC field is one byte long and includes four subfields: the first three
bits are the priority field, the fourth bit is the token bit, the fifth bit is the monitor bit, and the
last three bits form the reservation field.
• Frame Control (FC) −The FC field is one byte long and contains two fields. The first is a one-bit
field used to indicate the type of information contained in the Protocol Data Unit (PDU).
• Destination Address (DA) −The two-to-six-byte DA field contains the physical address of the
frame’s next destination. If its ultimate destination is another network, the DA is the address
of the router to the next LAN on its path.
• Source Address (SA) − The SA field is also two to six bytes long and contains the physical
address of the sending station. If the ultimate destination of the packet is a station on the
same network as the originating station, the SA is that of the originating station.
• Data −The sixth field, data, is allotted 4500 bytes and contains the PDU. A token ring frame
does not include a PDU length or type field.
• Checksum − The checksum field is 4 bytes long and contains a CRC computed over the frame by the
sending station. The receiver recomputes the CRC on the received frame and compares the two values
to detect errors introduced during transmission.
• End Delimiter (ED) −The ED is a second flag field of one byte and indicates the end of the
sender’s data and control information.
• Frame Status −The last byte of the frame is the FS field. It can be set by the receiver to indicate
that the frame has been read or by the monitor to indicate that the frame has already been
around the ring.
Comparison of IEEE 802.3, IEEE 802.4 and IEEE 802.5
1. Topology: IEEE 802.3 uses a bus topology; IEEE 802.4 uses a bus or tree topology; IEEE 802.5 uses a ring topology.
2. Frame size: the IEEE 802.3 frame format is 1572 bytes; the IEEE 802.4 frame format is 8202 bytes; the IEEE 802.5 frame format is of variable size.
3. Priority: IEEE 802.3 gives no priority to stations; IEEE 802.4 supports priorities for stations; in IEEE 802.5 priorities are possible.
4. Data field: 0 to 1500 bytes in IEEE 802.3; 0 to 8182 bytes in IEEE 802.4; no limit on the size of the data field in IEEE 802.5.
5. Minimum frame: IEEE 802.3 requires a minimum frame of 64 bytes; IEEE 802.4 can handle short minimum frames; IEEE 802.5 supports both short and large frames.
6. Throughput and efficiency: in IEEE 802.3, efficiency decreases as speed increases and throughput is affected by collisions; in IEEE 802.4 and IEEE 802.5, throughput and efficiency at very high loads are outstanding.
7. Modems: IEEE 802.3 does not require modems; IEEE 802.4 requires modems; IEEE 802.5, like IEEE 802.4, also requires modems.
8. Protocol complexity: the IEEE 802.3 protocol is very simple; IEEE 802.4 is extremely complex; IEEE 802.5 is moderately complex.
9. Applications: IEEE 802.3 is not applicable to real-time, interactive or client-server applications; IEEE 802.4 can be applied to real-time and interactive applications because there is no limitation on the size of data; IEEE 802.5 is applicable to real-time traffic.
