
Acn Unit-2

The document discusses the functions of the Data Link Layer, specifically focusing on framing methods, error control using Hamming code, and protocols like Stop-and-Wait and Sliding Window. It details four methods of framing: Character Count, Character Stuffing, Bit Stuffing, and Physical Layer Coding Violations, along with the Hamming code for error detection and correction. Additionally, it outlines the HDLC frame format and its components, including flag fields, address fields, control fields, and information fields.


UNIT-2

12. Explain the various functions of the Data Link Layer. Discuss
the different methods of framing with a neat diagram?
and
13. With a neat diagram discuss different methods of framing?

Framing is a function of the Data Link Layer that separates the message travelling from a particular sender to a particular receiver from all other messages to all other destinations, simply by adding a sender address and a destination address. The destination address indicates where the message or packet is to go, and the sender address helps the recipient acknowledge receipt.

Frames are the data units of the data link layer, transmitted between network points. A frame carries complete addressing, the necessary protocol identifiers, and control information. The physical layer merely accepts and transmits a stream of bits without any regard to meaning or structure, so it is up to the data link layer to create and recognize frame boundaries.

This can be achieved by attaching special bit patterns to the start and end of each frame. Since these bit patterns may accidentally occur in the data, special care must be taken to ensure they are not misinterpreted as frame delimiters.

In a point-to-point connection between two computers or devices, data is transferred over the wire as a stream of bits; framing divides this stream into discernible blocks of information.

Methods of Framing:
There are basically four methods of framing as given below –
• Character Count
• Flag Byte with Character Stuffing
• Starting and Ending Flags, with Bit Stuffing
• Encoding Violations

These are explained as following below.

• Character Count: This method, now rarely used, counts the total number of characters present in the frame, using a field in the header. The character count tells the data link layer at the receiver how many characters follow and hence where the frame ends.
The disadvantage of this method is that if the character count is corrupted by an error occurring during transmission, the receiver loses synchronization and may be unable to locate the beginning of the next frame.

• Character Stuffing: Character stuffing, also known as byte stuffing or character-oriented framing, is analogous to bit stuffing except that it operates on bytes rather than bits. In byte stuffing, a special byte known as ESC (escape character), with a predefined pattern, is inserted into the data section of the frame whenever the data contains a byte with the same pattern as the flag byte. The receiver removes the ESC byte and keeps the data byte that follows. In short, character stuffing is the addition of one extra byte whenever an ESC or flag byte appears in the text.
• Bit Stuffing: Bit stuffing is also known as bit-oriented framing or the bit-oriented approach. In bit stuffing, extra bits are inserted into the transmitted data stream. These extra, non-information bits give signalling information to the receiver and prevent the data from accidentally containing control sequences. It is a form of protocol management that breaks up bit patterns which would otherwise cause the transmission to lose synchronization. Bit stuffing is an essential part of the transmission process in many network and communication protocols, and is also used in USB.
• Physical Layer Coding Violations: Encoding violation is a method usable only on networks whose physical-medium encoding contains some redundancy, i.e., uses more than one signal element to represent one unit of data. A signal combination that is invalid for ordinary data can then be used to mark frame boundaries.
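The bit-stuffing rule described above can be illustrated with a short sketch. The following Python code (function names are illustrative; the HDLC-style flag 01111110 is used as the delimiter) inserts a 0 after every run of five consecutive 1s, so the flag's run of six 1s can never appear inside the data:

```python
FLAG = "01111110"  # HDLC-style frame delimiter (six consecutive 1s)

def bit_stuff(bits: str) -> str:
    """Insert a 0 after every run of five consecutive 1s."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")  # stuffed bit
            run = 0
    return "".join(out)

def bit_unstuff(bits: str) -> str:
    """Remove the 0 that follows every run of five 1s."""
    out, run, skip = [], 0, False
    for b in bits:
        if skip:             # this is a stuffed 0: drop it
            skip = False
            run = 0
            continue
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            skip = True
            run = 0
    return "".join(out)

data = "0111111011111100"
framed = FLAG + bit_stuff(data) + FLAG
```

Because stuffing happens after five 1s, the stuffed stream never contains six 1s in a row, so the receiver can scan for the flag pattern unambiguously.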

14. How is Hamming code used for error control? Distinguish between the
codes used to detect and correct errors.

Hamming Code

Hamming code is a block code that is capable of detecting up to two simultaneous bit
errors and correcting single-bit errors. It was developed by R.W. Hamming for error
correction.
In this coding method, the source encodes the message by inserting redundant bits within
the message. These redundant bits are extra bits that are generated and inserted at specific
positions in the message itself to enable error detection and correction. When the
destination receives this message, it performs recalculations to detect errors and to find the
bit position that is in error.

Encoding a message by Hamming Code

The procedure used by the sender to encode the message encompasses the
following steps −

Step 1 − Calculation of the number of redundant bits.

Step 2 − Positioning the redundant bits.

Step 3 − Calculating the values of each redundant bit.

Once the redundant bits are embedded within the message, it is sent to the receiver.

Step 1 − Calculation of the number of redundant bits.

If the message contains m data bits, r redundant bits are added to it so that the code can indicate at least (m + r + 1) different states: (m + r) states identify the location of an error in each of the (m + r) bit positions, and one additional state indicates no error. Since r bits can indicate 2^r states, 2^r must be at least equal to (m + r + 1). Thus the following relation should hold:

2^r ≥ m + r + 1

Step 2 − Positioning the redundant bits.

The r redundant bits are placed at bit positions that are powers of 2, i.e. 1, 2, 4, 8, 16, etc. They are
referred to in the rest of this text as r1 (at position 1), r2 (at position 2), r3 (at position 4), r4
(at position 8) and so on.

Step 3 − Calculating the values of each redundant bit.

The redundant bits are parity bits. A parity bit is an extra bit that makes the number of 1s
either even or odd. The two types of parity are −

Even Parity − Here the total number of 1s in the message is made even.

Odd Parity − Here the total number of 1s in the message is made odd.

Each redundant bit, ri, is calculated as the parity, generally even parity, based upon its
bit position. It covers all bit positions whose binary representation includes a 1 in the ith
position except the position of ri. Thus −

r1 is the parity bit for all data bits in positions whose binary representation includes a 1
in the least significant position excluding 1 (3, 5, 7, 9, 11 and so on)

r2 is the parity bit for all data bits in positions whose binary representation includes a 1
in the position 2 from right except 2 (3, 6, 7, 10, 11 and so on)

r3 is the parity bit for all data bits in positions whose binary representation includes a 1
in the third position from the right, except position 4 (5-7, 12-15, 20-23 and so on).
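The three encoding steps above can be sketched in Python. This is a minimal illustration assuming even parity and 1-based bit positions; the function name is made up for the example:

```python
def hamming_encode(data_bits: str) -> str:
    """Encode data bits with an even-parity Hamming code.
    Redundant bits go at positions 1, 2, 4, 8, ... (1-based)."""
    m = len(data_bits)
    # Step 1: smallest r with 2^r >= m + r + 1
    r = 0
    while 2 ** r < m + r + 1:
        r += 1
    n = m + r
    code = [None] * (n + 1)  # index 0 unused (positions are 1-based)
    # Step 2: place data bits, skipping power-of-two positions
    it = iter(data_bits)
    for pos in range(1, n + 1):
        if pos & (pos - 1) != 0:  # not a power of two -> data bit
            code[pos] = next(it)
    # Step 3: each parity bit covers positions whose binary
    # representation has a 1 in the parity bit's own position
    for i in range(r):
        p = 2 ** i
        ones = sum(code[pos] == "1"
                   for pos in range(1, n + 1)
                   if pos & p and pos != p)
        code[p] = "1" if ones % 2 else "0"
    return "".join(code[1:])
```

For example, the 4-bit data word 1011 needs r = 3 redundant bits (2^3 ≥ 4 + 3 + 1) and encodes to the 7-bit codeword 0110011.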
15. Compute the checksum for the data frame 1101011011 with G(x) = x4 + x + 1.
Explanation: The frame for transmission is 1101011011 and the generator polynomial is G(x) = x4
+ x + 1, i.e., 10011. We append four 0s (one fewer than the number of bits in the divisor, as CRC requires) and divide 11010110110000 by 10011 using modulo-2 division; the remainder is 1110.

We append 1110 to the frame, so the transmitted frame is 11010110111110.
16. Compute the checksum for the data x9 + x7 + x4 + x + 1 with G(x) = x3 + x + 1
as a generator polynomial?
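The modulo-2 long division used in problems 15 and 16 can be sketched as follows; the function name is illustrative:

```python
def crc_remainder(frame: str, generator: str) -> str:
    """Binary (mod-2) long division; returns the CRC remainder."""
    k = len(generator)
    padded = list(frame + "0" * (k - 1))  # append k-1 zeros
    for i in range(len(frame)):
        if padded[i] == "1":  # XOR the generator under this bit
            for j in range(k):
                padded[i + j] = str(int(padded[i + j]) ^ int(generator[j]))
    return "".join(padded[-(k - 1):])

# Problem 15: G(x) = x^4 + x + 1 -> 10011
rem = crc_remainder("1101011011", "10011")   # "1110"
codeword = "1101011011" + rem                # "11010110111110"
```

For problem 16, the data polynomial x9 + x7 + x4 + x + 1 corresponds to the bit string 1010010011 and G(x) = x3 + x + 1 to 1011; the same function gives a remainder of 000, i.e., this frame happens to be exactly divisible by the generator.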
17. Explain the simplex stop-and-wait protocol. What are its advantages?
Here "stop and wait" means that whatever data the sender wants to send, it sends one frame to the
receiver at a time. After sending a frame, it stops and waits until it receives the acknowledgment from
the receiver. The stop and wait protocol is a flow control protocol; flow control is one of
the services of the data link layer.

It is a data link layer protocol used for transmitting data over noiseless channels.
It provides unidirectional data transmission, meaning that data flows in only one direction
at a time. It provides a flow-control mechanism but does not provide any error
control mechanism.

The idea behind this protocol is that when the sender sends a frame, it waits
for the acknowledgment before sending the next frame.
Primitives of Stop and Wait Protocol
The primitives of stop and wait protocols are:
Sender side
Rule 1: Sender sends one data packet at a time.
Rule 2: Sender sends the next packet only when it receives the acknowledgment of the
previous packet.
Therefore, the idea of stop and wait protocol in the sender's side is very simple, i.e., send
one packet at a time, and do not send another packet before receiving the acknowledgment.
Receiver side
Rule 1: Receive and then consume the data packet.
Rule 2: When the data packet is consumed, receiver sends the acknowledgment to the
sender.
Therefore, the idea of stop and wait protocol in the receiver's side is also very simple, i.e.,
consume the packet, and once the packet is consumed, the acknowledgment is sent. This is
known as a flow control mechanism.
Advantages of Stop and Wait Protocol
• It is very simple to implement.
• The main advantage of this protocol is its accuracy. The next frame is sent only when
the previous frame is acknowledged, so there is little chance of any frame being lost.
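The sender and receiver rules above can be turned into a small in-memory simulation. This is an illustrative sketch of the noiseless case only (no losses, no timers); all names are made up for the example:

```python
from collections import deque

def stop_and_wait(frames):
    """Simulate simplex stop-and-wait over a noiseless channel:
    send one frame, then block until its acknowledgment arrives."""
    channel = deque()        # sender -> receiver link (holds one frame)
    received, events = [], []
    for f in frames:
        channel.append(f)             # sender rule 1: send one frame
        events.append(("send", f))
        frame = channel.popleft()     # receiver rule 1: receive...
        received.append(frame)        # ...and consume the frame
        events.append(("ack", frame)) # receiver rule 2: ack after consuming
        # sender rule 2: only now may it proceed to the next frame
    return received, events

frames = ["f0", "f1", "f2"]
received, events = stop_and_wait(frames)
```

Every frame is acknowledged before the next send appears in the event log, which is exactly the lock-step behaviour the rules describe.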

18. What is a sliding window protocol? Discuss the Go-Back-N and Selective Repeat
protocols.

The sliding window is a technique for sending multiple frames at a time. It controls the data
packets between two devices where reliable and gradual delivery of data frames is needed. It
is also used in TCP (Transmission Control Protocol).
In this technique, each frame is assigned a sequence number. The sequence numbers are
used to find missing data at the receiver end and to avoid accepting duplicate data.

S.NO | Go-Back-N Protocol | Selective Repeat Protocol
1. | If a sent frame is found corrupted or lost, all frames from that frame to the last frame transmitted are retransmitted. | Only the frames that are found corrupted or lost are retransmitted.
2. | Sender window size is N. | Sender window size is also N.
3. | Receiver window size is 1. | Receiver window size is N.
4. | Less complex. | More complex.
5. | Neither the sender nor the receiver needs sorting. | The receiver needs sorting to reorder the frames.
6. | The acknowledgement type is cumulative. | The acknowledgement type is individual.
7. | Out-of-order packets are NOT accepted (they are discarded) and the entire window is retransmitted. | Out-of-order packets are accepted and buffered.
8. | If the receiver receives a corrupt packet, the entire window is retransmitted. | If the receiver receives a corrupt packet, it immediately sends a negative acknowledgement, so only that packet is retransmitted.
9. | Efficiency is N/(1 + 2*a). | Efficiency is also N/(1 + 2*a).
19. What should be the size of the frame so that an efficiency of 50% is
achieved in the sliding window protocol?

Stop and Wait ARQ offers error and flow control, but can cause serious performance
problems because the sender always waits for an acknowledgement even when it has the next packet ready to
send. Consider a situation where you have a high-bandwidth connection and the propagation
delay is also high (for example, you are connected to a server in another country through a
high-speed link): you cannot use the full speed because of the limitations of stop and wait.
The sliding window protocol handles this efficiency issue by sending more than one packet
at a time, using a larger sequence number space. The idea is the same as pipelining in computer architecture.
Efficiency – It is defined as the ratio of total useful time to the total cycle time of a packet.
For the stop and wait protocol,
Total cycle time = Tt(data) + Tp(data) + Tt(acknowledgement) + Tp(acknowledgement)
≈ Tt(data) + Tp(data) + Tp(acknowledgement) = Tt + 2*Tp

since acknowledgements are very small, their transmission delay can be neglected.
Efficiency = Useful Time / Total Cycle Time = Tt/(Tt + 2*Tp) (for Stop and Wait)
= 1/(1 + 2a) [using a = Tp/Tt]
Effective Bandwidth (EB) or Throughput – The number of bits sent per second.
EB = Data Size(D) / Total Cycle Time(Tt + 2*Tp). Multiplying and dividing by Bandwidth
(B) gives EB = (1/(1 + 2a)) * B [using a = Tp/Tt] = Efficiency * Bandwidth.
Capacity of link – If a channel is full duplex, then bits can be transferred in both
directions without any collisions. The maximum number of bits a channel/link can hold
is its capacity.
Capacity = Bandwidth(B) * Propagation delay(Tp). For full duplex channels, Capacity =
2 * Bandwidth(B) * Propagation delay(Tp).
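To answer question 19 directly: setting the stop-and-wait efficiency Tt/(Tt + 2*Tp) equal to 50% gives Tt = 2*Tp, so the frame size must be L = Tt * B = 2 * Tp * B bits. A small sketch (the bandwidth and delay figures are assumed example values, and the function name is illustrative):

```python
def frame_size_for_efficiency(eff, bandwidth_bps, tp_seconds):
    """Solve eff = Tt / (Tt + 2*Tp) for Tt, then frame size L = Tt * B."""
    tt = 2 * tp_seconds * eff / (1 - eff)   # required transmission time
    return tt * bandwidth_bps               # frame size in bits

# Assumed example values: 1 Mbps link with 10 ms propagation delay
size_bits = frame_size_for_efficiency(0.5, 10**6, 0.01)
```

With these example values, Tt must equal 2 * 10 ms = 20 ms, so a 20,000-bit frame gives exactly 50% efficiency.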
20. Describe the frame format of HDLC?
and
22. With a neat diagram explain the frame format of HDLC?

High-Level Data Link Control (HDLC) uses the term "frame" to denote an entity of data, or
protocol data unit, transmitted from one station to another. Every frame on the link must
begin and end with a Flag Sequence Field (F). Each HDLC frame contains up to six fields: a
beginning flag field, an address field, a control field, an information field, a frame check
sequence (FCS) field, and an ending flag field. In multiple-frame transmissions, the ending
flag of one frame can serve as the beginning flag of the next frame.
The basic frame structure of HDLC protocol is shown below:

Size of Different Fields:

Field Name | Size (bits)
Flag Field | 8
Address Field | 8
Control Field | 8 or 16
Information Field | Variable (not used in some HDLC frame types)
FCS (Frame Check Sequence) Field | 16 or 32
Closing Flag Field | 8
Let us understand these fields in details:
• Flag Field – The flag field marks the beginning and end of a frame and thereby delimits
the span over which error checking is performed. The HDLC protocol has no start and stop
bits; instead, the flag field uses the delimiter 0x7E to indicate the beginning and end of a
frame. It is the 8-bit sequence 01111110, which identifies both the start and the end of a
frame and also serves as a synchronization pattern for the receiver. This bit pattern is not
allowed to occur anywhere else inside a complete frame.
• Address Field – The address field contains the HDLC address of the secondary
station; it identifies which secondary station will send or receive the data frame. The
field normally consists of 8 bits, so it is capable of addressing 256 addresses, but it can
be one or several bytes long depending on the requirements of the network; with
extended addressing, each byte can identify up to 128 stations.
The address may be a particular station address, a group address, or a broadcast
address. A primary station is either the source or the destination of the communication,
which eliminates the need to include the primary's address.
• Control Field – HDLC uses this field to determine how to control the process of
communication. The control field is different for the different types of frames in the HDLC
protocol: Information frames (I-frames), Supervisory frames (S-frames), and Unnumbered
frames (U-frames).

This field is a 1- or 2-byte segment of the frame required for flow and error control.
It consists of 8 bits by default but can be extended to 16 bits, and the interpretation
of the bits depends on the type of frame.
• Information Field – This field contains the user data that the sender is transmitting to
the receiver (in an I-frame), or network-layer or management information (in a U-frame).
It is fully transparent. The length of this field may vary from one network to another, and
the information field is not present in every HDLC frame.
• Frame Check Sequence (FCS) – The FCS is used for the identification of errors, i.e.,
HDLC error detection. A CRC16 (16-bit cyclic redundancy check) or CRC32 (32-bit cyclic
redundancy check) code is used. The CRC calculation is repeated at the receiver; if the
result differs even slightly from the value carried in the original frame, an error is assumed.
This field is either 2 or 4 bytes long and covers the address field, control field, and
information field. The FCS is calculated by both the sender and the receiver of a data
frame, and it confirms that the frame was not corrupted by the medium used to transfer it
from sender to receiver.
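The field layout in the table above can be checked with a small parser. This sketch assumes the common case of a 1-byte address field, a 1-byte control field, and a 16-bit FCS (as noted above, these sizes can vary); the frame bytes are made-up example values:

```python
import struct

def parse_hdlc(frame: bytes):
    """Split an HDLC frame assuming 1-byte address and control
    fields and a 16-bit FCS (sizes can vary, as described above)."""
    assert frame[0] == 0x7E and frame[-1] == 0x7E, "missing flag"
    address = frame[1]
    control = frame[2]
    info = frame[3:-3]                    # between control and FCS
    (fcs,) = struct.unpack(">H", frame[-3:-1])  # 16-bit FCS
    return address, control, info, fcs

# Example frame: flag, broadcast address, U-frame control, data, FCS, flag
frame = bytes([0x7E, 0xFF, 0x03]) + b"hello" + bytes([0x12, 0x34, 0x7E])
addr, ctrl, info, fcs = parse_hdlc(frame)
```

The slicing mirrors the table: one flag byte at each end, one byte each for address and control, two bytes of FCS just before the closing flag, and everything in between as the information field.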

21. Describe the frame format of PPP standard protocol?


Point-to-Point Protocol (PPP) is generally the default RAS protocol in Windows and the most
commonly used data link layer protocol for encapsulating higher network-layer protocols so
that they can pass over synchronous and asynchronous communication lines. In PPP, link
establishment is controlled and handled mainly by the Link Control Protocol (LCP). PPP is
also used to connect a home PC to an ISP's server through a modem, and it was widely
adopted by ISPs to provide dial-up Internet access.

PPP Frame Format: A PPP frame encapsulates packets that contain either configuration
information or data. PPP uses the same basic format as HDLC, with one additional field:
the protocol field, which appears just after the control field and before the information
(data) field.

Various fields of Frame are given below:

• Flag field – A PPP frame, like an HDLC frame, always begins and ends with the
standard HDLC flag. It always has a value of 1 byte, 01111110 in binary.
• Address field – The address field is always the broadcast address: 1 byte of all 1s
(11111111 in binary), indicating that all stations may accept the frame. PPP does
not provide or assign individual station addresses.
• Control field – This field uses the format of the HDLC U-frame (unnumbered frame).
In HDLC the control field serves various purposes, but in PPP it is fixed at 1 byte,
00000011 in binary, which indicates a connectionless data link.
• Protocol field – This field identifies the network protocol of the datagram, i.e., what
kind of packet is being carried in the data field. It is 1 or 2 bytes long and identifies
the PDU (Protocol Data Unit) being encapsulated by the PPP frame.
• Data field – This field contains the upper-layer datagram; for regular PPP data frames
the network-layer datagram is encapsulated here. The length of this field is not
constant; it varies.
• FCS field – This field contains a checksum for the identification of errors. It can be
either 16 or 32 bits in size, and it is calculated over the address, control, protocol, and
information fields. Characters may also be added to the frame for error control and
handling.
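On asynchronous links, PPP performs character (byte) stuffing as defined in RFC 1662: the flag byte 0x7E and the escape byte 0x7D are each replaced by 0x7D followed by the original byte XORed with 0x20. A minimal sketch (function names are illustrative):

```python
FLAG, ESC = 0x7E, 0x7D  # PPP flag and escape bytes (RFC 1662)

def ppp_escape(payload: bytes) -> bytes:
    """Escape flag/escape bytes: 0x7E -> 0x7D 0x5E, 0x7D -> 0x7D 0x5D."""
    out = bytearray()
    for b in payload:
        if b in (FLAG, ESC):
            out += bytes([ESC, b ^ 0x20])
        else:
            out.append(b)
    return bytes(out)

def ppp_unescape(data: bytes) -> bytes:
    """Reverse the escaping: a byte after 0x7D is XORed with 0x20."""
    out, esc = bytearray(), False
    for b in data:
        if esc:
            out.append(b ^ 0x20)
            esc = False
        elif b == ESC:
            esc = True
        else:
            out.append(b)
    return bytes(out)

payload = bytes([0x7E, 0x41, 0x7D])
stuffed = ppp_escape(payload)
```

After escaping, the payload can never contain a raw 0x7E, so the receiver can rely on the flag byte to delimit frames.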
23. Explain in detail about Fast Ethernet.
Fast Ethernet was designed to compete with LAN protocols such as FDDI or Fiber Channel (or Fibre
Channel, as it is sometimes spelled). IEEE created Fast Ethernet under the name 802.3u. Fast Ethernet is
backward-compatible with Standard Ethernet, but it can transmit data 10 times faster at a rate of 100 Mbps.
The goals of Fast Ethernet can be summarized as follows:
1. Upgrade the data rate to 100 Mbps.
2. Make it compatible with Standard Ethernet.
3. Keep the same 48-bit address.
4. Keep the same frame format.
5. Keep the same minimum and maximum frame lengths.
Fast Ethernet is popularly named 100-BASE-X. Here, 100 is the maximum throughput, i.e. 100 Mbps,
BASE denotes the use of baseband transmission, and X is the type of medium used, such as TX or FX.
Varieties of Fast Ethernet
The common varieties of fast Ethernet are 100-Base-TX, 100-BASE-FX and 100-Base-T4.

100-Base-T4
● This has four pairs of UTP of Category 3, two of which are bi-directional and the other two are
unidirectional.
● In each direction, three pairs can be used simultaneously for data transmission.
● Each twisted pair is capable of transmitting a maximum of 25Mbaud data. Thus the three pairs can
handle a maximum of 75Mbaud data.
● It uses the encoding scheme 8B/6T (eight binary/six ternary).
100-Base-TX
● This has either two pairs of unshielded twisted pairs (UTP) category 5 wires or two shielded
twisted pairs (STP) type 1 wires. One pair transmits frames from hub to the device and the other
from device to hub.
● Maximum distance between hub and station is 100m.
● Its line rate is 125 Mbaud, which corresponds to a 100 Mbps data rate after 4B/5B block coding.
● It uses the MLT-3 line encoding scheme along with 4B/5B block coding.
100-BASE-FX
● This has two pairs of optical fibers. One pair transmits frames from hub to the device and the
other from device to hub.
● Maximum distance between hub and station is 2000m.
● Its line rate is 125 Mbaud, which corresponds to a 100 Mbps data rate after 4B/5B block coding.
● It uses the NRZ-I line encoding scheme along with 4B/5B block coding.
Frame Format of IEEE 802.3
The frame format of IEEE 802.3u is the same as that of IEEE 802.3. The fields in the frame are:
● Preamble − A 7-byte starting field that provides an alert and timing pulse for transmission.
● Start of Frame Delimiter (SFD) − A 1-byte field that contains an alternating pattern of ones
and zeros, ending with two consecutive ones (10101011).
● Destination Address − A 6-byte field containing the physical address of the destination station.
● Source Address − A 6-byte field containing the physical address of the sending station.
● Length − A 2-byte field that stores the number of bytes in the data field.
● Data − A variable-sized field that carries the data from the upper layers. The maximum size of
the data field is 1500 bytes.
● Padding − Added to the data to bring its length up to the minimum requirement of 46 bytes.
● CRC − CRC stands for cyclic redundancy check. It contains the error detection information.
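The field sizes listed above add up to the familiar Ethernet frame limits (the preamble and SFD are conventionally not counted in the frame length). A quick check:

```python
# Field sizes (bytes) from the IEEE 802.3 frame format above
DEST, SRC, LENGTH, CRC = 6, 6, 2, 4
DATA_MIN, DATA_MAX = 46, 1500   # padding brings data up to 46 bytes

min_frame = DEST + SRC + LENGTH + DATA_MIN + CRC   # minimum frame
max_frame = DEST + SRC + LENGTH + DATA_MAX + CRC   # maximum frame
```

This reproduces the standard 64-byte minimum and 1518-byte maximum Ethernet frame sizes that Fast Ethernet keeps unchanged (goal 5 above).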
24.Explain about Gigabit Ethernet in detail with neat diagram.
In computer networks, Gigabit Ethernet (GbE) is the family of Ethernet technologies that achieve
a nominal data rate of 1 gigabit per second (1 Gbps). It was standardized by IEEE 802.3z (1998,
for fiber and short-haul copper) and IEEE 802.3ab (1999, for twisted pair).
Gigabit Ethernet is a variant of the Ethernet technology generally used in local area networks (LANs) for
sending Ethernet frames at 1 Gbps. It can be used as a backbone in several networks, especially those of
large organizations.
Gigabit Ethernet is an extension of the earlier 10 Mbps and 100 Mbps 802.3 Ethernet standards. It
provides 1,000 Mbps bandwidth while supporting full compatibility with the installed base of around 100
million Ethernet nodes.
Gigabit Ethernet usually employs an optical fibre connection to transfer data at very high speed over
long distances. For short distances, copper cables and twisted pair connections are utilized.
Varieties of Gigabit Ethernet
The popular varieties of Gigabit Ethernet are 1000Base-SX, 1000Base-LX, 1000BASE-T and 1000Base-CX.

1000BASE-CX
● Defined by IEEE 802.3z standard
● The initial standard for Gigabit Ethernet
● Uses shielded twisted pair cables with DE-9 or 8P8C connectors
● Maximum segment length is 25 metres
● Uses NRZ line encoding and 8B/10B block encoding
1000BASE-SX
● Defined by IEEE 802.3z standard
● Uses a pair of fibre optic cables carrying a shorter wavelength of 770 – 860 nm
● The maximum segment length varies from 220 – 550 metres depending upon the fiber properties.
● Uses NRZ line encoding and 8B/10B block encoding
1000BASE-LX
● Defined by IEEE 802.3z standard
● Uses a pair of fibre optic cables carrying a longer wavelength of 1270 – 1355 nm
● Maximum segment length is 500 metres
● Can cover distances up to 5 km
● Uses NRZ line encoding and 8B/10B block encoding
1000BASE-T
● Defined by IEEE 802.3ab standard
● Uses four lanes of twisted-pair cables (Cat-5, Cat-5e, Cat-6, Cat-7)
● Maximum segment length is 100 metres
● Uses trellis code modulation technique
Gigabit Ethernet can be deployed using both copper wires and fiber optic cables. Since the aim is to
achieve data rates of 1 gigabit per second (1 Gbps), a bit must be encoded and decoded within a
nanosecond. This was first achieved in the 1000BASE-CX version.
Cabling in Common Varieties of Gigabit Ethernet
The popular varieties of Gigabit Ethernet are 1000Base-SX, 1000Base-LX, 1000BASE-T and 1000Base-CX.

Cabling in 1000BASE-T
● Uses four pairs of unshielded twisted pair (UTP) cables
● Generally uses Category 5/5e UTP, but can also use Category 6 and Category 7
● Maximum segment length is 100 meters
Cabling in 1000BASE-CX
● Uses shielded twisted-pair cables (STP) with DE-9 or 8P8C connector
● Uses 2 pairs of STP
● Maximum segment length is 25 meters
Cabling in 1000BASE-SX
● Uses a pair of fiber optic cables carrying a shorter wavelength of 770 – 860 nm
● The maximum segment length varies from 220 – 550 meters depending upon the fiber properties.
● Uses multimode fibers
Cabling in 1000BASE-LX
● Uses a pair of fiber optic cables carrying a longer wavelength of 1270 – 1355 nm
● Maximum segment length is 500 meters
● Can cover distances up to 5 km
● Uses both single-mode and multimode fibers
Advantages of Gigabit Ethernet
The advantages of Gigabit Ethernet are as follows −
● Noise Immunity − The cabling used in an Ethernet network is well shielded and has
very large immunity to electrical noise generated by external sources.
● Reliability − Ethernet connections are highly reliable because there is no interference
from radio frequencies, so there are ultimately fewer disconnections and slowdowns.
Because the bandwidth is not shared between connected devices, there are no
bandwidth shortages either.
● Conceptually Simple − Ethernet is conceptually simple; early Ethernet was simply
daisy-chained with coax cable and "T" adapters, with no hubs, transceivers, or other
devices required.
● Speed − The speed provided by Ethernet is much higher than that of a wireless
connection, because Ethernet supports one-to-one connections. As a result, speeds of
10 Gbps or sometimes 100 Gbps can be achieved.
Disadvantages of Gigabit Ethernet
The disadvantages of Gigabit Ethernet are as follows −
● Installation − Ethernet connections are usually harder to install without expert
assistance, particularly where cables must pass through walls and between floors:
these areas must be drilled individually, and multiple cables must be run to the
various computers and switches.
● Mobility − Mobility is limited. Ethernet is best suited to areas where devices sit in
fixed locations.
● Connections − The number of connections is restricted: a single Ethernet connection
can link only a single device.
● Difficult Troubleshooting − Ethernet networks can be complex to troubleshoot. There
is no simple way to determine which node or cable segment is causing a problem, so
the network must be troubleshot by a process of elimination, which can be very slow.
25. Explain in detail about Fiber Channel.

Fibre Channel is a high-speed data transfer protocol that provides in-order, lossless
delivery of raw block data. It is designed to connect general purpose computers,
mainframes and supercomputers to storage devices. The technology supports
point-to-point connections (two devices directly connected to each other), though it is
most commonly found in switched fabric environments (devices connected by Fibre
Channel switches).

A storage area network (SAN) is a dedicated network used for storage connectivity
between host servers and shared storage - typically shared arrays that deliver block-level
data storage.

Fibre Channel SANs are typically deployed for low-latency applications best suited to
block-based storage, such as databases used for high-speed online transaction
processing (OLTP) in banking, online ticketing, and virtual environments. Fibre Channel
typically runs on optical fiber cables within and between data centers but can also run
on copper cabling.

Fibre Channel connects data storage to host servers, and Fibre Channel fabrics can be
extended over distance for Disaster Recovery and Business Continuance; most SANs are
designed with redundant fabrics.

Begun in 1988, Fibre Channel is standardized by the T11 Technical Committee of the
International Committee for Information Technology Standards (INCITS), an American
National Standards Institute (ANSI)-accredited standards committee. The Fibre Channel
Physical and Signaling Interface (FC-PH) was first published in 1994.
26. Draw the ARQ protocol in the DLL?

Automatic Repeat reQuest (ARQ) in Data Link Layer (DLL)


Also known as Automatic Repeat Query.
ARQ is an error-control method in which correction is made by retransmission of data.
Data transmission uses acknowledgements (messages sent by the receiver indicating that it has correctly
received a frame) and timeouts (specified periods of time allowed to elapse before an acknowledgment is
expected) to achieve reliable data transmission over an unreliable communication channel.
If the sender does not receive an acknowledgment before the timeout, it retransmits the frame until it
receives an acknowledgment or exceeds a predefined number of retransmissions.
E.g., IEEE 802.11 wireless networking uses ARQ retransmissions at the data link layer.

ARQ is a group of error-control protocols for the transmission of data over noisy or unreliable
communication networks.
These protocols reside in the Data Link Layer and in the Transport Layer of the OSI (Open Systems
Interconnection) model.
They are so named because they provide for automatic retransmission of frames that are corrupted or lost
during transmission.
They are used in two-way communication systems.
The sender waits for a positive acknowledgement before advancing to the next data item; the scheme is
therefore also called PAR (Positive Acknowledgement with Retransmission).

Examples
ARQ systems were widely used on shortwave radio to ensure reliable delivery of data such as for
telegrams. ARQs are used to provide reliable transmissions over unreliable upper layer services.
They are used in Global System for Mobile (GSM) communication.

Working Principle of ARQ


Under these protocols, the receiver sends an acknowledgement message back to the sender whenever it
receives a frame correctly.
If the sender does not receive the acknowledgement of a transmitted frame before a specified period of
time, i.e., a timeout occurs, it concludes that the frame was corrupted or lost in transit.
So, the sender retransmits the frame. This process is repeated until the frame is received correctly.

Types of ARQ Protocols


There are three ARQ protocols in the data link layer.

i) Stop – and – Wait ARQ


Provides unidirectional data transmission with flow control and error control mechanisms, appropriate for
noisy channels.
Also referred to as the alternating bit protocol.
It is a method used in two-way communication systems to send information between two connected
devices (sender and a receiver).
The function of this protocol is to send one frame at a time.
The sender keeps a copy of the sent frame. It then waits for a finite time to receive a positive
acknowledgement from the receiver. If the timer expires, the frame is retransmitted. If a positive
acknowledgement is received then the next frame is sent.
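The send-wait-retransmit cycle above can be sketched in a few lines of Python. This is a simplified simulation, not part of any standard: the loss rate, seed, and retry limit are assumed example values, and a random loss stands in for both a lost frame and a lost ACK.

```python
import random

def stop_and_wait(frames, loss_rate=0.3, seed=42, max_retries=10):
    """Sketch of Stop-and-Wait ARQ over a lossy channel: send one
    frame, keep a copy, wait for the ACK; a timeout (modelled here
    as a random loss) triggers retransmission of the same frame."""
    rng = random.Random(seed)
    delivered = []        # frames accepted by the receiver, in order
    transmissions = 0     # total send attempts, including retries
    for seq, payload in enumerate(frames):
        for _ in range(max_retries):
            transmissions += 1
            if rng.random() < loss_rate:   # frame or ACK lost in transit
                continue                    # timeout -> retransmit
            delivered.append((seq % 2, payload))  # 1-bit sequence number
            break                           # positive ACK -> next frame
        else:
            raise RuntimeError("retry limit reached; transmission aborted")
    return delivered, transmissions

delivered, sends = stop_and_wait(["F0", "F1", "F2"])
print([p for _, p in delivered])   # ['F0', 'F1', 'F2'] -- in order
print(sends)                       # >= 3: losses add extra transmissions
```

Note that only a 1-bit sequence number is needed, since at most one frame is outstanding at any time.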

ii) Go – Back – N ARQ


Provides for sending multiple frames before receiving the acknowledgement for the first frame.
The sending process continues to send several frames even without receiving an acknowledgement from
the receiver.
It uses the concept of sliding window, therefore also called Sliding Window Protocol.
The frames are sequentially numbered and a finite number of frames are sent.
The receiver keeps track of the sequence number of the next frame it expects to receive and sends that
sequence number with every acknowledgement to the sender.
The receiver will discard any frame that does not carry the sequence number it expects and will
resend an acknowledgement for the last correctly received frame.
If the acknowledgement of a frame is not received within the time period, all frames starting from that
frame are retransmitted.
There are only two possibilities for a frame not matching the expected sequence number:
a) it is a duplicate of an already-received frame, or
b) it is an out-of-order frame that should arrive later.
− The receiver recognizes either scenario and sends an acknowledgement accordingly.
Drawback:
It results in sending frames multiple times, if any frame was lost or found to be corrupted, then that frame
and all following frames in the send window will be re-transmitted.
Advantage:
This protocol is more efficient than Stop and Wait ARQ as there is no waiting time.
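The go-back rule (resend the lost frame and every frame sent after it) can be traced with a short sketch. The frame count, window size, and single-loss model below are assumed example values, not from the text:

```python
def go_back_n(num_frames, window, lost):
    """Trace Go-Back-N: 'lost' is the set of frames lost on their first
    transmission; on timeout the sender goes back and resends the lost
    frame and every frame transmitted after it."""
    sent = []                          # every transmission, in order
    base, next_seq = 0, 0
    first_try = set(range(num_frames))
    while base < num_frames:
        # fill the sending window without waiting for ACKs
        while next_seq < base + window and next_seq < num_frames:
            sent.append(next_seq)
            next_seq += 1
        if base in lost and base in first_try:
            first_try.discard(base)    # timeout: frame 'base' never ACKed
            next_seq = base            # go back: resend base..next_seq-1
        else:
            base += 1                  # cumulative ACK slides the window
    return sent

trace = go_back_n(num_frames=4, window=3, lost={1})
print(trace)   # [0, 1, 2, 3, 1, 2, 3] -- losing frame 1 resends 2 and 3 too
```

The trace makes the drawback above visible: one lost frame forces retransmission of every later frame already in flight, which Selective Repeat avoids.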

iii) Selective Repeat ARQ


Also known as Selective Reject ARQ
This protocol also provides for sending multiple frames before receiving the acknowledgement for the first
frame.
Sending process continues even after a frame is found to be corrupt or lost.
In this protocol, only the erroneous or lost frames are retransmitted, while the good frames are received and
buffered.

Advantages of ARQ
Error detection and correction are quite simple compared to forward error correction techniques.
Much simpler decoding equipment can be used compared to those techniques.

Disadvantages of ARQ
A medium or channel with a high error rate may cause excessive retransmission of frames or packets.
A high error rate in the channel may also lead to loss of information, reducing the efficiency
or throughput of the system.
27. Explain reservation ALOHA?
Reservation ALOHA, or R-ALOHA, is a channel access method for wireless (or other shared
channel) transmission that allows uncoordinated users to share a common transmission
resource. Reservation ALOHA (and its parent scheme, Slotted ALOHA) is a schema or rule set
for the division of transmission resources over fixed time increments, also known as slots. If
followed by all devices, this scheme allows the channel's users to cooperatively use a shared
transmission resource—in this case, it is the allocation of transmission time.

Reservation ALOHA is an effort to improve the efficiency of Slotted ALOHA. The


improvements with Reservation ALOHA are markedly shorter delays and ability to efficiently
support higher levels of utilization. As a contrast of efficiency, simulations have shown that
Reservation ALOHA exhibits less delay at 80% utilization than Slotted ALOHA at 20–36%
utilization.

The chief difference between Slotted and Reservation ALOHA is that with Slotted ALOHA, any
slot is available for utilization without regards to prior usage. Under Reservation ALOHA's
contention-based reservation schema, the slot is temporarily considered "owned" by the station
that successfully used it. Additionally, a station's ownership of a slot ends once it has completed
its transmission. As a rule, idle slots are considered available to all stations, which may then
implicitly reserve (utilize) the slot on a contention basis.

The R-ALOHA scheme can be viewed as a combination of the slotted ALOHA and TDM protocols.
There are many versions of the R-ALOHA scheme. Here we briefly describe one of them due
to [Pah95].
Fig. 6.1. The example of reservation ALOHA (M=4, R=5).

The example of R-ALOHA scheme is presented in Fig. 6.1. The system has two modes of
operation: unreserved mode and reserved mode. In the unreserved mode, the time axis is divided
into short equal-length subslots for making reservations. Users with data to send, transmit short
reservation requests in the reservation subslots using the slotted ALOHA protocol. The
reservation request can ask for a single slot or multiple slots. After transmitting a reservation
request, a user waits for a positive acknowledgment (ACK). The reservation acknowledgment
advises the requesting user where to place its first data packet. The system then switches to the
reserved mode.

In the reserved mode the time axis is divided into fixed-length frames. Each frame consists of M+1
equal-length slots of which the first M slots are used for message transmission (message slots) and
the last slot is subdivided into R short reservation subslots used for reservation. A sending user that
has been granted a reservation sends its packets in successive message slots, skipping over the
reservation subslots when they are encountered. When there are no reservations taking place, the
system returns to the unreserved mode.

In the R-ALOHA system, the contention is limited to the short reservation subslots, while the
transmission in the message slots is contention-free. The choice of the number of reservation
subslots relative to the number of message slots is a design tradeoff. The number of
reservation subslots should be small enough to keep system overhead low, but large enough to
serve the expected number of reservation requests.

In the R-ALOHA scheme the control of the system is distributed among all users in the network.
Because all reservation messages are heard by all users in the network, each user maintains
information on the queue of outstanding reservations for all other users in the network as well as
for its own reservation. When the queue length drops to zero, the system returns to the
unreserved mode, in which there are reservation subslots only.

In the example presented in Fig. 6.1 the user reserves three message slots. The reservation
acknowledgment advises the user when to send its first data packet. The user knows that the slot next
to the first packet slot comprises five reservation subslots, so it does not transmit its packet during
this time. The second and third data packets are sent in the following two slots. Because there are no
more reserved slots, the system returns to the unreserved mode.
28. Show that the channel efficiency in slotted ALOHA is twice that in pure
ALOHA?
Aloha-
There are two different versions of Aloha-
• Pure Aloha
• Slotted Aloha

1. Pure Aloha-
• It allows the stations to transmit data at any time, whenever they want.
• After transmitting a data packet, the station waits for some time.
Case-01:
• The transmitting station receives an acknowledgement from the receiving station.
• In this case, the transmitting station assumes that the transmission was successful.
Case-02:
• The transmitting station does not receive any acknowledgement within the specified time
from the receiving station.
• In this case, the transmitting station assumes that the transmission was unsuccessful.
Then,
• Transmitting station uses a Back Off Strategy and waits for some random amount of
time.
• After back off time, it transmits the data packet again.
• It keeps trying until the back off limit is reached after which it aborts the
transmission.
Efficiency-

Efficiency of Pure Aloha, η = G × e^(-2G)

where G = number of stations willing to transmit data (the offered load per frame time)


Maximum Efficiency-
For maximum efficiency,
• We put dη / dG = 0
• Maximum value of η occurs at G = 1/2
• Substituting G = 1/2 in the above expression, we get-
Maximum efficiency of Pure Aloha
= 1/2 × e^(-2 × 1/2)
= 1 / 2e
≈ 0.184
= 18.4%
Thus,

The maximum efficiency of Pure Aloha is quite low due to the large number of collisions.
2. Slotted Aloha-
• Slotted Aloha divides the time of shared channel into discrete intervals called as time
slots.
• Any station can transmit its data in any time slot.
• The only condition is that station must start its transmission from the beginning of
the time slot.
• If the beginning of the slot is missed, then station has to wait until the beginning of the
next time slot.
• A collision may occur if two or more stations try to transmit data at the beginning of
the same time slot.
Efficiency-

Efficiency of Slotted Aloha, η = G × e^(-G)

where G = number of stations willing to transmit data at the beginning of the same time slot
Maximum Efficiency-
For maximum efficiency,
• We put dη / dG = 0
• Maximum value of η occurs at G = 1
• Substituting G = 1 in the above expression, we get-
Maximum efficiency of Slotted Aloha
= 1 × e^(-1)
= 1 / e
≈ 0.368
= 36.8%
Thus,
The maximum efficiency of Slotted Aloha (1/e ≈ 36.8%) is exactly twice that of Pure Aloha (1/2e ≈ 18.4%), because slotting halves the vulnerable period from two frame times to one, so fewer transmissions suffer collisions.
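The two throughput formulas and their maxima can be checked numerically. This is only a sanity check of the standard expressions S = G·e^(-2G) (pure) and S = G·e^(-G) (slotted), not a derivation:

```python
import math

def pure_aloha(G):     # throughput S(G) = G * e^(-2G)
    return G * math.exp(-2 * G)

def slotted_aloha(G):  # throughput S(G) = G * e^(-G)
    return G * math.exp(-G)

# Setting dS/dG = 0 gives G = 1/2 for pure and G = 1 for slotted ALOHA.
s_pure = pure_aloha(0.5)     # 1/(2e)
s_slot = slotted_aloha(1.0)  # 1/e
print(round(s_pure, 3), round(s_slot, 3))   # 0.184 0.368
print(s_slot / s_pure)                      # 2.0 -- slotted is twice pure
```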

29. Explain CSMA-CD protocol with a neat diagram?

The CSMA method does not specify the procedure to follow after a collision. Carrier sense multiple access with
collision detection (CSMA/CD) augments the algorithm to handle collisions.
In this method, a station monitors the medium after it sends a frame to see if the transmission was
successful. If so, the station is finished. If, however, there is a collision, the frame is sent again.
To better understand CSMA/CD, let us look at the first bits transmitted by the two stations involved in the
collision. Although each station continues to send bits in the frame until it detects the collision, we show
what happens as the first bits collide. In Figure 5.38, stations A and C are involved in the collision.

At time t1, station A has executed its persistence procedure and starts sending the bits of its frame. At time
t2, station C has not yet sensed the first bit sent by A. Station C executes its persistence procedure and starts
sending the bits in its frame, which propagate both to the left and to the right. The collision occurs sometime
after time t2. Station C detects a collision at time t3 when it receives the first bit of A’s frame. Station C
immediately (or after a short time, but we assume immediately) aborts transmission. Station A detects
collision at time t4 when it receives the first bit of C’s frame; it also immediately aborts transmission.
Looking at the figure, we see that A transmits for the duration t4 − t1; C transmits for the duration t3 − t2.
Now that we know the time durations for the two transmissions, we can show a more complete graph in
Figure 5.39.

Minimum Frame Size


For CSMA/CD to work, we need a restriction on the frame size. Before sending the last bit of the frame,
the sending station must detect a collision, if any, and abort the transmission. This is so because the
station, once the entire frame is sent, does not keep a copy of the frame and does not monitor the line for
collision detection. Therefore, the frame transmission time Tfr must be at least two times the maximum
propagation time Tp. To understand the reason, let us think about the worst-case scenario. If the two
stations involved in a collision are the maximum distance apart, the signal from the first takes time Tp to
reach the second, and the effect of the collision takes another time TP to reach the first. So the
requirement is that the first station must still be transmitting after 2Tp.
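The condition Tfr >= 2Tp translates directly into a minimum frame size of bandwidth × 2Tp. Below is a small sketch; the figures (10 Mbps, 2500 m of cable, propagation speed 2 × 10^8 m/s) are assumed illustrative values, not taken from the text:

```python
def min_frame_bits(bandwidth_bps, distance_m, prop_speed_mps=2e8):
    """Smallest frame size (in bits) satisfying Tfr >= 2 * Tp: the
    sender must still be transmitting when the collision signal
    returns from the far end of the medium."""
    tp = distance_m / prop_speed_mps       # one-way propagation time Tp
    return bandwidth_bps * 2 * tp          # bits emitted during 2 * Tp

# Illustrative figures (assumed): 10 Mbps over 2500 m of cable.
print(min_frame_bits(10e6, 2500))   # 250.0 bits
```

Any frame shorter than this could finish transmitting before the collision is heard, so the sender would never detect it.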
30. What are collision free protocols? Explain basic bit map method, BRAP and
binary count down?

Although most collisions can be avoided in CSMA/CD, they can still occur during the contention
period. Collisions during the contention period adversely affect system performance; this happens
when the cable is long and the packets are short. The problem becomes serious as fiber-optic
networks come into use. Here we shall discuss some protocols that resolve contention without
collisions.

Bit-map Protocol

Binary Countdown

Limited Contention Protocols

The Adaptive Tree Walk Protocol

Pure and slotted Aloha, CSMA and CSMA/CD are Contention based Protocols:

Try-if collide-Retry

No guarantee of performance

What happen if the network load is high?

Collision Free Protocols:

Pay constant overhead to achieve performance guarantee

Good when network load is high

1. Bit-map Protocol:
The bit-map protocol is a collision-free protocol. In the bit-map method, each contention period
consists of exactly N slots, one per station. If station i has a frame to send, it transmits a 1 bit in
the i-th slot. For example, if station 2 has a frame to send, it transmits a 1 bit during the second
slot. After all N slots have passed, every station knows which stations wish to transmit, and they
then transmit one after another in numerical order, so no collisions can occur.
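The contention period of the bit-map protocol can be sketched as a toy model. The 8-station setup and the set of stations wanting to send are assumed example values:

```python
def bitmap_contention(n_stations, wants_to_send):
    """One contention period of the basic bit-map protocol: station i
    announces itself by setting bit i; the frames are then transmitted
    in numerical order, so no collisions are possible."""
    bitmap = [1 if i in wants_to_send else 0 for i in range(n_stations)]
    order = [i for i, bit in enumerate(bitmap) if bit == 1]
    return bitmap, order

bitmap, order = bitmap_contention(8, {1, 4, 7})
print(bitmap)  # [0, 1, 0, 0, 1, 0, 0, 1]
print(order)   # stations transmit as 1, then 4, then 7
```

The cost of this guarantee is the fixed N-bit overhead of every contention period, paid even when few stations have frames to send.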
31. Explain the following protocols: limited contention protocol, adaptive tree walk
protocol?

Adaptive Tree Walk Protocol is a technique for transmitting data over shared channels that
combines the advantages of collision based protocols and collision free protocols.
In computer networks, when more than one station tries to transmit simultaneously via a shared
channel, the transmitted data is garbled, an event called collision. In collision based protocols like
ALOHA, all stations are permitted to transmit a frame without trying to detect whether the
transmission channel is idle or busy. This works very well under light loads. Under heavy loads,
collision-free protocols are suitable, since channel access is resolved in the contention period,
which eliminates the possibility of collisions.
In the adaptive tree walk protocol, the stations are partitioned into groups in a hierarchical manner.
The contention period is divided into discrete time slots, and for each slot the contention rights of
the stations are limited. Under light loads, all the stations can participate for contention each slot
like ALOHA. However, under heavy loads, only a group can try for a given slot.

Working Principle

In adaptive tree walk protocol, the stations or nodes are arranged in the form of a binary tree as
shown in the diagram. Here, the internal nodes (marked from 0 to 6) represent the groups while the
leaf nodes (marked A to H) are the stations contending for network access.

Initially all nodes (A, B ……. G, H) are permitted to compete for the channel. If a node is
successful in acquiring the channel, it transmits its frame. In case of collision, the nodes are divided
into two groups −

Stations under the group 1, i.e. A, B, C, D

Stations under the group 2, i.e. E, F, G, H

Nodes belonging to only one group at a time are permitted to compete. Say, for slot 1, all stations
under group 1 are allowed to contend. If one of the stations successfully acquires the channel, it
transmits to completion. In the next slot, i.e. slot 2, all stations under group 2 can contend.
However, if there is a collision, then the stations are further divided into groups as follows −

Stations under the group 3, i.e. A, B

Stations under the group 4, i.e. C, D

Stations under the group 5, i.e. E, F

Stations under the group 6, i.e. G, H

In order to locate the contending stations, a depth-first search of the tree is used. The same
principle of contention is applied, but only to those groups that have contending stations. The
division continues as long as collisions occur, until each group contains only one node.
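The recursive halving described above can be sketched as a depth-first probe of the station tree. The station names and the set of ready stations are assumed example values matching the A–H diagram:

```python
def tree_walk(stations, ready):
    """Resolve contention by depth-first splitting: if a probed group
    contains more than one ready station (a collision), the group is
    halved and each half is probed in turn in later slots."""
    slots = []                       # (group probed, ready count) per slot
    def probe(group):
        active = [s for s in group if s in ready]
        slots.append((tuple(group), len(active)))
        if len(active) <= 1:
            return                   # idle slot or successful transmission
        mid = len(group) // 2        # collision: split and recurse (DFS)
        probe(group[:mid])
        probe(group[mid:])
    probe(stations)
    return slots

trace = tree_walk(list("ABCDEFGH"), ready={"C", "G"})
for group, n in trace:
    status = "collision" if n > 1 else ("success" if n == 1 else "idle")
    print("".join(group), "->", status)
```

With only C and G ready, the root slot collides, then the halves ABCD and EFGH each succeed in one slot: three slots in total, fewer than the eight a pure bit-map pass would use.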
Limited Contention Protocols are the media access control (MAC) protocols that combines the
advantages of collision based protocols and collision free protocols. They behave like slotted
ALOHA under light loads and bitmap protocols under heavy loads.
Concept

In computer networks, when more than one station tries to transmit simultaneously via a shared
channel, the transmitted data is garbled, an event called collision. In collision based protocols like
ALOHA, all stations are permitted to transmit a frame without trying to detect whether the
transmission channel is idle or busy. In slotted ALOHA, the shared channel is divided into a
number of discrete time intervals called slots. Any station having a frame can start transmitting at
the beginning of a slot. Since this works very well under light loads, limited contention protocols
behave like slotted ALOHA under low loads.
However, with the increase in loads, there occurs exponential growth in number of collisions and so
the performance of slotted ALOHA degrades rapidly. So, under high loads, collision free protocols
like bitmap protocols work best. In collision free protocols, channel access is resolved in the
contention period and so the possibilities of collisions are eliminated. In bit map protocol, the
contention period is divided into N slots, where N is the total number of stations sharing the
channel. If a station has a frame to send, it sets the corresponding bit in the slot. So, before
transmission, each station knows whether the other stations want to transmit. Collisions are avoided
by mutual agreement among the contending stations on who gets the channel.

Working Principle

Limited contention protocols divide the contending stations into groups, which may or may not be
disjoint. At slot 0, only stations in group 0 can compete for channel access. At slot 1, only stations
in group 1 can compete for channel access and so on. In this process, if a station successfully
acquires the channel, then it transmits its data frame. If there is a collision or there are no stations
competing for a given slot in a group, the stations of the next group can compete for the slot.
By dynamically changing the number of groups and the number of stations allotted in a group
according to the network load, the protocol changes from slotted ALOHA under low loads to bit
map protocol under high loads. Under low loads, only one group is there containing all stations,
which is the case of slotted ALOHA. As the load increases, more groups are added and the size of
each group is reduced. When the load is very high, each group has just one station, i.e. only one
station can compete at a slot, which is the case of bit map protocol.
The performance of a limited contention protocol depends heavily on the algorithm used to
dynamically adjust the group configuration to changes in the network environment.
Example − An example of limited contention protocol is Adaptive Tree Walk Protocol.
32. Bring out the differences: wireless LANs (802.11) and broadband wireless
(802.16) frame formats?

IEEE 802.16 is a standard that defines Wireless Interoperability for Microwave Access (WiMAX), a
wireless technology that delivers network services to the last mile of broadband access.
The IEEE 802.11 standard that lays down the specifications of the wireless local area networks (WLAN)
or Wi-Fi, that connects wireless devices within a limited area.

The following chart gives a comparison between 802.16 and 802.11 −

Technology:
  IEEE 802.16 − Defines WiMAX.
  IEEE 802.11 − Defines WLANs (WiFi).

Application Area:
  IEEE 802.16 − Last mile of broadband wireless access.
  IEEE 802.11 − Limited area forming wireless LANs.

Versions of the Standard:
  IEEE 802.16 − 802.16a, 802.16d, 802.16e, 802.16m, etc.
  IEEE 802.11 − 802.11a, 11b, 11g, 11n, 11ac, 11ad, etc.

Domain of Usage:
  IEEE 802.16 − Used over a wide area, mostly outdoors.
  IEEE 802.11 − Used over a limited area, mostly indoors.

Area of Coverage:
  IEEE 802.16 − WiMAX generally covers 7 km to 50 km, so it can provide broadband services to a large number of customers.
  IEEE 802.11 − WiFi has a smaller coverage area of 30 to 100 meters, which enables devices within this range to connect to network services.

Data Rate:
  IEEE 802.16 − Typically 5 bps/Hz, with a maximum of 100 Mbps in a 20 MHz channel.
  IEEE 802.11 − 2.7 bps/Hz, with a maximum of 54 Mbps in a 20 MHz channel.

Frequency Band:
  IEEE 802.16 − Operates in the 2 GHz to 11 GHz range.
  IEEE 802.11 − Operates at 2.4 GHz.

Encryption:
  IEEE 802.16 − Mandatory DES (Data Encryption Standard) with optional AES (Advanced Encryption Standard).
  IEEE 802.11 − RC4 (Rivest Cipher 4); 802.11i uses AES.

QoS (Quality of Service):
  IEEE 802.16 − A number of QoS options are available, such as UGS, rtPS, nrtPS, and BE. WiMAX can bring the Internet connection needed to serve local WiFi networks.
  IEEE 802.11 − Does not provide QoS by itself; however, 802.11e lays down QoS.

Ubiquitous Services:
  IEEE 802.16 − Provides ubiquitous networking services.
  IEEE 802.11 − Cannot provide ubiquitous services.

Scalability:
  IEEE 802.16 − Users can scale from one to hundreds of Consumer Premises Equipment (CPEs), where one CPE can have unlimited subscribers.
  IEEE 802.11 − Users can scale from one to tens per CPE.

33. Explain with a neat diagram Manchester and differential Manchester encoding?

Manchester encoding is a synchronous clock encoding technique used by the physical layer of the
Open System Interconnection [OSI] to encode the clock and data of a synchronous bit stream.

The binary data to be transmitted over the cable are not sent as NRZ [Non-return-to-zero].
Non-return-to-zero [NRZ] –
NRZ code’s voltage level is constant during a bit interval. When there is a long sequence of 0s and
1s, there is a problem at the receiving end. The problem is that the synchronization is lost due to a
lack of transmissions.
It is of 2 types:
NRZ-level encoding –
The polarity of signals changes when the incoming signal changes from ‘1’ to ‘0’ or from ‘0’ to ‘1’.
It considers the first bit of data as polarity change.

NRZ-Inverted/ Differential encoding –


In this, a transition at the beginning of the bit interval represents a 1, and no transition at the
beginning of the bit interval represents a 0.

Characteristics of Manchester Encoding –

A logic 0 is indicated by a 0-to-1 transition at the center of the bit and a logic 1 by a 1-to-0 transition (the G.E. Thomas convention; IEEE 802.3 defines the opposite mapping).

The signal transitions do not always occur at the ‘bit boundary’ but there is always a transition at
the center of each bit.

Differential physical layer transmission does not employ an inverting line driver to convert the
binary digits into an electrical signal, and therefore the signal on the wire is not the opposite of
the encoder output.

The Manchester Encoding is also called Biphase code as each bit is encoded by a positive 90
degrees phase transition or by negative 90 degrees phase transition.

The Digital Phase-Locked Loop (DPLL) extracts the clock signal and recovers the value and
timing of each bit. The transmitted bitstream must contain a high density of bit transitions.

The Manchester Encoding consumes twice the bandwidth of the original signal.

The advantage of the Manchester code is that the DC component of the signal carries no
information. This allows the signal to pass through transformer-coupled media, which do not
carry DC. E.g., for a 10 Mbps LAN the signal spectrum lies between 5 and 20 MHz.

Another example: the bits can be determined by observing the transitions.
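Both encodings can be sketched in a few lines, using the 0 → low-to-high convention stated above for Manchester, and one common convention for differential Manchester (802.5-style, where a 0 produces a transition at the start of the bit interval and a 1 does not; conventions vary between texts):

```python
def manchester(bits):
    """G.E. Thomas convention: 0 -> low-to-high ("01"),
    1 -> high-to-low ("10"); every bit has a mid-bit transition."""
    return "".join("01" if b == "0" else "10" for b in bits)

def differential_manchester(bits):
    """Mid-bit transition always present; a transition at the START
    of the interval encodes 0, no start transition encodes 1."""
    out, level = [], 1                 # assume the line starts high
    for b in bits:
        if b == "0":
            level ^= 1                 # transition at the bit boundary
        out.append(str(level))         # first half of the interval
        level ^= 1                     # mandatory mid-bit transition
        out.append(str(level))         # second half of the interval
    return "".join(out)

print(manchester("1011"))               # 10011010
print(differential_manchester("1011"))  # depends on starting level
```

Note that twice as many signal elements as bits are produced, which is why Manchester coding consumes twice the bandwidth of the original signal.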


34. What are the objectives of the scrambling technique? Encode the following
1001100011001110 using Manchester, differential Manchester, pseudoternary, B8ZS,
HDB3, NRZ and bipolar techniques?

Scrambling is a technique that does not increase the number of bits and does provide
synchronization. A problem with techniques like Bipolar AMI (Alternate Mark Inversion) is that a
continuous sequence of zeros creates synchronization problems; one solution to this is scrambling.
There are two common scrambling techniques:

B8ZS(Bipolar with 8-zero substitution)

HDB3(High-density bipolar3-zero)

B8ZS(Bipolar with 8-zero substitution) –


This technique is similar to Bipolar AMI, except that when eight consecutive zero-level voltages
are encountered they are replaced by the sequence "000VB0VB".
V (Violation) is a non-zero voltage with the same polarity as the previous non-zero voltage; thus
it violates the general AMI rule.

B(Bipolar), also non-z…


38. Spanning Tree Protocol (STP) is a communication protocol operating at the data link layer of
the OSI model to prevent bridge loops and the resulting broadcast storms. It creates a loop-free
topology for Ethernet networks.

Working Principle

A bridge loop is created when there are more than one paths between two nodes in a given network.
When a message is sent, particularly when a broadcast is done, the bridges repeatedly rebroadcast
the same message flooding the network. Since a data link layer frame does not have a time-to-live
field in the header, the broadcast frame may loop forever, thus swamping the channels.
Spanning tree protocol creates a spanning tree by disabling all links that form a loop or cycle in the
network. This leaves exactly one active path…
35. Discuss CSMA-CD protocol in IEEE 802.3(Ethernet). Draw and Explain the
format. Explain binary exponential back off algorithm and discuss its performance?
CSMA/CD (Carrier Sense Multiple Access / Collision Detection) is a media access control method that
was widely used in early Ethernet LANs, when they used a shared bus topology and each node
(computer) was connected by coaxial cable. Nowadays Ethernet is full duplex and CSMA/CD is not
used, as the topology is either star (connected via a switch or router) or point-to-point (direct
connection), though it is still supported.

Consider a scenario where there are ‘n’ stations on a link and all are waiting to transfer data through that
channel. In this case, all ‘n’ stations would want to access the link/channel to transfer their own data.
A problem arises when more than one station transmits data at the same moment. In this case, the
data from the different stations will collide.

CSMA/CD is one such technique where different stations that follow this protocol agree on some terms
and collision detection measures for effective transmission. This protocol decides which station will
transmit when so that data reaches the destination without corruption.

How CSMA/CD works?

Step-01: Sensing the Carrier-

● Any station willing to transmit the data senses the carrier.


● If it finds the carrier free, it starts transmitting its data packet otherwise not.

How?

● Each station can sense the carrier only at its point of contact with the carrier.
● It is not possible for any station to sense the entire carrier.
● Thus, there is a huge possibility that a station might sense the carrier free even when it is actually
not.

Example-

Consider the following scenario-


At the current instance,

● If station A senses the carrier at its point of contact, then it will find the carrier free. But the carrier
is actually not free because station D is already transmitting its data.
● If station A starts transmitting its data now, then it might lead to a collision with the data
transmitted by station D.

Step-02: Detecting the Collision-

In CSMA / CD,

● It is the responsibility of the transmitting station to detect the collision.


● For detecting the collision, CSMA / CD implements the following condition.
● This condition is followed by each station-
● Transmission delay >= 2 x Propagation delay
According to this condition,

● Each station must transmit the data packet of size whose transmission delay is at least twice its
propagation delay.
● If the size of data packet is smaller, then collision detection would not be possible.

Understanding the Condition to Detect Collision with Example

● Consider at time 10:00 am, station A senses the carrier.


● It finds the carrier free and starts transmitting its data packet to station D.
● Let the propagation delay be 1 hour. (We are considering station D for the worst case)
● Let us consider the scenario at time 10:59:59:59 when the packet is about to reach the station D.
● At this time, station D senses the carrier.
● It finds the carrier free and starts transmitting its data packet.
● Now, as soon as station D starts transmitting its data packet, a collision occurs with the data
packet of station A at time 11:00 am.
● After collision occurs, the collided signal starts travelling in the backward direction.
● The collided signal takes 1 hour to reach the station A after the collision has occurred.
● For station A to detect the collided signal, it must be still transmitting the data.
● So, the transmission delay of station A must be at least 1 hour + 1 hour = 2 hours to detect the collision.
● That is why, for detecting the collision, condition is Tt >= 2Tp.

Two cases are possible-

Case-01:

● If no collided signal comes back during the transmission,


● It indicates that no collision has occurred.
● The data packet is transmitted successfully.
Case-02:

● If the collided signal comes back during the transmission,


● It indicates that the collision has occurred.
● The data packet is not transmitted successfully.

Step-03: Releasing Jam Signal-

● Jam signal is a 48-bit signal.


● It is released by the transmitting stations as soon as they detect a collision.
● It alerts the other stations not to transmit their data immediately after the collision.
● Otherwise, there is a possibility of collision again with the same data packet.
● Ethernet sends the jam signal at a frequency other than the frequency of data signals.
● This ensures that jam signal does not collide with the data signals undergone collision.

Step-04: Waiting for Back Off Time-

● After the collision, the transmitting station waits for some random amount of time called as back
off time.
● After back off time, it tries transmitting the data packet again.
● If again the collision occurs, then station again waits for some random back off time and then
tries again.
● The station keeps trying until the back off time reaches its limit.
● After the limit is reached, station aborts the transmission.
● Back off time is calculated using Back Off Algorithm.

The following CSMA / CD flowchart represents the CSMA / CD procedure-


Characteristics of CSMA/CD

● CSMA/CD has carrier sense i.e., it senses the channel for transmissions to check if it is busy or not.
● After the collision is detected, the stations have to wait for a random amount of time before
retransmitting the frames.
● A jam signal is used to indicate the other stations that collision has taken place and that the
stations have to wait.
● A higher transmission priority mechanism can be implemented.

Advantages of CSMA/CD

● Collisions are detected in a shorter span of time.


● Wasteful transmission of frames is avoided thus all available bandwidth is utilized if possible.
● It has low overhead.

Disadvantages of CSMA/CD

● It is not possible over long distances and large networks.


● It does not reduce the possibility of collision and collisions degrade performance.
● The performance degrades as the number of stations increases: more stations mean more
contention and more frequent collisions.

Binary exponential back off algorithm

Back-off algorithm is a collision resolution mechanism which is used in random access MAC protocols
(CSMA/CD). This algorithm is generally used in Ethernet to schedule re-transmissions after collisions.

If a collision takes place between 2 stations, they may restart transmission as soon as they can after the
collision. This will always lead to another collision, forming an infinite loop of collisions and a
deadlock. To prevent such a scenario, the back-off algorithm is used.

Let us consider a scenario of 2 stations A and B transmitting some data:


After a collision, time is divided into discrete slots (Tslot) whose length is equal to 2t, where t is the
maximum propagation delay in the network.

The stations involved in the collision randomly pick an integer from the set K i.e. {0, 1}. This set is called
the contention window. If the sources collide again because they picked the same integer, the contention
window size is doubled and it becomes {0, 1, 2, 3}. Now the sources involved in the second collision
randomly pick an integer from the set {0, 1, 2, 3} and wait for that number of time slots before trying
again. Before they try to transmit, they listen to the channel and transmit only if the channel is idle. This
causes the source which picked the smallest integer in the contention window to succeed in transmitting
its frame.

So, Back-off algorithm defines a waiting time for the stations involved in collision, i.e., for how much
time the station should wait to re-transmit.

Waiting time = back–off time


Let n = collision number or re-transmission serial number.
Then,
Waiting time = K * Tslot
where K is an integer chosen at random from {0, 1, ..., 2^n – 1}
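
The formula above can be sketched directly (Python, with `Tslot` passed in as a plain number): the contention window after the n-th collision, and a waiting time drawn from it.

```python
import random

def contention_window(n):
    """The set K after the n-th collision: {0, 1, ..., 2**n - 1}."""
    return list(range(2 ** n))

def backoff_time(n, t_slot):
    """Waiting time = K * Tslot, with K picked at random from the window."""
    k = random.choice(contention_window(n))
    return k * t_slot

contention_window(1)   # [0, 1]         -- after the first collision
contention_window(2)   # [0, 1, 2, 3]   -- window doubles after the second
```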
Example –

Case-1:
Suppose 2 stations A and B start transmitting data (Packet 1) at the same time; then a collision occurs. So,
the collision number n for both their data (Packet 1) = 1. Now, both stations randomly pick an integer
from the set K, i.e. {0, 1}.

● When both A and B choose K = 0


–> Waiting time for A = 0 * Tslot = 0
Waiting time for B = 0 * Tslot = 0

Therefore, both stations will transmit at the same time and hence collision occurs.

● When A chooses K = 0 and B chooses K = 1


–> Waiting time for A = 0 * Tslot = 0
Waiting time for B = 1 * Tslot = Tslot

Therefore, A transmits the packet and B waits for time Tslot before transmitting, and hence A wins.

● When A chooses K = 1 and B chooses K = 0


–> Waiting time for A = 1 * Tslot = Tslot
Waiting time for B = 0 * Tslot = 0

Therefore, B transmits the packet and A waits for time Tslot before transmitting, and hence B wins.

● When both A and B choose K = 1


–> Waiting time for A = 1 * Tslot = Tslot
Waiting time for B = 1 * Tslot = Tslot

Therefore, both will wait for the same time Tslot and then transmit. Hence, collision occurs.

Probability that A wins = 1/4

Probability that B wins = 1/4


Probability of collision = 2/4

Case-2:
Assume that A wins in Case 1 and transmitted its data (Packet 1). Now, as soon as B transmits its packet
1, A transmits its packet 2. Hence, collision occurs. Now collision no. n becomes 1 for packet 2 of A and
becomes 2 for packet 1 of B.
For packet 2 of A, K = {0, 1}
For packet 1 of B, K = {0, 1, 2, 3}

Probability that A wins = 5/8

Probability that B wins = 1/8

Probability of collision = 2/8

So, the probability of collision decreases as compared to Case 1.
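
The probabilities in both cases can be checked by enumerating every equally likely pair of picks, which is a small verification sketch rather than part of the algorithm itself:

```python
from itertools import product
from fractions import Fraction

def outcome_probs(window_a, window_b):
    """Enumerate all equally likely (K_A, K_B) picks; the smaller pick
    transmits first and wins, and equal picks collide again."""
    pairs = list(product(window_a, window_b))
    total = len(pairs)
    a_wins = sum(1 for a, b in pairs if a < b)
    b_wins = sum(1 for a, b in pairs if b < a)
    collide = sum(1 for a, b in pairs if a == b)
    return (Fraction(a_wins, total),
            Fraction(b_wins, total),
            Fraction(collide, total))

case1 = outcome_probs([0, 1], [0, 1])        # A wins 1/4, B wins 1/4, collision 2/4
case2 = outcome_probs([0, 1], [0, 1, 2, 3])  # A wins 5/8, B wins 1/8, collision 2/8
```

The enumeration confirms that widening B's contention window in Case 2 lowers the collision probability from 2/4 to 2/8, while skewing the odds heavily toward A.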

Advantage –

● Collision probability decreases exponentially.

Disadvantages –

● Capture effect: the station that wins once keeps on winning.


● Works only for 2 stations or hosts.

36. Discuss the following: repeaters, hubs, bridges, switches, routers
and gateways?

Repeaters
A repeater is a device that operates only in the physical layer. Signals that carry information within a
network can travel a fixed distance before attenuation endangers the integrity of the data. A repeater receives
a signal and, before it becomes too weak or corrupted, regenerates and retimes the original bit pattern. The
repeater then sends the refreshed signal. In the past, when Ethernet LANs were using bus topology, a
repeater was used to connect two segments of a LAN to overcome the length restriction of the coaxial cable.
Today, however, Ethernet LANs use star topology. In a star topology, a repeater is a multiport device, often
called a hub, that can be used to serve as the connecting point and at the same time function as a repeater.

Figure 5.84 shows that when a packet from station A to station B arrives at the hub, the signal representing
the frame is regenerated to remove any possible corrupting noise, but the hub forwards the packet from all
outgoing ports except the one on which the signal was received. In other words, the frame is broadcast. All
stations in the LAN receive the frame, but only station B keeps it. The rest of the stations discard it. Figure
5.84 shows the role of a repeater or a hub in a switched LAN. The figure definitely shows that a hub does
not have a filtering capability; it does not have the intelligence to find from which port the frame should be
sent out. A hub or a repeater is a physical-layer device. They do not have a link-layer address and they do
not check the link-layer address of the received frame. They just regenerate the corrupted bits and send them
out from every port.
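
The hub behaviour described above amounts to nothing more than repeating the frame on every other port. A minimal sketch, with illustrative port numbers:

```python
def hub_forward(frame, in_port, ports):
    """A hub has no filtering: it regenerates the frame and repeats it on
    every port except the one it arrived on."""
    return {p: frame for p in ports if p != in_port}

# A frame from station A arriving on port 1 of a 4-port hub is broadcast
# out of ports 2, 3 and 4; every attached station receives it, and all
# but the addressed station discard it.
out = hub_forward(b"frame-from-A", in_port=1, ports=[1, 2, 3, 4])
```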

Switches
Link-Layer Switches
A link-layer switch operates in both the physical and the data-link layers. As a physical-layer device, it
regenerates the signal it receives. As a link-layer device, the link-layer switch can check the MAC addresses
(source and destination) contained in the frame.
Filtering
One may ask what the difference in functionality is between a link-layer switch and a hub. A link-layer
switch has filtering capability. It can check the destination address of a frame and can decide from which
outgoing port the frame should be sent.
Let us give an example. In Figure 5.85, we have a LAN with four stations that are connected to a link-layer
switch. If a frame destined for station 71:2B:13:45:61:42 arrives at port 1, the link-layer switch consults its
table to find the departing port.
According to its table, frames for 71:2B:13:45:61:42 should be sent out only through port 2; therefore, there
is no need for forwarding the frame through other ports.
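
Filtering can be sketched as a simple table lookup. The MAC address and port numbers below mirror the example in the text; flooding a destination that is not yet in the table is an assumption about standard switch behaviour, not something stated above.

```python
def switch_forward(dst_mac, in_port, table, all_ports):
    """Filtering: a known destination leaves through exactly one port;
    an unknown destination is flooded out of every other port."""
    out_port = table.get(dst_mac)
    if out_port is not None:
        return [] if out_port == in_port else [out_port]
    return [p for p in all_ports if p != in_port]

# Mirroring the example above: the table maps the station to port 2.
table = {"71:2B:13:45:61:42": 2}
ports = switch_forward("71:2B:13:45:61:42", in_port=1,
                       table=table, all_ports=[1, 2, 3, 4])
# ports is [2]: the frame is sent out through port 2 only.
```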
Transparent Switches
A transparent switch is a switch in which the stations are completely unaware of the switch's existence. If a
switch is added to or deleted from the system, reconfiguration of the stations is unnecessary. According to
the IEEE 802.1d specification, a system equipped with transparent switches must meet three criteria:
● Frames must be forwarded from one station to another.
● The forwarding table is automatically made by learning frame movements in the network.
● Loops in the system must be prevented.
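
The second criterion, building the forwarding table automatically, can be sketched as a learning switch that records each frame's source MAC address against its arrival port. The class, station names, and port numbers are hypothetical.

```python
class LearningSwitch:
    """Transparent-switch sketch: the forwarding table is built by noting
    which port each source MAC address arrives on."""

    def __init__(self, ports):
        self.ports = ports
        self.table = {}                  # MAC address -> outgoing port

    def receive(self, src_mac, dst_mac, in_port):
        self.table[src_mac] = in_port    # learn the sender's location
        out = self.table.get(dst_mac)
        if out is not None and out != in_port:
            return [out]                 # destination known: filter
        return [p for p in self.ports if p != in_port]  # unknown: flood

sw = LearningSwitch([1, 2, 3])
first = sw.receive("A", "B", 1)   # B unknown: flooded to ports 2 and 3
reply = sw.receive("B", "A", 2)   # A was learned on port 1: sent there only
```

After the two frames, the switch knows both stations' ports without any manual configuration, which is exactly what makes it transparent to the hosts.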

Routers
We discussed routers in Chapter 4. In this chapter, we mention routers to compare them with a two-layer
switch and a hub. A router is a three-layer device; it operates in the physical, data-link, and network layers.
As a physical-layer device, it regenerates the signal it receives. As a link-layer device, the router checks the
physical addresses (source and destination) contained in the packet. As a network-layer device, a router
checks the network-layer addresses. A router can connect networks. In other words, a router is an
internetworking device; it connects independent networks to form an internetwork. According to this
definition, two networks connected by a router become an internetwork or an internet. There are three major
differences between a router and a repeater or a switch:
1. A router has a physical and logical (IP) address for each of its interfaces.
2. A router acts only on those packets in which the link-layer destination address matches the address of
the interface at which the packet arrives.
3. A router changes the link-layer address of the packet (both source and destination) when it forwards
the packet.
37. Explain the following IEEE standard protocols?
(i) 802.3 (ii) 802.16 (iii) 802.11
Refer above question for 802.3 (Ethernet)
802.16 vs 802.11

● IEEE 802.16 standard defines WiMAX; IEEE 802.11 standard defines WLAN or WiFi.
● 802.16 is designed for long distance/wide area; 802.11 is designed for a limited area.
● 802.16 provides a coverage range of 7 km to 50 km; 802.11 provides a coverage range of 30 m to
100 m.
● The 802.16 standard is used for outdoor usage; the 802.11 standard is used for indoor usage.
● 802.16 operates on frequencies of 2.5 GHz, 3.5 GHz and 5.8 GHz; 802.11 operates on frequencies
of 2.4 GHz and 5 GHz.
● Standard variants of 802.16 are 802.16a, 802.16d, 802.16e, 802.16m, etc.; standard variants of
802.11 are 802.11a, 11b, 11g, 11n, 11ac, 11ad, etc.
● 802.16 provides a data rate of 100 Mbps in a 20 MHz channel; 802.11 provides a data rate of
54 Mbps in a 20 MHz channel.
● A large number of customers can be connected to 802.16 as it covers a wide area; only a limited
number of customers/devices within a limited range can be connected to 802.11.
● For encryption, 802.16 uses the Data Encryption Standard with the Advanced Encryption
Standard; 802.11 uses Rivest Cipher 4.
● 802.16 offers different QoS options like UGS, rtPS, nrtPS, BE, etc.; 802.11 does not provide any
QoS, but QoS is supported in IEEE 802.11e.
● 802.16 bandwidth varies dynamically as per user requirement from 1.5 to 28 MHz; 802.11
bandwidth variants are 20 MHz, 40 MHz, 80 MHz and 160 MHz.
38. Explain spanning tree bridges, remote bridges, transparent bridges and source
routing bridges?

Spanning Tree Protocol (STP) is a communication protocol operating at the data link layer of the OSI
model to prevent bridge loops and the resulting broadcast storms. It creates a loop-free topology for
Ethernet networks.

Working Principle

A bridge loop is created when there are more than one paths between two nodes in a given network.
When a message is sent, particularly when a broadcast is done, the bridges repeatedly rebroadcast the
same message flooding the network. Since a data link layer frame does not have a time-to-live field in
the header, the broadcast frame may loop forever, thus swamping the channels.
Spanning tree protocol creates a spanning tree by disabling all links that form a loop or cycle in the
network. This leaves exactly one active path between any two nodes of the network. So when a
message is broadcast, there is no way that the same message can be received from an alternate path.
The bridges that participate in spanning tree protocol are often called spanning tree bridges.
To construct a spanning tree, the bridges broadcast their configuration routes. Then they execute a
distributed algorithm for finding out the minimal spanning tree in the network, i.e. the spanning tree
with minimal cost. The links not included in this tree are disabled but not removed.
In case a particular active link fails, the algorithm is executed again to find the minimal spanning tree
without the failed link. The communication continues through the newly formed spanning tree. When
a failed link is restored, the algorithm is re-run including the newly restored link.

Example

Let us consider a physical topology, as shown in the diagram, for an Ethernet network that comprises
six interconnected bridges. The bridges are named {B1, B2, B3, B4, B5, B6} and several nodes are
connected to each bridge. The links between two bridges are named {L1, L2, L3, L4, L5, L6, L7, L8,
L9}, where L1 connects B1 and B2, L2 connects B1 and B3 and so on. It is assumed that all links are
of uniform costs.
From the diagram we can see that there are multiple paths from a bridge to any other bridge in the
network, forming several bridge loops that makes the topology susceptible to broadcast storms.

According to spanning tree protocol, links that form a cycle are disabled. Thus, we get a logical
topology so that there is exactly one route between any two bridges. One possible logical topology is
shown in the following diagram below containing links {L1, L2, L3, L4, L5} −

In the above logical configuration, suppose a situation arises in which link L4 fails. Then the spanning
tree is reconstituted leaving out L4. A possible logical reconfiguration containing links {L1, L2, L3, L5, L9}
is as follows −
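
With uniform link costs, a breadth-first search from a chosen root bridge yields one minimal spanning tree, and re-running it without a failed link reconstitutes the tree. The topology below is a guess that reproduces the link sets quoted above, since the text only spells out L1 and L2.

```python
from collections import deque

def spanning_tree(links, root):
    """With uniform link costs, breadth-first search from the root bridge
    gives a minimal spanning tree. `links` maps a link name to the pair of
    bridges it joins; the returned links stay active, the rest would be
    disabled (not removed) by the protocol."""
    adj = {}
    for name, (u, v) in links.items():
        adj.setdefault(u, []).append((name, v))
        adj.setdefault(v, []).append((name, u))
    visited, active, queue = {root}, [], deque([root])
    while queue:
        u = queue.popleft()
        for name, v in sorted(adj.get(u, [])):
            if v not in visited:
                visited.add(v)
                active.append(name)
                queue.append(v)
    return active

# Hypothetical topology consistent with the example (only L1 and L2 are
# spelled out in the text; the other links are guesses for illustration).
links = {"L1": ("B1", "B2"), "L2": ("B1", "B3"), "L3": ("B2", "B4"),
         "L4": ("B3", "B5"), "L5": ("B4", "B6"), "L9": ("B5", "B6")}
tree = spanning_tree(links, "B1")                         # L1..L5 kept
after_failure = spanning_tree(
    {k: v for k, v in links.items() if k != "L4"}, "B1")  # L9 replaces L4
```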

A remote bridge has at least one local area network (LAN) port, such as an RJ-45 jack for an
unshielded twisted-pair (UTP) LAN connection to a switch or a hub, and at least one serial port, such
as an RS-232 port or V.35 interface. The serial port is synchronous for digital lines or asynchronous
for modems.

The bridge might have both synchronous and asynchronous serial ports. Remote bridges can also be
enabled for Simple Network Management Protocol (SNMP) and have other diagnostic and support
features such as out-of-band management (OBM) support.
A transparent bridge is a common type of bridge that observes incoming network traffic to identify
media access control (MAC) addresses. These bridges operate in a way that is transparent to all the
network's connected hosts. A transparent bridge records MAC addresses in a table that is much like a
routing table and evaluates that information whenever a packet is routed toward its location. A
transparent bridge may also combine several different bridges to better inspect incoming traffic.
Transparent bridges are implemented primarily in Ethernet networks.

A source routing bridge (SRB) implements a form of routing used to allow a connection to be
established between pairs of nodes on different token rings. A node wishing to establish a connection
issues a special explorer packet that is broadcast to all nodes on all rings until it is recognized by the
specified destination node. The specified node then returns a specific reply packet that travels back to
the original node, acquiring routing information as it does so, and thus presenting the source node with
a complete route between the two nodes. This technique presents a dynamic choice of route at the time
of establishing the connection, but all subsequent traffic must follow the path determined at that time.
