Chapter IV Data Link Layer

Detailed Description in Data Link Layer

Uploaded by Sheba Parimala

CHAPTER IV: DATA LINK LAYER

Channel access on links – SDMA – TDMA – FDMA – CDMA – Hybrid Multiple Access
Techniques – Issues in the Data Link Layer – Framing - Error correction and detection – Link
Level Flow Control – Medium Access – Ethernet – Token Ring – FDDI – Wireless LAN –
Bridges and Switches

Data Link Layer:

The data link layer is responsible for multiplexing data streams, data frame detection,
medium access, and error control. It ensures reliable point-to-point and point-to-multipoint
connections in a communication network.

For effective data communication between two directly connected transmitting and
receiving stations, the data link layer has to carry out a number of specific functions:
1. Frame synchronisation
2. Flow control
3. Error control
4. Addressing

The data link layer is the second layer of the OSI model. It is one of the most
complicated layers, with complex functionality and responsibilities. The data link layer
hides the details of the underlying hardware and presents itself to the upper layer as the
medium over which to communicate.
The data link layer works between two hosts that are directly connected in some sense. This
direct connection could be point-to-point or broadcast. Systems on a broadcast network are
said to be on the same link. The work of the data link layer tends to get more complex when
it deals with multiple hosts on a single collision domain.
The data link layer is responsible for converting the data stream into signals bit by bit and
sending them over the underlying hardware. At the receiving end, the data link layer picks up
data from the hardware in the form of electrical signals, assembles it into a recognizable
frame format, and hands it over to the upper layer.
Data link layer has two sub-layers:
 Logical Link Control: It deals with protocols, flow-control, and error control
 Media Access Control: It deals with actual control of media
The data link layer takes packets from the network layer and encapsulates them into frames.
If a frame would become too large, the packet is divided into several smaller frames, since
smaller frames make flow control and error control more efficient.
Each frame is then sent bit by bit on the hardware. At the receiver's end, the data link layer
picks up signals from the hardware and assembles them into frames.
Parts of a Frame
A frame has the following parts −
 Frame Header − It contains the source and the destination addresses of the frame.
 Payload field − It contains the message to be delivered.
 Trailer − It contains the error detection and error correction bits.
 Flag − It marks the beginning and end of the frame.
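As a concrete, purely illustrative sketch of these parts, the frame below uses a hypothetical layout: a one-byte flag, one-byte source and destination addresses as the header, and a one-byte checksum as the trailer. None of this matches a real protocol; it only shows where each part sits:

```python
import struct

FLAG = 0x7E  # frame delimiter (assumption: an HDLC-style flag byte)

def build_frame(src: int, dst: int, payload: bytes) -> bytes:
    """Pack a frame: flag | header (src, dst) | payload | trailer (checksum) | flag."""
    header = struct.pack("!BB", src, dst)
    # Trailer: a simple one-byte sum over header + payload (illustrative only)
    checksum = sum(header + payload) & 0xFF
    return bytes([FLAG]) + header + payload + bytes([checksum, FLAG])

def parse_frame(frame: bytes):
    """Unpack a frame and verify the trailer checksum."""
    assert frame[0] == FLAG and frame[-1] == FLAG, "missing frame delimiters"
    src, dst = struct.unpack("!BB", frame[1:3])
    payload, checksum = frame[3:-2], frame[-2]
    if sum(frame[1:-2]) & 0xFF != checksum:
        raise ValueError("trailer checksum mismatch")
    return src, dst, payload

f = build_frame(1, 2, b"HELLO")
print(parse_frame(f))   # (1, 2, b'HELLO')
```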

Types of Framing
Framing can be of two types, fixed sized framing and variable sized framing.
Fixed-sized Framing
Here the size of the frame is fixed and so the frame length acts as delimiter of the frame.
Consequently, it does not require additional boundary bits to identify the start and end of the
frame.
Example − ATM cells.
Variable-Sized Framing
Here, the size of each frame to be transmitted may differ, so additional mechanisms are
needed to mark the end of one frame and the beginning of the next. It is used in local
area networks.
Two ways to define frame delimiters in variable-sized framing are −
 Length Field − A length field in the header gives the size of the frame. It is
used in Ethernet (IEEE 802.3).
 End Delimiter − A special pattern is used as a delimiter to mark the end of the
frame. It is used in Token Ring. If the pattern occurs inside the message itself, two
approaches are used to avoid confusion −
o Byte Stuffing − An escape byte is stuffed into the message to differentiate the
data from the delimiter. This is also called character-oriented framing.
o Bit Stuffing − An extra bit is stuffed into the message (for example, a 0 after
five consecutive 1s in HDLC) so that the delimiter pattern never appears in the
data. This is also called bit-oriented framing.
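The byte-stuffing idea can be sketched in Python. The flag (`~`) and escape (`\`) bytes here are arbitrary assumptions, not the values of any real protocol:

```python
FLAG, ESC = b"~", b"\\"  # hypothetical delimiter and escape bytes

def byte_stuff(payload: bytes) -> bytes:
    """Escape any FLAG or ESC byte appearing in the payload, then add delimiters."""
    out = bytearray(FLAG)
    for b in payload:
        if bytes([b]) in (FLAG, ESC):
            out += ESC          # prefix an escape byte
        out.append(b)
    out += FLAG
    return bytes(out)

def byte_unstuff(frame: bytes) -> bytes:
    """Strip the delimiters and remove the escape bytes."""
    body, out, i = frame[1:-1], bytearray(), 0
    while i < len(body):
        if bytes([body[i]]) == ESC:
            i += 1              # skip the escape, keep the next byte literally
        out.append(body[i])
        i += 1
    return bytes(out)

frame = byte_stuff(b"ab~c")     # the payload contains the flag byte
assert frame == b"~ab\\~c~"     # so it is escaped inside the frame
assert byte_unstuff(frame) == b"ab~c"
```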
The data link layer in the OSI (Open Systems Interconnection) model sits between the
physical layer and the network layer. It converts the raw transmission facility
provided by the physical layer into a reliable, error-free link.
The main functions and the design issues of this layer are

 Providing services to the network layer
 Framing
 Error Control
 Flow Control
Services to the Network Layer
In the OSI model, each layer uses the services of the layer below it and provides services to
the layer above it. The data link layer uses the services offered by the physical layer. Its
primary function is to provide a well-defined service interface to the network layer
above it.

The services provided can be of three types −

 Unacknowledged connectionless service
 Acknowledged connectionless service
 Acknowledged connection-oriented service
Framing
The data link layer encapsulates each data packet from the network layer into frames that are
then transmitted.
A frame has three parts, namely −
 Frame Header
 Payload field that contains the data packet from network layer
 Trailer

Functionality of Data-link Layer

The data link layer performs many tasks on behalf of the upper layer. These are:
 Framing
The data link layer takes packets from the network layer and encapsulates them into
frames. Then, it sends each frame bit by bit on the hardware. At the receiver's end,
the data link layer picks up signals from the hardware and assembles them into frames.
 Addressing
The data link layer provides a layer-2 hardware addressing mechanism. The hardware
address is assumed to be unique on the link. It is encoded into the hardware at the
time of manufacturing.
 Synchronization
When data frames are sent on the link, both machines must be synchronized in order
for the transfer to take place.
 Error Control
Sometimes signals encounter problems in transit and bits are flipped. These errors are
detected, and an attempt is made to recover the actual data bits. The layer also
provides an error reporting mechanism to the sender.
 Flow Control
Stations on the same link may have different speeds or capacities. The data link layer
provides flow control, which enables both machines to exchange data at the same rate.
 Multi-Access
When hosts on a shared link try to transfer data, there is a high probability of
collision. The data link layer provides mechanisms such as CSMA/CD to give multiple
systems the capability of accessing a shared medium.
The data link layer uses error control mechanisms to ensure that frames (data bit streams)
are transmitted with a certain level of accuracy. But to understand how errors are
controlled, it is essential to know what types of errors may occur.

Types of Errors

There may be three types of errors:

 Single bit error
Only one bit of the frame, anywhere in it, is corrupted.
 Multiple bits error
The frame is received with more than one bit in a corrupted state.
 Burst error
The frame contains more than one consecutive corrupted bit.
Error control mechanism may involve two possible ways:
 Error detection
 Error correction

Error Detection

Errors in the received frames are detected by means of a Parity Check or a Cyclic Redundancy
Check (CRC). In both cases, a few extra bits are sent along with the actual data to confirm
that the bits received at the other end are the same as those sent. If the counter-check at
the receiver's end fails, the bits are considered corrupted.
Parity Check
One extra bit is sent along with the original bits to make number of 1s either even in case of
even parity, or odd in case of odd parity.
While creating a frame, the sender counts the number of 1s in it. For example, if even parity
is used and the number of 1s is even, then a bit with value 0 is added; this way the number
of 1s remains even. If the number of 1s is odd, a bit with value 1 is added to make it even.

The receiver simply counts the number of 1s in a frame. If the count of 1s is even and even
parity is used, the frame is considered not corrupted and is accepted. (With odd parity, an
odd count of 1s is likewise accepted.)
If a single bit flips in transit, the receiver can detect it by counting the number of 1s.
But when more than one bit is erroneous, it is very hard for the receiver to detect the
error.
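A minimal sketch of the parity scheme described above, working on strings of bits:

```python
def add_parity_bit(bits: str, even: bool = True) -> str:
    """Append one parity bit so the total count of 1s is even (or odd)."""
    ones = bits.count("1")
    parity = ones % 2 if even else (ones + 1) % 2
    return bits + str(parity)

def check_parity(frame: str, even: bool = True) -> bool:
    """Receiver side: count the 1s and verify the expected parity."""
    ones = frame.count("1")
    return ones % 2 == (0 if even else 1)

frame = add_parity_bit("1011001")        # four 1s -> parity bit 0 is appended
assert frame == "10110010"
assert check_parity(frame)               # accepted
corrupted = "0" + frame[1:]              # flip the first bit in transit
assert not check_parity(corrupted)       # single-bit error detected
```

As the text notes, a two-bit flip in `frame` would restore even parity and go undetected.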
Cyclic Redundancy Check (CRC)
CRC is a different approach to detect if the received frame contains valid data. This
technique involves binary division of the data bits being sent. The divisor is generated using
polynomials. The sender performs a division operation on the bits being sent and calculates
the remainder. Before sending the actual bits, the sender adds the remainder at the end of the
actual bits. Actual data bits plus the remainder is called a codeword. The sender transmits
data bits as codewords.
At the other end, the receiver performs a division operation on the codeword using the same
CRC divisor. If the remainder is all zeros, the data bits are accepted; otherwise, it is
assumed that some data corruption occurred in transit.
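The CRC procedure above can be sketched with modulo-2 (XOR) division. The dataword 100100 with divisor 1101 is a common textbook example whose remainder is 001:

```python
def crc_remainder(data: str, divisor: str) -> str:
    """Modulo-2 (XOR) division of the data bits by the generator polynomial."""
    n = len(divisor) - 1
    bits = list(data + "0" * n)          # append n zero bits before dividing
    for i in range(len(data)):
        if bits[i] == "1":               # only divide where the leading bit is 1
            for j in range(len(divisor)):
                bits[i + j] = str(int(bits[i + j]) ^ int(divisor[j]))
    return "".join(bits[-n:])

def make_codeword(data: str, divisor: str) -> str:
    """Sender: append the remainder to the actual data bits."""
    return data + crc_remainder(data, divisor)

def verify_codeword(codeword: str, divisor: str) -> bool:
    """Receiver: recompute the remainder and compare with the received one."""
    n = len(divisor) - 1
    data, received = codeword[:-n], codeword[-n:]
    return crc_remainder(data, divisor) == received

cw = make_codeword("100100", "1101")
print(cw)                              # 100100001
assert verify_codeword(cw, "1101")
```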

Error Correction

In the digital world, error correction can be done in two ways:

 Backward Error Correction − When the receiver detects an error in the data
received, it asks the sender to retransmit the data unit.
 Forward Error Correction − When the receiver detects an error in the data
received, it executes an error-correcting code, which allows it to auto-recover from
and correct some kinds of errors.
The first one, Backward Error Correction, is simple and can be used efficiently only where
retransmission is not expensive, for example over fiber optics. In wireless transmission,
retransmitting may cost too much, so Forward Error Correction is used instead.
To correct an error in a data frame, the receiver must know exactly which bit in the frame
is corrupted. To locate the bit in error, redundant bits are used as parity bits. For
example, if we take an ASCII word (7 data bits), then there are 8 kinds of information we
need to distinguish: which of the seven bits is in error, plus one more case for no error.
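As a concrete sketch of forward error correction, the classic Hamming(7,4) code uses three parity bits to locate (and flip back) any single corrupted bit in a 7-bit codeword. This is one standard technique, offered here as an illustration rather than as the scheme the text has in mind:

```python
def hamming74_encode(d: str) -> str:
    """Encode 4 data bits into a 7-bit Hamming codeword (even parity)."""
    d3, d5, d6, d7 = (int(b) for b in d)   # data sits at positions 3, 5, 6, 7
    p1 = d3 ^ d5 ^ d7          # parity over positions 1, 3, 5, 7
    p2 = d3 ^ d6 ^ d7          # parity over positions 2, 3, 6, 7
    p4 = d5 ^ d6 ^ d7          # parity over positions 4, 5, 6, 7
    return "".join(map(str, [p1, p2, d3, p4, d5, d6, d7]))

def hamming74_correct(cw: str) -> str:
    """Locate and flip a single corrupted bit, returning the repaired codeword."""
    b = [int(x) for x in cw]   # b[0] is position 1
    s1 = b[0] ^ b[2] ^ b[4] ^ b[6]
    s2 = b[1] ^ b[2] ^ b[5] ^ b[6]
    s4 = b[3] ^ b[4] ^ b[5] ^ b[6]
    pos = s1 + 2 * s2 + 4 * s4  # the syndrome gives the bit position; 0 = no error
    if pos:
        b[pos - 1] ^= 1
    return "".join(map(str, b))

cw = hamming74_encode("1011")              # -> "0110011"
bad = cw[:2] + ("1" if cw[2] == "0" else "0") + cw[3:]  # corrupt position 3
assert hamming74_correct(bad) == cw        # the receiver auto-recovers
```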

Flow Control

When a data frame (Layer-2 data) is sent from one host to another over a single medium, the
sender and receiver must work at the same speed; that is, the sender sends at a rate at which
the receiver can process and accept the data. What if the speed (hardware/software) of the
sender and receiver differs? If the sender sends too fast, the receiver may be overloaded
(swamped) and data may be lost.
Two types of mechanisms can be deployed to control the flow:
 Stop and Wait

This flow control mechanism forces the sender after transmitting a data frame to stop
and wait until the acknowledgement of the data-frame sent is received.

 Sliding Window
In this flow control mechanism, both sender and receiver agree on the number of
data frames after which an acknowledgement should be sent. Since the stop-and-wait
mechanism wastes resources, this protocol tries to make use of the underlying resources
as much as possible.

Error Control

When a data frame is transmitted, there is a probability that it may be lost in transit or
received corrupted. In both cases, the receiver does not receive the correct data frame,
and the sender knows nothing about the loss. In such cases, sender and receiver are
equipped with protocols that help them detect transit errors such as the loss of a data
frame. Then either the sender retransmits the data frame, or the receiver requests a
resend of the previous data frame.
Requirements for error control mechanism:
 Error detection - The sender and receiver, either or both, must ascertain that
some error occurred in transit.
 Positive ACK - When the receiver receives a correct frame, it should acknowledge
it.
 Negative ACK - When the receiver receives a damaged frame or a duplicate frame,
it sends a NACK back to the sender and the sender must retransmit the correct frame.
 Retransmission - The sender maintains a clock and sets a timeout period. If an
acknowledgement of a previously transmitted data frame does not arrive before the
timeout, the sender retransmits the frame, assuming that the frame or its
acknowledgement was lost in transit.
There are three techniques that the data link layer may deploy to control errors through
Automatic Repeat reQuest (ARQ):

 Stop-and-wait ARQ

The following transitions may occur in Stop-and-Wait ARQ:

o The sender maintains a timeout counter.
o When a frame is sent, the sender starts the timeout counter.
o If an acknowledgement of the frame arrives in time, the sender transmits the next
frame in the queue.
o If the acknowledgement does not arrive in time, the sender assumes that either the
frame or its acknowledgement was lost in transit, retransmits the frame,
and restarts the timeout counter.
o If a negative acknowledgement is received, the sender retransmits the frame.
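These transitions can be sketched as a toy simulation. The loss probability and the log format are assumptions for illustration; whatever gets lost, every frame is eventually acknowledged, in order:

```python
import random

random.seed(1)

def stop_and_wait(frames, loss_prob=0.3):
    """Toy Stop-and-Wait ARQ sender: resend a frame until its ACK arrives."""
    log = []
    for seq, frame in enumerate(frames):
        while True:
            log.append(f"send {seq}")
            if random.random() > loss_prob:   # frame and ACK both got through
                log.append(f"ack  {seq}")
                break                         # transmit the next frame in queue
            log.append(f"timeout {seq}")      # assume frame or ACK was lost
    return log

for line in stop_and_wait(["A", "B", "C"]):
    print(line)
```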
 Go-Back-N ARQ
The Stop-and-Wait ARQ mechanism does not utilize resources at their best: until the
acknowledgement is received, the sender sits idle and does nothing. In the Go-Back-N
ARQ method, both sender and receiver maintain a window.

The sending-window size enables the sender to send multiple frames without
waiting for the acknowledgement of the previous ones. The receiving window enables
the receiver to receive multiple frames and acknowledge them. The receiver keeps
track of the incoming frames' sequence numbers.
When the sender has sent all the frames in the window, it checks up to what sequence
number it has received positive acknowledgements. If all frames are positively
acknowledged, the sender sends the next set of frames. If the sender receives a NACK,
or receives no ACK at all for a particular frame, it retransmits all frames from the
first unacknowledged one onward.
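A toy sketch of the Go-Back-N retransmission rule. The `lost` set, which marks which transmission attempts are dropped, is an artificial test harness, not part of the protocol:

```python
def go_back_n(num_frames, window, lost):
    """Toy Go-Back-N sender: on a lost frame, resend it and everything after it.

    `lost` is a set of (frame, attempt) pairs that are dropped in transit.
    """
    sent, base, attempt = [], 0, {}
    while base < num_frames:
        # send the whole window without waiting for individual ACKs
        for seq in range(base, min(base + window, num_frames)):
            attempt[seq] = attempt.get(seq, 0) + 1
            sent.append(seq)
        # slide the window up to the first frame that was lost, if any
        for seq in range(base, min(base + window, num_frames)):
            if (seq, attempt[seq]) in lost:
                base = seq        # go back: resend from the lost frame onward
                break
        else:
            base = min(base + window, num_frames)
    return sent

# frame 1 is lost on its first attempt: frames 1 AND 2 are resent
print(go_back_n(4, window=3, lost={(1, 1)}))   # [0, 1, 2, 1, 2, 3]
```

Note how frame 2, although received correctly the first time, is retransmitted anyway; Selective Repeat (below) avoids exactly this waste.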

 Selective Repeat ARQ

In Go-Back-N ARQ, it is assumed that the receiver has no buffer space for its window
size and must process each frame as it comes. This forces the sender to retransmit
all frames which have not been acknowledged.
In Selective-Repeat ARQ, the receiver, while keeping track of sequence numbers,
buffers frames in memory and sends a NACK only for the frame which is missing or
damaged.
The sender, in this case, retransmits only the frame for which the NACK was received.
Routers take the help of routing tables, which hold the following information:
 Method to reach the network
Upon receiving a forwarding request, a router forwards the packet to its next hop (adjacent
router) towards the destination.
The next router on the path does the same, and eventually the data packet reaches its
destination.
Network address can be of one of the following:
 Unicast (destined to one host)
 Multicast (destined to group)
 Broadcast (destined to all)
 Anycast (destined to nearest one)
A router never forwards broadcast traffic by default. Multicast traffic is given special
treatment, as it is mostly video or audio streaming with the highest priority. Anycast is
similar to unicast, except that the packets are delivered to the nearest destination when
multiple destinations are available.
When a device has multiple paths to reach a destination, it always selects one path,
preferring it over the others. This selection process is termed routing. Routing is done by
special network devices called routers, or it can be done by means of software processes.
Software-based routers have limited functionality and limited scope.
A router is always configured with some default route. A default route tells the router
where to forward a packet if no route is found for the specific destination. If multiple
paths exist to reach the same destination, the router can make its decision based on the
following information:
 Hop Count
 Bandwidth
 Metric
 Prefix-length
 Delay
Routes can be statically configured or dynamically learnt. One route can be configured to be
preferred over others.
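The prefix-length criterion above is the basis of longest-prefix matching. Here is a sketch using Python's standard `ipaddress` module over a hypothetical three-entry table, with 0.0.0.0/0 acting as the default route:

```python
import ipaddress

# Hypothetical routing table: (prefix, next hop)
table = [
    ("0.0.0.0/0",      "10.0.0.1"),   # default route
    ("192.168.0.0/16", "10.0.0.2"),
    ("192.168.5.0/24", "10.0.0.3"),
]

def lookup(dst: str) -> str:
    """Pick the matching route with the longest prefix; fall back to the default."""
    addr = ipaddress.ip_address(dst)
    matches = [(ipaddress.ip_network(p), hop) for p, hop in table
               if addr in ipaddress.ip_network(p)]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(lookup("192.168.5.7"))   # 10.0.0.3  (the most specific route wins)
print(lookup("8.8.8.8"))       # 10.0.0.1  (only the default route matches)
```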

Unicast routing

Most of the traffic on the internet and intranets, known as unicast data or unicast traffic,
is sent with a specified destination. Routing unicast data over the internet is called
unicast routing. It is the simplest form of routing because the destination is already
known; the router just has to look up the routing table and forward the packet to the
next hop.

Broadcast routing

By default, broadcast packets are not routed or forwarded by the routers on any network;
routers create broadcast domains. But a router can be configured to forward broadcasts
in some special cases. A broadcast message is destined to all network devices.
Broadcast routing can be done in two ways (algorithms):
 A router creates a data packet and then sends it to each host one by one. In this
case, the router creates multiple copies of a single data packet with different
destination addresses. All packets are sent as unicast, but because they are sent to
everyone, it simulates broadcasting.
This method consumes lots of bandwidth, and the router must know the destination
address of each node.
 Secondly, when a router receives a packet that is to be broadcast, it simply floods
the packet out of all interfaces. All routers are configured in the same way.

This method is easy on the router's CPU but may cause the problem of duplicate
packets arriving from peer routers.
Reverse path forwarding is a technique in which the router knows in advance about
its predecessor, from which it should receive the broadcast. It is used to detect
and discard duplicates.

Multicast Routing

Multicast routing is a special case of broadcast routing, with significant differences and
challenges. In broadcast routing, packets are sent to all nodes even if they do not want
them. In multicast routing, the data is sent only to the nodes that want to receive it.
The router must know that there are nodes which wish to receive the multicast packets (or
stream); only then should it forward them. Multicast routing uses the spanning tree
protocol to avoid looping.
Multicast routing also uses the reverse path forwarding technique to detect and discard
duplicates and loops.

Anycast Routing

Anycast packet forwarding is a mechanism where multiple hosts can have the same logical
address. When a packet destined to this logical address is received, it is sent to the host
which is nearest in the routing topology.

Anycast routing is done with the help of a DNS server. Whenever an anycast packet is
received, the DNS is queried about where to send it, and it provides the nearest IP
address configured on it.
Unicast Routing Protocols

There are two kinds of routing protocols available to route unicast packets:

 Distance Vector Routing Protocol

Distance Vector is a simple routing protocol which takes its routing decisions based on
the number of hops between source and destination. A route with a lower hop count is
considered the better route. Every router advertises its set of best routes to other
routers. Ultimately, all routers build up their view of the network topology based on
the advertisements of their peer routers.
For example, the Routing Information Protocol (RIP).

 Link State Routing Protocol

The Link State protocol is slightly more complicated than Distance Vector. It takes
into account the states of the links of all the routers in a network. This technique
helps routers build a common graph of the entire network, from which all routers then
calculate their best paths for routing purposes. Examples are Open Shortest Path First
(OSPF) and Intermediate System to Intermediate System (IS-IS).

Multicast Routing Protocols

Unicast routing protocols use graphs, while multicast routing protocols use trees, i.e.
spanning trees, to avoid loops. The optimal tree is called the shortest path spanning tree.
 DVMRP - Distance Vector Multicast Routing Protocol
 MOSPF - Multicast Open Shortest Path First
 CBT - Core Based Tree
 PIM - Protocol Independent Multicast
Protocol Independent Multicast is commonly used now. It has two flavors:
 PIM Dense Mode
This mode uses source-based trees. It is used in dense environments such as LANs.
 PIM Sparse Mode
This mode uses shared trees. It is used in sparse environments such as WANs.

Routing Algorithms

The routing algorithms are as follows:


Flooding
Flooding is the simplest method of packet forwarding. When a packet is received, the router
sends it out of all interfaces except the one on which it was received. This creates too
great a burden on the network, with lots of duplicate packets wandering around it.
Time to Live (TTL) can be used to avoid infinite looping of packets. There is another
approach, called Selective Flooding, that reduces the overhead on the network: the router
does not flood out of all interfaces, but only selected ones.
Shortest Path
Routing decisions in networks are mostly taken on the basis of cost between source and
destination, in which hop count plays a major role. Shortest path routing uses various
algorithms to decide on a path with the minimum number of hops.
Common shortest path algorithms are:
 Dijkstra's algorithm
 Bellman Ford algorithm
 Floyd Warshall algorithm
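As a sketch of the first of these, here is Dijkstra's algorithm run over a small hypothetical four-router topology with link costs:

```python
import heapq

def dijkstra(graph, source):
    """Dijkstra's shortest-path algorithm over a dict-of-dicts adjacency map."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                       # stale heap entry, skip it
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd               # found a cheaper path to v
                heapq.heappush(heap, (nd, v))
    return dist

# Hypothetical 4-router topology with symmetric link costs
net = {
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 2, "D": 6},
    "C": {"A": 4, "B": 2, "D": 3},
    "D": {"B": 6, "C": 3},
}
print(dijkstra(net, "A"))   # {'A': 0, 'B': 1, 'C': 3, 'D': 6}
```

Note that A reaches C via B (cost 1 + 2 = 3) rather than over the direct link of cost 4; this is exactly the kind of decision a link-state router makes from its network graph.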

Data Link control


Data link control is responsible for the reliable transmission of messages over the
transmission channel, using techniques such as framing, error control, and flow control
(see Stop-and-Wait ARQ above).
Multiple Access Control
If there is a dedicated link between the sender and the receiver, then the data link control
layer is sufficient. However, if there is no dedicated link, multiple stations can access
the channel simultaneously, so multiple access protocols are required to decrease
collisions and avoid crosstalk. For example, in a classroom full of students, when a teacher
asks a question and all the students (stations) start answering simultaneously (sending data
at the same time), a lot of chaos is created (data overlaps or is lost); it is then the job
of the teacher (the multiple access protocol) to manage the students and make them answer
one at a time.
Thus, protocols are required for sharing data on non dedicated channels. Multiple access
protocols can be subdivided further as –

1. Random Access Protocol: In this, all stations have equal priority; no station has
more priority than another. Any station can send data depending on the medium's
state (idle or busy). It has two features:
1. There is no fixed time for sending data
2. There is no fixed sequence of stations sending data
The Random access protocols are further subdivided as:
(a) ALOHA – It was designed for wireless LANs but is also applicable to shared media.
In this, multiple stations can transmit data at the same time, which can lead to collisions
and garbled data.
 Pure Aloha:
When a station sends data, it waits for an acknowledgement. If the acknowledgement
does not arrive within the allotted time, the station waits for a random amount of
time called the back-off time (Tb) and re-sends the data. Since different stations
wait for different amounts of time, the probability of further collision decreases.
 Vulnerable Time = 2 × frame transmission time
 Throughput S = G × e^(−2G)
Maximum throughput = 0.184 for G = 0.5
 Slotted Aloha:
It is similar to Pure Aloha, except that time is divided into slots and sending of
data is allowed only at the beginning of these slots. If a station misses the allowed
time, it must wait for the next slot. This reduces the probability of collision.
 Vulnerable Time = frame transmission time
 Throughput S = G × e^(−G)
Maximum throughput = 0.368 for G = 1
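The two throughput formulas can be checked numerically (G is the average number of frames offered per frame transmission time):

```python
import math

def pure_aloha_throughput(G: float) -> float:
    """S = G * e^(-2G): fraction of successful transmissions in Pure ALOHA."""
    return G * math.exp(-2 * G)

def slotted_aloha_throughput(G: float) -> float:
    """S = G * e^(-G): Slotted ALOHA halves the vulnerable time."""
    return G * math.exp(-G)

print(round(pure_aloha_throughput(0.5), 3))     # 0.184 (the Pure ALOHA maximum)
print(round(slotted_aloha_throughput(1.0), 3))  # 0.368 (the Slotted ALOHA maximum)
```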
(b) CSMA – Carrier Sense Multiple Access ensures fewer collisions, because a station is
required to first sense the medium (idle or busy) before transmitting data. If the medium
is idle the station sends its data; otherwise it waits till the channel becomes idle.
However, there is still a chance of collision in CSMA due to propagation delay. For
example, suppose station A senses the medium, finds it idle, and starts sending data.
Before the first bit of A's data reaches station B (delayed by propagation), B may also
sense the medium, find it idle, and send its own data. This results in a collision between
the data from stations A and B.
CSMA access modes-
 1-persistent: The node senses the channel; if idle, it sends the data. Otherwise it
continuously keeps checking the medium and transmits unconditionally (with
probability 1) as soon as the channel becomes idle.
 Non-persistent: The node senses the channel; if idle, it sends the data. Otherwise it
checks the medium again after a random amount of time (not continuously) and
transmits when it is found idle.
 P-persistent: The node senses the medium; if idle, it sends the data with probability p.
If the data is not transmitted (probability 1 − p), it waits for some time and checks
the medium again; if it is then found idle, it again sends with probability p. This
repeats until the frame is sent. It is used in Wi-Fi and packet radio systems.
 O-persistent: The priority of the nodes is decided beforehand and transmission occurs
in that order. If the medium is idle, a node waits for its time slot to send data.
(c) CSMA/CD – Carrier Sense Multiple Access with Collision Detection. Stations terminate
the transmission of data as soon as a collision is detected.
(d) CSMA/CA – Carrier Sense Multiple Access with Collision Avoidance. Detecting collisions
involves the sender receiving acknowledgement signals: if there is just one signal (its
own), the data was sent successfully; if there are two signals (its own and the one with
which it collided), a collision has occurred. To distinguish between these two cases, the
collision must have a significant impact on the received signal. This is not the case in
wireless networks, so collision avoidance (CSMA/CA) is used there instead of collision
detection.
CSMA/CA avoids collision by:
1. Interframe space – Station waits for medium to become idle and if found idle it does
not immediately send data (to avoid collision due to propagation delay) rather it waits
for a period of time called Interframe space or IFS. After this time it again checks the
medium for being idle. The IFS duration depends on the priority of station.
2. Contention Window – An amount of time divided into slots. If the sender is ready
to send data, it chooses a random number of slots as its wait time, a number that
doubles every time the medium is not found idle. If the medium is found busy, the
station does not restart the entire process; rather, it resumes the timer when the
channel is found idle again.
3. Acknowledgement – The sender re-transmits the data if acknowledgement is not
received before time-out.
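The contention-window rule in point 2 amounts to binary exponential backoff. The sketch below assumes a fixed list of channel states per sensing attempt and a 9 µs slot time (borrowed from 802.11 OFDM); both are illustrative choices, not part of the rule itself:

```python
import random

def contention_backoff(channel_busy, cw_min=2, slot_time_us=9):
    """Sketch of a contention window with binary exponential backoff.

    `channel_busy` lists the channel state seen on each sense; every busy
    sense doubles the window, and the station waits a random slot count.
    """
    cw = cw_min
    for attempt, busy in enumerate(channel_busy + [False], start=1):
        slots = random.randrange(cw)       # choose a random number of slots
        wait = slots * slot_time_us        # as the wait time (microseconds)
        if not busy:                       # channel idle: transmit now
            return attempt, cw, wait
        cw *= 2                            # busy: double the contention window
    # unreachable: the appended False guarantees a successful final attempt

# channel busy on the first two senses, idle on the third
attempt, cw, wait = contention_backoff([True, True])
print(attempt, cw)   # 3 8  (the window doubled twice: 2 -> 4 -> 8)
```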
2. Controlled Access:
In this, data is sent by the station that is approved by all the other stations (see
Controlled Access Protocols below).
3. Channelization:
In this, the available bandwidth of the link is shared in time, frequency, or code among
multiple stations, which can then access the channel simultaneously.
 Frequency Division Multiple Access (FDMA) – The available bandwidth is divided
into equal bands so that each station can be allocated its own band. Guard bands are
also added so that no two bands overlap, to avoid crosstalk and noise.
 Time Division Multiple Access (TDMA) – In this, the bandwidth is shared between
multiple stations. To avoid collisions, time is divided into slots and stations are
allotted these slots to transmit data. However, there is a synchronization overhead,
as each station needs to know its time slot; this is resolved by adding
synchronization bits to each slot. Another issue with TDMA is propagation delay,
which is resolved by the addition of guard times between slots.
 Code Division Multiple Access (CDMA) – One channel carries all transmissions
simultaneously; there is neither division of bandwidth nor division of time. For
example, if many people in a room all speak at the same time, perfect reception is
still possible as long as only the two people in a conversation speak the same
language. Similarly, data from different stations can be transmitted simultaneously
using different code languages.
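The "different code languages" analogy can be made concrete with orthogonal chip codes (Walsh codes), which is how CDMA separates simultaneous senders; the dot product of two distinct codes is zero, so each receiver can despread its own station's bit out of the summed signal. The three 4-chip codes below are a minimal illustrative example:

```python
# Orthogonal (Walsh) chip codes, one per station
CODES = {
    "A": [+1, +1, +1, +1],
    "B": [+1, -1, +1, -1],
    "C": [+1, +1, -1, -1],
}

def transmit(bits_by_station):
    """Each station multiplies its data bit (+1/-1) by its chip code; the
    shared channel simply adds all the chip sequences together."""
    channel = [0] * 4                      # 4 chips per bit in this example
    for station, bit in bits_by_station.items():
        for i, chip in enumerate(CODES[station]):
            channel[i] += bit * chip
    return channel

def receive(channel, station):
    """Despread: the normalized dot product with the station's own code
    recovers that station's bit, while the orthogonal codes cancel out."""
    code = CODES[station]
    return sum(c * k for c, k in zip(channel, code)) // len(code)

signal = transmit({"A": +1, "B": -1, "C": +1})   # all three send at once
print(signal)                                    # [1, 3, -1, 1]
print([receive(signal, s) for s in "ABC"])       # [1, -1, 1]
```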

CSMA (Carrier Sense Multiple Access)

Carrier Sense Multiple Access is a media access protocol in which a station senses the
traffic on a channel (idle or busy) before transmitting data: if the channel is idle, the
station can send data to the channel; otherwise, it must wait until the channel becomes
idle. This reduces the chances of a collision on the transmission medium.

CSMA Access Modes

1-Persistent: In the 1-Persistent mode of CSMA, each node first senses the shared channel
and, if the channel is idle, immediately sends the data. Otherwise it must wait, keep
track of the channel's status, and broadcast the frame unconditionally as soon as the
channel becomes idle.

Non-Persistent: In this access mode of CSMA, before transmitting data each node must sense
the channel, and if the channel is inactive, it immediately sends the data. Otherwise, the
station waits for a random time (not sensing continuously), and when the channel is found
to be idle, it transmits the frame.

P-Persistent: This is a combination of the 1-Persistent and Non-Persistent modes. Each
node senses the channel, and if the channel is inactive, it sends a frame with
probability p. If the frame is not transmitted (probability q = 1 − p), the node waits a
random time and tries again in the next time slot.

O-Persistent: This method defines a priority order of the stations before transmission on
the shared channel. If the channel is found to be inactive, each station waits for its
turn to transmit the data.

CSMA/CD

It is a carrier sense multiple access / collision detection network protocol for
transmitting data frames. The CSMA/CD protocol works within the medium access control
layer. A station first senses the shared channel before broadcasting, and if the channel
is idle, it transmits a frame while monitoring whether the transmission is successful. If
the frame is successfully received, the station sends the next frame. If a collision is
detected, the station sends a jam signal to the shared channel to terminate the data
transmission. After that, it waits for a random time before sending the frame again.

CSMA/CA

It is a carrier sense multiple access / collision avoidance network protocol for the
carrier transmission of data frames, and it also works within the medium access control
layer. When a data frame is sent on a channel, the station listens for signals to check
whether the channel is clear. If the station receives only a single signal (its own), the
data frame has been successfully transmitted to the receiver. But if it gets two signals
(its own and one from the frame with which it collided), a collision of frames has
occurred in the shared channel. In this way, a sender detects a collision from the
signals it receives.

The following methods are used in CSMA/CA to avoid collisions:

Interframe space: In this method, the station waits for the channel to become idle, and if it
gets the channel is idle, it does not immediately send the data. Instead of this, it waits for
some time, and this time period is called the Interframe space or IFS. However, the IFS time
is often used to define the priority of the station.

Contention window: In the contention window method, the waiting time is divided into slots. When a station is ready to transmit a data frame, it chooses a random number of slots as its wait time. If the channel becomes busy during the countdown, the station does not restart the entire process; it merely freezes its timer and resumes the countdown once the channel is idle again.

Acknowledgment: In the acknowledgment method, the sender retransmits the data frame on the shared channel if an acknowledgment is not received before a timeout expires.
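The interframe space and contention window can be sketched together. In this illustrative Python model, time is measured in hypothetical slot units and the channel's busy slots are supplied as a simple set; the `cw` and `ifs` values are assumptions, not standard-mandated figures.

```python
import random

def csma_ca_wait(busy_slots, cw=8, ifs=2):
    """Pick a random backoff from the contention window, then count it
    down slot by slot, freezing the countdown whenever the channel is
    busy. Returns (chosen backoff, total slots waited including IFS)."""
    backoff = random.randrange(cw)
    remaining, t, waited = backoff, 0, 0
    while remaining > 0:
        if t not in busy_slots:       # countdown advances only on idle slots
            remaining -= 1
        waited += 1
        t += 1
    return backoff, ifs + waited

b, total = csma_ca_wait(set())        # idle channel: wait is just IFS + backoff
print(b, total)
```

The key behavior is that a busy slot pauses the countdown instead of restarting it, exactly as described above.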

B. Controlled Access Protocol

It is a method of reducing data-frame collisions on a shared channel. In the controlled access method, the stations consult one another, and a particular station may send a data frame only when it has been approved by the other stations. In other words, no station can transmit until the others have authorized it. There are three types of controlled access: Reservation, Polling, and Token Passing.
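As an illustration of controlled access, a polling round can be sketched as follows; the station names and data flags are invented for the example.

```python
def poll_round(order, has_data):
    """Polling sketch: a primary station polls each secondary in a fixed
    order, and only the station currently being polled may transmit."""
    transmitted = []
    for station in order:
        if has_data.get(station, False):
            transmitted.append(station)
    return transmitted

# Only polled stations with queued data transmit, one at a time, so no
# two stations ever send simultaneously and collisions cannot occur.
print(poll_round(["S1", "S2", "S3"], {"S1": True, "S3": True}))
```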

C. Channelization Protocols

A channelization protocol allows the total usable bandwidth of a shared channel to be divided among multiple stations on the basis of time, frequency, or code. All stations can then access the channel at the same time to send their data frames.

The following methods share the channel on the basis of time, frequency, or code:

1. FDMA (Frequency Division Multiple Access)


2. TDMA (Time Division Multiple Access)
3. CDMA (Code Division Multiple Access)
FDMA

Frequency-division multiple access (FDMA) divides the available bandwidth into equal bands so that multiple users can send data simultaneously, each through its own frequency sub-channel. Each station is assigned a reserved band, which prevents crosstalk between the channels and interference between stations.
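A rough Python sketch of the FDMA idea, dividing a channel into equal sub-bands separated by guard bands; the bandwidth and guard figures are made-up example values.

```python
def fdma_bands(total_bw_hz, n_stations, guard_hz=0):
    """Divide total bandwidth into equal sub-bands, leaving a guard band
    between neighbors to prevent crosstalk (illustrative, not a real
    spectrum allocator). Returns a (start_hz, end_hz) tuple per station."""
    usable = total_bw_hz - guard_hz * (n_stations - 1)
    width = usable / n_stations
    bands, start = [], 0.0
    for _ in range(n_stations):
        bands.append((start, start + width))
        start += width + guard_hz
    return bands

# 1 MHz shared among 4 stations with 10 kHz guard bands between them
print(fdma_bands(1_000_000, 4, 10_000))
```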

TDMA

Time Division Multiple Access (TDMA) is a channel access method that lets multiple stations share the same frequency band. To avoid collisions on the shared channel, it divides the channel into time slots and allocates a slot to each station for transmitting its data frames; the stations thus share the full bandwidth by taking turns in time. TDMA carries a synchronization overhead, however, because synchronization bits must be added to each slot so that every station can identify its own time slot.
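The slot allocation can be sketched as a simple calculation; the frame length and station count below are arbitrary example values.

```python
def tdma_slot(station_id, n_stations, frame_ms):
    """Return the (start, end) time in ms of a station's slot within one
    TDMA frame, assuming equal-length slots (an illustrative layout;
    real systems also reserve bits per slot for synchronization)."""
    slot = frame_ms / n_stations
    return station_id * slot, (station_id + 1) * slot

# 4 stations sharing a 20 ms frame: station 2 owns 10-15 ms of every frame.
print(tdma_slot(2, 4, 20))  # (10.0, 15.0)
```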

CDMA

Code-division multiple access (CDMA) is a channel access method in which all stations can send data over the same channel simultaneously: each station may transmit its frames over the full bandwidth of the shared channel at all times, and no division of the channel into time slots or frequency bands is required. When multiple stations send data to the channel at once, their data frames are separated by unique code sequences; each station has its own code for transmitting over the shared channel. The situation resembles a room full of people speaking at the same time: a listener understands only the speaker using the same language. Similarly, on the network, stations communicate simultaneously, each pair using its own code "language".
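The "code language" idea can be made concrete with the classic 4-chip Walsh-code example. This is a textbook illustration, not a model of any particular CDMA system: each station spreads its bit (+1 or -1, or 0 when silent) with an orthogonal chip sequence, the channel simply sums the signals, and a receiver recovers a station's bit by correlating with that station's code.

```python
# Mutually orthogonal 4-chip codes (Walsh codes) for three stations.
codes = {
    "A": [+1, +1, +1, +1],
    "B": [+1, -1, +1, -1],
    "C": [+1, +1, -1, -1],
}

def transmit(bits):
    """bits maps station -> +1, -1, or 0 (silent); the channel adds up
    every station's spread signal chip by chip."""
    channel = [0, 0, 0, 0]
    for station, bit in bits.items():
        for i, chip in enumerate(codes[station]):
            channel[i] += bit * chip
    return channel

def receive(channel, station):
    """Correlate the summed signal with one station's code; orthogonality
    cancels every other station's contribution."""
    corr = sum(c * ch for c, ch in zip(codes[station], channel))
    return corr // len(codes[station])   # +1, -1, or 0 (station was silent)

ch = transmit({"A": +1, "B": -1, "C": +1})
print([receive(ch, s) for s in "ABC"])   # [1, -1, 1]
```

Each receiver recovers exactly the bit its station sent, even though all three transmissions overlapped in time and frequency.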

Hybrid Multiple Access Techniques

1. GSM (Global System for Mobile Communication)
• Combines FDD (Frequency Division Duplex) with TDMA and FDMA.
• FDD – the transmitter and receiver operate at different carrier frequencies; FDD is used to prevent interference between the outward and return channels.
• FDMA and TDMA allow multiple handsets to work within a single cell.
• GSM with GPRS (General Packet Radio Service) combines FDD and FDMA with slotted ALOHA for reservation inquiries and dynamic TDMA for transferring the actual data.

2. Bluetooth
• Combines frequency hopping with CSMA/CA.
• FHSS (Frequency Hopping Spread Spectrum) – a method of transmitting radio signals by rapidly switching a carrier among many frequency channels using a pseudo-random sequence known to both sender and receiver.
• CSMA/CA is used for multiple access.

3. Wireless LAN (IEEE 802.11b)
• Based on FDMA and DS-CDMA (Direct Sequence CDMA) combined with CSMA/CA.
• DS-CDMA – a multiple access scheme based on DSSS (Direct Sequence Spread Spectrum), spreading the signals from/to different users with different codes.
• FDMA and DS-CDMA are used to avoid interference among adjacent wireless LAN cells (access points); CSMA/CA provides multiple access within a cell.

4. HIPERLAN/2
• Combines FDMA with dynamic TDMA, meaning that resource reservation is achieved by packet scheduling.

MAC Layer in the OSI Model


The Open Systems Interconnection (OSI) model is a layered networking framework that conceptualizes how communication should take place between heterogeneous systems. The data link layer is the second-lowest layer. It is divided into two sublayers −
 The logical link control (LLC) sublayer
 The medium access control (MAC) sublayer
The following diagram depicts the position of the MAC layer −
Functions of MAC Layer
 It provides an abstraction of the physical layer to the LLC and upper layers of the OSI
network.
 It is responsible for encapsulating frames so that they are suitable for transmission via
the physical medium.
 It resolves the addressing of source station as well as the destination station, or groups
of destination stations.
 It performs multiple access resolutions when more than one data frame is to be
transmitted. It determines the channel access methods for transmission.
 It also performs collision resolution and initiates retransmission in case of collisions.
 It generates the frame check sequences and thus contributes to protection against
transmission errors.
MAC Addresses
MAC address or media access control address is a unique identifier allotted to a network
interface controller (NIC) of a device. It is used as a network address for data transmission
within a network segment like Ethernet, Wi-Fi, and Bluetooth.
MAC address is assigned to a network adapter at the time of manufacturing. It is hardwired
or hard-coded into the network interface card (NIC). A MAC address comprises six groups
of two hexadecimal digits, separated by hyphens, colons, or no separators at all. An example
of a MAC address is 00:0A:89:5B:F0:11.
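Since the three separator conventions all name the same 12 hexadecimal digits, converting between them is mechanical. A small Python sketch (the canonical lowercase, colon-separated output format is a choice made here, not a rule from the text):

```python
def normalize_mac(mac):
    """Accept a MAC written with ':', '-', or no separators and return
    the colon-separated lowercase form."""
    digits = mac.replace(":", "").replace("-", "").lower()
    if len(digits) != 12 or any(c not in "0123456789abcdef" for c in digits):
        raise ValueError("not a valid MAC address: %r" % mac)
    return ":".join(digits[i:i + 2] for i in range(0, 12, 2))

print(normalize_mac("00-0A-89-5B-F0-11"))  # 00:0a:89:5b:f0:11
```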
Fiber Distributed Data Interface (FDDI)
Fiber Distributed Data Interface (FDDI) is a set of ANSI and ISO standards for
transmission of data in local area network (LAN) over fiber optic cables. It is applicable in
large LANs that can extend up to 200 kilometers in diameter.
Features
 FDDI uses optical fiber as its physical medium.
 It operates in the physical and medium access control (MAC layer) of the Open
Systems Interconnection (OSI) network model.
 It provides high data rate of 100 Mbps and can support thousands of users.
 It is used in LANs up to 200 kilometers for long distance voice and multimedia
communication.
 It uses ring based token passing mechanism and is derived from IEEE 802.4 token bus
standard.
 It contains two token rings, a primary ring for data and token transmission and a
secondary ring that provides backup if the primary ring fails.
 FDDI technology can also be used as a backbone for a wide area network (WAN).
The following diagram shows FDDI −

Frame Format
The frame format of FDDI is similar to that of token bus as shown in the following diagram −

The fields of an FDDI frame are −


 Preamble: 1 byte for synchronization.
 Start Delimiter: 1 byte that marks the beginning of the frame.
 Frame Control: 1 byte that specifies whether this is a data frame or control frame.
 Destination Address: 2-6 bytes that specifies address of destination station.
 Source Address: 2-6 bytes that specifies address of source station.
 Payload: A variable length field that carries the data from the network layer.
 Checksum: 4 bytes frame check sequence for error detection.
 End Delimiter: 1 byte that marks the end of the frame.
FDDI Features

FDDI is an efficient network technology with regard to fault tolerance and integrated network management functions. With its deterministic access method, FDDI guarantees high aggregate throughput rates, even in large and high-traffic networks. FDDI can easily be added to existing network topologies (such as Ethernet and Token Ring) as a strong backbone to eliminate severe network bottlenecks in existing LANs.
FDDI offers the following features:
• High transmission rates (100 Mbps) and bandwidth
• Real throughput rate (20 stations expected) of approx. 95 Mbps
• Large extensions (max. 100 km)
• Great node-to-node distance (2km using multimode fiber, 40 km using single mode fiber)
• Available for both fiber and copper media
• Easier to maintain
• Compatible with standards-based components and various operating systems.
Cabling Requirement

Optical fiber is the transmission medium of FDDI networks, but copper media can also be
used for standard office connections, offering the same transmission rates. In contrast to
copper media, fiber provides the best possible protection against physical network tapping
and offers immunity to electromagnetic interference. As its name indicates, FDDI was developed
around the idea of using optical fiber cable. It is, in fact, the type of cable used, especially
when high-speed transmission is needed over relatively long distances (2000 to 10,000
meters, or roughly 1 to 6 miles). Over shorter distances (about 100 meters, or 330
feet), however, FDDI can also be implemented on less expensive copper cable.
In all, FDDI supports four different types of cable:
• Multimode fiber optic cable: This type of cable can be used over a maximum of 2000
meters and uses LED as a light source.
• Single mode fiber optic cable: This can be used over a maximum of 10,000 meters or
more and uses lasers as a light source. Single mode cable is thinner at the core than
multimode, but it provides higher bandwidth because of the way the light impulse travels
through the cable.
• Unshielded twisted-pair copper wiring: This cable contains eight wires and, like the next
category, can be used over distances up to 30 meters.
• Shielded twisted-pair copper wiring: This is a shielded cable that contains two pairs of
twisted wires, with each pair also shielded.
The Fiber PMD (Physical Medium Dependent)

PMD defines specifications for the physical layer of a network standard, namely, the media
and interface connectors used. As FDDI supports both fiber and copper media, two separate
specifications are defined. They are the Fiber PMD (for optical fiber media) and TP-PMD
(for copper media, specifically twisted-pair). Two other significant PMDs exist. SMF-PMD
(Single Mode Fiber PMD) defines the requirements for single-mode fibers, permitting distances
of 40 to 60 km (in contrast to multimode fibers, which permit distances of at most 2 km).
LCF-PMD (Low-Cost Fiber PMD) was developed to provide a low-priced, fiber-based alternative,
at the cost of a restricted maximum distance between nodes. The duplex SC
connector introduced by LCF-PMD was mainly adopted by the full-power-budget version of the
Fiber PMD.
The Fiber PMD (ANSI X3T9.5/48-48) describes the physical layer that uses fiber and optical
components. It defines the following characteristics and parameters of the optical fiber
cables allowed for FDDI:
• The wavelength of light (nominal wavelength is 1,300 nm)
• Attenuation and bandwidth
• Maximum bit-error rate
• Dispersion of optical media
• The numerical aperture (the sine of the aperture angle for total internal reflection; the
nominal aperture is 0.275)
• Intensity of light
• The jitter of the pulse
• Allowed power between two stations
62.5/125 and 85/125 micrometer graded-index fibers are defined as the transmission media;
50/125 and 100/140 micrometer graded-index fibers are also accepted.
PHY (Physical Layer Protocol)

The PHY document describes the physical processes on the medium (example data
encoding). It defines:
• Data encoding/ decoding.
• Clock synchronization (as FDDI rings can grow rather big, there is no central clock
frequency).
• TVX (Valid Transmission Time).
• Line states (Quiet – no signal on the line; Idle – normal line state; Halt and Master – used
when transmitting configuration data).
• Control symbols (a start delimiter sent when data transmission starts, and an ending
delimiter that marks where the FDDI frame terminates).
• Data symbols (hexadecimal symbols 0 to F).
• Violation symbols (for example, the symbol V means that an illegal signal is on the line).
The maximum number of PHYs per FDDI ring is 1000. A Dual Attachment Station (DAS)
has two PHYs connected directly to the double ring, whereas a Single Attachment Station
(SAS) requires an additional PHY in its concentrator. As each station thus accounts for two
PHYs, the network can accommodate a maximum of 500 stations.
MAC (Medium Access Control) Layer

The MAC (Medium Access Control) layer of FDDI is specified within the data link layer.
It uses token passing as its medium access method and allows each station to access
the ring at precisely defined intervals. The IEEE 802.2 standard is applied in the
LLC (Logical Link Control) layer. Thanks to the LLC protocol, FDDI integrates smoothly with
network topologies such as Ethernet (IEEE 802.3) and Token Ring (IEEE 802.5).
The MAC layer comprises address identification and the generation and checking of the FCS
(Frame Check Sequence) checksum. Besides this, it specifies the transmitting, repeating, and
deleting of MAC frames, as well as the MAC services provided to the LLC layer. Additionally,
the MAC layer specifies how to handle synchronous and asynchronous data traffic. The critical
differences between the IEEE 802.5 protocol and FDDI are as follows:
• In FDDI, a station waiting for a token absorbs the token before transmitting, whereas in the
IEEE 802.5 (Token Ring) protocol the station simply complements the selected bit in the
token. This difference is due to the high data rate of FDDI.
• An FDDI station releases the token as soon as its last frame has been transmitted (early
token release in IEEE 802.5 terms), because waiting for the transmitted frame to return
(standard IEEE 802.5 behavior) is inefficient in a high-speed environment.
SMT (Station Management)

Station management provides the control necessary at the node level to manage the
functions of the various FDDI layers. Individual network control and safety mechanisms are
used for controlling and managing activities on the ring and the connection of every station.
Furthermore, SMT allows reconfiguring the ring in case of malfunction or line disruption. The
SMT function permanently monitors the FDDI ring. It coordinates the configuration during
network start-up and produces status reports on the states of the ring and the stations. The SMT
manages the PHYs, MACs, and PMDs of each station. Besides this, SMT is responsible for
counters, parameters, and statistics.
The functionality of the SMT can be divided into CMT (Connection Management) and RMT
(Ring Management). The CMT is further divided into:
ECM (Entity Coordination Management): Coordinating the activities of all PHYs and
controlling the optional optical bypass function.
PCM (Physical Connection Management): Inserting and removing stations, initializing,
and coordinating physical connections between local and neighboring ports.
CFM (Configuration Management): Responsible for configuring MAC and PHY of an
FDDI station.
Further, each port of an FDDI station has a CEM (Configuration Element Management).
The RMT manages the MAC components, such as changing a MAC address to a unique
address and removing a MAC with a duplicate address from the ring. Further, the RMT
monitors the FDDI ring and manages adequate procedures in case of ring disruption.
FDDI Topology

The FDDI network topology may be viewed at two distinct levels:


• The physical level
• The logical level
Physical topology describes the arrangement and interconnection of nodes with physical
connections. The logical topology describes the paths through the network between MAC
entities. An FDDI network forms one of the two following physical topologies:
• A dual ring of trees.
• A subset of a dual ring of trees.
FDDI uses a dual ring topology. The dual ring topology uses two counter-rotating rings
known as the primary and secondary ring. The primary ring is similar to the main ring path
in token-ring terminology. The secondary ring is similar to the backup ring path of a token-
ring. Each ring consists of a single fiber path, which is equivalent to a pair of copper
conductors. FDDI topology permits many attachment units (stations, concentrators, and
bridges) to attach in various ways. From a wiring point of view, FDDI is similar to a fiber
optic token-ring network; however, there are differences between the token-ring and FDDI
techniques. A device can be attached directly to the ring without requiring a concentrator
such as the Multi-station Access Unit (MAU) on a token-ring. A device can be attached to
either one or both of the primary and secondary rings.
Station Types

To differentiate between devices that attach to one ring or to both rings, FDDI defines two
classes of devices. A Class A device attaches to both rings directly. It may be a station,
in which case it is called a Class A station or a Dual Attachment Station (DAS). It can also
be a concentrator, in which case it is called a Dual Attachment Concentrator (DAC).
A Class B device attaches to only one of the rings, directly or through a concentrator.
Concentrators are active devices that act as wiring hubs. A Class B station is called a
Single Attachment Station (SAS); a Class B concentrator is called a Single Attachment
Concentrator (SAC). During regular ring operation, the primary ring is active while the
secondary ring is idle. In the wake of a failure on the primary ring, the secondary ring
becomes active when a Class A station or a Dual Attachment Concentrator wraps the primary
ring onto the secondary ring, establishing a single ring. This functionality is essential to
the reliability of the LAN. Figure (a) illustrates the dual ring topology.
The FDDI dual ring configuration consists of Dual Attachment Stations (DAS). A dual
attached station on the ring has at least two ports: an A port, where the primary ring comes
in and the secondary ring goes out, and a B port, where the secondary ring comes in and the
primary ring goes out. Each station has both ports (A and B) attached to the rings. The
cabling between the stations has to be all fiber or shielded twisted-pair (STP).
Dual Homing

To attain better fault tolerance, a particular topology known as dual homing is used. A
concentrator that is not part of the main ring may be dual-attached via one or two other
concentrators to provide higher availability. When connected in this manner, the concentrator
is described as a Dual Homing Concentrator (DHC). Similarly, a Dual Attachment Station can
be connected to one or two concentrators using both its A and B ports to provide high
availability; a station connected in this manner is considered a Dual Homing Station
(DHS). In both cases, only port B is active, and the connection to port A remains in standby
mode. If the connection on port B fails, port A becomes active without any impact on the
users of the dual-homed station or concentrator. Figure (b) illustrates the dual homing
technique.
Operation of FDDI

FDDI topology and operation are similar to Token Ring. The sequence in which stations
gain access to the medium is predetermined. A station generates a particular bit sequence
called a Token that controls the right to transmit. The Token is continually passed around
the network from one node to the next. Each station has the chance to transmit data when a
token passes. A station can decide how many frames it transmits using an algorithm that
permits bandwidth allocation. FDDI also allows a station to transmit many frames without
releasing the token. When a station has some data to send, it captures the token, sends the
information in the form of well-formatted FDDI frames, and then releases the token. The
header of each frame includes the address of the station(s) that should copy the frame. All nodes
read the frame as it passes around the ring to determine whether they are its recipients;
if they are, they extract the data. In either case, each node retransmits the frame to the next
station on the ring. When the frame returns to the originating station, the originating station
removes the frame. The token-access control scheme thus allows all stations to share the network
bandwidth in an orderly and efficient manner.
Generally, in an FDDI network, one ring (known as the primary ring) carries the tokens and
data frames, while the secondary ring remains idle and is used as a backup for fault tolerance.
Because the secondary ring is available if needed, whenever a nonfunctioning node causes a
break in the primary ring, traffic can "wrap" around the failed node and continue to carry
data, only in the opposite direction and on the secondary ring. That way, even if a node goes
down, the network continues to function. Of course, it is also possible for two nodes to fail.
When this happens, the wraps at both locations effectively segment the one ring into two
separate, non-communicating rings. To avoid this potentially dangerous problem, FDDI
networks can rely on concentrators. These concentrators resemble hubs or MAUs in that
multiple nodes plug into them. They are also able to isolate any failed nodes while keeping
the network traffic flowing. Sometimes both rings are used for data transfer. In this case, the
data travels in one direction (clockwise) on one ring and in the other direction
(counterclockwise) on the other ring. Using both rings to carry data doubles the number of
frames carried, so the speed of the network can double from 100 Mbps to 200 Mbps.
Frame Format

An FDDI frame is very similar to that defined by the traditional Token Ring, but it has only
eight fields. The token and frame formats for FDDI are shown in Figures (a) and (b),
respectively. The control token structure closely resembles that of the Token Ring; as in the
Token Ring data token seen earlier, however, the access control byte (octet) is missing. The
eight fields that make up the FDDI frame are:
Start Delimiter: The Start Delimiter of a token indicates the start of the token. It
consists of the symbols 'J' and 'K', and these symbols are not seen anywhere else but at the
start of a frame or token.
Frame Control: The frame control gives information about the type of the token. A value of
hexadecimal 80 in the frame control field denotes a non-restricted token, while a frame
control of hexadecimal C0 denotes a restricted token.
Destination Address: This is a 12-symbol code that indicates the identity of the station to
which the frame is to be sent. Each station has a unique 12-symbol address that identifies it.
When a station receives a frame, it compares the DA of that frame to its own address; if the
two match, the station copies the contents of the frame into its buffers.
The destination address can be an individual address or a group address, depending on its
first bit. If the first bit is set to 1, the address is a group address; if it is set to 0, the address
is an individual address. Group addresses can be used to address a frame to multiple
destination stations. A broadcast address is a particular type of group address that applies to
all of the stations on the network.
Source Address: This field indicates the address of the station that created the frame. In
FDDI, when a station creates a frame, the frame is passed from one station to the next until
it returns to the originating station. The originating station removes the frame from the
physical medium.
Data: This field carries the actual information to be conveyed. Every frame is mostly built
around this field; the rest is merely a mechanism for getting the information from one station
to another. The type of information contained in the data field is determined from the Frame
Control field of the frame.
Cyclic Redundancy Check (CRC): The CRC is used to verify whether the incoming frame
contains any bit errors. The FCS is generated by the station that sourced the frame, over the
bits of the FC, DA, SA, and Data fields. The CRC is generated such that, should any of the
bits in those fields be altered, the receiving station will notice that there is a problem and
discard the frame.
Ending Delimiter: This field consists of two ‘T’ symbols. These ‘T’ symbols indicate that
the token is complete. Any data sequence that does not end with these ‘T’ symbols is not
considered a token.
Frame Status: The Frame Status consists of three indicators, each of which may take one of
two values, set and reset. The three indicators are Error ('E'), Address recognized (or
Acknowledge) ('A'), and Copy ('C').
FDDI Token Passing

Token passing on an FDDI network works much the way it does on a Token Ring network:
nodes pass a token around the ring, and only the node holding the token is allowed to
transmit a frame. There is a twist, however, related to FDDI fault tolerance.
When a node on the ring detects a problem, it does not stay idle. Instead, it generates a
control frame known as a beacon and sends it onto the network. As neighboring nodes detect
the beacon, they too begin to transmit beacons, and so it goes around the ring. When the node
that started the process eventually receives its own beacon back, usually after the network has
switched to the secondary ring, it assumes that the problem has been isolated or
resolved, generates a new token, and starts the ball rolling once again.
Structure of a FDDI Network

An FDDI network, as already mentioned, cannot include rings longer than 100 kilometers.
Another restriction on an FDDI network is that it cannot support more than 500
nodes per ring. Although the overall network topology must conform to a logical ring, the
network does not have to look like a circle. It can include stars connected to hubs or
concentrators, and it can even include trees, i.e., collections of hubs connected in a hierarchy.
As long as the stars and trees connect in a logical ring, the FDDI network does not face any
problem.
Existing networks such as Ethernet and Token Ring can be integrated via workgroup
switches or routers into an FDDI backbone. File servers should be connected directly to the
FDDI backbone to reduce data load and to provide reasonable access times for users. The
availability of FDDI adapters for twisted pair allows smooth migration of existing cabling to
FDDI. High-end PC workstations with applications like CAD, CIM, CAM, DTP, or image
processing can be connected directly to the FDDI ring via FDDI network interface cards and
concentrators.
Ethernet

Ethernet is a set of technologies and protocols used primarily in LANs. It was first
standardized in the 1980s as the IEEE 802.3 standard. IEEE 802.3 defines the physical layer and the
medium access control (MAC) sublayer of the data link layer for wired Ethernet networks.
Ethernet is classified into two categories: classic Ethernet and switched Ethernet.
Classic Ethernet is the original form of Ethernet, providing data rates between 3 and 10
Mbps. The varieties are commonly referred to as 10BASE-X. Here, 10 is the maximum
throughput (10 Mbps), BASE denotes the use of baseband transmission, and X is the type of
medium used. Most varieties of classic Ethernet have become obsolete in present-day
communication.
A switched Ethernet uses switches to connect to the stations in the LAN. It replaces the
repeaters used in classic Ethernet and allows full bandwidth utilization.
IEEE 802.3 Popular Versions
There are a number of versions of IEEE 802.3 protocol. The most popular ones are -
 IEEE 802.3: This was the original standard given for 10BASE-5. It used a thick
single coaxial cable into which a connection can be tapped by drilling into the cable to
the core. Here, 10 is the maximum throughput, i.e. 10 Mbps, BASE denoted use of
baseband transmission, and 5 refers to the maximum segment length of 500m.
 IEEE 802.3a: This gave the standard for thin coax (10BASE-2), which is a thinner
variety where the segments of coaxial cables are connected by BNC connectors. The 2
refers to the maximum segment length of about 200m (185m to be precise).
 IEEE 802.3i: This gave the standard for twisted pair (10BASE-T) that uses
unshielded twisted pair (UTP) copper wires as physical layer medium. The further
variations were given by IEEE 802.3u for 100BASE-TX, 100BASE-T4 and
100BASE-FX.
 IEEE 802.3j: This gave the standard for Ethernet over fiber (10BASE-F), which uses
fiber optic cables as the medium of transmission.

Frame Format of Classic Ethernet and IEEE 802.3


The main fields of a frame of classic Ethernet are -
 Preamble: It is the starting field that provides an alert and timing pulse for transmission.
In classic Ethernet it is an 8-byte field; in IEEE 802.3 it is 7 bytes.
 Start of Frame Delimiter: It is a 1-byte field in an IEEE 802.3 frame that contains an
alternating pattern of ones and zeros ending with two consecutive ones.
 Destination Address: It is a 6 byte field containing physical address of destination
stations.
 Source Address: It is a 6 byte field containing the physical address of the sending
station.
 Length: It is a 2-byte field that stores the number of bytes in the data field.
 Data: This is a variable-sized field that carries the data from the upper layers. The
maximum size of the data field is 1500 bytes.
 Padding: This is added to the data to bring its length to the minimum requirement of
46 bytes.
 CRC: CRC stands for cyclic redundancy check. It contains the error detection
information.
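The minimum-padding rule and the field sizes above can be combined into a toy frame builder. This Python sketch omits the preamble and SFD, and uses zlib's CRC-32, which shares Ethernet's polynomial but glosses over the exact bit ordering of the real FCS, so treat it as an illustration of the layout rather than a wire-compatible encoder.

```python
import zlib

def build_8023_frame(dst, src, payload):
    """Lay out an IEEE 802.3 frame body (no preamble/SFD): 6-byte
    addresses, 2-byte length, data padded to the 46-byte minimum, and a
    4-byte CRC-32 as a stand-in for the frame check sequence."""
    assert len(dst) == len(src) == 6
    data = payload + b"\x00" * max(0, 46 - len(payload))   # padding
    length = len(payload).to_bytes(2, "big")               # 2-byte length field
    body = dst + src + length + data
    fcs = zlib.crc32(body).to_bytes(4, "little")
    return body + fcs

frame = build_8023_frame(b"\xff" * 6, b"\x00\x0a\x89\x5b\xf0\x11", b"hi")
print(len(frame))
```

Note that even a 2-byte payload yields a 64-byte frame (6 + 6 + 2 + 46 + 4), which is Ethernet's minimum frame size excluding the preamble.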

Fast Ethernet and Gigabit Ethernet are high-speed varieties of Ethernet. Following are the
important differences between Fast Ethernet and Gigabit Ethernet.

Sr. No. | Key | Fast Ethernet | Gigabit Ethernet
1 | Successor | Fast Ethernet is the successor of 10BASE-T Ethernet. | Gigabit Ethernet is the successor of Fast Ethernet.
2 | Network speed | Up to 100 Mbps. | Up to 1 Gbps.
3 | Complexity | Simple to configure. | Quite complex to configure.
4 | Delay | Generates more delay. | Generates less delay than Fast Ethernet.
5 | Coverage limit | Up to 10 km. | Up to 70 km.
6 | Round trip delay | 100 to 500 bit times. | 4000 bit times.

Token Ring
Token ring (IEEE 802.5) is a communication protocol in a local area network (LAN) where
all stations are connected in a ring topology and pass one or more tokens for channel
acquisition. A token is a special frame of 3 bytes that circulates along the ring of stations. A
station can send data frames only if it holds a token. The tokens are released on successful
receipt of the data frame.
Token Passing Mechanism in Token Ring
If a station has a frame to transmit when it receives a token, it sends the frame and then
passes the token to the next station; otherwise it simply passes the token to the next station.
Passing the token means receiving the token from the preceding station and transmitting to
the successor station. The data flow is unidirectional in the direction of the token passing. In
order that tokens are not circulated infinitely, they are removed from the network once their
purpose is completed. This is shown in the following diagram −
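One circulation of the token can be simulated in a few lines; the one-frame-per-token policy and the per-station queues below are simplifying assumptions for illustration.

```python
from collections import deque

def token_ring(stations, frames):
    """Simulate one circulation of the token: visiting stations in ring
    order, each station that holds the token transmits at most one queued
    frame, then passes the token on."""
    delivered = []
    for station in stations:              # token visits stations in ring order
        if frames.get(station):
            delivered.append((station, frames[station].popleft()))
        # token is then passed to the next station
    return delivered

queues = {"A": deque(["a1"]), "B": deque(), "C": deque(["c1", "c2"])}
print(token_ring(["A", "B", "C"], queues))  # [('A', 'a1'), ('C', 'c1')]
```

Because only the token holder transmits, the channel is collision-free by construction, which is the essential property of token passing.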
Token Bus
Token Bus (IEEE 802.4) is a standard for implementing token ring over virtual ring in LANs.
The physical media has a bus or a tree topology and uses coaxial cables. A virtual ring is
created with the nodes/stations and the token is passed from one node to the next in a
sequence along this virtual ring. Each node knows the address of its preceding station and its
succeeding station. A station can only transmit data when it has the token. The working
principle of token bus is similar to Token Ring.
Token Passing Mechanism in Token Bus
A token is a small message that circulates among the stations of a computer network,
granting the stations permission to transmit. If a station has data to transmit when it
receives the token, it sends the data and then passes the token to the next station; otherwise,
it simply passes the token on.
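The virtual ring can be sketched as follows, assuming (as in IEEE 802.4) that the logical ring follows descending station addresses; the function name and sample addresses are illustrative.

```python
def build_virtual_ring(addresses):
    """Arrange bus stations into a logical ring in descending address
    order (as in IEEE 802.4); map each node to its successor."""
    ordered = sorted(addresses, reverse=True)
    # the lowest-address station wraps around to the highest
    return {addr: ordered[(i + 1) % len(ordered)] for i, addr in enumerate(ordered)}

print(build_virtual_ring([30, 10, 50, 20]))
# {50: 30, 30: 20, 20: 10, 10: 50}: the token visits 50, 30, 20, 10, then 50 again
```

Because each node only needs to know its predecessor and successor in this mapping, the physical bus order of the stations is irrelevant to the token's path.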
Differences between Token Ring and Token Bus
Token Ring | Token Bus
The token is passed over the physical ring formed by the stations and the coaxial cable network. | The token is passed along a virtual ring of stations connected to a LAN.
The stations are connected in a ring topology, or sometimes a star topology. | The underlying topology that connects the stations is either a bus or a tree topology.
It is defined by the IEEE 802.5 standard. | It is defined by the IEEE 802.4 standard.
The maximum time for a token to reach a station can be calculated here. | It is not feasible to calculate the time for token transfer.

Wireless LANs (WLANs)


Wireless LANs (WLANs) are wireless computer networks that use high-frequency radio
waves instead of cables for connecting the devices within a limited area forming LAN (Local
Area Network). Users connected by wireless LANs can move around within this limited area
such as home, school, campus, office building, railway platform, etc.
Most WLANs are based on the IEEE 802.11 standard, commonly known as Wi-Fi.
Components of WLANs
The components of WLAN architecture as laid down in IEEE 802.11 are −
 Stations (STA) − Stations comprise all devices and equipment connected to the
wireless LAN. Each station has a wireless network interface controller. A
station can be of two types −
o Wireless Access Point (WAP or AP)
o Client
 Basic Service Set (BSS) − A basic service set is a group of stations communicating at
the physical layer level. BSS can be of two categories −
o Infrastructure BSS
o Independent BSS
 Extended Service Set (ESS) − It is a set of all connected BSS.
 Distribution System (DS) − It connects access points in ESS.

Types of WLANS
WLANs, as standardized by IEEE 802.11, operate in two basic modes: infrastructure mode and ad
hoc mode.
 Infrastructure Mode − Mobile devices or clients connect to an access point (AP)
that in turn connects via a bridge to the LAN or Internet. The client transmits frames
to other clients via the AP.
 Ad Hoc Mode − Clients transmit frames directly to each other in a peer-to-peer
fashion.
Advantages of WLANs
 They provide clutter-free homes, offices and other networked places.
 The LANs are scalable in nature, i.e. devices may be added to or removed from the
network with greater ease than in wired LANs.
 The system is portable within the network coverage. Access to the network is not
bounded by the length of the cables.
 Installation and setup are much easier than wired counterparts.
 The equipment and setup costs are reduced.
Disadvantages of WLANs
 Since radio waves are used for communication, the signals are noisier, with more
interference from nearby systems.
 Greater care is needed to encrypt information. WLANs are also more prone to errors,
so they require greater bandwidth than wired LANs.
 WLANs are slower than wired LANs.

 Switch
A switch is a hardware device that channels the data arriving on its various input
ports to the particular output port that will carry the data toward its destination.
 It is mainly used to transfer data packets among network devices such as routers
and servers. It is a data link layer device (layer 2 device), which ensures that the
data packets being forwarded are error-free and accurate.
 The switch uses the destination MAC address to forward frames at the data link
layer. Since the switch accepts data from multiple ports, it is also called a
multiport bridge.
 Bridge
A bridge is a device that divides a single network into multiple network segments.
 The process of dividing a single network into multiple network segments is called
network bridging.
 Each network segment represents a separate collision domain with its own
bandwidth. Using a bridge can improve network performance, because the number
of collisions occurring on the network is reduced.
 The bridge decides whether incoming network traffic should be forwarded or
filtered. The bridge is also responsible for maintaining the MAC (media access
control) address table.
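The MAC-learning behaviour common to bridges and switches can be sketched as below. This is a simplified model with invented names (`make_learning_switch`, the example MACs and port numbers); real devices also age out table entries and treat broadcast addresses specially.

```python
def make_learning_switch():
    """Return a frame handler that learns source MAC addresses and
    forwards each frame either to the learned port or, if the
    destination is unknown, floods it out of all other ports."""
    mac_table = {}  # MAC address -> port on which it was last seen

    def handle_frame(src_mac, dst_mac, in_port, all_ports):
        mac_table[src_mac] = in_port                    # learn the sender's port
        if dst_mac in mac_table:
            return [mac_table[dst_mac]]                 # known destination: forward
        return [p for p in all_ports if p != in_port]   # unknown: flood
    return handle_frame

fwd = make_learning_switch()
ports = [1, 2, 3, 4]
print(fwd("aa", "bb", 1, ports))  # [2, 3, 4]: "bb" unknown, so flood
print(fwd("bb", "aa", 2, ports))  # [1]: "aa" was learned on port 1
```

The first frame is flooded because the table is empty; the reply is forwarded out a single port because the switch has already learned where "aa" lives. This learning step is exactly the filtering/forwarding decision described in the bridge bullets above.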
Difference between switch and bridge:
S.No. | Switch | Bridge
1. | A switch channels the data coming into its various input ports to the particular output port that will carry the data toward its destination. | A bridge divides a single network into multiple network segments.
2. | A switch can have a lot of ports. | A bridge can have 2 or 4 ports only.
3. | A switch performs packet forwarding in hardware, such as ASICs, hence it is hardware based. | A bridge performs packet forwarding in software, so it is software based.
4. | The switching method of a switch can be store-and-forward, fragment-free, or cut-through. | The switching method of a bridge is store-and-forward.
5. | A switch performs error checking. | A bridge cannot perform error checking.
6. | A switch has buffers. | A bridge may not have a buffer.
