UNIT-3 COMPUTER NETWORKS

The document discusses the Medium Access Control (MAC) sublayer, which is responsible for coordinating access to a shared communication channel in networks. It outlines channel allocation methods, including static and dynamic allocation, as well as various multiple access protocols such as ALOHA and Carrier Sense Multiple Access (CSMA) with their respective variations. Additionally, it covers Ethernet technology, its standards, and the physical layer configurations used in local area networks.

THE MEDIUM ACCESS SUB LAYER:

To coordinate access to the channel, multiple access protocols are
required. All these protocols belong to the MAC sublayer. The Data Link layer
is divided into two sublayers:
1. Logical Link Control (LLC) – responsible for error control and flow control.
2. Medium Access Control (MAC) – responsible for multiple access resolution.

3.1 THE CHANNEL ALLOCATION PROBLEM

In broadcast networks, a single channel is shared by several stations. This
channel can be allocated to only one transmitting user at a time. There are
two different methods of channel allocation:

1. Static Channel Allocation – a single channel is divided among various
users either on the basis of frequency (FDM) or on the basis of time
(TDM). In FDM, a fixed frequency band is assigned to each user, whereas in
TDM, a fixed time slot is assigned to each user.
2. Dynamic Channel Allocation – no user is assigned a fixed frequency or
fixed time slot. All users are dynamically assigned a frequency or time
slot, depending upon their requirements.
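
As a quick illustration of the difference, the sketch below (Python, my own example, not from the text) shows how a static allocator might hand out fixed FDM sub-bands or fixed TDM slots to N users regardless of whether they currently have data to send; a dynamic scheme would instead grant the channel on demand.

```python
# Minimal sketch (not from the text): how a fixed channel might be split
# statically among N users by frequency (FDM) or by time (TDM).

def fdm_allocation(total_bandwidth_hz: float, n_users: int):
    """Each user gets a fixed, equal sub-band for the whole session."""
    band = total_bandwidth_hz / n_users
    return [(i * band, (i + 1) * band) for i in range(n_users)]  # (low, high) per user

def tdm_allocation(n_users: int, n_slots: int):
    """Slots are assigned round-robin; user i always owns slots i, i+N, i+2N, ..."""
    return {slot: slot % n_users for slot in range(n_slots)}

if __name__ == "__main__":
    print(fdm_allocation(1_000_000, 4))   # four 250 kHz sub-bands
    print(tdm_allocation(4, 8))           # slot -> user mapping over one frame of 8 slots
```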
3.2 MULTIPLE ACCESS PROTOCOLS
Many protocols have been defined to handle access to a shared link.
These protocols are organized in three different groups:
 Random Access Protocols
 Controlled Access Protocols
 Channelization Protocols

Fig 3.1 Types of multiple access protocols


3.2.1 Random Access Protocols
There is no rule that decides which station should send next. If two
stations transmit at the same time, there is collision and the frames are
lost. The various random access methods are:
1. ALOHA
2. CSMA (Carrier Sense Multiple Access)
3. CSMA/CD (Carrier Sense Multiple Access with Collision Detection)
4. CSMA/CA (Carrier Sense Multiple Access with Collision Avoidance)

3.2.1.1 ALOHA
ALOHA was developed at University of Hawaii in early 1970s by Norman
Abramson. It was used for ground based radio broadcasting. In this
method, stations share a common channel. When two stations transmit
simultaneously, collision occurs and frames are lost. There are two
different versions of ALOHA:
 Pure ALOHA
 Slotted ALOHA
Fig 3.2 protocol flow chart for ALOHA
Pure ALOHA
In pure ALOHA, stations transmit frames whenever they have data to send.
When two stations transmit simultaneously, there is collision and frames
are lost. In pure ALOHA, whenever any station transmits a frame, it
expects an acknowledgement from the receiver. If acknowledgement is not
received within specified time, the station assumes that the frame has
been lost. If the frame is lost, station waits for a random amount of time
and sends it again. This waiting time must be random; otherwise, same
frames will collide again and again. Whenever two frames try to occupy the
channel at the same time, there will be collision and both the frames will be
lost. If first bit of a new frame overlaps with the last bit of a frame almost
finished, both frames will be lost and both will have to be retransmitted.

Fig 3.3 ALOHA Protocol

The probability of having k arrivals during a time interval of length t is given by:

P_k(t) = (λt)^k e^(−λt) / k!

where λ is the arrival rate. Note that this is a single-parameter model; all
we have to know is λ.

Analysis of Pure ALOHA:

 Notation:
– Tf = frame time (processing, transmission, propagation)
– S: average number of successful transmissions per Tf; that is, the throughput or efficiency.
– G: average number of total frames transmitted per Tf
– D: average delay between the time a packet is ready for transmission and the completion of successful transmission.
The following assumptions are made:
– All frames are of constant length.
– The channel is noise-free; the errors are only due to collisions.
– Frames do not queue at individual stations.
– The channel acts as a Poisson process.
Since S represents the number of “good” transmissions per frame time,
and G represents the total number of attempted transmissions per frame time,
then we have:

S = G × (probability of good transmission)

• The vulnerable time for a successful transmission is 2Tf.
• So, the probability of a good transmission is the probability of having no “arrival” during the vulnerable time.

Fig 3.4 collision of frames


Using

P_k(t) = (λt)^k e^(−λt) / k!

and setting t = 2Tf and k = 0, we get

P_0(2Tf) = (λ · 2Tf)^0 e^(−2λTf) / 0! = e^(−2G)

because G = λTf. Thus, S = G e^(−2G).

If we differentiate S = G e^(−2G) with respect to G, set the result to 0 and
solve for G, we find that the maximum occurs at G = 0.5, where S
= 1/(2e) ≈ 0.184. So, the maximum throughput is only about 18% of capacity.

Slotted ALOHA
Slotted ALOHA was invented to improve the efficiency of pure ALOHA.
In slotted ALOHA, time of the channel is divided into intervals called slots.
The station can send a frame only at the beginning of the slot and only one
frame is sent in each slot. If any station is not able to place the frame onto
the channel at the beginning of the slot, it has to wait until the next time
slot. There is still a possibility of collision if two stations try to send at the
beginning of the same time slot.

Analysis of Slotted ALOHA


Note that the vulnerable period is now reduced by half. Using

P_k(t) = (λt)^k e^(−λt) / k!

and setting t = Tf and k = 0, we get

P_0(Tf) = (λTf)^0 e^(−λTf) / 0! = e^(−G)

because G = λTf. Thus, S = G e^(−G). Differentiating and setting the result to 0
shows that the maximum occurs at G = 1, where S = 1/e ≈ 0.368, roughly double
the maximum throughput of pure ALOHA.

Fig 3.5 Throughput versus offered traffic for ALOHA systems.
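
The two throughput formulas derived above, S = G·e^(−2G) for pure ALOHA and S = G·e^(−G) for slotted ALOHA, are easy to evaluate numerically. The following minimal Python sketch (my own illustration) reproduces the peak values shown in Fig 3.5.

```python
# Minimal sketch (assumptions: S = G*exp(-2G) for pure ALOHA and
# S = G*exp(-G) for slotted ALOHA, as derived above).
import math

def pure_aloha_throughput(G: float) -> float:
    return G * math.exp(-2 * G)

def slotted_aloha_throughput(G: float) -> float:
    return G * math.exp(-G)

if __name__ == "__main__":
    # Maximum throughput occurs at G = 0.5 (pure) and G = 1.0 (slotted).
    print(round(pure_aloha_throughput(0.5), 3))    # ~0.184 -> about 18% of capacity
    print(round(slotted_aloha_throughput(1.0), 3)) # ~0.368 -> about 37% of capacity
```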

3.3.2 Carrier Sense Multiple Access (CSMA)

CSMA stands for Carrier Sense Multiple Access. Carrier sense means that
stations have an additional capability: they can sense the channel (carrier)
and tell whether it is in use or not. Ideally, at the start of a slot, stations
should sense the channel first and then act accordingly.
CSMA was developed to overcome the problems of ALOHA, i.e. to
minimize the chances of collision. The chances of collision reduce to a
great extent if a station checks the channel before trying to use it.
There are three different types of CSMA protocols:
1. 1-Persistent CSMA
2. Non-Persistent CSMA
3. P-Persistent CSMA

3.3.2.1 1-Persistent CSMA

In this method, a station that wants to transmit data continuously
senses the channel to check whether the channel is idle or busy. If the
channel is busy, the station waits until it becomes idle. When the station
detects an idle channel, it immediately transmits the frame. This method
has the highest chance of collision because two or more stations may find
the channel idle at the same time and transmit their frames.

3.3.2.2 Non-Persistent CSMA


A station that has a frame to send senses the channel. If the channel
is idle, it sends immediately. If the channel is busy, it waits a random
amount of time and then senses the channel again. It reduces the chance
of collision because the stations wait for a random amount of time. It is
unlikely that two or more stations will wait for the same amount of time and
will retransmit at the same time.

3.3.2.3 P-Persistent CSMA


In this method, the channel has time slots such that the slot
duration is equal to or greater than the maximum propagation delay time.
When a station is ready to send, it senses the channel. If the channel is
busy, the station waits until the next slot. If the channel is idle, it transmits
the frame with probability p; with probability 1 − p it defers to the next slot
and senses the channel again. This reduces the chance of collision and
improves the efficiency of the network.
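
The three persistence strategies differ only in the rule a station applies when it senses the channel. The Python sketch below (an illustration, not a full MAC implementation; channel_idle, transmit and the wait helpers are hypothetical placeholders) contrasts the three decision rules.

```python
# Minimal sketch (not a full MAC implementation): the decision rule each CSMA
# variant applies. channel_idle(), transmit(), wait() and wait_one_slot() are
# hypothetical placeholders supplied by the caller.
import random

def one_persistent(channel_idle, transmit):
    while not channel_idle():          # keep sensing continuously
        pass
    transmit()                         # send as soon as the channel goes idle

def non_persistent(channel_idle, transmit, wait):
    while not channel_idle():
        wait(random.uniform(0, 1))     # back off a random time, then sense again
    transmit()

def p_persistent(channel_idle, transmit, wait_one_slot, p=0.3):
    while True:
        if channel_idle():
            if random.random() < p:    # transmit with probability p
                transmit()
                return
            wait_one_slot()            # with probability 1-p, defer one slot
        else:
            wait_one_slot()            # channel busy: wait for the next slot
```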

3.3.2.4 CSMA with Collision Detection (CSMA/CD)


In this protocol, the station senses the channel before transmitting
the frame. If the channel is busy, the station waits. The additional feature in
CSMA/CD is that stations can detect collisions and abort their
transmission as soon as a collision is detected. This feature is not present
in plain CSMA, where stations continue to transmit even after a collision
has occurred.
In CSMA/CD, the station that sends its data on the channel, continues
to sense the channel even after data transmission. If collision is detected,
the station aborts its transmission and waits for a random amount of time &
sends its data again. As soon as a collision is detected, the transmitting
stations release a jam signal. Jam signal alerts other stations. Stations are
not supposed to transmit immediately after the collision has occurred.
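
A minimal Python sketch of the retransmission behaviour described above follows. It assumes binary exponential backoff, the scheme standard Ethernet uses; the channel, collision-detection and jam-signal primitives are hypothetical callbacks, and the slot time shown is the classic 10 Mbps Ethernet value.

```python
# Minimal sketch of CSMA/CD retransmission with binary exponential backoff.
# channel, send, detect_collision, send_jam and wait are hypothetical callbacks.
import random

SLOT_TIME = 51.2e-6    # seconds; classic 10 Mbps Ethernet slot time (assumption)
MAX_ATTEMPTS = 16

def csma_cd_send(frame, channel, send, detect_collision, send_jam, wait):
    for attempt in range(1, MAX_ATTEMPTS + 1):
        while channel.busy():           # carrier sense: defer while busy
            pass
        send(frame)
        if not detect_collision():      # keep sensing while transmitting
            return True                 # success
        send_jam()                      # jam signal alerts the other stations
        k = min(attempt, 10)            # backoff range is capped at 2^10 slots
        wait(random.randint(0, 2**k - 1) * SLOT_TIME)
    return False                        # give up after 16 attempts
```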

Fig 3.6 Flowchart for CSMA/CD

CSMA/CD can be in one of three states: contention, transmission, or idle.

Fig 3.7 Frame format for CSMA/CD


3.3.2.5 CSMA with Collision Avoidance (CSMA/CA)
This protocol is used in wireless networks because they cannot detect
the collision. So, the only solution is collision avoidance. It avoids the
collision by using three basic techniques:
 Interframe Space
 Contention Window
 Acknowledgements

Interframe Space: Whenever the channel is found idle, the station does not
transmit immediately. It waits for a period of time called Interframe Space
(IFS). When channel is sensed idle, it may be possible that some distant
station may have already started transmitting. Therefore, the purpose of
IFS time is to allow this transmitted signal to reach its destination. If after
this IFS time, channel is still idle, the station can send the frames.

Contention Window: The contention window is an amount of time divided into
slots. A station that is ready to send chooses a random number of slots as its
waiting time. The number of slots in the window changes with time: it is
set to one slot the first time, and then doubles each time the station cannot
detect an idle channel after the IFS time. During the contention window, the
station needs to sense the channel after each time slot.

Acknowledgment: Despite all the precautions, collisions may occur and
destroy the data. A positive acknowledgement and a time-out timer help
guarantee that the receiver has received the frame.
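
The following Python sketch (an illustration under the assumptions noted in the comments, not a full 802.11 implementation) ties the three techniques together: wait an IFS, count down a random backoff chosen from the contention window, and double the window when no acknowledgement arrives.

```python
# Minimal sketch of the IFS + contention-window logic described above. The
# channel-sensing and frame primitives are hypothetical; the window starts at
# one slot (as in the text) and the CW_MAX cap is an assumption.
import random

CW_MIN, CW_MAX = 1, 1024

def csma_ca_send(frame, channel_idle, wait_ifs, wait_slot, transmit, wait_for_ack):
    cw = CW_MIN
    while True:
        while not channel_idle():
            pass
        wait_ifs()                              # interframe space
        if not channel_idle():
            continue                            # someone else grabbed the channel
        backoff = random.randint(0, cw - 1)     # pick a random slot in the window
        while backoff > 0:
            wait_slot()
            if channel_idle():
                backoff -= 1                    # count down only while the channel is idle
        transmit(frame)
        if wait_for_ack():                      # positive ACK confirms delivery
            return True
        cw = min(cw * 2, CW_MAX)                # no ACK: double the window and retry
```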

Fig 3.8 Flow chart for CSMA/CA

Limited Contention Protocols

Limited contention protocols are media access control (MAC) protocols that
combine the advantages of collision-based protocols and collision-free
protocols. They behave like slotted ALOHA under light loads and like bit-map
protocols under heavy loads.

Concept

In computer networks, when more than one station tries to transmit
simultaneously via a shared channel, the transmitted data is garbled, an
event called collision. In collision-based protocols like ALOHA, all stations
are permitted to transmit a frame without trying to detect whether the
transmission channel is idle or busy. In slotted ALOHA, the shared channel is
divided into a number of discrete time intervals called slots. Any station
having a frame can start transmitting at the beginning of a slot. Since this
works very well under light loads, limited contention protocols behave like
slotted ALOHA under low loads.
However, as the load increases, there is exponential growth in the number of
collisions, and so the performance of slotted ALOHA degrades rapidly. So,
under high loads, collision-free protocols like the bit-map protocol work
best. In collision-free protocols, channel access is resolved in the
contention period, and so the possibility of collisions is eliminated. In the
bit-map protocol, the contention period is divided into N slots, where N is
the total number of stations sharing the channel. If a station has a frame to
send, it sets the corresponding bit in its slot. So, before transmission,
each station knows whether the other stations want to transmit. Collisions
are avoided by mutual agreement among the contending stations on who gets
the channel.

Working Principle

Limited contention protocols divide the contending stations into groups,
which may or may not be disjoint. At slot 0, only stations in group 0 can
compete for channel access; at slot 1, only stations in group 1 can compete
for channel access, and so on. In this process, if a station successfully
acquires the channel, it transmits its data frame. If there is a collision or
there are no stations competing for a given slot in a group, the stations of
the next group can compete for the slot.
By dynamically changing the number of groups and the number of stations
allotted to a group according to the network load, the protocol changes from
slotted ALOHA under low loads to the bit-map protocol under high loads. Under
low loads, there is only one group containing all stations, which is the case
of slotted ALOHA. As the load increases, more groups are added and the size
of each group is reduced. When the load is very high, each group has just one
station, i.e. only one station can compete in a slot, which is the case of
the bit-map protocol.

The performance of a limited contention protocol is highly dependent on the
algorithm used to dynamically adjust the group configuration to changes in
the network environment.
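
A minimal sketch of the grouping idea, in Python and using my own crude load heuristic rather than any specific textbook algorithm, is given below: one large group under light load (slotted-ALOHA-like behaviour) shrinking to one station per group under heavy load (bit-map-like behaviour).

```python
# Minimal sketch (my own illustration, not the textbook algorithm) of the idea
# above: under light load all stations form one group; as load grows the
# stations are split into more, smaller groups, down to one station per group.

def make_groups(stations, estimated_load):
    """Split `stations` into groups whose size shrinks as the load estimate grows."""
    n = len(stations)
    # crude heuristic: group size shrinks in proportion to the estimated load
    group_size = max(1, n // max(1, int(estimated_load)))
    return [stations[i:i + group_size] for i in range(0, n, group_size)]

if __name__ == "__main__":
    stations = list(range(8))
    print(make_groups(stations, estimated_load=1))  # [[0..7]]        -> slotted-ALOHA behaviour
    print(make_groups(stations, estimated_load=8))  # [[0], ..., [7]] -> bit-map behaviour
```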

Wireless LAN Protocols

3.4 ETHERNET

Ethernet, developed in 1976, is the most widely installed LAN
technology and typically uses coaxial or UTP cable. Ethernet technology
uses a broadcast topology with baseband signaling and a control method
called Carrier Sense Multiple Access/Collision Detection (CSMA/CD) to
transmit data. The IEEE 802.3 standard defines Ethernet protocols for the
Open Systems Interconnection (OSI) Media Access Control (MAC) sublayer
and physical layer network characteristics. The IEEE 802.2 standard defines
protocols for the Logical Link Control (LLC) sublayer.
The most commonly installed Ethernet systems are called 10BASE-T,
which provides transmission speeds up to 10 Mbps. 'Fast Ethernet' or
100BASE-T provides transmission speeds up to 100 megabits per second
and is typically used for servers, LAN backbone systems and in
workstations with high-bandwidth needs. Gigabit Ethernet provides an even
faster level of backbone support at 1000 megabits per second (1 gigabit or
1 billion bits per second).
Ethernet is a passive, contention-based broadcast technology that
uses baseband signaling. Baseband signaling uses the entire bandwidth of
a cable for a single transmission. Only one signal can be transmitted at a
time and every device on the shared network hears broadcast
transmissions. Passive technology means that there is no one device
controlling the network. Contention-based means that every device must
compete with every other device for access to the shared network. In other
words, devices take turns. They can transmit only when no other device is
transmitting.
Physical layer configurations are specified in three parts:
- Data rate (10, 100, 1,000 Mbps)
- Signaling method – baseband (digital signaling) or broadband (analog signaling)
- Cabling (2, 5, T, F, S, L)
– 2 – Thin coax
– 5 – Thick coax (original Ethernet cabling)
– T – Twisted pair
– F – Optical fiber
– S – Short wave laser over multimode fiber
– L – Long wave laser over single mode fiber

Frame format

Fig 3.9 Frame format of Ethernet

Preamble is a sequence of 7 bytes, each set to “10101010”. It is used to
synchronize the receiver before actual data is sent.
Addresses: a unique, 48-bit unicast address is assigned to each adapter
• example: 8:0:e4:b1:2
• each manufacturer gets its own address range
– broadcast: all 1s
– multicast: first bit is 1
Type field is a demultiplexing key used to determine which higher-level
protocol the frame should be delivered to.
Body can contain up to 1500 bytes of data.
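
A short Python sketch (illustration only; the sample frame bytes are made up) shows how these header fields can be pulled out of a raw frame.

```python
# Minimal sketch (illustration only): pulling the fields described above out
# of a raw Ethernet frame. The sample frame bytes below are made up.
import struct

def parse_ethernet_header(frame: bytes):
    dst, src, eth_type = struct.unpack("!6s6sH", frame[:14])
    return {
        "dst": ":".join(f"{b:02x}" for b in dst),   # 48-bit destination address
        "src": ":".join(f"{b:02x}" for b in src),   # 48-bit source address
        "type": hex(eth_type),                      # demultiplexing key, e.g. 0x0800 = IPv4
        "broadcast": dst == b"\xff" * 6,            # broadcast: all 1s
        "multicast": bool(dst[0] & 0x01),           # multicast: first bit of first byte is 1
        "body": frame[14:],                         # up to 1500 bytes of data
    }

if __name__ == "__main__":
    sample = b"\xff" * 6 + b"\x08\x00\x2b\x01\x02\x03" + b"\x08\x00" + b"payload"
    print(parse_ethernet_header(sample))
```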

3.4.1 Ethernet working

When a node wants to communicate with another node, it transmits its
frame. The frame travels to every node on the segment. Each node inspects
the frame to see if it is addressed to it. If the frame is not addressed to
the node, the node ignores it. If the frame is addressed to the node, the
node opens the frame and reads its contents. The exception is a broadcast
address, which is a special message intended to be read by every node (like
a message on the P.A. as opposed to a comment from one person
to another). Token Ring, the main alternative to Ethernet, uses a different
strategy to avoid computers talking at the same time.

Ethernet popularity is a result of several factors. Ethernet technology is:

 Inexpensive
 Easy to install, maintain, troubleshoot and expand
 A widely accepted industry standard, which means compatibility and equipment access are less of an issue
 Structured to allow compatibility with network operating systems (NOS)
 Very reliable
 The physical-layer specifications of the Ethernet family of computer
network standards are published by the Institute of Electrical and
Electronics Engineers (IEEE), which defines the electrical or optical
properties and the transfer speed of the physical connection between a
device and the network or between network devices. It is complemented
by the MAC layer and the logical link layer. An implementation of a
specific physical layer is commonly referred to as PHY.
 The Ethernet physical layer has evolved over its existence starting in
1980 and encompasses multiple physical media interfaces and
several orders of magnitude of speed from 1 Mbit/s to 800 Gbit/s. The
physical medium ranges from bulky coaxial cable to twisted
pair and optical fiber with a standardized reach of up to 80 km. In
general, network protocol stack software will work similarly on all
physical layers.
 Many Ethernet adapters and switch ports support multiple speeds by
using autonegotiation to set the speed and duplex for the best values
supported by both connected devices. If autonegotiation fails, some
multiple-speed devices sense the speed used by their partner, [1] but this
may result in a duplex mismatch. With rare exceptions, a 100BASE-
TX port (10/100) also supports 10BASE-T while a 1000BASE-T port
(10/100/1000) also supports 10BASE-T and 100BASE-TX. Most 10GBASE-
T ports also support 1000BASE-T,[2] some even 100BASE-TX or 10BASE-
T. While autonegotiation can practically be relied on for Ethernet over
twisted pair, few optical-fiber ports support multiple speeds. In any case,
even multi-rate fiber interfaces only support a single wavelength (e.g.
850 nm for 1000BASE-SX or 10GBASE-SR).
 10 Gigabit Ethernet was already used in both enterprise and carrier
networks by 2007, with 40 Gbit/s[3][4] and 100 Gigabit Ethernet[5] ratified.[6]
In 2024, the fastest additions to the Ethernet family were 800 Gbit/s variants.[7]
Naming conventions
 Generally, layers are named by their specifications (a small parsing sketch follows this list):[8]
 10, 100, 1000, 10G, ... – the nominal, usable speed at the top of the
physical layer (no suffix = megabit/s, G = gigabit/s), excluding line
codes but including other physical layer overhead (preamble, SFD, IPG);
some WAN PHYs (W) run at slightly reduced bitrates for compatibility
reasons; encoded PHY sublayers usually run at higher bitrates
 BASE, BROAD, PASS – indicates baseband, broadband,
or passband signaling respectively
 -T, -T1, -S, -L, -E, -Z, -C, -K, -H ... – medium (PMD): T = twisted pair, -T1 =
single-pair twisted pair, S = 850 nm short wavelength (multi-mode
fiber), L = 1300 nm long wavelength (mostly single-mode fiber), E or Z =
1500 nm extra long wavelength (single-mode), B = bidirectional fiber
(mostly single-mode) using WDM, P = passive optical (PON), C =
copper/twinax, K = backplane, 2 or 5 or 36 = coax with 185/500/3600 m
reach (obsolete), F = fiber, various wavelengths, H = plastic optical fiber
 X, R – PCS encoding method (varying with the
generation): X for 8b/10b block encoding (4B5B for Fast Ethernet), R for
large block encoding (64b/66b)
 1, 2, 4, 10 – for LAN PHYs indicates number of lanes used per link; for
WAN PHYs indicates reach in kilometers
 For 10 Mbit/s, no encoding is indicated as all variants use Manchester
code. Most twisted pair layers use unique encoding, so most often just -
T is used.
 The reach, especially for optical connections, is defined as the maximum
achievable link length that is guaranteed to work when all channel
parameters are met (modal bandwidth, attenuation, insertion
losses etc.). With better channel parameters, often a longer, stable link
length can be achieved. Vice versa, a link with worse channel
parameters can also work but only over a shorter
distance. Reach and maximum distance have the same meaning.
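
The sketch below is a small, hypothetical Python helper (not part of any standard) that splits a PHY name into the pieces described in the list above: the speed prefix, the signaling keyword and the medium/encoding suffix.

```python
# Minimal sketch (my own helper, not part of any IEEE standard): split an
# Ethernet PHY name into speed, signaling method and medium/encoding suffix.
import re

PHY_NAME = re.compile(r"^(?P<speed>\d+(?:\.\d+)?G?)(?P<signal>BASE|BROAD|PASS)-?(?P<suffix>[A-Z0-9]+)$")

def parse_phy_name(name: str):
    m = PHY_NAME.match(name.upper())
    if not m:
        raise ValueError(f"not a recognised PHY name: {name}")
    return m.groupdict()

if __name__ == "__main__":
    print(parse_phy_name("10BASE-T"))     # {'speed': '10',   'signal': 'BASE', 'suffix': 'T'}
    print(parse_phy_name("1000BASE-SX"))  # {'speed': '1000', 'signal': 'BASE', 'suffix': 'SX'}
    print(parse_phy_name("10GBASE-SR"))   # {'speed': '10G',  'signal': 'BASE', 'suffix': 'SR'}
```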
Physical layers
 The following sections provide a brief summary of official Ethernet
media types. In addition to these official standards, many vendors have
implemented proprietary media types for various reasons—often to
support longer distances over fiber optic cabling.
Early implementations and 10 Mbit/s
 Early Ethernet standards used Manchester coding so that the signal
was self-clocking and not adversely affected by high-pass filters.






Classic Ethernet MAC sublayer


 Classic Ethernet is the original form of Ethernet used primarily in LANs. It
provides data rates between 3 and 10 Mbps. It operates both in the
physical layer and in the MAC sublayer of the OSI model. In the physical
layer, the features of the cables and networks are considered. In the MAC
sublayer, the frame format for the Ethernet data frame is laid down.
II

PREPARED BY – Dr.P.Chitra PAGE 16 of 20


 Classic Ethernet was first standardized in the 1980s as the IEEE 802.3 standard.
 Frame Format of Classic Ethernet
 Classic Ethernet frames can be either of Ethernet (DIX) or of IEEE 802.3
standard. The frames of the two standards are very similar except for
one field. The main fields of a frame of classic Ethernet are −
 Preamble − It is the starting field that provides alert and timing pulse
for transmission. In case of Ethernet (DIX) it is an 8 byte field and in case
of IEEE 802.3 it is of 7 bytes.
 Start of Frame Delimiter (SOF) − It is a 1 byte field in an IEEE 802.3
frame that contains an alternating pattern of ones and zeros ending with
two ones.
 Destination Address − It is a 6 byte field containing physical address
of destination stations.
 Source Address − It is a 6 byte field containing the physical address of
the sending station.
 Type/Length − This is a 2 byte field. In case of Ethernet (DIX), the field
is type that instructs the receiver which process to give the frame to. In
case of IEEE 802.3, the field is length that stores the number of bytes in
the data field.
 Data − This is a variable-sized field that carries the data from the upper
layers. The maximum size of the data field is 1500 bytes.
 Padding − This is added to the data to bring its length to the minimum
requirement of 46 bytes.
 CRC − CRC stands for cyclic redundancy check. It contains the error
detection information.
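
As a rough illustration of the field sizes above, the Python sketch below assembles a minimal IEEE 802.3 frame body: destination, source, length, data padded to the 46-byte minimum, and a 4-byte CRC. It is a simplification (zlib.crc32 uses the same CRC-32 polynomial as the Ethernet FCS, but real adapters also handle the preamble, SFD and bit ordering).

```python
# Minimal sketch (illustration only) of assembling an IEEE 802.3 frame body
# as described above: pad the data up to the 46-byte minimum and append a
# 4-byte CRC. Real hardware also prepends the preamble and SFD.
import struct
import zlib

MIN_DATA = 46

def build_8023_frame(dst: bytes, src: bytes, data: bytes) -> bytes:
    if len(data) > 1500:
        raise ValueError("data field may not exceed 1500 bytes")
    padded = data + b"\x00" * max(0, MIN_DATA - len(data))   # Padding field
    header = dst + src + struct.pack("!H", len(data))        # 802.3 Length field
    fcs = struct.pack("<I", zlib.crc32(header + padded))     # CRC field
    return header + padded + fcs

if __name__ == "__main__":
    frame = build_8023_frame(b"\xff" * 6, b"\x02\x00\x00\x00\x00\x01", b"hello")
    print(len(frame))   # 6 + 6 + 2 + 46 + 4 = 64 bytes, the classic minimum frame size
```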





Ethernet Performance

 Ethernet is a set of technologies and protocols that are used primarily in
LANs. The performance of Ethernet is analysed by computing the
efficiency of the channel under different load conditions.
 Let us assume an Ethernet network has k stations and each station
transmits with a probability p during a contention slot. Let A be the
probability that some station acquires the channel. A is calculated as −
 A = kp(1 − p)^(k−1)
 The value of A is maximized at p = 1/k. If there can be innumerable
stations connected to the Ethernet network, i.e. k → ∞, the maximum
value of A will be 1/e.
 Let Q be the probability that the contention period has exactly j slots. Q
is calculated as −
 Q = A(1 − A)^(j−1)
 Let M be the mean number of slots per contention. Since the number of
slots follows a geometric distribution, the value of M will be −
 M = 1/A
 Given that τ is the propagation time, each slot has a duration of 2τ. Hence
the mean contention interval, w, will be 2τ/A.
 Let P be the time in seconds to transmit a frame. The channel efficiency,
when a number of stations want to send frames, can be calculated as −
 Channel efficiency = P / (P + 2τ/A)
 Let F be the frame length, B the network bandwidth, L the cable
length, c the speed of signal propagation and e the optimal number of
contention slots per frame. The channel efficiency in terms of these parameters is −
 Channel efficiency = 1 / (1 + 2BLe/cF)
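
The formulas above can be evaluated directly; the Python sketch below (parameter values are illustrative assumptions for classic 10 Mbps Ethernet over 2.5 km, not taken from the text) computes A for p = 1/k and the resulting channel efficiency.

```python
# Minimal sketch evaluating the formulas above. Parameter values are
# illustrative assumptions, not from the text.
import math

def acquisition_probability(k: int, p: float) -> float:
    """A = k*p*(1-p)^(k-1): probability that exactly one of k stations transmits."""
    return k * p * (1 - p) ** (k - 1)

def channel_efficiency(F: float, B: float, L: float, c: float, e: float) -> float:
    """Channel efficiency = 1 / (1 + 2*B*L*e / (c*F))."""
    return 1 / (1 + 2 * B * L * e / (c * F))

if __name__ == "__main__":
    k = 100
    print(round(acquisition_probability(k, 1 / k), 3))  # approaches 1/e ~ 0.368 for large k
    # F = 12144 bits (1518-byte frame), B = 10 Mbps, L = 2500 m,
    # c = 2e8 m/s, e = Euler's number (optimal contention slots per frame ~ e)
    print(round(channel_efficiency(12144, 10e6, 2500, 2e8, math.e), 3))
```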

Fast Ethernet
In the fast-evolving world of computers, speed plays a vital role and has
become paramount. As technology advances and the demands on networks
increase significantly, the quest to transfer data at faster rates has become
a priority. Fast Ethernet is a crucial evolution in networking technology,
developed to meet these escalating demands head-on.
Fast Ethernet has transformed the landscape of communication, enabling
unparalleled speeds and a new era of connectivity. This section explores the
concept in depth, from fundamental definitions to the modern technology
landscape. Before getting into the concept, there are some primary
terminologies we need to understand.
Primary Terminologies Related to Fast Ethernet
 Ethernet: Ethernet is a family of wired networking technologies that is
widely used and mainly defines how data is transmitted over a wired
connection. It works on the principle of packet switching and uses the
Ethernet protocol. Ethernet technology allows network-connected devices to
communicate with one another. In an Ethernet network, data is broken into
packets.
 Data Transfer Rate (DTR): Data transfer rate, also known as data
throughput, is the speed at which data is transmitted from one device to
another within the connected network. DTR is mainly measured in bits per
second (bps) or in larger units such as kilobits per second (Kbps).
 Bandwidth: Bandwidth mainly refers to the maximum rate of data transfer
across a connected network medium. It represents the capacity of the
medium, i.e. how much data it can carry. Bandwidth is the volume of
information that can be sent over a connection in a measured amount of
time, usually calculated in megabits per second (Mbps).
Understanding Fast Ethernet
Fast Ethernet is a networking technology that enhances traditional
Ethernet by increasing the data transfer rate. Fast Ethernet represents a major
development over traditional Ethernet, addressing the growing demand for
higher data transfer rates in networking environments. The original Ethernet
standard, defined by the Institute of Electrical and Electronics
Engineers (IEEE) 802.3 specification, operated at a speed of 10 Mbps. However,
as the call for faster network speeds grew, the need for an advanced
Ethernet standard became apparent.
Fast Ethernet, standardized under the IEEE 802.3u specification, introduced
several enhancements to achieve its higher data transfer rates. One of the key
improvements was the use of different signaling methods and media
types compared to traditional Ethernet. Fast Ethernet supports both twisted pair
and fiber optic cabling.
Types of Fast Ethernet
Fast Ethernet comes in several types or standards, each with its own
specifications, implementation and characteristics. The two most common
types of Fast Ethernet are:
 100Base-TX
 100Base-FX

1. 100Base-TX
 100Base-TX uses twisted pair copper cabling, such as Cat5e cables, for
data transmission.
 It is mainly used in Local Area Networks (LANs).
 It works in both full-duplex and half-duplex modes and supports data
transfer rates up to 100 Mbps.
 It is used to connect devices like computers and printers in a LAN
environment.
2. 100Base-FX
 100Base-FX is another type of Fast Ethernet that uses different cabling,
namely fiber optic cabling, for data transmission.
 It is mainly used for longer cable runs (up to about 2 km over multimode
fiber in full-duplex mode) and is resistant to electromagnetic interference.
 It supports a DTR of up to 100 Mbps and also offers higher bandwidth.
 It is commonly used to connect devices across different buildings or in
environments where copper cabling is impractical.
Significance of Fast Ethernet
The significance of Fast Ethernet lies in its impact on networking capabilities,
performance and efficiency. Some key points on the significance of
Fast Ethernet are listed below:
 Increased Data Transfer Rates
 Enhanced Network Performance
 Cost Effectiveness
 Scalability
 Flexibility
 Support for bandwidth-intensive applications (video conferencing,
multimedia streaming, large file transfers)
Applications of Fast Ethernet
Fast Ethernet has diverse applications across various industries and
scenarios. The following are some of the applications of Fast Ethernet:
 Data Centers: Fast Ethernet is commonly deployed in data center
environments to connect servers, storage devices, and networking
equipment. It facilitates rapid data transfer between servers and storage
arrays, supporting mission-critical applications and services hosted in the
data center.
 Surveillance Systems: Fast Ethernet is utilized in surveillance systems
for transmitting high-definition video feeds from IP cameras to monitoring
stations or recording devices. It ensures real-time monitoring and
recording of surveillance footage, enhancing security and surveillance
capabilities.
 Educational Institutions: Fast Ethernet is employed in educational
institutions, such as schools and universities, to support networked
learning environments. It enables access to online educational resources,
collaborative tools, and e-learning platforms, enriching the educational
experience for students and educators.
 Telecommunications: Fast Ethernet is used in telecommunications
networks for backhaul connections between central offices, cell towers,
and network aggregation points. It enables the efficient transfer of voice,
data, and video traffic over large-scale telecommunications networks.

Wireless LANs
