Unit-3: 1. Static Channel Allocation in LANs and MANs
INTRODUCTION:
The protocols used to determine who goes next on a multiaccess channel belong to
a sublayer of the data link layer called the MAC (Medium Access Control)
sublayer. The MAC sublayer is especially important in LANs, particularly wireless
ones because wireless is naturally a broadcast channel. WANs, in contrast, use
point-to-point links, except for satellite networks.
For a single channel of capacity C carrying L frames/sec of 1/U bits each, the mean delay is
T = 1 / (UC - L)
Splitting the channel statically into N independent FDM subchannels, each with capacity C/N and arrival rate L/N, gives
T(FDM) = 1 / (U(C/N) - L/N) = N / (UC - L) = N * T
Where,
T = mean time delay,
C = capacity of the channel (bps),
L = arrival rate of frames (frames/sec),
1/U = bits per frame (so U is frames per bit),
N = number of subchannels,
T(FDM) = mean delay with Frequency Division Multiplexing
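To see the penalty numerically, the following minimal sketch evaluates both formulas; the capacity, arrival rate, frame size and number of subchannels are illustrative assumptions, not values from the text.

# Mean delay for a single channel vs. a static FDM split (illustrative values).

def mean_delay_single(capacity_bps, frame_rate, bits_per_frame):
    """T = 1 / (U*C - L), where U = 1/bits_per_frame (frames per bit)."""
    u = 1.0 / bits_per_frame
    return 1.0 / (u * capacity_bps - frame_rate)

def mean_delay_fdm(capacity_bps, frame_rate, bits_per_frame, n_subchannels):
    """Each of the N subchannels gets capacity C/N and arrival rate L/N."""
    u = 1.0 / bits_per_frame
    return 1.0 / (u * (capacity_bps / n_subchannels) - frame_rate / n_subchannels)

if __name__ == "__main__":
    C = 100e6        # channel capacity: 100 Mbps (assumed)
    L = 5000.0       # arrival rate: 5000 frames/sec (assumed)
    F = 10000        # frame length: 10000 bits, so 1/U = 10000 (assumed)
    N = 10           # number of static FDM subchannels (assumed)

    t_single = mean_delay_single(C, L, F)
    t_fdm = mean_delay_fdm(C, L, F, N)
    print(f"single channel delay : {t_single*1e6:.1f} us")
    print(f"FDM ({N} subchannels): {t_fdm*1e6:.1f} us  (= {t_fdm/t_single:.1f} x worse)")

With these assumed numbers the FDM delay comes out exactly N times the single-channel delay, which is the point of the formula.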
1. Station Model:
When a sender and a receiver have a dedicated link for transmitting data packets, data link control is enough to handle the channel. If, however, there is no dedicated path between the two devices, multiple stations access the channel and may transmit data over it simultaneously, which can cause collisions and crosstalk. Hence, a multiple access protocol is required to reduce collisions and avoid crosstalk between the channels.
For example, suppose there is a classroom full of students. When a teacher asks a question, all the students (small channels) in the class start answering at the same time (transferring data simultaneously), so the answers overlap and information is lost. It is therefore the responsibility of the teacher (the multiple access protocol) to manage the students and make them answer one at a time.
Multiple access protocols are subdivided into the following categories.
A. Random Access Protocol
In this protocol, all stations have equal priority to send data over the channel. In a random access protocol, no station depends on or is controlled by another station. Depending on the channel's state (idle or busy), each station transmits its data frame. However, if more than one station sends data over the channel at the same time, a collision or data conflict may occur. Due to the collision, data frame packets may be lost or corrupted and may therefore not be received correctly at the receiver end. Following are the different types of random access protocol:
o Aloha
o CSMA
o CSMA/CD
o CSMA/CA
Aloha
It is designed for wireless LANs (Local Area Networks) but can also be used on any shared medium to transmit data. Using this method, any station can transmit data onto the network whenever it has a data frame available for transmission, even while other stations are transmitting.
Aloha Rules
Collisions may occur and data frames may be lost when multiple stations transmit data at the same time.
Pure Aloha
Whenever a station has data available to send over the channel, pure Aloha lets it transmit. In pure Aloha, each station transmits data onto the channel without checking whether the channel is idle or busy, so collisions may occur and data frames may be lost. After transmitting a data frame onto the channel, the station waits for the receiver's acknowledgment. If the acknowledgment does not arrive within the specified time, the station assumes the frame has been lost or destroyed, waits for a random amount of time, called the backoff time (Tb), and retransmits the frame. This continues until the data is successfully delivered to the receiver.
As we can see in the figure above, there are four stations accessing a shared channel and transmitting data frames. Some frames collide because many stations send their frames at the same time; only two frames, frame 1.1 and frame 2.2, are successfully delivered to the receiver, while the other frames are lost or destroyed. Whenever two frames overlap on the shared channel, a collision occurs and both suffer damage: even if only the first bit of a new frame overlaps the last bit of a frame that has almost finished, both frames are completely destroyed, and both stations must retransmit their data frames.
Slotted Aloha
The slotted Aloha is designed to overcome the pure Aloha's efficiency because pure
Aloha has a very high possibility of frame hitting. In slotted Aloha, the shared
channel is divided into a fixed time interval called slots. So that, if a station wants
to send a frame to a shared channel, the frame can only be sent at the beginning of
the slot, and only one frame is allowed to be sent to each slot. And if the stations
are unable to send data to the beginning of the slot, the station will have to wait
until the beginning of the slot for the next time. However, the possibility of a
collision remains when trying to send a frame at the beginning of two or more
station time slot.
Maximum throughput in slotted Aloha occurs at G = 1 and equals 1/e, i.e. about 37%.
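These throughput figures follow from the standard Aloha formulas S = G*e^(-2G) for pure Aloha and S = G*e^(-G) for slotted Aloha, where G is the mean number of transmission attempts per frame time. A small sketch that evaluates them:

# Throughput of pure vs. slotted Aloha as a function of offered load G.
import math

def pure_aloha_throughput(G):
    return G * math.exp(-2 * G)    # vulnerable period = 2 frame times

def slotted_aloha_throughput(G):
    return G * math.exp(-G)        # vulnerable period = 1 slot

if __name__ == "__main__":
    for G in (0.25, 0.5, 1.0, 2.0):
        print(f"G={G:4.2f}  pure={pure_aloha_throughput(G):.3f}  "
              f"slotted={slotted_aloha_throughput(G):.3f}")
    # Pure Aloha peaks at G = 0.5 (about 18.4%),
    # slotted Aloha peaks at G = 1.0 (about 36.8%, i.e. 1/e).
    print("slotted max at G=1:", round(slotted_aloha_throughput(1.0), 3))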
1-Persistent: In the 1-persistent mode of CSMA, each node first senses the shared channel; if the channel is idle, it immediately sends the data. Otherwise it keeps monitoring the channel and transmits the frame unconditionally (with probability 1) as soon as the channel becomes idle.
CSMA/CD
CSMA/CA
Following are the methods used in CSMA/CA to avoid collisions:
Interframe space: In this method, the station waits for the channel to become idle, and when it finds the channel idle, it does not send data immediately. Instead, it waits for some time; this period is called the interframe space or IFS. The IFS duration is also used to define the priority of a station.
Contention window: In the contention window method, the total time is divided into slots. When the station/sender is ready to transmit a data frame, it chooses a random number of slots as its wait time. If the channel becomes busy during the countdown, the station does not restart the entire process; it merely pauses the timer and resumes it when the channel becomes idle again, sending the data packet when the timer expires.
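The contention-window behaviour just described (choose a random number of slots, freeze the countdown while the channel is busy, transmit when the counter reaches zero) can be sketched as follows. This is a simplified illustration rather than the full 802.11 procedure; the busy/idle pattern and window size are assumptions.

# Simplified CSMA/CA contention-window backoff: the counter only decrements
# while the channel is sensed idle and is frozen while the channel is busy.
import random

def backoff_and_send(channel_busy_pattern, cw=16, seed=None):
    """channel_busy_pattern: iterable of booleans, one per slot (True = busy)."""
    rng = random.Random(seed)
    counter = rng.randrange(cw)          # random number of slots to wait
    print(f"chose backoff of {counter} slots")
    for slot, busy in enumerate(channel_busy_pattern):
        if busy:
            continue                     # freeze the countdown; do not restart it
        if counter == 0:
            print(f"transmit in slot {slot}")
            return slot
        counter -= 1                     # count down only on idle slots
    return None                          # ran out of slots in this toy example

if __name__ == "__main__":
    # An assumed busy/idle pattern of the shared channel.
    pattern = [True, True, False, False, True, False, False, False, False, False]
    backoff_and_send(pattern, cw=8, seed=42)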
C. Channelization Protocols
Following are the various methods of accessing the channel based on time, frequency and codes:
FDMA
TDMA
Time Division Multiple Access (TDMA) is a channel access method. It allows the same frequency bandwidth to be shared across multiple stations. To avoid collisions on the shared channel, it divides the channel into time slots and allocates these slots to the stations for transmitting their data frames. The stations share the same frequency bandwidth by transmitting in different time slots. However, TDMA has a synchronization overhead: each station's time slot is delimited by adding synchronization bits to each slot.
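A rough sketch of the slot arithmetic behind TDMA is shown below; the number of stations and the slot duration are assumed values chosen only for illustration.

# Toy TDMA schedule: station i owns slot i of every frame of N slots.

def next_transmit_time(station, now, n_stations, slot_duration):
    """Return the earliest time >= now at which `station` owns the channel."""
    frame_duration = n_stations * slot_duration
    slot_start_in_frame = station * slot_duration
    frame_index = int(now // frame_duration)
    candidate = frame_index * frame_duration + slot_start_in_frame
    if candidate < now:                  # this frame's slot already passed
        candidate += frame_duration      # wait for the next frame
    return candidate

if __name__ == "__main__":
    N = 4                # stations sharing the channel (assumed)
    SLOT = 0.005         # 5 ms per slot (assumed)
    for st in range(N):
        t = next_transmit_time(st, now=0.012, n_stations=N, slot_duration=SLOT)
        print(f"station {st} may next transmit at t = {t:.3f} s")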
CDMA
The code division multiple access (CDMA) is a channel access method. In CDMA, all
stations can simultaneously send the data over the same channel. It means that it
allows each station to transmit the data frames with full frequency on the shared
channel at all times. It does not require the division of bandwidth on a shared
channel based on time slots. If multiple stations send data to a channel
simultaneously, their data frames are separated by a unique code sequence. Each
station has a different unique code for transmitting the data over a shared channel.
For example, consider a room in which many people are speaking at the same time. A listener understands only the person speaking the same language. Similarly, in the network, different stations can communicate simultaneously, and each receiver recovers only the data carrying its own code.
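The "unique code" idea can be made concrete with orthogonal chip sequences, as in the textbook Walsh-code example. The sketch below is a toy model, not a real CDMA implementation: each station encodes bit 1 as its chip sequence and bit 0 as its negation, the signals add on the channel, and a receiver recovers a station's bit from the normalized inner product with that station's chips.

# Toy CDMA: orthogonal chip sequences, linear superposition on the channel,
# decoding by inner product with the desired station's chip sequence.

CHIPS = {
    "A": [+1, +1, +1, +1],
    "B": [+1, -1, +1, -1],
    "C": [+1, +1, -1, -1],
    "D": [+1, -1, -1, +1],
}   # pairwise orthogonal: dot(X, Y) == 0 for X != Y

def encode(station, bit):
    sign = +1 if bit == 1 else -1            # bit 1 -> +chip, bit 0 -> -chip
    return [sign * c for c in CHIPS[station]]

def superpose(signals):
    return [sum(vals) for vals in zip(*signals)]

def decode(received, station):
    chip = CHIPS[station]
    corr = sum(r * c for r, c in zip(received, chip)) / len(chip)
    return 1 if corr > 0 else 0

if __name__ == "__main__":
    # A sends 1, B sends 0, C sends 1 simultaneously; D stays silent.
    channel = superpose([encode("A", 1), encode("B", 0), encode("C", 1)])
    print("on the wire:", channel)
    for st in "ABC":
        print(f"decoded bit from {st}: {decode(channel, st)}")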
Classic Ethernet
Ethernet is a set of technologies and protocols that are used primarily in LANs. It was first standardized in the 1980s as the IEEE 802.3 standard. Ethernet is classified into two categories: classic Ethernet and switched Ethernet.
Classic Ethernet is the original form of Ethernet and provides data rates between 3 and 10 Mbps. The varieties are commonly referred to as 10BASE-X. Here, 10 is the maximum throughput, i.e. 10 Mbps, BASE denotes the use of baseband transmission, and X is the type of medium used. Most varieties of classic Ethernet have become obsolete in present-day communication scenarios.
Varieties of Classic Ethernet
The common varieties of classic Ethernet are -
Thick coax (10BASE-5): This was the original version that used a single
coaxial cable into which a connection can be tapped by drilling into the cable
to the core. The 5 refers to the maximum segment length of 500m.
Thin coax (10BASE-2): This is a thinner variety where segments of coaxial
cables are connected by BNC connectors. The 2 refers to the maximum
segment length of about 200m (185m to be precise).
Twisted pair (10BASE-T): This uses unshielded twisted pair copper wires
as physical layer medium.
Ethernet over Fiber (10BASE-F): This uses fiber optic cables as medium of
transmission.
Frame Format of Classic Ethernet
The main fields of a frame of classic Ethernet are -
Preamble: An 8-byte starting field that provides alert and timing pulses for transmission.
Destination Address: A 6-byte field containing the physical address of the destination station.
Source Address: A 6-byte field containing the physical address of the sending station.
Length: A 2-byte field that stores the number of bytes in the data field.
Data: A variable-sized field that carries the data from the upper layers. The maximum size of the data field is 1500 bytes.
Padding: Extra bits added to the data to bring its length up to the minimum requirement of 46 bytes.
CRC: CRC stands for cyclic redundancy check. It is a 4-byte field that contains the error detection information.
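To make the field sizes concrete, the sketch below packs a classic-Ethernet-style frame in software, padding the data to the 46-byte minimum and appending a CRC-32 trailer. The addresses and payload are fabricated, details such as the exact bit ordering of the FCS are glossed over, and real adapters generate the preamble and CRC in hardware.

# Sketch: build a classic-Ethernet-style frame (addresses/payload are illustrative).
import struct
import zlib

PREAMBLE = b"\xAA" * 7 + b"\xAB"       # 7 preamble bytes + start frame delimiter
MIN_DATA = 46                          # minimum data+padding length in bytes

def build_frame(dst_mac: bytes, src_mac: bytes, payload: bytes) -> bytes:
    assert len(dst_mac) == 6 and len(src_mac) == 6
    length = struct.pack("!H", len(payload))              # 2-byte length field
    padded = payload + b"\x00" * max(0, MIN_DATA - len(payload))
    body = dst_mac + src_mac + length + padded
    crc = struct.pack("<I", zlib.crc32(body))             # 4-byte CRC trailer
    return PREAMBLE + body + crc

if __name__ == "__main__":
    dst = bytes.fromhex("ffffffffffff")                   # broadcast (example)
    src = bytes.fromhex("020000000001")                   # locally administered (example)
    frame = build_frame(dst, src, b"hello")
    # 8 (preamble) + 6 + 6 + 2 + 46 + 4 = 72 bytes in this sketch
    print(len(frame), frame.hex())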
Architecture
Classic Ethernet is the simplest form of Ethernet. It comprises an Ethernet medium made of a long piece of coaxial cable. Stations are connected to the coaxial cable using a card called the network interface (NI). The NIs are responsible for receiving and transmitting data through the network. Repeaters are used to make end-to-end joins between cable segments as well as to regenerate the signals if they weaken. When a station is ready to transmit, it places its frame on the cable, where all stations can see it. This arrangement is called the broadcast bus.
The configuration is illustrated as follows –
Advantages of CSMA/CA
CSMA/CA prevents collisions.
Due to acknowledgements, data is not lost unnecessarily.
It avoids wasteful transmission.
It is very well suited for wireless transmission.
Disadvantages of CSMA/CA
The algorithm calls for long waiting times.
It has high power consumption.
Ethernet Performance
Ethernet is a set of technologies and protocols that are used primarily in LANs. The
performance of Ethernet is analysed by computing the efficiency of the channel
under different load conditions.
Let us assume an Ethernet network has k stations and each station transmits with a
probability p during a contention slot. Let A be the probability that some station
acquires the channel. A is calculated as −
A = kp(1 − p)^(k − 1)
The value of A is maximized at p = 1/k. If there can be innumerable stations
connected to the Ethernet network, i.e. k → ∞, the maximum value of A will be 1/e.
Let Q be the probability that the contention period has exactly j slots. Q is
calculated as −
Q = A(1 − A)^(j − 1)
Let M be the mean number of slots per contention. Since the number of slots follows a geometric distribution with success probability A, the value of M will be −
M = Σ j·A(1 − A)^(j − 1) = 1/A (summing over j ≥ 1)
Given that τ is the propagation time, each slot has duration 2τ. Hence the mean contention interval, w, will be 2τ/A.
Let P be the time in seconds for a frame to be transmitted.
The channel efficiency, when a number of stations want to send frames, can be calculated as −
Channel efficiency = P / (P + 2τ/A)
Let F be the frame length, B be the network bandwidth, L be the cable length, c be the speed of signal propagation and e be the optimal number of contention slots per frame. The channel efficiency in terms of these parameters is −
Channel efficiency = 1 / (1 + 2BLe / cF)
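Both expressions are easy to evaluate. In the sketch below the bandwidth, cable length, propagation speed and frame sizes are assumed, illustrative values (roughly those of 10-Mbps classic Ethernet), not figures taken from the text.

# Ethernet channel efficiency under heavy load (illustrative parameters).
import math

def acquisition_probability(k, p):
    """A = k*p*(1-p)**(k-1): probability that exactly one of k stations transmits."""
    return k * p * (1 - p) ** (k - 1)

def efficiency(frame_bits, bandwidth_bps, cable_m, prop_speed, e_slots):
    """Channel efficiency = 1 / (1 + 2*B*L*e / (c*F))."""
    B, L, c, F, e = bandwidth_bps, cable_m, prop_speed, frame_bits, e_slots
    return 1.0 / (1.0 + 2 * B * L * e / (c * F))

if __name__ == "__main__":
    # A is maximised at p = 1/k and tends to 1/e for large k.
    for k in (2, 10, 100):
        print(f"k={k:3d}  A(p=1/k) = {acquisition_probability(k, 1/k):.3f}")
    print("1/e =", round(1 / math.e, 3))

    # 10 Mbps Ethernet, 2.5 km cable, c ~ 2e8 m/s, e ~ e contention slots per frame.
    for F in (512, 4096, 12144):          # frame sizes in bits (64 B, 512 B, 1518 B)
        eff = efficiency(F, 10e6, 2500, 2e8, math.e)
        print(f"frame {F:6d} bits -> efficiency {eff:.2f}")

As expected, short frames waste a large fraction of the channel on contention, while long frames approach full efficiency.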
Fast Ethernet
In computer networks, Fast Ethernet is a variation of Ethernet standards that carry
data traffic at 100 Mbps (Mega bits per second) in local area networks (LAN). It was
launched as the IEEE 802.3u standard in 1995, and stayed the fastest network till
the introduction of Gigabit Ethernet.
Fast Ethernet is popularly named 100BASE-X. Here, 100 is the maximum throughput, i.e. 100 Mbps, BASE denotes the use of baseband transmission, and X is the type of medium used, which is TX, FX or T4.
Varieties of Fast Ethernet
The common varieties of fast Ethernet are 100-Base-TX, 100-BASE-FX and 100-
Base-T4.
100-Base-T4
o This has four pairs of UTP of Category 3, two of which are bi-
directional and the other two are unidirectional.
o In each direction, three pairs can be used simultaneously for data
transmission.
o Each twisted pair is capable of transmitting a maximum of 25Mbaud
data. Thus the three pairs can handle a maximum of 75Mbaud data.
o It uses the encoding scheme 8B/6T (eight binary/six ternary).
100-Base-TX
o This has either two pairs of unshielded twisted pairs (UTP) category 5
wires or two shielded twisted pairs (STP) type 1 wires. One pair
transmits frames from hub to the device and the other from device to
hub.
o Maximum distance between hub and station is 100m.
o It has a signaling rate of 125 Mbaud, which yields a data rate of 100 Mbps after 4B/5B coding.
o It uses MLT-3 encoding scheme along with 4B/5B block coding.
100-BASE-FX
o This has two pairs of optical fibers. One pair transmits frames from
hub to the device and the other from device to hub.
o Maximum distance between hub and station is 2000m.
o It has a signaling rate of 125 Mbaud, which yields a data rate of 100 Mbps after 4B/5B coding.
o It uses NRZ-I encoding scheme along with 4B/5B block coding.
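The NRZ-I line code mentioned above is simple to sketch: the signal level is inverted for every 1 bit and left unchanged for every 0 bit. The example below ignores the 4B/5B stage that precedes it.

# NRZ-I (non-return-to-zero, inverted): toggle the signal level on every 1 bit,
# keep the previous level on every 0 bit.

def nrzi_encode(bits, initial_level=0):
    level = initial_level
    out = []
    for b in bits:
        if b == 1:
            level ^= 1         # a 1 causes a transition
        out.append(level)      # a 0 leaves the level unchanged
    return out

if __name__ == "__main__":
    data = [1, 0, 1, 1, 0, 0, 1]
    print("bits :", data)
    print("NRZ-I:", nrzi_encode(data))   # [1, 1, 0, 1, 1, 1, 0]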
Switched Ethernet
Ethernet is a set of technologies and protocols that are used primarily in LANs. It
was first standardized in 1980s as IEEE 802.3 standard. Ethernet is classified into
two categories: classic Ethernet and switched Ethernet.
In switched Ethernet, the hub connecting the stations of the classic Ethernet is
replaced by a switch. The switch connects the high-speed backplane bus to all the
stations in the LAN. The switch-box contains a number of ports, typically within the
range of 4 – 48. A station can be connected in the network by simply plugging a
connector to any of the ports. Connections from a backbone Ethernet switch can go
to computers, peripherals or other Ethernet switches and Ethernet hubs.
The following diagram shows configuration of a switched Ethernet −
Working Principle
Unlike classic Ethernet in which the channel is shared by the stations, in switched
Ethernet, each station gets a dedicated connection. When a port of the switch
receives a frame, it checks the destination address in the frame and then sends the
frame to the corresponding port, for outgoing data.
In switched Ethernet, collisions do not occur on the channel due to the presence of a dedicated connection to each station. However, collisions may still occur at a destination port if it receives frames from more than one port simultaneously. In a switch, each port forms its own collision domain and resolves such contention individually.
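The forwarding decision described above can be sketched with a small learning switch. The text does not describe how the switch's address table is built; the usual approach, assumed here, is to learn the source address of each incoming frame and to flood frames whose destination is still unknown. Port numbers and addresses are made up.

# Minimal sketch of a learning Ethernet switch: learn the source address on the
# incoming port, forward to the known port for the destination, otherwise flood.

class LearningSwitch:
    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.table = {}                      # MAC address -> port

    def handle_frame(self, in_port, src_mac, dst_mac):
        self.table[src_mac] = in_port        # learn where the sender lives
        if dst_mac in self.table and self.table[dst_mac] != in_port:
            return [self.table[dst_mac]]     # forward to the single known port
        # unknown destination (or broadcast): flood to all other ports
        return [p for p in range(self.num_ports) if p != in_port]

if __name__ == "__main__":
    sw = LearningSwitch(num_ports=4)
    print(sw.handle_frame(0, "aa:aa", "bb:bb"))   # unknown dst -> flood [1, 2, 3]
    print(sw.handle_frame(1, "bb:bb", "aa:aa"))   # dst aa:aa learned -> [0]
    print(sw.handle_frame(0, "aa:aa", "bb:bb"))   # dst bb:bb learned -> [1]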
Frame Format of Switched Ethernet
The frame format of switched Ethernet is same as that of classic Ethernet. The
fields are −
Preamble: An 8 bytes starting field that provides alert and timing pulse for
transmission.
Destination Address: A 6 byte field containing physical address of
destination stations.
Source Address: A 6 byte field containing the physical address of the
sending station.
Length: A 2 bytes field that stores the number of bytes in the data field.
Data: A variable-sized field that carries the data from the upper layers. The maximum size of the data field is 1500 bytes.
Padding: Extra bits added to the data to bring its length to the minimum
size of 46 bytes.
CRC: A 4 byte field that contains the error detection information.
GIGABIT ETHERNET:
WIRELESS LANS
Wireless LANs are increasingly popular, and homes, offices, cafes, libraries,
airports, zoos, and other public places are being outfitted with them to connect
computers, PDAs, and smart phones to the Internet. Wireless LANs can also be
used to let two or more nearby computers communicate without using the Internet.
The main wireless LAN standard is 802.11. Now it is time to take a closer look at the technology. In the following sections, we will look at the protocol stack,
physical-layer radio transmission techniques, the MAC sublayer protocol, the frame
structure, and the services provided.
4.4.1 The 802.11 Architecture and Protocol Stack
802.11 networks can be used in two modes. The most popular mode is to connect
clients, such as laptops and smart phones, to another network, such as a company
intranet or the Internet. This mode is shown in Fig. 4-23(a). In infrastructure mode,
each client is associated with an AP (Access Point) that is in turn connected to the
other network. The client sends and receives its packets via the AP. Several access
points may be connected together, typically by a wired network called a
distribution system, to form an extended 802.11 network. In this case, clients
can send frames to other clients via their APs.
The other mode, shown in Fig. 4-23(b), is an ad hoc network. This mode is
a collection of computers that are associated so that they can directly send frames
to each other. There is no access point. Since Internet access is the killer
application for wireless, ad hoc networks are not very popular.
Now we will look at the protocols. All the 802 protocols, including 802.11 and
Ethernet, have a certain commonality of structure. A partial view of the 802.11
protocol stack is given in Fig. 4-24. The stack is the same for clients and APs.
The physical layer corresponds fairly well to the OSI physical layer, but the
data link layer in all the 802 protocols is split into two or more sublayers. In
802.11, the MAC (Medium Access Control) sublayer determines how the channel is
allocated, that is, who gets to transmit next.
Above it is the LLC (Logical Link Control) sublayer, whose job it is to hide the
differences between the different 802 variants and make them indistinguishable as
far as the network layer is concerned. This could have been a significant
responsibility, but these days the LLC is a glue layer that identifies the protocol
(e.g., IP) that is carried within an 802.11 frame.
The 802.11 MAC Sublayer Protocol
The 802.11 MAC sublayer protocol is quite different from that of Ethernet,
due to two factors that are fundamental to wireless communication. First, radios are
nearly always half duplex, meaning that they cannot transmit and listen for noise
bursts at the same time on a single frequency. The received signal can easily be a
million times weaker than the transmitted signal, so it cannot be heard at the same
time. Instead, 802.11 tries to avoid collisions with a protocol called CSMA/CA
(CSMA with Collision Avoidance).
This protocol is conceptually similar to Ethernet’s CSMA/CD, with channel
sensing before sending and exponential back off after collisions. However, a station
that has a frame to send starts with a random backoff (except in the case that it
has not used the channel recently and the channel is idle). It does not wait for a
collision.
An example timeline is shown in Fig. 4-25. Station A is the first to send a
frame. While A is sending, stations B and C become ready to send. They see that
the channel is busy and wait for it to become idle. Shortly after A receives an
acknowledgement, the channel goes idle. However, rather than sending a frame
right away and colliding, B and C both perform a backoff. C picks a short backoff,
and thus sends first. B pauses its countdown while it senses that C is using the
channel, and resumes after C has received an acknowledgement. B soon completes
its backoff and sends its frame.
Compared to Ethernet, there are two main differences. First, starting backoffs early
helps to avoid collisions. This avoidance is worthwhile because collisions are
expensive, as the entire frame is transmitted even if one occurs. Second,
acknowledgements are used to infer collisions because collisions cannot be
detected. This mode of operation is called DCF (Distributed Coordination
Function) because each station acts independently, without any kind of central
control. The standard also includes an optional mode of operation called PCF (Point
Coordination Function) in which the access point controls all activity in its cell,
just like a cellular base station.
The second problem is that the transmission ranges of different stations may
be different. With a wire, the system is engineered so that all stations can hear
each other. With the complexities of RF propagation this situation does not hold for
wireless stations. Consequently, situations such as the hidden terminal problem
mentioned earlier can arise.
In this example, station C is transmitting to station B. If A senses the
channel, it will not hear anything and will falsely conclude that it may now start
transmitting to B. This decision leads to a collision. The inverse situation is the
exposed terminal problem, illustrated in Fig. 4- 26(b). Here, B wants to send to C,
so it listens to the channel. When it hears a transmission, it falsely concludes that it
may not send to C, even though A may in fact be transmitting to D (not shown).
To combat these problems, 802.11 defines an optional RTS/CTS exchange based on virtual channel sensing: before transmitting data, a sender (say A) sends a short RTS frame to its intended receiver (B), which answers with a CTS frame, and any station overhearing either frame defers for the duration announced in it. Now let us consider this exchange from the viewpoints of C and D. C is within range of A, so it may receive the RTS frame. If it does, it realizes that someone is going to send data soon. From the information provided in the RTS request, it can estimate how long the sequence will take, including the final ACK. CSMA/CA with physical and virtual sensing is the core of the 802.11 protocol.
However, there are several other mechanisms that have been developed to
go with it. Each of these mechanisms was driven by the needs of real operation, so
we will look at them briefly.
The first need we will look at is reliability. In contrast to wired networks, wireless networks are noisy and unreliable, in no small part due to interference from other kinds of devices. One strategy to improve the chance of the frame getting through undamaged is to send shorter frames. If the probability of any bit being in error is p, the probability of an n-bit frame being received entirely correctly is (1 − p)^n.
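A quick evaluation of (1 − p)^n shows why shorter frames help; the bit error rate used below is an assumption for illustration.

# Probability that an n-bit frame arrives with no bit errors: (1 - p) ** n.

def frame_success_probability(bit_error_rate, frame_bits):
    return (1.0 - bit_error_rate) ** frame_bits

if __name__ == "__main__":
    p = 1e-4                                   # assumed bit error rate
    for n in (256, 1024, 4096, 12000):         # frame sizes in bits
        print(f"{n:6d}-bit frame: P(no errors) = {frame_success_probability(p, n):.3f}")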
The second need we will discuss is saving power. Battery life is always an issue with mobile wireless devices. The basic mechanism for saving power builds on beacon frames. Beacons are periodic broadcasts by the AP. Clients can set a power-management bit in frames that they send to the AP to tell it that they are entering power-save mode. Another power-saving mechanism, called APSD (Automatic Power Save Delivery), was also added to 802.11 in 2005.
The third and last need we will examine is quality of service. Five intervals are depicted in Fig. 4-28. The interval between regular data frames is called the DIFS (DCF InterFrame Spacing).
Any station may attempt to acquire the channel to send a new frame after
the medium has been idle for DIFS. The usual contention rules apply, and binary
exponential backoff may be needed if a collision occurs.
The shortest interval is SIFS (Short InterFrame Spacing). It is used to
allow the parties in a single dialog the chance to go first. Examples include letting
the receiver send an ACK, other control frame sequences like RTS and CTS, or
letting a sender transmit a burst of fragments. Sending the next fragment after
waiting only SIFS is what prevents another station from jumping in with a frame in
the middle of the exchange.
The two AIFS (Arbitration InterFrame Space) intervals show examples of two different priority levels. The short interval, AIFS1, is smaller than DIFS but longer than SIFS. It can be used by the AP to move voice or other high-priority traffic to the head of the line. The long interval, AIFS4, is larger than DIFS. It is used for background traffic that can be deferred until after regular traffic. The last time interval, EIFS (Extended InterFrame Spacing), is used only by a station that has just received a bad or unknown frame, to report the problem.
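The ordering SIFS < AIFS1 < DIFS < AIFS4 < EIFS is what creates the priority levels. As a rough sketch, the spacings can be derived from the slot time; the microsecond values below assume the classic 802.11b PHY (SIFS = 10 µs, slot = 20 µs), and other PHYs use different numbers.

# Relative interframe spacings derived from the slot time (802.11b-style values assumed).

SIFS_US = 10
SLOT_US = 20

def difs():
    return SIFS_US + 2 * SLOT_US            # DIFS = SIFS + 2 * slot

def aifs(aifsn):
    return SIFS_US + aifsn * SLOT_US        # 802.11e: AIFS = SIFS + AIFSN * slot

if __name__ == "__main__":
    print("SIFS           :", SIFS_US, "us")
    print("AIFS (AIFSN=1) :", aifs(1), "us   # high-priority (AP), shorter than DIFS")
    print("DIFS           :", difs(), "us")
    print("AIFS (AIFSN=7) :", aifs(7), "us   # background traffic, longer than DIFS")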
The 802.11 standard defines three different classes of frames in the air:
data, control, and management. Each of these has a header with a variety of fields
used within the MAC sublayer.
In addition, there are some headers used by the physical layer, but these
mostly deal with the modulation techniques used, so we will not discuss them here.
We will look at the format of the data frame as an example. It is shown in
Fig. 4-29. First comes the Frame control field, which is made up of 11 subfields.
The first of these is the Protocol version, set to 00. It is there to allow
future versions of 802.11 to operate at the same time in the same cell.
Then come the Type(data, control, or management) and Subtype fields
(e.g., RTS or CTS). For a regular data frame (without quality of service), they are
set to 10 and 0000 in binary.
The To DS and From DS bits are set to indicate whether the frame is going
to or coming from the network connected to the APs, which is called the distribution
system. The More fragments bit means that more fragments will follow. The Retry
bit marks a retransmission of a frame sent earlier.
The Power management bit indicates that the sender is going into power-
save mode. The More data bit indicates that the sender has additional frames for
the receiver.
The Protected Frame bit indicates that the frame body has been encrypted
for security. We will discuss security briefly in the next section. Finally, the Order
bit tells the receiver that the higher layer expects the sequence of frames to arrive
strictly in order.
The second field of the data frame, the Duration field, tells how long the
frame and its acknowledgement will occupy the channel, measured in
microseconds. It is present in all types of frames, including control frames, and is
what stations use to manage the NAV mechanism.
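To illustrate how these header fields are laid out, the sketch below unpacks the 2-byte Frame Control and 2-byte Duration fields from the start of an 802.11 MAC header (both little-endian). The subfield bit positions follow the standard layout; the sample bytes are fabricated for the example.

# Unpack the Frame Control and Duration fields at the start of an 802.11 MAC header.
import struct

def parse_header_start(header: bytes):
    fc, duration = struct.unpack_from("<HH", header, 0)
    return {
        "protocol_version": fc & 0b11,
        "type":            (fc >> 2) & 0b11,       # 0=mgmt, 1=control, 2=data
        "subtype":         (fc >> 4) & 0b1111,
        "to_ds":           (fc >> 8) & 1,
        "from_ds":         (fc >> 9) & 1,
        "more_fragments":  (fc >> 10) & 1,
        "retry":           (fc >> 11) & 1,
        "power_mgmt":      (fc >> 12) & 1,
        "more_data":       (fc >> 13) & 1,
        "protected":       (fc >> 14) & 1,
        "order":           (fc >> 15) & 1,
        "duration_us":     duration,
    }

if __name__ == "__main__":
    # Fabricated example: a data frame (type=2, subtype=0) with To DS set
    # and a 314-microsecond Duration/NAV value.
    fc = (2 << 2) | (1 << 8)
    sample = struct.pack("<HH", fc, 314)
    print(parse_header_start(sample))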
Next come addresses. Data frames sent to or from an AP have three
addresses, all in standard IEEE 802 format. The first address is the receiver,
and the second address is the transmitter. They are obviously needed, but
what is the third address for? Remember that the AP is simply a relay point for
frames as they travel between a client and another point on the network, perhaps a
distant client or a portal to the Internet. The third address gives this distant
endpoint.
The Sequence field numbers frames so that duplicates can be detected. Of the 16 bits available, 4 identify the fragment and 12 carry a number that is advanced with each new transmission.
The Data field contains the payload, up to 2312 bytes. The first bytes of this
payload are in a format known as LLC (Logical Link Control). This layer is the
glue that identifies the higher-layer protocol(e.g., IP) to which the payloads should
be passed.
Last comes the Frame check sequence, which is the same 32-bit CRC we
saw in Sec. 3.2.2 and elsewhere. Management frames have the same format as
data frames, plus a format for the data portion that varies with the subtype (e.g.,
parameters in beacon frames).
Control frames are short. Like all frames, they have the Frame control,
Duration, and Frame check sequence fields. However, they may have only one
address and no data portion. Most of the key information is conveyed with the
Subtype field (e.g., ACK, RTS and CTS).
Services
The 802.11 standard defines the services that the clients, the access points, and the network connecting them must provide in order to be a conformant wireless LAN. These services cluster into several groups.
The association service is used by mobile stations to connect themselves to
APs. Typically, it is used just after a station moves within radio range of the AP.
Upon arrival, the station learns the identity and capabilities of the AP, either from
beacon frames or by directly asking the AP.
The capabilities include the data rates supported, security arrangements,
power-saving capabilities, quality of service support, and more. The station sends a
request to associate with the AP. The AP may accept or reject the request.
Re-association lets a station change its preferred AP. This facility is useful
for mobile stations moving from one AP to another AP in the same extended 802.11
LAN, like a handover in the cellular network. If it is used correctly, no data will be
lost as a consequence of the handover. Either the station or the AP may also
disassociate, breaking their relationship. A station should use this service before
shutting down or leaving the network. The AP may use it before going down for
maintenance.
Stations must also authenticate before they can send frames via the AP,
but authentication is handled in different ways depending on the choice of security
scheme. The recommended scheme, called WPA2 (WiFi Protected Access 2),
implements security as defined in the 802.11i standard.
The scheme that was used before WPA is called WEP (Wired Equivalent Privacy). For this scheme, authentication with a preshared key happens before association. Data transmission is what it is all about, so 802.11 naturally provides a data delivery service. This service lets stations transmit and receive data using the protocols described earlier. Since 802.11 is modeled on Ethernet and transmission over Ethernet is not guaranteed to be 100% reliable, transmission over 802.11 is not guaranteed to be reliable either.
Higher layers must deal with detecting and correcting errors. Wireless is a
broadcast signal. For information sent over a wireless LAN to be kept confidential, it
must be encrypted.
This goal is accomplished with a privacy service that manages the details of
encryption and decryption. The encryption algorithm for WPA2 is based on AES
(Advanced Encryption Standard), a U.S. government standard approved in
2002. To handle traffic with different priorities, there is a QOS traffic scheduling
service.
It uses the protocols we described to give voice and video traffic preferential
treatment compared to best-effort and background traffic.
Finally, there are two services that help stations manage their use of the
spectrum. The transmit power control service gives stations the information they
need to meet regulatory limits on transmit power that vary from region to region.
The dynamic frequency selection service gives stations the information they need to avoid transmitting on frequencies in the 5-GHz band that are being used for radar in the vicinity.
With these services, 802.11 provides a rich set of functionality for connecting nearby mobile clients to the Internet. It has been a huge success, and the standard has repeatedly been amended to add more functionality.