CN Unit 4
UNIT-4
Syllabus
• ALOHA
• CSMA/CD
• CSMA/CA
• Ethernet
• Token Ring
• Error Control
• Stop and Wait ARQ
• Sliding Window ARQ
• Error Detection
• Parity Check
• Checksum
• CRC
• Error Correction
• Hamming Codes
ALOHA
• RANDOM ACCESS
• In random-access or contention methods, no station is superior to another station and none is assigned control over another.
• In a random-access method, each station has the right to the medium without being controlled by any other station. However, if more than one station tries to send, there is an access conflict (a collision) and the frames will be either destroyed or modified.
ALOHA
• ALOHA, the earliest random access method, was developed at the University of Hawaii
in early 1970. It was designed for a radio (wireless) LAN, but it can be used on any
shared medium.
• Pure ALOHA
• The original ALOHA protocol is called pure ALOHA
• The idea is that each station sends a frame whenever it has a frame to send (multiple access). However,
since there is only one channel to share, there is the possibility of collision between frames from
different stations.
ALOHA
• Vulnerable time
• Let us find the vulnerable time, the length of time in which there is a possibility of collision.
• For pure ALOHA the vulnerable time is 2 × Tfr: a frame collides with any other frame that starts within one frame transmission time before or after it.
• G is the average number of frames generated by the system during one frame transmission time. The throughput of pure ALOHA is S = G × e^(−2G); its maximum value is 0.184 when G = 1/2.
ALOHA
• A pure ALOHA network transmits 200-bit frames on a shared channel of 200 kbps. What is the throughput if the system (all stations together) produces
a. 1000 frames per second?
b. 500 frames per second?
c. 250 frames per second?
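• A rough companion sketch in Python (mine, not from the slides), assuming the standard pure ALOHA throughput relation S = G × e^(−2G):

    import math

    T_fr = 200 / 200_000                     # frame transmission time: 200 bits / 200 kbps = 1 ms
    for produced in (1000, 500, 250):        # frames generated per second by all stations
        G = produced * T_fr                  # average frames per frame transmission time
        S = G * math.exp(-2 * G)             # fraction of the channel carrying successful frames
        print(f"{produced} frames/s -> G = {G:.2f}, S = {S:.3f}")
    # Prints S ~ 0.135 for G = 1, S ~ 0.184 for G = 0.5 (the maximum), and S ~ 0.152 for G = 0.25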
ALOHA
• Slotted ALOHA
• In slotted ALOHA we divide the time into slots of Tfr seconds and force each station to send only at the beginning of a time slot.
• There is still the possibility of collision if two stations try to send at the beginning of the same time slot. However, the vulnerable time is now reduced to one-half, equal to Tfr.
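• For comparison, a small sketch (mine, not from the slides) using the standard slotted ALOHA relation S = G × e^(−G), which peaks at about 0.368 when G = 1:

    import math

    for G in (0.25, 0.5, 1.0):               # offered load in frames per slot
        S = G * math.exp(-G)                 # slotted ALOHA throughput
        print(f"G = {G}: S = {S:.3f}")       # 0.195, 0.303, 0.368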
ALOHA
• Vulnerable time for slotted ALOHA protocol
ALOHA
Carrier Sense Multiple Access (CSMA)
• CSMA is based on the principle “sense before transmit” or “listen before talk.”
• CSMA can reduce the possibility of collision, but it cannot eliminate it.
• A station may sense the medium and find it idle only because the first bit sent by another station has not yet reached it.
Carrier Sense Multiple Access (CSMA)
• Vulnerable Time
• The vulnerable time for CSMA is the propagation time Tp. This is the time needed for a signal to propagate from one end of the medium to the other.
• Once the first bit of the frame reaches the end of the medium, every station will already have heard the bit and will refrain from sending.
• The leftmost station, A, sends a frame at time t1, which reaches the rightmost station, D, at time t1 + Tp. The gray area shows the vulnerable area in time and space.
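• A quick illustrative calculation (the cable length and propagation speed are assumed values, not from the slides): the vulnerable time equals the propagation time Tp = distance / propagation speed.

    distance = 2_000          # metres of shared medium (assumed for illustration)
    speed = 2e8               # approximate signal propagation speed in copper cable, m/s
    Tp = distance / speed     # vulnerable time for CSMA
    print(f"Tp = {Tp * 1e6:.0f} microseconds")   # 10 microseconds for this example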
Carrier Sense Multiple Access (CSMA)
• Persistence Methods
• 1-Persistent
• In this method, after the station finds the line idle, it sends its frame immediately (with
probability 1).
• This method has the highest chance of collision because two or more stations may find
the line idle and send their frames immediately.
Carrier Sense Multiple Access (CSMA)
• Nonpersistent
• In the nonpersistent method, a station that has a frame to send senses the line. If the line
is idle, it sends immediately. If the line is not idle, it waits a random amount of time and
then senses the line again.
• The nonpersistent approach reduces the chance of collision because it is unlikely that
two or more stations will wait the same amount of time and retry to send
simultaneously.
• However, this method reduces the efficiency of the network because the medium
remains idle when there may be stations with frames to send.
Carrier Sense Multiple Access (CSMA)
• p-Persistent
• It reduces the chance of collision and improves efficiency. In this method, after the station
finds the line idle it follows these steps:
• 1.With probability p, the station sends its frame.
• 2. With probability q = 1 − p, the station waits for the beginning of the next time slot and
checks the line again.
• a. If the line is idle, it goes to step 1.
• b. If the line is busy, it acts as though a collision has occurred and uses the backoff procedure.
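• A minimal Python sketch of these steps (mine, not a standard implementation); channel_is_idle, wait_for_next_slot, send_frame, and backoff are hypothetical helpers:

    import random

    def p_persistent_send(p, channel_is_idle, wait_for_next_slot, send_frame, backoff):
        # Assumes the line has already been found idle.
        while True:
            if random.random() < p:          # step 1: send with probability p
                send_frame()
                return
            wait_for_next_slot()             # step 2: with probability q = 1 - p, wait one slot
            if not channel_is_idle():        # step 2b: line busy -> act as though a collision occurred
                backoff()
                return
            # step 2a: line idle -> go back to step 1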
Carrier Sense Multiple Access with Collision Detection (CSMA/CD)
• Procedure
Carrier Sense Multiple Access with Collision Detection (CSMA/CD)
• Energy Level
• The level of energy in a channel can have three values: zero, normal, and abnormal.
• Zero level: the channel is idle.
• Normal level: a station has successfully captured the channel and is sending its frame.
• Abnormal level: there is a collision and the level of the energy is twice the normal level
Throughput
The throughput of CSMA/CD is greater than that of pure or slotted ALOHA
Carrier Sense Multiple Access With Collision Avoidance (CSMA/CA)
• CSMA/CA was invented for wireless networks. Collisions are avoided through the use of CSMA/CA's three strategies:
• the interframe space,
• the contention window,
• and acknowledgment
Carrier Sense Multiple Access With Collision Avoidance (CSMA/CA)
• Acknowledgment
• The positive acknowledgment and the time-out timer can help guarantee that the
receiver has received the frame
Carrier Sense Multiple Access With Collision Avoidance (CSMA/CA)
• Frame Exchange Time Line
• 1. Before sending a frame, the source station senses the medium by checking the
energy level at the carrier frequency.
• a. The channel uses a persistence strategy with backoff until the channel is idle.
• b. After the channel is found to be idle, the station waits for a period of time called the DCF interframe space (DIFS); then the station sends a control frame called the request to send (RTS).
• 2. After receiving the RTS and waiting a period of time called the short interframe
space (SIFS), the destination station sends a control frame, called the clear to send
(CTS), to the source station. This control frame indicates that the destination station is
ready to receive data.
• 3. The source station sends data after waiting an amount of time equal to SIFS.
• 4. The destination station, after waiting an amount of time equal to SIFS, sends an
acknowledgment to show that the frame has been received. Acknowledgment is
needed in this protocol because the station does not have any means to check for the
successful arrival of its data at the destination.
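• An illustrative timeline sketch of this four-step exchange (mine, not the 802.11 implementation); the DIFS/SIFS durations are assumed values and frame transmission times are omitted:

    DIFS, SIFS = 50e-6, 10e-6                 # assumed gap durations in seconds

    t = 0.0
    def wait(gap):
        global t
        t += gap

    def send(event):
        print(f"t = {t * 1e6:5.1f} us  {event}")

    wait(DIFS); send("source -> destination: RTS")
    wait(SIFS); send("destination -> source: CTS")
    wait(SIFS); send("source -> destination: DATA")
    wait(SIFS); send("destination -> source: ACK")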
Carrier Sense Multiple Access With Collision Avoidance (CSMA/CA)
Ethernet
• IEEE Project 802
• In 1985, the Computer Society of the IEEE started a project, called Project 802, to set
standards to enable intercommunication among equipment from a variety of
manufacturers
• The IEEE has subdivided the data-link layer into two sublayers:
• Logical Link Control (LLC) and
• Media Access Control (MAC)
Ethernet
• STANDARD ETHERNET
• Characteristics
• Connectionless and Unreliable Service
• Connectionless : each frame sent is independent of the previous or next frame.
Ethernet has no connection establishment or connection termination phases
• Unreliable Service: If a frame is corrupted during transmission and the receiver finds out about the corruption, the receiver drops the frame silently; it is the duty of higher-level protocols to find out about it.
Standard Ethernet
• Frame Format
• Preamble. This field contains 7 bytes (56 bits) of alternating 0s and 1s that alert
the receiving system to the coming frame and enable it to synchronize its clock if
it’s out of synchronization.
• Start frame delimiter (SFD). This field (1 byte: 10101011) signals the beginning of
the frame. The SFD warns the station or stations that this is the last chance for
synchronization.
Standard Ethernet
• Destination address (DA): This field is six bytes (48 bits) and contains the link-layer address of
the destination station or stations to receive the packet
• Source address (SA): This field is also six bytes and contains the link-layer address of the sender
of the packet
• Type: This field defines the upper-layer protocol whose packet is encapsulated in the frame
• Data: It is a minimum of 46 and a maximum of 1500 bytes
• CRC: The last field contains error detection information
• Frame Length
• If we count 18 bytes of header and trailer (6 bytes of source address, 6 bytes of destination address, 2 bytes
of length or type, and 4 bytes of CRC), then the minimum length of data from the upper layer is 64 − 18 =
46 bytes. If the upper-layer packet is less than 46 bytes, padding is added to make up the difference
• The standard defines the maximum length of a frame (without preamble and SFD field) as 1518 bytes. If
we subtract the 18 bytes of header and trailer, the maximum length of the payload is 1500 bytes
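• A quick arithmetic check of these limits (a sketch of mine, not from the slides):

    HEADER_TRAILER = 6 + 6 + 2 + 4            # DA + SA + type/length + CRC = 18 bytes
    MIN_FRAME, MAX_FRAME = 64, 1518           # frame-length limits, excluding preamble and SFD
    print("minimum payload:", MIN_FRAME - HEADER_TRAILER)   # 46 bytes (shorter data is padded)
    print("maximum payload:", MAX_FRAME - HEADER_TRAILER)   # 1500 bytes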
Standard Ethernet
• Addressing: Each station on an Ethernet network (such as a PC, workstation, or
printer) has its own network interface card (NIC)
• Implementation
• 10BaseX: the number defines the data rate (10 Mbps), the term Base means baseband (digital) signaling, and X approximately defines either the maximum length of the cable in multiples of 100 meters (for example, 5 for 500 m and 2 for roughly 200 m, actually 185 m) or the type of cable, T for unshielded twisted-pair (UTP) and F for fiber-optic.
Standard Ethernet
• 10Base5: Thick Ethernet
• 10Base5 was the first Ethernet specification to use a bus topology with an external
transceiver (transmitter/receiver) connected via a tap to a thick coaxial cable.
• If a length of more than 500 m is needed, up to five segments, each a maximum of 500
meters, can be connected using repeaters
Standard Ethernet
• 10Base2: Thin Ethernet
• 10Base2 also uses a bus topology, but the
cable is much thinner and more flexible.
The cable can be bent to pass very close to
the stations. In this case, the transceiver is
normally part of the network interface card
(NIC), which is installed inside the station
Fast Ethernet (100 Mbps)
• Topology
• Fast Ethernet is designed to connect two or more stations.
• If there are only two stations, they can be connected point-to-point.
• Three or more stations need to be connected in a star topology with a hub or a switch at the center
Gigabit Ethernet
• Gigabit Ethernet Protocol (1000 Mbps). The IEEE committee calls it the
Standard 802.3z. The goals of the Gigabit Ethernet were to upgrade the data
rate to 1 Gbps, but keep the address length, the frame format, and the
maximum and minimum frame length the same
• Upgrade the data rate to 1 Gbps.
• Make it compatible with Standard or Fast Ethernet.
• Use the same 48-bit address.
• Use the same frame format.
• Keep the same minimum and maximum frame lengths.
• Support autonegotiation as defined in Fast Ethernet.
• Full-Duplex Mode
Gigabit Ethernet
• Topology
• Gigabit Ethernet is designed to connect two or more stations. If there are only
two stations, they can be connected point-to-point. Three or more stations
need to be connected in a star topology with a hub or a switch at the center.
10 GIGABIT ETHERNET
• The IEEE committee created 10 Gigabit Ethernet and called it Standard 802.3ae
• The goals of the 10 Gigabit Ethernet design can be summarized as upgrading the data rate to 10 Gbps, keeping the same frame size and format, and making the interconnection of LANs, MANs, and WANs possible.
Token Ring
• When a station has some data to send, it waits until it receives the token from its
predecessor.
• When the station has no more data to send, it releases the token, passing it to the next
logical station in the ring.
• The station cannot send data until it receives the token again in the next round.
• In this process, when a station receives the token and has no data to send, it just passes the token to the next station
Token Ring
• Logical Ring
• Physical Ring Topology
• When a station sends the token to its successor, the token cannot be seen by other stations; the successor is the next one in line
• Disadvantage: if one of the links (the medium between two adjacent stations) fails, the whole system fails
Token Ring
• Dual Ring Topology
• The dual ring topology uses a second (auxiliary) ring which operates in the reverse
direction compared with the main ring.
• The second ring is for emergencies only. If one of the links in the main ring fails, the
system automatically combines the two rings to form a temporary ring. After the failed
link is restored, the auxiliary ring becomes idle again.
• Note that for this topology to work, each station needs to have two transmitter ports and
two receiver ports. The high-speed Token Ring networks called FDDI (Fiber Distributed
Data Interface) and CDDI (Copper Distributed Data Interface) use this topology.
Token Ring
• Bus Ring Topology
• In the bus ring topology, also called a token bus, the stations are connected to a
single cable called a bus.
• They, however, make a logical ring, because each station knows the address of
its successor (and also predecessor for token management purposes).
• When a station has finished sending its data, it releases the token and inserts the
address of its successor in the token.
• Only the station with the address matching the destination address of the token
gets the token to access the shared media.
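• A minimal sketch of this addressing idea (mine, with an assumed four-station logical ring):

    successor = {"A": "B", "B": "C", "C": "D", "D": "A"}     # assumed logical ring order

    def release_token(station):
        # The finishing station puts the token on the bus, addressed to its successor.
        return {"type": "token", "to": successor[station]}

    def accepts(station, token):
        # Only the station whose address matches the token's destination takes it.
        return token["type"] == "token" and token["to"] == station

    token = release_token("A")
    print([s for s in "ABCD" if accepts(s, token)])          # ['B']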
Token Ring
• Star Ring Topology
• In a star ring topology, the physical topology is a star.
• There is a hub, however, that acts as the connector. The wiring inside the hub makes
the ring; the stations are connected to this ring through the two wire connections.
• This topology makes the network less prone to failure because if a link goes down, it
will be bypassed by the hub and the rest of the stations can operate. Also adding and
removing stations from the ring is easier.
• This topology is still used in the Token Ring LAN designed by IBM.
Flow Control
• Flow control is a set of procedures that tells the sender how much data it can transmit
before it must wait for an acknowledgment from the receiver
• Any receiving device has a limited speed at which it can process incoming data and a
limited amount of memory in which to store incoming data.
• The receiving device must be able to inform the sending device before those limits are
reached and to request that the transmitting device send fewer frames or stop
temporarily.
• Incoming data must be checked and processed before they can be used. The rate of
such processing is often slower than the rate of transmission.
• For this reason, each receiving device has a block of memory, called a buffer, reserved
for storing incoming data until they are processed.
• If the buffer begins to fill up, the receiver must be able to tell the sender to halt transmission until
it is once again able to receive
Flow Control
• Working Principle
• In these protocols, the sender has a buffer called the sending window and the receiver has a buffer called the receiving window.
• The size of the sending window determines the sequence numbers of the outbound frames. If the sequence number is an n-bit field, then the range of sequence numbers that can be assigned is 0 to 2^n − 1, and the size of the sending window is 2^n − 1. Thus, in order to accommodate a sending window of size 2^n − 1, an n-bit sequence number is chosen.
• The sequence numbers are used modulo 2^n. For example, if the sending window size is 4, then the sequence numbers will be 0, 1, 2, 3, 0, 1, 2, 3, 0, 1, and so on. The number of bits in the sequence number is 2, generating the binary sequence 00, 01, 10, 11.
• The size of the receiving window is the maximum number of frames that the receiver can accept at a time. It determines the maximum number of frames that the sender can send before receiving an acknowledgment.
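• A small sketch of the numbering rule (mine): with an n-bit sequence-number field the numbers run from 0 to 2^n − 1 and repeat modulo 2^n, as in the 0, 1, 2, 3, 0, 1, ... example.

    n = 2                                     # bits in the sequence-number field
    modulus = 2 ** n                          # 4 distinct numbers: 00, 01, 10, 11
    print([i % modulus for i in range(10)])   # [0, 1, 2, 3, 0, 1, 2, 3, 0, 1]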
Flow Control
• Example
• Suppose that we have a sender window and a receiver window, each of size 4.
• So the sequence numbering of both windows will be 0, 1, 2, 3, 0, 1, 2, and so on.
• The following diagram shows the positions of the windows after sending the frames and receiving acknowledgments.
• Error Control
• Stop and Wait ARQ
• Sliding Window ARQ
Stop and Wait ARQ
• https://www.youtube.com/watch?v=sdfQ2XbzAEU
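• As a rough companion to the video, a hedged sketch (mine) of the sender-side logic; send_frame and wait_for_ack are hypothetical helpers, and sequence numbers alternate between 0 and 1:

    def stop_and_wait_send(frames, send_frame, wait_for_ack, timeout=1.0):
        seq = 0
        for data in frames:
            while True:
                send_frame(seq, data)              # transmit the frame and start the timer
                ack = wait_for_ack(timeout)        # ACK number received, or None on timeout
                if ack == (seq + 1) % 2:           # expected ACK: stop waiting, move on
                    break                          # otherwise (timeout or wrong ACK) retransmit
            seq = (seq + 1) % 2                    # alternate the 1-bit sequence number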
Sliding Window ARQ
• Go-Back-N ARQ
• https://www.youtube.com/watch?v=QD3oCelHJ20
• HDLC
• PPP
HDLC
• High-level Data Link Control (HDLC) is a protocol that operates at the data link layer.
• The HDLC protocol embeds information in a data frame that allows devices to control data
flow and correct errors
• Role of HDLC is to ensure that the data has been received without any loss or errors
and in the correct order
• Link configuration:
➢ Unbalanced: one primary station, one or more secondary stations
➢ Balanced: two combined stations
Configurations and Transfer Modes
Normal Response Mode (NRM)
• station configuration is unbalanced.
• We have one primary station and
multiple secondary stations.
• A primary station can send
commands; a secondary station can
only respond.
• The NRM is used for both point-to-
point and multipoint links
Asynchronous Balanced Mode (ABM)
• The configuration is balanced.
• The link is point-to-point, and each
station can function as a primary and
a secondary (acting as peers)
Framing
• HDLC is a bit-oriented protocol where each frame contains up to six fields. The structure varies according to the type of frame
• Flag field. This field contains synchronization pattern 01111110, which identifies both the
beginning and the end of a frame.
• Address field. This field contains the address of the secondary station. If a primary station
created the frame, it contains a to address. If a secondary station creates the frame, it contains a
from address. The address field can be one byte or several bytes long, depending on the needs
of the network
• Control field. The control field is one or two bytes used for flow and error control.
• Information field. The information field contains the user’s data from the network layer or
management information. Its length can vary from one network to another.
• FCS field. The frame check sequence (FCS) is the HDLC error detection field. It can contain
either a 2- or 4-byte CRC.
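• A minimal sketch (mine) that simply lays these fields out in order; the address, control, and FCS values are placeholders, and the CRC computation is omitted:

    FLAG = bytes([0b01111110])                # 0x7E flag marking the start and end of the frame

    def build_frame(address: bytes, control: bytes, information: bytes, fcs: bytes) -> bytes:
        # Flag | Address | Control | Information | FCS | Flag
        return FLAG + address + control + information + fcs + FLAG

    frame = build_frame(b"\x03", b"\x00", b"user data", b"\x00\x00")   # placeholder field values
    print(frame.hex())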
Types of HDLC Frames
• HDLC defines three types of frames: information frames (I-frames) that carry user data, supervisory frames (S-frames) used for flow and error control, and unnumbered frames (U-frames) used for link management.
PPP Authentication: Challenge Handshake Authentication Protocol (CHAP)
• 1. The system sends the user a challenge packet containing a challenge value.
• 2. The user applies a predefined function that takes the challenge value and the user's own password and creates a result. The user sends the result in the response packet to the system.
• 3. The system does the same. It applies the same function to the password of the user (known to the system) and the challenge value to create a result. If the result created is the same as the result sent in the response packet, access is granted; otherwise, it is denied
Multiplexing
• Network Control Protocols
• PPP is a multiple-network-layer protocol. It can carry a network-layer data packet from protocols defined by the Internet, OSI, Xerox, DECnet, AppleTalk, Novell, and so on.
• To do this, PPP has defined a specific Network Control Protocol for each network
protocol.
• For example, IPCP (Internet Protocol Control Protocol) configures the link for
carrying IP data packets
• IPCP
• This protocol configures the link used to carry IP packets in the Internet
Multiplexing
• Data from the Network Layer
• After the network-layer configuration is completed by one of the NCP
protocols, the users can exchange data packets from the network layer.
• Here again, there are different protocol fields for different network layers.
For example,
• if PPP is carrying data from the IP network layer, the field value is 0021
• If PPP is carrying data from the OSI network layer, the value of the protocol field is
0023
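• A small sketch of this demultiplexing idea (mine); 0x0021 and 0x0023 are the hexadecimal protocol-field values quoted above:

    PROTOCOLS = {0x0021: "IP packet", 0x0023: "OSI network-layer packet"}

    def deliver(protocol_field, payload):
        # Hand the payload to the network layer selected by the PPP protocol field.
        return PROTOCOLS.get(protocol_field, "unknown protocol"), payload

    print(deliver(0x0021, b"...")[0])         # IP packet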
Multiplexing
• Multilink PPP
• PPP was originally designed for a single-channel point-to-point physical
link.
• The availability of multiple channels in a single point-to-point link
motivated the development of Multilink PPP