
21CSC302J-COMPUTER NETWORKS

UNIT-4
Syllabus
• ALOHA
• CSMA/CD
• CSMA/CA
• Ethernet
• Token Ring
• Flow Control
  • Stop and Wait
  • Sliding Window
• Error Control
  • Stop and Wait ARQ
  • Sliding Window ARQ
• Error Detection
  • Parity Check
  • Checksum
  • CRC
• Error Correction
  • Hamming Codes
• Data Link Layer Protocol
  • HDLC
  • PPP
ALOHA

• RANDOM ACCESS
• In random-access or contention methods, no station is superior to another
station and none is assigned control over another.
• In a random-access method, each station has the right to the medium without
being controlled by any other station. However, if more than one station tries
to send, there is an access conflict (a collision) and the frames will be either
destroyed or modified.
ALOHA
• ALOHA, the earliest random access method, was developed at the University of Hawaii
in early 1970. It was designed for a radio (wireless) LAN, but it can be used on any
shared medium.
• Pure ALOHA
• The original ALOHA protocol is called pure ALOHA
• The idea is that each station sends a frame whenever it has a frame to send (multiple access). However,
since there is only one channel to share, there is the possibility of collision between frames from
different stations.
ALOHA
• Vulnerable time
• Let us find the vulnerable time, the length of time in which there is a possibility
of collision. For pure ALOHA the vulnerable time is 2 × Tfr, twice the frame
transmission time.
• Let G be the average number of frames generated by the system during one frame
transmission time. The throughput of pure ALOHA is then S = G × e^(−2G).
ALOHA
• A pure ALOHA network transmits 200-bit frames on a shared channel of 200 kbps. What
is the throughput if the system (all stations together) produces
a. 1000 frames per second?
b. 500 frames per second?
c. 250 frames per second?
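• The example above can be worked out with the standard throughput formulas, S = G × e^(−2G) for pure ALOHA and S = G × e^(−G) for slotted ALOHA. A minimal Python sketch, following the textbook convention of reporting S times the offered frame rate as the number of frames expected to survive:

```python
import math

def aloha_throughput(frames_per_sec, frame_bits, bandwidth_bps, slotted=False):
    """Return (G, S): offered load and throughput fraction.
    Pure ALOHA:    S = G * e^(-2G)   (vulnerable time = 2 * Tfr)
    Slotted ALOHA: S = G * e^(-G)    (vulnerable time = Tfr)"""
    tfr = frame_bits / bandwidth_bps          # frame transmission time, seconds
    g = frames_per_sec * tfr                  # frames generated per Tfr
    s = g * math.exp(-g if slotted else -2 * g)
    return g, s

# Worked example from the slide: 200-bit frames on a 200 kbps channel (Tfr = 1 ms).
for fps in (1000, 500, 250):
    g, s = aloha_throughput(fps, 200, 200_000)
    print(f"{fps} frames/s: G = {g:g}, S = {s:.3f}, "
          f"about {round(s * fps)} frames expected to survive")
# Prints roughly: G=1 -> S=0.135 (135 frames), G=0.5 -> S=0.184 (92 frames),
#                 G=0.25 -> S=0.152 (38 frames)
```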
ALOHA
• Slotted ALOHA
• In slotted ALOHA we divide the time into slots of Tfr seconds and force the
station to send only at the beginning of the time slot
• There is still the possibility of collision if two stations try to send at the
beginning of the same time slot. However, the vulnerable time is now reduced
to one-half, equal to Tfr
ALOHA
• Vulnerable time for slotted ALOHA protocol
Carrier Sense Multiple Access (CSMA)

• CSMA is based on the principle “sense before transmit” or “listen before talk.”
• CSMA can reduce the possibility of collision, but it cannot eliminate it.
• A station may sense the medium and find it idle only because the first bit sent
by another station has not yet been received.
Carrier Sense Multiple Access (CSMA)
• Vulnerable Time
• The vulnerable time for CSMA is the propagation time Tp. This is the time needed for a
signal to propagate from one end of the medium to the other.
• Once the first bit of the frame reaches the end of the medium, every station will already
have heard the bit and will refrain from sending.
• The leftmost station, A, sends a frame at time t1, which reaches the rightmost station, D,
at time t1 + Tp. The gray area shows the vulnerable area in time and space.
Carrier Sense Multiple Access (CSMA)
• Persistence Methods
• 1-Persistent
• In this method, after the station finds the line idle, it sends its frame immediately (with
probability 1).
• This method has the highest chance of collision because two or more stations may find
the line idle and send their frames immediately.
Carrier Sense Multiple Access (CSMA)
• Nonpersistent
• In the nonpersistent method, a station that has a frame to send senses the line. If the line
is idle, it sends immediately. If the line is not idle, it waits a random amount of time and
then senses the line again.
• The nonpersistent approach reduces the chance of collision because it is unlikely that
two or more stations will wait the same amount of time and retry to send
simultaneously.
• However, this method reduces the efficiency of the network because the medium
remains idle when there may be stations with frames to send.
Carrier Sense Multiple Access (CSMA)
• p-Persistent
• It reduces the chance of collision and improves efficiency. In this method, after the station
finds the line idle, it follows these steps (sketched in code after the list):
• 1.With probability p, the station sends its frame.
• 2. With probability q = 1 − p, the station waits for the beginning of the next time slot and
checks the line again.
• a. If the line is idle, it goes to step 1.
• b. If the line is busy, it acts as though a collision has occurred and uses the backoff procedure.
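• A minimal sketch of the p-persistent procedure above; channel_is_idle, send_frame, wait_one_slot, and backoff are hypothetical callbacks standing in for the real MAC and physical-layer operations:

```python
import random

def p_persistent_send(p, channel_is_idle, send_frame, wait_one_slot, backoff):
    """Sketch of the p-persistent method: once the line is idle, transmit with
    probability p, otherwise wait one slot and re-check the line."""
    while True:
        # Carrier sense: wait until the line becomes idle.
        while not channel_is_idle():
            wait_one_slot()
        # Step 1: with probability p, send the frame.
        if random.random() < p:
            send_frame()
            return
        # Step 2: with probability 1 - p, wait for the next slot and check again.
        wait_one_slot()
        if not channel_is_idle():
            # Step 2b: line is busy, act as if a collision occurred and back off.
            backoff()
        # Step 2a: line is still idle, loop back to step 1.
```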
Carrier Sense Multiple Access with Collision Detection (CSMA/CD)

• CSMA/CD augments the CSMA algorithm with a procedure to handle a collision: a station
monitors the medium while it is sending a frame.
• The figure shows the first bits transmitted by the two stations involved in the collision.
Each station continues to send bits of the frame until it detects the collision, and then
aborts the transmission.
Carrier Sense Multiple Access with Collision Detection (CSMA/CD)

• Minimum Frame Size


• The frame transmission time Tfr must be at least two times the maximum propagation time
Tp. To understand the reason, let us think about the worst-case scenario.
• If the two stations involved in a collision are the maximum distance apart, the signal from
the first takes time Tp to reach the second, and the effect of the collision takes another time
Tp to reach the first. So the requirement is that the first station must still be transmitting
after 2Tp.
• Example:
• A network using CSMA/CD has a bandwidth of 10 Mbps. If the maximum propagation
time is 25.6 μs, what is the minimum size of the frame?
• Solution
• The minimum frame transmission time is Tfr = 2 × Tp = 51.2 μs. This means, in the
worst case, a station needs to transmit for a period of 51.2 μs to detect the collision. The
minimum size of the frame is 10 Mbps × 51.2 μs = 512 bits or 64 bytes. This is actually
the minimum size of the frame for Standard Ethernet
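• The rule Tfr ≥ 2 × Tp turns into a two-line calculation; a small sketch reproducing the worked example above:

```python
def csma_cd_min_frame_bits(bandwidth_bps, max_prop_time_s):
    """Minimum frame size so the sender is still transmitting when the effect
    of a worst-case collision comes back: Tfr >= 2 * Tp."""
    tfr = 2 * max_prop_time_s            # minimum frame transmission time (s)
    return bandwidth_bps * tfr           # minimum frame size in bits

# Example from the slide: 10 Mbps bandwidth, Tp = 25.6 microseconds.
bits = round(csma_cd_min_frame_bits(10_000_000, 25.6e-6))
print(bits, "bits =", bits // 8, "bytes")   # 512 bits = 64 bytes
```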
Carrier Sense Multiple Access with Collision Detection (CSMA/CD)

• Procedure
Carrier Sense Multiple Access with Collision Detection (CSMA/CD)

• Energy Level
• The level of energy in a channel can have three values: zero, normal, and abnormal.
• Zero level: the channel is idle.
• Normal level: a station has successfully captured the channel and is sending its frame.
• Abnormal level: there is a collision and the level of the energy is twice the normal level

• Throughput
• The throughput of CSMA/CD is greater than that of pure or slotted ALOHA.
Carrier Sense Multiple Access With Collision Avoidance (CSMA/CA)
• CSMA/CA was invented for wireless networks. Collisions are avoided through the use of
CSMA/CA's three strategies:
• the interframe space,
• the contention window,
• and acknowledgment.
Carrier Sense Multiple Access With Collision Avoidance (CSMA/CA)

• Interframe Space (IFS)


• When an idle channel is found, the station does not send immediately. It waits for a
period of time called the interframe space or IFS.
• After waiting an IFS time, if the channel is still idle, the station can send, but it still needs
to wait a time equal to the contention window.
• A station that is assigned a shorter IFS has a higher priority.
• Contention Window
• The contention window is an amount of time divided into slots. A station that is ready to
send chooses a random number of slots as its wait time.

• Acknowledgment
• The positive acknowledgment and the time-out timer can help guarantee that the
receiver has received the frame
Carrier Sense Multiple Access With Collision Avoidance (CSMA/CA)
• Frame Exchange Time Line
• 1. Before sending a frame, the source station senses the medium by checking the
energy level at the carrier frequency.
• a. The channel uses a persistence strategy with backoff until the channel is idle.
• b. After the station is found to be idle, the station waits for a period of time called the DCF
interframe space (DIFS); then the station sends a control frame called the request to send (RTS).
• 2. After receiving the RTS and waiting a period of time called the short interframe
space (SIFS), the destination station sends a control frame, called the clear to send
(CTS), to the source station. This control frame indicates that the destination station is
ready to receive data.
• 3. The source station sends data after waiting an amount of time equal to SIFS.
• 4. The destination station, after waiting an amount of time equal to SIFS, sends an
acknowledgment to show that the frame has been received. Acknowledgment is
needed in this protocol because the station does not have any means to check for the
successful arrival of its data at the destination.
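• A rough sketch of the four-step exchange above as a sequence of (wait, frame) events; the DIFS/SIFS values and frame transmission times are placeholders, not figures from any particular standard:

```python
# Hypothetical timing constants (actual values depend on the PHY in use).
DIFS = 50e-6   # DCF interframe space
SIFS = 10e-6   # short interframe space

def csma_ca_exchange(frame_tx_times):
    """Model the CSMA/CA frame-exchange time line described above as a list
    of (time, sender, frame) events."""
    t = 0.0
    events = []
    for wait, frame, sender in (
        (DIFS, "RTS",  "source"),       # 1. source waits DIFS, sends RTS
        (SIFS, "CTS",  "destination"),  # 2. destination waits SIFS, sends CTS
        (SIFS, "DATA", "source"),       # 3. source waits SIFS, sends data
        (SIFS, "ACK",  "destination"),  # 4. destination waits SIFS, sends ACK
    ):
        t += wait
        events.append((t, sender, frame))
        t += frame_tx_times[frame]
    return events

for t, sender, frame in csma_ca_exchange(
        {"RTS": 30e-6, "CTS": 30e-6, "DATA": 500e-6, "ACK": 30e-6}):
    print(f"t = {t * 1e6:7.1f} us: {sender} sends {frame}")
```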
Ethernet
• IEEE Project 802
• In 1985, the Computer Society of the IEEE started a project, called Project 802, to set
standards to enable intercommunication among equipment from a variety of
manufacturers
• The IEEE has subdivided the data-link layer into two sublayers:
• Logical Link Control (LLC) and
• Media Access Control (MAC)
Ethernet

• Logical Link Control (LLC)


• In IEEE Project 802, flow control, error control, and part of the framing duties are
collected into one sublayer called the logical link control (LLC).
• Framing is handled in both the LLC sublayer and the MAC sublayer

• Media Access Control (MAC)


• IEEE Project 802 has created a sublayer called media access control that defines the
specific access method for each LAN.
• For example, it defines CSMA/CD as the media access method for Ethernet LANs
and
• defines the token-passing method for Token Ring and Token Bus LANs
Ethernet
• Ethernet Evolution

• STANDARD ETHERNET
• Characteristics
• Connectionless and Unreliable Service
• Connectionless : each frame sent is independent of the previous or next frame.
Ethernet has no connection establishment or connection termination phases
• Unreliable Service: If a frame is corrupted during transmission and the receiver finds
out about the corruption, the receiver drops the frame silently; it is the duty of
upper-layer protocols to detect the loss and recover from it.
Standard Ethernet
• Frame Format

• Preamble. This field contains 7 bytes (56 bits) of alternating 0s and 1s that alert
the receiving system to the coming frame and enable it to synchronize its clock if
it’s out of synchronization.
• Start frame delimiter (SFD). This field (1 byte: 10101011) signals the beginning of
the frame. The SFD warns the station or stations that this is the last chance for
synchronization.
Standard Ethernet
• Destination address (DA): This field is six bytes (48 bits) and contains the link-layer address of
the destination station or stations to receive the packet
• Source address (SA): This field is also six bytes and contains the link-layer address of the sender
of the packet
• Type: This field defines the upper-layer protocol whose packet is encapsulated in the frame
• Data: It is a minimum of 46 and a maximum of 1500 bytes
• CRC: The last field contains error detection information
• Frame Length
• If we count 18 bytes of header and trailer (6 bytes of source address, 6 bytes of destination address, 2 bytes
of length or type, and 4 bytes of CRC), then the minimum length of data from the upper layer is 64 − 18 =
46 bytes. If the upper-layer packet is less than 46 bytes, padding is added to make up the difference
• The standard defines the maximum length of a frame (without preamble and SFD field) as 1518 bytes. If
we subtract the 18 bytes of header and trailer, the maximum length of the payload is 1500 bytes
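• A small sketch of the length rules above (46-byte minimum payload via padding, 1500-byte maximum); the function name and the choice of zero padding are illustrative assumptions:

```python
MIN_FRAME = 64                    # bytes, without preamble and SFD
MAX_FRAME = 1518                  # bytes, without preamble and SFD
HEADER_TRAILER = 6 + 6 + 2 + 4    # DA + SA + type/length + CRC = 18 bytes

def ethernet_payload(data: bytes) -> bytes:
    """Pad or reject upper-layer data so the frame respects the Standard
    Ethernet limits described above (46..1500 bytes of payload)."""
    min_payload = MIN_FRAME - HEADER_TRAILER   # 46 bytes
    max_payload = MAX_FRAME - HEADER_TRAILER   # 1500 bytes
    if len(data) > max_payload:
        raise ValueError("upper-layer packet larger than 1500 bytes")
    if len(data) < min_payload:
        data = data + bytes(min_payload - len(data))   # zero padding
    return data

print(len(ethernet_payload(b"hello")))      # 46: short data is padded
print(len(ethernet_payload(bytes(1200))))   # 1200: left untouched
```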
Standard Ethernet
• Addressing: Each station on an Ethernet network (such as a PC, workstation, or
printer) has its own network interface card (NIC)

• Transmission of Address Bits


• The transmission is left to right, byte by byte; however, for each byte, the least significant
bit is sent first and the most significant bit is sent last.
• This means that the bit that defines an address as unicast or multicast arrives first at the
receiver.
• This helps the receiver to immediately know if the packet is unicast or multicast.
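• Because the unicast/multicast bit is the least significant bit of the first byte of the destination address, a receiver can classify an address with a single bit test; a minimal sketch (the helper name is hypothetical):

```python
def address_type(mac: str) -> str:
    """Classify an Ethernet destination address. The bit that distinguishes
    unicast from multicast is the least significant bit of the first byte,
    which (because each byte is sent LSB first) is the first bit on the wire."""
    octets = bytes(int(b, 16) for b in mac.split(":"))
    if octets == b"\xff" * 6:
        return "broadcast"
    return "multicast" if octets[0] & 0x01 else "unicast"

print(address_type("4A:30:10:21:10:1A"))   # unicast   (0x4A is even)
print(address_type("47:20:1B:2E:08:EE"))   # multicast (0x47 is odd)
print(address_type("FF:FF:FF:FF:FF:FF"))   # broadcast
```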
Standard Ethernet
• Unicast, Multicast, and Broadcast Addresses

• Implementation
• In 10BaseX, the number defines the data rate (10 Mbps), the term Base means baseband
(digital) signal, and X approximately defines either the maximum cable length in multiples
of 100 meters (for example, 5 for 500 m or 2 for 185 m) or the type of cable: T for unshielded
twisted-pair cable (UTP) and F for fiber-optic.
Standard Ethernet
• 10Base5: Thick Ethernet
• 10Base5 was the first Ethernet specification to use a bus topology with an external
transceiver (transmitter/receiver) connected via a tap to a thick coaxial cable.

• The transceiver is responsible for transmitting, receiving, and detecting collisions

• If a length of more than 500 m is needed, up to five segments, each a maximum of 500
meters, can be connected using repeaters
Standard Ethernet
• 10Base2: Thin Ethernet
• 10Base2 also uses a bus topology, but the
cable is much thinner and more flexible.
The cable can be bent to pass very close to
the stations. In this case, the transceiver is
normally part of the network interface card
(NIC), which is installed inside the station

• 10Base-T: Twisted-Pair Ethernet


• 10Base-T uses a physical star topology. The
stations are connected to a hub via two pairs
of twisted cable
• The maximum length of the twisted cable
here is defined as 100 m
Standard Ethernet
• 10Base-F: Fiber Ethernet
• 10Base-F uses a star topology to connect stations to a hub.
• The stations are connected to the hub using two fiber-optic cables
Fast Ethernet (100 Mbps)

• Upgrade the data rate to 100 Mbps.


• Make it compatible with Standard Ethernet.
• Keep the same 48-bit address.
• Keep the same frame format.

• Topology
• Fast Ethernet is designed to connect two or more stations.
• If there are only two stations, they can be connected point-to-point.
• Three or more stations need to be connected in a star topology with a hub or a
switch at the center
Gigabit Ethernet
• Gigabit Ethernet Protocol (1000 Mbps). The IEEE committee calls it the
Standard 802.3z. The goals of the Gigabit Ethernet were to upgrade the data
rate to 1 Gbps, but keep the address length, the frame format, and the
maximum and minimum frame length the same
• Upgrade the data rate to 1 Gbps.
• Make it compatible with Standard or Fast Ethernet.
• Use the same 48-bit address.
• Use the same frame format.
• Keep the same minimum and maximum frame lengths.
• Support autonegotiation as defined in Fast Ethernet.
• Full-Duplex Mode
Gigabit Ethernet
• Topology
• Gigabit Ethernet is designed to connect two or more stations. If there are only
two stations, they can be connected point-to-point. Three or more stations
need to be connected in a star topology with a hub or a switch at the center.
10 GIGABIT ETHERNET
• The IEEE committee created 10 Gigabit Ethernet and called it Standard 802.3ae
• The goals of the 10 Gigabit Ethernet design can be summarized as upgrading the
data rate to 10 Gbps, keeping the same frame size and format, and making the
interconnection of LANs, MANs, and WANs possible.
Token Ring

• In the token-passing method, the stations in a network are organized in a logical ring.
For each station, there is a predecessor and a successor.
• The predecessor is the station which is logically before the station in the ring.
• The successor is the station which is after the station in the ring.
• The current station is the one that is accessing the channel now.


Token Ring
• In this method, a special packet called a token circulates through the ring.
• The possession of the token gives the station the right to access the channel and send its
data.

• When a station has some data to send, it waits until it receives the token from its
predecessor.

• It then holds the token and sends its data.

• When the station has no more data to send, it releases the token, passing it to the next
logical station in the ring.

• The station cannot send data until it receives the token again in the next round.

• In this process, when a station receives the token and has no data to send, it just passes the
token to the next station.
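• A toy sketch of one circulation of the token following the rules above; the station/queue data structure is an assumption made for illustration:

```python
from collections import deque

def token_ring_round(stations):
    """One circulation of the token around the logical ring. Each entry is
    (name, queue_of_frames); a station holding the token sends its queued
    frames and then releases the token to its successor."""
    for name, queue in stations:           # token travels predecessor -> successor
        if not queue:                      # nothing to send: just pass the token on
            print(f"{name} passes the token to its successor")
            continue
        while queue:                       # holds the token and sends until done
            print(f"{name} sends {queue.popleft()!r}")
        print(f"{name} releases the token")

token_ring_round([
    ("A", deque(["frame-1", "frame-2"])),
    ("B", deque()),
    ("C", deque(["frame-3"])),
])
```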
Token Ring
• Logical Ring
• Physical Ring Topology
• when a station sends the token to its successor, the token cannot be seen by
other stations; the successor is the next one in line
• Disadvantage: if one of the links (the medium between two adjacent stations)
fails, the whole system fails.
Token Ring
• Dual Ring Topology
• The dual ring topology uses a second (auxiliary) ring which operates in the reverse
direction compared with the main ring.
• The second ring is for emergencies only. If one of the links in the main ring fails, the
system automatically combines the two rings to form a temporary ring. After the failed
link is restored, the auxiliary ring becomes idle again.
• Note that for this topology to work, each station needs to have two transmitter ports and
two receiver ports. The high-speed Token Ring networks called FDDI (Fiber Distributed
Data Interface) and CDDI (Copper Distributed Data Interface) use this topology.
Token Ring
• Bus Ring Topology
• In the bus ring topology, also called a token bus, the stations are connected to a
single cable called a bus.
• They, however, make a logical ring, because each station knows the address of
its successor (and also predecessor for token management purposes).
• When a station has finished sending its data, it releases the token and inserts the
address of its successor in the token.
• Only the station with the address matching the destination address of the token
gets the token to access the shared media.
Token Ring
• Star Ring Topology
• In a star ring topology, the physical topology is a star.
• There is a hub, however, that acts as the connector. The wiring inside the hub makes
the ring; the stations are connected to this ring through the two wire connections.
• This topology makes the network less prone to failure because if a link goes down, it
will be bypassed by the hub and the rest of the stations can operate. Also adding and
removing stations from the ring is easier.
• This topology is still used in the Token Ring LAN designed by IBM.
Flow Control

• Flow control is a set of procedures that tells the sender how much data it can transmit
before it must wait for an acknowledgment from the receiver
• Any receiving device has a limited speed at which it can process incoming data and a
limited amount of memory in which to store incoming data.
• The receiving device must be able to inform the sending device before those limits are
reached and to request that the transmitting device send fewer frames or stop
temporarily.
• Incoming data must be checked and processed before they can be used. The rate of
such processing is often slower than the rate of transmission.
• For this reason, each receiving device has a block of memory, called a buffer, reserved
for storing incoming data until they are processed.
• If the buffer begins to fill up, the receiver must be able to tell the sender to halt transmission until
it is once again able to receive
Flow Control

• Stop and Wait


• At Sender
• Rule 1: Send one data packet at a time.
• Rule 2: Send the next packet only after
receiving acknowledgment for the previous.
• At Receiver
• Rule 1: Send an acknowledgement after receiving and consuming a data packet.
• Rule 2: The acknowledgement is sent only after the packet has been consumed,
which is what provides the flow control.
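• The two sender-side rules can be written as a short loop; send and recv_ack are hypothetical callbacks for the underlying link (a stop-and-wait ARQ variant would add a timer and retransmission to handle the loss cases discussed next):

```python
def stop_and_wait(frames, send, recv_ack):
    """Minimal sketch of the sender-side rules above. recv_ack blocks until
    the receiver's acknowledgment for the previous frame arrives."""
    for frame in frames:
        send(frame)       # Rule 1: send one data packet at a time
        recv_ack()        # Rule 2: next packet only after the ACK arrives
```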
Flow Control

• Problems Associated with Stop and Wait


• Lost Data
• Assume the sender transmits the data packet and it is lost.
The receiver has been waiting for the data for a long time.
• Because the data is not received by the receiver, it does not
transmit an acknowledgment.
• Since the sender does not receive an acknowledgment, it will
not send the next packet. This problem is caused by the loss
of data.
Flow Control

• Problems Associated with Stop and Wait


• Lost Acknowledgement
• Assume the sender sends the data, which is also received by
the receiver.
• The receiver sends an acknowledgment after receiving the
packet.
• In this situation, the acknowledgment is lost in the network.
• The sender does not send the next data packet because it has not received an
acknowledgement; under the stop-and-wait protocol, the next packet cannot be
transmitted until the preceding packet's acknowledgment is received.
Flow Control

• Problems Associated with Stop and Wait


• Delayed Acknowledgement/Data
• Assume the sender sends the data, which is also received by the receiver.
• The receiver then transmits the acknowledgment, which is received after the sender’s
timeout period.
• After a timeout on the sender side, a long-delayed acknowledgement might be
wrongly considered as acknowledgement of some other recent packet.
Flow Control

• Sliding Window Protocol


• Sliding window protocols are data link layer protocols for reliable and sequential
delivery of data frames.
• The sliding window is also used in Transmission Control Protocol.
• In this protocol, multiple frames can be sent by a sender at a time before receiving an
acknowledgment from the receiver.
• The term sliding window refers to the imaginary boxes to hold frames. Sliding window
method is also known as windowing.
Flow Control

• Working Principle
• In these protocols, the sender has a buffer called the sending window and the receiver has a
buffer called the receiving window.
• The size of the sending window determines the sequence numbers of the outbound frames. If
the sequence number of the frames is an n-bit field, then the range of sequence numbers that
can be assigned is 0 to 2^n − 1, and the size of the sending window is at most 2^n − 1. Thus, to
accommodate a sending window of size 2^n − 1, an n-bit sequence number is chosen.
• The sequence numbers are numbered modulo 2^n. For example, if the sending window size is
4, then the sequence numbers will be 0, 1, 2, 3, 0, 1, 2, 3, 0, 1, and so on. The number of bits in
the sequence number is 2, generating the binary sequence 00, 01, 10, 11.
• The size of the receiving window is the maximum number of frames that the receiver can
accept at a time. It determines the maximum number of frames that the sender can send before
receiving acknowledgment.
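• A small sketch of how the sending window is positioned for n-bit sequence numbers; the function and its arguments are illustrative, not part of any standard API:

```python
def window_positions(n_bits, window_size, frames_sent, acks_received):
    """Return the sequence numbers currently covered by the sending window
    and the next sequence number to use. Numbers are taken modulo 2**n_bits;
    the window starts just after the last acknowledged frame."""
    modulus = 2 ** n_bits
    first = acks_received % modulus                  # first unacknowledged number
    window = [(first + i) % modulus for i in range(window_size)]
    next_seq = frames_sent % modulus                 # next sequence number to use
    return window, next_seq

# 2-bit sequence numbers, window of size 4 (matching the example above):
print(window_positions(n_bits=2, window_size=4, frames_sent=2, acks_received=0))
# -> ([0, 1, 2, 3], 2): frames 0 and 1 sent, none acknowledged yet
print(window_positions(n_bits=2, window_size=4, frames_sent=4, acks_received=2))
# -> ([2, 3, 0, 1], 0): first two frames acknowledged, the window slides by 2
```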
Flow Control

• Example
• Suppose that we have sender
window and receiver window
each of size 4.
• So the sequence numbering of
both the windows will be
0,1,2,3,0,1,2 and so on.
• The following diagram shows
the positions of the windows
after sending the frames and
receiving acknowledgments.
• Error Control
• Stop and Wait ARQ
• Sliding Window ARQ
Stop and Wait ARQ

• https://www.youtube.com/watch?v=sdfQ2XbzAEU
Sliding Window ARQ

• Go-Back-N ARQ
• https://www.youtube.com/watch?v=QD3oCelHJ20

• Selective Repeat ARQ


• https://www.youtube.com/watch?v=WfIhQ3o2xow
Data Link Layer Protocol

• HDLC
• PPP
HDLC
• High Level Data Link Control is a protocol, which operates at the data link layer.
• The HDLC protocol embeds information in a data frame that allows devices to control data
flow and correct errors

• It is a bit-oriented protocol for communication over point-to-point and multipoint links.
• The role of HDLC is to ensure that the data has been received without any loss or errors
and in the correct order.
• Provides connection-oriented and connectionless service.
• It implements sliding-window ARQ for error control (Go-Back-N via the REJ frame and
Selective Repeat via the SREJ frame, described later).

HDLC
• Stations:
➢ Primary: sends data, controls the link with commands
➢ Secondary: receives data, responds to control messages
➢ Combined: can issue both commands and responses

• Link configuration:
➢ Unbalanced: one primary station, one or more secondary stations
➢ Balanced: two combined stations
Configurations and Transfer Modes
Normal Response Mode (NRM)
• station configuration is unbalanced.
• We have one primary station and
multiple secondary stations.
• A primary station can send
commands; a secondary station can
only respond.
• The NRM is used for both point-to-
point and multipoint links
Asynchronous Balanced Mode (ABM)
• The configuration is balanced.
• The link is point-to-point, and each
station can function as a primary and
a secondary (acting as peers)
Framing

• HDLC is a bit-oriented protocol where each frame contains up to six fields. The structure
varies according to the type of frame.
• Flag field. This field contains synchronization pattern 01111110, which identifies both the
beginning and the end of a frame.
• Address field. This field contains the address of the secondary station. If a primary station
created the frame, it contains a "to" address. If a secondary station creates the frame, it contains a
"from" address. The address field can be one byte or several bytes long, depending on the needs
of the network.
• Control field. The control field is one or two bytes used for flow and error control.
• Information field. The information field contains the user’s data from the network layer or
management information. Its length can vary from one network to another.
• FCS field. The frame check sequence (FCS) is the HDLC error detection field. It can contain
either a 2- or 4-byte CRC.
Types of HDLC Frames

• I-frames are used to carry user data and control information relating to user data
(piggybacking).
• S-frames are used only to transport control information.
• U-frames are reserved for system management.
Control Fields
• Control Field for I-Frames
• I-frames are designed to carry user data from the network layer. In addition,
they can include flow- and error-control information (piggybacking)
• First bit defines the type: 0 -> I-frame
• Next 3 bits, called N(S): define the sequence number (between 0 and 7) of the
frame
• Last 3 bits, called N(R): correspond to the acknowledgment number when
piggybacking is used
• P/F bit: The P/F field is a single bit with a dual purpose. It has meaning only when it is set to 1,
and it can mean poll or final.
• It means poll when the frame is sent by a primary station to a secondary (when the address field
contains the address of the receiver).
• It means final when the frame is sent by a secondary to a primary (when the address field contains
the address of the sender).
Control Fields
• Control Field for S-Frame
• Supervisory frames are used for flow and error control whenever piggybacking is either
impossible or inappropriate. S-frames do not have an information field.
• First 2 bits 10: means the frame is an S-frame
• Last 3 bits, called N(R): correspond to the acknowledgment number (ACK) or negative
acknowledgment number (NAK), depending on the type of S-frame
• 2 bits called code: used to define the type of S-frame itself
• Receive ready (RR)-00: acknowledges the receipt of a safe and sound frame or group of frames. In this case,
the value of the N(R) field defines the acknowledgment number
• Receive not ready (RNR)- 10: It acknowledges the receipt of a frame or group of frames, and it announces
that the receiver is busy and cannot receive more frames. the value of the N(R) field defines the
acknowledgment number
• Reject (REJ)-01: It is a NAK that can be used in Go-Back-N ARQ to improve the efficiency of the process
by informing the sender, before the sender timer expires, that the last frame is lost or damaged. The value
of N(R) is the negative acknowledgment number.
• Selective reject (SREJ)-11. This is a NAK frame used in Selective Repeat ARQ. The value of N(R) is the
negative acknowledgment number.
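• The control-field layouts above can be decoded mechanically; a sketch that takes the eight control bits in the order shown on the slides (wire-level bit ordering is a separate convention not covered here):

```python
def decode_control(bits):
    """Decode an HDLC control byte given as 8 bits in slide order:
    type bit(s), then N(S) or code, the P/F bit, and finally N(R)."""
    if bits[0] == 0:                                # I-frame: 0 N(S) P/F N(R)
        ns = int("".join(map(str, bits[1:4])), 2)
        nr = int("".join(map(str, bits[5:8])), 2)
        return {"type": "I", "N(S)": ns, "P/F": bits[4], "N(R)": nr}
    if bits[1] == 0:                                # S-frame: 1 0 code P/F N(R)
        code = {(0, 0): "RR", (1, 0): "RNR",
                (0, 1): "REJ", (1, 1): "SREJ"}[(bits[2], bits[3])]
        nr = int("".join(map(str, bits[5:8])), 2)
        return {"type": "S", "code": code, "P/F": bits[4], "N(R)": nr}
    return {"type": "U"}                            # U-frame: 1 1 ...

print(decode_control([0, 0, 1, 0, 1, 1, 0, 1]))   # I-frame, N(S)=2, P/F=1, N(R)=5
print(decode_control([1, 0, 0, 0, 0, 0, 1, 1]))   # S-frame RR, N(R)=3
```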
Control Fields
• Control Field for U-Frames
• Unnumbered frames are used to exchange session management and control
information between connected devices.
Example
• Node A asks for a connection with a set
asynchronous balanced mode (SABM) frame;
• node B gives a positive response with an
unnumbered acknowledgment (UA) frame.
• After these two exchanges, data can be
transferred between the two nodes
• After data transfer, node A sends a DISC
(disconnect) frame to release the connection;
• it is confirmed by node B responding with a
UA (unnumbered acknowledgment).
POINT-TO-POINT PROTOCOL (PPP)
• It is a communication protocol of the data link layer
• It is a byte-oriented protocol that is widely used in broadband communications
having heavy loads and high speeds
• PPP is used, for example, to connect home computers to the server of an Internet service provider.
• Services
• Defining the frame format of the data to be transmitted.
• Defining the procedure of establishing link between two points and exchange of data.
• Stating the method of encapsulation of network layer data in the frame.
• Stating authentication rules of the communicating devices.
• Providing address for network communication.
• Providing connections over multiple links.
• Supporting a variety of network layer protocols by providing a range of services.
POINT-TO-POINT PROTOCOL (PPP)

• Services Not Provided by PPP


• PPP does not provide flow control
• Lack of error control and sequence numbering may cause a packet to be received
out of order
• PPP does not provide a sophisticated addressing mechanism to handle frames in
a multipoint configuration
• Framing
Framing
• Flag. A PPP frame starts and ends with a 1-byte flag with the bit pattern 01111110
• Address. It is a constant value and set to 11111111 (broadcast address).
• Control. This field is set to the constant value 00000011 (imitating unnumbered frames
in HDLC).
• Protocol. defines what is being carried in the data field: either user data or other
information.
• Payload field. This field carries either the user data or other information. The data field
is a sequence of bytes with the default of a maximum of 1500 bytes; but this can be
changed during negotiation.
• FCS. The Frame Check Sequence (FCS) is simply a 2-byte or 4-byte standard CRC
• Byte Stuffing
• The flag in PPP is a byte that needs to be escaped whenever it appears in the data section of
the frame. The escape byte is 01111101; every time a flag-like pattern appears in the data,
this extra byte is stuffed to tell the receiver that the next byte is not a flag. Obviously, the
escape byte itself must also be stuffed with another escape byte.
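• A minimal sketch of the byte-stuffing rule above. Real PPP (RFC 1662) additionally XORs the escaped byte with 0x20; that detail is omitted here to stay with the slide's simplified description:

```python
FLAG, ESC = 0x7E, 0x7D    # 01111110 and 01111101

def byte_stuff(payload: bytes) -> bytes:
    """Insert the escape byte before any flag-like or escape-like data byte."""
    out = bytearray()
    for b in payload:
        if b in (FLAG, ESC):
            out.append(ESC)        # stuff an escape byte first
        out.append(b)
    return bytes(out)

def byte_unstuff(stuffed: bytes) -> bytes:
    """Inverse operation at the receiver: a byte following ESC is data."""
    out, escaped = bytearray(), False
    for b in stuffed:
        if not escaped and b == ESC:
            escaped = True         # next byte is data, not a flag
            continue
        out.append(b)
        escaped = False
    return bytes(out)

data = bytes([0x11, 0x7E, 0x22, 0x7D, 0x33])
assert byte_unstuff(byte_stuff(data)) == data
print(byte_stuff(data).hex())      # 117d7e227d7d33
```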
Transition Phases
• Dead state: there is no active carrier (at the physical layer) and the line is quiet
• When one of the two nodes starts the communication, the connection goes into the establish state
• If the two parties agree that they need authentication (for example, if they do not know each other), then
the system needs to do authentication (an extra step); otherwise, the parties can simply start
communication
• When a connection reaches Open state, the exchange of data packets can be started. The connection
remains in this state until one of the endpoints wants to terminate the connection. In this case, the
system goes to the terminate state. The system remains in this state until the carrier (physical-layer
signal) is dropped, which moves the system to the dead state again
Multiplexing
• Although PPP is a link-layer protocol, it uses another set of protocols to
establish the link, authenticate the parties involved, and carry the network-
layer data. Three sets of protocols are defined to make PPP powerful:
• Link Control Protocol (LCP)
• Two Authentication Protocols (APs)
• Several Network Control Protocols (NCPs)
Multiplexing
• Link Control Protocol
• The Link Control Protocol (LCP) is responsible for establishing, maintaining,
configuring, and terminating links.
• It also provides negotiation mechanisms to set options between the two endpoints. Both
endpoints of the link must reach an agreement about the options before the link can be
established
• All LCP packets are carried in the payload field of the PPP frame with the protocol field
set to 0xC021.
• The code field defines the type of LCP packet. There are 11 types of packets
Multiplexing

• Some LCP packets are used for link configuration during the establish phase.
• Some are used for link termination during the termination phase.
• The rest are used for link monitoring and debugging.
Multiplexing
• ID field holds a value that matches a request with a reply
• Length field defines the length of the entire LCP packet.
• Information field contains information such as options, needed for some LCP
packets
• There are many options that can be negotiated between the two endpoints. Options are
inserted in the information field of the configuration packets.
• information field is divided into three fields:
• option type
• option length
• option data
Multiplexing
• Authentication Protocols
• Authentication means validating the identity of a user who needs to access a set of resources.
• PPP has created two protocols for authentication: Password Authentication Protocol and Challenge
Handshake Authentication Protocol
• PAP
• The Password Authentication Protocol (PAP) is a simple authentication procedure with
a two-step process:
1. The user who wants to access a system sends an authentication identification (usually the user
name) and a password.
2. The system checks the validity of the identification and password and either accepts or denies
connection.
• When a PPP frame is carrying any PAP packets, the value of the protocol field is
0xC023
Multiplexing

• The three PAP packets are authenticate-request, authenticate-ack, and authenticate-nak.
• The first packet is used by the user to send the user name and password.
• The second is used by the system to allow access.
• The third is used by the system to deny access
Multiplexing
• CHAP

• The Challenge Handshake Authentication Protocol (CHAP) is a three-way


handshaking authentication protocol that provides greater security than PAP. In this
method, the password is kept secret; it is never sent online
1. The system sends the user a challenge packet containing a challenge value, usually a few bytes.

2. The user applies a predefined function that takes the challenge value and the user’s own password
and creates a result. The user sends the result in the response packet to the system.

3. The system does the same. It applies the same function to the password of the user (known to the
system) and the challenge value to create a result. If the result created is the same as the result sent
in the response packet, access is granted; otherwise, it is denied
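• A minimal sketch of steps 2 and 3, assuming MD5 as the "predefined function" (the choice standard CHAP, RFC 1994, specifies); including the packet identifier byte in the hash also follows that RFC:

```python
import hashlib, os

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    """Compute a CHAP result as MD5(identifier || secret || challenge).
    The slide only says 'a predefined function'; MD5 is the concrete
    choice made by standard CHAP (RFC 1994)."""
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

# The three-way handshake, with both sides sharing the password:
secret = b"top-secret"
challenge = os.urandom(16)                       # 1. system sends a challenge
response = chap_response(1, secret, challenge)   # 2. user computes and sends result
expected = chap_response(1, secret, challenge)   # 3. system recomputes locally
print("access granted" if response == expected else "access denied")
```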
Multiplexing
• Network Control Protocols
• PPP is a multiple-network-layer protocol. It can carry a network-layer data packet
from protocols defined by the Internet, OSI, Xerox, DECnet, AppleTalk, Novell, and
so on.
• To do this, PPP has defined a specific Network Control Protocol for each network
protocol.
• For example, IPCP (Internet Protocol Control Protocol) configures the link for
carrying IP data packets

• IPCP
• This protocol configures the link used to carry IP packets in the Internet
Multiplexing
• Data from the Network Layer
• After the network-layer configuration is completed by one of the NCP
protocols, the users can exchange data packets from the network layer.
• Here again, there are different protocol fields for different network layers.
For example,
• if PPP is carrying data from the IP network layer, the protocol field value is 0x0021
• if PPP is carrying data from the OSI network layer, the value of the protocol field is
0x0023
Multiplexing
• Multilink PPP
• PPP was originally designed for a single-channel point-to-point physical
link.
• The availability of multiple channels in a single point-to-point link
motivated the development of Multilink PPP