
Data-Link Layer

• Data Link Layer – Framing – Flow control – Error control – Data-Link Layer Protocols – HDLC – PPP – Media Access Control – Ethernet Basics – CSMA/CD – Virtual LAN – Wireless LAN (802.11)
• The Internet is a combination of networks glued together
by connecting devices (routers and switches).
• If a packet is to travel from a host to another host, it needs
to pass through these networks.
• Figure shows communication between Alice and Bob, but
we are now interested in communication at the data-link
layer.
• Communication at the data-link layer is made up of five
separate logical connections between the data-link layers in
the path.
• The data-link layer at Alice’s computer communicates
with the data-link layer at router R2.
• The data-link layer at router R2 communicates with
the data-link layer at router R4, and so on.
• Finally, the data-link layer at router R7 communicates
with the data-link layer at Bob’s computer.
• Communication between Alice’s computer and Bob’s
computer involves one data-link layer; communication
at each router involves two data-link layers.
Nodes and Links
• Although communication at the application, transport, and network layers is
end-to-end, communication at the data-link layer is node-to-node.
• A data unit from one point in the Internet needs to pass through many
networks (LANs and WANs) to reach another point.
• These LANs and WANs are connected by routers.
• It is customary to refer to the two end hosts and the routers as nodes and the
networks in between as links.
• The following is a simple representation of links and nodes when the path of
the data unit is only six nodes.
• The first node is the source host; the last node is the destination host.
• The other four nodes are routers.
• The first, third, and fifth links represent the three LANs; the second and
fourth links represent the two WANs.
Two Types of Links
• When two nodes are physically connected by a transmission medium such
as cable or air, the data-link layer controls how the medium is used.
• We can have a data-link layer that uses the whole capacity of the
medium; we can also have a data-link layer that uses only part of the
capacity of the link.
• We can have a point-to-point link or a broadcast link.
• In a point-to-point link, the link is dedicated to the two devices; in a
broadcast link, the link is shared between several pairs of devices.
• For example, when two friends use the traditional home phones to
chat, they are using a point-to-point link; when the same two friends
use their cellular phones, they are using a broadcast link (the air is
shared among many cell phone users).
Two Sublayers
• To better understand the functionality of and the
services provided by the link layer, we can divide the
data-link layer into two sublayers: data-link control
(DLC) and media access control (MAC).
• The data-link-control sublayer deals with all issues
common to both point-to-point and broadcast links;
• the MAC sublayer deals only with issues specific to
broadcast links.
• Two types of links at the data-link layer are shown in
Figure.
DATA-LINK CONTROL
• Data-link control (DLC) deals with
procedures for communication between two
adjacent nodes—node-to-node
communication—no matter whether the link
is dedicated or broadcast.
• Its functions include framing and error
control.
Framing
• Data transmission in the physical layer means moving bits in the form
of a signal from the source to the destination.
• The physical layer provides bit synchronization to ensure that the
sender and receiver use the same bit durations and timing.
• The data-link layer needs to pack bits into frames, so that each frame
is distinguishable from another.
• Our postal system practices a type of framing.
• The simple act of inserting a letter into an envelope separates one piece
of information from another; the envelope serves as the delimiter.
• In addition, each envelope defines the sender and receiver addresses,
which is necessary since the postal system is a many-to-many carrier
facility.
• Framing in the data-link layer separates a message from one source
to a destination by adding a sender address and a destination address.
• The destination address defines where the packet is to go; the sender
address helps the recipient acknowledge the receipt.
• Although the whole message could be packed in one frame, that is
not normally done.
• One reason is that a frame can be very large, making flow and error
control very inefficient.
• When a message is carried in one very large frame, even a single-bit
error would require the retransmission of the whole frame.
• When a message is divided into smaller frames, a single-bit error
affects only that small frame.
Frame Size
• Frames can be of fixed or variable size.
• In fixed-size framing, there is no need to define the boundaries of
the frames; the size itself can be used as a delimiter.
• An example of this type of framing is the ATM WAN, which uses
frames of fixed size called cells.
• Main discussion concerns variable-size framing, prevalent in local-
area networks.
• In variable-size framing, we need a way to define the end of one
frame and the beginning of the next.
• Historically, two approaches have been used for this purpose: a
character-oriented approach and a bit-oriented approach.
Character-Oriented Framing
• In character-oriented (or byte-oriented) framing, data to be
carried are 8-bit characters from a coding system such as ASCII.
• The header, which normally carries the source and destination
addresses and other control information, and the trailer, which
carries error detection redundant bits, are also multiples of 8 bits.
• To separate one frame from the next, an 8-bit (1-byte) flag is
added at the beginning and the end of a frame.
• The flag, composed of protocol-dependent special characters,
signals the start or end of a frame.
• Figure shows the format of a frame in a character-oriented
protocol.
• Character-oriented framing was popular when only text was exchanged by
the data-link layers.
• The flag could be selected to be any character not used for text
communication.
• Now, however, we send other types of information such as graphs, audio,
and video; any pattern used for the flag could also be part of the
information.
• If this happens, the receiver, when it encounters this pattern in the middle
of the data, thinks it has reached the end of the frame.
• To fix this problem, a byte-stuffing strategy was added to character-
oriented framing.
• In byte stuffing (or character stuffing), a special byte is added to the data
section of the frame when there is a character with the same pattern as the
flag.
• The data section is stuffed with an extra byte.
• This byte is usually called the escape character (ESC) and has a predefined
bit pattern.
• Whenever the receiver encounters the ESC character, it removes it from the
data section and treats the next character as data, not as a delimiting flag.
• Byte stuffing by the escape character allows the presence of the flag in the
data section of the frame, but it creates another problem.
• What happens if the text contains one or more escape characters followed by
a byte with the same pattern as the flag?
• The receiver removes the escape character, but keeps the next byte, which is
incorrectly interpreted as the end of the frame.
• To solve this problem, the escape characters that are part of the text must
also be marked by another escape character.
• In other words, if the escape character is part of the text, an extra one is
added to show that the second one is part of the text. Figure 3.5 shows the
situation.
• Byte stuffing and unstuffing
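The stuffing and unstuffing steps described above can be sketched in a few lines of Python. This is an illustrative sketch, not a specific protocol: the FLAG and ESC values (0x7E and 0x7D, borrowed from common HDLC/PPP practice) are assumptions.

```python
FLAG = 0x7E  # assumed flag byte, for illustration only
ESC = 0x7D   # assumed escape byte

def byte_stuff(data: bytes) -> bytes:
    """Insert ESC before any FLAG- or ESC-valued byte in the data section."""
    out = bytearray()
    for b in data:
        if b in (FLAG, ESC):
            out.append(ESC)  # stuff an escape byte first
        out.append(b)
    return bytes(out)

def byte_unstuff(stuffed: bytes) -> bytes:
    """Remove stuffed ESC bytes; the byte after an ESC is treated as data."""
    out = bytearray()
    i = 0
    while i < len(stuffed):
        if stuffed[i] == ESC:
            i += 1  # skip the escape; the next byte is plain data
        out.append(stuffed[i])
        i += 1
    return bytes(out)
```

Unstuffing is the exact inverse of stuffing, so a round trip returns the original data even when it contains flag or escape patterns.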
Bit-Oriented Framing
• In bit-oriented framing, the data section of a frame is a sequence of
bits to be interpreted by the upper layer as text, graphic, audio,
video, and so on.
• However, in addition to headers (and possible trailers), we still
need a delimiter to separate one frame from the other.
• Most protocols use a special 8-bit pattern flag, 01111110, as the
delimiter to define the beginning and end of the frame, as shown in
Figure 3.6.
• This flag can create the same type of problem we saw in the
character-oriented protocols.
• That is, if the flag pattern appears in the data, we need to somehow
inform the receiver that this is not the end of the frame.
• We do this by stuffing one single bit (instead of one byte) to
prevent the pattern from looking like a flag.
• The strategy is called bit stuffing.
• In bit stuffing, if a 0 and five consecutive 1 bits are
encountered, an extra 0 is added.
• This extra stuffed bit is eventually removed from the data by
the receiver.
• Note that the extra bit is added after one 0 followed by five
1s regardless of the value of the next bit.
• This guarantees that the flag field sequence does not
inadvertently appear in the frame.
• Bit stuffing is the process of adding one extra 0 whenever five
consecutive 1s follow a 0 in the data, so that the receiver does not
mistake the pattern 01111110 for a flag.
• Figure shows bit stuffing at the sender and bit removal at the receiver.
Note that even if we have a 0 after five 1s, we still stuff a 0.
• The 0 will be removed by the receiver.
• This means that if the flaglike pattern 01111110 appears in the data, it
will change to 011111010 (stuffed) and is not mistaken for a flag by
the receiver.
• The real flag 01111110 is not stuffed by the sender and is recognized
by the receiver.
Error Control
• Error control is both error detection and error
correction.
• It allows the receiver to inform the sender of any
frames lost or damaged in transmission and
coordinates the retransmission of those frames by
the sender.
• In the data-link layer, the term error control refers
primarily to methods of error detection and
retransmission (error correction is done using
retransmission of the corrupted frame).
Types of Errors
• Whenever bits flow from one point to another, they are subject
to unpredictable changes because of interference.
• This interference can change the shape of the signal.
• The term single-bit error means that only 1 bit of a given data
unit (such as a byte, character, or packet) is changed from 1 to 0
or from 0 to 1.
• The term burst error means that two or more bits in the data
unit have changed from 1 to 0 or from 0 to 1.
• Figure shows the effect of a single-bit error and burst error,
respectively, on a data unit.
• A burst error is more likely to occur than a single-bit
error because the duration of the noise signal is normally
longer than the duration of one bit, which means that
when noise affects data, it affects a set of bits.
• The number of bits affected depends on the data rate and
duration of noise.
• For example, if we are sending data at 1 kbps, a noise of
1/100 s can affect 10 bits; if we are sending data at 1
Mbps, the same noise can affect 10,000 bits.
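The arithmetic in the example above is simply data rate multiplied by noise duration. A one-line helper (the function name is illustrative) makes it concrete:

```python
def bits_affected(data_rate_bps: float, noise_duration_s: float) -> int:
    """Number of bit slots covered by a noise burst of the given duration."""
    return int(data_rate_bps * noise_duration_s)
```

At 1 kbps a 1/100 s burst covers 10 bits; at 1 Mbps the same burst covers 10,000 bits.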
Redundancy
• The central concept in detecting or correcting errors is
redundancy.
• To be able to detect or correct errors, we need to send
some extra bits with our data.
• These redundant bits are added by the sender and
removed by the receiver.
• Their presence allows the receiver to detect or correct
corrupted bits.
• Detection versus Correction
• The correction of errors is more difficult than the detection.
• In error detection, we are looking only to see if any error has occurred.
• The answer is a simple yes or no.
• We are not even interested in the number of corrupted bits.
• A single-bit error is the same for us as a burst error.
• In error correction, we need to know the exact number of bits that are corrupted
and, more importantly, their location in the message.
• The number of the errors and the size of the message are important factors.
• If we need to correct a single error in an 8-bit data unit, we need to consider eight
possible error locations; if we need to correct two errors in a data unit of the same
size, we need to consider 28 (combinations of 8 bits taken 2 at a time) possibilities.
• You can imagine the receiver’s difficulty in finding 10 errors in a data unit of
1000 bits.
• We concentrate on error detection.
• Coding
• Redundancy is achieved through various coding schemes.
• The sender adds redundant bits through a process that creates a
relationship between the redundant bits and the actual data bits.
• The receiver checks the relationships between the two sets of
bits to detect errors.
• The ratio of redundant bits to the data bits and the robustness of
the process are important factors in any coding scheme.
• We can divide coding schemes into two broad categories: block
coding and convolution coding.
• Here we concentrate on block coding; convolution coding is
more complex and beyond the scope.
Block Coding
• In block coding, we divide our message into blocks, each consisting of k bits, called
datawords.
• We add r redundant bits to each block to make the length n = k + r.
• The resulting n-bit blocks are called codewords.
• It is important to know that we have a set of datawords, each of size k, and a set of codewords,
each of size of n.
• With k bits, we can create a combination of 2^k datawords; with n bits, we can create a
combination of 2^n codewords.
• Since n > k, the number of possible codewords is larger than the number of possible
datawords.
• The block-coding process is one-to-one; the same dataword is always encoded as the same
codeword.
• This means that we have 2^n − 2^k codewords that are not used.
• We call these codewords invalid or illegal.
• If the receiver receives an invalid codeword, this indicates that the data were corrupted during
transmission.
Error Detection
• How can errors be detected by using block coding?
• If the following two conditions are met, the receiver can detect a change in
the original codeword.
1. The receiver has (or can find) a list of valid codewords.
2. The original codeword has changed to an invalid one.
• The sender creates codewords out of datawords by using a generator that
applies the rules and procedures of encoding.
• Each codeword sent to the receiver may change during transmission.
• If the received codeword is the same as one of the valid codewords, the
word is accepted; the corresponding dataword is extracted for use.
• If the received codeword is not valid, it is discarded.
• However, if the codeword is corrupted during transmission but the received
word still matches a valid codeword, the error remains undetected.
Hamming Distance
• One of the central concepts in coding for error control is the idea of the Hamming
distance.
• The Hamming distance between two words (of the same size) is the number of
differences between the corresponding bits.
• We show the Hamming distance between two words x and y as d(x, y).
• We may wonder why the Hamming distance is important for error detection.
• The reason is that the Hamming distance between the received codeword and the
sent codeword is the number of bits that are corrupted during transmission.
• For example, if the codeword 00000 is sent and 01101 is received, 3 bits are in error
and the Hamming distance between the two is d(00000, 01101) = 3.
• If the Hamming distance between the sent and the received codeword is not zero, the
codeword has been corrupted during transmission.
• The Hamming distance can easily be found if we apply the XOR operation (⊕) on
the two words and count the number of 1s in the result.
• Note that the Hamming distance is a value greater than or equal to zero.
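The XOR-and-count procedure above is a one-liner; a minimal sketch, with words given as bit strings of equal length:

```python
def hamming_distance(x: str, y: str) -> int:
    """d(x, y): compare the words bit by bit and count the differences
    (equivalent to XORing the words and counting the 1s in the result)."""
    assert len(x) == len(y), "Hamming distance is defined for equal-size words"
    return sum(a != b for a, b in zip(x, y))
```

The text's example checks out: d(00000, 01101) = 3.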
Minimum Hamming Distance for Error Detection
• In a set of codewords, the minimum Hamming distance is the smallest Hamming
distance between all possible pairs of codewords.
• Now let us find the minimum Hamming distance in a code if we want to be able to
detect up to s errors.
• If s errors occur during transmission, the Hamming distance between the sent
codeword and received codeword is s.
• If our system is to detect up to s errors, the minimum distance between the valid codes
must be (s + 1) so that the received codeword does not match a valid codeword.
• In other words, if the minimum distance between all valid codewords is (s + 1), the
received codeword cannot be erroneously mistaken for another codeword.
• The error will be detected.
• We need to clarify a point here: Although a code with dmin = s + 1 may be able to
detect more than s errors in some special cases, only s or fewer errors are guaranteed
to be detected.
• We can look at this criterion geometrically.
• Let us assume that the sent codeword x is at the center of a circle with
radius s.
• All received codewords that are created by 0 to s errors are points
inside the circle or on the perimeter of the circle.
• All other valid codewords must be outside the circle, as shown in
Figure 3.10.
• This means that dmin must be an integer greater than s or dmin = s + 1.
• Geometric concept explaining dmin in error detection
Example 3.3
• The minimum Hamming distance for our first code scheme (Table 3.1)
is 2.
• This code guarantees detection of only a single error.
• For example, if the third codeword (101) is sent and one error occurs,
the received codeword does not match any valid codeword.
• If two errors occur, however, the received codeword may match a
valid codeword and the errors are not detected.
Example 3.4
• A code scheme has a Hamming distance dmin = 4.
• This code guarantees the detection of up to three errors (dmin = s + 1, so s
= 3).
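The detection guarantee in the examples above can be checked mechanically: compute the minimum Hamming distance over all pairs of codewords, and the code is guaranteed to detect s = dmin − 1 errors. A minimal sketch, assuming Table 3.1 is the parity code {000, 011, 101, 110} (consistent with the third codeword 101 cited in Example 3.3):

```python
from itertools import combinations

def d_min(codewords) -> int:
    """Smallest Hamming distance over all possible pairs of codewords."""
    def d(x, y):
        return sum(a != b for a, b in zip(x, y))
    return min(d(x, y) for x, y in combinations(codewords, 2))

# Assumed contents of Table 3.1 (k = 2, n = 3 parity code)
code = ['000', '011', '101', '110']
guaranteed_detectable = d_min(code) - 1  # s = dmin - 1
```

For this code dmin = 2, so only a single error is guaranteed to be detected, exactly as Example 3.3 states.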
Linear Block Codes
• Almost all block codes used today belong to a subset of
block codes called linear block codes.
• The use of nonlinear block codes for error detection and
correction is not as widespread because their structure
makes theoretical analysis and implementation difficult.
• We therefore concentrate on linear block codes.
• For our purposes, a linear block code is a code in which the
exclusive OR (addition modulo-2) of two valid codewords
creates another valid codeword.
Example 3.5
• The code in Table 3.1 is a linear block code because the result of XORing
any codeword with any other codeword is a valid codeword.
• For example, the XORing of the second and third codewords creates the
fourth one.
Minimum Distance for Linear Block Codes
• The minimum Hamming distance for a linear block code is simple to
find.
• It is the number of 1s in the nonzero valid codeword with the smallest
number of 1s.
Example 3.6
• In our first code (Table 3.1), the numbers of 1s in the nonzero codewords
are 2, 2, and 2. So the minimum Hamming distance is dmin = 2.
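Both properties above are easy to verify in code: linearity means the XOR of any two valid codewords is again valid, and for a linear code dmin equals the smallest number of 1s in a nonzero codeword. A sketch, again assuming Table 3.1 is the code {000, 011, 101, 110}:

```python
def is_linear(codewords) -> bool:
    """Closure check: the XOR of any two valid codewords is a valid codeword."""
    cs = {int(c, 2) for c in codewords}
    return all(a ^ b in cs for a in cs for b in cs)

def d_min_linear(codewords) -> int:
    """For a linear block code, dmin is the minimum weight (number of 1s)
    of a nonzero valid codeword."""
    return min(bin(int(c, 2)).count('1') for c in codewords if int(c, 2) != 0)
```

For the assumed Table 3.1 code, the nonzero codewords each have weight 2, so dmin = 2, matching Example 3.6.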
Parity-Check Code
• The most familiar error-detecting code is the parity-check code.
• This code is a linear block code.
• In this code, a k-bit dataword is changed to an n-bit codeword where
n = k + 1.
• The extra bit, called the parity bit, is selected to make the total
number of 1s in the codeword even.
• The minimum Hamming distance for this category is dmin = 2, which
means that the code is a single-bit error-detecting code.
• Our first code (Table 3.1) is a parity-check code (k = 2 and n = 3).
• The code in Table 3.2 is also a parity-check code with k = 4 and n =
5.
• The generator computes the parity bit as the modulo-2 sum of the dataword bits: if the number of 1s is even, the result is 0; if the number of 1s is odd, the result is 1.
• In both cases, the total number of 1s in the codeword is even.
• The sender sends the codeword, which may be corrupted during transmission.
• The receiver receives a 5-bit word.
• The checker at the receiver does the same thing as the generator in the sender with one
exception: The addition is done over all 5 bits.
• The result, which is called the syndrome, is just 1 bit.
• The syndrome is 0 when the number of 1s in the received codeword is even; otherwise,
it is 1.
s0 = b3 + b2 + b1 + b0 + q0 (modulo-2)
• The syndrome is passed to the decision logic analyzer.
• If the syndrome is 0, there is no detectable error in the received codeword; the data
portion of the received codeword is accepted as the dataword.
• If the syndrome is 1, the data portion of the received codeword is discarded.
• The dataword is not created.
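The even-parity generator and the syndrome check described above can be sketched directly; this follows the s0 = b3 + b2 + b1 + b0 + q0 (modulo-2) relation in the text, generalized to any dataword length:

```python
def parity_encode(dataword: str) -> str:
    """Append an even-parity bit q0 so the total number of 1s is even."""
    q0 = dataword.count('1') % 2  # modulo-2 sum of the dataword bits
    return dataword + str(q0)

def parity_check(codeword: str):
    """Compute the syndrome s0 over all bits; accept the dataword only
    if s0 == 0, otherwise discard (return None: no dataword created)."""
    s0 = codeword.count('1') % 2
    if s0 == 0:
        return codeword[:-1]  # data portion accepted as the dataword
    return None
```

Note the limitation stated above: any even number of bit flips leaves the syndrome at 0, so such errors go undetected.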
Cyclic Redundancy Check (CRC)
• Figure 3.12 shows one possible design for the CRC encoder and decoder.
• In the encoder, the dataword has k bits (4 here); the codeword has n bits (7
here).
• The size of the dataword is augmented by adding n − k (3 here) 0s to the
right-hand side of the word.
• The n-bit result is fed into the generator.
• The generator uses a divisor of size n − k + 1 (4 here), predefined and agreed
upon.
• The generator divides the augmented dataword by the divisor (modulo-2
division).
• The quotient of the division is discarded; the remainder (r2r1r0) is appended
to the dataword to create the codeword.
• The decoder receives the codeword (possibly corrupted in transition).
• A copy of all n bits is fed to the checker, which is a replica of the generator.
• The remainder produced by the checker is a syndrome of n – k (3 here) bits,
which is fed to the decision logic analyzer.
• The analyzer has a simple function.
• If the syndrome bits are all 0s, the 4 leftmost bits of the codeword are
accepted as the dataword (interpreted as no error); otherwise, the 4 bits are
discarded (error).
Encoder
• The encoder takes a dataword and augments it with n – k number of 0s.
• It then divides the augmented dataword by the divisor, as shown in Figure.
• Figure 3.13 Division in CRC encoder
• The process of modulo-2 binary division is the same as the familiar division
process we use for decimal numbers.
• However, addition and subtraction in this case are the same; we use the XOR
operation to do both.
• As in decimal division, the process is done step by step.
• In each step, a copy of the divisor is XORed with the 4 bits of the dividend.
• The result of the XOR operation (remainder) is 3 bits (in this case), which is
used for the next step after 1 extra bit is pulled down to make it 4 bits long.
• There is one important point we need to remember in this type of division.
• If the leftmost bit of the dividend (or the part used in each step) is 0, the step
cannot use the regular divisor; we need to use an all-0s divisor.
• When there are no bits left to pull down, we have a result.
• The 3-bit remainder forms the check bits (r2, r1, and r0).
• They are appended to the dataword to create the codeword.
Decoder
• The codeword can change during transmission.
• The decoder does the same division process as the encoder.
• The remainder of the division is the syndrome.
• If the syndrome is all 0s, there is no error with a high probability; the
dataword is separated from the received codeword and accepted.
• Otherwise, everything is discarded.
• Figure 3.14 shows two cases:
• Figure 3.14a shows the value of the syndrome when no error has occurred;
the syndrome is 000.
• Figure 3.14b shows the case in which there is a single error.
• The syndrome is not all 0s (it is 011).
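The modulo-2 division performed by the encoder and the checker can be sketched as follows. The divisor 1011 comes from the text; the dataword 1001 used in the test is an assumption, chosen because it reproduces the syndromes quoted above (000 for no error, 011 for the single-bit error). Bits are strings of '0'/'1'.

```python
def crc_remainder(bits: str, divisor: str) -> str:
    """Modulo-2 division: XOR the divisor into the dividend step by step.
    When the leftmost bit of the current step is 0, the all-0s divisor
    is used, i.e. the step is skipped."""
    work = list(bits)
    dlen = len(divisor)
    for i in range(len(bits) - dlen + 1):
        if work[i] == '1':
            for j in range(dlen):
                work[i + j] = str(int(work[i + j]) ^ int(divisor[j]))
    return ''.join(work[-(dlen - 1):])  # the n-k remainder bits

def crc_encode(dataword: str, divisor: str) -> str:
    """Augment the dataword with n-k 0s, divide, append the remainder."""
    augmented = dataword + '0' * (len(divisor) - 1)
    return dataword + crc_remainder(augmented, divisor)

def crc_check(codeword: str, divisor: str) -> bool:
    """A syndrome of all 0s means no detectable error."""
    return set(crc_remainder(codeword, divisor)) <= {'0'}
```

With divisor 1011, the dataword 1001 yields remainder 110 and codeword 1001110; flipping one bit (e.g. 1000110) produces the nonzero syndrome 011.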
Divisor
• You may be wondering how the divisor 1011 is chosen.
• This depends on the expectation we have from the code.
• Some of the standard divisors used in networking are shown
in Table 3.4.
• The number in the name of the divisor (for example, CRC-
32) refers to the degree of the polynomial (the highest
power) representing the divisor.
• The number of bits is always one more than the degree of
the polynomial.
• For example, CRC-8 has 9 bits and CRC-32 has 33 bits.
Advantages of Cyclic Codes
• Cyclic codes can easily be implemented in hardware and software.
• They are especially fast when implemented in hardware.
• This has made cyclic codes a good candidate for many networks.
Checksum
• Checksum is an error-detecting technique that can be applied to a
message of any length.
• In the Internet, the checksum technique is mostly used at the network
and transport layers rather than the data-link layer.
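As a sketch of the idea, the 16-bit one's-complement checksum used by IP, UDP, and TCP sums the message in 16-bit words, wrapping any carry back into the sum, and complements the result:

```python
def internet_checksum(data: bytes) -> int:
    """16-bit one's-complement checksum (a sketch of the Internet
    checksum used at the network and transport layers)."""
    if len(data) % 2:
        data += b'\x00'  # pad to a whole number of 16-bit words
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # wrap the carry around
    return ~total & 0xFFFF
```

The receiver repeats the sum over the message plus the checksum field; a result of 0 after complementing means no detectable error.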
Two DLC Protocols
• Having finished presenting all issues related to the
DLC sublayer, we now discuss two DLC protocols that
actually implement those concepts.
• The first, High-Level Data-Link Control, is the base
of many protocols that have been designed for LANs.
• The second, Point-to-Point Protocol, is derived from
HDLC and is used for point-to-point links.
• High-Level Data-Link Control
• High-Level Data-Link Control (HDLC) is a bit-oriented protocol for
communication over point-to-point and multipoint links.
• Configurations and Transfer Modes
• HDLC provides two common transfer modes that can be used in
different configurations: normal response mode (NRM) and
asynchronous balanced mode (ABM).
• In NRM, the station configuration is unbalanced.
• We have one primary station and multiple secondary stations.
• A primary station can send commands; a secondary station can only
respond.
• The NRM is used for both point-to-point and multipoint links, as shown
in Figure.
Frames
• To provide the flexibility necessary to support all the options possible in the modes
and configurations just described, HDLC defines three types of frames: information
frames (I-frames), supervisory frames (S-frames), and unnumbered frames (U-frames).
• Each type of frame serves as an envelope for the transmission of a different type of
message.
• I-frames are used to transport user data and control information relating to user data
(piggybacking).
• S-frames are used only to transport control information.
• U-frames are reserved for system management.
• Information carried by U-frames is intended for managing the link itself.
• Each frame in HDLC may contain up to six fields, as shown in Figure : a beginning
flag field, an address field, a control field, an information field, a frame check
sequence (FCS) field, and an ending flag field.
• In multiple-frame transmissions, the ending flag of one frame can serve as the
beginning flag of the next frame.
Control Field for I-Frames
• I-frames are designed to carry user data from the network layer.
• In addition, they can include flow- and error-control information
(piggybacking).
• The subfields in the control field are used to define these functions.
• The first bit defines the type.
• If the first bit of the control field is 0, this means the frame is an I-frame.
• The next 3 bits, called N(S), define the sequence number of the frame.
• Note that with 3 bits, we can define a sequence number between 0 and
7.
• The last 3 bits, called N(R), correspond to the acknowledgment number
when piggybacking is used.
• The single bit between N(S) and N(R) is called the P/F bit.
• The P/F field is a single bit with a dual purpose.
• It has meaning only when it is set (bit = 1) and can mean
poll or final.
• It means poll when the frame is sent by a primary station to
a secondary station (when the address field contains the
address of the receiver).
• It means final when the frame is sent by a secondary
station to a primary station (when the address field
contains the address of the sender).
Control Field for S-Frames
• Supervisory frames are used for flow and error control whenever
piggybacking is either impossible or inappropriate.
• S-frames do not have information fields.
• If the first 2 bits of the control field are 10, this means the frame is
an S-frame.
• The last 3 bits, called N(R), correspond to the acknowledgment
number (ACK) or negative acknowledgment number (NAK)
depending on the type of S-frame.
• The 2 bits called code are used to define the type of S-frame itself.
With 2 bits, we can have four types of S-frames:
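The control-field layouts for I-frames and S-frames described above can be sketched as a parser. This is illustrative only: it assumes the left-to-right bit layout drawn in the slides (type bits, then N(S) or code, then P/F, then N(R)); real HDLC implementations may number and transmit bits differently. The four S-frame code names (RR, REJ, RNR, SREJ) are the standard ones and are an addition to the text.

```python
def parse_control(byte: int) -> dict:
    """Decode an 8-bit HDLC control field, assuming a left-to-right
    layout as in the slides' figures."""
    bits = f'{byte:08b}'
    if bits[0] == '0':                        # first bit 0: I-frame
        return {'type': 'I', 'ns': int(bits[1:4], 2),
                'pf': int(bits[4]), 'nr': int(bits[5:], 2)}
    if bits[:2] == '10':                      # first two bits 10: S-frame
        codes = {'00': 'RR', '01': 'REJ', '10': 'RNR', '11': 'SREJ'}
        return {'type': 'S', 'code': codes[bits[2:4]],
                'pf': int(bits[4]), 'nr': int(bits[5:], 2)}
    return {'type': 'U'}                      # 11 prefix: U-frame
```

For example, 00101011 decodes as an I-frame with N(S) = 2, P/F set, and N(R) = 3.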
Control Field for U-Frames
• Unnumbered frames are used to exchange session
management and control information between connected
devices.
• Unlike S-frames, U-frames contain an information field, but
one used for system management information, not user data.
• As with S-frames, however, much of the information carried
by U-frames is contained in codes included in the control
field.
• U-frame codes are divided into two sections: a 2-bit prefix
before the P/F bit and a 3-bit suffix after the P/F bit.
• Together, these two segments (5 bits) can be used to create up
to 32 different types of U-frames.
Point-to-Point Protocol
• One of the most common protocols for point-to-point access is the
Point-to-Point Protocol (PPP).
• Today, millions of Internet users who need to connect their home
computers to the server of an Internet service provider use PPP.
• The majority of these users have a traditional modem; they are
connected to the Internet through a telephone line, which provides
the services of the physical layer.
• But to control and manage the transfer of data, there is a need for
point-to-point access at the data-link layer.
• PPP is by far the most common.
• Services
• The designers of PPP have included several services to make it suitable for a point-to-
point protocol, but have ignored some traditional services to make it simple.
• Services Provided by PPP
• PPP defines the format of the frame to be exchanged between devices.
• It also defines how two devices can negotiate the establishment of the link and the
exchange of data.
• PPP is designed to accept payloads from several network layers [not only Internet
Protocol (IP)].
• Authentication is also provided in the protocol, but it is optional.
• The new version of PPP, called Multilink PPP, provides connections over multiple links.
• One interesting feature of PPP is that it provides network address configuration.
• This is particularly useful when a home user needs a temporary network address to
connect to the Internet.
• Services Not Provided by PPP
• PPP does not provide flow control.
• A sender can send several frames one after another with no
concern about overwhelming the receiver.
• PPP has a very simple mechanism for error control.
• A CRC field is used to detect errors.
• If the frame is corrupted, it is silently discarded; the upper-
layer protocol needs to take care of the problem.
• Lack of error control and sequence numbering may cause a
packet to be received out of order.
• PPP does not provide a sophisticated addressing mechanism
to handle frames in a multipoint configuration.
Byte Stuffing
• Because PPP is a byte-oriented protocol, the flag in PPP is
a byte that needs to be escaped whenever it appears in the
data section of the frame.
• The escape byte is 01111101; every time a flag-like pattern appears
in the data, this extra byte is stuffed to tell the receiver that the
next byte is not a flag.
• Obviously, the escape byte itself should be stuffed with
another escape byte.
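The escaping described above, plus the flag delimiters, can be sketched as frame assembly. This follows the simplified scheme in the text (prefix an escape byte); as a caveat, real PPP (RFC 1662) additionally XORs each escaped byte with 0x20, which this sketch omits.

```python
PPP_FLAG = 0x7E  # 01111110, the PPP flag byte
PPP_ESC = 0x7D   # 01111101, the PPP escape byte

def ppp_frame_payload(payload: bytes) -> bytes:
    """Escape flag- and escape-valued bytes in the data section, then
    delimit the result with flag bytes (simplified scheme, no XOR 0x20)."""
    body = bytearray()
    for b in payload:
        if b in (PPP_FLAG, PPP_ESC):
            body.append(PPP_ESC)  # stuff the escape byte first
        body.append(b)
    return bytes([PPP_FLAG]) + bytes(body) + bytes([PPP_FLAG])
```

A payload byte equal to the flag thus travels as escape-plus-byte, so the receiver never mistakes it for the end of the frame.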
Transition Phases
• A PPP connection goes through phases that can be shown in a
transition phase diagram.
• The transition diagram starts with the dead state.
• In this state, there is no active carrier (at the physical layer)
and the line is quiet.
• When one of the two nodes starts the communication, the
connection goes into the establish state.
• In this state, options are negotiated between the two parties.
• If the two ends agree with authentication, the system goes to
the authenticate state; otherwise, the system goes to the
network state.
• The Link Control Protocol packets, discussed shortly, are used for
this purpose.
• Several packets may be exchanged here.
• Data transfer takes place in the open state.
• When a connection reaches this state, the exchange of data packets
can be started.
• The connection remains in this state until one of the endpoints
wants to terminate the connection.
• In this case, the system goes to the terminate state.
• The system remains in this state until the carrier (physical-layer
signal) is dropped, which moves the system to the dead state again.
Multiplexing
• Although PPP is a link-layer protocol, it uses another set of
protocols to establish the link, authenticate the parties involved, and
carry the network-layer data.
• Three sets of protocols are defined to make PPP powerful: the Link
Control Protocol (LCP), two Authentication Protocols (APs), and
several Network Control Protocols (NCPs).
• At any moment, a PPP packet can carry data from one of these
protocols in its data field, as shown in Figure.
• Note that there is one LCP, two APs, and several NCPs.
• Data may also come from several different network layers.
Link Control Protocol
• The Link Control Protocol (LCP) is responsible for
establishing, maintaining, configuring, and terminating
links.
• It also provides negotiation mechanisms to set options
between the two endpoints.
• Both endpoints of the link must reach an agreement about
the options before the link can be established.
Authentication Protocols
• Authentication plays a very important role in PPP because
PPP is designed for use over dial-up links where verification
of user identity is necessary.
• Authentication means validating the identity of a user who
needs to access a set of resources.
• PPP has created two protocols for authentication: Password
Authentication Protocol and Challenge Handshake
Authentication Protocol.
• Note that these protocols are used during the authentication
phase.
Network Control Protocols
• PPP is a multiple-network-layer protocol.
• It can carry a network-layer data packet from protocols defined by the
Internet, OSI, Xerox, DECnet, AppleTalk, Novell, and so on.
• To do this, PPP has defined a specific Network Control Protocol for
each network protocol.
• One NCP protocol is the Internet Protocol Control Protocol (IPCP).
• This protocol configures the link used to carry IP data packets in the
Internet. IPCP is especially of interest to us.
• Xerox CP does the same for the Xerox protocol data packets, and so on.
• Note that none of the NCP packets carry network-layer data; they just
configure the link at the network layer for the incoming data.
Data from the Network Layer
• After the network-layer configuration is completed by one of the NCP
protocols, users can exchange data packets from the network layer.
• Here again, there are different protocol fields for different network
layers.
• For example, if PPP is carrying data from the IP network layer, the
field value is (0021)16.
• If PPP is carrying data from the OSI network layer, the protocol field
value is (0023)16, and so on.
Multilink PPP
• PPP was originally designed for a single-channel point-to-point physical
link.
• The availability of multiple channels in a single point-to-point link
motivated the development of Multilink PPP.
• In this case, a logical PPP frame is divided into several actual PPP frames.
• A segment of the logical frame is carried in the payload of an actual PPP
frame, as shown in Figure 3.22.
• To show that the actual PPP frame is carrying a fragment of a logical PPP
frame, the protocol field is set to (003d)16.
• This new development adds complexity.
• For example, a sequence number needs to be added to the actual PPP
frame to show a fragment’s position in the logical frame.
