CS3591 Computer Networks Lecture Notes 1
COURSE OBJECTIVES:
To understand the concept of layering in networks.
To know the functions of protocols of each layer of TCP/IP protocol suite.
To visualize the end-to-end flow of information.
To learn the functions of network layer and the various routing protocols
To familiarize the functions and protocols of the Transport layer
UNIT IV ROUTING 7
Routing and protocols: Unicast routing - Distance Vector Routing - RIP - Link State Routing – OSPF
– Path-vector routing - BGP - Multicast Routing: DVMRP – PIM.
TOTAL:45 PERIODS
TEXT BOOKS
1. James F. Kurose, Keith W. Ross, Computer Networking, A Top-Down Approach Featuring
the Internet, Eighth Edition, Pearson Education, 2021.
2. Behrouz A. Forouzan, Data Communications and Networking with TCP/IP Protocol Suite,
Sixth Edition TMH, 2022
CS 3591 COMPUTER NETWORKS
INTRODUCTION TO NETWORKS
A network is a set of devices (often referred to as nodes) connected by
communication links.
A node can be a computer, printer, or any other device capable of sending or
receiving data generated by other nodes on the network.
When we communicate, we are sharing information. This sharing can be local or
remote.
CHARACTERISTICS OF A NETWORK
The effectiveness of a network depends on three characteristics.
TRANSMISSION MODES
o The way in which data is transmitted from one device to another device is known as
transmission mode.
o The transmission mode is also known as the communication mode.
o Each communication channel has a direction associated with it, and transmission media
provide the direction. Therefore, the transmission mode is also known as a directional
mode.
o The transmission mode is defined in the physical layer.
o Simplex Mode
o Half-duplex Mode
o Full-duplex mode (Duplex Mode)
SIMPLEX MODE
o In simplex mode, the communication is unidirectional, as on a one-way street:
only one of the two stations on a link can transmit, and the other can only receive.
o The station can utilize the entire bandwidth of the communication channel in that
one direction, so more data can be transmitted at a time.
HALF-DUPLEX MODE
o In a Half-duplex channel, direction can be reversed, i.e., the station can transmit and
receive the data as well.
o Messages flow in both the directions, but not at the same time.
o The entire bandwidth of the communication channel is utilized in one direction at a time.
o In half-duplex mode, it is possible to perform the error detection, and if any error
occurs, then the receiver requests the sender to retransmit the data.
o A Walkie-talkie is an example of the Half-duplex mode.
o In a Walkie-talkie, one party speaks and the other party listens. After a pause, the other
speaks and the first party listens. Speaking simultaneously creates a distorted sound
which cannot be understood.
FULL-DUPLEX MODE
o In Full duplex mode, the communication is bi-directional, i.e., the data flow in
both the directions.
o Both the stations can send and receive the message simultaneously.
o Full-duplex mode has two simplex channels. One channel has traffic moving in
one direction, and another channel has traffic flowing in the opposite direction.
o The Full-duplex mode is the fastest mode of communication between devices.
o The most common example of the full-duplex mode is a Telephone network.
When two people are communicating with each other by a telephone line, both can
talk and listen at the same time.
Advantage of Full-duplex mode:
o Both the stations can send and receive the data at the same time.
Send/Receive:
o Simplex - A device can only send data but cannot receive it, or it can only receive data but cannot send it.
o Half-duplex - Both devices can send and receive data, but only one at a time.
o Full-duplex - Both devices can send and receive data simultaneously.
Line configuration refers to the way two or more communication devices attach to a link. A
link is a communications pathway that transfers data from one device to another. There are two
possible line configurations:
i. Point to Point (PPP): Provides a dedicated Communication link between two devices. It
is simple to establish. The most common example for Point-to-Point connection is a
computer connected by telephone line. We can connect the two devices by means of a
pair of wires or using a microwave or satellite link.
ii. MultiPoint: It is also called Multidrop configuration. In this connection two or more
devices share a single link. There are two kinds of Multipoint Connections.
Temporal (Time) Sharing: If users must take turns using the link, then
it’s called Temporally shared or Time Shared Line Configuration.
NETWORK TYPES
A computer network is a group of computers linked to each other that enables the
computer to communicate with another computer and share their resources, data, and
applications.
A computer network can be categorized by their size.
A computer network is mainly of three types:
1. Local Area Network (LAN)
2. Wide Area Network (WAN)
3. Metropolitan Area Network (MAN)
INTERNETWORK
Extranet: An extranet is used for information sharing. Access to the extranet is restricted
to only those users who have login credentials. An extranet is the lowest level of
internetworking. It can be categorized as MAN, WAN or other computer networks. An
extranet cannot consist of a single LAN; it must have at least one connection to an
external network.
Intranet: An intranet belongs to an organization and is accessible only by the
organization's employees or members. The main aim of the intranet is to share
information and resources among the organization's employees. An intranet provides the
facility to work in groups and for teleconferences.
PROTOCOL LAYERING
In networking, a protocol defines the rules that both the sender and receiver and all
intermediate devices need to follow to be able to communicate effectively.
A protocol provides a communication service that the process uses to exchange
messages.
When communication is simple, we may need only one simple protocol.
When the communication is complex, we may need to divide the task between different
layers, in which case we need a protocol at each layer, or protocol layering.
One advantage of protocol layering is that it allows us to separate the services from the implementation.
A layer needs to be able to receive a set of services from the lower layer and to give the
services to the upper layer.
Any modification in one layer will not affect the other layers.
Basic Elements of Layered Architecture
Service: It is a set of actions that a layer provides to the higher layer.
Protocol: It defines a set of rules that a layer uses to exchange the information with peer
entity. These rules mainly concern about both the contents and order of the messages
used.
Interface: It is a way through which the message is transferred from one layer to
another layer.
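The service/protocol/interface split can be illustrated with a toy sketch. The layer names and header strings below are invented purely for the demo and do not correspond to a real protocol stack: each layer offers a "send" service to the layer above and adds its own header before handing the message down through the interface.

```python
# Toy illustration of protocol layering: each layer provides a service to the
# layer above it and prepends its own (made-up) header before passing the
# message down to the layer below.
def application_send(message):
    return transport_send("APP|" + message)   # application-layer header

def transport_send(segment):
    return network_send("TCP|" + segment)     # transport-layer header

def network_send(packet):
    return "IP|" + packet                     # network-layer header

wire_data = application_send("hello")
print(wire_data)   # IP|TCP|APP|hello
```

Because each layer only calls the service of the layer directly below it, any layer's implementation can be swapped without affecting the others, which is the point made above.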
Protocol Graph
The set of protocols that make up a network system is called a protocol graph.
The nodes of the graph correspond to protocols, and the edges represent a dependence
relation.
For example, the Figure below illustrates a protocol graph consists of protocols RRP
(Request/Reply Protocol) and MSP (Message Stream Protocol) implement two
different types of process-to-process channels, and both depend on the HHP (Host-to-
Host Protocol) which provides a host-to-host connectivity service
OSI MODEL
o OSI stands for Open System Interconnection.
o It is a reference model that describes how information from a software application in one
computer moves through a physical medium to the software application in another
computer.
o OSI consists of seven layers, and each layer performs a particular network function.
o OSI model was developed by the International Organization for Standardization (ISO) in
1984, and it is now considered an architectural model for inter-computer
communications.
o OSI model divides the whole task into seven smaller and manageable tasks. Each layer
is assigned a particular task.
o Each layer is self-contained, so that the task assigned to each layer can be performed
independently.
ORGANIZATION OF THE OSI LAYERS
FUNCTIONS OF THE OSI LAYERS
1. PHYSICAL LAYER
The physical layer coordinates the functions required to transmit a bit stream over a physical
medium.
The physical layer is concerned with the following functions:
Physical characteristics of interfaces and media - The physical layer defines the
characteristics of the interface between the devices and the transmission medium.
Representation of bits - To transmit the stream of bits, it must be encoded to signals.
The physical layer defines the type of encoding.
Signals: It determines the type of the signal used for transmitting the information.
Data Rate or Transmission rate - The number of bits sent each second –is also defined
by the physical layer.
Synchronization of bits - The sender and receiver must be synchronized at the bit level.
Their clocks must be synchronized.
Line Configuration - In a point-to-point configuration, two devices are connected
together through a dedicated link. In a multipoint configuration, a link is shared between
several devices.
Physical Topology - The physical topology defines how devices are connected to make
a network. Devices can be connected using a mesh, bus, star or ring topology.
Transmission Mode - The physical layer also defines the direction of transmission
between two devices: simplex, half-duplex or full-duplex.
3. NETWORK LAYER
This layer is responsible for the delivery of packets from source to destination.
It determines the best path to move data from source to the destination based on the
network conditions, the priority of service, and other factors.
The other responsibilities of this layer are
Logical addressing - If a packet passes the network boundary, we need another
addressing system for source and destination called logical address. This addressing is
used to identify the device on the internet.
Routing – Routing is the major component of the network layer, and it determines the
best optimal path out of the multiple paths from source to the destination.
4. TRANSPORT LAYER
It is responsible for process-to-process delivery, that is, source-to-destination
(end-to-end) delivery of the entire message. It also checks whether the message arrives
in order or not.
The other responsibilities of this layer are
Port addressing / Service Point addressing - The header includes an address called
port address / service point address. This layer gets the entire message to the correct
process on that computer.
Segmentation and reassembly - The message is divided into segments and each
segment is assigned a sequence number. These numbers are arranged correctly on the
arrival side by this layer.
Connection control - This can either be connectionless or connection oriented.
The connectionless treats each segment as an individual packet and delivers to
the destination.
The connection-oriented mode makes a connection with the destination before the
delivery. After the delivery, the connection is terminated.
Flow control - The transport layer is also responsible for flow control, but it is
performed end-to-end rather than across a single link.
Error Control - Error control is performed end-to-end rather than across the single
link.
5. SESSION LAYER
This layer establishes, manages and terminates connections between applications. The
other responsibilities of this layer are
Dialog control - Session layer acts as a dialog controller that creates a dialog between
two processes or we can say that it allows the communication between two processes
which can be either half-duplex or full-duplex.
Synchronization- Session layer adds some checkpoints when transmitting the data in a
sequence. If some error occurs in the middle of the transmission of data, then the
transmission will take place again from the checkpoint. This process is known as
Synchronization and recovery.
6. PRESENTATION LAYER
It is concerned with the syntax and semantics of information exchanged between two systems.
The other responsibilities of this layer are
Translation – Different computers use different encoding systems; this layer is
responsible for interoperability between these different encoding methods. It translates
the message into a common format.
Encryption and decryption - The sender transforms the original information into
another form and sends the resulting message over the network; the receiver reverses the transformation.
Compression and expansion-Compression reduces the number of bits contained in the
information particularly in text, audio and video.
7. APPLICATION LAYER
This layer enables the user to access the network. It handles issues such as network transparency,
resource allocation, etc. It also allows the user to log on to a remote host.
The other responsibilities of this layer are
FTAM (File Transfer, Access, Management) - Allows user to access files in a
remote host.
Mail services - Provides email forwarding and storage.
Directory services - Provides database sources to access information about various
sources and objects.
APPLICATION LAYER
The application layer incorporates the functions of the top three OSI layers. The
application layer is the topmost layer in the TCP/IP model.
It is responsible for handling high-level protocols, issues of representation.
This layer allows the user to interact with the application.
When one application layer protocol wants to communicate with another application
layer, it forwards its data to the transport layer.
Protocols such as FTP, HTTP, SMTP, POP3, etc. running in the application layer
provide services to user programs running on top of the application layer.
TRANSPORT LAYER
The transport layer is responsible for the reliability, flow control, and correction of data
which is being sent over the network.
The two protocols used in the transport layer are the User Datagram Protocol (UDP) and the Transmission Control Protocol (TCP).
INTERNET LAYER
The internet layer is the second layer of the TCP/IP model.
An internet layer is also known as the network layer.
The main responsibility of the internet layer is to send packets from any
network and have them arrive at the destination irrespective of the route they take.
The internet layer handles the transfer of information across multiple networks through routers
and gateways.
IP protocol is used in this layer, and it is the most significant part of the entire
TCP/IP suite.
OSI model vs TCP/IP model:
o In the OSI model, service interfaces and protocols are clearly distinguished; in the TCP/IP model they were not clearly distinguished.
o The OSI model assumes all packets are reliably delivered; in the TCP/IP model, TCP reliably delivers packets while IP does not.
SOCKETS
A socket is a software endpoint of a two-way communication link between two programs
running on the network. The socket mechanism provides a means of inter-process communication (IPC) by
establishing named contact points between which the communication takes place.
A socket is created using the 'socket' system call. A socket provides a bidirectional FIFO communication
facility over the network. A socket connecting to the network is created at each end of the communication.
Each socket has a specific address. This address is composed of an IP address and a port number.
Socket are generally employed in client server applications. The server creates a socket, attaches it to a
network port address then waits for the client to contact it. The client creates a socket and then attempts to
connect to the server socket. When the connection is established, transfer of data takes place.
Types of Sockets: There are two types of Sockets: the datagram socket and the stream socket.
1. Datagram Socket: This is a connectionless socket for sending and
receiving packets. It is similar to a mailbox: the letters (data) posted into the box are collected and
delivered (transmitted) to a letterbox (receiving socket).
2. Stream Socket: In computer operating systems, a stream socket is a type of inter-process
communications socket or network socket which provides a connection-oriented, sequenced, and
unique flow of data without record boundaries with well-defined mechanisms for creating and
destroying connections and for detecting errors. It is similar to phone. A connection is established
between the phones (two ends) and a conversation (transfer of data) takes place.
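The client/server sequence described above (server creates a socket, attaches it to a port, waits; client creates a socket and connects; then data transfer) can be sketched with a stream socket in Python. Everything runs on localhost with an OS-chosen port, so all addresses here are demo values:

```python
import socket
import threading

# Minimal stream-socket (TCP) exchange on localhost.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

def serve_once():
    conn, _addr = server.accept()      # wait for the client to contact us
    data = conn.recv(1024)             # read the client's data
    conn.sendall(data.upper())         # reply with the upper-cased echo
    conn.close()

t = threading.Thread(target=serve_once)
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))    # connect to the server socket
client.sendall(b"hello")               # transfer of data takes place
reply = client.recv(1024)
client.close()
t.join()
server.close()
print(reply)                           # b'HELLO'
```

The `socket()` call creates the endpoint; `bind`/`listen`/`accept` on the server side and `connect` on the client side establish the connection, after which both ends can send and receive.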
The Hyper Text Transfer Protocol (HTTP) is used to define how the client-server
programs can be written to retrieve web pages from the Web.
It is a protocol used to access the data on the World Wide Web (WWW).
The HTTP protocol can be used to transfer the data in the form of plain text,
hypertext, audio, video, and so on.
HTTP is a stateless request/response protocol that governs client/server
communication.
An HTTP client sends a request; an HTTP server returns a response.
The server uses the port number 80; the client uses a temporary port number.
HTTP uses the services of TCP, a connection-oriented and reliable protocol.
HTTP is a text-oriented protocol. Web pages contain embedded URLs known as
links.
When hypertext is clicked, browser opens a new connection, retrieves file from the
server and displays the file.
Each HTTP message has the general form
START_LINE <CRLF>
MESSAGE_HEADER <CRLF>
<CRLF> MESSAGE_BODY <CRLF>
where <CRLF> stands for carriage-return-line-feed.
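As a small illustration, the general form can be assembled programmatically; the request line and header values below are made-up sample values, not from any real exchange:

```python
CRLF = "\r\n"   # carriage-return-line-feed

# Assemble a message in the general form:
#   START_LINE <CRLF> MESSAGE_HEADER <CRLF> <CRLF> MESSAGE_BODY
start_line = "GET /index.html HTTP/1.1"
headers = ["Host: example.com", "Accept: text/html"]

# A GET request has an empty body, so the message ends after the blank line.
request = start_line + CRLF + CRLF.join(headers) + CRLF + CRLF
print(repr(request))
```

The blank line (a bare `<CRLF>`) is what separates the headers from the body, which is why it appears even when the body is empty.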
Features of HTTP
o Connectionless protocol:
HTTP is a connectionless protocol. HTTP client initiates a request and waits
for a response from the server. When the server receives the request, the
server processes the request and sends back the response to the HTTP client
after which the connection is closed. The connection between
client and server exists only during the current request and response.
o Media independent:
HTTP is media independent: any type of data can be sent as long as both
the client and server know how to handle the data content. Both the client
and server are required to specify the content type in the MIME-type header.
o Stateless:
HTTP is a stateless protocol as both the client and server know each other only
during the current request. Due to this nature of the protocol, both the client
and server do not retain the information between various requests of the web
pages.
Request Message: The request message is sent by the client that consists of a
request line, headers, and sometimes a body.
Response Message: The response message is sent by the server to the client that
consists of a status line, headers, and sometimes a body.
HTTP REQUEST MESSAGE
Request Line
There are three fields in this request line - Method, URL and Version.
The Method field defines the request types.
The URL field defines the address and name of the corresponding web page.
The Version field gives the version of the protocol; the most current version of HTTP
is 1.1.
Some of the Method types are
Request Header
Each request header line sends additional information from the client to the
server.
Each header line has a header name, a colon, a space, and a header value.
The value field defines the values associated with each header name.
Headers defined for request message include
Body
The body can be present in a request message. It is optional.
Usually, it contains the comment to be sent or the file to be published on the website
when the method is PUT or POST.
Conditional Request
A client can add a condition in its request.
In this case, the server will send the requested web page if the condition is met or
inform the client otherwise.
One of the most common conditions imposed by the client is the time and date the
web page is modified.
The client can send the header line If-Modified-Since with the request to tell the server
that it needs the page only if it is modified after a certain point in time.
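For illustration, the If-Modified-Since value must be an HTTP-date; one way to produce one in Python (here from the Unix epoch, purely as a demo timestamp) is:

```python
from email.utils import formatdate

# Build the If-Modified-Since header for a conditional GET. The value must
# be an HTTP-date; formatdate() renders a Unix timestamp in that format.
last_seen = 0  # timestamp of our cached copy (the epoch, for the demo)
header = "If-Modified-Since: " + formatdate(last_seen, usegmt=True)
print(header)  # If-Modified-Since: Thu, 01 Jan 1970 00:00:00 GMT
```

If the page has not changed since that date, the server replies with a 304 (Not Modified) status instead of resending the page.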
Status Line
The Status line contains three fields - HTTP version, Status code, Status phrase
The first field defines the version of HTTP protocol, currently 1.1.
The status code field defines the status of the request. It classifies the HTTP result.
It consists of three digits.
1xx–Informational, 2xx– Success, 3xx–Redirection,
4xx–Client error, 5xx–Server error
The Status phrase field gives brief description about status code in text form.
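Since the class of a status code is given by its first digit, a tiny helper can recover the class name (the labels are taken from the list above):

```python
# Map an HTTP status code to its class using the first of its three digits.
CLASSES = {1: "Informational", 2: "Success", 3: "Redirection",
           4: "Client error", 5: "Server error"}

def status_class(code):
    return CLASSES[code // 100]   # integer division isolates the first digit

print(status_class(200))  # Success
print(status_class(404))  # Client error
```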
Some of the Status codes are
Response Header
Each header provides additional information to the client.
Each header line has a header name, a colon, a space, and a header value.
Some of the response headers are:
Body
The body contains the document to be sent from the server to the client. The body is
present unless the response is an error message.
HTTP CONNECTIONS
HTTP Clients and Servers exchange multiple messages over the same TCP
connection. If some of the objects are located on the same server, we have two
choices: to retrieve each object using a new TCP connection, or to make one TCP
connection and retrieve them all.
The first method is referred to as a non-persistent connection, the second as a persistent
connection.
HTTP 1.0 uses non-persistent connections and HTTP 1.1 uses persistent
connections.
NON-PERSISTENT CONNECTIONS
PERSISTENT CONNECTIONS
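As a sketch of persistence, the demo below retrieves two objects over one HTTP/1.1 connection. It starts a throwaway local server so no real site is contacted; all names, paths, and the response body are demo values:

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from http.client import HTTPConnection

# A tiny local HTTP/1.1 server so the demo is self-contained.
class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"          # HTTP/1.1 keeps the connection open
    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):          # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Persistent connection: both objects travel over the same TCP connection.
conn = HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/a")
first = conn.getresponse().read()
conn.request("GET", "/b")                  # the same connection is reused
second = conn.getresponse().read()
conn.close()
server.shutdown()
```

With a non-persistent (HTTP 1.0 style) exchange, a new `HTTPConnection` would be opened and closed for each of the two objects instead.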
FTP OBJECTIVES
It provides the sharing of files.
It is used to encourage the use of remote computers.
It transfers the data more reliably and efficiently.
FTP MECHANISM
FTP CONNECTIONS
There are two types of connections in FTP -
Control Connection and Data Connection.
The two connections in FTP have different lifetimes.
The control connection remains connected during the entire interactive FTP
session.
The data connection is opened and then closed for each file transfer activity. When
a user starts an FTP session, the control connection opens.
While the control connection is open, the data connection can be opened and closed
multiple times if several files are transferred.
FTP uses two well-known TCP ports:
o Port 21 is used for the control connection
o Port 20 is used for the data connection.
Control Connection:
o The control connection uses very simple rules for communication.
o Through control connection, we can transfer a line of command or line of
response at a time.
o The control connection is made between the control processes.
o The control connection remains connected during the entire interactive FTP
session.
Data Connection:
o The Data Connection uses very complex rules as data types may vary.
o The data connection is made between data transfer processes.
o The data connection opens when a command comes for transferring the
files and closes when the file is transferred.
FTP COMMUNICATION
FTP Communication is achieved through commands and responses.
FTP Commands are sent from the client to the server
FTP responses are sent from the server to the client.
FTP Commands are in the form of ASCII uppercase, which may or may not be
followed by an argument.
Some of the most common commands are
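As an illustration of the command format described above (an ASCII keyword, optionally followed by an argument), a small helper can serialize control-connection lines. The user name and file name used in the demo are made up:

```python
# FTP commands travel on the control connection as ASCII lines: a keyword
# in uppercase, optionally followed by one argument, terminated by CRLF.
def ftp_command(keyword, argument=None):
    line = keyword.upper()
    if argument is not None:
        line += " " + argument
    return (line + "\r\n").encode("ascii")   # control connection is ASCII

print(ftp_command("USER", "anonymous"))  # b'USER anonymous\r\n'
print(ftp_command("RETR", "notes.txt"))  # b'RETR notes.txt\r\n'
print(ftp_command("QUIT"))               # b'QUIT\r\n'
```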
FTP SECURITY
FTP requires a password, but the password is sent in plaintext (unencrypted).
This means it can be intercepted and used by an attacker.
The data transfer connection also transfers data in plaintext, which is insecure.
To be secure, one can add a Secure Socket Layer between the FTP application layer
and the TCP layer.
In this case FTP is called SSL-FTP.
WORKING OF EMAIL
When Alice needs to send a message to Bob, she runs a UA program to prepare the
message and send it to her mail server.
The mail server at her site uses a queue (spool) to store messages waiting to be sent.
The message, however, needs to be sent through the Internet from Alice’s site to
Bob’s site using an MTA.
Here two message transfer agents are needed: one client and one server.
The server needs to run all the time because it does not know when a client will ask
for a connection.
The client can be triggered by the system when there is a message in the queue to be
sent.
The user agent at the Bob site allows Bob to read the received message.
Bob later uses an MAA client to retrieve the message from an MAA server
running on the second server.
Command driven
o Command driven user agents belong to the early days of electronic mail.
o A command-driven user agent normally accepts a one-character command from
the keyboard to perform its task.
o Some examples of command driven user agents are mail, pine, and elm.
GUI-based
o Modern user agents are GUI-based.
o They allow the user to interact with the software by using both the keyboard and
the mouse.
o They have graphical components such as icons, menu bars, and windows that make
the services easy to access.
o Some examples of GUI-based user agents are Eudora and Outlook.
SMTP is the standard protocol for transferring mail between hosts in the TCP/IP
protocol suite.
SMTP is not concerned with the format or content of messages themselves.
SMTP uses information written on the envelope of the mail (message header), but
does not look at the contents (message body) of the envelope.
SMTP Commands
Commands are sent from the client to the server. It consists of a keyword followed by
zero or more arguments. SMTP defines 14 commands.
SMTP Responses
Responses are sent from the server to the client.
A response is a three-digit code that may be followed by additional textual
information.
SMTP OPERATIONS
Basic SMTP operation occurs in three phases:
1. Connection Setup
2. Mail Transfer
3. Connection Termination
Connection Setup
An SMTP sender will attempt to set up a TCP connection with a target host when
it has one or more mail messages to deliver to that host.
The sequence is quite simple:
1. The sender opens a TCP connection with the receiver.
2. Once the connection is established, the receiver identifies itself
with "Service Ready”.
3. The sender identifies itself with the HELO command.
4. The receiver accepts the sender's identification with "OK".
5. If the mail service on the destination is unavailable, the destination
host returns a "Service Not Available" reply in step 2, and the process
is terminated.
Mail Transfer
Once a connection has been established, the SMTP sender may send one or more
messages to the SMTP receiver.
There are three logical phases to the transfer of a message:
1. A MAIL command identifies the originator of the message.
2. One or more RCPT commands identify the recipients for
this message.
3. A DATA command transfers the message text.
Connection Termination
The SMTP sender closes the connection in two steps.
First, the sender sends a QUIT command and waits for a reply.
The second step is to initiate a TCP close operation for the TCP connection.
The receiver initiates its TCP close after sending its reply to the QUIT command.
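The three phases can be traced in a representative command/response transcript. The sketch below simply prints one such session; the host names, addresses, and reply texts are invented for the demo and abbreviated:

```python
# One SMTP session, phase by phase. "C" lines are client commands,
# "S" lines are server replies; the 3-digit reply codes match the text above.
dialogue = [
    ("S", "220 mail.example.com Service Ready"),   # connection setup
    ("C", "HELO alice.example.org"),               # sender identifies itself
    ("S", "250 OK"),
    ("C", "MAIL FROM:<alice@example.org>"),        # mail transfer: originator
    ("S", "250 OK"),
    ("C", "RCPT TO:<bob@example.com>"),            # mail transfer: recipient
    ("S", "250 OK"),
    ("C", "DATA"),                                 # mail transfer: message text
    ("S", "354 Start mail input"),
    ("C", "Hello Bob!\r\n."),                      # body ends with a lone "."
    ("S", "250 OK"),
    ("C", "QUIT"),                                 # connection termination
    ("S", "221 Closing connection"),
]
for who, line in dialogue:
    print(who + ": " + line)
```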
LIMITATIONS OF SMTP
SMTP cannot transmit executable files or other binary objects.
SMTP cannot transmit text data that includes national language characters, as these
are represented by 8-bit codes with values of 128 decimal or higher, and SMTP is
limited to 7-bit ASCII.
SMTP servers may reject mail messages over a certain size.
SMTP gateways that translate between ASCII and the character code EBCDIC do not
use a consistent set of mappings, resulting in translation problems.
Some SMTP implementations do not adhere completely to the SMTP standards
defined.
Common problems include the following:
1. Deletion, addition, or reordering of carriage return and linefeed characters.
2. Truncating or wrapping lines longer than 76 characters.
3. Removal of trailing white space (tab and space characters).
4. Padding of lines in a message to the same length.
5. Conversion of tab characters into multiple-space characters.
SMTP provides a basic email service, while MIME adds multimedia capability to
SMTP.
MIME is an extension to SMTP and is used to overcome the problems and limitations
of SMTP.
Email system was designed to send messages only in ASCII format.
Languages such as French, Chinese, etc., are not supported.
Image, audio and video files cannot be sent.
MIME adds the following features to email service:
Be able to send multiple attachments with a single message;
Unlimited message length;
Use of character sets other than ASCII code;
Use of rich text (layouts, fonts, colors, etc)
Binary attachments (executables, images, audio or video files, etc.), which
may be divided if needed.
MIME is a protocol that converts non-ASCII data to 7-bit NVT (Network Virtual
Terminal) ASCII and vice-versa.
MIME HEADERS
Using headers, MIME describes the type of message content and the encoding used.
Headers defined in MIME are:
MIME-Version - current version, i.e., 1.0
Content-Type - message type (text/html, image/jpeg, application/pdf)
Content-Transfer-Encoding - message encoding scheme (eg base64).
Content-Id - unique identifier for the message.
Content-Description - describes type of the message body.
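As a sketch, Python's email library sets these MIME headers automatically when a message is built; the addresses and attachment bytes below are placeholders:

```python
from email.message import EmailMessage

# Build a MIME message; the library adds MIME-Version and Content-Type.
msg = EmailMessage()
msg["From"] = "alice@example.org"      # placeholder addresses
msg["To"] = "bob@example.com"
msg["Subject"] = "MIME demo"
msg.set_content("Plain-text part.")    # text/plain body
msg.add_attachment(b"\x00\x01",        # a binary part, base64-encoded
                   maintype="application", subtype="octet-stream",
                   filename="blob.bin")

print(msg["MIME-Version"])             # 1.0
print(msg.get_content_type())          # multipart/mixed
```

Adding the binary attachment turns the message into a multipart/mixed container, matching the multipart content type described below.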
MIME CONTENT TYPES
There are seven different major types of content and a total of 14 subtypes.
In general, a content type declares the general type of data, and the subtype
specifies a particular format for that type of data.
MIME also defines a multipart type that says how a message carrying more than
one data type is structured.
This is like a programming language that defines both base types (e.g., integers and
floats) and compound types (e.g., structures and arrays).
One possible multipart subtype is mixed, which says that the message contains a
set of independent data pieces in a specified order.
Each piece then has its own header line that describes the type of that piece.
The table below lists the MIME content types:
MESSAGE TRANSFER IN MIME
MTA is a mail daemon (such as sendmail) active on hosts having a mailbox, used to send an
email.
Mail passes through a sequence of gateways before it reaches the recipient mail
server. Each gateway stores and forwards the mail using Simple mail transfer protocol
(SMTP).
SMTP defines communication between MTAs over TCP on port 25.
In an SMTP session, sending MTA is client and receiver is server. In each
exchange:
Client posts a command (HELO, MAIL, RCPT, DATA, QUIT, VRFY, etc.)
Server responds with a code (250, 550, 354, 221, 251 etc) and an explanation.
Client is identified using HELO command and verified by the server
Client forwards the message to the server, if the server is willing to accept it.
The message is terminated by a line containing only a single period (.).
Eventually the client terminates the connection.
Offline
A mail client connects to the server, downloads the messages to the local
computer (typically deleting them from the server), and disconnects; the user
then reads and manages the mail offline on the local machine.
Online
Users may connect to the server, look at what email is available, and
access it online. This looks to the user very much like having local spool
files, but they’re on the mail server.
Disconnected operation
A mail client connects to the server, can make a “cache” copy of
selected messages, and disconnects from the server. The user can then
work on the messages offline, and connect to the server later and
resynchronize the server status with the cache.
OPERATION OF IMAP
The mail transfer begins with the client authenticating the user and identifying the
mailbox they want to access.
Client Commands
LOGIN, AUTHENTICATE, SELECT, EXAMINE, CLOSE, and LOGOUT
Server Responses
OK, NO (no permission), BAD (incorrect command),
When user wishes to FETCH a message, server responds in MIME format.
Message attributes such as size are also exchanged.
Flags are used by client to report user actions.
SEEN, ANSWERED, DELETED, RECENT
IMAP4
The latest version is IMAP4. IMAP4 is more powerful and more complex.
IMAP4 provides the following extra functions:
A user can check the e-mail header prior to downloading.
A user can search the contents of the e-mail for a specific string of characters
prior to downloading.
A user can partially download e-mail. This is especially useful if bandwidth is
limited and the e-mail contains multimedia with high bandwidth requirements.
A user can create, delete, or rename mailboxes on the mail server.
A user can create a hierarchy of mailboxes in a folder for e-mail storage.
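A hedged sketch of the check-the-header-before-downloading feature using Python's imaplib follows. The host and credentials are placeholders, and the function is defined but not invoked, since calling it would contact a real server:

```python
import imaplib

# Sketch: fetch only the Subject header of message 1, without downloading
# the whole message -- one of the IMAP4 features listed above.
# Host, user, and password are hypothetical placeholders.
def peek_subject(host, user, password):
    box = imaplib.IMAP4(host)                  # connect on port 143
    box.login(user, password)                  # LOGIN command
    box.select("INBOX")                        # SELECT the mailbox
    _status, data = box.fetch(b"1", "(BODY[HEADER.FIELDS (SUBJECT)])")
    box.logout()                               # LOGOUT command
    return data

# peek_subject("mail.example.com", "bob", "secret")  # would contact a server
```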
ADVANTAGES OF IMAP
With IMAP, the primary storage is on the server, not on the local machine.
Email being put away for storage can be foldered on the local disk, or can be
kept in folders on the mail server.
POP3
The POP3 client is installed on the recipient computer and the POP server on the mail server.
Client opens a connection to the server using TCP on port 110.
Client sends username and password to access mailbox and to retrieve messages.
POP3 Commands
POP commands are generally abbreviated into codes of three or four letters. The
following describes some of the POP commands:
1. UIDL - It is used to get a unique identifier for each message
2. STAT - It is used to display the number of messages currently in the mailbox and their total size
3. LIST - It is used to get a summary of the messages (message number and size)
4. RETR - It is used to retrieve (download) the selected message
5. DELE - It is used to mark a message for deletion
6. RSET - It is used to reset the session to its initial state, unmarking any deletions
7. QUIT - It is used to log off the session, committing any deletions
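The command semantics above can be illustrated with a toy in-memory mailbox. This is a sketch, not a real POP3 implementation; the class and method names are invented for illustration:

```python
class ToyPOP3Mailbox:
    """Illustrates POP3 command semantics on an in-memory mailbox."""

    def __init__(self, messages):
        self.messages = list(messages)   # message bodies
        self.deleted = set()             # indices marked by DELE

    def stat(self):
        """STAT: number of messages and total size in octets."""
        live = [m for i, m in enumerate(self.messages) if i not in self.deleted]
        return len(live), sum(len(m) for m in live)

    def list(self):
        """LIST: per-message summary (message number, size)."""
        return [(i + 1, len(m)) for i, m in enumerate(self.messages)
                if i not in self.deleted]

    def retr(self, n):
        """RETR: retrieve message n (1-based)."""
        return self.messages[n - 1]

    def dele(self, n):
        """DELE: mark message n for deletion (removed at QUIT)."""
        self.deleted.add(n - 1)

    def rset(self):
        """RSET: unmark all deletions, restoring the initial state."""
        self.deleted.clear()

    def quit(self):
        """QUIT: commit deletions and end the session."""
        self.messages = [m for i, m in enumerate(self.messages)
                         if i not in self.deleted]
        self.deleted.clear()

box = ToyPOP3Mailbox(["Hello", "Bye!"])
count, size = box.stat()   # 2 messages, 9 octets in total
box.dele(1)
box.rset()                 # deletion undone
box.dele(1)
box.quit()                 # deletion committed; only "Bye!" remains
```

Note how DELE only marks a message: the deletion is reversible with RSET until QUIT commits it.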
DIFFERENCE BETWEEN POP AND IMAP
DNS (DOMAIN NAME SYSTEM)
The following six steps show the working of DNS. It maps a host name to an IP address:
1. The user passes the host name to the file transfer client.
2. The file transfer client passes the host name to the DNS client.
3. Each computer, after being booted, knows the address of one DNS server. The DNS
client sends a message to a DNS server with a query that gives the file transfer server
name using the known IP address of the DNS server.
4. The DNS server responds with the IP address of the desired file transfer server.
5. The DNS client passes the IP address to the file transfer client.
6. The file transfer client now uses the received IP address to access the file transfer server.
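The six steps above can be sketched as a toy resolution, with the server's table, the host name, and the address all invented for illustration:

```python
# Hypothetical mapping held by the DNS server the client already knows.
DNS_SERVER_DB = {"ftp.example.com": "192.0.2.7"}

def dns_client_query(host_name):
    """Steps 2-5: the DNS client queries the known DNS server and
    returns the IP address from the server's response (step 4)."""
    return DNS_SERVER_DB[host_name]

def file_transfer_client(host_name):
    """Step 1: the user passes the host name to the file transfer client,
    which resolves it (steps 2-5) and then uses the address (step 6)."""
    ip = dns_client_query(host_name)
    return f"connecting to {ip}"

result = file_transfer_client("ftp.example.com")   # "connecting to 192.0.2.7"
```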
NAME SPACE
To be unambiguous, the names assigned to machines must be carefully selected from a
name space with complete control over the binding between the names and IP addresses.
The names must be unique because the addresses are unique.
A name space that maps each address to a unique name can be organized in
two ways: flat (or) hierarchical.
DOMAIN NAME SPACE
Each node in the tree has a label, which is a string with a maximum of 63 characters.
The root label is a null string (empty string). DNS requires that children of a node
(nodes that branch from the same node) have different labels, which guarantees the
uniqueness of the domain names.
Domain Name
Each node in the tree has a domain name associated with it.
A full domain name is a sequence of labels separated by dots (.)
The domain names are always read from the node up to the root.
The last label is the label of the root (null).
This means that a full domain name always ends in a null label, which
means the last character is a dot because the null string is nothing.
If a label is terminated by a null string, it is called a fully qualified domain
name (FQDN).
If a label is not terminated by a null string, it is called a partially
qualified domain name (PQDN).
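The FQDN/PQDN distinction can be sketched in a few lines; the helper names and the example suffix are invented for illustration:

```python
def is_fqdn(name: str) -> bool:
    """A fully qualified domain name ends with the root's null label,
    i.e. a trailing dot."""
    return name.endswith(".")

def qualify(pqdn: str, suffix: str) -> str:
    """Complete a PQDN by appending a resolver-supplied suffix."""
    return pqdn + "." + suffix

is_fqdn("www.example.com.")     # True  - ends in the null label
is_fqdn("www.example.com")      # False - partially qualified
qualify("www", "example.com.")  # "www.example.com."
```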
Domain
A domain is a subtree of the domain name space.
The name of the domain is the domain name of the node at the top of the sub-tree.
A domain may itself be divided into domains.
ZONE
A zone is the part of the domain name space for which a server is responsible or over which it has authority; the server keeps a zone file with the information about every node in that zone.
ROOT SERVER
A root server is a server whose zone consists of the whole tree.
A root server usually does not store any information about domains but delegates its
authority to other servers, keeping references to those servers.
Currently there are more than 13 root servers, each covering the whole domain name
space.
The servers are distributed all around the world.
Generic Domains
The generic domains define registered hosts according to their generic behavior.
Each node in the tree defines a domain, which is an index to the domain namespace
database.
The first level in the generic domains section allows seven possible three-character
labels.
These labels describe the organization types as listed in the following table.
Country Domains
The country domains section follows the same format as the generic domains but
uses two characters for country abbreviations
(e.g., in for India, us for the United States, etc.) in place of the three-character
organizational abbreviation at the first level.
Second-level labels can be organizational, or they can be more specific national
designations.
The United States, for example, uses state abbreviations as a subdivision of the
country domain us (e.g., ca.us. for California).
Inverse Domains
o Mapping an address to a name is called Inverse domain.
The client can send an IP address to a server to be mapped to a domain name; this is
called a PTR (pointer) query.
To answer queries of this kind, DNS uses the inverse domain
DNS RESOLUTION
Mapping a name to an address or an address to a name is called name address
resolution. DNS is designed as a client server application.
A host that needs to map an address to a name or a name to an address calls a DNS
client named a Resolver.
The Resolver accesses the closest DNS server with a mapping request.
If the server has the information, it satisfies the resolver; otherwise, it either refers the
resolver to other servers or asks other servers to provide the information.
After the resolver receives the mapping, it interprets the response to see if it is a real
resolution or an error and finally delivers the result to the process that requested it.
A resolution can be either recursive or iterative.
Recursive Resolution
The application program on the source host calls the DNS resolver (client) to find the
IP address of the destination host. The resolver, which does not know this address,
sends the query to the local DNS server of the source (Event 1)
The local server sends the query to a root DNS server (Event 2)
The Root server sends the query to the top-level-DNS server (Event 3)
The top-level DNS server knows only the IP address of the local DNS server at the
destination. So, it forwards the query to the local server, which knows the IP address
of the destination host (Event 4)
The IP address of the destination host is now sent back to the top-level DNS server
(Event 5) then back to the root server (Event 6), then back to the source DNS server,
which may cache it for the future queries (Event 7), and finally back to the source host
(Event 8).
Iterative Resolution
In iterative resolution, each server that does not know the mapping, sends the IP
address of the next server back to the one that requested it.
The iterative resolution takes place between two local servers.
The original resolver gets the final answer from the destination local server.
The messages shown by Events 2, 4, and 6 contain the same query.
However, the message shown by Event 3 contains the IP address of the top-level
domain server.
The message shown by Event 5 contains the IP address of the destination local DNS
server
The message shown by Event 7 contains the IP address of the destination.
When the Source local DNS server receives the IP address of the destination, it sends
it to the resolver (Event 8).
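Iterative resolution can be simulated with an invented chain of servers, each returning either a referral to the next server or the final answer:

```python
# Invented topology: each server either knows the answer or refers the
# resolver to the next server, as in iterative resolution.
SERVERS = {
    "root":       {"refer": "top-level"},
    "top-level":  {"refer": "dest-local"},
    "dest-local": {"answer": "198.51.100.9"},   # hypothetical address
}

def iterative_resolve(first_server):
    """The resolver itself contacts one server after another until one
    of them returns the final answer."""
    server, hops = first_server, [first_server]
    while True:
        record = SERVERS[server]
        if "answer" in record:
            return record["answer"], hops
        server = record["refer"]   # referral: the resolver tries the next server
        hops.append(server)

ip, path = iterative_resolve("root")
# ip   == "198.51.100.9"
# path == ["root", "top-level", "dest-local"]
```

The key difference from recursive resolution is visible in the loop: the burden of chasing referrals stays with the resolver instead of being passed from server to server.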
DNS CACHING
Each time a server receives a query for a name that is not in its domain, it needs to
search its database for a server IP address.
DNS handles this with a mechanism called caching.
When a server asks for a mapping from another server and receives the response, it
stores this information in its cache memory before sending it to the client.
If the same or another client asks for the same mapping, it can check its cache
memory and resolve the problem.
However, to inform the client that the response is coming from the cache memory and
not from an authoritative source, the server marks the response as unauthoritative.
Caching speeds up resolution. Reduction of this search time would increase
efficiency, but it can also be problematic.
If a server caches a mapping for a long time, it may send an outdated mapping to the
client.
To counter this, two techniques are used.
First, the authoritative server always adds information to the mapping called
time to live (TTL). It defines the time in seconds that the receiving server can
cache the information. After that time, the mapping is invalid and any query
must be sent again to the authoritative server.
Second, DNS requires that each server keep a TTL counter for each mapping
it caches. The cache memory must be searched periodically and those
mappings with an expired TTL must be purged.
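A minimal sketch of TTL-bounded caching, assuming an invented `DNSCache` class and example mappings; timestamps are passed explicitly so the expiry logic is visible:

```python
import time

class DNSCache:
    """Entries expire after their time-to-live; an expired entry is
    purged and must be re-fetched from the authoritative server."""

    def __init__(self):
        self._entries = {}   # name -> (address, expiry_time)

    def store(self, name, address, ttl, now=None):
        now = time.time() if now is None else now
        self._entries[name] = (address, now + ttl)

    def lookup(self, name, now=None):
        now = time.time() if now is None else now
        entry = self._entries.get(name)
        if entry is None:
            return None
        address, expiry = entry
        if now >= expiry:            # TTL expired: purge the stale mapping
            del self._entries[name]
            return None
        return address

cache = DNSCache()
cache.store("example.com", "93.184.216.34", ttl=60, now=1000)
hit = cache.lookup("example.com", now=1030)    # still valid
miss = cache.lookup("example.com", now=1061)   # expired and purged
```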
DNS MESSAGES
DNS has two types of messages: query and response.
Both types have the same format.
The query message consists of a header and question section.
The response message consists of a header, question section, answer section,
authoritative section, and additional section
Header
Both query and response messages have the same header format with some
fields set to zero for the query messages.
The header fields are as follows:
The identification field is used by the client to match the response with the
query.
The flag field defines whether the message is a query or response. It also
includes status of error.
The next four fields in the header define the number of each record type in the
message.
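The 12-byte header layout (identification, flags, and the four record counts) can be sketched with `struct`; the field values below are invented for illustration:

```python
import struct

def pack_dns_header(ident, flags, qdcount, ancount, nscount, arcount):
    """The 12-byte DNS header: identification, flags, then four 16-bit
    counts (question, answer, authoritative, additional records)."""
    return struct.pack("!6H", ident, flags, qdcount, ancount, nscount, arcount)

def unpack_dns_header(data):
    return struct.unpack("!6H", data[:12])

hdr = pack_dns_header(ident=0x1234, flags=0x0100, qdcount=1,
                      ancount=0, nscount=0, arcount=0)
len(hdr)                 # 12 bytes
unpack_dns_header(hdr)   # (0x1234, 0x0100, 1, 0, 0, 0)
```

In a query the three answer-related counts are zero, which matches the note above that some header fields are set to zero for query messages.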
Question Section
The question section consists of one or more question records. It is
present in both query and response messages.
Answer Section
The answer section consists of one or more resource records. It is present only
in response messages.
Authoritative Section
The authoritative section gives information (domain name) about one or more
authoritative servers for the query.
Additional Information Section
The additional information section provides additional information that may
help the resolver.
DNS CONNECTIONS
DNS can use either UDP or TCP.
In both cases the well-known port used by the server is port 53.
UDP is used when the size of the response message is less than 512 bytes, the
traditional limit on a DNS message carried over UDP.
If the size of the response message is more than 512 bytes, a TCP connection is used.
DNS REGISTRARS
New domains are added to DNS through a registrar. A fee is charged.
A registrar first verifies that the requested domain name is unique and then enters
it into the DNS database.
Today, there are many registrars; their names and addresses can be found at
http://www.internic.net
To register, the organization needs to give the name of its server and the IP
address of the server.
For example, a new commercial organization named wonderful with a server
named ws and IP address 200.200.200.5 needs to give the following information
to one of the registrars:
Domain name: ws.wonderful.com
IP address: 200.200.200.5
SNMP (SIMPLE NETWORK MANAGEMENT PROTOCOL)
SNMP is a framework for managing devices in an internet using the TCP/IP protocol suite. It defines a manager, usually a host, that controls and monitors a set of agents, usually routers.
SNMP MANAGER
A manager is a host that runs the SNMP client program
The manager has access to the values in the database kept by the agent.
A manager checks the agent by requesting the information that reflects the
behavior of the agent.
A manager also forces the agent to perform a certain function by resetting values
in the agent database.
For example, a router can store in appropriate variables the number of
packets received and forwarded.
The manager can fetch and compare the values of these two variables to see if the
router is congested or not.
SNMP AGENT
The agent is a router or host that runs the SNMP server program.
The agent is used to keep the information in a database while the manager is used
to access the values in the database.
For example, a router can store the appropriate variables such as a number of
packets received and forwarded while the manager can compare these variables to
determine whether the router is congested or not.
Agents can also contribute to the management process.
A server program on the agent checks the environment, if something goes wrong, the
agent sends a warning message to the manager.
SMI (STRUCTURE OF MANAGEMENT INFORMATION)
SMI defines the general rules for naming objects, defining object types, and encoding objects and values.
Name
SMI requires that each managed object (such as a router, a variable in a router,
a value, etc.) have a unique name.
To name objects globally, SMI uses an object identifier, which is a hierarchical
identifier based on a tree structure.
The tree structure starts with an unnamed root. Each object can be defined
using a sequence of integers separated by dots.
The tree structure can also define an object using a sequence of textual names
separated by dots.
Type of data
The second attribute of an object is the type of data stored in it.
To define the data type, SMI uses Abstract Syntax Notation One (ASN.1)
definitions.
SMI has two broad categories of data types: simple and structured.
The simple data types are atomic data types. Some of them are taken directly
from ASN.1; some are added by SMI.
SMI defines two structured data types: sequence and sequence of.
Sequence - A sequence data type is a combination of simple data types,
not necessarily of the same type.
Sequence of - A sequence of data type is a combination of simple data types
all of the same type or a combination of sequence data types all of the
same type.
Encoding data
SMI uses another standard, Basic Encoding Rules (BER), to encode data to be
transmitted over the network.
BER specifies that each piece of data be encoded in triplet format (TLV): tag, length,
value
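As a sketch of the TLV idea, the following encodes a small non-negative INTEGER as a BER triplet (short-form lengths only; a real BER encoder also handles negative values and long-form lengths):

```python
def ber_encode_integer(value):
    """Encode a non-negative INTEGER as a BER tag-length-value triplet:
    tag 0x02, a short-form length octet, then big-endian content octets.
    (value + 8 bits) // 8 adds a leading 0x00 octet whenever the top bit
    of the value would otherwise be set, so it still reads as positive."""
    content = value.to_bytes(max(1, (value.bit_length() + 8) // 8), "big")
    return bytes([0x02, len(content)]) + content

ber_encode_integer(5)     # b'\x02\x01\x05'       -> tag, length, value
ber_encode_integer(300)   # b'\x02\x02\x01\x2c'   -> 300 = 0x012C
```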
Management Information Base (MIB)
The Management Information Base (MIB) is the second component used in network management.
Each agent has its own MIB, which is a collection of objects to be managed.
MIB classifies objects under groups.
MIB Variables
MIB variables are of two types namely simple and table.
Simple variables are accessed using group-id followed by variable-id and 0
Tables are ordered as column-row rules, i.e., column by column from top
to bottom. Only leaf elements are accessible in a table type.
SNMP MESSAGES/PDU
SNMP is a request/reply protocol that supports various operations using PDUs.
SNMP defines eight types of protocol data units (or PDUs):
GetRequest, GetNext-Request, GetBulkRequest, SetRequest, Response, Trap,
InformRequest, and Report
GetRequest
The GetRequest PDU is sent from the manager (client) to the agent (server) to
retrieve the value of a variable or a set of variables.
GetNextRequest
The GetNextRequest PDU is sent from the manager to the agent to retrieve the
value of the variable that follows the named variable; it is mostly used to retrieve
the entries of a table.
GetBulkRequest
The GetBulkRequest PDU is sent from the manager to the agent to retrieve a large
amount of data. It can be used instead of multiple GetRequest and GetNextRequest
PDUs.
SetRequest
The SetRequest PDU is sent from the manager to the agent to set (store) a
value in a variable.
Response
The Response PDU is sent from an agent to a manager in response to GetRequest or
GetNextRequest. It contains the value(s) of the variable(s) requested by the
manager.
Trap
The Trap PDU is sent from the agent to the manager to report an event. For example,
if the agent is rebooted, it informs the manager and reports the time of rebooting.
InformRequest
The InformRequest PDU is sent from one manager to another remote manager to
get the value of some variables from agents under the control of the remote
manager. The remote manager responds with a Response PDU.
Report
The Report PDU is designed to report some types of errors between managers.
UNIT – II: TRANSPORT LAYER
Introduction - Transport-Layer Protocols: UDP – TCP: Connection Management – Flow control -
Congestion Control - Congestion avoidance (DECbit, RED) – SCTP – Quality of Service
INTRODUCTION
The transport layer is the fourth layer of the OSI model and is the core of the Internet
model.
It responds to service requests from the session layer and issues service requests to
the network Layer.
The transport layer provides transparent transfer of data between hosts.
It provides end-to-end control and information transfer with the quality of service
needed by the application program.
It is the first true end-to-end layer, implemented in all End Systems (ES).
Downloaded from EnggTree.com
Process-to-Process Communication
The Transport Layer is responsible for delivering data to the appropriate application
process on the host computers.
This involves multiplexing of data from different application processes, i.e. forming
data packets, and adding source and destination port numbers in the header of each
Transport Layer data packet.
Together with the source and destination IP address, the port numbers constitute a
network socket, i.e. an identification address of the process-to-process
communication.
Flow Control
Flow Control is the process of managing the rate of data transmission between two
nodes to prevent a fast sender from overwhelming a slow receiver.
It provides a mechanism for the receiver to control the transmission speed, so that the
receiving node is not overwhelmed with data from transmitting node.
Error Control
Error control at the transport layer is responsible for
1. Detecting and discarding corrupted packets.
2. Keeping track of lost and discarded packets and resending them.
3. Recognizing duplicate packets and discarding them.
4. Buffering out-of-order packets until the missing packets arrive.
Error Control involves Error Detection and Error Correction
Congestion Control
Congestion in a network may occur if the load on the network (the number of
packets sent to the network) is greater than the capacity of the network (the
number of packets a network can handle).
Congestion control refers to the mechanisms and techniques that control the
congestion and keep the load below the capacity.
Congestion Control refers to techniques and mechanisms that can either prevent
congestion, before it happens, or remove congestion, after it has happened
Congestion control mechanisms are divided into two categories,
1. Open loop - prevent the congestion before it happens.
2. Closed loop - remove the congestion after it happens.
Each protocol provides a different type of service and should be used appropriately.
UDP - UDP is an unreliable connectionless transport-layer protocol used for its simplicity
and efficiency in applications where error control can be provided by the application-layer
process.
TCP - TCP is a reliable connection-oriented protocol that can be used in any application
where reliability is important.
SCTP - SCTP is a new transport-layer protocol designed to combine some features of UDP
and TCP in an effort to create a better protocol for multimedia communication.
UDP PORTS
Processes (server/client) are identified by an abstract locator known as port.
Server accepts message at well known port.
Some well-known UDP ports are 7–Echo, 53–DNS, 111–RPC, 161–SNMP, etc.
< port, host > pair is used as key for demultiplexing.
Ports are implemented as a message queue.
When a message arrives, UDP appends it to end of the queue.
When queue is full, the message is discarded.
When a message is read, it is removed from the queue.
When an application process wants to receive a message, one is removed from the
front of the queue.
If the queue is empty, the process blocks until a message becomes available.
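The queue behavior described above (append at the end, discard when full, remove from the front) can be sketched as follows; the port, host, and capacity values are invented for illustration:

```python
from collections import deque

class PortQueue:
    """Sketch of a UDP receive queue for one <port, host> key."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.queue = deque()
        self.dropped = 0

    def deliver(self, message):
        if len(self.queue) >= self.capacity:
            self.dropped += 1            # queue full: discard the message
        else:
            self.queue.append(message)   # append to the end of the queue

    def receive(self):
        if not self.queue:
            return None                  # a real process would block here
        return self.queue.popleft()      # remove from the front

# demultiplexing: one queue per <port, host> key (example values)
queues = {(53, "198.51.100.1"): PortQueue(capacity=2)}
q = queues[(53, "198.51.100.1")]
q.deliver(b"msg1"); q.deliver(b"msg2"); q.deliver(b"msg3")   # third is dropped
first = q.receive()   # b"msg1"
```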
UDP DATAGRAM (PACKET) FORMAT
UDP packets are known as user datagrams.
These user datagrams have a fixed-size header of 8 bytes made of four fields, each of
2 bytes (16 bits): source port number, destination port number, length, and checksum.
Length
This field denotes the total length of the UDP packet (header plus data).
The total length of any UDP datagram can be from 8 bytes (a header with no data)
up to 65,535 bytes.
Checksum
UDP computes its checksum over the UDP header, the contents of the message
body, and something called the pseudoheader.
The pseudoheader consists of three fields from the IP header—protocol number,
source IP address, destination IP address plus the UDP length field.
Data
Data field defines the actual payload to be transmitted.
Its size is variable.
UDP SERVICES
Process-to-Process Communication
UDP provides process-to-process communication using socket addresses, a
combination of IP addresses and port numbers.
Connectionless Services
UDP provides a connectionless service.
There is no connection establishment and no connection termination .
Each user datagram sent by UDP is an independent datagram.
There is no relationship between the different user datagrams even if they are
coming from the same source process and going to the same destination program.
The user datagrams are not numbered.
Each user datagram can travel on a different path.
Flow Control
UDP is a very simple protocol.
There is no flow control, and hence no window mechanism.
The receiver may overflow with incoming messages.
The lack of flow control means that the process using UDP should provide for this
service, if needed.
Error Control
There is no error control mechanism in UDP except for the checksum.
This means that the sender does not know if a message has been lost or duplicated.
When the receiver detects an error through the checksum, the user datagram is
silently discarded.
The lack of error control means that the process using UDP should provide for this
service, if needed.
Checksum
UDP checksum calculation includes three sections: a pseudoheader, the UDP header,
and the data coming from the application layer.
The pseudoheader is the part of the header in which the user datagram is to be
encapsulated with some fields filled with 0s.
Optional Inclusion of Checksum
The sender of a UDP packet can choose not to calculate the checksum.
In this case, the checksum field is filled with all 0s before being sent.
In the situation where the sender decides to calculate the checksum,
but it happens that the result is all 0s, the checksum is changed to all
1s before the packet is sent.
In other words, the sender complements the sum two times.
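The checksum procedure, including the pseudoheader and the all-0s/all-1s rule, can be sketched as follows; the addresses and ports are example values:

```python
import struct

def internet_checksum(data: bytes) -> int:
    """16-bit one's complement of the one's complement sum."""
    if len(data) % 2:
        data += b"\x00"                  # pad to a 16-bit boundary
    total = sum(struct.unpack(f"!{len(data) // 2}H", data))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)   # wrap carries around
    return ~total & 0xFFFF

def udp_checksum(src_ip, dst_ip, udp_segment):
    """Checksum over the pseudoheader (source IP, destination IP, zero,
    protocol 17, UDP length) plus the UDP segment itself."""
    pseudo = src_ip + dst_ip + struct.pack("!BBH", 0, 17, len(udp_segment))
    csum = internet_checksum(pseudo + udp_segment)
    return 0xFFFF if csum == 0 else csum   # an all-0s result is sent as all-1s

src = bytes([192, 0, 2, 1])                    # example addresses
dst = bytes([192, 0, 2, 2])
header = struct.pack("!4H", 1024, 53, 12, 0)   # checksum field zeroed
segment = header + b"hiya"
csum = udp_checksum(src, dst, segment)
```

The receiver verifies a datagram by summing everything, including the filled-in checksum field; a valid packet folds to all 1s, whose complement is 0.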
Congestion Control
Since UDP is a connectionless protocol, it does not provide congestion control.
UDP assumes that the packets sent are small and sporadic (sent occasionally or at
irregular intervals) and cannot create congestion in the network.
This assumption may or may not be true, when UDP is used for interactive real-time
transfer of audio and video.
Queuing
In UDP, queues are associated with ports.
At the client site, when a process starts, it requests a port number from the
operating system.
Some implementations create both an incoming and an outgoing queue associated
with each process.
Other implementations create only an incoming queue associated with each process.
APPLICATIONS OF UDP
UDP is used for management processes such as SNMP.
UDP is used for route updating protocols such as RIP.
UDP is a suitable transport protocol for multicasting. Multicasting capability
is embedded in the UDP software
UDP is suitable for a process with internal flow and error control mechanisms such
as Trivial File Transfer Protocol (TFTP).
UDP is suitable for a process that requires simple request-response communication
with little concern for flow and error control.
UDP is normally used for interactive real-time applications that cannot tolerate
uneven delay between sections of a received message.
TRANSMISSION CONTROL PROTOCOL (TCP)
TCP SERVICES
Process-to-Process Communication
TCP provides process-to-process communication using port numbers.
Stream Delivery Service
TCP is a stream-oriented protocol.
TCP allows the sending process to deliver data as a stream of bytes and allows the
receiving process to obtain data as a stream of bytes.
TCP creates an environment in which the two processes seem to be connected by an
imaginary “tube” that carries their bytes across the Internet.
The sending process produces (writes to) the stream and the receiving process consumes
(reads from) it.
Full-Duplex Communication
TCP offers full-duplex service, where data can flow in both directions at the same
time.
Each TCP endpoint then has its own sending and receiving buffer, and segments
move in both directions.
Reliable Service
TCP is a reliable transport protocol.
It uses an acknowledgment mechanism to check the safe and sound arrival of data.
TCP SEGMENT
A packet in TCP is called a segment.
Data unit exchanged between TCP peers are called segments.
A TCP segment encapsulates the data received from the application layer.
The TCP segment is encapsulated in an IP datagram, which in turn is encapsulated in
a frame at the data-link layer.
TCP is a byte-oriented protocol, which means that the sender writes bytes into a TCP
connection and the receiver reads bytes out of the TCP connection.
TCP does not, itself, transmit individual bytes over the Internet.
TCP on the source host buffers enough bytes from the sending process to fill a
reasonably sized packet and then sends this packet to its peer on the destination host.
TCP on the destination host then empties the contents of the packet into a receive
buffer, and the receiving process reads from this buffer at its leisure.
TCP connection supports byte streams flowing in both directions.
The packets exchanged between TCP peers are called segments, since each one
carries a segment of the byte stream.
Connection Establishment
While opening a TCP connection, the two nodes (client and server) want to agree on
a set of parameters.
The parameters are the starting sequence numbers that is to be used for their
respective byte streams.
Connection establishment in TCP is a three-way handshaking.
1. Client sends a SYN segment to the server containing its initial sequence number (Flags
= SYN, SequenceNum = x)
2. Server responds with a segment that acknowledges client’s segment and specifies its
initial sequence number (Flags = SYN + ACK, ACK = x + 1, SequenceNum = y).
3. Finally, client responds with a segment that acknowledges server’s sequence number
(Flags = ACK, ACK = y + 1).
The reason that each side acknowledges a sequence number that is one larger
than the one sent is that the Acknowledgment field actually identifies the “next
sequence number expected.”
A timer is scheduled for each of the first two segments, and if the expected
response is not received, the segment is retransmitted.
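The three segments and their sequence/acknowledgment numbers can be sketched as follows, with x = 100 and y = 300 as invented initial sequence numbers:

```python
def three_way_handshake(x, y):
    """Returns the three segments of TCP connection establishment."""
    segments = []
    segments.append({"flags": "SYN", "seq": x})                     # 1: client
    segments.append({"flags": "SYN+ACK", "seq": y, "ack": x + 1})   # 2: server
    segments.append({"flags": "ACK", "seq": x + 1, "ack": y + 1})   # 3: client
    return segments

syn, synack, ack = three_way_handshake(x=100, y=300)
# the ACK always names the *next* sequence number expected
synack["ack"]   # 101, i.e. x + 1
ack["ack"]      # 301, i.e. y + 1
```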
Data Transfer
After connection is established, bidirectional data transfer can take place.
The client and server can send data and acknowledgments in both directions.
The data traveling in the same direction as an acknowledgment are carried on
the same segment.
The acknowledgment is piggybacked with the data.
Connection Termination
Connection termination or teardown can be done in two ways:
Three-way Close and Half-Close
Send Buffer
Sending TCP maintains send buffer which contains 3 segments
(1) acknowledged data
(2) unacknowledged data
(3) data to be transmitted.
Send buffer maintains three pointers
(1) LastByteAcked, (2) LastByteSent, and (3)
LastByteWritten such that:
LastByteAcked ≤ LastByteSent ≤ LastByteWritten
A byte can be sent only after being written and only a sent byte can be
acknowledged.
Bytes to the left of LastByteAcked are not kept as it had been acknowledged.
Receive Buffer
Receiving TCP maintains receive buffer to hold data even if it arrives out-of-order.
Receive buffer maintains three pointers namely
(1) LastByteRead, (2) NextByteExpected, and (3) LastByteRcvd
such that:
LastByteRead ≤ NextByteExpected ≤ LastByteRcvd + 1
A byte cannot be read until that byte and all preceding bytes have been received.
If data is received in order, then NextByteExpected = LastByteRcvd + 1
Bytes to the left of LastByteRead are not buffered, since it is read by the application.
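The send-buffer pointer invariant can be sketched with a small class whose assertions enforce LastByteAcked ≤ LastByteSent ≤ LastByteWritten; the byte counts below are invented for illustration:

```python
class SendBuffer:
    """Sketch of the sender-side pointers and their invariant."""

    def __init__(self):
        self.last_byte_acked = 0
        self.last_byte_sent = 0
        self.last_byte_written = 0

    def write(self, n):
        """Application writes n more bytes into the buffer."""
        self.last_byte_written += n

    def send(self, n):
        """Only bytes that have been written can be sent."""
        assert self.last_byte_sent + n <= self.last_byte_written
        self.last_byte_sent += n

    def ack(self, n):
        """Only bytes that have been sent can be acknowledged;
        bytes to the left of LastByteAcked need no longer be buffered."""
        assert self.last_byte_acked + n <= self.last_byte_sent
        self.last_byte_acked += n

buf = SendBuffer()
buf.write(1000)   # application wrote bytes 1..1000
buf.send(600)     # bytes 1..600 are on the wire
buf.ack(400)      # bytes 1..400 acknowledged
# pointers now satisfy 400 <= 600 <= 1000
```

The receive-buffer invariant LastByteRead ≤ NextByteExpected ≤ LastByteRcvd + 1 could be enforced the same way with a mirror-image class.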
TCP TRANSMISSION
TCP has the mechanisms to trigger the transmission of a segment.
They are
o Maximum Segment Size (MSS) - Silly Window Syndrome
o Timeout - Nagle’s Algorithm
TCP CONGESTION CONTROL
Congestion occurs if load (number of packets sent) is greater than capacity of the
network (number of packets a network can handle).
When load is less than network capacity, throughput increases proportionally.
When load exceeds capacity, queues become full and the routers discard some
packets and throughput declines sharply.
When too many packets are contending for the same link
o The queue overflows
o Packets get dropped
o Network is congested
Network should provide a congestion control mechanism to deal with such a
situation.
TCP maintains a variable called CongestionWindow for each connection.
TCP Congestion Control mechanisms are:
Slow Start
Slow start is used to increase CongestionWindow exponentially from a cold start.
Source TCP initializes CongestionWindow to one packet.
TCP doubles the number of packets sent every RTT on successful transmission.
When ACK arrives for first packet TCP adds 1 packet to CongestionWindow and
sends two packets.
When two ACKs arrive, TCP increments CongestionWindow by 2 packets and sends
four packets and so on.
Instead of sending entire permissible packets at once (bursty traffic), packets are sent
in a phased manner, i.e., slow start.
Initially TCP has no idea about congestion, henceforth it increases
CongestionWindow rapidly until there is a timeout. On timeout:
CongestionThreshold = CongestionWindow/ 2
CongestionWindow = 1
Slow start is repeated until CongestionWindow reaches CongestionThreshold and
thereafter 1 packet per RTT.
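The slow-start behavior above can be simulated in a few lines; the threshold and round counts are invented for illustration:

```python
def slow_start_trace(threshold, rounds):
    """cwnd (in packets) doubles each RTT until it reaches the
    threshold, then grows by 1 packet per RTT."""
    cwnd, trace = 1, []
    for _ in range(rounds):
        trace.append(cwnd)
        cwnd = min(cwnd * 2, threshold) if cwnd < threshold else cwnd + 1
    return trace

def on_timeout(cwnd):
    """On a timeout, the threshold is set to half the current window
    and slow start restarts from one packet."""
    return max(cwnd // 2, 1), 1   # (CongestionThreshold, CongestionWindow)

slow_start_trace(threshold=8, rounds=6)   # [1, 2, 4, 8, 9, 10]
on_timeout(cwnd=10)                       # (5, 1)
```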
Fast Retransmit
In fast retransmit, the sender retransmits a segment after receiving three duplicate
ACKs, without waiting for the retransmission timer to expire.
For example, packets 1 and 2 are received whereas packet 3 gets lost.
o Receiver sends a duplicate ACK for packet 2 when packet 4 arrives.
o Sender receives 3 duplicate ACKs after sending packet 6 retransmits packet 3.
o When packet 3 is received, receiver sends cumulative ACK up to packet 6.
The congestion window trace will look like
TCP CONGESTION AVOIDANCE
Congestion avoidance mechanisms prevent congestion before it actually occurs.
These mechanisms predict when congestion is about to happen and then to reduce
the rate at which hosts send data just before packets start being discarded.
TCP creates loss of packets in order to determine bandwidth of the connection.
Routers help the end nodes by intimating when congestion is likely to occur.
Congestion-avoidance mechanisms are:
o DEC bit - Destination Experiencing Congestion Bit
o RED - Random Early Detection
DECbit
The idea is to evenly split the responsibility for congestion control between the
routers and the end nodes.
Each router monitors the load it is experiencing and explicitly notifies the end nodes
when congestion is about to occur.
This notification is implemented by setting a binary congestion bit in the packets that
flow through the router; hence the name DECbit.
The destination host then copies this congestion bit into the ACK it sends back to the
source.
The source checks how many ACKs have the DECbit set for the previous window's packets.
If less than 50% of the ACKs have the bit set, then the source increases its congestion
window by 1 packet.
Otherwise, the source decreases its congestion window to 0.875 times its previous value.
Using a queue length of 1 as the trigger for setting the congestion bit.
A router sets this bit in a packet if its average queue length is greater than or equal to
1 at the time the packet arrives.
Computing average queue length at a router using DECbit:
The average queue length is measured over a time interval that includes the
last busy + idle cycle and the current busy cycle.
The router calculates the average queue length by dividing the area under the
queue-length curve by the length of the time interval.
RED (Random Early Detection)
DECbit may lead to a tail drop policy, whereas RED drops packets randomly,
based on a drop probability: each arriving packet is dropped with some
drop probability whenever the queue length exceeds some drop level. This
idea is called early random drop.
RED has two queue length thresholds that trigger certain activity: MinThreshold
and MaxThreshold.
When a packet arrives at the gateway, RED compares AvgLen with these two
values according to the following rules:
o If AvgLen ≤ MinThreshold, queue the packet.
o If MinThreshold < AvgLen < MaxThreshold, calculate probability P and drop the arriving packet with probability P.
o If MaxThreshold ≤ AvgLen, drop the arriving packet.
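A sketch of the RED decision, assuming the standard linear drop-probability rule between the two thresholds (a real RED gateway also adjusts P by the count of packets queued since the last drop):

```python
import random

def red_decision(avg_len, min_th, max_th, max_p, rng=random.random):
    """Queue below MinThreshold, drop above MaxThreshold, and in
    between drop with a probability growing linearly from 0 to MaxP."""
    if avg_len <= min_th:
        return "queue"
    if avg_len >= max_th:
        return "drop"
    p = max_p * (avg_len - min_th) / (max_th - min_th)
    return "drop" if rng() < p else "queue"

red_decision(3, min_th=5, max_th=15, max_p=0.02)    # "queue"
red_decision(20, min_th=5, max_th=15, max_p=0.02)   # "drop"
```

Passing a fixed `rng` makes the in-between region deterministic, which is handy for seeing the probabilistic branch in isolation.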
SCTP SERVICES
SCTP
Stream Control Transmission Protocol (SCTP) is a reliable, message-oriented
transport layer protocol.
SCTP has mixed features of TCP and UDP.
SCTP maintains the message boundaries and detects the lost data, duplicate data as
well as out-of-order data.
SCTP provides the Congestion control as well as Flow control.
SCTP is especially designed for internet applications as well as multimedia
communication.
Process-to-Process Communication
SCTP provides process-to-process communication.
Multiple Streams
SCTP allows multistream service in each connection, which is called association in SCTP
terminology.
If one of the streams is blocked, the other streams can still deliver their data.
Multihoming
An SCTP association supports multihoming service.
The sending and receiving host can define multiple IP addresses in each end for an
association.
In this fault-tolerant approach, when one path fails, another interface can be used for data
delivery without interruption.
Full-Duplex Communication
SCTP offers full-duplex service, where data can flow in both directions at the same time.
Each SCTP endpoint has a sending and a receiving buffer, and packets are sent in both
directions.
Connection-Oriented Service
SCTP is a connection-oriented protocol.
In SCTP, a connection is called an association.
If a client wants to send messages to and receive messages from a server, the steps are:
Step1: The two SCTPs establish the connection with each other.
Step2: Once the connection is established, the data gets exchanged in both the directions.
Step3: Finally, the association is terminated.
Reliable Service
SCTP is a reliable transport protocol.
It uses an acknowledgment mechanism to check the safe and sound arrival of data.
An SCTP packet has a mandatory general header and a set of blocks called chunks.
General Header
The general header (packet header) defines the end points of each association to
which the packet belongs
It guarantees that the packet belongs to a particular association
It also preserves the integrity of the contents of the packet including the header itself.
There are four fields in the general header.
Source port
This field identifies the sending port.
Destination port
This field identifies the receiving port that hosts use to route the packet to the
appropriate endpoint/application.
Verification tag
A 32-bit random value created during initialization to distinguish stale packets
from a previous connection.
Checksum
The next field is a checksum. The size of the checksum is 32 bits. SCTP uses
CRC-32 Checksum.
Chunks
Control information or user data are carried in chunks.
Chunks have a common layout.
The first three fields are common to all chunks; the information field depends on the type
of chunk.
The type field can define up to 256 types of chunks. Only a few have been defined so far;
the rest are reserved for future use.
The flag field defines special flags that a particular chunk may need.
The length field defines the total size of the chunk, in bytes, including the type, flag, and
length fields.
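A minimal sketch of parsing the general header and the common chunk layout, assuming network byte order and the field sizes described above (a 12-byte general header, a 4-byte chunk header, and chunks padded to 4-byte boundaries on the wire):

```python
import struct

def parse_sctp(packet: bytes):
    """Parse the 12-byte SCTP general header and the chunk headers after it.

    Layout (network byte order): source port (16 bits), destination port
    (16 bits), verification tag (32 bits), checksum (32 bits); then chunks,
    each beginning with type (8), flags (8), length (16, which includes
    these 4 header bytes).
    """
    src, dst, vtag, csum = struct.unpack_from("!HHII", packet, 0)
    chunks, off = [], 12
    while off + 4 <= len(packet):
        ctype, flags, length = struct.unpack_from("!BBH", packet, off)
        chunks.append((ctype, flags, length))
        off += (length + 3) & ~3          # advance past 4-byte padding
    return {"src": src, "dst": dst, "vtag": vtag, "checksum": csum,
            "chunks": chunks}
```

The port numbers and chunk type used in any test packet are illustrative; real packets also carry a CRC-32 checksum, which this sketch does not verify.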
Types of Chunks
An SCTP association may send many packets, a packet may contain several chunks, and
chunks may belong to different streams.
SCTP defines two types of chunks - Control chunks and Data chunks.
A control chunk controls and maintains the association.
A data chunk carries user data.
SCTP ASSOCIATION
SCTP is a connection-oriented protocol.
A connection in SCTP is called an association to emphasize multihoming.
An SCTP association consists of three phases:
Association Establishment
Data Transfer
Association Termination
Association Establishment
Association establishment in SCTP requires a four-way handshake.
In this procedure, a client process wants to establish an association with a server
process using SCTP as the transport-layer protocol.
The SCTP server needs to be prepared to receive any association (passive open).
Association establishment, however, is initiated by the client (active open).
The client sends the first packet, which contains an INIT chunk.
The server sends the second packet, which contains an INIT ACK chunk. The INIT ACK
also sends a cookie that defines the state of the server at this moment.
The client sends the third packet, which includes a COOKIE ECHO chunk. This is a very
simple chunk that echoes, without change, the cookie sent by the server. SCTP allows the
inclusion of data chunks in this packet.
The server sends the fourth packet, which includes the COOKIE ACK chunk that
acknowledges the receipt of the COOKIE ECHO chunk. SCTP allows the inclusion of
data chunks with this packet.
Data Transfer
The whole purpose of an association is to transfer data between two ends.
After the association is established, bidirectional data transfer can take place.
The client and the server can both send data.
SCTP supports piggybacking.
Types of SCTP data Transfer :
1. Multihoming Data Transfer
Data transfer, by default, uses the primary address of the destination.
If the primary is not available, one of the alternative addresses is used.
This is called Multihoming Data Transfer.
2. Multistream Delivery
SCTP can support multiple streams, which means that the sender process can
define different streams and a message can belong to one of these streams.
Each stream is assigned a stream identifier (SI) which uniquely defines that
stream.
SCTP supports two types of data delivery in each stream: ordered (default) and
unordered.
Association Termination
In SCTP, either of the two parties involved in exchanging data (client or server)
can close the connection.
SCTP does not allow a “half closed” association. If one end closes the association, the
other end must stop sending new data.
If any data are left over in the queue of the recipient of the termination request, they are
sent and the association is closed.
Association termination uses three packets.
Receiver Site
The receiver has one buffer (queue) and three variables.
The queue holds the received data chunks that have not yet been read by the process.
The first variable holds the last TSN received, cumTSN.
The second variable holds the available buffer size: winSize.
The third variable holds the last accumulative acknowledgment, lastACK.
The following figure shows the queue and variables at the receiver site.
When the site receives a data chunk, it stores it at the end of the buffer (queue) and
subtracts the size of the chunk from winSize.
The TSN number of the chunk is stored in the cumTSN variable.
When the process reads a chunk, it removes it from the queue and adds the size of the
removed chunk to winSize (recycling).
When the receiver decides to send a SACK, it checks the value of lastACK; if it is
less than cumTSN, it sends a SACK with a cumulative TSN number equal to cumTSN.
It also includes the value of winSize as the advertised window size.
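The receiver-site bookkeeping described above can be sketched as follows; the variable names follow the text, and the class itself is purely illustrative:

```python
class SctpReceiver:
    """Sketch of the receiver-site queue and variables: cumTSN, winSize, lastACK."""

    def __init__(self, winsize):
        self.queue = []          # received chunks not yet read by the process
        self.cum_tsn = 0         # last TSN received
        self.win_size = winsize  # available buffer space in bytes
        self.last_ack = 0        # last cumulative acknowledgment sent

    def receive(self, tsn, size):
        self.queue.append((tsn, size))     # store at the end of the buffer
        self.win_size -= size              # subtract chunk size from winSize
        self.cum_tsn = tsn

    def read(self):
        tsn, size = self.queue.pop(0)
        self.win_size += size              # recycle the buffer space
        return tsn

    def send_sack(self):
        if self.last_ack < self.cum_tsn:   # only ack if something new arrived
            self.last_ack = self.cum_tsn
            return (self.cum_tsn, self.win_size)   # (ack, advertised window)
        return None
```

Reading a chunk returns its buffer space to winSize, which is what the text calls recycling.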
Sender Site
The sender has one buffer (queue) and three variables: curTSN, rwnd, and inTransit.
We assume each chunk is 100 bytes long. The buffer holds the chunks produced by the
process that either have been sent or are ready to be sent.
The first variable, curTSN, refers to the next chunk to be sent.
All chunks in the queue with a TSN less than this value have been sent, but not
acknowledged; they are outstanding.
The second variable, rwnd, holds the last value advertised by the receiver (in bytes).
The third variable, inTransit, holds the number of bytes in transit, bytes sent but not yet
acknowledged.
The following figure shows the queue and variables at the sender site.
A chunk pointed to by curTSN can be sent if the size of the data is less than or equal to
the quantity rwnd - inTransit.
After sending the chunk, the value of curTSN is incremented by 1 and now points to the
next chunk to be sent.
The value of inTransit is incremented by the size of the data in the transmitted chunk.
When a SACK is received, the chunks with a TSN less than or equal to the cumulative
TSN in the SACK are removed from the queue and discarded. The sender does not
have to worry about them anymore.
The value of inTransit is reduced by the total size of the discarded chunks.
The value of rwnd is updated with the value of the advertised window in the SACK.
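A corresponding sketch of the sender-site variables (curTSN, rwnd, inTransit), using the send test rwnd - inTransit and the 100-byte chunks assumed in the text; the class is illustrative:

```python
class SctpSender:
    """Sketch of the sender-site variables: curTSN, rwnd, inTransit."""
    CHUNK = 100                      # each chunk is 100 bytes, as in the text

    def __init__(self, rwnd):
        self.cur_tsn = 1             # TSN of the next chunk to be sent
        self.rwnd = rwnd             # last window advertised by the receiver
        self.in_transit = 0          # bytes sent but not yet acknowledged
        self.outstanding = []        # TSNs sent but not yet acknowledged

    def can_send(self):
        # a chunk may be sent only if its size fits in rwnd - inTransit
        return self.CHUNK <= self.rwnd - self.in_transit

    def send_chunk(self):
        self.outstanding.append(self.cur_tsn)
        self.cur_tsn += 1            # curTSN now points to the next chunk
        self.in_transit += self.CHUNK

    def on_sack(self, cum_tsn, adv_window):
        # discard every chunk with TSN <= cumulative TSN in the SACK
        acked = [t for t in self.outstanding if t <= cum_tsn]
        self.outstanding = [t for t in self.outstanding if t > cum_tsn]
        self.in_transit -= len(acked) * self.CHUNK
        self.rwnd = adv_window       # adopt the advertised window
```

After three sends against a 300-byte window the sender is blocked; a SACK covering two chunks reopens it.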
Receiver Site
The receiver stores all chunks that have arrived in its queue, including the
out-of-order ones. However, it leaves spaces for any missing chunks.
It discards duplicate messages, but keeps track of them for reports to the sender.
The following figure shows a typical design for the receiver site and the state of the
receiving queue at a particular point in time.
Sender Site
At the sender site, two buffers (queues) are needed: a sending queue and
a retransmission queue.
Three variables are used - rwnd, inTransit, and curTSN - as described in the
previous section.
The following figure shows a typical design.
The sending queue holds chunks 23 to 40.
The chunks 23 to 36 have already been sent, but not acknowledged; they
are outstanding chunks.
The curTSN points to the next chunk to be sent (37).
We assume that each chunk is 100 bytes, which means that 1400 bytes of
data (chunks 23 to 36) are in transit.
The sender at this moment has a retransmission queue.
When a packet is sent, a retransmission timer starts for that packet (all data chunks
in that packet).
Some implementations use one single timer for the entire association, but
other implementations use one timer for each packet.
The network layer works for the transmission of data from one host to the other located in
different networks.
It provides end to end communication by forwarding packets from source to destination.
It also takes care of packet routing i.e., selection of the shortest path to transmit the packet,
from the number of routes available.
The sender & receiver’s IP addresses are placed in the header by the network layer.
Advantages of Switching:
o Switch increases the bandwidth of the network.
o It reduces the workload on individual PCs as it sends the information to only
that device which has been addressed.
o It increases the overall performance of the network by reducing the traffic on the
network.
o There will be less frame collision as switch creates the collision domain for
each connection.
Disadvantages of Switching:
o A Switch is more expensive than network bridges.
o A Switch cannot determine the network connectivity issues easily.
o Proper designing and configuration of the switch are required to handle
multicast packets.
o Packet switching is a switching technique in which the message is not sent in
one go; it is divided into smaller pieces, which are sent individually.
o The message is split into smaller pieces known as packets, and each packet is
given a unique number so that its order can be identified at the receiving end.
o Every packet contains some information in its headers, such as the source
address, destination address and sequence number.
o Packets travel across the network, each taking the shortest available path.
o All the packets are reassembled at the receiving end in the correct order.
o If any packet is missing or corrupted, a message is sent asking the sender to
resend it.
o Once all the packets have arrived in the correct order, an acknowledgment
message is sent.
Routing Table
In this type of network, each switch (or packet switch) has a routing table which is based
on the destination address. The routing tables are dynamic and are updated periodically.
The destination addresses and the corresponding forwarding output ports are recorded in
the tables.
Example:
Source A sends a frame to Source B through Switch 1, Switch 2 and Switch 3.
IPV4 ADDRESSES
The identifier used in the IP layer of the TCP/IP protocol suite to identify the
connection of each device to the Internet is called the Internet address or IP
address.
Internet Protocol version 4 (IPv4) is the fourth version in the development of the
Internet Protocol (IP) and the first version of the protocol to be widely
deployed.
IPv4 is described in IETF RFC 791, published in September 1981.
The IP address is the address of the connection, not the host or the router. An
IPv4 address is a 32-bit address that uniquely and universally defines the
connection.
If the device is moved to another network, the IP address may be changed.
IPv4 addresses are unique in the sense that each address defines one, and only
one, connection to the Internet.
If a device has two connections to the Internet, via two networks, it has two
IPv4 addresses.
IPv4 addresses are universal in the sense that the addressing system must be
accepted by any host that wants to be connected to the Internet.
In binary notation, an IPv4 address is displayed as 32 bits. To make the address more
readable, one or more spaces are usually inserted between bytes (8 bits).
In dotted-decimal notation, IPv4 addresses are usually written in decimal form with a
decimal point (dot) separating the bytes. Each number in the dotted-decimal notation
is in the range 0 to 255.
The first part of the address, called the prefix, defines the network (Net ID); the
second part of the address, called the suffix, defines the node (Host ID).
The prefix length is n bits and the suffix length is (32 - n) bits.
CLASSFUL ADDRESSING
An IPv4 address is 32-bit long (4 bytes).
Class A
In Class A, an IP address is assigned to those networks that contain a large
number of hosts.
The network ID is 8 bits long.
The host ID is 24 bits long.
In Class A, the first (highest order) bit of the first octet is always set to 0,
and the remaining 7 bits determine the network ID.
The remaining 24 bits determine the host ID in any network.
The total number of networks in Class A = 2^7 = 128 network addresses
The total number of hosts in Class A = 2^24 - 2 = 16,777,214 host addresses
Class B
In Class B, an IP address is assigned to those networks that range from small-
sized to large-sized networks.
The Network ID is 16 bits long.
The Host ID is 16 bits long.
In Class B, the two higher order bits of the first octet are always set to 10,
and the remaining 14 bits determine the network ID.
The other 16 bits determine the host ID.
The total number of networks in Class B = 2^14 = 16,384 network addresses
The total number of hosts in Class B = 2^16 - 2 = 65,534 host addresses
Class C
In Class C, an IP address is assigned to only small-sized networks.
The Network ID is 24 bits long.
The host ID is 8 bits long.
In Class C, the three higher order bits of the first octet are always set to 110,
and the remaining 21 bits determine the network ID.
The 8 bits of the host ID determine the host in a network.
The total number of networks = 2^21 = 2,097,152 network addresses
The total number of hosts = 2^8 - 2 = 254 host addresses
Class D
In Class D, an IP address is reserved for multicast addresses.
It does not possess subnetting.
The four higher order bits of the first octet are always set to 1110, and the
remaining 28 bits identify the multicast group address.
Class E
In Class E, an IP address is used for the future use or for the research and
development purposes.
It does not possess any subnetting.
The four higher order bits of the first octet are always set to 1111; the
remaining bits are reserved for future use.
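The class ranges above can be summarized in a small helper that classifies an address by the value of its first octet (equivalently, by its leading bits); the sample addresses are illustrative:

```python
def address_class(dotted: str) -> str:
    """Classify an IPv4 address by the value of its first octet."""
    first = int(dotted.split(".")[0])
    if first < 128:
        return "A"   # leading bit  0    (first octet 0-127)
    if first < 192:
        return "B"   # leading bits 10   (128-191)
    if first < 224:
        return "C"   # leading bits 110  (192-223)
    if first < 240:
        return "D"   # leading bits 1110 (224-239, multicast)
    return "E"       # leading bits 1111 (240-255, reserved)

print(address_class("10.0.0.1"),
      address_class("172.16.5.9"),
      address_class("200.1.2.3"))   # A B C
```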
Address Depletion in Classful Addressing
The reason that classful addressing has become obsolete is address depletion.
Since the addresses were not distributed properly, the Internet was faced
with the problem of the addresses being rapidly used up.
This results in no more addresses available for organizations and individuals
that needed to be connected to the Internet.
To understand the problem, let us think about class A.
This class can be assigned to only 128 organizations in the world, and each
organization receives a single network with 16,777,216 addresses.
Since there may be only a few organizations that are this large, most of the
addresses in this class were wasted (unused).
Class B addresses were designed for midsize organizations, but many of the
addresses in this class also remained unused.
Class C addresses have a completely different flaw in design. The number of
addresses that can be used in each network (256) was so small that most
companies were not comfortable using a block in this address class.
Class E addresses were almost never used, wasting the whole class.
CLASSLESS ADDRESSING
In 1996, the Internet authorities announced a new architecture called classless
addressing.
In classless addressing, variable-length blocks are used that belong to no
classes.
We can have a block of 1 address, 2 addresses, 4 addresses, 128 addresses, and
so on.
In classless addressing, the whole address space is divided into variable length
blocks.
The prefix in an address defines the block (network); the suffix defines the
node (device).
Theoretically, we can have a block of 2^0, 2^1, 2^2, ..., 2^32 addresses.
The number of addresses in a block needs to be a power of 2. An organization
can be granted one block of addresses.
Address Aggregation
One of the advantages of the CIDR strategy is address aggregation
(Sometimes called address summarization or route summarization).
When blocks of addresses are combined to create a larger block, routing can be
done based on the prefix of the larger block.
ICANN assigns a large block of addresses to an ISP.
Each ISP in turn divides its assigned block into smaller subblocks and grants
the subblocks to its customers.
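Python's standard ipaddress module can illustrate block sizes and aggregation; the /24 block 205.16.37.0 and the /26 subdivision are hypothetical examples:

```python
import ipaddress

# an ISP granted a (hypothetical) /24 block: 2^(32-24) = 256 addresses
block = ipaddress.ip_network("205.16.37.0/24")
assert block.num_addresses == 256

# the ISP divides it into four /26 sub-blocks of 64 addresses each
subblocks = list(block.subnets(new_prefix=26))
print([str(s) for s in subblocks])

# aggregation: upstream routers need only one route for the whole /24
summary = list(ipaddress.collapse_addresses(subblocks))
print(str(summary[0]))
```

collapse_addresses merges contiguous blocks, so routing can be done on the prefix of the single larger block instead of four separate routes.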
This-host Address
The only address in the block 0.0.0.0/32 is called the this-host address.
It is used whenever a host needs to send an IP datagram but it does not know
its own address to use as the source address.
Limited-broadcast Address
The only address in the block 255.255.255.255/32 is called the limited-
broadcast address.
It is used whenever a router or a host needs to send a datagram to all devices in
a network.
The routers in the network, however, block the packet having this address as
the destination; the packet cannot travel outside the network.
Loopback Address
The block 127.0.0.0/8 is called the loopback address.
A packet with one of the addresses in this block as the destination address
never leaves the host; it will remain in the host.
Private Addresses
Four blocks are assigned as private addresses: 10.0.0.0/8, 172.16.0.0/12,
192.168.0.0/16, and 169.254.0.0/16.
Multicast Addresses
The block 224.0.0.0/4 is reserved for multicast addresses.
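Python's ipaddress module (standard library) can check membership in these special blocks; the sample addresses are illustrative:

```python
import ipaddress

# loopback block 127.0.0.0/8: packets with these destinations never leave the host
assert ipaddress.ip_address("127.0.0.1").is_loopback

# private blocks such as 10.0.0.0/8 and 192.168.0.0/16
assert ipaddress.ip_address("10.1.2.3").is_private
assert ipaddress.ip_address("192.168.0.5").is_private

# multicast block 224.0.0.0/4
assert ipaddress.ip_address("224.0.0.1").is_multicast

# limited-broadcast address: routers block it from leaving the network
print(ipaddress.ip_address("255.255.255.255"))
```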
The dynamic host configuration protocol is used to simplify the installation and
maintenance of networked computers.
DHCP is derived from an earlier protocol called BOOTP.
Ethernet addresses are configured into network by manufacturer and they are
unique.
IP addresses must be unique on a given internetwork but also must reflect the
structure of the internetwork
Most host Operating Systems provide a way to manually configure the IP
information for the host
Drawbacks of manual configuration:
A DHCP packet is actually sent using a protocol called the User Datagram
Protocol (UDP).
The Internet Protocol is the key tool used today to build scalable,
heterogeneous internetworks.
IP runs on all the nodes (both hosts and routers) in a collection of networks
IP SERVICE MODEL
Service Model defines the host-to-host services that we want to provide
The main concern in defining a service model for an internetwork is that we can
provide a host-to-host service only if this service can somehow be provided over
each of the underlying physical networks.
The Internet Protocol is the key tool used today to build scalable, heterogeneous
internetworks.
The IP service model can be thought of as having two parts: an addressing
scheme, which provides a way to identify all hosts in the internetwork, and a
datagram (connectionless) model of data delivery.
IP PACKET FORMAT / IP DATAGRAM FORMAT
A key part of the IP service model is the type of packets that can be carried.
FIELD DESCRIPTION
Version Specifies the version of IP. Two versions exist – IPv4 and IPv6.
HLen Specifies the length of the header
TOS An indication of the parameters of the quality of service
(Type of Service) desired such as Precedence, Delay, Throughput and Reliability.
Length Length of the entire datagram, including the header. The maximum
size of an IP datagram is 65,535 (2^16 - 1) bytes
Ident Uniquely identifies the datagram.
(Identification) Used for fragmentation and re-assembly.
Flags Used to control fragmentation. The "don't fragment" bit
forbids routers to fragment the packet; the "more fragments"
bit is 1 in every fragment except the last, where it is 0.
Offset Indicates where in the datagram, this fragment belongs.
(Fragmentation The fragment offset is measured in units of 8 octets
offset) (64 bits). The first fragment has offset zero.
TTL Indicates the maximum time the datagram is allowed to
(Time to Live) remain in the network. If this field contains the value zero, then
the datagram must be destroyed.
Protocol Indicates the next level protocol used in the data portion of the
datagram
Checksum Used to detect the processing errors introduced into the packet
Example:
The original packet starts at the client; the fragments are reassembled at the
server.
The value of the identification field is the same in all fragments, as is the value
of the flags field with the more bit set for all fragments except the last.
Also, the value of the offset field for each fragment is shown.
Although the fragments arrived out of order at the destination, they can be
correctly reassembled.
The value of the offset field is always relative to the original datagram.
Even if each fragment follows a different path and arrives out of order, the
final destination host can reassemble the original datagram from the
fragments received (if none of them is lost) using the following strategy:
Reassembly:
Reassembly is done at the receiving host and not at each router.
To enable these fragments to be reassembled at the receiving host, they all
carry the same identifier in the Ident field.
This identifier is chosen by the sending host and is intended to be unique
among all the datagrams that might arrive at the destination from this source
over some reasonable time period.
Since all fragments of the original datagram contain this identifier, the
reassembling host will be able to recognize those fragments that go
together.
However, if even a single fragment is lost, the receiver will still attempt to
reassemble the datagram; eventually it gives up and must garbage-collect the
resources that were used to perform the failed reassembly.
Hosts are now strongly encouraged to perform “path MTU discovery,” a
process by which fragmentation is avoided by sending packets that are small
enough to traverse the link with the smallest MTU in the path from sender to
receiver.
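As a sketch of the sender-side (or router-side) arithmetic, fragment offsets are computed in 8-octet units, so every fragment except the last must carry a multiple of 8 payload bytes; the 4000-byte payload and 1480-byte per-fragment capacity below are illustrative:

```python
def fragment(total_payload: int, mtu_payload: int):
    """Split a datagram payload into fragments.

    Returns a list of (offset_in_8_byte_units, payload_size, more_flag)
    tuples. The offset field counts 8-octet units, so each fragment's
    payload (except the last) is rounded down to a multiple of 8 bytes.
    """
    per_frag = (mtu_payload // 8) * 8    # round down to an 8-byte multiple
    frags, sent = [], 0
    while sent < total_payload:
        size = min(per_frag, total_payload - sent)
        more = (sent + size) < total_payload
        frags.append((sent // 8, size, 1 if more else 0))
        sent += size
    return frags

# a 4000-byte payload over links that fit 1480 payload bytes per fragment
print(fragment(4000, 1480))
# [(0, 1480, 1), (185, 1480, 1), (370, 1040, 0)]
```

The first fragment has offset zero, and the "more" flag is 0 only in the last fragment, matching the rules in the table above.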
IP SECURITY
There are three security issues that are particularly applicable to the IP protocol:
(1) Packet Sniffing (2) Packet Modification and (3) IP Spoofing.
Packet Sniffing
An intruder may intercept an IP packet and make a copy of it.
Packet sniffing is a passive attack, in which the attacker does not change the
contents of the packet.
This type of attack is very difficult to detect because the sender and the receiver
may never know that the packet has been copied.
Although packet sniffing cannot be stopped, encryption of the packet can make
the attacker’s effort useless.
The attacker may still sniff the packet, but the content is not detectable.
Packet Modification
The second type of attack is to modify the packet.
The attacker intercepts the packet, changes its contents, and sends the
newpacket to the receiver.
The receiver believes that the packet is coming from the original sender.
This type of attack can be detected using a data integrity mechanism.
The receiver, before opening and using the contents of the message, can use
this mechanism to make sure that the packet has not been changed during the
transmission.
IP Spoofing
An attacker can masquerade as somebody else and create an IP packet that
carries the source address of another computer.
An attacker can send an IP packet to a bank pretending that it is coming from
one of the customers.
This type of attack can be prevented using an origin authentication
mechanism
IP Sec
The IP packets today can be protected from the previously mentioned attacks
using a protocol called IPsec (IP Security).
This protocol is used in conjunction with the IP protocol.
The IPsec protocol creates a connection-oriented service between two entities in
which they can exchange IP packets without worrying about the three attacks
discussed above: packet sniffing, packet modification and IP spoofing.
IP Sec provides the following four services:
1) Defining Algorithms and Keys: The two entities that want to create a
secure channel between themselves can agree on some available
algorithms and keys to be used for security purposes.
2) Packet Encryption: The packets exchanged between two parties can be
encrypted for privacy using one of the encryption algorithms and a
shared key agreed upon in the first step. This makes the packet sniffing
attack useless.
3) Data Integrity: Data integrity guarantees that the packet is not
modified during the transmission. If the received packet does not pass
the data integrity test, it is discarded. This prevents the second
attack,packet modification.
4) Origin Authentication: IPsec can authenticate the origin of the packet
to be sure that the packet is not created by an imposter. This can
prevent IP spoofing attacks.
Ping
The ping program is used to find if a host is alive and responding.
The source host sends ICMP echo-request messages; the destination, if alive,
responds with ICMP echo-reply messages.
The ping program sets the identifier field in the echo-request and echo-reply
message and starts the sequence number from 0; this number is incremented by
1 each time a new message is sent.
The ping program can calculate the round-trip time.
It inserts the sending time in the data section of the message.
When the packet arrives, it subtracts the departure time from the arrival time to
get the round-trip time (RTT).
$ ping google.com
Traceroute or Tracert
The traceroute program in UNIX or tracert in Windows can be used to trace
the path of a packet from a source to the destination.
It can find the IP addresses of all the routers that are visited along the path.
The program is usually set to check for the maximum of 30 hops (routers) to be
visited.
The number of hops in the Internet is normally less than this.
$ traceroute google.com
FEATURES OF IPV6
1. Better header format - IPv6 uses a new header format in which options are
separated from the base header and inserted, when needed, between the base
header and the data. This simplifies and speeds up the routing process because
most of the options do not need to be checked by routers.
2. New options - IPv6 has new options to allow for additional functionalities.
3. Allowance for extension - IPv6 is designed to allow the extension of the
protocol if required by new technologies or applications.
4. Support for resource allocation - In IPv6, the type-of-service field has been
removed, but two new fields, traffic class and flow label, have been added to
enable the source to request special handling of the packet. This mechanism
can be used to support traffic such as real-time audio and video.
Additional Features:
1. Need to accommodate scalable routing and addressing
2. Support for real-time services
3. Security support
4. Autoconfiguration
The ability of hosts to automatically configure themselves with such
information as their own IP address and domain name.
5. Enhanced routing functionality, including support for mobile hosts
6. Transition from ipv4 to ipv6
ADDRESS SPACE ALLOCATION OF IPV6
IPv6 provides a 128-bit address space to handle up to 2^128 nodes.
IPv6 uses classless addressing, but classification is based on MSBs.
The address space is subdivided in various ways based on the leading bits.
The current assignment of prefixes is listed in Table
For example,
47CD:0000:0000:0000:0000:0000:A456:0124 → 47CD::A456:0124
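This zero compression can be reproduced with Python's ipaddress module (note that the module additionally drops leading zeros within each group and lowercases the hex digits):

```python
import ipaddress

full = "47CD:0000:0000:0000:0000:0000:A456:0124"
addr = ipaddress.ip_address(full)
print(addr.compressed)   # 47cd::a456:124
```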
Extension Headers
Extension header provides greater functionality to IPv6.
Base header may be followed by six extension headers.
Each extension header contains a NextHeader field to identify the header
following it.
ARP Operation
o ARP maintains a cache table in which MAC addresses are mapped to IP
addresses.
o If a host wants to send an IP datagram to another host, it first checks for a
mapping in the cache table.
o If no mapping is found, it needs to invoke the Address Resolution Protocol
over the network.
o It does this by broadcasting an ARP query onto the network.
o This query contains the target IP address.
o Each host receives the query and checks to see if it matches its IP address.
o If it does match, the host sends a response message that contains its link-
layer address (MAC Address) back to the originator of the query.
o The originator adds the information contained in this response to its ARP
table.
o For example,
To determine system B’s physical (MAC) address, system A broadcasts
an ARP request containing B’s IP address to all machines on its
network.
o All nodes except the destination discard the packet but update their ARP
table.
o Destination host (System B) constructs an ARP Response packet
o ARP Response is unicast and sent back to the source host (System A).
o Source stores target Logical & Physical address pair in its ARP table from
ARP Response.
o If target node does not exist on same network, ARP request is sent to
default router.
ARP Packet
RARP – Reverse ARP
o Reverse Address Resolution Protocol (RARP) allows a host
to convert its MAC address to the corresponding IP address.
UNIT IV – ROUTING
Routing and protocols: Unicast routing - Distance Vector
Routing - RIP - Link State Routing – OSPF
– Path-vector routing - BGP - Multicast Routing: DVMRP –
PIM.
NETWORK AS A GRAPH
The Figure below shows a graph representing a network.
The nodes of the graph, labeled A through G, may be hosts, switches, routers,
or networks.
The edges of the graph correspond to the network links.
Each edge has an associated cost.
The basic problem of routing is to find the lowest-cost path between any two
nodes, where the cost of a path equals the sum of the costs of all the edges that
make up the path.
This static approach has several problems:
It does not deal with node or link failures.
It does not consider the addition of new nodes or links.
It implies that edge costs cannot change.
For these reasons, routing is achieved by running routing protocols among the
nodes.
These protocols provide a distributed, dynamic way to solve the problem of
finding the lowest-cost path in the presence of link and node failures and
changing edge costs.
UNICAST ROUTING ALGORITHMS
There are three main classes of routing protocols:
1) Distance Vector Routing Algorithm – Routing Information Protocol
2) Link State Routing Algorithm – Open Shortest Path First Protocol
3) Path-Vector Routing Algorithm - Border Gateway Protocol
Initial State
The initial tables for all the nodes are given below
Each node sends its initial table (distance vector) to neighbors and receives
their estimate.
Node A sends its table to nodes B, C, E & F and receives tables from nodes B,
C, E & F.
Each node updates its routing table by comparing with each of its neighbor's
table
For each destination, Total Cost is computed as:
Total Cost = Cost (Node to Neighbor) + Cost (Neighbor to Destination)
If Total Cost < Cost then
Cost = Total Cost and NextHop = Neighbor
Node A learns from C's table to reach node D and from F's table to reach
node G.
Total Cost to reach node D via C = Cost (A to C) + Cost(C to D)
Cost = 1 + 1 = 2.
Since 2 < ∞, entry for destination D in A's table is changed to (D, 2, C)
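The update rule can be sketched in Python; the two tables at the end reproduce the step from the example in which A learns the route (D, 2, C), and the costs are illustrative:

```python
INF = float("inf")

def dv_update(my_table, neighbor, cost_to_neighbor, neighbor_table):
    """Apply: TotalCost = Cost(node -> neighbor) + Cost(neighbor -> dest);
    adopt (TotalCost, neighbor) whenever it beats the current entry."""
    changed = False
    for dest, (n_cost, _) in neighbor_table.items():
        total = cost_to_neighbor + n_cost
        if total < my_table.get(dest, (INF, None))[0]:
            my_table[dest] = (total, neighbor)   # (Cost, NextHop)
            changed = True
    return changed

# A's table before hearing from C; C reaches D at cost 1, A reaches C at cost 1
a_table = {"C": (1, "C"), "D": (INF, None)}
c_table = {"D": (1, "D")}
dv_update(a_table, "C", 1, c_table)
print(a_table["D"])   # (2, 'C') -- the (D, 2, C) entry from the example
```

Repeating this exchange at every node until no table changes is exactly the convergence condition described above.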
Each node builds a complete routing table after a few exchanges with its
neighbors.
System stabilizes when all nodes have complete routing information, i.e.,
convergence.
Routing tables are exchanged periodically or in case of triggered update.
The final distances stored at each node are given below:
Example
Routers advertise the cost of reaching networks. Cost of reaching each link is 1
hop. For example, router C advertises to A that it can reach network 2, 3 at cost
0 (directly connected), networks 5, 6 at cost 1 and network 4 at cost 2.
Each router updates cost and next hop for each network number.
Infinity is defined as 16, i.e., any route cannot have more than 15 hops.
Therefore RIP can be implemented on small-sized networks only.
Advertisements are sent every 30 seconds or in case of triggered update.
Reliable Flooding
Each node sends its LSP out on each of its directly connected links.
When a node receives LSP of another node, checks if it has an LSP already for
that node.
If not, it stores and forwards the LSP on all other links except the incoming
one.
Else if the received LSP has a bigger sequence number, then it is stored and
forwarded. Older LSP for that node is discarded.
Otherwise discard the received LSP, since it is not latest for that node.
Thus, recent LSP of a node eventually reaches all nodes, i.e., reliable flooding.
Route Calculation
Each node knows the entire topology, once it has LSP from every other node.
Forward search algorithm is used to compute routing table from the received
LSPs.
Each node maintains two lists, namely Tentative and Confirmed with entries of
the form (Destination, Cost, NextHop).
Example :
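The forward search (essentially Dijkstra's shortest-path computation over the collected LSPs) can be sketched as follows; the small three-node topology at the end is illustrative:

```python
import heapq

def forward_search(graph, source):
    """Dijkstra-style forward search over the LSP database.

    graph maps node -> {neighbor: cost}. The result is the Confirmed
    list: destination -> (Cost, NextHop from the source)."""
    confirmed = {source: (0, None)}
    tentative = [(0, source, None)]      # (cost, node, first hop)
    while tentative:
        cost, node, hop = heapq.heappop(tentative)
        if cost > confirmed[node][0]:
            continue                     # stale Tentative entry
        for nbr, w in graph[node].items():
            new_cost = cost + w
            new_hop = hop if hop is not None else nbr
            if nbr not in confirmed or new_cost < confirmed[nbr][0]:
                confirmed[nbr] = (new_cost, new_hop)
                heapq.heappush(tentative, (new_cost, nbr, new_hop))
    return confirmed

# illustrative topology: A-B cost 1, B-C cost 2, A-C cost 4
lsdb = {"A": {"B": 1, "C": 4}, "B": {"A": 1, "C": 2}, "C": {"A": 4, "B": 2}}
table = forward_search(lsdb, "A")
print(table)   # {'A': (0, None), 'B': (1, 'B'), 'C': (3, 'B')}
```

Note that C is reached via B at cost 3, beating the direct link of cost 4, which is the kind of decision the Tentative/Confirmed lists exist to make.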
OPEN SHORTEST PATH FIRST PROTOCOL (OSPF)
OSPF is a non-proprietary widely used link-state routing protocol.
OSPF Features are:
Authentication―A malicious host can collapse a network by advertising
that it can reach every host with cost 0. Such disasters are averted by
authenticating routing updates.
Additional hierarchy―Domain is partitioned into areas, i.e., OSPF is
more scalable.
Load balancing―Multiple routes to the same place are assigned same
cost. Thus, traffic is distributed evenly.
Example:
The Figure below shows a small internet with only five nodes.
Each source has created its own spanning tree that meets its policy.
The policy imposed by all sources is to use the minimum number of nodes to
reach a destination.
The spanning tree selected by A and E is such that the communication does not
pass through D as a middle node.
Similarly, the spanning tree selected by B is such that the communication does
not pass through C as a middle node.
Each AS has a border router (gateway), by which packets enter and leave that
AS. In the above figure, R3 and R4 are border routers.
One of the routers in each autonomous system is designated as the BGP speaker.
BGP speakers exchange reachability information with one another; this
exchange is known as an external BGP session.
BGP advertises complete path as enumerated list of AS (path vector) to reach a
particular network.
Paths must be without any loop, i.e., AS list is unique.
For example, backbone network advertises that network 128.96 and 192.4.153
can be reached along the path <AS1, AS2, AS4>.
If there are multiple routes to a destination, BGP speaker chooses one based on
policy.
Speakers need not advertise any route to a destination, even if one exists.
Advertised paths can be cancelled, if a link/node on the path goes down. This
negative advertisement is known as withdrawn route.
Routes are not repeatedly sent. If there is no change, keep-alive messages are
sent.
Pruning:
Prune messages are sent from routers receiving multicast traffic for which
they have no interested group members downstream.
Grafting:
Graft messages are used after a branch has been pruned back, to re-attach it
when receivers appear again.
Shared Tree
When a router sends Join message for group G to RP, it goes through a set of
routers.
Join message is wildcarded (*), i.e., it is applicable to all senders.
Routers create an entry (*, G) in its forwarding table for the shared tree.
Interface on which the Join arrived is marked to forward packets for that
group.
Forwards Join towards rendezvous router RP.
Eventually, the message arrives at RP. Thus a shared tree with RP as root is
formed.
Example
Router R4 sends Join message for group G to rendezvous router RP.
Join message is received by router R2. It makes an entry (*, G) in its table and
forwards the message to RP.
When R5 sends a Join message for group G, R2 does not forward the Join. It
adds an outgoing interface to the forwarding table created for that group.
As routers send Join message for a group, branches are added to the tree, i.e.,
shared.
Multicast packets sent from hosts are forwarded to designated router RP.
Source-Specific Tree
RP can force routers to know about group G by sending a Join message toward
the sending host, so that tunneling can be avoided.
Intermediary routers create a sender-specific entry (S, G) in their tables. Thus
a source-specific route from R1 to RP is formed.
If there is a high rate of packets sent from a sender to a group G, the shared
tree is replaced by a source-specific tree with the sender as root.
Example
1. FRAMING
The data-link layer packs the bits of a message into frames, so that each
frame is distinguishable from another.
Although the whole message could be packed in one frame, that is not
normally done.
One reason is that a frame can be very large, making flow and error control
very inefficient.
When a message is carried in one very large frame, even a single-bit error
would require the retransmission of the whole frame.
When a message is divided into smaller frames, a single-bit error affects
only that small frame.
Framing in the data-link layer separates a message from one source to a
destination by adding a sender address and a destination address.
The destination address defines where the packet is to go; the sender
address helps the recipient acknowledge the receipt.
Frame Size
Frames can be of fixed or variable size.
Frames of fixed size are called cells. In fixed-size framing, there is no need
for defining the boundaries of the frames; the size itself can be used as a
delimiter.
In variable-size framing, we need a way to define the end of one frame and
the beginning of the next. Two approaches were used for this purpose: a
character-oriented approach and a bit-oriented approach.
Character-Oriented Framing
In character-oriented (or byte-oriented) framing, data to be carried are 8-bit
characters.
To separate one frame from the next, an 8-bit (1-byte) flag is added at the
beginning and the end of a frame.
The flag, composed of protocol-dependent special characters, signals the
start or end of a frame.
Any character used for the flag could also be part of the information.
If this happens, when the receiver encounters this pattern in the middle of
the data, it thinks it has reached the end of the frame.
To fix this problem, a byte-stuffing strategy was added to character-
oriented framing.
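The byte-stuffing idea can be sketched as follows: whenever the flag or escape byte occurs in the data, an escape byte is inserted before it. The specific values 0x7E (flag) and 0x7D (escape) are illustrative choices (they are the ones PPP/HDLC happen to use); this is a simplified sketch of the insertion step only.

```python
FLAG, ESC = 0x7E, 0x7D   # example flag and escape bytes (PPP-style values)

def byte_stuff(data: bytes) -> bytes:
    """Escape any flag or escape byte inside the data, then add frame flags."""
    out = bytearray()
    for b in data:
        if b in (FLAG, ESC):
            out.append(ESC)                  # stuff an escape before it
        out.append(b)
    return bytes([FLAG]) + bytes(out) + bytes([FLAG])
```

For example, a data byte equal to the flag (0x7E) is sent as 0x7D 0x7E, so the receiver never mistakes it for a frame boundary.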
Bit-Oriented Framing
In bit-oriented framing, the data section of a frame is a sequence of bits to
be interpreted by the upper layer as text, graphic, audio, video, and so on.
In addition to headers and trailers, we still need a delimiter to separate one
frame from the other.
Most protocols use a special 8-bit pattern flag, 01111110, as the delimiter to
define the beginning and the end of the frame
If the flag pattern appears in the data, the receiver must be informed
that this is not the end of the frame.
This is done by stuffing 1 single bit (instead of 1 byte) to prevent the
pattern from looking like a flag. The strategy is called bit stuffing.
Bit Stuffing
Bit stuffing is the process of adding one extra 0 whenever five
consecutive 1s follow a 0 in the data, so that the receiver does not
mistake the pattern 0111110 for a flag.
In bit stuffing, if 0 and five consecutive 1 bits are encountered, an extra 0 is
added.
This extra stuffed bit is eventually removed from the data by the receiver.
The extra bit is added after one 0 followed by five 1’s regardless of the
value of the next bit.
This guarantees that the flag field sequence does not inadvertently appear in
the frame.
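The sender- and receiver-side rules above can be sketched directly: the sender inserts a 0 after every run of five 1s, and the receiver drops the bit that follows such a run. Bits are represented as a string of '0'/'1' characters purely for illustration.

```python
def bit_stuff(bits: str) -> str:
    """Sender side: insert a 0 after every five consecutive 1s."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:            # five 1s seen: stuff a 0 regardless of next bit
            out.append("0")
            run = 0
    return "".join(out)

def bit_unstuff(bits: str) -> str:
    """Receiver side: remove the stuffed 0 that follows five consecutive 1s."""
    out, run, i = [], 0, 0
    while i < len(bits):
        out.append(bits[i])
        run = run + 1 if bits[i] == "1" else 0
        i += 1
        if run == 5:            # the next bit is the stuffed 0: skip it
            i += 1
            run = 0
    return "".join(out)
```

Note that even the flag pattern 01111110, if it occurs in the data, is transmitted as 011111010 and recovered intact by the receiver.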
FLOW CONTROL
o Flow control refers to a set of procedures used to restrict the amount
of data that the sender can send before waiting for acknowledgment.
o The receiving device has limited speed and limited memory to store the
data.
o Therefore, the receiving device must be able to inform the sending device
to stop the transmission temporarily before the limits are reached.
o It requires a buffer, a block of memory for storing the information until it
is processed.
o If the acknowledgement is not received within the allotted time, then the
sender assumes that the frame is lost during the transmission, so it will
retransmit the frame.
o The acknowledgement may not arrive because of the following three
scenarios :
1. Original frame is lost
2. ACK is lost
3. ACK arrives after the timeout
Advantage of Stop-and-wait
o The Stop-and-wait method is simple as each frame is checked
and acknowledged before the next frame is sent
Disadvantages of Stop-And-Wait
o In stop-and-wait, at any point in time, there is only one frame that is sent
and waiting to be acknowledged.
o This is not a good use of transmission medium.
o To improve efficiency, multiple frames should be in transition while
waiting for ACK.
PIGGYBACKING
SLIDING WINDOW
o The Sliding Window is a method of flow control in which a sender can
transmit several frames before getting an acknowledgement.
o In Sliding Window control, multiple frames can be sent one after
another, due to which the capacity of the communication channel can be
utilized efficiently.
o A single ACK can acknowledge multiple frames.
o Sliding Window refers to imaginary boxes at both the sender and receiver
end.
o The window can hold the frames at either end, and it provides the upper
limit on the number of frames that can be transmitted before the
acknowledgement.
o Frames can be acknowledged even when the window is not completely filled.
o The frames in the window are numbered modulo-n, which means
they are numbered from 0 to n-1.
o For example, if n = 8, the frames are numbered from
0,1,2,3,4,5,6,7,0,1,2,3,4,5,6,7,0,1........
o The size of the window is n-1. Therefore, a maximum of n-1
frames can be sent before an acknowledgement.
o When the receiver sends the ACK, it includes the number of the next frame
that it wants to receive.
o For example, to acknowledge the string of frames ending with frame
number 4, the receiver will send the ACK containing the number 5.
o When the sender sees the ACK with the number 5, it knows that the
frames from 0 through 4 have been received.
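The modulo-n acknowledgement arithmetic above can be sketched in a few lines. This is only an illustration of the cumulative-ACK calculation, assuming n = 8 sequence numbers as in the example.

```python
SEQ_MOD = 8          # n = 8: frames numbered 0..7, window size n-1 = 7

def frames_acknowledged(send_base, ack):
    """The ACK carries the number of the next frame the receiver expects,
    so it cumulatively confirms frames send_base .. ack-1 (modulo-n)."""
    return (ack - send_base) % SEQ_MOD

# ACK 5 after sending frames 0..4 confirms all five of them.
```

The modulo arithmetic also handles wrap-around: an ACK of 1 while frames 6, 7, 0 are outstanding confirms three frames.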
3. ERROR CONTROL
PARITY CHECK
One bit, called parity bit is added to every data unit so that the total number
of 1’s in the data unit becomes even (or) odd.
The source then transmits this data via a link, and bits are checked and
verified at the destination.
Data is considered accurate if the number of bits (even or odd) matches the
number transmitted from the source.
This technique is the most common and least complex method.
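The even-parity scheme described above can be sketched as follows, with the dataword represented as a string of bits for illustration.

```python
def add_even_parity(bits: str) -> str:
    """Sender: append a parity bit so the total number of 1s is even."""
    return bits + ("1" if bits.count("1") % 2 else "0")

def check_even_parity(word: str) -> bool:
    """Receiver: accept the word if its count of 1s is even."""
    return word.count("1") % 2 == 0
```

For example, 1011 (three 1s) is sent as 10111; if any single bit is flipped in transit, the count of 1s becomes odd and the receiver rejects the word.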
Steps Involved:
Consider the original message (dataword) as M(x), consisting of ‘k’ bits, and
the divisor C(x), consisting of ‘n+1’ bits.
The original message M(x) is appended by ‘n’ bits of zeros. Let us call
this zero-extended message as T(x).
Divide T(x) by C(x) and find the remainder.
The division operation is performed using XOR operation.
The resultant remainder is appended to the original message M(x) as CRC
and sent by the sender(codeword).
Example 1:
Consider the Dataword / Message M(x) = 1001
Divisor C(x) = 1011 (n+1=4)
Appending ‘n’ zeros to the original Message M(x).
The resultant message is called T(x) = 1001000 (here n = 3).
Divide T(x) by the divisor C(x) using XOR operation.
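The XOR division can be sketched directly, using bit strings for illustration. With M(x) = 1001 and C(x) = 1011, the remainder works out to 110, so the transmitted codeword is 1001110; the receiver divides the whole codeword by C(x) and accepts it if the remainder is zero.

```python
def crc_remainder(message: str, divisor: str) -> str:
    """Sender: modulo-2 (XOR) long division of T(x) = message + n zeros."""
    n = len(divisor) - 1
    bits = list(message + "0" * n)       # append n zeros to form T(x)
    for i in range(len(message)):
        if bits[i] == "1":               # divide only when the leading bit is 1
            for j, d in enumerate(divisor):
                bits[i + j] = str(int(bits[i + j]) ^ int(d))
    return "".join(bits[-n:])            # the n-bit remainder is the CRC

def crc_check(codeword: str, divisor: str) -> bool:
    """Receiver: remainder of codeword / divisor must be all zeros."""
    bits = list(codeword)
    for i in range(len(codeword) - len(divisor) + 1):
        if bits[i] == "1":
            for j, d in enumerate(divisor):
                bits[i + j] = str(int(bits[i + j]) ^ int(d))
    return "1" not in bits
```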
Sender Side:
Receiver Side:
(For Both Case – Without Error and With Error)
Polynomials
A pattern of 0s and 1s can be represented as a polynomial with coefficients
of 0 and 1.
The power of each term shows the position of the bit; the coefficient shows
the value of the bit.
INTERNET CHECKSUM
STOP-AND-WAIT ARQ
Stop-and-wait ARQ is a technique used to retransmit the data in case of
damaged or lost frames.
This technique works on the principle that the sender will not transmit the
next frame until it receives the acknowledgement of the last transmitted
frame.
o Lost Frame: Sender is equipped with the timer and starts when the frame is
transmitted. Sometimes the frame has not arrived at the receiving end so
that it cannot be acknowledged either positively or negatively. The sender
waits for acknowledgement until the timer goes off. If the timer goes off, it
retransmits the last transmitted frame.
1. GO-BACK-N ARQ
o In the Go-Back-N ARQ protocol, if one frame is lost or damaged, the sender
retransmits all frames sent after the last frame for which it received a
positive ACK.
o In the above figure, three frames (Data 0,1,2) have been transmitted before
an error discovered in the third frame.
o The receiver discovers the error in Data 2 frame, so it returns the NAK 2
frame.
o The damaged frame and all frames transmitted after it (Data 2,3,4) are
discarded.
o Therefore, the sender retransmits the frames (Data2,3,4).
2. SELECTIVE-REJECT(REPEAT) ARQ
o In the above figure, three frames (Data 0,1,2) have been transmitted before
an error discovered in the third frame.
o The receiver discovers the error in Data 2 frame, so it returns the NAK 2
frame.
o The damaged frame only (Data 2) is discarded.
o The other subsequent frames (Data 3,4) are accepted.
o Therefore, the sender retransmits only the damaged frame (Data2).
Four protocols have been defined for the data-link layer controls.
They are
1. Simple Protocol
2. Stop-and-Wait Protocol
3. Go-Back-N Protocol
4. Selective-Repeat Protocol
1. SIMPLE PROTOCOL
o The first protocol is a simple protocol with neither flow nor error control.
o We assume that the receiver can immediately handle any frame it receives.
o In other words, the receiver can never be overwhelmed with incoming
frames.
o The data-link layers of the sender and receiver provide transmission
services for their network layers.
o The data-link layer at the sender gets a packet from its network layer,
makes a frame out of it, and sends the frame.
o The data-link layer at the receiver receives a frame from the link, extracts
the packet from the frame, and delivers the packet to its network layer.
NOTE :
2. STOP-AND-WAIT PROTOCOL
REFER STOP AND WAIT FROM FLOW CONTROL
3. GO-BACK-N PROTOCOL
REFER GO-BACK-N ARQ FROM ERROR CONTROL
4. SELECTIVE-REPEAT PROTOCOL
REFER SELECTIVE-REPEAT ARQ FROM ERROR CONTROL
Normal response mode (NRM)
o In normal response mode (NRM), the station configuration is unbalanced.
o We have one primary station and multiple secondary stations.
o A primary station can send commands; a secondary station can only
respond.
o The NRM is used for both point-to-point and multipoint links.
HDLC FRAMES
HDLC defines three types of frames:
1. Information frames (I-frames) - used to carry user data
2. Supervisory frames (S-frames) - used to carry control information
3. Unnumbered frames (U-frames) – reserved for system management
Each type of frame serves as an envelope for the transmission of a different type of
message.
Each frame in HDLC may contain up to six fields:
1. Beginning flag field
2. Address field
3. Control field
4. Information field (User Information/ Management Information)
5. Frame check sequence (FCS) field
6. Ending flag field
In multiple-frame transmissions, the ending flag of one frame can serve as the
beginning flag of the next frame.
o Flag field - This field contains synchronization pattern 01111110, which
identifies both the beginning and the end of a frame.
o Address field - This field contains the address of the secondary station. If a
primary station created the frame, it contains a ‘to’ address. If a secondary
station creates the frame, it contains a ‘from’ address. The address field can
be one byte or several bytes long, depending on the needs of the network.
o Control field. The control field is one or two bytes used for flow and error
control.
o Information field. The information field contains the user’s data from the
network layer or management information. Its length can vary from one
network to another.
o FCS field. The frame check sequence (FCS) is the HDLC error detection
field. It can contain either a 16-bit or 32-bit CRC.
o The first bit defines the type. If the first bit of the control field is 0, this
means the frame is an I-frame.
o The next 3 bits, called N(S), define the sequence number of the frame.
o The last 3 bits, called N(R), correspond to the acknowledgment number
when piggybacking is used.
o The single bit between N(S) and N(R) is called the P/F bit. If this bit is 1 it
means poll (the frame is sent by a primary station to a secondary).
o If this bit is 0 it means final (the frame is sent by a secondary to a Primary).
o If the first 2 bits of the control field are 10, this means the frame is an S-
frame.
o The last 3 bits, called N(R), correspond to the acknowledgment number
(ACK) or negative acknowledgment number (NAK), depending on the type
of S-frame.
o The 2 bits called code are used to define the type of S-frame itself.
o With 2 bits, we can have four types of S-frames –
Receive ready (RR), Receive not ready (RNR), Reject (REJ) and
Selective reject (SREJ).
o If the first 2 bits of the control field are 11, this means the frame is a
U-frame.
o U-frame codes are divided into two sections: a 2-bit prefix before the P/F
bit and a 3-bit suffix after the P/F bit.
o Together, these two segments (5 bits) can be used to create up to 32
different types of U-frames.
PPP Frame
PPP is a byte-oriented protocol where each field of the frame is composed of one
or more bytes.
1. Flag − 1 byte that marks the beginning and the end of the frame. The
bit pattern of the flag is 01111110.
2. Address − 1 byte which is set to 11111111 in case of broadcast.
3. Control − 1 byte set to a constant value of 11000000.
4. Protocol − 1 or 2 bytes that define the type of data contained in the payload
field.
5. Payload − This carries the data from the network layer. The
maximum length of the payload field is 1500 bytes.
6. FCS − It is a 2-byte (16-bit) or 4-byte (32-bit) frame check
sequence for error detection. The standard code used is CRC.
Dead: In dead phase the link is not used. There is no active carrier and the
line is quiet.
Establish: The connection goes into this phase when one of the nodes starts
communication. In this phase, the two parties negotiate the options. If
negotiation is successful, the system goes into the authentication phase or
directly to the networking phase.
Authenticate: This phase is optional. The two nodes may decide whether
they need this phase during the establishment phase. If they decide to
proceed with authentication, they send several authentication packets. If
the result is successful, the connection goes to the networking phase;
otherwise, it goes to the termination phase.
Network: In network phase, negotiation for the network layer protocols
takes place. PPP specifies that two nodes establish a network layer
agreement before data at the network layer can be exchanged. This is
because PPP supports several protocols at network layer. If a node is
running multiple protocols simultaneously at the network layer, the
receiving node needs to know which protocol will receive the data.
Open: In this phase, data transfer takes place. The connection remains in
this phase until one of the endpoints wants to end the connection.
Terminate: In this phase connection is terminated.
Components/Protocols of PPP
Three sets of components/protocols are defined to make PPP powerful:
Link Control Protocol (LCP)
Authentication Protocols (AP)
Network Control Protocols (NCP)
PAP
The Password Authentication Protocol (PAP) is a simple authentication procedure
with a two-step process:
a. The user who wants to access a system sends an
authentication identification (usually the user name) and a
password.
b. The system checks the validity of the identification and password
and either accepts or denies connection.
CHAP
The Challenge Handshake Authentication Protocol (CHAP) is a three-way
handshaking authentication protocol that provides greater security than PAP. In
this method, the password is kept secret; it is never sent online.
a. The system sends the user a challenge packet containing a
challenge value.
b. The user applies a predefined function that takes the challenge value and
the user’s own password and creates a result. The user sends the result in
the response packet to the system.
c. The system does the same. It applies the same function to the password
of the user (known to the system) and the challenge value to create a
result. If the result created is the same as the result sent in the response
packet, access is granted; otherwise, it is denied.
CHAP is more secure than PAP, especially if the system continuously changes the
challenge value. Even if the intruder learns the challenge value and the result, the
password is still secret.
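The three-step CHAP exchange above can be sketched as follows. The hash choice is illustrative (SHA-256 here; standardized CHAP, RFC 1994, uses MD5 over the identifier, secret, and challenge), and the packet formats are omitted; only the challenge-response logic is shown.

```python
import hashlib
import os

def chap_response(secret: bytes, challenge: bytes) -> bytes:
    """The 'predefined function' applied to the challenge value and the
    user's password. (Hash choice is illustrative, not the CHAP standard.)"""
    return hashlib.sha256(secret + challenge).digest()

# a. The system sends a random challenge value.
challenge = os.urandom(16)
secret = b"shared-password"       # known to both sides, never sent on the line
# b. The user computes the result and sends it in the response packet.
user_result = chap_response(secret, challenge)
# c. The system recomputes with its stored copy of the password and compares.
granted = chap_response(secret, challenge) == user_result
```

Because only the hash of (secret + challenge) crosses the line, an eavesdropper who captures the challenge and the result still does not learn the password.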
PPP has defined a specific Network Control Protocol for each network protocol.
These protocols are used for negotiating the parameters and facilities for the
network layer. For every higher-layer protocol supported by PPP, there is one NCP.
Goals of MAC
1. Fairness in sharing
2. Efficient sharing of bandwidth
3. Need to avoid packet collisions at the receiver due to interference
MAC Management
Medium allocation (collision avoidance)
Contention resolution (collision handling)
MAC Types
Round-Robin: – Each station is given an opportunity to transmit in turns.
Either a central controller polls a station to permit it to go, or stations can
coordinate among themselves.
Reservation: - Station wishing to transmit makes reservations for time
slots in advance. (Centralized or distributed).
Contention (Random Access): - No control on who tries; if a
collision occurs, retransmission takes place.
MECHANISMS USED
Wired Networks:
o CSMA / CD – Carrier Sense Multiple Access / Collision Detection
Wireless Networks:
o CSMA / CA – Carrier Sense Multiple Access / Collision Avoidance
Carrier Sense in CSMA/CD means that all the nodes sense the medium to
check whether it is idle or busy.
If the carrier sensed is idle, then the node transmits the
entire frame.
If the carrier sensed is busy, the transmission is postponed.
Collision Detect means that a node listens as it transmits and can therefore
detect when a frame it is transmitting has collided with a frame transmitted
by another node.
Non-Persistent Strategy
In the non-persistent method, a station that has a frame to send senses the
line.
If the line is idle, it sends immediately.
If the line is not idle, it waits a random amount of time and then senses the
line again.
Persistent Strategy
1- Persistent:
The 1-persistent method is simple and straightforward.
In this method, after the station finds the line idle, it sends its frame
immediately (with probability 1).
This method has the highest chance of collision because two or more
stations may find the line idle and send their frames immediately.
P-Persistent:
In this method, after the station finds the line idle it follows these steps:
With probability p, the station sends its frame.
With probability q = 1 − p, the station waits for the beginning of the next
time slot and checks the line again.
The p-persistent method is used if the channel has time slots with a slot
duration equal to or greater than the maximum propagation time.
The p-persistent approach combines the advantages of the other two
strategies. It reduces the chance of collision and improves efficiency.
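The per-slot decision of a p-persistent station can be sketched as follows; the string return values are just labels for the three outcomes described above.

```python
import random

def p_persistent_decision(line_idle: bool, p: float) -> str:
    """One time slot's decision in p-persistent CSMA."""
    if not line_idle:
        return "keep sensing"              # line busy: keep monitoring it
    if random.random() < p:                # with probability p, transmit
        return "send"
    return "wait for next slot"            # with probability q = 1 - p, defer
```

With p = 1 this degenerates to the 1-persistent method (always send when idle), which is why smaller p reduces the chance of collision at the cost of added delay.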
EXPONENTIAL BACK-OFF
Once an adaptor has detected a collision and stopped its transmission, it waits
a certain amount of time and tries again.
Each time it tries to transmit but fails, the adaptor doubles the amount of time
it waits before trying again.
This strategy of doubling the delay interval between each retransmission
attempt is a general technique known as exponential back-off.
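Doubling the delay interval is usually implemented by doubling the *range* from which a random number of slot times is drawn, as in this sketch. The 51.2-microsecond slot time is classic Ethernet's value; the cap on the exponent is an assumption (real adaptors also cap the retry count).

```python
import random

SLOT = 51.2e-6                       # classic Ethernet slot time, in seconds

def backoff_delay(attempt: int, max_exp: int = 10) -> float:
    """After the attempt-th collision, wait k slot times, with k chosen
    uniformly from 0 .. 2^attempt - 1: the interval doubles per failure."""
    k = random.randrange(2 ** min(attempt, max_exp))
    return k * SLOT
```

After the first collision the adaptor waits 0 or 1 slot times; after the second, 0 to 3; after the third, 0 to 7; and so on, which quickly spreads contending stations apart.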
ETHERNET BASICS
EVOLUTION OF ETHERNET
The 64-bit preamble allows the receiver to synchronize with the signal; it is
a sequence of alternating 0’s and 1’s.
Both the source and destination hosts are identified with a 48-bit address.
The packet type field serves as the demultiplexing key.
Each frame contains up to 1500 bytes of data (body).
CRC is used for Error detection
Ethernet Addresses
Every Ethernet host has a unique Ethernet address (48 bits – 6 bytes).
An Ethernet address is represented by a sequence of six numbers separated by
colons.
Each number corresponds to 1 byte of the 6-byte address and is given by a
pair of hexadecimal digits.
Eg: 8:0:2b:e4:b1:2 is the representation of
00001000 00000000 00101011 11100100 10110001 00000010
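The address representation and the broadcast/multicast distinction above can be sketched as follows. The "first bit" in the text is the first bit transmitted on the wire, which is the least-significant bit of the first byte; the helper names here are invented for illustration.

```python
def parse_ether(addr: str) -> bytes:
    """Convert the colon-separated hex form to the 6 raw address bytes."""
    parts = [int(p, 16) for p in addr.split(":")]
    assert len(parts) == 6, "an Ethernet address has exactly 6 bytes"
    return bytes(parts)

def is_broadcast(mac: bytes) -> bool:
    return mac == b"\xff" * 6                  # address of all 1s: broadcast

def is_multicast(mac: bytes) -> bool:
    # first transmitted bit set (LSB of the first byte), but not broadcast
    return bool(mac[0] & 0x01) and not is_broadcast(mac)
```

For example, 8:0:2b:e4:b1:2 parses to the bytes 08 00 2b e4 b1 02, matching the binary expansion shown above, and is an ordinary unicast address.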
Each frame transmitted on an Ethernet is received by every adaptor
connected to the Ethernet.
In addition to unicast addresses an Ethernet address consisting of all 1s is
treated as broadcast address.
Similarly, an address that has the first bit set to 1 but is not the broadcast
address is called a multicast address.
ADVANTAGES OF ETHERNET
Ethernets are successful because
It is extremely easy to administer and maintain. There are no switches that
can fail, no routing or configuration tables that have to be kept up-to-date,
and it is easy to add a new host to the network.
It is inexpensive: Cable is cheap, and the only other cost is the network
adaptor on each host.
1. Flexibility: Within radio coverage, nodes can access each other as radio
waves can penetrate even partition walls.
2. Planning: No prior planning is required for connectivity as long as
devices follow standard convention
3. Design: Allows to design and develop mobile devices.
4. Robustness: Wireless network can survive disaster. If the devices survive,
communication can still be established.
Station Types
IEEE 802.11 defines three types of stations based on their mobility in a wireless
LAN:
1. No-transition - A station with no-transition mobility is either
stationary (not moving) or moving only inside a BSS.
2. BSS-transition - A station with BSS-transition mobility can move from
one BSS to another, but the movement is confined inside one ESS.
3. ESS-transition - A station with ESS-transition mobility can move from one
ESS to another.
A wireless protocol would ideally follow the same algorithm as the Ethernet:
wait until the link becomes idle before transmitting, and back off should a
collision occur.
Each of the four nodes is able to send and receive signals that reach just the
nodes to its immediate left and right.
For example, B can exchange frames with A and C but it cannot reach D,
while C can reach B and D but not A.
SWITCHING
o The technique of transferring the information from one computer network to
another network is known as switching.
o Switching in a computer network is achieved by using switches.
o A switch is a small hardware device which is used to join multiple computers
together with one local area network (LAN).
o Switches are devices capable of creating temporary connections between two or
more devices linked to the switch.
o Switches are used to forward the packets based on MAC addresses.
o A Switch is used to transfer the data only to the device that has been addressed. It
verifies the destination address to route the packet appropriately.
o It is operated in full duplex mode.
o It does not broadcast the message as it works with limited bandwidth.
Advantages of Switching:
o Switch increases the bandwidth of the network.
o It reduces the workload on individual PCs as it sends the information to only
that device which has been addressed.
o It increases the overall performance of the network by reducing the traffic on the
network.
o There will be less frame collision as switch creates the collision domain for
each connection.
Disadvantages of Switching:
o A Switch is more expensive than network bridges.
o A Switch cannot determine the network connectivity issues easily.
o Proper designing and configuration of the switch are required to handle
multicast packets.
CIRCUIT SWITCHING
2. Data transfer - Once the circuit has been established, data and voice are transferred
from the source to the destination. The dedicated connection remains as long as the
end parties communicate.
Advantages
It is suitable for long continuous transmission, since a continuous transmission
route is established, that remains throughout the conversation.
The dedicated path ensures a steady data rate of communication.
No intermediate delays are found once the circuit is established. So, they are
suitable for real time communication of both voice and data transmission.
Disadvantages
Circuit switching establishes a dedicated connection between the end parties. This
dedicated connection cannot be used for transmitting any other data, even if the
data load is very low.
Bandwidth requirement is high even in cases of low data volume.
There is underutilization of system resources. Once resources are allocated to a
particular connection, they cannot be used for other connections.
Time required to establish connection may be high.
It is more expensive than other switching techniques, as a dedicated path is
required for each connection.