Computer Network Notes-4
We are now going to discuss all the above mentioned major components
of a Computer Network:
1. Preparing data
2. Sending and Controlling data
3. Configuration
4. Drivers
5. Compatibility
6. Performance
2. Hub
Hubs are devices that are used to link several computers together.
A hub repeats a signal that comes in on one port and copies it to all the
other ports.
Types of Hub
1. Active Hub:
Active Hubs make use of electronics in order to amplify and clean up the
signals before they are broadcast to other ports. Active Hubs are mainly
used to extend the maximum distance between nodes. It works both as a
wiring center as well as a repeater.
2. Passive Hub:
Passive Hubs are those hubs that connect only to Active Hubs. Passive
Hubs are simply used to connect all ports together electrically and these
are usually not powered. These hubs are cheaper than active hubs.
A passive hub neither amplifies nor regenerates the signal.
3. Intelligent Hub:
Intelligent hubs give better performance than active and passive hubs.
Nowadays Intelligent hubs are widely used and are in more demand than
active and passive hubs. These hubs are mainly used to connect various
devices. They support amplification and regeneration of incoming signals at
any port. An intelligent hub also helps sustain the network and select the
path, and it can perform the tasks of both passive and active hubs.
With an intelligent hub, the speed and efficiency of the whole network
increase, which helps the network achieve fast and efficient performance.
3. Switch
A switch resembles a hub. It is a layer-2 device and is used for the
intelligent forwarding of messages. By intelligent we mean the decision-
making ability of the switch. A hub sends data to all ports on the device,
whereas a switch sends the data only to the port that is connected to the
destination device.
2. Unmanaged Switch
These are cheap switches and are mainly used in home networks and in
small businesses. The unmanaged switch does not need to be configured.
Unmanaged switches can be easily set up just by plugging them into the
network; once plugged in, they instantly start operating.
3. PoE Switch
These are referred to as Power over Ethernet switches. Using PoE
technology, these switches combine data and power transmission over the
same cable, so devices connected to the switch receive both electricity and
data over the same line. Thus PoE switches offer more flexibility.
4. LAN Switch
LAN switch is referred to as Local Area Network switch and it is mainly
used to connect devices in the internal local area network of an
organization. These are helpful in reducing network congestion.
Bandwidth with these switches is allocated in a manner such that there is
no overlapping of data packets in the network.
4. Repeater
The repeater is a Physical layer device. As the name suggests, the
repeater is mainly used to regenerate the signal over the same network,
and it regenerates the signal before it becomes too weak or corrupted.
Repeaters are incorporated into networks in order to extend the coverage
area. A repeater can also connect segments that use different types of
cables.
Types of repeaters:
Types of repeaters that are available are as follows:
1. Analog Repeaters
These are only used to amplify the analog signals.
2. Digital Repeaters
These are only used to amplify digital signals.
3. Wired Repeaters
These repeaters are mainly used in wired Local area networks.
4. Wireless Repeaters
These are mainly used in wireless local area networks and also in cellular
networks.
5. Local Repeaters
These are used to connect segments of a local area network that are
separated by a small distance.
6. Remote Repeaters
These are mainly used to connect those local area networks that are far
away from each other.
5. Router
The router is a network component that is mainly used to forward data
packets between computer networks. The process of forwarding data
packets from the source to the destination is referred to as Routing.
Types of Routers:
Different types of routers are as follows:
1. Core Routers
Core routers are mainly used by service providers(like AT&T, Vodafone) or
by cloud providers like (Amazon, Microsoft, and Google). Core Routers
provide maximum bandwidth so as to connect additional routers or
switches. Core routers are used by large organizations.
2. Edge Routers
An edge router is also known as a gateway router or simply a gateway. The
gateway is the network's outermost point of connection with external
networks, including the Internet. These routers are mainly used
to optimize bandwidth and are designed in order to connect to other
routers so as to distribute data to end-users. Border Gateway protocol is
mainly used for connectivity by edge routers.
3. Brouters
Brouter means bridging routing device. These are special routers and they
also provide functionalities of bridges. They perform the functioning of the
bridge as well as of router; like a bridge, these routers help to transfer
data between networks, and like the router, they route the data within the
devices of a network.
4. Broadband Routers
It is a type of networking device that mainly allows end-users to access
broadband Internet from an Internet service provider (ISP). The Internet
service provider usually provides and configures the broadband router for
the end-user.
5. Distribution Routers
These routers mainly receive the data from the edge router (or gateway)
via a wired connection and then send it on to the end-users with the help
of Wi-Fi.
6. Wireless Routers
These routers combine the functioning of both edge routers and
distribution routers. These routers mainly provide a WiFi connection to
WiFi devices like laptops, smartphones, etc. These routers also provide
the standard Ethernet routing. For indoor connections, the range of these
routers is 150 feet while for outdoor connections it is 300 feet.
6. Modem
The modem is basically a hardware component that mainly allows a
computer or any other device like a router, switch to connect to the
Internet. A modem is basically a shorthand form of Modulator-Demodulator.
The modulator converts the computer's digital data into analog signals for
transmission, and the demodulator converts the received analog signals back
into digital data when they are received by the computer.
7. Server
A Server is basically a computer that serves the data to other devices. The
server may serve data to other devices or computers over a local area
network or on a Wide area network with the help of the Internet. There
can be virtual servers, proxy servers, application servers, web servers,
database servers, file servers, and many more.
Thus servers are mainly used to serve the requests of other devices. It
can be hardware or software.
8. Bridge
It is another important component of the computer network. The bridge is
a layer-2 (that is, data link layer) device. A bridge is mainly used to
connect two or more local area networks together, and it helps in the fast
transfer of data.
Thus a bridge can transfer data between networks that use different
protocols (e.g., a Token Ring and an Ethernet network) and, as told above,
operates at the data link layer, or level 2, of the OSI (Open Systems
Interconnection) networking reference model.
Local bridge
These are ordinary bridges.
Remote bridges
These are mainly used to connect networks that are at a distance
from each other. Generally, a Wide Area Network link is provided between
the two bridges.
Some Bridge protocols are spanning tree protocol, source routing
protocol, and source routing transparent protocol.
Personal Area Network (PAN) - The interconnection of devices within the range
of an individual person, typically within a range of 10 meters. For example, a
wireless network connecting a computer with its keyboard, mouse or printer is a
PAN. Also, a PDA that controls the user's hearing aid or pacemaker fits in this
category. Another example of a PAN is Bluetooth. Typically, this kind of network
could also be interconnected without wires to the Internet or other networks.
The type of data transmission indicates the direction in which the data
moves between the sender and receiver.
Simplex data transmission: Data is sent from sender to receiver
Half-duplex data transmission: Data can transmit both ways, but not
simultaneously
Full-duplex data transmission: Data can transmit both ways at the same
time
Transmission Impairment
In the data communication system, analog and digital signals go
through the transmission medium. Transmission media are not ideal.
There are some imperfections in transmission media. So, the
signals sent through the transmission medium are also not perfect.
These imperfections cause signal impairment.
Consequences
1. For a digital signal, there may occur bit errors.
2. For analog signals, these impairments degrade the quality of
the signals.
Causes of Impairment
The three main causes of impairment are:
1. Attenuation
2. Distortion
3. Noise
1. Attenuation
Attenuation means loss of energy, that is, the signal becomes weaker.
Whenever a signal is transmitted through a medium, it loses some of its
energy in overcoming the resistance of the medium.
2. Distortion
If a signal changes its form or shape, it is referred to as distortion.
Signals made up of different frequencies are composite signals.
Distortion occurs in these composite signals.
3. Noise
Noise is another problem. Random or unwanted signals that mix
with the original signal are called noise. Noise can corrupt the
signal in many ways, in addition to the distortion introduced by
the transmission media.
What are Transmission Modes?
Transmission mode refers to the way data is transferred between two
devices. It is also known as a communication mode. Buses and
networks are designed to allow communication to occur between
individual devices that are interconnected. There are three
types of transmission modes:
Simplex Mode
In Simplex mode, the communication is unidirectional, as on a
one-way street. Only one of the two devices on a link can
transmit, the other can only receive. The simplex mode can use
the entire capacity of the channel to send data in one direction.
Example: Keyboard and traditional monitors. The keyboard can
only introduce input, the monitor can only give the output.
Half-Duplex Mode
In half-duplex mode, each station can both transmit and receive,
but not at the same time. When one device is sending, the other
can only receive, and vice versa. The half-duplex mode is used in
cases where there is no need for communication in both
directions at the same time. The entire capacity of the channel
can be utilized for each direction.
Example: Walkie-talkie in which message is sent one at a time
and messages are sent in both directions.
Channel capacity = Bandwidth * Propagation Delay
Full-Duplex Mode
In full-duplex mode, both stations can transmit and receive
simultaneously. In full-duplex mode, signals going in one
direction share the capacity of the link with signals going in
another direction, this sharing can occur in two ways:
Either the link must contain two physically separate
transmission paths, one for sending and the other for
receiving.
Or the capacity is divided between signals traveling in
both directions.
Full-duplex mode is used when communication in both directions
is required all the time. The capacity of the channel, however,
must be divided between the two directions.
Example: Telephone Network in which there is communication
between two persons by a telephone line, through which both
can talk and listen at the same time.
Channel Capacity = 2 * Bandwidth * Propagation Delay
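As a quick worked example of the two capacity formulas above, here is a small Python sketch. The bandwidth and propagation-delay values are arbitrary illustrative numbers, and the units (bits per second and seconds) are assumed.

# Channel capacity per the formulas above (assumed units: bits/s and seconds).
bandwidth = 1_000_000          # 1 Mbps link
propagation_delay = 0.02       # 20 ms one-way propagation delay

half_duplex_capacity = bandwidth * propagation_delay        # bits in flight, one direction
full_duplex_capacity = 2 * bandwidth * propagation_delay    # both directions at once

print(half_duplex_capacity)    # 20000.0 bits
print(full_duplex_capacity)    # 40000.0 bits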
Parallel Communication
Serial Communication
Network software
1. Application layer
The first component is the application layer or the application plane, which
refers to the applications and services running on the network. It is a program
that conveys network information, the status of the network, and the network
requirements for particular resource availability and application. This is done
through the control layer via application programming interfaces (APIs). The
application layer also consists of the application logic and one or more API
drivers.
2. Control layer
The control layer lies at the center of the architecture and is one of the most
important components of the three layers. You could call it the brain of the
whole system. Also called the controller or the control plane, this layer also
includes the network control software and the network operating system
within it. It is the entity in charge of receiving requirements from the
applications and translating the same to the network components. The control
of the infrastructure layer or the data plane devices is also done via the
controller. In simple terms, the control layer is the intermediary that facilitates
communication between the top and bottom layers through API interfaces.
3. Infrastructure layer
The infrastructure layer, also called the data plane, consists of the actual
network devices (both physical and virtual) that reside in this layer. They are
primarily responsible for moving or forwarding the data packets after receiving
due instructions from the control layer. In simple terms, the data plane in the
network architecture components physically handles user traffic based on the
commands received from the controller.
The application program interface (API) ties all three components together.
Communication between these three layers is facilitated through northbound
and southbound application program interfaces. The northbound API ties
communication between the application and the control layers, whereas the
southbound API enables communication between the infrastructure and the
control layers.
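To make the northbound interface concrete, here is a hypothetical Python sketch of an application handing a flow rule to a controller over a REST-style northbound API. The controller URL, endpoint path, and JSON fields are purely illustrative assumptions; real controllers (OpenDaylight, ONOS, etc.) each define their own northbound schema.

import json
import urllib.request

# Hypothetical flow rule: "forward traffic for 10.0.0.5 out of port 3".
rule = {
    "match": {"dst_ip": "10.0.0.5"},
    "action": "forward",
    "out_port": 3,
}

# Build a POST request to a made-up northbound endpoint on the controller.
request = urllib.request.Request(
    "http://controller.example.com:8181/api/flows",   # hypothetical URL
    data=json.dumps(rule).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# urllib.request.urlopen(request) would submit the rule; the controller would
# then program the data-plane devices over a southbound protocol such as OpenFlow.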
Design Issues for the Layers of Computer Networks
A number of design issues exist for the layered approach of computer
networks. Some of the main design issues are as follows −
Reliability
Network channels and components may be unreliable, resulting in loss of bits
during data transfer. So, an important design issue is to make sure that the
information transferred is not distorted.
Scalability
Networks are continuously evolving. The sizes are continually increasing
leading to congestion. Also, when new technologies are applied to the added
components, it may lead to incompatibility issues. Hence, the design should be
done so that the networks are scalable and can accommodate such additions
and alterations.
Addressing
At a particular time, innumerable messages are being transferred between
large numbers of computers. So, a naming or addressing system should exist so
that each layer can identify the sender and receivers of each message.
Error Control
Unreliable channels introduce a number of errors in the data streams that are
communicated. So, the layers need to agree upon common error detection and
error correction methods so as to protect data packets while they are
transferred.
Flow Control
If the rate at which data is produced by the sender is higher than the rate at
which data is received by the receiver, there are chances of overflowing the
receiver. So, a proper flow control mechanism needs to be implemented.
Resource Allocation
Computer networks provide services in the form of network resources to the
end users. The main design issue is to allocate and deallocate resources to
processes. The allocation/deallocation should occur so that minimal
interference among the hosts occurs and there is optimal usage of the
resources.
Statistical Multiplexing
It is not feasible to allocate a dedicated path for each message while it is being
transferred from the source to the destination. So, the data channel needs to
be multiplexed, so as to allocate a fraction of the bandwidth or time to each
host.
Routing
There may be multiple paths from the source to the destination. Routing
involves choosing an optimal path among all possible paths, in terms of cost
and time. There are several routing algorithms that are used in network
systems.
Security
A major factor of data communication is to defend it against threats like
eavesdropping and surreptitious alteration of messages. So, there should be
adequate mechanisms to prevent unauthorized access to data through
authentication and cryptography.
OSI stands for Open Systems Interconnection, where "open" means non-
proprietary. It is a 7-layer architecture with each layer having specific
functionality to perform. All these 7 layers work collaboratively to transmit the
data from one person to another across the globe. The OSI reference model
was developed by ISO – ‘International Organization for Standardization‘, in the
year 1984.
The OSI model provides a theoretical foundation for understanding network
communication. However, it is usually not directly implemented in its entirety
in real-world networking hardware or software. Instead, specific
protocols and technologies are often designed based on the principles
outlined in the OSI model to facilitate efficient data transmission and
networking operations
TCP/IP Model
The TCP/IP model is a fundamental framework for computer networking. It
stands for Transmission Control Protocol/Internet Protocol, which are the
core protocols of the Internet. This model defines how data is transmitted
over networks, ensuring reliable communication between devices. It consists
of four layers: the Link Layer, the Internet Layer, the Transport Layer, and the
Application Layer. Each layer has specific functions that help manage different
aspects of network communication, making it essential for understanding
and working with modern networks.
Chp-4
What is Transmission Media?
A transmission medium is a physical path between the transmitter and the
receiver i.e. it is the channel through which data is sent from one place to
another. Transmission Media is broadly classified into the following types:
Types of Transmission Media
1. Guided Media
Guided Media is also referred to as Wired or Bounded transmission media.
Signals being transmitted are directed and confined in a narrow pathway by
using physical links.
Features:
High Speed
Secure
Used for comparatively shorter distances
There are 3 major types of Guided Media:
Twisted Pair Cable
It consists of 2 separately insulated conductor wires wound about each other.
Generally, several such pairs are bundled together in a protective sheath. They
are the most widely used Transmission Media. Twisted Pair is of two types:
Unshielded Twisted Pair (UTP): UTP consists of two insulated copper
wires twisted around one another. The twisting itself helps to cancel out
interference, so this type of cable does not depend on a physical shield
for that purpose. It is used for telephonic applications.
Microwave Transmission
Infrared
Infrared waves are used for very short distance communication. They cannot
penetrate through obstacles. This prevents interference between systems.
Frequency Range: 300 GHz – 400 THz. It is used in TV remotes, wireless mice,
keyboards, printers, etc.
As an example of the datagram approach, consider four packets delivered
from station A to station X. All four packets belong to the same message,
but they may travel via different paths to reach the destination, i.e. station X.
Advantages
The advantages of the datagram packet switching are explained below −
It provides greater flexibility
It performs fast re-routing of data packets at the time of network
congestion or failure.
Disadvantages
The disadvantages of the datagram packet switching are explained below −
Each packet's path has to be decided individually; there is no dedicated path.
If a large group of packets arrives at the same destination, the network
has to examine each packet that travels through a network node
individually and determine the next hop for each packet.
This leads to inefficiency and wastage of time.
Chp-5
Framing in Data Link Layer
Frames are the units of digital transmission, particularly in computer
networks and telecommunications. Frames are comparable to the packets of
energy called photons in the case of light energy. The frame is also used
continuously in the Time Division Multiplexing process.
In a point-to-point connection between two computers or devices, data is
transmitted over a wire as a stream of bits. However, these bits must be
framed into discernible blocks of information. Framing is a
function of the data link layer. It provides a way for a sender to transmit a set
of bits that are meaningful to the receiver. Ethernet, token ring, frame relay,
and other data link layer technologies have their own frame structures.
Frames have headers that contain information such as error-checking codes.
Problems in Framing
Detecting start of the frame: When a frame is transmitted, every
station must be able to detect it. Station detects frames by looking out
for a special sequence of bits that marks the beginning of the frame i.e.
SFD (Starting Frame Delimiter).
How does the station detect a frame: Every station listens to link for
SFD pattern through a sequential circuit. If SFD is detected, sequential
circuit alerts station. Station checks destination address to accept or
reject frame.
Detecting end of frame: When to stop reading the frame.
Handling errors: Framing errors may occur due to noise or other
transmission errors, which can cause a station to misinterpret the
frame. Therefore, error detection and correction mechanisms, such as
cyclic redundancy check (CRC), are used to ensure the integrity of the
frame.
Framing overhead: Every frame has a header and a trailer that contains
control information such as source and destination address, error
detection code, and other protocol-related information. This overhead
reduces the available bandwidth for data transmission, especially for
small-sized frames.
Framing incompatibility: Different networking devices and protocols
may use different framing methods, which can lead to framing
incompatibility issues. For example, if a device using one framing
method sends data to a device using a different framing method, the
receiving device may not be able to correctly interpret the frame.
Framing synchronization: Stations must be synchronized with each
other to avoid collisions and ensure reliable communication.
Synchronization requires that all stations agree on the frame
boundaries and timing, which can be challenging in complex networks
with many devices and varying traffic loads.
Framing efficiency: Framing should be designed to minimize the
amount of data overhead while maximizing the available bandwidth
for data transmission. Inefficient framing methods can lead to lower
network performance and higher latency.
Types of framing
There are two types of framing:
1. Fixed-size: The frame is of fixed size and there is no need to provide
boundaries to the frame, the length of the frame itself acts as a delimiter.
Drawback: It suffers from internal fragmentation if the data size is less
than the frame size
Solution: Padding
2. Variable size: In this, there is a need to define the end of the frame as well
as the beginning of the next frame to distinguish. This can be done in two
ways:
1. Length field – We can introduce a length field in the frame to indicate
the length of the frame. Used in Ethernet(802.3). The problem with this
is that sometimes the length field might get corrupted.
2. End Delimiter (ED) – We can introduce an ED(pattern) to indicate the
end of the frame. Used in Token Ring. The problem with this is that ED
can occur in the data. This can be solved by:
1. Character/Byte Stuffing: Used when frames consist of characters. If data
contains ED then, a byte is stuffed into data to differentiate it from ED.
Let ED = “$”. If the data contains ‘$’ anywhere, it can be escaped using the ‘\O’
character. If the data contains ‘\O$’, then use ‘\O\O\O$’ (‘$’ is escaped using ‘\O’ and ‘\O’ is
escaped using ‘\O’).
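Below is a minimal Python sketch of character/byte stuffing using the “$” delimiter and ‘\O’ escape from the example above; the helper names and the scanning un-stuffing routine are illustrative, not a standard API.

ED = "$"                 # end delimiter
ESC = "\\O"              # two-character escape sequence: backslash then the letter O

def stuff(data: str) -> str:
    # Escape existing escape sequences first, then escape the delimiter itself,
    # so the receiver never mistakes payload characters for the end-of-frame marker.
    return data.replace(ESC, ESC + ESC).replace(ED, ESC + ED)

def unstuff(stuffed: str) -> str:
    # Scan left to right: whatever follows an escape sequence is taken literally.
    out, i = [], 0
    while i < len(stuffed):
        if stuffed.startswith(ESC, i):
            i += len(ESC)
            if stuffed.startswith(ESC, i):    # an escaped escape sequence
                out.append(ESC)
                i += len(ESC)
            else:                             # an escaped delimiter (or other character)
                out.append(stuffed[i])
                i += 1
        else:
            out.append(stuffed[i])
            i += 1
    return "".join(out)

print(stuff("\\O$"))              # \O\O\O$  (matches the example in the notes)
print(unstuff(stuff("\\O$")))     # \O$      (original data recovered)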
2. Flow and error control : Flow control is a design issue at the Data Link
Layer. It is a technique that observes the proper flow of data from sender to
receiver. It is essential because the sender may transmit data or information
at a rate faster than the receiver can receive and process it; this can happen
if the receiver has a very high load of traffic compared to the sender, or if
the receiver has less processing power than the sender. Flow control is
basically a technique that allows two stations working and processing at
different speeds to communicate with one another. Flow control in the Data
Link Layer restricts and coordinates the number of frames or amount of data
the sender can send before it waits for an acknowledgement from the receiver.
In other words, flow control is a set of procedures that tells the sender how
much data or how many frames it can transmit before the data overwhelms the
receiver. The receiving device has only a limited speed and a limited amount
of memory to store data. This is why the receiving device should be able to
tell or inform the sender to stop the transmission temporarily before the
limit is reached. It also needs a buffer, a large block of memory, for
storing data or frames until they are processed.
Techniques of Flow Control in Data Link Layer : There are basically two types
of techniques being developed to control the flow of data
1. Stop-and-Wait Flow Control : This method is the easiest and simplest form
of flow control. In this method, the message or data is broken down into
multiple frames, and the receiver indicates its readiness to receive each
frame of data. The sender sends or transfers the next frame only after an
acknowledgement is received. This process continues until the sender
transmits an EOT (End of Transmission) frame. In this method, only one frame
can be in transmission at a time. It leads to inefficiency, i.e. lower
throughput, if the propagation delay is much longer than the transmission
delay. Ultimately, in this method the sender sends a single frame, the
receiver takes one frame at a time, and the receiver sends an acknowledgement
(which carries the number of the next expected frame) for each new frame.
Advantages –
This method is very simple, and each frame is checked and
acknowledged well.
This method is also very accurate.
Disadvantages –
This method is fairly slow.
In this, only one packet or frame can be sent at a time.
It is very inefficient and makes the transmission process very slow.
2. Sliding Window Flow Control : This method is required where reliable,
in-order delivery of packets or frames is needed, as in the data link layer.
It is a point-to-point protocol that assumes that no other entity tries to
communicate until the current data or frame transfer is complete. In this
method, the sender transmits several frames or packets before receiving any
acknowledgement. Both the sender and receiver agree upon the total number of
data frames after which an acknowledgement is needed. The Data Link Layer
uses this method because it allows the sender to have more than one
unacknowledged packet “in flight” at a time, which increases and improves
network throughput. Ultimately, in this method the sender sends multiple
frames, but the receiver takes them one by one and, after completing each
frame, sends an acknowledgement (which carries the number of the next
expected frame).
Advantages –
It performs much better than stop-and-wait flow control.
This method increases efficiency.
Multiple frames can be sent one after another.
Disadvantages –
The main issue is complexity at the sender and receiver due to the
transferring of multiple frames.
The receiver might receive data frames or packets out of sequence.
Error: Data-link layer uses the techniques of error control simply to ensure
and confirm that all the data frames or packets, i.e. bit streams of data, are
transmitted or transferred from sender to receiver with certain accuracy.
Providing error control at this data link layer is an optimization; it was
never a requirement. Error control is basically the process in the data link
layer of detecting or identifying and re-transmitting data frames that might
be lost or corrupted during transmission. In both of these cases, the
receiver or destination does not receive the correct data frame, and the
sender or source does not know anything about such a loss of data frames.
Therefore, in such type of cases, both sender and receiver are provided with
some essential protocols that are required to detect or identify such types of
errors as loss of data frames. The Data-link layer follows a technique known
as re-transmission of frames to detect or identify transit errors and also to
take necessary actions that are required to reduce or remove such errors.
Each and every time an error is detected during transmission, particular data
frames are retransmitted and this process is known as ARQ (Automatic
Repeat Request).
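As a small illustration of the detection side of this process, here is a Python sketch that appends a CRC trailer to a frame and verifies it at the receiver. It uses Python's built-in zlib.crc32 as a stand-in for whatever CRC polynomial a real link-layer protocol actually specifies.

import zlib

payload = b"hello, data link layer"
crc = zlib.crc32(payload)
frame = payload + crc.to_bytes(4, "big")        # append the CRC as a 4-byte trailer

# Receiver side: recompute the CRC over the payload and compare with the trailer.
received_payload = frame[:-4]
received_crc = int.from_bytes(frame[-4:], "big")
print(zlib.crc32(received_payload) == received_crc)   # True: frame accepted

# A single corrupted bit makes the check fail, so the frame would be discarded
# and (with ARQ) retransmitted.
corrupted = bytes([frame[0] ^ 0x01]) + frame[1:]
print(zlib.crc32(corrupted[:-4]) == int.from_bytes(corrupted[-4:], "big"))  # False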
Various Techniques for Error Control : There are various techniques of error
control as given below :
1. Stop-and-Wait ARQ : Stop-and-Wait ARQ is also known as alternating bit
protocol. It is one of the simplest flow and error control techniques or
mechanisms. This mechanism is generally required in telecommunications to
transmit data or information between two connected devices. Receiver
simply indicates its readiness to receive data for each frame. In these, sender
sends information or data packets to receiver. Sender then stops and waits
for ACK (Acknowledgment) from receiver. Further, if ACK does not arrive
within given time period i.e., time-out, sender then again resends frame and
waits for ACK. But, if sender receives ACK, then it will transmit the next data
packet to receiver and then again wait for ACK from receiver. This process to
stop and wait continues until sender has no data frame or packet to send.
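Below is a minimal Python sketch of the Stop-and-Wait ARQ behaviour just described. The lossy channel, the alternating 0/1 sequence numbers, and the retry limit are simulation assumptions, not a real link implementation.

import random

def unreliable_send(frame, loss_rate=0.3):
    # Simulated channel: the frame (or its ACK) is lost with probability loss_rate;
    # otherwise the receiver acknowledges the sequence number it received.
    if random.random() < loss_rate:
        return None
    return frame["seq"]

def stop_and_wait(frames, max_retries=20):
    seq = 0
    for payload in frames:
        for attempt in range(max_retries):
            ack = unreliable_send({"seq": seq, "data": payload})
            if ack == seq:
                break                        # expected ACK arrived before the timeout
            # no ACK (or wrong ACK): the timeout expires and the same frame is resent
        else:
            raise RuntimeError("link appears to be down")
        seq ^= 1                             # alternate 0/1 sequence numbers

stop_and_wait(["frame A", "frame B", "frame C"])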
2. Sliding Window ARQ : This technique is generally used for continuous
transmission error control. It is further categorized into two categories as
given below :
Go-Back-N ARQ : Go-Back-N ARQ is a form of ARQ protocol in which the
transmission process continues to send or transmit the number of
frames specified by the window size even without receiving an
ACK (Acknowledgement) packet from the receiver. It uses the sliding
window flow control protocol. If no errors occur, the operation is
identical to sliding window (a window-based sender sketch follows this list).
Selective Repeat ARQ : Selective Repeat ARQ is also a form of ARQ
protocol, in which only the suspected, damaged, or lost data frames are
retransmitted. This technique is similar to Go-Back-N ARQ but
much more efficient than the Go-Back-N ARQ technique, because it reduces
the number of retransmissions. In this, the sender only
retransmits frames for which a NAK is received. But this technique is
used less because of the greater complexity at the sender and receiver,
and because each frame must be acknowledged individually.
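Here is a minimal Python sketch of the Go-Back-N sender logic referenced above. The in-memory channel, the loss rate, and the window size are simulation assumptions.

import random

WINDOW_SIZE = 4     # how many unacknowledged frames the sender may have outstanding

def go_back_n(frames, loss_rate=0.2):
    # The sender pushes a whole window, the receiver accepts frames only in order,
    # and a cumulative ACK (or a timeout) moves the window forward.
    base = 0        # sender: oldest unacknowledged frame
    expected = 0    # receiver: sequence number it is waiting for
    while base < len(frames):
        for seq in range(base, min(base + WINDOW_SIZE, len(frames))):
            lost = random.random() < loss_rate
            if not lost and seq == expected:
                expected += 1          # in-order frame accepted by the receiver
            # lost or out-of-order frames are silently discarded by a Go-Back-N receiver
        if expected > base:
            base = expected            # cumulative ACK slides the window forward
        # otherwise a timeout occurs and the whole window is resent ("go back N")

go_back_n([f"frame {i}" for i in range(10)])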
CSMA
CSMA Access Modes
Chp-7
Protocols in Application Layer
The Application Layer is the topmost layer in the Open System Interconnection
(OSI) model. This layer provides several ways for manipulating the data which
enables any type of user to access the network with ease. The Application Layer
interface directly interacts with the application and provides common web
application services. The application layer performs several kinds of functions that
are required in any kind of application or communication process. In this article, we
will discuss various application layer protocols.
Therefore,
Transmission time = 8000 bits / 2 Mbps = 4 ms (4000 μs)
Properties:
UDP does not use a congestion-control mechanism. Video chunks are encapsulated
before transmission using RTP (Real-time Transport Protocol).
An additional client-to-server control path is maintained to inform the server of the
client state, such as pause, resume, skip, and so on.
Drawbacks:
Bandwidth between the client and server is unpredictable and varies over time.
UDP streaming requires a separate media control server, such as an RTSP (Real-Time
Streaming Protocol) server, to track the client state (pause, resume, etc.).
Many devices are configured with firewalls that block UDP traffic, which prevents
UDP packets from reaching clients.
2. HTTP STREAMING: Video is stored in an HTTP server as a simple ordinary file
with a unique URL. Client establishes TCP connection with server and issues a HTTP
GET request for that URL. The server sends the video file along with an HTTP
RESPONSE. The client buffers the video, which is then displayed on the user's
screen.
Advantages:
Use of HTTP over TCP allows the video to traverse firewalls and NATs easily.
Does not need any media control servers like RTSP servers, which reduces the cost
of large-scale deployment over the internet.
Disadvantages:
Latency or lag between when a video is recorded and when it is played back. This
can make viewers annoyed and irritated; only a few milliseconds of delay is acceptable.
Early pre-fetching of video wastes data if the user stops playing the video at an
early stage.
All clients receive the same encoding of the video, despite the large variations in
the amount of bandwidth available to different clients and for the same client over time.
Uses: YouTube and Netflix use the HTTP streaming mechanism.
3. ADAPTIVE HTTP STREAMING: The major drawbacks of HTTP streaming led to the
development of a new type of HTTP-based streaming referred to as DASH (Dynamic
Adaptive Streaming over HTTP). Videos are encoded into different bit-rate versions,
having different quality. The host makes a dynamic video request of a few seconds in
length from different bit-rate versions. When bandwidth is high, high bit-rate chunks
are received, hence high quality; similarly, low-quality chunks are received when
bandwidth is low.
Advantages:
DASH allows the user to switch between different qualities of video on screen.
Client can use HTTP byte-range requests to precisely control the amount of pre-
fetched video that is locally buffered (a sketch of such a request follows this list).
DASH also stores the audio in different versions with different quality and different
bit-rate with unique URL.
So the client dynamically selects the video and audio chunks and synchronizes
them locally during play-out.
Uses: Comcast uses DASH for streaming high-quality video content.
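Below is a minimal Python sketch of the HTTP byte-range request mentioned above, which is how a client can limit how much video it pre-fetches. The URL and byte range are illustrative assumptions, and the network call is left commented out.

import urllib.request

request = urllib.request.Request(
    "http://video.example.com/movie_480p.mp4",    # hypothetical chunk URL
    headers={"Range": "bytes=0-999999"},           # ask for roughly the first 1 MB only
)

# with urllib.request.urlopen(request) as response:
#     chunk = response.read()    # server replies "206 Partial Content" with that range
# The client buffers the chunk, plays it out, and issues further range requests
# (possibly for a different bit-rate version) as measured bandwidth allows.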
Advantages of Streaming Stored Video
Convenience: Streaming stored video allows users to access content at any time,
without the need for physical media or a download.
Increased Accessibility: Streaming stored video makes it easier for users to access
content, as it eliminates the need for physical storage and retrieval of media.
On-demand Content: Streaming stored video allows users to choose what they
want to watch, and when they want to watch it, rather than having to conform to a
schedule.
Increased User Experience: Streaming stored video provides a better viewing
experience compared to traditional broadcast, as it allows for higher quality video
and improved interactivity.
Scalability: Streaming stored video can be scaled to meet the demands of large
numbers of users, making it a reliable solution for large-scale video distribution.
Applications of Streaming Stored Video
Online Entertainment: Streaming stored video is commonly used for online
entertainment, allowing users to access movies, TV shows, and other content from
the internet.
Video Conferencing: Streaming stored video is used for video conferencing,
allowing for real-time communication between participants.
Education: Streaming stored video is used in education to facilitate online classes
and lectures.
Corporate Communications: Streaming stored video is used in corporate
communications to share important information with employees and stakeholders.
Advertising: Streaming stored video is used for advertising, allowing businesses to
reach target audiences with video content.
Centralized Directory
The major problem with such an architecture is that there is a single point of
failure. If the server crashes, the whole P2P network crashes. Also, since all of the
processing is done by a single server, a huge database has to be maintained and
regularly updated.
2. Query Flooding
Unlike the centralized approach, this method makes use of distributed systems. In
this, the peers are supposed to be connected to an overlay network. It means if a
connection/path exists from one peer to another, it is a part of this overlay
network. In this overlay network, peers are called nodes, and the connection
between peers is called an edge between the nodes, thus resulting in a graph-like
structure. Gnutella was the first decentralized peer-to-peer network.
Working
Now when one peer requests some file, this request is sent to all its
neighboring nodes, i.e. to all nodes connected to this node. If those nodes don’t
have the required file, they pass on the query to their neighbors and so on. This is
called query flooding.
When the peer with the requested file is found (referred to as query hit), the query
flooding stops and it sends back the file name and file size to the client, thus
following the reverse path.
If there are multiple query hits, the client selects one of these peers.
Gnutella: Gnutella represents a new wave of P2P applications providing distributed
discovery and sharing of resources across the Internet. Gnutella is distinguished by
its support for anonymity and its decentralized architecture. A Gnutella network
consists of a dynamically changing set of peers connected using TCP/IP.
Query Flooding
This method also has some disadvantages: the query has to be sent to all the
neighboring peers until a match is found, which increases traffic in the network.
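Here is a minimal Python sketch of query flooding over a small overlay network; the topology, the shared file, and the TTL value are illustrative assumptions.

from collections import deque

overlay = {                       # adjacency list: peers and their neighbours
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}
files = {"E": {"song.mp3"}}       # only peer E holds the requested file

def flood_query(start, wanted, ttl=4):
    # Forward the query to all neighbours until a peer with the file is hit
    # (a query hit) or the TTL runs out on every branch.
    visited = {start}
    queue = deque([(start, ttl)])
    while queue:
        peer, hops_left = queue.popleft()
        if wanted in files.get(peer, set()):
            return peer           # query hit: the reply travels back along the reverse path
        if hops_left == 0:
            continue              # TTL expired; stop forwarding on this branch
        for neighbour in overlay[peer]:
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append((neighbour, hops_left - 1))
    return None

print(flood_query("A", "song.mp3"))    # 'E'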
3. Exploiting Heterogeneity
This P2P architecture makes use of both the above-discussed systems. It resembles
a distributed system like Gnutella because there is no central server for query
processing. But unlike Gnutella, it does not treat all its peers equally. The peers
with higher bandwidth and network connectivity are at a higher priority and are
called group leaders/supernodes. The rest of the peers are assigned to these
supernodes. These supernodes are interconnected and the peers under these
supernodes inform their respective leaders about their connectivity, IP address,
and the files available for sharing.
KaZaA technology is such an example that makes use of Napster and
Gnutella. Thus, the individual group leaders along with their child peers form a
Napster-like structure. These group leaders then interconnect among themselves to
resemble a Gnutella-like structure.
Working
This structure can process the queries in two ways.
The first one is that the supernodes could contact other supernodes and merge
their databases with their database. Thus, this supernode now has information
about a large number of peers.
Another approach is that when a query comes in, it is forwarded to the
neighboring super nodes until a match is found, just like in Gnutella. Thus query
flooding exists but with limited scope as each supernode has many child peers.
Hence, such a system exploits the heterogeneity of the peers by designating some
of them as group leaders/supernodes and others as their child peers
Exploiting heterogeneity
P2P File Sharing Security Concerns
Steps that ensure that Sensitive Information on the network is secure:
You should delete sensitive information that you don’t require, and you can
apply restrictions to important files present within the network.
For storing or accessing sensitive information, try to reduce or remove P2P file-
sharing programs on those computers.
Constantly try to monitor the network to find unauthorized file-sharing programs.
Try to block the unauthorized Peer-to-Peer file sharing programs within the
perimeter of the network.
Implement strong access controls and authentication mechanisms to prevent
unauthorized access to sensitive information on the network.
Use encryption techniques such as Secure Socket Layer (SSL) or Transport Layer
Security (TLS) to protect data in transit between peers on the network.
Implement firewalls, intrusion detection and prevention systems, and other
security measures to prevent unauthorized access to the network and to detect
and block malicious activity.
Regularly update software and security patches to address known vulnerabilities in
P2P file-sharing programs and other software used on the network.
Educate users about the risks associated with P2P file-sharing and provide training
on how to use these programs safely and responsibly.
Use data loss prevention tools to monitor and prevent the transmission of sensitive
data outside of the network.
Implement network segmentation to limit the scope of a security breach in case of
a compromise, and to prevent unauthorized access to sensitive areas of the
network.
Regularly review and audit the network to identify potential security threats and to
ensure that security controls are effective and up-to-date.
Chp-8
What is Network Security?
Network security is the practice of protecting a computer network from
unauthorized access, misuse, or attacks. It involves using tools,
technologies, and policies to ensure that data traveling over the
network is safe and secure, keeping sensitive information away from
hackers and other threats.
How Does Network Security Work?
Network security uses several layers of protection, both at the edge of
the network and within it. Each layer has rules and controls that
determine who can access network resources. People who are allowed
access can use the network safely, but those who try to harm it with
attacks or other threats are stopped from doing so.
The basic principle of network security is protecting stored data and
networks in layers, each of which enforces rules and regulations
that have to be satisfied before any activity is performed on the
data. These levels are:
Physical Network Security: This is the most basic level; it involves
preventing unauthorized personnel from acquiring physical control over the
network and the confidentiality of its data. This can
be achieved by using devices like biometric systems.
Technical Network Security: It primarily focuses on protecting the data
stored in the network or data involved in transitions through the
network. This type of security serves two purposes: one is protection from
unauthorized users, and the other is protection from malicious
activities.
Administrative Network Security: This level of network security
protects user behavior like how the permission has been granted and
how the authorization process takes place. This also ensures the level
of sophistication the network might need to protect it against all
attacks. This level also suggests necessary amendments that have
to be made to the infrastructure.
Types of Network Security
There are several types of network security through which we can
make our network more secure. Network security shields your network
and data from breaches, intrusions, and other dangers. Here
below are some important types of network security:
Email Security
Email Security is defined as the process designed to keep an email
account and its contents safe from unauthorized access. For example,
fraudulent emails are generally sent automatically to the Spam
folder, because most email service providers have built-in features to
protect the content.
Email gateways are the most common threat vector for a security
breach. Hackers create intricate phishing campaigns using recipients’
personal information and social engineering techniques to trick them
and direct them to malicious websites. To stop critical data from being
lost, an email security programme restricts outgoing messages and
stops incoming threats.
What is Cryptography?
Cryptography is a technique of securing information and
communications through the use of codes so that only those persons
for whom the information is intended can understand and process it.
Thus preventing unauthorized access to information. The prefix “crypt”
means “hidden” and the suffix “graphy” means “writing”. In
Cryptography, the techniques that are used to protect information are
obtained from mathematical concepts and a set of rule-based
calculations known as algorithms to convert messages in ways that
make it hard to decode them. These algorithms are used for
cryptographic key generation, digital signing, and verification to protect
data privacy, web browsing on the internet and to protect confidential
transactions such as credit card and debit card transactions.
Features Of Cryptography
Confidentiality: Information can only be accessed by the person for
whom it is intended and no other person except him can access it.
Integrity: Information cannot be modified in storage or in transit
between the sender and the intended receiver without the change
being detected.
Non-repudiation: The creator/sender of information cannot deny his
intention to send information at a later stage.
Authentication: The identities of the sender and receiver are
confirmed, as is the destination/origin of the information.
Interoperability: Cryptography allows for secure communication
between different systems and platforms.
Adaptability: Cryptography continuously evolves to stay ahead of
security threats and technological advancements.
Types Of Cryptography
1. Symmetric Key Cryptography
It is an encryption system where the sender and receiver of a message
use a single common key to encrypt and decrypt messages. Symmetric
Key cryptography is faster and simpler but the problem is that the
sender and receiver have to somehow exchange keys securely. The
most popular symmetric key cryptography systems are the Data Encryption
Standard (DES) and the Advanced Encryption Standard (AES).
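As a minimal illustration of symmetric-key encryption (not an example from these notes), here is a Python sketch using Fernet from the third-party cryptography package as an AES-based stand-in for the DES/AES systems named above. How the two parties exchange the shared key securely is exactly the hard part mentioned in the text.

from cryptography.fernet import Fernet   # requires: pip install cryptography

key = Fernet.generate_key()              # the single shared secret
cipher = Fernet(key)

ciphertext = cipher.encrypt(b"transfer Rs. 5000 to account 1234")
plaintext = cipher.decrypt(ciphertext)   # only a holder of the same key can do this
print(plaintext)                         # b'transfer Rs. 5000 to account 1234'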
What is Firewall?
A firewall is a network security device, either hardware or software-based,
which monitors all incoming and outgoing traffic and based on a defined set
of security rules accepts, rejects, or drops that specific traffic.
Accept: allow the traffic
Reject: block the traffic but reply with an “unreachable error”
Drop: block the traffic with no reply
A firewall is a type of network security device that filters incoming and
outgoing network traffic with security policies that have previously been set
up inside an organization. A firewall is essentially the wall that separates a
private internal network from the open Internet at its very basic level.
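To make the three actions above concrete, here is a minimal Python sketch of rule evaluation; the rule table, packet fields, and default policy are illustrative assumptions, not any real firewall's configuration syntax.

RULES = [
    {"proto": "tcp", "dst_port": 22,   "action": "accept"},   # allow SSH
    {"proto": "tcp", "dst_port": 23,   "action": "reject"},   # block telnet, tell the sender
    {"proto": "udp", "dst_port": None, "action": "drop"},     # silently drop all other UDP
]
DEFAULT_ACTION = "drop"        # anything not matched by a rule is dropped

def evaluate(packet):
    # First matching rule wins; a dst_port of None in a rule matches any port.
    for rule in RULES:
        if rule["proto"] == packet["proto"] and rule["dst_port"] in (None, packet["dst_port"]):
            return rule["action"]
    return DEFAULT_ACTION

print(evaluate({"proto": "tcp", "dst_port": 22}))   # accept
print(evaluate({"proto": "tcp", "dst_port": 23}))   # reject (sender gets an "unreachable" reply)
print(evaluate({"proto": "udp", "dst_port": 53}))   # drop (no reply at all)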
Dial-up Internet access is a form of Internet access that uses the facilities of
the public switched telephone network (PSTN) to establish a connection to
an Internet service provider (ISP) by dialing a telephone number on a
conventional telephone line which could be connected using an RJ-11
connector.[1] Dial-up connections use modems to decode audio signals into
data to send to a router or computer, and to encode signals from the latter
two devices to send to another modem at the ISP.
Dial-up Internet reached its peak popularity during the dot-com bubble with
the likes of ISPs such as Sprint, EarthLink, MSN Dial-up, NetZero, Prodigy, and
America Online (more commonly known as AOL). This was in large part
because broadband Internet did not become widely used until well into the
2000s. Since then, most dial-up access has been replaced by broadband.
Modems
Banks of modems used by an ISP to provide dial-
up Internet service
Because there was no technology to allow different carrier signals on a
telephone line at the time, dial-up Internet access relied on using audio
communication. A modem would take the digital data from a
computer, modulate it into an audio signal and send it to a receiving modem.
This receiving modem would demodulate the analogue signal
back into digital data for the computer to process.[14]
The simplicity of this arrangement meant that people would be unable to use
their phone line for verbal communication until the Internet call was finished.
The Internet speed using this technology can drop to 21.6 kbit/s or less. Poor
condition of the telephone line, high noise level and other factors all affect
dial-up speed. For this reason, it is popularly called the 21600 Syndrome. [15][16]
Availability
Dial-up connections to the Internet require no additional infrastructure other
than the telephone network and the modems and servers needed to make
and answer the calls. Because telephone access is widely available, dial-up is
often the only choice available for rural or remote areas,
where broadband installations are not prevalent due to low population
density and high infrastructure cost.[11]
A 2008 Pew Research Center study stated that only 10% of US adults still used
dial-up Internet access. The study found that the most common reason for
retaining dial-up access was high broadband prices. Users cited lack of
infrastructure as a reason less often than stating that they would never
upgrade to broadband.[17] That number had fallen to 6% by 2010,[18] and to 3%
by 2013.[19]
A survey conducted in 2018 estimated that 0.3% of Americans were using
dial-up by 2017.[20]
The CRTC estimated that there were 336,000 Canadian dial-up users in 2010.[21]
Replacement by broadband
Broadband Internet access via cable, digital subscriber line, wireless
broadband, mobile broadband, satellite and FTTx has replaced dial-up access
in many parts of the world. Broadband connections typically offer speeds of
700 kbit/s or higher for two-thirds more than the price of dial-up on average.[18]
In addition, broadband connections are always on, thus avoiding the need
to connect and disconnect at the start and end of each session. Broadband
does not require the exclusive use of a phone line, and thus one can access
the Internet and at the same time make and receive voice phone calls
without having a second phone line.
However, many rural areas remain without high-speed Internet, despite the
eagerness of potential customers. This can be attributed to population,
location, or sometimes ISPs' lack of interest due to little chance of
profitability and high costs to build the required infrastructure. Some dial-up
ISPs have responded to the increased competition by lowering their rates and
making dial-up an attractive option for those who merely want email access
or basic Web browsing.[22][23]
Dial-up has seen a significant fall in usage, with the potential to cease to exist
in future as more users switch to broadband.[citation needed] In 2013, only about
3% of the U.S. population used dial-up, compared to 30% in 2000.[24] One
contributing factor is the bandwidth requirements of newer computer
programs, like operating systems and antivirus software, which automatically
download sizeable updates in the background when a connection to the
Internet is first made. These background downloads can take several minutes
or longer and, until all updates are completed, they can severely impact the
amount of bandwidth available to other applications like Web browsers.
Since an "always on" broadband is the norm expected by most newer
applications being developed,[citation needed] this automatic background
downloading trend is expected to continue to eat away at dial-up's available
bandwidth to the detriment of dial-up users' applications. [25] Many newer
websites also now assume broadband speeds as the norm, and when
connected to with slower dial-up speeds may drop (timeout) these slower
connections to free up communication resources. On websites that are
designed to be more dial-up friendly, use of a reverse proxy prevents dial-ups
from being dropped as often but can introduce long wait periods for dial-up
users caused by the buffering used by a reverse proxy to bridge the different
data rates.
Despite the rapid decline, dial-up Internet still exists in some rural areas, and
many areas of developing and underdeveloped nations, although wireless
and satellite broadband are providing faster connections in many rural areas
where fibre or copper may be uneconomical.[citation needed]
In 2010, it was estimated that there were 800,000 dial-up users in the
UK. BT turned off its dial-up service in 2013.[26]
In 2012, it was estimated that 7% of Internet connections in New Zealand
were dial-up. One NZ (formerly Vodafone) turned off its dial-up service in
2021.[27][28]
Performance
An example handshake of a dial-up modem
Modern dial-up modems typically have a maximum theoretical transfer
speed of 56 kbit/s (using the V.90 or V.92 protocol), although in most cases,
40–50 kbit/s is the norm. Factors such as phone line noise as well as the
quality of the modem itself play a large part in determining connection
speeds.[citation needed]
Some connections may be as low as 20 kbit/s in extremely noisy
environments, such as in a hotel room where the phone line is shared with
many extensions, or in a rural area, many kilometres from the phone
exchange. Other factors such as long loops, loading coils, pair gain, electric
fences (usually in rural locations), and digital loop carriers can also slow
connections to 20 kbit/s or lower.
[The dial-up sounds are] a choreographed sequence that allowed these
digital devices to piggyback on an analog telephone network. A phone line
carries only the small range of frequencies in which most human
conversation takes place: about three hundred to three thousand hertz. The
modem works within these [telephone network] limits in creating sound
waves to carry data across phone lines. What you're hearing is the way 20th
century technology tunneled through a 19th century network; what you're
hearing is how a network designed to send the noises made by your muscles
as they pushed around air came to transmit anything [that can be] coded in
zeroes and ones.
-Alexis Madrigal, paraphrasing Glenn Fleishman[29]
Analog telephone lines are digitally switched and transported inside a Digital
Signal 0 once reaching the telephone company's equipment. Digital Signal 0 is
64 kbit/s and reserves 8 kbit/s for signaling information; therefore a 56 kbit/s
connection is the highest that will ever be possible with analog phone lines.
Dial-up connections usually have latency as high as 150 ms or even more,
higher than many forms of broadband, such as cable or DSL, but typically less
than satellite connections. Longer latency can make video
conferencing and online gaming difficult, if not impossible. An increasing
amount of Internet content such as streaming media will not work at dial-up
speeds.
Video games released from the mid-1990s to the mid-2000s that utilized
Internet access such as EverQuest, Red Faction, Warcraft 3, Final Fantasy
XI, Phantasy Star Online, Guild Wars, Unreal Tournament, Halo: Combat
Evolved, Audition, Quake 3: Arena, Starsiege: Tribes and Ragnarok Online,
etc., accommodated for 56k dial-up with limited data transfer between the
game servers and user's personal computer. The first consoles to provide
Internet connectivity, the Dreamcast and PlayStation 2, supported dial-up as
well as broadband. The GameCube could use dial-up and broadband
connections, but this was used in very few games and required a separate
adapter. The original Xbox exclusively required a broadband connection.
Many computer and video games released since 2006 do not even include the
option to use dial-up. However, there are exceptions to this, such as Vendetta
Online, which can still run on a dial-up modem.
Chp-10
Defining network security
The simple definition of network security is any combination of hardware and
software products that operate in Layers 3 and 4 -- the network and transport
layers -- of the OSI stack, with the primary function to manage access to the
corporate network and network-embedded resources. Network security acts
as a gatekeeper that permits entry to authorized users and detects and
prevents unauthorized access and anything that tries to infiltrate the network
to cause harm or compromise data.
Network security is not one-size-fits-all, as it typically comprises different
components. Below, we explore nine elements of network security and their
roles in a security strategy. Please note that these components are not
mutually exclusive, as many features and technologies overlap in various
suppliers' offerings.
1. Network firewall
Firewalls are the first line of defense in network security. These network
applications or devices monitor and control the flow of incoming and
outgoing network traffic between a trusted internal network and untrusted
external networks. Network traffic is evaluated based on state, port and
protocol, with filtering decisions made based on both administrator-defined
security policy and static rules.
Firewalls make up the single largest segment of the network security market,
according to Doyle Research and Security Mindsets. In 2019, firewalls of all
types were responsible for about 40% of network security spending, around
$8 billion.
2. Intrusion prevention system
Although many IPS features have been incorporated into next-generation firewalls (NGFWs) and unified threat management (UTM) appliances, the standalone IPS market is still responsible for 10% of network security spending.
Representative vendors: Alert Logic, Check Point Software, Cisco, McAfee and
Trend Micro
3. Unified threat management
A UTM product integrates multiple networking and network security
functions into a single appliance, while offering consolidated
management. UTM devices must include network routing, firewalling,
network intrusion prevention and gateway antivirus. They generally offer
many other security applications, such as VPN, remote access, URL filtering
and quality of service. Unified management of all these functions is required,
as the converged platform is designed to increase overall security, while
reducing complexity.
UTM devices are best suited for SMBs and for branch and remote sites. UTM
products are the second-largest network security category with over $5
billion in spending.
Chp-11
Wireless Links and Network Characteristics
A number of important differences between a wired link and a wireless
link:
o Decreasing signal strength:
Electromagnetic radiation attenuates as it passes through
matter. Even in free space, the signal will disperse,
resulting in decreased signal strength as the distance
between sender and receiver increases.
o Interference from other sources:
Radio sources transmitting in the same frequency band
will interfere with each other.
In addition to interference from transmitting sources,
electromagnetic noise within the environment can result
in interference.
o Multipath propagation:
It occurs when portions of the electromagnetic wave
reflect off objects and the ground, taking paths of different
lengths between a sender and receiver. Moving objects
between the sender and receiver can cause multipath
propagation to change over time.
Wireless links employ powerful CRC error detection codes and link-
level reliable-data-transfer protocols that retransmit corrupted frames,
because bit errors are more common on wireless links.
The host receives an electromagnetic signal that is a combination of a
degraded form of the original signal transmitted by the sender and
background noise in the environment.
o The Signal-to-noise ratio (SNR) is a relative measure of the
strength of the received signal and this noise.
o The SNR is typically measured in dB.
It is 20 times the base-10 logarithm of the ratio of the
amplitude of the received signal to the amplitude of the
noise: SNR (dB) = 20 * log10(A_signal / A_noise). (A small
calculation sketch appears after this list.)
A larger SNR makes it easier for the receiver to extract the
transmitted signal from the background noise.
BER = Bit error rate
Physical-layer characteristics that are important to understand for
higher-layer wireless communication protocols:
o For a given modulation scheme, the higher the SNR, the lower
the BER:
Since a sender can increase the SNR by increasing its
transmission power, a sender can decrease the probability
that a frame is received in error by increasing its
transmission power.
There’s little gain in increasing the power beyond a
certain threshold.
A disadvantage associated with increasing the
transmission power is that it costs more energy for the
sender and the sender’s transmissions are more likely to
interfere with transmissions of another sender.
o For a given SNR, a modulation technique with a higher bit
transmission rate will have a higher BER.
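As a quick sketch of the dB calculation defined above (the amplitudes are assumed example values):

import math

def snr_db(signal_amplitude: float, noise_amplitude: float) -> float:
    # SNR in dB = 20 * log10(A_signal / A_noise), per the definition above.
    return 20 * math.log10(signal_amplitude / noise_amplitude)

# Assumed example: a 100 mV received signal over 1 mV of noise -> 40 dB.
print(snr_db(0.100, 0.001))  # 40.0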
Issues arise when mobility solutions for macro-mobility, such as Mobile IP,
are adopted for micro-mobility. The base Mobile IP mechanism introduces
significant network overhead in terms of delay, packet loss, and signaling. For
example, real-time wireless applications such as Voice over IP (VoIP) would
suffer degradation of service due to frequent handoff [Inayat03]. Micro-
mobility solutions are proposed for localized mobility in a domain. These
proposals focus on reducing the handoff latency, but they introduce additional
overheads due to control traffic, as they have to maintain routing information
at the local network and are also heavy on the address space [Campbell02].
2.1.3 IP Mobile Multicasting
Instead of sending data to a single node, multicasting delivers data to a set of
selected receivers. In IP multicast, a source sends a single copy of a packet
and the network duplicates the packet as needed until the packet reaches all
the selected receivers. This avoids the overheads associated with both
replication of packets at the source and sending duplicated packets over the
same link. [Romdhani04] is a good survey paper for IP mobile multicast.
2.2 Requirements of Mobility
The main goal of mobility solutions is to let the MN and the networks
continue communicating while the MN moves, and to avoid disrupting the
connections. When a MN moves from one place to another, a mobility
solution should provide mechanisms to handle the handover and the
subsequent routing of packets, in order to support seamless connectivity and
continuous reachability. Proposals for mobility should have the following
properties [Atiquzzaman05][Zhuang03][Henderson03]:
Most solutions proposed so far (e.g., Mobile IP) are based on the idea of
indirection points between the MN and the CN, so that the CN does not need
to know the topological location of the MN; it simply sends packets to the
indirection points. These approaches do not require changing fixed hosts in
the Internet, but they require changing the underlying IP substrate. Some
other solutions emphasize an end-to-end architecture (e.g., TCP Migrate);
these do not require changes to the underlying IP substrate [Snoeren00].
Here we will introduce some mobility solutions that have existed for some
time and are widely used or referred to.
2.3.1 Mobile IP
The most widely known mobility solution today is Mobile IP, which was
developed by the IETF to support mobility on the Internet. Mobile IP aims to
allow a MN to continue communicating with its CN during its movement.
It supports network-layer mobility, so TCP is not aware of the mobility.
In Mobile IP, the Home Agent (HA) is used as the indirection point. The HA is
in the MN's home network, and it intercepts and tunnels packets to the MN. A
Mobile Node (MN) has a permanent Home Address (HoA) from its home
network and obtains a temporary Care-of Address (COA), which is routable
within the foreign network, when it moves to a new network. The MN
registers its COA with the HA in its home network every time it obtains a new
COA; this is called the registration process. To maintain the transport and
higher-level communications while moving, the MN keeps its HoA and uses
the COA for routing purposes. A binding associates these two addresses on
both the MN side and the HA side.
Both Mobile IPv4 and Mobile IPv6 are based on the above ideas, and they
share many features. However, Mobile IPv6 offers some improvements:
Parameter | 2.4 GHz | 5 GHz
Range | High | Comparatively low
What is Bluetooth?
Bluetooth is a wireless communication technology that allows devices to
exchange data over short distances without the need for cables. It’s
commonly used for connecting devices like wireless headphones, keyboards,
and mice to computers and smartphones. Bluetooth is also key in smart
home devices and car systems. Developed by the Bluetooth Special Interest
Group (SIG), this technology ensures secure and reliable connections with
low power consumption, making it a crucial feature in today’s electronic
devices.
What is Zigbee?
Zigbee is a wireless communication protocol designed for low-power, low-
data-rate applications, commonly used in home automation and industrial
settings. It allows smart devices like lights, thermostats, and security systems
to communicate with each other efficiently. Developed by the Zigbee
Alliance, this technology emphasizes simplicity and reliability, providing
secure and scalable networks for various smart applications. Zigbee’s low
power consumption makes it ideal for battery-operated devices, ensuring
long-lasting and energy-efficient performance in modern smart homes
and IoT systems.
IoT Protocols Comparison
Comparing the two protocols, the data transfer rate is higher in Bluetooth
than in Zigbee, whereas Zigbee covers a larger distance than Bluetooth.
Bluetooth and Zigbee have a lot in common: both are types of IEEE 802.15
WPANs, both run in the 2.4 GHz unlicensed band, and both use small form
factors and low power. Besides these similarities, there are some differences,
which are given below in tabular form.
Comparison Between Bluetooth and ZigBee
Bluetooth | Zigbee
It uses the GFSK modulation technique. | It uses BPSK and QPSK modulation techniques (like UWB).
The radio signal range of Bluetooth is ten meters. | The radio signal range of ZigBee is ten to a hundred meters.
Cell Splitting
Need for Cellular Hierarchy
Extending the coverage to the areas that are difficult to cover by a large cell.
Increasing the capacity of the network for those areas that have a higher
density of users. An increasing number of wireless devices and the
communication between them.
Cellular Hierarchy
Femtocells: The smallest unit of the hierarchy, these cells need to cover
only a few meters where all devices are in the physical range of the
user.
Picocells: The size of these networks is in the range of a few tens of
meters, e.g., WLANs.
Microcells: Cover a range of hundreds of meters e.g. in urban areas to
support PCS which is another kind of mobile technology.
Macrocells: Cover areas in the order of several kilometers, e.g., cover
metropolitan areas.
Mega cells: Cover nationwide areas with ranges of hundreds of
kilometers, e.g., used with satellites.
Fixed Channel Allocation
Adjacent radio frequency bands are assigned to different cells. In analog,
each channel corresponds to one user while in digital each RF channel carries
several time slots or codes (TDMA/CDMA). It is simple to implement when
traffic is uniform.
Global System for Mobile (GSM) Communications
GSM uses 124 frequency channels, each of which uses an 8-slot Time Division
Multiplexing (TDM) system. There is a frequency band that is also fixed.
Transmitting and receiving do not happen in the same time slot because the
GSM radios cannot transmit and receive at the same time and it takes time to
switch from one to the other. A data frame is transmitted in 547
microseconds, but a transmitter is only allowed to send one data frame every
4.615 milliseconds, since it is sharing the channel with seven other stations.
The gross rate of each channel is 270,833 bps, divided among eight users,
which gives 33.854 kbps gross per user.
Control Channel (CC)
Apart from user channels, there are some control channels which are used to
manage the system.
1. The broadcast control channel (BCC): It is a continuous stream of
output from the base station containing the base station's identity and
the channel status. All mobile stations monitor its signal strength to see
when they have moved into a new cell.
2. The dedicated control channel (DCC): It is used for location updating,
registration, and call setup. In particular, each base station maintains a
database of mobile stations. Information needed to maintain this
database is sent on the dedicated control channel.
Common Control Channel
Three logical sub-channels are:
1. The paging channel, which the base station uses to announce incoming
calls. Each mobile station monitors it continuously to watch for calls it
should answer.
2. The random access channel, which allows users to request a slot on
the dedicated control channel. If two requests collide, they are garbled
and have to be retried later.
3. The access grant channel, on which the assigned slot is announced.
Advantages of Cellular Networks
Mobile and fixed users can connect using it; voice and data services are also provided.
It has increased capacity and is easy to maintain.
It is easy to upgrade the equipment, and it consumes less power.
It can be used in places where cables cannot be laid, because it is wireless.
It supports the features and functions of most private and public networks.
Coverage can be distributed over larger areas.
Disadvantages of Cellular Networks
It provides a lower data rate than wired networks like fiber optics and
DSL. The data rate changes depending on wireless technologies like
GSM, CDMA, LTE, etc.
Macrocells are impacted by multipath signal loss.
To service customers, there is a limited capacity that depends on the
channels and different access techniques.
Due to the wireless nature of the connection, security issues exist.
For the construction of antennas for cellular networks, a foundation
tower and space are required. It takes a lot of time and labor to do this.
Chp-6
Network Layer
o The Network Layer is the third layer of the OSI model.
o It handles the service requests from the transport layer and further
forwards the service request to the data link layer.
o The network layer translates the logical addresses into physical
addresses
o It determines the route from the source to the destination and also
manages the traffic problems such as switching, routing and controls
the congestion of data packets.
o The main role of the network layer is to move the packets from sending
host to the receiving host.
The main functions performed by the network layer are:
o Routing: When a packet reaches the router's input link, the router must
move it to the appropriate output link. For example, a packet from
source S1 destined for S2 must be forwarded by router R1 to the next
router on the path to S2.
o Logical Addressing: The data link layer implements the physical
addressing and network layer implements the logical addressing.
Logical addressing is also used to distinguish between source and
destination system. The network layer adds a header to the packet
which includes the logical addresses of both the sender and the
receiver.
o Internetworking: This is the main role of the network layer that it
provides the logical connection between different types of networks.
o Fragmentation: The fragmentation is a process of breaking the packets
into the smallest individual data units that travel through different
networks.
IP stands for Internet Protocol and v4 stands for Version Four; IPv4 is the
most widely used system for identifying devices on a network. It uses a set of
four numbers, separated by periods (like 192.168.0.1), to give each device a
unique address. This address helps data find its way from one device to
another over the internet.
IPv4 was the first version brought into production use, within the
ARPANET in 1983. IP version four addresses are 32-bit integers, usually
expressed in dotted decimal notation. Example: 192.0.2.126 is an IPv4
address.
Parts of IPv4
IPv4 addresses consist of three parts:
Network Part: The network part is the unique number assigned to the
network. The network part also identifies the class of the network that
is assigned.
Host Part: The host part uniquely identifies the machine on your
network. This part of the IPv4 address is assigned to every host.
For each host on the network, the network part is the same, but
the host part must vary.
Subnet Number: This is an optional part of IPv4. Local networks
that have large numbers of hosts are divided into subnets,
and subnet numbers are assigned to them.
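A small sketch of splitting an IPv4 address into its network and host parts using Python's standard ipaddress module (the address and prefix length are assumed example values):

import ipaddress

# Assumed example: a /24 network, so the first 24 bits form the network part
# and the remaining 8 bits form the host part.
iface = ipaddress.ip_interface("192.168.10.37/24")

print(iface.network)                        # 192.168.10.0/24 -> network part
print(int(iface.ip) & int(iface.hostmask))  # 37 -> host part within that network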
Characteristics of IPv4
IPv4 is a 32-bit IP address.
IPv4 is a numeric address, and its parts (octets) are separated by dots.
The number of header fields is twelve, and the minimum header length is twenty bytes.
It has unicast, broadcast, and multicast-style addresses.
IPv4 supports VLSM (Variable Length Subnet Mask).
IPv4 uses the Address Resolution Protocol (ARP) to map IP addresses to MAC addresses.
RIP is a routing protocol supported by the routed daemon.
Networks must be configured either manually or with DHCP.
Packet fragmentation is permitted at routers and at sending hosts.
Advantages of IPv4
IPv4 security permits encryption to maintain privacy and security.
IPv4 network allocation is significant and currently has more than 85,000 practical routers.
It is easy to connect multiple devices across a large network without NAT.
It is a communication model that provides quality of service as well as economical data transfer.
IPv4 addresses are well defined and permit straightforward encoding.
Routing is scalable and economical because addresses are aggregated more effectively.
Data communication across the network becomes more specific in multicast organizations.
Disadvantages of IPv4
o Limits Internet growth for existing users and hinders the use of the
Internet for new users.
o Internet routing is inefficient in IPv4.
o IPv4 has high system management costs; it is labor-intensive,
complex, slow and prone to errors.
o Security features are optional.
o It is difficult to add support for future needs, because adding
anything on top of IPv4 carries very high overhead, which hinders the
ability to connect everything over IP.
Limitations of IPv4
IP relies on network layer addresses to identify end-points on the
network, and each network has a unique IP address.
The world’s supply of unique IP addresses is dwindling, and they might
eventually run out theoretically.
If there are multiple hosts, we need the IP addresses of the next class.
Complex host and routing configuration, non-hierarchical addressing,
difficulty in renumbering addresses, large routing tables, non-trivial
implementation of security, QoS (Quality of Service), mobility,
multi-homing, multicasting, etc. are the big limitations of IPv4; that is
why IPv6 came into the picture.
IPv6
The most common version of the Internet Protocol currently in use, IPv4, will
soon be replaced by IPv6, a new version of the protocol. The well-known IPv6
protocol is being used and deployed more often, especially in mobile phone
markets. IP address determines who and where you are in the network of
billions of digital devices that are connected to the Internet.
IPv6 or Internet Protocol Version 6 is a network layer protocol that allows
communication to take place over the network. IPv6 was designed by the
Internet Engineering Task Force (IETF) in December 1998 with the purpose of
superseding IPv4 due to the globally and exponentially growing number of Internet users.
What is IPv6?
The next generation Internet Protocol (IP) address standard, known as IPv6, is
meant to work in tandem with IPv4, which is still in widespread use today,
and eventually replace it. To communicate with other devices, a computer,
smartphone, home automation component, Internet of Things sensor, or any
other Internet-connected device needs a numerical IP address. Because so
many connected devices are being used, the original IP address scheme,
known as IPv4, is running out of addresses.
What is IPv4?
The common type of IP address is known as IPv4, for “version 4”. Here’s an
example of what an IP address might look like:
25.59.209.224
An IPv4 address consists of four numbers, each of which contains one to
three digits, with a single dot (.) separating each number or set of digits. This
group of separated numbers creates the addresses that let you and everyone
around the globe send and retrieve data over our Internet connections.
IPv4 uses a 32-bit address scheme, allowing 2^32 addresses, which is more
than 4 billion. To date, it is considered the primary Internet Protocol and
carries 94% of Internet traffic. Initially, it was assumed that we would never
run out of addresses, but the present situation paves the way for IPv6;
let’s see why. An IPv6 address consists of eight groups of
four hexadecimal digits. Here’s an example IPv6 address:
3001:0da8:75a3:0000:0000:8a2e:0370:7334
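A small illustration using Python's standard ipaddress module with the two example addresses above, showing the 32-bit versus 128-bit sizes:

import ipaddress

v4 = ipaddress.ip_address("25.59.209.224")
v6 = ipaddress.ip_address("3001:0da8:75a3:0000:0000:8a2e:0370:7334")

print(v4.version, len(v4.packed) * 8)   # 4  32  -> IPv4 addresses are 32 bits
print(v6.version, len(v6.packed) * 8)   # 6 128  -> IPv6 addresses are 128 bits
print(v6.compressed)                    # shortened form: 3001:da8:75a3::8a2e:370:7334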
IPv6 vs IPv4
This new IP address version is being deployed to fulfil the need for more
Internet addresses. With 128-bit address space, it allows 340 undecillion
unique address space.
IPv6 supports a theoretical maximum of
340,282,366,920,938,463,463,374,607,431,768,211,456 addresses (2^128). To
keep it straightforward, we will never run out of IP addresses again.
The next iteration of the IP standard is known as Internet Protocol version 6
(IPv6). Although IPv4 and IPv6 will coexist for a while, IPv6 is meant to work
in tandem with IPv4 before eventually taking its place. We need to
implement IPv6 in order to keep bringing new gadgets and services to the
Internet. Only by implementing IPv6, which was created with the needs of a
global commercial Internet in mind, can we move forward with an innovative
and open Internet.
2. Reverse Address Resolution Protocol (RARP) –
Reverse ARP is a networking protocol used by a client machine in a local area
network to request its Internet Protocol address (IPv4) from the gateway-
router’s ARP table. The network administrator creates a table in gateway-
router, which is used to map the MAC address to corresponding IP address.
When a new machine is set up, or a machine that has no storage for its IP
address needs an IP address for its own use, the machine sends a RARP
broadcast packet which contains its own MAC address in both the sender and
receiver hardware address fields.
A special host configured inside the local area network, called the RARP
server, is responsible for replying to these kinds of broadcast packets. The
RARP server then attempts to find the entry in its IP-to-MAC address mapping
table. If an entry matches, the RARP server sends a response packet to the
requesting device along with the IP address.
LAN technologies like Ethernet, Ethernet II, Token Ring and Fiber
Distributed Data Interface (FDDI) support the Address Resolution
Protocol.
RARP is no longer used in today’s networks, because we have more
feature-rich protocols like BOOTP (Bootstrap Protocol) and
DHCP (Dynamic Host Configuration Protocol).
3. Inverse Address Resolution Protocol (InARP) –
Instead of using Layer-3 address (IP address) to find MAC address, Inverse
ARP uses MAC address to find IP address. As the name suggests, InARP is just
inverse of ARP. Reverse ARP has been replaced by BOOTP and later DHCP but
Inverse ARP is solely used for device configuration. Inverse ARP is enabled by
default in ATM(Asynchronous Transfer Mode) networks. InARP is used to find
Layer-3 address from Layer-2 address (DLCI in frame relay). Inverse ARP
dynamically maps local DLCIs to remote IP addresses when you configure
Frame Relay. When using Inverse ARP, we know the DLCI of the remote router but
don’t know its IP address. InARP sends a request to obtain that IP address
and map it to the Layer-2 frame-relay DLCI.
4. Proxy ARP –
Proxy ARP was implemented to enable devices which are separated into
network segments connected by a router in the same IP network or sub-
network to resolve IP address to MAC addresses. When devices are not in
same data link layer network but are in the same IP network, they try to
transmit data to each other as if they were on the local network. However,
the router that separates the devices will not send a broadcast message
because routers do not pass hardware-layer broadcasts. Therefore, the
addresses cannot be resolved. Proxy ARP is enabled by default so the “proxy
router” that resides between the local networks responds with its MAC
address as if it were the router to which the broadcast is addressed. When
the sending device receives the MAC address of the proxy router, it sends the
datagram to the proxy router, which in turn sends the datagram to the
designated device.
5. Gratuitous ARP –
Gratuitous Address Resolution Protocol is used in advanced network scenarios.
It is typically performed by a computer while booting up. When the
computer boots up (i.e., when the Network Interface Card is powered) for the
first time, it automatically broadcasts its MAC address to the entire network.
After a Gratuitous ARP, the MAC address of the computer is known to every
switch, and this allows DHCP servers to know where to send the IP address if requested.
Gratuitous ARP could mean both a Gratuitous ARP request and a Gratuitous ARP
reply, though a reply is not needed in all cases. A Gratuitous ARP request is a
packet where the source and destination IP are both set to the IP of the machine
issuing the packet and the destination MAC is the broadcast address
ff:ff:ff:ff:ff:ff; no reply packet will occur. A Gratuitous ARP reply is an ARP reply
that was not prompted by an ARP request. Gratuitous ARP is useful for detecting
IP conflicts, and it is also used to update the ARP mapping table and the switch
port MAC address table.
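To make the packet layout described above concrete, here is a minimal sketch that builds the bytes of a gratuitous ARP request frame in Python (the MAC and IP values are assumptions; the frame is only printed, not sent on any interface):

import struct

def gratuitous_arp_frame(src_mac: bytes, src_ip: bytes) -> bytes:
    # Ethernet header: destination = broadcast, source = our MAC, EtherType 0x0806 (ARP).
    broadcast = b"\xff" * 6
    eth_header = broadcast + src_mac + struct.pack("!H", 0x0806)
    # ARP body: opcode 1 (request), sender and target protocol address both set to our own IP.
    arp_body = struct.pack(
        "!HHBBH6s4s6s4s",
        1,          # hardware type: Ethernet
        0x0800,     # protocol type: IPv4
        6, 4,       # hardware / protocol address lengths
        1,          # opcode: request
        src_mac,    # sender hardware address
        src_ip,     # sender protocol address (our own IP)
        broadcast,  # target hardware address (broadcast; some stacks use all zeros)
        src_ip,     # target protocol address (our own IP again)
    )
    return eth_header + arp_body

frame = gratuitous_arp_frame(bytes.fromhex("001122334455"), bytes([192, 168, 1, 10]))
print(len(frame), frame.hex())   # 42-byte Ethernet + ARP frame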
DHCP helps in managing the entire process automatically and centrally. DHCP
helps in maintaining a unique IP Address for a host using the server. DHCP
servers maintain information on TCP/IP configuration and provide
configuration of address to DHCP-enabled clients in the form of a lease offer.
Components of DHCP
The main components of DHCP include:
DHCP Server: DHCP Server is a server that holds IP Addresses and other
information related to configuration.
DHCP Client: It is a device that receives configuration information from
the server. It can be a mobile, laptop, computer, or any other electronic
device that requires a connection.
DHCP Relay: DHCP relays basically work as a communication channel
between DHCP Client and Server.
IP Address Pool: It is the pool or container of IP Addresses possessed by
the DHCP Server. It has a range of addresses that can be allocated to
devices.
Subnets: Subnets are smaller portions of the IP network partitioned to
keep networks under control.
Lease: It is simply the length of time for which the information received
from the server is valid; when the lease expires, the client must
renew it.
DNS Servers: DHCP servers can also provide DNS (Domain Name
System) server information to DHCP clients, allowing them to resolve
domain names to IP addresses.
Default Gateway: DHCP servers can also provide information about the
default gateway, which is the device that packets are sent to when the
destination is outside the local network.
Options: DHCP servers can provide additional configuration options to
clients, such as the subnet mask, domain name, and time server
information.
Renewal: DHCP clients can request to renew their lease before it
expires to ensure that they continue to have a valid IP address and
configuration information.
Failover: DHCP servers can be configured for failover, where two
servers work together to provide redundancy and ensure that clients
can always obtain an IP address and configuration information, even if
one server goes down.
Dynamic Updates: DHCP servers can also be configured to dynamically
update DNS records with the IP address of DHCP clients, allowing for
easier management of network resources.
Audit Logging: DHCP servers can keep audit logs of all DHCP
transactions, providing administrators with visibility into which devices
are using which IP addresses and when leases are being assigned or
renewed.
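To make the lease and renewal ideas above concrete, here is a tiny sketch (the field names and the renew-at-half-lease convention, DHCP's T1 timer, are the only assumptions added for illustration):

from dataclasses import dataclass
import time

@dataclass
class Lease:
    ip: str
    mac: str
    granted_at: float   # seconds since the epoch
    duration: float     # lease time in seconds

    def seconds_left(self) -> float:
        return (self.granted_at + self.duration) - time.time()

    def should_renew(self) -> bool:
        # Clients conventionally try to renew at about half the lease time (T1).
        return self.seconds_left() <= self.duration / 2

lease = Lease(ip="192.168.1.50", mac="00:11:22:33:44:55",
              granted_at=time.time(), duration=3600)
print(lease.seconds_left() > 0, lease.should_renew())   # True False (lease just granted)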
DHCP Packet Format
Intradomain Routing vs. Interdomain Routing
Routing is the process of establishing the routes that data packets must
follow to reach the destination. In this process, a routing table is created
which contains information regarding routes that data packets follow. Various
routing algorithms are used for the purpose of deciding which route an
incoming data packet needs to be transmitted on to reach the destination
efficiently.
It refers to the algorithms that help to find the shortest path between a
sender and receiver for routing the data packets through the network in
terms of shortest distance, minimum cost, and minimum time.
It is mainly for building a graph or subnet containing routers as nodes
and edges as communication lines connecting the nodes.
Hop count is one of the parameters that is used to measure the
distance.
Hop count: It is the number that indicates how many routers are
covered. If the hop count is 6, there are 6 routers/nodes and the edges
connecting them.
Another metric is a geographic distance like kilometers.
We can find the label on the arc as the function of bandwidth, average
traffic, distance, communication cost, measured delay, mean queue
length, etc.
Common Shortest Path Algorithms
Dijkstra’s Algorithm
Bellman Ford’s Algorithm
Floyd Warshall’s Algorithm
Dijkstra’s Algorithm
Dijkstra’s Algorithm is a greedy algorithm that is used to find the
minimum distance between a node and all other nodes in a given graph. Here
we can consider a node as a router and the graph as a network. It uses the
weight of an edge, i.e., the distance between the nodes, to find a
minimum-distance route.
Algorithm:
1: Mark the source node current distance as 0 and all others as infinity.
2: Set the node with the smallest current distance among the non-visited
nodes as the current node.
3: For each neighbor, N, of the current node:
Calculate the potential new distance by adding the current distance of
the current node with the weight of the edge connecting the current
node to N.
If the potential new distance is smaller than the current distance of
node N, update N's current distance with the new distance.
4: Mark the current node as visited.
5: If we find any unvisited node, go to step 2 to find the next node which has
the smallest current distance and continue this process.
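The steps above can be sketched in a few lines of Python (the graph, node names, and weights below are assumed example values, not the graph used in the original example):

import heapq

def dijkstra(graph, source):
    # graph: dict mapping node -> list of (neighbor, edge weight)
    dist = {node: float("inf") for node in graph}   # step 1: every distance is infinity
    dist[source] = 0                                # ... except the source
    visited = set()
    pq = [(0, source)]                              # priority queue of (distance, node)
    while pq:
        d, u = heapq.heappop(pq)                    # step 2: smallest current distance
        if u in visited:
            continue
        visited.add(u)                              # step 4: mark as visited
        for v, w in graph[u]:                       # step 3: relax each neighbor
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

graph = {0: [(1, 4), (2, 1)], 1: [(3, 1)], 2: [(1, 2), (3, 5)], 3: []}
print(dijkstra(graph, 0))   # {0: 0, 1: 3, 2: 1, 3: 4}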
Example:
Consider a graph G (figure: Graph G). Starting from node 0, we relax (update
the distances of) its nearest neighbours, nodes 1 and 2, first (figure: step 1).
Similarly, we relax the remaining nodes one by one, making sure no cycle is
formed and keeping track of the visited nodes (figures: steps 3 and 5).
Bellman Ford’s Algorithm
The Bellman-Ford algorithm is a single-source shortest-path algorithm that
helps us find the shortest path between a source vertex and any other
vertex in a given graph. We can use it in both weighted and unweighted
graphs. This algorithm is slower than Dijkstra's algorithm, but it can also
handle negative edge weights.
Algorithm
1: First we Initialize all vertices v in a distance array dist[] as INFINITY.
2: Then we pick the source vertex as vertex 0 and assign dist[0] = 0.
3: Then iteratively update the minimum distance to each node (dist[v]) by
comparing it with the sum of the distance from the source node (dist[u]) and
the edge weight (weight) N-1 times.
4: To identify the presence of a negative edge cycle, do one more round of
edge relaxation and check the following cases:
We can say that a negative cycle exists if, for any edge u-v, the sum of
the distance of the source node of the edge (dist[u]) and the edge
weight is less than the current distance of the target node (dist[v]).
If none of the edges satisfies case 1, there is no negative edge cycle.
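A short Python sketch of the steps above, including the extra relaxation round used to detect a negative cycle (the edge list is an assumed example):

def bellman_ford(num_vertices, edges, source):
    # edges: list of (u, v, weight) tuples
    INF = float("inf")
    dist = [INF] * num_vertices        # step 1: initialize every distance to INFINITY
    dist[source] = 0                   # step 2: the source gets distance 0
    for _ in range(num_vertices - 1):  # step 3: relax all edges N-1 times
        for u, v, w in edges:
            if dist[u] != INF and dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # step 4: one more pass; any further improvement means a negative cycle exists
    has_negative_cycle = any(
        dist[u] != INF and dist[u] + w < dist[v] for u, v, w in edges
    )
    return dist, has_negative_cycle

edges = [(0, 1, 4), (0, 2, 5), (1, 2, -3), (2, 3, 2)]   # one negative (non-cyclic) edge
print(bellman_ford(4, edges, 0))   # ([0, 4, 1, 3], False)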
Flooding –
Requires no network information like topology, load condition, cost of
diff. paths
Every incoming packet to a node is sent out on every outgoing link
except the one it arrived on.
For example, in a sample topology:
o An incoming packet to (1) is sent out to (2),(3)
o from (2) is sent to (6),(4), and from (3) it is sent to (4),(5)
o from (4) it is sent to (6),(5),(3), from (6) it is sent to (2),(4),(5),
from (5) it is sent to (4),(3)
Characteristics –
All possible routes between Source and Destination are tried. A packet
will always get through if the path exists
As all routes are tried, there will be at least one route which is the
shortest
All nodes directly or indirectly connected are visited
Limitations –
Flooding generates a vast number of duplicate packets, so a suitable
damping mechanism must be used.
Hop-Count –
A hop counter may be contained in the packet header; it is
decremented at each hop, and the packet is discarded when the
counter becomes zero.
The sender initializes the hop counter. If no estimate is known, it is set
to the full diameter of the subnet.
Alternatively, keep track of the packets responsible for flooding using a
sequence number, and avoid sending them out a second time. (A small
simulation sketch of both mechanisms appears after the Selective
Flooding note below.)
Selective Flooding: Routers do not send every incoming packet out on every
line, only on those lines that go approximately in the direction of the
destination.
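As a small simulation sketch of flooding with the two damping mechanisms described above, a hop counter plus per-source sequence numbers (the topology matches the six-node example earlier in this section; the hop limit is an assumed value):

from collections import deque

topology = {
    1: [2, 3],
    2: [1, 4, 6],
    3: [1, 4, 5],
    4: [2, 3, 5, 6],
    5: [3, 4, 6],
    6: [2, 4, 5],
}

def flood(source, max_hops, seq=0):
    seen = set()          # (node, source sequence number) pairs already forwarded
    transmissions = 0
    queue = deque([(source, None, max_hops)])   # (node, arrived-from, hops left)
    while queue:
        node, came_from, hops = queue.popleft()
        if (node, seq) in seen:   # duplicate copy: drop instead of re-flooding
            continue
        seen.add((node, seq))
        if hops == 0:             # hop counter expired: discard the packet
            continue
        for nbr in topology[node]:
            if nbr != came_from:  # send on every line except the one it arrived on
                transmissions += 1
                queue.append((nbr, node, hops - 1))
    return transmissions

print(flood(source=1, max_hops=3))   # total transmissions generated by one flood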
Advantages of Flooding :
Highly robust; emergency or immediate messages can be sent (e.g.,
military applications)
Can be used to set up the route in a virtual circuit
Flooding always chooses the shortest path
Broadcast messages to all the nodes
Disadvantages of Flooding :
Network congestion: Flooding can cause a significant amount of traffic
in the network, leading to congestion. This can result in slower network
speeds and delays in delivering data packets.
Wastage of network resources: Flooding uses a lot of network
resources, including bandwidth and processing power, to deliver
packets. This can result in the wastage of valuable network resources
and reduce the overall efficiency of the network.
Security risks: Flooding can be used as a tool for launching various
types of attacks, including denial of service (DoS) attacks. Attackers can
flood the network with data packets, which can overload the network
and cause it to crash.
Inefficient use of energy: Flooding can result in an inefficient use of
energy in wireless networks. Since all nodes receive every packet, even
if they are not the intended recipient, they will still need to process it,
which can waste energy and reduce the overall battery life of mobile
devices.
Difficulty in network troubleshooting: Flooding can make it difficult to
troubleshoot network issues. Since packets are sent to all nodes, it can
be challenging to isolate the cause of a problem when it arises.
The transport Layer is the second layer in the TCP/IP model and the fourth
layer in the OSI model. It is an end-to-end layer used to deliver messages to a
host. It is termed an end-to-end layer because it provides a point-to-point
connection rather than hop-to-hop, between the source host and destination
host to deliver the services reliably. The unit of data encapsulation in the
Transport Layer is a segment.
Working of Transport Layer
The transport layer takes services from the Application layer and provides
services to the Network layer.
The transport layer ensures the reliable transmission of data between
systems. Understanding protocols like TCP and UDP is crucial.
At the sender’s side: The transport layer receives data (message) from the
Application layer and then performs Segmentation, divides the actual
message into segments, adds the source and destination’s port numbers into
the header of the segment, and transfers the message to the Network layer.
At the receiver’s side: The transport layer receives data from the Network
layer, reassembles the segmented data, reads its header, identifies the port
number, and forwards the message to the appropriate port in the Application
layer.
Responsibilities of a Transport Layer
The Process to Process Delivery
End-to-End Connection between Hosts
Multiplexing and Demultiplexing
Congestion Control
Data integrity and Error correction
Flow control
1. The Process to Process Delivery
While Data Link Layer requires the MAC address (48 bits address contained
inside the Network Interface Card of every host machine) of source-
destination hosts to correctly deliver a frame and the Network layer requires
the IP address for appropriate routing of packets, in a similar way Transport
Layer requires a Port number to correctly deliver the segments of data to the
correct process amongst the multiple processes running on a particular host.
A port number is a 16-bit address used to identify any client-server program
uniquely.
Process to Process Delivery
2. End-to-end Connection between Hosts
The transport layer is also responsible for creating the end-to-end Connection
between hosts, for which it mainly uses TCP and UDP. TCP is a reliable,
connection-oriented protocol that uses a handshake to establish a
robust connection between two end hosts. TCP ensures the reliable delivery
of messages and is used in various applications. UDP, on the other hand, is a
stateless and unreliable protocol that provides best-effort delivery. It is
suitable for applications that have little concern for flow or error control
and that require sending bulk data, such as video conferencing. It is often used
in multicasting protocols.
End to End Connection.
3. Multiplexing and Demultiplexing
Multiplexing(many to one) is when data is acquired from several processes
from the sender and merged into one packet along with headers and sent as
a single packet. Multiplexing allows the simultaneous use of different
processes over a network that is running on a host. The processes are
differentiated by their port numbers. Similarly, Demultiplexing(one to many)
is required at the receiver side when the message is distributed into different
processes. Transport receives the segments of data from the network layer
distributes and delivers it to the appropriate process running on the
receiver’s machine.
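A minimal sketch of port-based demultiplexing using Python's socket module and UDP on the loopback interface (the port numbers and payloads are assumed example values):

import socket

# Two "processes" on the same host, told apart purely by destination port number.
proc_a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
proc_a.bind(("127.0.0.1", 15353))    # process A listens on port 15353

proc_b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
proc_b.bind(("127.0.0.1", 16000))    # process B listens on port 16000

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"query", ("127.0.0.1", 15353))   # demultiplexed to process A
sender.sendto(b"move", ("127.0.0.1", 16000))    # demultiplexed to process B

print(proc_a.recvfrom(1024)[0])   # b'query'
print(proc_b.recvfrom(1024)[0])   # b'move'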
Connection-Oriented Service
Advantages :
It provides support for quality of service in an easy way.
This connection is more reliable than connectionless service.
Long and large messages can be divided into various smaller messages
so that it can fit inside packets.
Problems or issues that are related to duplicate data packets are made
less severe.
Disadvantages :
In this service, the cost is fixed regardless of the amount of traffic.
It is necessary to have resource allocation before communication.
If any route or path failures or network congestions arise, there is no
alternative way available to continue communication.
Connectionless Service
Advantages :
It is very fast and also allows for multicast and broadcast operations in
which similar data are transferred to various recipients in a single
transmission.
The effect of any error that occurs can be reduced by implementing
error correction within the application protocol.
This service is simple and has low overhead.
At the network layer, host software is very much simpler.
No authentication is required in this service.
Some applications do not even require sequential delivery of
packets or data; examples include packet voice.
Disadvantages :
This service is less reliable as compared to connection-oriented service.
It does not guarantee that there will be no loss, or error occurrence,
misdelivery, duplication, or out-of-sequence delivery of the packet.
It is more prone to network congestion.
What is Congestion?
Congestion in a computer network happens when there is too much data
being sent at the same time, causing the network to slow down. Just like
traffic congestion on a busy road, network congestion leads to delays and
sometimes data loss. When the network can’t handle all the incoming data, it
gets “clogged,” making it difficult for information to travel smoothly from one
place to another.
Effects of Congestion Control in Computer Networks
Improved Network Stability: Congestion control helps keep the
network stable by preventing it from getting overloaded. It manages
the flow of data so the network doesn’t crash or fail due to too much
traffic.
Reduced Latency and Packet Loss: Without congestion control, data
transmission can slow down, causing delays and data loss. Congestion
control helps manage traffic better, reducing these delays and ensuring
fewer data packets are lost, making data transfer faster and the
network more responsive.
Enhanced Throughput: By avoiding congestion, the network can use its
resources more effectively. This means more data can be sent in a
shorter time, which is important for handling large amounts of data
and supporting high-speed applications.
Fairness in Resource Allocation: Congestion control ensures that
network resources are shared fairly among users. No single user or
application can take up all the bandwidth, allowing everyone to have a
fair share.
Better User Experience: When data flows smoothly and quickly, users
have a better experience. Websites, online services, and applications
work more reliably and without annoying delays.
Mitigation of Network Congestion Collapse: Without congestion
control, a sudden spike in data traffic can overwhelm the network,
causing severe congestion and making it almost unusable. Congestion
control helps prevent this by managing traffic efficiently and avoiding
such critical breakdowns.
Congestion Control Algorithm
Congestion Control is a mechanism that controls the entry of data
packets into the network, enabling a better use of a shared network
infrastructure and avoiding congestive collapse.
Congestive-avoidance algorithms (CAA) are implemented at the TCP
layer as the mechanism to avoid congestive collapse in a network.
There are two congestion control algorithms which are as follows:
Leaky Bucket Algorithm
The leaky bucket algorithm finds its use in the context of network
traffic shaping or rate-limiting.
A leaky bucket implementation and a token bucket implementation are
predominantly used for traffic shaping algorithms.
This algorithm is used to control the rate at which traffic is sent to the
network and to shape bursty traffic into a steady traffic stream.
The disadvantage of the leaky-bucket algorithm is the
inefficient use of available network resources.
Large amounts of network resources, such as bandwidth, may not be
used effectively.
Let us consider an example to understand this. Imagine a bucket with a small
hole in the bottom. No matter at what rate water enters the bucket, the
outflow is at a constant rate. When the bucket is full, additional water
entering spills over the sides and is lost.
Similarly, each network interface contains a leaky bucket, and the following
steps are involved in the leaky bucket algorithm:
When a host wants to send a packet, the packet is thrown into the bucket.
The bucket leaks at a constant rate, meaning the network interface
transmits packets at a constant rate.
Bursty traffic is converted to a uniform traffic by the leaky bucket.
In practice the bucket is a finite queue that outputs at a finite rate.
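The steps above can be sketched as a small Python class (capacity, leak rate, and the burst of seven packets are assumed example values):

from collections import deque

class LeakyBucket:
    def __init__(self, capacity, leak_rate):
        self.queue = deque()          # the bucket: a finite queue of waiting packets
        self.capacity = capacity      # maximum packets the bucket can hold
        self.leak_rate = leak_rate    # packets transmitted per clock tick

    def arrive(self, packet):
        if len(self.queue) < self.capacity:
            self.queue.append(packet) # packet thrown into the bucket
            return True
        return False                  # bucket full: the packet spills over and is lost

    def tick(self):
        # The bucket leaks at a constant rate regardless of how packets arrived.
        return [self.queue.popleft() for _ in range(min(self.leak_rate, len(self.queue)))]

bucket = LeakyBucket(capacity=5, leak_rate=2)
for i in range(7):                    # a burst of 7 packets arrives at once
    bucket.arrive(f"p{i}")            # the last 2 are dropped (bucket holds only 5)
for t in range(3):
    print(t, bucket.tick())           # smooth output: at most 2 packets per tick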
Token Bucket Algorithm
The leaky bucket algorithm has a rigid output design at an average rate
independent of the bursty traffic.
In some applications, when large bursts arrive, the output is allowed to
speed up. This calls for a more flexible algorithm, preferably one that
never loses information. Therefore, a token bucket algorithm finds its
uses in network traffic shaping or rate-limiting.
It is a control algorithm that indicates when traffic should be sent,
based on the presence of tokens in the bucket.
The bucket contains tokens. Each token represents a packet of
predetermined size. Tokens in the bucket are removed when a packet is
sent.
When tokens are present, a flow is allowed to transmit traffic.
If there are no tokens, no flow sends its packets. Hence, a flow can
transfer traffic up to its peak burst rate as long as there are enough
tokens in the bucket.
Need of Token Bucket Algorithm
The leaky bucket algorithm enforces output pattern at the average rate, no
matter how bursty the traffic is. So in order to deal with the bursty traffic we
need a flexible algorithm so that the data is not lost. One such algorithm is
token bucket algorithm.
Steps of this algorithm can be described as follows:
At regular intervals, tokens are thrown into the bucket.
The bucket has a maximum capacity.
If there is a ready packet, a token is removed from the bucket, and the
packet is sent.
If there is no token in the bucket, the packet cannot be sent.
Let’s understand with an example, In figure (A) we see a bucket holding three
tokens, with five packets waiting to be transmitted. For a packet to be
transmitted, it must capture and destroy one token. In figure (B) We see that
three of the five packets have gotten through, but the other two are stuck
waiting for more tokens to be generated.
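The same scenario as the figures described above (three tokens, five waiting packets) can be sketched in Python; the capacity and token rate are assumed example values:

class TokenBucket:
    def __init__(self, capacity, tokens_per_tick):
        self.capacity = capacity               # maximum tokens the bucket can hold
        self.tokens_per_tick = tokens_per_tick
        self.tokens = 0

    def tick(self):
        # Tokens are thrown into the bucket at regular intervals, up to the capacity.
        self.tokens = min(self.capacity, self.tokens + self.tokens_per_tick)

    def try_send(self, waiting_packets):
        # Each transmitted packet captures and destroys one token.
        sent = min(waiting_packets, self.tokens)
        self.tokens -= sent
        return sent                            # packets beyond the token count must wait

bucket = TokenBucket(capacity=3, tokens_per_tick=1)
for _ in range(3):
    bucket.tick()                 # the bucket now holds three tokens (figure A)
print(bucket.try_send(5))         # 3 -> three of the five packets get through (figure B)
bucket.tick()
print(bucket.try_send(2))         # 1 -> one more leaves as a new token arrives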
Token Bucket vs Leaky Bucket
The leaky bucket algorithm controls the rate at which the packets are
introduced in the network, but it is very conservative in nature. Some
flexibility is introduced in the token bucket algorithm. In the token bucket
algorithm, tokens are generated at each tick (up to a certain limit). For an
incoming packet to be transmitted, it must capture a token and the
transmission takes place at the same rate. Hence, some of the bursty packets
are transmitted at the same rate if tokens are available, which introduces
some flexibility into the system.
Formula: M * S = C + ρ * S, where S is the burst time, M is the maximum
output rate, ρ is the token arrival rate, and C is the capacity of the token
bucket in bytes. Solving for the burst length gives S = C / (M − ρ).
Let’s understand with an example,
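With assumed example values: if the token bucket capacity is C = 1 Mbit, the token arrival rate is ρ = 10 Mbps, and the maximum output rate is M = 50 Mbps, then the burst can last at most S = C / (M − ρ) = 1 / (50 − 10) s = 25 ms, during which M × S = 50 Mbps × 0.025 s = 1.25 Mbit leaves the interface (the 1 Mbit stored in the bucket plus the 0.25 Mbit worth of tokens that arrive during the burst).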
Advantages of QoS
The deployment of QoS is crucial for businesses that want to ensure the
availability of their business-critical applications. It is vital for delivering
differentiated bandwidth and ensuring data transmission takes place without
interrupting traffic flow or causing packet losses. Major advantages of
deploying QoS include:
1. Unlimited application prioritization: QoS guarantees that businesses’
most mission-critical applications will always have priority and the
necessary resources to achieve high performance.
2. Better resource management: QoS enables administrators to better
manage the organization’s internet resources. This also reduces costs
and the need for investments in link expansions.
3. Enhanced user experience: The end goal of QoS is to guarantee the
high performance of critical applications, which boils down to
delivering optimal user experience. Employees enjoy high performance
on their high-bandwidth applications, which enables them to be more
effective and get their job done more quickly.
4. Point-to-point traffic management: Managing a network is vital
however traffic is delivered, be it end to end, node to node, or point to
point. The latter enables organizations to deliver customer packets in
order from one point to the next over the internet without suffering
any packet loss.
5. Packet loss prevention: Packet loss can occur when packets of data are
dropped in transit between networks. This can often be caused by a
failure or inefficiency, network congestion, a faulty router, loose
connection, or poor signal. QoS avoids the potential of packet loss by
prioritizing bandwidth of high-performance applications.
6. Latency reduction: Latency is the time it takes for a network request to
go from the sender to the receiver and for the receiver to process it.
This is typically affected by routers taking longer to analyze information
and storage delays caused by intermediate switches and bridges. QoS
enables organizations to reduce latency, or speed up the process of a
network request, by prioritizing their critical application.