
Computer network notes

1. Definition of computer network: A computer network is a set of computers sharing resources located on or provided by network nodes. Computers use common communication protocols over digital interconnections to communicate with each other. These interconnections are made up of telecommunication network technologies, based on physically wired, optical, and wireless radio-frequency methods, that may be arranged in a variety of network topologies.

2. What is a computer network? Components of a computer network: A computer network is a system that connects many independent computers to share information (data) and resources. The integration of computers and other different devices allows users to communicate more easily. A computer network is a collection of two or more computer systems that are linked together. A network connection can be established using either cable or wireless media. Hardware and software are used to connect computers and tools in any network.

Components of Computer Networks


The components of a computer network are the key parts required to install a network. Computer networks range from simple to complex, and the components needed mainly depend upon the type of network; some components can also be removed according to our needs.

For example, in order to establish a wireless network there is no need for cables.

Given below is a list of components of a Computer Network:

 Network Interface Card (NIC)
 Hub
 Switch
 Repeater
 Router
 Modem
 Server
 Bridge

We will now discuss each of the above-mentioned major components of a computer network:

1. Network Interface Card (NIC)

The NIC mainly provides the physical interface between the computer and the cabling. The NIC prepares data, sends the data, and controls the flow of data. It can also receive and translate the data into bytes for the CPU to understand.

 The NIC is a hardware component that is mainly used to connect one computer with another on a network.
 The main role of the NIC is to convert the serial signals on the network cables or media into parallel data streams inside the PC, and vice versa.
 Typical transfer rates supported by NICs are 10 Mb/s, 100 Mb/s, and 1000 Mb/s.
 Two or more NICs are used in a server in order to split the load.
 The main job of the NIC is controlling access to the media.
 A NIC can be wired or wireless. In a wired NIC, cables and connectors act as the medium to transfer data, while in a wireless card the connection is generally made using an antenna that uses radio-wave technology.

Factors to be taken into consideration when choosing a NIC:

1. Preparing data
2. Sending and Controlling data
3. Configuration
4. Drivers
5. Compatibility
6. Performance

2. Hub
A hub is a device used to link several computers together. A hub repeats a signal that comes in on one port by copying it out to the other ports.

 A network hub is basically a centralized distribution point for all the data transmission in a network.
 A basic hub is a passive device.
 The hub receives data and then rebroadcasts it to the other computers connected to it. A hub does not know the destination of a received data packet, so it must send copies of the packet to all of its connections.
 Because of this, hubs consume more bandwidth on the network and thus limit the amount of communication.
 One disadvantage of hubs is that they do not have the intelligence to find the best path for data packets, which leads to inefficiencies and wastage.

Types of Hub

1. Active Hub:

Active hubs make use of electronics to amplify and clean up the signals before they are broadcast to the other ports. Active hubs are mainly used to extend the maximum distance between nodes. An active hub works both as a wiring center and as a repeater.

2. Passive Hub:
Passive hubs are hubs that connect only to active hubs. Passive hubs simply connect all ports together electrically and are usually not powered. These hubs are cheaper than active hubs. A passive hub neither amplifies nor regenerates the signal.

3. Intelligent Hub:

Intelligent hubs give better performance than active and passive hubs. Nowadays intelligent hubs are widely used and are in more demand than active and passive hubs. These hubs are mainly used to connect various devices, and they support amplification and regeneration of the incoming signals at any point.

An intelligent hub can manage the network as well as select the transmission path, and it can handle the tasks of both passive and active hubs.

With the help of an intelligent hub, the speed and efficiency of the whole network increase, which helps the network achieve fast and efficient performance.

3. Switch
A switch outwardly resembles a hub. It is a layer-2 device used for the intelligent forwarding of messages; by intelligent we mean the decision-making ability of the switch. A hub works by sending incoming data to all ports on the device, whereas a switch sends the data only to the port that is connected to the destination device.

 The switch is a network component mainly used to connect the segments of a network.
 The switch is more intelligent than the network hub.
 Switches are capable of inspecting data packets as soon as they are received, determining the source and destination of each packet, and forwarding it appropriately.
 A switch differs from a hub in that it can also contain ports of different speeds.
 Before forwarding data to a port, the switch performs error checking, and this feature makes the switch efficient.
 Because the switch delivers a message only to the connected device it was intended for, it conserves the bandwidth of the network and offers better performance than a hub.
 An important feature of the switch is that it supports unicast (one-to-one), multicast (one-to-many), and broadcast (one-to-all) communication.
 The switch uses MAC addresses to send data packets to the selected destination ports, as sketched below.

Switches are categorized into four types:

1. Managed Switch
These are expensive switches, mainly used in organizations that have large and complex networks. Managed switches are configured using the Simple Network Management Protocol (SNMP). These switches provide a high level of security and complete management of the network; despite their expense, they are used in large organizations because they provide high scalability and flexibility.

2. Unmanaged Switch
These are cheap switches and are mainly used in home networks and in
small businesses. The unmanaged switch does not need to be configured; it can be set up simply by plugging it into the network, after which it instantly starts operating.

3. PoE Switch
These are referred to as Power over Ethernet switches. Using PoE technology, these switches combine data and power transmission over the same cable, so that devices connected to the switch can receive both electricity and data over the same line. PoE switches thus offer more flexibility.

4. LAN Switch
A LAN (Local Area Network) switch is mainly used to connect devices in the internal local area network of an organization. These switches are helpful in reducing network congestion: bandwidth is allocated in a manner such that there is no overlapping of data packets in the network.

4. Repeater
The repeater is a physical layer device. As the name suggests, a repeater is mainly used to regenerate the signal over the same network; it regenerates the signal before it becomes too weak or corrupted.

Repeaters are incorporated into networks in order to extend the coverage area. Repeaters can connect segments that use different types of cables.

 Repeaters are cost-effective.
 Repeaters are very easy to install, and after installation they can easily extend the coverage area of the network.
 One problem with repeaters is that they cannot connect networks that are not of the same type.
 Repeaters do not help to reduce traffic in the network.

Types of repeaters:
Types of repeaters that are available are as follows:

1. Analog Repeaters
These are only used to amplify the analog signals.

2. Digital Repeaters
These are only used to amplify digital signals.

3. Wired Repeaters
These repeaters are mainly used in wired Local area networks.

4. Wireless Repeaters
These are mainly used in wireless local area networks and also in cellular
networks.

5. Local Repeaters
These are used to connect segments of a local area network that are
separated by a small distance.

6. Remote Repeaters
These are mainly used to connect those local area networks that are far
away from each other.

5. Router
The router is a network component that is mainly used to send or
receive data on the computer network. The process of forwarding data
packets from the source to the destination is referred to as Routing.

 The router is a network layer (i.e. Layer 3) device.
 The main responsibilities of the router are receiving data packets, analyzing them, and then forwarding the data packets among the connected computer networks.
 Whenever a data packet arrives, the router first inspects the destination address and consults its routing tables to decide the optimal route, then transfers the packet along this route towards the destination (see the sketch after this list).
 Routers are mainly used to provide protection against broadcast storms.
 Routers are more expensive than hubs, switches, repeaters, and bridges.
 Routers can also connect different networks together, so data packets can be sent from one network to another.
 Routers are used in LANs as well as in WANs (wide area networks).
 Routers share data with each other in order to prepare and refresh their routing tables.
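As a rough illustration of the routing-table consultation described above, here is a sketch of a longest-prefix-match lookup using Python's standard ipaddress module. The table entries and next-hop names are made up for the example.

import ipaddress

# Toy routing table: (destination network, next hop). Entries are invented.
routing_table = [
    (ipaddress.ip_network("10.0.0.0/8"), "next-hop-A"),
    (ipaddress.ip_network("10.1.0.0/16"), "next-hop-B"),
    (ipaddress.ip_network("0.0.0.0/0"), "default-gateway"),
]

def lookup(destination):
    addr = ipaddress.ip_address(destination)
    # Pick the matching route with the longest prefix (most specific).
    matches = [(net, hop) for net, hop in routing_table if addr in net]
    _, hop = max(matches, key=lambda m: m[0].prefixlen)
    return hop

print(lookup("10.1.2.3"))  # next-hop-B (the /16 is more specific than the /8)
print(lookup("8.8.8.8"))   # default-gateway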

Types of Routers:
Different types of routers are as follows:

1. Core Routers
Core routers are mainly used by service providers (like AT&T and Vodafone) or by cloud providers (like Amazon, Microsoft, and Google). Core routers provide maximum bandwidth so as to connect additional routers or switches. Core routers are used by large organizations.

2. Edge Routers
An edge router is also known as a gateway router, or simply a gateway. The gateway is the network's outermost point of connection with external networks, including the Internet. These routers are mainly used to optimize bandwidth and are designed to connect to other routers so as to distribute data to end users. The Border Gateway Protocol (BGP) is mainly used for connectivity by edge routers.

Edge routers are further categorized into two types:

 Subscriber edge routers
 Label edge routers

3. Brouters
A brouter is a bridging router. These are special routers that also provide the functionality of bridges: like a bridge, they help to transfer data between networks, and like a router, they route the data to the devices within a network.

4. Broadband Routers
It is a type of networking device that mainly allows end-users to access
broadband Internet from an Internet service provider (ISP). The Internet
service provider usually provides and configures the broadband router for
the end-user.

5. Distribution Routers
These routers mainly receive data from the edge router (or gateway) via a wired connection and then send it on to the end users, usually over Wi-Fi.

6. Wireless Routers
These routers combine the functions of edge routers and distribution routers. They mainly provide a Wi-Fi connection to Wi-Fi devices like laptops and smartphones, and they also provide standard Ethernet routing. For indoor connections the typical range of these routers is about 150 feet, while for outdoor connections it is about 300 feet.

6. Modem
The modem is a hardware component that allows a computer, or another device such as a router or switch, to connect to the Internet. "Modem" is short for modulator-demodulator.

One of the most important functions of the modem is to convert analog signals into digital signals and vice versa; the device is a combination of a modulator and a demodulator.

The modulator converts digital data into analog signals when data is being sent by the computer.

The demodulator converts analog signals back into digital data when data is being received by the computer.

7. Server
A server is basically a computer that serves data to other devices. The server may serve data to other devices or computers over a local area network, or over a wide area network with the help of the Internet. There can be virtual servers, proxy servers, application servers, web servers, database servers, file servers, and many more.

Thus servers are mainly used to serve the requests of other devices. A server can be hardware or software.

8. Bridge
The bridge is another important component of the computer network. A bridge is also a layer-2 (data link layer) device. A bridge is mainly used to connect two or more local area networks together, and bridges are used because they help in the fast transfer of data.

However, bridges are not as versatile as routers.

A bridge can transfer data between networks using different protocols (e.g., a Token Ring network and an Ethernet network), and it operates at the data link layer, or level 2, of the OSI (Open Systems Interconnection) networking reference model, as stated above.

Bridges are further divided into two types:

 Local bridges
These are ordinary bridges.
 Remote bridges
These are mainly used to connect networks that are at a distance from each other. Generally a wide area network link is provided between the two bridges.
Some bridge protocols are the spanning tree protocol, the source routing protocol, and the source routing transparent protocol.

Uses of Computer Network

There are multiple uses for a computer network, including:
 Communication: Through computer networks, individuals and organizations can collaborate using communication channels such as email, chat, and video conferencing.
 Resource sharing: Networks are a boon to users since they provide a way to share printers, scanners, and files, which helps to improve work activities and reduce costs.
 Remote access: Network technologies make information and assistance accessible from anywhere on the globe, enabling users to work with more freedom and comfort.
 Collaboration: Networks make collaboration easy by offering opportunities to work jointly on something, share thoughts, and give feedback.
 E-commerce: Online sales and payment processing are enabled by computer networks, allowing businesses to sell products online and execute secure payments.
 Education: In educational settings, networks provide a basis for distance learning, give access to higher-education resources, and create opportunities for collaboration among students and teachers.
 Entertainment: Networks are used for entertainment such as online gaming, film and music streaming, and social networking.
Classification of Networks
 Personal Area Network (PAN)
 Local Area Network (LAN)
 Metropolitan Area Network (MAN)
 Wide Area Networks (WAN)

Personal Area Network (PAN) - The interconnection of devices within the range of an individual person, typically within a range of 10 meters. For example, a wireless network connecting a computer with its keyboard, mouse, or printer is a PAN. A PDA that controls the user's hearing aid or pacemaker also fits in this category, and Bluetooth is another example of a PAN. Typically, this kind of network can also be interconnected, without wires, to the Internet or other networks.

Local Area Network (LAN) - Privately-owned networks covering a small geographic area, like a home, office, building or group of buildings (e.g. a campus).
They are widely used to connect computers in company offices and factories to
share resources (e.g., printers) and exchange information. LANs are restricted in
size, which means that the worst-case transmission time is bounded and known in
advance. Knowing this bound makes it possible to use certain kinds of designs that
would not otherwise be possible. It also simplifies network management.
Traditional LANs run at speeds of 10 Mbps to 100 Mbps, have low delay
(microseconds or nanoseconds), and make very few errors. Newer LANs operate at
up to 10 Gbps.

Metropolitan Area Network (MAN) - Covers a larger geographical area than a LAN, ranging from several blocks of buildings to entire cities. MANs can also
depend on communications channels of moderate-to-high data rates. A MAN
might be owned and operated by a single organization, but it usually will be used
by many individuals and organizations. MANs might also be owned and operated
as public utilities. They will often provide means for internetworking of LANs.
Metropolitan area networks can span up to 50 km; the devices used include modems and wire/cable.

Wide Area Networks (WAN) - Computer networks that cover a large geographical area, often a country or continent (any network whose communications links cross metropolitan, regional, or national boundaries). Less formally, a network that uses routers and public communications links. Routers were discussed earlier.
What is data communication?
Data communication is the process of transferring data from one place to another
or between two locations. It allows electronic and digital data to move between
two networks, no matter where the two are located geographically, what the data
contains, or what format they are in.

A common example of data communication is connecting your laptop to a Wi-Fi network. This action requires a wireless medium to send and receive data from remote servers.

The type of data transmission demonstrates the direction in which the data
moves between the sender and receiver.
 Simplex data transmission: Data is sent from sender to receiver
 Half-duplex data transmission: Data can transmit both ways, but not
simultaneously
 Full-duplex data transmission: Data can transmit both ways at the same
time

Full-duplex data transmission is the most common type found in computer networks. You may be familiar with some of the ways we use computer networks in our daily lives, such as communicating through instant messaging on Slack or video on Zoom, or sharing files via tools like Apple's AirDrop.
Components of data communication

A data communication system comprises the following:
1. Message: The data to be transmitted or communicated, which can include
numbers, text, photos, sound, or video.
2. Sender: The computer or device (e.g., phone, tablet) that sends the
message.
3. Receiver: The computer or device that receives the message, which can be
different from the sender.
4. Medium: The channel through which the message is carried from sender
to receiver, such as twisted pair wire, coaxial cable, fiber optic cable, or
wireless.
5. Protocol: The set of rules that govern the communication between
computers. These rules are followed by both the sender and receiver.

Transmission Impairment
In a data communication system, analog and digital signals pass through the transmission medium. Transmission media are not ideal; there are some imperfections in every transmission medium, so the signals sent through the medium are also not perfect. This imperfection causes signal impairment.

It means that the signals transmitted at the beginning of the medium are not the same as the signals received at the end of the medium: what is sent is not what is received. These impairments tend to deteriorate the quality of analog and digital signals.

Consequences
1. For a digital signal, there may occur bit errors.
2. For analog signals, these impairments degrade the quality of
the signals.

Causes of Impairment
The three main causes of impairment are:
1. Attenuation
2. Distortion
3. Noise

1. Attenuation
Attenuation means loss of energy, i.e. the signal becomes weaker. Whenever a signal is transmitted through a medium, it loses some of its energy in overcoming the resistance of the medium.

 That is why a wire carrying electrical signals gets warm, if not hot, after a while: some of the electrical energy in the signal is converted to heat.
 Amplifiers are used to amplify the signals to compensate for this loss.
(Figure omitted: the effect of attenuation and amplification.)

 To measure how much a signal has lost or gained in strength, engineers use the concept of the decibel (dB).
 The decibel measures the relative strengths of two signals, or of one signal at two different points.
 If a signal is attenuated, the dB value is negative; if a signal is amplified, the dB value is positive.

Attenuation (dB) = 10 log10(P2/P1)

where P1 and P2 are the powers of the signal at points 1 and 2.
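A quick worked example of the formula above; the power values are made up. Halving the power of a signal corresponds to a loss of about 3 dB.

import math

P1 = 10.0  # power at point 1 (mW), assumed value
P2 = 5.0   # power at point 2 (mW): half the power was lost in transit

attenuation_db = 10 * math.log10(P2 / P1)
print(round(attenuation_db, 1))  # -3.0 -> negative, so the signal was attenuated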

2. Distortion
If a signal changes its form or shape, this is referred to as distortion. Signals made up of different frequencies are composite signals, and distortion occurs in these composite signals.

 Each frequency component has its own propagation speed through a medium, so different components incur different delays in arriving at the final destination.
 This means that the signal components have different phases at the receiver than they did at the source.
(Figure omitted: the effect of distortion on a composite signal.)

3. Noise
Noise is another problem: random or unwanted signals that mix with the original signal are called noise. Noise can corrupt the signal in many ways, in addition to the distortion introduced by the transmission media.
What are Transmission Modes?
Transmission mode refers to the way data is transferred between two devices; it is also known as the communication mode. Buses and networks are designed to allow communication to occur between individual devices that are interconnected. There are three types of transmission modes:

Simplex Mode
In Simplex mode, the communication is unidirectional, as on a
one-way street. Only one of the two devices on a link can
transmit, the other can only receive. The simplex mode can use
the entire capacity of the channel to send data in one direction.
Example: keyboard and traditional monitor. The keyboard can only introduce input; the monitor can only give output.

Half-Duplex Mode
In half-duplex mode, each station can both transmit and receive,
but not at the same time. When one device is sending, the other
can only receive, and vice versa. The half-duplex mode is used in
cases where there is no need for communication in both
directions at the same time. The entire capacity of the channel
can be utilized for each direction.
Example: Walkie-talkie in which message is sent one at a time
and messages are sent in both directions.
Channel capacity = Bandwidth × Propagation delay

Full-Duplex Mode
In full-duplex mode, both stations can transmit and receive simultaneously. In full-duplex mode, signals going in one direction share the capacity of the link with signals going in the other direction. This sharing can occur in two ways:
 Either the link must contain two physically separate
transmission paths, one for sending and the other for
receiving.
 Or the capacity is divided between signals traveling in
both directions.
Full-duplex mode is used when communication in both directions
is required all the time. The capacity of the channel, however,
must be divided between the two directions.
Example: Telephone Network in which there is communication
between two persons by a telephone line, through which both
can talk and listen at the same time.
Channel capacity = 2 × Bandwidth × Propagation delay
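Taking the two capacity formulas above at face value, here is a short worked example; the link figures (10 Mbps bandwidth, 5 ms propagation delay) are assumed for illustration.

bandwidth = 10_000_000     # 10 Mbps, assumed
propagation_delay = 0.005  # 5 ms one-way, assumed

half_duplex_capacity = bandwidth * propagation_delay      # one direction at a time
full_duplex_capacity = 2 * bandwidth * propagation_delay  # both directions at once

print(half_duplex_capacity, full_duplex_capacity)  # 50000.0 100000.0 (bits)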

Parallel Communication

Parallel communication involves the simultaneous transmission of multiple bits of data over separate channels. In this method, each bit of data is sent through its own dedicated wire, allowing for faster data transfer rates compared to serial communication. Parallel communication is commonly used in scenarios where high-speed data transfer is required, such as within computer systems, where data needs to be quickly exchanged between various components.

One of the key advantages of parallel communication is its ability to transmit data in parallel, meaning that multiple bits can be sent simultaneously. This parallelism allows for faster data transfer rates, as each bit can be transmitted at the same time. Additionally, parallel communication is less susceptible to noise and interference, as each bit has its own dedicated wire, reducing the chances of data corruption.

However, parallel communication also has its limitations. One major drawback is the requirement for a large number of wires to transmit data in parallel. For example, to transmit 8 bits of data, 8 separate wires are needed. This can lead to increased complexity in terms of wiring and can be costly, especially when dealing with long-distance communication. Furthermore, the synchronization of data across multiple wires can be challenging, as any slight variation in timing can result in data corruption.

Serial Communication

Serial communication, on the other hand, involves the sequential transmission of data over a single channel. In this method, the bits of data are sent one after another, using a single wire or a pair of wires for transmission. Serial communication is widely used in various applications, including telecommunications, networking, and industrial automation.

One of the primary advantages of serial communication is its simplicity. With only a single wire or a pair of wires required for transmission, the wiring complexity is significantly reduced compared to parallel communication. This makes serial communication more cost-effective, especially for long-distance communication. Additionally, serial communication allows for easier synchronization of data, as the bits are transmitted sequentially.

Serial communication also offers the advantage of scalability. It can easily adapt to different data transfer rates, making it suitable for a wide range of applications. Furthermore, advancements in serial communication technologies, such as the introduction of high-speed serial interfaces like USB and Ethernet, have significantly increased the data transfer rates achievable through serial communication.
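A toy sketch of the difference between the two approaches: the same byte is either handed over in a single transfer (parallel, one wire per bit) or shifted out one bit at a time (serial). The LSB-first bit order is an assumption, though it is common on serial links.

def to_bits(byte):
    # Serial view: least-significant bit first, one bit per clock tick.
    return [(byte >> i) & 1 for i in range(8)]

def from_bits(bits):
    return sum(bit << i for i, bit in enumerate(bits))

data = 0b01000001                 # the letter 'A'
parallel_transfer = data          # one transfer over 8 wires
serial_transfer = to_bits(data)   # eight transfers over 1 wire

print(serial_transfer)                     # [1, 0, 0, 0, 0, 0, 1, 0]
print(from_bits(serial_transfer) == data)  # True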

What is Synchronous Transmission?

Synchronous transmission involves data transfer in a synchronized manner. In this method, data is transmitted in fixed, predefined time intervals or with reference to a clock signal. The sender and receiver are synchronized, ensuring that data is sent and received in a coordinated fashion.

Key Characteristics of Synchronous Transmission
Below are some of the characteristics of synchronous transmission:

 Timing Dependency: Synchronous transmission relies on timing mechanisms such as clock signals, making it essential for both parties to share a common timing reference.
 Predictable Timing: Data is transmitted at regular intervals, enabling precise timing and synchronization between sender and receiver.
 Efficiency for Bulk Data: Synchronous transmission is efficient for sending large amounts of data as it utilizes consistent intervals for data transfer.
 Complex Setup: The synchronization requirement often leads to more complex hardware and software setups.

What is Asynchronous Transmission?

Asynchronous transmission, on the other hand, does not rely on a fixed timing mechanism. Instead, data is transmitted in discrete units known as frames, with start and stop bits demarcating each frame. Asynchronous transmission is not bound by a shared clock signal and can handle variable data lengths.

Key Characteristics of Asynchronous Transmission
Here are some of the characteristics of asynchronous transmission (a framing sketch follows this list):

 Start-Stop Bits: Each data frame in asynchronous transmission is preceded by a start bit and followed by a stop bit, indicating the beginning and end of the frame.
 Variable Timing: Asynchronous transmission can accommodate varying intervals between data frames, making it suitable for irregular data traffic.
 Lower Efficiency for Bulk Data: Transmitting bulk data through asynchronous transmission can be less efficient due to the overhead of start and stop bits.
 Simplicity: The absence of strict synchronization requirements simplifies hardware and software implementations.
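A minimal sketch of the start/stop-bit framing described above, in the style of a UART. The conventions used (start bit 0, stop bit 1, 8 data bits LSB-first) are typical defaults assumed for the example.

def frame_byte(byte):
    data_bits = [(byte >> i) & 1 for i in range(8)]  # LSB first
    return [0] + data_bits + [1]  # start bit + data + stop bit

def unframe(frame):
    assert frame[0] == 0 and frame[-1] == 1, "bad start/stop bit"
    return sum(bit << i for i, bit in enumerate(frame[1:9]))

f = frame_byte(0x41)
print(f)                # [0, 1, 0, 0, 0, 0, 0, 1, 0, 1]
print(hex(unframe(f)))  # 0x41

Note the overhead mentioned above: every 8 data bits cost 10 bits on the line, i.e. 25% extra.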

Network software

Network software is defined as a wide range of software that streamlines the operations, design, monitoring, and implementation of computer networks.

Functions of network software
User management: allows administrators to add or remove users from the network. This is particularly useful when hiring or relieving staff.
File management: lets administrators decide the location of data storage and control user access to that data.
Access: enables users to enjoy uninterrupted access to network resources.
Network security: assists administrators in looking after security and preventing data breaches.

1. Application layer
The first component is the application layer, or the application plane, which refers to the applications and services running on the network. It is a program that conveys network information, the status of the network, and the network's requirements for particular resource availability and applications. This is done through the control layer via application programming interfaces (APIs). The application layer also consists of the application logic and one or more API drivers.

2. Control layer
The control layer lies at the center of the architecture and is one of the most
important components of the three layers. You could call it the brain of the
whole system. Also called the controller or the control plane, this layer also
includes the network control software and the network operating system
within it. It is the entity in charge of receiving requirements from the
applications and translating the same to the network components. The control
of the infrastructure layer or the data plane devices is also done via the
controller. In simple terms, the control layer is the intermediary that facilitates
communication between the top and bottom layers through API interfaces.

3. Infrastructure layer
The infrastructure layer, also called the data plane, consists of the actual
network devices (both physical and virtual) that reside in this layer. They are
primarily responsible for moving or forwarding the data packets after receiving
due instructions from the control layer. In simple terms, the data plane physically handles user traffic based on the commands received from the controller.

The application program interface (API) ties all three components together. Communication between these three layers is facilitated through northbound and southbound application program interfaces. The northbound API handles communication between the application and control layers, whereas the southbound API enables communication between the infrastructure and control layers.
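A minimal sketch of this three-layer arrangement, with invented class and method names standing in for the northbound and southbound APIs; real SDN controllers (e.g. OpenDaylight, ONOS) expose far richer interfaces.

class DataPlaneDevice:
    """Infrastructure layer: forwards traffic according to installed rules."""
    def __init__(self, name):
        self.name, self.rules = name, []

    def install_rule(self, rule):  # target of a southbound API call
        self.rules.append(rule)

class Controller:
    """Control layer: translates application requirements into device rules."""
    def __init__(self, devices):
        self.devices = devices

    def request_path(self, src, dst):  # northbound API: application -> controller
        for dev in self.devices:
            dev.install_rule(f"forward {src} -> {dst}")  # pushed southbound

switches = [DataPlaneDevice("s1"), DataPlaneDevice("s2")]
controller = Controller(switches)
controller.request_path("hostA", "hostB")  # an application-layer request
print(switches[0].rules)  # ['forward hostA -> hostB']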
Design Issues for the Layers of Computer Networks
A number of design issues exist for the layered approach of computer networks. Some of the main design issues are as follows −
Reliability
Network channels and components may be unreliable, resulting in loss of bits during data transfer. So, an important design issue is to make sure that the information transferred is not distorted.
Scalability
Networks are continuously evolving. The sizes are continually increasing
leading to congestion. Also, when new technologies are applied to the added
components, it may lead to incompatibility issues. Hence, the design should be
done so that the networks are scalable and can accommodate such additions
and alterations.

Addressing
At a particular time, innumerable messages are being transferred between
large numbers of computers. So, a naming or addressing system should exist so
that each layer can identify the sender and receivers of each message.
Error Control
Unreliable channels introduce a number of errors into the data streams that are communicated. So, the layers need to agree upon common error detection and error correction methods to protect data packets while they are transferred (a sketch of one simple method follows).
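As one example of an error-detection method the layers can agree on, below is a sketch of a 16-bit ones'-complement checksum in the style of the Internet checksum (RFC 1071). It illustrates the idea; it is not the only such method.

def checksum16(data: bytes) -> int:
    if len(data) % 2:
        data += b"\x00"  # pad to a whole number of 16-bit words
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    return ~total & 0xFFFF

message = b"data"  # even-length payload, so the checksum word stays aligned
cs = checksum16(message)

# The receiver recomputes over payload + checksum; 0 means no error detected.
print(checksum16(message + cs.to_bytes(2, "big")) == 0)  # True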
Flow Control
If the rate at which data is produced by the sender is higher than the rate at
which data is received by the receiver, there are chances of overflowing the
receiver. So, a proper flow control mechanism needs to be implemented.
Resource Allocation
Computer networks provide services in the form of network resources to the
end users. The main design issue is to allocate and deallocate resources to
processes. The allocation/deallocation should occur so that minimal
interference among the hosts occurs and there is optimal usage of the
resources.
Statistical Multiplexing
It is not feasible to allocate a dedicated path for each message while it is being
transferred from the source to the destination. So, the data channel needs to
be multiplexed, so as to allocate a fraction of the bandwidth or time to each
host.
Routing
There may be multiple paths from the source to the destination. Routing
involves choosing an optimal path among all possible paths, in terms of cost
and time. There are several routing algorithms that are used in network
systems.
Security
A major factor of data communication is to defend it against threats like
eavesdropping and surreptitious alteration of messages. So, there should be
adequate mechanisms to prevent unauthorized access to data through
authentication and cryptography.

What is Layered Architecture In Computer Networks?


Layered architecture in computer networks is like a stack of building blocks,
each with a different job. These layers work together to help computers talk to
each other, splitting tasks into small, manageable parts, making it easier to
send and receive data.
Advantages of Layered Architecture In Computer Networks
1. Simplifies troubleshooting – Breaking down a network into layers makes
finding and fixing problems easier because each layer can be checked
one by one.
2. Easier to manage – Having separate layers means a network can be
looked after in parts, which is simpler than dealing with the whole thing
at once.
3. Promotes interoperability – Different network devices and software can
work together smoothly because they follow common rules for how data
should move through the layers.
4. Encourages modular design – Designing a network in chunks means you
can change one part without messing up the rest, making updates and
maintenance more straightforward.
5. Allows for scalability – As a network grows, adding more capacity or
updating technology is easier because you can do it layer by layer
without having to redo everything.
Disadvantages of Layered Architecture In Computer Networks
1. Increased complexity in design – Layered architectures can make
systems harder to build because they require careful planning and
coordination across different levels.
2. Potential performance overhead – Having multiple layers often means
extra steps for data to pass through, which can slow things down.
3. Harder to optimize layers individually – When trying to make one layer
work better, it’s tough without affecting the others, which can limit
improvement efforts.
4. Difficult cross-layer interaction – Getting layers to talk to each other can
be a headache, as they’re not always designed to communicate
smoothly.
5. Inflexible to rapid changes – When new needs or technologies pop up
quickly, layered systems can struggle to adapt fast enough.

What is OSI Model? – Layers of OSI Model

OSI stands for Open Systems Interconnection, where "open" means non-proprietary. It is a 7-layer architecture, with each layer having specific functionality to perform. All seven layers work collaboratively to transmit data from one person to another across the globe. The OSI reference model was developed by the ISO (International Organization for Standardization) in 1984.
The OSI model provides a theoretical foundation for understanding network
communication. However, it is usually not directly implemented in its entirety
in real-world networking hardware or software. Instead, specific
protocols and technologies are often designed based on the principles
outlined in the OSI model to facilitate efficient data transmission and
networking operations.

What Are The 7 Layers of The OSI Model?


The OSI model consists of seven abstraction layers, listed here from the bottom up:
1. Physical Layer
2. Data Link Layer
3. Network Layer
4. Transport Layer
5. Session Layer
6. Presentation Layer
7. Application Layer

Physical Layer – Layer 1


The lowest layer of the OSI reference model is the physical
layer. It is responsible for the actual physical connection
between the devices. The physical layer contains information
in the form of bits. It is responsible for transmitting individual
bits from one node to the next. When receiving data, this layer
will get the signal received and convert it into 0s and 1s and
send them to the Data Link layer, which will put the frame
back together.

Data Link Layer (DLL) – Layer 2


The data link layer is responsible for the node-to-node delivery of the message.
The main function of this layer is to make sure data transfer is error-free from
one node to another, over the physical layer. When a packet arrives in a
network, it is the responsibility of the DLL to transmit it to the Host using
its MAC address.

Network Layer – Layer 3


The network layer works for the transmission of data from one host to the
other located in different networks. It also takes care of packet routing i.e.
selection of the shortest path to transmit the packet, from the number of
routes available. The sender & receiver’s IP addresses are placed in the header
by the network layer.

Transport Layer – Layer 4


The transport layer provides services to the application layer and takes services
from the network layer. The data in the transport layer is referred to
as Segments. It is responsible for the end-to-end delivery of the complete
message. The transport layer also provides the acknowledgment of the
successful data transmission and re-transmits the data if an error is found.
At the sender’s side: The transport layer receives the formatted data from the
upper layers, performs Segmentation, and also implements Flow and error
control to ensure proper data transmission. It also adds Source and
Destination port numbers in its header and forwards the segmented data to
the Network Layer.

Session Layer – Layer 5


This layer is responsible for the establishment of connection, maintenance of
sessions, and authentication, and also ensures security.

Presentation Layer – Layer 6


The presentation layer is also called the translation layer. The data from the application layer is extracted here and manipulated into the required format for transmission over the network.

Application Layer – Layer 7


At the very top of the OSI Reference Model stack of layers, we find the
Application layer which is implemented by the network applications. These
applications produce the data to be transferred over the network. This layer
also serves as a window for the application services to access the network and
for displaying the received information to the user.
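As a rough illustration of how data moves down through these layers, the sketch below shows each layer wrapping the data from the layer above with its own header. The header fields are simplified stand-ins, not real protocol formats.

message = "GET /index.html"  # application layer data

segment = {"src_port": 50000, "dst_port": 80,  # transport layer header
           "payload": message}
packet = {"src_ip": "192.0.2.1", "dst_ip": "203.0.113.5",  # network layer header
          "payload": segment}
frame = {"src_mac": "AA:AA", "dst_mac": "BB:BB",  # data link layer header
         "payload": packet}

# The physical layer would transmit the frame as raw bits. At the receiver,
# each layer strips its own header and passes the rest up the stack:
print(frame["payload"]["payload"]["payload"])  # GET /index.html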

TCP/IP Model
The TCP/IP model is a fundamental framework for computer networking. It
stands for Transmission Control Protocol/Internet Protocol, which are the
core protocols of the Internet. This model defines how data is transmitted
over networks, ensuring reliable communication between devices. It consists
of four layers: the Link Layer, the Internet Layer, the Transport Layer, and the
Application Layer. Each layer has specific functions that help manage different
aspects of network communication, making it essential for understanding
and working with modern networks.

Layers of TCP/IP Model


 Application Layer
 Transport Layer(TCP/UDP)
 Network/Internet Layer(IP)
 Network Access Layer
1. Network Access Layer
This is the lowest layer of the TCP/IP model and corresponds roughly to the combination of the OSI data link and physical layers. The packet's network protocol type (in this case, TCP/IP) is identified by the network access layer. Error prevention and "framing" are also provided by this layer. Point-to-Point Protocol (PPP) framing and Ethernet IEEE 802.2 framing are two examples of data-link layer protocols.
2. Internet or Network Layer
This layer parallels the functions of OSI’s Network layer. It defines the protocols
which are responsible for the logical transmission of data over the entire
network. The main protocols residing at this layer are as follows:
 IP: IP stands for Internet Protocol and it is responsible for delivering
packets from the source host to the destination host by looking at the IP
addresses in the packet headers. IP has 2 versions: IPv4 and IPv6. IPv4 is
the one that most websites are using currently. But IPv6 is growing as
the number of IPv4 addresses is limited in number when compared to
the number of users.
 ICMP: ICMP stands for Internet Control Message Protocol. It is
encapsulated within IP datagrams and is responsible for providing hosts
with information about network problems.
 ARP: ARP stands for Address Resolution Protocol. Its job is to find the
hardware address of a host from a known IP address. ARP has several
types: Reverse ARP, Proxy ARP, Gratuitous ARP, and Inverse ARP.
The Internet Layer is a layer in the Internet Protocol (IP) suite, which is the set
of protocols that define the Internet. The Internet Layer is responsible for
routing packets of data from one device to another across a network. It does
this by assigning each device a unique IP address, which is used to identify the
device and determine the route that packets should take to reach it.
Example: Imagine that you are using a computer to send an email to a friend.
When you click “send,” the email is broken down into smaller packets of data,
which are then sent to the Internet Layer for routing. The Internet Layer assigns
an IP address to each packet and uses routing tables to determine the best
route for the packet to take to reach its destination. The packet is then
forwarded to the next hop on its route until it reaches its destination. When all
of the packets have been delivered, your friend’s computer can reassemble
them into the original email message.
In this example, the Internet Layer plays a crucial role in delivering the email
from your computer to your friend’s computer. It uses IP addresses and routing
tables to determine the best route for the packets to take, and it ensures that
the packets are delivered to the correct destination. Without the Internet
Layer, it would not be possible to send data across the Internet.
3. Transport Layer
The TCP/IP transport layer protocols exchange data-receipt acknowledgments and retransmit missing packets to ensure that packets arrive in order and without error; this is referred to as end-to-end communication. Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) are the transport layer protocols at this level.
 TCP: Applications can interact with one another using TCP as though they
were physically connected by a circuit. TCP transmits data in a way that
resembles character-by-character transmission rather than separate
packets. A starting point that establishes the connection, the whole
transmission in byte order, and an ending point that closes the
connection make up this transmission.
 UDP: The datagram delivery service is provided by UDP, the other transport layer protocol. Connections between receiving and sending hosts are not verified by UDP. Applications that transport small amounts of data use UDP rather than TCP because it eliminates the processes of establishing and validating connections.
4. Application Layer
This layer is analogous to the combined application, presentation, and session layers of the OSI model. It is responsible for end-to-end communication and error-free delivery of data, and it shields the upper-layer applications from the complexities of data. Three of the main protocols present in this layer are:
 HTTP and HTTPS: HTTP stands for Hypertext transfer protocol. It is used
by the World Wide Web to manage communications between web
browsers and servers. HTTPS stands for HTTP-Secure. It is a combination
of HTTP with SSL(Secure Socket Layer). It is efficient in cases where the
browser needs to fill out forms, sign in, authenticate, and carry out bank
transactions.
 SSH: SSH stands for Secure Shell. It is terminal emulation software similar to Telnet. SSH is preferred because of its ability to maintain an encrypted connection. It sets up a secure session over a TCP/IP connection.
 NTP: NTP stands for Network Time Protocol. It is used to synchronize the
clocks on our computer to one standard time source. It is very useful in
situations like bank transactions. Assume the following situation without
the presence of NTP. Suppose you carry out a transaction, where your
computer reads the time at 2:30 PM while the server records it at 2:28
PM. The server can crash very badly if it’s out of sync.

Chp 4
What is Transmission Media?
A transmission medium is a physical path between the transmitter and the
receiver i.e. it is the channel through which data is sent from one place to
another. Transmission Media is broadly classified into the following types:
Types of Transmission Media
1. Guided Media
Guided Media is also referred to as Wired or Bounded transmission media.
Signals being transmitted are directed and confined in a narrow pathway by
using physical links.
Features:
 High Speed
 Secure
 Used for comparatively shorter distances
There are 3 major types of Guided Media:
Twisted Pair Cable
It consists of 2 separately insulated conductor wires wound about each other.
Generally, several such pairs are bundled together in a protective sheath. They
are the most widely used Transmission Media. Twisted Pair is of two types:
 Unshielded Twisted Pair (UTP): UTP consists of two insulated copper
wires twisted around one another. This type of cable has the ability to
block interference and does not depend on a physical shield for this
purpose. It is used for telephonic applications.



Advantages of Unshielded Twisted Pair
 Least expensive
 Easy to install
 High-speed capacity
Disadvantages of Unshielded Twisted Pair
 Susceptible to external interference
 Lower capacity and performance in comparison to STP
 Short-distance transmission due to attenuation

Shielded Twisted Pair


Shielded Twisted Pair (STP): This type of cable consists of a special jacket (a
copper braid covering or a foil shield) to block external interference. It is used
in fast-data-rate Ethernet and in voice and data channels of telephone lines.
Advantages of Shielded Twisted Pair
 Better performance at a higher data rate in comparison to UTP
 Eliminates crosstalk
 Comparatively faster
Disadvantages of Shielded Twisted Pair
 Comparatively difficult to install and manufacture
 More expensive
 Bulky
Coaxial Cable
It has an outer plastic covering containing an insulation layer made of PVC or
Teflon and 2 parallel conductors each having a separate insulated protection
cover. The coaxial cable transmits information in two modes: Baseband
mode(dedicated cable bandwidth) and Broadband mode(cable bandwidth is
split into separate ranges). Cable TVs and analog television networks widely use
Coaxial cables.
Advantages of Coaxial Cable
 Coaxial cables support high bandwidth.
 It is easy to install coaxial cables.
 Coaxial cables have better cut-through resistance, so they are more reliable and durable.
 Less affected by noise, cross-talk, or electromagnetic interference.
 Coaxial cables support multiple channels.
Disadvantages of Coaxial Cable
 Coaxial cables are expensive.
 The coaxial cable must be grounded in order to prevent any crosstalk.
 As a Coaxial cable has multiple layers it is very bulky.
 There is a chance that hackers may break the coaxial cable and attach a "t-joint", which compromises the security of the data.
Optical Fiber Cable
Optical Fibre Cable uses the concept of refraction of light through a core made
up of glass or plastic. The core is surrounded by a less dense glass or plastic
covering called the cladding. It is used for the transmission of large volumes of
data. The cable can be unidirectional or bidirectional. The WDM (Wavelength
Division Multiplexer) supports two modes, namely unidirectional and
bidirectional mode.
Advantages of Optical Fibre Cable
 Increased capacity and bandwidth
 Lightweight
 Less signal attenuation
 Immunity to electromagnetic interference
 Resistance to corrosive materials
Disadvantages of Optical Fibre Cable
 Difficult to install and maintain
 High cost
 Fragile
Applications of Optical Fibre Cable
 Medical Purpose: Used in several types of medical instruments.
 Defence Purpose: Used in transmission of data in aerospace.
 For Communication: This is largely used in formation of internet cables.
 Industrial Purpose: Used for lighting purposes and safety measures in
designing the interior and exterior of automobiles.
Stripline
Stripline is a transverse electromagnetic (TEM) transmission line medium
invented by Robert M. Barrett of the Air Force Cambridge Research Centre in
the 1950s. Stripline is the earliest form of planar transmission line. It uses a conducting material to transmit high-frequency waves; it is also called a waveguide. This conducting material is sandwiched between two layers of ground plane, which are usually shorted to provide EMI immunity.
Microstripline
In this, the conducting material is separated from the ground plane by a layer
of dielectric.
2. Unguided Media
It is also referred to as Wireless or Unbounded transmission media. No physical
medium is required for the transmission of electromagnetic signals.
Features of Unguided Media
 The signal is broadcasted through air
 Less Secure
 Used for larger distances
There are 3 types of Signals transmitted through unguided media:
Radio Waves
Radio waves are easy to generate and can penetrate through buildings. The sending and receiving antennas need not be aligned. Frequency range: 3 kHz – 1 GHz. AM and FM radios and cordless phones use radio waves for transmission.

Further Categorized as Terrestrial and Satellite.


Microwaves
Microwave transmission is line-of-sight transmission, i.e. the sending and receiving antennas need to be properly aligned with each other. The distance covered by the signal is directly proportional to the height of the antenna. Frequency range: 1 GHz – 300 GHz. Microwaves are mainly used for mobile phone communication and television distribution.

Infrared
Infrared waves are used for very short-distance communication. They cannot penetrate through obstacles, which prevents interference between systems. Frequency range: 300 GHz – 400 THz. Infrared is used in TV remotes, wireless mice, keyboards, printers, etc.

What is Circuit Switching?


In circuit switching, network resources (bandwidth) are divided into pieces and the bit delay is constant during a connection. The dedicated path/circuit established between the sender and receiver provides a guaranteed data rate, and data can be transmitted without any delay once the circuit is established.
The telephone network is one example of circuit switching. TDM (Time Division Multiplexing) and FDM (Frequency Division Multiplexing) are two methods of multiplexing multiple signals onto a single carrier.
 Frequency Division Multiplexing: Frequency Division Multiplexing (FDM) is used when multiple data signals are combined for simultaneous transmission via a shared communication medium. It is a technique by which the total bandwidth is divided into a series of non-overlapping frequency sub-bands, where each sub-band carries a different signal. It is used in practice in the radio spectrum and in optical fiber to share multiple independent signals.
 Time Division Multiplexing: Time-division multiplexing (TDM) is a method of transmitting and receiving independent signals over a common signal path using synchronized switches at each end of the transmission line. TDM is used for long-distance communication links and bears heavy data traffic loads from the end user.
Time-division multiplexing (TDM) is also known as a digital circuit switch.
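A toy sketch of the TDM idea, with three made-up input streams taking turns in fixed, recurring slots on a shared link.

# Each stream gets one slot per round, whether or not it has useful data.
streams = {"A": list("aaaa"), "B": list("bbbb"), "C": list("cccc")}

def tdm_multiplex(streams):
    link = []
    for slots in zip(*streams.values()):  # one round = one slot per stream
        link.extend(slots)
    return link

print("".join(tdm_multiplex(streams)))  # abcabcabcabc

The fixed slot order is what makes this circuit-like: the receiver demultiplexes purely by position in time.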
Phases of Circuit Switching
 Circuit Establishment: A dedicated circuit between the source and destination is constructed via a number of intermediary switching centers. Communication signals can be requested and received once the sender and receiver can communicate over the circuit.
 Data Transfer: Data can be transferred between the source and destination once the circuit has been established. The link between the two parties remains up as long as they communicate.
 Circuit Disconnection: Disconnection of the circuit occurs when one of the users initiates the disconnect. When the disconnection occurs, all intermediary links between the sender and receiver are terminated.

What is the concept of datagram packet switching?


In a packet switching network, the data or information is transmitted between the sender and receiver in the form of packets. It does not require a dedicated physical path to transmit the fixed-size packets across the sub-network.
If the information is large, it is subdivided into multiple packets. On the destination side these multiple packets are reassembled into the original message (as sketched below). One type of packet-switched network is the datagram packet-switched network.
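A minimal sketch of this packetize-and-reassemble idea. The message, the packet size, and the shuffle (standing in for packets arriving out of order over different paths) are all assumed for illustration.

import random

def packetize(message, size):
    # Tag each chunk with a sequence number so it can be reordered later.
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

packets = packetize("this is a long message", size=5)
random.shuffle(packets)  # datagrams may arrive out of order

reassembled = "".join(chunk for _, chunk in sorted(packets))
print(reassembled)  # this is a long message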
Types of Packet Switching Network
The packet switching network is of two types −
 Datagram packet switching
 Virtual circuit packet switching

Datagram Packet Switching


Now let us understand the concept of Datagram packet Switching.
Characteristics
The characteristics of datagram packet switching are explained below −
 In a datagram packet-switched network, each data packet follows its own path between the source and destination.
 During data transmission, each time a packet reaches a node, the node decides which path the packet should follow next.
 This dynamic decision making of datagram packet-switched networks improves the performance of data transmission.

For example, suppose the datagram approach is used to deliver four packets from station A to station X: all four packets belong to the same message, but they may travel via different paths to reach the destination, station X.


Advantages
The advantages of the datagram packet switching are explained below −
 It provides greater flexibility
 It performs fast re-routing of data packets at the time of network
congestion or failure.
Disadvantages
The disadvantages of datagram packet switching are explained below −
 A route has to be determined for each packet individually.
 If a large group of packets is headed for the same destination, the network still has to examine each packet that travels through a network node individually and determine the next hop for each one.
 This leads to inefficiency and wastage of time.

Virtual Circuit in Computer Network


A virtual circuit is a computer network facility providing connection-oriented service: it is a connection-oriented network. In a virtual circuit, resources are reserved for the time interval of the data transmission between two nodes. This network is a highly reliable medium of transfer, but virtual circuits are costly to implement.

Working of Virtual Circuit:


 In the first step, a path is set up between the two end nodes.
 Resources are reserved for the transmission of packets.
 A signal is then sent to the sender to indicate that the path is set up and transmission can begin.
 It ensures the transmission of all packets.
 A global header is used only in the first packet of the connection.
 Whenever data is to be transmitted, a new connection is set up.
Congestion Control in Virtual Circuit:
Once congestion is detected in a virtual circuit network, closed-loop techniques are used. There are different approaches within this technique:
 No new connections –
No new connections are established when congestion is detected. This approach is used in telephone networks, where no new calls are established when the exchange is overloaded.
 Routing around the congested router –
Another approach is to allow all new connections but to route them in such a way that the congested router is not part of the route.
 Negotiation –
Different parameters are negotiated between the sender and receiver when the connection is established. During setup, the host specifies the shape and volume of the traffic, the quality of service, and other parameters.

Chp-5
Framing in Data Link Layer
Frames are the units of digital transmission, particularly in computer networks and telecommunications. Frames are comparable to the packets of energy called photons in the case of light energy. Frames are also used continuously in the Time Division Multiplexing process.
In a point-to-point connection between two computers or devices, data is transmitted as a stream of bits over a wire. However, these bits must be framed into discernible blocks of information. Framing is a function of the data link layer. It provides a way for a sender to transmit a set of bits that are meaningful to the receiver. Ethernet, token ring, frame relay, and other data link layer technologies have their own frame structures. Frames have headers that contain information such as error-checking codes.

Problems in Framing
 Detecting start of the frame: When a frame is transmitted, every
station must be able to detect it. Station detects frames by looking out
for a special sequence of bits that marks the beginning of the frame i.e.
SFD (Starting Frame Delimiter).
 How does the station detect a frame: Every station listens to link for
SFD pattern through a sequential circuit. If SFD is detected, sequential
circuit alerts station. Station checks destination address to accept or
reject frame.
 Detecting end of frame: When to stop reading the frame.
 Handling errors: Framing errors may occur due to noise or other transmission errors, which can cause a station to misinterpret the frame. Therefore, error detection and correction mechanisms, such as the cyclic redundancy check (CRC), are used to ensure the integrity of the frame (a short CRC sketch follows this list).
 Framing overhead: Every frame has a header and a trailer that contains
control information such as source and destination address, error
detection code, and other protocol-related information. This overhead
reduces the available bandwidth for data transmission, especially for
small-sized frames.
 Framing incompatibility: Different networking devices and protocols
may use different framing methods, which can lead to framing
incompatibility issues. For example, if a device using one framing
method sends data to a device using a different framing method, the
receiving device may not be able to correctly interpret the frame.
 Framing synchronization: Stations must be synchronized with each
other to avoid collisions and ensure reliable communication.
Synchronization requires that all stations agree on the frame
boundaries and timing, which can be challenging in complex networks
with many devices and varying traffic loads.
 Framing efficiency: Framing should be designed to minimize the
amount of data overhead while maximizing the available bandwidth
for data transmission. Inefficient framing methods can lead to lower
network performance and higher latency.
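As referenced in the list above, here is a minimal sketch of how a CRC is computed, using modulo-2 (XOR) long division on bit strings; the data and generator polynomial below are the usual textbook example, not taken from this document:

def crc_remainder(data_bits: str, poly_bits: str) -> str:
    # CRC = remainder of modulo-2 division of data (padded with zeros)
    # by the generator polynomial.
    n = len(poly_bits) - 1                 # number of CRC bits
    padded = list(data_bits + "0" * n)     # append n zero bits
    for i in range(len(data_bits)):
        if padded[i] == "1":               # divide only where the leading bit is 1
            for j, p in enumerate(poly_bits):
                padded[i + j] = str(int(padded[i + j]) ^ int(p))
    return "".join(padded[-n:])

data, poly = "11010011101100", "1011"      # generator x^3 + x + 1
crc = crc_remainder(data, poly)            # '100'; sender transmits data + crc
# Receiver check: the remainder of (data + crc) must be all zeros.
assert crc_remainder(data + crc, poly) == "000"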
Types of framing
There are two types of framing:
1. Fixed-size: The frame is of fixed size and there is no need to provide
boundaries to the frame, the length of the frame itself acts as a delimiter.
 Drawback: It suffers from internal fragmentation if the data size is less
than the frame size
 Solution: Padding
2. Variable size: In this, there is a need to define the end of the frame as well
as the beginning of the next frame to distinguish. This can be done in two
ways:
1. Length field – We can introduce a length field in the frame to indicate
the length of the frame. Used in Ethernet(802.3). The problem with this
is that sometimes the length field might get corrupted.
2. End Delimiter (ED) – We can introduce an ED(pattern) to indicate the
end of the frame. Used in Token Ring. The problem with this is that ED
can occur in the data. This can be solved by:
1. Character/Byte Stuffing: Used when frames consist of characters. If the data contains the ED pattern, an escape byte is stuffed into the data to differentiate it from the ED.
Let ED = “$”: if the data contains ‘$’ anywhere, it can be escaped using the ‘\O’ character.
If the data contains ‘\O$’, it is sent as ‘\O\O\O$’ ($ is escaped using \O, and \O is itself escaped using \O).
Disadvantage – It is a costly and largely obsolete method.
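A small sketch of this escaping scheme in Python, treating the ‘\O’ from the example above as a two-character escape sequence (the helper names are my own):

ESC = "\\O"   # the escape sequence from the example ('\O', two characters)
ED = "$"      # the end delimiter

def byte_stuff(data: str) -> str:
    # Escape every ESC first, then every ED, so a real ED can only
    # ever appear unescaped at the end of the frame.
    return data.replace(ESC, ESC + ESC).replace(ED, ESC + ED)

def byte_unstuff(stuffed: str) -> str:
    out, i = [], 0
    while i < len(stuffed):
        if stuffed.startswith(ESC, i):      # an escaped unit follows
            i += len(ESC)
            if stuffed.startswith(ESC, i):  # escaped ESC
                out.append(ESC)
                i += len(ESC)
            else:                           # escaped ED (or other character)
                out.append(stuffed[i])
                i += 1
        else:
            out.append(stuffed[i])
            i += 1
    return "".join(out)

assert byte_stuff("\\O$") == "\\O\\O\\O$"   # matches the '\O$' example above
assert byte_unstuff("\\O\\O\\O$") == "\\O$"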
2. Bit Stuffing: Let ED = 01111 and data = 01111.
–> The sender stuffs a bit to break the pattern, i.e. here it inserts a 0: data = 011101.
–> The receiver receives the frame.
–> If the received data contains 011101, the receiver removes the stuffed 0 and reads the data.
Examples:
 If Data –> 011100011110 and ED –> 0111 then, find data after bit
stuffing.
--> 011010001101100
 If Data –> 110001001 and ED –> 1000 then, find data after bit stuffing?
--> 11001010011
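The convention used in both examples can be implemented in a few lines: whenever the transmitted stream ends with the delimiter minus its last bit, stuff the complement of that last bit (stuffed bits also count toward pattern detection). A sketch in Python, which reproduces both answers above:

def bit_stuff(data: str, ed: str) -> str:
    # Whenever the output ends with ed[:-1], insert the complement of
    # ed's last bit so the delimiter can never appear inside the data.
    prefix = ed[:-1]
    stuffed_bit = "0" if ed[-1] == "1" else "1"
    out = []
    for bit in data:
        out.append(bit)
        if "".join(out[-len(prefix):]) == prefix:
            out.append(stuffed_bit)
    return "".join(out)

assert bit_stuff("011100011110", "0111") == "011010001101100"
assert bit_stuff("110001001", "1000") == "11001010011"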
Framing in the Data Link Layer also presents some challenges, which include:
Variable frame length: The length of frames can vary depending on the data
being transmitted, which can lead to inefficiencies in transmission. To address
this issue, protocols such as HDLC and PPP use a flag sequence to mark the
start and end of each frame.
Bit stuffing: Bit stuffing is a technique used to prevent data from being
interpreted as control characters by inserting extra bits into the data stream.
However, bit stuffing can lead to issues with synchronization and increase the
overhead of the transmission.
Synchronization: Synchronization is critical for ensuring that data frames are
transmitted and received correctly. However, synchronization can be
challenging, particularly in high-speed networks where frames are
transmitted rapidly.
Error detection: Data Link Layer protocols use various techniques to detect
errors in the transmitted data, such as checksums and CRCs. However, these
techniques are not foolproof and can miss some types of errors.
Efficiency: Efficient use of available bandwidth is critical for ensuring that
data is transmitted quickly and reliably. However, the overhead associated
with framing and error detection can reduce the overall efficiency of the
transmission.

2. Flow and error control: Flow control is a design issue at the Data Link Layer. It is a technique that ensures the proper flow of data from sender to receiver. It is essential because the sender may transmit data at a rate so fast that the receiver cannot receive and process it in time. This can happen if the receiver carries a very high traffic load compared to the sender, or if the receiver has less processing power than the sender. Flow control is a technique that permits two stations working and processing at different speeds to communicate with one another. Flow control in the Data Link Layer restricts and coordinates the number of frames or the amount of data the sender can send before it must wait for an acknowledgement from the receiver. Flow control is a set of procedures that tells the sender how much data or how many frames it can transmit before the data overwhelms the receiver. The receiving device has only limited speed and limited memory in which to store data. The receiving device must therefore be able to tell the sender to temporarily stop transmission before that limit is reached. It also needs a buffer, a large block of memory, for storing data or frames until they are processed.

Techniques of Flow Control in Data Link Layer : There are basically two types
of techniques being developed to control the flow of data
1. Stop-and-Wait Flow Control : This method is the easiest and simplest form of flow control. In this method, the message or data is broken down into multiple frames, and the receiver indicates its readiness to receive each frame of data. Only when an acknowledgement is received does the sender transmit the next frame. This process continues until the sender transmits an EOT (End of Transmission) frame. In this method, only one frame can be in transmission at a time. It leads to inefficiency, i.e. low productivity, if the propagation delay is much longer than the transmission delay. Ultimately, in this method the sender sends a single frame, the receiver takes one frame at a time, and the receiver sends an acknowledgement (which carries the number of the next expected frame) for each new frame.
Advantages –
 This method is very easy and simple, and each frame is individually checked and acknowledged.
 This method is also very accurate.
Disadvantages –
 This method is fairly slow.
 In this, only one packet or frame can be sent at a time.
 It is very inefficient and makes the transmission process very slow.
2. Sliding Window Flow Control : This method is used where reliable, in-order delivery of packets or frames is needed, as in the data link layer. It is a point-to-point protocol that assumes no other entity tries to communicate until the current data or frame transfer is complete. In this method, the sender transmits several frames or packets before receiving any acknowledgement. Both the sender and receiver agree upon the total number of data frames after which an acknowledgement must be transmitted. The Data Link Layer uses this method because it allows the sender to have more than one unacknowledged packet “in flight” at a time, which increases and improves network throughput. Ultimately, in this method the sender sends multiple frames, the receiver takes them one by one, and after completing each frame it sends an acknowledgement (which carries the number of the next expected frame).
Advantages –
 It performs much better than stop-and-wait flow control.
 This method increases efficiency.
 Multiples frames can be sent one after another.
Disadvantages –
 The main issue is complexity at the sender and receiver due to the transfer of multiple frames.
 The receiver might receive data frames or packets out of sequence.

Error: The data-link layer uses error control techniques to ensure that all data frames, i.e. bit streams of data, are transmitted from sender to receiver with the required accuracy. Providing error control at the data link layer is an optimization; it was never a requirement. Error control is the process, in the data link layer, of detecting and re-transmitting data frames that might be lost or corrupted during transmission. In both of these cases, the receiver or destination does not receive the correct data frame, and the sender or source does not even know about the loss. Therefore, in such cases, both sender and receiver are provided with protocols that detect such errors and the loss of data frames. The data-link layer follows a technique known as re-transmission of frames to handle transit errors and to take the actions required to reduce or remove such errors. Each time an error is detected during transmission, the affected data frames are retransmitted; this process is known as ARQ (Automatic Repeat Request).
Various Techniques for Error Control : There are various techniques of error
control as given below :
1. Stop-and-Wait ARQ : Stop-and-Wait ARQ is also known as the alternating bit protocol. It is one of the simplest flow and error control techniques. This mechanism is widely used in telecommunications to transmit data between two connected devices. The receiver indicates its readiness to receive data for each frame. The sender sends an information or data packet to the receiver, then stops and waits for an ACK (Acknowledgment) from the receiver. If the ACK does not arrive within a given time period, i.e. the time-out, the sender resends the frame and waits for the ACK again. If the sender receives the ACK, it transmits the next data packet to the receiver and then again waits for an ACK from the receiver. This stop-and-wait process continues until the sender has no more data frames or packets to send.
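A toy simulation of this behavior (the random lossy channel, loss rate, and retry limit are illustrative assumptions, not part of any real protocol stack):

import random

def stop_and_wait_send(frames, loss_rate=0.3, max_tries=10):
    # Send one frame, wait for its ACK, and retransmit on timeout.
    random.seed(1)  # reproducible run
    for seq, frame in enumerate(frames):
        for attempt in range(1, max_tries + 1):
            if random.random() >= loss_rate:    # frame and ACK both arrive
                print(f"frame {seq} ({frame!r}) ACKed after {attempt} attempt(s)")
                break
            print(f"frame {seq} timed out, retransmitting")
        else:
            raise TimeoutError(f"frame {seq} never acknowledged")

stop_and_wait_send(["A", "B", "C"])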
2. Sliding Window ARQ : This technique is generally used for continuous
transmission error control. It is further categorized into two categories as
given below :
 Go-Back-N ARQ : Go-Back-N ARQ is a form of ARQ protocol in which the sender continues to transmit as many frames as are specified by the window size, even without receiving an ACK (Acknowledgement) packet from the receiver. It uses the sliding window flow control protocol. If no errors occur, the operation is identical to sliding window flow control.
 Selective Repeat ARQ : Selective Repeat ARQ is also a form of ARQ protocol in which only the suspected, damaged, or lost data frames are retransmitted. This technique is similar to Go-Back-N ARQ but much more efficient, because it reduces the number of retransmissions: the sender only retransmits the frames for which a NAK is received. This technique is used less often, however, because of the greater complexity required at the sender and receiver, and because each frame must be acknowledged individually.

Multiple Access Control


If there is a dedicated link between the sender and the receiver, then the data link control layer is sufficient; however, if there is no dedicated link present, then multiple stations can access the channel simultaneously. Hence multiple access protocols are required to decrease collisions and avoid crosstalk. For example, in a classroom full of students, when a teacher asks a question and all the students (or stations) start answering simultaneously (send data at the same time), a lot of chaos is created (data overlaps or is lost); it is then the job of the teacher (the multiple access protocol) to manage the students and make them answer one at a time.
Thus, protocols are required for sharing data on non-dedicated channels.
Multiple access protocols can be subdivided further as

1. Random Access Protocol


In this, all stations have the same priority; that is, no station has more priority than another. Any station can send data depending on the medium's state (idle or busy). It has two features:
 There is no fixed time for sending data
 There is no fixed sequence of stations sending data
The Random access protocols are further subdivided as:
ALOHA
It was designed for wireless LAN but is also applicable for shared medium. In
this, multiple stations can transmit data at the same time and can hence lead
to collision and data being garbled.
Types of Aloha
Pure ALOHA
When a station sends data, it waits for an acknowledgement. If the acknowledgement doesn't come within the allotted time, the station waits for a random amount of time called the back-off time (Tb) and re-sends the data. Since different stations wait for different amounts of time, the probability of further collision decreases.
Slotted ALOHA
It is similar to pure aloha, except that we divide time into slots and sending of
data is allowed only at the beginning of these slots. If a station misses out the
allowed time, it must wait for the next slot. This reduces the probability of
collision.
CSMA
Carrier Sense Multiple Access ensures fewer collisions as the station is
required to first sense the medium (for idle or busy) before transmitting data.
If the channel is idle, the station sends data; otherwise, it waits till the channel becomes idle. However, there is still a chance of collision in CSMA due to propagation delay.
For example, if station A wants to send data, it will first sense the medium. If it finds the channel idle, it will start sending data. However, by the time the first bit of data from station A reaches station B (delayed due to propagation delay), station B may also sense the medium, find it idle, and also send data. This results in a collision between the data from stations A and B.
CSMA Access Modes
 1-Persistent: The node senses the channel, if idle it sends the data,
otherwise it continuously keeps on checking the medium for being idle
and transmits unconditionally(with 1 probability) as soon as the
channel gets idle.
 Non-Persistent: The node senses the channel, if idle it sends the data,
otherwise it checks the medium after a random amount of time (not
continuously) and transmits when found idle.
 P-Persistent: The node senses the medium; if idle, it sends the data with probability p. If the data is not transmitted (probability 1-p), it waits for some time and checks the medium again; if the medium is then found idle, it again sends with probability p. This process repeats until the frame is sent (see the sketch after this list). It is used in WiFi and packet radio systems.
 O-Persistent: Superiority of nodes is decided beforehand and
transmission occurs in that order. If the medium is idle, node waits for
its time slot to send data.
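A sketch of the p-persistent decision loop referenced above (channel_is_idle and transmit are hypothetical callables supplied by the caller, not a real API):

import random
import time

def p_persistent_send(channel_is_idle, transmit, p=0.5, slot=0.01):
    while True:
        while not channel_is_idle():
            time.sleep(slot)          # channel busy: re-sense every slot
        if random.random() < p:
            transmit()                # send with probability p
            return
        time.sleep(slot)              # defer one slot with probability 1-p

# Stub example: channel always idle, "transmitting" just prints.
p_persistent_send(lambda: True, lambda: print("frame sent"))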
CSMA/CD
Carrier sense multiple access with collision detection. Stations can terminate
transmission of data if collision is detected. For more details refer – Efficiency
of CSMA/CD.
CSMA/CA
Carrier sense multiple access with collision avoidance. The process of collision detection involves the sender receiving acknowledgement signals. If there is just one signal (its own), the data was sent successfully; if there are two signals (its own and the one with which it collided), a collision has occurred. To distinguish between these two cases, a collision must have a significant impact on the received signal, which is not the case in wireless networks. CSMA/CA is therefore used for wireless networks.
CSMA/CA Avoids Collision By
 Interframe Space: Station waits for medium to become idle and if
found idle it does not immediately send data (to avoid collision due to
propagation delay) rather it waits for a period of time called Interframe
space or IFS. After this time it again checks the medium for being idle.
The IFS duration depends on the priority of station.
 Contention Window: It is an amount of time divided into slots. If the sender is ready to send data, it chooses a random number of slots as the wait time, and this number doubles every time the medium is not found idle. If the medium is found busy, the sender does not restart the entire process; rather, it pauses the timer and resumes it when the channel is found idle again.
 Acknowledgement: The sender re-transmits the data if
acknowledgement is not received before time-out.
2. Controlled Access
Controlled access protocols ensure that only one device uses the network at a
time. Think of it like taking turns in a conversation so everyone can speak
without talking over each other.
In this, the data is sent by that station which is approved by all other stations.
For further details refer – Controlled Access Protocols.
3. Channelization
In this, the available bandwidth of the link is shared in time, frequency and
code to multiple stations to access channel simultaneously.
 Frequency Division Multiple Access (FDMA) – The available bandwidth
is divided into equal bands so that each station can be allocated its
own band. Guard bands are also added so that no two bands overlap to
avoid crosstalk and noise.
 Time Division Multiple Access (TDMA) – In this, the bandwidth is shared between multiple stations. To avoid collisions, time is divided into slots and stations are allotted these slots to transmit data. However, there is an overhead of synchronization, as each station needs to know its time slot. This is resolved by adding synchronization bits to each slot. Another issue with TDMA is propagation delay, which is resolved by the addition of guard bands.
For more details refer – Circuit Switching
For more details refer – Circuit Switching
 Code Division Multiple Access (CDMA) – One channel carries all transmissions simultaneously. There is neither division of bandwidth nor division of time. For example, if there are many people in a room all speaking at the same time, perfect reception of data is still possible if only the two people conversing speak the same language. Similarly, data from different stations can be transmitted simultaneously using different code languages (see the sketch after this list).
 Orthogonal Frequency Division Multiple Access (OFDMA) – In OFDMA, the available bandwidth is divided into small subcarriers in order to increase the overall performance, and the data is transmitted through these small subcarriers. It is widely used in 5G technology.
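As referenced in the CDMA item above, here is a tiny sketch of the code-language idea using two orthogonal (Walsh) chip sequences; the codes and bit values are illustrative:

CODE_A = [1, 1, 1, 1]      # station A's chip sequence
CODE_B = [1, -1, 1, -1]    # station B's chip sequence (orthogonal to A's)

def encode(bit, code):
    # A data bit is sent as +code (bit 1) or -code (bit 0).
    sign = 1 if bit == 1 else -1
    return [sign * c for c in code]

# Both stations transmit at the same time; the channel simply adds the signals.
channel = [a + b for a, b in zip(encode(1, CODE_A), encode(0, CODE_B))]

def decode(signal, code):
    # Correlating with a station's own code recovers that station's bit;
    # the other station's contribution cancels because the codes are orthogonal.
    corr = sum(s * c for s, c in zip(signal, code)) / len(code)
    return 1 if corr > 0 else 0

assert decode(channel, CODE_A) == 1
assert decode(channel, CODE_B) == 0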

Chp-7
Protocols in Application Layer
The Application Layer is the topmost layer in the Open System Interconnection
(OSI) model. This layer provides several ways for manipulating the data which
enables any type of user to access the network with ease. The Application Layer
interface directly interacts with the application and provides common web
application services. The application layer performs several kinds of functions that
are required in any kind of application or communication process. In this article, we
will discuss various application layer protocols.

What are Application Layer Protocols?


Application layer protocols are those protocols utilized at the application layer of
the OSI (Open Systems Interconnection) and TCP/IP models. They facilitate
communication and data sharing between software applications on various
network devices. These protocols define the rules and standards that allow
applications to interact and communicate quickly and effectively over a network.
Application Layer Protocol in Computer Network
1. TELNET
Telnet stands for TELetype NETwork. It provides terminal emulation, allowing Telnet clients to access the resources of a Telnet server. It is used for remote management, for example for the initial setup of devices like switches. The telnet command uses the Telnet protocol to communicate with a remote device or system. The port number for Telnet is 23.
2. FTP
FTP stands for File Transfer Protocol. It is the protocol that actually lets us transfer files, and it can facilitate this between any two machines using it. But FTP is not just a protocol; it is also a program. FTP promotes sharing of files via remote computers with reliable and efficient data transfer. The port numbers for FTP are 20 for data and 21 for control.
3. TFTP
The Trivial File Transfer Protocol (TFTP) is the stripped-down, stock version of FTP,
but it’s the protocol of choice if you know exactly what you want and where to find
it. It’s a technology for transferring files between network devices and is a
simplified version of FTP. The Port number for TFTP is 69.
4. NFS
It stands for a Network File System. It allows remote hosts to mount file systems
over a network and interact with those file systems as though they are mounted
locally. This enables system administrators to consolidate resources onto
centralized servers on the network. The Port number for NFS is 2049.
5. SMTP
It stands for Simple Mail Transfer Protocol. It is a part of the TCP/IP protocol. Using
a process called “store and forward,” SMTP moves your email on and across
networks. It works closely with something called the Mail Transfer Agent (MTA) to
send your communication to the right computer and email inbox. The Port number
for SMTP is 25.
6. SNMP
It stands for Simple Network Management Protocol. It gathers data by polling the devices on the network from a management station at fixed or random intervals, requiring them to disclose certain information. It is a way that servers can share information about their current state, and also a channel through which an administrator can modify pre-defined values. The port numbers for SNMP are 161 (for queries to agents) and 162 (for traps), both typically over UDP.
7. DNS
It stands for Domain Name System. Every time you use a domain name, therefore,
a DNS service must translate the name into the corresponding IP address. For
example, the domain name www.abc.com might translate to 198.105.232.4.
The Port number for DNS is 53.
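A DNS lookup can be observed from Python's standard library, which asks the operating system's resolver (which in turn queries DNS over port 53); www.example.com is just an illustrative hostname:

import socket

for family, _, _, _, sockaddr in socket.getaddrinfo(
        "www.example.com", 80, proto=socket.IPPROTO_TCP):
    print(family.name, sockaddr[0])   # address family and resolved IP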
Streaming Stored Video
Streaming of stored video involves storing prerecorded videos on servers.
 Users send requests to those servers.
 Users may watch the video from start to end, and may pause it at any time, skip forward or backward, or stop the video whenever they want to.
There are 3 video streaming categories:
1. UDP Streaming
2. HTTP Streaming
3. Adaptive HTTP Streaming
Usually, today's systems employ HTTP and Adaptive HTTP streaming. A common characteristic of these three streaming techniques is the extensive use of client-side buffering. Advantages of client buffering:
1. The client-side buffer absorbs variations in server-client delay: while a delayed packet is still in transit, the video that has been received but not yet played can keep playing.
2. Even if the bandwidth drops, the user can keep viewing the video until the buffer is completely drained.
1. UDP STREAMING: UDP servers send video chunks (chunk: a unit of information that contains either control information or user data) to clients based on the client's consumption rate. The server transmits chunks at a rate that matches the client's video consumption rate by clocking out video chunks over UDP at a steady rate. For example,
Video consumption rate = 2 Mbps
Capacity of one UDP packet = 8000 bits
Therefore,
Time between packets = 8000 bits / 2 Mbps = 4 ms, i.e. one packet is clocked out every 4 milliseconds
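The same arithmetic in a couple of lines (values from the example above):

consumption_rate_bps = 2_000_000      # 2 Mbps
packet_size_bits = 8_000              # one UDP packet
interval = packet_size_bits / consumption_rate_bps
print(f"send one packet every {interval * 1000:.1f} ms")   # 4.0 ms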
Properties:
 UDP does not use a congestion-control mechanism. Video chunks are encapsulated before transmission using RTP (Real-time Transport Protocol).
 An additional client-server control path is maintained to inform the server of client state changes such as pause, resume, and skip.
Drawbacks:
 The bandwidth between client and server is unpredictable and varying.
 UDP streaming requires a separate media control server, such as an RTSP (Real-Time Streaming Protocol) server, to track client state (pause, resume, etc.).
 Many devices are configured with firewalls that block UDP traffic, which prevents clients from receiving the UDP packets.
2. HTTP STREAMING: The video is stored on an HTTP server as an ordinary file with a unique URL. The client establishes a TCP connection with the server and issues an HTTP GET request for that URL. The server sends the video file in an HTTP RESPONSE. The client buffers the incoming video, which is then displayed on the user's screen. Advantages:
 Use of HTTP over TCP allows the video to traverse firewalls and NATs easily.
 Does not need any media control servers like RTSP servers causing reduction in cost
of large-scale deployment over internet.
Disadvantages:
 Latency, or lag between when a video is recorded and when it is played. This can annoy and irritate viewers; only a delay of a few milliseconds is acceptable.
 Early pre-fetching of video: what if the user stops playing the video at an early stage? The wasted data transfer is not appreciated.
 All clients receive the same encoding of the video, despite the large variations in the bandwidth available to different clients, and for the same client over time.
Uses: YouTube and Netflix use the HTTP streaming mechanism.
3. ADAPTIVE HTTP STREAMING: The major drawbacks of HTTP streaming led to the development of a new type of HTTP-based streaming referred to as DASH (Dynamic Adaptive Streaming over HTTP). Videos are encoded into versions with different bit rates and hence different quality. The host dynamically requests a few seconds of video at a time from the different bit-rate versions. When the bandwidth is high, high bit-rate chunks are received, hence high quality; similarly, low-quality video is received when the bandwidth is low. Advantages:
 DASH allows the user to switch between different qualities of video on screen.
 Client can use HTTP byte-range request to precisely control the amount of pre-
fetched video that is locally buffered.
 DASH also stores the audio in different versions with different quality and different
bit-rate with unique URL.
So the client dynamically selects the video and audio chunks and synchronizes them locally in the play-out. Uses: Comcast uses DASH for streaming high-quality video content.
Advantages of Streaming Stored Video
 Convenience: Streaming stored video allows users to access content at any time,
without the need for a physical media or download.
 Increased Accessibility: Streaming stored video makes it easier for users to access
content, as it eliminates the need for physical storage and retrieval of media.
 On-demand Content: Streaming stored video allows users to choose what they
want to watch, and when they want to watch it, rather than having to conform to a
schedule.
 Increased User Experience: Streaming stored video provides a better viewing
experience compared to traditional broadcast, as it allows for higher quality video
and improved interactivity.
 Scalability: Streaming stored video can be scaled to meet the demands of large
numbers of users, making it a reliable solution for large-scale video distribution.
Applications of Streaming Stored Video
 Online Entertainment: Streaming stored video is commonly used for online
entertainment, allowing users to access movies, TV shows, and other content from
the internet.
 Video Conferencing: Streaming stored video is used for video conferencing,
allowing for real-time communication between participants.
 Education: Streaming stored video is used in education to facilitate online classes
and lectures.
 Corporate Communications: Streaming stored video is used in corporate
communications to share important information with employees and stakeholders.
 Advertising: Streaming stored video is used for advertising, allowing businesses to
reach target audiences with video content.

P2P (Peer To Peer) File Sharing


A peer-to-peer network allows computer hardware and software to communicate
without the need for a server. Unlike client-server architecture, there is no central
server for processing requests in a P2P architecture. The peers directly interact with
one another without the requirement of a central server.
When one peer makes a request, multiple peers may have a copy of the requested object. The problem then is how to get the IP addresses of all those peers. This is decided by the underlying architecture supported by the P2P system. Using one of the methods below, the client peer can learn about all the peers that have the requested object/file, and the file transfer takes place directly between two of these peers.
P2P Architecture
1. Centralized Directory
2. Query Flooding
3. Exploiting Heterogeneity
1. Centralized Directory
A centralized Directory is somewhat similar to client-server architecture in the
sense that it maintains a huge central server to provide directory service. All the
peers inform this central server of their IP address and the files they are making
available for sharing. The server queries the peers at regular intervals to make sure they are still connected. So basically, this server maintains a huge
database regarding which file is present at which IP addresses. The first system
which made use of this method was Napster, for Mp3 distribution.
Working
 Now whenever a requesting peer comes in, it sends its query to the server.
 Since the server has all the information of its peers, so it returns the IP addresses of
all the peers having the requested file to the peer.
 Now the file transfer takes place between these two peers.
The major problem with such an architecture is that there is a single point of
failure. If the server crashes, the whole P2P network crashes. Also, since all of the processing is done by a single server, a huge database has to be maintained and regularly updated.
2. Query Flooding
Unlike the centralized approach, this method makes use of distributed systems. In
this, the peers are supposed to be connected to an overlay network. It means if a
connection/path exists from one peer to another, it is a part of this overlay
network. In this overlay network, peers are called nodes, and the connection
between peers is called an edge between the nodes, thus resulting in a graph-like
structure. Gnutella was the first decentralized peer-to-peer network.
Working
 Now when one peer requests for some file, this request is sent to all its
neighboring nodes i.e. to all nodes connected to this node. If those nodes don’t
have the required file, they pass on the query to their neighbors and so on. This is
called query flooding.
 When the peer with the requested file is found (referred to as query hit), the query
flooding stops and it sends back the file name and file size to the client, thus
following the reverse path.
 If there are multiple query hits, the client selects from one of these peers.
Gnutella: Gnutella represents a new wave of P2P applications providing distributed
discovery and sharing of resources across the Internet. Gnutella is distinguished by
its support for anonymity and its decentralized architecture. A Gnutella network
consists of a dynamically changing set of peers connected using TCP/IP.
This method also has a disadvantage: the query has to be sent to all the neighboring peers until a match is found, which increases traffic in the network.
3. Exploiting Heterogeneity
This P2P architecture makes use of both the above-discussed systems. It resembles
a distributed system like Gnutella because there is no central server for query
processing. But unlike Gnutella, it does not treat all its peers equally. The peers
with higher bandwidth and network connectivity are at a higher priority and are
called group leaders/supernodes. The rest of the peers are assigned to these
supernodes. These supernodes are interconnected and the peers under these
supernodes inform their respective leaders about their connectivity, IP address,
and the files available for sharing.
KaZaA is an example of a technology that combines ideas from Napster and Gnutella: the individual group leaders along with their child peers form a Napster-like structure, while these group leaders interconnect among themselves to resemble a Gnutella-like structure.
Working
 This structure can process the queries in two ways.
 The first is that a supernode contacts other supernodes and merges their databases with its own. Thus, this supernode now has information about a large number of peers.
 Another approach is that when a query comes in, it is forwarded to the
neighboring super nodes until a match is found, just like in Gnutella. Thus query
flooding exists but with limited scope as each supernode has many child peers.
Hence, such a system exploits the heterogeneity of the peers by designating some
of them as group leaders/supernodes and others as their child peers
P2P File Sharing Security Concerns
Steps that ensure that Sensitive Information on the network is secure:
 Delete sensitive information that you don't require, and apply restrictions to important files present within the network.
 For storing or accessing sensitive information, try to reduce or remove P2P file-sharing programs on the computers involved.
 Constantly monitor the network to find unauthorized file-sharing programs.
 Block unauthorized peer-to-peer file-sharing programs at the perimeter of the network.
 Implement strong access controls and authentication mechanisms to prevent
unauthorized access to sensitive information on the network.
 Use encryption techniques such as Secure Socket Layer (SSL) or Transport Layer
Security (TLS) to protect data in transit between peers on the network.
 Implement firewalls, intrusion detection and prevention systems, and other
security measures to prevent unauthorized access to the network and to detect
and block malicious activity.
 Regularly update software and security patches to address known vulnerabilities in
P2P file-sharing programs and other software used on the network.
 Educate users about the risks associated with P2P file-sharing and provide training
on how to use these programs safely and responsibly.
 Use data loss prevention tools to monitor and prevent the transmission of sensitive
data outside of the network.
 Implement network segmentation to limit the scope of a security breach in case of
a compromise, and to prevent unauthorized access to sensitive areas of the
network.
 Regularly review and audit the network to identify potential security threats and to
ensure that security controls are effective and up-to-date.

Chp-8
What is Network Security?
Network security is the practice of protecting a computer network from
unauthorized access, misuse, or attacks. It involves using tools,
technologies, and policies to ensure that data traveling over the
network is safe and secure, keeping sensitive information away from
hackers and other threats.
How Does Network Security Work?
Network security uses several layers of protection, both at the edge of
the network and within it. Each layer has rules and controls that
determine who can access network resources. People who are allowed
access can use the network safely, but those who try to harm it with
attacks or other threats are stopped from doing so.
The basic principle of network security is protecting large stores of data and the network itself in layers, where each layer embeds rules and regulations that must be satisfied before any activity can be performed on the data. These levels are:
 Physical Network Security: This is the most basic level; it involves preventing unauthorized personnel from acquiring physical control over the network and compromising its confidentiality. This can be achieved by using devices like biometric systems.
 Technical Network Security: It primarily focuses on protecting the data
stored in the network or data involved in transitions through the
network. This type serves two purposes. One is protected from
unauthorized users, and the other is protected from malicious
activities.
 Administrative Network Security: This level of network security
protects user behavior like how the permission has been granted and
how the authorization process takes place. This also ensures the level
of sophistication the network might need for protecting it through all
the attacks. This level also suggests necessary amendments that have
to be done to the infrastructure.
Types of Network Security
There are several types of network security through which we can make our network more secure. Network security shields your network and data from breaches, intrusions, and other dangers. Below are some important types of network security:
Email Security
Email security is defined as the set of processes designed to keep an email account and its contents safe from unauthorized access. For example, fraudulent emails are generally sent to the Spam folder automatically, because most email service providers have built-in features to protect the content.
Email gateways are the most common threat vector for a security compromise. Hackers create intricate phishing campaigns, using recipients' personal information and social engineering techniques to trick them and direct them to malicious websites. To stop critical data from being lost, an email security program restricts outgoing messages and stops incoming threats.
What is Cryptography?
Cryptography is a technique of securing information and
communications through the use of codes so that only those persons
for whom the information is intended can understand and process it.
Thus preventing unauthorized access to information. The prefix “crypt”
means “hidden” and the suffix “graphy” means “writing”. In
Cryptography, the techniques that are used to protect information are
obtained from mathematical concepts and a set of rule-based
calculations known as algorithms to convert messages in ways that
make it hard to decode them. These algorithms are used for
cryptographic key generation, digital signing, and verification to protect
data privacy, web browsing on the internet and to protect confidential
transactions such as credit card and debit card transactions.
Features Of Cryptography
 Confidentiality: Information can only be accessed by the person for
whom it is intended and no other person except him can access it.
 Integrity: Information cannot be modified in storage or transition
between sender and intended receiver without any addition to
information being detected.
 Non-repudiation: The creator/sender of information cannot deny his
intention to send information at a later stage.
 Authentication: The identities of the sender and receiver are
confirmed. As well destination/origin of the information is confirmed.
 Interoperability: Cryptography allows for secure communication
between different systems and platforms.
 Adaptability: Cryptography continuously evolves to stay ahead of
security threats and technological advancements.
Types Of Cryptography
1. Symmetric Key Cryptography
It is an encryption system where the sender and receiver of a message use a single common key to encrypt and decrypt messages. Symmetric key cryptography is faster and simpler, but the problem is that the sender and receiver have to somehow exchange the key securely. The most popular symmetric key cryptography systems are the Data Encryption Standard (DES) and the Advanced Encryption Standard (AES).
2. Hash Functions
There is no usage of any key in this algorithm. A hash value with a fixed
length is calculated as per the plain text which makes it impossible for
the contents of plain text to be recovered. Many operating systems use
hash functions to encrypt passwords.
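For illustration, Python's standard hashlib module shows the two key properties: a fixed-length digest, and a drastic change in output for a tiny change in input:

import hashlib

print(hashlib.sha256(b"hunter2").hexdigest())   # 64 hex chars, fixed length
print(hashlib.sha256(b"hunter3").hexdigest())   # completely different digest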
3. Asymmetric Key Cryptography
In Asymmetric Key Cryptography, a pair of keys is used to encrypt and decrypt information. The receiver's public key is used for encryption and the receiver's private key is used for decryption. The public and private keys are different. Even if the public key is known by everyone, only the intended receiver can decode the message, because he alone knows his private key. The most popular asymmetric key cryptography algorithm is the RSA algorithm.
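A textbook-RSA sketch with tiny primes (insecure, for illustration only; real RSA uses primes hundreds of digits long plus padding, and the modular inverse via pow requires Python 3.8+):

# Key generation
p, q = 61, 53
n = p * q                  # modulus, part of both keys (3233)
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent, coprime with phi
d = pow(e, -1, phi)        # private exponent: modular inverse of e (2753)

# Encrypt with the receiver's public key (e, n); decrypt with private (d, n)
message = 65
cipher = pow(message, e, n)
assert pow(cipher, d, n) == message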
Applications of Cryptography
 Computer passwords: Cryptography is widely utilized in computer
security, particularly when creating and maintaining passwords. When
a user logs in, their password is hashed and compared to the hash that
was previously stored. Passwords are hashed and encrypted before
being stored. In this technique, the passwords are encrypted so that
even if a hacker gains access to the password database, they cannot
read the passwords.
 Digital Currencies: To protect transactions and prevent fraud, digital
currencies like Bitcoin also use cryptography. Complex algorithms and
cryptographic keys are used to safeguard transactions, making it nearly
hard to tamper with or forge the transactions.
 Secure web browsing: Online browsing security is provided by the use
of cryptography, which shields users from eavesdropping and man-in-
the-middle assaults. Public key cryptography is used by the Secure
Sockets Layer (SSL) and Transport Layer Security (TLS) protocols to
encrypt data sent between the web server and the client, establishing a
secure channel for communication.
 Electronic signatures: Electronic signatures serve as the digital
equivalent of a handwritten signature and are used to sign documents.
Digital signatures are created using cryptography and can be validated
using public key cryptography. In many nations, electronic signatures
are enforceable by law, and their use is expanding quickly.
 Authentication: Cryptography is used for authentication in many
different situations, such as when accessing a bank account, logging
into a computer, or using a secure network. Cryptographic methods are
employed by authentication protocols to confirm the user’s identity
and confirm that they have the required access rights to the resource.
 Cryptocurrencies: Cryptography is heavily used by cryptocurrencies like
Bitcoin and Ethereum to protect transactions, thwart fraud, and
maintain the network’s integrity. Complex algorithms and
cryptographic keys are used to safeguard transactions, making it nearly
hard to tamper with or forge the transactions.
 End-to-end Internet Encryption: End-to-end encryption is used to
protect two-way communications like video conversations, instant
messages, and email. Even if the message is encrypted, it assures that
only the intended receivers can read the message. End-to-end
encryption is widely used in communication apps like WhatsApp and
Signal, and it provides a high level of security and privacy for users.
Types of Cryptography Algorithm
 Advanced Encryption Standard (AES): AES (Advanced Encryption
Standard) is a popular encryption algorithm which uses the same key
for encryption and decryption It is a symmetric block cipher algorithm
with block size of 128 bits, 192 bits or 256 bits. AES algorithm is widely
regarded as the replacement of DES (Data encryption standard)
algorithm
 Data Encryption Standard (DES): DES (Data Encryption Standard) is an older encryption algorithm that converts 64-bit plaintext blocks into 64-bit ciphertext blocks using a 56-bit key (each round uses a 48-bit subkey). It uses symmetric keys (the same key for encryption and decryption). It is outdated by today's standards but can serve as a basic building block for learning newer encryption algorithms.
 RSA: RSA is a basic asymmetric cryptographic algorithm that uses two different keys, one for encryption and one for decryption. The RSA algorithm works on a block cipher concept that converts plain text into cipher text and vice versa.
 Secure Hash Algorithm (SHA): SHA is used to generate unique fixed-length digital fingerprints of input data, known as hashes. SHA variants such as SHA-2 and SHA-3 are commonly used to ensure data integrity and authenticity. The tiniest change in the input data drastically modifies the hash output, indicating a loss of integrity. (Hashing is also the process of storing key-value pairs in a hash table with the help of a hash function.)
Advantages of Cryptography
 Access Control: Cryptography can be used for access control to ensure
that only parties with the proper permissions have access to a
resource. Only those with the correct decryption key can access the
resource thanks to encryption.
 Secure Communication: For secure online communication,
cryptography is crucial. It offers secure mechanisms for transmitting
private information like passwords, bank account numbers, and other
sensitive data over the Internet.
 Protection against attacks: Cryptography aids in the defense against
various types of assaults, including replay and man-in-the-middle
attacks. It offers strategies for spotting and stopping these assaults.
 Compliance with legal requirements: Cryptography can assist firms in
meeting a variety of legal requirements, including data protection and
privacy legislation.

What is Firewall?
A firewall is a network security device, either hardware or software-based,
which monitors all incoming and outgoing traffic and based on a defined set
of security rules accepts, rejects, or drops that specific traffic.
 Accept: allow the traffic
 Reject: block the traffic but reply with an “unreachable error”
 Drop: block the traffic with no reply
A firewall is a type of network security device that filters incoming and
outgoing network traffic with security policies that have previously been set
up inside an organization. A firewall is essentially the wall that separates a
private internal network from the open Internet at its very basic level.

History and Need For Firewall


Before firewalls, network security was performed by Access Control Lists (ACLs) residing on routers. ACLs are rules that determine whether network access should be granted or denied to specific IP addresses. But ACLs cannot determine the nature of the packet they are blocking, and ACLs alone do not have the capacity to keep threats out of the network. Hence, the firewall was introduced. Connectivity to the Internet is no longer optional for organizations. However, while accessing the Internet provides benefits to the organization, it also enables the outside world to interact with the organization's internal network. This creates a threat to the organization. In order to secure the internal network from unauthorized traffic, we need a firewall.
Packet Filtering Firewall
A packet filtering firewall is used to control network access by monitoring outgoing and incoming packets and allowing them to pass or be dropped based on the source and destination IP addresses, protocols, and ports. It analyses traffic at the transport protocol layer (but mainly uses the first three layers). Packet filtering firewalls treat each packet in isolation: they have no ability to tell whether a packet is part of an existing stream of traffic, and can only allow or deny packets based on individual packet headers. A packet filtering firewall maintains a filtering table that decides whether a packet will be forwarded or discarded. With the filtering table below, packets will be filtered according to the following rules:
 Incoming packets from network 192.168.21.0 are blocked.
 Incoming packets destined for the internal TELNET server (port 23) are
blocked.
 Incoming packets destined for host 192.168.21.3 are blocked.
 All well-known services to the network 192.168.21.0 are allowed.
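A minimal first-match sketch of that filtering table in Python (the field names and the dictionary representation of a packet are illustrative assumptions, not a real firewall API):

RULES = [
    # (predicate, action) evaluated top-down, first match wins
    (lambda p: p["src_ip"].startswith("192.168.21."), "DROP"),    # rule 1
    (lambda p: p["dst_port"] == 23, "DROP"),                      # rule 2: TELNET
    (lambda p: p["dst_ip"] == "192.168.21.3", "DROP"),            # rule 3
    (lambda p: p["dst_ip"].startswith("192.168.21.")
               and p["dst_port"] < 1024, "ACCEPT"),               # rule 4: well-known
]

def filter_packet(packet):
    for predicate, action in RULES:
        if predicate(packet):
            return action
    return "DROP"    # assumed default-deny policy

print(filter_packet({"src_ip": "10.0.0.5",
                     "dst_ip": "192.168.21.9",
                     "dst_port": 80}))   # ACCEPT: well-known HTTP service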
What is Proxy Server?
A proxy server refers to a server that acts as an intermediary between the
request made by clients, and a particular server for some services or requests
for some resources. There are different types of proxy servers available that
are put into use according to the purpose of a request made by the clients to
the servers. The basic purpose of Proxy servers is to protect the direct
connection of Internet clients and Internet resources. There are many Proxy
providers in the market that provide services to both individuals and
businesses.
The proxy server also prevents the identification of the client’s IP address
when the client makes any request to any other servers.
 Internet Client and Internet resources: For Internet clients, Proxy
servers also act as a shield for an internal network against the request
coming from a client to access the data stored on the server. It makes
the original IP address of the node remain hidden while accessing data
from that server.
 Protects true host identity: In this method, outgoing traffic appears to
come from the proxy server rather than internet navigation. It must be
configured to a specific application such as HTTP or FTP. For example,
organizations can use a proxy to observe the traffic of their employees
to get the work efficiently done. It can also be used to keep a check on
any kind of highly confidential data leakage. Some can also use it to
increase their website rank.
Types Of Proxy Server
 Reverse Proxy Server: The job of a reverse proxy server is to listen to requests made by clients and redirect them to the particular web server, which may be one of several different servers.
Example – Listen for TCP port 80 website connections which are
normally placed in a demilitarized zone (DMZ) zone for publicly
accessible services but it also protects the true identity of the host.
Moreover, it is transparent to external users as external users will not
be able to identify the actual number of internal servers. So, it is the
prime duty of reverse proxy to redirect the flow depending upon the
configurations of internal servers. The request that is made to pass
through the private network protected by firewalls will need a proxy
server that is not abiding by any of the local policies. Such types of
requests from the clients are completed using reverse proxy servers.
This is also used to restrict the access of the clients to the confidential
data residing on the particular servers.
 Web Proxy Server: A web proxy forwards HTTP requests; only the URL is passed instead of a full path. The request is sent to the proxy server, and the proxy server responds. Examples: Apache, HAProxy.
 Anonymous Proxy Server: This type of proxy server does not reveal the client's original IP address. These servers are detectable as proxies but still provide reasonable anonymity to the client device.
 High Anonymity Proxy: This proxy server allows neither the original IP address nor its own identity as a proxy server to be detected.
 Transparent Proxy: This type of proxy server is unable to provide any anonymity to the client; instead, the original IP address can easily be detected through it. However, it is used to act as a cache for websites. A transparent proxy, combined with a gateway, results in a proxy server where connection requests sent by the client are redirected without any client-side IP address configuration. This redirection can easily be detected from the HTTP headers present on the server side.
 CGI Proxy: CGI proxy servers were developed to make websites more accessible. They accept requests for target URLs through a web form, process them, and return the result to the web browser. This type is less popular than privacy solutions like VPNs, but it still receives a lot of requests. Its usage has declined because the excessive traffic it can bring to a website after bypassing local filtering can cause damage to the organization.
 Suffix Proxy: A suffix proxy server appends the name of the proxy to the URL. This type of proxy doesn't provide any higher level of anonymity. It is used for bypassing web filters. It is easy to use and can be easily implemented, but it is used less due to the growing number of web filters that detect it.
 Distorting Proxy: A distorting proxy deliberately presents an incorrect original IP address for the client once it is detected as a proxy server. HTTP headers are used to maintain the confidentiality of the client's IP address.
 Tor Onion Proxy: This server aims at online anonymity for the user's personal information. It routes traffic through various networks present worldwide, making it difficult to track the user's address and preventing attacks based on anonymous activity. It makes it difficult for anyone trying to trace the original address. In this type of routing, the information is encrypted in multiple layers; on the way to the destination, one layer is decrypted at each hop, so the original content is recovered only at the end without being scrambled. This software is open source and free of cost to use.
 I2P Anonymous Proxy: It uses encryption to hide all communications at various levels. The encrypted data is then relayed through various network routers present at different locations, and thus I2P is a fully distributed proxy. This software is free of cost and open source to use, and it also resists censorship.
 DNS Proxy: A DNS proxy takes requests in the form of DNS queries and forwards them to the domain server, where the responses can also be cached; the flow of requests can also be redirected.
 Rotating Proxy: A rotating proxy assigns a new or different IP address to each user that connects to it. As users connect, a unique address is assigned to each one.
Chp 9
What Is a Network Tester?
A network tester is an important tool for technicians and network
administrators. It is designed to assess the operational status of network
connections, identifying issues in signal strength, interference, and
connectivity. This category encompasses a variety of testing tools, including
network cable testers, Ethernet test devices, and more specialized equipment
like cable network certifiers. These devices ensure that a network's cabling
and connections are functioning optimally, which is essential for maintaining
robust network performance.
The Importance of Cable Testing in Network Maintenance
Cable testing is fundamental to network installation and troubleshooting,
ensuring that the physical wires or fibers carrying data across a network are
capable of supporting the network and applications. Technicians can detect
faults, discontinuities, and cable quality issues using devices like the Fluke
Networks LinkIQ™ Cable+Network Tester. This process is critical for
preventing or diagnosing network failures and ensuring data transmission
integrity, especially in structured cabling systems where performance — by
category or cabling material — is vitally important.
Choosing the Right Tester for Your Network
Selecting the appropriate tester depends on the specific requirements of your
network infrastructure. Factors to consider include the types of cables used
(such as Cat 5 or Cat 6), the size and complexity of the network, and the
specific tests needed (for example, speed testing or connectivity checks).
Fluke offers testing tools tailored to various applications, from simple line
testers to advanced network connectivity testers, ensuring a tool for every
need.
How to Test Ethernet Cable
Testing Ethernet cable is straightforward with the right tool, such as the Fluke
LinkIQ. The procedure typically involves connecting the tester to the cable,
transmitting a variety of signals, and reading the results to identify any
connectivity or signal quality issues. This process helps pinpoint problems like
breaks, short circuits, and crosstalk interference, and ensure the reliability of
network connections.
Optimizing Network Performance with Advanced Testing Tools
For networks requiring rigorous analysis, advanced tools offer comprehensive
solutions for assessing network performance. These devices go beyond
simple connectivity tests to evaluate network speed, latency, and data
integrity. Utilizing such sophisticated testing equipment is essential for
diagnosing complex network issues and optimizing the performance of
critical network infrastructures.
What Does a Network Tester Do?
A network tester can be an indispensable tool for network technicians and IT
professionals. It serves a multifaceted role in ensuring the smooth operation
of both wired and wireless networks. At its core, a network tester is designed
to diagnose and troubleshoot network problems, providing critical insights
into the health and performance of a network's physical layers. Network
testers have a range of functions, many of which are standard across all
devices in their class.
Key Functions of a Network Tester
Verifying Connectivity
One of the primary functions of a network tester is to verify network
connectivity. This involves checking whether devices in a network can
communicate with each other or with devices outside the network.
Performing this test ensures that data can flow seamlessly across the
network's infrastructure.
Assessing Network Performance
Network testers evaluate the performance of network connections,
measuring parameters such as signal strength, bandwidth, and latency. These
metrics are crucial for understanding how well a network supports its
intended functions, and they can highlight areas requiring improvement or
optimization.
Identifying and Locating Faults
Network testers are adept at identifying and pinpointing faults within a
network. This includes detecting issues like broken cables, improper
configurations, and incompatible hardware, which can significantly hinder
network performance. Some network testers have advanced features that
allow for the precise location of faults, reducing downtime and accelerating
the repair process.
Cable Testing
For wired networks, network testers offer comprehensive cable testing
capabilities. They can test various types of cables — including Ethernet (Cat5,
Cat6, etc.), coaxial, and fiber optic — for faults such as open circuits, shorts,
and crossovers. Cable testing ensures that the physical wires or fibers
carrying data are free from defects that could impair network performance.
Configuring, Commissioning, and Certifying Networks
In addition to troubleshooting, network testers are also used in the initial
setup and configuration of networks, as well as certification once installation
is complete. They validate network designs, ensuring that the installed
network meets the specified performance criteria and is ready for operation.
Testing is critical for understanding the condition and performance of the
cabling and connections in your network, whether for maintaining its
function, expanding its reach, or troubleshooting issues that arise.
The Versatility of Network Testers
Network testers come in various forms, from simple handheld devices for
basic connectivity checks to sophisticated systems offering comprehensive
network infrastructure analysis. The choice of a network tester depends on
the specific needs of the network being tested, including the complexity of
the network, the types of cables used, and the level of detail required in the
analysis.
What is a Cable Tester?
A cable tester is a vital tool used to assess the functionality and integrity of
network cables, such as Ethernet (Cat 5, Cat 6, etc.), coaxial, and fiber optic
cables. It ensures that these cables can efficiently transmit data by identifying
physical defects, miswirings, or any installation errors that could degrade
network performance. By testing continuity, wire configuration, fault
detection, signal attenuation, and cable length, cable testers play an essential
role in installing, maintaining, and troubleshooting wired network systems,
facilitating optimal network reliability and performance.
What Does a Cable Tester Do?
A cable tester is an essential tool for installing, maintaining, or
troubleshooting wired network systems. It assesses the functionality and
integrity of various types of network cables, such as Ethernet (Cat 5, Cat 6,
etc.), coaxial, and fiber optic cables. The primary purpose of a cable tester is
to ensure that cables are capable of transmitting data effectively and are free
from physical defects or installation errors that could impact network
performance.
Continuity Testing
One of the essential functions of a cable tester is to check for continuity in
the wires inside a cable. Continuity testing verifies that each wire or fiber
within the cable is correctly connected and unbroken, ensuring the signal can
travel uninterrupted from one end to the other.
Identifying Wiring Configurations
Cable testers are used to identify the wiring configuration of network cables.
This is particularly important for ensuring that Ethernet cables are wired
correctly and to the applicable standard. Correct wiring is crucial for the
proper operation of network devices.
Detecting Wiring Faults
Cable testers are adept at detecting a variety of wiring faults, including
shorts, opens, reversed pairs, and crosstalk interference. These faults can
significantly degrade network performance, causing data transmission errors
and reducing reliability.
Measuring Signal Attenuation
More advanced cable testers can measure signal attenuation, which is the
loss of signal strength as it travels through a cable. High attenuation levels
can indicate poor cable quality or excessive cable length, which can affect
network performance.
Testing for Cable Length
Another capability of advanced cable testers is to measure the actual length
of a cable, as well as identify any breaks or cuts within it. This information is
vital for troubleshooting network issues, especially in large-scale installations
with extensive cable runs.
Cable testers allow you to effectively monitor the condition of your cabling.
They help you avoid unexpected faults and failures, and they speed
troubleshooting when issues occur, making them a key resource for ensuring
the reliability and performance of your network.
The Importance of Regular Cable Testing
Regular cable testing is crucial for maintaining the health of a network. It
helps identify potential issues before they can cause network failures,
ensuring that data transmission remains efficient and reliable. Cable testing is
also essential after installing new cables or changing existing network
infrastructure, verifying that the installation has been completed correctly
and meets required standards.
Choosing the Right Cable Tester
Selecting the appropriate cable tester depends on the specific needs of the
network and the types of cables used. As a leader in the manufacturing of
electronic test tools, Fluke offers a range of cable testers that are well known
for their accuracy, durability, and ease of use. Fluke cable testers provide
reliable solutions for a wide array of testing requirements, from basic models
for simple continuity checks to sophisticated testers capable of detailed
network diagnostics.
How to Test a Network Cable
Testing network cabling is crucial for ensuring that your network
infrastructure is reliable, efficient, and capable of handling your data
transmission needs. It involves a series of steps to verify the cable's
performance and identify any issues that could impact network functionality.
In general, this is the process for testing a network cable effectively:
1. Prepare Your Tools: Ensure you have a reliable network cable tester,
which depending on your network could range from a basic model for
simple continuity tests to more advanced testers for comprehensive
diagnostics.
2. Visual Inspection: Begin by visually inspecting the cable for any obvious
signs of damage, such as cuts, frays, or kinks. Physical damage can
degrade or destroy the cable's performance.
3. Connect the Tester: Plug one end of the network cable into the tester's
central unit and the other into the remote unit. For testers without a
remote, you may need to connect the cable to a network switch or
router at the other end.
4. Perform the Test: Turn on the tester and initiate the test. Basic testers
will check for continuity, whereas more advanced models can conduct
a series of tests, including speed tests, distance to fault, crosstalk, and
signal attenuation.
5. Interpret the Readings: Read the results displayed by the tester. A good
cable should show continuity on all wires, with no shorts or opens.
Advanced testers will provide more detailed results, including the exact
location of faults, if any.
6. Troubleshoot as Needed: If the test identifies any issues, troubleshoot
them accordingly. This may involve re-terminating the cable ends,
replacing defective cables, or adjusting the cable layout to reduce
interference.
7. Document the Results: Record the test results for future reference,
which becomes an invaluable resource for ongoing network
maintenance and troubleshooting.

What is network monitoring?
Network monitoring means using network monitoring software to monitor
a computer network’s ongoing health and reliability. Network performance
monitoring (NPM) systems typically generate topology maps and actionable
insights, based on the performance data collected and analyzed.
As a result of this network mapping, IT teams gain complete visibility into
network components, application performance monitoring and related IT
infrastructure. This visibility allows them to track the overall network health,
spot red flags and optimize data flow.
A network monitoring system watches for malfunctioning network devices
and overloaded resources. The system does this regardless of whether the
network resources are on-premises, in a data center, hosted by a cloud
services provider or part of a hybrid ecosystem. For example, it may find
overloaded CPUs on servers, high error rates on switches and routers, or
sudden spikes in network traffic. A key feature of NPM software is alerting
network administrators when a performance issue is spotted.
Network monitoring systems also collect data to analyze traffic flow and
measure performance and availability. One method of monitoring for
performance issues and bottlenecks is configuring thresholds, so that you
receive instant alerts when there is a threshold violation. Some thresholds
are simple static thresholds. However, modern NPM systems use machine
learning (ML) to determine normal performance across all of a network’s
metrics based on time of day and day of the week. NPM systems with such
ML-driven baselines create alerts that are typically more actionable.
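A minimal sketch of that alerting idea in plain Python: one static threshold plus a per-hour baseline learned from history. The metric, the values, and the mean-plus-three-standard-deviations rule are illustrative assumptions, not taken from any particular NPM product.

    # Minimal sketch of NPM-style alerting: a static threshold beside a learned
    # per-hour baseline (mean + 3 standard deviations over past samples).
    from collections import defaultdict
    from statistics import mean, stdev

    STATIC_LIMIT_MS = 200                 # illustrative static latency threshold
    history = defaultdict(list)           # hour of day -> past latency samples (ms)

    def record(hour, latency_ms):
        history[hour].append(latency_ms)

    def check(hour, latency_ms):
        alerts = []
        if latency_ms > STATIC_LIMIT_MS:
            alerts.append("static threshold violated")
        samples = history[hour]
        if len(samples) >= 30:            # only trust the baseline with enough data
            limit = mean(samples) + 3 * stdev(samples)
            if latency_ms > limit:
                alerts.append(f"hour-{hour} baseline violated (> {limit:.1f} ms)")
        return alerts

    for s in range(40):                   # forty quiet overnight samples
        record(3, 20 + s % 5)
    print(check(3, 150))                  # under the static limit, far above baseline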
Organizations often perform network monitoring within a network
operations center (NOC), which monitors network devices and the
connections between all dependencies without end-users knowing the NOC
is operating behind the scenes.
Types of network monitoring tools
There are three primary types of network monitoring tools.
1. SNMP-based tools use Simple Network Management Protocol (SNMP) to
interact with network hardware. These tools track the real-time status and
use of resources, such as CPU stats, memory consumption, bytes transmitted
and received, and other metrics. SNMP is one of the most widely used
monitoring protocols, along with Microsoft Windows Management
Instrumentation (WMI) for Windows servers and Secure Shell (SSH) for Unix
and Linux servers (an SNMP polling sketch appears just after this list).
2. Flow-based tools monitor traffic flow to provide statistics about protocols
and users. Some also inspect packet sequences to identify performance
issues between two IP addresses. These flow tools capture traffic flow data
and send them to a central collector for processing and storage.
3. Active network monitoring solutions inject packets into the network and
measure end-to-end reachability, round-trip time, bandwidth, packet loss,
link utilization and more. By conducting and measuring real-time transactions
from a user’s perspective, these solutions enable faster and more reliable
detection of outages and performance degradation.
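For a single probe, the active approach just described can be approximated by timing a TCP handshake; real active-monitoring tools send streams of such probes and also track loss and jitter. A stdlib-only Python sketch, with the target host as a placeholder:

    # Minimal active probe: time a TCP handshake to estimate round-trip latency.
    import socket
    import time

    def tcp_rtt_ms(host, port=443, timeout=2.0):
        start = time.perf_counter()
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return (time.perf_counter() - start) * 1000.0
        except OSError:
            return None                    # unreachable: treat as an outage signal

    rtt = tcp_rtt_ms("example.com")        # placeholder target
    print(f"RTT: {rtt:.1f} ms" if rtt is not None else "probe failed")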
There are also both agent and agentless network monitoring methods. Agent-
based monitoring involves installing an agent, a small application or piece of
software, onto the monitored device. Agent-less monitoring (using SNMP and
SSH protocols) requires no installation; instead, network monitoring software
logs directly into the monitored device.
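As a concrete example of agentless, SNMP-based polling (see the first tool type above), the sketch below reads a device's sysUpTime object. It assumes the third-party pysnmp library's hlapi interface; the address 192.0.2.10 and the community string are placeholders.

    # Agentless SNMP v2c poll of sysUpTime (OID 1.3.6.1.2.1.1.3.0), assuming
    # the pysnmp package; host address and community string are placeholders.
    from pysnmp.hlapi import (CommunityData, ContextData, ObjectIdentity,
                              ObjectType, SnmpEngine, UdpTransportTarget, getCmd)

    error_indication, error_status, error_index, var_binds = next(getCmd(
        SnmpEngine(),
        CommunityData('public', mpModel=1),            # SNMP v2c
        UdpTransportTarget(('192.0.2.10', 161)),
        ContextData(),
        ObjectType(ObjectIdentity('1.3.6.1.2.1.1.3.0'))))

    if error_indication:
        print(error_indication)                        # e.g., request timed out
    else:
        for name, value in var_binds:
            print(f"{name} = {value}")                 # uptime, hundredths of a second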

Dial-up Internet access is a form of Internet access that uses the facilities of
the public switched telephone network (PSTN) to establish a connection to
an Internet service provider (ISP) by dialing a telephone number on a
conventional telephone line which could be connected using an RJ-11
connector.[1] Dial-up connections use modems to decode audio signals into
data to send to a router or computer, and to encode signals from the latter
two devices to send to another modem at the ISP.
Dial-up Internet reached its peak popularity during the dot-com bubble with
the likes of ISPs such as Sprint, EarthLink, MSN Dial-up, NetZero, Prodigy, and
America Online (more commonly known as AOL). This was in large part
because broadband Internet did not become widely used until well into the
2000s. Since then, most dial-up access has been replaced by broadband.
Modems
Banks of modems used by an ISP to provide dial-
up Internet service
Because there was no technology to allow different carrier signals on a
telephone line at the time, dial-up Internet access relied on using audio
communication. A modem would take the digital data from a
computer, modulate it into an audio signal and send it to a receiving modem.
This receiving modem would demodulate the analogue signal back into digital
data for the computer to process.[14]
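The modulate step can be made concrete with a toy Python modulator that maps bits to two audio tones, loosely following the Bell 103 originate-channel frequencies (1070 Hz for a 0, 1270 Hz for a 1); a real modem adds framing, equalization, and much denser modulation schemes.

    # Toy bits-to-audio modulator, loosely Bell 103 style (assumed frequencies).
    import math

    SAMPLE_RATE = 8000                     # telephone-quality samples per second
    BAUD = 300                             # symbols per second
    FREQ = {0: 1070.0, 1: 1270.0}          # space / mark tones in Hz

    def modulate(bits):
        samples_per_bit = SAMPLE_RATE // BAUD
        audio = []
        for i, bit in enumerate(bits):
            for n in range(samples_per_bit):
                t = (i * samples_per_bit + n) / SAMPLE_RATE
                audio.append(math.sin(2 * math.pi * FREQ[bit] * t))
        return audio

    signal = modulate([0, 1, 1, 0, 1])
    print(len(signal), "audio samples")    # 5 bits x 26 samples each = 130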
The simplicity of this arrangement meant that people would be unable to use
their phone line for verbal communication until the Internet call was finished.
The Internet speed using this technology can drop to 21.6 kbit/s or less. Poor
condition of the telephone line, high noise level and other factors all affect
dial-up speed. For this reason, it is popularly called the 21600 Syndrome. [15][16]
Availability
Dial-up connections to the Internet require no additional infrastructure other
than the telephone network and the modems and servers needed to make
and answer the calls. Because telephone access is widely available, dial-up is
often the only choice available for rural or remote areas,
where broadband installations are not prevalent due to low population
density and high infrastructure cost.[11]
A 2008 Pew Research Center study stated that only 10% of US adults still used
dial-up Internet access. The study found that the most common reason for
retaining dial-up access was high broadband prices. Users cited lack of
infrastructure as a reason less often than stating that they would never
upgrade to broadband.[17] That number had fallen to 6% by 2010,[18] and to 3%
by 2013.[19]
A survey conducted in 2018 estimated that 0.3% of Americans were using
dial-up by 2017.[20]
The CRTC estimated that there were 336,000 Canadian dial-up users in 2010.[21]

Replacement by broadband
Broadband Internet access via cable, digital subscriber line, wireless
broadband, mobile broadband, satellite and FTTx has replaced dial-up access
in many parts of the world. Broadband connections typically offer speeds of
700 kbit/s or higher for two-thirds more than the price of dial-up on average.[18]
In addition, broadband connections are always on, thus avoiding the need
to connect and disconnect at the start and end of each session. Broadband
does not require the exclusive use of a phone line, and thus one can access
the Internet and at the same time make and receive voice phone calls
without having a second phone line.
However, many rural areas remain without high-speed Internet, despite the
eagerness of potential customers. This can be attributed to population,
location, or sometimes ISPs' lack of interest due to little chance of
profitability and high costs to build the required infrastructure. Some dial-up
ISPs have responded to the increased competition by lowering their rates and
making dial-up an attractive option for those who merely want email access
or basic Web browsing.[22][23]
Dial-up has seen a significant fall in usage, with the potential to cease to exist
in the future as more users switch to broadband. In 2013, only about
3% of the U.S. population used dial-up, compared to 30% in 2000.[24] One
contributing factor is the bandwidth requirements of newer computer
programs, like operating systems and antivirus software, which automatically
download sizeable updates in the background when a connection to the
Internet is first made. These background downloads can take several minutes
or longer and, until all updates are completed, they can severely impact the
amount of bandwidth available to other applications like Web browsers.
Since an "always on" broadband is the norm expected by most newer
applications being developed, this automatic background
downloading trend is expected to continue to eat away at dial-up's available
bandwidth to the detriment of dial-up users' applications. [25] Many newer
websites also now assume broadband speeds as the norm, and when
connected to with slower dial-up speeds may drop (timeout) these slower
connections to free up communication resources. On websites that are
designed to be more dial-up friendly, use of a reverse proxy prevents dial-ups
from being dropped as often but can introduce long wait periods for dial-up
users caused by the buffering used by a reverse proxy to bridge the different
data rates.
Despite the rapid decline, dial-up Internet still exists in some rural areas, and
many areas of developing and underdeveloped nations, although wireless
and satellite broadband are providing faster connections in many rural areas
where fibre or copper may be uneconomical.
In 2010, it was estimated that there were 800,000 dial-up users in the
UK. BT turned off its dial-up service in 2013.[26]
In 2012, it was estimated that 7% of Internet connections in New Zealand
were dial-up. One NZ (formerly Vodafone) turned off its dial-up service in
2021.[27][28]
Performance
An example handshake of a dial-up modem
Modern dial-up modems typically have a maximum theoretical transfer
speed of 56 kbit/s (using the V.90 or V.92 protocol), although in most cases,
40–50 kbit/s is the norm. Factors such as phone line noise as well as the
quality of the modem itself play a large part in determining connection
speeds.
Some connections may be as low as 20 kbit/s in extremely noisy
environments, such as in a hotel room where the phone line is shared with
many extensions, or in a rural area, many kilometres from the phone
exchange. Other factors such as long loops, loading coils, pair gain, electric
fences (usually in rural locations), and digital loop carriers can also slow
connections to 20 kbit/s or lower.
[The dial-up sounds are] a choreographed sequence that allowed these
digital devices to piggyback on an analog telephone network. A phone line
carries only the small range of frequencies in which most human
conversation takes place: about three hundred to three thousand hertz. The
modem works within these [telephone network] limits in creating sound
waves to carry data across phone lines. What you're hearing is the way 20th
century technology tunneled through a 19th century network; what you're
hearing is how a network designed to send the noises made by your muscles
as they pushed around air came to transmit anything [that can be] coded in
zeroes and ones.
-Alexis Madrigal, paraphrasing Glenn Fleishman[29]
Analog telephone lines are digitally switched and transported inside a Digital
Signal 0 once reaching the telephone company's equipment. Digital Signal 0 is
64 kbit/s and reserves 8 kbit/s for signaling information; therefore a 56 kbit/s
connection is the highest that will ever be possible with analog phone lines.
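In other words, the ceiling follows directly from the channel arithmetic: $$64 \text{ kbit/s} - 8 \text{ kbit/s} = 56 \text{ kbit/s}$$ of capacity remains for user data once the signaling overhead is set aside.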
Dial-up connections usually have latency as high as 150 ms or even more,
higher than many forms of broadband, such as cable or DSL, but typically less
than satellite connections. Longer latency can make video
conferencing and online gaming difficult, if not impossible. An increasing
amount of Internet content such as streaming media will not work at dial-up
speeds.
Video games released from the mid-1990s to the mid-2000s that utilized
Internet access such as EverQuest, Red Faction, Warcraft 3, Final Fantasy
XI, Phantasy Star Online, Guild Wars, Unreal Tournament, Halo: Combat
Evolved, Audition, Quake 3: Arena, Starsiege: Tribes and Ragnarok Online,
etc., accommodated for 56k dial-up with limited data transfer between the
game servers and user's personal computer. The first consoles to provide
Internet connectivity, the Dreamcast and PlayStation 2, supported dial-up as
well as broadband. The GameCube could use dial-up and broadband
connections, but this was used in very few games and required a separate
adapter. The original Xbox exclusively required a broadband connection.
Many computer and video games released since 2006 do not even include the
option to use dial-up. However, there are exceptions to this, such as Vendetta
Online, which can still run on a dial-up modem.

Chp-10
Defining network security
The simple definition of network security is any combination of hardware and
software products that operate in Layers 3 and 4 -- the network and transport
layers -- of the OSI stack, with the primary function to manage access to the
corporate network and network-embedded resources. Network security acts
as a gatekeeper that permits entry to authorized users and detects and
prevents unauthorized access and anything that tries to infiltrate the network
to cause harm or compromise data.
Network security is not one-size-fits-all, as it typically comprises different
components. Below, we explore nine elements of network security and their
roles in a security strategy. Please note that these components are not
mutually exclusive, as many features and technologies overlap in various
suppliers' offerings.
1. Network firewall
Firewalls are the first line of defense in network security. These network
applications or devices monitor and control the flow of incoming and
outgoing network traffic between a trusted internal network and untrusted
external networks. Network traffic is evaluated based on state, port and
protocol, with filtering decisions made based on both administrator-defined
security policy and static rules.

Firewalls can be broken into subcategories based on their underlying
technology, such as proxy, stateful inspection, deep inspection or next
generation. Next-generation firewalls (NGFWs) perform all the functions of
other firewalls but add application-level inspection and intrusion prevention
systems (IPSes) and use threat intelligence from outside the firewall.

Firewalls make up the single largest segment of the network security market,
according to Doyle Research and Security Mindsets. In 2019, firewalls of all
types were responsible for about 40% of network security spending, around
$8 billion.

Representative vendors: Check Point Software, Cisco, Juniper Networks and
Palo Alto Networks
2. Intrusion prevention system
Network IPSes are software products that provide continuous monitoring of
the network or system activities and analyze them for signs of policy
violations, deviations from standard security practices or malicious activity.
They log, alert and react to discovered issues. IPS products compare current
activity with a list of signatures known to represent threats. They can also use
alternative detection methods -- such as protocol analysis, anomaly and
behavioral detection or heuristics -- to discover suspicious network activity
and malicious software. Sophisticated IPSes use threat intelligence and
machine learning to increase accuracy.

Although many IPS features have been incorporated into NGFWs and unified
threat management (UTM) appliances, the IPS market is still responsible for
10% of network security spending.
Representative vendors: Alert Logic, Check Point Software, Cisco, McAfee and
Trend Micro
3. Unified threat management
A UTM product integrates multiple networking and network security
functions into a single appliance, while offering consolidated
management. UTM devices must include network routing, firewalling,
network intrusion prevention and gateway antivirus. They generally offer
many other security applications, such as VPN, remote access, URL filtering
and quality of service. Unified management of all these functions is required,
as the converged platform is designed to increase overall security, while
reducing complexity.

UTM devices are best suited for SMBs and for branch and remote sites. UTM
products are the second-largest network security category with over $5
billion in spending.

Representative vendors: Barracuda Networks, Fortinet, SonicWall, Sophos
and WatchGuard
Enterprises can evaluate these nine areas to ensure a solid approach to
network security.
4. Advanced network threat prevention
Advanced network threat prevention products perform signatureless
malware discovery at the network layer to detect cyber threats and attacks
that employ advanced malware and persistent remote access. These products
employ heuristics, code analysis, statistical analysis, emulation and machine
learning to flag and sandbox suspicious files. Sandboxing -- the isolation of a
file from the network so it can execute without affecting other resources --
helps identify malware based on its behavior rather than through
fingerprinting.

The benefit of advanced network threat prevention tools is their ability to
detect malware that has sophisticated evasion or obfuscation capabilities, as
well as detect new malware that hasn't been previously identified.
Additionally, they validate threat information and uncover critical indicators
of compromise that can be used for future investigations and threat hunting.
Advanced network threat prevention products represent a similar size as the
IPS market, about 10% of the network security market.

Representative vendors: Check Point Software, FireEye, Forcepoint, Palo Alto
Networks and Symantec
Emerging network security categories
In the four elements of network security below -- network access control
(NAC), cloud access security broker (CASB), DDoS mitigation and network
behavior anomaly detection (NBAD) -- each generates less than $1 billion in
spending, according to Doyle Research and Security Mindsets. Combined,
however, they account for about 11% of the total market, and they are all
growth categories.
5. Network access control
NAC is an approach to network management and security that supports
network visibility and access management. It consists of policies, procedures,
protocols, tools and applications that define, restrict and regulate what an
individual or component can or cannot do on a network. NAC
products enable compliant, authenticated and trusted endpoint devices and
nodes to access network resources and infrastructure. For noncompliant
devices, NAC can deny network access, place them in quarantine or restrict
access, thus keeping insecure nodes from infecting the network.

Representative vendors: Aruba Networks, Cisco, Forescout Technologies,
Fortinet and Pulse Secure
6. Cloud access security broker
CASBs are on-premises or cloud-based security policy enforcement points for
cloud application access and data usage. By acting as an intermediary among
mobile users, in-house IT architectures and cloud vendor environments,
CASBs enable an organization to extend the reach of its security policies --
especially regarding data protection -- into the public cloud.

CASB features include authentication, device profiling, auditing, malware
detection and prevention, data loss prevention, data encryption and logging.
The value of CASBs stems from their ability to give insight into cloud
application use across cloud platforms and identify unsanctioned use. This is
especially important in regulated industries.

Representative vendors: Bitglass, McAfee, Microsoft, Netskope, Symantec
and Zscaler
7. DDoS mitigation
DDoS mitigation is a set of hardening techniques, processes and tools that
enable a network, information system or IT environment to resist or mitigate
the effect of DDoS attacks on networks. DDoS mitigation activities typically
require analysis of the underlying system, network or environment for known
and unknown security vulnerabilities targeted in a DDoS attack. This also
requires identification of what normal conditions are -- through traffic
analysis -- and the ability to identify incoming traffic to separate human
traffic from humanlike bots and hijacked web browsers.

DDoS mitigation uses connection tracking, IP reputation lists, deep packet
inspection, blocklisting, allowlisting or rate limiting to filter traffic and
mitigate attacks. Many times, organizations have their DDoS mitigation needs
covered by specialized service providers, but the largest companies include
DDoS mitigation as an in-house capability.

Representative vendors: Cloudflare, F5 Networks, Imperva, Pulse Secure,
NetScout and Radware
8. Network behavior anomaly detection
NBAD products provide real-time monitoring of network traffic for deviations
in normal activity, trends or events. The tools complement
traditional perimeter security systems with their ability to detect threats and
stop suspicious activities that are unknown or specifically designed to avoid
standard detection methods. When NBAD products discover unusual activity,
they generate an alert that provides details and pass it on for further analysis.

For NBAD to be optimally effective, it must establish a baseline of normal
network or user behavior over a period of time. Once it defines certain
parameters as normal, it can then flag any departure from one or more of
those parameters.
Representative vendors: AT&T Cybersecurity, Cisco, Flowmon Networks, IBM
Security and LogRhythm
Network and security convergence
The elements of network security and networking functionality continue to
intersect. For example, many network vendors offer security features, and
security vendors offer networking functionality. This is especially prevalent in
SD-WAN and software-defined branch.
9. SD-WAN security
Advanced network security capabilities are increasingly being built into SD-
WAN products. SD-WAN security overlays security components -- such as
firewalls, IPSes, malware detection, content filtering and encryption -- onto
SD-WANs to ensure the corporate security policy is enforced at all levels. SD-
WAN security provides the ability to monitor and secure traffic that travels
directly to the internet -- e.g., SaaS and IaaS -- which is an increasing portion
of branch WAN bandwidth.

Basic Network Attacks in Computer Network
Many people rely on the Internet for many of their professional, social and
personal activities. But there are also people who attempt to damage our
Internet-connected computers, violate our privacy and render Internet
services inoperable.
Given the frequency and variety of existing attacks as well as the threat of
new and more destructive future attacks, network security has become a
central topic in the field of computer networking.
How are computer networks vulnerable? What are some of the more
prevalent types of attacks today?
Malware – short for malicious software, which is specifically designed to
disrupt, damage, or gain unauthorized access to a computer system. Much of
the malware out there today is self-replicating: once it infects one host, from
that host it seeks entry into other hosts over the Internet, and from the
newly infected hosts, it seeks entry into yet more hosts. In this manner, self-
replicating malware can spread exponentially fast.
Virus – Malware that requires some form of user interaction to infect
the user's device. The classic example is an e-mail attachment containing
malicious executable code. If a user receives and opens such an attachment,
the user inadvertently runs the malware on the device.
Worm – Malware that can enter a device without any explicit user
interaction. For example, a user may be running a vulnerable network
application to which an attacker can send malware. In some cases, without
any user intervention, the application may accept the malware from the
Internet and run it, creating a worm.
Botnet – A network of private computers infected with malicious software
and controlled as a group without the owners’ knowledge, e.g. to send
spam.
DoS (Denial of Service) – A DoS attack renders a network, host, or other
pieces of infrastructure unusable by legitimate users. Most Internet DoS
attacks fall into one of three categories :
• Vulnerability attack: This involves sending a few well-crafted messages to a
vulnerable application or operating system running on a targeted host. If the
right sequence of packets is sent to a vulnerable application or operating
system, the service can stop or, worse, the host can crash.
• Bandwidth flooding: The attacker sends a deluge of packets to the targeted
host—so many packets that the target’s access link becomes clogged,
preventing legitimate packets from reaching the server.
• Connection flooding: The attacker establishes a large number of half-open
or fully open TCP connections at the target host. The host can become so
bogged down with these bogus connections that it stops accepting legitimate
connections.
DDoS (Distributed DoS) – DDoS is a type of DoS attack where multiple
compromised systems are used to target a single system, causing a Denial of
Service (DoS) attack. DDoS attacks leveraging botnets with thousands of
compromised hosts are a common occurrence today. DDoS attacks are much
harder to detect and defend against than a DoS attack from a single host.
Packet sniffer – A passive receiver that records a copy of every packet that
flies by is called a packet sniffer. By placing a passive receiver in the vicinity of
the wireless transmitter, that receiver can obtain a copy of every packet that
is transmitted! These packets can contain all kinds of sensitive information,
including passwords, social security numbers, trade secrets, and private
personal messages. Some of the best defenses against packet sniffing involve
cryptography.
IP Spoofing – The ability to inject packets into the Internet with a false source
address is known as IP spoofing, and is but one of many ways in which one
user can masquerade as another user. To solve this problem, we will need
end-point authentication, that is, a mechanism that will allow us to
determine with certainty if a message originates from where we think it
does.
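One minimal form of end-point authentication is a message authentication code computed over the message with a shared secret key; real protocols add certificates, nonces, and timestamps to stop replay. A stdlib-only Python sketch, with a hypothetical key and messages:

    # End-point authentication via a keyed MAC: a receiver holding the shared
    # key can verify who sent the message, which a source-IP spoofer cannot fake.
    import hashlib
    import hmac

    SHARED_KEY = b"pre-shared secret"      # hypothetical key known to both ends

    def tag(message: bytes) -> bytes:
        return hmac.new(SHARED_KEY, message, hashlib.sha256).digest()

    def verify(message: bytes, received_tag: bytes) -> bool:
        return hmac.compare_digest(tag(message), received_tag)

    msg = b"transfer 10 to account 42"
    t = tag(msg)
    print(verify(msg, t))                                 # True: authentic message
    print(verify(b"transfer 9999 to account 666", t))     # False: forged or tampered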
Man-in-the-Middle Attack – As the name indicates, a man-in-the-middle
attack occurs when someone between you and the person with whom you
are communicating is actively monitoring, capturing, and controlling your
communication transparently. For example, the attacker can re-route a data
exchange. When computers are communicating at low levels of the network
layer, the computers might not be able to determine with whom they are
exchanging data.
Compromised-Key Attack – A key is a secret code or number necessary to
interpret secured information. Although obtaining a key is a difficult and
resource-intensive process for an attacker, it is possible. After an attacker
obtains a key, that key is referred to as a compromised key. An attacker uses
the compromised key to gain access to a secured communication without the
sender or receiver being aware of the attack.
Phishing – The fraudulent practice of sending emails purporting to be from
reputable companies in order to induce individuals to reveal personal
information, such as passwords and credit card numbers.
DNS spoofing – Also referred to as DNS cache poisoning, is a form of
computer security hacking in which corrupt Domain Name System data is
introduced into the DNS resolver’s cache, causing the name server to return
an incorrect IP address.
Rootkit – Rootkits are stealthy packages designed to gain administrative
rights and access to a network device. Once installed, attackers have
complete and unrestricted access to the device and can therefore execute
any action, including spying on users or stealing confidential data,
without hindrance.
Learn More About Network Attacks:
There is more to learn about network attacks; the following are some
well-known attack tools and malware families.
Zeus Malware: Variants, Techniques and History:
Zeus, also known as Zbot, is a malware package that uses a client/server
model. Hackers use the Zeus malware to create massive botnets. The primary
purpose of Zeus is to help attackers gain unauthorized access to financial
systems by stealing credentials, banking information and financial data. The
stolen data is then sent back to the attackers through the Zeus Command and
Control (C&C) server.
Zeus has infected over 3 million computers in the USA, and has compromised
major organizations such as NASA and the Bank of America.
Cobalt Strike: White-Hat Hacker Powerhouse in the Wrong Hands
Cobalt Strike is a commercial penetration testing tool. It gives security
testers access to a large variety of attack capabilities. Cobalt Strike can
be used to execute spear phishing and to gain unauthorized access to
systems, and it can also simulate a variety of malware and other advanced
threat tactics.
While Cobalt Strike is a legitimate tool used by ethical hackers, some
cybercriminals obtain the trial version and crack its software protection,
or even gain access to a commercial copy of the product.
FTCode Ransomware: Distribution, Anatomy and Protection
FTCode is a type of ransomware designed to encrypt data and force victims
to pay a ransom for a decryption key. The code is written in PowerShell,
meaning that it can encrypt files on a Windows device without downloading
any other components. FTCode loads its executable code only into memory,
without saving it to disk, to prevent detection by antivirus software. The
FTCode ransomware is delivered through spam messages containing an infected
Word template in Italian.
Mimikatz: World's Most Dangerous Password-Stealing Platform
Mimikatz is an open-source tool initially developed by ethical hacker
Benjamin Delpy to demonstrate a flaw in Microsoft's authentication
protocols. In other words, the tool steals passwords. It is deployed on
Windows and enables users to extract Kerberos tickets and other
authentication tokens from the machine. Some of the more significant
attacks facilitated by Mimikatz include Pass-the-Hash, Kerberos Golden
Ticket, Pass-the-Key, and Pass-the-Ticket.
Understanding Privilege Escalation and 5 Common Attack Techniques
Privilege escalation is a common method of gaining unauthorized access to
systems. Attackers begin privilege escalation by finding weak points in an
organization's defenses and gaining access to a system. Usually, the first
point of penetration will not grant attackers the necessary level of access
or data, so they continue with privilege escalation to gain more
permissions or obtain access to additional, more sensitive systems.

Digital Signatures: Defined
To define a digital signature in the simplest terms: it is a mathematical
scheme for verifying the authenticity of digital messages or documents. Think
of it as a fingerprint that is unique to both the document and the signatory.
This fingerprint is created using cryptographic algorithms, which bring an
added layer of security. Without getting overly technical, a digital signature
algorithm typically employs a pair of keys: a private key and a public key. The
private key is known only to the signer, while the public key is available to
anyone who needs to verify the signature.
How does this work in a practical sense? When a person signs a document
digitally, their software application creates a cryptographic hash of the
document’s contents. This hash is then encrypted using the signatory’s
private key. The resulting encrypted hash is the digital signature. The
recipient, or anyone needing to verify the document, can decrypt this hash
using the signer’s public key. If the decrypted hash matches a newly
generated hash of the document, the signature is verified, proving that the
document has not been altered and that the signer’s identity is confirmed,
preventing identity theft and other cybercrimes.
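The sign/verify flow just described can be sketched with the third-party Python cryptography package, here using RSA with PSS padding and SHA-256 as one common (assumed) choice of algorithm:

    # Sign/verify sketch with the "cryptography" package (RSA-PSS, SHA-256).
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()
    pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                      salt_length=padding.PSS.MAX_LENGTH)

    document = b"contract: party A agrees to ..."

    # Signing: hash the document, then encrypt the digest with the private key.
    signature = private_key.sign(document, pss, hashes.SHA256())

    # Verification: anyone with the public key checks the document against it.
    try:
        public_key.verify(signature, document, pss, hashes.SHA256())
        print("signature valid: document intact, signer confirmed")
    except InvalidSignature:
        print("signature invalid: document altered or wrong signer")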
What Is the Difference Between an eSignature and a Digital Signature?
eSignature and digital signature are both methods employed for signing
documents electronically, but they differ significantly in their underlying
technology and security features. An eSignature, or electronic signature,
encompasses any electronic method used to sign a document. This can range
from typing your name at the end of an email or clicking an “I Agree” button
on a website to using a stylus or finger to draw your signature on a
touchscreen device.
eSignatures are widely used due to their convenience and ease of use,
allowing users to quickly sign documents without the need for printing and
scanning. However, not all eSignatures come with inherent security features.
They often rely on the trust between the parties involved and may not
include advanced mechanisms to verify the signer’s identity or protect the
document’s integrity.
In contrast, a digital signature is a subset of electronic signatures that
employs encryption technology to provide an extra layer of security. Digital
signatures use a combination of cryptographic methods, including public key
infrastructure (PKI), to ensure that the signer is definitively identified and
that the signed document has not been altered after signing.
When a digital signature is applied, a unique digital fingerprint (hash) of the
document is created and encrypted using the signer’s private key. This
encrypted hash, along with the signer’s digital certificate, forms the digital
signature.
To verify the signature, the recipient uses the signer’s public key to decrypt
the hash and compare it to a newly generated hash of the received
document. If the hashes match, it confirms both the signer’s identity and the
document’s integrity.
Legal Status of Digital Signatures and eSignatures
Digital signatures and eSignatures have gained widespread legal acceptance
across many jurisdictions. Laws such as the US ESIGN Act and the European
eIDAS regulation provide a legal framework for making electronic signatures
valid and enforceable, equating them to traditional handwritten signatures in
most contexts.
A packet filtering firewall is a network security device that filters incoming
and outgoing network packets based on a predefined set of rules.
Rules are typically based on IP addresses, port numbers, and protocols. By
inspecting packet headers, the firewall decides whether a packet matches an allowed rule;
if not, it blocks the packet. The process helps protect networks and manage
traffic, but it does not inspect packet contents for potential threats.

How Does a Packet Filtering Firewall Work?
This type of firewall operates at a fundamental level by applying a set of
predetermined rules to each network packet that attempts to enter or leave
the network. These rules are defined by the network administrator and are
critical in maintaining the integrity and security of the network.
Packet filtering firewalls use two main components within each data packet
to determine their legitimacy: the header and the payload.
The packet header includes the source and destination IP address, revealing
the packet's origin and intended endpoint. Protocols such as TCP, UDP, and
ICMP define rules of engagement for the packet's journey. Additionally, the
firewall examines source and destination port numbers, which are similar to
doors through which the data travels. Certain flags within the TCP header, like
a connection request signal, are also inspected. The direction of the traffic
(incoming or outgoing) and the specific network interface (NIC) the data is
traversing are factored into the firewall's decision-making process.
Packet filtering firewalls can be configured to manage both inbound and
outbound traffic, providing a bidirectional security mechanism. This ensures
unauthorized access is prevented from external sources attempting to access
the internal network, and internal threats trying to communicate outwards.
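A minimal first-match sketch of that rule evaluation in Python. The rule fields mirror the header values discussed above; the addresses, ports, and default-deny policy are illustrative assumptions:

    # First-match packet filter over header fields; None acts as a wildcard,
    # and anything no rule allows is dropped (default deny).
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Rule:
        action: str                        # "allow" or "deny"
        src_ip: Optional[str] = None
        dst_port: Optional[int] = None
        protocol: Optional[str] = None     # "tcp", "udp", "icmp"
        direction: Optional[str] = None    # "in" or "out"

    RULES = [
        Rule("deny",  src_ip="203.0.113.7"),                        # known-bad source
        Rule("allow", dst_port=443, protocol="tcp", direction="in"),
        Rule("allow", dst_port=53,  protocol="udp"),
    ]

    def decide(packet: dict) -> str:
        for rule in RULES:
            if all(getattr(rule, field) in (None, packet[field])
                   for field in ("src_ip", "dst_port", "protocol", "direction")):
                return rule.action
        return "deny"                      # default-deny policy

    print(decide({"src_ip": "198.51.100.2", "dst_port": 443,
                  "protocol": "tcp", "direction": "in"}))            # allow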

Packet Filtering Firewall Use Cases
A primary packet filtering firewall use case is the prevention of IP spoofing
attacks, where the firewall examines the source IP addresses of incoming
packets. By ensuring the packets originate from expected and trustworthy
sources, the firewall can prevent attackers from masquerading as legitimate
entities within the network. This is particularly important for perimeter
defenses.
In addition to security, packet filtering firewalls are used to manage and
streamline network traffic flow. By setting up rules that reflect network
policies, these firewalls can limit traffic between different subnets within the
enterprise. Limiting traffic between different subnets helps contain potential
breaches and segment network resources according to departmental needs
or sensitivity levels.
Another use case for packet filtering firewalls is scenarios where speed and
resource efficiency are valued. Due to their less computationally intensive
nature, packet filtering firewalls can quickly process traffic without significant
overhead.

Packet Filtering Firewall Benefits
High Speed Efficiency
One of the main benefits of packet filtering firewalls is their ability to make
quick decisions. By operating at the network layer, they rapidly accept or
reject packets based on set rules without the need for deep packet
inspection. This results in very fast processing, allowing for efficient network
traffic flow and reduced chances of bottlenecks.

Chp-11
Wireless Links and Network Characteristics
 A number of important differences between a wired link and a wireless
link:
o Decreasing signal strength:
 Electromagnetic radiation attenuates as it passes through
matter. Even in free space, the signal will disperse,
resulting in decreased signal strength as the distance
between sender and receiver increases.
o Interference from other sources:
 Radio sources transmission in the same frequency band
will interfere with each other.
 In addition to interference from transmitting sources,
electromagnetic noise within the environment can result
in interference.
o Multipath propagation:
 It occurs when portions of the electromagnetic wave
reflect off objects and the ground, taking paths of different
lengths between a sender and receiver. Moving objects
between the sender and receiver can cause multipath
propagation to change over time.
 Wireless links employ powerful CRC error detection codes and link-level
reliable-data-transfer protocols that retransmit corrupted frames, because
bit errors are more common in wireless links.
 The host receives an electromagnetic signal that is a combination of a
degraded form of the original signal transmitted by the sender and
background noise in the environment.
o The Signal-to-noise ratio (SNR) is a relative measure of the
strength of the received signal and this noise.
o The SNR is typically measured in dB.
 It is twenty times the base-10 logarithm of the ratio of
the amplitude of the received signal to the amplitude of
the noise: $$\text{SNR (dB)} = 20 \log_{10}(A_{\text{signal}} / A_{\text{noise}})$$.
 A larger SNR makes it easier for the receiver to extract the
transmitted signal from the background noise.
 BER = Bit error rate
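As a small illustration of the dB formula above, the following Python sketch (with made-up amplitude values) converts a signal/noise amplitude ratio into an SNR in dB:

    import math

    def snr_db(signal_amplitude: float, noise_amplitude: float) -> float:
        """SNR in dB from amplitudes: 20 * log10(A_signal / A_noise)."""
        return 20 * math.log10(signal_amplitude / noise_amplitude)

    # Hypothetical amplitudes: a signal 10x stronger than the noise
    print(snr_db(1.0, 0.1))   # 20.0 dB -> easier to extract the signal
    print(snr_db(1.0, 0.5))   # ~6.02 dB -> harder to extract the signal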
 Physical-layer characteristics that are important to understand for
higher-layer wireless communication protocols:
o For a given modulation scheme, the higher the SNR, the lower
the BER:
 Since a sender can increase the SNR by increasing its
transmission power, a sender can decrease the probability
that a frame is received in error by increasing its
transmission power.
 There’s little gain in increasing the power beyond a
certain threshold.
 A disadvantage associated with increasing the
transmission power is that it costs more energy for the
sender and the sender’s transmissions are more likely to
interfere with transmissions of another sender.
o For a given SNR, a modulation technique with a higher bit transmission rate will have a higher BER:
 With an SNR of 10 dB, BPSK modulation with a transmission rate of 1 Mbps has a BER of less than 10^-7, while QAM16 modulation with a transmission rate of 4 Mbps has a BER of about 10^-1, far too high to be practically useful.
 With an SNR of 20 dB, QAM16 modulation has a transmission rate of 4 Mbps and a BER of 10^-7, while BPSK modulation has a transmission rate of only 1 Mbps and a BER that is extremely low.
 If one can tolerate a BER of 10^-7, the higher transmission rate offered by QAM16 would make it the preferred modulation technique in this situation.
o Dynamic selection of the physical-layer modulation technique can be used to adapt the modulation technique to channel conditions (see the sketch below):
 The SNR may change as a result of mobility or due to changes in the environment.
 Adaptive modulation and coding are used in cellular data systems and in 802.11 WiFi and 4G cellular data networks.
 This allows the selection of the modulation technique that provides the highest transmission rate possible, subject to a constraint on the BER, for the given channel characteristics.
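A minimal sketch of this rate-adaptation idea, assuming a hypothetical table that maps each scheme to its rate and the minimum SNR at which its BER stays below the tolerated threshold (the threshold values are illustrative, taken loosely from the BPSK/QAM16 example above):

    # Hypothetical (name, rate_mbps, min_snr_db) entries; real thresholds
    # depend on the BER target and the channel model.
    SCHEMES = [
        ("BPSK",  1, 10),   # usable from ~10 dB for BER <= 1e-7 (illustrative)
        ("QAM16", 4, 20),   # needs ~20 dB for BER <= 1e-7 (illustrative)
    ]

    def pick_modulation(snr_db: float) -> str:
        """Pick the highest-rate scheme whose SNR requirement is met."""
        usable = [s for s in SCHEMES if snr_db >= s[2]]
        if not usable:
            return "no transmission possible"
        return max(usable, key=lambda s: s[1])[0]

    print(pick_modulation(12))  # BPSK
    print(pick_modulation(25))  # QAM16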
 Suppose that Station A is transmitting to Station B, and that Station C is
transmitting to Station B.
o Hidden Terminal problem:
 Physical obstructions in the environment may prevent A
and C from hearing each other’s transmission, even though
A’s and C’s transmissions are indeed interfering at the
destination B.
o A second scenario that also results in undetectable collisions at the receiver:
 Results from the fading of a signal's strength as it propagates through the wireless medium. A's and C's signals are strong enough to interfere with each other at B, but not strong enough for A and C to detect each other's transmissions.
In Wireless Sensor Networks (WSNs), the Medium Access Control (MAC)
protocol is a set of guidelines that dictate how each node should transmit
data over the shared wireless medium. The primary objective of the MAC
protocol is to minimize the occurrence of idle listening, over-hearing, and
collisions of data packets. By efficiently managing access to the wireless
medium, the MAC protocol helps to reduce energy consumption and optimize
the use of network resources.
MAC Protocol Categories
 Contention based MAC
 Scheduled based MAC
 Hybrid MAC
 Cross-Layer MAC
Contention-based MAC
Contention-based MAC protocol is also known as a random access MAC
protocol. It allows all nodes to transmit data on the shared medium, but they
have to compete with each other to access the medium. One example of
contention-based MAC is CSMA/CA.
In CSMA/CA, each node senses the medium before transmitting the data. If
the medium is idle, the node can transmit data immediately. However, if the
channel is busy the node has to wait for a random time also known as back-
off time. This back-off time reduces the chances of collisions.
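The following toy simulation (not a faithful 802.11 implementation; the busy-probability is a stand-in for real carrier sensing) illustrates the sense-then-random-back-off behaviour just described:

    import random

    def csma_ca_attempt(channel_busy, max_backoff_slots=15):
        """Toy CSMA/CA step: sense the channel; if busy, pick a random back-off."""
        if not channel_busy():
            return "channel idle: transmit immediately"
        slots = random.randint(0, max_backoff_slots)  # random back-off time
        return f"channel busy: back off for {slots} slot(s), then sense again"

    # Hypothetical channel that is busy about half of the time
    print(csma_ca_attempt(lambda: random.random() < 0.5))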
Contention-based MAC Used in Wireless Sensor Networks
Sensor MAC (SMAC) is a contention-based MAC protocol that is specifically
designed for wireless sensor networks. The primary objective of SMAC is to
minimize idle listening, over-hearing, and collisions of data packets. To
achieve this goal, SMAC adopts a duty-cycle approach, also known as a sleep-
wakeup cycle. In this approach, each node alternates between a fixed length
of active and sleeping periods based on its schedule.
To prevent collisions among packets, SMAC utilizes the Request to Send (RTS)
and Clear to Send (CTS) packets before transmitting data packets. This helps
to ensure that only one node is transmitting data at a time, reducing the
likelihood of collisions and improving overall network efficiency.
Scheduled-based MAC
Scheduled-based MAC is also known as a deterministic MAC protocol, in which each node follows a predetermined schedule and transmits data in its assigned time slot. Data collisions are completely eliminated in scheduled-based MAC. An example of scheduled-based MAC is TDMA (Time Division Multiple Access).
In TDMA the time is divided into fixed slots and each node is allocated a
specific time frame in which they can transmit the data. During this time slot,
other nodes remain silent.
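One simple slot-assignment rule, as a sketch (round-robin assignment is an assumption here; real TDMA schedules may be computed differently):

    def tdma_owner(node_count: int, slot_index: int) -> int:
        """Round-robin TDMA: slot k in each frame belongs to node k mod N."""
        return slot_index % node_count

    # With 4 nodes, slots 0..7 belong to nodes 0,1,2,3,0,1,2,3
    print([tdma_owner(4, k) for k in range(8)])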
Scheduled-based MAC Used in Wireless Sensor Networks
LEACH (Low Energy Adaptive Clustering Hierarchy) is a TDMA-based protocol
that utilizes a clustering mechanism in wireless sensor networks. A cluster
comprises sensor nodes grouped together, with one node designated as the
cluster head and the others serving as members. The cluster head is selected
based on a probabilistic algorithm, which ensures that power consumption is
evenly distributed among the nodes.
Once the cluster is formed, a schedule is created for nodes to transmit data
within the cluster. Additionally, to mitigate inter-cluster interference, each
cluster head assigns a unique CDMA code to its cluster.
Hybrid MAC
Hybrid MAC is a combination of different protocols such as contention-based
MAC and scheduled-based MAC to optimize the performance of wireless
sensor networks. For example, contention-based MAC protocols, such as
CSMA/CA (Carrier Sense Multiple Access with Collision Avoidance), allow
nodes to access the medium based on a random backoff interval, which
reduces collisions but may result in inefficient utilization of the medium. On
the other hand, scheduled-based MAC protocols, such as TDMA (Time
Division Multiple Access), divide the medium into time slots and assign them
to different nodes, which can achieve high utilization but may not be flexible
enough to adapt to changing network conditions. Hybrid MAC addresses this by switching between the two: while the channel is idle or traffic is low, it uses contention-based access such as CSMA/CA; when traffic in the channel increases, it switches to scheduled access such as TDMA.
Hybrid MAC Used in Wireless Sensor Networks
The IEEE developed 802.15.4 as a standard for low-rate WPANs; it outlines the PHY and MAC layers for low-power wireless communication in the 2.4 GHz ISM band. It was specifically created for applications that require low data rates, low power consumption, and cost-effectiveness, such as sensor networks, home automation, and industrial automation.
The physical layer employs DSSS modulation with a data rate of 250 kbps and works in the 2.4 GHz ISM band, which has 16 channels with 5 MHz spacing. The DSSS spreading also helps to resist interference from other wireless devices.
On the other hand, the media access control layer implements a CSMA-CA
protocol to avoid device collisions. It supports different data packet sizes,
ranging from 9 to 127 bytes, and also offers error detection and correction
mechanisms.
Cross-Layer MAC
Cross-layer MAC allows the different layers in the protocol stack, typically
including physical, MAC, and network layers, to interact and share
information with one another. Firstly MAC layers gather information about
the state of the channel whether the channel is busy or not. This information
will be further used to control the other parameters such as data
transmission rate, packet loss rate, and delay.
Once the parameters have been determined, the MAC layer sends the data
packets to the PHY layer for transmission over the wireless channel. After the
data transmission, the PHY layer sends feedback to the MAC layer about the
success or failure of the transmission. If the transmission was unsuccessful, the MAC layer retransmits based on this feedback.
Overall, the working of Cross-Layer MAC involves interaction between the MAC and PHY layers to improve the efficiency of data transmission and energy consumption in WSNs by optimizing the transmission parameters.
Cross-Layer MAC Used in Wireless Sensor Networks
The IEEE 802.11e standard expands on the existing IEEE 802.11 WLAN
standard by incorporating Quality of Service (QoS) support. It utilizes a cross-
layer approach, allowing the MAC layer to collaborate with higher layers such
as the network and application layers, to provide specific services based on
the application’s needs.
On the other hand, IEEE 802.16, or WiMAX, is intended for broadband
wireless access and utilizes a cross-layer design as well. This design allows the
MAC layer to communicate with the physical layer to adjust to the changing
channel conditions, such as interference, noise, and fading.
Mobility of IP-based Wireless Networks
Mobility management in wireless networks involves changing the point of
attachment, and hence the IP address, of a MN. This section focuses on the
fundamentals, requirements and issues of mobility, and introduces current
solutions for mobility.
2.1 Categories of Mobility
Although all types of mobility require efficient handoff, efficient routing, low packet loss, etc., different issues arise in different patterns of movement. In order to better understand and address the issues that exist in mobility, we should first look at the different scenarios of mobility.
2.1.1 Host Mobility and Network Mobility
Host mobility refers to an end host changing its point of attachment to the
networks while the communication between the host and its correspondent
node stays uninterrupted.
Network mobility refers to a mobile IP subnet changing its point of attachment to an IP backbone [Lach03]. In a simple scenario of network mobility, a mobile network contains a mobile router and a set of mobile nodes, and the internal structure of the mobile network is a relatively stable internal topology. In a complex mobility scenario, a mobile network may itself be visited by mobile nodes or other mobile networks.
2.1.2 Macro-mobility and Micro-mobility
Macro-mobility refers to inter-domain movement. A number of well-known proposals like Mobile IP have been developed to address the issues in macro-mobility. These proposals are well suited for macro-mobility due to their mechanisms for achieving efficient handoff, a low rate of packet loss, efficient routing of packets, etc. However, these proposals generally have relatively large overhead.

Issues arise when mobility solutions designed for macro-mobility, such as Mobile IP, are adopted for micro-mobility. The base Mobile IP mechanism introduces significant network overhead in terms of delay, packet loss, and signaling. For example, real-time wireless applications such as Voice over IP (VoIP) would suffer degradation of service due to frequent handoffs [Inayat03]. Micro-mobility solutions are therefore proposed for localized mobility within a domain. These proposals focus on reducing handoff latency and the additional overhead due to control traffic; they maintain routing information at the local network and are also heavy on the address space [Campbell02].
2.1.3 IP Mobile Multicasting
Instead of sending data to a single node, multicasting delivers data to a set of
selected receivers. In IP multicast, a source sends a single copy of a packet
and the network duplicates the packet as needed until the packet reaches all
the selected receivers. This avoids the overheads associated with both
replication of packets at the source and sending duplicated packets over the
same link. [Romdhani04] is a good survey paper for IP mobile multicast.
2.2 Requirements of Mobility
The main goal of mobility solutions is to keep the MN communicating with the networks while it moves, and to avoid disrupting its connections. When a MN moves from one place to another, a mobility solution should provide mechanisms to handle the handover and the subsequent routing of packets, in order to support seamless connectivity and continuous reachability. Proposals for mobility should have the following properties [Atiquzzaman05][Zhuang03][Henderson03]:
 Efficient Handoff: The performance of a mobility scheme mainly depends on the type of handoff it uses. There are two types of handoffs: soft handoff and hard handoff. Soft handoff makes a new connection before disconnecting the previous connection; it allows the mobile node to communicate over multiple interfaces during handoff, and communication with the old interface is dropped when the signal strength of the old access point drops below a certain threshold. Hard handoff drops the previous connection before making a new connection. Handoffs should be handled efficiently in order to reduce or avoid packet loss and delay as far as possible.
 Location Management: If a mobile host offers services to other nodes, it must be able to be located by these nodes as it moves, while keeping its topological location private.
 Efficient Routing: Packets should be routed with as low a latency as possible, optimally close to the shortest path provided by IP routing.
 Security: Security is a crucial issue in a wireless environment. Mobility management schemes should not introduce additional security issues to the network. Also, interruption of connectivity due to the time required for the authentication process should be avoided.
 Scalability: A mobility scheme is said to be scalable if its performance does not drop as the number of nodes (MNs and CNs) increases.
 Fault Tolerance: A scheme should be able to function even in the presence of failure. A mobility scheme should make the communication between mobile nodes as fault-tolerant as the communication between stationary nodes.
 Simultaneous Mobility: End hosts may move simultaneously, and the communication between them should not be interrupted.
 Link-Layer Independence: Users should be able to operate seamlessly across heterogeneous link-layer technologies, not all of which support the same link-layer mobility scheme.
 Compatibility with IP Routing: Mobility management must work well with IP routing, such as acquiring a new topologically correct IP address upon moving, since full host routes are not propagated in the Internet.
 Transparency: The mobility scheme should be transparent to applications, so that applications are not aware of the handoff and thus do not need to be modified for mobility.
 Quality of Service: QoS should not be reduced as the MN moves and performs handoff.
2.3 Current Solutions for Host Mobility
The Internet host mobility is mainly approached from three angles: data link
layer mobility, network layer mobility, and other higher layer mobility
[Snoeren00]. IEEE 802.11b, Mobile IP, and MSOCKS are three example
schemes of these three layers respectively [Atiquzzaman05].
 Link-layer technologies which support mobility include Ricochet [Ritter01], 802.11b, GSM, etc. Hiding mobility in the link layer results in the reinvention of mobility support in each new wireless system.

Most solutions proposed so far (e.g., Mobile IP) are based on the idea of indirection points between the MN and CN, so that the CN does not need to know the topological location of the MN; it simply sends packets to the indirection points. These approaches do not require changing fixed hosts in the Internet, but they require changing the underlying IP substrate. Some other solutions emphasize an end-to-end architecture (e.g., TCP Migrate); these do not require changes to the underlying IP substrate [Snoeren00].
 Here we introduce some mobility solutions which have existed for some time and are widely used or referred to.
2.3.1 Mobile IP
The most widely known mobility solution today is Mobile IP, which was developed by the IETF to support mobility on the Internet. Mobile IP aims to allow a MN to continue communicating with its CN during its movement. It supports network-layer mobility, so that TCP is not aware of the mobility.

In Mobile IP, the Home Agent (HA) is used as an indirection point. The HA is in the MN's home network, and it intercepts and tunnels packets to the MN. A Mobile Node (MN) has a permanent Home Address (HoA) from its home network and obtains a temporary Care-of-Address (COA), which is routable within the foreign network, when it moves to a new network. The MN registers its COA with the HA in its home network every time it obtains a new COA; this is called the registration process. To maintain transport and higher-level communications while moving, the MN keeps its HoA and uses the COA for routing purposes. A binding associates these two addresses on both the MN side and the HA side.

Mobile IP ensures the delivery of packets destined to a MN's home address by creating a routing tunnel between the MN's home network and its COA. Each time a packet for the MN is received in the home network, the HA intercepts the packet, encapsulates it inside another packet, and sends it to the COA of the MN. Packets sent from the MN addressed to the CN may either be routed directly from the foreign network to the CN, which is known as triangle routing, or be tunneled back to the HA and routed from the HA to the CN, which is known as reverse tunneling. Triangle routing may not be allowed by the security infrastructure in the foreign network, while reverse tunneling solves this issue.
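A toy model of the home-agent binding and tunnelling logic just described (class, method names, and addresses are hypothetical; real Mobile IP is implemented in the network stack):

    class HomeAgent:
        """Toy Home Agent: keeps HoA -> CoA bindings and 'tunnels' packets."""
        def __init__(self):
            self.bindings = {}          # HoA -> CoA

        def register(self, hoa: str, coa: str):
            self.bindings[hoa] = coa    # registration after the MN moves

        def deliver(self, dst_hoa: str, payload: str) -> str:
            coa = self.bindings.get(dst_hoa)
            if coa is None:
                return f"deliver locally to {dst_hoa}: {payload}"
            # encapsulate and tunnel to the care-of address
            return f"tunnel to {coa}: [outer dst={coa} | inner dst={dst_hoa} | {payload}]"

    ha = HomeAgent()
    ha.register("198.51.100.7", "203.0.113.42")   # MN's HoA -> current CoA
    print(ha.deliver("198.51.100.7", "hello"))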

Both Mobile IPv4 and Mobile IPv6 are based on the above ideas, and they
share many features. However, Mobile IPv6 offers some improvements:

1) There is no need for an FA in Mobile IPv6. In Mobile IPv4, a Foreign Agent (FA) is deployed: the FA is an agent in the foreign network which the MN is visiting, and when a MN visits the foreign network it obtains a COA from the FA of that network. Since Mobile IPv6 offers address auto-configuration capabilities, there is no need to deploy FAs in foreign networks. Most packets sent to the MN are sent using an IPv6 routing header rather than IP encapsulation, which reduces the amount of overhead compared to Mobile IPv4. Also, registration in Mobile IPv6 is direct, while in IPv4 it may either be direct or go through the FA.

2) Optimal routing. Another important feature of Mobile IPv6 is that it offers support for optimal routing of data packets between the CN and the MN, bypassing the HA, in order to avoid triangle routing. With route optimization, the MN informs the CN of its COA using a Binding Update (BU), and an IPv6 routing header is then used to send packets directly from the CN to the COA of the MN. However, route optimization compromises location privacy by exposing the COA, and hence the MN's location, to the CN.

3) Dynamic HA discovery. Mobile IPv4 uses a broadcasting mechanism to dynamically discover the HA in the home network, while Mobile IPv6 uses the IPv6 Neighbor Discovery Protocol [Narten98].

Mobile IP also has the following limitations:

1) The dependence of Mobile IP on a fixed HA reduces fault tolerance. If the HA or the home network fails or is overloaded, the MN will be unreachable. To address this issue, the notion of dynamic home agents has been proposed for MIPv4; however, the actual algorithm used to discover and allocate a nearby home agent is still under investigation [Zhuang03]. MIPv6 provides a dynamic home agent address discovery mechanism that allows a MN to dynamically discover the IP address of a HA in its home network.

2) Routing efficiency is degraded by routing through the HA when the MN is far away from the HA.

3) Handoff performance. Two mechanisms have been proposed to increase handoff performance in MIPv4 and MIPv6: low-latency handoff and fast handover. In low-latency handoff, a BU is sent in advance of the actual link-layer handoff; however, it must be guaranteed that the BU completes before the actual handoff does, which is difficult to achieve in practice. Fast handover sets up a bi-directional tunnel between an anchor FA and the current FA. This allows the MN to delay a formal BU to the HA and minimizes the impact on real-time applications. However, this mechanism requires the existence of a FA in each network the MN visits.
Wi-Fi stands for Wireless Fidelity. Wi-Fi standards are developed by the IEEE (Institute of Electrical and Electronics Engineers), which sets the standards for the Wi-Fi system.
Each Wi-Fi network standard has two parameters :
1. Speed –
This is the data transfer rate of the network measured in Mbps (1
megabit per second).
2. Frequency –
The radio frequency on which the network is carried. The two frequency bands for Wi-Fi are 2.4 GHz and 5 GHz. In short, it is the frequency of the radio wave that carries the data.
Two Frequencies of Wi-Fi signal:
Wi-Fi routers that support only 2.4 GHz or only 5 GHz are called single-band routers, but many new routers support both 2.4 GHz and 5 GHz; these are called dual-band routers.
The 2.4 GHz band is the common Wi-Fi band, but it is also used by other appliances such as Bluetooth devices, wireless phones, and cameras. Because the band is used by so many devices, it becomes overcrowded and speeds drop. This is where 5 GHz comes in: it is newer and less commonly used, and because fewer devices use it there is less crowding and interference.
The 2.4 GHz transmits data at a slower speed than 5 GHz but does have a
longer range than 5 GHz. The 5 GHz transmits data at a faster rate, but it has a
shorter range because it has a higher frequency.
Parameter    2.4 GHz              5 GHz
Speed        Comparatively low    High
Range        High                 Comparatively low
Different standards of Wi-Fi:
These are the Wi-Fi standards that evolved from 1997 to 2021. In 1997, IEEE created one standard and gave it the name 802.11.
IEEE 802.11 –
1. It was developed in 1997.
2. Speed is about 2 Mbps (2 megabits per second).
IEEE 802.11a –
1. This standard was developed in 1999.
2. 802.11a is useful for commercial and industrial purposes.
3. It works on the 5 GHz frequency.
4. The maximum speed of 802.11a is 54 Mbps.
5. This standard was made to avoid interference with other devices that use the 2.4 GHz band.
IEEE 802.11b –
1. This standard was also created in 1999, alongside 802.11a.
2. The difference is that it uses the 2.4 GHz frequency band.
3. The speed of 802.11b is 11 Mbps.
4. This standard is useful for home and domestic use.
IEEE 802.11g –
1. This standard was designed in 2003.
2. It combines the properties of both 802.11a and 802.11b.
3. The frequency band used is 2.4 GHz, for better coverage.
4. The maximum speed is also up to 54 Mbps.
IEEE 802.11n –
1. This was introduced in 2009.
2. 802.11n operates on both 2.4 GHz and 5 GHz frequency bands, they are
operated individually.
3. The data transfer rate is around 600 Mbps.
IEEE 802.11ac –
1. This standard, named 802.11ac, was developed in 2013.
2. Wi-Fi 802.11ac works on the 5 GHz band.
3. The maximum speed of this standard is 1.3 Gbps.
4. It gives less range because of the 5 GHz band, but nowadays most devices work on the 802.11n and 802.11ac standards.
IEEE 802.11ax –
1. It is the newest and most advanced version of Wi-Fi.
2. It was released in 2019.
3. It operates on both 2.4 GHz and 5 GHz, for better coverage as well as better speed.
4. Users get a maximum speed of around 10 Gbps, roughly a 30-40% improvement over 802.11ac.
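The generations above can be collected into a small lookup table; the figures below simply restate the list (approximate maxima):

    # (year, bands, approx. max speed) for each 802.11 generation listed above
    WIFI_STANDARDS = {
        "802.11":   (1997, ("2.4 GHz",),          "2 Mbps"),
        "802.11a":  (1999, ("5 GHz",),            "54 Mbps"),
        "802.11b":  (1999, ("2.4 GHz",),          "11 Mbps"),
        "802.11g":  (2003, ("2.4 GHz",),          "54 Mbps"),
        "802.11n":  (2009, ("2.4 GHz", "5 GHz"),  "600 Mbps"),
        "802.11ac": (2013, ("5 GHz",),            "1.3 Gbps"),
        "802.11ax": (2019, ("2.4 GHz", "5 GHz"),  "~10 Gbps"),
    }

    year, bands, speed = WIFI_STANDARDS["802.11n"]
    print(f"802.11n ({year}): {', '.join(bands)}, up to {speed}")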
What is Bluetooth?
Bluetooth is a wireless communication technology that allows devices to
exchange data over short distances without the need for cables. It’s
commonly used for connecting devices like wireless headphones, keyboards,
and mouse to computers and smartphones. Bluetooth is also key in smart
home devices and car systems. Developed by the Bluetooth Special Interest
Group (SIG), this technology ensures secure and reliable connections with
low power consumption, making it a crucial feature in today’s electronic
devices.
What is Zigbee?
Zigbee is a wireless communication protocol designed for low-power, low-
data-rate applications, commonly used in home automation and industrial
settings. It allows smart devices like lights, thermostats, and security systems
to communicate with each other efficiently. Developed by the Zigbee
Alliance, this technology emphasizes simplicity and reliability, providing
secure and scalable networks for various smart applications. Zigbee’s low
power consumption makes it ideal for battery-operated devices, ensuring
long-lasting and energy-efficient performance in modern smart homes
and IoT systems.
IoT Protocols Comparison
The data transfer rate is higher for Bluetooth than for Zigbee, whereas Zigbee covers a larger distance than Bluetooth.
Bluetooth and Zigbee have a lot in common: both are types of IEEE 802.15 WPANs, both run in the 2.4 GHz unlicensed band, and both use small form factors and low power. Besides these similarities, there are some differences, given below in tabular form.
Comparison Between Bluetooth and ZigBee
 Management: Bluetooth standards and devices are managed by the Bluetooth SIG (Special Interest Group). Zigbee is managed by the Zigbee Alliance, which also tests and approves Zigbee-based devices; IEEE standardizes all Zigbee-based protocols.
 Frequency range: Bluetooth supports 2.4 GHz to 2.483 GHz, while Zigbee mostly uses 2.4 GHz worldwide.
 RF channels: There are seventy-nine RF channels in Bluetooth, and sixteen in Zigbee.
 Modulation: Bluetooth uses the GFSK modulation technique, whereas Zigbee uses BPSK and QPSK modulation techniques, like UWB.
 Cell nodes: There is a maximum of 8 cell nodes in Bluetooth, while there are more than sixty-five thousand (65,000) cell nodes in Zigbee.
 Topology: Bluetooth networks are built using the point-to-point master-slave approach, in which one master and up to seven slaves form a piconet; linking two or more piconets forms a scatternet. Zigbee devices can be networked in a variety of generic topologies, including star and mesh, and a cluster can be created by connecting different Zigbee-based network topologies. Zigbee Coordinator, Zigbee Router, and Zigbee Endpoint nodes make up any Zigbee network.
 Bandwidth: Bluetooth requires low bandwidth. Zigbee also requires low bandwidth, though often greater than Bluetooth's.
 Radio range: The radio signal range of Bluetooth is ten meters, while that of Zigbee is ten to a hundred meters.
 Standard: Bluetooth was developed under IEEE 802.15.1, whereas Zigbee was developed under IEEE 802.15.4.
 Batteries: Bluetooth batteries may be recharged; Zigbee batteries cannot be recharged, but they last longer.
 Data rate and power: Bluetooth uses high data rates and a lot of power for large-packet devices; Zigbee employs low data rates and little power for small-packet devices.
 Spread spectrum: Bluetooth employs Frequency Hopping Spread Spectrum (FHSS), in which the carrier signal is made to fluctuate in frequency. Zigbee employs the Direct Sequence Spread Spectrum (DSSS) technique, in which the original signal is mixed with and recovered from a pseudo-random code at the transmitter and receiver.
 Network speed: Bluetooth offers a network speed of up to 1 megabit per second, while Zigbee offers up to 250 kilobits per second.
 Network join time: The time it takes to join a network is about 3 seconds with Bluetooth and about 30 milliseconds with Zigbee.
 Protocol stack size: Bluetooth's protocol stack is 250 KB in size; Zigbee's protocol stack is 28 KB.
 Applications: Computer peripherals like wireless keyboards, mice, and headsets are the main use cases for Bluetooth-based applications; several wireless remote controls and gesture-controlled devices also communicate data via Bluetooth. Systems built on the Zigbee protocol are intended for wireless sensor networking and are more popular with compact, energy-efficient gadgets; Zigbee-based networking is used in a variety of applications, including SCADA system sensors, medical devices, and television remote controls.
A Cellular Network is formed of cells. A cell covers a geographical region and has a base station, analogous to an 802.11 AP, which helps mobile users attach to the network; there is an air interface of physical- and data-link-layer protocols between the mobile and the base station. All base stations are connected to the Mobile Switching Center, which connects cells to a wide-area network, manages call setup, and handles mobility.
There is a certain radio spectrum that is allocated to the base station and to a
particular region and that now needs to be shared. There are two techniques
for sharing mobile-to-base station radio spectrum:
 Combined FDMA/TDMA: It divides the spectrum into frequency
channels and divides each channel into time slots.
 Code Division Multiple Access (CDMA): It allows the reuse of the same
spectrum over all cells. Net capacity improvement. Two frequency
bands are used one of which is for the forwarding channel (cell-site to
subscriber) and one for the reverse channel (sub to cell-site).
Cell Fundamentals
In practice, cells are of arbitrary shape (close to a circle), because a base station has the same power and the same sensitivity on all sides; but putting two or three circles together leaves gaps or overlaps, so to solve this problem we use an equilateral triangle, a square, or a regular hexagon, of which the hexagonal cell is closest to a circle and is used for system design. The co-channel reuse ratio is given by Q = D/R = sqrt(3N), where D is the distance between co-channel cells, R is the cell radius, and N is the cluster size.
The number of cells in a cluster, N, determines the amount of co-channel interference and also the number of frequency channels available per cell.
Cell Splitting
When the number of subscribers in a given area increases, more channels must be allocated in that area; this is done by cell splitting. A single small cell is introduced midway between two co-channel cells.
Need for Cellular Hierarchy
 Extending coverage to areas that are difficult to cover with a large cell.
 Increasing the capacity of the network in areas that have a higher density of users.
 Supporting the increasing number of wireless devices and the communication between them.
Cellular Hierarchy
 Femtocells: The smallest unit of the hierarchy; these cells cover only a few meters, with all devices in the physical range of the user.
 Picocells: The size of these networks is in the range of a few tens of
meters, e.g., WLANs.
 Microcells: Cover a range of hundreds of meters e.g. in urban areas to
support PCS which is another kind of mobile technology.
 Macrocells: Cover areas in the order of several kilometers, e.g., cover
metropolitan areas.
 Mega cells: Cover nationwide areas with ranges of hundreds of
kilometers, e.g., used with satellites.
Fixed Channel Allocation
Adjacent radio frequency bands are assigned to different cells. In analog systems, each channel corresponds to one user, while in digital systems each RF channel carries several time slots or codes (TDMA/CDMA). This scheme is simple to implement when traffic is uniform.
Global System for Mobile (GSM) Communications
GSM uses 124 frequency channels, each of which uses an 8-slot Time Division Multiplexing (TDM) system. The frequency band used is also fixed. Transmitting and receiving do not happen in the same time slot, because GSM radios cannot transmit and receive at the same time, and it takes time to switch from one to the other. A data frame is transmitted in 547 microseconds, but a transmitter is only allowed to send one data frame every 4.615 milliseconds, since it shares the channel with seven other stations. The gross rate of each channel is 270,833 bps, divided among eight users, which gives 33.854 kbps gross per user.
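The per-user figure follows directly from the gross channel rate, as this small check confirms:

    gross_bps = 270_833          # gross rate of one GSM RF channel
    per_user = gross_bps / 8     # eight TDM slots share the channel
    print(per_user)              # 33854.125 bps, i.e. ~33.854 kbps gross per user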
Control Channel (CC)
Apart from the user channels, there are some control channels which are used to manage the system.
1. The broadcast control channel (BCC): A continuous stream of output from the base station containing the base station's identity and the channel status. All mobile stations monitor its signal strength to see when they have moved into a new cell.
2. The dedicated control channel (DCC): Used for location updating, registration, and call setup. In particular, each base station maintains a database of mobile stations; the information needed to maintain this database is sent on the dedicated control channel.
Common Control Channel
Its three logical sub-channels are:
1. The paging channel, which the base station uses to announce incoming calls. Each mobile station monitors it continuously to watch for calls it should answer.
2. The random access channel, which allows users to request a slot on the dedicated control channel. If two requests collide, they are garbled and have to be retried later.
3. The access grant channel, on which the assigned slot is announced.
Advantages of Cellular Networks
 Both mobile and fixed users can connect with it; voice and data services are provided.
 It has increased capacity and is easy to maintain.
 It is easy to upgrade the equipment, and it consumes less power.
 Being wireless, it can be used in places where cables cannot be laid.
 It can use the features and functions of nearly all private and public networks.
 It can be distributed to cover larger areas.
Disadvantages of Cellular Networks
 It provides a lower data rate than wired networks like fiber optics and
DSL. The data rate changes depending on wireless technologies like
GSM, CDMA, LTE, etc.
 Macro cells are impacted by multipath signal loss.
 To service customers, there is a limited capacity that depends on the
channels and different access techniques.
 Due to the wireless nature of the connection, security issues exist.
 For the construction of antennas for cellular networks, a foundation
tower and space are required. It takes a lot of time and labor to do this.
Chp-6
Network Layer
o The Network Layer is the third layer of the OSI model.
o It handles the service requests from the transport layer and further
forwards the service request to the data link layer.
o The network layer translates the logical addresses into physical
addresses
o It determines the route from the source to the destination and also
manages the traffic problems such as switching, routing and controls
the congestion of data packets.
o The main role of the network layer is to move the packets from sending
host to the receiving host.
The main functions performed by the network layer are:
o Routing: When a packet arrives at a router's input link, the router moves the packet to the appropriate output link. For example, a packet going from source S1 to destination S2 must be forwarded, at each router R1 on the path, to the next router on the way to S2.
o Logical Addressing: The data link layer implements the physical
addressing and network layer implements the logical addressing.
Logical addressing is also used to distinguish between source and
destination system. The network layer adds a header to the packet
which includes the logical addresses of both the sender and the
receiver.
o Internetworking: This is the main role of the network layer that it
provides the logical connection between different types of networks.
o Fragmentation: Fragmentation is the process of breaking packets into smaller data units so that they can travel through networks with smaller maximum frame sizes.
Forwarding & Routing
In Network layer, a router is used to forward the packets. Every router has a
forwarding table. A router forwards a packet by examining a packet's header
field and then using the header field value to index into the forwarding table.
The value stored in the forwarding table corresponding to the header field
value indicates the router's outgoing interface link to which the packet is to
be forwarded.
For example, when a packet with a header field value of 0111 arrives at a router, the router indexes this header value into the forwarding table and determines that the output link interface is 2; the router then forwards the packet to interface 2. The routing algorithm determines the values that are inserted in the forwarding table. The routing algorithm can be centralized or decentralized.
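A minimal sketch of this table-driven forwarding step, reusing the hypothetical 4-bit header values from the example above:

    # Forwarding table: header-field value -> outgoing link interface
    FORWARDING_TABLE = {
        "0100": 1,
        "0111": 2,   # the example above: header 0111 -> interface 2
        "1001": 3,
    }

    def forward(header_bits: str) -> int:
        """Index the header value into the forwarding table."""
        return FORWARDING_TABLE[header_bits]

    print(forward("0111"))   # 2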
Services Provided by the Network Layer
o Guaranteed delivery: This layer provides the service which guarantees
that the packet will arrive at its destination.
o Guaranteed delivery with bounded delay: This service guarantees that
the packet will be delivered within a specified host-to-host delay
bound.
o In-Order packets: This service ensures that the packet arrives at the
destination in the order in which they are sent.
o Guaranteed max jitter: This service ensures that the amount of time
taken between two successive transmissions at the sender is equal to
the time between their receipt at the destination.
o Security services: The network layer provides security by using a
session key between the source and destination host. The network
layer in the source host encrypts the payloads of datagrams being sent
to the destination host. The network layer in the destination host
would then decrypt the payload. In such a way, the network layer
maintains the data integrity and source authentication services.
IP stands for Internet Protocol, and v4 stands for Version Four; IPv4 is the most widely used system for identifying devices on a network. It uses a set of four numbers, separated by periods (like 192.168.0.1), to give each device a unique address. This address helps data find its way from one device to another over the internet.
IPv4 was the first version brought into production use, within the ARPANET in 1983. IPv4 addresses are 32-bit integers expressed in dotted-decimal notation; for example, 192.0.2.126 is an IPv4 address.
Parts of IPv4
IPv4 addresses consist of three parts:
 Network Part: The network part is the unique number assigned to the network; it also identifies the class of the network.
 Host Part: The host part uniquely identifies a machine on the network. This part of the IPv4 address is assigned to every host. For each host on the network the network part is the same, but the host part must differ.
 Subnet Number: This is an optional part of IPv4. Local networks that have large numbers of hosts can be divided into subnets, and subnet numbers are assigned to them.
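Python's standard ipaddress module can illustrate the network/host split (the address and prefix below are arbitrary examples):

    import ipaddress

    # Hypothetical address and prefix: 192.168.10.37 in a /24 network
    iface = ipaddress.ip_interface("192.168.10.37/24")
    print(iface.network)            # 192.168.10.0/24 -> network part
    print(iface.ip)                 # 192.168.10.37
    print(int(iface.ip) & 0xFF)     # 37 -> host part within the /24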
Characteristics of IPv4
 IPv4 uses 32-bit IP addresses.
 IPv4 is a numeric address whose parts are separated by dots.
 The header has twelve fields, with a minimum length of twenty bytes.
 It has unicast, broadcast, and multicast styles of addresses.
 IPv4 supports VLSM (Variable Length Subnet Mask).
 IPv4 uses the Address Resolution Protocol (ARP) to map IP addresses to MAC addresses.
 RIP is a routing protocol supported by the routed daemon.
 Networks must be configured either manually or with DHCP.
 Packet fragmentation is performed by routers and sending hosts.
Advantages of IPv4
 IPv4 security permits encryption to maintain privacy and security.
 IPv4 network allocation is significant, with more than 85,000 practical routers at present.
 It is easy to attach multiple devices across a large network without NAT.
 It is a mature communication model, providing quality of service as well as economical data transfer.
 IPv4 addresses are well defined and permit straightforward encoding.
 Routing is scalable and economical because addressing is aggregated effectively.
 Data communication across the network is well suited to multicast organizations.
Disadvantages of IPv4
o It limits Internet growth for existing users and hinders the use of the Internet for new users.
o Internet routing is inefficient in IPv4.
o IPv4 has high system-management costs; management is labor-intensive, complex, slow, and prone to errors.
o Security features are optional.
o It is difficult to add support for future needs, because the overhead of adding anything on top of IPv4 is very high, which hinders the ability to connect everything over IP.
Limitations of IPv4
 IP relies on network layer addresses to identify end-points on the
network, and each network has a unique IP address.
 The world’s supply of unique IP addresses is dwindling, and they might
eventually run out theoretically.
 If there are many hosts, we need IP addresses from the next class.
 Complex host and routing configuration, non-hierarchical addressing, difficulty in renumbering addresses, large routing tables, and non-trivial implementation of security, QoS (Quality of Service), mobility, multi-homing, multicasting, etc. are big limitations of IPv4; that is why IPv6 came into the picture.
IPv6
The most common version of the Internet Protocol currently in use, IPv4, will
soon be replaced by IPv6, a new version of the protocol. The well-known IPv6
protocol is being used and deployed more often, especially in mobile phone
markets. IP address determines who and where you are in the network of
billions of digital devices that are connected to the Internet.
IPv6, or Internet Protocol Version 6, is a network layer protocol that allows communication to take place over the network. IPv6 was designed by the Internet Engineering Task Force (IETF) in December 1998 with the purpose of superseding IPv4, due to the exponential global growth of Internet users.
What is IPv6?
The next generation Internet Protocol (IP) address standard, known as IPv6, is
meant to work in tandem with IPv4, which is still in widespread use today,
and eventually replace it. To communicate with other devices, a computer,
smartphone, home automation component, Internet of Things sensor, or any
other Internet-connected device needs a numerical IP address. Because so
many connected devices are being used, the original IP address scheme,
known as IPv4, is running out of addresses.
What is IPv4?
The common type of IP address (is known as IPv4, for “version 4”). Here’s an
example of what an IP address might look like:
25.59.209.224
An IPv4 address consists of four numbers, each of which contains one to
three digits, with a single dot (.) separating each number or set of digits. This
group of separated numbers creates the addresses that let you and everyone
around the globe to send and retrieve data over our Internet connections.
IPv4 uses a 32-bit address scheme allowing 2^32 addresses, which is more than 4 billion addresses. To date, it is considered the primary Internet Protocol and carries 94% of Internet traffic. Initially it was assumed that the addresses would never run out, but the present situation paves the way to IPv6. An IPv6 address consists of eight groups of four hexadecimal digits. Here's an example IPv6 address:
3001:0da8:75a3:0000:0000:8a2e:0370:7334
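Python's ipaddress module shows the usual compressed form of the example address above (leading zeros dropped and the longest zero run replaced by "::"):

    import ipaddress

    addr = ipaddress.ip_address("3001:0da8:75a3:0000:0000:8a2e:0370:7334")
    print(addr.compressed)   # 3001:da8:75a3::8a2e:370:7334
    print(addr.exploded)     # 3001:0da8:75a3:0000:0000:8a2e:0370:7334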
IPv6 vs IPv4
This new IP address version is being deployed to fulfil the need for more
Internet addresses. With 128-bit address space, it allows 340 undecillion
unique address space.
IPv6 supports a theoretical maximum of 340,282,366,920,938,463,463,374,607,431,768,211,456 addresses. To keep it straightforward: we will never run out of IP addresses again.
The next iteration of the IP standard is known as Internet Protocol version 6
(IPv6). Although IPv4 and IPv6 will coexist for a while, IPv6 is meant to work
in tandem with IPv4 before eventually taking its place. We need to
implement IPv6 in order to proceed and keep bringing new gadgets and
services to the Internet. We can only move forward with an innovative and
open Internet if we implement it, which was created with the needs of a
global commercial Internet in mind.
1. Address Resolution Protocol (ARP) –
Address Resolution Protocol is a communication protocol used for discovering the physical address associated with a given network address. Typically, ARP is a network-layer-to-data-link-layer mapping process, used to discover the MAC address for a given Internet Protocol address. In order to send data to a destination, having the IP address is necessary but not sufficient; we also need the physical address of the destination machine. ARP is used to get the physical address (MAC address) of the destination machine.
Before sending the IP packet, the MAC address of the destination must be known. If it is not, the sender broadcasts an ARP-discovery packet requesting the MAC address of the intended destination. Since ARP-discovery is broadcast, every host inside that network gets this message, but the packet is discarded by everyone except the intended receiver, whose IP address matches. That receiver then sends a unicast packet with its MAC address (ARP-reply) to the sender of the ARP-discovery packet. After the original sender receives the ARP-reply, it updates its ARP cache and starts sending unicast messages to the destination.
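A toy model of this cache-then-broadcast behaviour (addresses are hypothetical; real ARP lives in the OS kernel):

    arp_cache = {}   # IP -> MAC, filled in as ARP replies arrive

    def resolve(ip: str, broadcast_arp_request) -> str:
        """Return the MAC for ip, consulting the cache before broadcasting."""
        if ip not in arp_cache:
            arp_cache[ip] = broadcast_arp_request(ip)  # ARP-discovery + reply
        return arp_cache[ip]

    # Stand-in for the broadcast / unicast-reply exchange on the LAN
    fake_lan = {"192.168.1.20": "aa:bb:cc:dd:ee:ff"}
    print(resolve("192.168.1.20", lambda ip: fake_lan[ip]))  # broadcasts once
    print(resolve("192.168.1.20", lambda ip: fake_lan[ip]))  # served from cache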
2. Reverse Address Resolution Protocol (RARP) –
Reverse ARP is a networking protocol used by a client machine in a local area
network to request its Internet Protocol address (IPv4) from the gateway-
router’s ARP table. The network administrator creates a table in gateway-
router, which is used to map the MAC address to corresponding IP address.
When a new machine is set up, or a machine that has no memory to store its IP address needs an IP address for its own use, the machine sends a RARP broadcast packet which contains its own MAC address in both the sender and receiver hardware address fields.
A special host configured inside the local area network, called a RARP server, is responsible for replying to these kinds of broadcast packets. The RARP server attempts to find a matching entry in the IP-to-MAC address mapping table. If an entry matches, the RARP server sends the response packet, along with the IP address, to the requesting device.
 LAN technologies like Ethernet, Ethernet II, Token Ring and Fiber
Distributed Data Interface (FDDI) support the Address Resolution
Protocol.
 RARP is not used in today's networks, because more capable protocols like BOOTP (Bootstrap Protocol) and DHCP (Dynamic Host Configuration Protocol) have replaced it.
3. Inverse Address Resolution Protocol (InARP) –
Instead of using Layer-3 address (IP address) to find MAC address, Inverse
ARP uses MAC address to find IP address. As the name suggests, InARP is just
inverse of ARP. Reverse ARP has been replaced by BOOTP and later DHCP but
Inverse ARP is solely used for device configuration. Inverse ARP is enabled by
default in ATM(Asynchronous Transfer Mode) networks. InARP is used to find
Layer-3 address from Layer-2 address (DLCI in frame relay). Inverse ARP
dynamically maps local DLCIs to remote IP addresses when you configure
Frame Relay. When using inverse ARP, we know the DLCI of remote router but
don’t know its IP address. InARP sends a request to obtain that IP address
and map it to the Layer-2 frame-relay DLCI.
4. Proxy ARP –
Proxy ARP was implemented to enable devices which are separated into
network segments connected by a router in the same IP network or sub-
network to resolve IP address to MAC addresses. When devices are not in
same data link layer network but are in the same IP network, they try to
transmit data to each other as if they were on the local network. However,
the router that separates the devices will not send a broadcast message
because routers do not pass hardware-layer broadcasts. Therefore, the
addresses cannot be resolved. Proxy ARP is enabled by default, so the "proxy router" that resides between the local networks responds with its own MAC address as if it were the router to which the broadcast is addressed. When the sending device receives the MAC address of the proxy router, it sends the datagram to the proxy router, which in turn sends the datagram to the designated device.
5. Gratuitous ARP –
Gratuitous Address Resolution Protocol is used in advanced network scenarios. It is typically performed by a computer while booting up: when the computer boots (the Network Interface Card is powered) for the first time, it automatically broadcasts its MAC address to the entire network. After a gratuitous ARP, the computer's MAC address is known to every switch, and DHCP servers know where to send the IP address if requested. Gratuitous ARP can refer to both a gratuitous ARP request and a gratuitous ARP reply, though both are not needed in all cases. A gratuitous ARP request is a packet in which the source and destination IP are both set to the IP of the machine issuing the packet and the destination MAC is the broadcast address ff:ff:ff:ff:ff:ff; no reply packet will occur. A gratuitous ARP reply is an ARP reply that was not prompted by an ARP request. Gratuitous ARP is useful for detecting IP conflicts, and it is also used to update ARP mapping tables and switch-port MAC address tables.
DHCP helps in managing the entire process automatically and centrally. DHCP
helps in maintaining a unique IP Address for a host using the server. DHCP
servers maintain information on TCP/IP configuration and provide
configuration of address to DHCP-enabled clients in the form of a lease offer.
Components of DHCP
The main components of DHCP include:
 DHCP Server: DHCP Server is a server that holds IP Addresses and other
information related to configuration.
 DHCP Client: It is a device that receives configuration information from
the server. It can be a mobile, laptop, computer, or any other electronic
device that requires a connection.
 DHCP Relay: DHCP relays basically work as a communication channel
between DHCP Client and Server.
 IP Address Pool: It is the pool or container of IP Addresses possessed by
the DHCP Server. It has a range of addresses that can be allocated to
devices.
 Subnets: Subnets are smaller portions of the IP network partitioned to
keep networks under control.
 Lease: The length of time for which the information received from the server is valid; when the lease expires, the client must renew it.
 DNS Servers: DHCP servers can also provide DNS (Domain Name
System) server information to DHCP clients, allowing them to resolve
domain names to IP addresses.
 Default Gateway: DHCP servers can also provide information about the
default gateway, which is the device that packets are sent to when the
destination is outside the local network.
 Options: DHCP servers can provide additional configuration options to
clients, such as the subnet mask, domain name, and time server
information.
 Renewal: DHCP clients can request to renew their lease before it
expires to ensure that they continue to have a valid IP address and
configuration information.
 Failover: DHCP servers can be configured for failover, where two
servers work together to provide redundancy and ensure that clients
can always obtain an IP address and configuration information, even if
one server goes down.
 Dynamic Updates: DHCP servers can also be configured to dynamically
update DNS records with the IP address of DHCP clients, allowing for
easier management of network resources.
 Audit Logging: DHCP servers can keep audit logs of all DHCP
transactions, providing administrators with visibility into which devices
are using which IP addresses and when leases are being assigned or
renewed.
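The lease handshake behind these components is the standard DORA exchange (DHCPDISCOVER, DHCPOFFER, DHCPREQUEST, DHCPACK). A toy sequence, with a hypothetical address pool and a deliberately simplified server:

    class TinyDhcpServer:
        """Toy DHCP server: hands out leases from a small pool (DORA steps)."""
        def __init__(self, pool):
            self.free = list(pool)     # IP address pool
            self.leases = {}           # client MAC -> offered/leased IP

        def discover(self, mac):       # DHCPDISCOVER -> DHCPOFFER
            self.leases[mac] = self.free.pop(0)
            return ("OFFER", self.leases[mac])

        def request(self, mac, ip):    # DHCPREQUEST -> DHCPACK / DHCPNAK
            return ("ACK", ip) if self.leases.get(mac) == ip else ("NAK", None)

    server = TinyDhcpServer(["10.0.0.10", "10.0.0.11"])
    _, offered = server.discover("aa:bb:cc:dd:ee:ff")
    print(server.request("aa:bb:cc:dd:ee:ff", offered))   # ('ACK', '10.0.0.10')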
DHCP Packet Format

 Hardware Length: This is an 8-bit field defining the length of the physical address in bytes; e.g., for Ethernet the value is 6.
 Hop count: This is an 8-bit field defining the maximum number of hops
the packet can travel.
 Transaction ID: This is a 4-byte field carrying an integer. The transaction identification is set by the client and is used to match a reply with the request. The server returns the same value in its reply.
 Number of Seconds: This is a 16-bit field that indicates the number of
seconds elapsed since the time the client started to boot.
 Flag: This is a 16-bit field in which only the leftmost bit is used; the rest of the bits should be set to 0s. The leftmost bit specifies a forced broadcast reply from the server. If the reply were instead unicast to the client, the destination IP address of the IP packet would be the address assigned to the client.
 Client IP Address: This is a 4-byte field that contains the client IP
address . If the client does not have this information this field has a
value of 0.
 Your IP Address: This is a 4-byte field that contains the client IP
address. It is filled by the server at the request of the client.
 Server IP Address: This is a 4-byte field containing the server IP
address. It is filled by the server in a reply message.
 Gateway IP Address: This is a 4-byte field containing the IP address of a router. It is filled by the server in a reply message.
 Client Hardware Address: This is the physical address of the client. Although the server can retrieve this address from the frame sent by the client, it is more efficient if the address is supplied explicitly by the client in the request message.
 Server Name: This is a 64-byte field that is optionally filled by the server in a reply packet. It contains a null-terminated string consisting of the domain name of the server. If the server does not want to fill this field with data, it must fill it with all 0s.
 Boot Filename: This is a 128-byte field that can be optionally filled by
the server in a reply packet. It contains a null- terminated string
consisting of the full pathname of the boot file. The client can use this
path to retrieve other booting information. If the server does not want
to fill this field with data, the server must fill it with all 0s.
 Options: This is a 64-byte field with a dual purpose: it can carry either additional information or some vendor-specific information. The field is used only in a reply message. The server uses a number, called a magic cookie (the value 99.130.83.99 in IP-address format), at the start of this field to indicate that options are present.
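A sketch of decoding the fixed fields with Python's struct module. Note an assumption: the notes' field list starts at Hardware Length, but the standard BOOTP/DHCP header begins with one-byte opcode and hardware-type fields, so the layout below follows the standard wire format:

    import struct

    # Fixed BOOTP/DHCP header: op, htype, hlen, hops, xid, secs, flags,
    # ciaddr, yiaddr, siaddr, giaddr, chaddr(16), sname(64), file(128)
    FMT = "!BBBBIHH4s4s4s4s16s64s128s"

    def parse_dhcp(packet: bytes):
        fields = struct.unpack(FMT, packet[:struct.calcsize(FMT)])
        names = ("op", "htype", "hlen", "hops", "xid", "secs", "flags",
                 "ciaddr", "yiaddr", "siaddr", "giaddr", "chaddr",
                 "sname", "file")
        return dict(zip(names, fields))

    # A zeroed dummy packet just to show the call shape
    dummy = bytes(struct.calcsize(FMT))
    print(parse_dhcp(dummy)["hlen"])   # 0 in the dummy; 6 for Ethernet in practice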
IPv4 is a connectionless protocol used for packet-switched networks. Internet Protocol Version 4 (IPv4) is the fourth revision of the Internet Protocol and a widely used protocol in data communication over different kinds of networks. IPv4 is a connectionless protocol used in packet-switched layer networks, such as Ethernet. It provides a logical connection between network devices by providing identification for each device. There are many ways to configure IPv4 with all kinds of devices, including manual and automatic configurations, depending on the network type. IPv4 uses 32-bit addresses for Ethernet communication in five classes: A, B, C, D, and E. Classes A, B, and C have different bit lengths for addressing the network and host. Class D addresses are reserved for multicasting, while class E addresses are reserved for military purposes. IPv4's 32-bit (4-byte) addressing gives 2^32 addresses. IPv4 addresses are written in dot-decimal notation, which comprises four octets of the address expressed individually in decimal and separated by periods, for instance 192.168.1.5.
IPv4 Datagram Header
 VERSION: Version of the IP protocol (4 bits), which is 4 for IPv4
 HLEN: IP header length (4 bits), which is the number of 32 bit words in
the header. The minimum value for this field is 5 and the maximum is
15.
 Type of service: Low Delay, High Throughput, Reliability (8 bits)
 Total Length: Length of header + Data (16 bits), which has a minimum
value 20 bytes and the maximum is 65,535 bytes.
 Identification: Unique Packet Id for identifying the group of fragments
of a single IP datagram (16 bits)
 Flags: 3 flags of 1 bit each : reserved bit (must be zero), do not
fragment flag, more fragments flag (same order)
 Fragment Offset: Represents the number of data bytes ahead of this
particular fragment in the datagram. Specified in units of
8 bytes, it has a maximum value of 65,528 bytes.
 Time to live: Datagram’s lifetime (8 bits). It prevents the datagram from
looping through the network by restricting the number of hops taken by a
packet before it is delivered to the destination.
 Protocol: Name of the protocol to which the data is to be passed (8
bits)
 Header Checksum: 16 bits header checksum for checking errors in the
datagram header
 Source IP address: 32 bits IP address of the sender
 Destination IP address: 32 bits IP address of the receiver
 Option: Optional information such as source route, record route. Used
by the Network administrator to check whether a path is working or
not.
Due to the presence of options, the size of the datagram header can be of
variable length (20 bytes to 60 bytes).
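To make the layout above concrete, the following Python sketch unpacks the fixed 20-byte portion of an IPv4 header with the standard struct module (the sample header bytes are invented for illustration, and a real checksum is omitted):

import struct

def parse_ipv4_header(raw):
    # Fixed 20-byte portion: version/IHL, TOS, total length, identification,
    # flags+fragment offset, TTL, protocol, checksum, source, destination.
    (ver_ihl, tos, total_len, ident, flags_frag,
     ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", raw[:20])
    return {
        "version": ver_ihl >> 4,                      # upper 4 bits
        "header_bytes": (ver_ihl & 0x0F) * 4,         # HLEN is in 32-bit words
        "total_length": total_len,
        "flags": flags_frag >> 13,                    # reserved, DF, MF
        "fragment_offset": (flags_frag & 0x1FFF) * 8, # units of 8 bytes
        "ttl": ttl,
        "protocol": proto,
    }

# Invented sample: version 4, HLEN 5, total length 40, DF set, TTL 64, TCP (6).
sample = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 40, 1, 0x4000, 64, 6, 0,
                     bytes([192, 168, 1, 5]), bytes([10, 0, 0, 1]))
print(parse_ipv4_header(sample))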
IPv6 Header Representation
The IPv6 header is the first part of an IPv6 packet, containing essential
information for routing and delivering the packet across networks. The IPv6
header representation is a structured layout of fields in an IPv6 packet,
including source and destination addresses, traffic class, flow label, payload
length, next header, and hop limit. It ensures proper routing and delivery of
data across networks.
IPv6 introduces a more simplified header structure compared to IPv4, and its
ability to handle a larger address space is essential for the future of
networking. Gaining a solid understanding of the IPv6 header is key to
mastering networking protocols.
IPv6 Fixed Header
The IPv6 header is a part of the information sent over the internet. It’s always
40 bytes long and includes details like where data should go and how it
should get there. This helps devices talk to each other and share information
smoothly online.
Version (4-bits)
The size of this field is 4-bit. Indicates the version of the Internet Protocol,
which is always 6 for IPv6, so the bit sequence is 0110.
Traffic Class(8-bit)
The Traffic Class field indicates class or priority of IPv6 packet which is similar
to Service Field in IPv4 packet. It helps routers to handle the traffic based on
the priority of the packet. If congestion occurs on the router then packets
with the least priority will be discarded.
As of now, only 4-bits are being used (and the remaining bits are under
research), in which 0 to 7 are assigned to Congestion controlled traffic and 8
to 15 are assigned to Uncontrolled traffic.
Priority assignment of Congestion controlled traffic (table omitted in source).
Uncontrolled data traffic is mainly used for Audio/Video data, so we give
higher priority to uncontrolled data traffic.
The source node is allowed to set the priorities but on the way, routers can
change it. Therefore, the destination should not expect the same priority
which was set by the source node.
Flow Label (20-bits)
Flow Label field is used by a source to label the packets belonging to the
same flow in order to request special handling by intermediate IPv6 routers,
such as non-default quality-of-service or real-time service. In order to
distinguish the flow, an intermediate router can use the source address, a
destination address, and flow label of the packets. Between a source and
destination, multiple flows may exist because many processes might be
running at the same time. Routers or hosts that do not support the
functionality of the flow label field, and default router handling, set the flow label
field to 0. While setting up the flow label, the source is also supposed to
specify the lifetime of the flow.
Payload Length (16-bits)
It is a 16-bit (unsigned integer) field that indicates the total size of
the payload, which tells routers the amount of information a particular
packet contains in its payload. The Payload Length field includes extension
headers (if any) and an upper-layer packet. In case the length of the payload is
greater than 65,535 bytes (payload up to 65,535 bytes can be indicated with
16-bits), then the payload length field will be set to 0 and the jumbo payload
option is used in the Hop-by-Hop options extension header.
Next Header (8-bits)
Next Header indicates the type of extension header(if present) immediately
following the IPv6 header. Whereas In some cases it indicates the protocols
contained within upper-layer packets, such as TCP, UDP.
Hop Limit (8-bits)
Hop Limit field is the same as TTL in IPv4 packets. It indicates the maximum
number of intermediate nodes an IPv6 packet is allowed to travel through. Its value is
decremented by one by each node that forwards the packet, and the packet
is discarded if the value reaches 0. This is used to discard packets
that are stuck in an infinite loop because of a routing error.
Source Address (128-bits)
Source Address is the 128-bit IPv6 address of the original source of the
packet.
Destination Address (128-bits)
The destination Address field indicates the IPv6 address of the final
destination(in most cases). All the intermediate nodes can use this
information in order to correctly route the packet.
Extension Headers
In order to rectify the limitations of the IPv4 Option Field, Extension Headers
are introduced in IP version 6. The extension header mechanism is a very
important part of the IPv6 architecture. The next Header field of IPv6 fixed
header points to the first Extension Header and this first extension header
points to the second extension header and so on.
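As a sketch of the fixed header described above (using Python's standard struct and socket modules; the addresses and field values below are arbitrary examples), the 40-byte header can be assembled field by field. Next Header value 59 means "no next header":

import socket
import struct

def build_ipv6_header(payload_len, next_header, hop_limit, src, dst,
                      traffic_class=0, flow_label=0):
    # First 32 bits: Version (4) | Traffic Class (8) | Flow Label (20).
    first_word = (6 << 28) | (traffic_class << 20) | flow_label
    return struct.pack("!IHBB16s16s", first_word, payload_len,
                       next_header, hop_limit,
                       socket.inet_pton(socket.AF_INET6, src),
                       socket.inet_pton(socket.AF_INET6, dst))

hdr = build_ipv6_header(0, 59, 64, "2001:db8::1", "2001:db8::2")
print(len(hdr))  # 40 -- the IPv6 fixed header is always 40 bytes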
ICMP is used for reporting errors and management queries. It is a supporting
protocol and is used by network devices like routers for sending error
messages and operations information. For example, the requested service is
not available, or a host or router could not be reached.
Since the IP protocol lacks an error-reporting or error-correcting mechanism,
information is communicated via a message. For instance, when a message is
sent to its intended recipient, it may be intercepted along the route from the
sender. The sender may believe that the communication has reached its
destination if no one reports the problem. If a middleman reports the
mistake, ICMP helps in notifying the sender about the issue. For example, if a
message can’t reach its destination, if there’s network congestion, or if
packets are lost, ICMP sends back feedback about these issues. This feedback
is essential for diagnosing and fixing network problems, making sure that
communication can be adjusted or rerouted to keep everything running
smoothly.
Uses of ICMP
ICMP is used for error reporting: if two devices connect over the internet and
some error occurs, the router sends an ICMP error message to the source
informing it about the error. For example, whenever a device sends a
message that is too large for the receiver, the receiver drops the message
and replies with an ICMP message to the source.
Another important use of ICMP protocol is used to perform network
diagnosis by making use of traceroute and ping utility.
Traceroute: The traceroute utility is used to discover the route between two devices
connected over the internet. It traces the journey from one router to
another, and a traceroute is performed to check network issues before data
transfer.
Ping: Ping is a simpler kind of traceroute known as the echo-request message.
It is used to measure the time taken by data to reach the destination and
return to the source; these replies are known as echo-reply messages.
The network layer is a part of the communication process in computer
networks. Its main job is to move data packets between different networks. It
helps route these packets from the sender to the receiver across multiple
paths and networks. Network-to-network connections enable the Internet to
function. These connections happen at the “network layer,” which sends data
packets between different networks. In the 7-layer OSI model, the network
layer is layer 3. The Internet Protocol (IP) is a key protocol used at this layer,
along with other protocols for routing, testing, and encryption.
The network layer is responsible for packetizing, routing, and forwarding data
across networks. Each of these functions plays a crucial role in ensuring
efficient communication between devices.
Features of Network Layer
 The main responsibility of the Network layer is to carry the data
packets from the source to the destination without changing or using
them.
 If the packets are too large for delivery, they are fragmented i.e.,
broken down into smaller packets.
 It decides the route to be taken by the packets to travel from the
source to the destination among the multiple routes available in a
network (also called routing).
 The source and destination addresses are added to the data packets
inside the network layer.
Services Offered by Network Layer
The services which are offered by the network layer protocol are as follows:
 Packetizing
 Routing
 Forwarding
1. Packetizing
The process of encapsulating the data received from the upper layers of the
network (also called payload) in a network layer packet at the source and
decapsulating the payload from the network layer packet at the destination is
known as packetizing.
The source host adds a header that contains the source and destination
address and some other relevant information required by the network layer
protocol to the payload received from the upper layer protocol and delivers
the packet to the data link layer.
The destination host receives the network layer packet from its data link
layer, decapsulates the packet, and delivers the payload to the corresponding
upper layer protocol. The routers in the path are not allowed to change either
the source or the destination address. The routers in the path are not allowed
to decapsulate the packets they receive unless they need to be fragmented.
Packetizing
2. Routing
Routing is the process of moving data from one device to another. Routing
and forwarding are two further services offered by the network layer. In a network,
there are a number of routes available from the source to the destination.
The network layer specifies some strategies which find out the best possible
route. This process is referred to as routing. There are a number of routing
protocols that are used in this process and they should be run to help the
routers coordinate with each other and help in establishing communication
throughout the network.
Routing
3. Forwarding
Forwarding is simply defined as the action applied by each router when a
packet arrives at one of its interfaces. When a router receives a packet from
one of its attached networks, it needs to forward the packet to another
attached network (unicast routing) or to some attached networks (in the case
of multicast routing). Routers are used on the network for forwarding a
packet from the local network to the remote network. So, the process of
routing involves packet forwarding from an entry interface out to an exit
interface.
In this section, we shall discuss how Intra-domain Routing is different
from Inter-domain Routing. Intradomain routing is any protocol in which the routing
algorithm works only within a domain; interdomain routing is any
protocol in which the routing algorithm works both within and between domains. Let
us see the differences between Intradomain and Interdomain routing:
S.No | Intradomain Routing | Interdomain Routing
1 | Routing algorithm works only within domains. | Routing algorithm works within and between domains.
2 | It needs to know only about other routers within its domain. | It needs to know about other routers within and between domains.
3 | Protocols used in intradomain routing are known as Interior Gateway Protocols. | Protocols used in interdomain routing are known as Exterior Gateway Protocols.
4 | In this routing, routing takes place within an autonomous system. | In this routing, routing takes place between autonomous systems.
5 | Intradomain routing protocols ignore the internet outside the AS (autonomous system). | Interdomain routing protocols assume that the internet consists of a collection of interconnected AS (autonomous systems).
6 | Some popular protocols of this routing are RIP (Routing Information Protocol) and OSPF (Open Shortest Path First). | A popular protocol of this routing is BGP (Border Gateway Protocol), used to connect two or more AS (autonomous systems).
Routing is the process of establishing the routes that data packets must
follow to reach the destination. In this process, a routing table is created
which contains information regarding routes that data packets follow. Various
routing algorithms are used for the purpose of deciding which route an
incoming data packet needs to be transmitted on to reach the destination
efficiently.
Shortest path routing refers to the algorithms that help to find the shortest path between a
sender and receiver for routing the data packets through the network in
terms of shortest distance, minimum cost, and minimum time.
 It is mainly for building a graph or subnet containing routers as nodes
and edges as communication lines connecting the nodes.
 Hop count is one of the parameters that is used to measure the
distance.
 Hop count: It is the number that indicates how many routers are
covered. If the hop count is 6, there are 6 routers/nodes and the edges
connecting them.
 Another metric is a geographic distance like kilometers.
 We can find the label on the arc as the function of bandwidth, average
traffic, distance, communication cost, measured delay, mean queue
length, etc.
Common Shortest Path Algorithms
 Dijkstra’s Algorithm
 Bellman Ford’s Algorithm
 Floyd Warshall’s Algorithm
Dijkstra’s Algorithm
The Dijkstra’s Algorithm is a greedy algorithm that is used to find the
minimum distance between a node and all other nodes in a given graph. Here
we can consider node as a router and graph as a network. It uses weight of
edge .ie, distance between the nodes to find a minimum distance route.
Algorithm:
1: Mark the source node current distance as 0 and all others as infinity.
2: Set the node with the smallest current distance among the non-visited
nodes as the current node.
3: For each neighbor, N, of the current node:
 Calculate the potential new distance by adding the current distance of
the current node with the weight of the edge connecting the current
node to N.
 If the potential new distance is smaller than the current distance of
node N, update N's current distance with the new distance.
4: Mark the current node as visited.
5: If any unvisited node remains, go to step 2 to pick the unvisited node with
the smallest current distance and continue this process.
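The steps above translate almost line for line into code. Below is a minimal Python sketch using a priority queue; the adjacency list g is an invented example, with nodes standing in for routers and edge weights for distances:

import heapq

def dijkstra(graph, source):
    # graph: {node: [(neighbor, edge_weight), ...]}
    dist = {node: float("inf") for node in graph}  # step 1
    dist[source] = 0
    visited = set()
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)        # step 2: smallest current distance
        if u in visited:
            continue
        visited.add(u)                  # step 4: mark the node as visited
        for v, w in graph[u]:           # step 3: relax each neighbor
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

g = {0: [(1, 4), (2, 1)], 1: [(3, 1)], 2: [(1, 2), (3, 5)], 3: []}
print(dijkstra(g, 0))  # {0: 0, 1: 3, 2: 1, 3: 4}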
Example:
Consider the graph G (step-by-step figures omitted in source). Starting from node 0,
we first relax its nearest neighbours, nodes 1 and 2. We then repeatedly pick the
unvisited node with the smallest current distance, relax its neighbours while keeping
track of visited nodes (so no cycle is formed), and mark it as visited, until every
node has been processed.
Bellman-Ford’s Algorithm
The Bellman-Ford algorithm is a single-source shortest path algorithm that
helps us find the shortest path between a source vertex and any other
vertex in a given graph. We can use it in both weighted and unweighted
graphs. This algorithm is slower than Dijkstra's algorithm, but it can also handle
negative edge weights.
Algorithm
1: First we initialize all vertices v in a distance array dist[] as INFINITY.
2: Then we pick the source vertex as vertex 0 and assign dist[0] = 0.
3: Then iteratively update the minimum distance to each node (dist[v]) by
comparing it with the sum of the distance from the source node (dist[u]) and
the edge weight (weight), N-1 times.
4: To identify the presence of negative edge cycles, do one more round of edge
relaxation, with the help of the following cases:
 We can say that a negative cycle exists if, for any edge uv, the sum of the
distance from the source node (dist[u]) and the edge weight (weight) is
less than the current distance to the target node (dist[v]).
 If none of the edges satisfies the case above, this indicates the absence of a
negative edge cycle.
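A minimal Python sketch of these steps follows (the edge list is invented; None is returned when a negative edge cycle is detected):

def bellman_ford(num_vertices, edges, source):
    # edges: list of (u, v, weight) tuples
    INF = float("inf")
    dist = [INF] * num_vertices            # step 1
    dist[source] = 0                       # step 2
    for _ in range(num_vertices - 1):      # step 3: relax all edges N-1 times
        for u, v, w in edges:
            if dist[u] != INF and dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    for u, v, w in edges:                  # step 4: one extra relaxation round
        if dist[u] != INF and dist[u] + w < dist[v]:
            return None  # a negative edge cycle exists
    return dist

edges = [(0, 1, 4), (0, 2, 1), (2, 1, -2), (1, 3, 3)]
print(bellman_ford(4, edges, 0))  # [0, -1, 1, 2]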
Flooding –
 Requires no network information like topology, load condition, cost of
diff. paths
 Every incoming packet to a node is sent out on every outgoing line
except the one it arrived on.
 For example, in the figure below (omitted in source):
o An incoming packet to (1) is sent out to (2),(3)
o from (2) is sent to (6),(4), and from (3) it is sent to (4),(5)
o from (4) it is sent to (6),(5),(3), from (6) it is sent to (2),(4),(5),
from (5) it is sent to (4),(3)
Characteristics –
 All possible routes between Source and Destination are tried. A packet
will always get through if the path exists
 As all routes are tried, there will be at least one route which is the
shortest
 All nodes directly or indirectly connected are visited
Limitations –
 Flooding generates a vast number of duplicate packets
 Suitable damping mechanism must be used
Hop-Count –
 A hop counter may be contained in the packet header, which is
decremented at each hop, with the packet being discarded when the counter becomes zero.
 The sender initializes the hop counter. If no estimate is known, it is set
to the full diameter of the subnet.
 Keep track of the packets that are responsible for flooding using a
sequence number, and avoid sending them out a second time.
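A toy simulation of hop-count-limited flooding (the topology and names are invented; real routers would also apply the sequence-number check above):

def flood(graph, node, packet, hops_left, arrived_from=None):
    print(f"node {node} received {packet} (hops left: {hops_left})")
    if hops_left == 0:
        return  # hop counter exhausted: discard instead of forwarding
    for neighbor in graph[node]:
        if neighbor != arrived_from:  # every line except the one it arrived on
            flood(graph, neighbor, packet, hops_left - 1, arrived_from=node)

g = {1: [2, 3], 2: [1, 4], 3: [1, 4], 4: [2, 3]}
flood(g, 1, "pkt", hops_left=2)

Running this shows node 4 receiving the packet twice, which is exactly the duplicate-packet problem that sequence numbers are meant to damp.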
Selective Flooding: Routers do not send every incoming packet out on every
line, only on those lines that go approximately in the direction of the
destination.
Advantages of Flooding :
 Highly robust; emergency or immediate messages can be sent (e.g.,
military applications)
 Can be used to set up the route in a virtual circuit
 Because all routes are tried, flooding always finds the shortest path
 Broadcasts messages to all the nodes
Disadvantages of Flooding :
 Network congestion: Flooding can cause a significant amount of traffic
in the network, leading to congestion. This can result in slower network
speeds and delays in delivering data packets.
 Wastage of network resources: Flooding uses a lot of network
resources, including bandwidth and processing power, to deliver
packets. This can result in the wastage of valuable network resources
and reduce the overall efficiency of the network.
 Security risks: Flooding can be used as a tool for launching various
types of attacks, including denial of service (DoS) attacks. Attackers can
flood the network with data packets, which can overload the network
and cause it to crash.
 Inefficient use of energy: Flooding can result in an inefficient use of
energy in wireless networks. Since all nodes receive every packet, even
if they are not the intended recipient, they will still need to process it,
which can waste energy and reduce the overall battery life of mobile
devices.
 Difficulty in network troubleshooting: Flooding can make it difficult to
troubleshoot network issues. Since packets are sent to all nodes, it can
be challenging to isolate the cause of a problem when it arises.
Distance Vector Routing (DVR) Protocol is a method used by routers to find
the best path for data to travel across a network. Each router keeps a table
that shows the shortest distance to every other router, based on the number
of hops (or steps) needed to reach them. Routers share this information with
their neighbors, allowing them to update their tables and find the most
efficient routes. This protocol helps ensure that data moves quickly and
smoothly through the network.
 A router transmits its distance vector to each of its neighbors in a
routing packet.
 Each router receives and saves the most recently received distance
vector from each of its neighbors.
 A router recalculates its distance vector when:
o It receives a distance vector from a neighbor containing different
information than before.
o It discovers that a link to a neighbor has gone down.
The DV calculation is based on minimizing the cost to each destination
Dx(y) = estimate of the least cost from x to y
C(x,v) = node x knows the cost to each neighbor v
Dx = [Dx(y): y ∈ N] = node x maintains its distance vector
Node x also maintains its neighbors' distance vectors
– For each neighbor v, x maintains Dv = [Dv(y): y ∈ N]
Note:
 From time to time, each node sends its own distance vector estimate
to its neighbors.
 When a node x receives a new DV estimate from any neighbor v, it saves
v’s distance vector and updates its own DV using the Bellman-Ford equation:
Dx(y) = min { C(x,v) + Dv(y), Dx(y) } for each node y ∈ N
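A single update step at node x can be sketched as follows (a minimal Python illustration of the equation above; the link costs and advertised vectors are an invented three-node example):

def dv_update(x, cost_to_neighbor, neighbor_vectors, all_nodes):
    # Dx(y) = min over neighbors v of { C(x,v) + Dv(y) }
    dx = {x: 0}
    for y in all_nodes:
        if y == x:
            continue
        dx[y] = min(cost_to_neighbor[v] + neighbor_vectors[v].get(y, float("inf"))
                    for v in cost_to_neighbor)
    return dx

# x has neighbors y (link cost 2) and z (link cost 7); their advertised vectors:
cost = {"y": 2, "z": 7}
vectors = {"y": {"x": 2, "y": 0, "z": 1}, "z": {"x": 7, "y": 1, "z": 0}}
print(dv_update("x", cost, vectors, ["x", "y", "z"]))  # {'x': 0, 'y': 2, 'z': 3}

Note how the cost from x to z comes out as 3 (via y) rather than the direct link cost of 7 — the same effect the X, Y, Z example below illustrates.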
Example :
Consider three routers X, Y and Z as shown in the figure (omitted in source). Each
router has its own routing table, and every routing table contains the distance to the
destination nodes.
Consider router X: X will share its routing table with its neighbors, and its neighbors
will share their routing tables with X. The distance from node X to each destination
is then calculated using the Bellman-Ford equation:
Dx(y) = min { C(x,v) + Dv(y) } for each node y ∈ N
As we can see, the distance from X to Z is smaller when Y is the
intermediate node (hop), so it is updated in routing table X.
Similarly for Z (figure omitted in source).
Finally, the routing tables of all routers are updated (figure omitted in source).
Applications of Distance Vector Routing Algorithm
The Distance Vector Routing Algorithm has several uses:
 Computer Networking : It helps route data packets in networks.
 Telephone Systems : It’s used in some telephone switching systems.
 Military Applications : It has been used to route missiles.
Advantages of Distance Vector routing
 Shortest Path : Distance Vector Routing finds the shortest path for data
to travel in a network.
 Usage : It is used in local, metropolitan, and wide-area networks.
 Easy Implementation : The method is simple to set up and doesn’t
require many resources.
Disadvantages of Distance Vector Routing Algorithm
 It is slower to converge than link state.
 It is at risk from the count-to-infinity problem.
 It creates more traffic than link state since a hop count change must be
propagated to all routers and processed on each router. Hop count
updates take place on a periodic basis, even if there are no changes in
the network topology, so bandwidth-wasting broadcasts still occur.
 For larger networks, distance vector routing results in larger routing
tables than link state since each router must know about all other
routers. This can also lead to congestion on WAN links.
Link State Routing
Link state routing is the second family of routing protocols. While distance-
vector routers use a distributed algorithm to compute their routing tables,
link-state routing uses link-state routers to exchange messages that allow
each router to learn the entire network topology. Based on this learned
topology, each router is then able to compute its routing table by using the
shortest path computation.
Link state routing is a popular algorithm used in unicast routing to determine
the shortest path in a network. Understanding how link state protocols work
is key to mastering routing algorithms.
Link state routing is a technique in which each router shares the knowledge
of its neighborhood with every other router in the internetwork. There are three
keys to understanding the link state routing algorithm:
1. Knowledge about the neighborhood : Instead of sending its routing
table, a router sends the information about its neighborhood only. A
router broadcast its identities and cost of the directly attached links to
other routers.
2. Flooding: Each router sends its information to every other router on
the internetwork, not only to its neighbors. This process is known as
flooding: every router that receives the packet sends copies to all
its neighbors, and finally each and every router receives a copy of the
same information.
3. Information Sharing : A router sends the information to every other
router only when a change occurs in the information.
Link state routing has two phase:
1. Reliable Flooding: Initial state – Each node knows the cost of its
neighbors. Final state- Each node knows the entire graph.
2. Route Calculation : Each node uses Dijkstra’s algorithm on the graph to
calculate the optimal routes to all nodes. The link state routing
algorithm is also known as Dijkstra’s algorithm, which is used to find the
shortest path from one node to every other node in the network.
Features of Link State Routing Protocols
 Link State Packet: A small packet that contains routing information.
 Link-State Database: A collection of information gathered from the link-
state packet.
 Shortest Path First Algorithm (Dijkstra algorithm): A calculation
performed on the database results in the shortest path
 Routing Table: A list of known paths and interfaces.
Calculation of Shortest Path
To find the shortest path, each node needs to run the famous Dijkstra
algorithm. Let us understand how can we find the shortest path using an
example.
Illustration
To understand the Dijkstra Algorithm, let’s take a graph and find the shortest
path from the source to all nodes.
Note: We use a boolean array sptSet[] to represent the set of vertices
included in SPT. If a value sptSet[v] is true, then vertex v is included in SPT,
otherwise not. Array dist[] is used to store the shortest distance values of all
vertices.
Consider the below graph and src = 0.
Shortest Path Calculation – Step 1
STEP 1: The set sptSet is initially empty and distances assigned to vertices are
{0, INF, INF, INF, INF, INF, INF, INF} where INF indicates infinite. Now pick the
vertex with a minimum distance value. The vertex 0 is picked and included in
sptSet. So sptSet becomes {0}. After including 0 to sptSet, update the distance
values of its adjacent vertices. Adjacent vertices of 0 are 1 and 7. The distance
values of 1 and 7 are updated as 4 and 8.
The following subgraph shows vertices and their distance values. Vertices
included in SPT are included in GREEN color.
Shortest Path Calculation – Step 2
STEP 2: Pick the vertex with minimum distance value and not already
included in SPT (not in sptSET). The vertex 1 is picked and added to sptSet. So
sptSet now becomes {0, 1}. Update the distance values of adjacent vertices of
1. The distance value of vertex 2 becomes 12.
Shortest Path Calculation – Step 3
STEP 3: Pick the vertex with minimum distance value and not already
included in SPT (not in sptSET). Vertex 7 is picked. So sptSet now becomes {0,
1, 7}. Update the distance values of adjacent vertices of 7. The distance value
of vertex 6 and 8 becomes finite (15 and 9 respectively).
Shortest Path Calculation – Step 4
STEP 4: Pick the vertex with minimum distance value and not already
included in SPT (not in sptSET). Vertex 6 is picked. So sptSet now becomes {0,
1, 7, 6}. Update the distance values of adjacent vertices of 6. The distance
value of vertex 5 and 8 are updated.
Shortest Path Calculation – Step 5
We repeat the above steps until sptSet includes all vertices of the given
graph. Finally, we get the following Shortest Path Tree (SPT).
Shortest Path Calculation – Step 6
Characteristics of Link State Protocol
 It requires a large amount of memory.
 Shortest path computations require many CPU cycles.
 It uses little network bandwidth and reacts quickly to topology changes.
 All items in the database must be sent to neighbors to form link-state
packets.
 All neighbors must be trusted in the topology.
 Authentication mechanisms can be used to avoid undesired adjacencies
and problems.
 No split-horizon techniques are possible in link-state routing.
Protocols of Link State Routing
1. Open Shortest Path First (OSPF)
2. Intermediate System to Intermediate System (IS-IS)
Open Shortest Path First (OSPF): Open Shortest Path First (OSPF) is a unicast
routing protocol developed by a working group of the Internet Engineering
Task Force (IETF). It is an intradomain routing protocol whose specification is
openly published. It is similar to Routing Information Protocol (RIP). OSPF is a classless
routing protocol, which means that in its updates, it includes the subnet of
each route it knows about, thus, enabling variable-length subnet masks. With
variable-length subnet masks, an IP network can be broken into many
subnets of various sizes. This provides network administrators with extra
network configuration flexibility. These updates are multicasts at specific
addresses (224.0.0.5 and 224.0.0.6). OSPF is implemented as a program in the
network layer using the services provided by the Internet Protocol. IP
datagram that carries the messages from OSPF sets the value of the protocol
field to 89. OSPF is based on the SPF algorithm, which sometimes is referred
to as the Dijkstra algorithm.
Computer security threats are potential threats to your computer’s efficient
operation and performance. These could be harmless adware or a dangerous
trojan infection. As the world becomes more digital, computer security
concerns are always developing. A threat in a computer system is a potential
danger that could jeopardize your data security. At times, the damage is
irreversible.
Types of Threats:
A security threat is a threat that has the potential to harm computer systems
and organizations. The cause could be physical, such as a computer
containing sensitive information being stolen. It’s also possible that the cause
isn’t physical, such as a viral attack.
1. Physical Threats: A physical danger to computer systems is a potential
cause of an occurrence/event that could result in data loss or physical
damage. It can be classified as:
 Internal: Short circuit, fire, non-stable supply of power, hardware
failure due to excess humidity, etc. cause it.
 External: Disasters such as floods, earthquakes, landslides, etc. cause
it.
 Human: Destroying of infrastructure and/or hardware, thefts,
disruption, and unintentional/intentional errors are among the threats.
2. Non-physical threats: A non-physical threat is a potential source of an
incident that could result in:
 Hampering of the business operations that depend on computer
systems.
 Loss of sensitive data or information.
 Illegally keeping track of activities on others’ computer systems.
 Hacking of user IDs and passwords, etc.
The non-physical threats can be commonly caused by:
(i) Malware: Malware (“malicious software”) is a type of computer program
that infiltrates and damages systems without the users’ knowledge. Malware
tries to go unnoticed by either hiding or not letting the user know about its
presence on the system. You may notice that your system is processing at a
slower rate than usual.
(ii) Virus: It is a program that replicates itself and infects your computer’s files
and programs, rendering them inoperable. It is a type of malware that
spreads by inserting a copy of itself into and becoming part of another
program. It spreads with the help of software or documents. They are
embedded with software and documents and then transferred from one
computer to another using the network, a disk, file sharing, or infected e-
mail. They usually appear as an executable file.
(iii) Spyware: Spyware is a type of computer program that tracks, records, and
reports a user’s activity (offline and online) without their permission for the
purpose of profit or data theft. Spyware can be acquired from a variety of
sources, including websites, instant chats, and emails. A user may also
unwittingly obtain spyware by accepting a software program’s End User
License Agreement.
Adware is a sort of spyware that is primarily utilized by advertising. When
you go online, it keeps track of your web browsing patterns in order to
compile data on the types of websites you visit.
(iv) Worms: Computer worms are similar to viruses in that they replicate
themselves and can inflict similar damage. Unlike viruses, which spread by
infecting a host file, worms are freestanding programs that do not require a
host program or human assistance to proliferate. Worms don’t change
programs; instead, they replicate themselves over and over. They simply
consume resources and bring the system down.
(v) Trojan: A Trojan horse is malicious software that is disguised as a useful
host program. When the host program is run, the Trojan performs a
harmful/unwanted action. A Trojan horse, often known as a Trojan, is
malicious software that appears to be legitimate yet has the ability to
take control of your computer. A Trojan is a computer program that is
designed to disrupt, steal, or otherwise harm your data or network.
(vi) Denial Of Service Attacks: A Denial of Service attack is one in which an
attacker tries to prohibit legitimate users from obtaining information or
services. An attacker tries to make a system or network resource unavailable
to its intended users in this attack. The web servers of large organizations
such as banking, commerce, trading organizations, etc. are the victims.
(vii) Phishing: Phishing is a type of attack that is frequently used to obtain
sensitive information from users, such as login credentials and credit card
details. They deceive users into giving critical information, such as bank and
credit card information, or access to personal accounts, by sending spam,
malicious Web sites, email messages, and instant chats.
(viii) Key-Loggers: Keyloggers can monitor a user’s computer activity in real-
time. Keylogger is a program that runs in the background and records every
keystroke made by a user, then sends the data to a hacker with the intent of
stealing passwords and financial information.
What is social engineering?
Social engineering attacks manipulate people into sharing information that
they shouldn’t share, downloading software that they shouldn’t download,
visiting websites they shouldn’t visit, sending money to criminals or making
other mistakes that compromise their personal or organizational security.
An email that seems to be from a trusted coworker requesting sensitive
information, a threatening voicemail claiming to be from the IRS and an offer
of riches from a foreign potentate are just a few examples of social
engineering. Because social engineering uses psychological manipulation and
exploits human error or weakness rather than technical or digital system
vulnerabilities, it is sometimes called "human hacking."
Cybercriminals frequently use social engineering tactics to obtain personal
data or financial information, including login credentials, credit card
numbers, bank account numbers and Social Security numbers. They use the
information that they have stolen for identity theft, enabling them to make
purchases using other peoples’ money or credit, apply for loans in someone
else’s name, apply for other peoples’ unemployment benefits and more. But
a social engineering attack can also be the first stage of a larger-
scale cyberattack. For example, a cybercriminal might trick a victim into
sharing a username and password and then use those credentials to
plant ransomware on the victim’s employer’s network.
Social engineering is attractive to cybercriminals because it enables them to
access digital networks, devices and accounts without having to do the
difficult technical work of getting around firewalls, antivirus software and
other cybersecurity controls. This is one reason why social engineering is the
leading cause of network compromise today according to ISACA's State of
Cybersecurity 2022 report (link resides outside ibm.com). According to
IBM's Cost of a Data Breach report, breaches caused by social engineering
tactics (such as phishing and business email compromise) were among the
most costly.
Online stalking
Cyberstalking is the use of the Internet or other electronic means
to stalk or harass an individual, group, or organization.[1][2] It may include false
accusations, defamation, slander and libel. It may also
include monitoring, identity theft, threats, vandalism, solicitation for
sex, doxing, or blackmail.[1] These unwanted behaviors are perpetrated online
and cause intrusion into an individual's digital life as well as negatively
impact a victim's mental and emotional well-being, as well as their sense of
safety and security online. [3]
Cyberstalking is often accompanied by real-time or offline stalking.[4] In many
jurisdictions, such as California, both are criminal offenses.[5] Both are
motivated by a desire to control, intimidate or influence a victim.[6] A stalker
may be an online stranger or a person whom the target knows. They may be
anonymous and solicit involvement of other people online who do not even
know the target.[7]
Cyberstalking is a criminal offense under various state anti-
stalking, slander and harassment laws. A conviction can result in a restraining
order, probation, or criminal penalties against the assailant, including jail.
Cyberstalking is often defined as unwanted behavior.
Definitions and description
See also: Doxing and Cyberbullying
There have been a number of attempts by experts and legislators to define
cyberstalking. It is generally understood to be the use of the Internet or other
electronic means to stalk or harass an individual, a group, or an organization.[1]
Cyberstalking is a form of cyberbullying; the terms are often used
interchangeably in the media. Both may include false
accusations, defamation, slander and libel.[4]
Cyberstalking may also include monitoring, identity theft, threats, vandalism,
solicitation for sex, or gathering information that may be used to threaten or
harass. Cyberstalking is often accompanied by real-time or offline stalking.
[4]
Both forms of stalking may be criminal offenses.[5]
Stalking is a continuous process, consisting of a series of actions, each of
which may be entirely legal in itself. Technology ethics professor Lambèr
Royakkers defines cyberstalking as perpetrated by someone without a
current relationship with the victim. About the abusive effects of
cyberstalking, he writes that:
[Stalking] is a form of mental assault, in which the perpetrator repeatedly,
unwantedly, and disruptively breaks into the life-world of the victim, with
whom he has no relationship (or no longer has), with motives that are
directly or indirectly traceable to the affective sphere. Moreover, the
separated acts that make up the intrusion cannot by themselves cause the
mental abuse, but do taken together (cumulative effect).[8]
Distinguishing cyberstalking from other acts
There is a distinction between cyber-trolling and cyber-stalking. Research has
shown that actions that can be perceived to be harmless as a one-off can be
considered to be trolling, whereas if it is part of a persistent campaign then it
can be considered stalking.
TM | Motive | Mode | Gravity | Description
1 | Playtime | Cyber-bantering | Cyber-trolling | In the moment and quickly regret
2 | Tactical | Cyber-trickery | Cyber-trolling | In the moment but do not regret and continue
3 | Strategic | Cyber-bullying | Cyber-stalking | Go out of way to cause problems, but without a sustained and planned long-term campaign
4 | Domination | Cyber-hickery | Cyber-stalking | Goes out of the way to create rich media to target one or more specific individuals
Cyberstalking author Alexis Moore separates cyberstalking from identity
theft, which is financially motivated.[9] Her definition, which was also used by
the Republic of the Philippines in their legal description, is as follows:[10]
Cyberstalking is a technologically-based "attack" on one person who has been
targeted specifically for that attack for reasons of anger, revenge or control.
Cyberstalking can take many forms, including:
1. harassment, embarrassment and humiliation of the victim
2. emptying bank accounts or other economic control such as ruining the
victim's credit score
3. harassing family, friends and employers to isolate the victim
4. scare tactics to instill fear and more[9]
Identification and detection
CyberAngels has written about how to identify cyberstalking:[11]
When identifying cyberstalking "in the field," and particularly when
considering whether to report it to any kind of legal authority, the following
features or combination of features can be considered to characterize a true
stalking situation: malice, premeditation,
repetition, distress, obsession, vendetta, no legitimate purpose, personally
directed, disregarded warnings to stop, harassment and threats.
A number of key factors have been identified in cyberstalking:
 False accusations: Many cyberstalkers try to damage the reputation of
their victim and turn other people against them. They post false
information about them on websites. They may set up their own
websites, blogs or user pages for this purpose. They post allegations
about the victim to newsgroups, chat rooms, or other sites that allow
public contributions such as Wikipedia or Amazon.com.[12]
 Attempts to gather information about the victim: Cyberstalkers may
approach their victim's friends, family and work colleagues to obtain
personal information. They may advertise for information on the
Internet, or hire a private detective.[13]
 Monitoring their target's online activities and attempting to trace
their IP address in an effort to gather more information about their
victims.[14]
 Encouraging others to harass the victim: Many cyberstalkers try to
involve third parties in the harassment. They may claim the victim has
harmed the stalker or his/her family in some way, or may post the
victim's name and telephone number in order to encourage others to
join the pursuit.
 False victimization: The cyberstalker will claim that the victim is
harassing him or her. Bocij writes that this phenomenon has been
noted in a number of well-known cases.[15]
 Attacks on data and equipment: They may try to damage the victim's
computer by sending viruses.
 Ordering goods and services: They order items or subscribe to
magazines in the victim's name. These often involve subscriptions
to pornography or ordering sex toys then having them delivered to the
victim's workplace.
 Arranging to meet: Young people face a particularly high risk of having
cyberstalkers try to set up meetings between them.[15]
 The posting of defamatory or derogatory statements: Using web pages
and message boards to incite some response or reaction from their
victim
The Transport Layer is the second layer in the TCP/IP model and the fourth
layer in the OSI model. It is an end-to-end layer used to deliver messages to a
host. It is termed an end-to-end layer because it provides a point-to-point
connection rather than hop-to-hop, between the source host and destination
host to deliver the services reliably. The unit of data encapsulation in the
Transport Layer is a segment.
Working of Transport Layer
The transport layer takes services from the Application layer and provides
services to the Network layer.
The transport layer ensures the reliable transmission of data between
systems, and understanding protocols like TCP and UDP is crucial.
At the sender’s side: The transport layer receives data (message) from the
Application layer and then performs Segmentation, divides the actual
message into segments, adds the source and destination’s port numbers into
the header of the segment, and transfers the message to the Network layer.
At the receiver’s side: The transport layer receives data from the Network
layer, reassembles the segmented data, reads its header, identifies the port
number, and forwards the message to the appropriate port in the Application
layer.
Responsibilities of a Transport Layer
 The Process to Process Delivery
 End-to-End Connection between Hosts
 Multiplexing and Demultiplexing
 Congestion Control
 Data integrity and Error correction
 Flow control
1. The Process to Process Delivery
While Data Link Layer requires the MAC address (48 bits address contained
inside the Network Interface Card of every host machine) of source-
destination hosts to correctly deliver a frame and the Network layer requires
the IP address for appropriate routing of packets, in a similar way Transport
Layer requires a Port number to correctly deliver the segments of data to the
correct process amongst the multiple processes running on a particular host.
A port number is a 16-bit address used to identify any client-server program
uniquely.
Process to Process Delivery
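As a small illustration of port-based delivery (a Python sketch; the loopback address and port 9999 are arbitrary choices), a process claims a port by binding a socket to it, and segments addressed to that port are handed to that process only:

import socket

# A UDP socket bound to port 9999: segments arriving with destination
# port 9999 are delivered to this process and to no other.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("127.0.0.1", 9999))
print("listening on port", sock.getsockname()[1])
sock.close()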
2. End-to-end Connection between Hosts
The transport layer is also responsible for creating the end-to-end Connection
between hosts, for which it mainly uses TCP and UDP. TCP is a reliable,
connection-oriented protocol that uses a handshake protocol to establish a
robust connection between two end hosts. TCP ensures the reliable delivery
of messages and is used in various applications. UDP, on the other hand, is a
stateless and unreliable protocol that provides best-effort delivery. It is
suitable for applications that have little concern for flow or error control
and that require sending bulk data, like video conferencing. It is often used
in multicasting protocols.
End to End Connection.
3. Multiplexing and Demultiplexing
Multiplexing(many to one) is when data is acquired from several processes
from the sender and merged into one packet along with headers and sent as
a single packet. Multiplexing allows the simultaneous use of different
processes over a network that is running on a host. The processes are
differentiated by their port numbers. Similarly, Demultiplexing (one to many)
is required at the receiver side when the message is distributed to different
processes. The transport layer receives the segments of data from the network layer
and distributes and delivers them to the appropriate process running on the
receiver’s machine.
Multiplexing and Demultiplexing
4. Congestion Control
Congestion is a situation in which too many sources over a network attempt
to send data and the router buffers start overflowing due to which loss of
packets occurs. As a result, the retransmission of packets from the sources
increases the congestion further. In this situation, the Transport layer
provides Congestion Control in different ways. It uses open-loop congestion
control to prevent congestion and closed-loop congestion control to remove
congestion from a network once it has occurred. TCP provides AIMD (additive
increase, multiplicative decrease) and the leaky bucket technique for congestion
control.
Leaky Bucket Congestion Control Technique
5. Data integrity and Error Correction
The transport layer checks for errors in the messages coming from the
application layer by using error-detection codes and computing checksums. It
verifies whether the received data is corrupted, and uses the ACK and
NACK services to inform the sender whether or not the data has arrived,
thereby checking the integrity of the data.
Error Correction using Checksum
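As an illustration of checksum-based error detection, the 16-bit ones'-complement checksum used by IP, TCP, and UDP can be computed in a few lines (a simplified Python sketch; real TCP/UDP checksums also cover a pseudo-header):

def internet_checksum(data: bytes) -> int:
    if len(data) % 2:
        data += b"\x00"  # pad odd-length data with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]     # add the next 16-bit word
        total = (total & 0xFFFF) + (total >> 16)  # fold any carry back in
    return ~total & 0xFFFF  # one's complement of the running sum

segment = b"example payload"
print(hex(internet_checksum(segment)))
# The receiver recomputes the sum over the data plus the checksum;
# a nonzero complemented result means the segment was corrupted in transit.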
6. Flow Control
The transport layer provides a flow control mechanism between the adjacent
layers of the TCP/IP model. TCP also prevents data loss due to a fast sender
and a slow receiver by imposing flow control techniques. It uses the
sliding window protocol, which is accomplished by the receiver
sending a window back to the sender informing it of the size of data it can receive.
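A toy illustration of the receiver-advertised window (all numbers invented): the sender may transmit at most as many segments as the window the receiver last advertised:

class Receiver:
    def __init__(self, buffer_size):
        self.buffer_size = buffer_size
        self.buffered = 0  # segments received but not yet consumed

    def advertise_window(self):
        return self.buffer_size - self.buffered  # room left for new data

rcv = Receiver(buffer_size=4)
sender_backlog = 10
window = rcv.advertise_window()
to_send = min(sender_backlog, window)  # a fast sender never exceeds the window
print(f"sender transmits {to_send} of {sender_backlog} queued segments")  # 4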
Connection-Oriented Service is basically a technique that is typically used to
transport and send data at the session layer. The data streams or packets are
transferred or delivered to the receiver in the same order in which they were
transferred by the sender. It is a data transfer method between two
devices or computers in different networks, designed and developed
after the telephone system. Whenever a network implements this service, it
sends or transfers data or messages from the sender or source to the receiver or
destination in the correct order and manner.
This connection service is generally provided by protocols of both the network
layer (where different data packets belonging to the same message may follow
different paths) and the transport layer (which provides independence from the
different paths that the various packets of the same message may follow).
Operations :
There is a sequence of operations that needs to be followed by users.
These operations are given below :
1. Establishing Connection –
It generally requires a session connection to be established just before
any data is transported or sent, with a direct physical connection between
the sessions.
2. Transferring Data or Message –
When the session connection is established, we transfer or send the
message or data.
3. Releasing the Connection –
After sending or transferring the data, we release the connection.
Different Ways :
There are two ways in which connection-oriented services can be implemented.
These ways are given below :
1. Circuit-Switched Connection –
Circuit-switched networks or connections are generally known as
connection-oriented networks. In this connection, a dedicated route is
established between sender and receiver, and the whole data or
message is sent through it. A dedicated physical route, path, or
circuit is established among all communicating nodes, and after that, the
data stream or message is sent or transferred.
2. Virtual Circuit-Switched Connection –
Virtual Circuit-Switched Connection or Virtual Circuit Switching is also
known as Connection-Oriented Switching. In this connection, a
preplanned route or path is established before data or messages are
transferred or sent. The message is transferred over this network in
such a way that it seems to the user that there is a dedicated route or path
from the source or sender to the destination or receiver.
Types of Connection-Oriented Service :
Service | Example
Reliable Message Stream | Sequence of pages, etc.
Reliable Byte Stream | Song download, etc.
Unreliable Connection | VoIP (Voice over Internet Protocol)
Advantages :
 It readily supports quality of service.
 This connection is more reliable than connectionless service.
 Long and large messages can be divided into various smaller messages
so that they fit inside packets.
 Problems or issues related to duplicate data packets are made
less severe.
Disadvantages :
 In this connection, the cost is fixed, no matter how much traffic there is.
 It is necessary to allocate resources before communication.
 If any route or path failure or network congestion arises, there is no
alternative way available to continue communication.
A Connectionless Service is a technique used in data communications to
send or transfer data or messages at Layer 4, i.e., the Transport Layer of the Open
Systems Interconnection model. This service does not require a session
connection between the sender (source) and the receiver (destination); the sender
simply starts transferring or sending data or messages to the destination.
In other words, we can say that connectionless service simply means that
node can transfer or send data packets or messages to its receiver even
without session connection to receiver. Message is sent or transferred
without prior arrangement. This usually works due to error handling
protocols that allow and give permission for correction of errors just like
requesting retransmission.
In this service, the network sends each packet of data one at a time,
independently of other packets. The network does not have any state
information to determine or identify whether a packet is part of a stream of
other packets, nor does it have any knowledge of the amount of traffic that
will be transferred by the user. Each data packet carries the full source and
destination address and is routed independently from source to destination.
Therefore, data packets or messages might follow different paths to reach
destination. Data packets are also called datagrams. It is also similar to that
of postal services, as it also carries full address of destination where message
is to send. Data is also sent in one direction from source to destination
without checking that destination is still present there or not or if receiver or
destination is prepared to accept message.
Connectionless Protocols :
These protocols simply allow data to be transferred without any link between
processes. Some of the data packets may also be lost during transmission. Some
of the protocols for connectionless services are given below:
 Internet Protocol (IP) –
This protocol is connectionless. In this protocol, all packets in an IP
network are routed independently; they might not go through the same
route.
 User Datagram Protocol (UDP) –
This protocol does not establish any connection before transferring
data. It just sends the data, which is why UDP is known as connectionless.
 Internet Control Message Protocol (ICMP) –
ICMP is called connectionless simply because it does not need any
hosts to handshake before establishing a connection.
 Internetwork Packet Exchange (IPX) –
IPX is called connectionless as it doesn’t need any consistent
connection to be maintained while data packets or
messages are being transferred from one system to another.
Types of Connectionless Services :
Service | Example
Unreliable Datagram | Electronic junk mail, etc.
Acknowledged Datagram | Registered mail, text messages along with delivery report, etc.
Request Reply | Queries from remote databases, etc.
Advantages :
 It is very fast and also allows for multicast and broadcast operations, in
which the same data is transferred to various recipients in a single
transmission.
 The effect of any error that occurs can be reduced by implementing error
correction within an application protocol.
 This service is very easy and simple and also has low overhead.
 At the network layer, the host software is much simpler.
 No authentication is required in this service.
 Some applications don’t even require sequential delivery of
packets or data. Examples include packet voice, etc.
Disadvantages :
 This service is less reliable compared to connection-oriented service.
 It does not guarantee that there will be no loss, error occurrence,
misdelivery, duplication, or out-of-sequence delivery of packets.
 It is more prone to network congestion.
What is Congestion?
Congestion in a computer network happens when there is too much data
being sent at the same time, causing the network to slow down. Just like
traffic congestion on a busy road, network congestion leads to delays and
sometimes data loss. When the network can’t handle all the incoming data, it
gets “clogged,” making it difficult for information to travel smoothly from one
place to another.
Benefits of Congestion Control in a Computer Network
 Improved Network Stability: Congestion control helps keep the
network stable by preventing it from getting overloaded. It manages
the flow of data so the network doesn’t crash or fail due to too much
traffic.
 Reduced Latency and Packet Loss: Without congestion control, data
transmission can slow down, causing delays and data loss. Congestion
control helps manage traffic better, reducing these delays and ensuring
fewer data packets are lost, making data transfer faster and the
network more responsive.
 Enhanced Throughput: By avoiding congestion, the network can use its
resources more effectively. This means more data can be sent in a
shorter time, which is important for handling large amounts of data
and supporting high-speed applications.
 Fairness in Resource Allocation: Congestion control ensures that
network resources are shared fairly among users. No single user or
application can take up all the bandwidth, allowing everyone to have a
fair share.
 Better User Experience: When data flows smoothly and quickly, users
have a better experience. Websites, online services, and applications
work more reliably and without annoying delays.
 Mitigation of Network Congestion Collapse: Without congestion
control, a sudden spike in data traffic can overwhelm the network,
causing severe congestion and making it almost unusable. Congestion
control helps prevent this by managing traffic efficiently and avoiding
such critical breakdowns.
Congestion Control Algorithm
 Congestion Control is a mechanism that controls the entry of data
packets into the network, enabling a better use of a shared network
infrastructure and avoiding congestive collapse.
 Congestion-avoidance algorithms (CAA) are implemented at the TCP
layer as the mechanism to avoid congestive collapse in a network (a
minimal sketch of this idea appears just after this list).
 There are two traffic-shaping congestion control algorithms, covered
in the sections that follow: the leaky bucket and the token bucket.
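Before turning to the two traffic-shaping algorithms, here is a
minimal, illustrative sketch of the additive-increase/multiplicative-
decrease (AIMD) idea behind TCP-layer congestion avoidance. The
constants (grow by one segment per round trip, halve on loss) mirror
classic TCP behaviour, but this is an assumed simplification, not a
full TCP implementation:

    def aimd_update(cwnd: float, loss_detected: bool) -> float:
        # cwnd is the congestion window in segments (illustrative units).
        if loss_detected:
            return max(1.0, cwnd / 2)  # multiplicative decrease on congestion
        return cwnd + 1.0              # additive increase per round-trip time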
Leaky Bucket Algorithm
 The leaky bucket algorithm is used for network traffic shaping and
rate limiting.
 Leaky bucket and token bucket implementations are the two predominant
traffic-shaping algorithms.
 This algorithm is used to control the rate at which traffic is sent
to the network and to shape bursty traffic into a steady traffic
stream.
 The disadvantage of the leaky bucket algorithm is inefficient use of
available network resources.
 Large amounts of network resources, such as spare bandwidth, go
unused because the output rate never rises above the configured
constant rate.
To understand the idea, imagine a bucket with a small hole in the
bottom. No matter at what rate water enters the bucket, the outflow
stays at a constant rate, and when the bucket is full, any additional
water spills over the sides and is lost.

Similarly, each network interface contains a leaky bucket, and the
following steps are involved in the leaky bucket algorithm:
 When a host wants to send a packet, the packet is placed into the bucket.
 The bucket leaks at a constant rate, meaning the network interface
transmits packets at a constant rate.
 Bursty traffic is converted into uniform traffic by the leaky bucket.
 In practice, the bucket is a finite queue that outputs at a finite rate.
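A minimal Python sketch of these steps (the capacity and leak-rate
parameters below are illustrative assumptions, not values from any
standard):

    import time
    from collections import deque

    class LeakyBucket:
        # Packets queue in a finite bucket and leave at a constant rate.
        def __init__(self, capacity_pkts, leak_rate_pps):
            self.queue = deque()
            self.capacity = capacity_pkts        # finite queue size
            self.interval = 1.0 / leak_rate_pps  # constant output spacing

        def arrive(self, packet):
            if len(self.queue) >= self.capacity:
                return False                     # bucket full: packet is lost
            self.queue.append(packet)
            return True

        def drain(self, send):
            while self.queue:
                send(self.queue.popleft())       # transmit one packet
                time.sleep(self.interval)        # enforce the constant leak rate

For example, LeakyBucket(capacity_pkts=8, leak_rate_pps=100) would hold
at most 8 queued packets and release one every 10 ms, no matter how
bursty the arrivals are.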
Token Bucket Algorithm
 The leaky bucket algorithm enforces a rigid output rate, regardless
of how bursty the incoming traffic is.
 In some applications, when large bursts arrive, the output should be
allowed to speed up. This calls for a more flexible algorithm,
preferably one that never loses information. The token bucket
algorithm therefore finds its use in network traffic shaping and rate
limiting.
 It is a control algorithm that dictates when traffic can be sent,
based on the presence of tokens in the bucket.
 The bucket holds tokens, each of which represents permission to send
a packet of a predetermined size; a token is removed from the bucket
for every packet sent.
 As long as tokens are available, a flow may transmit traffic; when
there are no tokens, no packets can be sent.
 Hence, a flow can transmit at up to its peak burst rate for as long
as there are enough tokens in the bucket.
Need for the Token Bucket Algorithm
The leaky bucket algorithm enforces output at the average rate, no
matter how bursty the traffic is. To handle bursty traffic without
losing data, a more flexible algorithm is needed; the token bucket
algorithm is one such algorithm.
Steps of this algorithm can be described as follows:
 At regular intervals, tokens are added to the bucket.
 The bucket has a maximum capacity.
 If there is a ready packet, a token is removed from the bucket, and the
packet is sent.
 If there is no token in the bucket, the packet cannot be sent.
Let’s understand with an example. In figure (A), we see a bucket
holding three tokens, with five packets waiting to be transmitted. For
a packet to be transmitted, it must capture and destroy one token. In
figure (B), we see that three of the five packets have gotten through,
but the other two are stuck waiting for more tokens to be generated.
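A minimal Python sketch of this token-refill logic (the rate and
capacity parameters, and the one-token-per-packet assumption, are
illustrative):

    import time

    class TokenBucket:
        # Tokens accumulate at a fixed rate up to a maximum capacity;
        # each transmitted packet consumes one token.
        def __init__(self, rate_tokens_per_s, capacity):
            self.rate = rate_tokens_per_s
            self.capacity = capacity
            self.tokens = capacity
            self.last = time.monotonic()

        def allow(self):
            now = time.monotonic()
            # Refill: tokens arrive at regular intervals, up to capacity.
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1   # a token is captured and the packet is sent
                return True
            return False           # no token: the packet must wait

Because the bucket can store up to its full capacity of tokens, an idle
flow builds up credit and can then send a burst of that many packets
back-to-back, which is exactly the flexibility the leaky bucket lacks.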
Token Bucket vs Leaky Bucket
The leaky bucket algorithm controls the rate at which packets are
introduced into the network, but it is very conservative in nature.
Some flexibility is introduced in the token bucket algorithm. In the
token bucket algorithm, tokens are generated at each tick (up to a
certain limit), and an incoming packet must capture a token before it
can be transmitted. Hence, bursty packets can be transmitted
back-to-back whenever tokens are available, which introduces some
flexibility into the system.
Formula: M × S = C + ρ × S, where S is the burst time in seconds, M is
the maximum output rate, ρ is the token arrival rate, and C is the
capacity of the token bucket in bytes.
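As a worked example (the numbers here are illustrative): rearranging
the formula gives S = C / (M − ρ). With a bucket capacity C = 250 KB, a
maximum output rate M = 25 MB/s, and a token arrival rate ρ = 2 MB/s,
the maximum burst can last S = 250 KB / 23 MB/s ≈ 11 ms.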
For a practice question on the leaky bucket
algorithm, see: https://www.geeksforgeeks.org/computer-networks-set-8/
Advantages
 Stable Network Operation: Congestion control ensures that networks
remain stable and operational by preventing them from becoming
overloaded with too much data traffic.
 Reduced Delays: It minimizes delays in data transmission by managing
traffic flow effectively, ensuring that data packets reach their
destinations promptly.
 Less Data Loss: By regulating the amount of data in the network at any
given time, congestion control reduces the likelihood of data packets
being lost or discarded.
 Optimal Resource Utilization: It helps networks use their resources
efficiently, allowing for better throughput and ensuring that users can
access data and services without interruptions.
 Scalability: Congestion control mechanisms are scalable, allowing
networks to handle increasing volumes of data traffic as they grow
without compromising performance.
 Adaptability: Modern congestion control algorithms can adapt to
changing network conditions, ensuring optimal performance even in
dynamic and unpredictable environments.
Disadvantages
 Complexity: Implementing congestion control algorithms can add
complexity to network management, requiring sophisticated systems
and configurations.
 Overhead: Some congestion control techniques introduce additional
overhead, which can consume network resources and affect overall
performance.
 Algorithm Sensitivity: The effectiveness of congestion control
algorithms can be sensitive to network conditions and configurations,
requiring fine-tuning for optimal performance.
 Resource Allocation Issues: Fairness in resource allocation, while a
benefit, can also pose challenges when trying to prioritize critical
applications over less essential ones.
 Dependency on Network Infrastructure: Congestion control relies on
the underlying network infrastructure and may be less effective in
environments with outdated or unreliable equipment.

Quality of service (QoS) is the use of mechanisms or technologies that
work on a network to control traffic and ensure the performance of
critical applications with limited network capacity. It enables
organizations to adjust their overall network traffic by prioritizing
specific high-performance applications.
QoS is typically applied to networks that carry traffic for resource-intensive
systems. Common services for which it is required include internet protocol
television (IPTV), online gaming, streaming media, videoconferencing, video
on demand (VOD), and Voice over IP (VoIP).
Using QoS in networking, organizations have the ability to optimize the
performance of multiple applications on their network and gain visibility into
the bit rate, delay, jitter, and packet rate of their network. This ensures they
can engineer the traffic on their network and change the way that packets are
routed to the internet or other networks to avoid transmission delay. This
also ensures that the organization achieves the expected service quality for
applications and delivers expected user experiences.
As per the QoS meaning, the key goal is to enable networks and organizations
to prioritize traffic, which includes offering dedicated bandwidth, controlled
jitter, and lower latency. The technologies used to ensure this are vital to
enhancing the performance of business applications, wide-area networks
(WANs), and service provider networks.
Types of network traffic
Understanding how QoS network software works is reliant on defining the
various types of traffic that it measures. These are:
1. Bandwidth: The speed of a link. QoS can tell a router how to use
bandwidth. For example, assigning a certain amount of bandwidth to
different queues for different traffic types.
2. Delay: The time it takes for a packet to go from its source to its end
destination. This can often be affected by queuing delay, which occurs
during times of congestion and a packet waits in a queue before being
transmitted. QoS enables organizations to avoid this by creating a
priority queue for certain types of traffic.
3. Loss: The amount of data lost as a result of packet loss, which typically
occurs due to network congestion. QoS enables organizations to decide
which packets to drop in this event.
4. Jitter: The variation in packet delay on a network as a result of
congestion, which can result in packets arriving late and out of
sequence. This can cause distortion or gaps in delivered audio and
video. (A small sketch for estimating jitter follows this list.)
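As a rough illustration of how jitter can be quantified, the sketch
below estimates it as the average variation between consecutive
inter-arrival gaps. The timestamps and the averaging method are
simplifying assumptions, not the estimator defined in a standard such
as RFC 3550:

    def mean_jitter(arrival_times_ms):
        # Gaps between consecutive packet arrivals (milliseconds).
        gaps = [b - a for a, b in zip(arrival_times_ms, arrival_times_ms[1:])]
        if len(gaps) < 2:
            return 0.0
        # Average absolute change between consecutive gaps.
        return sum(abs(g2 - g1) for g1, g2 in zip(gaps, gaps[1:])) / (len(gaps) - 1)

    # Packets sent every 20 ms but arriving irregularly: nonzero jitter.
    print(mean_jitter([0, 21, 39, 62, 80]))  # ~4.3 ms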


Advantages of QoS
The deployment of QoS is crucial for businesses that want to ensure the
availability of their business-critical applications. It is vital for delivering
differentiated bandwidth and ensuring data transmission takes place without
interrupting traffic flow or causing packet losses. Major advantages of
deploying QoS include:
1. Unlimited application prioritization: QoS guarantees that businesses’
most mission-critical applications will always have priority and the
necessary resources to achieve high performance.
2. Better resource management: QoS enables administrators to better
manage the organization’s internet resources. This also reduces costs
and the need for investments in link expansions.
3. Enhanced user experience: The end goal of QoS is to guarantee the
high performance of critical applications, which boils down to
delivering optimal user experience. Employees enjoy high performance
on their high-bandwidth applications, which enables them to be more
effective and get their job done more quickly.
4. Point-to-point traffic management: A network must be managed
however its traffic is delivered, whether end to end, node to node, or
point to point. Point-to-point management enables organizations to
deliver customer packets in order from one point to the next over the
internet without suffering any packet loss.
5. Packet loss prevention: Packet loss can occur when packets of data are
dropped in transit between networks. This can often be caused by a
failure or inefficiency, network congestion, a faulty router, loose
connection, or poor signal. QoS avoids the potential of packet loss by
prioritizing bandwidth of high-performance applications.
6. Latency reduction: Latency is the time it takes for a network request to
go from the sender to the receiver and for the receiver to process it.
This is typically affected by routers taking longer to analyze information
and storage delays caused by intermediate switches and bridges. QoS
enables organizations to reduce latency, or speed up the process of a
network request, by prioritizing their critical application.
