Computer Network
ALOHA Protocols
ALOHA is one of the earliest channel access protocols. Developed at the University of
Hawaii (as ALOHAnet) for wireless packet radio, and later applied to satellite links, it lets
devices transmit data at random times and rely on acknowledgments to learn whether a
transmission succeeded.
Pure ALOHA: A device transmits data at any time. If the transmission collides
with another, the frame is retransmitted after a random delay. This method is simple but
inefficient, with a maximum channel utilization of only about 18.4%.
Slotted ALOHA: Time is divided into slots, and devices may begin sending only at
the start of a slot. Halving the vulnerable period roughly doubles the maximum
utilization to about 36.8% compared to Pure ALOHA; the simulation sketch below
illustrates the gap.
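The throughput gap shows up even in a tiny Monte Carlo model. The Python sketch below is
purely illustrative (the function and parameter names are invented): each of n_stations
stations transmits in a slot time with probability p_tx, and a frame succeeds only if no other
transmission overlaps it; for Pure ALOHA the vulnerable period also covers the preceding
slot time.

```python
import random

def simulate(n_stations=50, p_tx=0.02, n_slots=100_000, slotted=True):
    """Estimate ALOHA throughput as the fraction of slot times that
    carry exactly one successful frame."""
    successes = 0
    prev_senders = 0
    for _ in range(n_slots):
        senders = sum(random.random() < p_tx for _ in range(n_stations))
        if slotted:
            successes += (senders == 1)
        else:
            # Pure ALOHA: the collision window spans this slot and the previous one.
            successes += (senders == 1 and prev_senders == 0)
        prev_senders = senders
    return successes / n_slots

print(f"slotted ALOHA throughput ~ {simulate(slotted=True):.3f}")   # ~0.37
print(f"pure ALOHA throughput    ~ {simulate(slotted=False):.3f}")  # ~0.14
```

At an offered load of about one attempt per slot (G = 1), the estimates match the textbook
throughputs G*e^(-G) ~ 0.368 for slotted and G*e^(-2G) ~ 0.135 for pure ALOHA.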
Carrier Sense Multiple Access (CSMA) is a protocol for managing access to a shared
medium: a device senses the channel before transmitting and sends only if it appears free,
which reduces (but does not eliminate) collisions. CSMA has a few
variations:
1. Basic CSMA: Before a device sends data, it listens to the channel (carrier sense) to
check if another device is transmitting. If the channel is idle, the device transmits its
data; otherwise, it waits.
2. CSMA with Collision Detection (CSMA/CD): This is an enhancement to
basic CSMA, where the device not only listens before transmission but also continues
to monitor the channel while transmitting. If a collision is detected, the device stops
transmitting and retries after a random backoff period.
CSMA/CD is used in Ethernet networks, where it ensures that devices can transmit data on
the same physical medium without excessive collisions.
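Classic Ethernet resolves collisions with truncated binary exponential backoff: after the nth
consecutive collision a station waits a random number of slot times drawn from 0 to
2^min(n,10) - 1, and abandons the frame after 16 attempts. The sketch below is a minimal
illustration of that rule; the channel object and its methods are hypothetical stand-ins for the
physical layer, not a real API.

```python
import random

SLOT_TIME = 51.2e-6   # slot time of classic 10 Mbit/s Ethernet, in seconds
MAX_ATTEMPTS = 16     # 802.3 stations abort a frame after 16 attempts

def backoff_slots(attempt: int) -> int:
    """Truncated binary exponential backoff: after the nth collision,
    wait k slot times with k uniform in 0 .. 2**min(n, 10) - 1."""
    return random.randint(0, 2 ** min(attempt, 10) - 1)

def send_frame(channel, frame) -> bool:
    """CSMA/CD sketch: sense before sending, monitor while sending,
    back off and retry on collision. `channel` is a hypothetical
    object with is_idle(), transmit(), and wait() methods."""
    for attempt in range(1, MAX_ATTEMPTS + 1):
        while not channel.is_idle():           # carrier sense
            channel.wait(SLOT_TIME)
        if channel.transmit(frame):            # True if no collision was detected
            return True
        channel.wait(backoff_slots(attempt) * SLOT_TIME)  # collision: back off
    return False                               # give up after 16 attempts
```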
Collision-Free Protocols
1. Token Passing: In this approach, a special control frame (the "token") circulates
around the network. Only the device holding the token is allowed to transmit,
preventing collisions. The Token Ring protocol is the classic example; a toy model is
sketched after this list.
2. Polling: A central controller (polling server) asks each device in turn if it has data
to send, ensuring only one device transmits at a time.
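Token passing can be modeled as a round-robin pass over per-station queues: the token visits
each station in turn and only the holder may send, so frames never collide. The toy below is
an invented illustration, not a real Token Ring implementation.

```python
from collections import deque

def token_ring_round(stations):
    """One circulation of the token. `stations` is a list of frame
    queues, one per station; the token holder transmits at most one
    frame before passing the token on, so transmissions never collide."""
    sent = []
    for i, queue in enumerate(stations):   # token moves station to station
        if queue:                          # only the token holder transmits
            sent.append((i, queue.popleft()))
    return sent

# Example: three stations, two of which have frames queued.
ring = [deque(["frame-A"]), deque(), deque(["frame-B", "frame-C"])]
print(token_ring_round(ring))   # [(0, 'frame-A'), (2, 'frame-B')]
```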
Ethernet
Ethernet is one of the most widely used network technologies, particularly for local area
networks (LANs). Early versions of Ethernet used CSMA/CD for collision management,
where multiple devices could share the same cable or medium. However, as Ethernet
evolved, newer technologies like full-duplex communication (where sending and receiving
can happen simultaneously) and switching have reduced or eliminated the need for
CSMA/CD. Modern Ethernet networks typically use switched Ethernet, where each device
connects to its own switch port; dedicated, full-duplex paths between devices effectively
eliminate collisions.
These protocols and mechanisms are essential for managing how data is transmitted across
shared communication channels, optimizing performance by reducing collisions or even
preventing them in the case of collision-free protocols.
1. Point-to-Point Network
A point-to-point network is a direct connection between two devices (nodes) in a
network. The devices communicate directly without any intermediary, unlike in a
broadcast or multipoint network where multiple devices share a common channel. (The
abbreviation "P2P" usually denotes peer-to-peer networking, a different concept.)
Point-to-point links are typically used for simple, small-scale connections, such as a
serial line running PPP between two routers, and as building blocks of larger routed
topologies.
2. Routing Algorithms
Routing algorithms determine how data packets should be forwarded from one device to
another within a network. The goal is to find the most efficient path from the source to the
destination. There are two main types of routing algorithms:
Static Routing: The paths are pre-defined by network administrators and do not
change unless manually reconfigured.
Dynamic Routing: Routing decisions adapt to the current state of the
network and are computed by routing protocols such as OSPF, BGP, and RIP; a
shortest-path sketch follows this list.
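Link-state protocols such as OSPF flood link costs, build a map of the topology, and run
Dijkstra's algorithm over it to pick least-cost paths. Below is a minimal, self-contained
sketch of that computation; the topology and cost values are made-up example data.

```python
import heapq

def dijkstra(graph, source):
    """Least-cost distance from `source` to every node in a weighted
    graph given as {node: {neighbor: cost}}. This is the computation a
    link-state router runs over its link-state database."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                       # stale heap entry, skip
        for v, w in graph.get(u, {}).items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

# Hypothetical topology: link costs between four routers.
net = {"A": {"B": 1, "C": 4}, "B": {"C": 2, "D": 6}, "C": {"D": 3}, "D": {}}
print(dijkstra(net, "A"))   # {'A': 0, 'B': 1, 'C': 3, 'D': 6}
```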
3. Congestion Control
Congestion control is a mechanism used in computer networks to manage traffic and avoid
network congestion. Congestion occurs when the demand for network resources exceeds the
available capacity, leading to performance degradation, packet loss, and delays.
Flow Control: Ensures that the sender does not overwhelm the receiver with data.
Traffic Shaping: Controls the rate of traffic sent to avoid congestion.
TCP Congestion Control: TCP mechanisms such as slow start, congestion
avoidance, fast retransmit, and fast recovery control congestion by adjusting the
transmission rate dynamically, as the sketch after this list illustrates.
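The window dynamics can be sketched with a simplified, Tahoe-style trace in segments per
round-trip time: slow start doubles the window until it reaches a threshold, congestion
avoidance then adds one segment per RTT, and a detected loss halves the threshold and
restarts slow start. The loss timing and all names below are invented for illustration; real
TCP (e.g., Reno's fast recovery) is more involved.

```python
def tcp_cwnd_trace(rounds, ssthresh=16, loss_rounds=frozenset({8})):
    """Simplified Tahoe-style congestion window trace, in segments per
    RTT. `loss_rounds` marks the RTTs in which a loss is detected."""
    cwnd, trace = 1, []
    for rtt in range(rounds):
        trace.append(cwnd)
        if rtt in loss_rounds:
            ssthresh = max(cwnd // 2, 2)   # multiplicative decrease
            cwnd = 1                       # Tahoe restarts slow start
        elif cwnd < ssthresh:
            cwnd *= 2                      # slow start: exponential growth
        else:
            cwnd += 1                      # congestion avoidance: additive increase
    return trace

print(tcp_cwnd_trace(14))
# [1, 2, 4, 8, 16, 17, 18, 19, 20, 1, 2, 4, 8, 16]
```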
4. Quality of Service (QoS)
Quality of Service (QoS) refers to the overall performance of a network in terms of its ability
to deliver reliable and timely services. QoS mechanisms are implemented to prioritize certain
types of traffic, ensuring that critical applications (such as voice or video streaming) are
delivered without interruption, even during times of high network load.
Traffic Prioritization: Assigning priority to different types of traffic (e.g., voice over
data); a strict-priority scheduler is sketched after this list.
Bandwidth Reservation: Reserving a certain amount of bandwidth for high-priority
traffic.
Latency Control: Ensuring that packets are delivered within a certain time frame.
Jitter Control: Reducing variability in packet arrival times, which is especially
important for real-time applications like VoIP.
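Traffic prioritization can be modeled as strict-priority queueing: a higher-priority class
always drains before lower ones, FIFO within a class. The class below is an invented sketch;
production QoS schedulers (e.g., weighted fair queueing with DSCP-based classification) are
considerably more elaborate and also guard against starving low-priority traffic.

```python
import heapq
from itertools import count

class PriorityScheduler:
    """Strict-priority packet scheduler sketch: lower `prio` values are
    served first, and the sequence counter keeps FIFO order per class."""
    def __init__(self):
        self._heap, self._seq = [], count()

    def enqueue(self, packet, prio):
        heapq.heappush(self._heap, (prio, next(self._seq), packet))

    def dequeue(self):
        return heapq.heappop(self._heap)[2] if self._heap else None

q = PriorityScheduler()
q.enqueue("bulk-data-1", prio=2)
q.enqueue("voip-frame", prio=0)    # voice gets the highest priority
q.enqueue("video-chunk", prio=1)
print(q.dequeue(), q.dequeue(), q.dequeue())
# voip-frame video-chunk bulk-data-1
```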
These concepts form the backbone of ensuring that a network operates efficiently and meets
the performance requirements of applications and users.