
SUBMITTED TO: PRERNA SEHRAWAT
ROLL NO: 231112033326
Computer Network

Medium Access Sublayer (MAC): Channel Allocation
The Medium Access Control (MAC) sublayer is a part of the data link layer in the OSI
model. It governs how devices access the shared communication medium, ensuring that
multiple devices can communicate without interfering with each other. Channel allocation
refers to the method by which the MAC sublayer determines when and how devices can
transmit data over the shared medium (like radio waves, cables, etc.). There are two broad
categories of channel allocation:

1. Static Allocation: The communication channel is divided into fixed,
predefined slots. Examples include TDMA (Time Division Multiple Access) and
FDMA (Frequency Division Multiple Access).

2. Dynamic Allocation: Channel access is decided on demand, as in the
ALOHA and CSMA protocols.
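To make the static case concrete, here is a minimal TDMA sketch. The station count and slot length are illustrative assumptions, not values from any particular standard: each station owns one fixed slot per frame, so the transmitter at any instant is determined purely by the clock.

```python
# Minimal TDMA sketch: each of N stations owns one fixed slot per frame.
# The station count and slot duration below are illustrative assumptions.
N_STATIONS = 4
SLOT_MS = 10  # slot duration in milliseconds (assumed)

def owner_of_slot(t_ms: int) -> int:
    """Return which station may transmit at time t_ms."""
    slot_index = t_ms // SLOT_MS
    return slot_index % N_STATIONS

# At t = 0..9 ms station 0 transmits, t = 10..19 ms station 1, and so on.
print([owner_of_slot(t) for t in (0, 10, 25, 35, 40)])  # → [0, 1, 2, 3, 0]
```

Because ownership is fixed in advance, no two stations ever transmit at once, but a station's slot is wasted whenever it has nothing to send.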

ALOHA Protocols

ALOHA is one of the earliest channel access protocols, developed at the
University of Hawaii for packet radio communication between the islands (slotted
ALOHA was later adopted in satellite systems as well). Devices transmit data at
random times and rely on acknowledgments to learn whether a transmission succeeded.

 Pure ALOHA: A device transmits data at any time. If the transmission collides
with another, it is retransmitted after a random delay. This method is simple but
inefficient, with a high risk of collision.

 Slotted ALOHA: Time is divided into slots, and devices are required to send data
only at the beginning of a time slot. This reduces the likelihood of collisions
compared to Pure ALOHA.
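The efficiency difference between the two variants can be quantified with the classic throughput formulas, where G is the offered load and S the expected number of successful transmissions per frame time:

```python
import math

# Classic ALOHA throughput formulas:
#   Pure ALOHA:    S = G * e^(-2G), peaking at G = 0.5 with S = 1/(2e) ≈ 0.184
#   Slotted ALOHA: S = G * e^(-G),  peaking at G = 1.0 with S = 1/e   ≈ 0.368
def pure_aloha_throughput(G: float) -> float:
    return G * math.exp(-2 * G)

def slotted_aloha_throughput(G: float) -> float:
    return G * math.exp(-G)

print(round(pure_aloha_throughput(0.5), 3))    # → 0.184
print(round(slotted_aloha_throughput(1.0), 3)) # → 0.368
```

Slotting the channel halves the vulnerable period around each transmission, which is why the peak throughput doubles from about 18.4% to about 36.8%.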

Carrier Sense Multiple Access Protocols (CSMA)

Carrier Sense Multiple Access (CSMA) is a protocol for managing access to a shared
medium by allowing devices to sense if the channel is free before transmitting. In this way, it
attempts to avoid collisions by checking the medium before transmitting. CSMA has a few
variations:

1. Basic CSMA: Before a device sends data, it listens to the channel (carrier sense) to
check if another device is transmitting. If the channel is idle, the device transmits its
data; otherwise, it waits.
2. CSMA with Collision Detection (CSMA/CD): This is an enhancement to
basic CSMA, where the device not only listens before transmission but also continues
to monitor the channel while transmitting. If a collision is detected, the device stops
transmitting and retries after a random backoff period.

CSMA with Collision Detection (CSMA/CD)

CSMA/CD is a protocol designed to detect collisions quickly and cut losses on a
shared communication medium. Here's how it works:

1. A device first checks whether the medium is idle before transmitting.
2. If the channel is idle, it transmits its data.
3. While transmitting, it listens for collisions. If a collision occurs, both transmitting
devices stop immediately and send a jamming signal to notify others.
4. The devices then wait for a random period before attempting to retransmit, reducing
the chance of repeated collisions.

CSMA/CD is used in Ethernet networks, where it ensures that devices can transmit data on
the same physical medium without excessive collisions.
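The "random period" in step 4 is, in classic Ethernet, the truncated binary exponential backoff: after the n-th collision a station waits a random number of slot times drawn from [0, 2^min(n,10) − 1]. A minimal sketch, using the classic 10 Mbps Ethernet parameters:

```python
import random

SLOT_TIME_US = 51.2   # classic 10 Mbps Ethernet slot time, in microseconds
MAX_BACKOFF_EXP = 10  # the backoff exponent is capped at 10
MAX_ATTEMPTS = 16     # the frame is dropped after 16 failed attempts

def backoff_delay_us(collision_count: int) -> float:
    """Truncated binary exponential backoff after the n-th collision."""
    if collision_count > MAX_ATTEMPTS:
        raise RuntimeError("too many collisions: frame dropped")
    k = min(collision_count, MAX_BACKOFF_EXP)
    slots = random.randint(0, 2**k - 1)  # wait 0 .. 2^k - 1 slot times
    return slots * SLOT_TIME_US
```

Doubling the backoff range on every collision spreads retransmissions out more aggressively as contention rises, which is what keeps repeated collisions unlikely.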

Collision-Free Protocols

Collision-free protocols aim to prevent collisions altogether, which increases
network efficiency. There are various strategies for achieving this:

1. Token Passing: In this approach, a special control frame (the "token") circulates
around the network. Only the device holding the token is allowed to transmit,
preventing collisions. Examples of this method include the Token Ring protocol.
2. Polling: A central controller (polling server) asks each device in turn if it has data
to send, ensuring only one device transmits at a time.
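The token-passing idea can be sketched in a few lines. The station names, ring order, and queued frames below are made up for illustration: the token visits stations in a fixed ring order, and only the holder may send a queued frame, so no two transmissions can ever overlap.

```python
from collections import deque

# Token-passing sketch (station names and queued frames are illustrative):
# the token circulates in a fixed ring order, and only the station holding
# it may transmit one queued frame before passing the token on.
stations = {
    "A": deque(["A1"]),
    "B": deque(),
    "C": deque(["C1", "C2"]),
}
ring_order = ["A", "B", "C"]

sent = []
for _ in range(2):            # two full rotations of the token
    for name in ring_order:   # the token passes around the ring
        queue = stations[name]
        if queue:             # transmit only while holding the token
            sent.append(queue.popleft())

print(sent)  # → ['A1', 'C1', 'C2']
```

The trade-off is latency: even with no traffic, a station must wait for the token to come around before it can send.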

Ethernet

Ethernet is one of the most widely used network technologies, particularly for local area
networks (LANs). Early versions of Ethernet used CSMA/CD for collision management,
where multiple devices could share the same cable or medium. However, as Ethernet
evolved, newer technologies like full-duplex communication (where sending and receiving
can happen simultaneously) and switching have reduced or eliminated the need for
CSMA/CD. Modern Ethernet networks often use switched Ethernet where each device is
connected to a switch, and collisions are effectively eliminated due to dedicated
communication paths between devices.

These protocols and mechanisms are essential for managing how data is transmitted across
shared communication channels, optimizing performance by reducing collisions or even
preventing them in the case of collision-free protocols.

1. Point-to-Point Network
A point-to-point network is a direct link between exactly two devices (nodes).
The devices communicate directly without any intermediary, unlike in a
broadcast or multipoint network where many devices share a common channel.
Point-to-point links are typically used for simple, small-scale connections and
as the building blocks of larger routed topologies.

 Example: A direct connection between two routers, or between a computer and a
printer.

2. Routing Algorithms

Routing algorithms determine how data packets should be forwarded from one device to
another within a network. The goal is to find the most efficient path from the source to the
destination. There are two main types of routing algorithms:

 Static Routing: The paths are pre-defined by network administrators and do not
change unless manually reconfigured.
 Dynamic Routing: Routing decisions are made based on the current state of the
network (e.g., routing protocols like OSPF, BGP, and RIP).

Common routing algorithms include:

 Dijkstra’s Algorithm: Used in link-state protocols like OSPF.
 Bellman-Ford Algorithm: Used in distance-vector protocols like RIP.
 Flooding: A simple method where a packet is sent to all nodes in the network.
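As an illustration of the first of these, here is a short Dijkstra's-algorithm sketch. The graph topology and link weights are invented for the example; a link-state router would build its graph from flooded link-state advertisements instead.

```python
import heapq

# Dijkstra's shortest-path sketch over a small illustrative graph
# (the topology and link weights below are made up for demonstration).
def dijkstra(graph, source):
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry: a shorter path was already found
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

graph = {
    "A": [("B", 1), ("C", 4)],
    "B": [("C", 2), ("D", 5)],
    "C": [("D", 1)],
}
print(dijkstra(graph, "A"))  # → {'A': 0, 'B': 1, 'C': 3, 'D': 4}
```

Note how the direct A→C link (cost 4) loses to the A→B→C path (cost 3); the algorithm always commits to the cheapest known distance first.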

3. Congestion Control

Congestion control is a mechanism used in computer networks to manage traffic and avoid
network congestion. Congestion occurs when the demand for network resources exceeds the
available capacity, leading to performance degradation, packet loss, and delays.

Techniques for congestion control:

 Flow Control: Ensures that the sender does not overwhelm the receiver with data.
 Traffic Shaping: Controls the rate of traffic sent to avoid congestion.
 TCP Congestion Control: TCP mechanisms such as slow start, congestion
avoidance, fast retransmit, and fast recovery control congestion by adjusting
the transmission rate dynamically.
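The interplay of slow start and congestion avoidance can be sketched as a congestion-window trace. This is a simplified model (loss handling and fast recovery are omitted, and the ssthresh value is an assumption for illustration): cwnd doubles each round-trip time until it reaches ssthresh, then grows by one segment per RTT.

```python
# Simplified TCP congestion-window evolution, in MSS (segment) units:
# slow start doubles cwnd each RTT until ssthresh, then congestion
# avoidance adds one MSS per RTT. Loss handling is omitted for brevity.
def cwnd_trace(rtts: int, ssthresh: int = 16) -> list:
    cwnd, trace = 1, []
    for _ in range(rtts):
        trace.append(cwnd)
        if cwnd < ssthresh:
            cwnd *= 2   # slow start: exponential growth
        else:
            cwnd += 1   # congestion avoidance: linear growth
    return trace

print(cwnd_trace(8))  # → [1, 2, 4, 8, 16, 17, 18, 19]
```

The exponential phase probes for available bandwidth quickly; the linear phase then approaches the congestion point cautiously.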

4. Internetworking

Internetworking is the practice of connecting different networks together to
form a larger network. This is typically achieved using devices like routers
and gateways, which allow data to pass between networks with different
protocols, architectures, or technologies.

 Example: Connecting a local area network (LAN) to the internet (a wide-area
network, or WAN) involves internetworking.

The key role of routers in internetworking is to determine the best path for
packets between networks, based on addressing schemes and routing algorithms.

5. Quality of Service (QoS)

Quality of Service (QoS) refers to the overall performance of a network in terms of its ability
to deliver reliable and timely services. QoS mechanisms are implemented to prioritize certain
types of traffic, ensuring that critical applications (such as voice or video streaming) are
delivered without interruption, even during times of high network load.

Common QoS techniques include:

 Traffic Prioritization: Assigning priority to different types of traffic (e.g., voice over
data).
 Bandwidth Reservation: Reserving a certain amount of bandwidth for high-priority
traffic.
 Latency Control: Ensuring that packets are delivered within a certain time frame.
 Jitter Control: Reducing variability in packet arrival times, which is especially
important for real-time applications like VoIP.
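One widely used mechanism behind both bandwidth reservation and traffic shaping is the token bucket. A minimal sketch, with illustrative rate, capacity, and packet sizes: tokens accrue at a fixed rate up to a burst capacity, and a packet is admitted only if enough tokens remain.

```python
# Token-bucket traffic shaper sketch: tokens accrue at `rate` per second
# up to `capacity`; a packet may be sent only if enough tokens remain.
# The rate, capacity, and packet sizes here are illustrative assumptions.
class TokenBucket:
    def __init__(self, rate: float, capacity: float):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # maximum burst size, in tokens
        self.tokens = capacity    # bucket starts full
        self.last = 0.0

    def allow(self, now: float, cost: float) -> bool:
        # Refill based on elapsed time, capped at the bucket capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

tb = TokenBucket(rate=100.0, capacity=200.0)  # e.g. 100 units/s, 200-unit bursts
print(tb.allow(0.0, 150))  # True: the bucket starts full
print(tb.allow(0.1, 150))  # False: only ~60 tokens remain
print(tb.allow(2.0, 150))  # True: the bucket has refilled
```

The capacity bounds how bursty admitted traffic can be, while the rate bounds its long-term average, which is exactly what traffic shaping for QoS requires.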

These concepts form the backbone of ensuring that a network operates efficiently and meets
the performance requirements of applications and users.

Thank you!!
