UNIT 1
1 Describe the ISO/OSI Reference Model
The ISO/OSI reference model is a conceptual framework that divides network communication into seven layers, each with a well-defined role:
1. **Physical Layer**: This is the lowest layer of the OSI model and deals with the physical
transmission of data over the network medium. It defines the electrical, mechanical, and
functional specifications for transmitting data signals over various physical mediums like
copper wires, fiber optic cables, or wireless transmissions.
2. **Data Link Layer**: The data link layer is responsible for node-to-node communication,
ensuring reliable data transfer between adjacent network nodes. It deals with framing, error
detection, and flow control. Ethernet and Wi-Fi are examples of protocols operating at this
layer.
3. **Network Layer**: The network layer provides logical addressing and routing of data
packets between different networks. It determines the best path for data to travel from the
source to the destination across multiple networks. IP (Internet Protocol) is the primary
protocol operating at this layer.
4. **Transport Layer**: The transport layer ensures end-to-end communication between the
sender and receiver. It's responsible for segmenting data, error detection and correction,
flow control, and reassembling the segments at the destination. TCP (Transmission Control
Protocol) and UDP (User Datagram Protocol) are examples of protocols operating at this
layer.
5. **Session Layer**: The session layer establishes, maintains, and terminates connections
between applications. It manages sessions between applications, including synchronization,
checkpointing, and recovery of data exchange. This layer helps in managing dialogue control
and supports full-duplex or half-duplex communication.
6. **Presentation Layer**: The presentation layer ensures the compatibility of data formats
between different systems by translating, encrypting, or compressing data. It deals with data
translation, encryption, and decryption, and ensures that the data presented to the
application layer is in a readable format.
7. **Application Layer**: The application layer is the topmost layer of the OSI model and
provides network services directly to end-users or applications. It supports various
application-level protocols such as HTTP (Hypertext Transfer Protocol), FTP (File Transfer
Protocol), SMTP (Simple Mail Transfer Protocol), etc., enabling users to access network
resources and services.
The OSI model serves as a guideline for understanding network functionality, allowing
developers and network engineers to design, implement, and troubleshoot network systems
in a structured and standardized manner. However, it's important to note that while the OSI
model is a valuable conceptual framework, actual network implementations often combine
or deviate from its strict layering structure.
2 Describe the TCP/IP Model
The TCP/IP (Transmission Control Protocol/Internet Protocol) model, also known as the
Internet protocol suite, is a conceptual framework used for the design and implementation
of computer network protocols. It's the foundation of the modern Internet and networking.
Unlike the OSI model, which has seven layers, the TCP/IP model consists of four layers.
Here's an overview of each layer:
1. **Network Access (Link) Layer**:
- The network access layer corresponds roughly to the physical and data link layers of the OSI model.
- It handles the transmission of frames over the local network medium, including physical (MAC) addressing and media access, using technologies such as Ethernet and Wi-Fi.
2. **Internet Layer**:
- The Internet layer corresponds to the network layer in the OSI model.
- It's responsible for routing packets across different networks to their destinations.
- The primary protocol operating at this layer is the Internet Protocol (IP), which provides
logical addressing and packet forwarding.
- Other protocols like ICMP (Internet Control Message Protocol) and ARP (Address
Resolution Protocol) also operate at this layer.
3. **Transport Layer**:
- This layer corresponds to the transport layer in the OSI model.
- It provides end-to-end communication between hosts, ensuring reliable and orderly
delivery of data.
- The primary protocols at this layer are TCP (Transmission Control Protocol) and UDP (User
Datagram Protocol).
- TCP provides reliable, connection-oriented communication with features like flow control
and error recovery, while UDP provides connectionless communication without these
features.
4. **Application Layer**:
- The application layer corresponds to the combined session, presentation, and application
layers in the OSI model.
- It provides network services directly to end-users or applications.
- A wide range of protocols operates at this layer, including HTTP (Hypertext Transfer
Protocol) for web browsing, FTP (File Transfer Protocol) for file transfer, SMTP (Simple Mail
Transfer Protocol) for email, and DNS (Domain Name System) for domain name resolution.
The TCP/IP model is widely used in practice and serves as the basis for the Internet and most
modern networking technologies. It offers a simpler and more streamlined approach
compared to the OSI model, making it easier to understand and implement network
protocols and systems.
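As a rough illustration of this layering, the sketch below (Python, standard library only) sends a short application-layer message over TCP on the loopback interface: the socket API exposes the transport layer, while the operating system handles the internet and network access layers underneath. The message and the use of an OS-chosen port are illustrative choices, not part of any standard.

```python
# Minimal sketch: application-layer data travelling over TCP/IP on the loopback
# interface. The socket API exposes the transport layer (TCP); the OS kernel
# handles the internet layer (IP) and the link layer underneath.
import socket
import threading

def echo_server(server_sock):
    conn, _ = server_sock.accept()          # accept one TCP connection
    data = conn.recv(1024)                  # receive application-layer bytes
    conn.sendall(data)                      # echo them back
    conn.close()

# Transport layer: SOCK_STREAM selects TCP; AF_INET selects IPv4 at the internet layer.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))               # port 0 = let the OS pick a free port
server.listen(1)
threading.Thread(target=echo_server, args=(server,), daemon=True).start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(server.getsockname())        # TCP three-way handshake happens here
client.sendall(b"hello over TCP/IP")        # application-layer payload
print(client.recv(1024))                    # b'hello over TCP/IP'
client.close()
```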
3 What is a Modem and what are its functions?
A modem, short for modulator-demodulator, is a device used to modulate and demodulate
digital signals into analog signals and vice versa. Modems are primarily used to transmit
digital data over analog communication channels, such as telephone lines or radio
frequencies. Here's an overview of their functions and roles:
1. **Modulation**:
- The primary function of a modem is to convert digital data generated by computers or
other digital devices into analog signals suitable for transmission over analog communication
channels.
- Modulation involves altering certain properties of an analog carrier signal (such as
amplitude, frequency, or phase) to encode digital information.
- Common modulation techniques used by modems include amplitude modulation (AM),
frequency modulation (FM), and phase modulation (PM).
2. **Demodulation**:
- On the receiving end, modems demodulate incoming analog signals back into digital data
that can be understood by computers or digital devices.
- Demodulation involves extracting the digital information encoded in the analog signal by
detecting changes in its properties introduced during modulation.
- The demodulated digital data is then processed and delivered to the receiving device for
further use or display.
3. **Signal Conditioning**:
- Modems also perform various signal conditioning tasks to ensure reliable communication
over noisy analog channels.
- This may involve error correction techniques, such as forward error correction (FEC) or
automatic repeat request (ARQ), to mitigate errors introduced during transmission.
- Additionally, modems may employ techniques like equalization and echo cancellation to
compensate for signal distortion and interference.
4. **Protocol Conversion**:
- Modems often support the communication protocols used on the link, such as PPP for
dial-up Internet access, and respond to control command sets such as the Hayes AT commands.
- They handle protocol conversion between the digital data generated by the connected
device and the communication protocol used for transmission over the analog channel.
- This ensures compatibility between devices and networks with varying communication
standards.
Overall, modems play a crucial role in enabling communication between digital devices over
analog communication channels, facilitating tasks such as internet access,
telecommunication, and remote data transmission.
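As a rough illustration of the modulation and demodulation steps above, the following sketch encodes bits as the presence or absence of a sine-wave carrier (on-off keying, a simple form of amplitude modulation) and recovers them by measuring per-bit signal energy. The carrier frequency, sample rate, and threshold are illustrative values, not those of any real modem standard.

```python
# Minimal sketch of how a modem's modulate/demodulate steps could look, using
# on-off keying (a simple form of amplitude modulation). All constants are
# illustrative, not taken from a real modem standard.
import math

CARRIER_HZ = 1000          # carrier frequency
SAMPLE_RATE = 8000         # samples per second
SAMPLES_PER_BIT = 80       # duration of one bit in samples

def modulate(bits):
    """Encode each bit as presence/absence of a sine-wave carrier."""
    signal = []
    for bit in bits:
        for n in range(SAMPLES_PER_BIT):
            t = n / SAMPLE_RATE
            amplitude = 1.0 if bit else 0.0
            signal.append(amplitude * math.sin(2 * math.pi * CARRIER_HZ * t))
    return signal

def demodulate(signal):
    """Recover bits by measuring the energy in each bit-sized chunk."""
    bits = []
    for i in range(0, len(signal), SAMPLES_PER_BIT):
        chunk = signal[i:i + SAMPLES_PER_BIT]
        energy = sum(s * s for s in chunk) / len(chunk)
        bits.append(1 if energy > 0.1 else 0)   # simple threshold detector
    return bits

data = [1, 0, 1, 1, 0, 0, 1]
print(demodulate(modulate(data)) == data)       # True
```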
4 Types of Modems
Modems come in various types, each designed for specific purposes and communication
technologies. Here are some common types of modems:
1. **Dial-up Modem**:
- Dial-up modems use traditional telephone lines (PSTN - Public Switched Telephone
Network) to establish a connection to the Internet or other remote networks.
- They modulate digital data into analog signals for transmission over telephone lines and
demodulate incoming analog signals back into digital data.
- Dial-up modems are relatively slow compared to other modem types, with a maximum
speed of about 56 kbps (kilobits per second).
2. **DSL Modem**:
- DSL (Digital Subscriber Line) modems also use telephone lines, but on frequencies above
those used for voice, providing an always-on broadband connection much faster than dial-up.
3. **Cable Modem**:
- Cable modems provide Internet access over cable television infrastructure, typically
offered by cable service providers.
- They use coaxial cables to transmit data signals, allowing for higher-speed internet
connections compared to dial-up or DSL modems.
- Cable modems are commonly used in residential and commercial settings for broadband
Internet access.
4. **Fiber Modem**:
- Fiber modems, also known as fiber optic modems, enable high-speed data transmission
over fiber optic cables.
- They modulate and demodulate digital signals for transmission over optical fibers,
offering extremely high bandwidth and low latency.
- Fiber modems are used in fiber-to-the-home (FTTH) and fiber-to-the-premises (FTTP)
networks to deliver high-speed Internet, television, and voice services.
5. **Wireless Modem**:
- Wireless modems, also known as cellular modems or cellular routers, utilize cellular
networks (e.g., 3G, 4G, LTE) to provide Internet connectivity.
- They come in various forms, including USB dongles, mobile hotspots, and integrated
modem-router devices.
- Wireless modems enable mobile broadband access, allowing users to connect to the
Internet from anywhere within cellular coverage areas.
6. **Satellite Modem**:
- Satellite modems facilitate Internet connectivity via satellite communication systems.
- They modulate and demodulate signals for transmission to and from satellites in
geostationary or low-earth orbit.
- Satellite modems are often used in remote or rural areas where terrestrial broadband
infrastructure is limited or unavailable.
These are just some of the common types of modems used for various communication
technologies and network architectures. The choice of modem depends on factors such as
available infrastructure, required data speeds, geographic location, and specific application
requirements.
5 What is transmission media and what are its types?
Transmission media are physical pathways or channels that carry signals from a sender to a
receiver in a communication system. These pathways facilitate the transmission of data,
voice, video, or any other form of information between devices. There are several types of
transmission media, each with its own characteristics and applications:
1. **Twisted Pair Cable**:
- Twisted pair cables consist of pairs of insulated copper wires twisted together to reduce
electromagnetic interference (EMI).
- They are widely used in telephone networks and Ethernet LANs (e.g., Cat5e and Cat6 cabling).
2. **Coaxial Cable**:
- Coaxial cables consist of a central conductor surrounded by an insulating layer, a metallic
shield, and an outer insulating layer.
- They are commonly used in cable television (CATV) networks, broadband Internet access,
and certain Ethernet networks (e.g., 10BASE5, 10BASE2).
- Coaxial cables offer high bandwidth and better resistance to EMI compared to twisted
pair cables.
3. **Fiber Optic Cable**:
- Fiber optic cables carry data as pulses of light through thin strands of glass or plastic,
offering very high bandwidth, long transmission distances, and immunity to EMI.
4. **Wireless Transmission**:
- Wireless transmission utilizes electromagnetic waves to transmit data without the need
for physical cables.
- It includes technologies such as Wi-Fi, cellular communication (e.g., 3G, 4G, 5G),
Bluetooth, and infrared communication.
- Wireless transmission is commonly used in mobile devices, Wi-Fi networks, remote
sensors, and IoT (Internet of Things) devices.
5. **Satellite Communication**:
- Satellite communication involves transmitting data between ground stations and satellites
in orbit around the Earth.
- It uses radio waves for communication and is often used for long-distance communication
in remote or inaccessible areas.
- Satellite communication is used for broadcasting, telecommunication, internet access,
and GPS (Global Positioning System) services.
6. **Microwave Transmission**:
- Microwave transmission uses high-frequency radio waves to transmit data over short to
medium distances.
- It is commonly used in point-to-point communication links, such as microwave links
between buildings or microwave relay systems.
- Microwave transmission offers high data rates and low latency, making it suitable for
high-speed communication networks.
These are the main types of transmission media used in communication systems, each
offering different advantages and suitability for various applications depending on factors
such as distance, bandwidth requirements, and environmental conditions.
UNIT 5
1. Describe DNS
DNS, or Domain Name System, is a hierarchical decentralized naming system used to
translate domain names (e.g., www.example.com) into IP addresses (e.g., 192.0.2.1) and
vice versa. It serves as the backbone of the internet by enabling users to access websites
and other online resources using human-readable domain names instead of numeric IP
addresses.
1. **Domain Names**: Domain names are hierarchical labels used to identify websites
and other internet resources. They consist of multiple parts separated by dots, with the
top-level domain (TLD) at the rightmost part (e.g., .com, .org, .net) and the hostname
(e.g., www) at the leftmost part.
2. **DNS Resolver**: When a user enters a domain name into a web browser or
application, the system first checks its local DNS resolver cache to see if the
corresponding IP address is already stored. If not, the resolver sends a DNS query to the
DNS resolver server configured in the system's network settings.
3. **DNS Query**: The DNS resolver server receives the DNS query and checks its own
cache to see if it has the IP address for the requested domain name. If the IP address is
not cached, the resolver server starts the DNS resolution process.
4. **Root Name Servers**: If the DNS resolver server does not have the IP address
cached, it sends a query to a root name server. The root name servers are a crucial part
of the DNS hierarchy and provide information about the authoritative name servers for
top-level domains (TLDs).
5. **Top-Level Domain (TLD) Servers**: The root name server directs the DNS resolver
server to the appropriate TLD server responsible for the requested domain name's TLD
(e.g., .com, .org, .net). The TLD server provides information about the authoritative
name servers for the second-level domains within its TLD.
6. **Authoritative Name Servers**: The authoritative name servers are responsible for
storing and providing DNS records for specific domain names. When the DNS resolver
server receives the IP address information from the authoritative name servers, it caches
the information and returns the IP address to the user's system.
7. **DNS Response**: The DNS resolver server sends the IP address back to the user's
system, which then establishes a connection to the corresponding web server using the
IP address obtained from DNS resolution.
Overall, DNS plays a crucial role in translating human-readable domain names into
numeric IP addresses, facilitating seamless communication and access to internet
resources. It operates as a distributed system, with multiple DNS servers working
together to efficiently resolve domain name queries across the internet.
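A minimal sketch of steps 2-7 from the resolver's point of view: the standard-library call below hands the query to the system's configured DNS resolver and returns the addresses it obtains. The domain name is a placeholder, and the example assumes network access.

```python
# Minimal sketch: asking the system's DNS resolver for the address records of a name.
# example.com is a placeholder; substitute any domain.
import socket

def resolve(hostname):
    """Return the unique IP addresses the resolver obtained for hostname."""
    # getaddrinfo triggers a DNS query (or a cache hit) via the configured resolver.
    results = socket.getaddrinfo(hostname, None)
    return sorted({addr[4][0] for addr in results})

print(resolve("example.com"))
```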
2 Describe FTP
FTP, or File Transfer Protocol, is a standard network protocol used for transferring files
between a client and a server on a computer network, typically the Internet. It's one of
the oldest protocols still in use and remains widely used for various purposes, including
uploading files to websites, downloading software updates, and sharing files between
users.
3. **Commands**: The client communicates with the server using a set of FTP
commands sent over a control connection (usually on TCP port 21). These commands
instruct the server to perform various operations, such as listing directory contents,
uploading files, downloading files, creating directories, and deleting files.
4. **Data Transfer**: When transferring files, FTP establishes a separate data connection
for transferring file data. There are two modes of data transfer in FTP:
- **Active Mode**: In active mode, the client tells the server (via the PORT command)
which port it will listen on for incoming data. The server then initiates the data connection
from its TCP port 20 to the client's specified port to transfer data.
- **Passive Mode**: In passive mode, the server opens a data connection (on a
randomly selected port) and informs the client of the port number. The client then
connects to the server's specified port to transfer data.
Overall, FTP provides a simple and efficient way to transfer files between devices over a
network, making it a valuable tool for both individual users and organizations needing to
exchange files. Despite the rise of more secure protocols, FTP remains a widely used and
supported standard for file transfer operations.
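A minimal sketch of an FTP session using Python's standard ftplib module, which manages the control connection on port 21 and opens data connections as described above. The hostname, credentials, directory, and filename are placeholders for illustration only.

```python
# Minimal sketch using Python's standard ftplib, which speaks the control
# connection (port 21) and data connections described above.
# ftp.example.com, the credentials, and the file names are placeholders.
from ftplib import FTP

with FTP("ftp.example.com") as ftp:      # control connection on TCP port 21
    ftp.login("username", "password")    # USER / PASS commands
    ftp.set_pasv(True)                   # passive mode: server opens the data port
    ftp.cwd("/pub")                      # change working directory
    ftp.retrlines("LIST")                # directory listing over a data connection
    with open("report.txt", "wb") as f:  # download a file (name is illustrative)
        ftp.retrbinary("RETR report.txt", f.write)
```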
3 Describe SMTP
SMTP, or Simple Mail Transfer Protocol, is a standard protocol used for sending email
messages between servers over a network. It's one of the primary protocols used in
email communication and is responsible for routing and delivering email messages to
their intended recipients.
3. **Establishing Connection**: The SMTP client establishes a connection with the SMTP
server on the recipient's email domain. This is typically done over TCP port 25, although
alternative ports may be used depending on server configuration.
4. **Handshaking**: Once the connection is established, the SMTP client and server
perform a handshake process to verify communication parameters and capabilities. This
includes identifying the sender and recipient, negotiating supported authentication
methods, and confirming message delivery requirements.
5. **Message Transmission**: After the handshake, the SMTP client transmits the email
message to the SMTP server using the MAIL FROM and RCPT TO commands, specifying
the sender's and recipient's email addresses, respectively. If the recipient's server is not
directly accessible, the message may be relayed through intermediate SMTP servers.
6. **Message Routing**: The SMTP server receiving the message checks the recipient's
domain to determine the destination server responsible for delivering the message. This
involves querying DNS (Domain Name System) to resolve the recipient's domain's MX
(Mail Exchange) records, which specify the mail servers responsible for receiving email
for that domain.
7. **Delivery and Queuing**: Once the destination server is determined, the SMTP
server attempts to deliver the email message to the recipient's mailbox. If the recipient's
server is unavailable or busy, the message may be temporarily queued for delivery
retries at a later time.
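A minimal sketch of submitting a message with Python's standard smtplib, which performs the handshake and issues the MAIL FROM, RCPT TO, and DATA commands internally. The server name, port, addresses, and credentials are placeholders; port 587 with STARTTLS is shown as a common client-submission setup rather than the server-to-server port 25 described above.

```python
# Minimal sketch of the SMTP exchange described above, using Python's standard
# smtplib. The server name, port, addresses, and credentials are placeholders.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "sender@example.com"
msg["To"] = "recipient@example.org"
msg["Subject"] = "SMTP demo"
msg.set_content("Hello over SMTP.")

# Port 587 with STARTTLS is common for client submission; port 25 is server-to-server.
with smtplib.SMTP("mail.example.com", 587) as smtp:
    smtp.ehlo()                 # handshake: announce capabilities
    smtp.starttls()             # upgrade to an encrypted channel
    smtp.login("sender@example.com", "app-password")
    smtp.send_message(msg)      # issues MAIL FROM, RCPT TO, and DATA internally
```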
4 Describe HTTP
HTTP, or Hypertext Transfer Protocol, is the application-layer protocol used for transferring
web resources between clients (such as browsers) and servers, typically over TCP port 80
(or 443 for HTTPS).
5. **Methods**:
- **GET**: Retrieves data from the server specified by the URL.
- **POST**: Submits data to the server, often used for submitting form data or
uploading files.
- **PUT**: Updates an existing resource on the server with the provided data.
- **DELETE**: Deletes the specified resource on the server.
- **HEAD**: Retrieves metadata about the specified resource without retrieving the
resource itself.
6. **Status Codes**: HTTP responses include status codes indicating the outcome of the
request:
- **2xx**: Success status codes (e.g., 200 OK).
- **3xx**: Redirection status codes (e.g., 301 Moved Permanently).
- **4xx**: Client error status codes (e.g., 404 Not Found).
- **5xx**: Server error status codes (e.g., 500 Internal Server Error).
Overall, HTTP forms the backbone of web communication, enabling the transfer of
resources between clients and servers in a standardized and efficient manner. It's
constantly evolving, with newer versions introducing enhancements in performance,
security, and functionality.
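A minimal sketch of issuing a GET request and inspecting the status code with Python's standard http.client module. The hostname is a placeholder, and the example assumes network access.

```python
# Minimal sketch of a GET request and status-code check with Python's standard
# http.client. example.com is a placeholder host.
import http.client

conn = http.client.HTTPSConnection("example.com")
conn.request("GET", "/")                 # method + path
response = conn.getresponse()
print(response.status, response.reason)  # e.g. 200 OK, 301 Moved Permanently, 404 Not Found
body = response.read()                   # the resource itself (omitted for HEAD)
conn.close()
```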
UNIT 4
1 Describe UDP
UDP, or User Datagram Protocol, is a connectionless transport-layer protocol that sends
independent datagrams without establishing a connection or guaranteeing delivery.
2. **Unreliable**: UDP does not guarantee the delivery of datagrams to the destination.
There is no acknowledgment mechanism or retransmission of lost or corrupted packets
built into the protocol. As a result, applications using UDP must implement their own
mechanisms for error detection and recovery if necessary.
3. **Minimal Overhead**: UDP has minimal overhead compared to TCP. It does not
include features such as flow control, congestion control, or error recovery, which are
present in TCP. This results in faster transmission speeds and lower latency, making UDP
suitable for real-time applications where speed is critical.
4. **Datagram Structure**: UDP datagrams consist of a header and payload. The UDP
header is small, containing only four fields: source port, destination port, length, and
checksum. The payload carries the actual data being transmitted, which can be of
variable length.
5. **Port Numbers**: UDP uses port numbers to identify different applications running
on the same host. The source and destination ports in the UDP header specify the
sending and receiving applications, allowing multiple applications to communicate
concurrently over the same IP address.
6. **Usage Scenarios**: UDP is commonly used in applications where real-time
communication and low latency are prioritized over reliability, such as:
- Real-time audio and video streaming (e.g., VoIP, video conferencing).
- Online gaming, where low latency is critical for responsive gameplay.
- DNS (Domain Name System) queries, which require quick resolution of domain names
to IP addresses.
- DHCP (Dynamic Host Configuration Protocol) for dynamic IP address assignment.
7. **Checksum**: UDP includes a checksum field in its header for error detection. The
checksum is optional over IPv4 (a sender may set it to zero) but mandatory over IPv6, and it
only detects corruption; recovery from any detected errors is left to the application layer.
While UDP lacks some of the reliability features of TCP, its simplicity and low overhead
make it well-suited for applications where speed and real-time performance are
paramount, and where occasional packet loss or out-of-order delivery can be tolerated.
However, applications using UDP must handle error detection, retransmission, and
sequencing at the application layer if needed.
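A minimal sketch of UDP's connectionless exchange using Python sockets on the loopback interface: the sender transmits a single self-contained datagram with no handshake, and the receiver simply reads whatever datagram arrives next. Addresses and the payload are illustrative.

```python
# Minimal sketch of UDP's connectionless, datagram-oriented exchange on the
# loopback interface. There is no handshake and no delivery guarantee.
import socket

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)   # SOCK_DGRAM = UDP
receiver.bind(("127.0.0.1", 0))                                # OS picks a free port
addr = receiver.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"datagram payload", addr)    # one self-contained datagram, no connection

data, source = receiver.recvfrom(2048)      # source = (sender IP, sender port)
print(data, source)

sender.close()
receiver.close()
```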
2 Explain about TCP Timer Management
TCP (Transmission Control Protocol) utilizes timers as part of its operation to manage
various aspects of communication, including retransmission of lost or unacknowledged
segments, controlling congestion, and handling other network conditions. Timer
management in TCP is crucial for ensuring reliable and efficient data transmission over
the network.
1. **Retransmission Timer**:
- TCP uses a retransmission timer to determine when to resend segments that have not
been acknowledged by the receiver.
- When a TCP segment is sent, the sender starts a retransmission timer for that
segment. If an acknowledgment (ACK) for the segment is not received before the timer
expires, the segment is assumed to be lost, and it is retransmitted.
- The retransmission timer is dynamically adjusted based on network conditions, such
as round-trip time (RTT) estimates and congestion levels, to adapt to varying network
delays and packet loss rates (a minimal RTO-estimation sketch follows this list).
2. **Persistence Timer**:
- TCP employs a persistence timer to handle zero-window probes in flow control
situations.
- When the receiver's receive window shrinks to zero, indicating that it cannot accept
any more data, the sender sets a persistence timer. When this timer expires, the sender
sends a small probe segment to the receiver to prompt it to update its receive window.
- The persistence timer prevents the sender from waiting indefinitely for the receiver's
window to open, allowing it to detect and recover from the condition where the
receiver's window remains closed.
3. **Keepalive Timer**:
- TCP uses a keepalive timer to detect inactive or idle connections that may have
become stale or unresponsive.
- When a TCP connection remains idle for a certain period without any data exchange,
the sender may send periodic keepalive probes to check the liveliness of the connection.
- If the sender does not receive a response from the peer within a specified time frame,
it may consider the connection as inactive and close it to free up resources.
4. **Time-Wait Timer**:
- After a TCP connection is closed, the socket pair (source IP address, source port,
destination IP address, destination port) enters the TIME-WAIT state to ensure that any
delayed packets related to the closed connection are properly handled.
- The TIME-WAIT timer defines the duration for which the socket pair remains in the
TIME-WAIT state. Once this timer expires, the socket pair is removed from the system.
- The TIME-WAIT timer prevents delayed segments from subsequent connections with
the same socket pair from being confused with segments from the previous connection.
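As referenced under the retransmission timer above, here is a minimal sketch of how the retransmission timeout (RTO) can be derived from round-trip-time samples using the smoothed-RTT approach standardized in RFC 6298. The constants follow that RFC; the RTT samples and the initial state are illustrative.

```python
# Minimal sketch of deriving the retransmission timeout (RTO) from RTT samples
# using smoothed RTT (SRTT) and RTT variance (RTTVAR), as in RFC 6298.
ALPHA = 1 / 8      # weight for the smoothed RTT
BETA = 1 / 4       # weight for the RTT variance
K = 4              # variance multiplier
G = 0.1            # clock granularity in seconds

def update_rto(srtt, rttvar, sample):
    """Fold one new RTT measurement (seconds) into SRTT/RTTVAR and return the RTO."""
    rttvar = (1 - BETA) * rttvar + BETA * abs(srtt - sample)
    srtt = (1 - ALPHA) * srtt + ALPHA * sample
    rto = max(1.0, srtt + max(G, K * rttvar))   # RFC 6298 recommends a 1 s floor
    return srtt, rttvar, rto

srtt, rttvar = 0.2, 0.1          # state after the first measurement (illustrative)
for sample in [0.21, 0.19, 0.35, 0.22]:
    srtt, rttvar, rto = update_rto(srtt, rttvar, sample)
    print(f"sample={sample:.2f}s  SRTT={srtt:.3f}s  RTO={rto:.3f}s")
```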
3 Explain about TCP Congestion Control
TCP congestion control regulates how fast a sender injects data into the network by adjusting
its congestion window (cwnd) in response to acknowledgments and signs of congestion. Its
main phases and variants include:
1. **Slow Start**:
- When a TCP connection is established or reestablished, TCP starts in the slow start
phase.
- In slow start, the sender initially increases its congestion window (cwnd)
exponentially with each acknowledgment received from the receiver. This allows TCP to
probe the network capacity and determine an appropriate sending rate without
overwhelming the network.
- The sender continues to increase the congestion window until it reaches a threshold
known as the slow start threshold (ssthresh) or the receiver's advertised window size,
whichever is smaller.
2. **Congestion Avoidance**:
- Once the congestion window reaches the congestion avoidance threshold, TCP
transitions from slow start to congestion avoidance phase.
- In congestion avoidance, the sender increases the congestion window linearly with
each round-trip time (RTT) by adding one MSS (Maximum Segment Size) to the
congestion window for each RTT.
- This approach helps TCP probe the network more cautiously, reducing the risk of
causing congestion and avoiding rapid increases in transmission rate that could
overwhelm the network.
5. **TCP NewReno**:
- TCP NewReno is an enhancement of TCP Reno that addresses some of its limitations.
- It improves the handling of multiple packet losses within the same window of data
and reduces unnecessary retransmissions during fast recovery.
6. **TCP Cubic**:
- TCP Cubic is a modern TCP congestion control algorithm designed to provide better
performance in high-speed and long-distance networks.
- It uses a cubic function to determine the congestion window size, allowing for
smoother and more aggressive congestion avoidance.
Overall, TCP congestion control plays a crucial role in ensuring reliable and efficient data
transmission over the Internet. By dynamically adjusting the transmission rate based on
network conditions and responding to congestion events, TCP congestion control helps
maintain network stability, prevent packet loss, and optimize the performance of TCP
connections.
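A minimal sketch of the window growth described above: the congestion window doubles each RTT during slow start and grows by one MSS per RTT once it reaches the threshold. Window sizes are in MSS units, the threshold value is illustrative, and loss handling is omitted.

```python
# Minimal sketch of congestion-window growth per RTT: exponential doubling during
# slow start, then one MSS per RTT in congestion avoidance. Values are in MSS units.
def cwnd_over_time(rounds, ssthresh):
    cwnd = 1                       # slow start begins with a small window
    history = []
    for _ in range(rounds):
        history.append(cwnd)
        if cwnd < ssthresh:
            cwnd *= 2              # slow start: double every RTT
        else:
            cwnd += 1              # congestion avoidance: +1 MSS every RTT
    return history

print(cwnd_over_time(rounds=10, ssthresh=16))
# [1, 2, 4, 8, 16, 17, 18, 19, 20, 21]
```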
UNIT 2
1 Discuss about CRC
CRC, or Cyclic Redundancy Check, is an error-detection technique used in digital data
transmission to detect accidental changes to raw data. It's a widely used method for
ensuring data integrity in various communication protocols, storage systems, and digital
networks.
1. **Polynomial Division**:
- CRC operates by performing polynomial division on the input data (message) and
appending a remainder (checksum) to the message.
- The sender computes the CRC checksum by dividing the message polynomial by a
predefined generator polynomial using binary division.
- The remainder obtained from the division is the CRC checksum, which is appended to
the original message before transmission.
2. **Checksum Calculation**:
- To compute the CRC checksum, both the sender and receiver use the same agreed-upon
generator polynomial; standard choices include CRC-16/CCITT and CRC-32.
- The message is treated as a polynomial with coefficients corresponding to the bits of
the message. Before division, a number of zero bits equal to the degree of the generator
polynomial is appended to the message, leaving room for the checksum that will replace them.
- The CRC checksum is calculated by performing binary division on the message
polynomial and the generator polynomial. The remainder obtained from the division is
the CRC checksum.
3. **Error Detection**:
- At the receiver's end, the received message along with the CRC checksum is subjected
to the same polynomial division process.
- If the received message is error-free, the remainder obtained from the division will be
zero. However, if errors are present in the message, the remainder will be nonzero.
- By comparing the computed CRC checksum at the receiver's end with the received
CRC checksum, the receiver can determine whether the message has been corrupted
during transmission.
Overall, CRC is a robust and widely adopted error-detection technique that plays a vital
role in ensuring data integrity in digital communication systems, storage devices, and
networking protocols. Its simplicity, efficiency, and effectiveness make it a cornerstone of
error detection in modern digital systems.
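A minimal sketch of CRC generation and verification by bitwise long division. The CRC-16/CCITT generator polynomial (0x1021) and the initial register value used here are one common choice among many; the message is illustrative.

```python
# Minimal sketch of CRC generation and checking via bitwise long division, using
# the CRC-16/CCITT generator polynomial 0x1021 as an example choice.
def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
    """Compute a CRC-16/CCITT checksum over data."""
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):                    # process one bit per iteration
            if crc & 0x8000:                  # if the top bit is set...
                crc = (crc << 1) ^ 0x1021     # ...shift and subtract (XOR) the polynomial
            else:
                crc <<= 1
            crc &= 0xFFFF                     # keep the remainder 16 bits wide
    return crc

message = b"hello"
checksum = crc16_ccitt(message)
print(hex(checksum))

# Receiver side: recomputing over message + checksum yields 0 only if nothing changed.
received = message + checksum.to_bytes(2, "big")
print(crc16_ccitt(received) == 0)             # True for an error-free message
```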
2 Write about Pure and Slotted ALOHA
Pure ALOHA and Slotted ALOHA are two variants of the ALOHA multiple access protocol,
which is used in data communication systems to allow multiple users to transmit data
over a shared communication channel. Both variants were developed in the early days of
packet-switched data networks and played a significant role in the development of
modern networking protocols.
1. **Pure ALOHA**:
- In Pure ALOHA, users can transmit data packets at any time without regard to other
users' transmissions.
- When a user has data to send, it transmits the packet onto the channel immediately.
- After transmitting a packet, the user waits for a fixed period, typically equal to one
packet transmission time, to receive an acknowledgment (ACK) from the receiver.
- If an ACK is not received within the waiting period, the user assumes the packet was
lost due to collision with other users' transmissions and retransmits the packet after a
random backoff time.
- Pure ALOHA is simple to implement but suffers from low efficiency due to the high
probability of collisions, especially at high network loads.
2. **Slotted ALOHA**:
- Slotted ALOHA divides time into discrete slots, with each slot corresponding to the
transmission time of one data packet.
- Users are required to synchronize their transmissions to start at the beginning of a
time slot.
- Similar to Pure ALOHA, users transmit data packets when they have data to send.
- After transmitting a packet, the user waits for an ACK from the receiver during the
next time slot.
- If an ACK is not received, the user assumes a collision occurred and retransmits the
packet at the start of a later slot, chosen after a random backoff.
- Slotted ALOHA improves efficiency compared to Pure ALOHA by reducing the
probability of collisions, as transmissions occur at predefined time intervals. However, it
still suffers from inefficiency, especially at high network loads.
Comparison:
- Pure ALOHA allows users to transmit at any time, so each frame is vulnerable to collision
for two frame times, while Slotted ALOHA confines transmissions to time slots, halving the
vulnerable period and the collision probability.
- As a result, Slotted ALOHA achieves roughly twice the maximum throughput of Pure ALOHA
(about 36.8% versus 18.4% of channel capacity), but both protocols lose efficiency quickly
as network load increases.
- Both Pure ALOHA and Slotted ALOHA are simple and lightweight protocols suitable for
low-traffic environments or as introductory models for understanding multiple access
techniques. However, they are not widely used in modern networking due to their
limited efficiency and susceptibility to collisions. More advanced protocols like CSMA
(Carrier Sense Multiple Access) and its variants (CSMA/CD, CSMA/CA) are commonly
used in Ethernet and wireless networks to improve efficiency and collision avoidance.
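A minimal sketch comparing the classic throughput formulas for the two variants, S = G·e^(-2G) for Pure ALOHA and S = G·e^(-G) for Slotted ALOHA, where G is the offered load in frames per frame time and S the expected throughput. The load values printed are illustrative.

```python
# Minimal sketch of the classic ALOHA throughput formulas:
# Pure ALOHA:    S = G * e^(-2G)
# Slotted ALOHA: S = G * e^(-G)
import math

def pure_aloha(G):
    return G * math.exp(-2 * G)

def slotted_aloha(G):
    return G * math.exp(-G)

for G in [0.25, 0.5, 1.0, 2.0]:
    print(f"G={G:.2f}  pure={pure_aloha(G):.3f}  slotted={slotted_aloha(G):.3f}")

# Maxima: Pure ALOHA peaks at G=0.5 with S=1/(2e) ~ 0.184; Slotted at G=1 with S=1/e ~ 0.368.
print(pure_aloha(0.5), slotted_aloha(1.0))
```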
3 Discuss about ARP and RARP
ARP (Address Resolution Protocol) and RARP (Reverse Address Resolution Protocol) are
both network protocols used in TCP/IP networks to map between network layer
addresses (such as IP addresses) and data link layer addresses (such as MAC addresses).
While ARP is used to resolve IP addresses to MAC addresses, RARP performs the reverse
process, mapping MAC addresses to IP addresses.
In summary, ARP and RARP are both protocols used in TCP/IP networks for address
resolution between IP addresses and MAC addresses. ARP resolves IP addresses to MAC
addresses for local network communication, while RARP assigns IP addresses to devices
based on their MAC addresses, primarily in diskless workstation environments. Although
RARP is less commonly used today due to the prevalence of DHCP, ARP remains a
fundamental protocol in modern networking.
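A minimal sketch of the fixed 28-byte ARP request payload defined in RFC 826, packed with Python's standard struct module. The MAC and IP addresses are placeholders; actually sending the request would additionally require wrapping it in an Ethernet broadcast frame and using a raw socket, which is omitted here.

```python
# Minimal sketch of the fixed-size ARP request fields (RFC 826) packed with the
# standard struct module. Addresses below are placeholders for illustration.
import socket
import struct

def build_arp_request(sender_mac, sender_ip, target_ip):
    return struct.pack(
        "!HHBBH6s4s6s4s",
        1,                              # hardware type: Ethernet
        0x0800,                         # protocol type: IPv4
        6,                              # hardware address length (MAC)
        4,                              # protocol address length (IPv4)
        1,                              # opcode 1 = request (2 = reply)
        sender_mac,                     # sender MAC address
        socket.inet_aton(sender_ip),    # sender IP address
        b"\x00" * 6,                    # target MAC unknown - that is the question
        socket.inet_aton(target_ip),    # target IP address being resolved
    )

packet = build_arp_request(bytes.fromhex("aabbccddeeff"), "192.0.2.10", "192.0.2.1")
print(len(packet), packet.hex())        # 28-byte ARP payload
```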