CN Unit 3 notes
To make this decision, the router uses a piece of information in the packet header, which
can be the destination address or a label, to find the corresponding output interface
number in the forwarding table.
The following diagram shows the idea of the forwarding process in a router.
3. Other Services
a) Error Control
Error control can also be implemented in the network layer; however, because a packet may be fragmented at each router, error checking for the data at this layer is inefficient.
A checksum field is added to the datagram to control any corruption in the header. This checksum can detect corruption in the header of the datagram.
The Internet uses an auxiliary protocol, ICMP, that provides error reporting if the datagram is discarded.
b) Flow Control
Flow control regulates the amount of data a source can send without overwhelming the
receiver.
If the source computer produces data faster than the destination computer can consume it,
the receiver will be overwhelmed with data.
To control the flow of data, the receiver needs to send some feedback to the sender to
inform the latter that it is overwhelmed with data.
c) Congestion Control
Congestion is a situation in which too many datagrams are present in an area of the Internet.
Congestion may occur if the number of datagrams sent by source computers is beyond
the capacity of the network.
Some routers may drop some of the datagrams.
As more datagrams are dropped, the situation may become worse because the sender may
send duplicates of the lost packets.
If the congestion continues, sometimes a situation may reach a point where the system
collapses and no datagrams are delivered.
d) Quality of Service
As the Internet has allowed new applications such as multimedia, the quality of service
(QoS) of the communication has become more important.
The Internet has strived to provide better quality of service to support these applications.
e) Security
To provide security for a connectionless network layer, we need to have another virtual
level that changes the connectionless service to a connection-oriented service.
1. Datagram Approach – Connectionless Service
When the network layer provides a connectionless service, each packet travelling in the
Internet is an independent entity.
The switches in this type of network are called routers.
A packet may be followed by a packet coming from the same or from a different source.
Each packet is routed based on the information contained in its header:
o Source and destination addresses.
The destination address defines where it should go;
The source address defines where it comes from.
The router routes the packet based only on the destination address.
The source address may be used to send an error message to the source if the packet is
discarded.
The following diagram shows the forwarding process in a router.
We have used symbolic addresses such as A and B.
In a datagram approach the forwarding decision is based on the destination address of the
packet.
2. Virtual-Circuit Approach – Connection Oriented
In a connection-oriented service (also called virtual-circuit approach), there is a
relationship between all packets belonging to a message.
Before all datagrams in a message can be sent, a virtual connection should be set up to
define the path for the datagrams.
After connection setup, the datagrams can all follow the same path.
The packet should contain the source and destination addresses and also contain a flow
label, a virtual circuit identifier that defines the virtual path the packet should follow.
The following diagram shows the concept of connection-oriented service.
1. Setup Phase
In the acknowledgment process of the setup phase:
1. The destination sends an acknowledgment to router R4. The acknowledgment carries the global source and destination addresses so the router knows which entry in the table is to be completed. The packet also carries label 77, chosen by the destination as the incoming label for packets from A. Router R4 uses this label to complete the outgoing-label entry in its table.
2. Router R4 sends an acknowledgment to router R3 that contains its incoming label in the
table, chosen in the setup phase. Router R3 uses this as the outgoing label in the table.
3. Router R3 sends an acknowledgment to router R1 that contains its incoming label in the
table, chosen in the setup phase. Router R1 uses this as the outgoing label in the table.
4. Finally router R1 sends an acknowledgment to source A that contains its incoming label
in the table, chosen in the setup phase.
5. The source uses this as the outgoing label for the data packets to be sent to destination B.
2. Data Transfer Phase
The second phase is called the data-transfer phase. After all routers have created their
forwarding table for a specific virtual circuit, then the network-layer packets belonging to
one message can be sent one after another.
3. Teardown Phase
In the teardown phase,
Source A, after sending all packets to B, sends a special packet called a teardown
packet.
Destination B responds with a confirmation packet.
All routers delete the corresponding entries from their tables.
3.3 NETWORK LAYER PERFORMANCE
The performance of a network can be measured in terms of
1. Delay 2. Throughput 3. Packet loss 4. Congestion control
1. DELAY
The delays in a network can be divided into four types:
1. Transmission delay 2. Propagation delay 3. Processing delay 4. Queuing delay.
Transmission Delay
A source host or a router cannot send a packet immediately.
A sender needs to put the bits in a packet on the line one by one.
If the first bit of the packet is put on the line at time t1 and the last bit is put on the
line at time t2, transmission delay of the packet is (t2−t1).
The transmission delay is
Delay_tr = (Packet length) / (Transmission rate)
Example: In a Fast Ethernet LAN with the transmission rate of 100 million bits per
second and a packet of 10,000 bits, it takes (10,000)/(100,000,000) or 100 microseconds for all
bits of the packet to be put on the line.
Propagation Delay
Propagation delay is the time it takes for a bit to travel from point A to point B in
the transmission media.
The propagation delay for a packet-switched network depends on the propagation delay of each network (LAN or WAN).
It depends on the propagation speed of the media, which is 3 × 10^8 meters/second in a vacuum (and lower in cable or fiber), and on the distance of the link.
In other words, propagation delay is
Delay_pg = (Distance) / (Propagation speed)
Example: if the distance of a cable link in a point-to-point WAN is 2000 meters and the propagation speed of the bits in the cable is 2 × 10^8 meters/second, then the propagation delay is 10 microseconds.
Processing Delay
The processing delay is the time required for a destination host to receive a packet
from its input port, remove the header, perform an error detection procedure, and
deliver the packet to the output port.
The processing delay may be different for each packet, but normally is calculated
as an average.
Delay_pr = Time required to process a packet in a router or a destination host
Queuing Delay
Queuing delay can normally happen in a router.
The queuing delay for a packet in a router is measured as the time a packet waits
in the input queue and output queue of a router.
Delay_qu = The time a packet waits in input and output queues in a router
Total Delay
Assuming equal delays for the sender, routers, and receiver, the total delay
(source-to destination delay) a packet encounters can be calculated if we know the
number of routers, n, in the whole path.
Total delay = (n + 1) (Delay_tr + Delay_pg + Delay_pr) + (n) (Delay_qu)
If we have n routers, we have (n + 1) links. Therefore, we have:
(n + 1) transmission delays related to n routers and the source,
(n + 1) propagation delays related to (n + 1) links,
(n + 1) processing delays related to n routers and the destination, and
only n queuing delays related to n routers.
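The total-delay formula can be checked with a short calculation. The following is a minimal Python sketch, assuming equal per-link values as in the formula above; the sample numbers (3 routers, 10,000-bit packets, 100 Mbps links, 2000 m per link, 1 ms processing, 2 ms queuing) are illustrative, not taken from the notes.

def total_delay(n, packet_bits, rate_bps, distance_m, prop_speed_mps,
                proc_delay_s, queue_delay_s):
    # Total delay = (n + 1)(Delay_tr + Delay_pg + Delay_pr) + n * Delay_qu
    delay_tr = packet_bits / rate_bps          # transmission delay per hop
    delay_pg = distance_m / prop_speed_mps     # propagation delay per link
    per_hop = delay_tr + delay_pg + proc_delay_s
    return (n + 1) * per_hop + n * queue_delay_s

print(total_delay(3, 10_000, 100_000_000, 2_000, 2e8, 0.001, 0.002))   # result in seconds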
2. THROUGHPUT
Throughput is defined as the number of bits passing through the point in a second,
which is actually the transmission rate of data at that point.
In a path from source to destination, a packet may pass through several links
(networks), each with a different transmission rate.
To determine the throughput of the whole path assume that we have three links,
each with a different transmission rate, as shown in figure.
In the diagram, the data can flow at the rate of 200 kbps in Link1.
However, when the data arrives at router R1, it cannot pass at this rate. Data needs
to be queued at the router and sent at 100 kbps.
When data arrives at router R2, it could be sent at the rate of 150 kbps, but there is
not enough data to be sent.
In other words, the average rate of the data flow in Link3 is also 100 kbps. We
can conclude that the average data rate for this path is 100 kbps, the minimum of
the three different data rates.
The diagram also shows that we can simulate the behaviour of each link with
pipes of different sizes; the average throughput is determined by the bottleneck,
the pipe with the smallest diameter. In general, in a path with n links in series, we
have
Throughput = minimum {TR1, TR2, . . ., TRn}
Although the situation in the above diagram shows how to calculate the
throughput when the data is passed through several links, the actual situation in
the Internet is that the data normally passes through two access networks and the
Internet backbone, as shown in the following diagram.
The Internet backbone has a very high transmission rate, in the range of gigabits
per second.
The throughput is the minimum of TR1 and TR2.
Example: if a server connects to the Internet via a Fast Ethernet LAN with the data rate of
100 Mbps, but a user who wants to download a file connects to the Internet via a dial-up
telephone line with the data rate of 40 kbps, the throughput is 40 kbps.
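As a quick check of the bottleneck rule, the sketch below simply takes the minimum of the per-link transmission rates, using the rates from the two examples above.

def path_throughput(link_rates_kbps):
    # Throughput of links in series = minimum of the individual link rates.
    return min(link_rates_kbps)

print(path_throughput([200, 100, 150]))   # three-link example -> 100 kbps
print(path_throughput([100_000, 40]))     # Fast Ethernet access vs. 40 kbps dial-up -> 40 kbps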
3. PACKET LOSS
Another factor that affects the performance of a network is the number of packets lost during transmission.
When a router receives a packet while processing another packet, the received
packet needs to be stored in the input buffer waiting for its turn.
A router, however, has an input buffer with a limited size.
A time may come when the buffer is full and the next packet needs to be dropped.
The effect of packet loss on the Internet network layer is that the packet needs to be resent, which in turn may create overflow and cause more packet loss.
4. CONGESTION CONTROL
Congestion control is a mechanism for improving performance.
Congestion at the network layer is related to two issues, throughput and delay.
Delay as a function of load
When the load is much less than the capacity of the network, the delay is at a
minimum.
This minimum delay is composed of propagation delay and processing delay.
When the load reaches the network capacity, the delay increases sharply because
we now need to add the queuing delay to the total delay.
Throughput as a function of load
When the load is below the capacity of the network, the throughput increases
proportionally with the load
We expect the throughput to remain constant after the load reaches the capacity,
but instead the throughput declines sharply.
The reason is the discarding of packets by the routers.
Congestion control refers to techniques and mechanisms that can either prevent
congestion before it happens or remove congestion after it has happened.
Two Broad Categories
In general, we can divide congestion control mechanisms into two broad
categories: open-loop congestion control (prevention) and closed-loop
congestion control (removal).
Open-Loop Congestion Control
In open-loop congestion control, policies are applied to prevent congestion before
it happens.
In these mechanisms, congestion control is handled by either the source or the
destination. We give a brief list of policies that can prevent congestion.
Retransmission Policy
If the sender feels that a sent packet is lost or corrupted, the packet needs to be
retransmitted.
Retransmission may increase congestion in the network. However, a good
retransmission policy can prevent congestion.
Window Policy
The type of window at the sender may also affect congestion. The Selective
Repeat window is better than the Go-Back-N window for congestion control.
In the Go-Back-N window, when the timer for a packet times out, several packets
may be resent. This duplication may make the congestion worse.
The Selective Repeat window tries to send the specific packets that have been lost
or corrupted.
Acknowledgment Policy
The acknowledgment policy imposed by the receiver may also affect congestion.
If the receiver does not acknowledge every packet it receives, it may slow down
the sender and help prevent congestion.
A receiver may send an acknowledgment only if it has a packet to be sent or a
special timer expires.
A receiver may decide to acknowledge only N packets at a time.
Discarding Policy
A good discarding policy by the routers may prevent congestion and at the same
time may not harm the integrity of the transmission.
Admission Policy
An admission policy, which is a quality-of-service mechanism, can also prevent
congestion in virtual-circuit networks.
Switches in a flow first check the resource requirement of a flow before admitting
it to the network.
A router can deny establishing a virtual-circuit connection if there is congestion in
the network or if there is a possibility of future congestion.
Closed-Loop Congestion Control
Closed-loop congestion control mechanisms try to alleviate congestion after it
happens. Several mechanisms have been used by different protocols. We describe
a few of them here.
Backpressure
The technique of backpressure refers to a congestion control mechanism in which
a congested node stops receiving data from the immediate upstream node or
nodes.
This may cause the upstream node or nodes to become congested, and they, in
turn, reject data from their upstream node or nodes, and so on.
Backpressure is a node-to-node congestion control that starts with a node and
propagates, in the opposite direction of data flow, to the source.
Node III in the figure has more input data than it can handle. It drops some
packets in its input buffer and informs node II to slow down.
Node II, in turn, may be congested because it is slowing down the output flow of
data. If node II is congested, it informs node I to slow down, which in turn may
create congestion.
If so, node I informs the source of data to slow down. This, in time, alleviates the
congestion.
Note that the pressure on node III is moved backward to the source to remove the
congestion.
Choke Packet
A choke packet is a packet sent by a node to the source to inform it of congestion.
Note the difference between the backpressure and choke-packet methods. In
backpressure, the warning is from one node to its upstream node, although the
warning may eventually reach the source station.
In the choke-packet method, the warning is from the router, which has
encountered congestion, directly to the source station.
The intermediate nodes through which the packet has travelled are not warned.
The warning message goes directly to the source station; the intermediate routers
do not take any action. The following diagram shows the idea of a choke packet.
Implicit Signalling
In implicit signalling, there is no communication between the congested node and
the source.
For example, when a source sends several packets and there is no
acknowledgment for a while, one assumption is that the network is congested.
Explicit Signalling
The node that experiences congestion can explicitly send a signal to the source or
destination.
In the explicit-signalling method, the signal is included in the packets that carry
data. Explicit signalling can occur in either the forward or the backward direction.
Address Space
A protocol such as IPv4 that defines addresses has an address space.
An address space is the total number of addresses used by the protocol.
If a protocol uses n bits to define an address, the address space is 2^n because each bit can have two different values (0 or 1).
IPv4 uses 32-bit addresses, which means that the address space is 2^32, or 4,294,967,296.
If there were no restrictions, more than 4 billion devices could be connected to the
Internet
Notation
There are three common notations in an IPv4 address:
binary notation (base 2), dotted-decimal notation (base 256), and hexadecimal notation
(base 16).
In binary notation, an IPv4 address is displayed as 32 bits. To make the address more
readable, one or more spaces are usually inserted between each octet (8 bits).
To make the IPv4 address more compact and easier to read, it is usually written in
decimal form with a decimal point (dot) separating the bytes.
This format is referred to as dotted-decimal notation. Because each byte (octet) is only 8 bits, each number in the dotted-decimal notation is between 0 and 255.
Sometimes we can see an IPv4 address in hexadecimal notation. Each hexadecimal digit
is equivalent to four bits.
This notation is often used in network programming.
Figure shows an IP address in the three discussed notations.
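Because all three notations describe the same 32-bit number, converting between them is mechanical. A small Python sketch (the sample address is just an illustration):

def to_notations(dotted):
    # Convert a dotted-decimal IPv4 address to binary and hexadecimal notation.
    octets = [int(x) for x in dotted.split(".")]
    value = (octets[0] << 24) | (octets[1] << 16) | (octets[2] << 8) | octets[3]
    binary = " ".join(f"{o:08b}" for o in octets)   # 32 bits, grouped per octet
    hexa = f"{value:08X}"                           # 8 hexadecimal digits
    return binary, hexa

print(to_notations("128.11.3.31"))
# ('10000000 00001011 00000011 00011111', '800B031F')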
Hierarchy in Addressing
In any communication network, the addressing system is hierarchical.
A 32-bit IPv4 address is also hierarchical, but divided only into two parts.
The first part of the address, called the prefix, defines the network;
The second part of the address, called the suffix, defines the node.
The prefix length is n bits and the suffix length is (32 − n) bits.
A prefix can be fixed length or variable length.
The network identifier in the IPv4 was first designed as a fixed-length prefix. This
scheme is referred to as classful addressing.
The new scheme, which is referred to as classless addressing, uses a variable-length
network prefix. Figure shows the prefix and suffix of a 32-bit IPv4 address.
3.4.1 CLASSFUL ADDRESSING
When the Internet started, an IPv4 address was designed with a fixed-length prefix, but to
accommodate both small and large networks, three fixed-length prefixes were designed
instead of one (n = 8, n = 16, and n = 24).
The whole address space was divided into five classes (class A, B, C, D, and E)
This scheme is referred to as classful addressing.
In class A, the network length is 8 bits, but since the first bit, which is 0, defines the class,
we can have only seven bits as the network identifier.
This means there are only 2^7 = 128 networks in the world that can have a class A
address.
In class B, the network length is 16 bits, but since the first two bits define the class,
We can have only 14 bits as the network identifier. This means there are only 2^14 =
16,384 networks in the world that can have a class B address.
In class C, the network length is 24 bits, but since the first three bits define the class,
we can have only 21 bits as the network identifier. This means there are 2^21 = 2,097,152
networks in the world that can have a class C address.
Class D is not divided into prefix and suffix; it is used for multicast addresses.
All addresses that start with 1111 in binary belong to class E. Class E, like class D, is not divided into prefix and suffix; it is used as a reserve.
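Since the class is determined by the leading bits of the first byte, it can be found with simple range checks. A sketch (the helper name is my own):

def address_class(dotted):
    # Classful addressing: the class is encoded in the first byte.
    first = int(dotted.split(".")[0])
    if first < 128: return "A"   # leading bit  0
    if first < 192: return "B"   # leading bits 10
    if first < 224: return "C"   # leading bits 110
    if first < 240: return "D"   # leading bits 1110 (multicast)
    return "E"                   # leading bits 1111 (reserved)

print(address_class("10.1.2.3"))      # A
print(address_class("230.8.24.56"))   # D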
Subnetting
To alleviate address depletion, two strategies were proposed and, to some extent,
implemented: subnetting and supernetting.
Dividing a network into two or more networks is called subnetting. A subnet is a logical subdivision of an IP network.
In subnetting, a class A or class B block is divided into several subnets.
Each subnet has a larger prefix length than the original network.
Benefits
1. Reduced network traffic 2. Optimized network performance 3. Simplified network management
Subnet mask coding:
1 - represents the network or subnet portion of the address
0 - represents the host portion of the address
Designing Subnets
The subnetworks in a network should be carefully designed to enable the routing of
packets.
We assume the total number of addresses granted to the organization is N, the prefix length is n, the assigned number of addresses to each subnetwork is N_sub, and the prefix length for each subnetwork is n_sub. Then the following steps need to be carefully followed to guarantee the proper operation of the subnetworks.
1. The number of addresses in each subnetwork should be a power of 2.
2. The prefix length for each subnetwork should be found using the following formula:
n_sub = 32 − log2(N_sub)
(The first address of a block can be found from its prefix as: first address = (prefix in decimal) × 2^(32 − n) = (prefix in decimal) × N.)
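The prefix-length formula can be applied directly. A minimal sketch, assuming the number of addresses per subnet is already a power of 2:

import math

def subnet_prefix_length(n_sub_addresses):
    # n_sub = 32 - log2(N_sub)
    return 32 - int(math.log2(n_sub_addresses))

print(subnet_prefix_length(64))    # a subnet of 64 addresses needs a /26 prefix
print(subnet_prefix_length(128))   # a subnet of 128 addresses needs a /25 prefix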
Supernetting
Supernetting is the opposite of Subnetting.
In subnetting, a single big network is divided into multiple smaller subnetworks.
In Supernetting, multiple networks are combined into a bigger network termed as a
Supernetwork or Supernet.
Example:1
A classless address is given as 167.199.170.82/27. We can find three pieces of information (the number of addresses, the first address, and the last address) as follows. The number of addresses in the network is 2^(32 − n) = 2^5 = 32 addresses.
The first address can be found by keeping the first 27 bits and changing the rest of the bits to 0s.
The last address can be found by keeping the first 27 bits and changing the rest of the bits to 1s.
Address Mask
Another way to find the first and last addresses in the block is to use the address mask.
The address mask is a 32-bit number in which the n leftmost bits are set to 1s and the rest
of the bits (32 − n) are set to 0s.
A computer can easily find the address mask because it is the complement of (2^(32 − n) − 1).
The reason for defining a mask in this way is that it can be used by a computer program
to extract the information in a block, using the three bit-wise operations NOT, AND, and
OR.
1. The number of addresses in the block N = NOT (mask) + 1.
2. The first address in the block = (Any address in the block) AND (mask).
3. The last address in the block = (Any address in the block) OR [(NOT (mask)].
Example:2
A classless address is given as 167.199.170.82/27. The mask in dotted-decimal notation is 255.255.255.224. The AND, OR, and NOT operations can be applied to individual bytes.
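The three mask operations of Example 2 can be reproduced with bit-wise operators. The sketch below applies them to 167.199.170.82/27; the helper names are my own.

def to_int(dotted):
    a, b, c, d = (int(x) for x in dotted.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

def to_dotted(value):
    return ".".join(str((value >> s) & 0xFF) for s in (24, 16, 8, 0))

n = 27
mask = (0xFFFFFFFF << (32 - n)) & 0xFFFFFFFF       # 255.255.255.224
addr = to_int("167.199.170.82")

count = (~mask & 0xFFFFFFFF) + 1                   # N = NOT(mask) + 1
first = addr & mask                                # first = address AND mask
last = addr | (~mask & 0xFFFFFFFF)                 # last = address OR NOT(mask)

print(count, to_dotted(first), to_dotted(last))    # 32 167.199.170.64 167.199.170.95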
Example:3
In classless addressing, an address cannot per se define the block the address belongs to. For
example, the address 230.8.24.56 can belong to many blocks. Some of them are shown below
with the value of the prefix associated with that block.
Address Aggregation
One of the advantages of the CIDR strategy is address aggregation. When blocks of
addresses are combined to create a larger block, routing can be done based on the prefix
of the larger block.
ICANN assigns a large block of addresses to an ISP.
Each ISP in turn divides its assigned block into smaller subblocks and grants the
subblocks to its customers.
Special Addresses
Five special addresses that are used for special purposes:
1. This-host address
2. Limited-broadcast address
3. Loopback address
4. Private addresses
5. Multicast addresses
This-host Address
The only address in the block 0.0.0.0/32 is called the this-host address.
It is used whenever a host needs to send an IP datagram but it does not know its own
address to use as the source address.
Limited-broadcast Address
The only address in the block 255.255.255.255/32 is called the limited-broadcast address.
It is used whenever a router or a host needs to send a datagram to all devices in a
network.
The routers in the network, however, block the packet having this address as the
destination; the packet cannot travel outside the network.
Loopback Address
The block 127.0.0.0/8 is called the loopback address.
A packet with one of the addresses in this block as the destination address never leaves
the host; it will remain in the host.
Any address in the block is used to test a piece of software in the machine.
Private Addresses
Four blocks are assigned as private addresses: 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16,
and 169.254.0.0/16.
Multicast Addresses
The block 224.0.0.0/4 is reserved for multicast addresses.
When the DHCP client first starts, it is in the INIT state (initializing state).
The client broadcasts a discover message.
When it receives an offer, the client goes to the SELECTING state.
While it is there, it may receive more offers.
After it selects an offer, it sends a request message and goes to the REQUESTING state.
If an ACK arrives while the client is in this state, it goes to the BOUND state and uses the
IP address.
When the lease is 50 percent expired, the client tries to renew it by moving to the
RENEWING state.
If the server renews the lease, the client moves to the BOUND state again.
If the lease is not renewed and the lease time is 75 percent expired, the client moves to
the REBINDING state.
If the server agrees with the lease (ACK message arrives), the client moves to the
BOUND state and continues using the IP address; otherwise, the client moves to the INIT
state and requests another IP address.
Note that the client can use the IP address only when it is in the BOUND, RENEWING,
or REBINDING state.
The above procedure requires that the client uses three timers: renewal timer (set to 50
percent of the lease time), rebinding timer (set to 75 percent of the lease time), and
expiration timer (set to the lease time).
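Because the renewal and rebinding points are fixed fractions of the lease, the three timers can be computed directly. A small sketch (the one-hour lease is an assumed example):

def dhcp_timers(lease_time_s):
    # Renewal at 50%, rebinding at 75%, expiration at 100% of the lease time.
    return {
        "renewal": 0.50 * lease_time_s,      # move to the RENEWING state
        "rebinding": 0.75 * lease_time_s,    # move to the REBINDING state
        "expiration": 1.00 * lease_time_s,   # return to the INIT state
    }

print(dhcp_timers(3600))   # 1-hour lease -> renew at 1800 s, rebind at 2700 s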
3.4.4 NETWORK ADDRESS TRANSLATION (NAT)
Network Address Translation (NAT) is a process in which one or more local IP addresses are translated into one or more global IP addresses, and vice versa, in order to provide Internet access to the local hosts.
NAT can also translate port numbers, i.e., mask the port number of a host with another port number in the packet that will be routed to the destination.
It then makes the corresponding entries of IP address and port number in the NAT table.
NAT generally operates on a router or firewall.
Address Translation
All of the outgoing packets go through the NAT router, which replaces the source address
in the packet with the global NAT address.
All incoming packets also pass through the NAT router, which replaces the destination
address in the packet with the appropriate private address.
Translation Table
There may be tens or hundreds of private IP addresses, each belonging to one specific
host. The problem is solved if the NAT router has a translation table.
Using One IP Address
In its simplest form, a translation table has only two columns: the private address and the
external address.
When the router translates the source address of the outgoing packet, it also makes note
of the destination address— where the packet is going.
When the response comes back from the destination, the router uses the source address of
the packet (as the external address) to find the private address of the packet.
Using a Pool of IP Addresses
The use of only one global address by the NAT router allows only one private-network
host to access a given external host. To remove this restriction, the NAT router can use a
pool of global addresses.
For example, instead of using only one global address (200.24.5.8), the NAT router can
use four addresses (200.24.5.8, 200.24.5.9, 200.24.5.10, and 200.24.5.11). In this case,
four private-network hosts can communicate with the same external host at the same time
because each pair of addresses defines a separate connection.
However, there are still some drawbacks. No more than four connections can be made to
the same destination.
No private-network host can access two external server programs (e.g., HTTP and
TELNET) at the same time.
And, likewise, two private-network hosts cannot access the same external server program
(e.g., HTTP or TELNET) at the same time.
Using Both IP Addresses and Port Addresses
To allow a many-to-many relationship between private-network hosts and external server
programs, we need more information in the translation table.
For example, suppose two hosts inside a private network with addresses 172.18.3.1 and
172.18.3.2 need to access the HTTP server on external host 25.8.3.2.
If the translation table has five columns, instead of two, that include the source and
destination port addresses and the transport-layer protocol, the ambiguity is eliminated.
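A five-column translation table can be modelled as a lookup keyed on the whole connection identity. The sketch below is only an illustration of the idea; the addresses and ports are made-up values, and (as in the example above) the private source ports are assumed to be carried unchanged in the outgoing packets.

nat_table = [
    # private address, private port, external address, external port, protocol
    {"private_addr": "172.18.3.1", "private_port": 1400,
     "external_addr": "25.8.3.2", "external_port": 80, "protocol": "TCP"},
    {"private_addr": "172.18.3.2", "private_port": 1401,
     "external_addr": "25.8.3.2", "external_port": 80, "protocol": "TCP"},
]

def translate_incoming(src_addr, src_port, dst_port, protocol):
    # Map an incoming packet back to the private host of its connection.
    for row in nat_table:
        if (row["external_addr"] == src_addr and row["external_port"] == src_port
                and row["private_port"] == dst_port and row["protocol"] == protocol):
            return row["private_addr"], row["private_port"]
    return None   # no matching connection in the table

print(translate_incoming("25.8.3.2", 80, 1400, "TCP"))   # ('172.18.3.1', 1400)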
Advantages of NAT
NAT conserves legally registered IP addresses.
It provides privacy, as the IP address of the device sending and receiving the traffic is hidden.
Eliminates address renumbering when a network evolves.
Disadvantage of NAT
Translation results in switching path delays.
Certain applications will not function while NAT is enabled.
Complicates tunneling protocols such as IPsec.
Also, a router, being a network-layer device, should not tamper with port numbers (a transport-layer field), but it has to do so because of NAT.
For example, if n is 26 and the network address is 180.70.65.192, then one can combine
the two as one piece of information: 180.70.65.192/26.
Figure shows a simple forwarding module and forwarding table for a router with only
three interfaces.
The job of the forwarding module is to search the table, row by row.
In each row, the n leftmost bits of the destination address (prefix) are kept and the rest of
the bits (suffix) are set to 0s.
If the resulting address matches the address in the first column, the information in the next two columns is extracted; otherwise the search continues.
Normally, the last row has a default value in the first column, which indicates all
destination addresses that did not match the previous rows.
For example, instead of giving the address-mask combination of 180.70.65.192/26, we
can give the value of the 26 leftmost bits as shown below.
10110100 01000110 01000001 11
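The row-by-row search described above can be written in a few lines. The sketch below assumes an illustrative table in the style of this discussion (network address, prefix length, next hop, interface); the specific rows are examples, not the figure's values.

def to_int(dotted):
    a, b, c, d = (int(x) for x in dotted.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

forwarding_table = [
    # network address, prefix length, next hop, interface (illustrative values)
    ("180.70.65.192", 26, None, "m2"),
    ("180.70.65.128", 25, None, "m0"),
    ("201.4.22.0", 24, None, "m3"),
    ("0.0.0.0", 0, "180.70.65.200", "m2"),   # default row
]

def forward(dest):
    d = to_int(dest)
    for network, n, next_hop, interface in forwarding_table:
        mask = (0xFFFFFFFF << (32 - n)) & 0xFFFFFFFF if n else 0
        if d & mask == to_int(network):       # keep the n leftmost bits and compare
            return next_hop, interface
    return None

print(forward("180.70.65.140"))   # matches 180.70.65.128/25 -> interface m0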
Address Aggregation
When we use classful addressing,
There is only one entry in the forwarding table for each site outside the organization.
The entry defines the site even if that site is subnetted.
When a packet arrives at the router, the router checks the corresponding entry and
forwards the packet accordingly.
When we use classless addressing,
The number of forwarding table entries will increase.
This is because the intent of classless addressing is to divide up the whole address space
into manageable blocks.
The increased size of the table results in an increase in the amount of time needed to
search the table.
In Figure we have two routers.
R1 is connected to networks of four organizations that each use 64 addresses.
R2 is somewhere far from R1.
R1 has a longer forwarding table because each packet must be correctly routed to the
appropriate organization.
R2 can have a very small forwarding table.
For R2, any packet with destination 140.24.7.0 to 140.24.7.255 is sent out from interface m0, regardless of the organization number.
This is called address aggregation because the blocks of addresses for four organizations
are aggregated into one larger block.
R2 would have a longer forwarding table if each organization had addresses that could
not be aggregated into one block.
Hierarchical Routing
National ISPs are divided into regional ISPs, and regional ISPs are divided into local
ISPs.
If the forwarding table has a sense of hierarchy like the Internet architecture, the
forwarding table can decrease in size.
Local ISP.
A local ISP can be assigned a single, but large, block of addresses with a certain prefix
length.
The local ISP can divide this block into smaller blocks of different sizes, and assign these
to individual users and organizations, both large and small.
If the block assigned to the local ISP starts with a.b.c.d/n, the ISP can create blocks
starting with e.f.g.h/m, where m may vary for each customer and is greater than n.
All customers of the local ISP are defined as a.b.c.d/n to the rest of the Internet.
Every packet destined for one of the addresses in this large block is routed to the local
ISP.
Geographical Routing
To decrease the size of the forwarding table even further, we need to extend hierarchical
routing to include geographical routing.
We must divide the entire address space into a few large blocks.
Forwarding Table Search Algorithms
The forwarding table can be divided into buckets, one for each prefix.
The router first tries the longest prefix.
If the destination address is found in this bucket, the search is complete.
If the address is not found, the next prefix is searched, and so on.
It is obvious that this type of search takes a long time.
Label. This 20-bit field defines the label that is used to index the forwarding table in the router.
Exp. This 3-bit field is reserved for experimental purposes.
S. The one-bit stack field defines the situation of the subheader in the stack. When the bit is 1, it
means that the header is the last one in the stack.
TTL. This 8-bit field is similar to the TTL field in the IP datagram. Each visited router
decrements the value of this field. When it reaches zero, the packet is discarded to prevent
looping.
The third fragment contains the last 376 bytes of data, and the offset is now 2 × 512 / 8 = 128.
3.6.2 OPTIONS
The header of the IPv4 datagram is made of two parts: a fixed part and a variable part.
The fixed part is 20 bytes long.
The variable part comprises the options that can be a maximum of 40 bytes to
preserve the boundary of the header.
Options, as the name implies, are not required for a datagram.
They can be used for network testing and debugging.
Although options are not a required part of the IPv4 header, option processing is required of the IPv4 software.
This means that all implementations must be able to handle options if they are present
in the header. Options are divided into two broad categories: single-byte options and
multiple-byte options.
Single-Byte Options
There are two single-byte options.
No Operation
A no-operation option is a 1-byte option used as a filler between options.
End of Option
An end-of-option option is a 1-byte option used for padding at the end of the option field. It,
however, can only be used as the last option.
Multiple-Byte Options
There are four multiple-byte options.
Record Route
A record route option is used to record the Internet routers that handle the datagram. It can list
up to nine router addresses. It can be used for debugging and management purposes.
Strict Source Route
A strict source route option is used by the source to predetermine a route for the datagram as it
travels through the Internet.
Dictation of a route by the source can be useful for several purposes.
The sender can choose a route with a specific type of service, such as minimum delay or
maximum throughput.
Loose Source Route
A loose source route option is similar to the strict source route, but it is less rigid. Each router in
the list must be visited, but the datagram can visit other routers as well.
Timestamp
A timestamp option is used to record the time of datagram processing by a router.
The time is expressed in milliseconds from midnight, Universal Time (Greenwich Mean Time).
Knowing the time a datagram is processed can help users and managers track the behavior of the
routers in the Internet.
3.6.3 SECURITY OF IPV4 DATAGRAMS
There are three security issues that are particularly applicable to the IP protocol: packet sniffing,
packet modification, and IP spoofing.
Packet Sniffing
o An intruder may intercept an IP packet and make a copy of it.
o Packet sniffing is a passive attack, in which the attacker does not change the contents of the
packet.
o This type of attack is very difficult to detect because the sender and the receiver may never
know that the packet has been copied.
o Although packet sniffing cannot be stopped, encryption of the packet can make the
attacker’s effort useless. The attacker may still sniff the packet, but the content is not
detectable.
Packet Modification
o The second type of attack is to modify the packet.
o The attacker intercepts the packet, changes its contents, and sends the new packet to the
receiver.
o The receiver believes that the packet is coming from the original sender.
o This type of attack can be detected using a data integrity mechanism.
o The receiver, before opening and using the contents of the message, can use this
mechanism to make sure that the packet has not been changed during the transmission.
IP Spoofing
An attacker can masquerade as somebody else and create an IP packet that carries the source
address of another computer.
An attacker can send an IP packet to a bank pretending that it is coming from one of the
customers. This type of attack can be prevented using an origin authentication mechanism.
IPSec
The IP packets today can be protected from the previously mentioned attacks using a protocol
called IPSec (IP Security).
This protocol, which is used in conjunction with the IP protocol, creates a connection-oriented
service between two entities in which they can exchange IP packets without worrying about the
three attacks discussed above.
IPSec provides the following four services:
Defining Algorithms and Keys. The two entities that want to create a secure channel
between them can agree on some available algorithms and keys to be used for security
purposes.
Packet Encryption. The packets exchanged between two parties can be encrypted for
privacy using one of the encryption algorithms and a shared key agreed upon in the first
step. This makes the packet sniffing attack useless.
Data Integrity. Data integrity guarantees that the packet is not modified during the
transmission. If the received packet does not pass the data integrity test, it is discarded.
This prevents the second attack, packet modification, described above.
Origin Authentication. IPSec can authenticate the origin of the packet to be sure that the
packet is not created by an imposter. This can prevent IP spoofing attacks as described
above.
3.7 ICMPv4
IPv4 has no error-reporting or error-correcting mechanism; it also lacks a mechanism for host and management queries.
The Internet Control Message Protocol version 4 (ICMPv4) has been designed to compensate for these two deficiencies.
o It is a companion to the IP protocol. ICMP itself is a network-layer protocol.
o However, its messages are not passed directly to the data-link layer as would be expected.
o Instead, the messages are first encapsulated inside IP datagrams before going to the lower
layer.
o When an IP datagram encapsulates an ICMP message, the value of the protocol field in the IP datagram is set to 1 to indicate that the IP payload is an ICMP message.
3.7.1 MESSAGES
ICMP messages are divided into two broad categories: error-reporting messages and query
messages.
The error-reporting messages report problems that a router or a host (destination) may
encounter when it processes an IP packet.
The query messages, which occur in pairs, help a host or a network manager get specific
information from a router or another host.
For example, nodes can discover their neighbours. Also, hosts can discover and learn
about routers on their network and routers can help a node redirect its messages.
An ICMP message has an 8-byte header and a variable-size data section.
Although the general format of the header is different for each message type, the first 4
bytes are common to all.
As Figure shows, the first field, ICMP type, defines the type of the message.
The code field specifies the reason for the particular message type.
The last common field is the checksum field (to be discussed later in the chapter). The
rest of the header is specific for each message type.
o The data section in error messages carries information for finding the original
packet that had the error. In query messages, the data section carries extra
information based on the type of query.
Error Reporting Messages
Since IP is an unreliable protocol, one of the main responsibilities of ICMP is to report
some errors that may occur during the processing of the IP datagram. ICMP does not
correct errors, it simply reports them.
Error correction is left to the higher-level protocols. Error messages are always sent to the
original source because the only information available in the datagram about the route is
the source and destination IP addresses. ICMP uses the source IP address to send the
error message to the source (originator) of the datagram.
To make the error-reporting process simple, ICMP follows some rules in reporting
messages.
o First, no error message will be generated for a datagram having a multicast
address or special address (such as this host or loopback).
o Second, no ICMP error message will be generated in response to a datagram
carrying an ICMP error message.
o Third, no ICMP error message will be generated for a fragmented datagram that is
not the first fragment.
Note that all error messages contain a data section that includes the IP header of the
original datagram plus the first 8 bytes of data in that datagram. The original datagram header is
added to give the original source, which receives the error message, information about the
datagram itself. The 8 bytes of data are included because the first 8 bytes provide information
about the port numbers (UDP and TCP) and sequence number (TCP). This information is needed
so the source can inform the protocols (TCP or UDP) about the error. ICMP forms an error
packet, which is then encapsulated in an IP datagram (see Figure 19.9).
Destination Unreachable
The most widely used error message is the destination unreachable (type 3).
This message uses different codes (0 to 15) to define the type of error message and the
reason why a datagram has not reached its final destination.
For example, code 0 tells the source that a host is unreachable.
This may happen, for example, when we use the HTTP protocol to access a web page, but the server is down.
The message “destination host is not reachable” is created and sent back to the source.
Source Quench
Another error message is called the source quench (type 4) message, which informs the
sender that the network has encountered congestion and the datagram has been dropped.
The source needs to slow down sending more datagrams.
In other words, ICMP adds a kind of congestion control mechanism to the IP protocol by
using this type of message.
Redirection Message
The redirection message (type 5) is used when the source uses a wrong router to send out
its message.
The router redirects the message to the appropriate router, but informs the source that it
needs to change its default router in the future. The IP address of the default router is sent
in the message.
Time Exceeded
o When the TTL value becomes 0, the datagram is dropped by the visiting router
and a time exceeded message (type 11) with code 0 is sent to the source to inform
it about the situation.
o The time-exceeded message (with code 1) can also be sent when not all fragments
of a datagram arrive within a predefined period of time.
Parameter Problem
A parameter problem message (type 12) can be sent when either there is a problem in the header
of a datagram (code 0) or some options are missing or cannot be interpreted (code 1).
Query Messages
Interestingly, query messages in ICMP can be used independently without relation to an
IP datagram. Of course, a query message needs to be encapsulated in a datagram, as a
carrier.
Query messages are used to probe or test the liveliness of hosts or routers in the Internet,
find the one-way or the round-trip time for an IP datagram between two devices, or even
find out whether the clocks in two devices are synchronized. Naturally, query messages
come in pairs: request and reply.
The echo request (type 8) and the echo reply (type 0) pair of messages are used by a host
or a router to test the liveliness of another host or router.
A host or router sends an echo request message to another host or router; if the latter is
alive, it responds with an echo reply message.
Two debugging tools, ping and traceroute, use this pair of messages.
The timestamp request (type 13) and the timestamp reply (type 14) pair of messages are
used to find the round-trip time between two devices or to check whether the clocks in
two devices are synchronized.
The timestamp request message sends a 32-bit number, which defines the time the message is sent.
The timestamp reply resends that number, but also includes two new 32-bit numbers
representing the time the request was received and the time the response was sent.
If all timestamps represent Universal time, the sender can calculate the one-way and
round-trip time.
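Given the three timestamps plus the time the reply arrives back, the sender computes the timing as in the sketch below (the formulas are the standard ones for ICMP timestamp messages; the sample values are invented).

def icmp_timing(originate, receive, transmit, returned):
    # All values are in milliseconds from midnight, Universal Time.
    sending_time = receive - originate     # request direction
    receiving_time = returned - transmit   # reply direction
    round_trip = sending_time + receiving_time
    return sending_time, receiving_time, round_trip

print(icmp_timing(46_000_000, 46_000_020, 46_000_025, 46_000_040))   # (20, 15, 35)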
The routing table is the table that is built up by the routing algorithms.
NETWORK AS A GRAPH
A network can be represented as a graph. Nodes are denoted as vertices and links are denoted as edges.
UNICAST ROUTING ALGORITHMS:
1. Intra domain routing Algorithm
1. Distance vector routing
2. Link state routing
2. Inter Domain Routing Algorithm
1. Path Vector Routing Algorithm
3.8.1 DISTANCE-VECTOR ROUTING ALGORITHM
Each node constructs a one-dimensional array (a vector) containing the “distances”
(costs) to all other nodes.
Each node knows the cost of the link to each of its directly connected neighbors.
Working of Distance Vector Routing Algorithm
Thus all nodes in the network apply steps similar to those of node A to generate their final routing tables.
The process of getting consistent routing information to all the nodes is called convergence.
Shown below are the final distances stored at each node.
When does a given node decide to send a routing update to its neighbors?
1. Periodic update - each node automatically sends an update message periodically even if
nothing has changed.
2. Triggered update – whenever a node notices a link failure or receives an update from one
of its neighbors that causes it to change one of the routes in its routing table.
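The update a node performs when it receives a neighbour's vector is the usual Bellman-Ford relaxation: add the cost of the link to the neighbour and keep whichever distance is smaller. A minimal Python sketch with made-up node names and costs:

INF = float("inf")

def dv_update(my_vector, neighbour, link_cost, neighbour_vector):
    # Each vector maps destination -> (cost, next hop).
    changed = False
    for dest, (cost, _) in neighbour_vector.items():
        new_cost = link_cost + cost
        if new_cost < my_vector.get(dest, (INF, None))[0]:
            my_vector[dest] = (new_cost, neighbour)   # cheaper path found via the neighbour
            changed = True
    return changed   # True would trigger an update to our own neighbours

a = {"A": (0, "-"), "B": (2, "B")}
b_vector = {"A": (2, "A"), "B": (0, "-"), "C": (5, "C")}
dv_update(a, "B", 2, b_vector)
print(a)   # {'A': (0, '-'), 'B': (2, 'B'), 'C': (7, 'B')}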
What happens when a link or node fails?
The nodes that notice first send new lists of distances to their neighbors, and normally the system
settles down fairly quickly to a new state.
How does a node detect a failure?
A node continually tests the link to another node by sending a control packet and seeing
if it receives an acknowledgment.
A node determines that the link (or the node at the other end of the link) is down if it does
not receive the expected periodic routing update for the last few update cycles.
Count to Infinity
A problem with distance-vector routing is that any decrease in cost propagates quickly,
but any increase in cost will propagate slowly.
For a routing protocol to work properly, if a link is broken (cost becomes infinity), every
other router should be aware of it immediately, but in distance-vector routing, this takes
some time.
The problem is referred to as count to infinity. Example: the A→E link fails.
It sometimes takes several updates before the cost for a broken link is recorded as infinity
by all routers.
Split Horizon
The technique to improve the time to stabilize routing is called split horizon. The idea is
that when a node sends a routing update to its neighbours, it does not send those routes it
learned from each neighbour back to that neighbour.
Poison Reverse
Using the split-horizon strategy has one drawback.
Normally, the corresponding protocol uses a timer, and if there is no news about a route, the node deletes the route from its table. With split horizon, a neighbour cannot tell whether a route was omitted deliberately or because there is genuinely no news about it.
In the poison reverse strategy, a node still advertises such a route back to the neighbour it learned it from, but sets its distance to infinity as a warning: do not use this value; what I know about this route comes from you.
Advantages of Distance Vector routing
It is simpler to configure and maintain than link state routing.
Disadvantages of Distance Vector routing
It is slower to converge than link state.
It is at risk from the count-to-infinity problem.
It creates more traffic than link state since a hop count change must be propagated to all
routers and processed on each router. Hop count updates take place on a periodic basis,
even if there are no changes in the network topology, so bandwidth-wasting broadcasts still
occur.
For larger networks, distance vector routing results in larger routing tables than link state
since each router must know about all other routers. This can also lead to congestion on
WAN links.
The spanning tree selected by A and E is such that the communication does not pass through D
as a middle node.
Similarly, the spanning tree selected by B is such that the communication does not pass through
C as a middle node.
Creation of Spanning Trees
Path-vector routing, like distance-vector routing, is an asynchronous and distributed
routing algorithm.
The spanning trees are made, gradually and asynchronously, by each node.
When a node is booted, it creates a path vector based on the information it can obtain
about its immediate neighbor.
A node sends greeting messages to its immediate neighbors to collect these pieces of
information.
Each node, after the creation of the initial path vector, sends it to all its immediate neighbours.
Each node, when it receives a path vector from a neighbour, updates its path vector using an
equation
We can define this equation as
Path(x, y) = best {Path(x, y), [(x) + Path(v, y)]} for all v's in the internet
In this equation, the operator (+) means to add x to the beginning of the path.
The policy is defined by selecting the best of multiple paths.
Path-vector routing also imposes one more condition on this equation:
If Path (v,y) includes x, that path is discarded to avoid a loop in the path.
In other words, x does not want to visit itself when it selects a path to y.
Figure shows the path vector of node C after two events.
In the first event, node C receives a copy of B’s vector, which improves its vector: now it
knows how to reach node A.
In the second event, node C receives a copy of D’s vector, which does not change its
vector.
As a matter of fact the vector for node C after the first event is stabilized and serves as its
forwarding table.
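The same kind of update can be sketched for path vectors: a path that already contains the receiving node is discarded (loop avoidance); otherwise the node prepends itself and keeps the better path. In the sketch below "best" is taken to mean fewest hops, which is only one possible policy, and the node names are invented.

def pv_update(x, my_paths, neighbour_paths):
    # Each table maps destination -> list of nodes (the path).
    for dest, path in neighbour_paths.items():
        if x in path:
            continue                    # path already contains x: discard to avoid a loop
        candidate = [x] + path          # the (+) operator: add x to the beginning of the path
        best = my_paths.get(dest)
        if best is None or len(candidate) < len(best):
            my_paths[dest] = candidate

c_paths = {"C": ["C"], "B": ["C", "B"], "D": ["C", "D"]}
b_paths = {"A": ["B", "A"], "B": ["B"], "C": ["B", "C"]}
pv_update("C", c_paths, b_paths)
print(c_paths["A"])   # ['C', 'B', 'A'] - C now knows how to reach A through B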
3.9 UNICAST ROUTING PROTOCOLS
A protocol needs to define its domain of operation, the messages exchanged, communication
between routers, and interaction with protocols in other domains. There are three common
protocols used in the Internet: Routing Information Protocol (RIP), Open Shortest Path First (OSPF), and Border Gateway Protocol (BGP).
3.9.1 ROUTING INFORMATION PROTOCOL (RIP)
Forwarding Tables
The routers in an autonomous system need to keep forwarding tables to forward packets to their
destination networks.
A forwarding table in RIP is a three-column table in which
The first column is the address of the destination network,
The second column is the address of the next router to which the packet should be
forwarded, and
The third column is the cost (the number of hops) to reach the destination network.
Request
A request message is sent by a router that has just come up or by a router that has some time-out
entries. A request message can ask about specific entries or all entries.
Response
A response (or update) message can be either solicited or unsolicited.
A solicited response message is sent only in answer to a request message.
It contains information about the destination specified in the corresponding request message.
An unsolicited response message, on the other hand, is sent periodically, every 30
seconds or when there is a change in the forwarding table.
RIP Algorithm
RIP implements the same algorithm as the distance-vector routing algorithm.
However, some changes need to be made to the algorithm to enable a router to update its forwarding table:
❑ Instead of sending only distance vectors, a router needs to send the whole contents of
its forwarding table in a response message.
❑ The receiver adds one hop to each cost and changes the next router field to the address
of the sending router.
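Those two changes can be seen in a short sketch of how a router folds a received response message into its forwarding table. The addresses and hop counts are invented, and the sketch is simplified (RIP treats 16 hops as infinity).

RIP_INFINITY = 16

def process_response(forwarding_table, sender, received_table):
    # Each table maps destination network -> (next router, hop-count cost).
    for network, (_, cost) in received_table.items():
        new_cost = min(cost + 1, RIP_INFINITY)   # add one hop for the link to the sender
        old = forwarding_table.get(network)
        if old is None or new_cost < old[1] or old[0] == sender:
            forwarding_table[network] = (sender, new_cost)   # next router = sending router

r1 = {"net1": ("-", 1)}
r2_response = {"net1": ("-", 1), "net2": ("-", 1), "net3": ("R3", 2)}
process_response(r1, "R2", r2_response)
print(r1)   # {'net1': ('-', 1), 'net2': ('R2', 2), 'net3': ('R2', 3)}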
Timers in RIP
RIP uses three timers to support its operation.
1. The periodic timer controls the advertising of regular update messages.
Each router has one periodic timer that is randomly set to a number between 25 and
35 seconds
The timer counts down; when zero is reached, the update message is sent, and the
timer is randomly set once again.
2. The expiration timer governs the validity of a route.
o When a router receives update information for a route, the expiration timer is set
to 180 seconds for that particular route.
o Every time a new update for the route is received, the timer is reset.
o Every route has its own expiration timer.
3. The garbage collection timer is used to purge a route from the forwarding table.
o When the information about a route becomes invalid, the router does not
immediately purge that route from its table.
o Instead, it continues to advertise the route with a metric value of 16.
o At the same time, a garbage collection timer is set to 120 seconds for that route.
o When the count reaches zero, the route is purged from the table.
o This timer allows neighbours to become aware of the invalidity of a route prior to
purging.
3.9.2 Open Shortest Path First (OSPF)
Open Shortest Path First (OSPF) is also an intradomain routing protocol like RIP, but
it is based on the link-state routing protocol.
OSPF is an open protocol, which means that the specification is a public document.
OSPF Packet Format
Figure shows the forwarding tables for a simple AS. Comparing the forwarding tables for OSPF and RIP in the same AS, we find that the only difference is the cost values.
In other words, if we use the hop count for OSPF, the tables will be exactly the same. The
reason for this consistency is that both protocols use the shortest-path trees to define the
best route from a source to a destination.
3.9.3 BORDER GATEWAY PROTOCOL (BGP)
The Border Gateway Protocol version 4 (BGP4) is the only inter domain routing
protocol used in the Internet today.
BGP4 is based on the path-vector algorithm we described before, but it is tailored to
provide information about the reachability of networks in the Internet.
Autonomous Systems
The Internet is organized as autonomous systems (ASs).
Each AS is under the control of a single administrative entity.
The following figure shows a simple network with two autonomous systems.
Basic idea of AS
To provide an additional way to hierarchically aggregate routing information in a large
internet, thus improving scalability.
Routing problem is divided into two parts:
a. Routing within a single autonomous system - intradomain routing
b. Routing between autonomous systems - interdomain routing
There are two major interdomain routing protocols – EGP and BGP
BGP:
BGP makes virtually no assumptions about how autonomous systems are
interconnected—they form an arbitrary graph.
Local traffic is traffic that originates at or terminates on nodes within an AS.
Transit traffic is traffic that passes through an AS.
Classification of AS
1. Stub AS—an AS that has only a single connection to one other AS; such an AS will only
carry local traffic. (eg) small corporation in the figure is a stub AS.
2. Multihomed AS—an AS that has connections to more than one other AS but that refuses
to carry transit traffic, (eg) large corporation in the figure is a multihomed AS.
3. Transit AS—an AS that has connections to more than one other AS and that is designed
to carry both transit and local traffic, (eg) the backbone providers in the figure are transit ASs.
4. Peering Point-Many providers arrange to interconnect with each other at a single point.
Basics of BGP
Each AS has one or more border routers through which packets enter and leave the AS.
A border router is simply an IP router that is charged with the task of forwarding packets
between autonomous systems.
Each AS that participates in BGP must also have at least one BGP speaker.
BGP speaker
o A router that “speaks” BGP to other BGP speakers in other autonomous systems.
o Border routers can also act as BGP speakers.
o BGP advertises complete paths as an enumerated list of autonomous systems to
reach a particular network.
o It is sometimes called a path-vector protocol.
Types:
1. External BGP (eBGP)
2. Internal BGP (iBGP)
1. External BGP (eBGP)
The figure also shows the simplified update messages sent by routers involved in the
eBGP sessions.
The circled number defines the sending router in each case.
For example, message number 1 is sent by router R1 and tells router R5 that N1, N2, N3,
and N4 can be reached through router R1 (R1 gets this information from the
corresponding intradomain forwarding table).
Router R5 can now add these pieces of information at the end of its forwarding table.
When R5 receives any packet destined for these four networks, it can use its forwarding
table and find that the next router is R1.
There are two problems that need to be addressed:
1. Some border routers do not know how to route a packet destined for non neighbor ASs. For
example, R5 does not know how to route packets destined for networks in AS3 and AS4. Routers
R6 and R9 are in the same situation as R5: R6 does not know about networks in AS2 and AS4;
R9 does not know about networks in AS2 and AS3.
2. None of the nonborder routers know how to route a packet destined for any networks in other
ASs.
To address the above two problems, we need to allow all pairs of routers (border or nonborder) to run the second variation of the BGP protocol, iBGP.
2. Internal BGP (iBGP)
First, if an AS has only one router, there cannot be an iBGP session. For example, we
cannot create an iBGP session inside AS2 or AS4 in our internet.
Second, if there are n routers in an autonomous system, there should be [n × (n − 1) / 2]
iBGP sessions in that autonomous system (a fully connected mesh) to prevent loops in
the system.
In other words, each router needs to advertise its own reachability to the peer in the
session instead of flooding what it receives from another peer in another session.
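As a quick check of the n × (n − 1) / 2 figure, the short sketch below (illustrative only;
the router names are made up) enumerates the sessions of a fully connected iBGP mesh.

# Python sketch: an iBGP full mesh needs one session per pair of routers.
from itertools import combinations

routers = ["R1", "R2", "R3", "R4"]              # n = 4 routers in one AS (assumed)
sessions = list(combinations(routers, 2))       # every unordered pair gets one session

print(len(sessions))    # 4 * 3 / 2 = 6 iBGP sessions
print(sessions)         # [('R1', 'R2'), ('R1', 'R3'), ..., ('R3', 'R4')]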
The first message (numbered 1) is sent by R1 announcing that networks N8 and N9 are
reachable through the path AS1-AS2, but the next router is R1.
This message is sent, through separate sessions, to R2, R3, and R4. Routers R2, R4, and
R6 do the same thing but send different messages to different destinations.
The interesting point is that, at this stage, R3, R7, and R8 create sessions with their peers,
but they actually have no message to send.
Messages
BGP uses four types of messages for communication between the BGP speakers across
the ASs and inside an AS: open, update, keepalive, and notification. All BGP packets
share the same common header.
❑ Open Message. To create a neighborhood relationship, a router running BGP opens a
TCP connection with a neighbor and sends an open message.
❑ Update Message. The update message is the heart of the BGP protocol. It is used by a
router to withdraw destinations that have been advertised previously, to announce a route
to a new destination, or both. Note that BGP can withdraw several destinations that were
advertised before, but it can only advertise one new destination (or multiple destinations
with the same path attributes) in a single update message.
❑ Keepalive Message. BGP peers exchange keepalive messages regularly (before their hold
time expires) to tell each other that they are alive.
❑ Notification. A notification message is sent by a router whenever an error condition is
detected or a router wants to close the session.
Performance
BGP performance can be compared with that of RIP. BGP speakers exchange many messages
to create forwarding tables, but BGP is free from the loop and count-to-infinity problems
that occur in RIP.
3.10 MULTICASTING BASICS
3.10.1 Multicast Addresses
In multicast communication, there is only one sender, but there are many receivers.
If a new group is formed with some active members, an authority can assign an unused
multicast address to this group to uniquely define it.
A host, which is a member of n groups, actually has (n + 1) addresses:
o one unicast address that is used for source or destination address in unicast
communication
o n multicast addresses that are used only for destination addresses to receive
messages sent to a group.
Multicast Addresses in IPv4
Multicast addresses in IPv4 belong to a large block of addresses that are specially
designed for this purpose.
In classful addressing, all of class D was composed of these addresses; in classless
addressing, the same block is used but is referred to as the block 224.0.0.0/4
(from 224.0.0.0 to 239.255.255.255).
Four bits define the block; the rest of the bits are used as the identifier for the group.
The number of addresses in the multicast block is huge (2^28).
We definitely cannot have that many individual groups.
However, the block is divided into several subblocks and each subblock is used in a
particular multicast application.
The following gives some of the common subblocks:
Local Network Control Block.
o The subblock 224.0.0.0/24 is assigned to a multicast routing protocol to be used
inside a network, which means that the packet with a destination address in this
range cannot be forwarded by a router.
o In this subblock, the address 224.0.0.0 is reserved, the address 224.0.0.1 is used to
send datagrams to all hosts and routers inside a network, and the address
224.0.0.2 is used to send datagrams to all routers inside a network.
Internetwork Control Block.
o The subblock 224.0.1.0/24 is assigned to a multicast routing protocol to be used in
the whole Internet, which means that the packet with a destination address in this
range can be forwarded by a router.
Source-Specific Multicast (SSM) Block.
o The block 232.0.0.0/8 is used for source-specific multicast routing.
GLOP Block.
o The block 233.0.0.0/8 is called the GLOP block.
o This block defines a range of addresses that can be used inside an autonomous
system.
o Each autonomous system is assigned a 16-bit number.
Administratively Scoped Block.
o The block 239.0.0.0/8 is called the Administratively Scoped Block.
o The addresses in this block are used in a particular area of the Internet.
o The packet whose destination address belongs to this range is not supposed to
leave the area.
o In other words, an address in this block is restricted to an organization.
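The subblocks above can be checked with Python's standard ipaddress module; the sketch
below (illustrative, using only the prefixes listed in the notes) classifies a multicast
address into one of them.

# Python sketch: classify a multicast IPv4 address into one of the listed subblocks.
import ipaddress

SUBBLOCKS = {
    "224.0.0.0/24": "Local Network Control Block",
    "224.0.1.0/24": "Internetwork Control Block",
    "232.0.0.0/8":  "Source-Specific Multicast (SSM) Block",
    "233.0.0.0/8":  "GLOP Block",
    "239.0.0.0/8":  "Administratively Scoped Block",
}

def classify(addr: str) -> str:
    ip = ipaddress.ip_address(addr)
    if ip not in ipaddress.ip_network("224.0.0.0/4"):
        return "not a multicast address"
    for prefix, name in SUBBLOCKS.items():
        if ip in ipaddress.ip_network(prefix):
            return name
    return "multicast address outside the listed subblocks"

print(classify("224.0.0.2"))     # Local Network Control Block (all routers in a network)
print(classify("232.43.14.7"))   # Source-Specific Multicast (SSM) Block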
Selecting Multicast Address
To select a multicast address to be assigned to a group is not an easy task.
The selection of address depends on the type of application.
At the data-link layer, the Ethernet multicast physical addresses reserved for IP multicast
range from 01:00:5E:00:00:00 to 01:00:5E:7F:FF:FF; the rightmost 23 bits of the multicast
IP address are mapped into this range.
Example 21.1
Change the multicast IP address 232.43.14.7 to an Ethernet multicast physical address.
Solution
We can do this in two steps:
a. We write the rightmost 23 bits of the IP address in hexadecimal. This can be done by
changing the rightmost 3 bytes to hexadecimal and then subtracting 8 from the leftmost
digit if it is greater than or equal to 8. In our example, the result is 2B:0E:07.
b. We add the result of part a to the starting Ethernet multicast address, which is
01:00:5E:00:00:00. The result is
01:00:5E:2B:0E:07
Example 21.2
Change the multicast IP address 238.212.24.9 to an Ethernet multicast address.
Solution
a. The rightmost 3 bytes in hexadecimal are D4:18:09. We need to subtract 8 from the
leftmost digit, resulting in 54:18:09.
b. We add the result of part a to the Ethernet multicast starting address. The result is
01:00:5E:54:18:09
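The two examples above can also be checked programmatically. The following sketch
(illustrative; the function name is made up) maps a multicast IPv4 address to its Ethernet
multicast address by keeping the rightmost 23 bits and placing them after the 01:00:5E
prefix, which is exactly what the manual "subtract 8" step achieves.

# Python sketch: multicast IPv4 address -> Ethernet multicast physical address.
import ipaddress

def multicast_ip_to_mac(addr: str) -> str:
    ip = int(ipaddress.ip_address(addr))
    low23 = ip & 0x7FFFFF                     # keep only the rightmost 23 bits
    mac = 0x01005E000000 | low23              # place them after the 01:00:5E prefix
    return ":".join(f"{(mac >> s) & 0xFF:02X}" for s in range(40, -1, -8))

print(multicast_ip_to_mac("232.43.14.7"))    # 01:00:5E:2B:0E:07 (Example 21.1)
print(multicast_ip_to_mac("238.212.24.9"))   # 01:00:5E:54:18:09 (Example 21.2)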
3.10.3 Multicast Forwarding
Another important issue in multicasting is the decision a router needs to make to forward a
multicast packet. Forwarding in unicast and multicast communication is different in two aspects:
1. In multicast communication, the destination of the packet defines one group, but that
group may have more than one member in the internet.
a. To reach all of the destinations, the router may have to send the packet out of
more than one interface.
The figure shows the concept. In unicasting, the destination network N1 cannot be in more
than one part of the internet; in multicasting, the group G1 may have members in more than
one part of the internet.
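A tiny sketch of the difference (the table contents are made up): a unicast forwarding
table maps a destination to a single outgoing interface, while a multicast forwarding
table may map a group to several outgoing interfaces.

# Python sketch: unicast vs. multicast forwarding entries (contents are illustrative).
unicast_table = {"N1": 2}               # destination network -> one output interface
multicast_table = {"G1": [1, 3, 4]}     # group -> possibly several output interfaces

packet_group = "G1"
for interface in multicast_table[packet_group]:
    print(f"send a copy of the packet out of interface {interface}")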
In IPv6, the entire functionality of IPv4's three main address classes (A, B, and C) is
contained inside the "everything else" range of the address space.
Multicast addresses start with a byte of all 1s.
Link-local unicast addresses enable a host to construct an address that will work on the
network to which it is connected without being concerned about the global uniqueness of
the address.
Global unicast address
o A node may be assigned an IPv4-compatible IPv6 address by zero-extending a 32-bit
IPv4 address to 128 bits.
o A node that is only capable of understanding IPv4 can be assigned an IPv4-mapped
IPv6 address by prefixing the 32-bit IPv4 address with 2 bytes of all 1s and then
zero-extending the result to 128 bits.
Types:
1) Unicast Address
A unicast address defines a single interface (computer or router).
The packet with a unicast address will be delivered to the intended recipient.
2) Anycast Address
An anycast address defines a group of computers that all share a single address.
A packet with an anycast address is delivered to only one member of the group.
The member is the one who is first reachable.
3) Multicast Address
A multicast address also defines a group of computers.
Difference between anycasting and multicasting:
i) In anycasting, only one copy of the packet is sent to one of the members of the
group.
ii) In multicasting, each member of the group receives a copy.
Address Notation
The standard representation is x:x:x:x:x:x:x:x, where each “x” is a hexadecimal
representation of a 16-bit piece of the address.
For example: 47CD:1234:4422:AC02:0022:1234:A456:0124
An address with a large number of contiguous 0s can be written more compactly by
omitting all the 0 fields. Thus,
47CD:0000:0000:0000:0000:0000:A456:0124
could be written as 47CD::A456:0124
The IPv4-mapped IPv6 address of a host whose IPv4 address was 128.96.33.81 could be
written as ::FFFF:128.96.33.81
Note: The double colon at the front indicates the leading 0s.
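Python's ipaddress module applies the same notation rules, so the compressions above can
be verified with a short sketch (note that the module also drops leading 0s inside each
field and prints in lowercase):

# Python sketch: IPv6 notation, zero compression, and an IPv4-mapped address.
import ipaddress

addr = ipaddress.ip_address("47CD:0000:0000:0000:0000:0000:A456:0124")
print(addr)            # 47cd::a456:124  (contiguous 0 fields collapsed to '::')
print(addr.exploded)   # 47cd:0000:0000:0000:0000:0000:a456:0124

mapped = ipaddress.ip_address("::FFFF:128.96.33.81")    # IPv4-mapped IPv6 address
print(mapped.ipv4_mapped)                               # 128.96.33.81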
How does IPv6 handle options?
Each option has its own type of extension header.
The type of each extension header is identified by the value of the NextHeader field in
the header that precedes it, and each extension header contains a NextHeader field to
identify the header following it.
The last extension header will be followed by a transport-layer header (e.g., TCP) and in
this case the value of the NextHeader field is the same as the value of the Protocol field
would be in an IPv4 header.
Consider the example of the fragmentation header, shown in the figure.
The NextHeader field of the IPv6 header would contain the value 44, which is the value
assigned to indicate the fragmentation header.
The NextHeader field of the fragmentation header itself contains a value describing the
header that follows it.
The next header might be the TCP header, which results in NextHeader containing the
value 6.
If the fragmentation header were followed by an authentication header, then the
fragmentation header's NextHeader field would contain the value 51.
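The chaining just described can be pictured with a small sketch; the packet layout below
is purely illustrative, but the NextHeader values (44, 51, 6) are the ones mentioned in
the text.

# Python sketch: walking a chain of IPv6 extension headers via NextHeader values.
NEXT_HEADER_NAMES = {44: "fragmentation header", 51: "authentication header", 6: "TCP"}

# Assumed layout: IPv6 header -> fragmentation header -> authentication header -> TCP.
packet = {
    "ipv6": {"next_header": 44},
    44:     {"next_header": 51},          # fragmentation header points to what follows it
    51:     {"next_header": 6},           # authentication header points to TCP
    6:      {"payload": "TCP segment"},   # transport-layer header ends the chain
}

nh = packet["ipv6"]["next_header"]
while nh in NEXT_HEADER_NAMES:
    print(f"NextHeader = {nh}: {NEXT_HEADER_NAMES[nh]}")
    header = packet[nh]
    if "next_header" not in header:
        break                             # reached the transport-layer header
    nh = header["next_header"]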
Auto configuration
One goal of IPv6 is to provide support for auto configuration, sometimes
referred to as plug-and-play operation.
Auto configuration is possible for IPv4, but it depends on the existence of a server that is
configured to hand out addresses and other configuration information to Dynamic Host
Configuration Protocol (DHCP) clients.
The longer address format in IPv6 provides a useful, new form of auto configuration
called stateless auto configuration, which does not require a server.
The auto configuration problem is subdivided into two parts:
1. Obtain an interface ID that is unique on the link to which the host is
attached.
Every host on a link must have a unique link-level address. For example, all hosts
on an Ethernet have a unique 48-bit Ethernet address. This can be turned into a
valid link-local use address by adding the appropriate prefix (1111 1110 10)
followed by enough 0s to make up 128 bits (a sketch of this step follows the list below).
This address may be perfectly adequate for some devices on a network that do not
connect to any other networks.
2. Obtain the correct address prefix for this subnet.
Those devices that need a globally valid address depend on a router on the same
link to periodically advertise the appropriate prefix for the link. This requires that
the router be configured with the correct address prefix, and that this prefix be
chosen in such a way that there is enough space at the end (e.g., 48 bits) to attach
an appropriate link-level address.
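A rough sketch of step 1, under the description given above (the 10-bit prefix
1111 1110 10, zero padding, and the 48-bit Ethernet address in the low-order bits); the
MAC address is made up, and real stacks normally derive a 64-bit interface ID (e.g.,
EUI-64) instead of using the raw 48-bit address.

# Python sketch: building a link-local IPv6 address from an Ethernet address.
import ipaddress

def link_local_from_mac(mac: str) -> ipaddress.IPv6Address:
    mac_bits = int(mac.replace(":", ""), 16)   # 48-bit link-level address
    prefix = 0b1111111010 << 118               # prefix 1111 1110 10 in the top 10 bits
    return ipaddress.IPv6Address(prefix | mac_bits)

print(link_local_from_mac("00:1A:2B:3C:4D:5E"))   # fe80::1a:2b3c:4d5e (illustrative MAC)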