Summer Internship at CRIS, Delhi
TOPICS STUDIED:
1) THE OSI MODEL
Established in 1947, the International Standards Organization (ISO) is a multi-national body dedicated to
worldwide agreement on international standards. Almost three-fourths of countries in the world are
represented in the ISO. An ISO standard that covers all aspects of network communications is the Open
Systems Interconnection (OSI) model. It was first introduced in the late 1970s. An open system is a set of
protocols that allows any two different systems to communicate regardless of their underlying architecture.
The purpose of the OSI model is to show how to facilitate communication between different systems
without requiring changes to the logic of the underlying hardware and software. The OSI model is not a
protocol; it is a model for understanding and designing a network architecture that is flexible, robust, and
interoperable. The OSI model was intended to be the basis for the creation of the protocols in the OSI stack.
The OSI model is a layered framework for the design of network systems that allows communication
between all types of computer systems. It consists of seven separate but related layers, each of which defines
a part of the process of moving information across a network (see Figure 1). Understanding the
fundamentals of the OSI model provides a solid basis for exploring data communications.
Figure 1
Layered Architecture:
The OSI model is composed of seven ordered layers: physical (layer 1), data link (layer 2), network (layer
3), transport (layer 4), session (layer 5), presentation (layer 6), and application (layer 7). Figure 2 shows the
layers involved when a message is sent from device A to device B. As the message travels from A to B, it
may pass through many intermediate nodes. These intermediate nodes usually involve only the first three
layers of the OSI model. In developing the model, the designers distilled the process of transmitting data to
its most fundamental elements. They identified which networking functions had related uses and collected
those functions into discrete groups that became the layers. Each layer defines a family of functions distinct
from those of the other layers. By defining and localizing functionality in this fashion, the designers created
an architecture that is both comprehensive and flexible. Most important, the OSI model allows complete
interoperability between otherwise incompatible systems. Within a single machine, each layer calls upon
the services of the layer just below it. Layer 3, for example, uses the services provided by layer 2 and
provides services for layer 4. Between machines, layer x on one machine logically communicates with layer
x on another machine. This communication is governed by an agreed-upon series of rules and conventions
called protocols.
Figure 2
Layer-to-Layer Communication
In Figure 2, device A sends a message to device B (through intermediate nodes). At the sending site, the
message is moved down from layer 7 to layer 1. At layer 1 the entire package is converted to a form that
can be transferred to the receiving site. At the receiving site, the message is moved up from layer 1 to layer
7.
Interfaces between Layers
The passing of the data and network information down through the layers of
the sending device and back up through the layers of the receiving device is made possible by an interface
between each pair of adjacent layers. Each interface defines what information and services a layer must
provide for the layer above it. Well-defined interfaces and layer functions provide modularity to a network.
As long as a layer provides the expected services to the layer above it, the specific implementation of its
functions can be modified or replaced without requiring changes to the surrounding layers.
Figure 3
Upon reaching its destination, the signal passes into layer 1 and is transformed back into digital form. The
data units then move back up through the OSI layers. As each block of data reaches the next higher layer,
the headers and trailers attached to it at the corresponding sending layer are removed, and actions
appropriate to that layer are taken. By the time it reaches layer 7, the message is again in a form appropriate
to the application and is made available to the recipient.
Encapsulation
Figure 3 reveals another aspect of data communications in the OSI model: encapsulation. A packet at level
7 is encapsulated in the packet at level 6. The whole packet at level 6 is encapsulated in a packet at level 5,
and so on. In other words, the data part of a packet at level N is carrying the whole packet (data and overhead)
from level N + 1. The concept is called encapsulation because level N is not aware what part of the
encapsulated packet is data and what part is the header or trailer. For level N, the whole packet coming
from level N + 1 is treated as one integral unit.
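The nesting described above can be sketched in Python. The layer "headers" here are invented markers, not real protocol headers; the point is only that each level treats everything it receives from the level above as opaque data:

```python
# Sketch of OSI-style encapsulation: each layer wraps the unit
# from the layer above in its own (hypothetical) header/trailer.

def encapsulate(data: bytes) -> bytes:
    """Wrap application data with simple per-layer markers (illustrative only)."""
    packet = data
    # Walk down from layer 7 to layer 2.
    for layer in range(7, 1, -1):
        header = f"<L{layer}|".encode()
        trailer = b">"
        # Level N treats the whole level N+1 packet as one integral unit.
        packet = header + packet + trailer
    return packet

def decapsulate(packet: bytes) -> bytes:
    """Strip the markers in reverse order, as the receiving side does."""
    for layer in range(2, 8):
        header = f"<L{layer}|".encode()
        assert packet.startswith(header) and packet.endswith(b">")
        packet = packet[len(header):-1]
    return packet

message = b"hello"
wire = encapsulate(message)
assert decapsulate(wire) == message
```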
Physical Layer
The physical layer coordinates the functions required to carry a bit stream over a physical medium. It deals
with the mechanical and electrical specifications of the interface and transmission media. It also defines the
procedures and functions that physical devices and interfaces have to perform for transmission to occur.
The physical layer is also concerned with the following:
❑ Physical characteristics of interfaces and media. The physical layer defines the characteristics of the
interface between the devices and the transmission media. It also defines the type of transmission media.
❑ Representation of bits. The physical layer data consists of a stream of bits (sequence of 0s or 1s) with
no interpretation. To be transmitted, bits must be encoded into signals—electrical or optical. The physical
layer defines the type of encoding (how 0s and 1s are changed to signals).
❑ Data rate. The transmission rate—the number of bits sent each second—is also defined by the physical
layer. In other words, the physical layer defines the duration of a bit, which is how long it lasts.
❑ Synchronization of bits. The sender and receiver must not only use the same bit rate but must also be
synchronized at the bit level. In other words, the sender and the receiver clocks must be synchronized.
❑ Line configuration. The physical layer is concerned with the connection of devices to the media. In a
point-to-point configuration, two devices are connected together through a dedicated link. In a multipoint
configuration, a link is shared between several devices.
❑ Physical topology. The physical topology defines how devices are connected to make a network.
Devices can be connected using a mesh topology (every device connected to every other device), a star
topology (devices are connected through a central device), a ring topology (each device is connected to the
next, forming a ring), or a bus topology (every device on a common link).
❑ Transmission mode. The physical layer also defines the direction of transmission between two devices:
simplex, half-duplex, or full-duplex. In the simplex mode, only one device can send; the other can only
receive. The simplex mode is a one-way communication. In the half-duplex mode, two devices can send
and receive, but not at the same time. In a full-duplex (or simply duplex) mode, two devices can send and
receive at the same time.
Data Link Layer
The data link layer transforms the physical layer, a raw transmission facility, to a reliable link. It makes the
physical layer appear error-free to the upper layer (network layer). Other responsibilities of the data link
layer include the following:
❑ Framing: The data link layer divides the stream of bits received from the network layer into manageable
data units called frames.
❑ Physical addressing: If frames are to be distributed to different systems on the network, the data link
layer adds a header to the frame to define the sender and/or receiver of the frame. If the frame is intended
for a system outside the sender’s network, the receiver address is the address of the connecting device that
connects the network to the next one.
❑ Flow control: If the rate at which the data is absorbed by the receiver is less than the rate produced at
the sender, the data link layer imposes a flow control mechanism to prevent overwhelming the receiver.
❑ Error control: The data link layer adds reliability to the physical layer by adding mechanisms to detect
and retransmit damaged or lost frames. It also uses a mechanism to recognize duplicate frames. Error control
is normally achieved through a trailer added to the end of the frame.
❑ Access control: When two or more devices are connected to the same link, data link layer protocols
are necessary to determine which device has control over the link at any given time.
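Framing and error control can be sketched together. The frame layout and the one-byte checksum trailer below are invented for illustration; real data link protocols use CRCs, but the idea of a trailer that lets the receiver detect a damaged frame is the same:

```python
import struct

def make_frame(payload: bytes, seq: int) -> bytes:
    """Build a toy frame: 1-byte sequence number + 2-byte length header,
    the payload, and a 1-byte checksum trailer (sum of bytes mod 256)."""
    header = struct.pack("!BH", seq, len(payload))
    body = header + payload
    checksum = sum(body) % 256
    return body + bytes([checksum])

def check_frame(frame: bytes):
    """Verify the trailer; return (seq, payload) or raise on damage."""
    body, checksum = frame[:-1], frame[-1]
    if sum(body) % 256 != checksum:
        raise ValueError("damaged frame")
    seq, length = struct.unpack("!BH", body[:3])
    return seq, body[3:3 + length]

seq, payload = check_frame(make_frame(b"data", seq=7))
assert (seq, payload) == (7, b"data")
```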
Network Layer
The network layer is responsible for the source-to-destination delivery of a packet, possibly across multiple
networks (links). Whereas the data link layer oversees the delivery of the packet between two systems on
the same network (link), the network layer ensures that each packet gets from its point of origin to its final
destination. If two systems are connected to the same link, there is usually no need for a network layer.
However, if the two systems are attached to different networks (links) with connecting devices between the
networks (links), there is often a need for the network layer to accomplish source-to-destination delivery.
Other responsibilities of the network layer include the following:
❑ Logical addressing: The physical addressing implemented by the data link layer handles the
addressing problem locally. If a packet passes the network boundary, we need another addressing system
to help distinguish the source and destination systems. The network layer adds a header to the packet coming
from the upper layer that, among other things, includes the logical addresses of the sender and receiver.
❑ Routing: When independent networks or links are connected together to create internetworks (network
of networks) or a large network, the connecting devices (called routers or switches) route or switch the
packets to their final destination. One of the functions of the network layer is to provide this mechanism.
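The routing mechanism can be sketched with Python's standard ipaddress module as a longest-prefix-match lookup; the prefixes and next-hop addresses below are invented for illustration:

```python
import ipaddress

# Toy forwarding table: destination prefix -> next-hop address.
routes = {
    ipaddress.ip_network("10.0.0.0/8"): "192.168.1.1",
    ipaddress.ip_network("10.1.0.0/16"): "192.168.1.2",
    ipaddress.ip_network("0.0.0.0/0"): "192.168.1.254",  # default route
}

def next_hop(dst: str) -> str:
    """Longest-prefix match, as a router's network layer performs it."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in routes if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return routes[best]

assert next_hop("10.1.2.3") == "192.168.1.2"   # the /16 beats the /8
assert next_hop("10.9.9.9") == "192.168.1.1"
assert next_hop("8.8.8.8") == "192.168.1.254"  # falls to the default route
```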
Transport Layer
The transport layer is responsible for process-to-process delivery of the entire message. A process is an
application program running on the host. Whereas the network layer oversees source-to-destination delivery
of individual packets, it does not recognize any relationship between those packets. It treats each one
independently, as though each piece belonged to a separate message, whether or not it does. The transport
layer, on the other hand, ensures that the whole message arrives intact and in order, overseeing both error
control and flow control at the source-to-destination level. Other responsibilities of the transport layer
include the following:
❑ Service-point addressing: Computers often run several programs at the same time. For this reason,
source-to-destination delivery means delivery not only from one computer to the next but also from a
specific process (running program) on one computer to a specific process (running program) on the other.
The transport layer header must therefore include a type of address called a service-point address (or port
address). The network layer gets each packet to the correct computer; the transport layer gets the entire
message to the correct process on that computer.
❑ Segmentation and reassembly: A message is divided into transmittable segments,with each segment
containing a sequence number. These numbers enable the transport layer to reassemble the message
correctly upon arriving at the destination and to identify and replace packets that were lost in transmission.
❑ Flow control: Like the data link layer, the transport layer is responsible for flow control. However,
flow control at this layer is performed end to end rather than across a single link.
❑ Error control: Like the data link layer, the transport layer is responsible for error control. However,
error control at this layer is performed process-to-process rather than across a single link. The sending
transport layer makes sure that the entire message arrives at the receiving transport layer without error
(damage, loss, or duplication). Error correction is usually achieved through retransmission.
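Segmentation and reassembly with sequence numbers can be sketched in a few lines; the segment size is arbitrary here, and sorting by sequence number stands in for the transport layer's reordering of packets that arrive out of order:

```python
def segment(message: bytes, size: int):
    """Split a message into numbered segments: (sequence number, chunk)."""
    return [(i, message[offset:offset + size])
            for i, offset in enumerate(range(0, len(message), size))]

def reassemble(segments):
    """Reorder by sequence number and rebuild the original message."""
    return b"".join(chunk for _, chunk in sorted(segments))

msg = b"the whole message arrives intact and in order"
segs = segment(msg, 8)
segs.reverse()  # simulate out-of-order arrival
assert reassemble(segs) == msg
```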
Session Layer
The services provided by the first four layers (physical, data link, network and transport) are not sufficient
for some processes. The session layer is the network dialog controller. It establishes, maintains, and
synchronizes the interaction between communicating systems. Specific responsibilities of the session layer
include the following:
❑ Dialog control: The session layer allows two systems to enter into a dialog. It allows the
communication between two processes to take place in either half-duplex (one way at a time) or full-duplex
(two ways at a time) mode.
❑ Synchronization: The session layer allows a process to add checkpoints (synchronization points) into
a stream of data. For example, if a system is sending a file of 2,000 pages, it is advisable to insert
checkpoints after every 100 pages to ensure that each 100-page unit is received and acknowledged
independently. In this case, if a crash happens during the transmission of page 523, the only pages that need
to be resent after system recovery are pages 501 to 523. Pages previous to 501 need not be resent.
Presentation Layer
The presentation layer is concerned with the syntax and semantics of the information exchanged between
two systems. Specific responsibilities of the presentation layer include the following:
❑ Translation: The processes (running programs) in two systems are usually exchanging information in
the form of character strings, numbers, and so on. The information should be changed to bit streams before
being transmitted. Because different computers use different encoding systems, the presentation layer is
responsible for interoperability between these different encoding methods. The presentation layer at the
sender changes the information from its sender-dependent format into a common format. The presentation
layer at the receiving machine changes the common format into its receiver-dependent format.
❑ Encryption: To carry sensitive information, a system must be able to assure privacy. Encryption means
that the sender transforms the original information to another form and sends the resulting message out over
the network. Decryption reverses the original process to transform the message back to its original form.
❑ Compression: Data compression reduces the number of bits contained in the information. Data
compression becomes particularly important in the transmission of multimedia such as text, audio, and
video.
Application Layer
The application layer enables the user, whether human or software, to access the network. It provides user
interfaces and support for services such as electronic mail, remote file access and transfer, shared database
management, and other types of distributed information services. Specific services provided by the
application layer include the following:
❑ Network virtual terminal: A network virtual terminal is a software version of a physical terminal and
allows a user to log on to a remote host. To do so, the application creates a software emulation of a terminal
at the remote host. The user’s computer talks to the software terminal, which, in turn, talks to the host, and
vice versa. The remote host believes it is communicating with one of its own terminals and allows you to
log on.
❑ File transfer, access, and management (FTAM): This application allows a user to access files in a
remote host (to make changes or read data), to retrieve files from a remote computer for use in the local
computer, and to manage or control files in a remote computer locally.
❑ E-mail services: This application provides the basis for e-mail forwarding and storage.
❑ Directory services: This application provides distributed database sources and access for global
information about various objects and services.
2) CABLING
Cable is the medium through which information usually moves from one network device to another. There
are several types of cable which are commonly used with LANs. In some cases, a network will utilize only
one type of cable, while other networks will use a variety of cable types. The type of cable chosen for a network
is related to the network's topology, protocol, and size. Understanding the characteristics of different types
of cable and how they relate to other aspects of a network is necessary for the development of a successful
network.
The following sections discuss the types of cables used in networks and other related topics.
Twisted pair cabling comes in two varieties: shielded and unshielded. Unshielded twisted pair (UTP) is the
most popular and is generally the best option for school networks (See fig. 1).
The quality of UTP may vary from telephone-grade wire to extremely high-speed cable. The cable has four
pairs of wires inside the jacket. Each pair is twisted with a different number of twists per inch to help
eliminate interference from adjacent pairs and other electrical devices. The tighter the twisting, the higher
the supported transmission rate and the greater the cost per foot. The EIA/TIA (Electronic Industry
Association/Telecommunication Industry Association) has established standards for UTP and rated six
categories of wire (additional categories are emerging).
The standard connector for unshielded twisted pair cabling is an RJ-45 connector. This is a plastic connector
that looks like a large telephone-style connector (See fig. 2). A slot allows the RJ-45 to be inserted only one
way. RJ stands for Registered Jack, implying that the connector follows a standard borrowed from the
telephone industry. This standard designates which wire goes with each pin inside the connector.
Coaxial Cable
Coaxial cabling has a single copper conductor at its center. A plastic layer provides insulation between the
center conductor and a braided metal shield (See fig. 3). The metal shield helps to block any outside
interference from fluorescent lights, motors, and other computers.
Although coaxial cabling is difficult to install, it is highly resistant to signal interference. In addition, it can
support greater cable lengths between network devices than twisted pair cable. The two types of coaxial
cabling are thick coaxial and thin coaxial.
Thin coaxial cable is also referred to as thinnet. 10Base2 refers to the specifications for thin coaxial cable
carrying Ethernet signals. The 2 refers to the approximate maximum segment length of 200 meters; in
practice, the maximum segment length is 185 meters. Thin coaxial cable has been popular in school
networks, especially linear bus networks.
Thick coaxial cable is also referred to as thicknet. 10Base5 refers to the specifications for thick coaxial
cable carrying Ethernet signals. The 5 refers to the maximum segment length being 500 meters. Thick
coaxial cable has an extra protective plastic cover that helps keep moisture away from the center conductor.
This makes thick coaxial a great choice when running longer lengths in a linear bus network. One
disadvantage of thick coaxial is that it does not bend easily and is difficult to install.
The most common type of connector used with coaxial cables is the Bayonet Neill-Concelman (BNC)
connector (See fig. 4). Different types of adapters are available for BNC connectors, including a T-
connector, barrel connector, and terminator. Connectors on the cable are the weakest points in any network.
To help avoid problems with your network, always use the BNC connectors that crimp, rather than screw, onto
the cable.
Fig. 4. BNC connector
Fiber optic cabling consists of a center glass core surrounded by several layers of protective materials (See
fig. 5). It transmits light rather than electronic signals eliminating the problem of electrical interference.
This makes it ideal for certain environments that contain a large amount of electrical interference. It has
also made it the standard for connecting networks between buildings, due to its immunity to the effects of
moisture and lightning.
Fiber optic cable has the ability to transmit signals over much longer distances than coaxial and twisted
pair. It also has the capability to carry information at vastly greater speeds. This capacity broadens
communication possibilities to include services such as video conferencing and interactive services. The
cost of fiber optic cabling is comparable to copper cabling; however, it is more difficult to install and
modify. 10BaseF refers to the specifications for fiber optic cable carrying Ethernet signals.
The center core of fiber cables is made from glass or plastic fibers (see fig 5). A plastic coating then cushions
the fiber center, and Kevlar fibers help to strengthen the cables and prevent breakage. The outer insulating
jacket is made of Teflon or PVC.
There are two common types of fiber cables -- single mode and multimode. Multimode cable has a larger
diameter; however, both cables provide high bandwidth at high speeds. Single mode can provide more
distance, but it is more expensive.
CAT 5 CABLE:
Category 5 cable, commonly referred to as Cat 5, is a twisted pair cable for computer networks. The cable standard
provides performance of up to 100 MHz and is suitable for most varieties of Ethernet over twisted pair. Cat 5 is also
used to carry other signals such as telephony and video.
This cable is commonly connected using punch-down blocks and modular connectors. Most Category 5 cables
are unshielded, relying on the balanced line twisted pair design and differential signaling for noise rejection.
The category 5 specification was deprecated in 2001 and is superseded by the category 5e specification.
Cable types, connector types and cabling topologies are defined by TIA/EIA-568-B. Nearly always, 8P8C modular
connectors (often referred to as RJ45 connectors) are used for connecting category 5 cable. The cable is terminated in
either the T568A scheme or the T568B scheme. The two schemes work equally well and may be mixed in an
installation so long as the same scheme is used on both ends of each cable.
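The two pin assignments can be written out explicitly using the standard T568A/T568B wire colors. The only difference between the schemes is that the orange and green pairs swap positions, which is why both ends of a cable must use the same scheme:

```python
# Pin-to-wire-color assignments for the two termination schemes.
T568A = {1: "white/green", 2: "green", 3: "white/orange", 4: "blue",
         5: "white/blue", 6: "orange", 7: "white/brown", 8: "brown"}
T568B = {1: "white/orange", 2: "orange", 3: "white/green", 4: "blue",
         5: "white/blue", 6: "green", 7: "white/brown", 8: "brown"}

# Only the pins carrying the orange and green pairs differ.
swapped = {pin for pin in T568A if T568A[pin] != T568B[pin]}
assert swapped == {1, 2, 3, 6}
```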
RJ45 Connector
Category 5 cable is used in structured cabling for computer networks such as Ethernet over twisted pair. The cable
standard provides performance of up to 100 MHz and is suitable for 10BASE-T, 100BASE-TX (Fast Ethernet),
and 1000BASE-T (Gigabit Ethernet). 10BASE-T and 100BASE-TX Ethernet connections require two wire pairs.
1000BASE-T Ethernet connections require four wire pairs. Through the use of power over Ethernet (PoE), power can
be carried over the cable in addition to Ethernet data.
Cat 5 is also used to carry other signals such as telephony and video. In some cases, multiple signals can be carried on
a single cable; Cat 5 can carry two conventional telephone lines as well as 100BASE-TX in a single cable.
The USOC/RJ-61 wiring standard may be used in multi-line telephone connections. Various schemes exist for
transporting both analog and digital video over the cable. HDBaseT (10.2 Gbit/s) is one such scheme.
CAT 6 CABLE :
Category 6 cable, commonly referred to as Cat 6, is a standardized twisted pair cable for Ethernet and other
network physical layers that is backward compatible with the Category 5/5e and Category 3 cable standards.
Compared with Cat 5 and Cat 5e, Cat 6 features more stringent specifications for crosstalk and system noise. The
cable standard also specifies performance of up to 250 MHz compared to 100 MHz for Cat 5 and Cat 5e.
Whereas Category 6 cable has a reduced maximum length of 55 meters when used for 10GBASE-T, Category 6A
cable (or Augmented Category 6) is characterized to 500 MHz and has improved alien crosstalk characteristics,
allowing 10GBASE-T to be run for the same 100-meter maximum distance as previous Ethernet variants.
Cat 6 cable can be identified by the printing on the side of the cable sheath. Cable types, connector types and cabling
topologies are defined by TIA/EIA-568-B.
Cat 6 patch cables are normally terminated in 8P8C modular connectors. If Cat 6 rated patch cables, jacks and
connectors are not used with Cat 6 wiring, overall performance is degraded and may not meet Cat 6 performance
specifications.
Connectors use either T568A or T568B pin assignments; performance is comparable provided both ends of a cable
are terminated identically.
3) IP ADDRESS
An Internet Protocol address (IP address) is a numerical label assigned to each device connected to a computer
network that uses the Internet Protocol for communication. An IP address serves two principal functions: host or
network interface identification and location addressing.
Internet Protocol version 4 (IPv4) defines an IP address as a 32-bit number. However, because of the growth of the
Internet and the depletion of available IPv4 addresses, a new version of IP (IPv6), using 128 bits for the IP address,
was developed in 1995, and standardized in December 1998. In July 2017, a final definition of the protocol was
published. IPv6 deployment has been ongoing since the mid-2000s.
IP addresses are usually written and displayed in human-readable notations, such as 172.16.254.1 in IPv4,
and 2001:db8:0:1234:0:567:8:1 in IPv6. The size of the routing prefix of the address is designated in CIDR
notation by suffixing the address with the number of significant bits, e.g., 192.168.1.15/24, which is equivalent to the
historically used subnet mask 255.255.255.0.
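The equivalence between the /24 prefix length and the 255.255.255.0 mask can be checked directly with Python's standard ipaddress module:

```python
import ipaddress

# The /24 prefix from the CIDR notation corresponds to this subnet mask:
net = ipaddress.ip_network("192.168.1.0/24")
assert str(net.netmask) == "255.255.255.0"
assert net.num_addresses == 256  # 2^(32-24) addresses in the block

# An interface address written in CIDR notation, e.g. 192.168.1.15/24:
iface = ipaddress.ip_interface("192.168.1.15/24")
assert iface.ip in net
```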
The IP address space is managed globally by the Internet Assigned Numbers Authority (IANA), and by five regional
Internet registries (RIRs) responsible in their designated territories for assignment to end users and local Internet
registries, such as Internet service providers. IPv4 addresses have been distributed by IANA to the RIRs in blocks of
approximately 16.8 million addresses each. Each ISP or private network administrator assigns an IP address to each
device connected to its network. Such assignments may be on a static (fixed or permanent) or dynamic basis,
depending on its software and practices.
FUNCTION
An IP address serves two principal functions. It identifies the host, or more specifically its network interface, and it
provides the location of the host in the network, and thus the capability of establishing a path to that host. Its role has
been characterized as follows: "A name indicates what we seek. An address indicates where it is. A route indicates
how to get there." The header of each IP packet contains the IP address of the sending host, and that of the destination
host.
IP VERSIONS
Two versions of the Internet Protocol are in common use in the Internet today. The original version of the Internet
Protocol that was first deployed in 1983 in the ARPANET, the predecessor of the Internet, is Internet Protocol version
4 (IPv4).
The rapid exhaustion of IPv4 address space available for assignment to Internet service providers and end user
organizations by the early 1990s, prompted the Internet Engineering Task Force (IETF) to explore new technologies
to expand the addressing capability in the Internet. The result was a redesign of the Internet Protocol which became
eventually known as Internet Protocol Version 6 (IPv6) in 1995. IPv6 technology was in various testing stages until
the mid-2000s, when commercial production deployment commenced.
Today, these two versions of the Internet Protocol are in simultaneous use. Among other technical changes, each
version defines the format of addresses differently. Because of the historical prevalence of IPv4, the generic term IP
address typically still refers to the addresses defined by IPv4. The gap in version sequence between IPv4 and IPv6
resulted from the assignment of version 5 to the experimental Internet Stream Protocol in 1979, which however was
never referred to as IPv5.
IPV4 ADDRESSES
An IPv4 address has a size of 32 bits, which limits the address space to 4,294,967,296 (2^32) addresses. Of this number,
some addresses are reserved for special purposes such as private networks (~18 million addresses) and multicast
addressing (~270 million addresses).
IPv4 addresses are usually represented in dot-decimal notation, consisting of four decimal numbers, each ranging from
0 to 255, separated by dots, e.g., 172.16.254.1. Each part represents a group of 8 bits (an octet) of the address. In some
cases of technical writing, IPv4 addresses may be presented in various hexadecimal, octal, or binary representations.
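The correspondence between dot-decimal notation and the underlying 32-bit number can be shown with the standard ipaddress module:

```python
import ipaddress

addr = ipaddress.IPv4Address("172.16.254.1")
assert int(addr) == 0xAC10FE01          # the same 32 bits as one integer
assert str(ipaddress.IPv4Address(0xAC10FE01)) == "172.16.254.1"

# Each dotted-decimal part is one octet (8 bits) of the 32-bit value:
octets = [(int(addr) >> shift) & 0xFF for shift in (24, 16, 8, 0)]
assert octets == [172, 16, 254, 1]
```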
Subnetting
In the early stages of development of the Internet Protocol, network administrators interpreted an IP address in two
parts: network number portion and host number portion. The highest order octet (most significant eight bits) in an
address was designated as the network number and the remaining bits were called the rest field or host identifier, and
were used for host numbering within a network.
This early method soon proved inadequate as additional networks developed that were independent of the existing
networks already designated by a network number. In 1981, the addressing specification was revised with the
introduction of classful network architecture.
Classful network design allowed for a larger number of individual network assignments and fine-
grained subnetwork design. The first three bits of the most significant octet of an IP address were defined as
the class of the address. Three classes (A, B, and C) were defined for universal unicast addressing. Depending on the
class derived, the network identification was based on octet boundary segments of the entire address. Each class used
successively additional octets in the network identifier, thus reducing the possible number of hosts in the higher order
classes (B and C). The following table gives an overview of this now obsolete system.
Class  Leading bits  Network number field  Rest field  Number of networks  Addresses per network  Start address  End address
A      0             8 bits                24 bits     128                 16,777,216             0.0.0.0        127.255.255.255
B      10            16 bits               16 bits     16,384              65,536                 128.0.0.0      191.255.255.255
C      110           24 bits               8 bits      2,097,152           256                    192.0.0.0      223.255.255.255
Classful network design served its purpose in the startup stage of the Internet, but it lacked scalability in the face of
the rapid expansion of the network in the 1990s. The class system of the address space was replaced with Classless
Inter-Domain Routing (CIDR) in 1993. CIDR is based on variable-length subnet masking (VLSM) to allow allocation
and routing based on arbitrary-length prefixes.
Today, remnants of classful network concepts function only in a limited scope as the default configuration parameters
of some network software and hardware components (e.g. netmask), and in the technical jargon used in network
administrators' discussions.
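Under the obsolete classful scheme, the class follows from the leading bits of the first octet alone; a minimal sketch:

```python
def address_class(addr: str) -> str:
    """Classify an IPv4 address under the obsolete classful scheme
    by inspecting the leading bits of the most significant octet."""
    first = int(addr.split(".")[0])
    if first < 128:   # leading bit  0
        return "A"
    if first < 192:   # leading bits 10
        return "B"
    if first < 224:   # leading bits 110
        return "C"
    if first < 240:   # leading bits 1110 (multicast)
        return "D"
    return "E"        # reserved

assert address_class("10.0.0.1") == "A"
assert address_class("172.16.0.1") == "B"
assert address_class("192.168.1.1") == "C"
```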
Private addresses
Early network design, when global end-to-end connectivity was envisioned for communications with all Internet hosts,
intended that IP addresses be uniquely assigned to a particular computer or device. However, it was found that this
was not always necessary as private networks developed and public address space needed to be conserved.
Computers not connected to the Internet, such as factory machines that communicate only with each other via TCP/IP,
need not have globally unique IP addresses. Three non-overlapping ranges of IPv4 addresses for private networks are
reserved. These addresses are not routed on the Internet and thus their use need not be coordinated with an IP address
registry.
Today, such private networks typically connect to the Internet with network address translation (NAT), when needed.
Any user may use any of the reserved blocks. Typically, a network administrator will divide a block into subnets; for
example, many home routers automatically use a default address range
of 192.168.0.0 through 192.168.0.255 (192.168.0.0/24).
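Taking the three reserved ranges to be the well-known RFC 1918 blocks (10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16), membership can be checked with the standard ipaddress module:

```python
import ipaddress

# The three non-overlapping private blocks reserved by RFC 1918:
private_blocks = [ipaddress.ip_network(n) for n in
                  ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def is_private(addr: str) -> bool:
    ip = ipaddress.ip_address(addr)
    return any(ip in block for block in private_blocks)

assert is_private("192.168.0.42")
assert is_private("10.1.2.3")
assert not is_private("8.8.8.8")

# The standard library encodes the same knowledge directly:
assert ipaddress.ip_address("172.16.254.1").is_private
```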
IPV6 ADDRESSES
In IPv6, the address size was increased from 32 bits in IPv4 to 128 bits or 16 octets, thus providing up to
2^128 (approximately 3.403×10^38) addresses. This is deemed sufficient for the foreseeable future.
The intent of the new design was not to provide just a sufficient quantity of addresses, but also redesign routing in the
Internet by more efficient aggregation of subnetwork routing prefixes. This resulted in slower growth of routing tables
in routers. The smallest possible individual allocation is a subnet for 2^64 hosts, which is the square of the size of the
entire IPv4 Internet. At these levels, actual address utilization ratios will be small on any IPv6 network segment. The
new design also provides the opportunity to separate the addressing infrastructure of a network segment, i.e. the local
administration of the segment's available space, from the addressing prefix used to route traffic to and from external
networks. IPv6 has facilities that automatically change the routing prefix of entire networks, should the global
connectivity or the routing policy change, without requiring internal redesign or manual renumbering.
The large number of IPv6 addresses allows large blocks to be assigned for specific purposes and, where appropriate,
to be aggregated for efficient routing. With a large address space, there is no need to have complex address
conservation methods as used in CIDR.
All modern desktop and enterprise server operating systems include native support for the IPv6 protocol, but it is not
yet widely deployed in other devices, such as residential networking routers, voice over IP (VoIP) and multimedia
equipment, and network peripherals.
Private addresses
Just as IPv4 reserves addresses for private networks, blocks of addresses are set aside in IPv6. In IPv6, these are
referred to as unique local addresses (ULA). The routing prefix fc00::/7 is reserved for this block, which is divided
into two /8 blocks with different implied policies. The addresses include a 40-bit pseudorandom number that
minimizes the risk of address collisions if sites merge or packets are misrouted.
Early practices used a different block for this purpose (fec0::), dubbed site-local addresses. However, the definition of
what constituted sites remained unclear and the poorly defined addressing policy created ambiguities for routing. This
address type was abandoned and must not be used in new systems.
Addresses starting with fe80:, called link-local addresses, are assigned to interfaces for communication on the attached
link. The addresses are automatically generated by the operating system for each network interface. This provides
instant and automatic communication between all IPv6 hosts on a link. This feature is required in the lower layers of
IPv6 network administration, such as for the Neighbor Discovery Protocol.
Private address prefixes may not be routed on the public Internet.
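Membership in the ULA and link-local blocks can be tested the same way; a short sketch with Python's ipaddress module:

```python
import ipaddress

ULA = ipaddress.ip_network("fc00::/7")          # unique local addresses
LINK_LOCAL = ipaddress.ip_network("fe80::/10")  # link-local addresses

addr = ipaddress.ip_address("fe80::1")
print(addr in LINK_LOCAL)   # True
print(addr.is_link_local)   # True (the module has a built-in check)

# A ULA with a (here arbitrary) 40-bit pseudorandom global ID:
ula = ipaddress.ip_address("fd12:3456:789a::1")
print(ula in ULA)           # True
```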
IP subnetworks
IP networks may be divided into subnetworks in both IPv4 and IPv6. For this purpose, an IP address is logically
recognized as consisting of two parts: the network prefix and the host identifier, or interface identifier (IPv6).
The subnet mask or the CIDR prefix determines how the IP address is divided into network and host parts.
The term subnet mask is only used within IPv4. Both IP versions, however, use the CIDR concept and notation. In this
notation, the IP address is followed by a slash and the number (in decimal) of bits used for the network part, also called
the routing prefix. For example, an IPv4 address and its subnet mask may be 192.0.2.1 and 255.255.255.0,
respectively. The CIDR notation for the same IP address and subnet is 192.0.2.1/24, because the first 24 bits of the IP
address indicate the network and subnet.
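The equivalence of the two notations can be demonstrated with Python's ipaddress module:

```python
import ipaddress

# The /24 prefix and the dotted-decimal mask are two notations for the same split.
net = ipaddress.ip_network("192.0.2.0/24")
print(net.netmask)        # 255.255.255.0
print(net.prefixlen)      # 24

# An interface address carries both the network and host parts.
iface = ipaddress.ip_interface("192.0.2.1/24")
print(iface.network)      # 192.0.2.0/24
print(iface.ip)           # 192.0.2.1
```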
IP address assignment
IP addresses are assigned to a host either dynamically at the time of booting, or permanently by fixed configuration
of the host hardware or software. Persistent configuration is also known as using a static IP address. In contrast, when
a computer's IP address is assigned anew each time it restarts, this is known as using a dynamic IP address.
The configuration of a static IP address depends in detail on the software or hardware installed in the computer.
Computers used for the network infrastructure, such as routers and mail servers, are typically configured with static
addressing. Static addresses are also sometimes convenient for locating servers inside an enterprise.
Dynamic IP addresses are assigned using methods such as Zeroconf for self-configuration, or by the Dynamic Host
Configuration Protocol (DHCP) from a network server. The address assigned with DHCP usually has an expiration
period, after which the address may be assigned to another device, or to the originally associated host if it is still
powered up. A network administrator may implement a DHCP method so that the same host always receives a specific
address.
DHCP is the most frequently used technology for assigning addresses. It avoids the administrative burden of assigning
specific static addresses to each device on a network. It also allows devices to share the limited address space on a
network if only some of them are online at a particular time. Typically, dynamic IP configuration is enabled by default
in modern desktop operating systems. DHCP is not the only technology used to assign IP addresses dynamically.
Dialup and some broadband networks use dynamic address features of the Point-to-Point Protocol.
In the absence or failure of static or stateful (DHCP) address configurations, an operating system may assign an IP
address to a network interface using stateless auto-configuration methods, such as Zeroconf.
Address autoconfiguration
Address block 169.254.0.0/16 is defined for the special use in link-local addressing for IPv4 networks. In IPv6, every
interface, whether using static or dynamic address assignments, also receives a link-local address automatically in the
block fe80::/10.
These addresses are only valid on the link, such as a local network segment or point-to-point connection, that a host
is connected to. These addresses are not routable and like private addresses cannot be the source or destination of
packets traversing the Internet.
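A quick check of the IPv4 link-local block, again with the standard ipaddress module:

```python
import ipaddress

# The block reserved for IPv4 link-local addressing (APIPA addresses land here).
APIPA = ipaddress.ip_network("169.254.0.0/16")

addr = ipaddress.ip_address("169.254.10.20")
print(addr in APIPA)        # True
print(addr.is_link_local)   # True — flagged as non-routable link-local
```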
When the link-local IPv4 address block was reserved, no standards existed for mechanisms of address
autoconfiguration. Filling the void, Microsoft created an implementation that is called Automatic Private IP
Addressing (APIPA). APIPA has been deployed on millions of machines and has, thus, become a de facto standard in
the industry. Many years later, in May 2005, the IETF defined a formal standard for it.
Addressing conflicts
An IP address conflict occurs when two devices on the same local physical or wireless network claim to have the same
IP address. A second assignment of an address generally stops the IP functionality of one or both of the devices. Many
modern operating systems notify the administrator of IP address conflicts. If one of the devices is the gateway, the
network will be crippled. When IP addresses are assigned by multiple people and systems with differing methods, any
of them may be at fault.
Routing
IP addresses are classified by their operational characteristics into unicast, multicast, anycast and broadcast
addressing.
Unicast addressing
The most common concept of an IP address is in unicast addressing, available in both IPv4 and IPv6. It normally
refers to a single sender or a single receiver, and can be used for both sending and receiving. Usually, a unicast address
is associated with a single device or host, but a device or host may have more than one unicast address. Some individual
PCs have several distinct unicast addresses, each for its own distinct purpose. Sending the same data to multiple
unicast addresses requires the sender to send all the data many times over, once for each recipient.
Broadcast addressing
Broadcasting is an addressing technique available in IPv4 to send data to all possible destinations on a network in one
transmission operation, while all receivers capture the network packet (all-hosts broadcast). The
address 255.255.255.255 is used for the limited broadcast to all hosts on the local network. In addition, a directed
broadcast uses the all-ones host address with the network prefix. For example, the destination address used for directed broadcast to devices on
the network 192.0.2.0/24 is 192.0.2.255.
IPv6 does not implement broadcast addressing, and replaces it with multicast to the specially defined all-nodes
multicast address.
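The directed broadcast address is simply the network prefix with all host bits set to one, which the ipaddress module computes directly:

```python
import ipaddress

net = ipaddress.ip_network("192.0.2.0/24")

# Directed broadcast: network prefix + all-ones host part.
print(net.broadcast_address)          # 192.0.2.255

# A /20 network has a different all-ones host boundary:
print(ipaddress.ip_network("10.1.16.0/20").broadcast_address)  # 10.1.31.255
```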
Multicast addressing
A multicast address is associated with a group of interested receivers. In IPv4,
addresses 224.0.0.0 through 239.255.255.255 (the former Class D addresses) are designated as multicast
addresses. IPv6 uses the address block with the prefix ff00::/8 for multicast applications. In either case, the sender
sends a single datagram from its unicast address to the multicast group address and the intermediary routers take care
of making copies and sending them to all receivers that have joined the corresponding multicast group.
Anycast addressing
Like broadcast and multicast, anycast is a one-to-many routing topology. However, the data stream is not transmitted
to all receivers, just the one which the router decides is logically closest in the network. Anycast addressing is an
inherent feature only of IPv6. In IPv4, anycast addressing implementations typically operate using the shortest-path
metric of BGP routing and do not take into account congestion or other attributes of the path. Anycast methods are
useful for global load balancing and are commonly used in distributed DNS systems.
4) IP ROUTING
IP routing is the field of routing methodologies of Internet Protocol (IP) packets within and across IP networks. This
involves not only protocols and technologies, but includes the policies of the worldwide organization and
configuration of Internet infrastructure. In each IP network node, IP routing involves the determination of a suitable
path for a network packet from a source to its destination in an IP network. The process uses static configuration rules
or dynamically obtained status information to select specific packet forwarding methods to direct traffic to the next
available intermediate network node one hop closer to the desired final destination, a total path potentially spanning
multiple computer networks.
Networks are separated from each other by specialized hosts, called gateways or routers with specialized software
support optimized for routing. In routers, packets arriving at any interface are examined for source and destination
addressing and queued to the appropriate outgoing interface according to their destination address and a set of rules
and performance metrics. Rules are encoded in a routing table that contains entries for all interfaces and their
connected networks. If no rule satisfies the requirements for a network packet, it is forwarded to a default route.
Routing tables are maintained either manually by a network administrator, or updated dynamically with a routing
protocol. Routing rules may contain other parameters than source and destination, such as limitations on available
bandwidth, expected packet loss rates, and specific technology requirements.
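The table lookup described above can be sketched as a longest-prefix match; the routes and next-hop addresses below are hypothetical:

```python
import ipaddress

# A toy routing table: (destination network, next hop). "0.0.0.0/0" is the
# default route, used when no more specific entry matches.
ROUTES = [
    (ipaddress.ip_network("10.0.0.0/8"), "192.0.2.1"),
    (ipaddress.ip_network("10.1.0.0/16"), "192.0.2.2"),
    (ipaddress.ip_network("0.0.0.0/0"), "192.0.2.254"),
]

def next_hop(dest: str) -> str:
    """Pick the matching entry with the longest prefix (most specific)."""
    ip = ipaddress.ip_address(dest)
    matches = [(net, hop) for net, hop in ROUTES if ip in net]
    net, hop = max(matches, key=lambda m: m[0].prefixlen)
    return hop

print(next_hop("10.1.2.3"))   # 192.0.2.2 — the /16 beats the /8
print(next_hop("8.8.8.8"))    # 192.0.2.254 — falls through to the default route
```

Real routers compile this table into optimized lookup structures, but the matching rule is the same.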
IP forwarding algorithms take into account the size of each packet, the type of service specified in the header, as well
as characteristics of the available links to other routers in the network, such as link capacity, utilization rate, and
maximum datagram size that is supported on the link. In general, most routing software determines a route through a
shortest path algorithm. However, other routing protocols may use other metrics for determining the best path. Based
on the metrics required and present for each link, each path has an associated cost. The routing algorithm attempts to
minimize the cost when choosing the next hop.
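A minimal sketch of such cost minimization, using Dijkstra's shortest-path algorithm over a hypothetical four-router topology with administrator-assigned link costs:

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path costs from source over weighted links (Dijkstra)."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry, a cheaper path was already found
        for neigh, cost in graph.get(node, {}).items():
            nd = d + cost
            if nd < dist.get(neigh, float("inf")):
                dist[neigh] = nd
                heapq.heappush(heap, (nd, neigh))
    return dist

# Hypothetical topology: routers A-D with per-link costs.
graph = {
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 2, "D": 5},
    "C": {"A": 4, "B": 2, "D": 1},
    "D": {"B": 5, "C": 1},
}
print(dijkstra(graph, "A"))   # {'A': 0, 'B': 1, 'C': 3, 'D': 4}
```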
A routing protocol is a software mechanism by which routers communicate and share information about the topology
of the network, and the capabilities of each routing node. It thus implements the network-global rules by which traffic
is directed within a network and across multiple networks. Different protocols are often used for different topologies
or different application areas. For example, the Open Shortest Path First (OSPF) protocol is generally used for routing
packets between subnetworks within an enterprise and the Border Gateway Protocol (BGP) is used on a global scale.
BGP, in particular is the de facto standard of worldwide Internet routing.
Comparison Chart:

BASIS FOR COMPARISON | STATIC ROUTING                   | DYNAMIC ROUTING
Configuration        | Manual                           | Automatic
Routing table        | Routes are user-defined          | Routes are dynamically filled in the table
Route updates        | Do not change with the topology  | Updated according to change in topology
Routing algorithms   | No complex routing algorithms    | Uses complex routing algorithms to perform routing operations
Suited for           | Small networks                   | Large networks
Link failure         | Link failure obstructs rerouting | Failure of a link doesn't affect the rerouting
Security             | Provides high security           | Less secure due to sending broadcasts and multicasts
Routing protocols    | No routing protocols involved    | Protocols such as RIP, EIGRP, etc. are involved in the routing process
Additional resources | Not required                     | Needs additional resources to store the information
Dynamic Routing
Dynamic routing performs the same function as static routing except it is more robust. Static routing
allows routing tables in specific routers to be set up in a static manner so network routes for packets are
set. If a router on the route goes down the destination may become unreachable. Dynamic routing allows
routing tables in routers to change as the possible routes change. There are several protocols used to
support dynamic routing including RIP and OSPF.
Routing cost
Hop count - How many routers the message must go through to reach the recipient.
Tick count - The time to route, in units of 1/18 second (ticks).
Dynamic routing protocols do not change how routing is done. They just allow for dynamic altering of
routing tables.
There are two classifications of protocols:
1. IGP - Interior Gateway Protocol. Used for routing within an autonomous system; each system on the
internet can choose its own interior routing protocol. RIP and OSPF are interior gateway protocols.
2. EGP - Exterior Gateway Protocol. Used between routers of different systems. There are two of
these, the first having the same name as this protocol description:
1. EGP - Exterior Gateway Protocol
2. BGP - Border Gateway Protocol.
The daemon "routed" uses RIP. The daemon "gated" supports IGPs and EGPs.
Distance vector - Periodically sends route table to other routers. Works best on LANs, not WANs.
Link-state - Routing tables are broadcast at startup and then only when they change. OSPF uses
link-state.
The Routing Information Protocol (RIP) is one of the oldest distance-vector routing protocols which
employ the hop count as a routing metric. RIP prevents routing loops by implementing a limit on the
number of hops allowed in a path from source to destination. The largest number of hops allowed for RIP
is 15, which limits the size of networks that RIP can support.
RIP implements the split horizon, route poisoning and holddown mechanisms to prevent incorrect routing
information from being propagated.
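A simplified sketch of a distance-vector update step in the spirit of RIP (the table layout and network names are illustrative, not RIP's wire format):

```python
RIP_INFINITY = 16  # hop counts of 16 or more mean "unreachable"

def rip_update(table, neighbor, neighbor_table):
    """Merge a neighbor's advertised routes (a Bellman-Ford step).

    table maps destination -> (metric, next_hop). Advertised metrics are
    incremented by one hop; anything reaching 16 is treated as unreachable.
    """
    for dest, metric in neighbor_table.items():
        new_metric = min(metric + 1, RIP_INFINITY)
        current = table.get(dest, (RIP_INFINITY, None))
        if new_metric < current[0]:
            table[dest] = (new_metric, neighbor)
    return table

table = {"net1": (1, "eth0")}
# Neighbor R2 advertises net2 at 2 hops and net3 at 15 hops (already at the limit).
rip_update(table, "R2", {"net2": 2, "net3": 15})
print(table)
# {'net1': (1, 'eth0'), 'net2': (3, 'R2')} — net3 would be 16 hops, so it is dropped
```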
In RIPv1, routers broadcast updates with their routing table every 30 seconds. In the early
deployments, routing tables were small enough that the traffic was not significant. As networks grew in
size, however, it became evident there could be a massive traffic burst every 30 seconds, even if the routers
had been initialized at random times.
In most networking environments, RIP is not the preferred choice for routing as its time to
converge and scalability are poor compared to EIGRP, OSPF, or IS-IS. However, it is easy to configure,
because RIP does not require any parameters, unlike other protocols.
Open Shortest Path First (OSPF) is a routing protocol for Internet Protocol (IP) networks. It uses a link
state routing (LSR) algorithm and falls into the group of interior gateway protocols (IGPs), operating within
a single autonomous system (AS). It is defined as OSPF Version 2 in RFC 2328 (1998) for IPv4. The
updates for IPv6 are specified as OSPF Version 3 in RFC 5340 (2008). OSPF supports the Classless Inter-
Domain Routing (CIDR) addressing model.
OSPF is a widely used IGP in large enterprise networks. IS-IS, another LSR-based protocol, is more
common in large service provider networks.
Open Shortest Path First (OSPF) was designed as an interior gateway protocol, for use in an autonomous
system such as a local area network (LAN). It implements Dijkstra's algorithm, also known as the shortest
path first (SPF) algorithm. As a link-state routing protocol it was based on the link-state algorithm
developed for the ARPANET in 1980 and the IS-IS routing protocol. OSPF was first standardised in 1989
as RFC 1131, which is now known as OSPF version 1. The development work for OSPF prior to its
codification as open standard was undertaken largely by the Digital Equipment Corporation, which
developed its own proprietary DECnet protocols.
Routing protocols like OSPF calculate the shortest route to a destination through the network based on an
algorithm. The first routing protocol that was widely implemented, the Routing Information Protocol (RIP),
calculated the shortest route based on hops, that is the number of routers that an IP packet had to traverse
to reach the destination host. RIP successfully implemented dynamic routing, where routing tables change
if the network topology changes. But RIP did not adapt its routing according to changing network
conditions, such as data-transfer rate. Demand grew for a dynamic routing protocol that could calculate
the fastest route to a destination. OSPF was developed so that the shortest path through a network was
calculated based on the cost of the route, taking into account bandwidth, delay and load. Therefore OSPF
undertakes route cost calculation on the basis of link-cost parameters, which can be weighted by the
administrator. OSPF was quickly adopted because it became known for reliably calculating routes through
large and complex local area networks.
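One common convention (used, for example, as a configurable default on many routers) derives the link cost as a reference bandwidth divided by the link bandwidth; a sketch under that assumption:

```python
# Assumed reference bandwidth of 100 Mbps — a common default that the
# administrator can change to weight the link-cost calculation.
REFERENCE_BW = 100_000_000  # bits per second

def ospf_cost(link_bw_bps: int) -> int:
    """Link cost = reference bandwidth / link bandwidth, with a minimum of 1."""
    return max(1, REFERENCE_BW // link_bw_bps)

print(ospf_cost(10_000_000))     # 10 — 10 Mbps Ethernet
print(ospf_cost(100_000_000))    # 1  — Fast Ethernet
print(ospf_cost(1_000_000_000))  # 1  — Gigabit: why admins often raise the reference
```

With the default reference, all links of 100 Mbps and above collapse to the same cost, which is one reason the reference bandwidth is administrator-adjustable.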
Border Gateway Protocol (BGP) is a standardized exterior gateway protocol designed to exchange
routing and reachability information among autonomous systems (AS) on the Internet. The protocol is
classified as a path vector protocol. The Border Gateway Protocol makes routing decisions based on paths,
network policies, or rule-sets configured by a network administrator and is involved in making
core routing decisions.
BGP may be used for routing within an autonomous system. In this application it is referred to as Interior
Border Gateway Protocol, Internal BGP, or iBGP. In contrast, the Internet application of the protocol may
be referred to as Exterior Border Gateway Protocol, External BGP, or eBGP.
BGP neighbors, called peers, are established by manual configuration between routers to create
a TCP session on port 179. A BGP speaker sends 19-byte keep-alive messages every 60 seconds to
maintain the connection. Among routing protocols, BGP is unique in using TCP as its transport protocol.
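The 19-byte keep-alive is the smallest BGP message: a 16-octet marker of all ones, a 2-octet length field, and a 1-octet type code (4 = KEEPALIVE, per RFC 4271):

```python
import struct

def bgp_keepalive() -> bytes:
    """Build a BGP KEEPALIVE message: marker + length + type = 19 bytes."""
    marker = b"\xff" * 16                 # 16-octet marker, all ones
    return marker + struct.pack("!HB", 19, 4)  # length=19, type=4 (KEEPALIVE)

msg = bgp_keepalive()
print(len(msg))   # 19
print(msg[-1])    # 4 — the KEEPALIVE type code
```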
When BGP runs between two peers in the same autonomous system (AS), it is referred to as Internal
BGP (iBGP or Interior Border Gateway Protocol). When it runs between different autonomous systems, it
is called External BGP (eBGP or Exterior Border Gateway Protocol). Routers on the boundary of one AS
exchanging information with another AS are called border or edge routers or simply eBGP peers and are
typically connected directly, while iBGP peers can be interconnected through other intermediate routers.
Other deployment topologies are also possible, such as running eBGP peering inside a VPN tunnel,
allowing two remote sites to exchange routing information in a secure and isolated manner. The main
difference between iBGP and eBGP peering is in the way routes that were received from one peer are
propagated to other peers. For instance, new routes learned from an eBGP peer are typically redistributed
to all iBGP peers as well as all other eBGP peers (if transit mode is enabled on the router). However, if new
routes are learned on an iBGP peering, then they are re-advertised only to all eBGP peers. These route-
propagation rules effectively require that all iBGP peers inside an AS are interconnected in a full mesh.
How routes are propagated can be controlled in detail via the route-maps mechanism. This mechanism
consists of a set of rules. Each rule describes, for routes matching some given criteria, what action should
be taken. The action could be to drop the route, or it could be to modify some attributes of the route before
inserting it in the routing table.
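The rule-evaluation logic can be sketched as an ordered list of match/action pairs; the rules and attribute names below are hypothetical, not any vendor's route-map syntax:

```python
# A toy route-map: ordered rules, each a (match predicate, action, changes)
# triple. "deny" drops the route; "permit" may also modify its attributes.
def apply_route_map(route, rules):
    for match, action, changes in rules:
        if match(route):
            if action == "deny":
                return None                   # drop the route
            return {**route, **changes}       # modify attributes, then accept
    return None  # implicit deny when no rule matches

rules = [
    (lambda r: r["prefix"].startswith("10."), "deny", {}),
    (lambda r: True, "permit", {"local_pref": 200}),
]

print(apply_route_map({"prefix": "10.1.0.0/16"}, rules))   # None — dropped
print(apply_route_map({"prefix": "203.0.113.0/24"}, rules))
# {'prefix': '203.0.113.0/24', 'local_pref': 200}
```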
5) LAN SWITCHING
LAN switching is a form of packet switching used in local area networks (LAN). Switching technologies
are crucial to network design, as they allow traffic to be sent only where it is needed in most cases, using
fast, hardware-based methods. LAN switching uses different kinds of network switches. A standard switch
is known as a layer 2 switch and is commonly found in nearly any LAN. Layer 3 or layer 4 switches require
advanced technology and are more expensive, and thus are usually only found in larger LANs or in special
network environments.
Layer 2 switching uses the media access control address (MAC address) from the host's network interface
cards (NICs) to decide where to forward frames. Layer 2 switching is hardware-based, which means
switches use application-specific integrated circuits (ASICs) to build and maintain filter tables (also known
as MAC address tables or CAM tables). One way to think of a layer 2 switch is as a multiport bridge.
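The learn-and-forward behavior of such a filter table can be sketched as follows (a toy model, not an ASIC implementation):

```python
# A minimal layer-2 switch sketch: learn source MACs per port, then forward
# frames for known destinations out one port and flood unknown ones.
class Layer2Switch:
    def __init__(self, ports):
        self.ports = set(ports)
        self.mac_table = {}  # MAC address -> port (the "CAM table")

    def receive(self, in_port, src_mac, dst_mac):
        self.mac_table[src_mac] = in_port          # learn the source
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]       # forward out one port
        return sorted(self.ports - {in_port})      # unknown: flood all others

sw = Layer2Switch(ports=[1, 2, 3, 4])
print(sw.receive(1, "aa:aa", "bb:bb"))  # [2, 3, 4] — unknown destination, flood
print(sw.receive(2, "bb:bb", "aa:aa"))  # [1]       — aa:aa was learned on port 1
```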
Layer 2 switching provides the following benefits:
Server farms — Servers need no longer be distributed to physical locations because virtual LANs can
be created to create broadcast domains and network proximity in a switched internetwork. This means
that all servers can be placed in a central location, yet a certain server can still be part of a workgroup
in a remote branch, for example.
Intranets — Allows organization-wide client/server communications based on a Web technology.
These new technologies allow more data to flow off local subnets and onto a routed network, where
a router's performance can become the bottleneck.
LIMITATIONS:
Layer 2 switches have the same limitations as bridge networks. Bridges are good if a network is designed
by the 80/20 rule: users spend 80 percent of their time on their local segment.
Bridged networks break up collision domains, but the network remains one large broadcast domain.
Similarly, layer 2 switches (bridges) cannot break up broadcast domains, which can cause performance
issues and limits the size of a network. Broadcast and multicasts, along with the slow convergence of
spanning tree, can cause major problems as the network grows. Because of these problems, layer 2 switches
cannot completely replace routers in the internetwork.
6) LAYER 3 SWITCHING
Layer 3 switching is solely based on (destination) IP address stored in the header of IP datagram (see layer
4 switching later on this page for the difference). The difference between a layer 3 switch and a router is
the way the device is making the routing decision. Traditionally, routers use microprocessors to make
forwarding decisions in software, while the switch performs only hardware-based packet switching (by
specialized ASIC with the help of content-addressable memory). However, some traditional routers can
have advanced hardware functions as well in some of the higher-end models.
The main advantage of layer 3 switches is the potential for lower network latency as a packet can be routed
without making extra network hops to a router. For example, connecting two distinct segments
(e.g. VLANs) with a router to a standard layer 2 switch requires passing the frame to the switch (first L2
hop), then to the router (second L2 hop) where the packet inside the frame is routed (L3 hop) and then
passed back to the switch (third L2 hop). A layer 3 switch accomplishes the same task without the need for
a router (and therefore additional hops) by making the routing decision itself, i.e. the packet is routed to
another subnet and switched to the destination network port simultaneously.
Because many layer 3 switches offer the same functionality as traditional routers, they can be used as
cheaper, lower-latency replacements in some networks, performing many of the same actions that routers
perform.
A router is a networking device that forwards data packets between computer networks. Routers perform
the traffic directing functions on the Internet. A data packet is typically forwarded from one router to
another router through the networks that constitute an internetwork until it reaches its destination node.
A router is connected to two or more data lines from different networks. When a data packet comes in on
one of the lines, the router reads the network address information in the packet to determine the ultimate
destination. Then, using information in its routing table or routing policy, it directs the packet to the next
network on its journey.
The most familiar type of routers are home and small office routers that simply forward IP packets between
the home computers and the Internet. An example of a router would be the owner's cable or DSL router,
which connects to the Internet through an Internet service provider (ISP). More sophisticated routers, such
as enterprise routers, connect large business or ISP networks up to the powerful core routers that forward
data at high speed along the optical fiber lines of the Internet backbone. Though routers are typically
dedicated hardware devices, software-based routers also exist.
OPERATION :
When multiple routers are used in interconnected networks, the routers can exchange information about
destination addresses using a routing protocol. Each router builds up a routing table listing the preferred
routes between any two systems on the interconnected networks.
A router has two types of network element components organized onto separate planes:
Control plane: A router maintains a routing table that lists which route should be used to forward a data
packet, and through which physical interface connection. It does this using internal preconfigured
directives, called static routes, or by learning routes dynamically using a routing protocol. Static and
dynamic routes are stored in the routing table. The control-plane logic then strips non-essential
directives from the table and builds a forwarding information base (FIB) to be used by the forwarding
plane.
Forwarding plane: The router forwards data packets between incoming and outgoing interface
connections. It forwards them to the correct network type using information that the
packet header contains matched to entries in the FIB supplied by the control plane.
APPLICATIONS :
A router may have interfaces for different types of physical layer connections, such as copper
cables, fiber optic, or wireless transmission. It can also support different network
layer transmission standards. Each network interface is used to enable data packets to be forwarded
from one transmission system to another. Routers may also be used to connect two or more logical
groups of computer devices known as subnets, each with a different network prefix.
Routers may provide connectivity within enterprises, between enterprises and the Internet, or
between internet service providers' (ISPs') networks. The largest routers (such as the Cisco CRS-
1 or Juniper PTX) interconnect the various ISPs, or may be used in large enterprise
networks. Smaller routers usually provide connectivity for typical home and office networks.
All sizes of routers may be found inside enterprises. The most powerful routers are usually found
in ISPs, academic and research facilities. Large businesses may also need more powerful routers to
cope with ever-increasing demands of intranet data traffic. A hierarchical internetworking
model for interconnecting routers in large networks is in common use.
SECURITY
FIREWALL :
In computing, a firewall is a network security system that monitors and controls incoming and
outgoing network traffic based on predetermined security rules. A firewall typically establishes a barrier
between a trusted internal network and an untrusted external network, such as the Internet.
Firewalls are often categorized as either network firewalls or host-based firewalls. Network firewalls
filter traffic between two or more networks and run on network hardware. Host-based firewalls run on host
computers and control network traffic in and out of those machines.
TYPES :
Firewalls are generally categorized as network-based or host-based. Network-based firewalls are positioned
on the gateway computers of LANs, WANs and intranets. They are either software appliances running on
general-purpose hardware, or hardware-based firewall computer appliances. Firewall appliances may also
offer other functionality to the internal network they protect, such as acting as a DHCP or VPN server for
that network. Host-based firewalls are positioned on the network node itself and control network traffic in
and out of those machines. The host-based firewall may be a daemon or service as a part of the operating
system or an agent application such as endpoint security or protection. Each has advantages and
disadvantages. However, each has a role in layered security.
Firewalls also vary in type depending on where communication originates, where it is intercepted, and the
state of communication being traced.
SIEM :
In the field of computer security, security information and event management (SIEM) software products
and services combine security information management (SIM) and security event management (SEM).
They provide real-time analysis of security alerts generated by applications and network hardware.
Vendors sell SIEM as software, as appliances or as managed services; these products are also used to log
security data and generate reports for compliance purposes.
The acronyms SEM, SIM and SIEM have been sometimes used interchangeably. The segment of security
management that deals with real-time monitoring, correlation of events, notifications and console views is
known as security event management (SEM). The second area provides long-term storage as well as
analysis, manipulation and reporting of log data and security records of the type collated by SEM software,
and is known as security information management (SIM). As with many meanings and definitions of
capabilities, evolving requirements continually shape derivatives of SIEM product-categories.
Organizations are turning to big data platforms, such as Apache Hadoop, to complement SIEM capabilities
by extending data storage capacity and analytic flexibility. The need for voice-centric visibility or vSIEM
(voice security information and event management) provides a recent example of this evolution.
The term security information event management (SIEM), coined by Mark Nicolett and Amrit Williams of
Gartner in 2005, describes the product capabilities of gathering, analyzing and presenting information from:
network and security devices
vulnerability management and policy-compliance tools
operating-system, database and application logs
external threat data
A key focus is to monitor and help manage user and service privileges, directory services and other system-
configuration changes; as well as providing log auditing and review and incident response.
CAPABILITIES / COMPONENTS :
Data aggregation: Log management aggregates data from many sources, including network, security,
servers, databases, applications, providing the ability to consolidate monitored data to help avoid
missing crucial events.
Correlation: looks for common attributes, and links events together into meaningful bundles. This
technology provides the ability to perform a variety of correlation techniques to integrate different
sources, in order to turn data into useful information. Correlation is typically a function of the Security
Event Management portion of a full SIEM solution.
Alerting: the automated analysis of correlated events and production of alerts, to notify recipients of
immediate issues. Alerts can be sent to a dashboard or via third-party channels such as email.
Dashboards: Tools can take event data and turn it into informational charts to assist in seeing patterns,
or in identifying activity that does not form a standard pattern.
Compliance: Applications can be employed to automate the gathering of compliance data, producing
reports that adapt to existing security, governance and auditing processes.
Retention: employing long-term storage of historical data to facilitate correlation of data over time,
and to provide the retention necessary for compliance requirements. Long term log data retention is
critical in forensic investigations as it is unlikely that discovery of a network breach will be at the time
of the breach occurring.
Forensic analysis: The ability to search across logs on different nodes and time periods based on
specific criteria. This mitigates having to aggregate log information in your head or having to search
through thousands and thousands of logs.
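The aggregation, correlation and alerting steps above can be sketched in a few lines of Python. The event fields and the threshold rule here are illustrative assumptions, not taken from any particular SIEM product:

```python
from collections import defaultdict

# Illustrative log events aggregated from several sources (fields are assumed).
events = [
    {"source": "firewall", "user": "alice", "type": "login_failure"},
    {"source": "vpn",      "user": "alice", "type": "login_failure"},
    {"source": "database", "user": "alice", "type": "login_failure"},
    {"source": "firewall", "user": "bob",   "type": "login_success"},
]

def correlate(events, threshold=3):
    """Correlation: link events that share a common attribute (here, the user),
    then alert when a bundle of failures crosses a simple threshold rule."""
    failures = defaultdict(list)
    for ev in events:
        if ev["type"] == "login_failure":
            failures[ev["user"]].append(ev["source"])
    alerts = []
    for user, sources in failures.items():
        if len(sources) >= threshold:
            alerts.append(f"ALERT: {user} failed logins on {len(sources)} sources: {sources}")
    return alerts

print(correlate(events))
```

A real SIEM would also correlate across time windows and feed the alerts into dashboards and retention storage; this sketch only shows the common-attribute bundling idea.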
8) Spanning Tree Protocol
The Spanning Tree Protocol (STP) is a network protocol that builds a loop-free logical
topology for Ethernet networks. The basic function of STP is to prevent bridge loops and the broadcast
radiation that results from them. Spanning tree also allows a network design to include backup links to
provide fault tolerance if an active link fails.
As the name suggests, STP creates a spanning tree within a network of connected layer-2 bridges, and
disables those links that are not part of the spanning tree, leaving a single active path between any two
network nodes. STP is based on an algorithm that was invented by Radia Perlman while she was working
for Digital Equipment Corporation.
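The core of Perlman's algorithm is to elect a root bridge (the lowest bridge ID, i.e. priority then MAC address) and then keep only one path from each bridge to the root, blocking the rest. A minimal sketch of that idea, with made-up bridge IDs and links, and a plain breadth-first tree standing in for the real path-cost computation:

```python
from collections import deque

# Hypothetical bridges: (priority, MAC) pairs; the lowest tuple wins the election.
bridges = {
    "A": (32768, "00:00:00:00:00:0a"),
    "B": (32768, "00:00:00:00:00:0b"),
    "C": (4096,  "00:00:00:00:00:0c"),  # lowest priority -> elected root
}
links = [("A", "B"), ("B", "C"), ("A", "C")]  # contains a loop A-B-C-A

root = min(bridges, key=lambda b: bridges[b])

# Build adjacency, then keep only the links on a tree rooted at the root bridge.
adj = {b: [] for b in bridges}
for u, v in links:
    adj[u].append(v)
    adj[v].append(u)

parent, seen, queue = {}, {root}, deque([root])
while queue:
    u = queue.popleft()
    for v in adj[u]:
        if v not in seen:
            seen.add(v)
            parent[v] = u
            queue.append(v)

active = {tuple(sorted((v, p))) for v, p in parent.items()}
blocked = [l for l in links if tuple(sorted(l)) not in active]
print("root:", root, "blocked:", blocked)
```

With the loop A-B-C-A, one link ends up blocked, leaving a single active path between any two nodes, which is exactly the property the protocol guarantees.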
STP was originally standardized as IEEE 802.1D but the functionality of spanning tree (802.1D), rapid
spanning tree (802.1w), and multiple spanning tree (802.1s) has since been incorporated into IEEE
802.1Q-2014.
All switch ports in a LAN where STP is enabled are categorized into the following states:
Blocking - A port that would cause a switching loop if it were active. No user data is sent or received
over a blocking port, but it may go into forwarding mode if the other links in use fail and the
spanning tree algorithm determines the port may transition to the forwarding state. BPDU data is still
received in blocking state. Prevents the use of looped paths.
Listening - The switch processes BPDUs and awaits possible new information that would cause it to
return to the blocking state. It does not populate the MAC address table and it does not forward
frames.
Learning - While the port does not yet forward frames it does learn source addresses from frames
received and adds them to the filtering database (switching database). It populates the MAC address
table, but does not forward frames.
Forwarding - A port receiving and sending data in Ethernet frames, normal operation. The
Forwarding port monitors incoming BPDUs that would indicate it should return to the blocking state
to prevent a loop.
Disabled - A network administrator has manually disabled a switch port.
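The normal progression through the port states above can be sketched as a small state machine. The transition table below is a simplification that assumes the forward-delay timers have already expired; it is not a full 802.1D implementation:

```python
# STP port states and the legal forward progression between them.
# Real bridges can also move any state back to Blocking (loop detected via
# BPDUs) or to Disabled (administrative action); timers are omitted here.
TRANSITIONS = {
    "Blocking":   ["Listening"],
    "Listening":  ["Learning", "Blocking"],
    "Learning":   ["Forwarding", "Blocking"],
    "Forwarding": ["Blocking"],
    "Disabled":   [],
}

def advance(state):
    """Move one step toward Forwarding, if the current state allows it."""
    nxt = TRANSITIONS.get(state, [])
    return nxt[0] if nxt else state

state = "Blocking"
path = [state]
while state != "Forwarding":
    state = advance(state)
    path.append(state)
print(" -> ".join(path))  # Blocking -> Listening -> Learning -> Forwarding
```

The two intermediate states (Listening, Learning) are what make classic STP slow to converge, which motivates RSTP in the next section.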
9) Rapid Spanning Tree Protocol
In 2001, the IEEE introduced Rapid Spanning Tree Protocol (RSTP) as 802.1w. RSTP provides
significantly faster spanning tree convergence after a topology change, introducing new convergence
behaviors and bridge port roles to do this. RSTP was designed to be backwards-compatible with standard
STP.
While STP can take 30 to 50 seconds to respond to a topology change, RSTP is typically able to respond
to changes within 3 × Hello times (3 × 2 seconds = 6 seconds by default) or within a few milliseconds of a
physical link failure. The Hello time is an important and configurable time interval that is used by RSTP for
several purposes; its default value is 2 seconds.
Standard IEEE 802.1D-2004 incorporates RSTP and obsoletes the original STP standard.
Rapid Spanning Tree Operation
RSTP adds new bridge port roles in order to speed convergence following a link failure. The number of
states a port can be in has been reduced to three instead of STP's original five.
RSTP bridge port roles:
Root - A forwarding port that is the best path from a non-root bridge toward the root bridge
Designated - A forwarding port for every LAN segment
Alternate - An alternate path to the root bridge. This path is different from using the root port
Backup - A backup/redundant path to a segment where another bridge port already connects
Disabled - Not strictly part of STP, a network administrator can manually disable a port
RSTP switch port states:
Discarding - No user data is sent over the port; this state merges STP's blocking and listening states
Learning - The port is not yet forwarding frames, but it learns source MAC addresses from received frames
Forwarding - The port is fully operational, receiving and sending user data
10) Multiple Spanning Tree Protocol
The Multiple Spanning Tree Protocol (MSTP), originally defined in IEEE 802.1s and later merged
into IEEE 802.1Q-2005, defines an extension to RSTP to further develop the usefulness of virtual LANs
(VLANs).
In the standard, a spanning tree that maps one or more VLANs is called a multiple spanning tree (MST). If
MSTP is implemented, a spanning tree can be defined for individual VLANs or for groups of VLANs.
Furthermore, the administrator can define alternate paths within a spanning tree. VLANs must be
assigned to a so-called multiple spanning tree instance (MSTI). Switches are first assigned to an MST
region, then VLANs are mapped against or assigned to this MST. A Common Spanning Tree (CST) is an
MST to which several VLANs are mapped; this group of VLANs is called an MST Instance (MSTI). CSTs
are backward compatible with the STP and RSTP standards. An MST that has only one VLAN assigned to
it is an Internal Spanning Tree (IST).
Unlike some proprietary per-VLAN spanning tree implementations, MSTP includes all of its spanning
tree information in a single BPDU format. Not only does this reduce the number of BPDUs required on a
LAN to communicate spanning tree information for each VLAN, but it also ensures backward
compatibility with RSTP (and in effect, classic STP too). MSTP does this by encoding additional region
information after the standard RSTP BPDU as well as a number of MSTI messages (from 0 to 64
instances, although in practice many bridges support fewer). Each of these MSTI configuration messages
conveys the spanning tree information for each instance. Each instance can be assigned a number of
configured VLANs and frames (packets) assigned to these VLANs operate in this spanning tree instance
whenever they are inside the MST region. In order to avoid conveying their entire VLAN-to-spanning-tree
mapping in each BPDU, bridges encode an MD5 digest of their VLAN-to-instance table in the MSTP
BPDU. This digest is then used by other MSTP bridges, along with other administratively configured
values, to determine if the neighboring bridge is in the same MST region as itself.
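The digest comparison described above can be sketched as follows. The table encoding here is a simplification (the real 802.1s digest is an HMAC-MD5 over a fixed-size VLAN-to-instance table with a standard key), so treat this as an illustration of the idea, not the wire format:

```python
import hashlib

def region_digest(vlan_to_instance):
    """Hash a VLAN-to-MSTI mapping so two bridges can compare configurations
    without exchanging the whole table. Simplified encoding, not 802.1s's."""
    # Canonical (sorted) order so equal tables always produce equal digests.
    encoded = ",".join(f"{vlan}:{msti}" for vlan, msti in sorted(vlan_to_instance.items()))
    return hashlib.md5(encoded.encode()).hexdigest()

# Hypothetical VLAN-to-instance tables on three bridges.
bridge_a = {10: 1, 20: 1, 30: 2}
bridge_b = {10: 1, 20: 1, 30: 2}
bridge_c = {10: 1, 20: 2, 30: 2}   # VLAN 20 mapped to a different instance

# Same table -> same digest -> same MST region; different table -> different region.
print(region_digest(bridge_a) == region_digest(bridge_b))  # True
print(region_digest(bridge_a) == region_digest(bridge_c))  # False
```

This is why two bridges whose VLAN mappings differ in even a single entry fall into different MST regions: their digests no longer match.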
MSTP is fully compatible with RSTP bridges, in that an MSTP BPDU can be interpreted by an RSTP
bridge as an RSTP BPDU. This not only allows compatibility with RSTP bridges without configuration
changes, but also causes any RSTP bridges outside of an MSTP region to see the region as a single RSTP
bridge, regardless of the number of MSTP bridges inside the region itself. In order to further facilitate this
view of an MST region as a single RSTP bridge, the MSTP protocol uses a variable known as remaining
hops as a time-to-live counter instead of the message age timer used by RSTP. The message age is
only incremented once when spanning tree information enters an MST region, and therefore RSTP
bridges will see a region as only one "hop" in the spanning tree. Ports at the edge of an MST region
connected to either an RSTP or STP bridge or an endpoint are known as boundary ports. As in RSTP,
these ports can be configured as edge ports to facilitate rapid changes to the forwarding state when
connected to endpoints.