ccna_mod1_notes

The document compares the OSI and TCP/IP models, highlighting their structures, purposes, and key differences. The OSI model consists of seven layers that define specific functions for data communication, while the TCP/IP model has four layers that combine some OSI functions. Additionally, it discusses encapsulation processes, the role of the Transport Layer, and the significance of protocols like TCP and UDP in ensuring reliable data transmission.

TCP/IP vs OSI Model

Two network models are widely used today: the OSI model and the TCP/IP model. TCP/IP vs. OSI is a persistent topic in the networking field, so let's see how the two models really differ.

OSI Model
The OSI (Open Systems Interconnection) model was created by the International
Organization for Standardization (ISO), an international standard-setting body. It was
designed to be a reference model for describing the functions of a communication
system.

The OSI model provides a framework for creating and implementing networking
standards and devices and describes how network applications on different computers
can communicate through the network media.

The OSI model has seven layers, each describing a different function of data traveling
through a network. Here is the graphical representation of these layers:

The layers are usually numbered from the bottom up, meaning that the Physical layer is considered the first layer. It is useful to memorize these layers, since there will certainly be some questions about them on the CCNA exam. Most people learn the mnemonic "Please Do Not Throw Sausage Pizza Away":

So, what is the purpose of these layers? They are most commonly used by vendors, who implement functionality for a particular layer into their networking devices, which makes interoperability with devices from other vendors easier.

Here is a brief description of each of the layers of the OSI model.


1. Physical Layer – defines how to move bits from one device to another. It details
how cables, connectors, and network interface cards are supposed to work and
how to send and receive bits.
2. Data Link Layer – encapsulates a packet in a frame. A frame contains a header
and a trailer that enable devices to communicate. A header (most commonly)
contains a source and destination MAC address. A trailer contains the Frame
Check Sequence field, which detects transmission errors. The data link layer has
two sublayers:
 Logical Link Control – used for flow control and error detection.
 Media Access Control – used for hardware addressing and controlling
the access method.
3. Network Layer – defines device addressing, routing, and path determination.
Device (logical) addressing is used to identify a host on a network (e.g. by its IP
address).
4. Transport Layer – segments big chunks of data received from the upper layer
protocols. Establishes and terminates connections between two computers. Used
for flow control and data recovery.
5. Session Layer – defines establishing and terminating a session between the two
systems.
6. Presentation Layer – defines data formats. Compression and encryption are
defined at this layer.
7. Application Layer – this layer is the closest to the user. It enables network
applications to communicate with other network applications.

It is a common practice to reference a protocol by the layer number or layer name. For
example, HTTPS is referred to as an application (or Layer 7) protocol. Network devices
are also sometimes described according to the OSI layer on which they operate – e.g. a
Layer 2 switch or a Layer 7 firewall.

The following table shows which protocols reside on which layer of the OSI model:

TCP/IP Model
The TCP/IP model was created in the 1970s by the Defense Advanced Research
Projects Agency (DARPA) as an open, vendor-neutral, public networking model. Like
the OSI model, it describes general guidelines for designing and implementing
computer protocols. It consists of four layers: Network Access, Internet, Transport, and
Application:
The following picture shows the comparison between the TCP/IP vs. OSI model:

As you can see from the picture above, the TCP/IP model has fewer layers than the OSI
model. The OSI model’s Application, Presentation, and Session layers are merged into
a single layer in the TCP/IP model. Also, the Physical and Data Link layers are called
the Network Access layer in the TCP/IP model. Here is a brief description of each layer:

1. Network Access Layer – defines the protocols and hardware required to deliver
data across a physical network.
2. Internet Layer – defines the protocols for logically transmitting packets over the
network.
3. Transport Layer – defines protocols for setting up the level of transmission
service for applications. This layer is responsible for the reliable transmission of
data and the error-free delivery of packets.
4. Application Layer – defines protocols for node-to-node application
communication and provides services to the application software running on a
computer.

Differences Between TCP/IP vs OSI Model


There are other differences between these two models besides the obvious difference in the number of layers. The OSI model prescribes the steps needed to transfer data over a network and is very specific about them, defining which functions are performed at each layer and how. The TCP/IP model is not that specific. It can be said that the OSI model prescribes, while the TCP/IP model describes.

TCP/IP suite of protocols


The TCP/IP suite is a set of protocols used on computer networks today (most notably on the Internet). It provides end-to-end connectivity by specifying how data should be packetized, addressed, transmitted, routed, and received on a TCP/IP network. This functionality is organized into four abstraction layers, and each protocol in the suite resides in a particular layer.

The TCP/IP suite is named after its most important protocols, the Transmission Control
Protocol (TCP) and the Internet Protocol (IP). Some of the protocols included in the
TCP/IP suite are:

 ARP (Address Resolution Protocol) – used to associate an IP address with a MAC address.
 IP (Internet Protocol) – used to deliver packets from the source host to the destination host based on the IP addresses.
 ICMP (Internet Control Message Protocol) – used to detect and report network error conditions. Used in ping.
 TCP (Transmission Control Protocol) – a connection-oriented protocol that
enables reliable data transfer between two computers.
 UDP (User Datagram Protocol) – a connectionless protocol for data transfer.
Since a session is not created before the data transfer, there is no guarantee of
data delivery.
 FTP (File Transfer Protocol) – used for file transfers from one host to another.
 Telnet (Telecommunications Network) – used to connect and issue commands
on a remote computer.
 DNS (Domain Name System) – used to resolve host names to IP addresses.
 HTTP (Hypertext Transfer Protocol) – used to transfer files (text, graphic
images, sound, video, and other multimedia files) on the World Wide Web.

The following table shows which protocols reside on which layer of the TCP/IP model:

Encapsulation in OSI and TCP/IP Models


The term encapsulation is used to describe a process of adding headers and trailers
around some data. This process can be explained with the four-layer TCP/IP model,
with each step describing the role of the layer. For example, here is what happens
when you send an email using your favourite email program (such as Outlook or
Thunderbird):

1. The email is sent from the Application layer to the Transport layer.
2. The Transport layer encapsulates the data and adds its own header with its own information, such as which port will be used, and passes the data to the Internet layer.
3. The Internet layer encapsulates the received data and adds its own header, usually with information about the source and destination IP addresses. The Internet layer then passes the data to the Network Access layer.
4. The Network Access layer is the only layer that adds both a header and a trailer. The data is then sent through a physical network link.

Here is a graphical representation of how each layer adds its own information:

Each packet (header + encapsulated data) defined by a particular layer has a specific
name:

 Frame – encapsulated data defined by the Network Access layer. A frame can
have both a header and a trailer.
 Packet – encapsulated data defined by the Internet (Network) layer. The header contains the source and destination IP addresses.
 Segment – encapsulated data as defined by the Transport layer. Information such as the source and destination ports or sequence and acknowledgment numbers is included in the header.
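To tie the encapsulation steps and these PDU names together, here is a minimal Python sketch of the idea. The header contents and field layouts are simplified placeholders for illustration, not real protocol formats:

# Conceptual encapsulation: each layer prepends its own header (illustrative only).
app_data = b"Subject: hello"                      # data handed down by the Application layer

segment = b"TCP|src=49152|dst=25|" + app_data     # Transport layer adds ports -> segment
packet  = b"IP|10.0.0.1->10.0.0.2|" + segment     # Internet layer adds IP addresses -> packet
frame   = b"ETH|aa:aa->bb:bb|" + packet + b"|FCS" # Network Access layer adds header AND trailer -> frame

print(frame)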

NOTE
The term decapsulation refers to the process of removing headers and trailers as data passes from
lower to upper layers. This process happens on the computer that is receiving data.

Data encapsulation in the OSI model


Just like with the TCP/IP layers, each OSI layer asks for services from the next lower layer. The lower layer encapsulates the higher layer's data behind a header (Data Link layer protocols also add a trailer).

While the TCP/IP model uses terms like segment, packet and frame to refer to a data
packet defined by a particular layer, the OSI model uses a different term: protocol data
unit (PDU). A PDU represents a unit of data with headers and trailers for the particular
layer, as well as the encapsulated data. Since the OSI model has 7 layers, PDUs are
numbered from 1 to 7, with the Physical layer being the first one. For example, the term
Layer 3 PDU refers to the data encapsulated at the Network layer of the OSI model.

Here is a graphical representation of all the PDUs in the OSI model:


Transport Layer Explanation – Layer 4 of the OSI
Model
Did you ever wonder how the raw data (message) from an application on our desktop is transmitted over the Internet? By using the OSI model as a reference, we are able to understand how the raw data is transmitted from one host and received by the other end host without error. The OSI model has seven (7) layers. In this article, we will concentrate on Layer 4, which is the Transport Layer.

The upper layers, the Application Layer, Presentation Layer, and Session Layer, are
responsible for preparing and sending the raw data. In contrast, the lower layers, the
Network Layer, Data Link Layer, and Physical Layer, are responsible for encapsulating
the raw data by using headers so that the network devices like routers and switches can
understand and direct the traffic to the right device.

Contributions of the Transport Layer on Data Transmission


The Transport Layer is responsible for end-to-end communication over the network and provides service to the upper-layer protocols (the application layer). Simply put, it is responsible for tracking the conversations (raw data) between the multiple applications that are passing through the network.
NOTE
The Transport Layer provides logical communication between applications that run on different hosts by simply adding a transport header to the raw data. The Protocol Data Unit (PDU) is now called a Segment.

Networking devices, like routers and switches, and end devices, like desktops and servers, have limitations on the amount of data that can be inserted in an IP packet. Because of that, the Transport Layer segments and reassembles the data (messages) between the sender and the receiver.

Whenever a host sends a message into the network (Internet), the Transport Layer prepares and separates the raw data (message) into smaller pieces of data for delivery. When received on the other host, the Transport Layer reassembles those smaller pieces of data and sends them to the upper layers.
The Application Layer has many protocols, each serving a different function. Email traffic uses the SMTP and POP3 protocols, while HTTP and HTTPS are the protocols used for web browsing. Each protocol is formatted differently based on its purpose.

If a different protocol is received by a specific application, then the application will drop the data. If a web server receives SMTP traffic, the data will be dropped because the web server expects to receive HTTP or HTTPS. The role of the Transport Layer is to ensure that the data is transmitted and delivered to the intended application.

Transport Layer Protocols – TCP and UDP


Every application uses a unique decimal number (a port number) to ensure that the data is sent to and received by the intended application as it passes through the network or the Internet. The commonly used Transport Layer protocols responsible for message delivery are Transmission Control Protocol (TCP) and User Datagram Protocol (UDP).

TCP is a connection-oriented protocol, which means it guarantees the delivery of the message, while UDP is a connectionless protocol that sends the data without error correction. Under TCP and UDP are port numbers that are used to distinguish the specific type of application. A specific port number is attached when sending the data so that the data will be received by exactly the intended application. The diagram below shows a segment in which the raw data is encapsulated by a transport header (source and destination port).

Ports explained
A port is a 16-bit number used to identify specific applications and services. TCP and
UDP specify the source and destination port numbers in their packet headers and that
information, along with the source and destination IP addresses and the transport
protocol (TCP or UDP), enables applications running on hosts on a TCP/IP network to
communicate.
Applications that provide a service (such as FTP and HTTP servers) open a port on the
local computer and listen for connection requests. A client can request the service by
pointing the request to the application’s IP address and port. A client can use any locally
unused port number for communication. Consider the following example:

In the picture above you can see that a host with an IP address of 192.168.0.50 wants
to communicate with the FTP server. Because FTP servers use, by default, the well-
known port 21, the host generates the request and sends it to the FTP server’s IP
address and port. The host uses the locally unused port 1200 for communication. The FTP server receives the request, generates the response, and sends it to the host's IP address and port.
Port numbers range from 0 to 65535. The first 1024 ports are reserved for use by certain privileged services:

NOTE
The combination of an IP address and a port number is called a socket. In our example the socket
would be 192.168.0.50:1200.
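As a hedged illustration of ports and sockets, the following Python sketch opens a listening socket on an arbitrary, unprivileged example port and connects to it from a client whose local (source) port is picked automatically by the operating system. The addresses and port number here are example values only:

import socket

# "Server" application: listen on port 2121 (an arbitrary example port).
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 2121))
server.listen(1)

# "Client": the OS picks any locally unused source port for us.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", 2121))

conn, addr = server.accept()
print("client socket:", client.getsockname())   # e.g. ('127.0.0.1', 51344) -> an IP:port pair, i.e. a socket
print("server sees client at:", addr)

client.close(); conn.close(); server.close()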

TCP (Transmission Control Protocol) Explained


One of the main protocols in the TCP/IP suite is Transmission Control Protocol
(TCP). TCP provides reliable and ordered delivery of data between applications running
on hosts on a TCP/IP network. Because of its reliable nature, TCP is used by
applications that require high reliability, such as FTP, SSH, SMTP, HTTP, etc.

TCP is connection-oriented, which means that, before data is sent, a connection between two hosts must be established. The process used to establish a TCP connection is known as the three-way handshake. After the connection has been established, the data transfer phase begins. After the data is transmitted, the connection is terminated.

One other notable characteristic of TCP is its reliable delivery. TCP uses sequence
numbers to identify the order of the bytes sent from each computer so that the data can
be reconstructed in order. If any data is lost during the transmission, the sender can
retransmit the data.

Because of all of its characteristics, TCP is considered to be complicated and costly in terms of network usage. The TCP header is at least 20 bytes long (up to 60 bytes with options) and consists of the following fields:
 source port – the port number of the application on the host sending the data.
 destination port – the port number of the application on the host receiving the
data.
 sequence number – used to identify each byte of data.
 acknowledgment number – the next sequence number that the receiver is
expecting.
 header length – the size of the TCP header.
 reserved – always set to 0.
 flags – used to set up and terminate a session.
 window – the window size the sender is willing to accept.
 checksum – used for error-checking of the header and data.
 urgent – indicates the offset from the current sequence number, where the
segment of non-urgent data begins.
 options – various TCP options, such as Maximum Segment Size (MSS) or
Window Scaling.
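As an illustration of these fields, here is a small Python sketch that unpacks the fixed 20-byte part of a TCP header using the struct module. The sample header bytes are made up for the example:

import struct

# 20 fixed TCP header bytes: ports, sequence/ack numbers, offset+flags, window, checksum, urgent pointer.
sample = struct.pack("!HHIIBBHHH", 49152, 80, 5432, 0, (5 << 4), 0x02, 64240, 0, 0)

src, dst, seq, ack, offset_byte, flags, window, checksum, urgent = struct.unpack("!HHIIBBHHH", sample)
header_len = (offset_byte >> 4) * 4   # the header length field counts 32-bit words

print(f"src port={src} dst port={dst} seq={seq} ack={ack}")
print(f"header length={header_len} bytes, flags=0x{flags:02x} (0x02 = SYN), window={window}")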

NOTE
TCP is a Transport layer protocol (Layer 4 of the OSI model).

TCP three-way handshake


Since TCP is a connection-oriented protocol, a connection needs to be established
before two devices can communicate. TCP uses a process called three-way handshake
to negotiate the sequence and acknowledgment fields and start the session. Here is a
graphical representation of the process:

As the name implies, the three-way handshake process consists of three steps:

1. Host A initiates the connection by sending a TCP SYN packet to the destination host. The packet contains a random sequence number (e.g. 5432) which marks the beginning of the sequence numbers for the data that Host A will transmit.
2. The server receives the packet and responds with its own sequence number. The response also includes the acknowledgment number, which is Host A's sequence number incremented by 1 (in our case, that would be 5433).
3. Host A acknowledges the response of the server by sending the acknowledgment number, which is the server's sequence number incremented by 1.

Here is another picture with the numbers included:

After the data transmission process is finished, TCP will terminate the connection
between two endpoints. This four-step process is illustrated below:

1. The client application that wants to close the connection sends a TCP segment
with the FIN (Finished) flag set to 1.
2. The server receives the TCP segment and acknowledges it with the ACK
segment.
3. The server sends its own TCP segment with the FIN flag set to 1 to the client in order to terminate the connection.
4. The client acknowledges the server’s FIN segment and closes the connection.

UDP (User Datagram Protocol) Explained


One other important protocol in the TCP/IP suite is User Datagram Protocol (UDP). This protocol is basically a scaled-down version of TCP. Just like TCP, this protocol
provides delivery of data between applications running on hosts on a TCP/IP network,
but, unlike TCP, it does not sequence the data and does not care about the order in
which the segments arrive at the destination. Because of this it is considered to be an
unreliable protocol. UDP is also considered to be a connectionless protocol, since no
virtual circuit is established between two endpoints before the data transfer takes place.

Because it does not provide many of the features that TCP does, UDP uses far fewer network resources than TCP. UDP is commonly used with two types of applications:

 applications that are tolerant of lost data – VoIP (Voice over IP) uses
UDP because if a voice packet is lost, by the time the packet would be
retransmitted, too much delay would have occurred, and the voice would be
unintelligible.
 applications that have some application mechanism to recover lost data –
Network File System (NFS) performs recovery with application layer code, so
UDP is used as a transport-layer protocol.

The UDP header is 8 bytes long and consists of the following fields:

Here is a description of each field:

 source port – the port number of the application on the host sending the data.
 destination port – the port number of the application on the host receiving the
data.
 length – the length of the UDP header and data.
 checksum – checksum of both the UDP header and UDP data fields.
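To illustrate UDP's connectionless nature, here is a brief Python sketch: a datagram is simply sent to an address and port, with no handshake and no delivery guarantee. The address and port are example values only:

import socket

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 5005))                       # example port

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello over UDP", ("127.0.0.1", 5005))    # no connection is established first

data, addr = receiver.recvfrom(1024)
print(data, "from", addr)

sender.close(); receiver.close()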

NOTE
UDP is a Transport layer protocol (Layer 4 of the OSI model).
IP header
An IP header is a prefix to an IP packet that contains information about the IP version,
length of the packet, source and destination IP addresses, etc. It consists of the
following fields:

Here is a description of each field:

 Version – the version of the IP protocol. For IPv4, this field has a value of 4.
 Header length – the length of the header in 32-bit words. The minimum value is 20 bytes, and the maximum value is 60 bytes.
 Priority and Type of Service – specifies how the datagram should be handled.
The first 3 bits are the priority bits.
 Total length – the length of the entire packet (header + data). The minimum
length is 20 bytes, and the maximum is 65,535 bytes.
 Identification – used to differentiate fragmented packets from different
datagrams.
 Flags – used to control or identify fragments.
 Fragment offset – used for fragmentation and reassembly if the packet is too large to put in a frame.
 Time to live – limits a datagram’s lifetime. If the packet doesn’t get to its
destination before the TTL expires, it is discarded.
 Protocol – defines the protocol used in the data portion of the IP datagram. For
example, TCP is represented by the number 6 and UDP by 17.
 Header checksum – used for error-checking of the header. If a packet arrives at
a router and the router calculates a different checksum than the one specified in
this field, the packet will be discarded.
 Source IP address – the IP address of the host that sent the packet.
 Destination IP address – the IP address of the host that should receive the
packet.
 Options – used for network testing, debugging, security, and more. This field is
usually empty.
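As a rough illustration of the Header checksum field, this Python sketch computes the standard ones'-complement checksum over a minimal 20-byte IPv4 header. The addresses and other field values are made up for the example:

import struct

def ipv4_checksum(header: bytes) -> int:
    # Sum the header as 16-bit words, fold the carries back in, then take the ones' complement.
    total = sum(struct.unpack("!%dH" % (len(header) // 2), header))
    while total > 0xFFFF:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

# Minimal header: version/IHL, ToS, total length, ID, flags/fragment offset, TTL,
# protocol (6 = TCP), checksum placeholder (0), source 10.0.0.1, destination 10.0.0.2.
header = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 40, 0, 0, 64, 6, 0,
                     bytes([10, 0, 0, 1]), bytes([10, 0, 0, 2]))
print(hex(ipv4_checksum(header)))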
Unicast, Multicast, and Broadcast Addresses
There are three types of Ethernet addresses:

1. Unicast Addresses

Unicast addresses represent a single LAN interface. A unicast frame will be sent to a
specific device, not to a group of devices on the LAN:

The unicast address will have the value of the MAC address of the destination device.

2. Multicast Addresses

Multicast addresses represent a group of devices in a LAN. A frame sent to a multicast address will be forwarded to a group of devices on the LAN:

Multicast frames have a value of 1 in the least-significant bit of the first octet of the
destination address. This helps a network switch to distinguish between unicast and
multicast addresses. One example of an Ethernet multicast address would
be 01:00:0C:CC:CC:CC, which is the address used by CDP (Cisco Discovery
Protocol).

3. Broadcast Addresses

Broadcast addresses represent all devices on the LAN. Frames sent to a broadcast
address will be delivered to all devices on the LAN:

The broadcast address has the value of FFFF.FFFF.FFFF (all binary ones). The switch will flood broadcast frames out of all ports except the port on which the frame was received.
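Here is a small Python sketch, purely for illustration, that classifies a destination MAC address as unicast, multicast, or broadcast using the rules above:

def mac_type(mac: str) -> str:
    # Expects a colon- or dash-separated MAC such as 01:00:0C:CC:CC:CC.
    octets = [int(part, 16) for part in mac.replace("-", ":").split(":")]
    if all(octet == 0xFF for octet in octets):
        return "broadcast"
    if octets[0] & 0x01:          # least-significant bit of the first octet set -> group address
        return "multicast"
    return "unicast"

print(mac_type("D8:D3:85:EB:12:E3"))   # unicast
print(mac_type("01:00:0C:CC:CC:CC"))   # multicast (used by CDP)
print(mac_type("FF:FF:FF:FF:FF:FF"))   # broadcast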

Types of IP Addresses
The IP addresses are divided into three different types, based on their operational
characteristics:

1. unicast IP addresses – an address of a single interface. The IP addresses of this type are used for one-to-one communication. Unicast IP addresses are used to direct packets to a specific host. Here is an example:

In the picture above you can see that the host wants to communicate with the server. It uses the (unicast) IP address of the server (192.168.0.150) to do so.

2. multicast IP addresses – used for one-to-many communication. Multicast messages are sent to IP multicast group addresses. Routers forward copies of the packet out to every interface that has hosts subscribed to that group address. Only the hosts that need to receive the message will process the packets. All other hosts on the LAN will discard them. Here is an example:

R1 has sent a multicast packet destined for 224.0.0.9. This is an RIPv2 packet, and only
routers on the network should read it. R2 will receive the packet and read it. All other
hosts on the LAN will discard the packet.
3. broadcast IP addresses – used to send data to all possible destinations in the
broadcast domain (the one-to-everybody communication). The broadcast address for a
network has all host bits on. For example, for the network 192.168.30.0
255.255.255.0 the broadcast address would be 192.168.30.255*. Also, the IP address
of all 1’s (255.255.255.255) can be used for local broadcast. Here’s an example:

R1 wants to communicate with all hosts on the network and has sent a broadcast
packet to the broadcast IP address of 192.168.30.255. All hosts in the same broadcast
domain will receive and process the packet.

*This is because the subnet mask of 255.255.255.0 means that the last octet of the IP address represents the host bits, and eight binary 1s written in decimal is 255.
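A quick way to check this is Python's ipaddress module, shown here purely as an illustration:

import ipaddress

network = ipaddress.ip_network("192.168.30.0/255.255.255.0")
print(network.network_address)      # 192.168.30.0   – all host bits set to 0
print(network.broadcast_address)    # 192.168.30.255 – all host bits set to 1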

What is an IPv4 Address and its Role in the Network?
An IPv4, or Internet Protocol version 4, address is a 32-bit address, usually written as four decimal numbers separated by periods. It uniquely identifies a network interface in a device. IP is a part of the TCP/IP (Transmission Control Protocol/Internet Protocol) suite, where IP is the principal set of rules for communication on the Internet. An IP address needs to be allocated to devices, such as PCs, printers, servers, routers, switches, etc., so that they can communicate with each other on the network and out to the Internet.
IPv4 Address Format
IPv4 addresses are expressed as a set of four numbers in decimal format, and each set
is separated by a dot. Thus, the term ‘dotted decimal format.’ Each set is called an
‘octet’ because a set is composed of 8 bits. The figure below shows the binary format of
each octet in the 192.168.10.100 IP address:

A number in an octet can range from 0 to 255. Therefore, the full IPv4 address space
goes from 0.0.0.0 to 255.255.255.255. The IPv4 address has two parts, the network
part and the host part. A subnet mask is used to identify these parts.

Network Part

The network part of the IPv4 address is on the left-hand side of the IP address. It specifies the particular network to which the IPv4 address belongs. The network portion of the address also identifies the IP address class of the IPv4 address.

For example, we have the IPv4 address 192.168.10.100 and a /24 subnet mask. /24 simply means that the first 24 bits, starting from the left side, are the network portion of the IPv4 address. The remaining 8 of the 32 bits are the host portion.

Host Part
The host portion of the IPv4 address uniquely identifies the device or the interface on
your network. Hosts that have the same network portion can communicate with one
another directly, without the need for the traffic to be routed.
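As an illustrative check, Python's ipaddress module can split the example address 192.168.10.100/24 into its network and host portions:

import ipaddress

iface = ipaddress.ip_interface("192.168.10.100/24")
print(iface.network)                 # 192.168.10.0/24 – the network part
print(iface.ip)                      # 192.168.10.100  – the full interface address
print(int(iface.ip) & 0xFF)          # 100 – the host part (the last 8 bits for a /24)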
IPv4 Address Allocation
The Internet Protocol address can be allocated to hosts or interfaces either manually or
dynamically.

 Static – a static IP address is set manually on the device. It is best practice to set static IP addresses on network devices, such as routers and switches, and on servers as well.
 Dynamic – a dynamic IP address is automatically allocated to a device via Dynamic Host Configuration Protocol (DHCP). Dynamic IP addresses are best used on end devices, such as PCs.

Types of IPv4 Addresses


We have two types of IP addresses, namely public IP addresses and private IP
addresses.

 Public IP address – used to route Internet traffic. This is used on the Internet and is
given out by Internet Service Providers (ISPs) to their customers.
 Private IP address – used in private networks for internal traffic within the LAN. Private addresses are not routable on the Internet.

Subnet Mask Explained


 An IP address is divided into two parts: a network part and a host part. For example, an IP class A address consists of 8 bits identifying the network and 24 bits identifying the host. This is because the default subnet mask for a class A IP address is 8 bits long (or, written in dotted decimal notation, 255.0.0.0). What does this mean? Well, like an IP address, a subnet mask also consists of 32 bits. Computers use it to determine the network part and the host part of an address. The 1s in the subnet mask represent the network part, and the 0s the host part.
 Computers work only with bits. The math used to determine a network range is
binary AND.


 Let’s say that we have the IP address of 10.0.0.1 with the default subnet mask of
8 bits (255.0.0.0).
First, we need to convert the IP address to binary:
 IP address: 10.0.0.1 = 00001010.00000000.00000000.00000001
Subnet mask 255.0.0.0 = 11111111.00000000.00000000.00000000
 Computers then use the AND operation to determine the network number:

 The computer can then determine the size of the network. Only IP addresses that begin with 10 will be in the same network. So, in this case, the range of addresses in this network is 10.0.0.0 – 10.255.255.255.
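The same AND operation can be reproduced in a few lines of Python, shown here only to illustrate the bit-level math:

import ipaddress

ip   = int(ipaddress.ip_address("10.0.0.1"))
mask = int(ipaddress.ip_address("255.0.0.0"))

network = ip & mask                        # binary AND of address and mask
print(ipaddress.ip_address(network))       # 10.0.0.0 – the network number
print(f"{ip:032b}")                        # 00001010000000000000000000000001
print(f"{mask:032b}")                      # 11111111000000000000000000000000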
NOTE
A subnet mask must always be a series of 1s followed by a series of 0s.

Slash Notation

Aside from the dotted decimal format, we can also write the subnet mask in slash notation: a slash '/' followed by the number of subnet mask bits. To determine the slash notation of a subnet mask, convert the dotted decimal format into binary, count the number of leading 1s, and put a slash in front of that count.

For example, we have the dotted decimal subnet mask of 255.0.0.0. In binary, it is 11111111.00000000.00000000.00000000. The number of leading 1s is 8; therefore, the slash notation of 255.0.0.0 is /8.
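The conversion can also be done programmatically; this short, illustrative Python snippet simply counts the 1 bits in the mask:

import ipaddress

mask = "255.0.0.0"
prefix_len = bin(int(ipaddress.ip_address(mask))).count("1")   # count the leading 1 bits
print(f"/{prefix_len}")      # /8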

Classes of IP addresses
TCP/IP defines five classes of IP addresses: class A, B, C, D, and E. Each class has a
range of valid IP addresses. The value of the first octet determines the class. IP
addresses from the first three classes (A, B and C) can be used for host addresses. The
other two classes are used for other purposes – class D for multicast and class E for
experimental purposes.

The system of IP address classes was developed for the purpose of Internet IP address assignment. The classes were created based on network size. For example, Class A was created for the small number of networks with a very large number of hosts, while Class C was created for numerous networks with a small number of hosts.
Classes of IP addresses are:

For the IP addresses from Class A, the first 8 bits (the first decimal number) represent
the network part, while the remaining 24 bits represent the host part. For Class B, the
first 16 bits (the first two numbers) represent the network part, while the remaining 16
bits represent the host part. For Class C, the first 24 bits represent the network part,
while the remaining 8 bits represent the host part.

Consider the following IP addresses:

 10.50.120.7 – because this is a Class A address, the first number (10) represents
the network part, while the remainder of the address represents the host part
(50.120.7). This means that, in order for devices to be on the same network, the
first number of their IP addresses has to be the same for both devices. In this
case, a device with the IP address of 10.47.8.4 is on the same network as the
device with the IP address listed above. The device with the IP address 11.5.4.3
is not on the same network, because the first number of its IP address is
different.
 172.16.55.13 – because this is a Class B address, the first two numbers (172.16)
represent the network part, while the remainder of the address represents the
host part (55.13). A device with the IP address of 172.16.254.3 is on the same
network, while a device with the IP address of 172.55.54.74 isn’t.
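For illustration only, here is a tiny Python function that derives the class from the first octet, using the standard classful boundaries (A below 128, B 128–191, C 192–223, D 224–239, E 240 and above):

def ip_class(address: str) -> str:
    # Classful addressing determines the class from the first octet alone.
    first_octet = int(address.split(".")[0])
    if first_octet < 128:
        return "A"
    if first_octet < 192:
        return "B"
    if first_octet < 224:
        return "C"
    if first_octet < 240:
        return "D (multicast)"
    return "E (experimental)"

print(ip_class("10.50.120.7"))     # A
print(ip_class("172.16.55.13"))    # B
print(ip_class("192.168.10.100"))  # C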

NOTE
The system of network address ranges described here is generally bypassed today by use of
the Classless Inter-Domain Routing (CIDR) addressing.

Special IP address ranges that are used for special purposes are:

 0.0.0.0/8 – addresses used to communicate with the local network
 127.0.0.0/8 – loopback addresses
 169.254.0.0/16 – link-local addresses (APIPA)
Subnetting Explained
Subnetting is the practice of dividing a network into two or more smaller networks. It
increases routing efficiency, enhances the security of the network, and reduces the size
of the broadcast domain.

Consider the following example:

In the picture above we have one huge network: 10.0.0.0/8. All hosts on the network
are in the same subnet, which has the following disadvantages:

 a single broadcast domain – all hosts are in the same broadcast domain. A
broadcast sent by any device on the network will be processed by all hosts,
creating lots of unnecessary traffic.
 network security – each device can reach any other device on the network,
which can present security problems. For example, a server containing sensitive
information shouldn’t be in the same network as the user’s workstations.
 organizational problems – in large networks, different departments are usually
grouped into different subnets. For example, you can group all devices from
the Accounting department in the same subnet and then give access to
sensitive financial data only to hosts from that subnet.

The network above could be subnetted like this:


Now, two subnets were created for different departments: 10.0.0.0/24 for Accounting
and 10.1.0.0/24 for Marketing. Devices in each subnet are now in a different broadcast
domain. This will reduce the amount of traffic flowing on the network and allow us to
implement packet filtering on the router.
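As an illustrative check with Python's ipaddress module, both department subnets are indeed carved out of the original large network:

import ipaddress

big_network = ipaddress.ip_network("10.0.0.0/8")
accounting  = ipaddress.ip_network("10.0.0.0/24")
marketing   = ipaddress.ip_network("10.1.0.0/24")

print(accounting.subnet_of(big_network))   # True
print(marketing.subnet_of(big_network))    # True
print(accounting.num_addresses)            # 256 addresses in each /24 subnet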
CIDR (Classless inter-domain routing)
CIDR (Classless inter-domain routing) is a method of public IP address assignment.
It was introduced in 1993 by the Internet Engineering Task Force (IETF) with the following goals:

 to deal with the IPv4 address exhaustion problem
 to slow down the growth of routing tables on Internet routers

Before CIDR, public IP addresses were assigned based on the class boundaries:

 Class A – the classful subnet mask is /8. The number of possible IP addresses is
16,777,216 (2 to the power of 24).
 Class B – the classful subnet mask is /16. The number of addresses is 65,536
 Class C – the classful subnet mask is /24. Only 256 addresses are available.

Some organizations were known to have gotten an entire Class A public IP address (for
example, IBM got all the addresses in the 9.0.0.0/8 range). Since these addresses can’t
be assigned to other companies, there was a shortage of available IPv4 addresses.
Also, since IBM probably didn’t need more than 16 million IP addresses, a lot of
addresses were unused.

To combat this, the classful scheme of allocating IP addresses was abandoned. The new system was classless – a classful network was split into multiple smaller networks. For example, if a company needs 12 public IP addresses, it would get something like this: 190.5.4.16/28.

The number of usable IP addresses can be calculated with the following formula:

2 to the power of host bits – 2

In the example above, the company got 14 usable IP addresses from the 190.5.4.16 – 190.5.4.31 range, because there are 4 host bits and 2 to the power of 4 minus 2 is 14. The first and the last address are the network address and the broadcast address, respectively. All other addresses inside the range can be assigned to Internet hosts.
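The same arithmetic can be checked with a short, illustrative Python snippet:

import ipaddress

block = ipaddress.ip_network("190.5.4.16/28")
host_bits = 32 - block.prefixlen
print(2 ** host_bits - 2)                                 # 14 usable addresses
print(block.network_address, block.broadcast_address)     # 190.5.4.16 190.5.4.31
print(len(list(block.hosts())))                           # 14 – the assignable addresses in between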

Private IP addresses explained


The original design of the Internet intended that each host on every network should
have a real, routable IP address. An organization that would like to access the Internet
would complete some paperwork to describe its internal network and the number of
hosts on it. The organization would then receive a number of IP addresses, according to
its needs. But there was one huge problem with this concept – if each host on each
network in the world was provided with an unique IP address, we would have run out of
IP addresses a long time ago!
Therefore, the concept of private IP addressing was developed to address the IP
address exhaustion problem. The private IP addresses can be used on the private
network of any organization in the world and are not globally unique.

Consider the following example:

In the example above you can see that two unrelated organizations use the same
private IP network (10.0.0.0/24) inside their respective internal networks. Because
private IP addresses are not globally unique, both organizations can use private IP
addresses from the same range. To access the Internet, the organizations can use a
technology called Network Address Translation (NAT), which we will describe in the
later lessons.

There are three ranges of addresses that can be used in a private network (e.g. your home LAN or office):

 10.0.0.0 – 10.255.255.255
 172.16.0.0 – 172.31.255.255
 192.168.0.0 – 192.168.255.255

Internet routers are configured to discard any packets coming from the private IP
address ranges, so these addresses are not routable on the Internet.
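Python's ipaddress module can be used as a quick, illustrative check of whether an address falls into a private range (note that is_private also covers a few other special-purpose blocks besides the three RFC 1918 ranges listed above):

import ipaddress

for addr in ["10.0.0.5", "172.20.1.1", "192.168.0.50", "8.8.8.8"]:
    print(addr, ipaddress.ip_address(addr).is_private)
# The first three are private (RFC 1918) and not routable on the Internet; 8.8.8.8 is public.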
Ethernet explained
Ethernet is the most used networking technology for LANs today. It defines wiring and
signaling for the Physical layer of the OSI model. For the Data Link layer, it defines
frame formats and protocols.

Ethernet is described in the IEEE 802.3 standard. It uses the Carrier Sense Multiple Access with Collision Detection (CSMA/CD) access method and supports speeds of up to 100 Gbps. It can use coaxial, twisted-pair, and fiber-optic cables. Ethernet uses frames with
source and destination MAC addresses to deliver data.
NOTE
The term Ethernet LAN refers to a combination of computers, switches, and different kinds of
cables that use the Ethernet standard to communicate over the network. It is by far the most popular
LAN technology today.

What is an Ethernet Frame?


We have already learned that encapsulated data defined by the Network Access layer is
called an Ethernet frame. An Ethernet frame starts with a header, which contains the
source and destination MAC addresses, among other data. The middle part of the frame
is the actual data. The frame ends with a field called Frame Check Sequence (FCS).

The Ethernet frame structure is defined in the IEEE 802.3 standard. Here is a graphical
representation of an Ethernet frame and a description of each field in the frame:

 Preamble – informs the receiving system that a frame is starting and enables
synchronisation.
 SFD (Start Frame Delimiter) – signifies that the Destination MAC Address field
begins with the next byte.
 Destination MAC – identifies the receiving system.
 Source MAC – identifies the sending system.
 Type – defines the type of protocol inside the frame, for example IPv4 or IPv6.
 Data and Pad – contains the payload data. Padding data is added to meet the
minimum length requirement for this field (46 bytes).
 FCS (Frame Check Sequence) – contains a 32-bit Cyclic Redundancy Check
(CRC) which allows detection of corrupted data.

The FCS field is the only field present in the Ethernet trailer. It allows the receiver to
discover whether errors occurred in the frame. Note that Ethernet only detects in-transit
corruption of data – it does not attempt to recover a lost frame. Other higher level
protocols (e.g. TCP) perform error recovery.
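Ethernet's FCS is a CRC-32 computed over the frame. As a rough illustration (not the exact bit ordering used on the wire), Python's zlib can show how a single flipped bit changes the checksum and would cause the receiver to discard the frame:

import zlib

frame_payload = b"example Ethernet frame contents"
fcs = zlib.crc32(frame_payload)
print(hex(fcs))

corrupted = bytearray(frame_payload)
corrupted[0] ^= 0x01                       # flip one bit "in transit"
print(hex(zlib.crc32(bytes(corrupted))))   # different CRC -> receiver detects the corruption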
MAC & IP addresses
MAC address

A Media Access Control (MAC) address is a 48-bit (6 bytes) address that is used for
communication between two hosts in an Ethernet environment. It is a hardware
address, which means that it is stored in the firmware of the network card.

Every network card manufacturer gets a universally unique 3-byte code called
the Organizationally Unique Identifier (OUI). Manufacturers agree to give all NICs a
MAC address that begins with the assigned OUI. The manufacturer then assigns a
unique value for the last 3 bytes, which ensures that every MAC address is globally unique.

MAC addresses are usually written in the form of 12 hexadecimal digits. For example,
consider the following MAC address:
D8-D3-85-EB-12-E3

Every hexadecimal character represents 4 bits, so the first six hexadecimal characters
represent the vendor (Hewlett Packard in this case).
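For illustration, a few lines of Python can split a MAC address into its OUI (vendor) part and the vendor-assigned part:

mac = "D8-D3-85-EB-12-E3"
octets = mac.split("-")

oui = ":".join(octets[:3])          # first 3 bytes – Organizationally Unique Identifier
device_part = ":".join(octets[3:])  # last 3 bytes  – assigned by the manufacturer
print("OUI:", oui, "| device:", device_part)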

How to find out your own MAC address?

If you are using Windows, start the Command Prompt (Start – Programs – Accessories
– Command Prompt). Type the ipconfig/all command and you should see a field
called Physical Address under the Ethernet adapter settings:
If you are using Linux, type the ifconfig command. You should see your MAC address referred to as HWaddr.

IP address

An IP address is a 32-bit number that identifies a host on a network. Each device that
wants to communicate with other devices on a TCP/IP network needs to have an IP
address configured. For example, in order to access the Internet, your computer will
need to have an IP address assigned (usually obtained by your router from the ISP).

An IP address is usually written in the form of four decimal numbers separated by periods (e.g. 10.0.50.1). The first part of the address represents the network the device is on (e.g. 10.0.0.0), while the second part of the address identifies the host device (e.g. 10.0.50.1).

In contrast to a MAC address, an IP address is a logical address. It can be configured manually or it can be obtained from a DHCP server.
NOTE
The term IP address is usually used for IPv4, which is the fourth version of the IP protocol. A newer
version exists, IPv6, and uses 128-bit addressing.

Private IP addresses

There are three ranges of addresses that can be used in a private network (e.g. your
home LAN). These addresses are not routable through the Internet.

Private addresses ranges are:

 10.0.0.0 – 10.255.255.255
 172.16.0.0 – 172.31.255.255
 192.168.0.0 – 192.168.255.255

How to find out your IP address

If you are using Windows, start the Command Prompt (Start – Programs – Accessories
– Command Prompt). Enter the ipconfig command. You should see a field called IP
Address:
Linux users:

Enter ifconfig. You should see a field called inet addr:

Types of Ethernet cabling


There are three cable types commonly used for Ethernet cabling: coaxial, twisted pair,
and fiber-optic cabling. In today’s LANs, the twisted pair cabling is the most popular type
of cabling, but the fiber-optic cabling usage is increasing, especially in high performance
networks. Coaxial cabling is generally used for cable Internet access. Let’s explain all
three cable types in more detail.

Coaxial cabling

A coaxial cable has an inner conductor that runs down the middle of the cable. The
conductor is surrounded by a layer of insulation which is then surrounded by another
conducting shield, which makes this type of cabling resistant to outside interference.
This type of cabling comes in two types – thinnet and thicknet. Both types have a maximum transmission speed of 10 Mbps. Coaxial cabling was previously used in computer networks, but today it has largely been replaced by twisted-pair cabling (Photo credit: Wikipedia).
Twisted-pair cabling

A twisted-pair cable has four pairs of wires. These wires are twisted around each other to reduce crosstalk and outside interference. This type of cabling is common in current LANs.

Twisted-pair cabling can be used for telephone and network cabling. It comes in two
versions, UTP (Unshielded Twisted-Pair) and STP (Shielded Twisted-Pair). The
difference between these two is that an STP cable has an additional layer of insulation
that protects data from outside interferences.

Here you can see what a twisted-pair cable looks like (Photo credit: Wikipedia):
A twisted-pair cable uses an 8P8C connector, sometimes incorrectly referred to as an RJ45 connector (Photo credit: Wikipedia).

Fiber-optic cabling

This type of cabling uses optical fibers to transmit data in the form of light signals. The
cables have strands of glass surrounded by a cladding material (Photo credit:
Wikipedia):

This type of cabling can support greater cable lengths than any other cabling type (up to
a couple of miles). The cables are also immune to electromagnetic interference. As you
can see, this cabling method has many advantages over other methods but its main
drawback is that it is more expensive.

There are two types of fiber-optic cables:

 Single-mode fiber (SMF) – uses only a single ray of light to carry data. Used for larger
distances.
 Multi-mode fiber (MMF) – uses multiple rays of light to carry data. Less expensive than
SMF.
Four types of connectors are commonly used:

 ST (Straight-tip connector)
 SC (Subscriber connector)
 FC (Fiber Channel)
 LC (Lucent Connector)

IEEE Ethernet standards


Ethernet is defined in a number of IEEE 802.3 standards. These standards define the
physical and data-link layer specifications for Ethernet. The most important 802.3
standards are:

 10Base-T (IEEE 802.3) – 10 Mbps with category 3 unshielded twisted pair (UTP)
wiring, up to 100 meters long.
 100Base-TX (IEEE 802.3u) – known as Fast Ethernet, uses category 5, 5E, or 6
UTP wiring, up to 100 meters long.
 100Base-FX (IEEE 802.3u) – a version of Fast Ethernet that uses multi-mode
optical fiber. Up to 412 meters long.
 1000Base-CX (IEEE 802.3z) – uses copper twisted-pair cabling. Up to 25 meters
long.
 1000Base-T (IEEE 802.3ab) – Gigabit Ethernet that uses Category 5 UTP
wiring. Up to 100 meters long.
 1000Base-SX (IEEE 802.3z) – 1 Gigabit Ethernet running over multimode fiber-
optic cable.
 1000Base-LX (IEEE 802.3z) – 1 Gigabit Ethernet running over single-mode fiber.
 10GBase-T (IEEE 802.3an) – 10 Gbps connections over category 5e, 6, and 7 UTP cables.

Notice how the first number in the name of the standard represents the speed of the
network in megabits per second. The word base refers to baseband, meaning that the
signals are transmitted without modulation. The last part of the standard name refers to
the cabling used to carry signals. For example, 1000Base-T means that the speed of
the network is up to 1000 Mbps, baseband signaling is used, and the twisted-pair
cabling will be used (T stands for twisted-pair).

Types of Ethernet cables – straight-through and crossover
Ethernet cables can come in two forms when it comes to wiring:
1. Straight-through cable

This cable type has identical wiring on both ends (pin 1 on one end of the cable is
connected to pin 1 at the other end of the cable, pin 2 is connected to pin 2 etc.):

This type of cable is used to connect the following devices:

 computer to hub
 computer to switch
 router to hub
 router to switch

Computers and routers use wires 1 and 2 to transmit data and wires 3 and 6 to receive
data. Hubs and switches use wires 1 and 2 to receive data and wires 3 and 6 to send
data. That is why, if you want to connect two computers together, you will need a
crossover cable.

2. Crossover cable

With the crossover cable, the wire pairs are swapped, which means that different pins
are connected together – pin 1 on one end of the cable is connected to pin 3 on the
other end, pin 2 on one end is connected to pin 6 on the other end (Photo credit:
Wikipedia):
This type of cable is used when you need to connect two devices that use the same wires to send and receive data. For example, consider connecting two computers together. If
you use straight-through cable, with identical wiring in both ends, both computers will
use wires 1 and 2 to send data. If computer A sends some packets to computer B,
computer A will send that data using wires 1 and 2. That will cause a problem because
computers expect packets to be received on wires 3 and 6, and your network will not
work properly. This is why you need to use a crossover cable for such connections.
NOTE
Newer devices support the Auto MDI-X capability to automatically detect and configure the required
cable connection type. This removes the need for a specific cable type between certain
devices. Also, note that Gigabit Ethernet and faster standards use all four wire pairs to transfer data in both directions simultaneously.

Cisco PoE Explained – What is Power over Ethernet?
All devices need electricity to operate. But what if a device is installed far from a power outlet, like an access point that needs to be mounted on an upper level to provide good Wi-Fi signal and coverage? One solution is to use Power over Ethernet (PoE), typically provided by a networking device like a network switch.

Power over Ethernet (PoE) is a technology that transmits both electrical power and
network data over an ethernet cable. With PoE, each Ethernet interface of LAN
switches can supply power to devices like VoIP phones, IP cameras or security
cameras, and wireless access points (AP).

NOTE
A PoE device, such as a LAN switch, that supplies power is called Power Sourcing Equipment (PSE). The power it supplies is in Direct Current (DC) form. A device that is being powered, such as an IP phone or access point, is called a Powered Device (PD).
How Does Power over Ethernet Work?
Some devices are not capable of being powered through their Ethernet ports and might be destroyed if plugged into a PSE (PoE switch). The PSE must also ensure that the power level supplied to the PD is sufficient and will not damage it. To meet these requirements, PoE has an IEEE-standardized mechanism called autonegotiation.

Autonegotiation initiates a handshake procedure that establishes how much power the PD, or connected device, requires. The handshake needs to take place while the PD is off, since the PD needs the power to boot and initialize. Using autonegotiation, the PSE (PoE switch) avoids powering up devices that are not capable of receiving power over their Ethernet ports and thus avoids damaging the Ethernet port or the device itself.

PoE (Power over Ethernet) Standards


During autonegotiation, the PD signals the PSE how much power, in watts, it requires. The standards below define the power in watts that the PSE will supply to the PD based on its requirement:

1. PoE – the IEEE 802.3af standard supplies up to 15 watts of DC power from the PSE, with 12.95 watts available at the PD due to losses on the Ethernet cable. It uses two pairs of wires, with CAT3 or CAT5 cables as a medium.

2. PoE+ – the IEEE 802.3at standard supplies up to 30 watts of DC power from the PSE, with 25.5 watts available at the PD due to losses on the Ethernet cable. It also uses two pairs of wires, with CAT5 or higher cables as a medium.

3. UPoE (Universal PoE) – the IEEE 802.3bt standard supplies up to 60 watts of DC power from the PSE, with 51 watts available at the PD due to losses on the Ethernet cable. It uses four pairs of wires as a medium.

4. UPoE+ (Universal PoE+) – the IEEE 802.3bt standard supplies up to 100 watts of DC power from the PSE, with 71.3 watts available at the PD due to losses on the Ethernet cable. It also uses four pairs of wires as a medium.

NOTE
There are classes under the PoE standards. Under 802.3af, there are four classes with different power levels in watts; 802.3at adds one more class, and 802.3bt adds four more.
PoE Implementation
Implementing PoE on a LAN requires planning and design effort. Powered devices, power requirements, switch ports, switch power supplies, and PoE standards should be checked before implementing PoE on the LAN. Below are the ways we can implement PoE on our network using network switches.

1. Endspan – a PoE switch, sometimes called an "endpoint". The Ethernet ports of the switch can supply both power and data to devices that support PoE (PDs).

2. Midspan – if an existing non-PoE switch on the network needs to power a device that requires PoE, then a PoE device is placed between the non-PoE switch and the PD. The PoE device connects to the non-PoE switch and supplies power to the PD. A commonly used midspan device is a PoE injector.

Network devices
Let's take a look at the network devices commonly found in today's LANs.

Hubs

A hub serves as a central point to which all of the hosts in a network connect. A hub is an OSI Layer 1 device and has no concept of Ethernet frames or addressing. It simply receives a signal from one port and sends it out to all other ports. Here is an example 4-port Ethernet hub (source: Wikipedia):
Today, hubs are considered obsolete and switches are commonly used instead.

Switches

Like hubs, a switch is used to connect multiple hosts together, but it has many advantages over a hub. A switch is an OSI Layer 2 device, which means that it can inspect received traffic and make forwarding decisions. Each port on a switch is a separate collision domain and can run in full duplex mode (photo credit: Wikipedia).

Routers

A router is a device that routes packets from one network to another. A router is most
commonly an OSI Layer 3 device. Routers divide broadcast domains and have traffic
filtering capabilities.

The picture below shows a typical home router:


In the next sections we will describe each of these devices in more detail.

Network hubs explained


A hub serves as a central point to which all of the hosts in a network connect. It is an OSI Layer 1 device and has no concept of Ethernet frames or addressing – it simply receives the signal from one port and sends it out to all other ports. Here is an example 4-port Ethernet hub (image source: Wikipedia):
As mentioned above, hubs have no way of determining which port a signal should be sent out of; instead, an electrical signal is sent out of every port. All nodes on the network will receive the data, and the data will eventually reach the correct destination, but with a lot of unnecessary network traffic:

In the example above you can see that the hub has sent the received signal out all other ports, except the incoming port. Hubs are therefore considered obsolete, and switches are commonly used instead in modern LANs. Hubs have numerous disadvantages compared to switches, such as:

 they are not aware of the traffic that passes through them
 they create only one large collision domain
 a hub typically operates in half duplex
 there is also a security issue with hubs since the traffic is forwarded to all ports
(except the source port), which makes it possible to capture all traffic on a
network with a network sniffer!

NOTE
Hubs are also known as multiport repeaters because that is basically what they do – repeat the
electrical signal that comes in one port out all other ports (except the incoming port).

Network Switch Explained


Just like hubs and bridges, a switch is used to connect multiple hosts together, but it
has many advantages over them. The switch is an OSI Layer 2 device, which means
that it can inspect received traffic and make forwarding decisions. Each port on a switch
is a separate collision domain and can run in a full duplex mode (photo credit:
Wikipedia).

A switch manages the flow of data across a network by inspecting the incoming frame’s
destination MAC address and forwarding the frame only to the host for which the data
was intended. Each switch has a dynamic table (called the MAC address table) that
maps MAC addresses to ports. With this information, a switch can identify which system
is sitting on which port and where to send the received frame.

To better understand how a switch works, consider the following example:


As you can see from the example above, Host A is trying to communicate with Host C and sends a frame with Host C's MAC address as the destination. The frame arrives at the switch, which looks at the destination MAC address. The switch then searches for that MAC address in its MAC address table. If the MAC address is found, the switch forwards the frame only out of the port connected to the frame's destination. Hosts connected to other ports will not receive the frame.

Collision and Broadcast Domain


Collision Domain

A collision domain is, as the name implies, the part of a network where packet collisions
can occur. A collision occurs when two devices send a packet at the same time on the
shared network segment. The packets collide and both devices must send the packets
again, which reduces network efficiency. Collisions are common in a hub environment because each port on a hub is in the same collision domain. By contrast, each port on a bridge, a switch, or a router is in a separate collision domain.

The following example illustrates collision domains:

We have 6 collision domains in the example above.


NOTE
Remember, each port on a hub is in the same collision domain. Each port on a bridge, switch, or
router is in a separate collision domain.

Broadcast Domain

A broadcast domain is a domain in which a broadcast is forwarded. A broadcast domain
contains all devices that can reach each other at the data link layer (OSI Layer 2) by
using broadcast. All ports on a hub or a switch are by default in the same broadcast
domain. Each port on a router is in a different broadcast domain, and routers don't
forward broadcasts from one broadcast domain to another.

The following example clarifies the concept:

In the picture above we have three broadcast domains, since all ports on a hub or a
switch are in the same broadcast domain, while each port on a router is in a different
broadcast domain.

How Switches Work


Each network card has a unique identifier called a Media Access Control (MAC)
address. This address is used in LANs for communication between devices on the
same network segment. Devices that want to communicate need to know each other's
MAC addresses before sending out packets.

Switches also use MAC addresses to make accurate forwarding or filtering decisions.
When a switch receives a frame, it associates the media access control (MAC) address
of the sending device with the port on which the frame was received. The table that stores
such associations is called a MAC address table. This table is stored in volatile memory,
so the associations are erased after the switch is rebooted.

Switches usually perform these three functions in a LAN:

 address learning – switches learn MAC addresses by examining the source
MAC address of each received frame.
 forward/filter decisions – switches decide whether to forward or filter a frame,
based on the destination MAC address.
 loop avoidance – switches use Spanning Tree Protocol (STP) to prevent
network loops while still permitting redundancy.

To better understand how a network switch works, take a look at the following example:

Let’s say that host A wants to communicate with host B for the first time. Host A knows
the IP address of host B, but since this is the first time the two hosts communicate, the
hardware (MAC) addresses are not known. Host A uses the ARP process to find out the
MAC address of host B. The switch forwards the ARP request out all ports except the
port host A is connected to. Host B receives the ARP request and responds with its
MAC address. Host B also learns the MAC address of host A (because host A sent its
MAC address in the ARP request). Host C receives the ARP request, but doesn’t
respond since the IP address listed in the request is not its own.

As mentioned above, a switch learns which MAC addresses are associated with which
port by examining the source MAC address of each received frame. Because host B
responded with the ARP reply that included its MAC address, the switch knows the
MAC address of host B and stores that address in its MAC address table. The switch
also knows host A's MAC address, because it appeared as the source address of the ARP request.

Now, when host A sends a frame to host B, the switch looks up the destination in its MAC
address table and forwards the frame only out the Fa0/2 port – the port to which host B is
connected. The other hosts on the network will not be involved in the communication:
NOTE
By default, MAC addresses stay in the switch's MAC address table for 5 minutes. So if host A and
host B decide to communicate again within the next 5 minutes, a new ARP process will not be necessary.

You can display the MAC address table of the switch by using the show mac-address-
table command:
Switch#show mac-address-table
          Mac Address Table
-------------------------------------------
Vlan    Mac Address       Type        Ports
----    -----------       --------    -----
   1    0003.e489.513e    DYNAMIC     Fa0/2
   1    00e0.8f13.6970    DYNAMIC     Fa0/1

The output is pretty much self-explanatory: all ports belong to VLAN 1 and the MAC
addresses associated with specific ports are listed. DYNAMIC means that the addresses
were learned dynamically from the source MAC addresses of received frames.
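
To tie the output above to the learning process, here is a minimal Python sketch of how those DYNAMIC entries could be built up, assuming host A sits on Fa0/1 and host B on Fa0/2 as in the walkthrough above. The dictionary-based table and the print format are illustrative only, not how the switch actually stores its table.

# Minimal sketch of dynamic MAC address learning: the switch records the
# source MAC address of every received frame against the incoming port.
# The addresses and ports mirror the sample output above.

mac_address_table = {}   # maps MAC address -> port

def learn(src_mac, in_port):
    # This is what produces the DYNAMIC entries in 'show mac-address-table'.
    mac_address_table[src_mac] = in_port

learn("00e0.8f13.6970", "Fa0/1")   # host A's ARP request arrives on Fa0/1
learn("0003.e489.513e", "Fa0/2")   # host B's ARP reply arrives on Fa0/2

for mac, port in mac_address_table.items():
    print(f"   1    {mac}    DYNAMIC     {port}")
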
Layer 2 switching
Layer 2 switching (or Data Link layer switching) is the process of using devices’
MAC addresses to decide where to forward frames. Switches and bridges are used for
Layer 2 switching. They break up one large collision domain into multiple smaller ones.

In a typical LAN, all hosts are connected to one central device. In the past, this device
was usually a hub. But hubs had many disadvantages, such as not being aware of the
traffic that passes through them, creating one large collision domain, etc. To overcome
some of the problems with hubs, bridges were created. They were better than hubs
because they created multiple collision domains, but they had a limited number of ports.
Finally, switches were created and are still widely used today. Switches have more
ports than bridges and can inspect incoming traffic and make forwarding decisions
accordingly. Also, each port on a switch is a separate collision domain, so no packet
collisions should occur.

Layer 2 switches are faster than routers because they don't spend time examining the
Network layer header information. Instead, they look at the frame's hardware addresses
to decide what to do with the frame – forward it, flood it, or drop it. Here are the other major
advantages of Layer 2 switching:

 fast hardware-based bridging (using ASIC chips)
 wire speed
 low latency
 low cost

Here is an example of a typical LAN – the switch serves as a central device
that connects all devices together:
Differences between hubs and switches

To better understand the concept of frame switching based on the hardware address of
a device, you need to understand how switches differ from hubs.

First, consider an example of a LAN in which all hosts connect to a hub:

As mentioned previously, hubs create only a single collision domain, so the chance for a
collision to occur is high. The hub depicted above simply repeats the signal it receives
out all ports, except the one from which the signal was received, so no frame filtering
takes place. Imagine if you had 20 hosts connected to a hub: a frame would be sent to
19 hosts, instead of just the one it was intended for! This can also cause security problems,
because an attacker can capture all traffic on the network.
Now consider the way switches work. We have the same topology as above, only
this time we are using a switch instead of a hub:

Switches increase the number of collision domains. Each port is one collision domain,
which means that the chances for collisions to occur are minimal. A switch learns which
device is connected to which port and forwards a frame based on the destination MAC
address included in the frame. This reduces traffic on the LAN and enhances security.
Network router explained
A router is a network device that routes packets from one network to another. It is
usually connected to two or more different networks. When a packet arrives on a router
port, the router reads the address information in the packet to determine which port
the packet should be sent out of. For example, a router provides you with Internet
access by connecting your LAN to the Internet.
NOTE
A router is most commonly an OSI Layer 3 device, since its forwarding decision is based on
OSI Layer 3 information – the destination IP address. Routers divide broadcast domains,
provide full duplex communication, and have traffic filtering capabilities.

The picture below shows a typical home router:

If two hosts from different networks want to communicate, they will need a router in
order to exchange data. Consider the following example:
We have a network of three hosts and a router. Note that each computer is on a
different network. Host A wants to communicate with Host B and sends a packet with
Host B's IP address (10.0.0.20) as the destination to the router. The router receives the
packet, compares the packet's destination IP address to the entries in its routing table,
and finds a match. It then sends the packet out the interface associated with the network
10.0.0.0/24. Only Host B will receive and process the packet. In fact, Host C will not
even be aware that the communication took place.
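
As a rough sketch of the lookup the router performs, here is a short Python example using the standard ipaddress module. The routing table entries and interface names are invented to match the example (Host B's network is 10.0.0.0/24); a real router performs this lookup in its routing software or dedicated hardware, not like this.

# Minimal sketch of a routing table lookup using longest-prefix match.
# Networks and interface names are invented to match the example above.

import ipaddress

routing_table = [
    (ipaddress.ip_network("10.0.0.0/24"), "Fa0/1"),
    (ipaddress.ip_network("192.168.0.0/24"), "Fa0/0"),
    (ipaddress.ip_network("0.0.0.0/0"), "Fa0/2"),    # default route
]

def lookup(dst_ip):
    dst = ipaddress.ip_address(dst_ip)
    # Choose the most specific (longest-prefix) matching route.
    matches = [(net, iface) for net, iface in routing_table if dst in net]
    net, iface = max(matches, key=lambda entry: entry[0].prefixlen)
    return iface

print(lookup("10.0.0.20"))   # Fa0/1 -> the interface facing Host B's network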

What is a Layer 3 Switch and How Does it Work?


A Layer 3 switch is a network device that combines the functionality of a router and a
switch in a single chassis. It allows connected devices that are on the same subnet or
virtual LAN (VLAN) to exchange information at wire speed, just like a Layer 2 switch
operating at the data link layer of the OSI model, but it also has the IP routing
intelligence of a router built into it.

It can inspect incoming packets at the network layer, support routing protocols, and
make routing decisions based on the source and destination IP addresses. Because of
its combined Layer 2 and Layer 3 capabilities, this device is also popularly known as a
Multilayer Switch. Just be mindful that Layer 3 switches typically do not have WAN ports,
which should be considered while designing your network.
How do Layer 3 Switches function in the Network?
A Layer 2 switch forwards traffic between its physical interfaces according to the
MAC addresses of the connected devices, and a Layer 3 switch builds on this feature to
manage traffic in a LAN. A Layer 2 switch functions well with low to medium traffic in its
VLANs, but it has its limitations once traffic increases.

The Layer 3 switch was conceived to overcome this limitation by combining switching
and routing capabilities within the same chassis. The main difference lies in the
hardware: a Layer 3 switch combines the functions of a traditional switch and a router,
except that the router's software forwarding logic is replaced by integrated circuit
hardware to further improve performance.

Layer 3 switches can operate at both Layer 2 and Layer 3 of the OSI model. The Layer 3
switching functionality can take either of two forms:

 Cut-through switches – look only at the first packet of a series of packets
to determine its logical Layer 3 destination IP address, and then switch the
remainder of the packets in the series using the MAC address, leading to higher
data throughput rates.
 Packet-by-Packet Layer 3 (PPL3) switches – look into every packet to
determine its logical Layer 3 destination IP address. A PPL3 switch basically
functions as a high-speed router with the routing functionality built into its
hardware instead of software. Like a router, aside from forwarding packets to
their destination, a PPL3 switch performs other functions that a standard router
accomplishes, such as using the packet's checksum to verify its integrity,
updating the packet's Time to Live (TTL) after each hop, and
processing any optional information in the packet's header.

In addition to performing Layer 3 switching and routing functions, these
switches perform Layer 2 switching functions, such as bridging, at each
switch interface. You can group switch interfaces in various ways to allocate
bandwidth and contain broadcasts, which makes Layer 3 switches a powerful, scalable
technology for building high-speed Ethernet backbone networks.
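
To illustrate the idea (not any vendor's actual implementation), here is a simplified Python sketch of the decision a multilayer switch makes: traffic between hosts in the same VLAN/subnet is switched at Layer 2, while traffic between subnets is routed at Layer 3. The VLAN subnets are invented for the example, and real devices make this decision in ASIC hardware, not in software.

# Conceptual sketch only: same subnet -> Layer 2 switching by MAC,
# different subnets -> Layer 3 routing by IP. Subnets are invented.

import ipaddress

vlan_subnets = [
    ipaddress.ip_network("192.168.10.0/24"),   # VLAN 10
    ipaddress.ip_network("192.168.20.0/24"),   # VLAN 20
]

def handle(src_ip, dst_ip):
    src, dst = ipaddress.ip_address(src_ip), ipaddress.ip_address(dst_ip)
    for subnet in vlan_subnets:
        if src in subnet and dst in subnet:
            return "switch at Layer 2 (MAC address table lookup)"
    return "route at Layer 3 (routing table lookup, TTL decremented)"

print(handle("192.168.10.5", "192.168.10.7"))   # same VLAN -> switched
print(handle("192.168.10.5", "192.168.20.7"))   # different VLANs -> routed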

L3 Switch Benefits
Layer 3 switches were developed to provide the network with the following advantages:

 Better fault isolation and traffic segregation
 Simplified security management
 Reduced broadcast traffic volume
 Easier VLAN configuration
 Support for inter-VLAN routing
 Separate routing tables
 Less time and effort spent troubleshooting
 Support for flow accounting and high-speed scalability
 Lower network latency
