Unit-I Notes
Introduction
The modern world scenario is ever changing. Data communication and networks have changed the
way business and other daily affairs work; they now rely heavily on computer networks and
internetworks. A set of devices, often referred to as nodes, connected by media links is called a
network. A node can be any device capable of sending or receiving data generated by other
nodes on the network, such as a computer or a printer. The links connecting the devices are
called communication channels.
A computer network is a telecommunication channel through which we can share data with other
computers or devices connected to the same network. It is also called a data network. The best
example of a computer network is the Internet.
A computer network does not mean a system with one control unit connected to multiple other
systems as its slaves. That is a distributed system, not a computer network.
A network must be able to meet certain criteria, the most important of which are:
1. Performance
2. Reliability
3. Security

Performance can be measured in several ways, including:
● Transit time: the time taken for a message to travel from one device to another.
● Response time: the time elapsed between an enquiry and its response.

Performance also depends on:
1. Efficiency of the software
2. Number of users
3. Capability of the connected hardware

Security refers to the protection of data from any unauthorized user or access. While travelling
through the network, data passes through many network layers and can be intercepted if an
attempt is made. Hence security is also a very important characteristic of networks.
Computer networks offer several advantages:
1. Interpersonal communication: we can communicate with each other efficiently and easily.
For example, emails, chat rooms, and video conferencing are all possible because of
computer networks.
2. Resource sharing: we can share physical resources such as printers and scanners by making
them available on a network.
3. File and data sharing: authorized users are allowed to share files over the network.
Data Communication
The exchange of data between two devices through a transmission medium is called data
communication. The data is exchanged in the form of 0s and 1s. The transmission medium may
be a wired cable or a wireless link. For data communication to occur, the communicating
devices must be part of a communication system. Data communication is of two types, local and
remote, which are discussed below:
Local communication takes place when the communicating devices are in the same geographical
area, in the same building, or face to face.
Remote communication takes place over a distance, i.e., when the devices are farther apart. The
effectiveness of a data communication system can be measured through the following features:
1. Delivery: the system must deliver data to the correct destination.
2. Accuracy: the system must deliver the data accurately, without errors.
3. Timeliness: the system must deliver data in a timely manner, without significant delay.
4. Jitter: the variation in packet arrival times must be kept within acceptable limits.
IP Core/Edge Networks
The terms core network and backbone network typically refer to the high-capacity
communication facilities that connect primary nodes. The core/backbone network provides a
path for the exchange of information between different sub-networks. The edge network
provides information exchange between the access network and the core network. The devices
and facilities in edge networks (switches, routers, routing switches, integrated access devices
(IADs), and a variety of MAN/WAN devices) are often called edge devices. Edge networks
provide entry points into carrier/service provider core/backbone networks.
Access Networks and Physical Media, Delay and Loss in Packet-Switched network
Having now briefly considered the major "pieces" of the Internet architecture – the applications,
end systems, end-to-end transport protocols, routers, and links – let us now consider what can
happen to a packet as it travels from its source to its destination. Recall that a packet starts in a
host (the source), passes through a series of routers, and ends its journey in another host (the
destination). As a packet travels from one node (host or router) to the subsequent node (host or
router) along this path, the packet suffers from several different types of delays at each node
along the path. The most important of these delays are the nodal processing delay, queuing
delay, transmission delay and propagation delay; together, these delays accumulate to give
a total nodal delay. In order to acquire a deep understanding of packet switching and computer
networks, we must understand the nature and importance of these delays.
Figure 1.6-1: The delay through router A
Let us explore these delays in the context of Figure 1.6-1. As part of its end-to-end route between
source and destination, a packet is sent from the upstream node through router A to router B.
Our goal is to characterize the nodal delay at router A. Note that router A has three outbound
links, one leading to router B, another leading to router C, and yet another leading to router D.
Each link is preceded by a queue (also known as a buffer). When the packet arrives at router A
(from the upstream node), router A examines the packet's header to determine the appropriate
outbound link for the packet, and then directs the packet to the link. In this example, the
outbound link for the packet is the one that leads to router B. A packet can only be transmitted on
a link if there is no other packet currently being transmitted on the link and if there are no other
packets preceding it in the queue; if the link is currently busy or if there are other packets already
queued for the link, the newly arriving packet will then join the queue.
The time required to examine the packet's header and determine where to direct the packet is part
of the processing delay. The processing delay can also include other factors, such as the time
needed to check for bit-level errors in the packet that occurred in transmitting the packet's bits
from the upstream router to router A. After this nodal processing, the router directs the packet to
the queue that precedes the link to router B. (In section 4.7 we will study the details of how a
router operates.) At the queue, the packet experiences a queuing delay as it waits to be
transmitted onto the link. The queuing delay of a specific packet will depend on the number of
other, earlier-arriving packets that are queued and waiting for transmission across the link; the
delay of a given packet can vary significantly from packet to packet. If the queue is empty and
no other packet is currently being transmitted, then our packet's queuing delay is zero. On the
other hand, if the traffic is heavy and many other packets are also waiting to be transmitted, the
queuing delay will be long. We will see shortly that the number of packets that an arriving packet
might expect to find on arrival (informally, the average number of queued packets, which is
proportional to the average delay experienced by packets) is a function of the intensity and
nature of the traffic arriving to the queue.
Once a bit is pushed onto the link, it needs to propagate to router B. The time required to
propagate from the beginning of the link to router B is the propagation delay. The bit
propagates at the propagation speed of the link. The propagation speed depends on the physical
medium of the link (i.e., multimode fiber, twisted-pair copper wire, etc.) and is
equal to, or a little less than, the speed of light. The propagation delay is the distance between
two routers divided by the propagation speed. That is, the propagation delay is d/s, where d is the
distance between router A and router B and s is the propagation speed of the link. Once the last
bit of the packet propagates to node B, it and all the preceding bits of the packet are stored in
router B. The whole process then continues with router B now performing the forwarding.
Newcomers to the field of computer networking sometimes have difficulty understanding the
difference between transmission delay and propagation delay. The difference is subtle but
important. The transmission delay is the amount of time required for the router to push out the
packet; it is a function of the packet's length and the transmission rate of the link, but has nothing
to do with the distance between the two routers. The propagation delay, on the other hand, is the
time it takes a bit to propagate from one router to the next; it is a function of the distance
between the two routers, but has nothing to do with the packet's length or the transmission rate of
the link.
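As a small illustration of the distinction just described, both delays can be computed directly from their definitions. The numbers below (packet size, link rate, distance, propagation speed) are hypothetical example values:

```python
# Transmission delay depends only on packet length L and link rate R;
# propagation delay depends only on distance d and propagation speed s.

def transmission_delay(L_bits, R_bps):
    """Time to push all L bits of a packet onto the link."""
    return L_bits / R_bps

def propagation_delay(d_meters, s_mps):
    """Time for one bit to travel the length of the link."""
    return d_meters / s_mps

# Illustrative values: a 1,000-byte packet on a 10 Mbps link spanning
# 2,500 km of fiber (propagation speed roughly 2e8 m/s).
d_trans = transmission_delay(8 * 1000, 10e6)   # ~0.0008 s
d_prop = propagation_delay(2_500_000, 2e8)     # ~0.0125 s
print(d_trans, d_prop)
```

Note that doubling the distance changes only `d_prop`, while doubling the packet length changes only `d_trans`, exactly as the text states.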
An analogy might clarify the notions of transmission and propagation delay. Consider a highway
which has a toll booth every 100 kilometers. You can think of the highway segments between toll
booths as links and the toll booths as routers. Suppose that cars travel (i.e., propagate) on the
highway at a rate of 100 km/hour (i.e., when a car leaves a toll booth it instantaneously
accelerates to 100 km/hour and maintains that speed between toll booths). Suppose that there is a
caravan of 10 cars that are traveling together, and that these ten cars follow each other in a fixed
order. You can think of each car as a bit and the caravan as a packet. Also suppose that each toll
booth services (i.e., transmits) a car at a rate of one car per 12 seconds, and that it is late at night
so that the caravan's cars are the only cars on the highway. Finally, suppose that whenever the first
car of the caravan arrives at a toll booth, it waits at the entrance until the nine other cars have
arrived and lined up behind it. (Thus the entire caravan must be "stored" at the toll booth before
it can begin to be "forwarded".) The time required for the toll booth to push the entire caravan
onto the highway is 10 cars/(5 cars/minute) = 2 minutes. This time is analogous to the transmission
delay in a router. The time required for a car to travel from the exit of one toll booth to the next
toll booth is 100 km/(100 km/hour) = 1 hour. This time is analogous to propagation delay.
Therefore the time from when the caravan is "stored" in front of a toll booth until the caravan is
"stored" in front of the next toll booth is the sum of "transmission delay" and "the propagation
delay" – in this example, 62 minutes.
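The toll-booth arithmetic above can be reproduced in a few lines, using the analogy's own numbers:

```python
# Caravan analogy: 10 cars (bits in a packet), toll booth serves 5 cars/min
# ("transmission"), cars travel 100 km between booths at 100 km/h
# ("propagation").
cars = 10
service_rate = 5            # cars per minute (one car per 12 seconds)
distance_km = 100
speed_kmh = 100

transmission_min = cars / service_rate              # 2 minutes
propagation_min = distance_km / speed_kmh * 60      # 60 minutes
total_min = transmission_min + propagation_min      # 62 minutes
print(total_min)
```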
Let's explore this analogy a bit more. What would happen if the toll-booth service time for a
caravan were greater than the time for a car to travel between toll booths? For example, suppose
cars travel at rate 1000 km/hr and the toll booth services cars at rate one car per minute. Then the
traveling delay between toll booths is 6 minutes and the time to serve a caravan is 10 minutes. In
this case, the first few cars in the caravan will arrive at the second toll booth before the last cars
in caravan leave the first toll booth. This situation also arises in packet-switched networks – the
first bits in a packet can arrive at a router while many of the remaining bits in the packet are still
waiting to be transmitted by the preceding router.
If we let d_proc, d_queue, d_trans, and d_prop denote the processing, queuing, transmission, and
propagation delays, then the total nodal delay is given by

d_nodal = d_proc + d_queue + d_trans + d_prop
The contribution of these delay components can vary significantly. For example, d_prop can be
negligible (e.g., a couple of microseconds) for a link connecting two routers on the same
university campus; however, d_prop is hundreds of milliseconds for two routers interconnected by a
geostationary satellite link, and can be the dominant term in d_nodal. Similarly, d_trans can range
from negligible to significant. Its contribution is typically negligible for transmission rates of
10 Mbps and higher (e.g., for LANs); however, it can be hundreds of milliseconds for large
Internet packets sent over 28.8 kbps modem links. The processing delay, d_proc, is often
negligible; however, it strongly influences a router's maximum throughput, which is the
maximum rate at which a router can forward packets.
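As a sketch, the nodal-delay sum and the relative size of its components can be checked numerically. The values below are illustrative, loosely following the 28.8 kbps modem case mentioned above:

```python
# Total nodal delay as the sum of its four components (all in seconds).
def nodal_delay(d_proc, d_queue, d_trans, d_prop):
    return d_proc + d_queue + d_trans + d_prop

# Illustrative values: a 1500-byte packet on a 28.8 kbps modem link,
# with tiny processing and propagation delays and an empty queue.
L = 1500 * 8                  # packet size in bits
d_trans = L / 28_800          # ~417 ms: dominant on a slow modem link
d = nodal_delay(d_proc=2e-6, d_queue=0.0, d_trans=d_trans, d_prop=5e-6)
print(round(d * 1000, 1))     # total delay in milliseconds
```

Here d_trans dominates; on a satellite link the same computation would instead be dominated by d_prop.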
Queuing Delay
The most complicated and interesting component of nodal delay is the queuing delay, d_queue. In
fact, queuing delay is so important and interesting in computer networking that thousands of
papers and numerous books have been written about it [Bertsekas 1992] [Daigle 1991]
[Kleinrock 1975] [Kleinrock 1976] [Ross 1995]! We give only a high-level, intuitive discussion
of queuing delay here; the more curious reader may want to browse through some of the books
(or even eventually write a Ph.D. thesis on the subject!). Unlike the other three delays
(namely, d_proc, d_trans, and d_prop), the queuing delay can vary from packet to packet. For example,
if ten packets arrive to an empty queue at the same time, the first packet transmitted will suffer
no queuing delay, while the last packet transmitted will suffer a relatively large queuing delay
(while it waits for the other nine packets to be transmitted). Therefore, when characterizing
queuing delay, one typically uses statistical measures, such as average queuing delay, variance of
queuing delay and the probability that the queuing delay exceeds some specified value.
When is the queuing delay big and when is it insignificant? The answer to this question depends
largely on the rate at which traffic arrives to the queue, the transmission rate of the link, and the
nature of the arriving traffic, i.e., whether the traffic arrives periodically or whether it arrives in
bursts. To gain some insight here, let a denote the average rate at which packets arrive at the
queue (a is in units of packets/sec). Recall that R is the transmission rate, i.e., it is the rate (in
bits/sec) at which bits are pushed out of the queue. Also suppose, for simplicity, that all packets
consist of L bits. Then the average rate at which bits arrive to the queue is La bits/sec. Finally,
assume that the queue is very big, so that it can hold essentially an infinite number of bits. The
ratio La/R, called the traffic intensity, often plays an important role in estimating the extent of
the queuing delay. If La/R > 1, then the average rate at which bits arrive to the queue exceeds the
rate at which the bits can be transmitted from the queue. In this unfortunate situation, the queue
will tend to increase without bound and the queuing delay will approach infinity! Therefore, one
of the golden rules in traffic engineering is: design your system so that the traffic intensity is no
greater than one.
Now consider the case La/R ≤ 1. Here, the nature of the arriving traffic impacts the queuing
delay. For example, if packets arrive periodically, i.e., one packet arrives every L/R seconds, then
every packet will arrive to an empty queue and there will be no queuing delay. On the other
hand, if packets arrive in bursts but periodically, there can be a significant average queuing delay.
For example, suppose N packets arrive at the same time every (L/R)N seconds. Then the first
packet transmitted has no queuing delay; the second packet transmitted has a queuing delay
of L/R seconds; and more generally, the nth packet transmitted has a queuing delay
of (n−1)L/R seconds. We leave it as an exercise for the reader to calculate the average queuing
delay in this example.
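The exercise above has a simple closed form: the average wait over a burst is (N−1)/2 · L/R. A short sketch (with illustrative values) computes it both ways:

```python
# N packets arrive together every (L/R)*N seconds; the nth packet of a
# burst waits (n-1)*L/R seconds before transmission.
def avg_burst_queuing_delay(N, L_bits, R_bps):
    waits = [(n - 1) * L_bits / R_bps for n in range(1, N + 1)]
    return sum(waits) / N    # equals (N-1)/2 * L/R

# Example: bursts of 10 packets of 1,000 bits on a 1 Mbps link
# -> average queuing delay of 4.5 ms.
print(avg_burst_queuing_delay(10, 1000, 1e6))
```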
The two examples described above of periodic arrivals are a bit academic. Typically the arrival
process to a queue is random, i.e., the arrivals do not follow any pattern; packets are spaced apart
by random amounts of time. In this more realistic case, the quantity La/R is not usually sufficient
to fully characterize the delay statistics. Nonetheless, it is useful in gaining an intuitive
understanding of the extent of the queuing delay. In particular, if traffic intensity is close to zero,
then packets are pushed out at a rate much higher than the packet arrival rate; therefore, the
average queuing delay will be close to zero. On the other hand, when the traffic intensity is close
to 1, there will be intervals of time when the arrival rate exceeds the transmission capacity (due
to the burstiness of arrivals), and a queue will form. As the traffic intensity approaches 1, the
average queue length gets larger and larger. The qualitative dependence of average queuing delay
on the traffic intensity is shown in Figure 1.6-2 below.
One important aspect of Figure 1.6-2 is the fact that as the traffic intensity approaches 1, the
average queueing delay increases rapidly. A small percentage increase in the intensity will result
in a much larger percentage-wise increase in delay. Perhaps you have experienced this
phenomenon on the highway. If you regularly drive on a road that is typically congested, the fact
that the road is typically congested means that its traffic intensity is close to 1. If some event
causes an even slightly-larger-than-usual amount of traffic, the delays you experience can be
huge.
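The sharp growth of delay near intensity 1 can be seen numerically with a toy single-link queue simulation. This is only a sketch, not a rigorous queuing model: the exponential inter-arrival times, fixed packet size, and parameter values are all assumptions for illustration:

```python
import random

# Toy single-queue simulation: packets of L bits arrive with exponential
# inter-arrival times at average rate a pkts/sec; the link drains R bits/sec.
def avg_queuing_delay(a, L, R, n_packets=50_000, seed=1):
    random.seed(seed)
    t = 0.0                   # arrival clock
    free_at = 0.0             # time the link next becomes free
    total_wait = 0.0
    for _ in range(n_packets):
        t += random.expovariate(a)        # next arrival
        start = max(t, free_at)           # wait if the link is busy
        total_wait += start - t
        free_at = start + L / R           # transmission takes L/R seconds
    return total_wait / n_packets

# Delay grows sharply as traffic intensity La/R approaches 1.
for a in (100, 500, 900):     # with L=1000, R=1e6: intensity 0.1, 0.5, 0.9
    print(a * 1000 / 1e6, avg_queuing_delay(a, 1000, 1e6))
```

Running this shows average delay increasing far faster than linearly as the intensity rises, matching the qualitative curve in Figure 1.6-2.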
Packet Loss
In our discussions above, we have assumed that the queue is capable of holding an infinite
number of packets. In reality a queue preceding a link has finite capacity, although the queuing
capacity greatly depends on the switch design and cost. Because the queue capacity is finite,
packet delays do not really approach infinity as the traffic intensity approaches one. Instead, a
packet can arrive to find a full queue. With no place to store such a packet, a router
will drop that packet; that is, the packet will be lost. From an end-system viewpoint, this will
look like a packet having been transmitted into the network core, but never emerging from the
network at the destination. The fraction of lost packets increases as the traffic intensity increases.
Therefore, performance at a node is often measured not only in terms of delay, but also in terms
of the probability of packet loss. As we shall discuss in the subsequent chapters, a lost packet
may be retransmitted on an end-to-end basis, by either the application or by the transport layer
protocol.
End-to-End Delay
Our discussion up to this point has been focused on the nodal delay, i.e., the delay at a single
router. Let us conclude our discussion by briefly considering the delay from source to
destination. To get a handle on this concept, suppose there are Q−1 routers between the source
host and the destination host. Let us also suppose that the network is uncongested (so that
queuing delays are negligible), the processing delay at each router and at the source host is
d_proc, the transmission rate out of each router and out of the source host is R bits/sec, and the
propagation delay between each pair of routers and between the source host and the first router
is d_prop. The nodal delays accumulate to give an end-to-end delay

d_end-end = Q (d_proc + d_trans + d_prop),

where once again d_trans = L/R, and L is the packet size. We leave it to the reader to generalize
this formula to the case of heterogeneous delays at the nodes and to the presence of an average
queuing delay at each node.
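Assuming Q−1 routers and homogeneous, uncongested nodes as described, the end-to-end delay Q(d_proc + d_trans + d_prop) can be sketched as follows (the example values are hypothetical):

```python
# With Q-1 routers between source and destination there are Q hops, each
# contributing d_proc + d_trans + d_prop (queuing assumed negligible).
def end_to_end_delay(Q, d_proc, L_bits, R_bps, d_prop):
    d_trans = L_bits / R_bps
    return Q * (d_proc + d_trans + d_prop)

# Example: 3 routers (Q = 4 hops), 1500-byte packets, 2 Mbps links,
# 1 ms propagation delay per hop.
print(end_to_end_delay(4, 1e-6, 1500 * 8, 2e6, 1e-3))
```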
Our discussion of layering in the previous section has perhaps given the impression that the
Internet is a carefully organized and highly intertwined structure. This is certainly true in the
sense that all of the network entities (end systems, routers, and bridges) use a common set of
protocols, enabling the entities to communicate with each other. However, from a topological
perspective, to many people the Internet seems to be growing in a chaotic manner, with new
sections, branches, and wings popping up in random places on a daily basis. Indeed, unlike the
protocols, the Internet's topology can grow and evolve without approval from a central authority.
Let us now try to get a grip on the seemingly nebulous Internet topology.
As we mentioned at the beginning of this chapter, the topology of the Internet is loosely
hierarchical. Roughly speaking, from bottom-to-top the hierarchy consists of end systems (PCs,
workstations, and so on) connected to local Internet service providers (ISPs). The local ISPs are
in turn connected to regional ISPs, which are in turn connected to national and international
ISPs. The national and international ISPs are connected together at the highest tier in the
hierarchy. New tiers and branches can be added just as a new piece of Lego can be attached to an
existing Lego construction.
In this section we describe the topology of the Internet in the United States as of 2000. Let's
begin at the top of the hierarchy and work our way down. Residing at the very top of the
hierarchy are the national ISPs, which are called national service providers (NSPs). The NSPs
form independent backbone networks that span North America (and typically extend abroad as
well). Just as there are multiple long-distance telephone companies in the United States, there are
multiple NSPs that compete with each other for traffic and customers. The existing NSPs include
internetMCI, SprintLink, PSINet, UUNet Technologies, and AGIS. The NSPs typically have
high-bandwidth transmission links, with bandwidths ranging from 1.5 Mbps to 622 Mbps and
higher. Each NSP also has numerous hubs that interconnect its links and at which regional
ISPs can tap into the NSP.
The NSPs themselves must be interconnected to each other. To see this, suppose one regional
ISP, say MidWestnet, is connected to the MCI NSP and another regional ISP, say EastCoastnet, is
connected to Sprint's NSP. How can traffic be sent from MidWestnet to EastCoastnet? The
solution is to introduce switching centers, called network access points (NAPs), which
interconnect the NSPs, thereby allowing each regional ISP to pass traffic to any other regional
ISP. To keep us all confused, some of the NAPs are not referred to as NAPs but instead as MAEs
(metropolitan area exchanges). In the United States, many of the NAPs are run by RBOCs
(regional Bell operating companies); for example, PacBell has a NAP in San Francisco and
Ameritech has a NAP in Chicago. For a list of major NSPs (those connected into at least three
NAPs/MAE's), see [Haynal 1999]. In addition to connecting to each other at NAPs, NSPs can
connect to each other through so-called private peering points; see Figure 1.26. For a discussion
of NAPs as well as private peering among NSPs, see [Huston 1999a].
Because the NAPs relay and switch tremendous volumes of Internet traffic, they are typically in
themselves complex high-speed switching networks concentrated in a small geographical area
(for example, a single building). Often the NAPs use high-speed ATM switching technology in
the heart of the NAP, with IP riding on top of ATM. Figure 1.27 illustrates PacBell's San
Francisco NAP. The details of Figure 1.27 are unimportant for us now; it is worthwhile to note,
however, that the NSP hubs can themselves be complex data networks.
Figure 1.27: The PacBell NAP architecture (courtesy of the Pacific Bell Web site)
Running an NSP is not cheap. In June 1996, the cost of leasing 45 Mbps fiber optics from coast
to coast, as well as the additional hardware required, was approximately $150,000 per month.
And the fees that an NSP pays the NAPs to connect to the NAPs can exceed $300,000 annually.
NSPs and NAPs also have significant capital costs in equipment for high-speed networking. An
NSP earns money by charging a monthly fee to the regional ISPs that connect to it. The fee that
an NSP charges to a regional ISP typically depends on the bandwidth of the connection between
the regional ISP and the NSP; clearly a 1.5 Mbps connection would be charged less than a 45
Mbps connection. Once the fixed-bandwidth connection is in place, the regional ISP can pump
and receive as much data as it pleases, up to the bandwidth of the connection, at no additional
cost. If an NSP has significant revenues from the regional ISPs that connect to it, it may be able
to cover the high capital and monthly costs of setting up and maintaining an NSP. For a
discussion of the current practice of financial settlement among interconnected network
providers, see [Huston 1999b].
A regional ISP is also a complex network, consisting of routers and transmission links with rates
ranging from 64 Kbps upward. A regional ISP typically taps into an NSP (at an NSP hub), but it
can also tap directly into a NAP, in which case the regional ISP pays a monthly fee to a NAP
instead of to an NSP. A regional ISP can also tap into the Internet backbone at two or more
distinct points (for example, at an NSP hub or at a NAP). How does a regional ISP cover its
costs? To answer this question, let's jump to the bottom of the hierarchy.
End systems gain access to the Internet by connecting to a local ISP. Universities and
corporations can act as local ISPs, but backbone service providers can also serve as a local ISP.
Many local ISPs are small "mom and pop" companies, however. A popular Web site known
simply as "The List" contains links to nearly 8,000 local, regional, and backbone ISPs [List
1999]. Each local ISP taps into one of the regional ISPs in its region. Analogous to the fee
structure between the regional ISP and the NSP, the local ISP pays a monthly fee to its regional
ISP that depends on the bandwidth of the connection. Finally, the local ISP charges its customers
(typically) a flat, monthly fee for Internet access: the higher the transmission rate of the
connection, the higher the monthly fee.
We conclude this section by mentioning that any one of us can become a local ISP as soon as we
have an Internet connection. All we need to do is purchase the necessary equipment (for
example, router and modem pool) that is needed to allow other users to connect to our so-called
point of presence. Thus, new tiers and branches can be added to the Internet topology just as a
new piece of Lego can be attached to an existing Lego construction.
OSI model
OSI (Open Systems Interconnection) model was created by the International Organization
for Standardization (ISO), an international standard-setting body. It was designed to be a
reference model for describing the functions of a communication system. The OSI model
provides a framework for creating and implementing networking standards and devices and
describes how network applications on different computers can communicate through the
network media. It has seven layers, with each layer describing a different function of data
traveling through a network.
The layers are usually numbered from the bottom up, meaning that the Physical layer is considered
to be the first layer. It is useful to remember these layers, since there will certainly be a couple of
questions on the CCNA exam regarding them. Most people learn them with the mnemonic "Please
Do Not Throw Sausage Pizza Away":
● Physical – defines how to move bits from one device to another. It details how cables,
connectors and network interface cards are supposed to work and how to send and receive
bits.
● Data Link – encapsulates a packet in a frame. A frame contains a header and a trailer that
enable devices to communicate. A header, most commonly, contains a source and a
destination MAC address. A trailer contains the Frame Check Sequence field, which is used
to detect transmission errors. The data link layer has two sublayers:
1. Logical Link Control – used for flow control and error detection.
2. Media Access Control – used for hardware addressing and for controlling the access method.
● Network – defines device addressing, routing, and path determination. Device (logical)
addressing is used to identify a host on a network (e.g. by its IP address).
● Transport – segments great chunks of data received from the upper layer protocols.
Establishes and terminates connections between two computers. Used for flow control and
data recovery.
● Session – defines how to establish and terminate a session between the two systems.
● Presentation – defines data formats. Compression and encryption are defined at this layer.
● Application – this layer is the closest to the user. It enables network applications to
communicate with other network applications.
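The seven layers and the mnemonic above can be captured in a small sketch (Python, for illustration only):

```python
# OSI layers numbered from the bottom (Physical = 1), together with the
# "Please Do Not Throw Sausage Pizza Away" mnemonic in bottom-up order.
OSI_LAYERS = {
    1: "Physical",
    2: "Data Link",
    3: "Network",
    4: "Transport",
    5: "Session",
    6: "Presentation",
    7: "Application",
}

mnemonic = "Please Do Not Throw Sausage Pizza Away".split()

# Each mnemonic word's first letter matches its layer's name, bottom-up.
for number, word in zip(sorted(OSI_LAYERS), mnemonic):
    assert OSI_LAYERS[number][0] == word[0]

print([OSI_LAYERS[n] for n in sorted(OSI_LAYERS)])
```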
The following table shows which protocols reside on which layer of the OSI model:
TCP/IP model
The TCP/IP model was created in the 1970s by the Defense Advanced Research Projects Agency
(DARPA). Like the OSI model, it describes general guidelines for designing and implementing
computer protocols. It consists of four layers: Network Access, Internet, Transport, and
Application:
The following picture shows the comparison between the TCP/IP model and the OSI model:
As you can see from the picture above, the TCP/IP model has fewer layers than the OSI model.
The Application, Presentation, and Session layers of the OSI model are merged into a single
layer in the TCP/IP model. Also, Physical and Data Link layers are called Network Access layer
in the TCP/IP model.
There are some other differences between these two models besides the obvious difference in
the number of layers. The OSI model prescribes the steps needed to transfer data over a network,
and it is very specific about them, defining which protocol is used at each layer and how. The
TCP/IP model is not that specific. It can be said that the OSI model prescribes while the TCP/IP
model describes.
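The layer merging described above can be expressed as a simple mapping (a sketch; the grouping follows the comparison in the text):

```python
# Mapping of TCP/IP layers to the OSI layers they cover, as described above.
TCPIP_TO_OSI = {
    "Application":    ["Application", "Presentation", "Session"],
    "Transport":      ["Transport"],
    "Internet":       ["Network"],
    "Network Access": ["Data Link", "Physical"],
}

# All seven OSI layers are accounted for exactly once.
covered = [osi for layers in TCPIP_TO_OSI.values() for osi in layers]
assert len(covered) == 7 and len(set(covered)) == 7
print(covered)
```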
Physical Layer: Introduction to Guided and Unguided Physical Media
Media is the actual physical environment through which data travels as it moves from one
component to another, and it connects network devices. The most common types of network
media are twisted-pair cable, coaxial cable, fiber-optic cable, and wireless. Each media type has
specific capabilities and serves specific purposes.
Understanding the types of connections that can be used within a network provides a better
understanding of how networks function in transmitting data from one point to another.
Twisted-Pair Cable
Twisted-pair is a copper-wire-based cable that can be either shielded or unshielded. Twisted-pair
is the most common medium for network connectivity.
Unshielded twisted-pair (UTP) cable, as shown in Figure 4-1, is a four-pair wire. Each of the
eight individual copper wires in UTP cable is covered by an insulating material. In addition, the
wires in each pair are twisted around each other. The advantage of UTP cable is its ability to
cancel interference, because the twisted-wire pairs limit signal degradation from electromagnetic
interference (EMI) and radio frequency interference (RFI). To further reduce crosstalk between
the pairs in UTP cable, the number of twists in the wire pairs varies. UTP, as well as shielded
twisted-pair (STP) cable, must follow precise specifications as to how many twists or braids are
permitted per meter. UTP cable comes in several categories, rated by data-carrying capability:
● Category 1—Used for telephone communications; not suitable for transmitting data
● Category 2—Capable of transmitting data at speeds of up to 4 Mbps
● Category 3—Used in 10BASE-T networks; can transmit data at speeds up to 10 Mbps
● Category 4—Used in Token Ring networks; can transmit data at speeds up to 16 Mbps
● Category 5—Capable of transmitting data at speeds up to 100 Mbps
● Category 5e—Used in networks running at speeds up to 1000 Mbps (1 Gbps)
● Category 6—Consists of four pairs of 24-gauge copper wires that can transmit data at
speeds up to 1000 Mbps
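The category list above lends itself to a small lookup sketch (speeds in Mbps; Category 1 is omitted since it is not rated for data):

```python
# UTP categories and the maximum data rates listed above, in Mbps,
# ordered from lowest to highest capability.
UTP_CATEGORIES = {
    "Category 2": 4,
    "Category 3": 10,
    "Category 4": 16,
    "Category 5": 100,
    "Category 5e": 1000,
    "Category 6": 1000,
}

def min_category_for(rate_mbps):
    """Lowest category whose rated speed meets the requested rate."""
    for cat, speed in UTP_CATEGORIES.items():  # dicts preserve insertion order
        if speed >= rate_mbps:
            return cat
    return None    # no listed category supports this rate

print(min_category_for(100))    # "Category 5"
```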
Shielded twisted-pair (STP) cable, as shown in Figure 4-2, combines the techniques of shielding
and the twisting of wires to further protect against signal degradation. Each pair of wires is
wrapped in a metallic foil. The four pairs of wires are then wrapped in an overall metallic braid
or foil, usually 150-ohm cable. Specified for use in Ethernet network installations, STP reduces
electrical noise both within the cable (pair-to-pair coupling, or crosstalk) and from outside the
cable (EMI and RFI). Token Ring network topology uses STP.
When deciding between UTP and STP for your network media, consider the
following:
● Both are the least-expensive media for data communication. UTP is less expensive than
STP.
● Because most buildings are already wired with UTP, many transmission standards are
adapted to use it to avoid costly rewiring with an alternative cable type.
Twisted-pair cabling is the most common networking cabling in use today; however, some
networks still use older technologies like coaxial cable, as discussed in the next section.
Coaxial Cable
Coaxial cable consists of a hollow outer cylindrical conductor that surrounds a single inner wire
conducting element. This section describes the characteristics and uses of coaxial cable.
As shown in Figure 4-3, the single inner wire located in the center of a coaxial cable is a copper
conductor, surrounded by a layer of flexible insulation. Over this insulating material is a woven
copper braid or metallic foil that acts both as the second wire in the circuit and as a shield for the
inner conductor. This second layer, or shield, can help reduce the amount of outside interference.
An outer jacket covers this shield. The BNC connector shown looks much like a cable-television
connector and connects to an older NIC with a BNC interface.
Coaxial cable offers several advantages for use in LANs. It can span greater distances between
network nodes than either STP or UTP cable, with fewer boosts from repeaters (devices that
regenerate the signals in a network). Coaxial cable is less expensive than fiber-optic cable, and
the technology is well known; it has been used for many years for all types of data
communication.
When you work with cable, consider its size. As the thickness, or diameter, of the cable
increases, so does the difficulty in working with it. Cable must often be pulled through existing
conduits and troughs that are limited in size. Coaxial cable comes in a variety of sizes. The
largest diameter, frequently referred to as Thicknet, was specified for use as Ethernet backbone
cable because historically it had greater transmission length and noise rejection characteristics.
However, Thicknet cable can be too rigid to install easily in some environments because of its
thickness. Generally, the more difficult the network media is to install, the more expensive it is to
install. Coaxial cable is more expensive to install than twisted-pair cable, and Thicknet cable is
almost never used except for special-purpose installations, where shielding from EMI or distance
requires the use of such cables.
In the past, coaxial cable with an outside diameter of only 0.35 cm, sometimes referred to
as Thinnet, was used in Ethernet networks. It was especially useful for cable installations that
required the cable to make many twists and turns. Because Thinnet was easier to install, it was
also cheaper to install. Thus, it was also referred to as Cheapernet. However, because the outer
copper or metallic braid in coaxial cable comprised half the electrical circuit, special care needed
to be taken to ground it properly, by ensuring that a solid electrical connection existed at both
ends of the cable. Installers frequently failed to make a good connection. Connection problems
resulted in electrical noise, which interfered with signal transmission. For this reason, despite its
small diameter, Thinnet is no longer commonly used in Ethernet networks.
Although coaxial cable offers some distance advantages over twisted-pair, the disadvantages far
outweigh the benefits. If a communications signal needs to travel a greater distance at high rates
of speed, it is more common to use fiber-optic cable.
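The distance trade-offs discussed above can be sketched as a simple check against the classic maximum Ethernet segment lengths. The figures used here (100 m for twisted-pair, 185 m for Thinnet, 500 m for Thicknet) are standard Ethernet values assumed for illustration, not quoted from the notes above:

```python
# Classic maximum Ethernet segment lengths in meters.
# These are standard 10BASE-T/10BASE2/10BASE5 values, assumed for illustration.
MAX_SEGMENT_M = {
    "twisted-pair": 100,   # 10BASE-T / 100BASE-TX
    "thinnet": 185,        # 10BASE2 coaxial
    "thicknet": 500,       # 10BASE5 coaxial
}

def media_options(run_length_m):
    """Return the media types whose maximum segment length covers the run."""
    return [m for m, limit in MAX_SEGMENT_M.items() if run_length_m <= limit]

# A 150 m run rules out twisted-pair but is within reach of either coax type.
```

This illustrates why coax was chosen for backbone runs: a 150 m segment is beyond the reach of twisted-pair without a repeater, but within the limit of either coaxial type.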
Fiber-Optic Cable
Fiber-optic cable used for networking consists of two fibers encased in separate sheaths. Viewed
in cross section in Figure 4-4, each optical fiber is surrounded by layers of protective buffer
material: usually a plastic shield, then a strengthening material such as Kevlar, and finally, an
outer jacket that provides protection for the entire cable. The plastic conforms to appropriate fire
and building codes. The purpose of the Kevlar is to furnish additional cushioning and protection
for the fragile, hair-thin glass fibers. Where buried fiber-optic cables are required by codes, a
stainless steel wire is sometimes included for added strength. Several connectors can attach
fiber to the networking device; the most common is an SC connector, which has two fiber
connections, one for transmitting and one for receiving.
Fiber-optic cable does not carry electrical impulses as copper wire does. Instead, signals that
represent bits are converted into pulses of light. Two types of fiber-optic cable exist:
● Single-mode—Carries a single ray of light down a very thin core, typically driven by a laser source; supports the longest distances
● Multimode—Carries multiple rays of light down a wider core, typically driven by an LED source; common for shorter LAN runs
The characteristics of the different media have a significant impact on the speed of data transfer.
Although fiber-optic cable is more expensive, it is not susceptible to EMI and is capable of
higher data rates than any of the other types of networking media discussed here. Fiber-optic
cable is also more secure because it does not emit electrical signals that could be received by
external devices.
NOTE
Even though light is an electromagnetic wave, light in fibers is not considered wireless because
the electromagnetic waves are guided in the optical fiber. The term wireless is reserved for
radiated, or unguided, electromagnetic waves.
In some instances, it might not be possible to run any type of cable for network
communications. This situation might be the case in a rented facility or in a location where you
do not have the ability to install the appropriate infrastructure. In these cases, it might be useful
to install a wireless network, as discussed in the next section.
Wireless Communications
Wireless networks are becoming increasingly popular, and they utilize a different type of
technology. Wireless communication uses radio frequencies (RFs) or infrared waves to transmit
data between devices on a LAN. For wireless LANs, a key component is the wireless hub, or
access point, used for signal distribution. To receive signals from the access point, a PC or
laptop must have a wireless adapter card, or wireless network interface card (NIC), installed. Figure
4-5 shows a number of wireless access points connected to an Ethernet backbone to provide
access to the Internet.
Wireless signals are electromagnetic waves that can travel through the vacuum
of outer space and through a medium such as air. No physical medium is
necessary for wireless signals, making them a versatile way to build a network. They use
portions of the RF spectrum to transmit voice, video, and data. Wireless frequencies range from
3 kHz to 300 GHz. The data-transmission rates range from 9 kbps to 54 Mbps. Figure 4-6 shows
the electromagnetic spectrum chart.
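The frequency range quoted above can be put in perspective with the standard relationship between frequency and wavelength, λ = c / f. A minimal sketch (the function name is illustrative):

```python
# Wavelength = speed of light / frequency.
C = 299_792_458  # speed of light in a vacuum, m/s

def wavelength_m(freq_hz):
    """Return the wavelength in meters for a given frequency in hertz."""
    return C / freq_hz

# The RF range quoted above spans 3 kHz to 300 GHz:
#   3 kHz   -> roughly 100 km wavelength
#   300 GHz -> roughly 1 mm wavelength
# A 2.4 GHz WLAN signal has a wavelength of about 12.5 cm.
```

The enormous span of wavelengths, from kilometers down to millimeters, is why different parts of the RF spectrum behave so differently with respect to range, penetration, and antenna size.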
Another common application of wireless data communication is the wireless LAN (WLAN),
which is built in accordance with Institute of Electrical and Electronic Engineers (IEEE) 802.11
standards. WLANs typically use radio waves (for example, 902 MHz), microwaves (for
example, 2.4 GHz), and infrared (IR) waves (for example, 820 nm) for communication. Wireless
technologies are a crucial part of the future of networking.