Cnet Notes
Definition
Computer networking is simply a group of computers connected to each other
in some way (e.g. wired (LAN), wireless, or the internet) for the purpose of
communicating with each other.
Networking, especially with full access to the web, allows ways of communication
that would simply be impossible before it was developed. Instant messaging can now
allow users to talk in real time and send files to other people wherever they are in the
world, which is a huge boon for businesses. Also, it allows access to a vast amount of
useful information, including traditional reference materials and timely facts, such as
news and current events.
This benefit is particularly important for larger companies that need to distribute large
numbers of resources to many people. Because the work is computer-based, those resources
can be shared reliably with everyone connected to the same network.
Computer networking makes it easier for people to share their files, which saves
time and effort, since file sharing can be done quickly and efficiently.
It is highly flexible.
It is an inexpensive system.
Installing networking software on your device would not cost too much, as you are
assured that it lasts and can effectively share information with your peers. Also, there is
no need to change the software regularly, as mostly it is not required to do so.
With computer networking, you can use a lot of software products available on the
market which can just be stored or installed in your system or server, and can then be
used by various workstations.
Since you are going to share information, files, and resources with other people, you
have to ensure all data and content are properly stored in the system. With this
networking technology, you can do all of this without any hassle, while having all the
space you need for storage.
It lacks independence.
Because there would be a huge number of people who would be using a computer
network to get and share files and resources, a certain user's security
would always be at risk. There might even be illegal activities that would occur,
which you need to be careful about and aware of.
As previously stated, if a computer network’s main server breaks down, the entire
system would become useless. Also, if it has a bridging device or a central linking
server that fails, the entire network would also come to a standstill. To deal with these
problems, huge networks should have a powerful computer to serve as a file server to
make setting up and maintaining the network easier.
There would be instances where stored files become corrupted due to computer viruses. Thus,
network administrators should conduct regular check-ups on the system and on the
stored files.
It has been observed that providing users with internet connectivity has fostered
undesirable behavior among them. Considering that the web is a minefield of
distractions—online games, humor sites and even porn sites—workers could be
tempted during their work hours. The huge network of machines could also encourage
them to engage in illicit practices, such as instant messaging and file sharing, instead
of working on work-related matters. While many organizations draw up certain
policies on this, they have proven difficult to enforce and even engendered resentment
from employees.
For a computer network to work efficiently and optimally, it requires high technical
skills and know-how of its operations and administration. A person just having basic
skills cannot do this job. Take note that the responsibility to handle such a system is
high, as allotting permissions and passwords can be daunting. Similarly, network
configuration and connection is very tedious and cannot be done by an average
technician who does not have advanced knowledge.
Based on Scale
Based on the scale (size), networks are classified into the following:
I. PAN (Personal Area Network)
II. LAN (Local Area Network)
III. CAN (Campus Area Network)
IV. MAN (Metropolitan Area Network)
V. WAN (Wide Area Network)
VI. GAN (Global Area Network)
WAN (Wide Area Network):
A network that spans a broader geographical area than a local area network over a public
communication network. WANs interconnect LANs, which then provide access to computers
or file servers in other locations. Because WANs connect user networks over a large
geographical area, they make it possible for businesses to communicate across great
distances. WANs allow computers, printers, and other devices on a LAN to be shared with
distant locations. WANs provide instant communications across large geographic areas.
Collaboration software provides access to real-time information and resources and allows
meetings to be held remotely. WANs have created a new class of workers called
telecommuters. These people never have to leave their homes to go to work.
Modes of communication:
Simplex:
The simplest signal flow technique is the simplex configuration. In Simplex transmission, one
of the communicating devices can only send data, whereas the other can only receive it. Here,
communication is only in one direction (unidirectional) where one party is the transmitter and
the other is the receiver. Examples of simplex communication are simple radio and
public broadcast television, where you can receive data from stations but cannot transmit
data back. The television station sends out electromagnetic signals. The station does not expect
and does not monitor for a return signal from the television set. This type of channel design is
easy and inexpensive to set up.
Half-Duplex
Half duplex refers to two-way communication where only one party can transmit data at a
time. Unlike simplex mode, both devices can transmit data, though not at the same
time; that is, half duplex provides simplex communication in both directions over a single
channel. When one device is sending data, the other device must only receive it, and vice
versa. Thus, both sides take turns at sending data. This requires a definite turnaround time
during which the device changes from the receiving mode to the transmitting mode. Due to
this delay, half duplex communication is slower than simplex communication. However, it
requires only a single channel, which makes it cheaper than full duplex.
Full Duplex
Full duplex refers to the transmission of data in two directions simultaneously. Here, both
devices are capable of sending as well as receiving data at the same time. Sharing the same
channel and moving signals in both directions increases the channel throughput without
increasing its bandwidth. For example, a telephone is a full-duplex device because both
parties can talk to each other simultaneously.
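The three modes above can be sketched as a small Python predicate. The device names "A" and "B" and the Mode enum are illustrative assumptions, not part of any real protocol:

```python
from enum import Enum

class Mode(Enum):
    SIMPLEX = "simplex"        # A -> B only (e.g. broadcast TV)
    HALF_DUPLEX = "half"       # A <-> B, one direction at a time
    FULL_DUPLEX = "full"       # A <-> B simultaneously (e.g. telephone)

def can_transmit(mode, device, current_sender):
    """Return True if `device` may transmit now.
    `current_sender` is whichever device holds the channel (None if free)."""
    if mode is Mode.SIMPLEX:
        return device == "A"                     # only the fixed transmitter
    if mode is Mode.HALF_DUPLEX:
        return current_sender in (None, device)  # channel free or already ours
    return True                                  # full duplex: always allowed

print(can_transmit(Mode.SIMPLEX, "B", None))     # False: B is receive-only
print(can_transmit(Mode.HALF_DUPLEX, "B", "A"))  # False: A holds the channel
print(can_transmit(Mode.FULL_DUPLEX, "B", "A"))  # True: simultaneous OK
```

The half-duplex branch captures the "turnaround" rule: a device must wait until the channel is free before switching from receiving to transmitting.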
The Internet comprises:
A community of people: who use and develop the network.
A collection of resources: that can be reached from those networks.
A setup to facilitate collaboration: Among the members of the research and educational
communities worldwide.
The connected networks use the TCP/IP protocols.
Intranet:
A private TCP/IP internetwork within an organization that uses Internet technologies such as
Web servers and Web browsers for sharing information and collaborating. Intranets can be
used to publish company policies and newsletters, provide sales and marketing staff with
product information, provide technical support and tutorials, and just about anything else you
can think of that fits within the standard Web server/Web browser environment.
Intranet Web servers differ from public Web servers in that users must have the proper
permissions and passwords to access the intranet of an organization. Intranets are designed to
be accessed only by authorized users within the organization.
Extranet:
Extranets refer to applications and services that are Intranet based, and use extended, secure
access to external users or enterprises. This access is usually accomplished through
passwords, user IDs, and other application-level security. An extranet is the extension of two
or more intranet strategies with a secure interaction between participant enterprises and their
respective intranets.
Part of a company's intranet that is extended to users outside the company (normally over
the Internet). In its simplest form, a private TCP/IP network that securely shares information
using Hypertext Transfer Protocol (HTTP) and other Internet protocols with business partners
such as vendors, suppliers, and wholesale customers. An extranet is thus a corporate intranet
that is exposed over the Internet to certain specific groups that need access to it. Extranets
built in this fashion follow the client/server paradigm, with Web servers such as Apache.
Extranets are a powerful tool because they let businesses share resources on their own private
networks over the Internet with suppliers, vendors, business partners, or customers. Extranets
are typically used for supporting real-time supply chains, for enabling business partners to
work together, or to share information such as catalogs with customers. The power of the
extranet is that it leverages the existing technology of the Internet to increase the power,
flexibility, and competitiveness of businesses utilizing well-known and easily used tools such
as Web servers and Web browsers. Extranets also save companies money by allowing them
to establish business-to-business connectivity over the Internet instead of using expensive,
dedicated leased lines. Extranets can also save money by reducing phone and fax costs.
Bus Topology:
A networking topology that connects networking components along a single cable or that
uses a series of cable segments that are connected linearly. A network that uses a bus
topology is referred to as a “bus network.” Bus networks were the original form of Ethernet
networks, using the 10Base5 cabling standard. Bus topology is used for:
Small work-group local area networks (LANs) whose computers are connected using
a thinnet cable
Trunk cables connecting hubs or switches of departmental LANs to form a larger
LAN
Backboning, by joining switches and routers to form campus-wide networks
Advantages:
Easy to install
Costs are usually low
Easy to add systems to network
Great for small networks
Disadvantages:
Out of date technology.
Difficult reconnection and fault isolation
Ring Topology
In a ring topology, each device has a dedicated point-to-point connection with only the two
devices on either side of it. A signal is passed along the ring in one direction, from device to
device, until it reaches its destination. Each device in the ring incorporates a repeater. When a
device receives a signal intended for another device, its repeater regenerates the bits and
passes them along.
A ring is relatively easy to install and reconfigure. Each device is linked to only its immediate
neighbors (either physically or logically). To add or delete a device requires changing only
two connections. The only constraints are media and traffic considerations (maximum ring
length and number of devices). In addition, fault isolation is simplified. Generally in a ring, a
signal is circulating at all times. If one device does not receive a signal within a specified
period, it can issue an alarm. The alarm alerts the network operator to the problem and its
location.
However, unidirectional traffic can be a disadvantage. In a simple ring, a break in the ring
(such as a disabled station) can disable the entire network. This weakness can be solved by
using a dual ring or a switch capable of closing off the break.
Advantages:
Very orderly network where every device has access to the token and the opportunity
to transmit.
Performs better than a bus topology under heavy network load
Does not require network server to manage the connectivity between the computers
Disadvantages:
One malfunctioning workstation or bad port in the MAU can create problems for the
entire network
Moves, adds and changes of devices can affect the network
Network adapter cards and MAUs (Multistation Access Units) are much more
expensive than Ethernet cards and hubs
Much slower than an Ethernet network under normal load
Mesh Topology:
In a mesh topology, every device has a dedicated point-to-point link to every other device.
The term dedicated means that the link carries traffic only between the two devices
it connects. To connect n nodes in Mesh topology, we require n(n-1)/2 duplex mode
links.
Advantages:
The use of dedicated links guarantees that each connection can carry its own data
load, thus eliminating the traffic problems that can occur when links must be shared
by multiple devices.
Disadvantages:
Requires a large amount of cabling and a large number of I/O ports.
The sheer bulk of the wiring can be greater than the available space (in walls, ceilings,
or floors) can accommodate.
The hardware required to connect each link (I/O ports and cable) can be prohibitively
expensive.
One practical example of a mesh topology is the connection of telephone regional offices in
which each regional office needs to be connected to every other regional office.
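The n(n-1)/2 link count above can be verified with a short script; the function name mesh_links is just an illustrative choice:

```python
def mesh_links(n):
    """Number of full-duplex point-to-point links in a full mesh of n nodes.
    Each of the n nodes links to the other n-1; dividing by 2 avoids
    counting each link twice."""
    return n * (n - 1) // 2

for n in (2, 5, 10):
    print(n, "nodes ->", mesh_links(n), "links")
```

The quadratic growth is exactly why the disadvantages above (cabling bulk, I/O port cost) dominate for large n.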
Star Topology:
In a star topology, each device has a dedicated point-to-point link only to a central controller,
usually called a hub. The devices are not directly linked to one another. Unlike a mesh
topology, a star topology does not allow direct traffic between devices. The controller acts as
an exchange: If one device wants to send data to another, it sends the data to the controller,
which then relays the data to the other connected device.
Advantages:
Less Expensive than Mesh topology.
In a star topology, each device needs only one link and one I/O port to connect it to
any number of other devices. This factor also makes it easy to install and reconfigure.
Less Cabling, Addition and Deletion involves only one connection between the
devices and the Hub or Switch.
Easy for Fault identification and fault isolation. If one link fails, only that link is
affected. All other links remain active.
Disadvantages:
One big disadvantage of a star topology is the dependency of the whole topology on
one single point, the hub. If the hub goes down, the whole system is dead.
An extended star topology links individual stars together by connecting the hubs or
switches.
Logical Topology:
The logical topology of a network determines how the hosts communicate across the
medium. The two most common types of logical topologies are broadcast and token
passing.
The use of a broadcast topology indicates that each host sends its data to all other hosts on
the network medium.
The second logical topology is token passing. In this type of topology, an electronic token is
passed sequentially to each host. When a host receives the token, that host can send data on
the network. If the host has no data to send, it passes the token to the next host and the
process repeats itself. Two examples of networks that use token passing are Token Ring and
Fiber Distributed Data Interface (FDDI). A variation of Token Ring and FDDI is ARCNET,
which uses token passing on a bus topology.
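Token passing as described above can be simulated in a few lines of Python. The host names and the pending-frames map are made-up illustrative data:

```python
from collections import deque

def token_ring(hosts, pending):
    """Pass a token around `hosts`; `pending` maps host -> frames to send.
    Returns the order in which frames reach the medium."""
    sent = []
    ring = deque(hosts)
    while any(pending.values()):
        holder = ring[0]                      # host currently holding the token
        if pending[holder]:
            # A host may transmit only while it holds the token.
            sent.append((holder, pending[holder].pop(0)))
        ring.rotate(-1)                       # pass the token to the next host
    return sent

order = token_ring(["H1", "H2", "H3"],
                   {"H1": ["a"], "H2": [], "H3": ["b", "c"]})
print(order)   # [('H1', 'a'), ('H3', 'b'), ('H3', 'c')]
```

Note that H3 cannot send both of its frames back-to-back: after one frame it must surrender the token, which is what makes access to the medium orderly.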
Network Architecture:
Peer-to-Peer Model:
In a peer-to-peer network, individual users control their own resources. The users may decide
to share certain files with other users. The users may also require passwords before they
allow others to access their resources. Since individual users make these decisions, there is no
central point of control or administration in the network. In addition, individual users must
back up their own systems to be able to recover from data loss in case of failures. When a
computer acts as a server, the user of that machine may experience reduced performance as
the machine serves the requests made by other systems.
Peer-to-peer networks are relatively easy to install and operate. No additional equipment is
necessary beyond a suitable operating system installed on each computer. Since users control
their own resources, no dedicated administrators are needed.
Client-server Model:
The term client-server refers to a popular model for computer networking that utilizes client
and server devices each designed for specific purposes. The client-server model can be used
on the Internet as well as local area networks (LANs). Examples of client-server systems on
the Internet include Web browsers and Web servers, FTP clients and servers, and DNS.
Servers are designed to handle requests from many clients simultaneously. Before a client can
access the server resources, the client must be identified and be authorized to use the
resource. Each client is assigned an account name and password that is verified by an
authentication service. The authentication service guards access to the network. With the
centralization of user accounts, security, and access control, server-based networks simplify
the administration of large networks. The concentration of network resources
such as files, printers, and applications on servers also makes it easier to back-up and
maintain the data. Resources can be located on specialized, dedicated servers for easier
access. Most client/server systems also include ways to enhance the network with new
services that extend the usefulness of the network.
The centralized functions in a client/server network have substantial advantages and some
disadvantages. Although a centralized server enhances security, ease of access, and control, it
introduces a single point of failure into the network: without an operational server, the
network cannot function at all.
Client/server describes the relationship between two computer programs in which one
program, the client, makes a service request from another program, the server, which fulfills
the request. Although programs within a single computer can use the client/server idea, it is a
more important idea in a network. In a network, the client/server model provides a
convenient way to interconnect programs that are distributed efficiently across
different locations. Computer transactions using the client/server model are very common.
For example, to check your bank account from your computer, a client program in your
computer forwards your request to a server program at the bank. That program might in turn
forward the request to its own client program that sends a request to a database server at
another bank computer to retrieve your account balance. The balance is returned to the
bank data client, which in turn serves it back to the client in your personal
computer, which displays the information for you.
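The request/reply exchange described above can be sketched with Python's standard socket module. The account string and "BALANCE" reply format are invented illustrations, not a real banking protocol; port 0 lets the OS choose a free port:

```python
import socket
import threading

def run_server(srv):
    """Server side: accept one client on an already-listening socket,
    read its request, and send back a reply that fulfils it."""
    conn, _ = srv.accept()
    with conn:
        request = conn.recv(1024)                    # the client's service request
        conn.sendall(b"BALANCE:100 for " + request)  # the server fulfils it

# Set up the listening socket first so the client cannot connect too early.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]
t = threading.Thread(target=run_server, args=(srv,))
t.start()

# Client side: forward a request to the server program and await the reply.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect(("127.0.0.1", port))
    cli.sendall(b"acct-42")
    reply = cli.recv(1024)
t.join()
srv.close()
print(reply)
```

The two roles are asymmetric, exactly as the model requires: the server waits and answers; the client initiates.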
Network software is used to efficiently share information among computers. It encloses the
information to be sent in a “package” that contains a “header” and a “trailer”. The header and
trailer contain information for the receiving computer, such as the address of that computer
and how the information package is coded. Information is transferred between computers as
either electrical signals in electric wires, as light signals in fiber-optic cables, or as
electromagnetic waves through space.
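The header/trailer packaging described above can be sketched in Python. The field layout (addresses, a length field, a 1-byte checksum trailer) is an invented illustration, far simpler than any real protocol format:

```python
def package(payload, src, dst):
    """Wrap payload in a header (addresses + length) and a trailer (checksum)."""
    header = f"{src}>{dst}|{len(payload)}|".encode()
    trailer = bytes([sum(payload) % 256])   # 1-byte checksum as the trailer
    return header + payload + trailer

def unpack(frame):
    """Undo package(); raise if the trailer checksum does not match."""
    addr, _, rest = frame.partition(b"|")       # split off the address field
    length_b, _, rest = rest.partition(b"|")    # split off the length field
    n = int(length_b)
    payload, checksum = rest[:n], rest[n]
    assert checksum == sum(payload) % 256, "corrupted frame"
    return addr.decode(), payload

frame = package(b"hello", "PC1", "PC2")
print(unpack(frame))   # ('PC1>PC2', b'hello')
```

The receiving computer uses the header to learn who sent the data and how long it is, and the trailer to check that it arrived intact.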
Protocols
A protocol is used for communication between entities in different systems. The terms
‘entity’ and ‘system’ are used in a very general sense. Examples of entities are user
application programs, file transfer package, database management systems, electronic mail
facilities, and terminals. Examples of systems are computers, terminals, and remote sensors.
Note that in some cases the entity and the system in which it resides are coextensive (e.g.
terminal).
In general, an entity is anything capable of sending or receiving information, and a
system is a physically distinct object that contains one or more entities. For two entities to
communicate successfully, they must 'speak the same language'. What is communicated, how
it is communicated, and when it is communicated must all conform to mutually
acceptable conventions between the entities involved. These conventions are referred to as a
protocol.
Protocol may be defined as a set of rules governing the exchange of data between two
entities. It also may be conceived as a set of agreement between two communicating
processes. The key elements of protocol are:
Syntax: Includes such things as data format and signal levels.
Semantics: Includes control information for co-ordination and error handling.
Timing: Includes speed matching and sequencing.
Functions of protocols
Some of the numerous functions served by network protocols are as follows:
Orderly exchange of data messages.
Management of priorities at both the network entry and transmission levels within the
network.
Process synchronization.
Session establishment between network users
Session termination between network users.
Standards
Standards are essential in creating and maintaining an open and competitive market for
equipment manufacturers and in guaranteeing national and international interoperability of
data and telecommunications technology and processes. Standards provide guidelines to
manufacturers, vendors, government agencies, and other service providers to ensure the kind
of interconnectivity necessary in today’s marketplace and in international communication.
Data communication standards fall into two categories: de facto (by fact) and de jure (by law).
Layered Approach
Some of the key design issues that occur in computer networking are present in
several layers.
I. Every layer needs a mechanism for identifying senders and receivers. Since a network
normally has many computers, some of which run multiple processes, a means is
needed for a process on one machine to specify with whom it wants to talk. As a
consequence of having multiple destinations, some form of addressing is needed in order
to specify a specific destination.
II. Another set of design decisions concerns the rules for data transfer, such as simplex,
half duplex, and full duplex.
III. Error control is an important issue because physical communication circuits are not
perfect. Many error-detecting and error-correcting codes are known, but both ends of
the connection must agree on which one is being used. Also, the receiver must have
some way of telling the sender which messages have been correctly received and
which have not.
IV. Not all communication channels preserve the order of messages sent on them. To deal
with a possible loss of sequencing the protocol must make explicit provision for the
receiver to allow the pieces to be put back together properly.
V. An issue that occurs at every level is how to keep a fast sender from swamping a slow
receiver with data.
VI. Another problem that must be solved at several levels is the inability of all processes
to accept arbitrarily long messages. This property leads to mechanisms for
disassembling, transmitting and then reassembling messages.
VII. When there are multiple paths between source and destination, a route must be chosen;
sometimes this decision must be split over two or more layers.
1. Physical Layer:
Physical layer is the bottom layer of the OSI reference model. The physical
layer has four important characteristics.
Mechanical: Relates to the physical properties of the interface to a
transmission medium. Typically, the specification is of a pluggable connector that
joins one or more signal conductors, called circuits.
Electrical: Relates to the representation of bits (e.g., in terms of voltage levels) and the
data transmission rate of bits. It defines the voltage, current, modulation, bit
synchronization, connection activation and deactivation, and various electrical
characteristics for the transmission media (such as unshielded or shielded twisted-pair
cabling, coaxial cabling, and fiber-optic cabling).
Functional: Specifies the functions performed by the individual circuits of the physical
interface between a system and the transmission medium.
Procedural: Specifies the sequence of events by which bit streams are exchanged
across the physical medium.
2. Data Link Layer:
While the physical layer provides only a raw bit-stream service, the data link layer attempts
to make the physical link reliable while providing the means to activate, maintain, and
deactivate the link. For LANs, the Project 802 standards of the Institute of Electrical and
Electronics Engineers (IEEE) separate the data-link layer into two sub layers:
The logical link control (LLC) layer, the upper of the two layers, which is
responsible for flow control, error correction, and resequencing functions for
connection-oriented communication, but which also supports connectionless
communication
The media access control (MAC) layer, the lower of the two layers, which is
responsible for providing a method for stations to gain access to the medium
Functions:
Framing: The data link layer divides the stream of bits received from the network
layer into manageable data units called frames.
Physical addressing: If frames are to be distributed to different systems on the
network, the data link layer adds a header to the frame to define the sender and/or
receiver of the frame. If the frame is intended for a system outside the sender's
network, the receiver address is the address of the device that connects the network to
the next one.
Flow control: If the rate at which the data are absorbed by the receiver is less than the
rate at which data are produced in the sender, the data link layer imposes a
flow control mechanism to avoid overwhelming the receiver.
Error control: The data link layer adds reliability to the physical layer by adding
mechanisms to detect and retransmit damaged or lost frames. It also uses a
mechanism to recognize duplicate frames. Error control is normally achieved through
a trailer added to the end of the frame.
Access control: When two or more devices are connected to the same
link, data link layer protocols are necessary to determine which device has
control over the link at any given time
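Error control as described above can be roughly sketched with a checksum trailer and sequence numbers. The 1-byte checksum and frame layout are illustrative assumptions, far simpler than the CRCs real data-link protocols use:

```python
def make_frame(seq, payload):
    """Frame = sequence number + payload + 1-byte checksum trailer."""
    body = bytes([seq]) + payload
    return body + bytes([sum(body) % 256])

def receive(frame, expected_seq):
    """Return (status, payload): 'ok', 'damaged', or 'duplicate'."""
    body, checksum = frame[:-1], frame[-1]
    if checksum != sum(body) % 256:
        return "damaged", None      # receiver asks the sender to retransmit
    seq, payload = body[0], body[1:]
    if seq != expected_seq:
        return "duplicate", None    # already seen this frame; discard it
    return "ok", payload

good = make_frame(0, b"data")
bad = good[:2] + b"X" + good[3:]    # flip one byte "in transit"
print(receive(good, 0))             # ('ok', b'data')
print(receive(bad, 0))              # ('damaged', None)
print(receive(good, 1))             # ('duplicate', None)
```

The trailer catches damaged frames, and the sequence number catches duplicates, mirroring the two mechanisms named in the Error control item above.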
Examples of data-link protocols for local area networking include the following:
IEEE 802.3, which provides the Carrier Sense Multiple Access with Collision
Detection (CSMA/CD) access method for baseband Ethernet networks
For WANs, data-link layer protocols encapsulate LAN traffic into frames suitable for
transmission over WAN links. Common data-link encapsulation methods for WAN
transmission include the following:
Point-to-point technologies such as Point-to-Point Protocol (PPP) and High-level Data
Link Control (HDLC) protocol
Multipoint technologies such as frame relay, Asynchronous Transfer Mode
(ATM), Switched Multimegabit Data Services (SMDS), and X.25
3. Network Layer:
The network layer is responsible for routing packets from source to destination, and it also
supplies connectionless and connection-oriented services to the transport layer above it. The
network layer functions closely with the physical layer (layer 1)
and data-link layer (layer 2) in most real-world network protocol implementations.
On TCP/IP-based networks, IP addresses and network numbers are used at the network layer,
and IP routers perform their routing functions at this layer. An example of an OSI model
network layer protocol is the X.25 packet-switching network layer protocol, which is built on
the X.21 physical layer protocol.
4. Transport Layer:
The transport layer is responsible for providing reliable transport services to the upper-
layer protocols. These services include the following:
Flow control to ensure that the transmitting device does not send more data than
the receiving device can handle. Packet sequencing for segmentation of data
packets and remote reassembly.
Error handling and acknowledgments to ensure that data is retransmitted when
required.
Multiplexing for combining data from several sources for transmission over one
data path.
Virtual circuits for establishing sessions between communicating stations.
The Transmission Control Protocol (TCP) of the TCP/IP protocol suite resides at the
transport layer.
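The packet sequencing and reassembly services listed above can be sketched as follows; the segment size and sample data are arbitrary illustrations:

```python
import random

def segment(data, size):
    """Split data into numbered segments for transmission."""
    return [(i, data[i:i + size]) for i in range(0, len(data), size)]

def reassemble(segments):
    """Put segments back in order, even if the network reordered them."""
    return b"".join(chunk for _, chunk in sorted(segments))

segs = segment(b"reliable transport", 5)
random.shuffle(segs)        # packets may travel very different routes
print(reassemble(segs))     # b'reliable transport'
```

Because each segment carries its own sequence number, the receiver can restore the original byte stream no matter what order the segments arrive in.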
Virtual Circuit:
A virtual circuit is a connection between two devices that acts as though it is a direct
connection even though it may physically be circuitous. The term is used most frequently to
describe connections
between two hosts in a packet-switching network. In this case, the two hosts can communicate
as though they have a dedicated connection even though the packets might actually travel
very different routes before arriving at their destination. An X.25 connection is an example of
a virtual circuit. Virtual circuits can be either permanent (called PVCs) or temporary (called
SVCs).
5. Session Layer:
Layer 5 of the Open Systems Interconnection (OSI) reference model, which enables
sessions between computers on a network to be established and terminated. The session
layer does not concern itself with issues such as the reliability and efficiency of data
transfer between stations because these functions are provided by the first four layers of
the OSI reference model.
Functions:
Dialog control: The session layer allows two systems to enter into a dialog. It allows
the communication between two processes to take place in either half-duplex (one way at a
time) or full-duplex (two ways at a time) mode.
Synchronization: The session layer allows a process to add checkpoints, or synchronization
points, to a stream of data. For example, if a system is sending a file of 2000 pages, it is
advisable to insert checkpoints after every 100 pages to ensure that each 100-page unit is
received and acknowledged independently. In this case, if a crash happens during the
transmission of page 523, the only pages that need to be resent after system recovery are
pages 501 to 523. Pages previous to 501 need not be resent.
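The checkpoint arithmetic from the 2000-page example can be written out as a tiny helper; the function name is an illustrative choice:

```python
def pages_to_resend(crash_page, checkpoint_interval):
    """Return the (first, last) pages to retransmit after a crash.
    Everything up to the last acknowledged checkpoint is safe; only
    pages after it must be resent."""
    last_checkpoint = (crash_page - 1) // checkpoint_interval * checkpoint_interval
    return last_checkpoint + 1, crash_page

print(pages_to_resend(523, 100))   # (501, 523)
```

A crash during page 523 with checkpoints every 100 pages means the last acknowledged checkpoint was at page 500, so only pages 501-523 are resent.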
6. Presentation Layer:
The presentation layer is concerned with the syntax and semantics of the information
exchanged between two systems.
Specific responsibilities of the presentation layer include the following:
Translation: The processes (running programs) in two systems are usually exchanging
information in the form of character strings, numbers, and so on. The information
must be changed to bit streams before being transmitted. Because different
computers use different encoding systems, the presentation layer is responsible for
interoperability between these different encoding methods. The presentation layer at the
sender changes the information from its sender-dependent format into a common format. The
presentation layer at the receiving machine changes the common format into its receiver-
dependent format.
Compression: Data compression reduces the number of bits contained in the information.
Data compression becomes particularly important in the transmission of multimedia such as
text, audio, and video.
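The bit-reduction effect of compression can be demonstrated with Python's standard zlib module on some deliberately redundant sample data:

```python
import zlib

text = b"network " * 200                 # highly redundant sample data
packed = zlib.compress(text)
print(len(text), "->", len(packed))      # compressed form is far smaller
assert zlib.decompress(packed) == text   # lossless: original fully recovered
```

Real multimedia streams typically use lossy, format-specific codecs instead, but the principle is the same: fewer bits on the wire for the same information.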
7. Application layer:
Layer 7 of the Open Systems Interconnection (OSI) reference model, in which network-
aware, user-controlled software is implemented—for example, e-mail, file transfer
utilities, and terminal access. The application layer represents the window between the
user and the network. Examples of protocols that run at the application layer include File
Transfer Protocol (FTP), Hypertext Transfer Protocol (HTTP), and Simple Mail Transfer
Protocol (SMTP).
File transfer, access, and management: This application allows a user to access files in a
remote host (to make changes or read data), to retrieve files from a remote computer for use
in the local computer, and to manage or control files in a remote computer locally.
Mail services: This application provides the basis for e-mail forwarding and storage.
Directory services: This application provides distributed database sources and access for
global information about various objects and services.
TCP/IP vs. OSI
TCP/IP: Implementation of the OSI model | OSI: Reference model
TCP/IP: Model around which the Internet is developed | OSI: A theoretical model
TCP/IP: Has only 4 layers | OSI: Has 7 layers
TCP/IP: Considered more reliable | OSI: Considered a reference tool
TCP/IP: Horizontal approach | OSI: Vertical approach
TCP/IP: Combines the session and presentation layers into the application layer | OSI: Has separate session and presentation layers
TCP/IP: Protocols were developed first, and then the model | OSI: Model was developed before the protocols
TCP/IP: Supports only connectionless communication in the network layer | OSI: Supports both connectionless and connection-oriented communication in the network layer
TCP/IP: Protocol-dependent standard | OSI: Protocol-independent standard
TCP/IP: Protocols are not strictly defined | OSI: Stricter boundaries for the protocols
Networking Hardware:
NIC (Network Interface Card):
A NIC is a hardware board or card that you put into an empty slot in the back of your client
computer or server. The NIC is the interface between the PC and the physical network
connection; the card physically connects to the cable that links the network.
Hub
Networking components that extend a network by boosting the signal so that it can travel
farther along the cabling.
How It Works
Digital signals traveling on cables weaken with distance—a phenomenon known as
attenuation. A repeater is a form of digital amplifier that works at the physical layer (layer 1)
of the Open Systems Interconnection (OSI) reference model for networking to regenerate
(amplify) the signal so that it can travel farther. Repeaters also perform other functions such
as filtering out noise caused by electromagnetic interference (EMI), reshaping the signal, and
correcting timing to remove signal jitter so that the signal can travel farther. Repeaters can
also be used to join dissimilar media such as unshielded twisted-pair (UTP) cabling and
thinnet, but they cannot be used to join dissimilar network architectures such as Ethernet and
Token Ring. Repeaters are an inexpensive way to extend a network.
Repeaters can be used in Ethernet and Token Ring local area networks (LANs) to extend
signal transmission to remote nodes and over long fiber-optic cabling runs to connect LANs.
Repeaters can also be used in mainframe environments for boosting signals for serial
transmission to remote terminals.
Other uses for repeaters include the following:
- Joining two 16-Mbps Token Ring networks in different buildings over distances up to 3000 meters over multimode fiber-optic cabling, or up to 20 kilometers over single-mode fiber
- Increasing the lobe length between a Token Ring main ring and a remote node
- Joining dissimilar 10Base2 and 10Base5 segments to form a single Ethernet LAN
- Boosting signals from mainframe controllers to 3270 terminals over coaxial or UTP cabling to support distances up to 2500 meters
- Extending the operating distance of T1 lines by placing G.703 repeaters at 2.2-kilometer intervals
- Extending backbone fiber-optic cable runs in campus-wide LANs or metropolitan area networks (MANs)
Repeaters are also used in fiber-optic networks to amplify and regenerate light signals for
long-distance cable runs. Repeaters come in various types for different network architectures
and data communication technologies.
Bridge:
A networking component used either to extend or to segment networks. Bridges work at the
OSI data-link layer. They can be used both to join dissimilar media such as unshielded
twisted-pair (UTP) cabling and fiber-optic cabling, and to join different network architectures
such as Token Ring and Ethernet. Bridges regenerate signals but do not perform any protocol
conversion, so the same networking protocol (such as TCP/IP) must be running on both
network segments connected to the bridge. Bridges can also support Simple Network
Management Protocol (SNMP), and they can have other diagnostic features.
How It Works
Bridges operate by sensing the source MAC addresses of the transmitting nodes on the
network and automatically building an internal routing table. This table is used to determine
which connected segment to route packets to, and it provides the filtering capability that
bridges are known for. If the bridge knows which segment a packet is intended for, it
forwards the packet directly to that segment. If the bridge doesn't recognize the packet's
destination address, it forwards the packet to all connected segments except the one it
arrived on.
Switch:
A switch is essentially a multi-port bridge. Switches allow the segmentation of the LAN into
separate collision domains. Each port of the switch represents a separate collision domain and
provides the full media bandwidth to the node or nodes connected on that port. With fewer
nodes in each collision domain, there is an increase in the average bandwidth available to
each node, and collisions are reduced.
Why Switches:
In a LAN where all nodes are connected directly to the switch, the throughput of the network
increases dramatically. The three primary reasons for this increase are:
Dedicated bandwidth to each port
Collision-free environment
Full-duplex operation
Routers
Routers are internetwork connectivity devices. An internetwork may consist of two or more
physically connected independent networks. These networks can be of different types; for
example, they can be Ethernet and Token Ring networks. Each network is logically separate
and is assigned an address. Routers use these network addresses to assist the efficient
delivery of messages. Delivering packets according to logical network address is called
routing, and routing is the process of finding a path from a source to every destination in
the network. Routers are intelligent: they can use algorithms to determine the most efficient
path for sending a packet to any given network. Routers can also be used to divide large,
busy LANs into smaller segments. Protocols such as IP, IPX, and DDP support routing
functions. Routers are also employed to connect a LAN to a wide area network (WAN).
Routers are of two types.
1. Static routers: Static routers do not determine paths; routes must be specified manually.
2. Dynamic routers: Dynamic routers have the capacity to determine routes on their own.
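A router's path decision can be illustrated with a small Python sketch of a routing-table lookup using longest-prefix matching (the prefixes and next-hop addresses below are invented for illustration):

```python
import ipaddress

# Hypothetical routing table: (destination prefix, next hop).
# A static router has these entries configured manually; a dynamic
# router builds and updates them with a routing algorithm.
routes = [
    (ipaddress.ip_network("10.0.0.0/8"), "192.168.1.1"),
    (ipaddress.ip_network("10.1.0.0/16"), "192.168.1.2"),
    (ipaddress.ip_network("0.0.0.0/0"), "192.168.1.254"),  # default route
]

def lookup(dst: str) -> str:
    """Longest-prefix match: the most specific matching route wins."""
    addr = ipaddress.ip_address(dst)
    matches = [(net, hop) for net, hop in routes if addr in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(lookup("10.1.2.3"))    # 192.168.1.2 (the /16 beats the /8)
print(lookup("172.16.0.9"))  # 192.168.1.254 (falls through to the default)
```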
Transmission Medium:
A transmission medium can be broadly defined as anything that can carry information
from a source to a destination. For example, the transmission medium for two people
having a dinner conversation is the air. The air can also be used to convey the
message in a smoke signal or semaphore. For a written message, the
transmission medium might be a mail carrier, a truck, or an airplane.
In data communications the definition of the information and the transmission medium is
more specific. The transmission medium is usually free space, metallic cable, or fiber-optic
cable. The information is usually a signal that is the result of a conversion of data from
another form.
Guided Media:
Guided media, which are those that provide a conduit from one device to another, include
twisted-pair cable, coaxial cable, and fiber-optic cable. A signal traveling along any of these
media is directed and contained by the physical limits of the medium. Twisted-pair and
coaxial cable use metallic (copper) conductors that accept and transport signals in the form of
electric current. Optical fiber is a cable that accepts and transports signals in the form of light.
Twisted-Pair Cable:
The twisting has the effect of canceling unwanted signals. When two wires in an electrical
circuit are placed close together, external electromagnetic fields create the same interference
in each wire. The pairs are twisted to keep the wires in as close proximity as is physically
possible. Because this common interference is present equally on both wires of a twisted
pair, the receiver processes it in equal yet opposite ways. As a result, the signals caused by
electromagnetic interference from external sources are effectively cancelled.
This cancellation effect also helps avoid interference from internal sources called
crosstalk. Crosstalk is the interference caused by the magnetic field around the
adjacent pairs of wires in the cable. When electrical current flows through a wire, it
creates a circular magnetic field around the wire. With the current flowing in
opposite directions in the two wires in a pair, the magnetic fields - as equal but opposite
forces - have a cancellation effect on each other. Additionally, the different pairs
of wires that are twisted in the cable use a different number of twists per meter to help protect
the cable from crosstalk between pairs.
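The cancellation idea can be shown with a toy numeric example (the signal and noise values are invented; a real receiver works on continuous voltages, not Python lists):

```python
# Toy illustration of common-mode noise rejection on a twisted pair.
# The transmitter drives the two wires with equal and opposite voltages;
# external interference induces (roughly) the same voltage on both wires.
signal = [0.5, -0.5, 0.5, 0.5]   # data, driven as +v on wire A and -v on wire B
noise  = [0.2, -0.1, 0.3, 0.0]   # common-mode interference (same on both wires)

wire_a = [s + n for s, n in zip(signal, noise)]
wire_b = [-s + n for s, n in zip(signal, noise)]

# A differential receiver takes the difference between the wires:
# the common noise terms cancel, and the signal is recovered.
received = [(a - b) / 2 for a, b in zip(wire_a, wire_b)]
print(received)
```

The recovered samples match the original signal (up to floating-point rounding), even though each individual wire carried a noisy voltage.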
Cables in higher categories are designed and constructed to support higher data rates. As new
gigabit speed Ethernet technologies are being developed and adopted, Cat5e is now the
minimally acceptable cable type, with Cat6 being the recommended type for new building
installations.
Co-axial Cable:
Coaxial cable (or coax) carries signals of higher frequency ranges than those in
twisted-pair cable, in part because the two media are constructed quite differently. Instead of
having two wires, coax has a central core conductor of solid or stranded wire (usually copper)
enclosed in an insulating sheath, which is, in turn, encased in an outer conductor of metal foil,
braid, or a combination of the two. The outer metallic wrapping serves both as a shield
against noise and as the second conductor, which completes the circuit. This outer conductor
is also enclosed in an insulating sheath, and the whole cable is protected by a plastic cover.
All the elements of the coaxial cable encircle the center conductor. Because they all share the
same axis, this construction is called coaxial, or coax for short.
Uses of Coaxial Cable
The coaxial cable design has been adapted for different purposes. Coax is an important type
of cable that is used in wireless and cable access technologies; for example, coax is used to
attach antennas to wireless devices, carrying radio frequency energy between the antenna
and the radio equipment.
Coax is also the most widely used media for transporting high radio frequency signals over
wire, especially cable television signals. Traditional cable television, exclusively transmitting
in one direction, was composed completely of coax cable.
Cable service providers are currently converting their one-way systems to two-way systems
to provide Internet connectivity to their customers. To provide these services,
portions of the coaxial cable and supporting amplification elements are replaced with
multi-fiber-optic cable. However, the final connection to the customer's location and the
wiring inside the customer's premises is still coax cable. This combined use of fiber and coax
is referred to as hybrid fiber coax (HFC).
In the past, coaxial cable was used in Ethernet installations. Today UTP offers lower costs
and higher bandwidth than coaxial and has replaced it as the standard for all Ethernet
installations.
To connect coaxial cable to devices, we need coaxial connectors. The most common type of
connector used today is the Bayonet Neill-Concelman (BNC) connector. Three types of
connectors are in common use: the BNC connector, the BNC T connector, and the BNC
terminator. The BNC connector is used to connect the end of the cable to a device, such as a
TV set. The BNC T connector is used in Ethernet networks to branch out to a connection to
a computer or other device. The BNC terminator is used at the end of the cable to prevent
the reflection of the signal.
Fiber-optics
Fiber-optic cabling uses either glass or plastic fibers to guide light impulses from source to
destination. The bits are encoded on the fiber as light impulses. Optical fiber cabling is
capable of very large raw data bandwidth rates. Most current transmission standards have yet
to approach the potential bandwidth of this medium.
Principle of Fiber-optics:
It is based on the principle of Total internal Reflection.
Optical fibers use reflection to guide light through a channel. A glass or plastic core is
surrounded by a cladding of less dense glass or plastic. The difference in density of the two
materials must be such that a beam of light moving through the core is reflected off the
cladding instead of being refracted into it as shown in fig below.
Given that the fibers used in fiber-optic media are not electrical conductors,
the media is immune to electromagnetic interference and will not conduct unwanted
electrical currents due to grounding issues. Because optical fibers are thin and have relatively
low signal loss, they can be operated at much greater lengths than copper media, without the
need for signal regeneration. Some optical fiber Physical layer specifications allow lengths
that can reach multiple kilometers.
Propagation modes:
Current technology supports two modes (multimode and single-mode) for
propagating light along optical channels, each requiring fiber with different physical
characteristics.
Single-mode optical fiber carries a single ray of light, usually emitted from a laser. Because
the laser light is uni-directional and travels down the center of the fiber, this type of fiber can
transmit optical pulses for very long distances.
Single-mode uses step-index fiber and a highly focused source of light that limits beams to a
small range of angles, all close to the horizontal. The single- mode fiber itself is
manufactured with a much smaller diameter than that of multimode fiber, and with
substantially lower density (index of refraction). The decrease in density results in a critical
angle that is close enough to 90° to make the propagation of beams almost horizontal. In this
case, propagation of different beams is almost identical, and delays are negligible. All the
beams arrive at the destination "together" and can be recombined with little distortion to the
signal
Multimode fiber typically uses LED emitters that do not create a single coherent light wave.
Instead, light from an LED enters the multimode fiber at different angles. Because light
entering the fiber at different angles takes different amounts of time to travel down the fiber,
long fiber runs may result in the pulses becoming blurred on reception at the receiving end.
This effect, known as modal dispersion, limits the length of multimode fiber segments.
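The modal-dispersion effect can be estimated with a back-of-the-envelope calculation (the core index and ray angle below are assumed values chosen for illustration):

```python
import math

# Rough modal-dispersion estimate for multimode step-index fiber.
# A ray bouncing at angle theta to the axis travels a path of length
# L / cos(theta), so it arrives later than the axial ray.
c = 3e8                    # speed of light in vacuum, m/s
n_core = 1.5               # core refractive index (assumed)
L = 1000.0                 # fiber length, m
theta = math.radians(10)   # steepest propagating ray, relative to the axis (assumed)

t_axial = L * n_core / c
t_steep = (L / math.cos(theta)) * n_core / c
spread_ns = (t_steep - t_axial) * 1e9
print(f"arrival spread over {L:.0f} m: {spread_ns:.1f} ns")
```

With these numbers the spread is roughly 77 ns per kilometer, which shows why pulses on long multimode runs blur together at the receiver.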
Multimode Step-index:
In multimode step-index fiber, the density of the core remains constant from the center to the
edges. A beam of light moves through this constant density in a straight line until it reaches
the interface of the core and the cladding. At the interface, there is an abrupt change due to a
lower density; this alters the angle of the beam's motion. The term step index refers to the
suddenness of this change, which contributes to the distortion of the signal as it passes
through the fiber.
Multimode Graded-index:
Multimode graded-index fiber decreases this distortion of the signal through the cable. The
word index here refers to the index of refraction. As we saw above, the index of refraction is
related to density. A graded-index fiber, therefore, is one with varying densities: density is
highest at the center of the core and decreases gradually to its lowest at the edge.
Advantages
Fiber-optic cable has several advantages over metallic cable (twisted- pair or coaxial).
Higher bandwidth: Fiber-optic cable can support dramatically higher bandwidths
(and hence data rates) than either twisted-pair or coaxial cable. Currently, data rates
and bandwidth utilization over fiber-optic cable are limited not by the medium but by
the signal generation and reception technology available.
Less signal attenuation: Fiber-optic transmission distance is significantly greater
than that of other guided media. A signal can run for 50 km without requiring
regeneration. We need repeaters every 5 km for coaxial or twisted-pair cable.
Immunity to electromagnetic interference: Electromagnetic noise cannot affect
fiber-optic cables.
Resistance to corrosive materials: Glass is more resistant to corrosive materials than
copper.
Light weight: Fiber-optic cables are much lighter than copper cables.
Greater immunity to tapping: Fiber-optic cables are more immune to tapping than
copper cables. Copper cables create antenna effects that can easily be tapped.
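As a quick illustration of the attenuation figures above (a regenerator roughly every 50 km for fiber versus every 5 km for coax or twisted pair), here is the repeater count for a hypothetical 200-km link:

```python
import math

# Repeater count for a long link, using the spacing figures quoted above.
def repeaters_needed(link_km: float, span_km: float) -> int:
    # Regenerators sit between spans, so a link of n spans needs n - 1.
    return max(0, math.ceil(link_km / span_km) - 1)

print(repeaters_needed(200, 50))  # fiber, ~50 km spans: 3 repeaters
print(repeaters_needed(200, 5))   # coax/twisted pair, ~5 km spans: 39 repeaters
```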
Disadvantages
There are some disadvantages in the use of optical fiber.
Installation and maintenance: Fiber-optic cable is a relatively new technology. Its
installation and maintenance require expertise that is not yet available everywhere.
Unidirectional light propagation: Propagation of light is unidirectional. If
we need bidirectional communication, two fibers are needed.
Wireless Networking
Unguided media transport electromagnetic waves without using a physical
conductor. This type of communication is often referred to as wireless communication.
Signals are normally broadcast through free space and thus are available to anyone who has a
device capable of receiving them.
Unguided signals can travel from the source to destination in several ways: ground
propagation, sky propagation, and line-of-sight propagation, as shown in Figure
Omnidirectional Antenna
Radio waves use omnidirectional antennas that send out signals in all directions. Based on the
wavelength, strength, and the purpose of transmission, we can have several types
of antennas.
Applications
The omnidirectional characteristics of radio waves make them useful for multicasting, in
which there is one sender but many receivers. AM and FM radio, television, maritime
radio, cordless phones, and paging are examples of multicasting.
Microwaves
Electromagnetic waves having frequencies between 1 and 300 GHz are called microwaves.
Microwaves are unidirectional. When an antenna transmits microwaves, they can be
narrowly focused. This means that the sending and receiving antennas need to be aligned.
The unidirectional property has an obvious advantage. A pair of antennas can be aligned
without interfering with another pair of aligned antennas. The following describes some
characteristics of microwave propagation:
Microwave propagation is line-of-sight. Since the towers with the mounted antennas
need to be in direct sight of each other, towers that are far apart need to be very tall.
The curvature of the earth as well as other blocking obstacles do not allow two short
towers to communicate by using microwaves. Repeaters are often needed for long-
distance communication.
Very high-frequency microwaves cannot penetrate walls. This characteristic can be a
disadvantage if receivers are inside buildings.
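The tall-tower requirement above can be quantified with the standard geometric-horizon approximation, d ≈ 3.57·√h km for an antenna h meters high (a smooth-Earth estimate; atmospheric refraction extends the range somewhat):

```python
import math

# Line-of-sight range limited by Earth's curvature: an antenna at height
# h meters can "see" about 3.57 * sqrt(h) kilometers to its horizon, so
# two towers can communicate when their horizons overlap.
def max_los_km(h1_m: float, h2_m: float) -> float:
    return 3.57 * (math.sqrt(h1_m) + math.sqrt(h2_m))

print(f"{max_los_km(100, 100):.0f} km")  # two 100 m towers: about 71 km
```

Doubling the range requires roughly quadrupling the tower height, which is why repeaters are preferred over ever-taller towers for long-distance microwave links.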
Applications
Microwaves, due to their unidirectional properties, are very useful when unicast (one-to-one)
communication is needed between the sender and the receiver. They are used in cellular
phones, satellite networks, and wireless LANs
Unidirectional Antenna
Microwaves need unidirectional antennas that send out signals in one direction. Two types of
antennas are used for microwave communications: the parabolic dish and the horn.
A parabolic dish antenna is based on the geometry of a parabola: Every line parallel to the
line of symmetry (line of sight) reflects off the curve at angles such that all the lines intersect
in a common point called the focus. The parabolic dish works as a funnel, catching a wide
range of waves and directing them to a common point. In this way, more of the signal is
recovered than would be possible with a single-point receiver.
A horn antenna looks like a gigantic scoop. Outgoing transmissions are broadcast up a stem
(resembling a handle) and deflected outward in a series of narrow parallel beams by the
curved head. Received transmissions are collected by the scooped shape of the horn, in a
manner similar to the parabolic dish, and are deflected down into the stem.
Infrared
Infrared waves, with frequencies from 300 GHz to 400 THz (wavelengths from 1 mm to 770
nm), can be used for short-range communication. Infrared waves, having high
frequencies, cannot penetrate walls. This advantageous characteristic prevents
interference between one system and another; a short-range communication system in one
room cannot be affected by another system in the next room. When we use our infrared
remote control, we do not interfere with the use of the remote by our neighbors. However,
this same characteristic makes infrared signals useless for long-range communication. In
addition, we cannot use infrared waves outside a building because the sun's rays contain
infrared waves that can interfere with the communication.
Satellite Communication:
Satellite links suit business data applications that require high data rates for short periods of
time; examples include the National Weather Service, news services, credit card verification,
automatic teller machines, and car rental agencies. A satellite link commonly connects a
central location with many remote ones. Communication between two sites passes through
the satellite, which allows low-cost, small antenna dishes (about 5 ft) at each site.
Circuit Switching:
A dedicated path between the source node and the destination node is set up for the duration
of communication session to transfer data. That path is a connected sequence of links
between network nodes. On each physical link, a logical channel is dedicated to the
connection. Communication via circuit switching involves three phases,
I. Circuit Establishment: Before any signals can be transmitted, an end-to-end (station-
to-station) circuit must be established.
II. Data Transfer: The data may be analog or digital, depending on the nature of the
network
III. Circuit Disconnect: After some period of data transfer, the connection is terminated,
usually by the action of one of the two stations
Packet Switching:
Messages are divided into subsets of equal length called packets. In the packet-switching
approach, data are transmitted in short packets (a few kilobytes). A long message is
broken up into a series of packets as shown in Figure. Every packet contains some control
information in its header, which is required for routing and other purposes.
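Breaking a message into headered packets can be sketched as follows (the header fields and sizes here are invented, not any real protocol's format):

```python
# Split a message into fixed-size packets, each carrying a small header
# (sequence number and packet count) used for routing and reassembly.
def packetize(message: bytes, payload_size: int):
    total = -(-len(message) // payload_size)  # ceiling division
    return [
        {"seq": i, "total": total, "data": message[off:off + payload_size]}
        for i, off in enumerate(range(0, len(message), payload_size))
    ]

packets = packetize(b"A long message broken into short packets", 8)
print(len(packets))        # 5 packets
print(packets[0]["data"])  # b'A long m'
```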
The main difference between packet switching and circuit switching is that in packet
switching the communication lines are not dedicated to passing messages from the source to
the destination. Different messages (and even different packets of the same message) can
pass through different routes, and when there is "dead time" in the communication between
the source and the destination, the lines can be used by other sources.
There are two basic approaches to packet switching: virtual-circuit packet switching and
datagram packet switching. In virtual-circuit packet switching, a virtual circuit is made
before actual data are transmitted. It differs from circuit switching in that in circuit switching
the call-accept signal comes only from the final destination back to the source, whereas in
virtual-circuit packet switching the call-accept signal is passed between each pair of adjacent
intermediate nodes, as shown in Fig. Other features of virtual-circuit packet switching are
discussed in the following subsection.
Virtual Circuit:
An initial setup phase is used to set up a route between the intermediate nodes for all the
packets passed during the session between the two end nodes. In each intermediate node, an
entry is registered in a table to indicate the route for the connection that has been set up.
Thus, packets passed through this route, can have short headers, containing only a virtual
circuit identifier (VCI), and not their destination. Each intermediate node passes the packets
according to the information that was stored in it during the setup phase. In this way, packets
arrive at the destination in the correct sequence, and transfer is essentially guaranteed to be
error-free. This approach is slower than circuit switching, since different virtual circuits
may compete over the same resources, and an initial setup phase is needed to initiate the
circuit. As in Circuit Switching, if an intermediate node fails, all virtual circuits that pass
through it are lost. The most common forms of Virtual Circuit networks are X.25 and Frame
Relay, which are commonly used for public data networks (PDN).
[Figure: Virtual Circuit]
Datagram:
This approach uses a different, more dynamic scheme, to determine the route through the
network links. Each packet is treated as an independent entity, and its header contains full
information about the destination of the packet. The intermediate nodes examine the header
of the packet, and decide to which node to send the packet so that it will reach its destination.
In this method, the packets don't follow a pre-established route, and the intermediate nodes
(the routers) don't have pre-defined knowledge of the routes that the packets should be passed
through. Packets can follow different routes to the destination, and delivery is not guaranteed.
Due to the nature of this method, the packets can reach the destination in a different order
than they were sent, thus they must be sorted at the destination to form the original message.
This approach is time consuming since every router has to decide where to send each packet.
The main implementation of Datagram Switching network is the Internet, which uses the IP
network protocol.
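The sort-on-arrival step can be sketched as follows (the packet contents are invented; real protocols carry sequence numbers in their headers for exactly this purpose):

```python
# Datagram delivery sketch: packets may arrive out of order, so the
# destination sorts them by sequence number to rebuild the message.
arrived = [
    {"seq": 2, "data": b"rld"},
    {"seq": 0, "data": b"hel"},
    {"seq": 1, "data": b"lo wo"},
]
message = b"".join(p["data"] for p in sorted(arrived, key=lambda p: p["seq"]))
print(message)  # b'hello world'
```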
Virtual Circuit:
A virtual circuit is a connection between two network devices that appears to be a direct and
dedicated connection but is actually a group of logical circuit resources from which
specific circuits are allocated as needed to meet traffic requirements in a packet switched
network. In this case, the two network devices can communicate as though they have a
dedicated physical connection. Examples of networks with virtual circuit capabilities include
X.25 connections, Frame Relay and ATM networks.
Virtual circuits can be either permanent, called Permanent virtual Circuits (PVC), or
temporary, called Switched Virtual Circuits (SVCs).
A Permanent Virtual Circuit (PVC) is a virtual circuit that is permanently available to the
user. A PVC is defined in advance by a network manager. A PVC is used on a circuit that
includes routers that must maintain a constant connection in order to transfer routing
information in a dynamic network environment. Carriers assign PVCs to customers to reduce
overhead and improve performance on their networks.
A switched virtual circuit (SVC) is a virtual circuit in which a connection session
is set up dynamically between individual nodes temporarily only for the duration of a
session. Once a communication session is complete, the virtual circuit is disabled.
X.25:
A packet-switching protocol for wide area network (WAN) connectivity that uses a public
data network (PDN) that parallels the voice network of the Public Switched Telephone
Network (PSTN). The current X.25 standard supports synchronous, full-duplex
communication at speeds up to 2 Mbps over two pairs of wires, but most implementations are
64-Kbps connections via a standard DS0 link.
Because X.25 was designed when analog telephone transmission over copper wire was the
norm, X.25 packets have a relatively large overhead of error-correction information,
resulting in comparatively low overall bandwidth. Newer WAN technologies such as
frame relay, Integrated Services Digital Network (ISDN), and T-carrier services are now
generally preferred over X.25. However, X.25 networks still have applications in areas such
as credit card verification, automatic teller machine transactions, and other dedicated business
and financial uses.
How It Works
The X.25 standard corresponds in functionality to the first three layers of the Open Systems
Interconnection (OSI) reference model for networking. Specifically, X.25 defines the
following:
The physical layer interface for connecting data terminal equipment (DTE),
such as computers and terminals at the customer premises, with the data
communications equipment (DCE), such as X.25 packet switches at the X.25
carrier’s facilities. The physical layer interface of X.25 is called X.21bis and was
derived from the RS-232 interface for serial transmission.
The data link layer protocol, called Link Access Procedure, Balanced (LAPB), which
provides error-checked frame delivery between the DTE and the DCE.
The packet layer protocol (PLP), which establishes virtual circuits and routes packets
through the X.25 network.
An end node (DTE) can initiate a communication session with another end node by dialing its
X.121 address and establishing a virtual circuit that can be either permanent or switched,
depending on the level of service required. Packets are routed through the X.25 backbone
network by using the ID number of the virtual circuit established for the particular
communication session. This ID number is called the logical channel identifier (LCI) and is a
12-bit address that identifies the virtual circuit. Packets are generally up to 128 bytes in size,
although maximum packet sizes range from 64 to 4096 bytes, depending on the system.
Frame Relay:
X.25 has several disadvantages, and Frame Relay was invented to overcome them. Frame
Relay is a high-performance WAN protocol that operates at the physical and data link layers
of the OSI reference model. Frame Relay is a wide-area network technology with the
following features:
I. Frame Relay operates at a higher speed (1.544 Mbps and recently 44.376 Mbps). This
means that it can easily be used instead of a mesh of T-1 or T-3 lines.
II. Frame Relay operates in just the physical and data link layers. This means it can easily
be used as a backbone network to provide services to protocols that already have a
network layer protocol, such as the Internet.
III. Frame Relay allows bursty data.
IV. Frame Relay allows a frame size of 9000 bytes, which can accommodate all local-
area network frame sizes.
V. Frame Relay is less expensive than other traditional WANs.
VI. Frame Relay has error detection at the data link layer only. There is no flow control or
error control. There is not even a retransmission policy if a frame is damaged; it is
silently dropped. Frame Relay was designed in this way to provide fast
transmission capability for more reliable media and for those protocols that
have flow and error control at the higher layers.
Virtual Circuits
The connection through a Frame Relay network between two DTEs is called a virtual circuit
(VC). The circuits are virtual because there is no direct electrical connection from end to end.
The connection is logical, and data moves from end to end, without a direct electrical circuit.
C/R: Command/response
EA: Extended address
FECN: Forward explicit congestion notification
BECN: Backward explicit congestion notification
DE: Discard eligibility
DLCI: Data link connection identifier
Address (DLCI) field: The first 6 bits of the first byte make up the first part of the DLCI.
The second part of the DLCI uses the first 4 bits of the second byte. These bits are part of
the 10-bit data link connection identifier defined by the standard.
Command/response (C/R): The command/response (C/R) bit is provided to allow upper
layers to identify a frame as either a command or a response. It is not used by the Frame
Relay protocol.
Extended address (EA): The extended address (EA) bit indicates whether the current byte is
the final byte of the address. An EA of 0 means that another address byte is to follow. An EA
of 1 means that the current byte is the final one.
Forward explicit congestion notification (FECN): The forward explicit congestion
notification (FECN) bit can be set by any switch to indicate that traffic is congested. This bit
informs the destination that congestion has occurred. In this way, the destination knows that it
should expect delay or a loss of packets.
Backward explicit congestion notification (BECN): The backward explicit congestion
notification (BECN) bit is set (in frames that travel in the other direction) to indicate a
congestion problem in the network. This bit informs the sender that congestion has occurred.
In this way, the source knows it needs to slow down to prevent the loss of packets.
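The address-field layout described above can be decoded with a few bit operations (a sketch assuming the default two-byte address format; the sample byte values are invented):

```python
# Bit-level decode of the default two-byte Frame Relay address field:
# byte 1 carries the high 6 DLCI bits, C/R, and EA; byte 2 carries the
# low 4 DLCI bits, FECN, BECN, DE, and the final EA bit.
def decode_fr_address(b1: int, b2: int) -> dict:
    return {
        "dlci": ((b1 >> 2) << 4) | (b2 >> 4),  # 6 high bits + 4 low bits = 10
        "cr":   (b1 >> 1) & 1,
        "ea0":  b1 & 1,        # 0 means another address byte follows
        "fecn": (b2 >> 3) & 1,
        "becn": (b2 >> 2) & 1,
        "de":   (b2 >> 1) & 1,
        "ea1":  b2 & 1,        # 1 means this is the final address byte
    }

# Sample bytes encoding DLCI 100 (binary 0001100100, split 000110 / 0100).
fields = decode_fr_address(0b00011000, 0b01000001)
print(fields["dlci"], fields["ea1"])  # 100 1
```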
FRADs
To handle frames arriving from other protocols, Frame Relay uses a device
called a Frame Relay assembler/disassembler (FRAD). A FRAD assembles and
disassembles frames coming from other protocols to allow them to be carried by Frame Relay
frames. A FRAD can be implemented as a separate device or as part of a switch.
VOFR
Frame Relay networks offer an option called Voice Over Frame Relay (VOFR) that sends
voice through the network. Voice is digitized using PCM and then compressed. The result is
sent as data frames over the network. This feature allows the inexpensive sending of voice
over long distances. However, note that the quality of voice is not as good as voice over a
circuit-switched network such as the telephone network. Also, the varying delay mentioned
earlier sometimes corrupts real-time voice.
ATM (Asynchronous Transfer Mode):
The “asynchronous” in ATM means ATM devices do not send and receive information at
fixed speeds or using a timer, but instead negotiate transmission speeds based on
hardware and information flow reliability. The “transfer mode” in ATM refers to how
information is moved: ATM transfers information in fixed-size units called cells. Each cell
consists of 53 octets, or bytes, as shown in Figure.
As these cells traverse an ATM network, switching takes place by changing the
VPI/VCI values (label swapping). Although the VPI/VCI values are not necessarily
consistent from one end of the connection to the other, the concept of a circuit is consistent
(unlike IP, where any given packet could get to its destination by a different route than the
others).
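Label swapping can be sketched with a small translation table (all ports and VPI/VCI values below are invented):

```python
# Sketch of ATM label swapping: each switch rewrites a cell's VPI/VCI as
# it passes through, using a translation table populated at connection
# setup. The labels differ hop by hop, but the circuit stays consistent.
switch_table = {
    # (in_port, in_vpi, in_vci) -> (out_port, out_vpi, out_vci)
    (1, 5, 32): (3, 7, 44),
    (2, 0, 100): (3, 7, 45),
}

def relay_cell(in_port: int, vpi: int, vci: int):
    out_port, new_vpi, new_vci = switch_table[(in_port, vpi, vci)]
    return out_port, new_vpi, new_vci  # the cell leaves with swapped labels

print(relay_cell(1, 5, 32))  # (3, 7, 44)
```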
Another advantage of the use of virtual circuits comes with the ability to use them as a
multiplexing layer, allowing different services (such as voice, Frame Relay, n × 64 channels,
and IP) to share the same link.
A virtual path connection (VPC) is a bundle of VCCs that have the same endpoints. Thus, all
User Plane: Provides for user information transfer, along with associated controls
(e.g., flow control, error control).
Physical layer: Analogous to the physical layer of the OSI reference model, the ATM
physical layer manages the medium-dependent transmission.
ATM layer: Combined with the ATM adaptation layer, the ATM layer is roughly analogous
to the data link layer of the OSI reference model. The ATM layer is responsible for the
simultaneous sharing of virtual circuits over a physical link (cell multiplexing) and passing
cells through the ATM network (cell relay). To do this, it uses the VPI and VCI information in
the header of each ATM cell.
ATM Advantages:
ATM supports voice, video and data allowing multimedia and mixed services over a
single network.
Provides the best multiple service support
Supports delay close to that of dedicated services
Supports the broadest range of burstiness, delay tolerance and loss performance
through the implementation of multiple QoS classes
Provides the capability to support both connection-oriented and connectionless traffic
using AALs
Able to use all common physical transmission paths (such as DS1, SONET).
Cable can be twisted-pair, coaxial or fiber-optic
Ability to connect LAN to WAN
Legacy LAN emulation
Efficient bandwidth use by statistical multiplexing
Scalability
Higher aggregate bandwidth
High speed Mbps and possibly Gbps
ATM disadvantages
Flexibility comes at the expense of efficiency; at present, for any one application it is
usually possible to find a more optimized technology
Cost, although it will decrease with time
New customer premises hardware and software are required
Competition from other technologies, such as 100 Mbps FDDI and 100 Mbps Fast
Ethernet
Integrated Services Digital Network (ISDN) is a network that provides end-to-end digital
connectivity to support a wide range of services including voice and data services. ISDN
allows multiple digital channels to operate simultaneously through the same regular phone
wiring used for analog lines, but ISDN transmits a digital signal rather than analog. Latency
is much lower on an ISDN line than on an analog line.
ISDN standards define two main channel types, each with a different transmission rate.
The bearer channel or B channel: The B channel is defined as a clear digital path of 64 kbps
for voice or up to 64 kbps of data. It is said to be clear because it can be used to transmit any
type of digitized data in full-duplex mode. For example, a digitized voice call can be
transmitted on a single B channel.
Delta channel or D channel: The D channel carries signaling messages, such as call setup
and teardown, to control calls on B channels. Traffic over the D channel employs the Link
Access Procedure on the D Channel (LAPD) protocol. LAPD is a data link layer protocol
based on HDLC. The D channel is either 16 kbps for the Basic Rate Interface (BRI) or 64 kbps
for the Primary Rate Interface (PRI). The D channel is used to carry control information for
the B channel.
The basic digital circuit in the PSTN is a 64-kilobits-per-second channel, originally designed
by Bell Labs, called a "DS0" or Digital Signal 0. To carry a typical phone call from a calling
party to a called party, the audio sound is digitized at an 8 kHz sample rate using 8-bit pulse
code modulation. The call is then transmitted from one end to another via telephone
exchanges. The call is switched using a signaling protocol (SS7) between the telephone
exchanges under an overall routing strategy.
PSTN, the Public Switched Telephone Network, is a circuit-switched network that is used
primarily for voice communications worldwide, with more than 800 million subscribers.
Originally a network of fixed-line analog telephone systems, the PSTN is now almost entirely
digital and also includes mobile as well as fixed telephones. The basic digital circuit in the
PSTN is a 64-kilobit-per-second channel, known as "DS0" or Digital Signal 0. DS0's are also
known as timeslots because they are multiplexed together in a time-division fashion. To carry
a typical phone call from a calling party to a called party, the audio sound is digitized at an 8
kHz sample rate using 8-bit pulse code modulation. Multiple DS0's can be multiplexed
together on higher capacity circuits, such that 24 DS0's make a DS1 signal or T1 (the
European equivalent is an E1, containing 32 64 kbit/s channels). For more than a hundred
years, the PSTN was the only bearer network available for telephony. Today, the mobile
telephone network also serves as a bearer network for telephony.
Destination Service Access Point (DSAP): IEEE 802.2 header begins with a 1 byte field,
which identifies the receiving upper-layer process.
Source Service Access Point (SSAP): Following the DSAP address is the 1-byte address,
which identifies the sending upper-layer process.
Control: The Control field employs three different formats, depending on the type of LLC
frame used.
Information (I) frame -- Carries upper-layer information and some control
information.
Supervisory (S) frame -- Provides control information. An S frame can request and
suspend transmission, report status, and acknowledge receipt of I frames. S
frames do not have an Information field.
Unnumbered (U) frame -- Used for control purposes and is not sequenced. A U frame
can be used to initialize secondaries. Depending on the function of the U frame, its
Control field is 1 or 2 bytes. Some U frames have an Information field.
Framing:
The data link layer needs to pack bits into frames, so that each frame is distinguishable from
another. The Data Link layer prepares a packet for transport across the local media by
encapsulating it with a header and a trailer to create a frame.
Our postal system practices a type of framing. The simple act of inserting a letter into an
envelope separates one piece of information from another; the envelope serves as the
delimiter. In addition, each envelope defines the sender and receiver addresses since the
postal system is a many-to-many carrier facility. Framing in the data link layer separates a
message from one source to a destination, or from other messages to other destinations, by
adding a sender address and a destination address. The destination address defines where the
packet is to go; the sender address helps the recipient acknowledge the receipt.
Fixed-Size Framing
Frames can be of fixed or variable size. In fixed-size framing, there is no need for defining
the boundaries of the frames; the size itself can be used as a delimiter. An example of this
type of framing is the ATM wide-area network, which uses frames of fixed size called cells.
Character-Oriented Protocols
In a character-oriented protocol, data to be carried are 8-bit characters from a coding system
such as ASCII. The header, which normally carries the source and destination
addresses and other control information, and the trailer, which carries error detection or
error correction redundant bits, are also multiples of 8 bits. To separate one frame from the
next, an 8-bit (1-byte) flag is added at the beginning and the end of a frame. The flag,
composed of protocol-dependent special characters, signals the start or end of a frame. Any
pattern used for the flag could also be part of the information. To fix this problem, a byte-
stuffing strategy was added to character-oriented framing. In byte stuffing (or character
stuffing), a special byte is added to the data section of the frame when there is a character
with the same pattern as the flag. The data section is stuffed with an extra byte. This byte is
usually called the escape character (ESC), which has a predefined bit pattern. Whenever the
receiver encounters the ESC character, it removes it from the data section and treats the next
character as data, not a delimiting flag. Character-oriented protocols present a problem in data
communications. The universal coding systems in use today, such as Unicode, have 16-bit
and 32-bit characters that conflict with 8-bit characters. We can say that in general, the
tendency is moving toward the bit-oriented protocols that we discuss next.
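The byte-stuffing rule described above can be sketched in a few lines. The concrete FLAG and ESC byte values below are assumptions chosen for illustration, not values given in the notes:

```python
FLAG = 0x7E  # frame delimiter (value assumed for illustration)
ESC = 0x7D   # escape character (value assumed for illustration)

def byte_stuff(payload: bytes) -> bytes:
    """Sender: escape any FLAG or ESC byte appearing in the data section."""
    out = bytearray()
    for b in payload:
        if b in (FLAG, ESC):
            out.append(ESC)  # insert the escape character before the byte
        out.append(b)
    return bytes(out)

def byte_unstuff(stuffed: bytes) -> bytes:
    """Receiver: drop each ESC and keep the byte that follows as plain data."""
    out = bytearray()
    i = 0
    while i < len(stuffed):
        if stuffed[i] == ESC:
            i += 1  # skip ESC; the next byte is data, not a delimiting flag
        out.append(stuffed[i])
        i += 1
    return bytes(out)
```

A flag byte inside the payload is doubled in length by the escape, which is the cost the text alludes to when it calls this an added strategy rather than a free one.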
This flag can create the same type of problem we saw in the byte-oriented protocols. That is,
if the flag pattern appears in the data, we need to somehow inform the receiver that this is not
the end of the frame. We do this by stuffing 1 single bit (instead of 1 byte) to prevent the
pattern from looking like a flag. The strategy is called bit stuffing. In bit stuffing, if a 0 and
five consecutive 1 bits are encountered, an extra 0 is added. This extra stuffed bit is
eventually removed from the data by the receiver. Note that the extra bit is added after one 0
followed by five 1s regardless of the value of the next bit. This guarantees that the flag field
sequence does not inadvertently appear in the frame. This means that if the flag-like pattern
01111110 appears in the data, it will change to 011111010 (stuffed) and is not mistaken as a
flag by the receiver. The real flag 01111110 is not stuffed by the sender and is recognized by
the receiver.
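The bit-stuffing rule can be sketched as follows, with bits represented as character strings for readability. Note that stuffing the flag-like pattern 01111110 produces 011111010, matching the example above:

```python
def bit_stuff(bits: str) -> str:
    """Sender: insert a 0 after every run of five consecutive 1s."""
    out, ones = [], 0
    for b in bits:
        out.append(b)
        ones = ones + 1 if b == '1' else 0
        if ones == 5:
            out.append('0')  # stuffed bit, added regardless of the next bit
            ones = 0
    return ''.join(out)

def bit_unstuff(bits: str) -> str:
    """Receiver: remove the 0 that must follow every run of five 1s."""
    out, ones, i = [], 0, 0
    while i < len(bits):
        out.append(bits[i])
        ones = ones + 1 if bits[i] == '1' else 0
        if ones == 5:
            i += 1  # the next bit is the stuffed 0; drop it
            ones = 0
        i += 1
    return ''.join(out)
```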
Flow Control
Flow Control is a set of procedures that tells the sender how much data it can transmit before
it must wait for an acknowledgment from the receiver. The flow of data should not be
allowed to overwhelm the receiver. The receiver should also be able to inform the transmitter
before its limits (this limit may be the amount of memory used to store the incoming data or
the processing power at the receiver end) are reached, so that the sender sends fewer frames.
Hence, Flow control refers to the set of procedures used to restrict the amount of data the
transmitter can send before waiting for acknowledgment.
Therefore, the sliding window of the sender shrinks from the left when frames of data are
sent. The sliding window of the sender expands to the right when acknowledgments
are received.
The sliding window of the receiver shrinks from the left when frames of data are
received. The sliding window of the receiver expands to the right when an
acknowledgement is sent.
Stop-and-Wait ARQ
In Stop-and-Wait ARQ, which is simplest among all protocols, the sender (say station A)
transmits a frame and then waits till it receives positive acknowledgement (ACK) or negative
acknowledgement (NACK) from the receiver (say station B). Station B sends an ACK if the
frame is received correctly, otherwise it sends NACK. Station A sends a new frame after
receiving ACK; otherwise it retransmits the old frame, if it receives a NACK.
Normal operation
To tackle the problem of a lost or damaged frame, the sender is equipped with a timer, which
it starts after sending each PDU. Normally an ACK PDU is received before the timer expires.
If the ACK is lost, the sender is unaware that its frame arrived; no ACK is received, the timer
counts down to zero, and this triggers retransmission of the same PDU by the sender.
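The timer behavior can be illustrated with a toy simulation. The channel model below, which loses the ACK for the first transmission of one chosen frame, is an assumption for illustration only:

```python
def send_stop_and_wait(frames, lose_first_ack_of=None):
    """Send frames one at a time; retransmit on a (simulated) timeout."""
    log = []
    for seq, _frame in enumerate(frames):
        attempts = 0
        while True:
            attempts += 1
            log.append(('send', seq))
            if seq == lose_first_ack_of and attempts == 1:
                log.append(('timeout', seq))  # timer hits zero: no ACK arrived
                continue                      # go around and resend the frame
            log.append(('ack', seq))          # ACK received before timer expiry
            break
    return log
```

Running it with the ACK for frame 0 lost shows the sender resending frame 0 once before moving on to frame 1.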
Lost Frame
Go-back-N ARQ
The most popular ARQ protocol is the go-back-N ARQ, where the sender sends the frames
continuously without waiting for acknowledgement. That is why it is also called
continuous ARQ. As the receiver receives the frames, it keeps sending ACKs, or a NACK,
in case a frame is incorrectly received. When the sender receives a NACK, it retransmits the
frame in error plus all the succeeding frames. Hence, the name of the protocol is go-back-N
ARQ
Normal Operation
Lost Frame
Selective-Repeat ARQ
The selective-repeat ARQ scheme retransmits only those frames for which a NAK is received
or whose timer has expired. This is the most efficient of the ARQ schemes, but the
sender must be more complex so that it can send frames out of order. The receiver also must
have storage space to hold the frames received after a NAK and the processing power to
reinsert frames in the proper sequence.
The data-link layer uses error control mechanisms to ensure that frames (data bit streams)
are transmitted with a certain level of accuracy. But to understand how errors are controlled,
it is essential to know what types of errors may occur.
Types of Errors
There may be three types of errors:
Single bit error
Multiple bit error
Burst error
Blocks of data from the source are subjected to a check bit or Parity bit generator form, where
a parity of 1 is added to the block if it contains an odd number of 1’s (ON bits) and 0 is added
if it contains an even number of 1’s. At the receiving end the parity bit is computed from the
received data bits and compared with the received parity bit, as shown in figure. This scheme
makes the total number of 1’s even, that is why it is called even parity checking.
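A minimal sketch of even-parity generation and checking, with blocks written as bit strings:

```python
def add_even_parity(block: str) -> str:
    """Generator: append one bit so the total number of 1's is even."""
    parity = '1' if block.count('1') % 2 else '0'
    return block + parity

def check_even_parity(received: str) -> bool:
    """Receiver: an odd count of 1's signals a detected error."""
    return received.count('1') % 2 == 0
```

As the performance discussion below notes, flipping any even number of bits leaves the count even, so such errors pass the check undetected.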
Performance
An observation of the table reveals that to move from one code word to another, at least two
data bits must change. Hence this set of code words is said to have a minimum
distance (Hamming distance) of 2, which means that a receiver that has knowledge of the
code word set can detect all single-bit errors in each code word. However, if two errors occur
in a code word, it becomes another valid member of the set; the decoder will see only
another valid code word and know nothing of the error. Thus errors in more than one bit
cannot be detected. In fact, it can be shown that a single parity check code can detect only an
odd number of errors in a code word.
Performance
CRC is a very effective error detection technique. If the divisor is chosen according to the
previously mentioned rules, its performance can be summarized as follows:
CRC can detect all single-bit errors.
CRC can detect all double-bit errors, provided the divisor has at least three 1’s.
CRC can detect any odd number of errors, provided the divisor contains the factor (x + 1).
CRC can detect all burst errors of length less than the degree of the polynomial.
CRC detects most of the larger burst errors with a high probability.
For example, CRC-12 detects 99.97% of burst errors with a length of 12 or more.
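CRC generation and checking can be sketched with modulo-2 (XOR) division on bit strings. The data word 100100 and divisor 1101 below are illustrative assumptions, not values taken from the notes:

```python
def crc_remainder(data: str, divisor: str) -> str:
    """Append degree-many 0s, divide modulo 2, return the remainder bits."""
    k = len(divisor) - 1
    bits = list(data + '0' * k)
    for i in range(len(data)):
        if bits[i] == '1':                 # divide only when the lead bit is 1
            for j, d in enumerate(divisor):
                bits[i + j] = str(int(bits[i + j]) ^ int(d))
    return ''.join(bits[-k:])

def crc_check(codeword: str, divisor: str) -> bool:
    """Receiver: recompute the remainder over the data part and compare it
    with the received check bits; a match means no error was detected."""
    k = len(divisor) - 1
    return crc_remainder(codeword[:-k], divisor) == codeword[-k:]
```

The sender transmits the data followed by the remainder; here 100100 with divisor 1101 yields remainder 001, so the codeword on the wire is 100100001.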
Hamming Codes
The most common types of error-correcting codes used in RAM are based on the codes
devised by R. W. Hamming. In the Hamming code, k parity bits are added to an n-bit data
word, forming a new word of n+k bits. The bit positions are numbered in sequence from 1 to
n+k. Those positions numbered with powers of two are reserved for the parity bits. The
remaining bits are the data bits. The code can be used with words of any length.
1. R1 bit is calculated using parity check at all the bit positions whose binary
representation includes a 1 in the least significant position.
R1: bits 1, 3, 5, 7, 9, 11
To find the redundant bit R1, we check for even parity. Since the total number of 1’s in
all the bit positions corresponding to R1 is an even number, the value of R1 (parity bit’s
value) = 0
2. R2 bit is calculated using parity check at all the bits positions whose binary
representation includes a 1 in the second position from the least significant bit.
R2: bits 2,3,6,7,10,11
To find the redundant bit R2, we check for even parity. Since the total number of 1’s in
all the bit positions corresponding to R2 is an odd number the value of R2(parity bit’s
value)=1
3. R4 bit is calculated using parity check at all the bits positions whose binary
representation includes a 1 in the third position from the least significant bit.
R4: bits 4, 5, 6, 7
To find the redundant bit R4, we check for even parity. Since the total number of 1’s in
all the bit positions corresponding to R4 is an odd number, the value of R4 (parity bit’s
value) = 1.
4. R8 bit is calculated using parity check at all the bit positions whose binary
representation includes a 1 in the fourth position from the least significant bit.
R8: bits 8, 9, 10, 11
To find the redundant bit R8, we check for even parity. Since the total number of 1’s in
all the bit positions corresponding to R8 is an even number, the value of R8 (parity bit’s
value) = 0.
Thus, the data transferred is:
The parity check bits, read as R8 R4 R2 R1, give the binary number 0110, whose decimal
value is 6. Thus, bit 6 contains an error. To correct the error, the 6th bit is changed from 1 to 0.
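The walkthrough above can be sketched for an 11-bit Hamming code with parity at positions 1, 2, 4 and 8. The 7-bit data word 1011001 is an assumption standing in for the missing figure, chosen because it reproduces the stated parity values R1=0, R2=1, R4=1, R8=0 and the syndrome 0110 for an error in bit 6:

```python
def hamming_encode(data7: str) -> list:
    """Place data bits (MSB at position 11), then fill even-parity bits."""
    code = [0] * 12                     # positions 1..11; index 0 unused here
    bits = iter(reversed(data7))        # fill low positions from the data LSB
    for pos in range(1, 12):
        if pos not in (1, 2, 4, 8):
            code[pos] = int(next(bits))
    for p in (1, 2, 4, 8):              # even parity over covered positions
        code[p] = sum(code[i] for i in range(1, 12) if i & p) % 2
    return code[1:]                     # list index 0 holds position 1

def hamming_syndrome(code11: list) -> int:
    """Recompute the four checks; the sum names the erroneous bit position."""
    code = [0] + list(code11)
    return sum(p for p in (1, 2, 4, 8)
               if sum(code[i] for i in range(1, 12) if i & p) % 2)
```

A zero syndrome means no single-bit error was detected; a nonzero syndrome is the position of the bit to flip.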
ALOHA
ALOHA is a system for coordinating and arbitrating access to a shared communication
channel. It was developed in the 1970s by Norman Abramson and his colleagues at
the University of Hawaii. The original system was used for ground-based radio broadcasting,
but the system has since been implemented in satellite communication systems.
A shared communication system like ALOHA requires a method of handling collisions that
occur when two or more systems attempt to transmit on the channel at the same time. In the
ALOHA system, a node transmits whenever data is available to send. If another node
transmits at the same time, a collision occurs and the frames are lost.
Pure ALOHA
In pure ALOHA, the stations transmit frames whenever they have data to send.
When two or more stations transmit simultaneously, there is collision and the frames
are destroyed.
In pure ALOHA, whenever any station transmits a frame, it expects the
acknowledgement from the receiver.
If acknowledgement is not received within specified time, the station assumes that the
frame (or acknowledgement) has been destroyed.
If the frame is destroyed because of collision the station waits for a random amount of
time and sends it again. This waiting time must be random otherwise same frames
will collide again and again.
Therefore pure ALOHA dictates that when time-out period passes, each station must
wait for a random amount of time before resending its frame. This randomness will
help avoid more collisions.
Figure shows an example of frame collisions in pure ALOHA.
In slotted ALOHA, if any station is not able to place the frame onto the channel at the
beginning of the slot i.e. it misses the time slot then the station has to wait until the
beginning of the next time slot.
In slotted ALOHA, there is still a possibility of collision if two stations try to send at
the beginning of the same time slot as shown in fig.
Slotted ALOHA still has an edge over pure ALOHA as chances of collision are
reduced to one-half
Explanation:
A station which has a frame ready will send it.
Then it waits for some time.
If it receives the acknowledgement then the transmission is successful.
Otherwise the station uses a backoff strategy and sends the packet again.
If there is still no acknowledgement after many attempts, the station aborts the
transmission.
2. Nonpersistent: In the nonpersistent method, a station that has a frame to send senses
the line. If the line is idle, it sends immediately. If the line is not idle, it waits a
random amount of time and then senses the line again. The nonpersistent approach
reduces the chance of collision because it is unlikely that two or more stations will
wait the same amount of time and retry to send simultaneously. However, this method
reduces the efficiency of the network because the medium remains idle when there
may be stations with frames to send.
3. P-Persistent: The p-persistent method is used if the channel has time slots with a slot
duration equal to or greater than the maximum propagation time. The p-persistent
approach combines the advantages of the other two strategies. It reduces the chance of
collision and improves efficiency.
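The p-persistent decision rule can be sketched as follows. The channel states and "random" draws are passed in so the example stays deterministic; a real station would sense the medium each slot and draw a fresh random number:

```python
def p_persistent(p, idle_slots, draws):
    """Return the index of the slot in which the station transmits, or None."""
    for slot, (idle, r) in enumerate(zip(idle_slots, draws)):
        if not idle:
            continue          # channel busy: keep sensing
        if r < p:
            return slot       # with probability p, transmit in this slot
        # with probability 1 - p, defer to the next slot and sense again
    return None
```

With p = 0.5, a busy first slot, and draws 0.8 then 0.3, the station defers once and transmits in the third slot.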
IP Address:
Each computer in a TCP/IP network must be given a unique identifier, or IP address. This
address, which operates at Layer 3, allows one computer to locate another computer on a
network. All computers also have a unique physical address, which is known as a MAC
address. These are assigned by the manufacturer of the NIC. MAC addresses operate at Layer
2 of the OSI model.
An IP address (IPv4) is a 32-bit sequence of ones and zeros. To make the IP address easier to
work with, it is usually written as four decimal numbers separated by periods. For example,
an IP address of one computer is 192.168.1.2. Another computer might have the address
128.10.2.1. This is called the dotted decimal format. Each part of the address is called an
octet because it is made up of eight binary digits. For example, the IP address 192.168.1.8
would be 11000000.10101000.00000001.00001000 in binary notation. The dotted decimal
notation is an easier method to understand than the binary ones and zeros method. This dotted
decimal notation also prevents a large number of transposition errors that would result if only
the binary numbers were used.
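The conversion between dotted decimal and the binary notation above can be sketched as:

```python
def to_binary_notation(ip: str) -> str:
    """Write each octet as eight binary digits, keeping the dots."""
    return '.'.join(format(int(octet), '08b') for octet in ip.split('.'))

def to_dotted_decimal(binary: str) -> str:
    """Convert the binary notation back to dotted decimal."""
    return '.'.join(str(int(octet, 2)) for octet in binary.split('.'))
```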
IPv4 Header:
Class A Blocks
A class A address block was designed to support extremely large networks with more than 16
million host addresses. Class A IPv4 addresses used a fixed /8 prefix with the first octet to
indicate the network address. The remaining three octets were used for host addresses.
The first bit of a Class A address is always 0. With that first bit a 0, the lowest number that
can be represented is 00000000, decimal 0. The highest number that can be represented is
01111111, decimal 127. The numbers 0 and 127 are reserved and cannot be used as network
addresses. Any address that starts with a value between 1 and 126 in the first octet is a Class
A address.
Class B Blocks
Class B address space was designed to support the needs of moderate to large size networks
with more than 65,000 hosts. A class B IP address used the two high-order octets to indicate
the network address. The other two octets specified host addresses. As with class A, address
space for the remaining address classes needed to be reserved.
The first two bits of the first octet of a Class B address are always 10. The remaining six bits
may be populated with either 1s or 0s. Therefore, the lowest number that can be represented
is 10000000, decimal 128, and the highest is 10111111, decimal 191. Any address that starts
with a value between 128 and 191 in the first octet is a Class B address.
Class C Blocks:
The class C address space was the most commonly available of the historic address classes.
This address space was intended to provide addresses for small networks with a maximum of
254 hosts. Class C address blocks used a /24 prefix. This meant that a class C network used
only the last octet as host addresses with the three high-order octets used to indicate the
network address.
A Class C address begins with binary 110. Therefore, the lowest number that can be
represented is 11000000, decimal 192. The highest number that can be represented is
11011111, decimal 223. If an address contains a number in the range of 192 to 223 in the first
octet, it is a Class C address.
Class D Blocks:
The Class D address class was created to enable multicasting in an IP address. A multicast
address is a unique network address that directs packets with that destination
address to predefined groups of IP addresses. Therefore, a single station can
simultaneously transmit a single stream of data to multiple recipients.
The Class D address space, much like the other address spaces, is mathematically
constrained. The first four bits of a Class D address must be 1110. Therefore, the
first octet range for Class D addresses is 11100000 to 11101111, or 224 to 239. An
IP address that starts with a value in the range of 224 to 239 in the first octet is a Class D
address.
Class E Block:
A Class E address has been defined. However, the Internet Engineering Task Force (IETF)
reserves these addresses for its own research. Therefore, no Class E addresses have been
released for use in the Internet. The first four bits of a Class E address are always set to 1s.
Therefore, the first octet range for Class E addresses is 11110000 to 11111111, or 240 to 255.
Within the address range of each IPv4 network, we have three types of addresses:
Network address - The address by which we refer to the network
Broadcast address - A special address used to send data to all hosts in the network
Host addresses - The addresses assigned to the end devices in the network
Subnet Mask:
To define the network and host portions of an address, the devices use a separate 32-bit
pattern called a subnet mask. We express the subnet mask in the same dotted decimal format
as the IPv4 address. The subnet mask is created by placing a binary 1 in each bit position that
represents the network portion and placing a binary 0 in each bit position that represents the
host portion.
The prefix and the subnet mask are different ways of representing the same thing - the
network portion of an address.
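Deriving the network and broadcast addresses from an address and mask can be sketched with Python's ipaddress module; the sample address and mask below are assumptions chosen for illustration:

```python
import ipaddress

# An interface pairs a host address with its subnet mask.
iface = ipaddress.ip_interface('192.168.1.2/255.255.255.0')
network = iface.network.network_address      # host bits all 0: 192.168.1.0
broadcast = iface.network.broadcast_address  # host bits all 1: 192.168.1.255
prefix = iface.network.prefixlen             # the same mask written as /24
```

This shows the point made above: the /24 prefix and the mask 255.255.255.0 are two notations for the same network portion.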
CIDR:
A routing system used by routers and gateways on the backbone of the Internet for routing
packets. CIDR replaces the old class method of allocating 8, 16, or 24 bits to the network ID,
and instead allows any number of contiguous bits in the IP address to be allocated as the
network ID. For example, if a company needs a few thousand IP addresses for its network, it
can allocate 11 or 12 bits of the address for the network ID instead of 8 bits for a class C
(which wouldn’t work because you would need to use several class C networks) or 16 bits for
class B (which is wasteful).
How It Works?
CIDR assigns a numerical prefix to each IP address. For example, a typical destination IP
address using CIDR might be 177.67.5.44/13. The prefix 13 indicates that the first 13 bits of
the IP address identify the network, while the remaining 32 - 13 = 19 bits identify the host.
The prefix helps to identify the Internet destination gateway or group of gateways to which
the packet will be forwarded. Prefixes vary in size, with longer prefixes indicating more
specific destinations. Routers use the longest possible prefix in their routing
tables when determining how to forward each packet. CIDR enables packets to be sent to
groups of networks instead of to individual networks, which considerably simplifies the
routing process.
TB = BR + BB, where
TB = total bits in the host portion
BB = bits borrowed (subnet bits)
BR = bits remaining (host bits)
We can’t use a /31 or /32 because we have to have at least 2 host bits for assigning IP
addresses to hosts.
How many hosts per subnet? We have 6 host bits off (11000000), so the equation would
be 2^6 – 2 = 62 hosts per subnet.
What are the valid subnets? 256 – 192 = 64. Remember, we start at zero and count in our
block size, so our subnets are 0, 64, 128, and 192. (Magic Number = 256 – Subnet Mask)
What’s the broadcast address for each subnet? The number right before the value of the next
subnet is all host bits turned on and equals the broadcast address. For the zero subnet, the
next subnet is 64, so the broadcast address for the zero subnet is 63.
What are the valid hosts? These are the numbers between the subnet and broadcast address.
The easiest way to find the hosts is to write out the subnet address and the broadcast address.
This way, the valid hosts are obvious.
The following table shows the 0, 64, 128, and 192 subnets, the valid host ranges of each, and
the broadcast address of each subnet:
Subnet              0       64       128      192
Broadcast Address   63      127      191      255
Usable Host Range   1-62    65-126   129-190  193-254
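The table can also be generated programmatically. This sketch assumes the /26 (255.255.255.192) Class C example from the text, where the block size ("magic number") is 256 - 192 = 64:

```python
mask_last_octet = 192
block = 256 - mask_last_octet             # the "magic number": 64
subnets = list(range(0, 256, block))      # 0, 64, 128, 192

table = []
for s in subnets:
    broadcast = s + block - 1             # the number right before the next subnet
    table.append({'subnet': s,
                  'first_host': s + 1,    # hosts sit between subnet and broadcast
                  'last_host': broadcast - 1,
                  'broadcast': broadcast})
```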
These are the exact numbers we used with Class C; we use them in the third octet and add a 0
in the fourth octet for the network address.
Valid hosts?
The following table shows the two subnets available, the valid host range, and the broadcast
address of each:
Subnet 172.16.0.0 172.16.128.0
Broadcast 172.16.127.255 172.16.255.255
Usable Host Range 172.16.0.1-172.16.127.254 172.16.128.1-172.16.255.254
Larger address space: Offers improved global reachability and flexibility; the
aggregation of prefixes that are announced in routing tables; multihoming to
several Internet service providers (ISPs); autoconfiguration that can include
link-layer addresses in the address space; plug-and-play options; public-to-private
readdressing end to end without address translation; and simplified mechanisms
for address renumbering and modification.
Simpler header: Provides better routing efficiency; no broadcasts and thus
no potential threat of broadcast storms; no requirement for processing checksums;
simpler and more efficient extension header mechanisms; and flow labels for per-flow
processing with no need to open the inner transport packet to identify the various
traffic flows.
Mobility and security: Ensures compliance with mobile IP and IPsec standards
functionality; mobility is built in, so any IPv6 node can use it when necessary; and
enables people to move around in networks with mobile network devices—with
many having wireless connectivity.
Mobile IP is an Internet Engineering Task Force (IETF) standard available for both
IPv4 and IPv6. The standard enables mobile devices to move without breaks in
established network connections. Because IPv4 does not automatically provide
this kind of mobility, you must add it with additional configurations.
IPsec is the IETF standard for IP network security, available for both IPv4 and IPv6.
Although the functionalities are essentially identical in both environments,
IPsec is mandatory in IPv6. IPsec is enabled on every IPv6 node and is
available for use. The availability of IPsec on all nodes makes the IPv6 Internet
more secure. IPsec also requires keys for each party, which implies a
global key deployment and distribution.
Transition richness: You can incorporate existing IPv4 capabilities in IPv6 in the
following ways:
Configure a dual stack with both IPv4 and IPv6 on the interface of a network
device.
IPV6 Header
IPv6 options improve over IPv4 by being placed in separate extension headers that are
located between the IPv6 header and the transport-layer header in a packet. Most extension
headers are not examined or processed by any router along a packet's delivery path until it
arrives at its final destination. This mechanism improves router performance for packets
containing options. In IPv4, the presence of any options requires the router to examine all
options.
Another improvement is that IPv6 extension headers, unlike IPv4 options, can be of arbitrary
length and the total amount of options that a packet carries is not limited to 40 bytes. This
feature, and the manner in which it is processed, permit IPv6 options to be used for
functions that were not practical in IPv4, such as the IPv6 Authentication and
Security Encapsulation options.
IPV6 Addressing:
Address Representation:
An IPv6 address is represented by breaking the 128 bits into eight 16-bit segments (four hex
characters each). Each segment is written in hexadecimal and separated by colons. Hex digits
are not case sensitive.
Rule 1:
Drop leading zeros:
2001:0050:0000:0235:0ab4:3456:456b:e560
2001:050:0:235:ab4:3456:456b:e560
Rule 2:
Successive fields of zeros can be represented as “::”, but the double colon may appear only
once in the address. FF01:0:0:0:0:0:0:1
FF01::1
Note : An address parser identifies the number of missing zeros by separating the two parts
and entering 0 until the 128 bits are complete. If two “::” notations are placed in the address,
there is no way to identify the size of each block of zeros.
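Both rules can be checked with Python's ipaddress module, which prints addresses in compressed form. It follows RFC 5952, so a single zero field is not replaced by "::" (matching Rule 1's result above), and output is lowercase, which is fine since hex digits are not case sensitive:

```python
import ipaddress

# Rule 1: leading zeros in each field are dropped; a lone zero field stays.
rule1 = str(ipaddress.IPv6Address('2001:0050:0000:0235:0ab4:3456:456b:e560'))

# Rule 2: the longest run of two or more zero fields collapses to "::".
rule2 = str(ipaddress.IPv6Address('FF01:0:0:0:0:0:0:1'))
```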
IPV4 VS IPV6
Addresses: IPv4 source and destination addresses are 32 bits; IPv6 source and destination
addresses are 128 bits.
Address space: IPv4 supports a small address space; IPv6 supports a very large address
space, sufficient for every person on earth.
Checksum: The IPv4 header includes a checksum; the IPv6 header does not.
Notation: IPv4 addresses are represented in dotted decimal format (e.g. 192.168.5.1); IPv6
addresses are represented as eight 16-bit segments written in hexadecimal and separated by
colons (e.g. 2001:0050:020c:0235:0ab4:3456:456b:e560).
Options: The IPv4 header includes options; in IPv6, all optional data is moved to
extension headers.
Broadcast: IPv4 uses broadcast addresses to send traffic to all nodes on a subnet; IPv6 has
no broadcast address and instead uses a link-local scope all-nodes multicast address.
QoS: The IPv4 header has no identification of packet flows for QoS handling by routers;
the IPv6 header identifies packet flows through the Flow Label field.
DNS: IPv4 uses host address (A) resource records in the Domain Name System (DNS) to
map host names to IPv4 addresses; IPv6 uses AAAA records in the DNS to map host names
to IPv6 addresses.
Fragmentation: In IPv4, both routers and the sending host fragment packets; in IPv6, only
the sending host fragments packets, routers do not.
Router discovery: IPv4 uses ICMP Router Discovery to determine the IPv4 address of the
best default gateway, and it is optional; IPv6 uses ICMPv6 Router Solicitation and Router
Advertisement messages for the same purpose, and they are required.
Dual Stack:
Dual stack is an integration method in which a node has implementations of, and
connectivity to, both IPv4 and IPv6 networks. If both IPv4 and IPv6 are configured on an
interface, that interface is dual-stacked.
Tunneling Technique
With manually configured IPv6 tunnels, an IPv6 address is configured on a tunnel interface,
and manually configured IPv4 addresses are assigned to the tunnel source and the tunnel
destination. The host or router at each end of a configured tunnel must support both the IPv4
and IPv6 protocol stacks.
Translation Technique
It is a translation mechanism that sits between an IPv6 network and an IPv4 network. The
translator translates IPv6 packets into IPv4 packets and vice versa.
The network layer has been designed with the following goals:
I. The services provided should be independent of the underlying technology. Users of the service need not be aware of the physical implementation of the network - for all they know, their messages could be transported via carrier pigeon! This design goal has great importance when we consider the great variety of networks in operation. In the area of public networks, networks in underdeveloped countries are nowhere near the technological prowess of those in countries like the US or Ireland. The design of the layer must not prevent us from connecting to networks of different technologies.
II. The transport layer (that is, the host computer) should be shielded from the number, type and differing topologies of the subnets it uses. That is, all the transport layer wants is a communication link; it need not know how that link is made.
III. Finally, there is a need for a uniform addressing scheme for network addresses.
With these goals in mind, two different types of service emerged: Connection oriented and
connectionless. A connection-oriented service is one in which the user is given a
"reliable" end to end connection. To communicate, the user requests a connection, then
uses the connection to his heart’s content, and then closes the connection. A telephone call is
the classic example of a connection oriented service.
In a connection-less service, the user simply bundles his information together, puts an address
on it, and then sends it off, in the hope that it will reach its destination. There is no guarantee
that the bundle will arrive. So a connectionless service is reminiscent of the postal system.
A letter is sent, that is, put in the post box. It is then in the "postal network" where it gets
bounced around and hopefully will leave the network in the correct place, that is, in the
addressee's letter box. We can never be totally sure that the letter will arrive, but we know
that there is a high probability that it will, and so we place our trust in the postal network.
Now, the question was which service would the network layer provide, a
connection-oriented or a connectionless one?
With a connection oriented service, the user must pay for the length (ie the duration) of his
connection. Usually this will involve a fixed start up fee. Now, if the user intends to send a
constant stream of data down the line, this is great - he is given a reliable service for as long
as he wants. However, say the user wished to send only a packet or two of data - now the cost
of setting up the connection greatly overpowers the cost of sending that one packet. Consider
also the case where the user wishes to send a packet once every 3 minutes. In a connection-
oriented service, the line will thus be idle for the majority of the time, thus wasting
bandwidth. So, connection-oriented services seem to be useful only when the user wishes to
send a constant stream of data.
One would therefore think that the reliable nature of the connection oriented service would
prompt people to choose it over the connectionless service - this is in fact not the case. One
can never ensure that the network is 100% reliable; in fact for many applications we must
assume that the network is not reliable at all. With this in mind, many applications perform
their own error detection, flow and congestion control at a higher level in the protocol stack,
that is, on their own machine, in the transport layer. So, if the sender and the receiver are
going to engage in their own control mechanisms, why put this functionality into the network layer? This is the argument for the connectionless service: the network layer should provide a simple, connectionless datagram service and leave reliability to the end hosts.
The primary responsibility of a router is to direct packets destined for local and remote
networks by:
Determining the best path to send packets
Forwarding packets toward their destination
The router uses its routing table to determine the best path to forward the packet. When the
router receives a packet, it examines its destination IP address and searches for the best match
with a network address in the router's routing table. The routing table also includes the
interface to be used to forward the packet. Once a match is found, the router encapsulates the
IP packet into the data link frame of the outgoing or exit interface, and the packet is then
forwarded toward its destination.
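As a sketch of this lookup, the following assumes a small hypothetical routing table (network addresses and interface names are invented for illustration) and uses longest-prefix matching to pick the exit interface:

```python
import ipaddress

# Hypothetical routing table: (network, exit interface). A real router
# matches the destination IP against every entry and forwards out the
# interface of the longest (most specific) matching prefix.
routing_table = [
    (ipaddress.ip_network("10.1.0.0/16"), "Serial0/0/0"),
    (ipaddress.ip_network("10.1.5.0/24"), "FastEthernet0/0"),
    (ipaddress.ip_network("0.0.0.0/0"), "Serial0/0/1"),  # default route
]

def lookup(dest_ip):
    """Return the exit interface for the longest matching prefix."""
    dest = ipaddress.ip_address(dest_ip)
    matches = [(net, iface) for net, iface in routing_table if dest in net]
    if not matches:
        return None
    # The most specific route is the one with the longest prefix.
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(lookup("10.1.5.9"))     # the /24 wins over the /16
print(lookup("10.1.8.1"))     # falls back to the /16
print(lookup("192.168.1.1"))  # only the default route matches
```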
Static Routes:
Static routes are configured manually; network administrators must add and delete static
routes to reflect any network topology changes. In a large network, the manual maintenance
of routing tables could require a lot of administrative time. On small networks with few
possible changes, static routes require very little maintenance. Static routing is not as scalable
as dynamic routing because of the extra administrative requirements. Even in large
networks, static routes that are intended to accomplish a specific purpose are
often configured in conjunction with a dynamic routing protocol.
For example, consider a network that is connected to the Internet through only a single ISP. There is no need to use a dynamic routing protocol across this link, because the ISP represents the only exit point to the Internet; a single static default route is sufficient.
Connected Routes:
Networks that are directly connected to the router are called connected routes; they are added to the routing table automatically when the corresponding interface is configured with an IP address and is active.
Dynamic Routes:
A dynamic route is one that a routing protocol adjusts automatically in response to topology or traffic changes.
Adaptive routing algorithm: When a router uses an adaptive routing algorithm to decide the next computer to which to transfer a packet of data, it examines the traffic conditions in order to determine a route which is as near optimal as possible. For example, it tries to pick a route whose communication lines carry light traffic. This strategy is in contrast to a non-adaptive routing algorithm. (Also called a dynamic route.)
Routing Protocol:
A routing protocol is the communication used between routers. A routing protocol
allows routers to share information about networks and their proximity to each other.
Routers use this information to build and maintain routing tables.
Autonomous System:
An AS is a collection of networks under a common administration that share a common
routing strategy. To the outside world, an AS is viewed as a single entity. The AS
may be run by one or more operators while it presents a consistent view of routing
to the external world. The American Registry of Internet Numbers (ARIN), a service
provider, or an administrator assigns a 16-bit identification number to each AS.
Metric:
There are cases when a routing protocol learns of more than one route to the same
destination. To select the best path, the routing protocol must be able to evaluate and
differentiate between the available paths. For this purpose a metric is used. A metric is a value
used by routing protocols to assign costs to reach remote networks. The metric is used to
determine which path is most preferable when there are multiple paths to the same remote
network.
Each routing protocol uses its own metric. For example, RIP uses hop count, EIGRP uses a
combination of bandwidth and delay, and Cisco's implementation of OSPF uses bandwidth.
As the name implies, distance vector means that routes are advertised as vectors of distance
and direction. Distance is defined in terms of a metric such as hop count and direction is
simply the next-hop router or exit interface. A router using a distance vector routing protocol
does not have the knowledge of the entire path to a destination network. Instead the router
knows only:
The direction or interface in which packets should be forwarded and
The distance or how far it is to the destination network.
To show you more exactly what a distance vector protocol does, Figure shows a
view of what a router learns with a distance vector routing protocol. The figure shows
an internetwork in which R1 learns about three routes to reach subnet X:
The four-hop route through R2
The three-hop route through R5
The two-hop route through R7
Distance vector protocols typically use the Bellman-Ford algorithm for the best path route
determination.
Initial Update:
R1
Sends an update about network 10.1.0.0 out the Serial0/0/0 interface
Sends an update about network 10.2.0.0 out the FastEthernet0/0 interface
Receives update from R2 about network 10.3.0.0 with a metric of 1
Stores network 10.3.0.0 in the routing table with a metric of 1
R2
Sends an update about network 10.3.0.0 out the Serial 0/0/0 interface
Sends an update about network 10.2.0.0 out the Serial 0/0/1 interface
Receives an update from R1 about network 10.1.0.0 with a metric of 1
Stores network 10.1.0.0 in the routing table with a metric of 1
Receives an update from R3 about network 10.4.0.0 with a metric of 1
Stores network 10.4.0.0 in the routing table with a metric of 1
After this first round of update exchanges, each router knows about the connected networks
of their directly connected neighbors. However, did you notice that R1 does not yet know
about 10.4.0.0 and that R3 does not yet know about 10.1.0.0? Full knowledge and a
converged network will not take place until there is another exchange of routing information.
Next Update:
R1
Sends an update about network 10.1.0.0 out the Serial 0/0/0 interface.
Sends an update about networks 10.2.0.0 and 10.3.0.0 out the FastEthernet0/0
interface.
Receives an update from R2 about network 10.4.0.0 with a metric of 2.
Stores network 10.4.0.0 in the routing table with a metric of 2.
Same update from R2 contains information about network 10.3.0.0 with a metric of 1.
There is no change; therefore, the routing information remains the same.
R2
Sends an update about networks 10.3.0.0 and 10.4.0.0 out of Serial 0/0/0 interface.
Sends an update about networks 10.1.0.0 and 10.2.0.0 out of Serial 0/0/1 interface.
Receives an update from R1 about network 10.1.0.0. There is no change; therefore,
the routing information remains the same.
Receives an update from R3 about network 10.4.0.0. There is no change; therefore,
the routing information remains the same.
R3
Sends an update about network 10.4.0.0 out the Serial 0/0/0 interface.
Sends an update about networks 10.2.0.0 and 10.3.0.0 out the FastEthernet0/0
interface.
Receives an update from R2 about network 10.1.0.0 with a metric of 2.
Stores network 10.1.0.0 in the routing table with a metric of 2.
Same update from R2 contains information about network 10.2.0.0 with a metric of 1.
There is no change; therefore, the routing information remains the same.
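The exchange rounds above can be simulated. The sketch below assumes the R1-R2-R3 chain implied by the updates, a hop-count metric, and synchronous rounds (split horizon and timers are ignored for simplicity):

```python
# Each router starts knowing only its directly connected networks
# (metric 0) and periodically advertises its whole table to its
# neighbors, who add one hop to each received metric.
neighbors = {"R1": ["R2"], "R2": ["R1", "R3"], "R3": ["R2"]}
tables = {
    "R1": {"10.1.0.0": 0, "10.2.0.0": 0},
    "R2": {"10.2.0.0": 0, "10.3.0.0": 0},
    "R3": {"10.3.0.0": 0, "10.4.0.0": 0},
}

def exchange_round(tables):
    """One synchronous round: every router advertises to every neighbor."""
    snapshot = {r: dict(t) for r, t in tables.items()}
    for router, nbrs in neighbors.items():
        for nbr in nbrs:
            for net, metric in snapshot[nbr].items():
                candidate = metric + 1  # one hop farther than the neighbor
                if net not in tables[router] or candidate < tables[router][net]:
                    tables[router][net] = candidate

for round_no in (1, 2):
    exchange_round(tables)
    print(f"after round {round_no}: R1 knows {sorted(tables['R1'])}")
```

After round 1, R1 has learned 10.3.0.0 with a metric of 1; only after round 2 does it learn 10.4.0.0 with a metric of 2, matching the convergence behavior described above.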
So exactly how does a link-state routing protocol work? All routers will complete the
following generic link-state routing process to reach a state of convergence:
1. Each router learns about its own links, its own directly connected
networks. This is done by detecting that an interface is in the up state.
2. Each router is responsible for meeting its neighbors on directly connected networks. Link-state routers do this by exchanging Hello packets with other link-state routers on directly connected networks.
3. Each router builds a Link-State Packet (LSP) containing the state of each directly
connected link. This is done by recording all the pertinent information about each
neighbor, including neighbor ID, link type, and bandwidth.
4. Each router floods the LSP to all neighbors, who then store all LSPs received in a
database. Neighbors then flood the LSPs to their neighbors until all routers in the area
have received the LSPs. Each router stores a copy of each LSP received from its
neighbors in a local database.
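Once the link-state database is synchronized, each router independently computes best paths from it, typically with Dijkstra's shortest-path-first algorithm. A sketch over a hypothetical four-router LSDB with invented link costs:

```python
import heapq

# Hypothetical link-state database: every router's links and their costs.
# After flooding, every router holds this same map of the topology.
lsdb = {
    "R1": {"R2": 10, "R3": 5},
    "R2": {"R1": 10, "R4": 1},
    "R3": {"R1": 5, "R4": 8},
    "R4": {"R2": 1, "R3": 8},
}

def spf(root):
    """Dijkstra's shortest-path-first over the link-state database."""
    dist = {root: 0}
    pq = [(0, root)]
    while pq:
        d, node = heapq.heappop(pq)
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry; a shorter path was already found
        for nbr, cost in lsdb[node].items():
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(pq, (nd, nbr))
    return dist

print(spf("R1"))  # R1's view of the cheapest cost to every router
```

Note that R1 reaches R4 via R2 at cost 11 (10 + 1), not via the seemingly shorter first hop through R3 (5 + 8 = 13); this whole-path view is exactly what distance vector routers lack.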
Faster Convergence:
When receiving a link-state packet (LSP), link-state routing protocols immediately flood the LSP out all interfaces except the interface from which the LSP was received; this is how they achieve faster convergence. With a distance vector routing algorithm, a router must process each routing update and update its routing table before flooding the update out its other interfaces.
The transport layer provides a logical connection between a source host and a destination
host. Transport protocols segment and reassemble data sent by upper-layer applications into
the same data stream, or logical connection, between end points.
Creates packets from the byte stream received from the application layer.
Uses port numbers to provide process-to-process communication.
Uses a sliding window protocol to achieve flow control.
Uses acknowledgement packet, timeout and retransmission to achieve error control.
The primary duty of the transport layer is to provide end-to-end control and reliability as data
travels through this cloud. This is accomplished through the use of sliding windows,
sequence numbers, and acknowledgments. The transport layer also defines end-to-end
connectivity between host applications. Transport layer protocols include TCP and UDP.
TCP
TCP is a connection-oriented transport layer protocol that provides reliable full-duplex data
transmission. TCP is part of the TCP/IP protocol stack. In a connection-oriented environment,
a connection is established between both ends before the transfer of information can begin.
TCP breaks messages into segments, reassembles them at the destination, and resends
anything that is not received. TCP supplies a virtual circuit between end-user
applications.
TCP Services
TCP offers following services to the processes at the application layer:
Stream Delivery Service
Sending and Receiving Buffers
Bytes and Segments
Full Duplex Service
Connection Oriented Service
Reliable Service
The following are the definitions of the fields in the TCP segment:
Source port – Number of the port that sends data
Destination port – Number of the port that receives data
Sequence number – Number used to ensure the data arrives in the correct order
Acknowledgment number – Next expected TCP octet
HLEN – Number of 32-bit words in the header
Reserved – Set to zero
Code bits – Control functions, such as setup and termination of a session
Window – Number of octets that the sender will accept
Checksum – Calculated checksum of the header and data fields
Urgent pointer – Indicates the end of the urgent data
Option – One option currently defined, maximum TCP segment size
Data – Upper-layer protocol data
Code Bits or Flags (6 bits).
o URG: Urgent pointer field significant.
o ACK: Acknowledgment field significant.
o PSH: Push function.
o RST: Reset the connection.
o SYN: Synchronize the sequence numbers.
o FIN: No more data from sender.
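The Checksum field above is computed with the standard 16-bit ones'-complement algorithm (RFC 1071), used by both TCP and UDP. A minimal sketch, applied to an invented byte string standing in for a segment:

```python
# 16-bit ones'-complement checksum (RFC 1071). The sender stores the
# result in the Checksum field; the receiver recomputes it over the
# received bytes to detect corruption.
def internet_checksum(data: bytes) -> int:
    if len(data) % 2:
        data += b"\x00"  # pad to a whole number of 16-bit words
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    return ~total & 0xFFFF  # ones' complement of the running sum

# Invented 8-byte sample: words 0x0035, 0x0035, 0x0008, 0x0000.
segment = b"\x00\x35\x00\x35\x00\x08\x00\x00"
print(hex(internet_checksum(segment)))  # 0xff8d
```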
TCP Handshake
Steps 1 and 2 establish the connection parameter (sequence number) for one direction, and it is acknowledged. Steps 2 and 3 establish the connection parameter (sequence number) for the other direction, and it is acknowledged. With these, full-duplex communication is established.
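In practice the handshake is performed by the operating system when an application opens a connection. A minimal localhost sketch: the three-way handshake (SYN, SYN-ACK, ACK) happens inside connect()/accept(), after which the application simply sees a reliable full-duplex byte stream:

```python
import socket
import threading

# Passive side: bind to port 0 so the OS picks a free port, then listen.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

def serve():
    conn, _ = server.accept()   # handshake completes here
    data = conn.recv(1024)
    conn.sendall(data.upper())  # echo back, uppercased
    conn.close()

t = threading.Thread(target=serve)
t.start()

# Active side: connect() sends the SYN and blocks until the handshake is done.
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"hello tcp")
reply = client.recv(1024)
client.close()
t.join()
server.close()
print(reply)  # b'HELLO TCP'
```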
UDP is the connectionless transport protocol in the TCP/IP protocol stack. UDP is a
simple protocol that exchanges datagrams without guaranteed delivery. It relies on
higher-layer protocols to handle errors and retransmit data.
Features of UDP:
Provides a connectionless, unreliable service, so UDP is faster than TCP.
Adds only a checksum and process-to-process addressing to IP.
Used when a socket is opened in datagram mode.
It can send a bulk quantity of packets.
No acknowledgment.
Good for video streaming, even though it is an unreliable protocol.
It does not care about the delivery of the packets or the sequence of delivery.
No flow control or congestion control; the sender can overrun the receiver's buffer.
Needed by real-time applications such as video conferencing, because it is faster.
A UDP datagram is used in the Network File System (NFS), DNS, SNMP, TFTP, etc.
It has no handshaking or flow control.
It does not even have windowing capability.
It is a fire-and-forget type of protocol.
An application can use a UDP port number while another application uses the same port number for a TCP session from the same IP address.
UDP is a peer of TCP at the transport layer of the OSI stack, just as ICMP is a peer of IP at the network layer.
No connection establishment or teardown; data is just sent right away.
For reliable data transfer with UDP, a lock-step protocol is required (to be implemented by the application).
No error control; corrupted data is not retransmitted (even though the UDP header has a checksum to detect errors and report them to the application).
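A minimal localhost sketch of this fire-and-forget behavior (ports are chosen by the OS; on loopback the datagrams normally arrive, but UDP itself makes no such guarantee):

```python
import socket

# Receiving side: a datagram socket bound to an OS-chosen port.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))
receiver.settimeout(5)  # avoid blocking forever if a datagram is lost
addr = receiver.getsockname()

# Sending side: no connect, no handshake; each sendto() is an
# independent datagram, sent in the hope that it arrives.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"datagram 1", addr)
sender.sendto(b"datagram 2", addr)

first, _ = receiver.recvfrom(1024)
second, _ = receiver.recvfrom(1024)
print(first, second)

sender.close()
receiver.close()
```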
The following are the definitions of the fields in the UDP segment:
Source port – Number of the port that sends data
Destination port – Number of the port that receives data
Length – Length of the UDP header and data in octets
Checksum – Calculated checksum of the header and data fields
Socket Programming:
Sockets are the combination of an IP address plus a corresponding TCP/UDP port number. For applications to work with TCP/IP, an Application Program Interface (API) is required. The API serves as an interface between different software programs and facilitates their interaction, similar to the way the user interface facilitates interaction between humans and computers.
A socket is not a port, though there is a close relationship between them. A socket is
associated with a port, though this is a many-to-one relationship. Each port can have a single
passive socket, awaiting incoming connections, and multiple active sockets, each
corresponding to an open connection on the port.
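This many-to-one relationship can be observed directly. In the sketch below, one passive (listening) socket and two active sockets share the same local port; each connection is distinguished by the remote half of its socket pair:

```python
import socket

# One passive socket awaiting incoming connections on an OS-chosen port.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen(2)
port = listener.getsockname()[1]

# Two clients connect to the same port...
clients = [socket.create_connection(("127.0.0.1", port)) for _ in range(2)]
# ...yielding two active sockets on the server side.
conns = [listener.accept()[0] for _ in range(2)]

# Both active sockets share the listener's local port; what makes each
# socket unique is the remote (IP, port) of the peer.
local_ports = {c.getsockname()[1] for c in conns}
remote_ports = {c.getpeername()[1] for c in conns}
print(local_ports)        # a single shared local port
print(len(remote_ports))  # 2 distinct remote ports

for s in clients + conns:
    s.close()
listener.close()
```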
Congestion in a network may occur if the load on the network (the number of packets sent to the network) is greater than the capacity of the network (the number of packets a network can handle). Congestion control refers to the mechanisms and techniques used to control congestion and keep the load below the capacity.
When too many packets are pumped into the system, congestion occurs, leading to degradation of performance.
Congestion tends to feed upon itself and back up.
Congestion reflects a lack of balance among the various pieces of networking equipment.
It is a global issue.
1. Retransmission Policy
Retransmission is sometimes unavoidable. If the sender feels that a sent packet is lost
or corrupted, the packet needs to be retransmitted. Retransmission in general may
increase congestion in the network. However, a good retransmission policy can
prevent congestion. The retransmission policy and the retransmission timers must be
designed to optimize efficiency and at the same time prevent congestion. For
example, the retransmission policy used by TCP is designed to prevent or alleviate
congestion.
2. Window Policy
The type of window at the sender may also affect congestion. A Selective Repeat window is better than a Go-Back-N window for congestion control, because Go-Back-N resends every packet from a lost or timed-out packet onward, even if some of those packets have arrived safely; this duplication can make the congestion worse.
3. Acknowledgment Policy
The acknowledgment policy imposed by the receiver may also affect
congestion. If the receiver does not acknowledge every packet it receives,
it may slow down the sender and help prevent congestion. Several
approaches are used in this case. A receiver may send an acknowledgment only if it
has a packet to be sent or a special timer expires. A receiver may decide to
acknowledge only N packets at a time. We need to know that the acknowledgments
are also part of the load in a network. Sending fewer acknowledgments means
imposing fewer loads on the network.
4. Discarding Policy
A good discarding policy by the routers may prevent congestion and at the same time
may not harm the integrity of the transmission. For example, in audio transmission, if
the policy is to discard less sensitive packets when congestion is likely to happen, the
quality of sound is still preserved and congestion is prevented or alleviated.
5. Admission Policy
An admission policy, which is a quality-of-service mechanism, can also prevent
congestion in virtual-circuit networks. Switches in a flow first check the resource
requirement of a flow before admitting it to the network. A router can deny
establishing a virtual- circuit connection if there is congestion in the network or if
there is a possibility of future congestion.
1. Back-pressure
The technique of backpressure refers to a congestion control mechanism in
which a congested node stops receiving data from the immediate upstream node or
nodes. This may cause the upstream node or nodes to become congested, and they, in
turn, reject data from their upstream nodes or nodes. And so on. Backpressure is a
node-to-node congestion control that starts with a node and propagates, in the
opposite direction of data flow, to the source. The backpressure technique can be
applied only to virtual circuit networks, in which each node knows the upstream node
from which a flow of data is coming.
2. Choke Packet
A choke packet is a packet sent by a node to the source to inform it of congestion.
Note the difference between the backpressure and choke packet methods. In
backpressure, the warning is from one node to its upstream node, although the
warning may eventually reach the source station. In the choke packet method, the
warning is from the router, which has encountered congestion, to the source station
directly. The intermediate nodes through which the packet has traveled are not
warned. We have seen an example of this type of control in ICMP. When a router in the Internet is overwhelmed with datagrams, it may discard some of them; but it informs the source host using a source-quench ICMP message. The warning message goes directly to the source station; the intermediate routers do not take any action.
Figure shows the idea of a choke packet.
3. Implicit Signaling
In implicit signaling, there is no communication between the congested node or nodes
and the source. The source guesses that there is congestion somewhere in the network
from other symptoms. For example, when a source sends several packets and there is
no acknowledgment for a while, one assumption is that the network is congested. The
delay in receiving an acknowledgment is interpreted as congestion in the network; the
source should slow down.
4. Explicit Signaling
The node that experiences congestion can explicitly send a signal to the source or
destination. The explicit signaling method, however, is different from the choke
packet method. In the choke packet method, a separate packet is used for this purpose;
in the explicit signaling method, the signal is included in the packets that carry data.
Explicit signaling, as we will see in Frame Relay congestion control, can occur in
either the forward or the backward direction.
Backward Signaling: A bit can be set in a packet moving in the direction opposite to
the congestion. This bit can warn the source that there is congestion and that it needs
to slow down to avoid the discarding of packets.
Forward Signaling: A bit can be set in a packet moving in the direction of the
congestion. This bit can warn the destination that there is congestion. The
receiver in this case can use policies, such as slowing down the
acknowledgments, to alleviate the congestion.
Leaky Bucket
If a bucket has a small hole at the bottom, the water leaks from the bucket at a constant rate
as long as there is water in the bucket. The rate at which the water leaks does not depend on
the rate at which the water is input to the bucket unless the bucket is empty. The input rate
can vary, but the output rate remains constant. Similarly, in networking, a technique called
leaky bucket can smooth out bursty traffic. Bursty chunks are stored in the bucket and sent
out at an average rate. Figure shows a leaky bucket and its effects.
In the figure, we assume that the network has committed a bandwidth of 3 Mbps for a host.
The use of the leaky bucket shapes the input traffic to make it conform to this commitment.
In Figure the host sends a burst of data at a rate of 12 Mbps for 2 s, for a total of 24 Mbits of
data. The host is silent for 5 s and then sends data at a rate of 2 Mbps for 3 s, for a total of 6
Mbits of data. In all, the host has sent 30 Mbits of data in l0s. The leaky bucket smooths the
traffic by sending out data at a rate of 3 Mbps during the same 10 s. Without the leaky bucket,
the beginning burst may have hurt the network by consuming more bandwidth than is set
aside for this host. We can also see that the leaky bucket may prevent congestion.
A leaky bucket algorithm shapes bursty traffic into fixed-rate traffic by averaging the data
rate. It may drop the packets if the bucket is full.
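The 12 Mbps / 2 Mbps example above can be simulated with a simple per-second loop (the bucket capacity is assumed large enough that nothing is dropped):

```python
# Offered traffic per second, in Mbits: a 12 Mbps burst for 2 s,
# silence for 5 s, then 2 Mbps for 3 s (30 Mbits in 10 s total).
arrivals = [12, 12, 0, 0, 0, 0, 0, 2, 2, 2]
rate = 3  # the bucket leaks at a constant 3 Mbps

bucket = 0
output = []
for offered in arrivals:
    bucket += offered          # bursty input is stored in the bucket
    sent = min(bucket, rate)   # the leak rate never exceeds 3 Mbps
    bucket -= sent
    output.append(sent)

print(output)       # a constant 3 Mbps whenever the bucket holds data
print(sum(output))  # all 30 Mbits delivered over the same 10 s
```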
Token Bucket
Figure shows the two scenarios before and after the tokens present in the bucket have been consumed. In Figure (a), the bucket holds two tokens, and three packets are waiting to be sent out of the interface; in Figure (b), two packets have been sent out by consuming two tokens, and one packet is still left.
The token bucket algorithm is less restrictive than the leaky bucket algorithm, in a sense that
it allows bursty traffic. However, the limit of burst is restricted by the number of tokens
available in the bucket at a particular instant of time.
The implementation of basic token bucket algorithm is simple; a variable is used just to count
the tokens. This counter is incremented every t seconds and is decremented whenever a
packet is sent. Whenever this counter reaches zero, no further packet is sent out as shown in
Figure (c).
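A sketch of that counter-based algorithm, with an assumed rate of one token per tick and a capacity of two tokens; packets wait in a queue when the counter reaches zero:

```python
def token_bucket(arrivals, rate=1, capacity=2):
    """Simulate a token bucket: `arrivals` lists packets arriving per tick."""
    tokens = capacity  # the counter starts full
    queue = 0          # packets waiting for tokens
    sent = []
    for packets in arrivals:
        queue += packets
        tokens = min(capacity, tokens + rate)  # increment every tick, capped
        n = min(queue, tokens)                 # burst limited by tokens on hand
        tokens -= n                            # decrement per packet sent
        queue -= n
        sent.append(n)
    return sent

# Mirrors the figure: 3 packets waiting, 2 tokens available, so 2 are
# sent immediately and the third waits for the next token.
print(token_bucket([3, 0, 0]))  # [2, 1, 0]
```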
The original Hypertext Transfer Protocol (HTTP) 1.0 is a stateless protocol whereby a Web browser forms a connection with a Web server, downloads the appropriate file, and then terminates the connection. The browser usually requests a file using an HTTP GET
method request on TCP port 80, which consists of a series of HTTP request headers that
define the transaction method (GET, POST, HEAD, and so on) and indicates to the server the
capabilities of the client. The server responds with a series of HTTP response headers that
indicate whether the transaction is successful, the type of data being sent, the type of server,
and finally the requested data.
IIS 4 supports a new version of this protocol called HTTP 1.1, which has new features that
make it more efficient. These new features include the following:
Persistent connections:
An HTTP 1.1 server can keep TCP connections open after a file has been transferred,
eliminating the need for a connection to be opened and closed each time a file is
transferred, as is the case with HTTP 1.0.
Pipelining:
This is a process whereby an HTTP 1.1 client can send multiple Internet Protocol (IP)
packets to the server without waiting for the server to respond to each packet.
Buffering:
This process allows several HTTP requests by the client to be buffered into a single
packet and sent to the server, which results in faster transfer times because fewer and
larger packets are used.
Host headers:
This feature enables an HTTP 1.1–compliant Web server to host multiple Web sites
using a single IP address.
HTTP PUT and HTTP DELETE commands:
These commands enable Web browsers to upload files to and delete files from Web servers using HTTP.
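As a sketch of the request and response headers described above (the host, path, and canned response are invented for illustration):

```python
# What a browser sends for an HTTP GET: a method line, request headers,
# and a blank line marking the end of the headers.
request = (
    "GET /index.html HTTP/1.1\r\n"
    "Host: www.example.com\r\n"   # required in HTTP 1.1 (host headers)
    "Connection: keep-alive\r\n"  # ask for a persistent connection
    "\r\n"
)

# A canned response: status line, response headers, blank line, body.
response = "HTTP/1.1 200 OK\r\nContent-Type: text/html\r\n\r\n<html></html>"
status_line = response.split("\r\n")[0]
version, code, reason = status_line.split(" ", 2)
print(version, code, reason)  # HTTP/1.1 200 OK
```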
DNS makes it possible to refer to Internet Protocol (IP) based systems (hosts) by human-friendly names (domain names). Name resolution is the act of determining the IP address of a given hostname. The benefits of DNS are twofold. First, domain names can be logical and easily remembered. Second, should the IP address for a host change, the domain name can still resolve transparently for users or applications. DNS name resolution is a critical Internet service; many network services require a functional name service for correct operation.
Domain names are separated by dots with the topmost element on the right. Each element
may be up to 63 characters long; the entire name may be at most 255 characters long. Letters,
numbers or dashes may be used in an element.
Domain Name
Each node in the tree has a domain name. A full domain name is a sequence of labels
separated by dots (.). The domain names are always read from the node up to the root.
The last label is the label of the root (null). This means that a full domain name always ends
in a null label, which means the last character is a dot because the null string is
nothing. Figure shows some domain names.
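The label rules above can be checked mechanically. This sketch validates the length limits and the letters/digits/dashes rule from the text; real resolvers apply some further restrictions not modeled here:

```python
import re

# A label may use letters, numbers or dashes, and be 1-63 characters.
LABEL = re.compile(r"[A-Za-z0-9-]{1,63}")

def valid_domain(name: str) -> bool:
    """Check the length and character rules for a domain name."""
    if len(name) > 255:  # the entire name may be at most 255 characters
        return False
    if name.endswith("."):  # a full domain name ends in the null root label
        name = name[:-1]
    labels = name.split(".")
    return all(LABEL.fullmatch(label) for label in labels)

print(valid_domain("www.example.com."))  # True: three labels plus the root
print(valid_domain("a" * 64 + ".com"))   # False: label longer than 63
```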
SMTP supports sending of email only; it cannot pull messages from a remote server on demand. Other protocols, such as the Post Office Protocol (POP) and the Internet Message Access Protocol (IMAP), are specifically designed for retrieving messages and managing mailboxes. However, SMTP has a feature to initiate mail queue processing on a remote server so that the requesting system may receive any messages destined for it (cf. Remote Message Queue Starting). POP and IMAP are the preferred protocols when a user's personal computer is only intermittently powered up, or Internet connectivity is only transient and hosts cannot receive messages during off-line periods.
The overall flow for message creation, mail transport, and delivery may be illustrated as
shown.
Email is submitted by a mail client (MUA, mail user agent) to a mail server (MSA, mail
submission agent) using SMTP on TCP port 587. Most mailbox providers still allow
submission on traditional port 25. From there, the MSA delivers the mail to its mail transfer
agent (MTA, mail transfer agent). Often, these two agents are just different instances of the
same software launched with different options on the same machine. Local processing can be
done either on a single machine, or split among various appliances; in the former case,
involved processes can share files; in the latter case, SMTP is used to transfer the message
internally, with each host configured to use the next appliance as a smart host. Each process is
an MTA in its own right; that is, an SMTP server.
The boundary MTA has to locate the target host. It uses the Domain name system (DNS) to
look up the mail exchanger record (MX record) for the recipient's domain (the part of the
address on the right of @). The returned MX record contains the name of the target host. The
MTA next looks up the A record for that name in order to get the IP address and connect to
such host as an SMTP client.
Once the MX target accepts the incoming message, it hands it to a mail delivery agent (MDA) for local mail delivery. An MDA is able to save messages in the relevant mailbox format. Once delivered to the local mail server, the mail is stored for batch retrieval by authenticated mail clients (MUAs). Mail is retrieved by end-user applications, called email clients, using the Internet Message Access Protocol (IMAP), a protocol that both facilitates access to mail and manages stored mail, or the Post Office Protocol (POP), which typically uses the traditional mbox mail file format or a proprietary system such as Microsoft Exchange/Outlook or Lotus Notes/Domino. Webmail clients may use either method, but the retrieval protocol is often not a formal standard.
How It Works?
SMTP provides the underlying message transport mechanism for sending e-mail over the
Internet, but it does not provide any facility for storing and retrieving messages. SMTP hosts
must be continuously connected to one another, but most users do not have a dedicated
connection to the Internet.
IMAP4 provides mechanisms for storing messages received by SMTP in a receptacle called a
mailbox. An IMAP4 server stores messages received by each user until the user connects to
download and read them using an IMAP4 client such as Evolution or Microsoft Outlook
Express.
IMAP4 includes a number of features that are not supported by POP3. Specifically, IMAP4
allows users to
Access multiple folders, including public folders
Create hierarchies of folders for storing messages
Leave messages on the server after reading them so that they can access the messages
again from another location
Search a mailbox for a specific message to download
Flag messages as read
Selectively download portions of messages or attachments only
Review the headers of messages before downloading them
An Internet standard protocol for storing and retrieving messages from Simple Mail Transfer
Protocol (SMTP) hosts.
How It Works?
SMTP provides the underlying transport mechanism for sending e-mail messages over the
Internet, but it does not provide any facility for storing messages and retrieving them. SMTP
hosts must be continuously connected to one another, but most users do not have a dedicated
connection to the Internet.
Post Office Protocol version 3 (POP3) provides mechanisms for storing messages sent to
each user and received by SMTP in a receptacle called a mailbox. A POP3 server stores
messages for each user until the user connects to download and read them using a POP3 client
such as Microsoft Outlook 98, Microsoft Outlook Express, or Microsoft Mail and News.
To retrieve a message from a POP3 server, a POP3 client establishes a Transmission Control
Protocol (TCP) session using TCP port 110, identifies itself to the server, and then issues a
series of POP3 commands:
stat:
Asks the server for the number of messages waiting to be retrieved
list:
Determines the size of each message to be retrieved
retr:
Retrieves individual messages
quit:
Ends the POP3 session
After a POP3 client reads a message in its mailbox on a POP3 server, the message is deleted.
Primarily because of this, POP3 is being supplanted by Internet Message Access Protocol
version 4 (IMAP4), which offers better support for mobile users. POP3 is supported by Microsoft
Exchange Server.
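The command sequence above can be driven from Python's standard poplib module. This is a sketch with placeholder host and credentials; parse_stat is a hypothetical helper added here to show the wire format of the stat reply:

```python
import poplib

def parse_stat(reply: bytes):
    """A raw stat reply looks like b'+OK 3 4120':
    status, message count, then total size in octets."""
    _, count, size = reply.split()[:3]
    return int(count), int(size)

def fetch_all(host, user, password):
    """Connect on TCP port 110, identify to the server, then stat/list/retr/quit."""
    conn = poplib.POP3(host, poplib.POP3_PORT)  # control session on port 110
    conn.user(user)                             # identify to the server
    conn.pass_(password)
    count, _ = conn.stat()                      # number of messages waiting
    for i in range(1, count + 1):
        conn.list(i)                            # size of message i
        conn.retr(i)                            # retrieve message i
        conn.dele(i)                            # typical POP clients then delete it
    conn.quit()                                 # end the POP3 session
```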
IMAP VS POP:
When you use IMAP you are accessing your inbox on the mail server. IMAP does not
actually move messages onto your computer. You can think of an e-mail program using
IMAP as a window to your messages on the server. Although the messages appear on your
computer while you work with them, they remain on the central mail server.
POP does the opposite. Instead of just showing you what is in your inbox on the U's mail
server, it checks the server for new messages, downloads all the new messages in your inbox
onto your computer, and then deletes them from the server. This means that every time you
check your mail with POP, the new messages are moved off the server and onto your computer.
IMAP makes it easier to view mail from home, work, and other locations
Because IMAP leaves all of your messages on the central mail server, you can view these
messages from any location with Internet access. This means the U of M e-mail inbox you
view from home will be the same one you see at work.
Since POP downloads new messages to your computer and removes them from the server,
you will not be able to see those new messages on another computer when you check your
inbox. Those messages exist only on the computer that downloaded them using POP.
However, if you use IMAP and create e-mail folders on the server, these folders are
accessible from anywhere you read your e-mail using IMAP. If you use POP and create e-
mail folders, they are stored locally, and you cannot access these folders from anywhere
except the computer on which you created them.
POP can create problems if you alternate between it and IMAP. There is an option in many
POP e-mail programs to leave copies of the messages on the server, but this option has
complications. When you leave copies of the messages on the server, then access your e-mail
using Webmail or another IMAP e-mail client, the POP client
may create duplicate messages next time it accesses the inbox; you will see each of the
messages more than once, and you will have to clean out (delete) the unwanted ones.
The Internet is a worldwide, publicly accessible IP network. Due to its vast global
proliferation, it has become a viable method of interconnecting remote sites. However, the
fact that it is a public infrastructure has deterred most enterprises from adopting it as a viable
remote access method for branch and SOHO sites.
A virtual private network (VPN) is a concept that describes how to create a private network
over a public network infrastructure while maintaining confidentiality and security.
VPNs use cryptographic tunneling protocols to provide sender authentication, message
integrity, and confidentiality by protecting against packet sniffing. VPNs can be implemented
at Layers 2, 3, and 4 of the Open Systems Interconnection (OSI) model. The figure illustrates
a typical VPN topology.
Basically, a VPN is a private network that uses a public network (usually the Internet) to
connect remote sites or users together. Instead of using a dedicated, real-world connection
such as leased line, a VPN uses "virtual" connections routed through the Internet from the
company's private network to the remote site or employee.
A well-designed VPN can greatly benefit a company. For example, it can:
Extend geographic connectivity
improve security
Reduce operational costs versus traditional WAN
Reduce transit time and transportation costs for remote users
Improve productivity
Simplify network topology
Provide global networking opportunities
Provide telecommuter support
Provide broadband networking compatibility
IPSEC
IPsec provides a mechanism for secure data transmission over IP networks, ensuring
confidentiality, integrity, and authenticity of data communications over unprotected networks
such as the Internet. IPsec encompasses a suite of protocols and is not bound to any
specific encryption or authentication algorithms, key generation technique, or security
association (SA). IPsec provides the rules while existing algorithms provide the
encryption, authentication, key management, and so on. IPsec acts at the
network layer, protecting and authenticating IP packets between IPsec devices
(peers), such as Cisco PIX Firewalls, Adaptive Security Appliances (ASA), Cisco
routers, the Cisco Secure VPN Client, and other IPsec-compliant products.
Proxy server
A computer that can act on the behalf of other computers to request content from the Internet
or an intranet. Proxy Server is placed between a user's machine and the Internet. It can act as
a firewall to provide protection and as a cache area to speed up Web page display. A firewall
mechanism that replaces the IP address of a host on the internal (protected) network with its
own IP address for all traffic passing through it. A software agent that acts on behalf of a user,
typical proxies accept a connection from a user, make a decision as to whether or not the user
or client IP address is permitted to use the proxy, perhaps does additional authentication, and
then completes a connection on behalf of the user to a remote destination.
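On the client side, this is what pointing an HTTP library at a proxy looks like. A sketch using Python's standard urllib; the proxy address is a hypothetical placeholder, not from the notes:

```python
import urllib.request

# Hypothetical forward proxy; all HTTP requests will be sent to it, and it
# will contact the destination server on the client's behalf.
proxy = urllib.request.ProxyHandler({"http": "http://proxy.example.com:8080"})
opener = urllib.request.build_opener(proxy)

# opener.open("http://example.com/") would now go via the proxy, which can
# authenticate the client, cache the response, and hide the client's IP
# address from the destination.
```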
Types of Proxy:
1. Forward Proxy:
Forward proxies are proxies where the client names the target server to connect to.
Forward proxies are able to retrieve from a wide range of sources (in most cases
anywhere on the Internet). The terms "forward proxy" and "forwarding proxy" are a
general description of behavior (forwarding traffic) and thus ambiguous; the one
exception is the reverse proxy, described below.
Fig: A forward proxy taking requests from an internal network and forwarding
them to the Internet
2. Open Proxy:
An open proxy is a forward proxy server that is accessible by any Internet user.
Gordon Lyon estimates there are "hundreds of thousands" of open proxies on the
Internet.[4] An anonymous open proxy allows users to conceal their IP address while
browsing the Web or using other Internet services.
3. Reverse Proxy:
A reverse proxy is a proxy server that appears to clients to be an ordinary server.
Requests are forwarded to one or more origin servers which handle the request; the
response is returned to the client as if it came directly from the proxy server itself.
Fig: A reverse proxy taking requests from the Internet and forwarding them to
servers in an internal network. Those making requests connect to the proxy
and may not be aware of the internal network.
How It Works?
File Transfer Protocol (FTP) is one of the earliest Internet protocols, and is
still used for uploading and downloading files between clients and servers. An FTP
client is an application that can issue FTP commands to an FTP server, while an FTP server is
a service or daemon running on a server that responds to FTP commands from a client. FTP
commands can be used to change directories, change transfer modes between binary and
ASCII, upload files, and download files. FTP uses Transmission Control Protocol (TCP) for
reliable network communication by establishing a session before initiating data transfer. TCP
port number 21 on the FTP server listens for connection attempts from an FTP client and is
used as a control port for establishing a connection between the client and server, for
allowing the client to send an FTP command to the server, and for returning the server’s
response to the command. Once a control connection has been established, the server opens
port number 20 to form a new connection with the client for transferring the
actual data during uploads and downloads.
While transferring Data over the network, two modes can be used:
1. Ascii Mode
2. Binary Mode
The two types differ from the way they send the data. When a file is sent
using an ASCII-type transfer, the individual letters, numbers and characters are
sent. The receiving machine saves these in a text file in the appropriate format
(for example, a Unix machine saves it in a Unix format, a Macintosh saves it in a Mac
format). Hence if an ASCII transfer is used it can be assumed plain text is sent, which is
stored by the receiving computer in its own format.
Sending a file in binary mode is different. The sending machine sends each file bit
for bit and as such the recipient stores the bit-stream as it receives it.
By default, most FTP clients use ASCII mode. Some clients, nevertheless, are more clever
and try to determine the required transfer mode by inspecting the file's contents.
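A sketch of such a client using Python's standard ftplib. The extension set and the pick_mode helper are assumptions for illustration (a crude stand-in for content inspection), not part of the notes:

```python
import os
import ftplib

TEXT_EXTENSIONS = {".txt", ".html", ".csv"}   # assumed sample of text file types

def pick_mode(filename):
    """Guess ASCII vs binary transfer mode (here: crudely, by extension)."""
    ext = os.path.splitext(filename)[1].lower()
    return "ascii" if ext in TEXT_EXTENSIONS else "binary"

def download(host, user, password, filename):
    ftp = ftplib.FTP(host)                    # control connection on TCP port 21
    ftp.login(user, password)
    if pick_mode(filename) == "ascii":
        # ASCII mode: text is sent line by line and stored in the local format
        ftp.retrlines("RETR " + filename, print)
    else:
        # binary mode: the bit-stream is stored exactly as received
        with open(filename, "wb") as f:
            ftp.retrbinary("RETR " + filename, f.write)
    ftp.quit()
```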
Configuration Management
A large network is usually made up of hundreds of entities that are physically or logically
connected to one another. These entities have an initial configuration when the network is set
up, but can change with time. Desktop computers may be replaced by others; application
software may be updated to a newer version; and users may move from one group to another.
The configuration management system must know, at any time, the status of each entity and
its relation to other entities. Configuration management can be subdivided into two parts:
reconfiguration and documentation.
Fault Management:
It falls into two categories.
Reactive Fault Management
A reactive fault management system is responsible for detecting, isolating, correcting,
and recording faults. It handles short-term solutions to faults.
Proactive Fault Management
Proactive fault management tries to prevent faults from occurring. Although this is not
always possible, some types of failures can be predicted and prevented.
Performance management:
It is closely related to fault management and tries to monitor and control the network to
ensure that it is running as efficiently as possible.
Security Management
Security management is responsible for controlling access to the network based on the
predefined policy.
Accounting Management
Accounting management is the control of users' access to network resources through charges.
Charging does not necessarily mean cash transfer; it may mean debiting the departments or
divisions for budgeting purposes. Today, organizations use an accounting management
system for reasons such as preventing users from monopolizing limited network resources
and planning for network growth.
The Simple Network Management Protocol (SNMP) is a framework for managing devices in
an internet using the TCP/IP protocol suite. It provides a set of fundamental operations
for monitoring and maintaining an internet.
Concept
SNMP uses the concept of manager and agent. That is, a manager, usually a host, controls
and monitors a set of agents, usually routers. SNMP is an application-level protocol in which
a few manager stations control a set of agents. The protocol is designed at the
application level so that it can monitor devices made by different manufacturers
and installed on different physical networks.
A managed device is a network node that implements an SNMP interface that allows
unidirectional (read-only) or bidirectional access to node-specific information. Managed
devices exchange node-specific information with the NMSs. Sometimes called network
elements, the managed devices can be any type of device, including, but not limited to,
routers, access servers, switches, bridges, hubs, IP telephones, IP video cameras, computer
hosts, and printers.
A network management system (NMS) executes applications that monitor and control
managed devices. NMSs provide the bulk of the processing and memory resources required
for network management.
A manager checks an agent by requesting information that reflects the behavior of the
agent.
A manager forces an agent to perform a task by resetting values in the agent database.
An agent contributes to the management process by warning the manager of an
unusual situation.
SNMP operates in the Application Layer of the Internet Protocol Suite (Layer 7 of the OSI
model). The SNMP agent receives requests on UDP port 161. The manager may send
requests from any available source port to port 161 in the agent. The agent response will be
sent back to the source port on the manager. The manager receives notifications (Traps and
InformRequests) on port 162. The agent may generate notifications from any available port.
Role of SNMP
SNMP has some very specific roles in network management. It defines the format of the
packet to be sent from a manager to an agent and vice versa. It also interprets the result and
creates statistics (often with the help of other management software). The packets exchanged
contain the object (variable) names and their status (values). SNMP is responsible for reading
and changing these values.
Roles of SMI
SMI defines the general rules for naming objects, defining object types
(including range and length), and showing how to encode objects and values. SMI does
not define the number of objects an entity should manage or name the objects to be managed
or define the association between the objects and their values.
The Structure of Management Information, version 2 (SMIv2) is a component for network
management. Its functions are
To name objects
To define the type of data that can be stored in an object
To show how to encode data for transmission over the network
SMI is a guideline for SNMP. It emphasizes three attributes to handle an object: name, data
type, and encoding method.
Roles of MIB
For each entity to be managed, this protocol must define the number of objects, name them
according to the rules defined by SMI, and associate a type to each named object. MIB
creates a collection of named objects, their types, and their relationships to each other in an
entity to be managed. Each agent has its own MIB2, which is a collection of all the objects
that the manager can manage. The objects in MIB2 are categorized under 10 different groups:
system, interface, address translation, ip, icmp, tcp, udp, egp, transmission, and snmp.
Cipher
We refer to encryption and decryption algorithms as ciphers. The term cipher is also used to
refer to different categories of algorithms in cryptography. This is not to say that every
sender-receiver pair needs their very own unique cipher for a secure communication. On the
contrary, one cipher can serve millions of communicating pairs.
Key
A key is a number (or a set of numbers) that the cipher, as an algorithm, operates on. To
encrypt a message, we need an encryption algorithm, an encryption key, and the plain-text.
These create the cipher-text. To decrypt a message, we need a decryption algorithm, a
decryption key, and the cipher-text. These reveal the original plain-text.
Conventional encryption has benefits. It is very fast. It is especially useful for encrypting data
that is not going anywhere. However, conventional encryption alone as a means for
transmitting secure data can be quite expensive simply due to the difficulty of secure
key distribution.
For a sender and recipient to communicate securely using conventional encryption, they must
agree upon a key and keep it secret between themselves. If they are in different physical
locations, they must trust a courier, the Bat Phone, or some other secure communication
medium to prevent the disclosure of the secret key during transmission. Anyone who
overhears or intercepts the key in transit can later read, modify, and forge all
information encrypted or authenticated with that key.
Asymmetric-Key Cryptography
Public key cryptography is an asymmetric scheme that uses a pair of keys for encryption: a
public key, which encrypts data, and a corresponding private, or secret key for decryption.
You publish your public key to the world while keeping your private key secret. Anyone with
a copy of your public key can then encrypt information that only you can read. It is
computationally infeasible to deduce the private key from the public key. Anyone who has a
public key can encrypt information but cannot decrypt it. Only the person who has the
corresponding private key can decrypt the information.
With this approach, all the participants have access to public keys, and private keys are
generated locally by each participant and therefore need never be distributed. As long as a
user protects his or her private key, incoming communication is secure. At any time, a user
can change the private key and publish the companion public key to replace the old one.
Comparison
Let us compare symmetric-key and asymmetric-key cryptography. Encryption can be thought
of as electronic locking; decryption as electronic unlocking. The sender puts the message in a
box and locks the box by using a key; the receiver unlocks the box with a key and takes out
the message. The difference lies in the mechanism of the locking and unlocking and the type
of keys used. In symmetric-key cryptography, the same key locks and unlocks the box. In
asymmetric-key cryptography, one key locks the box, but another key is needed to unlock it.
Substitution cipher:
A substitution cipher substitutes one symbol with another. If the symbols in the
plain- text are alphabetic characters, we replace one character with another. For example,
we can replace character A with D, and character T with Z. If the symbols are digits (0 to 9),
we can replace 3 with 7, and 2 with 6. The shift cipher is also known as Caesar's cipher,
after Julius Caesar, who used it.
For example, if we encode the word “SECRET” using Caesar’s key value of 3, we offset the
alphabet so that the 3rd letter down (D) begins the alphabet. So starting with
ABCDEFGHIJKLMNOPQRSTUVWXYZ and sliding everything up by 3, you get
DEFGHIJKLMNOPQRSTUVWXYZABC where D=A, E=B, F=C, and so on. Using this
scheme, the plaintext, “SECRET” encrypts as “VHFUHW.” To allow someone
else to read the cipher text, you tell them that the key is 3.
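The shift described above is easy to express in code. A small sketch of Caesar's cipher; decryption is the same function with the key negated:

```python
def caesar(text, key=3):
    """Shift each letter forward by `key` positions, wrapping past Z."""
    shifted = []
    for ch in text.upper():
        if ch.isalpha():
            shifted.append(chr((ord(ch) - ord("A") + key) % 26 + ord("A")))
        else:
            shifted.append(ch)      # leave spaces and digits untouched
    return "".join(shifted)

print(caesar("SECRET", 3))     # VHFUHW
print(caesar("VHFUHW", -3))    # SECRET
```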
Transposition Ciphers
In a transposition cipher, there is no substitution of characters; instead, their locations change.
A character in the first position of the plaintext may appear in the tenth position of the cipher
text. A character in the eighth position may appear in the first position. In other words, a
transposition cipher reorders the symbols in a block of symbols.
Key: In a transposition cipher, the key is a mapping between the positions of the symbols in
the plaintext and their positions in the ciphertext, defined over a block of a fixed size (for
example, a block of four characters).
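As a concrete sketch, assume a hypothetical key [3, 1, 4, 2] for a block of four characters, meaning the symbol in plaintext position 1 moves to ciphertext position 3, and so on. The decryption key is simply the inverse mapping:

```python
KEY = [3, 1, 4, 2]   # assumed example key for a block of four characters

def transpose(block, key):
    """Reorder the symbols of one block according to the key."""
    out = [""] * len(key)
    for i, ch in enumerate(block):
        out[key[i] - 1] = ch          # plaintext position i+1 -> position key[i]
    return "".join(out)

def invert(key):
    """Build the inverse position mapping, used for decryption."""
    inv = [0] * len(key)
    for i, pos in enumerate(key):
        inv[pos - 1] = i + 1
    return inv

cipher = transpose("CODE", KEY)          # "OECD"
plain = transpose(cipher, invert(KEY))   # "CODE"
```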
Encryption algorithm:
The most commonly used symmetric encryption algorithms are block ciphers. A block cipher processes
the plain text input in fixed size blocks and produces a block of cipher text of equal size for
each plain text block.
The two most important symmetric algorithms, both of which are block ciphers, are
Data Encryption Standard (DES)
Advanced Encryption Standard (AES)
DES has two transposition blocks (P-boxes) and 16 complex round ciphers (they are
repeated). Although the 16 iteration round ciphers are conceptually the same, each uses a
different key derived from the original key.
RSA Algorithm
This cryptosystem was one of the first public-key systems, and it remains the most widely
employed cryptosystem even today. The system was invented by three scholars, Ron Rivest,
Adi Shamir, and Len Adleman, and hence it is termed the RSA cryptosystem.
We will see two aspects of the RSA cryptosystem, firstly generation of key pair and
secondly encryption-decryption algorithms.
o The pair of numbers (n, e) form the RSA public key and is made public.
o Private Key d is calculated from p, q, and e. For given n and e, there is a unique
number d.
o Number d is the inverse of e modulo (p - 1)(q – 1). This means that d is the
number less than (p - 1)(q - 1) such that when multiplied by e, it is equal to 1
modulo (p - 1)(q - 1).
ed = 1 mod (p − 1)(q − 1)
The Extended Euclidean Algorithm takes p, q, and e as input and gives d as output.
Example
An example of generating RSA Key pair is given below. (For ease of understanding, the
primes p & q taken here are small values. Practically, these values are very high).
Let the two primes be p = 7 and q = 13. Then the modulus n = p × q = 91.
Select e = 5, which is a valid choice since there is no number that is a common factor
of 5 and (p − 1)(q − 1) = 6 × 12 = 72, except for 1.
The pair of numbers (n, e) = (91, 5) forms the public key and can be made available
to anyone whom we wish to be able to send us encrypted messages.
Input p = 7, q = 13, and e = 5 to the Extended Euclidean Algorithm. The output will
be d = 29.
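The same key pair can be checked in a few lines of Python; the built-in pow(e, -1, phi) (Python 3.8+) computes the modular inverse, standing in for the Extended Euclidean Algorithm:

```python
p, q, e = 7, 13, 5
n = p * q                   # modulus: 7 * 13 = 91
phi = (p - 1) * (q - 1)     # 6 * 12 = 72
d = pow(e, -1, phi)         # modular inverse of e mod 72 -> 29
assert (e * d) % phi == 1   # 5 * 29 = 145 = 2 * 72 + 1
print(n, e, d)              # 91 5 29
```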
Interestingly, RSA does not directly operate on strings of bits as in the case of symmetric-key
encryption. It operates on numbers modulo n. Hence, it is necessary to represent the
plaintext as a series of numbers less than n.
RSA Encryption
Suppose the sender wishes to send a text message to someone whose public key is
(n, e).
The sender then represents the plaintext as a series of numbers less than n.
To encrypt the first plaintext P, which is a number modulo n, the encryption process
is a simple mathematical step:
C = P^e mod n
In other words, the ciphertext C is equal to the plaintext P multiplied by itself e times
and then reduced modulo n. This means that C is also a number less than n.
Returning to our Key Generation example with plaintext P = 10, we get ciphertext C:
C = 10^5 mod 91 = 82
RSA Decryption
The decryption process for RSA is also very straightforward. Suppose that the
receiver of public-key pair (n, e) has received a ciphertext C.
Receiver raises C to the power of his private key d. The result modulo n will be the
plaintext P.
Plaintext = C^d mod n
Returning again to our numerical example, the ciphertext C = 82 would get decrypted back
to the number 10 using private key d = 29:
Plaintext = 82^29 mod 91 = 10
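Both halves of the numerical example can be verified with Python's built-in three-argument pow, which performs modular exponentiation directly:

```python
n, e, d = 91, 5, 29        # public key (n, e), private key d
P = 10                     # plaintext, a number less than n
C = pow(P, e, n)           # encryption: 10**5 mod 91 = 82
assert C == 82
assert pow(C, d, n) == P   # decryption: 82**29 mod 91 = 10
```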
Key Generation − The difficulty of determining a private key from an RSA public
key is equivalent to factoring the modulus n. An attacker thus cannot use knowledge
of an RSA public key to determine an RSA private key unless he can factor n. It is
also a one-way function: going from the p & q values to the modulus n is easy, but the
reverse is not feasible.
If either of these two functions is proved to be non-one-way, then RSA will be broken. In
fact, if a technique for factoring efficiently is developed, then RSA will no longer be safe.
The strength of RSA encryption drastically goes down against attacks if the numbers p and q
are not large primes and/or the chosen public key e is a small number.