
INTRODUCTION TO COMPUTER NETWORKS

COMPUTER NETWORK
A Computer network consists of two or more autonomous computers that are linked
(connected) together in order to:
 Share resources (files, printers, modems, fax machines).
 Share Application software like MS Office.
 Allow Electronic communication.
 Increase productivity (makes it easier to share data amongst users)
The computers on a network may be linked through cables, telephone lines, radio waves, satellites, etc. A computer network includes the network operating system in the client and server machines, the cables that connect different computers and all supporting hardware in between, such as bridges, routers and switches. In wireless systems, antennas and towers are also part of the network.

NETWORK GOALS AND MOTIVATIONS


Before designing a computer network we should see that the designed network fulfils the
basic goals. We have seen that a computer network should satisfy a broad range of purposes
and should meet various requirements. One of the main goals of a computer network is to
enable its users to share resources, to provide low-cost facilities and easy addition of new processing services. The computer network thus creates a global environment for its users and computers. Some of the basic goals that a computer network should satisfy are:
 Cost reduction by sharing hardware and software resources.
 Provide high reliability by having multiple sources of supply.
 Provide an efficient means of transport for large volumes of data among various
locations (High throughput).
 Provide inter-process communication among users and processors.
 Reduction in delays during data transport.
 Increase productivity by making it easier to share data amongst users.
 Repairs, upgrades, expansions, and changes to the network should be performed with
minimal impact on the majority of network users.
 Standards and protocols should be supported to allow many types of equipment from
different vendors to share the network (interoperability).
 Provide centralised/distributed management and allocation of network resources like
host processors, transmission facilities etc.

CLASSIFICATION OF COMPUTER NETWORKS


Computer networks are generally classified depending on one's perspective; we can classify networks in different ways, such as:
 Based on the transmission media used: wired (UTP, coaxial cables, fiber-optic cables) and
wireless networks.
 Based on transmission technology, i.e., whether the network contains switching elements
or not: point-to-point (circuit-switched or packet-switched) and broadcast networks
(packet radio network, satellite network and LANs).
 Based on the geographical area of coverage (size): LAN, WAN and MAN
 Based on management method (architecture) put in place in the network: Peer-to-peer
and Client/Server
 Based on topology (arrangement) employed in the network: Bus, Star, Ring etc.
 Based on the set of rules enforced on the network for communication between network
entities (protocol): Ethernet protocol, token-ring protocol, FDDI
NETWORK ARCHITECTURE
Two common architectures in computer networks are Client/Server and Peer-to-Peer.
Client/Server Architecture - Client/Server Architecture is one in which the client (personal
computer or workstation) is the requesting machine and the server is the supplying machine,
both of which are connected via a local area network (LAN) or wide area network (WAN).
Since the early 1990s, client/server has been the buzzword for building applications on LANs in contrast to centralised minis and mainframes with dedicated terminals. A client/server network is also called a centralised or server-based network. Figure 7 shows the arrangement of
computers in the client/server environment.

Figure 7: Client/Server Architecture

The client contains the user interface and may perform some or all of the application
processing. Servers can be high-speed microcomputers, minicomputers or even mainframes.
A database server maintains the databases and processes requests from the client to extract
data from or update the database. An application server provides additional business
processing for the clients. The term client/server is sometimes used in contrast to a peer-to-peer network, in which any client can also act as a server. In that case, client/server entails having a dedicated server. However, client/server architecture means more than dedicated
servers. Simply downloading files from or sharing programs and databases on a server is not
true client/server either. True client/server implies that the application was originally
designed to run on a network and that the network infrastructure provides the same quality of
service as traditional mini and mainframe information systems.
The network operating system (NOS) together with the database management system
(DBMS) and transaction monitor (TP monitor) are responsible for integrity and security of
these types of networks. Some of these products have gone through many client/server
versions by now and have finally reached industrial strength.
Non-client/server - In non-client/server architecture, the server is nothing more than a
remote disk drive. The user's machine does all the processing. If many users routinely perform lengthy searches, this can bog down the network, because each client has to pass the entire database over the net. At 1,000 bytes per record, a 10,000-record database requires 10 MB of data to be transmitted.
Two-tier client/server - Two-tier client/server is really the foundation of client/server. The
database processing is done in the server. An SQL request is generated in the client and
transmitted to the server. The DBMS searches locally and returns only the matching records.
If 50 records met the criteria, only 50 KB would be transmitted. This reduces traffic on the LAN.
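As a quick check of the figures quoted above, the following short Python sketch (the record size and counts are taken directly from the example in the text) compares the volume of data crossing the LAN in the non-client/server case with the two-tier case.

```python
# Rough traffic comparison for the example in the text:
# a 10,000-record database with 1,000 bytes per record.
RECORD_SIZE = 1_000          # bytes per record
TOTAL_RECORDS = 10_000       # records in the database
MATCHING_RECORDS = 50        # records that satisfy the SQL query

# Non-client/server: the whole database crosses the network
# so that the client can search it locally.
file_server_bytes = RECORD_SIZE * TOTAL_RECORDS

# Two-tier client/server: the DBMS searches on the server and
# returns only the matching records.
two_tier_bytes = RECORD_SIZE * MATCHING_RECORDS

print(f"Non-client/server transfer: {file_server_bytes / 1_000_000:.1f} MB")
print(f"Two-tier transfer:          {two_tier_bytes / 1_000:.0f} KB")
# Non-client/server transfer: 10.0 MB
# Two-tier transfer:          50 KB
```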
Three-tier client/server - Many applications lend themselves to centralised processing. If they contain proprietary algorithms, security is improved. Upgrading is also simpler.
Sometimes, programs are just too demanding to be placed into every client PC. In three-tier
client/server, application processing is performed in one or more servers.

Peer-to-Peer Architecture - A type of network in which each workstation has equal
capabilities and responsibilities is called peer-to-peer network. Figure 8 shows the
arrangement of computers in a peer-to-peer environment. Here each workstation acts as both
a client and a server. There is no central repository for information and there is no central
server to maintain. Data and resources are distributed throughout the network, and each user
is responsible for sharing data and resources connected to their system. This differs from
client/server architectures, in which some computers are dedicated to serving the others. Peer-
to-peer networks are generally simpler and less expensive, but they usually do not offer the
same performance under heavy loads. A peer-to-peer network is also known as a Distributed
network.

Figure 8: Peer-to-peer Architecture

NETWORK DESIGN AND IMPLEMENTATION


Computer network designs and implementations are mostly carried out on the basis of the
geographical area that the network covers, the topology used, the transmission media used
and the computing model employed. Based on the geographical area covered the network
design and implementations may be LAN, MAN, WAN.
Local Area Network (LAN) – A computer network designed for an organization or business outfit. In terms of size, it is a small network covering a short distance, typically implemented
in a room, a floor or a building. Typical characteristics of a LAN include:
 Limited by number of computers and distance covered
 Usually one kind of technology is used throughout the network
 Put up to service a department within an organization
Examples of a LAN include the network inside a café, the network inside the Senate building, the network inside your home, etc.
Metropolitan Area Network (MAN) - A Metropolitan Area Network is a computer network designed for a town or city. In terms of geographic area, MANs are larger than local-area networks (LANs), but smaller than wide-area networks (WANs). MANs are usually characterised by very high-speed connections using fiber-optic cable or other digital media.
The Typical Characteristics of a MAN are:
 Confined to a larger area than a LAN and can range from 10 km to a few hundred km in
length.
 Slower than a LAN but faster than a WAN.
 Operates at a speed of 1.5 to 150 Mbps.
 Expensive equipment.
 Moderate error rates.
Wide Area Network (WAN) - Wide Area Network is a computer network that spans a
relatively large geographical area. Typically, a WAN consists of two or more local-area
networks (LANs). They can connect networks across cities, states or even countries.

Computers connected to a wide-area network are often connected through public networks,
such as the telephone system. They can also be connected through leased lines or satellites.
The Typical characteristics of a WAN are:
 A WAN can range from 100 km to 1000 km, and the speed between cities can vary
from 1.5 Mbps to 2.4 Gbps.
 WAN supports large number of computers and multiple host machines.
 Various segments of network are interconnected using sophisticated support devices
like routers and gateways.
 Usually the speed is much slower than LAN speed.
 Higher error rates than a LAN or MAN.

NETWORK STANDARDS AND PROTOCOLS


Networking Model
Most networks today are organised as a series of stacked layers, each layer built on top of the one below it. This is done in order to divide the workload and to simplify systems design. The architecture is considered scalable if it is able to accommodate a number
of layers in either large or small scales. For example, a computer that runs an Internet
application may require all of the layers that were defined for the architectural model.
Similarly, a computer that acts as a router may not need all these layers. Systems design is further simplified because, with a layered architecture, the designer need only be concerned with the layer in question and not worry about the architecture in a macro sense. The depth and
functionality of this stack differs from network to network. However, regardless of the
differences among all networks, the purpose of each layer is to provide certain services (job
responsibilities) to the layer above it, shielding the upper layers from the intricate details of
how the services offered are implemented.

Figure 9: Layered network architecture

Every computer in a network possesses within it a generic stack. A logical communication may exist between any two computers through the layers of the same “level”. Layer-n on one computer may converse with layer-n on another computer. There are rules and conventions used in the communication at any given layer, which are known collectively as the layer-n protocol for the nth layer.
Data are not directly transferred from layer-n on one computer to layer-n on another
computer. Rather, each layer passes data and control information to the layer directly below
until the lowest layer is reached. Below layer-1 (the bottom layer), is the physical medium
(the hardware) through which the actual transaction takes place. In Figure 9, logical
communication is shown by a broken-line arrow and physical communication by a solid-line
arrow. Between every pair of adjacent layers is an interface. The interface is a specification
that determines how the data should be passed between the layers. It defines what primitive
operations and services the lower layer should offer to the upper layer. One of the most
important considerations when designing a network is to design clean-cut interfaces between
the layers. To create such an interface between the layers would require each layer to perform
a specific collection of well understood functions. A clean-cut interface makes it easier to
replace the implementation of one layer with another implementation, because all that is required of the new implementation is that it offers exactly the same set of services to its neighbouring layer above as the old implementation did.
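The idea of a clean-cut interface can be illustrated with a small, hypothetical sketch (the class and method names below are invented for illustration and are not part of any standard): two different implementations of a lower layer can be swapped freely as long as they offer exactly the same service to the layer above.

```python
# Minimal sketch of a layered stack with a clean interface between layers.
# Each layer only knows the send() service of the layer directly below it;
# swapping one implementation of a layer for another does not disturb the rest.

class PhysicalCopper:
    """One implementation of the lowest layer: a pretend copper medium."""
    def send(self, bits: bytes) -> None:
        print(f"copper wire   <- {bits!r}")

class PhysicalFibre:
    """A drop-in replacement offering exactly the same service."""
    def send(self, bits: bytes) -> None:
        print(f"optical fibre <- {bits!r}")

class DataLink:
    """Layer 2: uses whatever layer 1 it is given, via the same interface."""
    def __init__(self, lower):
        self.lower = lower
    def send(self, payload: bytes) -> None:
        self.lower.send(b"FRAME|" + payload)

# The data link layer is unaffected by which physical layer sits below it.
DataLink(PhysicalCopper()).send(b"hello")
DataLink(PhysicalFibre()).send(b"hello")
```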

OSI Reference Model - The Open System Interconnection (OSI) model is a set of protocols
that attempt to define and standardise the data communications process; we can say that it is a
concept that describes how data communications should take place. The OSI model was set
by the International Standards Organisation (ISO) in 1984, and it is now considered the
primary architectural model for inter-computer communications. The Open Systems
Interconnection (OSI) reference model describes how information from a software
application in one computer moves through a network medium to a software application in
another computer. The OSI reference model is a conceptual model composed of seven layers
as shown in Figure 10, each specifying particular network functions, and into these layers are
fitted the protocol standards developed by the ISO and other standards bodies.
The OSI model divides the tasks involved with moving information between networked
computers into seven smaller, more manageable task groups. A task or group of tasks is then
assigned to each of the seven OSI layers. Each layer is reasonably self-contained so that the
tasks assigned to each layer can be implemented independently. This enables the solutions
offered by one layer to be updated without affecting the other layers.

Figure 10: Layers of the OSI reference model

The OSI model is modular. Each successive layer of the OSI model works with the one above
and below it. Although each layer of the OSI model provides its own set of functions, it is
possible to group the layers into two distinct categories. The first four layers i.e., physical,
data link, network, and transport layer provide the end-to-end services necessary for the

transfer of data between two systems. The protocols in these four layers are associated with the communications network used to link two computers together. Together, these layers are communication oriented.
The top three layers i.e., the application, presentation, and session layers provide the
application services required for the exchange of information. That is, they allow two
applications, each running on a different node of the network to interact with each other
through the services provided by their respective operating systems. Together, these are data
processing oriented. The following are the seven layers of the Open System Interconnection
(OSI) reference model:
Application Layer (Layer 7) - This top layer defines the language and syntax that programs
use to communicate with other programs. The application layer represents the purpose of
communicating in the first place. Common functions at this layer are opening, closing,
reading and writing files, transferring files and e-mail messages, executing remote jobs and
obtaining directory information about network resources etc.
Presentation Layer (Layer 6) - The Presentation layer performs code conversion and data
reformatting (syntax translation). It is the translator of the network; it makes sure the data is
in the correct form for the receiving application. When data are transmitted between different
types of computer systems, the presentation layer negotiates and manages the way data are
represented and encoded. For example, it provides a common denominator between ASCII
and EBCDIC machines as well as between different floating-point and binary formats. Sun's XDR and OSI's ASN.1 are two protocols used for this purpose. This layer also provides security features through encryption and decryption.
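As a small illustration of the code-conversion role described above (this is only an analogy using Python's built-in codecs, not an OSI presentation-layer protocol), the same text can be converted between an ASCII representation and an EBCDIC code page:

```python
# Presentation-layer style code conversion, sketched with Python codecs.
# 'cp500' is one of the EBCDIC code pages shipped with the standard library.
text = "NETWORK"

ascii_bytes = text.encode("ascii")    # how an ASCII machine stores the text
ebcdic_bytes = text.encode("cp500")   # how an EBCDIC machine stores the same text

print(ascii_bytes.hex())   # 4e4554574f524b
print(ebcdic_bytes.hex())  # d5c5e3e6d6d9d2

# A presentation layer on the receiving side would convert back into
# whatever representation the local application expects.
assert ebcdic_bytes.decode("cp500") == text
```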
Session Layer (Layer 5) - The Session layer decides when to turn communication on and off
between two computers. It provides the mechanism that controls the data-exchange process
and coordinates the interaction (communication) between them in an orderly manner. It sets
up and clears communication channels between two communicating components. It
determines one-way or two-way communications and manages the dialogue between both
parties; for example, making sure that the previous request has been fulfilled before the next
one is sent. It also marks significant parts of the transmitted data with checkpoints to allow
for fast recovery in the event of a connection failure.
Transport Layer (Layer 4) - The transport layer is responsible for overall end-to-end
validity and integrity of the transmission i.e., it ensures that data is successfully sent and
received between two computers. The lower data link layer (layer 2) is only responsible for
delivering packets from one node to another. Thus, if a packet gets lost in a router somewhere
in the enterprise Internet, the transport layer will detect that. It ensures that if a 12MB file is
sent, the full 12 MB is received. If data is sent incorrectly, this layer has the responsibility of asking for retransmission of the data. Specifically, it provides a network-independent, reliable message interchange service to the top three application-oriented layers. This layer acts as an interface between the bottom three and the top three layers. By
providing the session layer (layer 5) with a reliable message transfer service, it hides the
detailed operation of the underlying network from the session layer.
Network Layer (Layer 3) - The network layer establishes the route between the sending and
receiving stations. The unit of data at the network layer is called a packet. It provides network
routing and flow- and congestion-control functions across the computer-network interface. It makes a
decision as to where to route the packet based on information and calculations from other
routers, or according to static entries in the routing table. It examines network addresses in
the data instead of physical addresses seen in the Data Link layer. The Network layer
establishes, maintains, and terminates logical and/or physical connections. The network layer

is responsible for translating logical addresses, or names, into physical addresses. The main
device found at the Network layer is a router.
Data link Layer (Layer 2) - The data link layer groups the bits that we see on the Physical
layer into Frames. It is primarily responsible for error-free delivery of data on a hop. The
Data link layer is split into two sub-layers i.e., the Logical Link Control (LLC) and Media
Access Control (MAC). The Data-Link layer handles the physical transfer, framing (the
assembly of data into a single unit or block), flow control and error-control functions (and
retransmission in the event of an error) over a single transmission link; it is responsible for
getting the data packaged and onto the network cable. The data link layer provides the
network layer (layer 3) reliable information-transfer capabilities. The main network device
found at the Datalink layer is a bridge. This device works at a higher layer than the repeater
and therefore is a more complex device. It has some understanding of the data it receives and
can make a decision based on the frames it receives as to whether it needs to let the
information pass, or can remove the information from the network. This means that the
amount of traffic on the medium can be reduced and therefore, the usable bandwidth can be
increased.
Physical Layer (Layer 1) - The data units on this layer are called bits. This layer defines the
mechanical and electrical definition of the network medium (cable) and network hardware.
This includes how data is impressed onto the cable and retrieved from it. The physical layer
is responsible for passing bits onto and receiving them from the connecting medium. This
layer gives the data-link layer (layer 2) its ability to transport a stream of serial data bits
between two communicating systems; it conveys the bits that move along the cable. It is
responsible for ensuring that the raw bits get from one place to another, no matter what shape
they are in, and deals with the mechanical and electrical characteristics of the cable. This
layer has no understanding of the meaning of the bits, but deals with the electrical and
mechanical characteristics of the signals and signalling methods. The main network device found at the Physical layer is a repeater. The purpose of a repeater (as the name suggests) is simply to receive the digital signal, reform it, and retransmit the signal. This has the effect of increasing the maximum length of a network, which would not be possible, due to signal deterioration, if a repeater were not available. The repeater simply regenerates a cleaner digital signal, so it does not have to understand anything about the information it is transmitting, and processing on the repeater is non-existent. An example of the Physical layer is RS-232.

Each layer, with the exception of the physical layer, adds information to the data as it travels
from the Application layer down to the physical layer. This extra information is called a
header. The physical layer does not append a header to information because it is concerned
with sending and receiving information on the individual bit level. We see that the data for
each layer consists of the header and data of the next higher layer. Because the data format is
different at each layer, different terms are commonly used to name the data package at each
level.
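The encapsulation process described above can be sketched as follows. The header contents are invented purely for illustration; what matters is the nesting of headers and the common names for the data unit at each level (segment, packet, frame).

```python
# Sketch of encapsulation: each layer prepends its own header to the
# data handed down by the layer above. Header contents are made up.
app_data = b"GET /index.html"                     # application data
segment  = b"TCP|src=5000,dst=80|"  + app_data    # transport layer adds its header
packet   = b"IP|10.0.0.1>10.0.0.2|" + segment     # network layer adds its header
frame    = b"ETH|aa:bb:cc>dd:ee:ff|" + packet     # data link layer adds its header

# The physical layer adds no header; it just transmits the bits of the frame.
for name, unit in [("segment", segment), ("packet", packet), ("frame", frame)]:
    print(f"{name:7s} ({len(unit)} bytes): {unit!r}")
```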
The OSI model provides a conceptual framework for communication between computers, but
the model itself is not a method of communication. Actual communication is made possible
by using communication protocols. In the context of data networking, a protocol is a formal
set of rules and conventions that governs how computers exchange information over a
network medium. A protocol implements the functions of one or more of the OSI layers. A
wide variety of communication protocols exist, but all tend to fall into one of the following
groups: LAN protocols, WAN protocols, network protocols, and routing protocols. LAN protocols operate at the physical and data link layers of the OSI model and define communication over the various LAN media. WAN protocols operate at the lowest three
layers of the OSI model and define communication over the various wide-area media.

Routing protocols are network-layer protocols that are responsible for path determination and
traffic switching. Finally, network protocols are the various upper-layer protocols that exist in
a given protocol suite.

TCP/IP Reference Model - TCP/IP stands for Transmission Control Protocol / Internet Protocol. It is a protocol suite used by most communications software. TCP/IP is a robust and proven technology that was first tested in the early 1980s on ARPANET, the U.S. military's Advanced Research Projects Agency network and the world's first packet-switched network.
TCP/IP was designed as an open protocol that would enable all types of computers to
transmit data to each other via a common communications language. TCP/IP is a layered
protocol similar to the ones used in all the other major networking architectures, including
IBM's SNA, Windows' NetBIOS, Apple's AppleTalk, Novell's NetWare and Digital's
DECnet. The different layers of the TCP/IP reference model are shown in Figure 11.
Layering means that after an application initiates the communications, the message (data) to
be transmitted is passed through a number of stages or layers until it actually moves out onto
the wire. The data are packaged with a different header at each layer. At the receiving end,
the corresponding programs at each protocol layer unpack the data, moving it “back up the
stack” to the receiving application.

APPLICATION LAYER
TRANSPORT LAYER
NETWORK LAYER
LINK/PHYSICAL LAYER

Figure 11: Layers of TCP/IP reference model

TCP/IP is composed of two major parts: TCP (Transmission Control Protocol) at the
transport layer and IP (Internet Protocol) at the network layer. TCP is a connection-oriented
protocol that passes its data to IP, which is a connectionless one. TCP sets up a connection at
both ends and guarantees reliable delivery of the full message sent. TCP tests for errors and
requests retransmission if necessary, because IP does not. An alternative protocol to TCP
within the TCP/IP suite is UDP (User Datagram Protocol), which does not guarantee
delivery. Like IP, it is also connectionless, but very useful for real-time voice and video,
where it does not matter if a few packets get lost. The following describes the layers of the TCP/IP model.
Application Layer (Layer 4) - The top layer of the protocol stack is the application layer. It
refers to the programs that initiate communication in the first place. TCP/IP includes several
application layer protocols for mail, file transfer, remote access, authentication and name
resolution. These protocols are embodied in programs that operate at the top layer just as any
custom made or packaged client/server application would. There are many Application Layer
protocols and new protocols are always being developed. The most widely known
Application Layer protocols are those used for the exchange of user information, some of
them are:
 The HyperText Transfer Protocol (HTTP) is used to transfer files that make up the
Web pages of the World Wide Web.
 The File Transfer Protocol (FTP) is used for interactive file transfer.
 The Simple Mail Transfer Protocol (SMTP) is used for the transfer of mail
messages and attachments.

 Telnet is a terminal emulation protocol and is used for remote login to network
hosts.
 Other Application Layer protocols that help in the management of TCP/IP networks
are:
 The Domain Name System (DNS), which is used to resolve a host name to an IP
address.
 The Simple Network Management Protocol (SNMP) which is used between
network management consoles and network devices (routers, bridges, and intelligent
hubs) to collect and exchange network management information.
Examples of Application Layer interfaces for TCP/IP applications are Windows Sockets and
NetBIOS. Windows Sockets provides a standard application-programming interface (API)
under the Microsoft Windows operating system. NetBIOS is an industry standard interface
for accessing protocol services such as sessions, datagrams, and name resolution.
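As a small, hedged illustration of two of these Application Layer protocols in use (DNS for name resolution and HTTP for fetching a page), the sketch below uses Python's socket module directly; the host name is only a placeholder and the sketch assumes Internet access.

```python
# Hedged example: exercising two Application Layer protocols with raw sockets.
import socket

# DNS: resolve a host name to an IP address (the local resolver does the DNS work).
ip = socket.gethostbyname("example.com")
print("example.com resolves to", ip)

# HTTP: a minimal request sent over a TCP connection to port 80.
with socket.create_connection(("example.com", 80), timeout=5) as s:
    s.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
    reply = s.recv(200)
print(reply.splitlines()[0])   # e.g. b'HTTP/1.1 200 OK'
```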
Transport Layer (Layer 3) - The Transport Layer (also known as the Host-to-Host
Transport Layer) is responsible for providing the Application Layer with session and
datagram communication services. TCP/IP does not contain Presentation and Session layers; their services are performed if required, but they are not part of the formal TCP/IP stack. For example, Layer 6 (the Presentation Layer) is where data conversion (ASCII to EBCDIC, floating point to binary, etc.) and encryption/decryption are performed, and Layer 5, the Session Layer, is handled within layer 4 of TCP/IP. Thus, we jump from layer 7 of OSI down to layer 4 of TCP/IP. From the Application to the Transport Layer, the application delivers its data to the communications system by passing a stream of data bytes to the transport layer along with the socket of the destination machine. The core protocols of the Transport Layer are TCP and
the User Datagram Protocol (UDP).
 TCP: TCP provides a one-to-one, connection-oriented, reliable communications
service. TCP is responsible for the establishment of a TCP connection, the sequencing
and acknowledgment of packets sent, and the recovery of packets lost during
transmission.
 UDP: UDP provides a one-to-one or one-to-many, connectionless, unreliable
communications service. UDP is used when the amount of data to be transferred is
small (such as the data that would fit into a single packet), when the overhead of
establishing a TCP connection is not desired, or when the applications or upper layer
protocols provide reliable delivery. The Transport Layer encompasses the responsibilities of the OSI Transport Layer and some of the responsibilities of the OSI Session Layer, as sketched in the example below.
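A minimal Python socket sketch of the TCP/UDP contrast just described is given below; the address and port are placeholders. TCP (SOCK_STREAM) requires a connection to be set up before any data flows, whereas UDP (SOCK_DGRAM) simply sends a datagram with no connection setup and no delivery guarantee.

```python
# Minimal sketch contrasting the two Transport Layer protocols.
# The address and port values are placeholders for illustration only.
import socket

# TCP: connection-oriented, reliable byte stream (SOCK_STREAM).
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# tcp.connect(("192.0.2.10", 7))   # a connection must be set up before sending
# tcp.sendall(b"reliable, ordered data")

# UDP: connectionless datagrams, no delivery guarantee (SOCK_DGRAM).
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"best-effort datagram", ("192.0.2.10", 7))   # no connection setup

tcp.close()
udp.close()
```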
Internet Layer (Layer 2) - The internet layer handles the transfer of information across
multiple networks through the use of gateways and routers. The internet layer corresponds to
the part of the OSI network layer that is concerned with the transfer of packets between
machines that are connected to different networks. It deals with the routing of packets across
these networks as well as with the control of congestion. A key aspect of the internet layer is
the definition of globally unique addresses for machines that are attached to the Internet. The
Internet layer provides a single service, namely best-effort connectionless packet transfer. IP packets are exchanged between routers without a connection setup; the packets are routed independently and so they may traverse different paths. For this reason, IP packets are also called datagrams. The connectionless approach makes the system robust; that is, if failures
occur in the network, the packets are routed around the points of failure; hence, there is no
need to set up connections. The gateways that interconnect the intermediate networks may
discard packets when congestion occurs. The responsibility for recovery from these losses is

passed on to the Transport Layer. The core protocols of the Internet Layer are IP, ARP, ICMP, and IGMP.
 The Internet Protocol (IP) is a routable protocol responsible for IP addressing and
the fragmentation and reassembly of packets.
 The Address Resolution Protocol (ARP) is responsible for the resolution of the
Internet Layer address to the Network Interface Layer address, such as a hardware
address.
 The Internet Control Message Protocol (ICMP) is responsible for providing
diagnostic functions and reporting errors or conditions regarding the delivery of IP
packets.
 The Internet Group Management Protocol (IGMP) is responsible for the
management of IP multicast groups. The Internet Layer is analogous to the Network
layer of the OSI model.
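Since a key job of the Internet Layer is globally unique addressing, the short sketch below uses Python's standard ipaddress module (the addresses and prefix length are examples only) to show how a host address relates to the network it belongs to:

```python
# Small illustration of Internet Layer addressing; example addresses only.
import ipaddress

net = ipaddress.ip_network("192.168.10.0/24")
host = ipaddress.ip_address("192.168.10.37")

print(host in net)            # True: the host belongs to this network
print(net.network_address)    # 192.168.10.0
print(net.broadcast_address)  # 192.168.10.255
print(net.num_addresses)      # 256 addresses in a /24
```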
Link/Physical Layer (Layer 1)
The Link/Physical Layer (also called the Network Access Layer) is responsible for placing
TCP/IP packets on the network medium and receiving TCP/IP packets off the network
medium. TCP/IP was designed to be independent of the network access method, frame
format, and medium. In this way, TCP/IP can be used to connect differing network types.
This includes LAN technologies such as Ethernet or Token Ring and WAN technologies such
as X.25 or Frame Relay. Independence from any specific network technology gives TCP/IP
the ability to be adapted to new technologies such as Asynchronous Transfer Mode (ATM).
The Network Interface Layer encompasses the Data Link and Physical layers of the OSI
Model. Note that the Internet Layer does not take advantage of sequencing and
acknowledgement services that may be present in the Data Link Layer. An unreliable
Network Interface Layer is assumed, and reliable communications through session
establishment and the sequencing and acknowledgement of packets is the responsibility of the
Transport Layer.

Comparison between OSI And TCP/IP reference models


Both OSI and TCP/IP reference models are based on the concept of a stack of protocols. The
functionality of the layers is almost similar. In both models the layers are there to provide an
end-to-end network independent transport service to processes wishing to communicate with
each other. The two models have many differences. An obvious difference between the two models is the number of layers: the OSI model has seven layers and the TCP/IP model has four layers. Both have (inter)network, transport, and application layers, but the other layers are
different. OSI uses strict layering, resulting in vertical layers whereas TCP/IP uses loose
layering resulting in horizontal layers. The OSI model supports both connectionless and
connection-oriented communication in the network layer, but only connection-oriented
communication at the transport layer. The TCP/IP model has only one mode in network layer
(connectionless), but supports both modes in the transport layer. With the TCP/IP model, replacing IP with a substantially different protocol would be virtually impossible, thus defeating one of the main purposes of having layered protocols in the first place.
The OSI reference model was devised before the OSI protocols were designed. The OSI
model was not biased toward one particular set of protocols, which made it quite general. The
drawback of this ordering is that the designers did not have much experience with the subject,
and did not have a good idea of the type of functionality to put in a layer. With TCP/IP the
reverse was true: the protocols came first and the model was really just a description of the
existing protocols. There was no problem with the protocols fitting the model. The only
drawback was that the model did not fit any other protocol stacks.

Some of the drawbacks of OSI reference model are:
 The layers are not all of roughly equal size and complexity. In practice, the session layer
and presentation layer are absent from many existing architectures.
 Some functions like addressing, flow control, retransmission are duplicated at each
layer, resulting in deteriorated performance.
 The initial specification of the OSI model ignored the connectionless model, thus,
leaving much of the LANs behind.
Some of the drawbacks of TCP/IP model are:
 TCP/IP model does not clearly distinguish between the concepts of service, interface,
and protocol.
 TCP/IP model is not a general model and therefore it cannot be used to describe any
protocol other than TCP/IP.
 The TCP/IP model does not distinguish between, or even mention, the Physical and Data Link
layers. A proper model should include both of these layers as separate layers.

NETWORK TOPOLOGY
Topology refers to the shape of a network, or the network's layout. How different nodes in a network are connected to each other and how they communicate with each other is determined by the network's topology. Topologies are either physical or logical. The
parameters that are to be considered while selecting a physical topology are: ease of
installation, ease of reconfiguration and ease of troubleshooting. Some of the most common
network topologies are: bus topology, star topology, ring topology, tree topology, mesh
topology, and cellular topology.

Bus Topology - In Bus topology, all devices are connected to a central cable, called the bus
or backbone. The bus topology connects workstations using a single cable. Each workstation
is connected to the next workstation in a point to-point fashion. All workstations connect to
the same cable. In this type of topology, if one workstation goes faulty all workstations may
be affected as all workstations share the same cable for the sending and receiving of
information. The cabling cost of bus systems is the least of all the different topologies. Each
end of the cable is terminated using a special terminator. The common implementation of this
topology is Ethernet. Here, a message transmitted by one workstation is heard by all the other workstations.

Figure 1: Bus Topology

Advantages of Bus Topology


 Installation is easy and cheap when compared to other topologies
 Connections are simple and this topology is easy to use.
 Less cabling is required.

Disadvantages of Bus Topology
 Used only in comparatively small networks.
 As all computers share the same bus, the performance of the network deteriorates
when we increase the number of computers beyond a certain limit.
 Fault identification is difficult.
 A single fault in the cable stops all transmission.

Star Topology - Star topology uses a central hub through which all components are
connected. In a Star topology, the central hub is the host computer, and at the end of each
connection is a terminal as shown in Figure 2. Nodes communicate across the network by
passing data through the hub. A star network uses a significant amount of cable as each
terminal is wired back to the central hub, even if two terminals are side by side but several
hundred meters away from the host. The central hub makes all routing decisions, and all other
workstations can be simple. An advantage of the star topology is that failure in one of the terminals does not affect any other terminal; however, failure of the central hub affects all
terminals. This type of topology is frequently used to connect terminals to a large time-
sharing host computer.

Figure 2: Star Topology

Advantages of Star Topology


 Installation and configuration of network is easy.
 Less expensive when compared to mesh topology.
 Faults in the network can be easily traced.
 Expansion and modification of star network is easy.
 Single computer failure does not affect the network.
 Supports multiple cable types like shielded twisted pair cable, unshielded twisted pair
cable, ordinary telephone cable etc.
Disadvantages of Star Topology
 Failure in the central hub brings the entire network to a halt.
 More cabling is required in comparison to tree or bus topology because each node is
connected to the central hub.

Ring Topology - In Ring Topology all devices are connected to one another in the shape of a
closed loop, so that each device is connected directly to two other devices, one on either side
of it, i.e., the ring topology connects workstations in a closed loop, which is depicted in
Figure 3. Each terminal is connected to two other terminals (the next and the previous), with
the last terminal being connected to the first. Data is transmitted around the ring in one
direction only; each station passing on the data to the next station till it reaches its
destination.

Figure 3: Ring Topology

Information travels around the ring from one workstation to the next. Each packet of data sent
on the ring is prefixed by the address of the station to which it is being sent. When a packet of
data arrives, the workstation checks to see if the packet address is the same as its own; if it is, it grabs the data in the packet. If the packet does not belong to it, it sends the packet to the
next workstation in the ring. Faulty workstations can be isolated from the ring. When the
workstation is powered on, it connects itself to the ring. When power is off, it disconnects
itself from the ring and allows the information to bypass the workstation.
The common implementation of this topology is token ring. A break in the ring causes the
entire network to fail. Individual workstations can be isolated from the ring.
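The address-checking and forwarding behaviour described above can be modelled in a few lines; the station names and packet format below are invented purely for illustration.

```python
# Toy simulation of ring forwarding: each station passes a packet to the
# next station until the destination address matches.
stations = ["A", "B", "C", "D"]          # stations arranged in a ring
packet = {"dst": "C", "data": "hello"}

current = 0                              # start at station A
hops = 0
while stations[current] != packet["dst"]:
    nxt = (current + 1) % len(stations)  # data travels in one direction only
    print(f"{stations[current]} -> {stations[nxt]}")
    current = nxt
    hops += 1

print(f"{stations[current]} accepts {packet['data']!r} after {hops} hop(s)")
```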
Advantages of Ring Topology
 Easy to install and modify the network.
 Fault isolation is simplified.
 Unlike Bus topology, there is no signal loss in Ring topology because the tokens are
data packets that are re-generated at each node.
Disadvantages of Ring Topology
 Adding or removing computers disrupts the entire network.
 A break in the ring can stop the transmission in the entire network.
 Finding fault is difficult.
 Expensive when compared to other topologies.

Tree Topology - Tree topology is a LAN topology in which only one route exists between
any two nodes on the network. The pattern of connection resembles a tree in which all
branches spring from one root. Figure 4 shows computers connected using Tree Topology.
Tree topology is a hybrid topology; it is similar to the star topology, but the nodes are connected to a secondary hub, which in turn is connected to the central hub. In this topology, groups of star-configured networks are connected to a linear bus backbone.

Figure 4: Tree Topology

Advantages of Tree Topology
 Installation and configuration of network is easy.
 Less expensive when compared to mesh topology.
 Faults in the network can be easily detected and traced.
 The addition of the secondary hub allows more devices to be attached to the central
hub.
 Supports multiple cable types like shielded twisted pair cable, unshielded twisted pair
cable, ordinary telephone cable etc.
Disadvantages of Tree Topology
 Failure in the central hub brings the entire network to a halt.
 More cabling is required when compared to bus topology because each node is
connected to the central hub.

Mesh Topology - Devices are connected with many redundant interconnections between
network nodes. In a well-connected topology, every node has a connection to every other
node in the network. The cable requirements are high, but there are redundant paths built in.
Failure in one of the computers does not cause the network to break down, as they have
alternative paths to other computers. Mesh topologies are used for critical connections between host computers (typically telephone exchanges). Alternate paths allow each computer to balance
the load to other computer systems in the network by using more than one of the connection
paths available. A fully connected mesh network therefore has n(n − 1)/2 physical channels to link n devices. To accommodate these, every device on the network must have (n − 1) input/output ports.
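As a quick worked example of these formulas (the value n = 6 is chosen arbitrarily):

```python
# Worked example of the mesh formulas for a small network.
n = 6                                # number of devices (example value)
links = n * (n - 1) // 2             # fully connected mesh needs n(n-1)/2 links
ports_per_device = n - 1             # each device needs n-1 I/O ports

print(links)             # 15 physical channels
print(ports_per_device)  # 5 ports on every device
```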

Figure 5: Mesh Topology

Advantages of Mesh Topology


 Use of dedicated links eliminates traffic problems.
 Failure in one of the computers does not affect the entire network.
 Point-to-point link makes fault isolation easy.
 It is robust.
 Privacy between computers is maintained as messages travel along dedicated path.
Disadvantages of Mesh Topology
 The amount of cabling required is high.
 A large number of I/O (input/output) ports are required.

Cellular Topology - Cellular topology divides the area being serviced into cells. In wireless media, each point transmits in a certain geographical area called a cell; each cell represents a portion of the total network area. Figure 6 shows computers using Cellular Topology.
Devices that are present within the cell, communicate through a central hub. Hubs in different

cells are interconnected and hubs are responsible for routing data across the network. They
provide a complete network infrastructure. Cellular topology is applicable only in case of
wireless media that does not require cable connection.

Figure 6: Cellular Topology

Advantages of Cellular Topology


 If the hubs maintain a point-to-point link with devices, troubleshooting is easy.
 Hub-to-hub fault tracking is more complicated, but allows simple fault isolation.
Disadvantages of Cellular Topology
 When a hub fails, all devices serviced by the hub lose service (are affected).

NETWORK HARDWARE
As technology advances and IP-based networks are integrated into building infrastructure and
household utilities, network hardware is becoming an ambiguous term owing to the vastly
increasing number of "network capable" endpoints. The most common kind of networking
hardware today is a copper-based Ethernet adapter which is a standard inclusion on most
modern computer systems. Wireless networking has become increasingly popular, especially
for portable and handheld devices.
Networking devices may include gateways, routers, network bridges, modems, wireless
access points, networking cables, line drivers, switches, hubs, and repeaters; and may also
include hybrid network devices such as multilayer switches, protocol converters, bridge
routers, proxy servers, firewalls, network address translators, multiplexers, network interface
controllers, wireless network interface controllers, ISDN terminal adapters and other related
hardware. Other networking hardware used in computers includes data centre equipment
(such as file servers, database servers and storage areas), network services (such
as DNS, DHCP, email, etc.) as well as devices which assure content delivery. Taking a wider
view, mobile phones, PDAs and even modern coffee machines may also be considered
networking hardware.
For emphasis, below is a summary of what each of these hardware devices does in a typical computer network.
Specific Devices
 Gateway: an interface providing compatibility between networks by converting
transmission speeds, protocols, codes, or security measures.
 Router: a networking device that forwards data packets between computer networks.
Routers perform the "traffic directing" functions on the Internet. A data packet is
typically forwarded from one router to another through the networks that constitute the
internetwork until it reaches its destination node. It works on OSI layer 3.
 Switch: a device that connects devices together on a computer network, by using packet
switching to receive, process and forward data to the destination device. Unlike less

advanced network hubs, a network switch forwards data only to one or multiple devices
that need to receive it, rather than broadcasting the same data out of each of its ports. It
works on OSI layer 2.
 Bridge: a device that connects multiple network segments. It works on OSI layers 1 and
2.
 Hub: for connecting multiple Ethernet devices together to make them act as a single
network segment. It has multiple input/output (I/O) ports, in which a signal introduced at
the input of any port appears at the output of every port except the original incoming one. A
hub works at the physical layer (layer 1) of the OSI model. Repeater hubs also participate
in collision detection, forwarding a jam signal to all ports if it detects a collision. Hubs
are now largely obsolete, having been replaced by network switches except in very old
installations or specialized applications.
 Repeater: an electronic device that receives a signal and retransmits it at a higher level or
higher power, or onto the other side of an obstruction, so that the signal can cover longer
distances.
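The practical difference between a hub (which repeats a frame out of every port) and a switch (which forwards a frame only where it is needed) comes down to a learned address table. The sketch below is a simplified, hypothetical model of that behaviour, not an implementation of any real switch.

```python
# Simplified model of a learning switch: it remembers which port each
# source MAC address was seen on, and floods only unknown destinations.
mac_table = {}

def handle_frame(src_mac, dst_mac, in_port, num_ports):
    """Return the list of ports the frame is sent out of."""
    mac_table[src_mac] = in_port                 # learn where the sender lives
    if dst_mac in mac_table:                     # known destination: one port
        return [mac_table[dst_mac]]
    return [p for p in range(num_ports) if p != in_port]   # otherwise flood

print(handle_frame("aa:aa", "bb:bb", in_port=1, num_ports=4))  # flood: [0, 2, 3]
print(handle_frame("bb:bb", "aa:aa", in_port=3, num_ports=4))  # learned: [1]
```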
Hybrid network devices include:
 Multilayer switch: a switch that, in addition to switching on OSI layer 2, provides
functionality at higher protocol layers.
 Protocol converter: a hardware device that converts between two different types
of transmission, for interoperation.
 Bridge router (brouter): a device that works as a bridge and as a router. The brouter routes
packets for known protocols and simply forwards all other packets as a bridge would.
Hardware or software components which typically sit on the connection point of different
networks (for example, between an internal network and an external network) include:
 Proxy server: computer network service which allows clients to make indirect network
connections to other network services.
 Firewall: a piece of hardware or software put on the network to prevent some
communications forbidden by the network policy. A firewall typically establishes a
barrier between a trusted, secure internal network and another outside network, such as
the Internet, that is assumed to not be secure or trusted.
 Network address translator (NAT): network service (provided as hardware or as software)
that converts internal to external network addresses and vice versa.
Other hardware devices used for establishing networks or dial-up connections include:
 Multiplexer: a device that selects only one signal from several electrical input signals.
 Network interface controller (NIC): a device connecting a computer to a wire-based
computer network.
 Wireless network interface controller: a device connecting the attached computer to a
radio-based computer network.
 Modem: device that modulates an analog "carrier" signal (such as sound) to encode
digital information, and that also demodulates such a carrier signal to decode the
transmitted information. Used (for example) when a computer communicates with
another computer over a telephone network.
 ISDN terminal adapter (TA): a specialized gateway for ISDN.
 Line driver: a device to increase transmission distance by amplifying the signal; used in
base-band networks only.
COMPUTER SOFTWARE
Computer software, or simply software, is that part of a computer system that consists
of encoded information or computer instructions, in contrast to the physical hardware from

which the system is built. Computer software includes computer programs, libraries and
related non-executable data, such as online documentation or digital media. Computer
hardware and software require each other and neither can be realistically used on its own.
The majority of software is written in high-level programming languages that are easier and more efficient for programmers because they are closer to a natural language. High-level languages
are translated into machine language using a compiler or an interpreter or a combination of
the two. Software may also be written in a low-level assembly language, essentially, a
vaguely mnemonic representation of a machine language using a natural language alphabet,
which is translated into machine language using an assembler.
Types of Software
On virtually all computer platforms, software can be grouped into a few broad categories
based on:
 The purpose or domain of use
 Nature or domain of execution
 Programming tool
 Architecture
Based on purpose or domain of use – computer software can be divided into:
 Application software, which is software that uses the computer system to perform special
functions or provide entertainment functions beyond the basic operation of the computer
itself. There are many different types of application software, because the range of tasks
that can be performed with a modern computer is so large.
 System software, which is software that directly operates the computer hardware, to
provide basic functionality needed by users and other software, and to provide a platform
for running application software. System software includes:
o Operating systems, which are essential collections of software that manage resources and provide common services for other software that runs "on top" of
them. Supervisory programs, boot loaders, shells and window systems are core
parts of operating systems. In practice, an operating system comes bundled with
additional software (including application software) so that a user can potentially do
some work with a computer that only has an operating system.
o Device drivers, which operate or control a particular type of device that is attached
to a computer. Each device needs at least one corresponding device driver; because a computer typically has at least one input device and at least one output device, it typically needs more than one device driver.
o Utilities, which are computer programs designed to assist users in the maintenance
and care of their computers.
 Malicious software or malware, which is software that is developed to harm and disrupt
computers. As such, malware is undesirable. Malware is closely associated with
computer-related crimes, though some malicious programs may have been designed
as practical jokes.
Based on nature or domain of execution – software include:
 Desktop applications such as web browsers and Microsoft Office, as well
as smartphone and tablet applications (called "apps"). (There is a push in some parts of

the software industry to merge desktop applications with mobile apps, to some
extent. Windows 8, and later Ubuntu Touch, tried to allow the same style of application
user interface to be used on desktops, laptops and mobiles.)
 JavaScript scripts are pieces of software traditionally embedded in web pages that are run
directly inside the web browser when a web page is loaded without the need for a web
browser plugin. Software written in other programming languages can also be run within
the web browser if the software is either translated into JavaScript, or if a web browser
plugin that supports that language is installed; the most common example of the latter
is ActionScript scripts, which are supported by the Adobe Flash plugin.
 Server software, including:
 Web applications, which usually run on the web server and output dynamically
generated web pages to web browsers, using e.g. PHP, Java, ASP.NET, or
even JavaScript that runs on the server. In modern times these commonly include
some JavaScript to be run in the web browser as well, in which case they typically
run partly on the server, partly in the web browser.
 Plugins and extensions are software that extends or modifies the functionality of another
piece of software, and require that software be used in order to function;
 Embedded software resides as firmware within embedded systems, devices dedicated to a
single use or a few uses such as cars and televisions (although some embedded devices
such as wireless chipsets can themselves be part of an ordinary, non-embedded computer
system such as a PC or smartphone).[3] In the embedded system context there is
sometimes no clear distinction between the system software and the application software.
However, some embedded systems run embedded operating systems, and these systems
do retain the distinction between system software and application software (although
typically there will only be one, fixed, application which is always run).
 Microcode is a special, relatively obscure type of embedded software which tells the
processor itself how to execute machine code, so it is actually a lower level than machine
code. It is typically proprietary to the processor manufacturer, and any necessary
correctional microcode software updates are supplied by them to users (which is much
cheaper than shipping replacement processor hardware). Thus an ordinary programmer
would not expect to ever have to deal with it.
Based on programming tools:
Programming tools are also software in the form of programs or applications that software
developers (also known as programmers, coders, hackers or software engineers) use to
create, debug, maintain (i.e. improve or fix), or otherwise support software. Software is
written in one or more programming languages; there are many programming languages in
existence, and each has at least one implementation, each of which consists of its own set of
programming tools. These tools may be relatively self-contained programs such
as compilers, debuggers, interpreters, linkers, and text editors, that can be combined together
to accomplish a task; or they may form an integrated development environment (IDE), which
combines much or all of the functionality of such self-contained tools. IDEs may do this by
either invoking the relevant individual tools or by re-implementing their functionality in a
new way. An IDE can make it easier to do specific tasks, such as searching in files in a
particular project. Many programming language implementations provide the option of using
either individual tools or an IDE.

Based on architecture - Users often see things differently from programmers. People who
use modern general purpose computers (as opposed to embedded systems, analog
computers and supercomputers) usually see three layers of software performing a variety of
tasks: platform, application, and user software.
 Platform software: The Platform includes the firmware, device drivers, an operating
system, and typically a graphical user interface which, in total, allow a user to interact
with the computer and its peripherals (associated equipment). Platform software often
comes bundled with the computer. On a PC one will usually have the ability to change
the platform software.
 Application software: Application software or Applications are what most people think of
when they think of software. Typical examples include office suites and video
games. Application software is often purchased separately from computer hardware.
Sometimes applications are bundled with the computer, but that does not change the fact
that they run as independent applications. Applications are usually independent programs
from the operating system, though they are often tailored for specific platforms. Most
users think of compilers, databases, and other "system software" as applications.
 User-written software: End-user development tailors systems to meet users' specific
needs. User software includes spreadsheet templates and word processor templates. Even
email filters are a kind of user software. Users create this software themselves and often
overlook how important it is. Depending on how competently the user-written software
has been integrated into default application packages, many users may not be aware of
the distinction between the original packages, and what has been added by co-workers.
