Module 0 E-Learning Book
SECTION 2: CONTENTS
CHAPTER 1: INTRODUCTION TO COMPUTER
HARDWARE AND SOFTWARE
PART 1: COMPUTER HARDWARE
Learning Objectives
To gain understanding of components of Information Systems (IS) Infrastructure
To understand working of computer and peripheral devices
To understand Data Representation in memory and storage
To gain understanding of Hardware Asset Management
To understand Auditing Hardware
Note: This material was relevant at the time of its preparation. However, it is expected to be updated on a regular basis.
Please visit the CIT portal of ICAI (http://cit.icai.org) for the latest updated material.
1.1 Introduction
We cannot think of any enterprise today without Information systems and the critical infrastructure
for developing and maintaining Information Systems is provided by Information Technology (IT). IT
is a key enabler for increasing operational excellence, customer and supplier relations, better
decision making and competitive advantage. In short, IT is required not only to thrive but to survive
in the modern IT age. When IT is integrated with organisation and management, it can provide
better ways of conducting business that empower organisations with a strategic advantage.
Business today is growing exponentially and this growth of business is propelled by information.
The more information a business acquires, the more difficult it becomes to make decisions. In the
past, people could rely on manual processes to make decisions because they had limited amount
of information to process. Today, with massive volumes of available information, it is almost
impossible for management to take decisions without the aid of information systems. Highly
complex decisions must be made in increasingly shorter time frames. All this and more has made
adoption of IT an imperative for running any enterprise.
Chapter 1, Part 1: Computer Hardware
[Figure: Components of IS Infrastructure, showing the layers People (Users), Applications, DBMS, System Software, Network and Hardware]
Module 1: e-Learning
Databases are used at the front end and the back end. A front end to a database usually refers to the user interface that works with that database; a back-end database is the database "behind" an application. Most web-based applications have some database or the other at the back end.
We are living through a learning revolution these days, and databases provide the lifeblood of the information economy.
1.2.5 Network
In today’s high speed world, we cannot imagine an information system without an effective
communication system. Effective and efficient communication is a valuable resource
which helps in good management. To enable this communication, we need
communication networks.
A computer network is a collection of computers and other hardware interconnected by
communication channels that allow sharing of resources and information. When at least
one process in one device is able to send data to or receive data from at least one process
residing in a remote device, the two devices are said to be in a network. Physically it is
the devices that are connected, but logically it is a process in one device transmitting
data to a process in another device. A network is a group of devices connected to each
other.
1.2.6 Hardware
Hardware is the tangible portion of our computer systems, something we can touch and
see. It consists of devices that perform the input, processing, data storage and output
functions of the computer.
The keyboard, monitor, CPU, HDD (Hard Disk drive) etc. are all hardware components of
a computer.
[Figure: Basic organisation of a computer, showing Input Devices and Output Devices connected to the CPU (Control Unit, Arithmetic Logic Unit, Registers), along with Cache and Primary Memory]
million transistors. We can think of transistors as electronic switches that allow a current to pass
through ("on") or block it ("off"), i.e. taking a value of 1 or 0.
The processor or Central Processing Unit (CPU) is the brain of the computer. The main function
of the CPU is to execute programs stored in memory. It consists of three functional units:
1. Control Unit (CU)
o Fetches instructions from memory and determines their type
o Controls the flow of data and instructions to and from memory
o Interprets the instructions
o Controls which instructions to execute and when
2. Arithmetic Logic Unit (ALU)
o Performs the arithmetic and logical operations required to execute the instructions
3. Registers
o Small, high-speed memory units within the CPU that store temporary results and
small amounts of data (mostly 32 or 64 bits)
o Registers include:
Accumulators: they keep running totals of arithmetic values.
Address Registers: they store memory addresses which tell the
CPU where in memory an instruction is located.
Storage Registers: they temporarily store data that is being sent
to or coming from the system memory.
Miscellaneous: general purpose registers used for several functions.
Bus Lines
The CPU is connected through bus lines with main memory and I/O devices. Computer buses
of one form or another move data between all of these devices. The basic job of a bus is to move
data from one location to another within the computer.
The instruction cycle has four phases: Fetch, Decode, Execute and Store. The fetch and decode
phases are performed by the Control Unit (CU), which passes the decoded instruction to the
Arithmetic Logic Unit (ALU) for execution; the results are stored in registers.
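The fetch-decode-execute-store cycle can be sketched as a toy simulation. This is an illustration only: the instruction names (LOAD, ADD) and the single accumulator register are invented for the example; real CPUs decode binary opcodes.

```python
# Toy simulation of the fetch-decode-execute-store cycle.
# Instruction format and register names are illustrative assumptions.

def run(program):
    """Execute a tiny program given as a list of (opcode, operand) pairs."""
    accumulator = 0                      # register holding the running result
    pc = 0                               # program counter: next instruction address
    while pc < len(program):
        instruction = program[pc]        # FETCH the instruction from "memory"
        opcode, operand = instruction    # DECODE it into its parts
        if opcode == "LOAD":             # EXECUTE in the "ALU"
            accumulator = operand
        elif opcode == "ADD":
            accumulator += operand
        pc += 1                          # result stays STOREd in the register
    return accumulator

print(run([("LOAD", 5), ("ADD", 3), ("ADD", 2)]))   # prints 10
```

Each loop iteration mirrors one pass of the cycle: the fetch and decode steps correspond to the Control Unit's work, and the arithmetic corresponds to the ALU.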
1.3.3 Memory
[Figure: Memory Hierarchy, from fastest and smallest to slowest and largest: Registers, Cache, Primary Memory, Virtual Memory, Secondary Memory]
Memory is where data and programs are stored. The various types of memory devices are:
Internal memory
Registers are internal memory within the CPU; they are very fast and very small.
Primary Memory
These are devices in which any location can be accessed in any order (in contrast with
sequential access). They are primarily of two types:
Random access memory (RAM)
It is a Read write memory.
Information can be read as well as modified.
Volatile in nature, means Information is lost as soon as power is turned off.
Read Only Memory (ROM)
It is non-volatile in nature (contents remain even in the absence of power). Usually
ROM is used to store small amounts of information for quick reference by the CPU.
Information can be read but not modified.
Cache Memory
There is a huge speed difference between registers and primary memory; to bridge this
speed difference we have cache memory. Cache (pronounced kăsh) is a smaller, faster
memory which stores copies of the data from the most frequently used main memory locations,
so the processor/registers can access it more rapidly than main memory.
Cache memory is something like recording frequently used telephone numbers in our mobile,
which we can access quickly; otherwise we have to go to the telephone directory, which takes
longer. Cache memory works on the same principle: frequently used data is stored in
easily accessible cache memory instead of slower memory like RAM. Because there is less
data in the cache, the processor can access it more quickly. There are two types of cache memory:
Level 1 (L1) cache which is available in CPU
Level 2 (L2) cache which is available on mother board of most systems
Now with multi-core chips, Level 3 (L3) cache has also been introduced.
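The cache principle can be sketched in a few lines: keep recently used values in a small fast store and fall back to the slow "main memory" only on a miss. The capacity, the backing store, and the least-recently-used eviction policy here are illustrative assumptions, not a description of any particular CPU.

```python
# Minimal sketch of caching: a small fast store in front of slow memory.
from collections import OrderedDict

class Cache:
    def __init__(self, capacity, backing_store):
        self.capacity = capacity
        self.store = OrderedDict()       # small, fast memory
        self.backing = backing_store     # large, slow memory (here a dict)
        self.hits = self.misses = 0

    def read(self, address):
        if address in self.store:
            self.hits += 1
            self.store.move_to_end(address)   # mark as most recently used
            return self.store[address]
        self.misses += 1                      # miss: go to "main memory"
        value = self.backing[address]
        self.store[address] = value
        if len(self.store) > self.capacity:   # evict least recently used entry
            self.store.popitem(last=False)
        return value

memory = {addr: addr * 10 for addr in range(100)}
cache = Cache(capacity=2, backing_store=memory)
for addr in [1, 2, 1, 1, 3]:
    cache.read(addr)
print(cache.hits, cache.misses)   # prints 2 3
```

The repeated reads of address 1 are served from the cache (hits); the first access to each address must go to the slow backing store (misses), which is exactly the behaviour cache memory exploits.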
Virtual Memory
Virtual Memory is in fact not a separate device but an imaginary memory area supported by
some operating systems (for example, Windows) in conjunction with the hardware. If a
computer lacks the random access memory (RAM) needed to run a program or operation, the
OS uses a predefined space on hard disk as an extension of RAM. Virtual memory combines
computer’s RAM with temporary space on the hard disk. When RAM runs low, virtual memory
moves data from RAM to a space called a paging file or swap file. Moving data to and from
the paging file frees up RAM to complete its work.
Thus, Virtual memory is an allocation of hard disk space to help RAM. With virtual memory,
the computer can look for areas of RAM that have not been used recently and copy them onto
the hard disk. This frees up space in RAM to load the new applications. Area of the space on
hard disk which acts as an extension to RAM is called a page file or swap file. In some
operating systems an entire disk or partition can be devoted to virtual memory.
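The paging behaviour described above can be modelled as a toy simulation. The number of RAM frames and the least-recently-used replacement choice are illustrative assumptions; real operating systems use more sophisticated policies.

```python
# Toy model of demand paging: RAM holds a few pages; when it is full,
# the least recently used page is moved out to a "swap file" on disk.

RAM_FRAMES = 3

ram = {}        # resident pages: page number -> contents
swap = {}       # pages paged out to the swap file
use_order = []  # recency order, oldest first, for the replacement decision

def access(page):
    if page in ram:                        # already resident: no disk access
        use_order.remove(page)
        use_order.append(page)
        return
    if len(ram) >= RAM_FRAMES:             # RAM full: page out the LRU page
        victim = use_order.pop(0)
        swap[victim] = ram.pop(victim)
    # page in: from swap if previously paged out, else "load" it fresh
    ram[page] = swap.pop(page, f"data-{page}")
    use_order.append(page)

for page in [0, 1, 2, 0, 3]:
    access(page)
print(sorted(ram), sorted(swap))   # prints [0, 2, 3] [1]
```

Accessing page 3 when RAM is full forces page 1 (the least recently used) out to the swap file, freeing a frame, which is the "moving data to and from the paging file" the text describes.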
Secondary Memory
The CPU refers to the main memory for execution of programs, but main memory is
volatile in nature and hence cannot be used to store data on a permanent basis. Moreover,
it is very small in size. Secondary memories are available in huge sizes, so
programs and data can be stored on secondary memory.
Secondary storage differs from primary storage in that it is not directly accessible by the CPU.
The computer usually uses its input/output channels to access secondary storage and
transfers the desired data using intermediate area in primary storage. Secondary storage does
not lose the data when the device is powered down—it is non-volatile.
The features of secondary memory devices are
Non-volatility: Contents are permanent in nature
Greater capacity: These are available in large size
Greater economy: The cost per unit information of these is lesser compared to RAM.
Slow: Slower in speed compared to registers or primary storage
Storage devices could differ amongst each other in terms of:
Speed and access time
Cost / Portability
Capacity
Type of access
Based on these parameters most common forms of secondary storage are:
Hard Drive
DAT Tapes
Pen Drives
CD, DVD and Blu-ray disks
Smart card
The kind of storage we need would depend on our information system objectives.
Semiconductor
Semiconductor memory uses semiconductor-based integrated circuits to store information and could
be volatile or non-volatile. Primary storage like RAM is volatile semiconductor memory. Secondary
memory like flash memory (pen drives) and now flash-based solid-state drives (SSDs) are non-volatile
semiconductor memories. Data is represented by the state of electronic switches.
Magnetic
Magnetic storage uses microscopic magnets of different polarities to store information.
Magnetic storage is non-volatile. Hard disks and tape drives provide magnetic storage. In magnetic
storage, data (0 or 1) is represented by the polarity of the magnetic field.
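Whatever the physical medium, whether transistor states, magnetic polarity or optical pits, the stored information is the same pattern of 0s and 1s. This small sketch shows the bit pattern a byte-oriented store would record for each character of a text, assuming an 8-bit ASCII encoding:

```python
# Each stored character is ultimately eight 0/1 values, regardless of
# whether the medium records them electrically, magnetically or optically.

def to_bits(text):
    """Return each character's 8-bit pattern, as it would be stored."""
    return [format(ord(ch), "08b") for ch in text]

print(to_bits("Hi"))   # prints ['01001000', '01101001']
```

On a hard disk, each of those sixteen bits would be one region of magnetic polarity; on a DVD, each would be the presence or absence of a pit.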
Optical storage
Optical storage stores information by creating microscopic pits on the surface of a circular disc and
reads this information by illuminating the surface with a laser diode and observing the reflection.
Optical disc storage is non-volatile. CDs, DVDs and Blu-ray discs all use optical means to store data.
Evaluation Criteria
Compatibility and Industry Standards
o Is the hardware to be procured compatible with the existing one and does it take
care of future applications?
o Have the workload and performance requirements been calculated and is the
hardware suggested capable of fulfilling them?
o Are there any industry standards for the same, and do the hardware components
comply with them?
Ease of Operations
o Can the hardware be installed and maintained by locally available engineers?
o Can the hardware be serviced, maintained, and upgraded locally?
o Does the hardware need any special training for its operation or will the users be
able to access/use it with minimal additional technological competence?
Support
o What type of technical support will be provided by the vendor?
o Are appropriate manuals for various operations available?
o If so, are they available in a variety of media?
o Can the manuals be understood by intended users?
o Does the vendor have a strong Research and Development Division with
adequate staff?
o Will the vendor help in the smooth transition from the existing application to the
new one?
o What is the quantum of training that the vendor will provide?
o What is the backup facility that the vendor will provide?
Cost
o Is the cost quoted competitive and comprehensive?
o Are all the requested components included in the purchase price?
o Are there any hidden costs?
o Will the vendor provide support at an affordable cost?
For acquiring hardware, an organisation would issue an ITT (Invitation to Tender) or
RFP (Request for Proposal).
ITT or RFP
The ITT or RFP documents the requirements of the organisation. It has to inter alia include the
following:
Information processing requirements: In terms of workload and performance
requirements in accordance with applications which are proposed to be implemented.
Hardware requirements: In terms of Processing Speed, peripheral Devices and
Network capability required as per business need.
System software applications: In terms of Operating System, Database
management Software and other system software requirements.
Support requirements: In terms of post implementation support and training.
Adaptability requirements: In terms of compatibility with existing infrastructure and
upgradation capabilities.
Constraints: In terms of Delivery dates
Conversion requirements: In terms of testing, pricing, structure and schedule.
Delivery schedules
Upgradation capabilities
Benchmarking
Contract terms (Right to audit)
Price
constant and never-ending growth of data and requirement for greater storage capacity along with
the problem of data safety, security and integrity. In modern day enterprises, which employ large
scale database applications or multimedia applications, the requirements for disk storage run from
gigabytes to terabytes. If a proper data and storage management mechanism is in place, problems
of downtime, business loss on account of lost data and insufficient storage space can be avoided.
The key issues in data and capacity management are:
How to effectively manage rapidly growing volume of data?
How to leverage data and storage technology to support business needs?
What is the best data and storage management framework for an enterprising
business environment?
How to optimize the performance of the data storage hardware and software to
ensure high availability?
What is the best way to achieve greater storage capacity?
How effective is the current data backup and storage management system?
Capacity Planning is the planning and monitoring of computer resources to ensure that the
available resources are being used efficiently and effectively, in terms of:
CPU utilisation
Computer Storage Utilisation
Telecommunication and Wide Area Network bandwidth utilisation
Terminal utilisation
I/O channel utilisation etc.
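A minimal capacity-monitoring check over the utilisation figures listed above could look like the sketch below. The 80% threshold and the sample readings are assumptions for illustration; a real capacity plan would set thresholds per resource and per business requirement.

```python
# Illustrative capacity check: flag a resource whose sampled utilisation
# exceeds a planning threshold. Threshold and samples are invented.

def average_utilisation(samples):
    """Mean utilisation over a set of readings (each between 0 and 1)."""
    return sum(samples) / len(samples)

def needs_upgrade(samples, threshold=0.80):
    """True when average utilisation is above the planning threshold."""
    return average_utilisation(samples) > threshold

cpu_samples = [0.72, 0.85, 0.91, 0.88]   # hypothetical CPU readings
print(needs_upgrade(cpu_samples))        # prints True (average is 0.84)
```

Running such a check periodically over CPU, storage, bandwidth and I/O channel readings is one simple way to turn the monitoring obligation of capacity planning into an actionable signal.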
Like every other investment in an organisation, IT investment too has to be justified on the basis of
Returns on Investment (ROI). The capacity of IT resources required, such as processor power,
CPU clock speed, hard disk capacity and number of terminals, has to be planned meticulously,
keeping in view the business requirements of the present and the near future and also the rate of
technology obsolescence, so that the benefits from IT investments are realised.
Proper capacity planning has the following advantages:
It assures appropriate infrastructure capacity levels to support existing and future
business functions.
It reduces resource expenditures and IT operating costs.
It improves application infrastructure availability.
It enables prioritization and reallocation of existing applications, systems, and network
components.
It projects future resource needs, protecting the infrastructure from slow-downs as
capacity demands increase.
It facilitates improved support of new and existing applications.
1.7 Summary
Modern organisations are heavily dependent on automated information systems for performing
processing of transactions whether it is related to delivery of product or services. The basic
infrastructure required to develop, maintain and sustain Information Systems is IT. Hardware is
one of the important components and includes the input, output, processing and storage devices.
It is important to understand functionality of various components of hardware as hardware is the
important resource in managing capacities for effective and efficient working of an enterprise.
Investment in IT hardware is a major investment, and its maintenance is a major component of
expenditure. Hence, effective and optimum utilisation of IT assets is an important function in any
organisation to ensure organisational goals are achieved.
1.8 References
http://www.cio.in
Ralph M. Stair, George W. Reynolds, ‘Principles of Information Systems’, Cengage
Learning
www.whatis.com
www.howstuffworks.com
www.google.com
www.wikipedia.com
www.youtube.com
PART 2: SYSTEMS SOFTWARE
Learning Objectives
To gain understanding of system software and application software
To understand operating systems
To understand various types of system software
To gain understanding of software asset management
To understand how to audit software
2.1 Introduction
Computer hardware constitutes the physical components and works based on the instruction inputs
given by users and other programs. These sets of instruction inputs are called software. The term
software in its broad sense refers to the programs that the machine executes. Software is the
intangible portion, whereas hardware is the tangible part which we can touch and see. Computer
software is the collection of computer programs, procedures and documentation that perform
different tasks on a computer system. Program software performs the function of the program it
implements, either by directly providing instructions to the computer hardware or by serving as
input to another piece of software.
2.1.1 Software
Software consists of clearly defined instruction sets that, upon execution, tell a computer what to
do. It is a set of instructions which, when executed in a specified sequence, accomplishes a desired
task. The most critical function of software is to direct the working of computer hardware, causing
the computer to perform useful tasks. There are two broad categories of software:
1. System Software
2. Application Software
1. System Software
System software is a collection of computer programs used in design, processing and control
of application software. System software coordinates instructions between applications and
hardware. It is the low-level software required to manage computer resources and support the
production or execution of application programs but is not specific to any particular application.
These refer to the set of programs that -
o Enable a computer to work.
o Control the working of a computer.
2. Application Software
Users interact with Application Software. It refers to the set of software that performs a specific
function directly for the end-user. It is a collection of programs which address a real life problem
for its end users, may be business or scientific or any other problem. There are varieties of
application software for different needs of users. Some of the Application Programs could be
o Application Suite: E.g. MS Office etc.
o Enterprise Software: E.g. ERP Applications like SAP, etc.
o Enterprise Infrastructure Software: E.g. email servers, Security software.
o Information Worker Software: E.g. CAAT (Computer assisted audit tools), etc.
o Content Access Software: E.g. Media Players etc.
o Educational Software: E.g. ELearning, Examination Test CDs
o Media Development Software: E.g. Desktop Publishing, Video Editing etc.
Multi-processing
o Links more than one processor (CPU)
o If the underlying hardware provides more than one processor then that is
multiprocessing.
Multi-threading
o It is an execution model that allows a single process to have multiple code
(threads) run concurrently within the context of that process
o Runs several processes or threads of a program simultaneously.
o Threads can be imagined as child processes that share the parent process
resources but execute independently.
Processor Management
In multitasking environment, OS decides which task (process) gets the processor when and for
how much time. This function is called process scheduling.
Operating System does the following activities for processor management.
Keeps tracks of processor and status of process.
Allocates and de-allocates the processor (CPU) to a process.
Creating and deleting both user and system processes by suspending and resuming
processes.
Providing mechanisms for process synchronization.
Providing mechanisms for process communication.
Providing mechanism for deadlock handling.
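The process-scheduling activity described above can be sketched with the classic round-robin policy: the OS allocates the CPU to each ready process for a fixed time slice, then pre-empts it and moves to the next. The process names, burst times and time slice are invented for illustration; round-robin is only one of several scheduling policies an OS may use.

```python
# Sketch of round-robin process scheduling: allocate the CPU for a fixed
# time slice, pre-empt, and move the process to the back of the queue.
from collections import deque

def round_robin(processes, time_slice):
    """processes: list of (name, burst_time). Returns the finish order."""
    queue = deque(processes)
    finished = []
    while queue:
        name, remaining = queue.popleft()    # allocate the CPU to a process
        remaining -= min(time_slice, remaining)
        if remaining == 0:
            finished.append(name)            # de-allocate: process complete
        else:
            queue.append((name, remaining))  # pre-empt: back of the queue
    return finished

print(round_robin([("P1", 5), ("P2", 3), ("P3", 1)], time_slice=2))
# prints ['P3', 'P2', 'P1']
```

The shortest process (P3) finishes first even though it arrived last in the queue's first pass, which illustrates why the OS "decides which task gets the processor when and for how much time".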
Memory Management
Memory management refers to management of Primary Memory or Main Memory. Main memory
is a large array of words or bytes where each word or byte has its own address. Main memory
provides a fast storage that can be accessed directly by the CPU. So, for a program to be
executed, it must be in the main memory.
Device Management
One of the purposes of an operating system is to hide the peculiarities of specific hardware devices
from users; this is done by the I/O subsystem.
The I/O subsystem consists of:
o A memory management component that includes buffering, caching and
spooling.
o A general device-driver interface.
o Drivers for specific hardware devices.
OS manages device communication via their respective drivers.
Operating System does the following activities for device management.
o Keeps track of all devices. The program responsible for this task is known as the
I/O controller.
o Decides which process gets the device when and for how much time.
o Allocates and de-allocates the device in the efficient way.
File Management
A file system is normally organized into directories for easy navigation and usage. These directories
may contain files and other directories.
Operating System does the following activities for file management.
o Keeps track of information, location, usage, status etc. The collective facilities are
often known as the file system.
o Decides who gets the resources.
o Allocates and De-allocates the resources.
o Creating and deleting files.
o Creating and deleting directories.
o Mapping files onto secondary storage.
o Backing up files on stable storage media.
Networking
The processors in the system are connected through a communication network, configured in a
number of different ways. Different protocols are used to communicate between different
computers.
Protection System
Protection is a mechanism for controlling the access of programs, processes or users to the
resources defined by a computer system. This is achieved by means of password and similar other
techniques, preventing unauthorised access to programs and data. This mechanism provides
means for specification of controls to be imposed and means for enforcement. Protection
mechanism improves reliability by detecting latent errors at the interfaces between computer
subsystems.
An interpreter takes one statement at a time, translates it and executes it; thus debugging is
easier. It occupies less memory space. The machine codes produced by an interpreter are not saved.
SQL Engine: It converts SQL statement to Machine Code.
Debugger: A debugger is a utility that helps in identifying any problem occurring during execution.
By using the debugger utility, one can pinpoint when and where a program terminates abnormally,
inspect the values of the variables used, and in general obtain information so that the
programmer can locate bugs.
Linker: Linking is the process of combining various pieces of code and data together to form a
single executable unit that can be loaded in memory. The process of linking is generally done
during compilation of the program. After linking, the loading process takes place by using loader
software.
Loader: A loader is a program which loads the code and data of the executable object file into the
main memory, goes to the very first instruction and then executes the program.
Editor: An editor program is a system software utility that allows the user to create and edit files.
During the editing process, no special characters are added. On account of this, the editor utility is
used to create the source code.
environments where users manage their own file security. An ACL is a matrix of access permissions,
with columns for files/objects and rows for users and their related capabilities.
Access control lists are widely used, often together with roles, where a role is a set of users,
such as Administrator, User or Guest. In role-based access control, permissions are assigned to
roles, and each user obtains permissions through the roles they are assigned.
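Role-based access control can be sketched in a few lines. The role names echo the Windows defaults mentioned below, but the permission names and user assignments are invented for illustration:

```python
# Minimal role-based access control: permissions attach to roles,
# and users receive permissions only through their assigned roles.

ROLE_PERMISSIONS = {
    "Administrator": {"read", "write", "create_user"},
    "Standard User": {"read", "write"},
    "Guest": {"read"},
}

# Hypothetical user-to-role assignments.
user_roles = {"alice": ["Administrator"], "bob": ["Guest"]}

def is_allowed(user, permission):
    """A user may perform an action if any of their roles grants it."""
    return any(permission in ROLE_PERMISSIONS[role]
               for role in user_roles.get(user, []))

print(is_allowed("alice", "create_user"))  # prints True
print(is_allowed("bob", "write"))          # prints False
```

Because the permission check never refers to users directly, changing what a role may do updates every user holding that role, which is the main administrative advantage of RBAC over per-user permission lists.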
User Creation
In Windows 7 we can create a user by going to Control Panel > User Accounts > Manage accounts.
Assign Roles
In Windows 7 we can assign roles by going to Control Panel > User Accounts > Manage accounts.
The default Roles could be Standard User or Administrator
Assign Password
In Windows 7, we can assign password by: Control Panel > User Accounts > Manage accounts
Permissions
Permissions are attached to objects. Right Click on Properties > Security Tab > Advanced or Edit
Requirements Definition
The key tasks that should be considered for requirement definitions:
Establish the scope, objectives, background and project charter.
Establish business requirements.
Develop a conceptual model of the base computer environment that will support the efficient
application development and processing required to meet the business needs and structure.
Develop security, control and performance requirements.
Consolidate the definition of all requirements.
Analyse and evaluate alternative solutions.
Software Alternatives
The key tasks that should be considered in evaluating software alternatives are:
Criteria for selecting or rejecting alternatives.
Cost factors in development vs. purchase decision.
Software cost.
Initial and continuing support availability.
Delivery schedule including lead time requirements.
Requirements and constraints in order to use the software.
Capabilities and limitations of the software.
Potential risk of using a package in terms of future costs and vendor access to the
organisation.
Selection advice from vendors, comparable installations and consultants.
Compatibility with existing in-house system software.
Financial stability of the software suppliers.
Technical expertise of the software suppliers.
Cost/Benefit Analysis
Cost
Functionality
Vendor support
Viability of vendor
Flexibility
Documentation
Response time
Ease of installation
Software Development
If we have planned for Software development the steps involved are:
Requirement Definition- Business requirements are gathered in this phase.
Questions answered are: Who is going to use the system? How will they use the
system? What data should be input into the system? What data should be output by
the system?
Requirement Analysis- These requirements are analysed for their validity and the
possibility of incorporating the requirements in the Application and a Requirement
Specification document is created
Design- software design is prepared from the requirement specifications
Coding- Based on the Design, code is produced by the developer and is the longest
phase.
Testing- Code produced is tested against the requirements to make sure that the
product is actually solving the needs addressed and gathered during the
requirements phase
Changeover and Implementation- After successful testing the product is deployed
in the live environment.
Post Implementation Review: Once the user starts using the developed system
then the actual problems crop up and these have to be solved from time to time.
Maintenance- The process of making corrections and upgradations to the
developed product is known as maintenance.
Software Asset Management (SAM) is the business practice of using people, processes, and
technology to systematically track, evaluate, and manage software licenses and software usage.
SAM is a key process for any organisation to meet its legal, financial and reputational
responsibilities. SAM answers the following key questions:
What is installed in the environment?
What is supposed to be installed?
Who is using the Software?
How much are they using it?
Are they supposed to be using it?
How are they using it?
Can they prove they’re allowed to use it?
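The core SAM reconciliation behind questions like "what is installed?" and "are they allowed to use it?" can be sketched as a comparison of installed counts against licence entitlements. The product names and counts below are invented for illustration:

```python
# Sketch of a SAM reconciliation: compare installations against licences.
# Product names and counts are hypothetical.

licensed  = {"OfficeSuite": 50, "AVScanner": 100}   # entitlements held
installed = {"OfficeSuite": 62, "AVScanner": 80, "MediaTool": 5}

def licence_gaps(installed, licensed):
    """Return products installed beyond (or without) their entitlement."""
    gaps = {}
    for product, count in installed.items():
        allowed = licensed.get(product, 0)
        if count > allowed:
            gaps[product] = count - allowed   # excess installations
    return gaps

print(licence_gaps(installed, licensed))
# prints {'OfficeSuite': 12, 'MediaTool': 5}
```

Here OfficeSuite is over-deployed by 12 copies and MediaTool has no entitlement at all; both are exactly the legal and financial exposures SAM exists to surface.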
Benefits of SAM
Software is a critical and valuable asset and needs effective management. Key benefits of
implementing SAM are:
Cost Savings
o With SAM we can control cost through standardisation.
o Get volume discounts by opting for volume licensing.
o Analysing our programs may reveal redundant programs, allowing us to reduce
redundancy.
o With awareness we have more streamlined operations.
Risk Management
o Reduces business, reputational and legal risks.
o When genuine software is used, reputational and legal risk is reduced.
o With updated programs, chances of malware also reduce, reducing business
risk.
Streamlined Operations
o Proper software deployment makes software trouble free, leading to better
productivity.
o With genuine software we’ll have peace of mind and infrastructure optimisations
are possible.
Good Governance
o Better compliance with responsibilities under regulations leads to better
governance.
Disaster Protection
o Updated inventory lets us know what we have.
o We could also organise offsite backups of the software media that we have.
o Faster restoration is possible.
What is an Endpoint?
An endpoint device is an Internet-capable computer hardware device on a TCP/IP network, where
an information stream is generated or ends. This could be a:
Computer
Laptop
Smartphone
Thin client
POS terminal etc.
In any Distributed environment endpoints need to be deployed, configured, patched, secured and
supported.
Endpoint Management solutions encompass:
Patch delivery
Inventory
Software distribution
OS deployment
Remote control capabilities, and
Near real-time visibility into the state of endpoints, including advanced capabilities
to support server endpoints
Issues
It is difficult to manually identify systems that require OS migrations, upgrades, or
replacements
Information is not easily available to track hardware and software assets to verify software
and hardware contracts
Managing remote offices involves high on-site IT support costs
This is where Digital Rights Management (DRM) comes into play: it protects the Intellectual
Property Rights (IPR) relating to digital content.
DRM removes usage control from the person in possession of digital content and puts it in the
hands of a computer program. It refers to any scheme that controls access to copyrighted material
using technological means. This is also associated with concept of Data leak prevention (DLP),
which is a strategy for making sure that end users do not send sensitive or critical information
outside of the corporate network.
Scope of DRM
Digital Rights management consists of all activity conducted in relation to the rights governing the:
Digital content creation
Distribution
Storage
Retrieval
Use
Disposal
Why DRM
To protect the property rights of an enterprise’s assets
Primarily for protecting data from leak
To establish the awareness of Intellectual Property Rights (IPR) in society.
o Address both, the Information System (IS) requirements and business plans.
o Include an overview of the capabilities of the software and control options.
o Same selection criteria are applied to all proposals.
Review cost/benefit analysis of system software procedures to determine the following:
o Total cost of ownership has been considered while deciding on system software.
o Hardware requirements.
o Training and technical support requirements.
o Impact on data security.
o Financial stability of the vendor’s operations.
Review controls over the installation of changed system software to determine the
following:
o System software changes are scheduled when they least impact transaction
processing.
o A written plan is in place for testing changes to system software.
o Tests are being completed as planned.
o Problems encountered during testing were resolved and the changes were re-tested.
o Fall-back or restoration procedures are in place in case of failure.
A review of Configuration Management and OS Hardening would involve:
o A review of the pre-defined/default user accounts,
o Whether the multi-boot option is required and, if not, whether it has been disabled,
o Whether services enabled are as per organisation's business requirements,
o What devices to be allowed/disallowed
o What resources are accessible by default by all users?
o What are the procedures for controlling application of patches?
o Are the user accounts and particularly the Admin/Super User accounts accessed
through a secured access control mechanism?
o At a PC level, is a personal firewall enabled or not?
Review system software maintenance activities to determine the following:
o Changes made to the system software are documented.
o Current versions of the software are supported by the vendor.
o Vendor’s maintenance activities are logged.
Review systems documentation specifically in the areas of:
o Parameter tables.
o Activity logs/reports.
Review and test systems software implementation to determine the adequacy of controls
in:
o Change procedures.
o Authorisation procedures.
o Access security features.
o Documentation requirements.
o Documentation of system testing.
o Audit trails.
2.8 Summary
Hardware becomes functional only with the software which runs on it. The cost and complexity of
software has been continuously increasing over the years. The two most important types of
software for any organisation are System Software and Application software. System software
includes both operating systems and utility programs. System software controls how various
technology tools work with application software. The operating system is an important piece of
system software, as it controls how hardware devices work together and how they interface with
application software. Utility software provides additional functionality to operating systems. Application
Software fulfils the specific information processing needs of an organisation. Software is crucial to
the success of any organisation and hence has to be effectively and constantly managed.
Understanding how software works in terms of various features and functionality helps us in
optimally managing all the software assets throughout the economic lifecycle.
2.9 References
Ralph M. Stair, George W. Reynolds, ‘Principles of Information Systems’, Cengage
Learning
ITAssetManagement.Net
BSA.org
http://www.gcflearnfree.org/computerbasics/2
PART 3: DATABASE AND DATABASE
MANAGEMENT SYSTEMS
Learning Objectives
To gain understanding of Data Base Management Systems
To understand Database Models
To understand Database Languages
To gain understanding of People involved with DBMS
To understand control features in Database Management Systems
To understand how to audit DBMS
3.1 Introduction
Every organisation has to manage its information; that is, an organisation should know what it needs
in terms of information, acquire that information, organise it in a meaningful way,
assure information quality and provide software tools so that users in the organisation can access
the information they require. To achieve these objectives, the organisation has to use a Database
Management System (DBMS).
Chapter 1 Part 3: Database and Database Management Systems
The Database system helps us do various operations on these files such as:
Adding new files to database
Deleting existing files from database
Inserting data in existing files
Modifying data in existing files
Deleting data in existing files
Retrieving or querying data from existing files.
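These operations map directly onto SQL statements. As an illustrative sketch (the table and data are invented for the example), using Python's built-in sqlite3 module as a stand-in for a full DBMS:

```python
import sqlite3

# An in-memory SQLite database stands in for a full DBMS (illustrative only).
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Adding a new file (table) to the database
cur.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")

# Inserting data in an existing file
cur.execute("INSERT INTO customers (id, name) VALUES (1, 'Asha')")
cur.execute("INSERT INTO customers (id, name) VALUES (2, 'Ravi')")

# Modifying data in an existing file
cur.execute("UPDATE customers SET name = 'Ravi K' WHERE id = 2")

# Deleting data in an existing file
cur.execute("DELETE FROM customers WHERE id = 1")

# Retrieving or querying data from an existing file
rows = cur.execute("SELECT id, name FROM customers").fetchall()
print(rows)  # [(2, 'Ravi K')]

# Deleting an existing file (table) from the database
cur.execute("DROP TABLE customers")
conn.close()
```

Each comment corresponds to one of the operations listed above; a production DBMS would add concurrency control, access control and recovery on top of these basics.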
(Figure: the Invoicing and Accounting applications access a common Database through the Database
Management System)
With this Database approach, applications would access a common database through Database
Management Systems.
Advantages of a DBMS
Permitting Data Sharing:
One of the main advantages of a DBMS is that the same information can be made
available to different users.
Controlling Data Redundancy:
In a DBMS, duplication of information (redundancy) is, if not eliminated, carefully
controlled or reduced; i.e. there is no need to repeat the same data over and over again.
Minimising redundancy can, therefore, significantly reduce the cost of storing information
on hard drives and other storage devices, and also helps avoid data inconsistency.
Disadvantages of a DBMS
There are two major downsides to using DBMSs. These are Cost and Threat to data security.
Expensive
o Implementing a DBMS can be expensive and time-consuming,
especially in large organisations.
o Training requirements alone can be quite costly.
o Other disadvantages include the cost of data conversion from legacy non-DBMS
solutions and the requirement of specific skill sets to manage the DBMS (the
need for a DBA).
o A DBA is required to fine-tune the DBMS from time to time as volumes increase
and requirements change.
Security:
o In the absence of proper safeguards, it may be possible for
unauthorised users to access the database. If someone gains access to the
database, it could be an all-or-nothing proposition.
DBMS failure: Single point of failure
o As the whole data may be stored in the DBMS, it becomes a single point of
failure resulting in disruption of services/non-availability of data.
A typical hierarchical structure is the directory (folder) tree in the Windows OS.
This model represents the database as a collection of Relations and is governed by these rules:
• Data is represented in a two-dimensional table, in rows and columns.
• Columns describe the attributes.
• Each column in the table has a unique name.
• All the entries in any column are of the same type or same domain.
• Each column has a domain, a set of possible values that can appear in that column.
• A row in the table is called a Tuple.
• Ordering of rows and columns is insignificant.
• Duplicate rows are not allowed.
• All data items stored in the columns are atomic in nature, that is, they cannot be split
further without loss of information.
• In many tables, there is a column called the Key Column whose value is unique and
cannot be null.
• Entity Constraint: This rule ensures that the data values for the primary key are valid
and not null. Example: in an Income Tax Assessee database, the primary key would be
the PAN number, which is unique and never null.
• Referential Integrity: In a relational data model, associations between tables are defined
by using foreign keys. The referential integrity rule states that if there is a foreign key in
one table, either the foreign key must match the primary key of the other table or else the
foreign key value must be null.
In the above case, SalesPersonID in the Sales Order table is a foreign key but is the primary key
in the Sales Person table. This constraint says that if, for a transaction in the Sales Order table,
SalesPersonID is 0033, then that value should exist as a primary key in the Sales Person table.
There might also be a case of a direct order, in which case SalesPersonID would be null (in the
Sales Order table). Thus, the referential integrity constraint would not allow someone to create an
order booked by a non-existent salesperson.
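A minimal sketch of this constraint, using Python's sqlite3 module (table and column names follow the Sales Order/Sales Person example above; note that SQLite enforces foreign keys only when the pragma is enabled):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled

conn.execute("CREATE TABLE sales_person (sp_id TEXT PRIMARY KEY, name TEXT)")
conn.execute("""CREATE TABLE sales_order (
    order_id INTEGER PRIMARY KEY,
    sp_id TEXT REFERENCES sales_person(sp_id))""")

conn.execute("INSERT INTO sales_person VALUES ('0033', 'Meena')")

# Allowed: the foreign key matches a primary key in sales_person.
conn.execute("INSERT INTO sales_order VALUES (1, '0033')")

# Allowed: a direct order, so the foreign key is NULL.
conn.execute("INSERT INTO sales_order VALUES (2, NULL)")

# Rejected: '9999' is a non-existent salesperson.
try:
    conn.execute("INSERT INTO sales_order VALUES (3, '9999')")
except sqlite3.IntegrityError as e:
    print("rejected:", e)  # rejected: FOREIGN KEY constraint failed
```

Only the first two inserts survive; the third violates referential integrity and is refused by the DBMS itself, independent of any application logic.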
People Associated With DBMS
• Data Owners: Owners of database
• Database Designers: Help in Database design and Structure
• Application Developers: Implement Database Design and application programs
• Database Administrators: Manage Database Management Systems (discussed later)
• End Users: Who query and update databases
Normalisation
“Norma” in Latin means a carpenter’s square, whose job is to set things at right angles.
Similarly, normalisation is used to set the table right. Database normalisation is a conceptual
database design technique which involves organising the fields and tables of a relational database
to minimise redundancy and dependency on non-key elements; the objective is that all non-key
items are related on key fields. It is a process of conceptual database design achieved through
identification of relationships among various data items, grouping of data items, and establishing
relationships and constraints. Normalisation usually involves dividing large tables into smaller (and
less redundant) tables and defining relationships between them in order to improve storage
efficiency, data integrity and scalability. The objective is to organise the data tables in such a way
that additions, deletions, and modifications of a field can be made in just one table and then
propagated through the rest of the database using the defined relationships.
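The idea can be sketched in plain Python (the data is invented for the example): a flat table that repeats salesperson details on every order row is split into two related tables, so that a change needs to be made in one place only:

```python
# A flat, un-normalised table: salesperson details repeat on every order row.
flat = [
    {"order_id": 1, "sp_id": "0033", "sp_name": "Meena", "amount": 500},
    {"order_id": 2, "sp_id": "0033", "sp_name": "Meena", "amount": 750},
    {"order_id": 3, "sp_id": "0041", "sp_name": "Raj",   "amount": 200},
]

# Normalisation: pull the repeating non-key data into its own table,
# keyed on sp_id, and keep only the key in the orders table.
sales_person = {row["sp_id"]: row["sp_name"] for row in flat}
sales_order = [{"order_id": r["order_id"], "sp_id": r["sp_id"],
                "amount": r["amount"]} for r in flat]

# A name change is now a single update, propagated via the relationship.
sales_person["0033"] = "Meena K"
print(sales_person)             # {'0033': 'Meena K', '0041': 'Raj'}
print(sales_order[0]["sp_id"])  # 0033
```

In a real database the two structures would be two tables linked by a foreign key; here dictionaries and lists stand in for them to keep the illustration short.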
External Schema
It is the user’s view of the database. This level describes that part of the database that is relevant
to each user.
External level is the one which is closest to the end users. Individual users are given different views
according to the user’s requirement.
Let’s say an HR person’s data requirement would be EmployeeID, EmployeeName,
DateofBirth, DateofJoining, Department, Designation and Basic Pay, whereas a payroll person would
be interested in EmployeeID, EmployeeName, Department, Designation, Basic Pay, Leave, TDS and
Deduction. These would be different users’ views of the data, i.e. external schemas.
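External schemas are commonly implemented as views over the underlying tables. A sketch using Python's sqlite3 module (the column names follow the HR/payroll example above; the table layout is invented for the illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE employee (
    emp_id INTEGER PRIMARY KEY, name TEXT, dob TEXT, doj TEXT,
    dept TEXT, designation TEXT, basic_pay REAL, leave INTEGER,
    tds REAL, deduction REAL)""")
conn.execute("""INSERT INTO employee VALUES
    (1, 'Asha', '1990-01-01', '2015-06-01', 'Sales', 'Manager',
     50000, 12, 5000, 2000)""")

# Two external schemas (user views) over the same conceptual schema.
conn.execute("""CREATE VIEW hr_view AS
    SELECT emp_id, name, dob, doj, dept, designation, basic_pay
    FROM employee""")
conn.execute("""CREATE VIEW payroll_view AS
    SELECT emp_id, name, dept, designation, basic_pay, leave, tds, deduction
    FROM employee""")

print(conn.execute("SELECT name, basic_pay FROM payroll_view").fetchall())
# [('Asha', 50000.0)]
```

Each user community queries its own view; the single employee table underneath is the conceptual schema they share.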
Conceptual Schema
This level describes what data is stored in the database and the relationships and constraints
among the data. This level contains the logical structure of the entire database. The conceptual
level represents all entities, their attributes and their relationships.
The conceptual schema in the above case would consist of EmployeeID, EmployeeName, DateofBirth,
DateofJoining, Department, Designation, Basic Pay, Leave, TDS, Deduction and the relationships
amongst them, i.e. it incorporates the view of the community of users, HR as well as payroll.
All these data items would be in the form of Normalized small Tables with relationships and
constraints.
Internal Schema
It is the physical representation of database on the computer. This level describes how the data is
physically stored in the database. The internal level is concerned with storage space allocation,
Record descriptions, Records placement, Data Compression and Data Encryption Techniques.
(Figure: the conceptual schema describes the logical structure of the database, while the internal
schema describes the physical storage structure)
Data Definition Language (DDL) is used by the DBA (Database Administrator- explained later) and
by database designers to define both conceptual and internal schemas. DDL does not deal with
data.
It is a set of SQL commands used to create, modify and drop database structure. These commands
are not generally used by a common user.
Example: to create a schema, the SQL statement CREATE TABLE is used, and to modify a schema,
ALTER TABLE is used. All these statements are used to define the database.
In the above example: CREATE table isastudent (StudentID varchar (20), roll_no number (10))
This means that we want to create a table with the name ‘isastudent’ with two columns, StudentID
and roll_no. The StudentID field will hold character data with a maximum size of 20
characters. Similarly, roll_no will be a numeric type with a maximum length of 10 digits.
Use of ALTER with RENAME for Renaming Table
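The CREATE and ALTER statements above can be run against an in-memory SQLite database via Python's sqlite3 module (SQLite treats the declared lengths as advisory rather than enforcing them):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# DDL from the text: a table 'isastudent' with a 20-character StudentID
# and a 10-digit roll number.
conn.execute("CREATE TABLE isastudent (StudentID VARCHAR(20), roll_no NUMBER(10))")

# Use of ALTER with RENAME: rename the table to 'isastudents'.
conn.execute("ALTER TABLE isastudent RENAME TO isastudents")

conn.execute("INSERT INTO isastudents VALUES ('1888', 1234567890)")
rows = conn.execute(
    "SELECT StudentID, roll_no FROM isastudents WHERE StudentID = '1888'"
).fetchall()
print(rows)  # [('1888', 1234567890)]
```

The final SELECT is the DML query discussed in the next section; the CREATE and ALTER statements before it are DDL, defining and then renaming the schema object.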
In the above example: SELECT StudentID, roll_No FROM isastudents WHERE StudentID = '1888'
In the table isastudents, StudentID and roll_No refer to the two fields containing the student ID and
roll number respectively, and we are trying to extract the data relating to the student with ID
‘1888’.
Some enterprises have a separate position of Data Administrator (DA), which is a non-technical
position. The job of the Data Administrator relates to data rather than the database. All
administrative and policy matters rest with the DA. Thus, the primary responsibility of defining the
external and conceptual schemas is the job of the DA, whereas the primary responsibility of defining
the internal schema rests with the DBA.
Stored Procedures
o Database servers offer developers the ability to create and reuse SQL code
through the use of objects called Stored Procedures (groups of SQL
statements).
o These are available to applications accessing a database system and are
actually stored in database. Stored procedures reduce the long SQL queries to
a single line.
Triggers
o These are designed to be automatically ‘fired’ when a specific action/event takes
place within a database. E.g. On execution of code to delete an employee from
Master File, a trigger would be executed (automatically) to scan the Employee
Liability Table to check whether any liability against this employee exists.
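The employee-liability example can be sketched with an SQLite trigger via Python's sqlite3 module (the table names are invented for the illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE employee_master (emp_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE employee_liability (emp_id INTEGER, amount REAL);

-- Fired automatically before a delete on the master file: abort if any
-- liability against this employee still exists.
CREATE TRIGGER guard_delete BEFORE DELETE ON employee_master
WHEN EXISTS (SELECT 1 FROM employee_liability WHERE emp_id = OLD.emp_id)
BEGIN
    SELECT RAISE(ABORT, 'employee has outstanding liability');
END;

INSERT INTO employee_master VALUES (7, 'Ravi');
INSERT INTO employee_liability VALUES (7, 1200.0);
""")

try:
    conn.execute("DELETE FROM employee_master WHERE emp_id = 7")
except sqlite3.IntegrityError as e:
    print("blocked:", e)  # blocked: employee has outstanding liability
```

No application code checks the liability table; the trigger fires automatically inside the DBMS whenever a delete is attempted, which is exactly the point of triggers as a control.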
The Database Administrator will then grant access rights or privileges to users as shown below.
Auditing Database
The databases may be Front End or Back end. It is important to audit application controls.
Applications facilitate access by customers, employees, and business partners. Many different
applications may be accessing the same database and be subject to differing sets of controls. It is
important to audit the management of controls and also segregation of duties relating to these
applications.
Perimeter Controls: Data access is distributed in most organisations. Perimeter controls
(Firewalls, IDS etc.) protect the databases from malicious users. It is essential to audit
these perimeter controls.
Database Controls: Segregation of duties, concurrency control and the possibility of anybody
accessing data without going through the application are some of the controls which need to be
examined.
3.9 Summary
At the very heart of most management information systems are a database and a database
management system. A database maintains information about various entities and a database
management system (DBMS) is software through which users and application programs interact
with a database. We can very well say that Databases and Database Management Systems form
the foundation of information economy. DBMS is crucial for any organisation and has to be
controlled, monitored and assessed.
3.10 References
1. C J Date, An Introduction to Database Systems (8th Edition), Addison-Wesley.
2. Raghu Ramakrishnan, Database Management Systems, 3rd Edition, McGraw-Hill
3. http://asktom.oracle.com
4. http://www.java2s.com/Code/SQLServer/CatalogSQLServer.htm
PART 4: HARDWARE/SOFTWARE DEPLOYMENT
STRATEGIES
Learning Objectives
To gain understanding of deployment strategies for IS Infrastructure
To understand Information Technology Components of a Data Centre in CBS
Environment
To understand Configuration management of IS Components
To gain understanding of Hardening of Systems
To understand Auditing IS Infrastructure.
4.1 Introduction
Any business organisation would like to achieve its goals and objectives. To meet these business
objectives, it will have business processes, and these business processes would need certain IT
services. These IT services, in turn, require IT infrastructure. Following this top-down approach,
business organisations plan their IT infrastructure, since only a well-designed IT infrastructure can
help an organisation achieve its objectives. After making an IT plan, it is imperative to implement
it; the term used most often for this implementation is “Deployment”. Deployment in this context
involves acquiring new software and hardware and getting it up and running, including installation,
configuration, testing and any necessary customisations and modifications.
from several different sources, and/or there are particular technical issues such as the design of
the network. An example could be most CBS banks.
Benefits:
Resource Sharing
o Data used across organisation in one place,
o Easier to undertake organisation-wide activities.
o Exchange of hardware, software.
Avoidance of duplication
Better security
Achieving economies of scale.
All data is at central level so it can be fully replicated for higher availability and disaster
recovery purposes.
Policy enforcements can be done at central level.
Security measures like patch management, anti-virus management etc. can be done at the central level.
Disadvantages:
Single Point of Failure
o Central server and databases could be single points of failure. Thus, there is a
need for proper controls in the form of database mirroring, back-up sites,
component redundancies, access controls etc.
Inflexibility
o Less flexibility to cope with local internal or external changes
Increased dependence and vulnerability
o Greater numbers of staff rely on single information systems,
o Greater reliance on a few key staff who plan, develop and run those systems,
o Greater technical complexity that makes problems harder to diagnose, and
o Greater potential impact of data security breaches.
Chapter 1, Part 4: Hardware/Software Deployment Strategies
Benefits:
Greater fit between systems and local needs
o Proximity of user and developer helps meet the users’ real needs.
Higher usage of computerised systems
o Fit local needs, users are motivated
Faster system development
o less distance between system user and system developer
No single point of Failure
o One unit can work as a back-up site for the other unit.
Reduced CAPEX
o Deployment can be done in phases, and hence can be matched with budget
cycles, since all deployment need not happen at one go.
o Upgrades can be done in phases.
Disadvantages:
Barriers to sharing data
o Information systems could be mutually incompatible.
Barriers to sharing other resources, including human resources
o Inability to share resources other than data
Duplication of effort
o Database replication would be required at all decentralised locations, which could
consume more resources.
Lack of Centralised Control like
o Patch management, version control, security implementation, Antivirus updates
etc.
Latency
o Because of distributed architecture there may be latency, due to the amount of
routing involved between distributed setups.
Reserve Bank of India’s MPLS (Multi-Protocol label switching) network and NPCI(National
Payments Corporation of India)
SWIFT (Society for Worldwide Interbank Financial Telecommunication)
MasterCard/VISA/American Express
National Clearing Cell and Cheque truncation system
Utility services networks like telephone companies, mobile companies, electricity companies,
water department etc.,
Government Tax Departments
Other Govt. agencies like Central Board of Excise and Customs, Director General Foreign
Trade.
ACS Server for Terminal Access Controller Access-Control System (TACACS) – For
authentication, authorisation and accounting services of all network devices including
branch routers.
Firewalls – (Core and Segment) - Each of the 2 zones is protected by 2 separate clusters
of high performing firewalls
Internet Router – To connect DC/DR to ISPs.
Intrusion Detection and Prevention Systems – Separate for each of the two zones.
Two Factor Authentication – Hardware and software to provide second factor
authentication for internet banking users.
Security Solutions for email, and web.
End-Point Security solutions.
The NOC (Network Operations Centre) is usually at a different location for carrying out the above
activities. A generic depiction of EMS deployment is given below:
identification of all significant components within the IT infrastructure and recording the details of
these components in the Configuration Management Database. The configuration management
system also records relationships between these components. It provides comprehensive
information about all components in the infrastructure that enable all other processes to function
more effectively and efficiently. In configuration management, we record the functional and physical
characteristics of hardware and software. Configuration management is applied throughout the lifecycle
of an IS component (hardware, software etc.).
Configuration management defines Configuration Items (CIs), i.e. items that require change control,
and the process for controlling such changes. It is concerned with the policies, processes and tools for
managing changing IS Components. Special configuration management software is available
commercially. When a system needs hardware or software upgrade, a computer technician can
access the configuration management program and database to see what is currently installed.
The technician can then make a more informed decision about the upgrade needed.
A Configuration Management System has the following aspects:
Configuration identification
Configuration control
Configuration status reporting
Configuration audit
Configuration control is the activity of managing configuration Items and their related
documentation throughout the life cycle of these products. This activity helps us understand:
Items which are controlled
Process of controlling these changes
Version control
Who controls these changes
It ensures that the latest approved versions of items are used.
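A minimal sketch of version control over a configuration item (the data structure is invented for the illustration): each CI keeps a version history, and only the latest approved version may be used.

```python
# A toy CMDB: each Configuration Item (CI) carries a version history.
cmdb = {
    "core-banking-app": [
        {"version": "2.0", "approved": True},
        {"version": "2.1", "approved": True},
        {"version": "2.2", "approved": False},  # change still under review
    ],
}

def latest_approved(ci_name):
    """Return the latest approved version of a configuration item."""
    versions = [v["version"] for v in cmdb[ci_name] if v["approved"]]
    return versions[-1] if versions else None

print(latest_approved("core-banking-app"))  # 2.1
```

A technician consulting this record before an upgrade sees that 2.2 exists but is not yet approved, so 2.1 is the version to deploy, which is the essence of configuration control.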
4.8 Summary
Information Technology forms the backbone of any organisation, for which IT infrastructure has to
be deployed; this could be centralised or decentralised depending on the needs of the organisation.
We have studied the relative advantages and disadvantages of the centralised and decentralised
approaches. We also had an overview of the IT infrastructure in a CBS bank. Every organisation
has to keep track of and control changes in configurations through Configuration Management
(CM). We also understood how to harden systems, and the risks and controls in the deployment of
IT infrastructure and the auditing thereof.
4.9 References
http://www.nsa.gov/ia/mitigation_guidance/security_configuration_guides/operating_systems.shtml
http://www.configurationkit.com/index.htm
http://www.sans.org/critical-security-controls
CHAPTER 2: INTRODUCTION TO COMPUTER
NETWORKS
PART 1: NETWORK BASICS
Learning Objectives
To gain understanding of basics of communication
To understand transmission modes
To understand different categories and classification of Networks
To gain understanding of LAN topologies
To gain understanding of WAN transmission technologies
To understand factors impacting selection of a suitable topology.
1.1 Introduction
In today’s highly geographically dispersed organisations we cannot imagine an information system
without an effective communication system. Effective and efficient communication is a valuable
resource which helps the management in achieving its objectives. This communication is facilitated
with communication networks. A computer network is a collection of computers (servers/nodes), a
communication medium, the software that helps them communicate, the transmission methods used
in such communication, and communication protocols. When at least one process in one device is
able to send/receive data to/from at least one process residing in another device, the two devices
are said to be in a network. Physically, devices are connected, but logically it is the processes
which send or receive data. A network is a group of devices connected to each
other.
Each component, namely the computer, printer, etc. in a computer network is called a node.
Computer networks are used for exchange of data among different computers and also to share
the resources. The field of computer networks is one of the most interesting and rapidly growing
fields in computer science. With advantages of faster and better processing capabilities, existing
computer systems are connected to each other to form a computer network which allows them to
share CPU, I/O devices, storages, etc. without much of an impact on individual systems.
The main reasons are resource sharing and communication. Other reasons for networking computers
could be:
Users can save shared files and documents on a file server rather than storing them in
their individual computers.
Users can share resources like network printer, which costs much less than having a
locally attached printer for each user’s computer.
Enables sharing of data stored in a central database.
Users can share applications running on application servers, which enable users to share
data and documents, to send messages, and to collaborate.
Better security: The job of administering and securing a company’s computer resources
can be concentrated on a few centralised servers.
Simplex
In Simplex communication, data transmission is always unidirectional; that is, the signal flows in
one direction from any node A to any node B. An example is the keyboard-to-computer connection.
Advantages
This mode of channel is:
Simple (including the software),
Inexpensive, and easy to install.
Disadvantages
Simplex mode has restricted applications, since:
It allows only one-way communication.
There is no possibility of sending back error or control signals to the transmitter.
Half Duplex
In Half-Duplex communication, there are facilities to send and receive, but only one
activity can be performed at a time, either send or receive. An example is the line between a
desktop workstation and a remote CPU. If another computer is transmitting to a workstation, the
operator cannot send new messages until the other computer finishes its message or acknowledges
an interruption.
Advantages
This mode of channel:
Helps to detect errors and request the sender to retransmit information in case of
corruption of information.
Is less costly than full duplex.
Disadvantages
Only one device can transmit at a time.
Costs more than simplex.
Full Duplex
In Full-Duplex, data can travel in both directions simultaneously; we can also think of duplex as
two simplex channels, each travelling in a different direction. Example: Ethernet connections (the
kind of networks we have in our offices) work by making simultaneous use of two physical pairs of
twisted cable.
Advantage
It enables two-way communication simultaneously.
Disadvantage
It is the most expensive method in terms of equipment because two bandwidth channels
are needed.
Serial transmission
Serial transmission can be either Synchronous or Asynchronous on the basis of the
synchronisation between the receiver and sender. In serial transmission, as a single wire transports
the information, the problem is how to synchronise the sender and receiver.
Chapter 2, Part 1: Network Basics
Asynchronous Transmission
Also termed Start-Stop communication, asynchronous communication is a technique
in which the timing of the signal is unimportant; it is widely used by computers to
provide connectivity to printers etc. Any communication between devices of dissimilar speeds will
be asynchronous. In asynchronous transmission, the transmitter sends 1 start bit (0) at the
beginning and 1 or more stop bits (1s) at the end of each byte. There may be a gap between
bytes. The basic characteristics of an Asynchronous Communication System are:
Sender and receiver have independent transmit and receive clocks.
Simple interface and inexpensive to implement.
Limited data rate, typically < 64 kbps.
Requires start and stop bits that provide byte timing.
Increased overhead.
Parity often used to validate correct reception.
Advantages
It is simple and does not require synchronisation of the two communication sides.
Since timing is not critical for asynchronous transmission, hardware can be cheaper.
Its set-up is fast and well suited for applications where messages are generated at
irregular intervals, like data entry from the keyboard.
Disadvantages
Slower: Because of the insertion of start and stop bits into the bit stream, asynchronous
transmission is slower than other forms of transmission that operate without the addition
of control information.
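The framing overhead can be quantified. Assuming one start bit, eight data bits, one parity bit and one stop bit (a common framing, though the exact bit counts vary by configuration), each byte costs 11 bits on the wire:

```python
# Overhead of asynchronous (start-stop) framing: each 8-bit byte is
# wrapped in 1 start bit, 1 parity bit and 1 stop bit = 11 bits on the wire.
START, DATA, PARITY, STOP = 1, 8, 1, 1
frame_bits = START + DATA + PARITY + STOP

efficiency = DATA / frame_bits
print(f"{efficiency:.1%} of the line carries data")  # 72.7% of the line carries data

# On a 64 kbps line (the typical upper limit mentioned above), the usable
# data rate is correspondingly lower:
line_rate = 64_000  # bits per second
print(int(line_rate * efficiency), "bps of user data")  # 46545 bps of user data
```

Synchronous transmission avoids the per-byte start/stop bits, which is why it achieves the higher data rates described in the next section.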
Synchronous Transmission
In synchronous transmission, we send bits one after another without start/stop bits or gaps. It is
the responsibility of the receiver to group the bits. In synchronous communication, the clock of the
receiver is synchronised with the clock of the transmitter. On account of this, higher data
transmission rates are possible with no start-stop bits. The characteristics of synchronous
communication are:
There is synchronisation between the clocks of the transmitter and receiver.
It supports high data rates.
It is used in communication between computer and telephony networks.
Advantages
The advantage of synchronous transmission is speed. With no extra bits or gaps to
introduce it is faster than asynchronous transmission.
Disadvantages
In synchronous transmission, the data has to be recognised at the receiver’s end, as there
may be slight differences between the transmitter and receiver clocks. That is why each data
transmission must be sustained long enough for the receiver to distinguish it.
It is slightly more complex than the asynchronous mode.
Its hardware is more expensive than that of the asynchronous mode.
The Client-Server architecture separates an application into at least two processes. One process
plays the role of the client, the other plays the role of the server. In this architecture, the client
process requests services from server process. Client also shares the processing load of the
server. The application also has to be designed to work in Client Server mode. In this model there
is separation of computational logic from interface-oriented logic. This model is not about how
computers or nodes are connected but more about how processes or computational logic are split:
there are client processes which typically request information, and there are server
processes which take requests for information, gather that information and send it to the client.
Thus, it is not about hardware; it is about applications and their processes.
Functions of a Client
Client process is implemented on hardware and software environments better suited for
humans
A client can request services from one or more servers
Client executes in a different address space from the server
Usually manages the GUI
Manages the display of data
Performs data input and validation and all tasks that can be handled locally.
Dispatches requests to server(s)
Manages local environment, Display, Keyboard etc.
Functions of a Server
A server is a supplier of services to clients
Middleware
Middleware is software that helps clients communicate with server applications. It is like the
plumbing that joins clients to servers: it consists of all the software that supports
connectivity between clients and servers. Middleware controls communication, authentication and
message delivery.
Middleware does the job of transporting, queuing and scheduling. Say a client requests
information: middleware conveys this request to the server. Once middleware takes the request
from the client, the client is in a way free to do other jobs. Middleware then schedules the
client's request to the server, maintains a queue of requests and, after getting the required
response from the server, hands the response back to the client.
Middleware has the capability to connect to diverse platforms thus providing platform
independence.
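The transporting, queuing and scheduling role described above can be illustrated with an in-process sketch. The queue stands in for the middleware layer, and names such as server_process are illustrative only.

```python
import queue
import threading

requests = queue.Queue()   # middleware's request queue

def server_process(request):
    # Stand-in for the real server's work.
    return f"result of {request}"

def middleware_worker():
    # Middleware: take queued requests, forward to the server, deliver responses.
    while True:
        client_id, request, reply_box = requests.get()
        reply_box[client_id] = server_process(request)   # scheduling + delivery
        requests.task_done()

replies = {}
threading.Thread(target=middleware_worker, daemon=True).start()

# The client hands its request to middleware and is then free to do other work.
requests.put(("client-1", "get balance", replies))
requests.join()              # wait until middleware has delivered the response
print(replies["client-1"])
```

Because the client only talks to the queue, the server behind it could be replaced or moved to a different platform without the client noticing, which is the platform independence mentioned above.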
Peer-to-Peer Networking is where two or more computers are connected and share resources
without going through a separate server computer. In this approach, all computers share equivalent
responsibility for processing data.
Advantages of Peer-to-Peer Networks are:
No extra investment in server hardware or software
Easy setup
No network administration required
Ability of users to control resource sharing
No reliance on other computers for their operation
Lower cost for small networks
Disadvantages of Peer-to-Peer Networks are:
Additional load on computers because of resource sharing
Lack of central organisation
No central point of storage for file archiving
Coaxial Cable
A co-axial cable consists of a central core conductor of solid or stranded wire. The central core is
held inside a plastic cladding, with an outer wire mesh and an outer plastic cladding providing a
shield. Both conductors share the same axis, hence the name co-axial. This is the kind of cable
we generally see being used for cable TV.
Thinnet: flexible, used for internal cabling
Thicknet: used for external cabling
Fibre Optic Cable
A fibre optic cable consists of a pure glass material or plastic/polymer/acrylic about the diameter
of a human hair. An optical fibre has the following parts: the core (the light-carrying centre), the
cladding (which reflects light back into the core) and a protective outer coating or jacket.
Hub
A hub is a hardware device that contains multiple, independent ports that match the cable type
of the network. It provides multiport connectivity. Networks using a star topology require a
central point for the devices to connect.
Some features of hubs are:
Hubs offer an inexpensive option for transporting data between devices
Don't offer any form of intelligence.
Transmission is broadcast
Hubs can be active or passive.
Active hub amplifies and regenerates the incoming signals before sending the data on
to its destination.
Passive hubs do nothing with the signal
Switches
Switches are a special type of hub that offers an additional layer of intelligence. A switch reads
the MAC address (the address of the NIC card, described above) of each frame it receives and
maintains a switching table. This information allows a switch to transmit only to the node to
which a frame is addressed. This speeds up the network, reduces congestion and reduces
collisions. Since a switch provides point-to-point connectivity, multiple simultaneous connections
are possible.
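A simplified sketch of how a switch builds its switching table and forwards frames follows. The four-port switch and the MAC address strings are hypothetical; real switches also age out stale entries.

```python
# A switch learns which port each MAC address lives on, then forwards
# frames only to that port instead of broadcasting like a hub.
switching_table = {}   # MAC address -> port number

def receive_frame(in_port, src_mac, dst_mac):
    switching_table[src_mac] = in_port          # learn the sender's port
    if dst_mac in switching_table:
        return [switching_table[dst_mac]]       # known: point-to-point delivery
    # Unknown destination: flood out of every port except the incoming one.
    return [p for p in range(1, 5) if p != in_port]

# Host A (port 1) sends to an unknown host B: the frame is flooded.
print(receive_frame(1, "AA:AA", "BB:BB"))   # [2, 3, 4]
# B (port 3) replies; the switch has already learned where A is.
print(receive_frame(3, "BB:BB", "AA:AA"))   # [1]
```

After the first exchange, traffic between A and B no longer reaches the other ports, which is why switches reduce congestion and collisions.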
Bridges
Bridges are used to extend or segment two networks. They sit between two physical network
segments and manage the flow of data between the two segments. By looking at the MAC
addresses of the devices connected to each segment, bridges can elect to forward the data or
block it from crossing.
Routers
Routers are networking devices used to extend or segment networks by forwarding packets
from one logical network to another. A router can be a dedicated hardware device or a
computer system with more than one network interface and the appropriate routing software.
Routers contain internal tables of information called routing tables that keep track of all known
network addresses and possible paths throughout the internetwork, along with the number of
hops needed to reach each network. When a router receives a packet of data, it reads the header of the
packet to determine the destination address. Once it has determined the address, it looks in
its routing table to determine whether it knows how to reach the destination and, if it does, it
forwards the packet to the next hop on the route.
Static routing: routes and route information are entered into the routing tables manually.
Dynamic routing: the router decides the route based on the latest routing information
gathered from connected routers.
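A minimal sketch of a routing-table lookup using Python's standard ipaddress module follows. The table entries and next-hop addresses are invented for illustration, and real routers also weigh hop counts and other metrics when several routes exist.

```python
import ipaddress

# A routing table: destination network -> (next hop, hop count).
routing_table = {
    ipaddress.ip_network("10.1.0.0/16"): ("192.168.0.1", 2),
    ipaddress.ip_network("10.1.2.0/24"): ("192.168.0.2", 1),
    ipaddress.ip_network("0.0.0.0/0"):   ("192.168.0.254", 0),  # default route
}

def next_hop(destination):
    addr = ipaddress.ip_address(destination)
    # Longest-prefix match: the most specific network containing the address wins.
    matches = [net for net in routing_table if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return routing_table[best][0]

print(next_hop("10.1.2.7"))    # 192.168.0.2 (most specific /24 route)
print(next_hop("8.8.8.8"))     # 192.168.0.254 (falls through to the default route)
```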
Gateway
A gateway is a device that translates one data format to another. It is used to connect networks
using different protocols. The key point about a gateway is that only the data format is translated,
not the data itself. An example is an e-mail gateway.
Bus Topology
In the bus topology, computers are connected to a single backbone cable.
This is the simplest method of networking computers. The cable runs from device to device by
using “tee” connectors that plug into the network adapter cards. A terminator is used at each end
of the cable to stop the signal from bouncing. A device wishing to communicate with another device on the
network sends a broadcast message onto the wire that all other devices see, but only the
intended recipient actually accepts and processes the message. Because only one computer
at a time can send data on a bus network, the number of computers attached to the bus will
affect network performance.
Advantages
Less expensive when compared to star topology due to less cabling and no network hubs.
Good for smaller networks not requiring higher speeds.
Networks can be extended by the use of repeaters.
Easy to install.
Disadvantages
Limited in size and speed.
Attenuation of the signal over the length of the cable.
One bad connector or failure of the backbone cable shuts down entire network.
Difficult to troubleshoot.
Addition of nodes negatively affects the performance of the whole network, and if there is
a lot of traffic, throughput decreases rapidly.
Star Topology
Star topology contains a central hub or switch to which each and every node is connected.
This necessitates drawing a separate cable from each node to the central hub. All
inter-node data transmission has to pass through it.
Advantages
Easy to troubleshoot.
Ring Topology
The ring topology connects computers on a single circle of cable.
In a ring network, every device has exactly two neighbours for communication purposes.
All messages travel through a ring in the same direction (effectively either "clockwise" or "anti-
clockwise").
A token, or small data packet, is continuously passed around the network. Whenever a device has
to transmit, it holds the token. Whosoever holds the token has the right to communicate.
The Token Ring is a network with different logical and physical topologies. Physical Topology is
wiring diagram and logical topology is the channel access method. Here, the physical topology is
a star bus; that is, there is a length of cable from each computer that connects it to a central hub
(called a Multi-Station Access Unit, or MAU). Inside the hub, however, the ports are wired together
sequentially in a ring, and they send data around the ring instead of sending it out to all ports
simultaneously as they would if the network were a logical star. The MAU makes a logical ring
connection between the devices internally.
Advantages
Every device gets an opportunity to transmit.
No computer can monopolise the network.
Every node gets a fair share of network resources.
Performs better than the star topology under heavy network load.
Caters to heavy network load
Disadvantages
One malfunctioning workstation or bad port in the MAU can create problems for the entire
network.
Moves, additions and change of devices can affect the network.
Network adapter cards and MAUs are much more expensive than Ethernet cards and
hubs.
Mesh Topology
In this topology, every node is physically connected to every other node.
This is generally used in systems which require a high degree of fault tolerance, such as the
backbone of a telecommunications company or an ISP.
Advantages
Highly fault tolerant
When one node fails, traffic can easily be diverted to other nodes.
Guaranteed communication channel capacity
Easy to troubleshoot
Disadvantages
It requires more cables for connecting devices than any other topology.
It is complex and difficult to set up and maintain.
It is difficult to introduce or remove nodes from the system as it necessitates rewiring.
Its maintenance is expensive.
Hybrids
Hybrid networks use a combination of any two or more topologies in such a way that the resulting
network does not exhibit any one of the standard topologies (e.g., bus, star, ring). For example,
two star networks connected together create a hybrid network topology. A hybrid topology is always
produced when two different basic network topologies are connected.
Two common examples for Hybrid network are: Star Ring network and Star Bus network.
Star Ring Network: A Star Ring network consists of two or more star topologies
connected by using a multi-station access unit (MAU) as a centralised hub.
Star Bus Network: A Star Bus network consists of two or more star topologies connected
by using a bus trunk (the bus trunk serves as the network's backbone).
1.7.1 Switching
For data transfer across WAN, Switching Techniques are required, Switching implies sending on
a path.
Following are the switching techniques used for WAN:
Circuit Switching
Packet Switching
Message Switching
Circuit Switching
Circuit switching is a type of communication in which a temporary physical connection is
established between two devices, such as computers or phones, for the entire duration of
the transmission, usually a session.
In circuit switching, the entire bandwidth of the circuit is available to the communicating parties.
The cost of the circuit is calculated on the basis of the time used. Circuit switching networks
are ideal for real-time data transmissions. Example: Telephone connections
How it works
Communication via circuit switching networks involves 3 phases:
Circuit establishment – an end-to-end connection must be established before any signal
can be transmitted via the channel.
Data transfer – the transmission may be analog voice, digitalized voice or binary data
depending on the nature of the network.
Circuit disconnection – once the transmission is complete, the connection is
terminated by one of the end stations.
Advantages:
• Compatible with voice
• No special training or protocols are needed to handle data traffic
• Predictable, constant rate of data traffic
Disadvantages:
• No routing techniques; call subject to blockage
• Devices at both ends must be compatible in terms of protocols and data flow
• Large processing and signal burden on the network
Packet Switching
It is a network technology that breaks a message up into smaller packets for transmission. Unlike
circuit switching, which requires point-to-point connection establishment, each packet in a packet-
switched network contains a destination address. Thus, packets in a single message do not have
to travel the same path and can arrive out of order; the destination computer reassembles them
into the proper sequence. Examples are Internet communication and most computer networks.
How it works
Data is transmitted in blocks, called packets
The message is broken into a series of packets
Packets consist of a portion of data plus a packet header that includes control information
At each node en route, the packet is received, stored briefly and passed on to the next node.
Advantages:
• Provides speed conversion; two devices of different speeds can communicate
• Efficient path utilisation
Disadvantages:
• Complex routing and control
• Delay in data flow during times of heavy load
Packet switching networks support two kinds of services:
1. Datagram: Each packet is treated independently, and the packet's path is chosen
based on information received from neighbouring nodes about traffic, line failures, etc.
Each independent packet is referred to as a datagram.
2. Virtual circuit: A pre-planned route is established before any packets are sent. Once the
route is established, all packets follow it; however, it is a logical connection for a fixed
duration, referred to as a virtual circuit.
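The splitting, independent travel and re-sequencing of datagrams can be sketched as follows. A sequence number in each packet header lets the destination reassemble out-of-order arrivals; the message and packet size are arbitrary.

```python
import random

def packetize(message, size):
    # Break a message into packets, each carrying a sequence number in its header.
    return [{"seq": i, "data": message[i:i + size]}
            for i in range(0, len(message), size)]

def reassemble(packets):
    # Packets may arrive out of order; the destination re-sequences them.
    return "".join(p["data"] for p in sorted(packets, key=lambda p: p["seq"]))

packets = packetize("HELLO WORLD", 4)
random.shuffle(packets)        # simulate different paths and arrival order
print(reassemble(packets))     # HELLO WORLD
```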
Message Switching
Message switching is also known as store-and-forward switching. No physical path is established
in advance between the sender and the receiver. Data is sent to a switching point, where it is
stored, and whenever a path is available it is forwarded.
Its cost is determined by the length of the message being transmitted.
Example: SMS (Short Message Service) on mobiles
How it Works
It is not necessary to establish a circuit across the network from source to destination
When the sender sends a block of data, it is stored at the switching point
When the appropriate route is available, it is transferred to the next switching point, one
hop at a time until the message reaches its destination.
Each block is received in its entirety, inspected for errors, and then retransmitted.
1.7.2 Multiplexing
Multiplexing is a set of techniques that permits the simultaneous transmission of multiple
signals on a single carrier. With the increase in data and telecommunications usage, traffic
increases, as does the need to accommodate individual users. To achieve this, we
either have to add individual lines each time a new channel is needed, or install higher-
capacity links and use each to carry multiple signals. If the transmission capacity of a link is
greater than the transmission needs of the devices connected to it, the excess capacity is
wasted. Multiplexing is an efficient system that maximises the utilisation of the link.
A device that performs multiplexing is called a multiplexer (MUX), and a device that performs
the reverse process is called a de-multiplexer (DEMUX). Prominent kinds of multiplexing are
Frequency Division Multiplexing (FDM), Time Division Multiplexing (TDM) and Wavelength
Division Multiplexing (WDM).
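Time Division Multiplexing, for instance, can be sketched by interleaving one unit from each channel per time slot. Equal-length channels are assumed here for simplicity; real TDM systems insert filler into idle slots.

```python
# Time Division Multiplexing: each input channel gets a recurring time slot
# on the shared link; the demultiplexer splits slots back out by position.
def tdm_mux(channels):
    # Interleave one unit from each channel per frame.
    return [ch[i] for i in range(len(channels[0])) for ch in channels]

def tdm_demux(link, n_channels):
    # Every n-th unit, starting at offset i, belongs to channel i.
    return [link[i::n_channels] for i in range(n_channels)]

channels = [list("AAAA"), list("BBBB"), list("CCCC")]
link = tdm_mux(channels)
print("".join(link))                              # ABCABCABCABC
print(["".join(c) for c in tdm_demux(link, 3)])   # ['AAAA', 'BBBB', 'CCCC']
```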
1.9 Summary
We can very well say that in today's competitive market, responsiveness to customer or supplier
demand is often a decisive factor in the success of an organisation. This market responsiveness
requires communication which is facilitated through computer networks. Computer networks are
considered one of the most critical resources in any organisation.
In this chapter we have learnt about communication, which is facilitated through data transmission
over computer networks. We have also learnt about the network components that help constitute
the structure of networks, or network topologies, and had an overview of some WAN transmission
technologies. Knowledge of these technologies helps us select a suitable network topology.
1.10 References
Ralph M. Stair, George W. Reynolds, ‘Principles of Information Systems’, Cengage
Learning
http://www.di.unisa.it/~vitsca/RC-0809I/ch04.pdf
http://www.tutorialspoint.com/unix_sockets/client_server_model.htm
http://magazine.redhat.com/2008/03/11/what-is-middleware-in-plain-english-please/
http://pluto.ksi.edu/
PART 2: NETWORK STANDARDS AND
PROTOCOLS
Learning Objectives
To gain understanding of network standards and networking protocols
To gain an overview of OSI Layers and understand layering concept
To understand TCP/IP Model
To gain understanding of Wireless Networks
2.1 Introduction
A network may be described as a group or collection of computers connected together for sharing
resources and information. When individual networks are connected by different types of network
devices, such as routers and switches, the result is called an internetwork. Enabling effective and
secure communication between systems with disparate technologies is a challenging task.
Networks are a complicated subject, and when we talk of an internetwork, i.e. two different
networks with different technologies, the problem is compounded. Understanding these
complicated systems was made easier by splitting the communication process into small portions
via a reference model called OSI. The Open Systems Interconnection (OSI) model enables easy
integration of various technologies and provides solutions for managing the internetworking
environment. A practical model, or more appropriately a suite of specific network protocols, is
TCP/IP, on which the Internet has been built. We will examine these concepts in this part.
The Internet is a giant network of networks linked together by internetworking devices. Initially the
NSF (National Science Foundation) created the first backbone for the Internet, but now there are
many companies that operate their own high-capacity backbones, and all of them interconnect at
various Network Access Points (NAPs) around the world. This ensures that anybody who is on the
Internet, irrespective of the area they are in and the company they use, is able to communicate
through the Internet with everybody else. The Internet is thus a giant agreement between different
companies to help communication between different participants.
There is no governing body in control of the Internet, but if anything has to function, some rules
and regulations need to be followed. Some organisations develop the technical aspects of this
network and set standards for creating applications on it; some bodies help in managing the
Internet. These are:
The Internet Society (ISOC), (www.isoc.org) a non-governmental international
organisation providing coordination for the Internet, and its internetworking technologies
and applications.
The Internet Architecture Board (IAB) (www.iab.org) governs administrative and
technical activities on the Internet.
The Internet Engineering Task Force (IETF) (www.ietf.org) has the primary
responsibility for the technical activities of the Internet, including writing specifications and
protocols.
The Forum of Incident Response and Security Teams (www.first.org) is the
coordinator of a number of Computer Emergency Response Teams (CERTs)
representing many countries, governmental agencies, and ISPs (Internet Service
Providers) throughout the world.
The World Wide Web Consortium (W3C) (www.w3.org) takes a lead role in developing
common protocols for the World Wide Web to promote its evolution and ensure its
interoperability.
The Internet Corporation for Assigned Names and Numbers (ICANN)
(www.icann.org) handles governance of generic Top Level Domains (gTLDs),
registration of domain names and anything relating to names and numbers on the Internet.
Chapter 2, Part 2: Network Standards And Protocols
So there are standards which define the distance and angle between pins, the diameter of pins,
etc., so that a plug made by one manufacturer fits into a socket made by another. The benefits of
standardisation are:
• Allows different computers to communicate.
• Increases the market for products adhering to the standard.
2.2.2 Protocols
A networking protocol is defined as a set of rules that governs data communication over a network
for:
What is communicated,
How it is communicated and
When it is communicated.
To take a simple example, if we are speaking with somebody, for the other person to understand
it is important that we are speaking the same language governed by same grammar and the speed
of communication should be such that the other person can comprehend. Similarly, communication
in networks is facilitated by these common rules called protocols.
The key elements of protocol are:
Syntax:
o Represents What is communicated
o It is the structure or format of the data, that is, the order in which they are
presented.
o In simple terms it means which portion of the packet is the address and which is the content.
o As an analogy, in a Post Card it is pre-defined which part will carry address and
which part the content.
Semantics:
o Represent how it is communicated.
o It means each section of bits, how a particular pattern is to be interpreted, and
based on that, what action to be taken.
o In simple terms, even within the address it could represent which portion is the
sender's address and which is the receiver's address
Timing:
o Represents When it is communicated
o It indicates when the data is to be transmitted and how fast it can be sent.
Even though the terms protocol and standard are often used interchangeably, there is a slight
difference: a protocol defines a set of rules used by two or more parties to interact between
themselves, whereas a standard is a formalised protocol accepted by most of the parties that
implement it.
Concepts of layers
An internetwork is a network of networks; these networks might not be on the same platforms or
technologies. Imagine a Windows-based system communicating with an Apple Macintosh or a
UNIX/Linux-based system, where the technologies and perhaps the languages are different. In
such cases, what becomes important is that each side should know the steps taken for
communication, so that it can do the reverse process irrespective of technology. This is where
the layering concept comes into play.
Imagine you want to send an audit report to your client in Berlin, Germany. The key steps are:
First we will write a report.
Then the dispatch department in our office will put the report in an envelope, seal the
envelope, write senders and receivers addresses, put a postage stamp on the envelope
and post it.
Postal staff will take out all the letters from the letter box and sort them. The letter meant
for Berlin is put in a bag, which itself is put in another bag meant for Germany.
The air cargo takes those bags to the central postal hub in Frankfurt, Germany.
The German postal office staff open the bag and find the bag meant for Berlin.
This bag goes to Berlin, where it is opened and out comes an envelope meant for the
client's address.
It is handed over to the person delivering letters to that client's address.
The administrative staff of the client open the envelope, take out the report and deliver it
to the client.
Sending our report to our Berlin client involves many steps; a lot of work is done on it so that it
can reach the other end. Similarly, when a message is sent from one computer to another, many
actions and processes happen for it to reach the other end. In our case there were various
phases: a report-writing phase, an envelopment phase and a sorting phase; at the other end
there were a sorting phase, a de-envelopment phase and so on. These different phases are, in
network terminology, analogous to layers.
Each phase consists of some actions to be taken. In the envelopment phase, the actions are to
put the report in an envelope, seal the envelope, write the sender's and receiver's addresses, put
a postage stamp on the envelope and post it. Similarly, in each layer some actions are done on
the data, after which it passes to the next layer or phase. Each layer knows the form of data it will
get, the form of data it has to pass on to the next layer, and the function it has to perform. A layer
is not concerned with actions taken at the previous layer or actions to be taken by the next layer,
so there is a form of independence amongst the different layers.
The dispatch section in our office knows the form of data it will get, i.e. a report, and the functions
it has to perform, i.e. to put the report in an envelope, seal the envelope, write the sender's and receiver's
addresses, put a postage stamp on the envelope and post it. It also knows the format of data it has
to give to the next layer, i.e. an envelope. Our dispatch section is not bothered about the actions
we have taken, whether we wrote the report by hand or using a printer; it only knows that data in
the form of a letter will be received by it. Neither is the dispatch section bothered about the sorting
process adopted by the postal staff, whether automated or manual; it just knows the format of data
in which it has to send.
Each layer talks to its upper layer and lower layer, and there is also peer-to-peer communication
with the corresponding layer at the other end. The layering concept is important so that layers can
be independent of each other and technological advancement can take place at each layer
independently of the others. Also, irrespective of technology, if a layer understands the action
taken by the corresponding layer at the other end, it can do the reverse job: if a layer knows
encryption has happened at the other end, it knows it will have to decrypt. This information about
the action undertaken is passed on by the use of headers and trailers added to the data. Layers
also help the process become technology independent. At our end and at the German end,
technologies could be different, but what is important is that the phases are understood by the
other end.
Our dispatch section is manual and letters are sealed manually, whereas the German client could
have automated letter openers, i.e. the technologies are different. What is important is that the
client's staff, when they receive the envelope, know that if there is an envelope there is bound to
be a letter inside, provided they understand the process. They will know that the envelope has to
be opened to extract the letter. At our end, we have put the report in an envelope and the
envelope in a bag, i.e. performed encapsulation of the message. At the other end, the bag is
opened and the envelope taken out, then the envelope is opened and the report taken out, i.e.
de-capsulation of the message. So, at one end encapsulation is done and headers attached; at
the other end de-capsulation is done, i.e. the headers are removed.
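The encapsulation and de-capsulation described above can be sketched with string "headers" standing in for real protocol headers. The layer names are illustrative, and real headers carry binary control fields rather than labels.

```python
# Each layer wraps the data from the layer above with its own header
# (encapsulation); the receiving side strips headers in reverse order.
def encapsulate(data, layers):
    for layer in layers:                     # top layer first
        data = f"[{layer}]{data}"
    return data

def decapsulate(frame, layers):
    for layer in layers:                     # strip in reverse: bottom first
        assert frame.startswith(f"[{layer}]"), "unexpected header"
        frame = frame[len(layer) + 2:]       # drop "[LAYER]"
    return frame

layers = ["TCP", "IP", "ETH"]                # headers added going down the stack
frame = encapsulate("report", layers)
print(frame)                                  # [ETH][IP][TCP]report
print(decapsulate(frame, reversed(layers)))   # report
```

Each layer only inspects its own header, just as the dispatch section only handles the envelope and never reads the report inside it.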
The sequence of layers can be remembered by mnemonic “All people seem to need data
processing”
All (Application)
People (Presentation)
Seem (Session)
To (Transport)
Need (Network)
Data (Data Link)
Processing (Physical)
2. The courier adds its local unique ID and the local unique ID of the client, and also adds a seal
to show the packet is not tampered with en route. (Data Link layer: reliable transit of data, data
framing, adding the MAC address, error detection.)
1. The mail is urgent, so it is sent by air cargo. (Physical layer: deals with 0s and 1s, converts
bits to voltages, transmission rates, contains the physical components.)
Application Layer
The Application layer is the topmost layer of the TCP/IP protocol suite. It runs various applications,
provides them the ability to access the services of the other layers, and defines the protocols that
applications use to exchange data. There are many Application layer protocols, and new ones
are always being developed.
DNS: The Domain Name System maps a name to an address and an address to a name. The
addressing system on the Internet uses IP addresses, which are usually indicated by numbers
such as 220.227.161.85. Since such numbers are difficult to remember, a user-friendly system
known as the Domain Name System (DNS) has been created. It provides the mnemonic
equivalent of a numeric IP address and ensures that every site on the Internet has a unique
address. For example, http://www.icai.org: if this address is accessed through a Web browser, it
is resolved to 220.227.161.85.
TIP: As an experiment try going to the site of ICAI by just writing 220.227.161.85 in your browser.
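This name-to-address mapping can be tried from Python with the standard socket module. "localhost" is used below because the address a public name resolves to can change over time; a name like "www.icai.org" would return whatever the site's current public IP address is.

```python
import socket

# DNS maps a name to an address; socket.gethostbyname performs the lookup
# using the system's resolver.
def resolve(name):
    return socket.gethostbyname(name)

print(resolve("localhost"))   # 127.0.0.1
```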
SNMP: SNMP stands for Simple Network Management Protocol; network administrators use it to
monitor and map network availability, performance and error rates.
E-mail protocols
MIME (Multipurpose Internet Mail Extensions)
MIME allows images, audio and non-ASCII formats (any file format) to be included in e-mail
messages. It enables the use of non-ASCII character sets. It is also used by HTTP.
POP3 (Post Office Protocol)
POP3 is used for retrieving e-mail. It downloads mail from the mail server, stores it locally and
then deletes it from the server.
IMAP (Internet Message Access Protocol)
IMAP is also used for retrieving e-mail. The difference is that, unlike POP, IMAP supports (among
other things) synchronisation across multiple devices. This means that if we reply to an e-mail on
our smartphone and then check e-mail on our laptop, our reply will show up there, and vice versa.
Transport Layer
The Transport layer (also known as Host-to-Host Transport layer) is responsible for ensuring logical
connections between hosts, and transfer of data. In other words, it acts as the delivery service
used by the application layer above.
The protocol data unit (PDU) at this layer is the segment.
Data can be delivered with or without reliability. The main protocols used are
Transmission Control Protocol (TCP)
User Datagram Protocol (UDP).
User Datagram Protocol (UDP): Example - a letter dropped in a letterbox; unreliable delivery
UDP provides a one-to-one or one-to-many, connectionless, unreliable communications
service. It is used when the data to be transferred is small (such as data that would fit into a
single packet), when the overhead of establishing a TCP connection is not desired, or when the
applications or upper-layer protocols themselves provide reliable delivery.
UDP is connectionless, meaning that A does not initiate a transport connection to B before it starts
sending data. If B detects an error, it simply ignores it (rather than acknowledging the error to A),
leaving any potential fix entirely up to A.
Examples: small queries, VoIP, streaming data, etc.
Transmission Control Protocol (TCP): Example - Speed Post with proof of delivery (POD)
TCP provides a one-to-one, connection-oriented, reliable communications service. It is
responsible for the establishment of a TCP connection, the sequencing and acknowledgment of
packets sent, and the recovery of packets lost during transmission. It is also responsible for
breaking up the message into datagrams, reassembling the datagrams at the other end, resending
anything that gets lost, and putting things back in the right order.
If A is sending a stream of data to B, TCP will split it into smaller blocks called segments. Each
segment is checked for errors and retransmitted if any are found. A knows whether a segment
contains errors from the acknowledgement it receives from B. TCP is reliable because it contains
a mechanism called PAR (Positive Acknowledgment with Re-transmission), which sends the data
to the recipient again and again until it receives a Data OK signal from the recipient.
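The PAR idea, i.e. resend until a positive acknowledgement arrives, can be simulated in a few lines. The 30% loss rate and segment size are arbitrary, and real TCP uses timers, sequence numbers and sliding windows rather than this simple stop-and-wait loop.

```python
import random

# Positive Acknowledgment with Re-transmission: the sender resends each
# segment until the receiver acknowledges it over a lossy channel.
def unreliable_send(segment, received):
    if random.random() < 0.3:          # 30% of segments are lost in transit
        return False                   # no acknowledgement comes back
    received.append(segment)
    return True                        # "Data OK" acknowledgement

def tcp_send(message, size=4):
    received = []
    segments = [message[i:i + size] for i in range(0, len(message), size)]
    for seg in segments:
        while not unreliable_send(seg, received):   # retransmit until ACKed
            pass
    return "".join(received)

print(tcp_send("RELIABLE DELIVERY"))   # RELIABLE DELIVERY
```

However many segments the channel drops, every segment eventually gets through exactly once and in order, so the receiver always reconstructs the original message.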
Network Layer
The Network layer enables us to find the best way from source to destination. The objective of
Network layer is to decide which physical path the information should follow from its source to its
destination (routing). This layer ensures routing of data between different networks and subnets.
Its key tasks are:
Message addressing (Logical addressing, IP Addressing)
Path determination between source and destination nodes on different networks
Routing messages between networks
Controlling congestion on the subnet
Translating logical addresses into physical addresses
The protocol data unit (PDU) is the packet.
Protocols used in the Network layer are IP, ARP, RARP, ICMP and IGMP.
Internetwork Protocol (IP):
Internet Protocol (IP) is an Internet layer protocol that uses unique addresses (called IP Addresses
which are discussed in next part) for the destination computer and for our computer. IP provides
the means for our computer to determine whether the destination Computer is a local computer or
a computer located somewhere on the Internet. To reach a destination computer on the Internet,
the IP protocol also allows our computer to figure out how to reach the Internet destination
computer via our default gateway.
It is an unreliable and connectionless datagram protocol which provides no error checking or
tracking. This is like the delivery service of a post office. The post office does its best to deliver mail
Module 1: e-Learning
but does not always succeed. If an unregistered letter is lost, it is up to the sender or would-be
recipient to discover the loss. The post office itself does not keep track of every letter and cannot
notify a sender of loss or damage. When the letter is delivered, the receiver mails the postcard
back to the sender to indicate that he got it. If the sender does not receive the postcard, he or she
assumes the letter has been lost and sends another one.
Address Resolution Protocol (ARP):
Address Resolution Protocol maps IP addresses to physical (MAC) addresses. It is used by hosts
to find MAC address of other hosts whose IP address is known.
Reverse Address Resolution Protocol (RARP)
RARP allows a host to discover its IP address if it knows its physical address (MAC).
Internet Control Message Protocol (ICMP)
ICMP is a mechanism used by hosts and routers to send notification of datagram problems back
to the sender. ICMP allows IP to inform a sender that a datagram is undeliverable. Many of us
would have, at some time or other, used the PING command. PING sends out ICMP (Internet
Control Message Protocol) messages to verify both the logical addresses and the physical
connection, and can also reveal congestion on the route.
Internet Group Message Protocol (IGMP)
IGMP is a companion to the IP Protocol. IGMP helps a multicast router identify the hosts in a LAN
that are members of a multicast group.
message frame. The receiving computer recalculates the CRC and compares it to the one sent
with the data. If they match, the received data is error-free.
Access Control: When two or more devices are connected to the same link, data link layer
protocols are necessary to determine which device has control over the link at any given time.
Physical Layer
The physical layer deals in zeroes and ones and voltages. Physical layer provides the path through
which data moves among devices on the network. The protocol data unit (PDU) is called a Bit.
The physical layer is concerned with the following:
Physical characteristics of interfaces and media: The physical layer defines characteristics of
the interface between the devices and the transmission medium and also defines the type of
transmission medium.
Representation of bits: The physical layer data consists of a stream of bits (0 and 1) without any
interpretation. To be transmitted, bits must be encoded into electrical or optical signals.
Data Rate: The transmission rate – the number of bits sent each second, is also defined by the
physical layer.
Synchronization of bits: The sender and receiver must be synchronized at the bit level.
Line Configuration: The physical layer is concerned with the connection of devices to the medium.
Transmission Mode: The physical layer also defines the direction of transmission between the
two devices: simplex, half-duplex and full-duplex.
2.3.1 Wi-Fi
Wi-Fi is the name of a popular wireless networking technology that uses radio waves to provide
wireless high-speed Internet and network connections. Wi-Fi networks have limited range. A typical
wireless access point might have a range of 32 m (120 ft.). Security in Wi-Fi can be improved using
WEP (Wired Equivalent Privacy) or WPA (Wi-Fi Protected Access).
Printers
Cell phones and headsets
PDAs (Personal Digital Assistants)
Desktop and laptop computers
Digital cameras
2.5 Summary
In this part, we have learnt about the Internet and the technologies that help it work. The Internet
is a set of heterogeneous networks, and to facilitate interoperability we have network standards
and protocols. Understanding the OSI model helped us gain an insight into the different phases or
layers in network communication. We also gained an understanding of TCP/IP, the main technology
on which the Internet is built, and its related protocols. Networking is increasingly becoming
wireless, and so we learnt about some wireless technologies in everyday use.
2.6 References
• http://www.tcpipguide.com/free/t_TCPIPOverviewandHistory.htm
• http://www.garykessler.net/library/tcpip.html
• http://www.microsoft.com/resources/documentation/windows/xp/all/proddocs/en-
us/wireless_networking_overview.mspx?mfr=true
PART 3: THE IP NETWORKS
Learning Objectives
To gain understanding of IP Networks
To gain an overview of IP Addressing Scheme
To gain an understanding of Domain Name Services, Ports
To understand Network Services
To get an overview of Public Global Internet Infrastructure
To understand Factors Impacting Quality of Service
3.1 Introduction
Imagine posting a letter addressed simply to "16, Civil Lines". Will it reach the address? The answer
depends on whether we have posted the letter in a green letter box meant for local delivery at the
GPO, or in a red letter box meant for general delivery across the world. If we have posted it in a
green letter box, yes, it will reach the address even if the city is not specified. If we are posting a
letter in a red letter box, which has to be delivered anywhere in the world, a proper address with
city and country needs to be specified.
Communication between hosts (devices) can happen only if they can identify each other on the
network. In a single network, hosts can communicate directly via the MAC address (the MAC
address, as we have already discussed, is a factory-coded 48-bit hardware address which can
uniquely identify a host in the world). On the other hand, if a host wants to communicate with a
remote host which is not on the same network, we require a means of addressing to identify the
remote host uniquely; i.e. we must know the address of the network as well as the host.
A logical address is given to all hosts connected to the Internet, and this logical address is called
the Internet Protocol Address (IP Address). In the TCP/IP model, we have also understood that the
network layer is responsible for carrying data from one host to another. It facilitates providing logical
addresses to hosts and identifies them uniquely using the same.
The Internet layer takes data units from the Transport Layer and cuts them into smaller units called
data packets, which are the PDU (Protocol Data Unit) for the Network Layer. The Internet layer
defines IP addressing and also selects the path that the packets should follow to reach the
destination. Routers work on the Internet layer and facilitate routing of data to its destination.
3.2 IP Networks
An IP network is a communication network that uses Internet Protocol (IP) to send and receive
messages between one or more computers. An IP network is implemented in internet networks,
local area networks (LAN) and enterprise networks. All hosts or network nodes must be configured
with the TCP/IP suite and must have an IP address.
3.2.1 IP Addressing
Internet Protocol has the responsibility of identification of hosts based upon their logical addresses
and routing of data between these hosts over the underlying network. For identification of hosts
Internet Protocol Addressing Scheme gives the address of a device attached to an IP network
(TCP/IP network).
Every client, server and network device is assigned an IP address, and every IP packet traversing
an IP network contains a source IP address and a destination IP address.
IP encapsulates the data unit received from the layer above and adds its own header information.
IP Encapsulation
IP header contains all the necessary information to deliver the packet at the other end like Version
of IP, Protocol, Source IP Address, Destination IP Address etc.
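As an illustration of how a header is prepended to data, the sketch below packs a simplified IPv4 header in Python. The field layout follows the standard IPv4 header, but the checksum is left at zero for simplicity and the addresses are made-up example values.

```python
import socket
import struct

def build_ipv4_header(src: str, dst: str, protocol: int, ttl: int = 64) -> bytes:
    """Pack a minimal 20-byte IPv4 header (checksum omitted in this sketch)."""
    version_ihl = (4 << 4) | 5           # version 4, header length 5 x 32-bit words
    return struct.pack(
        "!BBHHHBBH4s4s",
        version_ihl,                     # version + IHL
        0,                               # type of service
        20,                              # total length (header only, no payload here)
        0,                               # identification
        0,                               # flags + fragment offset
        ttl,                             # time to live
        protocol,                        # e.g. 6 = TCP, 17 = UDP
        0,                               # header checksum (left at zero)
        socket.inet_aton(src),           # source IP address
        socket.inet_aton(dst),           # destination IP address
    )

header = build_ipv4_header("192.168.0.10", "220.227.161.85", protocol=6)
assert len(header) == 20 and header[0] == 0x45
```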
IP Version 4 (IPv4)
An IP Address (IPv4) is an address which is 32 bits in length, of the type
01110101 10010101 00011101 11101010
It is written in the form of 4 octets or bytes separated by a dot. Using these 32 bits we can have
2^32 permutations and combinations, i.e. in all 4,29,49,67,296 different and unique IP addresses.
An IP address can be any value from 00000000.00000000.00000000.00000000
to 11111111.11111111.11111111.11111111
Thought of in binary terms, 0 means 00000000 and 255 in binary is 11111111, with all permutations
and combinations in between. Thus, each of these octets can be any value from 0 to 255, both
inclusive, i.e. 256 values in all.
Chapter 2, Part 3: IP Networks
IPv4 addresses are usually represented in "dotted decimal" notation, separated by decimal points
like xxx.xxx.xxx.xxx or 117.149.29.234
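The relationship between the 32-bit binary value and its dotted-decimal form can be demonstrated with a small Python sketch (the helper names here are our own):

```python
def dotted_to_int(address: str) -> int:
    """Convert dotted-decimal IPv4 notation to its 32-bit integer value."""
    octets = [int(part) for part in address.split(".")]
    assert len(octets) == 4 and all(0 <= o <= 255 for o in octets)
    value = 0
    for octet in octets:
        value = (value << 8) | octet   # shift in 8 bits per octet
    return value

def int_to_dotted(value: int) -> str:
    """Convert a 32-bit integer back to dotted-decimal notation."""
    return ".".join(str((value >> shift) & 0xFF) for shift in (24, 16, 8, 0))

# 117.149.29.234 is the binary address 01110101 10010101 00011101 11101010
assert dotted_to_int("117.149.29.234") == 0b01110101_10010101_00011101_11101010
assert int_to_dotted(dotted_to_int("117.149.29.234")) == "117.149.29.234"
```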
IP Addressing Scheme-Classes
IP addressing system is designed in such a way that it contains information of three fields: class
type, netid, and hostid. IPv4 addressing Scheme is a classful addressing scheme
Classful network design allowed for a larger number of individual network assignments. There are
5 classes of Addresses
o Class A
o Class B
o Class C
o Class D
o Class E
The leading bits of the most significant octet of an IP address were defined as the class of the
address. Three classes (A, B, and C) were defined for universal Unicast Addressing.
Class D is reserved for Multicasting.
Class E is reserved for experimental purposes only, like research and development or study.
Unicast Addressing Mode: In this mode, data is sent only to one destined host. The destination
address field contains the 32-bit IP address of the destination host.
Broadcast Addressing Mode: In this mode the packet is addressed to all hosts in a network
segment. IP address having host bit all ones is the Broadcast IP, for example in class A, the
address XXX.255.255.255
Multicast Addressing Mode: This mode is a mix of previous two modes, i.e. the packet sent is
neither destined to a single host nor the entire hosts on the segment. In this packet, the destination
address contains special address. Example is online games.
Network IP and Broadcast IP
IP address having host bit all zeroes is the Network IP.
IP address having host bit all ones is the Broadcast IP.
Both these addresses cannot be physically used on any system. Given any IP address, a
network IP and broadcast IP can be associated with it.
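Python's standard ipaddress module can derive both special addresses from a host IP and mask; the example below uses the Class C network 205.217.146.0 that appears later in this part (the host .37 is an arbitrary choice):

```python
import ipaddress

# Derive the network IP (host bits all 0) and the broadcast IP (host bits
# all 1) for a host on the Class C network 205.217.146.0/24.
iface = ipaddress.ip_interface("205.217.146.37/24")  # .37 is an arbitrary host
network = iface.network

assert str(network.network_address) == "205.217.146.0"      # host bits all zeroes
assert str(network.broadcast_address) == "205.217.146.255"  # host bits all ones
assert network.num_addresses == 256                         # 254 usable hosts
```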
Network and Host ID
Each of the IP address of Class A, B and C will have a Network and Host portion of the Address
That is, out of the 32-bit address, some bits (more specifically, octets or bytes) will represent the
Network Address and the remaining octets will represent the Host ID.
Class A Address
Class A address is wherever the first octet of an IP address is any number ranging from 0 to 127, i.e.
XXX.XXX.XXX.XXX (first octet: 0-127)
If we consider in binary, it means the first octet is 00000000 (decimal equivalent 0) to 01111111
(decimal equivalent 127) and all the permutations and combinations in between. It means, when
we move from 00000000 to 01111111 all the bits will keep on changing between 0 and 1 whereas
the 1st bit will remain constant at 0; this is called a Higher Order Bit.
Wherever it's a Class A address, the first octet represents the NetworkID and the remaining 3
octets represent the HostID. Earlier, while discussing Broadcast and Network IP addresses, we
discussed that in the network and host portions of an IP address, the changing bits can never be
all 0s or all 1s. Thus, the network portion of a Class A address can never be 00000000 and never
01111111, since in these cases all changing bits would be 0s or 1s. Therefore, in Class A the first
octet ranges from 1 to 126, i.e. Class A addresses only include IPs from 1.x.x.x to 126.x.x.x.
The IP range 127.x.x.x is reserved for loopback IP addresses. This implies that Class A addressing
can have 126 networks (2^7 - 2), and each network can have 1,67,77,214 hosts (2^24 - 2), since
the address space for the HostID is 24 bits and all changing bits can never be 0s or 1s.
The Class A IP address format thus is 0NNNNNNN.HHHHHHHH.HHHHHHHH.HHHHHHHH
This is meant for big networks needing a huge number of hosts, e.g. Microsoft.
Class B Address
Class B address is wherever the first octet of an IP address is any number ranging from 128 to 191, i.e.
XXX.XXX.XXX.XXX (first octet: 128-191)
If we consider in binary, it means the first octet is 10000000 (decimal equivalent 128) to 10111111
(decimal equivalent 191) and all the permutations and combinations in between.
It means, when we move from 10000000 to 10111111 all the bits will keep on changing between
0 and 1 whereas the first two bits will remain constant at 10, which are called the Higher Order Bits.
Wherever it's a Class B address,
the first two octets represent the NetworkID and the remaining two octets represent the HostID.
Again, in the network and host portion of an IP address, all changing bits can never be all 0s or all
1s. Thus, the network portion of a Class B address can never be 10000000 00000000 and never
10111111 11111111, since in these cases all changing bits would be 0s or 1s. This implies that
Class B addressing can have 16,384 networks (2^14 - 2), since the address space for the
NetworkID is 16 bits, only 14 bits are changing bits, and all changing bits can never be 0s or 1s.
Each network can have 65,534 hosts (2^16 - 2), since the address space for the HostID is 16 bits
and all changing bits can never be 0s or 1s. The Class B IP address format is:
10NNNNNN.NNNNNNNN.HHHHHHHH.HHHHHHHH
Class C Address
Class C address is wherever the first octet of an IP address is any number ranging from 192 to 223, i.e.
XXX.XXX.XXX.XXX (first octet: 192-223)
If we consider in binary, it means the first octet is 11000000 (decimal equivalent 192) to 11011111
(decimal equivalent 223) and all the permutations and combinations in between.
It means, when we move from 11000000 to 11011111 all the bits will keep on changing between
0 and 1 whereas the first three bits will remain constant at 110, which are called the Higher
Order Bits.
The first three octets represent the NetworkID and the remaining octet represents the HostID.
Again, in the network and host portion of an IP address, all changing bits can never be all 0s or
all 1s. Thus, the network portion of a Class C address can never be 11000000 00000000 00000000
and never 11011111 11111111 11111111, since in these cases all changing bits would be 0s or 1s.
This implies that Class C addressing can have 20,97,150 networks (2^21 - 2), since the address
space for the NetworkID is 24 bits, only 21 bits are changing bits, and all changing bits can never
be 0s or 1s.
Each network can have 254 hosts (2^8 - 2), since the address space for the HostID is 8 bits and
all changing bits can never be 0s or 1s.
The Class C IP address format is 110NNNNN.NNNNNNNN.NNNNNNNN.HHHHHHHH
Class D Address
Class D address is wherever the first octet of an IP address is any number ranging from 224 to 239, i.e.
XXX.XXX.XXX.XXX (first octet: 224-239)
The very first four bits of the first octet in Class D IP addresses are set to 1110, which are called
the Higher Order Bits.
Class D has an IP address range from 224.0.0.0 to 239.255.255.255.
Class D is reserved for Multicasting.
In multicasting, data is not destined for a particular host; that's why there is no need to extract a
host address from the IP address.
Class E Address
This IP class is reserved for experimental purposes only, like research and development or study.
IP addresses in this class range from 240.0.0.0 to 255.255.255.254. To summarise:
Class  1st Octet Decimal Range  1st Octet High Order Bits  Network/Host ID (N=Network, H=Host)  Default Subnet Mask  Number of Networks  Hosts per Network (Usable Addresses)
A      1-126*                   0                          N.H.H.H                              255.0.0.0            126 (2^7 - 2)       1,67,77,214 (2^24 - 2)
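The class ranges above can be captured in a small illustrative Python function (the function name and the handling of the reserved 0 and 127 ranges are our own choices):

```python
def ipv4_class(address: str) -> str:
    """Classify an IPv4 address (A-E) from its first octet, per classful addressing."""
    first_octet = int(address.split(".")[0])
    if first_octet == 0:
        return "reserved"          # 0.x.x.x denotes the default network
    if first_octet == 127:
        return "loopback"          # 127.x.x.x is reserved for loopback
    if 1 <= first_octet <= 126:
        return "A"                 # high-order bit 0
    if 128 <= first_octet <= 191:
        return "B"                 # high-order bits 10
    if 192 <= first_octet <= 223:
        return "C"                 # high-order bits 110
    if 224 <= first_octet <= 239:
        return "D"                 # high-order bits 1110, multicast
    return "E"                     # 240-255, experimental

assert ipv4_class("10.0.0.1") == "A"
assert ipv4_class("172.16.0.1") == "B"
assert ipv4_class("224.0.0.1") == "D"
```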
Thus a user's computer in company A can have the same address as a user in company B and
thousands of other companies. However, private IP addresses are not reachable from the outside
world.
An IP address is considered private if the IP number falls within one of the IP address ranges
reserved for private networks such as a Local Area Network (LAN). The Internet Assigned Numbers
Authority (IANA) has reserved the following three blocks of the IP address space for private
networks (local networks). These addresses cannot be routed on the Internet, so packets containing
these private addresses are dropped by the Routers. Most ISPs will block the attempt to address
these IP addresses. These IP addresses are used for internal use by companies that need to use
TCP/IP but do not want to be directly visible on the Internet.
Class  Private Network Start Address  Private Network Finish Address  No. of Addresses
A      10.0.0.0                       10.255.255.255                  1,67,77,216
B      172.16.0.0                     172.31.255.255                  10,48,576
C      192.168.0.0                    192.168.255.255                 65,536
In order to communicate with the outside world (the Internet), these IP addresses have to be
translated to public IP addresses; for this, the NAT (Network Address Translation) process is
used.
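A quick way to check the three reserved blocks is Python's standard ipaddress module; the helper below is a sketch that simply tests membership in the three CIDR blocks from the table (the helper name is our own):

```python
import ipaddress

# The three IANA private blocks, expressed as CIDR networks
PRIVATE_BLOCKS = [
    ipaddress.ip_network("10.0.0.0/8"),      # Class A private range
    ipaddress.ip_network("172.16.0.0/12"),   # Class B private range
    ipaddress.ip_network("192.168.0.0/16"),  # Class C private range
]

def is_private(address: str) -> bool:
    """True if the address falls in any of the reserved private blocks."""
    ip = ipaddress.ip_address(address)
    return any(ip in block for block in PRIVATE_BLOCKS)

assert is_private("192.168.1.10") is True
assert is_private("220.227.161.85") is False  # a public (routable) address
```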
would be configured on our NAT device. NAT would be responsible for correlating the private
and public addresses and translating them as needed to support communications.
NAT allows a single device, such as a router, to act as an agent between the Internet (or
"public network") and a local (or "private") network. This means that only a single, unique IP
address is required to represent an entire group of computers.
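The translation idea can be sketched as a small table keyed on the private address and port. This illustrates NAPT (port-based NAT), the variant most home routers use; the public IP and the port numbers are made-up documentation values:

```python
import itertools

# Many private hosts share one public address, distinguished by the
# translated source port. All addresses/ports here are illustrative.
PUBLIC_IP = "203.0.113.5"
_next_port = itertools.count(40000)
nat_table = {}  # (private_ip, private_port) -> translated public source port

def translate_outbound(private_ip: str, private_port: int):
    """Return the (public IP, public port) the outside world will see."""
    key = (private_ip, private_port)
    if key not in nat_table:
        nat_table[key] = next(_next_port)   # allocate a fresh public port
    return PUBLIC_IP, nat_table[key]

# Two internal hosts appear to the Internet as the same public IP...
a = translate_outbound("192.168.0.10", 1035)
b = translate_outbound("192.168.0.11", 1035)
assert a[0] == b[0] == PUBLIC_IP
assert a[1] != b[1]  # ...but on different translated ports
```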
Reserved addresses
The following are reserved addresses:
IP address 0.0.0.0 refers to the default network and is generally used for routing. The network
address 0.0.0.0 designates a default gateway. This is used in routing tables to represent "All
Other Network Addresses".
IP address 255.255.255.255 is called the Broadcast address.
IP address 127.0.0.1 is called the Loopback address. It is used to simplify programming,
testing, and troubleshooting. It allows applications to communicate with the local host in the
same manner as communicating with a remote host, without the need to look up the local
address.
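Python's ipaddress module already recognises these reserved values, which gives a quick way to verify the statements above:

```python
import ipaddress

# 0.0.0.0 is the unspecified/default-network address
assert ipaddress.ip_address("0.0.0.0").is_unspecified

# 127.0.0.1 is the loopback address
assert ipaddress.ip_address("127.0.0.1").is_loopback

# 255.255.255.255 is the broadcast address of the all-encompassing network
assert (ipaddress.ip_address("255.255.255.255")
        == ipaddress.ip_network("0.0.0.0/0").broadcast_address)
```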
In every network the host address which is all zeroes identifies the network itself. This is called
the Network IP Address and is used in routing tables to refer to the whole network.
A host address which is all ones is called the Broadcast Address or Announce Address for
that network. For example, on the Class C network 205.217.146.0, the address
205.217.146.255 is the broadcast address.
Packets addressed to the broadcast address will be received by all hosts on that network. The
network address and the broadcast address are not used as actual host addresses. These are
invalid addresses on the internet. Routers don't route them.
Subnet Mask
We are now moving towards classless addresses i.e. network and host portion of address can
be derived using IP Address with a subnet mask.
The 32-bit IP address contains information about the host and its network, and it is necessary
to distinguish between the two. For this, routers use the Subnet Mask. The subnet mask is
also 32 bits long; its leading 1-bits cover the network portion of the IP address.
Subnet mask helps in creating subnets.
Internet Service Providers (ISPs) may face a situation where they need to allocate IP subnets
of different sizes as per customers' requirements. With the help of the subnet mask, an ISP
can subnet the subnets in a way that results in minimum wastage of IP addresses.
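Subnetting can be demonstrated with the standard ipaddress module: the sketch below splits one /24 block into four /26 subnets by lengthening the mask (the network value is made up for illustration):

```python
import ipaddress

# Split a /24 block into four smaller subnets by extending the subnet
# mask from /24 to /26, i.e. borrowing two host bits for the subnet ID.
block = ipaddress.ip_network("192.168.10.0/24")
subnets = list(block.subnets(new_prefix=26))

assert len(subnets) == 4
assert str(subnets[0]) == "192.168.10.0/26"
assert str(subnets[0].netmask) == "255.255.255.192"   # /26 as a dotted mask
assert subnets[0].num_addresses - 2 == 62             # usable hosts per subnet
```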
3.3 Ports
The word "port" comes from the French porte, which means "door".
Ports can be readily explained with a simple real-world example. We can think of an IP address
as the street address of a building, and the port number as the number of a particular flat
within that building. If a letter (a data packet) is sent to the building (IP) without a flat number
(port number) on it, then nobody would know who it is for (which service it is for). In order for
the delivery to work, the sender has to include the flat number along with the address.
A port is a 16-bit number, which along with an IP address forms a socket. Since port numbers
are specified by 16-bit numbers, the total number of ports is 2^16 = 65,536 (i.e. from 0 to
65535).
Service Ports
A number is assigned to user sessions and server applications in an IP network. The port
number resides in the TCP or UDP header of the packet.
Source Ports
The source port, which can be a random number, is assigned to the client and is used to keep
track of user sessions.
Destination Ports
The destination port is used to route packets on a server to the appropriate network
application. For example, port 80 is the standard port number for HTTP traffic, and port 80
packets are processed by a Web server. Destination ports are typically well-known ports (0-
1023) for common Internet applications such as HTTP, FTP and SMTP. It can also be a
registered port (1024-49151) that vendors use for proprietary applications.
In both TCP and UDP, each packet header will specify a source port and a destination port,
each of which is a 16-bit unsigned integer (i.e. ranging from 0 to 65535) and the source and
destination network addresses (IP-numbers), among other things.
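The source/destination port split can be observed directly with Python's socket module. The sketch below opens a listening socket on the loopback interface (binding to port 0 asks the OS for any free port), connects to it, and shows that the client is assigned a random ephemeral source port:

```python
import socket

# Server side: listen on loopback; port 0 means "any free port"
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
server_port = server.getsockname()[1]   # destination port the client will use

# Client side: the OS picks a random ephemeral source port automatically
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", server_port))
conn, (peer_ip, peer_port) = server.accept()

src_ip, src_port = client.getsockname()           # client's (source IP, source port)
assert (src_ip, src_port) == (peer_ip, peer_port)  # same socket seen from both ends
assert 0 < src_port <= 65535                       # ports are 16-bit numbers

conn.close(); client.close(); server.close()
```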
are very difficult to remember. So we feed the telephone number into our cell phone and
associate it with the name of our acquaintance. When we need to call that person, we select
the name, and our cell phone, based on the contact list, resolves it into a telephone number
and calls the number. The same process happens on the Internet.
All hosts in the Internet are addressed using IP Addresses. Since they are 32 bits in length,
almost all users find it difficult to memorize the numeric addresses. For example, it is easier to
remember www.icai.org rather than 220.227.161.85. The Domain Name System (DNS) was
created to overcome this problem. It is a distributed database that has the host name and IP
address information for all domains on the Internet.
For example, when a Web site address is given to the DNS either by typing a URL in a browser
or behind the scenes from one application to another, DNS servers return the IP address of
the server associated with that name.
In the above example, www.icai.org would be converted into the IP address 220.227.161.85.
Without DNS, we would have to type the four numbers and dots into our browser to retrieve
the Website.
When we want to obtain a host's IP address based upon the host's name, a DNS request is
made by the initial host to a local name server. If the information is available in the local name
server, it is returned; otherwise, the local name server forwards the request to one of the root
servers, and the root server then returns the IP address.
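The lookup order described above (local name server first, then a root server) can be illustrated with a toy resolver. The dictionaries stand in for real DNS servers; only the www.icai.org entry is taken from the text, the other name and address are invented for the example:

```python
# Toy stand-ins for a local name server's cache and a root server's records.
# This is a conceptual illustration, not a real DNS implementation.
LOCAL_CACHE = {"www.icai.org": "220.227.161.85"}
ROOT_SERVER = {"www.example.net": "93.184.216.34"}   # made-up record

def resolve(hostname: str) -> str:
    """Answer from the local cache if possible, else ask the 'root server'."""
    if hostname in LOCAL_CACHE:                       # answer available locally
        return LOCAL_CACHE[hostname]
    if hostname in ROOT_SERVER:                       # forwarded to a root server
        LOCAL_CACHE[hostname] = ROOT_SERVER[hostname] # cache it for next time
        return ROOT_SERVER[hostname]
    raise LookupError(f"cannot resolve {hostname}")

assert resolve("www.icai.org") == "220.227.161.85"
```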
Guaranteed service: Services like VoIP and video are given top priority in
transmission at a guaranteed speed.
From user perspective, customer services like internet banking may have guaranteed services.
Organisation’s Email and access to intranet may be categorised under differentiated services and
internet browsing for internal users may be categorized under best effort basis. Some organisations
use least priority, priority and top priority for categorization of various services over the network for
implementing QoS.
3.9 Summary
In this part, we have learnt about IP Networks the main strength behind internet. We also learnt
about IP addressing scheme, concepts of Public and Private Addresses. Our use of internet is
enriched by browsing various sites reaching those sites are made easy by Domain Name system
as we don’t have to remember IP addresses. We also saw emerging concepts of Network Services
and On- Demand Computing. We also had an overview of Public Global Internet infrastructure.
3.10 References
• http://www.tcpipguide.com/free/t_TCPIPOverviewandHistory.htm
• http://www.garykessler.net/library/tcpip.html
• http://www.microsoft.com/resources/documentation/windows/xp/all/proddocs/en-
us/wireless_networking_overview.mspx?mfr=true
CHAPTER 3: SECURING NETWORKS
PART 1: FIREWALLS
Learning Objectives
To gain understanding of Firewalls
To gain an overview of types of Firewalls
To gain an understanding of Common Implementations of Firewalls
To understand Limitations of Firewalls
To get an overview of Unified Threat Management
To understand Firewall Life Cycle
To understand Baseline Configuration of Firewalls
1.1 Introduction
With the increased use of networked systems, the threat profile to organisations’ information assets
has increased manifold. Organisations’ information resources are accessed by users from different
access points. Further internal users also need to access Internet. However, while Internet access
provides benefits, it enables the outside world to reach and interact with local network assets,
creating a threat to the organisation. This exposes assets to attacks, both malicious and
unintentional.
1.2 Firewall
A firewall is a system or group of systems that enforces an access control policy. Firewalls are an
effective means of protecting a local system or network of systems from network-based security
threats while at the same time affording access to the outside world via wide area networks and
the Internet. A firewall may be a hardware device, a software program running on a secure host
computer, or a combination of hardware and software. Firewalls and associated technical controls
have become fundamental security tools. A firewall is deployed at the perimeter of the network,
i.e. the entry point.
(The term ‘Firewall’ is coined after the firewalls which are used to prevent the spread of Fire from
one area to another; in this case, they limit the damage that could spread from one subnet to
another).
As an access control mechanism, the firewall enforces the security policy between an
organisation’s secured network and the unsecured environment. Based on the rules configured, it
filters the traffic, both inbound and outbound from the secured network, and determines:
Which inside machines, applications and services may be accessed from outside,
Which outsiders are permitted access to which internal resources, and
Which outside services the insiders may access.
every firewall whether in a SOHO (small office home office) environment or implemented in an
enterprise. NAT firewalls automatically provide protection to systems behind the firewall because
they allow connections that originate from the systems inside of the firewall only. It conceals the
internal network.
Personal Firewalls
A personal firewall controls the traffic between a personal computer or workstation on one side and
the Internet or enterprise network on the other side. Personal firewall functionality can be used in
the home environment and on corporate intranets. Typically, the personal firewall is a software
module on the personal computer. Personal firewalls assume that outbound traffic from the system
is to be permitted and inbound traffic requires inspection.
Personal firewall capabilities are normally built into the operating system. In a SOHO environment
having multiple computers connected to the Internet, firewall functionality can also be housed in a
router that connects all of such computers to a DSL, cable modem, or other network interface.
Such a firewall is known as a Personal Firewall Appliance.
packet of data travelling between the Internet and the corporate network. Packet filtering firewalls
work at the network layer of the OSI model, or the Internet layer of TCP/IP model.
The header of a packet would contain:
Type of packet, i.e. TCP, UDP, ICMP, IPSec
IP Address of Source
Port Number of Source
IP Address of Destination
Port Number of Destination
Firewall rule could be based on any of these.
In case, the rule is matched, and the rule permits, the packet is passed to the next server/device
as per the information in the Routing Table of the screening router. If match is found but the rule
disallows, the packet is dropped. However, if match is not found, the action to forward or drop the
packet is taken as per the default parameter.
Sample Access Control List (ACL)
Source Address  Source Port  Destination Address  Destination Port  Protocol  Action
Any             Any          200.1.1.2            80                TCP       Allow
Any             Any          200.1.1.3            53                UDP       Allow
Any             Any          200.1.1.4            25                TCP       Allow
Any             Any          Any other            Any               Any       Deny
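First-match evaluation of such an ACL can be sketched in Python. The rules below mirror the sample table above, with "Any" acting as a wildcard; this is an illustration of the matching logic, not a real packet filter:

```python
# First-match evaluation of the sample ACL: rules are checked top-down
# and the first matching rule decides the packet's fate.
RULES = [
    {"dst": "200.1.1.2", "dport": 80,    "proto": "TCP", "action": "Allow"},
    {"dst": "200.1.1.3", "dport": 53,    "proto": "UDP", "action": "Allow"},
    {"dst": "200.1.1.4", "dport": 25,    "proto": "TCP", "action": "Allow"},
    {"dst": "Any",       "dport": "Any", "proto": "Any", "action": "Deny"},
]

def matches(rule_value, packet_value):
    """'Any' in a rule field acts as a wildcard."""
    return rule_value == "Any" or rule_value == packet_value

def filter_packet(dst, dport, proto, default="Deny"):
    for rule in RULES:
        if (matches(rule["dst"], dst) and matches(rule["dport"], dport)
                and matches(rule["proto"], proto)):
            return rule["action"]   # first matching rule wins
    return default                  # no match: apply the default action

assert filter_packet("200.1.1.2", 80, "TCP") == "Allow"   # web traffic
assert filter_packet("200.1.1.10", 80, "TCP") == "Deny"   # secured server
```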
as per the first three rules, then the last rule is followed and the packets are denied access. As per
the above rules, no traffic is permitted to secured servers 200.1.1.10 and 200.1.1.11.
Advantages of Packet Filtering Firewalls
Packet filtering firewalls have two main strengths: speed and flexibility. They are not very costly
and have low impact on network performance.
Limitations of Packet Filtering Firewalls
Configuring the packet filtering firewalls and defining the Access Criteria poses difficulty.
Most packet filtering firewalls do not support advanced user authentication schemes.
Many packet filtering firewalls lack robust logging facilities as only limited information is
available to the firewall.
The primary focus of packet filtering firewalls is to limit inbound traffic while allowing outbound and
established traffic to flow unimpeded. Thus, packet filter firewalls are well suited to high-speed
environments where logging and user authentication with network resources are not important.
Packet filtering firewalls are prone to the following types of attacks:
IP Address Spoofing attack
In this type of attack, the attacker fakes the IP address of either an internal network host or a trusted
(outside) network host so that the packet will pass the access rules defined in the firewall. For
example, using a TOR browser, which provides proxy services, we can fake our IP address and
breach a packet filtering firewall which has a rule to restrict our IP address.
Source routing attack: In this process, it is possible to define such a route that it bypasses the
firewall. To prevent such attacks, all packets wherein source routing specification is enabled may
be dropped. However, this control will not be effective where the topology permits a route to be
defined.
Tiny Fragment attack: Using this method, an attacker fragments the IP packet into smaller ones
and pushes it through the firewall in the hope that only the first of the sequence of fragmented
packets would be examined and the others would pass without review.
In this example, as a connection is established between the client (source port 1035) and the
remote server on port 80, the stateful inspection firewall makes an entry in the connection status
table. The state of a connection becomes one of the filtering criteria: if an incoming packet matches
an existing connection listed in the table, it will be permitted access without further checking.
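The connection status table can be sketched as a set of connection 4-tuples: outbound connections are recorded, and an inbound packet is admitted without further rule checks only if it belongs to an established connection. This is a conceptual illustration (the addresses are documentation values), not a real firewall:

```python
# Connection status table of a stateful inspection firewall, sketched as a
# set of (src_ip, src_port, dst_ip, dst_port) tuples for outbound connections.
connections = set()

def record_outbound(src_ip, src_port, dst_ip, dst_port):
    """Note an outbound connection initiated from inside the network."""
    connections.add((src_ip, src_port, dst_ip, dst_port))

def allow_inbound(src_ip, src_port, dst_ip, dst_port):
    """An inbound packet is the mirror image of a recorded outbound entry."""
    return (dst_ip, dst_port, src_ip, src_port) in connections

record_outbound("192.168.0.10", 1035, "198.51.100.7", 80)  # client -> web server
assert allow_inbound("198.51.100.7", 80, "192.168.0.10", 1035)      # reply permitted
assert not allow_inbound("198.51.100.7", 80, "192.168.0.10", 4444)  # no such session
```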
Application-level gateways, also called proxies, are similar to circuit-level gateways except that
they are application specific. They force both sides of the conversation to conduct the
communication through the proxy. Application-level gateway firewalls are capable of permitting or
rejecting requests based on the content of the network traffic.
Examples of access control functions that the application-level gateway can process:
Access control based on content - whether the request contains known exploits?
Access control based on user authentication- whether the user is permitted to access the
resource requested?
Access control based on source network zone - whether access to the requested
resource from the source network is allowed? For example, access from the Internet to
certain intranet resources or particular service(s) may be prohibited.
Access control based on source address - whether the sender address is allowed access
to the resource?
The working of an Application level gateway firewall is depicted here.
They are vulnerable to bugs present in the operating system and any application running
on the system.
Chapter 3, Part 1: Firewalls
Firewalls cannot stop internal users from accessing websites with malicious code.
Firewall may not have been implemented on a secured Operating system/hardened
system.
The access list configured in the firewall might not be properly defined, implemented or
adequate.
Network segmentation might not be as per security requirements of the organisation.
Firewalls cannot provide complete protection against viruses.
Monitoring of firewall alerts may be inadequate.
UTM Functionalities
The following functionalities are normally supported by a UTM.
Firewall: performs stateful packet inspection
VPN: enables secure remote access to networks
Gateway anti-virus: prevents malicious payloads
Gateway anti-spam: prevents unsolicited messages from entering the network
Intrusion Prevention: detects and blocks intrusions and certain attacks
Content filtering: stops access to malicious, inappropriate, or questionable websites and
online content.
Bandwidth management: enables effective management of Bandwidth.
Application control: provides visibility and control of application behaviour and content
Centralised reporting: provides consolidated reporting as a basic feature.
They can also support data-loss prevention by blocking accidental or intentional loss of confidential,
proprietary, or regulated data. For enterprises with remote networks or distantly located offices,
UTMs are a means to provide centralised security with complete control over their globally
distributed networks.
Key advantages
Reduced complexity: a single, integrated security solution reduces complexity.
Simpler installation and maintenance: installing one product instead of several makes maintenance and vendor issues simpler.
Easy management: a UTM generally works on a simple plug-and-play architecture and supports a GUI interface for manageability.
Reduced technical training requirements: there is only one product to learn.
Key Disadvantages
It becomes a single point of failure (SPOF) for network traffic: in case of its failure, the complete network is exposed. Vulnerabilities in the OS or the UTM can also make it a single point of compromise.
Deployment of a UTM may have an impact on latency and bandwidth when the UTM cannot keep up with the traffic.
1.3 Summary
A firewall acts as a security policy enforcement point for inbound and outbound access to internal networks as packets cross network boundaries. This is done by inspecting the data that is received
and tracking the connections that are made to determine what data should be permitted and what
data should be denied. Stateful packet filtering examines packet data with memory of connection
state between hosts. Network Address Translation (NAT) is the addressing method that hides
internal network addresses. Application layer gateways (proxy servers) control how internal
network applications access external networks by setting up proxy services.
For a firewall to be able to successfully protect resources, it is critical to implement a design that
lends itself to protecting those resources in the most efficient manner. From simple packet filtering to screened-subnet implementations, there are various implementation strategies.
Although a single firewall will do an adequate job of protecting most resources, certain high security
environments may warrant using multi-firewall architecture to minimize exposure and risk. Like any
other security device firewalls have to be properly configured and monitored.
1.4 References
1. http://csce.uark.edu/~kal/info/private/Networking%20Bookshelf/fire/ch06_01.htm
PART 2: CONFIGURING PERSONAL FIREWALLS
AND IDS
Learning Objectives
To gain understanding of Personal Firewalls
To gain an understanding on configuring Personal Firewalls
To understand Intrusion Detection systems
To understand General Controls in firewalls.
2.1 Introduction
The only way to make a personal computer connected to the Internet 100% secure is to turn it off or disconnect it from the Internet. The real issue is how to make it secure while it is connected. At a minimum, home computers need a personal firewall and a security suite with anti-virus, anti-spyware etc. In a typical SOHO (Small Office Home Office) environment, some protection is needed, and this is provided by personal firewalls. Good personal firewall software is very important because it stops unwanted visitors at the front door.
Common Features
Protects the computer from unwanted incoming connection attempts.
Allows the user to control which programs can and cannot access the local network and/or Internet, and provides the user with information about an application that makes a connection attempt.
Blocks or alerts the user about outgoing connection attempts.
Monitors applications that are listening for incoming connections.
Limitations
Many types of malware can compromise the system and manipulate the firewall, even to the extent of shutting it down; for example, the Bagle worm (a mass-mailing email worm with remote access capabilities) can disable the Windows XP firewall.
These firewalls can raise many false alerts, which can irritate non-tech-savvy users.
These firewalls can be affected by vulnerabilities in the OS.
On the left pane, we can see various links like, Change notification settings, Turn
Windows Firewall on or off, Restore defaults and advanced settings etc.
On the right pane, there are two types of networks link for which we can set firewall
settings. They are Home or Work (Private) networks and Public networks.
By default the Firewall state is ON for both the networks.
Click the Change Notification settings button in the Allowed Programs window.
Select the program or feature and choose whether we want to open it up to home/work (private) networks, public networks, or both.
Click OK to save our changes.
We can see the options of Inbound Rules, Outbound Rules, Connection Security Rules
and Monitoring.
Click on the Inbound Rules link in the left pane. A list of all Inbound Rules is displayed.
On clicking an enabled rule, lists of actions are shown in the Actions Pane. Click on
Disable Rule to disable the selected rule. We can also Cut, Copy and Delete the rule.
We can see the properties of a rule. The following depicts the details of the rule allowing Skype on our system.
Click Monitoring in the left pane to monitor the settings like Active Networks, Firewall State, General
Settings and Logging settings etc.
The wizard then asks for parameters for three parts of the packet header: protocol type (e.g. TCP), local port and remote port.
The wizard now requires parameters for two remaining parts of packet header, i.e. Local and
Remote IP addresses.
Based on the criteria stated, the action to be taken can now be defined. Apart from allowing or blocking the connection, we can make the access rule conditional on an authentication mechanism.
After specifying the rule, we can specify when this rule will apply i.e. based on Location.
Windows firewall thus has many features for advanced setup and is configurable according to our
needs.
The general controls associated with firewall implementation must be understood and followed properly. These controls can be classified as:
Physical Security Controls: The organisations are required to devise, implement and
periodically review policies related to the physical security of all network components, viz. the
servers, the devices, appliances etc.
Operating System Security: A firewall, to be secure, has to be implemented on a hardened, vulnerability-free operating system. OS patches must be examined and applied without delay.
Configuration of firewall policy: The firewall must be configured as per the organisation
policy for allowing/denying traffic both in-bound and out-bound.
Change Control procedures: The access rules must be implemented as per the access
criteria defined in the firewall policy of the organisation. The configuration must be reviewed/
updated from time to time.
Documentation: Network diagram and configuration must be documented and updated as
per changes. The documentation must be thoroughly examined to identify any lapses and
vulnerabilities.
Log Monitoring: Log generation facility must be enabled and these logs/alerts must be
monitored as a part of regular activity. Attempted violations of the access criteria or the
vulnerabilities indicated in these logs should be examined promptly for remedial action.
An Intrusion Detection System (IDS) detects intrusions but may lack the ability to prevent the intrusion. An IDS can detect network scans, packet spoofing, Denial of Service (DoS) attacks, unauthorised attempts to connect to services and improper activity on a system.
Types of IDS
There are two types of IDS which are explained below.
1. Network Intrusion Detection Systems (NIDS): NIDS are placed at various choke points, e.g. routers, switches etc., within the network, and they monitor all traffic to and from devices on the network. Sensors capture all network traffic and analyze the content of individual packets
for malicious traffic. These do not create much system overhead. Some possible downsides
to network-based IDS include encrypted packet payloads and high-speed networks, both of
which inhibit the effectiveness of packet analysis.
2. Host Intrusion Detection Systems (HIDS): HIDS are implemented on individual hosts or devices on the network. A HIDS monitors all packets to and from that host only and alerts the administrator if suspicious activity is detected. It monitors who accessed what. The downside is that it has to be implemented on individual hosts, so deployment and maintenance costs are greater.
Detection Methodologies
Detection methods which an IDS can use could be:
Signature Based
A signature-based IDS monitors packets on the network and compares them against a large database of attack signatures, much as most antivirus software detects malware. The downside is that it can fail against new attacks, for which there is no history or signature. There is always a lag between a new attack being discovered and its signature being added to the database; during this lag the IDS cannot detect the new threat.
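Signature matching can be sketched as a simple substring scan; the signature database below is illustrative, not a real attack signature set:

```python
# Sketch of signature-based detection: each packet payload is compared
# against a database of known attack signatures. The signatures here
# are illustrative stand-ins, not real IDS rules.

SIGNATURE_DB = {
    "sql-injection": b"' OR '1'='1",
    "path-traversal": b"../../etc/passwd",
}

def match_signatures(payload):
    """Return the names of all signatures found in the payload."""
    return [name for name, sig in SIGNATURE_DB.items() if sig in payload]

print(match_signatures(b"GET /page?id=' OR '1'='1"))  # ['sql-injection']
# A genuinely new attack matches nothing, illustrating the lag problem:
print(match_signatures(b"GET /index.html"))           # []
```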
2.4 Summary
Personal firewalls are an effective means of protecting personal computers and hosts. Apart from many vendor products, Windows 7 has a built-in firewall which offers many configuration options. A firewall is like a doorkeeper protecting against attacks from outside, but a hacker could already be inside the system. To counter these attacks we have Intrusion Detection Systems, which provide alerts while monitoring traffic and events.
2.5 References
http://www.giac.org/paper/gsec/1377/host-vs-network-based-intrusion-detection-
systems/102574
https://www.ischool.utexas.edu/~netsec/ids.html
PART 3: CRYPTOGRAPHY AND PKI
Learning Objectives
To gain understanding of cryptography including need and types
To understand digital signatures
To gain an overview of Public Key Infrastructure
To understand Cryptanalysis
3.1 Introduction
Cryptology is the science of secret communication, and cryptography means the practice and study of hiding information. Cryptography is the theory and practice of secure communication: the process of transforming data into something that cannot be understood without some additional information. It has been used for ages in military communications; the purpose was that, should the message fall into the hands of an unintended person, he should not be able to decipher its contents. These days, in any network, particularly the Internet, systems communicate over an untrusted medium, and cryptography becomes necessary. It provides mechanisms for authenticating users on a network, ensuring the integrity of transmitted information and preventing users from repudiating (i.e. rejecting ownership of) their transmitted messages.
3.2 Cryptography
Cryptography is the art of protecting information by transforming it into an unintelligible form; the process is called encryption. The many schemes used for encryption constitute the area of cryptography; these are called cryptographic systems or ciphers. Techniques used for deciphering a message without knowledge of the enciphering details constitute cryptanalysis. A cryptanalyst is one who analyses cryptographic mechanisms and decodes messages without knowledge of the key employed in encryption. Most cryptanalysts are employed in areas such as the military.
Cryptography not only protects data from theft or alteration, but also ensures its authentication.
There are, in general, three types of cryptographic schemes to accomplish these goals:
Secret Key or Symmetric Cryptography (SKC): Uses a single key for both encryption and
decryption.
Public Key or Asymmetric Cryptography (PKC): Uses one key for encryption and another for
decryption.
Message Hash Functions: Reduce a message of variable length to a fixed and usually much
shorter length.
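The fixed-length property of hash functions can be seen with Python's standard hashlib module: whatever the input size, SHA-256 always produces a 256-bit digest.

```python
# A hash function maps input of any length to a fixed-length digest.
import hashlib

short_msg = b"Hi"
long_msg = b"A" * 100000  # 100,000 bytes

d1 = hashlib.sha256(short_msg).hexdigest()
d2 = hashlib.sha256(long_msg).hexdigest()

# Both digests are 64 hex characters, i.e. 256 bits:
print(len(d1), len(d2))

# Even a tiny change in the message yields a completely different digest:
print(hashlib.sha256(b"Hello").hexdigest() ==
      hashlib.sha256(b"hello").hexdigest())  # False
```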
In this case if Ram sends a message to Rahim encrypted with Key 1 then Rahim can only decrypt
it using Key 2, the message can in no way be decrypted with Key 1.
This Asymmetric Encryption approach is used mainly for exchange of Symmetric Keys and Digital
Signatures.
For example: RSA: The first, and still the most common PKC implementation, named after the
three MIT mathematicians who developed it - Ronald Rivest, Adi Shamir, and Leonard Adleman in
1977. RSA today is used in hundreds of software products and can be used for key exchange,
digital signatures, or encryption of small blocks of data. The key size used these days is 2048 bits, i.e. 2^2048 possible keys.
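The RSA operations can be illustrated with the classic small textbook primes below; real keys use primes hundreds of digits long, so these toy sizes are for demonstration only:

```python
# Toy RSA with tiny textbook primes (never use such sizes in practice).
p, q = 61, 53
n = p * q                 # 3233, the modulus
phi = (p - 1) * (q - 1)   # 3120
e = 17                    # public exponent, coprime to phi
d = pow(e, -1, phi)       # 2753, private exponent (modular inverse, Python 3.8+)

m = 65                    # message encoded as a number smaller than n
c = pow(m, e, n)          # encrypt with the public key (e, n)
print(c)                  # 2790
print(pow(c, d, n))       # decrypt with the private key (d, n) -> 65
```

The same pair of operations, applied in the opposite order (private key first), is what underlies digital signatures.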
Comparison between Symmetric and Asymmetric Key Encryption Algorithms
Symmetric Key / Private Key Encryption:
Key size is generally small compared to public key encryption.
Less secure than the asymmetric method because of the key management problem.
Asymmetric Key / Public Key Encryption:
Key size is generally large compared to private key encryption.
More secure, because private keys are never revealed to anyone.
PKI is a comprehensive system that provides public-key encryption and digital signature services
to ensure confidentiality, access control, data integrity, authentication and non-repudiation. So the
basic purpose of PKI is to help in maintaining the attributes of trust in any electronic transaction.
Today we are increasingly using electronic communication over internet for all forms of
transactions, from E- Commerce to E-Filing of documents. In all these transactions the
requirements are:
Identify users accessing sensitive information (Authentication)
Control who accesses information (Access Control)
Ensure communication is private even though carried over the Internet (Privacy/Confidentiality)
Ensure data has not been tampered with (Integrity)
Provide a digital method of signing information and transactions (Non-repudiation)
7. He uses the symmetric key so obtained to decrypt the signed message to get the plain text (decryption of the message).
8. He then uses Ram's (the sender's) public key to decrypt the digital signature, i.e. the encrypted message digest (decryption of the message digest).
9. The plain text so obtained in step 7 is subjected to the hash function and the message digests are compared. If they are the same, the message is accepted.
10. Hence he achieves message integrity, identity authentication, non-repudiation and
confidentiality.
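The receiver-side steps above can be sketched end to end in Python. The toy RSA key and the XOR "cipher" standing in for a real symmetric algorithm (such as AES) are illustrative simplifications, not a real implementation:

```python
import hashlib

# Toy RSA pair for the sender (Ram); real keys are far larger.
n, e, d = 3233, 17, 2753   # public key (e, n), private key (d, n)

def digest(msg):
    # Reduce the SHA-256 digest mod n so it fits the toy key size.
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

def xor_cipher(data, key):
    # Stand-in for a real symmetric cipher such as AES.
    return bytes(b ^ key for b in data)

# --- Sender: sign the digest, then encrypt with a symmetric session key ---
message = b"Pay Rs. 500 to Rahim"
signature = pow(digest(message), d, n)   # digest "encrypted" with private key
session_key = 0x5A
ciphertext = xor_cipher(message, session_key)

# --- Receiver: steps 7 to 9 ---
plaintext = xor_cipher(ciphertext, session_key)  # step 7: decrypt the message
recovered = pow(signature, e, n)                 # step 8: decrypt the digest
print(plaintext == message)                      # True
print(recovered == digest(plaintext))            # step 9: digests match -> True
```

If the message were altered in transit, the recomputed digest in step 9 would no longer match the recovered one, and the message would be rejected.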
At the sender's end
Key storage and security is a huge challenge for any Certifying Authority, and one of the biggest concerns is the CA's root private key, because compromise of a CA private key would allow cyber criminals to start forging their own trusted certificates. Some of the measures adopted by CAs for key management include:
CA key is stored in a Hardware Security Module (HSM).
The machines are locked and stored in steel vaults, with heavy physical security.
The HSM will log all signatures.
The machine is under 24/7 video surveillance, with off-site recording.
For the public keys of subscribers, key repositories are used to store certificates and publicly required information, so that applications can retrieve them on behalf of users. LDAP (Lightweight Directory Access Protocol) is one of the protocols used to access these repositories.
3.5 Cryptanalysis
Cryptanalysis mainly deals with methods of recovering the plaintext from ciphertext without using the key. In other words, it is the study of methods for obtaining the meaning of encrypted information without access to the secret information that is normally required to do so. It also deals with identifying weaknesses in a cryptosystem. The term cryptanalysis is also used to refer to any attempt to circumvent the security of other types of cryptographic algorithms and protocols, not just encryption. There are many types of cryptanalytic attacks. The basic assumption is that the cryptanalyst has complete knowledge of the encryption algorithm used.
For example, in a known-plaintext attack the cryptanalyst has access not only to the ciphertext of several messages, but also to the plaintext of those messages. Using the ciphertext and its corresponding plaintext, he deduces the key used to encrypt the messages.
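A toy example makes this concrete: with a single-byte XOR cipher, one known ciphertext/plaintext pair reveals the key immediately. The cipher and messages are illustrative:

```python
# Known-plaintext attack on a single-byte XOR cipher: given one
# ciphertext/plaintext pair, the key is recovered at once and then
# decrypts every other message enciphered with the same key.

def xor_encrypt(data, key):
    return bytes(b ^ key for b in data)

secret_key = 0x3C                       # unknown to the analyst
known_plain = b"HELLO"
known_cipher = xor_encrypt(known_plain, secret_key)
other_cipher = xor_encrypt(b"ATTACK AT DAWN", secret_key)

# The cryptanalyst XORs the known pair to deduce the key ...
deduced_key = known_cipher[0] ^ known_plain[0]
print(deduced_key == secret_key)               # True
# ... and can now read any other intercepted message:
print(xor_encrypt(other_cipher, deduced_key))  # b'ATTACK AT DAWN'
```

Real ciphers are designed so that even large quantities of known plaintext do not reveal the key, but the attack model is exactly the one shown here.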
3.6 Summary
Cryptography, the science and art of coding messages, provides us a method to transmit messages over open networks like the Internet and still achieve the objectives of confidentiality, integrity, authenticity and non-repudiation. Digital certificates provide a means to digitally sign a message. PKI affords us the infrastructure to manage asymmetric keys and a means of certifying the authenticity of the holder of a key.
3.7 References
http://www.garykessler.net/library/crypto.html
http://en.wikibooks.org/wiki/Cryptography
http://resources.infosecinstitute.com/role-of-cryptography/
http://www.di-mgt.com.au/rsa_alg.html#simpleexample
PART 4: APPLICATION OF CRYPTOGRAPHIC
SYSTEMS
Learning Objectives
To understand uses of cryptographic Systems
To gain an overview of SSL/ TLS
To gain an overview of cryptographic systems like IPSec, SSH, SET
To gain an overview of generic Enterprise Network Architecture in a Bank.
To understand risks and controls in Enterprise Network
To understand auditing network security
4.1 Introduction
The need to send sensitive information over the Internet is increasing, and so is the necessity to secure information in transit through the Internet as well as intranets. Cryptography is used for securing the transmission of messages, protecting data, authenticating the sender, and providing privacy and security in any situation where information is not intended for the public.
The SSL handshake between a web browser and a web server proceeds as follows:
1. Browser connects to a web server (website) secured with SSL. Browser requests that the
server identify itself.
2. Server sends a copy of its SSL Certificate, including the server’s public key.
3. Browser checks the certificate root against a list of trusted CAs and to verify that the
certificate is unexpired, unrevoked, and that its common name is valid for the website that
it is connecting to. If the browser trusts the certificate, it creates, encrypts, and sends back
a symmetric session key using the server’s public key.
4. Server decrypts the symmetric session key using its private key and sends back an
acknowledgement encrypted with the session key to start the encrypted session.
5. Server and Browser now encrypt all transmitted data with the session key.
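Steps 3 to 5 can be sketched with a toy RSA key pair for the server and XOR standing in for the real symmetric cipher; certificate validation (the trust checks of step 3) is omitted in this illustrative sketch:

```python
# Sketch of the SSL key exchange above, with a toy RSA server key and
# XOR standing in for the real symmetric cipher (illustrative only).

n, e, d = 3233, 17, 2753          # server's toy RSA key pair

def xor_cipher(data, key):
    return bytes(b ^ (key & 0xFF) for b in data)

# Step 3: the browser generates a session key and encrypts it with the
# server's public key taken from the certificate.
session_key = 99
wrapped_key = pow(session_key, e, n)

# Step 4: the server recovers the session key with its private key.
server_session_key = pow(wrapped_key, d, n)
print(server_session_key == session_key)        # True

# Step 5: both sides now encrypt application data with the session key.
request = xor_cipher(b"GET /account", session_key)
print(xor_cipher(request, server_session_key))  # b'GET /account'
```

The asymmetric operation is used only once, to exchange the session key; the bulk traffic is then protected by the much faster symmetric cipher.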
Technically, SSL is a transparent protocol which requires little interaction from the end user when
establishing a secure session. In the case of a browser for instance, users are alerted to the
presence of SSL when the browser displays a padlock. This is the key to the success of SSL – it
is an incredibly simple experience for end users.
Almost any service on the Internet can be protected with SSL. SSL is being used for
Secure online credit card transactions.
Secure system logins and any sensitive information exchanged online e.g. secure Internet
Banking session
Secure cloud-based computing platforms.
Secure the connection between E-mail Client and E-mail Server.
Secure the transfer of files over https and FTP(s) services.
Secure intranet based traffic such as internal networks, file sharing, extranets, and
database connections.
One of the limitations of HTTPS is that it slows down web services.
In view of difficulty in managing the de-centralised environment, most of the banks prefer the
centralised approach. As the networks are now being used more and more for data, voice and
video, the MPLS (multi-protocol label switching) technology supported networks set up by various
service providers are extensively being used as the back-bone. MPLS has the advantage of
providing high availability, economics of scale and flexibility of implementing QoS (Quality of
Service) for managing different types of traffic.
The back-bone is provided by the service provider which is usually optical fiber with redundant
routes. These are central conduits designed to transfer network traffic at high speeds, and are
designed to maximize the reliability and performance of large-scale, long-distance data
communications.
The last mile primary links, in most cases, are the leased lines backed up with secondary link –
normally ISDN, Wi-Fi or satellite link. The last mile connects the branch/office to the nearby POP
(Point of Presence) of the Service Provider.
The DC and the DRC are in different seismic zones so that, in case of an earthquake, both of them
are not impacted simultaneously. The bank may have, in addition to the DC and DRC, a nearby DC. This is to curtail data loss for mission-critical applications. Normally banks have a separate nearby DC at a location within 20-30 km of the DC. The RPO (Recovery Point Objective) and the RTO (Recovery Time Objective) determine the need for a nearby data centre. The near-site DC is
connected over point to point redundant links with DC through two or more different service
providers for zero data loss. The DC and DR are connected to the WAN cloud (MPLS) through
primary (active) and the secondary (redundant) links by using different routers working in High
Availability mode. The DC and DR are connected through redundant replication links.
Therefore, when the production site fails, the nearby data center would contain all transactions that
were executed up to the time of the outage. The Near Data Center is linked to both the DC site and
the DR site through redundant links. In the event of a DC failure the backup site brings its database
up-to-date by establishing a session with the Near Data Center.
II. Security
The private and public domain servers and other assets are hosted in DC, DR and near-site DC in
different demilitarized zones (DMZ). The DMZs are created through sets of firewalls and VLANs
implemented over multiple layer 2 switches.
All data to and from the WAN to branches, DC, DRC and the near-DC site is encrypted. Similarly, layers of security are implemented over all secured assets through firewalls, intrusion detection and
prevention systems, database activity monitoring tools, anti-virus solutions, end-point security
tools, access control systems etc. Application servers that need to be directly accessed from the
Internet are placed in the DMZ between the Internet and the internal enterprise network, which
allows internal hosts and Internet hosts to communicate with servers in the DMZ.
In order to have access to the Internet, a minimum of two links from two different Internet Service Providers (ISPs) are commissioned at the DC. This ISP-level redundancy is built at the DC so that, if one of the links/ISPs is not available, the Internet-facing servers at the DC remain accessible using the second link from the other ISP. Redundant Internet links are also provided at the DR.
For monitoring internal network traffic, Intrusion Detection Systems are in place.
Please refer to Chapter-6 of Module-4 for understanding details of network security.
4.7 Summary
Cryptographic systems provide the ability to communicate securely over networks. Many secure protocols and frameworks apply cryptographic techniques, such as SSL, HTTPS, IPSec, SSH, SET and S/MIME, to name a few. The generic enterprise network architecture in a bank gave us an understanding of how a network is implemented in a typical bank. To mitigate the risks in an enterprise network, we need to secure networks by implementing adequate controls. The effectiveness of controls can be gauged through auditing.
4.8 References
http://resources.infosecinstitute.com/role-of-cryptography/
http://en.wikibooks.org/wiki/Cryptography
http://www.moserware.com/2009/06/first-few-milliseconds-of-https.html
http://www.di-mgt.com.au/rsa_alg.html#simpleexample