
Career – Oriented Course: E – Commerce

2.3 – OPERATING SYSTEM


Syllabus
UNIT – I: Operating System - Evolution of operating System – features of operating
System – Basic Component of an operating system – Classification of Operating system –
Single user System – Batch processing system – Multiprogramming system – Time
sharing – Distributed system – Real time system.

UNIT – II: Memory management – Need for memory Management – Types of Memory –
RAM – ROM – PROM – EPROM – Virtual memory – Paging – Thrashing.

UNIT – III: Process Management – Process – States of a process – Process Scheduling –
FIFO Scheduling – Round robin Scheduling – Priority Scheduling – Shortest job first
Scheduling.

UNIT – IV: File Management – File organisation – File manager – Functions of file
Manager – File Properties – File access methods – Sequential access methods – direct
access methods – indexed access method.

UNIT – V: Network Management – Need for networking – Peer to Peer Network –
Master – Slave network – Combination network – Protocols – TCP/IP.
STUDY MATERIAL
UNIT – I
OPERATING SYSTEM:
Introduction:
An operating system acts as an intermediary between the user of a computer and
computer hardware. The purpose of an operating system is to provide an environment in
which a user can execute programs conveniently and efficiently.
An operating system is software that manages computer hardware. The hardware must
provide appropriate mechanisms to ensure the correct operation of the computer system
and to prevent user programs from interfering with the proper operation of the system.
An operating system is concerned with the allocation of resources and services, such as
memory, processors, devices, and information. The operating system correspondingly
includes programs to manage these resources, such as a traffic controller, a scheduler,
a memory management module, I/O programs, and a file system.
History of Operating System
The operating system has been evolving through the years. The following table shows
the history of OS.

Generation | Year | Electronic device used | Types of OS / devices
First | 1945-55 | Vacuum tubes | Plug boards
Second | 1955-65 | Transistors | Batch systems
Third | 1965-80 | Integrated circuits (ICs) | Multiprogramming
Fourth | Since 1980 | Large-scale integration (LSI) | PC
Functionalities of Operating System
 Resource Management: When multiple users access the system in parallel, the OS
acts as a resource manager. Its responsibility is to share the hardware among the
users and to reduce the load on the system.
 Process Management: This includes tasks such as the scheduling and termination of
processes, carried out with the help of CPU scheduling algorithms.
 Storage Management: The file system mechanism is used to manage storage; NTFS,
CIFS, NFS, etc. are examples of file systems. All data is stored on the various tracks
of hard disks, which are managed by the storage manager.
 Memory Management: Refers to the management of primary memory. The
operating system has to keep track of how much memory has been used and by
whom. It has to decide which process needs memory space and how much. OS also
has to allocate and deallocate the memory space.
 Security/Privacy Management: Privacy is also provided by the Operating system
using passwords so that unauthorized applications can’t access programs or data. For
example, Windows uses Kerberos authentication to prevent unauthorized access to
data.
The Operating System as a User Interface:
1. User
2. System and application programs
3. Operating system
4. Hardware
Every general-purpose computer consists of hardware, an operating system, system
programs, and application programs. The hardware consists of memory, CPU, ALU, I/O
devices, peripheral devices, and storage devices. The system program consists of
compilers, loaders, editors, OS, etc. The application program consists of business
programs and database programs.
Conceptual View of Computer System
Every computer must have an operating system to run other programs.
The operating system coordinates the use of the hardware among the various system
programs and application programs for various users. It simply provides an environment
within which other programs can do useful work.
The operating system is a set of special programs that run on a computer system that
allows it to work properly. It performs basic tasks such as recognizing input from the
keyboard, keeping track of files and directories on the disk, sending output to the display
screen, and controlling peripheral devices.
Layered Design of Operating System

Fig. Layered OS
The extended machine provides operations like context save, dispatching, swapping, and
I/O initiation. The operating system layer is located on top of the extended machine
layer. This arrangement considerably simplifies the coding and testing of OS modules by
separating the algorithm of a function from the implementation of its primitive
operations. It is now easier to test, debug, and modify an OS module than in a monolithic
OS. We say that the lower layer provides an abstraction called the extended machine,
and the operating system layer forms the top layer of the OS.
Purposes and Tasks of Operating Systems
An operating system performs several tasks and serves several purposes, both of which
are described below.
Purposes of an Operating System
 It controls the allocation and use of the computing system’s resources among the
various users and tasks.
 It provides an interface between the computer hardware and the programmer that
simplifies coding and debugging of application programs and makes them feasible.
Tasks of an Operating System
1. Provides the facilities to create and modify programs and data files using an editor.
2. Provides access to the compiler for translating the user program from high-level
language to machine language.
3. Provides a loader program to move the compiled program code to the computer’s
memory for execution.
4. Provides routines that handle the details of I/O programming.
I/O System Management
The module that keeps track of the status of devices is called the I/O traffic controller.
Each I/O device has a device handler that resides in a separate process associated with
that device.
The I/O subsystem consists of:
 A memory management component that includes buffering, caching, and spooling.
 A general device driver interface.
Advantages of Operating System
 It helps in managing the data present in the device i.e. Memory Management.
 It helps in making the best use of computer hardware.
 It helps in maintaining the security of the device.
 It helps different applications run efficiently.
Disadvantages of Operating System
 Operating Systems can be difficult for someone to use.
 Some OS are expensive and they require heavy maintenance.
 Operating Systems can come under threat if used by hackers.
EVOLUTION OF OPERATING SYSTEM:
An operating system is a type of software that acts as an interface between the user and
the hardware. It is responsible for handling various critical functions of the computer or
any other machine. Various tasks that are handled by OS are file management, task
management, garbage management, memory management, process management, disk
management, I/O management, peripherals management, etc.
Generation of Operating System
Below are four generations of operating systems.
 The First Generation
 The Second Generation
 The Third Generation
 The Fourth Generation
1. The First Generation (1940 to early 1950s)
The first electronic computers, built in the 1940s, had no operating system. Early
computer users had complete control over the machine and wrote programs in pure
machine language for every task. During this generation, a programmer could merely
execute and solve basic mathematical calculations, and an operating system was not
needed for these computations.
2. The Second Generation (1955 – 1965)
The first operating system, GM-NAA I/O, was developed in the mid-1950s by General
Motors for an IBM computer. The second-generation operating system was based on a
single-stream batch processing system: it gathered all related jobs into groups, or
batches, submitted on punched cards, and finished them one after another.
3. The Third Generation (1965 – 1980)
Batch processing carried over from the second generation: similar jobs were gathered
into batches and submitted on punched cards, control was transferred to the operating
system upon each job’s completion, whether routine or unexpected, and the operating
system cleaned up after each job finished before reading and starting the next one.
Large, professionally operated machines known as mainframes were introduced in this
period. In the late 1960s, operating system designers were able to create a new kind of
operating system capable of multiprogramming: keeping several jobs in memory at once
and switching the CPU among them.
Multiprogramming was introduced so that the CPU could be kept active at all times by
carrying out multiple jobs on a computer at once. With the release of the DEC PDP-1 in
1961, minicomputers saw a new phase of growth and development.
4. The Fourth Generation (1980 – Present Day)
Personal computers grew out of these minicomputers, and the fourth generation of
operating systems is linked to the evolution of the personal computer. The third-
generation minicomputers and the personal computer had many similarities; the main
difference was cost, since personal computers were far cheaper than minicomputers.
The development of Microsoft and the Windows operating system was a significant
influence on the rise of personal computers. Microsoft was founded in 1975 by Bill
Gates and Paul Allen, who set out to advance personal computing. MS-DOS was
released in 1981, but users found its complex commands extremely challenging to
decipher, and the first version of the Windows operating system followed in 1985.
Windows is now the most widely used and well-liked operating system available.
Microsoft has since released a number of Windows versions, including Windows 95,
Windows 98, Windows XP, and Windows 7; the majority of Windows users currently
run Windows 10. Apple’s macOS is another well-known operating system in addition to
Windows.
TYPES OF OPERATING SYSTEM
Operating systems have evolved over the years, going through several changes before
reaching their present form. These changes are known as the evolution of operating
systems. An OS improves with the invention of new technology: it absorbs the features
of the new technology and becomes more powerful. Let us see the evolution of the
operating system year-wise in detail:
 No OS – (0s to 1940s)
 Batch Processing Systems -(1940s to 1950s)
 Multiprogramming Systems -(1950s to 1960s)
 Time-Sharing Systems -(1960s to 1970s)
 Introduction of GUI -(1970s to 1980s)
 Networked Systems – (1980s to 1990s)
 Mobile Operating Systems – (Late 1990s to Early 2000s)
 AI Integration – (2010s to ongoing)
1. No OS – (0s to 1940s)
Before the 1940s there was no operating system. Users had to type the instructions for
each task manually, in machine language (a language of 0s and 1s). At that time it was
very hard to implement even a simple task, and doing so was time consuming and not
user-friendly, because machine language demands a deep understanding that few people
had.
2. Batch Processing Systems -(1940s to 1950s)
With the passage of time, batch processing systems came onto the market. Users could
now write their programs on punched cards and hand them to the computer operator.
The operator sorted similar jobs into batches and served each batch (group of jobs) to
the CPU one by one: the CPU executed the jobs of one batch and then moved to the jobs
of the next batch in sequence.
3. Multiprogramming Systems -(1950s to 1960s)
Multiprogramming was the operating system concept where the real revolution began. It
gave the user the facility to load multiple programs into memory, with a specific portion
of memory given to each program. When one program is waiting for an I/O operation
(which takes a long time), the OS permits the CPU to switch from that program to
another (the first in the ready queue), so execution continues without the CPU sitting
idle.
4. Time-Sharing Systems -(1960s to 1970s)
Time-sharing systems are an extended version of multiprogramming systems. One extra
feature was added: to keep any single program from using the CPU for too long, every
program gets access to the CPU after a certain interval of time. The OS switches from
one program to another at fixed intervals so that every program gets its share of the CPU
and can complete its work.
5. Introduction of GUI -(1970s to 1980s)
With the passage of time, Graphical User Interfaces (GUIs) arrived. For the first time the
OS became truly user-friendly and changed the way people interact with computers. A
GUI gives the computer system visual elements, which made the user’s interaction with
the computer more comfortable: users can simply click on visual elements rather than
typing commands. The icons, menus, and windows of Microsoft Windows are examples
of GUI features.
6. Networked Systems – (1980s to 1990s)
In the 1980s, the popularity of computer networks was at its peak, and a special type of
operating system was needed to manage network communication. Operating systems
such as Novell NetWare and Windows NT were developed for this purpose; they gave
users the facility to work in a collaborative environment and made file sharing and
remote access very easy.
7. Mobile Operating Systems – (Late 1990s to Early 2000s)
The invention of smartphones created a big revolution in the software industry. To
handle the operation of smartphones, a special type of operating system was developed;
examples include iOS and Android. These operating systems have been optimized over
time and have become more powerful.
8. AI Integration – (2010s to ongoing)
With the passage of time, artificial intelligence came into the picture. Operating systems
integrate AI features such as Siri, Google Assistant, and Alexa and have become more
powerful and efficient in many ways. Combined with the operating system, these AI
features create entirely new capabilities such as voice commands, predictive text, and
personalized recommendations.
CHARACTERISTICS / FEATURES OF OPERATING SYSTEM:
Let us now discuss some of the important characteristic features of operating systems:
 Device Management: The operating system keeps track of all the devices. So, it is
also called the Input/Output controller that decides which process gets the device,
when, and for how much time.
 File Management: It allocates and de-allocates the resources and also decides who
gets the resource.
 Job Accounting: It keeps track of time and resources used by various jobs or users.
 Error-detecting Aids: These contain methods that include the production of dumps,
traces, error messages, and other debugging and error-detecting methods.
 Memory Management: It keeps track of the primary memory, like what part of it is
in use by whom, or what part is not in use, etc. and It also allocates the memory when
a process or program requests it.
 Processor Management: It allocates the processor to a process and then de-allocates
the processor when it is no longer required or the job is done.
 Control on System Performance: It records the delay between a request for a
service and the system’s response.
 Security: It prevents unauthorized access to programs and data using passwords or
some kind of protection technique.
 Convenience: An OS makes a computer more convenient to use.
 Efficiency: An OS allows the computer system resources to be used efficiently.
 Ability to Evolve: An OS should be constructed in such a way as to permit the
effective development, testing, and introduction of new system functions at the same
time without interfering with service.
 Throughput: An OS should be constructed so that it can give maximum
throughput (number of tasks per unit time).
BASIC COMPONENT OF AN OPERATING SYSTEM:
There are various components of an operating system, but here we will discuss the most
important ones. The list of important components is given below:
1. Process Management 2. Command Interpreter 3. Signals 4. Security Management
5. Secondary Storage Management 6. Files Management 7. System Calls 8. Network
Management 9. I/O Device Management 10. Main Memory Management
Now we will discuss each of these components briefly.
1. Process Management
Process management is one of the key components of operating system that involves
creating, scheduling, and terminating processes. A process is a program that is in
execution and has its own memory space and system resources.
When a user runs a program, the operating system creates a new process for it and assigns
it a unique process ID. The process is then added to a queue of ready processes, where it
waits to be scheduled for execution by the operating system’s scheduler. The scheduler
uses various algorithms to determine which process should be executed next, based on
factors such as priority level and resource requirements.
The operating system also manages the resources used by processes, such as memory and
processing power, to ensure that each process has the resources it needs to function
properly.
Processes can also communicate with each other through inter-process communication
(IPC) mechanisms, such as pipes and sockets, provided by the operating system. The
operating system also provides mechanisms for synchronizing and coordinating the
actions of multiple processes, such as semaphores and monitors.
When a process completes its execution or is terminated by the user, the operating system
releases any resources the process was using and removes it from the queue of ready
processes. This ensures that resources are used efficiently and the computer’s performance
is not affected negatively by unnecessary processes.
Overall, process management is a critical function of an operating system that ensures that
processes are created, scheduled, and terminated efficiently and that the computer’s
resources are used effectively.
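To make this create/wait/terminate cycle concrete, here is a minimal POSIX sketch in C;
it is an illustration rather than part of the syllabus, and the printed messages are made up.

/* Minimal POSIX sketch: a parent creates a child with fork() and
 * reclaims its resources with waitpid(), mirroring the process
 * life cycle described above. Build with: cc demo.c -o demo */
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();                  /* OS creates a new process */
    if (pid < 0) {
        perror("fork");                  /* creation failed */
        return EXIT_FAILURE;
    }
    if (pid == 0) {                      /* child: has its own unique PID */
        printf("child  PID %d running\n", (int)getpid());
        _exit(0);                        /* child terminates */
    }
    int status;
    waitpid(pid, &status, 0);            /* parent waits; OS then frees
                                            the child's resources */
    printf("parent PID %d reaped child %d\n", (int)getpid(), (int)pid);
    return 0;
}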
2. Command Interpreter
The most widely used component of an operating system is the command interpreter. A
command interpreter, also known as a shell, is a program in an operating system that
interprets and executes commands from users or other programs. It provides an interface
for users to interact with the system, allowing them to launch programs, navigate the file
system, and perform other tasks. The most common type of command interpreter is the
command-line interface (CLI), where users enter commands through a text-based
interface.
Different operating systems have different command interpreters, such as the Windows
Command Prompt, and macOS and Linux’s Bash. CLI shells generally have a prompt that
indicates the system is ready to receive commands, and users enter commands by typing
them in and hitting enter. The command interpreter then parses the command and sends it
to the appropriate part of the operating system for execution. Some shells also support
command history and tab completion, which makes it easier for users to navigate the
system and enter commands quickly.
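The read-parse-execute loop described above can be sketched in C. This toy shell is an
illustration only, assuming a POSIX system: it runs single-word commands with no
arguments, pipes, or built-ins.

/* Toy command interpreter: print a prompt, read a line, fork a child
 * to exec the command, and wait for it before prompting again. */
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    char line[256];
    for (;;) {
        printf("mysh> ");                     /* the shell prompt */
        fflush(stdout);
        if (!fgets(line, sizeof line, stdin))
            break;                            /* EOF (Ctrl-D): leave the shell */
        line[strcspn(line, "\n")] = '\0';     /* strip the trailing newline */
        if (line[0] == '\0')
            continue;                         /* empty line: just re-prompt */
        pid_t pid = fork();
        if (pid == 0) {                       /* child runs the command */
            execlp(line, line, (char *)NULL);
            perror("exec");                   /* reached only if exec failed */
            _exit(127);
        }
        waitpid(pid, NULL, 0);                /* shell waits, then re-prompts */
    }
    return 0;
}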
3. Signals
Signals are one of the components of operating system. In an operating system, signals are
a way for one process to communicate with another. They are typically used to notify a
process that a particular event has occurred, such as a keyboard interrupt or a child
process exiting. Signals can be generated by the operating system itself or by other
processes. A process can choose to either handle the signal or ignore it. Signals can also
be used to terminate a process or force it to terminate. Common signals include SIGHUP,
SIGINT, and SIGTERM. Signal handling is implemented differently in different operating
systems, but the basic concept is the same. A signal has the special property that it can
temporarily suspend the currently running process, store its information on the stack,
and start running the special handling procedure.
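A minimal C sketch of this mechanism on a POSIX system: the handler for SIGINT (sent
when the user presses Ctrl-C) only sets a flag, which the main program then notices.

/* Install a SIGINT handler: the kernel suspends the normal flow,
 * runs on_sigint(), then resumes the interrupted process. */
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static volatile sig_atomic_t got_sigint = 0;

static void on_sigint(int signo) {
    (void)signo;
    got_sigint = 1;            /* handlers should only do simple work */
}

int main(void) {
    struct sigaction sa;
    sa.sa_handler = on_sigint;
    sigemptyset(&sa.sa_mask);  /* block no extra signals in the handler */
    sa.sa_flags = 0;
    sigaction(SIGINT, &sa, NULL);        /* register the handler */

    while (!got_sigint)
        pause();                         /* sleep until a signal arrives */
    printf("caught SIGINT, exiting cleanly\n");
    return 0;
}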
4. Security Management
Operating systems include various security management features to protect the system and
its resources from unauthorized access and malicious attacks. These features can include
things like user authentication, access control, encryption, and firewall protection. User
authentication ensures that only authorized users can access the system, while access
control determines what resources and operations a user is allowed to access and
perform. Encryption is used to secure data and communications, and firewall protection is
used to block unauthorized access to the system over a network. Additionally, many
operating systems also include security monitoring and incident response capabilities to
detect and respond to security incidents. Overall, security management in operating
systems is an essential aspect to protect the system and its resources from unauthorized
access and malicious attacks.
5. Secondary Storage Management
Secondary storage simply refers to the storage where the user can store data and from
which the user can easily retrieve it; the management of that space is an important
component of the operating system. Secondary storage management in
operating systems refers to the management of non-volatile data storage devices such as
hard drives, solid-state drives, and external storage devices. This includes tasks such as
allocating space on these devices, managing files and directories, and providing access to
the stored data. The file system, which is a part of the operating system, is responsible for
these tasks. It organizes the storage space on the secondary storage devices and provides a
logical view of the stored data to the user and applications. Common file systems include
NTFS, FAT32, and ext4.
6. Files Management
Files are used for long-term storage. File management is one of the components of
operating system and it refers to the process of organizing, storing, and accessing files on
a computer. The operating system’s file system is responsible for managing files and
directories, which includes creating, deleting, and modifying them. The file system
organizes the files and directories in a hierarchical structure, with the root directory at the
top, followed by subdirectories and files. Each file and directory is assigned a unique
name and location on the storage device. The operating system provides interfaces for
users and applications to access and manipulate files, such as creating, reading, writing,
and deleting files. Additionally, file permissions can be set to control access to files and
directories by different users and applications. File management mainly includes three
operations: file creation, file deletion, and read and write operations.
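The three operations just listed can be illustrated with POSIX calls. The file name
demo.txt, its contents, and the 0644 permission bits are made-up values for this sketch.

/* File management sketch: create a file, write to it, read it back,
 * then delete it. Error handling is kept minimal for brevity. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    int fd = open("demo.txt", O_CREAT | O_WRONLY | O_TRUNC, 0644); /* create */
    if (fd < 0) { perror("open"); return 1; }
    write(fd, "hello\n", 6);                      /* write operation */
    close(fd);

    char buf[16];
    fd = open("demo.txt", O_RDONLY);
    ssize_t n = read(fd, buf, sizeof buf);        /* read operation */
    close(fd);
    if (n > 0)
        write(STDOUT_FILENO, buf, (size_t)n);     /* echo what was read */

    unlink("demo.txt");                           /* file deletion */
    return 0;
}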
7. System Calls
A system call is a request made by a program to the operating system for a specific service
or operation to be performed. These calls are typically made using a specific set of
instructions or functions that are provided by the operating system. Examples of common
system calls include requests for input/output operations, memory management, and
process management. The operating system then carries out the requested operation and
returns any results or status information to the calling program.
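A short C illustration of the idea: write() and getpid() below are thin wrappers over
kernel system calls for I/O and process management, while printf() is a library routine
that itself ends in a write() system call.

/* Two direct system calls and one library call that wraps one. */
#include <stdio.h>
#include <unistd.h>

int main(void) {
    pid_t pid = getpid();              /* process-management system call */
    char msg[] = "written via the write() system call\n";
    write(STDOUT_FILENO, msg, sizeof msg - 1);   /* I/O system call */
    printf("my PID is %d\n", (int)pid);          /* library call; internally
                                                    issues write() */
    return 0;
}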
8. Network Management
As the world evolves, so do the network complexities of a system; to give the user the
best network experience, the network must be maintained, which makes network
management one of the important components of an operating system. Network
management in operating systems involves controlling and monitoring the communication
between computer systems on a network. This includes tasks such as configuring network
interfaces, managing network connections, and monitoring network traffic. The operating
system also provides APIs (Application Programming Interfaces) and libraries to help
applications make use of the network. Network management in OS also includes a variety
of security features to protect against unauthorized access and malicious activity.
Examples include firewalls, VPNs, and intrusion detection/prevention systems.
Additionally, network management includes monitoring and troubleshooting network
issues and providing network statistics for performance analysis.
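As a sketch of the networking API an operating system exposes, the C program below
asks the kernel for a TCP socket and connects to a hypothetical server. The address
127.0.0.1 and port 8080 are illustrative assumptions; connect() will fail unless
something is actually listening there.

/* Minimal TCP client: the OS handles the TCP/IP protocol work below
 * the socket interface. */
#include <arpa/inet.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);    /* ask the OS for a TCP socket */
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_port   = htons(8080);               /* hypothetical port */
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
        perror("connect");                       /* no server listening */
        close(fd);
        return 1;
    }
    const char *msg = "hello over TCP\n";
    send(fd, msg, strlen(msg), 0);               /* kernel transmits the bytes */
    close(fd);
    return 0;
}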
9. I/O device management
Input and output devices are an essential part of the computer system, so the
management of these devices is an important component of the operating system. Operating
systems manage input/output (I/O) devices to ensure that data can be properly transferred
between the computer and the device. This is accomplished through the use of device
drivers, which are specialized programs that act as a bridge between the operating system
and the device. The operating system communicates with the device driver, which in turn
communicates with the device. This allows the operating system to control and manage
the device and enables it to perform tasks such as reading and writing data, as well as
controlling the device’s functions. In addition, the operating system also manages the
allocation of resources such as memory and processing power to ensure that the device
can function efficiently.
10. Main Memory Management
Main memory is essential for processes, so its management is an important component
of the operating system. In operating systems, main memory management is the
process of controlling and coordinating the use of memory by multiple programs and
processes. This includes managing the allocation of memory to different programs and
processes, as well as managing the movement of data between main memory and other
storage devices, such as hard drives. Memory management also includes managing
memory allocation for system processes and managing memory allocation for different
levels of the system, such as the kernel and user space. Memory management is a critical
aspect of operating system design, as it helps to ensure the efficient and stable operation
of the system.
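A small C example of a process requesting and releasing memory. malloc() and free()
are the user-level interface; behind them the OS supplies memory through calls such as
brk() or mmap(), the details varying by system.

/* Request a block of memory, use it, and hand it back. */
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    size_t n = 1000;
    int *table = malloc(n * sizeof *table);   /* ask for memory */
    if (table == NULL) {                      /* allocation can fail */
        fprintf(stderr, "out of memory\n");
        return 1;
    }
    for (size_t i = 0; i < n; i++)
        table[i] = (int)i;                    /* use the allocated region */
    printf("last entry: %d\n", table[n - 1]);
    free(table);                              /* release it to the allocator */
    return 0;
}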
CLASSIFICATION OF OPERATING SYSTEM:
Operating systems can be classified based on multiple factors, such as the number of users
they support, the number of tasks they can perform at a given time, the type of interaction
they allow with the system, and the type of environment they work in.
 Based on Number of Users: Operating systems can be Single-User or Multi-User.
A Single-User OS allows only one user to work on a machine at a time, while a
Multi-User OS allows multiple users to work on a device simultaneously.
 Based on Number of Tasks: Operating systems can be Single-Tasking or Multi-
Tasking. A Single-Tasking OS can manage only one task at a time, while a Multi-
Tasking OS can handle multiple tasks at once.
 Based on Interaction: Operating systems can be characterised as Command-Line
Interface (CLI) or Graphical User Interface (GUI). A CLI OS requires commands to
be typed, whereas a GUI OS allows users to interact with the system using visual
indicators like icons.
 Based on Environment: Operating systems can be categorised as Real-Time,
Distributed, Network, Mobile, or Embedded systems. Real-Time OS responds in
real-time and is used in embedded systems. Distributed OS uses multiple central
processors. Network OS controls and coordinates networked computers. Mobile OS
is designed for mobile devices, and Embedded OS is tailored for devices like digital
watches and MP3 players.
Each type of operating system has numerous examples.
 Single User: Microsoft Windows, macOS
 Multi-User: UNIX, Linux
 Single-Tasking: MS-DOS
 Multi-Tasking: Microsoft Windows, macOS, Linux
 CLI: MS-DOS, Linux (shell)
 GUI: Microsoft Windows, macOS
 Real-Time: VxWorks, RTLinux
 Distributed: Amoeba, Plan9
 Network: Microsoft Windows Server, Novell NetWare
Mobile Operating Systems
Mobile Operating Systems are OS specifically designed to run on mobile devices such as
smartphones, tablets, and portable media players. These systems are optimised for
wireless communication, mobile hardware, touch screens, and battery power efficiencies.
Mobile OS also supports features necessary for mobile devices, such as cellular
communication, Bluetooth, Wi-Fi, GPS, and cameras. They're user-friendly, easy to
navigate and offer numerous applications for different tasks.
SINGLE USER SYSTEM:
A single-user operating system is designed especially for home computers. Only a
single user can access the computer at a particular time: the operating system grants
access to the personal computer to one user at a time, though it can sometimes support
multiple profiles. It can also be used in office work and other environments.

So this operating system does not require support for memory protection, file
protection, or a security system. The computers based on this operating system have a
single processor to execute only a single program at all times. This system provides all
the resources such as CPU, and I/O devices, to a single user at a time.

This operating system runs on computers that support only one user. In this operating
system, one user cannot interact with another working user. The core part of the
single-user operating system is a single kernel image that runs at a time; there is no
facility to run more than one kernel image.
Features of the Single-User Operating System:
 Interpreting user’s commands
 File management
 Memory management
 Input/output management
 Resource allocation
 Managing processes
Advantages:
 This OS occupies less space in memory.
 Easy to maintain.
 Less chance of damage.
 Being a single-user system, it allows only one user’s tasks to execute at a given
time.
 Since only one user works at a time, there is no interruption from others.
Disadvantages:
 It can perform only a single task.
 The main drawback is, the OS remains idle for most of the time and is not utilized to
its maximum.
 Tasks take longer to complete.
 It has a high response time.
Types of Single-user Operating Systems:
This operating system is of two types:-
(i) Single User Single-Tasking (ii) Single User Multi-Tasking
Single-User Single-Tasking: The operating system allows a single user to execute one
program at a particular time. This operating system is designed especially for wireless
phones and two-way messaging devices. Functions such as printing a document or
downloading images and videos are performed in one given time frame.
Example: MS-DOS, Palm OS (Used in Palm-held computers).

Single-User Single-Tasking
Advantages: (i) Uses less memory (ii) Cost-efficient
Disadvantage: (i) Less optimized
Single-User Multi-Tasking: The operating system allows a single user to execute
multiple programs at the same time; the single user can perform multiple tasks at once.
This type of operating system is found on personal desktops and laptops. The most
popular single-user multi-tasking OS is Microsoft Windows. Single-user multi-tasking
can be pre-emptive or cooperative.
 Pre-emptive: The operating system shares the central processing time by dedicating
a slot to each of the programs.
 Co-operative: This is attained by relying on each process to give time to the other
processes in a defined manner. For example, a user can take photos while capturing
video, or perform different tasks such as making calculations in Excel
sheets. Example: Windows, Mac
Advantages:
 Time-saving
 High productivity in a short time frame
 Less memory is used
Disadvantages:
 Requires more space
 More complexity
BATCH PROCESSING SYSTEM:


A batch processing operating system is an operating system designed to manage
multiple jobs in sequence. It can support a wide range of batch processing tasks,
including data warehousing, OLAP and data mining, big data processing, data
integration, and time series analysis.
Batch processing is used in many industries to improve efficiency. A batch processing
operating system manages multiple tasks and processes in sequence, and it improves the
efficiency of a business by allowing many tasks to be run back to back.
One of the main benefits of a batch-processing operating system is that the operating
system itself manages the tasks and processes, so the business can run more jobs
without having to wait for each one to finish.
Batch processing operating systems are designed to execute a large number of similar
jobs or tasks without user intervention. These operating systems are commonly used in
business and scientific applications where a large number of jobs need to be processed in
a specific order.
Batch processing operating system:
A batch operating system is designed for batch processing. It can feature a modular
architecture, which allows new modules to be added without affecting the existing
codebase.
A batch processing operating system (BPOS) is a computer operating system that
processes large amounts of data in batches. This type of system is typically used by
businesses and organizations that need to process large amounts of data quickly and
efficiently. For such workloads, batch processing systems are generally faster and more
efficient than interactive systems, which can make them ideal for businesses that need to
process large amounts of data on a regular basis.
Features of BPOS:
 A batch OS is designed specifically for batch processing. It typically includes a
command-line interface, a library for scheduling tasks, and a user interface for
managing tasks.
 It is designed to simplify the process of managing and scheduling tasks across a
network of computers.
 The scheduling library allows tasks to be scheduled in a hierarchical manner, which
makes it easy to manage and schedule tasks across a network of computers, while
the user interface allows users to view and manage tasks graphically.
Working:
In a batch operating system, jobs with similar requirements are grouped into batches
and submitted to the system together. A resident monitor program reads one job at a
time from the batch, loads it into memory, runs it to completion, and then moves on to
the next job, so the CPU is passed from job to job without manual intervention. Batch
systems of this kind are designed to be lightweight and efficient and are used mainly
where large volumes of similar work must be processed.
There are many types of batch operating systems. One popular type is the scheduled
batch system. This type of system is used to control the execution of a series of tasks or
jobs. Other types of batch systems include the interactive batch system, the real-time
batch system, and the concurrent batch system.

Batch Processing Operating System


Advantages:
There are many advantages to using a batch operating system.
 One of the most important is the speed at which a batch job can be executed.
 Batch systems are designed to handle large numbers of tasks quickly and
efficiently. They are also well-suited for automated tasks and processes.
 Another advantage of using a batch system is the ability to automate tasks. This
can save time and effort for administrators and users. Batch systems can also be
used to manage large files and data sets. This is especially important in
environments that are prone to data breaches.
 Lastly, batch systems are stable and reliable. This is important in environments
where the systems are used for long periods of time.
 Batch systems are also easier to learn and use than other types of operating
systems.
The advantages of batch processing operating systems include:
1. Efficient use of resources: Batch processing operating systems allow for the
efficient use of computing resources, as jobs are processed in batches and scheduled
to run when resources are available.
2. High throughput: Batch processing operating systems can process a large number of
jobs quickly, allowing for high throughput and fast turnaround times.
3. Reduced errors: As batch processing operating systems do not require user
intervention, they can help reduce errors that may occur during manual job
processing.
4. Simplified job management: Batch processing operating systems simplify job
management by automating job submission, scheduling, and execution.
5. Cost-effective: Batch processing operating systems can be cost-effective, as they
allow for the efficient use of resources and can help reduce errors and processing
time.
6. Scalability: Batch processing operating systems can easily handle a large number of
jobs, making them scalable for large organizations that require high-volume data
processing.
Overall, batch processing operating systems can provide significant benefits for
organizations that require high-volume, repetitive data processing. They can help reduce
errors, increase throughput, and simplify job management, making them a cost-effective
solution for large-scale data processing needs.
Disadvantages:
There are many disadvantages to using batch operating systems, including:
 Limited functionality: Batch systems are designed for simple tasks, not for more
complex tasks. This can make them difficult to use for certain tasks, such as
managing files or software.
 Security issues: Because batch systems are not typically used for day-to-day tasks,
they may not be as secure as more common operating systems. This can lead to
security risks if the system is used by people who should not have access to it.
 Interruptions: Batch systems can be interrupted frequently, which can lead to
missed deadlines or mistakes.
 Inefficiency: Batch systems are often slow and difficult to use, which can lead to
inefficiency in the workplace.
BPOS – A Centralized Execution Architecture: The BPOS architecture is a
centralized execution architecture that enables businesses and organizations to process
large amounts of data quickly and efficiently. This type of system is typically used by
businesses and organizations that need to process large amounts of data on a regular
basis.
BPOS – User-Friendly Graphical Interface: The BPOS architecture includes a user-
friendly graphical interface that makes it easy for businesses and organizations to
process large amounts of data quickly and efficiently. Batch processing systems are
generally faster and more efficient than traditional interactive systems, which can make
them ideal for businesses that need to process large amounts of data regularly.
BPOS – Batch Processing Made Easy: The BPOS system makes it easy to process
large amounts of data in a batch processing system.
MULTIPROGRAMMING SYSTEM:
Multiprogramming in an operating system, as the name suggests, combines “multi,”
meaning more than one, and “programming,” meaning the execution of programs: when
more than one program can execute in an operating system, it is termed a
multiprogramming operating system.
Before the concept of multiprogramming, computing did not use the CPU efficiently.
The CPU executed only one program at a time, and when that program entered a
waiting state for an input/output operation, the CPU remained idle, which led to
underutilization of the CPU and poor performance. Multiprogramming addresses and
solves this issue.
Multiprogramming was developed in the 1950s and was first used in mainframe
computing. The major goal of multiprogramming is to maximize the utilization of
resources.
Multiprogramming is broadly classified into two types namely:
1. Multi-user operating system
2. Multitasking operating system
Multi-user and multitasking systems differ in every aspect. A multitasking operating
system allows you to run more than one program simultaneously; the operating system
does this by moving each program in and out of memory one at a time. When a program
is moved out of memory, it is temporarily stored on disk until it is needed again.
A multi-user operating system allows many users to share processing time on a powerful
central computer on different terminals. The operating system does this by quickly
switching between terminals, each receiving a limited amount of CPU time on the
central computer. The operating system switches so rapidly between terminals that each
user appears to have constant access to the central computer. If there are many users on such
a system, the time it takes for the central computer to respond may become more
apparent.

Features of Multiprogramming
1. Needs only a single CPU for implementation.
2. Context switching between processes.
3. Switching happens when the current process enters a waiting state.
4. CPU idle time is reduced.
5. High resource utilization.
6. High performance.
Disadvantages of Multiprogramming
1. Prior knowledge of scheduling algorithms is required.
2. With a large number of jobs, long-running jobs will have to wait a long time.
3. Memory management is needed in the operating system because all the jobs are
stored in main memory.
4. Heavy use of multiprogramming can cause the system to heat up.
Multitasking is divided into two types:
1. Preemptive scheduling: The operating system can interrupt a running process, for
example when its time slice expires or a higher-priority process arrives, and give
the CPU to another process.
2. Non-preemptive scheduling: Once a process gets the CPU, it keeps the CPU until it
finishes its work or voluntarily enters a waiting state.
TIME SHARING:
The Time-Sharing Operating System is a type of operating system in which the user can
perform more than one task and each task gets the same amount of time to execute. It is
also called a multitasking operating system.
In a time-sharing operating system, each task uses the CPU in such a way that the
response time of the CPU is minimized. Each task takes the same amount of time to
execute.
The time-sharing operating system is different from a multiprogramming operating
system. In a multiprogramming operating system, the main objective is to maximize the
use of the CPU. But in Time-sharing OS, the main aim is to minimize the response time of
the CPU.
Working of Time-Sharing Operating System
The Time-Sharing Operating System uses CPU scheduling and multiprogramming.
 When the user performs more than one task, each process's CPU time is divided.
 There is a fixed time quantum for which each process executes at a time. This
time quantum is small, typically 10-100 milliseconds. The time quantum is also
known as a time slot or time slice.
 For example, suppose there are three processes, P1, P2, and P3, running on the
system, and the time quantum is fixed at 4 milliseconds (ms). Let's see how these
processes will be executed.
 Process P1 executes first for 4 ms; as soon as its slice is over, process P2
executes for 4 ms, and then process P3 executes for 4 ms. This continues until
all the processes are completed.
 In this way, if the process runs for only the fixed time quantum, the switching
between the processes is very fast. So, the user thinks that all the processes are
running simultaneously. In this way, response time of the CPU is minimized.

The diagram above shows the working of the time-sharing operating system. Process 4
in the diagram is in the active state, process 5 is in the ready state, and processes 1, 2,
3, and 6 are in the waiting state.
Let's understand what the active, ready, and waiting states mean.
1. Active state - The process currently using the CPU is said to be in the active state.
Only one process can be in the active state at a time, as the CPU can be assigned to
only one process at a time for processing.
2. Ready state - The process that is ready for execution and is waiting for the CPU to
get assigned to it is said to be in a Ready state. More than one process can be in a
Ready state at a time, but the CPU is allocated to only one of them at a time for
processing.
3. Waiting State - The processes that are not ready for execution and are waiting for
some input/output process to be completed are said to be in a waiting state. Once
the input/output process is completed, the process jumps to a Ready state and is
ready for execution.

As shown in the diagram above, a process can switch from one state to another. Once
the process has executed for one time quantum, it goes back to the ready state and waits
for execution again. If the process needs some input/output operation while in the active
state, it goes to the waiting state, stays there until the input/output is completed, and
then returns to the ready state.
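The round-robin behaviour described above can be simulated in a few lines of C. The
burst times below are made-up values; only the quantum of 4 units mirrors the earlier
example.

/* Round-robin simulation: three processes share the CPU in fixed
 * time slices until every one of them finishes. */
#include <stdio.h>

int main(void) {
    const char *name[] = {"P1", "P2", "P3"};
    int remaining[]    = {10, 5, 8};          /* assumed burst times */
    const int quantum  = 4;
    int unfinished = 3, clock = 0;

    while (unfinished > 0) {
        for (int i = 0; i < 3; i++) {
            if (remaining[i] <= 0)
                continue;                      /* this process is done */
            int slice = remaining[i] < quantum ? remaining[i] : quantum;
            printf("t=%2d: %s runs for %d\n", clock, name[i], slice);
            clock += slice;
            remaining[i] -= slice;
            if (remaining[i] == 0)
                unfinished--;                  /* process completes */
        }
    }
    printf("all processes done at t=%d\n", clock);
    return 0;
}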
Challenges for Time-sharing Operating System
The main challenges that Time-sharing OS has are:-
 A time-sharing operating system consumes many resources. Switching between
processes is very fast, and a queue is needed to keep the processes in order for
further execution after each time period.
 A time-sharing operating system also requires high-specification hardware, because
very fast switching between the processes is required for CPU execution.
 As there is very fast switching between the processes, there is a risk of data mixing
between the processes of different programs.
Example of Time-Sharing Operating System
For example, in a transaction processing system, the processor executes each user
program in a short burst or quantum of computation: if n users exist, every user receives
a time quantum in turn.
Some examples of time-sharing operating systems are: UNIX, Multics, Linux,
TOPS-10 (DEC), TOPS-20 (DEC), Windows 2000 Server, and Windows NT Server.
Advantages and Disadvantages of Time-Sharing Operating System
Advantages
 Time-sharing operating system works in such a way that it minimizes the response
time of the CPU.
 Each process gets an equal opportunity, as all processes receive an equal time
quantum for execution by the CPU.
 CPU idle time is also reduced, as the CPU continuously switches between
processes in the time-sharing operating system.
Disadvantages
 There are chances of data mixing due to fast switching between the processes.
 Time-sharing OS faces reliability issues.
 Data communication is a problem in time-sharing operating systems, as
communication is an essential factor for the processes to get CPU processing
time.
DISTRIBUTED SYSTEM:
A distributed system is a collection of computer programs that utilize computational
resources across multiple, separate computation nodes to achieve a common, shared
goal. Distributed systems aim to remove bottlenecks or central points of failure from a
system.
Distributed computing systems have the following characteristics:
 Resource sharing – A distributed system can share hardware, software, or data.
 Simultaneous processing – Multiple machines can process the same function
simultaneously.
 Scalability – The computing and processing capacity can scale up as needed when
extended to additional machines.
 Error detection – Failures can be more easily detected.
 Transparency – A node can access and communicate with other nodes in the
system.
REAL TIME SYSTEM:
The term “real-time system” refers to any information processing system with hardware
and software components that perform real-time application functions and can respond to
events within predictable and specific time constraints. Common examples of real-time
systems include air traffic control systems, process control systems, and autonomous
driving systems.
A real-time system must satisfy two requirements:
 Timeliness: The ability to produce the expected result by a specific deadline.
 Time synchronization: The capability of agents to coordinate independent clocks
and operate together in unison.
When evaluating real-time systems, companies can measure the value of any system in
how predictable it is in completing events or tasks. Predictability can be further evaluated
by examining the system’s:
 Latency: the measured time between two events (see the sketch after this list)
 Computer jitter: the variation in latency between iterations
 Another important characteristic in real-time systems is their ability to perform
concurrent execution of real-time and non-real-time workloads in order to avoid
critical system failure.
 Finally, it’s important to understand how real-time systems are typically
categorized. They are designated as either a soft real-time system or a hard real-
time system based on timing constraints.
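Here is the sketch promised in the metric list above: a rough C measurement of latency
and jitter using the POSIX clock_gettime() call. The timed "event" is just the clock call
itself, a stand-in for a real workload.

/* Time a repeated operation and report min/max latency; the spread
 * between them is the jitter. */
#include <stdio.h>
#include <time.h>

static long elapsed_ns(struct timespec a, struct timespec b) {
    return (b.tv_sec - a.tv_sec) * 1000000000L + (b.tv_nsec - a.tv_nsec);
}

int main(void) {
    long min = 0, max = 0;
    for (int i = 0; i < 1000; i++) {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        /* the event being timed would go here */
        clock_gettime(CLOCK_MONOTONIC, &t1);
        long ns = elapsed_ns(t0, t1);
        if (i == 0 || ns < min) min = ns;
        if (ns > max)           max = ns;
    }
    printf("latency: min %ld ns, max %ld ns, jitter %ld ns\n",
           min, max, max - min);
    return 0;
}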
Benefits of Real-Time Systems for Applications
Real-time systems offer several benefits:
Benefit | Explanation
More precise timing | Real-time systems are designed to perform tasks that must be executed within precise cycle deadlines (down to microseconds).
Higher predictability and reliability | Because real-time systems process data in defined, predictable time frames, execution of tasks or workloads is practically guaranteed, improving the reliability of business-critical systems.
Prioritization of real-time workloads | When specific real-time workloads must be completed within a set deadline to avoid critical system failure, the ability to prioritize some workloads over others is paramount. Some, but not all, real-time systems offer workload or task prioritization.
Soft Real-Time Systems vs. Hard Real-Time Systems


The concept of real-time can be applied to a variety of use cases. The majority of those
use cases, such as web browsing and gaming, fall within the soft real-time classification.
Soft real-time is when a system continues to function even if it’s unable to execute within
an allotted time. If the system has missed its deadline, it will not result in critical
consequences. The system can continue to function, though with undesirable lower quality
of output.
However, there are certain industries, such as robotics, automotive, utilities, and
healthcare, where use cases have stricter requirements for synchronization, timeliness,
and worst-case execution time guarantees. Those examples fall within the hard
real-time classification.
Hard real-time is when a system will cease to function if a deadline is missed, which can
result in catastrophic consequences.
UNIT – II
MEMORY MANAGEMENT:
In a multiprogramming computer, the Operating System resides in a part of memory, and
the rest is used by multiple processes. The task of subdividing the memory among
different processes is called Memory Management. Memory management is a method in
the operating system to manage operations between main memory and disk during
process execution. The main aim of memory management is to achieve efficient
utilization of memory.
Why Memory Management is Required?
 Allocate and de-allocate memory before and after process execution.
 To keep track of used memory space by processes.
 To minimize fragmentation issues.
 To ensure proper utilization of main memory.
 To maintain data integrity during process execution.
Basic Functions of Memory Management
When considering the various tasks of memory management, it's clear that these can be
categorized into a few fundamental operations. These include:
 Tracking each byte of memory in the system.
 Allocating and deallocating memory spaces as needed by the system's processes (a
toy illustration follows this list).
 Managing swap spaces, which store inactive pages of memory.
 Implementing policies for memory allocation.
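The toy illustration promised in the list above: a first-fit allocator over a 100-unit
memory map, written in C. Real memory managers use free lists, paging, and swapping;
this sketch shows only the allocate/deallocate bookkeeping.

/* First-fit allocation over a tiny memory map: owner[i] records which
 * process holds unit i (0 means free). */
#include <stdio.h>

#define MEM_SIZE 100
static int owner[MEM_SIZE];

static int alloc_block(int pid, int size) {
    for (int start = 0; start + size <= MEM_SIZE; start++) {
        int run = 0;
        while (run < size && owner[start + run] == 0)
            run++;                                /* count free units */
        if (run == size) {                        /* first hole that fits */
            for (int i = 0; i < size; i++)
                owner[start + i] = pid;
            return start;                         /* base address */
        }
        start += run;                             /* jump past the obstacle */
    }
    return -1;                                    /* no hole large enough */
}

static void free_blocks(int pid) {                /* deallocate on termination */
    for (int i = 0; i < MEM_SIZE; i++)
        if (owner[i] == pid)
            owner[i] = 0;
}

int main(void) {
    printf("P1 allocated at %d\n", alloc_block(1, 30));
    printf("P2 allocated at %d\n", alloc_block(2, 50));
    free_blocks(1);                               /* P1 terminates */
    printf("P3 allocated at %d\n", alloc_block(3, 20));  /* reuses P1's hole */
    return 0;
}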
Advantages of Memory Management
 It is a simple management approach
Disadvantages of Memory Management
 It does not support multiprogramming
 Memory is wasted
Importance | Explanation
Enhances operational speed | Optimal memory management reduces system latency, thereby speeding up operations.
Ensures resource optimization | By allocating and deallocating memory as per requirement, resource wastage is minimized.
Supports multi-programming | Effective memory management enables the execution of multiple processes simultaneously.
NEED FOR MEMORY MANAGEMENT:


Memory management is fundamentally crucial to the operation of computer systems.
Without it, a system may suffer degraded performance, difficult debugging, or even
program crashes. Imagine a scenario where a certain process requires additional memory,
but the system is unable to provide it.
The purpose of memory management is to optimize the use of a computer or device's
internal memory, that is, RAM.
Role of Memory management:
Following are the important roles of memory management in a computer system:
o Memory manager is used to keep track of the status of memory locations, whether it
is free or allocated. It addresses primary memory by providing abstractions so that
software perceives a large memory is allocated to it.
o Memory manager permits computers with a small amount of main memory to
execute programs larger than the size or amount of available memory. It does this
by moving information back and forth between primary memory and secondary
memory by using the concept of swapping.
o The memory manager is responsible for protecting the memory allocated to each
process from being corrupted by another process. If this is not ensured, then the
system may exhibit unpredictable behavior.
o Memory managers should enable sharing of memory space between processes.
Thus, two programs can reside at the same memory location although at different
times.
Memory Management Techniques:
The memory management techniques can be classified into following main
categories:
o Contiguous memory management schemes
o Non-Contiguous memory management schemes

Characteristics of Computer Memory


 It is faster than secondary memory.
 It is a semiconductor memory.
 It is usually volatile and serves as the main memory of the computer.
 A computer system cannot run without primary memory.
Types of Computer Memory
In general, computer memory is of three types:
(i) Primary memory (ii) Secondary memory (iii) Cache memory
Now we discuss each type of memory one by one in detail:
The memory that is an in-built part of a computer is considered to be the main memory of
the computer and is also known as the internal memory or internal storage.
Primary memory: Primary memory is faster than secondary memory when it comes to
accessing data, whether volatile or non-volatile, but it is generally limited in space or
capacity compared with secondary memory. Primary memory is further classified into
RAM and ROM, which are discussed below.
Characteristics of Primary Memory are:
o A computer’s primary memory is also known as the main memory, a temporary
memory, or a prime memory.
o Primary memory is considered volatile memory.
o Semiconductor technology is used in the construction of this memory.
o In the event of a power failure, the data gets deleted automatically.
o The processing speed of primary memory is faster than that of secondary memory.
o Primary memory is considered the main working memory of the computer.
o In the absence of primary memory, a computer is not able to process anything.
Random Access Memory (RAM)
As we have discussed, primary memory is of two types; let us understand the first type,
RAM (Random Access Memory). Another name for random access memory is read-write
memory. While executing a function, the program and the data it requires are stored in
RAM. As soon as power is disconnected, the data in this memory is lost, which is why it is
considered volatile memory.
Features and Characteristics of RAM
o Both writing and erasing operations can be performed on RAM.
o Adding more RAM helps boost the computer system’s speed and performance.
o RAM permits the central processing unit to access data quickly, so that execution by
the system can be done faster.
o The cost of RAM is much lower than the cost of a solid-state drive, commonly
known as an SSD, yet RAM executes any instruction more quickly than an SSD.
Types of Random Access Memory (RAM)
1. SRAM (Static Random Access Memory):
SRAM falls under the category of semiconductor memory; bistable latching circuitry is
used to store each bit, which makes it very fast and well suited for use as cache memory.
However, it is more expensive than DRAM and takes up much more space, which leads to
less memory on a chip.
It is widely used as a cache in a CPU, usually as the L2 or L3 cache. But, as discussed
earlier, because it is very expensive, L2 and L3 caches are generally only in the range of
1 MB to 16 MB.
Characteristics of SRAM
o Like all memory components, it is built from memory cells; each SRAM cell consists of six transistors.
o It is used as a CPU cache memory.
o Access time is lower, which makes it faster than DRAM.
o SRAM is costlier.
o SRAM requires a continuous power supply to store information and thus uses
additional power.
o Contains flip-flop circuitry to store pieces of information.
Advantages of SRAM
o It is preferred because of its access speed, despite being very expensive.
o It is very helpful for speed-sensitive caches.
o SRAM is simple and easy to manage.
o It is very reliable and therefore is used for cache memory.
Disadvantages of SRAM
o Its speed does not justify its price.
o It is a volatile memory, so all data is lost when power is cut off.
o It also has a small storage capacity and takes up a lot of space.
o The design is complex and not easy to build or understand.
Types of SRAM
SRAM can be further classified into the following types:
(i) Non-Volatile SRAM (ii) Pseudo SRAM (iii) Asynchronous (iv) Synchronous
2. DRAM (Dynamic Random Access Memory).
DRAM is another type of RAM in which every bit of data is stored in a separate capacitor
within an integrated circuit. Each memory cell in a DRAM chip holds one bit of data and
is made up of a transistor and a capacitor. The memory controller constantly reads the data
and rewrites it (a process called refreshing); this overhead is what makes DRAM slower
than SRAM.
However, DRAM is more cost-effective than SRAM, which makes it usable as the main
memory of a computer. Although it is slower than SRAM, it is still fast considering its
price and its ability to connect directly to the CPU bus. Unlike expensive SRAM, DRAM
capacity is typically 4 GB to 16 GB in laptops and 1 GB to 2 GB in smaller devices.
Characteristics of DRAM
o As an inexpensive option, DRAM is the most widely used memory in PCs today.
o It requires very little space.
o It is more power-efficient than SRAM.
o DRAM is slower than SRAM, but still has a good access time.
Advantages of DRAM
o DRAM has a simple design: each memory cell consists of just one transistor and one capacitor.
o It has a high storage density.
o It is less costly than SRAM.
o Memory space is large.
Disadvantages of DRAM
o This memory is volatile and must be continuously refreshed.
o It has a complex manufacturing process.
o Refreshing is required continuously.
o It is slower than SRAM.
Difference Between DRAM and SRAM
1. DRAM is built from tiny capacitors that leak charge; SRAM circuits are similar to D flip-flops.
2. DRAM needs a recharge (refresh) after a few milliseconds to maintain its data; SRAM holds its information as long as power is available.
3. DRAM is less costly; SRAM is more costly.
4. DRAM is slower than SRAM; SRAM is faster than DRAM.
5. DRAM can store many bits per chip; SRAM cannot store as many bits per chip.
6. DRAM is power-efficient; SRAM is power-hungry.
7. DRAM generates less heat; SRAM generates more heat.
8. DRAM is used as the main memory; SRAM is used as cache memory.
Read-Only Memory (ROM)
ROM also comes under the classification of primary memory, just like RAM. Over RAM,
ROM has the advantage that it can store data permanently, which makes it non-volatile.
ROM is the memory that supports the bootstrap process: all the crucial information
required to start the system is stored inside it. It is used in embedded systems or wherever
the programming needs no change. It is also used in calculators and peripheral devices.
There are four types of ROM:
(1) MROM (2) PROM (3) EPROM (4) EEPROM.
Characteristics and Features of ROM
o It consumes less power.
o It is less expensive than RAM.
o ROM has a simple interface in comparison with RAM.
o ROM Memory has a very simple testing method.
o It is a non-volatile memory, which means that data can be stored permanently.
o The data of the read-only memory does not get deleted even if there is a sudden
power cut-off.
o ROM is static in nature and does not need to be refreshed continuously.
o Circuits used in building ROM are very simple which makes it a very reliable
memory.
Types of Read-Only Memory (ROM)
1. MROM (Mask Read-Only Memory)
It is a type of read-only memory that’s pre-programmed with data. These are the original
ROMs, which were hard-wired devices that contained a pre-programmed set of data or
instructions
Advantages of MROM
o It has a low cost of production.
o It has a smaller storage capacity.
o It is less expensive than any other sort of secondary memory when large quantities
of the same ROM are created.
Disadvantages of MROM
Design flaws are particularly expensive because if a defect occurs in the code, MROM
becomes unusable and must be replaced in order to change its coding.
2. PROM (Programmable read-only memory):
This memory can be programmed by the user and once the user has programmed it, the
data and programmed instructions cannot be changed.
Advantages of PROM
Hardwiring PROM is not required now as there is a lot of software available in the market
for programming PROM today.
Disadvantages of PROM
The biggest disadvantage of PROM is that its data cannot be modified or rewritten if any
error has crept in.
3. EPROM (Erasable Programmable read-only memory):
As the name suggests, it can be reprogrammed, but to erase its data one has to expose it to
ultraviolet light before it can be reprogrammed.
Advantages of EPROM
o It is a non-volatile memory in nature.
o It has the capacity to erase and rewrite the program.
o It is more cost-effective than PROM.
Disadvantages of EPROM
o It consumes more static power.
o Erasing the data on an EPROM memory chip takes a long time.
o It cannot erase specific information selectively: if one byte of an EPROM is erased,
all the other bytes of the EPROM are erased along with it.
4. EEPROM (Electrically erasable programmable read-only memory):
EEPROM is a memory in which the data can be erased by applying an electric field, there
is no need for ultraviolet light. In this type of memory, data can be erased in portions of
the chip also.
Advantages of EEPROM
o In EEPROM, data is erased and written with the help of an electric current.
o The data on an EEPROM can be erased and rewritten a very large number of times.
o The data stored on an EEPROM chip can also be erased in parts.
Disadvantages of EEPROM
o EEPROM has limited data retention time.
o EEPROM is more costly than other ROM chips.
o To perform tasks like deleting and rewriting data on the EEPROM memory chip
different voltages are required.
Difference Between PROM, EPROM and EEPROM
PROM: A Read-Only Memory (ROM) that can be modified only once by users. Stands for Programmable Read-Only Memory. Developed by Wen Tsing Chow in 1956. Can be programmed only once.
EPROM: A programmable ROM that can be erased and reused. Stands for Erasable Programmable Read-Only Memory. Developed by Dov Frohman in 1971. Can be reprogrammed using ultraviolet light.
EEPROM: A user-modifiable ROM that can be erased and reprogrammed repeatedly through a normal electrical voltage. Stands for Electrically Erasable Programmable Read-Only Memory. Developed by George Perlegos in 1978. Can be reprogrammed using electrical charge.
Difference Between RAM and ROM
1. RAM is referred to as volatile memory; ROM is referred to as non-volatile memory.
2. RAM is considered temporary storage; ROM is considered permanent storage.
3. Writing data to RAM is fast; writing data to ROM is slow.
4. RAM is used in normal operations; ROM is used for the startup process of the computer.
5. RAM capacity is typically measured in GBs; ROM capacity is typically measured in MBs.
Advantages of RAM and ROM
Advantages of RAM:
o RAM has the capability to increase the speed of the system, and the higher the
RAM, the greater the speed.
o RAM reads and writes data much faster than secondary storage devices such as
hard disks.
o It consumes very little power.
o It has the ability to write and delete programs.
Advantages of ROM:
o It is cheaper memory than RAM and with the benefit of being non-volatile in
nature.
o It does not need to be refreshed, as ROM is static in nature.
o Circuits of ROM are simple which makes them more reliable than RAM.
o It can store data permanently.
o It helps in the completion of the bootstrap process which starts the computer and
loads the OS.
Disadvantages of RAM and ROM
Disadvantages of RAM:
o Compared with data access from CPU registers and cache, data access from RAM
is slow.
o RAM is volatile, i.e. it cannot store data permanently.
o It is expensive.
o It has limited space.
Disadvantages of ROM:
o It is not the quickest type of memory.
o ROM memory can only be read; its data cannot be changed.
o Unlike RAM, if the contents of a ROM are modified or erased incorrectly, the chip
can be rendered unusable.
o Users can only rewrite contents in some special types of ROM Memories.
VIRTUAL MEMORY:
 Virtual memory is defined as a memory management method where computers use
secondary memory to compensate for the scarcity of physical memory.
 Virtual memory provides benefits in terms of costs, physical space, multitasking
capabilities, and data security.
 Virtual memory frees up RAM by swapping data that has not been used recently
over to a storage device, such as a hard drive or solid-state drive (SSD). Virtual
memory is important for improving system performance, multitasking and using
large programs.
PAGING:
Paging is a computer memory management function that presents storage locations to the
computer's central processing unit (CPU) as additional memory, called virtual memory. A
paging space is a type of logical volume with allocated disk space that stores information
which is resident in virtual memory but is not currently being accessed. This logical
volume has an attribute type equal to paging, and is usually simply referred to as paging
space or swap space.
Paging is a storage mechanism used in OS to retrieve processes from secondary storage to
the main memory as pages. The primary concept behind paging is to break each process
into individual pages. Thus the primary memory would also be separated into frames.
Two types of paging are commonly used in operating systems:
 Fixed-size paging.
 Variable-size paging.
Uses of Paging in OS:
Paging is used for faster access to data. When a program needs a page, it is available in
the main memory as the OS copies a certain number of pages from your storage device to
main memory. Paging allows the physical address space of a process to be noncontiguous.
THRASHING:
Thrashing in OS is a phenomenon that occurs in computer operating systems when the
system spends an excessive amount of time swapping data between physical memory
(RAM) and virtual memory (disk storage) due to high memory demand and low
available resources.
Thrashing can occur when there are too many processes running on a system and not
enough physical memory to accommodate them all. As a result, the operating system
must constantly swap pages of memory between physical memory and virtual memory.
This can lead to a significant decrease in system performance, as the CPU is spending
more time swapping pages than it is actually executing code.
The following are some of the symptoms of thrashing in an operating system:
1. High CPU utilization:
When a system is thrashing, the CPU is spending a lot of time swapping pages of memory
between physical memory and disk. This can lead to high CPU utilization, even when the
system is not performing any significant work.
2. Increased Disk Activity:
When the system is thrashing, disk activity increases significantly as the system tries to
swap data between physical memory and virtual memory.
3. High page fault rate:
A page fault is an event that occurs when the CPU tries to access a page of memory that is
not currently in physical memory. Thrashing can cause a high page fault rate, as the
operating system is constantly swapping pages of memory between physical memory and
disk.
4. Slow Response Time:
When the system is thrashing, its response time slows significantly.
If you are experiencing any of these symptoms, it is possible that your system
is thrashing. You can use a system monitoring tool to check the CPU utilization, page
fault rate, and disk activity to confirm this.
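The page fault rate can be made concrete with a small simulation. The sketch below counts page faults under FIFO page replacement for a hypothetical reference string as the number of available frames varies; a sharply rising fault rate when frames become scarce is the signature of thrashing:

from collections import deque

def page_faults(reference_string, num_frames):
    """Count page faults under FIFO replacement for a given frame count."""
    frames, queue, faults = set(), deque(), 0
    for page in reference_string:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:      # memory full: evict oldest
                frames.discard(queue.popleft())
            frames.add(page)
            queue.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
for n in (1, 2, 3, 4):
    print(n, "frames ->", page_faults(refs, n), "faults")

Incidentally, this particular reference string is the classic demonstration of Belady's anomaly: under FIFO it produces more faults with 4 frames (10) than with 3 frames (9).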
Algorithms during Thrashing
When thrashing starts, the operating system tries to apply either the global page
replacement algorithm or the local page replacement algorithm.
Global Page Replacement
Global page replacement is allowed to bring in any page; whenever thrashing is detected,
it tries to bring in more pages. Because of this, no process gets enough frames, and as a
result the thrashing increases more and more. Thus the global page replacement algorithm
is not suitable when thrashing happens.
Local Page Replacement
Unlike global page replacement, local page replacement selects only pages that belong to
the faulting process. Because of this, there is a chance of reducing the thrashing. However,
local page replacement has proven disadvantages of its own, so it is simply an alternative
to global page replacement, not a complete remedy.
Causes of Thrashing in OS
The main causes of thrashing in an operating system are:
1. High degree of multiprogramming:
When too many processes are running on a system, the operating system may not have
enough physical memory to accommodate them all. This can lead to thrashing, as the
operating system is constantly swapping pages of memory between physical memory and
disk.
2. Lack of frames:
Frames are the units of memory that are used to store pages of memory. If there are not
enough frames available, the operating system will have to swap pages of memory to disk,
which can lead to thrashing.
3. Page replacement policy:
The page replacement policy is the algorithm that the operating system uses to decide
which pages of memory to swap to disk. If the page replacement policy is not effective, it
can lead to thrashing.
4. Insufficient physical memory:
If the system does not have enough physical memory, it will have to swap pages of
memory to disk more often, which can lead to thrashing.
5. Inefficient memory management:
If the operating system is not managing memory efficiently, it can lead to fragmentation
of physical memory, which can also lead to thrashing.
6. Poorly designed applications:
Applications that use excessive memory or that have poor memory management practices
can also contribute to thrashing.
Techniques to Prevent Thrashing
There are a number of ways to eliminate thrashing, including:
 Increase the amount of physical memory: This is the most effective way to
eliminate thrashing, as it will give the operating system more space to store pages of
memory in physical memory.
 Reduce the degree of multiprogramming: This means reducing the number of
processes that are running on the system. This can be done by terminating or
suspending processes, or by denying new processes from starting.
 Use an effective page replacement policy: The page replacement policy is the
algorithm that the operating system uses to decide which pages of memory to swap
to disk. An effective page replacement policy can help to minimize the number of
page faults that occur, which can help to eliminate thrashing.
 Optimize applications: Applications should be designed to use memory efficiently
and to avoid memory management practices that can lead to thrashing. For
example, applications should avoid using excessive memory or using inefficient
data structures.
 Monitor the system's resource usage: Monitor the system's CPU utilization,
memory usage, and disk activity. If you notice that any of these resources are being
overutilized, you may need to take steps to reduce the load on the system.
 Use a system monitoring tool: A system monitoring tool can help you to identify
bottlenecks in your system and to track the system's resource usage over time. This
information can help you to identify potential problems before they cause thrashing.
UNIT – III
PROCESS MANAGEMENT:
Process management refers to the activities involved in managing the execution of
multiple processes in an operating system. It includes creating, scheduling, and
terminating processes, as well as allocating system resources such as CPU time, memory,
and I/O devices.
Process management includes various tools and techniques such as process mapping,
process analysis, process improvement, process automation, and process control. By
applying these tools and techniques, organizations can streamline their processes,
eliminate waste, and improve productivity. Overall, process management is a critical
aspect of modern business operations and can help organizations achieve their goals and
stay competitive in today’s rapidly changing marketplace.
If the operating system supports multiple users, then the services under this are very important.
In this regard, the operating system has to keep track of all the created processes,
schedule them, and dispatch them one after another. However, each user should feel that
they have full control of the CPU. Process management refers to the techniques and strategies
used by organizations to design, monitor, and control their business processes to achieve
their goals efficiently and effectively. It involves identifying the steps involved in
completing a task, assessing the resources required for each step, and determining the best
way to execute the task.
Process management can help organizations improve their operational efficiency, reduce
costs, increase customer satisfaction, and maintain compliance with regulatory
requirements. It involves analyzing the performance of existing processes, identifying
bottlenecks, and making changes to optimize the process flow.
Key Components of Process Management
Below are some key component of process management.
 Process mapping: Creating visual representations of processes to understand how
tasks flow, identify dependencies, and uncover improvement opportunities.
 Process analysis: Evaluating processes to identify bottlenecks, inefficiencies, and
areas for improvement.
 Process redesign: Making changes to existing processes or creating new ones to
optimize workflows and enhance performance.
 Process implementation: Introducing the redesigned processes into the
organization and ensuring proper execution.
 Process monitoring and control: Tracking process performance, measuring key
metrics, and implementing control mechanisms to maintain efficiency and
effectiveness.
Importance of Process Management System
 It is critical to comprehend the significance of process management for any
manager overseeing a firm.
 It does more than just make workflows smooth.
 Process Management makes sure that every part of business operations moves as
quickly as possible.
 By implementing business process management, we can avoid errors caused by
inefficient human labor and cut down on time lost on repetitive operations.
 It also keeps data loss and process step errors at bay.
 Additionally, process management guarantees that resources are employed
effectively, increasing the cost-effectiveness of our company.
Process management not only makes business operations better, but it also makes sure that
our procedures meet the needs of our clients. This raises income and improves customer
satisfaction.
Advantages of Process Management
 Improved Efficiency: Process management can help organizations identify
bottlenecks and inefficiencies in their processes, allowing them to make changes to
streamline workflows and increase productivity.
 Cost Savings: By identifying and eliminating waste and inefficiencies, process
management can help organizations reduce costs associated with their business
operations.
 Improved Quality: Process management can help organizations improve the
quality of their products or services by standardizing processes and reducing errors.
 Increased Customer Satisfaction: By improving efficiency and quality, process
management can enhance the customer experience and increase satisfaction.
 Compliance with Regulations: Process management can help organizations
comply with regulatory requirements by ensuring that processes are properly
documented, controlled, and monitored.
Disadvantages of Process Management
 Time and Resource Intensive: Implementing and maintaining process
management initiatives can be time-consuming and require significant resources.
 Resistance to Change: Some employees may resist changes to established
processes, which can slow down or hinder the implementation of process
management initiatives.
 Overemphasis on Process: Overemphasis on the process can lead to a lack of
focus on customer needs and other important aspects of business operations.
 Risk of Standardization: Standardizing processes too much can limit flexibility
and creativity, potentially stifling innovation.
 Difficulty in Measuring Results: Measuring the effectiveness of process
management initiatives can be difficult, making it challenging to determine their
impact on organizational performance.
PROCESS: A process is a program in execution. For example, when we write a program
in C or C++ and compile it, the compiler creates binary code. The original code and
binary code are both programs. When we actually run the binary code, it becomes a
process. A process is an ‘active’ entity instead of a program, which is considered a
‘passive’ entity. A single program can create many processes when run multiple times; for
example, when we open a .exe or binary file multiple times, multiple instances begin
(multiple processes are created).
How Does a Process Look in Memory?
A process in memory is divided into the following sections:
 Text Section: Contains the program code; the current activity is represented by the
value of the Program Counter.
 Stack: The stack contains temporary data, such as function parameters, return
addresses, and local variables.
 Data Section: Contains the global variables.
 Heap Section: Memory dynamically allocated to the process during its run time.
Characteristics of a Process
A process has the following attributes.
 Process Id: A unique identifier assigned by the operating system.
 Process State: Can be ready, running, etc.
 CPU registers: Like the Program Counter (CPU registers must be saved and
restored when a process is swapped in and out of the CPU)
 Accounting information: Amount of CPU used for process execution, time limits,
execution ID, etc
 I/O status information: For example, devices allocated to the process, open files,
etc
 CPU scheduling information: For example, Priority (Different processes may
have different priorities, for example, a shorter process assigned high priority in the
shortest job first scheduling)
STATES OF A PROCESS:
A process is in one of the following states:
 New: Newly Created Process (or) being-created process.
 Ready: After creation, the process moves to the Ready state, i.e. it is ready for
execution.
 Run: Currently running process in CPU (only one process at a time can be under
execution in a single processor)
 Wait (or Block): When a process requests I/O access.
 Complete (or Terminated): The process completed its execution.
 Suspended Ready: When the ready queue becomes full, some processes are moved
to a suspended ready state
 Suspended Block: When the waiting queue becomes full.
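These states, together with the attributes listed earlier, are typically recorded in a per-process structure (the process control block). The following minimal Python sketch is purely illustrative; the field names and the transition sequence are assumptions, not any real OS's layout:

from dataclasses import dataclass, field
from enum import Enum, auto

class State(Enum):
    NEW = auto()
    READY = auto()
    RUNNING = auto()
    WAITING = auto()
    TERMINATED = auto()

@dataclass
class PCB:
    pid: int                      # process id
    state: State = State.NEW      # current process state
    program_counter: int = 0      # saved CPU register
    priority: int = 0             # CPU scheduling information
    open_files: list = field(default_factory=list)  # I/O status information

p = PCB(pid=42)
p.state = State.READY      # admitted to the ready queue
p.state = State.RUNNING    # dispatched to the CPU
p.state = State.WAITING    # blocked on an I/O request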
PROCESS SCHEDULING:
The operating system can use different scheduling algorithms to schedule processes. Here
are some commonly used scheduling algorithms:
 First-come, first-served (FCFS): This is the simplest scheduling algorithm, where
the process is executed on a first-come, first-served basis. FCFS is non-preemptive,
which means that once a process starts executing, it continues until it is finished or
waiting for I/O.
 Shortest Job First (SJF): SJF is a scheduling algorithm that selects the process
with the shortest burst time; the burst time is the time a process takes to complete
its execution. SJF can be non-preemptive or preemptive (shortest remaining time
first), and it minimizes the average waiting time of processes.
 Round Robin (RR): Round Robin is a preemptive scheduling algorithm that gives
each process a fixed slice of CPU time per round. If a process does not complete
its execution within the allotted time, it is preempted and added to the end of the
queue. RR ensures fair distribution of CPU time to all processes and avoids
starvation.
 Priority Scheduling: This scheduling algorithm assigns priority to each process
and the process with the highest priority is executed first. Priority can be set based
on process type, importance, or resource requirements.
 Multilevel queue: This scheduling algorithm divides the ready queue into several
separate queues, each queue having a different priority. Processes are queued based
on their priority, and each queue uses its own scheduling algorithm. This scheduling
algorithm is useful in scenarios where different types of processes have different
priorities.
FIFO SCHEDULING:
First Come First Serve CPU Scheduling Algorithm, shortly known as FCFS, is the
simplest CPU process scheduling algorithm. In the First Come First Serve algorithm,
processes are allowed to execute in a linear manner.
This means that whichever process enters the ready queue first is executed first. This
shows that the First Come First Serve algorithm follows the First In First Out (FIFO)
principle.
The First Come First Serve algorithm can be executed in a preemptive or non-preemptive
manner. Before going into examples, let us understand the preemptive and non-preemptive
approaches in CPU process scheduling.
Preemptive Approach
In preemptive process scheduling, the OS allots the resources to a process for a
predetermined period of time. The process transitions from the running state to the ready
state, or from the waiting state to the ready state, during resource allocation. This
switching happens because the CPU may give other processes precedence, substituting a
higher-priority process for the currently active one.
Non-Preemptive Approach
In non-preemptive process scheduling, the resource cannot be withdrawn from a process
before the process has finished running. Resources are switched only when a running
process finishes and transitions to the waiting state.
Convoy Effect In First Come First Serve (FCFS ):
Convoy Effect is a phenomenon which occurs in the Scheduling Algorithm named First
Come First Serve (FCFS).
The First Come First Serve scheduling algorithm operates in a non-preemptive way.
Non-preemptive means that if a process or job has started execution, the operating system
must run it to completion: until the running process or job finishes, the next process or job
does not start executing. In terms of the operating system, non-preemptive scheduling
means that the Central Processing Unit (CPU) is completely dedicated to the process or
job that started first, and a new process or job is executed only after the older process or
job finishes.
There may be cases in which one process or job holds the Central Processing Unit (CPU)
for too much time. This is because, in the non-preemptive First Come First Serve
approach, processes or jobs are chosen in serial order. Due to this, shorter jobs or
processes stuck behind larger ones take too long to complete execution, and the waiting
time, turnaround time, and completion time become very high.
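A small Python simulation makes the convoy effect visible. The process names, arrival times, and burst times below are made-up values; the function simply runs jobs in arrival order and reports each job's waiting and turnaround times:

def fcfs(processes):
    """processes: list of (name, arrival_time, burst_time), served in arrival order."""
    processes = sorted(processes, key=lambda p: p[1])  # first come, first served
    clock = 0
    for name, arrival, burst in processes:
        start = max(clock, arrival)       # wait for the CPU to become free
        clock = start + burst
        waiting = start - arrival
        turnaround = clock - arrival
        print(f"{name}: waiting={waiting}, turnaround={turnaround}")

# A long job arriving first delays all the short ones (convoy effect).
fcfs([("P1", 0, 20), ("P2", 1, 3), ("P3", 2, 3)])

Here the long job P1 arrives first, so the short jobs P2 and P3 wait 19 and 21 time units respectively, which is exactly the convoy effect described above.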
ROUND ROBIN SCHEDULING:
Round-robin scheduling allocates each task an equal share of the CPU time. In its simplest
form, tasks are in a circular queue and when a task's allocated CPU time expires, the task
is put to the end of the queue and the new task is taken from the front of the queue.
Round-robin scheduling is not very satisfactory in many real-time applications where each
task can have varying amounts of CPU requirements depending upon the complexity of
processing required. One variation of the pure round-robin scheduling is to provide
priority-based scheduling, where tasks with the same priority levels receive equal amounts
of CPU time. It is also possible to allocate different maximum CPU times to each task.
The simplest preemptive scheduling algorithm is round-robin, in which the processes are
given turns at running, one after the other in a repeating sequence, and each one is
preempted when it has used up its time slice. So, for example, if we have three processes
{A, B, C}, then the scheduler may run them in the sequence A, B, C, A, B, C, A, and so
on, until they are all finished. On a single processing core, such a schedule is maximally
efficient because the processor is continuously busy; there is always a process running.
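The following minimal Python sketch simulates round-robin with a time quantum of 2 on three hypothetical processes, all assumed to arrive at time 0; a preempted process goes to the back of the queue with its remaining burst time:

from collections import deque

def round_robin(processes, quantum):
    """processes: list of (name, burst_time); all assumed to arrive at t=0."""
    queue = deque(processes)
    clock = 0
    while queue:
        name, remaining = queue.popleft()
        time_slice = min(quantum, remaining)
        clock += time_slice
        if remaining > time_slice:
            queue.append((name, remaining - time_slice))  # back of the queue
        else:
            print(f"{name} finishes at t={clock}")

round_robin([("A", 5), ("B", 3), ("C", 4)], quantum=2)

With bursts of 5, 3, and 4, the completion order is B (t=9), C (t=11), and A (t=12).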
PRIORITY SCHEDULING:
Priority scheduling is one of the most common scheduling algorithms in batch systems;
in its basic form it is non-preemptive. Each process is assigned a priority, the process with
the highest priority is executed first, and so on. Processes with the same priority are
executed on a first come first served basis.
In real-time systems, a related scheme assigns priorities to tasks based on their periods:
the shorter the period, the higher the priority. The rate (of job releases) of a task is the
inverse of its period; hence, the higher its rate, the higher its priority. This is the idea
behind rate-monotonic scheduling.
There are two different categories of priority scheduling algorithms. Preemptive priority
scheduling is one, while non-preemptive priority scheduling is the other. Each process
may or may not have a different priority number assigned to it.
The three queues used for process scheduling are the job queue, the ready queue, and the
device queue.
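A non-preemptive priority scheduler can be sketched with a small priority queue (heap). The convention below, that a lower number means a higher priority, and all the process values are assumptions for illustration:

import heapq

def priority_schedule(processes):
    """Non-preemptive priority scheduling; lower number = higher priority.
    processes: list of (priority, name, burst_time), all arriving at t=0."""
    heapq.heapify(processes)              # order the ready queue by priority
    clock = 0
    while processes:
        priority, name, burst = heapq.heappop(processes)
        clock += burst                    # run the job to completion
        print(f"{name} (priority {priority}) completes at t={clock}")

priority_schedule([(2, "P1", 4), (1, "P2", 3), (3, "P3", 2)])
# -> P2 at t=3, then P1 at t=7, then P3 at t=9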
SHORTEST JOB FIRST SCHEDULING.
The Shortest Job First scheduling policy selects, from the waiting list, the process with the
shortest execution time to execute next. Shortest Job First scheduling is of two types:
preemptive and non-preemptive. The job with the shortest processing time is processed
first, a rule which reduces work-in-process inventory, average job completion (flow) time,
and average job lateness.
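A minimal non-preemptive SJF sketch in Python: sorting the (hypothetical) jobs by burst time and accumulating waiting times shows how the policy minimizes the average wait, assuming all jobs arrive at time 0:

def sjf(processes):
    """Non-preemptive Shortest Job First; processes: list of (name, burst_time)."""
    clock, total_wait = 0, 0
    for name, burst in sorted(processes, key=lambda p: p[1]):
        total_wait += clock          # time this job spent waiting
        clock += burst
        print(f"{name} completes at t={clock}")
    print("average waiting time =", total_wait / len(processes))

sjf([("P1", 6), ("P2", 8), ("P3", 7), ("P4", 3)])

For these bursts the jobs run in the order P4, P1, P3, P2, and the average waiting time is (0 + 3 + 9 + 16) / 4 = 7, which no other ordering of the same jobs can beat.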
UNIT – IV
FILE MANAGEMENT :
File management in an operating system refers to the set of processes and techniques
involved in creating, organizing, accessing, manipulating, and controlling files stored on
storage devices such as hard drives, solid-state drives, or network storage. It encompasses
a range of tasks and functionalities that ensure efficient handling of files, including their
creation, deletion, naming, classification, and protection.
File management serves as the intermediary layer between applications and the underlying
storage hardware, providing a logical and organized structure for storing and retrieving
data. It involves managing file metadata, which includes attributes such as file name, file
size, creation date, access permissions, ownership, and file type.
Objectives of File Management in Operating System
The main objectives of file management in an operating system (OS) are:
 File organization: To provide a logical and efficient way of organizing files and
folders, so that they can be easily located and accessed by users.
 Data security: To protect files from unauthorized access, accidental deletion, or
modification and provide a mechanism for data recovery.
 Data sharing: To enable multiple users to access and edit the same file
simultaneously or share files with other devices on a network.
 File backup: To create copies of important files to prevent data loss in case of
hardware failure or other issues.
 File compression: To reduce the size of files to save disk space or to make them
easier to transfer over the internet.
 File encryption: To protect files from unauthorized access by encrypting them with
a password or other security measures.
 File retrieval: To provide an efficient way of searching and retrieving files based
on keywords, file attributes, or other parameters.
 Space management: To manage the storage space efficiently by allocating and
deallocating space as required by the files and folders.
 File versioning: To maintain multiple versions of a file, so that previous versions
can be accessed and compared if needed.
 File Auditing: To provide a mechanism to trace the files and folder access and
modification history.
Properties of File Management System
File management in an operating system involves the organization, manipulation, and
management of files on a computer’s storage devices. Some key properties of file
management include:
 File organization: The way in which files are stored, organized, and accessed on a
storage device. This can include things like file naming conventions, directory
structures, and file metadata.
 File access: The ways in which files can be opened, read, written, and closed. This
can include things like permissions and access controls, which determine who can
read and write to a file.
 File backup and recovery: The ability to create copies of files for safekeeping, and
to restore them in the event of data loss.
 File compression and encryption: The ability to compress and encrypt files to
save space and protect data.
 File indexing and search: The ability to search for files based on certain criteria,
such as keywords in the file name or contents, and quickly locate and open them.
 File sharing: The ability for multiple users to access and collaborate on the same
files, either locally or over a network.
Functions of File Management in Operating System
The file management function of the operating system includes:
 File creation: Creating new files and folders for storing data.
 File organization: Organizing files and folders in a logical and efficient manner,
such as grouping related files together in a common folder.
 File backup: Creating copies of important files to prevent data loss in case of
hardware failure or other issues.
 File search: Finding files quickly and easily by searching for keywords or file
attributes such as date created or file size.
 File compression: Reducing the size of files to save disk space or to make them
easier to transfer over the internet.
 File encryption: Protect files from unauthorized access by encrypting them with a
password or other security measures.
 File sharing: Allowing multiple users to access and edit the same file
simultaneously or share files with other devices on a network.
 File deletion: Removing files or folders from the storage device to free up space.
 File recovery: Restoring files that have been accidentally deleted or lost due to
system crashes or other issues.
 File permissions: Setting access controls for files and folders to determine who can
read, write, or execute them.
Types of File Management in Operating System
There are several types of file management in operating systems, including:
1. Sequential File Management: In a sequential file management system, files are stored
on storage devices in a sequential manner. Each file occupies a contiguous block of
storage space, and accessing data within the file requires reading from the beginning until
the desired location is reached. This type of file management is simple but can be
inefficient for random access operations.
2. Indexed File Management: Indexed file management utilizes an index structure to
improve file access efficiency. In this system, an index file is created alongside the main
data file, containing pointers to various locations within the file. These pointers allow for
quick navigation and direct access to specific data within the file.
3. Direct File Management: Direct file management, also known as random access file
management, enables direct access to any part of a file without the need to traverse the
entire file sequentially. It utilizes a file allocation table (FAT) or a similar data structure to
keep track of file locations. This approach allows for faster and more efficient file access,
particularly for larger files.
4. File Allocation Table (FAT) File System: The FAT file system is commonly used in
various operating systems, including older versions of Windows. It employs a table,
known as the file allocation table, to track the allocation status of each cluster on a storage
device. The FAT file system supports sequential and random access to files and provides a
simple and reliable file management structure.
5. New Technology File System (NTFS): NTFS is a more advanced file system
commonly used in modern Windows operating systems. It offers features such as
enhanced security, file compression, file encryption, and support for larger file sizes and
volumes. NTFS utilizes a complex structure to manage files efficiently and provides
advanced file management capabilities.
6. Distributed File Systems: Distributed file systems allow files to be stored across
multiple networked devices or servers, providing transparent access to files from different
locations. Examples include the Network File System (NFS) and the Server Message
Block (SMB) protocol used in network file sharing.
These are some of the commonly used types of file management systems in operating
systems, each with its own strengths and characteristics, catering to different requirements
and use cases.
Advantages of File Management in OS
The advantages of file management in operating system are as follows:
 Improved organization: A file management system allows for the efficient
organization of files and folders, making it easier to locate and access the files you
need.
 Data security: File management systems provide mechanisms for protecting files
from unauthorized access and accidental deletion, as well as data recovery in case
of failure.
 Data sharing: File management systems enable multiple users to access and edit
the same file simultaneously or share files with other devices on a network.
 Backup and recovery: File management systems can automatically create backups
of important files, making it easy to restore lost or damaged files.
 Compression: File management systems can compress files, reducing their size and
making them easier to transfer over the internet.
 Encryption: File management systems can encrypt files, making them secure and
protecting them from unauthorized access.
 Search and retrieval: File management systems provide efficient ways to search
and retrieve files based on keywords, file attributes, or other parameters.
 Space management: File management systems manage storage space efficiently,
allocating and deallocating space as required by the files and folders.
 Versioning: File management systems can maintain multiple versions of a file so
that previous versions can be accessed and compared if needed.
Limitations of File Management in OS
Some known limitations of the file management system are given below:
 Limited storage capacity: Depending on the size of the storage device, the number
of files that can be stored may be limited.
 Data security: File management systems may not provide adequate protection
against data breaches or cyber-attacks.
 Limited search capabilities: File management systems may not provide advanced
search capabilities, making it difficult to locate specific files among a large number
of files.
 Complexity: File management systems may be complex to use, especially for non-
technical users.
 Limited collaboration: File management systems may not have the capability to
support multiple users to access and edit the same file simultaneously.
 Limited backup and recovery options: File management systems may not provide
the option to backup files in multiple locations, or may not have advanced recovery
options.
 Dependence on the OS: File management systems are dependent on the OS they
are implemented in, and may not be compatible with other systems.
 Limited versioning: File management systems may not provide a robust versioning
system, making it difficult to manage different versions of a file.
 Limited Auditing: File management systems may not provide detailed auditing,
making it difficult to trace files and folder access and modification history.
 Limited scalability and flexibility: File management systems may be limited in
their ability to scale to accommodate growing needs or to be customized to suit the
specific requirements of an organization.
Examples of File Management System
Some examples of File Management System are :
1. Windows Explorer on Windows OS: Windows Explorer is the default file
management system on Windows operating systems. It allows users to organize and
manage files and folders and search for and access files.
2. Finder on macOS: Finder is the default file management system on macOS. It
allows users to organize and manage files and folders and search for and access
files.
3. File Manager on Linux: Linux operating systems often come with a default file
manager such as Nautilus, Dolphin, or PCManFM that allows users to organize and
manage files and folders, as well as search for and access files.
4. Network-attached storage (NAS) systems are specialized file management
systems that can store and manage files on a network, allowing multiple users to
access and edit files simultaneously.
5. Cloud-based file storage services: Services like Dropbox, Google Drive, and
OneDrive provide a file management system that allows users to store and manage
files in the cloud, allowing access from multiple devices and collaboration with
other users.
6. Content management systems (CMS): These are specialized file management
systems that allow users to manage and organize digital assets like images, videos,
and documents, and also provide options for versioning and tagging.
7. Source code management systems: These are specialized file management
systems for managing source code, for example, Git, and SVN.
8. Database management systems: These are specialized file management systems
that allow users to manage and organize large amounts of structured data, for
example, MySQL, MongoDB, and PostgreSQL.
FILE ORGANISATION:
A file is a collection of data, usually stored on disk. As a logical entity, a file enables you
to divide your data into meaningful groups, for example, you can use one file to hold all of
a company's product information and another to hold all of its personnel information. As a
physical entity, a file should be considered in terms of its organization.
2.1 File Organizations
The term "file organization" refers to the way in which data is stored in a file and,
consequently, the method(s) by which it can be accessed. This COBOL system supports
three file organizations: sequential, relative and indexed.
2.1.1 Sequential Files
A sequential file is one in which the individual records can only be accessed sequentially,
that is, in the same order as they were originally written to the file. New records are
always added to the end of the file.
Three types of sequential file are supported by this COBOL system:
(i) Record sequential (ii) Line sequential (iii) Printer sequential
2.1.1.1 Record Sequential Files
Record sequential files are nearly always referred to simply as sequential files because
when you create a file and specify the organization as sequential, a record sequential file
is created by default.
2.1.1.2 Line Sequential Files
The primary use of line sequential files (which are also known as "text files" or "ASCII
files") is for display-only data. Most PC editors, for example Notepad, produce line
sequential files.
2.1.1.3 Printer Sequential Files
Printer sequential files are files which are destined for a printer, either directly, or by
spooling to a disk file. They consist of a sequence of print records with zero or more
vertical positioning characters (such as line-feed) between records. A print record consists
of zero or more printable characters and is terminated by a carriage return (x"0D").
With a printer sequential file, the OPEN statement causes a x"0D" to be written to the file
to ensure that the printer is located at the first character position before printing the first
print record. The WRITE statement causes trailing spaces to be removed from the print
record before it is written to the printer with a terminating carriage return (x"0D"). The
BEFORE or AFTER clause can be specified in the WRITE statement to cause one or
more line-feed characters (x"0A"), a form-feed character (x"0C"), or a vertical tab
character (x"0B") to be sent to the printer before or after writing the print record.
Printer sequential files should not be opened for INPUT or I/O.
2.1.2 Relative Files
A relative file is a file in which each record is identified by its ordinal position within the
file (record 1, record 2 and so on). This means that records can be accessed randomly as
well as sequentially. For sequential access, you simply execute a READ or WRITE
statement to access the next record in the file. For random access, you must define a data-
item as the relative key and then specify, in the data-item, the ordinal number of the
record that you want to READ or WRITE.
Access to relative files is fast, because the physical location of a record in the file is
directly calculated from its key.
Although variable-length records can be declared for a relative file, this can be wasteful of
disk space because the system allocates the maximum record length for all records in the
file, and pads unused character positions. This is done to maintain the fixed relationship
between the key and the location of the record.
As relative files always contain fixed-length records, no space is saved by specifying data
compression. In fact, if data compression is specified for a relative file, it is ignored by the
File Handler.
Each record in a relative file is followed by a two-byte record marker which indicates the
current status of the record.
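Because every record slot in a relative file has the same size, the physical position of a record can be computed directly from its ordinal number. A minimal Python sketch, assuming an 80-byte record length plus the two-byte record marker mentioned above (both values are illustrative):

RECORD_LENGTH = 80          # fixed-length records, an assumed value
MARKER_LENGTH = 2           # two-byte record marker after each record

def record_offset(record_number: int) -> int:
    """Byte offset of record N (1-based) in a relative file."""
    return (record_number - 1) * (RECORD_LENGTH + MARKER_LENGTH)

print(record_offset(1))     # -> 0
print(record_offset(10))    # -> 738

This direct calculation is why access to relative files is fast: no search or traversal is needed to locate a record.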
FILE MANAGER:
Concept and Responsibilities or Functions:
The File Manager is a system software responsible for the creation, deletion,
modification of the files and managing their access, security and the resources used by
them. These functions are performed in collaboration with the Device Manager.
The File Manager has big responsibilities/functions in its hands. It is in charge of the
physical components of the computer system, information resources, and the policies to
store and distribute files. Its responsibilities/functions include:
1. Keeping track of each file. In the system, the File Manager keeps track of each file
through directories that contain the file’s name, location in secondary storage and
other important information.
2. Using the policies that determine where and how files are stored, in order to use
the available storage efficiently and provide access to the files. The File Manager
has a set of predetermined policies that decide where and how files are stored and
how users gain access to them. It must also determine who can access what
material; this involves both flexibility of access to information and its protection.
It does this by allowing users access to shared files and public directories. The
operating system must also protect its files from system malfunctions or tampering.
3. Allocating each file when the user is granted access to them, and recording their
use. The File Manager allocates the files by activating the appropriate secondary
storage device and loading the file into the main memory while also updating the
records of who is using what file.
4. Deallocating files when their use is finished and they are no longer needed, and
communicating their availability to other processes that are waiting for them. It
deallocates a file by updating the file tables, rewriting the updated file to secondary
storage, and then notifying the waiting processes of its availability.
5. The File Manager may provide features for compressing files to reduce storage
space usage and encrypting files to protect sensitive information. This ensures data
integrity and confidentiality.
6. The File Manager may facilitate file backup operations to create copies of important
files for disaster recovery purposes. It also assists in restoring files from backups in
case of data loss or system failures.
7. The File Manager enables file sharing and collaboration among users. It allows
multiple users to access and work on the same file simultaneously, providing
mechanisms for synchronization and conflict resolution.
8. The File Manager is responsible for the maintenance and optimization of the file
system. This includes tasks such as file system integrity checks, disk
defragmentation, and managing file system quotas and permissions.
9. The File Manager handles the management of file metadata, such as file attributes
(e.g., size, creation date, modification date), permissions, ownership, and access
control lists. It ensures the accurate and consistent representation of file metadata.
FILE PROPERTIES:
Each file has characteristics like file name, file type, date (on which file was created), etc.
These characteristics are referred to as ‘File Attributes’. The operating system associates
these attributes with files. In different operating systems files may have different
attributes. Some people call attributes metadata also.
Attributes/ Properties includes File name, Identifier, size, location, type, protection
(access control), date and time, user name, etc. These are some common attributes
which may be present in all OS but there are some attributes which may vary from one OS
to another.
Following are some common file attributes:
1. Name: File name is the name given to the file. A name is usually a string of
characters.
2. Identifier: Identifier is a unique number for a file. It identifies files within the file
system. It is not readable to us, unlike file names.
3. Type: Type is another attribute of a file which specifies the type of file such as
archive file (.zip), source code file (.c, .java), .docx file, .txt file, etc.
4. Location: Specifies the location of the file on the device (The directory path). This
attribute is a pointer to a device.
5. Size: Specifies the current size of the file (in Kb, Mb, Gb, etc.) and possibly the
maximum allowed size of the file.
6. Protection: Specifies information about Access control (Permissions about Who
can read, edit, write, and execute the file.) It provides security to sensitive and
private information.
7. Time, date, and user identification: This information tells us about the date and
time on which the file was created, last modified, created and modified by which
user, etc.
Some Other Attributes May Include:
Attributes related to flags. These Flags control or enable some specific property:
1. Read-only flag: 0 for read/write; 1 for read-only.
2. Hidden flag: 0 for normal; 1 for do not display in listings of all files.
3. System flag: 0 for normal files; 1 for system files.
4. Archive flag: 0 for has been backed up; 1 for needs to be backed up.
5. ASCII/binary flag: 0 for ASCII file; 1 for binary file.
6. Random access flag: 0 for sequential access only; 1 for random access.
7. Temporary flag: 0 for normal; 1 for deleted file on process exit.
8. Lock flags: 0 for unlocked; nonzero for locked.
Attributes related to keys. These are present in files which can be accessed using a key:
1. Record length: Number of bytes in a record.
2. Key position: Offset of the key within each record.
3. Key length: Number of bytes in the key field.
Some file systems also support extended file attributes, such as character encoding of the
file and security features such as a file checksum.
Not all of the above attributes are present in every file. Files may possess different
attributes as per requirements, and the attributes also vary from system to system.
Attributes are also stored in secondary storage (the file name and identifier are stored in
the directory structure; the identifier in turn locates the other attributes). Attributes are
important because they provide extra information about files which can be useful.
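Many of these attributes can be inspected programmatically. A minimal illustration using Python's standard library (the file name is hypothetical):

import os, stat, time

info = os.stat("example.txt")                        # hypothetical file
print("size (bytes):", info.st_size)                 # size attribute
print("last modified:", time.ctime(info.st_mtime))   # time and date attribute
print("permissions:", stat.filemode(info.st_mode))   # protection attribute
print("owner user id:", info.st_uid)                 # ownership attribute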
FILE ACCESS METHODS:
When a file is used, its information is read into computer memory, and there are several
ways to access this information. Some systems provide only one access method for files.
Other systems, such as those of IBM, support many access methods, and choosing the
right one for a particular application is a major design problem.
The main ways to access a file in a computer system are sequential access, direct access,
and the indexed sequential method.
1. Sequential Access –
It is the simplest access method. Information in the file is processed in order, one
record after the other. This mode of access is by far the most common; for example,
editors and compilers usually access files in this fashion.
Key points:
o Data is accessed one record after another, in order.
o A read operation moves the file pointer ahead by one record.
o A write operation allocates space and moves the pointer to the end of the file.
o Such a method is reasonable for tape.
Advantages of Sequential Access Method :
 It is simple to implement this file access mechanism.
 It uses lexicographic order to quickly access the next entry.
 It is suitable for applications that require access to all records in a file, in a specific
order.
 It is less prone to data corruption as the data is written sequentially and not
randomly.
 It is a more efficient method for reading large files, as it only reads the required data
and does not waste time reading unnecessary data.
 It is a reliable method for backup and restore operations, as the data is stored
sequentially and can be easily restored if required.
Disadvantages of Sequential Access Method :
 If the file record that needs to be accessed next is not present next to the current
record, this type of file access method is slow.
 Moving a sizable chunk of the file may be necessary to insert a new record.
 It does not allow for quick access to specific records in the file. The entire file must
be searched sequentially to find a specific record, which can be time-consuming.
 It is not well-suited for applications that require frequent updates or modifications
to the file. Updating or inserting a record in the middle of a large file can be a slow
and cumbersome process.
 Sequential access can also result in wasted storage space if records are of varying
lengths. The space between records cannot be used by other records, which can
result in inefficient use of storage.
2. Direct Access –
Another method is the direct access method, also known as the relative access method. It
uses fixed-length logical records that allow the program to read and write records rapidly,
in no particular order. Direct access is based on the disk model of a file, since a disk
allows random access to any file block. For direct access, the file is viewed as a numbered
sequence of blocks or records. Thus, we may read block 14, then block 59, and then write
block 17. There is no restriction on the order of reading and writing for a direct access
file.
A block number provided by the user to the operating system is normally a relative block
number: the first relative block of the file is 0, then 1, and so on.
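A minimal Python sketch of this idea uses seek() to jump straight to a relative block
number; the block size and the file name "data.bin" are assumptions for illustration:

    BLOCK_SIZE = 512   # assumed block size in bytes

    def read_block(f, n):
        f.seek(n * BLOCK_SIZE)    # jump straight to relative block n (block 0 is the first)
        return f.read(BLOCK_SIZE)

    with open("data.bin", "rb") as f:   # hypothetical block-structured file
        b14 = read_block(f, 14)         # read block 14 ...
        b59 = read_block(f, 59)         # ... then block 59: no restriction on the order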
Advantages of Direct Access Method :
 Files can be accessed immediately, which decreases the average access time.
 In the direct access method, in order to access a block, there is no need to traverse
all the blocks that precede it.
3. Index sequential method –
This method of accessing a file is built on top of the sequential access method. It
constructs an index for the file. The index, like an index in the back of a book, contains
pointers to the various blocks. To find a record in the file, we first search the index and
then use the pointer to access the file directly.
Key points (a short sketch follows this list):
 It is built on top of sequential access.
 It controls the pointer by using the index.
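A minimal Python sketch of the two-step lookup, assuming a small in-memory index that
maps keys to block numbers (the index contents, block size and file name are illustrative
assumptions):

    BLOCK_SIZE = 512                  # assumed block size in bytes
    index = {"alice": 3, "bob": 17}   # assumed index: key -> block number

    def fetch(f, key):
        block_no = index[key]           # step 1: search the index for the pointer
        f.seek(block_no * BLOCK_SIZE)   # step 2: follow the pointer directly
        return f.read(BLOCK_SIZE)

    with open("data.bin", "rb") as f:   # hypothetical block-structured file
        record = fetch(f, "bob")        # one index lookup, then one direct access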
4. Relative Record Access –
Relative record access is a file access method used in operating systems where records are
accessed relative to the current position of the file pointer. In this method, records are
located based on their position relative to the current record, rather than by a specific
address or key value.
Key Points of Relative Record Access (a short sketch follows this list):
 Relative record access is a random access method that allows records to be accessed
based on their position relative to the current record.
 This method is efficient for accessing individual records but may not be suitable for
files that require frequent updates or random access to specific records.
 Relative record access requires fixed-length records and may not be flexible enough
for some applications.
 This method is useful for processing records in a specific order or for files that are
accessed sequentially.
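A minimal Python sketch of the idea, seeking relative to the current position of the file
pointer; the record size and file name are assumptions for illustration:

    import os

    RECORD_SIZE = 64   # relative record access assumes fixed-length records

    with open("data.rec", "rb") as f:          # hypothetical record file
        current = f.read(RECORD_SIZE)          # read the current record; pointer advances
        f.seek(3 * RECORD_SIZE, os.SEEK_CUR)   # skip 3 records forward from here
        later = f.read(RECORD_SIZE)            # the record 4 positions after the first one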
Advantages of Relative Record Access:
 Random Access: Relative record access allows random access to records in a file.
The system can access any record at a specific offset from the current position of
the file pointer.
 Efficient Retrieval: Since the system only needs to read the current record and any
records that need to be skipped, relative record access is more efficient than
sequential access for accessing individual records.
 Useful for Sequential Processing: Relative record access is useful for processing
records in a specific order. For example, if the records are sorted in a specific order,
the system can access the next or previous record relative to the current position of
the file pointer.
Disadvantages of Relative Record Access:
 Fixed Record Length: Relative record access requires fixed-length records. If the
records are of varying length, it may be necessary to use padding to ensure that each
record is the same length.
 Limited Flexibility: Relative record access is not very flexible. It is difficult to
insert or delete records in the middle of a file without disrupting the relative
positions of other records.
 Limited Application: Relative record access is best suited for files that are accessed
sequentially or with some regularity, but it may not be appropriate for files that are
frequently updated or require random access to specific records.
5. Content Addressable Access –
Content-addressable access (CAA) is a file access method used in operating systems that
allows records or blocks to be accessed based on their content rather than their address. In
this method, a hash function is used to calculate a unique key for each record or block, and
the system can access any record or block by specifying its key.
Keys in Content-Addressable Access (a short sketch follows this list):
 Unique: Each record or block has a unique key that is generated using a hash
function.
 Calculated based on content: The key is calculated based on the content of the
record or block, rather than its location or address.
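A minimal Python sketch of content-derived keys, using SHA-256 as the hash function
and a plain dictionary as a stand-in for the store (both choices are illustrative
assumptions):

    import hashlib

    store = {}   # hypothetical content-addressed store: key -> block

    def put(block):
        key = hashlib.sha256(block).hexdigest()   # key derives from content, not location
        store[key] = block                        # identical content always yields the same key
        return key

    k = put(b"some record contents")
    print(store[k] == b"some record contents")    # True: retrieved by content-derived key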
Advantages of Content-Addressable Access:
 Efficient Search: CAA is ideal for searching large databases or file systems because
it allows for efficient searching based on the content of the records or blocks.
 Flexibility: CAA is more flexible than other access methods because it allows for
easy insertion and deletion of records or blocks.
 Data Integrity: CAA ensures data integrity because each record or block has a
unique key that is generated based on its content.
Disadvantages of Content-Addressable Access:
 Overhead: CAA requires additional overhead because the hash function must be
calculated for each record or block.
 Collision: There is a possibility of collision where two records or blocks can have
the same key. This can be minimized by using a good hash function, but it cannot
be completely eliminated.
 Limited Key Space: The key space is limited by the size of the hash function used,
which can lead to collisions and other issues.
Key Points of Content-Addressable Access:
 Content-addressable access is a file access method that allows records or blocks to
be accessed based on their content rather than their address.
 CAA uses a hash function to generate a unique key for each record or block.
 CAA is efficient for searching large databases or file systems and is more flexible
than other access methods.
 CAA requires additional overhead for calculating the hash function and may have
collisions or limited key space.
UNIT – V
NETWORK MANAGEMENT:
Network management is the sum total of applications, tools and processes used to
provision, operate, maintain, administer and secure network infrastructure. The
overarching role of network management is ensuring network resources are made
available to users efficiently, effectively and quickly. It leverages fault analysis and
performance management to optimize network health.
A network brings together dozens, hundreds or thousands of interacting components.
These components will sometimes malfunction, be misconfigured, become overutilized or
simply fail. Enterprise network management software must respond to these challenges
by employing the best-suited tools required to manage, monitor and control the network.
The Importance of Network Management:
The principal objective of network management is to ensure your network infrastructure
runs efficiently and smoothly. By doing so, it achieves the following objectives.
a) Minimizes costly network disruptions
Network disruptions are expensive. Depending on the size of the organization or nature of
the affected processes, businesses could experience losses in the thousands or millions of
dollars after just an hour of downtime.
This loss is more than just the direct financial impact of network disruption – it’s also the
cost of a damaged reputation that makes customers reconsider their long-term
relationship. Slow, unresponsive networks are frustrating to both customers and
employees. They make it more difficult for staff to respond to customer requests and
concerns. Customers who experience network challenges too often will consider jumping
ship.
b) Improved productivity
By studying and monitoring every aspect of the network, network management handles
multiple jobs simultaneously. IT staff are thereby freed from repetitive everyday routines
and can focus on the more strategic aspects of their job.
c) Improved network security
An effective network management program can identify and respond to cyber threats
before they spread and impact user experience. Network management ensures best
practice standards and compliance with regulatory requirements. Better network security
enhances network privacy and gives users reassurance that they can use their devices
freely.
d) Holistic view of network performance
Effective network management provides a comprehensive view of your infrastructure’s
performance. You are in a better position to identify, analyze and fix issues fast.
NEED FOR NETWORKING:
Network management encompasses the following aspects.
a) Network administration
Network administration covers the addition and inventorying of network resources such as
servers, routers, switches, hubs, cables and computers. It also involves setting up the
network software, operating systems and management tools used to run the entire
network. Administration covers software updates and performance monitoring too.
b) Network operations
Network operations ensures the network works as expected. That includes monitoring
network activity, identifying problems and remediating issues. Identifying and addressing
problems should preferably occur proactively and not reactively even though both are
components of network operation.
c) Network maintenance
Network maintenance addresses fixes, upgrades and repairs to network resources
including switches, routers, transmission cables, servers and workstations. It consists of
remedial and proactive activities handled by network administrators such as replacing
switches and routers, updating access controls and improving device configurations.
When a new patch is available, it is applied as soon as possible.
d) Network provisioning
Network provisioning is the configuration of network resources in order to support a wide
range of services such as voice functions or additional users. It involves allocating and
configuring resources in line with organization’s required services or needs. The network
administrator deploys resources to meet the evolving needs of the organization.
For instance, a project may have many team members logging in remotely, thus
increasing the need for bandwidth. If a team requires file transfer or additional storage,
the onus falls on the network administrator to make these available.
e) Network security
Network security is the detection and prevention of network security breaches. That
involves maintaining activity logs on routers and switches. If a violation is detected, the
logs and other network management resources should provide a means of identifying the
offender. There should be a process of alerting and escalating suspicious activity.
The network security role covers the installation and maintenance of network protection
software, tracking endpoint devices, monitoring network behavior and identifying unusual
IP addresses.
f) Network automation
Automating the network is an important capability built to reduce cost and improve
responsiveness to known issues. As an example, rather than using manual effort to update
hundreds or thousands of network device configurations, network automation software
can deploy changes and report on configuration status automatically.
Challenges Of Network Management:
a) Complexity
Network infrastructure is complex, even in small and medium-sized businesses. The
number and diversity of network devices have made oversight more difficult. Thousands
of devices, operating systems and applications have to work together. The struggle to
maintain control over this sprawling ecosystem has been compounded by the adoption of
cloud computing and new networking technologies such as software-defined networking
(SDN).
b) Security threats
The number, variety and sophistication of network security threats have grown rapidly.
As a network grows, new vulnerabilities and potential points of failure are introduced.
c) User expectations
Users have grown accustomed to fast speeds. Advances in hardware and network
bandwidth, even at home, mean that users expect consistently high network performance
and availability. There's low tolerance for downtime.
d) Cost
The management of network infrastructure comes at a cost. While automated tools have
made the process easier than ever, there’s both the cost of technology and cost of labor to
contend with. This cost can be compounded when multiple instances of network
management software need to be deployed due to lack of scalability to support modern
enterprise networks with tens of thousands of devices.
PEER TO PEER NETWORK:
Peer-to-peer network operating systems allow users to share resources and files located
on their computers and to access shared resources found on other computers.
However, they do not have a file server or a centralized management source.
Before the development of P2P, USENET came into existence in 1979. The network
enabled users to read and post messages. Unlike the forums we use today, it did not have
a central server: new messages were copied to all the servers of the network.
 In the 1980s the first use of P2P networks occurred after personal computers were
introduced.
 In August 1988, the internet relay chat was the first P2P network built to share text
and chat.
 In June 1999, Napster was developed which was a file-sharing P2P software. It
could be used to share audio files as well. This software was shut down due to the
illegal sharing of files. But the concept of network sharing i.e P2P became popular.
 In June 2000, Gnutella was the first decentralized P2P file sharing network. This
allowed users to access files on other users’ computers via a designated folder.
Types of P2P networks:
1. Unstructured P2P networks: In this type of P2P network, each device is able to
make an equal contribution. This network is easy to build as devices can be
connected randomly in the network. But being unstructured, it becomes difficult to
find content. For example, Napster, Gnutella, etc.
2. Structured P2P networks: It is designed using software that creates a virtual layer
in order to put the nodes in a specific structure. These are not easy to set up but can
give easy access to users to the content. For example, P-Grid, Kademlia, etc.
3. Hybrid P2P networks: It combines the features of both P2P networks and client-
server architecture. An example of such a network is to find a node using the central
server.
Features of P2P network
 These networks do not involve a large number of nodes, usually fewer than 12. All
the computers in the network store their own data, but this data is accessible by the
group.
 Unlike client-server networks, P2P uses resources and also provides them. This
results in additional resources if the number of nodes increases. It requires
specialized software. It allows resource sharing among the network.
 Since the nodes act as clients and servers, there is a constant threat of attack.
 Almost all OS today support P2P networks.
P2P Network Architecture
In the P2P network architecture, the computers connect with each other in a workgroup to
share files, printers and internet access.
 Each computer in the network has the same set of responsibilities and capabilities.
 Each device in the network serves as both a client and server.
 The architecture is useful in residential areas, small offices, or small companies
where each computer acts as an independent workstation and stores the data on its
hard drive.
 Each computer in the network has the ability to share data with other computers in
the network.
 The architecture is usually composed of workgroups of 12 or fewer computers.
MASTER – SLAVE NETWORK :
Master-slave operating systems [1] are designed for systems containing a cluster of
computers. In such systems, there is one "master" computer, whereas the others are
"slaves". The "master" sets the process scheduling of the "slaves"; in this manner, most
of the scheduling activity is done by the "master".
The master/slave configuration is essentially a single-processor system extended with
slave processors that are managed by the primary master processor. It is an asymmetrical
system.
The work of the master processor is to manage the entire system consisting of files,
devices, main memory and the slave processors. It maintains the status of all the
processes, schedules the work for the slave processor and executes all control programs. It
is also responsible for storage management. This type of configuration is suitable for
computing environments where the processing time needs to be divided between front end
and back end processor. The advantage of this configuration is that it is simple to
understand. The disadvantages include:
 It is as reliable as a single processor system, i.e., if the master processor fails the
entire system fails.
 It creates more overhead. There are situations in which a slave processor becomes
free before the master processor can assign it another task; this wastes valuable
processing time.
 After each task is completed, a slave processor interrupts the master processor for
operating system intervention, such as I/O requests. This creates long queues at the
master processor.
COMBINATION NETWORK:
The combination network is one of the simplest and most insightful networks in coding
theory. Vector network coding solutions for this network and some of its sub-networks
have been examined; for a fixed alphabet size of a vector network coding solution, an
upper bound on the number of nodes in the network can be obtained.
PROTOCOLS:
A network protocol is a set of established rules that specify how to format, send and
receive data so that computer network endpoints, including computers, servers, routers
and virtual machines, can communicate despite differences in their underlying
infrastructures, designs or standards.
Need of Protocols
It may be that the sender and receiver of data are parts of different networks, located in
different parts of the world and having different data transfer rates. So we need protocols
to manage the flow control of data and the access control of the link being shared in the
communication channel. Suppose there is a sender X with a data transmission rate of
10 Mbps and a receiver Y with a data receiving rate of 5 Mbps. Since the receiving rate is
slower, some data will be lost during transmission. To avoid this, receiver Y needs to
inform sender X about the speed mismatch so that sender X can adjust its transmission
rate. Similarly, access control decides which node will access the link shared in the
communication channel at a particular instant in time; otherwise, if many computers send
data simultaneously through the same link, the transmitted data will collide, resulting in
corruption or loss of data.
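A toy Python sketch of the speed-mismatch idea above, in which the receiver advertises
how much it can accept in each round; the numbers are made up, and real protocols such
as TCP use a sliding window rather than this simple loop:

    data = bytes(100_000)   # pretend payload held by sender X
    sent = 0

    def receiver_window():
        return 5_000   # receiver Y: "send me at most 5 000 bytes this round"

    while sent < len(data):
        window = receiver_window()         # X learns Y's capacity before transmitting
        chunk = data[sent:sent + window]   # never exceed what Y can absorb
        # ... transmit chunk over the link here ...
        sent += len(chunk)

    print("delivered", sent, "bytes without overrunning the receiver")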
IP Address:
An IP address (Internet Protocol address) is a unique address that identifies a device over
the network. The Internet Protocol itself is a set of rules governing the structure of data
sent over the Internet or through a local network. An IP address helps the Internet to
distinguish between different routers, computers, and websites. It serves as a specific
machine identifier in a specific network and helps to establish virtual communication
between source and destination.
TCP /IP:
TCP/IP was designed and developed by the Department of Defense (DoD) in the 1960s
and is based on standard protocols. It stands for Transmission Control Protocol/Internet
Protocol. The TCP/IP model is a concise version of the OSI model. It contains four layers,
unlike the seven layers in the OSI model.
The number of layers is sometimes given as five and sometimes as four. Here we will
study five layers. In the 4-layer reference, the Physical Layer and Data Link Layer are
combined into a single layer called the ‘Physical Layer’ or ‘Network Interface Layer’.
The main work of TCP/IP is to transfer the data of a computer from one device to another.
The main condition of this process is to make data reliable and accurate so that the
receiver will receive the same information which is sent by the sender. To ensure that,
each message reaches its final destination accurately, the TCP/IP model divides its data
into packets and combines them at the other end, which helps in maintaining the accuracy
of the data while transferring from one end to another end.
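A toy Python sketch of this divide-and-recombine idea, numbering the packets so the
receiver can reassemble them even if they arrive out of order; the segment size and
payload are made-up values, not real TCP behaviour:

    import random

    MSS = 1024              # assumed maximum segment size in bytes
    message = b"x" * 5000   # the data to be sent

    # Sender: split the data into numbered packets.
    packets = [(seq, message[i:i + MSS])
               for seq, i in enumerate(range(0, len(message), MSS))]

    random.shuffle(packets)   # the network may deliver packets out of order

    # Receiver: sort by sequence number and recombine into the original data.
    reassembled = b"".join(chunk for _, chunk in sorted(packets))
    print(reassembled == message)   # True: the receiver gets exactly what was sent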
Difference between TCP and IP:
TCP and IP are different protocols of computer networks. The basic difference between
TCP (Transmission Control Protocol) and IP (Internet Protocol) lies in the transmission
of data. In simple words, IP finds the destination of the mail, while TCP does the work of
sending and receiving the mail. UDP is another transport protocol; like TCP, it runs on
top of IP, but unlike TCP, it does not establish a connection before sending data.
How the TCP/IP Model Works:
Whenever we want to send something over the internet using the TCP/IP model, the
model divides the data into packets at the sender’s end, and the same packets are
recombined at the receiver’s end to form the same data; this is done to maintain the
accuracy of the data. The data passes down through the layers in one order at the sender’s
end and back up through them in reverse order at the receiver’s end.
Layers of TCP/IP Model
(i) Application Layer (ii) Transport Layer(TCP/UDP) (iii) Network/Internet Layer(IP)
(iv) Data Link Layer (MAC) (v) Physical Layer
(Diagram: layer-by-layer comparison of the TCP/IP and OSI models.)
1. Physical Layer
The physical layer is concerned with the actual transmission of raw bits over the physical
medium, such as copper cable, optical fibre or radio links. It defines the electrical,
mechanical and timing characteristics of the link.
2. Data Link Layer
The packet’s network protocol type, in this case, TCP/IP, is identified by the data-link
layer. Error prevention and “framing” are also provided by the data-link layer. Point-to-
Point Protocol (PPP) framing and Ethernet IEEE 802.2 framing are two examples of data-
link layer protocols.
3. Internet Layer
This layer parallels the functions of OSI’s Network layer. It defines the protocols which
are responsible for the logical transmission of data over the entire network. The main
protocols residing at this layer are as follows:
 IP: IP stands for Internet Protocol and it is responsible for delivering packets from
the source host to the destination host by looking at the IP addresses in the packet
headers. IP has 2 versions: IPv4 and IPv6. IPv4 is the one that most websites are
using currently. But IPv6 is growing as the number of IPv4 addresses is limited in
number when compared to the number of users.
 ICMP: ICMP stands for Internet Control Message Protocol. It is encapsulated
within IP datagrams and is responsible for providing hosts with information about
network problems.
 ARP: ARP stands for Address Resolution Protocol. Its job is to find the hardware
address of a host from a known IP address. ARP has several types: Reverse ARP,
Proxy ARP, Gratuitous ARP, and Inverse ARP.
The Internet Layer is a layer in the Internet Protocol (IP) suite, which is the set of
protocols that define the Internet. The Internet Layer is responsible for routing packets of
data from one device to another across a network. It does this by assigning each device a
unique IP address, which is used to identify the device and determine the route that
packets should take to reach it.
Example: Imagine that you are using a computer to send an email to a friend. When you
click “send,” the email is broken down into smaller packets of data, which are then sent to
the Internet Layer for routing. The Internet Layer assigns an IP address to each packet and
uses routing tables to determine the best route for the packet to take to reach its
destination. The packet is then forwarded to the next hop on its route until it reaches its
destination. When all of the packets have been delivered, your friend’s computer can
reassemble them into the original email message.
In this example, the Internet Layer plays a crucial role in delivering the email from your
computer to your friend’s computer. It uses IP addresses and routing tables to determine
the best route for the packets to take, and it ensures that the packets are delivered to the
correct destination. Without the Internet Layer, it would not be possible to send data
across the Internet.
4. Transport Layer
The TCP/IP transport layer protocols exchange data receipt acknowledgments and
retransmit missing packets to ensure that packets arrive in order and without error; this is
referred to as end-to-end communication. Transmission Control Protocol (TCP) and User
Datagram Protocol (UDP) are the transport layer protocols at this level.
 TCP: Applications can interact with one another using TCP as though they were
physically connected by a circuit. TCP transmits data in a way that resembles
character-by-character transmission rather than separate packets. A starting point
that establishes the connection, the whole transmission in byte order, and an ending
point that closes the connection make up this transmission.
 UDP: The datagram delivery service is provided by UDP, the other transport layer
protocol. Connections between receiving and sending hosts are not verified by
UDP. Applications that transport small amounts of data use UDP rather than TCP
because it eliminates the processes of establishing and validating connections. (A
minimal socket sketch contrasting the two protocols follows this list.)
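A minimal Python socket sketch contrasting the two; the address and port are
placeholders, and the TCP calls are commented out since no real server is assumed to be
listening:

    import socket

    # TCP is connection-oriented: a handshake happens before any data flows.
    tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # tcp.connect(("example.com", 80))   # would establish the connection first
    # tcp.sendall(b"...")                # bytes then arrive in order, or an error is raised

    # UDP is connectionless: each datagram stands alone, with no delivery guarantee.
    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    udp.sendto(b"hello", ("127.0.0.1", 9999))   # fire and forget: no handshake, no ACK

    tcp.close()
    udp.close()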
5. Application Layer
This layer is analogous to the combined application, presentation and session layers of
the OSI model. It provides services directly to user applications and shields them from
the complexities of the lower layers. The three main protocols present in this layer are:
 HTTP and HTTPS: HTTP stands for Hypertext transfer protocol. It is used by the
World Wide Web to manage communications between web browsers and servers.
HTTPS stands for HTTP-Secure. It is a combination of HTTP with SSL(Secure
Socket Layer). It is efficient in cases where the browser needs to fill out forms, sign
in, authenticate, and carry out bank transactions.
 SSH: SSH stands for Secure Shell. It is terminal emulation software similar to
Telnet. SSH is preferred because of its ability to maintain an encrypted
connection. It sets up a secure session over a TCP/IP connection.
 NTP: NTP stands for Network Time Protocol. It is used to synchronize the clocks
on our computer to one standard time source. It is very useful in situations like bank
transactions. Assume the following situation without the presence of NTP. Suppose
you carry out a transaction, where your computer reads the time at 2:30 PM while
the server records it at 2:28 PM. The server can crash very badly if it’s out of sync.
The host-to-host layer is the transport layer of the TCP/IP model, corresponding to the
transport layer of the OSI (Open Systems Interconnection) model. It is responsible for
providing communication between hosts (computers or other devices) on a network.
Some common use cases for the host-to-host layer include:
1. Reliable Data Transfer: The host-to-host layer ensures that data is transferred
reliably between hosts by using techniques like error correction and flow control.
For example, if a packet of data is lost during transmission, the host-to-host layer
can request that the packet be retransmitted to ensure that all data is received
correctly.
2. Segmentation and Reassembly: The host-to-host layer is responsible for breaking
up large blocks of data into smaller segments that can be transmitted over the
network, and then reassembling the data at the destination. This allows data to be
transmitted more efficiently and helps to avoid overloading the network.
3. Multiplexing and Demultiplexing: The host-to-host layer is responsible for
multiplexing data from multiple sources onto a single network connection, and then
demultiplexing the data at the destination. This allows multiple devices to share the
same network connection and helps to improve the utilization of the network.
4. End-to-End Communication: The host-to-host layer provides a connection-
oriented service that allows hosts to communicate with each other end-to-end,
without the need for intermediate devices to be involved in the communication.
Example: Consider a network with two hosts, A and B. Host A wants to send a file to
host B. The host-to-host layer in host A will break the file into smaller segments, add error
correction and flow control information, and then transmit the segments over the network
to host B. The host-to-host layer in host B will receive the segments, check for errors, and
reassemble the file. Once the file has been transferred successfully, the host-to-host layer
in host B will acknowledge receipt of the file to host A.
In this example, the host-to-host layer is responsible for providing a reliable connection
between host A and host B, breaking the file into smaller segments, and reassembling the
segments at the destination. It is also responsible for multiplexing and demultiplexing the
data and providing end-to-end communication between the two hosts.
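A toy Python sketch of the acknowledge-and-retransmit behaviour described above,
using a simple stop-and-wait scheme with simulated packet loss; the loss rate and
segment contents are made up, and real transport protocols are considerably more
sophisticated:

    import random

    def send_with_acks(segments):
        # Toy stop-and-wait: retransmit each segment until its (simulated) ACK arrives.
        delivered = []
        for seg in segments:
            while random.random() < 0.3:   # 30% simulated loss rate (a made-up number)
                pass                       # no ACK received: retransmit the same segment
            delivered.append(seg)          # ACK received: move on to the next segment
        return delivered

    segments = [b"seg0", b"seg1", b"seg2"]
    print(b"".join(send_with_acks(segments)) == b"seg0seg1seg2")   # True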
Other Common Internet Protocols
The TCP/IP model covers many Internet protocols. The main role of these protocols is to
define how data is validated and sent over the Internet. Some common Internet protocols
include:
 HTTP (Hypertext Transfer Protocol): HTTP handles communication between web
browsers and websites.
 FTP (File Transfer Protocol): FTP takes care of how files are sent over the
Internet.
 SMTP (Simple Mail Transfer Protocol): SMTP is used to send and receive email.
Difference between TCP/IP and OSI Model
1. TCP/IP refers to Transmission Control Protocol/Internet Protocol, whereas OSI
refers to Open Systems Interconnection.
2. TCP/IP handles the session and presentation functions inside its application layer,
whereas OSI uses separate session and presentation layers.
3. TCP/IP follows a horizontal approach, whereas OSI follows a vertical approach.
4. The transport layer in TCP/IP does not guarantee delivery of packets, whereas in
the OSI model the transport layer provides assured delivery of packets.
5. Protocols cannot be replaced easily in the TCP/IP model, whereas in the OSI
model protocols are better covered and are easy to replace as technology changes.
6. The TCP/IP network layer provides only connectionless (IP) services, with
connections provided by the transport layer (TCP), whereas in the OSI model the
network layer provides both connectionless and connection-oriented services.
******************************