
OPERATING SYSTEM

SYLLABUS
Operating System
Objectives: In order to meet the ever-increasing need for computers, the study of operating systems is compulsory. This is a core technology
subject whose knowledge is absolutely essential for computer engineers. It familiarizes students with the concepts and
functions of an operating system and provides the knowledge to develop systems using advanced operating system concepts.
• To learn the evolution of Operating systems.
• To study the operations performed by the Operating System as a resource manager.
• To study computer security issues and Operating System tools.

1. Introduction: Operating system Meaning, Supervisor & User mode, operating system operations & Functions, Types of OS: Single-
processor system, multiprogramming, Multiprocessing, Multitasking, Parallel, Distributed, RTOS etc.

2. Operating System Structure: OS Services, System Calls, System Programs, OS Structures, Layered Structure, Virtual Machines.

3. Processes: Process Concept, PCB, Operation on Processes, Cooperating Processes, Inter process Communication, Process
Communication in Client Server Environment.
Threads: Concept of Thread, Kernel level & User level threads, Multithreading, Thread Libraries, Threading Issues

4. Scheduling: Scheduling criteria, scheduling algorithms, Types of Scheduling: Long term, Short term & Medium term scheduling,
multi-processor scheduling algorithms, thread scheduling.

5. Process Synchronization: Critical Section problem, semaphores, monitors, Deadlock characterization, Handling of deadlocks -
deadlock prevention, avoidance, detection, recovery from deadlock.
6. Memory Management: Logical & Physical Address space, Swapping, Contiguous memory allocation, paging, segmentation,
Virtual memory, demand paging, Page replacement & Page Allocation algorithms, thrashing, Performance issues

7. File Management: File concepts, access methods, directory structure, file system mounting, file sharing, protection, Allocation
methods, Free space Mgt., Directory Implementation.

8. I/O & Secondary Storage Structure: I/O H/W, Application I/O Interface, Kernel I/O subsystem, Disk Scheduling, disk management,
swap-space management, RAID structure.

9. System Protection: Goals of protection, Access matrix and its implementation, Access control and revocation of access rights,
capability-based systems

10. System Security: Security problem, program threats, system and network threats, cryptography as a security tool, user
authentication, implementing security defenses, firewalling to protect systems and networks.
Case studies: Windows OS, Linux or any other OS.
CONTENTS

Unit 1: Introduction to Operating System 1


Unit 2: Operation and Function of Operating System 14
Unit 3: Operating System Structure 29
Unit 4: Process Management 48
Unit 5: Scheduling 70
Unit 6: Process Synchronization 96
Unit 7: Memory Management 119
Unit 8: File Management 139
Unit 9: I/O & Secondary Storage Structure 159
Unit 10: System Protection 182
Unit 11: System Security 200
Unit 12: Security Solution 225
Unit 13: Case Study: Linux 241
Unit 14: Windows 2000 300
Unit 1: Introduction to Operating System

CONTENTS

Objectives

Introduction

Operating System: Meaning


History of Computer Operating Systems
Supervisor and User Mode
Goals of an Operating System
Generations of Operating Systems
0th Generation

First Generation (1951-1956)


Second Generation (1956-1964)
Third Generation (1964-1979)
Fourth Generation (1979 – Present)

Summary
Keywords
Self Assessment
Review Questions
Further Readings

Objectives

After studying this unit, you will be able to:


• Define operating system
• Know supervisor and user mode
• Explain various goals of an operating system
• Describe the generations of operating systems

Introduction

An Operating System (OS) is a collection of programs that acts as an interface between a user of a
computer and the computer hardware. The purpose of an operating system is to provide an
environment in which a user may execute programs. Operating systems are viewed as resource
managers. The main resource is the computer hardware in the form of processors, storage,
input/output devices, communication devices, and data. Some of the operating system functions are:
implementing the user interface, sharing hardware among users, allowing users to share data among
themselves, preventing users from interfering with one another, scheduling resources among users,
facilitating input/output, recovering from errors, accounting for resource usage, facilitating parallel
operations, organising data for secure and rapid access, and handling network communications.

Operating System: Meaning

An operating system (sometimes abbreviated as “OS”) is the program that, after being initially loaded
into the computer by a boot program, manages all the other programs in a computer. The other
programs are called applications or application programs. The application programs make use of the
operating system by making requests for services through a defined Application Program Interface (API).
In addition, users can interact directly with the operating system through a user interface such as a
command language or a Graphical User Interface (GUI).
Figure 1.1: Operating System Interface (the operating system sits between the computer and its hardware devices, such as the keyboard, mouse, monitor, video card, sound card, speakers, and hard drive)
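To make the idea of a defined API concrete, the short C program below requests two services from a POSIX-style operating system: getpid() to obtain its own process identifier and write() to send bytes to the screen. This is only an illustrative sketch; the calls shown are POSIX examples, and other systems (for example Windows with the Win32 API) expose equivalent services under different names.

#include <stdio.h>      /* snprintf                     */
#include <unistd.h>     /* POSIX API: getpid(), write() */

int main(void)
{
    char msg[64];

    /* Ask the operating system for this process's identifier. */
    int len = snprintf(msg, sizeof(msg),
                       "Hello from process %ld\n", (long) getpid());

    /* Request an I/O service: write the bytes to standard output.
       The kernel, not the application, drives the actual device. */
    if (write(STDOUT_FILENO, msg, (size_t) len) == -1)
        return 1;   /* the OS refused or failed the request */

    return 0;
}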

In a computer system, you find four main components: the hardware, the operating system, the
application software and the users. In a computer system, the hardware provides the basic computing
resources. The applications programs define the way in which these resources are used to solve the
computing problems of the users. The operating system controls and coordinates the use of the hardware
among the various systems programs and application programs for the various users.
You can view an operating system as a resource allocator. A computer system has many resources
(hardware and software) that may be required to solve a problem: CPU time, memory space, file
storage space, input/output devices, etc. The operating system acts as the manager of these resources
and allocates them to specific programs and users as necessary for their tasks. Since there may be many,
possibly conflicting, requests for resources, the operating system must decide which requests are
allocated resources to operate the computer system fairly and efficiently.
An operating system is a control program. This program controls the execution of user programs to
prevent errors and improper use of the computer. Operating systems exist because they are a
reasonable way to solve the problem of creating a usable computing system. The fundamental goal of a
computer system is to execute user programs and solve user problems.
While there is no universally agreed upon definition of the concept of an operating system, the
following is a reasonable starting point:
A computer’s operating system is a group of programs designed to serve two basic purposes:
1. To control the allocation and use of the computing system’s resources among the various users
and tasks, and
2. To provide an interface between the computer hardware and the programmer that simplifies and
makes feasible the creation, coding, debugging, and maintenance of application programs.

An effective operating system should accomplish the following functions:

1. Should act as a command interpreter by providing a user friendly environment.


2. Should facilitate communication with other users.
3. Facilitate the directory/file creation along with the security option.
4. Provide routines that handle the intricate details of I/O programming.
5. Provide access to compilers to translate programs from high-level languages to machine
language.
6. Provide a loader program to move the compiled program code to the computer’s memory for
execution.
7. Assure that when there are several active processes in the computer, each will get fair and non-
interfering access to the central processing unit for execution.
8. Take care of storage and device allocation.
9. Provide for long term storage of user information in the form of files.
10. Permit system resources to be shared among users when appropriate, and be protected from
unauthorised or mischievous intervention as necessary.
Figure 1.2: Abstract View of the Components of a Computer System (users 1 to n work with application programs such as compilers, assemblers, text editors and database systems, which run on top of the operating system, which in turn manages the computer hardware)

Though systems programs such as editors and translators and the various utility programs (such as sort
and file transfer program) are not usually considered part of the operating system, the operating system
is responsible for providing access to these system resources.
The abstract view of the components of a computer system and the positioning of OS is shown in the
Figure 1.2.

Task: “Is an operating system hardware or software?” Discuss.

History of Computer Operating Systems

Early computers lacked any form of operating system. The user had sole use of the machine and would
arrive armed with program and data, often on punched paper and tape. The program would be loaded
into the machine, and the machine would be set to work until the program completed or crashed.
Programs could generally be debugged via a front panel using switches and lights. It is said that Alan
Turing was a master of this on the early Manchester Mark I machine, and he was already deriving the
primitive conception of an operating system from the principles of the
Universal Turing machine.
Later machines came with libraries of support code, which would be linked to the user’s program to assist
in operations such as input and output. This was the genesis of the modern-day operating system.
However, machines still ran a single job at a time; at Cambridge University in England the job queue was
at one time a washing line from which tapes were hung with different colored clothes-pegs to indicate
job-priority.
As machines became more powerful, the time needed for a run of a program diminished and the time
to hand off the equipment became very large by comparison. Accounting for and paying for machine
usage moved on from checking the wall clock to automatic logging by the computer. Run queues
evolved from a literal queue of people at the door, to a heap of media on a jobs-waiting table, or batches
of punch-cards stacked one on top of the other in the reader, until the machine itself was able to select
and sequence which magnetic tape drives were online. Where program developers had originally had
access to run their own jobs on the machine, they were supplanted by dedicated machine operators
who looked after the well-being and maintenance of the machine and were less and less concerned with
implementing tasks manually. When commercially available computer centers were faced with the
implications of data lost through tampering or operational errors, equipment vendors were put under
pressure to enhance the runtime libraries to prevent misuse of system resources. Automated monitoring
was needed not just for CPU usage but for counting pages printed, cards punched, cards read, disk
storage used and for signaling when operator intervention was required by jobs such as changing
magnetic tapes.
All these features were building up towards the repertoire of a fully capable operating system.
Eventually the runtime libraries became an amalgamated program that was started before the first
customer job and could read in the customer job, control its execution, clean up after it, record its
usage, and immediately go on to process the next job. Significantly, it became possible for programmers
to use symbolic program-code instead of having to hand-encode binary images, once task-switching
allowed a computer to perform translation of a program into binary form before running it. These
resident background programs, capable of managing multistep processes, were often called monitors or
monitor-programs before the term operating system established itself.
An underlying program offering basic hardware-management, software-scheduling and resource-
monitoring may seem a remote ancestor to the user-oriented operating systems of the personal
computing era. But there has been a shift in meaning. With the era of commercial computing, more and
more “secondary” software was bundled in the operating system package, leading eventually to the
perception of an operating system as a complete user-system with utilities, applications (such as text
editors and file managers) and configuration tools, and having an integrated graphical user interface.
The true descendant of the early operating systems is what we now call the “kernel”. In technical and
development circles the old restricted sense of an operating system persists because of the continued
active development of embedded operating systems for all kinds of devices with a data-processing
component, from hand-held gadgets up to industrial robots and real-time control-systems, which do not
run user-applications at the front-end. An embedded operating system in a device today is not so far
removed as one might think from its ancestor of the 1950s.

Supervisor and User Mode

Single user mode is a mode in which a multiuser computer operating system boots into a single
superuser. It is mainly used for maintenance of multi-user environments such as network servers. Some
tasks may require exclusive access to shared resources, for example running fsck on a network share.
This mode may also be used for security purposes – network services are not run, eliminating the
possibility of outside interference. On some systems a lost superuser password
can be changed by switching to single user mode, but not asking for the password in such circumstances
is viewed as a security vulnerability.
You are all familiar with the concept of sitting down at a computer system and writing documents or
performing some task such as writing a letter. In this instance, there is one keyboard and one monitor
that you interact with.
Operating systems such as Windows 95, Windows NT Workstation and Windows 2000 professional are
essentially single user operating systems. They provide you the capability to perform tasks on the
computer system such as writing programs and documents, printing and accessing files.
Consider a typical home computer. There is a single keyboard and mouse that accept input commands,
and a single monitor to display information output. There may also be a printer for the printing of
documents and images.
In essence, a single-user operating system provides access to the computer system by a single user at a
time. If another user needs access to the computer system, they must wait till the current user finishes
what they are doing and leaves.
Students in computer labs at colleges or universities often experience this. You might also have
experienced this at home, where you want to use the computer but someone else is currently using it.
You have to wait for them to finish before you can use the computer system.

Goals of an Operating System


The primary objective of a computer is to execute instructions in an efficient manner and to increase
the productivity of the processing resources attached to the computer system, such as hardware
resources, software resources and the users. In other words, you can say that maximum CPU utilisation is
the main objective, because the CPU is the main device used for the execution of programs
or instructions. Briefly, the goals are:
1. The primary goal of an operating system is to make the computer convenient to use.

2. The secondary goal is to use the hardware in an efficient manner.

Generations of Operating Systems

Operating systems have been evolving over the years. You will briefly look at this development of
operating systems with respect to the evolution of the hardware/architecture of the computer systems
in this section. Since operating systems have historically been closely tied with the architecture of the
computers on which they run, you will look at successive generations of computers to see what their
operating systems were like. The operating system generations may not map exactly onto the computer
generations, but they roughly convey the idea behind them.
You can roughly divide them into five distinct generations that are characterized by hardware
component technology, software development, and mode of delivery of computer services.

0th Generation

The term 0th generation is used to refer to the period of development of computing which predated the
commercial production and sale of computer equipment. This period might be considered to stretch back
to when Charles Babbage invented the Analytical Engine. Afterwards came the computers of John
Atanasoff in 1940; the Mark I, built by Howard Aiken and a group of IBM engineers at Harvard in
1944; the ENIAC, designed and constructed at the University of Pennsylvania by J. Presper Eckert and
John Mauchly; and the EDVAC, developed in 1944-46 by John Von Neumann, Arthur
Burks, and Herman Goldstine (which was the first to fully implement the idea of the stored program and
serial execution of instructions) were designed. The development of EDVAC set the stage for the
evolution of commercial computing and operating system software. The hardware component
technology of this period was electronic vacuum tubes.
The actual operation of these early computers took place without the benefit of an operating system.
Early programs were written in machine language and each contained code for initiating operation of the
computer itself.
The mode of operation was called “open-shop” and this meant that users signed up for computer time
and when a user’s time arrived, the entire (in those days quite large) computer system was turned over
to the user. The individual user (programmer) was responsible for all machine set up and operation, and
subsequent clean-up and preparation for the next user. This system was clearly inefficient and
dependent on the varying competencies of the individual programmer as operators.

First Generation (1951-1956)

The first generation marked the beginning of commercial computing, including the introduction of
Eckert and Mauchly’s UNIVAC I in early 1951, and a bit later, the IBM 701 which was also known as the
Defence Calculator. The first generation was characterised again by the vacuum tube as the active
component technology.
Operation continued without the benefit of an operating system for a time. The mode was called “closed
shop” and was characterised by the appearance of hired operators who would select the job to be run,
initial program load the system, run the user’s program, and then select another job, and so forth.
Programs began to be written in higher level, procedure-oriented languages, and thus the operator’s
routine expanded. The operator now selected a job, ran the translation program to assemble or compile
the source program, and combined the translated object program along with any existing library
programs that the program might need for input to the linking program, loaded and ran the composite
linked program, and then handled the next job in a similar fashion.
Application programs were run one at a time, and were translated with absolute computer addresses
that bound them to be loaded and run from preassigned storage addresses set by the translator,
obtaining their data from specific physical I/O devices. There was no provision for moving a program to a
different location in storage for any reason. Similarly, a program bound to specific devices could not be
run at all if any of these devices were busy or broken.
The inefficiencies inherent in the above methods of operation led to the development of the mono-
programmed operating system, which eliminated some of the human intervention in running job and
provided programmers with a number of desirable functions. The OS consisted of a permanently
resident kernel in main storage, and a job scheduler and a number of utility programs kept in secondary
storage. User application programs were preceded by control or specification cards (in those day,
computer program were submitted on data cards) which informed the OS of what system resources
(software resources such as compilers and loaders; and hardware resources such as tape drives and
printer) were needed to run a particular application. The systems were designed to be operated as batch
processing system.
These systems continued to operate under the control of a human operator who initiated operation by
mounting a magnetic tape that contained the operating system executable code onto a “boot device”,
and then pushing the IPL (Initial Program Load) or “boot” button to initiate the bootstrap loading of the
operating system. Once the system was loaded, the operator entered the date and time, and then
initiated the operation of the job scheduler program which read and interpreted the control
statements, secured the needed resources, executed the first user program, recorded timing and
accounting information, and then went back to begin processing of
another user program, and so on, as long as there were programs waiting in the input queue to be
executed.
The first generation saw the evolution from hands-on operation to closed shop operation to the
development of mono-programmed operating systems. At the same time, the development of
programming languages was moving away from the basic machine languages; first to assembly
language, and later to procedure oriented languages, the most significant being the development of
FORTRAN by John W. Backus in 1956. Several problems remained, however; the most obvious was the
inefficient use of system resources, which was most evident when the CPU waited while the relatively
slower, mechanical I/O devices were reading or writing program data. In addition, system protection was
a problem because the operating system kernel was not protected from being overwritten by an
erroneous application program.
Moreover, other user programs in the queue were not protected from destruction by executing
programs.

Second Generation (1956-1964)

The second generation of computer hardware was most notably characterised by transistors replacing
vacuum tubes as the hardware component technology. In addition, some very important changes in
hardware and software architectures occurred during this period. For the most part, computer systems
remained card and tape-oriented systems. Significant use of random access devices, that is, disks, did
not appear until towards the end of the second generation. Program processing was, for the most part,
provided by large centralised computers operated under mono-programmed batch processing
operating systems.
The most significant innovations addressed the problem of excessive central processor delay due to
waiting for input/output operations. Recall that programs were executed by processing the machine
instructions in a strictly sequential order. As a result, the CPU, with its high speed electronic component,
was often forced to wait for completion of I/O operations which involved mechanical devices (card
readers and tape drives) that were order of magnitude slower. This problem led to the introduction of
the data channel, an integral and special-purpose computer with its own instruction set, registers, and
control unit designed to process input/output operations separately and asynchronously from the
operation of the computer’s main CPU near the end of the first generation, and its widespread adoption
in the second generation.
The data channel allowed some I/O to be buffered. That is, a program’s input data could be read
“ahead” from data cards or tape into a special block of memory called a buffer. Then, when the user’s
program came to an input statement, the data could be transferred from the buffer locations at the
faster main memory access speed rather than the slower I/O device speed. Similarly, a program’s output
could be written to another buffer and later moved from the buffer to the printer, tape, or card punch.
What made this all work was the data channel’s ability to work asynchronously and concurrently with
the main processor. Thus, the slower mechanical I/O could be happening concurrently with main
program processing. This process was called I/O overlap.
The data channel was controlled by a channel program set up by the operating system I/O control
routines and initiated by a special instruction executed by the CPU. Then, the channel independently
processed data to or from the buffer. This provided communication from the CPU to the data channel to
initiate an I/O operation. It remained for the channel to communicate to the CPU such events as data
errors and the completion of a transmission. At first, this communication was handled by polling – the
CPU stopped its work periodically and polled the channel to determine if there was any message.
Polling was obviously inefficient (imagine stopping your work periodically to go to the post office to see
if an expected letter has arrived) and led to another significant innovation of the second generation – the
interrupt. The data channel was able to interrupt the CPU with a message – usually “I/O complete.” In
fact, the interrupt idea was later extended from I/O to allow signalling of a number of exceptional
conditions such as arithmetic overflow, division by zero and time-run-out. Of course, interval clocks were
added in conjunction with the latter, and thus the operating system came to have a way of regaining
control from an exceptionally long or indefinitely looping program.
These hardware developments led to enhancements of the operating system. I/O and data channel
communication and control became functions of the operating system, both to relieve the application
programmer from the difficult details of I/O programming and to protect the integrity of the system, and
to provide improved service to users by segmenting jobs and running shorter jobs first (during “prime
time”) and relegating longer jobs to lower priority or night time runs. System libraries became more
widely available and more comprehensive as new utilities and application software components became
available to programmers.
In order to further mitigate the I/O wait problem, systems were set up to spool the input batch from slower
I/O devices such as the card reader to the much higher speed tape drive and similarly, the output from
the higher speed tape to the slower printer. In this scenario, the user submitted a job at a window, a
batch of jobs was accumulated and spooled from cards to tape “off line,” the tape was moved to the
main computer, the jobs were run, and their output was collected on another tape that later was taken
to a satellite computer for off-line tape-to-printer output. Users then picked up their output at the
submission window.
Toward the end of this period, as random access devices became available, tape-oriented operating
systems began to be replaced by disk-oriented systems. With the more sophisticated disk hardware and
the operating system supporting a greater portion of the programmer’s work, the computer system that
users saw was more and more removed from the actual hardware: users saw a virtual machine.
The second generation was a period of intense operating system development. Also it was the period for
sequential batch processing. But the sequential processing of one job at a time remained a significant
limitation. Thus, there continued to be low CPU utilisation for I/O bound jobs and low I/O device
utilisation for CPU bound jobs. This was a major concern, since computers were still very large (room-
size) and expensive machines. Researchers began to experiment with multiprogramming and
multiprocessing in their computing services called the time-sharing system.

Note: A noteworthy example is the Compatible Time Sharing System (CTSS), developed at MIT during the early 1960s.

Task: The CPU is the heart of the computer system. What about the ALU?

Third Generation (1964-1979)

The third generation officially began in April 1964 with IBM’s announcement of its System/360 family of
computers. Hardware technology began to use Integrated Circuits (ICs) which yielded significant
advantages in both speed and economy.
Operating system development continued with the introduction and widespread adoption of
multiprogramming. This was marked first by the appearance of more sophisticated I/O buffering in the
form of spooling operating systems, such as the HASP (Houston Automatic Spooling) system that
accompanied the IBM OS/360 system. These systems worked by introducing two new systems
programs, a system reader to move input jobs from cards to disk, and a system writer to move job
output from disk to printer, tape, or cards. Operation of spooling system was, as before, transparent to
the computer user who perceived input as coming directly from the cards and output going directly to
the printer.
The idea of taking fuller advantage of the computer’s data channel I/O capabilities continued to develop.
That is, designers recognised that I/O needed only to be initiated by a CPU instruction – the actual I/O
data transmission could take place under control of separate and asynchronously operating channel
program. Thus, by switching control of the CPU between the currently executing user program, the
system reader program, and the system writer program, it was possible to keep the slower mechanical
I/O devices running and to minimize the amount of time the CPU spent waiting for I/O completion. The net
result was an increase in system throughput and resource utilisation, to the benefit of both users and
providers of computer services.
This concurrent operation of three programs (more properly, apparent concurrent operation, since
systems had only one CPU, and could, therefore execute just one instruction at a time) required that
additional features and complexity be added to the operating system. First, the fact that the input queue
was now on disk, a direct access device, freed the system scheduler from the first-come-first-served
policy so that it could select the “best” next job to enter the system (looking for either the shortest job or
the highest priority job in the queue). Second, since the CPU was to be shared by the user program, the
system reader, and the system writer, some processor allocation rule or policy was needed. Since the
goal of spooling was to increase resource utilisation by enabling the slower I/O devices to run
asynchronously with user program processing, and since I/O processing required the CPU only for short
periods to initiate data channel instructions, the CPU was dispatched to the reader, the writer, and the
program in that order. Moreover, if the writer or the user program was executing when something
became available to read, the reader program would preempt the currently executing program to
regain control of the CPU for its initiation instruction, and the writer program would preempt the user
program for the same purpose. This rule, called the static priority rule with preemption, was
implemented in the operating system as a system dispatcher program.
The spooling operating system in fact had multiprogramming since more than one program was
resident in main storage at the same time. Later this basic idea of multiprogramming was extended to
include more than one active user program in memory at a time. To accommodate this extension, both
the scheduler and the dispatcher were enhanced. The scheduler became able to manage the diverse
resource needs of the several concurrently active user programs, and the dispatcher included policies
for allocating processor resources among the competing user programs. In addition, memory
management became more sophisticated in order to assure that the program code for each job or at
least that part of the code being executed, was resident in main storage.
The advent of large-scale multiprogramming was made possible by several important hardware
innovations such as:
1. The widespread availability of large capacity, high-speed disk units to accommodate the spooled
input streams and the memory overflow together with the maintenance of several concurrently
active programs in execution.
2. Relocation hardware which facilitated the moving of blocks of code within memory without any
undue overhead penalty.
3. The availability of storage protection hardware to ensure that user jobs are protected from one
another and that the operating system itself is protected from user programs.
4. Some of these hardware innovations involved extensions to the interrupt system in order to
handle a variety of external conditions such as program malfunctions, storage protection violations,
and machine checks in addition to I/O interrupts. In addition, the interrupt system became the
technique for the user program to request services from the operating system kernel.
5. The advent of privileged instructions allowed the operating system to maintain coordination and
control over the multiple activities now going on within the system.
Successful implementation of multiprogramming opened the way for the development of a new way of
delivering computing services-time-sharing. In this environment, several terminals, sometimes up to 200
of them, were attached (hard wired or via telephone lines) to a central computer. Users at their
terminals, “logged in” to the central system, and worked interactively with the system. The system’s
apparent concurrency was enabled by the multiprogramming operating system. Users shared not only
the system hardware but also its software resources and file system disk space.
The third generation was an exciting time, indeed, for the development of both computer hardware and
the accompanying operating system. During this period, the topic of operating systems became, in
reality, a major element of the discipline of computing.

Fourth Generation (1979 – Present)

The fourth generation is characterised by the appearance of the personal computer and the
workstation. Miniaturisation of electronic circuits and components continued and Large Scale
Integration (LSI), the component technology of the third generation, was replaced by Very Large Scale
Integration (VLSI), which characterizes the fourth generation. VLSI with its capacity for containing
thousands of transistors on a small chip, made possible the development of desktop computers with
capabilities exceeding those of machines that filled entire rooms and floors of buildings just twenty years earlier.
The operating systems that control these desktop machines have brought us back full circle, to the
open shop type of environment where each user occupies an entire computer for the duration of a job’s
execution. This works better now because the progress made over the years has made the virtual
computer resulting from the operating system/hardware combination so much easier to use, or, in the
words of the popular press, “user-friendly.”
However, improvements in hardware miniaturisation and technology have evolved so fast that you now
have inexpensive workstation-class computers capable of supporting multiprogramming and time-
sharing. Hence the operating systems that support today’s personal computers and workstations look
much like those which were available for the minicomputers of the third generation.

Example: Microsoft’s DOS for IBM-compatible personal computers and UNIX for workstations.
However, many of these desktop computers are now connected as networked or distributed systems.
Computers in a networked system each have their operating systems augmented with communication
capabilities that enable users to remotely log into any system on the network and transfer information
among machines that are connected to the network. The machines that make up a distributed system
operate as a virtual single-processor system from the user’s point of view; a central operating system
controls and makes transparent the location in the system of the particular processor or processors and
file systems that are handling any given program.

Summary

• This unit presented the principal operations of an operating system, briefly describing the history, the generations and the types of operating systems.
• An operating system is a program that acts as an interface between a user of a computer and the computer hardware.
• The purpose of an operating system is to provide an environment in which a user may execute programs.
• The primary goal of an operating system is to make the computer convenient to use, and the secondary goal is to use the hardware in an efficient manner.

Keywords
An Operating System: It is the most important program in a computer system that runs all the time, as
long as the computer is operational and exits only when the computer is shut down.
Desktop System: Modern desktop operating systems usually feature a Graphical user interface (GUI)
which uses a pointing device such as a mouse or stylus for input in addition to the keyboard.
Operating System: An operating system is a layer of software which takes care of technical aspects of a
computer’s operation.

Self Assessment

Choose the appropriate answers:


1. GUI stands for

(a) Graphical Used Interface


(b) Graphical User Interface
(c) Graphical User Interchange

(d) Good User Interface


2. CPU stands for
(a) Central Program Unit
(b) Central Programming Unit
(c) Central Processing Unit

(d) Centralization Processing Unit


3. FORTRAN stands for
(a) Formula Translation
(b) Formula Transformation
(c) Formula Transition
(d) Forming Translation

4. VLSI stands for


(a) Very Long Scale Integration
(b) Very Large Scale Interchange
(c) Very Large Scale Interface
(d) Very Large Scale Integration

5. API stands for


(a) Application Process Interface
(b) Application Process Interchange
(c) Application Program Interface

(d) Application Process Interfacing

Fill in the blanks:

6. An operating system is a ........................ .


7. Programs could generally be debugged via a front panel using ................................... and lights.
8. The data channel allowed some ..............................to be buffered.
9. The third generation officially began in April ........................ .
10. The system’s apparent concurrency was enabled by the multiprogramming ........................

Review Questions

1. What is the relation between application software and operating system?


2. What is an operating system? Is it a hardware or software?
3. Mention the primary functions of an operating system.
4. Briefly explain the evolution of the operating system.
5. What are the key elements of an operating system?
6. What do you understand by the term computer generations?

7. Who gave the idea of the stored program and in which year? Who gave the basic structure of the
computer?
8. Give the disadvantages of first generation computers compared to second generation computers.
9. On which system were the second generation computers based? What were the new inventions in the
second generation of computers?
10. Describe the term integrated circuit.
11. What is the significance of third generation computers?

12. Give a brief description of fourth generation computers. How is the technology better than that of the
previous generation?
13. What is the period of fifth generation computers?
14. What are the differences between hardware and software?
15. What are the differences between system software and application software?

Answers: Self Assessment

1. (b) 2. (c) 3. (a) 4. (d)


5. (c) 6. control program 7. switches 8. I/O
9. 1964 10. operating system

Further Readings

Books:

Andrew M. Lister, Fundamentals of Operating Systems, Wiley.
Andrew S. Tanenbaum and Albert S. Woodhull, Operating Systems: Design and Implementation, Prentice Hall.
Andrew S. Tanenbaum, Modern Operating Systems, Prentice Hall.
Colin Ritchie, Operating Systems, BPB Publications.
Deitel H.M., Operating Systems, 2nd Edition, Addison Wesley.
I.A. Dhotre, Operating System, Technical Publications.
Milenkovic, Operating Systems, Tata McGraw-Hill, New Delhi.
Silberschatz, Galvin & Gagne, Operating System Concepts, 7th Edition, John Wiley & Sons.
Stallings, W., Operating Systems, 2nd Edition, Prentice Hall.

Online links:

www.en.wikipedia.org
www.web-source.net
www.webopedia.com

Unit 2: Operation and Function of Operating System

CONTENTS

Objectives

Introduction

Operations and Functions of OS


Types of Operating System
Operating System: Examples
Disk Operating System (DOS)

UNIX
Windows
Macintosh
Summary
Keywords

Self Assessment
Review Questions
Further Readings

Objectives

After studying this unit, you will be able to:


• Describe operations and functions of operating system
• Explain various types of operating system

Introduction

The primary objective of an operating system is to increase the productivity of a processing resource, such as
computer hardware or computer-system users. User convenience and productivity were secondary
considerations. At the other end of the spectrum, an OS may be designed for a personal computer costing
a few thousand dollars and serving a single user whose salary is high. In this case, it is the user whose
productivity is to be increased as much as possible, with the hardware utilization being of much less
concern. In single-user systems, the emphasis is on making the computer system easier to use by
providing a graphical and hopefully more intuitively obvious user interface.

Operations and Functions of OS


The main operations and functions of an operating system are as follows:
1. Process Management
2. Memory Management
3. Secondary Storage Management
4. I/O Management
5. File Management


6. Protection
7. Networking Management
8. Command Interpretation.

Process Management

The CPU executes a large number of programs. While its main concern is the execution of user
programs, the CPU is also needed for other system activities. These activities are called processes. A
process is a program in execution. Typically, a batch job is a process. A time-shared user program is a
process. A system task, such as spooling, is also a process. For now, a process may be considered as a
job or a time-shared program, but the concept is actually more general.
The operating system is responsible for the following activities in connection with process
management:
1. The creation and deletion of both user and system processes
2. The suspension and resumption of processes.
3. The provision of mechanisms for process synchronization
4. The provision of mechanisms for deadlock handling.
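These process-management services are exposed to programmers through system calls. The following C sketch, assuming a POSIX environment, uses fork() to create a child process, execlp() to run another program in it (ls is chosen purely for illustration), and waitpid() to suspend the parent until the child terminates.

#include <stdio.h>      /* perror, printf */
#include <stdlib.h>     /* exit           */
#include <sys/wait.h>   /* waitpid        */
#include <unistd.h>     /* fork, execlp   */

int main(void)
{
    pid_t pid = fork();             /* ask the OS to create a new process */

    if (pid < 0) {
        perror("fork");             /* process creation failed            */
        return 1;
    }
    if (pid == 0) {                 /* child: run another program         */
        execlp("ls", "ls", "-l", (char *) NULL);
        perror("execlp");           /* reached only if exec failed        */
        exit(1);
    }

    int status;
    waitpid(pid, &status, 0);       /* parent suspends until child exits  */
    printf("child %ld finished with status %d\n",
           (long) pid, WEXITSTATUS(status));
    return 0;
}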

Memory Management

Memory is the most expensive part in the computer system. Memory is a large array of words or bytes,
each with its own address. Interaction is achieved through a sequence of reads or writes of specific
memory addresses. The CPU fetches from and stores in memory.
There are various algorithms that depend on the particular situation to manage the memory. Selection
of a memory management scheme for a specific system depends upon many factors, but especially upon
the hardware design of the system. Each algorithm requires its own hardware support.
The operating system is responsible for the following activities in connection with memory
management.
1. Keep track of which parts of memory are currently being used and by whom.
2. Decide which processes are to be loaded into memory when memory space becomes available.
3. Allocate and deallocate memory space as needed.
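As a toy illustration of this bookkeeping, the C sketch below manages a small table of fixed-size memory frames, recording which process owns each frame and allocating or releasing frames on request. The frame count, the owner table and the linear search are all assumptions made for the example; real memory managers (discussed in Unit 7) are far more sophisticated.

#include <stdio.h>

#define NUM_FRAMES 8
#define FREE       (-1)

/* owner[i] records which process (if any) currently holds frame i. */
static int owner[NUM_FRAMES] = { FREE, FREE, FREE, FREE,
                                 FREE, FREE, FREE, FREE };

/* Allocate one free frame to process 'pid'; return the frame number or -1. */
static int alloc_frame(int pid)
{
    for (int i = 0; i < NUM_FRAMES; i++) {
        if (owner[i] == FREE) {
            owner[i] = pid;
            return i;
        }
    }
    return -1;                      /* no memory available */
}

/* Release every frame held by process 'pid'. */
static void free_frames(int pid)
{
    for (int i = 0; i < NUM_FRAMES; i++)
        if (owner[i] == pid)
            owner[i] = FREE;
}

int main(void)
{
    int f1 = alloc_frame(1);        /* process 1 is given a frame */
    int f2 = alloc_frame(2);        /* process 2 is given another */
    printf("process 1 -> frame %d, process 2 -> frame %d\n", f1, f2);

    free_frames(1);                 /* process 1 terminates       */
    printf("frame %d is free again: %s\n",
           f1, owner[f1] == FREE ? "yes" : "no");
    return 0;
}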

Secondary Storage Management

The main purpose of a computer system is to execute programs. These programs, together with the
data they access, must be in main memory during execution. Since the main memory is too small to
permanently accommodate all data and programs, the computer system must provide secondary storage
to back up main memory. Most modern computer systems use disks as the primary on-line storage of
information, of both programs and data. Most programs, like compilers, assemblers, sort routines,
editors, formatters, and so on, are stored on the disk until loaded into memory, and then use the disk as
both the source and destination of their processing. Hence the proper management of disk storage is of
central importance to a computer system.

There are few alternatives. Magnetic tape systems are generally too slow. In addition, they are limited
to sequential access. Thus tapes are more suited for storing infrequently used files, where speed is not a
primary concern.
The operating system is responsible for the following activities in connection with disk management:
1. Free space management
2. Storage allocation
3. Disk scheduling.

I/O Management

One of the purposes of an operating system is to hide the peculiarities of specific hardware devices
from the user. For example, in UNIX, the peculiarities of I/O devices are hidden from the bulk of the
operating system itself by the I/O system. The operating system is responsible for the following activities
in connection with I/O management:
1. A buffer caching system
2. To activate a general device driver code
3. To run the driver software for specific hardware devices as and when required.

File Management

File management is one of the most visible services of an operating system. Computers can store
information in several different physical forms: magnetic tape, disk, and drum are the most common
forms. Each of these devices has its own characteristics and physical organisation.
For convenient use of the computer system, the operating system provides a uniform logical view of
information storage. The operating system abstracts from the physical properties of its storage devices to
define a logical storage unit, the file. Files are mapped, by the operating system, onto physical devices.
A file is a collection of related information defined by its creator. Commonly, files represent programs
(both source and object forms) and data. Data files may be numeric, alphabetic or alphanumeric. Files
may be free-form, such as text files, or may be rigidly formatted. In general a file is a sequence of bits,
bytes, lines or records whose meaning is defined by its creator and user. It is a very general concept.
The operating system implements the abstract concept of the file by managing mass storage devices,
such as tapes and disks. Also files are normally organised into directories to ease their use. Finally, when
multiple users have access to files, it may be desirable to control by whom and in what ways files may be
accessed.
The operating system is responsible for the following activities in connection with file management:
1. The creation and deletion of files.
2. The creation and deletion of directory.
3. The support of primitives for manipulating files and directories.
4. The mapping of files onto disk storage.

5. Backup of files on stable (non volatile) storage.


6. Protection and security of the files.
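On a POSIX system these services are reached through calls such as mkdir(), open(), write(), unlink() and rmdir(). The hedged sketch below creates a directory and a file, writes a few bytes, and removes both again; the path names are invented for the example.

#include <fcntl.h>      /* open, O_* flags             */
#include <stdio.h>      /* perror                      */
#include <sys/stat.h>   /* mkdir, permission bits      */
#include <unistd.h>     /* write, close, unlink, rmdir */

int main(void)
{
    /* Directory creation, one of the services listed above. */
    if (mkdir("demo_dir", 0755) == -1) { perror("mkdir"); return 1; }

    /* File creation: the OS maps the logical file onto disk blocks. */
    int fd = open("demo_dir/notes.txt", O_CREAT | O_WRONLY | O_TRUNC, 0644);
    if (fd == -1) { perror("open"); return 1; }

    const char text[] = "stored by the file system\n";
    if (write(fd, text, sizeof(text) - 1) == -1)   /* primitive for manipulating files */
        perror("write");
    close(fd);

    /* File and directory deletion, also OS responsibilities. */
    unlink("demo_dir/notes.txt");
    rmdir("demo_dir");
    return 0;
}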

Protection

The various processes in an operating system must be protected from each other’s activities. For that
purpose, various mechanisms are used to ensure that the files, memory segments, CPU and
other resources can be operated on only by those processes that have gained proper authorisation from
the operating system.

Example: Memory addressing hardware ensures that a process can only execute within its own
address space. The timer ensures that no process can gain control of the CPU without relinquishing it.
Finally, no process is allowed to do its own I/O, to protect the integrity of the various peripheral
devices.
Protection refers to a mechanism for controlling the access of programs, processes, or users to the
resources defined by a computer system, specifying the controls to be imposed, together with some
means of enforcement.
Protection can improve reliability by detecting latent errors at the interfaces between component
subsystems. Early detection of interface errors can often prevent contamination of a healthy subsystem
by a subsystem that is malfunctioning. An unprotected resource cannot defend against use (or misuse)
by an unauthorised or incompetent user.
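File permission bits are one familiar instance of such an authorisation mechanism. The sketch below, assuming a POSIX system and an existing file named report.txt (an invented example), reads the file's current mode with stat() and then revokes write permission for everyone except the owner with chmod().

#include <stdio.h>      /* printf, perror         */
#include <sys/stat.h>   /* stat, chmod, mode bits */

int main(void)
{
    struct stat st;
    const char *path = "report.txt";   /* invented example; assumed to exist */

    if (stat(path, &st) == -1) { perror("stat"); return 1; }
    printf("current mode of %s: %o\n", path, (unsigned) (st.st_mode & 0777));

    /* Revoke write permission for group and others; the owner keeps full rights. */
    mode_t new_mode = (st.st_mode & 07777) & ~(S_IWGRP | S_IWOTH);
    if (chmod(path, new_mode) == -1) { perror("chmod"); return 1; }

    printf("new mode of %s: %o\n", path, (unsigned) (new_mode & 0777));
    return 0;
}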

Task: “Memory is the most expensive part of the system.” Discuss.

Networking

A distributed system is a collection of processors that do not share memory or a clock. Instead, each
processor has its own local memory, and the processors communicate with each other through various
communication lines, such as high speed buses or telephone lines. Distributed systems vary in size and
function. They may involve microprocessors, workstations, minicomputers, and large general purpose
computer systems.
The processors in the system are connected through a communication network, which can be
configured in a number of different ways. The network may be fully or partially connected. The
communication network design must consider routing and connection strategies and the problems of
connection and security.
A distributed system provides the user with access to the various resources the system maintains. Access
to a shared resource allows computation speed-up, data availability, and reliability.

Command Interpretation

One of the most important components of an operating system is its command interpreter. The
command interpreter is the primary interface between the user and the rest of the system.
Many commands are given to the operating system by control statements. When a new job is started in
a batch system or when a user logs-in to a time-shared system, a program which reads and interprets
control statements is automatically executed. This program is variously called
(1) the control card interpreter, (2) the command line interpreter, (3) the shell (in Unix), and so on. Its
function is quite simple: get the next command statement, and execute it.
The command statements themselves deal with process management, I/O handling, secondary storage
management, main memory management, file system access, protection, and networking.
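The sketch below shows how small that core loop can be: a minimal, illustrative command interpreter in C that reads a command statement, splits it into words, and asks the operating system to execute it in a child process. A real shell adds built-in commands, pipes, redirection and job control, none of which is shown here.

#include <stdio.h>      /* printf, fflush, fgets, perror */
#include <stdlib.h>     /* exit                          */
#include <string.h>     /* strtok                        */
#include <sys/wait.h>   /* waitpid                       */
#include <unistd.h>     /* fork, execvp                  */

#define MAX_ARGS 16

int main(void)
{
    char line[256];
    char *args[MAX_ARGS + 1];

    for (;;) {
        printf("mini-sh> ");                  /* prompt                     */
        fflush(stdout);
        if (fgets(line, sizeof(line), stdin) == NULL)
            break;                            /* end of input: leave shell  */

        /* Split the command statement into words. */
        int n = 0;
        for (char *tok = strtok(line, " \t\n");
             tok != NULL && n < MAX_ARGS;
             tok = strtok(NULL, " \t\n"))
            args[n++] = tok;
        args[n] = NULL;
        if (n == 0)
            continue;                         /* empty line                 */

        /* "Get the next command statement, and execute it." */
        pid_t pid = fork();
        if (pid == 0) {                       /* child: run the command     */
            execvp(args[0], args);
            perror(args[0]);                  /* reached only if exec fails */
            exit(1);
        } else if (pid > 0) {
            waitpid(pid, NULL, 0);            /* shell waits for completion */
        } else {
            perror("fork");
        }
    }
    return 0;
}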

Figure 2.1 depicts the role of the operating system in coordinating all the functions.
Figure 2.1: Functions Coordinated by the Operating System (the operating system coordinates process management, memory management, secondary storage management, I/O management, file management, protection and security, networking, communication management and the user interface)

Types of Operating System

Modern computer operating systems may be classified into three groups, which are distinguished by the
nature of interaction that takes place between the computer user and his or her program during its
processing. The three groups are called batch, time-sharing and real-time operating systems.

Batch Processing Operating System

In a batch processing operating system environment users submit jobs to a central place where these
jobs are collected into a batch, and subsequently placed on an input queue at the computer where they
will be run. In this case, the user has no interaction with the job during its processing, and the computer’s
response time is the turnaround time: the time from submission of the job until execution is complete
and the results are ready for return to the person who submitted the job.

Time Sharing

Another mode for delivering computing services is provided by time sharing operating systems. In this
environment a computer provides computing services to several or many users concurrently on-line.
Here, the various users are sharing the central processor, the memory, and other resources of the
computer system in a manner facilitated, controlled, and monitored by the operating system. The user,
in this environment, has nearly full interaction with the program during its execution, and the
computer’s response time may be expected to be no more than a few seconds.

Real-time Operating System (RTOS)

The third class is the real time operating systems, which are designed to service those applications where
response time is of the essence in order to prevent error, misrepresentation or even disaster. Examples of
real time operating systems are those which handle airline reservations, machine tool control, and
monitoring of a nuclear power station. The systems, in this case, are designed to be interrupted by
external signals that require the immediate attention of the computer system.

These real time operating systems are used to control machinery, scientific instruments and industrial
systems. An RTOS typically has very little user-interface capability, and no end-user utilities. A very
important part of an RTOS is managing the resources of the computer so that a particular operation
executes in precisely the same amount of time every time it occurs. In a complex machine, having a part
move more quickly just because system resources are available may be just as catastrophic as having it
not move at all because the system is busy.
A number of other definitions are important to gain an understanding of operating systems:

Multiprogramming Operating System

A multiprogramming operating system is a system that allows more than one active user program (or part
of user program) to be stored in main memory simultaneously. Thus, it is evident that a time-sharing
system is a multiprogramming system, but note that a multiprogramming system is not necessarily a
time-sharing system. A batch or real time operating system could, and indeed usually does, have more
than one active user program simultaneously in main storage. Another important, and all too similar,
term is “multiprocessing”.
Figure 2.2: Memory Layout in a Multiprogramming Environment (primary memory simultaneously holds the monitor and user programs 1 through N)

Buffering and Spooling improve system performance by overlapping the input, output and computation
of a single job, but both of them have their limitations. A single user cannot always keep the CPU or I/O
devices busy at all times. Multiprogramming offers a more efficient approach to increase system
performance. In order to increase the resource utilisation, systems supporting the multiprogramming
approach allow more than one job (program) to reside in the memory to utilise CPU time at any
moment. The more programs that compete for system resources, the better the resource utilisation.
The idea is implemented as follows. The main memory of a system contains more than one program
(Figure 2.2).
The operating system picks one of the programs and starts executing it. During execution, program 1 may
need some I/O operation to complete; in a sequential execution environment (Figure 2.3a) the CPU
would then sit idle, whereas in a multiprogramming system (Figure 2.3b) the operating system will
simply switch over to the next program (program 2).

Figure 2.3: Multiprogramming. (a) Sequential execution: the CPU sits idle whenever program 1 or program 2 waits for I/O. (b) Execution in a multiprogramming environment: while one program waits for I/O, the CPU switches to the other, so the idle periods are filled with useful work.

When that program needs to wait for some I/O operation, the operating system switches over to program 3, and so on. If there is no new program left in main memory, control passes back to one of the earlier programs as soon as its I/O operation completes.
Multiprogramming has traditionally been employed to increase the resource utilisation of a computer system and to support multiple simultaneously interactive users (terminals).
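The switching behaviour of Figure 2.3 can be imitated with a toy simulation. The C sketch below is purely illustrative: the burst lengths, the three-tick I/O delay and the "pick the first ready program" rule are invented for the example, whereas a real operating system would use a proper scheduler and interrupt-driven I/O.

#include <stdio.h>

enum state { READY, BLOCKED, DONE };

struct program {
    enum state st;
    int cpu_left;      /* CPU ticks still needed                */
    int io_until;      /* tick at which a pending I/O completes */
};

int main(void)
{
    struct program p[3] = {
        { READY, 6, 0 }, { READY, 4, 0 }, { READY, 5, 0 }
    };
    int done = 0;

    for (int tick = 0; done < 3; tick++) {
        /* I/O devices work in parallel with the CPU: unblock finished ones */
        for (int i = 0; i < 3; i++)
            if (p[i].st == BLOCKED && tick >= p[i].io_until)
                p[i].st = READY;

        /* pick the first ready program; a real OS uses a scheduler here */
        int run = -1;
        for (int i = 0; i < 3; i++)
            if (p[i].st == READY) { run = i; break; }

        if (run < 0) { printf("tick %2d: CPU idle\n", tick); continue; }

        printf("tick %2d: running program %d\n", tick, run + 1);
        if (--p[run].cpu_left == 0) {
            p[run].st = DONE;
            done++;
        } else if (p[run].cpu_left % 2 == 0) {
            p[run].st = BLOCKED;          /* program issues an I/O request */
            p[run].io_until = tick + 3;   /* ...which takes 3 ticks        */
        }
    }
    return 0;
}

Running the simulation shows the CPU staying busy with another resident program whenever one of them is blocked on I/O, which is exactly the gain multiprogramming aims for.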

Multiprocessing System

A multiprocessing system is a computer hardware configuration that includes more than one
independent processing unit. The term multiprocessing is generally used to refer to large computer
hardware complexes found in major scientific or commercial applications.
A multiprocessor system is simply a computer that has more than one CPU on its motherboard. If the
operating system is built to take advantage of this, it can run different processes (or different threads
belonging to the same process) on different CPUs.
Today’s operating systems strive to make the most efficient use of a computer’s resources. Most of this
efficiency is gained by sharing the machine’s resources among several tasks (multi-processing). Such
“large-grain” resource sharing is enabled by operating systems without any additional information from
the applications or processes. All these processes can potentially execute concurrently, with the CPU (or
CPUs) multiplexed among them. Newer operating systems provide mechanisms that enable applications
to control and share machine resources at a finer grain, that is, at the thread level. Just as
multiprocessing operating systems can perform more than one task concurrently by running more than
a single process, a process can perform more than one task by running more than a single thread.
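As a minimal illustration of that finer grain, the POSIX threads sketch below lets one process compute a sum with two worker threads; on a multiprocessor the kernel may place the two threads on different CPUs, while on a single CPU their execution is simply interleaved. The range being summed and the two-way split are arbitrary choices for the example.

#include <pthread.h>
#include <stdio.h>

static void *sum_range(void *arg)
{
    long *bounds = arg;     /* bounds[0]..bounds[1], half-open; bounds[2] receives the result */
    long s = 0;
    for (long i = bounds[0]; i < bounds[1]; i++)
        s += i;
    bounds[2] = s;
    return NULL;
}

int main(void)
{
    long lo[3] = { 0,    5000,  0 };    /* first half of the range  */
    long hi[3] = { 5000, 10000, 0 };    /* second half of the range */
    pthread_t t1, t2;

    pthread_create(&t1, NULL, sum_range, lo);   /* task 1 */
    pthread_create(&t2, NULL, sum_range, hi);   /* task 2 */
    pthread_join(t1, NULL);                     /* wait for both workers */
    pthread_join(t2, NULL);

    printf("sum of 0..9999 = %ld\n", lo[2] + hi[2]);
    return 0;
}

Compile with cc -pthread; the process as a whole performs two tasks at once, one per thread.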


Figure 2.4: Diagrammatic Structure of a Multiprocessor System. The figure shows a PC platform in which processors (Intel IA-32/IA-64, AMD Athlon, Sun UltraSPARC) are attached to memory (DDR, RAMBUS) through a system interconnect (multiprocessor chip sets from Intel, AMD, VIA and ServerWorks, or multiprocessor switch fabrics from Sun and Unisys), and to peripheral devices through the system bus (PCI/PCI-X, InfiniBand, AMD HyperTransport) and peripheral buses (USB 2, IEEE 1394, IDE, SCSI, Serial ATA, Fibre Channel).

Task: Distinguish between multiprocessing and multitasking operating systems.

Networking Operating System

A networked computing system is a collection of physically interconnected computers. The operating system of each of the interconnected computers must contain, in addition to its own stand-alone functionality, provisions for handling communication; these additions do not change the essential structure of the operating system.
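Seen from a program, those communication provisions typically surface as an extra system-call interface such as Berkeley sockets on UNIX-like network operating systems. The sketch below sends one line of text to a remote machine; the host name printserver.example.com and port 9100 are made-up values used only for illustration.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netdb.h>
#include <sys/socket.h>

int main(void)
{
    struct addrinfo hints = { 0 }, *res;
    hints.ai_socktype = SOCK_STREAM;                  /* a TCP connection */

    /* the host name and port below are made up for the example */
    if (getaddrinfo("printserver.example.com", "9100", &hints, &res) != 0) {
        fprintf(stderr, "cannot resolve host\n");
        return 1;
    }

    int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) < 0) {
        perror("connect");
        return 1;
    }

    const char *msg = "hello from another machine\n";
    write(fd, msg, strlen(msg));       /* ordinary file-style I/O on the socket */
    close(fd);
    freeaddrinfo(res);
    return 0;
}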

Distributed Operating System

A distributed computing system consists of a number of computers that are connected and managed so
that they automatically share the job processing load among the constituent computers, or distribute the
job load, as appropriate, to particularly configured processors. Such a system requires an operating system
which, in addition to the typical stand-alone functionality, provides coordination of the operations and
information flow among the component computers. The networked and distributed computing
environments and their respective operating systems are designed with more complex functional
capabilities. In a network operating system, the users are aware of the existence of multiple computers,
and can log in to remote machines and copy files from one machine to another. Each machine runs its own local operating system and has its own user (or users).
A distributed operating system, in contrast, is one that appears to its users as a traditional uni-processor
system, even though it is actually composed of multiple processors. In a true distributed system, users
should not be aware of where their programs are being run or where their files are located; that should
all be handled automatically and efficiently by the operating system.
True distributed operating systems require more than just adding a little code to a uni-processor
operating system, because distributed and centralised systems differ in critical ways. Distributed systems,
for example, often allow programs to run on several processors at the same time, thus requiring more
complex processor scheduling algorithms in order to optimise the amount of parallelism achieved.

Operating Systems for Embedded Devices

As embedded systems (PDAs, cellphones, point-of-sale devices, VCRs, industrial robot controllers, or even your toaster) become more complex in hardware with every generation, and more features are added to them day by day, the applications they run increasingly need to be built on actual operating system code in order to keep development time reasonable. Some of the popular embedded operating systems are:
1. Nexus’s Conix: an embedded operating system for ARM processors.
2. Sun’s Java OS: a standalone virtual machine not running on top of any other OS; mainly targeted
at embedded systems.
3. Palm Computing’s Palm OS: currently the leading OS for PDAs, with many applications and supporting companies.
4. Microsoft’s Windows CE and Windows NT Embedded OS.

Single Processor System

In theory, every computer system may be programmed in its machine language, with no systems
software support. Programming of the “bare-machines” was customary for early computer systems. A
slightly more advanced version of this mode of operation is common for the simple evaluation boards
that are sometimes used in introductory microprocessor design and interfacing courses.
Programs for the bare machine can be developed by manually translating sequences of instructions into
binary or some other code whose base is usually an integer power of 2. Instructions and data are then fed
into the computer by means of console switches, or perhaps through a hexadecimal keyboard.
Programs are started by loading the program counter with the address of the first instruction. Results of
execution are obtained by examining the contents of the relevant registers and memory locations.
Input/Output devices, if any, must be controlled by the executing program directly, say, by reading and
writing the related I/O ports. Evidently, programming of the bare machine results in low productivity of
both users and hardware. The long and tedious process of program and data entry practically precludes
execution of all but very short programs in such an environment.
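The direct, program-controlled I/O mentioned above usually amounts to busy-waiting on a device register. The C fragment below sketches that style for a hypothetical serial output device; the register addresses and the TX_READY bit are invented, since the real values depend entirely on the memory map of the particular board.

#include <stdint.h>

/* Hypothetical memory-mapped registers of a serial output device; the
   addresses and the TX_READY bit are invented, and on a real board they
   come from its memory map or data sheet.                              */
#define UART_STATUS ((volatile uint8_t *)0x10000000u)
#define UART_DATA   ((volatile uint8_t *)0x10000004u)
#define TX_READY    0x01u

static void putc_polled(char c)
{
    while ((*UART_STATUS & TX_READY) == 0)
        ;                                  /* busy-wait until the device is ready */
    *UART_DATA = (uint8_t)c;               /* hand one byte to the device */
}

void print_polled(const char *s)           /* program-controlled output, no OS involved */
{
    while (*s)
        putc_polled(*s++);
}

On the bare machine every program carries such routines itself, which is precisely the duplication that a shared collection of standard I/O routines, and eventually an operating system, removes.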
The next significant evolutionary step in computer system usage came about with the advent of
input/output devices, such as punched cards and paper tape, and of language translators. Programs,
now coded in a programming language, are translated into executable form by a computer
program, such as a compiler or an interpreter. Another program, called the loader, automates the process
of loading executable programs into memory. The user places a program


and its input data on an input device, and the loader transfers information from that input device into
memory. After transferring control to the loaded program by manual or automatic means, execution of
the program commences. The executing program reads its input from the designated input device and
may produce some output on an output device, such as a printer or display screen. Once in memory, the
program may be rerun with a different set of input data.
The mechanics of development and preparation of programs in such environments are quite slow and
cumbersome due to serial execution of programs and numerous manual operations involved in the
process. In a typical sequence, the editor program is loaded to prepare the source code of the user
program. The next step is to load and execute the language translator and to provide it with the source
code of the user program. When serial input devices, such as card readers, are used, multiple-pass
language translators may require the source code to be repositioned for reading during each pass. If
syntax errors are detected, the whole process must be repeated from the beginning. Eventually, the
object code produced from the syntactically correct source code is loaded and executed. If run-time
errors are detected, the state of the machine can be examined and modified by means of console
switches, or with the assistance of a program called a debugger. The mode of operation described here
was initially used in the late fifties, but it was also common in low-end microcomputers of the early eighties, with cassettes as I/O devices.
In addition to language translators, system software includes the loader and possibly editor and
debugger programs. Most of them use input/output devices and thus must contain some code to
exercise those devices. Since many user programs also use input/output devices, the logical refinement
is to provide a collection of standard I/O routines for the use of all programs.
In the described system, I/O routines and the loader program represent a rudimentary form of an
operating system. Although quite crude, it still provides an environment for execution of programs far
beyond what is available on the bare machine. Language translators, editors, and debuggers are system
programs that rely on the services of, but are not generally regarded as part of, the operating system.
Although a definite improvement over the bare machine approach, this mode of operation is obviously
not very efficient. Running of the computer system may require frequent manual loading of programs and
data. This results in low utilization of system resources. User productivity, especially in multiuser
environments, is low as users await their turn at the machine. Even with such tools as editors and
debuggers, program development is very slow and is ridden with manual program and data loading.

Parallel Processing System

Parallel operating systems are primarily concerned with managing the resources of parallel machines.
This task faces many challenges: application programmers demand all the performance possible, many
hardware configurations exist and change very rapidly, yet the operating system must increasingly be
compatible with the mainstream versions used in personal computers and workstations due both to
user pressure and to the limited resources available for developing new versions of these systems. There
are several components in an operating system that can be parallelized. Most operating systems do not
approach all of them and do not support parallel applications directly. Rather, parallelism is frequently
exploited by some additional software layer such as a distributed file system, distributed shared
memory support or libraries and services that support particular parallel programming languages while
the operating system manages concurrent task execution.
The convergence in parallel computer architectures has been accompanied by a reduction in the
diversity of operating systems running on them. The current situation is that most commercially
available machines run a flavour of the UNIX OS (Digital UNIX, IBM AIX, HP UX, Sun Solaris, Linux).


Figure 2.5: Parallel Systems

Others run a UNIX based microkernel with reduced functionality to optimize the use of the CPU, such as
Cray Research’s UNICOS. Finally, a number of shared memory MIMD machines run Microsoft Windows
NT (soon to be superseded by the high end variant of Windows 2000).
There are a number of core aspects to the characterization of a parallel computer operating system:
general features such as the degrees of coordination, coupling and transparency; and more particular
aspects such as the type of process management, inter-process communication, parallelism and
synchronization and the programming model.

Multitasking

In computing, multitasking is a method where multiple tasks, also known as processes, share common
processing resources such as a CPU. In the case of a computer with a single CPU, only one task is said to
be running at any point in time, meaning that the CPU is actively executing instructions for that task.
Multitasking solves the problem by scheduling which task may be the one running at any given time, and
when another waiting task gets a turn. The act of reassigning a CPU from one task to another one is
called a context switch. When context switches occur frequently enough the illusion of parallelism is
achieved. Even on computers with more than one CPU (called multiprocessor machines), multitasking
allows many more tasks to be run than there are CPUs.
In the early days, computers were considered advanced card machines, and the jobs they performed were of the form "find all females in this bunch of cards (or records)". Utilisation was therefore high, since one delivered a job to the computing department, which prepared and executed the job on the computer and delivered the final result back. Advances in electronic engineering then increased processing power several times over, leaving input/output devices (card readers, line printers) far behind. This meant that the CPU had to wait for the data it required to perform a given task. Soon, engineers thought: "what if we could prepare, process and output data at the same time?" and multitasking was born. Now one could read the data for the next job while executing the current job and outputting the results of a previous job, thereby increasing the utilisation of the very expensive computer.
Cheap terminals allowed the users themselves to input data to the computer and to execute jobs
(having the department do it often took days) and see results immediately on the screen, which
introduced what were called interactive tasks. These tasks required a console to be updated when a key


was pressed on the keyboard (again a task with slow input). The same thing happens today, where your computer actually does no work most of the time; it just waits for your input. Therefore, using multitasking, where several tasks run on the same computer, improves performance.
Multitasking is the process of letting the operating system perform multiple tasks in what seems to the user to be a simultaneous manner. In SMP (Symmetric Multi-Processing) systems true simultaneity is possible, since there are several CPUs to execute programs on; in systems with only a single CPU it is achieved by switching execution very rapidly between the programs, thus giving the impression of simultaneous execution. This process is also known as task switching or timesharing. Practically all modern operating systems have this ability.
On single-processor machines, multitasking is implemented by letting the running process own the CPU for a while (a timeslice); when required, it is replaced by another process, which then owns the CPU. The two most common methods for sharing CPU time are cooperative multitasking and preemptive multitasking.
Cooperative Multitasking: The simplest form of multitasking is cooperative multitasking. It lets the programs decide when they wish to let other tasks run. This method is weak, since it allows one process to monopolise the CPU and never let other processes run; a program may also be reluctant to give up processing power for fear of another process hogging all the CPU time. Early versions of Mac OS (up to Mac OS 8) and versions of Windows earlier than Win95/WinNT used cooperative multitasking (Win95 also did when running old applications).
Preemptive Multitasking: Preemptive multitasking moves control of the CPU to the OS, letting each process run for a given amount of time (a timeslice) and then switching to another task. This method prevents one process from taking complete control of the system and thereby making it seem as if it has crashed. It is the most common method today, implemented by, among others, OS/2, Win95/98, WinNT, Unix, Linux, BeOS, QNX, OS9 and most mainframe operating systems. The assignment of CPU time is taken care of by the scheduler.
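The difference between the two styles can be seen in a toy round-robin loop. In the cooperative sketch below each task performs a small step and then voluntarily returns control; the task names and step counts are invented. Under preemptive multitasking the same switch would instead be forced by the timer interrupt and the scheduler after each timeslice.

#include <stdio.h>

struct task {
    const char *name;
    int steps_left;                    /* work remaining, in "steps" */
};

/* runs one step of a task; returns 1 if it still had work to do */
static int run_step(struct task *t)
{
    if (t->steps_left == 0)
        return 0;
    printf("%s: step %d\n", t->name, t->steps_left--);
    return 1;                          /* yield: give the CPU back voluntarily */
}

int main(void)
{
    struct task tasks[] = { { "editor", 3 }, { "compiler", 5 }, { "spooler", 2 } };
    int n = 3, alive = 3;

    while (alive > 0) {
        alive = 0;
        for (int i = 0; i < n; i++)    /* round-robin over all tasks */
            alive += run_step(&tasks[i]);
    }
    return 0;
}

Note that if one task refused to return from run_step(), no other task would ever run again; that is exactly the weakness of the cooperative approach described above.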

Operating System: Examples

Disk Operating System (DOS)

DOS (Disk Operating System) was the first widely-installed operating system for personal computers. It
is a master control program that is automatically run when you start your personal computer (PC). DOS
stays in the computer all the time, letting you run programs and manage files. It is a single-user
operating system from Microsoft for the PC. It was the first OS for the PC and is the underlying control
program for Windows 3.1, 95, 98 and ME. Windows NT, 2000 and XP emulate DOS in order to support
existing DOS applications.

UNIX

UNIX operating systems are used in widely-sold workstation products from Sun Microsystems, Silicon
Graphics, IBM, and a number of other companies. The UNIX environment and the client/server program
model were important elements in the development of the Internet and the reshaping of computing as
centered in networks rather than in individual computers. Linux, a UNIX derivative available in both
“free software” and commercial versions, is increasing in popularity as an alternative to proprietary
operating systems.
UNIX is written in C. Both UNIX and C were developed by AT&T and freely distributed to government
and academic institutions, causing it to be ported to a wider variety of machine families than any other
operating system. As a result, UNIX became synonymous with “open systems”.


UNIX is made up of the kernel, file system and shell (command line interface). The major shells are the
Bourne shell (original), C shell and Korn shell. The UNIX vocabulary is exhaustive with more than 600
commands that manipulate data and text in every way conceivable. Many commands are cryptic, but
just as Windows hid the DOS prompt, the Motif GUI presents a friendlier image to UNIX users. Even with
its many versions, UNIX is widely used in mission critical applications for client/server and transaction
processing systems. The UNIX versions that are widely used are Sun’s Solaris, Digital’s UNIX, HP’s HP-UX,
IBM’s AIX and SCO’s UnixWare. A large number of IBM mainframes also run UNIX applications, because
the UNIX interfaces were added to MVS and OS/390, which have obtained UNIX branding. Linux,
another variant of UNIX, is also gaining enormous popularity.

Windows

Windows is a personal computer operating system from Microsoft that, together with some commonly
used business applications such as Microsoft Word and Excel, has become a de facto “standard” for
individual users in most corporations as well as in most homes. Windows contains built-in networking,
which allows users to share files and applications with each other if their PCs are connected to a
network. In large enterprises, Windows clients are often connected to a network of UNIX and NetWare
servers. The server versions of Windows NT and 2000 are gaining market share, providing a Windows-
only solution for both the client and server. Windows is supported by Microsoft, the largest software
company in the world, as well as the Windows industry at large, which includes tens of thousands of
software developers.
This networking support is the reason why Windows became successful in the first place. However,
Windows 95, 98, ME, NT, 2000 and XP are complicated operating environments. Certain combinations
of hardware and software running together can cause problems, and troubleshooting can be daunting.
Each new version of Windows has interface changes that constantly confuse users and keep support people busy, and installing Windows applications can be problematic too. Microsoft has worked hard to make Windows 2000 and Windows XP more resilient to installation problems and to crashes in general.

Macintosh

The Macintosh (often called “the Mac”), introduced in 1984 by Apple Computer, was the first widely-
sold personal computer with a Graphical User Interface (GUI). The Mac was designed to provide users
with a natural, intuitively understandable, and, in general, “user-friendly” computer interface. This
includes the mouse, the use of icons or small visual images to represent objects or actions, the point-
and-click and click-and-drag actions, and a number of window operation ideas. Microsoft was successful
in adapting user interface concepts first made popular by the Mac in its first Windows operating system.
The primary disadvantage of the Mac is that there are fewer Mac applications on the market than for
Windows. However, all the fundamental applications are available, and the Macintosh is a perfectly
useful machine for almost everybody. Data compatibility between Windows and Mac is an issue,
although it is often overblown and readily solved.
The Macintosh has its own operating system, Mac OS, which in its latest version is called Mac OS X. Originally built on Motorola’s 68000 series microprocessors, Mac versions today are powered by the
PowerPC microprocessor, which was developed jointly by Apple, Motorola, and IBM. While Mac users
represent only about 5% of the total numbers of personal computer users, Macs are highly popular and
almost a cultural necessity among graphic designers and online visual artists and the companies they
work for.

Task: DOS is a character-based operating system. What about the Windows operating system?


Summary

 Operating systems may be classified by both how many tasks they can perform “simultaneously”
and by how many users can be using the system “simultaneously”. That is: single-user or multi-
user and single-task or multi-tasking.
 A multi-user system must clearly be multi-tasking.
 A possible solution to the external fragmentation problem is to permit the logical address space
of a process to be noncontiguous, thus allowing a process to be allocated physical memory
wherever the latter is available.
 Physical memory is broken into fixed-sized blocks called frames. Logical memory is also broken
into blocks of the same size called pages.
 Memory protection in a paged environment is accomplished by protection bits that are associated
with each frame.
 Segmentation is a memory-management scheme that supports this user view of memory.

 Segmentation may then cause external fragmentation, when all blocks of free memory are too
small to accommodate a segment.

Keywords
Clustered System: A clustered system is a group of loosely coupled computers that work together closely
so that in many respects they can be viewed as though they are a single computer.
Distributed System: A distributed system is a computer system in which the resources reside in separate units connected by a network, but which presents to the user a uniform computing environment.
Real-time Operating System: A Real-time Operating System (RTOS) is a multitasking operating system
intended for real-time applications. Such applications include embedded systems (programmable
thermostats, household appliance controllers, mobile telephones), industrial robots, spacecraft,
industrial control and scientific research equipment.

Self Assessment

Fill in the blanks:


1. A .......................... is a program in execution.

2. .......................... is a large array of words or bytes, each with its own address.


3. A .......................... is a collection of related information defined by its creator.
4. A ........................ provides the user with access to the various resources the system maintains.
5. An RTOS typically has very little user-interface capability, and no ........................ .
6. A .......................... cannot always keep the CPU or I/O devices busy at all times.
7. A multiprocessing system is a computer hardware configuration that includes more than
........................ independent processing unit.

8. A .......................... system is a collection of physically interconnected computers.


9. A system task, such as ............................ , is also a process.
10. ........................ is achieved through a sequence of reads or writes of specific memory address.


Review Questions

1. Write short note on Distributed System.


2. Explain the nature of real time system.
3. What is batch system? What are the shortcomings of early batch systems? Explain it.
4. Write the differences between the time sharing system and distributed system.

5. Describe real time operating system. Give an example of it.


6. Explain parallel system with suitable example.
7. Write the differences between the real time system and personal system.
8. “Most modern computer systems use disks as the primary on-line storage of information, of both
programs and data”. Explain.
9. Write short note on networking.
10. “The operating system picks one of the programs and starts executing”. Discuss.

Answers: Self Assessment

1. process    2. Memory    3. file
4. distributed system    5. end-user utilities    6. single user
7. one    8. networked computing    9. spooling    10. Interaction

Further Readings

Books    Andrew M. Lister, Fundamentals of Operating Systems, Wiley.
         Andrew S. Tanenbaum and Albert S. Woodhull, Operating Systems: Design and Implementation, Prentice Hall.
         Andrew S. Tanenbaum, Modern Operating Systems, Prentice Hall.
         Colin Ritchie, Operating Systems, BPB Publications.
         Deitel H.M., Operating Systems, 2nd Edition, Addison Wesley.
         I.A. Dhotre, Operating System, Technical Publications.
         Milankovic, Operating System, Tata McGraw Hill, New Delhi.
         Silberschatz, Galvin and Gagne, Operating System Concepts, Seventh Edition, John Wiley & Sons.
         Stallings, W., Operating Systems, 2nd Edition, Prentice Hall.

Online links www.en.wikipedia.org


www.web-source.net
www.webopedia.com
