
Directorate of Distance Education

Kurukshetra University
Kurukshetra-136119
PGDCA/MSc. (cs)-1/MCA-1

Paper- CS-DE-15 Writer: Dr. Sanjay Tyagi


Lesson No.1 Vetter: Dr. Pardeep Kumar
______________________________________________________________________

INTRODUCTION TO OPERATING SYSTEMS

STRUCTURE

1. Introduction

2. Objective

3. Presentation of Contents

3.1 Operating System as a Resource Manager

3.1.1 Memory Management Functions

3.1.2 Processor / Process Management Functions

3.1.3 Device Management Functions

3.1.4 Information Management Functions

3.1.5 Network Management Functions

3.2 Extended Machine View of an Operating System

3.3 Evolution of Processing Trends

3.3.1 Serial Processing

3.3.2 Batch Processing

3.3.3 Multiprogramming

3.4 Types Of Operating Systems

3.4.1 Batch Operating System

3.4.2 Multiprogramming Operating System

3.4.3 Multitasking Operating System

3.4.4 Multi-user Operating System

3.4.5 Multithreading

3.4.6 Time Sharing System

3.4.7 Real Time Systems

3.4.8 Combination Operating Systems

3.5 Distributed Operating Systems

4. Summary

5. Suggested Readings / Reference Material

6. Self Assessment Questions (SAQ)

1. INTRODUCTION

The Operating System (OS) is system software that acts as an interface between a user of the computer and the computer hardware. The main purpose of an Operating System is to provide an environment in which programs can be executed. It may be viewed as a collection of software consisting of procedures for operating the computer and providing an environment for execution of programs. So an Operating System makes everything in the computer work together smoothly and efficiently. The Operating System is also referred to as a resource manager because it manages the resources of a computer system, namely devices, memory, processor and information.

The primary need for the Operating System arises from the fact that the user needs to be provided with services, and the Operating System ought to facilitate the provisioning of these services. The central part of a computer system is a processing engine called the CPU. A system should make it possible for a user's application to use the processing unit. A user application would need to store information, and the Operating System makes memory available to an application when required. Similarly, user applications need some way to input data so as to communicate with the application. This is often in the form of a keyboard, a mouse or a joystick.

In the same manner, applications may require outputs generated on a monitor or printer; output may also be a video or audio file. These devices too are managed by the Operating System.

All the applications being used in a computer may require resources for processing, storage of information, a mechanism for inputting information, and provision for outputting information.

These service facilities are provided by an operating system regardless of the nature of the application. The Operating System offers generic services to support all the above operations, and these operations in turn facilitate the applications mentioned earlier. To that extent an Operating System's operation is application neutral and service specific.

In order to understand operating systems, we must understand the computer hardware and the development of Operating Systems from the beginning. Hardware means the physical machine and its electronic components, including memory chips, input/output devices, storage devices and the central processing unit. Software comprises the programs written for these computer systems. Main memory is where data and instructions are stored to be processed. Input/output devices are the peripherals attached to the system, such as keyboards, printers, disk drives, CD drives, magnetic tape drives, modems, monitors, etc. The central processing unit is the brain of the computer system; it has circuitry to control the interpretation and execution of instructions, and it controls the operation of the entire computer system. All storage references, data manipulations and I/O operations are performed by the CPU. There are four components of a computer system: hardware, the Operating System, application and system programs, and users.

The hardware provides the basic computing power. The system programs provide the way in which these resources are used to solve the computing problems of the users. There may be many different users trying to solve different problems. The Operating System controls and coordinates the use of the hardware among the various users and the application programs.

Users
    |
Application programs (compiler, assembler, text editor, database)
    |
Operating System
    |
Computer Hardware

Figure 1.1: Basic components of a computer system

We can view an Operating System as a resource allocator. A computer system has many resources that are required to solve a computing problem: CPU time, memory space, file storage space, input/output devices and so on. The Operating System acts as the manager of all of these resources and allocates them to the specific programs and users as needed by their tasks. Since there can be many conflicting requests for the resources, the Operating System must decide which requests are to be allocated resources so as to operate the computer system fairly and efficiently.

An Operating System can also be viewed as a control program, used to control the various I/O devices and the users' programs. A control program controls the execution of the user programs to prevent errors and improper use of the computer resources; it is especially concerned with the operation and control of I/O devices. As stated above, the fundamental goal of a computer system is to execute user programs and solve user problems, and computer hardware is constructed to achieve this goal. But the bare hardware is not easy to use, and for this purpose application/system programs are developed. These various programs require some common operations, such as controlling and using input/output devices and using CPU time for execution. The common functions of controlling and allocating resources among different users and application programs are brought together into one piece of software called the operating system. It is easier to define operating systems by what they do than by what they are. The primary goal of operating systems is to make the use of the computer easy: operating systems make it easier to compute. A secondary goal is efficient operation of the computer system. Large computer systems are very expensive, so it is desirable to make them as efficient as possible. Operating systems thus make optimal use of computer resources.

2. OBJECTIVE

In the present lesson, the functions of the operating system as a resource manager, such as Memory Management, Processor/Process Management, Device Management, Information Management and Network Management, are discussed. The evolution of processing trends and the basic characteristics and design objectives of various operating systems are also covered.

In the later part of the lesson, various types of operating systems, such as Batch Operating System, Multiprogramming Operating System, Multitasking Operating System, Multi-user Operating System, Multithreading, Time Sharing System, Real Time Systems and Distributed Operating Systems, are elaborated.

3. PRESENTATION OF CONTENTS

3.1 OPERATING SYSTEM AS A RESOURCE MANAGER

The operating system is a manager of resources. We shall discuss these resources and what the different modules of the operating system must do to manage them. Viewing the operating system as a resource manager is just one of three standard views of an operating system; the others are the hierarchical view and the extended machine view. In the first view, the operating system is a collection of programs designed to manage the system's resources, namely memory, processors, peripheral devices, and information (programs and data). All of these resources are valuable, and it is the function of the operating system to see that they are used efficiently and to resolve conflicts arising from competition among the various users. The operating system must keep track of the status of each resource; decide which process is to get the resource (how much, where, and when); allocate it; and eventually reclaim it. We classify system resources into certain classes, namely:

 Memory

 Processors

 Input/Output Devices

 Information.

Viewing the operating system as a resource manager, each resource manager must do the following:

 Keep track of the resources

 Enforce allocation policy

 Allocate the resource

 Reclaim the resource.

We have classified all operating system programs into classes in direct correspondence with these classes of resources. For each class of programs, we mention below its major functions and the typical name of the program/module where they are implemented.

3.1.1 Memory Management Functions

To execute a program, it must be mapped to absolute addresses and loaded into memory. As the program executes, it accesses instructions and data from memory by generating these absolute addresses. The Operating System is responsible for the following memory management functions:

 Keep track of every physical memory location in the system. Which parts are in use, and by whom? Which parts are not in use (called free)?

 Decide which process gets memory, when it gets it, and how much. Allocate the memory when the process requests it and the allocation policy permits it.

 Reclaim the memory when the process no longer needs it or has been terminated.
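The three duties above (keep track, allocate, reclaim) can be illustrated with a toy sketch. The Python fragment below is only an illustration; the class name and the simple frame-based layout are assumptions, not how any particular operating system implements memory management.

```python
class MemoryManager:
    """Toy tracker of physical memory frames (illustrative only)."""

    def __init__(self, num_frames):
        # None marks a free frame; otherwise the frame records its owner.
        self.frames = [None] * num_frames

    def allocate(self, process, count):
        free = [i for i, owner in enumerate(self.frames) if owner is None]
        if len(free) < count:
            return None              # policy: refuse if not enough free frames
        for i in free[:count]:
            self.frames[i] = process
        return free[:count]

    def reclaim(self, process):
        # Called when the process terminates or no longer needs the memory.
        for i, owner in enumerate(self.frames):
            if owner == process:
                self.frames[i] = None

mm = MemoryManager(8)
mm.allocate("A", 3)
mm.allocate("B", 4)
mm.reclaim("A")
print(sum(f is None for f in mm.frames))   # the 3 reclaimed + 1 never-used frame
```

Note how refusing an oversized request in allocate() is exactly the "enforce allocation policy" duty from the list above.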

3.1.2 Processor/Process Management Functions

A process is an instance of a program in execution. While a program is just a passive entity, a process is an active entity performing the intended functions of its related program. A process needs certain resources like CPU, memory, files and I/O devices. In a multiprogramming environment, there will be a number of processes existing simultaneously in the system. The Operating System is responsible for the following processor/process management functions:

 Keep track of the processor and the status of processes. The program that does this work is called the traffic controller by some researchers/authors.

 Decide which process gets an opportunity to use the processor. The job scheduler (also called the long-term scheduler) chooses from all submitted jobs and decides which ones will be allowed into the system, i.e., have their stint with the CPU. In a multiprogramming system, decide which process gets the processor, when, and for how much time. The module that does this is called the process scheduler (or short-term scheduler).

 Allocate the processor to a process by setting up the required hardware registers.

 Reclaim the processor when the process stops using it, terminates, or exceeds the allowed quantum of usage.

It may be noted that in deciding which job gets into the system, several factors are taken into account. For example, a job requesting more memory or tape drives than the system has should not be admitted into the system; that is, it should not have any resource assigned to it, not even main memory.
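As a concrete sketch of short-term scheduling, the toy round-robin loop below shows the allocate/reclaim cycle described above: each ready process receives the processor for one quantum and is requeued if it still has work left. The function name and the fixed-quantum policy are illustrative assumptions; a real process scheduler also saves register state and handles processes that block on I/O.

```python
from collections import deque

def round_robin(bursts, quantum):
    """bursts: {process: remaining CPU time}. Returns the execution order."""
    ready = deque(bursts.items())
    order = []
    while ready:
        name, left = ready.popleft()
        order.append(name)              # allocate the processor for one quantum
        left -= quantum
        if left > 0:
            ready.append((name, left))  # reclaim it and requeue the process
    return order

print(round_robin({"P1": 3, "P2": 2}, 1))   # -> ['P1', 'P2', 'P1', 'P2', 'P1']
```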

3.1.3 Device Management Functions

The Operating System is responsible for the following I/O device management functions:

 Keep track of the resources (I/O devices, I/O channels, etc.). This module is often referred to as the I/O traffic controller.

 Decide an efficient way to allocate the I/O resource. If it is shared, then decide who gets it, how much of it is to be allotted, and for how long. This is referred to as I/O scheduling.

 Allocate the I/O device and start the I/O operation.

 Reclaim the device once its use is over. In most cases I/O terminates automatically.

How devices are allocated depends on whether an I/O device is to be dedicated to a process, shared among several processes, or is itself a virtual device. Spooling (simultaneous peripheral operations online) supports the concept of virtual devices.
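A minimal sketch of spooling: instead of each process waiting its turn on the single real printer, its output goes into a queue (the virtual device), and a separate routine drains the queue to the real device. All names below are invented for illustration.

```python
from collections import deque

spool = deque()   # the virtual printer: a queue of pending output

def spool_write(process, data):
    spool.append((process, data))   # fast: no wait for the real device

def printer_daemon(emit):
    # Drains the spool to the one real printer, one item at a time.
    while spool:
        process, data = spool.popleft()
        emit(f"{process}: {data}")

out = []
spool_write("P1", "report")
spool_write("P2", "invoice")
printer_daemon(out.append)
print(out)   # -> ['P1: report', 'P2: invoice']
```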

3.1.4 Information Management Functions

The major information management functions for which the Operating System is responsible are:

 Keep track of the information: its location, usage, status, etc. The module known as the file system provides these facilities.

 Decide who gets hold of information, apply protection mechanisms, and provide information access mechanisms, etc.

 Allocate the information to a requesting process, e.g., open a file.

 De-allocate the resource, e.g., close a file.

3.1.5 Network Management Functions

An Operating System is responsible for computer system networking in a distributed environment. A distributed system is a collection of processors that do not share memory, a clock, or peripheral devices. Instead, each processor has its own clock and RAM, and the processors communicate through a network. Access to shared resources permits increased speed, functionality and reliability.

3.2 EXTENDED MACHINE VIEW OF AN OPERATING SYSTEM

Having identified the system resources that must be managed by the operating system, and having indicated from the process viewpoint when the corresponding resource manager comes into play, we now answer the question: how are these resource managers activated, and where do they reside? Does the memory manager ever invoke the process scheduler? Does the scheduler ever call upon the services of the memory manager? Is the process concept only for the user, or is it used by the operating system as well?

First we present the concept of a bare machine: a computer without its software clothing. A bare machine does not provide the environment that most programmers desire. The instructions that perform resource management functions are typically not provided by the bare machine; they form part of the operating system's services. A user program requests these services by issuing special supervisor instructions, and the services are called much like a user's subroutines.

So the operating system provides several instructions in addition to the bare machine instructions. The instructions that form part of the bare machine, together with those provided by the operating system, constitute the instruction set of the extended machine.

The operating system kernel runs on the bare machine; user programs run on the extended machine. This means that the kernel of the operating system is written using the instructions of the bare machine only, whereas users can write their programs using the instructions of the extended machine.
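The gate between the two machines can be sketched as a supervisor-call dispatch: user code never reaches the kernel's service routines directly, only a numbered entry point. The service numbers and routines below are invented purely for illustration.

```python
# Kernel side: services the extended machine adds on top of the bare machine.
KERNEL_SERVICES = {
    0: lambda args: f"opened {args[0]}",            # stand-in for an OPEN service
    1: lambda args: f"wrote {len(args[0])} bytes",  # stand-in for a WRITE service
}

def svc(number, *args):
    """Supervisor call: the only gate from user code into kernel services."""
    return KERNEL_SERVICES[number](args)

# User side, running on the extended machine: the services are called
# much like ordinary subroutines.
print(svc(0, "data.txt"))   # -> opened data.txt
print(svc(1, "hello"))      # -> wrote 5 bytes
```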

3.3 EVOLUTION OF PROCESSING TRENDS

3.3.1 Serial Processing

In theory, every computer system may be programmed in its machine language, with no systems software support. Programming of the bare machine was customary for early computer systems. Programming of the bare machine results in low productivity of both users and hardware. The long and tedious process of program and data entry practically precludes execution of all but very short programs in such an environment.

The next significant evolutionary step in computer-system usage came about with the advent of input/output devices, such as punched cards and paper tape, and of language translators. Programs, now coded in a programming language, are translated into executable form by a computer program, such as a compiler or an interpreter. Another program, called the loader, automates the process of loading executable programs into memory. The user places a program and its input data on an input device, and the loader transfers information from that input device into memory. After transferring control to the loader program by manual or automatic means, execution of the program commences. The executing program reads its input from the designated input device and may produce some output on an output device, such as a printer or display screen. Once in memory, the program may be rerun with a different set of input data.

The mechanics of development and preparation of programs in such environments are quite slow and cumbersome due to serial execution of programs and to the numerous manual operations involved in the process. In a typical sequence, the editor program is loaded to prepare the source code of the user program. The next step is to load and execute the language translator and to provide it with the source code of the user program. When serial input devices, such as a card reader, are used, multiple-pass language translators may require the source code to be repositioned for reading during each pass. If syntax errors are detected, the whole process must be repeated from the beginning. Eventually, the object code produced from the syntactically correct source code is loaded and executed. If run-time errors are detected, the state of the machine can be examined and modified by means of console switches, or with the assistance of a program called a debugger.

The mode of operation described here was initially used in the late fifties, but it was also common in low-end microcomputers of the early eighties, with cassettes as I/O devices.

In addition to language translators, systems software includes the loader and possibly editor and debugger programs. Most of them use input/output devices and thus must contain some code to exercise those devices. Since many user programs also use input/output devices, the logical refinement is to provide a collection of standard I/O routines for the use of all programs. This realization led to a progression of implementations, ranging from the placing of card decks with I/O routines into the user code, to the eventual collection of pre-compiled routines and the use of linker and librarian programs to combine them with each user's object code.

In the described system, the I/O routines and the loader program represent a rudimentary form of an operating system. It still provides an environment for execution of programs far beyond what is available on the bare machine. Language translators, editors, and debuggers are systems programs that rely on the services of, but are not generally regarded as part of, the operating system. For example, the language translator would normally use the provided I/O routines to obtain its input (the source code) and to produce the output.

Although a definite improvement over the bare-machine approach, this mode of operation is obviously not very efficient. Running of the computer system may require frequent manual loading of programs and data, which results in low utilization of system resources. User productivity, especially in multi-user environments, is low as users wait their turn at the machine. Even with such tools as editors and debuggers, program development is very slow and is ridden with manual program and data loading.

3.3.2 Batch Processing

The next logical step in the evolution of operating systems was to automate the sequencing of operations involved in program execution and in the mechanical aspects of program development. The intent was to increase system resource utilization and programmer productivity by reducing or eliminating component idle times caused by comparatively lengthy manual operations.

Even when automated, housekeeping operations such as mounting of tapes and filling out log forms take a long time relative to processor and memory speeds. Since there is not much that can be done to reduce these operations, system performance may be increased by dividing this overhead among a number of programs. More specifically, if several programs are "batched" together on a single input tape for which housekeeping operations are performed only once, the overhead per program is reduced accordingly. A related concept, sometimes called phasing, is to prearrange submitted jobs so that similar ones are placed in the same batch.

To realize the resource-utilization potential of batch processing, a mounted batch of jobs must be executed automatically, without slow human intervention. To this end, some means must be provided to instruct the operating system how to treat each individual job. These instructions are usually supplied by means of operating-system commands embedded in the batch stream. Operating-system commands are statements written in a Job Control Language (JCL).

A memory-resident portion of the batch operating system, sometimes called the batch monitor, reads, interprets, and executes these commands. In response to them, batch jobs are executed one at a time. A job may consist of several steps, each of which usually involves loading and execution of a program. For example, a job may consist of compilation and subsequent execution of a user program. Each particular step to be performed is indicated to the monitor by means of the appropriate command. When a JOB-END command is encountered, the monitor may look for another job, which may be identified by a JOB-START command.

By reducing or eliminating component idle time due to slow manual operations, batch processing offers a greater potential for increased system resource utilization and throughput than simple serial processing, especially in computer systems that serve multiple users. As far as program development is concerned, batch is not a great improvement over simple serial processing. The turnaround time, measured from the time a job is submitted until its output is received, may be quite long in batch systems. Phasing may further increase the turnaround time by introducing additional waiting for a complete batch of the given kind to be assembled. Moreover, programmers are forced to debug their programs offline using post-mortem memory dumps, as opposed to being able to examine the state of the machine immediately upon detection of a failure. Although this may enforce use of better programming disciplines, it is difficult to reconstruct the state of the system just on the basis of an after-the-fact memory snapshot.

With sequencing of program execution mostly automated by batch operating systems, the speed discrepancy between fast processors and comparatively slow I/O devices, such as card readers and printers, emerged as a major performance bottleneck. Further improvements in batch processing were mostly along the lines of increasing the throughput and resource utilization by overlapping input and output operations.

Many single-user operating systems for personal computers basically provide for serial processing. User programs are commonly loaded into memory and executed in response to user commands typed on the console. A file management system is often provided for program and data storage. A form of batch processing is made possible by means of files consisting of commands to the operating system that are executed in sequence. Command files are primarily used to automate complicated customization and operational sequences of frequent operations.
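The batch monitor's command loop can be sketched in a few lines. The JOB-START and JOB-END commands follow the text above; the RUN step and the stream format are illustrative assumptions, not a real JCL.

```python
def batch_monitor(stream):
    """Toy batch monitor: reads JCL-like commands, runs jobs one at a time."""
    log, job = [], None
    for line in stream:
        cmd, _, arg = line.partition(" ")
        if cmd == "JOB-START":
            job = arg                         # a new job begins
        elif cmd == "RUN" and job:
            log.append(f"{job}: ran {arg}")   # one job step (load and execute)
        elif cmd == "JOB-END":
            job = None                        # look for the next job

    return log

stream = ["JOB-START payroll", "RUN compile", "RUN execute", "JOB-END",
          "JOB-START forecast", "RUN execute", "JOB-END"]
print(batch_monitor(stream))
# -> ['payroll: ran compile', 'payroll: ran execute', 'forecast: ran execute']
```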

3.3.3 Multiprogramming

Early computers ran one process at a time. While the process waited for servicing by another device, the CPU was idle; in an I/O-intensive process, the CPU could be idle as much as 80% of the time. Advancements in operating systems led to computers that load several independent processes into memory and switch the CPU from one job to another when the first becomes blocked while waiting for servicing by another device. This idea of multiprogramming reduces the idle time of the CPU and accelerates the throughput of the system by using CPU time efficiently. In multiprogramming, many processes are simultaneously resident in memory, and execution switches between processes. Multiprogramming is thus a rudimentary form of parallel processing in which several programs are run at the same time on a uniprocessor. Since there is only one processor, there can be no true simultaneous execution of different programs; instead, the operating system executes part of one program, then part of another, and so on. To the user it appears that all programs are executing at the same time. The advantages of multiprogramming are the same as the commonsense reasons that in life you do not always wait until one thing has finished before starting the next thing. Specifically:

 More efficient use of computer time. If the computer is running a single process, and the process does a lot of I/O, then the CPU is idle most of the time. Multiprogramming is a gain as long as some of the jobs are I/O bound, i.e., spend most of their time waiting for I/O.

 Faster turnaround if there are jobs of different lengths. The first consideration applies only if some jobs are I/O bound; this one applies even if all jobs are CPU bound. For instance, suppose that first job A, which takes an hour, starts to run, and then immediately afterward job B, which takes 1 minute, is submitted. If the computer has to wait until it finishes A before it starts B, then user A must wait an hour and user B must wait 61 minutes, so the average waiting time is 60.5 minutes. If the computer can switch back and forth between A and B until B is complete, then B will complete after 2 minutes and A after 61 minutes, so the average waiting time will be 31.5 minutes. If all jobs are CPU bound and of the same length, then there is no advantage in multiprogramming; you do better to run a batch system.

The multiprogramming environment is supposed to be invisible to the user processes; that is, the actions carried out by each process should proceed in the same way as if the process had the entire machine to itself.
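The arithmetic in the turnaround example above can be checked directly:

```python
# Job A takes 60 minutes, job B takes 1 minute, submitted back to back.

# Serial: B must wait for all of A to finish.
serial_A, serial_B = 60, 60 + 1
print((serial_A + serial_B) / 2)   # average waiting time, serial -> 60.5

# Interleaved: B finishes after about 2 minutes, A after 61.
multi_A, multi_B = 61, 2
print((multi_A + multi_B) / 2)     # average waiting time, multiprogrammed -> 31.5
```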

This raises the following issues:

 Process model: The state of an inactive process has to be encoded and saved in a process table so that the process can be resumed when made active.

 Context switching: How does one carry out the change from one process to another?

 Memory translation: Each process treats the computer's memory as its own private playground. How can we give each process the illusion that it can reference addresses in memory as it wants, without the processes stepping on each other's toes? The trick is to distinguish between virtual addresses (the addresses used in the process code) and physical addresses (the actual addresses in memory). Each process is actually given a fraction of physical memory, and the memory management unit translates each virtual address in the code to a physical address within that process's space. This translation is invisible to the process.

 Memory management: How does the Operating System assign sections of physical memory to each process?

 Scheduling: How does the Operating System choose which process to run, and when?
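The memory-translation point can be sketched with the simplest possible scheme, a base register plus a limit check. Real MMUs use paging or segmentation, so this is only an illustration of the idea:

```python
def translate(base, limit, vaddr):
    """Relocate a virtual address; the 'MMU' also checks the process bounds."""
    if not 0 <= vaddr < limit:
        raise MemoryError("address outside this process's space")
    return base + vaddr   # the physical address

# Two processes, each believing its memory starts at address 0:
print(translate(1000, 500, 42))   # process 1 -> 1042
print(translate(4000, 500, 42))   # process 2 -> 4042, no collision
```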

Let us briefly review some aspects of program behavior in order to motivate the basic idea of multiprogramming. This is illustrated in Figure 1.2, where gray boxes indicate that the CPU is in use for computation and white boxes indicate that the CPU is idle due to some I/O activity. Idealized serial execution of two programs, with no inter-program idle times, is depicted in Figure 1.2. For comparison purposes, both programs are assumed to have identical behavior with regard to processor and I/O times and their relative distributions. As Figure 1.2 suggests, serial execution of programs causes either the processor or the I/O devices to be idle at some time even if the input job stream is never empty. One way to attack this problem is to assign some other work to the processor and I/O devices when they would otherwise be idling.

[Program 1 runs to completion, followed by Program 2]

Figure 1.2: Serial Execution of Programs

Figure 1.3 illustrates a possible scenario of concurrent execution of the two programs introduced in Figure 1.2. It starts with the processor executing the first computational sequence of Program 1. Instead of idling during the subsequent I/O sequence of Program 1, the processor is assigned to the first computational sequence of Program 2, which is assumed to be in memory and awaiting execution. When this work is done, the processor is assigned to Program 1 again, then to Program 2, and so forth.

[CPU activity over time alternates between the programs: P1, P2, P1, P2, P1]

Figure 1.3: Multi-programmed executions


As Figures 1.2 and 1.3 suggest, significant performance gains may be achieved by interleaved execution of programs, or multiprogramming, as this mode of operation is usually called. With a single processor, parallel execution of programs is not possible, and at most one program can be in control of the processor at any time. The example presented in Figure 1.3 achieves 100% processor utilization with only two active programs. The number of programs actively competing for resources of a multiprogrammed computer system is called the degree of multiprogramming. In principle, higher degrees of multiprogramming should result in higher resource utilization. Time-sharing systems found in many university computer centers provide a typical example of a multiprogramming system.
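The utilization claim for Figure 1.3 can be checked with rough arithmetic for two identical programs with equal CPU and I/O bursts, an idealized case that ignores the final unoverlapped I/O tail:

```python
cpu, io = 10, 10                  # minutes of CPU work and of I/O per program

# Serial: the programs run end to end; the CPU is busy only during CPU bursts.
serial_elapsed = 2 * (cpu + io)
print(2 * cpu / serial_elapsed)   # utilization -> 0.5 (50%)

# Interleaved: while one program waits on I/O the other computes, so
# (with io <= cpu) the CPU stays busy for the whole run.
multi_elapsed = 2 * cpu
print(2 * cpu / multi_elapsed)    # utilization -> 1.0 (100%)
```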

3.4 TYPES OF OPERATING SYSTEMS

Operating systems can be classified on the basis of many criteria, viz. the number of simultaneously active programs, the number of users working at the same time, the number of processors in the computer system, etc. In the following discussion, several types of operating systems are described.
3.4.1 Batch Operating System

As mentioned earlier, execution typically requires the program, data, and applicable system commands to be submitted together in the form of a job. Batch operating systems usually permit little or no interaction between users and executing programs. Batch processing has a greater potential for resource utilization than serial processing in computer systems serving multiple users. Because of turnaround delays and offline debugging, batch is not very convenient for program development.

Programs that do not need interaction and programs with long execution times may be served well by a batch operating system. Payroll, forecasting, statistical analysis, and

large scientific number-crunching programs are examples of such programs.

Scheduling in batch systems is very straightforward. Jobs are generally processed in the order of their submission, that is, in first-come first-served fashion.
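The first-come first-served discipline can be sketched as a simple simulation (the function name and job data are illustrative, not taken from any particular batch monitor):

```c
/* First-come first-served batch scheduling: jobs run to completion in
 * submission order. Given each job's run time, compute its completion
 * time; when all jobs are submitted together at time 0, the completion
 * time is also the turnaround time. */
void fcfs_completion(const int run_time[], int completion[], int n)
{
    int clock = 0;
    for (int i = 0; i < n; i++) {
        clock += run_time[i];     /* job i runs to completion */
        completion[i] = clock;    /* no preemption, no reordering */
    }
}
```

With run times of 5, 3, and 8 units submitted in that order, the jobs complete at times 5, 8, and 16 respectively.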

Memory management in batch systems is also very straightforward. Memory is usually divided into two areas. The resident portion of the operating system permanently occupies one of them, and the other is used to load transient programs for execution. Once a transient program terminates, a new program is loaded into the same area of memory.

Figure 1.4: Batch processing systems

Since at most one program is in execution at any time, batch systems do not need any time-critical device management. For this reason, many serial and batch operating systems use the simple, program-controlled method of I/O described later. The lack of contention for I/O devices makes their allocation and de-allocation trivial.

Batch systems typically provide straightforward forms of file management. Since access to files is also serial, little protection and no concurrency control of file access is needed.

3.4.2 Multiprogramming Operating System

A multiprogramming operating system is one that allows end-users to run more than one

program at a time. The development of such a system, the first type to allow this

functionality, was a major step in the development of sophisticated computers. The

technology works by allowing the central processing unit (CPU) of a computer to switch

between two or more running tasks when the CPU is idle. A multiprogramming system

permits multiple programs to be loaded into memory and executed concurrently. Concurrent execution of programs has a significant potential for improving

system throughput and resource utilization relative to batch and serial processing. This

potential is realized by a class of operating systems that multiplex resources of a

computer system among a multitude of active programs. Such operating systems usually

have the prefix multi in their names, such as multitasking or multiprogramming as shown

in figure 1.5.

Figure 1.5: Memory layout in Multiprogramming

3.4.3 Multitasking Operating System

It allows more than one program to run concurrently. The ability to execute more than

one task at the same time is called multitasking. An instance of a program in execution

is called a process or a task. A multitasking Operating System is distinguished by its

ability to support concurrent execution of two or more active processes. Multitasking is

usually implemented by maintaining code and data of several processes in memory

simultaneously, and by multiplexing processor and I/O devices among them.

Multitasking is often coupled with hardware and software support for memory protection

in order to prevent erroneous processes from corrupting address spaces and behavior of

other resident processes. The terms multitasking and multiprocessing are often used

interchangeably, although multiprocessing sometimes implies that more than one CPU is

involved. In multitasking, only one CPU is involved, but it switches from one program to

another so quickly that it gives the appearance of executing all of the programs at the

same time. There are two basic types of multitasking: preemptive and cooperative. In

preemptive multitasking, the Operating System parcels out CPU time slices to each

program. In cooperative multitasking, each program can control the CPU for as long as it

needs it. If a program is not using the CPU, however, it can allow another program to use

it temporarily. OS/2, Windows, and UNIX use preemptive multitasking, whereas

Microsoft Windows 3.x uses cooperative multitasking.

3.4.4 Multi-user Operating System

Multiprogramming operating systems usually support multiple users, in which case they

are also called multi-user systems. Multi-user operating systems provide facilities for

maintenance of individual user environments and therefore require user accounting. In

general, multiprogramming implies multitasking, but multitasking does not imply multi-

programming. In effect, multitasking operation is one of the mechanisms that a

multiprogramming Operating System employs in managing the totality of computer-

system resources, including processor, memory, and I/O devices. Multitasking operation

without multi-user support can be found in operating systems of some advanced personal

computers and in real-time systems. Multi-access operating systems allow simultaneous

access to a computer system through two or more terminals. In general, multi-access

operation does not necessarily imply multiprogramming. An example is provided by

some dedicated transaction-processing systems, such as airline ticket reservation systems,

that support hundreds of active terminals under control of a single program.

In general, the multiprocessing or multiprocessor operating systems manage the operation

of computer systems that incorporate multiple processors. Multiprocessor operating

systems are multitasking operating systems by definition because they support

simultaneous execution of multiple tasks (processes) on different processors. Depending

on implementation, multitasking may or may not be allowed on individual processors.

Except for management and scheduling of multiple processors, multiprocessor operating

systems provide the usual complement of other system services that may qualify them as

time-sharing, real-time, or a combination operating system.

3.4.5 Multithreading

Multithreading allows different parts of a single program to run concurrently. The

programmer must carefully design the program in such a way that all the threads can run

at the same time without interfering with each other.
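On systems that provide POSIX threads, the kind of coordination described above might be sketched as follows (the helper function and its constants are this sketch's own; a mutex is one common way to keep threads from interfering with each other):

```c
#include <pthread.h>

#define NTHREADS 4
#define NITER 1000

/* Shared counter protected by a mutex so that concurrently running
 * threads of the same program do not interfere with each other. */
static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < NITER; i++) {
        pthread_mutex_lock(&lock);    /* enter critical section */
        counter++;
        pthread_mutex_unlock(&lock);  /* leave critical section */
    }
    return NULL;
}

/* Run NTHREADS concurrent parts of one program and return the final
 * value of the shared counter. */
long run_threads(void)
{
    pthread_t tid[NTHREADS];
    counter = 0;
    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&tid[i], NULL, worker, NULL);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(tid[i], NULL);   /* wait for all threads */
    return counter;
}
```

Each of the four threads increments the shared counter 1000 times; because every increment is protected by the mutex, the final value is always 4000.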

3.4.6 Time-sharing system

Time-sharing is a popular representative of multiprogrammed, multi-user systems. In addition to general program-development environments, many computer-aided design (CAD) and text-processing systems belong to this type. One of the primary objectives of multi-user systems in general, and time-sharing in particular, is good terminal response time. Giving each user the illusion of having a machine to oneself, time-sharing systems typically attempt to provide equitable sharing of common resources. For example, when the system is loaded, users with more demanding processing needs are made to wait longer.

This philosophy is reflected in the selection of scheduling algorithm. Most time-sharing

systems use time-slicing (round robin) scheduling. In this approach, programs are

executed with rotating priority that increases during waiting and decreases after the

service is granted. In order to prevent programs from monopolizing the processor, a program

executing longer than the system-defined time slice is interrupted by the operating system

and placed at the end of the queue of waiting programs. This mode of operation typically

provides fast response time to interactive programs.
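The time-slicing behaviour described above can be sketched as a small simulation (a hypothetical illustration; the array-based circular queue assumes at most MAXJOBS programs):

```c
#define MAXJOBS 8

/* Round-robin time slicing: each ready program runs for at most one
 * quantum, is preempted by the operating system, and rejoins the end
 * of the queue of waiting programs. Records the order in which
 * programs finish and returns the total number of time slices
 * dispatched. Assumes n <= MAXJOBS. */
int round_robin(int remaining[], int n, int quantum, int finish_order[])
{
    int queue[MAXJOBS];
    int head = 0, tail = 0, done = 0, dispatches = 0;
    for (int i = 0; i < n; i++)
        queue[tail++ % MAXJOBS] = i;          /* all programs ready */
    while (done < n) {
        int j = queue[head++ % MAXJOBS];      /* next program in queue */
        int slice = remaining[j] < quantum ? remaining[j] : quantum;
        remaining[j] -= slice;                /* run for one time slice */
        dispatches++;
        if (remaining[j] == 0)
            finish_order[done++] = j;         /* program j has finished */
        else
            queue[tail++ % MAXJOBS] = j;      /* preempted: back of queue */
    }
    return dispatches;
}
```

For example, with remaining run times of 3, 5, and 2 units and a quantum of 2, the simulation dispatches six time slices and the programs finish in the order 2, 0, 1.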

Memory management in time-sharing systems provides for separation and protection of

co-resident programs. Some forms of controlled sharing are usually provided to conserve memory and possibly to exchange data between programs. Being executed on behalf of different users, programs in time-sharing systems generally do not need much communication with one another.

I/O management in time-sharing systems must be sophisticated enough to cope with multiple users and devices. However, due to the comparatively slow speeds of terminals and human users, processing of terminal interrupts need not be time-critical.

environments, allocation and de-allocation of devices should be done in a way that

preserves system integrity and provides good performance. Given the possibility of simultaneous and possibly conflicting attempts to access files, file management in a time-sharing system must provide protection and access control. This task is often complicated by the need for files to be shared among certain users or classes of users.

3.4.7 Real-time systems

Real-time operating systems are used in environments where a large number of events, mostly external to the computer system, must be accepted and processed in a short time or within certain deadlines. Such applications include industrial control, telephone switching equipment, flight control, and real-time simulations. Real-time systems are also commonly employed in military applications.

A primary objective of real-time systems is to provide fast event-response times, and thus meet scheduling deadlines. User convenience and resource utilization are of secondary concern to real-time system designers. It is not unusual for a real-time system to be expected to process bursts of thousands of interrupts per second without missing a single event. Such requirements usually cannot be met by multiprogramming alone, and real-time operating systems usually rely on some specific policies and techniques for doing their job.

Explicit, programmer-defined and controlled processes are usually encountered in real

time systems. Basically, a separate process is charged with handling one external event.

The process is activated upon occurrence of the related event that is commonly signaled

by an interrupt. Multitasking operation is accomplished by scheduling processes for execution independently of each other. Each process is assigned a certain level of priority that corresponds to the relative importance of the event that it services. The processor is normally allocated to the highest-priority process among those that are ready to execute. Higher-priority processes usually preempt execution of the lower-priority processes. This

form of scheduling, referred to as priority-based preemptive scheduling, is used by a

majority of real-time systems.
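The dispatcher's selection rule under priority-based preemptive scheduling can be sketched as follows (a simplified illustration in which a larger number means higher priority; real systems differ in convention and use more efficient ready-queue structures):

```c
/* Priority-based preemptive scheduling: among the processes that are
 * currently ready, always select the one with the highest priority.
 * Returns the index of the process to run, or -1 if none is ready.
 * In this sketch a larger priority value means higher priority. */
int select_process(const int priority[], const int ready[], int n)
{
    int best = -1;
    for (int i = 0; i < n; i++)
        if (ready[i] && (best < 0 || priority[i] > priority[best]))
            best = i;           /* a higher-priority ready process wins */
    return best;
}
```

Invoked at every scheduling decision, including the moment a new process becomes ready, this rule causes a newly ready higher-priority process to preempt a lower-priority running one.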

Memory management in real-time systems is relatively less demanding than in other types of multiprogramming systems. The primary reason for this is that many processes permanently reside in memory in order to provide fast response time. Unlike, say, time-sharing, the process population in real-time systems is fairly static, and there is relatively little moving of programs between primary and secondary storage. On the other hand, processes in real-time systems tend to cooperate closely, thus necessitating support for both separation and sharing of memory.

As already suggested, time-critical device management is one of the main characteristics of real-time systems. In addition to providing sophisticated forms of interrupt management and I/O buffering, real-time operating systems typically provide system calls to permit user processes (programs) to connect themselves to interrupt vectors and to service events directly.

File management is usually found only in larger installations of real-time systems. In fact, some embedded real-time systems, such as an onboard automotive controller, may not even have any secondary storage. However, where provided, file management of real-time systems must satisfy much the same requirements as found in time-sharing and other multiprogramming systems. These include protection and access control. The primary objective of file management in real-time systems is usually speed of access, rather than efficient utilization of secondary storage.

3.4.8 Combination of operating systems

Different kinds of operating systems are optimized for, or at least largely geared toward, serving the requirements of specific environments. In practice, however, a given environment may not precisely match any of the described molds. For example, both interactive program development and lengthy simulations are often encountered in university computing centers.

For this reason, some commercial operating systems provide a combination of described

services. For example, a time-sharing system may support interactive users and also

incorporate a full-fledged batch monitor. This allows computationally intensive non-

interactive programs to be run concurrently with interactive programs. The common

practice is to assign low priority to batch jobs and thus execute batched programs only

when the processor would otherwise be idle. In other words, batch may be used as a filler to improve processor utilization while accomplishing a useful service of its own. Similarly, some time-critical events, such as receipt and transmission of network data packets, may also be handled in real-time fashion on systems that otherwise provide time-sharing services to their terminal users.

3.5 Distributed Operating Systems

A distributed computer system is a collection of autonomous computer systems capable

of communication and cooperation via their hardware and software interconnections.

Historically, distributed computer systems evolved from computer networks in which a

number of largely independent hosts are connected by communication links and

protocols.

A distributed operating system governs the operation of a distributed computer system

and provides a virtual machine abstraction to its users. The key objective of a distributed

operating system is transparency. Ideally, component and resource distribution should be

hidden from users and application programs unless they explicitly demand otherwise.

Distributed operating systems usually provide the means for system-wide sharing of

resources, such as computational capacity, files, and I/O devices. In addition to typical

operating-system services provided at each node for the benefit of local clients, a

distributed operating system may facilitate access to remote resources, communication

with remote processes, and distribution of computations. The added services necessary

for pooling of shared system resources include global naming, distributed file system, and

facilities for distribution.

4. SUMMARY

Following the course of the conceptual evolution of operating systems, we have

identified the main characteristics of the program-execution and development

environments provided by the bare machine, serial processing, including batch and

multiprogramming.

On the basis of their attributes and design objectives, different types of operating systems

were defined and characterized with respect to scheduling and management of memory,

devices, and files. The primary concerns of a time-sharing system are equitable sharing of

resources and responsiveness to interactive requests. Real-time operating systems are

mostly concerned with responsive handling of external events generated by the controlled

system. Distributed operating systems provide facilities for global naming and accessing

of resources, for resource migration, and for distribution of computation.

Typical services provided by an operating system to its users were presented from the

point of view of command-language users and system-call users. In general, system calls

provide functions similar to those of the command language but allow finer gradation of

control.

5. SUGGESTED READINGS / REFERENCE MATERIAL

 Operating System Concepts, 5th Edition, Silberschatz A., Galvin P.B., John Wiley

& Sons.

 Systems Programming & Operating Systems, 2nd Revised Edition, Dhamdhere

D.M., Tata McGraw Hill Publishing Company Ltd., New Delhi.

 Operating Systems, Madnick S.E., Donovan J.T., Tata McGraw Hill Publishing

Company Ltd., New Delhi.

 Operating Systems-A Modern Perspective, Gary Nutt, Pearson Education Asia,

2000.

 Operating Systems, Harris J.A., Tata McGraw Hill Publishing Company Ltd., New

Delhi, 2002.

 Lecture notes of Directorate of Distance Education, K.U.Kurukshetra.

6. SELF ASSESSMENT QUESTIONS (SAQ)

 What are objectives of an operating system? Discuss.

 Discuss historical evolution of operating systems.

 Explain various functions and characteristics of an operating system.

 What is an extended machine view of an operating system?

 Discuss the role of operating system as a resource allocator.

 Discuss whether there are any advantages of using a multitasking operating

system, as opposed to a serial processing one.

 Discuss various types of operating systems.

CS-DE-15

OPERATING SYSTEM STRUCTURES

LESSON NO. 2

Writer: Harvinder Singh

Vetter: Dr. Pardeep Kumar

STRUCTURE

1. Introduction

2. Objective

3. Presentation of Contents

3.1 Operating System Services

3.1.1 Program execution

3.1.2 I/O operations

3.1.3 File System manipulation

3.1.4 Communication

3.1.5 Error Detection

3.1.6 Resource Allocation

3.1.7 Protection

3.1.8 User Interface

3.2 System Calls

3.2.1 Types of System Calls

3.2.2 Examples of System Calls

3.3 System Programs

4. Summary

5. Suggested Readings / Reference Material

6. Self Assessment Questions (SAQ)

1. INTRODUCTION

An operating system provides the environment within which various types of programs are executed. The design of a new operating system is an important undertaking in computer science. It is

essential that the goals of the system be well defined before the design starts. One view of operating

system focuses on the services that the system provides. In this lesson, we look at various aspects of

operating systems. We consider what services an operating system provides and how they are

provided.

2. OBJECTIVES

In this lesson, the services provided by an operating system to users, processes, and other systems

have been explained. The concept of operating system call and its different types has been

described. The concept and use of system programs has also been elaborated.

3. PRESENTATION OF CONTENTS

3.1 Operating-System Services

An operating system provides certain services to programs and to the users of those programs. The services provided differ from one operating system to another. These services are provided for the convenience of the programmer, to make the programming job easier. The services provided by an operating system are as follows:

3.1.1 Program execution

Operating system handles many kinds of activities from user programs to system programs like

printer spooler, name servers, file server etc. Each of these activities is encapsulated as a process.

A process includes the complete execution context (code to execute, data to manipulate,

registers, OS resources in use). Following are the major activities of an operating system with

respect to program management:

 Loads a program into memory.

 Executes the program.

 Handles program's execution.

 Provides a mechanism for process synchronization.

 Provides a mechanism for process communication.

 Provides a mechanism for deadlock handling.

3.1.2 I/O Operation

I/O subsystem is comprised of I/O devices and their corresponding driver software. Drivers hide

the peculiarities of specific hardware devices from the user as the device driver knows the

peculiarities of the specific device. Operating System manages the communication between user

and device drivers. Following are the major activities of an operating system with respect to I/O

Operation:

 I/O operation means read or write operation with any file or any specific I/O device.

 Program may require any I/O device while running.

 Operating system provides the access to the required I/O device when required.

3.1.3 File system manipulation

A file represents a collection of related information. Computer can store files on the disk

(secondary storage), for long term storage purpose. Few examples of storage media are magnetic

tape, magnetic disk and optical disk drives like CD, DVD. Each of these media has its own

properties like speed, capacity, and data transfer rate and data access methods. A file system is

normally organized into directories for easy navigation and usage. These directories may contain files and other directories. Following are the major activities of an operating system with respect

to file management:

 Program needs to read a file or write a file.

 The operating system gives the permission to the program for operation on file.

 Permission varies from read-only, read-write, denied and so on.

 Operating System provides an interface to the user to create/delete files.

 Operating System provides an interface to the user to create/delete directories.

 Operating System provides an interface to create the backup of file system.

3.1.4 Communication

In case of distributed systems, which are a collection of processors that do not share memory,

peripheral devices, or a clock, operating system manages communications between processes.

Multiple processes communicate with one another through communication lines in the network.

OS handles routing and connection strategies, and the problems of contention and security.

Following are the major activities of an operating system with respect to communication:

 Two processes often require data to be transferred between them.

 Both processes can be on the same computer or on different computers connected through a computer network.

 Communication may be implemented by two methods either by Shared Memory

or by Message Passing.
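Message passing between related processes can be illustrated with a POSIX pipe (the helper function is this sketch's own; pipe(), fork(), read(), and write() are standard POSIX calls):

```c
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Message passing between two related processes through a pipe: the
 * child writes a message into one end, the parent reads it from the
 * other. Returns the number of bytes received, or -1 on error. */
ssize_t pipe_message(char *buf, size_t len)
{
    int fd[2];
    if (pipe(fd) == -1)
        return -1;
    pid_t pid = fork();
    if (pid == 0) {                          /* child: sender */
        close(fd[0]);                        /* close unused read end */
        const char *msg = "hello via pipe";
        write(fd[1], msg, strlen(msg));
        close(fd[1]);
        _exit(0);
    }
    close(fd[1]);                            /* parent: receiver */
    ssize_t n = read(fd[0], buf, len);       /* blocks until data arrives */
    close(fd[0]);
    waitpid(pid, NULL, 0);
    return n;
}
```

The child writes the message into one end of the pipe and the parent reads it from the other; the pipe thus serves as the communication line between the two processes.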

3.1.5 Error handling

Errors can occur anytime and anywhere. An error may occur in the CPU, in I/O devices, or in the

memory hardware. Following are the major activities of an operating system with respect to error

handling:

 OS constantly remains aware of possible errors.

 OS takes the appropriate action to ensure correct and consistent computing.

3.1.6 Resource Management

In case of multi-user or multi-tasking environment, resources such as main memory, CPU cycles

and file storage are to be allocated to each user or job. Following are the major activities of an

operating system with respect to resource management:

 OS manages all kind of resources using schedulers.

 CPU scheduling algorithms are used for better utilization of CPU.

3.1.7 Protection

Considering a computer system having multiple users and concurrent execution of multiple processes, the various processes must be protected from one another's activities.

Protection refers to mechanism or a way to control the access of programs, processes, or users to

the resources defined by computer systems. Following are the major activities of an operating

system with respect to protection.

 OS ensures that all access to system resources is controlled.

 OS ensures that external I/O devices are protected from invalid access attempts.

 OS provides authentication feature for each user by means of a password.

3.1.8 User interface

Almost all operating systems have a user interface (UI). This interface can take several forms. One

is a command-line interface (CLI), which uses text commands and a method for entering them (say,

a program to allow entering and editing of commands). Another is a batch interface, in which

commands and directives to control those commands are entered into files, and those files are

executed. Most commonly a graphical user interface (GUI) is used. Here, the interface is a window

system with a pointing device to direct I/O, choose from menus, and make selections and a keyboard

to enter text.

3.2 System Calls

System calls allow user-level processes to request services from the operating system that the process itself is not allowed to perform. In handling the resulting trap, the operating system enters kernel mode, where it has access to privileged instructions, and can perform the desired service on behalf of the user-level process. It is because of the critical nature of these operations that the operating

system itself does them every time they are needed. For example, to perform I/O a process involves a system call telling the operating system to read or write a particular area, and this request is satisfied

by the operating system. These calls are generally available as routines written in C and C++. Before

we discuss how an operating system makes system calls available, let’s first use an example to

illustrate how system calls are used: writing a simple program to read data from one file and copy

them to another file. The first input that the program will need is the names of the two files: the

input file and the output file. These names can be specified in many ways, depending on the

operating system design. One approach is for the program to ask the user for the names of the two

files. In an interactive system, this approach will require a sequence of system calls, first to write a

prompting message on the screen and then to read from the keyboard the characters that define the

two files. On mouse-based and icon-based systems, a menu of file names is usually displayed in a

window. The user can then use the mouse to select the source name, and a window can be opened

for the destination name to be specified. This sequence requires many I/O system calls.

Once the two file names are obtained, the program must open the input file and create the output

file. Each of these operations requires another system call. There are also possible error conditions

for each operation. When the program tries to open the input file, it may find that there is no file of

that name or that the file is protected against access. In these cases, the program should print a

message on the console (another sequence of system calls) and then terminate abnormally (another

system call). If the input file exists, then we must create a new output file. We may find that there is

already an output file with the same name. This situation may cause the program to abort (a system

call), or we may delete the existing file (another system call) and create a new one (another system

call). Another option, in an interactive system, is to ask the user (via a sequence of system calls to

output the prompting message and to read the response from the terminal) whether to replace the

existing file or to abort the program. Now that both files are set up, we enter a loop that reads from

the input file (a system call) and writes to the output file (another system call). Each read and write

must return status information regarding various possible error conditions. On input, the program

may find that the end of the file has been reached or that there was a hardware failure in the read

(such as a parity error). The write operation may encounter various errors, depending on the output device (no more disk space, printer out of paper, and so on). Finally, after the entire file is copied, the program may close both files (another system call), write a message to the console or window (more

system calls), and finally terminate normally (the final system call). As we can see, even simple

programs may make heavy use of the operating system. Frequently, systems execute thousands of

system calls per second. This system calls sequence is shown in Figure 2.1.

Source file Destination file


Example System Call Sequence

Acquire input file name

Write prompt to screen

Accept input

Acquire output file name

Write prompt to screen

Accept input

Open the input file

if file doesn't exist, abort

Create output file

if file exists, abort

Read from input file

Write to output file

Close output file

Write completion message to screen

Terminate normally

Figure 2.1 Example of how system calls are used
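The sequence of Figure 2.1 can be sketched with the raw POSIX system-call interface (open, read, write, close); the wrapper function is this sketch's own, and error handling is reduced to the abort cases shown in the figure:

```c
#include <fcntl.h>
#include <unistd.h>

/* Copy src to dst using the POSIX system-call interface, following the
 * sequence of Figure 2.1: open the input file (abort if it does not
 * exist), create the output file, then loop reading from the input and
 * writing to the output until end of file, and close both files.
 * Returns the number of bytes copied, or -1 on error. */
long copy_file(const char *src, const char *dst)
{
    char buf[4096];
    long total = 0;
    ssize_t n;

    int in = open(src, O_RDONLY);                  /* open input file */
    if (in == -1)
        return -1;                                 /* file missing: abort */
    int out = open(dst, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (out == -1) {                               /* create output file */
        close(in);
        return -1;
    }
    while ((n = read(in, buf, sizeof buf)) > 0) {  /* read system call */
        write(out, buf, n);                        /* write system call */
        total += n;
    }
    close(in);                                     /* close both files */
    close(out);
    return total;
}
```

Each call into open(), read(), write(), and close() crosses into the operating system, which is why even this simple program makes heavy use of system calls.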

3.2.1 Types of System Calls

System calls can be grouped roughly into five major categories: process control, file manipulation,

device manipulation, information maintenance, and communication. Figure 2.2 summarizes the

types of system calls normally provided by an operating system.

Process control

 end, abort

 load, execute

 create process, terminate process

 get process attributes, set process attributes

 wait for time

 wait event, signal event

 allocate and free memory

File management

 create file, delete file

 open, close

 read, write, reposition

 get file attributes, set file attributes

Device management

 request device, release device

 read, write, reposition

 get device attributes, set device attributes

 logically attach or detach devices

Information maintenance

 get time or date, set time or date

 get system data, set system data

 get process, file, or device attributes

 set process, file, or device attributes

Communications

 create, delete communication connection

 send, receive messages

 transfer status information

 attach or detach remote devices

Figure 2.2 Types of system calls.

 Process Control: A running program needs to be able to halt its execution either normally

(end) or abnormally (abort). If a system call is made to terminate the currently running program

abnormally, or if the program runs into a problem and causes an error trap, a dump of memory is

sometimes taken and an error message generated. The dump is written to disk and may be

examined by a debugger (a system program designed to aid the programmer in finding and

correcting bugs) to determine the cause of the problem. Under either normal or abnormal

circumstances, the operating system must transfer control to the command interpreter. The

command interpreter then reads the next command. In an interactive system, the command

interpreter simply continues with the next command; it is assumed that the user will issue an

appropriate command to respond to any error. In a GUI system, a pop-up window might alert the

user to the error and ask for guidance. In a batch system, the command interpreter usually

terminates the entire job and continues with the next job. For example, the standard C library

provides a portion of the system-call interface for many versions of UNIX and Linux. As an

example, let us assume a C program invokes the printf() statement. The C library intercepts this

call and invokes the necessary system call(s) in the operating system - in this instance, the

write() system call. The C library takes the value returned by write() and passes it back to the

user program. This is shown in Figure 2.3. Some systems allow control cards to indicate special

recovery actions in case an error occurs. A control card is a batch system concept. It is a

command to manage the execution of a process. If the program discovers an error in its input and

wants to terminate abnormally, it may also want to define an error level.

#include <stdio.h>
int main()
{
    printf("hello");
    return 0;
}

[User mode: the program calls printf() in the standard C library, which invokes the write() system call; the call executes in kernel mode]

Figure 2.3 C library handling of write()

More severe errors can be indicated by a higher-level error parameter. It is then possible to combine

normal and abnormal termination by defining a normal termination as an error at level 0. The

command interpreter or a following program can use this error level to determine the next action

automatically.

A process or job executing one program may want to load and execute another program. This

feature allows the command interpreter to execute a program as directed by, for example, a user

command, the click of a mouse, or a batch command. An interesting question is where to return

control when the loaded program terminates. This question is related to the problem of whether the

existing program is lost, saved, or allowed to continue execution concurrently with the new

program. If control returns to the existing program when the new program terminates, we must save

the memory image of the existing program; thus, we have effectively created a mechanism for one

program to call another program. If both programs continue concurrently, we have created a new job

or process to be multi-programmed. Often, there is a system call specifically for this purpose.

If we create a new job or process, or perhaps even a set of jobs or processes, we should be able to

control its execution. This control requires the ability to determine and reset the attributes of a job or

process, including the job's priority, its maximum allowable execution time, and so on (get process

attributes and set process attributes). We may also want to terminate a job or process that we created

(terminate process) if we find that it is incorrect or is no longer needed. Having created new jobs or

processes, we may need to wait for them to finish their execution. We may want to wait for a certain

amount of time to pass (wait time); more probably, we will want to wait for a specific event to occur

(wait event). The jobs or processes should then signal when that event has occurred (signal event).

Another set of system calls is helpful in debugging a program. Many systems provide system calls to

dump memory. This provision is useful for debugging. A program trace lists each instruction as it is

executed; it is provided by fewer systems. Even microprocessors provide a CPU mode known as

single step, in which a trap is executed by the CPU after every instruction. The trap is usually caught

by a debugger. Many operating systems provide a time profile of a program to indicate the amount

of time that the program executes at a particular location or set of locations. A time profile requires

either a tracing facility or regular timer interrupts. At every occurrence of the timer interrupt, the value of the program counter is recorded.

 File Management: We identify several common system calls dealing with files; we first

need to be able to create and delete files. Either system call requires the name of the file and perhaps

some of the file's attributes. Once the file is created, we need to open it and to use it. We may also

read, write, or reposition (rewinding or skipping to the end of the file, for example). Finally, we need

to close the file, indicating that we are no longer using it. We may need these same sets of

operations for directories if we have a directory structure for organizing files in the file system. In

addition, for either files or directories, we need to be able to determine the values of various

attributes and perhaps to reset them if necessary. File attributes include the file name, a file type,

protection codes, accounting information, and so on. At least two system calls, get file attribute and

set file attribute, are required for this function. Some operating systems provide many more calls,

such as calls for file move and copy. Others might provide an API that performs those operations

using code and other system calls, and others might just provide system programs to perform those

tasks. If the system programs are callable by other programs, then each can be considered an API by

other system programs.

 Device Management: A process may need several resources to execute: main memory, disk

drives, access to files, and so on. If the resources are available, they can be granted, and control can

be returned to the user process. Otherwise, the process will have to wait until sufficient resources are

available. The various resources controlled by the operating system can be thought of as devices.

Some of these devices are physical devices (for example, tapes), while others can be thought of as

abstract or virtual devices (for example, files). If there are multiple users of the system, the system

may require us to first request the device, to ensure exclusive use of it. After we are finished with

the device, we release it. These functions are similar to the open and close system calls for files.

Other operating systems allow unmanaged access to devices. Once the device has been requested

(and allocated to us), we can read, write, and (possibly) reposition the device, just as we can with

files. In fact, the similarity between I/O devices and files is so great that many operating systems,

including UNIX, merge the two into a combined file-device structure. In this case, a set of system

calls is used on files and devices. Sometimes, I/O devices are identified by special file names,

directory placement, or file attributes. The UI can also make files and devices appear to be similar,

even though the underlying system calls are dissimilar. This is another example of the many design

decisions that go into building an operating system and user interface.

 Information Maintenance: Many system calls exist simply for the purpose of transferring

information between the user program and the operating system. For example, most systems have a system call to return the current time and date. Other system calls may return

information about the system, such as the number of current users, the version number of the

operating system, the amount of free memory or disk space, and so on. In addition, the operating

system keeps information about all its processes, and system calls are used to access this

information. Generally, calls are also used to reset the process information (get process attributes

and set process attributes).

 Communication: There are two common models of inter process communication: the

message passing model and the shared-memory model. In the message-passing model, the

communicating processes exchange messages with one another to transfer information. Messages

can be exchanged between the processes either directly or indirectly through a common mailbox.

Before communication can take place, a connection must be opened. The name of the other

communicator must be known, be it another process on the same system or a process on another

computer connected by a communications network. Each computer in a network has a host name by

which it is commonly known. A host also has a network identifier, such as an IP address. Similarly,

each process has a process name, and this name is translated into an identifier by which the

operating system can refer to the process. The get host id and get process id system calls do this

translation. The identifiers are then passed to the general purpose open and close calls provided by

the file system or to specific open connection and close connection system calls, depending on the

system's model of communication. The recipient process usually must give its permission for

communication to take place with an accept connection call. Most processes that will be receiving

connections are special-purpose daemons, which are systems programs provided for that purpose.

They execute a wait for connection call and are awakened when a connection is made. The source

of the communication, known as the client, and the receiving daemon, known as a server, then

exchange messages by using read message and write message system calls. The close connection

call terminates the communication. In the shared-memory model, processes use shared memory create and shared memory attach system calls to create and gain access to regions of memory

owned by other processes. Recall that, normally, the operating system tries to prevent one process

from accessing another process's memory. Shared memory requires that two or more processes

agree to remove this restriction. They can then exchange information by reading and writing data in

the shared areas. The form of the data and the location are determined by the processes and are not

under the operating system's control. The processes are also responsible for ensuring that they are

not writing to the same location simultaneously. Message passing is useful for exchanging smaller

amounts of data, because no conflicts need be avoided. It is also easier to implement than shared memory for inter-computer communication.

3.2.2 Examples of System Calls

System calls are kernel-level service routines for implementing basic operations performed by the operating system. Below are described some of the generic system calls that most operating systems provide.

CREATE (processID, attributes);

In response to the CREATE call, the operating system creates a new process with the

specified or default attributes and identifier. As pointed out earlier, a process cannot create itself,

because it would have to be running in order to invoke the OS, and it cannot run before being

created. So a process must be created by another process. In response to the CREATE call, the

operating system obtains a new PCB from the pool of free memory, fills the fields with provided

and/or default parameters, and inserts the PCB into the ready list, thus making the specified process

eligible to run. Some of the parameters definable at the process-creation time include:

Level of privilege, such as system or user

Priority

Size and memory requirements

Maximum data area and/or stack size

Memory protection information and access rights

Other system-dependent data

Typical error returns, implying that the process was not created as a result of this call, include:

wrongID (illegal, or process already active), no space for PCB (usually transient; the call may be retried later), and calling process not authorized to invoke this function. Ada uses the INITIATE

statement to create and activate one or more tasks (processes). When several tasks are created with a

single INITIATE statement, they are executed concurrently.

DELETE (process ID);

The DELETE service is also called DESTROY, TERMINATE, or EXIT. Its invocation

causes the OS to destroy the designated process and remove it from the system. A process may

delete itself or another process. The operating system reacts by reclaiming all resources allocated to

the specified process (attached I/O devices, memory), closing files opened by or for the process, and

performing whatever other housekeeping is necessary. Following this process, the PCB is removed

from its place of residence in the list and is returned to the free pool. This makes the designated

process dormant. The DELETE service is normally invoked as a part of orderly program

termination.

To relieve users of the burden and to enhance portability of programs across different

environments, many compilers compile the last END statement of a main program into a DELETE

system call.

Almost all multiprogramming operating systems allow processes to terminate themselves,

provided none of their spawned processes is active. Operating system designers differ in their

attitude toward allowing one process to terminate others. The issue here is one of convenience and

efficiency versus system integrity. Allowing uncontrolled use of this function provides a

malfunctioning or a malevolent process with the means of wiping out all other processes in the

system. On the other hand, terminating a hierarchy of processes in a strictly guarded system where

each process can only delete itself, and where the parent must wait for children to terminate first,

could be a lengthy operation indeed. The usual compromise is to permit deletion of other processes

but to restrict the range to the members of the family, to lower-priority processes only, or to some

other subclass of processes.

Possible error returns from the DELETE call include: a child of this process is active (should

terminate first), wrongID (the process does not exist), and calling process not authorized to invoke

this function.

ABORT (processID);

ABORT is a forced termination of a process. Although a process could conceivably abort

itself, the most frequent use of this call is for involuntary terminations, such as removal of a

malfunctioning process from the system. The operating system performs much the same actions as

in DELETE, except that it usually furnishes a register and memory dump, together with some

information about the identity of the aborting process and the reason for the action. This information

may be provided in a file, as a message on a terminal, or as an input to the system crash-dump

analyzer utility. Obviously, the issue of restricting the authority to abort other processes, discussed

in relation to the DELETE, is even more pronounced in relation to the ABORT call.

Error returns for ABORT are practically the same as those listed in the discussion of the

DELETE call. The Ada language includes the ABORT statement that forcefully terminates one or

more processes. Other than the usual scope and module-boundary rules, Ada imposes no special

restrictions on the capability of a process to abort others.

FORK/JOIN

Another method of process creation and termination is by means of the FORK/JOIN pair,

originally introduced as primitives for multiprocessor systems. The FORK operation is used to split

a sequence of instructions into two concurrently executable sequences. After reaching the identifier

specified in FORK, a new process (child) is created to execute one branch of the forked code while

the creating (parent) process continues to execute the other. FORK usually returns the identity of the

child to the parent process, and the parent can use that identifier to designate the identity of the child

whose termination it wishes to await before invoking a JOIN operation. JOIN is used to merge the

two sequences of code divided by the FORK, and it is available to a parent process for

synchronization with a child.

The relationship between processes created by FORK is rather symbiotic in the sense that

they execute from a single segment of code, and that a child usually initially obtains a copy of the

variables of its parent.

Mesa, a Pascal-like language, uses the FORK/JOIN mechanism to implement asynchronous

procedure calls, where both the caller and the called procedure execute concurrently following an

invocation. The JOIN primitive is used to synchronize the caller with the termination of the named

procedure. In most other respects, asynchronous procedure invocation is identical to the

synchronous procedure-call mechanism in Mesa, which is very similar to an ordinary procedure call

in Pascal or in Algol.

SUSPEND (processID);

The SUSPEND service is called SLEEP or BLOCK in some systems. The designated

process is suspended indefinitely and placed in the suspended state. It does, however, remain in the

system. A process may suspend itself or another process when authorized to do so by virtue of its

level of privilege, priority, or family membership. When the running process suspends itself, it in

effect voluntarily surrenders control to the operating system. The operating system responds by

inserting the target process's PCB into the suspended list and updating the PCB state field

accordingly.

Suspending a suspended process usually has no effect, except in systems that keep track of

the depth of suspension. In such systems, a process must be resumed at least as many times as it was

suspended in order to become ready. To implement this feature, a suspend-count field has to be

maintained in each PCB. Typical error returns include: process already suspended, wrongID, and

caller not authorized.

RESUME (processID)

The RESUME service is called WAKEUP in some systems. This call resumes the target

process, which is presumably suspended. Obviously, a suspended process cannot resume itself,

because a process must be running to have its OS call processed. So a suspended process depends on

a partner process to issue the RESUME. The operating system responds by inserting the target

process's PCB into the ready list, with the state updated. In systems that keep track of the depth of

suspension, the OS first increments the suspend count, moving the PCB only when the count reaches

zero.

The SUSPEND/RESUME mechanism is convenient for a relatively primitive and unstructured

form of inter-process synchronization. It is often used in systems that do not support exchange of

signals. Error returns include: process already active, wrongID, and caller not authorized.

DELAY (processID, time);

The system call DELAY is also known as SLEEP. The target process is suspended for the

duration of the specified time period. The time may be expressed in terms of system clock ticks that

are system-dependent and not portable, or in standard time units such as seconds and minutes. A

process may delay itself or, optionally, delay some other process.

The actions of the operating system in handling this call depend on processing interrupts

from the programmable interval timer. The timed delay is a very useful system call for

implementing time-outs. In this application a process initiates an action and puts itself to sleep for

the duration of the time-out. When the delay (time-out) expires, control is given back to the calling

process, which tests the outcome of the initiated action. Two other varieties of timed delay are cyclic

rescheduling of a process at given intervals (e.g., running it once every 5 minutes) and time-of-day

scheduling, where a process is run at a specific time of the day. Examples of the latter are printing a

shift log in a process-control system when a new crew is scheduled to take over, and backing up a

database at midnight.

The error returns include: illegal time interval or unit, wrongID, and caller not authorized. In

Ada, a task may delay itself for a number of system clock ticks (system-dependent) or for a specified

time period using the pre-declared floating-point type TIME. The DELAY statement is used for this

purpose.

GET_ATTRIBUTES (processID, attribute_set);

GET_ATTRIBUTES is an inquiry to which the operating system responds by providing the

current values of the process attributes, or their specified subset, from the PCB. This is normally the

only way for a process to find out what its current attributes are, because it neither knows where its

PCB is nor can access the protected OS space where the PCBs are usually kept.

This call may be used to monitor the status of a process, its resource usage and accounting

information, or other public data stored in a PCB. The error returns include: no such attribute,

wrongID, and caller not authorized. In Ada, a task may examine the values of certain task attributes

by means of reading the pre-declared task attribute variables, such as T'ACTIVE, T'CALLABLE,

T'PRIORITY, and T'TERMINATED, where T is the identity of the target task.

CHANGE_PRIORITY (processID, new_priority);

CHANGE_PRIORITY is an instance of a more general SET_PROCESS_ATTRIBUTES

system call. Obviously, this call is not implemented in systems where process priority is static.

Run-time modifications of a process's priority may be used to increase or decrease a

process's ability to compete for system resources. The idea is that priority of a process should rise

and fall according to the relative importance of its momentary activity, thus making scheduling more

responsive to changes of the global system state. Low-priority processes may abuse this call, and

processes competing with the operating system itself may corrupt the whole system. For these

reasons, the authority to increase priority is usually restricted to changes within a certain range. For

example, a maximum may be specified, or the process may not exceed its parent's or group priority.

Although changing priorities of other processes could be useful, most implementations restrict the

calling process to manipulate its own priority only.

The error returns include: caller not authorized for the requested change and wrong ID. In Ada, a

task may change its own priority by calling the SET_PRIORITY procedure, which is pre-declared in

the language.

3.3 System Programs

Another aspect of a modern system is the collection of system programs. At the lowest level is

hardware. Next is the operating system, then the system programs, and finally the application programs.

System programs provide a convenient environment for program development and execution. Some

of them are simply user interfaces to system calls; others are considerably more complex. They can

be divided into these categories:

File management: These programs create, delete, copy, rename, print, dump, list, and generally

manipulate files and directories.

Status information: Some programs simply ask the system for the date, time, amount of available

memory or disk space, number of users, or similar status information. Others are more complex,

providing detailed performance, logging, and debugging information. Typically, these programs

format and print the output to the terminal or other output devices or files, or display it in a window of the GUI. Some systems also support a registry, which is used to store and retrieve configuration

information.

File modification: Several text editors may be available to create and modify the content of files

stored on disk or other storage devices. There may also be special commands to search contents of

files or perform transformations of the text.

Programming-language support: Compilers, assemblers, debuggers and interpreters for common

programming languages (such as C, C++, Java, Visual Basic, and PERL) are often provided to the

user with the operating system.

Program loading and execution: Once a program is assembled or compiled, it must be loaded into

memory to be executed. The system may provide absolute loaders, relocatable loaders, linkage

editors, and overlay loaders. Debugging systems for either higher-level languages or machine

languages are needed as well.

Communications: These programs provide the mechanism for creating virtual connections among

processes, users, and computer systems. They allow users to send messages to one another's screens,

to browse web pages, to send electronic-mail messages, to log in remotely, or to transfer files from

one machine to another. In addition to systems programs, most operating systems are supplied with

programs that are useful in solving common problems or performing common operations. Such

programs include web browsers, word processors and text formatters, spreadsheets, database

systems, compilers, plotting and statistical-analysis packages, and games. These programs are

known as system utilities or application programs. The view of the operating system seen by most

users is defined by the application and system programs, rather than by the actual system calls.

When a computer is running the Mac OS X operating system, a user might see the GUI, featuring a mouse and windows interface. Alternatively, or even in one of the windows, the user might have a

command-line UNIX shell. Both use the same set of system calls, but the system calls look different

and act in different ways.

4. SUMMARY

Operating systems provide a number of services. At the lowest level, system calls allow a running

program to make requests from the operating system directly. System programs are provided to

satisfy many common user requests. The types of requests vary according to level. The system-call

level must provide the basic functions, such as process control and file and device manipulation.

Higher-level requests, satisfied by the command interpreter or system programs, are translated into a

sequence of system calls. System services can be classified into several categories: program control,

status requests, and I/O requests. Program errors can be considered implicit requests for service.

5. SUGGESTED READINGS / REFERENCE MATERIAL

 Operating System Concepts, 5th Edition, Silberschatz A., Galvin P.B., John Wiley & Sons.

 Systems Programming & Operating Systems, 2nd Revised Edition, Dhamdhere D.M., Tata

McGraw Hill Publishing Company Ltd., New Delhi.

 Operating Systems, Madnick S.E., Donovan J.T., Tata McGraw Hill Publishing Company

Ltd., New Delhi.

 Operating Systems-A Modern Perspective, Gary Nutt, Pearson Education Asia, 2000.

 Operating Systems, Harris J.A., Tata McGraw Hill Publishing Company Ltd., New Delhi,

2002.

6. SELF-ASSESSMENT QUESTIONS (SAQ)

 The services and functions provided by an operating system can be divided into two main

categories. Briefly describe the two categories and discuss how they differ.

 List five services provided by an operating system that are designed to make it more

convenient for users to use the computer system. In what cases would it be impossible for user-level programs to provide these services? Explain.

 What are the five major activities of an operating system with regard to file management?

 What are the advantages and disadvantages of using the same system call interface for

manipulating both files and devices?

CS-DE-15

CPU SCHEDULING

LESSON NO 3

Writer: Harvinder Singh

Vetter: Dr. Pardeep Kumar

STRUCTURE

1. Introduction

2. Objectives

3. Presentation of Contents

3.1 Process

3.2 Types of Schedulers

3.2.1 The long-term scheduler

3.2.2 The medium-term scheduler

3.2.3 The short-term scheduler

3.3 Scheduling and Performance Criteria

3.4 Scheduler Design

3.5 Scheduling Algorithms

3.5.1 First-Come, First-Served (FCFS) Scheduling

3.5.2 Shortest Remaining Time Next (SRTN) Scheduling

3.5.3 Time-Slice Scheduling (Round Robin, RR)

3.5.4 Priority-Based Preemptive Scheduling (Event-Driven, ED)

3.5.5 Multiple-Level Queues (MLQ) Scheduling

3.5.6 Multiple-Level Queues with Feedback Scheduling

4. Summary

5. Suggested Readings / Reference Material

6. Self Assessment Questions (SAQ)

1. INTRODUCTION

One of the most fundamental concepts of modern operating systems is the

distinction between a program and the activity of executing a program. The former is

merely a static set of directions; the latter is a dynamic activity whose properties change

as time progresses. This activity is known as a process. A process encompasses the

current status of the activity, called the process state. This state includes the current

position in the program being executed (the value of the program counter) as well as the

values in the other CPU registers and the associated memory cells. Roughly speaking, the

process state is a snapshot of the machine at that time. At different times during the

execution of a program (at different times in a process) different snapshots (different

process states) will be observed.

In operating systems, the term “scheduling” refers to the method by which threads and processes are given access to system resources. A scheduler is an OS module that selects

the next job to be admitted into the system and the next process to run. The need for a

scheduling algorithm arises from the requirement for most modern systems to perform

multitasking (execute more than one process at a time) and multiplexing (transmit

multiple flows simultaneously).

2. OBJECTIVES

The objectives of this lesson are to explain processes and process states. Another

objective is to describe the roles of three different types of schedulers encountered in

operating systems. The scheduler is concerned mainly with throughput, latency and

waiting time. After a discussion of the various performance criteria behind the design of

schedulers, several popular scheduling algorithms have been discussed.

3. PRESENTATION OF CONTENTS

3.1 PROCESS

The notion of process is central to the understanding of operating systems. There are

quite a few definitions presented in the literature, but no "perfect" definition has yet

appeared.

The term "process" was first used by the designers of MULTICS in the 1960s. Since then, the term process has been used somewhat interchangeably with 'task' or 'job'. The process has been given many definitions, for instance:

 A program in Execution.

 An asynchronous activity.

 The 'animated spirit' of a procedure in execution.

 The entity to which processors are assigned.

 The 'dispatchable' unit.

As we can see from the above, there is no universally agreed upon definition, but "a program in execution" seems to be the most frequently used. Now that we have agreed upon a definition of process, the question is: what is the relation between a process and a program? In the following discussion we point out some of the differences. A process is not the same as a program; rather, a process is more than program code. A process is an active entity, as opposed to a program, which is considered a 'passive' entity. A program is an algorithm expressed in some suitable notation (e.g., a programming language). Being passive, a program is only a part of a process. A process, on the other hand, includes:

 An image of the executable machine code associated with a program.

 Memory (typically some region of virtual memory); which includes the

executable code, process-specific data (input and output), a call stack (to keep

track of active subroutines and/or other events), and a heap to hold intermediate

computation data generated during run time.

 Operating system descriptors of resources that are allocated to the process, such as

file descriptors (Unix terminology) or handles (Windows), and data sources and

sinks.

 Security attributes, such as the process owner and the process’s set of permissions

(allowable operations).

 Processor state, such as the content of registers, physical memory addressing, etc.

The state is typically stored in computer registers when the process is executing,

and in memory otherwise.

 Current value of Program Counter (PC)

 Contents of the processors registers

 Value of the variables

 The Process Stack (SP), which typically contains temporary data such as subroutine parameters, return addresses, and temporary variables.

 A data section that contains global variables.

A process is the unit of work in a system. In the process model, all software on the computer is organized into a number of sequential processes. The process state consists of everything necessary to resume the process's execution if it is somehow put aside temporarily.

At any given point in time, while the program is executing, this process can be uniquely

characterized by a number of elements, including the following:

 Identifier: A unique identifier associated with this process, to distinguish it from

all other processes.

 State: If the process is currently executing, it is in the running state.

 Priority: Priority level relative to other processes.

 Program counter: The address of the next instruction in the program to be

executed.

 Memory pointers: Includes pointers to the program code and data associated

with this process, plus any memory blocks shared with other processes.

 Context data: These are data that are present in registers in the processor while

the process is executing.

 I/O status information: Includes outstanding I/O requests, I/O devices (e.g., tape

drives) assigned to this process, a list of files in use by the process, and so on.

 Accounting information: May include the amount of processor time and clock

time used, time limits, account numbers, and so on.

The process state consists of at least the following:

(i) Code for the program.

(ii) Program's static data.

(iii) Program's dynamic data.

(iv) Program's procedure call stack.

(v) Contents of general purpose register.

(vi) Contents of program counter (PC)

(vii) Contents of program status word (PSW).

(viii) Operating Systems resource in use.

A process goes through a series of discrete process states. The following typical process

states are possible on computer systems of all kinds. In most of these states, processes are

stored in main memory.

(a) New State: The process is being created. When a user requests a service from the

system, the system first initializes a process for it. Every newly requested operation

thus enters the system as a new process.

(b) Running State: A process is said to be running if it is actually using the CPU at

that particular instant. A process moves into the running state when it is chosen for

execution. The process's instructions are executed by one of the CPUs (or cores) of

the system. There is at most one running process per CPU or core. A process can run

in either of the two modes, namely kernel mode or user mode.


Kernel mode

 Processes in kernel mode can access both kernel and user addresses.

 Kernel mode allows unrestricted access to hardware including execution of

privileged instructions.

 Various instructions (such as I/O instructions and halt instructions) are privileged

and can be executed only in kernel mode.

 A system call from a user program leads to a switch to kernel mode.

User mode

 Processes in user mode can access their own instructions and data but not kernel

instructions and data (or those of other processes).

 When the computer system is executing on behalf of a user application, the

system is in user mode. However, when a user application requests a service from

the operating system (via a system call), the system must transition from user to

kernel mode to fulfill the request.

 User mode avoids various catastrophic failures:

o There is an isolated virtual address space for each process in user mode.

o User mode ensures isolated execution of each process so that it does not

affect other processes as such.

o No direct access to any hardware device is allowed.

(c) Blocked State: A process is said to be blocked if it is waiting for some event to

happen before it can proceed, such as the completion of an I/O operation or the

availability of a resource held by another process.

(d) Ready State: A process is said to be ready if it could use a CPU as soon as one is available. A

"ready" or "waiting" process has been loaded into main memory and is awaiting

execution on a CPU (to be context switched onto the CPU by the dispatcher, or short-

term scheduler). There may be many "ready" processes at any one point of the

system's execution—for example, in a one-processor system, only one process can be

executing at any one time, and all other "concurrently executing" processes will be

waiting for execution. A ready queue or run queue is used in computer scheduling.

Modern computers are capable of running many different programs or processes at

the same time. However, the CPU is only capable of handling one process at a time.

Processes that are ready for the CPU are kept in a queue for "ready" processes. Other

processes that are waiting for an event to occur, such as loading information from a

hard drive or waiting on an internet connection, are not in the ready queue. The run

queue may contain priority values for each process, which will be used by the

scheduler to determine which process to run next. To ensure each program has a fair

share of resources, each one is run for some time period (quantum) before it is paused

and placed back into the run queue.

(e) Terminated state: The process has finished execution. A process may be terminated,

either from the "running" state by completing its execution or by explicitly being

killed. In either of these cases, the process moves to the "terminated" state.

Two additional states are available for processes in systems that support virtual memory.

In both of these states, processes are "stored" on secondary memory (typically a hard

disk).

Swapped out and waiting

This is also called suspended and waiting. In systems that support virtual memory, a

process may be swapped out, that is, removed from main memory and placed on external

storage by the scheduler. From here the process may be swapped back into the waiting

state.

Swapped out and blocked

This is also called suspended and blocked. Processes that are blocked may also be

swapped out. In this event the process is both swapped out and blocked, and may be

swapped back in again under the same circumstances as a swapped out and waiting

process (although in this case, the process will move to the blocked state, and may still be

waiting for a resource to become available).
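The states above, including the two swapped-out states, can be summarized as a transition table. The sketch below is illustrative; the event labels are informal names, not standard OS terminology:

```python
# Legal state transitions for the process model described above.
TRANSITIONS = {
    "new":                 {"admit": "ready"},
    "ready":               {"dispatch": "running", "swap out": "swapped and waiting"},
    "running":             {"timeout": "ready", "block": "blocked", "exit": "terminated"},
    "blocked":             {"event": "ready", "swap out": "swapped and blocked"},
    "swapped and waiting": {"swap in": "ready"},
    "swapped and blocked": {"event": "swapped and waiting", "swap in": "blocked"},
    "terminated":          {},
}

def transition(state, event):
    """Return the successor state, or raise on an illegal transition."""
    try:
        return TRANSITIONS[state][event]
    except KeyError:
        raise ValueError(f"illegal transition {state!r} on {event!r}")

print(transition("new", "admit"))                   # ready
print(transition("blocked", "swap out"))            # swapped and blocked
print(transition("swapped and blocked", "event"))   # swapped and waiting
```

Note that a terminated process has no outgoing transitions, and a blocked process can never move directly to running without first becoming ready.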

3.2 TYPES OF SCHEDULERS

There are three distinct types of schedulers: a long-term scheduler (also known as an

admission scheduler or high-level scheduler), a mid-term or medium-term scheduler and

a short-term scheduler. The scheduler is an operating system module that selects the next

jobs to be admitted into the system and the next process to run. Figure 3.1 shows the

possible traversal paths of jobs and programs through the components and queues,

depicted by rectangles, of a computer system. The primary places of action of the three

types of schedulers are marked with down-arrows. As shown in Figure 3.1, a submitted

batch job joins the batch queue while waiting to be processed by the long-term scheduler.

Whenever the CPU becomes idle, it is the job of the CPU Scheduler (the short-term

scheduler) to select another process from the ready queue to run next. After becoming

suspended, the running process may be removed from memory and swapped out to

secondary storage. Such processes are subsequently admitted to main memory by the

medium-term scheduler in order to be considered for execution by the short-term

scheduler.

[Figure 3.1 depicts the following flow: batch jobs enter the batch queue and are admitted to the ready queue by the long-term scheduler; interactive programs enter the ready queue directly; the short-term scheduler dispatches processes from the ready queue to the CPU, from which they exit on completion; the medium-term scheduler moves processes between the suspended queue, the suspended-and-swapped-out queue, and the ready queue.]

Figure 3.1 - Process Schedulers

3.2.1 The long-term scheduler

The long-term, or admission scheduler, decides which jobs or processes are to be

admitted to the ready queue (in the Main Memory). The long-term scheduler, when

present, works with the batch queue and selects the next batch job to be executed. Batch

is usually reserved for resource-intensive (processor time, memory, special I/O devices),

low-priority programs that may be used as fillers to keep the system resources busy

during periods of low activity of interactive jobs. As pointed out earlier, batch jobs

contain all necessary data and commands for their execution. Batch jobs usually also

contain programmer-assigned estimates of their resource needs, such as memory size,

expected execution time, and device requirements.

The primary objective of the long-term scheduler is to provide a balanced mix of

jobs, such as processor-bound and I/O-bound, to the short-term scheduler. Thus, this

scheduler dictates which processes are to run on a system and the degree of concurrency

to be supported at any one time, i.e., whether many or few processes are to be

executed concurrently, and how the split between I/O-intensive and CPU-intensive

processes is to be handled. The long-term scheduler is thus responsible for controlling

the degree of multiprogramming. For example, when the processor utilization is low, the

scheduler may admit more jobs to increase the number of processes in a ready queue, and

with it the probability of having some useful work awaiting processor allocation.

Conversely, when the utilization factor becomes high as reflected in the response time,

the long-term scheduler may opt to reduce the rate of batch-job admission accordingly. In

modern operating systems, this is used to make sure that real time processes get enough

CPU time to finish their tasks. Without proper real time scheduling, modern GUIs would

seem sluggish. As a result of the relatively infrequent execution and the availability of an

estimate of its workload's characteristics, the long-term scheduler may incorporate rather

complex and computationally intensive algorithms for admitting jobs into the system. In

terms of the process state-transition diagram, the long-term scheduler is basically in

charge of the dormant-to-ready transitions.

3.2.2 The medium-term scheduler

The medium-term scheduler temporarily removes processes from main memory and places them on

secondary memory (such as a disk drive) or vice versa. This is commonly referred to as

"swapping out" or "swapping in" (also incorrectly as "paging out" or "paging in"). The

medium-term scheduler may decide to swap out a process which has not been active for

some time, or a process which has a low priority, or a process which is page

faulting frequently, or a process which is taking up a large amount of memory in order to

free up main memory for other processes, swapping the process back in later when more

memory is available, or when the process has been unblocked and is no longer waiting

for a resource. In practice, the main-memory capacity may impose a limit on the number

of active processes in the system. When a number of those processes become suspended,

the remaining supply of ready processes in systems where all suspended processes remain

resident in memory may become reduced to a level that impairs functioning of the short-

term scheduler by leaving it few or no options for selection. In systems with no support

for virtual memory, moving suspended processes to secondary storage may alleviate this

problem.

In the system depicted in Figure 3.1, a portion of the suspended processes is

assumed to be swapped out. The remaining processes are assumed to remain in memory

while they are suspended.

The medium-term scheduler is in charge of handling the swapped-out processes.

It has little to do while a process remains suspended. However, once the suspending

condition is removed, the medium-term scheduler attempts to allocate the required

amount of main memory, and swap the process in and make it ready. To work properly,

the medium-term scheduler must be provided with information about the memory

requirements of swapped-out processes. This is usually not difficult to implement,

because the actual size of the process may be recorded at the time of swapping and stored

in the related process control block.

In many systems today (those that support mapping virtual address space to

secondary storage other than the swap file), the medium-term scheduler may actually

perform the role of the long-term scheduler, by treating binaries as "swapped out

processes" upon their execution.

3.2.3 The short-term scheduler

The short-term scheduler (also known as the CPU scheduler) decides which of the

ready, in-memory processes are to be executed (allocated a CPU) after a clock interrupt,

an I/O interrupt, an operating system call or another form of signal. Its main objective is

to maximize system performance in accordance with the chosen set of criteria. Since it is

in charge of ready-to-running state transitions, the short-term scheduler must be invoked

for each process switch to select the next process to be run. Given that any such change

could result in making the running process suspended or in making one or more

suspended processes ready, the short-term scheduler should be run to determine whether

such significant changes have indeed occurred and, if so, to select the next process to be

run. Some of the events occurred and, if so, to select the next process to be run. Some of

the events introduced thus far that cause rescheduling by virtue of their ability to change

the global system state are:

 Clock-ticks (time-base interrupts)

 Interrupts and I/O completions

 Most operational OS calls (as opposed to queries)

 Sending and receiving of signals

 Activation of interactive programs

In general, whenever one of these events occurs, the operating system invokes the

short-term scheduler to determine whether another process should be scheduled for

execution.

Most of the process-management OS services discussed in this lesson require

invocation of the short-term scheduler as part of their processing. For example, creating a

process or resuming a suspended one adds another entry to the ready list (queue), and the

scheduler is invoked to determine whether the new entry should also become the running

process. Suspending a running process, changing the priority of the running process, and

exiting or aborting a process are also events that may necessitate selection of a new

running process. Among

other things, this service is useful for invoking the scheduler from user-written event-

processing routines, such as device (I/O) drivers.

As indicated in Figure 3.1, interactive programs often enter the ready queue

directly after being submitted to the OS, which then creates the corresponding process.

Unlike batch jobs, the influx of interactive programs is not throttled, and they may

conceivably saturate the system. The necessary control is usually provided indirectly by

deteriorating response time, which tempts the users to give up and try again later, or at

least to reduce the rate of incoming requests.

Figure 3.1 illustrates the roles and the interplay among the various types of

schedulers in an operating system. It depicts the most general case of all three types being

present. For example, a larger operating system might support both batch and interactive

programs and rely on swapping to maintain a well-behaved mix of active processes.

Smaller or special-purpose operating systems may have only one or two types of

schedulers available. A long-term scheduler is normally not found in systems without

support for batch, and the medium-term scheduler is needed only when swapping is used

by the underlying operating system. When more than one type of scheduler exists in an

operating system, proper support for communication and interaction is very important for

attaining satisfactory and balanced performance. This scheduler can be preemptive,

implying that it is capable of forcibly removing processes from a CPU when it decides to

allocate that CPU to another process, or non-preemptive (also known as "voluntary" or

"co-operative"), in which case the scheduler is unable to "force" processes off the CPU.

3.3 SCHEDULING AND PERFORMANCE CRITERIA

There is no “one, true scheduling algorithm,” because the “goodness” of an

algorithm can be measured by many, sometimes contradictory, criteria:

1. Processor utilization

2. Throughput

3. Turnaround time

4. Waiting time

5. Response time

The scheduler should also strive for fairness, predictability, and repeatability, so that

similar workloads exhibit similar behavior. Although these considerations may, in a

sense, be regarded as scheduling objectives, we will treat them as design constraints.

In practice, these goals often conflict (e.g. throughput versus latency), thus a

scheduler will implement a suitable compromise. Preference is given to any one of the

above mentioned concerns depending upon the user's needs and objectives. For simplicity

of presentation, in the description that follows, the terms job and process are used

interchangeably to designate a basic unit of work.

Processor utilization is the average fraction of time during which the processor is

busy. Being busy usually refers to the processor not being idle, and includes both the time spent

executing user programs and executing the operating system. With this interpretation,

processor utilization may be measured relatively easily, for example by means of a

special NULL process that runs when nothing else can run. An alternative is to consider

the user-state operation only and thus exclude the time spent executing the operating

system. In any case, the idea is that, by keeping the processor busy as much as possible,

other component utilization factors will also be high and provide a good return on

investment. Unfortunately, as subsequent sections and analyses of scheduling indicate,

with processor utilization approaching 100%, average waiting times and average queue

lengths tend to grow excessively.

Throughput refers to the amount of work completed in a unit of time. One way to

express throughput is by means of the number of user jobs executed in a unit of time. The

higher the number, the more work is apparently being done by the system. In closed

environments, where a more or less fixed collection of processes cycles in the system,

such as embedded real-time systems, throughput can be a measure of scheduling

efficiency. In systems where a large population of users generates service demands, such as

time-sharing systems, throughput in the long run is dictated by external factors and is a

function of the request arrival rate and the processor service rate. In such systems,

scheduling basically affects the distribution of waiting times among the users.

Turnaround time, T, is defined as the time that elapses from the moment a

program or a job is submitted until it is completed by a system. It is the time spent in the

system, and it may be expressed as a sum of the job service time (execution time) and

waiting time.

Waiting time, W, is the time that a process or a job spends waiting for resource

allocation due to contentions with others in a multiprogramming system. In other words,

waiting time is the penalty imposed for sharing resources with others. Waiting time may

be expressed as turnaround time less the actual execution time:

W(x) = T(x) - x

where x is the service time, W(x) is the waiting time of the job requiring x units of

service, and T(x) is the job's turnaround time.

Waiting time is, in other words, the time for which the process remains in the

ready queue. For example, a long job executed without preemption and a short job

executed with several preemptions may experience identical turnaround times. However,

the waiting times of the two jobs would differ and clearly indicate the effects and extent

of interference experienced by each job.
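The relation W(x) = T(x) - x is easy to compute directly. The numbers below are illustrative, echoing the point that identical turnaround times can hide very different amounts of interference:

```python
def waiting_time(turnaround, service):
    """W(x) = T(x) - x: the time spent waiting rather than executing."""
    return turnaround - service

# Two jobs with the same 22-unit turnaround but different service demands:
print(waiting_time(turnaround=22, service=2))    # 20: short job, heavy interference
print(waiting_time(turnaround=22, service=20))   # 2: long job, little interference
```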

Response time in interactive systems is defined as the amount of time it takes from when

a request was submitted until the first response is produced. This is usually called the

terminal response time. In real-time systems, on the other hand, the response time is

essentially latency. It is defined as the time from the moment an event (internal or

external) is signaled until the first instruction of its respective service routine is executed.

This time is often called the event response time.

3.4 SCHEDULER DESIGN

Design process of a typical scheduler consists of selecting one or more primary

performance criteria and ranking them in relative order of importance. The next step is to

design a scheduling strategy that maximizes performance for the specified set of criteria

while obeying the design constraints. One should intentionally avoid the word

"optimization" because most scheduling algorithms actually implemented do not

schedule optimally. They are based on heuristic techniques that yield good or near-

optimal performance but rarely achieve absolutely optimal performance. The primary

reason for this situation lies in the overhead that would be incurred by computing the

optimal strategy at run-time, and by collecting the performance statistics necessary to

perform the optimization. Of course, the optimization algorithms remain important, at

least as a yardstick in evaluating the heuristics. Schedulers typically attempt to maximize

the average performance of a system, relative to a given criterion. However, due

consideration must be given to controlling the variance and limiting the worst-case

behavior. For example, a user experiencing 10-second response time to simple queries

has little consolation in knowing that the system's average response time is under 2

seconds.

One of the problems in selecting a set of performance criteria is that they often

conflict with each other. For example, increased processor utilization is usually achieved

by increasing the number of active processes, but then response time deteriorates. As is

the case with most engineering problems, the design of a scheduler usually requires

careful balance of all the different requirements and constraints. With the knowledge of

the primary intended use of a given system, operating-system designers tend to maximize

the criteria most important in a given environment. For example, throughput and

component utilization are the primary design objectives in a batch system. Multi-user

systems are dominated by concerns regarding the terminal response time, and real-time

operating systems are designed for the ability to handle burst of external events

responsively.

3.5 SCHEDULING ALGORITHMS

Scheduling disciplines are algorithms used for distributing resources among

parties which simultaneously and asynchronously request them. Scheduling disciplines

are used in routers (to handle packet traffic) as well as in operating systems (to

share CPU time among both threads and processes), disk drives (I/O scheduling), printers

(print spooler), most embedded systems, etc. Depending on whether a particular

scheduling discipline is primarily used by the long-term or by the short-term scheduler,

we illustrate its working by using the term job or process for a unit of work, respectively.

In general, scheduling disciplines may be preemptive or non-preemptive. In batch,

non-preemption implies that, once scheduled, a selected job runs to completion. With

short-term scheduling, no preemption implies that the running process retains ownership

of allocated resources, including the processor, until it voluntarily surrenders control to

the operating system. In other words, the running process is not forced to relinquish

ownership of the processor when a higher-priority process becomes ready for execution.

However, when the running process becomes suspended as a result of its own action, say,

by waiting for an I/O completion, another ready process may be scheduled.

With preemptive scheduling, on the other hand, a running process may be

replaced by a higher-priority process at any time. This is accomplished by activating the

scheduler whenever an event that changes the state of the system is detected. The main

purposes of scheduling algorithms are to minimize resource starvation and to ensure

fairness amongst the parties utilizing the resources. Scheduling deals with the problem of

deciding which of the outstanding requests is to be allocated resources. There are many

different scheduling algorithms. In this section, we introduce several of them.

3.5.1 First-Come, First-Served (FCFS) Scheduling

Also known as first-in, first-out (FIFO), this is by far the simplest scheduling discipline. The

workload is simply processed in the order of arrival, with no preemption. Implementation

of the FCFS scheduler is quite straightforward, and its execution results in little overhead.

By failing to take into consideration the state of the system and the resource

requirements of the individual scheduling entities, FCFS scheduling may result in poor

performance. As a consequence of no preemption, component utilization and the system

throughput rate may be quite low. Since there is no discrimination on the basis of the

required service, short jobs may suffer considerable turnaround delays and waiting times

when one or more long jobs are in the system. For example, consider a system with two

jobs, J1 and J2, with total execution times of 20 and 2 time units, respectively. If they

arrive shortly one after the other in the order J1-J2, the turnaround times are 20 and 22

time units, respectively (J2 must wait for J1 to complete), thus yielding an average of 21

time units. The corresponding waiting times are 0 and 20 time units, yielding an average of 10

time units. However, when the same two jobs arrive in the opposite order, J2-J1, the

average turnaround time drops to 12, and the average waiting time is only 1 time unit.

This simple example demonstrates how short jobs may be hurt by the long jobs in FCFS

systems, as well as the potential variability in turnaround and waiting times from one run

to another.

FCFS has relatively low throughput and poor event-response time due to the lack

of preemption and process prioritization. FCFS scheduling effectively eliminates the

notion and importance of process priorities. Process arrival times (i.e. becoming ready)

are the sole scheduling criterion.
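The J1/J2 example above can be reproduced with a short simulation. This is a sketch that assumes all jobs arrive at time 0 in the listed order, so turnaround time equals completion time:

```python
def fcfs(jobs):
    """FCFS with all jobs arriving at time 0. jobs: list of (name, service).
    Returns (per-job stats, average turnaround, average waiting)."""
    clock, stats = 0, {}
    for name, service in jobs:
        waiting = clock                 # time spent behind earlier arrivals
        clock += service
        stats[name] = {"turnaround": clock, "waiting": waiting}
    avg_t = sum(s["turnaround"] for s in stats.values()) / len(jobs)
    avg_w = sum(s["waiting"] for s in stats.values()) / len(jobs)
    return stats, avg_t, avg_w

_, t1, w1 = fcfs([("J1", 20), ("J2", 2)])   # long job first
_, t2, w2 = fcfs([("J2", 2), ("J1", 20)])   # short job first
print(t1, w1)   # 21.0 10.0
print(t2, w2)   # 12.0 1.0
```

Reversing the arrival order of the same two jobs changes the averages dramatically, which is exactly the sensitivity to job ordering that FCFS exhibits.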

3.5.2 Shortest Remaining Time Next (SRTN) Scheduling

Shortest remaining time next is very similar to Shortest Job First (SJF). With this

strategy the scheduler arranges processes with the least estimated processing time

remaining to be next in the queue. This requires advanced knowledge or estimations

about the time required for a process to complete. SRTN scheduling may be implemented

in either the non-preemptive or the preemptive variety. The non-preemptive version of

SRTN is called shortest job first (SJF). In either case, whenever the SRTN scheduler is

invoked, it searches the corresponding queue (batch or ready) to find the job or the

process with the shortest remaining execution time. The difference between the two cases

lies in the conditions that lead to invocation of the scheduler and, consequently, the

frequency of its execution. Without preemption, the SRTN scheduler is invoked

whenever a job is completed or the running process surrenders control to the OS. The

scheduler is invoked to compare the remaining processor execution time of the running

process with the time needed to complete the next processor burst of the newcomer.
Depending on the outcome, the running process may continue, or it may be preempted

and replaced by the shortest-remaining-time process. If preempted, the running process

joins the ready queue.
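The preemptive behaviour can be sketched with a unit-time simulation. This is an illustrative model, not a production scheduler, and it assumes the exact burst times are known in advance:

```python
import heapq

def srtn(jobs):
    """Preemptive SRTN, advancing one time unit per step.
    jobs: list of (arrival, name, burst). Returns {name: completion_time}."""
    pending = sorted(jobs)              # ordered by arrival time
    ready, done, clock = [], {}, 0
    while pending or ready:
        while pending and pending[0][0] <= clock:
            _, name, burst = pending.pop(0)
            heapq.heappush(ready, (burst, name))    # heap keyed on remaining time
        if not ready:
            clock = pending[0][0]       # CPU idle: jump to next arrival
            continue
        remaining, name = heapq.heappop(ready)
        clock += 1                      # run the shortest job for one unit
        if remaining > 1:
            heapq.heappush(ready, (remaining - 1, name))
        else:
            done[name] = clock
    return done

# J2's 2-unit burst preempts J1's 20-unit burst shortly after J1 starts:
print(srtn([(0, "J1", 20), (1, "J2", 2)]))   # {'J2': 3, 'J1': 22}
```

The heap makes the "find the shortest remaining time" step cheap, at the cost of a push/pop on every preemption, mirroring the context-switch overhead discussed above.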

SRTN is a provably optimal scheduling discipline in terms of minimizing the

average waiting time of a given workload. If a shorter process arrives during another

process' execution, the currently running process may be interrupted (known as

preemption), dividing that process into two separate computing blocks. This creates

excess overhead through additional context switching. The scheduler must also place

each incoming process into a specific place in the queue, creating additional overhead.

The SRTN discipline schedules optimally assuming that the exact future

execution times of jobs or processes are known at the time of scheduling. Dependence on

future knowledge tends to limit the effectiveness of SRTN implementations in practice,

because future process behavior is unknown in general and difficult to estimate reliably,

except for some very specialized deterministic cases.

Predictions of process execution requirements are usually based on observed past

behavior, perhaps coupled with some other knowledge of the nature of the process and its

long-term statistical properties, if available. A relatively simple predictor, called the

exponential smoothing predictor, has the following form:

Pn = 0n-1 + (1 - )P-1

where 0n is the observed length of the (n-1)th execution interval, Pn-1 is the predictor for

the same interval, and  is a number between 0 and 1. The parameter  controls the

relative weight assigned to the past observations and predictions. For the extreme case of

α = 1, the past predictor is ignored, and the new prediction equals the last observation.

For α = 0, the last observation is ignored. In general, expansion of the recursive

relationship yields

        n-1
Pn =     Σ   α(1 - α)^i On-i-1
        i=0

Thus the predictor includes the entire process history, with its more recent history

weighted more.
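The recursive predictor is straightforward to implement. A minimal sketch, where the seed prediction P0 is an assumed starting value:

```python
def predict_burst(observations, alpha, p0=0.0):
    """Exponential smoothing: Pn = alpha*On-1 + (1 - alpha)*Pn-1.
    alpha = 1 trusts only the last observation; alpha = 0 ignores observations."""
    p = p0
    for o in observations:
        p = alpha * o + (1 - alpha) * p
    return p

# Observed CPU bursts of 8, 6, 4 units, seed prediction of 10:
print(predict_burst([8, 6, 4], alpha=0.5, p0=10))   # 5.75
print(predict_burst([8, 6, 4], alpha=1.0))          # 4.0 (last observation only)
```

With alpha = 0.5 each older burst carries half the weight of the one after it, so the prediction tracks the recent downward trend without reacting fully to any single burst.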

Many operating systems measure and record elapsed execution time of a process

in its PCB. This information is used for scheduling and accounting purposes.

Implementation of SRTN scheduling obviously requires rather precise measurement and

imposes the overhead of predictor calculation at run time. Moreover, some additional

feedback mechanism is usually necessary for corrections when the predictor is grossly

incorrect.

The SRTN scheduling algorithm is designed for maximum throughput in most

scenarios. Waiting time and response time increase as the process's computational

requirements increase. Since turnaround time is based on waiting time plus processing

time, longer processes are significantly affected by this. Overall waiting time is smaller

than under FIFO, however, since no process has to wait for the termination of the longest

process.

The preemptive variety of SRTN incurs the additional overhead of frequent

process switching and scheduler invocation to examine each and every process transition
into the ready state. This work is wasted when the new ready process has a longer

remaining execution time than the running process.

SRTN provides good event and interrupt response times by giving preference to

the related service routines since their processor bursts are typically of short duration.

3.5.3 Time-Slice Scheduling (Round Robin, RR)

In interactive environments, such as time-sharing systems, the primary

requirement is to provide reasonably good response time and, in general, to share system

resources equitably among all users. Obviously, only preemptive disciplines may be

considered in such environments, and one of the most popular is time slicing, also known

as round robin (RR).

It is a preemptive scheduling policy. This scheduling policy gives each process a

slice of time (i.e., one quantum) before being preempted. As each process becomes ready,

it joins the ready queue. A clock interrupt is generated at periodic intervals. When the

interrupt occurs, the currently running process is preempted, and the oldest process in the

ready queue is selected to run next. The length of the time quantum may vary from system to system.

It is one of the most common and most important schedulers. This is not the simplest

scheduler, but it is the simplest preemptive scheduler. It works as follows:

 The processes that are ready to run (i.e. not blocked) are kept in a FIFO queue, called

the "Ready" queue.

 There is a fixed time quantum (50 msec is a typical number) which is the maximum

length that any process runs at a time.

 The currently active process P runs until one of two things happens:

 P blocks (e.g. waiting for input). In that case, P is taken off the ready queue; it is

in the "blocked" state.

 P exhausts its time quantum. In this case, P is pre-empted, even though it is still

able to run. It is put at the end of the ready queue.

In either case, the process at the head of the ready queue is now made the active

process.

 When a process unblocks (e.g. the input it's waiting for is complete) it is put at the

end of the ready queue.

Suppose the time quantum is 40 msec, process P is executing, and it blocks after

15 msec. When it unblocks, and gets through the ready queue, it gets the standard 40

msec again; it doesn't somehow "save" the 25 msec that it missed last time.

It is an important preemptive scheduling policy and is essentially the preemptive version

of FCFS. The key parameter here is the quantum size q. When a process is put into the

running state a timer is set to q. If the timer goes off and the process is still running, the

Operating System preempts the process. This process is moved to the ready state where it

is placed at the rear of the ready queue. The process at the front of the ready list is

removed from the ready list and run (i.e., moves to state running). When a process is

created, it is placed at the rear of the ready list. As q gets large, RR approaches FCFS.

As q gets small, RR approaches PS (Processor Sharing).

What value of q should we choose? It is a tradeoff: (1) a small q makes the

system more responsive; (2) a large q makes the system more efficient, since fewer

process switches occur.

Round robin scheduling achieves equitable sharing of system resources. Short

processes may be executed within a single time quantum and thus exhibit good response

times. Long processes may require several quanta and thus be forced to cycle through the

ready queue a few times before completion. With RR scheduling, response time of long

processes is directly proportional to their resource requirements. For long processes that

consist of a number of interactive sequences with the user, primarily the response time

between two consecutive interactions matters. If the computation required

between two such sequences can be completed within a single time slice, the user should

experience good response time. RR tends to subject long processes without interactive

sequences to relatively long turnaround and waiting times. Such processes, however, may

best be run in the batch mode, and it might even be desirable to discourage users from

submitting them to the interactive scheduler.

Implementation of round robin scheduling requires the support of an interval timer,

preferably a dedicated one, as opposed to sharing the system time base. The timer is

usually set to interrupt the operating system whenever a time slice expires and thus force

the scheduler to be invoked. The scheduler itself simply stores the context of the running

process, moves it to the end of the ready queue, and dispatches the process at the head of

the ready queue. The scheduler is also invoked to dispatch a new process whenever the

running process surrenders control to the operating system before expiration of its time

quantum, say, by requesting I/O. The interval timer is usually reset at that point, in order

to provide the full time slot to the new running process. The frequent setting and resetting

of a dedicated interval timer makes hardware support desirable in systems that use time

slicing.

Round robin scheduling is often regarded as a "fair" scheduling discipline. It is

also one of the best-known scheduling disciplines for achieving good and relatively

evenly distributed terminal response time. The performance of round robin scheduling is

very sensitive to the choice of the time slice. For this reason, duration of the time slice is

often made user-tunable by means of the system generation process.

The relationship between the time slice and performance is markedly nonlinear.

Reduction of the time slice should not be carried too far in anticipation of better response

time. Too short a time slice may result in significant overhead due to the frequent timer

interrupts and process switches. On the other hand, too long a time slice reduces the

preemption overhead but increases response time.

Too short a time slice results in excessive overhead, and too long a time slice

degenerates from round-robin to FCFS scheduling, as processes surrender control to the

Operating System rather than being preempted by the interval timer. The "optimal" value

of the time slice lies somewhere in between, but it is both system-dependent and

workload-dependent. For example, the best value of time slice for our example may not

turn out to be so good when other processes with different behavior are introduced in the

system, that is, when characteristics of the workload change. This, unfortunately, is

commonly the case with time-sharing systems where different types of programs may be

submitted at different times.

In summary, round robin is primarily used in time-sharing and multi-user systems

where terminal response time is important. Round robin scheduling generally

discriminates against long non-interactive jobs and depends on the judicious choice of

time slice for adequate performance. Duration of a time slice is a tunable system

parameter that may be changed during system generation.

3.5.4 Priority-Based Preemptive Scheduling (Event-Driven, ED)

The OS assigns a fixed priority rank to every process, and the scheduler arranges

the processes in the ready queue in order of their priority. Lower priority processes get

interrupted by incoming higher priority processes. Priorities may be static or dynamic. In

either case, the user or the system assigns their initial values at process-creation time.

The level of priority may be determined as an aggregate figure on the basis of an initial

value, characteristics, resource requirements, and run-time behavior of the process. In this

sense, many scheduling disciplines may be regarded as being priority-driven, where the

priority of a process represents its likelihood of being scheduled next. Priority-based

scheduling may be preemptive or non-preemptive.

A common problem with priority-based scheduling is the possibility that low-

priority processes may be effectively locked out by the higher priority ones. In general,

completion of a process within finite time of its creation cannot be guaranteed with this

scheduling policy. If the number of priority rankings is limited, it can be characterized as a

collection of FIFO queues, one for each priority ranking. Processes in lower-priority

queues are selected only when all of the higher-priority queues are empty.
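With a limited number of rankings, this can be sketched as a collection of FIFO queues, one per priority level (the class name and process names below are illustrative):

```python
from collections import deque

class PriorityScheduler:
    """One FIFO queue per priority ranking; 0 is the highest priority."""
    def __init__(self, levels):
        self.queues = [deque() for _ in range(levels)]

    def make_ready(self, process, priority):
        self.queues[priority].append(process)

    def pick_next(self):
        # A lower-priority queue is consulted only when all of the
        # higher-priority queues are empty.
        for q in self.queues:
            if q:
                return q.popleft()
        return None  # no ready process at all

s = PriorityScheduler(3)
s.make_ready("editor", 1)
s.make_ready("batch_job", 2)
s.make_ready("interrupt_handler", 0)
print(s.pick_next())  # interrupt_handler
```

Note how a steady stream of priority-0 arrivals would keep `batch_job` locked out indefinitely, which is exactly the starvation risk described above.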

We introduced event-driven scheduling in the earlier sections of this lesson, in

presenting the multitasking example. ED scheduling consists of assigning fixed or

dynamically varying priorities to all processes and scheduling the highest-priority ready

process whenever a significant event occurs. ED scheduling is used in systems where

response time, especially to external events, is of utmost importance.

By means of assigning priorities to processes, system programmers can influence

the order in which an ED scheduler services coincident external events. However, the

high-priority ones may starve low-priority processes. Since it gives little consideration to

resource requirements of processes, event-driven scheduling cannot be expected to excel

in general-purpose systems, such as university computing centers, where a large number

of user processes are run at the same (default) level of priority.

Another variant of priority-based scheduling is used in the so-called hard real-

time systems, where each process must be guaranteed execution before expiration of its

deadline. In such systems, time-critical processes are assumed to be assigned execution

deadlines. The system workload consists of a combination of periodic processes,

executed cyclically with a known period, and of aperiodic processes whose arrival times

are generally not

predictable. An optimal scheduling discipline in such environments is the earliest-

deadline scheduler, which schedules for execution the ready process with the earliest

deadline. Another form of scheduler, called the least laxity scheduler or the least slack

scheduler, has also been shown to be optimal in single-processor systems. This scheduler

selects the ready process with the least difference between its deadline and computation

time. Interestingly, neither of these schedulers is optimal in multiprocessor environments.
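Both disciplines amount to selecting a minimum over the ready set; a sketch with invented deadlines and remaining compute times:

```python
def earliest_deadline(ready):
    """ready: list of (name, deadline, remaining_compute_time).
    Earliest-deadline scheduling picks the smallest deadline."""
    return min(ready, key=lambda p: p[1])[0]

def least_laxity(ready, now=0):
    """Laxity (slack) = time left until the deadline minus the
    remaining compute time; pick the process with the least laxity."""
    return min(ready, key=lambda p: (p[1] - now) - p[2])[0]

# (name, deadline, remaining compute time) -- illustrative values
procs = [("A", 60, 10), ("B", 80, 75), ("C", 100, 20)]
print(earliest_deadline(procs))  # A: deadline 60 is earliest
print(least_laxity(procs))       # B: laxity 80-75=5 vs A's 50 and C's 80
```

The example is chosen so the two policies disagree: A's deadline is nearest, but B has almost no slack left.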

3.5.5 Multiple-Level Queues (MLQ) Scheduling

The scheduling disciplines described so far are more or less suited to particular

applications, with potentially poor performance when applied inappropriately. What

should one use in a mixed system, with some time-critical events, a multitude of

interactive users, and some very long non-interactive jobs? This description easily fits

any university computing center, with a variety of devices and terminals (interrupts to be

serviced), interactive users (student programs), and simulations (batch jobs). One

approach is to combine several scheduling disciplines. A mix of scheduling disciplines

may best service a mixed environment, each charged with what it does best. For example,

operating-system processes and device interrupts may be subjected to event-driven

scheduling, interactive programs to round robin scheduling, and batch jobs to FCFS or

SRTN.

One way to implement complex scheduling is to classify the workload according

to its characteristics, and to maintain separate process queues serviced by different

schedulers. This approach is often called multiple-level queues (MLQ) scheduling. A

division of the workload might be into system processes, interactive programs, and batch

jobs. This would result in three ready queues, as depicted in Figure 3.2. A process may be

assigned to a specific queue on the basis of its attributes, which may be user-or system-

supplied. Each queue may then be serviced by the scheduling discipline best suited to the

type of workload that it contains. Given a single server (the processor), some discipline

must also be devised for scheduling between queues. Typical approaches are to use

absolute priority or time slicing with some bias reflecting relative priority of the

processes within specific queues. In the absolute priority case, the processes from the

highest-priority queue (e.g. system processes) are serviced until that queue becomes

empty. The scheduling discipline may be event-driven, although FCFS should not be

ruled out given its low overhead and the similar characteristics of processes in that queue.

When the highest-priority queue becomes empty, the next queue may be serviced using

its own scheduling discipline (e.g., RR for interactive processes). Finally, when both

higher-priority queues become empty, a batch-spawned process may be selected. A

lower-priority process may, of course, be preempted by a higher-priority arrival in one of

the upper-level queues. This discipline maintains responsiveness to external events and

interrupts at the expense of frequent preemptions. An alternative approach is to assign a

certain percentage of the processor time to each queue, commensurate with its priority.
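The between-queue rule described above (absolute priority, with a different discipline per queue) can be sketched as follows; the queue classes follow the text, while the process names are invented:

```python
from collections import deque

# Each class of workload gets its own ready queue and its own discipline
# (event-driven, round robin, FCFS); between queues, absolute priority
# applies.  Only the between-queue rule is modeled here.
levels = [
    ("system",      deque(), "event-driven"),
    ("interactive", deque(), "round robin"),
    ("batch",       deque(), "FCFS"),
]

def dispatch():
    """Serve the highest-priority non-empty queue."""
    for name, queue, discipline in levels:
        if queue:
            return queue.popleft(), discipline
    return None, None

levels[2][1].append("simulation")    # a batch job arrives first
levels[1][1].append("student_prog")  # then an interactive program
proc, disc = dispatch()
print(proc, disc)                    # student_prog round robin
```

The batch job is selected only once both upper queues are empty; a real implementation would additionally preempt it on a higher-priority arrival, which this sketch does not model.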

Figure 3.2 Multilevel queue Scheduling

Multiple queues scheduling is a very general discipline that combines the

advantages of the "pure" mechanisms discussed earlier. MLQ scheduling may also

impose the combined overhead of its constituent scheduling disciplines. However,

assigning classes of processes that a particular discipline handles poorly by itself to a

more appropriate queue may offset the worst-case behavior of each individual discipline.

Potential advantages of MLQ were recognized early on by the O/S designers who have

employed it in the so-called fore-ground/background (F/B) system. A F/B system, in its

usual form, uses a two-level queue-scheduling discipline. The workload of the system is

divided into two queues: a high-priority (foreground) queue of interactive and time-critical

processes, and a background queue of processes that do not service external events. The

foreground queue is serviced

in the event-driven manner, and it can preempt processes executing in the background.

3.5.6 Multiple-Level Queues with Feedback Scheduling

Multiple queues in a system may be used to increase the effectiveness and

adaptiveness of scheduling in the form of multiple-level queues with feedback. Rather

than having fixed classes of processes allocated to specific queues, the idea is to make

traversal of a process through the system dependent on its run-time behavior. For

example, each process may start at the top-level queue. If the process is completed within

a given time slice, it departs the system after having received the royal treatment.

Processes that need more than one time slice may be reassigned by the operating system

to a lower-priority queue, which gets a lower percentage of the processor time. If the

process is still not finished after having run a few times in that queue, it may be moved

to yet another, lower-level queue. The idea is to give preferential treatment to short

processes and have the resource-consuming ones slowly "sink" into lower-level queues,

to be used as fillers to keep the processor utilization high. This philosophy is supported

by program-behavior research findings suggesting that completion rate has a tendency to

decrease with attained service. In other words, the more service a process receives, the

less likely it is to complete if given a little more service. Thus the feedback in MLQ

mechanisms tends to rank the processes dynamically according to the observed amount of

attained service, with a preference for those that have received less.

On the other hand, if a process surrenders control to the OS before its time slice

expires, it may be rewarded by being moved up in the hierarchy of queues. As before,

different queues may be serviced using different scheduling disciplines. In contrast to the ordinary

multiple-level queues, the introduction of feedback makes scheduling adaptive and

responsive to the actual, measured run-time behavior of processes, as opposed to the

fixed classification that may be defeated by incorrect guessing or abuse of authority. A

multiple-level queue with feedback is the most general scheduling discipline that may

incorporate any or all of the simple scheduling strategies discussed earlier. Its overhead

may also combine the elements of each constituent scheduler, in addition to the overhead

imposed by the global queue manipulation and the process-behavior monitoring

necessary to implement this scheduling discipline.
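The sink/promote rule can be sketched as a small level-transition function (the three-level layout and class name are illustrative):

```python
class FeedbackQueues:
    """Processes sink to lower-priority levels as they consume full
    quanta, and may be promoted when they yield before expiry."""
    def __init__(self, levels):
        self.lowest = levels - 1   # index of the lowest-priority queue

    def next_level(self, level, used_full_quantum):
        if used_full_quantum:                 # demote: it needed more service
            return min(level + 1, self.lowest)
        return max(level - 1, 0)              # reward an early, voluntary yield

mlfq = FeedbackQueues(3)
print(mlfq.next_level(0, used_full_quantum=True))   # 1: sank one level
print(mlfq.next_level(1, used_full_quantum=False))  # 0: moved back up
```

A short process departs from level 0 without ever sinking, while a long CPU-bound one drifts to the lowest queue, exactly the preferential treatment described above.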

4. SUMMARY

An important, although rarely explicit, function of process management is

processor allocation. Three different schedulers may coexist and interact in a complex

operating system: long-term scheduler, medium-term scheduler, and short-term

scheduler. Of the presented scheduling disciplines, FCFS scheduling is the easiest to

implement but is a poor performer. SRTN scheduling is optimal but unrealizable. RR

scheduling is most popular in time-sharing environments, and event-driven and earliest-

deadline scheduling are dominant in real-time and other systems with time-critical

requirements. Multiple-level queue scheduling, and its adaptive variant with feedback, is

the most general scheduling discipline suitable for complex environments that serve a

mixture of processes with different characteristics.

5. SUGGESTED READINGS / REFERENCE MATERIAL

 Operating System Concepts, 5th Edition, Silberschatz A., Galvin P.B., John Wiley

& Sons.

 Systems Programming & Operating Systems, 2nd Revised Edition, Dhamdhere

D.M., Tata McGraw Hill Publishing Company Ltd., New Delhi.

 Operating Systems, Madnick S.E., Donovan J.T., Tata McGraw Hill Publishing

Company Ltd., New Delhi.

 Operating Systems-A Modern Perspective, Gary Nutt, Pearson Education Asia,

2000.

 Operating Systems, Harris J.A., Tata McGraw Hill Publishing Company Ltd.,

New Delhi, 2002.

6. SELF-ASSESSMENT QUESTIONS (SAQ)

 What are the different states of a process?

 What is a process?

 Discuss various process scheduling policies with their cons and pros.

 Which type of scheduling is used in real life operating systems? Why?

 Which action should the short-term scheduler take when it is invoked but no

process is in the ready state? Is this situation possible?

 How can we compare performance of various scheduling policies before actually

implementing them in an operating system?

Directorate of Distance Education
Kurukshetra University
Kurukshetra-136119
PGDCA/MSc. (cs)-1/MCA-1

Paper- CS-DE-15 Writer: Dr. Sanjay Tyagi


Lesson No.7 Vetter: Dr. Pardeep Kumar
________________________________________________

FILE SYSTEMS

STRUCTURE

1. Introduction

2. Objective

3. Presentation of Contents

3.1 File Concepts

3.1.1 File Naming

3.1.2 File Types

3.1.3 File Operations

3.1.4 Symbolic Link

3.1.5 File Sharing & Locking

3.1.6 File-System Structure

3.1.7 File-System Mounting

3.1.8 File Attributes

3.2 File Access Methods

3.2.1 Sequential Access

3.2.2 Direct Access

3.2.3 Index-sequential

3.3 File Space Allocations

3.3.1 Contiguous Space Allocation

3.3.2 Linked Allocation

3.3.3 Indexed Allocation

3.3.4 Performance

3.4 Hierarchical Directory Systems

3.4.1 Directory Structure

3.4.2 The Logical Structure of a Directory

3.4.2.1 Single-level Directory

3.4.2.2 Two-level Directory

3.4.2.3 Tree-structured Directories

3.4.2.4 Acyclic-Graph Directories

3.4.2.5 General Graph Directory

3.4.3 Directory Operations

3.5 File Protection and Security

3.5.1 Type of Access

3.5.2 Access Lists and Groups

3.5.3 Other Protection Approaches

4. Summary

5. Suggested Readings / Reference Material

6. Self Assessment Questions (SAQ)

1. INTRODUCTION

The file system provides the mechanism for the storage and retrieval of both

data and programs of the operating system. The file system consists of two distinct parts:

a collection of files, each storing associated data and a directory structure, which

organizes and provides information about all the files in the system. Some file systems

have a third part, partitions, which are used to separate physically or logically large

collections of directories.

2. OBJECTIVES

In this lesson, we discuss various file concepts and the variety of data structures. File

Access Methods such as Sequential Access, Direct Access and Index-sequential, along with

File Space Allocations such as Contiguous Space Allocation, Linked Allocation and Indexed

Allocation will also be discussed. We have also elaborated the Logical Structure of

Directory i.e. Single-level Directory, Two-level Directory, Tree-structured Directories,

Acyclic-Graph Directories & General Graph Directory and various Directory Operations.

We also discuss the ways to handle file protection, which is necessary when multiple

users have access to files and where it is usually desirable to control by whom and in

what ways files may be accessed.

3. PRESENTATION OF CONTENTS

3.1 FILE CONCEPTS

The file system provides the means for controlling online storage and access to both data

and programs. The file system resides permanently on secondary storage, because a file

system must be able to hold a large amount of data permanently.

concerned with file storage and access on the most common secondary

storage medium, the disk. We look at the ways to allocate disk space, to recover freed

space, to track the locations of data, and to interface other parts of the operating system to

secondary storage.

3.1.1 File Naming

Each file is a distinct entity and therefore a naming convention is required to distinguish

one from another. The operating systems generally employ a naming system for this

purpose. In fact, there is a naming convention to identify each resource in the computer

system and not files alone.

3.1.2 File Types

The files under UNIX / LINUX can be categorized as follows:

 Ordinary files.

 Directory files.

 Special files.

These file types are discussed below.

Ordinary Files

Ordinary files may contain executable programs, text files, binary files or databases. You

can add, modify or delete them or remove the file entirely.

Directory Files

Directory files are the one that contain information that the system needs to access all

types of files such as list of file names and other information related to these files, but

directory files do not contain the actual file data. Some commands which are used for

manipulation of directory files differ from those for ordinary files.

Special Files

Special files are also known as device files. These files represent physical devices such as

terminals, disks, printers and tape-drives etc. These files are read from or written into

similar to ordinary files, but the operation on these files activates some physical devices.

These files can be of two types (i) character device files and (ii) block device file. In

character device files, data are handled character by character while in block device files,

data are handled in large chunks of blocks, as in the case of disks and tapes.

3.1.3 File operations

Major file operations are:

 Read operation

 Write operation

 Execute

 Copying file

 Renaming file

 Moving file

 Deleting file

 Creating file

 Merging files

 Sorting file

 Appending file

 Comparing file

3.1.4 Symbolic Link

A link is a special file that contains a reference to another file or subdirectory in the form

of an absolute or relative path name. When a reference to a file is made, we search the

directory. The directory entry is marked as a link and the name of the real file (or

directory) is given. We determine the link by using the path name to locate the real file.

Links are easily identified by their format in the directory entry (or by their having a

special type on systems that support types), and are also called indirect pointers.

A symbolic link can be deleted without deleting the actual file it links. There can be any

number of symbolic links attached to a single file.

Symbolic links are useful in sharing a single file called by different names. Each time a

link is created, the reference count in its inode is incremented by one, whereas deletion

of a link decreases the reference count by one. The operating system cannot delete files

whose reference count is not zero, because a non-zero reference count indicates that the file is in

use.

In a system where symbolic links are used, the deletion of a link does not need to affect

the original file; only the link is removed. If the file entry itself is deleted, the space for

the file is de-allocated, leaving the links dangling. We can search for these links and

remove them also, but unless a list of the associated link is kept with each file, this search

can be expensive. Alternatively, we can leave the links until an attempt is made to use

them. At that time, we can determine that the file of the name given by the link does not

exist, and can fail to resolve the link name; the access is treated just like any other illegal

file name. In the case of UNIX/LINUX, symbolic links are left when a file is deleted, and

it is up to the user to realize that the original file is gone or has been replaced.

Another approach to deletion is to preserve the file until all references to it are deleted.

To implement this approach, we must have some mechanism for determining that the last

reference to the file has been deleted. We could keep a list of all references to a file

(directory entries or symbolic links). When a link or a copy of the directory entry is

established, a new entry is added to the file-reference list. When a link or directory entry

is deleted, we remove its entry on the list. The file is deleted when its file-reference list is

empty.

The trouble with this approach is the variable and potentially large size of the file-

reference list. However, we need to keep only a count of the number of references. A new

link or directory entry increments the reference count; deleting a link or entry decrements

the count. Once the count is 0, the file will be deleted; there are no remaining references

to it. The UNIX/LINUX operating system uses this approach for non-symbolic links, or

hard links, keeping a reference count within the file information block (or inode). By

effectively prohibiting multiple references to directories, we maintain

an acyclic-graph structure.

To avoid these issues, some systems do not permit shared directories or links. For example,

in MS-DOS, the directory structure is a tree structure.
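The hard-link reference-count rule can be sketched with a toy inode model (an illustration of the bookkeeping, not the real UNIX data structure):

```python
class Inode:
    """Toy model of hard-link reference counting: the file's space is
    freed only when the count of directory entries referring to the
    inode drops to zero."""
    def __init__(self):
        self.ref_count = 1          # the original directory entry
        self.freed = False

    def add_link(self):
        self.ref_count += 1         # e.g. a new hard link is created

    def remove_link(self):
        self.ref_count -= 1         # e.g. a directory entry is deleted
        if self.ref_count == 0:     # last reference gone:
            self.freed = True       # space may now be de-allocated

inode = Inode()
inode.add_link()                    # a second hard link is made
inode.remove_link()                 # the original name is removed
print(inode.freed)                  # False: one link still refers to it
inode.remove_link()
print(inode.freed)                  # True: no references remain
```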

3.1.5 File Sharing and Locking

In a multi-user environment, a file may need to be shared among more than one user.

There are many techniques and approaches to effect this operation. A simple approach is to

copy the file to the user’s local hard disk. This approach primarily creates different files

and therefore cannot be treated as file sharing.

A file can be shared in three different modes:

 Read only: In this mode, the user can only read or copy the file.

 Linked shared: In this mode, all the users sharing the file can make changes in this

file. However, the changes are reflected in the order determined by the operating

systems.

 Exclusive mode: In this mode, the file is acquired by a single user, who alone can

make changes to it while others can only read or copy it.

Another approach is to share a file through symbolic links. This approach poses a few

issues: the concurrent update problem and the deletion problem. If two users try to update the

same file, only one of the updates will be reflected at a time. Besides, another user

must not delete a file while it is in use.

Locking is a mechanism through which operating systems make sure that the user making

changes to the file is the one who has the lock on the file. As long as the lock remains

with this user, no other user can make changes in the file.
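The exclusive-lock rule can be sketched with an in-memory lock table (an illustration of the semantics only, not a real operating-system API; the file and user names are invented):

```python
class LockTable:
    """Sketch of exclusive file locking: only the lock holder may
    modify a file; other users are refused until it is released."""
    def __init__(self):
        self.holder = {}            # file name -> user holding the lock

    def acquire(self, filename, user):
        # Refuse if some other user already holds the lock; re-acquiring
        # one's own lock succeeds.
        if self.holder.get(filename, user) != user:
            return False
        self.holder[filename] = user
        return True

    def release(self, filename, user):
        if self.holder.get(filename) == user:
            del self.holder[filename]

locks = LockTable()
print(locks.acquire("report.txt", "alice"))  # True: lock granted
print(locks.acquire("report.txt", "bob"))    # False: alice holds it
locks.release("report.txt", "alice")
print(locks.acquire("report.txt", "bob"))    # True: lock was released
```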

3.1.6 File-System Structure

Disks provide the large secondary storage on which a file system is maintained. To

enhance I/O efficiency, I/O transfers between memory and disks are performed in units of

blocks. Each block is one or more sectors. Depending on the disk drive, sectors vary from

32 bytes to 4096 bytes; usually, they are 512 bytes. Disks have two vital characteristics

that make them a convenient medium for storing multiple files:

 They can be rewritten in place; it is possible to read a block from the disk, to

modify the block, and to write it back into the same place.

 One can access directly any given block of information on the disk. Thus, it is

easy to access any file either sequentially or randomly, and switching from

one file to another requires only moving the read-write heads and

waiting for the disk to rotate.

To provide an efficient and convenient access to the disk, the operating system imposes a

file system to allow the data to be stored, located, and retrieved easily. A file system

poses two different design issues. The first problem is defining how the file system

should look to the user. This task involves the definition of a file and its attributes,

operations allowed on a file and the directory structure for organizing the files. Next,

algorithms and data structure must be created to map the logical file system onto the

physical secondary storage devices.

3.1.7 File-System Mounting

Just as a file must be opened before it is used, a file system must be mounted before it can

be available to processes on the system. The mount procedure is simple. The operating

system is given the name of the device and also the location within the file structure at

which to attach the file system (called the mount point). For example, on the

UNIX/LINUX system, a file system containing user’s home directory might be mounted

as /home; then, to access the directory structure within that file system, one could precede

the directory names with /home, as in /home/sanjay. Mounting that file system under

/users would result in the path name /users/sanjay to reach the same directory.

Next, the operating system verifies that the device contains a valid file system. It does so

by asking the device driver to read the device directory and verifying that the directory

has the expected format. Finally, the operating system notes in its directory structure that a

file system is mounted at the specified mount point. This scheme enables the operating

system to traverse its directory structure, switching among file systems as appropriate.

Consider the actions of the Macintosh Operating System. Whenever the system

encounters a disk for the first time (hard disks are found at boot time, floppy disks are

seen once they are inserted into the drive), the Macintosh Operating System searches for

a file system on the device. If it finds one, it automatically mounts the file system at the

boot level and adds a folder icon to the screen, labelled with the name of the file system (as

stored in the device directory). The user is then ready to click on the icon and thus to

display the recently mounted file system.

3.1.8 File attributes

Attributes are properties of a file. The operating system treats a file according to its

attributes. Following are a few common attributes of a file:

 H for hidden

 A for archive

 D for directory

 X for executable

 R for read only

 W for Write

These attributes can be used in combination also.

3.2 FILE ACCESS METHODS

Files store information which, when needed, may be read into main memory. There

are several different ways, in which the data stored in a file may be accessed for reading

and writing. The operating system is responsible for supporting these file access methods.

The file access methods are discussed as under:

3.2.1 Sequential access

A sequential file is the most primitive of all file structures. It has no directory and no

linking pointers. The records are usually organized in lexicographic order on the value of

some key. In other words, a specific attribute is chosen whose value will determine the

order of the records. Sometimes when the attribute value is constant for a large number of

records, a second key is chosen to give an order when the first key fails to distinguish.

The implementation of this file structure requires the use of a sorting routine.

Its main advantages are:

 It is simple to implement

 It provides fast access to the next record using lexicographic order.

Its disadvantages are:

 It is difficult to update - inserting a new record may require moving a large proportion

of the file

 Random access is very slow.

Sometimes a file is considered to be sequentially organised despite the fact that it is not

ordered according to any key. Perhaps the date of acquisition is considered to be the key

value; the newest entries are added to the end of the file, and so there is no difficulty in

updating.

3.2.2 Direct Access

Sometimes it is not necessary to process every record in a file. It may not be necessary to

process records in the order in which they are present. Information present in a record of

a file is to be accessed only if some key value in that record is known. In all such cases,

direct access is used. Direct access is based on the disk, which is a direct-access device and

allows random access of any file block. Since a file is a collection of physical blocks, any

block, and hence the records in that block, can be accessed. Master files are an example.

Databases are often of this type since they allow query processing that involves

immediate access to large amounts of information. All reservation systems fall into this

category. Not all operating systems support direct access files. Usually files are to be

defined as sequential or direct at the time of creation and accessed accordingly later.

Sequential access of a direct access file is possible but direct access of a sequential file is

not.

3.2.3 Index Sequential Access

This access method is a slight modification of the direct access method. It is in fact a

combination of both the sequential access as well as direct access. The main concept is to

access a file direct first and then sequentially from that point onwards. This access

method involves maintaining an index. The index is a pointer to a block. To access a

record in a file, a direct access of the index is made. The information obtained from this

access is used to access the file. For example, the direct access to a file will give the

block address and within the block the record is accessed sequentially. Sometimes

indexes may be big. So hierarchies of indexes are built in which one direct access of an

index leads to the information needed to access another index directly, and so on, till the actual file is

accessed sequentially for the particular record. The main advantage in this type of access

is that both direct and sequential access of files is possible.
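The index-then-sequential lookup can be sketched as follows; the one-level index layout, block contents and key values here are all hypothetical:

```python
from bisect import bisect_right

# Hypothetical one-level index: sorted (first_key_in_block, block_no) pairs.
index = [(100, 0), (200, 1), (300, 2)]
blocks = {
    0: [(100, "rec-a"), (150, "rec-b")],
    1: [(200, "rec-c"), (250, "rec-d")],
    2: [(300, "rec-e")],
}

def isam_lookup(key):
    """Direct access to the index, then a sequential scan of one block."""
    keys = [k for k, _ in index]
    pos = bisect_right(keys, key) - 1      # direct step: pick the block
    if pos < 0:
        return None
    _, block_no = index[pos]
    for k, record in blocks[block_no]:     # sequential step: scan the block
        if k == key:
            return record
    return None
```

A hierarchy of indexes, as described above, would replace the single `index` list with further direct accesses before the final sequential scan.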

3.3 FILE ALLOCATION METHODS

The direct-access nature of disks permits flexibility within the implementation of files. In

almost every case, several files will be stored on the same disk. One main problem in file

management is how to allocate space for files so that disk space is utilized effectively and

files can be accessed quickly. Three major methods of allocating disk space are

contiguous, linked, and indexed. Each method has its advantages and disadvantages.

Accordingly, some systems support all three (e.g. Data General's RDOS). More

commonly, a system will use one particular method for all files.

3.3.1 Contiguous Allocation

The contiguous allocation method requires each file to occupy a set of contiguous

addresses on the disk. Disk addresses define a linear ordering on the disk. Notice that, with

this ordering, accessing block b+1 after block b normally requires no head movement.

When head movement is needed (from the last sector of one cylinder to the first sector of

the next cylinder), it is only one track. Thus, the number of disk seeks required for

accessing contiguously allocated files is minimal, as is seek time when a seek is finally

needed. Contiguous allocation of a file is thus defined by the disk address of its first block and its length in blocks. If the file is n blocks long and starts at location b, then it occupies blocks

b, b+1, b+2, …, b+n-1. The directory entry for each file indicates the address of the

starting block and the length of the area allocated for this file.
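The logical-to-physical mapping under contiguous allocation is a single addition, which a short sketch makes concrete (the block numbers are illustrative):

```python
def contiguous_block(start, length, i):
    """Disk address of logical block i of a file that starts at disk
    block `start` and occupies `length` contiguous blocks."""
    if not 0 <= i < length:
        raise IndexError("block index outside the file")
    return start + i

# A file starting at block b = 17 with n = 5 blocks occupies blocks 17..21.
```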

The difficulty with contiguous allocation is finding space for a new file. If the file to be

created is n blocks long, then the OS must search for n free contiguous blocks. First-fit,

best-fit, and worst-fit strategies are the most common strategies used to select a free hole

from the set of available holes. Simulations have shown that both first-fit and best-fit are

better than worst-fit in terms of both time & storage utilization. Neither first-fit nor best-

fit is clearly best in terms of storage utilization, but first-fit is generally faster.
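The two common strategies can be sketched as follows; the hole list is hypothetical, with each hole given as a (start, size) pair:

```python
def first_fit(holes, n):
    """Return the start of the first hole large enough for n blocks."""
    for start, size in holes:
        if size >= n:
            return start
    return None

def best_fit(holes, n):
    """Return the start of the smallest hole large enough for n blocks."""
    fits = [(size, start) for start, size in holes if size >= n]
    return min(fits)[1] if fits else None

holes = [(0, 3), (10, 8), (30, 5)]
# For a 5-block request, first-fit stops at the 8-block hole at 10,
# while best-fit picks the exactly-fitting 5-block hole at 30.
```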

These algorithms also suffer from external fragmentation. As files are allocated and

deleted, the free disk space is broken into little pieces. External fragmentation exists

when enough total disk space exists to satisfy a request, but this space is not contiguous;

storage is fragmented into a large number of small holes.

Another problem with contiguous allocation is determining how much disk space is

needed for a file. When the file is created, the total amount of space it will need must be

known and allocated. How does the creator (program or person) know the size of the file

to be created? In some cases, this determination may be fairly simple (e.g. copying an

existing file), but in general the size of an output file may be difficult to estimate.

3.3.2 Linked Allocation

The problems in contiguous allocation can be traced directly to the requirement that the

spaces be allocated contiguously and that the files that need these spaces are of different

sizes. These requirements can be avoided by using linked allocation.

In linked allocation, each file is a linked list of disk blocks. The directory contains a

pointer to the first and (optionally the last) block of the file. For example, a file of 5

blocks which starts at block 4, might continue at block 7, then block 16, block 10, and

finally block 27. Each block contains a pointer to the next block and the last block

contains a NIL pointer. The value -1 may be used for NIL to differentiate it from block 0.

With linked allocation, each directory entry has a pointer to the first disk block of the

file. This pointer is initialized to nil (the end-of-list pointer value) to signify an empty

file. A write to a file removes the first free block and writes to that block. This new block

is then linked to the end of the file. To read a file, the pointers are just followed from

block to block.
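Reading the 5-block file from the example above amounts to following the chain of pointers; the next-block table below is the hypothetical on-disk state:

```python
NIL = -1  # end-of-list marker, distinguished from block 0 as noted above

# block 4 -> 7 -> 16 -> 10 -> 27 -> NIL, the file from the example
next_block = {4: 7, 7: 16, 16: 10, 10: 27, 27: NIL}

def read_ith_block(first, i):
    """Follow i pointers from the first block; each hop costs a disk read."""
    b = first
    for _ in range(i):
        b = next_block[b]
        if b == NIL:
            raise IndexError("file is shorter than i+1 blocks")
    return b
```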

There is no external fragmentation with linked allocation. Any free block can be used to

satisfy a request. Notice also that there is no need to declare the size of a file when that

file is created. A file can continue to grow as long as there are free blocks.

Linked allocation does have disadvantages, however. The major problem is that it is

inefficient to support direct-access; it is effective only for sequential-access files. To find

the ith block of a file, we must start at the beginning of that file and follow the pointers

until the ith block is reached. Note that each access to a pointer requires a disk read.

Another severe problem is reliability. A bug in the OS or a disk hardware failure might result in pointers being lost or damaged. The result could be following a wrong pointer and linking the file into a free block or into another file.

3.3.3 Indexed Allocation

The indexed allocation method addresses the problems of both contiguous and linked allocation. It does so by bringing all the pointers together into one location

called the index block. Of course, the index block will occupy some space and thus could

be considered as an overhead of the method.

In indexed allocation, each file has its own index block, which is an array of disk-sector addresses. The ith entry in the index block points to the ith sector of the file. The

directory contains the address of the index block of a file (Figure 7.1). To read the ith

sector of the file, the pointer in the ith index block entry is read to find the desired sector.

Figure 7.1 Indexed allocation of disk space

Indexed allocation supports direct access, without suffering from external fragmentation.

Any free block anywhere on the disk may satisfy a request for more space.
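The lookup path can be sketched with a hypothetical directory entry and index block; the addresses are illustrative:

```python
# Hypothetical on-disk state: the directory stores, for each file, the
# address of its index block; that index block lists the file's blocks.
directory = {"book": 19}             # file name -> index-block address
disk = {19: [9, 16, 1, 10, 25]}      # index block 19 holds the pointers

def ith_block(name, i):
    """One read of the index block, then the ith pointer gives the
    data block directly -- no chain of pointers to follow."""
    index_block = disk[directory[name]]
    return index_block[i]
```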

3.3.4 Performance

The allocation methods that we have mentioned vary in their storage efficiency and data-

block access times. Both are necessary criteria in selecting the proper method or methods

for an operating system to implement.

One difficulty in comparing the performance of the various systems is determining how

the systems will be used. A system used mainly for sequential access should choose a method different from that chosen by a system used mainly for random access. For any type of

access, contiguous allocation requires only one access to get a disk block. Since we can

easily keep the initial address of the file in memory, we can calculate immediately the

disk address of the ith block (or the next block) and read it directly.

For linked allocation, we can also keep the address of the next block in memory and read

it directly. This method is fine for sequential access; for direct access, however, an access

to the ith block might require i disk reads. This problem illustrates why linked allocation

should not be used for an application requiring direct access.

As a result, some systems support direct-access files by using contiguous allocation and

sequential access by linked allocation. For these systems, the type of access to be made

must be declared when the file is created. A file created for sequential access will be

linked and cannot be used for direct access. A file created for direct access will be

contiguous and can be used for both direct access and sequential access, but its maximum

length must be declared with it.

3.4 HIERARCHICAL DIRECTORY SYSTEMS

In a typical file storage system, numerous files are to be stored on storage of gigabyte capacity. To handle such a situation, the files must be organized. The organization is usually done in two parts. In the first part, the file system is broken into partitions, also

known as minidisks or volumes. Typically, a disk contains minimum of one partition,

which is a low-level structure in which files and directories reside. Sometimes, there may

be more than one partition on a disk, each partition acting as a virtual disk. The users do

not have to concern themselves with the translating the physical address; the system does

the required job.

Figure 7.2 Directory Hierarchy

Each partition contains information about itself in a file called the partition table. It also contains information about the files and directories on it. Typical file information is name, size, type,

location etc. The entries are kept in a device directory or volume table of contents.

Directories can be created within another directory. Directories have parent-child

relationship as shown in Figure 7.2.

3.4.1 Directory Structure

The file systems of computers can be extensive. Some systems store thousands of files on

hundreds of gigabytes of disk. To manage all these data, we should organize them. This

organization is generally done in two parts; first, the file system is broken into partitions, also known as minidisks in the IBM world or volumes in the PC and Macintosh arenas. Sometimes, partitions are used to

provide several separate areas within one disk, each treated as a separate storage device,

whereas other systems allow partitions to be larger than a disk to group disks into one

logical structure. In this way, the user needs to be concerned with only the logical

directory and file structure, and can ignore completely the problems of physically

allocating space for files. For this reason partitions can be thought of as virtual disks.

Second, every partition contains information about files within it. This information is

kept in a device directory or volume table of contents. The device directory records

information such as name, location, size, and type for all files on that partition.

3.4.2 The Logical Structure of a Directory

3.4.2.1 Single-Level Directory

In a single-level directory system, all the files are placed in one directory. This is very

common on single-user OS's.

A single-level directory has significant limitations, however, when the number of files

increases or when there is more than one user. Since all files are in the same directory,

they must have unique names. If there are two users who call their data file "test", then

the unique-name rule is violated. Although file names are generally selected to reflect the

content of the file, they are often quite limited in length.

Even with a single-user, as the number of files increases, it becomes difficult to

remember the names of all the files in order to create only files with unique names.

Figure 7.3 Single-level Directory

3.4.2.2 Two-Level Directory

In the two-level directory system, the system maintains a master block that has one entry

for each user. This master block contains the addresses of the directory of the users.

There are still problems with the two-level directory structure. This structure effectively

isolates one user from another. This is an advantage when the users are completely

independent, but a disadvantage when the users want to cooperate on some task and

access files of other users. Some systems simply do not allow local files to be accessed

by other users.

Figure 7.4 Two-Level Directory

3.4.2.3 Tree-Structured Directories

In the tree-structured directory, the directories themselves are files. This leads to the

possibility of having sub-directories that can contain files and sub-subdirectories.

An interesting policy decision in a tree-structured directory structure is how to handle the

deletion of a directory. If a directory is empty, its entry in its containing directory can

simply be deleted. However, suppose the directory to be deleted is not empty, but

contains several files, or possibly sub-directories. Some systems will not delete a

directory unless it is empty. Thus, to delete a directory, someone must first delete all the

files in that directory. If there are any sub-directories, this procedure must be applied

recursively to them, so that they can be deleted also. This approach may result in a substantial amount of work. An alternative approach is just to assume that, when a

request is made to delete a directory, all of that directory's files and sub-directories are

also to be deleted.
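The recursive alternative can be sketched over a toy in-memory tree, with dicts standing in for directories and None for plain files:

```python
def delete_recursive(tree, name):
    """Delete entry `name` from directory `tree`, first deleting all
    files and sub-directories it contains (the recursive policy)."""
    entry = tree.pop(name)
    if isinstance(entry, dict):            # it is a directory
        for child in list(entry):
            delete_recursive(entry, child)

# Hypothetical file system: deleting "home" removes its files and
# its sub-directory "docs" along with everything inside it.
fs = {"home": {"a.txt": None, "docs": {"b.txt": None}}, "tmp": {}}
delete_recursive(fs, "home")
```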

Figure 7.5 Tree-Structured Directories

3.4.2.4 Acyclic-Graph Directories

The acyclic directory structure is an extension of the tree-structured directory structure.

In the tree-structured directory, files and directories starting from some fixed directory

are owned by one particular user. In the acyclic structure, this restriction is removed, and thus a directory or file can be shared by several users.

Figure 7.6 Acyclic-Graph Directories

3.4.2.5 General Graph Directory

One major problem with using an acyclic graph structure is ensuring that there are no cycles.

If we start with a two-level directory and allow users to create subdirectories, a tree-

structured directory is created. It should be very easy to see that simply adding new files

and subdirectories to existing tree structure preserves the tree-structured nature. However,

when we add links to an existing tree-structured directory, the tree structure is destroyed,

resulting in a simple graph structure.

The main advantage of an acyclic graph is the relative simplicity of the algorithms to

traverse the graph and to determine when there are no more references to a file. We want to avoid traversing shared sections of an acyclic graph twice, mainly for

performance reasons. If we have just searched a major shared subdirectory for a

particular file, without finding that file, we want to avoid searching that subdirectory

again; the second search would be a waste of time.

Figure 7.7 General Graph Directory

If cycles are allowed to exist in the directory, we likewise want to avoid searching any

component twice, for reasons of correctness as well as performance. A poorly designed

algorithm might result in an infinite loop, repeatedly searching through the cycle

and never terminating. One solution is to arbitrarily limit the number of directories,

which will be accessed during a search.

A similar problem exists when we are trying to find out when a file can be deleted. As

with acyclic-graph directory structures, a value zero in the reference count means that

there are no more references to the file or directory, and the file can be deleted. However,

it is also possible, when cycles exist, that the reference count is nonzero even when it is no longer possible to refer to a directory or file. This anomaly results from the

possibility of self-referencing (a cycle) in the directory structure. In this case, it is usually

necessary to use a garbage collection scheme to determine when the last reference has

been deleted and the disk space can be reallocated.

3.4.3 Directory Operations

The directory can be viewed as a symbol table that translates file names into their

directory entries. If we take such a view, then it becomes obvious that the directory itself

can be organized in many ways. We want to be able to insert entries, to delete entries, to

search for a named entry, and to list all the entries in the directory.

Now, we examine several schemes for defining the logical structure of the directory

system. When considering a particular directory structure, we have to keep in mind the

operations that are to be performed on a directory:

 Search for a file: We need to be able to search a directory structure to

find the entry for a particular file. Since files have symbolic names and similar

names may indicate a relationship between files, we may want to be able to

find all files whose names match a particular pattern.

 Create a file: New files need to be created and added to the directory.

 Delete a file: When a file is no longer needed, we want to remove it from

the directory.

 List a directory: We need to be able to list the files in a directory and the

contents of the directory entry for each file in the list.

 Rename a file: Because the name of a file represents its contents to its

users, the name must be changeable when the contents or use of the file

changes. Renaming a file may also allow its position within the directory

structure to be changed.

 Traverse the file system: It is useful to be able to access every directory and

every file within a directory structure. For reliability it is a good idea to save

the contents and structure of the entire file system at regular intervals. This

saving often consists of copying all files to magnetic tape. This technique

provides a backup copy in case of system failure or if the file is simply no

longer in use. In this case, the file can be copied to tape, and the disk space of

that file released for reuse by another file.

 Copying a directory: A directory may be copied from one location to another.

 Moving a directory: A directory may be moved from one location to a new

location with all its contents.

3.5 FILE PROTECTION & SECURITY

When information is kept in a computer system, a significant concern is its protection

from both physical damage (reliability) and improper access (protection).

Duplicate copies of files generally provide reliability. Many computers have systems

programs that automatically (or through computer-operator intervention) copy disk files

to tape at regular intervals (once per day or week or month) to maintain a copy should a

file system be accidentally destroyed.

File systems can be damaged by hardware problems (such as errors in reading or

writing), power surges or failures, head crashes, dirt, temperature extremes, and

vandalism. Files may be deleted accidentally. Bugs in the file-system software can also

cause file contents to be lost.

Protection can be provided in different ways. For a small single-user system, we might

provide protection by physically removing the floppy disks and locking them in a desk

drawer or file cabinet. In a multi-user system, however, other mechanisms are needed.

3.5.1 Types of Access

The necessity for protecting files is a direct result of the ability to access files. On

systems that do not allow access to the files of other users, protection is not required.

Thus, one extreme would be to provide complete protection by prohibiting access. The

other extreme is to provide free access with no protection. Both of these approaches are

too extreme for general use. What is needed is controlled access.

Protection mechanisms give controlled access by limiting the kinds of file access that can

be made. Access is granted or denied depending on many factors, one of which is the

type of access requested. Several different types of operations may be controlled:

 Read - Read from the file.

 Write - Write or rewrite the file.

 Execute - Load the file into memory and execute it.

 Append - Write new information at the end of the file.

 Delete - Delete the file and free its space for possible reuse.

 List - List the name and attributes of the file.

Other operations, such as renaming, copying, or editing the file, may also be controlled.

For many systems, however, these higher-level functions (such as copying) may be

implemented by a system program that makes lower-level system calls. Protection is

provided at only the lower level. For instance, copying a file may be implemented simply

by a sequence of read requests. In this case, a user with read access can also cause the file

to be copied, printed, and so on.

Many different protection mechanisms have been proposed. Each scheme has its

advantages and disadvantages and must be selected as appropriate for its intended

application. A small computer system that is used by only a few members of a research

group may not need the same types of protection as will a large corporate computer that

is used for research, finance, and personnel operations.

3.5.2 Access Lists and Groups

The most common approach to the protection problem is to make access dependent on

the identity of the user. Various users may need different types of access to a file or

directory. The most general scheme to implement identity-dependent access is to

associate with each file and directory an access list, specifying the user name and the

types of access allowed for each user.

When a user requests access to a specific file, the operating system first checks the access

list associated with that file. If that user is listed for the requested access, the access is

allowed. Otherwise, a protection violation occurs, and the user job is denied access to the

file.
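The check itself is a simple membership test on the access list; the user names and rights below are illustrative:

```python
# Hypothetical access list: user name -> set of allowed access types.
acl = {
    "sara": {"read", "write", "execute", "delete"},
    "jim":  {"read", "write"},
}

def check_access(user, op):
    """Allow the request only if the user is listed for that access type;
    anyone not on the list is denied (a protection violation)."""
    return op in acl.get(user, set())
```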

The main problem with access lists is their length. If we want to allow everyone to read a

file, we must list all users with read access. This technique has two undesirable

consequences:

 Constructing such a list may be a tedious and unrewarding task, especially if

we do not know in advance the list of users in the system.

 The directory entry that previously was of fixed size needs now to be of

variable size, resulting in space management being more complicated.

These problems can be resolved by use of a condensed version of the access list. To

condense the length of the access list, many systems recognize three classifications of

users in connection with each file:

 Owner - The user who created the file is the owner

 Group - A set of users who are sharing the file and need similar access is a

group or workgroup.

 Universe - All other users in the system constitute the universe.

As an example, consider a person, Sara, who is writing a new book. She has hired three

graduate students (Jim, Dawn, and Jill) to help with the project. The text of the book is

kept in a file named book. The protection associated with this file allows:

 Sara should be able to invoke all operations on the file.

 Jim, Dawn, and Jill should be able only to read and write the file; they should

not be allowed to delete the file.

 All others users should be able to read the file. (Sara is interested in letting as

many people as possible read the text so that she can obtain appropriate

feedback.)

To achieve such a protection, we must create a new group, say text, with members Jim,

Dawn, and Jill. The name of the group text must be then associated with the file book,

and the access right must be set in accordance with the policy we have outlined.

Note that, for this scheme to work properly, group membership must be controlled

tightly. This control can be accomplished in a number of different ways. For example, in

the UNIX system, groups can be created and modified by only the manager of the facility

(or by any super-user). Thus, this control is achieved through human interaction. In the

VMS system, with each file, an access list (also known as an access control list) may be

associated, listing those users who can access the file. The owner of the file can create

and modify this list. Access lists are discussed above.

With this more limited protection classification, only three fields are needed to define

protection. Each field is often a collection of bits, each of which either allows or prevents

the access associated with it. For example, the UNIX system defines three fields of 3 bits

each: rwx, where r controls read access, w controls write access and x controls execution.

A separate field is kept for the file owner, for the owner's group and for all other users. In

this scheme, 9 bits per file are needed to record protection information. Thus, for our

example, the protection fields for the file book are as follows: for the owner Sara, all 3 bits are set; for the group text, the r and w bits are set; and for the universe, only the r bit is set. Notice,

however, that this scheme is not as general as is the access-list scheme.
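Decoding the 9 bits can be sketched as follows; the octal value 0o764 corresponds to the book example (all bits for the owner, r and w for the group, r for the universe):

```python
def decode_mode(bits):
    """Decode 9 protection bits into rwx strings for owner, group and
    universe, as in the UNIX scheme described above."""
    out = []
    for shift in (6, 3, 0):                 # owner, group, universe fields
        triple = (bits >> shift) & 0b111
        out.append("".join(c if triple & m else "-"
                           for c, m in zip("rwx", (4, 2, 1))))
    return out
```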

3.5.3 Other Protection Approaches

Another approach to the protection problem is to associate a password with each file. Just

as access to the computer system itself is often controlled by a password, access to each

file can be controlled by a password.

If the passwords are chosen randomly and changed frequently, this method may be

effective in limiting access to a file to only those users who know the password.

There are a number of disadvantages to this scheme. First, if we associate a separate

password with each file, then the number of passwords that a user needs to remember

may become very large, making the scheme impractical. If only one password is used for

all the files, then, once it is exposed, all files are accessible. Some systems (for example,

TOPS-20) allow a user to associate a password with a subdirectory, rather than with an

individual file, to deal with this problem. The IBM VM/CMS operating system allows

three passwords for a minidisk: one each for read, write, and multiwrite access. Second,

commonly, only one password is associated with each file. Hence, protection is on an all-

or-nothing basis. To provide protection on a more detailed level, we must use multiple

passwords.

Limited file protection is also currently available on single-user systems, such as MS-DOS and the Macintosh operating system. These operating systems, when originally

designed, essentially ignored dealing with the protection problem. However, since these

systems are being placed on networks where file sharing and communication is

necessary, protection mechanisms have to be retrofitted into the operating system. Note

that it is almost always easier to design a feature into a new operating system than it is to

add a feature to an existing one. Such updates are usually less effective and are not

seamless.

We should note that, in a multilevel directory structure, we not only need to protect

individual files, but also to protect collections of files contained in a subdirectory, that is,

we need to provide a mechanism for directory protection.

The directory operations that must be protected are somewhat different from the file

operations. We want to control the creation and deletion of files in a directory. In

addition, we probably want to control whether a user can determine the existence of a file

in a directory. Sometimes, knowledge of the existence and name of a file may be

significant in itself. Thus, listing the contents of a directory must be a protected

operation. Therefore, if a path name refers to a file in a directory, the user must be

allowed access to both the directory and the file. In systems, where files may have

numerous path names (such as acyclic or general graphs), a given user may have different

access rights to a file, depending on the path name used.

4. SUMMARY

The file system resides permanently on secondary storage, which has the main

requirement that it must be able to hold a large amount of data, permanently.

The various files can be allocated space on the disk in three ways: through contiguous, linked or indexed allocation. Contiguous allocation can suffer from external

fragmentation. Direct-access is very inefficient with linked-allocation. Indexed allocation

may require substantial overhead for its index block. There are many ways in which these

algorithms can be optimised.

Free space allocation methods also influence the efficiency of the use of disk space, the

performance of the file system and the reliability of secondary storage.

5. SUGGESTED READINGS / REFERENCE MATERIAL

 Operating System Concepts, 5th Edition, Silberschatz A., Galvin P.B.,

John Wiley & Sons.

 Systems Programming & Operating Systems, 2nd Revised Edition,

Dhamdhere D.M., Tata McGraw Hill Publishing Company Ltd., New

Delhi.

 Operating Systems, Madnick S.E., Donovan J.T., Tata McGraw Hill

Publishing Company Ltd., New Delhi.

 Operating Systems-A Modern Perspective, Gary Nutt, Pearson Education

Asia, 2000.

 Operating Systems, Harris J.A., Tata McGraw Hill Publishing Company

Ltd., New Delhi, 2002.

6. SELF-ASSESSMENT QUESTIONS (SAQ)

 Discuss various file access methods.

 Explain various file allocation methods

 What are the limitations of acyclic directory structure?

 Which file operations are applicable to directories? Which are not?

 How is a directory different from a file?

 What is the significance of file attributes?

 Differentiate between a file and a file system.

 What is formatting a disk?

 How is moving a file different from copying?

CS-DE-15

HARDWARE MANAGEMENT AND DISK SCHEDULING

LESSON NO. 8

Writer: Harvinder Singh

Vetter: Dr. Pardeep Kumar

STRUCTURE

1. Introduction

2. Objectives

3. Presentation of Contents

3.1 Storage Disk

3.2 Disk Scheduling

3.3 Selection of Scheduling algorithm

3.4 Formatting

3.5 RAID

3.6 RAM Disks

4. Summary

5. Suggested Readings / Reference Material

6. Self Assessment Questions (SAQ)

1. INTRODUCTION

The greatest challenge in an operating system is perhaps device management.

The system must be able to control a collection of devices having multidimensional

differences. Wide diversity is found in information volume, device speed, purpose,

communication protocol and direction of information flow. In addition to variety, the

operating system needs the capability to deal with a large number of devices. In some

systems, a machine is connected with thousands of different devices and each device

requires a unique operating system support. This kind of management must be

accomplished in a parallel environment. Devices use their own timing and operate

independent of the CPU. The operating system must be able to deal with concurrent

requests from a number of devices, while running on a single CPU.

While dealing with a different set of devices, the abstraction offered to

applications should be as device-independent as possible. There is a great variation in the

physical characteristics of CD-ROMs, floppy disks, main memory and printers, but

applications should be able to read from or write to these devices as if they were all the

same.

2. OBJECTIVES

Device management services are provided not just to application programs. File

management is built upon the abstract I/O system described in this lesson. Using the

device independent abstraction created by device management software greatly simplifies

the task of creating a file system. Similarly, swapping software relies on device

management to handle its I/O requirements. The present lesson aims at presenting

hardware I/O organization, software organization and devices.

3. PRESENTATION OF CONTENTS

3.1 Storage Disk

Disks come in many sizes and speeds, and information may be stored optically or

magnetically. However, all disks share a number of important features. A disk is a flat

circular object called a platter. Information may be stored on both sides of a platter

(although some multiplatter disk packs do not use the topmost or bottommost surfaces). The platter rotates around its own axis. The circular surface of the platter is coated with a

magnetic material on which the information is stored. A read/write head is used to

perform read/write operations on the disk. The read/write head can move radially over the

magnetic surface. For each position of the head, the recorded information forms a circular

track on the disk surface. Within a track information is written in blocks. The blocks may

be of fixed size or variable size separated by block gaps. The variable length size block

scheme is flexible but difficult to implement. Blocks can be separately read or written.

The disk can access any information randomly using an address of the record of the form

(track no, record no.). When the disk is in use a drive motor spins it at high speed. The

read/write head positioned just above the recording surface stores the information

magnetically on the surface. On floppy disks and hard disks, the media spins at a

constant rate. The recording surface is organized into a number of concentric circles, or tracks, each divided into sectors. As one

moves out from the center of the disk, the tracks get larger. Some disks store the same

number of sectors on each track, with outer tracks being recorded using lower bit

densities. Other disks place more sectors on outer tracks. On such a disk, more

information can be accessed from an outer track than an inner one during a single rotation

of the disk.

Figure 8.1: Moving head disk mechanism

The disk access time is composed of three parts:

(a) Seek Time

(b) Rotational Latency

(c) Transfer time

(a) Seek Time

To access a block from the disk, first of all the system has to move the read/write head to

the required position. The time consumed in this operation is known as seek time and the

head movement is called seek. When anything is read or written to a disc drive, the

read/write head of the disc needs to move to the right position. The actual physical

positioning of the read/write head of the disc is called seeking. The amount of time that it

takes the read/write head of the disc to move from one part of the disk to another is called

the seek time. The seek time can differ for a given disc due to the varying distance from

the start point to where the read/write head has been instructed to go. Because of these

variables, seek time is generally measured as an average seek time. Seek time (S) is determined in terms of:

I: the startup delay in initiating head movement;

H: the time taken to move the head across one cylinder;

C: how far the head must travel, in cylinders.

S = H * C + I

(b) Rotational Latency

Rotational latency (sometimes called rotational delay or just latency) is the delay waiting

for the rotation of the disk to bring the required disk sector under the read-write head. It

depends on the rotational speed of a disk, measured in revolutions per minute (RPM).

Once the head is positioned at the right track, the disk is to be rotated to move the desired

block under the read/write head. On average this latency will be one-half of one

revolution. Thus the average latency in seconds can be computed by dividing 30 by the rotational speed R in revolutions per minute (RPM).

L = 30 / R

(c) Transfer time

Finally the actual data is transferred from the disk to main memory. The time consumed

in this operation is known as transfer time. Transfer time, T, is determined by the amount

of information to be read, B; the number of bytes per track, N; and the rotational speed.

T = 60B / (RN)

So the total time (A) to service a disk request is the sum of these three, i.e. seek time, latency time, and transfer time:

A = S + L + T
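A small numeric sketch of this access-time model; all drive parameters here are made-up values for illustration, not figures from the text.

```python
# Hypothetical drive parameters (assumed for illustration only)
R = 7200        # rotational speed, revolutions per minute
N = 500_000     # bytes per track (assumed)
B = 4096        # bytes to transfer
I = 0.002       # startup delay for head movement, seconds (assumed)
H = 0.0001      # seconds to move the head across one cylinder (assumed)
C = 100         # cylinders the head must travel (assumed)

S = H * C + I          # seek time:            S = H*C + I
L = 30 / R             # average latency:      half a revolution, L = 30/R
T = 60 * B / (R * N)   # transfer time:        T = 60B/(RN)
A = S + L + T          # total service time

print(f"seek={S*1000:.2f} ms  latency={L*1000:.2f} ms  "
      f"transfer={T*1000:.3f} ms  total={A*1000:.2f} ms")
```

With these assumed numbers the seek dominates (12 ms), the average latency is about 4.17 ms, and the transfer itself is well under a tenth of a millisecond.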

Since most systems depend heavily on the disk, it becomes very important to make disk service as fast as possible. A number of variations in disk organization have therefore appeared, motivated by the desire to reduce access time, increase disk capacity and make optimum use of the disk surface. For example, there may be one head for every track on the disk surface; such an arrangement is known as a fixed-head disk. It lets the computer switch from one track to another very quickly, but the large number of heads makes the disk very expensive. Generally there is a single head that moves in and out to access different tracks, because that is the cheaper option.

Higher disk capacities are obtained by mounting many platters on the same spindle to

form a disk pack. There is one read/write head per circular surface of a platter. All heads

of the disk are mounted on a single disk arm, which moves radially to access different

tracks. Since the heads are located on identically positioned tracks of different surfaces, such tracks can be accessed without any further seeks, so placing data in one cylinder can speed up sequential access. A cylinder is the collection of identically positioned tracks on the different surfaces.

The hardware for a disk system can be divided into two parts. The disk drive is the

mechanical part, including the device motor, the read/write heads and associated logic.

The other part called the disk controller determines the logical interaction with the

computer. The controller takes instructions from the CPU and orders the disk drive to

carry out the instructions.

Every disk drive has a queue of pending requests to be serviced. Whenever a process

needs I/O to or from the disk, it issues a request to the operating system, which is placed

in the disk queue. The request specifies the disk address, memory address, amount of

information to be transferred, and the type of operation (input or output).

3.2 DISK SCHEDULING

For a multiprogramming system with many processes, the disk queue may often be

nonempty. Thus, when a request is complete, the disk scheduler has to pick a new request

from the queue and service it.

Primary memory is volatile whereas secondary memory is non-volatile. When power is

switched off, primary memory loses the stored information whereas secondary memory retains it. The most common secondary storage device is a disk. In

Figure 8.1, we have described the mechanism of a disk and information storage on it. A

disk has several platters. Each platter has several rings or tracks. The rings are divided

into sectors where information is actually stored. The rings with similar position on

different platters are said to form a cylinder. As the disk spins around a spindle, the heads

transfer the information from the sectors along the rings. Note that information can be

read from the cylinder surface without any additional lateral head movement. So it is

always a good idea to organize all sequentially-related information along a cylinder. This

is done by first putting it along a ring and then carrying on with it across to a different

platter on the cylinder. This ensures that the information is stored on a ring above or

below this ring. Information on different cylinders can be read by moving the arm by

relocating the head laterally. This requires an additional arm movement resulting in some

delay, often referred to as seek latency in response. Clearly, this delay is due to the

mechanical structure of disk drives. In other words, there are two kinds of mechanical

delays involved in data transfer from disks. The seek latency, as explained earlier, is due

to the time required to move the arm to position the head along a ring. The other delay,

called rotational latency, refers to the time spent in waiting for a sector in rotation to

come under the read or write head. The seek delay can be considerably reduced by having a head-per-track disk. The motivation for disk scheduling comes from the need to keep

both of these delays to a minimum. Usually a sector which stores a block of information additionally holds a lot of other information. For example, a 512-byte block has nearly 100 bytes of additional information, which is used to synchronize and to check the correctness of the information transfer as it takes place.

Scheduling Disk Operations:

A user, as well as the system, spends a lot of operating time communicating with files (programs, data, system utilities, etc.) stored on disks. All such communications have the following components:

1. Whether the I/O is to read from, or write into, the disk.

2. The starting address for the communication in main memory.

3. The amount of information to be communicated (in number of bytes/words).

4. The starting address on the disk and the current status of the transfer.
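The four components above can be sketched as a simple request record; the field names here are hypothetical, not taken from any particular operating system.

```python
from dataclasses import dataclass

@dataclass
class DiskRequest:
    is_read: bool            # whether the I/O reads from or writes to the disk
    memory_address: int      # starting address in main memory
    nbytes: int              # amount of information to transfer
    disk_address: int        # starting block address on the disk
    status: str = "pending"  # current status of the transfer

# A request to read one 512-byte block from disk block 98 into memory
req = DiskRequest(is_read=True, memory_address=0x4000,
                  nbytes=512, disk_address=98)
print(req.status)  # prints "pending"
```

Requests of this shape are what the operating system places in the per-drive queue described above.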

The disk I/O is always in terms of blocks of data. So even if one word or byte is required

we must bring in (or write in) a block of information from (to) the disk. Suppose we have

only one process in a system with only one request to access data. In that case, a disk

access request leads finally to the cylinder having that data. However, because processor

and memory are much faster than disks, it is quite possible that there may be another

request made for disk I/O while the present request is being serviced. This request would

queue up at the disk. With multi-programming, there will be many user jobs seeking disk

access. These requests may be very frequent. In addition, the information for different

users may be on completely different cylinders. When we have multiple requests pending

on a disk, accessing the information in a certain order becomes very crucial. Some

policies on ordering the requests may raise the throughput of the disk, and therefore, that

of the system.

As is apparent, the amount of head movement needed to satisfy a series of I/O requests

could affect the performance. For this reason, a number of scheduling algorithms have

been proposed.

3.2.1 First-in-First-Out (FIFO)

First-come-first-serve (FCFS), also called first-in-first-out, is the simplest form of disk scheduling. This algorithm services requests in the order they are received. Let us illustrate it with a request queue on a disk with cylinders numbered 0-199. Suppose the disk head is initially at cylinder 53; it will first move to 98, then to 183, then to 37 and so on, and finally to 67, with a total head movement of 640 cylinders.

Figure 8.2 FIFO Disk Scheduling

3.2.2 Shortest Seek Time First (SSTF)

Shortest Seek Time First services the request with the minimum seek time from the current head position. Like the Shortest-Job-First CPU scheduling algorithm, this algorithm can lead to the starvation problem. Let us consider the example-

Figure 8.3 SSTF Disk Scheduling

Cylinder 65 is the request closest to the initial head position (53). Once we are at cylinder 65, the next closest request is 67, then 37, and so on, finally giving a total head movement of 236 cylinders, as shown in Fig 8.3. This algorithm is better than FCFS but is not optimal.
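A minimal simulation of the two policies. The full request queue is assumed to be the classic textbook example (98, 183, 37, 122, 14, 124, 65, 67), which matches both the partial sequence and the 640-cylinder FCFS total quoted above; the exact totals depend on the queue assumed.

```python
# Assumed queue (classic textbook example) and initial head position
REQUESTS = [98, 183, 37, 122, 14, 124, 65, 67]
START = 53

def fcfs_movement(requests, head):
    """Service requests strictly in arrival order; return total head travel."""
    total = 0
    for r in requests:
        total += abs(r - head)
        head = r
    return total

def sstf_movement(requests, head):
    """Always service the pending request closest to the current head."""
    pending = list(requests)
    total = 0
    while pending:
        nxt = min(pending, key=lambda r: abs(r - head))
        total += abs(nxt - head)
        head = nxt
        pending.remove(nxt)
    return total

print(fcfs_movement(REQUESTS, START))  # 640, as in the text
print(sstf_movement(REQUESTS, START))  # 236 for this queue
```

Note that SSTF's greedy choice (`min` over the pending set) is exactly what allows starvation: a request far from the head can be bypassed indefinitely if closer requests keep arriving.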

3.2.3 SCAN

In the SCAN algorithm, the read/write head moves back and forth between the innermost and outermost tracks. As the head reaches a track, it services all the outstanding requests for that track. The disk arm starts at one end of the disk, moves toward the other end servicing requests as it goes, and on reaching the other end reverses direction and continues servicing requests, as shown in figure 8.4.

Figure 8.4 SCAN Disk Scheduling

This algorithm is also known as the elevator algorithm. In this example, the SCAN algorithm requires a total head movement of 208 cylinders.

3.2.4 LOOK

In this algorithm, the disk arm starts servicing requests in one direction, always servicing the request for the closest track in that direction. The arm goes only as far as the final request in each direction and then immediately reverses direction, without travelling all the way to the end of the disk. This algorithm is similar to SCAN, but differs in that the head does not unnecessarily travel to the innermost and outermost tracks.

3.2.5 C-SCAN

This algorithm is the circular version of the SCAN algorithm. It offers a more uniform wait time than SCAN. The head moves from one end of the disk to the other, servicing requests along the way. When it arrives at the other end, however, it immediately returns to the beginning of the disk, without servicing any requests on the return trip. This algorithm treats the cylinders as a circular list that wraps around from the last cylinder to the first one. Let us consider the example shown in Figure 8.5.

Figure 8.5 C-SCAN Disk Scheduling

3.2.6 C-LOOK

"Circular" versions of LOOK algorithm that only assure requests while going in one

direction. As the arm head reached to the last, the algorithm return back to the starting

track as shown in Figure 8.6. C-LOOK is better than the LOOK as it minimizes the delay

before a request will be serviced.

Figure 8.6 C-LOOK Disk Scheduling
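The SCAN/LOOK distinction can be sketched numerically with the same assumed request queue (98, 183, 37, 122, 14, 124, 65, 67) and the head at cylinder 53, sweeping toward cylinder 0 first on a 0-199 disk. With this queue, reversing at the disk edge (pure SCAN) travels 236 cylinders, while reversing at the last pending request (the LOOK variant) travels the 208 cylinders quoted earlier; which figure a textbook reports depends on where its diagram reverses.

```python
REQUESTS = [98, 183, 37, 122, 14, 124, 65, 67]

def sweep_order(requests, head):
    """Requests in the order a downward-then-upward sweep services them."""
    down = sorted((r for r in requests if r <= head), reverse=True)
    up = sorted(r for r in requests if r > head)
    return down + up

def total_movement(path, head):
    total = 0
    for r in path:
        total += abs(r - head)
        head = r
    return total

def scan_movement(requests, head):
    """SCAN travels all the way down to cylinder 0 before reversing."""
    up = [r for r in requests if r > head]
    return head + (max(up) if up else 0)

def look_movement(requests, head):
    """LOOK reverses at the last pending request instead of the disk edge."""
    return total_movement(sweep_order(requests, head), head)

print(sweep_order(REQUESTS, 53))    # [37, 14, 65, 67, 98, 122, 124, 183]
print(scan_movement(REQUESTS, 53))  # 236: 53 cylinders down, 183 back up
print(look_movement(REQUESTS, 53))  # 208: reverses at cylinder 14
```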

3.2.7 N-step SCAN

First, the request queue is divided into subqueues, each with a maximum length of N. The subqueues are processed one at a time in FIFO order, and within each subqueue the requests are serviced using the SCAN algorithm. While a subqueue is being serviced, incoming requests are placed in the next unfilled subqueue. This eliminates the possibility of starvation in the N-step SCAN algorithm.

3.2.8 FSCAN

The "F" stands for "freezing" the request queue at a certain time. It is just like N-step

scan but there are two sub queues only and each is of unlimited length. While requests in

one sub queue are serviced, new requests are placed in other sub queue.
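A minimal sketch of FSCAN's two-queue freeze. The request numbers and arrival times are made up, and each frozen batch is serviced here in simple ascending order as a stand-in for a full SCAN sweep.

```python
from collections import deque

def fscan(initial, arrivals):
    """Service `initial` requests; `arrivals` maps a service-step index
    to requests that arrive at that step. Arrivals during a sweep are
    frozen into the second queue, so they cannot starve pending requests."""
    active, frozen = deque(sorted(initial)), deque()
    serviced = []
    step = 0
    while active or frozen:
        if not active:  # current batch done: swap in the frozen queue
            active, frozen = deque(sorted(frozen)), deque()
        serviced.append(active.popleft())
        for new in arrivals.get(step, []):  # newcomers go to the other queue
            frozen.append(new)
        step += 1
    return serviced

# Requests 14 and 183 arrive mid-sweep and wait for the next batch
print(fscan([98, 37, 122], {0: [14], 1: [183]}))
# [37, 98, 122, 14, 183]
```

The key property shown is that requests arriving during a sweep never jump ahead of the batch already being serviced.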

3.3 Selection of scheduling algorithm

As there are so many disk-scheduling algorithms, an important question is how to choose a scheduling algorithm that will optimize performance. The commonly used algorithm is Shortest-Seek-Time-First, and it has a natural appeal. SCAN and its variants are more appropriate for systems with a heavy load on the disk. It is possible to define an optimal scheduling algorithm, but the computational overhead required may not justify the savings over Shortest-Seek-Time-First and SCAN.

No doubt, in any scheduling algorithm the performance depends on the number and types of requests. If at any time there is only one outstanding request, then the performance of all the scheduling algorithms will be more or less equivalent. Studies suggest that under such conditions even First-Come-First-Serve performs reasonably well.

It is also observed that the performance of scheduling algorithms is greatly influenced by the file-allocation method. Requests generated by contiguously allocated files result in minimal movement of the head. But with indexed and direct access, where the blocks of a file are scattered over the disk surface (resulting in better utilization of the storage space), there may be a lot of head movement.

In all these algorithms, the decision to improve performance is taken on the basis of head movement, i.e. seek time. Latency is not considered as a factor, because the rotational position cannot be determined and so the latency cannot be predicted. However, multiple requests for the same track may be serviced based on latency.

3.4 Formatting

Before data can be written to a disk, the disk must be organized into sectors and all the administrative data must be written to it. This low-level formatting, or physical formatting, is often done by the manufacturer. During the formatting process, some sectors may be found to be defective. Many disks carry additional sectors, and a remapping mechanism substitutes spare sectors for defective ones.

For sectors which fail after formatting, the operating system may implement a bad-block mechanism. Such mechanisms are usually expressed in terms of blocks and are implemented at a level above the device driver. Disk performance can also be affected by the manner in which sectors are laid out on a track. If disk I/O operations are limited to transferring a single sector at a time, then to read multiple sectors in sequence, separate I/O operations must be performed. An interrupt must be processed and the second I/O operation issued after the first completes. During this time, the disk continues to spin. If the start of the next sector to be read has already spun past the read/write head, the sector cannot be read until the next revolution of the disk brings it by the read/write head. In the worst case, the disk must wait almost a full revolution. Sectors may be interleaved to avoid this problem. The degree of interleaving is determined by how far the disk revolves in the time from the end of one I/O operation until the controller can issue the next. The sector layout with different degrees of interleaving is illustrated in Fig. 8.7.

In most modern hard-disk controllers, interleaving is not used. The controller contains enough memory to store an entire track, so a single I/O operation can read all the sectors on a track. Interleaving is most commonly used on less sophisticated disk systems, such as floppy disks.

Figure 8.7 Interleaving
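An interleaved layout like the one in Fig. 8.7 can be sketched as follows. The placement rule used here (advance by the interleave factor, skipping already-filled slots) is one common scheme, not the only one.

```python
def interleave(n_sectors, step):
    """Place logical sectors 0..n-1 around a track, advancing `step`
    physical slots per sector; step=1 is the non-interleaved layout."""
    track = [None] * n_sectors
    pos = 0
    for logical in range(n_sectors):
        while track[pos] is not None:     # skip slots already filled
            pos = (pos + 1) % n_sectors
        track[pos] = logical
        pos = (pos + step) % n_sectors
    return track

print(interleave(8, 1))  # [0, 1, 2, 3, 4, 5, 6, 7]  (no interleaving)
print(interleave(8, 2))  # [0, 4, 1, 5, 2, 6, 3, 7]  (2:1 interleave)
```

With a 2:1 interleave, one physical slot passes under the head between consecutive logical sectors, giving the controller time to issue the next I/O operation before the sector arrives.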

Many operating systems also provide the capability for disks to be divided into one or more virtual disks called partitions. On personal computers, DOS, Windows, and UNIX all adhere to a common partitioning scheme so that they may all co-reside on a single disk.

Before files can be written to a disk, an empty file system must be created on a disk partition. This requires the data structures of the particular file system to be written to the partition, and is called a high-level format or logical format. Sometimes a file system is not required to make use of a disk: some operating systems let applications write directly to a disk device. To such an application, a directly accessible disk device is just a large sequential collection of storage blocks. In that case, it is the responsibility of the application to impose an order on the information written there.

3.5 RAID

A Redundant Array of Inexpensive Disks (RAID) may be used to increase disk

reliability. RAID can be implemented in hardware or in the operating system. There are

six different types of RAID systems that are described below and illustrated in Fig. 8.8.

RAID level 0 builds one large virtual disk from a number of smaller disks. Storage is combined into logical units called strips, with the size of a strip being some multiple (possibly one) of the sector size. The virtual storage is a sequence of strips interleaved among the disks. The basic benefit of RAID-0 is its ability to create a large disk, but its reliability benefits are limited. Files generally get scattered over a number of disks; hence, even after a disk failure, some file data can be retrieved safely. Performance benefits can be achieved when accessing sequentially stored data, because on a RAID-0 system sequential data is spread across different disks.

While the first disk is reading the first strip, the second disk can start reading the second strip. If there are N disks in the array, N I/O operations can occur simultaneously. This process of overlapping requests is known as pipelining.

RAID level 1 stores duplicate copies of each strip, with each copy on a different disk. The simplest organization consists of two disks, one being an exact duplicate of the other. Read requests can be optimized by having them handled by the copy that can be accessed most quickly. Under some circumstances requests can be pipelined, as with RAID-0. Write requests result in duplicate write operations, one for each copy. Writing is therefore not as efficient as reading; completion is delayed until the copy with the slowest access time has been updated.

Figure 8.8 RAID levels

Single copies of each strip are maintained in RAID levels 2 to 5, with redundant information that maintains functionality in spite of disk failures. In RAID level 2, an error-correcting code (such as a Hamming code) is calculated for the corresponding bits on each data disk, and the bits of the code are stored on multiple extra drives. The strips are very small, so when a block is read, all disks are accessed in parallel. RAID-3 is similar, but a single parity bit is used instead of an error-correcting code, so RAID-3 requires just one extra disk. If any disk in the array fails, its data can be regenerated from the data on the remaining disks.

RAID level 4 is similar to RAID-3, except that the strips are larger, so an operation to read a block involves only a single disk. Write operations require parity information to be calculated, and writes must be performed on both the data and parity disks. As the parity disk must be written whenever any data disk is written, the parity disk can become a bottleneck during periods of heavy write activity.

RAID level 5 is similar to RAID-4, except that the parity information is distributed among all the disks. RAID-5 thus eliminates the potential bottleneck found in RAID-4.
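The parity idea behind RAID levels 3-5 can be sketched with bytewise XOR: the parity strip is the XOR of the data strips, so any single lost strip can be rebuilt by XOR-ing the survivors. The data values below are made up for illustration.

```python
from functools import reduce

def xor_strips(strips):
    """Bytewise XOR of equal-length byte strings (the parity operation)."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*strips))

data = [b"\x12\x34", b"\xab\xcd", b"\x0f\xf0"]  # strips on three data disks
parity = xor_strips(data)                        # strip on the parity disk

# Suppose disk 1 fails: rebuild its strip from the survivors plus parity
rebuilt = xor_strips([data[0], data[2], parity])
assert rebuilt == data[1]
```

Because XOR is its own inverse, the same routine serves both to compute the parity strip and to regenerate any single failed strip.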

3.6 RAM Disks

A RAM disk is a virtual block device created from main memory. Commands to read or write disk blocks are implemented by the RAM disk device driver. Unlike real disks, main memory provides direct access to data, so the seek and rotational delays found on disk devices do not exist on RAM disks. They are mainly useful for storing small files that are accessed frequently or temporarily.

The two biggest disadvantages of RAM disks are cost and volatility. To implement a RAM disk, the operating system must reserve a section of memory. The other major disadvantage is that when power is lost, the memory contents are lost with it. If the RAM disk is to store a file system, that file system must be recreated each time the system is booted, and any files stored on a RAM disk file system will be lost when the system is rebooted. In the case of a power failure, any important data stored on a RAM disk will be lost.

If the memory is contiguous, the implementation of a RAM disk can be simplified. A large contiguous section of memory can easily be allocated for RAM disk use when the system is first booted.

4. SUMMARY

The major secondary-storage I/O device in most computers is the disk drive. Requests for disk I/O are generated by the virtual memory system and by the file system. Each request specifies the referenced address on the disk, in the form of a logical block number. Disk-scheduling policies may improve the overall bandwidth, the average response time, and the variance in response time. Policies like FIFO, SSTF, SCAN, C-SCAN, LOOK, and C-LOOK are designed to decrease the total seek time.

RAID is used to enhance the reliability of disks. A RAM disk is a virtual block device built from main memory; commands to read or write disk blocks are processed by the RAM disk device driver.

5. SUGGESTED READINGS / REFERENCE MATERIAL

 Operating System Concepts, 5th Edition, Silberschatz A., Galvin P.B., John Wiley

& Sons.

 Systems Programming & Operating Systems, 2nd Revised Edition, Dhamdhere

D.M., Tata McGraw Hill Publishing Company Ltd., New Delhi.

 Operating Systems, Madnick S.E., Donovan J.T., Tata McGraw Hill Publishing

Company Ltd., New Delhi.

 Operating Systems-A Modern Perspective, Gary Nutt, Pearson Education Asia,

2000.

 Operating Systems, Harris J.A., Tata McGraw Hill Publishing Company Ltd.,

New Delhi, 2002.

6. SELF-ASSESSMENT QUESTIONS (SAQ)

 Compare the throughput of SCAN and C-SCAN assuming a uniform distribution of requests.

 What do you understand by seek time, latency time, and transfer time? Explain.

 Shortest Seek Time First favors tracks in the center of the disk. On an operating

system using Shortest Seek Time First, how might this affect the design of the file

system?

 When there is only one outstanding request in the queue, all the disk-scheduling algorithms reduce to First-Come-First-Serve scheduling. Explain why.

 What is the difference between Look and C-Look? Discuss using suitable

example.

 All the disk-scheduling algorithms except First-Come-First-Serve may cause starvation and hence are not truly fair.

o Explain why.

o Come up with a scheme to ensure fairness.

o Why is fairness an important goal in a time-sharing system?

 What do you understand by RAID? What are the objectives of it? Explain.

 What are the limitations due to redundancy in RAID? What are the advantages of

redundancy? Explain.

 Write a detailed note on the different RAID organizations. Discuss their merits and demerits also.

CS-DE-15

OPERATING SYSTEMS

Lesson No. -10

Writer: Dr. Sanjay Tyagi

Vetter: Dr. Pardeep Kumar

WINDOWS-I

STRUCTURE

1. Introduction

2. Objectives

3. Presentation of Contents

3.1 Windows

3.1.1 Introduction to Windows

3.1.2 Components of Windows

3.1.3 Switching between Windows

3.1.4 Cascading Windows

3.1.5 Tiling Windows

3.1.6 Scrollbars

3.2 Various Versions of Windows

3.3 Desktop

3.3.1 Icon

3.3.2 Task bar

3.3.3 Start button

3.3.4 My computer

3.3.5 My documents

3.3.6 Recycle bin

3.3.7 Items of start menu

3.3.8 Desktop shortcut

3.4 Shortcut Keys

3.5 Windows Explorer

4. Summary

5. Suggested Readings / Reference Material

6. Self Assessment Questions (SAQ)

1. INTRODUCTION

An operating system is an interface between the hardware and the user. It is responsible for the management and coordination of activities and for the sharing of the resources of a computer, and it acts as a host for the applications run on the machine. As a host, one of the purposes of an operating system is to handle resource allocation and access protection for the hardware. This relieves application programmers from having to manage these details.

In this lesson, the basics of the Windows operating system are discussed. Various versions of Windows are also covered, to give an insight into its basic framework. Windows has been continually upgraded to give it a better look, as can be seen across its various versions. This lesson is very helpful in understanding the usefulness of Windows.

2. OBJECTIVES

This lesson is designed to help you learn the basic commands and elements of Windows. It is not geared toward a particular version of Windows, for instance Windows 95, Windows 98, or Windows 2000, but discusses those aspects that are common to all versions. If you are trying to catch your computer knowledge up to modern times, then this lesson on the Windows operating system will help you.

3. PRESENTATION OF CONTENTS

3.1 Windows

Windows is a series of software operating systems and graphical user interfaces.

Microsoft first introduced an operating environment named Windows in November 1985

as an add-on to MS-DOS in response to the growing interest in graphical user interfaces

(GUIs). Microsoft Windows came to dominate the world's personal computer market,

overtaking Mac OS, which had been introduced previously. As of October 2009, Windows had approximately 91% of the client market share.

3.1.1 Introduction to Windows

A window is a rectangular area of the screen that displays information. In Windows, every folder or application has a window. Following are the properties of a window:

1. Every window has a title bar, which displays the name of the window.

2. A window can be resized, minimized and maximized by pressing the buttons at the top right corner of the title bar.

3. A window can be closed by pressing the x button at the right of the title bar.

4. A window can be moved to any location on the screen.

3.1.2 Components of Windows

Control Box The control box provides a menu that enables you to restore, move,

size, minimize, maximize, or close a window.

Border The border separates the window from the desktop. You resize the

window by dragging its borders outward to expand it and inward to

contract it.

Title bar The title bar displays the name of the current file and the name of the

current program.

Minimize button Use the Minimize button to temporarily decrease the size of a

window or remove a window from view. While a window is

minimized, its title appears on the taskbar.

Maximize button Click the Maximize button and the window will fill the screen.

Restore button After you maximize a window, if you click the Restore button, the

window will return to its former size.

Close button Click the Close button to exit the window and close the program.

Menu bar The menu bar displays the program menu. You send commands to the program by using the menu.

Toolbars Toolbars generally display right below the menu, but you can drag

them and display them along any of the window borders. You use the

icons on the toolbars to send commands to the program.

Work area The work area is located in the center of the window. You perform

most of your work in the work area.

Status bar The status bar provides you with information about the status of your

program.

The figure above shows a screenshot of the WordPad window.

3.1.3 Switching between Windows

If you have several windows open at the same time, the window on top is the window

with focus. You can only interact with the window with focus. To change windows, do

one of the following:

1. Click anywhere on the window you want to have focus.

2. Hold down the Alt key and press the Tab key (Alt-Tab) until you have selected

the window to which you want to change.

3. All active files are displayed on the taskbar. Click the taskbar button for the

window you want to have focus.

3.1.4 Cascading Windows

Cascading your windows is a way of organizing your windows on your desktop.

Cascading windows fan out across your desktop with the title bar of each window

showing.

To cascade your windows:

1. Right-click the taskbar. A menu will appear.

2. Click Cascade Windows.

3.1.5 Tiling Windows

Tiling your windows is a way of organizing your windows onscreen. When you tile your

windows, Windows places each window on the desktop in such a way that no window

overlaps any other window. You can tile your windows horizontally or vertically.

To tile your windows:

1. Right-click the taskbar. A menu will appear.

2. Click Tile Windows Horizontally or Tile Windows Vertically, whichever you

prefer.

3.1.6 Scrollbars

In many programs, if the contents of the work area do not fit in the window, scrollbars

will appear. A vertical scrollbar will appear at the right side of the window and a

horizontal scrollbar at the bottom of the window, depending on the fit. The vertical

scrollbar provides a way to move up and down. The horizontal scrollbar provides a way

to move from left to right.

The scroll box indicates where you are in your document. If the scroll box is at the top of

the scrollbar, you are at the top of the document. If the scroll box is in the center of the

scrollbar, you are in the center of the document.

To move up and down one line at a time:

 Click the arrow at either end of the vertical scrollbar.

To move from side to side one character at a time:

 Click the arrow at either end of the horizontal scrollbar.

To move approximately one window at a time:

 Click above the scroll box to move up.

 Click below the scroll box to move down.

To scroll continuously:

 Click the appropriate arrow and hold down the mouse button.

To move to a specific location:

 Left-click the scrollbar and hold down the left mouse button until you arrive at the

location. For example, if you want to go to the center of the document, click the

center of the scrollbar and hold down the left mouse button.

 Or drag the scroll box until you arrive at the desired location.

3.2 Various versions of Windows

 Windows 1.0

Windows 1.0 offered limited multitasking of existing MS-DOS programs and concentrated on creating an interaction paradigm, an execution model and a stable API for native programs for the future.

 Windows 2.0

Windows 2.0 allowed application windows to overlap one another, unlike Windows 1.0, which could display only tiled windows. Windows 2.0 also introduced more sophisticated keyboard shortcuts and the terms "Maximize" and "Minimize", as opposed to "Zoom" and "Iconize" in Windows 1.0.

 Windows 3.0

Windows 3.0 was the third major release of Microsoft Windows, released on 22nd May 1990. It turned out to be the first widely used version of Windows.

 Windows 3.1

Windows 3.1 (also known by the codename Janus) came out on March 18, 1992. This version included a built-in TrueType font system, making Windows a serious desktop publishing platform for the first time. Windows 3.0 could achieve similar functionality with the Adobe Type Manager (ATM) font system from Adobe.

 Windows NT

Windows NT is a family of operating systems produced by Microsoft, the first

version of which was released in July 1993. It was originally designed to be a

powerful high-level-language-based, processor-independent, multiprocessing,

multiuser operating system with features comparable to UNIX.

 Windows 95

Windows 95 was designed to merge Microsoft's previously separate MS-DOS and Windows products, and it contained an improved version of DOS.

 Windows 98

Windows 98 is an updated version of Windows 95, released on June 25, 1998. Its Second Edition, released on May 5, 1999, fixed many minor issues, improved USB support, and replaced Internet Explorer 4.0 with the faster Internet Explorer 5.0.

 Windows ME

Windows ME was the successor to Windows 98 and just like Windows 98, was

targeted specifically at home PC users. It included Internet Explorer 5.5,

Windows Media Player 7, and the new Windows Movie Maker software, which

provided basic video editing and was designed to be easy for home users.

 Windows 2000

Windows 2000 belongs to the Windows NT family and was released on February 17, 2000. Unlike Windows ME and the rest of the Windows 9x line, it is not built on MS-DOS at all, so applications that require real-mode DOS cannot be made to run on it. It was aimed primarily at business desktops, laptops and servers.

 Windows XP

Microsoft developed Windows XP as a line of operating systems for use on general-purpose computer systems, including business and home desktops, media centers and notebook computers. Windows XP was released on October 25, 2001.

 Windows Vista

After the worldwide success of XP and its service packs, Microsoft designed and created Windows Vista, an operating system for use on personal computers, including business and home desktops, Tablet PCs, laptops and media centers. It was initially code-named "Longhorn"; on July 22, 2005, the name Windows Vista was announced. The development of Vista was finished on November 8, 2006. Over the following three months, Vista was released in stages to computer software and hardware manufacturers, retail channels and business organizations. It was released globally to the general public on January 30, 2007.

 Windows 7

Windows 7 is one of the most recent public releases of Microsoft Windows, a series of operating systems produced by Microsoft for use on personal computers, including home and business desktops, laptops, notebooks, tablet PCs and media center PCs. Windows 7 was released to manufacturing on July 22, 2009, and reached general retail availability on October 22, 2009, less than three years after the release of its predecessor, Windows Vista.

3.3 Desktop

The Windows "Desktop" is like the working surface of a desk. The desktop is where applications, folders and shortcuts are located. The desktop contains the following items:

 Icons

 Taskbar

 Start Button

The functions of these desktop items are described below:

3.3.1 Icon

An icon is a graphic image. Icons help you execute commands quickly. Commands tell the computer what you want it to do. To execute a command by using an icon, click the icon.

The Windows operating system uses different icons to represent files, folders and applications. Icons found on the desktop are normally left-aligned. The icons provided by Windows are:

1. My Documents
2. My Computer

3. My Network Places

4. Recycle Bin

5. Internet Explorer

3.3.2 Task Bar

The task bar is at the bottom of the desktop, but you can move it to the top or either side of the screen by clicking and dragging it to the new location. Buttons representing programs currently running on your computer appear on the task bar. At the very left of the task bar is the Start button. At the right side is an area called the system tray, where you will find graphical representations of various background operations. It also shows the system clock.

3.3.3 Start Button

The Start button is found at the lower left corner of the screen. Click once on the Start button to open a menu of choices. Through this button, you can open the programs installed on your computer and access all the utilities available in the Windows environment.

You can also shut down, restart or stand by the computer by using the Start button. The various choices available on the Start menu will be discussed later on.

Screenshot of Desktop

3.3.4 My Computer

My Computer lets you browse the contents of your computer. The common tasks that we can perform through My Computer are:

1. Access information stored on different storage devices connected to the computer, such as a hard disk, floppy disk or CD-ROM.

2. Create, move, copy, delete or rename files, folders and programs from one

disk to another disk.

3. Execute or run programs from the disks.

4. Configure devices of the computer.

5. Add or remove a printer.

3.3.5 My Documents

It is a desktop folder that provides a convenient place to store documents, graphics or other files that you want to access quickly. On the desktop, it is represented by a folder with a sheet of paper in it. When you save a file in a program such as WordPad or Paint, the file is saved in My Documents by default unless you choose a different location.

The following steps may be followed to open a document from its window.

1. Move the mouse pointer to My Documents icon.

2. Double click on it to open its windows.

3. Double click on any of its items to open it.

3.3.6 Recycle Bin

The Recycle Bin makes it easy to delete and undelete files and folders. When a file or folder is deleted from any location, Windows stores it in the Recycle Bin. If a file is deleted accidentally, you can restore it from the Recycle Bin. You can also empty the Recycle Bin to save disk space.

Steps to move back the file or folder from the recycle bin:

1. Open Recycle bin by double clicking on its icon.

2. Select the file or folder you want to move back.

3. Right-click on it.

4. A menu will appear; choose Restore from it.

5. Windows will move the file or folder back to the location from where it was deleted.

3.3.7 Items of Start Menu

Start menu displays a menu of choices:

 Programs

 Favorites

 Documents

 Settings

 Find

 Help

 Run

 Shutdown

Programs

Place the mouse pointer over the Programs entry and a submenu will open, showing all programs or applications currently installed. To open a program that has been installed on your computer, click on it and the program will open.

Favorites

The Favorites menu presents a list of the Internet addresses that you have added to your Internet Explorer favorites list.

Documents

The Documents menu lists the files you have recently worked on. You can open the most

recently used document directly from here. To open a document from this list, simply

click on it and the document will open.

Settings

This menu provides the facility to change or configure the hardware or software settings

of the computer. This menu leads to several choices.

The individual icons in the Control Panel provide a variety of tools to control the way your computer and its components present information, as well as tools to control the behavior and appearance of the Windows interface.

The Find/Search

This option of the start menu helps in locating files or folders stored on the hard disk or

network.

This command is very helpful if you forget the exact location of a file or folder that you want to access. The search option offers different ways of finding a file or folder.

These options include searching by the name, type, size, date and storage location of the file or folder. It opens a dialog box where the user can type the name of the file or folder to search for. The procedure for using this command is given below:

1. Click on Find option of the start menu, the Find dialog box will appear.

2. Enter the name of the file or folder in the Named text box.

3. From the Look in drop-down list box, choose the location where you think your desired file or folder may be present.

4. Click on the Find now button to start search.

5. If the Find dialog box successfully locates the desired file or folder, it will display it in the window below the dialog box.
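A search by name like the one the Find dialog performs can be sketched in a few lines of Python using the standard `os.walk` and `fnmatch` modules. The folder and file names below are made up purely for illustration:

```python
import fnmatch
import os
import tempfile

def find_files(root, pattern):
    """Walk the folder tree under `root` and collect the paths of
    files whose name matches `pattern` (wildcards as in the Find box)."""
    matches = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in fnmatch.filter(filenames, pattern):
            matches.append(os.path.join(dirpath, name))
    return matches

# Build a tiny folder tree to search, standing in for a hard disk.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "letters"))
for relative in ("notes.txt", "report.doc", os.path.join("letters", "todo.txt")):
    open(os.path.join(root, relative), "w").close()

found = find_files(root, "*.txt")
print(sorted(os.path.basename(path) for path in found))  # ['notes.txt', 'todo.txt']
```

Searching by size or date, as the Find dialog also allows, would follow the same walk but test `os.stat` attributes instead of the name.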

Help

To access the Help system of Windows, select Help from the Start menu. The Help option explains how to use commands and menus and, in case of problems, how to troubleshoot the Windows operating system.

Run

This command is used to execute a command or program directly instead of using the

icon or program menu. Press the "Browse" button to locate the program you want to open

through Run command.
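The same effect, handing a command to the operating system and letting it start the program, can be scripted. The sketch below uses Python's standard `subprocess` module; it launches the Python interpreter itself purely as a safe stand-in for a program name you might type in the Run box (such as `notepad`):

```python
import subprocess
import sys

def run_command(command):
    """Launch a program the way the Run box does: hand the command
    to the operating system and return a handle to the new process."""
    return subprocess.Popen(command)

# For illustration we run the Python interpreter with a trivial script.
process = run_command([sys.executable, "-c", "print('launched')"])
process.wait()  # the Run box does not wait for the program; we do, to tidy up
```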

Shut Down

Shutdown is a process in which the computer closes all currently running programs, disconnects the devices connected to it and turns itself off. The following steps are followed to shut down the computer:

1. Click on the start button to open the Start Menu.

2. Click on the Shut Down.

3. Shut down dialog box will appear.

4. Choose the shut down option from the list and click the "OK" button.

3.3.8 Desktop shortcut

A desktop shortcut, usually represented by an icon, is a small file that points to a

program, folder, document, or Internet location. Clicking on a shortcut icon takes you

directly to the object to which the shortcut points. Shortcut icons contain a small arrow in

their lower left corner.

Shortcuts are merely pointers and deleting a shortcut will not delete the item to which the

shortcut points.
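This pointer behaviour can be illustrated with a symbolic link, a rough scriptable analogue of a shortcut. Note this is only an analogy: real Windows shortcuts are `.lnk` files in a richer binary format, and the file names below are made up.

```python
import tempfile
from pathlib import Path

# A symbolic link is a tiny pointer file: deleting it leaves
# the target intact, just as deleting a shortcut does.
workdir = Path(tempfile.mkdtemp())
target = workdir / "document.txt"
target.write_text("the real file")

link = workdir / "document-shortcut"
link.symlink_to(target)   # create the pointer, i.e. the "shortcut"

link.unlink()             # deleting the shortcut...
assert target.exists()    # ...does not delete the target
```

(On Windows, creating symbolic links may require elevated privileges; the concept, not the mechanism, is the point here.)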

Create a desktop shortcut

To create a shortcut to an item located on the Start menu:

1. Click Start. The Start menu will appear.

2. Locate the item to which you want to create a shortcut. If the item is located on a

submenu, go to the submenu.

3. Click and drag the item to your desktop.

To create a shortcut to items visible in Windows Explorer:

1. Open Windows Explorer.

2. Minimize the Windows Explorer window.

3. Locate in Windows Explorer the item to which you want to create a shortcut.

4. Hold down the right mouse button and drag the item onto the desktop.

5. Release the right mouse button. A context menu will appear.

6. Click Create Shortcuts Here.

Change the icon associated with an object

To change the icon associated with an object:

1. Right-click the icon. The context menu will appear.

2. Click Properties.

3. Click the Change Icon button.

4. Click the icon of your choice.

5. Click OK.

Please note that not all icons can be changed. If you do not see the Change Icon button, the icon cannot be changed.

3.4 Shortcut Keys

You can use shortcut keys to execute a command quickly by pressing key combinations

instead of selecting the commands directly from the menu or clicking on an icon. When

you look at a menu, you will notice that most of the options have one letter underlined.

You can select a menu option by holding down the Alt key and pressing the underlined

letter. You can also make Alt-key selections from drop-down menus and dialog boxes. A

key name followed by a dash and a letter means to hold down the key while pressing the

letter. For example, "Alt-f" means to hold down the Alt key while pressing "f" (this will

open the File menu in many programs). As another example, holding down the Ctrl key

while pressing "b" (Ctrl-b) will bold selected text in many programs. In some programs,

you can assign your own shortcut keys.

3.5 Windows Explorer

Windows Explorer is a place where you can view the drives on your computer and

manipulate the folders and files. Using Windows Explorer, you can cut, copy, paste,

rename, and delete folders and files.

To open Windows Explorer:

1. Click the Start button, located in the lower left corner of your screen.

2. Highlight Programs.

3. Highlight Accessories.

4. Click Windows Explorer.

Alternatively, you can open Windows Explorer by holding down the Windows key and

typing e (Windows-e).

To add an item located in Windows Explorer to the Start menu or to a Program menu:

1. Click the Start button. The Start menu will appear.

2. Highlight Settings. A submenu will appear.

3. Click Taskbar and Start Menu. A dialog box will appear.

4. Click the Start Menu tab.

5. Click the Customize button.

6. Click Add.

7. Type the path to the item you want to add, or use Browse to navigate to the item.

8. Click Next.

9. Double-click an appropriate folder for the item.

10. Click Finish.

11. Click OK.

12. Click OK again. The item will appear on the menu.

To remove an item from the Start menu or from a Program menu:

1. Click the Start button. The Start menu will appear.

2. Highlight Settings. A submenu will appear.

3. Click Taskbar and Start Menu. A dialog box will appear.

4. Click the Start Menu tab.

5. Click Customize.

6. Click the Remove button.

7. Find and click the item you want to remove.

8. Click the Remove button. You will be prompted.

9. Click Yes.

10. Click Close.

11. Click OK.

12. Click OK again.

To copy an item located on the Start menu or on a Program menu:

1. Highlight the item.

2. Right-click. A context menu will appear.

3. Click Copy.

To rename an item on the Start menu or on a Program menu:

1. Highlight the item.

2. Right-click the item.

3. Click Rename. The Rename dialog box will appear.

4. Type the new name in the New Name field.

5. Click OK.

To delete a file from the Start menu or from a Program menu:

1. Highlight the item.

2. Right-click.

3. Click Delete. You will be prompted.

4. Click Yes.

To sort a menu:

1. Go to the menu.

2. Right-click.

3. Click Sort by Name.

4. SUMMARY

Through this lesson, you have learnt the features of Windows and the various versions of Windows and their uses. We discussed the basics of the Windows operating system, such as My Computer, the Recycle Bin, the desktop, icons and Windows Explorer. In the next lesson, we will discuss toolbars; simple operations such as copying, deleting and moving files and folders from one drive to another; and Windows settings using the Control Panel.

5. SUGGESTED READINGS / REFERENCE MATERIAL

 Microsoft Windows XP Step by Step (Microsoft) by Online Training Solutions

Inc. and Joan Preppernau (Paperback - Aug 25, 2004).

 Microsoft Windows 2000 Core Requirements Training Kit- Microsoft Press,

Corporation Microsoft Corporation, Microsoft Corporation.

 Windows XP Home Edition: The Missing Manual (O'Reilly Windows) - David

Pogue.

 Windows XP for Dummies- Andy Rathbone.

6. SELF ASSESSMENT QUESTIONS (SAQ)

 Explain the various components of the Windows operating system.

 Discuss the various versions of the Windows operating system.

 Explain the purpose of shortcut keys.

 Explain the Start button of the Windows operating system. What are the various

components of its menu?

 Write a short note on following :

o Recycle Bin

o Taskbar

o My Computer

 Explain the purpose of Windows Explorer.

CS-DE-15

OPERATING SYSTEMS

Lesson No. -11

Writer: Dr. Sanjay Tyagi


Vetter: Dr. Pardeep Kumar
WINDOWS-II

STRUCTURE

1. Introduction

2. Objectives

3. Presentation of Contents

3.1 Dialog box

3.1.1 Design concepts of dialog box

3.1.2 Usage patterns

3.1.3 Some common examples of Dialog boxes.

3.2 Tool bar

3.3 Files and Folders

3.3.1 Organize files and folders on drives

3.3.2 Create new folder

3.3.3 Delete a file or folder

3.3.4 Copy a file or folder

3.3.5 Cut a file or folder

3.3.6 Paste a file or folder

3.3.7 Rename a file or folder


3.3.8 Save a file

3.4 Drives

3.5 Accessories

3.6 Windows Settings using Control Panels

3.6.1 How to access Windows Control Panel

4. Summary

5. Suggested Readings / Reference Material

6. Self Assessment Questions (SAQ)

1. INTRODUCTION

In the previous lesson, we learnt about the various versions of the Windows operating system and their uses, as well as the desktop, icons and Windows Explorer. This lesson discusses some useful and advanced activities in Windows, such as the Windows accessories, the Control Panel, dialog boxes and toolbars. These are very important facilities that help the system work in an efficient way.

2. OBJECTIVES

In this lesson, we will explain some advanced operations of the Windows operating system. The basic objective of this lesson is to help you understand some core operating system operations and their uses, including dialog boxes and toolbars. Here we will discuss working with files and folders, and simple operations such as copying, deleting and moving files and folders from one drive to another. Lastly, we will explain the Windows accessories and Windows settings using the Control Panel.


3. PRESENTATION OF CONTENTS

3.1 Dialog Boxes

A dialog box is a special window, used in user interfaces to display information to the user, or to

get a response if needed. They are so-called because they form a dialog between the computer

and the user, either informing the user of something, or requesting input from the user, or both. It

provides controls that allow you to specify how to carry out an action.

Dialog boxes consist of a title bar (to identify the command, feature, or program where a dialog

box came from), an optional main instruction (to explain the user's objective with the dialog

box), various controls in the content area (to present options), and commit buttons (to indicate

how the user wants to commit to the task).

Screen shot of a Save as Dialog box.

Dialog boxes have two fundamental types:


 Modal dialog boxes require users to complete and close before continuing with the

owner window. These dialog boxes are best used for critical or infrequent, one-off tasks

that require completion before continuing.

 Modeless dialog boxes allow users to switch between the dialog box and the owner

window as desired. These dialog boxes are best used for frequent, repetitive, on-going

tasks.

A task dialog is a dialog box implemented using the task dialog application programming

interface (API). They consist of the following parts, which can be assembled in a variety of

combinations:

 A title bar to identify the application or system feature where the dialog box came from.

 A main instruction, with an optional icon, to identify the user's objective with the

dialog.

 A content area for descriptive information and controls.

 A command area for commit buttons, including a Cancel button, and optional more

options and Don't show this <item> again controls.

 A footnote area for optional additional explanations and help typically targeted at less

experienced users.
3.1.1 Design concepts of dialog box

When properly used, dialog boxes are a great way to give power and flexibility to your program.

When misused, dialog boxes are an easy way to annoy users, interrupt their flow, and make the

program feel indirect and tedious to use. Modal dialog boxes demand the user's attention. Dialog

boxes are often easier to implement than alternative UIs, so they tend to be overused.

A dialog box is most effective when its design characteristics match its usage. A dialog box's

design is largely determined by its purpose (to offer options, ask questions, provide information

or feedback), type (modal or modeless), and user interaction (required, optional response, or

acknowledgement), whereas its usage is largely determined by its context (user or program

initiated), probability of user action, and frequency of display.

3.1.2 Usage patterns

Dialog boxes have several usage patterns:

 Question dialogs (using buttons) ask users a single question or to confirm a command,

and use simple responses in horizontally arranged command buttons.


 Question dialogs (using command links) ask users a single question or to select a task

to perform, and use detailed responses in vertically arranged command links.


 Choice dialogs present users with a set of choices, usually to specify a command more

completely. Unlike question dialogs, choice dialogs can ask multiple questions.

 Progress dialogs present users with progress feedback during a lengthy operation (longer

than five seconds), along with a command to cancel or stop the operation.
 Informational dialogs display information requested by the user.

3.1.3 Some common examples of Dialog boxes.

Tabs
Some programs provide dialog boxes with several pages of options. We can move to a page by clicking its tab or by pressing Ctrl-Tab.

List Boxes

List boxes enable us to make a choice from a list of options. To make a selection, simply click the option you want to choose. In some list boxes, we can choose more than one item; to do so, hold down the Ctrl key while making the selections.

Radio buttons
A radio button is a type of graphical user interface element that allows the user to choose only one of a predefined set of options. Windows XP and programs that run under Windows XP use radio buttons to present a list of mutually exclusive options; we can select only one of the options presented. Radio buttons are usually round, and a dot in the middle indicates that the option is selected.

Check boxes

A check box is a selection tool designed so that a user can choose one or more items from a list. We can click a check box to select the item; an X or a check mark appears in a selected box. We can toggle check boxes on and off by clicking in the box.

3.2 Toolbar

A toolbar is a set of icons or buttons that are part of a software program's interface or an open

window. When it is part of a program's interface, the toolbar typically sits directly under the

menu bar. For example, Adobe Photoshop includes a toolbar that allows you to adjust settings

for each selected tool. If the paintbrush is selected, the toolbar will provide options to change the

brush size, opacity, and flow. Microsoft Word has a toolbar with icons that allow you to open,

save, and print documents, as well as change the font, text size, and style of the text. Like many

programs, the Word toolbar can be customized by adding or deleting options. It can even be

moved to different parts of the screen.

The toolbar can also reside within an open window. For example, Web browsers, such as Internet

Explorer, include a toolbar in each open window. These toolbars have items such as Back and

Forward buttons, a Home button, and an address field. Some browsers allow you to customize

the items in toolbar by right-clicking within the toolbar and choosing "Customize..." or selecting

"Customize Toolbar" from the browser preferences. Open windows on the desktop may have

toolbars as well.

Here we describe one commonly used Windows toolbar, the Quick Launch toolbar.

Windows Quick Launch toolbar

The Quick Launch toolbar is a customizable toolbar that contains shortcuts for launching programs with a single click, independent of the windows that are currently open. You can move the Quick Launch toolbar to a different position on the taskbar, and you can add buttons that represent programs to it.

Restoring the Quick Launch Toolbar

1. Right-click the Taskbar and select Toolbars > Quick Launch to display the toolbar.

2. Right-click the Taskbar and uncheck Lock the Taskbar so that the toolbar can be moved or resized.

3. Right-click the Quick Launch toolbar and use its View menu to show large icons.

4. Right-click the Quick Launch toolbar, point to the Toolbars context menu item, and then select New Toolbar… to add a new toolbar to the taskbar.

3.3 Files and Folders

Folders are used to organize the data stored on your drives. The files that make up a program are stored together in their own set of folders, just as we organize the files we create into folders. Files of a similar kind are stored in a single folder.


3.3.1 Organize files and folders on drives

Windows XP organizes folders and files in a hierarchical system. The drive is the highest level

of the hierarchy. You can put all of your files on the drive without creating any folders, but that

is like putting all of your papers in a file cabinet without organizing them into folders. It works

fine if you have only a few files, but as the number of files increases, there comes a point at

which things become very difficult to find. So you create folders and put related material

together in folders.

A diagram of a typical drive and how it is organized is shown here.

At the highest level, you have some folders and perhaps some files. You can open any of the

folders and put additional files and folders into them. This creates a hierarchy.
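The hierarchy described above can be illustrated with a short Python sketch that builds a small folder tree (the folder names are made up) and walks it, indenting one level per folder depth:

```python
import tempfile
from pathlib import Path

# A miniature version of a drive's hierarchy: top-level folders,
# each holding files and subfolders. All names are illustrative.
drive = Path(tempfile.mkdtemp())
(drive / "Letters" / "Personal").mkdir(parents=True)
(drive / "Letters" / "Business").mkdir(parents=True)
(drive / "Reports").mkdir()
(drive / "Letters" / "Personal" / "to_mom.txt").write_text("Hi Mom")

def show(folder, depth=0):
    """Print the tree, indenting one level per folder depth."""
    for item in sorted(folder.iterdir()):
        print("  " * depth + item.name)
        if item.is_dir():
            show(item, depth + 1)

show(drive)
```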

3.3.2 Create new folder

To create a new folder:

1. In the left pane, click the drive or folder in which you want to create the new folder.

2. Right-click any free area in the right pane. A context menu will appear.
3. Highlight New.

4. Click Folder.

5. Type a name for the folder.

3.3.3 Delete a file or folder

To delete a file or folder:

1. Right-click the file or folder you want to delete. A context menu will appear.

2. Click Delete. Windows Explorer will ask, "Are you sure you want to send this object to the Recycle Bin?"

3. Click Yes.

3.3.4 Copy a file or folder

To copy a file or folder:

1. Right-click the file or folder you want to copy. A context menu will appear.

2. Click Copy. The file or folder should now be on the Clipboard.

3.3.5 Cut a file or folder

To cut a file or folder:

1. Right-click the file or folder you want to cut. A context menu will appear.

2. Click Cut. The file or folder should now be on the Clipboard.


Cutting differs from deleting. When you cut a file, the file is placed on the Clipboard. When you

delete a file, the file is sent to the Recycle Bin.

3.3.6 Paste a file or folder

To paste a file or folder:

1. After cutting or copying the file, right-click the object or right-click in the right pane of

the folder to which you want to paste. A context menu will appear.

2. Click Paste.

3.3.7 Rename a file or folder

To rename a file or folder:

1. Right-click the file or folder. A context menu will appear.

2. Click Rename.

3. Type the new name.
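The Explorer operations described in the preceding subsections (create a folder, copy, cut and paste, rename, delete) have scriptable counterparts. The following Python sketch, using made-up file names inside a temporary directory, performs each of them in turn. Note that a scripted delete bypasses the Recycle Bin:

```python
import shutil
import tempfile
from pathlib import Path

base = Path(tempfile.mkdtemp())          # stands in for a drive

# Create a new folder with a file in it.
folder = base / "Projects"
folder.mkdir()
original = folder / "draft.txt"
original.write_text("first version")

# Copy + Paste: duplicate the file.
shutil.copy(original, folder / "draft_copy.txt")

# Cut + Paste: move the copy to another folder.
archive = base / "Archive"
archive.mkdir()
shutil.move(str(folder / "draft_copy.txt"), str(archive))

# Rename the original.
final = original.rename(folder / "final.txt")

# Delete it (unlike Explorer, this skips the Recycle Bin).
final.unlink()

print(sorted(p.name for p in archive.iterdir()))  # ['draft_copy.txt']
```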

3.3.8 Save a file

To save a file:

1. Click File, which is located on the menu bar. A drop-down menu will appear.

2. Click Save. A dialog box similar to the one shown here will appear.
The fields and icons of the Save dialog box are described below.

Save In field: Click to open the drop-down box and select the drive and folder to which you want to save the file.

Up One Level icon: Click this icon to move up one level in the folder hierarchy.

View Desktop icon: Click this icon to move to the Desktop folder.

Create a New Folder icon: Use this icon to create a new folder: click the Create New Folder icon, type the folder name and press Enter, then click the folder you just created to open it.

List icon: Your program displays files and folders in the center of the dialog box. To display the files without the size, type and date modified, click the List icon.

Detail icon: To display the files with the size, type and date modified, click the Detail icon.

File/Folder box: Your program displays files and folders in the File/Folder box. Click a folder to open it. Click a file if you want the current file to write over (replace) that file.

File Name field: Enter the name you want your file to have in this field.

Save As Type field: Click to open the drop-down box and select a file type.

Save button: Click the Save button to save your file.

Cancel button: Click the Cancel button if you have changed your mind and do not wish to save your file.

3.4 Drives

Drives are used to store data. Almost all computers come with at least two drives: a hard drive

(which is used to store large volumes of data) and a CD drive (which stores smaller volumes of

data that can be easily transported from one computer to another). The hard drive is typically

designated the C:\ drive and the CD drive is typically designated the D:\ drive. If you have an

additional floppy drive, it is typically designated the A:\ drive. If your hard drive is partitioned or

if you have additional drives, the letters E:\, F:\, G:\ and so on are assigned.
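Because each drive is identified by a single letter, a script can discover the drives on a machine simply by probing every letter. The Python sketch below does exactly that; on a non-Windows system the resulting list is simply empty:

```python
import os
import string

def available_drives():
    """Return the drive roots (C:\\, D:\\, ...) that exist on this
    machine by probing each letter; empty on non-Windows systems."""
    drives = []
    for letter in string.ascii_uppercase:
        root = f"{letter}:\\"
        if os.path.exists(root):
            drives.append(root)
    return drives

print(available_drives())   # e.g. ['C:\\', 'D:\\'] on a typical PC
```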

3.5 Windows Accessories

The accessories are a set of tools provided by Windows, including tools for configuring your system to meet your vision, hearing and mobility needs. Windows accessories enable you to maintain your system for optimal performance.

Here is how you can reach these Windows tools. Click:

Start > Programs > Accessories > Accessibility


Here we present a few vision, hearing and mobility tools.

 The Magnifier

The Magnifier is a display utility that makes the computer screen more readable by people who

have low vision by creating a separate window that displays a magnified portion of the screen.

Magnifier provides a minimum level of functionality for people who have slight visual

impairments.

When you open the Magnifier, a new window appears: the Magnifier Settings window. From that window you can change the level of magnification, the tracking and the presentation. To leave magnifier mode, simply click Exit.

 The Narrator

The Narrator is a text-to-speech utility for people who are blind or have low vision. Narrator

reads what is displayed on the screen—the contents of the active window, menu options, or text

that has been typed.

The Narrator is designed to work with Notepad, WordPad, Control Panel programs, Internet

Explorer, the Windows desktop, and some parts of Windows Setup. Narrator may not read words

aloud correctly in other programs. Narrator has a number of options that allow you to customize

the way screen elements are read.

The Narrator tool is available in Windows 2000 and newer versions of Windows.

 The On-screen Keyboard


On-Screen Keyboard is a utility that displays a virtual keyboard on the computer screen, allowing people with mobility impairments to type data by using a pointing device or joystick. It provides a minimum level of functionality for some people with mobility impairments.

3.6 Windows Settings using Control Panels

The Control Panel is a part of the Microsoft Windows graphical user interface which allows

users to view and manipulate basic system settings and controls via applets, such as adding

hardware, adding and removing software, controlling user accounts, and changing accessibility

options. Additional applets can be provided by third party software.

The Control Panel has been an inherent part of the Microsoft Windows operating system since its

first release (Windows 1.0), with many of the current applets being added in later versions.

Beginning with Windows 95, the Control Panel is implemented as a special folder, i.e. the folder

does not physically exist, but only contains shortcuts to various applets such as Add or Remove

Programs and Internet Options. Physically, these applets are stored as .cpl files. For example, the

Add or Remove Programs applet is stored under the name appwiz.cpl in the SYSTEM32 folder.
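Because each applet is an ordinary .cpl file, an applet can also be opened from the Run box or a script with `control.exe`. The sketch below builds such a command in Python and actually runs it only when the script is on Windows, so it is safe to try anywhere:

```python
import os
import subprocess

def open_applet(cpl_name):
    """Build the command that opens a Control Panel applet, e.g.
    appwiz.cpl for Add or Remove Programs, and run it on Windows."""
    command = ["control.exe", cpl_name]
    if os.name == "nt":                 # only meaningful on Windows
        subprocess.Popen(command)
    return command

# On any system we can at least inspect the command that would run.
print(open_applet("appwiz.cpl"))   # ['control.exe', 'appwiz.cpl']
```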

3.6.1 How to access Windows Control Panel

Windows 2000/NT and Windows 95/98/Me


 Select Start|Settings|Control Panel

Windows XP, Windows 2003 and Windows Vista

 Select Start|Control Panel

If you are using 'Classic' view in Windows XP and Windows Vista

 Select Start|Settings|Control Panel

In Vista, the word 'Start' is only visible when you hover over the 'Start' icon.

The standard way to open Control Panel is through Start-Control Panel. There are two methods

of displaying the contents. One is called the "Category View" and displays tasks by generalized

categories as shown in the figure below.


Choosing a category leads to another box with a further choice of tasks or icons for specific

control panel applets. The figure below shows the choice when "Performance and Maintenance"

is clicked.

A second way of displaying Control Panel is called the "Classic View" and displays icons for

individual applets. A partial view is shown below. Some of these applets may have several tabs

that open different functions.


Now we will go over the functions of the various Control Panel icons so you can get an idea of

what they are for and how you can use them to improve your Windows experience. The

Control Panel icons are explained below:

Accessibility Options – Here you can change settings for your keyboard, mouse, display and

sound.

Add Hardware – This will open the Add Hardware Wizard which will search your computer for

new hardware that you have installed when Windows does not recognize it on its own.
Add or Remove Programs – If you need to install or uninstall any software on your computer,

this is where you will do it. You should always uninstall software rather than delete it from your

hard drive.

Administrative Tools – This section of your Control Panel is used for administrative functions

such as managing your computer, monitoring performance, editing your security policy and

administering your computer’s services.

Automatic Updates – Here you tell Windows how and when to update itself. You can control

whether it downloads updates automatically (or at all), when you want them installed, and

whether it should ask you before installing them.

Bluetooth Devices – If you are using any Bluetooth devices on your computer, here you can add,

remove and manage them.

Date and Time – This one explains itself. You can set your computer's date, time and time zone

settings here.

Display – The display settings allow you to change the way things appear on the screen. You can

adjust items like the screen resolution and color depth. Here you can select your background

wallpaper and setup your screensaver.

Folder Options – This is where you can adjust the way you view your files and folders from

within My Computer or Windows Explorer.

Fonts – The Fonts applet allows you to add, remove and manage fonts on your computer. It will

show you what fonts are installed in your system.


Game Controllers – If you use a joystick, steering wheel or any other type of game controller

you can use this section to add, remove and troubleshoot the devices.

Internet Options – If you use Internet Explorer for your web browser you will go here to

change settings such as history, connections and security, among other things.

Keyboard – Here you can adjust settings such as how fast the keyboard will repeat a character

when a key is held down and the cursor blink rate.

Mail – The Mail applet lets you adjust your properties for your Outlook or Exchange email

settings.

Mouse – Here you can adjust your mouse setting for features such as double click speed, button

assignment and scrolling. You can also change your mouse pointers and effects as well as view

details about your mouse.

Network Connections – This item is where you can check and adjust your network connection

settings. It will take you to the same place as if you were to right click My Network Places and

choose properties. It will show all of your active network, dialup and wireless connections. There

is also a New Connection Wizard to help you setup a new connection.

Phone and Modem Options – If you have a modem installed on your system and use it for

dialup connections or faxing you can change the settings here. The Dialing Rules tab allows you

to change settings for things such as dialing a number to get an outside line and setting up carrier

codes for long distance and using calling cards. The Modems tab allows you to add, remove and
change the properties of installed modems. The Advanced tab is for setting up telephony

providers.

Power Options – Here you adjust the power settings of your computer. Windows has built in

power schemes for different settings such as when to turn off the monitor or hard drives and

when to go into standby mode. You can even create your own schemes and save them. The

advanced tab allows you to assign a password to bring the computer out of standby and tell the

computer what to do when the power or sleep buttons are pressed. If you want to enable

hibernation or configure an attached UPS then you can do it here as well. This area can also be

accessed from the display properties settings under the Screensaver tab.

Printers and Faxes – This area is where your printers are installed and where you would go to

manage their settings. It is the same area that is accessible from the Start menu. There is an add printer

wizard which makes it easy to install new printers. To manage a printer you would simply right

click it and select properties.

Regional and Language Options – If you need to have multiple languages or formats for

currency, date and time you can manage them here.

Scanners and Cameras – Windows provides a central place to manage your attached scanners

and camera and adjust their settings. There is even a wizard to add new devices to make the

process of installing a scanner or camera easier.

Scheduled Tasks – This item provides the ability for you to schedule certain programs to run at

certain times of the day. For example if you have a batch file you want to run every night you
can set it up here. You can also have it run a program at any scheduled interval you choose.

There is a handy wizard to help you through the process.

Security Center – The Windows Security Center checks the status of your computer's firewall,

virus protection and automatic updates. A firewall helps protect your computer

by preventing unauthorized users from gaining access to it through a network or the Internet.

Antivirus software can help protect your computer against viruses and other security threats.

With Automatic Updates, Windows can routinely check for the latest important updates for your

computer and install them automatically.

Sounds and Audio Devices – Here you can adjust your sound and speaker settings. The Volume tab

has settings to mute your system, have a volume icon placed in the taskbar and tell your

computer what type of speakers you are using such as a 5.1 system. The sounds tab lets you

adjust which sounds occur for which Windows events. If you need to change which device is used for

playback and recording you can do it under the Audio tab. Voice playback and recording settings

are under the Voice tab. To troubleshoot your sound device you can use the Hardware tab. This

is where you can get information about your particular sound device.

Speech Properties – Windows has a feature for text to speech translation where the computer

will read text from documents using a computer voice that you can hear through your speakers.

The type of voice and speed of the speech can be adjusted here.

System – If you have ever right clicked My Computer and selected Properties then you have

used the System feature of Control Panel. This area gives you information about your computer’s

configuration, name and network status. You can click on the Hardware tab to view details about
hardware profiles and driver signing as well as get to Device Manager. The Advanced tab lets

you change settings for virtual memory (page files) and other performance settings. There is also

an area to change startup and recovery settings if needed. If you want to enable remote access to

your computer for Remote Desktop or Remote Assistance you can enable it here.

Taskbar and Start Menu – This is where you change the setting for your taskbar and Start

menu.

User Accounts – If you need to manage your local computer users then you need to go to user

accounts. You can add or remove users and change the account types for users who log into your

system.

Windows Firewall – This is the same firewall setting described in the Windows Security Center

section.

Wireless Network Setup Wizard - This wizard is used to help you setup a security enabled

wireless network in which all of your computers and devices connect through a wireless access

point.

4. SUMMARY

In this lesson, we have learnt about the toolbar of the Windows operating system and various

dialog boxes like Tabs, List Boxes, Radio button, Check Boxes etc. Here we have explained

some of the operations on files & folders, like how to save a file to different drives and how to

organize folders within the computer. In this lesson, different accessories of the Windows operating
system like the Magnifier, the Narrator and the On-Screen Keyboard have been explained. Lastly,

we have discussed in detail the working of the Control Panel.

5. SUGGESTED READINGS / REFERENCE MATERIAL

 Easy Microsoft Windows XP (4th Edition) (Easy): by Shelley O'Hara

 Teach Yourself VISUALLY Windows XP (Teach Yourself Visually): by Paul McFedries

 Basic Computing with Windows XP: Learning Made Simple: by P K McBride

 Microsoft Windows XP Professional Resource Kit, Third Edition: by Charlie Russel,

Sharon Crawford, Microsoft Windows Team

6. SELF ASSESSMENT QUESTIONS (SAQ)

 What is a dialog box? Explain the functions of dialog boxes using suitable examples.

 What is a control panel? Explain the working of control panel in detail.

 What is a toolbar? Discuss the importance of toolbar.

 Discuss various tools available in Windows Accessories.


CS-DE-15

LINUX

LESSON NO. 12

Writer: Dr. Sanjay Tyagi

Vetter: Dr. Pardeep Kumar

STRUCTURE

1. Introduction

2. Objectives

3. Presentation of Contents

3.1 Components of Linux System

3.2 Features of Linux

3.3 Linux Architecture

3.4 Using Linux

3.5 Linux Commands

3.5.1 File System, Commands, Permissions Changing

3.5.2 Some Commonly Used Commands

4. Summary

5. Suggested Readings / Reference Material

6. Self Assessment Questions (SAQ)

1. INTRODUCTION

Linux is very similar to other operating systems, such as Windows and UNIX. But

something sets Linux apart from these operating systems. Since its inception in 1991,

Linux has grown to become a force in computing, powering everything from the New

York Stock Exchange to mobile phones to supercomputers to consumer devices.

On August 25, 1991, a Finnish computer science student named Linus Torvalds made the

following announcement to the Usenet group comp.os.minix: “I'm doing a (free)

operating system (just a hobby, won't be big and professional like gnu) for 386(486) AT

clones”. Linux has gained strong popularity amongst UNIX developers, who like it for its

portability to many platforms, its similarity to UNIX, and its free software license.

Today, Linux is a multi-billion dollar industry, with companies and governments around

the world taking advantage of the operating system's security and flexibility. Thousands

of companies use Linux for day-to-day use, attracted by the lower licensing and support

costs. Governments around the world are deploying Linux to save money and time, with

some governments commissioning their own versions of Linux.

Linux is open source as its source code is freely available. It is free to use. Linux was

designed considering UNIX compatibility. Its functionality list is quite similar to that of

UNIX.

Early in its development, Linux's source code was made available for free on the Internet.

As a result, its history has been one of collaboration by many users from all around the

world, corresponding almost exclusively over the Internet. From an initial kernel that

partially implemented a small subset of the UNIX system services, Linux has grown to

include ever more UNIX functionality. In its early days, Linux development revolved

largely around the central operating system kernel - the core, privileged executive that

manages all system resources and that interacts directly with the hardware.

Much more than this kernel is needed to produce a full operating system, of course. It is

useful to make the distinction between the Linux kernel and a Linux system. The kernel

in Linux is an entirely original piece of software developed from scratch by the Linux

community; the Linux system, as we know it today, includes a multitude of components,

some written from scratch, others borrowed from other development projects or created

in collaboration with other teams.

The basic Linux system is a standard environment for applications and for user

programming, but it does not enforce any standard means of managing the available

functionality as a whole. As Linux has matured, there has been a need for another layer of

functionality on top of the Linux system. A Linux distribution includes all the standard

components of the Linux system, plus a set of administrative tools to simplify the initial

installation and subsequent upgrading of Linux, and to manage installation and un-

installation of other packages on the system. A modern distribution also typically

includes tools for management of file systems, creation and management of user

accounts, networking administration, and so on.

2. OBJECTIVES

As an open operating system, Linux is developed collaboratively, meaning no one

company is solely responsible for its development or ongoing support. Companies

participating in the Linux economy share research and development costs with their

partners and competitors. This spreading of development burden amongst individuals and

companies has resulted in a large and efficient ecosystem and unheralded software

innovation.

In the present chapter, Linux operating system has been introduced along with its

components, features, architecture and some Linux commands.

3. PRESENTATION OF CONTENTS

3.1 Components of Linux System

Linux Operating System has primarily three components:

 Kernel - Kernel is the core part of Linux. It is responsible for all major activities

of this operating system. It consists of various modules and interacts directly

with the underlying hardware. Kernel provides the required abstraction to hide

low level hardware details to system or application programs.

 System Library - System libraries are special functions or programs through which

application programs or system utilities access the kernel's features. These libraries

implement most of the functionalities of the operating system and do not require the

kernel code's access rights.

 System Utility - System Utility programs are responsible for doing specialized,

individual-level tasks.

Figure 12.1

The Linux system is composed of three main bodies of code, in line with most traditional

UNIX implementations:

 The kernel is responsible for maintaining all the important abstractions of the

operating system, including such things as virtual memory and processes.

 The system libraries define a standard set of functions through which applications can

interact with the kernel, and which implement much of the operating-system

functionality that does not need the full privileges of kernel code.

 The system utilities are programs that perform individual, specialized management

tasks. Some system utilities may be invoked just once to initialize and configure some

aspect of the system; others (known as daemons in UNIX terminology) may run

permanently, handling such tasks as responding to incoming network connections,

accepting logon requests from terminals, or updating log files.

Figure 12.1 illustrates the various components that make up a full Linux system.

The most important distinction here is between the kernel and everything else. All the

kernel code executes in the processor's privileged mode with full access to all the

physical resources of the computer. Linux refers to this privileged mode as kernel mode,

equivalent to the monitor mode. Under Linux, no user-mode code is built into the kernel.

3.2 Features of Linux

Following are some of the important features of Linux Operating System:

 Portable - Portability means software can work on different types of hardware in the

same way. The Linux kernel and application programs support installation on

any kind of hardware platform.

 Open Source - Linux source code is freely available and it is a community-based

development project. Multiple teams work in collaboration to enhance the

capability of Linux operating system and it is continuously evolving.

 Multi-User - Linux is a multiuser system, meaning multiple users can access system

resources like memory/ RAM/ application programs at the same time.

 Multiprogramming - Linux is a multiprogramming system, meaning multiple

applications can run at the same time.

 Hierarchical File System - Linux provides a standard file structure in which

system files/ user files are arranged.

 Shell - Linux provides a special interpreter program which can be used to execute

commands of the operating system. It can be used to do various types of

operations, call application programs etc.

 Security - Linux provides user security using authentication features like

password protection/ controlled access to specific files/ encryption of data. It

supports a very strong security system. It enforces security at three levels. Firstly,

each user is assigned a login name and a password. So, only the valid users can

have access to the files and directories. Secondly, each file is bound around

permissions (read, write, execute). The file permissions decide who can read or

modify or execute a particular file. The permissions once decided for a file can

also be changed from time to time. Lastly, file encryption comes into picture. It

encodes your file in a format that cannot be very easily read. So, if anybody

happens to open your file, even then he will not be able to read the text of the file.

However, you can decode the file for reading its contents. The act of decoding a

coded file is known as decryption.

 Multitasking – Linux has the facility to carry out more than one job at the same

time. This feature of LINUX is called multitasking. You can keep typing in a

program in its editor while at the same time execute some other command given

earlier like copying a file, displaying the directory structure, etc. The latter job is

performed in the background and the earlier job in the foreground. Multitasking is

achieved by dividing the CPU time intelligently between all the jobs that are

being carried out. Each job is carried out according to its priority number. Each

job gets appropriately small timeslots in the order of milliseconds or

microseconds for its execution giving the impression that the tasks are being

carried out simultaneously.

 Built-in Networking – LINUX has got built in networking support with a large

number of programs and utilities. It also offers an excellent medium for

communication with other users. The users have the liberty of exchanging mail,

data, programs, etc. You can send your data at any place irrespective of the

distance over a computer network.

 Multiplatform – LINUX runs on many different CPUs, not just Intel.

 Multiprocessor - SMP support is available on the Intel and SPARC platforms

(with work currently in progress on other platforms), and Linux is used in several

loosely-coupled MP applications, including Beowulf systems and the Fujitsu

AP1000+ SPARC-based supercomputer.

 Multithreading: LINUX has native kernel support for multiple independent

threads of control within a single process memory space.

 It has memory protection between processes, so that one program can't bring the

whole system down.

 Demand loads executables: Linux only reads from disk those parts of a program

that are actually used.

 It uses virtual memory using paging (not swapping whole processes) to disk to a

separate partition or a file in the filesystem, or both, with the possibility of adding

more swapping areas during runtime (yes, they're still called swapping areas). A

total of 16 of these 128 MB (2GB in recent kernels) swapping areas can be used at

the same time, for a theoretical total of 2 GB of usable swap space. It is simple to

increase this if necessary, by changing a few lines of source code.

 There is a unified memory pool for user programs and disk cache, so that all free

memory can be used for caching, and the cache can be reduced when running

large programs.

 It has dynamically linked shared libraries (DLL’s) and static libraries too, of

course.

 It does core dumps for post-mortem analysis, allowing the use of a debugger on a

program not only while it is running but also after it has crashed.

 Linux is mostly compatible with POSIX, System V, and BSD at the source level.

 All source code is available, including the whole kernel and all drivers, the

development tools and all user programs; also, all of it is freely distributable.

Plenty of commercial programs are being provided for Linux without source, but

everything that has been free, including the entire base operating system, is still

free.

 It supports many national or customized keyboards, and it is fairly easy to add

new ones dynamically.

 It supports multiple virtual consoles: several independent login sessions through

the console, between which you switch by pressing a hot-key combination. These are

dynamically allocated; you can use up to 64.

 It supports several common filesystems, including minix, Xenix, and all the

common system V filesystems, and has an advanced filesystem of its own, which

offers filesystems of up to 4 TB, and names up to 255 characters long.

 Linux has transparent access to MS-DOS partitions (or OS/2 FAT partitions) via a

special file system. You don't need any special commands to use the MS-DOS

partition; it looks just like a normal UNIX filesystem (except for funny

restrictions on filenames, permissions, and so on). MS-DOS 6 compressed

partitions do not work at this time without a patch (dmsdosfs). VFAT (Windows NT,

Windows 95) support and FAT-32 are available in Linux 2.0.

 It has a special filesystem called UMSDOS which allows Linux to be installed on a

DOS filesystem.

 HFS (Macintosh) file system support is available separately as a module.

 It has a CD-ROM filesystem which reads all standard formats of CD-ROMs.

 It allows TCP/IP networking, including ftp, telnet, NFS, etc.

 It uses many networking protocols. The base protocols available in the latest

development kernels include TCP, IPv4, IPv6, AX.25, X.25, IPX, DDP

(Appletalk), Netrom, and others. Stable network protocols included in the stable

kernels currently include TCP, IPv4, IPX, DDP, and AX.25.

3.3 Linux Architecture

The interaction between the user and the hardware happens through the operating

system. The operating system interacts directly with the hardware. It provides common

services to programs and hides the hardware intricacies from them. The high level

architecture of the Linux system is shown in Figure 12.2.

The hardware of a Linux system is present in the centre of the diagram. It

provides the basic services such as memory management, processor execution level, etc.

to the operating system.

Figure 12.2 System Architecture of Linux

Linux System Architecture consists of following layers:

 Hardware layer - Hardware consists of all peripheral devices (RAM/ HDD/ CPU

etc).

 Kernel - Core component of Operating System, interacts directly with hardware,

provides low level services to upper layer components.

 Shell - An interface to kernel, hiding complexity of kernel's functions from users.

Takes commands from user and executes kernel's functions.

 Utilities - Utility programs that give the user most of the functionalities of an

operating system.

3.4 Using Linux

Steps to Login

Logging in is a procedure that tells the Linux System who you are; the system

responds by asking you the password. So, in order to login, first, connect your PC to the

Linux system. After a successful connection is established, you would find the following

prompt coming up on the screen.

Login:

Each user on the Linux system is assigned an account name, which identifies him

as a unique user. The account name has eight characters or less and is usually based on

the first name or the last name. It can have any combination of letters and numbers.

Thus, if you want to access the Linux resources, you should have your account

name first. If you don't as yet have an account name, ask the system administrator to

assign you one. Now, at the login prompt, enter your account name and press the Enter key.

Type your account name in lowercase letters. Linux treats uppercase letters differently

from lowercase letters.

Login: sanjay

Password: ******

Once the login name is entered, Linux prompts you to enter a password. While

you are entering your password, it will not be shown on the screen. This is just a security

measure adopted by Linux. The idea behind is that people standing around you are not

able to look through the secret password by looking at the screen. Be careful while you

are typing your password because you will not be able to see what you have typed.

However, if you give either the login name or the password wrong, then Linux denies

you the permission to access its resources. The system then shows an error message on

the screen, which is given below:

Login: sanjay

Password: ******

Login incorrect:

Login:

Many Linux systems give you three or four chances to enter your login and

password correctly. So, key in your correct login name and the password again. Once you

have successfully logged on by giving a correct login and password, you are given some

information about the system, some news for users and a message whether you have an

electronic mail or not.

Login: sanjay

Password:

Last login: Sun May 14 9:00:29 on tty02

You have mail

The dollar sign is Linux's way of telling you that it is ready to accept commands

from the user. Your prompt may look different if your system is configured to show a

different one. By default, a $ is shown for the Korn or

Bourne shells.

At this point, you are ready to enter your first Linux command. Now, when you

are done working on your Linux system and decide to leave your terminal - then it is

always a good idea to log off the system. It is very dangerous to leave your system

without logging out because some mischievous minds could tamper with your files and

directories. They can delete your files. They can also read through your private files.

Thus, logging off the system is always a better idea than turning off your terminal. In

order to log off the system, type the following command:

$ exit

login:

The above command will work if you are using a Bourne or a Korn shell.

However, if you are working in the C shell, exit will also work, or you can use another command

to log off.

$ logout

login:

3.5 Linux Commands

There are a few Linux commands that you can type standalone. For

example, ls, date, pwd, logout and so on. But Linux commands generally require some

additional options and/or arguments to be supplied in order to extract more information.

Linux commands follow this general format: command [options] [arguments]

The options/arguments are specified within square brackets if they are optional. The

options are normally specified by a “-“ (hyphen) followed by a letter, one letter per option.
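As an illustration of this format (using the ls command purely as an example), single-letter options follow a hyphen, and several options may be combined behind one hyphen:

```shell
# "ls" is the command, "-l" and "-a" are options, "/" is the argument.
ls -l -a /

# Single-letter options can be combined after one hyphen:
ls -la /
```

Both invocations produce the same listing.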

There are many commands you will use regularly. Let us discuss some of these

commands:

 banner - To display text in large, banner-style letters.

 cal - To display calendar on the screen.

 date - To display and set the current system date and time.

 passwd - To install or change the password on the system.

 who - To determine the users currently logged into the system.

 finger - Gives specific information about a user.

cal Command

The cal command creates a calendar of the specified month for the specified year. If you

do not specify the month, it creates a calendar for the entire year. By default this

command shows the calendar for the current month based on the system date. The cal command

writes its output to the standard output.

Syntax: cal [ [mm] yy ]

where mm is the month, an integer between 1 and 12, and yy is the year, an integer

between 1 and 9999. A 4-digit year must be used; '98' will not

produce a calendar for 1998.

Options: None

Examples

(i) $ cal

(ii) $ cal 1998

The latter command displays the calendar for the entire year, 1998. The whole year

will scroll by on the screen, month by month.

(iii) $ cal 1998 | lpr

The above command prints the calendar for the entire year onto the printer.

date Command

It shows or sets the system date and time. If no argument is specified, it displays the

current date and the current time.

Syntax: date [+format]

Options:

%D displays date as mm/dd/yy

%a displays abbreviated weekday (Sun to Sat)

%T displays time as HH:MM:SS

%r displays time as HH:MM:SS (A.M./P.M.)

%d displays only dd

%m displays only mm

Examples

(i) $date

It will display the date as: Mon Jun 9 04:50:24 EDT 1998.

(ii) $ date +%D

It will display the date as: 11/12/98

(iii) $date +%r

It will display the time as: 07:20:50 PM

If you are working in the superuser mode, you can set the date as shown below:

(iv) $ date MMddhhmm[yy]

where MM = Month (1-12)

dd = day (1-31)

hh = hour (0-23)

mm = minutes (0-59)

yy = Year

It sets the system date and time to the value specified by the argument.
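The format specifiers above can also be combined in a single quoted string. A small sketch (the output shown will of course vary with the current date):

```shell
# Day of month only, e.g. 09
date +%d

# Several specifiers combined: abbreviated weekday, then dd/mm/yy
date "+%a %d/%m/%y"
```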

passwd Command

The passwd command allows the user to set or change the password. Passwords are set to

prevent unauthorized users from accessing your account.

Syntax: passwd [user-name] Options:

-d Deletes your password

-x days. This sets the maximum number of days that the password will remain active.

After the specified number of days you will be required to give a new password.

-n days. This sets the minimum number of days the password has to be active, before it

can be changed.

-s: This gives you the status of the user's password.

Only the superuser can use the above options.

Examples

$ passwd -x 40 bobby

The above command will set the password of the user 'bobby' to remain active for

a maximum of 40 days. Also note that the passwd program will prompt you twice to enter

your new password. If you don't type the same thing both the times, it will give you one

more chance to set your password.

$passwd bobby

Old password:

New password:

Re-enter new password:

who Command

The who command lists the users that are currently logged into the system.

Syntax: who [options]

Options:

 u - lists the currently logged-in users along with their process-ids.

 t - gives the write status of all logged-in users.

 am i - lists the login-id and terminal of the user invoking this command.

Examples:

(i) $who –t

Output:

Sanjay tty01 Jan 12 9:50

Bobby tty02 Jan 12 10:10

The second column here shows whether the user has write permission or not.

(ii) $who –u

Output:

Sanjay tty01 Jan 12 9:50 1235

Bobby tty02 Jan 12 10:10 2401

The last column here denotes the process-id.

(iii) $ who am i

Output:

Sanjay tty06 Jan 12 14:34

This command shows the account name, where and when I logged in. It also shows the

computer terminal being used.

finger Command

In a larger system, you may get a big list of users shown on the screen. The finger

command with an argument gives you more information about the user. The finger

command followed by an argument can give complete information for a user who is not

logged onto the system.

Syntax: finger [user-name] Options: none

Examples

(i) $ finger sanjay

This command will give more information about sanjay's identity as shown below:

Login name: sanjay

Directory: /home/sanjay

Last login Fri May 16 12:14:40 on tty0l

Project: X window programming

sanjay tty0l May 18 20:05

If you want to know about everyone currently logged onto the system, give the following

command:

$finger

3.5.1 File System, Commands, Permissions Changing

A file is a unit for storing information in a Linux system. All utilities, applications and data

are represented as files. The file may contain executable programs, texts or databases.

They are stored on secondary memory storage such as a disk or magnetic tape.

Naming Files

You can give filenames up to 255 characters long on modern Linux filesystems (older UNIX systems allowed only 14). The name may contain alphabets, digits

and a few special characters. Files in Linux do not have the concept of primary or

secondary name as in DOS, and therefore file names may contain more than one

period(.).

Therefore, the following file names are all valid filenames:

mkt.c, name2.c, .star, a.out

However, Linux file names are case sensitive. Therefore, the following names represent

two different files in Linux - mkt.c and Mkt.c

Types

The files under Linux can be categorized as follows:

 Ordinary files

 Directory files

 Special files

These file types are discussed below:

Ordinary Files

Ordinary files are the ones with which we are all familiar. They may contain executable
programs, text or databases. You can add to, modify or delete their contents, or remove the file
entirely.

Directory Files

Directory files, as discussed earlier, represent a group of files. They contain a list of file
names and other information related to these files. Some of the commands that
manipulate directory files differ from those for ordinary files.

Special Files

Special files are also referred to as device files. These files represent physical devices
such as terminals, disks, printers, tape-drives, etc. They are read from or written
into just like ordinary files, except that an operation on these files activates the corresponding physical
device. Special files can be of two types: character device files and block device files. In
character device files, data is handled character by character, as in the case of terminals and
printers. In block device files, data is handled in large chunks called blocks, as in the case of
disks and tapes.
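The type of each file shows up as the first character of an `ls -l` listing. A minimal sketch, using an invented scratch directory under /tmp:

```shell
# The first character of each `ls -l` entry encodes the file type:
# '-' ordinary file, 'd' directory, 'c' character device, 'b' block device.
mkdir -p /tmp/ftdemo && cd /tmp/ftdemo
echo hello > notes.txt        # an ordinary file
mkdir subdir                  # a directory file
ls -l                         # notes.txt starts with '-', subdir with 'd'
ls -l /dev/null               # a character special file: line starts with 'c'
```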

File Names and Meta characters

In Linux, we can refer to a group of files with the help of METACHARACTERS. These
are similar to wild-card characters in DOS. The valid metacharacters are *, ? and [ ].

* - used in place of any number of characters, including none.

? - used in place of one and only one character.

[ ] - brackets are used to specify a set or a range of characters.

Examples

(i) $ ls *c

It will list all files starting with any character or characters and ending with the character
c.

(ii) $ ls robin*

It will list all the files starting with robin, followed by any characters (or none).

(iii) $ ls x?yz*

It will list all those files, in which the first character is x, the second character can

be anything, the third and fourth characters should be respectively y and z and that the

rest of the name can be anything.

(iv) $ ls I[abc]mn

It will list all those files in which the first character is I, the second character is
either a, b or c, and the last two characters, i.e. the 3rd and 4th, are m and n respectively.

Alternatively, the above command can also be given in the following manner:

$ ls I[a-c]mn

You should be very careful in using these metacharacters while deleting files;
they may unintentionally erase many files at one time.
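The patterns above can be tried safely in a scratch directory; the file names in this sketch are invented for illustration:

```shell
# Create a scratch directory with a few sample files, then
# exercise the *, ? and [ ] metacharacters against them.
mkdir -p /tmp/globdemo && cd /tmp/globdemo
touch mkt.c sale.c robin1 robin2 x1yz.log Iamn Ibmn
ls *.c        # matches mkt.c and sale.c
ls robin*     # matches robin1 and robin2
ls x?yz*      # ? matches exactly one character: x1yz.log
ls I[a-c]mn   # matches Iamn and Ibmn
```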

File Security and Ownership

The data is centralized on a system working with Linux. However, if you do not
take care of your data, it can be accessed by any other user who logs in, and nothing
would remain private to a person or a group of persons. The first step towards data
security is the use of passwords. The next step is to guard the data among these
users. If the number of users is small, this is not much of a problem, but it can be
problematic on a large system supporting many users.

Linux can thus differentiate between files belonging to an individual (the owner of a file), a
group of users, or all others, each with different limited access, as the case may be. The
different ways to access a file are:

Read (r) - You can just look through the file.

Write (w) - You can also modify it.

Execute (x) - You can execute it.

Therefore, if you have a file called vendor.c and you are its owner, you
may give yourself all the rights: rwx (read, write and execute). You can give
rx (read and execute) rights to the members of your group and only the x (execute) right
to all others.

Normally, when you create a file, you are the owner of the file and your group
becomes the group-id for the file. The system assigns a default set of permissions for the file,
as set by the system administrator. The owner can change these permissions at will,
but only the superuser can change the permissions (rwx), ownership and group-id of
any file in the system.
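The default set of permissions mentioned above is derived from the umask value, a mask of permission bits withheld from newly created files. A minimal sketch, assuming a umask of 022 and an invented scratch directory:

```shell
# New files start from mode 666 (rw-rw-rw-) and new directories from 777,
# minus the bits set in the umask. With umask 022, group and others
# lose write permission, so a fresh file gets rw-r--r--.
umask 022
mkdir -p /tmp/umaskdemo && cd /tmp/umaskdemo
touch fresh.txt
ls -l fresh.txt    # shows -rw-r--r--
```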

Giving the execute permission to a data file is meaningless.
Similarly, giving the write (w) permission to an executable file rarely makes sense.
For directories, the execute (x) permission means that you can search through the directory,
the write (w) permission means that you can create or remove files in the directory,
and the read (r) permission means that you can list the files in the directory.
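These rules can be seen in action with chmod and ls; a minimal sketch with invented file names (chmod itself is described in detail later in this lesson):

```shell
# Give a data file read/write for the owner and read-only for the
# group and others; make a directory searchable (x) but not writable
# by the group or others.
mkdir -p /tmp/permdemo && cd /tmp/permdemo
touch vendor.c && mkdir -p reports
chmod u=rw,g=r,o=r vendor.c
chmod u=rwx,g=rx,o=rx reports
ls -l vendor.c     # -rw-r--r--
ls -ld reports     # drwxr-xr-x
```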

3.5.2 Some Commonly Used Commands

Now, let us discuss some of the commonly used file and directory commands as

listed below.

ls – lists files and directories,

cat – concatenates and displays the contents of files,

mkdir – creates a new directory,

cd – changes the current directory,

rmdir – removes a directory from the file system, and

chmod – changes the access modes of a file, etc.

The ls Command

The ls command is used for listing information about files and directories.

Syntax: ls [-options] [filename]

Here, filename can be the name of a directory or a file. If it is a directory, it lists
information about all the files in the directory; if it is a file, it lists information about
the file specified. You can also use metacharacters to choose specific files.

Options:

-a - Lists all directory entries including dot (.) entries.

-d - Gives the names of directories only.

-g - Prints the group id in the long listing.

-i - Prints the inode number of each file in the first column.

-l - Lists in the long or detailed format.

-s - Lists the disk blocks (of 512 bytes each) occupied by each file.

-u - Sorts file names by time since last access.

-t - Sorts file names by time since last modified.

-R - Recursively lists all subdirectories.

-F - Marks the type of each file.

You can make use of more than one option at a given time, just group them together and

precede them with "-".

Examples

(i) $ls -lu /usr/*.c

Output: mkt

(ii) $ls -l /usr/mkt

The above command gives a long listing of the mkt directory, which is inside the

usr directory as illustrated below:

drwxr-x--- 3 bobby engineer 79 May 15 15:22 bin

-rwxr-x--- 2 bobby engineers 422 Jan 12 10:38 sale.c

(iii) $ ls –l /dev

The above command gives a typical listing of the dev directory in the root as

illustrated below:

crw--w--w- 1 root 0,0 Feb 19 12:05 console

brw------- 1 root 0,1 Feb 19 11:08 hpob

brw-rw---- 1 root 6,2 Jun 06 11:18 ntl

crw-rw---- 1 gemale 1,5 Feb 10 12:30 tty04

The cp Command

The cp command creates a duplicate copy of a file.

Syntax: cp file1 file2

Options: None

The cp command of Linux copies one file to another, or one or more files to a
directory. Here, file1 is copied as file2. If file2 already exists, the new file overwrites
it. The file names specified may be full path names or just names (the current working
directory will then be assumed).

Examples

(i) $ cp mkt.c new-mkt.c

Here the file 'mkt.c' which is present in the current directory will be copied as

'new-mkt.c' in the same current directory.

(ii) $ cp *.c /usr/mkt

The above command will copy all the files ending with '.c' to the
directory called mkt present under the usr directory.

The mv Command

This command moves or renames files.

Syntax: mv file1 file2

Options: None

The mv command moves a file from one directory to another. Here,
'file1' refers to the source filename and 'file2' refers to the destination filename. Moving a
file within the same directory is equivalent to renaming the file. In fact,
mv doesn't really move the file's data; it just renames it and changes the directory entries.

Examples

(i) $ mv *.c /usr/mkt

This command will move all the files ending with the letter '.c' to the directory

called 'mkt'.

(ii) $ mv mkt.c new-mkt.c

Here, mv will rename the file 'mkt.c' present in the current working directory as

'new-mkt.c' in the same working directory.

The ln Command

The ln command adds one or more links to a file.

Syntax: ln file1 file2

The ln command establishes a link to an existing file. File name 'file1'
specifies the file that has to be linked and file name 'file2' specifies the name or directory under
which the link has to be established. If 'file2' is in the same directory as 'file1', the
file seems to carry two names, but physically there is only one copy. If you use the ls -li
command, you will find that the link count has been incremented by one and that both
files have the same inode number, as they refer to the same data blocks on the disk. Any
changes that are made to one file will be reflected in the other. If 'file2' specifies a
different directory, the file will be physically present at one place but will appear as if it
is also present in the other directory, thereby allowing different users to access the file. This saves a
lot of disk space because the file is not duplicated.

But you should note that you should have write permission to the directory under

which the link is to be created.

(i) $ln /usr/mkt/mkt.c /usr/mktl/new-mkt.c

This will create a link for file mkt.c in 'mkt' directory to 'mktl' directory by the

name 'new-mkt.c'.

(ii) $ ln myfile.prg new-file.prg

The above command links the file 'myfile.prg' as 'new-file.prg' in the same
directory. You can see these files by giving the ls command.
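The link count and shared inode mentioned above can be verified directly; a sketch using the same invented file names in a scratch directory:

```shell
# Link a file, then check with ls -li that both names share one inode
# and that the link count has risen to 2. A change made through one
# name is visible through the other, since the data blocks are shared.
mkdir -p /tmp/lndemo && cd /tmp/lndemo
echo data > myfile.prg
ln myfile.prg new-file.prg
ls -li myfile.prg new-file.prg   # same inode number, link count 2
echo more >> myfile.prg
cat new-file.prg                  # shows both lines
```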

The rm Command

The rm command removes files or directories.

Syntax: rm [options] file(s)

This command is used for removing either a single file or a group of files. When
you remove a file, you are actually removing a link. The space occupied by the file on the
disk is freed only when you remove the last link to the file.

The Options:

-i - confirms on each file before deleting it.

-f - forcibly removes files without asking for confirmation, even files that do not have write permission.

-r - deletes a directory and its contents, along with all the sub-directories and their
contents.

Examples

(i) $ rm -r /usr/mkt

The above command will remove all the files and sub-directories (and their
contents) of the mkt directory, and the mkt directory itself.

(ii) $ rm /usr/mkt/*.c

This will remove all the ‘c’ program files (.c) from the mkt directory.

The cat Command

It displays the contents of a file onto the screen.

Syntax: cat file(s)

cat writes the contents of one or more files onto the screen in the sequence specified.
If you do not specify an input file, cat reads its data from the standard input, generally
the keyboard.

Examples

(i) $ cat /usr/mkt/new-mkt.c

This command will display the contents of ‘c’ program file ‘new-mkt.c’ onto the screen.
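A small sketch of cat joining files, using invented file names in a scratch directory; redirecting the output with '>' stores the combined text in a new file:

```shell
# cat concatenates the named files in the order given. Redirecting
# its standard output creates a combined copy on disk.
mkdir -p /tmp/catdemo && cd /tmp/catdemo
echo "first part" > a.txt
echo "second part" > b.txt
cat a.txt b.txt            # prints both files in sequence
cat a.txt b.txt > both.txt # combined copy
```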

The chmod Command

The chmod command changes the access permissions of a file.

Syntax: chmod <for whom><operation><permission> filename(s)

Only a superuser can change these permissions for any file on the system; the owner can change them for his own files. In the syntax,
'for whom' denotes the user type, and can be:

u - the user, i.e. the file owner

g - the group to which the file owner belongs

o - other users, not part of the group

a - all users

'operation' denotes the action to be done, and can be:

+ add permission

- remove permission

= assign permission

‘permission' can be:

r - read permission

w - write permission

x - execute permission

‘filename(s)’ can be the files on which you want to carry out this command.

Examples - Now let us take an example.

(i) First see the file permissions using the ls -l command for mkt.c as shown below:

$ ls -l mkt.c

Output: -rwx--x--x 2 root other 1428 May 15 07:34 mkt.c

i.e. the user has rwx, the group has x and all others also have x permission.

(ii) Now use the chmod command as illustrated below:

$ chmod u-x,g+w,o+r mkt.c

The above command removes the execute (x) permission for the user, gives write (w) permission
to the group and gives read (r) permission to all others.

(iii) Then again use the ls -l command to verify whether the permissions have been
set or not.

$ ls -l mkt.c

Output: -rw--wxr-x 2 root other 1428 May 15 07:34 mkt.c

Alternatively, you could have used the following command to do the same work:

$ chmod u=rw,g=wx,o=rx mkt.c

If we use

$ chmod a=rwx mkt.c

$ ls -l mkt.c

Output: -rwxrwxrwx 2 root other 1428 May 15 07:34 mkt.c

In the above command, a=rwx assigns read, write and execute permission to all users.

The chown Command

The chown command changes the owner of the specified file(s).

Syntax: chown new-owner filename

This command requires you to be in the superuser mode. The new owner can be specified by
name or by user number (user-id), but the new owner should have an entry in the /etc/passwd file. The filename is the
name(s) of the file(s) whose owner is to be changed.

Options: None

Examples

(i) $chown bobby sales.c

The above command now makes bobby the owner of sales.c file.

(ii) $ls -l sales.c

Output: -rwxr-x--x 1 bobby engineer 1826 May 15 17:56 sales.c

The chgrp Command

The purpose of chgrp command is to change the group of a file.

Syntax: chgrp group filename.

Only the superuser can use this command. This command changes the group ownership

of a file. Here group denotes the new group-ID and filename denotes the file whose

group-ID is desired to be changed.

Example

$ chgrp sanjay sales.c

This changes the group-ID of the file 'sales.c' to the group called 'sanjay'.

Quick Guide to Commands:

Following is a quick guide that lists commands, their syntax and a brief description. For
help with any Linux command, we can use:

$man command

Files and Directories:

These commands allow you to create directories and handle files.

Command Description

cat Display File Contents

cd Changes Directory to dirname

chgrp change file group

chmod Changing Permissions

cp Copy source file into destination

file Determine file type

find Find files

grep Search files for regular expressions.

head Display first few lines of a file

ln Create a link to an existing file

ls List information about files and directories

mkdir Create a new directory dirname

more Display data in paginated form.

mv Move (rename) oldname to newname

pwd Print current working directory.

rm Remove (Delete) filename

rmdir Delete an existing directory provided it is empty.

tail Prints last few lines in a file.

touch Update access and modification time of a file.

Manipulating data:

The contents of files can be compared and altered with the following commands.

Command Description

awk Pattern scanning and processing language

cmp Compare the contents of two files

comm Compare sorted data

cut Cut out selected fields of each line of a file

diff Differential file comparator

expand Expand tabs to spaces

join Join files on some common field

perl Data manipulation language

sed Stream text editor

sort Sort file data

split Split file into smaller files

tr Translate characters

uniq Report repeated lines in a file

wc Count words, lines, and characters

vi Opens vi text editor

vim Opens vim text editor

fmt Simple text formatter

spell Check text for spelling error

ispell Check text for spelling error


emacs GNU project Emacs

ex, edit Line editor

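Several of the commands above are typically combined in pipelines. A small sketch with invented sample data: sort orders the lines, uniq drops adjacent repeats, and wc counts what is left.

```shell
# Build a small sample file, then count its distinct lines.
# uniq only removes adjacent duplicates, so the input is sorted first.
printf 'pear\napple\npear\nplum\napple\n' > /tmp/fruits.txt
sort /tmp/fruits.txt | uniq          # apple, pear, plum
sort /tmp/fruits.txt | uniq | wc -l  # 3 distinct lines
```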

Compressed Files:

Files may be compressed to save space. Compressed files can be created and examined:

Command Description

compress Compress files

gunzip Uncompress gzipped files

gzip GNU alternative compression method

uncompress Uncompress files

unzip List, test and extract compressed files in a ZIP archive

zcat Cat a compressed file

zcmp Compare compressed files

zdiff Compare compressed files

zmore File perusal filter for crt viewing of compressed text
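A typical round trip with these tools, sketched with an invented file name (assuming gzip is installed): compress a file, inspect it without uncompressing via zcat, then restore it with gunzip.

```shell
# gzip replaces report.txt with report.txt.gz; zcat prints the
# original text without uncompressing on disk; gunzip restores it.
mkdir -p /tmp/zipdemo && cd /tmp/zipdemo
echo "compress me" > report.txt
gzip report.txt            # leaves report.txt.gz
zcat report.txt.gz         # prints the original text
gunzip report.txt.gz       # restores report.txt
```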

Getting Information:

Various Linux manuals and documentation are available on-line. The following Shell

commands give information:

Command Description

apropos Locate commands by keyword lookup

info Displays command information pages online

man Displays manual pages online

whatis Search the whatis database for complete words.

yelp GNOME help viewer

Network Communication:

These following commands are used to send and receive files from a local Linux hosts to

the remote host around the world.

Command Description

ftp File transfer program

rcp Remote file copy

rlogin Remote login to a Linux host

rsh Remote shell

tftp Trivial file transfer program

telnet Make terminal connection to another host

ssh Secure shell terminal or command connection

scp Secure shell remote file copy

sftp secure shell file transfer program

Some of these commands may be restricted at your computer for security reasons.

Messages between Users:

The Linux systems support on-screen messages to other users and world-wide electronic

mail:

Command Description

evolution GUI mail handling tool on Linux

mail Simple send or read mail program

mesg Permit or deny messages

parcel Send files to another user

pine VDU-based mail utility

talk Talk to another user

write Write message to another user

Programming Utilities:

The following programming tools and languages are available based on what you have

installed on your Linux.

Command Description

dbx Sun debugger

gdb GNU debugger

make Maintain program groups and compile programs.

nm Print program's name list

size Print program's sizes

strip Remove symbol table and relocation bits

cb C program beautifier

cc ANSI C compiler for Suns SPARC systems

ctrace C program debugger

gcc GNU ANSI C Compiler

indent Indent and format C program source

bc Interactive arithmetic language processor

gcl GNU Common Lisp

perl General purpose language

php Web page embedded language

python Python language interpreter

asp Web page embedded language

CC C++ compiler for Suns SPARC systems

g++ GNU C++ Compiler

javac JAVA compiler

appletviewer JAVA applet viewer

netbeans Java integrated development environment on Linux

sqlplus Run the Oracle SQL interpreter

sqlldr Run the Oracle SQL data loader

mysql Run the mysql SQL interpreter

Misc Commands:

These commands list or alter information about the system:

Command Description

chfn Change your finger information

chgrp Change the group ownership of a file

chown Change owner

date Print the date

determin Automatically find terminal type

du Print amount of disk usage

echo Echo arguments to the standard options

exit Quit the system

finger Print information about logged-in users

groupadd Create a user group

groups Show group memberships

homequota Show quota and file usage

iostat Report I/O statistics

kill Send a signal to a process

last Show last logins of users

logout log off Linux

lun List user names or login ID

netstat Show network status

passwd Change user password


printenv Display value of a shell variable

ps Display the status of current processes


quota -v Display disk usage and limits

reset Reset terminal mode

script Keep script of terminal session


setenv Set environment variables

stty Set terminal options

time Time a command

top Display all system processes

tset Set terminal mode

tty Print current terminal name

umask Show the permissions that are given to view files by default

uname Display name of the current system

uptime Get the system up time

useradd Create a user account

users Print names of logged in users

vmstat Report virtual memory statistics

w Show what logged in users are doing

who List logged in users

4. Summary

Linux is a very user-friendly and powerful open-source operating system that always
has scope for further improvement. Its features are similar to those of the Unix
operating system, but at the same time it offers users a GUI-based environment,
which is missing in the case of Unix.

5. SUGGESTED READINGS / REFERENCE MATERIAL

 Operating System Concepts, 5th Edition, Silberschatz A., Galvin P.B., John Wiley

& Sons.

 Systems Programming & Operating Systems, 2nd Revised Edition, Dhamdhere
D.M., Tata McGraw Hill Publishing Company Ltd., New Delhi.

 Operating Systems-A Modern Perspective, Gary Nutt, Pearson Education Asia,

2000.

 Operating Systems, Harris J.A., Tata McGraw Hill Publishing Company Ltd.,

New Delhi, 2002.

6. SELF-ASSESSMENT QUESTIONS (SAQ)

 What are various features of Linux?

 Discuss the architecture of Linux operating system.

 How many types of files are there in Linux?

 How can the security of files be maintained?

 What are the different types of users in Linux for files?

 What is the function of the ls command? Give its various options.

 Differentiate between the cp & mv commands.

 What is the rm command used for?

 Differentiate between the chmod & chown commands.
