
UNIT-1 INTRODUCTION TO OPERATING SYSTEM

CHAPTER 1

What is an Operating System?


An operating system is system software that acts as an intermediary between a user of a
computer and the computer hardware. It is software that manages the computer hardware and
allows the user to execute programs in a convenient and efficient manner.

Operating system goals:


 Make the computer system convenient to use; it hides the difficulty of managing the hardware.
 Use the computer hardware in an efficient manner.
 Provide an environment in which users can easily interface with the computer.
 Act as a resource allocator.

Computer System Structure (Components of Computer System)


A computer system mainly consists of four components-

 Hardware – provides basic computing resources: CPU, memory, I/O devices.


 Operating system – controls and coordinates the use of hardware among various applications and users.
 Application programs – define the ways in which the system resources are used to solve the computing problems of the users: word processors, compilers, web browsers, database systems, video games.
 Users – people, machines, other computers.

Computer System Organization


Computer - system operation
One or more CPUs and device controllers connect through a common bus providing access to shared memory. Each device controller is in charge of a specific type of device. To ensure orderly access to the shared memory, a memory controller is provided whose function is to synchronize access to the memory. The CPUs and device controllers execute concurrently, competing for memory cycles.
 When the system is switched on, the ‘bootstrap’ program is executed. It is the initial program to run in the system. This program is stored in read-only memory (ROM) or in electrically erasable programmable read-only memory (EEPROM).
 It initializes the CPU registers, memory, device controllers and performs other initial setup. The program also locates and loads the OS kernel into memory. The OS then starts the first process to be executed (i.e., the ‘init’ process) and waits for an interrupt from the user.

Switch on ‘Bootstrap’ program


 Initializes the registers, memory and I/O devices
 Locates & loads kernel into memory
 Starts with ‘init’ process
 Waits for interrupt from user

Interrupt handling –

An interrupt is like a signal that tells the CPU something important needs attention, either from hardware
(like a keyboard or mouse) or from software. When an interrupt happens, the CPU pauses whatever it is
doing and switches to handle the interrupt. After dealing with it, the CPU goes back to what it was doing
before.

For example, if a program needs to read data from a hard drive, the CPU can keep working on other things
while the hard drive gets the data ready. Once the data is ready, the hard drive sends an interrupt to the
CPU to tell it that the data transfer is complete, and the CPU then resumes the task that needed the data.
Storage Structure

In computer systems, memory is used to store data and programs. All memory devices are arranged in a hierarchical order based on the following criteria:
 Size of the memory device
 Accessing speed of the memory device
 Cost per bit of memory
Moving from the top of the hierarchy to the bottom, size increases while accessing speed and cost per bit decrease.

Memory devices can be classified into two categories based on volatility:
 Volatile devices – contents are lost when the power supply is switched off.
 Non-volatile devices – contents are retained when the power supply is switched off.

Register
Registers are used by the CPU for its internal purposes while executing programs. Registers are of two types: general-purpose and special-purpose.
 General-purpose registers – used to store input data while performing arithmetic operations.
Ex: Accumulator

Cache Memory
It is placed between the registers and main memory and is used to store data recently used by the processor. It increases accessing speed.

Random Access Memory


It is also called main memory. It is used to store programs and data brought from secondary memory before program execution starts. Data in it can be accessed directly by specifying the address.

Electronic disk
It is also called flash memory. It acts as RAM while the power supply is on. Once the power supply goes off, it uses an internal backup power supply to save the current data to a magnetic disk, so it behaves as both a volatile and a non-volatile device.

Magnetic disk
It is also called secondary memory. It is used to store all the programs. It is divided into tracks and sectors to store the data. It is a non-volatile device.

Optical disk
It is also called removable memory. It is used to carry data along with the user. It is manufactured using optical technology.
Ex: CD-ROM, DVD
Magnetic Tape
It is the oldest form of memory device. On it, data can be read/written only in sequential order. The tape is divided horizontally into two tracks, and data is stored on track 1 and track 2 separately. Track 1 data is accessed in the forward direction and track 2 data in the reverse direction.

I/O Structure
 A large portion of operating system code is dedicated to managing I/O, both because of its
importance to the reliability and performance of a system and because of the varying nature of
the devices.
 Every device has a device controller, which maintains some local buffer storage and a set of special-purpose registers. The device controller is responsible for moving data between the peripheral device it controls and its local buffer. The operating system has a device driver for each device controller.
 Interrupt-driven I/O is well suited for moving small amounts of data but can produce high
overhead when used for bulk data movement such as disk I/O. To solve this problem, direct
memory access (DMA) is used.
 After setting up buffers, pointers, and counters for the I/O device, the device controller transfers
an entire block of data directly to or from its own buffer storage to memory, with no intervention
by the CPU. Only one interrupt is generated per block, to tell the device driver that the operation
has completed.

Computer System Architecture


Categorized roughly according to the number of general-purpose processors used.

1. Single-Processor Systems –
 The variety of single-processor systems ranges from PDAs through mainframes. On a single-processor system, there is one main CPU capable of executing instructions from user processes. The system may also contain special-purpose processors, in the form of device-specific processors, for devices such as disk, keyboard, and graphics controllers.
 All special-purpose processors run a limited instruction set and do not run user processes. They are managed by the operating system; the operating system sends them information about their next task and monitors their status.
 For example, a disk-controller processor implements its own disk queue and scheduling algorithm, thus reducing the load on the main CPU. A special processor in the keyboard converts keystrokes into codes to be sent to the CPU.
 The use of special-purpose microprocessors is common and does not turn a single- processor
system into a multiprocessor. If there is only one general-purpose CPU, then the system is a
single-processor system.
2. Multi-Processor Systems
(parallel systems or tightly coupled systems)
In a multi-processor system, two or more general-purpose processors are in close communication with each other.

Multiprocessor systems have three main advantages:

 Increased throughput: In a multiprocessor system (a system with more than one processor), different
programs can run at the same time, which speeds up execution. However, even if you add more processors,
the performance won’t increase at the same rate. This is because of the extra work needed to coordinate
everything and because the processors may need to compete for shared resources. So, the system doesn’t
run as fast as you might expect when adding more processors.

 Economy of scale: Multiprocessor systems can be cheaper than having several single-processor systems.
This is because they share things like storage, power supplies, and other peripherals. When multiple
processors work on the same data, they can also share that data, making the system more cost-effective.

 Increased reliability: In multiprocessor systems, if one processor stops working, the system doesn’t shut
down completely—it just slows down. The other processors can take over the tasks of the failed one,
allowing the system to keep running, though at a slower speed.

Different types of multiprocessor systems

1. Asymmetric multiprocessing
2. Symmetric multiprocessing

1) Asymmetric multiprocessing – (Master/Slave architecture) Here each processor is


assigned a specific task, by the master processor. A master processor controls the other
processors in the system. It schedules and allocates work to the slave processors.

2) Symmetric multiprocessing (SMP) – All the processors are considered peers; there is
no master-slave relationship. Each processor has its own registers and cache, and all
processors share the physical memory.

3. Clustered Systems
Clustered systems are two or more individual systems connected together via a network and sharing software resources. Clustering provides high availability of resources and services: the service will continue even if one or more systems in the cluster fail. High availability is generally obtained by storing redundant copies of files (s/w resources) across the systems in the cluster.

There are two types of Clustered systems – asymmetric and symmetric


1. Asymmetric clustering – one system is in hot-standby mode while the others are
running the applications. The hot-standby host machine does nothing but monitor the active
server. If that server fails, the hot-standby host becomes the active server.

2. Symmetric clustering – two or more systems are running applications, and are
monitoring each other. This mode is more efficient, as it uses all of the available hardware. If
any system fails, its job is taken up by the monitoring system.
Computer System Structure
1. Multiprogramming

One of the most important aspects of operating systems is the ability to multiprogram. A single user
cannot keep either the CPU or the I/O devices busy at all times. Multiprogramming increases CPU
utilization by organizing jobs, so that the CPU always has one to execute.

Fig - Memory layout for a multiprogramming system

 The operating system keeps several jobs in memory simultaneously, as shown in the figure. This set of jobs is a subset of the jobs kept in the job pool, since the number of jobs that can be kept simultaneously in memory is usually smaller than the number of jobs that can be kept in the job pool (in secondary memory). The operating system picks and begins to execute one of the jobs in memory. Eventually, the job may have to wait for some task, such as an I/O operation, to complete. In a non-multiprogrammed system, the CPU would sit idle.
 In a multiprogrammed system, the operating system simply switches to, and executes, another
job. When that job needs to wait, the CPU is switched to another job, and so on.
 Eventually, the first job finishes waiting and gets the CPU back. Thus, the CPU is never idle.

2. Multitasking Systems

 In Time sharing (or multitasking) systems, a single CPU executes multiple jobs by switching
among them, but the switches occur so frequently that the users can interact with each program
while it is running. The user feels that all the programs are being executed at the same time.
 Time sharing requires an interactive (or hands-on) computer system, which provides direct communication between the user and the system. The user gives instructions to the operating system or to a program directly, using an input device such as a keyboard or a mouse, and waits for immediate results on an output device. Accordingly, the response time should be short—typically less than one second.
 A time-shared operating system allows many users to share the computer simultaneously. As the
system switches rapidly from one user to the next, each user is given the impression that the entire
computer system is dedicated to his use only, even though it is being shared among many users.
Computer System Operations

Modern operating systems are interrupt driven. If there are no processes to execute, no I/O devices
to service, and no users to whom to respond, an operating system will sit quietly, waiting for something
to happen. Events are signaled by the occurrence of an interrupt or a trap. A trap (or an exception) is a
software-generated interrupt. For each type of interrupt, separate segments of code in the operating
system determine what action should be taken. An interrupt service routine is provided that is responsible
for dealing with the interrupt.

Dual-Mode Operation
Since the operating system and the user programs share the hardware and software resources of the
computer system, it has to be made sure that an error in a user program cannot cause problems to other
programs and the Operating System running in the system.
The approach taken is to use a hardware support that allows us to differentiate among various modes
of execution.

The system can be assumed to work in two separate modes of operation:


1. User mode
2. Kernel mode (supervisor mode, system mode, or privileged mode).

 A hardware bit of the computer, called the mode bit, is used to indicate the current mode: kernel
(0) or user (1). With the mode bit, we are able to distinguish between a task that is executed by
the operating system and one that is executed by the user.
 When the computer system is executing a user application, the system is in user mode. When a
user application requests a service from the operating system (via a system call), the transition
from user to kernel mode takes place.

At system boot time, the hardware starts in kernel mode. The operating system is then loaded and starts
user applications in user mode. Whenever a trap or interrupt occurs, the hardware switches from user
mode to kernel mode (that is, changes the mode bit from 1 to 0). Thus, whenever the operating system
gains control of the computer, it is in kernel mode.

Process Management

 A program under execution is a process. A process needs resources like CPU time, memory,
files, and I/O devices for its execution. These resources are given to the process when it is created
or at run time. When the process terminates, the operating system reclaims the resources.
 The program stored on a disk is a passive entity and the program under execution is an active
entity. A single-threaded process has one program counter specifying the next instruction to
execute. The CPU executes one instruction of the process after another, until the process
completes. A multithreaded process has multiple program counters, each pointing to the next
instruction to execute for a given thread.

 The operating system is responsible for the following activities in connection with process
management:
 Scheduling process and threads on the CPU
 Creating and deleting both user and system processes
 Suspending and resuming processes
 Providing mechanisms for process synchronization
 Providing mechanisms for process communication


Memory Management
Main memory is a large array of words or bytes. Each word or byte has its own address.
Main memory is the storage device which can be easily and directly accessed by the CPU. As the
program executes, the central processor reads instructions and also reads and writes data from main
memory.

To improve both the utilization of the CPU and the speed of the computer's response to its users, general-
purpose computers must keep several programs in memory, creating a need for memory management.

The operating system is responsible for the following activities in connection with memory management:

 Keeping track of which parts of memory are currently being used and by whom.
 Deciding which processes and data to move into and out of memory.
 Allocating and deallocating memory space as needed.

Storage Management
There are three types of storage management
i) File system management
ii) Mass-storage management
iii) Cache management.

File-System Management
 File management is one of the most visible components of an operating system. Computers can
store information on several different types of physical media. Magnetic disk, optical disk, and
magnetic tape are the most common. Each of these media has its own characteristics and physical
organization. Each medium is controlled by a device, such as a disk drive or tape drive, that also
has its own unique characteristics.
 A file is a collection of related information defined by its creator. Commonly, files represent
programs and data. Data files may be numeric, alphabetic, alphanumeric, or binary. Files may
be free-form (for example, text files), or they may be formatted rigidly (for example, fixed fields).
 The operating system implements the abstract concept of a file by managing mass storage media.
Files are normally organized into directories to make them easier to use. When multiple users
have access to files, it may be desirable to control by whom and in what ways (read, write,
execute) files may be accessed.

The operating system is responsible for the following activities in connection with file management:
 Creating and deleting files
 Creating and deleting directories to organize files
 Supporting primitives for manipulating files and directories
 Mapping files onto secondary storage
 Backing up files on stable (nonvolatile) storage media

Mass-Storage Management
 As the main memory is too small to accommodate all data and programs, and as the data that it
holds are erased when power is lost, the computer system must provide secondary storage to
back up main memory. Most modern computer systems use disks as the storage medium for both
programs and data.

 Most programs—including compilers, assemblers, word processors, editors, and formatters—


are stored on a disk until loaded into memory and then use the disk as both the source and
destination of their processing. Hence, the proper management of disk storage is of central
importance to a computer system.

The operating system is responsible for the following activities in connection with disk management:
 Free-space management
 Storage allocation
 Disk scheduling
As secondary storage is used frequently, it must be used efficiently. The overall speed of operation of a computer may depend on the speed of the disk. Magnetic tape drives and their tapes, and CD and DVD drives and platters, are tertiary storage devices. The functions that operating systems provide include mounting and unmounting media in devices, allocating and freeing the devices for exclusive use by processes, and migrating data from secondary to tertiary storage.

Caching
 Caching is an important principle of computer systems. Information is normally kept in some
storage system (such as main memory). As it is used, it is copied into a faster storage system—
the cache—as temporary data. When a particular piece of information is required, first we check
whether it is in the cache. If it is, we use the information directly from the cache; if it is not in
cache, we use the information from the source, putting a copy in the cache under the assumption
that we will need it again soon.
 Because caches have limited size, cache management is an important design problem. Careful
selection of the cache size and page replacement policy can result in greatly increased
performance.
 The movement of information between levels of a storage hierarchy may be either explicit or
implicit, depending on the hardware design and the controlling operating-system software. For
instance, data transfer from cache to CPU and registers is usually a hardware function, with no
operating-system intervention. In contrast, transfer of data from disk to memory is usually
controlled by the operating system.
 In a hierarchical storage structure, the same data may appear in different levels of the storage
system. For example, suppose to retrieve an integer A from magnetic disk to the processing
program. The operation proceeds by first issuing an I/O operation to copy the disk block on
which A resides to main memory. This operation is followed by copying A to the cache and to
an internal register. Thus, the copy of A appears in several places: on the magnetic disk, in main
memory, in the cache, and in an internal register.

 In a multiprocessor environment, in addition to maintaining internal registers, each of the CPUs


also contains a local cache. In such an environment, a copy of A may exist simultaneously in several caches. Since the various CPUs can all execute concurrently, we must make sure that an update to the value of A in one cache is immediately reflected in all other caches where A resides. This situation is called cache coherency, and it is usually a hardware problem (handled below the operating-system level).

Computing Environments
The different computing environments are –

1. Traditional Computing
 In traditional computing, methods such as static memory allocation are used; this is mainly applicable to single-user operating systems.
 In this technique, a single operating system performs all tasks for that particular computer system.
 One task is performed by the CPU at a given time, and the memory in use is dedicated to that one task.
 Nowadays, traditional computing uses CPU scheduling, in desktops, servers and other computers, so that the system can manage memory and give each process a slice of CPU time and memory.
2. Client-Server Computing
Designers shifted away from centralized system architecture, in which terminals were connected to centralized systems. As a result, many of today’s systems act as server systems to satisfy requests generated by client systems. This form of specialized distributed system is called a client-server system.
General Structure of Client – Server System

Server systems can be broadly categorized as compute servers and file servers:
 The compute-server system provides an interface to which a client can send a request to
perform an action (for example, read data); in response, the server executes the action and
sends back results to the client. A server running a database that responds to client requests
for data is an example of such a system.
 The file-server system provides a file-system interface where clients can create, update, read, and delete files. An example of such a system is a web server that delivers files to clients running web browsers.

3. Peer-to-Peer Computing
In this model, clients and servers are not distinguished from one another; here, all nodes within the
system are considered peers, and each may act as either a client or a server, depending on whether
it is requesting or providing a service.
 In a client-server system, the server is a bottleneck, because all the services must be served by
the server. But in a peer-to-peer system, services can be provided by several nodes distributed
throughout the network.
 To participate in a peer-to-peer system, a node must first join the network of peers. Once a node
has joined the network, it can begin providing services to—and requesting services from—other
nodes in the network.
 One approach to locating services in a peer-to-peer network is a centralized lookup service: a central node keeps track of which services are provided by which peers, and a peer contacts this service to find a provider before the two peers communicate directly with each other.
The figure given below depicts peer to peer network computing –

4. Distributed computing / Web based Computing


Distributed computing refers to a system where processing and data storage is distributed across
multiple devices or systems, rather than being handled by a single central device. In a distributed
system, each device or system has its own processing capabilities and may also store and manage its
own data. These devices or systems work together to perform tasks and share resources, with no single
device serving as the central hub.
One example of a distributed computing system is a cloud computing system, where resources such as
computing power, storage, and networking are delivered over the Internet and accessed on demand. In
this type of system, users can access and use shared resources through a web browser or other client
software.
CHAPTER 2
OPERATING SYSTEM SERVICES

An operating system provides an environment for the execution of programs. It provides certain
services to programs and to the users of those programs.

OS provide services for the users of the system, including:

 User Interfaces - Means by which users can issue commands to the system. Depending on the operating system, these may be a command-line interface (e.g. sh, csh, ksh, tcsh, etc.), a graphical user interface (e.g. Windows, X-Windows, KDE, Gnome, etc.), or a batch command system.
In a Command-Line Interface (CLI), commands are typed to the system directly.
In a Batch interface, commands and directives to control those commands are put in a file, and then the file is executed.
In GUI systems, windows with a pointing device are used to give input, and a keyboard to enter text.
 Program Execution - The OS must be able to load a program into RAM, run the program, and
terminate the program, either normally or abnormally.
 I/O Operations - The OS is responsible for transferring data to and from I/O devices, including
keyboards, terminals, printers, and files. For specific devices, special functions are provided
(device drivers) by OS.
 File-System Manipulation – Programs need to read and write files or directories. The services
required to create or delete files, search for a file, list the contents of a file and change the file
permissions are provided by OS

 Communications - Inter-process communications, IPC, either between processes running on the


same processor, or between processes running on separate processors or separate machines. May
be implemented by using the service of OS- like shared memory or message passing.
 Error Detection - Both hardware and software errors must be detected and handled
appropriately by the OS. Errors may occur in the CPU and memory hardware (such as power
failure and memory error), in I/O devices (such as a parity error on tape, a connection failure on
a network, or lack of paper in the printer), and in the user program (such as an arithmetic
overflow, an attempt to access an illegal memory location).
OS provide services for the efficient operation of the system, including:

 Resource Allocation – Resources like CPU cycles, main memory, storage space, and I/O devices
must be allocated to multiple users and multiple jobs at the same time.
 Accounting – There are services in OS to keep track of system activity and resource usage, either
for billing purposes or for statistical record keeping that can be used to optimize future
performance.
 Protection and Security – The owners of information (file) in multiuser or networked computer
system may want to control the use of that information. When several separate processes execute
concurrently, one process should not interfere with other or with OS. Protection involves
ensuring that all access to system resources is controlled. Security of the system from outsiders
must also be done, by means of a password.

System Calls
 System calls provide an interface to the services of the operating system. These are generally written in C or C++, although some are written in assembly for optimal performance.
 The figure below illustrates the sequence of system calls required to copy the contents of one file (the input file) to another file (the output file).

An example to illustrate how system calls are used: writing a simple program to read data from one
file and copy them to another file

 There are number of system calls used to finish this task. The first system call is to write a
message on the screen (monitor). Then to accept the input filename. Then another system call
to write message on the screen, then to accept the output filename.

 When the program tries to open the input file, it may find that there is no file of that name or that the file is protected against access. In these cases, the program should print a message on the console (another system call) and then terminate abnormally (another system call). When the output file is opened, a file with the same name may already exist; the program may then abort (another system call), or delete the existing file and create a new one (another system call).
 Now that both the files are opened, we enter a loop that reads from the input file (another
system call) and writes to output file (another system call).
 Finally, after the entire file is copied, the program may close both files (another system call),
write a message to the console or window (system call), and finally terminate normally (final
system call).
 Most programmers do not use the low-level system calls directly, but instead use an
"Application Programming Interface" (API).
 Using the API instead of direct system calls provides greater program portability between different systems. The API makes the appropriate system calls through the system call interface, using a system call table to access specific numbered system calls.
 Each system call has a specific number. The system call table (consisting of system call numbers and the addresses of the corresponding service routines) invokes a particular service routine for a specific system call.
 The caller need know nothing about how the system call is implemented or what it does during execution.

Figure: The handling of a user application invoking the open() system call.

Figure: Passing of parameters as a table.

Three general methods used to pass parameters to OS are –


i) To pass parameters in registers
ii) If parameters are large blocks, address of block (where parameters are stored in memory) is
sent to OS in the register. (Linux & Solaris).
iii) Parameters can be pushed onto the stack by program and popped off the stack by OS.

Types of System Calls

The system calls can be categorized into six major categories:


1. Process Control
2. File management
3. Device management
4. Information maintenance
5. Communications
6. Protection

1. Process Control

 Process control system calls include end, abort, load, execute, create process, terminate
process, get/set process attributes, wait for time or event, signal event, and allocate and free
memory.
 Processes must be created, launched, monitored, paused, resumed, and eventually stopped.
 When one process pauses or stops, then another must be launched or resumed
 Process attributes, such as process priority and maximum allowable execution time, are set and
retrieved by the OS.
 After creating the new process, the parent process may have to wait for a certain amount of
time (wait time), or wait for an event to occur (wait event). The child process sends back a
signal when the event has occurred (signal event).
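These operations can be illustrated with Python's subprocess module, which wraps the underlying create-process and wait system calls (the child command here is just an illustrative one-liner):

```python
import subprocess
import sys

# Create a child process (create process / execute), running a trivial program.
child = subprocess.Popen(
    [sys.executable, "-c", "print('child running')"],
    stdout=subprocess.PIPE, text=True,
)

# Wait for the child to terminate and collect its output (wait for event).
out, _ = child.communicate()

print("child said:", out.strip())
print("exit status:", child.returncode)   # 0 indicates normal termination
```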

2. File Management

The file management functions of OS are –

 File management system calls include create file, delete file, open, close, read, write,
reposition, get file attributes, and set file attributes.
 After creating a file, the file is opened. Data is read or written to a file.
 The file pointer may need to be repositioned to a given point within the file.
 The file attributes like filename, file type, permissions, etc. are set and retrieved using
system calls.
 These operations may also be supported for directories as well as ordinary files.
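A short sketch of these calls, using the os-module wrappers (the file name demo.txt is a placeholder):

```python
import os

fd = os.open("demo.txt", os.O_RDWR | os.O_CREAT | os.O_TRUNC, 0o644)  # create + open file
os.write(fd, b"hello world")          # write to the file
os.lseek(fd, 0, os.SEEK_SET)          # reposition the file pointer to the start
data = os.read(fd, 5)                 # read the first five bytes back
os.close(fd)                          # close the file

info = os.stat("demo.txt")            # get file attributes (size, permissions, ...)
print(data, info.st_size)             # b'hello' 11
os.remove("demo.txt")                 # delete the file
```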

3. Device Management

 Device management system calls include request device, release device, read, write,
reposition, get/set device attributes, and logically attach or detach devices.
 When a process needs a device, it requests the device; control of the device is then
granted to the process. If the requested device is already attached to another process,
the requesting process has to wait.
 In multiprogramming systems, after a process uses a device, the device must be released
back to the OS, so that another process can use it.
 Devices may be physical (e.g. disk drives) or virtual/abstract (e.g. files, partitions,
and RAM disks).

4. Information Maintenance
 Information maintenance system calls include calls to get/set the time, date, system data, and
process, file, or device attributes.
 These system calls are used to transfer information between the user and the OS. Information
such as the current time and date, the number of current users, the OS version number, the amount
of free memory, disk space, etc. is passed from the OS to the user.
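For example, Python's os and time modules wrap several of these calls:

```python
import os
import time

print("current time (epoch seconds):", time.time())          # get the time
print("local date/time:", time.strftime("%Y-%m-%d %H:%M:%S")) # get the date
print("process id:", os.getpid())                             # get a process attribute
print("parent process id:", os.getppid())                     # attribute of the parent process
```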

5. Communication
 Communication system calls create/delete communication connection, send/receive
messages, transfer status information, and attach/detach remote devices.
 The message passing model must support calls to:
o Identify a remote process and/or host with which to communicate.
o Establish a connection between the two processes.
o Open and close the connection as needed.
o Transmit messages along the connection.
o Wait for incoming messages, in either a blocking or non-blocking state.
o Delete the connection when no longer needed.
 The shared memory model must support calls to:
o Create and access memory that is shared amongst processes (and threads).
o Free up shared memory and/or dynamically allocate it as needed.
 Message passing is simpler and easier, (particularly for inter-computer communications), and
is generally appropriate for small amounts of data. It is easy to implement, but there are system
calls for each read and write process.
 Shared memory is faster, and is generally the better approach when large amounts of data are
to be shared. This model is harder to implement, but it requires only a few system calls.
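A minimal message-passing sketch using Python's multiprocessing.Pipe, assuming a POSIX system where processes are created by fork (worker and the messages are illustrative names):

```python
from multiprocessing import Process, Pipe

def worker(conn):
    msg = conn.recv()          # wait for an incoming message (blocking receive)
    conn.send(msg.upper())     # transmit a reply along the connection
    conn.close()               # delete the connection when no longer needed

parent_end, child_end = Pipe()                 # establish a connection
p = Process(target=worker, args=(child_end,))  # identify the remote process
p.start()
parent_end.send("hello")                       # transmit a message
reply = parent_end.recv()                      # blocking wait for the reply
p.join()
print(reply)   # HELLO
```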

6. Protection
 Protection provides mechanisms for controlling which users / processes have access to which
system resources.
 System calls allow the access mechanisms to be adjusted as needed, and allow non-privileged
users to be granted elevated access permissions under carefully controlled temporary
circumstances.
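As an illustration, the POSIX chmod system call (wrapped by os.chmod) adjusts a file's access permissions (secret.txt is a placeholder name):

```python
import os
import stat

with open("secret.txt", "w") as f:
    f.write("restricted data")

os.chmod("secret.txt", stat.S_IRUSR)                  # owner read-only: r-- --- ---
mode = stat.S_IMODE(os.stat("secret.txt").st_mode)    # read the permission bits back
print(oct(mode))                                      # 0o400

os.chmod("secret.txt", stat.S_IRUSR | stat.S_IWUSR)   # restore write access for cleanup
os.remove("secret.txt")
```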

Operating-System Structure
1. Simple Structure

 Many operating systems do not have well-defined structures. They started as small, simple, and
limited systems and then grew beyond their original scope. Eg: MS-DOS.
 In MS-DOS, the interfaces and levels of functionality are not well separated. Application
programs can access basic I/O routines to write directly to the display and disk drives. Such
freedom leaves MS-DOS vulnerable to errant programs: the entire system can crash when a user
program fails.
 UNIX OS consists of two separable parts: the kernel and the system programs. The kernel is
further separated into a series of interfaces and device drivers. The kernel provides the file
system, CPU scheduling, memory management, and other operating-system functions through
system calls.
Figure: MS-DOS layer structure.

2. Layered Approach

 The OS is broken into number of layers (levels). Each layer rests on the layer below it, and relies
on the services provided by the next lower layer.
 Bottom layer (layer 0) is the hardware and the topmost layer is the user interface.
 A typical layer consists of data structures and routines that can be invoked by higher-level layers.
 Advantage of layered approach is simplicity of construction and debugging.
 The layers are selected so that each uses functions and services of only lower-level layers. This
simplifies debugging and system verification. The layers are debugged one by one from the
lowest; if a layer doesn't work, the error must be in that layer, as the lower layers are
already debugged. Thus, the design and implementation are simplified.
 A layer need not know how its lower-level layers are implemented; the implementation details
are thus hidden from higher layers.

Disadvantages of layered approach:

 The various layers must be appropriately defined, as a layer can use only lower-level layers.
 Less efficient than other structures, because every interaction with the hardware (layer 0) must
pass down through all the layers from the top. Each extra layer adds overhead to a system call.

3. Microkernels
 This method structures the operating system by removing all nonessential components from the
kernel and implementing them as system and user-level programs thus making the kernel as
small and efficient as possible.
 The removed services are implemented as system applications.
 Most microkernels provide basic process and memory management, and message passing
between other services.
 The main function of the microkernel is to provide a communication facility between the client
program and the various services that are also running in user space.

Figure: Architecture of a typical microkernel (system services run in user mode; the
microkernel runs in kernel mode).

Benefits of the microkernel approach –
 System expansion can also be easier, because it only involves adding more system
applications, not rebuilding a new kernel.
 Mach was the first and most widely known microkernel, and now forms a major component of
Mac OS X.
Disadvantage of Microkernel -
 Performance overhead of user space to kernel space communication

4. Modules

 Modern OS development is object-oriented, with a relatively small core kernel and a set of
modules which can be linked in dynamically.
 Modules are similar to layers in that each subsystem has clearly defined tasks and interfaces, but
any module is free to contact any other module, eliminating the problems of going through
multiple intermediary layers.
 The kernel is relatively small in this architecture, similar to microkernels, but the kernel does not
have to implement message passing, since modules are free to contact each other directly. Eg:
Solaris, Linux and Mac OS X.

Figure: Solaris loadable modules

 The Mac OS X architecture relies on the Mach microkernel for basic system management
services, and the BSD kernel for additional services. Application services and dynamically
loadable modules (kernel extensions) provide the rest of the OS functionality.
 Resembles a layered system, but a module can call any other module.
 Resembles a microkernel, in that the primary module has only core functions and the knowledge
of how to load and communicate with other modules.

System Boot
 The operating system must be made available to the hardware, so the hardware can start it.
 A small piece of code – the bootstrap loader – locates the kernel, loads it into memory, and
starts it. Sometimes this is a two-step process, in which a boot block at a fixed location loads
the bootstrap loader.
 When power is initialized on the system, execution starts at a fixed memory location. Firmware
is used to hold the initial boot code.
CHAPTER-3
PROCESS MANAGEMENT

Process Concept

 A process is a program under execution.

 Its current activity is indicated by PC (Program Counter) and the contents of the processor's
registers.

The Process

Process memory is divided into four sections as shown in the figure below:
 The stack is used to store temporary data such as local variables, function parameters, function
return values, return address etc.
 The heap which is memory that is dynamically allocated during process run time
 The data section stores global variables.
 The text section comprises the compiled program code.
 Note that there is free space between the stack and the heap. The stack grows downward and the
heap grows upward, toward each other, as each needs more space.

Figure: Process in memory.

Process State
A process may be in one of the following five states –

1. New - The process is being created.

2. Ready - The process has all the resources it needs to run. It is waiting to be assigned to
the processor.
3. Running – Instructions are being executed.
4. Waiting - The process is waiting for some event to occur. For example, the process may
be waiting for keyboard input, disk access request, inter-process messages, a timer to go
off, or a child process to finish.
5. Terminated - The process has completed its execution.
Figure: Diagram of process state
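The transitions in the diagram can be sketched as a small table (the state names follow the list above; can_move is an illustrative helper):

```python
# Allowed transitions between the five process states, as in the state diagram.
TRANSITIONS = {
    "new":        {"ready"},
    "ready":      {"running"},
    "running":    {"ready", "waiting", "terminated"},  # preempted / waits for I/O / exits
    "waiting":    {"ready"},                           # the awaited event occurs
    "terminated": set(),
}

def can_move(src, dst):
    """Return True if a process may move directly from state src to state dst."""
    return dst in TRANSITIONS[src]

print(can_move("running", "waiting"))   # True
print(can_move("waiting", "running"))   # False: must pass through ready first
```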

Process Control Block

For each process there is a Process Control Block (PCB), which stores the process-specific
information as shown below –

 Process State – The state of the process may be new, ready, running, waiting, and so on.
 Program counter – The counter indicates the address of the next instruction to be executed for
this process.
 CPU registers - The registers vary in number and type, depending on the computer architecture.
They include accumulators, index registers, stack pointers, and general-purpose registers. Along
with the program counter, this state information must be saved when an interrupt occurs, to allow
the process to be continued correctly afterward.
 CPU scheduling information- This information includes a process priority, pointers to
scheduling queues, and any other scheduling parameters.
 Memory-management information – This includes information such as the value of the base
and limit registers, the page tables, or the segment tables.
 Accounting information – This information includes the amount of CPU and real time used,
time limits, account numbers, job or process numbers, and so on.
 I/O status information – This information includes the list of I/O devices allocated to the
process, a list of open files, and so on.

The PCB simply serves as the repository for any information that may vary from process to process.

Figure: Process control block (PCB)
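A PCB can be sketched as a plain record holding the fields listed above (the field names and defaults here are illustrative, not any particular OS's layout):

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int
    state: str = "new"            # new, ready, running, waiting, or terminated
    program_counter: int = 0      # address of the next instruction to execute
    registers: dict = field(default_factory=dict)   # saved CPU registers
    priority: int = 0             # CPU-scheduling information
    base_register: int = 0        # memory-management information
    limit_register: int = 0
    cpu_time_used: float = 0.0    # accounting information
    open_files: list = field(default_factory=list)  # I/O status information

pcb = PCB(pid=42, priority=5)
print(pcb.state, pcb.pid)   # new 42
```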


Process Scheduling

Scheduling Queues
During its execution, a process may visit three types of scheduling queues.
1. Job Queue - Processes entering the system are placed in the job queue.
2. Ready Queue - Processes that are loaded into main memory and are ready for execution are placed
in the ready queue.
3. Device Queue - Processes that are waiting for the availability of an I/O device are placed in a
device queue. Each I/O device has its own device queue, which holds only the processes waiting
for that device.

Ready Queue and Various I/O Device Queues

 A common representation of process scheduling is a queuing diagram. Each rectangular box in
the diagram represents a queue. Two types of queues are present: the ready queue and a set of
device queues. The circles represent the resources that serve the queues, and the arrows indicate
the flow of processes in the system.
 A new process is initially put in the ready queue. It waits in the ready queue until it is selected
for execution and is given the CPU. Once the process is allocated the CPU and is executing, one
of several events could occur:
 The process could issue an I/O request, and then be placed in an I/O queue.
 The process could create a new sub process and wait for its termination.
 The process could be removed forcibly from the CPU, as a result of an interrupt,
and be put back in the ready queue.
In the first two cases, the process eventually switches from the waiting state to the ready state, and is
then put back in the ready queue. A process continues this cycle until it terminates, at which time it is
removed from all queues.

Figure: Queuing-diagram representation of process scheduling.
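The queue movements in the diagram can be simulated with simple FIFO queues (process names P1–P3 are placeholders):

```python
from collections import deque

ready_queue = deque(["P1", "P2", "P3"])   # processes waiting for the CPU
disk_queue = deque()                      # processes waiting for the disk

# Dispatch the process at the head of the ready queue to the CPU.
running = ready_queue.popleft()           # P1 gets the CPU

# The running process issues an I/O request: it moves to the device queue.
disk_queue.append(running)

# When the I/O completes, the process returns to the tail of the ready queue.
ready_queue.append(disk_queue.popleft())

print(list(ready_queue))   # ['P2', 'P3', 'P1']
```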

Schedulers
A scheduler is the operating-system component that selects an available process to be assigned to the CPU.

1. Long-term scheduler or job scheduler – selects jobs from the job pool (on secondary
storage, i.e. disk) and loads them into memory. It runs infrequently (minutes may pass between
selections), so it can afford to select processes carefully to balance the system. There are
two types of processes:
 CPU-bound processes: spend most of their time on computation.
 I/O-bound processes: spend most of their time on I/O operations.
The long-term scheduler should select a good mix of the two types of processes; otherwise
the system becomes unbalanced. For example, if all the processes are CPU-bound, the device
queues are empty and the burden on the CPU increases. If all the processes are I/O-bound, the
ready queue is empty and the CPU has no processes to execute.

2. The short-term scheduler, or CPU scheduler – selects a job from the ready queue in memory
and assigns the CPU to it. It must select a new process for the CPU frequently.
3. The medium-term scheduler – removes (swaps out) processes from memory and later
reintroduces (swaps in) them so that they can continue execution.
Time-sharing systems employ a medium-term scheduler. When the system load gets high, this
scheduler swaps one or more processes out of the ready queue for a few seconds, in order to
allow smaller, faster jobs to finish up quickly and clear the system.

Advantages of medium-term scheduler –

 To remove processes from memory and thus reduce the degree of multiprogramming
(the number of processes in memory).
 To make a proper mix of processes (CPU-bound and I/O-bound).

Interprocess Communication
Interprocess communication (IPC) is a mechanism that allows processes to communicate with each other.

Processes can be divided into two categories.

 Independent Processes – processes that cannot affect other processes or be affected by other
processes executing in the system.
 Cooperating Processes – processes that can affect other processes or be affected by other
processes executing in the system.

Co-operation among processes is allowed for the following reasons –

 Information Sharing - There may be several processes which need to access the same file. So
the information must be accessible at the same time to all users.
 Computation speedup - Often a solution to a problem can be solved faster if the problem can
be broken down into sub-tasks, which are solved simultaneously (particularly when multiple
processors are involved.)
 Modularity - A system can be divided into cooperating modules and executed by sending
information among one another.
 Convenience - Even a single user can work on multiple tasks by information sharing.

Cooperating processes require some type of inter-process communication, which is provided by
two models:
1. Shared-memory systems
2. Message-passing systems

Sl No | Shared Memory | Message Passing
1 | A region of memory is shared by the communicating processes, into which information is written and read. | Message exchange is done among the processes by passing message objects.
2 | Useful for sending large blocks of data. | Useful for sending small amounts of data.
3 | A system call is used only to create the shared memory region. | A system call is used during every read and write operation.
4 | Communication is faster, as no system calls are needed once the region is set up. | Communication is slower, as each message requires a system call.

 Shared Memory is faster once it is set up, because no system calls are required and access occurs
at normal memory speeds. Shared memory is generally preferable when large amounts of
information must be shared quickly on the same computer.
 Message Passing requires system calls for every message transfer, and is therefore slower, but it
is simpler to set up and works well across multiple computers. Message passing is generally
preferable when the amount and/or frequency of data transfers is small.
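For contrast, a shared-memory sketch using Python's multiprocessing.Value, assuming a POSIX system where processes are created by fork: once the shared counter is set up, the workers update it with ordinary memory accesses (guarded by a lock), not per-message system calls.

```python
from multiprocessing import Process, Value

def increment(counter):
    with counter.get_lock():       # synchronize access to the shared region
        counter.value += 1         # ordinary memory write into shared memory

counter = Value("i", 0)            # an integer in a region shared by the processes
workers = [Process(target=increment, args=(counter,)) for _ in range(4)]
for w in workers:
    w.start()
for w in workers:
    w.join()
print(counter.value)   # 4
```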
MULTITHREADED PROGRAMMING
A thread is a lightweight process. A thread executes a module of a process. Each thread has its
own program counter, registers, and stack, while all threads belonging to a single process share
that process's code segment, data segment, and open files, as shown below.
If a process has multiple threads of control, it can perform more than one task at a time.
Such a process is called a multithreaded process. In multithreading, multiple threads are
executed in parallel, allowing an application program to perform two or more activities
simultaneously.

Fig: Single-threaded and multithreaded processes

Benefits of Multithreaded Programming

 Responsiveness: A program may be allowed to continue running even if part of
it is blocked, thus increasing responsiveness to the user.
 Resource Sharing: By default, threads share the memory (and resources) of
the process to which they belong. Thus, an application is allowed to have
several different threads of activity within the same address space.
 Economy: Allocating memory and resources for process creation is costly. Thus,
it is more economical to create and context-switch threads.
 Utilization of Multiprocessor Architectures: In a multiprocessor architecture, threads may
run in parallel on different processors, thus increasing parallelism.
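A minimal multithreaded sketch: two threads perform independent computations while sharing the results dictionary of the same process (the names compute and results are illustrative):

```python
import threading

results = {}   # shared by all threads of this process

def compute(name, n):
    # Each thread performs an independent task while sharing `results`.
    results[name] = sum(range(n))

t1 = threading.Thread(target=compute, args=("a", 10))
t2 = threading.Thread(target=compute, args=("b", 100))
t1.start()
t2.start()        # both activities proceed concurrently
t1.join()
t2.join()         # wait for both threads to finish
print(results["a"], results["b"])   # 45 4950
```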
MULTITHREADING MODELS
 Support for threads may be provided at either
1. The user level, for user threads or
2. By the kernel, for kernel threads.
 User threads are created by the user in user space.
 Kernel threads are created and managed by the kernel in kernel space.
A user-level thread needs the support of a kernel-level thread for its execution, so there are
three possible relationships between user-level threads and kernel-level threads:
1. Many-to-one model
2. One-to-one model
3. Many-to-many model

Many-to-One Model
In this model, two or more user-level threads are mapped to one kernel-level thread; a single
kernel-level thread supports the execution of multiple user-level threads.

Fig: Many-to-one model

If the kernel-level thread blocks during its execution, all the user-level threads are blocked;
so this model provides less reliability.

One-to-One Model

In this model, one user-level thread is mapped to one kernel-level thread.

Fig: one-to-one model

In this model, if a kernel-level thread blocks, only the corresponding user-level thread is blocked;
all the remaining threads continue their execution, so it provides high reliability. However, the
kernel must create a separate kernel thread to handle each user-level thread, so the burden on the
kernel is increased.

Many-to-Many Model
In this model, many user-level threads are multiplexed onto an equal or smaller number of kernel-level threads.

Fig: Many-to-many model

In this model, if one kernel thread blocks, another kernel thread can handle the user-level
threads, so reliability is provided. The burden on the kernel is also reduced, because the kernel
need not create a separate kernel thread for each user-level thread.
