
OPERATING

SYSTEMS

Compiled by:

Dr. Gisela May A. Albano


Janelle Kyra A. Sagum
John Dustin D. Santos
Polytechnic University of the Philippines
COLLEGE OF COMPUTER AND INFORMATION SCIENCES

TABLE OF CONTENTS
MODULE 1: INTRODUCTION TO OPERATING SYSTEM................................................................. 1
Overview ................................................................................................................................................. 1
Objectives .............................................................................................................................................. 1
Lesson 1: Definition, Function, and Goals of Operating System ........................................................ 1
What is Operating System? ............................................................................................................... 1
Functions of Operating System ....................................................................................................... 2
1. Memory Management ................................................................................................................. 2
2. Processor Management ............................................................................................................. 2
3. Device Management .................................................................................................................... 2
4. File Management .......................................................................................................................... 3
5. Security .......................................................................................................................................... 3
6. Control over system performance .......................................................................................... 3
7. Job Accounting ............................................................................................................................ 3
8. Error detecting aids .................................................................................................................... 3
9. Coordination between other software and users ................................................................ 3
Goals of the Operating System ........................................................................................................ 3
Lesson 2: History of Operating System .................................................................................................. 4
The First Generation (1940’s to early 1950’s) ............................................................................... 4
The Second Generation (1955 to 1965) .......................................................................................... 4
The Third Generation (1965 - 1980) ................................................................................................. 5
The Fourth Generation (1980-Present Day)................................................................................... 6
Lesson 3: Types of Operating System .................................................................................................... 7
Batch Operating System .................................................................................................................... 7
Time-Sharing Operating System ...................................................................................................... 7
Distributed Operating System .......................................................................................................... 8
Network Operating System................................................................................................................ 9
Real-Time Operating System .......................................................................................................... 10
Handheld Operating System ........................................................................................................... 11
Lesson 4: Components of Operating System ...................................................................................... 13
Components of Operating System ................................................................................................ 13
Reading Assignment ......................................................................................................................... 14
References / Sources / Bibliography ............................................................................................ 14

COMP 20103 - OPERATING SYSTEM PAGE | I



MODULE 2: COMPUTER SYSTEM STRUCTURE ............................................................................ 17


Overview ............................................................................................................................................... 17
Objectives ............................................................................................................................................ 17
Lesson 1: The Computer System .......................................................................................................... 17
What is a Computer System? ......................................................................................................... 17
Computer System Structure............................................................................................................ 18
The Modern Computer System Structure ................................................................................ 18
The Central Processing Unit (CPU) ........................................................................................... 19
Lesson 2: The Computer Boot-Up ......................................................................................................... 20
Introduction to Computer Boot-Up................................................................................................ 20
The Bootloader ............................................................................................................................... 20
The Boot Sequence ....................................................................................................................... 20
Lesson 3: Traps and Interrupts .............................................................................................................. 21
Traps and Interrupts .......................................................................................................................... 21
Lesson 4: I/O Structure ........................................................................................................................... 22
Introduction to I/O Structure ........................................................................................................... 22
Device Drivers ................................................................................................................................. 22
I/O Operation ................................................................................................................................... 23
Polling ............................................................................................................................................... 23
Lesson 5: Storage Structure ................................................................................................................... 24
Introduction to Storage Structure.................................................................................................. 24
Primary Storage .............................................................................................................................. 24
Secondary Storage ........................................................................................................................ 25
Lesson 6: Hardware Protection .............................................................................................................. 25
Hardware Protection and Types of Hardware Protection ........................................................ 25
Reading Assignment ......................................................................................................................... 26
References / Sources / Bibliography ............................................................................................ 27
MODULE 3: PROCESS MANAGEMENT ............................................................................................. 30
Overview ............................................................................................................................................... 30
Objectives ............................................................................................................................................ 30
Lesson 1: Process Concept .................................................................................................................... 30
What Is a Process? ............................................................................................................................ 30
Lesson 2: Process States ....................................................................................................................... 31


Context Switching .............................................................................................................................. 32


Context Switch versus Mode Switch ............................................................................................ 32
CPU-Bound versus I/O-Bound Processes ................................................................................... 32
Lesson 3: Process Creation and Termination ...................................................................................... 33
Process Creation ................................................................................................................................ 33
Process Termination ......................................................................................................................... 34
Lesson 4: Process Threads .................................................................................................................... 34
Multithreading ..................................................................................................................................... 35
Thread versus Process ..................................................................................................................... 35
Lesson 5: Process Schedulers ............................................................................................................... 36
Process Scheduling Queues ........................................................................................................... 36
Schedulers ........................................................................................................................................... 37
Lesson 6: Process Scheduling Concepts ............................................................................................. 38
Lesson 7: Process Scheduling Algorithms ........................................................................................... 38
Objectives of Process Scheduling Algorithms .......................................................................... 38
Scheduling Algorithms ..................................................................................................................... 39
Reading Assignment ......................................................................................................................... 43
References / Sources / Bibliography ............................................................................................ 43
Online Videos to Watch .................................................................................................................... 44
MODULE 4: STORAGE MANAGEMENT ............................................................................................. 44
Overview ............................................................................................................................................... 44
Objectives ............................................................................................................................................ 44
Storage Management ........................................................................................................................ 44
Lesson 1: Disk Scheduling Concepts.................................................................................................... 44
Lesson 2: Disk Scheduling Algorithms.................................................................................................. 45
First Come First Serve (FCFS) Scheduling Algorithm ............................................................. 45
Shortest Seek Time First (SSTF) Scheduling Algorithm ......................................................... 46
SCAN Scheduling Algorithm ........................................................................................................... 46
LOOK Scheduling Algorithm .......................................................................................................... 47
Circular SCAN (C-SCAN) Scheduling Algorithm ....................................................................... 47
C-LOOK Scheduling Algorithm ...................................................................................................... 48
Reading Assignment ......................................................................................................................... 49
References / Sources ........................................................................................................................ 49


EXERCISES / WRITTEN ASSIGNMENT ............................................................................................. 50


MODULE 5: MEMORY MANAGEMENT .............................................................................................. 51
Overview ............................................................................................................................................... 51
Objectives ............................................................................................................................................ 51
Introduction ......................................................................................................................................... 51
Terminologies ..................................................................................................................................... 52
Multiple Fixed Partitions .................................................................................................................. 52
Multiple Variable Partitions ............................................................................................................. 58
Simple Paging ..................................................................................................................................... 60
Simple Segmentation ........................................................................................................................ 60
Segmentation with Paging ............................................................................................................... 60
Swapping.............................................................................................................................................. 61
Overlaying ............................................................................................................................................ 61
Buddy System ..................................................................................................................................... 61
Reading Assignment ......................................................................................................................... 62
References / Sources ........................................................................................................................ 62
MODULE 6: VIRTUAL MEMORY MANAGEMENT ............................................................................ 66
Overview ............................................................................................................................................... 66
Objectives ............................................................................................................................................ 66
Lesson 1: Virtual Memory ....................................................................................................................... 66
Virtual Memory Paging ..................................................................................................................... 67
Virtual Memory Segmentation ........................................................................................................ 70
Reading Assignment ......................................................................................................................... 72
Lesson 2: Page Replacement Algorithms ............................................................................................ 74
Demand Paging .................................................................................................................................. 74
Page Replacement Algorithms ....................................................................................................... 74
First-In First-Out (FIFO) ................................................................................................................ 77
Optimal (OPT) .................................................................................................................................. 77
Least Recently Used (LRU).......................................................................................................... 77
Page / Frame Allocation Algorithms ............................................................................................. 78
Reading Assignment ......................................................................................................................... 78
Research .............................................................................................................................................. 78
References / Sources ........................................................................................................................ 79


MODULE 1: INTRODUCTION TO OPERATING SYSTEM


Overview
This module provides an understanding of operating systems as an intermediary between the
user of a computer and the computer hardware. The purpose of an operating system is to provide
an environment in which a user can execute programs in a convenient and efficient manner. One
notable aspect of operating systems is how much they vary in accomplishing these tasks:
mainframe operating systems optimize utilization of the hardware, while operating systems on
personal computers support complex games, business applications, and so on. The operating
system also performs other roles, such as traffic manager and dispatcher.

Objectives
At the end of this module, the student should be able to:
 Summarize the objectives and functions of modern operating systems.
 Determine the functions of a contemporary operating system with respect to convenience,
efficiency, and ability to evolve.
 Compare and contrast different types of operating systems.

Lesson 1: Definition, Function, and Goals of Operating


System
What is Operating System?
An operating system (OS) is the primary
software installed on a computer. It
manages all the hardware and other
software on the computer, interfaces with
the computer hardware, and provides
services that applications can use.

An operating system is also software that
communicates with the hardware and
allows other programs to run. It consists of
system software, the basic files that your
computer needs to boot and work. Every
desktop computer, tablet, and smartphone
includes an operating system that provides
basic functionality for the device.

The operating system is a program that is initially loaded into the computer by a boot
program and thereafter manages all other application programs. The application programs make
use of the operating system by making requests for services through an application program
interface (API).

Additionally, users can interact directly with the operating system through a user interface such
as a command line or a graphical user interface (GUI). In its most general sense, an operating
system is software
1.0 INTRODUCTION TO OPERATING SYSTEM PAGE | 1



that allows the user to run other applications on a computing device. Without an operating
system, a computer and its software would be useless.

Since the operating system serves as a computer's fundamental user interface, it significantly
affects how you interact with the device. Therefore, many users prefer to use a specific operating
system.

Functions of Operating System


The operating system determines which applications should run, in what order, and how much
time each application gets before another application takes a turn. In a modern operating system,
multiple programs can run at the same time. The OS provides services to facilitate efficient
execution and manages memory allocation for installed application programs. It manages the
sharing of internal memory among multiple applications, and it handles input and output to and
from attached hardware devices such as hard disk drives, printers, and dial-up ports. The
management of batch jobs (e.g., printing) may be offloaded so that the initiating application is
freed from this task. On computers capable of parallel processing, the operating system can
manage how a program is divided so that it runs on more than one processor at a time. The main
functions of an operating system are:

1. Memory Management
This refers to the management of primary (main) memory. The OS keeps track of primary
memory: which memory addresses have already been allocated and which have not yet been
used. In multiprogramming, the OS decides the order in which processes are granted access to
memory, and for how long. It allocates memory to a process when the process requests it and
deallocates the memory when the process has terminated or is performing an I/O operation.

Table 1.1: Difference between RAM and ROM

Random Access Memory (RAM)      | Read-Only Memory (ROM)
Temporary storage               | Permanent storage
Stores data in GBs              | Stores data in MBs
Volatile                        | Non-volatile
Used during normal operation    | Used for the computer's startup process
Writing data is faster          | Writing data is slower
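The memory-management bookkeeping described above (tracking which addresses are in use, allocating on request, and deallocating on termination) can be sketched as a toy frame table. This is a minimal illustration, not a real OS algorithm; the class and method names are invented for this example.

```python
# Toy sketch of primary-memory bookkeeping: the OS records which
# "frames" (memory units) are free and which process owns each one.
class MemoryManager:
    def __init__(self, total_frames):
        # None marks a free frame; otherwise, the owning process id.
        self.frames = [None] * total_frames

    def allocate(self, pid, n):
        """Give process `pid` n frames, if enough are free."""
        free = [i for i, owner in enumerate(self.frames) if owner is None]
        if len(free) < n:
            return None              # request denied: not enough memory
        for i in free[:n]:
            self.frames[i] = pid     # track which addresses are allocated
        return free[:n]

    def deallocate(self, pid):
        """Reclaim all memory when the process terminates."""
        for i, owner in enumerate(self.frames):
            if owner == pid:
                self.frames[i] = None

mm = MemoryManager(8)
mm.allocate("P1", 3)
mm.allocate("P2", 4)
mm.deallocate("P1")                  # P1 terminates; its frames are freed
print(mm.frames)                     # → [None, None, None, 'P2', 'P2', 'P2', 'P2', None]
```

Note how the free list is recomputed from the frame table itself: the table is the single record of what is allocated and what is not.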

2. Processor Management
The OS decides which process gets the processor, when, and for how much time; this
function is called process scheduling. It does three jobs. The first is to keep track of the status
of each process; the program responsible for this task is known as the traffic controller. The
second is allocating the processor (CPU) to a process, and the third is de-allocating the
processor when a process no longer requires it.
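One simple way to decide "which process gets the processor, when, and for how much time" is round-robin scheduling, covered later in the module on scheduling algorithms. The sketch below is only illustrative; the process names and time quantum are made up for this example.

```python
from collections import deque

def round_robin(bursts, quantum):
    """Toy round-robin dispatcher.

    bursts: {pid: remaining CPU time needed}.
    Returns the order in which processes are given the CPU.
    """
    ready = deque(bursts.items())          # ready queue of (pid, remaining)
    timeline = []
    while ready:
        pid, remaining = ready.popleft()   # allocate the CPU to a process
        timeline.append(pid)
        if remaining > quantum:            # time slice expired: de-allocate
            ready.append((pid, remaining - quantum))
    return timeline

print(round_robin({"P1": 5, "P2": 3, "P3": 1}, quantum=2))
# → ['P1', 'P2', 'P3', 'P1', 'P2', 'P1']
```

Each process gets at most one quantum of CPU time before being moved to the back of the ready queue, which is exactly the allocate/de-allocate cycle described above.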

3. Device Management
An operating system manages device communication via the devices' respective drivers. It does
the following activities for device management:
a. Keeps track of all devices; the program responsible for this task is known as the I/O controller.
b. Decides which process gets a device, when, and for how much time.
c. Allocates each device in the most efficient way.
d. De-allocates devices.
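The device-management activities listed above amount to keeping an ownership record per device. A minimal sketch (the `IOController` class and device names are hypothetical, chosen only to mirror the a-d list):

```python
# Toy sketch of device management: track which process holds each
# device, grant a device only when it is free, and release it afterward.
class IOController:
    def __init__(self, devices):
        self.owners = {d: None for d in devices}   # a. keep track of all devices

    def acquire(self, device, pid):
        if self.owners[device] is None:            # b. decide who gets it
            self.owners[device] = pid              # c. allocate the device
            return True
        return False                               # busy: request refused

    def release(self, device):
        self.owners[device] = None                 # d. de-allocate the device

io = IOController(["printer", "disk"])
io.acquire("printer", "P1")
print(io.acquire("printer", "P2"))   # → False (printer already held by P1)
print(io.owners["printer"])          # → P1
```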

4. File Management
A file system is normally organized into directories for easy navigation and usage. These
directories may contain files and other directories. An OS does the following activities for file
management:
a. Keeps track of information: location, uses, status, etc. These collective facilities are often
known as the file system.
b. Decides who gets the resources.
c. Allocates the resources.
d. De-allocates the resources.
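The "directories containing files and other directories" structure described above is a tree, which can be modeled with nested dictionaries. This is only a sketch of the idea; the paths and contents are invented for illustration.

```python
# Toy model of a directory tree: each directory is a dict whose values
# are either file contents (strings) or nested directories (dicts).
fs = {"home": {"user": {"notes.txt": "hello"}}, "tmp": {}}

def lookup(tree, path):
    """Locate a file or directory by walking the path component by component."""
    node = tree
    for part in path.split("/"):
        node = node[part]          # descend one directory level
    return node

print(lookup(fs, "home/user/notes.txt"))   # → hello
```

Walking the tree one component at a time is the essence of how a file system tracks each file's location.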

5. Security
Data is an important part of a computer system. The operating system protects the data stored
on the computer from illegal use, modification, or deletion. By means of passwords and similar
techniques, it prevents unauthorized access to programs and data.

6. Control over system performance


Recording delays between a request for a service and the system's response.

7. Job Accounting
Keeping track of time and resources used by various jobs and users.

8. Error detecting aids


Production of dumps, traces, error messages, and other debugging and error-detecting
aids.

9. Coordination between other software and users


Coordination and assignment of compilers, interpreters, assemblers, and other software
to the various users of the computer systems.

Goals of the Operating System


The operating system has many goals. The main goal is efficient use: the OS ensures efficient
use of memory, the CPU, and I/O devices. The operating system itself consumes CPU and
memory resources, which is overhead; to keep this in check, the operating system monitors the
use of resources to ensure efficiency.

The second goal is user convenience. To make users comfortable doing their tasks, the OS
provides user-friendly interfaces such as a GUI (graphical user interface), which makes tasks
easy to carry out. A further goal is noninterference in the activities of its users: without it, users
could face interference in their computational activities. The operating system also prevents
illegal file access, because the system knows which data can be accessed by whom; only
authorized users can access the data.

Two major goals of an Operating System:


1. Make the computer system convenient to use.
- Convenience is important in personal computers.

2. Use the computer hardware in an efficient manner.


- Efficient use is important when a computer is shared with several users.


Note: User convenience has higher priority than efficient use of the computer in the Windows
operating system, while efficient use has higher priority than user convenience in the UNIX
operating system.

Lesson 2: History of Operating System


The history of operating systems is linked with the
history and development of the various generations of
computer systems. The first true digital computer was
designed by the English mathematician Charles
Babbage, who is also known as the father of the digital
computer. His machine, known as the Analytical
Engine, had a purely mechanical design and was slow
and unreliable. Babbage's Analytical Engine is shown
in Figure 1.1.
Figure 1.1: Babbage's Analytical Engine (Generation Zero)

The First Generation (1940’s to early 1950’s)


Back in the 1940s, the first electronic computers were introduced. These computers were
built without any operating system. All programming was done in absolute machine language,
often by wiring up plugboards to control the machine's basic functions. Operating systems were
not necessary because the general purpose of computers during this time was to perform simple
mathematical calculations.

The first generation ran from 1945 to 1955. The technology used was vacuum tubes, no
operating system was present, and the language was machine language, also called binary
language.

The Second Generation (1955 to 1965)


General Motors created the first operating system in the early 1950s. It was called GM OS
and was used to run a single IBM (International Business Machines) mainframe computer. These
machines ran in large computer rooms and were handled by professional operators; only
huge corporations and government agencies could afford them. The operation involved people
with different sets of jobs. First, a programmer would write a program on paper. The program
would then be punched onto cards and brought to the operator, who ran the job and returned
the output to the programmer. Because the accumulated data was submitted in groups, these
systems were called single-stream batch processing systems.

The second generation ran from 1955 to 1965. The technology was transistors, operating
systems were present, and the languages used were assembly and high-level languages. Around
1955, transistors were introduced, the first operating system of this era, the FORTRAN Monitor
System, was introduced on computers, and FORTRAN, a high-level language, came into use.

A batch system was used in this generation: to reduce setup time, a new methodology
known as batch processing was adopted. Two computers were used to execute programs: an
IBM 1401 for reading cards, copying tapes, and printing output, and an IBM 7094 for the real
computing. The batch system of the second generation worked as follows: magnetic tapes were
mounted on a tape drive, and the operator loaded a special program which read the first job from
the tape and ran it. Output was written onto a second tape; after each job finished, the OS
automatically read the next job from the tape, and the output tape was inserted into the IBM 1401
for printing.

The advantages of the second generation were the ability to perform scientific and engineering


calculations, the reduced cost and size of computers, and a simplified programmer's job.
Second-generation computers used transistors as their main electronic component. These
computers were much smaller, more reliable, and more powerful. High-level languages of this
generation, such as COBOL and FORTRAN, were introduced towards the end of the
second generation. Printers, tape storage, and memory started from the second generation of
computers, and processing speed improved to microseconds.

The Third Generation (1965 - 1980)


During the early 1960s, computer manufacturers had two different product lines.
New problems arose regarding the time consumed by the operating system. Before, it
was easy for programmers during the first generation since they had the machines to
themselves; in the third generation, however, they needed a machine that could process
jobs quickly.

By the late 1960s, operating system designers were able to develop the system of
multiprogramming, in which a computer is able to perform multiple jobs at the same
time. The introduction of multiprogramming is a major part of the development of Operating
Systems because it allowed a CPU to be busy nearly 100% of the time that it was in operation.
Another major development during the third generation was the phenomenal growth of
minicomputers, starting with the DEC PDP-1 in 1961. The PDP-1 had only 4K of 18-bit words, but at
$120,000 per machine (less than 5 percent of the price of a 7094), it sold like hotcakes. These
minicomputers helped create a whole new industry and the development of more PDPs. These
PDPs helped lead to the creation of the personal computers of the fourth
generation. During this period, the first version of the UNIX operating system was also created. This
operating system was easily adapted to new systems and attained rapid acceptance. It
was later written in C and was free to use during its first few years.

The third generation was from 1965 to 1980, when the technology was integrated circuits,
operating systems were present, and the languages used were high-level languages. Computers
in this generation were based on integrated circuits, invented by Robert Noyce and Jack Kilby in
1958-1959.

Integrated circuits are single components containing a number of transistors. A few
examples of third-generation machines are the PDP-8, PDP-11, ICL 2900, IBM 360, IBM 370,
and many more. Like the second generation, third-generation computers were also fast and
reliable, and high-level languages appeared. The use of ICs in a computer not only reduced
its size but also improved its performance compared to previous computers, because these
computers reduced computational time from microseconds to nanoseconds. This generation
used an operating system for better resource management and introduced the concepts of
time-sharing and multiprogramming. IC chips were, however, difficult to maintain: they
required air conditioning, and the technology required for manufacturing IC chips was
complex. Even so, IC chips were cheaper than second-generation components.

Figure 1.2: Integrated Circuits (ICs)

The Fourth Generation (1980-Present Day)


Personal computers became largely popular during the fourth generation. They are
quite similar to minicomputers but are less costly, so that almost every individual is able to own one.
Microsoft soon began to emerge. In the 1980s, IBM was creating a new PC and was looking for
suitable software to run on it. They approached Bill Gates, and together they found a suitable
operating system called DOS (Disk Operating System). The system was revised and
renamed MS-DOS (Microsoft Disk Operating System), and it quickly topped the market.
MS-DOS was widely used. However, these operating systems only used typed commands as
a method of input. Doug Engelbart of the Stanford Research Institute soon invented
the GUI (Graphical User Interface), which uses icons, menus, and windows for easier access and
is user-friendly.

Along with the rise of Microsoft came the making of the Apple Macintosh. Steve Jobs, co-
founder of Apple Computer, adopted the GUI, and the Apple Macintosh became a huge success, not
only because it was cheaper but because, with the adoption of the GUI, it was user-friendly. When
Microsoft decided to build a newer system beyond MS-DOS, it was heavily based on the success of
the Macintosh, and Windows was created.

During the mid-1980s, there was growth in network operating systems and
distributed operating systems being used on personal computers. Today, numerous types
of operating systems are being used on different types of machines.


Lesson 3: Types of Operating System


Batch Operating System
In this type of operating system, there is no direct interaction between the user and
the computer. An operator is involved in a batch OS; this operator takes similar jobs with the
same requirements and groups them into batches. Basically, the operator
is responsible for sorting the jobs with similar demands or requirements. Examples of batch
processing include bank statements, payroll systems, etc.

Figure 1.3: Batch OS Visual Representation

Advantages of Batch Operating System:


 Processors of batch systems know roughly how long a job will take while it is in the queue,
even though the exact time required by any job is otherwise difficult to guess
 Multiple users can share the batch system
 The idle time for a batch system is very low
 It is easy to manage large, repetitive work in batch systems

Disadvantages of Batch Operating System:


 The computer operators must be familiar with batch systems
 Batch systems are hard to debug
 They are sometimes costly
 The other jobs will have to wait for an unknown time if any job fails

Time-Sharing Operating System


In this type of operating system (OS), each of the various tasks is given its own
opportunity to execute so that they all work efficiently. The time given to each task to execute is
called a quantum. After this time interval is finished, the OS executes the next task. Every user
gets a share of the central processing unit (CPU) time even though they all use a single system.
Time-Sharing OSs are also known as multitasking systems. As the name suggests, a time-sharing
OS can work on, or perform, more than one task. The tasks can come from a single user or
from different users as well. Examples of Time-Sharing Operating Systems include Unix, Multics,
etc.
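The quantum-based switching described above can be sketched as a round-robin simulation. This is a simplified model (the function name, the job format, and the quantum value are illustrative assumptions), not an actual OS scheduler:

```python
from collections import deque

def round_robin(jobs, quantum):
    """Simulate time-sharing: each task runs for at most one quantum,
    then the CPU moves on to the next task in the queue."""
    queue = deque(jobs.items())               # (name, remaining_time) pairs
    timeline = []
    while queue:
        name, remaining = queue.popleft()
        ran = min(quantum, remaining)
        timeline.append((name, ran))          # task occupies the CPU
        if remaining - ran > 0:
            queue.append((name, remaining - ran))  # not done: back of queue
    return timeline

# Three tasks needing 5, 2, and 4 time units, with a quantum of 2
print(round_robin({"A": 5, "B": 2, "C": 4}, 2))
```

Notice how task B, which fits within one quantum, finishes early, while longer tasks keep returning to the back of the queue until their work is done.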


Figure 1.4: Time-Sharing OS Visual Representation

Advantages of Time-Sharing OS:


 Each task gets an equal opportunity
 Less chances of duplication of software
 CPU idle time can be reduced

Disadvantages of Time-Sharing OS:


 Reliability problem
 One must have to take care of security and integrity of user programs and data
 Data communication problem

Distributed Operating System


This type of operating system is commonly described as a recent development in the field
of computer technology and is currently widely accepted throughout the world.

In this type of OS, several independent yet interconnected computers can communicate
with each other using a shared communication network. These so-called "independent systems"
each have their own memory unit and central processing unit (CPU). These systems can also be
referred to as loosely coupled (distributed) systems. In loosely coupled systems, the
components are less dependent on each other, or depend on each other to the least extent
possible. In this OS, it is also possible for a user's system to access the files or software of
another system, given that both systems are connected to the same network. An example of
this OS is LOCUS.

Advantages of Distributed Operating System:


 Failure of one system will not affect the rest of the network communication, as all systems
are independent of each other
 Electronic mail increases the data exchange speed
 Since resources are being shared, computation is highly fast and durable
 Load on the host computer is reduced
 These systems are easily scalable, as many systems can easily be added to the network
 Delay in data processing is reduced

Disadvantages of Distributed Operating System:


 Failure of the main network will stop the entire communication
 The languages used to establish distributed systems are not yet well defined


 These types of systems are not readily available, as they are very expensive. Not only that,
the underlying software is highly complex and not yet well understood.

Figure 1.5: Distributed OS Visual Representation

Network Operating System


This type of operating system runs on a server. It provides the capability to manage
users, data, applications, and several other networking functions. This type of system also
allows shared access to files, printers, applications, etc. within a small private network.

Figure 1.6: Network OS Visual Representation

In this OS, all users are aware of the configuration of other users within the same network
(i.e., individual connections), which allows the users and the system itself to function or work more
efficiently. This is why systems in this type of OS are also known as tightly coupled systems.
In tightly coupled systems, hardware and software components are highly dependent on each
other, which explains why all users should be aware of the configuration of other
users. Examples of this OS include Microsoft Windows Server 2008, UNIX, Linux, and Mac OS X.


Advantages of Network Operating System:


 Highly stable centralized servers
 Security concerns are handled through servers
 New technologies and hardware up-gradation are easily integrated to the system
 Server access is possible remotely from different locations and types of systems

Disadvantages of Network Operating System:


 Servers are costly
 User has to depend on central location for most operations
 Maintenance and updates are required regularly

Real-Time Operating System


This type of operating system is used for real-time systems. Examples of real-time
systems are air traffic control systems, networked multimedia systems, command control
systems, etc. In real-time systems, the time interval needed to process and respond to inputs
is very small. This time interval is called response time.

In this type of system, every second counts; even a single second is crucial for the
system to function efficiently. This system is used when the time requirements for a specific
response are very strict, as in missile systems, air traffic control systems, robots, etc.
Examples of Real-Time Operating Systems include medical imaging systems, industrial
control systems, robots, air traffic control systems, etc.

Figure 1.7: Real-Time OS Visual Representation (Source: Real-Time OS. Digital Image. The Crazy Programmer, 10 Aug. 2019)

Two types of Real-Time Operating System (RTOS):

 Hard Real-Time Systems: These OSs are meant for applications where time
constraints are very strict and even the shortest possible delay is not acceptable. These
systems are built for life-saving applications, like automatic parachutes or airbags, which must
be immediately available in case of an accident. Virtual memory is almost never found in these
systems.

 Soft Real-Time Systems: This type of Real-Time OS is used when the application's time
constraints are less strict. In a soft real-time system, meeting the deadline is not compulsory
every time for every task: the system can occasionally miss a deadline, but it cannot do so
for every task or process. If deadlines are missed frequently, there will be consequences,
up to the point where the system can no longer be used. Examples of soft real-time systems
are personal computers, audio systems, video systems, etc.

Table 1.2: Major Differences between Hard and Soft Real-Time Systems
Characteristic          Hard Real-Time       Soft Real-Time
Response Time           Hard-required        Soft-required
Peak-load performance   Predictable          Degraded
Control of pace         Environment          Computer
Safety                  Often critical       Non-critical
Size of data files      Small/Medium         Large
Redundancy type         Active               Checkpoint-recovery
Data integrity          Short-term           Long-term
Error detection         Autonomous           User assisted

The above table shows the major differences between hard and soft real-time systems.
The response time requirements of hard real-time systems are on the order of milliseconds or less,
and missing them can result in a catastrophe. In contrast, the response time requirements of soft real-
time systems are higher and not very stringent. In a hard real-time system, the peak-load
performance must be predictable and must not violate the predefined deadlines. In a soft real-
time system, degraded operation under a rarely occurring peak load can be tolerated. A hard real-
time system must remain synchronous with the state of the environment in all cases. On the other
hand, soft real-time systems will slow down their response time if the load is very high. Hard real-
time systems are often safety-critical. Hard real-time systems have small data files and real-time
databases; temporal accuracy is often the concern here. Soft real-time systems, for example on-
line reservation systems, have larger databases and require long-term data integrity. If an error
occurs in a soft real-time system, the computation is rolled back to a previously
established checkpoint to initiate a recovery action. In hard real-time systems, roll-back/recovery
is of limited use.
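The contrast above can be sketched in code. This is an illustrative model only: the function `run_with_deadline` and its parameters are made up for this example, and a hard miss is modeled as an unrecoverable error while a soft miss merely degrades service:

```python
import time

def run_with_deadline(task, deadline_s, hard=False):
    """Run a task; react to a missed deadline per hard/soft semantics."""
    start = time.monotonic()
    result = task()
    elapsed = time.monotonic() - start
    if elapsed > deadline_s:
        if hard:
            # Hard real-time: a missed deadline is a system failure.
            raise RuntimeError(f"deadline missed by {elapsed - deadline_s:.3f}s")
        # Soft real-time: degraded service, but the system keeps running.
        print(f"warning: deadline missed by {elapsed - deadline_s:.3f}s")
    return result

print(run_with_deadline(lambda: 2 + 2, deadline_s=1.0))
```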

Advantages of RTOS:
 Maximum Consumption: Maximum utilization of devices and the system, thus more output
from all the resources
 Task Shifting: The time assigned for shifting tasks in these systems is very small. For example,
in older systems it takes about 10 microseconds to shift from one task to another, while in the
latest systems it takes 3 microseconds.
 Focus on Application: Focus is on running applications, with less importance given to
applications waiting in the queue.
 Real-time operating systems in embedded systems: Since the size of programs is small,
an RTOS can also be used in embedded systems, such as in transport and others.
 Error Free: These types of systems are error-free.
 Memory Allocation: Memory allocation is best managed in these types of systems.

Disadvantages of RTOS:
 Limited Tasks: Very few tasks run at the same time, and concentration is kept on very few
applications to avoid errors.
 Use heavy system resources: Sometimes the system resources are not so good, and they
are expensive as well.
 Complex Algorithms: The algorithms are very complex and difficult for the designer to
write.
 Device drivers and interrupt signals: It needs specific device drivers and interrupt signals
in order to respond to interrupts as early as possible.
 Thread Priority: It is not good to set thread priority, as these systems rarely
switch tasks.

Handheld Operating System


This type of operating system is designed to run on machines that have slower processors
and less memory. These OSs are designed to use less memory and require fewer
resources compared to other types of OS.


Handheld OSs are created to work with different types of hardware than the standard
desktop operating systems. Examples of handheld operating systems include Palm OS, Pocket
PC, Symbian OS, Linux, Windows, etc. Handheld systems also include Personal Digital
Assistants (PDAs), such as Palm Pilots, and cellular telephones with the ability to connect to a
network such as the Internet. They are usually of limited size; most handheld devices have a
limited amount of memory, slow processors, and small display screens.

Advantages of Handheld OS:


 Used on portable devices in which it allows the user to take their work wherever they go.
 Costs will be cheaper since the resources used are limited compared to the other types of
OS.
 It does not rely on non-portable power sources.

Disadvantages of Handheld OS:


 Memory is limited. As a result, the operating system must manage memory efficiently.
 Many of the handheld devices do not support or use virtual memory, which is why the
developers need to work with very limited physical memory.
 Faster processors require more power. Since handheld devices have limited source of
power, the processors that can be used have limited speed.


Lesson 4: Components of Operating System


A component is a process, program, utility, or another part of a computer's operating
system that helps manage different areas of the computer. Not to be confused with a hardware
component, a system component is similar to a computer program but is not something an end
user directly interacts with when using a computer.

Components of Operating System


1. Shell - In computing, a shell is a user interface for access to an operating system's services.
In general, operating system shells use either a command-line interface (CLI) or graphical user
interface (GUI), depending on a computer's role and particular operation. It is named a shell
because it is the outermost layer around the operating system.

GUI (Graphical User Interface) - This interface uses graphical elements such as windows,
scrollbars, buttons, wizards, and icons to interact with the operating system. In this
interface, information is presented using videos, images, plain text, and more.

CLI (Command Line Interface) - In this interface, you type commands into a console
window to interact with the operating system. For example, at the command prompt you
type the command for the task you want the computer to perform.
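The read-run loop at the heart of any command-line shell can be sketched as follows. This is an illustrative miniature assuming a POSIX-like environment; `mini$` is a made-up prompt, not a real shell:

```python
import shlex
import subprocess

def run_line(line):
    """Hand one command line to the OS and return its exit status."""
    args = shlex.split(line)          # split the line into program + arguments
    completed = subprocess.run(args)  # the OS locates and runs the program
    return completed.returncode

def mini_shell():
    """Read a command, run it, and repeat until the user types 'exit'."""
    while True:
        line = input("mini$ ")
        if line.strip() == "exit":
            break
        if line.strip():
            run_line(line)
```

Calling `run_line("echo hello")` prints `hello` and returns the program's exit status, just as a real shell reports whether a command succeeded.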

2. Memory Manager – This mainly focuses on the allocation of memory to different tasks.
The memory manager handles the main memory, or RAM (Random Access Memory),
and keeps track of the memory space needed by each running process. Multitasking consumes
memory space. Allocation of memory happens after the manager checks that a portion of memory
is valid for the request, while de-allocation happens when it is time to reclaim the memory. The
memory manager also protects space in main memory by not allowing unauthorized alterations to
happen.
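Allocation and de-allocation can be sketched with a toy first-fit allocator. This is illustrative only (the class, its method names, and the hole list are assumptions for this sketch); real memory managers also handle protection and coalescing of free blocks:

```python
class MemoryManager:
    """Toy first-fit allocator: free memory is a list of (start, size) holes."""

    def __init__(self, total):
        self.holes = [(0, total)]       # one big free block at start-up

    def allocate(self, size):
        """Return the start address of a block, or None if no hole fits."""
        for i, (start, hole) in enumerate(self.holes):
            if hole >= size:            # first hole big enough wins
                if hole == size:
                    del self.holes[i]
                else:
                    self.holes[i] = (start + size, hole - size)
                return start
        return None                     # request denied: no valid portion

    def free(self, start, size):
        """Reclaim a block (coalescing of adjacent holes is omitted here)."""
        self.holes.append((start, size))

mm = MemoryManager(100)
print(mm.allocate(30))   # first block starts at address 0
print(mm.allocate(50))   # next block starts right after, at address 30
```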

3. Process Manager – This focuses on the scheduling of tasks and the utilization of the
processor. Process scheduling is the operating system function that decides which process gets
the processor. Information sharing and exchange between processes, protection of
resources from one process to another, and facilities for sharing and synchronization of
processes are examples of activities handled by the process manager. The traffic
controller is the program that keeps track of the processor and its status. Handling the jobs
that enter the system and managing each process within those jobs are the two levels of the Process
Manager.
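The dispatching role described above can be sketched as a toy model. The `Dispatcher` class, its method names, and the first-come-first-served policy are illustrative assumptions, not the module's specific algorithm:

```python
from collections import deque

class Dispatcher:
    """Keeps a ready queue and records which process holds the CPU,
    playing the role of the traffic controller described above."""

    def __init__(self):
        self.ready = deque()
        self.running = None

    def admit(self, pid):
        self.ready.append(pid)          # a new job enters the ready queue

    def dispatch(self):
        """Give the processor to the next ready process (FCFS policy)."""
        if self.running is not None:
            self.ready.append(self.running)   # preempted: back to ready
        self.running = self.ready.popleft() if self.ready else None
        return self.running

d = Dispatcher()
d.admit("P1")
d.admit("P2")
print(d.dispatch())   # P1 gets the CPU first
```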

4. Security Manager - This manager secures the whole computer against any unauthorized process,
application, and more.

5. Secondary Storage Manager - Unlike main memory (RAM), secondary storage holds
data that can be manipulated further in the system; this manager handles that storage.

6. Device Manager – This pertains to the control of the operating system on the peripheral
devices such as the mouse, monitor, and other pertinent devices. Whenever you save a file into
a disk, the operating system does instruct the device drivers to write/store the specific file into the
auxiliary storage device. The same happens whenever you want to print a document. The
operating system is the one that instructs the printer about the accommodation of the request to

print the document. I/O controller is the program that keeps track of all the devices. Allocation and
de-allocation of resources are done by the Device Manager too.

7. File System Manager – This refers to the user's ability to add, delete, modify,
and manipulate files. Some of the activities handled by the file manager are naming
and renaming specific files, copying files from one directory to another, and backup and recovery
of files. It simply pertains to the use of files. This manager does allocation and de-allocation of these
resources as well.

Reading Assignment
 [Link]
ng%20system%20(OS)%20is,are%20run%20on%20the%20machine.&text=Almost%20
all%20computers%20use%20an%20OS%20of%20some%20type.
 [Link]
 [Link]
 [Link]
 [Link]

References / Sources / Bibliography


 Nutt, G. (2009). Operating Systems: A Modern Perspective, 3rd ed. Pearson Addison-
Wesley Inc., Boston, Massachusetts.

 Silberschatz, A., Galvin, P., and Gagne, G. (2018). Operating System Concepts, 10th
ed. John Wiley and Sons Inc.

 Stallings, William. (2018). Operating Systems: Internals and Design Principles, 9th ed.
Pearson Education Inc.

 Harris, J. Archer. (2002). Schaum's Outline of Theory and Problems of Operating Systems.


McGraw-Hill Companies Inc.

 [Link]
20system/operating%[Link]

 [Link]

 [Link]

 [Link]

 [Link]

 [Link]

 [Link]


Exercises
I. Familiarize the phrases and keywords below and match them with their corresponding
generations.
 First electronic computer
 Microsoft emerged
 Rapid growth of minicomputers
 Personal computers
 Machines handled by professional operators
 Simple mathematical calculations
 First operating system
 GMOS
 Plug boards
 GUI
 Multiprogramming system
 Punch cards
 Batch processing systems
 Time-sharing OS
 UNIX
 MS-DOS

First Generation       Second Generation       Third Generation       Fourth Generation

II. Multiple Choices. Match each definition to their corresponding type of operating system.
Write the letter of the correct answer.
Choices
A. Batch Operating System       B. Time-Sharing Operating System       C. Distributed Operating System
D. Network Operating System     E. Real-Time Operating System          F. Handheld Operating System

1. Also known as Multi-tasking systems.


2. An example of this operating system is the Payroll system.
3. This OS involves several independent yet interconnected computers that can
communicate with each other using a shared communication network.
4. Involves independent systems that are also called as loosely coupled systems.


5. Systems in this type of OS are also known as tightly coupled systems.


6. An example of this OS is the Palm OS.
7. Designed to run on machines that have processors with lower speed and less
memory.
8. An example of this OS is Microsoft Windows Server 2008.
9. This OS involves an operator that takes similar jobs with the same requirement
and divides them in groups.
10. A type of operating system where time constraints are very strict.

III. Give at least three (3) examples of Hard Real-Time OS and Soft Real-Time OS.
No. Hard Real-Time OS Soft Real-Time OS
1
2
3


MODULE 2: COMPUTER SYSTEM STRUCTURE


Overview
This module gives us an idea of the computer system structure before we proceed to the
details of computer system operation. We will discuss the basic functions of system startup, I/O,
and storage early in this module. We will also describe the basic computer architecture that makes
it possible to write a functional operating system.

Objectives
At the end of this module, the student should be able to:
 Discuss the basic functions of system startup.
 Distinguish potential threats to operating systems and the security features designed to
guard against them.
 Describe how computing resources are used by application software and managed by
system software.
 Discuss the advantages and disadvantages of using interrupt processing.
 Contrast kernel and user mode in an operating system.

Lesson 1: The Computer System


What is a Computer System?
A computer system is composed of three (3) main components: the Hardware, the
Software, and the Liveware. These three work together to process, receive, manipulate,
display, and move data or information. The framework of this kind of system ordinarily
includes a PC, mouse, monitor, and other possible components.

These components can be coordinated to form a single device, like a laptop. A computer
system can work on its own, and it can also access devices that are associated
with other computer systems.

As stated above, there are three components:

1. The Hardware – These are the physical parts that play an integral role in computer
systems. It serves as the physical medium used by the clients to send, receive, and
store data. Basic examples are the motherboard, input and output devices (such as
keyboard, mouse), CPU, and storage devices.

Figure 2.2: Examples of hardware


2. The Software – Basically, these are just the programs and applications installed in
your computer. This component is divided into two, namely, the System Software and
the Application Software.

2.0 COMPUTER SYSTEM STRUCTURE PAGE | 17



 System Software – Software that provides a platform for other software. These


are low-level programs that run in the background at a fundamental level of the
operating system. Some examples are system servers, device drivers, and
utility software.
 Application Software – Software created or written to perform a variety of
specific tasks for the user. Some applications come pre-installed on the
computer, but the user can also install other applications. Some examples are
Microsoft Office (Word, Excel, PowerPoint), browsers (Google Chrome, Mozilla
Firefox), media players, AutoCAD, etc.

3. The Liveware – Also known as the computer user. The user instructs the PC on what
to execute. Basically, it's just you using the computer.

Computer System Structure

Figure 2.3: The Modern Computer System Structure

The Modern Computer System Structure


A computer system structure consists of devices connected to a central device that provides
access to shared memory. The modern general-purpose computer system consists of one or
more CPU(s) and a number of device controllers connected through a common bus that provides
access to shared memory.

Figure 2.3 shows:

 Each device controller is in charge of a specific device.


 One of the purposes of the memory controller is to guarantee orderly access to shared
memory; a memory controller is provided whose function is to synchronize memory
access.
 The CPU and the device controllers can execute concurrently, competing for memory
cycles.

There are five main hardware components that make up a computer system:


1. Input Devices – used for entering data in the computer. Examples are:
a. Keyboard
b. Microphone
c. Gamepad Controllers
d. Scanner

2. Output Devices – any device that puts out the information or data to the user or to another
device. Examples are:
a. Monitor
b. Speakers
c. Headphones
d. Printer

3. Processing Devices – these are the core parts of the computer assigned to process data.
Examples are:
a. Central Processing Unit (CPU)
b. Graphics Processing Unit (GPU)

4. Storage Devices – devices that store data in the computer; these have two subcategories:
a. Primary Storage Devices – smaller in size and have the fastest data speed. One
example is the Random Access Memory (RAM).
b. Secondary Storage Devices – bigger in size but has slow data speed. Some
examples are Hard Disk Drive, Optical Disk Drive, and USB Flash Disks/Drives.

5. Communication Devices – hardware devices that are assigned to transmit analog or


digital signals/messages, and it can be wireless or hardwired. Examples are:
a. Bluetooth devices
b. LAN Card
c. Modulator Demodulator (or Modem)
d. Router

The Central Processing Unit (CPU)


The CPU (sometimes called the heart or the brain of a computer) is the
electronic circuitry within a computer that executes the instructions that make up a
computer program. It carries out the work of the software, the hardware, and the user.
The CPU performs the basic arithmetic, logic, controlling, and input/output (I/O)
operations specified by the instructions in the program. Here is a block diagram of
how a CPU works (see Figure 2.4).

Figure 2.4: A diagram of how a CPU works
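The instruction cycle in the diagram can be illustrated with a toy fetch-decode-execute loop. The tiny machine below, with its made-up LOAD/ADD/HALT instructions, is purely illustrative and not a real instruction set:

```python
def run(program):
    """Fetch, decode, and execute instructions until HALT or end of program."""
    acc, pc = 0, 0                      # accumulator register, program counter
    while pc < len(program):
        opcode, operand = program[pc]   # fetch the next instruction
        pc += 1                         # advance the program counter
        if opcode == "LOAD":            # decode + execute
            acc = operand
        elif opcode == "ADD":
            acc += operand
        elif opcode == "HALT":
            break
    return acc

print(run([("LOAD", 2), ("ADD", 3), ("HALT", 0)]))  # -> 5
```

The program counter, the decode step, and the accumulator play the same roles as the corresponding blocks in the CPU diagram, just at a toy scale.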


Lesson 2: The Computer Boot-Up


Introduction to Computer Boot-Up
When we start a computer, an operation called booting is performed automatically.
During booting, the system checks all the hardware and software installed or attached to the
system and loads the files needed to run the system.

Basically, there are two types of booting:

1. Warm Booting – This is the process of restarting a computer that is already powered on.
2. Cold Booting – This is the process of turning on a computer after it had been powered
off completely.

The Bootloader
Relevant data in the system software has to be loaded into the main memory as soon as
the device is started, and stays there as long as the computer is running. This is made possible
by a so-called bootloader, which comes permanently integrated as a standard in most modern
OS.

A bootloader, also known as a boot program or bootstrap loader, is a special operating


system software that loads into the working memory of a computer after start-up. For this purpose,
immediately after a device starts, a bootloader is generally launched by a bootable medium like
a hard drive, a CD/DVD or a USB stick. The boot medium receives information from the
computer’s firmware (e.g. BIOS) about where the bootloader is. The whole process is also
described as “booting”.

The Boot Sequence


A boot sequence (also called boot order) is the order in which a computer searches
nonvolatile data storage devices for program code to load the operating system (OS).
Typically, the Basic Input/Output System (BIOS) is used to start the boot sequence. Once the
instructions are found, the CPU takes control and loads the operating system into system
memory. The devices usually listed as boot order options in the BIOS settings are hard
disks, optical drives, flash drives, etc. The user is able to change the boot sequence through the
CMOS setup.

Prior to the boot sequence is the power-on self-test (POST), the initial diagnostic test
performed by a computer when it is switched on. When POST finishes, the boot sequence
begins. If there are problems during POST, the user is alerted by beep codes, POST codes, or
on-screen POST error messages.

Unless programmed otherwise, the BIOS looks for the OS on Drive A first, then on Drive C.
It is possible to modify the boot sequence from the BIOS settings. Different BIOS models
have different key combinations and on-screen instructions for entering the BIOS and changing
the boot sequence. Normally, after the POST, the BIOS will try to boot using the first device
assigned in the BIOS boot order. If that device is not suitable for booting, the BIOS will try to boot from the
second device listed, and this process continues until the BIOS finds boot code on one of the
listed devices.

If the boot device is not found, an error message is displayed and the system crashes or
freezes. Errors can be caused by an unavailable boot device, boot sector viruses or an inactive
boot partition.

Figure 2.5: Normal PC Boot-up Process

Lesson 3: Traps and Interrupts


Traps and Interrupts

Traps and interrupts are events that break the normal sequence of instructions being
executed by the CPU. In short, they are interruptions to the CPU’s sequence of events.

Traps, also known as faults or exceptions, are synchronous interrupts that the CPU senses
as an abnormal condition, meaning an error has happened. They occur when a program’s
actions cause the hardware to transfer control out of user mode. A trap usually switches
the CPU to kernel mode, and the operating system does not return to the originating process
until it has performed the necessary actions. Keep in mind that a trap in kernel mode is more
fatal than a trap in user mode. The errors can be in the form of:

 Invalid memory access.
 Division by zero.
 Undefined code execution.
 Non-existing peripheral device access.
 Breakpoint.
 Restricted memory location access.

The transfer of control to the trap handler is similar to an ordinary procedure call, except
that it results from some exceptional situation induced by the program and detected by the
hardware, not from an explicit procedure call in the application program.


Interrupts are asynchronous events generated or signaled by the hardware (devices like
the graphics card, I/O ports, hard disk, etc.). They take the form of a request or a message
that I/O is required, for example when a key is pressed on the keyboard or when the
mouse is moved and clicked; these are hardware interrupts. On the other hand, a
program requiring disk input or output generates a software interrupt.

Note: We can consider traps as “software interrupts”, and interrupts as “hardware interrupts”.

When a trap or interrupt occurs, the CPU responds by temporarily stopping the current
program. It preserves all the values related to the stopped program, such as its registers and
the memory location of the last instruction executed. After saving them, the CPU transfers
control to an interrupt service routine, or interrupt handler, which takes the appropriate
action in response to the interrupt. The CPU resumes the temporarily stopped, or interrupted,
process after the interrupt service routine finishes its task. The register values and other state
are restored from the previously saved values, which ensures the program’s execution
continues from where it left off.

A good analogy is pausing the movie you are watching (a process) to run an errand for
your mother (another process). After you have bought what she needs, you can resume the
movie. You can still be interrupted by other events, such as an emergency phone call or an
accident, and have to pause again to help.

While an interrupt is being serviced, other interrupts are disabled until the first one is done.
Once the interrupt has completed its task, the interrupt mechanism is enabled again. This
scheme avoids overlapping interrupts while one is already in progress.
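To make the trap mechanism concrete, the sketch below uses a Python exception as a stand-in for a hardware trap (illustrative only; a real trap handler runs in kernel mode, not in the application). A division by zero breaks the normal instruction sequence, control transfers to the handler, and execution then resumes:

```python
def user_program():
    # This instruction causes a "trap": the divide-by-zero condition is
    # detected and control is transferred away from the normal sequence.
    return 10 // 0

def run_with_trap_handler():
    try:
        user_program()                 # normal sequence of instructions
    except ZeroDivisionError:          # the "trap handler" takes over
        return "trap handled, process resumed"

print(run_with_trap_handler())
```

The `try`/`except` pair plays the role of the saved state and handler dispatch: the erroneous instruction never completes, but the program as a whole continues.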

Lesson 4: I/O Structure


Introduction to I/O Structure
The management of the different devices connected to a computing device is a major
concern of operating-system developers. This is because I/O devices differ widely in their
functionality and performance, and different techniques are required for controlling them.
These methods form the I/O subsystem of the kernel, which isolates the rest of the kernel from
the complexity of managing I/O devices.

Device Drivers
Device drivers are software components that can be added to an OS to handle a
specific device. Some common devices are keyboards, printers, scanners, digital cameras,
and external storage devices. The operating system relies on device drivers to handle all
I/O devices. A device driver is responsible for managing the data between the peripheral
device and its local buffer storage. A device works with the operating system by transferring
signals over a cable or even wirelessly. Devices communicate with the computing device
through connection points called ports. Other devices use a set of wires or cables; these
connecting cables are called a bus. A bus is a set of wires serving as a pathway for
transporting data, together with a rigidly defined protocol that specifies the set of messages
that can be sent over the wires. The following are the tasks that a device driver performs:

 Manage requests from the device-independent software by accepting the appropriate
tasks.
 Perform the required error handling and interact with the device controller to carry out I/O.
 See to it that all processes are completed and the request is executed successfully.

I/O Operation

 To begin an I/O operation, the device driver starts by loading the appropriate registers
within the device controller.
 The device controller examines the contents of these registers to determine the
appropriate action to take.
 The controller begins to manage the transfer of data from the device to its local buffer.
 Once the transfer of data has been completed successfully, the device controller signals
the device driver via an interrupt that it has finished its task.
 Then the device driver returns control to the operating system.
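The steps above can be sketched as a toy model (the `DeviceController` and `device_driver` names are hypothetical; real drivers manipulate hardware registers and real interrupts, not Python objects and callbacks):

```python
class DeviceController:
    """Toy controller: the driver loads its registers, it fills a local
    buffer, then signals completion via a callback (the "interrupt")."""
    def __init__(self):
        self.registers = {}
        self.local_buffer = None

    def start(self, on_interrupt):
        # Step 2: examine the registers to determine the action.
        device = self.registers["device"]
        # Step 3: transfer data from the device into the local buffer.
        self.local_buffer = f"data from {device}"
        # Step 4: signal the driver via an "interrupt" that we are done.
        on_interrupt(self.local_buffer)

def device_driver(device_name):
    controller = DeviceController()
    result = {}
    # Step 1: load the registers within the device controller.
    controller.registers = {"device": device_name, "op": "read"}
    controller.start(lambda buf: result.update(buffer=buf))
    # Step 5: return control (here, the transferred data) to the OS.
    return result

print(device_driver("disk0"))
```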

There are also two (2) methods in I/O:

1. Synchronous I/O – The CPU waits while the I/O proceeds.
2. Asynchronous I/O – The I/O executes concurrently with CPU processing.
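The difference can be demonstrated with Python's asyncio (a simulation: `asyncio.sleep` stands in for a device transfer). Two synchronous transfers would take the sum of their delays; asynchronous transfers overlap, so the total time is roughly the longest single delay:

```python
import asyncio
import time

async def fake_io(name, delay):
    await asyncio.sleep(delay)      # stands in for a device transfer
    return name

async def main():
    start = time.monotonic()
    # Asynchronous I/O: both "transfers" proceed concurrently, so the
    # total time is about max(0.2, 0.2) rather than 0.2 + 0.2.
    results = await asyncio.gather(fake_io("disk", 0.2),
                                   fake_io("network", 0.2))
    return results, time.monotonic() - start

results, elapsed = asyncio.run(main())
print(results, elapsed < 0.35)
```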

Polling
The process of regularly checking the status of a device to see if it is time for the next I/O
operation is called polling. In polling, a client program actively examines the status of an
external device as a synchronous activity. Polling is most frequently used for I/O and is also
referred to as polled I/O or software-driven I/O. Polled operation, or polling, is the simplest
method for an I/O device to interact with the processor: the I/O device simply puts the data in
a status register, and the processor must come and get the data. Most of the time, devices will
not require attention, and when one does, it will have to wait until it is next interrogated by the
polling program.

The distinction between polling and interrupts is that polling is not a hardware mechanism,
whereas an interrupt is. Polling is a protocol in which the processor steadily checks whether a
device needs attention. With interrupts, the device tells the CPU that it needs servicing; with
polling, the CPU keeps asking each I/O device whether or not it needs CPU attention,
continuously checking every attached device to detect whether any of them requires service.
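A minimal sketch of a polling loop (`FakeDevice` is a hypothetical stand-in for a status register): the processor repeatedly interrogates the device, burning cycles until the status indicates the data is ready:

```python
class FakeDevice:
    """Hypothetical device: data becomes ready on the third status check."""
    def __init__(self):
        self.checks = 0

    def ready(self):
        self.checks += 1
        return self.checks >= 3     # status register finally says "ready"

    def read(self):
        return "payload"

device = FakeDevice()
wasted_polls = 0
while not device.ready():           # CPU keeps asking the device
    wasted_polls += 1               # cycles spent while the device is busy

data = device.read()                # the processor comes and gets the data
print(data, wasted_polls)
```

The wasted iterations in the loop are exactly the cost polling pays that interrupts avoid.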


Lesson 5: Storage Structure


Introduction to Storage Structure

Computer storage comprises many different components and modules that are used to
store, alter, and edit data. Storage is the process through which digital data is managed and
saved within a data storage device using computing technology. It is the mechanism that
allows a computer to retain data, either permanently or temporarily.

Storage is a primary and essential component of a computer system because most of the
processes executed on a computer use memory. There are two kinds of memory: the
non-volatile type and the volatile type.

Non-volatile memory is a type of computer memory that has the capability to hold saved
data even when the computer’s power is turned off. Its data is permanent, and it is commonly
used for secondary or persistent storage. Volatile memory, in contrast, is a type of computer
memory that manages and saves data only while the device is in use or turned on. Its data
is temporary and is automatically removed when the device is turned off.

Memory is a fundamental component of the computer that is categorized into primary and
secondary memory.

Primary Storage

Primary storage, also known as main memory, is the area in a computer which stores data
and information for fast access.

Semiconductor chips are the principal technology used for primary memory. It is used to
store frequently used programs that can be directly accessed by the processing unit for further
processing. Some of the primary storage devices are the following:

 Random Access Memory (RAM) – also called read-write memory, this type of
primary storage is volatile because its data is removed once the power is turned off.
RAM is the main device of primary memory; it is fast but also quite expensive.
 Read Only Memory (ROM) – this is a primary, non-volatile type of memory. It
cannot be altered and can only be read when required. Since it is an unchangeable
memory, it is used for programs and processes inside the device that are frequently
required, such as the system boot program.
 Cache Memory – this is used to store data and instructions that are often required by the
CPU so it does not have to search for them in main memory. It is a small but fast
memory, more expensive than RAM.


Secondary Storage

Secondary storage is another class of memory device that can hold, store, and
manage data and information permanently. These devices do not need power to retain data,
but data from secondary storage must be brought into primary storage before the CPU can
use it, as the CPU usually cannot directly access data on secondary storage devices. Some
of the secondary devices are:

 Hard Disk Drives – commonly used as secondary storage devices. They consist of flat,
round metal platters coated with magnetic material that rotate at high speed. A hard
disk is a non-volatile storage device on which you can store data permanently, and is
a type of magnetic disk device.
 Floppy Disks – also known as floppies and diskettes. These secondary storage devices
were very common from the mid-1970s to the 1990s and were the most widely used
storage devices of that time. They can store at most a few megabytes of data, and are
also a type of magnetic disk device.
 Memory Cards – these are card-shaped, small storage devices. They can be easily
plugged into a device due to their size, and are commonly used in smaller gadgets like
phones and cameras. They are available in different storage sizes, from 8 megabytes to
32 gigabytes and higher, depending on the brand.
 Flash Drives – also called pen drives, these are portable storage devices. A flash drive
is a compact memory storage device that can store permanent data and comes in a
variety of capacities.
 CD-ROMs – Compact Discs are disc-shaped optical storage devices, usually silver in
color. They typically have storage capacities ranging from megabytes to gigabytes.

All of these storage devices are essential to a computer and differ in speed, availability,
and performance, which affects the processes being executed on a computing device.
To understand further, we can look at the figure below:

Lesson 6: Hardware Protection


Hardware Protection and Types of Hardware Protection

Hardware structure refers to the identification of a system’s physical components and
their interrelationships. This description allows hardware designers to understand how their
components fit into a system architecture, and provides software component designers with
important information needed for software development and integration. A clear definition of
the hardware architecture allows the various traditional engineering disciplines (e.g. electrical
and mechanical engineering) to work together more effectively to develop and manufacture
new machines, devices, and components.

Hardware protection is one of the major areas of concern in computer-system structures.


From a very simple architecture, the computer has evolved into a highly dynamic, interactive, and
complex machine. Computers which were previously standalone have also been asked to
communicate with one another through networking. This added another level of intricacy in the
design and operation of the computer which has a profound effect on its security.

Dual-mode operation forms the basis of I/O protection, memory protection, and CPU
protection. In dual-mode operation there are two separate modes: monitor mode (also called
“system mode” or “kernel mode”) and user mode. In monitor mode, the CPU can use all
instructions and access all areas of memory, while in user mode the CPU is restricted to
unprivileged instructions and a specified area of memory. User code should always be executed
in user mode, and the OS design ensures that it is. When responding to system calls,
traps/exceptions, and interrupts, OS code runs and the CPU automatically switches to monitor
mode.

Basically, hardware protection is divided into three (3) categories: CPU protection,
Memory protection, and I/O protection, which are explained below:

1. CPU Protection – CPU protection prevents a user program from getting stuck in an
infinite loop and never returning control to the OS. CPU usage is protected by using the
timer device, the associated timer interrupts, and OS code called the scheduler. While
running in user mode, the CPU cannot change the timer value or turn off the timer
interrupt, because these require privileged operations. Before passing the CPU to a
user process, the scheduler ensures that the timer is initialized and interrupts are
enabled. When a timer interrupt occurs, the timer interrupt handler (OS code) can run
the scheduler (more OS code), which decides whether or not to remove the current
process from the CPU.
2. Memory Protection – Memory is protected by partitioning it into pieces. While
running in user mode, the CPU can only access some of these pieces. The
boundaries of these pieces are controlled by the base register and the limit register
(specifying the bottom bound and the number of locations, respectively). These registers
can only be set via privileged instructions.
3. I/O Protection – The I/O is protected by making all input/output instructions privileged.
While running in user mode, the CPU cannot execute them, thus, user code, which
runs in user mode, cannot execute them. User code requests I/O by making
appropriate system calls. After checking the request, the OS code, which is running in
monitor mode, can actually perform the I/O using the privileged instructions.
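The base/limit check described under memory protection can be written out as a short sketch (the `check_access` helper is hypothetical; in hardware this comparison happens on every memory reference, and a violation raises a trap to the OS):

```python
def check_access(address, base, limit):
    # An access is legal only if base <= address < base + limit.
    # In hardware, an out-of-range address would trap to the OS.
    return base <= address < base + limit

BASE, LIMIT = 0x4000, 0x1000        # set only via privileged instructions

print(check_access(0x4800, BASE, LIMIT))   # inside the partition: True
print(check_access(0x3FFF, BASE, LIMIT))   # below the base: False
print(check_access(0x5000, BASE, LIMIT))   # at base + limit: False
```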

Reading Assignment
 [Link]
 [Link]
 [Link]


 [Link]
 [Link]

References / Sources / Bibliography


 Nutt, G. (2009). Operating Systems: A Modern Perspective, 3rd ed. Pearson
Addison-Wesley Inc., Boston, Massachusetts.

 Silberschatz, A., Galvin, P., and Gagne, G. (2018). Operating System Concepts, 10th
ed. John Wiley and Sons Inc.

 Stallings, William. (2018). Operating Systems: Internals and Design Principles, 9th ed.
Pearson Education Inc.

 Harris, J. Archer. (2002). Schaum’s Outline of Theory and Problems of Operating
Systems. McGraw-Hill Companies Inc.

 [Link]

 [Link]

 [Link]

 [Link]

 [Link]

 [Link]

 [Link]

 [Link]

 [Link]


Exercises
I. Enumeration
1. Five (5) examples of input devices.
2. Two (2) modes in dual-mode operation.
3. Three (3) primary storage devices.
4. Two (2) I/O methods.
5. Three (3) communication devices

II. Complete the diagram.

III. Now that you already know some of the primary storage devices, let’s focus on the
two basic primary ones, the RAM and ROM. In this exercise, use the Venn diagram to
compare and contrast these two storages.

RAM ROM


MODULE 3: PROCESS MANAGEMENT


Overview
Early computers allowed only one program to be executed at a time. This program had complete
control of the system and had access to all the system’s resources. In contrast, contemporary
computer systems allow multiple programs to be loaded into memory and executed concurrently.
This evolution required firmer control and more compartmentalization of the various programs;
and these resulted in the notion of a process, which is a program in execution. A process is the
unit of work in a modern time-sharing system.

The more complex the operating system is, the more it is expected to do on behalf of its users.
Although its main concern is the execution of user programs, it also needs to take care of various
system tasks that are better left outside the kernel itself. A system therefore consists of a collection
of processes: operating-system processes executing system code and user processes executing
user code. Potentially, all these processes can execute concurrently, with the CPU (or CPUs)
multiplexed among them. By switching the CPU between processes, the operating system can
make the computer more productive. In this module, you will read about what processes are and
how they work.

Objectives
At the end of this module, the student should be able to:
 Introduce the notion of a process – a program in execution, which forms the basis of all
computation.
 Describe the various features of processes, including scheduling, creation, and
termination.
 Explore interprocess communication using shared memory and message passing.
 Describe communication in client-server systems.

Lesson 1: Process Concept


What Is a Process?
A process is a program in execution. For example, when we write a program in C or C++ and
compile it, the compiler creates binary code. The original code and binary code are both programs.
When we actually run the binary code, it becomes a process.

A process is an active entity, as opposed to a program, which is considered to be a passive entity.


A single program can create many processes when run multiple times; for example, when we
open a .exe or binary file multiple times, multiple processes are being created.

3.0 PROCESS MANAGEMENT PAGE | 30



What does a Process Look Like in Memory?

Figure 3.6: Process in memory

 Text Section – The program code; it also includes the current activity, represented by
the value of the program counter.
 Stack – Contains temporary data, such as function parameters, return addresses,
and local variables.
 Data Section – Contains the global variables.
 Heap Section – Memory dynamically allocated to the process during its run time.

Lesson 2: Process States


As a process executes, it changes state. The state of a process is defined in part by the current
activity of that process. A process may be in one of the following states:

 New – Newly created process, or the process being created.
 Ready – After creation, the process moves to the Ready state; in other words, it is ready
for execution.
 Run – The process currently running on the CPU (only one process at a time can be
under execution on a single processor).
 Wait (or Block) – When a process requests I/O access.
 Complete (or Terminated) – The process has completed its execution.
 Suspended Ready – When the ready queue becomes full, some processes are moved
to this state.
 Suspended Block – When the waiting queue becomes full.


Figure 3.7: Process states
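The transitions between these states can be sketched as a small table (a simplified model following the list above; the suspended states are omitted for brevity, and the transition set is an assumption, not taken from the figure):

```python
# Simplified transition table for the states listed above.
TRANSITIONS = {
    "New": {"Ready"},
    "Ready": {"Run"},
    "Run": {"Ready", "Wait", "Complete"},   # preempted, blocked, or done
    "Wait": {"Ready"},                      # requested I/O has completed
}

def move(state, target):
    if target not in TRANSITIONS.get(state, set()):
        raise ValueError(f"illegal transition: {state} -> {target}")
    return target

state = "New"
for nxt in ("Ready", "Run", "Wait", "Ready", "Run", "Complete"):
    state = move(state, nxt)
print(state)
```

Note that there is no direct New-to-Run or Wait-to-Run transition: a process always passes through the ready queue before the CPU is dispatched to it.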

Context Switching
The process of saving the context of one process and loading the context of another
process is known as context switching. In simple terms, it is like loading and unloading the
process from running state to ready state.

Context switching happens when:

 When a high-priority process comes to ready state (i.e. with higher priority than the running
process)
 An interrupt occurs.
 User and kernel mode switch (though, this is not necessary)
 Preemptive CPU scheduling is used.

Context Switch versus Mode Switch


A mode switch occurs when the CPU privilege level is changed, for example when a system
call is made or a fault occurs. The kernel works in a more privileged mode than a standard user
task. If a user process wants to access things which are only accessible to the kernel, a mode
switch must occur. The currently executing process need not be changed during a mode switch.

A mode switch must typically occur for a process context switch to take place. Only the kernel
can cause a context switch.

CPU-Bound versus I/O-Bound Processes


A CPU-bound process requires more CPU time or spends more time in the running state,
while an I/O-bound process requires more I/O time and less CPU time. An I/O bound process
spends more time in the waiting state.


Lesson 3: Process Creation and Termination


There are two basic operations that can be performed on a process: Creation and
Termination.

Process Creation
A process may be created in the system for different operations. Some of the events that
lead to process creation are:

 User request for process creation


 System initialization
 Batch job initialization
 Execution of a process creation system call by a running process.

Here are the steps in process creation:

1. When a new process is created, the operating system assigns a unique Process
Identifier (PID) to it and inserts a new entry in primary process table.
2. Then the required memory space for all the elements of process such as program,
data, and stack is allocated including space for its Process Control Block (PCB).
3. Next, the various values in PCB are initialized such as:
a. Process identification part is filled with PID assigned to it in step 1 and also its
parent’s PID.
b. The processor register values are mostly filled with zeroes, except for the stack
pointer and program counter. Stack pointer is filled with the address of stack
allocated to it in step 2 and program counter is filled with the address of its
program entry point.
c. The process state information would be set to “New”.
d. Priority would be lowest by default, but user can specify any priority during
creation.

In the beginning, the process is not allocated any I/O devices or files. The user has
to request them, or, if this is a child process, it may inherit some resources from its
parent.

4. Then, the operating system will link this process to scheduling queue and the process
state would be changed from “New” to “Ready”. Now, process is competing for the
CPU.
5. Additionally, the operating system will create some other data structures, such as log
files or accounting files, to keep track of process activity.

A process may be created by another process using fork(). The creating process is called
the parent process and the created process is called the child process. A child process can have
only one parent but a parent process may have many children. Both the parent and child
processes have the same memory image, open files and environment strings. However, they
have distinct address spaces.

A diagram that demonstrates process creation using fork() is as follows:


Figure 3.8: Process creation using fork()
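On POSIX systems, the parent/child relationship in the figure can be observed directly with Python's `os.fork()` (Unix-only; the exit code 7 below is an arbitrary example value):

```python
import os

pid = os.fork()          # returns 0 in the child, the child's PID in the parent
if pid == 0:
    # Child: starts with the same memory image as the parent,
    # but runs in a distinct address space.
    os._exit(7)          # terminate with a status the parent can collect
else:
    # Parent: wait for the child and read its exit status.
    _, status = os.wait()
    exit_code = os.waitstatus_to_exitcode(status)   # Python 3.9+
    print("child exit code:", exit_code)
```

Both branches exist in both processes' memory images; the return value of `fork()` is what tells each process which role it plays.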

Process Termination
Process termination occurs when the exit system call is made. A process usually exits after
its completion, but sometimes a process aborts execution and terminates. When a process
exits, its resources, memory, and I/O devices are all deallocated.

Some of the causes of process termination are as follows:

 When a parent process is terminated, its child process is also terminated. This
happens when a child process cannot exist without its parent process.
 If a process needs more memory than what the system can allocate, termination
happens because of memory scarcity.
 A process tries to access a resource that it is not allowed to use.
 If the process fails to allocate a resource within the allotted time.
 When a parent process requests the termination of its child process.
 The task is no longer required or needed.

Lesson 4: Process Threads


A thread is a path of execution within a process code, with its own program counter that
keeps track of which instruction to execute next, system registers which hold its current working
variables, and a stack which contains the execution history.

Each thread belongs to exactly one process and no thread can exist outside a process.
Each thread represents a separate flow of control. Threads have been successfully used in
implementing network servers and web server. They also provide a suitable foundation for parallel
execution of applications on shared memory multiprocessors. The following shows the working of
a single-threaded and a multithreaded process.


Figure 3.9: Single-threaded (left) and multithreaded process (right).

Multithreading
A thread is also known as lightweight process. The idea is to achieve parallelism by
dividing a process into multiple threads. For example, in a browser, multiple tabs can be different
threads. Microsoft Word uses multiple threads: one thread to format the text, another thread to
process the inputs, etc.
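The idea can be shown with Python's threading module: the three threads below all write to the same list, because threads within one process share the process's data section (a lock is used since shared data needs protection from races):

```python
import threading

results = []                        # shared data: all threads see this list
lock = threading.Lock()             # protect the shared list from races

def worker(name):
    with lock:
        results.append(name)        # each thread writes to shared memory

threads = [threading.Thread(target=worker, args=(f"tab-{i}",))
           for i in range(3)]
for t in threads:
    t.start()                       # run the threads concurrently
for t in threads:
    t.join()                        # wait for all of them to finish

print(sorted(results))
```

Had these been three separate processes instead, each would have had its own copy of `results`, and sharing the data would have required an explicit interprocess communication mechanism.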

Thread versus Process


The primary difference is that threads within the same process run in a shared memory
space, while processes run in separate memory spaces. Threads are not independent of one
another like processes are, and as a result, threads share with other threads their code section,
data section, and OS resources (like open files and signals). But, like processes, a thread has its
own program counter (PC), register set, and stack space.

Below are some of the differences between the thread and process:

Table 3.1 – Differences between Thread and Process

Process | Thread
A process is heavyweight, or resource intensive. | A thread is lightweight, taking fewer resources than a process.
Process switching needs interaction with the operating system. | Thread switching does not need to interact with the operating system.
In multiple-processing environments, each process executes the same code but has its own memory and file resources. | All threads can share the same set of open files and child processes.
If one process is blocked, then no other process can execute until the first process is unblocked. | While one thread is blocked and waiting, a second thread in the same task can run.
Multiple processes without using threads use more resources. | Multithreaded processes use fewer resources.
In multiple processes, each process operates independently of the others. | One thread can read, write, or change another thread’s data.

Advantages of Thread over Process

1. Responsiveness – If the process is divided into multiple threads, then when one thread
completes its execution, its output can be immediately returned.
2. Faster context switch – Context-switch time between threads is lower compared to a
process context switch. Process context switching requires more overhead from the CPU.
3. Effective utilization of multiprocessor systems – If we have multiple threads in a single
process, then we can schedule the threads on multiple processors. This makes process
execution faster.
4. Resource sharing – Resources like code, data, and files can be shared among all threads
within a process. (Note: the stack and registers cannot be shared among threads; each
thread has its own stack and registers.)
5. Communication – Communication between multiple threads is easier, as threads share
a common address space, while between processes we have to follow a specific
communication technique.
6. Enhanced throughput of the system – If a process is divided into multiple threads, and
each thread’s function is considered as one job, then the number of jobs completed per
unit of time increases, thus increasing the throughput of the system.

Lesson 5: Process Schedulers


Process scheduling is an activity of the process manager that removes a running process
from the CPU and selects another process using a certain strategy.

Let’s put it this way: it is like students attending classes in a single room. When the time
allotted for a certain subject is up, another professor comes in to teach a different subject,
depending on the schedule assigned by the school. It can also be compared to a queue in a
grocery store. The same goes for the scheduling of processes: when a process is finished,
another one takes its place, or it is simply removed from the queue, depending on the chosen
strategy.

Process Scheduling Queues


The operating system maintains all PCBs (Process Control Blocks) in scheduling queues.
A PCB is a data structure which contains the information related to a process. The system
maintains a separate queue for each of the process states, and the PCBs of all processes in the
same execution state are placed in the same queue.

3.0 PROCESS MANAGEMENT PAGE | 36


Polytechnic University of the Philippines
COLLEGE OF COMPUTER AND INFORMATION SCIENCES

The system maintains three important scheduling queues for processes:

 Job Queue – This keeps all the processes in the system.
 Ready Queue – This keeps all processes that reside in main memory, ready and waiting
to be executed. A new process is always put in this queue.
 Device Queue – The processes that are blocked due to the unavailability of an I/O device
make up this queue.

Figure 3.10: Process scheduling queues.

This can be compared to baking batches of cookies. Say you have only two trays
and a single oven that fits one tray at a time. Once both trays are full and you prepare
another batch, you must put it somewhere else first, for example, on a plate (Job Queue),
because the trays (Ready Queue) are still unavailable. Meanwhile, a tray of unbaked cookies
must wait on the ready queue while another tray is being baked inside the oven (CPU), since
only one tray can be baked at a time.

The operating system can use various strategies or policies to manage each queue (such
as First Come First Serve, Shortest Job First, etc., which will be discussed in the next part of this
module). The scheduler determines how to move processes between the ready and run queues;
the run queue can have only one entry per processor core on the CPU, as shown in the diagram
above.

Schedulers
There are three types of schedulers that choose the jobs to be submitted to the system
and decide which process is to be run. These are the long-term, short-term, and medium-term
schedulers, which will be discussed below.

Long-term scheduler (also known as job scheduler) is the one responsible for
determining which programs should be admitted to the system. It selects processes from the
queue and puts them in the memory for it to be executed.

Short-term scheduler’s (also known as CPU scheduler) main task is to improve system
performance while following a certain set of criteria. It is where the change from the ready
state to the running state of a process happens. Short-term schedulers are also known as
dispatchers, because they manage the processes that are ready to execute and allocate the
CPU to one of them.

Medium-term scheduler is where the swapping part of the process occurs. Swapping
takes place whenever a process becomes suspended. A suspended process cannot make
further progress, so it is removed from memory and placed on secondary storage. This reduces
the degree of multiprogramming and is necessary to improve the process mix. For example, if a
person in a payment line forgets his/her money, he is forced to leave the queue and rejoin at
the back.


Lesson 6: Process Scheduling Concepts


Process scheduling is the process manager’s activity of removing the running process from the
CPU and selecting another process through a particular strategy. It is the cycle by which the CPU
determines which processes are ready to be moved to the running state.

The main goal of this is to keep the CPU occupied all the time and to provide the least
amount of response time for every program. To do this, the scheduler should apply the appropriate
rules to swap processes in and out of the CPU.

There are two scheduling categories, namely non-preemptive scheduling and
preemptive scheduling. Non-preemptive scheduling is where a process will not be interrupted
and will execute until it finishes, while preemptive scheduling is where a process can be
interrupted to give way to another process that has a higher priority or a shorter job than the
currently running one.

Scheduling of processes is also done to finish the work/process on time. Below are the
different times defined with respect to a process:

 Arrival Time (AT) – Time at which the process arrives in the ready queue.
 Completion Time – Time at which process completes its execution.
 Burst Time (BT) – Time required by a process for CPU execution.
 Turn Around Time (TaT) – Time difference between completion time and arrival time.
Formula: Turn Around Time = Completion Time – Arrival Time
 Waiting Time (WT) – Time difference between TaT and BT.
Formula: Waiting Time = Turn Around Time – Burst Time
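The two formulas above can be expressed directly in code (a minimal Python sketch; the sample job values are illustrative):

```python
# Turn Around Time and Waiting Time, exactly as defined above.
def turn_around_time(completion, arrival):
    """Turn Around Time = Completion Time - Arrival Time."""
    return completion - arrival

def waiting_time(tat, burst):
    """Waiting Time = Turn Around Time - Burst Time."""
    return tat - burst

# Illustrative job: arrives at time 1, burst of 4, completes at time 9.
tat = turn_around_time(9, 1)   # 8
wt = waiting_time(tat, 4)      # 4
```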

Lesson 7: Process Scheduling Algorithms


A process scheduler uses various algorithms to execute processes on the CPU. These
scheduling algorithms are divided into two categories mentioned above. These algorithms are as
follows:

Table 3.2 Scheduling Algorithms

Non-Preemptive Preemptive
First Come First Serve (FCFS) Round Robin
Shortest Job First (SJF) Shortest Remaining Time First (SRTF)
Non-Preemptive Priority Preemptive Priority

Two additional algorithms are Multi-Level Queue (MLQ) and Multi-Level Feedback
Queue (MLFQ), each of which combines two or more algorithms.

Objectives of Process Scheduling Algorithms


The following are the objectives of these scheduling algorithms:

 Maximize CPU utilization [Keep CPU as busy as possible]


 Fair allocation of CPU
 Maximize throughput [Number of processes that complete their execution per time unit]


 Minimize turnaround time, waiting time, and response time.

Scheduling Algorithms
1. First Come First Serve (FCFS) – The simplest scheduling algorithm, which schedules
processes according to their arrival times. The first come first serve scheduling algorithm
states that the process that requests the CPU first is allocated the CPU first. It is
implemented using a First In First Out (FIFO) queue. When a process enters the ready
queue, its PCB is linked onto the tail of the queue. When the CPU is free, it is allocated
to the process at the head of the queue, and the running process is then removed from
the queue. To put it another way, imagine you are in a fast food chain: when you order
food at the cashier, you must fall in line, and whoever comes first in line is served first.

Consider the following table:


Job Arrival Time Burst Time
J1 0 5
J2 1 4
J3 3 3
J4 5 6

Step 1: Draw a timeline for you to know the arrangement of jobs in chronological order.
J1 J2 J3 J4
0 1 3 5
Step 2: Create a Gantt chart by following the arrangement of the jobs and taking their
respective burst times.
The first job to come is J1, which arrived at time 0 with a burst time of 5. Therefore the
Gantt chart will be
J1
0 5

Next job to come is J2 with a burst time of 4.


J1 J2
0 5 9

Next job to come is J3 with a burst time of 3.


J1 J2 J3
0 5 9 12

Then, the last job to come is J4 with a burst time of 6.


J1 J2 J3 J4
0 5 9 12 18

Now, let’s check the CPU utilization, turnaround time, and waiting time.
To compute for the CPU Utilization, use this formula:

CPU Utilization = (Total Burst Time / End Time) × 100

Take note that the End Time referred to here is the end time of the Gantt
chart. Therefore,

CPU Utilization = (18 / 18) × 100 = 100%


Next, we will get the turnaround and waiting times of each job, as well as the averages,
using the formulas given in the previous lesson.
Job Turnaround Time Waiting Time
J1 5–0=5 5–5=0
J2 9–1=8 8–4=4
J3 12 – 3 = 9 9–3=6
J4 18 – 5 = 13 13 – 6 = 7
Average 8.75 4.25

One of the problems FCFS suffers from is the convoy effect. The convoy effect is a
phenomenon associated with the FCFS algorithm in which the whole operating system
slows down due to a few slow processes.
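The FCFS walkthrough above can be reproduced with a short simulation (a minimal Python sketch using the job table from this example):

```python
def fcfs(jobs):
    """Simulate FCFS. jobs: list of (name, arrival, burst), served in arrival order.
    Returns {name: (completion, turnaround, waiting)}."""
    time = 0
    stats = {}
    for name, arrival, burst in sorted(jobs, key=lambda j: j[1]):
        time = max(time, arrival) + burst   # CPU idles until arrival if needed
        tat = time - arrival                # TaT = Completion Time - Arrival Time
        stats[name] = (time, tat, tat - burst)
    return stats

stats = fcfs([("J1", 0, 5), ("J2", 1, 4), ("J3", 3, 3), ("J4", 5, 6)])
avg_tat = sum(t for _, t, _ in stats.values()) / len(stats)   # 8.75
avg_wt = sum(w for _, _, w in stats.values()) / len(stats)    # 4.25
```

This reproduces the per-job values and averages in the table above.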

2. Shortest Job First (SJF) – The process with the shortest burst time among those that
have already arrived is scheduled first. If two processes have the same burst time, then
FCFS is used to break the tie.

Given the same example as in FCFS: at time 0 only J1 has arrived, so it runs to completion.
At time 5, J3 (burst 3) is the shortest waiting job, followed by J2 (burst 4) and J4 (burst 6).
The Gantt chart using SJF will be:
J1 J3 J2 J4
0 5 8 12 18

The same formulas are used for the CPU utilization, turnaround time, and waiting time.

3. Non-Preemptive Priority (NPP) – The job with the highest priority among those that have
already arrived is processed first. In case of jobs having the same priority, the jobs'
arrival times are considered, so the job that arrived first is processed first.

Using the example in the FCFS, we will add a priority to each job. The highest priority
is 0, and so on.
Job Arrival Time Burst Time Priority
J1 0 5 3
J2 1 4 0
J3 3 3 2
J4 5 6 1

At time 0 only J1 has arrived, so it runs to completion; the remaining jobs are then served
in priority order. The Gantt chart of these jobs using NPP will be:
J1 J2 J4 J3
0 5 9 15 18

4. Preemptive Priority – This is the preemptive version of NPP. Both prioritize the job with
the highest priority, but the preemptive version re-evaluates every time a process enters
the ready state: it compares the priority of the new job against the current job and all
processes inside the ready queue, and executes the process with the highest priority.

Let's still use the given from NPP to create a Gantt chart using Preemptive Priority.
However, we will change the priorities to be able to demonstrate the algorithm.
Job Arrival Time Burst Time Priority
J1 0 5 1
J2 1 4 0
J3 3 3 2
J4 5 6 1


The Gantt chart will be:


J1 J2 J2 J1 J4 J3
0 1 3 5 9 15 18

5. Shortest Remaining Time First (SRTF) – This is the preemptive version of SJF, similar
in operation to Preemptive Priority in that the scheduler re-evaluates every time a new
job enters the ready state. In this algorithm, the job with the shortest burst time (or
remaining time) is processed first.

Using the given in the previous algorithm, the Gantt chart for this algorithm will be:
J1 J2 J2 J3 J1 J4
0 1 3 5 8 12 18

6. Round Robin (RR) – This is the preemptive version of FCFS. Each process is
assigned a fixed time (time quantum / time slice) in a cyclic way. It is designed especially
for time-sharing systems. The ready queue is treated as a circular queue: the CPU
scheduler goes around the ready queue, allocating the CPU to each process for a time
interval of up to one time quantum. To implement Round Robin scheduling, we keep the
ready queue as a FIFO queue of processes. New processes are added to the tail of the
ready queue. The CPU scheduler picks the first process from the ready queue, sets a timer
to interrupt after one time quantum, and dispatches the process. One of two things will
then happen. The process may have a CPU burst of less than one quantum; in this case,
the process itself will release the CPU voluntarily, and the scheduler will then proceed to
the next process in the ready queue. Otherwise, if the CPU burst of the currently running
process is longer than one time quantum, the timer will go off and cause an interrupt to
the operating system. A context switch will be executed, and the process will be put at the
tail of the ready queue. The CPU scheduler will then select the next process in the ready
queue.

To have a better understanding of this algorithm, consider yourself queueing at an ATM
station where the maximum number of transactions allowed is 2. If you have more than 2
transactions, then after your second transaction you must go to the back of the line to give
way to the person behind you.

Using again the given in the previous algorithm, let’s create a Gantt chart using the Round
Robin with a Time Slice of 2.

J1 J2 J1 J3 J2 J4 J1 J3 J4 J4
0 2 4 6 8 10 12 13 14 16 18
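The Round Robin trace above can be reproduced with a short simulation (a minimal Python sketch; the deque stands in for the circular ready queue, and the job data come from the table used throughout this lesson):

```python
from collections import deque

def round_robin(jobs, quantum):
    """jobs: list of (name, arrival, burst). Returns the Gantt chart
    as a list of (name, start, end) slices."""
    jobs = sorted(jobs, key=lambda j: j[1])
    remaining = {name: burst for name, _, burst in jobs}
    ready, chart = deque(), []
    time, i = 0, 0
    while i < len(jobs) or ready:
        if not ready:                       # CPU idles until the next arrival
            time = max(time, jobs[i][1])
        while i < len(jobs) and jobs[i][1] <= time:
            ready.append(jobs[i][0])
            i += 1
        name = ready.popleft()
        run = min(quantum, remaining[name])
        chart.append((name, time, time + run))
        time += run
        remaining[name] -= run
        while i < len(jobs) and jobs[i][1] <= time:  # arrivals during the slice
            ready.append(jobs[i][0])
            i += 1
        if remaining[name] > 0:             # unfinished: back to the tail
            ready.append(name)
    return chart

chart = round_robin([("J1", 0, 5), ("J2", 1, 4), ("J3", 3, 3), ("J4", 5, 6)], 2)
```

Note the convention used here (and in the chart above): processes that arrive during a slice are queued before the preempted process is returned to the tail.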

7. Multilevel Queue (MLQ) – Processes are placed in different queues according to their
priority. Generally, high-priority processes are placed in the top-level queue. Only after
completion of the processes in the top-level queue are the lower-level queued processes
scheduled.

Example: Consider the given table:


Job Arrival Time Burst Time Priority Queue No.


J1 0 5 1 0
J2 0 4 1 1
J3 0 3 2 0
J4 5 6 0 1

As you can see, the Queue No. column is added to the table. That column is connected to the
table below, in which each queue number is assigned a single algorithm and a priority.
Queue No. Algorithm Priority
0 Shortest Job First High Level Queue
1 Non-Preemptive Priority Low Level Queue

Therefore, since Queue No. 0 is the highest-level queue, all jobs under the said queue
will be executed first, followed by the queue number with the next highest level queue,
and so on.

The Gantt chart using MLQ will be:

J3 J1 J4 J2
0 3 8 14 18

Following the algorithm of Queue No. 0, J3 was executed first, followed by J1. Since
there are no jobs left in Queue No. 0, the next queue number is then executed. Hence,
J4 is executed first, followed by J2, following the algorithm of Queue No. 1.

8. Multi-Level Feedback Queue (MLFQ) – It allows processes to move between queues.
The idea is to separate processes according to the characteristics of their CPU bursts: if
a process uses too much CPU time, it is moved to a lower-priority queue. This is similar
to MLQ; however, here processes can move between the queues. MLFQ keeps analyzing
the behavior (execution time) of processes and changes their priority accordingly.
The diagram and the explanation below will make you understand how MLFQ works.

Figure 3.12 MLFQ process flow

Let us suppose that Queue 1 and 2 follow Round Robin with a time slice of 4 and
8, respectively, and Queue 3 follows FCFS. The implementation of MLFQ will be:

1. When a process starts executing then it enters Queue 1.


2. In Queue 1, the process executes for 4 units; if it completes its burst time,
the CPU becomes ready for the next process.


3. If a process completes the 4 units and still has remaining burst time, its
priority is reduced and it is shifted to Queue 2.
4. Steps 2 and 3 above also hold for Queue 2, but with a time slice of 8
instead of 4. If a process still has remaining burst time after executing
for 8 units, it is shifted to Queue 3.
5. In the last queue, processes are scheduled in FCFS order.
6. A process in a lower-priority queue can only execute when all higher-priority
queues are empty.
7. A process running in a lower-priority queue is interrupted by a process
arriving in a higher-priority queue.

Example: Consider the given tables:


Job Arrival Time Burst Time
J1 0 5
J2 1 4
J3 2 3
J4 4 6
J5 5 5
J6 6 4

Queue No. Algorithm Priority


0 Round Robin (q = 2) High Level Queue
1 Round Robin (q = 3) Medium Level Queue
2 FCFS Low Level Queue

The Gantt chart using MLFQ is:

J1 J2 J3 J4 J5 J6 J1 J2 J3 J4 J5 J6 J4
0 2 4 6 8 10 12 15 17 18 21 24 26 27
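The MLFQ trace above can be reproduced with a simplified simulation (a Python sketch; it ignores cross-queue preemption, which is sufficient for this example because every job arrives while Queue 0 is still busy):

```python
from collections import deque

def mlfq(jobs, quanta):
    """Simplified MLFQ sketch. jobs: list of (name, arrival, burst).
    quanta: time slice per queue, e.g. [2, 3, None] (None = FCFS,
    run to completion). Returns the Gantt chart as (name, start, end)."""
    queues = [deque() for _ in quanta]
    remaining = {name: burst for name, _, burst in jobs}
    arrivals = deque(sorted(jobs, key=lambda j: j[1]))
    chart, time = [], 0

    def admit(now):
        # New processes always enter the top queue.
        while arrivals and arrivals[0][1] <= now:
            queues[0].append(arrivals.popleft()[0])

    admit(time)
    while any(queues) or arrivals:
        if not any(queues):                 # idle until the next arrival
            time = arrivals[0][1]
            admit(time)
        level = next(i for i, q in enumerate(queues) if q)
        name = queues[level].popleft()
        q = quanta[level]
        run = remaining[name] if q is None else min(q, remaining[name])
        chart.append((name, time, time + run))
        time += run
        remaining[name] -= run
        admit(time)
        if remaining[name] > 0:             # demote to the next lower queue
            queues[min(level + 1, len(queues) - 1)].append(name)
    return chart

chart = mlfq([("J1", 0, 5), ("J2", 1, 4), ("J3", 2, 3),
              ("J4", 4, 6), ("J5", 5, 5), ("J6", 6, 4)], [2, 3, None])
```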

Reading Assignment
 [Link]
 [Link]
 [Link]
 [Link]
 [Link]
 [Link]
 [Link]

References / Sources / Bibliography


 Nutt, G. (2009), Operating Systems: A Modern Perspective, 3rd Edition, Pearson Addison-
Wesley Inc, Boston , Massachusetts.

 Silberschatz, A., Galvin, P., and Gagne, G. (2018). “Operating System Concepts”, 10th
ed. John Wiley and Sons Inc.


 Stallings, William. (2018). “Operating Systems: Internals and Design Principles”, 9th ed.
Pearson Education Inc.

 Harris, J. Archer. “Schaum’s Outline of Theory and Problems of Operating Systems”.
McGraw Hill Companies Inc., 2002.

 [Link]
20system/operating%[Link]

 [Link]

 [Link]

 [Link]
ion_process_creation_termination.htm

Online Videos to Watch


 [Link]
 [Link]
 [Link]
 [Link]


Exercises
I. Essay
1. To fully prove that you have actually understood how process scheduling works
and runs, provide two or more instances/examples that can be compared to
process scheduling queues aside from those stated in this lesson.

2. Answer the following questions based on your understanding:


a. How do the three important scheduling queues work?
b. What are the differences between the three types of schedulers? What
is each of these schedulers for?

II. Make a Gantt chart of each algorithm using the given table below. Find the CPU
Utilization, Turn Around Time and Waiting Time (per job and average).

Job Arrival Time Burst Time Priority Queue No.


J1 0 6 0 0
J2 4 8 0 1
J3 0 3 2 0
J4 2 10 1 1
J5 1 5 3 1
J6 3 3 4 0
J7 5 8 3 0
J8 8 3 1 1
For Round Robin algorithm, Quantum Time = 3

III. Using the given table above, make a Gantt chart for Multilevel Queue and Multi-level
Feedback Queue algorithms. Please refer to the table below for the job levels.

Queue No. Algorithm Priority


0 Non-preemptive Algorithm High Level Queue
1 Shortest Job First Low Level Queue

IV. Using the same given, make a Gantt chart using the Multilevel Feedback Queue with
the given table below.
Queue No. Algorithm Priority
0 Round Robin (q = 3) High Level Queue
1 Preemptive Priority Medium Level Queue
2 FCFS Low Level Queue


MODULE 4: STORAGE MANAGEMENT


Overview
In this module, we will discuss the structure of secondary storage. This module describes the
concepts of disk scheduling as well as its scheduling algorithms, which schedule the order of disk
I/Os to maximize performance.

Objectives
After successful completion of this module, the student should be able to:
 Learn the hardware activities involved in the retrieval/storage of data on a direct
access storage device; and
 Compute the different Disk Scheduling algorithms in order to make contrasts and
comparisons of performance.

Storage Management
Storage management refers to the management of the data storage equipment used to
store data generated by users and computers. Hence, it is an administrator's tool, or a collection
of processes, used to keep the data and storage equipment secure.

Storage management is a mechanism for users to maximize the usage of storage
resources and maintain data integrity on whatever media the data reside, and the storage
management category typically includes various types of sub-categories covering issues such
as security, virtualization, and more.

Lesson 1: Disk Scheduling Concepts


Disk scheduling is done by operating systems to schedule I/O requests arriving for the disk.
Disk scheduling is also known as I/O scheduling.
Disk scheduling is important because:
 Multiple I/O requests may arrive from different processes, and only one I/O request can
be served at a time by the disk controller. Thus, the other I/O requests need to wait in
the waiting queue and need to be scheduled.
 Two or more requests may be far from each other, which can result in greater disk arm
movement.
 Hard drives are one of the slowest parts of the computer system and thus need to be
accessed in an efficient manner.

There are many disk scheduling algorithms, but before discussing them let's have a quick
look at some of the important terms:
 Seek Time: Seek time is the time taken to move the disk arm to the specified track where
the data is to be read or written. So, the disk scheduling algorithm that gives the minimum
average seek time is better.
 Rotational Latency: Rotational latency is the time taken by the desired sector of the disk
to rotate into a position where it can be accessed by the read/write heads. So the disk
scheduling algorithm that gives the minimum rotational latency is better.

4.0 STORAGE MANAGEMENT PAGE | 44



 Transfer Time: Transfer time is the time to transfer the data. It depends on the rotating
speed of the disk and the number of bytes to be transferred.
 Disk Access Time: Disk access time is computed as:

Disk Access Time = Seek Time + Rotational Delay + Controller Overhead + Queuing Delay

 Disk Response Time: Response time is the average time a request spends waiting to
perform its I/O operation. Average response time is the average over all requests.
Variance response time is a measure of how individual requests are serviced with
respect to the average response time. So, the disk scheduling algorithm that gives the
minimum variance response time is better.
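As a quick illustration of the disk access time formula above (the component values below are hypothetical, chosen only for the arithmetic):

```python
# Hypothetical component times in milliseconds (for illustration only).
seek_time = 5.0
rotational_delay = 4.2       # on average, half a rotation
controller_overhead = 0.5
queuing_delay = 1.0

# Disk Access Time = Seek Time + Rotational Delay + Controller Overhead + Queuing Delay
disk_access_time = seek_time + rotational_delay + controller_overhead + queuing_delay
# 10.7 ms
```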

Lesson 2: Disk Scheduling Algorithms


The purpose of disk scheduling algorithms is to reduce the total seek time. Various disk
scheduling algorithms are:

First Come First Serve (FCFS) Scheduling Algorithm


The concept of this algorithm is the same as FCFS in CPU scheduling; however, it differs
in how the idea is applied. FCFS is the simplest disk scheduling algorithm, which services disk
locations in the arrival order of the requests. Since it follows the order of arrival, it causes wild
swings from the innermost to the outermost tracks of the disk, or vice versa. The farther the
location to be serviced by the read/write head is from its current location, the higher the seek
time will be.

Example: We illustrate the algorithms with a request queue (0 – 199) with the order of requests 98, 183,
37, 122, 14, 124, 65, 67. The head pointer is at 53.

Figure 4.1: Illustration of the head movement in FCFS


So, the total head movement will be:
THM = (98–53)+(183–98)+(183–37)+(122–37)+(122–14)+(124–14)+(124–65)+(67–65)
= 45 + 85 + 146 + 85 + 108 + 110 + 59 + 2
THM = 640

For instance, if it takes 3ms per cylinder move to seek the request, we shall get the seek
time. To get the seek time, it will be:
Seek Time = THM x Seek Rate


= 640 x 3ms
= 1920ms
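The THM and seek-time arithmetic above can be checked with a few lines of Python (a minimal sketch):

```python
def total_head_movement(head, requests):
    """Sum of absolute track-to-track distances in service order."""
    total = 0
    for track in requests:
        total += abs(track - head)
        head = track
    return total

queue = [98, 183, 37, 122, 14, 124, 65, 67]
thm = total_head_movement(53, queue)   # 640 for FCFS (service in arrival order)
seek_time = thm * 3                    # 1920 ms at 3 ms per cylinder
```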

Shortest Seek Time First (SSTF) Scheduling Algorithm


This algorithm is founded on the idea that the read/write head should proceed to the track
that is closest to its current position. This scheme enables the R/W head to service a particular
location immediately after just a short travel distance. This may sound very efficient, but it
actually has a drawback.

This algorithm selects the disk I/O request which requires the least disk arm movement
from its current position, regardless of direction. It allows the head to move to the closest track
in the service queue. SSTF scheduling is a form of SJF scheduling and may cause starvation of
some requests.

Using the same given in the previous algorithm, the chart would be

Figure 4.2: Illustration shows head movement in SSTF

So, the total head movement will be:


= (65-53) + (67-65) + (67-37) + (37-14) + (98-14) + (122-98) + (124-122) + (183-124)
= 12 + 2 + 30 + 23 + 84 + 24 + 2 + 59
= 236
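The SSTF ordering and its THM can be computed with a short greedy sketch in Python (the request queue and head position are from the running example):

```python
def sstf_order(head, requests):
    """Greedy SSTF: repeatedly service the closest pending track."""
    pending, order = list(requests), []
    while pending:
        closest = min(pending, key=lambda t: abs(t - head))
        pending.remove(closest)
        order.append(closest)
        head = closest
    return order

order = sstf_order(53, [98, 183, 37, 122, 14, 124, 65, 67])
# Total head movement along that order, starting at 53.
thm = sum(abs(b - a) for a, b in zip([53] + order, order))   # 236
```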

SCAN Scheduling Algorithm


This algorithm is performed by moving the access arm back and forth between the
innermost and outermost tracks. As it scans the tracks from end to end, it sweeps up all the
requests found in the direction it is headed. This scheme ensures that all tracks, whether in the
outermost, middle, or innermost location, will be traversed by the access arm, thereby finding all
the requests. This algorithm is also known as the Elevator Algorithm.
The head starts from one end of the disk and moves toward the other end, servicing all
the requests in between. After reaching the other end, the head reverses its direction and moves
toward the starting end, servicing all the requests in between.

In this algorithm, we need another given to be able to create a chart: either the previous
position of the read/write head or a direction such as upward or downward. Using the same
example from the previous algorithms, we will add 40 as the read/write head position prior to
cylinder 53, so the head is moving upward. The chart in this algorithm would be:


Figure 4.3: Illustration shows head movement in SCAN

So, the total head movement will be:


= (65-53)+(67-65)+(98-67)+(122-98)+(124-122)+(183-124)+(199-183)+(199-37)+(37-14)
= 331
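Under the same assumptions (head at 53 moving upward, disk ending at track 199), the SCAN total can be computed directly (a Python sketch):

```python
def scan_thm(head, requests, max_track=199):
    """THM for SCAN when the head is moving toward higher tracks:
    sweep up to the disk edge, then reverse down to the lowest request."""
    below = [t for t in requests if t < head]
    thm = max_track - head                 # up-sweep to the disk edge
    if below:
        thm += max_track - min(below)      # reverse down to the lowest request
    return thm

thm = scan_thm(53, [98, 183, 37, 122, 14, 124, 65, 67])   # 331
```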

LOOK Scheduling Algorithm


LOOK is similar to the SCAN algorithm except for the end-to-end reach of each sweep.
Instead, the read/write head only goes as far as the farthest location in need of servicing. Since
it is also a directional algorithm, as soon as it is done with the last request in one direction, it
then sweeps in the other direction.

Using the given example from the previous algorithm with same directions from SCAN,
the chart in this algorithm would be:

Figure 4.4: Illustration shows head movement in LOOK

So, the total head movement will be:


= (65-53)+(67-65)+(98-67)+(122-98)+(124-122)+(183-124)+(183-37)+(37-14)
= 299
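The LOOK total can be computed the same way (a Python sketch under the same assumptions: head at 53, moving upward):

```python
def look_thm(head, requests):
    """THM for LOOK when the head is moving toward higher tracks:
    sweep up only to the highest request, then back down to the lowest."""
    above = [t for t in requests if t >= head]
    below = [t for t in requests if t < head]
    thm = (max(above) - head) if above else 0
    if below:
        top = max(above) if above else head
        thm += top - min(below)            # reverse down to the lowest request
    return thm

thm = look_thm(53, [98, 183, 37, 122, 14, 124, 65, 67])   # 299
```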

Circular SCAN (C-SCAN) Scheduling Algorithm


A modified version of the SCAN, C-SCAN sweeps the disk from end-to-end, but as soon
as it reaches one of the end tracks it then moves to the other end track without servicing any
requesting location. As soon as it reaches the other end track, it then starts to look for and grant
service requests. This algorithm improves the unfair situation of the end tracks against the middle
tracks. Notice that in this algorithm an alpha symbol was used to represent the dash line. This
return sweep is sometimes given a numerical value which is included in the computation of the
THM. The alpha symbolizes a reset of the access arm to the starting end of the disk track. An


analogy for this concept is the carriage return lever of a typewriter. Once it is pulled to the
rightmost direction, it resets the typing point to the leftmost margin of the paper. A typist is not
supposed to type during the movement of the carriage return lever because the line spacing is
being adjusted. The frequent use of this lever consumes time, similar to the time consumed when
the access arm is reset to its starting position.

Using the given example and direction from the previous algorithm, and the alpha (α) is
negligible, the chart in this algorithm would be:

Figure 4.5: Illustration shows head movement in C-SCAN

So, the THM will be:


= (65-53)+(67-65)+(98-67)+(122-98)+(124-122)+(183-124)+(199-183)+(14-0)+(37-14)
= 183+ α (since alpha has no value or is negligible)

If α has assigned value, it should be added to the THM. However, in computing the seek
time, the α must be discarded.
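The C-SCAN total (excluding the return sweep α) can be computed as follows (a Python sketch under the same assumptions: head at 53, moving upward, tracks 0 – 199):

```python
def c_scan_thm(head, requests, max_track=199):
    """THM for C-SCAN (upward): sweep to the disk edge, then a return
    sweep (alpha, not counted here) to track 0, then up to the highest
    request that lies below the starting head position."""
    below = [t for t in requests if t < head]
    thm = max_track - head                 # up-sweep to the disk edge
    if below:
        thm += max(below) - 0              # from track 0 up to the last request
    return thm

thm = c_scan_thm(53, [98, 183, 37, 122, 14, 124, 65, 67])   # 183 (+ alpha)
```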

C-LOOK Scheduling Algorithm


Circular LOOK is like C-SCAN, which uses a return sweep before processing a set of disk
requests. Like the LOOK algorithm, C-LOOK does not reach the end of the tracks unless there is
a request, either a read or a write, at that disk location.

Using the given example and direction from the previous algorithm, and the alpha (α) is
negligible, the chart in this algorithm would be:

Figure 4.6: Illustration shows head movement in C-LOOK

So, the THM will be:


= (65-53)+(67-65)+(98-67)+(122-98)+(124-122)+(183-124)+(37-14)
= 153+ α


Reading Assignment
 E-books
 [Link]
 [Link]
 [Link]

 YouTube Tutorials;
 [Link]
 [Link]
 [Link]
 [Link]
 [Link]
 [Link]

References / Sources

 Nutt, G. (2009), Operating Systems: A Modern Perspective, 3rd Edition, Pearson


Addison-Wesley Inc, Boston , Massachusetts.

 Silberschatz, A. Galvin, P. and Gagne, G. (2018). “Operating Systems Concepts”, 10th


Ed. John Wiley and Sons Inc.

 Stallings, William. (2018). “Operating Systems: Internals and Design Principles”, 9th ed.
Pearson Education Inc.

 Harris, J. Archer. “Schaum’s Outline of Theory and Problems of Operating Systems”.
McGraw Hill Companies Inc., 2002.

 Albano, GMA., Pastrana, AG. “Fundamentals of Operating Systems”, A&C Printers,


ISBN 978-971-94635-0-4,2009


EXERCISES / WRITTEN ASSIGNMENT


1. Suppose that a disk drive has 5,000 cylinders, numbered 0 to 4,999. The drive is currently
serving a request at cylinder 160 and has previously serviced cylinder 80. The queue of
pending requests is as follows:
94 1455 915 1787 955 1505 1030 1753 1905

Starting from the current head position, what is the total distance traversed by the disk
arm in satisfying all the requests using the ff. disk scheduling algorithm: FCFS, SSTF,
SCAN,C-SCAN, LOOK and C-LOOK.

2. The head of a moving-head disk, with 200 tracks numbered 0 to 199, is currently serving a
request at track 133 and has just finished a request at track 123. If the rate of movement
from one track to another is 6, what are the total head movement and seek time for each
scheduling algorithm?
The disk requests are as follows: 90 160 97 190 98 153 110 180 140


MODULE 5: MEMORY MANAGEMENT


Overview
In this module, we discuss various ways to manage memory. The algorithms for memory
management range from a primitive bare-machine approach to paging and segmentation
strategies. Every approach has its own advantages and drawbacks. Selection of a memory-
management method for a specific system depends on many factors, especially on the hardware
design of the system. As we will see, many algorithms require hardware support, which leads to
close integration of hardware and operating system memory management.

Objectives
After successful completion of this module, the student should be able to:
 Identify the responsibilities of the memory manager;
 Identify the different memory management strategies;
 Differentiate the principles of the memory management strategies presented; and
 Solve problems using the different strategies in memory management.

Introduction
Aside from managing processes, the operating system must efficiently manage the
primary storage of the computer. The component of the operating system responsible for this
function is the memory manager. As a review, module 2 has enumerated the main responsibilities
of a memory manager as the component that:
 manages the memory;
 allocates and de-allocates memory space needed;
 keeps track of the memory spaces needed by active processes and by the operating
system itself;
 keeps track of which part of memory are currently being used and by whom;
 brings in and swaps out blocks of data from the secondary storage; and
 decides which processes are to be loaded into memory when memory spaces become
available.

In a uniprogramming environment, the primary storage is divided into two regions: the
resident monitor, where the operating system resides, and the user memory, where the current
program is stored. The memory manager is not too burdened in this scenario, unlike in a
multiprogramming environment, where the user memory is further partitioned to
accommodate several processes executing simultaneously in the system.

The question is, why share memory between processes? Why allow multiple processes to
reside in physical memory at the same time? To multiprogram the processor, time-share the
system, and overlap computation with I/O; in short, to maximize system performance.

The tasks of allocating spaces, remembering where these processes were stored, checking
that processes do not interfere with one another, and taking note of which space is unused call
for efficient memory management, increased utilization of space, and increased speed in
data access. It is in this context that we need to study the different memory management
strategies available. The memory management strategies to be discussed are:


 Multiple Fixed Partitions


 Multiple Variable Partitions
 Buddy System
 Swapping
 Overlay
 Simple Paging
 Simple Segmentation
 Segmentation with Paging

Terminologies

Hole – a block of available (unused) memory.
Region – a partition or division of the user memory.
Contiguous allocation – the whole process is stored in adjacent locations in memory.
Non-contiguous allocation – the process may be subdivided into small portions to fit into holes
found in different locations inside the primary memory.

Multiple Fixed Partitions


Multiple fixed partitions divide the user memory into several regions of fixed sizes. This
technique is simple to implement, with the following trade-offs:
1) minimal operating system overhead, but it suffers from internal fragmentation; and
2) the maximum number of processes accepted is limited by the number of partitions.

A user memory with 12K, 4K, 6K, 3K, 10K, and 15K partitions means that a user memory of size
50K is divided into 6 regions as shown below.

12K 4K 6K 3K 10K 15K

If several processes are to enter primary storage, the memory manager must decide
which region to allocate to each process. Several allocation strategies are available, but we
will only tackle the more popular ones shown in Table 5.1: First Fit, Next Fit, Best Fit, and
Worst Fit.

The allocation algorithms in Table 5.1 can leave spaces that cannot be used because a
process may be too big or too small to fit into a given partition. This wasted space is called
fragmentation. There are two types of fragmentation, namely internal fragmentation and
external fragmentation. Internal fragmentation is the wasted, unused space within an allocated
partition. External fragmentation, on the other hand, is an entire partition left unused because it
is too small to fit any incoming process. Bear in mind that external fragmentation is counted
only when there is an incoming process that does not fit any free region.


Table 5.1: Popular Allocation Algorithms

First Fit
Description: Search for the first hole which is big enough to fit the process. The search always starts with the first region.
Advantages: No need to search a lot, so less overhead; a simple algorithm is used; fast search.
Disadvantages: Highly ineffective; small holes tend to accumulate in the first few regions.

Next Fit
Description: Search for the first hole which is big enough to fit the process. The search starts from the last hole allocated.
Advantages: Better than First Fit; small holes are not accumulated in the first few regions.

Best Fit
Description: Search for the smallest unused hole available in which the process will fit.
Advantages: Produces the smallest internal fragmentation.
Disadvantages: A more complex algorithm is used; higher overhead because the whole user memory must be searched.

Worst Fit
Description: Search for the largest unused hole available in which the process will fit.
Disadvantages: Produces the largest internal fragmentation; high overhead.

One solution that reclaims this wasted space is memory compaction, where free spaces
are merged together to create a bigger space that may now fit a process. Compaction is often
called “burping the memory”.
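The compaction ("burping") step described above can be sketched as a small simulation. This is an illustrative sketch only, assuming a hypothetical block layout (the names, addresses, and function are my own, not part of the module): allocated blocks slide toward address 0 so that all free space merges into one hole.

```python
# Minimal sketch of memory compaction: allocated blocks are slid toward
# address 0 so that all free space merges into one hole. Sizes are in KB;
# the layout below is a made-up example.

def compact(blocks, total):
    """blocks: list of (name, start, size). Returns new layout plus one hole."""
    addr = 0
    moved = []
    for name, _start, size in sorted(blocks, key=lambda b: b[1]):
        moved.append((name, addr, size))   # relocate block to next free address
        addr += size
    hole = total - addr                    # single merged hole at the top
    return moved, hole

layout = [("A", 0, 9), ("C", 12, 3), ("B", 22, 7)]   # holes at 9-12 and 15-22
moved, hole = compact(layout, 32)
print(moved, hole)   # blocks now start at 0, 9, 12; one 13K hole remains
```

After compaction, the two scattered holes (3K and 7K, plus the 3K tail) become a single 13K hole that can fit a larger incoming process.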

These strategies are easier to understand through an example:

Example:
Given a job pool:
A B C D E F G H
9k 7k 3k 8k 11k 5k 2k 13k

Use the partitions:


12K 4K 6k 3K 10K 15K

1. In what region will each of the jobs be allocated using:


a. First Fit
b. Next Fit
c. Best Fit
d. Worst Fit

2. What is the Internal Fragmentation, External Fragmentation and % Memory Utilization using
the allocation strategies above?


Solution:

a. First Fit Allocation Algorithm


1. The region occupied by each job is presented below:

Region Size    12   4   6   3   10   15
Job             A   C   F   G    B    D
Wasted space    3   1   1   1    3    7

Jobs not allocated = E and H


2. The fragmentation and memory utilization computed are as follows:
Internal Fragmentation (IF) = 3+1+1+1+3+7 = 16
External Fragmentation (EF) = 0
Total Fragmentation (TF) = 16 + 0 = 16
% Memory Utilization = (Total Memory − Total Fragmentation) / Total Memory × 100 = (50 − 16) / 50 × 100 = 68%

b. Next Fit Allocation Algorithm


1. The regions occupied by each job are presented below:

Region Size    12   4   6   3   10   15
Job             A   -   F   G    B    C
Wasted space    3       1   1    3   12

Jobs not allocated = D, E and H


2. The fragmentation and memory utilization computed are as follows:
Internal Fragmentation (IF) = 3+1+1+3+12 = 20
External Fragmentation (EF) = 4
Total Fragmentation (TF) = 20 + 4 = 24
% Memory Utilization = (50 − 24) / 50 × 100 = 52%
c. Best Fit Allocation Algorithm

1. The regions occupied by each job are presented below:

Region Size    12   4   6   3   10   15
Job             B   G   F   C    A    D
Wasted space    5   2   1   0    1    7

Jobs not allocated = E and H

2. The fragmentation and memory utilization computed are as follows:


Internal Fragmentation (IF) = 5+2+1+0+1+7 = 16
External Fragmentation (EF) = 0
Total Fragmentation (TF) = 16 + 0 = 16
% Memory Utilization = (50 − 16) / 50 × 100 = 68%

d. Worst Fit Allocation Algorithm


1. The regions occupied by each job are presented below:

Region Size    12   4   6   3   10   15
Job             B   G   F   -    C    A
Wasted space    5   2   1        7    6

Jobs not allocated = D, E and H


2. The fragmentation and memory utilization computed are as follows:
Internal Fragmentation (IF) = 5+2+1+7+6 = 21
External Fragmentation (EF) = 3
Total Fragmentation (TF) = 21 + 3 = 24
% Memory Utilization = (50 − 24) / 50 × 100 = 52%
You might be wondering when the other jobs will be allocated. The data above are not
sufficient to determine this; the problems merely show how the different allocation
algorithms work.
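The four placement strategies can also be sketched as a small simulation. This is an illustrative sketch, not code from the module (the function and the job-pool representation are my own); under those assumptions it reproduces the placements and internal fragmentation of the worked example above.

```python
# Sketch of the four placement algorithms over multiple fixed partitions.
# Regions and job sizes (in KB) follow the example; each job is placed in at
# most one region, and jobs that fit nowhere stay in the job pool.

def allocate(jobs, regions, strategy):
    """Return {job name: region index} for first/next/best/worst fit."""
    free = list(range(len(regions)))      # indices of unallocated regions
    placement = {}
    last = 0                              # starting point for next fit
    for name, size in jobs:
        fits = [i for i in free if regions[i] >= size]
        if not fits:
            continue                      # job cannot be allocated yet
        if strategy == "first":
            pick = min(fits)              # lowest-numbered region that fits
        elif strategy == "next":
            # scan regions in circular order, starting after the last allocation
            pick = min(fits, key=lambda i: (i - last) % len(regions))
            last = pick + 1
        elif strategy == "best":
            pick = min(fits, key=lambda i: regions[i])   # smallest usable hole
        else:                             # worst fit
            pick = max(fits, key=lambda i: regions[i])   # largest usable hole
        placement[name] = pick
        free.remove(pick)
    return placement

regions = [12, 4, 6, 3, 10, 15]
jobs = [("A", 9), ("B", 7), ("C", 3), ("D", 8),
        ("E", 11), ("F", 5), ("G", 2), ("H", 13)]

first = allocate(jobs, regions, "first")
sizes = dict(jobs)
# internal fragmentation: unused space inside the allocated regions
if_frag = sum(regions[r] - sizes[j] for j, r in first.items())
print(first)       # A->12K, C->4K, F->6K, G->3K, B->10K, D->15K regions
print(if_frag)     # 16, matching the worked example
```

Changing the `strategy` argument reproduces the other three result tables (e.g. best fit places A in the 10K region and leaves region 3K with zero waste for C).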

Now, a complete example is given below.

Given a job pool and a user memory partitioned as 12K, 6K, 6K, 6K:

Job    Arrival Time    Memory Size    Burst Time
A           0               9              6
B           1               5              8
C           2               8              4
D           3               7              5

Assume:
CPU Scheduling Algorithm – First Come First Served
Memory Allocation Strategy – First Fit
Memory Management Strategy – Multiple Fixed Partition
Compute IF, EF and %MU


At time = 0
Region Size 12 6 6 6
A
wasted space 3

Jobs not allocated = none


Internal Fragmentation (IF) = 3
External Fragmentation (EF) = 0
Total Fragmentation (TF) = 3
% Memory Utilization (MU) = (30 − 3) / 30 × 100 = 90%

At time = 1
Region Size 12 6 6 6
A B
wasted space 3 1

Jobs not allocated = none


Internal Fragmentation (IF) = 3 + 1 = 4
External Fragmentation (EF) = 0
Total Fragmentation (TF) = 4
% Memory Utilization (MU) = (30 − 4) / 30 × 100 = 87%

At time = 2
Region Size 12 6 6 6
A B
wasted space 3 1

Job/s not allocated = C


Internal Fragmentation (IF) = 3 + 1 = 4
External Fragmentation (EF) = 12
Total Fragmentation (TF) = 16
% Memory Utilization (MU) = (30 − 16) / 30 × 100 = 47%

At time = 3
Region Size 12 6 6 6
A B
wasted space 3 1

Job/s not allocated = C and D


Internal Fragmentation (IF) = 3 + 1 = 4
External Fragmentation (EF) = 12
Total Fragmentation (TF) = 16
% Memory Utilization (MU) = (30 − 16) / 30 × 100 = 47%


Our previous knowledge of CPU scheduling comes in handy for knowing when jobs A and B
will finish execution. Based on the Gantt chart using the FCFS CPU scheduling algorithm,
process A will finish at time = 6 and release its allocation.

Gantt Chart
A
0 6

At time = 6
Region Size 12 6 6 6
C B
wasted space 4 1

A releases its memory space


Job/s not allocated = D (does not fit any region)
Internal Fragmentation (IF) = 4 + 1 = 5
External Fragmentation (EF) = 12
Total Fragmentation (TF) = 17
% Memory Utilization (MU) = (30 − 17) / 30 × 100 = 43%

Gantt Chart
A B
0 6 14

At time = 14
Region Size 12 6 6 6
C
wasted space 4

B releases its memory space


Job/s not allocated = D (does not fit any region)
Internal Fragmentation (IF) = 4
External Fragmentation (EF) = 18
Total Fragmentation (TF) = 4 + 18 = 22
% Memory Utilization (MU) = (30 − 22) / 30 × 100 = 27%

Gantt Chart
A B C
0 6 14 18

At time = 18
Region Size 12 6 6 6
D
wasted space 5

C releases its memory space


Job/s not allocated = No incoming jobs


Internal Fragmentation (IF) = 5
External Fragmentation (EF) = 0
Total Fragmentation (TF) = 5 + 0 = 5
% Memory Utilization (MU) = (30 − 5) / 30 × 100 = 83%
Gantt Chart
A B C D
0 6 14 18 23

At time = 23, all jobs finished processing and memory is empty.

Multiple Variable Partitions

Multiple variable partitioning, also known as dynamic partitioning, views the memory as one big
hole and allocates memory based on the size of each process. The partition size is variable,
thus eliminating internal fragmentation. This technique is more efficient than multiple fixed
partitioning, but the memory needs to be "burped" (compacted) from time to time to counter
external fragmentation.

Given a job pool:


Job    Arrival Time    Memory Size    Burst Time
A           0               9              6
B           1               5              8
C           2               8              4
D           3               7              5

Assume:
CPU Scheduling Algorithm – Round Robin with q = 5
Memory Allocation Strategy – First Fit
Memory Management Strategy – Multiple Variable Partition
User memory spans addresses 25 to 45.
Compute IF, EF and %MU

At time = 0
0 25 34 45
A
Jobs not allocated = none
Internal Fragmentation (IF) = 0
External Fragmentation (EF) = 0
Total Fragmentation (TF) = 0
% Memory Utilization (MU) = (20 − 0) / 20 × 100 = 100%


At time = 1
0 25 34 39 45
A B
Jobs not allocated = none
Internal Fragmentation (IF) = 0
External Fragmentation (EF) = 0
Total Fragmentation (TF) = 0
% Memory Utilization (MU) = (20 − 0) / 20 × 100 = 100%

At time = 2
0 25 34 39 45
A B
Jobs not allocated = C
Internal Fragmentation (IF) = 0
External Fragmentation (EF) = 45 – 39 = 6
Total Fragmentation (TF) = 6
% Memory Utilization (MU) = (20 − 6) / 20 × 100 = 70%

Gantt Chart: Round Robin with quantum = 5


A B A B C D
0 5 10 11 14 18 23

At time = 11
0 25 33 34 39 45
C B
A releases its memory space
Job not allocated = D
Internal Fragmentation (IF) = 0
External Fragmentation (EF) = 6 + 1 = 7
Total Fragmentation (TF) = 7
% Memory Utilization (MU) = (20 − 7) / 20 × 100 = 65%

At time = 14
0 25 33 40 45
C D
B releases its memory space
Jobs not allocated = none
Internal Fragmentation (IF) = 0
External Fragmentation (EF) = 0
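A dynamic-partition allocator like the one traced above can be sketched with a list of free holes. This is a minimal illustration under my own assumptions (the hole-list representation and function names are not prescribed by the module); it reproduces the addresses from the example, where user memory spans 25 to 45.

```python
# First-fit allocation over a dynamic (variable) partition, modeled as a
# sorted list of free holes [start, end). Illustrative sketch only.

def first_fit(holes, size):
    """Allocate `size` from the first hole big enough; return start or None."""
    for i, (s, e) in enumerate(holes):
        if e - s >= size:
            if e - s == size:
                holes.pop(i)              # hole consumed exactly
            else:
                holes[i] = (s + size, e)  # shrink the hole from the front
            return s
    return None

def release(holes, start, size):
    """Return a block to the hole list, merging adjacent holes."""
    holes.append((start, start + size))
    holes.sort()
    merged = [holes[0]]
    for s, e in holes[1:]:
        ls, le = merged[-1]
        if s == le:
            merged[-1] = (ls, e)          # coalesce adjacent holes
        else:
            merged.append((s, e))
    holes[:] = merged

holes = [(25, 45)]
a = first_fit(holes, 9)    # A at 25..34 (time 0)
b = first_fit(holes, 5)    # B at 34..39 (time 1)
print(a, b, holes)         # 25 34 [(39, 45)]; C (8K) does not fit, EF = 6
release(holes, a, 9)       # A terminates at time 11
c = first_fit(holes, 8)    # C at 25..33
print(c, holes)            # 25 [(33, 34), (39, 45)]; D (7K) still waits, EF = 7
```

Releasing B at time 14 would merge the 33..34 sliver with B's block, producing a 33..40 hole that finally fits D, exactly as in the trace above.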


Simple Paging
One of the problems encountered in the multiple variable partition strategy is external
fragmentation. To avoid external fragmentation, the physical memory is allocated to processes
in fixed-size blocks called frames. The physical memory is divided into a number of equal-size
frames to accommodate processes, which are likewise divided into a number of equal-size
pages. A process is brought in by loading all of its pages into the available frames, which need
not be contiguous.

If the advantage of paging is the elimination of external fragmentation, its weakness is internal
fragmentation.

Simple Segmentation
If paging refers to a memory allocation strategy using fixed-length pages, then
allocating memory using variable-length blocks called segments is known as
segmentation. Here, each process is divided into several segments and loaded into variable,
non-contiguous partitions. This strategy gets rid of internal fragmentation and is an
improvement over multiple variable partitioning (dynamic partitioning).

Segmentation with Paging


This technique combines the principles of segmentation and paging to provide the
efficiency of paging and the protection and sharing capabilities of segmentation. The principle
involves breaking down the logical address into a segment number and a segment offset.
The segment offset is further divided into a page number and a page offset. The segment table
entry contains the address of the segment's page table. The hardware adds the
logical address's page number bits to the page table address to locate the page table entry. The
physical address is formed by appending the page offset to the page frame number specified in
the page table entry. Figure 5.1 illustrates the segmentation with paging architecture.

Figure 5.1: Segmentation with Paging Architecture


Source: Dr. Yair Amir, Department of Computer Science,The Johns Hopkins University
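The two-level split described above can be sketched in a few lines. This is a hedged illustration: the bit width (a 4-bit page offset, i.e. 16-byte pages) and both tables are assumptions of mine chosen for readability, not values from the module or Figure 5.1.

```python
# Sketch of segmentation-with-paging translation: the segment number selects
# a per-segment page table; the segment offset splits into page number and
# page offset. All sizes and table contents here are illustrative assumptions.

OFFSET_BITS = 4                                  # 16-byte pages (assumption)

segment_table = {0: {0: 3, 1: 7}, 1: {0: 2}}     # segment -> page table (p -> f)

def translate(segment, seg_offset):
    page = seg_offset >> OFFSET_BITS             # upper bits: page number
    off = seg_offset & ((1 << OFFSET_BITS) - 1)  # lower bits: page offset
    frame = segment_table[segment][page]         # two table lookups
    return (frame << OFFSET_BITS) | off          # physical = frame bits, offset

print(hex(translate(0, 0x13)))   # 0x73: segment 0, page 1, offset 3 -> frame 7
```

Note the cost the text implies: every access performs a segment-table lookup and then a page-table lookup before touching the data itself.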


Swapping
Swapping is a technique that permits a computer system to execute programs that are larger
than primary storage. The operating system copies a part of the process found in secondary
storage into primary storage. When another part of the process is needed, it exchanges a
portion of the data in primary storage (swap out) with a portion of the data in secondary
storage (swap in). The choice of which process to swap out and which to swap in is made by
the medium-term scheduler. Figure 5.2 illustrates swapping data in and out.

Figure 5.2: Swapping


Overlaying
Overlaying permits a process to execute even though it is larger than primary storage.
The programmer defines two or more overlay segments within the program
such that no two overlay segments need to be in memory at the same time. The operating
system takes charge of swapping overlay segments, thus limiting the physical size of the
process in primary memory. It keeps in memory only the data and instructions needed at any
given time.

This technique is implemented by the programmer and the compiler, which may be a complex
and very time-consuming task.

Buddy System
The buddy system memory management technique splits memory in half repeatedly until the
smallest block that fits a specific request is found. For example, if we have 512KB and we
need 40KB, we divide 512KB in half, and then the first half in half again, until 40KB fits (here,
in a 64KB block).

512

256 256

128 128 256

64 64 128 256

40 64 128 256


Once a partition is de-allocated, it will be merged with its buddy to re-form the original block
size. That is, 64KB merges with the other 64KB, then with the 128KB, and so on until the
memory reverts to 512KB. The buddy system is comparatively easier to implement than paging,
has little external fragmentation and little overhead from compacting memory, but it encounters
a lot of internal fragmentation.
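The split sequence above can be sketched as follows. This is a minimal illustration, assuming (as the disclaimer at the end of this module notes) that the allocator always rounds a request up to the next power of two; the function and variable names are mine.

```python
# Buddy-system sketch: a request is rounded up to the next power of two, and
# the smallest free block is obtained by splitting larger blocks in half.
# Mirrors the 512K / 40K walk-through above.

def buddy_block(total, request):
    """Return the power-of-two block size allocated, and the split sizes."""
    size = 1
    while size < request:
        size *= 2                 # round request up to a power of two (40 -> 64)
    splits = []
    block = total
    while block > size:
        block //= 2               # split: 512 -> 256 -> 128 -> 64
        splits.append(block)      # the other half of each split stays free
    return size, splits

size, splits = buddy_block(512, 40)
print(size)     # 64: the 40K request occupies a 64K block (24K internal frag.)
print(splits)   # [256, 128, 64]: the free buddies left behind
```

De-allocation reverses the split list: the 64K block merges with its free 64K buddy, the result with the 128K buddy, and so on back to 512K.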

Reading Assignment
 E-Books
 [Link]
 Gokey, C. (1997), "Fragmentation Example,"
[Link]
 [Link]
 [Link]/modules
 [Link]/class/sp07/cs241/Lectures/28

 YouTube tutorials:
 Virtual Memory 1, 3, 4 By David Black Schaffer
 Memory Management Tech guiders Tutorials 1-3,
 Contiguous Memory allocation Fixed Partitioning Tech Guiders tutorial 6
 Contiguous Memory Allocation – Dynamic Partitioning Tech Guiders Tutorial 8
 First fit Best Fit worst Fit for Dynamic Partitioning Tutorial 12
 Introduction to Paging, Tutorial 13.

References / Sources
 Nutt, G. (2009). Operating Systems: A Modern Perspective, 3rd Edition. Pearson
Addison-Wesley Inc., Boston, Massachusetts.

 Silberschatz, A. Galvin, P. and Gagne, G. (2018). “Operating Systems Concepts”, 10th


Ed. John Wiley and Sons Inc.

 Stallings, William. (2018). “Operating Systems: Internals and Design Principles”, 9th ed.
Pearson Education Inc.

 Harris, J. Archer. (2002). “Schaum’s Outline of Theory and Problems of Operating
Systems”. McGraw-Hill Companies Inc.

 [Link] [Link]

 Gokey, C. (1997), "Fragmentation Example,"

 [Link]

 "Operating Systems Lecture Notes, Copyright 1997 Martin C. Rinard."


[Link]/~rinard/osnotes/

 [Link]

 [Link]/class/sp07/cs241/Lectures/28


 [Link]

EXERCISES
1. Using the Job pool below:
A B C D E F G H
13k 5k 3k 11k 8k 7k 2k 9k

Use the partitions:


12K 4K 6k 3K 10K 15K

In what region will each of the jobs be allocated using:


a. First Fit
b. Next Fit
c. Best Fit
d. Worst Fit

2. Given the job stream:


Job    Arrival Time    Memory Size    Burst Time
A           0               9              6
B           1               5              8
C           2               8              4
D           3               7              5

Partitions: 12K, 6K, 6K, 6K

Assume:
CPU Scheduling Algorithm – Shortest Job First
Memory Allocation Strategy – First Fit
Memory Management Strategy – Multiple Fixed Partition
Compute IF, EF and %MU

3. Given the job stream:


Job    Arrival Time    Memory Size    Burst Time
A           0               9              6
B           1               5              8
C           2               8              4
D           3               7              5

Assume:
CPU Scheduling Algorithm – Round Robin with q = 3
Memory Allocation Strategy – First Fit
Memory Management Strategy – Multiple Variable Partition
User Memory starts from 25 to 45.
Compute IF, EF and %MU

4. On a system with 1024KB memory using the buddy system, draw a diagram showing the
allocation of memory after each of the following events.
a. Process A, request 40K


b. Process B, request 140K


c. Process C, request 50K
d. Process D, request 50K
e. Process E, request 50K
f. Process D, exit
g. Process C, exit
h. Process E, exit
i. Process A, exit
j. Process F, request 115K
k. Process G, request 140K
l. Process F, exit
m. Process G, exit
n. Process B, exit

5. Given the following data, simulate the memory management strategies enumerated below
and calculate the internal fragmentation, external fragmentation, and % memory utilization.
The CPU scheduling algorithm used is First Come First Served and the memory allocation
strategy is First Fit.

a. Fixed Partition of 12: 3: 10: 15: 9: 4


b. Dynamic Partition of 32
c. Buddy System of Memory size 32.

Job Arrival time Burst Time Memory Size

A 0 4 9

B 0 7 7

C 0 5 3

D 0 2 8

E 0 6 11

F 0 3 5

G 0 8 2

H 0 4 13

6. Given the following data, simulate the memory management strategies enumerated below
and calculate the internal fragmentation, external fragmentation, and % memory utilization.
The CPU scheduling algorithm used is First Come First Served and the memory allocation
strategy is First Fit.

a. Fixed Partition of 3: 6: 5: 14: 4


b. Dynamic Partition of 32
c. Buddy System of Memory size 32.


Job Arrival time Burst Time Memory Size

A 1 6 3

B 0 3 9

C 0 4 12

D 1 7 4

E 0 2 2

F 2 5 5

G 2 8 3

H 1 3 4

I 4 8 12

Very Important Disclaimer: Values of memory sizes and partitions should be a
power of 2. However, to avoid very large numbers, I have decided to use sizes
that are not powers of 2. Note that this is for exercise purposes only. Bear in
mind that in reality, partition sizes are always a power of 2.


MODULE 6: VIRTUAL MEMORY MANAGEMENT


Overview
Virtual memory is a technique that allows the execution of processes that are not completely in
memory. One major advantage of this scheme is that programs can be larger than physical
memory. Further, virtual memory abstracts main memory into an extremely large, uniform array
of storage, separating logical memory as viewed by the user from physical memory. This
technique frees programmers from concerns of memory-storage limitations. Virtual memory also
allows processes to share files easily and to implement shared memory. In addition, it provides
an efficient mechanism for process creation. Virtual memory is not easy to implement, however,
and may substantially decrease performance if it is used carelessly. In this module, we discuss
virtual memory in the form of demand paging and examine its complexity and cost.

Objectives
After successful completion of this module, the student should be able to:
 Identify the virtual memory management strategies;
 Apply the different virtual memory management strategies;
 Describe demand paging;
 Identify the various page allocation algorithms;
 Ascertain the principle involved in each of the different page replacement algorithms; and
 Evaluate reference strings using the different page replacement algorithms.

Lesson 1: Virtual Memory


The utilization of the primary memory can be maximized by 1) dividing the program into
small parts such that some parts remain in the RAM while the rest are loaded into RAM
only when access to the data is needed, and 2) using a virtual memory, such as the hard
drive, while storing the instructions and data currently used by the processor in the RAM.
Note that the processor can only communicate with the real memory, in this case the RAM.
Access to any other memory must go through the operating system so it can perform its duty
as memory manager.

Virtual memory is a technique that gives an application program the notion that it has a big
contiguous real memory, when that memory may be physically fragmented and may even spill
over to secondary storage. Brookshear provides a good explanation. To quote:

"Suppose, for example, that a main memory of 64 megabytes is required but only 32
megabytes is actually available. To create the illusion of the larger memory space, the
memory manager would divide the required space into units called pages and store the
contents of these pages in mass storage. A typical page size is no more than four
kilobytes. As different pages are actually required in main memory, the memory manager
would exchange them for pages that are no longer required, and thus the other software
units could execute as though there were actually 64 megabytes of main memory in the
machine."

The virtual memory concept makes use of the different computer memories available as
additional space to store data and instructions, expanding memory such that only the data
and instructions which are currently needed are stored in the RAM. The operating system,
through its memory manager, coordinates the use of the different types of memory by tracking
available spaces, deciding which processes are to be allocated or deallocated, and moving
data between these various types of memory. In essence, the operating system creates a
temporary file where data is stored when the RAM is inadequate. This temporary file uses a
backing store, a fast secondary storage device such as the hard disk drive, to act as an
extension of the RAM. In this way, the memory seems very big when in fact it is just using
another area where it can swap data in and out. This technique increases the degree of
multiprogramming but, beware, thrashing may occur. Thrashing is the phenomenon of
excessively shuffling portions of a process in and out of the RAM, thus slowing the processor.
This can happen when one process (say, Process A) requires a large amount of RAM, so the
operating system swaps out another process (say, Process B) to secondary storage to
accommodate Process A. However, Process B also needs to be in main memory.

The degree of multiprogramming can only be increased to some extent; when it reaches
its peak, thrashing occurs. To resolve a thrashing problem, a user may either 1) increase the
amount of RAM in the computer or 2) decrease the number of programs running concurrently
on the computer.

Virtual Memory Paging


Table 6.1 summarizes the characteristics of simple paging and paging with the use of
virtual memory.

Table 6.1: Characteristics of Paging with or without the use of Virtual Memory

Common to both Simple Paging and Virtual Memory Paging:
 Main memory is partitioned into small fixed-size parts called frames.
 The program is broken into pages by the compiler or memory management system.
 Internal fragmentation occurs.
 External fragmentation is eliminated.
 The operating system maintains a page table for each process, giving the frame each page
occupies.
 The operating system maintains a free frame list.
 The processor uses the page number and page displacement (offset) to calculate the
physical address.

Simple Paging only:
 All pages of the process must be in main memory to execute.

Virtual Memory Paging only:
 Not all pages of the process need to be in main memory frames to execute; pages may be
brought in when needed or demanded.
 Bringing a page into main memory (swap in) may require one or more pages to be brought
out of the main memory (swap out).

Source: Stallings, W., “Modern Operating Systems”, 4th ed., Prentice Hall, 2001


Figure 6.1 Basic Paging Architecture


Source: Dr. Yair Amir, Department of Computer Science,
The Johns Hopkins University

Figure 6.1 shows the paging architecture. In the paging implementation, the main memory
is divided into small fixed-size parts known as frames. Since these frames are limited in size, we
need to split the process into smaller parts known as pages to fit the frames. The size of a page
is the same as the size of a frame. When the pages of a process are created, each page is
identified using a virtual / logical address. The virtual address consists of two parts, namely the
page number (p) and the page offset / displacement (o). To access data at a given
address, the memory manager automatically 1) gets the page number (p), 2) gets the page
offset (o), 3) translates the page number to a frame number (f), and 4) gets the data at the
same offset in that frame. Here, the virtual address (U) is translated to a physical address in
main memory (R). The figure shows that the virtual address consists of (p, o) and the physical
address of (f, o), where the offset or displacement of the virtual address (U) and the physical
address (R) is the same. All we need to know is the relationship between the page number (p)
and the frame number (f), which is mapped in the page table. To access a specific datum in
real memory, we first access the page table and then fetch the data in the frame specified
there. In short, we need two physical memory accesses for each virtual memory access.

To translate a virtual address (U) to a physical address (R), use the formulas below.
Let U = virtual / logical address      R = physical / real address
    P = page size                      f = frame number
    p = page number                    o = page offset

p = U div P      o = U mod P      U = p*P + o      R = f*P + o


For a concrete example, suppose a 32-byte process is to be stored in a memory with a page
size of 4 bytes. The process will be split into small pages of 4 bytes to fit into the physical
memory, giving 8 pages in all. Figure 6.2 shows that page 0 is stored in frame 3, page 1 in
frame 5, and page 3 in frame 1. Notice that the user views the pages as stored in a contiguous
manner, but physically they are not. The only way to know where the other pages are stored is
through the page table.

Now, how do we compute the physical address of logical address 6?

Using the formulas:
p = U div P = 6 div 4 = 1   {logical address 6 is found in page 1}
o = U mod P = 6 mod 4 = 2   {logical address 6 is found in page 1, offset 2}
R = f*P + o; from the page table, page 1 is found in frame no. 5, so f = 5
R = 5*4 + 2 = 22

From Figure 6.2, logical address 6, which contains the value G, is found in physical address 22.

Figure 6.2 Paging example for a 32 byte memory with 4 byte pages


What if we would like to know the addresses of page number 2 with page offset 2? From
the formulas given, we can immediately compute the logical address as well as the physical
address. From the page table, p = 2 can be found in frame number 6.

U = p*P + o = 2*4 + 2 = 10   and   R = f*P + o = 6*4 + 2 = 26

To check: the content at p = 2, o = 2 is K, and from Figure 6.2, K can be found at physical
address 26.

Presuming a user program is 2353 bytes and it must be stored with a page size of 64
bytes: How many frames are needed to store 2353 bytes? What is the offset of the last page?

U = 2353 and P = 64

The last page is p = U div P = 2353 div 64 = 36, and the offset (the last instruction/data
of the process) is o = 2353 mod 64 = 49. The number of frames needed to store the
program is 37. Why? Because the first page is page 0 and, based on our computation, the
last page is page 36, which equates to 37 pages = 37 frames. The last datum is found in page
36, offset 49. Was the whole frame used? No; the size of the whole frame is 64 and only 49
bytes were used, which leaves an unused space of 15 bytes {internal fragmentation}.
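The div/mod formulas and the worked examples above can be checked with a short sketch. The dictionary page table is my stand-in for the table in Figure 6.2 (only pages 0 to 3 are shown there, so the rest are omitted).

```python
# Paging address translation, following the formulas p = U div P, o = U mod P,
# R = f*P + o. Page size and page table follow the Figure 6.2 example.

P = 4
page_table = {0: 3, 1: 5, 2: 6, 3: 1}   # p -> f, as read from the figure

def to_physical(U):
    p, o = U // P, U % P                 # p = U div P, o = U mod P
    return page_table[p] * P + o         # R = f*P + o

print(to_physical(6))    # 22, matching the first worked example
print(to_physical(10))   # 26 (p = 2, o = 2, f = 6)

# Frames needed for a 2353-byte program with 64-byte pages:
frames = 2353 // 64 + 1                  # pages 0..36 -> 37 frames
print(frames, 64 - 2353 % 64)            # 37 frames, 15 bytes unused
```

The `+ 1` in the frame count captures the page-numbering argument made above: the last page number is 36, but numbering starts at page 0.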

Virtual Memory Segmentation


Table 6.2 summarizes the characteristics of simple segmentation and segmentation with
the use of virtual memory.

Table 6.2: Characteristics of Segmentation with or without the use of Virtual Memory

Common to both Simple Segmentation and Virtual Memory Segmentation:
 Main memory is not partitioned.
 The program is broken into segments specified by the programmer to the compiler.
 There is no internal fragmentation.
 External fragmentation exists.
 The operating system maintains a segment table for each process, giving the base address
and the size of each segment.
 The operating system maintains a list of free holes in main memory.
 The processor uses the segment number and segment offset to calculate the physical
address.

Simple Segmentation only:
 All segments of the process must be in main memory to execute.

Virtual Memory Segmentation only:
 Not all segments of the process need to be in main memory to execute; segments may be
brought in when needed or demanded.
 Bringing a segment into main memory (swap in) may require one or more segments to be
brought out of the main memory (swap out).

Source: Stallings, W., “Modern Operating Systems”, 4th ed., Prentice Hall, 2001

This technique divides the program into variable-sized segments such as the main program,
procedures, functions, local variables, global variables, the stack, and arrays. Each segment
has a name and a length found in a segment table, as shown in Figure 6.3. Segment 0 of P1 is
found in physical memory at address 43062. Its size is described by the limit, 25286; thus, the

whole segment 0 spans addresses 43062 up to 68347 (43062 + 25286 − 1). Segment 1 of P1 is
found at physical memory addresses 68348 up to 72773.

Figure 6.3. Segmentation Example

The logical address space consists of the segment number (s) and a segment
displacement (d). This can be visualized in Figure 6.4. The starting physical address of a
segment is equal to its base, and the extent of the segment is computed as base + limit. The
protection mechanism for accessing a segment in memory is to check whether the
displacement is less than the limit found in the segment table. If it is, access to physical
address base + d is granted; otherwise, an addressing error occurs (d must never be ≥ the
limit).

Figure 6.4: Basic Segmentation Architecture


Source: Dr. Yair Amir, Department of Computer Science, The Johns Hopkins University

The memory management unit (MMU) is responsible for translating a segment (s) and
displacement (d) within the segment into a physical memory address and for performing checks
to make sure that the reference to that segment (s) and displacement (d) is permitted.
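As a sketch, the MMU's translation and protection check can be expressed in a few lines of Python. The table values follow the Figure 6.3 discussion (segment 0 of P1 at base 43062 with limit 25286; segment 1 at base 68348, whose limit of 4425 is an assumption inferred from the addresses given, not stated in the figure):

```python
# A sketch of MMU segment translation; segment 1's limit is assumed.
SEGMENT_TABLE = {
    # segment: (base, limit)
    0: (43062, 25286),
    1: (68348, 4425),
}

def translate(segment, d):
    """Map a logical address (segment, displacement) to a physical address."""
    if segment not in SEGMENT_TABLE:
        raise ValueError("segment fault: no such segment")
    base, limit = SEGMENT_TABLE[segment]
    if d >= limit:                      # protection check: d must be < limit
        raise ValueError("addressing error: displacement exceeds limit")
    return base + d

print(translate(0, 100))    # 43162
print(translate(1, 0))      # 68348
```

Note that the highest legal displacement in segment 0 is 25285, which maps to physical address 68347, one byte below segment 1's base.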


Reading Assignment
 E-books
 [Link]/modules
 [Link]
 [Link]


EXERCISES / WRITTEN ASSIGNMENTS


1. Explain the difference between logical and physical address.

2. A user process with 4063 bytes of memory needs to be allocated in primary memory with a
frame size of 64 bytes.
a. How many frames will be needed to store the user process?
b. Will there be an internal fragmentation? If so, what is the internal fragmentation?
c. If the snapshot of the page table shows:
Page Table

p f

5 9

43 2

28 44

i. What is the physical address of page 43 offset 21?


ii. What is the logical address of page 43 offset 21?
iii. What is the page number and offset of logical address 1827?
iv. What is the physical address of logical address 1827?
v. What is the physical address of 3626?

3. A system using segmentation provides the segment table below. Compute the physical
address for each of the logical addresses. If the address generates a segment fault, indicate
so.
a. 0, 323
b. 1, 856
c. 2, 234
d. 3, 1009
e. 2, 533
Segment Base Length

0 1311 400

1 2711 900

2 411 500

3 4211 1100


Lesson 2: Page Replacement Algorithms


Demand Paging
In demand paging, the pages of a process are stored in secondary storage and
brought into primary memory only when needed or demanded. In particular, a process usually
begins execution with no pages loaded in primary memory. In this way, we maximize the
utilization of primary memory by bringing in only what is needed. The memory manager, or
memory management unit (MMU), keeps track of all pages brought into primary memory; this
information is stored in a page table. There are instances when a needed page is not yet in
primary memory, so the memory manager must bring in that specific page. A reference to a page
that is not currently loaded in primary memory is known as a page fault. To handle page faults,
the memory manager performs the following:

1. Locate the missing page in the secondary storage.


2. Check if the primary memory is full. If so, choose a page to be dropped off from the
primary memory and bring in the page needed.
3. Adjust the page table to reflect the new state of the memory.
4. Restart the user process.
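The four steps above can be sketched in Python. This is a hypothetical simplification: resident pages are kept in a plain list in load order, and the victim chooser here simply drops the oldest page (i.e., FIFO):

```python
def handle_page_fault(page, resident, capacity, choose_victim):
    """Bring `page` into primary memory, evicting a victim if needed."""
    # 1. Locate the missing page in secondary storage (simulated).
    # 2. If primary memory is full, choose a page to drop and swap it out.
    if len(resident) >= capacity:
        resident.remove(choose_victim(resident))
    # 3. Adjust the page table: the page is now resident.
    resident.append(page)
    # 4. Restart the user process (not simulated here).

def access(page, resident, capacity):
    """Reference a page; return True if the reference caused a page fault."""
    if page in resident:
        return False
    handle_page_fault(page, resident, capacity, choose_victim=lambda r: r[0])
    return True

resident = []
refs = [1, 4, 1, 6, 1, 6, 1, 6, 1, 4, 1]
faults = sum(access(p, resident, 3) for p in refs)
print(faults)   # 3 -- pages 1, 4, and 6 each loaded once; 3 frames never overflow
```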

There are two major problems in implementing demand paging: page/frame allocation and
page replacement. Page/frame allocation answers the question "how many frames should be
allocated to each process loaded in primary memory?" and page replacement answers the
question "if a page must be unloaded from primary memory, how do we choose the page to
drop?" Let us tackle each of these problems.

Page Replacement Algorithms


The algorithm that selects the page to be dropped from memory is called a page
replacement algorithm. Descriptions of the page replacement algorithms are found in Table
6.3. The best page replacement algorithm is the one that produces the lowest page fault rate.

Table 6.3: Page Replacement Algorithms


Page Replacement Description
Algorithms
First-in First-out Select the page that has been in memory for the longest period of
(FIFO) time. Each page table entry includes a time stamp that records when
the page was loaded into memory (swap-in time).
Easy to implement.
Inexpensive, with low overhead and little book-keeping for the OS.
Not very efficient because it exhibits Belady's anomaly: it is possible
to have more page faults after increasing the number of page frames
while using FIFO.

Optimal (OPT) or Selects the page that will not be referenced again for the longest
Clairvoyant period of time.
Implementation of this algorithm would require knowledge of what
pages are needed in the future.


Least Recently Used Keeps track of the last time each page was used, not when it was
(LRU) swapped in. In one implementation, each page table entry contains a
counter that records the time of the page's most recent reference.
The table is searched for the entry with the lowest (oldest) counter
value, and that page is selected to be swapped out.
A stack implementation is also possible: a referenced page is moved
to the top of the stack, so the bottom of the stack is always the least
recently used page.

Not Recently Used An approximation of LRU which makes use of a dirty bit (set when a
(NRU) page has been modified or written to) and a reference bit. When a
page is loaded, the operating system sets the reference bit to 0.
When the page is accessed, the reference bit is changed to 1. The
OS periodically resets the reference bit to 0; hence, a reference bit
of 0 means the page has not been recently referenced. Each page
falls into one of 4 classes:
Class 0: not referenced, clean
Class 1: not referenced, dirty
Class 2: referenced, clean
Class 3: referenced, dirty
The NRU algorithm picks a random page from the lowest non-empty
class for removal. Note that this implies that a recently referenced
page is more important to keep than a modified one.

Second Chance A combination of the FIFO and NRU algorithms. The oldest page is
selected as a candidate for replacement and is removed if its
reference bit is 0. If the reference bit is 1, the page is moved to the
back of the queue and its reference bit is set to 0. The process is
repeated until a page is selected for replacement.
It can be considered a circular queue.
Fares better than FIFO.

Clock Keeps a circular list of the pages in memory, with a hand or pointer
pointing to the oldest page in the list, known as the victim. When a
page needs to be replaced and no empty frames are available, the
victim is inspected. If its reference bit is 0, the new page replaces
the victim. If the victim's reference bit is 1, the bit is set to 0, the
clock hand is advanced, and the process is repeated until a page is
replaced.

Random Replace a random page from all the pages currently in primary
memory.
No overhead cost in tracking page references.
Does not perform well.
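The clock algorithm described in Table 6.3 can be sketched in Python as follows. This is a minimal illustration, not a definitive implementation; one detail is assumed: a newly loaded page's reference bit is set to 1, a common convention since loading a page is itself a reference.

```python
class Clock:
    """Minimal sketch of the clock page replacement algorithm."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.frames = []   # circular list of [page, reference_bit]
        self.hand = 0      # points at the current victim candidate

    def access(self, page):
        """Reference a page; return True if it caused a page fault."""
        for frame in self.frames:
            if frame[0] == page:
                frame[1] = 1                 # hit: set the reference bit
                return False
        if len(self.frames) < self.capacity:
            self.frames.append([page, 1])    # an empty frame is available
            return True
        # No empty frame: sweep the hand until a victim with ref bit 0.
        while self.frames[self.hand][1] == 1:
            self.frames[self.hand][1] = 0    # give the page a second chance
            self.hand = (self.hand + 1) % self.capacity
        self.frames[self.hand] = [page, 1]   # replace the victim
        self.hand = (self.hand + 1) % self.capacity
        return True

clock = Clock(3)
print(sum(clock.access(p) for p in [0, 3, 1, 2, 3, 6]))   # 5 page faults
```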

Let us evaluate the algorithms using a particular string of memory references and compute the
page faults incurred by each algorithm. Two important notes to consider are:
1. For a given page size, only the page number is considered, not the entire address.
2. If a page is brought into primary memory, any succeeding reference to that page
while it is still in memory will not cause a page fault.


For example: A particular process generates the following address sequence:

0120 0405 0135 0650 0111 0104 0109 0145 0643 0104
0109 0145 0632 0103 0104 0107 0453 0101 0118 0160
If the page size is 100 bytes, the page referenced by each address in the sequence
is:

1 4 1 6 1 1 1 1 6 1
1 1 6 1 1 1 4 1 1 1
This is further reduced to the following reference string (an ordered list of the page numbers
accessed by a program; consecutive references to the same page are collapsed, since they
cannot cause additional page faults):

1 4 1 6 1 6 1 6 1 4 1
To determine the number of page faults for an actual reference string, the number of page
frames must be given. In general, as the number of available frames increases, the number of
page faults decreases.
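The derivation of the reference string above can be reproduced with a few lines of Python:

```python
PAGE_SIZE = 100
addresses = [120, 405, 135, 650, 111, 104, 109, 145, 643, 104,
             109, 145, 632, 103, 104, 107, 453, 101, 118, 160]

# page number = address // page size
pages = [a // PAGE_SIZE for a in addresses]

# Collapse consecutive repeats: back-to-back references to a resident
# page cannot cause another page fault.
reference_string = [p for i, p in enumerate(pages)
                    if i == 0 or p != pages[i - 1]]
print(reference_string)   # [1, 4, 1, 6, 1, 6, 1, 6, 1, 4, 1]
```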

References to pages tend to be localized to a small set of pages. This locality may
manifest in two forms. If a memory location has been referenced, it is most likely to be referenced
again soon; this form is called temporal locality. The other form, spatial locality, states that if a
memory location is accessed, there is a good chance that a location near it will be accessed soon.
As J. Archer Harris writes,

“Based on experiments, references tend to group together in a number of different


localities. For example, a program executing in a loop will access a set of pages that
contain the instructions and data referenced in that loop. Calls to functions will tend to
increase the number of pages in the locality. When the program breaks out of the loop, it
may move to another section of code whose locality of pages is virtually distinct from the
previous locality.”

A number of factors affect the choice of page size:

1. Page size is always a power of 2.
2. A small page size incurs less internal fragmentation.
3. A larger page size reduces the number of pages needed, thus
a. reducing the size of the page table;
b. requiring less memory for the page table;
c. taking less time to load the page registers.
4. A larger page size reduces the overhead of swapping pages in and out.
5. A smaller page size reduces the amount of unused information stored in main memory,
so more memory is available for other processes.

Let us elaborate on the principles of FIFO, OPT, and LRU.

How many page faults occur for the algorithms FIFO, OPT, and LRU, given the following
reference string and three page frames?


0 3 1 2 3 6 4 0 1 4 2 4

First-In First-Out (FIFO)

Ref:   0   3   1   2   3   6   4   0   1   4   2   4
       0   3   1   2   2   6   4   0   1   1   2   4
           0   3   1   1   2   6   4   0   0   1   2
               0   3   3   1   2   6   4   4   0   1
       pf  pf  pf  pf  no  pf  pf  pf  pf  no  pf  pf

(each column shows the frame contents after the reference, most recently loaded page on top;
pf = page fault, no = no page fault)

Page fault (pf) = 10
page fault rate = 10 / 12 = 0.8333
% page fault = (10 / 12) × 100 = 83.33 %
Optimal (OPT)

Ref:   0   3   1   2   3   6   4   0   1   4   2   4
       0   3   1   2   2   2   2   2   2   2   2   2
           0   3   3   3   6   4   4   4   4   4   4
               0   0   0   0   0   0   1   1   1   1
       pf  pf  pf  pf  no  pf  pf  no  pf  no  no  no

(a victim page is overwritten in place; pf = page fault, no = no page fault)

Page fault (pf) = 7
page fault rate = 7 / 12 = 0.5833
% page fault = (7 / 12) × 100 = 58.33 %
Least Recently Used (LRU)

Ref:   0   3   1   2   3   6   4   0   1   4   2   4
       0   3   1   2   3   6   4   0   1   4   2   4
           0   3   1   2   3   6   4   0   1   4   2
               0   3   1   2   3   6   4   0   1   1
       pf  pf  pf  pf  no  pf  pf  pf  pf  no  pf  no

(each column lists the frames from most recently used down to least recently used;
pf = page fault, no = no page fault)

Page fault (pf) = 9
page fault rate = 9 / 12 = 0.75
% page fault = (9 / 12) × 100 = 75.0 %
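The three hand computations above can be checked with a short simulator. Each function returns the number of page faults for a given reference string and frame count:

```python
def fifo(refs, nframes):
    """First-in first-out: evict the page resident the longest."""
    frames, faults = [], 0
    for p in refs:
        if p not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.pop(0)          # evict the oldest page
            frames.append(p)
    return faults

def lru(refs, nframes):
    """Least recently used: the list is kept in recency order."""
    frames, faults = [], 0
    for p in refs:
        if p in frames:
            frames.remove(p)           # move to the most-recent position
        else:
            faults += 1
            if len(frames) == nframes:
                frames.pop(0)          # evict the least recently used
        frames.append(p)
    return faults

def opt(refs, nframes):
    """Optimal: evict the page not needed for the longest time."""
    frames, faults = [], 0
    for i, p in enumerate(refs):
        if p not in frames:
            faults += 1
            if len(frames) == nframes:
                future = refs[i + 1:]
                victim = max(frames, key=lambda q: future.index(q)
                             if q in future else len(future) + 1)
                frames.remove(victim)
            frames.append(p)
    return faults

refs = [0, 3, 1, 2, 3, 6, 4, 0, 1, 4, 2, 4]
print(fifo(refs, 3), opt(refs, 3), lru(refs, 3))   # 10 7 9
```

Running `fifo([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3)` gives 9 faults while the same string with 4 frames gives 10, which illustrates Belady's anomaly mentioned in Table 6.3.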

Page / Frame Allocation Algorithms


The easiest way to split the available frames among processes is equal allocation. An
alternative is to split the available frames based on the size of each process, which is called
proportional allocation. Table 6.4 describes the two frame allocation algorithms.

Table 6.4 Page / Frame Allocation Algorithms


Page / Frame Allocation Algorithm: Equal Allocation
Description: Split the available frames equally among all the processes in memory.
Formula used: Let m = available frames, n = no. of processes, a = allocation
a = m / n (note that integer division is used)

Page / Frame Allocation Algorithm: Proportional Allocation
Description: Split the available frames in proportion to the size of each process.
Formula used: Let m = available frames, si = size of process i, st = total size of processes,
ai = allocation of process pi
ai = (si / st) x m (note that integer division is used)
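The two formulas can be sketched directly in Python. The process sizes 10 and 127 with 62 free frames are an illustrative example, not drawn from the exercises:

```python
def equal_allocation(m, n):
    """m available frames split equally among n processes (integer division)."""
    return m // n

def proportional_allocation(m, sizes):
    """Frames for each process, proportional to its size: ai = (si / st) * m."""
    st = sum(sizes)
    return [si * m // st for si in sizes]

print(equal_allocation(62, 2))                  # 31 frames each
print(proportional_allocation(62, [10, 127]))   # [4, 57]
# Truncation can leave a few frames unallocated (here 62 - 61 = 1);
# an OS would hand out leftovers by some tie-breaking rule.
```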

Reading Assignment
 Read supplemental books
 Stallings, William. “Operating systems: Internals and Design Principles”, Prentice
Hall.

 Silberschatz, A., Galvin, P. and Gagne, G. “Applied Operating Systems Concepts”,
1st ed. John Wiley and Sons Inc., 2000.
 E-books
 [Link]
 [Link]/Online/BS/Seitenersetzung/documentation/Strategie/packa
[Link]
 [Link]
 Tanenbaum, Andrew S. “Modern Operating Systems, 2nd ed.” New Jersey:
Prentice-Hall 2001. Online excerpt on page replacement algorithms: Page
Replacement Algorithms.

Research
1. There are other variants of LRU. Name some and describe the strategy done.
2. Differentiate local replacement and global replacement.
3. What is Working Set Model (WSM)?
4. What is Least Recently Used (LRU) algorithm? What about Not Frequently Used (NFU)
Algorithm? Is there a difference between these two algorithms?
5. Discuss other page replacement algorithms like LIFO and MFU.


References / Sources

 Nutt, G. (2009), Operating Systems: A Modern Perspective, 3rd Edition, Pearson Addison-
Wesley Inc, Boston, Massachusetts.

 Silberschatz, A., Galvin, P. and Gagne, G. (2018). “Operating System Concepts”, 10th
ed. John Wiley and Sons Inc.

 Stallings, William. (2018). “Operating Systems: Internals and Design Principles”, 9th ed.
Pearson Education Inc.

 Harris, J. Archer. “Schaum’s Outline of Theory and Problems of Operating Systems”.
McGraw-Hill Companies Inc., 2002.

 “Operating Systems Lecture Notes, Copyright 1997 Martin C. Rinard.”. Martin Rinard,
osnotes@[Link], [Link]/~rinard

 Coutinho, M. (1996) “Virtual memory simulation applet,”

 [Link]

 [Link]

 Song, Q. (1997), “Simulating Memory Page Replacement,”. [Link]


EXERCISES / WRITTEN ASSIGNMENT


A. Using the reference string below with 4 page frames:
0 3 1 3 2 3 6 0 4 1 4 2 4 6 0 5 1 5 2 5 4 0 3 1 3

1. What is the page fault using:


a. FIFO
b. OPT
c. LRU

2. If the page frame is increased to 5, what is the page fault using:


a. FIFO
b. OPT
c. LRU

B. Given the processes below:


Process    Process Size    Equal Allocation    Proportional Allocation

A 38

B 57

C 43

D 25

E 19

1. If there are 20 available frames in primary storage, how many frames will be
allocated to each of the processes using:
a. Equal allocation
b. Proportional allocation

2. If the process is to be split into equal pages where the page size is 8,
a. How many frames will be used by each of the processes?
b. Will there be an internal fragmentation? If so, what is the internal
fragmentation?
