
Unit-IV - SSOS notes

B.Sc. Computer Science (Bharathiar University)

Downloaded by Gomathi Saminathan ([email protected])


SYSTEM SOFTWARE AND OPERATING SYSTEM

Unit – IV

Virtual Storage: Virtual Storage Management Strategies – Page Replacement Strategies – Working
Sets – Demand Paging – Page Size. Processor Management: Job and Processor Scheduling:
Preemptive Vs Non-preemptive scheduling – Priorities – Deadline scheduling.

VIRTUAL STORAGE
Virtual storage management strategies
There are three main strategies, namely:
Fetch strategies – concerned with when a page or segment should be brought from secondary
storage to primary storage.
Placement strategies – concerned with where in primary storage to place an incoming page or
segment.
Replacement strategies – concerned with deciding which page or segment to displace to make
room for an incoming page or segment when primary storage is already fully committed.

Page replacement algorithms


There are many page replacement algorithms; the three most important are FIFO, optimal
replacement, and least recently used (LRU). This subsection explains each of these algorithms.
FIFO
The simplest page replacement algorithm is first in, first out (FIFO): when a page must be
replaced, the oldest page is chosen. For example, consider the page reference string
1, 5, 6, 1, 7, 1, 5, 7, 6, 1, 5, 1, 7
For three frames, FIFO works as follows. Assume all three frames are initially empty. Each
column below shows the frame contents after a page fault, with the faulting reference on top:

Fault on:  1  5  6  7  1  5  6  7
Frame 1:   1  1  1  7  7  7  6  6
Frame 2:      5  5  5  1  1  1  7
Frame 3:         6  6  6  5  5  5

You can see that FIFO creates eight page faults.
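The FIFO trace can be reproduced with a short simulation. The sketch below is illustrative (the function name and the test reference string are this example's, not part of any standard library):

```python
from collections import deque

def fifo_faults(refs, n_frames):
    """Simulate FIFO page replacement and return the number of page faults."""
    frames = deque()              # oldest resident page sits at the left
    faults = 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == n_frames:
                frames.popleft()  # evict the oldest page
            frames.append(page)
    return faults

refs = [1, 5, 6, 1, 7, 1, 5, 7, 6, 1, 5, 1, 7]
print(fifo_faults(refs, 3))       # prints 8, matching the trace above
```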
Optimal replacement
In the optimal page replacement algorithm, we replace the page that will not be used for the
longest period of time. For the same reference string
1, 5, 6, 1, 7, 1, 5, 7, 6, 1, 5, 1, 7
with three frames, the frame contents after each page fault are:

Fault on:  1  5  6  7  6  7
Frame 1:   1  1  1  1  1  1
Frame 2:      5  5  5  5  5
Frame 3:         6  7  6  7

You can see that optimal replacement creates only six page faults.
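Optimal replacement (Belady's algorithm) needs the whole future reference string, which is exactly why it is usually only a yardstick. A minimal sketch, assuming the same reference string as above:

```python
def optimal_faults(refs, n_frames):
    """Simulate optimal replacement: on a fault with full frames, evict the
    resident page whose next use lies farthest in the future."""
    frames, faults = [], 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) < n_frames:
            frames.append(page)
        else:
            future = refs[i + 1:]
            def next_use(p):
                # pages never referenced again are the best victims
                return future.index(p) if p in future else len(future)
            frames.remove(max(frames, key=next_use))
            frames.append(page)
    return faults

refs = [1, 5, 6, 1, 7, 1, 5, 7, 6, 1, 5, 1, 7]
print(optimal_faults(refs, 3))    # prints 6, matching the trace above
```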
Least recently used
In most cases, predicting future page references is difficult, so optimal replacement is hard
to implement, and we need a scheme that approximates it. Least recently used (LRU)
approximates future use by past use: we replace the page that has not been used for the
longest period of time.
For the same reference string
1, 5, 6, 1, 7, 1, 5, 7, 6, 1, 5, 1, 7
with three frames, the frame contents after each page fault are:

Fault on:  1  5  6  7  5  6  1  5  7
Frame 1:   1  1  1  1  1  6  6  6  7
Frame 2:      5  5  7  7  7  7  5  5
Frame 3:         6  6  5  5  1  1  1

You can see that LRU creates nine page faults.
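LRU only needs the past, so it can actually be implemented; a recency-ordered structure makes the simulation compact. A sketch using Python's `OrderedDict` as the recency list (an implementation choice of this example, not the only one):

```python
from collections import OrderedDict

def lru_faults(refs, n_frames):
    """Simulate LRU replacement using an ordered dict as a recency list."""
    frames = OrderedDict()        # least recently used page at the left
    faults = 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)        # mark as most recently used
        else:
            faults += 1
            if len(frames) == n_frames:
                frames.popitem(last=False)  # evict the least recently used
            frames[page] = None
    return faults

refs = [1, 5, 6, 1, 7, 1, 5, 7, 6, 1, 5, 1, 7]
print(lru_faults(refs, 3))        # prints 9, matching the trace above
```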
Working sets
If the number of frames allocated to a low-priority process falls below the minimum
number required, we must suspend its execution. We should then page out its remaining pages,
freeing all of its allocated frames. A process is thrashing if it is spending more time paging than
executing.
Thrashing can cause severe performance problems. To prevent thrashing, we must
provide a process with as many frames as it needs. There are several techniques for
determining how many frames a process needs. The working-set strategy starts by looking at
the pages a program is actually using.
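In the working-set model, the working set at time t is the set of pages referenced in the most recent Δ references. A minimal sketch (the window size Δ = 4 and the reference string are illustrative choices for this example):

```python
def working_set(refs, t, delta):
    """W(t, delta): the set of pages referenced in the last `delta`
    references up to and including time t (0-indexed)."""
    return set(refs[max(0, t - delta + 1): t + 1])

refs = [1, 5, 6, 1, 7, 1, 5, 7, 6, 1, 5, 1, 7]
print(working_set(refs, t=8, delta=4))   # pages touched at times 5..8: {1, 5, 6, 7}
```

If every process is granted at least its working set of frames, it can run without thrashing; otherwise it should be suspended, as described above.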
Demand paging
Demand paging is the most common virtual memory system. Demand paging is similar to a
paging system with swapping: when we need a program, it is swapped in from the backing
store. A lazy swapper never swaps a page into memory unless that page is needed. The lazy
swapper decreases the swap time and the amount of physical memory needed, allowing an
increased degree of multiprogramming.

Page size
There is no single best page size. The designers of the operating system decide the page
size for a given machine. Page sizes are usually powers of two, ranging from 2^8 to 2^12
bytes or words. The page size affects the system in the following ways:
a) Decreasing the page size increases the number of pages and hence the size of the page
table.
b) Memory is utilized better with smaller pages.
c) To reduce the I/O time, we need a smaller page size.
d) To minimize the number of page faults, we need a larger page size.
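Point (a) is simple arithmetic: halving the page size doubles the number of page-table entries. A quick illustration for a hypothetical 1 MiB (2^20-byte) address space at the two extreme page sizes mentioned above:

```python
def page_table_entries(address_space, page_size):
    """Pages (and hence page-table entries) needed to map an address space."""
    return address_space // page_size

for page_size in (2**8, 2**12):              # 256-byte and 4096-byte pages
    print(page_size, page_table_entries(2**20, page_size))
# 256-byte pages need 4096 entries; 4096-byte pages need only 256
```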

PROCESSOR MANAGEMENT:
Introduction:
When one or more processes are runnable, the operating system must decide which one to run
first. The part of the operating system that makes this decision is called the scheduler; the
algorithm it uses is called the scheduling algorithm.
An operating system has three main CPU schedulers, namely the long-term scheduler, the
short-term scheduler, and the medium-term scheduler. The long-term scheduler determines
which jobs are admitted to the system for processing; it selects jobs from the job pool and
loads them into memory for execution. The short-term scheduler selects from among the jobs
in memory that are ready to execute and allocates the CPU to one of them. The medium-term
scheduler removes processes from main memory, and thus from active contention for the
CPU, to reduce the degree of multiprogramming.
The CPU scheduler has another component called the dispatcher. It is the module that
actually gives control of the CPU to the process selected by the short-term scheduler, which
involves loading the registers of the process, switching to user mode, and jumping to the
proper location.



Before looking at specific scheduling algorithms, we should think about what the
scheduler is trying to achieve. After all, the scheduler is concerned with deciding on policy, not
providing a mechanism. Various criteria come to mind as to what constitutes a good scheduling
algorithm. Some of the possibilities include:
1. Fairness – make sure each process gets its fair share of the CPU.
2. Efficiency (CPU utilization) – keep the CPU busy 100 percent of the time.
3. Response Time [Time from the submission of a request until the first response is produced] –
minimize response time for interactive users.
4. Turnaround time [The interval from the time of submission to the time of completion]
– minimize the time batch users must wait for output.
5. Throughput [Number of jobs that are completed per unit time] – maximize the number of jobs
processed per hour.
6. Waiting time – minimize the waiting time of jobs
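Turnaround and waiting time are easy to compute for a concrete schedule. The sketch below uses first-come-first-served order with hypothetical CPU bursts of 24, 3, and 3 time units, all arriving at time 0 (the burst values are illustrative, not from these notes):

```python
def fcfs_metrics(bursts):
    """Turnaround and waiting time per job under first-come-first-served,
    assuming every job arrives at time 0."""
    clock, turnaround, waiting = 0, [], []
    for burst in bursts:
        waiting.append(clock)       # time spent before first getting the CPU
        clock += burst
        turnaround.append(clock)    # submission (t=0) to completion
    return turnaround, waiting

t, w = fcfs_metrics([24, 3, 3])
print(sum(t) / len(t), sum(w) / len(w))   # average turnaround 27.0, waiting 17.0
```

Running the short jobs first would cut both averages sharply, which is why ordering policy matters against these criteria.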

Preemptive Vs Non-Preemptive
The strategy of allowing processes that are logically runnable to be temporarily
suspended is called preemptive scheduling; i.e., a scheduling discipline is preemptive if the CPU
can be taken away. Preemptive algorithms are driven by the notion of prioritized computation:
the process with the highest priority should always be the one currently using the processor. If a
process is using the processor when a new process with a higher priority enters the ready
list, the running process should be removed and returned to the ready list until it is once
again the highest-priority process in the system.
Run-to-completion is also called nonpreemptive scheduling; i.e., a scheduling discipline
is nonpreemptive if, once a process has been given the CPU, the CPU cannot be taken away from
that process. In short, nonpreemptive algorithms are designed so that once a process enters the
running state (is allocated the processor), it is not removed from the processor until it has
completed its service time (or it explicitly yields the processor). Preemption, by contrast, can
lead to race conditions and necessitates semaphores, monitors, messages, or some other
sophisticated method for preventing them. On the other hand, a policy of letting a process run
as long as it wants would mean that some process computing π to a billion places could deny
service to all other processes indefinitely.
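Preemption can be made concrete with a tick-by-tick simulation. The sketch below is a minimal preemptive priority scheduler (job names, arrival times, and bursts are hypothetical; smaller priority numbers mean higher priority, one of the two conventions the notes mention):

```python
import heapq

def preemptive_priority(jobs):
    """jobs: {name: (arrival, priority, burst)}, smaller number = higher
    priority. One-tick-at-a-time simulation; returns completion times."""
    events = sorted((arr, prio, name) for name, (arr, prio, _) in jobs.items())
    remaining = {name: burst for name, (_, _, burst) in jobs.items()}
    ready, done, t, i = [], {}, 0, 0
    while len(done) < len(jobs):
        while i < len(events) and events[i][0] <= t:   # admit new arrivals
            arr, prio, name = events[i]
            heapq.heappush(ready, (prio, name))
            i += 1
        if not ready:
            t = events[i][0]                           # idle until next arrival
            continue
        prio, name = ready[0]      # highest-priority ready job runs one tick
        remaining[name] -= 1
        t += 1
        if remaining[name] == 0:
            heapq.heappop(ready)
            done[name] = t
    return done

# A higher-priority job arriving at t=2 preempts the running one:
print(preemptive_priority({"low": (0, 2, 5), "high": (2, 1, 3)}))
# {'high': 5, 'low': 8}: "high" runs t=2..5, then "low" resumes and ends at t=8
```

Under a nonpreemptive discipline, "low" would instead run to completion at t=5 and "high" would not finish until t=8, which is exactly the trade-off described above.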



Priorities
A priority is associated with each job, and the CPU is allocated to the job with the highest
priority. Priorities are generally drawn from a fixed range of numbers, such as 0 to 7 or 0 to
4095; however, there is no general agreement on whether 0 is the highest or the lowest
priority. Priorities can be defined either internally or externally. Examples of internally
defined priorities are time limits, memory requirements, number of open files, average I/O
burst time, and CPU burst time. External priorities are set by the user.
A major problem with priority scheduling algorithms is indefinite blocking, or starvation.
A solution to this problem is aging: a technique of gradually increasing the priority of
jobs that wait in the system for a long time.
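Aging can be sketched in a few lines. In this illustration (job names and priority values are hypothetical, and a larger number is taken to mean higher priority, which is an assumption of this sketch), every scheduling interval in which a job is still waiting raises its priority:

```python
def age_priorities(waiting_jobs, boost=1):
    """One round of aging: raise the priority of every job still waiting.
    Larger number = higher priority (an assumption of this sketch)."""
    return {name: prio + boost for name, prio in waiting_jobs.items()}

queue = {"batch_report": 0, "backup": 2}
for _ in range(3):                # three scheduling intervals without service
    queue = age_priorities(queue)
print(queue)                      # {'batch_report': 3, 'backup': 5}
```

Eventually even the lowest-priority job climbs high enough to be scheduled, so starvation is bounded.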

Deadline scheduling
Certain jobs have to be completed within a specified time and hence must be scheduled
against a deadline. A job delivered on time has high value; delivered late, it has nil value.
Deadline scheduling is complex for the following reasons:
a) Stating the resource requirements of a job in advance is difficult.
b) A deadline job should be run without degrading other deadline jobs.
c) When new jobs arrive, it is very difficult to carefully plan resource requirements.
d) Resource management for deadline scheduling is a real overhead.
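One common deadline-scheduling policy (not named in these notes, but a standard choice) is earliest deadline first: always run the job whose deadline is nearest. A minimal non-preemptive sketch with hypothetical jobs, reporting which jobs retain their value:

```python
def earliest_deadline_first(jobs):
    """jobs: {name: (deadline, burst)}. Run jobs in order of nearest
    deadline and report whether each one finishes in time."""
    t, met = 0, {}
    for name, (deadline, burst) in sorted(jobs.items(), key=lambda kv: kv[1][0]):
        t += burst
        met[name] = t <= deadline
    return met

# Hypothetical jobs as (deadline, burst); "c" cannot meet its deadline.
print(earliest_deadline_first({"a": (4, 2), "b": (6, 3), "c": (7, 3)}))
# {'a': True, 'b': True, 'c': False}
```

Note that this sketch assumes the burst (resource requirement) is known in advance, which is precisely difficulty (a) above.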

