OS Module III
Main memory and the registers built into the processor itself are the only storage that
the CPU can access directly. There are machine instructions that take memory
addresses as arguments, but none that take disk addresses. Therefore, any instructions
in execution, and any data being used by the instructions, must be in one of these
direct-access storage devices. If the data are not in memory, they must be moved there
before the CPU can operate on them. Registers that
are built into the CPU are generally accessible within one cycle of the CPU clock. Most
CPUs can decode instructions and perform simple operations on register contents at
the rate of one or more operations per clock tick. The same cannot be said of main
memory, which is accessed via a transaction on the memory bus. Memory access may
take many cycles of the CPU clock to complete, in which case the processor normally needs to stall, since it does not have the data required to complete the instruction that it is executing. This situation is intolerable because of the frequency of memory accesses. The remedy is to add fast memory between the CPU and main memory; a memory buffer used to accommodate a speed differential is called a cache.
Not only are we concerned with the relative speed of accessing physical memory; we must also ensure correct operation by protecting the operating system from access by user processes. This protection must be provided by the hardware. We first need to make sure that each process has a separate memory space.
To do this, we need the ability to determine the range of legal addresses that the
process may access and to ensure that the process can access only these legal
addresses. We can provide this protection by using two registers, usually a base and a
limit. The base register holds the smallest legal physical memory address; the limit
register specifies the size of the range.
For example, if the base register holds 300040 and the limit register holds 120900, then the program can legally access all addresses from 300040 through 420939 (inclusive).
The base and limit registers can be loaded only by the operating system, which
uses a special privileged instruction. Since privileged instructions can be executed
only in kernel mode, and since only the operating system executes in kernel mode, only
the operating system can load the base and limit registers. This scheme allows the
operating system to change the value of the registers but prevents user programs from
changing the registers' contents.
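As a brief illustration (a minimal sketch, not from the source), the base/limit check can be expressed in code using the register values from the example above:

BASE = 300040   # base register: smallest legal physical address
LIMIT = 120900  # limit register: size of the legal range

def is_legal(address: int) -> bool:
    """Return True if the address lies in [BASE, BASE + LIMIT)."""
    return BASE <= address < BASE + LIMIT

print(is_legal(300040))  # True:  first legal address
print(is_legal(420939))  # True:  last legal address (300040 + 120900 - 1)
print(is_legal(420940))  # False: one past the range; hardware would trap to the OS

In real hardware this comparison is made on every memory access in user mode, and a failed check results in a trap to the operating system.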
Memory Management is also known as Storage or Space Management.
Memory management involves subdividing memory to accommodate multiple
processes. It is the process of allocating memory efficiently to pack as many processes
into memory as possible.
Main memory is generally the most critical resource in a computer system in terms of
the speed at which programs run and hence it is important to manage it as efficiently as
possible.
The requirements of memory management are:
• Relocation
• Protection
• Sharing
• Logical Organization
• Physical Organization
Relocation
The programmer does not know where the program will be placed in memory when it is executed. While the program is executing, it may be swapped to disk and returned to main memory at a different location (relocated). So memory references must be translated to actual physical memory addresses.
Protection
Processes should not be able to reference memory locations of another process without
permission.
Sharing
Allow several processes to access the same portion of memory; for example, allow each process to access the same copy of a program rather than having its own separate copy.
Logical Organization
Programs are written in modules. Modules can be written and compiled independently, and different degrees of protection can be given to different modules (read-only, execute-only).
Physical Organization
Overlaying allows various modules to be assigned the same region of memory.
The programmer does not know how much space will be available.
The binding of instructions and data to memory addresses can be done at compile time, load time, or execution time.
Swapping
Swapping is the act of moving processes between memory and a backing store. This is
done to free up available memory. Swapping is necessary when there are more
processes than available memory. A process can be swapped temporarily out of
memory to a backing store, and then brought back into memory for continued execution.
Swapping is used to implement multiprogramming in systems with little hardware support for memory management. Swapping also helps improve processor utilization in a partitioned memory environment.
Consider a multiprogramming environment with a round-robin CPU
scheduling algorithm. When a quantum (time slice) expires, the memory manager will start to swap out the process that just finished and to swap in another process to the
memory space that has been freed. In the meantime, the CPU scheduler will allocate a
time slice to some other process in memory. Thus when each process finishes its
quantum, it will be swapped with another process.
A variant of this swapping policy is used for priority-based scheduling algorithms. If a high-priority process arrives, the memory manager swaps out a low-priority process so that it can load and execute the high-priority process. When the high-priority process finishes, the lower-priority process can be swapped back in and continued. This variant of swapping is called roll out, roll in.
Swapper
When the scheduler decides to admit a new process for which no suitable free
partition can be found, the swapper is invoked to vacate such a partition. The swapper
is an operating system process whose major responsibilities include:
• Selection of processes to swap out
• Selection of processes to swap in
• Allocation and management of swap space
The swapper usually selects a victim among the suspended processes that occupy partitions or memory regions large enough to satisfy the needs of the incoming process. Among the qualifying processes, the swapper selects the one with the lowest priority. Another important criterion for selection is the time spent in memory by the potential victim. The choice of the process to swap in is usually based on the amount of time spent on secondary storage, priority, and so on.
Contiguous Allocation
Contiguous literally means adjacent. Here it means that the program is loaded into a
series of adjacent (contiguous) memory locations. In contiguous memory allocation, the
memory is usually divided into two partitions, one for the OS and the other for the user
process.
At any time, only one user process is in memory and it is run to completion and then the
next process is brought into the memory. This scheme is sometimes referred to as Single Contiguous Memory Management.
We usually want several user processes to reside in memory at the same time. We
therefore need to consider how to allocate available memory to the processes that are
in the input queue waiting to be brought into memory. In contiguous memory allocation,
each process is contained in a single section of memory that is contiguous to the
section containing the next process.
Memory Protection
If we have a system with a relocation register together with a limit register, we
accomplish our goal. The relocation register contains the value of the smallest physical
address; the limit register contains the range of logical addresses (for example,
relocation = 100040 and limit = 74600). Each logical address must fall within the range
specified by the limit register. The MMU maps the logical address dynamically by
adding the value in the relocation register. This mapped address is sent to memory.
When the CPU scheduler selects a process for execution, the dispatcher loads the
relocation and limit registers with the correct values as part of the context switch.
Because every address generated by a CPU is checked against these registers, we can
protect both the operating system and the other users’ programs and data from being
modified by this running process.
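As a hedged sketch (not from the source), the limit check and dynamic relocation performed by the MMU can be modeled as follows, using the register values from the example above:

RELOCATION = 100040  # relocation register: smallest physical address of the partition
LIMIT = 74600        # limit register: range of legal logical addresses

def translate(logical: int) -> int:
    """Check a CPU-generated logical address and relocate it dynamically."""
    if logical >= LIMIT:            # every address is checked against the limit
        raise MemoryError("trap: addressing error, logical address out of range")
    return logical + RELOCATION     # the MMU adds the relocation register

print(translate(0))      # 100040: start of the partition
print(translate(74599))  # 174639: highest legal physical address

A logical address of 74600 or more would raise the error, modeling the hardware trap described above.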
Memory Allocation
Eventually, as you will see, memory contains a set of holes of various sizes. As
processes enter the system, they are put into an input queue. The operating system
takes into account the memory requirements of each process and the amount of
available memory space in determining which processes are allocated memory. When a
process is allocated space, it is loaded into memory, and it can then compete for CPU
time. When a process terminates, it releases its memory, which the operating system
may then fill with another process from the input queue.
This procedure is a particular instance of the general dynamic storage allocation
problem, which concerns how to satisfy a request of size n from a list of free holes.
There are many solutions to this problem.
Allocation Policies
The processes are allocated to the partitions based on the allocation
policy of the system. The allocation policies are:
• First Fit
• Best Fit
• Worst Fit
In first fit policy, the memory manager will choose the first available partition that can
accommodate the process even though its size is more than that of the process.
In worst fit policy, the memory manager will choose the largest available partition that
can accommodate the process.
In best-fit policy, the memory manager will choose the partition that is just big enough
to accommodate the process.
For example, suppose (as in a partition diagram not reproduced here) that free partitions 1 and 4 are available, with partition 1 the larger of the two, and a new process of size 50K arrives. First fit and worst fit will allocate partition 1, while best fit will allocate partition 4.
Advantage: There is no internal fragmentation.
Disadvantage: Management is difficult, as memory becomes heavily fragmented over time, leading to external fragmentation.
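The three policies can be sketched as follows (a minimal illustration with invented partition sizes, not from the source):

def first_fit(free, size):
    """Index of the first free partition large enough, or None."""
    for i, hole in enumerate(free):
        if hole >= size:
            return i
    return None

def best_fit(free, size):
    """Index of the smallest free partition that still fits, or None."""
    fits = [(hole, i) for i, hole in enumerate(free) if hole >= size]
    return min(fits)[1] if fits else None

def worst_fit(free, size):
    """Index of the largest free partition that fits, or None."""
    fits = [(hole, i) for i, hole in enumerate(free) if hole >= size]
    return max(fits)[1] if fits else None

free = [200, 80, 120, 60]   # hypothetical free-hole sizes in KB
print(first_fit(free, 50))  # 0: the first hole big enough
print(best_fit(free, 50))   # 3: the 60K hole, the tightest fit
print(worst_fit(free, 50))  # 0: the 200K hole, the largest one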
Recall the single contiguous memory management scheme described above, in which memory holds the operating system and a single user process that runs to completion.
Advantages:
• The starting physical address of the program is known at compile time.
• Executable machine code has absolute addresses only; they need not be changed or translated at execution time.
• Fast access time, as there is no need for address translation.
• Does not have large wasted memory.
• Time complexity is small.
Disadvantages:
• It does not support multiprogramming, and hence there is no concept of sharing.
FRAGMENTATION
As processes are loaded into and removed from memory, the free memory space is broken into little pieces. Other processes cannot be loaded into these pieces because of their small size, and hence they remain unused. This problem is known as fragmentation.
Both the first-fit and best-fit strategies for memory allocation suffer from external fragmentation. As processes are loaded and removed from memory, the free memory space is broken into little pieces. External fragmentation exists when there is enough total memory space to satisfy a request but the available spaces are not contiguous: storage is fragmented into a large number of small holes. This fragmentation problem can be severe. Wasting of memory between partitions, due to scattering of free space into a number of discontiguous areas, is called external fragmentation.
When partitioning is static, memory is wasted in each partition where a process of smaller size than the partition is loaded. Wasting of memory within a partition, due to a difference between the size of the partition and the size of the process within it, is called internal fragmentation.
One solution to these problems is compaction. It is possible to combine all the
holes (free spaces) into a large block by pushing all the processes downward as far as
possible. Compaction is possible only if relocation is dynamic, and is done at execution
time.
Compaction is usually avoided because it consumes a lot of CPU time. When it is performed, it is typically on large machines such as mainframes or supercomputers, which have special hardware support for the task.
NON-CONTIGUOUS ALLOCATION
Non-contiguous allocation means that the available memory given to a process need not be contiguous but may be distributed. This scheme has the benefit of minimizing external fragmentation. The logical address space of the process is allowed to be non-contiguous, thus allowing a process to be allocated physical memory wherever the latter is available. The common schemes are:
1. Paging
2. Segmentation
3. Segmentation with paging
1. PAGING
A non-contiguous policy with fixed-size partitions is called paging. Paging is a scheme whereby physical memory is broken into fixed-size blocks called frames, and logical memory is broken into blocks of the same size called pages. When a process is to be executed, its pages are loaded from the backing store into any available memory frames.
Every process has a separate page table. The entries in the page table are the base addresses of the pages in physical memory. Each entry either holds an invalid marker, which means the page is not in main memory, or gives the corresponding frame number. When the frame number is combined with the offset d, we get the corresponding physical address. The page table is generally too large to be accommodated inside the PCB; therefore, the PCB contains a register value, the PTBR (page table base register), which points to the page table.
Every address generated by the CPU is divided into two parts: a page number (p) and a page offset (d). The page number is used as an index into the page table, which contains the base address of each page in physical memory. This base address is combined with d to get the physical memory address, which is then put into the MAR.
Address translation
Logical address = (page number, page offset)
Physical address = (frame number, page offset)
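A hedged sketch of this translation, assuming a hypothetical 1K page size and an invented page table:

PAGE_SIZE = 1024                 # hypothetical page size
page_table = {0: 5, 1: 2, 2: 7}  # page number -> frame number (invented values)

def translate(logical: int) -> int:
    """Split a logical address into (p, d) and map p through the page table."""
    p = logical // PAGE_SIZE     # page number: high-order part of the address
    d = logical % PAGE_SIZE      # page offset: low-order part of the address
    if p not in page_table:
        raise MemoryError("page fault: page %d not in main memory" % p)
    return page_table[p] * PAGE_SIZE + d  # concatenate frame number and offset

print(translate(1030))  # page 1, offset 6 -> frame 2 -> 2*1024 + 6 = 2054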
Advantages and disadvantages of paging
Paging eliminates external fragmentation, since any free frame can be allocated to any page that needs it. Its main costs are internal fragmentation (the last page of a process rarely fills a whole frame) and the memory and time overhead of maintaining page tables.
2. SEGMENTATION
Segmentation is a non-contiguous policy with variable-sized partitions: the logical address space is divided into segments corresponding to logical units of the program (such as code, data, and stack).
Advantages of Segmentation
• No internal fragmentation.
• The average segment size is larger than the average page size.
• Less overhead.
• It is easier to relocate segments than an entire address space.
• The segment table is of smaller size as compared to the page table in paging.
Disadvantages
• It can suffer from external fragmentation.
• It is difficult to allocate contiguous memory to variable-sized partitions.
• Memory management algorithms are costly.
3. SEGMENTATION WITH PAGING
In segmented paging, each segment is divided into pages, so a logical address has three parts:
• Segment number: specifies the segment from which the CPU wants to read the data.
• Page number: specifies the page of that segment from which the CPU wants to read the data.
• Page offset: specifies the word on that page that the CPU wants to read.
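As a hedged illustration of the three-part address (field widths invented for this sketch):

# Invented field widths: 8-bit segment number, 8-bit page number, 10-bit offset.
OFFSET_BITS, PAGE_BITS = 10, 8

def split(logical: int):
    """Decompose a logical address into (segment, page, offset) fields."""
    offset = logical & ((1 << OFFSET_BITS) - 1)
    page = (logical >> OFFSET_BITS) & ((1 << PAGE_BITS) - 1)
    segment = logical >> (OFFSET_BITS + PAGE_BITS)
    return segment, page, offset

address = (2 << 18) | (5 << 10) | 100  # segment 2, page 5, offset 100
print(split(address))                  # (2, 5, 100)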
Advantages-
The advantages of segmented paging are-
• The segment table contains only one entry corresponding to each segment.
• It reduces memory usage.
• The size of each page table is limited by the segment size.
• It solves the problem of external fragmentation.
Disadvantages-
The disadvantages of segmented paging are-
• Internal fragmentation can still occur within the pages of a segment.
• The extra level of translation increases the memory-access overhead.
VIRTUAL MEMORY
Virtual memory is a technique that allows the execution of processes that are not completely in main memory. It can be implemented through:
• Demand paging
• Demand segmentation
Demand Paging
Demand paging is similar to paging with swapping. In demand paging, a page of the process is loaded into main memory only when it is demanded. In pure demand paging, main memory is initially treated as a set of free frames: we start executing the process with none of its pages in main memory, and each page is loaded only when it is demanded, i.e., a page is never brought in until it is required.
Page Faults
The logical address may be in the range of valid addresses while the corresponding page is not currently present in memory but rather is stored on disk. The operating system must bring the page into memory before the process can continue to execute. This condition is called a page fault.
The simplest page-replacement algorithm is a FIFO (first-in, first-out) algorithm: the page that has been in memory the longest is replaced first. To illustrate the problems that are possible with a FIFO replacement algorithm, we consider the reference string:
7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1
For our example reference string, our three frames are initially empty. The first three references (7, 0, 1) cause page faults and are brought into these empty frames. The next reference (2) replaces page 7, because page 7 was brought in first. Since 0 is the next reference and 0 is already in memory, we have no fault for this reference; a 'HIT' (H) means the referenced page is already in memory. The first reference to 3 results in replacement of page 0, since page 0 is now first in line. Because of this replacement, the next reference, to 0, will fault, and page 1 is then replaced by page 0. The FIFO page-replacement algorithm is easy to understand and program; however, its performance is not always good.
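The behavior above can be checked with a short simulation (an illustrative sketch, not from the source):

from collections import deque

def fifo_faults(refs, nframes):
    """Count page faults under FIFO replacement with nframes frames."""
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:             # page fault
            faults += 1
            if len(frames) == nframes:     # memory full: evict the oldest page
                frames.discard(queue.popleft())
            frames.add(page)
            queue.append(page)
    return faults

refs = [7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1]
faults = fifo_faults(refs, 3)
print(faults, len(refs) - faults)  # 15 faults, 5 hits: hit ratio 5/20 = 25%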
Generally, increasing the number of frames allocated to a process's virtual memory speeds up its execution, as fewer page faults occur. Sometimes the reverse happens: more page faults occur when more frames are allocated to the process. This most unexpected result is termed Belady's anomaly.
Belady’s anomaly is the name given to the phenomenon where increasing the number
of page frames results in an increase in the number of page faults for a given memory
access pattern.
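Continuing the fifo_faults sketch above, the classic reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 exhibits the anomaly:

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))  # 9 page faults with 3 frames
print(fifo_faults(refs, 4))  # 10 page faults with 4 frames: more frames, more faults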
Performance: Hit ratio = 5/20 = 25%
Advantages:
• Easy to understand and implement.
Disadvantages:
• Its performance is not always good.
One result of the discovery of Belady's anomaly was the search for an optimal page-replacement algorithm: the algorithm that has the lowest page-fault rate of all algorithms and will never suffer from Belady's anomaly. Such an algorithm does exist and has been called OPT or MIN: replace the page that will not be used for the longest period of time. Use of this page-replacement algorithm guarantees the lowest possible page-fault rate for a fixed number of frames.
Consider the reference string.
7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1
The first three references (7, 0, 1) cause page faults and get loaded into three empty frames. The reference to page 2 replaces page 7, because 7 will not be used until reference 18, whereas page 0 will be used at reference 5 and page 1 at reference 14. The reference to page 3 replaces page 1, as page 1 would be the last of the three pages in memory to be referenced again.
Performance: Hit ratio = 11/20 = 55%
Advantages:
• Has the lowest page-fault rate for a fixed number of frames.
Disadvantages:
• Difficult to implement, because it requires future knowledge of the reference string.
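OPT can be simulated by scanning the remainder of the reference string at each replacement (a hedged sketch, not from the source):

def opt_faults(refs, nframes):
    """Count page faults under the optimal (OPT/MIN) replacement policy."""
    frames, faults = [], 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) < nframes:
            frames.append(page)
        else:
            # Evict the page whose next use is farthest in the future
            # (or that is never used again).
            def next_use(p):
                future = refs[i + 1:]
                return future.index(p) if p in future else float("inf")
            frames[frames.index(max(frames, key=next_use))] = page
    return faults

refs = [7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1]
faults = opt_faults(refs, 3)
print(faults, len(refs) - faults)  # 9 faults, 11 hits: hit ratio 11/20 = 55%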
LRU (least recently used) replacement associates with each page the time of its last use and replaces the page that has not been used for the longest period of time. Consider the same reference string:
7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1
When the reference to page 4 occurs, the LRU algorithm sees that, of the three frames in memory, page 2 was used least recently. The most recently used page is page 0, and just before that, page 3 was used. Thus LRU replaces page 2, not knowing that page 2 is about to be used again. When page 2 is referenced again, LRU replaces page 3, since it is now the least recently used of the pages in memory.
Performance: Hit ratio = 8/20 = 40%
Advantages:
• A reasonable approximation of the optimal algorithm.
Disadvantages:
• Requires substantial hardware assistance.
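LRU can be sketched with an ordered structure that tracks recency (illustrative only, not from the source):

from collections import OrderedDict

def lru_faults(refs, nframes):
    """Count page faults under LRU, using an OrderedDict as a recency list."""
    frames, faults = OrderedDict(), 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)        # mark as most recently used
        else:
            faults += 1
            if len(frames) == nframes:
                frames.popitem(last=False)  # evict the least recently used page
            frames[page] = True
    return faults

refs = [7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1]
faults = lru_faults(refs, 3)
print(faults, len(refs) - faults)  # 12 faults, 8 hits: hit ratio 8/20 = 40%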
THRASHING
Over-allocation of memory can lead to a serious performance problem known as thrashing. Thrashing occurs when all of the pages that are memory resident are high-demand pages that will be referenced in the near future. Thus, when a page fault occurs, the page that is removed from memory will soon give rise to a new fault, which in turn removes a page that will soon give rise to a new fault. In a system that is thrashing, a high percentage of the system's resources is devoted to paging, and overall CPU utilization and throughput drop dramatically.
Security
Authentication
User attribute (fingerprint/eye retina pattern/signature) − The user needs to present his/her attribute via a designated input device used by the operating system to log into the system.
Random numbers − Users are provided cards having numbers printed along with corresponding alphabets. The system asks for the numbers corresponding to a few randomly chosen alphabets.
Secret key − Users are provided a hardware device that can create a secret ID mapped to the user ID. The system asks for this secret ID, which is to be generated afresh every time prior to login.
Program Threats
The operating system's processes and kernel perform their designated tasks as instructed. If a user program makes these processes perform malicious tasks, this is known as a program threat. A common example of a program threat is a program installed on a computer that can store and send user credentials via the network to some hacker. Following is a list of some well-known program threats.
Trojan horse − Such a program traps user login credentials and stores them to send to a malicious user, who can later log in to the computer and access system resources.
System Threats
System threats refer to the misuse of system services and network connections to put the user in trouble. System threats can be used to launch program threats across a complete network; this is called a program attack. System threats create an environment in which operating system resources and user files are misused. Following is a list of some well-known system threats.