Coa Unit-4

The main memory, also known as random access memory (RAM), is a type of volatile memory that can be written to and accessed randomly. There are two main types of RAM - static RAM (SRAM) and dynamic RAM (DRAM). SRAM retains data as long as power is supplied using flip-flops, while DRAM uses capacitors and must be regularly refreshed to prevent data loss. DRAM is cheaper and denser than SRAM, making it the predominant type used as main memory in computers.

What is the Main Memory?

The main memory is the fundamental storage unit in a computer system. It is a
comparatively large and fast memory that stores programs and data during
computer operations. The technology that makes the main memory work is based on
semiconductor integrated circuits.
RAM is used as the main memory. Integrated circuit Random Access Memory (RAM)
chips operate in two possible modes, as follows −

 Static − It consists of internal flip-flops, which store the binary information. The
stored data remains intact as long as power is supplied to the unit. Static
RAM is simple to use and has shorter read and write cycles.
 Dynamic − It stores the binary data in the form of electric charges applied to
capacitors. The capacitors are provided inside the chip by Metal
Oxide Semiconductor (MOS) transistors. The charge stored on the capacitors
tends to discharge with time, so the capacitors must be periodically
recharged by refreshing the dynamic memory.

Random Access Memory


The term Random Access Memory or RAM is typically used to refer to memory that is
easily read from and written to by the microprocessor. For a memory to be called
random access, it should be possible to access any address at any time. This
differentiates RAM from storage devices such as tapes or hard drives where the data is
accessed sequentially.
RAM is the main memory of a computer. Its objective is to store data and applications
that are currently in use. The operating system controls the usage of this memory. It
determines when items are to be loaded into RAM, where they are to be located in
RAM, and when they need to be removed from RAM.

Read-Only Memory
In each computer system, there should be a segment of memory that is fixed and
unaffected by power failure. This type of memory is known as Read-Only Memory or
ROM.

SRAM
RAMs that are made up of circuits that can preserve the information as long as power
is supplied are referred to as Static Random Access Memories (SRAM). Flip-flops form
the basic memory elements in an SRAM device. An SRAM consists of an array of flip-
flops, one for each bit. Since a large number of flip-flops is needed to provide
higher-capacity memory, simpler flip-flop circuits built from BJT or MOS transistors
are used for SRAM.

DRAM
SRAMs are faster but their cost is high because their cells require many transistors.
RAMs can be obtained at a lower cost if simpler cells are used. A MOS storage cell
based on capacitors can be used to replace the SRAM cells. Such a storage cell
cannot preserve the charge (that is, data) indefinitely and must be recharged
periodically. Therefore, these cells are called dynamic storage cells. RAMs using these
cells are referred to as Dynamic RAMs or simply DRAMs.

What is RAM?
RAM, which stands for Random Access Memory, is a hardware device generally located
on the motherboard of a computer and acts as the internal memory of the CPU. It
allows the CPU to store data, programs, and program results when the computer is
switched on. It is the read-write memory of a computer, which means information can
be written to it as well as read from it.

RAM is a volatile memory, which means it does not store data or instructions
permanently. When you switch on or reboot the computer, data and instructions from
the hard disk are loaded into the RAM; likewise, when you open a program, the
operating system (OS) and the program are loaded into RAM, generally from an HDD
or SSD. The CPU utilizes this data to perform the required tasks. As soon as you shut
down the computer, the RAM loses the data. So, the data remains in the RAM as long
as the computer is on and is lost when the computer is turned off. The benefit of
loading data into RAM is that reading data from the RAM is much faster than reading
from the hard drive.

A computer's performance mainly depends on the size or storage capacity of the RAM.
If it does not have sufficient RAM (random access memory) to run the OS and software
programs, it will result in slower performance. So, the more RAM a computer has, the
faster it will work. Information stored in RAM is accessed randomly, not in a sequence as
on a CD or hard drive. So, its access time is much faster.

Function of RAM
RAM has no potential for storing permanent data due to its volatility. A hard drive can
be compared to a person's long-term memory and RAM to their short-term memory.
Short-term memory can only hold a limited number of facts in memory at any given
time; however, it concentrates on immediate tasks. Facts kept in the brain's long-term
memory can be used to replenish short-term memory when it becomes full.

How does RAM work?


RAM is much like a collection of boxes, where each box can store either a 0 or a 1. You
can find the specific address for each box by numbering across the rows and down the
columns. An array is a collection of RAM boxes, and a cell is a single RAM box in an
array.

The RAM controller sends the column and row address down a thin electrical line
etched into the chip in order to locate a particular cell. In a RAM array, each row and
column has its own address line. Any data read flows back on a separate data line.
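The row-and-column addressing described above can be sketched in software (a hypothetical 4x4 bit array; in real hardware the controller, address lines and data lines are physical circuitry, not code):

```python
# A toy model of a RAM array: each cell at (row, column) stores one bit.
ROWS, COLS = 4, 4
array = [[0] * COLS for _ in range(ROWS)]

def write_bit(row, col, bit):
    # The controller drives the row and column address lines to select one cell.
    array[row][col] = bit

def read_bit(row, col):
    # The selected cell's value flows back on a data line.
    return array[row][col]

write_bit(2, 3, 1)
print(read_bit(2, 3))  # -> 1
```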

RAM is contained in microchips and is physically small. Additionally, it has a limited


storage capacity for holding data. A typical laptop computer may have 8 GB of RAM,
whereas a hard disk can store 10 terabytes.

RAM access time is measured in nanoseconds.

How much RAM do you need?


What the user is doing on the system determines how much RAM is required. For
example, a system used for editing videos should have at least 16 GB of RAM, and
more is preferred. Also, Adobe advises a system should have at least 3 GB of RAM in
order to run Photoshop CC on a Mac for photo editing. However, even 8 GB of RAM
can slow things down if the user is simultaneously running other apps.

Types of RAM:
Integrated RAM chips can be of two types:
1. Static RAM (SRAM):
2. Dynamic RAM (DRAM):

Both types of RAM are volatile, as both lose their content when the power is turned off.

1) Static RAM:

Static RAM (SRAM) is a type of random access memory that retains its state for data
bits, or holds data, as long as it receives power. It is made up of memory cells and is
called static RAM because it does not need to be refreshed on a regular basis: its cells
do not suffer the charge leakage that dynamic RAM cells do. So, it is faster than
DRAM.

It has a special arrangement of transistors that makes a flip-flop, a type of memory cell.
One memory cell stores one bit of data. Most of the modern SRAM memory cells are
made of six CMOS transistors, but lack capacitors. The access time in SRAM chips can be
as low as 10 nanoseconds. Whereas, the access time in DRAM usually remains above 50
nanoseconds.

The drawback with Static RAM is that its memory cells occupy more space on a chip
than the DRAM memory cells for the same amount of storage space (memory) as it has
more parts than a DRAM. So, it offers less memory per chip.

2) Dynamic RAM:
Dynamic RAM (DRAM) is also made up of memory cells. It is an integrated circuit (IC)
made of millions of transistors and capacitors that are extremely small in size. Each
transistor is paired with a capacitor to create a very compact memory cell, so that
millions of them can fit on a single memory chip. So, a memory cell of a DRAM has one
transistor and one capacitor, and each cell represents or stores a single bit of data in
its capacitor within an integrated circuit.

The capacitor holds this bit of information or data, either as 0 or as 1. The transistor,
which is also present in the cell, acts as a switch that allows the electric circuit on the
memory chip to read the capacitor and change its state.

The access time in DRAM is around 60 nanoseconds.
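The refresh requirement described above can be illustrated with a toy model of a single DRAM cell; all the numbers below (charge levels, leak rate) are illustrative assumptions, not real device parameters:

```python
# Toy model of one DRAM cell: the capacitor's charge leaks over time and
# must be refreshed before it drops below the level that reads as logic 1.
FULL, THRESHOLD, LEAK = 1.0, 0.5, 0.1   # illustrative values only

def decay(charge, ticks):
    # Charge leaks away linearly in this simplified model.
    return max(0.0, charge - LEAK * ticks)

def read(charge):
    # A charge above the threshold reads as 1, otherwise as 0.
    return 1 if charge > THRESHOLD else 0

charge = FULL                # cell written with a 1
charge = decay(charge, 3)
print(read(charge))          # still readable as 1
charge = FULL                # refresh: rewrite the stored value
charge = decay(charge, 7)
print(read(charge))          # without another refresh, the bit is lost -> 0
```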

Difference between Static RAM and Dynamic RAM:

SRAM: It is a static memory as it does not need to be refreshed repeatedly.
DRAM: It is a dynamic memory as it needs to be refreshed continuously or it will lose the data.

SRAM: Its memory cell is made of 6 transistors. So its cells occupy more space on a chip and offer less storage capacity (memory) than a DRAM of the same physical size.
DRAM: Its memory cell is made of one transistor and one capacitor. So, its cells occupy less space on a chip and provide more memory than an SRAM of the same physical size.

SRAM: It is more expensive than DRAM and is located on processors or between a processor and main memory.
DRAM: It is less expensive than SRAM and is mostly located on the motherboard.

SRAM: It has a lower access time, e.g. 10 nanoseconds. So, it is faster than DRAM.
DRAM: It has a higher access time, e.g. more than 50 nanoseconds. So, it is slower than SRAM.

SRAM: It stores information in bistable latching circuitry. It requires a regular power supply, so it consumes more power.
DRAM: The information, each bit of data, is stored in a separate capacitor within an integrated circuit, so it consumes less power.

SRAM: It is faster than DRAM as its memory cells don't need to be refreshed and are always available. So, it is mostly used in registers in the CPU and cache memory of various devices.
DRAM: It is not as fast as SRAM, as its memory cells are refreshed continuously. But still, it is used as main memory on the motherboard because it is cheaper to manufacture and requires less space.

SRAM: Its cycle time is shorter as it does not need to pause between accesses and refreshes.
DRAM: Its cycle time is longer than the SRAM's cycle time.

SRAM: Examples: L2 and L3 cache in a CPU.
DRAM: Examples: DDR3, DDR4 in mobile phones, computers, etc.

SRAM: Size ranges from 1 MB to 16 MB.
DRAM: Size ranges from 1 GB to 3 GB in smartphones and 4 GB to 16 GB in laptops.

Keep in mind the 32-bit Windows versions


Finally, simply installing endless amounts of RAM in your computer will not make it
faster. Running a 64-bit version of Windows is required to use more than 4 GB of RAM
in your system; 32-bit versions can only use about 3.5 GB. If you are a user of the
32-bit edition of Windows 7, you must update to the 64-bit version to use 4 GB of
RAM or more.

However, keep in mind that if you have an old system and are installing a 64-bit
version of Windows on this PC, it may have a negative impact. Windows addresses are
then 64 bits long instead of 32 bits, which gives each application a bigger memory
footprint. The amount of RAM used by 64-bit Windows may increase by 20-50%
depending on the applications you use. Therefore, using a 64-bit version only makes
sense if your system has enough memory.

Get more RAM the easy way


Upgrading or manually cleaning up your RAM can be a hassle. Some tune-up utilities
offer a technology, often called Sleep Mode, that identifies and suspends resource-
hogging applications when they are not in use, which helps to improve the
performance of the system.

For example, you can download a tune-up program, such as Avast Cleanup, and look
for Background and Startup Programs. You will notice performance benefits as soon
as you put the programs you don't actively need to sleep.

Read Only Memory


ROM stands for Read Only Memory. It is memory from which we can only read but to
which we cannot write. This type of memory is non-volatile. The information is stored
permanently in such memories during manufacture. A ROM stores the instructions
that are required to start a computer. This operation is referred to as bootstrap. ROM
chips are used not only in computers but also in other electronic items like washing
machines and microwave ovens.

Let us now discuss the various types of ROMs and their characteristics.

MROM (Masked ROM)


The very first ROMs were hard-wired devices that contained a pre-programmed set of
data or instructions. These kinds of ROMs are known as masked ROMs, and they are
inexpensive.

PROM (Programmable Read Only Memory)


PROM is read-only memory that can be modified only once by a user. The user buys a
blank PROM and enters the desired contents using a PROM programmer. Inside the
PROM chip, there are small fuses which are burnt open during programming. It can be
programmed only once and is not erasable.

EPROM (Erasable and Programmable Read Only Memory)


EPROM can be erased by exposing it to ultra-violet light for a duration of up to 40
minutes. Usually, an EPROM eraser achieves this function. During programming, an
electrical charge is trapped in an insulated gate region. The charge is retained for more
than 10 years because the charge has no leakage path. For erasing this charge, ultra-
violet light is passed through a quartz crystal window (lid). This exposure to ultra-violet
light dissipates the charge. During normal use, the quartz lid is sealed with a sticker.

EEPROM (Electrically Erasable and Programmable Read


Only Memory)
EEPROM is programmed and erased electrically. It can be erased and reprogrammed
about ten thousand times. Both erasing and programming take about 4 to 10 ms
(millisecond). In EEPROM, any location can be selectively erased and programmed.
EEPROMs can be erased one byte at a time, rather than erasing the entire chip. Hence,
the process of reprogramming is flexible but slow.

Advantages of ROM
The advantages of ROM are as follows −

 Non-volatile in nature
 Cannot be accidentally changed
 Cheaper than RAMs
 Easy to test
 More reliable than RAMs
 Static and do not require refreshing
 Contents are always known and can be verified
Secondary Memory
You know that processor memory, also known as primary memory, is expensive as well
as limited. The faster primary memory is also volatile. If we need to store large
amounts of data or programs permanently, we need a cheaper and permanent memory.
Such memory is called secondary memory. Here we will discuss secondary memory
devices that can be used to store large amounts of data, audio, video and multimedia
files.
Characteristics of Secondary Memory
These are some characteristics of secondary memory, which distinguish it from primary
memory −

 It is non-volatile, i.e. it retains data when power is switched off
 It has large capacity, to the tune of terabytes
 It is cheaper as compared to primary memory
Depending on whether the secondary memory device is a fixed part of the computer or
not, there are two types of secondary memory – fixed and removable.
Let us look at some of the secondary memory devices available.

Hard Disk Drive


A hard disk drive is made up of a series of circular disks called platters arranged one
over the other around a spindle, with small gaps between them. Platters are made of
non-magnetic material like aluminium alloy and coated with 10-20 nm of magnetic
material.

The standard diameter of these platters is 3.5 inches for desktop drives (2.5 inches
for laptop drives), and they rotate at speeds varying from 4200 rpm (rotations per
minute) for personal computers to 15000 rpm for servers. Data is stored by
magnetizing or demagnetizing the magnetic coating. A read/write head on a moving
arm is used to read data from and write data to the disks. A typical modern HDD has a
capacity in terabytes (TB).

CD Drive
CD stands for Compact Disk. CDs are circular disks that use optical rays, usually
lasers, to read and write data. They are very cheap as you can get 700 MB of storage
space for less than a dollar. CDs are inserted in CD drives built into CPU cabinet. They
are portable as you can eject the drive, remove the CD and carry it with you. There are
three types of CDs −
 CD-ROM (Compact Disk – Read Only Memory) − The data on these CDs are
recorded by the manufacturer. Proprietary Software, audio or video are released
on CD-ROMs.
 CD-R (Compact Disk – Recordable) − Data can be written by the user once on
the CD-R. It cannot be deleted or modified later.
 CD-RW (Compact Disk – Rewritable) − Data can be written and deleted on
these optical disks again and again.

DVD Drive
DVD stands for Digital Video Display. DVD are optical devices that can store 15 times
the data held by CDs. They are usually used to store rich multimedia files that need high
storage capacity. DVDs also come in three varieties – read only, recordable and
rewritable.

Pen Drive
Pen drive is a portable memory device that uses solid state memory rather than
magnetic fields or lasers to record data. It uses a technology similar to RAM, except that
it is nonvolatile. It is also called USB drive, key drive or flash memory.

Blu Ray Disk


Blu Ray Disk (BD) is an optical storage medium used to store high definition (HD)
video and other multimedia files. BD uses a shorter-wavelength laser as compared to
CD/DVD. This enables the writing head to focus more tightly on the disk and hence
pack in more data. BDs can store up to 128 GB of data.
What are Optical Disks?
The optical disk storage system includes a rotating disk coated with a thin layer of
metal that provides a reflective surface, and a laser beam, which is used as a
read/write head for recording information onto the disk. Unlike a magnetic disk, the
optical layer consists of a single long track in the form of a spiral. The spiral shape
of the track makes the optical disk suitable for reading huge blocks of sequential
information, such as music.

Types of Optical Disks


There are two types of optical disks which are as follows −
 Compact Disk (CD) − The terminology CD used for audio stands for Compact
Disk. Similar terminology is used for digital computers. The disks used for
data storage are known as Compact Disk Read-Only Memory (CD-ROM). A
compact disk is a round disk of clear polycarbonate plastic, coated with a very
thin reflective layer of aluminium. During the manufacturing process of this
4.7-inch (120 mm) disk, pits are created on the surface of the disk. The portions
between these pits are called lands. A typical CD can store up to 700 MB of data.
Such high storage capacity is only possible due to a very high data density.

Types of Compact Disks


There are three types of CDs, along with the closely related DVD, as follows −
 WORM disks − WORM means write once, read many. The audio CDs that you
purchase from the market are WORM disks: they are recorded by the company
and can be played many times.
 CD-Recordable − The recordable disks can be written only once. To produce
multiple copies of a CD or CD-ROM, a CD-recordable drive is attached to a
computer, which allows us to create our own CDs. To record data on such a disk,
the laser forms bumps in the dye layer.
 CD-Rewritable − These are the compact disks which can be recorded, erased,
and then re-recorded. These disks are used to store files that are frequently
modified. These disks also provide a good means for the transportation of data.
Once the data is transferred to the destination, the contents are deleted from the
disk to make it empty again.
 DVD Disks − DVD is a more recent product. DVD-ROM is a high-density
medium capable of saving a full-length movie on an individual disk. Its size
is equal to the size of a CD. DVD-ROM achieves such high storage capacities by
using both sides of the disk, by using unique data-compression technologies, and
by using extremely small tracks for saving information.
What is Cache Memory in Computer Architecture?
The data or contents of the main memory that are used frequently by the CPU are
saved in the cache memory so that the processor can access that information in a
shorter time. Whenever the CPU needs to access memory, it first checks the cache
memory. If the data is not found in the cache memory, the CPU refers to the main
memory.
Cache memory is located between the CPU and the main memory. The block diagram
for a cache memory can be represented as −

Memory access can be further sped up by placing an even smaller SRAM between the
cache and the processor, thereby creating two levels of cache. This new cache is
usually contained inside the processor. As the new cache is put inside the processor,
the wires connecting the two become very short, and the interface circuitry becomes
more closely integrated with that of the processor.
These two conditions together with the smaller decoder circuit facilitate faster data
access. When two caches are present, the cache within the processor is referred to as
a level 1 or L1 cache. The cache between the L1 cache and memory is referred to as a
level 2 or L2 cache.
The figure shows the placement of L1 and L2 cache in memory.

The split cache, another cache organization, is shown in the figure. Split cache requires
two caches. In this case, a processor uses one cache to store code/instructions and a
second cache to store data.
This cache organization is typically used to support an advanced type of processor
architecture such as pipelining. Here, the mechanisms used by the processor to handle
the code are so distinct from those used for data that it does not make sense to put
both types of information into the same cache.

The success of caches depends upon the principle of locality. The principle proposes
that when one data item is loaded into a cache, the items close to it in memory should
be loaded too.
If a program enters a loop, most of the instructions that are part of that loop are
executed multiple times. Therefore, when the first instruction of a loop is being loaded
into the cache, its neighbouring instructions are loaded along with it to save time. In
this way, the processor does not have to wait for the main memory to provide
subsequent instructions.
As a result of this, caches are organized in such a way that when one piece of data or
code is loaded, the block of neighbouring items is loaded too. Each block loaded into
the cache is identified with a number known as a tag.
This tag can be used to find the original addresses of the data in the main memory.
Therefore, when the processor is in search of a piece of data or code (hereafter
referred to as a word), it only needs to check the tags to see if the word is contained in
the cache.

Cache Mapping:
There are three different types of mapping used for the purpose of cache memory which
are as follows:

o Direct mapping,
o Associative mapping
o Set-Associative mapping
Direct Mapping -
In direct mapping, the cache consists of normal high-speed random-access memory.
Each location in the cache holds data at a specific cache address, and this address is
given by the lower significant bits of the main memory address. This enables the block
to be selected directly from the lower significant bits of the memory address. The
remaining higher significant bits of the address are stored in the cache with the data
to complete the identification of the cached data.

The tag consists of the higher significant bits of the address, and these bits are stored
with the data in the cache. The index consists of the lower significant bits of the
address. Whenever the memory is referenced, the following sequence of events occurs:

1. The index is first used to access a word in the cache.


2. The tag stored in the accessed word is read.
3. This tag is then compared with the tag in the address.
4. If the two tags are the same, this indicates a cache hit, and the required data is read
from the cache word.
5. If the two tags are not the same, this indicates a cache miss. Then a reference is made
to the main memory to find the data.

For a memory read operation, the word is then transferred into the cache. It is possible
to pass the information to the cache and the processor simultaneously.

In direct mapped cache, there can also be a line consisting of more than one word as
shown in the following figure

Set Associative Mapping -


In set associative mapping, a cache is divided into sets of blocks. The number of
blocks in a set is known as the associativity or set size. Each block in each set has a
stored tag. This tag, together with the index, completely identifies the block.

Thus, set associative mapping allows a limited number of blocks with the same index
and different tags.

An example of four way set associative cache having four blocks in each set is shown in
the following figure
In this type of cache, the following steps are used to access the data from a cache:

1. The index of the address from the processor is used to access the set.
2. Then the comparators are used to compare all tags of the selected set with the incoming
tag.
3. If a match is found, the corresponding location is accessed.
4. If no match is found, an access is made to the main memory.

The tag address bits are always chosen to be the most significant bits of the full address,
the block address bits are the next significant bits and the word/byte address bits are
the least significant bits. The number of comparators required in the set associative
cache is given by the number of blocks in a set. The set can be selected quickly and all
the blocks of the set can be read out simultaneously with the tags before waiting for the
tag comparisons to be made. After a tag has been identified, the corresponding block
can be selected.
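The set-associative steps above can be sketched in the same style (a hypothetical 2-way cache with 4 sets; the sizes and the FIFO eviction policy are illustrative assumptions):

```python
# Sketch of a 2-way set-associative cache lookup. All tags in the selected
# set are compared against the incoming tag (in parallel, in real hardware).
NUM_SETS, WAYS = 4, 2                  # hypothetical geometry
sets = [[] for _ in range(NUM_SETS)]   # each set: list of (tag, data) blocks

def access(address, memory):
    index = address % NUM_SETS         # index bits select the set
    tag = address // NUM_SETS
    for stored_tag, data in sets[index]:
        if stored_tag == tag:          # comparator match
            return data, "hit"
    data = memory[address]
    if len(sets[index]) == WAYS:       # set full: evict the oldest block (FIFO)
        sets[index].pop(0)
    sets[index].append((tag, data))
    return data, "miss"

memory = {addr: addr * 10 for addr in range(64)}   # toy main memory
print(access(1, memory))   # -> (10, 'miss')
print(access(5, memory))   # -> (50, 'miss'): same index as 1, but both fit
print(access(1, memory))   # -> (10, 'hit'): 1 was not evicted
```

Unlike the direct-mapped sketch, addresses 1 and 5 share an index yet coexist in the cache, because each set holds two blocks.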

Fully associative mapping


In the fully associative type of cache memory, each location in the cache stores both
the memory address as well as the data.

Whenever data is requested, the incoming memory address is simultaneously
compared with all stored addresses using the internal logic of the associative memory.

If a match is found, the corresponding data is read out. Otherwise, the main memory
is accessed if the address is not found in the cache.

This method is known as the fully associative mapping approach because cached data
is related to the main memory by storing both the memory address and the data in the
cache. In all organisations, data can be more than one word, as shown in the following
figure.

The main advantage of a fully associative mapped cache is that it provides the
greatest flexibility in holding combinations of blocks in the cache and minimises
conflicts for a given cache size.
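A fully associative lookup can be sketched with a dictionary standing in for the parallel hardware comparison (the capacity and the FIFO eviction policy are illustrative assumptions):

```python
# Sketch of a fully associative cache: the full memory address is stored
# alongside the data, and a lookup compares against all stored addresses
# at once (a dict models the associative search; hardware does it in parallel).
CAPACITY = 4                           # hypothetical number of cache entries
cache = {}                             # address -> data

def access(address, memory):
    if address in cache:
        return cache[address], "hit"
    if len(cache) == CAPACITY:
        oldest = next(iter(cache))     # evict the oldest entry (FIFO policy)
        del cache[oldest]
    cache[address] = memory[address]
    return cache[address], "miss"

memory = {addr: addr * 10 for addr in range(64)}   # toy main memory
print(access(3, memory))    # -> (30, 'miss')
print(access(11, memory))   # -> (110, 'miss'): no index conflicts exist here
print(access(3, memory))    # -> (30, 'hit')
```

Because any block can go in any entry, the conflict between addresses 3 and 11 seen in the direct-mapped sketch disappears.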
Page Replacement Algorithms in Operating Systems (OS)
Today we are going to learn about Page Replacement Algorithms in Operating Systems
(OS). Before knowing about Page Replacement Algorithms in Operating Systems let us
learn about Paging in Operating Systems and also a little about Virtual Memory.

Only after understanding the concept of Paging we will understand about Page
Replacement Algorithms.

Paging in Operating Systems (OS)


Paging is a storage mechanism. Paging is used to retrieve processes from secondary
memory into primary memory.

A process's logical memory is divided into small blocks called pages, and the main
memory is divided into blocks of the same size called frames. Each page of the process
that is retrieved into main memory is stored in one frame of memory.

It is very important that pages and frames are of equal size, which is essential for
mapping and complete utilization of memory.

Virtual Memory in Operating Systems (OS)


A storage method known as virtual memory gives the user the impression that their
main memory is quite large. This is accomplished by treating a portion of secondary
memory as if it were main memory.

By giving the user the impression that there is enough memory available to load the
process, this approach allows them to load programs larger than the available
primary memory.

The Operating System loads the various components of several processes into the
main memory, as opposed to loading a single large process there.

By doing this, the degree of multiprogramming is enhanced, which increases CPU
utilization.

Demand Paging
Demand Paging is a scheme that occurs in Virtual Memory. We know that the pages of
a process are stored in secondary memory. A page is brought into the main memory
when it is required. We do not know in advance when this requirement is going to
occur, so pages are brought into the main memory on demand, with Page Replacement
Algorithms deciding which resident pages to evict.

So, the process of bringing pages from secondary memory into main memory upon
demand is known as Demand Paging.

Virtual memory in Operating Systems has two important jobs. They are:

o Frame Allocation
o Page Replacement.

Frame Allocation in Virtual Memory


Demand paging is used to implement virtual memory, an essential component of
operating systems. A page-replacement mechanism and a frame allocation algorithm
must be created for demand paging. If you have numerous processes, frame allocation
techniques are utilized to determine how many frames to provide to each process.

A Physical Address is required by the Central Processing Unit (CPU) to access a frame,
and physical addressing provides the actual address of the frame created. For each
page, a frame must be created.

Frame Allocation Constraints


o The number of frames allocated cannot be greater than the total number of frames.
o Each process should be given a set minimum number of frames.
o When fewer frames are allocated, the page fault ratio increases and the process
execution becomes less efficient.
o There ought to be sufficient frames to accommodate all the pages that a single
instruction may refer to.

Frame Allocation Algorithms


There are three types of Frame Allocation Algorithms in Operating Systems. They are:

1) Equal Frame Allocation Algorithms

In this Frame Allocation Algorithm, we take the number of frames and the number of
processes at once. We divide the number of frames by the number of processes to get
the number of frames we must provide for each process.

This means that if we have 36 frames and 6 processes, each process is allocated 6
frames.

It is not very logical to assign equal frames to all processes in systems with processes
of different sizes. A lot of allocated but unused frames will eventually be wasted if
many frames are given to a small process.

2) Proportionate Frame Allocation Algorithms

Here, in this Frame Allocation Algorithms we take number of frames based on the
process size. For big process more number of frames is allocated. For small processes
less number of frames is allocated by the operating system.

The problem in the Proportionate Frame Allocation Algorithm is number of frames are
wasted in some rare cases.

The advantage in Proportionate Frame Allocation Algorithm is that instead of equally,


each operation divides the available frames according to its demands.
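The equal and proportionate schemes can be sketched side by side (the 36 frames match the example above; the process sizes are illustrative assumptions):

```python
# Equal allocation: every process gets the same share of the frames.
def equal_allocation(num_frames, num_processes):
    return [num_frames // num_processes] * num_processes

# Proportionate allocation: each process's share is scaled by its size.
def proportionate_allocation(num_frames, process_sizes):
    total = sum(process_sizes)
    return [num_frames * size // total for size in process_sizes]

print(equal_allocation(36, 6))                     # -> [6, 6, 6, 6, 6, 6]
print(proportionate_allocation(36, [10, 20, 60]))  # -> [4, 8, 24]
```

With the same 36 frames, proportionate allocation gives the largest process 24 frames instead of wasting frames on the small ones.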

3) Priority Frame Allocation Algorithms

Priority frame allocation distributes frames according to process priority. If a
process has a high priority and needs more frames, additional frames are given to
it; lower-priority processes receive fewer frames and are executed later, after the
high-priority processes.
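Priority allocation can be sketched the same way as proportionate allocation, with priority weights standing in for process sizes (the weights 1, 2 and 5 below are made up for illustration):

```python
def priority_allocation(total_frames, priorities):
    # Higher-priority processes receive proportionally more frames.
    total_priority = sum(priorities)
    return [p * total_frames // total_priority for p in priorities]

# Hypothetical priorities: 1 (low), 2 (medium), 5 (high) sharing 40 frames
print(priority_allocation(40, [1, 2, 5]))  # [5, 10, 25]
```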

Page Replacement Algorithms


There are three types of Page Replacement Algorithms. They are:

o First In First Out (FIFO) Page Replacement Algorithm


o Optimal Page Replacement Algorithm
o Least Recently Used (LRU) Page Replacement Algorithm

First in First out Page Replacement Algorithm


This is the first basic Page Replacement Algorithm. Page replacement depends on the
number of frames available: each referenced page is brought into a frame, and the
real problem starts once all the frames are filled. The fixed number of frames is
initially filled with the first pages referenced; this is achieved with the help of
Demand Paging.

After the frames are filled, the next page in the waiting queue tries to enter a
frame. If that page is already present in one of the allocated frames, no problem
occurs, because the page being searched for is already in memory.

If the page being searched for is found among the frames, this is known as a
Page Hit.

If the page being searched for is not found among the frames, this is known as a
Page Fault.

When a Page Fault occurs, the First In First Out Page Replacement Algorithm comes
into the picture.

The First In First Out (FIFO) Page Replacement Algorithm removes the page that was
loaded into a frame earliest. That is, the page that has been sitting in a frame
for the longest time is removed, and the new page waiting in the ready queue is
allowed to occupy the freed frame.

Let us understand this First In First Out Page Replacement Algorithm working with the
help of an example.

Example:

Consider the reference string 6, 1, 1, 2, 0, 3, 4, 6, 0, 2, 1, 2, 1, 2, 0, 3, 2, 1, 2, 0 for a
memory with three frames and calculate the number of page faults using the FIFO
(First In First Out) page replacement algorithm.
Points to Remember

Page Not Found - - - > Page Fault

Page Found - - - > Page Hit

Reference String:

Number of Page Hits = 8

Number of Page Faults = 12

The Ratio of Page Hit to the Page Fault = 8 : 12 - - - > 2 : 3 - - - > 0.66

The Page Hit Percentage = 8 *100 / 20 = 40%

The Page Fault Percentage = 100 - Page Hit Percentage = 100 - 40 = 60%
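Counting hits and faults by hand is error-prone, so here is a short simulation of FIFO replacement (function name ours) that reproduces the figures above — 8 hits and 12 faults — for the given reference string:

```python
from collections import deque

def fifo_page_faults(reference_string, num_frames):
    # Simulate FIFO page replacement; return (hits, faults).
    frames = deque()          # left end = page loaded longest ago
    hits = faults = 0
    for page in reference_string:
        if page in frames:
            hits += 1                    # Page Hit
        else:
            faults += 1                  # Page Fault
            if len(frames) == num_frames:
                frames.popleft()         # evict the oldest page
            frames.append(page)
    return hits, faults

refs = [6, 1, 1, 2, 0, 3, 4, 6, 0, 2, 1, 2, 1, 2, 0, 3, 2, 1, 2, 0]
print(fifo_page_faults(refs, 3))  # (8, 12)
```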

Explanation

First, fill the frames with the initial pages. Once the frames are filled, we need
to create space in the frames for the new page to occupy. With the First In First
Out Page Replacement Algorithm, we remove the page that is oldest among the pages
in the frames. Removing the oldest page frees a frame for the new page to occupy.

OPTIMAL Page Replacement Algorithm


This is the second basic Page Replacement Algorithm. As before, each referenced
page is brought into a frame with the help of Demand Paging, and the real problem
starts once all the frames are filled.

After the frames are filled, the next page in the waiting queue tries to enter a
frame. If that page is already present in one of the allocated frames, no problem
occurs.

If the page being searched for is found among the frames, this is known as a
Page Hit.

If the page being searched for is not found among the frames, this is known as a
Page Fault.

When a Page Fault occurs, the OPTIMAL Page Replacement Algorithm comes into the
picture.

The OPTIMAL Page Replacement Algorithm works on the following principle:

Replace the page that will not be used for the longest period of time in the future.

This means that after all the frames are filled, we look at the future references
of the pages currently in the frames and choose as the victim the page whose next
use lies farthest away (or that is never used again).

Example:

Suppose the upcoming references are:

0, 3, 4, 6, 0, 2, 1, 2, 1, 2, 0, 3, 2, 1, 2, 0

and pages 6, 1 and 2 are currently occupying the frames.

Now we need to bring 0 into a frame by removing one of the resident pages, so we
check which resident page is next used latest.


In the subsequence 0, 3, 4, 6, 0, 2, 1 we can see that, of the resident pages,
1 is the last one to be used again. So 0 can be placed in a frame by removing 1
from the frame.

Let us understand this OPTIMAL Page Replacement Algorithm working with the help of
an example.

Example:

Consider the reference string 6, 1, 1, 2, 0, 3, 4, 6, 0, 2, 1, 2, 1, 2, 0, 3, 2, 1, 4, 0 for a
memory with three frames and calculate the number of page faults using the OPTIMAL
page replacement algorithm.

Points to Remember

Page Not Found - - - > Page Fault

Page Found - - - > Page Hit

Reference String:

Number of Page Hits = 9

Number of Page Faults = 11

The Ratio of Page Hit to the Page Fault = 9 : 11 - - - > 0.82

The Page Hit Percentage = 9 * 100 / 20 = 45%

The Page Fault Percentage = 100 - Page Hit Percentage = 100 - 45 = 55%
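A simulation is a reliable way to check these counts. The sketch below (function name ours) implements the stated principle, evicting the resident page whose next use lies farthest in the future; for the reference string above it yields 9 hits and 11 faults:

```python
def optimal_page_faults(reference_string, num_frames):
    # Evict the resident page whose next use lies farthest in the future
    # (pages that are never used again are evicted first).
    frames = []
    hits = faults = 0
    for i, page in enumerate(reference_string):
        if page in frames:
            hits += 1
            continue
        faults += 1
        if len(frames) == num_frames:
            future = reference_string[i + 1:]
            distance = lambda p: future.index(p) if p in future else float('inf')
            frames.remove(max(frames, key=distance))
        frames.append(page)
    return hits, faults

refs = [6, 1, 1, 2, 0, 3, 4, 6, 0, 2, 1, 2, 1, 2, 0, 3, 2, 1, 4, 0]
print(optimal_page_faults(refs, 3))  # (9, 11)
```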
Explanation

First, fill the frames with the initial pages. Once the frames are filled, we need
to create space in the frames for the new page to occupy.

As long as empty frames remain, incoming pages simply fill them. The problem occurs
only when there is no free frame left; then, as stated above, we replace the page
that will not be used for the longest period of time in the future.

A question arises: what if a page sitting in a frame never appears again in the
reference string?

Suppose the upcoming references are:

0, 2, 4, 6, 0, 2, 1, 2, 1, 2, 0, 3, 2, 1, 2, 0

and pages 6, 1 and 5 are occupying the frames.

Here, page number 5 is not present in the reference string, but it is present in a
frame. Since page 5 will never be used again, we remove it when a replacement is
required, and another page can occupy that position.

Least Recently Used (LRU) Replacement Algorithm


This is the last basic Page Replacement Algorithm. As with the previous algorithms,
each referenced page is brought into a frame with the help of Demand Paging, and
the real problem starts once all the frames are filled.

After the frames are filled, the next page in the waiting queue tries to enter a
frame. If that page is already present in one of the allocated frames, no problem
occurs.

If the page being searched for is found among the frames, this is known as a
Page Hit.

If the page being searched for is not found among the frames, this is known as a
Page Fault.

When a Page Fault occurs, the Least Recently Used (LRU) Page Replacement Algorithm
comes into the picture.

The Least Recently Used (LRU) Page Replacement Algorithm works on the following
principle:

Replace the page that has not been used for the longest period of time in the past.

Example:

Suppose the reference string is:

6, 1, 1, 2, 0, 3, 4, 6, 0

After the references 6, 1, 1, 2, the pages numbered 6, 1 and 2 are occupying the
frames.

Now we need to allot a frame for the page numbered 0, so we look back into the past
to check which resident page can be replaced.

Page 6 is the one used least recently among the pages in the frames.

So, we replace 6 with the page numbered 0.

Let us understand this Least Recently Used (LRU) Page Replacement Algorithm working
with the help of an example.

Example:

Consider the reference string 6, 1, 1, 2, 0, 3, 4, 6, 0, 2, 1, 2, 1, 2, 0, 3, 2, 1, 2, 0 for a
memory with three frames and calculate the number of page faults using the Least
Recently Used (LRU) page replacement algorithm.

Points to Remember

Page Not Found - - - > Page Fault

Page Found - - - > Page Hit

Reference String:
Number of Page Hits = 7

Number of Page Faults = 13

The Ratio of Page Hit to the Page Fault = 7 : 13 - - - > 0.54

The Page Hit Percentage = 7 * 100 / 20 = 35%

The Page Fault Percentage = 100 - Page Hit Percentage = 100 - 35 = 65%
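The following sketch of LRU replacement (function name ours) reproduces the counts above — 7 hits and 13 faults — for the given reference string:

```python
def lru_page_faults(reference_string, num_frames):
    # Simulate LRU replacement; the list order tracks recency
    # (front = least recently used, back = most recently used).
    frames = []
    hits = faults = 0
    for page in reference_string:
        if page in frames:
            hits += 1
            frames.remove(page)   # refresh recency: move page to the back
        else:
            faults += 1
            if len(frames) == num_frames:
                frames.pop(0)     # evict the least recently used page
        frames.append(page)
    return hits, faults

refs = [6, 1, 1, 2, 0, 3, 4, 6, 0, 2, 1, 2, 1, 2, 0, 3, 2, 1, 2, 0]
print(lru_page_faults(refs, 3))  # (7, 13)
```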

Explanation

First, fill the frames with the initial pages. Once the frames are filled, we need
to create space in the frames for the new page to occupy.

As long as empty frames remain, incoming pages simply fill them. The problem occurs
when there is no free frame left; then we replace the page that has not been used
for the longest period of time in the past, that is, the page whose most recent use
lies farthest back.

Explain the performance of cache in computer architecture?

The main reason for including cache memory in a computer is to increase system
performance by decreasing the time required to access memory. The components of
cache performance are cache hits and cache misses.
Each time the CPU accesses memory, it checks the cache first. If the requested data
is in the cache, the CPU accesses the data in the cache instead of physical memory;
this is a cache hit. If the data is not in the cache, the CPU accesses the data
from main memory; this is a cache miss.
The average memory access time TM is the weighted average of the cache access time
TC and the access time for physical memory TP, where the weighting factor is the
hit ratio h. TM can be expressed as

TM = h TC + (1 − h) TP

Since TC is much less than TP, increasing the hit ratio reduces the average memory
access time. The table below shows TM for TC = 10 ns, TP = 60 ns, and various
values of h.
Hit ratios and average memory access times

h      TM
0.0    60 ns
0.1    55 ns
0.2    50 ns
0.3    45 ns
0.4    40 ns
0.5    35 ns
0.6    30 ns
0.7    25 ns
0.8    20 ns
0.9    15 ns
1.0    10 ns
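The formula and the table can be checked with a few lines of Python (function name ours; times in nanoseconds):

```python
def avg_access_time(hit_ratio, t_cache=10.0, t_phys=60.0):
    # TM = h*TC + (1 - h)*TP
    return hit_ratio * t_cache + (1 - hit_ratio) * t_phys

# Matches the 60, 35, 15 and 10 ns rows of the table above
for h in (0.0, 0.5, 0.9, 1.0):
    print(f"h = {h:.1f}  TM = {avg_access_time(h):.0f} ns")
```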
Consider a computer system that has an associative cache, a direct-mapped cache, or
a two-way set-associative cache of 8 bytes. The CPU accesses the following
locations in the order shown. The subscript on each value is the low-order 3 bits
of its address in physical memory.
A0 B0 C2 A0 D1 B0 E4 F5 A0 C2 D1 B0 G3 C2 H7 I6 A0 B0
We can determine the hit ratio and average memory access time for this access
pattern for each of the three cache configurations, assuming again that
TC = 10 ns and TP = 60 ns.
First, consider the associative cache. It is initially empty and uses a FIFO
replacement policy. The table below shows the contents of the cache as each value
is accessed. Seven of the 18 accesses are hits, yielding a hit ratio of
h = 7/18 = 0.389 and an average memory access time of TM = 40.56 ns.
Cache activity using the associative cache

Data    A B C A D B E F A C D B G C H I A B

Line 1  A A A A A A A A A A A A A A A I I I
Line 2  - B B B B B B B B B B B B B B B A A
Line 3  - - C C C C C C C C C C C C C C C B
Line 4  - - - - D D D D D D D D D D D D D D
Line 5  - - - - - - E E E E E E E E E E E E
Line 6  - - - - - - - F F F F F F F F F F F
Line 7  - - - - - - - - - - - - G G G G G G
Line 8  - - - - - - - - - - - - - - H H H H

Hit?    - - - √ - √ - - √ √ √ √ - √ - - - -

(At access 16, I replaces A, the oldest entry under FIFO; A then replaces B at
access 17, and B replaces C at access 18.)
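The hit count and TM above can be verified with a small FIFO cache simulation (function name ours; only the item letters matter, so the address subscripts are dropped):

```python
from collections import deque

def fifo_cache_hits(accesses, num_lines=8):
    # Fully associative cache with FIFO replacement; returns the hit count.
    cache = deque()
    hits = 0
    for item in accesses:
        if item in cache:
            hits += 1
        else:
            if len(cache) == num_lines:
                cache.popleft()   # evict the line that was filled earliest
            cache.append(item)
    return hits

pattern = list("ABCADBEFACDBGCHIAB")
hits = fifo_cache_hits(pattern)
h = hits / len(pattern)
print(hits, round(h, 3), round(h * 10 + (1 - h) * 60, 2))  # 7 0.389 40.56
```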
