Understanding Computer Memory Hierarchy

The memory hierarchy is an organizational structure in computing that ranks storage components by speed, cost, and access time, with CPU registers at the top and tape backups at the bottom. It leverages the principle of locality, allowing programs to access frequently used data quickly and thus improving performance. As one moves down the hierarchy, speed decreases while storage capacity increases and cost per bit decreases.

Memory Hierarchy

What is hierarchy?
A hierarchy is an organizational structure in which items are ranked according to levels of importance.
Most governments and corporations are hierarchical.
The computer memory hierarchy ranks components in terms of response times,
with processor registers at the top of the pyramid structure and tape backup at the bottom.
What is Memory Hierarchy?
The memory system is a hierarchy of storage devices with different capacities, costs, and access times.
The idea of a memory hierarchy centers on a fundamental property of computer programs known as
locality. Programs with good locality tend to access the same set of data items over and over again,
or they tend to access sets of nearby data items. Programs with good locality tend to access more
data items from the upper levels of the memory hierarchy than programs with poor locality, and thus
run faster. The next figure shows a typical memory hierarchy and the relationship between cost per
bit, access time, and position in the hierarchy.
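Locality is easiest to see in code. The following C sketch is an illustrative addition (not part of the original notes); the array sizes and function names are arbitrary choices. Both functions add up the same two-dimensional array, but the first loop walks memory in the order C stores it (row by row) and so has good spatial locality, while the second strides across rows and has poor spatial locality. The running sum, reused on every iteration, is a small example of temporal locality.

#include <stdio.h>

#define ROWS 1024
#define COLS 1024

static int a[ROWS][COLS];          /* C stores this array row by row */

/* Good spatial locality: consecutive iterations touch adjacent addresses,
   so each block brought up from a lower level is fully used.
   The running 'sum' is reused every iteration (temporal locality). */
long sum_row_major(void) {
    long sum = 0;
    for (int i = 0; i < ROWS; i++)
        for (int j = 0; j < COLS; j++)
            sum += a[i][j];
    return sum;
}

/* Poor spatial locality: consecutive iterations jump COLS * sizeof(int)
   bytes apart, so most of each block fetched is wasted. */
long sum_col_major(void) {
    long sum = 0;
    for (int j = 0; j < COLS; j++)
        for (int i = 0; i < ROWS; i++)
            sum += a[i][j];
    return sum;
}

int main(void) {
    printf("row-major sum = %ld, column-major sum = %ld\n",
           sum_row_major(), sum_col_major());
    return 0;
}

Both functions compute the same result; on most machines the row-major version runs noticeably faster because more of its accesses are satisfied from the upper levels of the hierarchy.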

Registers:-
1) CPU registers are at the topmost level of this hierarchy; they hold the most frequently used data. They are
very limited in number and are the fastest.
2) They are used by the CPU and the ALU for performing arithmetic and logical operations and for
temporary storage of data. Registers are located inside the CPU.
Cache Memory:-
1) It is used to increase the speed of operations; a timing sketch illustrating this follows this list.
2) It is implemented using SRAM chips.
3) SRAM is much faster than DRAM, but it is more expensive, larger in size, and consumes more power.
4) It is located very close to the CPU, and in modern processors on the CPU chip itself.
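A rough way to observe the cache at work is to touch the same amount of data in two different orders and time both passes. The C sketch below is an illustrative addition, not part of the original notes; the array size, the stride, and the use of clock() are assumptions of mine, and exact timings depend on the machine and compiler settings (compile with low optimization, e.g. -O1, so the loops are not rewritten).

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1 << 24)        /* 16 M ints (64 MB), far larger than typical caches */
#define STRIDE 16          /* 16 ints = 64 bytes, roughly one cache line */

int main(void) {
    int *a = malloc((size_t)N * sizeof(int));
    if (!a) return 1;
    for (int i = 0; i < N; i++)       /* touch every element so real pages exist */
        a[i] = i & 0xFF;

    long long sum = 0;
    clock_t t0 = clock();
    for (int i = 0; i < N; i++)       /* sequential pass: mostly cache hits */
        sum += a[i];

    clock_t t1 = clock();
    for (int s = 0; s < STRIDE; s++)            /* same elements, visited in a    */
        for (int j = s; j < N; j += STRIDE)     /* cache-unfriendly strided order */
            sum += a[j];

    clock_t t2 = clock();
    printf("sequential: %.3f s   strided: %.3f s   (sum = %lld)\n",
           (double)(t1 - t0) / CLOCKS_PER_SEC,
           (double)(t2 - t1) / CLOCKS_PER_SEC, sum);
    free(a);
    return 0;
}

On a typical machine the strided pass is noticeably slower even though it performs the same number of additions, because far more of its loads must go past the SRAM cache down to DRAM.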
Main Memory (also called primary memory):-
1) This is the memory actually accessed by the CPU.
2) The maximum size of main memory depends on the width of the CPU's address bus (a small calculation follows this list).
3) It is implemented using semiconductor chips.
4) It consists mainly of RAM and a small amount of ROM.
5) These memories are located on the motherboard.
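Point 2 above is just a power of two: an n-bit address bus can form 2^n distinct byte addresses. The short C sketch below is an illustrative addition (the chosen bus widths are arbitrary examples, not from the original notes); it prints the resulting limits, e.g. a 32-bit address bus can address at most 4 GiB of main memory.

#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* With an n-bit address bus the CPU can form 2^n distinct addresses,
       each naming one byte of main memory. */
    int widths[] = {16, 20, 32, 36};
    for (int i = 0; i < 4; i++) {
        int n = widths[i];
        uint64_t bytes = (uint64_t)1 << n;
        printf("%2d-bit address bus -> 2^%d = %llu bytes (%g MiB)\n",
               n, n, (unsigned long long)bytes, bytes / (1024.0 * 1024.0));
    }
    return 0;
}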

Secondary Memory:-
1) It is generally used to increase the storage space.
2) Its size is independent of the width of the CPU's address bus.
3) It is implemented in the form of magnetic, optical, and flash storage devices, which have a lower cost per bit than
semiconductor memory chips.
4) Only its connector is present on the motherboard. Some examples are: hard disk, flash disk, optical disk, etc.
Offline Memory:-
1) It is implemented using magnetic tapes.
2) It is used to increase the storage capacity.
3) It is very slow but durable.
4) It is easily portable. A typical example is tape storage used for offline backup.
As we move away from the CPU in the memory hierarchy, speed decreases, storage space increases, and cost
per bit decreases.

Memory hierarchy in detail


Computer pioneers correctly predicted that programmers would want unlimited amounts of fast
memory. The solution is to use a memory hierarchy. The principle of locality says that most programs
do not access all code or data uniformly. Locality occurs in time (temporal locality) and in space
(spatial locality). This principle led to hierarchies based on memories of different speeds and sizes.
The next figure shows a multilevel memory hierarchy, including typical sizes and speeds of access.

Figure: The levels in a typical memory hierarchy in a server computer. As we move farther away from
the processor, the memory in the level below becomes slower and larger. Note that the time units
change by a factor of 10^9 - from picoseconds to milliseconds - and that the size units change by a factor
of 10^12 - from bytes to terabytes.

Since fast memory is expensive, a memory hierarchy is organized into several levels - each smaller,
faster, and more expensive per byte than the next lower level, which is farther from the processor.
The goal is to provide a memory system with cost per byte almost as low as the cheapest level of
memory and speed almost as fast as the fastest level. In most cases (but not all), the data contained in
a lower level are a superset of the next higher level. This inclusion property is always required for the
lowest level of the hierarchy, which consists of main memory in the case of caches and disk memory in
the case of virtual memory.
