MEMORY ORGANIZATION.docx

The document discusses memory organization, categorizing memory into volatile and non-volatile types, and detailing the memory hierarchy from auxiliary to cache memory. It explains the differences between RISC and CISC processors, highlighting their architectures, advantages, and disadvantages. Additionally, it covers cache memory characteristics, mapping techniques, and applications, emphasizing its role in improving CPU performance despite its higher cost.

Memory Organization

A memory unit is a collection of storage devices. It stores binary information in the form of
bits. Generally, memory/storage is classified into two categories:
● Volatile Memory: This loses its data, when power is switched off.
● Non-Volatile Memory: This is a permanent storage and does not lose any data when
power is switched off.
Memory Hierarchy

The total memory capacity of a computer can be visualized as a hierarchy of components. The
memory hierarchy consists of all the storage devices in a computer system, from slow auxiliary
memory to faster main memory to the smaller, still faster cache memory. Auxiliary memory
access time is generally about 1000 times that of main memory, so it sits at the bottom of the
hierarchy. Main memory occupies the central position because it communicates directly with the
CPU and, through the input/output (I/O) processor, with the auxiliary memory devices. When the
CPU needs a program that is not resident in main memory, it is brought in from auxiliary
memory; programs not currently in use are transferred out to auxiliary memory to free space in
main memory for programs that are. Cache memory stores the program data currently being
executed by the CPU. The approximate access time ratio between cache memory and main
memory is about 1 to 7-10.

Microprocessors are commonly divided into three categories: 1. RISC 2. CISC 3. Special-purpose processors


RISC stands for Reduced Instruction Set Computer, a microprocessor architecture built around a
small, highly optimized set of instructions. It is designed to minimize instruction execution
time by limiting and optimizing the instruction set: each instruction typically completes in one
clock cycle, and each cycle covers three stages: fetch, decode, and execute. Complex operations
are carried out by combining these simpler instructions. RISC chips require fewer transistors,
which makes them cheaper to design and reduces instruction execution time. Examples of RISC
processors are Sun's SPARC, PowerPC, Microchip PIC processors, and RISC-V.
Advantages of RISC Processor
1. The RISC processor's performance is better due to its simple and limited instruction set.
2. It requires fewer transistors, which makes it cheaper to design.
3. RISC frees up space on the microprocessor because of its simplicity.
4. A RISC processor is simpler than a CISC processor because of its simple, regular design,
and it can complete an instruction in one clock cycle.
Disadvantages of RISC Processor
1. The RISC processor's performance may vary with the code executed, because subsequent
instructions may depend on a previous instruction for their execution in a cycle.
2. Programmers and compilers often need several simple instructions to express an operation
that a complex instruction would handle in one.
3. RISC processors require very fast memory, and hence a large amount of cache memory, to
supply instructions quickly enough.
RISC Architecture
It is a highly optimized instruction set used in portable devices, where system reliability
matters, such as the Apple iPod, mobiles/smartphones, and the Nintendo DS.

Features of RISC Processor: Some important features of RISC processors are:

1. One-cycle execution time: RISC processors aim for one clock cycle per instruction (a CPI
of 1), with each instruction passing through the fetch, decode, and execute stages.
2. Pipelining technique: Pipelining is used in RISC processors to overlap the stages of
multiple instructions and execute them more efficiently.
3. A large number of registers: RISC processors provide multiple registers that can hold
instructions and operands, minimizing interaction with main memory.
4. It supports simple addressing modes and fixed-length instructions, which simplify
pipelining.
5. It uses LOAD and STORE instructions to access memory locations.
6. The simple, limited instruction set reduces the execution time of a process in a RISC.

CISC Processor: CISC stands for Complex Instruction Set Computer, an approach popularized by
Intel. It has a large collection of instructions, ranging from simple to very complex and
specialized, at the assembly-language level; complex instructions can take many clock cycles to
execute. The CISC approach tries to reduce the number of instructions per program, at the cost
of an increased number of cycles per instruction. It emphasizes building complex instructions
directly into the hardware, because hardware is generally faster than software. CISC chips are
relatively slower per instruction than RISC chips, but programs use fewer instructions than on
RISC. Examples of CISC processors are the VAX, AMD and Intel x86 processors, and the System/360.

Characteristics of CISC Processor


Following are the main characteristics of the CISC processor:
1. The length of the code is short, so it requires very little RAM.
2. CISC or complex instructions may take longer than a single clock cycle to execute.
3. Fewer instructions are needed to write an application.
4. It provides easier programming in assembly language.
5. It supports complex data structures and easy compilation of high-level languages.
6. It is composed of fewer registers and more addressing modes, typically 5 to 20.
7. Instructions can be larger than a single word.
8. It emphasizes building instructions into hardware, because hardware is faster than
software.
CISC Processors Architecture
The CISC architecture helps reduce program code by embedding multiple operations in each
program instruction, which makes the CISC processor more complex. CISC architecture-based
computers were designed to decrease memory costs: large programs would otherwise require large
memory space to store, increasing the memory requirement, and a large amount of memory raises
the cost of the machine.
Advantages of CISC Processors
1. The compiler requires little effort to translate high-level programs or statements into
assembly or machine language on CISC processors.
2. The code length is quite short, which minimizes the memory requirement.
3. Storing a program requires very little RAM.
4. A single instruction can carry out several low-level tasks.
5. CISC designs include power-management features that adjust clock speed and voltage.
6. It uses fewer instructions to perform the same task than RISC.
Disadvantages of CISC Processors
1. CISC chips are slower than RISC chips at executing each instruction cycle.
2. The performance of the machine decreases due to the lower clock speed.
3. Pipelining in a CISC processor is complicated to implement.
4. CISC chips require more transistors than RISC designs.
5. In CISC, typically only about 20% of the available instructions are used in a given
program.
Difference between the RISC and CISC Processors
1. RISC is a Reduced Instruction Set Computer; CISC is a Complex Instruction Set Computer.
2. RISC emphasizes software (the compiler) to optimize the instruction stream; CISC
emphasizes hardware to optimize the instruction set.
3. RISC uses a hard-wired control unit; CISC uses a microprogrammed control unit.
4. RISC requires multiple register sets to store instructions; CISC requires a single
register set.
5. RISC has simple instruction decoding; CISC has complex instruction decoding.
6. Pipelining is simple in RISC; pipelining is difficult in CISC.
7. RISC uses a limited number of instructions that require less time to execute; CISC uses a
large number of instructions that require more time to execute.
8. RISC uses LOAD and STORE as independent instructions in a register-to-register style of
program interaction; CISC uses memory-to-memory interaction, with LOAD and STORE folded
into other instructions.
9. RISC spends more transistors on registers; CISC uses its transistors to implement complex
instructions.
10. The execution time of RISC is very short; the execution time of CISC is longer.
11. RISC architecture is used in high-end applications such as telecommunication, image
processing, and video processing; CISC architecture is used in low-end applications such
as home automation and security systems.
12. RISC has fixed-format instructions; CISC has variable-format instructions.
13. A program written for RISC architecture tends to take more space in memory; a program
written for CISC architecture tends to take less.

Cache Memory
Cache memory is a special, very high-speed memory. The cache is a smaller, faster memory that
stores copies of the data from frequently used main-memory locations. A CPU contains several
independent caches, which store instructions and data. The most important use of cache memory
is to reduce the average time to access data from main memory.
Characteristics of Cache Memory
● Cache memory is an extremely fast memory type that acts as a buffer between RAM and
the CPU.
● Cache Memory holds frequently requested data and instructions so that they are
immediately available to the CPU when needed.
● Cache memory is costlier than main memory or disk memory but more economical than
CPU registers.
● Cache Memory is used to speed up and synchronize with a high-speed CPU.

Levels of Memory
● Level 1 or Registers: Registers hold the data the CPU is immediately operating on. The
most commonly used registers are the accumulator, program counter, address register, etc.
● Level 2 or Cache memory: It is faster than main memory, with a shorter access time, and
temporarily stores data for faster access.
● Level 3 or Main Memory: It is the memory the computer currently works on. It is
comparatively small in size, and once power is off the data no longer stays in this memory.
● Level 4 or Secondary Memory: It is external memory that is not as fast as main memory,
but data stays permanently in this memory.
Cache Performance
When the processor needs to read or write a location in the main memory, it first checks for a
corresponding entry in the cache.
● If the processor finds that the memory location is in the cache, a Cache Hit has occurred
and data is read from the cache.
● If the processor does not find the memory location in the cache, a cache miss has
occurred. For a cache miss, the cache allocates a new entry and copies in data from the
main memory, then the request is fulfilled from the contents of the cache.
The performance of cache memory is frequently measured in terms of a quantity called Hit
ratio.
Hit Ratio(H) = hit / (hit + miss) = no. of hits/total accesses
Miss Ratio = miss / (hit + miss) = no. of miss/total accesses = 1 - hit ratio(H)
We can improve cache performance by using a larger cache block size and higher associativity,
and by reducing the miss rate, reducing the miss penalty, and reducing the time to hit in the
cache.
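The hit- and miss-ratio formulas above can be checked with a short calculation. The counts and timings below are illustrative assumptions, not figures from the text:

```python
# Hypothetical hit/miss counts; real numbers come from profiling a workload.
hits, misses = 950, 50
total = hits + misses

hit_ratio = hits / total        # H = hits / (hits + misses)
miss_ratio = misses / total     # = 1 - H

# Average memory access time for a two-level hierarchy, assuming
# (illustratively) a 1 ns cache hit time and a 10 ns miss penalty.
hit_time_ns, miss_penalty_ns = 1.0, 10.0
avg_access_ns = hit_time_ns + miss_ratio * miss_penalty_ns

print(hit_ratio, miss_ratio, avg_access_ns)   # 0.95 0.05 1.5
```

Even a 5% miss ratio only adds half a nanosecond on average here, which is why a high hit ratio makes the hierarchy appear nearly as fast as the cache itself.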
Cache Mapping
There are three different types of mapping used for the purpose of cache memory which is as
follows:
● Direct Mapping
● Associative Mapping
● Set-Associative Mapping
1. Direct Mapping
The simplest technique, known as direct mapping, maps each block of main memory into only one
possible cache line; that is, each memory block is assigned to a specific line in the cache. If
a line is already occupied when a new block needs to be loaded, the old block is evicted. An
address is split into two parts, an index field and a tag field. The cache stores the tag
field, while the rest of the address locates the data in main memory. Direct mapping's
performance is directly proportional to the hit ratio.
i = j modulo m
where
i = cache line number
j = main memory block number
m = number of lines in the cache

For purposes of cache access, each main-memory address can be viewed as consisting of three
fields. The least significant w bits identify a unique word or byte within a block of main
memory; in most contemporary machines, the address is at the byte level. The remaining s bits
specify one of the 2^s blocks of main memory. The cache logic interprets these s bits as a tag
of s - r bits (the most significant portion) and a line field of r bits. This latter field
identifies one of the m = 2^r lines of the cache. The line field supplies the index bits in
direct mapping.
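The address split described above can be sketched in a few lines. The 16-bit address width, w = 4 offset bits, and r = 8 line bits below are illustrative assumptions, not parameters from the text:

```python
# Illustrative direct-mapped cache parameters: 16-bit byte addresses,
# 16-byte blocks (w = 4 offset bits), m = 2**r = 256 lines (r = 8 bits).
W_BITS, R_BITS = 4, 8
M_LINES = 1 << R_BITS

def split_address(addr):
    """Split a main-memory address into (tag, line, word offset)."""
    word = addr & ((1 << W_BITS) - 1)               # least significant w bits
    line = (addr >> W_BITS) & ((1 << R_BITS) - 1)   # next r bits: cache line
    tag = addr >> (W_BITS + R_BITS)                 # remaining s - r bits
    return tag, line, word

# The bit split reproduces the mapping rule i = j mod m:
addr = 0xABCD
tag, line, word = split_address(addr)
block = addr >> W_BITS            # main-memory block number j
assert line == block % M_LINES    # i = j mod m
print(hex(tag), hex(line), hex(word))   # 0xa 0xbc 0xd
```

Because m is a power of two, the modulo in i = j mod m reduces to simply taking the low r bits of the block number, which is why hardware can do it with wiring alone.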
2. Associative Mapping
In this type of mapping, associative memory is used to store both the content and the address
of each memory word. Any block can go into any line of the cache, which means the word-offset
bits identify the word within a block, while the tag comprises all of the remaining address
bits. This enables the placement of any block in any line of the cache memory. It is considered
the fastest and most flexible form of mapping. In associative mapping, there are no index bits.
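A minimal sketch of an associative lookup, under assumed parameters (4 lines, 16-byte blocks). In hardware the tag comparisons happen against all lines in parallel; the loop here only models that behavior:

```python
# Fully associative cache sketch: any block may occupy any line,
# so the tag (all address bits above the word offset) must be
# compared against every line.
W_BITS = 4        # 16-byte blocks
NUM_LINES = 4     # illustrative, tiny cache

lines = [{"valid": False, "tag": None} for _ in range(NUM_LINES)]

def lookup(addr):
    tag = addr >> W_BITS          # tag = all remaining address bits
    for line in lines:
        if line["valid"] and line["tag"] == tag:
            return "hit"
    # Miss: fill the first free line. A real cache would apply a
    # replacement policy (e.g. LRU) once all lines are occupied.
    for line in lines:
        if not line["valid"]:
            line["valid"], line["tag"] = True, tag
            break
    return "miss"

print(lookup(0x1230))   # miss (cold cache)
print(lookup(0x1234))   # hit: same block, different word offset
```

The second access hits because both addresses fall in the same 16-byte block and therefore share a tag, which is exactly the flexibility direct mapping lacks.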

3. Set-Associative Mapping
This form of mapping is an enhanced form of direct mapping that removes its drawbacks.
Set-associative mapping addresses the problem of possible thrashing in the direct-mapping
method: instead of having exactly one line that a block can map to in the cache, a few lines
are grouped together to form a set, and a block in memory can map to any one of the lines of a
specific set. This allows two or more blocks that share the same index to reside in the cache
at the same time. Set-associative cache mapping combines the best of the direct and associative
cache mapping techniques. In set-associative mapping the index bits are given by the set-offset
bits. The cache consists of a number of sets, each of which consists of a number of lines.
Relationships in the Set-Associative Mapping can be defined as:
m=v*k
i= j mod v

where
i = cache set number
j = main memory block number
v = number of sets
m = number of lines in the cache
k = number of lines in each set
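The relationships above can be illustrated numerically; the configuration of v = 4 sets with k = 2 lines each is an assumed example:

```python
# Set-associative mapping relations, with illustrative parameters:
# m = 8 cache lines organized as v = 4 sets of k = 2 lines each.
v, k = 4, 2
m = v * k                 # m = v * k

def set_number(j):
    """Set that main-memory block j maps into: i = j mod v."""
    return j % v

# Blocks 5 and 9 both map to set 1, but with k = 2 lines per set
# they can reside in the cache simultaneously (no thrashing).
print(set_number(5), set_number(9))   # 1 1
```

With k = 1 this degenerates to direct mapping, and with v = 1 (a single set containing every line) it becomes fully associative, which is why set-associative mapping is described as combining the two.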

Application of Cache Memory


Here are some of the applications of Cache Memory.
1. Primary Cache: A primary cache is always located on the processor chip. This cache is
small and its access time is comparable to that of processor registers.
2. Secondary Cache: Secondary cache is placed between the primary cache and the rest of
the memory. It is referred to as the level 2 (L2) cache. Often, the Level 2 cache is also
housed on the processor chip.
3. Spatial Locality of Reference: Spatial locality says that if a memory location is
referenced, nearby locations are likely to be referenced soon. This is why, on a miss,
a whole block rather than a single word is brought into the cache.
4. Temporal Locality of Reference: Temporal locality says that a recently referenced word
is likely to be referenced again soon. Replacement policies such as Least Recently Used
(LRU) exploit this by evicting the block that has gone unused the longest.
Advantages of Cache Memory
● Cache Memory is faster in comparison to main memory and secondary memory.
● Programs stored by Cache Memory can be executed in less time.
● The data access time of Cache Memory is less than that of the main memory.
● Cache memory stores data and instructions that are regularly used by the CPU, thereby
increasing the performance of the CPU.
Disadvantages of Cache Memory
● Cache Memory is costlier than primary memory and secondary memory.
● Data is stored on a temporary basis in Cache Memory.
● Whenever the system is turned off, data and instructions stored in cache memory get
destroyed.
● The high cost of cache memory increases the price of the Computer System.
