Chapter 4 - Memory Part 2

This document discusses different types of DRAM architectures and cache memory. It describes asynchronous and synchronous DRAM, as well as double data rate SDRAM, Rambus DRAM, and cache memory organization, operation, addressing and mapping procedures including direct mapping and set associative mapping. The purpose of cache memory is to speed up processing by allowing faster access to frequently used data compared to main memory.

Uploaded by Yaseen Ashraf


Types of DRAM Architectures:

1- Asynchronous DRAMs
2- Synchronous DRAM (SDRAM)
3- Double Data Rate SDRAM (DDR SDRAM)
4- Rambus DRAM (RDRAM)
Types of DRAM Architectures:
1- Asynchronous DRAMs

Types of DRAM Architectures:
Synchronous DRAM (SDRAM)
• In SDRAM, data exchanges with the processor are synchronized to
an external clock signal and run at the full speed of the
processor/memory bus without imposing wait states.

• The processor or other master issues the instruction and address
information, which is latched by the DRAM. The DRAM then
responds after a set number of clock cycles. Meanwhile, the
master can safely do other tasks while the SDRAM is processing.

• With synchronous access, the DRAM moves data in and out
under control of the system clock.
Types of DRAM Architectures:
Synchronous DRAM (SDRAM) - Burst Mode
• SDRAMs have different modes of operation, which can be
selected by writing control information into a mode register.

• For example, burst operations of different lengths can be
specified. It is not necessary to provide externally generated
pulses on the CAS line to select successive columns; the
necessary control signals are generated internally using a
column counter and the clock signal.

• New data are placed on the data lines at the rising edge of each
clock pulse.

• The SDRAM performs best when it is transferring large
blocks of data serially, as in applications like word
processing, spreadsheets, and multimedia.
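The burst behaviour described above can be sketched in a few lines. This is a simplified illustration, not an actual SDRAM model: the function name and parameters are invented, and the burst length would normally come from the mode register rather than a function argument. The point is that one externally supplied column address is enough, and the internal counter supplies the rest.

```python
# Hypothetical sketch of an SDRAM burst read: after one externally supplied
# column address, an internal column counter generates the successive column
# addresses, so no further CAS pulses are needed (names are illustrative).

def burst_read(row, start_col, burst_length):
    """Return the (row, column) addresses driven by the internal counter."""
    # The mode register would normally hold burst_length; here it is a parameter.
    return [(row, start_col + i) for i in range(burst_length)]

# A burst of length 4 starting at column 10 of row 3:
print(burst_read(3, 10, 4))  # [(3, 10), (3, 11), (3, 12), (3, 13)]
```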
 SDRAM Modes of Operation

 Memory Latency
The amount of time it takes to transfer a word of
data to or from the memory.

 Memory Bandwidth
The number of bits or bytes that can be
transferred in one second.

In a block transfer, the time between successive words of a block
is much shorter than the time needed to transfer the first word.
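The effect of block transfers on bandwidth can be made concrete with a small calculation. The timing numbers below are assumed for illustration, not taken from the text: the first word of a block is slow, each successive word is fast, so larger blocks raise the effective bandwidth.

```python
# Illustrative calculation (assumed timings): with block transfers, the first
# word takes much longer than each successive word, so the effective
# bandwidth grows with the block size.

def effective_bandwidth(first_word_ns, next_word_ns, block_words, word_bytes=4):
    """Bytes per second for transferring one block."""
    total_ns = first_word_ns + (block_words - 1) * next_word_ns
    return block_words * word_bytes / (total_ns * 1e-9)

# Example: 60 ns for the first word, 10 ns for each later word.
print(effective_bandwidth(60, 10, 1))   # single word: ~66.7 MB/s
print(effective_bandwidth(60, 10, 8))   # 8-word block: ~246 MB/s
```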
Types of DRAM Architectures:
Double Data Rate SDRAM (DDR SDRAM)

• Standard SDRAM performs all actions on the rising edge of the
clock signal.

• DDR SDRAM accesses the cell array in the same way, but
transfers data on both edges of the clock signal.

• Several versions of DDR chips have been developed (DDR2, DDR3,
and DDR4). They offer:
 Increased storage capacity.
 Lower power consumption.
 Faster clock speed.
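The consequence of transferring on both clock edges is easy to show numerically. The clock frequency and bus width below are example values, not figures from the text:

```python
# Hedged sketch: DDR transfers data on both clock edges, so its peak transfer
# rate is twice the clock frequency times the bus width (numbers are examples).

def peak_rate_bytes(clock_hz, bus_bits, edges_per_cycle):
    """Peak transfer rate in bytes per second."""
    return clock_hz * edges_per_cycle * bus_bits // 8

sdr = peak_rate_bytes(200_000_000, 64, 1)  # standard SDRAM: rising edge only
ddr = peak_rate_bytes(200_000_000, 64, 2)  # DDR: both edges
print(ddr // sdr)  # 2
```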
Types of DRAM Architectures:
Rambus DRAM (RDRAM)

Rambus DRAM (RDRAM) does not use dedicated address, control,
data, and chip-select portions of the bus. Instead, the bus is fully
multiplexed: the address, control, data, and chip-select information
all travel over the same set of electrical wires, but at different times.

Transactions occur on the bus using a split request/response
protocol that resembles network request/response pairs. Rather
than being controlled by the explicit RAS, CAS, R/W, and CE
signals used in conventional DRAMs, an RDRAM gets a memory
request over the high-speed bus.

This request contains the desired address, the type of operation,
and the number of bytes in the operation.
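The shape of such a request can be sketched as a simple record. The field names here are invented for illustration and do not reflect the actual Rambus packet format; the sketch only shows that one packet carries everything the dedicated RAS/CAS/R-W/CE signal lines would otherwise encode.

```python
# Illustrative sketch (invented names, not the real Rambus packet layout):
# an RDRAM request travels over the multiplexed bus as a single packet
# carrying the address, the operation type, and the byte count.

from dataclasses import dataclass

@dataclass
class RdramRequest:
    address: int      # desired memory address
    operation: str    # e.g. "read" or "write"
    byte_count: int   # number of bytes in the operation

req = RdramRequest(address=0x1F40, operation="read", byte_count=16)
print(req.operation, req.byte_count)
```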
Cache Memory (Speed-up for main store)

• Cache memories are (relatively) small, high-speed memories
inserted into the system between the processor and the main
store.

• The purpose of the cache memory is to speed up the processing
rate by allowing the processor to execute at a higher rate than is
possible using the main store alone.

• It utilizes many of the same concepts used with virtual memories,
but in a slightly different fashion.
Cache Memory (Speed-up for main store)

• The basic operation of the cache is as follows: when the CPU needs
to access memory, the cache is examined. If the word is found in
the cache, it is read from the fast memory. If the word addressed by
the CPU is not found in the cache, the main memory is accessed to
read the word.

• A block of words containing the one just accessed is then
transferred from main memory to cache memory. The block size
may vary from one word (the one just accessed) to about 16 words
adjacent to the one just accessed. In this manner, some data are
transferred so that future references to memory may find the
required word in the fast cache.
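The read sequence above can be sketched as a minimal simulation. All the structures here are simplified assumptions (a dictionary as main memory, a 4-word block size): on a hit the word comes straight from the cache, and on a miss the whole block containing the word is first copied in from main memory.

```python
# Minimal sketch of the cache read sequence (simplified assumptions):
# hit -> read from cache; miss -> load the containing block, then read.

BLOCK_SIZE = 4  # words per block (illustrative)

main_memory = {addr: addr * 10 for addr in range(64)}  # fake contents
cache = {}  # block number -> list of words

def read(addr):
    block = addr // BLOCK_SIZE
    if block not in cache:                      # miss: fetch the whole block
        base = block * BLOCK_SIZE
        cache[block] = [main_memory[base + i] for i in range(BLOCK_SIZE)]
    return cache[block][addr % BLOCK_SIZE]      # hit path

print(read(9))   # miss: loads block 2 (addresses 8-11), returns 90
print(read(10))  # hit: block 2 is already cached, returns 100
```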
Cache Memory

The speed-up of the system
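The speed-up can be estimated with the standard average-access-time model; the hit ratio and access times below are assumed example values, not figures from the text.

```python
# Standard average-access-time model (example numbers, not from the text):
#   t_avg   = h * t_cache + (1 - h) * t_main
#   speedup = t_main / t_avg

def speedup(hit_ratio, t_cache_ns, t_main_ns):
    t_avg = hit_ratio * t_cache_ns + (1 - hit_ratio) * t_main_ns
    return t_main_ns / t_avg

print(round(speedup(0.95, 10, 100), 2))  # 6.9 with a 95% hit ratio
```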
Cache Memory Operation

• The processor generates the read address (RA) of a word to be
read. If the word is contained in the cache, it is delivered to the
processor. Otherwise, the block containing that word is loaded into
the cache, and the word is delivered to the processor.

• Cache memory connects to the processor via data, control, and
address lines. The data and address lines also attach to data and
address buffers, which attach to a system bus from which main
memory is reached.

• When a cache hit occurs, the data and address buffers are disabled
and communication is only between processor and cache, with no
system bus traffic.
Cache Memory Operation
• When a cache miss occurs, the desired address is loaded onto the
system bus and the data are returned through the data buffer to
both the cache and the processor. In other organizations, the
cache is physically interposed between the processor and the
main memory for all data, address, and control lines. In this latter
case, for a cache miss, the desired word is first read into the cache
and then transferred from cache to processor.
Cache Memory Organization

Cache Memory Addressing
When virtual memory is used, the address fields of machine
instructions contain virtual addresses. For reads from and writes to
main memory, a hardware memory management unit (MMU)
translates each virtual address into a physical address in main
memory.

A logical cache, also known as a virtual cache, stores data using
virtual addresses. The processor accesses the cache directly,
without going through the MMU.

A physical cache stores data using main-memory physical
addresses.
Cache Memory Addressing

The advantage of the logical cache is that cache access speed is
faster than for a physical cache, because the cache can respond
before the MMU performs an address translation.

The disadvantage of the logical cache is that most virtual memory
systems supply each application with the same virtual memory
address space. That is, each application sees a virtual memory that
starts at address 0. Thus, the same virtual address in two different
applications refers to two different physical addresses. The cache
memory must therefore be completely flushed on each application
context switch, or extra bits must be added to each line of the
cache to identify which virtual address space the address refers to.
Main Memory Mapping Procedures

• Associative Mapping
• Direct Mapping
• Set Associative Mapping
Main Memory Mapping Procedures
2- Direct Mapping

Main Memory Mapping Procedures
3- Set Associative Mapping
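The two placement rules named above can be sketched with their usual modulo arithmetic; the cache sizes below are assumed for illustration. Direct mapping sends main-memory block j to exactly one cache line, while k-way set-associative mapping sends it to any line within one set.

```python
# Hedged sketch of the placement rules (cache sizes are assumed):
# direct mapping:        block j -> cache line (j mod number_of_lines)
# k-way set-associative: block j -> any way in set (j mod number_of_sets)

CACHE_LINES = 128
WAYS = 4
SETS = CACHE_LINES // WAYS  # 32 sets

def direct_mapped_line(block):
    return block % CACHE_LINES

def set_associative_set(block):
    return block % SETS

block = 300
print(direct_mapped_line(block))   # 300 mod 128 = 44
print(set_associative_set(block))  # 300 mod 32  = 12
```

Note the trade-off this arithmetic implies: in direct mapping two blocks that share a line always evict each other, while set-associative mapping lets up to WAYS such blocks coexist in the same set.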
