Microcontrollers: Caches
Module 5: CACHES
CACHES: The Memory Hierarchy and Cache Memory, Caches and Memory Management Units.
CACHE ARCHITECTURE: Basic Architecture of a Cache Memory, Basic Operation of a Cache Controller, The Relationship between Cache and Main Memory, Set Associativity, Write Buffers, Measuring Cache Efficiency.
CACHE POLICY: Write Policy (Writeback or Writethrough), Cache Line Replacement Policies, Allocation Policy on a Cache Miss.
Coprocessor 15 and Caches.
1) Explain the basic architecture of cache memory.
• ARM uses two bus architectures in its cached cores, the Von Neumann and the
Harvard.
• The Von Neumann and Harvard bus architectures differ in the separation of the
instruction and data paths between the core and memory.
• A different cache design is used to support each architecture. In processor
cores using the Von Neumann architecture, a single cache is used for both
instructions and data. This type of cache is known as a unified cache.
• A unified cache memory contains both instruction and data values.
• The Harvard architecture has separate instruction and data buses to improve
overall system performance, but supporting the two buses requires two caches.
• A simple cache memory is shown in the figure.
• It has three main parts: a directory store, a data section, and status
information.
• All three parts of the cache memory are present for each cache line.
• The cache must know where the information stored in a cache line originates
from in main memory.
• It uses a directory store to hold the main-memory address from which the cache
line was copied. This directory entry is known as a cache-tag.
• A cache memory must also store the data read from main memory. This
information is held in the data section.
• The size of a cache is defined as the actual code or data the cache can store
from main memory.
• Not included in the cache size is the cache memory required to support
cache-tags or status bits.
• There are also status bits in cache memory to maintain state information.
• Two common status bits are the valid bit and dirty bit.
• A valid bit marks a cache line as active, meaning it contains live data
originally taken from main memory and is currently available to the
processor core on demand.
• A dirty bit indicates whether a cache line contains data that differs from the
value held at the corresponding location in main memory (both status bits appear
in the C sketch below).
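The three parts described above map naturally onto a C structure. The following is a minimal sketch, assuming an illustrative 32-byte (eight-word) cache line; the field widths and layout are assumptions for clarity, not the layout of any particular ARM core.

#include <stdint.h>

#define WORDS_PER_LINE 8            /* assumption: 32-byte line = 8 words */

typedef struct {
    uint32_t tag;                   /* directory store: the cache-tag, i.e. where
                                       in main memory the line was copied from */
    unsigned valid : 1;             /* status bit: line holds live data */
    unsigned dirty : 1;             /* status bit: line differs from main memory */
    uint32_t data[WORDS_PER_LINE];  /* data section: the cached code or data */
} cache_line_t;

Note that only the data array counts toward the quoted cache size; the storage for the cache-tag and status bits is extra and is not visible to the programmer.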
2) Explain how main memory maps to cache memory.
• As the associativity of a cache controller goes up, the probability of thrashing goes down.
• The ideal goal would be to maximize the set associativity of a cache by designing it so any main
memory location maps to any cache line.
• A cache that does this is known as a fully associative cache.
• However, as the associativity increases, so does the complexity of the hardware that supports it.
• One method used by hardware designers to increase the set associativity of a cache includes a
content addressable memory (CAM).
• A CAM uses a set of comparators to compare the input tag address with a cache-tag stored in each
valid cache line.
• A CAM works in the opposite way to a RAM.
• Where a RAM produces data when given an address value, a CAM produces an
address when a given data value exists in the memory.
• The cache controller uses the address tag as the input to the CAM and the
output selects the way containing the valid cache line.
• The tag portion of the requested address is used as an input to the four
CAMs that simultaneously compare the input tag with all cache-tags stored
in the 64 ways.
• If there is a match, cache data is provided by the cache memory.
• If no match occurs, a miss signal is generated by the cache controller.
• The controller enables one of four CAMs using the set index bits.
• The indexed CAM then selects a cache line in cache memory, and the data
index portion of the core address selects the requested word, halfword, or
byte within the cache line (a software sketch of this lookup follows).
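The lookup sequence above can be expressed as a short C sketch. Real hardware compares the tag against every valid way in parallel through the CAMs; the loop below performs the same check sequentially. A simplified 4-way, 64-set geometry with 32-byte lines is assumed here for clarity, rather than the 64-way arrangement described above.

#include <stdbool.h>
#include <stdint.h>

#define NUM_WAYS   4              /* assumption: simplified geometry */
#define NUM_SETS   64
#define LINE_BYTES 32

typedef struct {
    uint32_t tag;                 /* cache-tag from the directory store */
    bool     valid;
    uint32_t data[LINE_BYTES / 4];
} line_t;

static line_t cache[NUM_SETS][NUM_WAYS];

/* Returns true on a hit and writes the requested word to *word. */
bool cache_lookup(uint32_t addr, uint32_t *word)
{
    uint32_t word_index = (addr % LINE_BYTES) / 4;        /* data index */
    uint32_t set_index  = (addr / LINE_BYTES) % NUM_SETS; /* set index  */
    uint32_t tag        = addr / (LINE_BYTES * NUM_SETS); /* tag bits   */

    for (unsigned way = 0; way < NUM_WAYS; way++) {
        line_t *line = &cache[set_index][way];
        if (line->valid && line->tag == tag) {
            *word = line->data[word_index];  /* hit: cache supplies the data */
            return true;
        }
    }
    return false;                            /* miss: fetch from main memory */
}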
4) Briefly explain cache line replacement policies.
• On a cache miss, the cache controller must select a cache line from the
available set in cache memory to store the new information from main memory.
• The cache line selected for replacement is known as a victim. If the victim
contains valid, dirty data, the controller must write the dirty data from the cache
memory to main memory before it copies new data into the victim cache line.
• The process of selecting and replacing a victim cache line is known as eviction.
• The strategy implemented in a cache controller to select the next victim is
called its replacement policy.
• The replacement policy selects a cache line from the available associative
member set; that is, it selects the way to use in the next cache line replacement.
• To summarize the overall process, the set index selects the set of cache lines
available in the ways, and the replacement policy selects the specific cache line
from the set to replace.
• ARM cached cores support two replacement policies, either pseudorandom or
round-robin.
• Round-robin or cyclic replacement simply selects the next cache line in a set to
replace.
• The selection algorithm uses a sequential, incrementing victim counter that
increments each time the cache controller allocates a cache line.
• When the victim counter reaches a maximum value, it is reset to a defined base
value.
• Pseudorandom replacement randomly selects the next cache line in a set to replace.
• The selection algorithm uses a nonsequential incrementing victim counter.
• In a pseudorandom replacement algorithm the controller increments the victim
counter by randomly selecting an increment value and adding this value to the
victim counter.
• When the victim counter reaches a maximum value, it is reset to a defined base
value, just as in round-robin replacement (both counters are sketched in the C
code below).
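Both victim counters are easy to model in C. This is a minimal sketch assuming a 4-way set; the standard rand() function stands in for whatever random source the hardware actually uses.

#include <stdlib.h>

#define NUM_WAYS 4   /* assumption: 4-way set associative cache */

/* Round-robin: a sequential counter that selects the next way in turn
   and wraps back to its base value after reaching the maximum. */
unsigned next_victim_round_robin(void)
{
    static unsigned victim = 0;          /* defined base value */
    unsigned way = victim;
    victim = (victim + 1) % NUM_WAYS;    /* increments on each allocation */
    return way;
}

/* Pseudorandom: the counter advances by a randomly chosen increment,
   wrapping at the maximum value in the same way. */
unsigned next_victim_pseudorandom(void)
{
    static unsigned victim = 0;
    unsigned way = victim;
    unsigned step = 1u + (unsigned)rand() % NUM_WAYS;  /* random increment */
    victim = (victim + step) % NUM_WAYS;
    return way;
}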
5) Explain the memory hierarchy and cache memory.
• The innermost level of the hierarchy is at the processor core.
• This memory is so tightly coupled to the processor that in many ways it is difficult to think of it as
separate from the processor.
• This memory is known as a register file.
• These registers are integral to the processor core and provide the fastest possible memory access in the
system.
• At the primary level, memory components are connected to the processor core through dedicated on-
chip interfaces.
• It is at this level we find tightly coupled memory (TCM) and level 1 cache.
• The next level out is main memory, whose purpose is to hold programs while they are running
on a system.
• The next level is secondary storage—large, slow, relatively inexpensive mass storage devices such as
disk drives or removable memory.
• Secondary memory is used to store unused portions of very large programs that do not fit in main
memory and programs that are not currently executing.
• It is useful to note that a memory hierarchy depends as much on architectural design as on the
technology surrounding it.
• For example, TCM and SRAM are of the same technology yet differ in architectural placement: TCM is
located on the chip, while SRAM is located on a board.
• A cache may be incorporated between any level in the hierarchy where there is a
significant access time difference between memory components.
• A cache can improve system performance whenever such a difference exists.
• A cache memory system takes information stored in a lower level of the hierarchy
and temporarily moves it to a higher level.
• Figure 12.1 includes a level 1 (L1) cache and write buffer.
• The L1 cache is an array of high-speed, on-chip memory that temporarily holds
code and data from a slower level.
• A cache holds this information to decrease the time required to access both
instructions and data.
• The write buffer is a very small FIFO buffer that holds writes from the cache
on their way to main memory (a sketch of such a FIFO appears after this list).
• Not shown in the figure is a level 2 (L2) cache. An L2 cache is located between the
L1 cache and slower memory.
• The L1 and L2 caches are also known as the primary and secondary caches.
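To make the write buffer concrete, here is a minimal C sketch of a small FIFO of address/data pairs sitting between the cache and main memory. The eight-entry depth and the caller-supplied write_mem function are illustrative assumptions, not a description of any specific core.

#include <stdbool.h>
#include <stdint.h>

#define WB_DEPTH 8   /* assumption: write buffers are typically very small */

typedef struct {
    uint32_t addr;
    uint32_t data;
} wb_entry_t;

static wb_entry_t wb[WB_DEPTH];
static unsigned   wb_head, wb_tail, wb_count;

/* Cache side: queue a write; returns false (stall) if the buffer is full. */
bool wb_push(uint32_t addr, uint32_t data)
{
    if (wb_count == WB_DEPTH)
        return false;
    wb[wb_tail].addr = addr;
    wb[wb_tail].data = data;
    wb_tail = (wb_tail + 1) % WB_DEPTH;
    wb_count++;
    return true;
}

/* Memory side: drain the oldest pending write to main memory. */
bool wb_drain_one(void (*write_mem)(uint32_t addr, uint32_t data))
{
    if (wb_count == 0)
        return false;
    write_mem(wb[wb_head].addr, wb[wb_head].data);
    wb_head = (wb_head + 1) % WB_DEPTH;
    wb_count--;
    return true;
}

In hardware the buffer drains on its own whenever the memory bus is free; the function-pointer form above just makes that dependency explicit.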