Memory Hierarchy Presentation Detailed
Uploaded by waziramadhani
© All Rights Reserved

Memory Hierarchy

Cache Memory, Main Memory, and Virtual Memory

Cache Memory: Design and Mapping Techniques

• Cache is a small, fast memory that stores frequently accessed data.
• Levels: L1 (fastest), L2, L3.
• Case Study: ARM Cortex-A Cache Hierarchy – ARM processors use set-associative caches to balance speed and power consumption. These processors optimize performance by storing instructions and data separately in L1 caches (a split cache). This design is crucial for mobile devices, where power efficiency is as important as processing power.
Cache Mapping Techniques
• Cache mapping determines where a block of main memory is placed in the cache. On a cache hit, the mapping gives the cache line that holds the content; on a cache miss, it gives the line where the content should be placed after being fetched from main memory.

• Mapping Techniques: Direct, Fully Associative, Set-Associative.
Mapping Techniques:
➔ Direct Mapping: Each block of main memory maps to exactly one cache line.

➔ Fully Associative Mapping: Any block of main memory can be stored in any cache line.

➔ Set-Associative Mapping: A compromise between direct and fully associative: the cache is divided into sets, and each block maps to a specific set but can be stored in any line within that set.
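The three placement rules above reduce to simple address arithmetic. A minimal sketch, assuming an illustrative cache geometry of 64 lines organized 4-way set-associative (so 16 sets):

```python
# Sketch: where a main-memory block may be placed under each mapping scheme.
# Cache geometry is an assumption for illustration: 64 lines, 4-way -> 16 sets.
NUM_LINES = 64
WAYS = 4
NUM_SETS = NUM_LINES // WAYS

def direct_mapped_line(block_number):
    # Direct mapping: each block maps to exactly one line.
    return block_number % NUM_LINES

def fully_associative_lines(block_number):
    # Fully associative: any block may occupy any line,
    # so the candidate set is the whole cache.
    return range(NUM_LINES)

def set_associative_set(block_number):
    # Set-associative: each block maps to one set,
    # but may occupy any of the WAYS lines within it.
    return block_number % NUM_SETS

# Block 100 lands in line 36 when direct-mapped, or set 4 when 4-way.
```

Note how set-associative mapping degenerates to direct mapping when WAYS = 1, and to fully associative when WAYS = NUM_LINES.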

Cache Memory: Replacement Policies
• Least Recently Used (LRU): Replaces the cache block that hasn't been used for the longest time.

• First-In-First-Out (FIFO): Replaces the oldest block in the cache.

• Random Replacement: Replaces a random block.

• Least Frequently Used (LFU): Replaces the block used least often.
• Example: LRU in Intel CPUs – Modern Intel CPUs use LRU-based policies to manage cache replacement, prioritizing the most recently accessed data for the highest performance.
• Case Study: Web Browsers and Random Replacement – Browsers like Google Chrome employ a variation of random replacement to handle large-scale caching of web pages. This keeps frequently accessed pages cached while evicting less important data, without the need for complex tracking algorithms.
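The LRU policy described above can be sketched with an ordered map that tracks recency of use. A minimal sketch, not any vendor's actual implementation:

```python
from collections import OrderedDict

class LRUCache:
    """Tiny LRU cache: evicts the block unused for the longest time."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()  # iteration order = least to most recent

    def access(self, block, value=None):
        if block in self.blocks:
            self.blocks.move_to_end(block)   # hit: mark as most recently used
            return self.blocks[block]
        if len(self.blocks) >= self.capacity:
            self.blocks.popitem(last=False)  # full: evict least recently used
        self.blocks[block] = value           # miss: load the new block
        return value

cache = LRUCache(2)
cache.access("A", 1)
cache.access("B", 2)
cache.access("A")     # touching A makes B the least recently used block
cache.access("C", 3)  # cache is full, so B is evicted
```

Swapping `popitem(last=False)` for a random choice over `self.blocks` would turn this into the random-replacement scheme from the browser case study.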
Main Memory: Types and Structure

• Types: DRAM (Dynamic RAM) – used in main memory; SRAM (Static RAM) – used in cache.
• Structure: Memory is organized in rows and columns across multiple banks.
• Example: DRAM in Gaming Consoles (e.g., PlayStation 5) – DRAM in gaming consoles handles the massive data needed for high-resolution textures and fast-paced game environments. The design provides the high bandwidth and low latency crucial for reducing frame drops during gameplay.
• Case Study: SRAM in Network Switches – Network switches rely on SRAM to buffer packets in real time. SRAM's speed makes it ideal for low-latency applications where even small delays can cause packet loss or errors, which are unacceptable in high-speed networks.
Virtual Memory

Virtual Memory is a memory management technique that gives an application the illusion of having more memory than is physically available.

It allows processes to execute without needing all their data in physical memory (RAM) at once.

Instead, the system loads only the necessary parts of the program into RAM; the rest stays on disk (swap space). This greatly improves the flexibility and efficiency of memory usage.

Example: In a system with 4 GB of physical RAM, virtual memory might allow an application to use up to 16 GB by swapping less-used memory pages to disk.
Paging:
Paging is a technique where memory is divided into fixed-size blocks called pages. Both physical memory and virtual memory are divided into these pages, which simplifies memory management.
• Examples:
Windows Page Fault Handling – The Windows operating system uses demand paging, loading pages into memory only when they are needed. This optimizes memory usage and avoids unnecessary page loads.
Linux Paging Mechanism – Linux uses a multi-level page table system that efficiently handles large address spaces, minimizing the overhead of page faults. Page table entries are cached in the TLB, reducing the need to access main memory frequently.
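The address translation at the heart of paging can be sketched as splitting a virtual address into a page number and an offset, then looking the page up in a table. A minimal sketch assuming 4 KB pages and a one-level page table (real systems, as noted above, use multi-level tables):

```python
PAGE_SIZE = 4096  # 4 KB pages, a common size (assumed here)

def translate(virtual_addr, page_table):
    """Translate a virtual address via a simple one-level page table."""
    page_number = virtual_addr // PAGE_SIZE
    offset = virtual_addr % PAGE_SIZE
    frame = page_table.get(page_number)
    if frame is None:
        # Demand paging: on a real system the OS would now load
        # the page from disk and retry the access.
        raise RuntimeError(f"page fault on page {page_number}")
    return frame * PAGE_SIZE + offset

page_table = {0: 5, 1: 9}  # page number -> physical frame number
translate(4100, page_table)  # page 1, offset 4 -> frame 9 -> 9*4096 + 4
```

The TLB mentioned above is simply a small hardware cache of these page-to-frame entries, consulted before the table itself.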
Segmentation:
Segmentation divides the memory into
variable-sized segments that correspond to
logical units such as functions, arrays, or data
structures. Each segment has a base (starting
address) and a limit (size of the segment).
Example:
In segmentation, a process might have one
segment for code, another for data, and another
for the stack. Each segment can grow or shrink
dynamically based on program needs.
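Segment translation follows directly from the base and limit described above: the offset is checked against the limit, then added to the base. A minimal sketch with a hypothetical segment layout:

```python
def segment_translate(segment_table, seg, offset):
    """Translate (segment, offset) to a physical address via base/limit."""
    base, limit = segment_table[seg]
    if offset >= limit:
        # An out-of-bounds offset triggers a protection fault.
        raise RuntimeError(f"segmentation fault in segment '{seg}'")
    return base + offset

# Hypothetical process layout: segment name -> (base address, limit in bytes).
segments = {
    "code":  (0,     4096),
    "data":  (8192,  2048),
    "stack": (16384, 1024),
}
segment_translate(segments, "data", 100)  # -> 8192 + 100 = 8292
```

The limit check is what makes segments grow or shrink safely: resizing a segment only means updating its (base, limit) entry.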
Levels of Memory

➔ Level 1 or Registers: Registers are the small, fastest storage locations inside the CPU itself, holding the data the CPU is operating on at that instant. Commonly used registers include the Accumulator, Program Counter, and Address Register.

➔ Level 2 or Cache Memory: The fastest memory outside the CPU registers, where data is temporarily stored for faster access than main memory allows.

➔ Level 3 or Main Memory: The memory the computer works with currently. It is small compared to secondary storage, and it is volatile: once power is off, its data is lost.

➔ Level 4 or Secondary Memory: External memory that is not as fast as main memory, but whose data persists permanently.

Conclusion

• Memory hierarchy plays a crucial role in system performance.
• Optimizing cache and memory access improves execution speed and efficiency.
