This document discusses cache operations, virtual memory, and the memory hierarchy. It covers cache placement and replacement strategies and read and write policies for cache hits and misses. It then turns to virtual memory: why it is used, an overview of the terminology, and how virtual addresses are translated to physical addresses by a memory management unit. Virtual memory simplifies memory management and lets programs behave as if they have more memory than is physically available.


Caches and Virtual Memory
CS 333, Fall 2006

Topics
• Cache Operations
– Placement strategy
– Replacement strategy
– Read and write policy
• Virtual Memory
– Why?
– General overview
– Lots of terminology

Cache Operations
• Placement strategy
– Where to place an incoming block in the cache
• Read and write policies
– How to handle reads and writes on cache hits and misses
• Replacement strategy
– Which block to replace on a cache miss
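As a concrete (toy) illustration of the placement strategy, the set that receives an incoming block in a direct-mapped or set-associative cache can be computed from the block's address. This Python sketch assumes power-of-two block sizes and set counts; the function name is made up for illustration:

```python
def cache_set_index(addr, block_size, num_sets):
    """Pick the set for an incoming block: drop the block-offset bits,
    then take the block number modulo the number of sets."""
    block_number = addr // block_size
    return block_number % num_sets

# Direct-mapped cache with 64 sets of one 16-byte block each:
# address 0x1230 is block 0x123 (291), which maps to set 291 % 64 = 35.
```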

Read and Write Policies

Write Policies on Cache Hit
• Write-through
– Update both cache and main memory on a write
– Advantageous when there are few writes
– Disadvantageous when there are many writes (each write takes main-memory time)
• Write-back
– Write to the cache only; update main memory when the block is replaced (needs a dirty bit per cache block)
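The write-back behavior above can be sketched as a toy Python model. This is a simplification with one word per cache line and illustrative names, not the slides' design:

```python
class WriteBackCache:
    """Toy write-back cache: writes touch only the cache and set a
    dirty bit; main memory sees the new value only when the dirty
    line is evicted (written back)."""

    def __init__(self, memory):
        self.memory = memory   # backing store: dict addr -> value
        self.lines = {}        # addr -> (value, dirty bit)

    def write(self, addr, value):
        self.lines[addr] = (value, True)        # cache only; mark dirty

    def read(self, addr):
        if addr not in self.lines:              # read miss: fill from memory
            self.lines[addr] = (self.memory.get(addr, 0), False)
        return self.lines[addr][0]

    def evict(self, addr):
        value, dirty = self.lines.pop(addr)
        if dirty:                               # write back only if modified
            self.memory[addr] = value
```

A write-through cache would instead update `self.memory[addr]` inside `write` itself, making `evict` trivial but every write as slow as main memory.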

Read Miss Policies
• Read the block in from main memory. Two approaches:
– Forward the desired word first (faster, but more hardware needed)
– Delay until the entire block has been moved into the cache

Write Miss Policies
• Write allocate
– Bring the block into the cache from memory, then update it
• Write-no allocate
– Only update main memory (don't bring the block into the cache)

Write-through caches usually use write-no allocate; write-back caches usually use write allocate.
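The two write-miss policies can be sketched with word-granularity dictionaries standing in for the cache and main memory (a simplification; the function name is illustrative):

```python
def handle_write_miss(cache, memory, addr, value, allocate):
    """Write miss under the two policies: write allocate brings the
    block into the cache first; write-no allocate updates memory only."""
    if allocate:
        cache[addr] = memory.get(addr, 0)   # fetch the block into the cache
        cache[addr] = value                 # then perform the write there
    else:
        memory[addr] = value                # memory only; cache untouched
```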

Block Replacement Strategies

Block Replacement Policies
• If the cache is full, a block must be replaced:
– Least recently used (LRU) – uses counters
– Random replacement

What do you need to implement LRU?
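One answer to the question above: a counter (timestamp) per resident block plus a global clock. A minimal Python sketch, with illustrative names:

```python
class LRUCounters:
    """Toy LRU bookkeeping: every access stamps the block with the
    current clock value; the victim is the block with the smallest
    (oldest) stamp."""

    def __init__(self, num_blocks):
        self.num_blocks = num_blocks
        self.stamps = {}      # block id -> last-access time
        self.clock = 0        # global access counter

    def access(self, block):
        self.clock += 1
        if block not in self.stamps and len(self.stamps) == self.num_blocks:
            del self.stamps[self.victim()]   # full: evict the LRU block
        self.stamps[block] = self.clock

    def victim(self):
        return min(self.stamps, key=self.stamps.get)
```

Real hardware usually approximates this, since maintaining exact counters gets expensive as associativity grows.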

Virtual Memory

Why Have a Memory Hierarchy?
Want to create the illusion of large memory at small-memory speeds.

What we would like:
• Infinite storage
• Fast
• Cheap
• Compact
• Cold (low power and heat)
• Non-volatile (can remember w/o electricity)

Can't have everything:
• Bigger → slower (speed of light)
• Faster, denser → hotter
• Faster → more expensive

The Memory Hierarchy
Smaller, faster, and costlier (per byte) storage devices sit near the top; larger, slower, and cheaper (per byte) storage devices sit near the bottom:
• L0: registers – CPU registers hold words retrieved from the L1 cache
• L1: on-chip L1 cache (SRAM) – holds cache lines retrieved from the L2 cache
• L2: off-chip L2 cache (SRAM) – holds cache lines retrieved from main memory
• L3: main memory (DRAM) – holds disk blocks retrieved from local disks
• L4: local secondary storage (local disks) – holds files retrieved from disks on remote network servers
• L5: remote secondary storage (distributed file systems, Web servers)

Why Use Disks?
• Multiprogramming
– More than one application running at one time
• May need more than the available main memory to store all needed programs and data
• May want to share data between applications
• Virtual memory
• Cheap, large (but slow)

What Was There Before Virtual Memory?

Program Overlays
• Overlays
– Programmers had to use the principle of locality to choose segments of a program to keep in memory during program execution
• 80/20 rule (80% of the time is spent executing 20% of the code)
• Use program phase behavior
• Tedious to do by hand
• Difficult to determine the overlay set, especially when considering multiprogramming

What is Virtual Memory?
• A technique (an abstraction) for using disks to extend the apparent size of physical memory beyond its actual physical size
– Automatic storage allocation
• Logical memory – memory as seen by the process
• Physical memory – memory as seen by the processor

Virtual Memory Overview
• Components
– Memory Management Unit (MMU)
• mapping function between logical addresses and physical addresses
– Operating system
• controls the MMU
– Mapping tables
• guide the translation

Virtual Memory Terminology
• Effective address – address computed by the processor (as viewed inside the CPU)
• Logical address – same as the effective address (as viewed outside the CPU)
• Virtual address – generated by the MMU
• Physical address – address in physical memory (main memory)

Address translation path (CPU → main memory → disk storage):
1. CPU issues a logical address
2. MMU generates the virtual address, then translates it to a physical address
3. Physical address is sent to main memory
4. On a miss in main memory, the logical page is requested from disk storage
5. Disk storage delivers the logical page to main memory

Why Have Virtual Addresses?
• The virtual address can be larger than the logical address, allowing program units to be mapped to a much larger virtual address space

Example: PowerPC 601
– logical address: 32 bits
– virtual address: 52 bits
– physical address: depends on how much memory is installed

Virtual Memory Advantages
• Simpler addressing
– Programs can be compiled with their own address space
• No need for the compiler to generate addresses that are unique from addresses for other programs
• No need to break a program into fragments (overlays) to accommodate memory limitations
– Operating system
• Disks are cheaper
• Access control (read, write, execute)
– "bus error" – invalid virtual address
– "segmentation fault" – improper permissions for the type of access

Approaches to Implementing Virtual Memory

Memory Management Approaches
• Segmentation
• Paging

Segmentation
• Divide memory into segments (varying sizes)
• Problem: external fragmentation

Address Translation in Segmented Memory
(figure: segment base plus offset translation)
Q: How can the OS switch processes?
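The base-plus-offset translation for segmented memory can be sketched in Python. This is a simplification (real segment descriptors also carry access rights); the table shape and names are assumptions:

```python
def translate_segmented(seg_table, segment, offset):
    """seg_table maps segment number -> (base, limit). Offsets past
    the segment limit are rejected, analogous to a segmentation
    fault; valid offsets are added to the segment base."""
    base, limit = seg_table[segment]
    if offset >= limit:
        raise MemoryError("segmentation fault: offset 0x%x >= limit" % offset)
    return base + offset
```

Switching processes then amounts to switching which segment table the MMU consults, which is one answer to the question above.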

Intel 8086
• 4 segment bases initialized by the operating system
– code
– data
– stack
– extra (programmer use)
• 16-bit logical address
• 20-bit physical address

Paging
• Fixed-size pages
• Demand paging
– Pages brought in as needed
• Components
– Page table
• mapping of virtual pages to physical pages
• generally one page table per process in the system
– Virtual address:

Page Number | Offset in Page
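The virtual-address split shown above can be sketched as follows, assuming 4 KB pages (a size the slides do not specify):

```python
PAGE_SIZE = 4096   # assumed 4 KB pages, giving 12 offset bits

def split_vaddr(vaddr):
    """Split a virtual address into (page number, offset in page):
    the high bits select the page, the low 12 bits the byte within it."""
    return vaddr >> 12, vaddr & (PAGE_SIZE - 1)
```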

Page Table Entry
• Access control bits
– read, write, execute, etc.
• Presence bit
– present in main memory (or not)
• Dirty bit
– set if the page has been modified
• Use bit
– recently used?
• Page number
– physical page number OR pointer to secondary storage

Entry layout:
Access | Present | Dirty | Use | Page Number
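A toy packing of the entry layout above. The field widths here are assumptions for illustration (3 access bits, 1 bit each for present/dirty/use, a 20-bit page number); real page-table formats differ:

```python
# Illustrative layout: [access:3][present:1][dirty:1][use:1][page number:20]
def make_pte(access, present, dirty, use, page_number):
    return (access << 23) | (present << 22) | (dirty << 21) \
         | (use << 20) | (page_number & 0xFFFFF)

def pte_present(pte):
    return (pte >> 22) & 1        # is the page in main memory?

def pte_page_number(pte):
    return pte & 0xFFFFF          # physical page number (or disk pointer)
```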

Paging
• Problem
– Internal fragmentation
• The last page is unlikely to be full
• Page placement
– Page table is direct-mapped
• Page replacement
– Use bits

Address Translation in a Paged MMU
(figure: page-table lookup from virtual address to physical address)
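A minimal sketch of the paged lookup, assuming a per-process table keyed by virtual page number and 4 KB pages (both assumptions; the presence bit drives the page-fault path):

```python
PAGE_SIZE = 4096   # assumed 4 KB pages (12 offset bits)

def translate(page_table, vaddr):
    """Direct page-table lookup: the virtual page number indexes the
    table; a clear presence bit means the page is on disk, which a
    real MMU would report as a page fault (modeled as an exception)."""
    vpn, offset = divmod(vaddr, PAGE_SIZE)
    entry = page_table[vpn]        # {'present': bool, 'frame': int}
    if not entry['present']:
        raise LookupError("page fault: virtual page %d not resident" % vpn)
    return entry['frame'] * PAGE_SIZE + offset
```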

Summary
• Cache Operations
– Placement strategy
– Replacement strategy
– Read and write policy
• Virtual Memory
– Why?
– General overview
– Lots of terminology
