
Overview of Direct Memory Mapping

- The session focuses on different cache memory mapping techniques, starting with direct
memory mapping.
- Cache memory and main memory are organized similarly: the units of main memory are
called blocks and the units of cache are called lines.

 Memory Structure and Addressing


- Programs reside in secondary storage; during execution they become processes, which
are subdivided into pages.
- Main memory is divided into frames of the same size as the pages, with the operating
system managing this subdivision.
- The smallest addressable memory unit is a word; in a byte-addressable memory, each
word is one byte.
- For a main memory of 64 words and block size of 4 words, there are 16 blocks numbered
from 0 to 15.
- Addressing requires a certain number of bits: for 64 words, 6 address bits are needed,
split between block identification and addressing a word within the block (a minimal
calculation is sketched after this list).
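
A minimal Python sketch of the bit arithmetic above, assuming the sizes used in the session (64-word main memory, 4-word blocks); the variable names are illustrative only.

    import math

    MAIN_MEMORY_WORDS = 64   # main memory size from the example
    BLOCK_SIZE_WORDS = 4     # words per block

    num_blocks = MAIN_MEMORY_WORDS // BLOCK_SIZE_WORDS      # 16 blocks, numbered 0..15
    address_bits = int(math.log2(MAIN_MEMORY_WORDS))        # 6 bits address 64 words
    word_bits = int(math.log2(BLOCK_SIZE_WORDS))            # 2 bits pick a word within a block
    block_bits = address_bits - word_bits                   # 4 bits identify the block

    print(num_blocks, address_bits, block_bits, word_bits)  # 16 6 4 2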

 Cache Memory and Mapping


- A cache of 16 words with a block size of 4 words results in 4 lines.
- Mapping occurs in a round-robin manner, where each block of main memory is assigned
to a cache line.
- The least significant two bits of the block number determine which cache line the block
maps onto, creating a many-to-one relationship (see the sketch after this list).
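
The mapping rule above can be written as a one-line function: with four lines, the cache line is the block number modulo 4, which is exactly its least significant two bits. A rough Python sketch (the function name is hypothetical):

    NUM_CACHE_LINES = 4   # 16-word cache with 4-word lines

    def cache_line_for_block(block_number: int) -> int:
        # modulo 4 keeps only the two least significant bits of the block number
        return block_number % NUM_CACHE_LINES

    for block in range(16):
        print(f"block {block:2d} ({block:04b}) -> line {cache_line_for_block(block)}")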

 Tag Bits and Identification


- The block-number field of the physical address is further split: its least significant
bits give the cache line number, and the remaining most significant bits are the tag bits.
- Tag bits identify which block is currently present in a cache line, allowing for
efficient retrieval (see the sketch after this list).
- The direct mapping technique is characterized by a strict, fixed mapping of main memory
blocks to cache lines.
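
A sketch of splitting an address into tag, line, and word-offset fields, using the widths implied by the running example (2 tag bits, 2 line bits, 2 word bits); this is an illustration, not code from the session:

    TAG_BITS, LINE_BITS, WORD_BITS = 2, 2, 2

    def split_address(pa: int):
        word = pa & ((1 << WORD_BITS) - 1)                 # lowest 2 bits: word within the line
        line = (pa >> WORD_BITS) & ((1 << LINE_BITS) - 1)  # next 2 bits: cache line
        tag = pa >> (WORD_BITS + LINE_BITS)                # remaining 2 bits: tag
        return tag, line, word

    print(split_address(0b011111))  # (1, 3, 3): tag 01, line 11, word 11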

 Conclusion
- The session concludes with an assurance of a clear understanding of direct memory
mapping, with future sessions planned to solve numerical problems related to the
concept.
01:27

Introduction to Cache Memory

Memory is divided into words; in a byte-addressable memory, each word is one byte. A main memory of
64 words organized into blocks of 4 words gives 16 blocks, numbered from 0 to 15. Addressing these
locations requires address bits: one bit can distinguish two locations, two bits can distinguish four, and
in general n bits can address 2^n memory cells.

02:43

Memory Organization

In a memory system addressing words 0 to 63, 6 physical address (PA) bits are required, since log base 2
of 64 is 6. These 6 bits are divided so that the most significant 4 bits identify one of the 16 blocks, while
the least significant 2 bits specify the word within a block. For instance, the PA 011111 has block bits
0111, giving a block identifier of 7, demonstrating the importance of this bit allocation.
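
The same decoding can be checked mechanically; the sketch below (a hypothetical helper, not from the session) splits a 6-bit PA into its block identifier and word offset:

    def decode_pa(pa: int):
        block = pa >> 2     # most significant 4 bits: block identifier
        word = pa & 0b11    # least significant 2 bits: word within the block
        return block, word

    print(decode_pa(0b011111))  # (7, 3): block 7, word 3, i.e. the last word of block 7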

04:00

Physical Address Bits

Analyzing the generated physical address shows that the last word of block 7 corresponds to main-memory
word 31 (binary 011111). In a cache of 16 words with a block size of four words there are four lines,
requiring two bits to identify them. Since the 16 main-memory blocks cannot all be held in the 4 cache
lines at once, a round-robin style mapping is utilized.
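
As a quick check of that arithmetic, the word number can be recovered from the (block, offset) pair, and the line count fixes the number of line bits; a small sketch under the same assumptions:

    import math

    BLOCK_SIZE_WORDS = 4
    NUM_LINES = 16 // BLOCK_SIZE_WORDS        # 16-word cache -> 4 lines
    LINE_BITS = int(math.log2(NUM_LINES))     # 2 bits identify a line

    def word_address(block: int, offset: int) -> int:
        return block * BLOCK_SIZE_WORDS + offset

    print(word_address(7, 3), LINE_BITS)      # 31 2  (last word of block 7 is word 31)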

05:19

Cache and Block Size

Blocks are mapped to cache lines in a straightforward manner: the first four blocks (0 to 3) are assigned
to lines 0 to 3. Once the blocks outnumber the available lines, a round-robin approach wraps the mapping
around, so block 4 maps back to line 0, block 5 to line 1, and the cycle continues for subsequent blocks.
The least significant two bits of the block number therefore determine the cache line assignment, a
many-to-one relationship, while the offset bits specify individual words within each block or line.
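
Grouping the 16 blocks by the line they map to makes this many-to-one relationship explicit; a minimal sketch:

    from collections import defaultdict

    lines = defaultdict(list)
    for block in range(16):
        lines[block % 4].append(block)   # line = least significant two bits of the block number

    for line in sorted(lines):
        print(f"line {line}: blocks {lines[line]}")
    # line 0: blocks [0, 4, 8, 12]
    # line 1: blocks [1, 5, 9, 13]
    # line 2: blocks [2, 6, 10, 14]
    # line 3: blocks [3, 7, 11, 15]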

06:43

Mapping Process

In the context of direct memory mapping, the last two bits of the block number function as the line
number, indicating which cache line a given block will map onto, while the remaining bits are known as
tag bits. For example, line 3 (binary 11) is shared by blocks 3, 7, 11, and 15, and their tag bits (00, 01,
10, and 11 respectively) identify which of these blocks is mapped onto that line at any moment,
demonstrating how the tags facilitate the tracking of cached data. Ultimately, comparing these tag bits
against those of a requested address pinpoints whether the wanted block is currently present in the
cache.
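
A tag comparison is what turns this bookkeeping into hits and misses. The sketch below is a hypothetical, simplified direct-mapped lookup (no valid bits or data array), using the running example's field widths; it is an illustration of the idea, not the session's own code:

    WORD_BITS, LINE_BITS = 2, 2
    cache_tags = {}   # line number -> tag of the block currently held in that line

    def access(pa: int) -> bool:
        """Return True on a hit; on a miss, install the new block's tag."""
        line = (pa >> WORD_BITS) & ((1 << LINE_BITS) - 1)
        tag = pa >> (WORD_BITS + LINE_BITS)
        if cache_tags.get(line) == tag:
            return True
        cache_tags[line] = tag   # the incoming block replaces whatever was on this line
        return False

    print(access(0b001111))  # False: miss, block 3 (tag 00) fills line 3
    print(access(0b001111))  # True:  hit, tags match
    print(access(0b011111))  # False: miss, block 7 (tag 01) evicts block 3 from line 3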

08:05

Direct Mapping Technique


Direct mapping is a technique where main memory blocks are directly assigned to cache lines, adhering
to a strict mapping procedure. The session aimed to provide a clear understanding of this concept, with
future sessions dedicated to solving numerical problems related to it. Appreciation was expressed for
viewer engagement, with an invitation to join the next discussion.

