
Computer Organization and Architecture
Carl Hamacher, Zvonko Vranesic, Safwat Zaky, Computer Organization, 5th Edition, Tata McGraw Hill, 2002.
Module-4
MEMORY SYSTEM
Basic Concepts
 The maximum size of the memory that can be used in any computer is determined by the addressing scheme.
 For example, a computer that generates 16-bit addresses is capable of addressing up to 2^16 = 64K memory locations.
 Machines whose instructions generate 32-bit addresses can utilize a memory that contains up to 2^32 = 4G (giga) locations.
 The number of locations represents the size of the address space of the computer.
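The relationship between address width and address-space size is a simple power of two and can be checked with a short calculation. A minimal C sketch (the printed labels are illustrative only):

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* Number of addressable locations = 2^k for a k-bit address. */
    unsigned widths[] = {16, 32};
    for (unsigned i = 0; i < 2; i++) {
        unsigned k = widths[i];
        uint64_t locations = 1ULL << k;              /* 2^k */
        printf("%u-bit addresses -> %llu locations\n",
               k, (unsigned long long)locations);
    }
    return 0;                                        /* prints 65536 and 4294967296 */
}
```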
Basic Concepts..
 Most modern computers are byte-addressable.
 The memory is usually designed to store and retrieve data in word-length quantities.
 Data transfer between the memory and the processor takes place through the use of two processor registers, usually called MAR (memory address register) and MDR (memory data register).
Basic Concepts..
 If MAR is k bits long and MDR is n bits long, then the memory unit may contain up to 2^k addressable locations.
 During a memory cycle, n bits of data are transferred between the memory and the processor.
 This transfer takes place over the processor bus, which has k address lines and n data lines.
 The bus also includes the control lines Read/Write (R/W̄) and Memory Function Completed (MFC) for coordinating data transfers.
Basic Concepts..
 The connection between the processor and the memory is shown in Figure 5.1.
[Figure 5.1: Connection of the memory to the processor. MAR drives a k-bit address bus, MDR connects to an n-bit data bus, and control lines (R/W̄, MFC, etc.) coordinate transfers; the memory provides up to 2^k addressable locations with a word length of n bits.]
Basic Concepts..
 The processor reads data from the memory by loading the address of the required memory location into the MAR register.
 The R/W̄ line is set to 1.
 The memory responds by placing the data from the addressed location onto the data lines, and confirms this action by asserting the MFC signal.
 Upon receipt of the MFC signal, the processor loads the data on the data lines into the MDR.
Basic Concepts..
 The processor writes data into a memory location by loading the address of this location into MAR and loading the data into MDR.
 The R/W̄ line is set to 0.
 If read or write operations involve consecutive address locations in the main memory, then a "block transfer" operation can be performed.
 The only address sent to the memory is the one that identifies the first location.
Basic Concepts..
 Measures for the speed of a memory:
 Memory access time - the time that elapses between the initiation of an operation and the completion of that operation.
 Memory cycle time - the minimum time delay required between the initiation of two successive memory operations.
Basic Concepts..
 A memory unit is called random-access memory (RAM) if any location can be accessed for a Read or Write operation in some fixed amount of time that is independent of the location's address.
 This distinguishes such memory units from serial, or partly serial, access storage devices such as magnetic disks and tapes.
 Access time on the latter devices depends on the address or position of the data.
Basic Concepts..
 The processor of a computer can usually process instructions and data faster than they can be fetched from a reasonably priced memory unit.
 The memory cycle time, then, is the bottleneck in the system.
 One way to reduce the memory access time is to use a cache memory.
 This is a small, fast memory that is inserted between the larger, slower main memory and the processor.
 It holds the currently active segments of a program and their data.
Basic Concepts..
 An important design issue is to provide a computer system with as large and fast a memory as possible, within a given cost target.
 Several techniques are used to increase the effective size and speed of the memory:
 Cache memory - increases the effective speed.
 Virtual memory - increases the effective size.
Basic Concepts..
 Cache memory
 This is a small, fast memory that is inserted
between the larger, slower main memory and
the processor.
 It holds the currently active segments of a
program and their data.
 Reduces memory access time
Basic Concepts..
 Virtual memory
 Data is stored in physical memory locations that have addresses different from those specified by the program.
 An address generated by the processor is referred to as a virtual or logical address.
 The virtual address space is mapped onto the physical memory where data are actually stored.
 The mapping function is implemented by a special memory control circuit, often called the memory management unit.
 Virtual memory is used to increase the apparent size of the physical memory.
 Data are addressed in a virtual address space that can be as large as the addressing capability of the processor.
Semiconductor RAM
Memories
Semiconductor RAM Memories
 Semiconductor memories are available in a wide
range of speeds.
 Their cycle times range from 100 ns to less than
10 ns.
 When first introduced in the late 1960s, they
were much more expensive than the magnetic-
core memories they replaced.
 Because of rapid advances in VLSI (Very Large
Scale Integration) technology, the cost of
semiconductor memories has dropped
dramatically.
 As a result, they are now used almost
exclusively in implementing memories.
Internal Organization of
Memory Chips
 Each memory cell can hold one bit of
information.
 Memory cells are organized in the form
of an array.
 One row is one memory word.
 All cells of a row are connected to a
common line, known as the word line.
 Word line is connected to the address
decoder.
 The cells in each column are connected to a Sense/Write circuit by two bit lines.
Internal Organization of
Memory Chips..
Internal Organization of
Memory Chips..
 Figure 5.2 is an example of a very small memory circuit consisting of 16 words of 8 bits each.
 This is referred to as a 16 × 8 organization.
 The data input and the data output of each Sense/Write circuit are connected to a single bidirectional data line that can be connected to the data lines of a computer.
 Two control lines, R/W̄ and CS, are provided: R/W̄ specifies the required operation, and CS (Chip Select) selects a given chip in a multichip memory system.
Internal Organization of
Memory Chips..
 The memory circuit in Figure 5.2 stores 128 bits
and requires 14 external connections for
address, data, and control lines.
 It also needs two lines for power supply and ground
connections.
 If the circuit has 1K (1024) memory cells, this
circuit can be organized as a 128 × 8 memory,
requiring a total of 19 external connections.
 Alternatively, the same number of cells can be
organized into a 1K×1 format.
 In this case, a 10-bit address is needed, but there
is only one data line, resulting in 15 external
connections.
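The pin-count arithmetic on this slide can be reproduced with a small helper. A minimal C sketch, assuming the chip needs log2(words) address lines, one pin per data bit, two control lines (R/W̄ and CS), and two power/ground pins; the function names are illustrative:

```c
#include <stdio.h>

/* Address lines needed for a given number of words: smallest a with 2^a >= words. */
static unsigned addr_bits(unsigned long words) {
    unsigned bits = 0;
    while ((1UL << bits) < words) bits++;
    return bits;
}

static void report(unsigned long words, unsigned bits_per_word) {
    unsigned a = addr_bits(words);
    /* address + data + 2 control (R/W-bar, CS) + 2 power/ground pins */
    unsigned total = a + bits_per_word + 2 + 2;
    printf("%lu x %u: %u address + %u data + 2 control + 2 power = %u pins\n",
           words, bits_per_word, a, bits_per_word, total);
}

int main(void) {
    report(16, 8);     /* Figure 5.2: 14 signal connections plus 2 power/ground */
    report(128, 8);    /* 19 external connections, as quoted above */
    report(1024, 1);   /* 15 external connections, as quoted above */
    return 0;
}
```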
Internal Organization of
Memory Chips..
 Figure 5.3 shows such an organization.
 The required 10-bit address is divided into two groups of 5 bits each to form the row and column addresses for the cell array.
 A row address selects a row of 32 cells, all of which are accessed in parallel.
 But only one of these cells is connected to the external data line, based on the column address.
Internal Organization of
Memory Chips..
Internal Organization of
Memory Chips..
 Commercially available memory chips
contain a much larger number of
memory cells than the examples shown
in Figures 5.2 and 5.3.
 Large chips have essentially the
same organization as Figure 5.3, but
use a larger memory cell array and
have more external connections.
 For example, a 1G-bit chip may have a 256M × 4 organization, in which case a 28-bit address is needed and 4 bits are transferred at a time.
Static Memories
 Memories that consist of circuits
capable of retaining their state as
long as power is applied are known
as static memories.
Static RAM (SRAM)
 Figure 5.4 illustrates how a static RAM (SRAM) cell may be implemented.
 Two inverters are cross-connected to form a latch.
 The latch is connected to two bit lines by transistors T1 and T2.
 These transistors act as switches that can be opened or closed under control of the word line.
 When the word line is at ground level, the transistors are turned off and the latch retains its state.
 For example, if the logic value at point X is 1 and at point Y is 0, the cell is in state 1.
SRAM Cell
Static RAM (SRAM)..
 Read operation:
 The word line is activated to close switches T1 and T2.
 If the cell is in state 1, the signal on bit line b is high and the signal on bit line b′ is low.
 The opposite is true if the cell is in state 0.
 Thus, b and b′ are always complements of each other.
 The Sense/Write circuit at the end of the two bit lines monitors their state and sets the output accordingly.
Static RAM (SRAM)..
 Write operation:
 The Sense/Write circuit drives bit lines b and b′, instead of sensing their state.
 It places the appropriate value on bit line b and its complement on b′ and activates the word line.
 This forces the cell into the corresponding state, which the cell retains when the word line is deactivated.
CMOS Cell
 A CMOS realization of the cell in Figure 5.4 is given in Figure 5.5.
 Transistor pairs (T3, T5) and (T4, T6) form the inverters in the latch.
 The state of the cell is read or written as just explained.
 For example, in state 1, the voltage at point X is maintained high by having transistors T3 and T6 on, while T4 and T5 are off.
 If T1 and T2 are turned on (closed), bit lines b and b′ will have high and low signals, respectively.
CMOS Cell..
Static RAM (SRAM)..
 SRAMs are said to be volatile memories because
their
contents are lost when power is interrupted.
 Advantage of CMOS SRAMs is their very low
power consumption, because current flows in the
cell only when the cell is being accessed.
 Otherwise, T1, T2, and one transistor in each inverter
are turned off, ensuring that there is no continuous
electrical path between Vsupply and ground.
 Static RAMs can be accessed very quickly.
 Access times on the order of a few nanoseconds are
found in commercially available chips.
 SRAMs are used in applications where speed is critical.
SRAMs vs. DRAMs
 Static RAMs (SRAMs):
 Consist of circuits that are capable of retaining their
state as long
as the power is applied.
 Volatile memories, because their contents are lost when
power is interrupted.
 Access times of static RAMs are in the range of a few nanoseconds.
 However, the cost is usually high.
 Dynamic RAMs (DRAMs):
 Do not retain their state indefinitely.
 Contents must be periodically refreshed.
 Contents may be refreshed while accessing them for
reading.
Asynchronous DRAMs
 Information is stored in a dynamic memory
cell in the form of a charge on a capacitor
 This charge can be maintained for only
tens of milliseconds.
 Since the cell is required to store information
for a much longer time, its contents must be
periodically refreshed by restoring the
capacitor charge to its full value.
 This occurs when the contents of the cell are read
or when new information is written into it.
 An example of a dynamic memory cell that
consists of a capacitor, C, and a transistor, T, is
shown in Figure 5.6.
Asynchronous DRAMs..
Asynchronous DRAMs..
 To store information in this cell, transistor
T is turned on and an appropriate
voltage is applied to the bit line.
 This causes a known amount of charge to be
stored in the capacitor.
 After the transistor is turned off, the
capacitor begins to discharge.
 The information stored in the cell can be retrieved correctly only if it is read before the charge in the capacitor drops below some threshold value.
Asynchronous DRAMs..
 During a Read operation, the transistor in a selected cell is turned on.
 A sense amplifier connected to the bit line detects whether the charge stored in the capacitor is above or below the threshold value.
 If the charge is above the threshold, the sense amplifier drives the bit line to the full voltage representing the logic value 1.
Asynchronous DRAMs..
 If the sense amplifier detects that the charge in the capacitor is below the threshold value, it pulls the bit line to ground level to discharge the capacitor fully.
 Thus, reading the contents of a cell automatically refreshes its contents.
 Since the word line is common to all cells in a row, all cells in a selected row are read and refreshed at the same time.
Asynchronous DRAMs..
 A 16-Megabit DRAM chip, configured as 2M
× 8, is shown in Figure 5.7.
 The cells are organized in the form of a 4K x
4K array.
 The 4096 cells in each row are divided into 512
groups of 8.
 A row can store 512 bytes of data.
 12 address bits are needed to select a row.
 Another 9 bits are needed to specify a group of
8 bits in
the selected row.
 Thus, a 21-bit address is needed to access a byte
in this memory.
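The split of the 21-bit byte address into row and column parts can be illustrated with a few shift-and-mask operations. A minimal C sketch, assuming the 12/9 split described above; the sample address is arbitrary:

```c
#include <stdio.h>
#include <stdint.h>

/* For the 2M x 8 chip organized as a 4K x 4K cell array:
 * high-order 12 bits of the 21-bit byte address select the row,
 * low-order 9 bits select one of the 512 byte groups in that row. */
#define ROW_BITS 12
#define COL_BITS 9

int main(void) {
    uint32_t byte_addr = 0x12ABC;                       /* any 21-bit address */
    uint32_t row = byte_addr >> COL_BITS;               /* applied first, latched by RAS */
    uint32_t col = byte_addr & ((1u << COL_BITS) - 1);  /* applied next, latched by CAS */
    printf("address 0x%05X -> row %u, column group %u\n",
           (unsigned)byte_addr, (unsigned)row, (unsigned)col);
    return 0;
}
```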
Asynchronous DRAMs..
Asynchronous DRAMs..
 The high-order 12 bits constitute the row
address and the low-order 9 bits of the
address constitute column address of a byte.
 To reduce the number of pins needed for
external connections, the row and column
addresses are multiplexed on 12 pins.
 During a Read or a Write operation, the row
address is applied first.
 It is loaded into the row address latch in
response to a signal pulse on the Row Address
Strobe (RAS) input of the chip.
Asynchronous DRAMs..
 Then a Read operation is initiated, in which all cells on
the
selected row are read and refreshed.
 Shortly after the row address is loaded, the column
address is applied to the address pins and loaded into
the column address latch under control of the Column
Address Strobe (CAS) signal.
 The information in this latch is decoded and the appropriate group of 8 Sense/Write circuits is selected.
 If the R/W̄ control signal indicates a Read operation, the output values of the selected circuits are transferred to the data lines, D7-0.
Asynchronous DRAMs..
 In commercial DRAM chips, the RAS and
CAS control signals are active low.

 To indicate this fact, these signals are shown on diagrams with overbars, as RAS and CAS.
 To ensure that the contents of a DRAM are maintained, each row of cells must be accessed periodically.
 A refresh circuit usually performs this
function automatically.
Asynchronous DRAMs..
 The timing of the memory device is controlled asynchronously.
 A specialized memory controller circuit provides the necessary control signals, RAS and CAS, that govern the timing.
 The processor must take into account the delay in the response of the memory.
 Such memories are referred to as asynchronous DRAMs.
Fast Page Mode
 Suppose we want to access consecutive bytes in the selected row.
 This can be done without having to reselect the row.
 Add a latch at the output of the sense circuits in each row.
 All the latches are loaded when the row is selected.
 Different column addresses can be applied to select and place different bytes on the data lines.
 A consecutive sequence of column addresses can be applied under control of the CAS signal, without reselecting the row.
 This allows a block of data to be transferred at a much faster rate than random accesses.
 A small group of bytes is referred to as a block, and a larger group as a page; hence this feature is called fast page mode.
Read-Only Memories
(ROMs)
Read-Only Memories -
Introduction
 Both SRAM and DRAM chips are volatile:
 Lose the contents when the power is turned off.
 Many applications need memory
devices to retain the stored information
if power is turned off.
 For example, when a computer is turned on, the operating system must be loaded from the disk into the memory.
 This requires execution of a program that "boots" the operating system.
 Need to store these instructions so that they will not be lost after the power is turned off.
 We need to store the instructions in a nonvolatile memory.
Read-Only Memories –
Introduction..
 Nonvolatile memory is used extensively in
embedded systems.
 Such systems typically do not use disk storage
devices.
 Their programs are stored in nonvolatile semiconductor
memory devices.
 Non-volatile memory is read in the same
manner as volatile memory.
 Separate writing process is needed to place
information in this memory.
 Since normal operation involves only reading of data, this type of memory is called read-only memory (ROM).
Read-Only Memory (ROM)
Read-Only Memory (ROM)..
 Figure 5.12 shows a possible configuration for a ROM
cell.
 A logic value 0 is stored in the cell if the
transistor is connected to ground at point P;
otherwise, a 1 is stored.
 The bit line is connected through a resistor to
the power supply.
 To read the state of the cell, the word line is
activated.
 Thus, the transistor switch is closed and the voltage
on the bit line drops to near zero if there is a
connection between the transistor and ground.
 If there is no connection to ground, the bit line
remains at the high voltage, indicating a 1.
Programmable Read-Only
Memory (PROM)
 Allows the data to be loaded by a user.
 Programmability is achieved by inserting a
fuse at point P in Figure 5.12.
 Before it is programmed, the memory contains
all 0s.
 The user can insert 1s at the required locations
by burning
out the fuses at these locations using high-
current pulses.
 This process is irreversible.
 PROMs provide flexibility and convenience.
 PROMs provide a faster and less expensive approach because they can be programmed directly by the user.
Erasable Programmable Read-
Only Memory (EPROM)
 Allows the stored data to be erased and new
data to be loaded.
 It provides considerable flexibility
during the development phase of
digital systems.
 They can be used in place of ROMs
while software is
being developed.
 Memory changes and updates can be
easily made.
 An EPROM cell has a structure similar
to the ROM cell in
Figure 5.12.
Erasable Programmable Read-
Only Memory (EPROM)..
 Advantage
 Their contents can be erased and
reprogrammed.
 Erasure is done by exposing the
chip to ultraviolet (UV) light.
 EPROM chips are mounted in packages that
have transparent windows.
 Disadvantage
 Chip must be physically removed from the
circuit for reprogramming and that its entire
contents are erased by the ultraviolet light.
Electrically Erasable Programmable
Read-Only Memory (EEPROM)
 They can be both programmed and erased electrically.
 They do not have to be removed for erasure.
 It is possible to erase the cell contents selectively.
 The only disadvantage of EEPROMs is that different voltages are needed for erasing, writing, and reading the stored data.
Flash Memory
 A flash cell is based on a single transistor
controlled by trapped charge, just like an
EEPROM cell.
 In a flash device, it is possible to read the
contents of a single cell, but it is only possible
to write an entire block of cells.
 Flash devices have greater density, which
leads to higher capacity and a lower cost
per bit.
 They require a single power supply voltage,
and
consume less power in their operation.
 Applications – used in portable equipment that is
battery driven.
Flash Memory..
 Single flash chips do not provide
sufficient storage capacity for the
applications mentioned above.
 Larger memory modules consisting of a
number of chips are needed.
 There are two popular choices for
the implementation of larger
memory modules
 Flash cards
 Flash drives
Flash Cards
 Flash chips are mounted on a small card.
 Such flash cards have a standard
interface that makes them usable in a
variety of products.
 A card is simply plugged into a

conveniently accessible slot.


 Flash cards come in a variety of

memory sizes.
 Typical sizes are 8, 32, and 64 Mbytes.
 A minute of music can be stored in about 1 Mbyte of memory, using the MP3 encoding format.
Flash Drives
 Flash drives are designed to fully emulate hard disks.
 The storage capacity of flash drives is significantly lower.
 Advantages
 Flash drives are solid state electronic devices.
 They have shorter seek and access times, which results in faster response.
 They have lower power consumption, which makes them attractive for battery-driven applications.
Flash Drives..
 Disadvantages
 Smaller capacity and higher cost per bit,
compared to hard disk drives.
 Flash memory will deteriorate after it has been written a number of times (typically one million times).
Memory Hierarchy
Cache Memories
Cache Memories
 Processor is much faster than the main
memory.
 As a result, the processor has to spend
much of its time waiting while instructions
and data are being fetched from the main
memory.
 Major obstacle towards achieving good
performance.
 Speed of the main memory cannot be
increased beyond a certain point.
 Cache memory is an architectural arrangement which makes the main memory appear faster to the processor than it really is.
Locality of Reference
 Analysis of programs shows that most
of their execution time is spent on
routines in which many instructions are
executed repeatedly.
 These instructions may constitute a
simple loop, nested loops, or a few
procedures that repeatedly call each
other.
 Many instructions in localized areas of the program are executed repeatedly during some time period, and the remainder of the program is accessed relatively infrequently.
Locality of Reference..
 Temporal locality of reference
 Recently executed instruction is likely
to be executed again very soon.
 Spatial locality of reference
 Instructions with addresses close to a
recently executed instruction are likely
to be executed soon.
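A short loop makes both kinds of locality concrete. In the sketch below (purely illustrative, not from the text), the loop body and the variable sum are reused on every iteration (temporal locality), while the array elements are touched at consecutive addresses (spatial locality):

```c
#include <stdio.h>

int main(void) {
    int a[1024];
    long sum = 0;                 /* 'sum' is reused every iteration: temporal locality */
    for (int i = 0; i < 1024; i++)
        a[i] = i;                 /* the loop instructions execute repeatedly */
    for (int i = 0; i < 1024; i++)
        sum += a[i];              /* consecutive addresses: spatial locality */
    printf("sum = %ld\n", sum);
    return 0;
}
```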
Operation of Cache Memories
 If the active segments of a program can be
placed in a fast cache memory, then the total
execution time can be reduced significantly.
 The memory control circuitry is designed to take
advantage of the property of locality of
reference.
 The temporal aspect of the locality of reference suggests that
whenever an information item (instruction or data) is first
needed, this item should be brought into the cache where it
will hopefully remain until it is needed again.
 The spatial aspect suggests that instead of fetching just one
item from the main memory to the cache, it is useful to fetch
several items that reside at adjacent addresses as well.
 The term block refers to a set of contiguous
address
locations of some size.
Operation of Cache Memories..
Operation of Cache Memories..
 Processor issues a Read request, a block of
words is transferred from the main memory
to the cache, one word at a time.
 Subsequent references to the data in this block
of words are found in the cache.
 At any given time, only some blocks in the main
memory are held in the cache.
 Which blocks in the main memory are in the cache is
determined
by a “mapping function”.
 When the cache is full, and a block of words needs to be transferred from the main memory, some block of words in the cache must be replaced.
Cache Hit
 The processor does not need to know explicitly
about the existence of the cache.
 It simply issues Read and Write requests
using addresses that refer to locations in
the memory.
 The cache control circuitry determines
whether the
requested word currently exists in the
cache.
 If it does, the Read or Write operation is
performed on the appropriate cache location.
 In this case, a read hit or write hit is said to have
occurred.
 In a Read operation, the main memory is not involved.
Cache Hit..
 For a Write operation, the system can
proceed in two ways.
 Write-through protocol
 The cache location and the main memory
location are
updated simultaneously.
 Write-back or copy-back protocol
 Only the cache location is updated and it is
marked as updated with an associated flag bit,
often called the dirty or modified bit.
 The main memory location of the word is updated later, when the block containing this marked word is to be removed from the cache to make room for a new block.
Cache Miss
 When the addressed word in a Read
operation is not in the cache, a read miss
occurs.
 The block of words that contains the
requested word is copied from the main
memory into the cache.
 After the entire block is loaded into the
cache, the particular word requested is
forwarded to the processor.
 Alternatively, this word may be sent to the processor as soon as it is read from the main memory.
 This approach, called load-through or early restart, reduces the processor's waiting time at the expense of more complex circuitry.
Cache Miss..
 During a Write operation, if the addressed
word is not in the cache, a write miss
occurs.
 Then, if the write-through protocol is used,
the information is written directly into the
main memory.
 In the case of the write-back protocol, the
block containing the addressed word is first
brought into the cache, and then the desired
word in the cache is overwritten with the
new information.
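The hit/miss handling described above can be sketched in code. The following C fragment assumes a direct-mapped cache with a write-back policy; the block size, number of lines, and the mem_read_block/mem_write_block helpers are illustrative assumptions, not details given in these slides:

```c
#include <stdint.h>
#include <stdbool.h>

#define BLOCK_WORDS 8      /* words per block (assumed) */
#define NUM_LINES   128    /* number of cache lines (assumed) */

struct line {
    bool     valid;
    bool     dirty;                /* "modified" bit used by the write-back protocol */
    uint32_t tag;
    uint32_t data[BLOCK_WORDS];
};

static struct line cache[NUM_LINES];

/* Hypothetical helpers standing in for the main-memory interface. */
extern void mem_read_block(uint32_t block_addr, uint32_t *buf);
extern void mem_write_block(uint32_t block_addr, const uint32_t *buf);

uint32_t cache_read(uint32_t word_addr) {
    uint32_t block  = word_addr / BLOCK_WORDS;
    uint32_t offset = word_addr % BLOCK_WORDS;
    uint32_t index  = block % NUM_LINES;      /* mapping function (direct-mapped) */
    uint32_t tag    = block / NUM_LINES;
    struct line *l  = &cache[index];

    if (!(l->valid && l->tag == tag)) {       /* read miss */
        if (l->valid && l->dirty)             /* write the displaced block back first */
            mem_write_block(l->tag * NUM_LINES + index, l->data);
        mem_read_block(block, l->data);       /* copy the whole block from main memory */
        l->valid = true;
        l->dirty = false;
        l->tag   = tag;
    }
    return l->data[offset];                   /* read hit (or the miss just serviced) */
}
```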
Virtual Memories
Virtual Memories
 Recall that an important challenge in the
design of a computer system is to provide a
large, fast memory system at an affordable
cost.
 Architectural solutions to increase the
effective speed and size of the memory
system.
 Cache memories were developed to
increase the effective speed of the
memory system.
 Virtual memory is an architectural solution to increase the effective size of the memory system.
Virtual Memories..
 In most modern computer systems, the
physical main memory is not as large as the
address space spanned by an address issued
by the processor.
 For example, a processor that issues 32-bit addresses
has an addressable space of 4G bytes.
 The size of the main memory in a typical
computer ranges from a few hundred
megabytes to 1G bytes.
 When a program does not completely fit into the
main memory, the parts of it not currently being
executed are stored on secondary storage
devices, such as magnetic disks.
Virtual Memories..
 All parts of a program that are eventually
executed are
first brought into the main memory.
 When a new segment of a program is to be moved into a
full memory, it must replace another segment already in
the memory.
 In modern computers, the operating system
moves programs and data automatically
between the main memory and secondary
storage.
 Thus, the application programmer does not
need to be aware of limitations imposed by the
available main memory.
 Techniques that automatically move program and data blocks into the physical main memory when they are required for execution are called virtual-memory techniques.
Virtual Memories..
 Programs, and hence the processor, reference an instruction and data space that is independent of the available physical main memory space.
 The binary addresses that the processor issues for either instructions or data are called virtual or logical addresses.
 These addresses are translated into physical addresses by a combination of hardware and software components.
Virtual Memories..
 If a virtual address refers to a part of
the program or data space that is
currently in the physical memory, then
the contents of the appropriate
location in the main memory are
accessed immediately.
 If the referenced address is not in the main memory, its contents must be brought into a suitable location in the memory before they can be used.
Virtual Memories..
Virtual Memories..
 Figure 5.26 shows a typical
organization that implements virtual
memory.
 A special hardware unit, called the
Memory Management Unit (MMU),
translates virtual addresses into
physical addresses.
 When the desired data (or instructions) are
in the main memory, these data are
fetched.
 If the data are not in the main memory, the MMU causes the operating system to bring the data into the memory from the disk.
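Conceptually, the MMU's translation step can be sketched as a page-table lookup. The C fragment below is a minimal illustration, assuming a single-level page table, a 4-KB page size, and a hypothetical os_load_page_from_disk routine; real MMUs perform this in hardware with TLBs and more elaborate table structures:

```c
#include <stdint.h>
#include <stdbool.h>

#define PAGE_SIZE   4096u            /* assumed page size */
#define PAGE_SHIFT  12
#define NUM_PAGES   1024u            /* assumed number of virtual pages */

struct pte {                         /* one page-table entry */
    bool     present;                /* is the page currently in main memory? */
    uint32_t frame;                  /* physical frame number if present */
};

static struct pte page_table[NUM_PAGES];

/* Hypothetical OS routine that brings the page in from disk and returns its frame. */
extern uint32_t os_load_page_from_disk(uint32_t vpn);

/* What the MMU does conceptually for every address the processor issues.
 * Assumes virtual_addr < NUM_PAGES * PAGE_SIZE. */
uint32_t translate(uint32_t virtual_addr) {
    uint32_t vpn    = virtual_addr >> PAGE_SHIFT;      /* virtual page number */
    uint32_t offset = virtual_addr & (PAGE_SIZE - 1);  /* offset within the page */

    if (!page_table[vpn].present) {                    /* page fault */
        page_table[vpn].frame   = os_load_page_from_disk(vpn);
        page_table[vpn].present = true;
    }
    return (page_table[vpn].frame << PAGE_SHIFT) | offset;
}
```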
Secondary Storage
Secondary Storage
 Semiconductor memories cannot be
used to provide all of the storage
capability needed in computers.
 Their main limitation is the cost per bit of
stored information.
 Large storage requirements of most
computer systems are economically
realized in the form of magnetic disks,
optical disks, and magnetic tapes,
which are usually referred to as
secondary storage devices.
Magnetic Hard Disks
 The storage medium in a magnetic-disk
system consists of one or more disks
mounted on a common spindle.
 A thin magnetic film is deposited on each disk, usually on both sides.
 The disks are placed in a rotary drive so that the magnetized surfaces move in close proximity to read/write heads, as shown in Figure 5.29a.
 The disks rotate at a uniform speed.
 Each head consists of a magnetic yoke and a magnetizing coil, as indicated in Figure 5.29b.
Magnetic Hard Disks..
Magnetic Hard Disks..
 Digital information can be stored on the
magnetic film by applying current pulses of
suitable polarity to the magnetizing coil.
 This causes the magnetization of the film in
the area immediately underneath the head
to switch to a direction parallel to the applied
field.
 The same head can be used for reading the
stored information.
 In this case, changes in the magnetic field in the vicinity of the head caused by the movement of the film relative to the yoke induce a voltage in the coil, which now serves as a sense coil.
Magnetic Hard Disks..
 The polarity of this voltage is monitored
by the control circuitry to determine
the state of magnetization of the film.
 Only changes in the magnetic field under
the head can be sensed during the Read
operation.
 Therefore, if the binary states 0 and 1 are
represented by two opposite states of
magnetization, a voltage is induced in the
head only at 0-to-1 and at 1-to-0 transitions
in the bit stream.
 A long string of 0s or 1s causes an induced voltage only at the beginning and end of the string.
Magnetic Hard Disks..
 To determine the number of consecutive
0s or 1s stored, a clock must provide
information for synchronization.
 In some early designs, a clock was stored on
a separate track, where a change in
magnetization is forced for each bit period.
 Using the clock signal as a reference,
the data stored on other tracks can be
read correctly.
Magnetic Hard Disks..
 The modern approach is to combine the
clocking information with the data (self-
clocking schemes).
 One simple scheme, depicted in Figure
5.29c, is known as phase encoding or
Manchester encoding.
 In this scheme, changes in magnetization occur
for each data bit, as shown in the figure.
 A change in magnetization is guaranteed at the
midpoint of
each bit period, thus providing the clocking
information.
 The drawback of Manchester encoding is its poor bit-storage density.
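The encoding rule can be expressed in a few lines of code. A minimal C sketch, assuming the convention that a 1 is a low-to-high transition and a 0 is a high-to-low transition within the bit period (the opposite convention is equally valid):

```c
#include <stdio.h>

/* Manchester (phase) encoding sketch: each bit is represented by two
 * half-bit levels, so a transition always occurs in the middle of the
 * bit period and carries the clocking information. */
void manchester_encode(const int *bits, int n, int *halves /* length 2*n */) {
    for (int i = 0; i < n; i++) {
        if (bits[i]) { halves[2*i] = 0; halves[2*i + 1] = 1; }  /* 1: low then high */
        else         { halves[2*i] = 1; halves[2*i + 1] = 0; }  /* 0: high then low */
    }
}

int main(void) {
    int bits[] = {1, 0, 0, 1, 1};
    int halves[10];
    manchester_encode(bits, 5, halves);
    for (int i = 0; i < 10; i++)
        printf("%d", halves[i]);
    printf("\n");   /* even a long run of equal bits yields regular transitions */
    return 0;
}
```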
Magnetic Hard Disks..
 Read/write heads must be maintained at
a very small distance from the moving
disk surfaces in order to achieve high bit
densities and reliable read/write
operations.
 The flexible spring connection between the
head and its arm mounting permits the
head to fly at the desired distance away
from the surface in spite of any small
variations in the flatness of the surface.
 In most modern disk units, the disks and the read/write heads are placed in a sealed, air-filtered enclosure; this approach is known as Winchester technology.
Magnetic Hard Disks..
 In such units, the read/write heads can
operate closer to the magnetized track
surfaces because dust particles are
absent..
 The closer the heads are to a track surface,
the more densely the data can be packed
along the track, and the closer the tracks can
be to each other.
 Thus, Winchester disks have a larger
capacity for a given physical size compared
to unsealed units.
 Another advantage of Winchester technology is that data integrity tends to be greater in sealed units than in units that handle removable disks.
Magnetic Hard Disks..
 The disk system consists of three key parts.
 Disk Platters – usually referred to as the disk.
 Disk Drive – comprises the electromechanical
mechanism that spins the disk and moves the
read/write heads.
 Disk Controller – the electronic circuitry that
controls the operation of the system.
 The disk controller may be implemented as a separate
module, or it may be incorporated into the enclosure
that contains the entire disk system.
 The term disk is often used to refer to the
combined package of the disk drive and the
disk it contains.
Organization and Accessing of
Data on a Disk
Organization and Accessing of
Data on a Disk..
 The organization of data on a disk is illustrated in
Figure 5.30.
 Each surface is divided into concentric tracks,
and each track is divided into sectors.
 The set of corresponding tracks on all surfaces
of a
stack of disks forms a logical cylinder.
 The data on all tracks of a cylinder can be
accessed without moving the read/write
heads.
 The data are accessed by specifying the
surface number, the track number, and the
sector number.
Organization and Accessing of
Data on a Disk..
 Data bits are stored serially on each track.
 Each sector usually contains 512 bytes of data,
but other sizes may be used.
 The data are preceded by a sector header that
contains identification (addressing) information
used to find the desired sector on the selected
track.
 Following the data, there are additional
bits that constitute an error-correcting code
(ECC).
 The ECC bits are used to detect and correct errors that
may have occurred in writing or reading of the 512 data
bytes.
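Given a disk geometry, the capacity and the linear ordering of sectors follow from simple arithmetic. A minimal C sketch with an illustrative geometry (the surface, track, and sector counts are not taken from the text):

```c
#include <stdio.h>
#include <stdint.h>

/* Illustrative geometry: 4 surfaces, 10000 tracks per surface,
 * 400 sectors per track, 512 data bytes per sector. */
#define SURFACES   4
#define TRACKS     10000
#define SECTORS    400
#define SECTOR_SZ  512

int main(void) {
    uint64_t capacity = (uint64_t)SURFACES * TRACKS * SECTORS * SECTOR_SZ;
    printf("data capacity: %llu bytes\n", (unsigned long long)capacity);

    /* A sector is addressed by (surface, track, sector); the tracks with the
     * same track number on all surfaces form one logical cylinder.  One possible
     * cylinder-major linearization: */
    unsigned surface = 2, track = 1234, sector = 57;
    uint64_t logical = ((uint64_t)track * SURFACES + surface) * SECTORS + sector;
    printf("(surface %u, track %u, sector %u) -> logical block %llu\n",
           surface, track, sector, (unsigned long long)logical);
    return 0;
}
```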
Organization and Accessing of
Data on a Disk..
 An unformatted disk has no information on its
tracks.
 The formatting process divides the disk
physically into tracks and sectors.
 The capacity of a formatted disk is a proper
indicator of
the storage capability of the given disk.
 The formatting information accounts for about 15
percent of the total information that can be
stored on a disk.
 It comprises the sector headers, the ECC bits, and
intersector
gaps.
 In a typical computer, the disk is subsequently divided into logical partitions.
Organization and Accessing of
Data on a Disk..
 Figure 5.30 indicates that each track has
the same number of sectors.
 So all tracks have the same storage capacity.
 Thus, the stored information is packed more
densely on inner tracks than on outer tracks.
 This arrangement is used in many disks because it
simplifies the electronic circuits needed to access
the data.
 But, it is possible to increase the storage
density by placing more sectors on outer
tracks, which have longer circumference.
 Requires more complicated access circuitry.
Access Time
 Seek time – time required to move the
read/write head to the proper track.
 Depends on the initial position of the head
relative to the track specified in the address.
 Average values are in the 5 to 8 ms range.
 Rotational delay (Latency time) - amount of time
that elapses after the head is positioned over
the correct track until the starting position of
the addressed sector passes under the
read/write head.
 On average, this is the time for half a rotation of the
disk.
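The two components combine into an average access-time estimate. A minimal C sketch using illustrative figures (a 6 ms average seek and a 7200 RPM spindle) consistent with the ranges quoted above:

```c
#include <stdio.h>

/* Average access time = average seek time + average rotational delay,
 * where the rotational delay averages half a revolution. */
int main(void) {
    double seek_ms = 6.0;                                    /* illustrative seek time */
    double rpm = 7200.0;                                     /* illustrative spindle speed */
    double half_rotation_ms = 0.5 * (60.0 * 1000.0 / rpm);   /* ms per half revolution */
    printf("rotational delay: %.2f ms\n", half_rotation_ms); /* about 4.17 ms */
    printf("average access time: %.2f ms\n", seek_ms + half_rotation_ms);
    return 0;
}
```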
Data Buffer/Cache
 A disk drive is connected to the rest of a
computer system using some standard
interconnection scheme, such as SCSI or
SATA.
 SCSI – Small Computer System Interface
 SATA – Serial Advanced Technology Attachment
 The interconnection hardware is usually
capable of transferring data at much higher
rates than the rate at which data can be
read from disk tracks.
 An efficient way to deal with the possible differences in transfer rates is to include a data buffer in the disk unit.
Data Buffer/Cache..
 The buffer is a semiconductor memory,
capable of storing a few megabytes of
data.
 The requested data are transferred
between the disk tracks and the buffer at
a rate dependent on the rotational speed
of the disk.
 Transfers between the data buffer and the main memory can then take place at the maximum rate allowed by the bus.
Data Buffer/Cache..
 The data buffer can also be used to
provide a caching mechanism for the
disk.
 When a read request arrives at the disk,

the controller can first check to see if the


desired data are already available in the
cache (buffer).
 If so, the data are transferred to the
memory in microseconds instead of
milliseconds.
 Otherwise, the data are read from the disk in the usual way, stored in the buffer, and then transferred to the memory.
Disk Controller
 Operation of a disk drive is controlled
by a
disk controller circuit
 Also provides an interface between the
disk drive and the rest of the computer
system.
 One disk controller may be used to
control more than one drive.
 Figure 5.31 shows a disk controller which controls two disk drives.
Disk Controller..

[Figure 5.31: The processor, main memory, and a disk controller are attached to the system bus; the disk controller in turn controls two disk drives.]
Disk Controller..
 A disk controller that communicates
directly with the processor contains a
number of registers that can be read and
written by the operating system.
 Thus, communication between the OS and the disk controller is achieved in the same manner as with any I/O interface.
 The disk controller uses the DMA scheme to transfer data between the disk and the main memory.
Disk Controller..
 The OS initiates the transfers by issuing Read and Write requests.
 The controller's registers are loaded with the necessary information, such as:
 Main memory address – The address of the first main memory location of the block of words involved in the transfer.
 Disk address – The location of the sector containing the beginning of the desired block of words.
 Word count – The number of words in the block to be transferred.
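These registers can be pictured as a small structure that the OS fills in before the controller's DMA transfer begins. The C sketch below is purely conceptual; the field names, widths, and command code are illustrative, not those of any real controller:

```c
#include <stdint.h>

/* A conceptual view of the registers the OS loads before a transfer. */
struct disk_controller_regs {
    uint32_t main_memory_address;  /* first main-memory location of the block */
    struct {
        uint16_t track;            /* disk address: where the block begins */
        uint8_t  surface;
        uint8_t  sector;
    } disk_address;
    uint32_t word_count;           /* number of words to transfer */
    uint32_t command;              /* e.g., Seek, Read, Write */
};

/* Sketch of how the OS might start a Read: fill the registers, after which
 * the controller moves the data by DMA without further processor help. */
void start_read(volatile struct disk_controller_regs *regs,
                uint32_t mem_addr, uint16_t track, uint8_t surface,
                uint8_t sector, uint32_t words) {
    regs->main_memory_address  = mem_addr;
    regs->disk_address.track   = track;
    regs->disk_address.surface = surface;
    regs->disk_address.sector  = sector;
    regs->word_count           = words;
    regs->command              = 1;   /* hypothetical "Read" command code */
}
```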
Disk Controller..
 On the disk drive side, the controller’s
major functions are:
 Seek – Causes the disk drive to move the
read/write head from its current position to
the desired track.
 Read – Initiates a Read operation, starting at
the
address specified in the disk address register.
 Data read serially from the disk are assembled
into words and placed into the data buffer for
transfer to the main memory.
 The number of words is determined by the
word count register.
Disk Controller..
 Write – Transfers data to the disk, using a
control method similar to that for Read
operations.
 Error checking – Computes the error correcting
code (ECC) value for the data read from a
given sector and compares it with the
corresponding ECC value read from the disk.
 In the case of a mismatch, it corrects the error if
possible; otherwise, it raises an interrupt to
inform the OS that an error has occurred.
 During a Write operation, the controller computes
the ECC value for the data to be written and stores
this value on the disk.
Software and Operating
System Implications
 All data transfer activities involving disks are
initiated by the operating system.
 The disk is a nonvolatile storage medium, so
the OS itself is stored on a disk.
 During normal operation of a computer,
parts of the OS are loaded into the main
memory and executed as needed.
 When power is turned off, the contents of
the main memory are lost.
 When the power is turned on again, the OS has to be loaded into the main memory, which takes place as part of a process known as booting.
Software and Operating
System Implications..
 To initiate booting, a tiny part of main
memory is implemented as a nonvolatile
ROM.
 This ROM stores a small monitor program
that can read and write main memory
locations as well as read one block of data
stored on the disk at address 0.
 This block, referred to as the boot block,
contains a loader program.
 After the boot block is loaded into memory by
the ROM monitor program, it loads the main
parts of the OS into the main memory.
Floppy Disks
 Floppy disks are smaller, simpler, and
cheaper disk units that consist of a
flexible, removable, plastic diskette
coated with magnetic material.
 The diskette is enclosed in a plastic jacket, which has an opening where the read/write head can be positioned.
 A hole in the center of the diskette allows a spindle mechanism in the disk drive to position and rotate the diskette.
Floppy Disks..
 The main feature of floppy disks is their low cost and shipping convenience.
 However, they have much smaller storage capacities, longer access times, and higher failure rates than hard disks.
 In recent years, they have largely been replaced by CDs, DVDs, and flash cards as portable storage media.
RAID Disk Arrays
 Redundant Array of Independent Disks
 Originally Redundant Array of Inexpensive
Disks
 Using multiple disks makes large amounts of storage cheaper and also makes it possible to improve the reliability of the overall system.
 Different configurations were
proposed, and many more have been
developed since.
RAID Disk Arrays..
 RAID 0 – data striping
 RAID 1 – identical copies of data on two disks
 RAID 2, 3, 4 – increased reliability
 RAID 5 – parity-based error recovery
 RAID 10 – combines the features of RAID 0 and RAID 1
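The parity-based recovery used by RAID 5 rests on the XOR operation. A minimal C sketch (the strip size and data values are illustrative) showing how a lost strip is rebuilt from the surviving strips and the parity strip:

```c
#include <stdio.h>
#include <string.h>
#include <stdint.h>

#define STRIPE 8   /* bytes per strip, illustrative */

/* The parity strip is the XOR of the data strips, so any single lost
 * strip can be rebuilt by XOR-ing the survivors with the parity. */
void compute_parity(const uint8_t d0[], const uint8_t d1[],
                    const uint8_t d2[], uint8_t parity[]) {
    for (int i = 0; i < STRIPE; i++)
        parity[i] = d0[i] ^ d1[i] ^ d2[i];
}

int main(void) {
    uint8_t d0[STRIPE] = {1, 2, 3, 4, 5, 6, 7, 8};
    uint8_t d1[STRIPE] = {9, 10, 11, 12, 13, 14, 15, 16};
    uint8_t d2[STRIPE] = {17, 18, 19, 20, 21, 22, 23, 24};
    uint8_t p[STRIPE], rebuilt[STRIPE];

    compute_parity(d0, d1, d2, p);

    /* Pretend the disk holding d1 failed: rebuild it from the other strips. */
    for (int i = 0; i < STRIPE; i++)
        rebuilt[i] = d0[i] ^ d2[i] ^ p[i];

    printf("rebuilt strip matches: %s\n",
           memcmp(rebuilt, d1, STRIPE) == 0 ? "yes" : "no");
    return 0;
}
```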
