Computer Org - L3
Computer organization is concerned with the way the hardware components
operate and the way they are connected together to form the computer system.
Computer organization addresses issues such as control signals (how the computer's components are coordinated and controlled), interfaces between the computer and peripherals, and the memory technology used.
The study of computer architecture, on the other hand, refers to those attributes of a system that have a direct impact on the logical execution of a program: instruction
formats, operation codes, data types, the number and types of registers,
addressing modes, main memory access methods, and various I/O mechanisms.
Studying computer architecture helps us to answer the question: How do I design a computer?
A computer is a complex system: it contains millions of elementary electronic
components. How, then, can one clearly describe it?
The key is to recognize the hierarchic nature of most complex systems,
including the computer.
A hierarchic system is a set of interrelated subsystems.
The hierarchic nature of complex systems is essential to both their design
and their description.
The designer need only deal with a particular level of the system
at a time.
At each level, the designer is concerned with structure and function.
- Structure: the way in which the components are interrelated.
- Function: the operation of each individual component as part of the structure.
In general terms, the basic functions that a computer can perform
consist of four parts, as shown in Figure 1.
1. Data Processing.
The computer must be able to process a wide variety of data forms.
2. Data Storage.
It is also essential that a computer store data. The computer must temporarily store at least those pieces of data
that are being worked on at any given moment.
Thus, there is at least a short-term data storage function.
Equally important, the computer performs a long-term data storage function. Files of data are stored on the
computer for subsequent retrieval and update.
3. Data Movement.
The computer must be able to move data between itself and the outside world.
When data are received from or delivered to a device that is directly connected to the computer, the process
is known as input-output (I/O), and the device is referred to as a peripheral.
When data are moved over longer distances, to or from a remote device, the process is known as data
communications.
4. Control.
These three functions must be controlled.
This control is exercised by the individual who provides the computer with instructions.
At this general level of discussion, the number of possible types of operations that can be performed is small.
Figure 1.2 depicts the four possible types of operations.
The computer can function as a data movement device (Figure 1.2a), simply transferring data from
one peripheral or communication line to another.
It can also function as a data storage device (Figure 1.2b), with data transferred from the external
environment to computer storage (read) and vice versa (write).
The final two diagrams show operations involving data processing, on data either in storage (Figure
1.2c) or en route between storage and the external environment (Figure 1.2d).
Figure 1.3 is the simplest possible depiction of a computer.
The computer interacts in some fashion with its external environment.
In general, all of its linkages to the external environment can be classified as peripheral devices or communication lines.
At the top level, the computer consists of four main structural components, as shown in Figure 4.
1. Central Processing Unit (CPU): Controls the operation of the computer and performs its
data processing functions; often simply referred to as the processor.
2. Main Memory: Stores data.
3. I/O: moves data between the computer and its external environment.
4. System Interconnection: Some mechanism that provides for communication among
CPU, main memory, and I/O.
Each of these components is itself complex, with its own major structural components.
A microprogrammed control unit operates by executing a sequence of micro-operations for each machine instruction.
For example, a microprogrammed processor might
translate the instruction ADD r1, r2, r3 into six micro-operations:
one that reads the value of r2 and sends it to one input of the adder,
one that reads the value of r3 and sends it to the other input of the adder,
one that performs the actual addition,
one that writes the result of the addition into r1,
one that increments the PC to point to the next instruction,
and one that fetches the next instruction from memory.
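The six micro-operations above can be sketched as small Python steps over a hypothetical register file, program counter, and instruction memory (the names `regs`, `pc`, and `memory` are illustrative, not taken from any real instruction set):

```python
# Sketch of the six micro-operations for ADD r1, r2, r3.
# The register file, PC, and instruction memory are hypothetical.

def execute_add(regs, pc, memory):
    """Run the six micro-operations for ADD r1, r2, r3."""
    # 1. Read the value of r2 and send it to one input of the adder.
    adder_in_a = regs["r2"]
    # 2. Read the value of r3 and send it to the other input of the adder.
    adder_in_b = regs["r3"]
    # 3. Perform the actual addition.
    adder_out = adder_in_a + adder_in_b
    # 4. Write the result of the addition into r1.
    regs["r1"] = adder_out
    # 5. Increment the PC to point to the next instruction.
    pc += 1
    # 6. Fetch the next instruction from memory.
    next_instruction = memory[pc]
    return pc, next_instruction

regs = {"r1": 0, "r2": 5, "r3": 7}
pc, nxt = execute_add(regs, pc=0, memory=["ADD r1, r2, r3", "HALT"])
print(regs["r1"], pc, nxt)  # 12 1 HALT
```

Each step touches exactly one resource (a register read, the adder, a register write, the PC, memory), which is what makes the control unit's job expressible as a short microprogram.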
Year by year, the cost of computer systems continues to drop dramatically, while the
performance and capacity of those systems continue to rise equally dramatically.
Today’s laptops have the computing power of an IBM mainframe from 10 or 15 years
ago.
This continuing technological revolution has enabled the development of applications
of astounding complexity and power.
For example, desktop applications that require the great power of today’s
microprocessor-based systems include:
• Image processing
• Speech recognition
• Videoconferencing
• Multimedia authoring
• Voice and video annotation of files
• Simulation modeling
Workstation systems now support highly sophisticated engineering and scientific
applications, as well as simulation systems, and have the ability to support image
and video applications.
What gives Intel x86 processors or IBM mainframe computers such mind-boggling
power is the relentless pursuit of speed by processor chip manufacturers, guided by
Moore's law: the observation that the number of transistors on a chip doubles roughly
every 18 to 24 months.
So long as this law holds, chipmakers can unleash a new generation of chips every
three years, with four times as many transistors.
In memory chips, this has quadrupled the capacity of dynamic random-access
memory (DRAM).
In microprocessors, the addition of new circuits, and the speed boost that comes
from reducing the distances between them, has improved performance four- or
fivefold every three years or so since Intel launched its x86 family in 1978.
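The "four times as many transistors every three years" figure is just compounding from the doubling period; a quick check of the arithmetic:

```python
# If transistor count doubles every 18 months, then over 36 months
# (three years) it doubles twice, i.e. quadruples.
doubling_period_months = 18
doublings_per_3_years = 36 / doubling_period_months   # 2 doublings
growth_factor = 2 ** doublings_per_3_years
print(growth_factor)  # 4.0
```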
But the raw speed of the microprocessor will not achieve its potential unless it is fed
a constant stream of work to do in the form of computer instructions.
Accordingly, while the chipmakers have been busy learning how to fabricate chips
of greater and greater density, the processor designers must come up with ever
more elaborate techniques for feeding the monster.
Among the techniques built into contemporary processors are the following:
• Pipelining: With pipelining, a processor can simultaneously work on multiple
instructions.
For example, while one instruction is being executed, the computer is decoding the
next instruction.
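As a rough illustration of why this overlap pays off, the cycle counts for a hypothetical two-stage pipeline can be sketched as follows (the stage count and timing model are illustrative, not from a real processor):

```python
# Illustrative sketch: cycle counts with and without a simple
# two-stage (decode, execute) pipeline.

def cycles_sequential(n_instructions, stages=2):
    # Without pipelining, each instruction passes through every stage
    # before the next instruction starts.
    return n_instructions * stages

def cycles_pipelined(n_instructions, stages=2):
    # With pipelining, once the first instruction fills the pipeline,
    # one instruction completes every cycle.
    return stages + (n_instructions - 1)

print(cycles_sequential(10))  # 20
print(cycles_pipelined(10))   # 11
```

For long instruction streams the pipelined throughput approaches one instruction per cycle, which is the whole point of overlapping the stages.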
• Branch prediction: The processor looks ahead in the instruction code fetched
from memory and predicts which branches, or groups of instructions, are likely
to be processed next.
If the processor guesses right most of the time, it can prefetch the correct
instructions and buffer them so that the processor is kept busy.
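One classic scheme consistent with this idea is a two-bit saturating-counter predictor; the sketch below is a minimal, hypothetical version (the state encoding and starting state are assumptions, not a description of any specific processor):

```python
class TwoBitPredictor:
    """Two-bit saturating counter: states 0-1 predict 'not taken',
    states 2-3 predict 'taken'."""

    def __init__(self):
        self.state = 2  # start in the weakly-taken state

    def predict(self):
        return self.state >= 2  # True means "predict taken"

    def update(self, taken):
        # Move the counter toward the actual outcome, saturating at 0 and 3.
        self.state = min(3, self.state + 1) if taken else max(0, self.state - 1)

# A loop branch that is taken nine times, then falls through once:
p = TwoBitPredictor()
hits = 0
for taken in [True] * 9 + [False]:
    if p.predict() == taken:
        hits += 1
    p.update(taken)
print(hits)  # 9 of 10 predictions correct
```

The two-bit counter tolerates a single anomalous outcome (such as a loop exit) without flipping its prediction, which is why it guesses right most of the time on loop-heavy code.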
• Speculative execution: Using branch prediction and data flow analysis, some
processors execute instructions ahead of their actual appearance in the program,
holding the results in temporary locations.
This enables the processor to keep its execution engines as busy as possible by
executing instructions that are likely to be needed.
While processor power has raced ahead at breakneck speed, other critical
components of the computer have not kept up.
The result is a need to look for performance balance: an adjusting of the organization
and architecture to compensate for the mismatch among the capabilities of the
various components.
While processor speed has grown rapidly, the speed with which data can be
transferred between main memory and the processor has lagged badly.
If memory or the pathway fails to keep pace with the processor’s insistent demands,
the processor stalls in a wait state, and valuable processing time is lost.
A system architect can attack this problem in a number of ways, all of which are reflected in
contemporary computer designs. Consider the following examples:
• Increase the number of bits that are retrieved at one time by making DRAMs "wider" rather
than "deeper" and by using wide bus data paths.
• Increase the interconnect bandwidth between processors and memory by using higher-speed
buses.
• Change the DRAM interface to make it more efficient by including a cache or other buffering
scheme on the DRAM chip.
• Reduce the frequency of memory accesses by incorporating increasingly complex and efficient
cache structures between the processor and main memory. This includes the incorporation of
one or more caches on the processor chip, as well as an off-chip cache close to the processor
chip.
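The effect of the last technique can be sketched with a tiny direct-mapped cache model (the line count and the address stream below are illustrative assumptions, not parameters of a real cache):

```python
# Hypothetical sketch: a tiny direct-mapped cache showing how a cache
# between the processor and main memory reduces slow memory accesses.

LINES = 4  # number of cache lines (illustrative)

def run(addresses):
    cache = [None] * LINES      # each line stores the tag it holds
    hits = misses = 0
    for addr in addresses:
        line, tag = addr % LINES, addr // LINES
        if cache[line] == tag:
            hits += 1           # served from the cache, no memory access
        else:
            misses += 1         # must go out to main memory
            cache[line] = tag   # fill the line for future accesses
    return hits, misses

# A loop touching the same four addresses repeatedly misses only on the
# first pass and hits thereafter:
print(run([0, 1, 2, 3] * 5))  # (16, 4)
```

With locality like this, only 4 of 20 accesses reach main memory; the other 16 are absorbed by the cache, which is exactly the frequency reduction the bullet describes.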