
UNIT I - BASIC STRUCTURE OF A COMPUTER SYSTEM
TOPICS TO BE COVERED
Functional Units – Basic Operational Concepts –
Performance – Instructions: Language of the
Computer – Operations, Operands – Instruction
representation – Logical operations – decision
making – MIPS Addressing.
8 Great Ideas in Computer Architecture.
The following are eight great ideas that computer architects have invented in the last
60 years of computer design.

1. Design for Moore’s Law.

2. Use Abstraction to Simplify Design

3. Make the common case fast

4. Performance via parallelism

5. Performance via pipelining

6. Performance via prediction

7. Hierarchy of memories

8. Dependability via redundancy


1. Design for Moore's Law.
The number of transistors in an integrated circuit doubles
approximately every two years (Gordon Moore, one of the founders of Intel).

As computer designs can take years, the resources available per chip can easily
double or quadruple between the start and finish of the project.

Computer architects must anticipate where the technology will be when the
design finishes rather than design for where it starts.
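As a rough arithmetic sketch of why this matters: with a doubling every two years, transistor resources grow by a factor of 2^(t/2) over t years, so a project that takes four years from start to finish ships into a technology with roughly 2^(4/2) = 4 times the resources available at its start.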

2. Use Abstraction to Simplify Design

In computer architecture, a computer system is usually represented as consisting of
five abstraction levels: hardware, firmware, assembler, operating system, and processes.

In computer science, an abstraction level is a generalization of a model or algorithm.

The simplification provided by a good abstraction layer facilitates easy reuse.


3. Make the common case fast

- Making the common case fast will tend to enhance performance more than
optimizing the rare case.

- It implies that you know what the common case is, which is only possible
with careful experimentation and measurement.

4. Performance via parallelism

- Computer architects have offered designs that get more performance by
performing operations in parallel.

- Parallel computing is a form of computation in which many calculations are
carried out simultaneously, operating on the principle that large problems
can often be divided into smaller ones which are then solved concurrently.

- Parallelism has been employed for many years, mainly in high-performance
computing.
5. Performance via pipelining

- Pipelining is a technique used in the design of computers to increase the
instruction throughput (the number of instructions that can be executed in
a unit of time).

- The basic instruction cycle is broken up into a series of pipeline stages.

6. Performance via prediction

- To improve the flow and throughput in an instruction pipeline, branch
predictors play a critical role in achieving high effective performance in
many modern pipelined microprocessor architectures.

- Without branch prediction, the processor would have to wait until the
conditional jump instruction has passed the execute stage before the
next instruction can enter the fetch stage in the pipeline.
7. Hierarchy of memories

- Programmers want memory to be fast, large, and cheap, as memory
speed often shapes performance, capacity limits the size of problems that
can be solved, and the cost of memory today is often the majority of
computer cost.

- Architects have found that they can address these conflicting demands
with a hierarchy of memories, with the fastest, smallest, and most
expensive memory per bit at the top of the hierarchy and the slowest,
largest, and cheapest per bit at the bottom.
Memory Hierarchy.
Levels of the memory hierarchy, from the fastest, smallest, most expensive
upper level to the slowest, largest, cheapest lower level:

Level           Typical capacity      Access time              Cost
CPU Registers   100s of bytes         ~1 ns                    highest per bit
Cache           KBytes                ~4 ns                    1-0.1 cents/bit
Main Memory     MBytes                100-300 ns               0.0001-0.00001 cents/bit
Disk            GBytes                ~10 ms (10,000,000 ns)   10^-5 - 10^-6 cents/bit
Tape            effectively infinite  seconds-minutes          10^-8 cents/bit

Data moves between adjacent levels in characteristic units: instruction
operands (registers/cache), blocks (cache/main memory), pages (main
memory/disk), and files (disk/tape).
8. Dependability via redundancy

- Computers not only need to be fast; they need to be dependable.

- Since any physical device can fail, we make systems dependable by including
redundant components that can take over when a failure occurs and help detect
failures.

- Examples: Systems designers usually provide failover capability in servers,
systems, or networks requiring continuous availability (the term used is high
availability) and a high degree of reliability.
Technologies for Building Processors and Memory
- The IC manufacturing process starts with a silicon crystal ingot.
- The ingots are 8-12 inches in diameter and about 12 to 24 inches long.
- An ingot is finely sliced into wafers no more than 0.1 inches thick.
- These wafers then go through a series of processing steps, during which
patterns of chemicals are placed on each wafer, creating the transistors,
conductors, and insulators.
- In the figure, one wafer produced 20 dies, of which 17 passed testing
(an X means the die is bad). The yield of good dies in this case was
17/20, or 85%.
- These good dies are then bonded into packages (connected to the
input/output pins of a package) and tested one more time before shipping
the packaged parts to customers.
- As in the figure, one bad packaged part was found in the final test.
die: The individual rectangular sections that are cut from a wafer,
more informally known as chips.

yield: The percentage of good dies from the total number of dies on
the wafer.

Transistor: An on/off switch controlled by an electric signal.

VLSI: A very large-scale integrated circuit, a device (IC) containing
millions of transistors.

Silicon: A natural element that is a semiconductor.

Semiconductor: A substance that does not conduct electricity well.


Performance:
Response time or execution time: The total time required for the computer
to complete a task, including disk accesses, memory accesses, I/O
activities, operating system overhead, CPU execution time, and so on.

Throughput: The total amount of work done in a given time.

Bandwidth: The amount of data that can be carried from one point to
another in a given time period (usually a second). This kind of bandwidth is
usually expressed in bits (of data) per second (bps). Occasionally, it's
expressed as bytes per second (Bps).
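For example (illustrative figures, not from a specific system): a link with a bandwidth of 100 Mbps carries at most 100,000,000 bits per second, which is 12.5 MBps, so transferring a 25 MB file takes at least 25 / 12.5 = 2 seconds.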

clock cycles per instruction (CPI): Average number of clock cycles per
instruction for a program or program fragment.
Performance:
We can relate performance and execution time for a computer X:

    Performance_X = 1 / Execution time_X

So if X is n times faster than Y, then
Performance_X / Performance_Y = Execution time_Y / Execution time_X = n.
Measuring Performance:

clock cycle: Also called tick, clock tick, clock period, clock, or cycle.
The time for one clock period, usually of the processor clock, which
runs at a constant rate.

clock period: The length of each clock cycle.


Instruction Performance:
Clock cycles per instruction (CPI): Average number of clock cycles per
instruction for a program or program fragment.

Example:
Suppose we have two implementations of the same instruction set
architecture. Computer A has a clock cycle time of 250 ps and a CPI
of 2.0 for some program, and computer B has a clock cycle time of
500 ps and a CPI of 1.2 for the same program. Which computer is
faster for this program and by how much?
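A worked solution using the definitions above: average time per instruction is CPI × clock cycle time. Computer A takes 2.0 × 250 ps = 500 ps per instruction; computer B takes 1.2 × 500 ps = 600 ps per instruction. Since both execute the same instruction count, A is faster, by 600 / 500 = 1.2 times.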
The Classic CPU Performance Equation:

    CPU time = Instruction count × CPI × Clock cycle time

or, equivalently, since the clock rate is the inverse of the clock cycle time:

    CPU time = (Instruction count × CPI) / Clock rate
The basic components of performance and how each is measured:

Component of performance             Units of measure
CPU execution time for a program     Seconds for the program
Instruction count                    Instructions executed for the program
Clock cycles per instruction (CPI)   Average number of clock cycles per instruction
Clock cycle time                     Seconds per clock cycle
Uniprocessors to Multiprocessors

In multi-core processors, the benefit is more on throughput than on
response time.

In the past, programmers could rely on innovations in the hardware,
architecture, and compilers to double the performance of their programs
every 18 months without having to change a line of code.

Today, for programmers to get significant improvement in response
time, they need to rewrite their programs to take advantage of multiple
processors, and they also have to improve the performance of their code as
the number of cores increases.

The need of the hour is:

- The ability to write parallel programs.

- Care must be taken to reduce communication and synchronization
overhead. Challenges in scheduling and load balancing have to be addressed.
Power & Energy
---The dominant technology for integrated circuits is called CMOS
(complementary metal oxide semiconductor). For CMOS, the primary source of
energy consumption is so-called dynamic energy—that is, energy that is
consumed when transistors switch states from 0 to 1 and vice versa.
--The dynamic energy depends on the capacitive loading of each transistor and
the voltage applied:
Instruction Formats
(Contd..)
In general an instruction is composed of an Opcode and Operand(s) or
Address(es). Common formats:

Three-Address Instructions
    ADD R1, R2, R3      R1 ← R2 + R3
Two-Address Instructions
    ADD R1, R2          R1 ← R1 + R2
One-Address Instructions
    ADD M               AC ← AC + M[AR]
Zero-Address Instructions
    ADD                 (pops the top two stack elements and pushes their sum)

RISC Instructions
• Lots of registers; memory access is restricted to Load & Store.

MIPS Instructions
• MIPS here names the instruction set architecture used in this unit (not to
  be confused with the performance metric "million instructions per second").
• MIPS Operands
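A brief summary of the standard MIPS operand classes (following the usual textbook convention; the register names below are the conventional ones, not taken from this document):
- 32 registers: $s0-$s7 for variables, $t0-$t9 for temporaries, and $zero for the constant 0 (plus registers reserved for the assembler and operating system).
- 2^30 memory words: Memory[0], Memory[4], ..., accessed only by data transfer instructions such as lw and sw. MIPS is byte-addressed, so sequential word addresses differ by 4.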
Operations and Operands
Every computer must be able to perform arithmetic. The MIPS assembly
language notation add a, b, c instructs a computer to add the two
variables b and c and to put their sum in a. This notation is rigid in
that each MIPS arithmetic instruction performs only one operation and
must always have exactly three variables.
E.g.:

    add a, b, c   # The sum of b and c is placed in a
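Because each instruction performs exactly one operation, a longer sum is compiled into a sequence of instructions. Introducing a fourth variable d purely for illustration:

    add a, b, c   # a = b + c
    add a, a, d   # a = (b + c) + d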
Logical Operations
Although the first computers operated on full words, it soon became clear
that it was useful to operate on fields of bits within a word or even on
individual bits. Examining characters within a word, each of which is
stored as 8 bits, is one example of such an operation. It follows that
operations were added to programming languages and instruction set
architectures to simplify, among other things, the packing and unpacking
of bits into words; a short sketch follows.
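A minimal sketch of unpacking one byte field in MIPS assembly, assuming the packed word is in $s0 (the register choice is illustrative):

    srl  $t0, $s0, 16      # shift right logical: bring bits 16-23 down to bits 0-7
    andi $t0, $t0, 0xFF    # AND immediate with mask 0xFF keeps only that byte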
Control Operations
Program control instructions change or modify the flow of a program.
- The most basic kind of program control is the unconditional branch or
unconditional jump.
- Branch is usually an indication of a short change relative to the
current program counter.
- Jump is usually an indication of a change in program counter that is
not directly related to the current program counter.

Control transfer instructions (a short example fragment follows):
- Unconditional branch
- Conditional branch
- Procedure call
- Return
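A small illustrative fragment (the labels are assumed names): beq performs a conditional branch, encoded relative to the PC, while j performs an unconditional jump.

    beq $s0, $s1, Label   # conditional branch to Label if $s0 == $s1 (PC-relative)
    j   Exit              # unconditional jump to Exit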
Addressing Modes: The method used to
identify the location of an operand.
The Following are the MIPS Addressing Modes.

1. Immediate addressing
2. Register addressing
3. Base or displacement addressing
4. PC-relative addressing
5. Pseudodirect addressing
Addressing Modes: Cont…..

1. Immediate addressing: The operand is a constant within the instruction
itself, i.e. the operand is specified in the instruction itself.

2. Register addressing: The operand is in a CPU register. The register is
specified in the instruction.
Addressing Modes: Cont…..

3. Base or displacement addressing: The operand is at the memory location
whose address is the sum of a register and a constant in the instruction.

4. PC-relative addressing: The branch address is the sum of the PC and a
constant in the instruction.

5. Pseudodirect addressing: The jump address is the 26 bits of the
instruction concatenated with the upper four bits of the PC.
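One example instruction per mode, as a sketch (the register and label names are illustrative):

    addi $t0, $s0, 4        # immediate: the constant 4 is inside the instruction
    add  $t0, $s0, $s1      # register: all operands are registers
    lw   $t0, 32($s3)       # base/displacement: address = $s3 + 32
    beq  $s0, $s1, Label    # PC-relative: branch target = PC + constant
    j    Target             # pseudodirect: 26-bit field concatenated with the PC's upper bits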
Addressing Modes Summary
The method used to identify the location of an operand. The following are
the MIPS addressing modes:

1. Immediate addressing, where the operand is a constant within the
instruction itself.
2. Register addressing, where the operand is a register.
3. Base or displacement addressing, where the operand is at the memory
location whose address is the sum of a register and a constant in the
instruction.
4. PC-relative addressing, where the branch address is the sum of the PC
and a constant in the instruction.
5. Pseudodirect addressing, where the jump address is the 26 bits of the
instruction concatenated with the upper four bits of the PC.
Representing instructions in the Computer.
- There is a difference between the way humans instruct computers and the
way computers see instructions.
- Instructions are kept in the computer as a series of high and low
electronic signals and may be represented as numbers.
- Each piece of an instruction can be considered as an individual number,
and placing these numbers side by side forms the instruction.
Representing instructions in the Computer Cont…..

Instruction format: A form of representation of an instruction composed of
fields of binary numbers.
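As a concrete instance of such fields (the standard MIPS R-format example, given here as an illustration): the instruction add $t0, $s1, $s2 is represented as six fields.

    op      rs      rt      rd      shamt   funct
    000000  10001   10010   01000   00000   100000
    (0)     (17)    (18)    (8)     (0)     (32)

Reading the 32 bits as one number gives the machine word 0x02324020.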
Memory Operands:
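A brief sketch, assuming (as in the usual textbook example) that the base address of an array A is in $s3 and variable h is in $s2; this computes A[12] = h + A[8]:

    lw  $t0, 32($s3)   # load word: $t0 = A[8]  (offset 32 bytes = 8 words)
    add $t0, $s2, $t0  # $t0 = h + A[8]
    sw  $t0, 48($s3)   # store word: A[12] = $t0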
Logical Operations:
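In addition to the masking sketch shown earlier, a one-line illustrative example (register choice assumed): shifting left multiplies by a power of 2.

    sll $t0, $s0, 2   # shift left logical by 2 bits: $t0 = $s0 × 4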
Instructions for making decisions:
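A minimal sketch of compiling if (i == j) f = g + h; else f = g - h;, with f through j assumed to be in $s0 through $s4 (the conventional assignment):

    bne $s3, $s4, Else    # branch if i != j
    add $s0, $s1, $s2     # f = g + h
    j   Exit
Else:
    sub $s0, $s1, $s2     # f = g - h
Exit: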
MIPS Multiply & Division Instructions:
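A short sketch of the MIPS multiply and divide instructions (register choices illustrative): mult and div leave their results in the special Hi/Lo registers, which are read back with mfhi and mflo.

    mult $s2, $s3   # 64-bit product of $s2 × $s3: upper half in Hi, lower half in Lo
    mflo $s1        # $s1 = low 32 bits of the product
    div  $s2, $s3   # Lo = quotient of $s2 / $s3, Hi = remainder
    mflo $s1        # $s1 = quotient
    mfhi $s0        # $s0 = remainder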
