
ICT DEPARTMENT

ITT 05105 Computer Architecture

By: Mjema A. R

©2017 Copyright reserved


Introduction

Computer
 From the Latin "computare", which means 'to calculate'.

 A computer is a machine that can only execute instructions given by the user and operate on data according to those instructions.

 The computer processes the data to produce information.
Introduction

Architecture
 The structure and design of a system or product.

Computer Architecture
 A conceptual structure and logical organization of a
computer or computer based system.

 Computer architecture refers to those attributes of a


system visible to a programmer or, put another way,
those attributes that have a direct impact on the logical
execution of a program.
Computer Architecture

Data & Information


 Data is a raw fact. There are 5 types of data:

1. Text: alphabetic, numeric, special symbols
2. Graphics: pictures
3. Audio: any kind of sound
4. Video: a series of photographic frames that record real scenes
5. Animation: a series of images displayed one after another to produce an illusion of movement

 Information – data that has been processed and contains meaning.


Types of Computer

Computers

 In the past, early


computers used
mechanical systems to
do calculations and
other simple
computations.

 They required human


efforts to run the
machine
Electronic Computers
 Analogue systems utilize
values that can take any form and
can be quantized with decimal
numbers

 Analogue signals are directly


varying with the information they
represent, see on the right.

 A simple analogue signal is the trace of an electrical sine wave from an AC power supply, e.g. Tanesco, generators, etc.
Electronic Computers
 Analogue systems can accept even unwanted electrical signals as part of the information signal, causing electrical noise.

 But despite this noise problem, analogue signals provide a closer representation of the information than digital signals.
Electronic Computers
 The first electronic computers were analogue, unlike most of today's machines.

 Analogue systems utilize values that can take any form and can be quantized with decimal numbers

- the decimal digits 0, 1, 2,….,9, provide 10 discrete values, but


digital computers function more reliably if only two states are used.

 An analogue computer performs a direct simulation of a physical


system. Each section of the computer is the analogue of some
particular portion of the process under study
Electronic Computers

 The variables in the analogue computer are represented by


continuous signals, usually electric voltages that vary with time.

 The signal variables are considered analogous to those of the


process and behave in the same manner.

 Thus, measurements of the analogue voltage can be substituted for variables of the process.

 The term analogue signal is sometimes substituted for continuous


signal because “ analogue computer” has come to mean a computer
that manipulates continuous variables.
Electronic Computers
 One of the early decimal computers was the Electronic Numerical Integrator And Computer (ENIAC).

 Its memory consisted of 20


“accumulators,” each capable of
holding a 10-digit decimal
number.
 A ring of 10 vacuum tubes
represented each digit. At any
time, only one vacuum tube was
in the ON state, representing
one of the 10 digits.
Electronic Computers
 The major drawback of the ENIAC was that it had to be programmed
manually by setting switches and plugging and unplugging cables.

 The ENIAC was completed in 1946, too late to be used in the war effort.
Electronic Computers

 Instead, its first task was


to perform a series of
complex calculations
that were used to help
determine the feasibility
of the hydrogen bomb.

 The use of the ENIAC


for a purpose other than
that for which it was built
demonstrated its
general-purpose nature
Electronic Computers

 A digital signal refers to an electrical signal that is converted into a pattern of


bits.

 Unlike an analog signal, which is a continuous signal that contains time-


varying quantities, a digital signal has a discrete value at each sampling
point.
Electronic Computers

 The precision of the signal is


determined by how many samples
are recorded per unit of time.

 A digital signal is easily represented


by a computer because each sample
can be defined with a series of bits
that are either in the state 1 (on) or 0
(off).

 Digital signals can be compressed


and can include additional
information for error correction.
Electronic Computers
 If the input source is an
analogue one, the signal
is usually changed to a
digital signal.

 This involves a process of


Analogue to Digital
Conversion with an
Analogue-Digital Converter
ADC.

 If the Analogue signal is


required back or as an
output from a system, then
the opposite is performed
with a DAC.
Electromechanical Computers

 In 1944, the electromechanical computer was born with the development of the MARK I by Howard Aiken and others at Harvard University, under the sponsorship of IBM (International Business Machines).

 The MARK I also known as


the Automatic Sequence
Controlled Calculator.
Electromechanical Computers

 It was 50ft. long and 8ft. high.


It used electronic tubes and
electrical relays.

 MARK I took 4½ seconds to


multiply two 23 digit numbers.

 It was able to produce


ballistics tables that were
used in connection with
Second World War.

 The input device used was the


punched paper tape
Electromechanical Computers

 Electromechanical computers varied greatly in design and capabilities.


 Some later units were capable of floating-point arithmetic.
Electromechanical Computers

 Some relay-based computers


remained in service after the
development of vacuum-tube
computers, where their slower speed
was compensated for by good
reliability

 Some models were built as duplicate


processors to detect errors, or could
detect errors and retry the instruction.

 A few models were sold commercially


with multiple units produced, but many
designs were experimental one-off
productions.
Digital Computers
 The term Digital implies that the
information in the computer is
represented by variables that
take a limited number of
discrete values.

 Because of the physical


restriction of components, and
because human logic tends to
be binary(true/false, yes/no),
digital components are further
constrained to take only two
values and are said to be
binary.
Digital Computers
 Digital Computers came in
varieties of types, sizes and
capabilities.

 One example of the first digital


computers is a UNIVAC
produced by Eckert-Mauchly
Computer Corporation .

 It was an electronic digital stored-program computer.

 UNIVAC stands for


UNIVersal Automatic Computer
Digital Computers
 The ENIAC computer system was still big in size and weight, but with improved capabilities compared to the analogue machines.
Digital Computers
 Computers are used in commercial and
business data processing, scientific
calculations, air traffic control, space
guidance, education and many other
areas.

 The most striking property of a digital


computer is its generality. It can follow a
sequence of instructions, called a
program, that operates on a given data.
Digital Computers

 The general purpose digital computer


is the best-known example of a digital
system.

 Other Examples include telephone


switching exchanges, digital
voltmeters, digital counters, digital
cameras, personal digital assistant-
PDA, electronic calculators and digital
displays
Digital Computers
 Characteristic of a digital system is its
manipulation of discrete elements of
information.

 Such discrete elements may be electric impulses, decimal digits, letters of an alphabet, arithmetic operations, punctuation marks, or any other set of meaningful symbols.
Digital Computers

 Early digital computers were used


mostly for numerical computations.
In this case, the discrete elements
used are the digits. See Fig. 1.1

 From this application, the term


digital has emerged.

 A more appropriate name for a


digital computer would be a
discrete information-processing
system.
Binary Numbers

 A decimal number such as 7392 represents a quantity


equal to 7 thousands plus 3 hundreds, plus 9 tens, plus 2
units.

 The thousands, hundreds, etc. are powers of 10 implied


by the position of the coefficients.

 To be more exact, 7392 should be written as:

7×10³ + 3×10² + 9×10¹ + 2×10⁰

Binary Numbers

 However, the good idea is to write only the coefficients and obtain the necessary powers of 10 from their positions.

 In general, a number with a decimal point is represented by a series of coefficients as follows:

a₅a₄a₃a₂a₁a₀.a₋₁a₋₂a₋₃

 The aⱼ coefficients are each one of the ten digits (0, 1, 2, …, 9), and the subscript j gives the place value and, hence, the power of 10 by which the coefficient must be multiplied:

10⁵a₅ + 10⁴a₄ + 10³a₃ + 10²a₂ + 10¹a₁ + 10⁰a₀ + 10⁻¹a₋₁ + 10⁻²a₋₂ + 10⁻³a₋₃

 The decimal number is known to be of base or radix, 10 because it uses ten


digits and the coefficients are multiplied by powers of 10.
Binary Numbers

 The binary number system is a different number system. The coefficients have two possible values: 0 and 1.

 Each coefficient aⱼ is multiplied by 2ʲ. For example, the binary number 11010.11 is:

1×2⁴ + 1×2³ + 0×2² + 1×2¹ + 0×2⁰ + 1×2⁻¹ + 1×2⁻² = 26.75

 In general, a number expressed in a base-r system has coefficients multiplied by powers of r:

aₙrⁿ + aₙ₋₁rⁿ⁻¹ + … + a₁r¹ + a₀r⁰ + a₋₁r⁻¹ + a₋₂r⁻² + …
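As a quick check of the expansion above, here is a minimal Python sketch (the function name is my own, not from the slides) that evaluates a digit string in any base:

def to_decimal(digits, base):
    """Evaluate a string of digits (with optional fractional part) in the given base."""
    if "." in digits:
        int_part, frac_part = digits.split(".")
    else:
        int_part, frac_part = digits, ""
    value = 0.0
    for i, d in enumerate(reversed(int_part)):
        value += int(d, base) * base ** i          # positions 0, 1, 2, ... to the left
    for j, d in enumerate(frac_part, start=1):
        value += int(d, base) * base ** (-j)       # positions -1, -2, ... to the right
    return value

print(to_decimal("11010.11", 2))   # 26.75, matching the worked example
print(to_decimal("7392", 10))      # 7392.0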
Hexadecimal Numbers

 When representing information on digital systems, it is


usually more convenient to choose a number system that
can represent as much information as possible.

 Remember that the binary was chosen at first because it


directly represents the electrical switching states,
ON/OFF.

 There may be a need to represent digital information in


more than binaries and decimals could take. That means
more than base 2 and base 10.
Hexadecimal Numbers

 When representing information on digital systems, it is


usually more convenient to choose a number system that
can represent as much information as possible.

 The Hexadecimal number system uses base-16 i.e radix


16.

 This borrows some digits from the decimal system for


digits less than 10.
Hexadecimal Numbers

 The letters of the alphabet are used to supplement the numbers from 10 to 15 (remember that, counting from 0, the 16th value is 15).

 These are usually represented by the letters A, B, C, D, E and F for 10, 11, 12, 13, 14 and 15 respectively.

 Take the example:

B65F(Hex) is 11×16³ + 6×16² + 5×16¹ + 15×16⁰ = 46687(Dec)
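This expansion can be checked with Python's built-in base conversion (an illustration, not part of the original slide):

print(int("B65F", 16))   # 46687
print(hex(46687))        # 0xb65f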
Hexadecimal Numbers

 Other number systems include the octal system, base 8; the slide shows a comparison table of binary, octal, decimal and hexadecimal.
Number Base Conversions

 It is quite possible and easy to convert numbers from and into


various bases or radices.

 First of all, it is convenient to indicate a number with a subscript of its base appended to it. Example:

(i) Decimal 124 written as 124₁₀

(ii) Binary 10111 written as 10111₂

(iii) Octal 123 written as 123₈

(iv) Hexadecimal 2B3A written as 2B3A₁₆


Number Base Conversions

 It is also acceptable to write an abbreviation of the base system in


parentheses next to the number as shown;

(i) Decimal 124 written as 124(DEC)

(ii) Binary 10111 written as 10111(BIN)

(iii) Octal 123 written as 123(OCT)

(iv) Hexadecimal 2B3A written as 2B3A(HEX)


Number Base Conversions

 A binary number can be converted to decimal by forming the sum of the powers of 2 of those coefficients whose value is 1.

For example:
(1010.011)₂ = 2³ + 2¹ + 2⁻² + 2⁻³ = (10.375)₁₀

 As said earlier, the inverse is also possible.

 The conversion from decimal to binary or to any other base-r system


is more convenient if the number is separated into an integer part
and a fraction part and the conversion of each part done separately.
Number Base Conversions

 Example: Convert decimal number 41 to binary.

 The rule is that first, 41 is divided by 2 to get quotients


and remainders.

 Remainders are set aside as binary bits while the quotient is repeatedly divided by 2, as follows:
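The division table from the slide is not reproduced in the extracted text; the following Python sketch (function name my own) performs the same repeated division and prints each step:

def decimal_to_base(n, base=2):
    """Convert a non-negative integer to the given base by repeated division."""
    digits = "0123456789ABCDEF"
    result = ""
    while n > 0:
        n, remainder = divmod(n, base)       # quotient goes round again, remainder is a digit
        print(f"quotient {n}, remainder {remainder}")
        result = digits[remainder] + result  # remainders are read from last to first
    return result or "0"

print(decimal_to_base(41, 2))   # 101001

Reading the remainders from last to first gives 41₁₀ = 101001₂.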
Number Base Conversions

 The conversion from decimal integers to any base-r system is similar to the example, except that the division is done by r instead of 2.
Number Base Conversions

 Similarly, a number expressed in base r can be converted to its decimal equivalent by multiplying each coefficient by the corresponding power of r and adding.

 The following is an example of octal to decimal conversion:

(630.4)₈ = 6×8² + 3×8¹ + 4×8⁻¹ = (408.5)₁₀

 Example 2: Convert decimal 153 to octal

Solution: The required base r is 8

Number Base Conversions

 The process is manipulated as follows:
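The worked division is only shown as an image on the slide; a small self-contained Python sketch of the same process is:

n, base, digits = 153, 8, ""
while n:
    n, r = divmod(n, base)     # 153 -> 19 r1, 19 -> 2 r3, 2 -> 0 r2
    digits = str(r) + digits
print(digits)                  # 231, so 153 (decimal) = 231 (octal)

This matches the answer quoted later for Example 3.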

 The conversion of a decimal fraction to binary is


accomplished by a method similar to that used for
integers.
Number Base Conversions

 However, multiplication is used instead of division, and integers are accumulated instead of remainders.

 The number is repeatedly multiplied by the base r to give a new integer and a new fraction.

 This process continues until the fraction becomes 0 or until the number of digits gives sufficient accuracy.

Example: Convert (0.6875)₁₀ to binary


Number Base Conversions

Example: Convert (0.6875)₁₀ to binary
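The multiplication steps appear only as an image on the slide; a minimal Python sketch of the same procedure (names are my own) is:

def fraction_to_base(frac, base=2, max_digits=7):
    """Convert a decimal fraction to the given base by repeated multiplication."""
    digits = "0123456789ABCDEF"
    result = ""
    while frac != 0 and len(result) < max_digits:
        frac *= base
        integer_part = int(frac)            # the accumulated integer becomes the next digit
        print(f"x {base}: integer {integer_part}, fraction {frac - integer_part:.6f}")
        result += digits[integer_part]
        frac -= integer_part
    return result

print(fraction_to_base(0.6875, 2))                 # 1011   -> (0.6875)10 = (0.1011)2
print(fraction_to_base(0.513, 8, max_digits=6))    # 406517 -> (0.513)10 ≈ (0.406517...)8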


Number Base Conversions

Example 2: Convert (0.513)₁₀ to octal

Solution:

 The answer, to seven significant figures, is obtained from the integer part of the products:

Therefore (0.513)₁₀ = (0.406517…)₈
Number Base Conversions

 As said earlier, the conversion of decimal numbers with


both integer and fraction parts is done by converting the
integer and fraction separately and then combining the
two answers.

Example 3: Try these: Convert (41.6875)₁₀ to binary and (153.513)₁₀ to octal

 Answers: (101001.1011)₂ and (231.406517)₈ respectively.
Number Base Conversions

Octal and Hexadecimal.


 Digital computers use binary numbers and it is sometimes necessary
for the human operator or user to communicate directly with the
machine by means of binary numbers.

 But binary numbers are difficult to work with because they require
three or four times as many digits as their decimal equivalent.

 For example,

the binary number 111111111111 is equivalent to decimal 4095.


Number Base Conversions

Octal and Hexadecimal.


 One scheme that retains the binary system in the
computer but reduces the number of digits the human
must consider utilizes the relationship between the
binary number system and the octal or hexadecimal
system.

 By this method, the human thinks in terms of octal or


hexadecimal numbers and performs the required
conversion by inspection when direct communication
with the machine is necessary.
Number Base Conversions

Octal and Hexadecimal.


 The conversion from and to binary, octal and
hexadecimal plays an important part in digital
computers.

 Since 2³ = 8 and 2⁴ = 16, each octal digit corresponds to three binary digits and each hexadecimal digit corresponds to four binary digits.
Number Base Conversions

Octal and Hexadecimal.


 The conversion from binary to octal is easily
accomplished by partitioning the binary into groups of
three digits each, starting from the binary point and
proceeding to the left and to the right.

 The corresponding octal digit is then assigned to each


group. Example;
Number Base Conversions

Octal and Hexadecimal.


 Conversion from binary to hexadecimal is similar,
except that the binary number is divided into groups of
four digits:

 Example 2: Convert 10110001101011.11110010 to


Hexadecimal.
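The worked answer for this example is not in the extracted text; a quick check with Python (my own snippet) groups the bits into fours from the binary point and gives 2C6B.F2:

def binary_to_hex(bits):
    """Group a binary string into fours from the binary point and map each group to a hex digit."""
    int_bits, _, frac_bits = bits.partition(".")
    int_bits = int_bits.zfill((len(int_bits) + 3) // 4 * 4)          # pad on the left
    frac_bits = frac_bits.ljust((len(frac_bits) + 3) // 4 * 4, "0")  # pad on the right
    to_hex = lambda group: format(int(group, 2), "X")
    result = "".join(to_hex(int_bits[i:i+4]) for i in range(0, len(int_bits), 4))
    if frac_bits:
        result += "." + "".join(to_hex(frac_bits[i:i+4]) for i in range(0, len(frac_bits), 4))
    return result

print(binary_to_hex("10110001101011.11110010"))   # 2C6B.F2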
Number Base Conversions

Octal and Hexadecimal.


 Conversion from octal or hexadecimal to binary is done
by a procedure reverse to the previous one.

 Each octal is converted to its three-digit binary


equivalent.

 Similarly, each hexadecimal digit is converted to its


four-digit binary equivalent.
Number Base Conversions

Octal and Hexadecimal.


 Similarly, each hexadecimal digit is converted to its
four-digit binary equivalent.

Examples: Convert octal 673.124 and Hexadecimal 306.D


to binary.
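The binary answers are shown only as images; the following sketch (illustrative, my own naming) expands each octal digit to three bits and each hexadecimal digit to four:

def to_binary(number, base):
    """Expand each digit of an octal (base 8) or hexadecimal (base 16) number into bits."""
    width = 3 if base == 8 else 4
    int_part, _, frac_part = number.partition(".")
    expand = lambda part: "".join(format(int(d, base), f"0{width}b") for d in part)
    result = expand(int_part)
    if frac_part:
        result += "." + expand(frac_part)
    return result

print(to_binary("673.124", 8))    # 110111011.001010100
print(to_binary("306.D", 16))     # 001100000110.1101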
Binary Arithmetic Operations

 Arithmetic operations with numbers in base r follow the same rules as for
decimal numbers.

 When other than the familiar base 10 is used, one must be careful to use
only the r allowable digits.

 Examples of addition, subtraction and multiplication of two binary numbers


are as follows:
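The addition, subtraction and multiplication examples appear only as an image on the slide; a minimal illustration with operands of my own choosing is:

a, b = 0b101101, 0b100111          # 45 and 39 in binary

print(format(a + b, "b"))          # 1010100      (binary addition)
print(format(a - b, "b"))          # 110          (binary subtraction)
print(format(a * b, "b"))          # 11011011011  (binary multiplication)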
Binary Arithmetic Operations

Complements
 The subtraction operation is not easily performed by a
digital circuit.

 Thus, complements are used in digital computers for


simplifying the subtraction operation for logical
manipulation.

 There are two types of complements for each base-r


system: the radix complement and the diminished radix
complement.
Binary Arithmetic Operations

Complements

 The first is referred to as the r's complement and the second as the (r–1)'s complement.

 When the value of the base r is substituted in the name, the two types are referred to as the 2's complement and 1's complement for binary numbers, and the 10's complement and 9's complement for decimal numbers.
Binary Arithmetic Operations

Complements

 The 1’s complement of a binary number can be formed


by simply changing 1’s to 0’s and 0’s to 1’s.

Examples;
Binary Arithmetic Operations

Complements

 Similarly the 2’s complement of a binary number can be


formed by simply adding 1 to its 1’s complement as in:

2 examples;
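The worked examples on these two slides are images; a short sketch of both complements for a fixed width (helper names are my own) is:

def ones_complement(bits):
    """Flip every bit: change 1's to 0's and 0's to 1's."""
    return "".join("1" if b == "0" else "0" for b in bits)

def twos_complement(bits):
    """Add 1 to the 1's complement, keeping the same number of bits."""
    width = len(bits)
    return format((int(ones_complement(bits), 2) + 1) % (1 << width), f"0{width}b")

print(ones_complement("1011000"))   # 0100111
print(twos_complement("1011000"))   # 0101000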
Binary Arithmetic Operations

Subtraction with Complements


 The ordinary method of subtraction is a direct one using
the borrow concept.

 In this method we borrow a 1 from a higher significant


position when the minuend digit is smaller than the
subtrahend digit.

 This seems to be easy when performing subtraction with


paper and pencil.
Binary Arithmetic Operations

Subtraction with Complements

 When the subtraction is implemented with digital hardware, this


method is found to be less efficient than the method that uses
complements.

 In subtraction by 1’s complement we subtract two binary numbers


using addition carried by 1’s complement.
Binary Arithmetic Operations

Subtraction with Complements

The steps to be followed in subtraction by 1's complement are:

i) Write down the 1's complement of the subtrahend.

ii) Add this to the minuend.

iii) If the result of the addition has a carry over, it is dropped and a 1 is added to the least significant bit (the end-around carry).

iv) If there is no carry over, the 1's complement of the result of the addition is taken to get the final result, and it is negative.
Binary Arithmetic Operations

Subtraction with Complements

The following examples illustrate the procedure;

Evaluate:

(i) 110101 – 100101


Solution:

1’s complement of 100101 is 011010.


Binary Arithmetic Operations

Subtraction with Complements

Hence:

Minuend                              110101
1's complement of subtrahend       + 011010
                                   ---------
                                   1 001111
End-around carry                   +      1
                                   ---------
                                     010000

Therefore the required difference is 10000.
Binary Arithmetic Operations
Subtraction with Complements

(ii) 101011 – 111001

Solution:
1's complement of 111001 is 000110.

Minuend                              101011
1's complement of subtrahend       + 000110
                                   ---------
                                     110001

There is no carry over, so the answer is the 1's complement of 110001 and it is negative.
Hence the difference is –1110.


Binary Arithmetic Operations
Subtraction with Complements

(iii) 1011.001 – 110.10

Solution: 1's complement of 0110.100 is 1001.011.

Minuend                              1011.001
1's complement of subtrahend       + 1001.011
                                   -----------
                                   1 0100.100
End-around carry                   +        1
                                   -----------
                                     0100.101

Therefore the required difference is 100.101.
Binary Arithmetic Operations
Subtraction with Complements

(iv) 10110.01 – 11010.10

Solution:
1's complement of 11010.10 is 00101.01.

Minuend                              10110.01
1's complement of subtrahend       + 00101.01
                                   -----------
                                     11011.10

There is no carry over, so the answer is the 1's complement of 11011.10 and it is negative.
Hence the required difference is –00100.01, i.e. –100.01.
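A compact Python sketch of the 1's-complement subtraction procedure above (helper name is mine, fixed width assumed):

def subtract_ones_complement(minuend, subtrahend, width=6):
    """Subtract two non-negative integers using 1's-complement addition."""
    ones_comp = (~subtrahend) & ((1 << width) - 1)      # flip the bits of the subtrahend
    total = minuend + ones_comp
    if total >> width:                                   # carry over: drop it, add 1 (end-around carry)
        return (total & ((1 << width) - 1)) + 1
    return -((~total) & ((1 << width) - 1))              # no carry: complement again, result is negative

print(format(subtract_ones_complement(0b110101, 0b100101), "b"))   # 10000
print(subtract_ones_complement(0b101011, 0b111001))                # -14, i.e. -1110 in binary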


Binary Arithmetic Operations
Subtraction with Complements

Subtraction with 2’s complements

 With the help of subtraction by 2’s complement method we can easily


subtract two binary numbers.

 As usual, there is a governing procedure as seen on next slide or


section.
Binary Arithmetic Operations
Subtraction with Complements

The operation is carried out by means of the following steps:


(i) At first, 2’s complement of the subtrahend is found.

(ii) Then it is added to the minuend.

(iii) If the final carry over of the sum is 1, it is dropped and the result
is positive.

(iv) If there is no carry over, the two’s complement of the sum will be
the result and it is negative.
Binary Arithmetic Operations
Subtraction with Complements

The following examples on subtraction by 2’s complement illustrate the


procedures:
Evaluate:
(i) 110110 – 10110

Solution:
The numbers of bits in the subtrahend is 5 while that of minuend is 6.

We make the number of bits in the subtrahend equal to that of


minuend by taking a `0’ in the sixth place of the subtrahend.
Binary Arithmetic Operations
Subtraction with Complements

 Now, 2’s complement of 010110 is (101101 + 1) i.e.101010. Adding


this with the minuend.

1 10110 Minuend

1 01010 2’s complement of subtrahend

Carry over 1 1 00000 Result of addition

 After dropping the carry over we get the result of subtraction to be


100000.
Binary Arithmetic Operations
Subtraction with Complements

(ii) Evaluate: 10110 – 11010

Solution:
2's complement of 11010 is (00101 + 1), i.e. 00110. Hence:

Minuend                               10110
2's complement of subtrahend        + 00110
                                    --------
Result of addition                    11100

As there is no carry over, the result of the subtraction is negative and is obtained by writing the 2's complement of 11100, i.e. (00011 + 1) or 00100. Hence the difference is –100.
Binary Arithmetic Operations
Subtraction with Complements

(iii) Compute: 1010.11 – 1001.01

Solution:
2's complement of 1001.01 is 0110.11. Hence:

Minuend                               1010.11
2's complement of subtrahend        + 0110.11
                                    ----------
Carry over                          1 0001.10

After dropping the carry over, we get the result of the subtraction as 1.10.
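A corresponding sketch for the 2's-complement method (again, names are my own):

def subtract_twos_complement(minuend, subtrahend, width=6):
    """Subtract two non-negative integers using 2's-complement addition."""
    twos_comp = ((~subtrahend) + 1) & ((1 << width) - 1)   # flip the bits and add 1
    total = minuend + twos_comp
    if total >> width:                                      # final carry: drop it, result is positive
        return total & ((1 << width) - 1)
    return -(((~total) + 1) & ((1 << width) - 1))           # no carry: 2's complement of the sum, negative

print(format(subtract_twos_complement(0b110110, 0b010110), "b"))   # 100000
print(subtract_twos_complement(0b10110, 0b11010, width=5))         # -4, i.e. -100 in binary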
Binary Codes

 Electronic digital systems use signals that have two distinct values
and circuit elements that have two stable states.

 These Digital systems represent and manipulate not only binary


numbers, but also many other discrete elements of information.

 Binary codes play an important role in digital computers.

 The codes must be in binary because computers can only hold 1’s
and 0’s.
Binary Codes

 If we inspect the bits of a computer at random , we will find that most


of the time they represent some type of coded information rather than
binary numbers.

 To represent a group of 2ⁿ distinct elements in a binary code requires a minimum of n bits.

 This is because it is possible to arrange n bits in 2ⁿ distinct ways.

 For example, a group of four distinct quantities can be represented


by a two-bit code, with each quantity assigned one of the following bit
combinations: 00, 01, 10, 11.
Binary Codes

 A group of eight elements requires a three-bit code, with each


element assigned to one and only one of the following: 000, 001,
010, 011, 100, 101, 110, 111.

 There are so many ways of coding binary numbers and so many


binary codes used for representing digital information.

 Some of the codes are standardized and are most commonly or


universally used but others are solely for proprietary uses.
Binary Codes
 The common codes includes:

 Decimal Codes such as the Binary Coded Decimal –BCD code

 Error Detection Code such as Parity Codes

 Signal Conversion Codes such as Gray Codes

 Alphanumeric Codes such as ;


i) American Standard Code for Information Interchange- ASCII
ii) IBM’s Extended Binary- Coded Decimal Interchange Code-
EBCDIC
Binary Codes

Binary Coded Decimal –BCD code.

 Binary codes for decimal digits


require a minimum of four bits.

 Numerous different codes can be


obtained by arranging four or more
bits in ten distinct possible
combinations as shown on the
right:
Binary Codes

Binary Coded Decimal –BCD code.

 Thus, when this code is used on any digital machine, it produces the decimal numbers from 0 to 9.

 When a number exceeds nine, ten is no longer written as 1010 but rather as 0001 0000, and 23 is written as 0010 0011.

 This system is used in most simple to medium digital machines, e.g. counters.
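A tiny Python illustration of BCD encoding (my own snippet) reproduces these values:

def to_bcd(number):
    """Encode each decimal digit separately as a 4-bit group (Binary Coded Decimal)."""
    return " ".join(format(int(d), "04b") for d in str(number))

print(to_bcd(9))    # 1001
print(to_bcd(10))   # 0001 0000
print(to_bcd(23))   # 0010 0011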
Binary Codes
Alphanumeric Codes
American Standard Code for Information Interchange- ASCII

 Many applications of digital computers require the handling of data not


only of numbers, but also of letters.

 So, it is necessary to formulate a binary code for the letters of alphabet. In


addition , the same binary code must represent numerals and special
characters such as $, @, etc.

 So alphanumeric character set includes a set of 10 decimal digits, the 26


letters of the alphabet and a number of special characters.
Binary Codes
Alphanumeric Codes
American Standard Code for Information Interchange- ASCII

 The ASCII code is a standard code for alphanumeric characters. It uses seven bits to code 128 characters.

 The seven bits of the code are designated by b₁ through b₇, with b₇ being the most significant bit.

 The letter A, for example, is represented in ASCII as 1000001 (column 100, row 0001).

 Similarly, the letter a is represented as 1100001 (column 110, row 0001), and % is 0100101 (column 010, row 0101).
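These values can be checked with Python's built-in ord (a quick illustration, not part of the original slide):

for ch in "Aa%":
    print(ch, format(ord(ch), "07b"))   # A 1000001, a 1100001, % 0100101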
Binary Codes
– Other characters are control characters used for formatting and controlling the information interchange.
Binary Codes

Exercise!

Represent the word “Hello” with binary ASCII code


BINARY LOGIC

 Binary logic deals with variables that take on two discrete values and
with operations that assume logical meaning.

 The two values the variables take may be called by different names
(e.g., true and false, yes and no, etc).

 For our purpose, it is convenient to think in terms of bits and assign


the values of 1 and 0.

 Binary logic is used to describe, in a mathematical way, the


manipulation and processing of binary information.
BINARY LOGIC

 Binary logic consists of binary variables and logical operations.

 The variables are designated by letters of the alphabet such as A, B,


C, x, y, z etc, with each variable having two and only two distinct
possible values; 1 and 0.

 There are three basic logical operations: AND, OR and NOT. Other operations include Exclusive-OR (X-OR), NAND and NOR.

 The rules of operations of the basic logical operations are as follows


on the next slide.
BINARY LOGIC
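The rules table on this slide is an image; a minimal Python sketch that prints the equivalent truth tables for the three basic operations is:

print("x y | AND OR | NOT x")
for x in (0, 1):
    for y in (0, 1):
        print(f"{x} {y} |  {x & y}   {x | y} |   {1 - x}")
# AND is 1 only when both inputs are 1; OR is 0 only when both inputs are 0; NOT inverts its single input.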
BINARY LOGIC

 Logic operations are usually implemented by the use of digital


electronic circuits called logic gates

 The logic gates are also called logic circuits because, with proper
input, they establish logical manipulation paths.

 These are also sometimes called switching circuits as they operate


just like switches configured to produce outputs based on the applied
logic.

 Remember that computers have arithmetic and logic units to perform


various operations to produce logical results.
BINARY LOGIC

 The figure below shows the switching circuits for AND and OR operations.

 Here the lamp L will be lit by operations which involve opening or closing switches A and/or B. These are taken as the inputs and the bulb as the output.
LOGIC GATES

 As said earlier, logic operations are


usually implemented by the use of
digital electronic circuits called logic
gates.

 These are constructed by transistors


and other discrete components.

 An example of practical NOT and


NAND gates are shown on the left.

 Remember that a gate must be


powered in order to operate
LOGIC GATES

 To reduce design complexity, logic gates use designated graphic symbols.

 For example, the previous NOT gate is hidden inside its symbol as shown.
LOGIC GATES

 The standard symbols of logic


gates are used in modern
electronic circuits but some
literatures still use old
symbols developed by various
organizations such as ANSI,
IEC and IEEE.

 Logic gates are usually drawn


against their truth tables as
shown
ANSI – American National Standards Institute
IEC – International Electrotechnical Commission
IEEE – Institute of Electrical and Electronics Engineers
LOGIC GATES

 The OR gate
LOGIC GATES

 The NOT or
inverter
gate.
LOGIC GATES

 The
Exclusive
OR gate.
LOGIC GATES

 The NAND
gate.
LOGIC GATES

 Many other gates may be constructed by extending or


complementing the functionalities of the previous ones.

 Others include NOR, X-NOR, etc.

 Although most examples here show logic gates with two inputs, gates can be made with more (three, four or more inputs); this does not change the underlying logic they implement.
LOGIC GATES

 Logic gates are usually packed


in integrated circuit packages
for ordinary and medium scale
electronic circuits.

 In case of computers and other


large scale circuits they are
part of processors and
microprocessors.

 One must refer to


manufacturer’s datasheet for
input/output pins of the gates
LOGIC GATES

 In real life situations, the inputs


to the logic gates is a series of
binary bits depending on the
information or data content.

 So, the signals may continue to flow for as long as the circuit operates.

 The signals can thus be represented as a sequence of pulses over time; this is known as a timing diagram, as shown.
Digital Combinational Circuits

 Logic circuits for digital systems may be combinational or


sequential.

 A combinational logic circuit consists of logic gates whose outputs


at any time are determined directly from the present combination of
inputs without regard to previous inputs.

 A combinational circuit performs a specific information-processing operation fully specified logically by a set of Boolean functions F.

 A combinational circuit consists of input variables, logic gates and


output variables.
Digital Combinational Circuits

 The logic gates accept signals from the


inputs and generate signals to the outputs.

 This process transforms binary information


from the given input data to the required
output.

 The number n of input binary variables


come from an external source; the m
output variables go to an external
destination.

 The block diagram of a combinational


circuit is shown on the right.
Digital Combinational Circuits
Digital Combinational Circuits

Digital Arithmetic Circuits

 Digital computers perform a variety of information-processing tasks.


Among the basic functions encountered are the various arithmetic
operations.

 The basic arithmetic operation, no doubt, is the addition of two


binary digits.

 A combinational circuit that performs the addition of two bits is called


a half adder.

 One that performs the addition of three bits( two significant bits and a
previous carry) is a full adder.
Digital Combinational Circuits

Half Adder

 Because the adder is a combinational circuit, to design it we follow


the same rules as stated earlier.

 The problem is already known: we need to add two binary digits.

 We need to establish a truth table for both inputs and outputs.

 In this case we need two inputs x and y as well as two outputs carry
C and sum S.

 The truth table will tell us which gates to use and connect to which
other ones.
Digital Combinational Circuits

Half Adder

 The truth table is given on the right;

 The Boolean functions are obtained from the output columns, thus: S = x'y + xy' and C = xy.
Digital Combinational Circuits

Half Adder

 The required circuit is shown on the slide.

 But remember from previous classes that S = x'y + xy' = x ⊕ y, which is an X-OR operation.
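The truth table and circuit appear only as images on the slides; a minimal Python sketch of the half adder equations (S = x ⊕ y, C = x·y) is:

def half_adder(x, y):
    """Add two bits: sum is the XOR of the inputs, carry is their AND."""
    return x ^ y, x & y   # (S, C)

print("x y | C S")
for x in (0, 1):
    for y in (0, 1):
        s, c = half_adder(x, y)
        print(f"{x} {y} | {c} {s}")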
Digital Combinational Circuits

Half Adder

 It is always advised to take into consideration some economical


factors when designing digital circuits in order to lower the cost of
the digital equipment.

 So the last circuit can be designed to share the same inputs and
number of gates reduced to the following design;
Digital Combinational Circuits

General conclusions

 Digital combinational circuits are used to design binary arithmetic circuits such as adders, subtractors, comparators, decoders, encoders, multiplexers/demultiplexers, etc.

 These circuits can be easily designed with straightforward methods of truth tables, Boolean functions and common simplification methods, and finally implemented.

 It is up to the designer to define a problem and produce a design.

 Such circuits can be easily understood even in self study after understanding the basics.
Digital Sequential Circuits

Introduction
 Sequential circuits employ memory elements (binary cells) in addition
to logic gates. Their outputs are a function of the inputs and the state
of the memory elements.

 The state of memory elements in turn, is a function of previous


inputs.

 As a consequence , the outputs of a sequential circuit depend not


only on present inputs, but also on past inputs, and the circuit
behavior must be specified by a time sequence of inputs and internal
states.
Digital Sequential Circuits

Introduction
 Block diagram of a
sequential circuit is
shown here →

 This consists of a
combinational circuit to
which memory elements are
connected to form a
feedback path.
Digital Sequential Circuits

Introduction
 The binary information stored in the memory at any given time defines
the state of the sequential circuit.

 The sequential circuit receives binary information from external inputs.

 These inputs, together with the present state of the memory


elements, determine the binary value at the output terminals.
Digital Sequential Circuits

Introduction
 There are two main types of sequential circuits. Their classification
depends on the timing of their signals.

 A synchronous sequential circuit is a system whose behavior can be


defined from the knowledge of its signal at discrete instants of time.

 An asynchronous sequential circuit is the one whose behavior depends


upon the order in which its input signals change and can be affected at
any instant of time. The memory elements commonly used in
asynchronous sequential circuits are time-delay devices.
Digital Sequential Circuits

Synchronous Sequential Circuits


 A synchronous sequential logic system, by definition, must employ
signals that affect the memory elements only at discrete instants of
time.

 One way of achieving this goal is to use pulses of limited duration throughout the system so that one pulse amplitude represents logic 1 and another amplitude (or the absence of a pulse) represents logic 0.

 Practical synchronous sequential logic systems use fixed amplitudes


such as voltage levels for the binary signals. Synchronization is
achieved by a timing device called the master-clock generator, which
generates a periodic train of clock pulses.
Digital Sequential Circuits

Synchronous Sequential Circuits


 The clock pulses are distributed throughout the system in such a way
that memory elements are affected only with the arrival of the
synchronization pulse.

 In practice, the clock pulses are applied into AND gates together with
the signals that specify the required change in memory elements.

 The AND gate outputs can transmit signals only at instants that
coincide with the arrival of clock pulses.
Digital Sequential Circuits

Synchronous Sequential Circuits

 Synchronous sequential circuits that use clock pulses in the inputs of


memory elements are called clocked sequential circuits and are the
type encountered most frequently.

 The memory elements used in clocked sequential circuits are called


flip-flops.

 These circuits are binary cells capable of storing one bit of


information.
Digital Sequential Circuits

Flip flops

 A flip flop circuit has two outputs, one for the normal value and one for
the complement value of the bit stored in it.

 Binary information can enter a flip-flop in a variety of ways, a fact that


gives rise to different types of flip flops.

 A basic flip flop can be constructed from two NAND gates or two NOR
gates.

 These constructions form a basic flip-flop upon which other more


complicated types can be built.
Digital Sequential Circuits

Flip flops

 Each flip-flop has two outputs,


Q and Q’, and two inputs, set
and reset.

 This type of flip-flop is


sometimes called a direct-
coupled RS flip-flop, or SR
latch.

 The R and S are the first


letters of the two input names.
Digital Sequential Circuits

Flip flops

 The operation of the basic flip-


flop in the figure can be
analyzed by remembering the
operation of the NOR gate.

 Its output is 0 if any input is 1,


and that the output is 1 only
when all inputs are 0.

 Now assume that the set input


is 1 and the reset input is 0.
Digital Sequential Circuits

Flip flops

 Since gate 2 has an input 1, its


output Q’ must be 0, which
puts both inputs of gate 1 at 0,
so that output Q is 1.

 When the set input is returned


to 0, the outputs remain the
same, because output Q
remains a 1, leaving one input
of gate 2 at 1.
Digital Sequential Circuits

Flip flops

 That causes output Q’ to stay


at 0, which leaves both inputs
of gate number 1 at 0, so that
output Q is a 1.

 In the same manner, it is


possible to show that a 1 in the
reset input changes output Q
to 0 and Q’ to 1.
 When the reset input returns to
0, the outputs do not change.
Digital Sequential Circuits

Flip flops

 When a 1 is applied to both the


set and the reset inputs, both Q
and Q’ outputs go to 0.

 This condition violates the fact that


output Q and Q’ are the
complements of each other.

 In normal operation, this condition


must be avoided by making sure
that 1’s are not applied to both
inputs simultaneously.
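As an illustration of the NOR-based latch behaviour described above, here is a small Python sketch (my own model, iterating the two cross-coupled NOR gates until they settle):

def nor(a, b):
    return 0 if (a or b) else 1

def sr_latch(s, r, q, q_bar):
    """Iterate the two cross-coupled NOR gates until the outputs settle."""
    for _ in range(4):
        q, q_bar = nor(r, q_bar), nor(s, q)
    return q, q_bar

state = (0, 1)                      # start in the reset state: Q = 0
state = sr_latch(1, 0, *state)      # set:   Q becomes 1
print(state)                        # (1, 0)
state = sr_latch(0, 0, *state)      # hold:  outputs remain the same
print(state)                        # (1, 0)
state = sr_latch(0, 1, *state)      # reset: Q returns to 0
print(state)                        # (0, 1)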
Digital Sequential Circuits
Clocked Flip flops

 The operation of the basic flip-flop


can be modified by providing an
additional control input that
determines when the state of the
circuit is to be changed.

 An RS flip-flop with a clock pulse


CP input is shown in figure at the
right.

 It consists of a basic flip-flop


circuit and two additional NAND
gates.
Digital Sequential Circuits
Flip flops

 The pulse input acts as an enable


signal for the other two inputs.

 The outputs of NAND gates 3 and


4 stay at the logic 1 level as long
as the CP input remains at 0. This
is the quiescent condition for the
basic flip-flop.

 When the pulse input goes to 1,


information from the S or R input
is allowed to reach the output.
Digital Sequential Circuits
Flip flops

 The set state is reached with S = 1,


R = 0, and CP = 1.

 This causes the output of gate 3 to


go to 0, the output of gate 4 to
remain at 1, and the output of the
flip-flop at Q to go to 1.

 To change to the reset state, the


inputs must be S = 0, R = 1 and
CP = 1
Digital Sequential Circuits
Flip flops

 In either case, when CP returns to


0, the circuit remains in its previous
state.

 When CP = 1 and both the S and R


inputs are equal to 0, the state of
the circuit does not change.

 An indeterminate condition occurs


when CP = 1 and both S and R are
equal to 1.
Digital Sequential Circuits
Flip flops

 This condition places 0's in the outputs of gates 3 and 4 and 1's in both outputs Q and Q'.

 When the CP input goes back to 0


(while S and R are maintained at
1), it is not possible to determine
the next state, as it depends on
whether the output of gate 3 or
gate 4 goes to 1 first.
Digital Sequential Circuits
Flip flops

 This indeterminate condition


makes this circuit difficult to
manage and it is seldom used in
practice.

 But almost all other flip-flops are constructed from it.

 The graphic symbol of the RS flip-


flop is shown on bottom picture
Digital Sequential Circuits
D-Flip flops

 One way to eliminate the undesirable condition of the indeterminate


state in the RS flip-flop is to ensure that inputs S and R are never equal
to 1 at the same time.

 This is done in the D flip-flop in the shown figure on the bottom.


Digital Sequential Circuits
D-Flip flops

 It has only two inputs: D and CP. The D input goes directly to the S
input and its complement is applied to the R input.

 As long as the pulse input is at 0, the outputs of gates 3 and 4 are at


the 1 level and the circuit cannot change state regardless of the value
of D.
Digital Sequential Circuits
D-Flip flops

 The D input is sampled when CP = 1. If D is 1, the Q


output goes to 1, placing the circuit in the set state.

 If D is 0, output Q goes to 0 and the circuit switches to the


clear state.
Digital Sequential Circuits
Other Flip flops

 Other flip-flops includes a JK flip-flop and a T


flip-flop.

 The JK flip-flop is a refinement of the RS flip-


flop and the T flip flop is a modification of this
JK.

 Both have the ability to complement their outputs in the input condition that is indeterminate for the RS flip-flop.

 The graphic symbols are shown in the accompanying figure.
Digital Sequential Circuits
Other Flip flops

 As seen earlier, the state of a flip-flop is switched by a momentary change in the input signal.

 This momentary change is called a trigger, and the transition it causes is said to trigger the flip-flop.

 There are many ways of triggering flip flops such as pulse triggering and
edge triggering.

 Clocked flip-flops are triggered by pulses. A pulse starts from an initial


value of 0, goes momentarily to 1, and after a short time, returns to its initial
0 value.
Digital Sequential Circuits
Other Flip flops

 Remember that logic gates have a propagation delay from the input to the
output.

 The time interval from the application of the pulse until the output transition
occurs is a critical factor.

 One way of dealing with this is to depend on the pulse transition rather than the pulse duration.

 This is where edge triggering comes in.


Digital Sequential Circuits
Master-Slave Flip flops

 Another solution is to use a method in which two individual flip-flop circuits are connected with one flip-flop driving the other; the driving flip-flop is called the master flip-flop.

 The other which is driven is


called a slave flip-flop and the
overall circuit is known as a
Master-slave flip-flop.
Digital Sequential Circuits
Master-Slave Flip
flops

 The clock drives the


slave through an
inverter.

 This will create a


timed delay which
isolates the two flip-
flops from
interference.
Computer Components
Basic Computer
Architecture

 Input/output
units

 Memory/storage
units

 CPU (Central
Processing Unit)
Computer Components
Motherboard Structure

 Chipset
 Northbridge
– Connected to CPU in high
speed
 Southbridge
– Connected in low speed

 Bus
– Related to “omnibus”
– Communication system
between components
Computer System
Components Connection
Computer Components
Basic CPU
Architecture
CPU Components

 ALU (Arithmetic Logic Unit)


– Performs calculations and comparisons (data changed)
 CU (Control Unit): performs fetch/execute cycle
– Functions:
 Moves data to and from CPU registers and other hardware
components (no change in data)
 Accesses program instructions and issues commands to the ALU
– Subparts:
 Memory management unit: supervises fetching instructions and
data
 I/O Interface: sometimes combined with the memory management unit as the Bus Interface Unit
 Registers
– Example: Program Counter (PC) or instruction pointer
determines next instruction for execution
CPU Components

Control Unit- CU

 Provides control signals for the operation and coordination of all


processor components.

 The paths among components can carry control signals. For


example, a gate will have one or two data inputs plus a control
signal input that activates the gate.

 When the control signal is ON, the gate performs its function on
the data inputs and produces a data output.
CPU Components

Control Unit- CU

 Similarly, the memory cell will store the bit that is on its input lead
when the WRITE control signal is ON and will place the bit that is
in the cell on its output lead when the READ control signal is ON.

 Thus, a computer consists of gates, memory cells, and


interconnections among these elements.

 The gates and memory cells are, in turn, constructed of simple


digital electronic components.
CPU Components

Registers

 Registers are small, permanent storage locations within the CPU


used for a particular purpose

 They are manipulated directly by the CU

 Wired for specific function

 Size in bits or bytes (not MB like memory)

 Can hold data, an address or an instruction


CPU Components

Functions of Registers
 Stores values from other locations (registers and memory)
 Addition and subtraction
 Shift or rotate data
 Test contents for conditions such as zero or positive

 A register is a group of binary cells called flip-flops or other storage components like capacitors, etc.

 Since a cell stores one bit of information, it follows that the register
with n cells can store any discrete quantity of information that
contains n bits.
CPU Components

Functions of Registers
 The number and arrangements of flip-flops in the memory unit or
registers depend on the word size of the processor and memory.

 The state of a register is an n-tuple number of 1’s and 0’s, with each
bit designating the state of one cell in a register.

 The content of a register is a function of the interpretation given to the


information stored in it

 Now that you know the internal structure of flip-flops, let’s see an
example of a four bit register.
CPU Components

Functions of Registers
 The diagram shows a 4-bit register constructed with 4 D-flip-flops.

 It has four data inputs, I₄, I₃, I₂ and I₁, as well as four data outputs, A₄, A₃, A₂ and A₁.

 The clock CP is used to trigger the register for data in and out.
CPU Components

Functions of Registers
 Let’s see one example of
register data shift or transfer
operation.

 Assume the word “JOHN” is


typed on the keyboard of a
machine that uses a single
parity bit for error detection.

 Each character code in this machine is thus eight bits wide, so the input register has eight cells.
CPU Components

Functions of Registers
 Each time a key is struck, the
information from the input
register is transferred into the
eight least significant cells of a
processor register.

 After every transfer, the input


register is cleared to enable the
control to insert a new eight-bit
code when the keyboard is
struck again.
CPU Components

Functions of Registers
 Each 8-bit character transferred to the processor register is preceded by shifting the previous character into the next eight cells on its left.

 When a transfer of four


characters is completed, the
processor register is full and
its contents are transferred
into a memory register.
CPU Components

Types of Registers

 Program Counter (PC)


register

 Instruction Register (IR)

 Status register: status, flags

 Data registers

 Accumulators
CPU Components

Register Memories

 Computer memory is organized into a hierarchy. At the highest


level (closest to the processor) are the processor registers.

 Next comes one or more levels of cache,

 When multiple levels are used, they are denoted L1, L2, and so on.

 Next comes main memory, which is usually made out of dynamic


random-access memory (DRAM).
CPU Components

Register Memories

 All of these are considered internal


to the computer system.

 The hierarchy continues with


external memory, with the next level
typically being a fixed hard disk.

 One or more levels below that


consisting of removable media such
as optical disks, flash memories and
tape.
CPU Components

Register Memories

 As one goes down the memory


hierarchy, one finds decreasing
cost/bit, increasing capacity, and
slower access time.

 It would be nice to use only the


fastest memory, but because that is
the most expensive memory, we
trade off access time for cost by
using more of the slower memory.
CPU Components

Register Memories

 The design challenge is to organize


the data and programs in memory
so that the accessed memory words
are usually in the faster memory.

 In general, it is likely that most future


accesses to main memory by the
processor will be to locations
recently accessed.
CPU Components

The cache Memories

 Cache memory, also called CPU memory, is random access


memory (RAM) that a computer microprocessor can access more
quickly than it can access regular RAM.

 So the cache automatically retains a copy of some of the recently


used words from the DRAM.

 If the cache is designed properly, then most of the time the


processor will request memory words that are already in the cache.
CPU Components

The cache Memories

 This memory is typically integrated directly with the CPU chip or


placed on a separate chip that has a separate bus interconnect with
the CPU.

 Cache memory is fast and expensive.

 Traditionally, it is categorized as "levels" that describe its closeness


and accessibility to the microprocessor.
CPU Components

The cache Memories

Levels

 Level 1 (L1) cache is extremely fast but relatively small, and is


usually embedded in the processor chip (CPU).

 Level 2 (L2) cache is often more capacious than L1.

 L2 may be located on the CPU or on a separate chip


or coprocessor with a high-speed alternative system bus
interconnecting the cache to the CPU, so as not to be slowed by
traffic on the main system bus.
CPU Components

The cache Memories

Levels

 Level 3 (L3) cache is typically specialized memory that works to


improve the performance of L1 and L2.

 It can be significantly slower than L1 or L2, but is usually double the


speed of RAM.

 In the case of multicore processors, each core may have its own dedicated L1 and L2 cache, but share a common L3 cache.
CPU Components

Operation of Memory

 Each memory location has a


unique address

 Address from an instruction


is copied to the MAR
(Memory Address Register)
which finds the location in
memory.

 CPU determines if it is a
store or retrieval
CPU Components

Operation of Memory

 Transfer takes place


between the MDR
(Memory Data Register)
and memory

 MDR is a two way


register
CPU Components

Memory Capacity
 Determined by two factors:
1. Number of bits in the MAR
 2^K locations, where K = width of the register in bits.

2. Size of the address portion of the instruction

 4 bits allows 16 locations
 8 bits allows 256 locations
 32 bits allows 4,294,967,296 locations, or 4 GB

 Important for performance.


– Insufficient memory can cause a processor to work at 50% below
performance.
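A quick sanity check of these address-space sizes (illustrative Python):

for k in (4, 8, 32):
    print(k, "address bits ->", 2 ** k, "locations")
# 4 -> 16, 8 -> 256, 32 -> 4294967296 (4 GB of addressable locations)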
CPU Components

Random Access Memory- RAM


 DRAM (Dynamic RAM)
– Most common, cheap
– Volatile: must be refreshed (recharged with power) 1000’s of
times each second

 SRAM (Static RAM)


– Faster than DRAM and more expensive than DRAM
– Volatile
– Frequently small amount used in cache memory for high-speed
access used.
CPU Components

Read Only Memory- ROM


 Non-volatile memory to hold software that is not expected to change
over the life of the system

 Magnetic core memory

 EEPROM
– Electrically Erasable Programmable ROM
– Slower and less flexible than Flash ROM
 Flash ROM
– Faster than disks but more expensive
– Uses
 BIOS: initial boot instructions and diagnostics
 Digital cameras
CPU Components

CMOS Memory
 CMOS (Complementary Metal Oxide Semiconductor) TR (Transistor)
– Low power consumption, cheap TR

 BIOS (Basic I/O System) and system settings that users can change
CPU Components

Machine Cycle
Fetch-decode-execute-store
CPU Components

Machine Cycle

 At the beginning of each instruction cycle, the processor fetches an


instruction from memory.

 In a typical processor, a register called the program counter (PC)


holds the address of the instruction to be fetched next.

 Unless told otherwise, the processor always increments the PC after


each instruction fetch so that it will fetch the next instruction in
sequence (i.e., the instruction located at the next higher memory
address).
CPU Components

Machine Cycle

 So, for example, consider a computer in which each instruction


occupies one 16-bit word of memory.

 Assume that the program counter is set to location 300.The


processor will next fetch the instruction at location 300.

 On succeeding instruction cycles, it will fetch instructions from


locations 301, 302, 303, and so on. This sequence may be altered,
as explained presently.
CPU Components

Machine Cycle

 The fetched instruction is loaded into a register in the processor


known as the instruction register (IR).

 The instruction contains bits that specify the action the processor is
to take.

 The processor interprets the instruction and performs the required


action. In general, these actions fall into four categories:

 Processor-memory: Data may be transferred from processor to memory or


from memory to processor.
CPU Components

Machine Cycle

 Processor-I/O: Data may be transferred to or from a peripheral device


by transferring between the processor and an I/O module.

 Data processing: The processor may perform some arithmetic or logic


operation on data.

 Control: An instruction may specify that the sequence of execution be


altered. For example, the processor may fetch an instruction from
location 149, which specifies that the next instruction be from location
182. The processor will remember this fact by setting the program
counter to 182.Thus, on the next fetch cycle, the instruction will be
fetched from location 182 rather than 150.
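The fetch-decode-execute cycle described above can be sketched as a toy loop in Python (a simplified model with an invented three-instruction set, not any real processor):

memory = {300: ("LOAD", 5), 301: ("ADD", 3), 302: ("JUMP", 149), 149: ("HALT", None)}
pc, acc = 300, 0                       # program counter and accumulator

while True:
    opcode, operand = memory[pc]       # fetch the instruction at the PC ...
    pc += 1                            # ... and increment the PC by default
    if opcode == "LOAD":               # decode and execute
        acc = operand
    elif opcode == "ADD":
        acc += operand
    elif opcode == "JUMP":             # control: alter the sequence by rewriting the PC
        pc = operand
    elif opcode == "HALT":
        break

print(acc)                             # 8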
Computer System

Basically it is divided into two parts;

1. Computer Architecture
2. Computer Organization

 Computer Architecture – the computer attributes which can be recognized by the programmer. These attributes have a direct effect on program execution, such as the instruction set, data representation, addressing and I/O.

 Example: Intel x86 processors share the same architecture.


Computer System

Computer Organization

 The interconnection of the computer hardware resources.

 Including the integration between systems.

 The communication flow control between the physical components.

Note: Each computer version may have a different organization.


Computer System

Computer Classification
Based on;

• CPU speed

• The number of registers inside the CPU

• The word size

• Main memory size (RAM)


Computer System

Computer Classification
Based on;
 Complexity of the Operating System
• Physical size
• Cost
• Cyber Memory Space
• Secondary memory size
• The multiple-programming degree
Computer System

Common System Architectures


 Von Neumann Architecture which is also known as the Von
Neumann model describes a design architecture for an
electronic digital computer with parts consisting of a processing
unit containing an arithmetic logic unit, processor registers and a
control unit .

 The control unit has a program counter and an instruction register, and a single memory stores both data and instructions.
Computer System

Von Neumann
Architecture
 It also has external
mass storage and
input and output
mechanisms.

 It is named after the


mathematician and
early computer
scientist John Von
Neumann.
Computer System

Von Neumann
Architecture
 The computer has
single storage
system(memory) for
storing data as well as
program to be
executed.

 A single set of
address/data buses
between CPU and
memory.
Computer System

Von Neumann
Architecture
 Processor needs two
clock cycles to complete
an instruction.

 Pipelining the instructions is not possible with this, as in the first clock cycle the processor gets the instruction from memory and decodes it.
Computer System

Von Neumann
Architecture

 In the next clock cycle


the required data is
taken from memory.

 For each instruction this


cycle repeats and hence
needs two cycles to
complete an instruction.
Computer System

Von Neumann Architecture


 This is a relatively older
architecture and was replaced
by Harvard architecture.

 The meaning has evolved to be


any stored-program
computer in which
an instruction fetch and a data
operation cannot occur at the
same time because they share
a common bus.
Computer System

Von Neumann Architecture


 This is referred to as the von Neumann bottleneck and often limits
the performance of the system.

 The design of a von Neumann architecture machine is simpler than


that of a Harvard architecture machine, which is also a stored-
program system but has one dedicated set of address and data
buses for reading data from and writing data to memory, and another
set of address and data buses for instruction fetching.

 A stored-program digital computer is one that keeps its program


instructions, as well as its data, in read-write, random-access
memory (RAM).
Computer System

Harvard Architecture
 The Harvard architecture is a computer architecture with physically
separate storage and signal pathways for instructions and data.

 The term originated from the Harvard Mark I relay-based computer,


which stored instructions on punched tape (24 bits wide) and data
in electro-mechanical counters.

 Programs needed to be loaded by an operator; the processor could


not initialize itself.
Computer System

Harvard Architecture
 Today, most processors
implement such separate
signal pathways for
performance reasons, but
actually implement
a modified Harvard
architecture, so they can
support tasks like loading a
program from disk
storage as data and then
executing it.
Computer System

Harvard Architecture
 The name originated from the "Harvard Mark I", an old relay-based computer, which stored instructions on punched tape (24 bits wide) and data in electro-mechanical counters.

 The computer has two


separate memories for
storing data and program.
Computer System

Harvard Architecture
 Two sets of address/data
buses between CPU and
memory.
 Processor can complete an
instruction in one cycle if
appropriate pipelining
strategies are implemented.

 In the first stage of the pipeline, the instruction to be executed is taken from program memory.
Computer System

Harvard Architecture
 In the second stage of
pipeline data is taken from
the data memory using the
decoded instruction or
address.

 Most of the modern


computing architectures are
based on Harvard
architecture. But the number
of stages in the pipeline
varies from system to system.
Computer System

Pipelining
 In computing, a pipeline is a set of data processing elements
connected in series, where the output of one element is the input of
the next one.

 The elements of a pipeline are often executed in parallel or in time-


sliced fashion; in that case, some amount of buffer storage is often
inserted between elements.

 Pipelining is an implementation technique where multiple instructions


are overlapped in execution.

 The computer pipeline is divided in stages.


Computer System

Pipelining
 Each stage completes a part of an instruction in parallel.

 The stages are connected one to the next to form a pipe - instructions
enter at one end, progress through the stages, and exit at the other
end.

 Pipelining does not decrease the time for individual instruction


execution. Instead, it increases instruction throughput.

 The throughput of the instruction pipeline is determined by how often


an instruction exits the pipeline.
Computer System

Pipelining
 Because the pipe stages are hooked together, all the stages must be
ready to proceed at the same time.

 We call the time required to move an instruction one step further in the
pipeline a machine cycle .

 The length of the machine cycle is determined by the time required for
the slowest pipe stage.
Computer System

Pipelining
 The pipeline designer's goal is to balance the length of each pipeline
stage .

 If the stages are perfectly balanced, then the time per instruction on the pipelined machine is equal to:

Time per instruction on non-pipelined machine ÷ Number of pipe stages
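For example, under this ideal assumption (the figures below are chosen purely for illustration):

time_per_instruction_nonpipelined = 10.0   # ns, assumed figure for illustration
pipe_stages = 5
print(time_per_instruction_nonpipelined / pipe_stages)   # 2.0 ns per instruction, ideally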
Computer System
Computer-related pipelines include:
 Instruction pipelines, such as the classic RISC pipeline, which are
used in central processing units (CPUs) to allow overlapping
execution of multiple instructions with the same circuitry.

 The circuitry is usually divided up into stages and each stage


processes one instruction at a time.

 Examples of stages are instruction decode, arithmetic/logic and


register fetch.
Computer System
Computer-related pipelines include:
 Graphics pipelines, found in most graphics processing units (GPUs),
which consist of multiple arithmetic units, or complete CPUs, that
implement the various stages of common rendering operations
(perspective projection, window clipping, color and light calculation,
rendering, etc.).

 Software pipelines, where commands can be written where the output


of one operation is automatically fed to the next, following operation.
The Unix system call pipe is a classic example of this concept,
although other operating systems do support pipes as well.

 HTTP pipelining, where multiple requests are sent without waiting for
the result of the first request.
Questions….!?

Think and ask…!


REVIEWS

 Recall all previous topics covered so far.

 Evaluate how much knowledge you have before


you exit this class.

 Try to analyze carefully the problems that might


arise from the topics

 Develop possible solutions to the problems.
