ECE 310 Lecture 3 - Introduction To Microprocessors
Microprocessors and
Microcontrollers
Introduction to Microprocessors (2)
Mohammed Sharaf Sayed
[email protected]
Lecture Outline
• What is Microprocessor?
• Address Bus, Data Bus, and Control Bus
• Tristate Bus
• Clock Generation
• Connecting Microprocessor to I/O Devices
o I/O Mapped I/O Interface
o Memory Mapped I/O Interface
• Data Transfer Schemes
o Parallel Data Transfer
o Serial Data Transfer
• Architectural Advancements of Microprocessors
o Pipelining
o Cache Memory
What is Microprocessor?
[Slide content not captured in this transcript.]
Connecting Microprocessor to I/O Devices
[Slide content not captured in this transcript.]
Data Transfer Schemes
[Slide content not captured in this transcript.]
Architectural Advancements of Microprocessors
• Pipelining:
– Let us consider an example in which there is a continuous flow
of tasks T(1), T(2), T(3), T(4) …, each of which can be divided
into four sub-tasks [e.g., task T(1) will have sub-tasks T1(1),
T2(1), T3(1) and T4(1)] as before.
– The pipeline then looks like this:
[Pipeline timing diagram not captured.]
Architectural Advancements of Microprocessors
• Pipelining:
– The task T(1) consisting of sub-tasks T1(1), T2(1), T3(1) and
T4(1) will get completed at the end of cycle 4.
– The task T(2) consisting of sub-tasks T1(2), T2(2), T3(2) and
T4(2) will get completed at the end of cycle 5.
– Similarly task T(3) will get completed at the end of cycle 6.
– Thus every task after T(1) completes one cycle after the previous one, i.e., one task finishes per cycle.
– Thus, for subsequent tasks, a 4-times enhancement in speed is achieved.
– If the number of tasks is very large, the overall speed enhancement (including task T(1)) approaches 4.
Architectural Advancements of Microprocessors
• Pipelining:
– The pipelining concept has been adopted for instruction
execution; this is the ‘instruction pipeline’.
– The execution of an instruction consists of the following four
independent sub-tasks:
(a) Instruction Fetch (b) Instruction Decode
(c) Operand Fetch (d) Execute
– If four hardware modules or processing elements are
designed to execute these four sub-tasks, 4 times
enhancement in the execution speed is possible.
– This however is the maximum theoretical enhancement
possible since not all instructions are independent.
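The overlap of the four sub-tasks can be visualized with a small timetable sketch (a minimal illustration, assuming independent instructions, one stage per cycle, and our own function names):

```python
# Four-stage instruction pipeline: Instruction Fetch, Instruction
# Decode, Operand Fetch, Execute.
STAGES = ["IF", "ID", "OF", "EX"]

def stage_in_cycle(instr, cycle):
    # Instruction `instr` (0-based) occupies stage (cycle - 1 - instr)
    # in clock cycle `cycle` (1-based); "--" means not in the pipeline.
    s = cycle - 1 - instr
    return STAGES[s] if 0 <= s < len(STAGES) else "--"

# Print the timetable for four instructions over seven cycles.
for i in range(4):
    row = [stage_in_cycle(i, c) for c in range(1, 8)]
    print(f"I{i + 1}: " + " ".join(row))
```

Each row is shifted one cycle to the right, so from cycle 4 onward all four stage units are busy and one instruction completes per cycle.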
Architectural Advancements of Microprocessors
• Pipelining:
– For dependent instructions, the control unit inserts stall
(wasted) clock cycles into the pipeline until the dependencies
are resolved.
– Example: if the result of instruction I1 is an operand of
instruction I2, then the operand fetch for I2 has to wait until
I1 completes.
– Branch instructions can also affect the processing pipeline.
– Example: if instruction I2 is a taken branch, the partially
executed instructions I3 and I4 in the pipeline become
useless, as the program branches to a new location.
– In such cases, the entire pipeline must be flushed.
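The cost of a data-dependency stall can be sketched numerically. This is a simplified model of the I1/I2 example above, assuming the result is available only after the producer's Execute stage and no result forwarding; the function name is ours.

```python
# Completion cycle of each instruction in a 4-stage pipeline
# (IF, ID, OF, EX) when some instructions depend on their
# immediate predecessor.
def completion_cycles(deps):
    # deps[i] is True if instruction i needs the result of
    # instruction i - 1 (deps[0] is ignored).
    done = []
    for i, dep in enumerate(deps):
        if i == 0:
            done.append(4)            # I1: IF ID OF EX -> done at cycle 4
        elif dep:
            # Operand fetch must wait for the producer's EX, so OF
            # runs in cycle done[-1] + 1 and EX in done[-1] + 2.
            done.append(done[-1] + 2)
        else:
            done.append(done[-1] + 1) # normal overlap: one cycle later
    return done

print(completion_cycles([False, False, False]))  # no stalls: [4, 5, 6]
print(completion_cycles([False, True, False]))   # I2 stalls:  [4, 6, 7]
```

The dependency costs one extra cycle here; real pipelines reduce this penalty with forwarding paths, which this sketch deliberately omits.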
Architectural Advancements of Microprocessors
• Pipelining:
– The Intel 8086, 80186, 80286 and 80386 have a 2-stage pipeline:
the Bus Interface Unit (instruction fetch) and the Execution
Unit (instruction execution).
– The Intel 80486 has a 5-stage instruction pipeline, whereas the
Pentium 4 has a 20-stage instruction pipeline.
Architectural Advancements of Microprocessors
• Cache Memory:
– Increasing memory throughput will increase the execution
speed of the processor.
– One or more fast buffers, called caches, are placed between
the processor and the main memory to increase the
execution speed.
– The cache memory has a cycle time compatible with the
processor speed.
– The cache memory consists of a few kilobytes of high-speed
static RAM (SRAM), whereas the main memory consists of a
few megabytes to gigabytes of slower but cheaper dynamic
RAM (DRAM).
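The benefit of a small, fast cache in front of slow main memory is usually quantified by the standard average memory access time (AMAT) formula. The timing numbers below are illustrative assumptions, not values from the slides.

```python
# Average memory access time: every access pays the cache hit time,
# and misses additionally pay the main-memory penalty.
def amat(hit_time, miss_rate, miss_penalty):
    return hit_time + miss_rate * miss_penalty

# Assumed example: 1 ns SRAM cache hit, 95% hit rate,
# 60 ns DRAM penalty on a miss.
print(amat(1.0, 0.05, 60.0))  # 4.0 ns on average vs 60 ns without a cache
```

Even a modest hit rate makes the average access time track the SRAM speed rather than the DRAM speed, which is why a few kilobytes of cache can hide a much larger, slower main memory.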
Architectural Advancements of Microprocessors
[Slide content not captured in this transcript.]
Architectural Advancements of Microprocessors
• Cache Memory:
– Pipelined CPUs access memory at multiple points in the
pipeline (e.g., instruction fetch and operand access).
– A different physical cache is therefore used for each of these
access points.
– Modern processors have incorporated both specialized caches
(e.g., separate instruction and data caches) and a cache
hierarchy.
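The average-memory-access-time calculation extends naturally to a two-level cache hierarchy (L1 backed by L2), a common arrangement in modern processors. All timing numbers here are illustrative assumptions.

```python
# AMAT for a two-level hierarchy: an L1 miss goes to L2, and an
# L2 miss goes to main memory.
def amat2(l1_hit, l1_miss_rate, l2_hit, l2_miss_rate, mem_time):
    return l1_hit + l1_miss_rate * (l2_hit + l2_miss_rate * mem_time)

# Assumed example: 1 ns L1 hit with 5% misses, 4 ns L2 hit with
# 20% misses, 60 ns main memory.
print(amat2(1.0, 0.05, 4.0, 0.20, 60.0))  # 1 + 0.05 * (4 + 12) = 1.8 ns
```

Each added level filters out most of the misses from the level above it, which is why deep hierarchies pay off despite each level being slower than the last.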
Lecture Summary
• We have discussed the following topics:
• What is Microprocessor?
• Address Bus, Data Bus, and Control Bus
• Tristate Bus
• Clock Generation
• Connecting Microprocessor to I/O Devices
o I/O Mapped I/O Interface
o Memory Mapped I/O Interface
• Data Transfer Schemes
o Parallel Data Transfer
o Serial Data Transfer
• Architectural Advancements of Microprocessors
o Pipelining
o Cache Memory