Parallelism in Microprocessor

1. Parallelism in microprocessors involves executing multiple operations simultaneously to improve performance. It can be achieved through instruction-level parallelism (ILP), pipelining, and superscalar architectures.
2. Pipelining divides instruction execution into stages handled by separate hardware to allow overlapping execution. Superscalar architectures use multiple functional units in a single pipeline to execute multiple instructions in parallel.
3. Processor-level parallelism uses multiple CPUs to allocate tasks between them for increased throughput and performance.

Uploaded by Towsif
Copyright
© All Rights Reserved

Parallelism in Microprocessor
Mr. Sunanda Das
Parallelism
• Microprocessor performance is largely determined by the degree to which the work of the various units can be organized in parallel.

• Executing two or more operations at the same time is known as parallelism.

• Parallelism is widely used in high-performance computing.


Goals of Parallelism
• The purpose of parallel processing is to speed up the computer's processing capability; in other words, it increases computational speed.

• It increases throughput, i.e., the amount of processing that can be accomplished during a given interval of time.

• It improves the performance of the computer for a given clock speed.


Instruction-level parallelism
• Instruction-level parallelism (ILP) is a measure of how many of the instructions in a computer program can be executed simultaneously.
1. C = a + b
2. D = e + f
3. M = C * D

• Operation 3 depends on the results of operations 1 and 2, so it cannot be calculated until both of them are completed.

• However, operations 1 and 2 do not depend on any other operation, so they can be calculated simultaneously.
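The dependency pattern above can be sketched in software. The sketch below (Python, using the standard-library `concurrent.futures`; the operand values are made up for illustration) runs the two independent operations concurrently and only then computes the dependent one:

```python
from concurrent.futures import ThreadPoolExecutor

a, b, e, f = 1, 2, 3, 4  # example operand values (assumed)

with ThreadPoolExecutor(max_workers=2) as pool:
    # Operations 1 and 2 are independent, so they may run at the same time.
    fut_c = pool.submit(lambda: a + b)   # 1. C = a + b
    fut_d = pool.submit(lambda: e + f)   # 2. D = e + f
    # Operation 3 depends on both results, so it must wait for them.
    M = fut_c.result() * fut_d.result()  # 3. M = C * D

print(M)  # (1 + 2) * (3 + 4) = 21
```

A superscalar or pipelined processor exploits exactly this kind of independence in hardware, without any help from the programmer.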
• We will consider two approaches to achieving ILP:
– Pipelining
– Superscalar architectures
Pipelining
• Fetching instructions from memory is a major bottleneck in instruction execution speed. However, computers can fetch instructions from memory in advance.

• These prefetched instructions are stored in a set of registers called the prefetch buffer.

• Thus, instruction execution is divided into two parts: fetching and actual execution.

• The concept of a pipeline carries this strategy much further: instead of dividing instruction execution into only two parts, it is often divided into many parts, each one handled by a dedicated piece of hardware, all of which can run in parallel.
• Pipelining is a technique for implementing instruction-level parallelism within a single processor.
• It is used in advanced microprocessors, where the microprocessor begins executing a second instruction before the first has been completed.
• That is, several instructions are in the pipeline simultaneously, each at a different processing stage.
Example of Pipelining

Fig: A five-stage pipeline

• The classic RISC pipeline comprises five stages:
– Instruction fetch (IF)
– Instruction decode (ID)
– Execute (EX)
– Memory access (MEM)
– Register write back (WB)
Basic five-stage pipeline

Clock cycle   1    2    3    4    5    6    7
Instr. 1      IF   ID   EX   MEM  WB
Instr. 2           IF   ID   EX   MEM  WB
Instr. 3                IF   ID   EX   MEM  WB
Instr. 4                     IF   ID   EX   MEM
Instr. 5                          IF   ID   EX
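The table follows a simple rule: with no stalls, instruction i (counting from 1) enters IF at cycle i and occupies stage number (cycle − i) thereafter. This small Python sketch (the function name is ours, not from the slides) reproduces the table programmatically:

```python
STAGES = ["IF", "ID", "EX", "MEM", "WB"]

def stage_of(instr, cycle):
    """Stage occupied by instruction `instr` (1-based) at clock `cycle`,
    assuming instruction i enters IF at cycle i and there are no stalls."""
    idx = cycle - instr
    return STAGES[idx] if 0 <= idx < len(STAGES) else None

# Reproduce column 5 of the table: at clock cycle 5 all five stages are busy.
row = {i: stage_of(i, 5) for i in range(1, 6)}
print(row)  # {1: 'WB', 2: 'MEM', 3: 'EX', 4: 'ID', 5: 'IF'}
```

Cycle 5 is the first cycle in which every stage is occupied; from then on the pipeline completes one instruction per clock.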
Dual Pipelines

• If one pipeline is good, then surely two pipelines are better.

• Here a single instruction fetch unit fetches pairs of instructions together and puts each one into its own pipeline, complete with its own ALU, for parallel operation.

• To be able to run in parallel, the two instructions must not conflict over resource usage (e.g., registers), and neither must depend on the result of the other.
Fig: Dual five-stage pipelines with a common instruction fetch unit
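The pairing rule above can be captured as a small predicate. In this sketch (our own representation, not from the slides: each instruction is a destination register plus a set of source registers), two instructions may issue together only if neither reads or writes the other's destination:

```python
def can_issue_together(instr_a, instr_b):
    """instr = (dest, srcs). True if the two instructions neither depend
    on each other's result nor write the same register."""
    dest_a, srcs_a = instr_a
    dest_b, srcs_b = instr_b
    return dest_a not in srcs_b and dest_b not in srcs_a and dest_a != dest_b

# C = a + b and D = e + f are independent, so they can be paired.
print(can_issue_together(("C", {"a", "b"}), ("D", {"e", "f"})))  # True
# M = C * D reads C, so it cannot be paired with C = a + b.
print(can_issue_together(("C", {"a", "b"}), ("M", {"C", "D"})))  # False
```

Real hardware performs this check on the fly; when the pair conflicts, the second instruction simply waits a cycle.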
Superscalar Architectures

• Going to four pipelines is conceivable, but doing so duplicates too much hardware.

• Instead, a different approach is used on high-end CPUs.

• The basic idea is to have just a single pipeline but give it multiple functional units.

• This is a superscalar architecture: it uses more than one functional unit (e.g., more than one ALU), so that more than one instruction can be executed in parallel.

• Implicit in the idea of a superscalar processor is that the S3 stage can issue instructions considerably faster than the S4 stage is able to execute them.
Superscalar Architectures

Fig: A superscalar processor with five functional units.
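To see why multiple functional units help when execution is slower than issue, here is a rough scheduling sketch (entirely our own model, not from the slides): the issue stage hands out at most one instruction per cycle, but an instruction must also wait for a free functional unit of its kind.

```python
def issue_cycles(ops, unit_count, latency):
    """ops: number of instructions of one kind (e.g. multiplies).
    unit_count: functional units available for that kind.
    latency: cycles each unit stays busy per instruction.
    Returns the cycle at which each instruction is issued."""
    busy = []          # finish cycles of in-flight instructions
    cycles = []
    cycle = 0
    for _ in range(ops):
        cycle += 1     # the single issue stage: at most one issue per cycle
        # wait until a functional unit of this kind is free
        while sum(1 for t in busy if t > cycle) >= unit_count:
            cycle += 1
        busy.append(cycle + latency)
        cycles.append(cycle)
    return cycles

# Three 3-cycle operations: one unit serializes them, two units overlap them.
print(issue_cycles(3, unit_count=1, latency=3))  # [1, 4, 7]
print(issue_cycles(3, unit_count=2, latency=3))  # [1, 2, 4]
```

With a single slow unit the issue stage stalls repeatedly; with two units it can keep issuing nearly every cycle, which is exactly the imbalance between S3 and S4 that the superscalar design exploits.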


Processor-Level Parallelism
• Multiprocessing is the use of two or more central processing units (CPUs) within a single computer system.

• The term also refers to the ability of a system to support more than one processor and/or the ability to allocate tasks between them.
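Processor-level parallelism is what Python's standard `multiprocessing` module exposes: work is divided among separate OS processes, which the operating system can place on different CPUs. A minimal sketch, with an illustrative worker function of our own choosing:

```python
from multiprocessing import Pool

def square(n):
    return n * n

if __name__ == "__main__":
    # Two worker processes, which the OS may schedule on two CPUs.
    with Pool(processes=2) as pool:
        results = pool.map(square, range(8))
    print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

Unlike ILP, this form of parallelism is visible to the programmer: the tasks must be explicitly divided among the processors.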
