
Computer System Organization

This document discusses pipelining in CPUs. It explains that pipelining involves arranging hardware elements so that multiple instructions can be executed simultaneously through different stages. Common pipeline stages are instruction fetch, decode, execute, memory access, and write back. Dependencies like structural hazards from shared resources, control hazards from branches, and data hazards can cause stalls in the pipeline. Various techniques like renaming, branch prediction, and bypassing are used to reduce stalls from dependencies and improve pipeline throughput.

Uploaded by

Carl Anderson
Copyright: © All Rights Reserved

PIPELINING

Lesson 2: Execution Stages and Throughput


Dependencies and Data Hazards
TOPICS

 Options to improve performance


 Pipelining
 Application of Pipelining
 Design of a basic pipeline
 Execution in a pipeline processor
 RISC Processor
 Stages in Pipeline
 Dependencies in a pipelined processor
2 OPTIONS TO IMPROVE CPU
PERFORMANCE
 Option 1: Improve the hardware by introducing faster circuits.
 Option 2: Arrange the hardware such that more than one operation can be performed at the same time.
WHAT IS PIPELINING?

 Pipelining is the arrangement of the hardware elements of the CPU so that its overall performance is increased. In a pipelined processor, more than one instruction is executed simultaneously.
REAL LIFE SCENARIO

 Let us see a real-life example that works on the concept of pipelined operation. Consider a water-bottle packaging plant. Let there be 3 stages that a bottle must pass through: Inserting the bottle (I), Filling water in the bottle (F), and Sealing the bottle (S). Consider these as stage 1, stage 2, and stage 3 respectively, and let each stage take 1 minute to complete its operation.

 In a non-pipelined operation, a bottle is first inserted into the plant; after 1 minute it moves to stage 2, where water is filled, and stage 1 sits idle. Similarly, when the bottle moves to stage 3, both stage 1 and stage 2 are idle. In a pipelined operation, by contrast, while one bottle is in stage 2 another can be loaded at stage 1, and when a bottle is in stage 3 there can be one bottle each in stage 1 and stage 2. So, once the pipeline fills, we get a new bottle at the end of stage 3 every minute.
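The timing in the bottle-plant example can be checked with a short sketch. The stage count and 1-minute stage time come from the example above; the function names are made up for illustration:

```python
# Compare non-pipelined vs. pipelined completion time for the
# bottle-packaging example: 3 stages (Insert, Fill, Seal), 1 min each.

def non_pipelined_time(bottles, stages, minutes_per_stage=1):
    # Each bottle passes through all stages before the next one starts.
    return bottles * stages * minutes_per_stage

def pipelined_time(bottles, stages, minutes_per_stage=1):
    # The first bottle takes `stages` minutes; after that, one bottle
    # finishes every minute (steady state).
    return (stages + bottles - 1) * minutes_per_stage

print(non_pipelined_time(10, 3))  # 30 minutes for 10 bottles
print(pipelined_time(10, 3))      # 12 minutes for 10 bottles
```

The gap widens as the number of bottles grows: pipelined time approaches one bottle per minute, regardless of the number of stages.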
DESIGN OF PIPELINE

 In a pipelined processor, a pipeline has two ends, the input end and the output end. Between these ends there are multiple stages/segments, such that the output of one stage is connected to the input of the next stage, and each stage performs a specific operation.
 Interface registers are used to hold the intermediate output between two stages. These interface registers are also called latches or buffers.

 All the stages in the pipeline, along with the interface registers, are controlled by a common clock.
EXECUTION IN PIPELINED PROCESSOR

 The execution sequence of instructions in a pipelined processor can be visualized using a space-time diagram. For example, consider a processor with 4 stages and 2 instructions to be executed. The execution sequence is shown in the following space-time diagrams:
NON OVERLAPPED EXECUTION
OVERLAPPED EXECUTION
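The two space-time diagrams can be sketched in a few lines of Python. The `S1`..`S4` stage labels and the text layout are illustrative choices, not part of the original slides:

```python
# Print a space-time diagram for a pipeline: one row per instruction,
# one column per clock cycle ("--" marks cycles before the instruction
# enters the pipeline).

def space_time(n_instr, n_stages, overlapped):
    rows = []
    for i in range(n_instr):
        # Overlapped: each instruction starts one cycle after the
        # previous one. Non-overlapped: it waits for full completion.
        start = i if overlapped else i * n_stages
        cells = ["--"] * start + [f"S{s + 1}" for s in range(n_stages)]
        rows.append(f"I{i + 1}: " + " ".join(cells))
    return "\n".join(rows)

print("Non-overlapped:")
print(space_time(2, 4, overlapped=False))
print("Overlapped:")
print(space_time(2, 4, overlapped=True))
```

With 4 stages and 2 instructions, non-overlapped execution takes 8 cycles while overlapped execution takes 5.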
RISC PROCESSOR

 A reduced instruction set computer, or RISC (/rɪsk/), is one whose instruction set architecture (ISA) allows it to have fewer cycles per instruction (CPI) than a complex instruction set computer (CISC).

 Common RISC processor families include Alpha, ARC, ARM, AVR, MIPS, PA-RISC, and PIC.
5 PIPELINE STAGES
Following are the 5 stages of the RISC pipeline with their respective operations:
STAGE 1 (INSTRUCTION FETCH)

 In this stage, the CPU reads the instruction from the memory address held in the program counter.
STAGE 2 (INSTRUCTION DECODE)

 In this stage, the instruction is decoded and the register file is accessed to read the values of the registers used in the instruction.
STAGE 3 (INSTRUCTION EXECUTE)

 In this stage, ALU (Arithmetic Logic Unit) operations are performed.


STAGE 4 (MEMORY ACCESS)

 In this stage, memory operands referenced by the instruction are read from or written to memory.
STAGE 5 (WRITE BACK)

 In this stage, the computed/fetched value is written back to the register specified in the instruction.
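The five stages can be laid out as a per-cycle schedule, assuming an ideal pipeline that issues one instruction per cycle with no stalls (a simplification; the hazards discussed next break this ideal):

```python
# Which stage each instruction occupies in each clock cycle of an
# ideal 5-stage RISC pipeline (no stalls).

STAGES = ["IF", "ID", "EX", "MEM", "WB"]

def schedule(n_instr):
    # Map: cycle number -> list of (instruction number, stage name).
    table = {}
    for i in range(n_instr):
        for s, name in enumerate(STAGES):
            # Instruction i (0-based) enters stage s in cycle i + s + 1.
            table.setdefault(i + s + 1, []).append((i + 1, name))
    return table

table = schedule(3)
for cycle in sorted(table):
    entries = ", ".join(f"I{i}:{stage}" for i, stage in table[cycle])
    print(f"cycle {cycle}: {entries}")
```

For 3 instructions the pipeline finishes in 5 + 3 - 1 = 7 cycles; in cycle 3 all of I1 (EX), I2 (ID), and I3 (IF) are active at once.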
DEPENDENCIES IN A PIPELINED
PROCESSOR
 Structural Dependency
 Control Dependency
 Data Dependency

 These dependencies may introduce stalls in the pipeline.


WHAT IS STALL?

 A stall is a cycle in the pipeline without new input.


1. STRUCTURAL DEPENDENCY

 This dependency arises due to resource conflicts in the pipeline. A resource conflict is a situation in which more than one instruction tries to access the same resource in the same cycle. A resource can be a register, memory, or the ALU.
HOW TO SOLVE STRUCTURAL
DEPENDENCY?
 To avoid this problem, we have to keep an instruction waiting until the required resource (memory, in our case) becomes available. This wait introduces stalls in the pipeline.
SOLUTION FOR STRUCTURAL
DEPENDENCY
 To minimize structural dependency stalls in the pipeline, we use a hardware mechanism called renaming.

 Renaming: In renaming, memory is divided into two independent modules used to store instructions and data separately, called Code Memory (CM) and Data Memory (DM) respectively. CM contains all the instructions and DM contains all the operands required by the instructions.
2. CONTROL DEPENDENCY (BRANCH
HAZARDS)
 This type of dependency occurs with control-transfer instructions such as BRANCH, CALL, JMP, etc. In many instruction set architectures, the processor does not yet know the target address of these instructions when it needs to insert the next instruction into the pipeline. As a result, unwanted instructions are fed into the pipeline.
HOW TO SOLVE CONTROL DEPENDENCY?

 We need to stop instruction fetch until the target address of the branch instruction is known. This can be implemented by introducing delay slots (stall cycles) until the target address is available.
SOLUTION FOR CONTROL DEPENDENCY

 Branch prediction is a method through which stalls due to control dependency can be eliminated. A prediction of whether the branch will be taken is made in the first stage; when the prediction is correct, the branch penalty is zero.

 Branch penalty : The number of stalls introduced during the branch operations in
the pipelined processor is known as branch penalty.
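The effect of the branch penalty on throughput can be sketched with simple CPI arithmetic. The branch frequency (0.2), penalty (3 cycles), and prediction accuracy (90%) below are made-up example numbers, not measurements:

```python
# Rough effective-CPI model for control hazards: a mispredicted branch
# pays the full branch penalty; a correctly predicted branch pays zero.

def effective_cpi(base_cpi, branch_freq, penalty, prediction_accuracy):
    misprediction_rate = 1.0 - prediction_accuracy
    return base_cpi + branch_freq * misprediction_rate * penalty

# Stalling on every branch (no prediction) vs. a 90%-accurate predictor.
print(effective_cpi(1.0, 0.2, 3, prediction_accuracy=0.0))
print(effective_cpi(1.0, 0.2, 3, prediction_accuracy=0.9))
```

With these example numbers, always stalling raises CPI from 1.0 to 1.6, while 90%-accurate prediction keeps it near 1.06, which is why modern pipelines invest heavily in branch predictors.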
3. DATA DEPENDENCY (DATA HAZARD)

 Data hazards occur when instructions that exhibit data dependence modify data in different stages of the pipeline. Hazards cause delays in the pipeline. There are mainly three types of data hazards:
 1) RAW (Read After Write) [flow/true data dependency]
 2) WAR (Write After Read) [anti data dependency]
 3) WAW (Write After Write) [output data dependency]
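The three hazard types can be classified mechanically from the register sets of two instructions. This is a sketch under a simplified model (each instruction reduced to the sets of registers it reads and writes); the function name and representation are made up for illustration:

```python
# Classify the data hazard(s) between two instructions, where each
# instruction is modeled as (set_of_read_regs, set_of_written_regs).

def classify_hazard(first, second):
    r1, w1 = first
    r2, w2 = second
    hazards = []
    if w1 & r2:
        hazards.append("RAW")  # second reads what first writes (true dep.)
    if r1 & w2:
        hazards.append("WAR")  # second writes what first reads (anti dep.)
    if w1 & w2:
        hazards.append("WAW")  # both write the same register (output dep.)
    return hazards

# ADD R1, R2, R3  followed by  SUB R4, R1, R5:
# SUB reads R1, which ADD writes, so this is a RAW hazard.
print(classify_hazard(({"R2", "R3"}, {"R1"}), ({"R1", "R5"}, {"R4"})))  # ['RAW']
```

Of the three, only RAW is a true data dependency; WAR and WAW are name dependencies that register renaming in out-of-order processors can remove.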
