
ISSUES IN

INSTRUCTION
PIPELINING
23I274 VISHWESHVAR S
23I275 VISWANATH V M
23I276 YASODHARAN N S
24I431 ARUNKUMARAN B
24I432 JAYASURYA M
INSTRUCTION PIPELINING
• Instruction pipelining is a technique where the
execution of instructions is divided into
sequential stages, allowing multiple
instructions to be processed simultaneously.
• Improves processor throughput by completing one
instruction per clock cycle once the pipeline is full.
• Enhances CPU efficiency by reducing idle time
and maximizing resource utilization.
Basic Concept of Instruction Pipelining
• An instruction passes through multiple stages such as Fetch,
Decode, Execute, Memory Access, and Write Back.
• While one instruction is in the "Fetch" stage, the next can be
in the "Decode" stage, and so on.
• This overlap enables instruction-level parallelism.
• Without pipelining: a single instruction may take 5
cycles to complete, so the processor finishes only one
instruction every 5 cycles.
• With pipelining: after the initial pipeline fill, the processor
completes one instruction every cycle, achieving a speed-up
close to the number of pipeline stages.
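The speed-up claim above can be checked with a small Python sketch (the instruction count and stage count are illustrative, not from the slides):

```python
# Sketch: cycle counts for n instructions on a k-stage pipeline.

def cycles_without_pipelining(n, k):
    # Each instruction occupies all k stages before the next starts.
    return n * k

def cycles_with_pipelining(n, k):
    # k cycles to fill the pipeline, then one instruction per cycle.
    return k + (n - 1)

n, k = 100, 5
serial = cycles_without_pipelining(n, k)   # 500 cycles
pipelined = cycles_with_pipelining(n, k)   # 104 cycles
speedup = serial / pipelined               # ~4.81, close to k = 5
print(serial, pipelined, round(speedup, 2))
```

As n grows, the speed-up approaches k, which is why deeper pipelines promise (but do not always deliver) higher throughput.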
Four-Segment Instruction Pipeline
Applications of Pipelining
• Widely used in microprocessors (e.g., Intel, AMD CPUs).
• Found in graphics processing units to enhance rendering
performance.
• Essential in digital signal processing for audio and video
processing.
• Utilized in network processors for fast packet handling.

Issues in Instruction Pipelining

• Resource conflicts (structural hazards)
• Data dependency conflicts (data hazards)
• Branching (control hazards)
STRUCTURAL HAZARDS IN
INSTRUCTION PIPELINES
Structural Hazards
• A structural hazard occurs in a pipelined processor
when the hardware resources required to execute an
instruction are not available due to simultaneous
resource demand by multiple instructions.
• In simpler terms, it happens when multiple instructions
in the pipeline need the same resource at the same
time, causing a conflict.
• This situation forces one or more instructions to be
delayed, which can reduce the performance of the
processor.
Hazard Types:

• Data Hazards
• Control Hazards
• Structural Hazards
Examples of resource conflicts
in a pipeline
Functional Unit Conflicts
Two instructions need the same functional unit, such as an
Arithmetic Logic Unit (ALU) or a multiplier, at the same time.
Instruction 1: ADD R1, R2, R3
Instruction 2: SUB R4, R5, R6
Memory Port Conflicts
Multiple instructions try to access memory at the same time, but
the processor has only one memory port for load and store
operations.
Instruction 1: LOAD R1, 0(R2)
Instruction 2: STORE 0(R3), R4
• Register File Conflicts
• Two instructions try to read or write the same register at
the same time.
• Instruction 1: ADD R1, R2, R3 (reads R2 and R3, writes R1)
• Instruction 2: SUB R1, R4, R5 (reads R4 and R5, writes R1)
• Pipeline Stage Conflicts
• Multiple instructions compete for access to a shared
pipeline stage.
• Instruction 1: ADD R1, R2, R3 (uses the ALU in the EX stage)
• Instruction 2: MUL R4, R5, R6 (uses the multiplier in the
EX stage)
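The memory-port conflict above can be detected mechanically. The sketch below (hypothetical helper and resource names, not from any real pipeline) flags two instructions that request the same single-ported resource in one cycle:

```python
# Sketch: detecting a structural hazard on a shared resource.
# requests: list of (instruction, resource) pairs for one cycle.

def find_conflicts(requests):
    seen = {}
    conflicts = []
    for instr, resource in requests:
        if resource in seen:
            # Two instructions want the same resource this cycle.
            conflicts.append((seen[resource], instr, resource))
        else:
            seen[resource] = instr
    return conflicts

cycle_requests = [("LOAD R1, 0(R2)", "mem_port"),
                  ("STORE 0(R3), R4", "mem_port")]
# Both need the single memory port, so one must stall.
print(find_conflicts(cycle_requests))
```

Real hazard-detection logic is wired into the issue stage in hardware; this only mimics the decision it makes.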
Methods to Minimize Structural
Hazards
• Increase Resource Availability
• Pipeline Stage Duplication
• Instruction Scheduling (Out-of-Order Execution)
• Pipeline Stalls (or NOPs)
• Multiple Execution Units
• Resource Arbitration (Multiplexing)
• Dedicated Pipeline Stages for Critical Resources
• Use of Caches
• Branch Prediction
• Instruction-Level Parallelism (ILP)
Data Hazards and
Their Impact
Data hazards occur when an instruction depends
on the result of a previous instruction.

Types of Data Hazards:


1) Read After Write (RAW):
Occurs when an instruction reads a result that a
previous instruction has not yet written.
Instruction 1: ADD R1, R2, R3
Instruction 2: SUB R4, R1, R5
2) Write After Write (WAW):
Occurs when two instructions write to the same
register or memory location, and the writes must
complete in program order.
Instruction 1: ADD R1, R2, R3
Instruction 2: SUB R1, R4, R5
3) Write After Read (WAR):
Occurs when an instruction writes to a register or
memory location that a previous instruction still
needs to read.
Instruction 1: SUB R4, R1, R5
Instruction 2: ADD R1, R2, R3
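The three hazard types reduce to comparing each instruction's destination and source registers. A minimal Python sketch (hypothetical helper name, register sets taken from the examples above):

```python
# Sketch: classify the data hazard between two instructions,
# given each one's destination register and source registers.

def classify_hazard(first, second):
    """first/second: (dest, sources) tuples, in program order."""
    d1, s1 = first
    d2, s2 = second
    hazards = []
    if d1 in s2:        # second reads what first writes
        hazards.append("RAW")
    if d1 == d2:        # both write the same register
        hazards.append("WAW")
    if d2 in s1:        # second writes what first still reads
        hazards.append("WAR")
    return hazards

# ADD R1, R2, R3 then SUB R4, R1, R5
print(classify_hazard(("R1", {"R2", "R3"}), ("R4", {"R1", "R5"})))  # ['RAW']
# SUB R4, R1, R5 then ADD R1, R2, R3
print(classify_hazard(("R4", {"R1", "R5"}), ("R1", {"R2", "R3"})))  # ['WAR']
```

Note that WAW and WAR are "name" hazards: register renaming can remove them, while RAW is a true data dependence.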
How to Handle Data Hazards:
1. Data Forwarding (Bypassing):
Data forwarding resolves data hazards by passing
the result of one instruction directly to a dependent
instruction, without waiting for the result to be
written back to the register file.
Advantages:
• Reduces pipeline stalls.
• Maintains high throughput.
2. Pipeline Stalling:
Stalling involves inserting no-operation (NOP)
instructions into the pipeline to delay the execution of
dependent instructions until the required data becomes
available.
Advantages:
• Simple to implement in hardware.
• Guarantees correctness.
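The trade-off between the two techniques can be made concrete by counting stall cycles for a back-to-back RAW pair. This sketch assumes a classic five-stage pipeline (IF ID EX MEM WB) in which the register file writes in the first half of a cycle and reads in the second half; the helper name is hypothetical:

```python
# Sketch: stall cycles for an instruction that immediately consumes
# the previous instruction's result in a 5-stage pipeline.

def stalls(producer_is_load, forwarding):
    if forwarding:
        # ALU results forward from the EX/MEM latch with no stall;
        # a load's data is only ready after MEM (load-use hazard).
        return 1 if producer_is_load else 0
    # Without forwarding the consumer must wait until WB.
    return 2

# ADD R1, R2, R3 followed by SUB R4, R1, R5
print(stalls(producer_is_load=False, forwarding=True))   # 0
print(stalls(producer_is_load=False, forwarding=False))  # 2
```

Exact stall counts depend on the pipeline's latch and register-file timing, so treat these numbers as representative rather than universal.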
Comparison of Data Forwarding & Pipeline Stalling:

Feature        | Data Forwarding               | Pipeline Stalling
Performance    | High                          | Low
Complexity     | Requires additional hardware  | Simple to implement
Latency        | Reduces instruction latency   | Increases instruction latency
Hardware Cost  | High                          | Low
Control Hazards and
Branching Issues
Challenges and Solutions in Computer
Architecture
Introduction to Control Hazards

• Control hazards arise in pipelined processors due to
branch (or control) instructions such as conditional jumps
and loops.
• These instructions disrupt the sequential flow of the
pipeline because the next instruction's address depends
on the outcome of the branch, which may not be known
immediately.
Challenges with Control Hazards Due to Branch
Instructions
1. Pipeline Stalling: The pipeline may need to pause until the
branch decision is resolved, leading to lost cycles and reduced
throughput.
2. Wasted Work: Instructions fetched after the branch may need to
be discarded if the branch prediction is incorrect.

Role of Branch Prediction

• Branch prediction guesses the outcome of branch instructions
before they are resolved.
• Improves instruction flow by predicting the path to follow.
• Reduces stalls and improves performance in pipelined architectures.
Branch Prediction Algorithms:
• Static Prediction: Assumes a fixed outcome for branches.
• Dynamic Prediction: Uses historical information (stored in
branch history tables or pattern tables) to predict branches.
Branch Target Buffer (BTB):
• Caches target addresses of recently executed branches to
quickly fetch the next instruction.
Limitations of Branch Prediction:
• Incorrect predictions lead to wasted cycles.
• High complexity in designing accurate branch predictors.
• Limited by hardware constraints and prediction algorithms.
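One of the simplest dynamic schemes mentioned above is a 2-bit saturating counter, which tolerates a single deviation before changing its prediction. A minimal sketch (a real predictor keeps a table of such counters indexed by branch address):

```python
# Sketch: a 2-bit saturating-counter dynamic branch predictor.

class TwoBitPredictor:
    def __init__(self):
        self.state = 0  # 0,1 = predict not-taken; 2,3 = predict taken

    def predict(self):
        return self.state >= 2

    def update(self, taken):
        if taken:
            self.state = min(3, self.state + 1)
        else:
            self.state = max(0, self.state - 1)

p = TwoBitPredictor()
outcomes = [True] * 6          # a branch that is always taken
correct = 0
for taken in outcomes:
    correct += (p.predict() == taken)
    p.update(taken)
print(correct, "of", len(outcomes), "predicted correctly")  # 4 of 6
```

The first two mispredictions are the counter "warming up"; once saturated, a mostly-taken branch is predicted correctly every time.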
Pipeline Flushing Due to Incorrect Branch
Prediction
• Occurs when the prediction is wrong, and the pipeline
must discard instructions.
• Leads to performance penalties as the pipeline is
reloaded with correct instructions.
• Affects overall throughput of the processor.
• Control hazards are critical challenges in modern
processors.
• Branch prediction significantly reduces their impact but
has limitations.
• Effective handling of pipeline flushing is essential for
optimized performance.
Solutions and Mitigation Strategies for Pipeline
Issues
• In modern processors, pipelines improve
performance by allowing multiple instructions to be
processed in parallel.
• However, pipelined architectures introduce several
challenges such as hazards, data dependencies, and
resource conflicts. Various techniques have been
developed to address these issues, each balancing
pipeline complexity against performance.

Overview of techniques: instruction reordering,
pipeline interlocks, and speculative execution.

1. Instruction Reordering
Problem:
• In a pipeline, instructions may depend on the results of previous
instructions. If these dependencies are not handled properly, they
can lead to pipeline stalls and inefficient performance.
Solution:
• Instruction reordering rearranges instructions so that
independent operations execute while dependent ones wait for
their operands. By keeping the pipeline filled with useful work,
the CPU reduces idle time.
• Real-World Example: Superscalar processors
2. Pipeline Interlocks
Problem:
An instruction may require data that is not yet available,
typically because it depends on the result of another
instruction that has not finished processing; executing it
anyway would produce a wrong result.
• Solution: Pipeline interlocks are hardware mechanisms
that automatically detect such data hazards and stall the
pipeline until the required data is available. These
interlocks introduce delays but prevent incorrect
execution.
• Real-World Example: MIPS processors use pipeline
interlocks to handle load-use data hazards.
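The load-use check performed by such an interlock is essentially a comparison in the decode stage. A sketch of that decision (hypothetical instruction encoding, modeled loosely on a MIPS-style five-stage pipeline):

```python
# Sketch: the load-use interlock check made in the ID stage:
# stall if the instruction currently in EX is a load whose
# destination matches one of the decoding instruction's sources.

def needs_interlock(instr_in_ex, instr_in_id):
    """Each instruction: dict with 'op', 'dest', 'sources'."""
    return (instr_in_ex["op"] == "LOAD"
            and instr_in_ex["dest"] in instr_in_id["sources"])

load = {"op": "LOAD", "dest": "R1", "sources": {"R2"}}
add  = {"op": "ADD",  "dest": "R3", "sources": {"R1", "R4"}}
print(needs_interlock(load, add))  # True: stall one cycle
```

If the check fires, the hardware holds the dependent instruction in ID for one cycle and lets a bubble flow down the pipeline instead.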
3. Speculative Execution
Problem:
Instruction pipelines may encounter delays when they
must wait for control-flow decisions, such as branches, to
be resolved.
• Solution: Speculative execution runs instructions along
the predicted control-flow path before the actual decision
is made; if the prediction was wrong, the speculative
results are discarded.
• Real-World Example: Modern CPUs, such as Intel's
Skylake and AMD's Ryzen architectures, combine
speculative execution with branch prediction to keep the
pipeline busy.
4. Out-of-Order Execution (OoOE)
Problem:
If instructions have dependencies, they cannot execute
until the necessary data is available, leading to pipeline
stalls.
• Solution: Out-of-order execution allows the processor
to execute independent instructions while waiting for
dependent instructions to complete.
• Real-World Example: Intel's Core and AMD's Ryzen
processors use out-of-order execution to maximize
instruction throughput and minimize stalls, ensuring
better performance even in complex pipelines.
5. Branch Prediction and Branch Target Buffers
Problem: Branch instructions can cause pipeline stalls
due to uncertainty about which path the program will
take.
• Solution: Branch prediction predicts the direction of a
branch before it is resolved, allowing the processor to
fetch instructions along the predicted path.
• Real-World Example: Intel processors implement
sophisticated branch prediction algorithms combined
with branch target buffers, reducing the performance
penalty associated with mispredicted branches.
Balancing Pipeline Complexity with Performance

• Complexity: Techniques such as out-of-order execution, speculative
execution, and branch prediction require additional hardware and
logic, which increases the complexity and power consumption of the
CPU.
• Performance: Although these techniques improve performance by
reducing stalls and making better use of the pipeline, they also
introduce challenges such as misprediction penalties and increased
design and manufacturing costs due to the extra hardware required.

Real-World Examples of How Modern CPUs Address Pipeline Issues
1. Intel Core Architectures
2. AMD Ryzen
