
Unit - V

Sources of Power Dissipation


Importance of low power design
• Power is considered the most important constraint in embedded systems.
Low power design is essential in:
• high-performance systems (reason: excessive power dissipation reduces reliability and increases the cost of cooling and packaging)
• portable systems (reason: battery technology cannot keep pace with the demand for devices that are light yet run for a long time between recharges)
Sources of Power Consumption
Trends in Power Management
Power dissipation in CMOS circuits

Reduce power per transition:
• Reduced voltage operation – voltage scaling
• Capacitance minimization – device sizing
Reduce the number of transitions:
• Glitch elimination
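The first-order dynamic power estimate behind these techniques is P_dyn = α · C_L · V_DD² · f_clk. The short Python sketch below (added here for illustration; all parameter values are assumed, not taken from the slides) shows why voltage scaling is so effective: halving V_DD cuts dynamic power to one quarter.

def dynamic_power(alpha, c_load, vdd, f_clk):
    """alpha: switching activity, c_load: switched capacitance (F),
    vdd: supply voltage (V), f_clk: clock frequency (Hz)."""
    return alpha * c_load * vdd ** 2 * f_clk

# Illustrative numbers only: halving VDD cuts dynamic power to one quarter.
print(dynamic_power(0.1, 10e-15, 1.0, 1e9))   # ~1.0e-6 W per node
print(dynamic_power(0.1, 10e-15, 0.5, 1e9))   # ~0.25e-6 W per node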
Glitch Power Reduction
Techniques to reduce Short Circuit Power

• Short-circuit power is consumed during every transition and increases with the input transition (rise/fall) time.
• Reducing it requires that a gate's output transition not be faster than its input transition (faster gates can consume more short-circuit power).
• Increasing the output load capacitance reduces a gate's short-circuit power.
• Scaling down the supply voltage with respect to the threshold voltages reduces short-circuit power.
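As a rough illustration of these dependencies, the sketch below uses Veendrick's classic first-order estimate for an unloaded, symmetric inverter, P_sc ≈ (β/12)·(V_DD − 2·V_T)³·(τ/T); the parameter values are purely illustrative and not from the slides.

def short_circuit_power(beta, vdd, vt, t_rise, t_period):
    """Veendrick's first-order estimate for an unloaded symmetric inverter.
    beta: transistor gain factor (A/V^2), vt: threshold voltage (V),
    t_rise: input rise/fall time (s), t_period: switching period (s)."""
    if vdd <= 2 * vt:
        return 0.0          # no overlap current when VDD < Vtn + |Vtp|
    return (beta / 12.0) * (vdd - 2 * vt) ** 3 * (t_rise / t_period)

# Slower inputs (larger t_rise) and higher VDD relative to VT increase P_sc.
print(short_circuit_power(100e-6, 1.2, 0.4, 50e-12, 1e-9))
print(short_circuit_power(100e-6, 0.7, 0.4, 50e-12, 1e-9))   # VDD < 2*VT -> 0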
Leakage Power
[Figure: cross-section of an NMOS transistor annotated with the leakage current paths: gate leakage IG, subthreshold current Isub, punchthrough current IPT, gate-induced drain leakage IGIDL, and reverse-biased junction current ID.]
Leakage Current Components
• Subthreshold conduction, Isub
• Reverse bias pn junction conduction, ID
• Gate induced drain leakage, IGIDL due to
tunneling at the gate-drain overlap
• Drain source punchthrough, IPT due to short
channel and high drain-source voltage
• Gate tunneling, IG through thin oxide
Reducing Leakage Power
• Leakage power as a fraction of the total power
increases as clock frequency drops. Turning supply
off in unused parts can save power.
• For a gate it is a small fraction of the total power; it
can be significant for very large circuits.
• Scaling down feature sizes requires lowering the threshold voltage, which increases leakage power; leakage roughly doubles with each technology shrink.
• Multiple-threshold devices are used to reduce
leakage power.
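A minimal sketch of the subthreshold component (usually the dominant term), assuming the standard exponential dependence I_sub ∝ exp(−V_th / (n·kT/q)); the I0 and slope-factor values are illustrative only. It shows why lowering the threshold voltage raises leakage so quickly and why multi-threshold (multi-Vth) libraries help.

import math

def subthreshold_leakage(i0, vth, n=1.5, v_thermal=0.026):
    """First-order subthreshold leakage at VGS = 0: I = I0 * exp(-Vth / (n * kT/q)).
    i0 and the slope factor n are illustrative, technology-dependent values."""
    return i0 * math.exp(-vth / (n * v_thermal))

# Lowering the threshold voltage by ~100 mV raises leakage by roughly an order
# of magnitude; multi-Vth design uses high-Vth cells off the critical path.
for vth in (0.45, 0.35, 0.25):
    print(vth, subthreshold_leakage(1e-6, vth))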
Algorithmic Level Design – Switching Activity Reduction
• Minimizing the switching activity at a high level is one way to reduce the power dissipation of digital processors.
• One method to minimize signal switching at the algorithmic level is to use an appropriate coding for the signals rather than straight binary code.
• The table below compares the 3-bit binary and Gray code representations of each decimal value:

Decimal   Binary   Gray
0         000      000
1         001      001
2         010      011
3         011      010
4         100      110
5         101      111
6         110      101
7         111      100
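The Python sketch below (an added illustration, not part of the original slides) counts the total bit flips for one full cycle of a 3-bit counter in binary versus Gray code; Gray code flips exactly one bit per step.

def gray(n):
    """Standard reflected binary Gray code of n."""
    return n ^ (n >> 1)

def count_transitions(codes, width=3):
    """Total number of bit flips when stepping through 'codes' cyclically."""
    total = 0
    for a, b in zip(codes, codes[1:] + codes[:1]):
        total += bin((a ^ b) & ((1 << width) - 1)).count("1")
    return total

values = list(range(8))                               # 3-bit counter sequence 0..7
print(count_transitions(values))                      # binary counting: 14 bit flips
print(count_transitions([gray(v) for v in values]))   # Gray counting: 8 bit flips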
State Encoding for a Counter
Logic Level Optimization
Recap: Logic Restructuring
▪ Logic restructuring: changing the topology of a logic
network to reduce transitions

AND gate: P0→1 = P0 · P1 = (1 − PA·PB) · (PA·PB)

Example with all primary input signal probabilities equal to 0.5:
• Chain implementation F = ((A·B)·C)·D: node activities are 3/16 = 0.188 at A·B, 7/64 = 0.109 at the second AND, and 15/256 at the output.
• Tree implementation F = (A·B)·(C·D): both internal nodes have activity 3/16 = 0.188 and the output has 15/256.
🡺 Chain implementation has a lower overall switching activity than tree
implementation for random inputs
▪ BUT: Ignores glitching effects
Source: Timmermann, 2007
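The sketch below (added for illustration) reproduces these numbers, assuming independent primary inputs with signal probability 0.5 and using the AND-gate activity formula above.

def and_activity(p_a, p_b):
    """0->1 transition probability of a 2-input AND with independent inputs."""
    p1 = p_a * p_b
    return (1 - p1) * p1

def and_output_prob(p_a, p_b):
    return p_a * p_b

p = 0.5   # all primary inputs assumed random

# Chain: W = A.B, X = W.C, F = X.D
w, x = and_output_prob(p, p), and_output_prob(p, p) * p
chain = and_activity(p, p) + and_activity(w, p) + and_activity(x, p)

# Tree: W = A.B, Z = C.D, F = W.Z
w = z = and_output_prob(p, p)
tree = and_activity(p, p) + and_activity(p, p) + and_activity(w, z)

print(chain)   # 3/16 + 7/64 + 15/256 ~= 0.355
print(tree)    # 3/16 + 3/16 + 15/256 ~= 0.434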
Glitch
• An important architecture-level measure to
reduce switching activity is based on delay
balancing and the reduction of glitches.
• In general, if all input signals of a gate change
simultaneously, no glitching occurs.
• But a dynamic hazard or glitch can occur if input
signals change at different times.
• Thus, a node can exhibit multiple transitions in a single clock cycle before settling to the correct logic level.
Glitch - example
• Glitches occur primarily due to
a mismatch or imbalance in
the path lengths in the logic
network.
• Such a mismatch in path
length results in a mismatch of
signal timing with respect to
the primary inputs.
• As an example, consider the simple parity network shown in Fig. (a).
• Assuming that all XOR blocks
have the same delay, it can be
seen that the network in Fig.
(a) will suffer from glitching
due to the wide disparity
between the arrival times of
the input signals for the gates.
• In the network shown in Fig. (b), on the other
hand, all arrival times are identical because the
delay paths are balanced.
• Such redesign can significantly reduce the
glitching transitions, and consequently, the
dynamic power dissipation in complex multi-level
networks.
• Also notice that the tree structure shown in Fig.
(b) results in smaller overall propagation delay.
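The following sketch (an added illustration, not from the slides) models the two parity structures with a simplified unit-delay XOR gate and shows that the unbalanced chain produces spurious output transitions while the balanced tree does not; the delay model and waveform representation are simplifying assumptions.

def value_at(wave, t):
    """Value of a waveform (time-ordered list of (time, value) events) at time t."""
    v = wave[0][1]
    for time, val in wave:
        if time <= t:
            v = val
    return v

def xor_gate(in_a, in_b, delay=1):
    """Ideal 2-input XOR; an input change at t produces an output event at t + delay."""
    out = []
    for t in sorted({t for t, _ in in_a} | {t for t, _ in in_b}):
        v = value_at(in_a, t) ^ value_at(in_b, t)
        when = t if t == 0 else t + delay      # the event at t = 0 just sets the initial value
        if not out or v != out[-1][1]:
            out.append((when, v))
    return out

def transitions(wave):
    return sum(1 for i in range(1, len(wave)) if wave[i][1] != wave[i - 1][1])

# All four inputs switch from 0 to 1 simultaneously at t = 1.
a = b = c = d = [(0, 0), (1, 1)]

# Chain (unbalanced): F = ((A ^ B) ^ C) ^ D  -> output glitches 0 -> 1 -> 0
f_chain = xor_gate(xor_gate(xor_gate(a, b), c), d)
# Tree (balanced):    F = (A ^ B) ^ (C ^ D)  -> output never moves
f_tree = xor_gate(xor_gate(a, b), xor_gate(c, d))

print(transitions(f_chain))   # 2 spurious transitions (a glitch)
print(transitions(f_tree))    # 0 transitions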
Recap: Input Ordering
AND gate: P0→1 = (1 − PA·PB) · (PA·PB)

Example for F = A·B·C with signal probabilities PA = 0.5, PB = 0.2, PC = 0.1:
• Ordering X = A·B, then F = X·C: activity at the internal node X is (1 − 0.5×0.2)·(0.5×0.2) = 0.09
• Ordering X = B·C, then F = X·A: activity at X is (1 − 0.2×0.1)·(0.2×0.1) = 0.0196

▪ It is beneficial to postpone the introduction of signals with a high transition rate (i.e., signals whose signal probability is close to 0.5).

Source: Irwin, 2000


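The same numbers can be reproduced with the activity formula above (an added illustration; the probabilities are those of the example). Note that the output node has the same activity in both orderings, so only the internal node differs.

def and_activity(p_a, p_b):
    """0->1 transition probability of a 2-input AND with independent inputs."""
    p1 = p_a * p_b
    return (1 - p1) * p1

p_A, p_B, p_C = 0.5, 0.2, 0.1     # signal probabilities from the example

print(and_activity(p_A, p_B))     # internal node of (A.B).C  -> 0.09
print(and_activity(p_B, p_C))     # internal node of (B.C).A  -> 0.0196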
Pin Swapping
Gate Sizing
• Gate sizing is the process of
changing the CMOS gate
size, smaller or larger, so
that the total power is
minimized without violating
timing constraints.
• The goal is to find the gate
size that consumes the least
power. In general, reducing
the size of a gate leads to a
decrease in the power and
an increase in the delay.
Control logic power minimization
Power optimization of the controller circuitry is very important.
The datapath of a design can be selectively turned OFF when not in use, while the controller is always active.
A controller is often realized as a state machine, and its power can be addressed through:
1. Storage element design
2. State assignment
3. FSM partitioning
State Assignment
Codes with smaller Hamming distance are allocated to pairs of states that have higher transition probabilities between them.
This minimizes the number of transitions in the next-state and output logic.
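One way to compare candidate state assignments is to score each by the expected number of state-register bit flips per clock, i.e. the sum over state transitions of (transition probability × Hamming distance). The sketch below is an added illustration with a hypothetical 4-state FSM and made-up transition probabilities.

def hamming(a, b):
    return bin(a ^ b).count("1")

def expected_flips(encoding, transition_probs):
    """encoding: state name -> binary code; transition_probs: (s, t) -> probability."""
    return sum(p * hamming(encoding[s], encoding[t])
               for (s, t), p in transition_probs.items())

# Hypothetical 4-state FSM where the S0 <-> S1 transition dominates the traffic.
probs = {("S0", "S1"): 0.4, ("S1", "S0"): 0.4, ("S1", "S2"): 0.1, ("S2", "S3"): 0.1}

binary_enc = {"S0": 0b00, "S1": 0b01, "S2": 0b10, "S3": 0b11}
better_enc = {"S0": 0b00, "S1": 0b01, "S2": 0b11, "S3": 0b10}   # Gray-like ordering

print(expected_flips(binary_enc, probs))   # 0.4*1 + 0.4*1 + 0.1*2 + 0.1*1 = 1.1
print(expected_flips(better_enc, probs))   # 0.4*1 + 0.4*1 + 0.1*1 + 0.1*1 = 1.0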
FSM partitioning
An FSM can be partitioned into two or more sub-FSMs.
At any point in time, only one sub-FSM is active.
The first technique is to stop the clocks to the idle sub-FSMs (clock gating), so they consume no dynamic power.
The second approach is power gating: sleep transistors turn OFF the power supply to the idle sub-FSMs, eliminating their leakage power as well as their dynamic power. Waking up a power-gated sub-FSM, however, takes time and energy.
Clock Gating - FSM partitioning
• Clock gating ANDs a clock signal with an enable to turn off the clock to idle blocks. This is highly effective since the clock has a high activity factor; gating the clock to the input registers prevents them from switching and thus stops all activity in the fan-out combinational logic.
• When a large block of logic is turned off, the clock can be gated early in the clock tree, turning off a portion of the global network. The clock network has an activity factor of 1 and a high capacitance, so this saves significant power.
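A back-of-the-envelope sketch (all numbers assumed, added for illustration): because a gated clock net toggles only while its enable is high, the clock-network power scales roughly with the fraction of cycles the block is enabled.

def clock_power(c_clk, vdd, f_clk, enabled_fraction=1.0):
    """Dynamic power of a clock net that toggles only when its gate enable is high."""
    return enabled_fraction * c_clk * vdd ** 2 * f_clk

print(clock_power(20e-12, 1.0, 1e9))                         # always on: 20 mW
print(clock_power(20e-12, 1.0, 1e9, enabled_fraction=0.3))   # gated 70% of the time: 6 mW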
Power Gating
Architectural level power minimization
Algorithmic level power
minimization
In many processors, the cost of an addition/subtraction operation may differ from that of a logical operation.
Thus, to check whether x is equal to y, one may first perform a subtraction and then test the zero flag in the status register.
On the other hand, if the logical operation takes less power, x may be compared directly with y using a comparison instruction.
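A tiny cost-model sketch of this choice (the per-operation energy figures are hypothetical, added purely for illustration):

energy = {"sub": 1.0, "cmp": 0.6, "flag_test": 0.1}    # arbitrary energy units

def equality_energy(use_logical_compare):
    if use_logical_compare:
        return energy["cmp"] + energy["flag_test"]     # compare, then test flag
    return energy["sub"] + energy["flag_test"]         # subtract, then test zero flag

print(equality_energy(False))   # 1.1 units via subtraction
print(equality_energy(True))    # 0.7 units via a cheaper compare instruction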
1. MEMORY REFERENCE
This is very important, as memory is normally off-chip from the processor.
A large number of memory accesses means a large amount of activity on the address and data bus lines.
The memory access pattern is also important.
If the access pattern is sequential, only the least significant bits of the address change, whereas for random access most of the address bits may switch, causing higher power dissipation.
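The sketch below (added for illustration; the address width and access sequences are assumed) counts address-bus bit flips for a sequential sweep versus scattered accesses, showing why sequential access patterns dissipate less bus power.

import random

def bus_transitions(addresses, width=16):
    """Total number of address-bus bit flips for a sequence of memory accesses."""
    mask = (1 << width) - 1
    return sum(bin((a ^ b) & mask).count("1")
               for a, b in zip(addresses, addresses[1:]))

random.seed(0)
sequential = list(range(0x1000, 0x1400))                       # 1024 consecutive words
scattered  = [random.randrange(0, 1 << 16) for _ in range(1024)]

print(bus_transitions(sequential))   # low: mostly only the LSBs toggle
print(bus_transitions(scattered))    # high: on average about half the bits toggle per access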
2. Presence of cache memory
A cache can be used to reduce both execution time and power dissipation by exploiting locality of reference.
3. Recomputation vs memory
load/store
Power minimization techniques at the algorithmic level normally attempt to reduce the number of arithmetic operations.
To reduce the operation count, a frequently needed computation can be performed only once and its result stored in memory.
Later, whenever the result is required, it is fetched from memory; the arithmetic is saved, but the extra memory accesses themselves consume power, so the trade-off must be evaluated.
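A coarse cost-model sketch of this trade-off (the energy figures are hypothetical, added for illustration): recomputation wins for cheap expressions, while storing and reloading only pays off when the computation itself is expensive.

E_ALU_OP = 1.0      # energy per arithmetic operation (arbitrary units)
E_MEM_ACCESS = 8.0  # energy per off-chip memory access (typically much larger)

def recompute_energy(ops_per_result, uses):
    return uses * ops_per_result * E_ALU_OP

def memoize_energy(ops_per_result, uses):
    # compute once, store once, then load on each of the 'uses' occasions
    return ops_per_result * E_ALU_OP + E_MEM_ACCESS + uses * E_MEM_ACCESS

for ops in (2, 20):
    print(ops, recompute_energy(ops, uses=10), memoize_energy(ops, uses=10))
# Cheap computations are better recomputed; only expensive ones amortize the
# extra memory traffic.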
4. Compiler Optimization
1. Common subexpression elimination
2. Reducing memory access
5. Number Representation
1. Fixed vs Floating point representation
2. Sign - magnitude vs Two’s complement
3. Precision of operation
System level power management
• All the modules in a system need not be active at the same time.
• Thus, system-level power management requires a policy to selectively turn subsystems ON or OFF.
• The following issues must be considered at the system level:
1. Going to a low-power state takes time.
2. Avoiding a power-down mode wastes power that could have been saved.
3. Frequent power-down can affect system performance.
The technique commonly employed is predictive analysis.
