
Low Power VLSI Circuits and Systems

UNIT-V

10 Marks Questions:

1. a) Discuss various fabrication methods for getting multiple threshold voltages.

Ans: Fabrication of Multiple Threshold Voltages:


The present-day process technology allows the fabrication of metal–oxide–semiconductor field-effect transistors (MOSFETs) of multiple threshold voltages on a single chip. This has
opened up the scope for using dual-Vt CMOS circuits to realize high-performance and low-
power CMOS circuits. The basic idea is to use high-Vt transistors to reduce leakage current and
low-Vt transistors to achieve high performance.

Multiple Channel Doping


The most commonly used technique for realizing multiple-Vt MOSFETs is to use different channel-doping densities, based on the following expression for the threshold voltage:

VT = Vfb + 2φB + √(2·εsi·q·Na·(2φB)) / Cox

where Vfb is the flat-band voltage, Na is the doping density in the substrate, Cox is the gate oxide capacitance per unit area, and φB = (kT/q)·ln(Na/ni) is the bulk potential.


Based on this expression, the variation of threshold voltage with channel-doping
density is shown in Fig. 1.
Fig. 1 Variation of threshold voltage with doping concentration
A higher doping density results in a higher threshold voltage. However, to fabricate two
types of transistors with different threshold voltages, two additional masks are required
compared to the conventional single-Vt fabrication process.
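As a quick numerical illustration (the doping values and ni = 1.5 × 10^10 cm^-3 are assumed here, not taken from the original text): at room temperature, φB = 0.0259·ln(Na/ni) is about 0.41 V for Na = 10^17 cm^-3 and about 0.47 V for Na = 10^18 cm^-3. Both the 2φB term and the √(2·εsi·q·Na·(2φB))/Cox term therefore grow with Na, which is why a higher doping density gives a higher threshold voltage.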

Multiple Oxide CMOS


The expression for the threshold voltage shows a strong dependence on the value of Cox, the gate oxide capacitance per unit area. Different gate capacitances can be realized by using different gate oxide thicknesses. The variation of threshold voltage with oxide thickness (tox) for a 0.25-μm device is shown in Fig. 2.

Fig. 2 Variation of threshold voltage with gate oxide thickness

Dual-Vt MOSFETs can be realized by depositing two different oxide thicknesses. A lower gate capacitance due to a higher oxide thickness not only reduces the subthreshold leakage current but also provides the following benefits:
a. Reduced gate oxide tunnelling, because the oxide tunnelling current decreases exponentially with the increase in oxide thickness.
b. Reduced dynamic power dissipation, due to the reduced gate capacitance that results from the higher gate oxide thickness.
Although the increase in gate oxide thickness has the above benefits, it also has some adverse effects due to an increase in the short-channel effect. For short-channel devices, as the gate oxide thickness increases, the aspect ratio (AR), defined by AR = lateral dimension/vertical dimension, decreases. Here, εsi and εox are the silicon and oxide permittivities, and L, tox, Wdm, and Xj are the channel length, gate oxide thickness, depletion depth, and junction depth, respectively, which determine the lateral and vertical dimensions.

Immunity to the short-channel effect decreases as the AR value reduces. Figure 3 shows the channel lengths required for different oxide thicknesses to maintain a constant AR.

Fig. 3 Variation of threshold voltage with oxide thickness for constant AR (aspect ratio)
Multiple Channel Length

In the case of short-channel devices, the threshold voltage decreases as the channel length is
reduced, which is known as Vth roll-off. This phenomenon can be exploited to realize transistors
of dual threshold voltages. The variation of the threshold voltage with channel length is shown in
Fig. 4.
Fig. 4 Variation of threshold voltage with channel length

Multiple Body Bias


The application of reverse body bias to the well-to-source junction leads to an increase in
the threshold voltage due to the widening of the bulk depletion region, which is known as body
effect. This effect can be utilized to realize MOSFETs having multiple threshold voltages.
However, this necessitates separate body biases to be applied to different nMOS transistors,
which means the transistors cannot share the same well. Therefore, costly triple-well
technologies are to be used for this purpose. Another alternative is to use silicon-on-insulator (SOI) technology, where the devices are isolated naturally.

b) Briefly explain the various approaches to minimize standby leakage power dissipation.

Ans: There are three approaches to minimize standby leakage power dissipation. They are

i. Transistor Stacking
ii. VTCMOS Approach
iii. MTCMOS Approach

i. Transistor Stacking
When more than one transistor is in series in a CMOS circuit, the leakage current has a strong dependence on the number of turned-off transistors. This is known as the stack effect. The mechanism of the stack effect can be best understood by considering the case when all the transistors in a stack are turned off.
Figure 8 shows four nMOS devices of a four-input NAND gate in a stack. The source and drain voltages of the MOS transistors obtained by simulation are shown in the figure. These voltages are due to a small drain current passing through the circuit. The source voltages of the upper three transistors in the stack have positive values. Assuming all gate voltages are equal to zero, the gate-to-source voltages of these three transistors are negative. Moreover, the drain-to-source potentials of the MOS transistors are also reduced.
Fig. 8 Source voltages of the nMOS transistors in the stack
The following three mechanisms come into play to reduce the leakage current (see the subthreshold-current expression below):
i. Due to the exponential dependence of the subthreshold current on the gate-to-source voltage, the leakage current is greatly reduced because of the negative gate-to-source voltages.
ii. The leakage current is also reduced due to the body effect, because the bodies of all three transistors are reverse-biased with respect to their sources.
iii. As the drain-to-source voltages of all the transistors are reduced, the subthreshold current due to the drain-induced barrier lowering (DIBL) effect is also lower.
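For reference (this expression is not part of the original answer, but it shows why the above mechanisms act so strongly), the standard subthreshold-current model can be written as Isub = I0·exp((Vgs − Vt)/(n·vT))·(1 − exp(−Vds/vT)), where vT = kT/q is the thermal voltage and n is the subthreshold swing coefficient. A negative Vgs and a higher Vt (raised by the body effect) both shrink the exponential term, and a smaller Vds both reduces the last factor and weakens the DIBL-induced lowering of Vt.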
ii. VTCMOS Approach
VTCMOS circuits make use of the body effect to reduce the subthreshold leakage current when the circuit is in the standby mode. We know that the threshold voltage is a function of the voltage difference between the source and the substrate. In a conventional CMOS circuit, the substrate terminals of all the n-channel metal–oxide–semiconductor (nMOS) transistors are connected to the ground potential and the substrate terminals of all the p-channel metal–oxide–semiconductor (pMOS) transistors are connected to Vdd, as shown in Fig. 1. This ensures that the source and drain diffusion regions always remain reverse-biased with respect to the substrate and that the threshold voltages of the transistors are not significantly influenced by the body effect.

Fig. 1 Physical structure of a CMOS inverter: a) without body bias, b) with body bias
On the other hand, in the case of VTCMOS circuits, the substrate bias voltages of
nMOS and pMOS transistors are controlled with the help of a substrate bias control circuit, as
shown in Fig.

Fig.  Substrate bias control circuit


When the circuit is operating in the active mode, the substrate bias voltages for the nMOS and pMOS transistors are VBn = 0 V and VBp = Vdd, respectively. Assuming Vdd = 1 V, the corresponding threshold voltages for the nMOS and pMOS transistors are Vtn = 0.2 V and Vtp = -0.2 V, respectively. On the other hand, when the circuit is in the standby mode, the substrate bias control circuit generates a lower substrate bias voltage of VBn = -VB for the nMOS transistors and a higher substrate bias voltage VBp = Vdd + VB for the pMOS transistors. This leads to an increase in the threshold voltages of the nMOS and pMOS transistors to Vtn = 0.5 V and Vtp = -0.5 V, respectively. This, in turn, leads to a substantial reduction in the subthreshold leakage currents because of the exponential dependence of the subthreshold leakage current on the threshold voltage.
Although the VTCMOS technique is a very effective technique for controlling threshold
voltage and reducing subthreshold leakage current, it requires a twin-well or triple-well CMOS
fabrication technology so that different substrate bias voltages can be applied to different parts
of the chip.
iii. MTCMOS Approach
The MTCMOS approach was introduced to implement high-speed and low-power circuits operating from a 1-V power supply. In this approach, MOSFETs with two different threshold voltages are used on a single chip. It uses two operational modes, active and sleep, for efficient power management. A basic MTCMOS circuit scheme is shown in Fig. 1.
Fig. 1  MTCMOS basic structure

The realization of a two-input NAND gate is shown in the figure. The CMOS logic gate is realized with transistors of a low threshold voltage of about 0.2–0.3 V. Instead of connecting the power terminals of the gate directly to the power supply lines Vdd and GND, they are connected to 'virtual' power supply lines (VDDV and GNDV). The real and virtual power supply lines are linked by the MOS transistors Q1 and Q2. These transistors have a high threshold voltage in the range 0.5–0.6 V and serve as sleep control transistors. The sleep control signal SL and its complement are connected to Q1 and Q2, respectively, and are used for active/sleep mode control. In the active mode, when SL is set LOW, both Q1 and Q2 are turned ON, connecting the real power lines to VDDV and GNDV. In this mode, the NAND gate operates at a high speed corresponding to the low threshold voltage of 0.2 V, which is relatively low compared to the supply voltage of 1.0 V. In the sleep mode, SL is set HIGH to turn both Q1 and Q2 OFF, thereby isolating the real supply lines from VDDV and GNDV. As the sleep transistors have a high threshold voltage (0.6 V), the leakage current flowing through these two transistors is significantly smaller in this mode. As a consequence, the leakage power consumption during the standby period can be dramatically reduced by sleep control.

2. a) Distinguish between conventional charging (used in static CMOS circuits) and adiabatic charging of a load capacitance, and state the condition to minimize dynamic power dissipation.
Ans: To understand the basic concept of adiabatic circuits, first consider the conventional
charging of a capacitor C through a resistor R, followed by adiabatic charging. Figure 1a consists
of a resistor R and capacitor C in series and a supply voltage Vdd. As the switch is closed at time
t = 0, current starts flowing. Initially, at time t = 0, the capacitor does not have any charge and
therefore the voltage across the capacitor is 0 V and the voltage across the resistor is Vdd. So, a
current of Vdd/R flows through the circuit. As current flows through the circuit, charge
accumulates in the capacitor and voltage builds up. As the time progresses, the voltage across the
resistor decreases with a consequent reduction in current through the circuit. At any instant of
time, Vdd = IR + Q/C, where Q is the charge stored in the capacitor. Figure 1b shows how conventional charging of a capacitor leads to the dissipation of an energy of (1/2)·CL·Vdd² in the resistor.

Fig. 1 a) Charging of a capacitor C through a resistor R using a power supply; b) as charging progresses, the current decreases and the charge increases
Now let us consider the adiabatic charging of a capacitor as shown in Fig. 2.

Fig. 2 Adiabatic charging of a capacitor

Here, a capacitor C is charged through a resistor R using a constant current I instead of a fixed voltage Vdd. Here also it is assumed that initially, at time t = 0, there is no charge on the capacitor. The voltage across the capacitor, Vc(t), is a function of time and can be represented by the following expression:

Vc(t) = (1/C) ∫ I dt = I·t/C

Assuming the current is constant, the energy dissipated by the resistor in the time interval 0 to T is given by

Ediss = I²·R·T = (R·C/T)·C·Vc(T)²

where Vc(T) is the voltage across the capacitor at time T.


The following conclusions can be drawn from the above equation (a numerical illustration is given below):
• For T > 2RC, the dissipated energy is smaller than the (1/2)·CL·Vdd² dissipated in the case of conventional charging from a supply voltage of Vdd.
• The energy dissipated can be made arbitrarily small by increasing the time T: the longer the charging time T, the smaller the dissipated energy.
• Contrary to conventional charging, where the energy dissipation is independent of the value of the resistor R, here the dissipated energy is proportional to R. This opens up the possibility of reducing the energy dissipation by decreasing the value of R.
• In the case of adiabatic charging, the charge moves from the power supply slowly but efficiently.
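As a rough numerical illustration (the component values here are assumed for illustration, not taken from the original text): take CL = 1 pF, Vdd = 1 V, and R = 1 kΩ, so that RC = 1 ns. Conventional charging dissipates (1/2)·CL·Vdd² = 0.5 pJ regardless of R. Adiabatic charging over T = 20 ns (i.e., T = 20RC), with Vc(T) = 1 V, dissipates (RC/T)·CL·Vc(T)² = (1/20)·1 pJ = 0.05 pJ, a tenfold reduction, and stretching T further reduces the dissipation proportionally.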
Adiabatic switching circuits require a non-standard power supply with a time-varying voltage, called a 'pulsed power supply', whose output waveform is shown in Fig. 3. It has four phases: precharge, hold, recover, and wait. In the precharge phase, the load capacitor is adiabatically charged; in the hold phase, the necessary computation is performed; in the recover phase, the charge is transferred back to the power supply; and finally the wait phase precedes the start of a new precharge phase.

Fig. 3  Output waveform of a pulsed power supply

b) Realize a two-input OR/NOR gate using Positive Feedback Adiabatic Logic (PFAL) and Efficient Charge Recovery Logic (ECRL).

Follow running notes

3. a) Explain battery characteristics. How is the battery capacity affected by the rate capacity and recovery effects?
Ans: The important characteristics of a battery are given as follows:
Voc Open-circuit voltage of a fully charged battery under no-load condition.

Vcut Cutoff voltage of a battery is the voltage at which a battery is considered to be fully
discharged.
Theoretical Capacity The theoretical capacity of a battery is based on the amount of energy
stored in the battery, and is an upper bound on the total energy that can be extracted in practice.
Standard Capacity The standard capacity of a battery is the energy that can be extracted from it
when it is discharged under standard load conditions as specified by the manufacturer.
Actual Capacity The actual capacity of a battery is the amount of energy that the battery
delivers under a given load, and is usually used (along with battery life) as a metric to judge the
battery efficiency of the load system.
Units of Battery Capacity The energy stored in a battery, called the battery capacity, is
measured in either watt-hours (Wh), kilowatt-hours (kWh), or ampere-hours (Ah). The most
common measure of battery capacity is Ah, defined as the number of hours for which a battery
can provide a current equal to the discharge rate at the nominal voltage of the battery.
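As a simple worked illustration (the numbers are assumed, not from the original text): a battery rated at 1200 mAh can nominally supply 120 mA for 10 h, or 1200 mA for 1 h; if its nominal voltage is 3.6 V, the stored energy is about 1200 mAh × 3.6 V ≈ 4.3 Wh. In practice, the capacity actually delivered depends on the discharge rate, as described by the rate capacity effect below.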

Rate Capacity Effect


It is an established fact that the charging/discharging rates affect the rated battery
capacity. If the battery is being discharged very quickly (i.e., the discharge current is high), then
the amount of energy that can be extracted from the battery is reduced and the battery capacity
becomes lower. This is because the reactants do not have enough time to move to the positions where the reaction must occur. As a result, only a fraction of the total reactants is converted to other forms, and therefore the energy available is reduced.
Alternatively, if the battery is discharged at a very slow rate using a low current, more
energy can be extracted from the battery and the battery capacity becomes higher. Therefore, the
capacity of a battery should take into consideration the charging/discharging rate. A common
way to specify battery capacity is to provide the battery capacity as a function of the time which
it takes to fully discharge the battery.

Recovery Effect

The battery voltage recovers during idle periods. The rate at which positively charged ions are supplied depends on the concentration of positively charged ions near the cathode, and this concentration improves when the battery is kept idle. This is known as the recovery effect.

b) Explain various approaches for battery-driven system design.


Ans: Battery-driven system design involves the use of one or more of the following techniques:
Voltage and Frequency Scaling The power dissipation has a square-law dependence on the supply voltage and a linear dependence on the frequency. Depending upon the performance requirement, the supply voltage Vdd and the operating frequency of the circuit driven by the battery can be dynamically adjusted to optimize energy consumption from the battery (a simple numerical illustration is given below). Information from a battery model is used to vary the clock frequency dynamically at run time based on the workload characteristics. If the workload is higher, a higher voltage and clock frequency are used; for a lower workload, the voltage and clock frequency can be lowered so that the battery is discharged at a lower rate. This, in turn, improves the actual capacity of the battery.
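As an illustration of the square-law and linear dependence (the voltage and frequency values are assumed, not from the original text): with dynamic power P = α·C·Vdd²·f, scaling a processor from Vdd = 1.2 V, f = 300 MHz down to Vdd = 0.8 V, f = 200 MHz reduces the dynamic power by a factor of (0.8/1.2)² × (200/300) ≈ 0.44 × 0.67 ≈ 0.30, i.e., to roughly 30% of its original value, while the battery is discharged at a correspondingly lower rate.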
Dynamic Power Management The state of charge of the battery can be used to frame a policy
that controls the operation state of the system.
Battery-Aware Task Scheduling The current discharge profile is tailored to meet battery
characteristics to maximize the actual battery capacity.

As an illustration, consider a test set consisting of five variable-current load profiles, P1–P5, shown in the figure. To obtain P1–P4, four currents of fixed durations (1011 mA for 10 min, 814 mA for 15 min, 518 mA for 20 min, and 222 mA for 15 min) were arranged in different orders. For each of these four profiles, the total length and delivered charge are 60 min and 36,010 mA-min, respectively. The results show that, from the battery utilization perspective, P1 is the best sequence and P2 is the worst.

Static Battery Scheduling These are essentially open-loop approaches, such as serial
scheduling, random scheduling, round-robin scheduling, where scheduling is done without
checking the condition of a battery.
Terminal Voltage-Based Battery Scheduling The scheduling algorithm makes use of the state
of charge of the battery.

4. Explain machine-independent software optimizations.


Ans: If we analyze the operation of a general-purpose computer system, we observe that the processor runs different types of software, which can be broadly categorized into two types: systems software and application software. Systems software consists of the operating system and compilers, whereas application software consists of programs related to different applications developed by users. Machine-independent software optimization approaches are based on one or more of the following ideas:
1. Lesser Executable Code
2. Avoid Redundant Computation
3. Fewer Jumps
4. Optimize the Common Case

5. Locality
6. Exploit Memory Hierarchy
7. Parallelize

Compilation For Low Power:


The optimizer phase of a compiler sits between the front end and the code generator, and it works on the intermediate code. It can analyse control and data flows in the intermediate code and perform appropriate transformations to improve it. Some of the compiler optimization approaches are mentioned below:
Inlining Small Functions Inline expansion, or inlining, is a manual or compiler optimization that replaces a function call site with the body of the callee. This optimization may lead to reduced execution time and better utilization of memory space at runtime. Normally, when a function is invoked, control is transferred to its definition by a branch or call instruction. With inlining, control drops through directly to the code of the function, without a branch or call instruction. Inlining improves performance in several ways:
• It removes the cost of the function call and return instructions, as well as any other prologue and epilogue code injected into every function by the compiler.
• Eliminating branches and keeping code that is executed close together in memory improves instruction cache performance by improving the locality of reference.
• Once inlining has been performed, additional intraprocedural optimizations become possible on the inlined function body.
Ex.
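The original example is not reproduced here; a minimal C sketch of the idea (the function names are illustrative only):

Before inlining:
static int square(int x) { return x * x; }

int sum_of_squares(const int *a, int n) {
    int s = 0;
    for (int i = 0; i < n; i++)
        s += square(a[i]);   /* call/return overhead on every iteration */
    return s;
}

After inlining:
int sum_of_squares(const int *a, int n) {
    int s = 0;
    for (int i = 0; i < n; i++)
        s += a[i] * a[i];    /* body of square() substituted at the call site */
    return s;
}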

Code Hoisting In many situations, some computations inside the loop body are recomputed in each pass of the loop. Significant savings in computation time and reduction in power dissipation can be obtained by moving such computations outside the loop body. This is known as code hoisting.
Ex:
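The original before/after listings are not available; a small C sketch of the transformation (encode_char is an assumed helper, and strlen is used purely as an example of a repeated computation):

#include <string.h>
void encode_char(char c);   /* assumed helper, for illustration only */

Before hoisting:
void encode_string(const char *s) {
    for (size_t i = 0; i < strlen(s); i++)   /* strlen(s) is re-evaluated on every iteration */
        encode_char(s[i]);
}

After hoisting:
void encode_string(const char *s) {
    size_t len = strlen(s);                  /* hoisted: computed once, outside the loop */
    for (size_t i = 0; i < len; i++)
        encode_char(s[i]);
}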

Dead-Store Elimination A dead store is an assignment to a variable whose value is never subsequently read, for example a variable that is overwritten before being used. Dead-store elimination is a compiler optimization that removes such assignments, since they do not affect the program results, thereby saving execution time and power.
Ex:
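A minimal C sketch (the variable names are illustrative):

Before dead-store elimination:
int combine(int a, int b) {
    int t = a + b;      /* dead store: this value is never read */
    t = a * b;          /* t is overwritten before any use of the previous value */
    return t;
}

After dead-store elimination:
int combine(int a, int b) {
    return a * b;
}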
Dead-Code Elimination Most advanced compilers have options to activate dead-code elimination, sometimes at varying levels. A lower level might only remove instructions that can never be executed. A higher level might also avoid reserving space for unused variables. A still higher level might determine instructions or functions that serve no purpose and eliminate them.
Ex:
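A small C sketch (hypothetical code, for illustration):

Before dead-code elimination:
int scale(int x) {
    int unused = x * 100;   /* result is never used */
    if (0) {                /* condition is always false: the body can never execute */
        x = x + 1;
    }
    return 2 * x;
}

After dead-code elimination:
int scale(int x) {
    return 2 * x;
}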

Dynamic Dead-Code Elimination Dead code is normally considered dead unconditionally. Therefore, it is reasonable to attempt to remove it through dead-code elimination at compile time. In practice, however, it is also common for code sections to be dead or unreachable only under certain conditions that are not known at compile time. Such conditions may be imposed by different runtime environments, which may be configurable to include or exclude certain features depending on user preferences, rendering the unused code portions useless in a particular scenario (a sketch is given below).
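A hedged C sketch of conditionally dead code (the feature flag and function names are assumptions for illustration):

/* 'logging_enabled' is a runtime configuration flag. In deployments that never
 * enable logging, the branch body is dead, but the compiler cannot prove this
 * at compile time; it can only be eliminated dynamically for that scenario. */
extern int logging_enabled;
void log_event(const char *msg);    /* assumed helper */

void handle_request(const char *msg) {
    if (logging_enabled)
        log_event(msg);             /* dead when logging is never turned on */
    /* ... remaining request handling ... */
}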

Loop-Invariant Computations Calculations that do not change between loop iterations are
called loop-invariant computations. These computations can be moved outside the loop to
improve performance.
Ex:
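A minimal C sketch (the array and variable names are illustrative):

Before loop-invariant code motion:
void scale_array(int *a, int n, int x, int y) {
    for (int i = 0; i < n; i++)
        a[i] = a[i] * (x + y);   /* (x + y) does not change between iterations */
}

After loop-invariant code motion:
void scale_array(int *a, int n, int x, int y) {
    int k = x + y;               /* invariant computation moved out of the loop */
    for (int i = 0; i < n; i++)
        a[i] = a[i] * k;
}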


Common Sub-Expression Elimination This is a compiler optimization technique that finds redundant expression evaluations and replaces them with a single computation. This saves the time overhead of evaluating the same expression more than once.
Ex:
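A small C sketch (the variable names are illustrative):

Before common sub-expression elimination:
void evaluate(int a, int b, int *p, int *q) {
    *p = (a * b) + a;    /* a * b computed here ... */
    *q = (a * b) + b;    /* ... and computed again here */
}

After common sub-expression elimination:
void evaluate(int a, int b, int *p, int *q) {
    int t = a * b;       /* the common sub-expression is evaluated once */
    *p = t + a;
    *q = t + b;
}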


Loop Unrolling Unrolling duplicates the body of the loop multiple times in order to decrease the number of times the loop condition is tested and the number of jumps, which hurt performance by impairing the instruction pipeline. Unrolling leads to a shorter execution time for the program because fewer jumps and fewer loop-manipulation instructions are executed.
Ex:
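A minimal C sketch, unrolling by a factor of four (for simplicity, n is assumed to be a multiple of four):

Before loop unrolling:
void clear(int *a, int n) {
    for (int i = 0; i < n; i++)       /* loop test and jump executed n times */
        a[i] = 0;
}

After loop unrolling (by 4):
void clear(int *a, int n) {           /* assumes n is a multiple of 4 */
    for (int i = 0; i < n; i += 4) {  /* loop test and jump executed about n/4 times */
        a[i]     = 0;
        a[i + 1] = 0;
        a[i + 2] = 0;
        a[i + 3] = 0;
    }
}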

2 Marks Questions:

1. Distinguish between standby and run-time leakage power dissipation.


Ans: Standby leakage power dissipation takes place when the circuit is not in use, i.e., the inputs do not change and the clock is not applied. On the other hand, runtime leakage power dissipation takes place when the circuit is being used.

2. What is the energy density of a battery? How does it vary for different technologies?
Ans: Energy density is the amount of energy stored in a given system or region of space per unit volume (or per unit mass). The common method of evaluating the energy density of batteries is to consider the amount of watt-hours per kilogram (Wh/kg). The energy density of the commonly used battery technologies that have been developed over the years to meet the increasing demand for smaller, lighter, and higher-capacity rechargeable batteries for portable devices is shown in Fig. 1.

Fig. 1 Energy density of the commonly used batteries used in portable devices

3. What is meant by widening gap between the processor and battery technology?
Ans: While the power and performance demands of processors have been growing rapidly, battery technology has not been able to maintain the same rate of growth in energy density, leading to a widening "battery gap", as shown in Fig. 1.
Fig. 1 Widening battery gap

4. Draw the general structure of power-aware software prefetching.


Ans: Software prefetching is a technique of inserting prefetch instructions into the code for memory references that are likely to result in cache misses. This is done either by the programmer or by the compiler. The general structure of power-aware software prefetching (PASPR) is shown in the figure.

5. List out loop optimizations?


Ans: 1. Inlining Small Functions
2. Code Hoisting
3. Dead-Store Elimination
4. Dead-Code Elimination
5. Dynamic Dead-Code Elimination
6. Loop-Invariant Computations
7. Common Sub-Expression Elimination
8. Loop Unrolling
