
TESTABILITY FOR VLSI

MEL ZG531 / ES ZG532

BITS Pilani
Pilani|Dubai|Goa|Hyderabad


Course Overview

Scope and Objective of the Course

• Understand the basics of designing a testable chip to increase the yield.

• Understand different possible defects and fault models related to the chip design.

• Understand design rule checks and test pattern generation for combinational and sequential circuits.

• Fundamentals of DFT (Design for Testability)



Prescribed & Reference Books

T1. “Essentials of Electronic Testing for Digital, Memory and Mixed-Signal VLSI Circuits”,
Michael L. Bushnell and Vishwani D. Agrawal,
Kluwer Academic Publishers

R1. “Digital Systems Testing and Testable Design”,
Miron Abramovici, Melvin A. Breuer and Arthur D. Friedman


Content Structure

• Introduction to ASIC Flow, VLSI testing and DFT

• Fault Modeling and Logic Simulation

• Automated Test Equipment basics

• Fault Simulations and Testability Measures



Content Structure cont..
• Stuck-at, IDDQ and Delay Fault models

• Scan based DFT

• Basics of Combinational and Sequential ATPG

• ATPG Algorithms

• Boundary Scan Test technique

• Logic and Memory BIST

• Memory Faults and Testing



MID-SEMESTER TEST

Syllabus for Mid-Semester Test (Closed/Open Book):
Topics covered in weeks 1-7 (approximately).



Comprehensive Exam

Syllabus for Comprehensive Exam (Open Book):
Topics covered in weeks 1-16 (approximately).



Evaluation Scheme

EC No. | Evaluation Component & Type of Examination | Duration | Weightage
EC-1   | Mid-Semester Test (Closed/Open Book)*      | 2 hrs    | 30%
       | Assignment                                 |          | 20%
EC-2   | Comprehensive Exam (Open Book)*            | 3 hrs    | 50%
       | Quiz                                       |          |



INTRODUCTION

System On Chip (SoC)

[SoC block diagram: platform and digital IPs, ROM/RAM/Flash memories, analog blocks, glue logic, clock & reset, pad ring / IO subsystem (IOSS) and test control unit (TCU)]

➢ Digital Logic
➢ Memories
➢ IOs
➢ Analog blocks
Design Structures

• Combinational Logic
• Sequential Logic
• Inputs & Outputs (I/O)
• Memories
• Analog
Combinational Logic

[Example gate-level circuit: inputs A3, A2, A1, A0; internal nets n1, n2, n3; output Y]
Sequential Logic

[Three flip-flops Q[0], Q[1], Q[2] clocked by CLK and connected as a feedback shift register]

Step | Q
0 | 111
1 | 011
2 | 101
3 | 010
4 | 001
5 | 100
6 | 110
7 | 111 (repeats)
Memory
ANALOG Block

[Figure: a DAC with filter driving an analog system output, and an ADC receiving an analog system input]
Inputs & Outputs
Digital Basics

• Combinational logic

• Sequential logic
Digital Basics

• Logic gates
– AND, OR , NOT, XOR..
– NAND, NOR..

• Logic gates build with transistors

• Truth tables
Digital Basics

• Flip flops
HDL - Verilog

• Hardware description language


• ASIC Design
• RTL and Gate level
• Module definition
• Module instantiation
HDL - Verilog

RTL Netlist
ASIC / SOC Design FLOW

[Flow: Design Architecture → RTL Design/Verification → Logic Synthesis → DFT/Scan Insertion (SCAN, JTAG, BIST) → Incremental Synthesis → Floor Planning, P&R → Timing Analysis → ATPG → ATPG and Pattern Simulation → Pattern Conversion → Tapeout]
What is next?

• Post TAPEOUT?

• The Fabrication of IC

• IC manufacturing process (Video)


– https://www.youtube.com/watch?v=bor0qLifjz4
Chip Fabrication Process
Chip Fabrication Process (Contd.)
Is the manufacturing process ERROR free?

May not be ..

The process of manufacturing is not 100% error free.


There are defects in silicon which contribute towards
the errors introduced in the physical device.
Defects In Silicon
▪ IC manufacturing process is defect-prone

[Examples of silicon defects: opens and shorts]

Defects : Particles
▪ Caused by impurities
• Shorts for additive material
• Opens for subtractive material
How to find defective parts?

TESTING

❑ Testing is a process of sorting DUTs to determine their acceptability

[STIMULUS → DUT → RESPONSE → DECISION: PASS / FAIL]

❑ Quality – Defective Parts Per Million (DPPM)
TESTING & Diagnosis

Functional Test (Logic Verification)
❑ Ascertain that the design performs its specified behavior

Manufacturing Test
❑ Exercise the system and analyze the response to ascertain whether it behaves correctly

Diagnosis
❑ Locate the cause of misbehavior after the incorrect behavior is detected
Functional Test
• Does the chip simulate correctly?
– Usually done at HDL level
– Verification engineers write test bench for HDL
• Can’t test all cases
• Look for corner cases
• Try to break logic design
• Ex: 64-bit adder
– Test all combinations of corner cases as inputs
• 0, 1, 2, …, 2^63 − 1
Functional vs. Structural ATPG
Carry Circuit
Functional vs. Structural
• Functional ATPG – generate complete set of tests for circuit input-
output combinations
– 129 inputs, 65 outputs:
– 2^129 = 680,564,733,841,876,926,926,749,214,863,536,422,912 patterns
– Using 1 GHz ATE, would take 2.15 × 10^22 years
• Structural test:
– No redundant adder hardware, 64 bit slices
– Each with 27 faults (using fault equivalence)
– At most 64 x 27 = 1728 faults (tests)
– Takes 0.000001728 s on 1 GHz ATE
• Designer gives small set of functional tests – augment with
structural tests to boost coverage to 98+ %
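The arithmetic behind these two bullet lists can be reproduced in a few lines. A minimal sketch in Python; the 129 inputs, 27 collapsed faults per bit slice and the 1 GHz ATE rate come from the slide, the rest is plain arithmetic:

# Exhaustive functional test of the 64-bit adder carry circuit:
# 129 inputs (two 64-bit operands plus carry-in) -> 2^129 patterns.
patterns = 2 ** 129
ate_rate_hz = 1e9                                  # 1 GHz ATE
seconds = patterns / ate_rate_hz
years = seconds / (3600 * 24 * 365)
print(f"Exhaustive: {patterns:.3e} patterns, {years:.2e} years")   # about 2.15e22 years

# Structural test: 64 bit slices x 27 collapsed faults, one test per fault.
tests = 64 * 27
print(f"Structural: {tests} tests, {tests / ate_rate_hz:.9f} s")    # 1728 tests, 0.000001728 s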
Manufacturing Test
• A speck of dust on a wafer is sufficient to kill a chip
• Yield of any chip is < 100%
– Must test chips after manufacturing before
delivery to customers to only ship good parts
• Manufacturing testers are
very expensive
– Minimize time on tester
– Careful selection of
test vectors
Silicon Debug
• Test the first chips back from fabrication
– If you are lucky, they work the first time
– If not…
• Logic bugs vs. electrical failures
– Most chip failures are logic bugs from inadequate simulation
– Some are electrical failures
• Crosstalk
• Dynamic nodes: leakage, charge sharing
– A few are tool or methodology failures (e.g. DRC)
• Fix the bugs and fabricate a corrected chip
Shmoo plots
• How to diagnose failures?
– Hard to access chips
• Pico probes
• Electron beam
• Laser voltage probing
• Built-in self-test
• Shmoo plots
– Vary voltage, frequency
– Look for cause of
electrical failures
Cost of Test escapes
• Testing is one of the most expensive parts of
chips
– Logic verification accounts for > 50% of design
effort for many chips
– Debug time after fabrication has enormous
opportunity cost
– Shipping defective parts can sink a company

• Example: Intel FDIV bug


– Logic error not caught until > 1M units shipped
– Recall cost $450M (!!!)
Cost of Time-to-Market

❑ Delay in reaching the market → reduced revenue over the product lifetime

❑ Missed market opportunities, especially for products with short life cycles

❑ Moore's Law → IC gate count doubles every ~18 months


What is DFT ?
✓ It is a technique of adding testability features to IC
design.
✓ Developing and applying manufacturing tests.
✓ The manufacturing tests help to separate defective
components from the healthy ones.

What are DFT Goals?


✓ Quality: A high degree of confidence during testing
✓ Test cost reduction
✓ Time To Volume reduction through test automation

Why so much importance to DFT today?


✓ DFT ranks as one of the top concerns for SoC design
✓ DFT is an integral part of the high level design process
✓ Reduction in feature size leads to new types of manufacturing
faults
DESIGN FOR TEST

❑ Test merges with design much earlier in the process

❑ Testable circuitry is both Controllable and Observable

❑ Designers must employ special DFT techniques at


specific stages in the development process

❑ Ad Hoc and Structured DFT


Ad Hoc DFT
To enhance a design’s testability without major changes to the
design style

❑ Minimizing redundant logic

❑ Minimizing asynchronous logic

❑ Isolating clocks from the logic

❑ Adding internal control and observation points

However, structured DFT techniques yield better results


Structured DFT
❑ More systematic and automatic approach

❑ Scan Design

❑ BIST (Built-In Self-Test)


– Memory BIST
– Logic BIST

❑ Boundary Scan

Goal is to increase the controllability and observability of a


circuit
System On Chip (SoC)

[Recap of the SoC block diagram shown earlier: digital logic, memories, IOs and analog blocks, before any DFT insertion]
SoC with Design For Testability

[The same SoC block diagram after DFT insertion:
➢ Digital logic – platform and digital IPs are scan inserted
➢ Memories – ROM with ROM BIST, RAM/Flash with RAM BIST
➢ Analog blocks – wrapped with test wrappers
➢ Clk & Rst logic – with test muxing
➢ IOs – IOSS/PADI with test muxing, TCU and JTAG]
Does DFT come for free?
• Area penalty
• Test Cost
• Performance penalty
TESTABILITY
▪ Controllable: the ability to set a node in a design to a desired state, i.e., logic 0 or 1

▪ Observable: the ability to observe a change in the logic value of a node in a design

▪ Testable: if a design is well-controllable and well-observable, it is said to be easily testable

▪ It is not possible to add such 'directly' controllable & observable points to every gate in an actual design
TESTABILITY
TESTABILITY (Area/Performance/cost ?)

FAULT MODELS AND TEST TYPES

Manufacturing Defect Space
• A manufacturing defect is a physical problem that occurs during the manufacturing process and causes device malfunction

• The purpose of test generation is to create a set of


test patterns that
detect as many
manufacturing defects
as possible.
Test Types
• Main categories of defects and test types

– Structural Testing - Checks the logic levels of output


pins for a “0” & ”1” response

– At-Speed Testing - Checks the amount of time it


takes for a device to change logic states

– IDDQ Testing - Measures the current going through


the circuit devices
Structural Test
• Most widely accepted test type
– User-generated test patterns
– ATPG tool patterns

[Two-input AND gate with node B stuck-at-1]

Input-1 A | Input-2 B | Output Y | Output Y (faulty)
0 | 0 | 0 | 0
0 | 1 | 0 | 0
1 | 0 | 0 | 1
1 | 1 | 1 | 1

Node B stuck-at-1
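The faulty column above can be reproduced by injecting the fault into a simple two-valued model of the gate. A minimal sketch (the helper names are invented for illustration, not from any ATPG tool):

def and_gate(a, b, stuck=None):
    # stuck = {"B": 1} forces node B to 1 regardless of the applied value
    if stuck and "A" in stuck:
        a = stuck["A"]
    if stuck and "B" in stuck:
        b = stuck["B"]
    return a & b

for a in (0, 1):
    for b in (0, 1):
        good = and_gate(a, b)
        faulty = and_gate(a, b, stuck={"B": 1})     # node B stuck-at-1
        flag = "detected" if good != faulty else ""
        print(a, b, good, faulty, flag)             # only A=1, B=0 detects the fault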
At-Speed Test
• Circuit operates correctly at a slow clock rate and
then fails when run at the normal system speed

• Delay variations exist in the chip due to statistical


variations in the manufacturing process

• At-speed testing detects these types of problems by running the test patterns at the system clock speed
IDDQ Test
• Measures quiescent power supply current rather than
pin voltage
• Detecting device failures such as CMOS transistor
stuck-on faults and bridging faults
• Devices that draw excessive current
may have internal
manufacturing
defects
Fault Modeling
• Fault models are means of abstractly
representing manufacturing defects in the
logical model of your design
Test Type | Fault Model | Defects Detected
Structural | Stuck-at, Toggle | Opens/shorts in circuit interconnections
IDDQ | Pseudo stuck-at | CMOS transistor stuck-on, stuck-open
At-Speed | Transition, Path delay | Partially conducting transistors, resistive bridges

Test Types and Associated Fault Models


Fault Locations
• By default faults reside at the inputs and outputs of
library models
• With internal fault locations, faults can instead reside at the inputs and outputs of gates within library models
Stuck-At Fault Model
• The most common fault model used in fault simulation
due to its effectiveness in finding many common
defect types
• Models the behavior that occurs if the terminals of a
gate are stuck at either a high (Stuck-at-1) or low
(Stuck-at-0)
[Two-input AND gate: inputs A, B; output Y]

Possible faults: 6
PORT | Stuck High | Stuck Low
A | s-a-1 | s-a-0
B | s-a-1 | s-a-0
Y | s-a-1 | s-a-0
Possible Fault Locations
• For a single-output, n-input gate, there are 2(n+1)
possible stuck-at errors.

Inputs = 4 A
B Y
Outputs = 1 C
Possible faults = 2(4+1) = 10 D
Bridge Fault Model
• The bridge fault model tests against potential bridge sites (net pairs) extracted from the design

• Bridge sites can be loaded from a bridge definition file
Bridge Fault Model
• This model uses a 4-way dominant fault model
• Driving one net (dominant) to a logic value and ensuring that
the other net (follower) can be driven to the opposite value

Sig_A is dominant with a value of 0 (sig_A=0; sig_B=1/0)


Sig_A is dominant with a value of 1 (sig_A=1; sig_B=0/1)
Sig_B is dominant with a value of 0 (sig_B=0; sig_A=1/0)
Sig_B is dominant with a value of 1 (sig_B=1; sig_A=0/1)

ATPG tools create test patterns that test each net pair against
all four faulty relationships
Transition Fault Model
• Transition faults model large delay defects at
gate terminals in the circuit under test

– slow-to-rise
models a device pin that is defective because its
value is slow to change from 0 to a 1

– slow-to-fall
models a device pin that is defective because its
value is slow to change from 1 to a 0
Path Delay Fault Model
• Path delay faults model defects in circuit
paths (no localized fault sites)
• Testing the combined delay through all
gates of specific paths (critical paths)
Path Sensitization
• Determine the values at X1 and X2 necessary to set y1 to 1
• Select a path to propagate the response of the fault site to a primary output
• Specify the input values that enable detection at the primary output: X3 must be set to 1

[Figure: circuit with inputs X1, X2, X3, internal nets y1, y2, and an s-a-0 fault at the site being sensitized]
Path Delay Fault Testing

[Figure: a combinational path from inputs A, B through internal nets P, Q, R to output D]

➢ Distributed delay in a combinational path

➢ Propagation delays of all paths in a circuit must be less than one clock cycle

➢ Rising and falling transitions

➢ Critical paths from STA
Path Delay Fault Testing
➢ Robust path delay testing
Guarantees to Detect the Delay Fault on the Targeted Path Independent of all other
delays in the circuit.

➢ Non-Robust path delay testing


Guarantees to Detect the Delay Fault on the targeted Path only if no other path delay is
increased.
Path Delay Fault Testing
➢ Robust Path delay testing
Path Delay Fault Testing
➢ Non-Robust path delay testing
MCP and False Paths
Multicycle path
➢ Takes more than one clock cycle to complete its operation
➢ The propagation delay of the combinational path is greater than the clock period

[Figure: a register-to-register path through several gates (G1-G7) clocked by CLK1; the propagation delay td of the combinational logic exceeds one clock period Tcycle]
MCP and False Paths
False path
➢ Functionally never exercised
➢ Source register data will not be captured
in destination register
Fault Classes
• Untestable (UT)
– Unused (UU)
– Tied (TI)
– Blocked (BL)
– Redundant (RE)

• Testable
– Detected (DT)
– Posdet (PD)
– Atpg_untestable (AU)
– Undetected (UD)
Untestable (UT)
• Untestable (UT) faults are faults for which
no pattern can exist to either detect or
possible-detect them.

• Because the tools acquire some knowledge of faults prior to ATPG, they classify certain unused, tied, or blocked faults before ATPG runs.
Unused (UU)
• The unused fault class includes all faults
on circuitry unconnected to any circuit
observation point and faults on floating
primary outputs.
Tied (TI)
• The tied fault class includes faults on gates
where the point of the fault is tied to a value
identical to the fault stuck value.
• Because tied values propagate, the tied
circuitry at A causes tied faults at A, B, C,
and D.
Blocked (BL)
• The blocked fault class includes faults on
circuitry for which tied logic blocks all paths
to an observable point.
• Tied faults and blocked faults can be
equivalent faults.
Redundant (RE)
• The redundant fault class includes faults the
test generator considers undetectable. After
the test pattern generator exhausts all
patterns, it performs a special analysis to
verify that the fault is undetectable under any
conditions
Laws of Boolean Algebra
Boolean algebra laws are used to simplify Boolean expressions.

Basic Boolean Laws
1. Idempotent Law: A * A = A;  A + A = A
2. Associative Law: (A * B) * C = A * (B * C);  (A + B) + C = A + (B + C)
3. Commutative Law: A * B = B * A;  A + B = B + A
4. Distributive Law: A * (B + C) = A * B + A * C;  A + (B * C) = (A + B) * (A + C)
5. Identity Law: A * 0 = 0, A * 1 = A;  A + 1 = 1, A + 0 = A
6. Complement Law: A * ~A = 0;  A + ~A = 1
7. Involution Law: ~(~A) = A
8. DeMorgan's Law: ~(A * B) = ~A + ~B;  ~(A + B) = ~A * ~B
9. Redundancy Laws
   - Absorption: A + (A * B) = A;  A * (A + B) = A
   - (A * B) + (A * ~B) = A;  (A + B) * (A + ~B) = A
   - A + (~A * B) = A + B;  A * (~A + B) = A * B

Each law is described by two parts that are duals of each other. The principle of duality is:
• Interchanging the + (OR) and * (AND) operations of the expression.
• Interchanging the 0 and 1 elements of the expression.
• Not changing the form of the variables.
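Each of these identities can be checked mechanically over all input combinations. A small sketch verifying DeMorgan's law and two of the redundancy laws by exhaustive truth tables:

from itertools import product

laws = {
    "DeMorgan 1": lambda a, b: (not (a and b)) == ((not a) or (not b)),
    "DeMorgan 2": lambda a, b: (not (a or b)) == ((not a) and (not b)),
    "Absorption": lambda a, b: (a or (a and b)) == a,
    "Redundancy": lambda a, b: (a or ((not a) and b)) == (a or b),
}
for name, law in laws.items():
    ok = all(law(a, b) for a, b in product((False, True), repeat=2))
    print(name, "holds" if ok else "FAILS")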
Testable Faults (TE)
• Testable faults are all those faults that
cannot be proven untestable.

– Detected (DT)
– Posdet (PD)
– Atpg_untestable (AU)
– Undetected (UD)
Detected (DT)
• The detected fault class includes all faults
that the ATPG process identifies as detected.
The detected fault class contains two
subclasses,
• det_simulation (DS) - faults detected when
the tool performs fault simulation.
• det_implication (DI) - faults detected when
the tool performs learning analysis.
The det_implication subclass normally includes faults in the scan path
circuitry. The scan chain test, which detects a binary difference at an
observation point, guarantees detection of these faults.
Posdet (PD)

• The posdet, or possible-detected, fault class


includes all faults that fault simulation identifies
as possible-detected but not hard detected.

• A possible-detected fault results from a 0-X or 1-


X difference at an observation point.

By default, the calculations give 50% credit for posdet faults. You can adjust
the credit percentage with the Set Possible Credit command.
ATPG_untestable (AU)
• The ATPG_untestable fault class includes all
faults for which the test generator is unable to
find a pattern to create a test, and yet cannot
prove the fault redundant.

• Testable faults become ATPG_untestable


faults because of constraints, or limitations,
placed on the ATPG tool (such as a pin
constraint or an insufficient sequential depth).
ATPG_untestable (AU) contd…
• These faults may be possible-detectable, or
detectable, if you remove some constraint, or
change some limitation, on the test generator

• such as removing a pin constraint or


changing the sequential depth

• You cannot detect them by increasing the test


generator abort limit.
Undetected (UD)

• The undetected fault class includes undetected


faults that cannot be proven untestable or
ATPG_untestable. The undetected class contains
two subclasses:

• uncontrolled (UC) - undetected faults, which during


pattern simulation, never achieve the value at the
point of the fault required for fault detection—that is,
they are uncontrollable.

• unobserved (UO) - faults whose effects do not


propagate to an observable point.
Undetected (UD) contd…
• All testable faults prior to ATPG are put in the
UD category. Faults that remain UC or UO
after ATPG are aborted, which means that a
higher abort limit may reduce the number of
UC or UO faults.

• Uncontrolled and unobserved faults can be


equivalent faults. If a fault is both uncontrolled
and unobserved, it is categorized as UD.
Fault Class Hierarchy
Test Coverage
• Percentage of faults detected from among
all testable faults

# DT + (# PD * posdet_credit)
--------------------------------------- X 100
# testable faults
DT -> Detected
PD -> Possible Detected
Fault Coverage
• Percentage of faults detected from among all faults in the design (the full fault list)

# DT + (# PD * posdet_credit)
-------------------------------------------- X 100
# full faults
DT -> Detected
PD -> Possible Detected
Significance of Fault & Test Coverage
ATPG Effectiveness
• The ATPG tool's ability to either create a test for a fault, or to prove that a test cannot be created for the fault under the restrictions placed on the tool

#DT + #UT + #AU + #PU + (#PT*posdet_credit)


-------------------------------------------------------------- x 100
# full faults
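The test-coverage and fault-coverage formulas differ only in their denominators. A minimal sketch with made-up fault counts (the counts are illustrative, not from any real ATPG run):

# Illustrative fault counts from a hypothetical ATPG run
DT, PD, AU, UT = 9500, 60, 140, 300       # detected, posdet, ATPG_untestable, untestable
full_faults = 10000
testable_faults = full_faults - UT        # untestable (UT) faults are excluded from test coverage
posdet_credit = 0.5                       # default 50% credit for possible-detected faults

test_coverage  = (DT + PD * posdet_credit) / testable_faults * 100
fault_coverage = (DT + PD * posdet_credit) / full_faults * 100
print(f"Test coverage : {test_coverage:.2f}%")
print(f"Fault coverage: {fault_coverage:.2f}%")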

TEST FUNDAMENTALS

‘AND’ & ‘INV’ S@

– The possible faults of AND (inputs A, B; output C) are:
• A s-a-0, A s-a-1
• B s-a-0, B s-a-1
• C s-a-0, C s-a-1

– The possible faults of INV (input A; output Y) are:
• A s-a-0, A s-a-1
• Y s-a-0, Y s-a-1
‘AND’ & ‘INV’ S@

– The possible vectors for AND:
1. A s-a-0 : AB = 11
2. A s-a-1 : AB = 01
3. B s-a-0 : AB = 11
4. B s-a-1 : AB = 10
5. C s-a-0 : AB = 11
6. C s-a-1 : AB = 00, 01, 10

– The possible vectors for INV:
1. A s-a-0 : A = 1
2. A s-a-1 : A = 0
3. Y s-a-0 : A = 0
4. Y s-a-1 : A = 1
‘AND’ & ‘INV’ S@

– The possible vectors for AND:
1. A s-a-0 : AB = 11
2. A s-a-1 : AB = 01
3. B s-a-0 : AB = 11
4. B s-a-1 : AB = 10
5. C s-a-0 : AB = 11
6. C s-a-1 : AB = 00, 01, 10
– AB = 00 is redundant
– Need 3 vectors (not 4)

– The possible vectors for INV:
1. A s-a-0 : A = 1
2. A s-a-1 : A = 0
3. Y s-a-0 : A = 0
4. Y s-a-1 : A = 1
– Need 2 vectors (not 4)

Fault equivalence and fault collapsing
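The vector lists above can be generated by brute force: inject each stuck-at fault into the gate, simulate every input pattern, and keep the patterns whose output differs from the fault-free gate. A minimal sketch for the 2-input AND (the INV case is analogous); faults with identical test sets are equivalent and collapse to one representative:

from itertools import product

def and2(a, b):
    return a & b

# The 6 single stuck-at faults of a 2-input AND: (node, stuck_value)
faults = [(node, v) for node in ("A", "B", "C") for v in (0, 1)]

tests = {}
for node, v in faults:
    vectors = []
    for a, b in product((0, 1), repeat=2):
        fa = v if node == "A" else a        # apply the fault to the affected input
        fb = v if node == "B" else b
        out = and2(fa, fb)
        if node == "C":
            out = v                         # an output stuck-at overrides the gate value
        if out != and2(a, b):               # keep vectors that expose this fault
            vectors.append(f"{a}{b}")
    tests[f"{node} s-a-{v}"] = vectors

for fault, vecs in tests.items():
    print(fault, "detected by", vecs)
# A s-a-0, B s-a-0 and C s-a-0 share the test set {11}, so they are equivalent;
# {11, 01, 10} covers every fault, which is why AB = 00 is redundant.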
Combinational Logic

[Circuit: inputs A3, A2, A1, A0; internal nets n1, n2, n3; output Y]

Node | SA1 tests | SA0 tests
A3 | {0110} | {1110}
A2 | {1010} | {1110}
A1 | {0100} | {0110}
A0 | {0110} | {0111}
n1 | {1110} | {0110}
n2 | {0110} | {0100}
n3 | {0101} | {0110}
Y  | {0110} | {1110}

❑ Minimum set: {0100, 0101, 0110, 0111, 1010, 1110}
TESTABILITY
Controllable: the ability to set a node in a design to a desired state, i.e., logic 0 or 1.

Observable: the ability to observe a change in the logic value of a node in a design.

Testable: if a design is well-controllable and well-observable, it is said to be easily testable.

It is not possible to add such 'directly' controllable & observable points to every gate in an actual design.
SCAN Design Basics

Before scan insertion: an ordinary D flip-flop (D → Q, clocked by sys_clk).

After scan insertion: a mux is added in front of D; scan_en selects between functional_data (0) and scan_data (1), and the cell is still clocked by sys_clk.
SCAN Basics

[Figure (A): combinational logic blocks with ordinary flip-flops between inputs A_IN, B_IN, C_IN and outputs OUT1, OUT2, all clocked by CLK.
Figure (B): the same circuit after scan insertion; the flip-flops are replaced by scan cells (SCL) chained together from SC_IN and controlled by SC_EN.]

Before modification, the logic circuit appears as in (A).
After inserting scan cells, the resulting circuit looks like that shown in (B).
SCAN Operation

[Figure: the scan cells form a shift register (scan chain) from the scan-in port to the scan-out port. At each flip-flop, a mux controlled by the scan-enable port selects between functional data (SE = 0) and the scan path (SE = 1); all cells share the clock from the clock port. The functional path runs through the combinational logic.]
SCAN OPERATION
• Select Scan Shift mode. {SE = 1}
• Shift in scan cell values. {PULSE SHIFT CLOCK}
• Select Capture mode. {SE = 0}
• Force primary input values.
• Measure primary outputs.
• Capture circuit response into scan chains. {PULSE CAPTURE CLOCK}
• Select Scan Shift mode. {SE = 1}
• Shift-out the scan data. {PULSE SHIFT CLOCK}
– shift-in the next set of scan cell values
SCAN OPERATION

Shift ➔ SCAN_EN = ‘1’


Shift [Load]: Loading the scan chain with data, required to detect faults in the logic
Shift [Unload]: Shifting out the scan chain data, observed from logic

Capture ➔ SCAN_EN = ‘0’


Pulse clock to capture data input of the MUX
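The shift/capture sequence above can be mimicked with a few lines of list manipulation. A minimal behavioural sketch, assuming an invented 4-bit chain and an invented XOR "combinational logic" for the capture step:

def shift(chain, scan_in_bits):
    # SE = 1: shift the chain one cell per clock, returning the bits that fall out of scan-out
    out = []
    for bit in scan_in_bits:
        out.append(chain[-1])             # last cell drives scan-out
        chain[:] = [bit] + chain[:-1]     # every cell takes its neighbour's value
    return out

def capture(chain):
    # SE = 0: one capture clock loads each cell with the combinational response
    # (invented logic: each cell captures the XOR of its two neighbours)
    chain[:] = [chain[(i - 1) % len(chain)] ^ chain[(i + 1) % len(chain)]
                for i in range(len(chain))]

chain = [0, 0, 0, 0]
shift(chain, [1, 0, 1, 1])                # load the stimulus (SE = 1, pulse shift clock)
capture(chain)                            # apply one capture clock (SE = 0)
response = shift(chain, [0, 0, 0, 0])     # unload the response while loading the next pattern
print(response)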
Scan Chain Operation for Stuck-at Test

[Illustrated step by step in the original slides: shift the stimulus into the chain, apply a capture clock, then shift the captured response out while the next stimulus is shifted in.]
Full Scan Design
Partial Scan
FULL vs PARTIAL SCAN
Test time and Test Data
How to calculate Test Time and Test Data for the chip?
➢ Scan Chain Length

➢ Number of patterns

➢ Scan shift frequency

➢ Number of Scan ports
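A back-of-the-envelope estimate that combines the four quantities above. The numbers plugged in are illustrative, and the formula is only an approximation (it assumes balanced chains and overlapped load/unload):

flops         = 100_000          # scan flip-flops in the design (assumed)
scan_ports    = 8                # scan-in/scan-out channel pairs (assumed)
patterns      = 2_000            # ATPG pattern count (assumed)
shift_freq_hz = 50e6             # scan shift frequency (assumed)

chain_length = flops // scan_ports                     # cells per chain with balanced chains
# each pattern needs chain_length shift cycles plus one capture cycle;
# the unload of one pattern overlaps with the load of the next
shift_cycles   = (patterns + 1) * chain_length + patterns
test_time_s    = shift_cycles / shift_freq_hz
test_data_bits = patterns * chain_length * scan_ports * 2   # stimulus + expected response

print(f"Chain length : {chain_length}")
print(f"Test time    : {test_time_s * 1e3:.1f} ms")
print(f"Test data    : {test_data_bits / 8 / 1e6:.1f} MB")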


Why Scan Compression?

➢ Test Time

➢ Test Data Volume

➢ ATE Memory
Scan compression

• Length of scan chains
• Number of scan chains

[Figure: the scan-in and scan-out ports connect to many short internal chains through the compression logic; a CMP MODE control selects compressed or normal operation]

Scan Compression Blocks

➢ Decompressor
➢ Compactor
➢ Controller + Masking logic

[Block diagram: scan-in from the tester feeds the decompressor, which drives the internal chains; the compactor, with X-masking controlled by the mask controls, drives scan-out to the tester]

Scan Compression architecture
Scan compression with Bypass

[Figure: in the core design, the decompressor and compactor sit between the input and output channels and the internal chains; a bypass path around the compression logic allows the chains to be accessed directly.]
Fault Aliasing
Fault Aliasing contd..
SCAN INSERTION DRCs
(Design Rule Checks)
• Adoption of design-for-testability principles early
in the design process ensures the maximum
testability with the minimum effort.

• These guidelines emphasize that test is a part of


the design flow, not a process that is performed at
the end of the design cycle.
DFT Rule #1
• All internal clocks must be controlled by a port-level CLK signal (primary input) in scan test mode

[Figure: a flip-flop clocked by an internally generated (gated) clock derived from another flip-flop, so the clock is not controllable from the port]

DFT Rule #1
• The clock is made controllable with additional circuitry

[Figure: a mux controlled by TEST_MODE selects between the internally generated clock and the port-level CLK, so the flip-flop is clocked directly from the port in test mode]
DFT Rule #2
• Avoid combinational feedback circuits. If present, the feedback loop must be broken in test mode
• The gate output is not testable for stuck-at faults, as it is usually held constant during test
• The feedback signal may not be testable (observable) in test mode

[Figure: combinational logic with a feedback signal that cannot be observed (at all) and inputs that cannot be controlled (much); the outputs of this circuit cannot be controlled by their inputs alone]

DFT Rule #2
• The combinational feedback loop is broken using a TEST_MODE signal

[Figure: TEST_MODE gates the feedback path of the combinational logic between INPUT and OUTPUT]
DFT Rule #3
▶ Asynchronous SET/RESET pins of flip-flops must be controlled by a port-level RESET (primary input) in scan test mode

[Figure: a flip-flop whose asynchronous R pin is driven directly by combinational logic]

DFT Rule #3
• Reset is controlled during scan mode using the Test_mode signal

[Figure: the reset generated by the combinational logic is gated with Test_mode through an OR gate before driving the flip-flop's R pin]

DFT Rule #3
• Reset controllability is added

[Figure: a mux controlled by Test_mode selects between the combinational reset (0) and the RESET from the port (1), driving the flip-flop's R pin]
DFT Rule #4
• Gated clocks must be enabled in scan test mode

[Figure: a clock-gating structure in which HOLD passes through a D-type latch (C1) and gates CLK to produce the gated clock]

Gated clocks can block the scan chain from shifting.

DFT Rule #4
• Gated clock must be enabled in scan test mode

[Figure: the TEST signal is combined with HOLD ahead of the clock-gating latch so the gated clock keeps toggling in test mode]

The muxed scan flip-flop observer is not required if the HOLD signal is directly issued from a scan flip-flop.
DFT Rule #5
• Latches have to be avoided as much as possible. If present, make it
transparent in scan test mode
• In an edge-triggered design, it is difficult to put latches on a scan chain
because the library does not contain their edge-triggered scan
equivalents. If they are not part of a scan chain, their outputs will be
difficult to control. The fault coverage will therefore be very low.
DFT Rule #5
• The latch is converted into a transparent latch in test mode

[Figure: the latch enable is ORed with TEST so the latch becomes transparent during scan test]

process(DATA, ENABLE, TEST)
begin
  if (ENABLE = '1' or TEST = '1') then
    latch_signal <= DATA;
  end if;
end process;
DFT Rule #6
• Do not replace the flip-flops of a shift register structure by equivalent scan flip-flops
• For area efficiency, the flip-flops of the shift register structure are not replaced by equivalent scan flip-flops. The SCAN_EN signal has to be added to the VHDL RTL code of the shift register to allow the shifting of scan patterns in scan mode.

process(CLK)
begin
  if (CLK'event and CLK = '1') then
    if (RESET = '1' and SCAN_EN = '0') then
      shifter_bus <= (others => '0');
    elsif (ACTIVE_SHIFT = '1' or SCAN_EN = '1') then
      shifter_bus(16 downto 1) <= shifter_bus(15 downto 0);
      shifter_bus(0) <= DATA_IN;
    end if;
  end if;
end process;
DFT Rule #7
• The clock should not be used as data in scan test mode

[Figure: CLK driving the data input of a mux-scan flip-flop; in the fixed version a mux controlled by TEST breaks the coupling between CLK and the DATA path]

• For ATPG to be successful, there should be minimal coupling between the clocks and data. When there is any coupling between clock and data, the ATPG tool has a set of conflicting requirements to satisfy at the same time. This results in loss of test coverage. When the clock pulses, it can also create race conditions.

This can result in a loss of test coverage on combinational logic.
DFT Rule #8
Bypass the memory in scan test mode:
▪ All paths ending at the memory are not observable
▪ All paths starting from the memory are not controllable

[Figure: a Test mode mux bypasses the memory block]
DFT Rule #9
The SCAN_ENABLE signal must be buffered adequately.
• The scan enable signal that causes all flip flops in the design to be
connected to form the scan shift register, has to be fed to all flip flops in
the design. This signal will be heavily loaded.
• The problem of buffering this signal is identical to that of clock
buffering. The drive strength of scan enable port on each block of the
design must be set to a realistic value when the design is synthesized.
If this port is left unconstrained during synthesis, it could result in silicon
failure.
DFT Rule #10
Avoid multicycle paths as much as possible. Ideally Zero
• This restriction arises from the fact that most ATPG tools use unit delay or
zero delay simulation. The tools assume that the results of applying a test
vector will be available before the end of the clock cycle. This means that
the vectors generated by the tools may be functionally correct, but may
not work with timing. Because of large combinational delays, the scan flip
flops may not be able to capture data at the end of the clock cycle.
• Loss of coverage during at-speed testing
DFT Rule #11
Negative edge flops should be placed in the start of the scan chain.

FAULT Modeling

Why Modelling?
• In engineering, models bridge the gap between physical reality and mathematical abstraction

• Allows development and application of


analytical tools
Few Definitions
• Defect : Unintended difference between the
implemented hardware and its intended design

• Error : A wrong output signal produced by a


defective system is called an error. An error is
an effect whose cause is some defect

• Fault : A representation of a “defect” at the


abstracted function level is called a fault
An example:
• Consider a digital system consisting of two inputs a and b, one output c, and one two-input AND gate. The system is assembled by connecting a wire between terminal a and the first input of the AND gate. The output of the gate is connected to c. But the connection between b and the gate is incorrectly made: b is left unconnected and the second input of the gate is grounded. The functional output of this system, as implemented, is c = 0 instead of the correct output c = ab.
• For this system, we have:
– Defect: a short to ground
– Fault: signal b stuck at logic 0
– Error: a = 1, b = 1, output c = 0; correct output c = 1
– Is the error permanent?
Common Fault Models
• Single stuck-at faults
• Transistor open and short faults
• Memory faults
• PLA faults (stuck-at, cross-point, bridging)
• Functional faults (processors)
• Delay faults (transition, path)
• Analog faults
Single Stuck-at Fault
• Three properties define a single stuck-at fault
• Only one line is faulty
• The faulty line is permanently set to 0 or 1
• The fault can be at an input or output of a gate
• Example: XOR circuit has 12 fault sites ( ) and
24 single stuck-at faults
[Figure: XOR circuit with inputs a, b and output z; the marked test vector for the fault h s-a-0 produces good value 1 / faulty value 0 at h (shown as 1(0)) and the difference propagates to output z.]
Multiple Stuck-at Faults
• A multiple stuck-at fault means that any set of
lines is stuck-at some combination of (0,1)
values.
• A circuit with n lines can have 3^n - 1 multiple
stuck line combinations since each line can be in
one of SA0, SA1 and fault-free.
• A single fault test can fail to detect the target
fault if another fault is also present, however,
such masking of one fault by another is rare.
• Statistically, single fault tests cover a very large
number of multiple faults.
Checkpoints
• Primary inputs and fanout branches of a
combinational circuit are called checkpoints.
• Checkpoint theorem: A test set that detects all
single (multiple) stuck-at faults on all checkpoints
of a combinational circuit, also detects all single
(multiple) stuck-at faults in that circuit.

Total fault sites = 16

Checkpoints ( ) = 10
Fault Equivalence
• Number of fault sites in a Boolean gate circuit is
= #PI + #gates + # (fanout branches)
• Fault equivalence: Two faults f1 and f2 are
equivalent if all tests that detect f1 also detect f2.
• If faults f1 and f2 are equivalent then the
corresponding faulty functions are identical.
• Fault collapsing: All single faults of a logic circuit
can be divided into disjoint equivalence subsets,
where all faults in a subset are mutually equivalent.
A collapsed fault set contains one fault from each
equivalence subset.
Equivalence Rules

[Figures: equivalence collapsing rules for a wire/buffer, AND, OR, NOT, NAND, NOR and fanout. For an AND gate, any input s-a-0 is equivalent to the output s-a-0; for an OR gate, any input s-a-1 is equivalent to the output s-a-1; for a NOT gate, input s-a-0 ≡ output s-a-1 and input s-a-1 ≡ output s-a-0; NAND and NOR follow analogously; fanout stem faults are not equivalent to branch faults.]
Equivalence Example

[Figure: example circuit with the faults shown in grey removed by equivalence collapsing]

Collapse ratio = 20/32 = 0.625
Fault Dominance
• If all tests of some fault F1 detect another fault
F2, then F2 is said to dominate F1.
• Dominance fault collapsing: If fault F2
dominates F1, then F2 is removed from the fault
list.
• When dominance fault collapsing is used, it is
sufficient to consider only the input faults of
Boolean gates. See the next example.
• In a tree circuit (without fanouts) PI faults form a
dominance collapsed fault set.
• If two faults dominate each other then they are
equivalent.
Dominance Example

[Figure: a three-input AND gate with fault F1 (s-a-1 on an input) and F2 (s-a-1 on the output). F2 is detected by every vector except 111, while F1 has only one test; since that test also detects F2, F2 dominates F1 and is removed. The dominance collapsed fault set keeps the input s-a-1 faults and the output s-a-0 fault.]
Fault Dominance
Fault Dominance
Dominance Example

[Figure: example circuit with the faults shown in blue removed by dominance collapsing]

Collapse ratio = 15/32 = 0.47
Classes of Stuck-at Faults
• Following classes of single stuck-at faults
are identified by fault simulators:
• Potentially-detectable fault -- Test produces an
unknown (X) state at primary output (PO); detection
is probabilistic, usually with 50% probability.
• Initialization fault -- Fault prevents initialization of the
faulty circuit; can be detected as a potentially-
detectable fault.
• Hyperactive fault -- Fault induces much internal
signal activity without reaching PO.
• Redundant fault -- No test exists for the fault.
• Untestable fault -- Test generator is unable to find a
test.
Transistor (Switch) Faults
• MOS transistor is considered an ideal switch
and two types of faults are modeled:
• Stuck-open -- a single transistor is permanently stuck
in the open state.
• Stuck-short -- a single transistor is permanently
shorted irrespective of its gate voltage.
• Detection of a stuck-open fault requires two
vectors.
• Detection of a stuck-short fault requires the
measurement of quiescent current (IDDQ).
Stuck-Open Example

[Figure: a two-input CMOS gate (pMOS pull-up, nMOS pull-down) with one pMOS transistor stuck-open. Vector 1, the test for A s-a-0, initializes the output; Vector 2, the test for A s-a-1, exposes the fault: with A = 1 then 0 and B = 0, the good circuit output goes 0 then 1, while the faulty output floats (Z) on the second vector. A two-vector stuck-open test can be constructed by ordering two stuck-at tests.]
Stuck-Short Example

[Figure: the same gate with a pMOS transistor stuck-short. The test vector for A s-a-0 creates an IDDQ path through the faulty circuit; the good circuit output is 0 while the faulty output is indeterminate (X), so the fault is detected by measuring the quiescent supply current (IDDQ).]
Summary
• Fault models are analyzable approximations of
defects and are essential for a test methodology.
• For digital logic single stuck-at fault model offers
best advantage of tools and experience.
• Many other faults (bridging, stuck-open and
multiple stuck-at) are largely covered by stuck-at
fault tests.
• Stuck-short and delay faults and technology-
dependent faults require special tests.
• Memory and analog circuits need other specialized
fault models and tests.

IEEE 1149.1 JTAG


Boundary Scan Standard
Motivation for Standard
Bed-of-nails printed circuit board tester gone
▪ We put components on both sides of PCB &
replaced DIPs with flat packs to reduce
inductance
Nails would hit components
▪ Reduced spacing between PCB wires
Nails would short the wires
▪ PCB Tester must be replaced with built-in test
delivery system -- JTAG does that
▪ Need standard System Test Port and Bus
▪ Integrate components from different vendors
Test bus identical for various components
One chip has test hardware for other chips
Bed-of-Nails Tester Concept
Bed-of-Nails Tester
Purpose of Standard
Lets test instructions and test data be serially
fed into a component-under-test (CUT)
▪ Allows reading out of test results
▪ Allows RUNBIST command as an instruction
Too many shifts to shift in external tests
JTAG can operate at chip, PCB, & system levels
Allows control of tri-state signals during testing
Lets other chips collect responses from CUT
Lets system interconnect be tested separately
from components
Lets components be tested separately from
wires
System Test Logic
Instruction Register Loading with
JTAG
System View of Interconnect
Boundary Scan Chain View
Elementary Boundary Scan Cell
Serial Board / MCM Scan
Parallel Board / MCM Scan
Independent Path Board / MCM Scan
Tap Controller Signals
Test Access Port (TAP) includes these signals:
▪ Test Clock Input (TCK) -- Clock for test logic
Can run at different rate from system clock
▪ Test Mode Select (TMS) -- Switches system
from functional to test mode
▪ Test Data Input (TDI) -- Accepts serial test
data and instructions -- used to shift in
vectors or one of many test instructions
▪ Test Data Output (TDO) -- Serially shifts out
test results captured in boundary scan chain
(or device ID or other internal registers)
▪ Test Reset (TRST) -- Optional asynchronous
TAP controller reset
Tap Controller State Diagram
TAP Controller Power-Up Reset Logic
Boundary Scan
Instructions
SAMPLE / PRELOAD Instruction --
SAMPLE
Get snapshot of normal chip output signals
SAMPLE / PRELOAD Instruction --
PRELOAD
Put data on boundary scan chain before next instr.
EXTEST Instruction
Purpose: Test off-chip circuits and board-
level interconnections
INTEST Instruction
Purpose:
1. Shifts external test patterns onto component
2. Shifts component responses out
RUNBIST Instruction
Purpose: Allows you to issue BIST command to
component through JTAG hardware
Optional instruction
Lets test logic control state of output pins
1. Can be determined by pin boundary scan cell
2. Can be forced into high impedance state
BIST result (success or failure) can be left in
boundary scan cell or internal cell
▪ Shift out through boundary scan chain
May leave chip pins in an indeterminate state
(reset required before normal operation
resumes)
CLAMP Instruction

Purpose: Forces component output signals


to be driven by boundary-scan register

Bypasses the boundary scan chain by


using the one-bit Bypass Register

Optional instruction
IDCODE Instruction
Purpose: Connects the component device
identification register serially between TDI
and TDO
▪ In the Shift-DR TAP controller state

Allows board-level test controller or


external tester to read out component ID

Required whenever a JEDEC identification


register is included in the design
Device ID Register -- JEDEC Code

Bits 31-28: Version (4 bits)
Bits 27-12: Part Number (16 bits)
Bits 11-1 : Manufacturer Identity (11 bits)
Bit  0    : '1' (1 bit)
(MSB = bit 31, LSB = bit 0)
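Using the bit fields above, a 32-bit IDCODE captured through TDO can be split into its fields with shifts and masks. A minimal sketch; the example value is arbitrary:

def decode_idcode(idcode: int):
    # Split a 32-bit JTAG IDCODE into the JEDEC fields defined above
    return {
        "version":      (idcode >> 28) & 0xF,      # bits 31..28
        "part_number":  (idcode >> 12) & 0xFFFF,   # bits 27..12
        "manufacturer": (idcode >> 1)  & 0x7FF,    # bits 11..1
        "marker":        idcode        & 0x1,      # bit 0, always 1
    }

print(decode_idcode(0x4BA00477))   # example value read out via the IDCODE instruction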
USERCODE Instruction
• Purpose: Intended for user-programmable
components.

• Selects the device identification register as


serially connected between TDI and TDO

• User-programmable ID code loaded into


device identification register

• Switches component test hardware to its


system function
HIGHZ Instruction
Purpose: Puts all component output pin signals
into high-impedance state
Control chip logic to avoid damage in this mode
May have to reset component after HIGHZ runs
Optional instruction
BYPASS Instruction
Purpose: Bypasses scan chain with 1-bit register
Optional / Required Instructions

Instruction Status
BYPASS Mandatory
CLAMP Optional
EXTEST Mandatory
HIGHZ Optional
IDCODE Optional
INTEST Optional
RUNBIST Optional
SAMPLE / PRELOAD Mandatory
USERCODE Optional
Summary
Boundary Scan Standard has become
absolutely essential --
▪ No longer possible to test printed circuit
boards with bed-of-nails tester
▪ Not possible to test multi-chip modules
at all without it
▪ Supports BIST, external testing with
Automatic Test Equipment, and
boundary scan chain reconfiguration as
BIST pattern generator and response
compacter
▪ Now getting widespread usage

LOGIC SIMULATION

Logic Simulation

• What is simulation?
• Design verification
• Circuit modeling
• True-value simulation algorithms
• Compiled-code simulation
• Event-driven simulation
• Summary
Simulation Defined
• Definition: Simulation refers to modeling of
a design, its function and performance.
• A software simulator is a computer
program; an emulator is a hardware
simulator.
• Simulation is used for design verification:
• Validate assumptions
• Verify logic
• Verify performance (timing)
• Types of simulation:
• Logic or switch level
• Timing
• Circuit
• Fault
Simulation for Verification

[Flow: Specification → Synthesis → Design (netlist). Input stimuli and the design drive true-value simulation; the computed responses go to response analysis, which feeds design changes back into the design.]
Modeling for Simulation
• Modules, blocks or components described by
• Input/output (I/O) function
• Delays associated with I/O signals
• Examples: binary adder, Boolean gates, FET, resistors
and capacitors
• Interconnects represent
• ideal signal carriers, or
• ideal electrical conductors
• Netlist: a format (or language) that describes a
design as an interconnection of modules.
Netlist may use hierarchy.
Example: A Full-Adder

[Half-adder HA: inputs a, b; outputs c (carry), f (sum). Full-adder FA: built from two half-adders HA1, HA2 and an OR gate O2.]

HA;
  inputs: a, b;
  outputs: c, f;
  AND: A1, (a, b), (c);
  AND: A2, (d, e), (f);
  OR: O1, (a, b), (d);
  NOT: N1, (c), (e);

FA;
  inputs: A, B, C;
  outputs: Carry, Sum;
  HA: HA1, (A, B), (D, E);
  HA: HA2, (E, C), (F, Sum);
  OR: O2, (D, F), (Carry);
Logic Model of MOS Circuit

[Figure: a CMOS gate with pMOS and nMOS FETs, inputs a and b, output c, and parasitic capacitances Ca, Cb, Cc. In the logic model, Da and Db are interconnect (propagation) delays and Dc is the inertial delay of the gate.]
Inertial Delay

Options for inertial delay (simulation of a NAND gate):

[Waveforms: inputs a and b; the output c as produced by a CMOS (analog) model with a transient region, and by logic simulation with zero delay, unit delay, and multiple delay (rise = 5, fall = 3) over 0-5 time units.]
Signal States

• Two-states (0, 1) can be used for purely


combinational logic with zero-delay.
• Three-states (0, 1, X) are essential for
timing hazards and for sequential logic
initialization.
• Four-states (0, 1, X, Z) are essential for
MOS devices.
• Analog signals are used for exact timing
of digital logic and for analog circuits.
Modeling Levels

Modeling level | Circuit description | Signal values | Timing | Application
Architectural: function, behavior, RTL | Programming language-like HDL | 0, 1 | Clock boundary | Functional verification
Logic | Connectivity of Boolean gates, flip-flops and transistors | 0, 1, X and Z | Zero-delay, unit-delay, multiple-delay | Logic verification and test
Switch | Transistor size and connectivity, node capacitances | 0, 1 and X | Zero-delay | Logic verification
Timing | Transistor technology data, connectivity, node capacitances | Analog voltage | Fine-grain timing | Timing verification
Circuit | Technology data, active/passive component connectivity | Analog voltage, current | Continuous time | Digital timing and analog circuit verification
True-Value Simulation Algorithms

• Compiled-code simulation
• Applicable to zero-delay combinational logic
• Also used for cycle-accurate synchronous sequential circuits
for logic verification
• Efficient for highly active circuits, but inefficient for low-
activity circuits
• High-level (e.g., C language) models can be used

• Event-driven simulation
• Only gates or modules with input events are evaluated (event
means a signal change)
• Delays can be accurately simulated for timing verification
• Efficient for low-activity circuits
• Can be extended for fault simulation
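A minimal event-driven evaluator can be written around a priority queue of (time, net, value) events. The sketch below assumes an invented two-gate netlist held in Python dictionaries; only gates whose inputs actually change are re-evaluated:

import heapq

# netlist: gate name -> (function, [input nets], output net, delay)
netlist = {
    "N1": (lambda a: 1 - a,     ["a"],      "c", 1),
    "A1": (lambda a, b: a & b,  ["c", "b"], "d", 2),
}
fanout = {"a": ["N1"], "b": ["A1"], "c": ["A1"], "d": []}
values = {"a": 0, "b": 1, "c": 1, "d": 1}          # initial (stable) values

def simulate(events):
    # events: list of (time, net, new_value); process only nets that actually change
    heapq.heapify(events)
    while events:
        t, net, v = heapq.heappop(events)
        if values[net] == v:
            continue                                # no event: the value did not change
        values[net] = v
        for g in fanout[net]:                       # evaluate only the affected gates
            fn, ins, out, delay = netlist[g]
            new = fn(*(values[i] for i in ins))
            heapq.heappush(events, (t + delay, out, new))
        print(f"t={t}: {net} -> {v}")

simulate([(0, "a", 1)])   # a rises at t=0, c falls at t=1, d falls at t=3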
Event-Driven Algorithm (Example)

[Figure: a small circuit with nets a, b, c, d, e, f, g and input events a = 1 and b = 1; a time stack holds the scheduled events and an activity list records which gates must be re-evaluated.]

Event-Driven Algorithm (Example)

[Figure: the same example after evaluation; c = 0 at t = 0, d = 1 and e = 0 at t = 2, f = 0 and g = 0 at t = 4, f = 1 at t = 6, and g = 1 at t = 8.]
Time Wheel (Circular Stack)

[Figure: a circular stack of time slots t = 0 … tmax; the current time pointer selects a slot, and each slot holds a link-list of the events scheduled for that time.]
Efficiency of Event-Driven Simulator

• Simulates events (value changes) only


• Speed up over compiled-code can be ten times
or more; in large logic circuits about 0.1 to
10% gates become active for an input change

[Figure: a 0 → 1 event on one input of a large logic block whose other input is a steady 0 causes no internal activity (no events), so nothing is re-evaluated.]
Summary
• Logic or true-value simulators are essential
tools for design verification.
• Verification vectors and expected responses
are generated (often manually) from
specifications.
• A logic simulator can be implemented using
either compiled-code or event-driven method.
• Per vector complexity of a logic simulator is
approximately linear in circuit size.
• Modeling level determines the evaluation
procedures used in the simulator.
Fault Simulation

Automatic Test Equipment (ATE)

ADVANTEST Model T6682 ATE
T6682 ATE Block Diagram
T6682 ATE Specifications
• Uses 0.35μ VLSI chips in implementation
• 1,024 digital pin channels
• Speed: 250, 500, or 1000 MHz
• Timing accuracy: +/- 200 ps
• Drive voltage: - 2.5 to 6 V
• Clock/strobe accuracy: +/- 870 ps
• Clock settling resolution: 31.25 ps
• Pattern multiplexing: write 2 patterns in one
ATE cycle
• Pin multiplexing: use 2 pins to control 1
DUT pin
Pattern Generation
• Sequential pattern generator (SQPG):
stores 16 Mvectors of patterns to apply to
DUT -- vector width determined by # DUT
pins
• Algorithmic pattern generator (ALPG): 32
independent address bits, 36 data bits
– For memory test – has address descrambler
– Has address failure memory
• Scan pattern generator (SCPG) supports
JTAG boundary scan, greatly reduces test
vector memory for full-scan testing
– 2 Gvector or 8 Gvector sizes
Response Checking and Frame
Processor
• Response Checking:
– Pulse train matching – ATE matches
patterns on 1 pin for up to 16 cycles
– Pattern matching mode – matches pattern
on a number of pins in 1 cycle
– Determines whether DUT output is correct,
changes patterns in real time
• Frame Processor – combines DUT input
stimulus from pattern generators with
DUT output waveform comparison
• Strobe time – interval after pattern
application when outputs sampled
Probing
• Pin electronics (PE) – electrical buffering
circuits, put as close as possible to DUT
• Uses pogo pin connector at test head
• Test head interface through custom printed
circuit board to wafer prober (unpackaged chip
test) or package handler (packaged chip test),
touches chips through a socket (contactor)
• Uses liquid cooling
• Can independently set VIH , VIL , VOH , VOL, IH , IL,
VT for each pin
• Parametric Measurement Unit (PMU)
Probe Card and Probe Needles or
Membrane

• Probe card – custom printed circuit board


(PCB) on which DUT is mounted in socket –
may contain custom measurement
hardware (current test)
• Probe needles – come down and scratch the
pads to stimulate/read pins
• Membrane probe – for unpackaged wafers –
contacts printed on flexible membrane,
pulled down onto wafer with compressed air
to get wiping action
T6682 ATE Software

• Runs Solaris UNIX on UltraSPARC 167 MHz


CPU for non-real time functions
• Runs real-time OS on UltraSPARC 200 MHz
CPU for tester control
• Peripherals: disk, CD-ROM, micro-floppy,
monitor, keyboard, HP GPIB, Ethernet
• Viewpoint software provided to debug,
evaluate, and analyze VLSI chips
LTX FUSION HF ATE
Specifications
• Intended for SOC test – digital, analog, and
memory test – supports scan-based test
• Modular – can be upgraded with additional
instruments as test requirements change
• enVision Operating System
• 1 or 2 test heads per tester, maximum of
1024 digital pins, 1 GHz maximum test rate
• Maximum 64 Mvectors memory storage
• Analog instruments: DSP-based
synthesizers, digitizers, time measurement,
power test, Radio Frequency (RF) source
and measurement capability (4.3 GHz)
Multi-site Testing – Major Cost
Reduction
• One ATE tests several (usually identical)
devices at the same time
• For both probe and package test
• DUT interface board has > 1 sockets
• Add more instruments to ATE to handle
multiple devices simultaneously
• Usually test 2 or 4 DUTS at a time, usually
test 32 or 64 memory chips at a time
• Limits: # instruments available in ATE, type
of handling equipment available for package

TESTABILITY MEASURES

Testability Measures

• Definition
• Controllability and observability
• SCOAP measures
– Combinational circuits
– Sequential circuits
• Summary
SCOAP

• SCOAP (Sandia controllability and observability analysis


program) is a program developed at Sandia National
Laboratories, USA for the analysis of digital circuit
testability.
– Published in: 17th Design Automation Conference, 1980.

• Testability is related to the difficulty of controlling and


observing the logical values of internal nodes from circuit
inputs and outputs.
What are Testability Measures?
• Approximate measures of:
– Difficulty of setting internal circuit lines to 0
or 1 from primary inputs.
– Difficulty of observing internal circuit lines at
primary outputs.
• Applications:
– Analysis of difficulty of testing internal circuit
parts – redesign or add special test hardware.
– Guidance for algorithms computing test
patterns – avoid using hard-to-control lines.
SCOAP Measures
▪ SCOAP
▪ CC0 – Difficulty of setting circuit line to logic 0
▪ CC1 – Difficulty of setting circuit line to logic 1
▪ CO – Difficulty of observing a circuit line

▪ Sequential measures – analogous:


▪ SC0
▪ SC1
▪ SO
Range of SCOAP Measures

▪ Controllabilities – 1 (easiest) to infinity (hardest)


▪ Observabilities – 0 (easiest) to infinity (hardest)
▪ Combinational measures:
– Roughly proportional to number of circuit lines that
must be set to control or observe given line.
▪ Sequential measures:
– Roughly proportional to number of times flip-flops
must be clocked to control or observe given line.
Combinational Controllability

* The result is incremented by 1 so that the value reflects the distance to the PIs
Controllability Formulas
(Continued)
Combinational Observability
To observe a gate input: Observe output and make other input
values non-controlling.
Observability Formulas
(Continued)

Fanout stem: Observe through branch


with best observability.
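The formula slides above appeared as figures; the standard SCOAP rules they contain can be coded directly. A small sketch, assuming the usual SCOAP definitions (a gate output's controllability is the cost of controlling its inputs plus 1; an input's observability is the output observability plus the cost of setting the other inputs to non-controlling values plus 1) on an invented three-gate circuit:

# SCOAP combinational measures for a tiny AND/OR/NOT netlist (illustrative circuit)
CC = {"a": (1, 1), "b": (1, 1), "c": (1, 1)}      # primary inputs: (CC0, CC1) = (1, 1)

def and_cc(ins):   # CC0 = min input CC0 + 1 ; CC1 = sum of input CC1 + 1
    return min(CC[i][0] for i in ins) + 1, sum(CC[i][1] for i in ins) + 1
def or_cc(ins):    # CC0 = sum of input CC0 + 1 ; CC1 = min input CC1 + 1
    return sum(CC[i][0] for i in ins) + 1, min(CC[i][1] for i in ins) + 1
def not_cc(i):
    return CC[i][1] + 1, CC[i][0] + 1

# circuit: n1 = a AND b ; n2 = NOT c ; y = n1 OR n2
CC["n1"] = and_cc(["a", "b"])
CC["n2"] = not_cc("c")
CC["y"]  = or_cc(["n1", "n2"])

CO = {"y": 0}                                      # primary output: CO = 0
# OR gate: to observe one input, the other input must be 0 (non-controlling)
CO["n1"] = CO["y"] + CC["n2"][0] + 1
CO["n2"] = CO["y"] + CC["n1"][0] + 1
CO["c"]  = CO["n2"] + 1                            # NOT gate: pass through, +1
# AND gate: to observe one input, the other input must be 1 (non-controlling)
CO["a"]  = CO["n1"] + CC["b"][1] + 1
CO["b"]  = CO["n1"] + CC["a"][1] + 1

print("CC0/CC1:", CC)
print("CO     :", CO)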
Comb. Controllability
Circled numbers give level number. (CC0, CC1)
Controllability Through Level 2
Final Combinational Controllability
Combinational Observability for Level 1
Number in square box is level from primary outputs (POs).
(CC0, CC1) CO
Combinational Observabilities for Level 2
Final Combinational Observabilities
Sequential Measures (Comparison)
▪ Combinational
▪ Increment CC0, CC1, CO whenever you pass through a
gate, either forward or backward.
▪ Sequential
▪ Increment SC0, SC1, SO only when you pass through a
flip-flop, either forward or backward.
▪ Both
▪ Must iterate on feedback loops until controllabilities
stabilize.
D Flip-Flop Equations
▪ Assume a synchronous RESET line.
▪ CC1 (Q) = CC1 (D) + CC1 (C) + CC0 (C) + CC0
(RESET)
▪ SC1 (Q) = SC1 (D) + SC1 (C) + SC0 (C) + SC0
(RESET) + 1
▪ CC0 (Q) = min [CC1 (RESET) + CC1 (C) + CC0 (C),
CC0 (D) + CC1 (C) + CC0 (C)]
▪ SC0 (Q) is analogous
▪ CO (D) = CO (Q) + CC1 (C) + CC0 (C) + CC0
(RESET)
▪ SO (D) is analogous
D Flip-Flop Clock and Reset
▪ CO (RESET) = CO (Q) + CC1 (Q) + CC1 (RESET) +
CC1 (C) + CC0 (C)
▪ SO (RESET) is analogous
▪ Three ways to observe the clock line:
1. Set Q to 1 and clock in a 0 from D
2. Set the flip-flop and then reset it
3. Reset the flip-flop and clock in a 1 from D
▪ CO (C) = min [ CO (Q) + CC1 (Q) + CC0 (D) +
CC1 (C) + CC0 (C) + CC0 (RESET),
CO (Q) + CC1 (Q) + CC1 (RESET) +
CC1 (C) + CC0 (C),
CO (Q) + CC0 (Q) + CC0 (RESET) +
CC1 (D) + CC1 (C) + CC0 (C)]
▪ SO (C) is analogous
Sequential Example Initialization
After 1 Iteration
After 2 Iterations
After 3 Iterations
Stable Sequential Measures
Final Sequential Observabilities
How are Testability Measures useful?

1. In ATPG, during back tracing (controllability) and when propagating the fault effect (observability), since they indicate the path of least resistance.

2. They tell the designer which parts of the design are extremely hard to test; either redesign or special-purpose test hardware is then needed to achieve high fault coverage.

3. They are extremely useful for estimating fault coverage and test vector length; fault coverage estimation via testability measures can reduce CPU time by orders of magnitude compared to fault simulation.

BIST – Built-In Self-Test

Define Built-In Self-Test
• Implement the function of automatic test equipment (ATE) on the circuit under test (CUT).
• Hardware added to CUT:
  • Pattern generation (PG)
  • Response analysis (RA)
  • Test controller

[Figure: a conventional ATE applies stored test patterns through pin electronics to the CUT and compares against stored responses; with BIST, on-chip test control logic, a pattern generator (PG) and a response analyzer (RA) produce a go/no-go signature when BIST is enabled.]
BIST Motivation
• Useful for field test and diagnosis (less
expensive than a local automatic test
equipment)
• Software tests for field test and
diagnosis:
▪ Low hardware fault coverage
▪ Low diagnostic resolution
▪ Slow to operate
• Hardware BIST benefits:
▪ Lower system test effort
▪ Improved system maintenance and repair
▪ Improved component repair
▪ Better diagnosis
Costly Test Problems Alleviated by BIST
• Increasing chip logic-to-pin ratio – harder
observability
• Increasingly dense devices and faster clocks
• Increasing test generation and application
times
• Increasing size of test vectors stored in ATE
• Expensive ATE needed for 1 GHz clocking
chips
• Hard testability insertion – designers
unfamiliar with gate-level logic, since they
design at behavioral level
Typical Quality Requirements

• 99% single stuck-at fault coverage


• 100% interconnect fault coverage
• Reject ratio – 1 in 100,000
Benefits and Costs of BIST with DFT

Level  | Design and test | Fabrication | Manuf. test | Maintenance test | Diagnosis and repair | Service interruption
Chips  | +/-             | +           | -           |                  |                      |
Boards | +/-             | +           | -           | -                |                      |
System | +/-             | +           | -           | -                | -                    | -

+   Cost increase
-   Cost saving
+/- Cost increase may balance cost reduction
Economics – BIST Costs
▪ Chip area overhead for:
• Test controller
• Hardware pattern generator
• Hardware response compacter
• Testing of BIST hardware
▪ Pin overhead -- At least 1 pin needed to activate
BIST operation
▪ Performance overhead – extra path delays due to
BIST
▪ Yield loss – due to increased chip area or more
chips In system because of BIST
▪ Reliability reduction – due to increased area
▪ Increased BIST hardware complexity – happens
when BIST hardware is made testable
BIST Benefits
• Faults tested:
▪ Single combinational / sequential stuck-at faults
▪ Delay faults
▪ Single stuck-at faults in BIST hardware
• BIST benefits
▪ Reduced testing and maintenance cost
▪ Lower test generation cost
▪ Reduced storage / maintenance of test patterns
▪ Simpler and less expensive ATE
▪ Can test many units in parallel
▪ Shorter test application times
▪ Can test at functional system speed
BIST Process
• Test controller – hardware that activates self-test simultaneously on all PCBs
• Each board controller activates parallel chip BIST
• Diagnosis is effective only if fault coverage is very high
BIST Architecture
• Note: BIST cannot test the wires and transistors:
  ▪ from the PI pins to the input MUX
  ▪ from the POs to the output pins
Random Pattern Testing
LFSR
• An LFSR is a shift register: when clocked, it advances the signal through the
  register from one bit to the next most-significant bit.
• Some of the outputs are combined in an exclusive-OR configuration to form a
  feedback mechanism.
• A linear feedback shift register is formed by XORing the outputs of two or
  more of the flip-flops together and feeding that result back into the input
  of one of the flip-flops.
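
As an illustration of the shift-and-feedback behaviour described above, here is a minimal Python sketch of a 4-bit external-feedback (Fibonacci-style) LFSR. The tap positions (bits 3 and 2) and the seed are choices made for this example; with this shift direction they realize the primitive polynomial x^4 + x + 1, so a non-zero seed walks through all 15 non-zero states.

def lfsr_patterns(seed=0b0001, taps=(3, 2), width=4, count=15):
    """Yield successive LFSR states; a non-zero seed cycles through 2**width - 1 states."""
    state = seed
    for _ in range(count):
        yield state
        # Feedback bit = exclusive-OR of the tapped flip-flop outputs
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        # Shift toward the most-significant bit and inject the feedback into bit 0
        state = ((state << 1) | fb) & ((1 << width) - 1)

print([format(s, "04b") for s in lfsr_patterns()])
# ['0001', '0010', '0100', '1001', '0011', '0110', '1101', '1010',
#  '0101', '1011', '0111', '1111', '1110', '1100', '1000']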
PSA
• A parallel signature analyzer (PSA) is used to compress the data at the
  outputs of the ASIC.
• The PSA is nothing more than an LFSR with exclusive-OR gates between the
  shift register elements. In fact, a PSA can be used as an LFSR if the A_IN,
  B_IN, and C_IN inputs are all held at 0; it will then generate exactly the
  same patterns as the LFSR in the figure.
PSA contd..
• The PSA is most often used as a parallel-to-serial compression circuit. The
  A_IN, B_IN, etc. inputs are multiplexed with the ASIC outputs. As each pattern
  is applied to the ASIC by the LFSR connected to the ASIC inputs, the output
  state of the ASIC is read into the PSA. As each new pattern is applied, the
  PSA performs an exclusive-OR of the previous pattern's outputs with the
  current pattern's outputs to create a new value in the PSA.
• The PSA thus compresses multiple parallel patterns into a single-pattern
  signature that is compared to the expected value. If the signatures match, it
  is assumed that the ASIC passed the applied test vectors and that there are
  no manufacturing defects.
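
Building on the LFSR sketch above, the code below shows one simplified way a PSA/MISR-style compacter folds a stream of parallel response words into a single signature. XORing the whole response word into the register after each shift stands in for the per-stage XOR inputs (A_IN, B_IN, ...) described above, and the response values are invented purely for illustration.

def misr_signature(responses, taps=(3, 2), width=4, seed=0):
    """Compact parallel response words into a single signature."""
    state = seed
    mask = (1 << width) - 1
    for r in responses:
        # One LFSR shift step ...
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = ((state << 1) | fb) & mask
        # ... then fold the parallel response bits into the register
        state ^= r & mask
    return state

good = [0b1010, 0b0111, 0b1100, 0b0001]
faulty = [0b1010, 0b0101, 0b1100, 0b0001]   # one corrupted response word
print(format(misr_signature(good), "04b"),
      format(misr_signature(faulty), "04b"))
# The two signatures differ, so this single-word error is not aliased.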
Aliasing
• One major problem in compressing the results of a number of
patterns into one pattern, called a signature, is aliasing.
• Aliasing occurs when the signature of the PSA is the same as
expected, but the identity was caused by a cancellation of errors in
the patterns.
• Thus, the silicon fault that caused the incorrect circuit response to
the input pattern is never detected. To use the calculator example,
2 + 3 + 6 + 9 + 1 + 1 = 22, but so does 2 + 2 + 7 + 9 + 1 + 1.
• In this example, the second and third values that represent incorrect
circuit values cancel each other out. Note that aliasing can occur
only when there are two or more incorrect output patterns.
• A way to minimize the aliasing problem is to use maximal-length PSAs
  and to read out the PSA signature frequently for comparison to a
  known value.
Aliasing Probability
• Aliasing means that the faulty signature matches the fault-free signature
• Aliasing probability ~ 2^-n
  – where n = length of the signature register
  – Example 1: n = 4, aliasing probability = 6.25%
  – Example 2: n = 8, aliasing probability = 0.39%
  – Example 3: n = 16, aliasing probability = 0.0015%
• For an n-bit register there is one fault-free signature and 2^n - 1 possible
  faulty signatures
BIST Architectures
• LOGIC BIST
• MEMORY BIST

LOGIC BIST
[Figure: logic BIST architecture – a pattern generator (PG) feeds scan registers
interleaved with blocks of combinational logic, and a response analyzer (RA)
compacts the scan outputs; BIST control logic, driven by a BIST enable input,
produces the go/no-go signature. PIs and POs are disabled during test.]

Memory BIST
Summary
• LFSR pattern generator and MISR response analyzer – preferred BIST methods
• BIST has overheads: test controller, extra circuit delay, primary input MUX,
  pattern generator, response compacter, DFT to initialize the circuit and test
  the test hardware
• BIST benefits:
  ▪ At-speed testing for delay and stuck-at faults
  ▪ Drastic ATE cost reduction
  ▪ Field test capability
  ▪ Faster diagnosis during system test
  ▪ Less effort to design the testing process
  ▪ Shorter test application times
BITS Pilani
Pilani|Dubai|Goa|Hyderabad
ATPG Algorithms
397
Combinational ATPG
• Algorithms and representations
• Structural vs. functional test
• Definitions
• ATPG problem
• Algorithms
  – Multi-valued algebra
  – D-algorithm
  – Podem
• ATPG system
• Summary
• Exercise
Origins of Stuck-Faults
• Eldred (1959) – First use of structural testing, for the Honeywell Datamatic
  1000 computer
• Galey, Norby, Roth (1961) – First publication of stuck-at-0 and stuck-at-1
  faults
• Seshu & Freeman (1962) – Use of stuck-faults for parallel fault simulation
• Poage (1963) – Theoretical analysis of stuck-at faults
Functional vs. Structural ATPG
Carry Circuit
Functional vs. Structural
(Continued)
• Functional ATPG – generate a complete set of tests for all circuit input-
  output combinations
  – 129 inputs, 65 outputs:
  – 2^129 = 680,564,733,841,876,926,926,749,214,863,536,422,912 patterns
  – Using a 1 GHz ATE, this would take 2.15 x 10^22 years
• Structural test:
  – No redundant adder hardware, 64 bit slices
  – Each with 27 faults (using fault equivalence)
  – At most 64 x 27 = 1728 faults (tests)
  – Takes 0.000001728 s on a 1 GHz ATE
• Designer gives a small set of functional tests – augment with structural
  tests to boost coverage to 98+ %
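
A quick back-of-the-envelope check of these numbers, assuming one pattern is applied per cycle of a 1 GHz ATE:

functional_patterns = 2 ** 129                      # exhaustive functional test
structural_patterns = 64 * 27                       # 1728 fault-targeted tests
seconds_per_pattern = 1e-9                          # 1 GHz ATE
func_years = functional_patterns * seconds_per_pattern / (3600 * 24 * 365)
print(f"functional: {func_years:.2e} years")        # ~2.2e+22 years
print(f"structural: {structural_patterns * seconds_per_pattern:.9f} s")   # 0.000001728 s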
Definition of Automatic Test-Pattern
Generator
• Operations on digital hardware:
– Inject fault into circuit modeled in computer
– Use various ways to activate and propagate fault effect through
hardware to circuit output
– Output flips from expected to faulty signal
• Electron-beam (E-beam) test observes internal signals – “picture”
of nodes charged to 0 and 1 in different colors
– Too expensive
• Scan design – add test hardware to all flip-flops to make them a
giant shift register in test mode
– Can shift state in, scan state out
– Costs: 5 to 20% chip area, circuit delay, extra pin, longer test
sequence
Circuit and Binary Decision Tree
Binary Decision Diagram
• BDD – Follow a path from the source to the sink node; the product of the
  literals along the path gives the Boolean value at the sink
• Rightmost path: A B C = 1
• Problem: Size varies greatly with variable order
Algorithm Completeness
• Definition: An algorithm is complete if it ultimately can search the entire
  binary decision tree, as needed, to generate a test
• Untestable fault – no test for it even after the entire tree is searched
• Combinational circuits only – untestable faults are redundant, showing the
  presence of unnecessary hardware
Random-Pattern Generation
• Flow chart for the method [figure]
• Use it to get tests for 60-80% of the faults, then switch to the D-algorithm
  or another ATPG for the rest
Path Sensitization Method Circuit Example
1. Fault Sensitization
2. Fault Propagation
3. Line Justification

Path Sensitization Method Circuit Example
▪ Try path f – h – k – L: blocked at j, since there is no way to justify the 1
  on i
[Figure: circuit annotated with the D values for this attempt]

Path Sensitization Method Circuit Example
▪ Try simultaneous paths f – h – k – L and g – i – j – k – L: blocked at k,
  because the D-frontier (chain of D or D') disappears
[Figure: circuit annotated with the D values for this attempt]

Path Sensitization Method Circuit Example
▪ Final try: path g – i – j – k – L – test found!
[Figure: circuit annotated with the sensitized path]
History of Algorithm Speedups

Algorithm             Est. speedup over D-ALG        Year
                      (normalized to D-ALG time)
D-ALG                           1                    1966
PODEM                           7                    1981
FAN                            23                    1983
TOPS                          292                    1987
SOCRATES                     1574 †                  1988
Waicukauski et al.           2189 †                  1990
EST                          8765 †                  1991
TRAN                         3005 †                  1993
Recursive learning            485                    1995
Tafertshofer et al.         25057                    1997

† ATPG system
ATPG Problem
• ATPG: Automatic test pattern generation
– Given
• A circuit (usually at gate-level)
• A fault model (usually stuck-at type)
– Find
• A set of input vectors to detect all modeled faults.
• Core problem: Find a test vector for a
given fault.
• Combine the “core solution” with a fault
simulator into an ATPG system.
What is a Test?
[Figure: a combinational circuit with primary inputs (PI) and primary outputs
(PO). Fault activation sets the stuck-at-0 fault site to 1, creating the fault
effect 1/0; path sensitization propagates the 1/0 effect to a primary output.]
ATPG is a Search Problem
• Search the input vector space for a test:
  • Initialize all signals to the unknown (X) state – the complete vector space
    is the playing field
  • Activate the given fault and sensitize a path to a PO – narrow down to one
    or more tests
[Figure: the vector space narrows from the all-X assignment to specific test
vectors (001, 101 in the example) that expose the sa1 fault as 0/1 at the
output.]
Need to Deal With Two Copies of the Circuit
[Figure: the same input applied to the good circuit and to the faulty circuit
(with the sa1 fault) produces different outputs, e.g. 0/1.]
• Alternatively, use a multi-valued algebra of signal values that covers both
  the good and the faulty circuit.
Function of NAND Gate
Five-valued algebra for c = NAND(a, b), where D = 1/0 and D' = 0/1:

              Input a
            0    1    X    D    D'
Input b
     0      1    1    1    1    1
     1      1    0    X    D'   D
     X      1    X    X    X    X
     D      1    D'   X    D'   1
     D'     1    D    X    1    D
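
The same table can be captured compactly in code by carrying a (good-circuit, faulty-circuit) bit pair per signal. The encoding below is one possible sketch, not part of the original slides.

# 0 = (0,0), 1 = (1,1), D = (1,0), D' = (0,1); X handled separately
VALS = {"0": (0, 0), "1": (1, 1), "D": (1, 0), "D'": (0, 1)}
NAMES = {v: k for k, v in VALS.items()}

def nand5(a, b):
    """Five-valued NAND, matching the table above."""
    if a == "X" or b == "X":
        # NAND is 1 whenever either input is 0 in both circuits; otherwise unknown
        return "1" if (a == "0" or b == "0") else "X"
    ga, fa = VALS[a]
    gb, fb = VALS[b]
    return NAMES[(1 - (ga & gb), 1 - (fa & fb))]

print(nand5("D", "1"))    # D'
print(nand5("D", "D'"))   # 1
print(nand5("X", "0"))    # 1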
Definitions
• Line Justification: Changing inputs of a gate if the
present input values do not justify the output
value.
• Forward implication: Determination of the gate
output value, which is X, according to the input
values.
• Consistency check: Verifying that the gate output
is justifiable from the values of inputs, which may
have changed since the output was determined.
• D-frontier: Set of gates whose inputs have a D or D' and whose output is X.
D-Algorithm
• Use the D-algebra
• Activate the fault
  • Place a D or D' at the fault site
• Do justification, forward implication and consistency check for all signals
• Repeatedly propagate the D-chain toward the POs through a gate
  • Do justification, forward implication and consistency check for all signals
• Backtrack if
  • A conflict occurs, or
  • The D-frontier becomes a null set
• Stop when
  • A D or D' reaches a PO, i.e., a test is found, or
  • The search is exhausted without a test – then no test is possible
Definition: Singular Cover
• A singular cover defines the least restrictive inputs for a deterministic
  output value.
• Used for:
  • Line justification: determine gate inputs for a specified output.
  • Forward implication: determine the gate output.

Singular covers of a NAND gate (inputs a, b; output c):
          a   b   c
  SC-1    0   X   1
  SC-2    X   0   1
  SC-3    1   1   0

Example (cube intersection): XX0 ∩ 110 = 110
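
The cube intersection used here (and again in the D-algorithm steps below) is easy to state in code. The sketch assumes cubes written as strings over {0, 1, X}.

def intersect(cube_a, cube_b):
    """Return the intersection of two cubes, or None if they conflict."""
    result = []
    for a, b in zip(cube_a, cube_b):
        if a == "X":
            result.append(b)
        elif b == "X" or a == b:
            result.append(a)
        else:
            return None          # 0 meets 1: the cubes are incompatible
    return "".join(result)

print(intersect("XX0", "110"))   # 110
print(intersect("XX1", "0X1"))   # 0X1 (justifying c=1 via SC-1)
print(intersect("1X0", "0X1"))   # None (conflict)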
Definition: D-Cubes
• D-cubes are singular covers with five-valued signals.
• Used for the D-drive (propagation of D through gates) and for forward
  implication.

Propagation D-cubes of a NAND gate (inputs a, b; output c):
          a    b    c
  D-1     D    1    D'
  D-2     1    D    D'
  D-3     D'   1    D
  D-4     1    D'   D
  D-5     D    D    D'
  D-6     D'   D'   D
  D-7     D    0    1
  D-8     0    D    1
  D-9     D    D'   1
  D-10    D'   D    1

Examples (cube intersection): XDX ∩ 1DD' = 1DD'
                              0DX ∩ 0D1 = 0D1
                              DD'X ∩ DD'1 = DD'1
An Example: XOR
[Figure: an XOR circuit built from NAND gates; input a fans out to branches a1
and a2, input b to b1 and b2; internal signal c fans out to c1 and c2; the
remaining internal signals are d and e, and the output is f.]

Find tests for: c sa0, c1 sa0, c2 sa0
Test-Detect: XOR, Test (0,1)
• Determine the good-circuit signal values.
• For each fault:
  • Place a D or D' at the fault site
  • Perform forward implications
  • The fault is detected if any PO assumes a D or D' value
[Figure: the XOR circuit with test (a,b) = (0,1).
For c1 sa0: 0DX ∩ 0D1 = 0D1 leaves a null D-frontier, so c1 sa0 is not detected.
For c2 sa0: D1X ∩ D1D = D1D, then 1DX ∩ 1DD = 1DD places a D at the PO, so
c2 sa0 is detected.]
XOR: Test for c sa0
[Figure: the XOR circuit with signals a, a1, a2, b, b1, b2, c, c1, c2, d, e, f.]

   Action                  Operation                                       D-frontier
1. Activate fault          c=1 or c=c1=c2=D                                d, e
2. Justify c=1             XX1 ∩ 0X1 = 0X1, a=a1=a2=0                      d, e
3. Forward impl. a2=0      0DX ∩ 0D1 = 0D1, d=1                            e
4. Forward impl. d=1       1XX ∩ XXX = 1XX, no implication possible        e
5. D-drive c2 → e          DXX ∩ D1D = D1D, b2=b=b1=1, e=D                 f
6. Forward impl. b1=1      011 ∩ 0X1 = 011, consistency checked            f
7. D-drive e → f           1DX ∩ 1DD = 1DD, f=D                            PO
8. Stop, test found        Test: (a,b) = (0, 1), f = 1
Complexity of D-Alg
• Signal values on all lines (PIs and internal lines) are manipulated using the
  5-valued algebra.
• The worst-case number of signal combinations that may be tried is 5^#lines
  • For the XOR circuit, 5^12 = 244,140,625.
• Podem: a reduced-complexity ATPG algorithm
  • Recognizes that internal signals depend on PIs.
  • Only PIs are independent variables and should be manipulated.
  • Because faults are internal, a PI can assume only 3 values (0, 1, X).
  • Worst-case combinations = 3^#PI; for the XOR circuit, 3^2 = 9.
Podem (Goel, 1981)
• Podem: Path oriented decision making
• Step 1: Define an objective (fault activation, D-drive, or line
justification)
• Step 2: Backtrace from site of objective to PIs (use
testability measure guidance) to determine a value for a PI
• Step 3: Simulate logic with new PI value
• If objective not accomplished but is
possible, then continue backtrace to
another PI (step 2)
• If objective accomplished and test not
found, then define new objective (step 1)
• If objective becomes impossible, try
alternative backtrace (step 2)
• Use X-PATH-CHECK to test whether D-frontier still there –
a path of X’s from a D-frontier to a PO must exist.
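
The backtrace of step 2 is the heart of Podem. The sketch below illustrates it for a made-up NAND-only netlist with hypothetical SCOAP controllabilities; the circuit, the CC numbers and the names are illustrative assumptions, not the example circuit from these slides, and a real Podem implementation also tracks current signal values and the D-frontier.

# Each gate: output name -> (gate type, list of input names); PIs have no entry.
GATES = {
    "d": ("NAND", ["a", "b"]),
    "e": ("NAND", ["b", "c"]),
    "f": ("NAND", ["d", "e"]),
}
PIS = {"a", "b", "c"}

# Hypothetical (CC0, CC1) used only to pick the "best" input during backtrace.
CC = {"a": (1, 1), "b": (1, 1), "c": (1, 1), "d": (2, 3), "e": (2, 3), "f": (3, 4)}

def backtrace(line, value):
    """Walk an objective (line, value) back to a primary-input assignment."""
    while line not in PIS:
        gate, inputs = GATES[line]
        if gate == "NAND":
            if value == 0:
                # Output 0 needs all inputs at 1: take the hardest input to set to 1
                line = max(inputs, key=lambda x: CC[x][1])
                value = 1
            else:
                # Output 1 needs any input at 0: take the easiest input to set to 0
                line = min(inputs, key=lambda x: CC[x][0])
                value = 0
    return line, value

print(backtrace("f", 0))   # ('a', 0): try setting input a to 0 to work toward f = 0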
XOR Example Again
Compute SCOAP testability measures, written as (CC0, CC1) CO:
[Figure: the XOR circuit annotated with the measures – the inputs are (1,1)6,
internal lines (4,2)3 and (3,2)5, fanout branches with CO values of 5, 6 or 7,
and the output (5,5)0.]
Podem: Objective and Backtrace
1. Objective 1: set the fault site (sa0) to 1
2 & 3. Backtrace to a PI and simulate
[Figure: the X-path check fails, so back up – erase the effects of steps 2 & 3
and try an alternative backtrace.]

Podem: Back up
4 & 5. Alternative backtrace to a PI and simulate
[Figure: the X-path check now succeeds – objective 1 is achieved.]

Podem: D-Drive
4. Objective 2: D-drive – set the line to 1
5. Backtrace to a PI and simulate
[Figure: the D value reaches the PO – test found.]
Another Podem Example
First find the (CC0, CC1) CO values of this circuit.
[Figure: circuit with an s-a-1 fault; the line feeding the fault site has
measures (9, 2).]

Another Podem Example
1. Objective “0”   2. Backtrace “A=0”   3. Logic simulation for A=0
4. Objective possible but not accomplished
[Figure: circuit after simulating A=0.]

Podem Example (Cont.)
1. Objective “0”   5. Backtrace “B=0”   6. Logic simulation for A=0, B=0
7. Objective possible but not accomplished
[Figure: circuit after simulating A=0, B=0.]

Podem Example (Cont.)
1. Objective “0”   8. Backtrace “E=0”   9. Logic simulation for E=0
10. Objective possible but not accomplished
[Figure: circuit after simulating E=0.]

Podem Example (Cont.)
1. Objective “0”   11. Backtrace “D=0”   12. Logic simulation for D=0
13. Objective accomplished
[Figure: circuit after simulating D=0.]
An ATPG System
[Flow chart: a random pattern generator feeds a fault simulator; patterns are
saved while fault coverage keeps improving. When random patterns are no longer
effective, deterministic ATPG (D-algorithm or Podem) takes over. Vectors are
then compacted, and the loop stops once coverage is sufficient.]
Random-Pattern Generation
• Easily gets tests for 60-80% of the faults
• Then switch to the D-algorithm, Podem, or another ATPG method
Vector Compaction
• Objective: Reduce the size of test vector set
without reducing fault coverage.
• Simulate faults with test vectors in reverse
order of generation
• ATPG patterns go first
• Randomly-generated patterns go last (because they
may have less coverage)
• When coverage reaches 100% (or the original maximum
value), drop remaining patterns
• Significantly shortens test sequence – testing
cost reduction.
• Fault simulator is frequently used for
compaction.
• Many recent (improved) compaction
algorithms.
– Static & Dynamic compaction
Static and Dynamic Compaction of
Sequences
• Static compaction
• ATPG should leave unassigned inputs as X
• Two patterns compatible – if no conflicting values
for any PI
• Combine two tests ta and tb into one test tab = ta ∩ tb
using intersection
• Detects union of faults detected by ta and tb
• Dynamic compaction
• Process every partially-done ATPG vector
immediately
• Assign 0 or 1 to PIs to test additional faults
Compaction Example
• t1 = 0 1 X    t2 = 0 X 1
  t3 = 0 X 0    t4 = X 0 1
• Combine t1 and t3, then t2 and t4
• Obtain:
  – t13 = 0 1 0    t24 = 0 0 1
• Test length shortened from 4 to 2
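
A minimal sketch of static compaction by merging compatible test cubes, applied to the t1..t4 example above ('X' marks an unassigned input). Note that a greedy first-fit merge in generation order ends up with three vectors here, while the pairing chosen above (t1 with t3, t2 with t4) reaches the optimum of two – the result depends on the merging order.

def compatible(t_a, t_b):
    return all(a == b or a == "X" or b == "X" for a, b in zip(t_a, t_b))

def merge(t_a, t_b):
    return "".join(b if a == "X" else a for a, b in zip(t_a, t_b))

def static_compact(tests):
    """Greedy pass: merge each test into the first compatible compacted test."""
    compacted = []
    for t in tests:
        for i, c in enumerate(compacted):
            if compatible(c, t):
                compacted[i] = merge(c, t)
                break
        else:
            compacted.append(t)
    return compacted

print(static_compact(["01X", "0X1", "0X0", "X01"]))   # ['011', '0X0', 'X01']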
Summary
• Most combinational ATPG algorithms use D-algebra.
• D-Algorithm is a complete algorithm:
• Finds a test, or
• Determines the fault to be redundant
• Complexity is exponential in circuit size
• Podem is another complete algorithm:
  • Works on primary inputs – its search space is smaller than that of the
    D-algorithm
  • Exponential complexity, but several orders of magnitude faster than the
    D-algorithm
• More efficient algorithms are available – FAN, Socrates, etc.
• See M. L. Bushnell and V. D. Agrawal, Essentials of Electronic Testing
  for Digital, Memory and Mixed-Signal VLSI Circuits, Springer, 2000,
  Chapter 7.
BITS Pilani
Pilani|Dubai|Goa|Hyderabad
MEMORY TESTING
452
BITS Pilani
Pilani|Dubai|Goa|Hyderabad
APPENDIX
498