An Introduction to Mixed-Signal
IC Test and Measurement
GORDON ROBERTS
FRIEDRICH TAENZLER
MARK BURNS
Since the introduction of the first edition of this textbook in 2001, much change has occurred
in the semiconductor industry, especially with the proliferation of complex semiconductor
devices containing digital, analog, mixed-signal, and radio-frequency (RF) circuits. The integra-
tion of these four circuit types has created many new business opportunities, but at the same time
made the economics of test much more significant. Today, product costs are divided among sili-
con, test, and packaging in various proportions, depending on the maturity of the product as well
as the technical skill of the engineering teams. In some market segments, we are seeing packaging
costs dominate product costs; in other market segments, device packaging is being done away
with entirely, because the bare die is mounted directly on the carrier substrate. While this helps to
address packaging costs, it puts greater pressure on test costs, because test now becomes a larger
contributor to the overall product cost.
Analog, mixed-signal, and RF IC test and measurement have grown into a highly specialized
field of electrical engineering. For the most part, analog and mixed-signal test engineering
is handled by one team of specialists, while RF test is handled by another. The skill sets required
to master these two technical areas are quite different, because one involves the particle (electron)
perspective of physics whereas the other involves the wave perspective. Nonetheless, these two
technical areas have much in common, such as the need to accurately measure a signal in the pres-
ence of noise and distortion in a time-sensitive manner, albeit over vastly different dynamic range
and power levels.
The goal of the first edition of this textbook was to create a source of information about analog
and mixed-signal automated test and measurement as it applies to ICs. At the time, little information
was available for the test engineer. One important source was the textbook by M. Mahoney,
the pioneer of DSP-based test techniques; however, that book was largely limited to the coherency
principles of DSP-based testing, and it did not discuss the system-level tradeoffs of test engineering,
nor did it discuss the practical issues related to test interfacing.
Based on the feedback that we received, the first edition of this textbook has been a source
of inspiration to many, especially those new to the test field. The industry has seen a great deal of
change since the release of the first edition almost a decade ago. RF circuits now play a larger part
in many of the devices and systems created today. It is clear to us that engineers need to be fluent
in all four circuit types: digital, analog, mixed-signal, and RF, although we limit our discussion in
this textbook to the latter three, as digital test is a subject in itself. We do not believe we could
do this topic justice in the amount of space remaining after discussing analog, mixed-signal, and
RF test. We encourage our readers to learn as much as possible about digital test as they will most
certainly encounter such techniques during their career in test (or design for that matter).
The prerequisite for this book remains at a junior or senior university level; it is assumed that
students reading this textbook will have taken courses in linear continuous-time and discrete-time
systems, fields, and waves, as well as having had exposure to probability and statistical concepts.
In fact, the three greatest changes made to the second edition of this textbook are lengthy
discussions on RF circuits, high-speed I/Os, and probabilistic reasoning. Over the years, it has become
quite clear that test, application, and product engineers make extensive use of probability theory
in their day-to-day work, leading us to believe that it was necessary to increase the amount of
probability coverage contained in this textbook. By doing so, we could define the concept of noise
more rigorously and study its effects on a measurement more concisely. These ideas will be used
throughout the textbook to help convey the limitations of a measurement.
generators in a mixed-signal tester. ADC sampling theory is applicable to both (a) ADC circuits
in the device under test and (b) waveform digitizers in a mixed-signal tester. Coherent multitone
sample sets are also introduced as a first step toward DSP-based testing.
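The coherency idea behind DSP-based testing can be sketched numerically. In the illustration below (the sample rate, record length, and tone frequency are assumed values for the sketch, not figures from the book), the number of tone cycles M is rounded to an integer mutually prime with the record length N, so the test tone lands exactly on an FFT bin and every sample hits a unique phase of the tone:

```python
from math import gcd

fs = 16_000_000        # assumed digitizer sample rate, Hz
n_samples = 4096       # FFT record length (power of 2)
f_desired = 1_000_000  # desired test tone frequency, Hz

# Coherent sampling: capture an integer number of tone cycles M in the
# record, with M mutually prime to N so sample phases do not repeat.
m = round(f_desired * n_samples / fs)
while gcd(m, n_samples) != 1:
    m += 1
f_test = m * fs / n_samples   # actual coherent tone frequency
print(m, f_test)  # 257 cycles -> 1003906.25 Hz
```

The tone frequency is adjusted slightly (here from 1.000000 MHz to about 1.003906 MHz) rather than windowing the record, which is the essential tradeoff of coherent testing.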
Chapter 9 further develops sampling theory concepts and DSP-based testing methodologies,
which are at the core of many mixed-signal test and measurement techniques. FFT fundamentals,
windowing, frequency domain filtering, and other DSP-based testing fundamentals are covered
in Chapters 8 and 9.
Chapter 10 shows how basic AC channel tests can be performed economically using DSP-
based testing. This chapter covers only nonsampled channels, consisting of combinations of oper-
ational amplifiers, analog filters, PGAs, and other continuous-time circuits.
Chapter 11 explores many of these same tests as described in Chapter 10 as they are applied
to sampled channels, which include DACs, ADCs, sample and hold (S/H) amplifiers, and so on.
The principle of undersampling under coherent conditions is also discussed in this chapter.
Chapters 12 and 13 are two chapters related to RF testing. They are both new to the second
edition of this textbook.
Chapter 12 begins by introducing the reader to the concept of a propagating wave and the
various means by which a wave is quantified (e.g., power, wavelength, velocity, etc.). Included
in this discussion are the amplitude and phase noise impairments that an RF wave experiences in
transmission. A portion of this chapter is devoted to the concept of S-parameters as it applies to
an n-port network, such as a two-port network. S-parameters are used to describe the small-signal
performance of the network as seen from one port to another. Measures like reflection coefficient,
mismatch loss, insertion and transducer loss, and various power gains can easily be defined in
terms of these S-parameters. Moreover, the idea of a mismatch uncertainty and its impact on a
power measurement is introduced. Mismatch uncertainty is often the most significant contributor
to measurement error in an RF test. Several forms of modulation, including analog modulation
schemes like AM and PM, followed by several digital modulation schemes, such as ASK, PSK,
and QAM used primarily for digital communication systems, are described.
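As a generic illustration of the S-parameter-derived measures named above (the 50 Ω reference and 75 Ω load values are assumed for the sketch), the reflection coefficient and mismatch loss of a simple resistive termination can be computed as:

```python
from math import log10

Z0 = 50.0   # reference impedance, ohms (typical RF system value)
ZL = 75.0   # assumed load impedance, ohms

# Reflection coefficient of the load against the reference impedance;
# for a one-port termination this equals S11.
gamma = (ZL - Z0) / (ZL + Z0)

# Mismatch loss: power lost to reflection, expressed in dB.
mismatch_loss_db = -10 * log10(1 - gamma**2)
print(f"gamma = {gamma:.3f}, mismatch loss = {mismatch_loss_db:.3f} dB")
```

Even this modest 50/75 Ω mismatch reflects 4% of the incident power, which hints at why mismatch uncertainty dominates many RF power measurements.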
Chapter 13 describes the principles of RF testing of electronic circuits using commercial
ATE. These principles are based on the physical concepts related to wave propagation outlined
in Chapter 12 combined with the DSP-based sampling principles of Chapters 8–11. This chap-
ter describes the most common types of RF tests using ATE for standard devices, like mixers,
VCOs, and power amplifiers. Issues relating to dynamic range, maximum power, noise floor, and
phase noise are introduced. Measurement errors introduced by the device interface board due to
impedance mismatches, transmission losses, etc., are described, together with de-embedding tech-
niques to compensate for these errors. Also included is a discussion of ATE measurements using
directional couplers. These directional couplers enable the measurement of the S-parameters of a
device through the direct measurement of the incident and reflected waves at the input and output
port of a DUT. Various noise figure measurements (Y-factor and cold noise methods) using an
ATE are outlined. Finally, the chapter concludes with a discussion on using an ATE to measure
more complex RF system parameters like EVM, ACPR, and BER.
Chapter 14, also new to the second edition, outlines the test techniques and metrics
used to quantify the behavior of clocks and serial data communication channels. The chapter
begins by describing several measures of clock behavior in both the time and frequency domains.
In the time domain, measures like instantaneous jitter, period jitter, and cycle-to-cycle jitter are
described. In the frequency domain, the concept of phase noise will be derived from a power
spectral density description of the clock signal. Chapter 14 will then look at the test attributes
associated with communicating serial data over a channel. For the most part, this becomes a measure of
the bit error rate (BER). However, because BER is a very time-consuming measurement to make,
methods to extract estimates of the BER in a short period of time are described. These involve a
model of channel behavior that is based on parameters known as deterministic jitter (DJ) and random
jitter (RJ). Finally, the chapter will discuss jitter transmission tests such as jitter transfer and jitter
tolerance.
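A rough sketch shows why direct BER measurement is so time-consuming. Using the standard zero-error confidence bound (the 10 Gb/s bit rate and 95% confidence level are assumptions for this illustration, not values from the chapter):

```python
from math import log

ber_target = 1e-12   # specified bit error rate to verify
confidence = 0.95    # desired statistical confidence level
bit_rate = 10e9      # assumed serial link rate, bits per second

# With zero observed errors, verifying BER <= ber_target at the given
# confidence requires roughly n >= -ln(1 - confidence) / ber_target bits
# (about 3/BER at 95% confidence).
n_bits = -log(1 - confidence) / ber_target
test_time_s = n_bits / bit_rate
print(f"{n_bits:.2e} bits, {test_time_s:.0f} s")
```

Roughly 3 × 10^12 bits, or about five minutes of pure data transmission per device, which is why the estimation methods described in the chapter matter.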
Chapter 15 explores the gray art of mixed-signal DIB design. Topics of interest include com-
ponent selection, power and ground layout, crosstalk, shielding, transmission lines, power match-
ing networks, and tester loading. Chapter 15 also illustrates several common DIB circuits and their
use in mixed-signal testing.
Chapter 16 gives a brief introduction to some of the techniques for analog and mixed-signal
design for test. There are fewer structured approaches for mixed-signal DfT than for purely digital
DfT. The more common ad hoc methods are explained, as well as some of the industry standards
such as IEEE Std. 1149.1, 1149.4, and 1500.
related to impedance matching circuits and Smith Charts™. The chapter on DfT was expanded to
include a discussion on DfT for RF circuits.
In summary, the changes made are:
• Expanded use of probabilistic approach to problem description.
• Windowing for noncoherent sampling.
• Undersampling and reconstruction using modulo-time shuffling algorithm.
• New chapter on fundamentals of RF testing.
• New chapter on RF test methods.
• New chapter on clock and serial data communications channel measurements.
• Added several sections on RF load board design.
• Hundreds of new examples, exercises, and problems.
A NOTE OF THANKS
First and foremost, a special note of thanks to Mark Burns for starting this project while at Texas
Instruments and seeing the first edition of this book through to publication. Mark has since retired from
the industry and is enjoying life far away from test. It was a great pleasure working with Mark.
We would like to extend our sincere appreciation to the many people who assisted us with this
development, beginning with those who assisted with the first edition, followed by those who contributed
to the second edition.
First Edition
The preliminary versions of the first edition were reviewed by a number of practicing test engi-
neers. We would like to thank those who gave us extensive corrections and feedback to improve
the textbook:
Steve Lyons, Lucent Technologies
Jim Larson, Teradyne, Inc.
Gary Moraes, Teradyne, Inc.
Justin Ewing, Texas A&M University/Texas Instruments, Inc.
Pramodchandran Variyam, Georgia Tech/Texas Instruments, Inc./Anora LLC.
Geoffrey Zhang, formerly of Texas Instruments, Inc.
We would also like to extend our sincere appreciation to the following for their help in developing
this textbook:
Dr. Rainer Fink, Texas A&M University
Dr. Jay Porter, Texas A&M University
Dr. Cajetan Akujuobi, Prairie View A&M University
Dr. Simon Ang, University of Arkansas
Their early adoption of this work at their respective universities has helped to shape the book’s
content and expose its many weaknesses.
We also thank Juli Boman (Teradyne, Inc.) and Ted Lundquist (Schlumberger Test Equipment)
for providing photographs for Chapter 1.
We are extremely grateful to the staff at Oxford University Press, who have helped guide us
through the process of writing an enjoyable book. First, we would like to acknowledge the help
and constructive feedback of the publishing editor, Peter Gordon. The editorial development help
of Karen Shapiro was greatly appreciated.
Finally, on behalf of the test engineering profession, Mark Burns would like to extend his
gratitude to
Del Whittaker, formerly of Texas Instruments, Inc.
David VanWinkle, formerly of Texas Instruments, Inc.
Bob Schwartz, formerly of Texas Instruments, Inc.
Ming Chiang, formerly of Texas Instruments, Inc.
Brian Evans, Texas Instruments, Inc.
for allowing him to develop this book as part of his engineering duties for the past three years. It
takes great courage and vision for corporate management to expend resources on the production
of a work that may ultimately help the competition.
Second Edition
The preliminary versions of the second edition were reviewed by a number of practicing test
engineers and professors. We would like to thank those who gave us extensive corrections and
feedback to improve the textbook, specifically the following individuals:
Brice Achkir, Cisco Systems Inc.
Rob Aitken, ARM
Benjamin Brown, LTX-Credence Corporation
Cary Champlin, Blue Origin LLC
Ray Clancy, Broadcom Corporation
William DeWilkins, Freescale Semiconductor
Rainer Fink, Texas A&M University
Richard Gale, Texas Tech University
Michael Purtell, Intersil Corporation
Jeff Rearick, Advanced Micro Devices
Tamara Schmitz, Intersil Corporation
Robert J. Weber, Iowa State University
We are extremely grateful to the staff members at Oxford University Press, who have helped guide
us through the process of writing the second edition of this book. First, we would like to acknowl-
edge the help and constructive feedback of our editors: Caroline DiTullio, Claire Sullivan, and
Rachael Zimmerman.
Gordon Roberts would like to extend his sincere appreciation to all the dedicated staff mem-
bers and graduate students associated with the Integrated Microsystem Laboratory (formerly the
Microelectronics and Computer System Laboratory) at McGill University. Without their desire to
learn and ask thought-provoking questions, this textbook would have been less valuable to the stu-
dents that follow them. For this, I am thankful. More importantly, though, is how the excitement
of learning new things has simply made life more enjoyable. For this, I am eternally grateful. A
listing of the students who have made contributions to these works is:
Sadok Aouini
Dong (Hudson) An
Christopher Taillefer
Mouna Safi-Harb
Mohammad Ali-Bakhshian
Mourad Oulmane
Tsung-Yen Tsai
Ali Ameri
Azhar Chowdhury
Shudong Lin
Marco Macedo
George Gal
Kun Chuai
Michael Guttman
Euisoo Yoo
Simon Hong
Tarek Alhajj
Gordon Roberts would like to express his love and thanks to his family (Brigid, Sean, and
Eileen) for their unequivocal support over the course of this project. Their understanding of the
level of commitment required to undertake such a large project was essential to the success
of this work. Without their support, this project would not have come to completion. For that, all
those who learn something from this book owe each one of you some level of gratitude.
In a similar fashion, Friedrich Taenzler would like to express his sincere gratitude to the
colleagues at Texas Instruments he has had the chance to work with, discussing and learning about
multiple aspects of test engineering. A special thanks to those colleagues who gave extensive
corrections and feedback to improve this textbook: Ganesh Srinivasan, Kausalya Palavesam, and
Elida de-Obaldia. Friedrich Taenzler also extends his appreciation to the ATE and test equipment
vendors he has had the chance to work with while learning the broader view of production tester
implementation.
Finally, and most important, Friedrich would like to thank his family, his wife Claudia and
his sons Phillip and Ferdinand, for their tremendous support and understanding during the time
devoted to this book. Needless to say, without their help and tolerance, this book would have
never been completed.
Gordon W. Roberts Friedrich Taenzler Mark Burns
McGill University Texas Instruments, Inc. formerly of Texas Instruments, Inc.
Montreal, Quebec, Canada Dallas, Texas, USA Dallas, Texas, USA
circuitry coexisting on the same die or circuit board. The line between mixed-signal circuits and
analog or digital circuits is blurry if one wants to be pedantic.
Fortunately, the blurry lines between digital, analog, and mixed-signal are completely irrel-
evant in the context of mixed-signal test and measurement. Most complex mixed-signal devices
include at least some stand-alone analog circuits that do not interact with digital logic at all. Thus,
the testing of op amps, comparators, voltage references, and other purely analog circuits must be
included in a comprehensive study of mixed-signal testing. This book encompasses the testing of
both analog and mixed-signal circuits, including many of the borderline examples. Digital test-
ing will only be covered superficially, since testing of purely digital circuits has been extensively
documented elsewhere.2–4
conversion architecture, whereas a 100-MHz video ADC may have to employ a much faster
flash conversion architecture. The weaknesses of these two architectures are totally different.
Consequently, the testing of these two converter types is totally different. Similar differences
exist between the various types of DACs.
Another common mixed-signal circuit is the phase locked loop, or PLL. PLLs are typically
used to generate high-frequency reference clocks or to recover a synchronous clock from an asyn-
chronous data stream. In the former case, the PLL is combined with a digital divider to construct
a frequency multiplier. A relatively low-frequency clock, say, 50 MHz, is then multiplied by an
integer value to produce a higher-frequency master clock, such as 1 GHz. In the latter case, the
recovered clock from the PLL is used to latch the individual bits or bytes of the incoming data
stream. Again, depending on the nature of the PLL design and its intended use, the design weak-
nesses and testing requirements can be very different from one PLL to the next.
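The integer-N frequency multiplication described above is simple arithmetic; a minimal sketch using the 50 MHz and 1 GHz figures from the text:

```python
f_ref = 50e6     # relatively low-frequency reference clock, Hz
f_target = 1e9   # desired higher-frequency master clock, Hz

# In an integer-N PLL frequency multiplier, the loop locks the divided
# output back to the reference, forcing f_out = N * f_ref.
n_div = round(f_target / f_ref)   # feedback divider value
f_out = n_div * f_ref
print(n_div, f_out)  # 20 1000000000.0
```

Here the divider value of 20 multiplies the 50 MHz reference up to the 1 GHz master clock.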
[Figure 1.2 (partial): Block diagram of a digital cellular telephone, showing the microphone, earpiece, display, and keyboard, the control microprocessor, the frequency synthesizer, and the base stations.]
each cellular area. The control microprocessor selects the incoming and outgoing transmission
frequencies by sending control signals to the frequency synthesizer. The synthesizer often con-
sists of several PLLs, which control the mixers in the radio-frequency (RF) section of the cellular
telephone. The mixers convert the relatively low-frequency signals of the base-band interface to
extremely high frequencies that can be transmitted from the cellular telephone’s radio antenna.
They also convert the very high-frequency incoming signals from the base station into lower-
frequency signals that can be processed by the base-band interface.
The voice-band interface, digital signal processor (DSP), and base-band interface perform
most of the complex operations. The voice-band interface converts the user’s voice into digital
samples using an ADC. The volume of the voice signal from the microphone can be adjusted
automatically using a programmable gain amplifier (PGA) controlled by either the DSP or the
control microprocessor. Alternatively, the PGA may be controlled with a specialized digital circuit
built into the voice-band interface itself. Either way, the PGA and automatic adjustment mecha-
nism form an automatic gain control (AGC) circuit. Before the voice signal can be digitized by
the voice-band interface ADC, it must first be low-pass filtered to avoid unwanted high-frequency
components that might cause aliasing in the transmitted signal. (Aliasing is a type of distortion
that can occur in sampled systems, making the speaker’s voice difficult to understand.) The digi-
tized samples are sent to the DSP, where they are compressed using a mathematical process called
vocoding. The vocoding process converts the individual samples of the sound pressure waves into
samples that represent the essence of the user’s speech. The vocoding algorithm calculates a time-
varying model of the speaker’s vocal tract as each word is spoken. The characteristics of the vocal
tract change very slowly compared to the sound pressure waves of the speaker’s voice. Therefore,
the vocoding algorithm can compress the important characteristics of speech into a much smaller
set of data bits than the digitized sound pressure samples. The vocoding process is therefore a
type of data compression algorithm that is specifically tailored for speech. The smaller number of
transmitted bits frees up airspace for more cellular telephone users. The vocoder’s output bits are
sent to the base-band interface and RF circuits for modulation and transmission. The base-band
interface acts like a modem, converting the digital bits of the vocoder output into modulated ana-
log signals. The RF circuits then transmit the modulated analog waveforms to the base station.
In the receiving direction, the process is reversed. The incoming voice data are received by
the RF section and demodulated by the base-band interface to recover the incoming vocoder bit
stream. The DSP converts the incoming bit stream back into digitized samples of the incoming
speaker’s voice. These samples are then passed to the DAC and low pass reconstruction filter of the
voice-band interface to reconstruct the voltage samples of the incoming voice. Before the received
voice signal is passed to the earpiece, its volume is adjusted using a second PGA. This earpiece
PGA is adjusted by signals from the control microprocessor, which monitors the telephone’s vol-
ume control buttons to determine the user’s desired volume setting. Finally, the signal must be
passed through a low-impedance buffer to provide the current necessary to drive the earpiece.
Several common cellular telephone circuits are not shown in Figure 1.2. These include DC
voltage references and voltage regulators that may exist on the voice-band interface or the base-
band processor, analog multiplexers to control the selection of multiple voice inputs, and power-on
reset circuits. In addition, a watchdog timer is often included to periodically wake the control
microprocessor from its battery-saving idle mode. This allows the microprocessor to receive infor-
mation such as incoming call notifications from the base station. Clearly, the digital cellular tele-
phone represents a good example of a complex mixed-signal system. The various circuit blocks of
a cellular telephone may be grouped into a small number of individual integrated circuits, called
a chipset, or they may all be combined into a single chip. The test engineer must be ready to test
the individual pieces of the cellular telephone and/or to test the cellular telephone as a whole. The
increasing integration of circuits into a single semiconductor die is one of the most challenging
aspects of mixed-signal test engineering.
Chapter 1 • Overview of Mixed-Signal Testing 5
[Figure: Simplified IC photolithography and fabrication steps — (a) mask over photoresist and SiO2 on a P- substrate; (b) photoresist exposure; (c) selective removal of photoresist; (d) oxide etch; (e) N-well doping; (f) finished IC showing the N-well, P+/N+ diffusions, polysilicon gate, Metal 1 and Metal 2 layers, vias, and protective overcoat (PO).]
textbook diagrams would lead us to believe. Cross sections of actual integrated circuits reveal a vari-
ety of nonideal physical characteristics that are not entirely under the semiconductor manufacturer’s
control. Certain characteristics, such as doping profiles that define the boundaries between P and N
regions, are not even visible in a cross-section view. Nevertheless, they can have a profound effect on
many important analog and mixed-signal circuit characteristics.
subtle problem is a partially connected via, which may exhibit an abnormally high contact resis-
tance. Depending on the amount of excess resistance, the results of a partially connected via can
range from minor DC offset problems to catastrophic distortion problems.
Figure 1.6 shows incomplete etching of the metal surrounding a circuit trace. Incomplete
etching can result in catastrophic shorts between circuit nodes. Finally, Figure 1.7 shows a
surface defect caused by particulate matter landing on the surface of the wafer or on a photo-
graphic mask during one of the processing steps. Again, this type of defect results in a short
between circuit nodes. Other catastrophic defects include surface scratches, broken bond wires,
and surface explosions caused by electrostatic discharge in a mishandled device. Defects such
as these are the reason each semiconductor device must be tested before it can be shipped to
the customer.
It has been said that production testing adds no value to the final product. Testing is an
expensive process that drives up the cost of integrated circuits without adding any new func-
tionality. Testing cannot change the quality of the individual ICs; it can only measure quality
if it already exists. However, semiconductor companies would not spend money to test prod-
ucts if the testing process did not add value. This apparent discrepancy is easily explained if
we recognize that the product is actually the entire shipment of devices, not just the individual
ICs. The quality of the product is certainly improved by testing, since defective devices are
not shipped. Therefore, testing does add value to the product, as long as we define the product
correctly.
[Figure: Basic production test flow — a test stimulus is applied to the DUT, the DUT response is evaluated as good or bad, and the device is binned as pass or fail.]
Sometimes the test engineer is also responsible for developing hardware and software that
modifies the structure of the semiconductor die to adjust parameters like DC offset and AC gain,
or to compensate for grotesque manufacturing defects. Despite claims that production testing
adds no value, this is one way in which the testing process can actually enhance the quality of the
individual ICs. Circuit modifications can be made in a number of ways, including laser trimming,
fuse blowing, and writing to nonvolatile memory cells.
The test engineer is also responsible for reducing the cost of testing through test time
reductions and other cost-saving measures. The test cost reduction responsibility is shared
with the product engineer. The product engineer’s primary role is to support the production
of the new device as it matures and proceeds to profitable volume production. The product
engineer helps identify and correct process defects, design defects, and tester hardware and
software defects.
Sometimes the product engineering function is combined with the test engineering function,
forming a single test/product engineering position. The advantage of the combined job function
is that the product engineering portion of the job can be performed with a much more thorough
understanding of the device and test program details. The disadvantage is that the product engi-
neering responsibilities may interfere with the ability of the engineer to become an expert on the
use of the complex test equipment. The choice of combined versus divided job functions is highly
dependent on the needs of each organization.
After the leads have been trimmed and formed, the devices are ready for final testing
on a second ATE tester. Final testing guarantees that the performance of the device did not
shift during the packaging process. For example, the insertion of plastic over the surface of
the die changes the electrical permittivity near the surface of the die. Consequently, trace-
to-trace capacitances are increased, which may affect sensitive nodes in the circuit. In addi-
tion, the injection-molded plastic introduces mechanical stresses in the silicon, which may
consequently introduce DC voltage shifts. Final testing also guarantees that the bond pads are
all connected and that the die was not cracked, scratched, or otherwise damaged in the pack-
aging process. After final testing, the devices are ready for shipment to the end-equipment
manufacturer.
Figure 1.11. Octal site device interface board (DIB) showing DUT sockets (left) and local circuits
with RF interface (right).
local circuits such as load circuits and buffer amplifiers that are often required for mixed-
signal device testing. Figure 1.11 illustrates the top and bottom sides of an octal site DIB. The
topside shown on the left displays eight DUT sockets, and the picture on the right shows the
local circuits and RF interface.
1.4.3 Handlers
Handlers are used to manipulate packaged devices in much the same way that probers are used
to manipulate wafers. Most handlers fall into two categories: gravity-fed and robotic. Robotic
handlers are also known as pick-and-place handlers. Gravity-fed handlers are normally used with
dual inline packages, while robotic handlers are used with devices having pins on all four sides or
pins on the underside (ball grid array packages, for example).
Either type of handler has one main purpose: to make a temporary electrical connection
between the DUT pins and the DIB board. Gravity-fed handlers often perform this task using a
contactor assembly that grabs the device pins from either side with metallic contacts that are in
turn connected to the DIB board. Robotic handlers usually pick up each device with a suction arm
and then plunge the device into a socket on the DIB board.
[Figure: Test head docked to a wafer prober.]
In addition to providing a temporary connection to the DUT, handlers are also responsible for
sorting the good DUTs from the bad ones based on test results from the ATE tester. Some handlers
also provide a controlled thermal chamber where devices are allowed to “soak” for a few minutes
so they can either be cooled or heated before testing. Since many electrical parameters shift with
temperature, this is an important handler feature.
device, DIB hardware, and software on the ATE tester. Any design problems are reported to the
design engineers, who then begin evaluating possible design errors. A second design pass is often
required to correct errors and to align the actual circuit performance with specification require-
ments. Finally, the corrected design is released to production by the product engineer, who then
supports the day-to-day manufacturing of the new product.
Of course, the idealized concurrent engineering flow is a simplification of what happens in a
typical company doing business in the real world. Concurrent engineering is based on the assump-
tion that adequate personnel and other resources are available to write test plans and generate
test hardware and software before the first silicon wafers arrive. It also assumes that only one
additional design pass is required to release a device to production. In reality, a high-performance
device may require several design passes before it can be successfully manufactured at a profit.
This flow also assumes that the market does not demand a change in the device specifications in
midstream, a poor assumption in a dynamic world. Nevertheless, concurrent engineering is
consistently much more effective than a disjointed development process with poor communication
between the various engineering groups.
finds that the test program results do not agree with measurements taken using bench equip-
ment in their lab. The test engineer must determine which answer is correct and why there is
a discrepancy. It is also common to find that two supposedly identical testers or DIB boards
give different answers or that the same tester gives different answers from day to day. These
problems frequently result from obscure hardware or software errors that may take days to
isolate. Correlation efforts can represent a major portion of the time spent debugging a test
program.
system feature, although duplicate tester instruments must be added to the tester to allow simulta-
neous testing on multiple DUT sites.
Clearly, production test economics is an extremely important issue in the field of mixed-
signal test engineering. Not only must the test engineer perform accurate measurements of mixed-
signal parameters, but the measurements must be performed as quickly as possible to reduce
production costs. Since a mixed-signal test program may perform hundreds or even thousands
of measurements on each DUT, each measurement must be performed in a small fraction of a
second. The conflicting requirements of low test time and high accuracy will be a recurring theme
throughout this book.
PROBLEMS
1.1. List four examples of analog circuits.
1.2. List four examples of mixed-signal circuits.
Problems 1.3–1.6 relate to the cellular telephone in Figure 1.2.
1.3. Which type of mixed-signal circuit acts as a volume control for the cellular telephone earpiece?
1.4. Which type of mixed-signal circuit converts the digital samples into the speaker’s voice?
1.5. Which type of mixed-signal circuit converts incoming modulated voice data into digital
samples?
1.6. Which type of digital circuit vocodes the speaker’s voice samples before they are passed to
the base-band interface?
1.7. When a PGA is combined with a digital logic block to keep a signal at a constant level,
what is the combined circuit called?
1.8. Assume a particle of dust lands on a photomask during the photolithographic printing pro-
cess of a metal layer. List at least one possible defect that might occur in the printed IC.
1.9. Why does the cleanliness of the air in a semiconductor fabrication area affect the number
of defects in IC manufacturing?
1.10. List at least four production steps after wafers have been fabricated.
1.11. Why would it be improper to draw conclusions about a design based on characterization
data from one or two devices?
1.12. List three main components of an ATE tester.
1.13. What is the purpose of a DIB board?
1.14. What type of equipment is used to handle wafers as they are tested by an ATE tester?
1.15. List three advantages of concurrent engineering.
1.16. What is the purpose of a test plan?
1.17. List at least four challenges faced by the mixed-signal test engineer.
1.18. Assume that a test program runs on a tester that costs the company 3 cents per second to
operate. This test cost includes tester depreciation, handler depreciation, electricity, floor
space, personnel, and so on. How much money can be saved per year by reducing a 5-s test
program to 3.5 s, assuming that 5 million devices per year are to be shipped. Assume that
only 90% of devices tested are good and that the average time to find a bad device drops to
0.5 s.
1.19. Assume that the profit margin on the device in Problem 1.18 is 20% (i.e., for each $1 worth
of devices shipped to the customer, the company makes a profit of 20 cents). How many
dollars worth of product would have to be shipped to make a profit equal to the savings
offered by the streamlined test program in Problem 1.18? If each device sells for $1.80,
how many devices does this represent? What obvious conclusion can we draw about the
importance of test time reduction versus the importance of selling and shipping additional
devices?
18 AN INTRODUCTION TO MIXED-SIGNAL IC TEST AND MEASUREMENT
CHAPTER 5
Yield, Measurement Accuracy, and Test Time
Testing is an important and essential phase in the manufacturing process of integrated circuits.
In fact, the only way that manufacturers can deliver high-quality ICs in a reasonable time is
through clever testing procedures. The IC manufacturing process involves three major steps: fabrication,
testing, and packaging. Today, manufacturing costs associated with mixed-signal ICs are
being dominated by the test phase (i.e., separating bad dies from good ones), although packaging
costs are becoming quite significant in some large ICs. In order to create clever test procedures,
one needs to have a clear understanding of the tradeoffs involved. In particular, the test engineer
needs to understand the needs of their business (making ICs for toys or for the automotive industry),
the cost of test, and the quality of the product produced. It is the intent of this chapter to outline
these tradeoffs, beginning with a discussion of manufacturing yield, followed by a discussion of
measurement accuracy, and then moving to a discussion of test time. As the needs of a business are
highly variable, we will make comments throughout this chapter where appropriate.
5.1 YIELD
The primary goal of a semiconductor manufacturer is to produce large quantities of ICs for sale
to various electronic markets, that is, cell phones, iPods, HDTVs, and so on. Semiconductor
factories are highly automated, capable of producing millions of ICs over a 24-hour period,
every day of the week. For the most part, these ICs are quite similar in behavior, although
some will be quite different from one another. A well-defined means to observe the behavior
of a set of large elements, such as ICs, is to categorize their individual behavior in the form
of a histogram, as shown in Figure 5.1. Here we illustrate a histogram of the offset voltage
associated with a lot of devices. We see that 15% of devices produced in this lot had an offset
voltage between −0.129 V and −0.128 V. We can conjecture that the probability of another lot
producing devices with an offset voltage in this same range is 15%. Of course, how confident
we are with our conjecture is the basis of all things statistical; we need to capture more data
to support our claim. This we will address shortly; for now, let us consider the “goodness” of
what we produced.
In general, the component data sheet defines the “goodness” of an analog or mixed-signal
device. As a data sheet forms the basis of any contract between a supplier and a buyer, we avoid
any subjective argument of why one measure is better or worse than another; it is simply a matter
of data sheet definition. Generally, the goodness of an analog and mixed-signal device is defined
by a range of acceptability, bounded by a lower specification limit (LSL) and an upper specifica-
tion limit (USL), as further illustrated in Figure 5.1. These limits would be found on the device
data sheet. Any device whose behavior falls outside this range would be considered as a bad
device. This particular example considers a device with a two-sided limit. Similarly, the same
argument applies to a one-sided limit; just a different diagram is used.
Testing is the process of separating good devices from the bad ones. The yield of a given lot
of material is defined as the ratio of the total good devices divided by the total devices tested:

yield = (total good devices / total devices tested) × 100%  (5.1)

If 10,000 parts are tested and only 7000 devices pass all tests, then the yield on that lot of 10,000
devices is 70%. Because testing is not a perfect process, mistakes are made, largely on account
of the measurement limitations of the tester, noise picked up at the test interface, and noise pro-
duced by the DUT itself. The most critical error that can be made is one where a bad device is
declared good, because this has a direct impact on the operations of a buyer. This error is known
as an escape. As a general rule, the impact that an escape has on a manufacturing process goes up
exponentially as it moves from one assembly level to another. Hence, the cost of an escape can
be many orders of magnitude greater than the cost of a single part. Manufacturers make use of
test metrics to gauge the goodness of the component screening process. One measure is the defect
level (DL) and it is defined as
Figure 5.1. Histogram showing specification limits and regions of acceptance and rejection.
Chapter 5 • Yield, Measurement Accuracy, and Test Time 129
DL = (total escapes / total devices declared good) × 100%  (5.3)
Exercises
5.1. If 15,000 devices are tested with a yield of 63%, how many devices
passed the test? ANS. 9450 devices.
5.2. A new product was launched with 100,000 sales over a one-year
time span. During this time, seven devices were returned to the
manufacturer even though an extensive test screening procedure
was in place. What is the defect level associated with this testing
procedure in parts per million? ANS. 70 ppm.
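Both metrics are simple ratios; as a quick numeric check, the following Python sketch (function names are our own) reproduces the 70% yield example and Exercises 5.1 and 5.2:

```python
def yield_fraction(good_devices, total_tested):
    """Lot yield: total good devices divided by total devices tested."""
    return good_devices / total_tested

def defect_level_ppm(escapes, declared_good):
    """Defect level of Eq. (5.3), expressed in parts per million."""
    return escapes * 1e6 / declared_good

# 10,000 parts tested, 7000 pass -> 70% yield (example from the text).
print(yield_fraction(7000, 10000))      # 0.7

# Exercise 5.1: 63% yield on 15,000 devices tested.
print(15000 * 63 // 100)                # 9450 devices passed

# Exercise 5.2: 7 escapes out of 100,000 devices declared good.
print(defect_level_ppm(7, 100000))      # 70.0 ppm
```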
According to these definitions, precision refers only to the repeatability of a series of mea-
surements. It does not refer to consistent errors in the measurements. A series of measurements
can be incorrect by 2 V, but as long as they are consistently wrong by the same amount, then the
measurements are considered to be precise.
This definition of precision is somewhat counterintuitive to most people, since the words
precision and accuracy are so often used synonymously. Few of us would be impressed by a
“precision” voltmeter exhibiting a consistent 2-V error! Fortunately, the word repeatability is far
more commonly used in the test-engineering field than the word precision. This textbook will use
the term accuracy to refer to the overall closeness of an averaged measurement to the true value
and repeatability to refer to the consistency with which that measurement can be made. The word
precision will be avoided.
Unfortunately, the definition of accuracy is also somewhat ambiguous. Many sources of error
can affect the accuracy of a given measurement. The accuracy of a measurement should probably
refer to all possible sources of error. However, the accuracy of an instrument (as distinguished
from the accuracy of a measurement) is often specified in the absence of repeatability fluctuations
and instrument resolution limitations. Rather than trying to decide which of the various error
sources are included in the definition of accuracy, it is probably more useful to discuss some of the
common error components that contribute to measurement inaccuracy. It is incumbent upon the
test engineer to make sure all components of error have been accounted for in a given specifica-
tion of accuracy.
101 mV, 103 mV, 102 mV, 101 mV, 102 mV, 103 mV, 103 mV, 101 mV, 102 mV . . .
This measurement series shows an average error of about 2 mV from the true value of 100
mV. Errors like this are caused by consistent errors in the measurement instruments. The errors
can result from a combination of many things, including DC offsets, gain errors, and nonideal
linearity in the DVM’s measurement circuits. Systematic errors can often be reduced through
a process called calibration. Various types of calibration will be discussed in more detail in
Section 5.4.
itself. If the source of error is found and cannot be corrected by a design change, then averag-
ing or filtering of measurements may be required. Averaging and filtering are discussed in more
detail in Section 5.6.
Figure 5.2. Output codes versus input voltages for an ideal 3-bit ADC.
5.2.5 Repeatability
Nonrepeatable answers are a fact of life for mixed-signal test engineers. A large portion of the time
required to debug a mixed-signal test program can be spent tracking down the various sources of
poor repeatability. Since all electrical circuits generate a certain amount of random noise, mea-
surements such as those in the 100-mV offset example are fairly common. In fact, if a test engi-
neer gets the same answer 10 times in a row, it is time to start looking for a problem. Most likely,
the tester instrument’s full-scale voltage range has been set too high, resulting in a measurement
resolution problem. For example, if we configured a meter to a range having a 10-mV resolution,
then our measurements from the prior example would be very repeatable (100 mV, 100 mV, 100
mV, 100 mV, etc.). A novice test engineer might think that this is a terrific result, but the meter
is just rounding off the answer to the nearest 10-mV increment due to an input ranging problem.
Unfortunately, a voltage of 104 mV would also have resulted in this same series of perfectly
repeatable, perfectly incorrect measurement results. Repeatability is desirable, but it does not in
itself guarantee accuracy.
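The ranging pitfall described above is easy to reproduce numerically. In this illustrative sketch (our own, not from the text), the noisy readings become perfectly repeatable, and wrong, once they are quantized to a 10-mV step:

```python
def quantize(reading_mv, step_mv):
    """Round a reading to the meter's resolution step, as a coarse range setting would."""
    return round(reading_mv / step_mv) * step_mv

# The noisy series of offset readings from the text (true value near 102 mV).
readings = [101, 103, 102, 101, 102, 103, 103, 101, 102]

fine   = [quantize(r, 1)  for r in readings]   # 1-mV resolution: scatter stays visible
coarse = [quantize(r, 10) for r in readings]   # 10-mV resolution: repeatable but rounded

print(fine)                # [101, 103, 102, 101, 102, 103, 103, 101, 102]
print(coarse)              # [100, 100, 100, 100, 100, 100, 100, 100, 100]
print(quantize(104, 10))   # 100 -- also repeatable, and also wrong, as in the text
```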
5.2.6 Stability
A measurement instrument’s performance may drift with time, temperature, and humidity. The
degree to which a series of supposedly identical measurements remains constant over time, tem-
perature, humidity, and all other time-varying factors is referred to as stability. Stability is an
essential requirement for accurate instrumentation.
Shifts in the electrical performance of measurement circuits can lead to errors in the tested
results. Most shifts in performance are caused by temperature variations. Testers are usually
equipped with temperature sensors that can automatically determine when a temperature shift has
occurred. The tester must be recalibrated anytime the ambient temperature has shifted by a few
degrees. The calibration process brings the tester instruments back into alignment with known
electrical standards so that measurement accuracy can be maintained at all times.
After the tester is powered up, the tester’s circuits must be allowed to stabilize to a constant
temperature before calibrations can occur. Otherwise, the measurements will drift over time as
the tester heats up. When the tester chassis is opened for maintenance or when the test head is
opened up or powered down for an extended period, the temperature of the measurement elec-
tronics will typically drop. Calibrations then have to be rerun once the tester recovers to a stable
temperature.
Shifts in performance can also be caused by aging electrical components. These changes
are typically much slower than shifts due to temperature. The same calibration processes used
to account for temperature shifts can easily accommodate shifts of components caused by
aging. Shifts caused by humidity are less common, but can also be compensated for by periodic
calibrations.
5.2.7 Correlation
Correlation is another activity that consumes a great deal of mixed-signal test program debug time.
Correlation is the ability to get the same answer using different pieces of hardware or software.
It can be extremely frustrating to try to get the same answer on two different pieces of equipment
using two different test programs. It can be even more frustrating when two supposedly identical
pieces of test equipment running the same program give two different answers.
Of course correlation is seldom perfect, but how good is good enough? In general, it is a
good idea to make sure that the correlation errors are less than one-tenth of the full range between
the minimum test limit and the maximum test limit. However, this is just a rule of thumb. The
exact requirements will differ from one test to the next. Whatever correlation errors exist, they
must be considered part of the measurement uncertainty, along with nonrepeatability and sys-
tematic errors.
The test engineer must consider several categories of correlation. Test results from a mixed-
signal test program cannot be fully trusted until the various types of correlation have been verified.
The more common types of correlation include tester-to-bench, tester-to-tester, program-to-pro-
gram, DIB-to-DIB, and day-to-day correlation.
Tester-to-Bench Correlation
Often, a customer will construct a test fixture using bench instruments to evaluate the quality of
the device under test. Bench equipment such as oscilloscopes and spectrum analyzers can help
validate the accuracy of the ATE tester’s measurements. Bench correlation is a good idea, since
ATE testers and test programs often produce incorrect results in the early stages of debug. In
addition, IC design engineers often build their own evaluation test setups to allow quick debug of
device problems. Each of these test setups must correlate to the answers given by the ATE tester.
Often the tester is correct and the bench is not. Other times, test program problems are uncovered
when the ATE results do not agree with a bench setup. The test engineer will often need to help
debug the bench setup to get to the bottom of correlation errors between the tester and the bench.
Tester-to-Tester Correlation
Sometimes a test program will work on one tester, but not on another presumably identical tester.
The differences between testers may be catastrophically different, or they may be very subtle. The
test engineer should compare all the test results on one tester to the test results obtained using
other testers. Only after all the testers agree on all tests is the test program and test hardware
debugged and ready for production.
Similar correlation problems arise when an existing test program is ported from one tester
type to another. Often, the testers are neither software compatible nor hardware compatible with
one another. In fact, the two testers may not even be manufactured by the same ATE vendor. A
myriad of correlation problems can arise because of the vast differences in DIB layout and tes-
ter software between different tester types. To some extent, the architecture of each tester will
determine the best test methodology for a particular measurement. A given test may have to be
executed in a very different manner on one tester versus another. Any difference in the way a
measurement is taken can affect the results. For this reason, correlation between two different test
approaches can be very difficult to achieve. Conversion of a test program from one type of tester
to another can be one of the most daunting tasks a mixed-signal test engineer faces.
Program-to-Program Correlation
When a test program is streamlined to reduce test time, the faster program must be correlated
to the original program to make sure no significant shifts in measurement results have occurred.
Often, the test reduction techniques cause measurement errors because of reduced DUT settling
time and other timing-related issues. These correlation errors must be resolved before the faster
program can be released into production.
DIB-to-DIB Correlation
No two DIBs are identical, and sometimes the differences cause correlation errors. The test engi-
neer should always check to make sure that the answers obtained on multiple DIB boards agree.
DIB correlation errors can often be corrected by focused calibration software written by the test
engineer (this will be discussed further in Section 5.4).
Day-to-Day Correlation
Correlation of the same DIB and tester over a period of time is also important. If the tester and
DIB have been properly calibrated, there should be no drift in the answers from one day to the
next. Subtle errors in software and hardware often remain hidden until day-to-day correlation is
performed. The usual solution to this type of correlation problem is to improve the focused cali-
bration process.
5.2.8 Reproducibility
The term reproducibility is often used interchangeably with repeatability, but this is not a correct
usage of the term. The difference between reproducibility and repeatability relates to the effects of
correlation and stability on a series of supposedly identical measurements. Repeatability is most
often used to describe the ability of a single tester and DIB board to get the same answer multiple
times as the test program is repetitively executed.
Reproducibility, by contrast, is the ability to achieve the same measurement result on a given
DUT using any combination of equipment and personnel at any given time. It is defined as the
statistical deviation of a series of supposedly identical measurements taken over a period of time.
These measurements are taken using various combinations of test conditions that ideally should
not change the measurement result. For example, the choice of equipment operator, tester, DIB
board, and so on, should not affect any measurement result.
Consider the case in which a measurement is highly repeatable, but not reproducible. In such
a case, the test program may consistently pass a particular DUT on a given day and yet consis-
tently fail the same DUT on another day or on another tester. Clearly, measurements must be both
repeatable and reproducible to be production-worthy.
If we repeat a sequence of measurements involving the same reference, we would obtain a set
of values that would in general be all different on account of the noise that is present. To eliminate
the effects of this noise, one could instead take the average value of a large number of samples as
the measurement. For instance, if we take the expected or average value of each side of Eq. (5.4),
we write

E[V_MEASURED] = E[V_REF + V_OFF + V_noise]  (5.5)

Recognizing that the expectation operation distributes across addition, we can write

E[V_MEASURED] = E[V_REF] + E[V_OFF] + E[V_noise]  (5.6)

Assuming that the noise process is normal with zero mean, together with the fact that V_REF and
V_OFF are constants, we find that the expected measured value becomes

E[V_MEASURED] = V_REF + V_OFF  (5.7)
As long as the sample set is large, then averaging will eliminate the effects of noise. However,
if the sample size is small, a situation that we often find in practice, then our measured value will
vary from one sample set to another. See, for example, the illustration in Figure 5.4a involving two
sets of samples. Here we see the mean values μM,1 and μM,2 are different. If we increase the total
number of samples collected to, say, N, we would find that the mean values of the two distributions
approach one another in a statistical sense. In fact, the mean of the means will converge to
V_REF + V_OFF with a standard deviation of σ_M/√N, as illustrated by the dashed-line distribution shown in
Figure 5.4b. We should also note that the distribution of means is indeed Gaussian as required by
the central limit theorem.
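The convergence of the sample means can be simulated directly. The sketch below uses assumed values (V_REF = 1 V, V_OFF = 2 mV, σ_M = 10 mV, N = 100) and shows the distribution of means centering on V_REF + V_OFF with a spread near σ_M/√N:

```python
import random
import statistics

random.seed(1)

V_REF, V_OFF = 1.000, 0.002      # volts (assumed values for illustration)
SIGMA_M, N   = 0.010, 100        # per-sample noise and samples per measurement

# Each "measurement" is the mean of N noisy samples of V_REF + V_OFF.
means = [
    statistics.fmean(random.gauss(V_REF + V_OFF, SIGMA_M) for _ in range(N))
    for _ in range(2000)
]

# The distribution of means centers on V_REF + V_OFF with spread ~ sigma_M / sqrt(N).
print(round(statistics.fmean(means), 3))    # close to 1.002
print(round(statistics.stdev(means), 3))    # close to 0.010 / sqrt(100) = 0.001
```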
Metrology (the science of measurement) is interested in quantifying the level of uncertainty
present in a measurement. Three terms from metrology are used in test to describe this uncer-
tainty: repeatability, accuracy, and bias.
Figure 5.3. Modeling a measurement made by a voltmeter with offset and noise component.
Assume that N samples are taken during some measurement process and that these samples
are assigned to vector x. The mean value of the measurement is quantified as

μ_M = (1/N) ∑_{k=1}^{N} x[k]  (5.8)

The repeatability of a measurement refers to the standard deviation associated with the measurement
set, that is,

σ_M = sqrt( (1/N) ∑_{k=1}^{N} (x[k] − μ_M)² )  (5.9)
For the example shown in Figure 5.4b, repeatability refers to the spread of the measurement sam-
ples about the sample mean value μM. The larger the spread, the less repeatable the measurement
will be. We can now define repeatability as the variation (quantified by σ M ) of a measurement
system obtained by repeating measurements on the same sample back-to-back using the same
measurement conditions.
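Equations (5.8) and (5.9) translate directly into code; this short sketch (helper names are ours) applies them to the nine noisy offset readings quoted earlier in the chapter:

```python
import math

def mean_mu(x):
    """Sample mean of Eq. (5.8): mu_M = (1/N) * sum of x[k]."""
    return sum(x) / len(x)

def repeatability_sigma(x):
    """Standard deviation of Eq. (5.9): spread of the samples about mu_M."""
    mu = mean_mu(x)
    return math.sqrt(sum((v - mu) ** 2 for v in x) / len(x))

# The nine offset readings (in mV) used earlier in the chapter.
samples_mv = [101, 103, 102, 101, 102, 103, 103, 101, 102]

print(mean_mu(samples_mv))                        # 102.0
print(round(repeatability_sigma(samples_mv), 3))  # 0.816
```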
Bias error or systematic error is the difference between the reference value and the average of
a large number of measured values. Bias error can be mathematically described as

β = V_REF − E[V_MEASURED]  (5.10)

where E[V_MEASURED] is derived through a separate measurement process involving a (very) large
number of samples, that is,

E[V_MEASURED] = (1/N) ∑_{k=1}^{N} x[k],  N large  (5.11)
This step is usually conducted during the characterization phase of the product rather than during
a production run to save time. In essence, E [V MEASURED ] converges to V REF +V OFF (the noiseless
value) and β equals the negative of the instrument offset, that is,
β = −V OFF (5.12)
Finally, we come to the term accuracy. Since test time is of critical importance during a produc-
tion test, the role of the test engineer is to make a measurement with just the right amount of
uncertainty—no more, no less. This suggests selecting the test conditions so that the accuracy of
the measurement is just right.
Like bias error, accuracy is defined in much the same way, that being the difference between
the known reference value and the expected value of the measurement process. However, accuracy
accounts for the error that is introduced due to the repeatability of the measurement that is caused
by the small sample set. Let us define the difference between the reference level V_REF and an
estimate of the mean value, given by V_MEASURED = (1/N) ∑_{k=1}^{N} x[k] with N small, as the
measurement error:

E = V_REF − V_MEASURED  (5.13)
Figure 5.4. (a) Small sets of different measurements will have different mean values. (b) The mean
value of a large sample set will converge to VREF with an offset VOFF. (c) Distribution of measurement
errors.
where

E_MIN = min{ V_REF − min[V_MEASURED], V_REF − max[V_MEASURED] }
E_MAX = max{ V_REF − min[V_MEASURED], V_REF − max[V_MEASURED] }  (5.14)
It is common practice to refer to the range bounded by EMAX and EMIN as the uncertainty range of
the measurement process, or simply just accuracy, as shown in Figure 5.4c. Clearly, some measure
of the distribution of measurement errors must be made to quantify accuracy. We have more to say
about this in a moment.
As a reference check, if the measurement process is noiseless, then we have

max[V_MEASURED] = min[V_MEASURED] = V_REF + V_OFF  (5.15)

and the accuracy of the measurement process would simply be equal to the offset term, that is,

accuracy = E_MIN = E_MAX = −V_OFF  (5.16)
At this point in the discussion, it is important to recognize that measurement offset
plays an important role in the accuracy of a measurement. We will have more to say about
this in a moment.
Absolute accuracy is also expressed in terms of a center value and a plus–minus difference
measure defined as

accuracy = (E_MAX + E_MIN)/2 ± (E_MAX − E_MIN)/2  (5.17)
It is important that the reader be aware of the meaning of the notation used in Eq. (5.17) and how
it maps to the error bound given by Eq. (5.14).
Sometimes accuracy is expressed in relative terms, such as a percentage of the reference value,
that is,

accuracy % = (accuracy / V_REF) × 100%  (5.18)

or as a percentage of the instrument's full-scale value V_FS, that is,

accuracy %FS = (accuracy / V_FS) × 100%  (5.19)
EXAMPLE 5.1
A measurement of a 1-V reference level is made 100 times, where the minimum reading is 0.95 V
and the maximum reading is 1.14 V. What is the absolute accuracy of these measurements? What
is the relative accuracy with respect to a full-scale value of 5 V?
Solution:
According to Eq. (5.14), the smallest and largest errors are

E_MIN = min{1 V − 0.95 V, 1 V − 1.14 V} = −0.14 V
E_MAX = max{1 V − 0.95 V, 1 V − 1.14 V} = 0.05 V

so that, by Eq. (5.17), the absolute accuracy is −0.045 ± 0.095 V. Relative to a full-scale value
of 5 V, the accuracy is

−0.045 ± 0.095 V
accuracy %FS = × 100% = −0.9 ± 1.9%
5 V
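The arithmetic of Example 5.1 can be checked with a few lines of Python implementing Eqs. (5.14) and (5.17) (helper names are ours):

```python
def error_bounds(v_ref, readings):
    """E_MIN and E_MAX of Eq. (5.14): extremes of V_REF minus the readings."""
    errors = (v_ref - min(readings), v_ref - max(readings))
    return min(errors), max(errors)

def center_pm(e_min, e_max):
    """Center value and plus/minus half-range, the form used in Eq. (5.17)."""
    return (e_max + e_min) / 2, (e_max - e_min) / 2

e_min, e_max = error_bounds(1.0, [0.95, 1.14])   # Example 5.1 readings
center, pm = center_pm(e_min, e_max)

print(round(e_min, 3), round(e_max, 3))   # -0.14 0.05
print(round(center, 3), round(pm, 3))     # -0.045 0.095 (volts)

# Relative accuracy with respect to a 5-V full scale:
print(round(center / 5 * 100, 2), round(pm / 5 * 100, 2))   # -0.9 1.9 (percent)
```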
It is interesting to observe the statistics of this estimator, V_MEASURED, when noise is present.
Regardless of the nature of the noise, according to the central limit theorem, the estimator
V_MEASURED will follow Gaussian statistics. This is because the estimator is a sum of several
random variables, as described in the previous chapter. The implication of this is that if one were to
repeat a measurement a number of times and create a histogram of the resulting estimator values, one
would find that it has a Gaussian shape. Moreover, it would have a standard deviation of approximately
σ_M/√N. As a first guess, we could place the center of the Gaussian distribution at the
value of the estimator V_MEASURED = (1/N) ∑_{k=1}^{N} x[k]. Hence, the pdf of the estimator
values would appear as

g(v) = 1/((σ_M/√N) √(2π)) · exp( −(v − V_MEASURED)² / (2 (σ_M/√N)²) )  (5.20)
Consequently, we can claim with a 99.7% probability that the true mean of the measurement will
lie between V_MEASURED − 3σ_M/√N and V_MEASURED + 3σ_M/√N. One can introduce an α term to the
previous range term and generalize the result to a set of probabilities, for example,

P( V_MEASURED − α σ_M/√N ≤ E[V_MEASURED] ≤ V_MEASURED + α σ_M/√N )
= 0.667 for α = 1; 0.950 for α = 2; 0.997 for α = 3  (5.21)

One can refer to the α term as a confidence parameter, because the larger its value, the greater our
confidence (probability) that the noiseless measured value lies within the range defined by

V_MEASURED − α σ_M/√N ≤ E[V_MEASURED] ≤ V_MEASURED + α σ_M/√N  (5.22)
In the statistical literature this range is known as the confidence interval (CI). The extremes of the
measurement estimator can then be identified as

max[V_MEASURED] = V_MEASURED + α σ_M/√N
min[V_MEASURED] = V_MEASURED − α σ_M/√N  (5.23)
where V_MEASURED is any one estimate of the mean of the measurement and σ_M is the standard
deviation of the measurement process, usually identified during a characterization phase. Substituting
Eq. (5.23) into Eq. (5.17), together with the definitions given in Eq. (5.14), we write the accuracy
expression as

accuracy = V_REF − V_MEASURED ± α σ_M/√N  (5.24)

Recognizing that

V_REF − V_MEASURED ≈ β  (5.25)

we obtain

accuracy = β ± α σ_M/√N  (5.26)
This is the fundamental equation for measurement accuracy. It illustrates the dependency of
accuracy on the bias error, repeatability, and the number of samples. It also suggests several ways
in which to improve measurement accuracy: reduce the bias error β (e.g., through calibration),
reduce the measurement standard deviation σ_M (e.g., by limiting noise), or increase the number
of samples N averaged per measurement.
EXAMPLE 5.2
A DC offset measurement is repeated 100 times, resulting in a series of values having an average
of 257 mV and a standard deviation of 27 mV. In what range does the noiseless measured value
lie for a 99.7% confidence? What is the accuracy of this measurement assuming the systematic
offset is zero?
Solution:
Using Eq. (5.22) with α = 3, we can bound the noiseless measured value to lie in the range defined by

257 mV − 3 × (27 mV/√100) ≤ E[V_MEASURED] ≤ 257 mV + 3 × (27 mV/√100)

or

248.9 mV ≤ E[V_MEASURED] ≤ 265.1 mV
The accuracy of this measurement (where a measurement is the average of 100 voltage samples)
would then be ±8.1 mV with a 99.7% confidence. Alternatively, if we repeat this measurement
1000 times, we can expect that 997 measured values (i.e., each measured value corresponding to
100 samples) will lie between 248.9 mV and 265.1 mV.
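The bound in Example 5.2 follows mechanically from Eq. (5.22); a quick check in Python (the helper name is ours):

```python
import math

def confidence_interval(mean, sigma, n, alpha):
    """Range of Eq. (5.22): mean +/- alpha * sigma / sqrt(n)."""
    half = alpha * sigma / math.sqrt(n)
    return mean - half, mean + half

# Example 5.2: 100 samples, mean 257 mV, standard deviation 27 mV, alpha = 3.
lo, hi = confidence_interval(257.0, 27.0, 100, 3)
print(round(lo, 1), round(hi, 1))   # 248.9 265.1 (mV)

# The corresponding accuracy term for zero bias: +/- alpha * sigma / sqrt(n).
print(3 * 27.0 / math.sqrt(100))    # 8.1 (mV)
```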
Exercises

A series of offset measurements is made on an op-amp circuit, whereby the distribution was found
to be Gaussian with a mean value of 12.5 mV and a standard deviation of 10 mV. Write an
expression for the pdf of these measurements.
ANS. g(v) = 1/(10⁻² √(2π)) · exp( −(v − 12.5 × 10⁻³)² / (2 (10⁻²)²) );
μ = 12.5 mV; σ = 10.0 mV.

A series of offset measurements on an op-amp circuit is found to be Gaussian with a mean
value of 12.5 mV and a standard deviation of 1 mV. If this experiment is repeated, write an
expression for the pdf of the mean values of each of these experiments.
ANS. f(v) = 1/(10⁻³ √(2π)) · exp( −(v − 12.5 × 10⁻³)² / (2 (10⁻³)²) );
μ = 12.5 mV; σ = 1 mV.
Figure 5.5. (a) Modeling a voltmeter with an ideal voltmeter and a nonideal component in cascade.
(b) Calibrating the nonideal effects using a software routine.
where f(⋅) indicates the functional relationship between vmeasured and vDUT.
The true functional behavior f(⋅) is seldom known; thus one assumes a particular behavior or model, such as the first-order model given by

vmeasured = G ⋅ vDUT + offset    (5.28)
where G and offset are the gain and offset of the voltmeter, respectively. These values must be
determined from measured data. Subsequently, a mathematical procedure is written in software
that performs the inverse mathematical operation

vcalibrated = f⁻¹(vmeasured)
where vcalibrated replaces vDUT as an estimate of the true voltage that appears across the terminals of
the voltmeter as depicted in Figure 5.5b. If f(⋅) is known precisely, then vcalibrated = vDUT.
In order to establish an accurate model of an instrument, precise reference levels are nec-
essary. The number of reference levels required to characterize the model fully will depend
on its order—that is, the number of parameters used to describe the model. For the linear or
first-order model described above, there are two parameters, G and offset. Hence, two reference levels will be required.
To avoid conflict with the meter’s normal operation, relays are used to switch in these refer-
ence levels during the calibration phase. For example, the voltmeter in Figure 5.6 includes a pair
of calibration relays, which can connect the input to two separate reference levels, Vref1 and Vref2.
During a system level calibration, the tester closes one relay and connects the voltmeter to Vref1 and
measures the voltage, which we shall denote as vmeasured1. Subsequently, this process is repeated for
the second reference level Vref2 and the voltmeter provides a second reading, vmeasured2.
Based on the assumed linear model for the voltmeter, we can write two equations in terms of two unknowns:

vmeasured1 = G ⋅ Vref1 + offset    (5.29)
vmeasured2 = G ⋅ Vref2 + offset    (5.30)
Using linear algebra, the two model parameters can then be solved to be

G = (vmeasured2 − vmeasured1)/(Vref2 − Vref1)    (5.31)

and

offset = (vmeasured1 ⋅ Vref2 − vmeasured2 ⋅ Vref1)/(Vref2 − Vref1)    (5.32)
The parameters of the model, G and offset, are also known as calibration factors, or cal factors
for short.
When subsequent DC measurements are performed, they are corrected using the stored cali-
bration factors according to
vcalibrated = (vmeasured − offset)/G    (5.33)
This expression is found by isolating vDUT on one side of the expression in Eq. (5.28) and replacing
it by vcalibrated.
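The two-point calibration above maps directly into a short routine. This sketch (the function names are ours, not the book's) implements Eqs. (5.31) through (5.33) and exercises them with the meter readings from Exercise 5.10:

```python
# Two-point (gain/offset) calibration sketch following Eqs. (5.31)-(5.33).
def solve_cal_factors(vref1, vmeas1, vref2, vmeas2):
    """Solve the first-order model vmeasured = G*vdut + offset."""
    G = (vmeas2 - vmeas1) / (vref2 - vref1)                       # Eq. (5.31)
    offset = (vmeas1 * vref2 - vmeas2 * vref1) / (vref2 - vref1)  # Eq. (5.32)
    return G, offset

def calibrate(vmeasured, G, offset):
    """Correct a raw reading using the stored cal factors, Eq. (5.33)."""
    return (vmeasured - offset) / G

# Example: a meter reads 0.5 mV at a 0-V reference and 1.1 V at a 1-V reference.
G, offset = solve_cal_factors(0.0, 0.5e-3, 1.0, 1.1)
v_corrected = calibrate(1.1, G, offset)  # recovers the 1-V reference level
```

With these readings, G = 1.0995 V/V and offset = 0.5 mV, matching the answer to Exercise 5.10.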
Figure 5.6. A voltmeter whose meter input can be switched through calibration relays to either of two reference levels, Vref1 and Vref2.
Of course, this example is only for purposes of illustration. Most testers use much more
elaborate calibration schemes to account for linearity errors and other nonideal behavior in the
meter’s ADC and associated circuits. Also, the meter’s input stage can be configured in many ways,
and each of these possible configurations needs a separate set of calibration factors. For example,
if the input stage has 10 different input ranges, then each range setting requires a separate set of
calibration factors. Fortunately for the test engineer, most instrument calibrations happen behind
the scenes. The calibration factors are measured and stored automatically during the tester’s peri-
odic system calibration and checker process.
Exercises
5.10. A meter reads 0.5 mV and 1.1 V when connected to two precision reference levels of 0 and 1 V, respectively. What are the offset and gain of this meter? Write the calibration equation for this meter.
ANS. 0.5 mV, 1.0995 V/V, Vcalibrated = (Vmeasured − 0.5 mV)/1.0995.
Suppose instead the meter is described by the second-order model vmeasured = G2 ⋅ vDUT² + G1 ⋅ vDUT + offset. Write the calibration equation for this meter.
ANS.
vcalibrated = [−G1 + √(G1² + 4G2(vmeasured − offset))]/(2G2)
or
vcalibrated = [−G1 − √(G1² + 4G2(vmeasured − offset))]/(2G2)
To verify that the tester is in compliance with all its published specifications, a more extensive process called performance verification may be performed. Full performance verification is typically carried out at the tester vendor's facility; it is seldom performed on the production floor. By contrast, periodic system calibrations and checkers are performed on a regular basis in a production environment. These software calibration and checker programs verify that all the system hardware is production worthy.
Since tester instrumentation may drift slightly between system calibrations, the tester may
also perform a series of fine-tuning calibrations each time a new test program is loaded. The extra
calibrations can be limited to the subset of instruments used in a particular test program. This
helps to minimize program load time. To maintain accuracy throughout the day, these calibrations
may be repeated on a periodic basis after the program has been loaded. They may also be executed
automatically if the tester temperature drifts by more than a few degrees.
Finally, focused calibrations are often required to achieve maximum accuracy and to compen-
sate for nonidealities of DIB board components such as buffer amplifiers and filters. Unlike the
ATE tester’s built-in system calibrations, focused calibration and checker software is the respon-
sibility of the test engineer. Focused calibrations fall into two categories: (1) focused instrument
calibrations and (2) focused DIB calibrations and checkers.
EXAMPLE 5.3
A 2.500-V signal is required from a DC source as shown in Figure 5.7. Describe a calibration pro-
cedure that can be used to ensure that 2.500 V ± 500 µV does indeed appear at the output of the
DC source.
Solution:
The source is set to 2.500 V and a high-accuracy voltmeter is connected to the output of the
source using a calibration path internal to the tester. Calibration path connections are made
through one or more relays such as the ones in Figure 5.6. Assume the high-accuracy voltme-
ter reads 2.510 V from the source. The source is then reprogrammed to 2.500 V − 10 mV and the
output is remeasured. If the second meter reading is 2.499 V, then the source is reprogrammed
to 2.500 V − 10 mV + 1 mV and measured again. This process is repeated until the meter reads
2.500 V (plus or minus 500 µV). Once the exact programmed level is established, it is stored as a
calibration factor (e.g., calibration factor = 2.500 V − 10 mV + 1 mV = 2.491 V). When the 2.500-V
DC level is required during subsequent program executions, the 2.491 V calibration factor is used
as the programmed level rather than 2.500 V. Test time is not wasted searching for the ideal level
after the first calibration is performed. However, calibration factors may need to be regenerated
every few hours to account for slow drifts in the DC source. This recalibration interval is depen-
dent on the type of tester used.
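The search loop of this example can be sketched as follows. The fixed 10-mV source error is a hypothetical stand-in for real hardware behavior, and measure_output() plays the role of the high-accuracy voltmeter reached through the calibration path:

```python
# Sketch of the iterative source calibration in Example 5.3, with a
# simulated DC source error (the 10-mV offset is hypothetical).
def measure_output(programmed_level):
    """Stand-in for the high-accuracy voltmeter reading of the source output."""
    return programmed_level + 0.010  # this source sits 10 mV high

def find_cal_factor(target, tolerance=500e-6, max_iters=20):
    """Step the programmed level until the measured output reaches the target."""
    programmed = target
    for _ in range(max_iters):
        error = measure_output(programmed) - target
        if abs(error) <= tolerance:
            return programmed  # store this level as the calibration factor
        programmed -= error    # reprogram to cancel the observed error
    raise RuntimeError("calibration did not converge")

cal_factor = find_cal_factor(2.500)  # the level to program whenever 2.500 V is needed
```

Once stored, the calibration factor is programmed directly on later executions, so no test time is wasted repeating the search.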
Figure 5.7. A programmable DC source (VSRC) driving the DUT input, with a calibration path to a high-accuracy voltmeter (VMETER).
y(t) = A0 + A1 sin(2πf1t + φ1) + ⋯ + AN sin(2πfNt + φN) = A0 + ∑_{k=1}^{N} Ak sin(2πfkt + φk)    (5.34)
where Ak, fk, and φk denote the amplitude, frequency, and phase, respectively, of the kth tone. A
multitone signal can be viewed in either the time domain or the frequency domain. Time-domain
views are analogous to oscilloscope traces, while frequency-domain views are analogous to spec-
trum analyzer plots. The frequency-domain graph of a multitone signal contains a series of vertical
lines corresponding to each tone frequency and whose length∗ represents the root-mean-square
(RMS) amplitude of the corresponding tone. Each line is referred to as a spectral line. Figure 5.8
illustrates the time and frequency plots of a composite signal consisting of three tones of frequen-
cies 1, 2.5, and 4.1 kHz, all having an RMS amplitude of 2 V. Of course, the peak amplitude of
each sinusoid in the multitone is simply √2 × 2 or 2.82 V, so we could just as easily plot these
values as peak amplitudes rather than RMS. This book will consistently display frequency-domain
plots using RMS amplitudes.
*Spectral density plots are commonly defined in engineering textbooks with the length of the spectral line
representing one-half the amplitude of a tone. In most test engineering work, including spectrum analyzer
displays, it is more common to find this length defined as an RMS quantity.
Figure 5.8. Time-domain waveform (spanning ±5 V) and frequency-domain spectrum (2-V RMS spectral lines at 1, 2.5, and 4.1 kHz) of the three-tone composite signal.
Figure 5.9. Modeling an AWG as a cascaded combination of an ideal source and a frequency-dependent gain block G(f).
The AWG produces its output signal by passing the output of a DAC through a low-pass anti-imaging filter. The filter will not have a perfectly flat magnitude response, and the DAC may also introduce frequency-dependent errors. Thus the amplitudes of
the individual tones may be offset from their desired levels. We can therefore model this AWG
multitone situation as illustrated in Figure 5.9. The model consists of an ideal source connected
in cascade with a linear block whose gain or magnitude response is described by G(f), where f is
the frequency expressed in hertz. To correct for the gain change with frequency, the amplitude of
each tone from the AWG is measured individually using a high-accuracy AC voltmeter. The ratio
between the actual output and the requested output corresponds to G(f) at that frequency. This
gain can then be stored as a calibration factor that can subsequently be retrieved to correct the
amplitude error at that frequency. The calibration process is repeated for each tone in the multi-
tone signal. The composite signal can then be generated with corrected amplitudes by dividing
the previous requested amplitude at each frequency by the corresponding AWG gain calibration
factor. Because the calibration process equalizes the amplitudes of each tone, the process is called
multitone leveling.
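Multitone leveling can be sketched in a few lines. Here awg_gain() is a hypothetical first-order roll-off standing in for the measured response G(f); a real calibration run would obtain these factors from high-accuracy AC voltmeter readings of each tone:

```python
# Multitone leveling sketch: treat each tone's measured gain as a
# calibration factor and pre-divide the requested amplitudes by it.
import math

def awg_gain(freq_hz, pole_hz=10e3):
    """Hypothetical first-order roll-off standing in for the AWG's G(f)."""
    return 1.0 / math.sqrt(1.0 + (freq_hz / pole_hz) ** 2)

tones_hz = [1e3, 2.5e3, 4.1e3]
requested_rms = {f: 2.0 for f in tones_hz}  # 2 V RMS per tone, as in Figure 5.8

# Calibration run: the ratio of measured to requested amplitude is G(f).
cal_factors = {f: awg_gain(f) for f in tones_hz}

# Leveled program: divide each requested amplitude by its calibration factor.
leveled_rms = {f: requested_rms[f] / cal_factors[f] for f in tones_hz}

# After leveling, each tone emerges from the AWG at its requested amplitude.
actual_rms = {f: leveled_rms[f] * awg_gain(f) for f in tones_hz}
```

The design choice is the same one the text describes: store one gain factor per tone, then correct amplitudes at program time rather than trying to flatten the hardware itself.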
As testers continue to evolve and improve, it may become increasingly unnecessary for the
test engineer to perform focused calibrations of the tester instruments. Focused calibrations were
once necessary on almost all tests in a test program. Today, they can sometimes be omitted with
little degradation in accuracy. Nevertheless, the test engineer must evaluate the need for focused
calibrations on each test. Even if calibrations become unnecessary in the future, the test engi-
neer should still understand the methodology so that test programs on older equipment can be
comprehended.
Calibration of circuits on the DIB, on the other hand, will probably always be required. The
tester vendor has no way to predict what kind of buffer amplifiers and other circuits will be placed
on the DIB board. The tester operating system will never be able to provide automatic calibration
of these circuits. The test engineer is fully responsible for understanding the calibration require-
ments of all DIB circuits.
Exercises
5.14. An AWG has a gain response described by G(f) = [1 + (f/10³)²]^(−1/2) and is to generate three tones at frequencies of 1, 2, and 3 kHz. What are the calibration factors?
ANS. 0.707, 0.447, and 0.316.
EXAMPLE 5.4
The op-amp circuit in Figure 5.10 has been added to a DIB board to buffer the output of a DUT.
The buffer will be used to condition the DC signal from the DUT before sending it to a calibrated
DC voltmeter resident in the tester. If the output is not buffered, then we may find that the DUT
breaks into oscillations as a result of the stray capacitance arising along the lengthy signal path
leading from the DUT to the tester. The buffer prevents these oscillations by substantially reduc-
ing stray capacitance at the DUT output. In order to perform an accurate measurement, the be-
havior of the buffer must be accounted for. Outline the steps to perform a focused DC calibration
on the op-amp buffer stage.
Solution:
To perform a DC calibration of the output buffer amplifier, it is necessary to assume a model for
the op-amp buffer stage. It is reasonable to assume that the buffer is fairly linear over a wide
range of signal levels, so that the following linear model can be used:

vmeasured = G ⋅ vDUT + offset
Subsequently, following the same procedure as outlined in Section 5.4.3, a pair of known voltages
are applied to the input of the buffer from source SRC1 via the relay connection and the output of
the buffer is measured with a voltmeter. This temporary connection is called a calibration path.
As an example, let SRC1 force 2 V and assume that an output voltage of 2.023 V is measured us-
ing the voltmeter. Next the input is dropped to 1 V, resulting in an output voltage of 1.012 V. Using
Eq. (5.31), we find the buffer has a gain given by

G = (2.023 V − 1.012 V)/(2 V − 1 V) = 1.011 V/V

and, from Eq. (5.32), an offset given by

offset = (1.012 V × 2 V − 2.023 V × 1 V)/(2 V − 1 V) = 1 mV
Hence, the DUT output vDUT and the voltmeter value vmeasured are related according to

vmeasured = 1.011 ⋅ vDUT + 0.001 V
The goal of the focused DC calibration procedure is to find an expression that relates the DUT
output in terms of the measured value. Hence, by rearranging the expression and replacing
vcalibrated for vDUT, we obtain
vcalibrated = (vmeasured − 0.001 V)/(1.011 V/V)
For example, if the voltmeter reads 1.732 V, the actual voltage appearing at its terminals is
actually
vcalibrated = (1.732 V − 0.001 V)/(1.011 V/V) = 1.712 V
If the original uncalibrated answer had been used, there would have been a 20-mV error! This
example shows why focused DUT calibrations are so important to accurate measurements.
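The arithmetic of this example is easy to verify directly. The snippet below recomputes the cal factors and the corrected reading from the example's raw numbers:

```python
# Numerical check of Example 5.4's focused DC calibration.
G = (2.023 - 1.012) / (2.0 - 1.0)                   # gain, Eq. (5.31): 1.011 V/V
offset = (1.012 * 2.0 - 2.023 * 1.0) / (2.0 - 1.0)  # offset, Eq. (5.32): 0.001 V

v_measured = 1.732                                  # raw voltmeter reading, volts
v_calibrated = (v_measured - offset) / G            # corrected value, Eq. (5.33)
error_without_cal = v_measured - v_calibrated       # roughly 20 mV
```

The roughly 20-mV difference between the raw and calibrated readings is exactly the error the focused calibration removes.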
Figure 5.10. A DIB buffer amplifier between the DUT output (VOUT) and the tester's voltmeter, isolating the DUT from the capacitive meter cable through a 1-kΩ series resistor; source SRC1 (VSRC1) provides the calibration path.
When buffer amplifiers are used to assist the measurement of AC signals, a similar calibration
process must be performed on each frequency that is to be measured. Like the AWG calibration
example, the buffer amplifier also has a nonideal frequency response and will affect the reading
of the meter. Its gain variation, together with the meter’s frequency response, must be measured
at each frequency used in the test during a calibration run of the test program. Assuming that the
meter has already been calibrated, the frequency response behavior of the DIB circuitry must be
correctly accounted for. This is achieved by measuring the gain in the DIB’s signal path at each
specific test frequency. Once found, it is stored as a calibration factor. If additional circuits such
as filters, ADCs, and so on, are added on the DIB board and used under multiple configurations,
then each unique signal path must be individually calibrated.
Figure 5.11. A voltmeter input stage consisting of a programmable gain amplifier (PGA) with range control, followed by the meter's ADC and the tester computer.
This accuracy specification probably assumes that the measurement is made 100 or more
times and averaged. For a single nonaveraged measurement, there may also be a repeatability error
to consider. It is not clear from the table above what assumptions are made about averaging. The
test engineer should make sure that all assumptions are understood before relying on the accuracy
numbers.
EXAMPLE 5.5
A DUT output is expected to be 100 mV. Our fictitious DC voltmeter, the DVM100, is set to the
0.5-V range to achieve the optimum resolution and accuracy. The reading from the meter (with
the meter’s input filter enabled) is 102.3 mV. Calculate the accuracy of this reading (excluding
possible repeatability errors). What range of outputs could actually exist at the DUT output with
this reading?
Solution:
The measurement error would be equal to ±0.05% of 100 mV, or 50 µV, but the specification has
a lower limit of 1 mV. The accuracy is therefore ±1 mV. Based on the single reading of 102.3 mV,
the actual voltage at the DUT output could be anywhere between 101.3 and 103.3 mV.
In addition to the ranging hardware, the meter also has a low-pass filter in series with its input.
The filter can be bypassed or enabled, depending on the measurement requirements. Repeatability
is enhanced when the low-pass filter is enabled, since the filter reduces electrical noise in the input
signal. Without this filter the accuracy would be degraded by nonrepeatability. The filter undoubt-
edly adds settling time to the measurement, since all low-pass filters require time to stabilize to a
final DC value. The test engineer must often choose between slow, repeatable measurements and
fast measurements with less repeatability.
It may be possible to empirically determine through experimentation that this DC voltmeter
has adequate resolution and accuracy to make a DC offset measurement with less than 100 μV of
error. However, since this level of accuracy is far better than the instrument’s ±1-mV specifica-
tions, the instrument should probably not be trusted to make such a measurement in production.
The accuracy might hold up for 100 days and then drift toward the specification limits of 1 mV
on day 101.
Another possible scenario is that multiple testers may be used that do not all have 100-μV
performance. Tester companies are often conservative in their published specifications, meaning
that the instruments are often better than their specified accuracy limits. This is not a license to
Chapter 5 • Yield, Measurement Accuracy, and Test Time 153
use the instruments to more demanding specifications. It is much safer to use the specifications
as printed, since the vendor will not take any responsibility for use of instruments beyond their
official specifications.
Sometimes the engineer may have to design front-end circuitry such as PGAs and filters onto
the DIB board itself. The DIB circuits might be needed if the front-end circuitry of the meter is
inadequate for a high-accuracy measurement. Front-end circuits may also be added if the signal
from the DUT cannot be delivered cleanly through the signal paths to the tester instruments. Very
high-impedance DUT signals might be susceptible to externally coupled noise, for example. Such
signals might benefit from local buffering and amplification before passing to the tester instru-
ment. The test engineer must calibrate any such buffering or filtering circuits using a focused DIB
calibration.
Exercises
5.15. A voltmeter is specified to have an accuracy of ±1% of programmed range. If a DC level is measured on a ±1 V range and appears on the meter as 0.5 V, what are the minimum and maximum DC levels that might have been present at the meter's input during this measurement?
ANS. 0.5 V ± 10 mV (i.e., the input could lie anywhere between 490 and 510 mV).
Vn,O = VDUT √(ωb/ωDUT)    (5.35)
The above expression illustrates the noise reduction gained by filtering the output. The smaller the
ratio ωb /ωDUT, the greater the noise reduction. Other types of filtering circuits can be placed on the
DIB board when needed. For example, a very narrow bandpass filter may be placed on the DIB
board to clean up noise components in a sine wave generated by the tester. The filter allows a much
more ideal sine wave to the input of the DUT than the tester would otherwise be able to produce.
EXAMPLE 5.6
The simple RC low-pass circuit shown in Figure 5.12 is used to filter the output of a DUT con-
taining a noisy DC signal. For a particular measurement, the signal component is assumed to
change from 0 to 1 V, instantaneously. How long does it take the filter to settle to within 1% of its
final value? By what factor does the settling time increase when the filter’s 3-dB bandwidth is
decreased by a factor of 10?
Figure 5.12. An RC low-pass filter with R = 1 kΩ and C = 1 μF connected between vI and vO.
Solution:
From the theory of first-order networks, the step response of the circuit starting from rest (i.e.,
vI = 0) is
vO(t) = S(1 − e^(−t/τ))    (5.36)

where S = 1 V is the magnitude of the step and τ = RC = 10⁻³ s. Moreover, the 3-dB bandwidth ωb (expressed in rad/s) of a first-order network is 1/RC, so we can rewrite the above expression as

vO(t) = S(1 − e^(−ωb t))    (5.37)

Clearly, the time t = tS at which the output reaches an arbitrary output level VO is then

tS = −ln[(S − VO)/S]/ωb    (5.38)
Furthermore, we recognize that (S − VO)/S is the settling error ε, or the accuracy of the measurement, so we can rewrite Eq. (5.38) as

tS = −ln(ε)/ωb    (5.39)
Hence, the time it takes to reach within 1% of 1 V, or 0.99 V, is 4.6 ms. Since settling time and
3-dB bandwidth are inversely related according to Eq. (5.39), a tenfold decrease in bandwidth
leads to a tenfold increase in settling time. Specifically, the settling time becomes 46 ms.
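Equation (5.39) and the numbers in this example can be checked with a few lines:

```python
# Settling-time check for Example 5.6 using tS = -ln(eps)/omega_b, Eq. (5.39).
import math

R, C = 1e3, 1e-6          # 1 kOhm and 1 uF give tau = 1 ms (Figure 5.12)
omega_b = 1.0 / (R * C)   # 3-dB bandwidth of the first-order network, rad/s

def settling_time(eps, omega_b):
    """Time for a first-order step response to settle within a fraction eps."""
    return -math.log(eps) / omega_b

t_1pct = settling_time(0.01, omega_b)            # about 4.6 ms
t_1pct_slow = settling_time(0.01, omega_b / 10)  # tenfold narrower band: ~46 ms
```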
Exercises
5.18. By what factor should the bandwidth of an RC low-pass filter be decreased in order to reduce the variation in a DC measurement from 250 μV RMS to 100 μV RMS? By what factor does the settling time increase?
ANS. The bandwidth should be decreased by a factor of 6.25 (= 2.5²); the settling time increases by the same factor of 6.25.
5.20. The variation in the output RMS signal of a DUT is 1 mV, but it needs to be reduced to a level closer to 500 μV. What filter bandwidth is required to achieve this level of repeatability? Assume that the DUT's output follows a first-order frequency response and has a 3-dB bandwidth of 1000 Hz.
ANS. 250 Hz.
5.6.2 Averaging
Averaging, defined by the expression (1/N) ∑_{k=1}^{N} x[k], is a specific form of discrete-time filtering.
Averaging can be used to improve the repeatability of a measurement. For example, we can aver-
age the following series of nine voltage measurements and obtain an average of 102 mV.
101 mV, 103 mV, 102 mV, 101 mV, 102 mV, 103 mV, 103 mV, 101 mV, 102 mV
There is a good chance that a second series of nine unique measurements will again result in
something close to 102 mV. If the length of the series is increased, the answer will become more
repeatable and reliable. But there is a point of diminishing returns. To reduce the effect of noise
on the voltage measurement by a factor of two, one has to take four times as many readings and
average them. At some point, it becomes prohibitively expensive (i.e., from the point of view of
test time) to improve repeatability. In general, if the RMS variation in a measurement is again denoted VDUT, then after averaging the measurement N times, the RMS value of the resulting averaged value will be

Vn,O = VDUT/√N    (5.40)

Here we see that averaging reduces the RMS noise voltage by a factor of √N. Hence, to reduce the noise RMS voltage by a factor of two requires an increase in the sequence length, N, by a factor of four.
AC measurements can also be averaged to improve repeatability. A series of sine wave signal level measurements can be averaged to achieve better repeatability. However, one should not try to average readings in decibels. If a series of measurements is expressed in decibels, they should first be converted to linear form using the equation V = 10^(dB/20) before applying averaging. Normally, the voltage or gain measurements are available before they are converted to decibels in the first place; thus the conversion from decibels to linear units or ratios is not necessary. Once the average voltage level is calculated, it can be converted to decibels using the equation dB = 20 log10(V). To understand why we should not perform averaging on decibels, consider the sequence 0, −20, −40 dBV. The average of these values is −20 dBV. However, the actual voltages are 1 V, 100 mV, and 10 mV. Thus the correct average value is (1 V + 0.1 V + 0.01 V)/3 = 370 mV, or −8.64 dBV.
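The decibel-averaging pitfall is easy to demonstrate numerically:

```python
# Average the 0, -20, -40 dBV readings both ways and compare.
import math

readings_dbv = [0.0, -20.0, -40.0]

# Wrong: arithmetic mean of the decibel values.
avg_db = sum(readings_dbv) / len(readings_dbv)    # -20 dBV

# Right: convert to volts, average, then convert back to decibels.
volts = [10 ** (db / 20) for db in readings_dbv]  # 1 V, 0.1 V, 0.01 V
avg_v = sum(volts) / len(volts)                   # 0.37 V
avg_v_db = 20 * math.log10(avg_v)                 # about -8.64 dBV
```

The two results differ by more than 11 dB because the decibel scale is logarithmic: the largest reading dominates a linear average but not a decibel average.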
5.7 GUARDBANDS
Guardbanding is an important technique for dealing with the uncertainty of each measurement.
If a particular measurement is known to be accurate and repeatable with a worst-case uncertainty
of ±ε, then the final test limits should be tightened from the data sheet specification limits by ε to make sure that no bad devices are shipped to the customer. In other words,

upper test limit = upper specification limit − ε
lower test limit = lower specification limit + ε    (5.41)
So, for example, if the data sheet limit for the offset on a buffer output is –100 mV minimum and
100 mV maximum, and an uncertainty of ±10 mV exists in the measurement, the test program
limits should be set to –90 mV minimum and 90 mV maximum. This way, if the device output is
101 mV and the error in its measurement is –10 mV, the resulting reading of 91 mV will cause
a failure as required. Of course, a reading of 91 mV may also represent a device with an 81-mV
output and a +10-mV measurement error.
In such cases, guardbanding has the unfortunate effect of disqualifying good devices. Ideally,
we would like all guardbands to be set to 0 so that no good devices will be discarded. To minimize
the guardbands, we must improve the repeatability and accuracy of each test, but this typically
requires longer test times. There is a balance to be struck between repeatability and the number of
good devices rejected. At some point, the added test time cost of a more repeatable measurement
Table 5.2. DUT output values and corresponding measured values (mV)

DUT Output    Measured Value
105           101
101           107
98            102
96            95
86            92
72            78
outweighs the cost of discarding a few good devices. This tradeoff is illustrated in Figure 5.13
on the histogram of some arbitrary offset voltage test data for two different-sized guardbands.
With larger guardbands, the region of acceptability is reduced; hence fewer good devices will be
shipped.
EXAMPLE 5.7
Table 5.2 lists a set of output values from a DUT together with their measured values. It is as-
sumed that the upper specification limit is 100 mV and the measurement uncertainty is ±6 mV.
How many good devices are rejected because of the measurement error? How many good de-
vices are rejected if the measurement uncertainty is increased to ±10 mV?
Solution:
From the DUT output column on the left, four devices are below the upper specification limit of
100 mV and should be accepted. The other two should be rejected. Now with a measurement
uncertainty of ±6 mV, according to Eq. (5.41) the guardbanded upper test limit is 94 mV. With the
revised test limit, only two devices are acceptable. The others are all rejected. Hence, two other-
wise good devices are disqualified.
If the measurement uncertainty increases to ±10 mV, then the guardbanded upper test limit
becomes 90 mV. Five devices are rejected and only one is accepted. Consequently, three other-
wise good devices are disqualified.
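Example 5.7's counting argument can be expressed as a short routine (the helper name is ours):

```python
# Counting guardband casualties for Example 5.7, using the Table 5.2
# data as (DUT output, measured value) pairs in millivolts.
devices = [(105, 101), (101, 107), (98, 102), (96, 95), (86, 92), (72, 78)]
usl = 100  # upper specification limit, mV

def good_devices_rejected(devices, usl, uncertainty):
    """Count good devices failed by the guardbanded upper test limit."""
    utl = usl - uncertainty  # guardbanded upper test limit, per Eq. (5.41)
    return sum(1 for dut, measured in devices
               if dut <= usl and measured > utl)

rejected_6mV = good_devices_rejected(devices, usl, 6)    # 2 good devices lost
rejected_10mV = good_devices_rejected(devices, usl, 10)  # 3 good devices lost
```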
In practice, we need to set ε equal to 3 to 6 times the standard deviation of the measurement
to account for measurement variability. A diagram illustrating the impact of shifting the test limits
away from the specification limits on the probability density is provided in Figure 5.14. This dia-
gram shows a marginal device with an average (true) reading equal to the upper specification limit.
The upper and lower specification limits (USL and LSL, respectively) have each been tightened
by ε = 3σ. The tightened upper and lower test limits (UTL and LTL, respectively) reject marginal
devices such as this, regardless of the magnitude of the measurement error. A more stringent
guardband value of ε = 6σ gives us an extremely low probability of passing a defective device,
but this is sometimes too large a guardband to allow a manufacturable yield.
Figure 5.13. (a) Guardbanding the specification limits. (b) Illustrating the implications of large guardbands on the region of acceptability. Both panels show a histogram of offset-voltage test data (x = offset voltage, spanning −0.140 to −0.120 V) with guardbands separating the GOOD region between LTL and UTL from the BAD regions outside; the larger guardbands in (b) leave a smaller GOOD region.
EXAMPLE 5.8
A DC offset measurement is repeated many times, resulting in a series of values having an aver-
age of 257 mV. The measurements exhibit a standard deviation of 27 mV. If our specification limits
are 250 ±50 mV, where would we have to set our 6σ guardbanded upper and lower test limits?
Solution:
The value of σ is equal to 27 mV; thus the width of the 6σ guardbands would have to be equal to 162 mV. The upper test limit would be 300 mV − 162 mV = 138 mV, and the lower test limit would be 200 mV + 162 mV = 362 mV. Clearly, there is a problem with the repeatability of this test, since the lower guardbanded test limit is higher than the upper guardbanded test limit! Averaging would have to be used to reduce the standard deviation.
Figure 5.14. Gaussian measurement pdf f(x) for a marginal device, with the test limits tightened inward from each specification limit by the guardband ε.
σave = σ/√N    (5.42)
So, for example, if we want to reduce the value of a measurement's standard deviation σ by a factor of two, we have to average a measurement four times. This gives rise to an unfortunate square-law tradeoff between test time and repeatability.
We can use Gaussian statistical analysis to predict the effects of nonrepeatability on yield.
This allows us to make our measurements repeatable enough to give acceptable yield without
wasting time making measurements that are too repeatable. It also allows us to recognize the situ-
ations where the average device performance or tester performance is simply too close to failure
for economical production.
EXAMPLE 5.9
How many times would we have to average the DC measurement in Example 5.8 to achieve 6σ
guardbands of 10 mV? If each measurement takes 5 ms, what would be the total test time for the
averaged measurement?
Solution:
The value of σave must be equal to 10 mV divided by 6 to achieve 6σ guardbands. Rearranging Eq. (5.42), we see that N must be equal to

N = (σ/σave)² = (27 mV/(10 mV/6))² ≈ 262 measurements
The total test time would be equal to 262 times 5 ms, or 1.31 s. This is clearly unacceptable for
production testing of a DC offset. The 27-mV standard deviation must be reduced through an
improvement in the DIB hardware or the DUT design.
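The averaging count in this example follows directly from Eq. (5.42):

```python
# Test-time estimate for Example 5.9: how many averages shrink sigma
# from 27 mV to the 10 mV / 6 needed for a 6-sigma, 10-mV guardband.
sigma = 27e-3                 # single-measurement standard deviation, volts
sigma_ave_target = 10e-3 / 6  # required standard deviation of the average

n = (sigma / sigma_ave_target) ** 2  # Eq. (5.42) rearranged: ~262 measurements
test_time = n * 5e-3                 # at 5 ms per measurement: ~1.31 s
```

The square-law cost is visible here: cutting σ by a factor of 16.2 demands roughly 262 repetitions.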
Above we stated that the guardbands should be selected to be between 3 and 6 standard
deviations of the measurement. Here we recast this statement in terms of the desired defect level.
Consider the situation depicted in Figure 5.14 for a marginal device. The probability that a bad
part will have a measured value that is less than UTL is given by

P(X < UTL) = Φ(−ε/σn)    (5.43)

If N devices are produced, the defect level in ppm as defined by Eq. (5.3) can be written as

DL [ppm] = (# escapes/N) × 10⁶ = Φ(−ε/σn) × 10⁶    (5.44)

Solving for the guardband ε, we obtain

ε = −σn × Φ⁻¹(DL/10⁶)    (5.45)
The upper and lower test limits can then be found using Eq. (5.41) above.
EXAMPLE 5.10
A DC offset test is performed on a DUT with lower and upper specification limits of −5 mV and 5
mV, respectively. The expected RMS level of the noise present during the test is 1 mV. If a defect
level of less than 200 ppm is required, what should be the test limits?
Solution:
According to Eq. (5.45), the guardband is

ε = −10⁻³ × Φ⁻¹(200/10⁶) = 3.54 mV

The guardbanded test limits are therefore LTL = −5 mV + 3.54 mV = −1.46 mV and UTL = 5 mV − 3.54 mV = 1.46 mV.
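With the Gaussian inverse cdf from the standard library, the guardband calculation of Example 5.10 takes only a few lines. Note the sign convention assumed here: Φ⁻¹ of a small tail probability is negative, so the guardband ε comes out positive:

```python
# Guardband sized from a defect-level target, per Eqs. (5.41) and (5.45).
from statistics import NormalDist

sigma_n = 1e-3  # RMS noise level during the test, volts
dl_ppm = 200    # target defect level, ppm

# eps = -sigma_n * inv_Phi(DL / 1e6); inv_cdf of 0.0002 is about -3.54.
eps = -sigma_n * NormalDist().inv_cdf(dl_ppm / 1e6)  # about 3.54 mV

usl, lsl = 5e-3, -5e-3
utl = usl - eps  # guardbanded upper test limit
ltl = lsl + eps  # guardbanded lower test limit
```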
Exercises
5.25. The following lists a set of output voltage values from a group of DUTs together with their measured values: {(2.3, 2.1), (2.1, 1.6), (2.2, 2.1), (1.9, 1.6), (1.8, 1.7), (1.7, 2.1), (1.5, 2.0)}. If the upper specification limit is 2.0 V and the measurement uncertainty is ±0.5 V, how many good devices are rejected due to the measurement error?
ANS. Four devices (all good devices are rejected by the 1.5-V guardbanded upper test limit).
Figure 5.15. Probability density plot for a measurement result between two test limits: a Gaussian measurement pdf f(x) centered at the average of the measured values.
When the average measured value lies exactly at a test limit, then in the case of the Gaussian pdf, the test will produce an equal number of failures and passing results.
This is illustrated by the pdf diagram shown in Figure 5.16. The area under the pdf is equally split
between the passing region and the failing region; so we would expect 50% of the test results to
pass and 50% to fail.
For measurements whose average value is close to but not equal to either test limit, the analy-
sis gets a little more complicated. Consider an average measurement μ that is δ1 units below the
upper test limit as shown in Figure 5.17.
Any time the repeatability error exceeds δ1 the test will fail. In effect, the measurement noise
causes an erroneous failure. The probability that the measurement error will not exceed δ1 and
cause a failure is equal to the area underneath the portion of the pdf that is less than the UTL. This
area is equal to the integral of the pdf from minus infinity to the UTL of the measurement results.
In other words, the probability that a measurement will not fail the upper test limit as adopted
from Eq. 5.19 is
P(X < UTL) = Φ((UTL − μ)/σ)   (5.46)
Conversely, the probability of a failing result due to the upper test limit is
P(UTL < X) = 1 − Φ((UTL − μ)/σ)   (5.47)
The probability that a measurement falls between the two test limits is
P(LTL < X < UTL) = Φ((UTL − μ)/σ) − Φ((LTL − μ)/σ)   (5.48)
Figure 5.16. Probability density plot for nonrepeatable measurement centered at the UTL.
Figure 5.17. Probability density plot for average reading, μ, slightly below UTL by δ1.
The total probability of a failing result at either test limit is
P(X < LTL or UTL < X) = P(X < LTL) + P(UTL < X) = 1 + Φ((LTL − μ)/σ) − Φ((UTL − μ)/σ)   (5.49)
EXAMPLE 5.11
A DC offset measurement is repeated many times, resulting in a series of values having an aver-
age of 257 mV. The measurements exhibit a standard deviation of 27 mV. What is the probability
that a nonaveraged offset measurement will fail on any given test program execution? Assume
an upper test limit of 300 mV and a lower test limit of 200 mV.
Solution:
The probability that the test result lies outside the test limits of 200 and 300 mV is obtained by
substituting the test limits into Eq. (5.49):
P(fail) = 1 + Φ((200 − 257)/27) − Φ((300 − 257)/27) = 1 + Φ(−2.11) − Φ(1.59) ≈ 0.0727
Here we see that there is a 7.27% chance of failure, even though the true DC offset value is known
to be within acceptable limits.
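Equation (5.49) is easy to evaluate numerically; a sketch for the numbers in Example 5.11, using only Python's standard library (the exact CDF gives ≈7.3%, matching the table-based 7.27% to within rounding):

```python
from statistics import NormalDist

mu, sigma = 257e-3, 27e-3   # mean and std. dev. of the repeated offset readings, V
LTL, UTL = 200e-3, 300e-3   # lower and upper test limits, V

dist = NormalDist(mu, sigma)
# Eq. (5.49): P(fail) = P(X < LTL) + P(UTL < X)
p_fail = dist.cdf(LTL) + (1.0 - dist.cdf(UTL))
print(f"probability of failure = {p_fail:.4f}")   # about 0.073
```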
The combined effect of tester repeatability and reproducibility errors can be expressed as a single tester standard deviation
σtester = √(σrepeatability² + σreproducibility²)   (5.50)
Yield loss due to total tester variability can then be calculated using the equations from the previous sections, substituting the value of σtester in place of σ.
The variability of the actual DUT performance from DUT to DUT and from lot to lot also
contributes to yield loss. Thus the overall variability can be described using an overall standard
deviation, calculated using an equation similar to Eq. (5.50), that is,
Figure 5.18. Probability density plot showing the passing region (Area 1) and the failing regions below the LTL (Area 2) and above the UTL (Area 3).
σtotal = √(σrepeatability² + σreproducibility² + σprocess²)   (5.51)
Since σtotal ultimately determines our overall production yield, it should be made as small as pos-
sible to minimize yield loss. The test engineer must try to minimize the first two standard devia-
tions. The design engineer and process engineer should try to reduce the third.
EXAMPLE 5.12
A six-month yield study finds that the total standard deviation of a particular DC offset mea-
surement is 37 mV across multiple lots, multiple testers, multiple DIB boards, and so on. The
standard deviation of the measurement repeatability is found to be 15 mV, while the standard
deviation of the reproducibility is found to be 7 mV. What is the standard deviation of the actual
DUT-to-DUT offset variability, excluding tester repeatability errors and reproducibility errors?
If we could test this device using perfectly accurate, repeatable test equipment, what would be
the total yield loss due to this parameter, assuming an average value of 2.430 V and test limits of
2.5 V ± 100 mV?
Solution:
Rearranging Eq. (5.51), we write
σprocess = √(σtotal² − σrepeatability² − σreproducibility²) = √((37 mV)² − (15 mV)² − (7 mV)²) = 33 mV
Thus, even if we could test every device with perfect accuracy and no repeatability errors, we
would see a DUT-to-DUT variability of σ = 33 mV. The value of μ is equal to 2.430 V; thus our overall yield loss for this measurement is found by substituting the above values into Eq. (5.49) as
P(fail) = 1 + Φ((2.400 − 2.430)/0.033) − Φ((2.600 − 2.430)/0.033) = 1 + Φ(−0.91) − Φ(5.15)
From Table 5.1, Φ(−0.91) ≅ Φ(−0.9) = 0.1841, and we estimate Φ(5.15) ≅ 1; hence
P(fail) ≅ 1 + 0.1841 − 1 = 0.1841
We would therefore expect an 18% yield loss due to this one parameter, due to the fact that the
DUT-to-DUT variability is too high to tolerate an average value that is only 30 mV from the lower
test limit. Repeatability and reproducibility errors would only worsen the yield loss; so this device
would probably not be economically viable. The design or process would have to be modified to
achieve an average DC offset value closer to 2.5 V.
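The two-step calculation of Example 5.12 (extracting σprocess via Eq. (5.51), then the yield loss via Eq. (5.49)) can be sketched as follows; the exact CDF gives about 18.2% versus the 18.4% obtained from table lookup:

```python
from math import sqrt
from statistics import NormalDist

sigma_total, sigma_rpt, sigma_rpd = 37e-3, 15e-3, 7e-3   # volts

# Eq. (5.51) rearranged: process-only (DUT-to-DUT) standard deviation
sigma_process = sqrt(sigma_total**2 - sigma_rpt**2 - sigma_rpd**2)
print(f"sigma_process = {sigma_process * 1e3:.1f} mV")   # about 33 mV

# Eq. (5.49): yield loss against the 2.5 V +/- 100 mV test limits
mu, LTL, UTL = 2.430, 2.400, 2.600
dist = NormalDist(mu, sigma_process)
yield_loss = dist.cdf(LTL) + (1.0 - dist.cdf(UTL))
print(f"yield loss = {yield_loss:.3f}")                  # about 0.18
```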
The probability that a particular device will pass all tests in a test program is equal to the
product of the passing probabilities of each individual test. In other words, if the values P1, P2,
P3, . . . , Pn represent the probabilities that a particular DUT will pass each of the n individual tests
in a test program, then the probability that the DUT will pass all tests is equal to
P(DUT passes all tests) = P1 × P2 × P3 × ⋯ × Pn   (5.52)
Equation (5.52) is of particular significance, because it dictates that each of the individual tests
must have a very high yield if the overall production yield is to be high. For example, if each of
the 200 tests has a 2% chance of failure, then each test has only a 98% chance of passing. The
yield will therefore be (0.98)200, or 1.7%! Clearly, a 1.7% yield is completely unacceptable. The
problem in this simple example is not that the yield of any one test is low, but that so many tests
combined will produce a large amount of yield loss.
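The compounding effect described above is easy to verify with a one-line sketch:

```python
# 200 independent tests, each with a 98% chance of passing
overall_yield = 0.98 ** 200
print(f"overall yield = {overall_yield:.4f}")   # about 0.0176, i.e., roughly 1.7%
```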
EXAMPLE 5.13
A particular test program performs 857 tests, most of which cause little or no yield loss. Five
measurements account for most of the yield loss. Using a lot summary and a continue-on-fail
test process, the yield loss due to each measurement is found to be:
Test #1: 1%, Test #2: 5%, Test #3: 2.3%, Test #4: 7%, Test #5: 1.5%
All other tests combined: 0.5%
What is the overall yield for this lot of material?
Solution:
The probability of passing each test is equal to 1 minus the yield loss produced by that test. The
values of P1, P2, P3, . . . , P5 are therefore
P1 = 99%, P2 = 95%, P3 = 97.7%, P4 = 93%, P5 = 98.5%
If we consider all other tests to be a sixth test having a yield loss of 0.5%, we get a sixth probability
P6 = 99.5%
P (DUT passes all tests ) = 0.99 × 0.95 × 0.977 × 0.93 × 0.985 × 0.995 = 0.8375
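The same product rule gives the Example 5.13 result directly; a quick sketch:

```python
from math import prod

# yield losses for tests 1-5 plus the combined "all other tests"
yield_losses = [0.01, 0.05, 0.023, 0.07, 0.015, 0.005]
pass_probs = [1.0 - loss for loss in yield_losses]

# Eq. (5.52): probability of passing every test is the product of the
# individual passing probabilities
p_all = prod(pass_probs)
print(f"P(DUT passes all tests) = {p_all:.4f}")   # about 0.8375
```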
Because the yield of each individual test must be very high, a methodology called statistical pro-
cess control (SPC) has been adopted by many companies. The goal of SPC is to minimize the total
variability (i.e., to try to make σtotal = 0) and to center the average test result between the upper and
lower test limits [i.e. to try to make μ = (UTL+LTL)/2]. Centering and narrowing the measure-
ment distribution leads to higher production yield, since it minimizes the area of the Gaussian pdfs
that extend into the failing regions as depicted in Figure 5.19. In the next section, we will briefly
examine the SPC methodology to see how it can help improve the quality of the manufacturing
process, the quality of the test equipment and software, and, most importantly, the quality of the
devices shipped to the customer.
Exercises
5.27. A particular test program performs 600 tests, most of which cause little or no yield loss. Four measurements account for most of the yield loss. The yield loss due to each measurement is found to be: Test #1: 1.5%, Test #2: 4%, Test #3: 5.3%, Test #4: 2%. All other tests combined: 5%. What is the overall yield loss of this lot of material?
ANS. Yield loss = 16.63%.
SPC provides a means of identifying device parameters that exhibit excessive variations over
time. It does not identify the root cause of the variations, but it tells us when to look for problems.
Once an unstable parameter has been identified using SPC, the engineering and manufacturing
team searches for the root cause of the instability. Hopefully, the excessive variations can be
reduced or eliminated through a design modification or through an improvement in one of the
many manufacturing steps. By improving the stability of each tested parameter, the manufacturing
process is brought under control, enhancing the inherent quality of the product.
A higher level of inherent quality leads to higher yields and less demanding test require-
ments. If we can verify that a parameter almost never fails, then we may be able to stop testing
that parameter on a DUT-by-DUT basis. Instead, we can monitor the parameter periodically to
verify that its statistical distribution remains tightly packed and centered between the test limits.
We also need to verify that the mean and standard deviation of the parameter do not fluctuate
wildly from lot to lot as shown in the four rightmost columns of Figure 5.20.* Once the stability of
the distributions has been verified, the parameter might only be measured for every tenth device
or every hundredth device in production. If the mean and standard deviation of the limited sample
set stays within tolerable limits, then we can be confident that the manufacturing process itself
is stable. SPC thus allows statistical sampling of highly stable parameters, dramatically reducing
testing costs.
* The authors acknowledge the efforts of the Texas Instruments SPC Guidelines Steering Team, whose document “Statistical Process Control Guidelines, The Commitment of Texas Instruments to Continuous Improvement Through SPC” served as a guide for several of the diagrams in this section.
Figure 5.21. Six-sigma quality standards lead to low defect rates (< 3.4 defective parts per million).
Six-sigma quality standards result in a failure rate of less than 3.4 defective parts per million
(dppm). Therefore, the chance of an untested device failing a six-sigma parameter is extremely
low. This is the reason we can often eliminate DUT-by-DUT testing of six-sigma parameters.
The process capability, Cp, relates the specification range to the measured process variation:
Cp = (USL − LSL)/(6σ)   (5.53)
Cp indicates how tightly the statistical distribution of measurements is packed, relative to the range
of passing values. A very large Cp value indicates a process that is stable enough to give high yield
and high quality, while a Cp less than 2 indicates a process stability problem. It is impossible to
achieve six-sigma quality with a Cp less than 2, even if the parameter is perfectly centered. For
this reason, six-sigma quality standards dictate that all measured parameters must maintain a Cp
of 2 or greater in production.
The process capability index, Cpk, measures the process capability with respect to centering
between specification limits
Cpk = Cp(1 − k)   (5.54)
where
k = |T − μ| / [0.5(USL − LSL)]   (5.55)
Here T is the specification target (ideal measured value) and μ is the average measured value. The
target value T is generally placed in the middle of the specification limits, defined as
T = (USL + LSL)/2   (5.56)
For one-sided specifications, such as a signal-to-distortion ratio test, we only have an upper or
lower specification limit. Therefore, we have to use slightly different calculations for Cp and Cpk.
In the case of only the upper specification limit being defined, we use
Cpk = Cp = (USL − μ)/(3σ)   (5.57)
whereas, in the case of only the lower specification limit being defined, we use
Cpk = Cp = (μ − LSL)/(3σ)   (5.58)
The value of Cpk must be 1.5 or greater to achieve six-sigma quality standards as shown in
Figure 5.21.
EXAMPLE 5.14
The values of an AC gain measurement are collected from a large sample of the DUTs in a pro-
duction lot. The average reading is 0.991 V/V and the upper and lower specification limits are
1.050 and 0.950 V/V, respectively. The standard deviation is found to be 0.0023 V/V. What is the
process capability and the values of Cp and Cpk for this lot? Does this lot meet six-sigma quality
standards?
Solution:
The process capability is equal to 6 sigma, or 0.0138 V/V. The values of Cp and Cpk are given by
Eqs. (5.53)–(5.55):
Cp = (1.050 − 0.950)/(6 × 0.0023) = 7.25
k = |1.000 − 0.991|/[0.5 × (1.050 − 0.950)] = 0.18, so Cpk = 7.25 × (1 − 0.18) = 5.94
This parameter meets six-sigma quality requirements, since Cp is greater than 2 and Cpk is
greater than 1.5.
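The Cp/Cpk arithmetic of Example 5.14 can be sketched in Python using the standard definitions of Eqs. (5.53)–(5.56):

```python
USL, LSL = 1.050, 0.950    # specification limits, V/V
mu, sigma = 0.991, 0.0023  # measured mean and standard deviation, V/V

Cp = (USL - LSL) / (6 * sigma)           # Eq. (5.53)
T = (USL + LSL) / 2                      # Eq. (5.56): target value
k = abs(T - mu) / (0.5 * (USL - LSL))    # Eq. (5.55)
Cpk = Cp * (1 - k)                       # Eq. (5.54)

print(f"Cp = {Cp:.2f}, Cpk = {Cpk:.2f}")     # Cp about 7.25, Cpk about 5.94
print("meets six sigma:", Cp >= 2 and Cpk >= 1.5)
```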
In SPC terminology, the measurement system is referred to as a gauge. Before we can apply SPC to a manufacturing process, we first need to verify the accuracy,
repeatability, and reproducibility of the gauge. Once the quality of the testing process has been
established, the test data collected during production can be continuously monitored to verify a
stable manufacturing process.
Gauge repeatability and reproducibility, denoted GRR, is evaluated using a metric called
measurement Cp. We collect repeatability data from a single DUT using multiple testers and dif-
ferent DIBs over a period of days or weeks. The composite sample set represents the combination
of tester repeatability errors and reproducibility errors [as described by Eq. (5.50)]. Using the composite mean and standard deviation, we calculate the measurement Cp using Eq. (5.53). The gauge
repeatability and reproducibility percentage (precision-to-tolerance ratio) is defined as
%GRR = 100 / measurement Cp   (5.59)
The general criteria for acceptance of gauge repeatability and reproducibility are listed in
Table 5.3.
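As an illustrative sketch with hypothetical numbers (the σ values below are invented, not taken from the text), the %GRR calculation chains Eqs. (5.50), (5.53), and (5.59):

```python
from math import sqrt

# Hypothetical gauge-study results (illustrative values only)
sigma_rpt, sigma_rpd = 2.0e-3, 1.5e-3   # repeatability / reproducibility, V
USL, LSL = 1.050, 0.950                 # specification limits, V

sigma_tester = sqrt(sigma_rpt**2 + sigma_rpd**2)     # Eq. (5.50)
measurement_Cp = (USL - LSL) / (6 * sigma_tester)    # Eq. (5.53)
GRR_percent = 100 / measurement_Cp                   # Eq. (5.59)
print(f"measurement Cp = {measurement_Cp:.2f}, %GRR = {GRR_percent:.1f}%")
```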
5.11 SUMMARY
In this chapter we have introduced the concept of accuracy and repeatability and shown how
these concepts affect device quality and production test economics. We have examined many con-
tributing factors leading to inaccuracy and nonrepeatability. Using software calibrations, we can
eliminate or at least reduce many of the effects leading to measurement inaccuracy. Measurement
repeatability can be enhanced through averaging and filtering, at the expense of added test time.
The constant balancing act between adequate repeatability and minimum test time represents a
large portion of the test engineer’s workload. One of the fundamental skills that separates good test
engineers from average test engineers is the ability to quickly identify and correct problems with
measurement accuracy and repeatability. Doing so while maintaining low test times and high
yields is the mark of a great test engineer.
Statistical process control not only allows us to evaluate the quality of the process, includ-
ing the test and measurement equipment, but also tells us when the manufacturing process is not
stable. We can then work to fix or improve the manufacturing process to bring it back under con-
trol. We have really only scratched the surface of SPC and TQC in this chapter. Although every
test engineer may not necessarily get involved in SPC directly, it is important to understand the
basic concepts. The limited coverage of this topic is only intended as an introduction to the sub-
ject rather than a complete tutorial. For a comprehensive treatment of these subjects, the reader is
encouraged to refer to books devoted to TQC and Six Sigma.7–10
PROBLEMS
5.1. If 20,000 devices are tested with a yield of 98%, how many devices failed the test?
5.2. A new product was launched with 20,000 sales over a one-year time span. During this
time, three devices were returned to the manufacturer even though an extensive test screen-
ing procedure was in place. What is the defect level associated with this testing procedure
in parts per million?
5.3. A 55-mV signal is measured with a meter 10 times, resulting in the following sequence of
readings: 57 mV, 60 mV, 49 mV, 58 mV, 54 mV, 57 mV, 55 mV, 57 mV, 48 mV, 61 mV.
What is the average measured value? What is the systematic error?
5.4. A DC voltmeter is rated at 14 bits of resolution and has a full-scale input range of
±5 V. Assuming the meter’s ADC is ideal, what is the maximum quantization error that
we can expect from the meter? What is the error as a percentage of the meter’s full-scale
range?
5.5. A 100-mV signal is to be measured with a worst-case error of ±10 μV. A DC voltmeter is set
to a full-scale range of ±1 V. Assuming that quantization error is the only source of inac-
curacy in this meter, how many bits of resolution would this meter need to have to make
the required measurement? If the meter in our tester only has 14 bits of resolution but has
binary-weighted range settings (i.e., ±1 V, ±500 mV, ±250 mV, etc.), how would we make
this measurement?
5.6. A voltmeter is specified to have an accuracy error of ±0.1% of full-scale range on a ±1-V
scale. If the meter produces a reading of 0.323 V DC, what is the minimum and maximum
DC levels that might have been present at the meter’s input during this measurement?
5.7. A series of 100 gain measurements was made on a DUT whereby the distribution was
found to be Gaussian with mean value of 10.1 V/V and a standard deviation of 0.006 V/V.
Write an expression for the pdf of these measurements.
5.8. A series of 100 gain measurements was made on a DUT whereby the distribution was
found to be Gaussian with mean value of 10.1 V/V and a standard deviation of 0.006 V/V.
If this experiment is repeated, write an expression for the pdf of the mean values of each of
these experiments.
5.9. A series of 100 gain measurements was made on a DUT whereby the distribution was
found to be Gaussian with mean value of 10.1 V/V and a standard deviation of 0.006 V/V.
If this experiment is repeated and the mean value is compared to a reference gain value of
10 V/V, what is the mean and standard deviation of the error distribution that results? Write
an expression for the pdf of these errors.
5.10. A series of 100 gain measurements was made on a DUT whereby the distribution was
found to be Gaussian with mean value of 10.1 V/V and a standard deviation of 0.006 V/V.
If this experiment is repeated and the mean value is compared to a reference value of 10 V/V,
in what range will the expected value of the error lie for a 99.7% confidence interval?
5.11. A meter reads –1.039 V and 1.121 V when connected to two highly accurate reference
levels of –1 V and 1 V, respectively. What is the offset and gain of this meter? Write the
calibration equation for this meter.
5.12. A DC source is assumed to be characterized by a third-order equation of the form
VSOURCED = 0.004 + VPROGRAMMED + 0.001·VPROGRAMMED² − 0.007·VPROGRAMMED³
and is required to generate a DC level of 1.25 V. However, when programmed to produce this level, 1.242 V is measured. Using iteration, determine a value of the programmed source voltage that will establish a measured voltage of 1.25 V to within a ±0.5 mV accuracy.
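One way to attack Problem 5.12 numerically (a sketch, not the book's solution method) is fixed-point iteration: since the source is nearly linear, repeatedly adding the residual error to the programmed value converges in a couple of steps:

```python
def v_sourced(v_prog):
    """Third-order source characteristic from Problem 5.12."""
    return 0.004 + v_prog + 0.001 * v_prog**2 - 0.007 * v_prog**3

target = 1.25      # desired measured voltage, V
v_prog = target    # initial guess: program the target value itself

# Fixed-point iteration: bump the programmed value by the remaining error
for _ in range(20):
    error = target - v_sourced(v_prog)
    if abs(error) < 0.5e-3:   # +/- 0.5 mV accuracy requirement met
        break
    v_prog += error

print(f"program {v_prog:.4f} V -> measure {v_sourced(v_prog):.4f} V")
```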
5.13. An AWG has a gain response described by G(f) = 1/[1 + (f/4000)²] and is to generate three
tones at frequencies of 1, 2, and 3 kHz. What are the gain calibration factors? What voltage
levels would we request if we wanted an output level of 500 mV RMS at each frequency?
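The calibration arithmetic for Problem 5.13 can be sketched as follows, assuming the garbled gain expression reads G(f) = 1/[1 + (f/4000)²]; the gain calibration factor at each tone is simply 1/G(f), and the requested level is the target level times that factor:

```python
def gain(f_hz):
    # Assumed AWG gain response: G(f) = 1 / (1 + (f/4000)^2)
    return 1.0 / (1.0 + (f_hz / 4000.0) ** 2)

target_rms = 0.5   # desired output level, V RMS
for f in (1000.0, 2000.0, 3000.0):
    cal_factor = 1.0 / gain(f)             # gain calibration factor
    requested = target_rms * cal_factor    # level to request from the AWG
    print(f"{f / 1000:.0f} kHz: cal factor {cal_factor:.4f}, "
          f"request {requested * 1000:.2f} mV RMS")
```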
5.14. Several DC measurements are made on a signal path that contains a filter and a buffer
amplifier. At input levels of 1 and 3 V, the output was found to be 1.02 and 3.33 V, respec-
tively. Assuming linear behavior, what is the gain and offset of this filter-buffer stage?
5.15. Using the setup and results of Problem 5.14, what is the calibrated level when a 2.13 V
level is measured at the filter-buffer output? What is the size of the uncalibrated error?
5.16. A simple RC low-pass circuit is constructed using a 1-kΩ resistor and a 10-μF capacitor.
This RC circuit is used to filter the output of a DUT containing a noisy DC signal. If the
DUT’s noise voltage has a constant spectral density of 100 nV/√Hz, what is the RMS
noise voltage that appears at the output of the RC filter? If we decrease the capacitor
value to 2.2 μF, what is the RMS noise voltage at the RC filter output?
5.17. Assume that we want to allow the RC filter in Problem 5.16 to settle to within 0.2% of its
final value before making a DC measurement. How much settling time does the first RC
filter in Problem 5.16 require? Is the settling time of the second RC filter greater or less
than that of the first filter?
5.18. A DC meter collects a series of repeated offset measurements at the output of a DUT.
A first-order low-pass filter such as the first one described in Problem 5.16 is connected
between the DUT output and the meter input. A histogram is produced from the repeated
measurements. The histogram shows a Gaussian distribution with a 50-mV difference
between the maximum value and minimum value. It can be shown that the standard devia-
tion, σ, of the histogram of a repeated series of identical DC measurements on one DUT
is proportional to the RMS noise at the meter’s input. Assume that the difference between
the maximum and minimum measured values is roughly equal to 6σ. How much would we
need to reduce the cutoff frequency of the low-pass filter to reduce the nonrepeatability of
the measurements from 50 to 10 mV? What would this do to our test time, assuming that
the test time is dominated by the settling time of the low-pass filter?
5.19. The DUT in Problem 5.16 can be sold for $1.25, assuming that it passes all tests. If it
does not pass all tests, it cannot be sold at all. Assume that the more repeatable DC offset
measurement in Problem 5.18 results in a narrower guardband requirement, causing the
production yield to rise from 92% to 98%. Also assume that the cost of testing is known to
be 3.5 cents per second and that the more repeatable measurement adds 250 ms to the test
time. Does the extra yield obtained with the lower filter cutoff frequency justify the extra
cost of testing resulting from the filter’s longer settling time?
5.20. A series of DC offset measurements reveal an average value of 10 mV and a standard
deviation of 11 mV. If our specification limits were 0 ± 50 mV, where would we have to set
our 3σ guardbanded upper and lower test limits? If 6σ guardbands are desired, how many
times would we have to average the measurement to achieve guardbands of 20 mV?
5.21. A DC offset measurement is repeated many times, resulting in a series of values having
an average of −100 mV. The measurements exhibit a standard deviation of 38 mV. What is
the probability that the offset measurement will fail on any given test program execution?
Assume an upper test limit of 0 mV and a lower test limit of −150 mV. Provide a sketch
of the pdf, label critical points, and highlight the area under the pdf that corresponds to the
probability of interest.
5.22. A gain measurement is repeated many times, resulting in a series of values having an aver-
age of 6.5 V/V. The measurements exhibit a standard deviation of 0.05 V/V. If our specifi-
cation limits are 6.0 ± 0.5 V/V, where would we have to set our 3 σ guardbanded upper and
lower test limits? If 6σ guardbands are desired, how many times would we have to average
the measurement to achieve guardbands of 0.1 V/V?
5.23. A DC offset test is performed on a DUT with lower and upper specification limits of −12
mV and 12 mV, respectively. The expected RMS level of the noise present during the test is
1.5 mV. If a defect level of less than 200 ppm is required, what should be the test limits?
5.24. A device is expected to exhibit a worst-case offset voltage of ±10 mV and is to be measured
using a voltmeter having an accuracy of only ±500 μV. Where should the guardbanded test
limits be set?
5.25. The guardband of a particular measurement is 0.2 V/V and the test limits are set to 6.1 V/V
and 6.2 V/V. What are the original device specification limits?
5.26. A series of DC measurements reveal the following list of values:
{0 mV, −10 mV, 1.5 mV, 9.5 mV, −8.5 mV, 13.2 mV,
18.5 mV, −17.2 mV, 5.3 mV, and 6.2 mV}
If our specification limits were 0 ± 50 mV, where would we have to set our 3 σ guardband
upper and lower test limits? Provide a sketch to illustrate the probability density function
and show the test limits. If 6σ guardbands are desired, how many times would we have to
average the measurement to achieve guardbands of 24 mV?
5.27. The following contains a list of output voltage values from a DUT together with their
actual measured values (i.e., sets of (true value, measured value) ):
{(1.9, 1.81), (2.1, 1.75), (2.1,1.77), (1.8, 1.79), (1.9, 1.71), (2.1, 1.95),
(2.2, 2.11), (1.7,1.89), (1.5, 1.7)}
If the upper specification limit is 2 V and the guardbanded upper test limit is set to 1.8 V,
answer the following questions:
(a) How many good devices are rejected on account of measurement error?
(b) How many devices escape the test?
(c) If the upper test limit is reduced to 1.74 V, how many devices escape on account of
measurement error?
5.28. An AC gain measurement is repeated many times, resulting in a series of values having an
average of 0.99 V/V. The measurements exhibit a standard deviation of 0.2 V/V. What is
the probability that the gain measurement will fail on any given test program execution?
Assume an upper test limit of 1.2 V/V and a lower test limit of 0.98 V/V. Provide a sketch
of the pdf, label critical points, and highlight the area under the pdf that corresponds to the
probability of interest.
5.29. The standard deviation of a measurement repeatability is found to be 12 mV, while the
standard deviation of the reproducibility is found to be 8 mV. Determine the standard
deviation of the tester’s variability. If process variation contributes an additional 10 mV
of uncertainty to the measurement, what is the total standard deviation of the overall
measurement?
5.30. An extensive study of yield finds that the total standard deviation of a particular DC offset
measurement is 25 mV across multiple lots, multiple testers, multiple DIB boards, and so
on. The standard deviation of the measurement repeatability is found to be 19 mV, while
the standard deviation of the reproducibility is found to be 11 mV. What is the standard
deviation of the actual DUT-to-DUT offset variability, excluding tester repeatability errors
and reproducibility errors? If we could test this device using perfectly accurate, repeatable
test equipment, what would be the total yield loss due to this parameter, assuming an average value of 2.235 V and test limits of 2.25 V ± 40 mV?
5.31. A particular test program performs 1000 tests, most of which cause little or no yield loss.
Seven measurements account for most of the yield loss. The yield loss due to each mea-
surement is found to be: Test #1: 1.1%; Test #2: 6%; Test #3: 3.3%; Test #4: 8%; Test #5:
2%; Test #6: 2%; Test #7: 3%; all other tests: 1%. What is the overall yield of this lot of
material?
5.32. The values of an AC noise measurement are collected from a large sample of the DUTs
in a production lot. The average RMS reading is 0.12 mV and the upper and lower RMS
specification limits are 0.15 and 0.10 mV, respectively. The standard deviation is found to
be 0.015 mV. What is the process capability and the values of Cp and Cpk for this lot? Does
this lot meet six-sigma quality standards?
REFERENCES
1. A. H. Moorehead et al., The New American Roget’s College Thesaurus, New American Library,
New York, 1985, p. 6.
2. Webster’s New World Dictionary, Simon and Schuster, New York, August 1995, pp. 5, 463.
3. B. Metzler, Audio Measurement Handbook, Audio Precision, Inc., Beaverton, OR, August 1993,
p. 147.
4. W. D. Cooper, Electronic Instrumentation and Measurement Techniques, 2nd edition, Prentice
Hall, Englewood Cliffs, NJ, 1978, ISBN 0132517108, pp. 1, 2.
5. R. F. Graf, Modern Dictionary of Electronics, Newnes Press, Boston, July 1999, ISBN 0750698667,
pp. 5, 6, 584.
6. G. W. Roberts and S. Aouini, An overview of mixed-signal production test: Past, present and
future, IEEE Design & Test of Computers, 26(5), pp. 48–62, September/October 2009.
7. J. M. Juran (editor) and A. Blanford Godfrey, Juran’s Quality Handbook, 5th edition, January
1999, McGraw-Hill, New York, ISBN 007034003X.
8. M. J. Kiemele, S. R. Schmidt, and R. J. Berdine, Basic Statistics, Tools for Continuous Improve-
ment, 4th edition, Air Academy Press, Colorado Springs, CO, 1997, ISBN 1880156067,
pp. 9–71.
9. Thomas Pyzdek, The Complete Guide to Six Sigma, Quality Publishing, Tucson, AZ, 1999, ISBN
0385494378.
10. Forrest W. Breyfogle, Implementing Six Sigma: Smarter Solutions Using Statistical Methods, 2nd
edition, June 7, 1999, John Wiley & Sons, New York, ISBN 0471296597.
CHAPTER 6
DAC Testing
Data converters (digital-to-analog and analog-to-digital) are used in all aspects of system and
circuit design, from audio and video players to cellular telephones to ATE test hardware.
When used in conjunction with computers and microprocessors, low-cost mixed-signal systems
and circuits have been created that have high noise immunity and an ability to store, retrieve, and
transmit analog information in digital format. Such systems have fueled the growth in the use
of the Internet, and this growth continues to push data converter technology to higher operating
frequencies and larger bandwidths, along with higher conversion resolution and accuracy.
In this chapter, we will focus on testing the intrinsic parameters of a digital-to-analog con-
verter (DAC). The next chapter will look at testing the intrinsic parameters of an analog-to-digital
converter (ADC). Intrinsic parameters are those that are inherent to the circuit itself and whose
values do not depend on the nature of the stimulus. This includes such measurements
as absolute error, integral nonlinearity (INL), and differential nonlinearity (DNL). For
the most part, intrinsic measurements are related to the DC behavior of the device. In contrast,
the AC or transmission parameters, such as gain, gain tracking, signal-to-noise ratio, and signal to
harmonic distortion, are strongly dependent on the nature of the stimulus signal. For instance, the
amplitude and frequency of the sine wave used in a signal-to-distortion test will often affect the
measured result. We defer a discussion of data converter transmission parameters until Chapter 11,
after the mathematical details of AC signaling and measurement are described.
When testing a DAC or ADC, it is common to measure both intrinsic parameters and trans-
mission parameters for characterization. However, it is often unnecessary to perform the full suite
of transmission tests and intrinsic tests in production. The production testing strategy is often
determined by the end use of the DAC or ADC. For example, if a DAC is to be used as a program-
mable DC voltage reference, then we probably do not care about its signal-to-distortion ratio at
1 kHz. We care more about its worst-case absolute voltage error. On the other hand, if that same
DAC is used in a voice-band codec to reconstruct voice signals, then we have a different set of
concerns. We do not care as much about the DAC’s absolute errors as we care about their end
effect on the transmission parameters of the composite audio channel, comprising the DAC, low-
pass filter, output buffer amplifiers, and so on.
This example highlights one of the differences between digital testing and specification-
oriented mixed-signal testing. Unlike digital circuits, which can be tested based on what they are
(NAND gate, flip-flop, counter, etc.), mixed-signal circuits are often tested based on what they do
in the system-level application (precision voltage reference, audio signal reconstruction circuit,
video signal generator, etc.). Therefore, a particular analog or mixed-signal subcircuit may be
copied from one design to another without change, but it may require a totally different suite of
tests depending on its intended functionality in the system-level application.
An ideal DAC converts its digital input code into an analog output according to
vOUT = GDAC × DIN   (6.1)
where DIN is some integer value, vOUT is a real-valued output value, and GDAC is some real-valued
proportionality constant. Because the input DIN is typically taken from a digital system, it may
come in the form of a D-bit-wide base-2 unsigned integer number expressed as
DIN = b0 + b1·2¹ + b2·2² + ⋯ + bD−1·2^(D−1)   (6.2)
where the coefficients bD −1 , bD − 2 ,..., b2 , b1 , b0 have either a 0 or 1 value. A commonly used symbol
for a DAC is that shown in Figure 6.1. Coefficient bD–1 is regarded as the most significant bit
(MSB), because it has the largest effect on the number and the coefficient b0 is known as the least
significant bit (LSB), as it has the smallest effect on the number.
For a single LSB change at the input (i.e., ΔDIN = 1 LSB), we see from Eq. (6.1) that the
smallest voltage change at the output is ΔvOUT = GDAC × 1 LSB. Because this quantity is called upon
frequently, it is designated as VLSB and is referred to as the least significant bit step size. The trans-
fer characteristic of a 4-bit DAC with decoding equation
vOUT = (1/10) DIN ,   DIN ∈ {0,1,...,15}   (6.3)
is shown in Figure 6.2a. Here the DAC output ranging from 0 to 1.5 V is plotted as a function of the
digital input. For each input digital word, a single analog voltage level is produced, reflecting the one-
to-one input–output mapping of the DAC. Moreover, the LSB step size, VLSB, is equal to 0.1 V.
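The decoding of Eqs. (6.2) and (6.3) can be sketched as follows (a hypothetical Python illustration; the function names are not from the text, and the 0.1 V/bit gain is taken from the 4-bit example above):

```python
def decode_unsigned(bits):
    """Eq. (6.2): convert a bit list [b0, b1, ..., bD-1] to the integer DIN."""
    return sum(b << k for k, b in enumerate(bits))

def dac_output(din, gain=0.1):
    """Eq. (6.3): ideal DAC output with GDAC = 1/10 V per code."""
    return gain * din

din = decode_unsigned([1, 1, 1, 1])   # all four bits set -> DIN = 15
print(dac_output(din))                # full-scale output, 1.5 V
```

Each input code maps to exactly one output level, reflecting the one-to-one nature of the ideal DAC transfer curve.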
Figure 6.1. Symbol for a DAC, with digital input bits b1, b2, b3, ..., bD and a single analog output.
Figure 6.2. (a) DAC code-to-voltage transfer curve (b) ADC voltage-to-code transfer curve.
Alternatively, we can speak about the gain of the DAC (GDAC) as the ratio of the range of
output values to the range of input values as follows
GDAC = (vOUT,max − vOUT,min)/(DIN,max − DIN,min)   (6.4)
If we denote the full-scale output range as VFSR = vOUT,max − vOUT,min and the input integer range as
2^D − 1 (for the above example, 15 − 0 = 2^4 − 1), then the DAC gain becomes
GDAC = VFSR/(2^D − 1)   (6.5)
expressed in terms of volts per bit. Consequently, the LSB step size for the ideal DAC in volts can
be written as
VLSB = VFSR/(2^D − 1)   (6.6)
Interestingly enough, if the terms vOUT,min and vOUT,max cover some arbitrary voltage range corre-
sponding to an arbitrary range of digital inputs, then the DAC input-output behavior can be described in
terms identical to the ideal DAC described above, except that an offset term is added, as follows:
vOUT = GDAC × DIN + Voffset   (6.7)
Returning to our number line analogy, we recognize the ADC process is one that maps an input
analog value that is represented on a real number line to a value that lies on an integer number line.
Figure 6.3. Symbol for an ADC, with a single analog input and digital output bits b1, b2, b3, ..., bD.
However, not all numbers on a real number line map directly to a value on the integer number line.
Herein lies the challenge with the encoding process. One solution to this problem is to divide the
analog input full-scale range (VFSR) into 2D equal-sized intervals according to
VLSB = VFSR/2^D   (6.9)
and assign each interval a code number. Mathematically, we can write this in the form of a set of
inequalities as follows:
DOUT = 0,         VFS− ≤ vIN < VLSB
       1,         VLSB ≤ vIN < 2VLSB
       ...
       2^D − 2,   (2^D − 2) VLSB ≤ vIN < (2^D − 1) VLSB
       2^D − 1,   (2^D − 1) VLSB ≤ vIN ≤ VFS+          (6.10)
where VFS− and VFS+ define the ADC full-scale range of operation, that is, VFSR = VFS+ − VFS−. The
transfer characteristic for a 4-bit ADC is shown in Figure 6.2b for a full-scale input range between
0 and 1.5 V and an LSB step size VLSB of 0.09375 V.
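The encoding rule of Eq. (6.10) can be sketched in Python (a hypothetical helper, not from the text, using the 4-bit example with VFS− = 0 V and VFS+ = 1.5 V shown in Figure 6.2b):

```python
import math

def adc_encode(vin, vfs_minus=0.0, vfs_plus=1.5, d=4):
    """Eq. (6.10): divide the full-scale range into 2^D equal bins (Eq. 6.9)."""
    vlsb = (vfs_plus - vfs_minus) / 2**d          # 1.5/16 = 0.09375 V
    code = math.floor((vin - vfs_minus) / vlsb)   # which bin vin falls into
    return min(max(code, 0), 2**d - 1)            # clamp to codes 0..2^D - 1

print(adc_encode(0.05))   # below one LSB -> code 0
print(adc_encode(1.5))    # vin = VFS+ maps to the top code, 15
```

Note the many-to-one behavior: every voltage within a bin produces the same output code, which is exactly the encoding challenge discussed above.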
The transfer characteristic of an ADC is not the same across all ADCs, unlike the situa-
tion that one finds for DACs. The reason for this comes back to the many-to-one mapping issue
described above. Two common approaches used by ADC designers to define ADC operation are
based on the mathematical principle of rounding or truncating fractional real numbers. The trans-
fer characteristics of these two types of ADCs are shown in Figure 6.4 for a 4-bit example with
VFS − = 0 V and VFS+ = 1.5 V . If the ADC is based on the rounding principle, then the ADC transfer
characteristics can be described in general terms as
DOUT = 0,         VFS− ≤ vIN < (1/2) VLSB
       1,         (1/2) VLSB ≤ vIN < (3/2) VLSB
       ...
       2^D − 2,   [(2^D − 2) − 1/2] VLSB ≤ vIN < [(2^D − 1) − 1/2] VLSB
       2^D − 1,   [(2^D − 1) − 1/2] VLSB ≤ vIN ≤ VFS+          (6.11)
Figure 6.4. Alternative definitions of the ADC transfer characteristic. (a) ADC operation based on the
rounding operation; (b) ADC operation based on the truncation operation.
If instead the ADC is based on the truncation principle, then the ADC transfer characteristic can
be described as
DOUT = 0,         VFS− ≤ vIN < VLSB
       1,         VLSB ≤ vIN < 2VLSB
       ...
       2^D − 2,   (2^D − 2) VLSB ≤ vIN < (2^D − 1) VLSB
       2^D − 1,   (2^D − 1) VLSB ≤ vIN ≤ VFS+          (6.12)
In both of these cases, the full-scale range is no longer divided into equal segments. These
new definitions lead to a different value for the LSB step size than that described earlier. For these
two cases, it is given by
VLSB = VFSR/(2^D − 1)   (6.13)
Finally, to complete our discussion on ideal ADCs, we would like to point out that the proportional-
ity constant GADC in Eq. (6.8) can be expressed in terms of bits per volt as
GADC = (2^D − 1)/VFSR   (6.14)
For both the truncating- and rounding-based ADC, its gain is equal to the reciprocal of the LSB
step size as is evident when Eqs. (6.13) and (6.14) are compared.
Unsigned binary format places the lowest voltage at code 0 and the highest voltage at the code
with all 1’s. For example, an 8-bit DAC with a full-scale voltage range of 1.0 to 3.0 V would have
the code-to-voltage relationship shown in Table 6.1.
One LSB step size is equal to the full-scale voltage range, VFS+ − VFS−, divided by the number
of DAC codes (i.e., 2^D) minus one
VLSB = (VFS+ − VFS−)/(# DAC codes − 1)   (6.16)
In this example, the voltage corresponding to one LSB step size is equal to (3.0 V − 1.0 V)/255
= 7.843 mV. Sometimes the full-scale voltage is defined with an additional imaginary code
above the maximum code (i.e., code 256 in our 8-bit example). If so, then the LSB size would be
(3.0 V − 1.0 V)/256 = 7.8125 mV. This source of ambiguity should be clarified in the data sheet.
Another common data format is two's complement, written exactly the same as an unsigned
binary number, for example, bD−1 bD−2 ... b2 b1 b0. A two's complement binary representation is con-
verted to its equivalent base-10 integer value using the equation
DIN = −bD−1 2^(D−1) + bD−2 2^(D−2) + ... + b1 2^1 + b0 2^0   (6.17)
A two’s complement binary formatted number can be used to express both positive and negative
integer values. Positive numbers are encoded the same as an unsigned binary in two’s comple-
ment, except that the most significant bit must always be zero. When the most significant bit is
one, the number is negative. To multiply a two’s complement number by –1, all bits are inverted
and one is added to the result. The two’s complement encoding scheme for an 8-bit DAC is shown
in Table 6.2. As is evident from the table, all outputs are made relative to the DAC's midscale value
of 2.0 V. This level corresponds to input digital code 0. Also evident from this table is that the LSB is
equal to 5 mV. The midscale (MS) value is computed from either of the following two expressions
using knowledge of the lower and upper limits of the DAC's full-scale range, denoted VFS– and
VFS+, respectively, together with the LSB step size obtained from Eq. (6.16), as follows:
VMS = VFS− + (# DAC codes / 2) VLSB   (6.18)
VMS = VFS+ − (# DAC codes / 2 − 1) VLSB   (6.19)
Note that the two’s complement encoding scheme is slightly asymmetrical since there are
more negative codes than positive ones.
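The two's complement rules above can be sketched in Python (a hypothetical illustration, not from the text, using the 8-bit example with midscale 2.0 V and VLSB = 5 mV from Table 6.2):

```python
def twos_comp_to_int(pattern, d=8):
    """Eq. (6.17): interpret a D-bit pattern as a two's complement integer.
    The MSB carries a negative weight of -2^(D-1)."""
    if pattern & (1 << (d - 1)):       # MSB set -> negative number
        return pattern - (1 << d)
    return pattern

def dac_voltage(code, v_ms=2.0, vlsb=0.005):
    """Output relative to the midscale value, which corresponds to code 0."""
    return v_ms + code * vlsb

print(twos_comp_to_int(0b11111111))                 # all ones -> -1
print(dac_voltage(twos_comp_to_int(0b10000000)))    # most negative code, -128
```

The asymmetry is visible here: codes run from −128 to +127, so there is one more negative code than positive.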
One’s complement format is similar to two’s complement, except that it eliminates the
asymmetry by defining 11111111 as minus zero instead of minus one, thereby making 11111111
a redundant code. One’s complement format is not commonly used in data converters because
it is not quite as compatible with mathematical computations as two’s complement or unsigned
binary formats.
Sign/magnitude format uses the most significant bit as a sign bit and the remaining bits as the
magnitude. The equivalent base-10 integer value is
DIN = (−1)^(bN−1) × (bN−2 2^(N−2) + bN−3 2^(N−3) + ... + b1 2^1 + b0 2^0)   (6.20)
Like one’s complement, sign/magnitude format also has a redundant negative zero value. Table
6.3 shows sign/magnitude format for the 8-bit DAC example. The midscale level corresponding to
input code 0 for this type of converter is
VMS = (VFS+ + VFS−)/2   (6.21)
and, because of the redundant code, one LSB step size is equal to
VLSB = (VFS+ − VFS−)/(# DAC codes − 2)   (6.22)
Two other data formats, mu-law and A-law, were developed in the early days of digital tele-
phone equipment. Mu-law is used in North American and related telephone systems, while A-law
is used in European telephone systems. Today the mu-law and A-law data formats are sometimes
found not only in telecommunications equipment but also in digital audio applications, such as PC
sound cards. These two data formats are examples of companded encoding schemes.
Companding is the process of compressing and expanding a signal as it is digitized and
reconstructed. The idea behind companding is to digitize or reconstruct large amplitude signals
with coarse converter resolution while digitizing or reconstructing small amplitude signals with
finer resolution. The companding process results in a signal with a fairly constant signal to quan-
tization noise ratio, regardless of the signal strength.
Compared with a traditional linear converter having the same number of bits, a companding
converter has worse signal-to-noise ratio when signal levels are near full scale, but better signal-
to-noise ratios when signal levels are small. This tradeoff is desirable for telephone conversations,
since it limits the number of bits required for transmission of digitized voice. Companding is
therefore a simple form of lossy data compression.
Figure 6.5. Comparison of linear and companding 4-bit ADC-to-DAC transfer curves.
Figure 6.5 shows the transfer curve of a simple 4-bit companded ADC followed by a 4-bit
DAC. In a true logarithmic companding process such as the one in Figure 6.5, the analog signal is
passed through a linear-to-logarithmic conversion before it is digitized. The logarithmic process
compresses the signal so that small signals and large signals appear closer in magnitude. Then the
compressed signal may be digitized and reconstructed using an ADC and DAC. The reconstructed
signal is then passed through a logarithmic-to-linear conversion to recover a companded version
of the original signal.
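The logarithmic compress/expand pair can be sketched with the standard μ-law characteristic (μ = 255); this continuous curve is what the chord-based piecewise approximation described below emulates (a hypothetical Python sketch, not from the text):

```python
import math

MU = 255  # mu-law constant used in North American telephony

def compress(x):
    """Map x in [-1, 1] through the logarithmic mu-law curve."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def expand(y):
    """Inverse mapping: recover the linear value from the compressed one."""
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)

# Small signals are stretched toward full scale before quantization...
print(compress(0.01))            # roughly 0.23: much finer resolution near zero
# ...and compress followed by expand round-trips to the original value.
print(expand(compress(-0.5)))
```

Quantizing the compressed signal therefore spends more codes on small amplitudes, which is what keeps the signal-to-quantization-noise ratio roughly constant.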
Exercises
6.5. A 4-bit DAC has a full-scale voltage range of 0 to 5 V. The input is formatted using an
unsigned binary number representation. List all possible ideal output levels. What output level
corresponds to the DAC input code 0? What is the VLSB?
ANS. Codes 0 to 15: 0, 0.333, 0.666, 0.999, 1.33, 1.66, 2.00, 2.33, 2.66, 3.00, 3.33, 3.66, 4.00,
4.33, 4.66, 5.00 V; code 0 = 0 V; VLSB = 0.333 V.
The mu-law and A-law encoding and decoding rules are a sign/magnitude format with a piecewise
linear approximation of a true logarithmic encoding scheme. They define a varying LSB size that is
small near 0 and larger as the voltage approaches plus or minus full scale. Each of the piecewise linear
sections is called a chord. The steps in each chord are of a constant size. The piecewise approximation
was much easier to implement in the early days of digital telecommunications than a true logarithmic
companding scheme, since the piecewise linear sections could be implemented with traditional binary
weighted ADCs and DACs. Today, the A-law and mu-law encoding and decoding process is often
performed using lookup tables combined with linear sigma-delta ADCs and DACs having at least 13
bits of resolution. A more complete discussion of A-law and mu-law codec testing can be found in
Matthew Mahoney’s book, DSP-based Testing of Analog and Mixed-Signal Circuits.1
Before we discuss testing methodologies for each type of DAC, we first need to outline the
DC and dynamic tests commonly performed on DACs. The DC tests include the usual specifica-
tions like gain, offset, power supply sensitivity, and so on. They also include converter-specific
tests such as absolute error, monotonicity, integral nonlinearity (INL), and differential nonlinear-
ity (DNL), which measure the overall quality of the DAC’s code-to-voltage transfer curve. The
dynamic tests are not always performed on DACs, especially those whose purpose is to provide
DC or low-frequency signals. However, dynamic tests are common in applications such as video
DACs, where fast settling times and other high-frequency characteristics are key specifications.
Figure 6.6. DAC transfer curve showing the gain (slope) based on the endpoint codes.
reasonable linearity, the errors between these two techniques become very small. Nevertheless,
the best-fit line approach is independent of DAC resolution; thus it is the preferred technique.
A best-fit line is commonly defined as the line having the minimum squared errors between
its ideal, evenly spaced samples and the actual DAC output samples. For a sample set S(i), where i
ranges from 0 to N − 1 and N is the number of samples in the sample set, the best-fit line is defined
by its slope (DAC gain) and offset using a standard linear equation having the form
Best_fit_line(i) = gain × i + offset
The equations for slope and offset can be derived using various techniques. One technique
minimizes the partial derivatives with respect to slope and offset of the squared errors between the
sample set S and the best-fit line. Another technique is based on linear regression.2 The equations
derived from the partial derivative technique are
gain = (N K4 − K1 K2)/(N K3 − K1^2) ,   offset = K2/N − gain × K1/N   (6.25)
where
K1 = Σ(i=0..N−1) i,   K2 = Σ(i=0..N−1) S(i),   K3 = Σ(i=0..N−1) i^2,   K4 = Σ(i=0..N−1) i S(i)
The derivation details are left as an exercise in the problem set found at the end of this chapter.
These equations translate very easily into a computer program.
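As a hypothetical Python equivalent of the MATLAB routine discussed below (variable names chosen to mirror the Gain, Offset, and Best_fit_line variables of the text; the sample data are those of Example 6.1):

```python
def best_fit(S):
    """Least-squares line through samples S(0)..S(N-1), per Eq. (6.25)."""
    N = len(S)
    K1 = sum(range(N))                        # sum of indices i
    K2 = sum(S)                               # sum of samples S(i)
    K3 = sum(i * i for i in range(N))         # sum of i^2
    K4 = sum(i * s for i, s in enumerate(S))  # sum of i*S(i)
    gain = (N * K4 - K1 * K2) / (N * K3 - K1**2)
    offset = K2 / N - gain * K1 / N
    best_fit_line = [gain * i + offset for i in range(N)]
    return gain, offset, best_fit_line

# DAC samples of Example 6.1 (volts), codes -8 through +7:
S = [-0.780, -0.705, -0.530, -0.455, -0.400, -0.325, -0.150, -0.075,
      0.120,  0.195,  0.370,  0.445,  0.500,  0.575,  0.750,  0.825]
gain, offset, line = best_fit(S)
print(gain, offset)   # about 0.10935 V/bit and -0.79765 V
```

Because the slope is computed from every sample, no single code's error dominates the result.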
The values in the array Best_ fit_line represent samples falling on the least-squared-error line.
The program variable Gain represents the gain of the DAC, in volts per bit. This gain value is the
average gain across all DAC samples. Unlike the gain calculated from the full-scale range divided
by the number of code transitions, the slope of the best-fit line represents the true gain of the DAC.
It is based on all samples in the DAC transfer curve and therefore is not especially sensitive to any
one code’s location. Gain error, ∆G, expressed as a percent, is defined as
ΔG = (GACTUAL/GIDEAL − 1) × 100%   (6.26)
Likewise, the best-fit line’s calculated offset is not dependent on a single code as it is in the
midscale code method. Instead, the best-fit line offset represents the offset of the total sample set.
The DAC’s offset is defined as the voltage at which the best-fit line crosses the y axis. The DAC’s
offset error is equal to its offset minus the ideal voltage at this point in the DAC transfer curve.
The y axis corresponds to DAC code 0.
In unsigned binary DACs, this voltage corresponds to Best_fit_line(1) in the MATLAB rou-
tine. However, in two’s complement DACs, the value of Best_fit_line(1) corresponds to the DAC’s
VFS– voltage, and therefore does not correspond to DAC code 0. In an 8-bit two’s complement
DAC, for example, the 0 code point is located at i = 128. Therefore, the value of the program vari-
able Offset does not correspond to the DAC’s offset. This discrepancy arises simply because we
cannot use negative index values in MATLAB code arrays such as Best_fit_line(–128). Therefore,
to find the DAC’s offset, one must determine which sample in vector Best_fit_line corresponds to
the DAC’s 0 code. The value at this array location is equal to the DAC’s offset. The ideal voltage
at the DAC 0 code can be subtracted from this value to calculate the DAC’s offset error.
EXAMPLE 6.1
A 4-bit two’s complement DAC produces the following set of voltage levels, starting from code –8
and progressing through code +7:
−780 mV, −705 mV, −530 mV, −455 mV, −400 mV, −325 mV, −150 mV, −75 mV,
+120 mV, +195 mV, +370 mV, +445 mV, +500 mV, +575 mV, +750 mV, +825 mV
These code levels are shown in Figure 6.7. The ideal DAC output at code 0 is 0 V. The ideal gain
is equal to 100 mV/bit. Calculate the DAC’s gain (volts per bit), gain error, offset, and offset error
using a best-fit line as reference.
Figure 6.7. A 4-bit DAC transfer curve and best-fit line.
Solution:
We calculate gain and offset using the previous MATLAB routine, resulting in a gain value of
109.35 mV/bit and an offset value of –797.64 mV. The gain error is found from Eq. (6.26) to be
ΔG = (109.35 mV / 100 mV − 1) × 100% = 9.35%
Because this DAC uses a two's complement encoding scheme, this offset value is the value of the
best-fit line at the first code (code −8), not the offset of the DAC at code 0.
The DAC's offset is found by calculating the best-fit line's value at DAC code 0, which corresponds to i = 8:
DAC offset = gain × 8 + offset = 109.35 mV × 8 − 797.64 mV = 77.16 mV
6.2.5 DC PSS
DAC DC power supply sensitivity (PSS) is easily measured by applying a fixed code to the DAC’s
input and measuring the DC gain from one of its power supply pins to its output. PSS for a DAC
is therefore identical to the measurement of PSS in any other circuit, as described in Section 3.8.1.
The only difference is that a DAC may have different PSS performance depending on the applied
digital code. Usually, a DAC will exhibit the worst PSS performance at its full-scale and/or minus
full-scale settings because these settings tie the DAC output directly to a voltage derived from
the power supply. Worst-case conditions should be used once they have been determined through
characterization of the DAC.
Exercises
6.9. A 4-bit unsigned binary DAC produces the following set of voltage levels, starting from
code 0 and progressing through to code 15:
1.0091, 1.2030, 1.3363, 1.5617, 1.6925, 1.9453, 2.0871, 2.3206,
2.4522, 2.6529, 2.8491, 2.9965, 3.1453, 3.3357, 3.4834, 3.6218
The ideal DAC output at code 0 is 1 V and the ideal gain is equal to 200 mV/bit. The data
sheet for this DAC specifies offset using a best-fit line, evaluated at code 0. Gain is also
specified using a best-fit line. Calculate the DAC's gain (volts per bit), gain error, offset,
and offset error.
ANS. G = 177.3 mV/bit; ΔG = −11.3%; offset = 1.026 V; offset error = 26.1 mV.
6.10. Estimate the LSB step size of the DAC described in Exercise 6.7 using its measured
full-scale range (i.e., using the endpoint method). What are the gain error and offset error?
ANS. LSB = 174.2 mV; ΔG = −12.9%; offset error = 9.1 mV.
ΔS(i) = (S(i) − S_IDEAL(i))/VLSB   (6.27)
EXAMPLE 6.2
Assuming an ideal gain of 100 mV per LSB and an ideal offset of 0 V at code 0, calculate the
absolute error curve for the 4-bit DAC of the previous example. Express the results in terms
of LSBs.
Solution:
The ideal DAC levels are –800, –700, . . . , +700 mV. Subtracting these ideal values from the actual
values, we can calculate the absolute voltage errors ∆S(i) as:
+20 mV, −5 mV, +70 mV, +45 mV, 0 mV, −25 mV, +50 mV, +25 mV, +120 mV,
+95 mV, +170 mV, +145 mV, +100 mV, +75 mV, +150 mV, +125 mV
The maximum absolute error is +170 mV and the minimum absolute error is –25 mV. Divid-
ing each value by the ideal LSB size (100 mV), we get the normalized error curve shown in
Figure 6.8. This curve shows that this DAC’s maximum and minimum absolute errors are
+1.7 and –0.25 LSBs, respectively. In a simple 4-bit DAC, this would be considered very bad
performance, but this is an imaginary DAC designed for instructional purposes. In high-
resolution DACs, on the other hand, absolute errors of several LSBs are common. The larg-
er normalized absolute error in high-resolution DACs is a result of the smaller LSB size.
Therefore, absolute error testing is often replaced by gain, offset, and linearity testing in
high-resolution DACs.
Figure 6.8. Normalized absolute error curve for the 4-bit DAC of Example 6.2.
6.3.2 Monotonicity
A monotonic DAC is one in which each voltage in the transfer curve is larger than the previous
voltage, assuming a rising voltage ramp for increasing codes. (If the voltage ramp is expected to
decrease with increasing code values, we simply have to make sure that each voltage is less than
the previous one.) While the 4-bit DAC in the previous examples has a terrible set of absolute
errors, it is nevertheless monotonic. Monotonicity testing requires that we take the discrete first
derivative of the transfer curve, denoted here as S ′ (i ), according to
S ′ (i ) = S (i + 1) − S (i ) (6.28)
If the derivatives are all positive for a rising ramp input or negative for a falling ramp input,
then the DAC is said to be monotonic.
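The first-derivative test of Eq. (6.28) can be sketched as follows (hypothetical Python, checked here against the Example 6.1 samples, which are monotonic despite their large absolute errors):

```python
def is_monotonic(S, rising=True):
    """Eq. (6.28): form S'(i) = S(i+1) - S(i) and check the sign of every step."""
    deriv = [b - a for a, b in zip(S, S[1:])]
    return all(d > 0 for d in deriv) if rising else all(d < 0 for d in deriv)

S = [-0.780, -0.705, -0.530, -0.455, -0.400, -0.325, -0.150, -0.075,
      0.120,  0.195,  0.370,  0.445,  0.500,  0.575,  0.750,  0.825]
print(is_monotonic(S))                 # True: every step is positive
print(is_monotonic([0.0, 0.2, 0.1]))   # False: one step falls
```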
EXAMPLE 6.3
DNL(i) = [S(i+1) − S(i) − VLSB]/VLSB  (LSB)   (6.29)
As previously mentioned, we can define the average LSB size in one of three ways. We can
define it as the actual full-scale range divided by the number of code transitions (number of codes
minus 1) or we can define the LSB as the slope of the best-fit line. Alternatively, we can define the
LSB size as the ideal DAC step size.
Exercises
6.11. Assuming an ideal gain of 200 mV/bit and an ideal offset of 1 V at code 0, calculate the
absolute error transfer curve for the 4-bit DAC of Exercise 6.7. Normalize the result to a
single LSB step size.
ANS. 0.0455, 0.0150, −0.3185, −0.1915, −0.5375, −0.2735, −0.5645, −0.3970, −0.7390,
−0.7355, −0.7545, −1.0175, −1.2735, −1.3215, −1.5830, −1.8910.
6.12. Compute the discrete first derivative of the DAC transfer curve given in Exercise 6.7. Is
the DAC output monotonic?
ANS. 0.1939, 0.1333, 0.2254, 0.1308, 0.2528, 0.1418, 0.2335, 0.1316, 0.2007, 0.1962,
0.1474, 0.1488, 0.1904, 0.1477, 0.1384. The DAC is monotonic since there are no negative
values in the discrete derivative.
The choice of LSB calculations depends on what type of DNL calculation we want to per-
form. There are four basic types of DNL calculation method: best-fit, endpoint, absolute, and best-
straight-line. Best-fit DNL uses the best-fit line’s slope to calculate the average LSB size. This is
probably the best technique, since it accommodates gain errors in the DAC without relying on the
values of a few individual voltages. Endpoint DNL is calculated by dividing the full-scale range
by the number of transitions. This technique depends on the actual values for the maximum full-
scale (VFS+) and minimum full-scale (VFS–) levels. As such it is highly sensitive to errors in these
two values and is therefore less ideal than the best-fit technique. The absolute DNL technique
uses the ideal LSB size derived from the ideal maximum and minimum full-scale values. This
technique is less commonly used, since it assumes the DAC’s gain is ideal.
The best-straight-line method is similar to the best-fit line method. The difference is that the
best-straight-line method is based on the line that gives the best answer for integral nonlinearity
(INL) rather than the line that gives the least squared errors. Integral nonlinearity will be discussed
later in this chapter. Since the best-straight-line method is designed to yield the best possible
answer, it is the most relaxed specification method of the four. It is used only in cases where the
DAC or ADC linearity performance is not critical. Thus the order of methods from most relaxed
to most demanding is best-straight line, best-fit, endpoint, and absolute.
Chapter 6 • DAC Testing 193
The choice of technique is not terribly important in DNL calculations. Any of the first three tech-
niques will result in nearly identical results, as long as the DAC does not exhibit grotesque gain
or linearity errors. DNL values of ±1/2 LSB are usually specified, with typical DAC performance
of ±1/4 LSB for reasonably good DAC designs. A 1% error in the measurement of the LSB size
would result in only a 0.01 LSB error in the DNL results, which is tolerable in most cases. The
choice of technique is actually more important in the integral nonlinearity calculation, which we
will discuss in the next section.
EXAMPLE 6.4
Calculate the DNL curve for the 4-bit DAC of the previous examples. Use the best-fit line to define the
average LSB size. Does this DAC pass a ±1/2 LSB specification for DNL? Use the endpoint method to
calculate the average LSB size. Is this result significantly different from the best-fit calculation?
Solution:
The first derivative of the transfer curve was calculated in the previous monotonicity example.
The first derivative values are
75 mV, 175 mV, 75 mV, 55 mV, 75 mV, 175 mV, 75 mV, 195 mV, 75 mV, 175 mV,
75 mV, 55 mV, 75 mV, 175 mV, 75 mV
The average LSB size, 109.35 mV, was calculated in Example 6.1 using the best-fit line calcula-
tion. Dividing each step size by the average LSB size yields the following normalized derivative
values (in LSBs)
0.686, 1.6, 0.686, 0.503, 0.686, 1.6, 0.686, 1.783, 0.686, 1.6, 0.686, 0.503, 0.686, 1.6, 0.686
Subtracting one LSB from each of these values gives us the DNL values for each code transition
of this DAC expressed as a fraction of an LSB
−0.314, +0.600, −0.314, −0.497, −0.314, +0.600, −0.314, +0.783, −0.314, +0.600, −0.314, −0.497, −0.314, +0.600, −0.314
Note that there is one fewer DNL value than there are DAC codes.
Figure 6.9a shows the DNL curve for this DAC. The maximum DNL value is +0.783 LSB, while
the minimum DNL value is –0.497. The minimum value is within the –1/2 LSB test limit, but the
Figure 6.9. 4-bit DAC DNL curve (a) best-fit method (b) endpoint method.
maximum DNL value exceeds the +1/2 LSB limit. Therefore, this DAC fails the DNL specification
of ±1/2 LSB.
The average LSB step size calculated using the endpoint method is given by
VLSB = [825 mV − (−780 mV)]/15 = 107 mV
The DNL curve calculated using the endpoint method gives the following values, which have been
normalized to an LSB size of 107 mV:
−0.299, +0.636, −0.299, −0.486, −0.299, +0.636, −0.299, +0.822, −0.299,
+0.636, −0.299, −0.486, −0.299, +0.636, −0.299
The corresponding DNL curve is shown in Figure 6.9b. Using the endpoint calculation, we get
slightly different results. Instead of a maximum DNL result of +0.783 LSB and a minimum DNL
of –0.497 LSB, we get +0.822 and –0.486 LSB, respectively. This might be enough of a difference
compared to the best-fit technique to warrant concern. Unless the endpoint method is explicitly
called for in the data sheet, the best-fit method should be used since it is the least sensitive to
abnormalities in any one DAC voltage.
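The DNL computation of Example 6.4 can be sketched as follows (hypothetical Python, not from the text; the best-fit LSB of 109.353 mV is the gain computed in Example 6.1):

```python
def dnl(S, vlsb):
    """Eq. (6.29): normalized step-size error for each code transition."""
    return [(b - a - vlsb) / vlsb for a, b in zip(S, S[1:])]

# DAC samples of Example 6.1 (volts), codes -8 through +7:
S = [-0.780, -0.705, -0.530, -0.455, -0.400, -0.325, -0.150, -0.075,
      0.120,  0.195,  0.370,  0.445,  0.500,  0.575,  0.750,  0.825]

best_fit_dnl = dnl(S, 0.109353)                        # best-fit LSB
endpoint_dnl = dnl(S, (S[-1] - S[0]) / (len(S) - 1))   # endpoint LSB, 107 mV
print(max(best_fit_dnl), min(best_fit_dnl))   # about +0.783 and -0.497
```

Note that either choice of LSB yields 15 DNL values for 16 codes, one per transition, as the example points out.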
Exercises
6.13. Calculate the DNL curve for the 4-bit DAC of Exercise 6.7. Use the best-fit line to define
the average LSB size. Does this DAC pass a ±1/2 LSB specification for DNL?
ANS. +0.0937, −0.2481, +0.2714, −0.2622, +0.4259, −0.2002, +0.3170, −0.2577, +0.1320,
+0.1067, −0.1686, −0.1607, +0.0739, −0.1669, −0.2194; pass.
6.14. Calculate the DNL curve for the 4-bit DAC of Exercise 6.7. Use the endpoint method to
calculate the average LSB size. Does this DAC pass a ±1/2 LSB specification for DNL?
ANS. +0.1132, −0.2347, +0.2941, −0.2491, +0.4514, −0.1859, +0.3406, −0.2445, +0.1523,
+0.1264, −0.1537, −0.1457, +0.0931, −0.1520, −0.2054; pass.
INL(i) = (S(i) − S_REF(i))/VLSB   (6.30)
Note that using the ideal DAC line is equivalent to calculating the absolute error curve. Since
a separate absolute error test is often specified, the ideal line is seldom used in INL testing. Instead,
the endpoint or best-fit line is generally used. As in DNL testing, we are interested in the maximum
and minimum value in the INL curve, which we compare against a test limit such as ±1/2 LSB.
EXAMPLE 6.5
Calculate the INL curve for the 4-bit DAC in the previous examples. First use an endpoint calcula-
tion, then use a best-fit calculation. Does either result pass a specification of ±1/2 LSB? Do the
two methods produce a significant difference in results?
Solution:
Using an endpoint calculation method, the INL curve for the 4-bit DAC of the previous examples
is calculated by subtracting a straight line between the VFS– voltage and the VFS+ voltage from the
DAC output curve. The difference at each point in the DAC curve is divided by the average LSB
size, which in this case is calculated using an endpoint method. As in the endpoint DNL example,
the average LSB size is equal to 107 mV. The results of the INL calculations are (again, these
values are expressed in LSBs)
0.0, −0.299, +0.336, +0.037, −0.449, −0.748, −0.112, −0.411,
+0.411, +0.112, +0.748, +0.449, −0.037, −0.336, +0.299, 0.0
Figure 6.10a shows this endpoint INL curve. The maximum INL value is +0.748 LSB, and the
minimum INL value is –0.748. This DAC does not pass an INL specification of ±1/2 LSB.
Using a best-fit calculation method, the INL curve for the 4-bit DAC of the previous examples is
calculated by subtracting the best-fit line from the DAC output curve. Each point in the difference
curve is divided by the average LSB size, which in this case is calculated using the best-fit line
method. As in the best-fit DNL example, the average LSB size is equal to 109.35 mV. The results
of the INL calculations are
+0.161, −0.153, +0.448, +0.133, −0.364, −0.678, −0.077, −0.392, +0.392,
+0.077, +0.678, +0.364, −0.133, −0.448, +0.153, −0.161
The maximum value is +0.678, and the minimum value is –0.678. These INL results are better
than the endpoint INL values, but still do not pass a ±1/2 LSB test limit. The best-fit INL curve
is shown in Figure 6.10b for comparison with the endpoint INL curve. The two INL curves are
somewhat similar in shape, but the individual INL values are quite different. Remember that the
DNL curves for endpoint and best-fit calculations were nearly identical. So, as previously stated,
Figure 6.10. 4-bit DAC INL curve (a) endpoint method (b) best-fit method.
the choice of calculation technique is much more important for INL curves than for DNL curves.
Notice also that while an endpoint INL curve always begins and ends at zero, the best-fit curve
does not necessarily behave this way. A best-fit curve will usually give better INL results than an
endpoint INL calculation. This is especially true if the DAC curve exhibits a bowed shape in either
the upward or downward direction. The improvement in the INL measurement is another strong
argument for using a best-fit approach rather than an absolute or endpoint method, since the
best-fit approach tends to increase yield.
The INL curve is the integral of the DNL curve, thus the term “integral nonlinearity”; DNL is a
measurement of how consistent the step sizes are from one code to the next. INL is therefore a
measure of accumulated errors in the step sizes. Thus, if the DNL values are consistently larger
than zero for many codes in a row (step sizes are larger than 1 LSB), the INL curve will exhibit
an upward bias. Likewise, if the DNL is less than zero for many codes in a row (step sizes are less
than 1 LSB), the INL curve will have a downward bias. Ideally, the positive error in one code’s
DNL will be balanced by negative errors in surrounding codes and vice versa. If this is true, then
the INL curve will tend to remain near zero. If not, the INL curve may exhibit large upward or
downward bends, causing INL failures.
The INL integration can be implemented using a running sum of the elements of the DNL.
The ith element of the INL curve is equal to the sum of the first i–1 elements of the DNL curve
plus a constant of integration. When using the best-fit method, the constant of integration is equal
to the difference between the first DAC output voltage and the corresponding point on the best-fit
curve, all normalized to one LSB. When using the endpoint method, the constant of integration
is equal to zero. When using the absolute method, the constant is set to the normalized difference
between the first DAC output and the ideal output. In any running sum calculation it is important
to use high-precision mathematical operations to avoid accumulated math error in the running
sum. Mathematically, we can express this process as
INL(i) = Σ[k=0 to i−1] DNL(k) + C (6.31)
where
C = [S(0) − Best_fit_line(0)] / VLSB for the best-fit linearity method
C = 0 for the endpoint linearity method
C = [S(0) − SIDEAL(0)] / VLSB for the absolute linearity method
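The running-sum integration of Eq. (6.31) takes only a few lines. The following is an illustrative sketch (the function name is ours, not from the text):

```python
def inl_from_dnl(dnl, c=0.0):
    """Integrate a DNL curve into an INL curve per Eq. (6.31).

    dnl : DNL values in LSBs, one per code-to-code step.
    c   : constant of integration (0 for the endpoint method).
    Returns INL(i) for i = 0 .. len(dnl), with INL(0) = c.
    """
    inl = [c]
    total = c
    for step in dnl:
        total += step      # running sum of the DNL elements
        inl.append(total)
    return inl
```

As noted above, production code should carry the running sum in high precision to avoid accumulated math error.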
This is usually the easiest way to calculate DNL. The first derivative technique works well in DAC
testing, but we will see in the next chapter that the DNL curve for an ADC is easier to capture
than the INL curve. In ADC testing it is more common to calculate the DNL curve first and then
integrate it to calculate the INL curve. In either case, whether we integrate DNL to get INL or dif-
ferentiate INL to get DNL, the results are mathematically identical.
Integral nonlinearity and differential nonlinearity are sometimes referred to by the names
integral linearity error (ILE) and differential linearity error (DLE). However, the terms INL and
DNL seem to be more prevalent in data sheets and other literature. We will use the terms INL and
DNL throughout this text.
Exercises
VDAC = DC base + Σ[n=0 to D−1] bn × Wn (6.33)

where
• DC base is the DAC output value with a VFS– input code.
• DAC code bits bD−1, bD−2, ..., b2, b1, b0 take on values of 1 or 0.
If this idealized model of the DAC is sufficiently accurate, then we only need to make D+1
measurements of DAC behavior and solve for the unknown model parameters: W0, W1, . . . , WD–1
and the DC Base term. Subsequently, we can cycle through all binary values using Eq. (6.33)
and compute the entire DAC transfer curve. This DAC testing method is called the major carrier
technique. The major carrier approach can be used for ADCs as well as DACs. The assumption
of sufficient DAC or ADC model accuracy is only valid if the actual superposition errors of the
DAC or ADC are low. This may or may not be the case. The superposition assumption can only
be determined through characterization, comparing the all-codes DAC output levels with the ones
generated by the major carrier method.
The most straightforward way to obtain each model parameter Wn is to set code bit bn to 1
and all others to zero. This is then repeated for each code bit for n from 0 to D−1. However, the
resulting output levels are widely different in magnitude. This makes them difficult to measure
accurately with a voltmeter, since the voltmeter’s range must be adjusted for each measurement.
A better approach that alleviates the accuracy problem is to measure the step size of the major
carrier transitions in the DAC curve, which are all approximately 1 LSB in magnitude. A major
carrier transition is defined as the voltage (or current) transition between the DAC codes 2^n − 1
and 2^n. For example, the transition between binary 00111111 and 01000000 is a major carrier
transition for n = 6. Major carrier transitions can be measured using a voltmeter’s sample-and-
difference mode, giving highly accurate measurements of the major carrier transition step sizes.
Once the step sizes are known, we can use a series of inductive calculations to find the values
of W0, W1, . . . , WD-1. We start by realizing that we have actually measured the following values:
DC base = measured DAC output with minus full-scale code
V0 = W0
V1 = W1 − W0
V2 = W2 − (W1 + W0)
V3 = W3 − (W2 + W1 + W0)
...
Vn = Wn − (Wn−1 + Wn−2 + Wn−3 + ... + W0)
The value of the first major transition, V0, is a direct measurement of the value of W0 (the step size
of the least significant bit). The value of W1 can be calculated by rearranging the second equation:
W1 = V1 + W0. Once the values of W0 and W1 are known, the value of W2 is calculated by rearrang-
ing the third equation: W2 = V2 + W1 + W0, and so forth. Once the values of W0–Wn are known, the
complete DAC curve can be reconstructed for each possible combination of input bits b0 – bn using
the original model of the DAC described by Eq. (6.33).
The major carrier technique can also be used on signed binary and two’s complement con-
verters, although the codes corresponding to the major carrier transitions must be chosen to match
the converter encoding scheme. For example, the last major transition for our two’s complement
4-bit DAC example happens between code 1111 (decimal –1) and 0000 (decimal 0). Aside from
these minor modifications in code selection, the major carrier technique is the same as the simple
unsigned binary approach.
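The inductive calculation above can be sketched as follows for the unsigned binary case (an illustrative sketch with hypothetical function names, not a production routine):

```python
def weights_from_steps(v_steps):
    """Recover bit weights W0..W(D-1) from the measured major carrier
    step sizes V0..V(D-1), using Wn = Vn + W(n-1) + ... + W0."""
    weights = []
    for v in v_steps:
        weights.append(v + sum(weights))
    return weights

def dac_output(code, weights, dc_base):
    """Reconstruct one point of the DAC curve from the superposition
    model: DC base plus the weights of the bits set in `code`."""
    return dc_base + sum(w for n, w in enumerate(weights)
                         if (code >> n) & 1)
```

With the step sizes of Example 6.6 (75, 175, 55, and 195 mV), weights_from_steps returns W0 = 75 mV, W1 = 250 mV, W2 = 380 mV, and W3 = 900 mV.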
Chapter 6 • DAC Testing 199
EXAMPLE 6.6
Using the major carrier technique on the 4-bit DAC example, we measure a DC base of 780 mV
setting the DAC to VFS– (binary 1000, or −8). Then we measure the step size between 1000 (−8) and
1001 (−7). The step size is found to be 75 mV. Next we measure the step size between 1001 (−7)
and 1010 (−6). This step size is 175 mV. The step size between 1011 (−5) and 1100 (−4) is 55 mV
and the step size between 1111 (−1) and 0000 (0) is 195 mV. Determine the values of W0, W1, W2,
and W3. Reconstruct the voltages on the ramp from DAC code –8 to DAC code +7.
Solution:
Rearranging the set of equations Vn = Wn − (Wn−1 + Wn−2 + Wn−3 + ... + W0) to solve for Wn, we obtain

Wn = Vn + Wn−1 + Wn−2 + ... + W0

so that W0 = 75 mV, W1 = 175 + 75 = 250 mV, W2 = 55 + 250 + 75 = 380 mV, and W3 = 195 + 380 + 250 + 75 = 900 mV.
For a two’s complement DAC, we have to realize that the most significant bit is inverted in polarity
compared to an unsigned binary DAC. Therefore, the DAC model for our 4-bit DAC is given by
Using this two’s complement version of the DAC model, the 16 voltage values of the DAC curve are
reconstructed as shown in Table 6.4. Notice that these values are exactly equal to the all-codes
results in Figure 6.8. The example DAC was created using a binary-weighted model with perfect
superposition; so it is no surprise the major carrier technique works for this imaginary DAC. Real
DACs and ADCs often have superposition errors that make the major carrier technique unusable.
Table 6.4. DAC Transfer Curve Calculated Using the Major Carrier
Technique
DAC Code Calculation Output Voltage (mV)
Both the fine DAC and the coarse DAC are designed using a resistive divider architecture
rather than a binary-weighted architecture. Since major carrier testing can only be performed on
binary-weighted architectures, an all-codes testing approach must be used to verify the perfor-
mance of each of the two 6-bit resistive divider DACs. However, we would like to avoid testing
each of the 2^12, or 4096, codes of the composite 12-bit DAC. Using superposition, we will test
each of the two 6-bit DACs using an all-codes test. This requires only 2 × 2^6, or 128, measurements.
We will then combine the results mathematically into a 4096-point all-codes curve using a
linear model of the composite DAC.
Let us assume that through characterization, it has been determined that this example DAC
has excellent superposition. In other words, the step sizes of each DAC are independent of the set-
ting of the other DAC. Also, the summation circuit has been shown to be highly linear. In a case
such as this, we can measure the all-codes output curve of the coarse DAC while the fine DAC is
set to 0 (i.e., D5-D0 = 000000). We store these values into an array VDAC-COARSE(n), where n takes on
the values 0 to 63, corresponding to data bits D11-D6. Then we can measure the all-codes output
curve for the fine DAC while the coarse DAC is set to 0 (i.e., D11-D6 = 000000). These voltages
are stored in the array VDAC-FINE(n), where n corresponds to data bits D5-D0.
Figure 6.11. Composite 12-bit DAC: a coarse 6-bit DAC (code bits D11–D6, LSB size = 2^6 × VLSB) and a fine 6-bit DAC (code bits D5–D0, LSB size = VLSB) are summed to form the 12-bit DAC output.
Although we have only measured a total of 128 levels, superposition allows us to recreate the
full 4096-point DAC output curve by a simple summation. Each DAC output value VDAC(i) is equal
to the contribution of the coarse DAC plus the contribution of the fine DAC
VDAC(i) = VDAC−FINE(i AND 000000111111) + VDAC−COARSE((i AND 111111000000)/64) (6.36)
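Eq. (6.36) amounts to a table lookup with bit masks. A minimal sketch (the array names are ours):

```python
def composite_dac_curve(v_fine, v_coarse):
    """Combine the two measured 64-point curves of the 6-bit DACs into
    the 4096-point curve of the composite 12-bit DAC, per Eq. (6.36)."""
    return [v_fine[i & 0x03F]            # fine DAC: data bits D5-D0
            + v_coarse[(i & 0xFC0) >> 6]  # coarse DAC: data bits D11-D6
            for i in range(4096)]
```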
Exercises
Figure 6.12. DAC settling time measurement (a) referenced to a digital signal; (b) referenced to the
DAC output 50% point.
millions of transitions on a typical DAC. As with any other test, we have to determine what codes
represent the worst-case transitions. Typically settling time will be measured as the DAC transi-
tions from minus full-scale (VFS–) to plus full-scale (VFS+) and vice versa, since these two tests
represent the largest voltage swing.
The 1/2 LSB example uses an error band specification that is referenced to the LSB size.
Other commonly used definitions require the DAC output to settle within a certain percentage of
the full-scale range, a percentage of the final voltage, or a fixed voltage range. So we might see
any of the following specifications:
settling time = 1 μs to ±1% of full-scale range
settling time = 1 μs to ±1% of final value
settling time = 1 μs to ±1 mV
The test technique for all these error-band definitions is the same; we just have to convert the
error-band limits to absolute voltage limits before calculating the settling time. The straightfor-
ward approach to testing settling time is to digitize the DAC’s output as it transitions from one
code to another and then use the known time period between digitizer samples to calculate the
settling time. We measure the final settled voltage, calculate the settled voltage limits (i.e., ±1/2
LSB), and then calculate the time between the digital signal transition that initiates a DAC code
change and the point at which the DAC first stays within the error band limits, as shown in Figure
6.12a.
In extremely high frequency DACs it is common to define the settling time not from the
DAC code change signal’s transition but from the time the DAC passes the 50% point to the time
it settles to the specified limits as shown in Figure 6.12b. This is easier to calculate, since it only
requires us to look at the DAC output, not at the DAC output relative to the digital code.
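A settling-time calculation on a digitized record might look like the following sketch. It assumes the record begins at the digital transition (t = 0) and that samples are evenly spaced; the settled final voltage and error band are supplied by the caller:

```python
def settling_time(samples, t_step, v_final, err_band):
    """Time from the start of the record (assumed to be the DAC code
    change) until the output stays within v_final +/- err_band.
    Samples are assumed taken at t = 0, t_step, 2*t_step, ..."""
    last_violation = None
    for k, v in enumerate(samples):
        if abs(v - v_final) > err_band:
            last_violation = k   # latest sample outside the band
    if last_violation is None:
        return 0.0               # within the band for the whole record
    return (last_violation + 1) * t_step
```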
Figure 6.13. Overshoot and undershoot (e.g., 10% overshoot, 2% undershoot) measured relative to the 100% reference at VFS+ and VFS–.
Figure 6.14. Rise and fall times measured between the 10% and 90% points relative to the 100% reference.
Figure 6.15. DAC-to-DAC skew measured at the 50% point (e.g., red DAC output).
Figure 6.16. Positive and negative glitches at a major carrier transition with a 1-LSB swing.
Glitch energy is typically measured as the DAC output switches across the largest major transition (i.e., 01111111 to 10000000 in an 8-bit DAC) and
back again. As shown in Figure 6.16, the glitch area is defined as the area that falls outside the
rated error band. These glitches are caused by a combination of capacitive/inductive ringing in
the DAC output and skew between the timing of the digital bits feeding the binary-weighted
DAC circuits. The parameter is commonly expressed in picosecond-volts (ps-V) or equivalently,
picovolt-seconds (pV-s). (These are not actually units of energy, despite the term glitch energy.)
The area under the negative glitches is considered positive area, and should be added to the area
under the positive glitches. Both the rising-edge glitch energy and the falling-edge glitch energy
should be tested.
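Glitch area can be estimated from the same digitized record by summing the excursion outside the error band, sample by sample. One illustrative rectangular-sum sketch, assuming evenly spaced samples:

```python
def glitch_area(samples, t_step, v_final, err_band):
    """Approximate glitch area in volt-seconds as a rectangular sum of
    the output excursion beyond the rated error band.  Negative-glitch
    area is counted as positive and added in, per the convention above."""
    area = 0.0
    for v in samples:
        excess = abs(v - v_final) - err_band
        if excess > 0:
            area += excess * t_step
    return area
```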
However, clock and data feedthrough can be measured using a technique similar to all the other
tests in this section. The output of the DAC is digitized with a high-bandwidth digitizer. Then the
various types of digital signal feedthrough are analyzed to make sure they are below the defined
test limits. The exact test conditions and definition of clock and data feedthrough should be pro-
vided in the data sheet. This measurement may require time-domain analysis, frequency-domain
analysis, or both.
Exercises
(Plot of the DAC output v(t), from 0 to 0.40 V, versus time, from 0 to 2.0 µs.)
The data sheet states that the settling time is 1 µs (error band = ± 20 mV). Does this DAC settle
fast enough to meet the settling time specification? Also, determine the overshoot of this
signal and its rise time. Estimate the total glitch energy during the positive-going transition.
ANS. Actual settling time = 0.82 µs; yes; overshoot = 30%; rise time = 0.2 µs. Glitch energy
= 0.5(0.3)(0.13) + 0.5(0.5)(-0.033) + 0.5(0.6)(0.01) = 14 ns-V (triangle approximation).
The NTSC format is used in transmission of standard (i.e., non-HDTV) analog television
signals. It requires only a single DAC, rather than a separate DAC for each color. The picture
intensity, color, and saturation information is contained in the time-varying offset, amplitude, and
phase of a 3.58-MHz sinusoidal waveform produced by the DAC. Clearly this is a totally different
DAC application than the RGB DAC application. These two seemingly similar video applications
require totally different testing approaches.
RGB DACs are tested using the standard intrinsic tests like INL and DNL, as well as the
dynamic tests like settling time and DAC-to-DAC skew. These parameters are important because
the DAC outputs directly control the rapidly changing beam intensities of the red, green, and blue
electron beams as they sweep across the computer monitor. Any settling time, rise time, fall time,
undershoot, or overshoot problems show up directly on the monitor as color or intensity distor-
tions, vertical lines, ghost images, and so on.
The quality of the NTSC video DAC, by contrast, is determined by its ability to produce
accurate amplitude and phase shifts in a 3.58-MHz sine wave while changing its offset. This type
of DAC is tested with transmission parameters like gain, signal-to-noise, differential gain, and
differential phase (topics of Chapters 10 and 11).
6.6 SUMMARY
DAC testing is far less straightforward than one might at first assume. Although DACs all per-
form the same basic function (digital-to-analog conversion), the architecture of the DAC and
its intended application determine its testing requirements and methodologies. A large variety
of standard tests have been defined for DACs, including transmission parameters, DC intrinsic
parameters, and dynamic parameters. We have to select DAC test requirements carefully to guar-
antee the necessary quality of the DAC without wasting time with irrelevant or ineffective tests.
ADC testing is very closely related to DAC testing. Many of the DC and intrinsic tests defined
in this chapter are very similar to those performed on ADCs. However, due to the many-to-one
transfer characteristics of ADCs, the measurement of the ADC input level corresponding to each
output code is much more difficult than the measurement of the DAC output level corresponding
to each input code. Chapter 7, “ADC Testing,” explains the various ways the ADC transfer curve
can be measured, as well as the many types of ADC architectures and applications the test engi-
neer will likely encounter.
PROBLEMS
6.1. Given a set of N points denoted by S(i), derive the parameters of a straight line
described by Best_fit_line(i) = gain × i + offset that minimizes the total squared error

e² = Σ[i=0 to N−1] [S(i) − gain × i − offset]²

Hint: Find the partial derivatives ∂e²/∂gain and ∂e²/∂offset, set them both to zero, and solve
for the two unknowns, gain and offset, from the system of two equations.
6.2. A 4-bit DAC produces the following set of voltage levels, starting from
code 0 and progressing through to code 15:
0.0465, 0.3255, 0.7166, 1.0422, 1.5298, 1.8236, 2.1693, 2.5637,
(b) Estimate the LSB step size using its measured full-scale range. What is the gain error
and offset error?
(c) Calculate the absolute error transfer curve for this DAC. Normalize the result to one
LSB.
(d) Is the DAC output monotonic?
(e) Compute the DNL curve for this DAC. Does this DAC pass a ±1/2 LSB specification
for DNL?
6.10. A 4-bit two’s complement DAC produces the following set of voltage levels, starting from
code 8 and progressing through to code +7
0.9738, 0.8806, 0.6878, 0.6515, 0.3942, 0.3914, 0.2497, 0.1208,
0.0576, 0.1512, 0.2290, 0.4460, 0.4335, 0.5999, 0.6743, 0.8102
The ideal DAC output at code 0 is 0 V and the ideal gain is equal to 133.3 mV/bit. Answer
the following questions assuming an endpoint-to-endpoint line is used as a reference.
(a) Calculate the DAC’s gain (volts per bit), gain error, offset and offset error.
(b) Estimate the LSB step size using its measured full-scale range. What is the gain error
and offset error?
(c) Calculate the absolute error transfer curve for this DAC. Normalize the result to one
LSB.
(d) Is the DAC output monotonic?
(e) Compute the DNL curve for this DAC. Does this DAC pass a ±1/2 LSB specification
for DNL?
6.11. Calculate the INL curve for a 4-bit unsigned binary DAC whose DNL curve is described
by the following values
0.0815, 0.1356, 0.1133, 0.0057, 0.0218, 0.1308, 0.0361, 0.0950,
0.1136, 0.1633, 0.2101, 0.0512, 0.0119, 0.0706, 0.0919
The DAC output for code 0 is 0.4919 V. Assume that the best-fit line has a gain of 63.1 mV/
bit and an offset of 0.5045 V. Does this DAC pass a ±1/2 LSB specification for INL?
6.12. Calculate the DNL curve for a 4-bit DAC whose INL curve is described by the following
values
0.1994, 0.1180, 0.0177, 0.1310, 0.1253, 0.1036, 0.0272, 0.0089,
0.1039, 0.0096, 0.1537, 0.0565, 0.1077, 0.1196, 0.0490, 0.0429
Does this DAC pass a ±1/2 LSB specification for DNL?
6.13. The step sizes of the major carrier transitions of a 5-bit unsigned binary DAC were measured to
be as follows:
code 0 → 1: 0.1939 V, code 1 → 2: 0.1333 V, code 3 → 4: 0.1308 V, code 7 → 8: 0.1316
V, code 15 → 16: 0.1345 V
Determine the values of W0, W1, W2, W3, and W4. Reconstruct the voltages on the ramp from
DAC code 0 to DAC code 31 if the DC base value is 100 mV.
6.14. The step sizes of the major carrier transitions of a 4-bit two's complement DAC were measured
to be as follows:
code −8 → −7: 0.1049 V, code −7 → −6: 0.1033 V, code −5 → −4: 0.0998 V,
code −1 → 0: 0.1016 V
Determine the values of W0, W1, W2, and W3. Reconstruct the voltages on the ramp from
DAC code −8 to DAC code +7 if the DC base value is 500 mV.
6.15. Can a major carrier test technique be used to describe a 4-bit unsigned DAC if the output
levels beginning with code 0 were found to be the following
0.0064, 0.0616, 0.1271, 0.1812, 0.2467, 0.3206, 0.3856, 0.4406,
6.16. (Plot of the DAC output v(t), from 0 to 1.4 V, versus time, from 0 to 30 ns.)
The data sheet states that the settling time is 10 ns (error band = ±50 mV). Does this DAC
settle fast enough to meet this specification? Also, determine the overshoot of this signal
and its rise time. Estimate the total glitch energy during the positive-going transition.
6.17. Using MATLAB or equivalent software, evaluate the following expression for the step
response of a circuit using a time step of no larger than 20 ns

v(t) = 1 + (e^(−ζωn·t) / √(1 − ζ²)) × sin(ωn·t·√(1 − ζ²) − cos⁻¹ζ)

where ωn = 2π × 100 MHz and ζ = 0.3. Determine the time for the circuit to settle to within
1% of its final value. Determine the rise time.
REFERENCES
1. M. Mahoney, Tutorial DSP-Based Testing of Analog and Mixed-Signal Circuits, The Computer
Society of the IEEE, 1730 Massachusetts Avenue N.W., Washington, D.C. 20036-1903, 1987,
ISBN: 0818607858.
2. G. W. Snedecor and W. G. Cochran, Statistical Methods, Eighth Edition, Iowa State University
Press, 1989, ISBN: 0813815614, pp. 149–176.
3. G. N. Stenbakken and T. M. Souders, Linear Error Modeling of Analog and Mixed-Signal Devices,
Proc. International Test Conference, 1991.
4. T. Yamaguchi and M. Soma, Dynamic Testing of ADCs Using Wavelet Transforms, Proc. International
Test Conference, 1997, pp. 379–388.
CHAPTER 7
ADC Testing
A s mentioned in Chapter 6, “DAC Testing,” there are many similarities between DAC testing
and ADC testing. However, there are also a few notable differences. In this chapter, we will
examine their differences as they relate to the intrinsic parameters of an ADC such as DC offset,
INL, and DNL. A discussion will then follow about testing the dynamic operation of ADCs.
Figure 7.1. Comparing transfer curves. (a) DAC and (b) ADC.
Figure 7.2. Modeling a noisy ADC as a noise-free ADC with a noise source added to the input signal.

output code = Quantize(input voltage) (7.1)
where the function Quantize ( ) represents the noise-free ADC’s quantization process. The noisy
ADC can be described using a similar equation
output code = Quantize (input voltage + noise voltage) (7.2)
Now consider the case of a noisy ADC with a DC input voltage. If the DC input level lies
exactly between two ADC decision levels, and the noise voltage rarely exceeds ±½ VLSB, then the
ADC will for the most part produce the same output code. The noise voltage for all practical pur-
poses never gets large enough to push the total voltage across either of the adjacent decision levels.
We depict this situation in Figure 7.3, where we describe the input signal vIN with a probability
density function given by

f(vIN) = (1/(σn√(2π))) e^(−(vIN − VDC)²/(2σn²)) (7.3)
Here we assume that the noise present at the ADC input is modeled as a Gaussian-distributed ran-
dom variable with zero mean and a standard deviation of σ n (i.e., the RMS noise voltage).
On the other hand, if the input DC voltage is exactly equal to a decision level (i.e., vIN = VTH )
as depicted in Figure 7.4, then even a tiny amount of noise voltage will cause the quantization
process to randomly dither between the two codes on each side of the decision level. While we
Chapter 7 • ADC Testing 213
Figure 7.3. Probability density plot for DC input between two decision levels.
ADC
Gaussian Decision
Noise pdf Levels
Code 1 Code 2
Input Voltage
Probability
Density
VTH
Input Voltage
Average Voltage
(DC Plus Noise)
(DC Input)
Figure 7.4. Probability density plot for DC input equal to a decision level.
Code 1 Code 2
Input Voltage
Probability 50% probability 50% probability
for code 1 for code 2
Density
Input Voltage
Average Voltage (DC Plus Noise)
(DC Input, VTH)
are assuming that the noise is Gaussian distributed, this conclusion will be the same regardless of the
nature of the noise as long as its pdf is symmetrical about its mean value.¹ Since the area under the
pdf is equally split between code 1 and code 2, we would expect 50% of the ADC conversions to
produce code 1 and 50% of the conversions to produce code 2.
For input voltages that are close but not equal to the decision levels, the process is a little more
complicated but tractable using the probability theory from Chapter 4. Consider the DC input VDC
as being some value less than the decision level VTH that separates code 1 and code 2, such as the
situation depicted in Figure 7.5. The probability that the input vIN will trip the ADC quantizer to the
next code value is given by
P(VTH < vIN) = ∫[VTH to ∞] (1/(σn√(2π))) e^(−(vIN − VDC)²/(2σn²)) dvIN = 1 − Φ((VTH − VDC)/σn) (7.4)
where Φ(z) is the standard Gaussian cumulative distribution function. Likewise, the probability
that the input signal will not trip the ADC decision level is

P(vIN < VTH) = Φ((VTH − VDC)/σn) (7.5)
We can therefore conclude that N samples of the ADC output will contain N × Φ(ΔV/σn) code 1
codes and N × [1 − Φ(ΔV/σn)] code 2 codes, where ΔV = VTH − VDC. Of course, this assumes that
the noise level is sufficiently small that other codes are not tripped.
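Eqs. (7.4) and (7.5) are easy to evaluate numerically via the error function, using Φ(z) = ½[1 + erf(z/√2)]. A sketch (the function names are ours):

```python
from math import erf, sqrt

def phi(z):
    """Standard Gaussian cumulative distribution function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def expected_counts(n, v_dc, v_th, sigma_n):
    """Expected split of n conversions between the code below and the
    code above the decision level v_th, per Eqs. (7.4) and (7.5)."""
    p_below = phi((v_th - v_dc) / sigma_n)     # Eq. (7.5)
    return n * p_below, n * (1.0 - p_below)    # Eq. (7.4)
```

For the values of Example 7.1 (n = 200, VDC = 2.453 V, VTH = 2.461 V, σn = 10 mV) this returns roughly 158 and 42 conversions.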
Figure 7.5. Probability density plot for DC input less than the decision level VTH.
EXAMPLE 7.1
An ADC input is set to 2.453 V DC. The noise of the ADC and DC signal source is characterized to
be 10 mV RMS and is assumed to be perfectly Gaussian. The transition between code 134 and 135
occurs at 2.461 V DC for this particular ADC. Therefore, the value 134 is the expected output from
the ADC. What is the probability that the ADC will produce code 135 instead of 134? If we collected
200 samples from the output of the ADC, how many would we expect to be 134 and how many
would be 135? How might we determine that the transition between code 134 and 135 occurs at
2.461 V DC? How might we characterize the effective RMS input noise?
Solution:
With an input of 2.453 V DC, the ADC’s input noise would have to exceed 2.461 V − 2.453 V = 8 mV
to cause the ADC to trip to code 135. This corresponds to z = +0.8, since σn = 10 mV. From Appendix
A, the Gaussian cdf of +0.8 is equal to 0.7881. Therefore, there is a 78.81% probability that the
noise will not be sufficient to trip the ADC to code 135. Thus we can expect 78.81% of the con-
versions to produce code 134 and 21.19% of the conversions to produce code 135. If we collect
200 samples from the ADC, we would expect 78.81% of the 200 conversions (approximately 158
conversions) to produce code 134. We would expect the remaining 21.19% of the conversions (42
samples) to produce code 135.
To determine the transition voltage, we simply have to adjust the input voltage up or down
until 50% of the samples are equal to 134 and 50% are equal to 135. To determine the value of
σn, we can adjust the input voltage until we get 84.13% of the conversions to produce code 134.
The difference between this voltage and the transition voltage is equal to 1.0σn, which is equal to
the effective RMS input noise of the ADC.
Because the circuits of an ADC generate random noise, the ADC decision levels represent
probable locations of transitions from one code to the next. In the previous example, we saw that
an input noise level of 10 mV would cause a 2.453-V DC input voltage to produce code 134 only
79% of the time and code 135 only 21% of the time. Therefore, with an input voltage of 2.453 V, we
will get an average output code of 134 × 0.79 + 135 × 0.21 = 134.21. Of course, the ADC cannot
produce code 134.21. This value only represents the average output code we can expect if we col-
lect many samples.
If we plot the average output code from a typical ADC versus DC input levels, we will see
the true transfer characteristics of the ADC. Figure 7.6 shows a true ADC transfer curve compared
Figure 7.6. Average output code versus DC input voltage (0 to 90 mV): the probable output transfer curve compared with the noise-free transfer curve, with code edge locations and the input noise probability density (typically Gaussian) indicated at each transition.
to the idealized, noise-free transfer curve. The center of the transition from one code to the next
(i.e., the decision level) is often called a code edge. The wider the distribution of the Gaussian
input noise, the more rounded the transitions from one code to the next will be. In fact, the true
ADC transfer characteristic is equal to the convolution of the Gaussian noise probability density
function with the noise-free transfer curve.
Code edge measurement is one of the primary differences between ADC and DAC testing.
DAC voltages can simply be measured one at a time using a DC voltmeter or digitizer. By con-
trast, ADC code edges can only be measured using an iterative process in which the input voltage
is adjusted until the output samples dither equally between two codes. Because of the statistical
nature of the ADC’s transfer curve, each iteration of the search requires 100 or more conversions
to achieve a repeatable average value. Since this brute-force approach would lead to very long test
times in production, a number of faster methodologies have been developed to locate code edges.
Unfortunately, these production techniques generally result in somewhat less exact measurements
of code edge voltages.
Exercises
7.1. If V is normally distributed with zero mean and a standard deviation of 2 mV, find
P(V < 4 mV). Repeat for P(V > −1 mV). Repeat for P(−1 mV < V < 4 mV).
ANS. P(V < 4 mV) = 0.9772; P(V > −1 mV) = 0.6915; P(−1 mV < V < 4 mV) = 0.6687.
7.3. An ADC input is set to 1.4 V DC. The noise of the ADC and DC signal source is
characterized to be 15 mV RMS and is assumed to be perfectly Gaussian. The transition
between code 90 and 91 occurs at 1.4255 V DC. If 500 samples of the ADC output are
collected, how many do we expect to be code 90 and how many would be code 91?
ANS. # of code 90 = 95.54% or ~478 and # of code 91 = 4.46% (~22).
In the next section, we will examine the various ways in which the code edges of an ADC can
be measured, both for characterization and production. Once the code edges have been located, we
can apply all the same tests to ADCs that we applied to DACs. Tests such as INL, DNL, DC gain,
and DC offset are commonly performed using the code edge information.
Figure 7.7. Code edge locations and code center locations on the average-output-code transfer curve.
We can search for code edges in one of several different ways. Three common techniques are
the step search or binary search method, the hardware servo method, and the histogram method.
In the next section, we will see how each of these techniques is applied, and we will examine the
strengths and weaknesses of each method. Since all the various ADC edge measurement tech-
niques are slower than simply measuring an output voltage, ADC testing is generally much slower
than DAC testing.
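A binary search for a code edge can be sketched as follows. Here adc_read(v) is a hypothetical driver routine that applies input voltage v and returns one conversion result; averaging many conversions smooths the noise-induced dither near the edge:

```python
def find_code_edge(adc_read, code, v_lo, v_hi, samples=100, tol=1e-4):
    """Binary-search for the lower edge of `code`: the input voltage
    where the averaged ADC output crosses code - 0.5."""
    while v_hi - v_lo > tol:
        v_mid = 0.5 * (v_lo + v_hi)
        avg = sum(adc_read(v_mid) for _ in range(samples)) / samples
        if avg < code - 0.5:
            v_lo = v_mid    # still below the edge
        else:
            v_hi = v_mid    # edge is at or below v_mid
    return 0.5 * (v_lo + v_hi)
```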
Figure 7.8. Hardware servo method: the ADC under test drives a digital comparator (1 if ADC output ≤ search value, 0 if ADC output > search value), whose output drives a one-bit DAC/integrator (VFS+ = ramp up, VFS– = ramp down); the resulting code edge voltage VCodeEdge is measured with a low-pass-filtered DC voltmeter.
218 AN INTRODUCTION TO MIXED-SIGNAL IC TEST AND MEASUREMENT
Eventually, the integrator finds the desired code edge and fluctuates back and forth across its
transition level. The average voltage at the ADC input, VCodeEdge, represents the lower edge of the
code under test. This voltage can easily be measured using a DC voltmeter with a low-pass filtered
input. The servo search process is repeated for each code edge in the ADC transfer curve.
The servo method is actually a fast hardware version of the step search. Unlike the step search
or binary search methods, the servo method does not perform averaging before moving from one
input voltage to the next. The continuous up/down adjustment of the servo integrator coupled with
the averaging process of the filtered voltmeter act together to remove the effects of the ADC’s
input noise. Because of its speed, the servo technique is generally more production-worthy than
the step search or binary search methods.
Although the servo method is faster than the binary search method, it is also fairly slow
compared with a more common production testing technique, the histogram method. Histogram
testing requires an input signal with a known voltage distribution. There are two commonly used
histogram methods: the linear ramp method and the sinusoidal method.
Figure 7.9. ADC samples of the output code collected as the input voltage ramps from 0 to 90 mV.
The number of occurrences of each code is plotted as a histogram, as illustrated in Figure 7.10.
Ideally, each code should be hit the same number of times, but this would only be true for a per-
fectly linear ADC. The histogram shows us which codes are hit more often, indicating that they
are wider codes. For example, we can see from the histogram in Figure 7.10 that codes 2 and 4 are
twice as wide as codes 1 and 6.
Let us denote the number of hits that occur for the ith code word of a D-bit ADC as H(i) for
i = 0, 1, ..., 2^D − 1. Next, let us define the average number of hits for each code word, excluding the
number of hits included in the two end codes, as

HAverage = [1/(2^D − 2)] × Σ[i=1 to 2^D−2] H(i) (7.6)
Dividing H(i) by HAverage, we obtain the width of each code word in units of LSBs as

code width(i) = H(i)/HAverage, i = 1, 2, ..., 2^D − 2 (7.7)
Excluding the highest and lowest code count is necessary, because these two codes do not have
a defined code width. In effect, the end codes are infinitely wide. For example, code 0 in an
unsigned binary ADC has no lower decision level, since there is no code corresponding to −1. In
many practical situations, the input ramp signal extends beyond the upper and lower ranges of the
Figure 7.10. LSB normalization translates ADC code histogram into LSB code widths.
[Figure: the top panel plots the number of hits versus output code (0 to 7); the average hits per code is (4 + 8 + 5 + 8 + 5 + 4)/6 = 5.667. Dividing the number of hits by the average converts the histogram to code widths in LSBs, shown in the bottom panel; the widths of the lowest and highest codes are undefined.]
ADC, resulting in an increased code count for these two code words. These meaningless hits should
be ignored in the linear ramp histogram analysis.
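The linear ramp normalization of Eqs. (7.6) and (7.7) can be sketched in a few lines of Python. The hit counts below follow the histogram of Figure 7.10; the two end-code counts are placeholders, since those widths are undefined and are excluded from the average.

```python
# Linear-ramp histogram normalization, Eqs. (7.6) and (7.7).
# 'hits' holds H(i) for i = 0 .. 2**D - 1 of a 3-bit ADC.

def code_widths_lsb(hits):
    """Return code widths in LSBs for the interior codes 1 .. 2**D - 2."""
    interior = hits[1:-1]                      # exclude end codes H(0), H(2**D - 1)
    h_average = sum(interior) / len(interior)  # Eq. (7.6)
    return [h / h_average for h in interior]   # Eq. (7.7)

# Interior hits from Figure 7.10; end-code counts are arbitrary placeholders
# because they never enter the calculation.
hits = [12, 4, 8, 5, 8, 5, 4, 12]
widths = code_widths_lsb(hits)   # codes 2 and 4 come out twice as wide as 1 and 6
```

Note that the normalized widths always average to exactly 1 LSB over the interior codes, so their sum equals the number of interior codes.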
EXAMPLE 7.2
A binary search method is used to find the transition between code 0 and code 1 of the ADC in
Figure 7.9. The code edge is found to be 53 mV. A second binary search determines the code edge
between codes 6 and 7 to be 2.77 V. What is the average LSB step size for this 3-bit ADC? Based
on the data contained in the histogram of Figure 7.10, what is the width of each of the 8 codes,
in volts?
Solution:
The average LSB size is equal to

V_{LSB} = \frac{2.77\ \text{V} - 0.053\ \text{V}}{2^3 - 2} = 452.8\ \text{mV}
If we wish to calculate the absolute voltage level of each code edge, we simply perform a run-
ning sum on the code widths expressed in volts, starting with the voltage VLE, as follows
V_{CodeEdge}(i) = \begin{cases} V_{LE}, & i = 0 \\ V_{LE} + \displaystyle\sum_{k=1}^{i} V_{CodeWidth}(k), & i = 1, 2, \ldots, 2^D - 2 \end{cases}    (7.10)
Alternatively, we can write a recursive equation for the code edges as follows

V_{CodeEdge}(i) = V_{CodeEdge}(i-1) + V_{CodeWidth}(i), \quad i = 1, 2, \ldots, 2^D - 2    (7.11)

where we begin with V_{CodeEdge}(0) = V_{LE}. The resulting code edge transfer curve is equivalent to a
DAC output transfer curve, except that it will only have 2D − 1 values rather than 2D values.
EXAMPLE 7.3
Using the results of Example 7.2, reconstruct the 3-bit ADC transfer curve for each decision
level.
Solution:
The transition from code 0 to code 1 was measured using a binary search; it was found to be 53 mV. The other
code edges can be calculated using a running sum:
Code 0 to Code 1: 53 mV
Code 1 to Code 2: 53 mV + 319.68 mV = 372.68 mV
Code 2 to Code 3: 372.68 mV + 639.35 mV = 1011.9 mV
Code 3 to Code 4: 1011.9 mV + 399.37 mV = 1411.5 mV
Code 4 to Code 5: 1411.5 mV + 639.35 mV = 2050.8 mV
Code 5 to Code 6: 2050.8 mV + 399.37 mV = 2450.4 mV
Code 6 to Code 7: 2450.4 mV + 319.68 mV = 2770.0 mV
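The running sum of Eq. (7.10) maps directly onto an accumulate operation. The sketch below uses the measured lower edge and the tabulated code widths from the example above; the last digit of the intermediate edges differs slightly from the worked example because the printed widths are rounded values.

```python
# Running-sum reconstruction of the code edges (Eq. (7.10)) from the
# measured edge of code 0 -> 1 and the code widths in mV (Example 7.3).
import itertools

v_le = 53.0                                 # mV, measured code 0 -> 1 edge
widths_mv = [319.68, 639.35, 399.37, 639.35, 399.37, 319.68]

# edges[0] = V_LE, edges[i] = V_LE + sum of the first i code widths
edges = list(itertools.accumulate(widths_mv, initial=v_le))
```

The final entry lands on the independently measured code 6 to code 7 edge near 2770 mV, which is a useful sanity check on the width data.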
Furthermore, if N₁ samples are collected over the ADC input range, then each sample represents the
response to a voltage change ΔV given by

\Delta V = \frac{V_{UE} - V_{LE}}{N_1}    (7.13)
Conversely, we can also express this voltage step or voltage resolution in terms of the average
number of code hits H_{Average} and the LSB step size by combining Eqs. (7.6) and (7.8) with (7.13)
to arrive at

\Delta V = \frac{V_{LSB}}{H_{Average}}    (7.14)

By dividing each side of this equation by V_{LSB}, we obtain the voltage resolution expressed in
LSBs as

\frac{\Delta V}{V_{LSB}} = \pm\frac{1}{2}\,\frac{1}{H_{Average}}\ \ [\text{LSB}]    (7.15)
For example, if we measure an average of 5 hits per code, then the code width or code edge
would, on average, have one-fifth of an LSB of resolution. If one LSB step size is equivalent to
452.8 mV, as in the last example, then the code width and edge would have a possible error of
±45.28 mV. To improve the accuracy of the histogram test, the average number of hits per code
must be increased.
To understand how the average number of hits per code can be increased, consider combining
Eqs. (7.12), (7.13), and (7.14), together with Eq. (7.8), to arrive at

H_{Average} = \frac{F_S}{2^D - 2} \times T_R    (7.16)

Clearly, a higher average number of hits per code is achievable by using a longer ramp duration, a
higher ADC sampling frequency, or a smaller ADC resolution. The latter two parameters are generally
set by the DUT, so the test engineer really has only one option: run the ramp very slowly.
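The tradeoff between ramp duration and resolution can be put into numbers with Eqs. (7.14) through (7.16). The sampling rate and ramp duration below are illustrative assumptions, not values from a real DUT; only the LSB size is taken from the earlier example.

```python
# Resolution versus ramp-time tradeoff for the linear-ramp histogram,
# Eqs. (7.14)-(7.16). F_S and T_R are assumed, illustrative values.
D = 3                      # ADC resolution, bits
v_lsb = 452.8e-3           # LSB size in volts (from Example 7.2)
f_s = 100e3                # assumed ADC sampling rate, Hz
t_r = 10e-3                # assumed ramp duration, s

h_average = f_s * t_r / (2**D - 2)    # Eq. (7.16): average hits per code
delta_v = v_lsb / h_average           # Eq. (7.14): voltage step per sample
resolution_lsb = 0.5 / h_average      # Eq. (7.15): +/- resolution in LSBs
```

Doubling the ramp duration doubles the average hit count and halves both the voltage step and the resolution error, at the cost of doubled test time.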
[Figure 7.11. ADC output code versus input voltage (mV): consistent ADC samples, ADC samples with uncertainty caused by noise, and a possible transition sequence near a decision level.]
This, in turn, drives up the time of the test. Nonetheless, for characterization this is an acceptable
solution. Typically, a hit count on the order of several hundred per code is selected. The larger sample set
also helps to improve the repeatability of the test.
In production testing, however, we can only afford to collect a relatively small number of
samples from each code, typically 16 or 32. Otherwise the test time becomes excessive. Therefore,
even a perfect ADC will not produce a flat histogram in production testing because the limited
number of samples collected gives rise to a limited code width resolution and repeatability. We can
see that the samples in Figure 7.9 are spread too far apart to resolve small fractions of an LSB.
In addition to the accuracy limitation caused by limited resolution, we also face a repeat-
ability limitation.2 If we look carefully at Figure 7.9, we notice that several of the codes occur so
close to a decision level that the ADC noise will cause the results to vary from one test execution
to the next. This variability will happen even if our input signal is exactly the same during each
test execution. Figure 7.11 illustrates the uncertainty in output codes caused by noise in the ADC
circuits.
In many cases, we find that the raw data sequence from the ADC may zigzag up and down
as the output codes near a transition from one code to the next. In Figure 7.11, for instance, we
see that it is possible to achieve an ADC output sequence 4, 4, 4, 4, 4, 5, 4, 5, 5, 5 rather than the
ideal sequence 4, 4, 4, 4, 4, 4, 5, 5, 5, 5. Unfortunately, this is the nature of histogram testing of
ADCs. The results will be variable and somewhat unrepeatable unless we collect many samples
per code. In histogram testing, as in many other tests, there is an inherent tradeoff between good
repeatability and low test time. It is the test engineer’s responsibility to balance the need for low
test time with the need for acceptable accuracy and repeatability.
[Figure 7.12. Voltage-versus-time plots of the ramp and triangle-wave test stimuli.]
ramp. Both methods must produce a passing result before the ADC is considered good. However,
the extra test doubles the test time; thus we prefer to use only one ramp. If characterization shows
that we have a good match between the rising ramp and falling ramp, then we can drop back to
a single test for production. Alternatively, if characterization shows that either the rising ramp
or falling ramp always produces the worst-case results, then we can use only the worst-case test
condition to save test time.
A compromise solution is to ramp the signal up at twice the normal rate and then ramp it
down again (Figure 7.12). This triangle waveform approach tests both the falling and rising edge
locations, averaging their results. It takes no longer than a single ramp technique, but it cancels the
effects of hysteresis. A separate test could then be performed to verify that the ADC’s hysteresis
errors are within acceptable limits. The hysteresis test could be performed at only a few codes,
saving test time compared to the two-pass ramp solution.
test the ADC transition levels in a more dynamic, real-world situation. To do this, we can use a
high-frequency sinusoidal input signal. Our goal is to make the ADC respond to the rapidly chang-
ing inputs of a sinusoid rather than the slowly varying voltages of a ramp. In theory, we could use
a high-frequency triangle wave to achieve this result, but high-frequency linear triangles are much
more difficult to produce than high-frequency sinusoids.
Ramp inputs have an even distribution of voltages over the entire ADC input range. Sinusoids,
on the other hand, have an uneven distribution of voltages. A sine wave spends much more time
near the upper and lower peak than at the center. As a result, we would expect to get more code hits
at the upper and lower codes than at the center of the ADC’s transfer curve, even when testing a
perfect ADC. Fortunately, the distribution of voltage levels in a pure sinusoid is well defined; thus
we can compensate for the uneven distribution of voltages inherent to sinusoidal waveforms.
Figure 7.13 shows a sinusoidal waveform that is quantized by a 4-bit ADC. Notice that there
are only 15 decision levels in a 4-bit ADC and that the sine wave is programmed to exceed the
upper and lower decision levels by a fairly wide margin. The reason we program the sine wave to
exceed the ADC’s full-scale range is that we have to make sure that the sine wave passes through
all the codes if we want to get a histogram of all code widths. If we expand the time scale to view
a quarter period of the waveform, we can see how the distribution of output codes is nonuniform
due to the sinusoidal distribution of voltages, as shown in Figure 7.14. Clearly we get more code
hits near the peaks of the sine wave than at the center, even for this simple example.
In order to understand the details of this test setup more clearly, consider the illustration
shown in Figure 7.15. The diagram consists of three parts. At the center is the transfer character-
istic of a 4-bit ADC, that is, 16 output code levels expressed as a function of the ADC input volt-
age. Below the ADC input voltage axis is a rotated graph of the ADC input voltage as a function
of time. Here the input signal is a sine wave of amplitude peak and with DC offset, Offset, all
[Figure 7.15. Sinusoidal histogram test setup: the 4-bit ADC transfer characteristic (center) with its decision levels; the input sine wave V_IN,ADC versus time (below), showing Peak and Offset about V_MID between V_LE and V_UE, a distance of ±Δ; and the measured "bathtub" histogram, H(0) through H(15), rotated at the left.]
referenced to the middle level of the ADC input as defined by the distance between the upper and
lower decision levels (VUE and VLE), that is,
where

\Delta = \frac{V_{UE} - V_{LE}}{2}    (7.18)
Looking back at the expression for the LSB step size V_{LSB} in Eq. (7.8), we recognize that we can
also write Δ as

\Delta = \left(2^{D-1} - 1\right) V_{LSB}    (7.19)
On the left-hand side of the ADC transfer curve, we have a plot of a histogram of the ADC code lev-
els (albeit rotated by 90 degrees). Here we see that the histogram exhibits a “bathtub”-like shape.
If we try to use this histogram result the same way we use the linear ramp histogram results, the
upper and lower codes would appear to be much wider than the middle codes. Clearly, we need to
normalize our histogram to remove the effects of the sinusoidal waveform’s nonuniform voltage
distribution.
The normalization process is slightly complicated because we do not really know what the
gain and offset of the ADC will be a priori. Additionally, we may not know the exact offset and
amplitude of the sinusoidal input waveform. Fortunately, we have a piece of information at our
disposal that tells us the level and offset of the signal as the ADC sees it.
The number of hits at the upper and lower codes in our histogram can be used to calculate the
input signal’s offset and amplitude. For example, in Figure 7.13, we can see that we will get more
hits at the lower code than at the upper code. The lower codes will be hit more often because the
sinusoid has a negative offset. The mismatch between these two numbers tells us the offset, while
the number of total hits tells us the amplitude. The pdf of the input voltage seen by the ADC is
f(v) = \begin{cases} \dfrac{1}{\pi\sqrt{peak^2 - (v - offset)^2}}, & -peak + offset \le v \le peak + offset \\ 0, & \text{otherwise} \end{cases}    (7.20)
The probability that the input signal is less than the lowest code decision level, now defined by
−Δ (i.e., relative to the ADC mid-level), is given by

P(V_{IN,ADC} < -\Delta) = \int_{-peak+offset}^{-\Delta} \frac{1}{\pi\sqrt{peak^2 - (v - offset)^2}}\, dv    (7.21)

P(V_{IN,ADC} < -\Delta) = \frac{1}{\pi}\left[\sin^{-1}\!\left(\frac{-\Delta - offset}{peak}\right) + \frac{\pi}{2}\right]    (7.22)
Likewise, the probability that the input signal is larger than the highest code decision level +Δ is
found in a similar manner as

P(\Delta < V_{IN,ADC}) = \int_{\Delta}^{peak+offset} \frac{1}{\pi\sqrt{peak^2 - (v - offset)^2}}\, dv = \frac{1}{\pi}\left[\frac{\pi}{2} - \sin^{-1}\!\left(\frac{\Delta - offset}{peak}\right)\right]    (7.23)
If N samples are collected from the ADC output (including end code counts), then the expected
number of code hits for code 0 and code 2^D − 1 is simply given by

H(0) = N \times P(V_{IN,ADC} < -\Delta) = \frac{N}{\pi}\left[\sin^{-1}\!\left(\frac{-\Delta - offset}{peak}\right) + \frac{\pi}{2}\right]

H\!\left(2^D - 1\right) = N \times P(\Delta < V_{IN,ADC}) = \frac{N}{\pi}\left[\frac{\pi}{2} - \sin^{-1}\!\left(\frac{\Delta - offset}{peak}\right)\right]    (7.24)
Here we see we have two equations and two unknowns, which leads to the following solution:
offset = \left(\frac{C_2 - C_1}{C_2 + C_1}\right)\Delta = \left(\frac{C_2 - C_1}{C_2 + C_1}\right)\left(2^{D-1} - 1\right) V_{LSB}    (7.25)
and

peak = \frac{2}{C_2 + C_1}\,\Delta = \frac{2}{C_2 + C_1}\left(2^{D-1} - 1\right) V_{LSB}    (7.26)
where
C1 = cos ⎜ π
(
⎛ H 2D − 1 )⎟⎞ ⎛ H (0 ) ⎞
C2 = cos ⎜ π
and ⎟
⎜ N ⎟ ⎝ N ⎠
⎝ ⎠
We should note that N should be large enough that each ADC code is hit at least 16 times. The
common rule of thumb is to collect at least 32 samples for each code in the ADC’s transfer curve.
For example, an 8-bit converter would require 28 × 32 = 8192 samples. Of course, some codes will
be hit more often than 32 times and some will be hit less often than 32 times due to the curved
nature of the sinusoidal input.
Once we know the values of peak and offset, we can calculate the ideal sine wave distribu-
tion of code hits, denoted H_{sinewave}, that we would expect from a perfectly linear ADC excited by
a sinusoid. With the ith ideal decision level located at −Δ + i·V_{LSB} relative to the ADC mid-level, the
equation for the ith code count, once again excluding the upper and lower code counts, is

H_{sinewave}(i) = \frac{N}{\pi}\left[\sin^{-1}\!\left(\frac{-\Delta + i\,V_{LSB} - offset}{peak}\right) - \sin^{-1}\!\left(\frac{-\Delta + (i-1)\,V_{LSB} - offset}{peak}\right)\right], \quad i = 1, 2, \ldots, 2^D - 2    (7.27)

Dividing the measured count by the expected count then gives the code width in LSBs:

LSB\ code\ width(i) = \frac{H(i)}{H_{sinewave}(i)}, \quad i = 1, 2, \ldots, 2^D - 2    (7.28)
Figure 7.16 illustrates the sinusoidal histogram normalization process for an idealized 4-bit ADC.
Once we have calculated the normalized histogram, we are ready to convert the code widths into a
code edge plot, using the same steps as we used for the linear ramp histogram method.
This example is based on an ideal ADC with equal code widths. Even with this idealized
simulation, the normalized histogram does not result in equal code width measurements. This
simulated example was based on a sample size of 32 samples per ADC code (16 ADC codes × 32
samples per code = 512 collected samples). As we can see in Figure 7.16, many of the codes were
hit fewer than 20 times in this simulation. Like the linear ramp histogram method, the number
of hits per code limits the measurement resolution of a sinusoidal histogram. If we had collected
hundreds of samples for each code in this 4-bit ADC example, the results would have been much
closer to a flat histogram. Also, the repeatability of code width measurements will improve with a
larger sample size. Unfortunately, a larger sample size requires a longer test time. Again, we are
faced with a tradeoff between low test time and high accuracy. We’ll explore this in greater detail
shortly. Let us first look at an example.
[Figure 7.16. Sinusoidal histogram normalization for an idealized 4-bit ADC: the measured histogram H(i), with end-code counts H0 = 140 and H15 = 70, is divided by the expected histogram Hsinewave(i) to convert the sinusoidal histogram to code widths in LSBs; the widths of the lowest and highest codes are undefined.]
EXAMPLE 7.4
The distribution of code hits for a two’s complement 4-bit ADC excited by a sinusoidal signal
beginning with code −8 is as follows
170, 61, 55, 48, 44, 42, 40, 39, 39, 40, 41, 42, 45, 50, 72, 196
A binary search was performed on the first transition between codes −8 and −7 and found the
code edge to be at 330 mV. A second binary search was performed and found the code edge
between codes 6 and 7 to be 4.561 V. What is the average LSB size for this 4-bit ADC? What is
the mid-level of the ADC input? What is the offset and amplitude of the sinusoidal signal seen by
the ADC relative to its mid-level? What is the expected, or ideal, sinusoidal distribution of code hits
corresponding to this input signal? Determine the width of each code, as well as the code edges
(all in volts). Plot the transfer characteristics of this ADC.
Solution:
According to Eq. (7.8), we find the average LSB step size is

V_{LSB} = \frac{4.561\ \text{V} - 0.330\ \text{V}}{2^4 - 2} = 302.2\ \text{mV}
Next, we compute the half-range of the sinusoidal signal seen by the ADC relative to the input
mid-level:

\Delta = \frac{V_{UE} - V_{LE}}{2} = \frac{4.561 - 0.330}{2} = 2.115\ \text{V}
The parameters of the sine wave seen by the ADC relative to mid-level are then found from the
code hit data such that

C_1 = \cos\!\left(\pi\,\frac{H(2^D - 1)}{N}\right) = \cos\!\left(\pi\,\frac{196}{1024}\right) = 0.8245,

C_2 = \cos\!\left(\pi\,\frac{H(0)}{N}\right) = \cos\!\left(\pi\,\frac{170}{1024}\right) = 0.8670
leading to

peak = \frac{2}{C_2 + C_1}\,\Delta = \frac{2}{0.8670 + 0.8245} \times 2.115 = 2.5011\ \text{V}

offset = \left(\frac{C_2 - C_1}{C_2 + C_1}\right)\Delta = \left(\frac{0.8670 - 0.8245}{0.8670 + 0.8245}\right) \times 2.115 = 0.05309\ \text{V}
Next, the expected number of code hits for an ideal ADC is found from Eq. (7.27), resulting in the
following list of code hits beginning with code −7 and ending with code 6:
Hsinewave = 67.42, 54.33, 47.78, 44.02, 41.67, 40.23, 39.56, 39.44, 39.88, 41.09,
43.10, 46.25, 51.56, 61.50
Subsequently, the width of each code (-7 to 6) expressed in LSBs is found from Eq. (7.28) to be
code width = 0.9048, 1.012, 1.005, 0.9995, 1.008, 0.9943, 0.9858, 0.9888, 1.003,
0.9978, 0.9745, 0.9730, 0.9697, 1.171
The code width of each code (-7 to 6) is scaled by VLSB to obtain the code width in volts:
code width = 0.2734, 0.3058, 0.3037, 0.3020, 0.3046, 0.3005, 0.2979, 0.2988, 0.3031,
0.3015, 0.2945, 0.2940, 0.2930, 0.3539
Finally, the code edges are found through application of Eq. (7.11) using the above code widths
in volts:
code edges = 0.330, 0.6034, 0.9092, 1.213, 1.515, 1.820, 2.120, 2.418, 2.717,
3.020, 3.322, 3.616, 3.910, 4.203, 4.557
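The whole sinusoidal histogram calculation of this example can be sketched in Python. The hit counts and the two searched code edges come from the example statement; the expected counts use the arcsine-distribution probability between consecutive ideal decision levels at −Δ + i·V_LSB, which is the form of Eq. (7.27) assumed here.

```python
# Sinusoidal histogram processing for Example 7.4 (two's complement 4-bit ADC).
import math

hits = [170, 61, 55, 48, 44, 42, 40, 39, 39, 40, 41, 42, 45, 50, 72, 196]
v_le, v_ue = 0.330, 4.561              # measured first and last code edges (V)
D = 4
N = sum(hits)                          # total samples collected (1024)

v_lsb = (v_ue - v_le) / (2**D - 2)     # Eq. (7.8): about 302.2 mV
delta = (v_ue - v_le) / 2              # Eq. (7.18): 2.115 V

c1 = math.cos(math.pi * hits[-1] / N)  # from end-code counts, Eq. (7.26) where-clause
c2 = math.cos(math.pi * hits[0] / N)
peak = 2 * delta / (c2 + c1)           # Eq. (7.26)
offset = (c2 - c1) / (c2 + c1) * delta # Eq. (7.25)

def edge(i):
    """Ideal decision level below code i, relative to the ADC mid-level."""
    return -delta + i * v_lsb

# Eq. (7.27): expected hits for interior codes i = 1 .. 2**D - 2
h_sine = [N / math.pi * (math.asin((edge(i) - offset) / peak)
                         - math.asin((edge(i - 1) - offset) / peak))
          for i in range(1, 2**D - 1)]

# Eq. (7.28): code widths in LSBs, then in volts
widths_lsb = [hits[i] / h_sine[i - 1] for i in range(1, 2**D - 1)]
widths_v = [w * v_lsb for w in widths_lsb]
```

A useful consistency check: because peak and offset are solved from the two end-code counts, the expected interior counts must sum to exactly N minus the two end-code counts.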
Exercises

7.9. The distribution of code hits for an unsigned 4-bit ADC excited by a sinusoidal signal beginning with code 0 is as follows:

137, 80, 52, 60, 40, 51, 36, 48, 36, 48, 37, 52, 42, 64, 80, 160

A binary search was performed on the first transition between codes 0 and 1 and found the code edge to be at −4.921 V. A second binary search was performed and found the code edge between codes 14 and 15 to be 4.952 V. What is the offset and amplitude of the input sinusoidal signal? What is the expected or ideal sinusoidal distribution of code hits? Finally, what is the distribution of code widths (in volts) for this ADC?

ANS. Offset = 0.0849 V; peak = 5.500 V. Ideal sinusoidal distribution (code 1 to 14): 80.75, 60.56, 51.95, 47.19, 44.36, 42.70, 41.90, 41.81, 42.43, 43.86, 46.37, 50.55, 57.90, 73.60. Code widths (code 1 to 14): 0.7003, 0.6060, 0.8159, 0.5980, 0.8103, 0.5948, 0.8082, 0.6070, 0.7983, 0.5949, 0.7912, 0.5864, 0.7800, 0.7694.
To gain a better idea of the distribution of codes and its impact on the performance of the
sinusoidal histogram test, consider the minimum and maximum number of code hits correspond-
ing to an ideal D-bit ADC excited by a peak-to-peak sine wave signal equal to the full-scale range
of the ADC with frequency FT as follows:
maximum\ number\ of\ code\ hits = \frac{1}{\pi}\left(\frac{F_S}{F_T}\right)\left[\frac{\pi}{2} - \sin^{-1}\!\left(1 - \frac{1}{2^{D-1}}\right)\right]    (7.29)

minimum\ number\ of\ code\ hits = \frac{1}{2\pi}\left(\frac{F_S}{F_T}\right)\sin^{-1}\!\left(\frac{1}{2^{D-1}}\right)
While not explicit in the above equation, one can show that the total number of samples collected
from the ADC output when excited by a single cycle of the sine wave input is given by
N = \frac{F_S}{F_T}    (7.30)
Like before, both FS and D are parameters of the DUT, leaving FT as the only parameter that the
test engineer can use to optimize the test (e.g., minimize test time). The lower the test frequency,
FT, the greater the number of minimum code hits and, in turn, the longer the test time. The resolu-
tion of the measurement is bounded by the largest step size that takes place during the sampling
process. Each step change in the input signal level can be described by
\Delta V[n] = \frac{V_{FSR}}{2}\left\{\sin\!\left[2\pi\frac{F_T}{F_S}(n+1)\right] - \sin\!\left[2\pi\frac{F_T}{F_S}\,n\right]\right\}    (7.31)
The largest step change occurs around the zero crossing point of the sine wave, resulting in the
sinusoidal histogram method having a worst-case voltage resolution of

\max\{\Delta V\} = \frac{V_{FSR}}{2}\sin\!\left(2\pi\frac{F_T}{F_S}\right)    (7.32)
If we assume the full-scale range is equivalent to 2^D V_{LSB}, Eq. (7.32) can be rewritten as

\max\{\Delta V\} = 2^{D-1}\,V_{LSB}\sin\!\left(2\pi\frac{F_T}{F_S}\right)    (7.33)
Furthermore, because F_T is typically much smaller than F_S, we can approximate the worst-case
voltage resolution of the sinusoidal histogram test as

\max\{\Delta V\} = \pm 2^{D-1}\pi\,V_{LSB}\,\frac{F_T}{F_S}    (7.34)
Clearly, the lower the test frequency, the higher the resolution but the longer the test time.
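Eqs. (7.33) and (7.34) can be checked numerically. The 8-bit converter, sampling rate, and test-tone frequency below are illustrative assumptions; the ± form of Eq. (7.34) is read here as half the largest step either side of the nominal edge, consistent with Eq. (7.15).

```python
# Worst-case sinusoidal-histogram resolution, Eqs. (7.33)-(7.34).
import math

D = 8
v_lsb = 1.0 / 2**D          # assumed 1 V full-scale range -> V_LSB in volts
f_t, f_s = 10.0, 100e3      # assumed test-tone and sampling frequencies, Hz

exact = 2**(D - 1) * v_lsb * math.sin(2 * math.pi * f_t / f_s)   # Eq. (7.33)
approx = 2**(D - 1) * math.pi * v_lsb * f_t / f_s                # Eq. (7.34) magnitude
```

For F_T much smaller than F_S the small-angle approximation makes the ± magnitude of Eq. (7.34) essentially half of the full step given by Eq. (7.33).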
[Figure: ADC output code (0 to 15) versus ADC input voltage (0 to 1.5 V), panels (a) and (b).]
Subsequently, as described in Chapter 6, the DNL curve can then be integrated using a running
sum to calculate the endpoint INL curve in units of LSBs according to the following
endpoint\ INL(i) = \begin{cases} 0, & i = 1 \\ \displaystyle\sum_{k=1}^{i-1} endpoint\ DNL(k), & i = 2, 3, \ldots, 2^D - 2 \\ 0, & i = 2^D - 1 \end{cases}    (7.36)
Using this shortcut method, we never even have to compute the absolute voltage level for each
code edge, unless we need that information for a separate test, such as gain or offset.
As with DAC INL and DNL testing, a best-fit approach is the preferred method for calculat-
ing ADC INL and DNL. As discussed in Chapter 6, “DAC Testing,” best-fit INL and DNL testing
results in a more meaningful, repeatable reference line than endpoint testing, since the best-fit ref-
erence line is less dependent on any individual code’s edge location. We can convert an endpoint
INL curve to a best-fit INL curve by first calculating the best-fit line for the endpoint INL curve.
Subtracting the best-fit line from the endpoint INL curve yields the best-fit INL curve, that is,

best-fit INL(i) = endpoint INL(i) − best-fit line(i)    (7.37)

The best-fit DNL curve is then calculated by taking the discrete-time first derivative of the best-fit
INL curve according to

best-fit DNL(i) = best-fit INL(i + 1) − best-fit INL(i),  i = 1, 2, . . . , 2^D − 2    (7.38)
Notice that the histogram method captures an endpoint DNL curve and then integrates the
DNL curve to calculate endpoint INL. This is unlike the DAC methodology and the ADC servo/
search methodologies, which start with a measurement of absolute voltage levels to measure INL
and then calculate the DNL through discrete time first derivatives. The following example will
illustrate this method.
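The shortcut just described, histogram to DNL to running-sum INL, can be sketched as follows. The hit counts here are illustrative and deliberately flat, so that the expected result is zero DNL and zero INL everywhere; the DNL definition assumed is code width minus one ideal LSB, as in Eq. (7.35).

```python
# Histogram -> endpoint DNL -> endpoint INL (Eqs. (7.35) and (7.36)).
def endpoint_dnl_inl(hits):
    """Return (dnl, inl) for the interior codes of a ramp histogram."""
    interior = hits[1:-1]                       # end-code widths are undefined
    h_average = sum(interior) / len(interior)   # Eq. (7.6)
    dnl = [h / h_average - 1.0 for h in interior]   # Eq. (7.35)
    inl = [0.0]                                 # Eq. (7.36): INL(1) = 0
    for d in dnl[:-1]:
        inl.append(inl[-1] + d)                 # running sum of DNL
    inl.append(0.0)                             # endpoint constraint at code 2**D - 1
    return dnl, inl

hits = [9] + [5] * 14 + [11]                    # flat interior, 4-bit ADC
dnl, inl = endpoint_dnl_inl(hits)               # all zeros for a flat histogram
```

Note that the absolute voltage of each code edge never needs to be computed in this flow.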
EXAMPLE 7.5
A linear histogram test was performed on an unsigned 4-bit ADC resulting in the following dis-
tribution of code hits beginning with code 0:
4, 5, 5, 7, 8, 4, 2, 4, 4, 3, 6, 3, 4, 6, 5, 9
Determine the best-fit DNL and INL characteristics of this ADC.
Solution:
We begin by first finding the endpoint DNL characteristics for this ADC. As the average code hit
count is 4.714, we find that the code widths (in LSBs) beginning with code 1 and ending with code 14 are:
Code Widths:
[0, undefined], [1, 1.061], [2, 1.061], [3, 1.485], [4, 1.697], [5, 0.8485], [6, 0.4243],
[7, 0.8485], [8, 0.8485], [9, 0.6364], [10, 1.273], [11, 0.6364], [12, 0.8485], [13, 1.273],
[14, 1.061], [15, undefined]
Subsequently, using Eq. (7.35), we find that the endpoint DNL characteristics beginning with the
0 to 1 code transition and ending with the 13th to 14th code transition is:
Endpoint DNL:
[1, 0.061], [2, 0.061], [3, 0.485], [4, 0.697], [5, −0.1515], [6, −0.5757], [7, −0.1515],
[8, −0.1515], [9, −0.3636], [10, 0.273], [11, −0.3636], [12, −0.1515], [13, 0.273], [14, 0.061]
Integrating the DNL function according to Eq. (7.36), we find the endpoint INL characteristics
beginning with code 1 and ending with code 15 as follows:
Endpoint INL:
[1, 0], [2, 0.061], [3, 0.122], [4, 0.607], [5, 1.304], [6, 1.152], [7, 0.5763], [8, 0.4248],
[9, 0.2733], [10, −0.0903], [11, 0.1827], [12, −0.1809], [13, −0.3324], [14, −0.0594], [15, 0]
Using the regression analysis equations of Chapter 6, we find that the gain and offset parameters of
the best-fit line associated with the endpoint INL curve are −0.04393 and 0.5769, respectively.
The best-fit line corresponding to the endpoint INL is then given by the expression

best-fit endpoint INL[n] = −0.04393 n + 0.5769

Evaluating this function for n from 1 to 15, and subtracting it from the endpoint INL data set,
that is,

best-fit INL[i] = endpoint INL[i] − best-fit endpoint INL[i],  i = 1, . . . , 15
we obtain the following set of best-fit INL points:
Best-Fit INL:
[1, −0.5330], [2, −0.4280], [3, −0.3231], [4, 0.2058], [5, 0.9467], [6, 0.8387], [7, 0.3069],
[8, 0.1993], [9, 0.0918], [10, −0.2279], [11, 0.0890], [12, −0.2306], [13, −0.3382],
[14, −0.0213], [15, 0.0821]
Below is a plot of three sets of data corresponding to the INL characteristics of the ADC: endpoint
INL, the regression line for the endpoint INL, and the corresponding best-fit INL. As is clearly
evident, the endpoint INL and best-fit INL are different.
Finally, we compute the best-fit DNL characteristics of the ADC by differentiating the best-fit
INL curve using the first-order difference operation given in Eq. (7.38):
Best-Fit DNL:
[1, 0.1050], [2, 0.1049], [3, 0.5289], [4, 0.7409], [5, −0.1080], [6, −0.5318], [7, −0.1076],
[8, −0.1075], [9, −0.3197], [10, 0.3169], [11, −0.3196], [12, −0.1076], [13, 0.3169], [14, 0.1034]
We see from the above that the DNL is very similar for both the endpoint and best-fit reference
lines.
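The endpoint-to-best-fit conversion of Example 7.5 can be reproduced directly. The regression-line gain and offset are taken as given from the example rather than re-derived, and the endpoint INL values are the ones computed above; the differencing step follows Eq. (7.38).

```python
# Endpoint -> best-fit conversion using the line parameters of Example 7.5.
endpoint_inl = [0, 0.061, 0.122, 0.607, 1.304, 1.152, 0.5763, 0.4248,
                0.2733, -0.0903, 0.1827, -0.1809, -0.3324, -0.0594, 0]

gain, offset = -0.04393, 0.5769
best_fit_line = [gain * n + offset for n in range(1, 16)]

# Subtract the regression line from the endpoint INL to get the best-fit INL
best_fit_inl = [e - b for e, b in zip(endpoint_inl, best_fit_line)]

# First-order difference gives the best-fit DNL (Eq. (7.38))
best_fit_dnl = [best_fit_inl[i + 1] - best_fit_inl[i]
                for i in range(len(best_fit_inl) - 1)]
```

The computed values match the tabulated best-fit INL and DNL of the example to within the rounding of the printed figures.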
negative. (One example of this is an ADC whose DC reference voltage is somehow drastically
perturbed as the input voltage varies. However, this failure mechanism is quite rare.) Nevertheless,
an ADC can appear to be nonmonotonic when its input is changing rapidly.3
For this reason, we do not typically test ADCs for monotonicity when we use slowly chang-
ing inputs (as in search or linear ramp INL and DNL tests). However, when testing ADCs with
rapidly changing inputs, the ADC may behave as if it were nonmonotonic due to slew rate limi-
tations in its comparator(s). These monotonicity errors show up as signal-to-noise ratio failures
in some ADCs and as sparkling in others. (Sparkling is a dynamic failure mode discussed in
Section 7.4.3.)
Unlike DACs, ADCs are often tested for missing codes. A missing code is one whose
voltage width is zero. This means that the missing code can never be hit, regardless of the
ADC’s input voltage. A missing code appears as a missing step on an ADC transfer curve, as
illustrated in Figure 7.18. Since DACs always produce a voltage for each input code, DACs
cannot have missing codes. Although a true missing code is one that has zero width, miss-
ing codes are often defined as any code having a code width smaller than some specified
value, such as 1/10 LSB. Technically, a code having a width of 1/10 LSB is not missing, but
the chances of it being hit are low enough that it is considered to be missing from the ADC
transfer curve.
Figure 7.18. (a) Monotonicity errors in DACs and (b) missing codes in ADCs.
[Figure: panel (a) plots DAC output voltage versus DAC input code, marking a monotonicity error; panel (b) plots ADC output code versus ADC input voltage, marking a missing code.]
We typically test Tconvert by measuring the period of time from the CONVERT signal’s active
edge to the DATA_READY signal’s active edge. We have to verify that the Tconvert time is less
than or equal to the maximum conversion time specification. For this measurement, we can use
a time measurement system (TMS) instrument, or we can sometimes use the tester’s digital
pattern compare function if we can tolerate a less accurate pass/fail test. We can verify the Fmax
specification (and thus the Trecovery specification) by simply operating the converter at its maxi-
mum sampling rate, Fmax, and verifying that it passes all its dynamic performance specifications
at this frequency.
[Figure: ADC conversion timing for Samples 1 and 2, showing the CONVERT, DATA_READY, READ, and DATA signals together with Tconvert, Trecovery, and Tsample.]
Figure 7.20. ADC conversion cycles with internally generated CONVERT signal.
[Timing diagram: DATA_READY, READ, and DATA for Samples 1 and 2; the first sample read is invalid, and the second is the first valid sample.]
In many ADC designs, the CONVERT signal is generated automatically after the ADC out-
put data is read, as shown in Figure 7.20. This type of converter requires no externally supplied
CONVERT signal. The first sample read from the ADC must therefore be discarded, since no
conversion is performed until after the first READ pulse initiates the first conversion cycle.
Sometimes ADCs simply perform continuous conversions at a constant sampling rate. The
CONVERT signal is generated at a fixed frequency derived from the device master clock. This
architecture is very common in ADC channels such as those in a cellular telephone voice band
interface or multimedia audio device. The continuous conversions can usually be disabled by a
register bit or other control mechanism to minimize power consumption when conversions are not
needed. These devices sometimes generate a DATA_READY signal that must be used to synchro-
nize the tester with an asynchronous data stream. DUT-defined timing can be a difficult situation
to deal with, since ATE testers are not designed to operate in a slave mode with the DUT driving
digital timing.
Clearly, there are many ways to design ADCs. The test engineer has to deal with many differ-
ent permutations of interfacing possibilities, each with its own testing requirements.
7.4.3 Sparkling
Sparkling is a phenomenon that happens most often in high-speed flash converters, such as those
described ahead in Section 7.5.3, due to digital timing race conditions. It is the tendency for an
ADC to occasionally produce a conversion that exhibits a larger than expected deviation from the
expected value. We can think of a sparkle sample as one that is a statistical outlier from the Gaussian
distribution in Figure 7.6. Sparkling shows up in a time-domain plot as sudden variations from the
expected values. It got its name from early flash ADC applications, in which the sample outliers
produced white sparkles on a video display. Sparkling is specified as a maximum acceptable devi-
ation from the expected conversion result. For example, we might see a specification that states
sparkling will be less than 2 LSBs, meaning that we will never see a sample that is more than 2
LSBs from the expected value (excluding gain and offset errors, of course). Sparkling should not
be confused with noise-induced errors such as those illustrated in Figure 7.11.
Test methodologies for sparkling vary, mainly in the choice of input signal. We might look
for sparkling in our ramp histogram raw data, such as that shown in Figure 7.21. We might also
apply a very high-frequency sine wave to the ADC and look for time-domain spikes in the col-
lected samples.
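A sparkle screen on collected samples reduces to flagging any code that deviates by more than the specified limit from the expected code. The sketch below is a minimal version of that check; the 2 LSB limit, the expected-code staircase, and the sample data are all illustrative, and a real test would derive the expected codes from the applied stimulus after removing gain and offset errors.

```python
# Minimal sparkle screen: flag samples deviating more than limit_lsb codes
# from the expected code for the same input voltage.
def find_sparkles(samples, expected, limit_lsb=2):
    """Return indices where |sample - expected| exceeds limit_lsb."""
    return [i for i, (s, e) in enumerate(zip(samples, expected))
            if abs(s - e) > limit_lsb]

# Illustrative ramp data for a 3-bit ADC: two sparkle samples injected.
expected = [0, 0, 1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6, 7, 7]
samples  = [0, 0, 1, 1, 2, 7, 3, 3, 4, 4, 5, 0, 6, 6, 7, 7]
sparkle_indices = find_sparkles(samples, expected)   # -> [5, 11]
```

Because sparkling is intermittent, a screen like this is best run on samples already collected for another parametric test, as the text suggests, rather than as a dedicated long acquisition.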
[Figure 7.21. Ramp histogram raw data (output code versus input voltage, mV) with sparkle samples deviating from the expected code sequence.]
Since it is a random digital failure process, sparkling often produces intermittent test results.
Sparkling is generally caused by a weakness in the ADC design that must be eliminated through
good design margin rather than being screened out by exhaustive testing. Nevertheless, ADC
sparkling tests are often added to a test program as a quick sanity check, making use of samples
collected for one of the required parametric tests.
Signal-to-noise ratio, group delay distortion, and other transmission parameters are often
specified in data transmission applications. Also, data transmission specifications such as
error vector magnitude (EVM), phase trajectory error (PTE), and bit error rate (BER) may
also need to be tested. These parameters are so numerous that we cannot possibly cover them all
in this book. The test engineer will have to learn about these and other application-specific testing
requirements by studying the relevant standards documents. ATE vendors
can also be a tremendous source of expertise when learning about new testing requirements
and methodologies.
Most ADC architectures are well suited for low-frequency data transmission applications
(with the exception of integrating converters). High-frequency applications may require fast
successive approximation ADCs, semiflash ADCs, or even full-flash ADCs, depending on the
required sampling rates.
7.6 SUMMARY
ADC testing is very closely related to DAC testing. Many of the DC and intrinsic tests defined in
this chapter are very similar to those performed on DACs. The most important difference is that
the ADC code edge transfer curve is harder and much more time-consuming to measure than the
DAC transfer curve. However, once the many-to-one statistical mapping of an ADC has been con-
verted to a one-to-one code edge transfer curve, the DC and transfer curve tests are very similar in
nature to those encountered in DAC testing. This chapter by no means represents an exhaustive list
of all possible ADC types and testing methodologies. There is a seemingly endless variety of ADC
architectures and methods for defining their performance. Hopefully, this chapter will provide a
solid starting point for the beginning test engineer.
PROBLEMS
7.1. If V is normally distributed with zero mean and a standard deviation of 50 mV, find P(V <
40 mV). Repeat for P(V > 10 mV). Repeat for P(−10 mV < V < 40 mV).
7.2. If V is normally distributed with mean 10 mV and standard deviation 50 mV, find P(V < 40
mV). Repeat for P(V > 10 mV). Repeat for P(−10 mV < V < 40 mV).
7.3. If V is normally distributed with zero mean and standard deviation 200 mV, what is the
value of V0 such that P(V < V0) = 0.6?
7.4. An ADC input is set to 3.340 V DC. The noise of the ADC and DC signal source is charac-
terized to be 15 mV RMS and is assumed to be perfectly Gaussian. The transition between
code 324 and 325 occurs at 3.350 V DC for this particular ADC; therefore the value 324 is
the expected output from the ADC. What is the probability that the ADC will produce code
325 instead of 324? If we collected 400 samples from the output of the ADC, how many
would we expect to be code 324 and how many would be code 325?
7.5. An ADC input is set to 1.000 V DC. The transition between code 65 and 66 occurs at 1.025
V DC for this particular ADC. If 200 samples of the ADC output are collected and 176
of them are code 65 and the remaining code 66, what is the RMS value of the noise at the
input of this particular ADC?
7.6. An ADC input is set to 2.000 V DC. The noise of the ADC and DC signal source is charac-
terized to be 10 mV RMS and is assumed to be perfectly Gaussian. The transition between
code 115 and 116 occurs at 1.990 V DC and the transition between code 116 and 117
occurs at 2.005 V DC for this particular ADC. If 500 samples of the ADC output are col-
lected, how many do we expect to be code 115, code 116, and code 117?
7.7. A linear histogram test was performed on an unsigned binary 3-bit ADC, resulting in the
following distribution of code hits beginning with code 0:
5, 6, 4, 6, 7, 7, 5, 6
A binary search was performed on the first transition between codes 0 and 1 and found
the code edge to be at 10 mV. A second binary search was performed and found the code
edge between codes 6 and 7 to be 1.25 V. What is the average LSB size for this 3-bit ADC?
Determine the width of each code, in volts. Also, determine the location of the code edges.
Plot the transfer curve for this ADC.
7.8. A linear histogram test was performed on a two’s complement 4-bit ADC, resulting in
the following distribution of code hits beginning with code −8:
12, 15, 13, 12, 10, 12, 12, 14, 14, 13, 15, 19, 16, 14, 20, 19
A binary search was performed on the first transition between codes −8 and −7 and found
the code edge to be at 75 mV. A second binary search was performed and found the code
edge between codes 6 and 7 to be 4.56 V. What is the average LSB size for this 4-bit ADC?
Determine the width of each code, in volts. Also, determine the location of the code edges.
Plot the transfer curve for this ADC.
7.9. A linear histogram test was performed on an unsigned binary 3-bit ADC, resulting in the
following distribution of code hits beginning with code 0:
6, 6, 5, 6, 4, 6, 5, 6
A binary search was performed on the first transition between codes 0 and 1 and found the
code edge to be at 32 mV. A second binary search was performed and found the code edge
between codes 6 and 7 to be 3.125 V. What is the average LSB size for this 3-bit ADC?
What is the measurement accuracy of this test, in volts?
7.10. A 12-bit ADC operates with a sampling frequency of 25 MHz. If a linear histogram test is
to be conducted on this ADC, what should be the minimum duration of the ramp signal so
that the average code count is at least 6 hits? What about for 100 hits?
7.11. A 10-bit ADC with a 10-V full-scale range operates with a sampling frequency of 60 MHz.
If a linear histogram test is to be conducted on this ADC with a ramp signal of 100-μs dura-
tion, estimate the voltage resolution of this test. How many samples need to be collected
for this test?
7.12. A sinusoidal histogram test was performed on an unsigned binary 4-bit ADC, resulting in
the following distribution of code hits beginning with code 0:
137, 81, 60, 52, 47, 44, 43, 42, 42, 42, 44, 46, 50, 57, 72, 166
A binary search was performed on the first transition between codes 0 and 1 and found the
code edge to be at 14 mV. A second binary search was performed and found the code edge
between codes 14 and 15 to be 0.95 V. What is the average LSB size for this 4-bit ADC?
Determine the width of each code, in volts. Also, determine the location of the code edges.
Plot the transfer curve for this ADC.
7.13. A sinusoidal histogram test was performed on a two’s complement binary 4-bit ADC,
resulting in the following distribution of code hits beginning with code −8:
251, 163, 104, 118, 80, 99, 71, 93, 70, 94, 72, 101, 82, 124, 163, 315
244 AN INTRODUCTION TO MIXED-SIGNAL IC TEST AND MEASUREMENT
A binary search was performed on the first transition between codes −8 and −7 and found
the code edge to be at 20 mV. A second binary search was performed and found the code
edge between codes 6 and 7 to be 9.94 V. What is the average LSB size for this 4-bit ADC?
Determine the width of each code, in volts. Also, determine the location of the code edges.
Plot the transfer curve for this ADC.
7.14. A sinusoidal histogram test was performed on an unsigned binary 4-bit ADC, resulting in
the following distribution of code hits beginning with code 0:
137, 81, 60, 52, 47, 44, 43, 42, 42, 42, 44, 46, 50, 57, 72, 166
A binary search was performed on the first transition between codes 0 and 1 and found
the code edge to be at 14 mV. A second binary search was performed and found the code
edge between codes 14 and 15 to be 0.95 V. What was the input DC bias level relative to
VSS = 0 V used to perform this test? What is the minimum and maximum input signal level
applied to the ADC relative to VSS?
7.15. A sinusoidal histogram test was performed on a two’s complement binary 4-bit ADC,
resulting in the following distribution of code hits beginning with code −8:
251, 163, 104, 118, 80, 99, 71, 93, 70, 94, 72, 101, 82, 124, 163, 315
A binary search was performed on the first transition between codes −8 and −7 and found
the code edge to be at 20 mV. A second binary search was performed and found the code
edge between codes 6 and 7 to be 9.94 V. What was the input DC bias level relative to
VSS = 0 V used to perform this test? What is the minimum and maximum input signal level
applied to the ADC relative to VSS?
7.16. A 12-bit ADC operates with a sampling frequency of 25 MHz. If a sinusoidal histo-
gram test is to be conducted on this ADC, what should be the maximum frequency
of the input signal so that the average minimum code count is at least 10 hits? 100
hits? How many samples will be collected? How long will it take to collect these
samples?
7.17. A 10-bit 5-V ADC operates with a sampling frequency of 25 MHz. If a sinusoidal histo-
gram test is to be conducted on this ADC with a test time of no more than 15 ms, estimate
the worst-case voltage resolution for this test. How many samples need to be collected for
this test?
7.18. A linear histogram test was performed on a two’s complement 4-bit ADC, resulting in
the following distribution of code hits beginning with code −8:
20, 15, 14, 12, 11, 12, 12, 14, 14, 13, 15, 16, 16, 14, 20, 23
Determine the endpoint DNL and INL curves for this ADC. Compare these results to those
obtained with a best-fit reference line.
7.19. Determine the endpoint DNL and INL curves for the histogram data provided in Problem
7.8. Compare these results to those obtained with a best-fit reference line.
7.20. Determine the endpoint DNL and INL curves for the histogram data provided in Problem
7.9. Compare these results to those obtained with a best-fit reference line.
REFERENCES
1. M. J. Kiemele, S. R. Schmidt, and R. J. Berdine, Basic Statistics, Tools for Continuous Improvement,
4th edition, Air Academy Press, Colorado Springs, CO, pp. 9–71, 1997, ISBN 1880156067.
2. M. Mahoney, Tutorial DSP-Based Testing of Analog and Mixed-Signal Circuits, The Computer
Society of the IEEE, Washington, D.C., p. 137, 1987, ISBN 0818607858.
M any of today’s electronic devices make use of high-speed asynchronous serial links for data
communications such as USB, Firewire, PCI-Express, XAUI, SONET, SAS, and so on. Such
devices make use of a serializer-deserializer transmission scheme called SerDes. Older devices
operated on a synchronous clocking system such as the recommended standard 232 serial bus,
referred to as RS232, or the small computer system interface, known as SCSI. While such syn-
chronous buses are being used less as a means to communicate between two separate devices,
buses internal to most devices remain for the most part dependent on a synchronous clocking
scheme. The goal of this chapter is to describe the various data communication schemes used today
and how such systems are characterized and tested in production.
This chapter will begin by describing the attributes of both synchronous clock signals and
those signals transmitted asynchronously over a serial channel. In the case of synchronous clock
signals, both time- and frequency-domain descriptions of performance are described. This includes
various time-domain jitter metrics, like periodic jitter and cycle-to-cycle jitter, and frequency-
domain metrics, like phase noise. Our readers encountered phase noise for RF systems back in
Chapters 12 and 13. It is pretty much the same for clocks. For asynchronous systems, the ultimate
measure of performance is bit error rate (BER). The student will learn how to calculate the neces-
sary test time to assure a desired level of BER performance and learn about several techniques
that are used in production to reduce this test time. The latter approaches are largely based on jitter
decomposition methods, and this chapter will explore four common methods found in production
test. Unique to this chapter is the extensive application of probability theory to quantify the use of
these jitter decomposition methods. This chapter will conclude with a discussion of several DSP-
based test techniques used to quantify jitter transmission from the system input to its output. This
includes a discussion about jitter transfer function test and a jitter tolerance test.
[Figure 14.1: a synchronous data communication link involving two D-type registers driven by a shared clock.]
Figure 14.2. An asynchronous data communication link involving two D-type registers.
Because information is transmitted in sequence, the receiver must have the means to extract the individual
symbols. As symbols arrive as a continuous stream of bits, one has to be able to separate one sym-
bol from another. In asynchronous communications, each symbol is separated by the equivalent of
a tag so that one knows exactly when the symbol arrives at its destination. In synchronous com-
munications, both the sender and receiver are synchronized with a separate clock signal, thereby
providing all necessary timing information at both ends.
Figure 14.1 illustrates a typical synchronous system involving two D-type registers physi-
cally separated from one another via a transmission line. One-bit data are exchanged between the
two registers through the action of the falling or rising edge of the clock signal. The clock signal is
used to specify when a specific bit is to be transmitted and received. Due to the physical distance
between the sender and the receiver, and the fact that the clock signal travels at about one-half
light speed on a PCB (equivalent to about 6.6 ns/m or 2 ns/ft), the timing information is not the same at
all locations. This results in timing differences between the clock signal at the transmit and receive
ends, commonly referred to as clock skew. Ultimately, clock skew limits the maximum rate at
which symbols can be exchanged between the transmitter and the receiver.
Conversely, an asynchronous system involving two D-type registers does not share a
common clock signal (Figure 14.2). Rather, the timing information associated with the clock
signal is embedded in the data stream and recovered by the receiver by a clock recovery (CR)
circuit. While the system is more complicated than a synchronous system, no clock skew
occurs. This enables faster data exchange rates. For this reason alone, asynchronous data
communications is fast becoming the dominant method of exchanging information between
electronic devices.
Chapter 14 • Clock and Serial Data Communications Channel Measurements 587
While asynchronous systems avoid the problem of clock skew, both synchronous and asyn-
chronous systems suffer from the effects of circuit noise. While the source of noise is identical
to that which we studied previously, here we are interested in the effect of noise on the timing
information that is associated with a data communication link. Deviation from the reference signal
is referred to as clock jitter.
sgn(x) = { 1, x > 0;  0, x = 0;  −1, x < 0 }     (14.2)
The clock signal is assumed to be symmetrical with a zero DC offset and a 1-V signal amplitude.
Of course, any offset and amplitude can be incorporated into the model of the clock by introducing
the appropriate terms.
If the clock signal experiences a time delay as it travels from one point to another, say on a
PCB, then we can model the received clock signal by incorporating a time delay TD in Eq. (14.1)
according to
cRx(t) = sgn{cos[2π fo(t − TD)]}     (14.3)
From a system timing perspective, the time at which the rising edge of a clock signal crosses the
logic level threshold, represented here by the 0-V level, can be deduced from Eq. (14.3) by equat-
ing it to 0 and solving for the zero crossing times. This is illustrated in Figure 14.3 for both the
transmit and received clock signals. Following this logic, we can write
TZC(n) = (2π n + 2π fo TD)/(2π fo) = n/fo + TD     (14.4)
where n represents the nth clock cycle associated with the transmit clock. In the manner written
here, clock skew alters the zero crossing point by a constant amount TD. However, in practice,
TD varies from system to system and from device to device, hence we must treat TD as a random
variable. Let us assume that TD is a Gaussian random variable with mean value μTD and standard
deviation σTD; then we can write the mean value and standard deviation of the zero crossing
point as

μZC = μTD,   σZC = σTD     (14.5)
[Figure 14.3: transmit and received clock waveforms; the zero crossings of the received clock occur TD later than those of the transmit clock, at n/fo + TD.]
Figure 14.4. Illustrating the effects of noise on the received clock signal when compared against
that which was transmitted.
Since the standard deviation is nonzero, it indicates that the zero crossing times will vary and in
some way act to reduce the amount of time available for the logic circuits to react to incoming
signals, thereby increasing the probability of logic errors.
In addition to clock skew, noise is always present in any electronic circuit. While noise may
arise from many different sources, the effect of noise on a synchronous or asynchronous sys-
tem is to alter the zero crossing point. This is depicted in Figure 14.4, where a received signal is
compared to a transmitted signal minus any physical delays for ease of comparison. Here we see
that the zero crossing of the received clock varies with respect to that which was transmitted. If
we denote the time difference between the transmit and received zero crossings as J(n), then we
can gather a set of data and perform some data analysis to better understand the underlying noise
mechanism. For ease of discussion, we shall refer to J(n) as the instantaneous time jitter associ-
ated with the received signal. Conversely, we can convert this jitter term into a phase-difference
jitter term, that is,
φ[n] = 2π fo × J[n] = 2π × J[n]/T     (14.6)
In the study of jitter one finds several ways to interpret jitter behavior. The most common
approach is to construct a histogram of the instantaneous jitter and observe the graphical behavior
Figure 14.5. Typical jitter histogram and its corresponding density function.
Figure 14.6. The histogram and PDF associated with a random noise and a sinusoidal signal.
[Panels, top to bottom: Gaussian random noise, uniform random noise, sinusoidal.]
that results (see Figure 14.5). By comparing it to known shapes, one can deduce the nature of the
underlying source of jitter, at least in a qualitative manner. For example, Figure 14.6 illustrates
the histogram as well as its corresponding PDF for several known signal types such as a Gaussian
random noise signal, a uniform random noise signal, and a sinusoidal signal. It is reasonable
to assume that the distribution shown in Figure 14.5 was obtained from a noise source with a
Gaussian distribution and not one from a signal involving a uniform random noise signal or a sig-
nal with a sinusoidal component.
To obtain a more quantitative measure of jitter, one can extract statistical measures1 from the
captured data such as the mean and standard deviation. For instance, if N samples of the instantaneous
jitter are captured, then we can calculate the mean according to

μJ = (1/N) Σ(n=1 to N) J[n]     (14.7)

and the standard deviation according to

σJ = sqrt[(1/N) Σ(n=1 to N) (J[n] − μJ)²]     (14.8)
Here the mean value would refer to the symmetric timing offset associated with the data set. If the
data represent the jitter associated with a received signal, then the mean value represents the clock
skew mentioned earlier. Often we are also interested in the peak-to-peak jitter, which is computed
as follows:

JPP = max{J[n]} − min{J[n]}     (14.9)
Due to the statistical nature of a peak-to-peak estimator, it is always best to repeat the peak-to-
peak measurement and average the set of values, rather than work with any one particular value.
A peak-to-peak metric is a biased estimator and will increase in value as the number of points
collected increases. This stems from the fact that a Gaussian random variable has a theoreti-
cally infinite peak-to-peak value, although obtaining such a value would require infinite samples.
We often refer to such random variables as being unbounded, since they theoretically have
no limit.
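The growth of a peak-to-peak estimate with record length is easy to demonstrate numerically; the seed, record sizes, and trial count in this simulation are arbitrary choices, so it is only a sketch of the effect:

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_peak_to_peak(n, trials=200):
    """Average peak-to-peak value of n Gaussian samples (sigma = 1),
    averaged over repeated records as the text recommends."""
    x = rng.normal(0.0, 1.0, size=(trials, n))
    return float(np.mean(x.max(axis=1) - x.min(axis=1)))

small = mean_peak_to_peak(100)
large = mean_peak_to_peak(10_000)
print(small, large)  # the longer record yields the larger peak-to-peak value
```

This is the bias in action: the expected extremes of a Gaussian sample keep widening (slowly) as more samples are drawn.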
There are other types of analysis that can be performed directly on the time jitter sequence.
For instance, one may be interested in how the period of the clock varies as a function of time. This
removes any time offset or skew. Mathematically, this is defined as
JPER[n] = J[n] − J[n − 1]     (14.10)
where we denote JPER[n] as the period jitter for the nth clock cycle. For a large enough sample set,
we can calculate the mean and standard deviation of the period jitter. We can also define a cycle-to-cycle
time jitter metric as follows:

JCC[n] = JPER[n] − JPER[n − 1]     (14.11)
Substituting Eq.(14.10) into Eq.(14.11), we can write the cycle-to-cycle jitter in terms of the time
jitter as
JCC[n] = J[n] − 2J[n − 1] + J[n − 2]     (14.12)
Mean and standard deviation can be computed when a significant number of samples of cycle-to-
cycle jitter are captured.
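The metrics of Eqs. (14.7) through (14.12) can be sketched directly in Python; the short jitter sequence is illustrative:

```python
import numpy as np

def jitter_metrics(J):
    """Mean, standard deviation, period jitter, and cycle-to-cycle jitter
    computed from an instantaneous time-jitter sequence J[n]."""
    J = np.asarray(J, dtype=float)
    mu = J.mean()            # Eq. (14.7): average timing offset (skew)
    sigma = J.std()          # Eq. (14.8): RMS jitter
    J_per = np.diff(J)       # Eq. (14.10): J[n] - J[n-1]
    J_cc = np.diff(J_per)    # Eq. (14.12): J[n] - 2J[n-1] + J[n-2]
    return mu, sigma, J_per, J_cc

mu, sigma, J_per, J_cc = jitter_metrics([0.0, 2.0, 1.0, 3.0])
print(J_per)  # [ 2. -1.  2.]
print(J_cc)   # [-3.  3.]
```

Note that differencing the period jitter reproduces Eq. (14.12) exactly, confirming the substitution of Eq. (14.10) into Eq. (14.11).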
Sometimes jitter metrics are used that go beyond comparing adjacent edges or periods and
instead look at the difference between a multiple number of edges or periods that have passed.
Such metrics are referred to as N-period or N-cycle jitter. Let us assume that N is the number
of periods of the clock signal that separate the edges or periods, then we can write the new
metrics as
JN,PER[n] = J[n + N − 1] − J[n − 1]     (14.13)
and

JN,CC[n] = JN,PER[n] − JN,PER[n − N]     (14.14)
Phase, period, and cycle-to-cycle jitter provide information about clock behavior at a localized
point in time. N-period or N-cycle jitter metrics track the effects of jitter that accumulate over
time, at least over the N-period observation interval.
Time jitter, period jitter, and cycle-to-cycle jitter are related quantities. They are related
through the backwards difference operator, a function analogous to differentiation for continuous
functions. Assuming that time jitter has a uniform frequency distribution, period jitter will have
a high-pass nature that increases at a rate of 20 dB/decade across the frequency spectrum. Cycle-to-cycle
jitter will also have a high-pass behavior but will increase at a much faster rate of 40 dB/decade.
In both cases, low-frequency jitter components will be greatly attenuated and will not have
much influence on the jitter metric. This may lead to incorrect decisions about jitter, so one must
be careful in the use of these jitter metrics.
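The 20- and 40-dB/decade slopes follow from the frequency responses of the first- and second-difference operations, which can be checked numerically (clock period normalized to T = 1):

```python
import numpy as np

T = 1.0  # normalized clock period

def mag_per(f):
    """|H(f)| of the first difference (period jitter), Eq. (14.10)."""
    return np.abs(1 - np.exp(-2j * np.pi * f * T))

def mag_cc(f):
    """|H(f)| of the second difference (cycle-to-cycle jitter), Eq. (14.12)."""
    return mag_per(f) ** 2

# At low frequencies, a decade step in frequency raises the response by
# ~20 dB for period jitter and ~40 dB for cycle-to-cycle jitter.
f1, f2 = 1e-4, 1e-3
slope_per = 20 * np.log10(mag_per(f2) / mag_per(f1))
slope_cc = 20 * np.log10(mag_cc(f2) / mag_cc(f1))
print(slope_per, slope_cc)  # ~20, ~40
```

At low frequencies |1 − e^(−j2πfT)| ≈ 2πfT, which is the source of the 20-dB/decade high-pass slope; squaring the response doubles it.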
Metrics that provide insight into the low-frequency variation of jitter involve a summation or
integration process as opposed to a differencing or differentiation operation. One such metric is
known as accumulated jitter or long-term jitter. Accumulated jitter involves collecting the statistics
of time jitter as a function of the number of cycles that have passed from the reference point. In
order to identify the reference point, as well as the time instant at which the time jitter is captured, we write
J[n,k], where n is the sampling instant and k is the number of clock delays that have passed from the
reference point. Figure 14.7a illustrates the time jitter captured as a function of clock period delay,
k. For each delay, we compute the statistics of the jitter, that is,
μJ[k] = (1/N) Σ(n=0 to N) J[n, k]

σJ[k] = sqrt[(1/N) Σ(n=0 to N) (J[n, k] − μJ[k])²],   for k = 0, …, N/2     (14.15)
Accumulated jitter refers to the behavior of the standard deviation σJ[k] as a function of the delay
index, k, as illustrated in Figure 14.7b. This particular jitter accumulation plot would be typical
of a PLL. It is useful for identifying low-frequency jitter trends associated with the clock signal.
The concept of accumulated jitter is related to the autocorrelation function of random signals. As
a word of caution, the length of each jitter sequence must be the same for each delay setting to
ensure equal levels of uncertainty. Moreover, the length of each sequence must be no longer than
the fastest time constant associated with the jitter sequence. For instance, if the jitter sequence has
a noise bandwidth of 1 MHz, then the time over which a set of points are collected should be no
longer than 1 μs.
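A sketch of the accumulated-jitter computation of Eq. (14.15), applied to an illustrative random-walk jitter model (a free-running oscillator rather than a PLL, so sigma grows steadily with k); the array shapes and seed are our own assumptions:

```python
import numpy as np

def accumulated_jitter(J):
    """sigma_J[k] per Eq. (14.15): J is an (N, K) array with J[n, k] the
    time jitter at sampling instant n, k clock delays from the reference.
    Every column has the same length N, for equal uncertainty."""
    J = np.asarray(J, dtype=float)
    mu_k = J.mean(axis=0)
    return np.sqrt(((J - mu_k) ** 2).mean(axis=0))

rng = np.random.default_rng(1)
steps = rng.normal(0.0, 1.0, size=(2000, 50))
J = np.cumsum(steps, axis=1)        # jitter accumulates over k delays
sigma = accumulated_jitter(J)
print(sigma[0], sigma[-1])          # sigma rises with delay index k
```

For a PLL the curve would instead flatten once the loop bandwidth begins tracking out the accumulated phase error.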
Figure 14.7. Accumulated jitter: (a) illustrating the sampling instant as function of the number of clock
period delays from the reference point. (b) Jitter standard deviation as a function of delay index, k.
Figure 14.8. (a) PSD of an ideal clock signal. (b) PSD of a jittered clock signal.
[Both panels show impulses of weight 8/π², 8/(3²π²), 8/(5²π²), and 8/(7²π²) at the odd harmonics fo, 3fo, 5fo, and 7fo; the jittered clock in (b) adds noise sidebands around each impulse.]
The PSD of an ideal clock signal Sv(f), expressed in units of V² per Hz, is illustrated in Figure
14.8a. Here the clock signal is decomposed into a set of impulses of monotonically decreasing size
located at odd multiples of the clock frequency fo. The impulses indicate that each harmonic
contributes a constant level of power to the spectrum of the clock signal. When random jitter is present
with the clock signal, it modifies the spectrum of the clock signal by adding sidebands about each
harmonic as shown in Figure 14.8b. This spectrum is the result of noise in the clock circuitry
phase modulating the output signal. We learned in Section 12.3.3 that the resulting PSD follows
a Lorentzian distribution.
The instantaneous phase difference φ between the first harmonic of the clock signal located
at fo and a signal occupying a 1-Hz bandwidth offset from this harmonic by some frequency Δf has
a PSD described by
Sφ(Δf) = 2 × Sv(fo + Δf)/Sv(fo)     rad²/Hz     (14.16)
As we learned in Section 12.3.3, Sφ(Δf) is related to the IEEE 1139 standard definition for phase
noise £(Δf) according to

£(Δf) dBc/Hz = 10 log10[Sφ(Δf)/2] = 10 log10[Sv(fo + Δf)/Sv(fo)]     (14.17)
The term dBc is a shorthand notation for “decibels with respect to carrier.” In the context of clock
signals, the carrier is the fundamental harmonic of the clock.
In many measurement situations involving a spectrum analyzer, the PSD displayed on the
screen of the instrument is not in terms of V2 per Hz but rather V2 per resolution bandwidth in
Hz. The resolution bandwidth (BW) represents the equivalent noise bandwidth of the front-end
filter of the spectrum analyzer. To get the correct PSD level, one must scale the instrument PSD
EXAMPLE 14.1
Below is a PSD plot of the voltage level associated with a clock generator captured by a spec-
trum analyzer with a center frequency of 1.91 GHz and a resolution bandwidth of 200 kHz. Each
frequency division represents a 1-MHz span. What is the phase noise £(f) at a 1-MHz offset from
the 1.91-GHz fundamental tone of this clock signal?
Solution:
As the carrier at 1.91 GHz has a signal level of 3.9 dBm and the PSD at a 1-MHz offset from the
carrier is −68 dBm, the phase noise metric [Eq. (14.19)] is computed as follows:

£(1 MHz) = −68 dBm − 3.9 dBm − 10 log10(200 × 10³) dB = −124.9 dBc/Hz
Hence, the phase noise at a 1-MHz frequency offset is –124.9 dBc/Hz.
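The arithmetic of Example 14.1 can be verified with a short helper; the function name and argument order are our own convention:

```python
import math

def phase_noise_dbc_per_hz(p_ssb_dbm, p_carrier_dbm, rbw_hz):
    """Phase noise in dBc/Hz from spectrum-analyzer readings: single-
    sideband and carrier powers in dBm, resolution bandwidth in Hz."""
    return p_ssb_dbm - p_carrier_dbm - 10 * math.log10(rbw_hz)

# Example 14.1: -68-dBm sideband, 3.9-dBm carrier, 200-kHz RBW
L = phase_noise_dbc_per_hz(-68.0, 3.9, 200e3)
print(round(L, 1))  # -124.9
```

The 10·log10(RBW) term is the resolution-bandwidth correction discussed in the text.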
by dividing by the factor BW. In terms of the phase noise metric, £(Δf), the expression would
become
£(Δf) dBc/Hz = 10 log10[(1/BW) × Sv(fo + Δf)/Sv(fo)]     (14.18)
More often than not, the data read off a spectrum analyzer are in terms of dBm; hence we convert
Eq. (14.18) into the following equivalent form

£(Δf) dBc/Hz = PSSB dBm − Pcarrier dBm     (14.19)

where we define Pcarrier dBm = 10 log10[Sv(fo)] and PSSB dBm = 10 log10[Sv(fo + Δf)] −
10 log10(BW), both in units of dBm.
In the study of jitter, one often comes across different measures of the underlying noise
mechanisms associated with a clock signal. In the previous section, measures such as a time-jitter
sequence J[n] or a period jitter sequence JPER[n] were introduced. In all cases, these sampled quan-
tities are related to one another through a linear operation. For instance, from Eq. (14.6) the phase-
difference sequence is related to the time-jitter sequence through the proportional constant 2π/T,
where T is the period of the clock signal. Correspondingly, the PSD of the time-jitter sequence can
be expressed in terms of the PSD of the phase jitter according to
SJ(Δf) s²/Hz = (T/2π)² × Sφ(Δf)     (14.20)
Normalizing by the period T, we can express the jitter PSD in terms of the unit interval (denoted
as UI) and write Eq. (14.20) as
SJ(Δf) UI²/Hz = (1/2π)² × Sφ(Δf)     (14.21)
We can also relate the phase noise £(Δf) to the PSD of the jitter sequence expressed in s2/Hz or in
UI2/Hz by substituting Eq. (14.20) or Eq.(14.21) into Eq.(14.16) above.
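Eqs. (14.20) and (14.21) amount to simple scalings, sketched below with an illustrative phase PSD for a 10-MHz clock (the PSD shape is an assumption for demonstration only):

```python
import numpy as np

def jitter_psd_from_phase_psd(S_phi, T):
    """Scale a phase PSD in rad^2/Hz to a time-jitter PSD in s^2/Hz
    (Eq. (14.20)) and to a unit-interval PSD in UI^2/Hz (Eq. (14.21))."""
    S_phi = np.asarray(S_phi, dtype=float)
    S_j_s = (T / (2 * np.pi)) ** 2 * S_phi     # Eq. (14.20)
    S_j_ui = (1.0 / (2 * np.pi)) ** 2 * S_phi  # Eq. (14.21)
    return S_j_s, S_j_ui

T = 1.0 / 10e6                       # 10-MHz clock period
f = np.array([0.0, 1e3, 1e5])
S_phi = 1e-8 / (1e4 + f ** 2)        # illustrative phase PSD, rad^2/Hz
S_j_s, S_j_ui = jitter_psd_from_phase_psd(S_phi, T)
print(S_j_ui[0])                     # UI^2/Hz at f = 0
```

Dividing the s²/Hz result by T² recovers the UI²/Hz result, since one unit interval equals one clock period.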
To obtain the PSD associated with the phase signal, we can perform an FFT analysis of the
samples of the instantaneous phase jitter signal. Following the procedure outlined in Section 9.3.3,
we first calculate the RMS value of the spectral coefficients of the DTFS representation of the
phase error signal, that is,
ck,RMS = { |Φ[k]|, k = 0;  √2 |Φ[k]|, k = 1, …, N/2 − 1;  |Φ[k]|/√2, k = N/2 }     (14.22)
where the N-point variable Φ is obtained from the N-point FFT analysis of the phase-jitter
sequence φ expressed in radians as follows:

Φ = FFT{φ[n]}/N     (14.23)
As a phase signal may consist of both periodic signals (e.g., spurs) and noise, we must handle the
two types of signals differently. Specifically, Sφ[k] of the random noise component expressed in
rad2-per-Hz is given by
Sφ[k] = c²k,RMS × N/Fs     rad²/Hz     (14.24)
whereas the power associated with any periodic component in the spectrum of the phase signal,
expressed in rad², is given by

Sφ[k] = c²k,RMS     rad²,   k = ktone     (14.25)
Here k = ktone represents the bin location of the tone. As these tonal components are not known a
priori, the user must decide if a periodic component is present and its corresponding bin location.
If the number of samples is increased, the PSD level of the random component will remain the
same; however, a periodic component will increase in amplitude, thereby revealing its periodic
nature. To summarize, the Sφ[k] of the phase signal can be described as
Sφ[k] = { c²k,RMS × N/Fs rad²/Hz,  k = 0, …, N/2 and k ≠ ktone;   c²k,RMS rad²,  k = ktone }     (14.26)
Because the phase noise PSD sequence will change with different sample sets, it is custom-
ary to average K-sets of the PSD on a frequency-by-frequency basis and obtain an average PSD
behavior defined as follows
Ŝφ[k] = (1/K) Σ(i=1 to K) Sφ,i[k]     (14.27)
where the subscript i indicates the ith PSD obtained from Eq. (14.26). These short-term PSDs are
commonly referred to as periodograms.
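The chain of Eqs. (14.22) through (14.24) and the averaging of Eq. (14.27) can be sketched as below; this sketch applies no window and does not treat spur bins separately (Eqs. (14.25)–(14.26)):

```python
import numpy as np

def phase_psd(phi, fs):
    """Single-record phase PSD in rad^2/Hz: DTFS coefficients
    (Eq. (14.23)), RMS scaling (Eq. (14.22)), PSD normalization
    (Eq. (14.24)). Spur bins are left unhandled in this sketch."""
    N = len(phi)
    Phi = np.fft.fft(phi) / N                 # Eq. (14.23)
    c = np.abs(Phi[: N // 2 + 1])
    c[1 : N // 2] *= np.sqrt(2.0)             # Eq. (14.22)
    return c ** 2 * N / fs                    # Eq. (14.24)

def averaged_phase_psd(records, fs):
    """Eq. (14.27): average K single-record PSDs (periodograms)."""
    return np.mean([phase_psd(r, fs) for r in records], axis=0)

# Illustrative check: a pure phase tone on bin 24 of a 512-point record
n = np.arange(512)
phi = 0.01 * np.cos(2 * np.pi * 24 * n / 512)
S = phase_psd(phi, 10e6)
print(S[24])
```

In practice each record would come from a different capture, so the averaging genuinely smooths the random-noise bins while leaving tone bins intact.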
EXAMPLE 14.2
The following plot of a 512-point phase-difference sequence was obtained from a heterodyning
process for capturing the phase errors associated with a clock signal:
[Plot: the captured 512-point phase-difference sequence φ[n].]
The samples were obtained with a digitizer operating at a 10 MHz sampling rate. The continuous-
time phase signal passes through a 1-MHz low-pass filter prior to digitization. What is the phase
noise £(Δf) at a frequency offset of about 40 kHz for this particular sample set?
Solution:
Because the sampling rate is 10 MHz and 512 points have been collected, the frequency resolu-
tion of the PSD will be
Fs/N = (10 × 10⁶)/512 = 19,531.25 Hz
Hence the 40-kHz offset falls in the second bin of the PSD of the phase error signal, at about 39.0625
kHz. The PSD of the phase signal is found by sequencing through Eqs. (14.22) to (14.26), resulting
in the following PSD plot of Sφ[k]:

[PSD plot of Sφ[k], showing a noise floor with spurious tones in bins 24 and 103.]
Here we see that spurious tones are present in bins 24 and 103 corresponding to frequencies of
468.75 kHz and 2.0117 MHz. In order to remove the statistical variation in the PSD, we repeated
the sampling process an additional 10 times and then averaged the PSDs. The resulting smoothed
PSD value in the second bin is

Ŝφ[2] = 2.66 × 10⁻¹⁰ rad²/Hz
Subsequently, we compute the phase noise metric at a 39.0625-kHz offset from the carrier as
follows:

£(39.0625 kHz) = 10 log10(Ŝφ[2]/2) = 10 log10(1.33 × 10⁻¹⁰) ≈ −98.8 dBc/Hz
Exercises

14.5. The PSD of the instantaneous phase of a 10-MHz clock signal is described by
Sφ(f) = 10⁻⁸/(10⁴ + f²) in units of rad²/Hz. What is the PSD of the corresponding jitter sequence
in UI²/Hz?

ANS. SJ(f) = (1/2π)² × 10⁻⁸/(10⁴ + f²) UI²/Hz
Figure 14.9. Asynchronous communications through an ideal channel with ideal signaling.
Figure 14.11. Illustrating the creation of an eye diagram from the received signal: (a) normalized
eye diagram (b) denormalized eye diagram.
Figure 14.12. Illustrating the decision level thresholds associated with an eye diagram.
One threshold, VTH, is used to set the logic decision level, and the other threshold, tTH, is used to set the bit sample time (between 0 and T). To maximize the likelihood of making the correct decision, the decision point lies in the middle of the range of possible values, as shown in the eye diagram of Figure 14.12.
At this point in our discussion it is interesting to make some general statements about the events taking place at the receiver. From the eye diagram we see that the received signal crosses the logic threshold VTH at two points: t = 0 and t = T. If we assume that a logic 0 or 1 is equally likely, then a 0–1 transition or a 1–0 transition is also equally likely, each occurring with 50% probability. Hence we can model the PDF of the time at which the signal crosses the logic threshold (also referred to as the zero-crossing) with two equal-sized delta functions at t = 0 and t = T, each having a magnitude of 50%, as shown in Figure 14.13a. Likewise, we can model the PDF of the voltage level that appears at the receiver in a similar manner, as shown in Figure 14.13b. These two plots reflect the fact that perfect square waves appear at the receiver. In a real serial communication system, such perfection is a very long way off.
Figure 14.13. A PDF interpretation of the received signal: (a) as measured along the time decision axis defined by t = tTH; (b) as measured along the voltage decision axis defined by V = VTH.
Figure 14.14. Real channel effects: (a) a pulse experiencing an additive noise effect; (b) a pulse experiencing a spreading effect due to the dispersive nature of the channel.
the transmitter, and the signal experiences a spreading effect due to the dispersion of the channel (e.g., high-frequency attenuation effects, as well as a frequency-dependent delay). We model the additive noise with a summing block placed in series with the channel, as illustrated in Figure 14.14a. A simple model that captures the basic idea of dispersion is shown in Figure 14.14b. Here the channel is modeled as a simple RC circuit. A square pulse of duration T applied to one end of the RC line emerges from the other end as an exponential-type pulse with a duration that extends beyond time T. It is this pulse-spreading effect that can lead to one bit interfering with the next. When this occurs, it is referred to as intersymbol interference (ISI).
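The pulse spreading of the RC channel model can be illustrated with a short time-domain simulation. In the sketch below, the bit period and the RC = 0.5T time constant are illustrative assumptions; a forward-Euler step integrates dy/dt = (x − y)/RC for a single unit pulse of width T:

```python
# Forward-Euler simulation of a unit pulse of width T driving a first-order
# RC channel model (illustrative values; RC = 0.5*T is an assumption).
T = 1.0e-9            # bit period: 1 Gb/s
RC = 0.5 * T          # channel time constant
dt = T / 1000.0
steps = 3000          # simulate three bit periods
y = 0.0
samples = []
for k in range(steps):
    t = k * dt
    x = 1.0 if t < T else 0.0          # square pulse of duration T
    y += dt * (x - y) / RC             # dy/dt = (x - y)/RC
    samples.append(y)

y_end_of_bit = samples[999]            # response just as the pulse ends
y_half_bit_later = samples[1499]       # tail spilling into the next bit slot
```

The response reaches only 1 − e^−2 ≈ 0.86 of full amplitude by the end of the bit, and a substantial tail (about 0.32 of full amplitude half a bit later) leaks into the following bit slot, which is precisely the ISI mechanism described above.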
Noisy Channel
Let us begin our investigation of real channels by modeling the effects of additive noise on the received signal. We shall assume that the noise is Gaussian distributed. A pulse train passing through such a channel would experience additive noise and may appear as shown in Figure 14.15a. The eye diagram corresponding to this pulse train is shown in Figure 14.15b. Also drawn alongside the eye diagram are the corresponding PDFs associated with each decision axis, that is, V = VTH and t/T = UITH. In contrast to the PDFs along the decision axes for the ideal channel shown in Figure 14.13, here we see that additive noise modifies the PDF of the voltage
Figure 14.15. (a) Pulse train subject to additive channel noise. (b) Corresponding eye diagram and
its PDFs along the decision-making axes (V = VTH and t/T = tTH).
that appears at the receiver at the sampling instant UITH, but has no effect on the PDF corresponding to the zero-crossing times. Moreover, the PDF of the noisy voltage level appears as the convolution of the delta functions with a Gaussian distribution having zero mean and some arbitrary standard deviation, σN.
Dispersive Channel
In this subsection we shall consider the effects of a dispersive channel on the behavior of a trans-
mitted square pulse train. The eye diagram corresponding to a pulse train subject to channel dis-
persive effects is illustrated in Figure 14.16. Here we see that the channel impairments alter the
distributions of the received signal along both decision axes. We model these impairments as
discrete lines in the PDF rather than as a continuous PDF because these effects are deterministic
and dependent on the data pattern transmitted as well as the channel characteristics. This effect
is commonly referred to as data-dependent jitter (DDJ). Due to the physical nature of a channel,
the distribution along each decision axis is bounded by some peak-to-peak value. Unless the eye opening along the t = UITH axis is closed, DDJ will have little effect on the bit error rate. However, when the effects of channel noise are incorporated alongside the dispersive effects, we would find an increased bit error rate (over and above the bit error rate that would correspond to the additive noise effect alone). The reason for this stems from the way the different channel effects combine. From a probability point of view, assuming that the PDFs for the different channel impairments are independent, they combine through a convolution operation as shown in Figure 14.18. The net effect is a wider PDF extending over a larger range on both decision axes, that is, a smaller eye opening. Also, we note that the PDFs no longer follow a Gaussian distribution.
Figure 14.16. (a) Pulse train subject to channel dispersion or ISI. (b) Corresponding eye diagram and
its PDFs along the decision-making axes (V = VTH and t/T = tTH).
Figure 14.17. (a) Pulse train subject to channel dispersion and additive Gaussian noise. (b)
Corresponding eye diagram and its PDFs along the decision-making axes (V = VTH and t = tTH).
Figure 14.18. Combining the PDFs of independent channel impairments through a convolution operation.
EXAMPLE 14.3
Determine the convolution of two impulse functions centered at VSS and VDD with a Gaussian function with zero mean and standard deviation σN.
Solution:
Mathematically, the two functions are written as follows:

f(v) = (1/2)δ(v − VSS) + (1/2)δ(v − VDD)

g(v) = [1/(σN√(2π))] e^(−v²/(2σN²))

The convolution of the two is defined by

f(v) ⊗ g(v) = ∫_{−∞}^{∞} f(v − τ) g(τ) dτ

where τ is an intermediate variable used for integration. Substituting the above two functions, we write

f(v) ⊗ g(v) = ∫_{−∞}^{∞} [(1/2)δ(v − VSS − τ) + (1/2)δ(v − VDD − τ)] × [1/(σN√(2π))] e^(−τ²/(2σN²)) dτ

Next, using the sifting property of the impulse function, we write the above integral as

f(v) ⊗ g(v) = [1/(2σN√(2π))] e^(−(v − VSS)²/(2σN²)) + [1/(2σN√(2π))] e^(−(v − VDD)²/(2σN²))

Hence, the convolution of two impulse functions centered at VSS and VDD with a Gaussian distribution of zero mean and standard deviation σN is a pair of Gaussian distributions with means VSS and VDD, each having standard deviation σN and scaled by one-half.
Figure 14.19. Illustrating the effects on the zero-crossing levels: (a) duty-cycle distortion and
(b) periodic-induced jitter, without any noise present.
In many practical situations we encounter problems that involve the convolution of two Gaussian distributions. One can easily show that the convolution of two Gaussian distributions is another Gaussian distribution whose mean value is the sum of the individual mean values and whose variance is the sum of the individual variances, that is,

μT = μ1 + μ2
σT² = σ1² + σ2²    (14.28)

where the two Gaussian distributions are described by parameters N(μ1, σ1²) and N(μ2, σ2²).
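Equation (14.28) can be confirmed numerically by convolving two sampled Gaussian PDFs; the means and standard deviations below are illustrative assumptions, not values from the text:

```python
import numpy as np

# Numerically convolve two Gaussian PDFs and confirm that the means and
# variances add, per Eq. (14.28). Parameter values are illustrative.
dx = 0.01
x = np.arange(-8.0, 8.0, dx)

def gaussian(x, mu, sigma):
    return np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

g1 = gaussian(x, 0.5, 0.3)     # N(mu1 = 0.5, sigma1 = 0.3)
g2 = gaussian(x, -1.0, 0.4)    # N(mu2 = -1.0, sigma2 = 0.4)

g12 = np.convolve(g1, g2, mode="full") * dx      # discrete convolution
x12 = 2 * x[0] + dx * np.arange(len(g12))        # grid of the result

mean = np.sum(x12 * g12) * dx                    # expect mu1 + mu2 = -0.5
var = np.sum((x12 - mean) ** 2 * g12) * dx       # expect 0.09 + 0.16 = 0.25
```

The recovered mean is μ1 + μ2 and the recovered variance is σ1² + σ2², matching Eq. (14.28) to within the discretization error of the grid.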
Transmitter Limitations
The transmitter also introduces signal impairments that show up at the receiver, specifically, duty-cycle distortion (DCD) and periodic induced jitter (PJ). Duty-cycle distortion is caused by the asymmetrical rise and fall times of the driver located in the transmitter, as represented by the eye diagram shown in Figure 14.19a. As can be seen from this figure, a logic 1 bit value has a shorter bit duration than a logic 0 bit value relative to the zero-crossing point. In essence, DCD can be considered as a shift in the time of the rising and falling edges of the data bit. DCD is a deterministic effect because it depends only on the driver characteristics together with the logic pattern that is transmitted. Fortunately, its effect on the zero-crossing levels is small and bounded.
Sometimes one finds a periodic component of jitter showing up at the receiver end. This is typically caused by some periodic interference located at the transmitter or picked up via the channel. Such effects include crosstalk from adjacent power nets and noise from switching power supplies. Its effect on the zero-crossing levels is bounded. Figure 14.19b illustrates a single sine-wave component and its effect on the zero-crossing PDF. Through the application of the Fourier series, more complicated periodic components can also be described.
Jitter Classifications
Figure 14.20 provides a quick summary of the breakdown of the various jitter components described in the previous subsections. At the top of the tree we list the total jitter (TJ), which is broken down into two parts: deterministic jitter (DJ) and random jitter (RJ). The deterministic jitter component is further broken down into periodic jitter (PJ), data-dependent jitter (DDJ), and a catch-all category called bounded and uncorrelated jitter, denoted BUJ. We further divide DDJ into intersymbol interference (ISI) and duty-cycle distortion (DCD) components. The BUJ component includes tones that are unrelated to the input data sequence, as well as uncorrelated noise-like signals. In essence, it is a catch-all quantity that accounts for unexplained effects. Random jitter is divided into single- or multi-Gaussian distributions, denoted GRJ or MGRJ, respectively.
The entire jitter classification is further divided according to whether the jitter is bounded or unbounded (that is, varying with the sample set). As a general rule, bounded parameters are specified with a min–max value or a peak-to-peak value, whereas unbounded parameters have no limit and so are instead specified with an RMS or standard deviation parameter.
Figure 14.22. Modeling the PDF of the received voltage signal and identifying the regions of the PDF that contribute to the probability of error.
standard deviation σN. Superimposing these two distributions along the same axis we obtain the
decision diagram shown in Figure 14.22. The dotted vertical line indicates the voltage threshold
VTH. The portion of the distribution centered around VLogic0 but above VTH represents the probability
that logic 1 is detected when logic 0 was sent. Mathematically, we write
P(VRx > VTH | Tx = 0) = ∫_{VTH}^{∞} pdfRx,Logic0 dvRx = 1 − Φ((VTH − VLogic0)/σN)    (14.29)

Conversely, the portion of the distribution centered around VLogic1 but below VTH represents the probability that logic 0 is detected when logic 1 was sent. This allows us to write

P(VRx < VTH | Tx = 1) = ∫_{−∞}^{VTH} pdfRx,Logic1 dvRx = Φ((VTH − VLogic1)/σN)    (14.30)

Assuming the probability of transmitting a logic 0 and a logic 1 is 1/2 each, we can write the total probability that a single bit is received in error as

Pe(VTH) = (1/2) × P(VRx > VTH | Tx = 0) + (1/2) × P(VRx < VTH | Tx = 1)    (14.31)
The following example will help illustrate the application of this formula.
EXAMPLE 14.4
A logic 0 is transmitted at a nominal level of 0 V and a logic 1 is transmitted at a nominal level of 2.0 V. Each logic level has equal probability of being transmitted. If a 150-mV RMS Gaussian noise signal is assumed to be present at the receiver, what is the probability of making a single-trial bit error if the detector threshold is set at 1.0 V?
Solution:
According to Eq. (14.32), together with the numbers described above, we write

Pe(1.0) = 1/2 − (1/2)Φ((1.0 − 0)/0.15) + (1/2)Φ((1.0 − 2.0)/0.15) = 1 − Φ(6.67) = 1.31 × 10^−11

Therefore the probability of a single bit error is 1.31 × 10^−11. Alternatively, if on the order of 10^11 bits are sent in one second, then one can expect about one error to be made during this transmission.
EXAMPLE 14.5
For the system conditions described in Example 14.4, compute the probability of error as a function of the threshold voltage VTH, beginning at 0 V and extending to 2 V.
Solution:
Using Eq. (14.32), together with the data from Example 14.4, we can write the probability of error as a function of the threshold voltage VTH as follows:

Pe(VTH) = 1/2 − (1/2)Φ((VTH − 0)/0.15) + (1/2)Φ((VTH − 2.0)/0.15)

Numerically, we iterate VTH through this formula beginning at 0 V and ending at 2 V with a 10-mV step, resulting in the Pe plot shown below. Here we see that the Pe plot reaches its minimum at about the 1-V threshold level, a level midway between the logic 0 and logic 1 levels. Also, we see that Pe is symmetrical about this same threshold level. The plot is commonly referred to as a bathtub plot on account of its typical shape. (Albeit, this particular plot looks more like a valley associated with a mountain range than a bathtub; this is simply a function of the numerical values used for this example.)
Up to this point in the discussion, we have assumed that the distribution of the received signal is modeled as a Gaussian random variable. We learned in the previous section that this is rarely the case, on account of the channel dispersion effects as well as circuit asymmetries associated with the transmitter. From a mathematical perspective, this does not present any additional complication to quantifying the probability of error, provided that the PDFs of the received signal are captured in some numerical or mathematical form described in the general way by the following expression:

Pe(VTH) = (1/2) × ∫_{VTH}^{∞} pdfRx,Logic0 dvRx + (1/2) × ∫_{−∞}^{VTH} pdfRx,Logic1 dvRx    (14.33)
Pe(tTH) = P(bit transition) × P(t > tTH | bit = n − 1) + P(bit transition) × P(t < tTH | bit = n)    (14.34)
Figure 14.23. Modeling the PDF of the received zero-crossing time and identifying the regions of the PDF that contribute to the probability of error.
Since we assumed that a 0 and a 1 are equally likely to occur, it is reasonable to assume that a bit transition is also equally likely; hence we write the probability of error as

Pe(tTH) = (1/2) × ∫_{tTH}^{∞} pdfTJ,n−1 dt + (1/2) × ∫_{−∞}^{tTH} pdfTJ,n dt    (14.35)

where the subscript TJ,n signifies the PDF of the total jitter around the nth bit transition. Because the PDFs are Gaussian in nature, we can replace the integral expressions by normalized Gaussian CDF functions and write

Pe(tTH) = 1/2 − (1/2)Φ((tTH − 0)/σZC) + (1/2)Φ((tTH − T)/σZC)    (14.36)
Interestingly enough, the above formula has a form very similar to that for the received voltage signal probability of error shown in Eq. (14.32). Let us look at an example using this approach.
EXAMPLE 14.6
Data are transmitted to a receiver at a data rate of 1 Gbits/s through a channel that causes the
zero crossings to vary according to a Gaussian distribution with zero mean and a 70-ps standard
deviation. What is the probability of error of a single event if the sampling instant is set midway
between bit transitions?
Solution:
As the data rate is 1 Gbit/s, the spacing between bit transitions is 1 ns. Hence the sampling instant will be set at 500 ps, or 0.5 UI. From Eq. (14.36), the Pe is

Pe(tTH) = 1/2 − (1/2)Φ((tTH − 0)/σZC) + (1/2)Φ((tTH − T)/σZC)

Pe(500 ps) = 1/2 − (1/2)Φ((500 ps − 0)/70 ps) + (1/2)Φ((500 ps − 1000 ps)/70 ps) = 1 − Φ(7.14) ≈ 4.6 × 10^−13
Exercises
Data are transmitted to a receiver at a rate of 1 Gbit/s through a channel that causes the zero crossings to vary according to a Gaussian distribution with zero mean and a 50-ps standard deviation. What is the probability of error of a single event if the sampling instant is set at 0.3 UI? Repeat for a sampling instant of 0.75 UI.
ANS. Pe = 4.93 × 10^−10; Pe = 1.43 × 10^−7.
Regardless of whether or not we obtain complete independence, this analysis helps to illustrate the dependence of the decision levels on the eye opening. Consider the total probability of error expressed as the bit error rate (BER),

BER = μNE/NT    (14.39)

According to our earlier development, if NT bits were transmitted to a receiver, then according to our probability model we would expect the average number of bit errors to be given by

μNE = NT Pe    (14.40)

which leads to the simple observation that BER is equivalent to Pe, that is,

BER = Pe    (14.41)
Figure 14.24. The test setup for measuring the bit error rate: the received data (Rx) are compared bit by bit against the ideal transmitted data (Tx), and the number of errors NE observed over NT transmitted bits is counted.
The test setup for measuring a BER is shown in Figure 14.24. It consists of a block that compares the logic states of the data appearing at its two input ports: Tx and Rx. One input contains the ideal data, or the transmitted data, and the other input contains the received data. The compare block captures data over some specified time interval, or over some total count NT, from which the number of transmission errors NE is identified. While one may be tempted to compute the BER using the measured NE, this would be incorrect, as BER is defined in terms of the average NE, denoted in Eq. (14.39) by μNE. Subsequently, a series of identical measurements must be made in order to extract the average value. However, we learned from Chapter 5, specifically Section 5.3, that extracting the average value from a finite-sized population of random variables will always have some level of uncertainty associated with it. If we were to assume that the error count is Gaussian distributed with parameters N(μNE, σNE²), then one could bound the variation in the BER value with a 99.7% confidence level as

(μNE − 3σNE)/NT ≤ BER ≤ (μNE + 3σNE)/NT    (14.42)
Clearly, the higher the number of bits transmitted (NT), the smaller the expected range in possible measured BER values. While this conclusion is true in general, transmission bit errors do not obey Gaussian statistics; rather, they follow more closely a binomial distribution. We saw this distribution back in Chapter 4, Section 4.3. There, the binomial distribution is the discrete probability distribution of the number of successes in a sequence of N independent yes/no experiments, each of which yields a success with probability p. Following this train of thought, if NT bits are transmitted, then the probability of having NE errors in the received bit set, assuming the errors are independent with probability of error Pe, can be written as

P[X = NE] = NT!/(NE!(NT − NE)!) × Pe^NE (1 − Pe)^(NT−NE),  NE = 0, …, NT  (and 0 otherwise)    (14.43)

The binomial distribution has the following mean and standard deviation:

μ = NT Pe
σ = √(NT Pe (1 − Pe))    (14.44)
The corresponding cumulative probability can be further simplified when NT·BER < 10 using the Poisson approximation as

P[X ≤ NE] ≈ Σ_{k=0}^{NE} (1/k!)(NT·BER)^k e^(−NT·BER)    (14.46)
The transmission test problem is one where we would like to verify that the system meets a certain BER level while at the same time we remain confident that the test results are repeatable to some statistical level of certainty.4 Mathematically, we can state this as a conditional probability whereby we assume that the single-bit error probability Pe is equal to the desired BER, and we further set the probability of NE bit errors to a confidence parameter α as follows:

P[X ≤ NE | Pe = BER] = α    (14.47)

The meaning of what BER refers to should now be clear; every bit received has a probability of being in error equal to the BER. The parameter α represents the probability of receiving NE errors or fewer. For a very large α, receiving NE or fewer errors is very likely. However, a small α signifies the reverse, a very unlikely situation. Hence, focusing on the unlikely situation provides us with greater confidence that the assigned conditions will be met. It is customary to refer to (1 − α) as the confidence level (CL), expressed in percent, that is,

CL = 1 − α    (14.48)
Combining Eq. (14.46) with Eq. (14.47), together with the appropriate parameter substitutions, we write

P[X ≤ NE | Pe = BER] = Σ_{k=0}^{NE} (1/k!)(NT·BER)^k e^(−NT·BER) = 1 − CL    (14.49)
To help the reader visualize the relationship provided by Eq. (14.49), Figure 14.25 provides a contour plot of the confidence levels corresponding to the probability of 2 errors or fewer as a function of BER and NT. This plot provides important insight into the tradeoffs among CL, BER, and bit length NT. For example, if 10^12 bits are transmitted and 2 or fewer data errors are received, then with a 95% confidence level we can conclude that BER ≤ 10^−11. Likewise, if 10^11 bits are received with 2 or fewer data errors, we can conclude with very little confidence (5%) that BER ≤ 10^−11.
Figure 14.25. The confidence level for a probability of 2 errors or less as a function of BER and bit
length, NT.
Equation (14.49) provides us with the opportunity to compute the number of bits NT that need to be transmitted such that the desired level of BER is reached with a desired level of confidence CL. As NT represents the minimum bit length that must pass with NE errors or fewer, we shall designate this bit length as NT,min. Next, we rearrange Eq. (14.49) and solve for NT,min as

NT,min = (1/BER) ln[Σ_{k=0}^{NE} (1/k!)(NT,min·BER)^k] − (1/BER) ln(1 − CL)    (14.50)
Using numerical methods, NT,min can be solved for specific values of BER and CL. To achieve a high level of confidence, typically one must collect at least 10 times the reciprocal of the desired BER. For example, if a BER of 10^−12 is required, then one can expect that at least 10^13 samples will be required.
It is interesting to note that Eq. (14.50) reveals a tradeoff between the confidence level of the test and the bit length NT,min, which in turn is related to the test time, Ttest. Consider substituting NE = 0 into Eq. (14.50); one finds

NT,min = −ln(1 − CL)/BER    (14.51)
Since the test time Ttest is given by

Ttest = NT,min/FS    (14.52)

where FS is the data rate, it follows that

Ttest = −ln(1 − CL)/(FS × BER)    (14.53)

The higher the confidence level CL, the longer the time required for completing the test.
EXAMPLE 14.7
A system transmission test is to be run whereby a BER < 10^−10 is to be verified. How many samples should be collected such that the desired BER is met with a CL of 99% when no more than 2 errors are deemed acceptable? What is the total test time if the data rate is 2.5 Gbps?
Solution:
Using Eq. (14.50), we write

NT,min = (1/10^−10) ln[Σ_{k=0}^{2} (1/k!)(NT,min·10^−10)^k] − (1/10^−10) ln(1 − 0.99)

Here we have a transcendental expression in terms of NT,min only, as NE was set to 2. Using a computer program, we solve

NT,min = (1/10^−10) ln[1 + 10^−10·NT,min + (1/2)(10^−10·NT,min)²] − (1/10^−10) ln(1 − 0.99)

and obtain NT,min = 8.41 × 10^10 bits. If after 8.41 × 10^10 bits we have 2 bit errors or fewer, then we can conclude that BER < 10^−10 with a confidence level of 99%. The time required to perform this test at a data rate of 2.5 Gbps is

Ttest = (8.41 × 10^10)/(2.5 × 10^9) = 33.6 s

Therefore, 33.6 s is required to perform this test, for this one parameter, on one part only!
BER tests are extremely time-consuming and very expensive to run in production. A second test limit can be derived that measures the confidence that the bit sequence will have more bit errors than the desired amount. Hence, once this limit is reached, the test can be terminated and declared a failure, thereby saving test time.
Consider the probability of achieving more bit errors than desired with confidence CL; that is, we write

P[X > NE | Pe = BER] = 1 − CL    (14.54)

or, equivalently, in terms of the Poisson approximation,

Σ_{k=0}^{NE} (1/k!)(NT·BER)^k e^(−NT·BER) = CL    (14.55)
Following the same set of steps as before, we write a transcendental expression in terms of the bit length NT. Because this bit length represents the maximum number of bits that can pass before exceeding a given number of errors NE, we designate this bit length as NT,max and write

NT,max = (1/BER) ln[Σ_{k=0}^{NE} (1/k!)(NT,max·BER)^k] − (1/BER) ln(CL)    (14.56)
Figure 14.26. A flow diagram illustrating the test process for performing an efficient BER test.
If the number of observed errors is greater than NE after NT,max bits have been collected, the test can
be declared a failure. Otherwise, we continue the test until all NT,min bits are transmitted. A flow
diagram illustrating the test process is shown in Figure 14.26.
EXAMPLE 14.8
A system transmission test is to be run whereby a BER < 10^−10 is to be verified at a confidence level of 99% when 2 or fewer bit errors are deemed acceptable. What are the minimum and maximum bit lengths required for this test? Also, what are the minimum and maximum times for this test if the data rate is 2.5 Gbps? Does testing for a failing part prior to the end of the test provide significant test time savings?
Solution:
From the previous example, we found the minimum bit length by solving

NT,min = (1/10^−10) ln[1 + 10^−10·NT,min + (1/2)(10^−10·NT,min)²] − (1/10^−10) ln(1 − 0.99) = 8.41 × 10^10 bits

Likewise, from Eq. (14.56), the maximum bit length is found by solving

NT,max = (1/10^−10) ln[1 + 10^−10·NT,max + (1/2)(10^−10·NT,max)²] − (1/10^−10) ln(0.99) = 4.36 × 10^9 bits

At a data rate of 2.5 Gbps, the maximum test time is (8.41 × 10^10)/(2.5 × 10^9) = 33.6 s, while a failing part can be identified after as little as (4.36 × 10^9)/(2.5 × 10^9) = 1.74 s. Terminating the test early on a failing part therefore provides a significant test time saving.
G(X) = X^P + gP−1·X^(P−1) + gP−2·X^(P−2) + … + g2·X^2 + g1·X + 1    (14.57)

G(X) = X^4 + X + 1    (14.58)
Here the first and second flip-flop outputs are fed back to the shift register input through a single XOR operation. As each flip-flop delays its input by one clock cycle, we can write the following equations for each flip-flop output at time index n:
Figure 14.27. General LFSR structure corresponding to the generator polynomial G(X) of Eq. (14.57), with feedback taps set by the coefficients gP−1, …, g1 and g0 = 1.
Figure 14.28. A four-stage LFSR implementing G(X) = X^4 + X + 1; the outputs of flip-flops #1 and #0 are XORed and fed back to the D-input of flip-flop #3.
Table 14.1. Primitive generator polynomials of degree P.

P    G(X)                         P    G(X)
2    X^2 + X + 1                  18   X^18 + X^7 + 1
3    X^3 + X + 1                  19   X^19 + X^5 + X^2 + X + 1
4    X^4 + X + 1                  20   X^20 + X^3 + 1
5    X^5 + X^2 + 1                21   X^21 + X^2 + 1
6    X^6 + X + 1                  22   X^22 + X + 1
7    X^7 + X^3 + 1                23   X^23 + X^5 + 1
8    X^8 + X^6 + X^5 + X^3 + 1    24   X^24 + X^7 + X^2 + X + 1
9    X^9 + X^4 + 1                25   X^25 + X^3 + 1
10   X^10 + X^3 + 1               26   X^26 + X^6 + X^2 + X + 1
11   X^11 + X^2 + 1               27   X^27 + X^5 + X^2 + X + 1
12   X^12 + X^6 + X^4 + X + 1     28   X^28 + X^3 + 1
13   X^13 + X^4 + X^3 + X + 1     29   X^29 + X^2 + 1
14   X^14 + X^10 + X^6 + X + 1    30   X^30 + X^23 + X^2 + X + 1
15   X^15 + X + 1                 31   X^31 + X^3 + 1
16   X^16 + X^12 + X^3 + X + 1    32   X^32 + X^22 + X^2 + X + 1
17   X^17 + X^3 + 1               33   X^33 + X^13 + 1
F3[n] = LXOR[n − 1]
F2[n] = F3[n − 1]
F1[n] = F2[n − 1]
F0[n] = F1[n − 1]    (14.59)
where LXOR is the output of the XOR gate. We can describe the XOR output as a summing operation over a finite field (mod 2) with inputs F1 and F0 as follows:

LXOR[n] = F1[n] + F0[n] mod 2    (14.60)

Back-substituting the delay relationships of Eq. (14.59), the output of the 0th flip-flop obeys

F0[n] = F0[n − 3] + F0[n − 4] mod 2    (14.61)
Including the initial register values, we further write

F0[n] = { F0[0],                          n = 0
        { F1[0],                          n = 1
        { F2[0],                          n = 2
        { F3[0],                          n = 3
        { F0[n − 3] + F0[n − 4] mod 2,    n = 4, 5, …    (14.62)
The above time difference equation (mod 2) provides the complete pattern sequence from an ini-
tial state or seed. The following example will illustrate this procedure.
EXAMPLE 14.9
Derive the first 15 bits associated with the four-stage LFSR shown in Figure 14.28, assuming that
the shift register is initialized with seed 1011 (from left to right).
Solution:
According to Eq. (14.62), we write

F0[n] = { 1,                              n = 0
        { 1,                              n = 1
        { 0,                              n = 2
        { 1,                              n = 3
        { F0[n − 3] + F0[n − 4] mod 2,    n = 4, 5, …

Cycling through the recursive portion of this expression for n = 4 to 18 yields the bits

0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1

If the sequence is allowed to run longer, one would observe the following pattern, repeating with period 2^4 − 1 = 15:

1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1,
0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, …
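The shift-register of this example can be emulated directly in software. The sketch below models a Fibonacci-style LFSR in the manner of Figure 14.28; the function name and argument conventions are our own, with the seed listed left to right from the highest-index flip-flop down to F0:

```python
def lfsr_bits(taps, seed, nbits):
    """Fibonacci-style LFSR: `taps` lists the flip-flop indices whose
    outputs are XORed back into the shift-register input; the output bit
    is taken from F0. `seed` lists [F_{P-1}, ..., F0], left to right."""
    state = list(seed)                 # state[0] = highest-index flip-flop
    out = []
    for _ in range(nbits):
        out.append(state[-1])                      # F0 is the output
        fb = 0
        for t in taps:
            fb ^= state[len(state) - 1 - t]        # XOR the tapped outputs
        state = [fb] + state[:-1]                  # shift by one position
    return out

# G(X) = X^4 + X + 1: F1 XOR F0 feeds flip-flop #3 (Example 14.9, seed 1011)
bits = lfsr_bits(taps=(1, 0), seed=(1, 0, 1, 1), nbits=30)
```

The first 15 output bits reproduce the sequence derived by hand above, and the next 15 repeat it, confirming the 2^4 − 1 = 15 period.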
In general, for any Pth-order generator polynomial described by Eq. (14.57), the output of the XOR gate can be written in terms of the state of each flip-flop output at sampling instant n as

LXOR[n] = gP−1·FP−1[n] + gP−2·FP−2[n] + … + g1·F1[n] + F0[n] mod 2    (14.63)

Correspondingly, we can write the state of the output of the 0th flip-flop in terms of its past states as

F0[n] = gP−1·F0[n − 1] + gP−2·F0[n − 2] + … + g1·F0[n − (P − 1)] + F0[n − P] mod 2    (14.64)

The next example will help to illustrate the use of these generalized equations, as well as describe how the initial seed is incorporated into the PRBS generation.
EXAMPLE 14.10
An LFSR of degree 8 is required. Write a short routine that generates the complete set of unique bits associated with this sequence. Initialize the LFSR using the seed 00000001, where the first bit corresponds to the state of the 7th flip-flop and the last bit is associated with the state of the 0th flip-flop.
Solution:
The generator polynomial for a PRBS of degree 8 (Table 14.1) is

G(X) = X^8 + X^6 + X^5 + X^3 + 1

Comparing this expression to the general form of the generating polynomial of degree P = 8, we determine

g7 = 0, g6 = 1, g5 = 1, g4 = 0, g3 = 1, g2 = 0, g1 = 0

Subsequently, the recursive equation that governs the output behavior of the LFSR, as described by Eq. (14.64), reduces to

F0[n] = F0[n − 2] + F0[n − 3] + F0[n − 5] + F0[n − 8] mod 2

We can then write the following routine involving a programming for-loop and the array variable F:

# initialize the output using the given seed value
F[7] = 0; F[6] = 0; F[5] = 0; F[4] = 0; F[3] = 0; F[2] = 0; F[1] = 0; F[0] = 1
# cycle through the leading flip-flop recursive equation
for n = 8 to 262,
    F[n] = (F[n − 2] + F[n − 3] + F[n − 5] + F[n − 8]) mod 2
end

Cycling through this routine will provide all 255 unique bits of this sequence corresponding to the initial conditions provided by the seed number.
Exercises
14.14. An LFSR has primitive generator polynomial G(X) = X 19 + ANS. XOR= F5 + F2 + F1 + F0;
X 5 + X 2 + X + 1. Write the expression for the XOR opera- XOR is fed into the D-input of
tion assuming the output is taken from the 0th flip-flop. the #18 flip-flop.
The minimum BER occurs when the threshold voltage VTH is set equal to the average of the voltages representing the two logic levels, that is,

V*TH = (VLogic0 + VLogic1)/2    (14.66)
Hence the minimum BER is found by substituting Eq. (14.66) into (14.65), together with the mathematical identity

Φ(−x) = 1 − Φ(x)    (14.67)

resulting in

BER(V*TH) = 1 − Φ((VLogic1 − VLogic0)/(2σN))    (14.68)
The amplitude-based scan test strategy can now be identified. The goal is to identify the three unknown parameters of the BER expression found in Eq. (14.65) and then, using the ideal sampling point defined by Eq. (14.66), compute the corresponding BER. Moreover, as these three quantities are the same at any BER level, we can identify them at relatively large BER values (low performance) and save enormous test time in the process.
The procedure works by moving the receiver threshold level VTH closer to the logic 1 voltage level such that the BER performance is reduced, say, to some value between 10−4 and 10−10 where the test time is short. Let us assume at this instant that the BER is set at some level near 10−9. According to Eq. (14.65), with VTH = VTH,1, we can write

BER(VTH,1) = 1/2 − 1/2 × Φ( (VTH,1 − VLogic0)/σN ) + 1/2 × Φ( (VTH,1 − VLogic1)/σN )    (14.69)

We further recognize that the probability of error due to the noise centered on the logic 0 level is now insignificant at this BER level; hence we can write
BER(VTH,1) ≈ 1/2 × Φ( (VTH,1 − VLogic1)/σN )    (14.70)
Next, move the threshold level again, this time even closer to the logic 1 voltage level, so that the BER performance is further reduced to, say, 10−6. We now have a second equation in terms of VTH,2 and the revised BER level, i.e.,

BER(VTH,2) ≈ 1/2 × Φ( (VTH,2 − VLogic1)/σN )    (14.71)
We now have two equations in two unknowns, from which we can easily solve for VLogic1 and σN as follows:

VLogic1 = ( VTH,1 × Φ−1[2BER(VTH,2)] − VTH,2 × Φ−1[2BER(VTH,1)] ) / ( Φ−1[2BER(VTH,2)] − Φ−1[2BER(VTH,1)] )    (14.72)

σN = ( VTH,2 − VTH,1 ) / ( Φ−1[2BER(VTH,2)] − Φ−1[2BER(VTH,1)] )

assuming VTH,2 > VTH,1 and BER(VTH,1) < BER(VTH,2).
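As a sketch in Python (the book works in MATLAB; here the standard library's `NormalDist` supplies Φ−1, and the measurement pairs are the ones used in Example 14.11 below):

```python
from statistics import NormalDist

Phi_inv = NormalDist().inv_cdf   # inverse Gaussian CDF, i.e., Phi^-1

def logic1_params(vth1, ber1, vth2, ber2):
    """Eq. (14.72): solve for VLogic1 and sigma_N from two (threshold, BER)
    measurement pairs taken near the logic-1 level (vth2 > vth1, ber2 > ber1)."""
    a1, a2 = Phi_inv(2 * ber1), Phi_inv(2 * ber2)
    v_logic1 = (vth1 * a2 - vth2 * a1) / (a2 - a1)
    sigma_n = (vth2 - vth1) / (a2 - a1)
    return v_logic1, sigma_n

# Measurements from Example 14.11: BER = 0.5e-9 at 1.20 V, 0.5e-6 at 1.35 V
v1, s1 = logic1_params(1.20, 0.5e-9, 1.35, 0.5e-6)
print(v1, s1)   # ~1.92 V and ~0.121 V
```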
We repeat the above procedure, but this time we move the threshold closer to the other logic level using the same BER levels, and we obtain another set of equations, assuming VTH,4 < VTH,3 and BER(VTH,3) < BER(VTH,4), that is,

BER(VTH,3) ≈ 1/2 − 1/2 × Φ( (VTH,3 − VLogic0)/σN )
                                                          (14.73)
BER(VTH,4) ≈ 1/2 − 1/2 × Φ( (VTH,4 − VLogic0)/σN )
These two equations can then be solved for the unknown parameters VLogic0 and σN as follows:

VLogic0 = ( VTH,3 × Φ−1[1 − 2BER(VTH,4)] − VTH,4 × Φ−1[1 − 2BER(VTH,3)] ) / ( Φ−1[1 − 2BER(VTH,4)] − Φ−1[1 − 2BER(VTH,3)] )    (14.74)

σN = ( VTH,3 − VTH,4 ) / ( Φ−1[1 − 2BER(VTH,3)] − Φ−1[1 − 2BER(VTH,4)] )
Here we have two expressions for the standard deviation of the noise. Based on our initial assumptions, these should be equal. If they are not, we can simply average the two estimates of the standard deviation of the underlying noise process and use the average value in our estimate of this model parameter, that is,

σN = (σN,0 + σN,1)/2    (14.75)
Alternatively, we can create a new description of the BER performance based on two different zero-mean Gaussian noise distributions with standard deviations σN,0 and σN,1 around the two logic levels and solve for the BER performance as

BER(VTH) = 1/2 − 1/2 × Φ( (VTH − VLogic0)/σN,0 ) + 1/2 × Φ( (VTH − VLogic1)/σN,1 )    (14.76)

The optimum threshold level V*TH for a minimum level of BER can then be found to occur at

V*TH = ( σN,0 VLogic1 + σN,1 VLogic0 ) / ( σN,0 + σN,1 )    (14.77)
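The two expressions above can be combined in a few lines of Python (a sketch using the standard library's `NormalDist` for Φ; the numbers are the ones given in Example 14.12 below):

```python
from statistics import NormalDist

Phi = NormalDist().cdf   # Gaussian CDF

def min_ber(v0, s0, v1, s1):
    """Optimum threshold (Eq. 14.77) and the corresponding minimum BER
    (Eq. 14.76) for asymmetric zero-mean Gaussian noise on the two levels."""
    vth = (s0 * v1 + s1 * v0) / (s0 + s1)
    ber = 0.5 - 0.5 * Phi((vth - v0) / s0) + 0.5 * Phi((vth - v1) / s1)
    return vth, ber

# Values from Example 14.12: 100 mV / 50 mV and 980 mV / 75 mV
vth, ber = min_ber(0.100, 0.050, 0.980, 0.075)
print(vth, ber)   # ~0.452 V and ~9.6e-13
```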
To help visualize the ordering of these variables, Figure 14.29 illustrates the arrangement of threshold voltages with respect to the BER level. It is important that the reader maintain the correct order; otherwise the estimated BER will be incorrect.
To perform numerical calculations, we need to find the inverse value of the Gaussian CDF for very small arguments. Appendix B, found at the back of the book, provides a short table of some of the important values of this inverse function. One can also make use of the following approximation to the Gaussian CDF when x is large but negative:

Φ(x) ≈ (1/((−x)√(2π))) × e^(−x²/2),   x << 0    (14.78)
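A quick numerical check of this approximation against an exact CDF routine (Python's `NormalDist`; x = −7.06 is the argument that appears in Example 14.11 below):

```python
from math import exp, pi, sqrt
from statistics import NormalDist

def phi_tail_approx(x):
    """Eq. (14.78): asymptotic approximation to Phi(x) for large negative x."""
    return exp(-x * x / 2) / ((-x) * sqrt(2 * pi))

x = -7.06
approx = phi_tail_approx(x)
exact = NormalDist().cdf(x)
print(approx, exact)   # ~8.5e-13 (approximation) vs ~8.3e-13 (exact)
```

The approximation slightly overestimates the tail; the leading correction factor (1 − 1/x²) closes most of the remaining gap.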
Figure 14.29. Highlighting the notation of the threshold voltages with respect to the BER level (BER plotted versus threshold voltage, with the levels BER(VTH,4), BER(VTH,2), BER(VTH,3), and BER(VTH,1) marked).
EXAMPLE 14.11
A system transmission test is to be run whereby a BER < 10–12 is to be verified using the amplitude-
based scan test method. BER measurements were made at the following four threshold levels:
Does this system have the capability to meet the BER requirements and what is the optimum
threshold level? Assume that the noise is asymmetrical.
Solution:
Following the reference system of equations, we write

BER(VTH,1) = 0.5 × 10−9 @ VTH,1 = 1.20 V,   BER(VTH,2) = 0.5 × 10−6 @ VTH,2 = 1.35 V
BER(VTH,3) = 0.5 × 10−9 @ VTH,3 = 0.90 V,   BER(VTH,4) = 0.5 × 10−6 @ VTH,4 = 0.70 V

Using the relationship for Φ−1(x) from the table in Appendix B, specifically Φ−1(10−6) = −4.7534 and Φ−1(10−9) = −5.9978, we find

VLogic1 = (1.20 × (−4.7534) − 1.35 × (−5.9978)) / (−4.7534 − (−5.9978)) = 1.923 V

σN,1 = (1.35 − 1.20) / (−4.7534 − (−5.9978)) = 0.1205 V
Likewise, for the other logic level parameters with Φ–1(1 – 10–6) = 4.7534 and Φ–1(1 – 10–9) = 5.9978,
we write
VLogic0 = (0.90 × Φ−1[1 − 10−6] − 0.70 × Φ−1[1 − 10−9]) / (Φ−1[1 − 10−6] − Φ−1[1 − 10−9]) = (0.90 × 4.7534 − 0.70 × 5.9978)/(4.7534 − 5.9978) = −0.064 V

σN,0 = (0.90 − 0.70) / (Φ−1[1 − 10−9] − Φ−1[1 − 10−6]) = 0.20/(5.9978 − 4.7534) = 0.1607 V
Substituting these parameter estimates into Eqs. (14.77) and (14.76), the optimum threshold is V*TH = (0.1607 × 1.923 + 0.1205 × (−0.064))/(0.1607 + 0.1205) = 1.071 V, so that

BER(V*TH) = Φ(−7.06)

where we made use of the identity Φ(7.06) = 1 − Φ(−7.06). Using the table in Appendix B, we note that Φ(−7.06) < Φ(−7.03) ≈ 10−12; hence we estimate the minimum BER as

BER(V*TH) < 10−12
Alternatively, we can make use of the approximation in Eq. (14.78) and estimate Φ(−7.06) as

Φ(−7.06) ≈ (1/(7.06 √(2π))) × e^(−(7.06)²/2) = 8.5 × 10−13
We can therefore conclude that the system meets the BER requirements (just barely, though).
A variant of the amplitude-based scan test technique is one that captures a histogram of the voltage levels around each logic level at the center of the eye diagram. A digital sampler or digital sampling oscilloscope is often used to capture these histograms, from which the mean (logic voltage level) and standard deviation of each can be found. Subsequently, the optimum threshold and minimum BER can be found by combining Eq. (14.77) with Eq. (14.76). The following example will illustrate this approach.
EXAMPLE 14.12
A digital sampling oscilloscope obtained the following eye diagram while observing the characteristics of a 1-Gbps digital signal. Histograms were obtained using the built-in function of the scope at a sampling instant midway between transitions (i.e., at the maximum point of eye opening). Detailed analysis revealed that each histogram is Gaussian. One histogram has a mean value of 100 mV and a standard deviation of 50 mV. The other has a mean of 980 mV and a standard deviation of 75 mV. What is the theoretical minimum BER associated with this system?
Solution:
Restating the given information in terms of the parameters used in the text above, we write

VLogic0 = 100 mV, σN,0 = 50 mV, VLogic1 = 980 mV, σN,1 = 75 mV

Substituting the above parameters into the expression for the optimum threshold level given in Eq. (14.77), we write

V*TH = (50 × 980 + 75 × 100)/(50 + 75) mV = 452 mV

resulting in

BER(V*TH) = Φ(−7.04) = 9.61 × 10−13

Here we see that the theoretical minimum BER is slightly less than 10−12.
Exercises
The application of the amplitude-based scan test technique assumes that the histogram around each voltage level is Gaussian. One often needs to assure oneself that this assumption is realistic for the data set that is collected. Simply looking at the histogram plot to see if the distribution accurately follows a Gaussian is difficult, because the subtleties in the tails are not easy to observe. A kurtosis metric or a normal probability plot can be used instead (see Section 4.2.4).
The validity of the amplitude-based scan test technique, or Q-factor method, has been called into question,11 largely over its ability to accurately predict system behavior at very low BER from measurements made at much larger BER levels. While system impairments may be dominated by Gaussian noise behavior at large BER levels, one really does not know whether such behavior continues at low BER levels unless one measures it. The converse is also true: when jitter impairments are present, the amplitude-based scan test technique predicts the minimum BER at a much higher level than a direct BER measurement would find, again skewing the results. Even with these drawbacks, the amplitude-based scan test technique is a widely used method.
BER(tTH) = 1/2 − 1/2 × Φ( (tTH − 0)/σRJ ) + 1/2 × Φ( (tTH − T)/σRJ )    (14.79)
Here we recognize that the BER expression contains one unknown, σRJ; hence we can make a single measurement and solve for this unknown. Furthermore, we recognize that if we sample close to either edge of a bit transition, we can approximate Eq. (14.79) by the following:
BER(tTH) ≈ 1/2 − 1/2 × Φ( tTH/σRJ ),   0 ≤ tTH < T/2
                                                        (14.80)
BER(tTH) ≈ 1/2 × Φ( (tTH − T)/σRJ ),   T/2 ≤ tTH ≤ T
Assuming that we set the sampling instant at tTH,1, much less than T/2, then after making a single BER measurement we can write an expression relating this measurement as

BER(tTH,1) = 1/2 − 1/2 × Φ( tTH,1/σRJ )    (14.81)

which, when rearranged, provides

σRJ = tTH,1 / Φ−1[1 − 2 × BER(tTH,1)]    (14.82)
To estimate the location of the sampling instant, first assume a specific level of noise present and the level of BER that is expected, then rearrange Eq. (14.81) and write

tTH,1 = σRJ × Φ−1[1 − 2 × BER(tTH,1)]    (14.83)

For instance, if we estimate that 10 ps of RJ is present on the digital signal and we want to make a BER measurement at a 10−6 level, we estimate the sampling time to be

tTH,1 = 10 ps × Φ−1[1 − 2 × 10−6] ≈ 10 ps × 4.61 ≈ 46 ps

Once σRJ is determined, we substitute this result back into Eq. (14.79) and solve for the system BER at the appropriate sampling instant, typically the middle of the data eye.
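The two-step procedure can be sketched in Python (the 10-ps RJ level and 10−6 target BER are the illustrative numbers used in the text):

```python
from statistics import NormalDist

nd = NormalDist()

# Eq. (14.83): choose the sampling offset for an assumed RJ level and target BER
sigma_rj = 10e-12                  # assumed 10 ps of random jitter
target_ber = 1e-6
t_th1 = sigma_rj * nd.inv_cdf(1 - 2 * target_ber)

# Eq. (14.82): invert a measured BER at that offset back into sigma_RJ
measured_ber = 1e-6                # hypothetical measurement at t_th1
sigma_est = t_th1 / nd.inv_cdf(1 - 2 * measured_ber)
print(t_th1, sigma_est)   # ~46 ps and ~10 ps
```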
EXAMPLE 14.13
The BER performance of a digital receiver was measured to be 10−14 at a bit rate of 600 Mbps, with the sampling instant at one-half the bit duration. Due to manufacturing errors, the sampling instant can vary by ±20% from its ideal position. Assuming an underlying noise component that is Gaussian in nature, what is the range of BER performance expected during production?
Solution:
As the ideal sampling instant is set at one-half the bit period, for a 600-Mbps bit rate (T = 1.667 ns) this corresponds to a 0.833-ns sampling instant. Using Eq. (14.79), we can solve for the standard deviation of the underlying noise process as follows:

BER(T/2) = 10−14 = 1/2 − 1/2 × Φ( T/(2σRJ) ) + 1/2 × Φ( −T/(2σRJ) )

BER(T/2) = 10−14 = Φ( −T/(2σRJ) )

Rearranging, we write

σRJ = −(T/2) / Φ−1[10−14] = 0.833 ns / 7.65 ≈ 0.109 ns

Next, given that the sampling instant can vary by ±20% from its ideal position of 8.33 × 10−10 s, we write

0.8 × (T/2) ≤ tTH ≤ 1.2 × (T/2)

or, when simplified,

0.667 ns ≤ tTH ≤ 1.0 ns
As the BER performance varies in a symmetrical manner about the T/2 sampling instant, the BER performance level is the same at either extreme. Selecting tTH = 1 × 10−9 s, the BER performance we can expect is

BER(10−9 s) = 2.33 × 10−10

Therefore, during a production test, we can expect the BER performance to vary between 10−14 and 2.33 × 10−10.
Exercises
We further learned that the DJ component can consist of many parts, including PJ, DDJ, and DCD jitter components. One can extend the above theory to each of these separate noise components by expanding the convolution operation across all noise elements. To keep the situation here more manageable, we shall consider the situation described by Eq. (14.84) and leave the more general case to some later examples.
We shall begin by describing the dual-Dirac jitter decomposition method,8 where the RJ noise component is assumed to follow Gaussian statistics with zero mean and standard deviation σRJ, and the DJ component follows a distribution that has a dual-impulse or dual-Dirac behavior with parameter μDJ. Mathematically, we write these two PDFs as follows:

pdfRJ(t) = (1/(σRJ √(2π))) × e^(−t²/(2σRJ²))    (14.86)

pdfDJ(t) = 1/2 × δ(t − μDJ/2) + 1/2 × δ(t + μDJ/2)    (14.87)
Figure 14.30. The total jitter PDF, pdfTJ, versus time t: two Gaussian peaks of standard deviation σRJ centered at t = −μDJ/2 and t = +μDJ/2.

With this model, the DJ and RJ metrics are simply

DJ = μDJ
              (14.88)
RJ = σRJ
Through the convolution operation listed in Eq. (14.84), we can describe the total jitter PDF for any bit transition as

pdfTJ(t) = 1/2 × (1/(σRJ √(2π))) × e^(−(t + μDJ/2)²/(2σRJ²)) + 1/2 × (1/(σRJ √(2π))) × e^(−(t − μDJ/2)²/(2σRJ²))    (14.89)
We illustrate this PDF in Figure 14.30, where it consists of two Gaussian distributions centered at t = −μDJ/2 and t = μDJ/2. It is important to recognize that the peaks associated with each Gaussian distribution are offset from one another by the parameter μDJ. Moreover, this offset is independent of the number of samples collected, as it is deterministic in nature. In practice, this fact is often used to identify the amount of DJ present in a jittery signal. In essence, the dual-Dirac modeling method attempts to fit two Gaussian distributions with symmetrical mean values and equal standard deviations to the random portion of the digital signal.
Following the development of Section 14.4.3, the BER is computed as follows:

BER(tTH) = 1/2 × ∫ from (n−1)T + tTH to ∞ of pdfTJ|t=(n−1)T dt + 1/2 × ∫ from −∞ to nT − tTH of pdfTJ|t=nT dt    (14.90)

where pdfTJ|t=(n−1)T and pdfTJ|t=nT represent the total jitter PDFs of the (n − 1)th and nth consecutive bit transitions, and tTH is the sampling instant within the unit-interval bit period T. Substituting Eq. (14.89) into Eq. (14.90) and working through the integration, we find the BER can be written in terms of the Gaussian CDF and sampling instant tTH as follows:
the Gaussian CDF and sampling instant tTH as follows:
1 ⎡ ⎛ t + μ DJ 2 ⎞ ⎛ t − μ DJ 2 ⎞ ⎤
BER ( tTH ) = × ⎢ 2 − Φ ⎜ TH ⎟ − Φ ⎜ TH ⎟⎠ ⎥
4 ⎣⎢ ⎝ σ RJ ⎠ ⎝ σ RJ ⎥⎦ (14.91)
1 ⎡ ⎛ t − T + μ DJ 2 ⎞ ⎛ t − T − μ DJ 2 ⎞ ⎤
+ × ⎢ Φ ⎜ TH ⎟ + Φ ⎜ TH ⎟⎠ ⎥
4 ⎢⎣ ⎝ σ RJ ⎠ ⎝ σ RJ ⎥⎦
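Equation (14.91) is easy to evaluate numerically; a sketch follows (the sanity check verifies that setting μDJ = 0 recovers the RJ-only expression of Eq. (14.79); the specific 150-ps/50-ps numbers are illustrative):

```python
from statistics import NormalDist

Phi = NormalDist().cdf   # Gaussian CDF

def ber_dual_dirac(t_th, mu_dj, sigma_rj, T):
    """Eq. (14.91): BER versus sampling instant under the dual-Dirac model."""
    return (0.25 * (2 - Phi((t_th + mu_dj / 2) / sigma_rj)
                      - Phi((t_th - mu_dj / 2) / sigma_rj))
            + 0.25 * (Phi((t_th - T + mu_dj / 2) / sigma_rj)
                      + Phi((t_th - T - mu_dj / 2) / sigma_rj)))

# Sanity check: with mu_dj = 0 the expression collapses to RJ-only Eq. (14.79)
t, s, T = 150e-12, 50e-12, 1e-9
rj_only = 0.5 - 0.5 * Phi(t / s) + 0.5 * Phi((t - T) / s)
print(ber_dual_dirac(t, 0.0, s, T), rj_only)   # both ~6.7e-4
```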
Because only two of the four terms contribute to the overall BER in each region, we can simplify the above equation and write

BER(tTH) ≈ 1/4 × [ 2 − Φ( (tTH + μDJ/2)/σRJ ) − Φ( (tTH − μDJ/2)/σRJ ) ],   0 ≤ tTH ≤ T/2
                                                                                         (14.92)
BER(tTH) ≈ 1/4 × [ Φ( (tTH − T + μDJ/2)/σRJ ) + Φ( (tTH − T − μDJ/2)/σRJ ) ],   T/2 ≤ tTH ≤ T
The following example will be used to illustrate how the dual-Dirac jitter decomposition method
is applied to a practical situation.
EXAMPLE 14.14
A digital system operates with a 1-GHz clock. How much RJ can be tolerated by the system if the
BER is to be less than 10−10 and the DJ is no greater than 10 ps? Assume that the sampling instant
is in the middle of the data eye having a unit time interval of 1 ns.
Solution:
Substituting the given information, that is, T = 1000 ps, tTH = 500 ps, μDJ = 10 ps, and BER(tTH) ≤ 10−10, into Eq. (14.91), we write

1/4 × [ 2 − Φ( (500 ps + 5 ps)/σRJ ) − Φ( (500 ps − 5 ps)/σRJ ) ]
+ 1/4 × [ Φ( (500 ps − 1000 ps + 5 ps)/σRJ ) + Φ( (500 ps − 1000 ps − 5 ps)/σRJ ) ] ≤ 10−10

or

Φ( −495 ps/σRJ ) + Φ( −505 ps/σRJ ) ≤ 2 × 10−10
Because this expression is transcendental in σRJ, we simply sweep σRJ until the resulting BER just reaches 10−10 and declare the largest value of σRJ satisfying the bound as the largest RJ jitter that the system can tolerate. On doing this, we obtain the following listing of results:

σRJ (ps)    BER
60          4.93 × 10−17
70          5.19 × 10−13

As is evident from the table, the system can tolerate a random jitter component with a standard deviation of less than 78.5 ps in order to ensure a BER of less than or equal to 10−10.
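The sweep described above can be automated with a bisection search (a sketch using the numbers of Example 14.14; the bracket endpoints are illustrative choices):

```python
from statistics import NormalDist

Phi = NormalDist().cdf

def ber_mid_eye(sigma_rj):
    """BER at tTH = 500 ps for T = 1000 ps and mu_DJ = 10 ps (Example 14.14),
    i.e., the simplified form (1/2)[Phi(-495 ps/s) + Phi(-505 ps/s)]."""
    return 0.5 * (Phi(-495e-12 / sigma_rj) + Phi(-505e-12 / sigma_rj))

# Bisect for the largest sigma_RJ whose BER stays at or below 1e-10
lo, hi = 50e-12, 100e-12          # bracket: BER(lo) < 1e-10 < BER(hi)
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if ber_mid_eye(mid) <= 1e-10 else (lo, mid)
print(lo * 1e12)   # ~78 ps, consistent with the table's conclusion
```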
Returning to Eq. (14.91), it is evident that the BER expression is a function of two unknown
parameters, μDJ and σRJ. These two parameters can be identified by making two measurements of
the BER under two separate sampling instants, say tTH,1 and tTH,2, both less than T/2, resulting in
the following two equations:
BER(tTH,1) ≈ 1/4 × [ 1 − Φ( (tTH,1 − μDJ/2)/σRJ ) ]
                                                       (14.94)
BER(tTH,2) ≈ 1/4 × [ 1 − Φ( (tTH,2 − μDJ/2)/σRJ ) ]
Solving these two equations for the two unknowns yields

σRJ = ( tTH,1 − tTH,2 ) / ( Φ−1[1 − 4 × BER(tTH,1)] − Φ−1[1 − 4 × BER(tTH,2)] )

μDJ = ( 2 × tTH,2 × Φ−1[1 − 4 × BER(tTH,1)] − 2 × tTH,1 × Φ−1[1 − 4 × BER(tTH,2)] ) / ( Φ−1[1 − 4 × BER(tTH,1)] − Φ−1[1 − 4 × BER(tTH,2)] )    (14.95)
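Equation (14.95) maps directly into code (a sketch, checked against the measurement pair used in Example 14.15 below):

```python
from statistics import NormalDist

Phi_inv = NormalDist().inv_cdf

def rj_dj_from_two_points(t1, ber1, t2, ber2):
    """Eq. (14.95): extract sigma_RJ and mu_DJ from two (sampling instant,
    BER) measurements taken on the same eye edge (both instants < T/2)."""
    a1 = Phi_inv(1 - 4 * ber1)
    a2 = Phi_inv(1 - 4 * ber2)
    sigma_rj = (t1 - t2) / (a1 - a2)
    mu_dj = (2 * t2 * a1 - 2 * t1 * a2) / (a1 - a2)
    return sigma_rj, mu_dj

# Measurements from Example 14.15: 0.25e-4 at 300 ps and 0.25e-6 at 350 ps
rj, dj = rj_dj_from_two_points(300e-12, 0.25e-4, 350e-12, 0.25e-6)
print(rj * 1e12, dj * 1e12)   # ~48.3 ps and ~240.5 ps
```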
The following example will help to illustrate the application of this theory.
Figure 14.31. (a) The zero-crossing distribution around two consecutive bit transitions; shaded areas indicate bit error probability. (b) Individual Gaussian distributions around each bit transition. (c) BER as a function of sampling instant (t = tTH).
EXAMPLE 14.15
A system transmission test is to be run whereby a BER < 10–12 is to be verified using the jitter-
decomposition test method. The system operates with a 1-GHz clock and the following BER mea-
surements are at two different sampling instances: BER = 0.25 x 10–4 at 300 ps, and BER = 0.25 x 10–6
at 350 ps. Assume the digital system operates with a sampling instant in the middle of the data
eye. Does the system meet spec?
Solution:
Following the reference system of equations, we write

BER(tTH,1) = 0.25 × 10−4 @ tTH,1 = 300 ps,   BER(tTH,2) = 0.25 × 10−6 @ tTH,2 = 350 ps

Assuming the BERs were obtained on the tail of a single Gaussian, we substitute the above parameters into Eq. (14.95). Using the relationship for Φ−1(1 − x) from the table in Appendix B, specifically Φ−1(1 − 10−4) = 3.7190 and Φ−1(1 − 10−6) = 4.7534, we solve to obtain μDJ = 240.5 ps and σRJ = 48.3 ps. In order to ensure that our simplified model of BER behavior is reasonable in the region where the samples of BER were obtained, we compare
BER(tTH) = 1/4 × [ 2 − Φ( (tTH + 120.2 ps)/48.3 ps ) − Φ( (tTH − 120.2 ps)/48.3 ps ) ]

with

BER(tTH) ≈ 1/4 × [ 1 − Φ( (tTH − 120.2 ps)/48.3 ps ) ]
as shown below. Clearly, we see that the two curves are identical over the range of sampling instants chosen. Hence our initial assumption is correct, and we are confident that our calculations of DJ and RJ are correct. Substituting these two parameters into Eq. (14.91), we obtain an expression for the BER across the entire eye diagram as a function of the sampling instant tTH:
BER(tTH) = 1/4 × [ 1 − Φ( (tTH + 120.2 ps)/48.3 ps ) ] + 1/4 × [ 1 − Φ( (tTH − 120.2 ps)/48.3 ps ) ]
         + 1/4 × Φ( (tTH − 1000 ps + 120.2 ps)/48.3 ps ) + 1/4 × Φ( (tTH − 1000 ps − 120.2 ps)/48.3 ps )

(Note that the arguments involve μDJ/2 = 120.2 ps, consistent with Eq. (14.91).)
Finally, we solve for the BER at a sampling instant in the middle of the data eye (i.e., tTH = 0.5 ns) and obtain

BER(0.5 ns) ≈ 1/2 × Φ(−7.86) ≈ 10−15

Therefore we conclude the BER is less than 10−12 and the system meets spec. We should note that if the 1-Gaussian model of BER behavior differs from the 2-Gaussian behavior, then we must solve the RJ and DJ parameters directly from the 2-Gaussian model of BER behavior. We encourage our readers to attempt the exercise below.
Exercises
Figure 14.32. Illustrating the definition of total jitter (TJ) based on eye opening: (a) eye diagram, showing the eye opening at threshold VTH measured at BERD between VSS and VDD; (b) corresponding BER versus sampling instant (t = tTH), with the eye opening spanning tTH to T − tTH and TJ/2 lost on either side.
The larger the eye opening, the better the quality of the signal transmission, because there is less chance of obtaining a bit error. Under ideal conditions, the maximum eye opening is equal to the bit duration, T. The difference between the ideal and actual eye opening is defined as the total jitter, TJ, that is,

TJ = T − eye opening    (14.96)
While we could define the eye opening based on the signal transitions that bound the open area encapsulated by the data eye, we need to account for the random variations that will occur. Specifically, the edges of the eye opening along the time axis are defined based on an expected probability that a transition will go beyond a specific time instant, which we learned previously is equivalent to the BER. Figure 14.32(b) illustrates the BER as a function of the sampling instant and provides a manner in which to quantify TJ, that is,

TJ = 2 × tTH    (14.97)

where tTH satisfies the one-sided BER approximation

BER(tTH) ≈ 1/4 × [ 1 − Φ( (tTH − μDJ/2)/σRJ ) ]    (14.98)
Rearranging, we can write the sampling instant tTH in terms of the desired BER, denoted by BERD, as follows:

tTH = σRJ × Φ−1[1 − 4 × BERD] + μDJ/2    (14.99)

Substituting the above expression into Eq. (14.97), we obtain the commonly used definition for TJ, that is,

TJ = 2 × σRJ × Φ−1[1 − 4 × BERD] + μDJ    (14.100)
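Equation (14.100) in code form (a sketch using the RJ/DJ numbers of Example 14.16 below):

```python
from statistics import NormalDist

Phi_inv = NormalDist().inv_cdf

def total_jitter(sigma_rj, mu_dj, ber_d=1e-12):
    """Eq. (14.100): TJ at a desired BER level for the dual-Dirac model."""
    return 2 * sigma_rj * Phi_inv(1 - 4 * ber_d) + mu_dj

# Numbers from Example 14.16: RJ = 13 ps, DJ = 64.6 ps at BER = 1e-12
tj = total_jitter(13e-12, 64.6e-12)
print(tj * 1e12)   # ~242 ps
```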
EXAMPLE 14.16
A system transmission test was run and the RJ and DJ components were found to be 13 ps and
64.6 ps, respectively. What is the TJ component at a BER level of 10-12?
Solution:
Substituting RJ and DJ into Eq. (14.100), we write
Using the data provided in the Table of Appendix B, we find Φ–1[1 – 4 × 10–12], allowing us to write
Exercises
More generally, the total jitter PDF can be described by a mixture of G Gaussian distributions, that is,

pdfTJ(t) = (α1/(σ1√(2π))) e^(−(t − μ1)²/(2σ1²)) + (α2/(σ2√(2π))) e^(−(t − μ2)²/(2σ2²)) + … + (αG/(σG√(2π))) e^(−(t − μG)²/(2σG²))    (14.101)
where the terms αi are weighting terms. Subsequently, we can substitute Eq. (14.101) into Eq. (14.90) and write the BER at sampling instant tTH as

BER(tTH) = (α1/2) × [ 1 − Φ( (tTH − μ1)/σ1 ) ] + (α2/2) × [ 1 − Φ( (tTH − μ2)/σ2 ) ] + … + (αG/2) × [ 1 − Φ( (tTH − μG)/σG ) ]
         + (α1/2) × Φ( (tTH − T − μ1)/σ1 ) + (α2/2) × Φ( (tTH − T − μ2)/σ2 ) + … + (αG/2) × Φ( (tTH − T − μG)/σG )    (14.102)
We depict a three-term Gaussian mixture in Figure 14.33. In part (a) we illustrate the total jitter distribution as a function of eye position, followed by the individual Gaussian mixture components in part (b) and the corresponding BER as a function of sampling instant in part (c). A specific sampling instant tTH is identified in all three diagrams of Figure 14.33, highlighting a specific bit error probability and its impact on the BER. It is important to recognize from this figure that the distributions and BER function can be asymmetrical with respect to the ideal bit transition instants (t = (n − 1)T and t = nT).
For low BER levels, we can approximate Eq. (14.102) using two Gaussian distributions taken from the Gaussian mixture of Eq. (14.101) as follows:

BER(tTH) ≈ (αtail+/2) × [ 1 − Φ( (tTH − μtail+)/σtail+ ) ],   0 ≤ tTH ≤ T/2
                                                                            (14.103)
BER(tTH) ≈ (αtail−/2) × Φ( (tTH − T − μtail−)/σtail− ),   T/2 ≤ tTH ≤ T
where tail+ and tail− are integer indices from 1 to G, and tail+ can also equal tail−. Because the tails of a general distribution depend on both the mean values and the standard deviations of the Gaussian mixture, some effort must go into identifying the dominant Gaussian function in each tail region of the total jitter distribution. A simple approach is to evaluate the BER contribution of each Gaussian term at some distance away from each edge transition of the eye diagram.
In terms of extracting the RJ and DJ metrics from the total jitter, we note that in the general case, DJ is defined as the distance between the two Gaussians that define the tail regions of the total distribution, that is,

DJ = μtail+ − μtail−    (14.104)

Likewise, RJ is defined as the average value of the standard deviations associated with the tail distributions,9 that is,

RJ = σRJ = (σtail− + σtail+)/2    (14.105)
In addition to these two definitions, we can also write an equation for the TJ metric using the notation indicated in Figure 14.33(c), specifically

TJ = T − (tTH,2 − tTH,1)    (14.106)

where tTH,1 and tTH,2 correspond to the sampling instants at the desired BER level, BERD, given by

tTH,1 = σtail+ × Φ−1[1 − (2/αtail+) × BERD] + μtail+
                                                              (14.107)
tTH,2 = σtail− × Φ−1[(2/αtail−) × BERD] + T + μtail−
Substituting into Eq. (14.106) and simplifying, we obtain

TJ = σtail+ × Φ−1[1 − (2/αtail+) × BERD] + σtail− × Φ−1[1 − (2/αtail−) × BERD] + μtail+ − μtail−    (14.108)
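The general TJ expression is a one-liner in code (a sketch, checked against the tail parameters extracted in Example 14.17 below, in units of UI):

```python
from statistics import NormalDist

Phi_inv = NormalDist().inv_cdf

def tj_mixture(s_plus, a_plus, mu_plus, s_minus, a_minus, mu_minus, ber_d=1e-12):
    """Eq. (14.108): TJ at a desired BER from the two tail Gaussians of a
    general mixture model (sigma, alpha, mu for each tail)."""
    return (s_plus * Phi_inv(1 - 2 * ber_d / a_plus)
            + s_minus * Phi_inv(1 - 2 * ber_d / a_minus)
            + mu_plus - mu_minus)

# Tail parameters from Example 14.17 (units of UI)
tj = tj_mixture(0.0185, 0.41, 0.0398, 0.0224, 0.39, -0.018)
print(tj)   # ~0.337 UI
```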
Looking back at the dual-Dirac jitter decomposition method of Section 14.6.3, these three definitions of DJ, RJ, and TJ are consistent with the definitions given there when μtail+ = −μtail−, σtail+ = σtail−, and αtail+ = αtail− = 0.5.
Finally, the random component of the jitter has a PDF that can be described by

pdfRJ(t) = (1/(σRJ √(2π))) × e^(−t²/(2σRJ²))    (14.109)
The remaining task at hand is to determine the model parameters μtail−, μtail+, σtail−, σtail+ and the weighting coefficients αtail− and αtail+. While one may be tempted to go off and measure six different BER values at six different sampling times and attempt to solve for the six unknowns using some form of nonlinear optimization, much care is needed in selecting the sampling instants. It is imperative that samples are derived only from the outer tails of the total jitter distribution to avoid any contributions from the other Gaussians. This generally means sampling at very low BER levels, driving up test time. This approach is the basis of the Tail-Fit™ algorithm of Wavecrest™. At the heart of this method is the chi-square (χ²) curve-fitting algorithm. Interested readers can refer to reference 14.
Exercises
Figure 14.33. (a) An asymmetrical zero-crossing distribution around two consecutive bit transitions; shaded areas indicate bit error probability. (b) Individual Gaussian distributions, N(μ1, σ1), N(μ2, σ2), and N(μ3, σ3), around each bit transition. (c) BER as a function of sampling instant (t = tTH).
Another approach,10 and one that tackles the general Gaussian mixture model, is the expectation-maximization (EM) algorithm introduced back in Section 4.4.1. This algorithm assumes that G Gaussians are present and solves for the entire set of model parameters, μ1, μ2, …, μG, σ1, σ2, …, σG, α1, α2, …, αG, using an iterative approach.11,12 The next two examples will help to illustrate the application of the EM algorithm to the general jitter decomposition problem.
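A compact sketch of 1-D mixture EM follows (standard library only; the three-Gaussian synthesis mirrors the parameters of Example 14.17 below, and the grid initialization, iteration count, and sample size are illustrative assumptions, not the book's MATLAB routine):

```python
import math
import random

def em_gmm_1d(data, k=3, iters=200):
    """Sketch of 1-D Gaussian-mixture EM: alternate an E-step (per-sample
    responsibilities) and an M-step (weight/mean/sigma re-estimation).
    EM guarantees the log-likelihood never decreases."""
    n = len(data)
    lo, hi = min(data), max(data)
    mu = [lo + (j + 0.5) * (hi - lo) / k for j in range(k)]  # spread-out guesses
    sd = [(hi - lo) / (2 * k)] * k
    w = [1.0 / k] * k

    def loglik():
        return sum(math.log(sum(w[j] * math.exp(-(x - mu[j]) ** 2 / (2 * sd[j] ** 2))
                                / (sd[j] * math.sqrt(2 * math.pi)) for j in range(k)))
                   for x in data)

    ll_start = loglik()
    for _ in range(iters):
        # E-step: responsibility of component j for each sample
        resp = []
        for x in data:
            p = [w[j] * math.exp(-(x - mu[j]) ** 2 / (2 * sd[j] ** 2)) / sd[j]
                 for j in range(k)]
            s = sum(p)
            resp.append([pj / s for pj in p])
        # M-step: re-estimate weights, means, and standard deviations
        for j in range(k):
            nj = sum(r[j] for r in resp)
            w[j] = nj / n
            mu[j] = sum(r[j] * x for r, x in zip(resp, data)) / nj
            sd[j] = math.sqrt(sum(r[j] * (x - mu[j]) ** 2
                                  for r, x in zip(resp, data)) / nj) or 1e-6
    return w, mu, sd, ll_start, loglik()

# Synthesize samples from the three-Gaussian mixture of Example 14.17 (in UI)
random.seed(7)
data = []
for _ in range(1000):
    u = random.random()
    if u < 0.4:
        data.append(random.gauss(-0.02, 0.022))
    elif u < 0.6:
        data.append(random.gauss(0.0, 0.005))
    else:
        data.append(random.gauss(0.04, 0.018))

w, mu, sd, ll0, ll1 = em_gmm_1d(data)
print(sorted(mu), ll1 - ll0)
```

Because the three components overlap heavily, the recovered parameters depend on the initialization; the monotone increase of the log-likelihood is the property worth checking.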
EXAMPLE 14.17
The jitter distribution at the receiver end of a communication channel is to be collected and
analyzed for its DJ, RJ, and TJ corresponding to a BER level of 10-12. In order to extract the model
parameters, use the EM algorithm and fit three Gaussian functions to the jitter data set. In order
to generate the data set, create a routine that synthesizes a Gaussian mixture consisting of three
Gaussians with means –0.02 UI, 0 UI, and 0.04 UI and standard deviations of 0.022 UI, 0.005 UI,
and 0.018 UI and weighting factors 0.4, 0.2, and 0.4, respectively. Provide a plot of the jitter histogram. Also, how do the extracted model parameters compare with the synthesized sample-set model parameters?
Solution:
A short routine was written in MATLAB that synthesizes a Gaussian mixture with the above men-
tioned model parameters. The routine was run and 1000 samples were collected and organized
into a histogram as shown below:
The PDF from which the samples were taken from is also superimposed on the above plot. We
note that the general shape of this histogram reveals one Gaussian with a peak at about 0.04 UI,
another with a peak at about 0, and the third may have a peak at about -0.03 UI. We estimate the
standard deviation of the rightmost Gaussian at about 0.05 UI, the center one with about 0.01
UI, and the leftmost Gaussian with 0.05 UI. We will use these parameters as estimates for the
Gaussian mixture curve-fit algorithm. We will also assume that each Gaussian component is equally important, and hence we assign a weight of 1/3 to each. The better our estimate, the faster the algorithm will converge. A second MATLAB routine was created based on the EM algorithm of Section 4.4.1. Using the above synthesized data, the Gaussian mixture curve-fitting algorithm was run. After 1000 iterations aiming for a root-mean-square error of less than 10−10, the following model parameters were found:
Substituting these parameters back into Eq. (14.101) and computing the value of this PDF over
a range of –0.1 UI to 0.1 UI, we obtain the dashed line shown in the plot below. Also shown in
the plot is a solid line that corresponds to the PDF from which the data set was derived. This is
the same line as in the figure above superimposed over the histogram of the data set. It is quite evident from this plot that the estimated PDF corresponds quite closely to the original one. We also include in the plot below a dot-dashed line corresponding to the PDF of our initial guess. In this case, the initial and final PDFs are clearly quite different from one another; this highlights the effectiveness of the EM algorithm.
We can write an expression for the BER as a function of the normalized sampling instant UITH by substituting the extracted parameters into Eq. (14.102). After some further investigation, we recognize that the negative tail of the jitter distribution is dominated by N(μ1, σ1) and the positive tail region is dominated by N(μ3, σ3). We can therefore write the BER in the tail regions as

BER(tTH) ≈ (0.41/2) × [ 1 − Φ( (UITH − 0.0398)/0.0185 ) ],   0 ≤ UITH < 1/2

BER(tTH) ≈ (0.39/2) × Φ( (UITH − 1 + 0.018)/0.0224 ),   1/2 < UITH ≤ 1
Finally, we compute the test metrics DJ, RJ, and TJ(10−12) from Eqs. (14.104), (14.105), and (14.108) as follows:

DJ = 0.0398 − (−0.018) = 0.0578 UI

RJ = (0.0185 + 0.0224)/2 ≈ 0.020 UI

TJ = 0.0185 × Φ−1[1 − (2 × 10−12)/0.41] + 0.0224 × Φ−1[1 − (2 × 10−12)/0.39] + 0.0398 − (−0.018)

TJ = 0.0185 × Φ−1[1 − 4.89 × 10−12] + 0.0224 × Φ−1[1 − 5.05 × 10−12] + 0.0398 − (−0.018)

Substituting Φ−1[1 − 4.89 × 10−12] = 6.8096 and Φ−1[1 − 5.05 × 10−12] = 6.8051 as found by a built-in routine in MATLAB, we write TJ = 0.337 UI.
EXAMPLE 14.18
The jitter distribution at the receiver end of a communication channel consisting of 10,000 sam-
ples was modeled by a four-term Gaussian mixture using the EM algorithm, and the following
model parameters were found:
Assuming that the unit interval is 1 ns and that the data eye sampling instant is half a UI, what is
the BER associated with this system? Does the system behave with a BER less than 10-12?
Solution:
Evaluating Eq. (14.102) at UITH = 0.5 UI and T = 1 UI, and substituting the above Gaussian mixture parameters in normalized form, we find that BER(0.5 UI) is on the order of 10−10. Therefore, we can expect that the system theoretically will not operate with a BER less than 10−12.
Exercise
The RMS value of the time jitter sequence J[n], denoted σJ, is given by

σJ = √[ (1/N) × Σ(n=1..N) ( J[n] − (1/N) Σ(n=1..N) J[n] )² ]    (14.110)

and the peak-to-peak PJ value is estimated as

PJ ≅ 2√2 × σJ    (14.111)
The user of Eq. (14.111) should be aware that this formula is only approximate, because it has no
information about the underlying nature of the periodic jitter and how separate sinusoidal signals
combine.
There is another definition of period jitter that one finds in the literature. Digital system
designers often use this definition when describing jitter. We briefly spoke about it in Section 14.2.
Specifically, period jitter is defined as the first-order time difference between adjacent bit transi-
tions given by the equation
J PER [n ]= J [n ]− J [n − 1] (14.112)
The RMS value of this sequence (denoted by σJPER) would have a similar form to Eq. (14.110)
above, except σJPER would replace σJ. In the frequency domain, the period jitter sequence is related
to the time jitter sequence by taking the z-transform of Eq. (14.112) and substituting for physical frequencies, that is, z = e^(j2πfT), leading us to write

JPER(e^(j2πfT)) = (1 − z^(−1)) J(z) |z=e^(j2πfT) = (1 − e^(−j2πfT)) J(e^(j2πfT))    (14.113)
Clearly, the frequency description of the period jitter sequence JPER is proportional to the time jitter sequence J in the frequency domain. However, we note that the proportionality constant is also a function of the test frequency. This means that the PJ metric computed from the time jitter sequence will be different from that obtained using the period jitter sequence. This can be a source of confusion for many. To avoid this conflict, for the remainder of this text, we shall rely exclusively on the time jitter sequence to establish the PJ quantity. Example 14.19 will help to demonstrate this effect.
A more refined approach for identifying the PJ component, one that makes no assumption about the strength of the PJ component relative to the RJ component, is to perform an FFT analysis of the jitter time series J. If coherency is difficult to achieve, a Hann or Blackman window may be necessary to reduce frequency leakage effects. If we denote the jitter frequency transform with vector X_J (using our usual notation) and identify the FFT bin containing the fundamental of the periodic jitter as bin M_PJ and the corresponding number of harmonics H_PJ, then we can determine the RMS value of the PJ component from the following expression
σ_PJ = √( Σ_{p=1..H_PJ} c²_{p×M_PJ, RMS} )   (14.114)
where

c_{k,RMS} = { |X_J[k]|,      k = 0
              √2 |X_J[k]|,   k = 1, …, N/2 − 1
              |X_J[k]|,      k = N/2 }
If PJ consists of a single tone, then using Eq. (14.111) with σ_PJ replacing σ_J, we can estimate the peak-to-peak value of the PJ component. If PJ consists of more than one tone, then we must account for the phase relationship of each tone in the calculation of its peak-to-peak value. One approach is to perform an inverse FFT (IFFT) on only the PJ components (removing all others) and extract the peak-to-peak value directly from the time-domain signal. Mathematically, we write

PJ = max{ IFFT( J_PJ,only ) } − min{ IFFT( J_PJ,only ) }   (14.115)
Correspondingly, we can also compute the samples of the time sequence IFFT(J_PJ,only) to construct a histogram of the PJ component. A mathematical formula can then be used to model the PDF for PJ. Because PJ is made up of harmonically related sinusoidal components, the PDF will involve functions of the general form 1/( π √( A² − t² ) ), where A is the amplitude of a single sine wave. In general, this is difficult to do, and one generally defers to the single-tone case below.
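The spectral PJ computation just described can be condensed into a few lines of NumPy. This is an illustrative sketch rather than the text's own routine; it assumes coherent sampling (no window needed) and a known fundamental bin M_PJ:

```python
import numpy as np

def pj_rms(J, M_PJ, H_PJ=1):
    """RMS periodic jitter from bins M_PJ, 2*M_PJ, ... (sketch of Eq. 14.114)."""
    N = len(J)
    X = np.fft.fft(J) / N                      # two-sided coefficients X_J[k]
    def c_rms(k):                              # single-sided RMS coefficient
        return abs(X[k]) if k in (0, N // 2) else np.sqrt(2) * abs(X[k])
    return np.sqrt(sum(c_rms(p * M_PJ) ** 2 for p in range(1, H_PJ + 1)))

# A single 0.001-amplitude tone completing 11 cycles in 1024 coherent samples
n = np.arange(1024)
J = 0.001 * np.sin(2 * np.pi * 11 / 1024 * n)
sigma_pj = pj_rms(J, M_PJ=11)                  # expect 0.001/sqrt(2)
```

For a single tone, the result agrees with the familiar RMS value of a sinusoid, its amplitude divided by √2.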
As a special case of periodic jitter, if H_PJ is unity, so that only one tone is present, then PJ is sometimes referred to as sinusoidal jitter and is denoted with the acronym SJ. Mathematically, we define SJ as

SJ = 2√2 × σ_PJ   (14.116)
646 AN INTRODUCTION TO MIXED-SIGNAL IC TEST AND MEASUREMENT
The corresponding PDF follows as

pdf_PJ(t) = { 1 / ( π √( (SJ/2)² − t² ) ),   −SJ/2 ≤ t ≤ SJ/2
              0,                              otherwise }   (14.117)
SJ plays an important role in jitter tolerance and transfer function tests of the next section.
EXAMPLE 14.19
A time jitter sequence J[n] can be described by the following discrete-time equation:
J[n] = 0.001 sin( 2π (11/1024) n )
Here the amplitude of the phase-modulated sequence is A_J = 0.001, and the number of cycles completed in N = 1024 sample points is M_J = 11. Compute the period jitter sequence and compare its amplitude to the time jitter sequence amplitude.
Solution:
The period jitter sequence is computed from the time jitter sequence according to

J_PER[n] = J[n] − J[n − 1] = 0.001 sin( 2π (11/1024) n ) − 0.001 sin( 2π (11/1024)(n − 1) )
Running through a short for loop, we obtain a plot of the period jitter together with the time jitter
sequence below:
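The short loop referred to above can be written, for instance, in NumPy (the plot itself is omitted here):

```python
import numpy as np

# Build the time jitter sequence and its first difference (Eq. 14.112)
N, MJ, AJ = 1024, 11, 0.001
n = np.arange(N)
J = AJ * np.sin(2 * np.pi * MJ / N * n)
JPER = J[1:] - J[:-1]                  # J_PER[n] = J[n] - J[n-1]

# The amplitude ratio follows |1 - exp(-j*2*pi*MJ/N)| = 2*sin(pi*MJ/N)
ratio = JPER.max() / J.max()
```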
The plot clearly reveals that the amplitudes of the two sequences are quite different. The analysis shows that the amplitude of the time jitter sequence is 10⁻³ and the amplitude of the period jitter sequence is 6.7 × 10⁻⁵. The ratio of these two amplitudes is equal to

| Ĵ_PER / Ĵ | = | 1 − e^{−j2π M_J/N} | = 0.067,   with M_J = 11 and N = 1024.
EXAMPLE 14.20
The SJ component of a jitter distribution was found to be 0.1 UI for a receiver operating at a bit rate of 1 Gb/s. Estimate the shape of the histogram that one would expect from this jitter component alone.
Solution:
Denormalizing the SJ component, we write its peak-to-peak value as 0.1/10⁹ or 10⁻¹⁰ s. Substituting this result back into Eq. (14.117), we write the PDF for this jitter component between −0.05 UI ≤ t ≤ 0.05 UI (i.e., |t| ≤ 5 × 10⁻¹¹ s) as
pdf_SJ(t) = 1 / ( π √( (10⁻¹⁰/2)² − t² ) )
and everywhere else as 0. A plot of this distribution would resemble the general shape of any particular histogram captured from this random distribution.
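As a check on the shape of Eq. (14.117), one can histogram uniformly phase-sampled points of the sinusoid and compare against the arcsine density; the sketch below assumes the 10⁻¹⁰-s peak-to-peak value found above:

```python
import numpy as np

SJ = 1e-10                                   # peak-to-peak SJ in seconds
phase = 2 * np.pi * (np.arange(100_000) + 0.5) / 100_000
t = (SJ / 2) * np.sin(phase)                 # jitter samples, |t| <= SJ/2

# Histogram versus the arcsine PDF of Eq. (14.117), away from the edges
hist, edges = np.histogram(t, bins=50, range=(-SJ / 2, SJ / 2), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
pdf = 1.0 / (np.pi * np.sqrt((SJ / 2) ** 2 - centers ** 2))
mid_err = np.max(np.abs(hist[15:35] - pdf[15:35]) / pdf[15:35])
```

The histogram matches the analytical PDF closely away from the two edge singularities, and the sample RMS equals the expected SJ/(2√2).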
Exercises
In practice, a PRBS pattern is used to excite a system by repeating the pattern throughout the duration of the test. Due to the repetition of this pattern, additional periodic jitter is produced that is harmonically related to this repetition frequency. For a P-bit PRBS pattern with bit duration T, the pattern repetition period is P × T, so the fundamental frequency of the DDJ component will appear at 1/(P × T) Hz. Assuming the harmonics of PJ and DDJ are different, we can compute the RMS of the DDJ components using the exact same spectrum approach as for PJ, except that we drive the channel with a P-bit PRBS pattern.
Assuming an N-point period jitter vector J, the fundamental component of DDJ will appear in bin M_DDJ = N/P, whence we write the RMS value of the DDJ component as

σ_DDJ = √( Σ_{p=1..H_DDJ} c²_{p×M_DDJ, RMS} )   (14.118)
where we assume that H_DDJ represents the number of harmonics associated with the DDJ jitter component. Once again, it is best to isolate the DDJ components so that an inverse FFT can be run and the peak-to-peak value extracted using the following expression:

DDJ = max{ IFFT( J_DDJ,only ) } − min{ IFFT( J_DDJ,only ) }   (14.119)

Correspondingly, we can also use the samples of the time sequence IFFT(J_DDJ,only) to construct the histogram of the DDJ component. In order to determine the mathematical form of this jitter distribution, we need to further decompose DDJ into ISI and DCD components.
In the case of DCD, the input is driven by a clock-like data pattern so that the effects of ISI are
eliminated. The time difference between the zero crossings for a 0-to-1 bit transition and a refer-
ence clock are collected from which the average time offset value tDCD,01,av is found. Likewise, this
procedure is repeated for the 1–0 bit transitions and the tDCD,10,av value is found. These two results
are then combined to form the PDF for the DCD using the dual-Dirac function, accounting for the
asymmetry about the ideal zero crossing point as illustrated in Figure 14.19a, as follows
pdf_DCD = ½ δ( t + t_DCD,10,av / 2 ) + ½ δ( t − t_DCD,01,av / 2 )   (14.120)
When only the DCD metric is provided, we can approximate the PDF of the DCD as a dual-Dirac
function as follows:
pdf_DCD ≈ ½ δ( t + DCD/2 ) + ½ δ( t − DCD/2 )   (14.122)
To identify the ISI distribution, a test very similar to DCD is performed but the input is driven
with a specific data pattern other than a clock-like pattern. The zero crossings for the 0–1 bit
transitions are collected and the extremes are identified, say tISI,01,min and tISI,01,max. The process is
repeated for the 1–0 zero crossings and, again, the extremes are identified as tISI,10,min and tISI,10,max.
The ISI test metric is then defined as the average of these two peak-to-peak values as follows
ISI ≅ [ ( t_ISI,01,max − t_ISI,01,min ) + ( t_ISI,10,max − t_ISI,10,min ) ] / 2   (14.123)
A first-order estimate of the PDF of the ISI can be written as another dual-Dirac function as
follows:
pdf_ISI ≈ ½ δ( t + ISI/2 ) + ½ δ( t − ISI/2 )   (14.124)
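The dual-Dirac models of Eqs. (14.122) and (14.124) combine by convolution. The sketch below, using illustrative values ISI = 14 ps and DCD = 2 ps on a 1-ps grid, shows that their convolution produces four equal impulses at ±(ISI + DCD)/2 and ±(ISI − DCD)/2:

```python
import numpy as np

ISI, DCD = 14, 2                           # illustrative peak-to-peak values, ps
pdf_isi = np.zeros(41)                     # grid covers -20 ... +20 ps
pdf_dcd = np.zeros(41)
pdf_isi[20 - ISI // 2] = pdf_isi[20 + ISI // 2] = 0.5
pdf_dcd[20 - DCD // 2] = pdf_dcd[20 + DCD // 2] = 0.5

pdf_ddj = np.convolve(pdf_isi, pdf_dcd)    # 81 points, index 40 is t = 0
peaks = np.flatnonzero(pdf_ddj) - 40       # impulse locations in ps
```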
As ISI and DCD convolve together to form the DDJ distribution, that is,

pdf_DDJ = pdf_ISI ⊗ pdf_DCD   (14.125)
σ_BUJ = √( σ_J² − σ_PJ² − σ_DDJ² − σ_RJ² )   (14.128)
where we assume that the powers of the individual components of BUJ add, that is, σ²_BUJ = σ²_BUJ,1 + σ²_BUJ,2. We do not attempt to list the PDF for BUJ, because we need more information about its underlying composition.

Figure 14.34. Power spectral density plot of a period jitter sequence with input: (a) alternating 1's and 0's pattern, and (b) PRBS data sequence, in which an additional tone and noise level appear.
[Figure 14.34 plots the jitter spectrum S_J(f) versus FFT bin: in (a), a PJ tone σ²_PJ = S_J(M_PJ) at bin M_PJ rises above the RJ noise floor σ²_RJ; in (b), additional DDJ harmonics S_J(M_DDJ), S_J(2M_DDJ), and S_J(3M_DDJ) appear at bins M_DDJ, 2M_DDJ, and 3M_DDJ, together with a BUJ tone at bin M_BUJ and a noise floor raised to σ²_RJ + σ²_BUJ,2.]
The following example will help to illustrate the deterministic jitter decomposition method
described above.
EXAMPLE 14.21
A jitter sequence consisting of 1024 samples was captured from a serial I/O channel driven by two separate data patterns. In one case, an alternating sequence of 1's and 0's was used; in the other, a PRBS pattern was used. An FFT analysis performed on the two separate sets of jitter data, as well as on an ideal simulation of the PRBS sequence, resulted in the information below. Determine the test metrics: PJ, DDJ, BUJ.
Solution:
Using the spectral data from the alternating 1010 data pattern, we find from the FFT data set
above that a tone is present in bin 201 with an RMS value of
PJ = SJ = 0.215 UI
We also notice that from the spectral data for the clock-like input, the RJ noise component can
be estimated to be
σ_RJ = √( Σ_{k=0..512} c²_{k,RMS} − c²_{0,RMS} − c²_{201,RMS} ) ≈ 3.18 × 10⁻³ UI
Next, as is evident from the middle column of data, through ideal simulation we expect to see harmonics related to the PRBS data pattern in bins 55, 110, and 165. Tones falling in any other bins are deemed unrelated to DDJ. Tonal components, other than those related to PJ, would be considered part of BUJ. Using the data from the rightmost column, we find that the RMS value of DDJ is
σ_DDJ = √( c²_{55,RMS} + c²_{110,RMS} + c²_{165,RMS} ) ≈ 50.6 × 10⁻³ UI
To obtain the peak-to-peak value of DDJ, we notch out all bins other than those related to DDJ and perform an inverse FFT. This is achieved by setting bins 0–512 to zero except bins 55, 110, and 165 (as well as their images). Next we scale these bins by the factor N (= 1024) and perform an inverse FFT, according to

d_DDJ = 1024 × IFFT{ [0, …, 0, (0.02 − j0.03), 0, …, 0, (0.001 + j0.002), 0, …, 0, (0.005 − j0.006), 0, …, 0, (0.005 + j0.006), 0, …, 0, (0.001 − j0.002), 0, …, 0, (0.02 + j0.03), 0, …, 0] }
DDJ = 0.154 UI
What remains now is the identification of the BUJ components. From the right-most column
we recognize that a tone is present in bin 193 having an RMS value of
We also recognize that the noise floor has increased when the data pattern changed. We attribute this increase to some bounded but uncorrelated jitter source, and we quantify its value, σ_BUJ,2, by taking the power difference between the two noise floors associated with the two different data patterns. Combining this term with the tonal component σ_BUJ,1 found above, we obtain

σ_BUJ = √( σ²_BUJ,1 + σ²_BUJ,2 ) = √( (31.6 × 10⁻³)² + (5.4 × 10⁻³)² ) ≈ 32.1 × 10⁻³ UI
To estimate the peak-to-peak value of the BUJ component, we note that the sinusoidal compo-
nent is about four times larger in amplitude, so as a first estimate we obtain
Because this term does not include the correlated and bounded noise components evident in
the noise floor, it underestimates the level of BUJ. A better estimate would be to repeat the test
several times and average out the random noise leaving only the correlated signals.
EXAMPLE 14.22
A serial I/O channel was characterized for its RJ and DJ components, resulting in the following table of measurements. Estimate the shape of the histogram of the total jitter distribution that would result. From this distribution, relate the peak-to-peak levels to the underlying test metrics found below. Also, plot the corresponding PDF of the zero-crossing distribution across two consecutive bit transitions, assuming a bit duration of 100 ps.
Jitter Metrics

                 DJ
RJ (rms ps)   SJ (pp ps)   ISI (pp ps)   DCD (pp ps)
2             25           14            2
Solution:
Given the above metrics, we can represent each jitter distribution mathematically and picto-
rially as follows:
Here we see a multimodal distribution with four peaks, located at approximately −18 ps, −4 ps, 4 ps, and 18 ps. If we treat the sinusoidal distribution as a dual-Dirac density, then together with the other two dual-Dirac distributions for ISI and DCD, we end up with eight impulses located at ±12.5 ± 7 ± 1 ps, that is, at ±4.5, ±6.5, ±18.5, and ±20.5 ps. While these numbers are close to the peak locations, they do not coincide exactly, owing to the many interactions with the Gaussian distribution. Care must be exercised when attempting to de-correlate jitter behavior from the total jitter distribution.
Finally, the PDF of the jitter distribution across two consecutive bit transitions separated by 100 ps is found by convolving the PDF of the total jitter with a dual-Dirac distribution with centers located at 0 and 100 ps, that is,

pdf_ZC = pdf_TJ ⊗ [ ½ δ(t) + ½ δ( t − 1 × 10⁻¹⁰ ) ]
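A numerical version of this construction (a sketch, with the four component PDFs discretized on a 0.1-ps grid and convolved in sequence) reproduces the two-hump-per-side structure described above:

```python
import numpy as np

dt = 0.1
t = np.arange(-40, 40 + dt / 2, dt)                 # time axis, ps

gauss = np.exp(-t ** 2 / (2 * 2.0 ** 2))            # RJ: Gaussian, sigma = 2 ps
gauss /= gauss.sum() * dt

def dual_dirac(pp):                                  # ISI or DCD dual-Dirac model
    p = np.zeros_like(t)
    p[np.argmin(np.abs(t + pp / 2))] = 0.5 / dt
    p[np.argmin(np.abs(t - pp / 2))] = 0.5 / dt
    return p

a = 25 / 2                                           # SJ: arcsine density, pp = 25 ps
sj = np.where(np.abs(t) < a - dt,
              1 / (np.pi * np.sqrt(np.maximum(a ** 2 - t ** 2, 1e-9))), 0.0)
sj /= sj.sum() * dt

pdf = gauss                                          # convolve all four components
for comp in (sj, dual_dirac(14), dual_dirac(2)):
    pdf = np.convolve(pdf, comp, mode="same") * dt
```

The resulting density integrates to one and shows the outer peaks near ±18 ps and the inner peaks near ±5 ps, separated by a valley.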
The resulting zero-crossing PDF is shown plotted below:
The jitter transfer gain at a given jitter frequency ω is defined as the ratio of the output to the input sinusoidal jitter magnitudes,

G_J(ω) = SJ_OUT / SJ_IN   (14.129)

or, expressed in decibels,

G_J,dB(ω) = 20 log₁₀ G_J(ω)   (14.130)
Repeating this measurement for a range of input frequencies and comparing the jitter gain to a jitter transfer compliance mask, such as that shown in Figure 14.35, we would have a complete jitter transfer test. Any data point that falls inside the hashed region would result in a failed test.
Figure 14.35. An example jitter transfer compliance mask with measured data superimposed. This
particular data set passes the test as all the data points fall inside the acceptable region (as shown).
[Axes: jitter transfer G_J (dB) versus frequency (kHz), with the unacceptable region above the mask and the acceptable region below it.]
While we could measure both the input and output jitter level using the power spectral method
described above for computing PJ, a more efficient method is to use coherent signal generation
and sampling, in much the same way we did for the analog and sampled data channel measure-
ments using the AWG and digitizer. Let us explain this more fully below.
A clock-like data signal of frequency f_c can be written as

d(t) = sgn{ sin( 2π f_c t ) }   (14.131)

where sgn{·} represents the signum function. Now, if we augment the phase of the sinusoidal function by adding a phase term J(t) to represent the jitter signal as a function of time, we would write the digital signal as follows
d(t) = sgn{ sin[ 2π f_c t + J(t) ] }   (14.132)
Furthermore, if we assume the phase function is sinusoidal, with amplitude A_J and frequency f_J, that is,

J(t) = A_J sin( 2π f_J t )   (14.133)

then the digital signal becomes

d(t) = sgn{ sin[ 2π f_c t + A_J sin( 2π f_J t ) ] }   (14.134)
The units of J(t) are expressed in radians, but one often sees this jitter signal expressed in terms of seconds,

J(t) [seconds] = ( 1 / (2π f_c) ) × J(t) [radians]   (14.135)

or in unit intervals,

J(t) [UI] = ( 1 / 2π ) × J(t) [radians]   (14.136)
If we assume that this signal is sampled at sampling rate Fs, then we can write the sampled
form of Eq. (14.134) as
d[n] = sgn{ sin[ 2π (f_c/F_S) n + A_J sin( 2π (f_J/F_S) n ) ] }   (14.137)
If coherent sampling is employed, the clock and jitter frequencies are related to the sampling rate by

f_c / F_S = M_c / N   and   f_J / F_S = M_J / N   (14.138)

where N, M_c, and M_J are integers. In the usual way, N represents the number of samples obtained from d(t), and M_c and M_J represent the number of cycles the clock and jitter signal complete in N points, respectively. This allows us to rewrite Eq. (14.137) as
d[n] = sgn{ sin[ 2π (M_c/N) n + A_J sin( 2π (M_J/N) n ) ] }   (14.139)
Deriving N points from Eq. (14.139) and storing them in the source memory of the arbitrary waveform generator (AWG) would provide a means to create a sinusoidal jitter signal with amplitude A_J riding on the clock-like data signal. A simple test setup illustrating this approach is shown in Figure 14.36a. For very-high-frequency data rates, an AWG would not be available to synthesize such waveforms directly. Instead, the digitally synthesized jitter signal would phase modulate a reference clock signal using some phase modulation technique, such as the Armstrong phase modulator shown in Figure 14.36b. A third technique is the method of data edge placement shown in Figure 14.36c. This technique uses a digital-to-time converter (DTC) to place the data edges at specific points along the UI of the data eye according to an input digital code. The phase-lock loop removes high-frequency images, so it is essentially the reconstruction filter of this digital-to-time conversion process. A variant of this approach is one that makes use of a periodic sequence of bits together with a phase-lock loop to synthesize a sinusoidal jitter signal.14,15 The DTC is essentially realized with a set of digital codes.
Figure 14.36. Generating a phase modulated clock signal: (a) a direct digital synthesis method
involving an AWG, (b) an indirect method using the Armstrong phase modulator technique and a low-
frequency AWG, (c) edge-placement technique using a digital-to-time converter followed by a PLL.
EXAMPLE 14.23
A digital receiver operates at a clock rate of 100 MHz. A jitter transfer test needs to be performed
with a 10-kHz phase-modulated sinusoidal signal having an amplitude of 0.5 UI. Using an AWG
with a sampling rate of 1 GHz, write a short routine that generates the AWG samples.
Solution:
Our first task is to determine the number of samples and cycles used in the waveform synthesis. Substituting the given data into the coherency expressions, we write
M_c = (1/10) × N   and   M_J = (1/10⁵) × N
As M_c, M_J, and N are all integers, we immediately recognize that N must contain the factor 10⁵. This makes it impossible for N to be expressed solely as a power of two. This leaves us with three options: (1) alter the data rate f_c and/or the frequency of the modulating sine wave f_J such that N is a power of two, (2) work with a non-power-of-two vector length and use a window function together with the radix-2 FFT, or (3) work with a non-power-of-two vector length and assume that the jitter sequence consists of a single sine wave whose RMS value is computed directly from the time sequence. We will adopt one of the latter two approaches and select N = 3 × 10⁵.
Substituting N back into the above expressions, we obtain

M_c = 3 × 10⁴   and   M_J = 3
As far as the jitter amplitude is concerned, we want a 0.5-UI amplitude but we need to convert
between UI and radians. To do this, we use
AJ [radians] = 2π × AJ [UI] = π
Using Eq. (14.137), M_J, M_c, and N, together with A_J = π radians, the waveform synthesis routine can be written using a programming for loop as follows:

for n = 1 to 3 × 10⁵
    d[n] = sign{ sin[ 2π (1/10)(n − 1) + 3.1415 × sin( 2π (1/10⁵)(n − 1) ) ] }
end
Running this procedure results in the following plot of a short portion of the sequence, from n = 15,000 to 15,100, where the modulating signal is near its peak, and again from n = 50,000 to 50,100, where the modulating signal is near zero:
Clearly, the jitter sequence goes through a maximum phase shift of 3.14 radians or 0.5 UI as
expected.
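For reference, the same synthesis can be expressed in NumPy (a vectorized sketch of the routine above, with sign() playing the role of the comparator):

```python
import numpy as np

# 100-MHz clock sampled at 1 GHz (Mc/N = 1/10), modulated by a 10-kHz,
# pi-radian (0.5-UI) sinusoidal jitter tone (MJ/N = 1/1e5), N = 3e5 samples
N, Mc, MJ, AJ = 300_000, 30_000, 3, np.pi
n = np.arange(N)
d = np.sign(np.sin(2 * np.pi * Mc / N * n + AJ * np.sin(2 * np.pi * MJ / N * n)))

edges = np.count_nonzero(d[1:] != d[:-1])   # roughly 2 transitions per clock period
```

The edge count stays near the jitter-free value of 2N/10 transitions; the modulation moves the edge positions within each UI rather than changing their number.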
Exercises
where the discrete-time sequence d̂_OUT is related to the original time series through the discrete Hilbert transform18 (DHT) as follows
d̂_OUT[n] = DHT{ d_OUT } = (2/π) × { Σ_{k odd} d_OUT(k) / (n − k),    n even
                                     Σ_{k even} d_OUT(k) / (n − k),   n odd }   (14.141)
In general, the DHT defined above is computationally inefficient. Instead, one can work in the
frequency domain using the spectral coefficients of dOUT, that is,
D_OUT,analytic(k) = { D_OUT(k),     k = 0
                      2 D_OUT(k),   k = 1, 2, …, N/2 − 1
                      0,            k = N/2, …, N − 1 }   (14.143)
Through the application of the inverse FFT, we obtain the N-point Hilbert transform of the N-point
sequence dOUT as follows
d̂_OUT[n] = Im{ IFFT{ D_OUT,analytic } },   n = 0, …, N − 1   (14.144)

Of course, the real part of IFFT{D_OUT,analytic} is the original data sequence d_OUT.
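Equations (14.143) and (14.144) translate directly into a few lines of NumPy. As a sanity check, the Hilbert transform of a coherently sampled cosine is the corresponding sine (this sketch follows the bin assignments printed above, leaving bin N/2 at zero):

```python
import numpy as np

def hilbert_fft(x):
    """Imaginary part of the analytic signal built per Eq. (14.143)."""
    N = len(x)
    X = np.fft.fft(x)
    Xa = np.zeros(N, dtype=complex)
    Xa[0] = X[0]                      # k = 0 kept as-is
    Xa[1:N // 2] = 2 * X[1:N // 2]    # k = 1 .. N/2-1 doubled, rest zeroed
    return np.fft.ifft(Xa).imag       # real part reproduces x itself

n = np.arange(256)
x = np.cos(2 * np.pi * 5 * n / 256)
xh = hilbert_fft(x)                   # expect sin(2*pi*5*n/256)
```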
The discrete-time total instantaneous phase is computed from the complex signal using the following recursive equation:
θ_i[n] = θ_i[n − 1] + tan⁻¹( d̂_OUT[n] / d_OUT[n] ) − tan⁻¹( d̂_OUT[n − 1] / d_OUT[n − 1] )   (14.145)
Because this phase term includes both the phase of the carrier and the jitter-injected signal, we
need to determine the instantaneous phase of the carrier and subtract it from Eq. (14.145) to obtain
the effect of the phase-modulated jitter signal alone. To begin, let us recognize that the sequence
of phase samples corresponding to the reference signal varies proportionally with the sample time
[argument term of Eq. (14.139) when AJ is zero], according to
θ_REF[n] = 2π (M_c/N) n   (14.146)
or in recursive form as
θ_REF[n] = θ_REF[n − 1] + 2π (M_c/N)   (14.147)
The phase-modulated jitter sequence would then be recovered by subtracting Eq. (14.146) from
Eq. (14.145) to obtain
J[n] = θ_i[n] − θ_REF[n]   (14.148)
or simply as
J[n] = θ_i[n] − 2π (M_c/N) n   (14.149)
Note that great care must be exercised when using the inverse-tangent operation associated with Eq. (14.145). Normally the built-in inverse-tangent operation of a tester or computer routine, such as MATLAB or Excel, limits its range to two quadrants of the complex plane, that is, ±π/2.
Moreover, extension to four-quadrant operation does not solve the problem completely. We will
address this issue shortly.
While the reference phase term is known at the AWG output, it may experience phase shift
through the digital channel, so one should not rely on the generated signal properties alone.
Instead, it is always best to extract the reference signal phase behavior from the measured data at
the receiver end. There are at least two ways in which to do this. The first method is to fit a straight
line through the instantaneous phase signal θi[n] using linear regression analysis (similar to the
best-fit line method used in DAC/ADC testing), such as
θ REF [n ]= λ n + β (14.150)
Subsequently, the instantaneous excess phase sequence would then be found from Eq. (14.148).
A second approach is to work with the frequency information associated with the captured digital sequence d_OUT. Let us begin by taking the FFT of d_OUT and denote the result as D_OUT, that is,

D_OUT = FFT{ d_OUT }   (14.151)

Next, set all bins to zero except bins M_c and N − M_c and perform an inverse FFT according to
d_REF = IFFT{ [0, …, 0, D_OUT(M_c), 0, …, 0, D_OUT(N − M_c), 0, …, 0] }   (14.152)
Subsequently, perform a discrete Hilbert transform on d_REF and obtain the Hilbert transform of the reference signal as follows:

d̂_REF = Im{ IFFT{ [0, …, 0, 2 D_OUT(M_c), 0, …, 0] } }   (14.153)
The instantaneous phase behavior of the reference signal is then found using the following recur-
sive equation:
θ_REF[n] = θ_REF[n − 1] + tan⁻¹( d̂_REF[n] / d_REF[n] ) − tan⁻¹( d̂_REF[n − 1] / d_REF[n − 1] )   (14.154)
The analytic signal representation for the reference signal in the time domain would then be
described using the inverse FFT as follows:
d_REF,analytic = d_REF + j d̂_REF = IFFT{ [0, …, 0, 2 D_OUT(M_c), 0, …, 0] }   (14.155)
Similarly, the analytic signal representation for the d_OUT sequence is found from

d_OUT,analytic = d_OUT + j d̂_OUT = IFFT{ D_OUT,analytic }   (14.156)
Before we move on to the calculation of jitter transfer gain, we need to close off the issue of
unwrapping the phase function of the various complex signals. At the heart of this difficulty is the
mathematical behavior of the inverse-tangent operation. While the trigonometric tangent maps
one value to another, the inverse tangent operation maps one value to many values separated by
multiples of ±2π. This creates discontinuities in the computed phase behavior of the complex sig-
nal and renders the analytic signal analysis method useless.
To get around the computational difficulties of unwrapping a phase function, we must keep track of the phase changes as a function of the sampling instant. In particular, as we move from either the third or fourth quadrant of the complex plane into the first or second quadrant, we must add 2π to the instantaneous phase function. Here we are assuming that a step change in phase is never greater than π, a condition that is met when the sampling process respects Shannon's sampling constraint (i.e., no fewer than 2 points per period of the highest frequency component).
Following this process, assuming we have an analytic signal defined as z[n] = x[n] + jx̂[n], the instantaneous unwrapped phase function would be written as
θ_i[n] = { θ_i[n − 1] + 2π + tan⁻¹_4q( x̂[n]/x[n] ) − tan⁻¹_4q( x̂[n − 1]/x[n − 1] ),
               if { x̂[n − 1] < 0, x[n] > 0, x̂[n] > 0 } or { x[n − 1] > 0, x̂[n − 1] < 0, x̂[n] > 0 }

           θ_i[n − 1] + tan⁻¹_4q( x̂[n]/x[n] ) − tan⁻¹_4q( x̂[n − 1]/x[n − 1] ),   otherwise }   (14.157)
where the four-quadrant inverse-tangent function tan⁻¹_4q(·) is defined as follows:

tan⁻¹_4q( y/x ) = { tan⁻¹( y/x ),             x ≥ 0, y ≥ 0
                    π − tan⁻¹( y/(−x) ),      x < 0, y ≥ 0
                    π + tan⁻¹( (−y)/(−x) ),   x < 0, y < 0
                    2π − tan⁻¹( (−y)/x ),     x ≥ 0, y < 0 }   (14.158)
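A direct implementation of Eqs. (14.157) and (14.158) is sketched below; the unwrap test walks a constant 0.3-rad/sample phasor and adds 2π whenever the imaginary part swings from negative to positive (the quadrant-crossing condition above):

```python
import math

def tan4q(y, x):
    """Four-quadrant inverse tangent per Eq. (14.158), range [0, 2*pi)."""
    if x > 0 and y >= 0:
        return math.atan(y / x)
    if x == 0:                                   # guard the x = 0 boundary
        return math.pi / 2 if y >= 0 else 3 * math.pi / 2
    if x < 0 and y >= 0:
        return math.pi - math.atan(y / -x)
    if x < 0 and y < 0:
        return math.pi + math.atan(y / x)        # (-y)/(-x) = y/x
    return 2 * math.pi - math.atan(-y / x)       # x > 0, y < 0

# Unwrapped phase of z[n] = exp(j*0.3*n) following the rule of Eq. (14.157)
theta = [tan4q(math.sin(0.0), math.cos(0.0))]
for k in range(1, 50):
    y0, x0 = math.sin(0.3 * (k - 1)), math.cos(0.3 * (k - 1))
    y1, x1 = math.sin(0.3 * k), math.cos(0.3 * k)
    step = tan4q(y1, x1) - tan4q(y0, x0)
    if y0 < 0 and y1 >= 0:                       # crossed back into Q1/Q2: add 2*pi
        step += 2 * math.pi
    theta.append(theta[-1] + step)
```

The accumulated phase grows monotonically at 0.3 rad per sample, free of the 2π discontinuities that the raw inverse tangent would produce.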
The following example will help to illustrate this jitter extraction technique on a multitone phase
modulated signal.
EXAMPLE 14.24
A 4096-point data set is collected from the following two-tone, unity-amplitude, phase-modulated jitter sequence:

d_OUT[n] = sin[ 2π (1013/4096)(n − 1) + 1 × sin( 2π (57/4096)(n − 1) ) + 1 × sin( 2π (23/4096)(n − 1) ) ]
Extract the jitter sequence using the analytic signal approach and compare this sequence to the
ideal result.
Solution:
The first step of the analytic signal extraction method is to obtain the FFT of the sampled sequence; let us denote the result as D_OUT. A plot of the magnitude response corresponding to the spectral coefficients of this data sequence is shown below over the Nyquist interval and its first image band:
Here we see a very rich spectral plot with most of the power of the signal concentrated around
the carrier located in bin 1013. We also notice that the magnitude response is symmetrical about
the 2048 bin. Next, we perform the Hilbert transform of the data sequence in the frequency do-
main, resulting in the magnitude response shown below:
Here we see the magnitude response in the Nyquist interval increase by 6 dB and the level of the first image reduced to the noise floor of the FFT. We can also extract the reference signal from the above spectral plot and obtain the spectral coefficients of the Hilbert transform of the reference signal as follows:
The inverse FFT of the Hilbert transform of DOUT results in the complex sequence whose real and
imaginary parts are plotted below for n between 0 and 150:
The reader can see two identical signals, one shifted slightly with respect to the other. In fact,
the phase offset is exactly π/2, as defined by the Hilbert transform. Using the analytic signal
representation for dOUT and the reference, we extract the following two instantaneous phase
sequences:
Here both phase sequences increase monotonically with index, n. In fact, both plots look iden-
tical. However, taking their difference reveals the key information that we seek, that being the
phase-modulated jitter sequence as shown below:
Superimposed on this plot is the expected jitter sequence taken from the given phase-modu-
lating data sequence (shown as circles).
We conclude that the analytic signal extraction method was effective in extracting the
two-tone phase modulated signal. It is interesting to note from this example that if we take the
FFT of the extracted jitter sequence we can obtain a spectral plot that separates the two tones
in the usual DSP-based manner as follows:
It should be clear from the above example that all DSP-based methods of previous chapters
are directly applicable to a jitter sequence.
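The entire example can be reproduced in a dozen lines of NumPy. This is a sketch: np.unwrap stands in for the four-quadrant bookkeeping of Eq. (14.157), and the constant −π/2 offset between the analytic-signal phase and the modulating phase is removed at the end:

```python
import numpy as np

N = 4096
n = np.arange(N)
jitter = np.sin(2 * np.pi * 57 * n / N) + np.sin(2 * np.pi * 23 * n / N)
d = np.sin(2 * np.pi * 1013 * n / N + jitter)      # two-tone PM data sequence

D = np.fft.fft(d)                                  # analytic spectrum, Eq. (14.143)
Da = np.zeros(N, dtype=complex)
Da[0], Da[1:N // 2] = D[0], 2 * D[1:N // 2]
z = np.fft.ifft(Da)                                # d + j*Hilbert(d)

theta = np.unwrap(np.angle(z))                     # unwrapped instantaneous phase
J = theta - 2 * np.pi * 1013 * n / N               # subtract carrier ramp, Eq. (14.149)
J -= J[0] - jitter[0]                              # remove the constant -pi/2 offset
err = np.max(np.abs(J - jitter))                   # agreement with the ideal jitter
```

Taking the FFT of J then separates the two tones at bins 57 and 23 in the usual DSP-based manner.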
Final Comments
The objective of this section was to measure the sinusoidal phase-modulated response of a device or system so that the jitter transfer gain could be computed. We are now equipped with the DSP-based techniques with which to extract this information from a time sequence. If we define the jitter sequence at the output of the system as J_OUT[n], then we can obtain the peak-to-peak value from the M_J bin using the spectral coefficient notation as follows:
Figure 14.37. Extracting the phase component of a digital signal: (a) indirect method involving a
high-speed digitizer, (b) a direct method using a heterodyning technique, (c) another direct method
involving a TDC.
Likewise, the input sinusoidal phase-modulated sequence, denoted as JIN[n], is analyzed in the
exact same way leading to
Finally, we combine Eqs. (14.159) and (14.160) and substitute into Eq. (14.129) to find the jitter
transfer gain corresponding to Bin MJ as
The process can be repeated across a band of frequencies or one can create a multitone jitter signal
and capture the entire jitter transfer over a band of frequencies.
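With coherent capture, the bin-M_J computation amounts to a ratio of two FFT coefficients; the sketch below assumes a hypothetical gain of 0.708 at the jitter frequency simply to exercise the arithmetic:

```python
import numpy as np

N, MJ = 4096, 3
n = np.arange(N)
J_in = 0.5 * np.sin(2 * np.pi * MJ * n / N)        # injected 0.5-UI jitter tone
gain_true = 0.708                                  # assumed |G| at this frequency
J_out = gain_true * J_in                           # output jitter sequence

Xin = np.fft.fft(J_in) / N
Xout = np.fft.fft(J_out) / N
G = abs(Xout[MJ]) / abs(Xin[MJ])                   # jitter transfer gain, Eq. (14.129)
G_dB = 20 * np.log10(G)                            # Eq. (14.130)
```

Replacing the single tone with a multitone jitter signal turns the same computation into a bin-by-bin transfer-function measurement.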
To close this section, we point out through Figure 14.37 that there are several other ways in
which to extract the jitter sequence of a phase-modulated signal. Figure 14.37a illustrates the high-
speed sampling method just described. The setup in part Figure 14.37b illustrates a heterodyning
approach whereby the system output is mixed with a reference signal and the low-frequency
image is removed by filtering and then digitized. The test setup of Figure 14.37c makes use of
a PLL and time-to-digital (TDC) converter arrangement to obtain the jitter sequence directly. A
TDC essentially measures the time difference between the reference and the digital signal to pro-
duce J[n]. For the latter two test setups, regular FFT post-processing, together with Eq. (14.161),
can be used to obtain the jitter transfer gain.
Figure 14.38. A general jitter tolerance test setup illustrating the different source conditions.
Figure 14.39. A jitter tolerance compliance mask for the SONET OC-12 specification.
[Axes: jitter tolerance (UI peak-peak), 0.01 to 10, versus frequency (kHz), 10 to 10,000; the mask is drawn for BER = 10⁻¹⁰ with a 0.4-UI level, and the unacceptable region lies below the mask.]
A general jitter tolerance test setup is shown in Figure 14.38. Depending on the standard, the jitter on the input signal can take various forms. For example, SONET testing requires jitter in the form of a sinusoidal phase modulation, which is swept or stepped across a prescribed frequency band while the amplitude is adjusted so that a specific BER level is maintained. The input amplitude of the jitter is then compared against a frequency template, such as that shown in Figure 14.39 for the SONET OC-12 transmission specification. As an example, at an input sinusoidal frequency of 100 kHz, the device or system must be capable of handling an injected peak-to-peak sinusoidal jitter of no less than 0.4 UI while maintaining a BER of 10⁻¹⁰ or smaller. It is important that the input signal have little RJ component, negligible at least when compared to the SJ component. In contrast to SONET testing, the SATA specification requires a test signal bearing a wide variety of both random and deterministic jitter down to a BER level of 10⁻¹² or smaller. Great care must be exercised when selecting the jitter source, especially in light of the very high demands on its performance.
Figure 14.40. (a) Relating the input and output jitter behavior through a linear system function
G(jω). (b) Input-output PDFs for a sinusoidal input signal and its relationship to G(jω).
[Panel (b): the input PDF pdf_JIN spans −Ĵ_IN to Ĵ_IN and maps to the output PDF pdf_JOUT spanning −Ĵ_OUT to Ĵ_OUT, with SJ_OUT = |G(jω)| SJ_IN.]
To gain a better understanding of a jitter tolerance test under sinusoidal input conditions, let us assume that the input and output phase or jitter behavior of a system is linearly related by some transfer function G(jω), as depicted in Figure 14.40a. Such a transfer function arises naturally from the underlying mathematical description of a clock recovery circuit. If the input to the system is described by the equation Ĵ_IN sin(ωt), then the output can be described as |G(jω)| Ĵ_IN sin(ωt + ∠G(jω)). Equally, we can relate the PDFs of the input and output distributions as shown in Figure 14.40b, with peak-to-peak values of SJ_IN and SJ_OUT = |G(jω)| SJ_IN, respectively. Clearly, the larger the gain of the system at a particular frequency, the larger the output peak-to-peak jitter. Assuming that sinusoidal jitter is the only jitter present, we can describe the PDF of the zero crossings of two consecutive bits as shown in Figure 14.41a. Superimposed on this plot is the ideal sampling instant at t_SH = T/2. This threshold sits midway between the two ideal bit transitions. As is evident from this figure, the jitter is bounded and much less than the sampling threshold, so no transmission errors are expected. As the input peak-to-peak value increases, as shown in Figure 14.41b, the peak value of the output jitter distribution gets closer to the sampling instant, but again, no errors are expected. However, once the amplitude of the jitter exceeds the sampling threshold [i.e., Ĵ_IN |G(jω)| > T/2], errors are expected, as illustrated in Figure 14.41c. We can quantify the BER level by integrating the areas beyond the threshold on each side and write
BER = (1/2) Pe|t>tSH + (1/2) Pe|t<tSH = (1/2) ∫_{T/2}^{∞} pdfJOUT(t) dt + (1/2) ∫_{−∞}^{T/2} pdfJOUT(t − T) dt    (14.162)
where
pdfJOUT(t) = 1/(π √[(SJOUT/2)² − t²]) for −SJOUT/2 ≤ t ≤ SJOUT/2, and 0 otherwise    (14.163)
Substituting and using the integral relationship ∫ dx/√(a² − x²) = −cos⁻¹(x/a), a² > x², we write
BER = ∫_{T/2}^{SJOUT/2} dt/(π √[(SJOUT/2)² − t²]) = (1/π) cos⁻¹(T/SJOUT),  SJOUT > T    (14.164)
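The closed form in Eq. (14.164) is easy to check numerically. The following Python sketch integrates the sinusoidal-jitter PDF of Eq. (14.163) over the error region with a simple midpoint rule and compares the result to (1/π)cos⁻¹(T/SJOUT); the SJOUT and T values are illustrative only.

```python
import math

def ber_closed_form(sj_out, T):
    # Eq. (14.164): BER = (1/pi) * cos^-1(T / SJ_OUT), valid for SJ_OUT > T
    return math.acos(T / sj_out) / math.pi

def ber_numeric(sj_out, T, steps=200_000):
    # Midpoint-rule integration of the arcsine PDF of Eq. (14.163) over the
    # error region T/2 < t < SJ_OUT/2 (the two tails in Eq. (14.162) combine
    # into this single integral by symmetry).
    a = sj_out / 2.0
    lo, hi = T / 2.0, a
    h = (hi - lo) / steps
    total = 0.0
    for k in range(steps):
        t = lo + (k + 0.5) * h
        total += 1.0 / (math.pi * math.sqrt(a * a - t * t))
    return total * h

sj_out, T = 1.6, 1.0   # peak-to-peak SJ of 1.6 UI against a 1 UI bit period
print(ber_closed_form(sj_out, T))
print(ber_numeric(sj_out, T))
```

The two values agree to within the quadrature error, confirming the closed-form result.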
Chapter 14 • Clock and Serial Data Communications Channel Measurements 669
Figure 14.41. Illustrating the zero-crossing PDF for two consecutive bit transitions when the system
is excited by a sinusoidal signal: (a) ĴOUT < T/2, no bit errors; (b) ĴOUT < T/2 but larger than in
part (a), again no bit errors; (c) ĴOUT > T/2, with bit errors.
Rearranging and using the trigonometric power series expansion around x = 0 (i.e., cos x ≈ 1 − x²/2),
we write the BER as
BER = (√2/π) √(1 − T/SJOUT),  SJOUT > T    (14.165)
or expressing SJOUT in terms of UIs, we write
BER = (√2/π) √(1 − 1/SJOUT),  SJOUT > 1 UI    (14.166)
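The small-angle step leading to Eq. (14.166) is only accurate when the BER is small, that is, when SJOUT is close to 1 UI. A brief Python comparison of the exact form (1/π)cos⁻¹(1/SJOUT) against the approximation makes the range of validity visible; the SJ values below are illustrative.

```python
import math

def ber_exact(sj_ui):
    # Eq. (14.164) with T = 1 UI: BER = (1/pi) * cos^-1(1 / SJ_OUT)
    return math.acos(1.0 / sj_ui) / math.pi

def ber_approx(sj_ui):
    # Eq. (14.166): the cos x ~ 1 - x^2/2 approximation
    return (math.sqrt(2.0) / math.pi) * math.sqrt(1.0 - 1.0 / sj_ui)

for sj in (1.01, 1.1, 1.5, 2.0, 5.0):
    print(f"SJ_OUT = {sj:4.2f} UI: exact {ber_exact(sj):.4f}, approx {ber_approx(sj):.4f}")
```

Near SJOUT = 1 UI the two agree to within about 0.1%, while at SJOUT = 5 UI the approximation underestimates the exact BER by several percent.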
670 AN INTRODUCTION TO MIXED-SIGNAL IC TEST AND MEASUREMENT
Replacing the output SJ term by the input SJ term and system function G(jω), we finally write
BER = (√2/π) √(1 − 1/(|G(jω)| SJIN)),  |G(jω)| SJIN > 1 UI    (14.167)
This equation provides the key insight into the jitter tolerance test. It shows how the peak-to-peak
input jitter level, the system transfer function, and the BER performance are all interrelated. For
instance, if G(jω) has a low-pass nature whereby G(jω) = 1 at DC and G(jω) = 0 for f > fB, then
the BER performance will have a value of (√2/π)√(1 − 1/SJIN) for f < fB, which approaches the
very large value of √2/π when SJIN is large, and will drop toward zero for f > fB, since no jitter
reaches the output above the band edge.
For a particular BER performance, we can rearrange Eq. (14.167) and write the input SJ as
follows:
SJIN = 1/( |G(jω)| [1 − (π BER/√2)²] )    (14.168)
The above expression reveals the nature of the mask that separates the acceptable region from the
unacceptable region as illustrated in the jitter tolerance plot of Figure 14.39. The shape of this
mask follows the inverse of the system transfer function |G(jω)|⁻¹. In order to clearly identify the
points on this mask, let us denote these points as SJIN|GOLDEN and the corresponding system gain as
GGOLDEN(jω), then we can write
SJIN|GOLDEN = 1/( |GGOLDEN(jω)| [1 − (π BER/√2)²] )    (14.169)
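Since (π × BER/√2)² is vanishingly small for any BER of practical interest, the mask of Eq. (14.169) is, for all practical purposes, just the inverse gain |GGOLDEN(jω)|⁻¹. A short Python sketch quantifies how small the correction term really is; the BER values are illustrative.

```python
import math

def mask_level(g_mag, ber):
    # Eq. (14.169): SJ_IN|GOLDEN = 1 / (|G| * (1 - (pi*BER/sqrt(2))**2))
    return 1.0 / (g_mag * (1.0 - (math.pi * ber / math.sqrt(2.0)) ** 2))

for ber in (1e-3, 1e-6, 1e-10, 1e-12):
    # Ratio of the true mask level to the simple inverse-gain value 1/|G|
    print(f"BER = {ber:.0e}: mask * |G| = {mask_level(1.0, ber):.15f}")
```

Even at a BER as poor as 10⁻³ the correction is only a few parts per million, which is why the mask shape tracks |G(jω)|⁻¹ so closely.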
Any measured SJIN that is above SJIN|GOLDEN will fall in the acceptable region. To understand why
this is so, consider a situation where the input SJ at a particular frequency to a DUT is adjusted so
that the BER at the output reaches the desired level. If SJIN|DUT > SJIN|GOLDEN, then we can write
1/( |GDUT(jω)| [1 − (π BER/√2)²] )  >  1/( |GGOLDEN(jω)| [1 − (π BER/√2)²] )
which leads to
|GDUT(jω)| < |GGOLDEN(jω)|
This expression suggests that the DUT gain is less than the golden device gain and hence can
tolerate a greater level of jitter at its input before reaching the same output level or, equivalently,
the same BER level. It should be noted here that the jitter tolerance test is much more than just a
measurement of the gain of the system transfer function, because it includes nonlinear and noise
effects not modeled by the above equations.
EXAMPLE 14.25
A jitter tolerance test was performed on a digital receiver that must conform to the SONET OC-
12 specification shown in Figure 14.39. The following table lists the results of this test. Does the
receiver meet the OC-12 specification? What is the gain of the measured device relative to the
golden device gain at 300 kHz?
Solution:
To see clearly the results of the test, the data points from the table are superimposed on a frequency
plot along with the OC-12 compliance mask. All points but one at 300 kHz appear
above the BER = 10⁻¹⁰ mask line; hence this device does not pass the test.
[Plot: jitter tolerance (UI peak-to-peak, 0.01 to 10) versus frequency (10 kHz to 10 MHz), showing the measured data points against the OC-12 BER = 10⁻¹⁰ compliance mask.]
The gain of the DUT relative to the golden device gain is found from Eq. (14.167) as
(√2/π) √(1 − 1/(|GGOLDEN(jω)| SJIN|GOLDEN)) = (√2/π) √(1 − 1/(|GDUT(jω)| SJIN|DUT))
which reduces to
|GDUT(jω)|/|GGOLDEN(jω)| = SJIN|GOLDEN/SJIN|DUT
Here we see the DUT has a gain that is 1.36 times larger than the golden device gain at
300 kHz.
EXAMPLE 14.26
A golden device operating at a bit rate of 100 MHz has an input–output phase transfer function
described as follows:
G(s) = 50 (s + 50)/(s + 5000)
Plot the expected jitter tolerance mask for a sinusoidal phase-modulated input signal with a BER
of 10⁻¹⁰.
Solution:
According to Eq. (14.169), the expected input SJ mask level can be written as
SJIN(f)|GOLDEN = 1/( |50 (j2πf + 50)/(j2πf + 5000)| × [1 − (π × 10⁻¹⁰/√2)²] )
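This mask can be evaluated numerically; because the BER correction term is negligible at 10⁻¹⁰, the mask is essentially |G(j2πf)|⁻¹. A Python sketch, with the frequency points chosen for illustration:

```python
import math

def mask_ui(f_hz, ber=1e-10):
    # SJ_IN(f) = 1 / (|G(j*2*pi*f)| * (1 - (pi*BER/sqrt(2))**2))
    # with G(s) = 50*(s + 50)/(s + 5000) from Example 14.26
    w = 2.0 * math.pi * f_hz
    g_mag = 50.0 * abs(complex(50.0, w)) / abs(complex(5000.0, w))
    return 1.0 / (g_mag * (1.0 - (math.pi * ber / math.sqrt(2.0)) ** 2))

for f in (0.1, 1.0, 10.0, 100.0, 1e3, 1e4):
    print(f"f = {f:8.1f} Hz: SJ_IN mask = {mask_ui(f):8.4f} UI")
```

The mask starts near 1/|G(0)| = 1/0.5 = 2 UI at low frequency and rolls off toward 1/50 = 0.02 UI well above the pole at 5000 rad/s, mirroring the low-frequency tolerance plateau and high-frequency floor of a typical jitter tolerance template.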
Exercises
14.28. Repeat the development of the BER expression shown in Eq. (14.162) for a sinusoidal jitter tolerance test, but this time assume the normalized sampling instant is an arbitrary point between 0 and T.
ANS. BER = (1/2) [1 + (1/π) cos⁻¹(2UITH/(|G(jω)| SJIN)) − (1/π) cos⁻¹(2(UITH − 1)/(|G(jω)| SJIN))],  |G(jω)| SJIN > 1 UI
The above jitter tolerance test was developed from the perspective of a sinusoidal phase-
modulated stimulus such as that used for SONET. One can just as easily develop the idea of a
golden device behavior with respect to other types of input distributions. The reader can find sev-
eral examples of this in the problem set at the end of this chapter.
During production a jitter tolerance test is conducted in a go/no-go fashion whereby the input
DJ and RJ are first established by way of the transmission compliance mask, combined with the
appropriate PRBS data pattern, and then applied to the device or system. The corresponding BER
performance is then measured and compared to the expected level as specified in the communication
standard (say BERD). If the BER is less than or equal to BERD over the entire frequency band, the
device will pass the test. Owing to the number of BER measurements required during a jitter toler-
ance test, methods that speed up this test are often critical for a product’s commercial success.
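The production flow just described amounts to a loop over the mask frequencies with an early exit on the first failing BER measurement. The sketch below is illustrative only: measure_ber stands in for the tester's BER routine, and the mask points and BERD limit are hypothetical, not taken from any standard.

```python
# Hypothetical go/no-go jitter tolerance sweep. measure_ber stands in for the
# ATE routine that applies the mask-level SJ (plus the compliance DJ/RJ and
# PRBS pattern) at one frequency and returns the measured BER.
BER_D = 1e-10   # illustrative BER limit; the real value comes from the standard

def jitter_tolerance_pass(mask, measure_ber):
    # mask: list of (frequency_hz, sj_amplitude_ui) points from the standard
    for freq, sj_ui in mask:
        if measure_ber(freq, sj_ui) > BER_D:
            return False   # fail fast: one failing frequency rejects the DUT
    return True

# Toy DUT model that starts making errors above 1 MHz
mask = [(30e3, 1.5), (300e3, 0.4), (3e6, 0.15)]
dut = lambda f, sj: 1e-12 if f < 1e6 else 1e-8
print(jitter_tolerance_pass(mask, dut))   # prints False
```

The fail-fast exit is one of the simple ways test time is reduced in practice, since a failing device never runs the remaining (and expensive) BER measurements.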
14.9 SUMMARY
In this chapter, we have reviewed the test principles related to clock and serial communication
channel measurements. We began by studying the noise attributes of a clock signal in the time
domain and then in the frequency domain. As with any noise process, statistical methods were
used to quantify the level of noise present with the clock signal. This included some discussion
on accumulated jitter, a concept related to the autocorrelation function of the jitter time sequence,
and an aggregate power spectral density metric based on the average spectral behavior of the clock
signal. Subsequently, the remainder of the chapter focused on measurement principles related to
serial data communications. It began by introducing the concept of an eye diagram from which the
idea of a zero crossing probability distribution was introduced. This was then expanded to include
the effects of a real channel, such as noise and dispersion, leading to a classification of the various
types of jitter one encounters in practice. Using a probabilistic argument, the probability of transmission
error was introduced in terms of a probability density function for a jitter mechanism. The
concept of bit error rate (BER) was introduced as the average probability of transmission error.
Subsequently, the BER was modeled as a binomial distribution from which confidence intervals
could be computed and related to test time. At this point in the chapter it became evident that a
BER test requires excessive test time, and the following section outlined different strategies to
reduce the time of a BER test. All of these methods involved expressing the BER as a function of
a model of the underlying jitter process and then solving for the parameters of this model at a low
BER level where test time can be made short. A general method of modeling a Gaussian mixture
was outlined and shown to be a powerful method of modeling various sources of jitter. Commonly
used test metrics like random jitter (RJ), deterministic jitter (DJ), and total jitter (TJ) were defined.
Methods to further decompose deterministic jitter were outlined. These methods were based on
DSP techniques similar to the techniques used for analog and sampled-data channels. The chapter
concluded with a short discussion on jitter transmission test whereby a jitter signal is injected into
the channel and its effect on the output is quantified. This included a discussion on jitter transfer
and jitter tolerance type tests.
PROBLEMS
14.1. A time jitter sequence is described by J[n] = 0.001 sin²((14π/1024)n) + 0.001 for n =
0, 1, …, 1023. What are the mean, standard deviation, and peak-to-peak value of this time
jitter sequence? Plot the histogram of this data sequence over a range of 0 to 0.002.
14.2. A time jitter sequence is described by J[n] = 0.001 sin((14π/1024)n) for n = 0, 1, …,
1023. What are the period jitter and cycle-to-cycle jitter of this time jitter sequence?
14.3. A time jitter sequence is described by J[n] = 0.001 sin((14π/1024)n) + [1 − e^(−3n/64)]
for all n. Plot and observe the behavior of the accumulated jitter sequence
corresponding to this signal over a range of 64 delay instants using a sample set of 128.
14.5. A time jitter sequence is described by J[n] = 0.001 × randn() + 0.001 × [1 − e^(−3n/128)]
for all n. Here randn() represents a Gaussian random number with zero
mean and unity standard deviation. Plot and observe the behavior of the accumulated jitter
sequence for this signal over a range of 128 delay instants using a sample set of 256.
14.6. The PSD of the voltage level of a 100-MHz clock signal is described by
Sv(f) = 10⁻⁴/[10⁴ + (f − 10⁸)²] + 0.5 × δ(f − 10⁸) in units of V²/Hz. What is the RMS
noise level associated with this clock signal over a 1000-Hz bandwidth centered around the
first harmonic of the clock signal?
14.7. The PSD of the instantaneous phase of a 2.5-GHz clock signal is described by
Sφ(f) = 4 × 10⁻⁴/(10⁴ + f²) in units of rad²/Hz. What is the RMS level associated with the
phase of this clock signal over a 1000-Hz bandwidth?
14.8. The PSD of the voltage level of a 100-MHz clock signal is described by
Sv(f) = 10⁻⁴/[10⁴ + (f − 10⁸)²] + 0.5 × δ(f − 10⁸) in units of V²/Hz. What is the phase
noise of the clock at a 10,000-Hz offset in dBc/Hz?
14.9. The PSD of the instantaneous phase of a 2.5-GHz clock signal is described by
Sφ(f) = 4 × 10⁻⁴/(10⁴ + f²) in units of rad²/Hz. What is the phase noise of the clock at a
10,000-Hz offset in dBc/Hz?
14.10. The PSD of the instantaneous phase of a 10-GHz clock signal is described by
Sφ(f) = 10⁻⁵/(10⁶ + f²) in units of rad²/Hz. What is the spectrum of a jitter sequence
derived from this clock signal in s²/Hz? How about when expressed in UI²/Hz?
14.11. A phase jitter sequence expressed in radians, described by the equation
φ[n] = 0.1 sin((50π/1024)n) for n = 0, …, 1023, was extracted from a 100-MHz clock signal
using a 10-GHz sampling rate. Plot the phase spectrum in UI²/Hz.
14.12. A time jitter sequence expressed in nanoseconds, described by the equation
J[n] = 0.1 sin((14π/1024)n) + 0.1 sin((50π/1024)n) + 0.1 sin((124π/1024)n) for n = 0, …, 1023,
was extracted from a 100-MHz clock signal using a 10-GHz sampling rate. Plot the phase
spectrum in s²/Hz.
14.13. Determine the following list of convolutions:
(a) δ(t − a) ⊗ (1/(σ√(2π))) e^(−t²/(2σ²)),
(b) δ(t − a) ⊗ (1/(σ√(2π))) e^(−(t − μ)²/(2σ²)),
(c) [(1/2) δ(t + a) + (1/2) δ(t − a)] ⊗ [(1/2) δ(t + b) + (1/2) δ(t − b)], and
(d) [(1/4) δ(f + a) + (1/4) δ(f − a) + (1/4) δ(f + 2a) + (1/4) δ(f − 2a)] ⊗ [(1/2) δ(f + b) + (1/2) δ(f − b)].
14.14. The PDF for the ideal zero crossings of an eye diagram can be described by
(1/2) δ(t) + (1/2) δ(t − 10⁻⁸). If the PDF of the random jitter present at the receiver is
(10⁹/√(2π)) e^(−t²/(2 × 10⁻¹⁸)), what is the PDF of the actual zero crossings?
14.15. The PDF for the zero crossings at a receiver due to inter-symbol interference can be
described by
(1/4) δ(t + 10⁻⁹) + (1/4) δ(t − 10⁻⁹) + (1/4) δ(t − 10⁻⁸ + 10⁻⁹) + (1/4) δ(t − 10⁻⁸ − 10⁻⁹).
If the PDF of the random jitter present at the receiver is (10⁹/√(2π)) e^(−t²/(2 × 10⁻¹⁸)), what is the
PDF of the zero crossings?
14.16. In Chapter 5 we learned that the standard Gaussian CDF can be approximated by
Φ(z) ≈ 1 − [1/((1 − α)z + α√(z² + β))] (1/√(2π)) e^(−z²/2),  0 ≤ z < ∞
Φ(z) ≈ [1/(−(1 − α)z + α√(z² + β))] (1/√(2π)) e^(−z²/2),  −∞ < z < 0
where α = 1/π and β = 2π.
Compare the behavior of this function to the exact CDF function, available as a built-in routine
in MATLAB, Excel, or elsewhere, defined by
Φ(z) = (1/√(2π)) ∫_{−∞}^{z} e^(−u²/2) du
How well do these functions compare for very small values of Φ(z)? How about large
values? Plot the difference between these two functions for −10 < z < 10. What is the
maximum error in percent?
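The comparison asked for in Problem 14.16 can be set up directly in Python using math.erf for the exact CDF; the z values printed are illustrative.

```python
import math

ALPHA = 1.0 / math.pi
BETA = 2.0 * math.pi

def phi_approx(z):
    # The closed-form approximation quoted in Problem 14.16 (alpha = 1/pi, beta = 2*pi)
    tail = math.exp(-z * z / 2.0) / math.sqrt(2.0 * math.pi)
    if z >= 0:
        return 1.0 - tail / ((1.0 - ALPHA) * z + ALPHA * math.sqrt(z * z + BETA))
    return tail / (-(1.0 - ALPHA) * z + ALPHA * math.sqrt(z * z + BETA))

def phi_exact(z):
    # Standard Gaussian CDF expressed through the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

for z in (-5.0, -1.0, 0.0, 1.0, 5.0):
    print(f"z = {z:5.1f}: exact = {phi_exact(z):.6e}  approx = {phi_approx(z):.6e}")
```

Note how the approximation is exact at z = 0 and tracks the deep tails (where BER work lives) to a small fraction of a percent, with its worst behavior near |z| ≈ 1.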
14.17. A logic 0 is transmitted at a nominal level of 0 V and a logic 1 is transmitted at a nominal
level of 1.2 V. Each logic level has equal probability of being transmitted. If a 100-mV
RMS Gaussian noise signal is assumed to be present at the receiver, what is the probability
of making a single-trial bit error if the detector threshold is set at 0.6 V? Repeat for a
detector threshold of 0.75 V.
14.18. A logic 0 is transmitted at a nominal level of −1.0 V, and a logic 1 is transmitted at a nominal
level of 1.0 V. If a 250-mV RMS Gaussian noise signal is assumed to be present at the receiver,
what is the probability of making a single-trial bit error if the detector threshold is set at 0.0 V?
Assume that a logic one has twice the probability of being transmitted than a logic zero.
14.19. Data are transmitted to a receiver at a data rate of 2 Gbits/s through a channel that causes
the zero crossings to vary according to a Gaussian distribution with zero mean and a 50-ps
standard deviation. Assume that the 0 to 1 transitions are equally likely to occur as the 1 to
0 transitions. What is the probability of error of a single event if the sampling instant is set
midway between bit transitions? What about if the sampling instant is set at 0.4 ns?
14.20. Describe the PDF of the zero crossings of an ideal receiver operating at a data rate of 1
Gbps when twice as many logic 0's are transmitted as logic 1's.
14.21. The set of zero crossings appearing on an oscilloscope corresponding to an eye diagram
measurement is described by the following dual-Gaussian PDF:
pdf(t) = (1/4) (1/(σ√(2π))) e^(−t²/(2σ²)) + (3/4) (1/(σ√(2π))) e^(−(t − 1000 ps)²/(2σ²))
where σ = 50 ps. What is the probability of error of a single event if the normalized sampling
instant is set at UI/4?
14.22. Data are transmitted to a receiver at a data rate of 2 Gbits/s through a channel that causes
the zero crossings to vary according to a multi-Gaussian distribution described by
pdf(t) = (1/4)(1/(σ√(2π))) [e^(−(t + 5 ps)²/(2σ²)) + e^(−(t − 4 ps)²/(2σ²)) + e^(−(t − 500 ps + 5 ps)²/(2σ²)) + e^(−(t − 500 ps − 5 ps)²/(2σ²))]
What is the probability of error of a single event if the sampling instant is set midway
between bit transitions when σ = 10 ps?
14.23. A series of 10 transmission measurements was conducted, each involving 10¹⁰ bits operating at
a data rate of 600 Mbits/s, resulting in the following list of transmission errors: 5, 6, 4, 6, 3,
7, 3, 6, 1, 0. What is the 95% confidence interval for the BER associated with this transmission
system? How long in seconds did the test take to complete?
14.24. Data are transmitted at a rate of 5 Gbps over a channel with a single-bit error probability
of 10⁻⁸. If 10⁹ bits are transmitted, what is the probability of no bit errors, one bit error,
two bit errors, and ten bit errors? What is the average number of errors expected?
14.25. Data are transmitted at a rate of 10 Gbps over a channel with a single-bit error probability
of 10⁻¹¹. If 10¹² bits are transmitted, what is the probability of less than or equal to one bit
error, less than or equal to two bit errors, and less than or equal to ten bit errors?
14.26. Demonstrate that the Poisson approximation for the binomial distribution is valid over the
range of NT between 10⁹ and 10¹⁴ for a BER of 10⁻¹²:
Σ_{k=0}^{NE} [NT!/(k!(NT − k)!)] BER^k (1 − BER)^(NT − k) ≈ Σ_{k=0}^{NE} [(NT × BER)^k / k!] e^(−NT × BER)
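This comparison is straightforward to set up in Python; exact binomial terms are computed with math.comb, and the NT value below is one illustrative point from the stated range.

```python
import math

def p_binomial_le(n_t, ber, n_e):
    # P(number of errors <= N_E) under the exact binomial model
    return sum(math.comb(n_t, k) * ber**k * (1.0 - ber)**(n_t - k)
               for k in range(n_e + 1))

def p_poisson_le(n_t, ber, n_e):
    # Poisson approximation with mean N_T * BER
    lam = n_t * ber
    return math.exp(-lam) * sum(lam**k / math.factorial(k)
                                for k in range(n_e + 1))

n_t, ber = 10**12, 1e-12   # one illustrative point: lambda = N_T * BER = 1
print(p_binomial_le(n_t, ber, 3))
print(p_poisson_le(n_t, ber, 3))
```

With NT so large and BER so small, the two probabilities agree to several decimal places, which is why BER confidence calculations routinely use the Poisson form.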
14.27. A system transmission test is to be run whereby a BER < 10⁻¹² is to be verified. How many
samples should be collected such that the desired BER is met with a CL of 97% when no
more than 10 errors are deemed acceptable? What is the total test time if the data rate is
4 Gbps?
14.28. A system transmission test is to be run whereby a BER < 10⁻¹¹ is to be verified at a confidence
level of 99% when 3 or fewer bit errors are deemed acceptable. What are the minimum
and maximum bit lengths required for this test? Also, what are the minimum and
maximum times for this test if the data rate is 1 Gbps? Does testing for a fail part prior to
the end of the test provide significant test time savings?
14.29. Draw the schematic diagram of a 5-stage LFSR that realizes a primitive generating
polynomial.
14.30. Write a computer program to generate a maximum length sequence corresponding to
a 5-stage LFSR. What is the expected length of the repeating pattern produced by this
LFSR? Is the sequence periodic? How do you check?
14.31. Draw the schematic diagram of a 12-stage LFSR that realizes a primitive generating
polynomial.
14.32. Write a computer program to generate a maximum length sequence corresponding to a
12-stage LFSR. What is the expected length of the repeating pattern produced by this LFSR?
What is the value of the seed used to generate this sequence? Is the sequence periodic?
14.33. A PRBS sequence of degree 3 is required for a BER test. Determine the unique test pat-
tern for a seed of 101. Compare this pattern to one generated with a seed of 001. Are there
any common attributes?
14.34. A system transmission test is to be run whereby a BER < 10⁻¹² is to be verified using the
amplitude-based scan test method. The threshold voltage of the receiver was adjusted to
the levels −200, −100, 100, and 200 mV, and the corresponding BER levels were measured:
5 × 10⁻⁶, 5 × 10⁻⁹, 5 × 10⁻⁹, and 5 × 10⁻⁶. What are the levels of the received logic values?
Does the system meet the BER requirements if the receiver threshold is set at 10 mV?
14.35. A digital system is designed to operate at 2.5 Gbps and have a nominal logic 1 level of
3.3 V and a logic 0 at 0.1 V. If the threshold of the receiver is set at 1.65 V and the noise
present at the receiver is 180 mV RMS, select the levels of the amplitude scan test method
so that the system can be verified for a BER < 10–12 at a confidence level of 99.7% and that
the total test time is less than 0.5 s.
14.36. A digital sampling oscilloscope obtained the following eye diagram while observing the
characteristics of a 1-Gbps digital signal. Histograms were obtained using the built-in function
of the scope at a sampling instant midway between transitions (i.e., at the maximum
point of eye opening). Detailed analysis revealed that each histogram is Gaussian. One
histogram has a mean value of 2.8 V and a standard deviation of 150 mV. The other has
a mean of 120 mV and a standard deviation of 75 mV. Estimate the BER level when the
threshold level is set at 2.0 V.
14.37. A digital sampling oscilloscope was used to determine the zero crossings of an eye dia-
gram. At 0 UI, the histogram is Gaussian distributed with zero mean and a standard devia-
tion of 80 ps. At unity UI, the histogram is again Gaussian distributed with zero mean
value and a standard deviation of 45 ps. If the unit interval is equal to 1 ns, what is the
expected BER associated with this system at a normalized sampling instant of 0.5 UI?
14.38. The BER performance of a digital receiver was measured to be 10–16 at a bit rate of 20
Gbps having a sampling instant at one-half the bit duration. If the sampling instant can
vary by ±10% from its ideal position, what is the range of BER performance expected
during production?
14.39. A digital system operates with a 20-GHz clock. How much RJ can be tolerated by the system
if the BER is to be less than 10–14 and the DJ is no greater than 3 ps? Assume that the sampling
instant is in the middle of the data eye and the total jitter is modeled as a dual-Dirac PDF.
14.40. A digital system operates with a 10-GHz clock. How much DJ can be tolerated by the
system if the BER is to be less than 10⁻¹⁴ and the RJ is no greater than 5 ps? Assume that
the sampling instant is in the middle of the data eye and the total jitter is modeled as a
dual-Dirac PDF.
14.41. A system transmission test is to be run whereby a BER < 10–14 is to be verified using the
dual-Dirac jitter-decomposition test method. The system operates with a 10-GHz clock,
and the following BER measurements are at two different sampling instances: BER =
0.25 × 10–5 at 29 ps, and BER = 0.25 × 10–7 at 34 ps. Assume that the digital system oper-
ates with a sampling instant in the middle of the data eye. Does the system meet spec?
14.42. A system transmission test was run and the RJ and DJ components were found to be 100
ps and 75 ps, respectively. What is the TJ component at a BER level of 10–12?
14.43. A system transmission test was run and the RJ and DJ components were found to be 100
ps and 75 ps, respectively. What is the TJ component at a BER level of 10–14?
14.44. A system transmission test was run where the RJ component was found to be 10 ps
and the TJ component at a BER level of 10–12 was found to be 150 ps. What is the DJ
component?
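Problems 14.42 through 14.44 turn on the dual-Dirac relationship used in this chapter, commonly written TJ(BER) = DJ + 2·Q(BER)·RJ. The sketch below assumes the convention BER = (1/2)·erfc(Q/√2) for the Gaussian tail (conventions differ on whether a transition density factor is folded in) and inverts it by bisection.

```python
import math

def q_scale(ber):
    # Invert BER = 0.5 * erfc(Q / sqrt(2)) for Q by bisection on [0, 20]
    lo, hi = 0.0, 20.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if 0.5 * math.erfc(mid / math.sqrt(2.0)) > ber:
            lo = mid    # tail probability still too large: Q must grow
        else:
            hi = mid
    return 0.5 * (lo + hi)

def total_jitter(dj, rj_sigma, ber):
    # Dual-Dirac model: TJ(BER) = DJ + 2 * Q(BER) * RJ
    return dj + 2.0 * q_scale(ber) * rj_sigma

print(q_scale(1e-12))                        # about 7.03 under this convention
print(total_jitter(75e-12, 100e-12, 1e-12))  # TJ in seconds for DJ = 75 ps, RJ = 100 ps
```

The Q value grows only slowly as the target BER is tightened (roughly 7.0 at 10⁻¹² versus 7.6 at 10⁻¹⁴), which is why RJ dominates TJ growth at low BER.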
14.45. The PDF for total jitter is described by the three-term Gaussian model as follows:
pdf(t) = (1/4) (1/((2 ps)√(2π))) e^(−(t + 5 ps)²/(2(2 ps)²)) + (1/2) (1/((3 ps)√(2π))) e^(−(t − 1 ps)²/(2(3 ps)²)) + (1/4) (1/((4 ps)√(2π))) e^(−(t − 5 ps)²/(2(4 ps)²))
For low BER levels, write an expression of the BER level as a function of the sampling
instant. What is the BER at a sampling instant of 30 ps, assuming a 1-Gbps data rate?
14.46. The PDF for total jitter is described by the three-term Gaussian model as follows:
pdf(t) = (1/4) (1/((2 ps)√(2π))) e^(−(t + 5 ps)²/(2(2 ps)²)) + (1/2) (1/((3 ps)√(2π))) e^(−(t − 1 ps)²/(2(3 ps)²)) + (1/4) (1/((4 ps)√(2π))) e^(−(t − 5 ps)²/(2(4 ps)²))
What are the DJ, RJ, and TJ metrics at a desired BER of 10–12?
14.47. The jitter distribution at the transmit side of a communication channel can be described
by a four-term Gaussian mixture having the following parameters:
0.3 × N (–100 ps, 50 ps) + 0.2 × N (–10 ps, 60 ps) + 0.2 × N (50 ps, 60 ps)
+ 0.3 × N (120 ps, 50 ps)
What are the RJ, DJ, and TJ at a BER of 10⁻¹², assuming a bit rate of 600 Mbps?
14.48. The jitter distribution at the receiver side of a communication channel can be described
by a four-term Gaussian mixture having the following parameters:
0.2 × N (–12 ps, 5 ps) + 0.3 × N (–5 ps, 6 ps) + 0.3 × N (4 ps, 6 ps) +
0.2 × N (11 ps, 7 ps)
What are the RJ, DJ, and TJ at a BER of 10⁻¹², assuming a bit rate of 2 Gbps?
14.49. Extract a Gaussian mixture model using the EM algorithm from a data set synthesized
from a distribution consisting of three Gaussians with means –0.04 UI, 0.01 UI, and 0.1
UI, standard deviations of 0.05 UI, 0.03 UI, and 0.04 UI and weighting factors 0.4, 0.2, and
0.4, respectively. How does the extracted model compare with the original data set?
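A synthetic data set of the kind called for in Problem 14.49 can be generated with the standard library alone; the sample count and seed below are arbitrary choices, not part of the problem statement.

```python
import random
import statistics

# Draw a jitter data set from the three-Gaussian mixture of Problem 14.49;
# in a full solution this set would then be handed to an EM fitting routine.
random.seed(0)                    # arbitrary seed for repeatability
weights = [0.4, 0.2, 0.4]
means   = [-0.04, 0.01, 0.10]     # UI
sigmas  = [0.05, 0.03, 0.04]      # UI

picks = random.choices(range(3), weights=weights, k=50_000)
samples = [random.gauss(means[i], sigmas[i]) for i in picks]

# The sample mean should land near the mixture mean
# 0.4*(-0.04) + 0.2*0.01 + 0.4*0.10 = 0.026 UI
print(statistics.mean(samples))
```

Each draw first selects a mixture component according to the weights and then samples that component's Gaussian, which is exactly the generative model the EM algorithm assumes when it fits the parameters back out.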
14.50. The jitter distribution at the receiver end of a communication channel consisting of 10,000
samples was modeled by a four-term Gaussian mixture using the EM algorithm and the
following model parameters were found:
0.2 × N (−12 ps,5 ps)+ 0.3 × N (−5 ps,6 ps)+ 0.3 × N (4 ps,6 ps)+ 0.2 × N (11 ps,7 ps)
Assuming that the unit interval is 0.12 ns and that the data eye sampling instant is half a
UI, what is the BER associated with this system? Does the system behave with a BER less
than 10–12?
14.51. A time jitter sequence can be described by the following discrete-time equation:
J [ n] = 2 sin(2π 51/1024 n). Sketch the probability density function for this sequence.
14.52. A PRBS sequence of degree 5 is required to excite a digital communication channel. Using
a sequence length of 1024 bits, together with a periodic PRBS sequence, what is the spec-
trum of this pattern? If any tonal behavior is present, what are the RMS values and in which
bins of the FFT do these tones appear?
14.53. A jitter sequence consisting of 512 samples was captured from a serial I/O channel driven
by two separate data patterns: in one case an alternating sequence of 1's and 0's was used,
and in the other a PRBS pattern was used. An FFT analysis performed on the two separate
sets of jitter data, as well as on the PRBS sequence itself, resulted in the information below.
Determine the test metrics: PJ, DDJ, BUJ.

FFT bin                     Jitter FFT (1010 pattern)   FFT of PRBS sequence   Jitter FFT (PRBS pattern)
0                           0.01                        0                      5 × 10⁻³
1, …, 256 (except below)    ~1 × 10⁻⁵                   0                      ~4 × 10⁻⁴
33                          0                           0.041 − j0.026         0.05 − j0.03
91                          0                           0.01 + j0.02           0.03 + j0.03
153                         0                           0.006 − j0.007         0.005 − j0.006
211                         0                           0                      0.001 − j0.02
251                         0.03 − j0.09                0                      0.028 + j0.033
14.54. The RJ, SJ, ISI, and DCD metrics are 5 ps rms, 20 ps pp, 6 ps pp, and 3 ps pp, respectively.
Write an expression for the PDF for each jitter component. Provide a sketch of the total
jitter distribution and the zero-crossing PDF assuming a bit duration of 200 ps.
14.55. The RJ, ISI, and DCD metrics are 5 ps rms, 6 ps pp, and 3 ps pp, respectively. Write an
expression for the PDF of the total jitter distribution.
14.56. A digital receiver operates at a clock rate of 800 MHz. A jitter transfer test needs to be
performed with a 3-kHz phase-modulated sinusoidal signal having an amplitude of 0.001
radians. Select the values of Mc, MJ, and N such that the samples are coherent with a
5-GHz sampling rate.
14.57. A digital receiver operates at a clock rate of 500 MHz. A jitter transfer test needs to be performed
with a 5-kHz phase modulated sinusoidal signal having an amplitude of 1.0 UI. Using an AWG
with a sampling rate of 10 GHz, write a short routine that generates the AWG samples.
14.58. Compute the discrete Hilbert transform of the sequence d[n] = 1 × sin(2π(11/1024)n).
Plot each sequence on the same time axis and compare.
14.59. A 2048-point data set is collected from the following two-tone unity-amplitude phase-modulated
jitter sequence:
dOUT[n] = sin[2π(601/2048)(n − 1) + 1 × sin(2π(51/2048)(n − 1)) + 1 × sin(2π(11/2048)(n − 1))].
Extract the jitter sequence using the analytic signal approach and compare this sequence to
the ideal result.
14.60. A golden device operating at a bit rate of 100 MHz has an input–output phase transfer
function described as follows: Φ(s) = s(s + 18849.55)/(s² + 18849.55 s + 1.184352 × 10⁸).
Plot the expected jitter tolerance mask for a sinusoidal phase-modulated input signal with
a BER of 10⁻¹².
REFERENCES
1. M. P. Li, Jitter, Noise, and Signal Integrity at High-Speed, Prentice Hall, Pearson Education,
Boston, 2008.
2. Application Note, Advanced Phase Noise and Transient Measurement Techniques, Agilent
Technologies, 2007, 5989-7273EN.
3. J. A. McNeil, A simple method for relating time- and frequency-domain measures of oscilla-
tor performance, in 2001 IEEE Southwest Symposium on Mixed Signal Design, Austin, TX,
February, pp. 7–12, 2001.
4. Maxim Application Note: Statistical Confidence Levels for Estimating Error Probability, available
at http://pdfserv.maxim-ic.com/en/an/AN1095.pdf.
5. T. O. Dickson, E. Laskin, I. Khalid, R. Beerkens, J. Xie, B. Karajica, and S. P. Voinigescu, An
80-Gb/s 2³¹−1 pseudo-random binary sequence generator in SiGe BiCMOS technology, IEEE
Journal of Solid-State Circuits, 40(12), pp. 2735–2745, December 2005.
6. M. Rowe, BER measurements reveal network health, Test & Measurement World, July 1, 2002.
7. K. Willox, Q factor: The wrong answer for service providers and equipment manufacturers, IEEE
Communications Magazine, 41(2), pp. S18–S21, February 2003.
8. Application Note, Jitter Analysis: The Dual-Dirac Model, RJ/DJ, and Q-Scale, Agilent
Technologies, December 2004, http://cp.literature.agilent.com/litweb/pdf/5989-3206EN.pdf.
9. M. P. Li, J. Wilstrup, R. Jessen, and D. Petrich, A new method for jitter decomposition through
its distribution tail fitting, in Proceedings of the International Test Conference, pp. 788–794,
October 1999.
10. A. P. Dempster, N. M. Laird, and D. B. Rubin, Maximum likelihood from incomplete data via
the EM algorithm. Journal of the Royal Statistical Society: Series B, 39(1), pp. 1–38, November
1977.
11. F. Nan, Y. Wang, F. Li, W. Yang, and X. Ma, Better method than tail-fitting algorithm for jitter
separation based on Gaussian mixture model, Journal of Electronic Testing, 25(6), pp. 337–342,
December 2009.
12. N. Vlassis and A. Likas, A kurtosis-based dynamic approach to Gaussian mixture modeling,
IEEE Trans. on Systems, Man, and Cybernetics—Part A: Systems and Humans, 29(4), pp. 393–
399, July 1999.
13. S. Aouini, Gaussian Mixture Parameter Estimation Using the Expectation-Maximization
Algorithm, ECSE 509 Technical Report, McGill University, November 2004.
14. B. Veillette and G. W. Roberts, Reliable analog bandpass signal generation, in Proceedings of the
IEEE International Symposium on Circuits and Systems, Monterey, CA, June 1998.
15. S. Aouini, K. Chuai and G. W. Roberts, A low-cost ATE phase signal generation technique for
test applications, in Proceedings of the 2010 IEEE International Test Conference, Austin, TX,
November 2010.
16. C. M. Miller and D. J. McQuate, Jitter analysis of high-speed digital systems, Hewlett-Packard
Journal, pp. 49–56, February 1995.
17. T. J. Yamaguchi, M. Soma, M. Ishida, T. Watanabe and T. Ohmi, Extraction of instantaneous
and RMS sinusoidal jitter using an analytic signal method, IEEE Transactions on Circuits and
Systems II: Analog and Digital Signal Processing, 50(6), pp. 288–298, June 2003.
18. S. C. Kak, The discrete Hilbert transform, Proceedings of the IEEE, 58(4), pp. 585–586, April
1970.