Introduction to DSP
A signal is any variable that carries information. Examples of the types of signals of interest are speech (telephony, radio, everyday communication), biomedical signals (EEG brain signals), sound and music, video and image, and radar signals (range and bearing).
Discrete-time signals
A discrete-time signal is a sequence of numbers x[n]. Here n is an integer, and x[n] is the nth sample in the sequence. Discrete-time signals are often obtained by sampling continuous-time signals. In this case the nth sample of the sequence is equal to the value of the analogue signal xa(t) at time t = nT:

x[n] = xa(nT), for all integer n.

The sampling period is then equal to T, and the sampling frequency is fs = 1/T.
For this reason, although x[n] is strictly the nth number in the sequence, we often refer to it as the nth sample. We also often refer to "the sequence x[n]" when we mean the entire sequence. Discrete-time signals are often depicted graphically as a stem plot of sample values against n. (This can be plotted using the MATLAB function stem.) The value x[n] is undefined for non-integer values of n.

Sequences can be manipulated in several ways. The sum and product of two sequences x[n] and y[n] are defined as the sample-by-sample sum and product respectively. Multiplication of x[n] by a constant a is defined as the multiplication of each sample value by a. A sequence y[n] is a delayed or shifted version of x[n] if

y[n] = x[n − n0],

with n0 an integer.
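A minimal NumPy sketch of these sequence operations (the sequence values here are made up purely for illustration):

```python
import numpy as np
import matplotlib.pyplot as plt

n = np.arange(-3, 8)                     # sample indices
x = np.where(n >= 0, 0.8 ** n, 0.0)      # an example sequence x[n]
y = 2.0 * x                              # multiplication by a constant a = 2

# A delay by n0 samples, y[n] = x[n - n0], shifts the stem plot right by n0.
n0 = 2
plt.stem(n, x, basefmt=" ", label="x[n]")
plt.stem(n + n0, x, linefmt="r--", markerfmt="ro", basefmt=" ", label="x[n - 2]")
plt.xlabel("n"); plt.legend(); plt.show()
```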
The unit sample sequence
is defined as

δ[n] = 1 for n = 0, and δ[n] = 0 otherwise.
This sequence is often referred to as a discrete-time impulse, or just impulse. It plays the
same role for discrete-time signals as the Dirac delta function does for continuous-time
signals. However, there are no mathematical complications in its definition.
An important aspect of the impulse sequence is that an arbitrary sequence can be represented as a sum of scaled, delayed impulses. For example, a sequence x[n] can be represented as

x[n] = Σ_k x[k] δ[n − k],

where the sum runs over all integers k. Conversely, the unit sample sequence can be expressed as the first backward difference of the unit step sequence u[n]:

δ[n] = u[n] − u[n − 1].
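Both identities are easy to verify numerically; a small sketch:

```python
import numpy as np

x = np.array([2.0, -1.0, 0.5, 3.0])        # arbitrary example sequence on n = 0..3
n = np.arange(len(x))

def delta(n, k):
    """Unit sample delayed by k: delta[n - k]."""
    return (n == k).astype(float)

# x[n] = sum_k x[k] * delta[n - k]
x_rebuilt = sum(x[k] * delta(n, k) for k in range(len(x)))
assert np.allclose(x, x_rebuilt)

# delta[n] = u[n] - u[n - 1]: first backward difference of the unit step
u = (n >= 0).astype(float)
u_delayed = (n >= 1).astype(float)
assert np.allclose(u - u_delayed, delta(n, 0))
```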
The exponential sequence is x[n] = A α^n. If A and α are real numbers then the sequence is real. If 0 < α < 1 and A is positive, then the sequence values are positive and decrease with increasing n. For −1 < α < 0 the sequence alternates in sign, but decreases in magnitude. For |α| > 1 the sequence grows in magnitude as n increases. A sinusoidal sequence has the form x[n] = A cos(ω0 n + φ); the corresponding complex exponential sequence is x[n] = A e^{jω0 n}.
The frequency of this complex sinusoid is ω0, and is measured in radians per sample. The phase of the signal is φ. The index n is always an integer. This leads to some important differences between the properties of discrete-time and continuous-time complex exponentials. Consider the complex exponential with frequency ω0 + 2π:

e^{j(ω0 + 2π)n} = e^{jω0 n} e^{j2πn} = e^{jω0 n},

so complex exponentials whose frequencies differ by an integer multiple of 2π are indistinguishable in discrete time. In the continuous-time case, sinusoidal and complex exponential signals are always periodic. Discrete-time sequences are periodic (with period N) if x[n] = x[n + N] for all n; a discrete-time sinusoid of frequency ω0 is therefore periodic only if ω0 N is an integer multiple of 2π for some integer N.
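A small numerical check of both properties:

```python
import numpy as np

n = np.arange(0, 32)
w0 = 0.3                                   # rad/sample
# Frequencies that differ by 2*pi are indistinguishable in discrete time:
assert np.allclose(np.exp(1j * w0 * n), np.exp(1j * (w0 + 2 * np.pi) * n))

# cos(w0 n) is periodic with period N only if w0 N = 2 pi k for integers N, k.
# w0 = 2*pi*3/8 gives period N = 8; w0 = 0.3 has no integer period at all.
w0 = 2 * np.pi * 3 / 8
x = np.cos(w0 * n)
assert np.allclose(x[:8], x[8:16])         # one period repeats exactly
```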
Discrete-time systems
A discrete-time system is defined as a transformation or mapping operator that maps an input signal x[n] to an output signal y[n]. This can be denoted as y[n] = T{x[n]}.
Linear systems
A system is linear if the principle of superposition applies. Thus if y1[n] is the response of the system to the input x1[n], and y2[n] the response to x2[n], then linearity implies

Additivity: T{x1[n] + x2[n]} = y1[n] + y2[n]
Scaling: T{a x1[n]} = a y1[n]

In all cases a and b are arbitrary constants, so that T{a x1[n] + b x2[n]} = a y1[n] + b y2[n]. This property generalises to many inputs: the response of a linear system to x[n] = Σ_k ak xk[n] is y[n] = Σ_k ak yk[n].
Time-invariant systems
A system is time invariant if a time shift or delay of the input sequence causes a corresponding shift in the output sequence. That is, if y[n] is the response to x[n], then y[n − n0] is the response to x[n − n0]. For example, the accumulator system

y[n] = Σ_{k ≤ n} x[k]

is time invariant, but the compressor system

y[n] = x[Mn],

for M a positive integer (which selects every Mth sample from a sequence), is not; a numerical check of both claims is sketched below.
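The sketch, using finite sequences so the delayed accumulator output is compared where the two overlap:

```python
import numpy as np

def accumulator(x):
    # y[n] = sum of x[k] for k <= n
    return np.cumsum(x)

def compressor(x, M=2):
    # y[n] = x[Mn]: keep every Mth sample
    return x[::M]

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
x_delayed = np.concatenate(([0.0], x[:-1]))          # x[n - 1]

# Accumulator: delaying the input delays the output identically.
assert np.allclose(accumulator(x_delayed)[1:], accumulator(x)[:-1])

# Compressor: delaying the input does NOT simply delay the output.
print(compressor(x_delayed))   # [0. 2. 4.]
print(compressor(x))           # [1. 3. 5.]
```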
Causality
A system is causal if the output at n depends only on the input at n and earlier inputs. For example, the backward difference system y[n] = x[n] − x[n − 1] is causal, but the forward difference system y[n] = x[n + 1] − x[n] is not.
Stability
A system is stable if every bounded input sequence produces a bounded output sequence. The accumulator y[n] = Σ_{k ≤ n} x[k] is an example of an unbounded system, since its response to the unit step u[n] is y[n] = (n + 1) u[n]. This has no finite upper bound.
If the response of a linear system to the impulse δ[n] is h[n], and the system is additionally time invariant, then the response to δ[n − k] is h[n − k]. Writing the input as x[n] = Σ_k x[k] δ[n − k], the output becomes

y[n] = Σ_k x[k] h[n − k].

This expression is called the convolution sum. Therefore, an LTI system has the property that given h[n], we can find y[n] for any input x[n]. Alternatively, y[n] is the convolution of x[n] with h[n], written y[n] = x[n] * h[n].
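A direct implementation of the convolution sum, checked against numpy.convolve (the sequences are arbitrary examples):

```python
import numpy as np

def conv_sum(x, h):
    """Direct evaluation of y[n] = sum_k x[k] h[n - k] for finite sequences."""
    y = np.zeros(len(x) + len(h) - 1)
    for k, xk in enumerate(x):
        y[k:k + len(h)] += xk * h        # each input sample adds a scaled, shifted h
    return y

x = np.array([1.0, 2.0, 3.0])
h = np.array([1.0, 0.5, 0.25])           # example impulse response
assert np.allclose(conv_sum(x, h), np.convolve(x, h))
```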
It is easy to see that the circular convolution product will be equal to the linear convolution product on the interval 0 to N − 1 as long as we choose N ≥ L + P − 1, where L and P are the lengths of the two sequences. The process of augmenting a sequence with zeros to make it of a required length is called zero padding.
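A sketch of the DFT-based equivalence, with and without adequate zero padding:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])       # length L = 4
h = np.array([1.0, -1.0, 0.5])           # length P = 3
linear = np.convolve(x, h)               # length L + P - 1 = 6

N = len(x) + len(h) - 1                  # N >= L + P - 1 avoids time aliasing
# Zero-pad both sequences to length N before taking the DFTs:
circular = np.fft.ifft(np.fft.fft(x, N) * np.fft.fft(h, N)).real
assert np.allclose(circular, linear)

# With N too small, the circular result wraps around (time aliasing):
bad = np.fft.ifft(np.fft.fft(x, 4) * np.fft.fft(h, 4)).real
print(bad)    # differs from linear[:4]
```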
Fast Fourier transforms
The widespread application of the DFT to convolution and spectrum analysis is due to the
existence of fast algorithms for its implementation. The class of methods is referred to as
fast Fourier transforms (FFTs). Consider a direct implementation of an 8-point DFT:

X[k] = Σ_{n=0}^{7} x[n] W_8^{nk}, k = 0, ..., 7, where W_8 = e^{−j2π/8}.

If the factors W_8^{nk} have been calculated in advance (and perhaps stored in a lookup table), then the calculation of X[k] for each value of k requires 8 complex multiplications and 7 complex additions. The 8-point DFT therefore requires 8 × 8 multiplications and 8 × 7 additions. For an N-point DFT these become N² and N(N − 1) respectively. If N =
1024, then approximately one million complex multiplications and one million complex
additions are required. The key to reducing the computational complexity lies in the observation that the same products x[n] W_N^{nk} are effectively calculated many times as the computation proceeds, particularly if the transform is long. The conventional decomposition involves decimation in time, where at each stage an N-point transform is decomposed into two N/2-point transforms. That is, X[k] can be written as

X[k] = G[k] + W_N^k H[k],

where G[k] is the N/2-point DFT of the even-numbered samples of x[n] and H[k] is the N/2-point DFT of the odd-numbered samples. The original N-point DFT can therefore be expressed in terms of two N/2-point DFTs.
The N/2-point transforms can again be decomposed, and the process repeated until only 2-point transforms remain. In general this requires log2 N stages of decomposition. Since each stage requires approximately N complex multiplications, the complexity of the resulting algorithm is of the order of N log2 N. The difference between N² and N log2 N complex multiplications can become considerable for large values of N. For example, if N = 2048 then N²/(N log2 N) = N/log2 N ≈ 186. There are numerous variations of FFT algorithms, and all exploit the basic redundancy in the computation of the DFT. In almost all cases an off-the-shelf implementation of the FFT will be sufficient; there is seldom any reason to implement an FFT yourself.
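As an illustration of the decimation-in-time idea, here is a minimal recursive radix-2 FFT in Python (an off-the-shelf routine such as numpy.fft.fft remains the right choice in practice, as noted above):

```python
import numpy as np

def fft_dit(x):
    """Radix-2 decimation-in-time FFT; len(x) must be a power of two."""
    N = len(x)
    if N == 1:
        return np.asarray(x, dtype=complex)
    G = fft_dit(x[0::2])                              # N/2-point DFT of even samples
    H = fft_dit(x[1::2])                              # N/2-point DFT of odd samples
    W = np.exp(-2j * np.pi * np.arange(N // 2) / N)   # twiddle factors W_N^k
    # X[k] = G[k] + W^k H[k]; the second half follows from periodicity of G, H.
    return np.concatenate([G + W * H, G - W * H])

x = np.random.randn(1024)
assert np.allclose(fft_dit(x), np.fft.fft(x))
```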
Some forms of digital filters are more appropriate than others when real-world effects are
considered. This article looks at the effects of finite word length and suggests that some
implementation forms are less susceptible to the errors that finite word length effects
introduce.
In articles about digital signal processing (DSP) and digital filter design, one thing I've
noticed is that after an in-depth development of the filter design, the implementation is
often just given a passing nod. References abound concerning digital filter design, but
surprisingly few deal with implementation. The implementation of a digital filter can take
many forms. Some forms are more appropriate than others when various real-world
effects are considered. This article examines the effects of finite word length. It suggests
that certain implementation forms are less susceptible than others to the errors introduced
by finite word length effects.
UNIT III
Finite word length
Most digital filter design techniques are really discrete time filter design
techniques. What's the difference? Discrete time signal processing theory assumes
discretization of the time axis only. Digital signal processing is discretization on the time
and amplitude axis. The theory for discrete time signal processing is well developed and
can be handled with deterministic linear models. Digital signal processing, on the other
hand, requires the use of stochastic and nonlinear models. In discrete time signal
processing, the amplitude of the signal is assumed to be a continuous value; that is, the amplitude can be any number accurate to infinite precision. When a digital filter design is
moved from theory to implementation, it is typically implemented on a digital computer.
Implementation on a computer means quantization in time and amplitude, which is true digital signal processing. Computers implement real values in a finite number of bits. Even floating-point numbers in a computer are implemented with finite precision: a finite number of bits and a finite word length. Floating-point numbers have finite precision, but the dynamic scaling afforded by the floating point reduces the effects of finite precision. Digital filters often need real-time performance, which usually requires fixed-point integer arithmetic. With fixed-point implementations there is one word size, typically dictated by the machine architecture. Most modern computers store numbers in two's complement form. Any real number can be represented in two's complement form to infinite precision, as in Equation 1:

x = Xm (−b0 + Σ_{i=1}^{∞} bi 2^{−i})    (Equation 1)

where bi is zero or one and Xm is a scale factor. If the series is truncated to B+1 bits,
where b0 is a sign bit, there is an error between the desired number and the truncated
number. The series is truncated by replacing the infinity sign in the summation with B,
the number of bits in the fixed-point word. The truncated series is no longer able to represent an arbitrary number; it will have an error equal to the part of the series discarded. The statistics of the error depend on how the last bit value is determined, either by truncation or rounding.

Coefficient Quantization
The design of a digital filter by whatever method will eventually lead to an equation that can be expressed in the form of Equation 2:

y[n] = Σ_{k=0}^{M} bk x[n−k] − Σ_{k=1}^{N} ak y[n−k]    (Equation 2)
For a derivation of this result, see Discrete-Time Signal Processing [1]. Truncating the value (rounding down) produces slightly different statistics. Multiplying two B-bit variables results in a 2B-bit result. This 2B-bit result must be rounded and stored into a B-bit storage location. This rounding occurs at every multiplication point.
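A quick simulation of the two error statistics, assuming (hypothetically) that the exact products are uniformly distributed in [−1, 1) and that Q = 2^−B is the quantization step:

```python
import numpy as np

B = 7                                       # fractional bits kept after quantization
Q = 2.0 ** (-B)                             # quantization step
x = np.random.uniform(-1, 1, 100_000)       # stand-in for exact 2B-bit products

e_round = np.round(x / Q) * Q - x           # rounding: error in (-Q/2, Q/2]
e_trunc = np.floor(x / Q) * Q - x           # two's complement truncation: (-Q, 0]

print(e_round.mean(), e_round.var())        # mean ~ 0,    variance ~ Q**2 / 12
print(e_trunc.mean(), e_trunc.var())        # mean ~ -Q/2, variance ~ Q**2 / 12
```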
Scaling
We don't often think about scaling when using floating-point calculations because the computer scales the values dynamically. Scaling becomes an issue when using fixed-point arithmetic, where calculations can overflow or underflow. In a filter with multiple stages, or more than a few coefficients, calculations can easily overflow the word length. Scaling is required to prevent overflow and underflow and, if placed strategically, can also help offset some of the effects of quantization.
Signal Flow Graphs
Signal flow graphs, a variation on block diagrams, give a slightly more compact notation. A signal flow graph has nodes and branches. The examples shown here use a node as a summing junction and a branch as a gain. All inputs into a node are summed, while any signal through a branch is scaled by the gain along the branch. If a branch contains a delay element, it is noted by a z⁻¹ branch gain. Figure 2 is an example of the basic elements of a signal flow graph. Equation 4 results from the signal flow graph in Figure 2.
Z-Transform
For a linear time-invariant filter, the transfer function H(z) is the Z-transform of the impulse response:

H(z) = Y(z) / X(z),

and evaluating it on the unit circle, z = e^{jω}, gives the frequency response H(e^{jω}). If the input is the sinusoid x(k) = cos(ωk), then the steady-state output is a sinusoid of the same frequency:

y(k) = |H(e^{jω})| cos(ωk + ∠H(e^{jω})).

If the input sinusoid has an amplitude of one and a phase of zero, then the output is a sinusoid (of the same frequency) with magnitude |H(e^{jω})| and phase ∠H(e^{jω}).
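A quick numerical check of this sinusoidal steady-state relation, using an assumed first-order example filter (the coefficients are invented for illustration):

```python
import numpy as np
from scipy import signal

b, a = [0.5], [1.0, -0.5]                  # example: y[k] = 0.5 y[k-1] + 0.5 x[k]
w0 = 0.4                                   # input frequency, rad/sample
_, H = signal.freqz(b, a, worN=[w0])       # H(e^{j w0})
print(np.abs(H[0]), np.angle(H[0]))        # predicted output magnitude and phase

# Verify against the steady-state portion of the actual response to cos(w0 k):
k = np.arange(2000)
y = signal.lfilter(b, a, np.cos(w0 * k))
expected = np.abs(H[0]) * np.cos(w0 * k + np.angle(H[0]))
assert np.allclose(y[-100:], expected[-100:], atol=1e-6)   # transient has died out
```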
However, to implement this discrete time filter, finite precision arithmetic (even if it is
floating point) is used.
This implementation is a DIGITAL FILTER.
There are two main effects which occur when finite precision arithmetic is used to implement a DIGITAL FILTER: multiplier coefficient quantization and signal quantization.
1. Multiplier coefficient quantization
The multiplier coefficient value has been quantized to a six-bit (finite precision) value. The value of the filter coefficient which is actually implemented is 52/64, or 0.8125.
AS A RESULT, THE TRANSFER FUNCTION CHANGES!
The magnitude frequency response of the third-order direct-form filter (with the gain or scaling coefficient removed) changes visibly as a result; the sketch below illustrates such a comparison.
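The original filter's coefficients are not listed here, so the following Python sketch uses a stand-in third-order lowpass (designed with scipy.signal.butter) and rounds each coefficient to the nearest multiple of 1/64, matching the 52/64 granularity mentioned above:

```python
import numpy as np
from scipy import signal
import matplotlib.pyplot as plt

# Hypothetical stand-in for the third-order direct-form filter in the text:
b, a = signal.butter(3, 0.25)                # example 3rd-order lowpass

def quantize(c, bits=6):
    """Round coefficients to the nearest multiple of 2**-bits (1/64 for 6 bits)."""
    q = 2.0 ** bits
    return np.round(c * q) / q

bq, aq = quantize(b), quantize(a)
w, H = signal.freqz(b, a)
_, Hq = signal.freqz(bq, aq)
plt.plot(w, 20 * np.log10(np.abs(H)), label="exact coefficients")
plt.plot(w, 20 * np.log10(np.abs(Hq)), "--", label="6-bit coefficients")
plt.xlabel("rad/sample"); plt.ylabel("dB"); plt.legend(); plt.show()
```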
2. Signal quantization
The signals in a DIGITAL FILTER must also be represented by finite, quantized binary values. There are two main consequences of this: a finite RANGE for signals (i.e., a maximum value), and limited RESOLUTION (the smallest value is the least significant bit). For n-bit two's complement fixed-point numbers the representable range is

−2^{n−1} ≤ x ≤ 2^{n−1} − 1.

If two numbers are added (or multiplied by an integer value) then the result can be larger than the most positive number or smaller than the most negative number. When this happens, an overflow has occurred. If two's complement arithmetic is used, then the effect of overflow is to CHANGE the sign of the result, and a severe, large-amplitude nonlinearity is introduced.
For useful filters, OVERFLOW cannot be allowed. To prevent overflow, the digital
hardware must be capable of representing the largest number which can occur. It may be
necessary to make the filter internal word length larger than the input/output signal word
length or reduce the input signal amplitude in order to accommodate signals inside the
DIGITAL FILTER.
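A small demonstration of the sign change caused by two's complement overflow, and of saturating the result instead (NumPy's 8-bit integers wrap around just as fixed-point hardware does):

```python
import numpy as np

# 8-bit two's complement: representable range is -128 .. +127.
a = np.int8(100)
b = np.int8(60)
total = a + b                # true sum is 160, outside the range
print(total)                 # prints -96: the sign has flipped (wraparound)

# Saturating the result instead bounds the nonlinearity:
sat = np.clip(int(a) + int(b), -128, 127)
print(sat)                   # 127
```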
Due to the limited resolution of the digital signals used to implement the DIGITAL
FILTER, it is not possible to represent the result of all DIVISION operations exactly and
thus the signals in the filter must be quantized.
The nonlinear effects due to signal quantization can result in limit cycles - the filter
output may oscillate when the input is zero or a constant. In addition, the filter may
exhibit dead bands - where it does not respond to small changes in the input signal
amplitude. The effects of this signal quantization can be modeled by adding an error source e(k) at each quantization point, where the error due to quantization (truncation of a two's complement number) is bounded by

−Q < e(k) ≤ 0,

with Q the quantization step (the value of the least significant bit).
By superposition, one can determine the effect on the filter output due to each quantization source. To determine the internal word length required to prevent overflow and the error at the output of the DIGITAL FILTER due to quantization, find the GAIN from the input to every internal node. Either increase the internal word length so that overflow does not occur, or reduce the amplitude of the input signal. Then find the GAIN from each quantization point to the output. Since the maximum value of e(k) is known, a bound on the largest error at the output due to signal quantization can be determined using convolution summation. Convolution summation (similar to the bounded-input bounded-output stability requirement) states that if

|x(k)| ≤ M for all k,

then

|y(k)| ≤ M Σ_k |h(k)|,

where h(k) is the impulse response from the point in question to the output; the sum Σ_k |h(k)| is the L1 norm used below.
Computing the norm for the third-order direct-form filter:
input node 3, output node 8
L1 norm between (3, 8) ( 17 points) : 1.267824
L1 norm between (3, 4) ( 15 points ) : 3.687712
L1 norm between (3, 5) ( 15 points ) : 3.685358
L1 norm between (3, 6) ( 15 points ) : 3.682329
L1 norm between (3, 7) ( 13 points ) : 3.663403
MAXIMUM = 3.687712
L1 norm between (4, 8) ( 13 points ) : 1.265776
L1 norm between (4, 8) ( 13 points ) : 1.265776
L1 norm between (4, 8) ( 13 points ) : 1.265776
L1 norm between (8, 8) ( 2 points ) : 1.000000
SUM = 4.797328
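The listing above comes from an analysis tool applied to the specific filter. The following sketch shows how such an L1 norm can be estimated for an arbitrary filter by truncating its impulse response; the third-order lowpass here is a hypothetical stand-in, not the filter analyzed above:

```python
import numpy as np
from scipy import signal

# Worst-case output bound via convolution summation: if |x(k)| <= M,
# then |y(k)| <= M * sum_k |h(k)|, the L1 norm of the impulse response.
b, a = signal.butter(3, 0.25)               # hypothetical example filter
impulse = np.zeros(500)
impulse[0] = 1.0
h = signal.lfilter(b, a, impulse)           # truncated impulse response
l1 = np.sum(np.abs(h))
print(l1)   # multiply the quantization step by this to bound the output error
```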
An alternate filter structure can be used to implement the same ideal transfer function. Note that the same coefficient quantization as for the direct-form filter (six bits) does not have the same effect on the transfer function. This is because of the reduced sensitivity of this structure to its coefficients (a general property of integrator-based ladder structures and wave digital filters, which have a maximum power transfer characteristic).
# LDI3 Multipliers:
# s1 = 0.394040030717361
# s2 = 0.659572897901019
# s3 = 0.650345952330870
Note that all coefficient values are less than unity and that only three multiplications are
required. There is no gain or scaling coefficient. More adders are required than for the
direct form structure.
Of course, a finite-duration impulse response (FIR) filter could be used. It will still have an error at the output due to signal quantization, but this error is bounded by the number of multiplications. An FIR filter cannot be unstable for bounded inputs and coefficients, and exact linear phase is possible by using symmetric or anti-symmetric coefficients. But, as a rough rule, an FIR filter of order 100 would be required to build a filter with the same selectivity as a fifth-order recursive (infinite-duration impulse response, IIR) filter.
Effects of finite word length
Quantization and multiplication errors
Multiplication of two M-bit words will yield a 2M-bit product, which is truncated or rounded to an M-bit word. Suppose that the 2M-bit number represents an exact value; then:

exact value: x′ (2M bits); digitized value: x (M bits); error: e = x − x′

Truncation
x is represented by (M − 1) bits, the remaining least significant bits of x′ being discarded.
Quantization is a nonlinearity which, when introduced into a control loop, can lead to:
- Steady-state errors
- Limit cycles
Stable limit cycles generally occur in control systems with lightly damped poles; detailed nonlinear analysis or simulation may be required to quantify their effect. Methods of reducing the effects are:
- Larger word sizes
- Cascade or parallel implementations
- Slower sample rates
Integrator Offset
Consider the approximate integral term u(k) = u(k − 1) + Ki T e(k). If the increment Ki T e(k) is smaller than the quantization step, it is quantized to zero: the integrator stops accumulating and a steady-state offset can remain.
Practical features for digital controllers
Scaling
All microprocessors work with finite-length words: 8, 16, 32 or 64 bits. The values of all input, output and intermediate variables must lie within the range of the chosen word length. This is done by appropriate scaling of the variables. The goal of scaling is to ensure that neither underflows nor overflows occur during arithmetic processing.
Range-checking
Check that the output to the actuator is within its capability and saturate
the output value if it is not. It is often the case that the physical causes of saturation are
variable with temperature, aging and operating conditions.
Roll-over
Overflow into the sign bit in output data may cause a DAC to switch from a high positive value to a high negative value; this can have very serious consequences for the actuator and plant.
Scaling for fixed point arithmetic
Scaling can be implemented by shifting binary values left or right to preserve satisfactory dynamic range and signal-to-quantization-noise ratio. Scale so that m is the smallest positive integer for which the scaled variables fit within the range of the chosen word length.
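A minimal sketch of fixed-point scaling by shifting, assuming a hypothetical 16-bit machine with a Q1.15-style fraction format (the values and the format are illustrative choices):

```python
import numpy as np

def to_fixed(x, frac_bits=15):
    """Quantize a real value to a Q1.15-style integer (hypothetical format)."""
    return np.int32(round(x * (1 << frac_bits)))

x = to_fixed(0.9)
c = to_fixed(0.8)
prod = x * c                     # raw product carries 30 fractional bits
y = prod >> 15                   # shift right to rescale back to 15 bits
print(y / (1 << 15))             # ~0.72, as expected for 0.9 * 0.8

m = 1                            # scale the input down by 2**-m for headroom
x_scaled = x >> m                # a right shift divides by two
```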
UNIT II
Filter design
1 Design considerations: a framework
The center of symmetry is indicated by the dotted line. The process of linear-phase filter design involves choosing the a[n] values to obtain a filter with a desired frequency response. This is not always possible, however: the frequency response for a type II filter, for example, has the property that it is always zero for ω = π, and is therefore not appropriate for a highpass filter. Similarly, filters of type III and type IV introduce a 90° phase shift, and have a frequency response that is always zero at ω = 0, which makes them unsuitable as lowpass filters. Additionally, the type III response is always zero at ω = π, making it unsuitable as a highpass filter. The type I filter is the most versatile of the four.
Linear-phase filters can be thought of in a different way. Recall that a linear phase characteristic simply corresponds to a time shift or delay. Consider now a real FIR filter with an impulse response that satisfies the even symmetry condition h[n] = h[−n]. In the window design method, the desired impulse response is multiplied by a finite-length window w[n], and the resulting frequency response H(e^{jω}) is the convolution of the desired response with the window's transform W(e^{jω}). Increasing the length N of h[n] reduces the main lobe width and hence the transition width of the overall response. The side lobes of W(e^{jω}) affect the pass-band and stop-band tolerance of H(e^{jω}). This can be controlled by changing the shape of the window.
Changing N does not affect the side lobe behavior. Some commonly used windows for filter design are the rectangular, Bartlett (triangular), Hanning, Hamming, Blackman and Kaiser windows. All windows trade off a reduction in side lobe level against an increase in main lobe width. This is demonstrated below in a plot of the frequency response of each of the windows.
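A sketch that reproduces such a comparison; the window names follow scipy.signal.get_window, where "boxcar" is the rectangular window and "hann" the Hanning window:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import get_window

N = 51
for name in ["boxcar", "bartlett", "hann", "hamming", "blackman"]:
    w = get_window(name, N)
    W = np.fft.fftshift(np.fft.fft(w, 4096))          # zero-padded for a smooth plot
    mag = 20 * np.log10(np.abs(W) / np.abs(W).max() + 1e-12)
    freq = np.linspace(-np.pi, np.pi, 4096, endpoint=False)
    plt.plot(freq, mag, label=name)
plt.ylim(-120, 5); plt.xlabel("rad/sample"); plt.ylabel("dB"); plt.legend(); plt.show()
```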
The Kaiser window has an adjustable shape parameter that can be used to explicitly tune the characteristics. In practice, the window shape is chosen first based on pass-band and stop-band tolerance requirements. The window size is then determined based on transition width requirements. An alternative is the frequency sampling method: to determine hd[n] from Hd(e^{jω}), one can sample Hd(e^{jω}) closely and use a large inverse DFT.
The resulting filter will have a frequency response that is exactly the same as the original response at the sampled frequencies. Note that it is also necessary to specify the phase of the desired response Hd(e^{jω}), and it is usually chosen to be a linear function of frequency to ensure a linear-phase filter. Additionally, if a filter with real-valued coefficients is required, then additional constraints have to be enforced. The actual frequency response H(e^{jω}) of the filter h[n] still has to be determined. The z-transform of the impulse response is

H(z) = Σ_{n=0}^{N−1} h[n] z^{−n}.
This expression can be used to find the actual frequency response of the filter obtained, which can be compared with the desired response. The method described only guarantees correct frequency response values at the points that were sampled. This sometimes leads to excessive ripple at intermediate points.
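A minimal frequency-sampling sketch; the length, cutoff and delay are arbitrary illustrative choices:

```python
import numpy as np

N = 33                                       # odd length, type I linear phase
k = np.arange(N)
# Desired magnitude: an ideal lowpass sampled at the N DFT frequencies.
Hd_mag = np.where(np.minimum(k, N - k) <= 8, 1.0, 0.0)
# Linear phase corresponding to a delay of (N - 1)/2 samples:
Hd = Hd_mag * np.exp(-1j * 2 * np.pi * k * (N - 1) / 2 / N)
h = np.fft.ifft(Hd).real                     # impulse response (real by symmetry)

# The response is exact at the N sample points but can ripple in between;
# check it on a dense grid, e.g. with scipy.signal.freqz(h).
```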
As an example of the efficiency of recursive structures, consider the system y[n] = (1/2) y[n − 1] + x[n]: each output sample requires only one multiplication and one addition, so y[n] is easy to calculate. IIR filter structures can therefore be far more computationally efficient than FIR filters, particularly for long impulse responses. FIR filters are stable for h[n] bounded, and can be made to have a linear phase response. IIR filters, on the other hand, are stable only if the poles are inside the unit circle, and have a phase response that is difficult to specify. The general approach taken is to specify the magnitude response, and regard the phase as acceptable. This is a disadvantage of IIR filters. IIR filter design is discussed in most DSP texts.
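A small demonstration of the efficiency argument, using the first-order recursion above (the 50-tap FIR comparison is an illustrative construction):

```python
import numpy as np
from scipy import signal

# The one-multiply recursion y[n] = 0.5*y[n-1] + x[n] has the infinite-duration
# impulse response h[n] = 0.5**n:
x = np.zeros(50)
x[0] = 1.0
y = signal.lfilter([1.0], [1.0, -0.5], x)
assert np.allclose(y, 0.5 ** np.arange(50))

# An FIR filter matching it over this range needs one multiply per tap:
h_fir = 0.5 ** np.arange(50)                 # truncated impulse response, 50 taps
y_fir = np.convolve(x, h_fir)[:50]           # 50 multiplies per output sample
assert np.allclose(y, y_fir)
```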
UNIT V
DSP Processor- Introduction
DSP processors are microprocessors designed to perform digital signal processing
—the mathematical manipulation of digitally represented signals. Digital signal
processing is one of the core technologies in rapidly growing application areas such as
wireless communications, audio and video processing, and industrial control. Along with
the rising popularity of DSP applications, the variety of DSP-capable processors has
expanded greatly since the introduction of the first commercially successful DSP chips in
the early 1980s. Market research firm Forward Concepts projects that sales of DSP
processors will total U.S. $6.2 billion in 2000, a growth of 40 percent over 1999. With
semiconductor manufacturers vying for bigger shares of this booming market, designers’
choices will broaden even further in the next few years. Today’s DSP processors (or
“DSPs”) are sophisticated devices with impressive capabilities. In this paper, we
introduce the features common to modern commercial DSP processors, explain some of
the important differences among these devices, and focus on features that a system
designer should examine to find the processor that best fits his or her application.
DSP processors include one or more dedicated address generation units, which form the addresses required for operand accesses in parallel with the execution of arithmetic instructions. In contrast, general-purpose processors often require extra cycles to generate the addresses needed to load operands. DSP processor address generation units typically support a selection of addressing modes tailored to DSP applications. The most common of these is
register-indirect addressing with post-increment, which is used in situations where a repetitive computation is performed on data stored sequentially in memory. Modulo addressing is often supported, to simplify the use of circular buffers (a software sketch of circular buffering appears at the end of this section). Some processors also support bit-reversed addressing, which increases the speed of certain fast Fourier
computations, most DSP processors provide special support for efficient looping. Often, a
special loop or repeat instruction is provided, which allows the programmer to implement
a for-next loop without expending any instruction cycles for updating and testing the loop
counter or branching back to the top of the loop. Finally, to allow low-cost, high-
performance input and output, most DSP processors incorporate one or more serial or
parallel I/O interfaces, and specialized I/O handling mechanisms such as low-overhead
interrupts and direct memory access (DMA) to allow data transfers to proceed with little
or no intervention from the rest of the processor. The rising popularity of DSP functions
such as speech coding and audio processing has led designers to consider implementing
DSP on general-purpose processors such as desktop CPUs and microcontrollers. Nearly
all general-purpose processor manufacturers have responded by adding signal processing
capabilities to their chips. Examples include the MMX and SSE instruction set extensions
to the Intel Pentium line, and the extensive DSP-oriented retrofit of Hitachi’s SH-2
microcontroller to form the SH-DSP. In some cases, system designers may prefer to use a
general-purpose processor rather than a DSP processor. Although general-purpose
processor architectures often require several instructions to perform operations that can
be performed with just one DSP processor instruction, some general-purpose processors
run at extremely fast clock speeds. If the designer needs to perform non-DSP processing, then using a general-purpose processor for both DSP and non-DSP processing could reduce the system parts count and lower costs versus using a separate DSP processor and general-purpose microprocessor. Furthermore, some popular general-purpose processors
feature a tremendous selection of application development tools. On the other hand,
because general-purpose processor architectures generally lack features that simplify
DSP programming, software development is sometimes more tedious than on DSP
processors and can result in awkward code that’s difficult to maintain. Moreover, if
general-purpose processors are used only for signal processing, they are rarely cost-
effective compared to DSP chips designed specifically for the task. Thus, at least in the
short run, we believe that system designers will continue to use traditional DSP
processors for the majority of DSP intensive applications. We focus on DSP processors in
this paper.
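To make the earlier point about modulo addressing concrete, the following Python sketch emulates a circular-buffer FIR filter in software. It is a hypothetical illustration of the addressing pattern only (the CircularFIR class and its coefficients are invented for this example); a real DSP performs the wraparound in its address generation unit with no per-sample overhead:

```python
import numpy as np

class CircularFIR:
    """Software sketch of modulo-addressed FIR filtering (hypothetical example)."""
    def __init__(self, coeffs):
        self.h = np.asarray(coeffs, float)
        self.buf = np.zeros(len(coeffs))    # circular delay line
        self.pos = 0                        # write pointer

    def step(self, x):
        self.buf[self.pos] = x              # overwrite the oldest sample
        # Multiply-accumulate over the delay line, newest sample first;
        # the modulo gives the wraparound a hardware AGU provides for free.
        acc = 0.0
        for k in range(len(self.h)):
            acc += self.h[k] * self.buf[(self.pos - k) % len(self.buf)]
        self.pos = (self.pos + 1) % len(self.buf)
        return acc

fir = CircularFIR([0.25, 0.25, 0.25, 0.25])          # 4-tap moving average
print([fir.step(v) for v in [4.0, 4.0, 4.0, 4.0]])   # ramps up to 4.0
```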
Applications
DSP processors find use in an extremely diverse array of applications, from radar
systems to consumer electronics. Naturally, no one processor can meet the needs of all or
even most applications. Therefore, the first task for the designer selecting a DSP
processor is to weigh the relative importance of performance, cost, integration, ease of
development, power consumption, and other factors for the application at hand. Here
we’ll briefly touch on the needs of just a few classes of DSP applications. In terms of
dollar volume, the biggest applications for digital signal processors are inexpensive, high-
volume embedded systems, such as cellular telephones, disk drives (where DSPs are used
for servo control), and portable digital audio players. In these applications, cost and
integration are paramount. For portable, battery-powered products, power consumption is
also critical. Ease of development is usually less important; even though these
applications typically involve the development of custom software to run on the DSP and
custom hardware surrounding the DSP, the huge manufacturing volumes justify
expending extra development effort.
A second important class of applications involves processing large volumes of
data with complex algorithms for specialized needs. Examples include sonar and seismic
exploration, where production volumes are lower, algorithms more demanding, and
product designs larger and more complex. As a result, designers favor processors with
maximum performance, good ease of use, and support for multiprocessor configurations.
In some cases, rather than designing their own hardware and software from scratch,
designers assemble such systems using off-the-shelf development boards, and ease their
software development tasks by using existing function libraries as the basis of their
application software.
Many DSP processors include on-chip phase-locked loops (PLLs) that allow the use of a lower-frequency external clock to generate the needed high-frequency clock on chip.
Some floating-point chips provide relatively little (or no) on-chip memory, but feature
large external data buses. For example, the Texas Instruments TMS320C30 provides 6K
words of on-chip memory, one 24-bit external address bus, and one 13-bit external
address bus. In contrast, the Analog Devices ADSP-21060 provides 4 Mbits of memory
on-chip that can be divided between program and data memory in a variety of ways. As
with most DSP features, the best combination of memory organization, size, and number
of external buses is heavily application-dependent.
Ease of Development
The degree to which ease of system development is a concern depends on the application.
Engineers performing research or prototyping will probably require tools that make
system development as simple as possible. On the other hand, a company developing a
next-generation digital cellular telephone may be willing to suffer with poor development
tools and an arduous development environment if the DSP chip selected shaves $5 off the
cost of the end product. (Of course, this same company might reach a different
conclusion if the poor development environment results in a three-month delay in getting
their product to market!) That said, items to consider when choosing a DSP are software
tools (assemblers, linkers, simulators, debuggers, compilers, code libraries, and real-time
operating systems), hardware tools (development boards and emulators), and higher-
level tools (such as block-diagram based code-generation environments). A design flow
using some of these tools is illustrated in Figure 5. A fundamental question to ask when
choosing a DSP is how the chip will be programmed. Typically, developers choose either
assembly language, a high-level language— such as C or Ada—or a combination of both.
Surprisingly, a large portion of DSP programming is still done in assembly language.
Because DSP applications have voracious number-crunching requirements, programmers
are often unable to use compilers, which often generate assembly code that executes
slowly. Rather, programmers can be forced to hand-optimize assembly code to lower
execution time and code size to acceptable levels. This is especially true in consumer
applications, where cost constraints may prohibit upgrading to a higher- performance
DSP processor or adding a second processor. Users of high-level language compilers
often find that the compilers work better for floating-point DSPs than for fixed-point
DSPs, for several reasons. First, most high-level languages do not have native support for
fractional arithmetic. Second, floating-point processors tend to feature more regular, less
restrictive instruction sets than smaller, fixed-point processors, and are thus better
compiler targets. Third, as mentioned, floating-point processors typically support larger memory spaces than fixed-point processors, and are thus better able to accommodate compiler-generated code, which tends to be larger than hand-crafted assembly code. VLIW-based DSP processors, which
typically use simple, orthogonal RISC-based instruction sets and have large register files,
are somewhat better compiler targets than traditional DSP processors. However, even
compilers for VLIW processors tend to generate code that is inefficient in comparison to
hand-optimized assembly code. Hence, these processors, too, are often programmed in
assembly language—at least to some degree. Whether the processor is programmed in a
high-level language or in assembly language, debugging and hardware emulation tools
deserve close attention since, sadly, a great deal of time may be spent with them. Almost
all manufacturers provide instruction set simulators, which can be a tremendous help in
debugging programs before hardware is ready. If a high-level language is used, it is
important to evaluate the capabilities of the high-level language debugger: will it run with
the simulator and/or the hardware emulator? Is it a separate program from the assembly-
level debugger that requires the user to learn another user interface? Most DSP vendors
provide hardware emulation tools for use with their processors. Modern processors
usually feature on-chip debugging/emulation capabilities, often accessed through a serial
interface that conforms to the IEEE 1149.1 JTAG standard for test access ports. This
serial interface allows scan-based emulation—programmers can load breakpoints through
the interface, and then scan the processor’s internal registers to view and change the
contents after the processor reaches a breakpoint.
Scan-based emulation is especially useful because debugging may be accomplished
without removing the processor from the target system. Other debugging methods, such
as pod-based emulation, require replacing the processor with a special processor emulator
pod. Off-the-shelf DSP system development boards are available from a variety of
manufacturers, and can be an important resource. Development boards can allow
software to run in real-time before the final hardware is ready, and can thus provide an
important productivity boost. Additionally, some low-production-volume systems may
use development boards in the final product.
Multiprocessor Support
Certain computationally intensive applications with high data rates (e.g., radar and sonar)
often demand multiple DSP processors. In such cases, ease of processor interconnection
(in terms of time to design interprocessor communications circuitry and the cost of
linking processors) and interconnection performance (in terms of communications
throughput, overhead, and latency) may be important factors. Some DSP families—
notably the Analog Devices ADSP-2106x—provide special-purpose hardware to ease
multiprocessor system design. ADSP-2106x processors feature bidirectional data and
address buses coupled with six bidirectional bus request lines. These allow up to six
processors to be connected together via a common external bus with elegant bus
arbitration. Moreover, a unique feature of the ADSP- 2106x processor connected in this
way is that each processor can access the internal memory of any other ADSP-2106x on
the shared bus. Six four-bit parallel communication ports round out the ADSP-2106x’s
parallel processing features. Interestingly, Texas Instruments' newest floating-point processor, the VLIW-based TMS320C67xx, does not currently provide similar hardware support for multiprocessor designs, though it is possible that future family members will address this issue.