Introduction To Sampling: Lia Petracovici and Joseph M. Rosenblatt
1. Introduction
These notes are meant to introduce Shannon's sampling theorem
and some of its various aspects and generalizations. We will see how
this theorem can be viewed as having its origin in earlier types of interpolation series. We will consider some extensions of sampling to
non-uniform sampling and to general integral transforms. We will try
to summarize the conditions on functions needed to obtain a sampling
theorem, and also describe the error in various sampling expansions.
Sampling is of great practical importance. It has many applications
in engineering and physics; for example, it has applications in signal
processing, data transmission, optics, cryptography, time-varying systems, and boundary value problems. Many people have discovered,
or rediscovered, this sampling theorem during this century; but these
notes will not try to unravel the claims to priority. See Higgins [10]
and Jerri [12] for additional information and references concerning the
history, generalizations, and applications of spectral sampling.
2. The Shannon Sampling Theorem = SST
The Shannon Sampling Theorem was apparently discovered by Shannon and described in a manuscript by 1940, but it was not published
until 1949, after World War II had ended. The principal impact of the
Shannon sampling theorem on information theory is that it allows the
replacement of a continuous band-limited signal by a discrete sequence
of its samples without the loss of any information. It also specifies the
lowest rate (the Nyquist rate of such sample values) that is possible to
use to reproduce the original signal. Higher rates of sampling do have
advantages for establishing bounds, but would not be necessary for a
general signal reconstruction.
There are many variants on how the Fourier transform is normalized.
Here we will take these definitions for the Fourier Transform (FT) and
its inverse transform (INVFT):
\[
\widehat{F}(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} F(x)\,e^{-itx}\,dx,
\qquad
F(x) = \int_{-\infty}^{\infty} \widehat{F}(t)\,e^{itx}\,dt.
\]
L. Petracovici is partially supported by Rosenblatt's NSF Grant DMS-9705228.
J. Rosenblatt is partially supported by NSF Grant DMS-9705228.
\[
F(x) = \sum_{n=-\infty}^{\infty} F\Bigl(\frac{n}{T}\Bigr)\,
\frac{\sin \pi(Tx - n)}{\pi(Tx - n)}. \tag{2.1}
\]
\[
f(t) = \sum_{n=-\infty}^{\infty} f(a + nw)\,
\frac{\sin \frac{\pi}{w}(t - a - nw)}{\frac{\pi}{w}(t - a - nw)}. \tag{3.1}
\]
,1
Actually, he did not call this a cardinal series; the name first appears
in the work of his son J. M. Whittaker in about 1920, see J. M. Whittaker [28].
Perhaps the most obvious question to ask about SST is what happens
if one samples slower or faster than the Nyquist sampling rate. From
the viewpoint of simply sampling a known analytic function, it is really
obvious that one can get into trouble if one samples inappropriately.
For example, if we were sampling the function $\cos(2\pi t)$, and took the
samples to be at values of $t$ which were whole numbers, we might think
the function is constant. See b) in Remark 2.4. The general question
with such functions would be if there is any rate, or way, of sampling
that could give information about the function, or even perhaps be
used to recreate it elsewhere.
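The aliasing in the $\cos(2\pi t)$ example above is easy to exhibit numerically; the following short check (the range of sample times is an arbitrary choice of ours) shows that integer-time samples of this signal are indistinguishable from samples of the constant signal $1$:

```python
import math

# Sampling cos(2*pi*t) only at integer times t = n gives the constant
# value 1, so these samples alone cannot distinguish cos(2*pi*t) from
# the constant signal 1: this sampling rate is too slow for the signal.
samples = [math.cos(2 * math.pi * n) for n in range(-5, 6)]
assert all(abs(s - 1.0) < 1e-9 for s in samples)
```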
3.1. Sampling Bounds. Consider first the following sampling problem. We want to bound the general values of a band-limited signal by
its behavior along a doubly-infinite arithmetic progression of values $n$,
where $n \in \mathbb{Z}$. That is, we take a band-limited signal $f$ and ask for a
bound of the form
\[
\sup_{t\in\mathbb{R}} |f(t)| \le C \sup_{n\in\mathbb{Z}} |f(n)|.
\]
To obtain such a bound, fix $t \in \mathbb{R}$ and define
\[
h_t(x) = \begin{cases}
e^{-itx} & \text{if } -\beta \le x \le \beta; \\
a_t(x) & \text{if } -\pi \le x < -\beta \text{ or } \beta < x \le \pi.
\end{cases}
\]
Here the functions $a_t(x)$ need to be chosen to be continuous and to make
$h_t(x)$ into a continuous $2\pi$-periodic function. We can also arrange for
the function $h_t$ to be infinitely differentiable, so that its Fourier series is
absolutely summable. That is, we write $h_t(x) = \sum_{n=-\infty}^{\infty} c_n e^{-inx}$, where the
coefficients $c_n = c_n(t)$ are absolutely summable.
Then, since $F$ is supported in $[-\beta,\beta]$,
\[
f(t) = \frac{1}{2\pi}\int_{-\pi}^{\pi} F(x)
\sum_{n=-\infty}^{\infty} c_n e^{-inx}\,dx
= \sum_{n=-\infty}^{\infty} c_n \widehat{F}(n).
\]
\[
H_t(x) = \begin{cases}
e^{-i(t-m)x} & \text{if } -\beta \le x \le \beta; \\
b_t(x) & \text{if } -\pi \le x < -\beta \text{ or } \beta < x \le \pi.
\end{cases}
\]
Remark 3.1 (Carleson's inequality). Let us point out that the core issue here,
of getting a bound $C$ which is independent of $t$ for the series
$\sum_{n=-\infty}^{\infty} |c_n(H_t)|$, can be resolved easily also by using Carleson's
inequality. This inequality is the following. Suppose one has a Fourier
series $H(x) = \sum_{n=-\infty}^{\infty} c_n e^{inx}$. Let
$\|H\|_{\ell^1} = \sum_{n=-\infty}^{\infty} |c_n|$, and let
$\|H\|_2 = \bigl(\frac{1}{2\pi}\int_{-\pi}^{\pi} |H(t)|^2\,dt\bigr)^{1/2}$. Then
\[
\|H\|_{\ell^1} \le C\,\|H\|_2^{1/2}\,\|H'\|_2^{1/2} + \bigl|\widehat{H}(0)\bigr|.
\]
This can be used above since actually our derivatives $H_t'(x)$ are being
kept uniformly bounded independently of $t$. How is Carleson's inequality proved? It is easy to see that it is enough to prove the inequality
in the case that $H \ne 0$, but $\widehat{H}(0) = c_0 = 0$. Now, split $H(x)$ into two
sums, $\Sigma_1 = \sum_{n=-N}^{N} c_n e^{inx}$ and $\Sigma_2 = \sum_{|n|>N} c_n e^{inx}$. By the Cauchy--Schwarz inequality,
\[
\|H\|_{\ell^1} \le \sqrt{2N}\,\|H\|_2 + \sqrt{\tfrac{2}{N}}\,\|H'\|_2.
\]
Take $N$ to be the smallest integer greater than or equal to $\|H'\|_2/\|H\|_2$. Notice
that since $c_0 = 0$, we have $\|H\|_2 \le \|H'\|_2$. Hence,
$\frac{\|H'\|_2}{\|H\|_2} \le N \le 2\,\frac{\|H'\|_2}{\|H\|_2}$.
Thus, with this value of $N$, one gets
\[
\|H\|_{\ell^1} \le 2\,\bigl(\|H\|_2\|H'\|_2\bigr)^{1/2}
+ \sqrt{2}\,\bigl(\|H\|_2\|H'\|_2\bigr)^{1/2}.
\]
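A Carleson-type bound of this kind is easy to sanity-check numerically for a trigonometric polynomial with $c_0 = 0$, using Parseval to compute the $L^2$ norms from the coefficients. The combined constant $2+\sqrt{2}$ and the particular coefficient sequence below are illustrative choices of ours:

```python
import math

# Numerical check of a Carleson-type bound
#   ||H||_{l1} <= (2 + sqrt(2)) * (||H||_2 ||H'||_2)^{1/2}
# for H(x) = sum c_n e^{inx} with c_0 = 0, using Parseval:
#   ||H||_2^2 = sum |c_n|^2,   ||H'||_2^2 = sum n^2 |c_n|^2.
c = {n: 1.0 / n**2 for n in range(1, 11)}
c.update({-n: 1.0 / n**2 for n in range(1, 11)})   # c_0 = 0 throughout

l1 = sum(abs(v) for v in c.values())                # ||H||_{l1}
l2 = math.sqrt(sum(v * v for v in c.values()))      # ||H||_2
d2 = math.sqrt(sum((n * v) ** 2 for n, v in c.items()))  # ||H'||_2

bound = (2 + math.sqrt(2)) * math.sqrt(l2 * d2)
assert l1 <= bound
```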
The same analysis as the above can be made in the standardized
band $[-2\pi w, 2\pi w]$. The above was for the case that $w = \frac12$. The
conclusion can be stated as the following proposition.
Proposition 3.2. For any $\beta < 2\pi w$, there exists a constant $C$ depending only on $\beta$ and $w$, such that for any $F \in L^1[-\beta,\beta]$, one has the
bound
\[
\sup_{t\in\mathbb{R}} \bigl|\widehat{F}(t)\bigr|
\le C \sup_{n\in\mathbb{Z}} \Bigl|\widehat{F}\Bigl(\frac{n}{2w}\Bigr)\Bigr|.
\]
Remark 3.3. This theorem has been noticed by many others over the
years. A nice reference for it is the article by Cartwright [6]. Actually,
she derives the existence of such a sampling bound in a more general
setting. See Tschakaloff [25] for a proof that will work in this context, or see
Zygmund [30], (7.22), Vol. II. See also Timan [23], pp. 183-184, where
her result is discussed.
One can think of the sampling bound above in terms of convolutions
too. Let $F \in L^1(\mathbb{R})$. If $G \in L^2(\mathbb{R})$, then the convolution $F * G$
is in $L^2(\mathbb{R})$. Suppose we want the best constant $C$ such that for all
$G \in L^2(\mathbb{R})$, one has
\[
\|F * G\|_2 \le C\,\|G\|_2.
\]
It is well-known that this constant $C$ is $\sup_{t\in\mathbb{R}} 2\pi\,|\widehat{F}(t)|$. The same fact
holds if one is carrying out the analogous operations on the circle $\mathbb{T}$,
i.e. for $2\pi$-periodic functions. But then the constant $C$ is $\sup_{l\in\mathbb{Z}} 2\pi\,|\widehat{F}(l)|$.
The sampling bound above is thus saying this. If a function $F$
is integrable over $[-\beta,\beta]$ where $\beta < \pi$, then there is a constant $C(\beta)$,
depending only on $\beta$, such that if one has
\[
\|F * G\|_{L^2[-\pi,\pi]} \le C\,\|G\|_{L^2[-\pi,\pi]}
\]
for all $2\pi$-periodic functions $G$, then one has
\[
\|F * G\|_{L^2(\mathbb{R})} \le C(\beta)\,C\,\|G\|_{L^2(\mathbb{R})}
\]
for all $G \in L^2(\mathbb{R})$. This is the content of Proposition 3.2 when $w = \frac12$.
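The fact that the best convolution constant is the sup of the transform has an exact discrete analogue which can be verified directly: for circular convolution of finite sequences, the $\ell^2$ operator norm of convolution by $f$ is $\max_l |\hat{f}(l)|$ (no $2\pi$ factor in the discrete normalization). The test sequences below are arbitrary choices of ours:

```python
import cmath
import math

N = 16
f = [1.0, 0.5, -0.25] + [0.0] * (N - 3)          # arbitrary filter
g = [math.sin(0.7 * k) for k in range(N)]         # arbitrary input

def dft(x):
    """Naive discrete Fourier transform of a real sequence."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n)
                for k in range(n)) for j in range(n)]

# circular convolution h = f (*) g
h = [sum(f[k] * g[(n - k) % N] for k in range(N)) for n in range(N)]

l2 = lambda x: math.sqrt(sum(abs(v) ** 2 for v in x))

# Since the DFT diagonalizes circular convolution,
# ||f (*) g||_2 <= max_l |fhat(l)| * ||g||_2.
assert l2(h) <= max(abs(c) for c in dft(f)) * l2(g) + 1e-9
```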
Another interesting aspect of sampling functions of small support
comes out in the above. Indeed, fix $\beta < \pi$ and let $F \in L^1[-\beta,\beta]$.
Define $E_t$ to be the mapping which sends the sequence $s_F = (\widehat{F}(n) :
n \in \mathbb{Z})$ to the value $\widehat{F}(t)$. The Riemann--Lebesgue Lemma says that
$\lim_{|n|\to\infty} \widehat{F}(n) = 0$. Hence, $E_t$ is a well-defined, continuous linear functional on $c_0(\mathbb{Z})$ with $|E_t(s_F)| \le C(\beta)\,\|s_F\|_\infty$. Note: at first $E_t$ is only
being defined on a proper subspace of $c_0(\mathbb{Z})$, but the sampling bound
not only makes the mapping continuous there, but allows its continuous extension to the closure of this subspace in $c_0(\mathbb{Z})$, which is actually all of $c_0(\mathbb{Z})$ in this case. But since the dual space of $c_0(\mathbb{Z})$ is
$\ell^1(\mathbb{Z})$, there is a uniquely determined sequence $(\varphi_t(n) : n \in \mathbb{Z}) \in \ell^1(\mathbb{Z})$
such that $E_t(s_F) = \sum_{n=-\infty}^{\infty} \varphi_t(n)\,s_F(n)$ for all sequences $s_F$. Moreover,
$\|\varphi_t\|_{\ell^1(\mathbb{Z})} \le C(\beta)$. That is, we have an absolutely summable sequence
$(\varphi_t(n))$ such that $\sum_{n=-\infty}^{\infty} |\varphi_t(n)| \le C(\beta)$, and such that we have the abstract interpolation formula
\[
\widehat{F}(t) = \sum_{n=-\infty}^{\infty} \varphi_t(n)\,\widehat{F}(n)
\]
for all $F \in L^1[-\beta,\beta]$. Using the definition of FT, one sees easily that
this means that for all $x \in [-\beta,\beta]$, we have $e^{-itx} = \sum_{n=-\infty}^{\infty} \varphi_t(n)\,e^{-inx}$. By
examining the proof of Proposition 3.2, one can see that this type of
expansion was also being used there, and as a result the coefficients
$\varphi_t(n)$ can be chosen to be continuous in $t$. But one reason to include
this alternative derivation is that it will always work whenever there
is a sampling bound as in Proposition 3.2, even when the samples are
some irregular or non-arithmetic sequence.
Compare this abstract interpolation formula with Shannon's formula
Equation 2.1 or with the cardinal series of Whittaker Equation 3.1.
The coefficients $\varphi_t$ are not uniquely determined as in those formulas,
but they are absolutely summable. It would be perhaps worthwhile
to write down a nice, explicit choice of these coefficients; there may
even already be ones available in the existing literature. But the point
we would like to observe is that Shannon's formula or the cardinal
series of Whittaker could only be better than these expansions if the
coefficients in those interpolation formulas were also absolutely
summable. This would mean there could be a sampling bound as above
even with $\beta = 2\pi w$. However, without the restriction on the support
of $F$ to being well within the band $[-2\pi w, 2\pi w]$, there actually cannot
be such sampling bounds. That is, the constant $C(\beta)$ grows to $\infty$ as $\beta$
tends to $2\pi w$. This means that while SST and the cardinal series in it
give a reconstruction of the signal $f$, they cannot give sampling bounds
of the kind in Proposition 3.2.
First, we prove that bounds do not always exist.
for all $k$, then any integer is in the interval $[\frac12 h_k, \frac32 h_k]$ for at most one
value of $k$. Thus, for large $K$, we have that $|\widehat{F}(\frac12)|$ is approximately 1 and
$|\widehat{F}(n)|$ is uniformly small on the integers.
Remark 3.5. a) See the construction for Proposition 3.6 for another
example of a class of functions which show that there is no such inequality.
b) Sometimes inequalities like this can be shown to fail by constructing a function for which the right-hand side is 0 while the left-hand
side is not. But in this case, SST shows that this is not possible.
Let us now see how the constant in the sampling bound grows as the
support of $F$ gets closer to being all of $[-2\pi w, 2\pi w]$. With no loss in
generality, we can take $w = \frac12$ and work just on the interval $[-\pi,\pi]$. We
claim that the constant $C$ in Proposition 3.2 is on the order of $\log(\delta^{-1})$,
where the support of $F$ is $[-\pi+\delta, \pi-\delta]$.
First, to see that this is the right order to bound a suitable choice of
$C$, we follow the proof of Proposition 3.2 a little more carefully. We
can choose the extension of $e^{-i(t-m)x}$ so that $H_t(x)$ is $2\pi$-periodic and
continuously differentiable. We can arrange also that the derivative
$H_t'(x)$ is bounded by a constant times $\delta^{-1}$, which leads to
\[
\sum_{n=-\infty}^{\infty} \bigl|\widehat{H_t}(n)\bigr| \le C \log \delta^{-1}.
\]
The proof of Proposition 3.2 is thus showing that the constant in the
sampling bound is $O(\log(\delta^{-1}))$.
But also, the constant has to be at least of this order. To see this,
for $N \ge 10$, we construct a function $f_N \in L^1[-\pi + \frac1N, \pi - \frac1N]$ so that
$|\widehat{f_N}(n)| \le C$ for all $n \in \mathbb{Z}$, but such that $|\widehat{f_N}(\frac12)| \ge C \log N$. Indeed,
start with
\[
G_N(x) = \begin{cases}
\sum_{m=1}^{N} (-1)^m e^{imx} & \text{if } |x| \le \pi; \\
0 & \text{if } |x| > \pi.
\end{cases}
\]
Then
\[
\widehat{G_N}(t)
= \sum_{m=1}^{N} (-1)^m\,\frac{1}{2\pi}\int_{-\pi}^{\pi} e^{i(m-t)x}\,dx
= \sum_{m=1}^{N} \frac{1}{t-m}\,\frac{\sin(\pi t)}{\pi}.
\]
So $|\widehat{G_N}(\frac12)| \ge C \log N$. However, it is also clear that for any integer $k$,
we have $|\widehat{G_N}(k)| \le 1$. Now let
\[
F_N(x) = \begin{cases}
G_N(x) & \text{if } |x| \le \pi - \frac1N; \\
0 & \text{if } \pi - \frac1N < |x| \le \pi.
\end{cases}
\]
Hence, $\widehat{F_N}$ is uniformly close to $\widehat{G_N}$, since the strip where they differ
has measure $\frac{2}{N}$ and $|G_N| \le N$ there. This gives
another proof of Proposition 3.4, since the constants $C(\beta, w)$ in Proposition 3.2 do not remain bounded as $\beta$ increases to $2\pi w$. The statement
summarizing the above in general is this one.
3.2. Jittered Sampling. Another important issue in sampling signals is that it may not always be feasible to sample regularly as in the
classical case. The sample points may be inherently irregular for reasons beyond the control of the system, e.g. noise creates irregularity.
Such a sample is called a jittered sample. However, if one has some
control of the rate of sampling, and some control on the bandwidth
of the signal, a jittered sample can be as effective in reconstruction as
an arithmetic sample. The critical theorem that gives such results is a
theorem of Bernstein about entire functions of controlled growth.
Let $f(z)$ be a function analytic in the entire complex plane. We
say that $f$ is of finite type if there exist constants $A$ and $\tau$ such that
for all $z \in \mathbb{C}$, we have $|f(z)| \le A e^{\tau|z|}$. The type of $f$ is the smallest
constant $\tau$ such that for all $\epsilon > 0$ and all sufficiently large $|z|$, we have
$|f(z)| \le e^{(\tau+\epsilon)|z|}$. For example, if $F \in L^1[-\beta,\beta]$, then its FT $f$ can be
extended to all of $\mathbb{C}$, by using the same integral formula, and it is of
finite type. The type of $f$ is $\beta$, if no smaller interval would support the
function $F$.
Bernstein's Theorem. If $f$ is an entire function of type $\tau$ which is
bounded on $\mathbb{R}$, then
\[
\sup_{x\in\mathbb{R}} |f'(x)| \le \tau \sup_{x\in\mathbb{R}} |f(x)|.
\]
Proof. See Zygmund [30], Theorem 7.24, Vol. II, for a complete proof.
We just want to observe that the argument proceeds from a careful inspection of the same Tschakaloff interpolation formula used by
Cartwright.
Now assume that we have a sampling set $S \subset \mathbb{R}$ with the property
that for some $\delta$, to be specified later, one has for any $x \in \mathbb{R}$, there is
some $s \in S$ such that $|x - s| \le \delta$. We take a band-limited signal $f$,
whose INVFT is supported on a band $[-\beta,\beta]$. We want to estimate
$f(t_o)$. For $s \in S$, let $I(t_o, s)$ denote the closed interval between $t_o$ and
$s$. There is no Mean Value Theorem for analytic functions which is
quite as simple as the one for real-valued functions of a real variable,
but we can estimate, using Bernstein's Theorem, that
\[
|f(t_o)| \le |f(t_o) - f(s)| + |f(s)|
\le \sup_{t \in I(t_o,s)} |f'(t)|\,|t_o - s| + |f(s)|
\le \beta\delta \sup_{t\in\mathbb{R}} |f(t)| + \sup_{s\in S} |f(s)|.
\]
Hence, if $\beta\delta < 1$, taking the supremum over $t_o$ gives
\[
\sup_{t\in\mathbb{R}} |f(t)| \le \frac{1}{1-\beta\delta}\,\sup_{s\in S} |f(s)|.
\]
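A jittered sampling bound of this type, $\sup_{\mathbb{R}}|f| \le (1-\beta\delta)^{-1}\sup_S|f|$ for $\beta\delta < 1$, can be illustrated numerically. The test signal, the seed, and the sampling parameters below are illustrative choices of ours; the sample set puts one random point in each interval of length $\delta$, so every point of the range is within $\delta$ of a sample:

```python
import math
import random

def sinc(x):
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def f(t):
    # hypothetical band-limited test signal of type beta = pi
    return 0.8 * sinc(t - 0.25) + 0.3 * sinc(t + 2.1)

beta, delta = math.pi, 0.25          # beta * delta < 1
random.seed(1)
# jittered samples: one random point per interval [k*delta, (k+1)*delta]
samples = [(k + random.random()) * delta for k in range(-200, 200)]

sup_S = max(abs(f(s)) for s in samples)              # sup over the jittered set
sup_R = max(abs(f(k * 0.01)) for k in range(-5000, 5001))  # sup over a fine grid
assert sup_R <= sup_S / (1 - beta * delta)
```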
For a kernel $K(x,t)$ on an interval $I$, one considers transforms
$f(t) = \int_I F(x)\,K(x,t)\,dx$ and seeks a sampling expansion
\[
f(t) = \sum_{n} f(t_n)\,S_n(t).
\]
With the coefficients $c_n = f(t_n) \big/ \int_I |K(x,t_n)|^2\,dx$, this takes the form
\[
f(t) = \sum_{n=1}^{\infty} f(t_n)\,
\frac{\int_I K(x,t)\,\overline{K(x,t_n)}\,dx}{\int_I |K(x,t_n)|^2\,dx}
= \sum_{n=1}^{\infty} f(t_n)\,S_n(t).
\]
Here are some examples of kernels $K(x,t)$ that one might use:
\[
K(x,t) = J_m(xt); \qquad K(x,t) = P_t(x); \qquad K(x,t) = L_t(x);
\]
that is, Bessel, Legendre, and Laguerre kernels respectively.
For the Bessel kernel $K(x,t) = J_m(xt)$ on $I = [0,1]$, with $t_{m,n}$ the $n$-th
positive zero of $J_m$, the sampling function is
\[
S_n(t) = \frac{\int_0^1 x\,J_m(xt)\,J_m(x t_{m,n})\,dx}
{\int_0^1 x\,[J_m(x t_{m,n})]^2\,dx}
= \frac{2\,t_{m,n}\,J_m(t)}{(t_{m,n}^2 - t^2)\,J_{m+1}(t_{m,n})},
\]
so that
\[
f(t) = \sum_{n=1}^{\infty} f(t_{m,n})\,
\frac{2\,t_{m,n}\,J_m(t)}{(t_{m,n}^2 - t^2)\,J_{m+1}(t_{m,n})}.
\]
4.2. Time-Varying Systems. The interpretation of GST for the special case $K(w,t) = e^{-iwt}$ (i.e. the SST) is that $f(t)$ is the output of
an ideal low-pass filter with impulse response
\[
K(t, t_n) = 2w\,S(t,t_n)
= \frac{\sin 2\pi w\bigl(t - \frac{n}{2w}\bigr)}{\pi\bigl(t - \frac{n}{2w}\bigr)},
\]
with the input taken to be $f(t_n) = f(n/2w)$. The interpretation of the GST is that $f(t)$ is the output of a band-limited,
low-pass filter in the sense of some general integral transforms, with
a time-varying impulse response that is related directly to the sampling function $S(t,t_n)$, and with input $\{f(t_n)\}$. This can be done for a
transform-limited function $F(w)$, meaning one for which
\[
f(t) = \int_I F(w)\,K(w,t)\,dw.
\]
We call $(t_n)$ an energy stable sampling sequence if there is a constant $C$ such that
\[
\int_{-\infty}^{\infty} |f(t)|^2\,dt \le C \sum_n |f(t_n)|^2.
\]
One should assume that the summation is over the natural numbers $\mathbb{N}$
or the integers $\mathbb{Z}$.
This condition is one half of the frame condition. Let $B_a$ denote all
functions $f(t)$ which are in $L^2(\mathbb{R})$ and band-limited to $[-a,a]$. We say
that $(e^{it_n x} : n)$ is a frame for $B_a$ if there exist constants $c$ and $C$ such
that for all $f \in B_a$,
\[
c \sum_n |f(t_n)|^2
\le \int_{-\infty}^{\infty} |f(t)|^2\,dt
\le C \sum_n |f(t_n)|^2.
\]
The SST gives the best known simple example of a frame and hence
an energy stable sampling sequence. Let $f \in B_{2\pi w}$ and let $a_n = f(\frac{n}{2w})$.
Then the Parseval identity from classical analysis is saying that we
have
\[
\frac{1}{2w} \sum_{n=-\infty}^{\infty} |a_n|^2
= \frac{1}{2w} \sum_{n=-\infty}^{\infty} \Bigl|f\Bigl(\frac{n}{2w}\Bigr)\Bigr|^2
= \int_{-\infty}^{\infty} |f(t)|^2\,dt.
\]
So SST is showing that every signal $f$ of finite energy $\int_{-\infty}^{\infty} |f(t)|^2\,dt$
and bandwidth $4\pi w$ may be completely recovered at the Nyquist rate of
$2w$ samples per second, but also, the recovery is energy stable in the sense that
a small error in reading sample values produces only a corresponding
small error in the energy of the recovered signal.
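Both facts, cardinal-series reconstruction and the Parseval identity for integer samples of a signal band-limited to $[-\pi,\pi]$ (i.e. $w = \frac12$), can be checked numerically. The test signal below is an illustrative choice of ours, and the series and integrals are truncated and discretized:

```python
import math

def sinc(x):
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def f(t):
    # hypothetical signal band-limited to [-pi, pi]; integer samples suffice
    return 0.7 * sinc(t - 0.3) + 0.2 * sinc(t + 1.6)

N = 2000
# truncated cardinal series: recover f(1/2) from the integer samples f(n)
recon = sum(f(n) * sinc(0.5 - n) for n in range(-N, N + 1))
assert abs(recon - f(0.5)) < 1e-3

# Parseval (w = 1/2): the sum of |f(n)|^2 equals the energy integral
energy_sum = sum(f(n) ** 2 for n in range(-N, N + 1))
dt = 0.01
energy_int = sum(f(k * dt) ** 2 for k in range(-50000, 50000)) * dt
assert abs(energy_sum - energy_int) < 1e-2
```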
It is implicit in the energy stability condition that $(t_n)$ is a set of
uniqueness: that is, if $f \in B_a$ and $f(t_n) = 0$ for all $n = 1, 2, 3, \ldots$,
then $f = 0$. Also, the characters $(e^{it_n x} : n = 1, 2, 3, \ldots)$ must span
$L^2[-a,a]$ because of the bandwidth in question. What the maximum
such bandwidth will be in terms of $(t_n)$ is the focus of the article by
Beurling and Malliavin [3], which depends also on the results in Beurling and Malliavin [4]. Of course, this stability estimate would follow
from $(e^{it_n x} : n = 1, 2, 3, \ldots)$ being a frame for $L^2[-a,a]$, but energy
stability is not as strong as the frame requirement. It is interesting
to ask what is needed for a set of characters to be a frame of a class
of band-limited signals. The results in Duffin and Schaeffer [8] give
good conditions sufficient for a set of characters to be a frame. They
consider the two conditions: 1) there is $d > 0$ and $L < \infty$, such that
for all $n$, $|t_n - nd| \le L$, and 2) there is $\epsilon > 0$, such that for any $m \ne n$,
$|t_m - t_n| \ge \epsilon$. Under conditions 1) and 2), they show that $(e^{it_n x})$ is a
frame for any $B_a$ for which $a < \pi/d$. See also Jaffard [11] for a characterization of when one has the frame condition, extending the work
of Duffin and Schaeffer [8]; and see also the article by Benedetto in
Benedetto and Frazier [1] for a discussion of the frame condition and
reconstruction in this context. We will need to use the result of Duffin
and Schaeffer [8], giving sufficient conditions for a set of characters to
have the frame condition.
It is probably worth making a distinction here between energy stability and what we will call proper energy stability, this being energy
stability in which the data series is always finite. The energy stability
criterion on the face of it does not require that for all $f \in B_a$, the
value of $\sum_n |f(t_n)|^2$ is finite. If this series is infinite, then the bound
is not a useful one because no conclusions about perturbations can be
made. It is not hard to see actually that $\sum_n |f(t_n)|^2$ is finite for all
$f \in B_a$ if and only if the left-hand side of the frame condition holds, i.e.
there is a constant $A < \infty$ such that $\sum_n |f(t_n)|^2 \le A \int_{-\infty}^{\infty} |f(t)|^2\,dt$ for all
$f \in B_a$. Indeed, clearly this inequality is sufficient for the finiteness.
Conversely, if the series is always finite, then we can consider the linear
mapping $T$ given by $T(F) = (\widehat{F}(t_n) : n)$. By assumption, this mapping
is well-defined from $L^2[-a,a]$ to $\ell^2(I)$ where $I = \mathbb{N}$ or $I = \mathbb{Z}$. It is
easy to show that this mapping $T$ has a closed graph. Therefore, $T$ is
continuous, i.e. there exists a finite constant $A$ such that
\[
\sum_n |f(t_n)|^2 \le A \int_{-\infty}^{\infty} |F(x)|^2\,dx
\]
for all $F \in L^2[-a,a]$, and by the Plancherel theorem the right-hand side is a
constant multiple of $\int_{-\infty}^{\infty} |f(t)|^2\,dt$. This is to say, a properly energy stable sampling
sequence $(t_n)$ is nothing more than one for which $(e^{it_n x})$ forms a frame
for $B_a$. For example, the sequence $(\sqrt{|n|} : n \in \mathbb{Z})$ is energy stable
because it contains a smaller sampling set that gives a frame for $B_a$ by
the theorem of Duffin and Schaeffer. Here $a$ is actually not restricted.
In particular, it is a set of uniqueness for all band-limited functions
simultaneously. However, it does not give a frame for any $B_a$ with
$a > 0$, and it is not a properly energy stable sampling sequence for any
$B_a$ with $a > 0$. See Jaffard [11].
We have actually been considering here another type of stability
for a sampling set, which could be called uniform stability. This is
the stability that comes from having a constant $C$ such that for all $f \in B_a$,
\[
\sup_{t\in\mathbb{R}} |f(t)| \le C \sup_n |f(t_n)|.
\]
This means that a small error in reading the sample values produces
only a corresponding small uniform error in the recovered signal. It is
implicit in this type of stability, as with energy stability, that the $(t_n)$
form a set of uniqueness for the corresponding functions and that the
characters corresponding to them span $L^2[-a,a]$.
Uniqueness is an interesting property in itself. It is generally a much
less restrictive condition than either stability condition. For example,
it is well-known that the complex zeros of a band-limited signal $f$ have
certain distributional properties. These properties were first observed
in Titchmarsh [24], and then later are discussed in Levinson [17], Chapter III. See also Rosenblatt [21]. These results say for instance that a
sequence like $t_n = \sqrt{n},\ n = 1, 2, 3, \ldots$, or $t_n = \sqrt{|n|},\ n \in \mathbb{Z}$, is a set
of uniqueness for all bandwidths simultaneously because the density of
the sequence gets bigger without bound. Moreover, such a sequence can
always be modified to have larger and larger gaps and still be a set of
uniqueness. In particular, $(\sqrt{n} : n = 1, 2, 3, \ldots)$ is a set of uniqueness
for every bandwidth class, but is not uniformly stable or energy stable
for any bandwidth class. Also, $(\sqrt{|n|} : n \in \mathbb{Z})$ is energy stable and uniformly stable, but not properly energy stable for any bandwidth. See
Jaffard [11].
Another interesting issue is to understand the relationship between
the two types of stability, energy stability and uniform stability. First,
the Shannon sampling theorem and Proposition 3.4 show that one can
have a properly energy stable sampling method which is not uniformly
stable for the bandwidth in question. Perhaps the reverse is true too,
but we do not have an example of this at this time.
However, it is interesting to note that except for a loss of some bandwidth, uniform stability does imply energy stability, and at least proper
energy stability also implies uniform stability. This can be seen as follows. Assume that $(t_n)$ is a uniformly stable sampling method for bandwidth $[-a,a]$, with constant $C$. Take the function $F(x) = e^{ibx} 1_{[-a,a]}$
and sample $f = \widehat{F}$. Since $|f(t)| \le \frac{1}{\pi|t-b|}$, while $f(b) = \frac{a}{\pi}$, the largest
possible distance $G$ from $b$ to an element of $(t_n)$ satisfies $G \le \frac{C}{a}$. Hence,
the largest possible closed interval in $\mathbb{R}$ which contains no elements of
$(t_n)$ is of length at most $2G \le \frac{2C}{a}$. Now take the expanded lattice
$(\frac{m}{d} : m \in \mathbb{Z})$. Assume that $\frac{1}{d} \ge \frac{3C}{a}$. Then let $L = \frac{1}{3d}$. It follows that
each interval $[\frac{m}{d} - L, \frac{m}{d} + L]$ contains some sample point $t_{n_m}$, since actually $2L \ge \frac{2C}{a}$. For each $m$, choose such a $t_{n_m} \in [\frac{m}{d} - L, \frac{m}{d} + L]$. Since
4.6. Band-pass Functions. Up to now the band region was an interval centered at the origin. What happens if the band region is
$[a - 2\pi w, a + 2\pi w]$? This is actually just a simple translation, so SST
interpolation becomes
\[
f(t) = \sum_{n=-\infty}^{\infty} f\Bigl(\frac{n}{2w}\Bigr)\,
\frac{\sin \pi(2wt - n)}{\pi(2wt - n)}\,
\exp\Bigl(-ia\Bigl(t - \frac{n}{2w}\Bigr)\Bigr).
\]
But it becomes much less trivial knowing how to sample efficiently
if the band region is of the form $[-a - 2\pi w, -a + 2\pi w] \cup [a - 2\pi w, a + 2\pi w]$.
Acknowledgment: We would like to thank R. Kaufman who helped
quite a bit in understanding the behavior of sampling bounds, as presented in Section 3.
References
[1] J. Benedetto and M. Frazier, eds., Wavelets: Mathematics and Applications,
CRC Press, Boca Raton, 1994.
[2] J. Benedetto, Irregular sampling and frames, in Wavelets: A Tutorial in Theory
and Applications, C. K. Chui, ed., Academic Press, Boston, 1992, pp. 445-507.
[3] A. Beurling and P. Malliavin, On the closure of characters and the zeros of
entire functions, Acta Math. 118 (1967), 79-93.
[4] A. Beurling and P. Malliavin, On Fourier transforms of measures with compact
support, Acta Math. 107 (1962), 291-309.
[5] L. L. Campbell, A comparison of the sampling theorems of Kramer and Whittaker, J. SIAM 12 (1964), 117-130.
[6] M. Cartwright, On certain integral functions of order one, Oxford Quarterly
Journal of Math. 7 (1936), 46-55.
[30] A. Zygmund, Trigonometric Series, Vols. I and II, Cambridge University Press,
Cambridge, 1979.
(L. Petracovici) Department of Mathematics, University of Illinois at
Urbana, Urbana, IL 61801
E-mail address :