Introduction to Orthogonal Transforms
With Applications in Data Processing and Analysis
RUYE WANG
Harvey Mudd College, California, USA
CAMBRIDGE UNIVERSITY PRESS
Cambridge, New York, Melbourne, Madrid, Cape Town,
Singapore, São Paulo, Delhi, Mexico City
Cambridge University Press
The Edinburgh Building, Cambridge CB2 8RU, UK
Published in the United States of America by Cambridge University Press, New York
www.cambridge.org
Information on this title: www.cambridge.org/9780521516884
© Cambridge University Press 2012
A catalogue record for this publication is available from the British Library
Preface
When a straight line standing on a straight line makes the adjacent angles equal to one
another, each of the equal angles is right, and the straight line standing on the other
is called a perpendicular to that on which it stands.
— Euclid, Elements, Book 1, definition 10
The transform methods covered in the book are a collection of both old and
new ideas ranging from the classical Fourier series expansion that goes back
almost 200 years, to some relatively recent thoughts such as the various origins
of the wavelet transform. While all of these ideas were originally developed with
different goals and applications in mind, from solving the heat equation to the
analysis of seismic data, they can all be considered to belong to the same family,
based on the common mathematical framework they all share, and their similar
applications in data processing and analysis. The discussions of specific methods
and algorithms in the chapters will all be approached from such a unified point
of view.
Before the specific discussion of each of the methods, let us first address a fundamental issue: why do we need to carry out an orthogonal transform in the first place? A signal, as the measurement of a certain variable (e.g., the temperature of a physical process), tends to vary continuously and smoothly, as the energy associated with the physical process is most probably distributed relatively evenly
in both space and time. Most such spatial or temporal signals are likely to be
correlated, in the sense that, given the value of a signal at a certain point in
space or time, one can predict with reasonable confidence that the signal at a
neighboring point will take a similar value. Such everyday experience is due to
the fundamental nature of the physical world, governed by the principles of minimum energy and maximum entropy, in which abrupt changes and discontinuities, typically caused by an energy surge of some kind, are relatively rare and unlikely
events (except in the microscopic world governed by quantum mechanics). On
the other hand, from the signal processing viewpoint, the high signal correlation
and even energy distribution are not desirable in general, as it is difficult to
decompose such a signal, as needed in various applications such as information
extraction, noise reduction, and data compression. The issue, therefore, becomes
one of how the signal can be converted in such a way that it is less correlated
and its energy less evenly distributed, and to what extent such a conversion can
be carried out to achieve the goal.
Specifically, in order to represent, process, and analyze a signal, it needs to be
decomposed into a set of components along a certain dimension. While a signal
is typically represented by default as a continuous or discrete function of time
or space, it may be desirable to represent it along some alternative dimension,
most commonly (but not exclusively) frequency, so that it can be processed and
analyzed more effectively and conveniently. Viewed mathematically, a signal is a
vector in some vector space which can be represented by any of a set of different
orthogonal bases all spanning the same space. Each representation corresponds
to a different decomposition of the signal. Moreover, all such representations are
equivalent, in the sense that they are related to each other by certain rotation
in the space by which the total energy or information contained in the signal is
conserved. From this point of view, all different orthogonal transform methods
developed in the last 200 years by mathematicians, scientists, and engineers for
various purposes can be unified to form a family of methods for the same general
purpose.
While all transform methods are equivalent, as they all conserve the total
energy or information of the signal, they can be very different in terms of how the
total energy or information in the signal is redistributed among its components
after the transform, and how much these components are correlated. If, after a
properly chosen orthogonal transform, the signal is represented in such a way that
its components are decorrelated and most of the signal information of interest is
concentrated in a small subset of its components, then the remaining components
could be neglected as they carry little information. This simple idea is essentially
the answer to the question asked above about why an orthogonal transform is
needed, and it is actually the foundation of the general orthogonal transform
method for feature selection, data compression, and noise reduction. In a certain
sense, once a proper basis of the space is chosen so that the signal is represented
in such a favorable manner, the signal-processing goal is already achieved to a
significant extent.
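To make this idea concrete, here is a minimal MATLAB sketch (not from the book; the test signal and the explicitly built cosine basis are chosen only for illustration) showing that an orthogonal transform is a rotation that conserves total energy while compacting most of it into a few coefficients:

% Illustrative sketch, not from the book: energy conservation and compaction
N = 64;
t = (0:N-1)';
x = exp(-((t - N/2).^2) / (2*(N/8)^2));  % a smooth, highly correlated signal
C = zeros(N);                            % orthonormal cosine (DCT-II-like) basis
for k = 0:N-1
    c = cos(pi*(t + 0.5)*k/N);           % kth cosine basis vector
    C(k+1,:) = c' / norm(c);             % normalize to unit length
end
X = C*x;                                 % forward transform (a rotation)
s = sort(abs(X), 'descend');
fprintf('time-domain energy:         %.4f\n', sum(x.^2));
fprintf('transform-domain energy:    %.4f\n', sum(X.^2));   % conserved
fprintf('energy in 4 largest coeffs: %.4f\n', sum(s(1:4).^2));

Discarding all but the few largest coefficients retains nearly all of the signal energy, which is precisely the basis of the transform approach to feature selection, data compression, and noise reduction described above.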
The purpose of the first two chapters is to establish a solid mathematical foun-
dation for the thorough understanding of the topics of the subsequent chapters,
which each discuss a specific type of transform method. Chapter 1 is a brief sum-
mary of the basic concepts of signals and linear time-invariant (LTI) systems. For
readers with an engineering background, much of this chapter may be a quick
review that could be scanned through or even skipped. For others, this chapter
serves as an introduction to the mathematical language by which the signals and
systems will be described in the following chapters.
Chapter 2 sets up the stage for all transform methods by introducing the key
concepts of the vector space, or more strictly speaking the Hilbert space, and the
linear transformations in such a space. Here, a usual N -dimensional space can be
generalized in several aspects: (1) the dimension N of the space may be extended
to infinity, (2) a vector space may also include a function space composed of
all continuous functions satisfying certain conditions, and (3) the basis vectors
of a space may become uncountable. The mathematics needed for a rigorous
treatment of these much-generalized spaces is likely to be beyond the comfort
zone of most readers with a typical engineering or science background, and it is
therefore also beyond the scope of this book. The emphasis of the discussion here
is not mathematical rigor, but the basic understanding and realization that many
of the properties of these generalized spaces are just the natural extensions of
those of the familiar N -dimensional vector space. The purpose of such discussions
is to establish a common foundation for all transform methods so that they can
all be studied from a unified point of view, namely, that any given signal, either
continuous or discrete, with either finite or infinite duration, can be treated as a vector in a certain vector space.
A real transform such as the cosine transform is widely used for data compression, as in the image compression standard JPEG.
Chapter 7 combines three transform methods, the Walsh-Hadamard, slant, and
Haar transforms, all sharing some similar characteristics (i.e., the basis functions
associated with these transforms all have square-wave-like waveforms). Moreover,
as the Haar transform also possesses the basic characteristics of the wavelet
transform method, it can also serve as a bridge between the two camps of the
orthogonal transforms and the wavelet transforms, leading to a natural transition
from the former to the latter.
In Chapter 8 we discuss the Karhunen-Loève transform (KLT), which can be considered as a capstone of all the previously discussed transform methods, and the associated principal component analysis (PCA), which is widely used in many data-processing applications. The KLT is the optimal transform method
among all orthogonal transforms in terms of the two main characteristics of the
general orthogonal transform method, namely the compaction of signal energy
and the decorrelation among all signal components. In this regard, all orthogonal
transform methods can be compared against the optimal KLT for an assessment
of their performances.
We next consider in Chapter 9 both the continuous- and discrete-time wavelet
transforms (CTWT and DTWT), which differ from all orthogonal transforms
discussed previously in two main aspects. First, the wavelet transforms are not
strictly orthogonal, as the bases used to span the vector space and to represent
a given signal may not be necessarily orthogonal. Second, the wavelet trans-
form converts a 1-D time signal into a 2-D function of two variables, one for
different levels of details or scales, corresponding to different frequencies in the
Fourier transform, and the other for different temporal positions, which is totally
absent in the Fourier or any other orthogonal transform. While redundancy is
inevitably introduced into the 2-D transform domain by such a wavelet trans-
form, the additional second dimension also enables the transform to achieve both
temporal and frequency localities in signal representation at the same time (while
all other transform methods can only achieve either one of the two localities).
Such a capability of the wavelet transform is its main advantage over orthogonal
transforms in some applications such as signal filtering.
Finally, in Chapter 10 we introduce the basic concept of multiresolution ana-
lysis (MRA) and Mallat’s fast algorithm for the discrete wavelet transform
(DWT), together with its filter bank implementation. Similar to the orthogo-
nal transforms, this algorithm converts a discrete signal of size N into a set of
DWT coefficients also of size N , from which the original signal can be perfectly
reconstructed; i.e., there is no redundancy introduced by the DWT. However,
different from the orthogonal transforms, the DWT coefficients represent the signal with temporal as well as frequency (levels of detail) locality, and can therefore be more advantageous in some applications, such as data compression.
Moreover, some fundamental results in linear algebra and statistics are also
summarized in the two appendices at the back of the book.
(There are many such Matlab m-files on the website of the book. In fact, all functions used to generate many of the figures in the book are provided on the site.) A reader who is a little more interested can read through the code to see how things are done. Of course, a step further is to modify the code, using different parameters and different datasets, to better appreciate the various effects of the algorithms.
Back to Euclid
Finally, let us end by again quoting Euclid, this time, a story about him.
A youth who had begun to study geometry with Euclid, when he had learned the first
proposition, asked, “What do I get by learning these things?” So Euclid called a slave
and said “Give him three pence, since he must make a gain out of what he learns.”
Surely, explicit efforts are made in this book to discuss the practical uses of
the orthogonal transforms and the mathematics behind them, but one should
realize that, after all, the book is about a set of mathematical tools, just like the propositions in Euclid's geometry, from the learning of which the reader may not be able to make a direct and immediate gain. However, in the end, it is the
application of these tools toward solving specific problems in practice that will
enable the reader to make a gain out of the book; much more than three pence,
hopefully.
Acknowledgments
General notation
iff if and only if
j = √−1 = e^{jπ/2}   imaginary unit
\overline{u + jv} = u − jv   complex conjugate of u + jv
Re(u + jv) = u   real part of u + jv
Im(u + jv) = v   imaginary part of u + jv
|u + jv| = √(u² + v²)   magnitude (absolute value) of a complex value u + jv
∠(u + jv) = tan⁻¹(v/u)   phase of u + jv
x_{n×1}   an n by 1 column vector
x̄   complex conjugate of x
xᵀ   transpose of x, a 1 by n row vector
x* = x̄ᵀ   conjugate transpose of vector x
||x||   norm of vector x
A_{m×n}   an m by n matrix of m rows and n columns
Ā   complex conjugate of matrix A
A⁻¹   inverse of matrix A
Aᵀ   transpose of matrix A
A* = (Ā)ᵀ = \overline{Aᵀ}   conjugate transpose of matrix A
N   set of all nonnegative integers (including 0)
Z   set of all integers
R set of all real numbers
C set of all complex numbers
R^N   N-dimensional Euclidean space
C^N   N-dimensional unitary space
L²   space of all square-integrable functions
l²   space of all square-summable sequences
x(t) a function representing a continuous signal
x = [. . . , x[n], . . .]ᵀ   a vector representing a discrete signal
ẋ(t) = dx(t)/dt first order time derivative of x(t)
ẍ(t) = d²x(t)/dt²   second-order time derivative of x(t)
f frequency (cycle per unit time)
ω = 2πf angular frequency (radian per unit time)
In the first two chapters we will consider some basic concepts and ideas as the
mathematical background for the specific discussions of the various orthogonal
transform methods in the subsequent chapters. Here, we will set up a framework
common to all such methods, so that they can be studied from a unified point
of view. While some discussions here may seem mathematical, the emphasis is
on the intuitive understanding, rather than the theoretical rigor.
A discrete signal x[n] can be obtained by sampling a continuous signal x(t) at integer time points t = n:
x[n] = x(t)|_{t=n} = x(n). (1.1)
Definition 1.1. The discrete unit impulse or Kronecker delta function is defined
as
δ[n] = {1, n = 0; 0, n ≠ 0}. (1.2)
• First, a discrete signal x[n] can be decomposed into a set of unit impulses, each at a different moment n = m and weighted by the signal amplitude x[m] at that moment, as shown in Fig. 1.1(a).
• Second, the Kronecker delta δ[n − m] acts as a filter that sifts out a particular value of the signal x[n] at the moment m = n from the sequence of signal samples x[m] for all m. This is the sifting property of the Kronecker delta (see the sketch below).
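The following minimal MATLAB sketch (not from the book; the signal values are arbitrary) reconstructs a short discrete signal from weighted and shifted Kronecker deltas, demonstrating the decomposition x[n] = Σ_m x[m] δ[n − m]:

% Illustrative sketch, not from the book: decomposition into unit impulses
delta = @(n) double(n == 0);          % Kronecker delta as a function of n
xm = [1 2 3 4];                       % signal samples x[m] for m = 0,...,3
n  = -2:6;                            % index range for reconstruction
x  = zeros(size(n));
for m = 0:numel(xm)-1
    x = x + xm(m+1) * delta(n - m);   % x[n] = sum over m of x[m] delta[n-m]
end
disp([n; x])                          % the weighted impulses rebuild the signal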
Note that the width and height of this square impulse are, respectively, Δ and 1/Δ; i.e., it covers a unit area Δ × (1/Δ) = 1, independent of the value of Δ:
∫_{−∞}^{∞} δ_Δ(t) dt = Δ · (1/Δ) = 1. (1.5)
Definition 1.2. The continuous unit impulse or Dirac delta function δ(t) is a
function that has an infinite height but zero width at t = 0, and it covers a unit
area; i.e., it satisfies the following two conditions:
δ(t) = {∞, t = 0; 0, t ≠ 0} and ∫_{−∞}^{∞} δ(t) dt = ∫_{0⁻}^{0⁺} δ(t) dt = 1. (1.8)
Note that the discrete impulse function δ[n] has a unit height, while the con-
tinuous impulse function δ(t) has a unit area (product of height and width for
time); i.e., the two types of impulses have different dimensions. The dimension
of the discrete impulse is the same as that of the signal (e.g., voltage), while
the dimension of the continuous impulse is the signal’s dimension divided by
time (e.g., voltage/time). In other words, x(τ)δ(t − τ) represents the density of the signal at t = τ; only when integrated over time will the continuous impulse function have the same dimension as the signal x(t).
The results above indicate that a time signal, either discrete or continuous, can
be decomposed in the time domain to become a linear combination, either a sum-
mation or an integral, of a set of time impulses (components), either countable
or uncountable. However, as we will see in future chapters, the decomposition
of the time signal is not unique. The signal can also be decomposed in different
domains other than time, such as frequency, and the representations of the signal
in different domains are related by certain orthogonal transformations.
Definition 1.3. The discrete unit step function is defined as
u[n] = {1, n ≥ 0; 0, n < 0}. (1.11)
The Kronecker delta can be obtained as the first-order difference of the unit
step function:
δ[n] = u[n] − u[n − 1] = {1, n = 0; 0, n ≠ 0}. (1.12)
Similarly, in the continuous case, the impulse function δ(t) is closely related to the continuous unit step function (also called the Heaviside step function) u(t). To see this, we first consider a piecewise linear function defined as
u_Δ(t) = {0, t < 0; t/Δ, 0 ≤ t < Δ; 1, t ≥ Δ}. (1.13)
Taking the time derivative of this function, we get the square impulse considered
before in Eq. (1.4):
δ_Δ(t) = (d/dt) u_Δ(t) = {0, t < 0; 1/Δ, 0 ≤ t < Δ; 0, t ≥ Δ}. (1.14)
If we let Δ → 0, then u_Δ(t) becomes the unit step function u(t) in the limit:
Definition 1.4. The continuous unit step function is defined as
u(t) = lim_{Δ→0} u_Δ(t) = {0, t < 0; 1/2, t = 0; 1, t > 0}. (1.15)
This process is shown for three different values of Δ in Fig. 1.2.
Figure 1.2 Generation of the unit step and unit impulse. Three functions u_Δ(t) with different values of Δ, together with their derivatives δ_Δ(t), are shown. In particular, when Δ → 0, these functions become u(t) and δ(t), as shown on the right.
In addition to the square impulse δ_Δ(t), the Dirac delta δ(t) can also be generated from a variety of different nascent delta functions in the limit as a certain parameter of the function approaches either zero or infinity. Consider, for example, the Gaussian function
g(t) = (1/√(2πσ²)) e^{−t²/2σ²}, (1.18)
which is the probability density function of a normally distributed random variable t with zero mean and variance σ². Obviously the area underneath this function is unity for any value of σ, and as σ → 0 the function becomes a nascent delta approaching δ(t).
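Both properties are easy to confirm numerically. The following small MATLAB check (not from the book; the test signal cos(t) is arbitrary) shows that the Gaussian integrates to one for any σ, and that its sifting integral against the test signal approaches x(0) = 1 as σ → 0:

% Illustrative sketch, not from the book: the Gaussian as a nascent delta
for sigma = [1 0.1 0.01]
    g = @(t) exp(-t.^2/(2*sigma^2)) / sqrt(2*pi*sigma^2);
    a = 50*sigma;                                  % g is negligible beyond +/- a
    area = integral(g, -a, a);                     % always approximately 1
    sift = integral(@(t) cos(t).*g(t), -a, a);     % approaches cos(0) = 1
    fprintf('sigma = %5.2f: area = %.4f, sift = %.4f\n', sigma, area, sift);
end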
The argument t of a Dirac delta δ(t) may be scaled so that it becomes δ(at). In this case Eq. (1.10) becomes
∫_{−∞}^{∞} x(τ) δ(aτ) dτ = (1/|a|) ∫_{−∞}^{∞} x(u/a) δ(u) du = (1/|a|) x(0), (1.21)
where we have defined u = aτ . Comparing this result with Eq. (1.10), we see
that
δ(at) = (1/|a|) δ(t); i.e., |a| δ(at) = δ(t). (1.22)
More generally, the Dirac delta can also be defined over a function f(t) of a variable, instead of the variable t itself. Now the Dirac delta becomes δ(f(t)), which is zero except where f(t_k) = 0, i.e., at the roots t = t_k of f(t). To see how such an impulse is scaled, consider the following integral:
∫_{−∞}^{∞} x(τ) δ(f(τ)) dτ = ∫_{−∞}^{∞} (1/|f′(τ)|) x(τ) δ(u) du, (1.24)
where u = f(τ). As δ(u) contributes only at the roots τ = t_k of f(τ), the integral evaluates to Σ_k x(t_k)/|f′(t_k)|.
This is the generalized sifting property of the impulse function. We can now express the delta function as
δ(f(t)) = Σ_k δ(t − t_k)/|f′(t_k)|, (1.27)
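As a quick worked example (not from the book), let f(t) = t² − 1, whose roots are t₁ = 1 and t₂ = −1, with derivative f′(t) = 2t and hence |f′(±1)| = 2. Equation (1.27) then gives
δ(t² − 1) = (1/2)[δ(t − 1) + δ(t + 1)],
so that ∫_{−∞}^{∞} x(τ) δ(τ² − 1) dτ = [x(1) + x(−1)]/2 for any test signal x(t).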
Here we list a set of important formulas that will be used in the discussions of various forms of the Fourier transform in Chapters 3 and 4. These formulas show that the Kronecker and Dirac delta functions can be generated as the sum or integral of some forms of the general complex exponential function e^{j2πft} = e^{jωt}. The proofs of these formulas are left as homework problems.
• I. Dirac delta as an integral of a complex exponential:
∫_{−∞}^{∞} e^{±j2πft} dt = ∫_{−∞}^{∞} cos(2πft) dt ± j ∫_{−∞}^{∞} sin(2πft) dt = 2 ∫_0^{∞} cos(2πft) dt = δ(f) = 2π δ(ω). (1.28)
Note that the integral of the odd function sin(2πft) over all time −∞ < t < ∞ is zero, while the integral of the even function cos(2πft) over all time is twice the integral over 0 < t < ∞. Equation (1.28) can also be interpreted intuitively: the integral of any sinusoid over all time is always zero, except when f = 0 and e^{±j2πft} = 1, in which case the integral becomes infinite. Alternatively, if we integrate the complex exponential with respect to frequency f, we get
∫_{−∞}^{∞} e^{±j2πft} df = 2 ∫_0^{∞} cos(2πft) df = δ(t), (1.29)
as sinusoids of all different frequencies cancel each other at any time t ≠ 0; but at t = 0, cos(2πft) = 1 for all f, and their superposition becomes infinite.
• Ia. This formula is a variation of Eq. (1.28):
∫_0^{∞} e^{±j2πft} dt = ∫_0^{∞} e^{±jωt} dt = (1/2) δ(f) ∓ 1/(j2πf) = π δ(ω) ∓ 1/(jω). (1.30)
Given the above, we can also get
∫_{−∞}^{0} e^{±jωt} dt = −∫_{∞}^{0} e^{∓jωt} dt = ∫_0^{∞} e^{∓jωt} dt = (1/2) δ(f) ± 1/(j2πf) = π δ(ω) ± 1/(jω). (1.31)
Adding the two equations above, we get the same result as given in Eq. (1.28):
∫_{−∞}^{∞} e^{±jωt} dt = ∫_{−∞}^{0} e^{±jωt} dt + ∫_0^{∞} e^{±jωt} dt = δ(f) = 2π δ(ω). (1.32)
• II. Kronecker delta as an integral of a complex exponential:
(1/T) ∫_T e^{±j2πkt/T} dt = (1/T) ∫_T cos(2πkt/T) dt ± (j/T) ∫_T sin(2πkt/T) dt = (1/T) ∫_T cos(2πkt/T) dt = δ[k]. (1.33)
In particular, if T = 1 we have
∫_0^1 e^{±j2πkt} dt = δ[k]. (1.34)
• III. A train of Dirac deltas with period F as a summation of a complex exponential:
(1/F) Σ_{n=−∞}^{∞} e^{±j2πfn/F} = (1/F) Σ_{n=−∞}^{∞} cos(2πfn/F) ± (j/F) Σ_{n=−∞}^{∞} sin(2πfn/F)
= (1/F) Σ_{n=−∞}^{∞} cos(2πfn/F) = Σ_{k=−∞}^{∞} δ(f − kF) = 2π Σ_{k=−∞}^{∞} δ(ω − 2πkF). (1.35)
In particular, if F = 1 we have
Σ_{n=−∞}^{∞} e^{±j2πfn} = Σ_{k=−∞}^{∞} δ(f − k) = 2π Σ_{k=−∞}^{∞} δ(ω − 2πk). (1.36)
Adding the two equations above, we get the same result as given in Eq. (1.36):
Σ_{n=−∞}^{∞} e^{±j2πfn} = Σ_{n=−∞}^{−1} e^{±j2πfn} + Σ_{n=0}^{∞} e^{±j2πfn} = Σ_{k=−∞}^{∞} δ(f − k) = 2π Σ_{k=−∞}^{∞} δ(ω − 2πk). (1.39)
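Formula II in particular is easy to verify numerically. The following MATLAB sketch (not from the book) integrates e^{j2πkt} over one period T = 1 for several integer values of k and recovers the Kronecker delta δ[k]:

% Illustrative sketch, not from the book: numerical check of Eq. (1.34)
for k = -2:2
    v = integral(@(t) exp(1j*2*pi*k*t), 0, 1);   % one period, T = 1
    fprintf('k = %2d: integral = %6.3f %+6.3fj\n', k, real(v), imag(v));
end

The result is 1 for k = 0 and (numerically) 0 for all other integers k, as Eq. (1.34) states.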
Note that |x(t)|² and |x[n]|² have different dimensions; they represent, respectively, the power and the energy of the signal at the corresponding moment. If the energy contained in a signal is finite, i.e., E < ∞, then the signal is called an energy signal. A continuous energy signal is said to be square-integrable,
and a discrete energy signal is said to be square-summable. All signals to be
considered in the future, either continuous or discrete, will be assumed to be
energy signals.
The average power of a discrete signal is defined as
P = lim_{N→∞} (1/N) Σ_{n=1}^{N} |x[n]|². (1.44)
If E of x(t) is not finite but P is, then x(t) is a power signal. Obviously, the
average power of an energy signal is zero.
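For example, the following MATLAB sketch (not from the book; the two test signals are standard textbook cases) evaluates these definitions numerically for the decaying exponential x(t) = e^{−t} u(t), an energy signal with E = 1/2, and the unit-amplitude sinusoid cos(2πt), a power signal with P = 1/2:

% Illustrative sketch, not from the book: an energy signal vs. a power signal
E = integral(@(t) exp(-t).^2, 0, Inf);           % energy of exp(-t)u(t): 1/2
T = 100;                                         % long averaging window
P = integral(@(t) cos(2*pi*t).^2, 0, T) / T;     % average power of cos: 1/2
fprintf('E = %.4f (finite: energy signal), P = %.4f (power signal)\n', E, P);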
• The cross-correlation defined below measures the similarity between two signals as a function of their relative time shift (a numerical sketch follows Eq. (1.48)):
r_{xy}(τ) = x(t) ⋆ y(t) = ∫_{−∞}^{∞} x(t) y(t − τ) dt = ∫_{−∞}^{∞} x(t + τ) y(t) dt,
r_{xy}(−τ) = ∫_{−∞}^{∞} x(t − τ) y(t) dt = y(t) ⋆ x(t) = r_{yx}(τ). (1.45)
In particular, when x(t) = y(t) and x[n] = y[n], the cross-correlation becomes the autocorrelation, which measures the self-similarity of the signal:
r_x(τ) = ∫_{−∞}^{∞} x(t) x(t − τ) dt = ∫_{−∞}^{∞} x(t + τ) x(t) dt, (1.47)
and
r_x[m] = Σ_{n=−∞}^{∞} x[n] x[n − m] = Σ_{n=−∞}^{∞} x[n + m] x[n]. (1.48)
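The discrete cross-correlation can be computed by direct summation, as in this MATLAB sketch (not from the book; the signals are arbitrary, with y chosen as x delayed by two samples so that the correlation peaks at shift m = −2):

% Illustrative sketch, not from the book: discrete cross-correlation
x = [1 2 3 4 0 0];                    % arbitrary test signal
y = [0 0 1 2 3 4];                    % y[n] = x[n-2], a delayed copy
M = numel(x);
r = zeros(1, 2*M - 1);                % r_xy[m] for m = -(M-1),...,M-1
for m = -(M-1):(M-1)
    for n = 1:M
        if n - m >= 1 && n - m <= M   % stay inside the finite sequences
            r(m + M) = r(m + M) + x(n) * y(n - m);
        end
    end
end
[~, i] = max(r);
fprintf('r_xy peaks at shift m = %d\n', i - M);   % m = -2, as expected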
μ_x = E[x], (1.52)
and
σ_x²(t) = E[|x(t)|²] − |μ_x(t)|²,   σ_x²[n] = E[|x[n]|²] − |μ_x[n]|². (1.57)
We see that σ_x²(t) represents the average dynamic power of the signal x(t), and σ_x²[n] represents the average dynamic energy contained in the nth signal component x[n].
Note that these operations are actually applied to the amplitude values of the
two signals x(t) and y(t) at each moment t, and the result becomes the value of
z(t) at the same moment; and the same is true for the operations on the discrete
signals.
1. A two-step process.
• Step 1: define an intermediate signal z(t) = x(t + t₀) due to translation.
• Step 2: find the transformed signal y(t) = z(at) due to time scaling (containing time reversal if a < 0).
The two steps can be carried out equivalently in reverse order.
• Step 1: define an intermediate signal z(t) = x(at) due to time scaling (containing time reversal if a < 0).
• Step 2: find the transformed signal y(t) = z(t + t₀/a) due to translation.
Note that the translation parameters (direction and amount) are different depending on whether the translation is carried out before or after scaling.
2. A two-point process:
Evaluate x(t) at two arbitrarily chosen time points t = t₁ and t = t₂ to get v₁ = x(t₁) and v₂ = x(t₂). Then y(t) = x(at + t₀) = v₁ when its argument is at + t₀ = t₁, i.e., when t = (t₁ − t₀)/a, and y(t) = x(at + t₀) = v₂ when at + t₀ = t₂, i.e., t = (t₂ − t₀)/a. As the transformation at + t₀ is linear, the value of y(t) at any other time moment t can be found by linear interpolation based on these two points.
Example 1.1: Consider the signal
x(t) = {t, 0 < t < 2; 0, else}. (1.59)
• Translation: y(t) = x(t + 3) and z(t) = x(t − 1) are shown in Fig. 1.4(a).
• Expansion/compression: y(t) = x(2t/3) and z(t) = x(2t) are shown in Fig. 1.4(b).
• Time reversal: y(t) = x(−t) and z(t) = x(−2t) are shown in Fig. 1.4(c).
• Combination of translation, scaling, and reversal:
y(t) = x(−2t + 3) = x(−2(t − 3/2)). (1.60)
– Method 1: scaling first and then translating, based on the second form in Eq. (1.60), we get (Fig. 1.4(d)):
z(t) = x(−2t), y(t) = z(t − 3/2). (1.62)
– Method 2: based on two time points t₁ = 0 and t₂ = 2:
−2t + 3 = t₁ = 0 ⟹ t = 3/2,
−2t + 3 = t₂ = 2 ⟹ t = 1/2.
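The same example can be checked numerically. The following MATLAB sketch (not from the book) evaluates y(t) = x(−2t + 3) directly on a time grid; the nonzero support indeed runs from t = 1/2 to t = 3/2, in agreement with the two-point calculation above:

% Illustrative sketch, not from the book: the transformation of Example 1.1
xfun = @(t) t .* (t > 0 & t < 2);     % the signal of Eq. (1.59)
t = -1:0.01:3;                        % a time grid covering the support
y = xfun(-2*t + 3);                   % translation, compression, and reversal
plot(t, xfun(t), t, y);
legend('x(t)', 'y(t) = x(-2t+3)');    % y is nonzero for 1/2 < t < 3/2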
For example, if N = 2, the up-sampled signal is x^(2)[0] = x[0], x^(2)[2] = x[1], x^(2)[4] = x[2], . . . , with x^(2)[n] = 0 for all other n.
Example 1.2: Given x[n] as shown in Fig. 1.5(a), a transformation y[n] = x[−n +
4], shown in Fig. 1.5(b), can be obtained based on two time points:
−n + 4 = 0 =⇒ n = 4,
−n + 4 = 3 =⇒ n = 1. (1.65)
The down- and up-sampling of the signal in Fig. 1.5(a) can be obtained from the
following table and are shown in Fig. 1.5(c) and (d), respectively.
n:                · · · −1  0  1  2  3  4  5  6  7 · · ·
x[n]:             · · ·  0  1  2  3  4  0  0  0  0 · · ·
x_(2)[n] (down):  · · ·  0  1  3  0  0  0  0  0  0 · · ·
x^(2)[n] (up):    · · ·  0  1  0  2  0  3  0  4  0 · · ·    (1.66)
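The table can be reproduced with a few lines of MATLAB (a sketch, not from the book), where down-sampling keeps every Nth sample and up-sampling inserts N − 1 zeros between consecutive samples:

% Illustrative sketch, not from the book: down- and up-sampling by N = 2
x = [1 2 3 4];                        % x[n] for n = 0,...,3
N = 2;
xdown = x(1:N:end);                   % down-sampling: keep every Nth sample
xup = zeros(1, N*numel(x));
xup(1:N:end) = x;                     % up-sampling: insert N-1 zeros
disp(xdown)                           % 1 3
disp(xup)                             % 1 0 2 0 3 0 4 0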
A system transforms an input signal x(t) into an output signal y(t) = O[x(t)], where the symbol O[·] represents the operation applied by the system to its input. A system is linear if its input-output relationship satisfies both homogeneity and superposition.
• Homogeneity:
O[a x(t)] = a O[x(t)] = a y(t). (1.68)
• Superposition: if O[x_n(t)] = y_n(t) (n = 1, 2, . . . , N), then
O[Σ_{n=1}^{N} x_n(t)] = Σ_{n=1}^{N} O[x_n(t)] = Σ_{n=1}^{N} y_n(t), (1.69)
or
O[∫_{−∞}^{∞} x(t) dt] = ∫_{−∞}^{∞} O[x(t)] dt = ∫_{−∞}^{∞} y(t) dt. (1.70)
A system is time-invariant if how it responds to the input does not change over
time. In other words,
if O[x(t)] = y(t), then O[x(t − τ )] = y(t − τ ). (1.73)
A linear and time-invariant (LTI) system is both linear and time-invariant.
As an example, we see that the response of an LTI system y(t) = O[x(t)] to
dx(t)/dt is dy(t)/dt:
1 1
O [x(t + ) − x(t)] = [y(t + t) − y(t)]. (1.74)
Taking the limit → 0, we get
d d
O x(t) = O[ẋ(t)] = y(t) = ẏ(t). (1.75)
dt dt
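This property can be verified numerically. The following MATLAB sketch (not from the book; the input signal and the impulse response h(t) = e^{−t} are arbitrary choices) implements an LTI system as a convolution and confirms that, up to discretization error, the response to ẋ(t) equals ẏ(t):

% Illustrative sketch, not from the book: derivative property of LTI systems
dt = 0.001;
t = 0:dt:10;
x = sin(t) .* exp(-0.2*t);            % a smooth test input with x(0) = 0
h = exp(-t);                          % an assumed impulse response
y  = conv(x, h) * dt;                 % y(t) = O[x(t)] by convolution
dx = gradient(x, dt);                 % numerical derivative of the input
y1 = conv(dx, h) * dt;                % response to the derivative input
y2 = gradient(y, dt);                 % derivative of the original output
n = numel(t);
fprintf('max |difference| = %g\n', max(abs(y1(2:n-1) - y2(2:n-1))));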
It goes without saying that, in the deep tedium of his captivity, Napoleon, retracing his memories as the chances of conversation awakened them, did not discuss the principal acts of his reign methodically, as we have tried to do. He touched now on one subject, now on another, seeking all the more to excuse himself where he was least excusable.
Thus did Napoleon reason about the events of his reign: sincere, as one sees, on the points where his self-regard found specious excuses, sophistical on the points where it found none, well aware of his faults without saying so, and counting on the immensity of his glory to sustain him before future ages, as it had already sustained him among his contemporaries.
In all things Napoleon said that he had been able to form only projects, that he had not had time to finish anything, and that his reign was but a series of sketches; and then, falling to dreaming, he liked to picture all that he would have done had he been able to obtain from Europe a frank and lasting peace (a peace he had unfortunately rejected when he could have obtained it, as in 1813 for example, and had wanted only in 1815, when it had become impossible!).
[What he would have done had he grown old on the throne.] "I would have granted my subjects," he said, "a large share in the government. I would have called them around me in truly free assemblies; I would have listened, I would have let myself be contradicted; and, not confining myself to calling them around me, I would have gone to them. I would have travelled with my own horses across France, accompanied by the Empress and my son. I would have seen everything with my own eyes, listened, redressed grievances, observed men and things closely, and spread with my own hands the blessings of peace, after having poured out with those same hands so many of the evils of war. I would have grown old as a paternal and peaceful prince, and the peoples, after having so long applauded Napoleon the warrior, would have blessed Napoleon the peacemaker, travelling, like the Merovingians of old, in a chariot drawn by oxen."
Such were the dreams of this great man, and if we report them it is because they contain a striking lesson: not to let the time for doing good pass by, for once past it never returns. Thus flowed the evenings of the captivity, and when, discoursing in this way, Napoleon noticed that he had reached a later hour than usual, he would cry out with joy: "Midnight, midnight! What a conquest over time!" Time, of which he had never had enough in former days, and of which he now always had too much!
The year 1816, half of which had been spent in petty vexations, was as to its other half much better employed, being devoted to assiduous historical work. [Napoleon's historical works.] It was to M. de Las Cases that Napoleon then gave the most time, for he was full of ardor for the narrative of his Italian campaigns, which recalled to him his first and most keenly felt successes. Although he also worked on the Egyptian expedition with Marshal Bertrand, and on the campaign of 1815 with General Gourgaud, Italy at that moment had the preference. He would have liked to have a Moniteur for the dates and for certain material details and, for want of the Moniteur, he used the Annual Register. For the rest, his memory was rarely at fault, and he almost never had to correct his recollections. M. de Las Cases, forced to write as fast as speech in order to follow him, used abbreviated signs; he was then obliged to recopy what he had written, and spent part of his nights doing so. The next day he brought this copy, which Napoleon corrected in his own hand. This work having singularly weakened M. de Las Cases's eyesight, his son often relieved him, and helped him in his efforts to catch on the wing the impetuous thought of the mighty historian. To this work Napoleon had added another. He felt the inconvenience of not knowing English, and had resolved to learn it, taking M. de Las Cases as his teacher. But this prodigious genius, who possessed to so high a degree the memory of things, had not the memory of words, and learned languages with difficulty. He applied himself nonetheless, and was beginning to read English, though without being able to speak it.
[M. de Las Cases's constant attendance on Napoleon arouses jealousy in some members of the colony.] These various occupations required frequent tête-à-têtes with M. de Las Cases, and provoked jealousies within this tiny colony, where, it would seem, misfortune ought to have drawn hearts together. General Gourgaud had shown Napoleon a remarkable devotion, but he spoiled his good qualities by an excessive pride and by a penchant for jealousy that never rested. Not having left Napoleon during his last campaigns, he considered himself entitled to be the exclusive collaborator on all the accounts of war, and could hardly bear that M. de Las Cases should at that moment be the habitual confidant of his master. Yet each was to have his turn, and with the end of the Empire, which General Gourgaud knew better, the privilege of the long tête-à-têtes was to come to him. But, as hot-headed as he was courageous, he did not know how to contain himself, and, in so narrow a circle, where frictions were necessarily so keenly felt, he often became quarrelsome and troublesome. [Napoleon's efforts to maintain union among the friends remaining to him.] The spectacle of these divisions aggravated Napoleon's sorrows. He sought to appease quarrels that he perceived even when others strove to hide them from him, checked with authority the outbursts of General Gourgaud, and applied himself to healing the wounds dealt to the sensibility of M. de Las Cases, a withdrawn and somewhat morose character. "What," he would say to them all, "are our sorrows not enough? Must we add to them ourselves by our own failings? If the thought of what you owe one another does not suffice, think of what you owe to me... Do you not see that your divisions make me profoundly unhappy?... Consider," he would add, "when you are back in Europe, which cannot fail to be soon, for I have not many years to live, your glory will be to have accompanied me on this rock. You will not then go and confess that you lived as enemies with one another; you will call yourselves brothers of Saint Helena, you will affect union: well then, since it must be done one day, why not begin today, for your dignity, for my rest, for my consolation?"
[Napoleon dictates much less, and reads more.] The 1st of January 1818 was sadder than the preceding ones, and much more so than that of 1817, although the latter had been saddened by the departure of M. de Las Cases. Napoleon worked less, and seemed discouraged from dictating the narrative of his campaigns, trusting to posterity the care of his glory. "What good," he would say, "are all these memoirs to consult, presented to the judge of us all, posterity? We are litigants who weary their judge. Posterity is a finer appraiser of events than we are. It will know well enough how to discover the truth without our taking so much trouble to convey it to it." Napoleon dictated less, but he read more. His sensibility to the beautiful, grown exquisite through age and suffering, savored with delight the masterpieces of the human mind. In the evening, speaking a little less of the events of his life, he spoke of his reading, and sometimes read to his friends passages from the great writers of all times with the accent of a lofty and sure intelligence.