Practical Methods
for Optimal Control
and Estimation Using
Nonlinear Programming
Advances in Design and Control
SIAM’s Advances in Design and Control series consists of texts and monographs dealing with all areas of
design and control and their applications. Topics of interest include shape optimization, multidisciplinary
design, trajectory optimization, feedback, and optimal control. The series focuses on the mathematical and
computational aspects of engineering design and control that are usable in a wide variety of scientific and
engineering disciplines.
Editor-in-Chief
Ralph C. Smith, North Carolina State University
Editorial Board
Athanasios C. Antoulas, Rice University
Siva Banda, Air Force Research Laboratory
Belinda A. Batten, Oregon State University
John Betts, The Boeing Company (retired)
Stephen L. Campbell, North Carolina State University
Eugene M. Cliff, Virginia Polytechnic Institute and State University
Michel C. Delfour, University of Montreal
Max D. Gunzburger, Florida State University
J. William Helton, University of California, San Diego
Arthur J. Krener, University of California, Davis
Kirsten Morris, University of Waterloo
Richard Murray, California Institute of Technology
Ekkehard Sachs, University of Trier
Series Volumes
Betts, John T., Practical Methods for Optimal Control and Estimation Using Nonlinear Programming, Second
Edition
Shima, Tal and Rasmussen, Steven, eds., UAV Cooperative Decision and Control: Challenges and Practical
Approaches
Speyer, Jason L. and Chung, Walter H., Stochastic Processes, Estimation, and Control
Krstic, Miroslav and Smyshlyaev, Andrey, Boundary Control of PDEs: A Course on Backstepping Designs
Ito, Kazufumi and Kunisch, Karl, Lagrange Multiplier Approach to Variational Problems and Applications
Xue, Dingyü, Chen, YangQuan, and Atherton, Derek P., Linear Feedback Control: Analysis and Design
with MATLAB
Hanson, Floyd B., Applied Stochastic Processes and Control for Jump-Diffusions: Modeling, Analysis,
and Computation
Michiels, Wim and Niculescu, Silviu-Iulian, Stability and Stabilization of Time-Delay Systems: An Eigenvalue-Based
Approach
Ioannou, Petros and Fidan, Baris,¸ Adaptive Control Tutorial
Bhaya, Amit and Kaszkurewicz, Eugenius, Control Perspectives on Numerical Algorithms and Matrix Problems
Robinett III, Rush D., Wilson, David G., Eisler, G. Richard, and Hurtado, John E., Applied Dynamic Programming
for Optimization of Dynamical Systems
Huang, J., Nonlinear Output Regulation: Theory and Applications
Haslinger, J. and Mäkinen, R. A. E., Introduction to Shape Optimization: Theory, Approximation, and
Computation
Antoulas, Athanasios C., Approximation of Large-Scale Dynamical Systems
Gunzburger, Max D., Perspectives in Flow Control and Optimization
Delfour, M. C. and Zolésio, J.-P., Shapes and Geometries: Analysis, Differential Calculus, and Optimization
Betts, John T., Practical Methods for Optimal Control Using Nonlinear Programming
El Ghaoui, Laurent and Niculescu, Silviu-Iulian, eds., Advances in Linear Matrix Inequality Methods in Control
Helton, J. William and James, Matthew R., Extending H∞ Control to Nonlinear Systems: Control of Nonlinear
Systems to Achieve Performance Objectives
Practical Methods
for Optimal Control
and Estimation Using
Nonlinear Programming
SECOND EDITION
John T. Betts
All rights reserved. Printed in the United States of America. No part of this book may be
reproduced, stored, or transmitted in any manner without the written permission of the
publisher. For information, write to the Society for Industrial and Applied Mathematics,
3600 Market Street, 6th Floor, Philadelphia, PA 19104-2688 USA.
Trademarked names may be used in this book without the inclusion of a trademark
symbol. These names are used in an editorial context only; no infringement of trademark
is intended.
SIAM is a registered trademark.
For Theon and Dorothy
He Inspired Creativity
She Cherished Education
Contents
Preface xiii
8 Epilogue 411
Bibliography 417
Index 431
Preface
Solving an optimal control or estimation problem is not easy. Pieces of the puzzle
are found scattered throughout many different disciplines. Furthermore, the focus of this
book is on practical methods, that is, methods that I have found actually work! In fact
everything described in this book has been implemented in production software and used to
solve real optimal control problems. Although the reader should be proficient in advanced
mathematics, no theorems are presented.
Traditionally, there are two major parts of a successful optimal control or optimal
estimation solution technique. The first part is the “optimization” method. The second part
is the “differential equation” method. When faced with an optimal control or estimation
problem it is tempting to simply “paste” together packages for optimization and numerical
integration. While naive approaches such as this may be moderately successful, the goal of
this book is to suggest that there is a better way! The methods used to solve the differential
equations and optimize the functions are intimately related.
The first two chapters of this book focus on the optimization part of the problem. In
Chapter 1 the important concepts of nonlinear programming for small dense applications
are introduced. Chapter 2 extends the presentation to problems which are both large and
sparse. Chapters 3 and 4 address the differential equation part of the problem. Chapter
3 introduces relevant material in the numerical solution of differential (and differential-
algebraic) equations. Methods for solving the optimal control problem are treated in some
detail in Chapter 4. Throughout the book the interaction between optimization and integration is emphasized. Chapter 5 describes how to solve optimal estimation problems. Chapter
6 presents a collection of examples that illustrate the various concepts and techniques. Real
world problems often require solving a sequence of optimal control and/or optimization
problems, and Chapter 7 describes a collection of these “advanced applications.”
While the book incorporates a great deal of new material not covered in Practical
Methods for Optimal Control Using Nonlinear Programming [21], it does not cover every-
thing. Many important topics are simply not discussed in order to keep the overall presentation concise and focused. The discussion is general and presents a unified approach to
solving optimal estimation and control problems. Most of the examples are drawn from
my experience in the aerospace industry. Examples have been solved using a particular
implementation called SOCS. I have tried to adhere to notational conventions from both
optimization and control theory whenever possible. Also, I have attempted to use consistent
notation throughout the book.
The material presented here represents the collective contributions of many people. The nonlinear programming material draws heavily on the work of John Dennis,
Roger Fletcher, Phillip Gill, Sven Leyffer, Walter Murray, Michael Saunders, and Margaret
Wright. The material on differential-algebraic equations (DAEs) is drawn from the
work of Uri Ascher, Kathy Brenan, and Linda Petzold. Ray Spiteri graciously shared his
classroom notes on DAEs. I was introduced to optimal control by Stephen Citron, and I
routinely refer to the text by Bryson and Ho [54]. Over the past 20 years I have been fortunate to participate in workshops at Oberwolfach, Munich, Minneapolis, Victoria, Banff, Lausanne, Greifswald, Stockholm, and Fraser Island. I’ve benefited immensely simply
by talking with Larry Biegler, Hans Georg Bock, Roland Bulirsch, Rainer Callies, Kurt
Chudej, Tim Kelley, Bernd Kugelmann, Helmut Maurer, Rainer Mehlhorn, Angelo Miele,
Hans Josef Pesch, Ekkehard Sachs, Gottfried Sachs, Roger Sargent, Volker Schulz, Mark
Steinbach, Oskar von Stryk, and Klaus Well.
Three colleagues deserve special thanks. Interaction with Steve Campbell and his
students has inspired many new results and interesting topics. Paul Frank has played a
major role in the implementation and testing of the large, sparse nonlinear programming
methods described. Bill Huffman, my coauthor for many publications and the SOCS software, has been an invaluable sounding board over the last two decades. Finally, I thank
Jennifer for her patience and understanding during the preparation of this book.
John T. Betts
Chapter 1
Introduction to Nonlinear
Programming
1.1 Preliminaries
This book concentrates on numerical methods for solving the optimal control problem.
The fundamental principle of all effective numerical optimization methods is to solve a
difficult problem by solving a sequence of simpler subproblems. In particular, the solution
of an optimal control problem will require the solution of one or more finite-dimensional
subproblems. As a prelude to our discussions on optimal control, this chapter will focus
on the nonlinear programming (NLP) problem. The NLP problem requires finding a finite
number of variables such that an objective function or performance index is optimized
without violating a set of constraints. The NLP problem is often referred to as parameter
optimization. Important special cases of the NLP problem include linear programming
(LP), quadratic programming (QP), and least squares problems.
Before proceeding further, it is worthwhile to establish the notational conventions
used throughout the book. This is especially important since the subject matter covers a
number of different disciplines, each with its own notational conventions. Our goal is to
present a unified treatment of all these fields. As a rule, scalar quantities will be denoted by
lowercase letters (e.g., α). Vectors will be denoted by boldface lowercase letters and will
usually be considered column vectors, as in
$$\mathbf{x} = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}, \tag{1.1}$$

where the individual components of the vector are $x_k$ for $k = 1, \ldots, n$. To save space, it will often be convenient to define the transpose, as in

$$\mathbf{x}^{\mathsf{T}} = (x_1, x_2, \ldots, x_n). \tag{1.2}$$
where c′(x) = dc/dx is the slope of the constraint at x. Using this linear approximation, it is reasonable to compute x̄, a new estimate for the root, by solving (1.5) such that c(x̄) = 0, i.e.,

$$\bar{x} = x - [c'(x)]^{-1} c(x). \tag{1.6}$$

Typically, we denote p ≡ x̄ − x and rewrite (1.6) as

$$\bar{x} = x + p, \tag{1.7}$$

where

$$p = -[c'(x)]^{-1} c(x). \tag{1.8}$$
Of course, in general, c(x) is not a linear function of x, and consequently we cannot
expect that c(x̄) = 0. However, we might hope that x̄ is a better estimate for the root x ∗
than the original guess x; in other words we might expect that
|x̄ − x ∗ | ≤ |x − x ∗ | (1.9)
and
|c(x̄)| ≤ |c(x)|. (1.10)
If the new point is an improvement, then it makes sense to repeat the process, thereby
defining a sequence of points x^(0), x^(1), x^(2), ... with point (k + 1) in the sequence given by

$$x^{(k+1)} = x^{(k)} - [c'(x^{(k)})]^{-1} c(x^{(k)}). \tag{1.11}$$

For notational convenience, it usually suffices to present a single step of the algorithm, as in (1.6), instead of explicitly labeling the information at step k using the superscript notation x^(k). Nevertheless, it should be understood that the algorithm defines a sequence of points x^(0), x^(1), x^(2), .... The sequence is said to converge to x* if

$$\lim_{k \to \infty} x^{(k)} = x^*. \tag{1.12}$$

In practice, of course, we are not interested in letting k → ∞. Instead we are satisfied with
terminating the sequence when the computed solution is “close” to the answer. Furthermore, the rate of convergence is of paramount importance when measuring the computational efficiency of an algorithm. For Newton’s method, the rate of convergence is said to be quadratic or, more precisely, q-quadratic (cf. [71]). The impact of quadratic convergence can be dramatic. Loosely speaking, it implies that each successive estimate of the
solution will double the number of significant digits!
Example 1.1 Newton's Method—Root Finding. To demonstrate, let us suppose we want to solve the constraint

$$c(x) = a_1 + a_2 x + a_3 x^2 = 0, \tag{1.13}$$
where the coefficients a1 , a2 , a3 are chosen such that c(0.1) = −0.05, c(0.25) = 0, and
c(0.9) = 0.9. Table 1.1 presents the Newton iteration sequence beginning from the initial
guess x = 0.85 and proceeding to the solution at x ∗ = 0.25. Figure 1.1 illustrates the
first three iterations. Notice in Table 1.1 that the error between the computed solution and
the true value, which is tabulated in the third column, exhibits the expected doubling in
significant figures from the fourth iteration to convergence.
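The iteration of Example 1.1 can be sketched in a few lines. The coefficient values below are not stated in the text; they are obtained here by solving the 3×3 linear system for the three interpolation conditions c(0.1) = −0.05, c(0.25) = 0, c(0.9) = 0.9 (exact rational values), and the loop is the bare Newton step (1.6):

```python
# Newton's method for the root-finding problem of Example 1.1.
# Coefficients a1, a2, a3 solve the interpolation conditions
# c(0.1) = -0.05, c(0.25) = 0, c(0.9) = 0.9 stated in the text.
a1, a2, a3 = -21/416, -79/624, 205/156

def c(x):
    return a1 + a2 * x + a3 * x**2

def c_prime(x):
    return a2 + 2 * a3 * x

def newton(x, tol=1e-12, max_iter=50):
    """Repeat x <- x - c(x)/c'(x) until |c(x)| < tol."""
    for _ in range(max_iter):
        if abs(c(x)) < tol:
            break
        x = x - c(x) / c_prime(x)
    return x

root = newton(0.85)   # initial guess from Table 1.1
print(root)           # converges to the root x* = 0.25
```

Starting on the other side of the parabola's vertex (near x ≈ 0.048) would instead drive the iterates to the second root of the quadratic, which illustrates that Newton's method finds whichever root its basin of attraction contains.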
So what is wrong with Newton’s method? Clearly, quadratic convergence is a very
desirable property for an algorithm to possess. Unfortunately, if the initial guess is not
sufficiently close to the solution, i.e., within the region of convergence, Newton’s method
may diverge. As a simple example, Dennis and Schnabel [71] suggest applying Newton’s
method to solve c(x) = arctan(x) = 0. This will diverge when the initial guess |x (0) | > a,
converge when |x (0) | < a, and cycle indefinitely if |x (0) | = a, where a = 1.3917452002707.
In essence, Newton’s method behaves well near the solution (locally) but lacks something
permitting it to converge globally. So-called globalization techniques, aimed at correcting
this deficiency, will be discussed in subsequent sections. A second difficulty occurs when
the slope c′(x) = 0. Clearly, the correction defined by (1.6) is not well defined in this case.
In fact, Newton’s method loses its quadratic convergence property if the slope is zero at
the solution, i.e., c′(x∗) = 0. Finally, Newton’s method requires that the slope c′(x) can
be computed at every iteration. This may be difficult and/or costly, especially when the
function c(x) is complicated.
$$\bar{x} = x - B^{-1} c(x) = x + p, \tag{1.16}$$
$$x_{k+1} = x_k - \left[\frac{x_k - x_{k-1}}{c(x_k) - c(x_{k-1})}\right] c(x_k). \tag{1.17}$$
Figure 1.2 illustrates a secant iteration applied to Example 1.1 described in the previous section.
Clearly, the virtue of the secant method is that it does not require calculation of the slope c′(x_k). While this may be advantageous when derivatives are difficult to compute, there is a downside! The secant method is superlinearly convergent, which, in general, is not as fast as the quadratically convergent Newton algorithm. Thus, we can expect convergence will require more iterations, even though the cost per iteration is less. A distinguishing feature of the secant method is that the slope is approximated using information from previous iterates in lieu of a direct evaluation. This is the simplest example of a so-called
quasi-Newton method.
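A minimal sketch of the secant iteration (1.17): the slope c′(x_k) is replaced by the finite-difference slope through the last two iterates. The test function c(x) = x² − 2 is an illustrative choice, not an example from the text; its positive root is √2.

```python
def secant(c, x0, x1, tol=1e-12, max_iter=100):
    """Secant iteration (1.17): the slope is approximated from the
    two most recent iterates instead of evaluating c'(x)."""
    for _ in range(max_iter):
        if abs(c(x1)) < tol:
            break
        slope = (c(x1) - c(x0)) / (x1 - x0)   # secant approximation to c'
        x0, x1 = x1, x1 - c(x1) / slope
    return x1

root = secant(lambda x: x**2 - 2.0, 1.0, 2.0)
print(root)   # converges to sqrt(2)
```

Note that each step reuses the function value from the previous iterate, so only one new evaluation of c is needed per iteration.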
development of (1.5), let us approximate F(x) by the first three terms in a Taylor series expansion about the current point x:

$$F(\bar{x}) = F(x) + F'(x)(\bar{x} - x) + \frac{1}{2}(\bar{x} - x)F''(x)(\bar{x} - x). \tag{1.18}$$
Notice that we cannot use a linear model for the objective because a linear function does
not have a finite minimum point. In contrast, a quadratic approximation to F(x) is the
simplest approximation that does have a minimum. Now for x̄ to be a minimum of the
quadratic (1.18), we must have
$$\frac{dF}{d\bar{x}} \equiv F'(\bar{x}) = 0 = F'(x) + F''(x)(\bar{x} - x). \tag{1.19}$$

Solving for the new point yields

$$\bar{x} = x - [F''(x)]^{-1} F'(x). \tag{1.20}$$
The derivation has been motivated by minimizing F(x). Is this equivalent to solving the slope condition F′(x) = 0? It would appear that the iterative optimization sequence defined by (1.20) is the same as the iterative root-finding sequence defined by (1.6), provided we replace c(x) by F′(x). Clearly, a quadratic model for the objective function (1.18) produces a linear model for the slope F′(x). However, the condition F′(x) = 0 defines only a stationary point, which can be a minimum, a maximum, or a point of inflection. Apparently what is missing is information about the curvature of the function, which would determine whether it is concave up, concave down, or neither.
Figure 1.3 illustrates a typical situation. In the illustration, there are two points with zero slopes; however, there is only one minimum point. The minimum point is distinguished from the maximum by the algebraic sign of the second derivative F″(x). Formally, we have

Necessary Conditions:

$$F'(x^*) = 0, \tag{1.21}$$
$$F''(x^*) \geq 0; \tag{1.22}$$

Sufficient Conditions:

$$F'(x^*) = 0, \tag{1.23}$$
$$F''(x^*) > 0. \tag{1.24}$$

Note that the sufficient conditions require that F″(x∗) > 0, defining a strong local minimizer in contrast to a weak local minimizer, which may have F″(x∗) = 0. It is also important to observe that these conditions define a local rather than a global minimizer.
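The Newton iteration (1.20) and the second-derivative test above can be sketched together. The function F(x) = x³ − 3x is an illustrative choice, not one from the text: it has a minimum at x = 1 and a maximum at x = −1, and because the iteration only finds a zero of F′(x), the sign of F″ at the result must be checked to classify the stationary point.

```python
# Newton iteration (1.20) for one-variable minimization:
#     x <- x - F'(x)/F''(x)
# applied to the illustrative function F(x) = x**3 - 3x.

def F_prime(x):
    return 3.0 * x**2 - 3.0

def F_second(x):
    return 6.0 * x

def newton_min(x, tol=1e-12, max_iter=50):
    """Drive F'(x) to zero; the caller must check the sign of F''."""
    for _ in range(max_iter):
        if abs(F_prime(x)) < tol:
            break
        x = x - F_prime(x) / F_second(x)
    return x

x1 = newton_min(2.0)    # converges to the minimizer x* = 1
x2 = newton_min(-2.0)   # converges to x = -1, where F'' < 0: a maximizer
print(x1, F_second(x1) > 0)
print(x2, F_second(x2) > 0)
```

The second run demonstrates the caveat in the text: from an initial guess near the maximizer, the pure Newton iteration happily converges to a stationary point that is not a minimum.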
For the present, let us assume that the number of constraints and variables is the same, i.e.,
m = n. Just as in one variable, a linear approximation to the constraint functions analogous
to (1.5) is given by
$$c(\bar{x}) = c(x) + G(\bar{x} - x), \tag{1.26}$$

where the Jacobian matrix G is defined by

$$G \equiv \frac{\partial c}{\partial x} = \begin{pmatrix}
\dfrac{\partial c_1}{\partial x_1} & \dfrac{\partial c_1}{\partial x_2} & \cdots & \dfrac{\partial c_1}{\partial x_n} \\
\dfrac{\partial c_2}{\partial x_1} & \dfrac{\partial c_2}{\partial x_2} & \cdots & \dfrac{\partial c_2}{\partial x_n} \\
\vdots & & & \vdots \\
\dfrac{\partial c_m}{\partial x_1} & \dfrac{\partial c_m}{\partial x_2} & \cdots & \dfrac{\partial c_m}{\partial x_n}
\end{pmatrix}. \tag{1.27}$$
Gp = −c (1.28)
$$\bar{x} = x + p. \tag{1.29}$$
Thus, each Newton iteration requires a linear approximation to the nonlinear constraints c, followed by a step from x to the solution of the linearized constraints at x. Figure 1.4 illustrates a typical situation when n = m = 2. It is important to remark that the multidimensional version of Newton’s method shares all of the properties of its one-dimensional counterpart. Specifically, the method is quadratically convergent provided it is within a region of convergence, and it may diverge unless appropriate globalization strategies are employed. Furthermore, in order to solve (1.28) it is necessary that the Jacobian G be nonsingular, which is analogous to requiring that c′(x) ≠ 0 in the univariate case. And, finally, it is necessary to actually compute G, which can be costly.
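A sketch of the multidimensional Newton iteration for n = m = 2: at each step solve the linearized system Gp = −c (1.28) and set x ← x + p (1.29). The example system below is an illustrative choice, not one from the text: c₁(x, y) = x² + y² − 4 (a circle of radius 2) and c₂(x, y) = x − y (the line y = x), whose solution nearest the starting guess is x = y = √2.

```python
def c_vec(x, y):
    """Constraint residuals c(x)."""
    return (x**2 + y**2 - 4.0, x - y)

def jacobian(x, y):
    """Rows are the gradients of c1 and c2."""
    return ((2.0 * x, 2.0 * y),
            (1.0, -1.0))

def newton_step(x, y):
    """Solve G p = -c by Cramer's rule (G must be nonsingular),
    then return the updated point x + p."""
    (a, b), (d, e) = jacobian(x, y)
    c1, c2 = c_vec(x, y)
    det = a * e - b * d
    p1 = (-c1 * e - (-c2) * b) / det
    p2 = (a * (-c2) - d * (-c1)) / det
    return x + p1, y + p2

x, y = 1.0, 2.0
for _ in range(20):
    x, y = newton_step(x, y)
print(x, y)   # both coordinates converge to sqrt(2)
```

Because c₂ is linear, a single Newton step lands exactly on the line y = x; the remaining iterations converge quadratically along that line to the circle.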
$$F(\bar{\mathbf{x}}) = F(\mathbf{x}) + \mathbf{g}^{\mathsf{T}}(\mathbf{x})(\bar{\mathbf{x}} - \mathbf{x}) + \frac{1}{2}(\bar{\mathbf{x}} - \mathbf{x})^{\mathsf{T}} H(\mathbf{x})(\bar{\mathbf{x}} - \mathbf{x}). \tag{1.30}$$
$$F(\bar{\mathbf{x}}) = F(\mathbf{x}) + \mathbf{g}^{\mathsf{T}} \mathbf{p} + \frac{1}{2} \mathbf{p}^{\mathsf{T}} H \mathbf{p}. \tag{1.33}$$
The scalar term gT p is referred to as the directional derivative along p and the scalar term
pT Hp is called the curvature or second directional derivative in the direction p.
It is instructive to examine the behavior of the series (1.33). First, let us suppose
that the expansion is about the minimum point x∗ . Now if x∗ is a local minimum, then the
objective function must be larger at all neighboring points, that is, F(x) > F(x∗ ). In order
for this to be true, the slope in all directions must be zero, that is, (g∗ )T p = 0, which implies
we must have
$$\mathbf{g}(\mathbf{x}^*) = \begin{pmatrix} g_1(\mathbf{x}^*) \\ \vdots \\ g_n(\mathbf{x}^*) \end{pmatrix} = \mathbf{0}. \tag{1.34}$$
This is just the multidimensional analogue of the condition (1.21). Furthermore, if the
function curves up in all directions, the point x∗ is called a strong local minimum and the
third term in the expansion (1.33) must be positive:
pT H∗ p > 0. (1.35)
A matrix1 that satisfies this condition is said to be positive definite. If there are some
directions with zero curvature, i.e., pT H∗ p ≥ 0, then H∗ is said to be positive semidefinite. If
there are directions with both positive and negative curvature, the matrix is called indefinite.
In summary, we have
1 H∗ ≡ H(x∗ ) (not the conjugate transpose, as in some texts).
Necessary Conditions:

$$\mathbf{g}(\mathbf{x}^*) = \mathbf{0}, \tag{1.36}$$
$$\mathbf{p}^{\mathsf{T}} H^* \mathbf{p} \geq 0; \tag{1.37}$$

Sufficient Conditions:

$$\mathbf{g}(\mathbf{x}^*) = \mathbf{0}, \tag{1.38}$$
$$\mathbf{p}^{\mathsf{T}} H^* \mathbf{p} > 0. \tag{1.39}$$
The preceding discussion was motivated by an examination of the Taylor series about the minimum point x∗. Let us now consider the same quadratic model about an arbitrary point x. Then it makes sense to choose a new point x̄ such that the gradient at x̄ is zero. The resulting linear approximation to the gradient is just

$$\bar{\mathbf{g}} = \mathbf{0} = \mathbf{g} + H\mathbf{p}, \tag{1.40}$$

which can be solved for the Newton direction

$$\mathbf{p} = -H^{-1} \mathbf{g}. \tag{1.41}$$
Just as before, the Newton iteration is defined by (1.29). Since this iteration is based on
finding a zero of the gradient vector, there is no guarantee that the step will move toward a
local minimum rather than a stationary point or maximum. To preclude this, we must insist
that the step be downhill, which requires satisfying the so-called descent condition
gT p < 0. (1.42)
It is interesting to note that, if we use the Newton direction (1.41), the descent condition
becomes
gT p = −gT H−1 g < 0, (1.43)
which can be true only if the Hessian is positive definite, i.e., (1.35) holds.
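A small numeric sketch of the descent condition (1.42) for the Newton direction (1.41) in two variables. The gradient and Hessians below are illustrative values, not taken from the text; the second case shows how an indefinite Hessian can make the Newton step point uphill.

```python
def newton_direction(H, g):
    """p = -H^{-1} g for a 2x2 Hessian, via the explicit inverse."""
    (a, b), (c_, d) = H
    det = a * d - b * c_
    p1 = -(d * g[0] - b * g[1]) / det
    p2 = -(a * g[1] - c_ * g[0]) / det
    return (p1, p2)

def directional_derivative(g, p):
    """g^T p: negative means p is a descent (downhill) direction."""
    return g[0] * p[0] + g[1] * p[1]

g = (1.0, -2.0)

H_posdef = ((2.0, 0.0), (0.0, 3.0))   # positive definite Hessian
p = newton_direction(H_posdef, g)
print(directional_derivative(g, p))    # negative: descent direction

H_indef = ((1.0, 0.0), (0.0, -1.0))    # indefinite Hessian
p = newton_direction(H_indef, g)
print(directional_derivative(g, p))    # positive: the Newton step is uphill
```

This is exactly the point of (1.43): with a positive definite Hessian the quadratic form −gᵀH⁻¹g is guaranteed negative, while an indefinite Hessian offers no such guarantee.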
where the new estimate B̄ is computed from the old estimate B. Typically, this calculation involves a low-rank modification R(Δc, Δx) that can be computed from the previous step:

$$\Delta c = c_k - c_{k-1}, \tag{1.45}$$
$$\Delta x = x_k - x_{k-1}. \tag{1.46}$$

The usual way to construct the update is to insist that the secant condition

$$\bar{B} \Delta x = \Delta c \tag{1.47}$$

hold and then construct an approximation B̄ that is “close” to the previous estimate B. In Section 1.3, the simplest form of this condition (1.15) led to the secant method. In fact, the generalization of this formula, proposed in 1965 by Broyden [50], is

$$\bar{B} = B + \frac{(\Delta c - B \Delta x)(\Delta x)^{\mathsf{T}}}{(\Delta x)^{\mathsf{T}} \Delta x}, \tag{1.48}$$

which is referred to as the secant or Broyden update. The recursive formula constructs a rank-one modification that satisfies the secant condition and minimizes the Frobenius norm between the estimates.
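A sketch of the Broyden rank-one update (1.48) using plain Python lists for matrices and vectors (the data values are illustrative, not from the text). Given the step Δx and the change Δc in the residuals, the update modifies B so that the secant condition B̄Δx = Δc holds.

```python
def broyden_update(B, dx, dc):
    """Broyden update (1.48): B + (dc - B dx) dx^T / (dx^T dx)."""
    n = len(dx)
    dx_dot = sum(v * v for v in dx)                       # (dx)^T dx
    Bdx = [sum(B[i][j] * dx[j] for j in range(n)) for i in range(n)]
    r = [dc[i] - Bdx[i] for i in range(n)]                # dc - B dx
    return [[B[i][j] + r[i] * dx[j] / dx_dot for j in range(n)]
            for i in range(n)]

B = [[1.0, 0.0], [0.0, 1.0]]
dx = [1.0, 2.0]
dc = [3.0, 1.0]
B_new = broyden_update(B, dx, dc)
# verify the secant condition: B_new dx should reproduce dc
print([sum(B_new[i][j] * dx[j] for j in range(2)) for i in range(2)])
```

Because the correction is the outer product of one vector with Δx, it is rank one, and it changes B only in the direction of the step just taken.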
When a quasi-Newton method is used to approximate the Hessian matrix, as required for minimization, one cannot simply replace Δc with Δg in the secant update. In particular, the matrix B̄ constructed using (1.48) is not symmetric. However, there is a rank-one update that does maintain symmetry, known as the symmetric rank-one (SR1) update:

$$\bar{B} = B + \frac{(\Delta g - B \Delta x)(\Delta g - B \Delta x)^{\mathsf{T}}}{(\Delta g - B \Delta x)^{\mathsf{T}} \Delta x}, \tag{1.49}$$

where Δg ≡ g_k − g_{k−1}. While the SR1 update does preserve symmetry, it does not necessarily maintain a positive definite approximation. In contrast, the update

$$\bar{B} = B + \frac{\Delta g (\Delta g)^{\mathsf{T}}}{(\Delta g)^{\mathsf{T}} \Delta x} - \frac{B \Delta x (\Delta x)^{\mathsf{T}} B}{(\Delta x)^{\mathsf{T}} B \Delta x} \tag{1.50}$$

is a rank-two positive definite secant update provided (Δx)ᵀΔg > 0 is enforced at each iteration. This update was discovered independently by Broyden [51], Fletcher [81], Goldfarb [103], and Shanno [159] in 1970 and is known as the BFGS update.
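A sketch of the BFGS update (1.50) in pure Python, assuming a symmetric 2×2 matrix B stored as nested lists (the data values below are illustrative, not from the text). The curvature condition (Δx)ᵀΔg > 0 is checked before updating, which is what keeps the approximation positive definite.

```python
def bfgs_update(B, dx, dg):
    """BFGS update (1.50) for a symmetric matrix B:
    B + dg dg^T / (dg^T dx) - (B dx)(B dx)^T / (dx^T B dx)."""
    n = len(dx)
    Bdx = [sum(B[i][j] * dx[j] for j in range(n)) for i in range(n)]
    dg_dx = sum(dg[i] * dx[i] for i in range(n))      # (dg)^T dx
    dxBdx = sum(dx[i] * Bdx[i] for i in range(n))     # (dx)^T B dx
    if dg_dx <= 0.0:
        raise ValueError("curvature condition (dx)^T dg > 0 violated")
    # for symmetric B, (B dx dx^T B)[i][j] = Bdx[i] * Bdx[j]
    return [[B[i][j]
             + dg[i] * dg[j] / dg_dx
             - Bdx[i] * Bdx[j] / dxBdx
             for j in range(n)] for i in range(n)]

B = [[1.0, 0.0], [0.0, 1.0]]
dx = [1.0, 0.0]
dg = [2.0, 0.0]
B_new = bfgs_update(B, dx, dg)
print(B_new)   # the secant condition B_new dx = dg now holds
```

Note that the rank-two structure guarantees the secant condition exactly: B̄Δx = BΔx + Δg − BΔx = Δg, regardless of the data.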
The effective computational implementation of a quasi-Newton update introduces a number of additional considerations. When solving nonlinear equations, the search direction from (1.28) is p = −G⁻¹c, and for optimization problems the search direction given by (1.41) is p = −H⁻¹g. Since the search direction calculation involves the matrix inverse (either G⁻¹ or H⁻¹), one apparent simplification is to apply the recursive update directly to the inverse. In this case, the search direction can be computed simply by computing the matrix-vector product. This approach was proposed by Broyden for nonlinear equations, but has been considerably less successful in practice than the update given by (1.48), and is known as “Broyden’s bad update.” For unconstrained minimization, let us make the substitutions Δx → Δg, Δg → Δx, and B → B⁻¹ in (1.50). By computing the inverse of the resulting expression, one obtains

$$\bar{B} = B + \frac{(\Delta g - B \Delta x)(\Delta g)^{\mathsf{T}} + \Delta g (\Delta g - B \Delta x)^{\mathsf{T}}}{(\Delta g)^{\mathsf{T}} \Delta x} - \sigma\, \Delta g (\Delta g)^{\mathsf{T}}, \tag{1.51}$$