Stochastic Differential
Equations in Infinite
Dimensions
Series Editors:

Søren Asmussen
Department of Mathematical Sciences
Aarhus University
Ny Munkegade
8000 Aarhus C
Denmark
asmus@[Link]

Peter Jagers
Mathematical Statistics
Chalmers University of Technology and University of Gothenburg
412 96 Göteborg
Sweden
jagers@[Link]
ISSN 1431-7028
ISBN 978-3-642-16193-3 e-ISBN 978-3-642-16194-0
DOI 10.1007/978-3-642-16194-0
Springer Heidelberg Dordrecht London New York
Preface
In addition to presenting these results on SPDEs, we discuss the work of Leha and
Ritter [46, 47] on SDEs in R∞ with applications to interacting particle systems and
the related work of Albeverio et al. [2, 3], and also of Gawarecki and Mandrekar [20,
21] on the equations in the field of Glauber dynamics for quantum lattice systems.
In both cases the authors study infinite systems of SDEs.
We do not present here the approach used in Kallianpur and Xiong [37], as it requires introducing additional terminology for nuclear spaces. For this type of problem (referred to as "type 2" equations by K. Itô in [35]), we refer the reader to [22, 23], and [24], as well as to [37].
A third approach, in which solutions are Hida distributions, is presented by Holden et al. in the monograph [31].
The book is divided into two parts. We begin Part I with a discussion of the
semigroup and variational methods for solving PDEs. We simultaneously develop
stochastic calculus with respect to a Q-Wiener process and a cylindrical Wiener pro-
cess, relying on the classic approach presented in [49]. These foundations allow us to
develop the theory of semilinear partial differential equations. We address the case
of Lipschitz coefficients first and produce unique mild solutions as in [11]; how-
ever, we then extend our research to the case where the equation coefficients depend
on the entire “past” of the solution, invoking the techniques of Gikhman and Sko-
rokhod [25]. We also prove Markov and Feller properties for mild solutions, their
dependence on the initial condition, and the Kolmogorov backward equation for the
related transition semigroup. Here we have adapted the work of B. Øksendal [61],
S. Cerrai [8], and Da Prato and Zabczyk [11].
To go beyond the Lipschitz case, we have adapted the method of approximating
continuous functions by Lipschitz functions f : [0, T ] × Rn → Rn from Gikhman
and Skorokhod [25] to the case of continuous functions f : [0, T ] × H → H [22].
This technique enabled us to study the existence of weak solutions for SDEs with
continuous coefficients, with the solution identified in a larger Hilbert space, where
the original Hilbert space is compactly embedded. This arrangement is used, as we
have already mentioned, due to the invalidity of the Peano theorem. In addition, we
study martingale solutions to semilinear SDEs in the case of a compact semigroup
and for coefficients depending on the entire past of the solution.
The variational method is addressed in Chap. 4, where we study both the weak
and strong solutions. The problem of the existence of weak variational solutions is
not well addressed in the existing literature, and our original results are obtained
with the help of the ideas presented in Kallianpur et al. [36]. We have followed the
approach of Prévôt and Röckner in our presentation of the problem of the existence
and uniqueness of strong solutions.
We conclude Part I with an interesting problem of an infinite system of SDEs that
does not arise from a stochastic partial differential equation and serves as a model of
an interacting particle system and in Glauber dynamics for quantum lattice systems.
In Part II of the book, we present the asymptotic behaviors of solutions to infinite-
dimensional stochastic differential equations. The study of this topic was undertaken
for specific cases by Ichikawa [32, 33] and Da Prato and Zabczyk [12] in the case
of mild solutions. A general Lyapunov function approach for strong solutions in a
Preface ix
Gelfand triplet setting for exponential stability originated in the work of Khasminskii and Mandrekar [40] (see also [55]). A generalization of this approach for
mild and strong solutions involving exponential boundedness was put forward by
R. Liu and Mandrekar [52, 53]. This work allows readers to study the existence of an invariant measure [52] and the weak recurrence of solutions to compact sets [51].
Some of these results were presented by K. Liu in a slightly more general form
in [50].
Although we have studied the existence and uniqueness of non-Markovian solu-
tions, we do not investigate the ergodic properties of these processes, as the tech-
niques in this field are still in development [28].
During the time we were working on this book, we were helped by discussions
with various scholars. We thank Professors A.V. Skorokhod and R. Khasminskii for
providing insights in the problems studied in Parts I and II, respectively. We are
indebted to Professor B. Øksendal for giving us ideas on the organization of the
book. One of us had the privilege of visiting Professors G. Da Prato and Jürgen Potthoff. The discussions with them helped in understanding several problems. The
latter provided an opportunity to present a preliminary version of the first part to
his students. Clearly, all the participants’ comments improved the presentation. The
content of Chap. 5 bears the influence of Professor Sergio Albeverio, from whom
we got an insight into applications to physics. Professor Kallianpur provided pre-
liminary knowledge of the field and encouragement by extending an invitation to a
conference on SPDEs in Charlotte. We would be remiss if we did not thank the participants of the seminar on the subject given at Michigan State University. Our special thanks go to Professors Peter Bates, Sheldon Newhouse, and Shlomo Levental,
whose comments and questions led to cleaning up some confusion in the book.
Finally, we thank two referees for their insightful comments which led to a sig-
nificant improvement of our presentation. We are grateful to Dr. M. Reizakis of
Springer Verlag for her timely administrative support and encouragement.
Contents
2 Stochastic Calculus . . . 17
2.1 Hilbert-Space-Valued Process, Martingales, and Cylindrical Wiener Process . . . 17
2.1.1 Cylindrical and Hilbert-Space-Valued Gaussian Random Variables . . . 17
2.1.2 Cylindrical and Q-Wiener Processes . . . 19
2.1.3 Martingales in a Hilbert Space . . . 21
2.2 Stochastic Integral with Respect to a Wiener Process . . . 23
2.2.1 Elementary Processes . . . 25
2.2.2 Stochastic Itô Integral for Elementary Processes . . . 26
2.2.3 Stochastic Itô Integral with Respect to a Q-Wiener Process . . . 34
2.2.4 Stochastic Itô Integral with Respect to Cylindrical Wiener Process . . . 44
2.2.5 The Martingale Representation Theorem . . . 49
2.2.6 Stochastic Fubini Theorem . . . 57
2.3 The Itô Formula . . . 61
2.3.1 The Case of a Q-Wiener Process . . . 61
2.3.2 The Case of a Cylindrical Wiener Process . . . 69
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289
The uniqueness of the solution and the major premise of Huygens's principle lead to the following implications. If the temperature distribution u(t, x) at t > 0 is uniquely determined by the initial condition ϕ(x), then u(t, x) can also be obtained by first calculating u(s, x) for some intermediate time s < t and then using u(s, x) as the initial condition. Thus, there exist transformations G(t) on ϕ, defined by (G(t)ϕ)(x) = u_ϕ(t, x), satisfying the semigroup property

G(t)ϕ = G(t − s)(G(s)ϕ), 0 ≤ s ≤ t.
The solution to (1.1) is known to be (see [7], Chap. 3, for a detailed presentation)

u(t, x) = (G(t)ϕ)(x).
Exercise 1.1 Show that the operators G(t) defined in (1.3) have the semigroup
property.
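As an illustrative numerical sketch of Exercise 1.1 (not part of the original text's toolkit), the semigroup property can be checked for the one-dimensional heat semigroup, where (G(t)ϕ)(x) is the convolution of ϕ with the Gaussian kernel k_t(x) = e^{−x²/(4t)}/√(4πt); the grid, kernel truncation, and test function below are hypothetical choices.

```python
import numpy as np

# Discrete sketch of the heat semigroup (G(t)phi)(x) = (k_t * phi)(x) with
# k_t(x) = exp(-x^2 / (4t)) / sqrt(4 pi t); grid and truncation are hypothetical.
def heat_semigroup(phi_vals, t, dx):
    """Apply G(t) on a uniform grid by discrete convolution with the heat kernel."""
    xs = np.linspace(-5.0, 5.0, 501)               # truncated kernel support, spacing dx
    kernel = np.exp(-xs**2 / (4.0 * t)) / np.sqrt(4.0 * np.pi * t)
    return np.convolve(phi_vals, kernel, mode="same") * dx

dx = 0.02
x = np.arange(-10.0, 10.0, dx)
phi = np.exp(-x**2)                                # initial temperature profile

# Semigroup property: G(0.2) G(0.3) phi should agree with G(0.5) phi.
u_two_step = heat_semigroup(heat_semigroup(phi, 0.3, dx), 0.2, dx)
u_one_step = heat_semigroup(phi, 0.5, dx)
semigroup_gap = np.max(np.abs(u_two_step - u_one_step))
```

On this grid the two computations agree up to discretization error, mirroring G(t) = G(t − s)G(s).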
1.2 Elements of Semigroup Theory
For brevity, L(X) will denote the Banach space of bounded linear operators on X. The identity operator on X is denoted by I.

Let X^* denote the dual space of all bounded linear functionals x^* on X. X^* is again a Banach space under the supremum norm

‖x^*‖_{X^*} = sup_{x ∈ X, ‖x‖_X = 1} ⟨x, x^*⟩,
⟨T h, h⟩_H ≥ 0.
ℝ_+ ∋ t ↦ S(t)x ∈ X

is continuous.
Definition 1.2 Let S(t) be a C_0-semigroup on a Banach space X. The linear operator A with domain

D(A) = {x ∈ X : lim_{t→0+} (S(t)x − x)/t exists} (1.5)

defined by

Ax = lim_{t→0+} (S(t)x − x)/t (1.6)

is called the infinitesimal generator of the semigroup S(t).
(d/dt)S(t)x = AS(t)x = S(t)Ax. (1.8)
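In finite dimensions every matrix generates a uniformly continuous semigroup S(t) = e^{tA}, so the defining limit (1.5)–(1.6) and the semigroup property can be checked directly. The matrix below is a hypothetical example, and the exponential is computed by eigendecomposition (assuming diagonalizability):

```python
import numpy as np

# Finite-dimensional sketch: S(t) = e^{tA} for a (hypothetical) diagonalizable matrix A;
# the difference quotient (S(t)x - x)/t recovers Ax as t -> 0+ (cf. (1.6)).
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])

def S(t):
    """Matrix exponential e^{tA} via eigendecomposition (A assumed diagonalizable)."""
    vals, vecs = np.linalg.eig(A)
    return (vecs @ np.diag(np.exp(t * vals)) @ np.linalg.inv(vecs)).real

x = np.array([1.0, -1.0])
t = 1e-6
quotient = (S(t) @ x - x) / t                      # difference quotient in (1.5)-(1.6)
generator_gap = np.linalg.norm(quotient - A @ x)

# Semigroup property S(t + s) = S(t) S(s)
semigroup_gap = np.linalg.norm(S(0.7) - S(0.3) @ S(0.4))
```

Both gaps are small for this matrix, consistent with A being the infinitesimal generator of S(t).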
(3) For x ∈ X, ∫_0^t S(s)x ds ∈ D(A), and

A ∫_0^t S(s)x ds = S(t)x − x. (1.9)
(5) If S(t) is compact, then S(t) is continuous in the operator topology for t > 0, i.e.,

lim_{s→t, s,t>0} ‖S(s) − S(t)‖_{L(H)} = 0. (1.10)
(9) Let X be a reflexive Banach space. Then the adjoint semigroup S(t)∗ of S(t) is
a C0 -semigroup whose infinitesimal generator is A∗ , the adjoint of A.
If X = H, a real separable Hilbert space, then for h ∈ H, define the graph norm

‖h‖_{D(A)} = (‖h‖_H^2 + ‖Ah‖_H^2)^{1/2}. (1.12)
Exercise 1.2 Let A be a closed linear operator on a real separable Hilbert space. Prove that (D(A), ‖·‖_{D(A)}) is a real separable Hilbert space.

Let B(H) denote the Borel σ-field on H. Then D(A) ∈ B(H), and

A : (D(A), B(H)|_{D(A)}) → (H, B(H))

is measurable. Consequently, the restricted Borel σ-field B(H)|_{D(A)} coincides with the Borel σ-field on the Hilbert space (D(A), ‖·‖_{D(A)}), and measurability of D(A)-valued functions can be understood with respect to either Borel σ-field.
1 Partial Differential Equations as Equations in Infinite Dimensions
Theorem 1.3 Let f : [0, T] → D(A) be measurable, and let ∫_0^t ‖f(s)‖_{D(A)} ds < ∞. Then

∫_0^t f(s) ds ∈ D(A) and ∫_0^t Af(s) ds = A ∫_0^t f(s) ds. (1.13)
Definition 1.3 The resolvent set ρ(A) of a closed linear operator A on a Banach space X is the set of all complex numbers λ for which λI − A has a bounded inverse, i.e., the operator (λI − A)^{-1} ∈ L(X). The family of bounded linear operators R(λ, A) = (λI − A)^{-1}, λ ∈ ρ(A), is called the resolvent of A.

In particular,

AR(λ, A)x = R(λ, A)Ax, x ∈ D(A). (1.16)

In addition, we have the following commutativity property:
The following statement is true in greater generality; however, we will use it only in
the real domain.
Since the range R(R(λ, A)) ⊂ D(A), we can define the Yosida approximation of A by

A_λ x = AR_λ x, x ∈ X. (1.22)

Note that by (1.16),

A_λ x = R_λ Ax, x ∈ D(A).

Since λ(λI − A)R(λ, A) = λI, we have λ^2 R(λ, A) − λI = λAR(λ, A), so that

A_λ x = λ^2 R(λ, A)x − λx,

lim_{λ→∞} R_λ x = x, x ∈ X, (1.26)
is finite.
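The Yosida approximation can likewise be visualized with matrices: taking R(λ, A) = (λI − A)^{-1} and A_λ = λ²R(λ, A) − λI, one sees A_λx → Ax and λR(λ, A)x → x as λ → ∞. The matrix and vector below are hypothetical test data.

```python
import numpy as np

# Matrix sketch of the Yosida approximation A_lambda = lambda^2 R(lambda, A) - lambda I
# (cf. (1.22)); A_lambda x -> Ax and lambda R(lambda, A) x -> x as lambda -> infinity.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])                      # hypothetical "generator"
I2 = np.eye(2)
x = np.array([1.0, 2.0])

def resolvent(lam):
    return np.linalg.inv(lam * I2 - A)

yosida_errs = []
for lam in (1e2, 1e3, 1e4):
    A_lam = lam**2 * resolvent(lam) - lam * I2
    yosida_errs.append(np.linalg.norm(A_lam @ x - A @ x))

# R_lambda x = lambda R(lambda, A) x -> x  (cf. (1.26))
identity_gap = np.linalg.norm(1e4 * resolvent(1e4) @ x - x)
```

The errors decay roughly like 1/λ, matching the first-order nature of the approximation.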
1.4 The Abstract Cauchy Problem
Note that the Sobolev space W^{m,p}(O) is a subset of L^p(O). If O = ℝ^d, then it is known ([30], Chap. 10, Proposition 1.5) that

W_0^{1,2}(ℝ^d) = W^{1,2}(ℝ^d). (1.30)
Let A be a linear operator on a real separable Hilbert space H, and let us consider the abstract Cauchy problem given by

du(t)/dt = Au(t), 0 < t < T,
u(0) = x, x ∈ H. (1.31)
Exercise 1.4 Argue why, if x ∉ D(A), then (1.31) cannot have a solution.
where f : [0, T] → H.
We assume that A is an infinitesimal generator of a C0 -semigroup so that the
homogeneous equation (1.31) has a unique solution for all x ∈ D(A). The definition
of a classical solution, Definition 1.4, extends to the case of the nonhomogeneous
initial-value problem by requiring that in this case, the solution satisfies (1.32).
We now define the concept of a mild solution.
Exercise 1.5 Prove that the function u(t) in Definition 1.5 is continuous.
Note that for x ∈ H and f ≡ 0, the mild solution is S(t)x, which is not in general
a classical solution.
When x ∈ D(A), the continuity of f is insufficient to assure the existence of a classical solution. To see this, following [63], consider f(t) = S(t)x for x ∈ H such that S(t)x ∉ D(A). Then (1.32) may not have a classical solution even if u(0) = 0 ∈ D(A), as the mild solution

u(t) = ∫_0^t S(t − s)S(s)x ds = tS(t)x
Conclude that in this case the initial-value problem (1.32) has a solution for every
x ∈ D(A).
whose domain is D(Δ) = W^{2,2}(ℝ^d). Note the difference between a bounded region and ℝ^d. Here the domain of the infinitesimal generator has this simple form due to (1.30). Consider the related abstract Cauchy problem

du/dt = Δu, 0 < t < T,
u(0) = ϕ ∈ L^2(ℝ^d). (1.35)
It is known that (Gd (t)ϕ)(x) is a classical solution of problem (1.35) for any ϕ ∈
H = L2 (Rd ) in the sense of Definition 1.4, since the semigroup Gd (t) on H is
differentiable ([63], Chap. 7, Theorem 2.7 and Remark 2.9).
To study nonlinear equations, one also needs to look at the Peano theorem in an
infinite-dimensional Hilbert space, that is, to study the existence of a solution of the
equation

du(t)/dt = G(u(t)), u(0) = x ∈ H,
where G is a continuous function on H . We note that due to the failure of the
Arzela–Ascoli theorem in C([0, T ], H ), the proof in the finite-dimensional case
fails (see the proofs in [15] and [29]). In fact the Peano theorem in a Hilbert space
is not true [26]. However, if we look at semilinear equations
du(t)/dt = Au(t) + G(u(t)), t > 0,
u(0) = x, x ∈ H,
V ↪ H ↪ V^*
‖Av‖_{V^*} ≤ M‖v‖_V

and

2⟨v, Av⟩ ≤ λ‖v‖_H^2 − α‖v‖_V^2, v ∈ V,

for some real number λ and some M, α > 0. The following theorem is due to Lions [48].
Theorem 1.6 (Lions) Let x ∈ H and f ∈ L^2([0, T], V^*). Then there exists a unique function u ∈ L^2([0, T], V) with du(t)/dt ∈ L^2([0, T], V^*) satisfying

du(t)/dt = Au(t) + f(t), t > 0,
u(0) = x. (1.37)
Proof Let 0 < a < T; we extend u to (−a, T + a) by putting u(t) = u(−t) for −a < t < 0 and u(t) = u(2T − t) for T < t < T + a. Observe that u ∈ L^2((−a, T + a), V) and du/dt ∈ L^2((−a, T + a), V^*). Define

w(t) = θ(t)u(t),
For v(t) = u(t), this shows that the statement (1.38) is valid in terms of distributions. Since the RHS in (1.38) is an integrable function of t, we conclude that ‖u(t)‖_H^2 is absolutely continuous.
Chapter 2
Stochastic Calculus
with the series convergent P -a.s. by Kolmogorov’s three-series theorem ([5], Theo-
rem 22.3).
However, it is not true that there exists a K-valued random variable X such that
X̃(k)(ω) = ⟨X(ω), k⟩_K.
with the series being P -a.s. divergent by the strong law of large numbers.
In order to produce a K-valued Gaussian random variable, we proceed as follows.
Denote by L_1(K) the space of trace-class operators on K,

L_1(K) = {L ∈ L(K) : τ(L) := tr[(LL^*)^{1/2}] < ∞}, (2.1)

where tr[L] = Σ_{j=1}^∞ ⟨Lf_j, f_j⟩_K for an ONB {f_j}_{j=1}^∞ ⊂ K. It is well known [68] that tr([L]) is independent of the choice of the ONB and that L_1(K) equipped with the trace norm τ is a Banach space. Let Q : K → K be a symmetric nonnegative definite trace-class operator.
Assume that X : K → L^2(Ω, F, P) satisfies the following conditions:

(1) The mapping X is linear.
(2) For an arbitrary k ∈ K, X(k) is a Gaussian random variable with mean zero.
(3) For arbitrary k, k′ ∈ K, E(X(k)X(k′)) = ⟨Qk, k′⟩_K.
Let {f_j}_{j=1}^∞ be an ONB in K diagonalizing Q, and let the eigenvalues corresponding to the eigenvectors f_j be denoted by λ_j, so that Qf_j = λ_j f_j. We define

X(ω) = Σ_{j=1}^∞ X(f_j)(ω) f_j.

Since Σ_{j=1}^∞ λ_j < ∞, the series converges in L^2((Ω, F, P), K) and hence P-a.s.
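A quick Monte Carlo sketch of this construction (with a hypothetical eigenvalue sequence λ_j = 2^{−j} and a truncated expansion) confirms the covariance identity E(X(k)X(k′)) = ⟨Qk, k′⟩_K empirically:

```python
import numpy as np

# Monte Carlo sketch: X = sum_j X(f_j) f_j with X(f_j) ~ N(0, lambda_j) independent;
# the empirical covariance of <X, k> and <X, k'> should match <Qk, k'>_K.
# lambda_j = 2^{-j} is a hypothetical trace-class choice; d truncates the series.
rng = np.random.default_rng(3)
d = 12
lams = 0.5 ** np.arange(1, d + 1)                 # eigenvalues of Q, sum(lams) < 1

N = 400_000
X = np.sqrt(lams) * rng.standard_normal((N, d))   # coordinates <X, f_j>_K

k = np.ones(d)
kp = np.linspace(1.0, 0.0, d)
empirical = np.mean((X @ k) * (X @ kp))
exact = np.sum(lams * k * kp)                     # <Qk, k'>_K in the eigenbasis
cov_rel_err = abs(empirical - exact) / exact
```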
Let (Ω, F, {F_t}_{t≥0}, P) be a filtered probability space, and, as above, let K be a real separable Hilbert space. We will always assume that the filtration F_t satisfies the usual conditions:

(1) F_0 contains all A ∈ F such that P(A) = 0;
(2) F_t = ∩_{s>t} F_s.
A standard cylindrical Wiener process can now be introduced using the concept
of a cylindrical random variable.
Definition 2.5 We call a family {W̃_t}_{t≥0} defined on a filtered probability space (Ω, F, {F_t}_{t≥0}, P) a cylindrical Wiener process in a Hilbert space K if:

(1) for an arbitrary t ≥ 0, the mapping W̃_t : K → L^2(Ω, F, P) is linear;
(2) for an arbitrary k ∈ K, W̃_t(k) is an F_t-Brownian motion;
(3) for arbitrary k, k′ ∈ K and t ≥ 0, E(W̃_t(k)W̃_t(k′)) = t⟨k, k′⟩_K.
For every t > 0, W̃_t/√t is a standard cylindrical Gaussian random variable, so that for any k ∈ K, W̃_t(k) can be represented as a P-a.s. convergent series

W̃_t(k) = Σ_{j=1}^∞ ⟨k, f_j⟩_K W̃_t(f_j), (2.2)

where {f_j}_{j=1}^∞ is an ONB in K.
Exercise 2.2 Show that E(W̃_t(k)W̃_s(k′)) = (t ∧ s)⟨k, k′⟩_K and conclude that W̃_t(f_j), j = 1, 2, . . . , are independent Brownian motions.
For the same reason why a cylindrical Gaussian random variable cannot be realized as a K-valued random variable, there is no K-valued process W_t such that

W̃_t(k)(ω) = ⟨W_t(ω), k⟩_K.
We can assume that the Brownian motions w_j(t) are continuous. Then, the series (2.3) converges in L^2(Ω, C([0, T], K)) for every interval [0, T], see Exercise 2.3. Therefore, the K-valued Q-Wiener process can be assumed to be continuous. We denote

W_t(k) = Σ_{j=1}^∞ λ_j^{1/2} w_j(t) ⟨f_j, k⟩_K

for any k ∈ K, with the series converging in L^2(Ω, C([0, T], ℝ)) on every interval [0, T].
Exercise 2.3 Use Doob’s inequality, Theorem 2.2, for the submartingale
n
1/2
λj wj (t)fj
j =m K
to prove that the partial sums of the series (2.3) defining the Q-Wiener process are
a Cauchy sequence in L2 (Ω, C([0, T ], K)).
Remark 2.1 A stronger convergence result can be obtained for the series (2.3). Since

P(sup_{0≤t≤T} ‖Σ_{j=m}^n λ_j^{1/2} w_j(t) f_j‖_K > ε) ≤ (1/ε^2) E‖Σ_{j=m}^n λ_j^{1/2} w_j(T) f_j‖_K^2 = (T/ε^2) Σ_{j=m}^n λ_j → 0
Theorem 2.1 A K-valued Q-Wiener process {Wt }t≥0 has the following properties:
(1) W0 = 0.
(2) Wt has continuous trajectories in K.
(3) Wt has independent increments.
(4) W_t is a Gaussian process with the covariance operator Q, i.e., for any k, k′ ∈ K and s, t ≥ 0,

E(W_t(k)W_s(k′)) = (t ∧ s)⟨Qk, k′⟩_K.

(5) For an arbitrary k ∈ K, the law L((W_t − W_s)(k)) ∼ N(0, (t − s)⟨Qk, k⟩_K).
Exercise 2.4 Consider a cylindrical Wiener process W̃_t(k) = Σ_{j=1}^∞ ⟨k, f_j⟩_K W̃_t(f_j) and a Q-Wiener process W_t = Σ_{j=1}^∞ λ_j^{1/2} w_j(t) f_j, as defined in (2.2) and (2.3), respectively. Show that

(a) W_t^1 = W̃_t ∘ Q^{1/2} = Σ_{j=1}^∞ λ_j^{1/2} W̃_t(f_j) f_j defines a Q-Wiener process;
(b) W̃_t^1(k) = Σ_{j=1}^∞ ⟨k, f_j⟩_K w_j(t) defines a cylindrical Wiener process.
if 0 ≤ s ≤ t.
If M_t ∈ L^p(Ω, F, P) is an H-valued martingale, then, for p ≥ 1, the process ‖M_t‖_H^p is a real-valued submartingale. We have therefore the following theorem.
is a martingale.
M_t^i = ⟨M_t, e_i⟩_H,

where {e_i}_{i=1}^∞ is an ONB in H. Note that the quadratic variation process has to satisfy

⟨⟨⟨M⟩⟩_t(e_i), e_j⟩_H = ⟨M^i, M^j⟩_t.

Consequently, we define the quadratic variation process by

⟨⟨⟨M⟩⟩_t(h), g⟩_H = Σ_{i,j=1}^∞ ⟨M^i, M^j⟩_t ⟨e_i, h⟩_H ⟨e_j, g⟩_H. (2.5)
The sum in (2.5) converges P-a.s. and defines a nonnegative definite trace-class operator on H, since

E tr⟨⟨M⟩⟩_t = E Σ_{i=1}^∞ ⟨M^i⟩_t = Σ_{i=1}^∞ E⟨M_t, e_i⟩_H^2 = E‖M_t‖_H^2 < ∞.
Exercise 2.5 Show that an H-valued Q-Wiener process {W_t}_{t≤T} is a continuous square-integrable martingale with ⟨W⟩_t = t(tr Q) and ⟨⟨W⟩⟩_t = tQ.
Exercise 2.6 Let 0 = t_1 < t_2 < · · · < t_n = t ≤ T be a partition of the interval [0, t] with max{t_{j+1} − t_j, 1 ≤ j ≤ n − 1} → 0. Denote ΔW_j = W_{t_{j+1}} − W_{t_j}. Show that

Σ_{j=1}^{n−1} ‖ΔW_j‖_K^2 → t(tr Q), P-a.s.
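Exercise 2.6 can be explored by simulation: truncating the expansion (2.3) of the Q-Wiener process at d modes (with hypothetical eigenvalues λ_j = 1/j²) and summing squared increments over a fine partition should reproduce t · tr Q up to Monte Carlo error.

```python
import numpy as np

# Simulation sketch of Exercise 2.6: for W_t = sum_j sqrt(lambda_j) w_j(t) f_j
# (series (2.3), truncated at d modes), sum_j ||Delta W_j||_K^2 over a fine
# partition of [0, t] approaches t * tr(Q).  lambda_j = 1/j^2 is hypothetical.
rng = np.random.default_rng(42)
d = 50
lams = 1.0 / np.arange(1, d + 1) ** 2             # eigenvalues of Q
t, n = 1.0, 4000                                  # horizon and number of partition steps
dt = t / n

# Brownian increments of w_j scaled by sqrt(lambda_j), in the coordinates f_j
dW = np.sqrt(lams)[:, None] * rng.standard_normal((d, n)) * np.sqrt(dt)
quad_var = np.sum(dW**2)                          # sum over the partition of ||Delta W_j||_K^2
trace_Q = np.sum(lams)
qv_gap = abs(quad_var - t * trace_Q)
```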
We will introduce the concept of Itô’s stochastic integral with respect to a Q-Wiener
process and with respect to a cylindrical Wiener process simultaneously.
Let K and H be separable Hilbert spaces, and let Q be either a symmetric nonnegative definite trace-class operator on K or Q = I_K, the identity operator on K. In case Q is trace-class, we will always assume that all its eigenvalues λ_j > 0, j = 1, 2, . . .; otherwise we can start with the Hilbert space ker(Q)^⊥ instead of K. The associated eigenvectors forming an ONB in K will be denoted by f_j.
Then the space K_Q = Q^{1/2}K equipped with the scalar product

⟨u, v⟩_{K_Q} = Σ_{j=1}^∞ (1/λ_j) ⟨u, f_j⟩_K ⟨v, f_j⟩_K

is a separable Hilbert space with an ONB {λ_j^{1/2} f_j}_{j=1}^∞.
It is well known (see [68]) that L_2(H_1, H_2) equipped with the norm

‖L‖_{L_2(H_1,H_2)} = (Σ_{i=1}^∞ ‖Le_i‖_{H_2}^2)^{1/2},

for an ONB {e_i}_{i=1}^∞ in H_1, is a Hilbert space. Since the Hilbert spaces H_1 and H_2 are separable, the space L_2(H_1, H_2) is also separable, as Hilbert–Schmidt operators are limits of sequences of finite-dimensional linear operators.
Consider L_2(K_Q, H), the space of Hilbert–Schmidt operators from K_Q to H. If {e_i}_{i=1}^∞ is an ONB in H, then the Hilbert–Schmidt norm of an operator L ∈ L_2(K_Q, H) is given by

‖L‖_{L_2(K_Q,H)}^2 = Σ_{j,i=1}^∞ ⟨L(λ_j^{1/2} f_j), e_i⟩_H^2 = Σ_{j,i=1}^∞ ⟨LQ^{1/2} f_j, e_i⟩_H^2
= ‖LQ^{1/2}‖_{L_2(K,H)}^2 = tr[(LQ^{1/2})(LQ^{1/2})^*]. (2.7)

Since the Hilbert spaces K_Q and H are separable, the space L_2(K_Q, H) is also separable.
Let L ∈ L(K, H). If k ∈ K_Q, then

k = Σ_{j=1}^∞ ⟨k, λ_j^{1/2} f_j⟩_{K_Q} λ_j^{1/2} f_j,

and

⟨L, M⟩_{L_2(K_Q,H)} = tr[LQM^*], (2.10)
Φ(t, ω) = φ(ω)1_{{0}}(t) + Σ_{j=0}^{n−1} φ_j(ω)1_{(t_j, t_{j+1}]}(t), (2.11)
For an elementary process Φ ∈ E(L(K, H)), we define the Itô stochastic integral with respect to a Q-Wiener process W_t by

∫_0^t Φ(s) dW_s = Σ_{j=0}^{n−1} φ_j(W_{t_{j+1}∧t} − W_{t_j∧t})

for t ∈ [0, T]. The term φW_0 is neglected since P(W_0 = 0) = 1. This stochastic integral is an H-valued stochastic process.
We define the Itô cylindrical stochastic integral of an elementary process Φ ∈ E(L(K, H)) with respect to a cylindrical Wiener process W̃ by

(∫_0^t Φ(s) dW̃_s)(h) = Σ_{j=0}^{n−1} [W̃_{t_{j+1}∧t}(φ_j^*(h)) − W̃_{t_j∧t}(φ_j^*(h))] (2.12)

for t ∈ [0, T] and h ∈ H. The following proposition states Itô's isometry, which is essential in furthering the construction of the stochastic integral.
for t ∈ [0, T ].
Proof The proof resembles calculations in the real case. Without loss of generality we can assume that t = T. Then

E‖∫_0^T Φ(s) dW_s‖_H^2 = E‖Σ_{j=0}^{n−1} φ_j(W_{t_{j+1}} − W_{t_j})‖_H^2
= E Σ_{j=0}^{n−1} ‖φ_j(W_{t_{j+1}} − W_{t_j})‖_H^2 + E Σ_{i≠j} ⟨φ_j(W_{t_{j+1}} − W_{t_j}), φ_i(W_{t_{i+1}} − W_{t_i})⟩_H.
Consider first the single term E‖φ_j(W_{t_{j+1}} − W_{t_j})‖_H^2. We use the fact that the random variable φ_j, and consequently, for a vector e_m ∈ H, the random variable φ_j^* e_m, is F_{t_j}-measurable, while the increment W_{t_{j+1}} − W_{t_j} is independent of this σ-field. With {f_l}_{l=1}^∞ and {e_m}_{m=1}^∞ orthonormal bases in the Hilbert spaces K and H, we have

E‖φ_j(W_{t_{j+1}} − W_{t_j})‖_H^2 = Σ_{m=1}^∞ E⟨φ_j(W_{t_{j+1}} − W_{t_j}), e_m⟩_H^2
= Σ_{m=1}^∞ E(E(⟨φ_j(W_{t_{j+1}} − W_{t_j}), e_m⟩_H^2 | F_{t_j}))
= Σ_{m=1}^∞ E(E(⟨W_{t_{j+1}} − W_{t_j}, φ_j^* e_m⟩_K^2 | F_{t_j}))
= Σ_{m=1}^∞ E(E((Σ_{l=1}^∞ ⟨W_{t_{j+1}} − W_{t_j}, f_l⟩_K ⟨φ_j^* e_m, f_l⟩_K)^2 | F_{t_j}))
= Σ_{m=1}^∞ Σ_{l=1}^∞ E(E(⟨W_{t_{j+1}} − W_{t_j}, f_l⟩_K^2 ⟨φ_j^* e_m, f_l⟩_K^2 | F_{t_j}))
+ Σ_{m=1}^∞ Σ_{l≠l′} E(E(⟨W_{t_{j+1}} − W_{t_j}, f_l⟩_K ⟨φ_j^* e_m, f_l⟩_K ⟨W_{t_{j+1}} − W_{t_j}, f_{l′}⟩_K ⟨φ_j^* e_m, f_{l′}⟩_K | F_{t_j}))
= (t_{j+1} − t_j) Σ_{m=1}^∞ Σ_{l=1}^∞ λ_l ⟨φ_j^* e_m, f_l⟩_K^2
= (t_{j+1} − t_j) Σ_{m,l=1}^∞ ⟨φ_j(λ_l^{1/2} f_l), e_m⟩_H^2 = (t_{j+1} − t_j)‖φ_j‖_{L_2(K_Q,H)}^2.
Similarly, for the single term E⟨φ_j(W_{t_{j+1}} − W_{t_j}), φ_i(W_{t_{i+1}} − W_{t_i})⟩_H with i < j, we obtain that

E⟨φ_j(W_{t_{j+1}} − W_{t_j}), φ_i(W_{t_{i+1}} − W_{t_i})⟩_H
= E Σ_{m=1}^∞ Σ_{l,l′=1}^∞ E(⟨W_{t_{j+1}} − W_{t_j}, f_l⟩_K ⟨φ_j^* e_m, f_l⟩_K ⟨W_{t_{i+1}} − W_{t_i}, f_{l′}⟩_K ⟨φ_i^* e_m, f_{l′}⟩_K | F_{t_j}) = 0
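For a constant deterministic integrand the isometry can be seen directly: if Φ ∈ L(K, H) does not depend on (s, ω), then ∫_0^T Φ dW_s = ΦW_T and E‖ΦW_T‖_H^2 = T tr(ΦQΦ^*) = T‖Φ‖_{L_2(K_Q,H)}^2. The following is a Monte Carlo sketch in finite dimensions, with all matrices hypothetical:

```python
import numpy as np

# Monte Carlo sketch of the Ito isometry for a constant elementary integrand:
# int_0^T Phi dW = Phi W_T, and E ||Phi W_T||_H^2 = T * tr(Phi Q Phi^*)
#                                               = T * ||Phi||_{L2(K_Q,H)}^2.
rng = np.random.default_rng(7)
T = 2.0
Phi = np.array([[1.0, 0.5, 0.0],
                [0.0, -1.0, 2.0]])                # hypothetical operator K = R^3 -> H = R^2
lams = np.array([1.0, 0.5, 0.25])                 # eigenvalues of Q
Q = np.diag(lams)

exact = T * np.trace(Phi @ Q @ Phi.T)             # right-hand side of the isometry

# W_T = sum_j sqrt(lambda_j T) xi_j f_j with xi_j standard normal
N = 200_000
W_T = np.sqrt(lams * T) * rng.standard_normal((N, 3))
mc = np.mean(np.sum((W_T @ Phi.T) ** 2, axis=1))  # empirical E ||Phi W_T||^2
iso_rel_err = abs(mc - exact) / exact
```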
We have the following counterpart of (2.13) for the Itô cylindrical stochastic integral of a bounded elementary process Φ ∈ E(L(K, H)):

E((∫_0^t Φ(s) dW̃_s)(h))^2 = E ∫_0^t ‖Φ^*(s)(h)‖_K^2 ds < ∞. (2.14)
The idea now is to extend the definition of the Itô stochastic integral and cylindrical stochastic integral to a larger class of stochastic processes, utilizing the fact that the mappings Φ ↦ ∫_0^T Φ(s) dW_s and Φ^*(·)(h) ↦ (∫_0^T Φ(s) dW̃_s)(h) are isometries due to (2.13) and (2.14).
When the stochastic integral with respect to martingales is constructed, the natural choice of the class of integrands is the set of predictable processes (see, for
example, [57]), and sometimes this restriction is applied when the martingale is a
Wiener process. We will however carry out a construction which will allow the class
of integrands to be simply adapted and not necessarily predictable processes.
Let Λ^2(K_Q, H) be a class of L_2(K_Q, H)-valued processes measurable as mappings from ([0, T] × Ω, B([0, T]) ⊗ F) to (L_2(K_Q, H), B(L_2(K_Q, H))), adapted to the filtration {F_t}_{t≤T} (thus F can be replaced with F_T), and satisfying the condition

E ∫_0^T ‖Φ(t)‖_{L_2(K_Q,H)}^2 dt < ∞. (2.15)

Obviously, elementary processes satisfying the above condition (2.15) are elements of Λ^2(K_Q, H).
We note that Λ^2(K_Q, H) equipped with the norm

‖Φ‖_{Λ^2(K_Q,H)} = (E ∫_0^T ‖Φ(t)‖_{L_2(K_Q,H)}^2 dt)^{1/2} (2.16)

is a Hilbert space. The following proposition shows that the class of bounded elementary processes is dense in Λ^2(K_Q, H). It is valid for Q, a trace-class operator, and for Q = I_K.
‖Φ_n − Φ‖_{Λ^2(K_Q,H)}^2 = E ∫_0^T ‖Φ_n(t) − Φ(t)‖_{L_2(K_Q,H)}^2 dt → 0

as n → ∞.
(b) We can assume that Φ(t, ω) ∈ L(K, H), since every operator L ∈ L_2(K_Q, H) can be approximated in L_2(K_Q, H) by the operators L_n ∈ L(K, H) defined by

L_n k = Σ_{j=1}^n L(λ_j^{1/2} f_j) ⟨λ_j^{1/2} f_j, k⟩_{K_Q}.
Then

E ∫_0^T ‖Φ(t) − Φ_n(t)‖_{L_2(K_Q,H)}^2 dt → 0

as n → ∞ by the Lebesgue DCT, so that Φ_n → Φ in Λ^2(K_Q, H).
(c) Now assume, in addition to (a) and (b), that for each ω, the function Φ(·, ω) : [0, T] → L_2(K_Q, H) is continuous. For a partition 0 = t_0 ≤ t_1 ≤ · · · ≤ t_n = T, define the elementary process

Φ_n(t, ω) = Φ(0, ω)1_{{0}}(t) + Σ_{j=0}^{n−1} Φ(t_j, ω)1_{(t_j, t_{j+1}]}(t).
due to the continuity of Φ(·, ω). Consequently, due to the Lebesgue DCT,

E ∫_0^T ‖Φ(t, ω) − Φ_n(t, ω)‖_{L_2(K_Q,H)}^2 dt → 0.
(d) If Φ(t, ω) ∈ L(K, H) is bounded for every (t, ω) but not necessarily continuous, then we first extend Φ to the entire ℝ by assigning Φ(t, ω) = 0 for t < 0 and t > T. Next, we define bounded continuous approximations of Φ by

Φ_n(t, ω) = ∫_0^t ψ_n(s − t)Φ(s, ω) ds, 0 ≤ t ≤ T.
The second integral converges to zero as h → 0 by the Lebesgue DCT. The first integral is dominated by

∫_ℝ |ψ_n(s − (t + h)) − ψ_n(s − t)| ‖Φ(s, ω)‖_{L_2(K_Q,H)} 1_{[0,t]}(s) ds
= ∫_ℝ |ψ_n(u + h) − ψ_n(u)| ‖Φ(u + t, ω)‖_{L_2(K_Q,H)} 1_{[−t,0]}(u) du
≤ (∫_ℝ |ψ_n(u + h) − ψ_n(u)|^2 du)^{1/2} (∫_ℝ ‖Φ(u + t, ω)‖_{L_2(K_Q,H)}^2 1_{[−t,0]}(u) du)^{1/2},

so that it converges to zero due to the continuity of the shift operator in L^2(ℝ) (see Theorem 8.19 in [73]). Left continuity in T follows by a similar argument.
Since the process Φ(t, ω) is adapted to the filtration Ft , we deduce from the
definition of Φn (t, ω) that it is also Ft -adapted.
We will now show that

∫_0^T ‖Φ_n(t, ω) − Φ(t, ω)‖_{L_2(K_Q,H)}^2 dt → 0

as n → ∞. Consider

‖Φ_n(t, ω) − Φ(t, ω)‖_{L_2(K_Q,H)} ≤ ∫_0^t ψ_n(s − t)‖Φ(s, ω) − Φ(t, ω)‖_{L_2(K_Q,H)} ds + ‖∫_0^t (ψ_n(s − t) − 1)Φ(t, ω) ds‖_{L_2(K_Q,H)}.
For a fixed ω, denote w_n(t) = ‖∫_0^t (ψ_n(s − t) − 1)Φ(t, ω) ds‖_{L_2(K_Q,H)}. Then w_n(t) converges to zero for every t as n → ∞ and is bounded by C‖Φ(t, ω)‖_{L_2(K_Q,H)} for some constant C. From now on, the constant C can change its value from line to line. The first integral is dominated by

∫_ℝ ψ_n(s − t)‖Φ(s, ω) − Φ(t, ω)‖_{L_2(K_Q,H)} 1_{[0,t]}(s) ds
= ∫_ℝ ψ_n(u)‖Φ(u + t, ω) − Φ(t, ω)‖_{L_2(K_Q,H)} 1_{[−t,0]}(u) du
≤ ∫_ℝ ‖Φ(u + t, ω) − Φ(t, ω)‖_{L_2(K_Q,H)} ψ_n^{1/2}(u) ψ_n^{1/2}(u) 1_{[−t,0]}(u) du
≤ (∫_ℝ ‖Φ(u + t, ω) − Φ(t, ω)‖_{L_2(K_Q,H)}^2 ψ_n(u) 1_{[−t,0]}(u) du)^{1/2}
We note that, again by the continuity of the shift operator in L^2([0, T], L_2(K_Q, H)),

∫_0^T 1_{[−t,0]}(v/n) ‖Φ(v/n + t, ω) − Φ(t, ω)‖_{L_2(K_Q,H)}^2 dt ≤ ∫_0^T ‖Φ(v/n + t, ω) − Φ(t, ω)‖_{L_2(K_Q,H)}^2 dt → 0,

and the left-hand side is bounded by C‖Φ(·, ω)‖_{L^2([0,T],L_2(K_Q,H))}^2. This proves that
r_n(ω) = ∫_0^T ‖Φ_n(t, ω) − Φ(t, ω)‖_{L_2(K_Q,H)}^2 dt → 0

E ∫_0^T ‖Φ_n(t, ω) − Φ(t, ω)‖_{L_2(K_Q,H)}^2 dt = E r_n(ω) → 0
We will need the following lemma when using the Yosida approximation of an
unbounded operator.
lim_{n→∞} sup_{t≤T} E ∫_0^t ‖(e^{(t−s)A_n} − S(t − s))Φ(s)‖_{L_2(K_Q,H)}^{2p} ds = 0. (2.18)
‖T_n L − T L‖_{L_2(K_Q,H)}^2 = Σ_{j=1}^∞ ‖(T_n − T)LQ^{1/2} f_j‖_H^2
≤ ‖T_n − T‖_{L(H)}^2 Σ_{j=1}^∞ ‖LQ^{1/2} f_j‖_H^2
≤ 4C^2 Σ_{j=1}^∞ ‖LQ^{1/2} f_j‖_H^2 = 4C^2 ‖L‖_{L_2(K_Q,H)}^2.

The terms ‖(T_n − T)LQ^{1/2} f_j‖_H^2 converge to zero as n → ∞ and are bounded by 4C^2 ‖LQ^{1/2} f_j‖_H^2, so that (a) follows by the Lebesgue DCT with respect to the counting measure δ_j.
(b) We will use two facts about the semigroup S_n(t) = e^{tA_n}. By Proposition 1.2, we have

sup_n ‖S_n(t)‖_{L(H)} ≤ Me^{2αt}, n > 2α,

= (4M^2 e^{4αT})^p E ∫_0^T ‖Φ(s)‖_{L_2(K_Q,H)}^{2p} ds < ∞.
The term

sup_{0≤t≤T} ‖(S_n(t − s) − S(t − s))Φ(s)Q^{1/2} f_j‖_H^2 1_{[0,t]}(s) → 0
We are ready to extend the definition of the Itô stochastic integral with respect to a Q-Wiener process to adapted stochastic processes Φ(s) satisfying the condition

E ∫_0^T ‖Φ(s)‖_{L_2(K_Q,H)}^2 ds < ∞,
for t ∈ [0, T].

The quadratic variation process of the stochastic integral process ∫_0^t Φ(s) dW_s and the increasing process related to ‖∫_0^t Φ(s) dW_s‖_H^2 are given by

⟨⟨∫_0^· Φ(s) dW_s⟩⟩_t = ∫_0^t (Φ(s)Q^{1/2})(Φ(s)Q^{1/2})^* ds

and

⟨∫_0^· Φ(s) dW_s⟩_t = ∫_0^t tr[(Φ(s)Q^{1/2})(Φ(s)Q^{1/2})^*] ds = ∫_0^t ‖Φ(s)‖_{L_2(K_Q,H)}^2 ds.
Proof We note that the stochastic integral process for a bounded elementary process in E(L(K, H)) is a continuous square-integrable martingale. Let the sequence of bounded elementary processes {Φ_n}_{n=1}^∞ ⊂ E(L(K, H)) approximate Φ ∈ Λ^2(K_Q, H). We can assume that Φ_1 = 0 and

‖Φ_{n+1} − Φ_n‖_{Λ^2(K_Q,H)} < 1/2^n. (2.20)
Then by Doob’s inequality, Theorem 2.2, we have
∞ t
t 1
P sup Φn+1 (s) dWs −
Φn (s) dWs > 2
t≤T 0 0 H n
n=1
∞
T 2
≤ n4 E
Φ n+1 (s) − Φ n (s) dW s
n=1 0 H
∞ T ∞ 4
n
= n4 E Φn+1 (s) − Φn (s)2 ds ≤ .
L2 (KQ ,H ) 2n
n=1 0 n=1
⟨∫_0^t Φ(s) dW_s, h⟩_H = Σ_{j=1}^∞ ∫_0^t λ_j^{1/2} ⟨Φ(s)f_j, h⟩_H dw_j(s),
⟨(Φ(s)Q^{1/2})(Φ(s)Q^{1/2})^* h, g⟩_H = Σ_{j=1}^∞ ⟨h, Φ(s)Q^{1/2} f_j⟩_H ⟨g, Φ(s)Q^{1/2} f_j⟩_H
= Σ_{j=1}^∞ λ_j ⟨h, Φ(s)f_j⟩_H ⟨g, Φ(s)f_j⟩_H.
Now, for 0 ≤ u ≤ t,

E(⟨∫_0^t Φ(s) dW_s, h⟩_H ⟨∫_0^t Φ(s) dW_s, g⟩_H − ⟨(∫_0^t (Φ(s)Q^{1/2})(Φ(s)Q^{1/2})^* ds)(h), g⟩_H | F_u)
= E(Σ_{i=1}^∞ ∫_0^t λ_i^{1/2} ⟨Φ(s)f_i, h⟩_H dw_i(s) Σ_{j=1}^∞ ∫_0^t λ_j^{1/2} ⟨Φ(s)f_j, g⟩_H dw_j(s) − Σ_{j=1}^∞ λ_j ∫_0^t ⟨h, Φ(s)f_j⟩_H ⟨g, Φ(s)f_j⟩_H ds | F_u)
= E(Σ_{j=1}^∞ λ_j (∫_0^t ⟨Φ(s)f_j, h⟩_H dw_j(s) ∫_0^t ⟨Φ(s)f_j, g⟩_H dw_j(s) − ∫_0^t ⟨h, Φ(s)f_j⟩_H ⟨g, Φ(s)f_j⟩_H ds) | F_u)
+ E(Σ_{i≠j} λ_i^{1/2} λ_j^{1/2} ∫_0^t ⟨Φ(s)f_i, h⟩_H dw_i(s) ∫_0^t ⟨Φ(s)f_j, g⟩_H dw_j(s) | F_u)
= Σ_{j=1}^∞ λ_j (∫_0^u ⟨Φ(s)f_j, h⟩_H dw_j(s) ∫_0^u ⟨Φ(s)f_j, g⟩_H dw_j(s) − ∫_0^u ⟨h, Φ(s)f_j⟩_H ⟨g, Φ(s)f_j⟩_H ds)
+ Σ_{i≠j} λ_i^{1/2} λ_j^{1/2} ∫_0^u ⟨Φ(s)f_i, h⟩_H dw_i(s) ∫_0^u ⟨Φ(s)f_j, g⟩_H dw_j(s)
= ⟨∫_0^u Φ(s) dW_s, h⟩_H ⟨∫_0^u Φ(s) dW_s, g⟩_H − ⟨(∫_0^u (Φ(s)Q^{1/2})(Φ(s)Q^{1/2})^* ds)(h), g⟩_H.
The formula for the increasing process follows from Lemma 2.1.
Exercise 2.10 Prove the following two properties of the stochastic integral process ∫_0^t Φ(s) dW_s for Φ ∈ Λ^2(K_Q, H):

P(sup_{0≤t≤T} ‖∫_0^t Φ(s, ω) dW_s‖_H > λ) ≤ (1/λ^2) E ∫_0^T ‖Φ(s, ω)‖_{L_2(K_Q,H)}^2 ds, (2.21)

E(sup_{0≤t≤T} ‖∫_0^t Φ(s, ω) dW_s‖_H^2) ≤ 4 E ∫_0^T ‖Φ(s)‖_{L_2(K_Q,H)}^2 ds. (2.22)
Remark 2.2 For Φ ∈ Λ^2(K_Q, H) such that Φ(s) ∈ L(K, H), the quadratic variation process of the stochastic integral process ∫_0^t Φ(s) dW_s and the increasing process related to ‖∫_0^t Φ(s) dW_s‖_H^2 simplify to

⟨⟨∫_0^· Φ(s) dW_s⟩⟩_t = ∫_0^t Φ(s)QΦ(s)^* ds

and

⟨∫_0^· Φ(s) dW_s⟩_t = ∫_0^t tr[Φ(s)QΦ(s)^*] ds.
The final step in constructing the Itô stochastic integral is to extend it to the
class of integrands satisfying a less restrictive assumption on their second moments.
This extension is necessary if one wants to study Itô’s formula even for functions as
simple as x ↦ ‖x‖^2. We use the approach presented in [49] for real-valued processes.
In this chapter, we will only need the concept of a real-valued progressively mea-
surable process, but in Chap. 4 we will have to refer to H -valued progressively
measurable processes. Therefore we include a more general definition here.
It is well known (e.g., see Proposition 1.13 in [38]) that an adapted right-
continuous (or left-continuous) process is progressively measurable.
Lemma 2.3 Let Φ ∈ P(K_Q, H). Then there exists a sequence of bounded processes Φ_n ∈ E(L(K, H)) ⊂ Λ^2(K_Q, H) such that

∫_0^T ‖Φ(t, ω) − Φ_n(t, ω)‖_{L_2(K_Q,H)}^2 dt → 0 as n → ∞ (2.24)
E ∫_0^T ‖Φ_n(t, ω) − Φ_{n,k}(t, ω)‖_{L_2(K_Q,H)}^2 dt → 0 as n → ∞.
Then

P(∫_0^T ‖Φ(t, ω) − Φ_{n,k}(t, ω)‖_{L_2(K_Q,H)}^2 dt > ε)
≤ P(2 ∫_0^T ‖Φ(t, ω) − Φ_n(t, ω)‖_{L_2(K_Q,H)}^2 dt > 0) + P(2 ∫_0^T ‖Φ_n(t, ω) − Φ_{n,k}(t, ω)‖_{L_2(K_Q,H)}^2 dt > ε)
≤ P(∫_0^T ‖Φ(t, ω) − Φ_n(t, ω)‖_{L_2(K_Q,H)}^2 dt > 0) + P(∫_0^T ‖Φ_n(t, ω) − Φ_{n,k}(t, ω)‖_{L_2(K_Q,H)}^2 dt > ε/2)
≤ P(∫_0^T ‖Φ(t, ω)‖_{L_2(K_Q,H)}^2 dt > n) + (2/ε) E ∫_0^T ‖Φ_n(t, ω) − Φ_{n,k}(t, ω)‖_{L_2(K_Q,H)}^2 dt,

which proves the convergence in probability in (2.24) and P-a.s. convergence for a subsequence.
\[
\Psi(t,\omega) = \psi(\omega)1_{\{0\}}(t) + \sum_{j=0}^{n-1} \psi_j(\omega)1_{(t_j,t_{j+1}]}(t), \tag{2.27}
\]
Lemma 2.5 Let $\Phi \in \Lambda_2(K_Q, H)$. Then for arbitrary $\delta > 0$ and $n > 0$,
\[
P\Big(\sup_{t\le T}\Big\|\int_0^t \Phi(s)\,dW_s\Big\|_H > \delta\Big)
\le \frac{n}{\delta^2} + P\Big(\int_0^T \big\|\Phi(s)\big\|^2_{L_2(K_Q,H)}\,ds > n\Big). \tag{2.29}
\]
Remark 2.3 We note that for $\Phi \in \mathcal{P}(K_Q,H)$, the stochastic integral process $\int_0^t \Phi(s)\,dW_s$ also has a continuous version. Indeed, let $\Omega_n = \{\omega : n-1 \le \int_0^T \|\Phi(s)\|^2_{L_2(K_Q,H)}\,ds < n\}$; then $P(\Omega - \bigcup_{n=1}^\infty \Omega_n) = 0$, and if the $\Phi_n$ are defined as in (2.26), then on $\Omega_n$,
The stochastic integral process for Φ ∈ P(KQ , H ) may not be a martingale, but
it is a local martingale. We will now discuss this property.
Definition 2.13 A stochastic process {Mt }t≤T , adapted to a filtration Ft , with val-
ues in a separable Hilbert space H is called a local martingale if there exists a
sequence of increasing stopping times τn , with P (limn→∞ τn = T ) = 1, such that
for every n, Mt∧τn is a uniformly integrable martingale.
Exercise 2.13 Show an example of $\Phi \in \mathcal{P}(K_Q,H)$ such that $\int_0^t \Phi(s)\,dW_s$ is not a martingale.
Lemma 2.7 Let $\Phi \in \mathcal{P}(K_Q,H)$, and let $\tau$ be a stopping time relative to $\{\mathcal{F}_t\}_{0\le t\le T}$ such that $P(\tau \le T) = 1$. Define
\[
\int_0^\tau \Phi(t)\,dW_t = \int_0^u \Phi(t)\,dW_t \quad \text{on the set } \{\omega : \tau(\omega) = u\}.
\]
Then
\[
\int_0^\tau \Phi(t)\,dW_t = \int_0^T \Phi(t)1_{\{t\le\tau\}}\,dW_t. \tag{2.30}
\]
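Identity (2.30) is transparent in a discretized one-dimensional setting with $\Phi \equiv 1$: stopping the integral at $\tau$ and integrating the truncated integrand over $[0,T]$ give the same random variable path by path. A sketch (hitting-time stopping rule; all parameters illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)
T, n_steps, n_paths = 1.0, 100, 2000
dt = T / n_steps
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
W = np.cumsum(dW, axis=1)

# stopping time tau = first time W exceeds 0.5, capped at T (Phi = 1 here)
hit = np.argmax(W > 0.5, axis=1)
hit[np.all(W <= 0.5, axis=1)] = n_steps - 1   # never hit -> tau = T

# left side of (2.30): the integral evaluated at tau
stopped = W[np.arange(n_paths), hit]

# right side: integrate the truncated integrand 1_{t <= tau} over all of [0, T]
mask = np.arange(n_steps)[None, :] <= hit[:, None]
truncated = np.sum(dW * mask, axis=1)
assert np.allclose(stopped, truncated)
```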
Proof For bounded elementary processes $\Phi_n \in \mathcal{E}(L(K,H))$, equality (2.30) can be verified by inspection, so that $\int_0^\tau \Phi_n(t)\,dW_t = \int_0^T \Phi_n(s)1_{\{s\le\tau\}}\,dW_s$. On the set $\{\omega : \tau(\omega) = u\}$, we have $\int_0^\tau \Phi_n(t)\,dW_t = \int_0^u \Phi_n(t)\,dW_t$. Also, for every $u \le T$,
\[
\int_0^u \Phi_n(t)\,dW_t \to \int_0^u \Phi(t)\,dW_t
\]
in probability. Thus, for every $u \le T$,
\[
1_{\{\tau=u\}}\int_0^T \Phi_n(t)1_{\{t\le\tau\}}\,dW_t \to 1_{\{\tau=u\}}\int_0^u \Phi(t)\,dW_t,
\]
and we conclude that
\[
\int_0^T \Phi_n(t)1_{\{t\le\tau\}}\,dW_t \to \int_0^T \Phi(t)1_{\{t\le\tau\}}\,dW_t
\]
in probability.
martingale with
\[
E\Big\|\int_0^{t\wedge\tau_n} \Phi(s)\,dW_s\Big\|^2_H = E\int_0^T \big\|\Phi(s)\big\|^2_{L_2(K_Q,H)}1_{\{s\le\tau_n\}}\,ds \le n.
\]
This proves that for every $n$, the process $\int_0^{t\wedge\tau_n} \Phi(s)\,dW_s$ is a square-integrable and hence a uniformly integrable martingale.
We now proceed with the definition of the stochastic integral with respect to a cylindrical Wiener process. We restrict ourselves to the case where the integrand $\Phi(s)$ is a process taking values in $L_2(K,H)$, following the work in [18]. A more general approach can be found in [57] and [19].

We recall that if $\Phi(s)$ is an elementary process, $\Phi(s) \in \mathcal{E}(L(K,H))$, then $\Phi(s) \in L_2(K,H)$, since $Q = I_K$. Assume that $\Phi(s)$ is bounded in the norm of $L_2(K,H)$. Using (2.14), with $\{e_i\}_{i=1}^\infty$ an ONB in $H$, we calculate
\[
E\sum_{i=1}^\infty \Big(\int_0^t \Phi(s)\,d\widetilde W_s(e_i)\Big)^2
= \sum_{i=1}^\infty E\int_0^t \big\|\Phi^*(s)e_i\big\|^2_K\,ds
= E\int_0^t \sum_{i=1}^\infty \big\|\Phi^*(s)e_i\big\|^2_K\,ds
= E\int_0^t \big\|\Phi^*(s)\big\|^2_{L_2(H,K)}\,ds
= E\int_0^t \big\|\Phi(s)\big\|^2_{L_2(K,H)}\,ds.
\]
Then we define the stochastic integral $\int_0^t \Phi(s)\,d\widetilde W_s$ of a bounded elementary process $\Phi(s)$ as follows:
\[
\int_0^t \Phi(s)\,d\widetilde W_s = \sum_{i=1}^\infty \Big(\int_0^t \Phi(s)\,d\widetilde W_s(e_i)\Big) e_i. \tag{2.31}
\]
By the above calculations, $\int_0^t \Phi(s)\,d\widetilde W_s \in L^2(\Omega,H)$ and is adapted to the filtration $\mathcal{F}_t$. The equality
\[
\Big\|\int_0^T \Phi(s)\,d\widetilde W_s\Big\|_{L^2(\Omega,H)} = \|\Phi\|_{\Lambda_2(K,H)} \tag{2.32}
\]
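In finite dimensions the isometry (2.32) can be checked numerically: for $Q = I_K$ the coordinates of $\widetilde W$ are independent standard Brownian motions. A Monte Carlo sketch (constant matrix integrand; all sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
K_dim, H_dim = 3, 2
T, n_steps, n_paths = 1.0, 100, 20000
dt = T / n_steps
Phi = rng.normal(size=(H_dim, K_dim))   # constant stand-in for an elementary process

# cylindrical case Q = I_K: independent standard Brownian increments in each coordinate
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps, K_dim))
integral = np.einsum('hk,ptk->ph', Phi, dW)   # value of int_0^T Phi dW~ per path

lhs = np.mean(np.sum(integral**2, axis=1))    # E || int Phi dW~ ||_H^2
rhs = T * np.linalg.norm(Phi, 'fro')**2       # int_0^T ||Phi||_{L2(K,H)}^2 ds
assert abs(lhs - rhs) / rhs < 0.1
```

The empirical second moment matches the Hilbert–Schmidt norm of the integrand up to Monte Carlo error.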
Similarly as in the case of the stochastic integral with respect to a Q-Wiener pro-
cess, we now conclude the construction of the integral with respect to a cylindrical
Wiener process.
Remark 2.4 Since for $\Phi \in \Lambda_2(K,H)$ the process $\int_0^t \Phi(s)\,d\widetilde W_s \in \mathcal{M}_T^2(H)$, the conclusion of Lemma 2.5 holds in the cylindrical case.

Define $\mathcal{P}(K,H) = \mathcal{P}(K_Q,H)$ with $Q = I_K$. We can construct the cylindrical stochastic integral $\int_0^T \Phi(t)\,d\widetilde W_t$ for processes $\Phi \in \mathcal{P}(K,H)$ by repeating the arguments in Lemma 2.6.
Exercise 2.15 (a) Prove (2.30) in the cylindrical case, i.e., show that for a process $\Phi \in \mathcal{P}(K,H)$ and a stopping time $\tau$ relative to $\{\mathcal{F}_t\}_{0\le t\le T}$ such that $P(\tau \le T) = 1$,
\[
\int_0^\tau \Phi(t)\,d\widetilde W_t = \int_0^T \Phi(t)1_{\{t\le\tau\}}\,d\widetilde W_t, \tag{2.33}
\]
where
\[
\int_0^\tau \Phi(t)\,d\widetilde W_t = \int_0^u \Phi(t)\,d\widetilde W_t \quad \text{on the set } \{\omega : \tau(\omega) = u\}.
\]
(b) Show that the stochastic integral $\int_0^t \Phi(s)\,d\widetilde W_s$, $0 \le t \le T$, is a local martingale and that it has a continuous version.
\[
\int_0^t \Phi(s)\,dW_s = \sum_{j=1}^\infty \int_0^t \Phi(s)\big(\lambda_j^{1/2}f_j\big)\,d\big\langle W_s, \lambda_j^{1/2}f_j\big\rangle_{K_Q}
= \sum_{j=1}^\infty \int_0^t \Phi(s)f_j\,d\langle W_s, f_j\rangle_K. \tag{2.34}
\]
Proof We will prove (2.35) in the cylindrical case only, since the proof for a Q-Wiener process is nearly identical.

We first note that
\[
E\Big\|\sum_{j=1}^\infty \int_0^t \big(\Phi(s)f_j\big)\,d\widetilde W_s(f_j)\Big\|^2_H
= \sum_{i=1}^\infty E\Big(\sum_{j=1}^\infty \Big\langle \int_0^t \big(\Phi(s)f_j\big)\,d\widetilde W_s(f_j), e_i\Big\rangle_H\Big)^2
= \sum_{i=1}^\infty \sum_{j=1}^\infty E\int_0^t \big\langle \Phi(s)f_j, e_i\big\rangle^2_H\,ds
= E\int_0^t \big\|\Phi(s)\big\|^2_{L_2(K,H)}\,ds < \infty.
\]
Thus, $\sum_{j=1}^\infty \int_0^t (\Phi(s)f_j)\,d\widetilde W_s(f_j) \in H$, $P$-a.s. For a bounded elementary process $\Phi(s) = 1_{\{0\}}\varphi + \sum_{k=1}^{n-1} \varphi_k 1_{(t_k,t_{k+1}]}(s) \in \mathcal{E}(L(K,H))$ and any $h \in H$, we have, $P$-a.s.,
\[
\Big\langle \int_0^t \Phi(s)\,d\widetilde W_s, h\Big\rangle_H
= \sum_{k=1}^{n-1}\big(\widetilde W_{t_{k+1}}\big(\varphi_k^*(h)\big) - \widetilde W_{t_k}\big(\varphi_k^*(h)\big)\big)
\]
\[
= \sum_{k=1}^{n-1}\sum_{j=1}^\infty \big(\widetilde W_{t_{k+1}}(f_j) - \widetilde W_{t_k}(f_j)\big)\big\langle f_j, \varphi_k^*(h)\big\rangle_K
\]
\[
= \sum_{j=1}^\infty \sum_{k=1}^{n-1}\big\langle \varphi_k(f_j)\big(\widetilde W_{t_{k+1}}(f_j) - \widetilde W_{t_k}(f_j)\big), h\big\rangle_H
\]
\[
= \sum_{j=1}^\infty \Big\langle \sum_{k=1}^{n-1}\varphi_k(f_j)\big(\widetilde W_{t_{k+1}}(f_j) - \widetilde W_{t_k}(f_j)\big), h\Big\rangle_H
\]
\[
= \sum_{j=1}^\infty \Big\langle \int_0^t \big(\Phi(s)f_j\big)\,d\widetilde W_s(f_j), h\Big\rangle_H,
\]
Let $P^\perp_{m+1}$ denote the orthogonal projection onto $\mathrm{span}\{f_{m+1}, f_{m+2}, \ldots\}$. Now, for a general $\Phi \in \Lambda_2(K,H)$, we have for an approximating sequence $\Phi_n(s) \in \mathcal{E}(L(K,H))$, using (2.35) for elementary processes,
\[
E\Big\|\int_0^t \Phi(s)\,d\widetilde W_s - \sum_{j=1}^m \int_0^t \big(\Phi(s)f_j\big)\,d\widetilde W_s(f_j)\Big\|^2_H
\]
\[
= \lim_{n\to\infty} E\Big\|\int_0^t \Phi(s)\,d\widetilde W_s - \int_0^t \Phi_n(s)\,d\widetilde W_s + \int_0^t \Phi_n(s)\,d\widetilde W_s
- \sum_{j=1}^m \int_0^t \big(\Phi(s)f_j\big)\,d\widetilde W_s(f_j)\Big\|^2_H
\]
\[
= \lim_{n\to\infty} E\Big\|\sum_{j=m+1}^\infty \int_0^t \big(\Phi_n(s)f_j\big)\,d\widetilde W_s(f_j)
+ \sum_{j=1}^m \int_0^t \big(\Phi_n(s)f_j\big)\,d\widetilde W_s(f_j)
- \sum_{j=1}^m \int_0^t \big(\Phi(s)f_j\big)\,d\widetilde W_s(f_j)\Big\|^2_H
\]
\[
= \lim_{n\to\infty} E\Big\|\sum_{j=m+1}^\infty \int_0^t \big(\Phi_n(s)f_j\big)\,d\widetilde W_s(f_j)\Big\|^2_H
= \lim_{n\to\infty} E\Big\|\sum_{j=1}^\infty \int_0^t \big(\Phi_n(s)P^\perp_{m+1}f_j\big)\,d\widetilde W_s(f_j)\Big\|^2_H
\]
\[
= \lim_{n\to\infty} E\Big\|\int_0^t \Phi_n(s)P^\perp_{m+1}\,d\widetilde W_s\Big\|^2_H
= E\Big\|\int_0^t \Phi(s)P^\perp_{m+1}\,d\widetilde W_s\Big\|^2_H
= E\int_0^t \big\|\Phi(s)P^\perp_{m+1}\big\|^2_{L_2(K,H)}\,ds
\]
\[
= E\int_0^t \sum_{j=m+1}^\infty \big\|\Phi(s)f_j\big\|^2_H\,ds \to 0 \quad \text{as } m \to \infty,
\]
where we have used the fact that $\Phi_n P^\perp_{m+1} \to \Phi P^\perp_{m+1}$ in $\Lambda_2(K,H)$ as $n \to \infty$. This concludes the proof.
where $\{f_j\}_{j=1}^\infty$ is an ONB in a separable Hilbert space $K$, $\{w_j(t)\}_{j=1}^\infty$ are independent Brownian motions, and the $\{\lambda_j\}_{j=1}^\infty$ are summable and assumed to be strictly positive without loss of generality. Let us denote
\[
\mathcal{F}_t^j = \sigma\big(w_j(s) : s \le t\big), \qquad \mathcal{G}_t = \sigma\Big(\bigcup_{j=1}^\infty \mathcal{F}_t^j\Big),
\]
and
\[
\mathcal{F}_t^W = \sigma\big(W_s(k) : k \in K,\ s \le t\big).
\]
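The expansion $W_t = \sum_j \lambda_j^{1/2} w_j(t) f_j$ above also suggests the standard way of simulating a Q-Wiener process: truncate the series. A sketch (diagonal $Q$ with geometrically decaying, strictly positive eigenvalues; all sizes are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(3)
n_modes, T, n_steps, n_paths = 20, 1.0, 50, 4000
dt = T / n_steps
lam = 1.0 / 2.0 ** np.arange(n_modes)   # summable, strictly positive eigenvalues

# independent scalar Brownian motions w_j
dw = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps, n_modes))
w = np.cumsum(dw, axis=1)

# truncated expansion W_t = sum_j sqrt(lam_j) w_j(t) f_j, with f_j the
# standard basis, so the coordinates of W_t are sqrt(lam_j) w_j(t)
W = np.sqrt(lam) * w

# sanity check: E ||W_T||_K^2 = T * tr(Q) (up to truncation and sampling error)
emp = np.mean(np.sum(W[:, -1, :] ** 2, axis=1))
assert abs(emp - T * lam.sum()) / (T * lam.sum()) < 0.1
```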
is dense in $L^2(\Omega, \mathcal{F}_T^j, P)$ ([61], Lemma 4.3.2), we deduce that the linear span
\[
\mathrm{span}\Big\{ e^{\int_0^T h(t)\,dw_j(t) - \frac{1}{2}\int_0^T h^2(t)\,dt} : h \in L^2\big([0,T],\mathbb{R}\big),\ j = 1, 2, \ldots \Big\}
\]
where $\sum_{j=1}^\infty \lambda_j E\int_0^T \phi_j^2(s,\omega)\,ds < \infty$.
\[
\langle M_t, e_i\rangle_H = E\langle M_0, e_i\rangle_H + \sum_{j=1}^\infty \lambda_j^{1/2}\int_0^t \phi_j^i(s,\omega)\,dw_j(s).
\]
Since $E\sum_{i=1}^\infty \langle M_t, e_i\rangle_H^2 < \infty$, we have
\[
M_t = \sum_{i=1}^\infty \langle M_t, e_i\rangle_H\, e_i.
\]
Therefore,
\[
M_t = \sum_{i=1}^\infty E\langle M_0, e_i\rangle_H\, e_i + \sum_{i=1}^\infty \sum_{j=1}^\infty \lambda_j^{1/2}\int_0^t \phi_j^i(s,\omega)e_i\,dw_j(s). \tag{2.37}
\]
Under the assumptions on $M_t$, we obtain that $E\|M_0\|_H < \infty$, so that the first term is equal to $EM_0$. Using the assumptions on $M_t$ and the representations (2.36) and (2.37) above, we obtain
\[
E\sum_{i=1}^\infty \Big(\sum_{j=1}^\infty \lambda_j^{1/2}\int_0^t \phi_j^i(s,\omega)\,dw_j(s)\Big)^2
= \sum_{j=1}^\infty \lambda_j \sum_{i=1}^\infty E\int_0^t \big(\phi_j^i(s,\omega)\big)^2\,ds.
\]
Define, for $k \in K_Q$ and $h \in H$,
\[
\big\langle \Phi(s,\omega)k, h\big\rangle_H = \sum_{j=1}^\infty \sum_{i=1}^\infty \lambda_j \langle h, e_i\rangle_H \langle k, f_j\rangle_{K_Q}\,\phi_j^i(s,\omega).
\]
Then $\Phi(s,\omega) \in \Lambda_2(K_Q,H)$, and, by the definition of the stochastic integral, the second term in (2.38) is equal to
\[
\int_0^t \Phi(s,\omega)\,dW(s).
\]
Remark 2.5 If $EM_0 = 0$, then, by Theorem 2.3, the quadratic variation process corresponding to $M_t$ and the increasing process related to $\|M_t\|^2_H$ are given by (see [57])
\[
\langle\!\langle M\rangle\!\rangle_t = \int_0^t \big(\Phi(s)Q^{1/2}\big)\big(\Phi(s)Q^{1/2}\big)^*\,ds,
\]
\[
\langle M\rangle_t = \int_0^t \big\|\Phi(s)\big\|^2_{L_2(K_Q,H)}\,ds = \int_0^t \mathrm{tr}\Big(\big(\Phi(s)Q^{1/2}\big)\big(\Phi(s)Q^{1/2}\big)^*\Big)\,ds. \tag{2.39}
\]
We shall now prove the converse. We need the following two results.

with respect to $M_t$ exactly as we did for the case of a Wiener process. The integrands are $\mathcal{F}_t$-adapted processes $\Psi(t,\omega)$ with values in linear, but not necessarily bounded, operators from $H$ to a separable Hilbert space $G$ satisfying the condition
\[
E\int_0^T \mathrm{tr}\Big(\big(\Psi(s)Q_M^{1/2}(s)\big)\big(\Psi(s)Q_M^{1/2}(s)\big)^*\Big)\,ds < \infty. \tag{2.41}
\]
The stochastic integral process $N_t \in \mathcal{M}_T^2(G)$, and its quadratic variation is given by
\[
\langle\!\langle N\rangle\!\rangle_t = \int_0^t \big(\Psi(s)Q_M^{1/2}(s)\big)\big(\Psi(s)Q_M^{1/2}(s)\big)^*\,ds. \tag{2.42}
\]
Exercise 2.17 Reconcile formulas (2.42) and (2.44). Hint: use Lemma 2.10 to show that if $L \in L(H)$, then $(LL^*)^{1/2} = LJ$, where $J$ is a partial isometry on $(\ker L)^\perp$.
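The identity in the hint can be checked in finite dimensions via the singular value decomposition, from which the polar decomposition and a suitable partial isometry $J$ are explicit. A sketch (real $4\times 4$ example; the choice $J = W^{\mathsf T}$, the adjoint of the polar partial isometry, is one concrete realization of the abstract statement):

```python
import numpy as np

rng = np.random.default_rng(4)
L = rng.normal(size=(4, 4))

# polar decomposition L = W |L| via the SVD L = U S V^T:
# W = U V^T is the partial isometry, |L| = V S V^T = (L^T L)^{1/2}
U, S, Vt = np.linalg.svd(L)
W = U @ Vt

# the hint's identity: (L L^T)^{1/2} = L J with J = W^T
J = W.T
sqrt_LLt = U @ np.diag(S) @ U.T     # (L L^T)^{1/2} from the SVD
assert np.allclose(sqrt_LLt, L @ J)
```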
Exercise 2.18 Provide the details of the construction of the stochastic integral (2.40) with respect to square-integrable martingales for the class of stochastic processes satisfying condition (2.41). Prove property (2.42). Show (2.43) for $M_t = \int_0^t \Phi(s)\,dW_s$.
Proof To simplify the notation, denote $\Psi(s,\omega) = \Phi(s,\omega)Q^{1/2}$. We shall prove that if $M_t$ is an $H$-valued continuous $\mathcal{F}_t$-martingale with the quadratic variation process
\[
\langle\!\langle M\rangle\!\rangle_t = \int_0^t \Psi(s,\omega)\Psi^*(s,\omega)\,ds
\]
such that $E\int_0^T \mathrm{tr}(\Psi(s)\Psi^*(s))\,ds < \infty$, then $M_t = \int_0^t \Phi(s,\omega)\,dW_s$, where $W_t$ is a Q-Wiener process.

Since $\langle\!\langle M\rangle\!\rangle_t = \int_0^t \Psi(s,\omega)\Psi^*(s,\omega)\,ds$, the space $\mathrm{Im}(\Psi(s,\omega)\Psi^*(s,\omega))$ will play
Then
\[
V^*(s,\omega)h = \sum_{n=1}^\infty \big\langle g_n(s,\omega), h\big\rangle_H f_n,
\]
is an orthogonal projection on $\mathrm{Im}(\Psi(s,\omega)\Psi^*(s,\omega))$. Thus we can write
\[
M_t = \int_0^t \big(\Pi(s) + \Pi^\perp(s)\big)\,dM_s
= \int_0^t \Pi(s)\,dM_s + \int_0^t \Pi^\perp(s)\,dM_s = M_t^1 + M_t^2.
\]
But $M_0^2 = 0$ and $\langle\!\langle M^2\rangle\!\rangle_t = \int_0^t \Pi^\perp(s)\Psi(s)\Psi^*(s)\Pi^\perp(s)\,ds = 0$, so that $M_t^2 = 0$. In conclusion,
\[
M_t = M_t^1 + M_t^2 = \int_0^t V(s)V^*(s)\,dM_s = \int_0^t V(s)\,dN_s
\]
with
\[
N_t = \int_0^t V^*(s,\omega)\,dM_s,
\]
a continuous $K$-valued square-integrable martingale whose quadratic variation is given by
\[
\langle\!\langle N\rangle\!\rangle_t = \int_0^t V^*(s,\omega)\Psi(s,\omega)\Psi^*(s,\omega)V(s,\omega)\,ds.
\]
We now define
\[
\Lambda(s,\omega)k = \big(V^*(s,\omega)\Psi(s,\omega)\Psi^*(s,\omega)V(s,\omega)\big)k = \sum_{n=1}^\infty \lambda_n(s,\omega)\langle k, f_n\rangle_K f_n,
\]
Then it is easy to verify that $\langle \hat w_n, \hat w_m\rangle_t = t\delta_{n,m}$, using the mutual independence of $\eta_n$ and $\beta_m$ and the fact that, for $n \ne m$,
\[
\langle \eta_n, \eta_m\rangle_t = \big\langle \langle\!\langle N\rangle\!\rangle_t f_n, f_m\big\rangle_K = 0.
\]
Thus, by Lévy's theorem, the $\hat w_n(t)$ are independent Brownian motions, and the expression
\[
\widehat W_t(k) = \sum_{n=1}^\infty \hat w_n(t)\langle f_n, k\rangle_K
\]
defines a $K$-valued cylindrical Wiener process.
Since
\[
\int_0^t \lambda_n^{1/2}(s)\,d\hat w_n(s) = \int_0^t \lambda_n^{1/2}(s)\delta_n(s)\,d\eta_n(s) = \eta_n(t),
\]
we get, using, for example, (2.35), that
\[
N_t = \int_0^t \Lambda^{1/2}(s)\,d\widehat W_s.
\]
Thus we arrive at
\[
M_t = \int_0^t V(s)\,dN_s = \int_0^t V(s,\omega)\Lambda^{1/2}(s,\omega)\,d\widehat W_s.
\]
All that is needed now is a modification of the integrand $\hat\Psi(s) = V(s)\Lambda^{1/2}(s)$ to the desired form $\Psi(s)$; the cylindrical Wiener process $\widehat W_t$ then needs to be replaced with a Q-Wiener process.
Now we need the following general fact from operator theory (refer to [11], Appendix B).
\[
AA^* = BB^*.
\]
where $A^{-1} : \mathcal{R}(A) \to \mathcal{D}(A)$ is the pseudo-inverse operator, i.e., $A^{-1}h$ is defined as the element $g$ of minimal norm such that $Ag = h$. Then
\[
B = AJ,
\]
It follows from the formula defining the operator $J$ that there exists an $\mathcal{F}_t$-adapted process $J(t) : (\ker \hat\Psi(t))^\perp \to (\ker \Psi(t))^\perp$ such that, and such that, $J(t)J^*(t)$ is an orthogonal projection onto $(\ker \Psi(t))^\perp$. We need another filtered probability space $(\Omega', \mathcal{F}', \{\mathcal{F}'_t\}_{t\le T}, P')$ and a cylindrical Wiener process $\widehat W'_t$, and we extend all processes trivially to the product filtered probability space
\[
\big(\Omega \times \Omega' \times \Omega'',\ \mathcal{F} \times \mathcal{F}' \times \mathcal{F}'',\ \mathcal{F}_t \times \mathcal{F}'_t \times \mathcal{F}''_t,\ P \times P' \times P''\big),
\]
with $K(s) = (J(s)J^*(s))^\perp$, the projection onto $\ker \Psi(s)$; then, using Lévy's theorem (Theorem 2.6), $W_t$ is a Q-Wiener process, since by Theorem 2.4
\[
\langle\!\langle W\rangle\!\rangle_t = \int_0^t \big(Q^{1/2}J(s)J^*(s)Q^{1/2} + Q^{1/2}K(s)Q^{1/2}\big)\,ds = tQ.
\]
Also
\[
\int_0^t \Phi(s)\,dW_s = \int_0^t \Psi(s)J(s)\,d\widehat W_s + \int_0^t \Psi(s)K(s)\,d\widehat W'_s
= \int_0^t \hat\Psi(s)\,d\widehat W_s = M_t.
\]
We can now prove the following martingale representation theorem in the cylin-
drical case.
The stochastic version of the Fubini theorem helps calculate deterministic integrals of an integrand that is itself a stochastic integral process. In the literature, this theorem is presented for predictable processes, but there is no need for this restriction if the stochastic integral is relative to a Wiener process.
In order to prove (1)–(3) for an unbounded $\Phi \in L_1([0,T] \times \Omega \times G)$ with $\Phi(\cdot,\cdot,x) \in \Lambda_2(K_Q,H)$ $\mu$-a.e., we only need to know that (1)–(3) hold for a $\|\cdot\|_{L_2(K_Q,H)}$-norm bounded sequence $\Phi_n$ such that $|||\Phi_n - \Phi||| \to 0$. We define an appropriate sequence by
\[
\Phi_n(t,\omega,x) =
\begin{cases}
\dfrac{n\,\Phi(t,\omega,x)}{\|\Phi(t,\omega,x)\|_{L_2(K_Q,H)}} & \text{if } \|\Phi(t,\omega,x)\|_{L_2(K_Q,H)} > n,\\[2mm]
\Phi(t,\omega,x) & \text{otherwise.}
\end{cases}
\]
By the Lebesgue DCT relative to the $\|\cdot\|_{L_2(K_Q,H)}$-norm, we have that, $\mu$-a.e.,
\[
\big\|\Phi_n(\cdot,\cdot,x) - \Phi(\cdot,\cdot,x)\big\|_{\Lambda_2(K_Q,H)} \to 0,
\]
\[
\cdots \le |||\Phi_n - \Phi||| \to 0 \tag{2.48}
\]
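The truncation defining $\Phi_n$ clips the pointwise $L_2(K_Q,H)$-norm at level $n$ without changing the direction of $\Phi$. A finite-dimensional sketch (the Frobenius norm of a matrix is used as a stand-in for the Hilbert–Schmidt norm):

```python
import numpy as np

def truncate(Phi, n):
    """Clip the norm of Phi at level n, keeping the direction:
    Phi_n = n * Phi / ||Phi|| when ||Phi|| > n, and Phi otherwise."""
    norm = np.linalg.norm(Phi, 'fro')
    return Phi if norm <= n else (n / norm) * Phi

rng = np.random.default_rng(5)
Phi = rng.normal(size=(3, 3)) * 10.0
for n in (1.0, 5.0, 1e6):
    Phi_n = truncate(Phi, n)
    assert np.linalg.norm(Phi_n, 'fro') <= n + 1e-9

# the truncation converges to Phi pointwise as n grows (dominated convergence)
assert np.allclose(truncate(Phi, 1e6), Phi)
```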
and
\[
E\Big\|\int_0^T\!\!\int_G \Phi_n(t,\cdot,x)\,\mu(dx)\,dW_t - \int_0^T\!\!\int_G \Phi(t,\cdot,x)\,\mu(dx)\,dW_t\Big\|_H
\]
\[
\le \Big(E\Big\|\int_0^T\!\!\int_G \big(\Phi_n(t,\cdot,x) - \Phi(t,\cdot,x)\big)\,\mu(dx)\,dW_t\Big\|^2_H\Big)^{1/2}
\]
\[
\le \Big(E\int_0^T \Big\|\int_G \big(\Phi_n(t,\cdot,x) - \Phi(t,\cdot,x)\big)\,\mu(dx)\Big\|^2_{L_2(K_Q,H)}\,dt\Big)^{1/2}
\]
\[
= \Big\|\int_G \big(\Phi_n(\cdot,\cdot,x) - \Phi(\cdot,\cdot,x)\big)\,\mu(dx)\Big\|_{\Lambda_2(K_Q,H)}
\le \int_G \big\|\Phi_n(\cdot,\cdot,x) - \Phi(\cdot,\cdot,x)\big\|_{\Lambda_2(K_Q,H)}\,\mu(dx)
= |||\Phi_n - \Phi||| \to 0. \tag{2.49}
\]
Now, (3) follows for $\Phi$ from (2.48) and (2.49), since it is valid for $\Phi_n$.

(B) If $\Phi$ is bounded in the $\|\cdot\|_{L_2(K_Q,H)}$-norm, then it can be approximated in $|||\cdot|||$ by bounded elementary processes
\[
\Phi(t,\omega,x) = \Phi(0,\omega,x)1_{\{0\}}(t) + \sum_{j=1}^{n-1} \Phi_j(t_j,\omega,x)1_{(t_j,t_{j+1}]}(t), \tag{2.50}
\]
with $\Phi(t,\omega,x)$ square integrable with respect to $dt \otimes dP \otimes d\mu$, so that Proposition 2.2 gives the desired approximation.

Clearly, $\Phi_n(\cdot,\cdot,x)$ is $\{\mathcal{F}_t\}_{t\le T}$-adapted for any $x \in G$, and the stochastic integral $\int_0^T \Phi_n(t,\cdot,x)\,dW_t$ is $\mathcal{F}_T \otimes \mathcal{G}/\mathcal{B}(H)$-measurable.
Since for every $t \in [0,T]$ and $A \in L_2(K_Q,H)$,
\[
\Big\langle \int_G \Phi_n(t,\cdot,x)\,\mu(dx), A\Big\rangle_{L_2(K_Q,H)} = \int_G \big\langle \Phi_n(t,\cdot,x), A\big\rangle_{L_2(K_Q,H)}\,\mu(dx),
\]
the process $\int_G \Phi_n(t,\cdot,x)\,\mu(dx)$
Let us now discuss the cylindrical case. In the statement of Theorem 2.8 we can consider $\widetilde W_t$, a cylindrical Wiener process, and the stochastic integral $\int_0^t \Phi(s)\,d\widetilde W_s$. Definitions 2.10 and 2.14 differ only by the choice of $Q$ being either a trace-class operator or $Q = I_K$, but in both cases the integrands are in $\Lambda_2(K_Q,H)$. Both stochastic integrals are isometries by either (2.19) or (2.32). We therefore have the following conclusion.
Corollary 2.3 Under the assumptions of Theorem 2.8, with condition (2.45) replaced with
\[
|||\Phi|||_1 = \int_G \big\|\Phi(\cdot,\cdot,x)\big\|_{\Lambda_2(K,H)}\,\mu(dx) < \infty, \tag{2.51}
\]
conclusions (1)–(3) of the stochastic Fubini theorem hold for the stochastic integral $\int_0^T \Phi(t,\cdot,\cdot)\,d\widetilde W_t$ with respect to a standard cylindrical Wiener process $\{\widetilde W_t\}_{t\ge 0}$.
Hence, if $\Phi(s) \in \mathcal{P}(K_Q,H)$ and $\Psi(s) \in H$ are $\mathcal{F}_t$-adapted processes, then the process $\Phi^*(s)\Psi(s)$ defined by
\[
\big(\Phi^*(s)\Psi(s)\big)(k) = \big\langle \Psi(s), \Phi(s)(k)\big\rangle_H
\]
has values in $L_2(K_Q,\mathbb{R})$. If, in addition, $P$-a.s., $\Psi(s)$ is bounded as a function of $s$, then
\[
P\Big(\int_0^T \big\|\Phi^*(s)\Psi(s)\big\|^2_{L_2(K_Q,\mathbb{R})}\,ds < \infty\Big) = 1,
\]
\[
\int_0^T \big\langle \Psi(s), \Phi(s)\,dW_s\big\rangle_H = \int_0^T \big(\Phi^*(s)\Psi(s)\big)\,dW_s.
\]
and $\Phi \in \mathcal{P}(K_Q,H)$.

Assume that a function $F : [0,T] \times H \to \mathbb{R}$ is such that $F$ is continuous and its Fréchet partial derivatives $F_t$, $F_x$, $F_{xx}$ are continuous and bounded on bounded subsets of $[0,T] \times H$. Then the following Itô formula holds:
\[
F\big(t, X(t)\big) = F\big(0, X(0)\big) + \int_0^t \big\langle F_x\big(s,X(s)\big), \Phi(s)\,dW_s\big\rangle_H
+ \int_0^t \Big[ F_t\big(s,X(s)\big) + \big\langle F_x\big(s,X(s)\big), \Psi(s)\big\rangle_H
+ \tfrac{1}{2}\,\mathrm{tr}\Big(F_{xx}\big(s,X(s)\big)\big(\Phi(s)Q^{1/2}\big)\big(\Phi(s)Q^{1/2}\big)^*\Big)\Big]\,ds \tag{2.53}
\]
$P$-a.s. for all $t \in [0,T]$.
Proof We will first show that the general statement can be reduced to the case of constant processes $\Psi(s) = \Psi$ and $\Phi(s) = \Phi$, $s \in [0,T]$. For a constant $C > 0$, define the stopping time
\[
\tau_C = \inf\Big\{ t \in [0,T] : \max\Big(\|X(t)\|_H,\ \int_0^t \|\Psi(s)\|_H\,ds,\ \int_0^t \|\Phi(s)\|^2_{L_2(K_Q,H)}\,ds\Big) \ge C \Big\}
\]
and
\[
E\int_0^T \big\|\Phi_C(s)\big\|^2_{L_2(K_Q,H)}\,ds < \infty,
\]
by Lemma 2.4 and Corollary 2.1 it follows that $\Psi_C$ and $\Phi_C$ can be approximated respectively by sequences of bounded elementary processes $\Psi_{C,n}$ and $\Phi_{C,n}$ for which, $P$-a.s., uniformly in $t \le T$,
\[
\int_0^t \big\|\Psi_{C,n}(s) - \Psi_C(s)\big\|_H\,ds \to 0
\]
and
\[
\Big\|\int_0^t \Phi_{C,n}(s)\,dW_s - \int_0^t \Phi_C(s)\,dW_s\Big\|_H \to 0.
\]
Define
\[
X_{C,n}(t) = X(0) + \int_0^t \Psi_{C,n}(s)\,ds + \int_0^t \Phi_{C,n}(s)\,dW_s.
\]
Then
\[
\sup_{t\le T}\big\|X_{C,n}(t) - X_C(t)\big\|_H \to 0
\]
with probability one. Assume that we have shown Itô's formula for the process $X_{C,n}(t)$, that is,
\[
F\big(t, X_{C,n}(t)\big) = F\big(0, X(0)\big) + \int_0^t \big\langle F_x\big(s,X_{C,n}(s)\big), \Phi_{C,n}(s)\,dW_s\big\rangle_H
+ \int_0^t \Big[ F_t\big(s,X_{C,n}(s)\big) + \big\langle F_x\big(s,X_{C,n}(s)\big), \Psi_{C,n}(s)\big\rangle_H
+ \tfrac{1}{2}\,\mathrm{tr}\Big(F_{xx}\big(s,X_{C,n}(s)\big)\big(\Phi_{C,n}(s)Q^{1/2}\big)\big(\Phi_{C,n}(s)Q^{1/2}\big)^*\Big)\Big]\,ds \tag{2.54}
\]
$P$-a.s. for all $t \in [0,T]$. Using the continuity of $F$ and the continuity and local boundedness of its partial derivatives, we will now conclude that
\[
F\big(t, X_C(t)\big) = F\big(0, X(0)\big) + \int_0^t \big\langle F_x\big(s,X_C(s)\big), \Phi_C(s)\,dW_s\big\rangle_H
+ \int_0^t \Big[ F_t\big(s,X_C(s)\big) + \big\langle F_x\big(s,X_C(s)\big), \Psi_C(s)\big\rangle_H
+ \tfrac{1}{2}\,\mathrm{tr}\Big(F_{xx}\big(s,X_C(s)\big)\big(\Phi_C(s)Q^{1/2}\big)\big(\Phi_C(s)Q^{1/2}\big)^*\Big)\Big]\,ds. \tag{2.55}
\]
The first integral converges to zero, since the first factor is an integrable process and the second factor converges to zero almost surely, so that the Lebesgue DCT applies. The second integral is bounded by $M\|\Phi^*_C(s) - \Phi^*_{C,n}\|^2_{\Lambda_2(K_Q,H)}$ for some constant $M$, so that it converges to zero, since $\Phi_{C,n}(s) \to \Phi_C$ in the space $\Lambda_2(K_Q,H)$. In conclusion, the stochastic integrals in (2.54) converge to the stochastic integral in (2.55) in mean square, so that they converge in probability.
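The reduced case of Itô's formula, with constant $\Psi$ and $\Phi$, can be checked by an Euler discretization in finite dimensions. A single-path sketch for $F(x) = \|x\|^2$, where $F_x(x) = 2x$, $F_{xx} = 2I$, and the trace term is $\mathrm{tr}(\Phi Q \Phi^*)$ (all dimensions and coefficients are illustrative):

```python
import numpy as np

rng = np.random.default_rng(8)
d, k, T, n = 3, 2, 1.0, 20000
dt = T / n
lam = np.array([1.0, 0.5])                 # eigenvalues of Q on K
Psi = np.array([0.3, -0.2, 0.1])           # constant drift (reduced case)
Phi = rng.normal(size=(d, k))              # constant diffusion operator

# one path: X(t) = X(0) + Psi*t + Phi*W_t, with W a Q-Wiener process
dW = np.sqrt(lam) * rng.normal(0.0, np.sqrt(dt), size=(n, k))
X = np.zeros((n + 1, d))
X[0] = np.array([1.0, 0.0, -1.0])
for j in range(n):
    X[j + 1] = X[j] + Psi * dt + Phi @ dW[j]

# Ito formula for F(x) = ||x||^2: the trace term is tr(Phi Q Phi^*)
trace_term = np.trace(Phi @ np.diag(lam) @ Phi.T)
stoch = np.sum(2.0 * X[:-1] * (Phi @ dW.T).T)   # int <2X, Phi dW> (left-point sums)
drift = np.sum(2.0 * X[:-1] @ Psi) * dt          # int <2X, Psi> ds
lhs = np.sum(X[-1] ** 2) - np.sum(X[0] ** 2)
rhs = stoch + drift + trace_term * T
assert abs(lhs - rhs) < 0.5
```

The residual is the fluctuation of the discrete quadratic variation around its mean and vanishes as the mesh is refined.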
We now turn to the nonstochastic integrals.

The first component, involving $F_t$, of the nonstochastic integral in (2.54) converges $P$-a.s. to the corresponding component in (2.55) by the continuity and local boundedness of $F_t$, so that the Lebesgue DCT can be applied.

Note that, $P$-a.s., $\Psi_{C,n_k} \to \Psi_C$ in $L^1([0,t],H)$, and $F_x$ is locally bounded, so that the functions $s \mapsto F_x(s, X_{C,n_k}(s))$ and $s \mapsto F_x(s, X_C(s))$ are in $L^\infty([0,t],H)$. The convergence with probability one of the second component follows from the duality argument.

To discuss the last nonstochastic integral, we use the fact that
\[
\big\|\Phi_{C,n}(s) - \Phi_C(s)\big\|_{L_2(K_Q,H)} \to 0 \tag{2.56}
\]
\[
\cdots = \sum_{j=1}^\infty \lambda_j \big\langle F_{xx}\big(s, X_{C,n_k}(s)\big)\Phi_{C,n_k}(s)f_j,\ \Phi_{C,n_k}(s)f_j\big\rangle_H.
\]
Since $X_{C,n_k}(s)$ is bounded, the continuity of $F_{xx}$ and (2.56) imply that
\[
\big\langle F_{xx}\big(s, X_{C,n_k}(s)\big)\Phi_{C,n_k}(s)f_j,\ \Phi_{C,n_k}(s)f_j\big\rangle_H
\to \big\langle F_{xx}\big(s, X_C(s)\big)\Phi_C(s)f_j,\ \Phi_C(s)f_j\big\rangle_H.
\]
By the Lebesgue DCT (with respect to the counting measure), we get that, a.e. on $[0,T] \times \Omega$,
\[
\mathrm{tr}\big(F_{xx}\big(s, X_{C,n_k}(s)\big)\Phi_{C,n_k}(s)Q\Phi^*_{C,n_k}(s)\big)
\to \mathrm{tr}\big(F_{xx}\big(s, X_C(s)\big)\Phi_C(s)Q\Phi^*_C(s)\big)
\]
¹This elementary proof can be replaced by the following argument. The space of trace-class operators $L_1(H)$ can be identified with the dual of the space of compact linear operators on $H$; the duality between the two spaces is given by the trace ([68], Chap. IV, Sect. 1, Theorem 1). Hence, as a separable dual space, it has the Radon–Nikodym property ([14], Chap. III, Sect. 3, Theorem 1). Thus, $L_1([0,T], L_1(H))^* = L_\infty([0,T], L_1(H)^*)$ ([14], Chap. IV, Sect. 1, Theorem 1). But $L_\infty([0,T], L_1(H)^*) = L_\infty([0,T], L(H))$ ([68], Chap. IV, Sect. 1, Theorem 2). Thus the convergence of the last nonstochastic integral follows from the duality argument.
\[
u(t, W_t) - u(0,0)
= \sum_{j=1}^{n-1}\big[u(t_{j+1}, W_{t_{j+1}}) - u(t_j, W_{t_{j+1}})\big]
+ \sum_{j=1}^{n-1}\big[u(t_j, W_{t_{j+1}}) - u(t_j, W_{t_j})\big]
\]
\[
= \sum_{j=1}^{n-1} u_t(\bar t_j, W_{t_{j+1}})\Delta t_j
+ \sum_{j=1}^{n-1}\Big[\big\langle u_x(t_j, W_{t_j}), \Delta W_j\big\rangle_K
+ \tfrac{1}{2}\big\langle u_{xx}(t_j, \bar W_j)(\Delta W_j), \Delta W_j\big\rangle_K\Big]
\]
\[
= \sum_{j=1}^{n-1} u_t(t_j, W_{t_{j+1}})\Delta t_j
+ \sum_{j=1}^{n-1}\big\langle u_x(t_j, W_{t_j}), \Delta W_j\big\rangle_K
+ \tfrac{1}{2}\sum_{j=1}^{n-1}\big\langle u_{xx}(t_j, W_{t_j})(\Delta W_j), \Delta W_j\big\rangle_K
\]
\[
+ \sum_{j=1}^{n-1}\big[u_t(\bar t_j, W_{t_{j+1}}) - u_t(t_j, W_{t_{j+1}})\big]\Delta t_j
+ \tfrac{1}{2}\sum_{j=1}^{n-1}\big\langle \big[u_{xx}(t_j, \bar W_j)(\Delta W_j) - u_{xx}(t_j, W_{t_j})(\Delta W_j)\big], \Delta W_j\big\rangle_K, \tag{2.57}
\]
\[
\cdots \le \sup_{j\le n-1}\big\|u_{xx}(t_j, \bar W_j) - u_{xx}(t_j, W_{t_j})\big\|_{L(K)}\sum_{j=1}^{n-1}\|\Delta W_j\|^2_K \to 0
\]
\[
\sum_{j=1}^{n-1}\big\langle u_{xx}(t_j, W_{t_j})(\Delta W_j), \Delta W_j\big\rangle_K
\to \int_0^t \mathrm{tr}\big(u_{xx}(s, W_s)Q\big)\,ds \tag{2.58}
\]
in probability $P$.
Let $1_j^N = 1_{\{\max_{i\le j}\|W_{t_i}\|_K \le N\}}$. Then $1_j^N$ is $\mathcal{F}_{t_j}$-measurable, and, using the representation
In view of the above and the fact that $u_{xx}$ is bounded on bounded subsets of $[0,T] \times K$, we obtain
\[
E\Big(\sum_{j=1}^{n-1} 1_j^N\big\langle u_{xx}(t_j, W_{t_j})(\Delta W_j), \Delta W_j\big\rangle_K
- \mathrm{tr}\big(1_j^N u_{xx}(t_j, W_{t_j})Q\big)\Delta t_j\Big)^2
\]
\[
= \sum_{j=1}^{n-1}\Big[E\big(1_j^N\big\langle u_{xx}(t_j, W_{t_j})(\Delta W_j), \Delta W_j\big\rangle_K\big)^2
- E\big(\mathrm{tr}\big(1_j^N u_{xx}(t_j, W_{t_j})Q\big)\big)^2(\Delta t_j)^2\Big]
\]
\[
\le \sup_{s\le t,\ \|h\|_K\le N}\big\|u_{xx}(s,h)\big\|^2_{L(K)}\sum_{j=1}^{n-1}\big(E\|\Delta W_j\|^4_K - (\mathrm{tr}\,Q)^2(\Delta t_j)^2\big)
\]
\[
= 2\sup_{s\le t,\ \|h\|_K\le N}\big\|u_{xx}(s,h)\big\|^2_{L(K)}\,\|Q\|^2_{L_2(K)}\sum_{j=1}^{n-1}(\Delta t_j)^2 \to 0.
\]
Also, as $N \to \infty$,
\[
P\Big(\sum_{j=1}^{n-1}\big(1 - 1_j^N\big)\Big[\big\langle u_{xx}(t_j, W_{t_j})(\Delta W_j), \Delta W_j\big\rangle_K
- \mathrm{tr}\big(u_{xx}(t_j, W_{t_j})Q\big)\Delta t_j\Big] \ne 0\Big)
\le P\Big(\sup_{s\le t}\|W_s\|_K > N\Big) \to 0.
\]
This proves (2.58). Taking the limit in (2.57), we obtain Itô's formula for the function $u(t, W_t)$:
\[
u(t, W_t) = u(0,0) + \int_0^t \Big[u_t(s, W_s) + \tfrac{1}{2}\,\mathrm{tr}\big(u_{xx}(s, W_s)Q\big)\Big]\,ds
+ \int_0^t \big\langle u_x(s, W_s), dW_s\big\rangle_K. \tag{2.59}
\]
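Formula (2.59) can be observed numerically in finite dimensions for $u(t,x) = \|x\|^2$: then $u_t = 0$, $u_x = 2x$, and $\mathrm{tr}(u_{xx}Q) = 2\,\mathrm{tr}\,Q$. A single-path sketch (diagonal $Q$; all parameters illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)
d, T, n_steps = 3, 1.0, 20000
dt = T / n_steps
lam = np.array([1.0, 0.5, 0.25])   # eigenvalues of Q
trQ = lam.sum()

# one path of a d-dimensional Q-Wiener process
dW = np.sqrt(lam) * rng.normal(0.0, np.sqrt(dt), size=(n_steps, d))
W = np.vstack([np.zeros(d), np.cumsum(dW, axis=0)])

# (2.59) for u(t, x) = ||x||^2:
#   ||W_t||^2 = int_0^t <2 W_s, dW_s> + t * tr Q
lhs = np.sum(W[-1] ** 2)
stoch = np.sum(2.0 * W[:-1] * dW)  # left-point (Ito) Riemann sums
rhs = stoch + T * trQ
assert abs(lhs - rhs) < 0.2
```

The discrepancy is exactly the fluctuation of the discrete quadratic variation around $t\,\mathrm{tr}\,Q$, which shrinks with the mesh.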
As in the case of a Q-Wiener process, for $\Phi(s) \in \mathcal{P}(K,H)$ and a $P$-a.s. bounded $H$-valued $\mathcal{F}_t$-adapted process $\Psi(s)$, we have $\Phi^*(s)\Psi(s) \in \mathcal{P}(K,\mathbb{R})$. In addition, since
\[
\sum_{j=1}^\infty \big(\big(\Phi^*(s)\Psi(s)\big)(f_j)\big)^2
= \sum_{j=1}^\infty \big\langle \Psi(s), \Phi(s)(f_j)\big\rangle^2_H
\le \big\|\Psi(s)\big\|^2_H \big\|\Phi(s)\big\|^2_{L_2(K,H)},
\]
the process $\Phi^*(s)\Psi(s)$ can be considered as being $K$- or $K^*$-valued, and we can define
\[
\int_0^T \big\langle \Psi(s), \Phi(s)\,d\widetilde W_s\big\rangle_H
= \int_0^T \big\langle \Phi^*(s)\Psi(s), d\widetilde W_s\big\rangle_K
= \int_0^T \big(\Phi^*(s)\Psi(s)\big)\,d\widetilde W_s.
\]
Theorem 2.10 (Itô Formula) Let $H$ and $K$ be real separable Hilbert spaces, and let $\{\widetilde W_t\}_{0\le t\le T}$ be a $K$-valued cylindrical Wiener process on a filtered probability space $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}_{0\le t\le T}, P)$. Assume that a stochastic process $X(t)$, $0 \le t \le T$, is given by
\[
X(t) = X(0) + \int_0^t \Psi(s)\,ds + \int_0^t \Phi(s)\,d\widetilde W_s, \tag{2.60}
\]
and $\Phi \in \mathcal{P}(K,H)$.

Assume that a function $F : [0,T] \times H \to \mathbb{R}$ is such that $F$ is continuous and its Fréchet partial derivatives $F_t$, $F_x$, $F_{xx}$ are continuous and bounded on bounded subsets of $[0,T] \times H$. Then the following Itô formula holds:
\[
F\big(t, X(t)\big) = F\big(0, X(0)\big) + \int_0^t \big\langle F_x\big(s,X(s)\big), \Phi(s)\,d\widetilde W_s\big\rangle_H
+ \int_0^t \Big[ F_t\big(s,X(s)\big) + \big\langle F_x\big(s,X(s)\big), \Psi(s)\big\rangle_H
+ \tfrac{1}{2}\,\mathrm{tr}\Big(F_{xx}\big(s,X(s)\big)\Phi(s)\big(\Phi(s)\big)^*\Big)\Big]\,ds \tag{2.61}
\]
$P$-a.s. for all $t \in [0,T]$.
Proof The proof is nearly identical to the proof of the Itô formula for a Q-Wiener process, and we refer to the notation in the proof of Theorem 2.9. The reduction to the processes $X_C(t) = X(t\wedge\tau_C)$, $\Psi_C(t) = \Psi(t)1_{[0,\tau_C]}(t)$, $\Phi_C(t) = \Phi(t)1_{[0,\tau_C]}(t)$, with
\[
X_C(t) = X_C(0) + \int_0^t \Psi_C(s)\,ds + \int_0^t \Phi_C(s)\,d\widetilde W_s, \qquad t \in [0,T],
\]
\[
\int_0^t \big\|\Psi_{C,n}(s) - \Psi_C(s)\big\|_H\,ds \to 0
\]
and
\[
\Big\|\int_0^t \Phi_{C,n}(s)\,d\widetilde W_s - \int_0^t \Phi_C(s)\,d\widetilde W_s\Big\|_H \to 0
\]
is achieved using Lemma 2.4 and Corollary 2.1 with $Q = I_K$, so that we can define
\[
X_{C,n}(t) = X(0) + \int_0^t \Psi_{C,n}(s)\,ds + \int_0^t \Phi_{C,n}(s)\,d\widetilde W_s
\]
with probability one. Then, using the isometry property (2.32) and the arguments in the proof of Theorem 2.9 that justify the term-by-term convergence of (2.54) to (2.55), we can reduce the general problem to the case
\[
\Phi\widetilde W_t = \sum_{i=1}^\infty \big\langle \Phi\widetilde W_t, e_i\big\rangle_H\, e_i
= \sum_{i=1}^\infty \widetilde W_t\big(\Phi^* e_i\big)\, e_i \in H \tag{2.63}
\]
for $\Phi \in L_2(K,H)$.
From here we proceed as follows. Define
\[
u(t, \xi_t) = F\big(t, X(0) + \Psi t + \xi_t\big)
\]
with $\xi_t = \Phi\widetilde W_t \in \mathcal{M}_T^2(H)$. Similarly as in the proof of Theorem 2.9, with $0 = t_1 < t_2 < \cdots < t_n = t \le T$, $\Delta t_j = t_{j+1} - t_j$, and $\Delta\xi_j = \xi_{t_{j+1}} - \xi_{t_j}$, using Taylor's formula, we obtain
\[
u(t,\xi_t) - u(0,0)
= \sum_{j=1}^{n-1} u_t(t_j, \xi_{t_{j+1}})\Delta t_j
+ \sum_{j=1}^{n-1}\big\langle u_x(t_j,\xi_{t_j}), \Delta\xi_j\big\rangle_H
+ \tfrac{1}{2}\sum_{j=1}^{n-1}\big\langle u_{xx}(t_j,\xi_{t_j})(\Delta\xi_j), \Delta\xi_j\big\rangle_H
\]
\[
+ \sum_{j=1}^{n-1}\big[u_t(\tilde t_j, \xi_{t_{j+1}}) - u_t(t_j, \xi_{t_{j+1}})\big]\Delta t_j
+ \tfrac{1}{2}\sum_{j=1}^{n-1}\big\langle \big[u_{xx}(t_j,\tilde\xi_j)(\Delta\xi_j) - u_{xx}(t_j,\xi_{t_j})(\Delta\xi_j)\big], \Delta\xi_j\big\rangle_H
\]
\[
= S_1 + S_2 + S_3 + S_4 + S_5,
\]
where $\Delta\xi_j = \Phi(\widetilde W_{t_{j+1}} - \widetilde W_{t_j})$, $\tilde\xi_j = \Phi\widetilde W_{t_j} + \theta_j\Phi(\widetilde W_{t_{j+1}} - \widetilde W_{t_j})$, and $\tilde t_j \in [t_j, t_{j+1}]$, $\theta_j \in [0,1]$ are random variables.
Using the smoothness of the function $u$, we conclude that $S_4$ and $S_5$ converge to zero with probability one as $n \to \infty$ and that
\[
S_1 + S_2 \to \int_0^t u_t(s,\xi_s)\,ds + \int_0^t \big\langle \Phi^* u_x(s,\xi_s), d\widetilde W_s\big\rangle_K
= \int_0^t u_t(s,\xi_s)\,ds + \int_0^t \big\langle u_x(s,\xi_s), \Phi\,d\widetilde W_s\big\rangle_H.
\]
To show that
\[
\sum_{j=1}^{n-1}\big\langle u_{xx}(t_j,\xi_{t_j})(\Delta\xi_j), \Delta\xi_j\big\rangle_H
\to \int_0^t \mathrm{tr}\big(u_{xx}(s,\xi_s)\Phi\Phi^*\big)\,ds
\]
\[
\cdots = E\Big[ 1_j^N \sum_{i=1}^\infty \big\langle u_{xx}(t_j,\xi_{t_j})e_i, e_i\big\rangle_H
\big(\widetilde W_{t_{j+1}}\big(\Phi^*e_i\big) - \widetilde W_{t_j}\big(\Phi^*e_i\big)\big)^2 \,\Big|\, \mathcal{F}_{t_j}\Big]
= \mathrm{tr}\big(1_j^N u_{xx}(t_j,\xi_{t_j})\Phi\Phi^*\big)\Delta t_j.
\]
To complete the proof, we now follow the arguments in the proof of Theorem 2.9 with $\Phi\Phi^*$ replacing $Q$.
Chapter 3
Stochastic Differential Equations
for $\omega \in \Omega$ and $0 \le t \le T$.

¹A weak (mild) solution is called a mild (respectively, mild integral) solution in [9], where also a
\[
\langle X(t), h\rangle_H = \langle \xi_0, h\rangle_H
+ \int_0^t \big[\langle X(s), A^*h\rangle_H + \langle F(s,X), h\rangle_H\big]\,ds
+ \int_0^t \langle h, B(s,X)\,dW_s\rangle_H; \tag{3.5}
\]
0
if there exists a filtered probability space $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}_{t\in[0,T]}, P)$ and, on this probability space, a Q-Wiener process $W_t$, relative to the filtration $\{\mathcal{F}_t\}_{t\le T}$, such that $X_t$ is a mild solution of (3.7).

Unlike the strong solution, where the filtered probability space and the Wiener process are given, a martingale solution is a system $((\Omega, \mathcal{F}, \{\mathcal{F}_t\}_{t\le T}, P), W, X)$ in which the filtered probability space and the Wiener process are part of the solution.
If $A = 0$, so that $S(t) = I_H$ (the identity on $H$), we obtain the SDE
\[
dX(t) = F(t,X)\,dt + B(t,X)\,dW_t, \qquad X(0) = x \in H \ \text{(deterministic)}, \tag{3.8}
\]
and a martingale solution of (3.8) is called a weak solution (in the stochastic sense, see [77]).
76 3 Stochastic Differential Equations
Remark 3.1 In the presence or absence of the operator $A$, there should be no confusion between a weak solution of (3.1) in the sense of duality and a weak solution of (3.8) in the stochastic sense.

Obviously, a strong solution is a weak solution (in either meaning), and a mild solution is a martingale solution.
which will be called the stochastic convolution. Let $\|\cdot\|_{D(A)}$ be the graph norm on $D(A)$,
\[
\|h\|_{D(A)} = \big(\|h\|^2_H + \|Ah\|^2_H\big)^{1/2}.
\]
The space $(D(A), \|\cdot\|_{D(A)})$ is a separable Hilbert space (Exercise 1.2). If $f : [0,T] \to D(A)$ is a measurable function and $\int_0^T \|f(s)\|_{D(A)}\,ds < \infty$, then for any $t \in [0,T]$,
\[
\int_0^t f(s)\,ds \in D(A) \quad \text{and} \quad \int_0^t Af(s)\,ds = A\int_0^t f(s)\,ds.
\]
\[
A\int_0^T \Phi(t)\,dW_t = \int_0^T A\Phi(t)\,dW_t \quad P\text{-a.s.} \tag{3.11}
\]
Proof Equality (3.11) is true for bounded elementary processes in $\mathcal{E}(L(K, D(A)))$. Let $\Phi_n \in \mathcal{E}(L(K, D(A)))$ be bounded elementary processes approximating $\Phi$ as in Lemma 2.3,
\[
\int_0^T \big\|\Phi(t,\omega) - \Phi_n(t,\omega)\big\|^2_{L_2(K_Q, D(A))}\,dt \to 0 \quad \text{as } n \to \infty
\]
Proof (a) The proof in [11] relies on a fact which we make the subject of Exercise 3.2. Another method is presented in [9]. We choose to use an Itô-formula type of proof, which is consistent with the deterministic approach (see [63]).

Assume that (3.12) holds, and let
\[
u(s,x) = \big\langle x, S^*(t-s)h\big\rangle_H,
\]
\[
+ \sum_{j=1}^{n-1}\big[u_s\big(\bar s_j, X(s_{j+1})\big) - u_s\big(s_j, X(s_{j+1})\big)\big]\Delta s_j. \tag{3.13}
\]
\[
\sum_{j=1}^{n-1} u_s\big(s_j, X(s_{j+1})\big)\Delta s_j \to \int_0^s u_s\big(r, X(r)\big)\,dr
= \int_0^s \big\langle X(r), -A^*S^*(t-r)h\big\rangle_H\,dr.
\]
\[
\sum_{j=1}^{n-1}\big\langle u_x\big(s_j, X(s_j)\big), \Delta X_j\big\rangle_H
= \sum_{j=1}^{n-1}\big\langle S^*(t-s_j)h,\ X(s_{j+1}) - X(s_j)\big\rangle_H
\]
\[
= \sum_{j=1}^{n-1}\Big[\Big\langle \int_0^{s_{j+1}} X(r)\,dr,\ A^*S^*(t-s_j)h\Big\rangle_H
+ \Big\langle \int_0^{s_{j+1}} \Phi(r)\,dW_r,\ S^*(t-s_j)h\Big\rangle_H
- \Big\langle \int_0^{s_j} X(r)\,dr,\ A^*S^*(t-s_j)h\Big\rangle_H
- \Big\langle \int_0^{s_j} \Phi(r)\,dW_r,\ S^*(t-s_j)h\Big\rangle_H\Big]
\]
\[
= \sum_{j=1}^{n-1}\Big[\Big\langle \int_{s_j}^{s_{j+1}} X(r)\,dr,\ A^*S^*(t-s_j)h\Big\rangle_H
+ \Big\langle \int_{s_j}^{s_{j+1}} \Phi(r)\,dW_r,\ S^*(t-s_j)h\Big\rangle_H\Big].
\]
For $s = t$, we have
\[
\langle X(t), h\rangle_H = \Big\langle \int_0^t S(t-r)\Phi(r)\,dW_r, h\Big\rangle_H.
\]
so that the assumptions of the stochastic Fubini theorem, Theorem 2.8, are satisfied. We obtain
\[
\Big\langle \int_0^t S\star\Phi(s)\,ds,\ A^*h\Big\rangle_H
= \int_0^t \Big\langle \int_0^s S(s-u)\Phi(u)\,dW_u,\ A^*h\Big\rangle_H\,ds
\]
\[
= \int_0^t \int_0^t \Psi(u,\omega,s)\,dW_u\,ds
= \int_0^t \int_0^t \Psi(u,\omega,s)\,ds\,dW_u
\]
\[
= \int_0^t \Big(\int_0^t 1_{(0,s]}(u)\big\langle S(s-u)\Phi(u)(\cdot),\ A^*h\big\rangle_H\,ds\Big)\,dW_u
\]
\[
= \int_0^t \Big\langle A\Big(\int_u^t S(s-u)\Phi(u)(\cdot)\,ds\Big),\ h\Big\rangle_H\,dW_u
\]
\[
= \int_0^t \big\langle \big(S(t-u)\Phi(u) - \Phi(u)\big)(\cdot),\ h\big\rangle_H\,dW_u
= \Big\langle \int_0^t \big(S(t-u)\Phi(u) - \Phi(u)\big)\,dW_u,\ h\Big\rangle_H,
\]
where we have used the fact that for $x \in H$ the integral $\int_0^t S(r)x\,dr \in D(A)$, and
\[
A\int_0^t S(r)x\,dr = S(t)x - x,
\]
proving (b).
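The identity $A\int_0^t S(r)x\,dr = S(t)x - x$ used above is easy to verify for a matrix semigroup $S(t) = e^{tA}$. A sketch (diagonalizable $2\times 2$ example; the matrix exponential is computed through an eigendecomposition, and the time integral by the midpoint rule):

```python
import numpy as np

# finite-dimensional stand-in: S(t) = e^{tA}, and the identity
#   A * int_0^t S(r) x dr = S(t) x - x
A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # diagonalizable example matrix
x = np.array([1.0, -1.0])
t = 1.0

w, V = np.linalg.eig(A)                    # A = V diag(w) V^{-1}
Vinv = np.linalg.inv(V)

def S(r):
    """semigroup S(r) = e^{rA} via the eigendecomposition"""
    return (V @ np.diag(np.exp(r * w)) @ Vinv).real

# midpoint-rule approximation of int_0^t S(r) x dr
n = 4000
dr = t / n
integral = sum(S((i + 0.5) * dr) @ x for i in range(n)) * dr

lhs = A @ integral
rhs = S(t) @ x - x
assert np.allclose(lhs, rhs, atol=1e-4)
```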
(c) Recall from (1.22), Chap. 1, the Yosida approximation $A_n = AR_n$ of $A$, and let $S_n(s) = e^{sA_n}$ be the corresponding semigroups. Then part (b) implies that
\[
S_n\star\Phi(t) = \int_0^t A_n S_n\star\Phi(s)\,ds + \int_0^t \Phi(s)\,dW_s. \tag{3.15}
\]
Recall the commutativity property (1.16) from Chap. 1: for $x \in D(A)$, $AR_n x = R_n Ax$. In addition, $AS_n(t)x = S_n(t)Ax$ for $x \in D(A)$; see Exercise 3.1. Using Proposition 3.1, we obtain
\[
A_n S_n\star\Phi(t) = \int_0^t AR_n S_n(t-s)\Phi(s)\,dW_s
= \int_0^t R_n S_n(t-s)A\Phi(s)\,dW_s
= R_n S_n\star A\Phi(t).
\]
Hence,
\[
\sup_{0\le t\le T} E\Big\|\int_0^t \big(A_n S_n\star\Phi(s) - A S\star\Phi(s)\big)\,ds\Big\|^2_H
\le T^2 \sup_{0\le t\le T} E\int_0^t \big\|A_n S_n\star\Phi(s) - A S\star\Phi(s)\big\|^2_H\,ds
\]
\[
\le T^2 E\int_0^T \big\|R_n S_n\star A\Phi(s) - S\star A\Phi(s)\big\|^2_H\,ds
\]
\[
\le T^2 E\int_0^T \big\|R_n\big(S_n\star A\Phi(s) - S\star A\Phi(s)\big)\big\|^2_H\,ds
+ T^2 E\int_0^T \big\|(R_n - I)S\star A\Phi(s)\big\|^2_H\,ds
\]
\[
\le C\Big(E\int_0^T \big\|S_n\star A\Phi(s) - S\star A\Phi(s)\big\|^2_H\,ds
+ E\int_0^T \big\|(R_n - I)S\star A\Phi(s)\big\|^2_H\,ds\Big),
\]
and
\[
\big\|(R_n - I)S\star A\Phi(s)\big\|^2_H \le C_1 \big\|S\star A\Phi(s)\big\|^2_H
\]
with
\[
E\int_0^T \big\|S\star A\Phi(s)\big\|^2_H\,ds
= E\int_0^T \int_0^s \big\|S(s-u)A\Phi(u)\big\|^2_{L_2(K_Q,H)}\,du\,ds.
\]
Combining (3.16) and (3.17), we obtain that both terms in (3.15) converge uniformly in mean square to the desired limits, so that (3.9) is satisfied by $S\star\Phi(t)$. This concludes the proof.
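The Yosida approximations $A_n = AR_n$ with $R_n = n(nI - A)^{-1}$ converge to $A$ on $D(A)$; in finite dimensions this can be seen directly, with the error decaying like $O(1/n)$. A sketch ($2\times 2$ example matrix, illustrative):

```python
import numpy as np

# Yosida approximation in finite dimensions:
#   R_n = n (nI - A)^{-1},  A_n = A R_n,  and  A_n x -> A x  as  n -> infinity
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
I = np.eye(2)
x = np.array([1.0, 2.0])

errors = []
for n in (1, 10, 100, 1000):
    R_n = n * np.linalg.inv(n * I - A)
    A_n = A @ R_n
    errors.append(np.linalg.norm(A_n @ x - A @ x))

# the error decreases monotonically, roughly like O(1/n)
assert errors[-1] < errors[0]
assert errors[-1] < 0.05
```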
then $X(t)$ is a weak solution of (3.1). If, in addition, $X(t) \in D(A)$, $dP \otimes dt$-almost everywhere, then $X(t)$ is a strong solution of (3.1).

Proof The techniques for proving parts (a) and (b) of Theorem 3.1 are applicable in a more general setting. Consider the process $X(t)$ satisfying the equation
\[
\langle X(t), h\rangle_H = \langle \xi_0, h\rangle_H
+ \int_0^t \big[\langle X(s), A^*h\rangle_H + \langle f(s), h\rangle_H\big]\,ds
+ \int_0^t \langle h, \Phi(s)\,dW_s\rangle_H. \tag{3.18}
\]
For $s = t$, we have
\[
\langle X(t), h\rangle_H = \big\langle S(t)\xi_0, h\big\rangle_H
+ \Big\langle \int_0^t S(t-r)\Phi(r)\,dW_r,\ h\Big\rangle_H
+ \Big\langle \int_0^t S(t-r)f(r)\,dr,\ h\Big\rangle_H.
\]
Now it follows that $X(t)$ is a mild solution if we substitute $f(t) = F(t,X)$ and $\Phi(t) = B(t,X)$ and use the fact that $D(A^*)$ is dense in $H$.
To prove the converse statement, consider the process
\[
X(t) = S(t)\xi_0 + \int_0^t S(t-s)f(s)\,ds + S\star\Phi(t),
\]
where $f(t)$ is as in the first part, and $\Phi \in \Lambda_2(K_Q,H)$. We need to show that
\[
\langle X(t), h\rangle_H = \langle \xi_0, h\rangle_H
+ \int_0^t \Big\langle S(s)\xi_0 + \int_0^s S(s-u)f(u)\,du + S\star\Phi(s),\ A^*h\Big\rangle_H\,ds
+ \int_0^t \langle f(s), h\rangle_H\,ds
+ \Big\langle \int_0^t \Phi(s)\,dW_s,\ h\Big\rangle_H.
\]
\[
A\int_0^t S(s)\xi\,ds = S(t)\xi - \xi,
\]
we get
\[
\langle S(t)\xi_0, h\rangle_H = \langle \xi_0, h\rangle_H + \int_0^t \big\langle S(s)\xi_0, A^*h\big\rangle_H\,ds.
\]
The following existence and uniqueness result for linear SDEs is a direct application of Theorem 3.2.
\[
\langle X(t), \zeta(t)\rangle_H
= \int_0^t \big\langle X(s),\ \zeta'(s) + A^*\zeta(s)\big\rangle_H\,ds
+ \int_0^t \big\langle \zeta(s), B\,dW_s\big\rangle_H. \tag{3.20}
\]
Hint: Prove the result for a linearly dense subset of $C^1([0,T], (D(A^*), \|\cdot\|_{D(A^*)}))$ consisting of functions $\zeta(s) = \zeta_0\varphi(s)$, where $\varphi(s) \in C^1([0,T],\mathbb{R})$.
We first prove the uniqueness and existence of a mild solution to (3.1) in the case of Lipschitz-type coefficients. This result is known (see Ichikawa [32]) if the coefficients $F(t,\cdot)$ and $B(t,\cdot)$ depend on $x \in C([0,T],H)$ through $x(t)$ only. We follow a technique extracted from the work of Gikhman and Skorokhod [25] and extend it from $\mathbb{R}^n$-valued to $H$-valued processes.

Note that conditions (A3) and (A4) imply, respectively, that
\[
\Big\|\int_a^b F(t,x)\,dt\Big\|_H \le \int_a^b \Big(1 + \sup_{s\le T}\big\|(\theta_t x)(s)\big\|_H\Big)\,dt
\]
and
\[
\Big\|\int_a^b \big(F(t,x) - F(t,y)\big)\,dt\Big\|_H \le K\int_a^b \sup_{s\le T}\big\|\big(\theta_t(x-y)\big)(s)\big\|_H\,dt.
\]
We will now state inequalities useful for proving the existence, uniqueness, and properties of solutions to the SDE (3.1). We begin with well-known inequalities (refer to (7.8), (7.9) in [11] and (24) in [34]).
Proof The first inequality follows from the fact that the stochastic integral is an $L^p(\Omega)$-martingale and from Doob's maximal inequality, Theorem 2.2. The third is just Hölder's inequality. We now prove the second. For $p = 1$, it is the isometry property of the stochastic integral.

Assume now that $p > 1$. Let $F(\cdot) = \|\cdot\|_H^{2p} : H \to \mathbb{R}$. Then $F$ is continuous, and its partial derivatives are
\[
F_x(x)(h) = 2p\|x\|_H^{2(p-1)}\langle x, h\rangle_H, \qquad h \in H,
\]
\[
F_{xx}(x)(h,g) = 4p(p-1)\|x\|_H^{2(p-2)}\langle x,h\rangle_H\langle x,g\rangle_H
+ 2p\|x\|_H^{2(p-1)}\langle h,g\rangle_H, \qquad h,g \in H,
\]
\[
\cdots \le p(2p-1)\Big(E\sup_{0\le u\le s}\|M(u)\|_H^{2p}\Big)^{\frac{p-1}{p}}
\Big(E\Big(\int_0^s \big\|\Phi(u)\big\|^2_{L_2(K_Q,H)}\,du\Big)^p\Big)^{\frac{1}{p}}
\]
\[
\le p(2p-1)\Big(\Big(\frac{2p}{2p-1}\Big)^{2p} E\|M(s)\|_H^{2p}\Big)^{\frac{p-1}{p}}
\Big(E\Big(\int_0^s \big\|\Phi(u)\big\|^2_{L_2(K_Q,H)}\,du\Big)^p\Big)^{\frac{1}{p}}.
\]
Dividing both sides by $\big(E\|M(s)\|_H^{2p}\big)^{\frac{p-1}{p}}$, we obtain
\[
E\|M(s)\|_H^{2p} \le \frac{c_{2,p}}{c_{1,p}}\,E\Big(\int_0^s \big\|\Phi(u)\big\|^2_{L_2(K_Q,H)}\,du\Big)^p,
\]
The constants $C^1_{p,\alpha,M,T}$ and $C^2_{p,\alpha,M,T}$ depend only on the indicated parameters.
Proof We define $G(s) = S(t-s)\Phi(s)$. Then, for $u \in [0,t]$, we have by Lemma 3.1
\[
E\Big\|\int_0^u S(t-s)\Phi(s)\,dW_s\Big\|_H^{2p} = E\Big\|\int_0^u G(s)\,dW_s\Big\|_H^{2p}
\le \frac{c_{2,p}}{c_{1,p}}\,E\Big(\int_0^u \big\|G(s)\big\|^2_{L_2(K_Q,H)}\,ds\Big)^p
\]
\[
= \frac{c_{2,p}}{c_{1,p}}\,E\Big(\int_0^u \big\|S(t-s)\Phi(s)\big\|^2_{L_2(K_Q,H)}\,ds\Big)^p
\le \frac{c_{2,p}}{c_{1,p}}\,M^{2p}e^{2p\alpha T} E\Big(\int_0^u \big\|\Phi(s)\big\|^2_{L_2(K_Q,H)}\,ds\Big)^p.
\]
In particular, for $u = t$, we get the first inequality in (3.23); the second is the Hölder inequality.
We will need inequalities of Burkholder type for the stochastic convolution process. We begin with a supporting lemma [10].

Lemma 3.2 Let $0 < \alpha \le 1$ and $p > 1$ be numbers such that $\alpha > 1/p$. Then, for an arbitrary $f \in L^p([0,T],H)$, the function
\[
G_\alpha f(t) = \int_0^t (t-s)^{\alpha-1} S(t-s)f(s)\,ds, \qquad 0 \le t \le T, \tag{3.24}
\]
The following Burkholder-type inequalities concern two cases. The first allows a general $C_0$-semigroup but is restricted to powers strictly greater than two. Its proof relies on a factorization technique developed in [10], and it is a consequence of (3.21) and (3.23). The second inequality allows the power two but is restricted to pseudo-contraction semigroups. Curiously, the general case of power two is still an open problem.

Let $A_n = AR_n$ be the Yosida approximations, and let $S_n(t) = e^{A_n t}$. Then a continuous version of $S\star\Phi(t)$ can be approximated by the (continuous) processes $S_n\star\Phi(t)$ in the following sense:
\[
\lim_{n\to\infty} E\sup_{0\le t\le T}\big\|S\star\Phi(t) - S_n\star\Phi(t)\big\|_H^{2p} = 0. \tag{3.26}
\]
Proof (a) We follow the proof of Proposition 7.3 in [11], which uses the factorization method introduced in [10]. Let us begin with the following identity (see Exercise 3.6):

$$\int_\sigma^t (t-s)^{\alpha-1}(s-\sigma)^{-\alpha}\,ds=\frac{\pi}{\sin\pi\alpha},\quad 0<\alpha<1,\ \sigma<t.\qquad(3.29)$$
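Identity (3.29) is the Euler Beta-function identity B(α, 1−α) = π/sin(πα) in disguise: substituting s = σ + (t−σ)u makes the (t−σ) powers cancel. A quick numerical check (names ad hoc):

```python
import math

def factorization_integral(alpha, n=200_000):
    """Midpoint-rule value of int_0^1 (1-u)^(alpha-1) u^(-alpha) du, which
    equals int_sigma^t (t-s)^(alpha-1) (s-sigma)^(-alpha) ds after the
    substitution s = sigma + (t - sigma) u (the (t - sigma) powers cancel)."""
    h = 1.0 / n
    return sum((1.0 - (k + 0.5) * h) ** (alpha - 1.0)
               * ((k + 0.5) * h) ** (-alpha) for k in range(n)) * h

for alpha in (0.3, 0.5, 0.7):
    target = math.pi / math.sin(math.pi * alpha)
    # Euler's reflection formula Gamma(a) Gamma(1-a) = pi / sin(pi a)
    assert abs(math.gamma(alpha) * math.gamma(1.0 - alpha) - target) < 1e-10
    # midpoint rule handles the integrable endpoint singularities roughly
    assert abs(factorization_integral(alpha) - target) < 0.05 * target
```

The quadrature tolerance is loose because the integrand has integrable singularities at both endpoints; the Gamma-function identity verifies the value exactly.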
Using this identity and the stochastic Fubini Theorem 2.8, we obtain

$$\int_0^t S(t-s)\Phi(s)\,dW_s=\frac{\sin\pi\alpha}{\pi}\int_0^t\Big(\int_\sigma^t (t-s)^{\alpha-1}(s-\sigma)^{-\alpha}\,ds\Big)S(t-\sigma)\Phi(\sigma)\,dW_\sigma$$
$$=\frac{\sin\pi\alpha}{\pi}\int_0^t (t-s)^{\alpha-1}S(t-s)\int_0^s (s-\sigma)^{-\alpha}S(s-\sigma)\Phi(\sigma)\,dW_\sigma\,ds$$
$$=\frac{\sin\pi\alpha}{\pi}\int_0^t (t-s)^{\alpha-1}S(t-s)Y(s)\,ds\quad P\text{-a.s.},$$

with

$$Y(s)=\int_0^s (s-\sigma)^{-\alpha}S(s-\sigma)\Phi(\sigma)\,dW_\sigma,\quad 0\le s\le T.$$

Hence, we have the modification

$$S\diamond\Phi(t)=\frac{\sin\pi\alpha}{\pi}\int_0^t (t-s)^{\alpha-1}S(t-s)Y(s)\,ds,\qquad(3.30)$$
with some constant $C^3_{p,\alpha,M,T} > 0$, by the theorem on convolutions in $L^p(\mathbb R^d)$; see Exercise 3.7. Now (3.25), in the case τ = T, follows with $C_{p,\alpha,M,T} = C^1_{p,\alpha,M,T}\,C^3_{p,\alpha,M,T}$.
We now consider (3.25) with a stopping time τ. Let τ_n ↑ τ, P-a.s., be an increasing sequence of stopping times approximating τ, each τ_n taking k_n values 0 ≤ t₁ ≤ ⋯ ≤ t_{k_n} ≤ T. Then

$$E\sup_{0\le s\le\tau_n\wedge t}\Big\|\int_0^s S(s-r)\Phi(r)\,dW_r\Big\|_H^{2p}=\sum_{i=1}^{k_n}E\,\mathbf 1_{\{\tau_n=t_i\}}\sup_{0\le s\le t_i\wedge t}\Big\|\int_0^s S(s-r)\Phi(r)\,dW_r\Big\|_H^{2p}$$
$$=\sum_{i=1}^{k_n}E\sup_{0\le s\le t_i\wedge t}\Big\|\mathbf 1_{\{\tau_n=t_i\}}\int_0^s S(s-r)\Phi(r)\,dW_r\Big\|_H^{2p}$$
$$\le C_{p,\alpha,M,T}\sum_{i=1}^{k_n}E\int_0^{t_i\wedge t}\mathbf 1_{\{\tau_n=t_i\}}\|\Phi(r)\|^{2p}_{\mathcal L_2(K_Q,H)}\,dr=C_{p,\alpha,M,T}\,E\int_0^{\tau_n\wedge t}\|\Phi(r)\|^{2p}_{\mathcal L_2(K_Q,H)}\,dr.$$
with

$$Y_n(s)=\int_0^s (s-\sigma)^{-\alpha}S_n(s-\sigma)\Phi(\sigma)\,dW_\sigma,\quad 0\le s\le T.$$

Hence,

$$S\diamond\Phi(t)-S_n\diamond\Phi(t)=\frac{\sin\pi\alpha}{\pi}\int_0^t (t-s)^{\alpha-1}\big(S(t-s)-S_n(t-s)\big)Y(s)\,ds$$
$$\quad+\frac{\sin\pi\alpha}{\pi}\int_0^t (t-s)^{\alpha-1}S_n(t-s)\big(Y(s)-Y_n(s)\big)\,ds=I_n(t)+J_n(t).$$
Let us analyze the terms I_n(t) and J_n(t) separately. By the Hölder inequality,

$$\sup_{0\le t\le T}\|I_n(t)\|_H^{2p}\le C\int_0^T\big\|\big(S(t-s)-S_n(t-s)\big)Y(s)\big\|_H^{2p}\,ds,$$

with the expression on the right-hand side converging to zero and being bounded by a P-integrable function, so that

$$\lim_{n\to\infty}E\sup_{0\le t\le T}\|I_n(t)\|_H^{2p}=0.$$
Similarly to (3.31), using the convolution inequality in Exercise 3.7 (with r = 1 and s = p), we have

$$E\int_0^T\|Y(t)-Y_n(t)\|_H^{2p}\,dt=E\int_0^T\Big\|\int_0^t (t-s)^{-\alpha}\big(S(t-s)-S_n(t-s)\big)\Phi(s)\,dW_s\Big\|_H^{2p}\,dt$$
$$\le C_p^4\,E\int_0^T\Big(\int_0^t (t-s)^{-2\alpha}\big\|\big(S(t-s)-S_n(t-s)\big)\Phi(s)\big\|^2_{\mathcal L_2(K_Q,H)}\,ds\Big)^p\,dt$$
$$\le C_p^4\,E\int_0^T\Big(\int_0^t (t-s)^{-2\alpha}\sup_{0\le u\le T}\big\|\big(S(u)-S_n(u)\big)\Phi(s)\big\|^2_{\mathcal L_2(K_Q,H)}\,ds\Big)^p\,dt$$
$$\le C_p^4\Big(\int_0^T t^{-2\alpha}\,dt\Big)^p\,E\int_0^T\sup_{0\le u\le T}\big\|\big(S(u)-S_n(u)\big)\Phi(t)\big\|^{2p}_{\mathcal L_2(K_Q,H)}\,dt$$
$$\le C^5_{p,\alpha}\,E\int_0^T\sup_{0\le u\le T}\big\|\big(S(u)-S_n(u)\big)\Phi(t)\big\|^{2p}_{\mathcal L_2(K_Q,H)}\,dt\to 0\quad\text{as }n\to\infty.$$
Let X_n(s) = S ⋄ Φ_n(s). Applying Itô's formula to F(x) = ‖x‖_H^{2p}, we get, similarly as in Lemma 3.1,

$$\|X_n(s)\|_H^{2p}\le 2p\int_0^s\|X_n(u)\|_H^{2(p-1)}\langle X_n(u),\Phi_n(u)\,dW_u\rangle_H+2p\int_0^s\|X_n(u)\|_H^{2(p-1)}\langle X_n(u),AX_n(u)\rangle_H\,du$$
$$\quad+\tfrac12\,2p(2p-1)\int_0^s\|X_n(u)\|_H^{2(p-1)}\|\Phi_n(u)\|^2_{\mathcal L_2(K_Q,H)}\,du$$

$$\cdots+p(2p-1)\sup_{0\le s\le t}\|X_n(s)\|_H^{2(p-1)}\int_0^t\|\Phi_n(s)\|^2_{\mathcal L_2(K_Q,H)}\,ds.$$
92 3 Stochastic Differential Equations
Set

$$X_n^*(t)=\sup_{0\le s\le t}\|X_n(s)\|_H\quad\text{and}\quad\varphi_n(t)=\int_0^t\|\Phi_n(s)\|^2_{\mathcal L_2(K_Q,H)}\,ds.$$
Let τ_k = inf_{t≤T}{X_n^*(t) > k}, k = 1, 2, …, with the infimum over an empty set equal to T. By the Burkholder inequality for real-valued martingales, we have

$$E\Big|\int_0^{s\wedge\tau_k}\|X_n(u)\|_H^{2(p-1)}\langle X_n(u),\Phi_n(u)\,dW_u\rangle_H\Big|\le E\Big(\int_0^{s\wedge\tau_k}\|X_n(u)\|_H^{2(2p-1)}\|\Phi_n(u)\|^2_{\mathcal L_2(K_Q,H)}\,du\Big)^{1/2}$$
$$\le E\Big[\big(X_n^*(s\wedge\tau_k)\big)^{2p-1}\big(\varphi_n(s\wedge\tau_k)\big)^{1/2}\Big],$$
and

$$E\Big[\big(X_n^*(s\wedge\tau_k)\big)^{2(p-1)}\varphi_n(s\wedge\tau_k)\Big]\le\Big(E\big(X_n^*(s\wedge\tau_k)\big)^{2p}\Big)^{1-1/p}\Big(E\big(\varphi_n(s\wedge\tau_k)\big)^p\Big)^{1/p},$$

since

$$\int_0^{t\wedge\tau_k}\big(X_n^*(s)\big)^{2p}\,ds\le\int_0^t\big(X_n^*(s\wedge\tau_k)\big)^{2p}\,ds.$$
$$g(t)\le u(t)+p\alpha\int_0^t g(s)\,ds$$

implies

$$g(t)\le u(t)+p\alpha\int_0^t u(s)e^{p\alpha(t-s)}\,ds\le u(t)+\sup_{0\le s\le t}u(s)\,p\alpha\int_0^t e^{p\alpha(t-s)}\,ds=u(t)e^{p\alpha t}.$$
Multiplying the obtained inequality by $(E(X_n^*(t\wedge\tau_k))^{2p})^{1/(2p)-1}$, we can see that

$$\big(E\big(X_n^*(t\wedge\tau_k)\big)^{2p}\big)^{1/(2p)}\le e^{2p\alpha t}\Big[p(2p-1)\big(E\big(X_n^*(t\wedge\tau_k)\big)^{2p}\big)^{-1/(2p)}\big(E\big(\varphi_n(t\wedge\tau_k)\big)^p\big)^{1/p}+2p\big(E\big(\varphi_n(t\wedge\tau_k)\big)^p\big)^{1/(2p)}\Big],$$

giving, for $z=(E(X_n^*(t\wedge\tau_k))^{2p})^{1/(2p)}$,

$$z\le\frac12\Big[2pe^{2p\alpha t}+\big(4p^2e^{4p\alpha t}+4p(2p-1)e^{2p\alpha t}\big)^{1/2}\Big]\big(E\varphi_n(t\wedge\tau_k)^p\big)^{1/(2p)}\le C_{p,T}\,e^{2p\alpha t}\big(E\varphi_n(t\wedge\tau_k)^p\big)^{1/(2p)}$$

(dropping the stopping time on the right-hand side does not decrease its value).
Since sup_{0≤s≤t∧τ_k}‖X_n(s)‖_H ↑ sup_{0≤s≤t}‖X_n(s)‖_H, P-a.s., as k → ∞, by the continuity of X_n(t) as a solution of (3.33), we get by monotone convergence that

$$E\sup_{0\le s\le t}\|X_n(s)\|_H^{2p}\le C^2_{p,T}\,e^{2p\alpha t}\,E\Big(\int_0^t\|\Phi_n(s)\|^2_{\mathcal L_2(K_Q,H)}\,ds\Big)^p,$$

so that

$$E\sup_{0\le t\le T}\|X_n(t)-\tilde X(t)\|_H^{2p}\to 0,\qquad(3.34)$$
Remark 3.2 Under the assumptions of Lemma 3.3, part (a), the continuous modification of the stochastic convolution S ⋄ Φ(t) defined by (3.30) can be approximated, as in (3.28), by the processes X_n(t) = S ⋄ Φ_n(t) defined in the proof of part (b). This is because, for X̃ defined in the proof of part (b),

$$P\big(\tilde X(t)=S\diamond\Phi(t),\ 0\le t\le T\big)=1,$$

and both

$$S_n\diamond\Phi(t)\to S\diamond\Phi(t)\quad\text{and}\quad S\diamond\Phi_n(t)\to S\diamond\Phi(t).$$
Lemma 3.4 If F(t,x) and B(t,x) satisfy conditions (A1) and (A3), and S(t) is either a pseudo-contraction semigroup with p ≥ 1 or a general C₀-semigroup with p > 1, then, for a stopping time τ,

$$E\sup_{0\le s\le t\wedge\tau}\|I(s,\xi)\|_H^{2p}\le C\Big(t+\int_0^t E\sup_{0\le u\le s\wedge\tau}\|\xi(u)\|_H^{2p}\,ds\Big)\qquad(3.36)$$

and, using (3.25) or (3.27), a bound for the expectation of the second term,

$$E\sup_{0\le s\le t\wedge\tau}\Big\|\int_0^s S(s-u)B(u,\xi)\,dW_u\Big\|_H^{2p}\le C_{p,M,\alpha,t}\,E\int_0^{t\wedge\tau}\|B(s,\xi)\|^{2p}_{\mathcal L_2(K_Q,H)}\,ds.$$
Lemma 3.5 Let conditions (A1) and (A4) be satisfied, and let S(t) be either a pseudo-contraction semigroup with p ≥ 1 or a general C₀-semigroup with p > 1. Then

$$E\sup_{0\le s\le t}\|I(s,\xi_1)-I(s,\xi_2)\|_H^{2p}\le C_{p,M,\alpha,T,K}\int_0^t E\sup_{0\le u\le s}\|\xi_1(u)-\xi_2(u)\|_H^{2p}\,ds.$$

The stochastic term is estimated by

$$E\sup_{0\le s\le t}\Big\|\int_0^s S(s-u)\big(B(u,\xi_1)-B(u,\xi_2)\big)\,dW_u\Big\|_H^{2p}\le C_{p,M,\alpha,T}\,E\int_0^t\|B(s,\xi_1)-B(s,\xi_2)\|^{2p}_{\mathcal L_2(K_Q,H)}\,ds$$
$$\le C_{p,M,\alpha,T,K}\int_0^t E\sup_{0\le u\le s}\|\xi_1(u)-\xi_2(u)\|_H^{2p}\,ds.$$
Let H_{2p} denote the space of C([0,T],H)-valued random variables ξ such that the process ξ(t) is jointly measurable, adapted to the filtration {F_t}_{t∈[0,T]}, and E sup_{0≤s≤T}‖ξ(s)‖_H^{2p} < ∞. Then H_{2p} is a Banach space with the norm

$$\|\xi\|_{\mathcal H_{2p}}=\Big(E\sup_{0\le s\le T}\|\xi(s)\|_H^{2p}\Big)^{1/(2p)}.$$
Theorem 3.3 Let the coefficients F and B satisfy conditions (A1), (A3), and (A4). Assume that S(t) is either a pseudo-contraction semigroup with p ≥ 1 or a general C₀-semigroup with p > 1. Then the semilinear equation (3.1) has a unique continuous mild solution. If, in addition, E‖ξ₀‖_H^{2p} < ∞, then the solution is in H_{2p}.
If A = 0, then (3.8) has a unique strong solution. If, in addition, E‖ξ₀‖_H^{2p} < ∞, then the solution is in H_{2p}, p ≥ 1.
2p
Proof We first assume that E‖ξ₀‖_H^{2p} < ∞. Let I(t,X) be defined as in (3.35), and consider I(X)(t) = I(t,X). Then, by Lemma 3.4, I : H_{2p} → H_{2p}. The solution can be approximated by the following sequence:

$$X_0(t)=S(t)\xi_0,\qquad X_{n+1}(t)=S(t)\xi_0+I(t,X_n),\quad n=0,1,\dots.\qquad(3.37)$$

Indeed, let v_n(t) = E sup_{0≤s≤t}‖X_{n+1}(s) − X_n(s)‖_H^{2p}. Then v₀(t) ≤ v₀(T) ≡ V₀, and, using Lemma 3.5, we obtain

$$v_1(t)=E\sup_{0\le s\le t}\|X_2(s)-X_1(s)\|_H^{2p}=E\sup_{0\le s\le t}\|I(s,X_1)-I(s,X_0)\|_H^{2p}\le C\int_0^t E\sup_{0\le u\le s}\|X_1(u)-X_0(u)\|_H^{2p}\,ds\le CV_0t$$

and, in general,

$$v_n(t)\le C\int_0^t v_{n-1}(s)\,ds\le\frac{V_0(Ct)^n}{n!}.$$
Next, similarly to the proof of Gikhman and Skorokhod in [25], we show that

$$\sup_{0\le t\le T}\|X_n(t)-X(t)\|_H\to 0\quad\text{a.s.}$$

for some X ∈ H_{2p}. If we let ε_n = (V₀(CT)^n/n!)^{1/(1+2p)}, then, using Chebyshev's inequality, we arrive at

$$P\Big(\sup_{0\le t\le T}\|X_{n+1}(t)-X_n(t)\|_H>\varepsilon_n\Big)=P\Big(\sup_{0\le t\le T}\|X_{n+1}(t)-X_n(t)\|_H^{2p}>\varepsilon_n^{2p}\Big)\le\frac{V_0(CT)^n}{n!\,\varepsilon_n^{2p}}=\varepsilon_n.$$

Because $\sum_{n=1}^\infty\varepsilon_n<\infty$, by the Borel–Cantelli lemma, sup_{0≤t≤T}‖X_{n+1}(t) − X_n(t)‖_H < ε_n, P-a.s., for all but finitely many n. Thus, the series

$$\sum_{n=1}^\infty\sup_{0\le t\le T}\|X_{n+1}(t)-X_n(t)\|_H$$

converges P-a.s., with 1/(2p) + 1/(2q) = 1. Note that q > 1/2; hence, the second series converges. The first series is bounded by

$$\sum_{k=n}^\infty v_k(T)^{1/(2p)}\le\sum_{k=n}^\infty\Big(\frac{V_0(CT)^k}{k!}\Big)^{1/(2p)}\to 0\quad\text{as }n\to\infty.$$
To justify that X(t) is a mild solution to (3.1), we note that, a.s., F(s,X_n) → F(s,X) uniformly in s. Therefore,

$$\int_0^t S(t-s)F(s,X_n)\,ds\to\int_0^t S(t-s)F(s,X)\,ds\quad\text{a.s.}$$

Using the fact, proved above, that E sup_t‖X(t) − X_n(t)‖_H^{2p} → 0, we obtain

$$E\Big\|\int_0^t S(t-s)\big(B(s,X)-B(s,X_n)\big)\,dW_s\Big\|_H^{2p}\le C_{p,M,\alpha,T}\,E\int_0^t\|B(s,X)-B(s,X_n)\|^{2p}_{\mathcal L_2(K_Q,H)}\,ds$$
$$\le C_{p,M,\alpha,T,K}\,E\sup_{0\le t\le T}\|X(t)-X_n(t)\|_H^{2p}\to 0.$$
Now, in the general case where E‖ξ₀‖_H^{2p} is possibly infinite, take the F₀-measurable random variable χ_k = 1_{{‖ξ₀‖_H<k}} and let ξ_k = ξ₀χ_k. Let X^k(t) be a mild solution of (3.1) with the initial condition ξ_k. We will first show that

$$X^k\chi_k=X^{k+1}\chi_k.$$

Let X_n^k and X_n^{k+1} be the approximations of the mild solutions X^k and X^{k+1} defined by (3.37). Since

$$X_0^k\chi_k=X_0^{k+1}\chi_k,$$

we deduce that

$$F\big(t,X_0^k\big)\chi_k=F\big(t,X_0^{k+1}\big)\chi_k,\qquad B\big(t,X_0^k\big)\chi_k=B\big(t,X_0^{k+1}\big)\chi_k,$$

so that

$$X_1^k(t)\chi_k=S(t)\xi_0\chi_k+\chi_k\int_0^t S(t-s)F\big(s,X_0^k\big)\,ds+\chi_k\int_0^t S(t-s)B\big(s,X_0^k\big)\,dW_s$$
$$=S(t)\xi_0\chi_k+\chi_k\int_0^t S(t-s)F\big(s,X_0^{k+1}\big)\,ds+\chi_k\int_0^t S(t-s)B\big(s,X_0^{k+1}\big)\,dW_s$$
$$=S(t)\xi_0\chi_{k+1}\chi_k+I\big(t,X_0^{k+1}\big)\chi_k=X_1^{k+1}(t)\chi_k.$$
in H2p , we also have by the generalized Lebesgue DCT, Theorem 3.4, that
in H2p , so that P -a.s., for all t ∈ [0, T ], X k (t)χk = X k+1 (t)χk . The limit
Theorem 3.4 (Generalized Lebesgue DCT) Let (E,μ) be a measure space, and let g_n be a sequence of nonnegative real-valued integrable functions such that g_n(x) → g(x) for μ-a.e. x ∈ E and

$$\int_E g_n(x)\,\mu(dx)\to\int_E g(x)\,\mu(dx)<\infty.$$

Let f_n be another sequence of functions such that |f_n| ≤ g_n and f_n(x) → f(x) for μ-a.e. x ∈ E. Then f_n and f are integrable functions, and

$$\int_E f_n(x)\,\mu(dx)\to\int_E f(x)\,\mu(dx).$$
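A one-dimensional illustration of why the hypothesis ∫g_n → ∫g matters (an example constructed here, not taken from the text): on E = [0,1] with Lebesgue measure, take g_n(x) = 1 + x/n and f_n(x) = x(1 + x/n), for which the hypotheses hold; by contrast, f_n = g_n = n·1_{(0,1/n)} converge to 0 pointwise while ∫g_n = 1 does not converge to ∫g = 0, and the conclusion fails.

```python
# all integrals below are exact closed forms on [0, 1]
def int_g(n):
    # g_n(x) = 1 + x/n, so int g_n = 1 + 1/(2n) -> 1 = int g
    return 1.0 + 1.0 / (2 * n)

def int_f(n):
    # f_n(x) = x*(1 + x/n); |f_n| <= g_n since 0 <= x <= 1
    return 0.5 + 1.0 / (3 * n)

# hypotheses of the generalized DCT hold, and the conclusion follows:
assert abs(int_g(10**6) - 1.0) < 1e-5      # int g_n -> int g = 1
assert abs(int_f(10**6) - 0.5) < 1e-5      # int f_n -> int f = 1/2

# counterexample when int g_n does NOT converge to int g:
# f_n = g_n = n * 1_(0,1/n) -> 0 pointwise, yet int f_n = 1 for every n
int_spike = lambda n: 1.0
assert int_spike(10**6) == 1.0
```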
Proposition 3.2 Let the coefficients F and B satisfy conditions (A1), (A3), and (A4), and let E‖ξ₀‖_H^{2p} < ∞. Let X(t) be a mild solution to the semilinear equation (3.1), and let X_n(t) be strong solutions of the approximating problems (3.40). If S(t) is a pseudo-contraction semigroup and p ≥ 1, or a general C₀-semigroup and p > 1, then the mild solution X(t) of (3.1) is approximated in H_{2p} by the sequence of strong solutions X_n(t) to (3.40), that is,

$$\lim_{n\to\infty}E\sup_{0\le t\le T}\|X_n(t)-X(t)\|_H^{2p}=0.$$
Proof First note that, under the assumptions on the coefficients of (3.40), by Theorem 3.3, strong solutions X_n exist; they are unique and coincide with mild solutions. Moreover,

$$E\sup_{0\le s\le t}\|X_n(s)-X(s)\|_H^{2p}\le C\Big(E\sup_{0\le s\le t}\big\|\big(S_n(s)-S(s)\big)\xi_0\big\|_H^{2p}+E\int_0^t\big\|\big(S_n(t-s)-S(t-s)\big)F(s,X_n)\big\|_H^{2p}\,ds$$
$$\quad+E\int_0^t\big\|\big(S_n(t-s)-S(t-s)\big)B(s,X_n)\big\|^{2p}_{\mathcal L_2(K_Q,H)}\,ds+E\sup_{0\le s\le t}\|I(s,X_n)-I(s,X)\|_H^{2p}\Big)$$
as n → ∞.
Proof Let τ_n = inf{t : ‖X(t)‖_H > n}. Lemma 3.4 implies that

$$E\sup_{0\le s\le t}\|X(s\wedge\tau_n)\|_H^{2p}=E\sup_{0\le s\le t}\|S(s\wedge\tau_n)\xi_0+I(s\wedge\tau_n,X)\|_H^{2p}$$
$$\le 2^{2p-1}\Big(E\sup_{0\le s\le t\wedge\tau_n}\|S(s)\xi_0\|_H^{2p}+E\sup_{0\le s\le t\wedge\tau_n}\|I(s,X)\|_H^{2p}\Big)$$
$$\le 2^{2p-1}\Big(E\|\xi_0\|_H^{2p}\,M^{2p}e^{2p\alpha t}+E\sup_{0\le s\le t\wedge\tau_n}\|I(s,X)\|_H^{2p}\Big)$$
$$\le C\Big(E\|\xi_0\|_H^{2p}+t+\int_0^t E\sup_{0\le u\le s}\|X(u\wedge\tau_n)\|_H^{2p}\,ds\Big).$$
with F satisfying conditions (A1), (A2), and (A4), and A generating a C₀-semigroup of operators on H. Then Theorem 3.3 guarantees the existence of a unique mild solution to (3.41) in C([0,T],H), which is given by

$$X(t)=S(t)x+\int_0^t S(t-s)F(s,X)\,ds+\int_0^t S(t-s)B\,dW_s.$$

In Sect. 3.3 we will consider a special case where the coefficients F and B of an SSDE (3.1) depend on X(t) rather than on the entire past of the solution. It is known that, even in the deterministic case where A ≡ 0, B ≡ 0, and F(t,X) = f(t,X(t)) with a continuous function f : R × H → H, a solution to the Peano differential equation

$$X'(t)=f\big(t,X(t)\big),\qquad X(0)=x\in H,\qquad(3.42)$$

may fail to exist unless H is finite-dimensional (see [26] for a counterexample). Thus either one needs an additional assumption on A, or one has to seek a solution in a larger space. These topics will be discussed in Sects. 3.8 and 3.9.
(A4′) For x₁, x₂ ∈ H,

$$\|\tilde F(\omega,t,x_1)-\tilde F(\omega,t,x_2)\|_H+\|\tilde B(\omega,t,x_1)-\tilde B(\omega,t,x_2)\|_{\mathcal L_2(K_Q,H)}\le K\|x_1-x_2\|_H.$$
Let I(t,ξ) be as in (3.35). By repeating the proofs of Lemmas 3.4 and 3.5, with all suprema dropped and with (3.23) replacing (3.25) and (3.27), we obtain the following inequalities for ξ₁, ξ₂ ∈ H̃_{2p}, p ≥ 1, and a general C₀-semigroup:

$$E\|I(t,\xi)\|_H^{2p}\le C_{p,M,\alpha,T}\Big(t+\int_0^t E\|\xi(s)\|_H^{2p}\,ds\Big),\quad p\ge 1,\qquad(3.43)$$

$$E\|I(t,\xi_1)-I(t,\xi_2)\|_H^{2p}\le C_{p,M,\alpha,T,K}\int_0^t E\|\xi_1(s)-\xi_2(s)\|_H^{2p}\,ds,\quad p\ge 1.\qquad(3.44)$$

If F(t,x) = F(t,x(t)) and B(t,x) = B(t,x(t)) satisfy conditions (A1) and (A4), then there exists a constant C_{p,M,α,T,K} such that

$$\|I(\cdot,\xi_1)-I(\cdot,\xi_2)\|^{2p}_{\tilde{\mathcal H}_{2p}}\le C_{p,M,\alpha,T,K}\,\|\xi_1-\xi_2\|^{2p}_{\tilde{\mathcal H}_{2p}}.\qquad(3.47)$$
Theorem 3.5 Let the coefficients F(t,x) = F(t,x(t)) and B(t,x) = B(t,x(t)) satisfy conditions (A1), (A3), and (A4). Assume that S(t) is a general C₀-semigroup. Then the semilinear equation (3.1) has a unique continuous mild solution. If, in addition, E‖ξ₀‖_H^{2p} < ∞, p ≥ 1, then the solution is in H̃_{2p}. In this case, either for p > 1 and a general C₀-semigroup or for p = 1 and a pseudo-contraction semigroup, the solution is in H_{2p}.
Proof We follow the proof of the existence and uniqueness result for deterministic Volterra equations, which uses the Banach contraction principle. The idea is to replace a given norm on a Banach space with an equivalent norm, so that the integral transformation related to the Volterra equation becomes contractive.
We first assume that E‖ξ₀‖_H^{2p} < ∞. Let B be the Banach space of processes X ∈ H̃_{2p}, equipped with the norm

$$\|X\|_{\mathcal B}=\Big(\sup_{0\le t\le T}e^{-Lt}E\|X(t)\|_H^{2p}\Big)^{1/(2p)},$$

where L = C_{p,M,α,T,K} is the constant in Corollary 3.3. The norms ‖·‖_{H̃_{2p}} and ‖·‖_B are equivalent, since

$$e^{-LT}\|X\|^{2p}_{\tilde{\mathcal H}_{2p}}\le\|X\|^{2p}_{\mathcal B}\le\|X\|^{2p}_{\tilde{\mathcal H}_{2p}}.$$
Note that Ĩ : B → B by (3.43). We will find a fixed point of the transformation Ĩ. Let X, Y ∈ B. We use (3.44) and calculate

$$\|\tilde I(X)-\tilde I(Y)\|^{2p}_{\mathcal B}=\sup_{0\le t\le T}e^{-Lt}E\|I(t,X)-I(t,Y)\|_H^{2p}\le\sup_{0\le t\le T}e^{-Lt}L\int_0^t E\|X(s)-Y(s)\|_H^{2p}\,ds$$
$$=\sup_{0\le t\le T}e^{-Lt}L\int_0^t e^{Ls}e^{-Ls}E\|X(s)-Y(s)\|_H^{2p}\,ds\le L\|X-Y\|^{2p}_{\mathcal B}\sup_{0\le t\le T}e^{-Lt}\int_0^t e^{Ls}\,ds$$
$$=L\|X-Y\|^{2p}_{\mathcal B}\sup_{0\le t\le T}e^{-Lt}\,\frac{e^{Lt}-1}{L}\le\big(1-e^{-LT}\big)\|X-Y\|^{2p}_{\mathcal B},$$
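The effect of the exponential weight can be seen already for the scalar operator (Kφ)(t) = L∫₀ᵗ φ(s) ds, a deterministic caricature of Ĩ (constructed here for illustration): in the sup norm K can expand by the factor LT, while in the weighted norm sup_t e^{−Lt}|φ(t)| its norm is at most 1 − e^{−LT} < 1.

```python
import math

L, T, N = 3.0, 2.0, 4000
grid = [k * T / N for k in range(N + 1)]

def K(phi):
    """(K phi)(t) = L * int_0^t phi(s) ds on the grid (trapezoidal rule)."""
    out, acc = [0.0], 0.0
    for k in range(N):
        acc += 0.5 * (phi(grid[k]) + phi(grid[k + 1])) * (T / N)
        out.append(L * acc)
    return out

sup_norm = lambda vals: max(abs(v) for v in vals)
w_norm = lambda vals: max(math.exp(-L * t) * abs(v) for t, v in zip(grid, vals))

one = lambda t: 1.0
K_one = K(one)
# sup norm: ||K 1|| = L T = 6 > 1 = ||1||, so K is not a sup-norm contraction
assert sup_norm(K_one) > 1.0
# weighted (Bielecki) norm: ||1||_B = 1 and ||K 1||_B = sup L t e^{-Lt} = 1/e,
# within the general contraction bound 1 - e^{-LT}
assert w_norm(K_one) <= 1.0 - math.exp(-L * T) + 1e-9
assert abs(w_norm(K_one) - math.exp(-1.0)) < 1e-3
```

This is the same renorming device: the weight turns a Volterra-type integral operator into a contraction without changing the underlying space.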
are continuous (see (3.38)), and X̃(t) is a mild solution of (3.1) with the initial condition ξ₀. By uniqueness, X̃(t) is the sought continuous modification of X(t). The proof is complete for ξ₀ satisfying E‖ξ₀‖_H^{2p} < ∞ and p ≥ 1.
If E‖ξ₀‖_H^{2p} is not necessarily finite, then we apply the corresponding part of the proof of Theorem 3.3. The uniqueness is justified as in the proof of Theorem 3.3. The final assertion of the theorem is a direct consequence of Theorem 3.3.
Remark 3.3 In case E‖ξ₀‖_H^{2p} < ∞, the unique mild solution X constructed in Theorem 3.5 can be approximated in H̃_{2p} by the sequence

$$X_0(t)=S(t)\xi_0,$$
$$X_{n+1}(t)=\tilde I(X_n)(t)=S(t)\xi_0+\int_0^t S(t-s)F\big(s,X_n(s)\big)\,ds+\int_0^t S(t-s)B\big(s,X_n(s)\big)\,dW_s.$$

Then X_n(t) and its limit X(t) are measurable with respect to F_t^{W,ξ₀}. If E‖ξ₀‖_H^{2p} is infinite, the mild solution X is obtained as a P-a.e. limit of mild solutions adapted to F_t^{W,ξ₀}, so that it is also adapted to that filtration.
Corollary 3.4 Let the coefficients F(t,x) = F(t,x(t)) and B(t,x) = B(t,x(t)) satisfy conditions (A1), (A3), and (A4), and let E‖ξ₀‖_H^{2p} < ∞. Let X(t) be a mild solution to the semilinear equation (3.1), and let X_n(t) be strong solutions of the approximating problems (3.40). If S(t) is a C₀-semigroup and p ≥ 1, then the mild solution X(t) of (3.1) is approximated in H̃_{2p} by the sequence of strong solutions X_n(t) to (3.40):

$$\lim_{n\to\infty}\sup_{0\le t\le T}E\|X_n(t)-X(t)\|_H^{2p}=0.$$
Exercise 3.10 Prove that if (3.49) holds true for any real-valued bounded measurable function ϕ, then it is also valid for any ϕ such that ϕ(X(t+h)) ∈ L¹(Ω,R).
We now want to consider mild solutions to (3.1) on the interval [s,T]. To that end, let {W_t}_{t≥0} be a Q-Wiener process with respect to the filtration {F_t}_{t≤T}, and consider W̄_t = W_{t+s} − W_s, the increments of W_t. The process W̄_t is a Q-Wiener process with respect to F̄_t = F_{t+s}, t ≥ 0. Its increments on [0,T−s] are identical to the increments of W_t on [s,T].
Consider (3.1) with W̄_t replacing W_t and F̄₀ = F_s replacing F₀. Under the assumptions of Theorem 3.5, there exists a mild solution X(t) of (3.1), and it is unique, so that for any 0 ≤ s ≤ T and an F_s-measurable random variable ξ, there exists a unique process X(·,s;ξ) such that

$$X(t,s;\xi)=S(t-s)\xi+\int_s^t S(t-r)F\big(r,X(r,s;\xi)\big)\,dr+\int_s^t S(t-r)B\big(r,X(r,s;\xi)\big)\,dW_r.\qquad(3.50)$$

This definition can be extended to functions ϕ such that ϕ(X(t,s;x)) ∈ L¹(Ω,R) for arbitrary s ≤ t. Note that, for any random variable η,

$$(P_{s,t}\varphi)(\eta)=E\big[\varphi\big(X(t,s;x)\big)\big]\big|_{x=\eta}.$$
Theorem 3.6 Let the coefficients F and B satisfy conditions (A1), (A3), and (A4). Assume that S(t) is a general C₀-semigroup. Then, for u ≤ s ≤ t ≤ T, the solutions X(t,u;ξ) of (3.50) are Markov processes, i.e., they satisfy the following Markov property:

$$E\big[\varphi\big(X(t,u;\xi)\big)\,\big|\,\mathcal F_s^{W,\xi}\big]=(P_{s,t}\varphi)\big(X(s,u;\xi)\big)\qquad(3.52)$$

for any real-valued function ϕ such that ϕ(X(t,s;ξ)) ∈ L¹(Ω,R) for arbitrary s ≤ t.
for all σ(X(s,u;ξ))-measurable random variables η. By the monotone class theorem (functional form), it suffices to prove (3.53) for ϕ bounded and continuous on H.
Note that, if η = x ∈ H, then clearly the solution X(t,s;x) obtained in Theorem 3.3 is measurable with respect to σ{W_t − W_s, t ≥ s} and hence independent of F_s^{W,ξ}, by the fact that the increments W_t − W_s, t ≥ s, are independent of F_s^{W,ξ}. This implies

$$E\big[\varphi\big(X(t,s;x)\big)\,\big|\,\mathcal F_s^{W,\xi}\big]=(P_{s,t}\varphi)(x).\qquad(3.54)$$

Consider a simple function

$$\eta=\sum_{j=1}^n x_j\mathbf 1_{A_j}\big(X(s,u;\xi)\big),$$

and

$$E\big[\varphi\big(X(t,s;\eta)\big)\,\big|\,\mathcal F_s^{W,\xi}\big]=\sum_{j=1}^n E\big[\varphi\big(X(t,s;x_j)\big)\mathbf 1_{A_j}\big(X(s,u;\xi)\big)\,\big|\,\mathcal F_s^{W,\xi}\big].$$

Now ϕ(X(t,s;x_j)), j = 1, 2, …, n, are independent of F_s^{W,ξ}, and 1_{A_j}(X(s,u;ξ)) is F_s^{W,ξ}-measurable, giving that, P-a.e.,

$$E\big[\varphi\big(X(t,s;\eta)\big)\,\big|\,\mathcal F_s^{W,\xi}\big]=\sum_{j=1}^n (P_{s,t}\varphi)(x_j)\mathbf 1_{A_j}\big(X(s,u;\xi)\big).$$

If E‖η‖²_H < ∞, then there exists a sequence of simple functions η_n of the above form such that E‖η_n‖² < ∞ and E‖η_n − η‖² → 0. Lemma 3.7 in Sect. 3.5 yields

$$E\big\|X(t,s;\eta_n)-X(t,s;\eta)\big\|_H^2\to 0.$$
Corollary 3.5 Under the assumptions of Theorem 3.5, for u ≤ s ≤ t ≤ T, the solutions X(t,u;ξ) of (3.50) satisfy the following Markov property:

$$E\big[\varphi\big(X(t,u;\xi)\big)\,\big|\,\mathcal F_s^{X}\big]=(P_{s,t}\varphi)\big(X(s,u;\xi)\big).\qquad(3.57)$$

Proof Since the RHS of (3.52) is F_s^X-measurable and F_s^X ⊂ F_s^{W,ξ}, which is a consequence of Remark 3.3, it is enough to take the conditional expectation with respect to F_s^X in (3.52).
Exercise 3.11 Show that if X(t) = X(t,0;ξ₀) is a mild solution to (3.1) as in Theorem 3.5, then the Markov property (3.52) implies (3.49).
We now consider the case where F and B are independent of t, and assume that x ∈ H. Then we get

$$X(t+s,t;x)=S(s)x+\int_t^{t+s}S(t+s-u)F\big(X(u,t;x)\big)\,du+\int_t^{t+s}S(t+s-u)B\big(X(u,t;x)\big)\,dW_u$$
$$=S(s)x+\int_0^s S(s-u)F\big(X(t+u,t;x)\big)\,du+\int_0^s S(s-u)B\big(X(t+u,t;x)\big)\,d\bar W_u,$$

and in this homogeneous case we write

$$P_t=P_{0,t}.$$
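In the homogeneous case the semigroup property P_{t+s} = P_t P_s (Chapman–Kolmogorov) can be verified in closed form for H = R and the Ornstein–Uhlenbeck equation dX = aX dt + b dW with ϕ(x) = x². This scalar example is an illustration only; the formulas below are the standard OU second moments.

```python
import math

a, b = -0.5, 1.2

def var(t):
    # Var X^x(t) = b^2 (e^{2at} - 1) / (2a)
    return b * b * (math.exp(2 * a * t) - 1.0) / (2 * a)

def P(t, x):
    # (P_t phi)(x) = E (X^x(t))^2  for phi(y) = y^2
    return x * x * math.exp(2 * a * t) + var(t)

x, t, s = 2.0, 0.7, 1.3
lhs = P(t + s, x)
# P_t applied to y -> (P_s phi)(y) = y^2 e^{2as} + var(s):
rhs = P(t, x) * math.exp(2 * a * s) + var(s)
assert abs(lhs - rhs) < 1e-12     # Chapman-Kolmogorov: P_{t+s} = P_t P_s
```

The identity reduces to var(t)e^{2as} + var(s) = var(t+s), which holds exactly.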
As in Sect. 3.4, we consider the semilinear SDE (3.1) with the coefficients F (t, x) =
F (t, x(t)) and B(t, x) = B(t, x(t)) for x ∈ C([0, T ], H ) such that F and B do not
depend on ω.
Before we study the dependence on the initial condition, we need the following
lemma.
Proof Condition (2) follows from (3.44) in Lemma 3.5. To prove (1), let X^ξ(t) and X^η(t) be mild solutions of (3.1) with initial conditions ξ and η, respectively. Then,

$$E\|X^\xi(t)-X^\eta(t)\|_H^2\le 3C_T\Big(E\|\xi-\eta\|_H^2+E\int_0^t\big\|F\big(s,X^\xi(s)\big)-F\big(s,X^\eta(s)\big)\big\|_H^2\,ds$$
$$\quad+E\int_0^t\big\|B\big(s,X^\xi(s)\big)-B\big(s,X^\eta(s)\big)\big\|^2_{\mathcal L_2(K_Q,H)}\,ds\Big)$$
$$\le 3C_T\Big(E\|\xi-\eta\|_H^2+K^2\int_0^t E\|X^\xi(s)-X^\eta(s)\|_H^2\,ds\Big).$$
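For the linear scalar case F(t,x) = ax(t) with constant B, the dependence on the initial value can be seen path by path: driving two Euler–Maruyama solutions with the *same* Wiener increments, the noise cancels in the difference, which then evolves deterministically and gives E|X^ξ(t) − X^η(t)|² = e^{2at}E|ξ − η|², consistent with the Gronwall bound above. A sketch with ad hoc parameters:

```python
import math
import random

random.seed(7)
a, b, T, N = 0.5, 1.0, 1.0, 1000
dt = T / N

xi, eta = 1.0, 1.01
x, y = xi, eta
for _ in range(N):
    dW = random.gauss(0.0, math.sqrt(dt))   # the SAME increment for both paths
    x += a * x * dt + b * dW
    y += a * y * dt + b * dW

diff0, diffT = abs(xi - eta), abs(x - y)
# the noise cancels: diff_T = diff_0 * (1 + a dt)^N ~ diff_0 * e^{aT}
assert abs(diffT - diff0 * math.exp(a * T)) < 1e-4
```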
We now prove the continuity of the solution with respect to the initial value.
Assume that Fn (t, x) and Bn (t, x) satisfy conditions (A1), (A3), and (A4) of
Sect. 3.1, and in addition, let the following conditions hold:
As F_n(s,X₀(s)) → F₀(s,X₀(s)), by condition (A3) and Lemma 3.6, we have, for all s,

$$E\big\|F_n\big(s,X_0(s)\big)\big\|_H^2\le C\,E\big(1+\|X_0(s)\|_H\big)^2\le C\big(1+E\|\xi_0\|_H^2\big),$$

and hence α₁^{(n)}(t) → 0 uniformly in t. Similarly,

$$\alpha_2^{(n)}(t)=E\Big\|\int_0^t S(t-s)\big(B_n\big(s,X_0(s)\big)-B_0\big(s,X_0(s)\big)\big)\,dW_s\Big\|_H^2$$
$$\le E\int_0^T\big\|S(t-s)\big(B_n\big(s,X_0(s)\big)-B_0\big(s,X_0(s)\big)\big)\big\|^2_{\mathcal L_2(K_Q,H)}\,ds\to 0$$

uniformly in t ≤ T.
We obtain the result using Gronwall's lemma.
We now discuss differentiability of the solution with respect to the initial value in the case where the coefficients F : [0,T] × H → H and B : [0,T] × H → L₂(K_Q,H) of (3.1) are Fréchet differentiable in the second (Hilbert space) variable.
$$=\int_0^t S(t-s)D^2F\big(s,\xi(s)\big)\big(\eta(s),\zeta(s)\big)\,ds+\int_0^t S(t-s)D^2B\big(s,\xi(s)\big)\big(\eta(s),\zeta(s)\big)\,dW_s\qquad(3.64)$$
Proof Consider
proving the first equality in (3.62). To prove the second equality, let
By Exercise 3.13,

$$\|r_F(t,x,h)\|_H\le 2M_1\|h\|_H\quad\text{and}\quad\|r_B(t,x,h)\|_{\mathcal L_2(K_Q,H)}\le 2M_1\|h\|_H.$$

Now, with ∂Ĩ(x,ξ)/∂ξ as defined in (3.62), we have

$$r_{\tilde I}(x,\xi,\eta)(t)=\tilde I(x,\xi+\eta)(t)-\tilde I(x,\xi)(t)-\Big(\frac{\partial\tilde I(x,\xi)}{\partial\xi}\eta\Big)(t)$$
$$=\int_0^t S(t-s)r_F\big(s,\xi(s),\eta(s)\big)\,ds+\int_0^t S(t-s)r_B\big(s,\xi(s),\eta(s)\big)\,dW_s=I_1+I_2.$$
Consider

$$\frac{\sup_{0\le t\le T}\big(E\big\|\int_0^t S(t-s)r_F(s,\xi(s),\eta(s))\,ds\big\|_H^2\big)^{1/2}}{\|\eta\|_{\tilde{\mathcal H}_2}}\le C\Big(E\int_0^T\frac{\|r_F(s,\xi(s),\eta(s))\|_H^2}{\|\eta(s)\|_H^2}\,\frac{\|\eta(s)\|_H^2}{\|\eta\|^2_{\tilde{\mathcal H}_2}}\,\mathbf 1_{\{\|\eta(s)\|_H\neq 0\}}\,ds\Big)^{1/2}.$$

Note that

$$\frac{\|\eta(s)\|_H^2}{\|\eta\|^2_{\tilde{\mathcal H}_2}}\,\mathbf 1_{\{\|\eta(s)\|_H\neq 0\}}\le 1.$$

Further,

$$\|I_2\|_{\tilde{\mathcal H}_2}=\sup_{0\le t\le T}\Big(E\int_0^t\big\|S(t-s)r_B\big(s,\xi(s),\eta(s)\big)\big\|^2_{\mathcal L_2(K_Q,H)}\,ds\Big)^{1/2},$$
Exercise 3.13 Let H₁, H₂ be two Hilbert spaces. For a Fréchet differentiable function F : H₁ → H₂, define r_F(x,h) = F(x+h) − F(x) − DF(x)h. Show that

$$\|r_F(x,h)\|_{H_2}\le 2\sup_{x\in H_1}\|DF(x)\|_{\mathcal L(H_1,H_2)}\,\|h\|_{H_1}.$$
$$\|f(x,u)-f(x,v)\|_U\le\alpha\|u-v\|_U,\quad x\in X,\ u,v\in U,\qquad(3.65)$$

and let, for every x ∈ X, ϕ(x) denote the unique fixed point of the contraction f(x,·) : U → U. Then the unique transformation ϕ : X → U defined by

$$f\big(x,\varphi(x)\big)=\varphi(x)\quad\text{for every }x\in X\qquad(3.66)$$
Let {f_n}_{n=1}^∞ be a sequence of mappings in C¹(X × U) satisfying condition (3.65),
Proof Let F(x,u) = u − f(x,u). Then F(x,u) = 0 generates the implicit function ϕ(x) defined in (3.66). In addition, F_u(x,u) = I − f_u(x,u) is invertible, since ‖f_u(x,u)‖_{L(U)} ≤ α < 1. The differentiability and the form of the derivatives of ϕ(x) follow from the implicit function theorem (see VI.2 in [44]). The last statement follows from the convergence in (3.68) and the form of the derivatives of ϕ(x) given in (3.67).
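A scalar instance of this lemma (constructed here for illustration): f(x,u) = x + ½cos u is a contraction in u with α = ½, its fixed point ϕ(x) solves ϕ(x) = x + ½cos ϕ(x), and the implicit-function formula gives ϕ′(x) = (1 − f_u)⁻¹ f_x = 1/(1 + ½ sin ϕ(x)), which matches a finite-difference derivative:

```python
import math

def phi(x, iters=200):
    """Fixed point of u -> x + 0.5*cos(u), a contraction with alpha = 1/2."""
    u = 0.0
    for _ in range(iters):
        u = x + 0.5 * math.cos(u)
    return u

x, h = 0.3, 1e-4
numeric = (phi(x + h) - phi(x - h)) / (2 * h)
# implicit function theorem: phi'(x) = f_x / (1 - f_u) = 1/(1 + 0.5 sin phi(x))
formula = 1.0 / (1.0 + 0.5 * math.sin(phi(x)))
assert abs(numeric - formula) < 1e-6
```

Two hundred iterations of a ½-contraction converge far below machine precision, so the comparison is limited only by the central-difference truncation error.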
We are now ready to prove a result on differentiability of the solution with respect
to the initial condition.
and

$$\begin{cases}dZ(t)=\big(AZ(t)+D_xF\big(t,X^x(t)\big)Z(t)+D_x^2F\big(t,X^x(t)\big)\big(D_xX^x(t)y,\ D_xX^x(t)z\big)\big)\,dt\\ \qquad\quad+\big(D_xB\big(t,X^x(t)\big)Z(t)+D_x^2B\big(t,X^x(t)\big)\big(D_xX^x(t)y,\ D_xX^x(t)z\big)\big)\,dW_t,\\ Z(0)=0.\end{cases}$$
Proof Consider the operator Ĩ : B → B, with Ĩ defined in (3.60) and the Banach space B defined in the proof of Theorem 3.5 in the case p = 1. Since B is just H̃₂ renormed with the norm ‖·‖_B, which is equivalent to ‖·‖_{H̃₂}, we can as well prove the theorem in B, and the result will remain valid in H̃₂.
Since Ĩ is a contraction on B, as shown in the proof of Theorem 3.5, the solution X^x of (3.1) is the unique fixed point in B of the transformation Ĩ, so that

$$X^x=\tilde I\big(x,X^x\big).\qquad(3.73)$$
We put an additional restriction on the coefficients in this section and assume that F and B depend only on x ∈ H. We will now discuss analytical properties of the transition semigroup P_t. Recall that, for a bounded measurable function ϕ on H,

$$P_t\varphi(x)=P_{0,t}\varphi(x)=E\big[\varphi\big(X^x(t)\big)\big],$$

and set

$$u(t,x)=P_t\varphi(x)$$
on t and x, and formulas (3.62) and (3.64) allow us to establish a specific form of a parabolic-type PDE for u(t,x), which is called Kolmogorov's backward equation:

$$\begin{cases}\dfrac{\partial u(t,x)}{\partial t}=\Big\langle Ax+F(x),\ \dfrac{\partial u(t,x)}{\partial x}\Big\rangle_H+\dfrac12\operatorname{tr}\Big[\dfrac{\partial^2u(t,x)}{\partial x^2}\big(B(x)Q^{1/2}\big)\big(B(x)Q^{1/2}\big)^*\Big],\\[1mm]\qquad 0<t<T,\ x\in D(A),\\[1mm]u(0,x)=\varphi(x).\end{cases}\qquad(3.74)$$
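For H = R and the Ornstein–Uhlenbeck equation dX = aX dt + b dW with ϕ(x) = x², the function u(t,x) = Eϕ(X^x(t)) = x²e^{2at} + b²(e^{2at}−1)/(2a) is explicit, and (3.74) reduces to ∂_t u = ax ∂_x u + ½b² ∂²_x u, which can be verified by finite differences. This scalar check is an illustration only (names ad hoc):

```python
import math

a, b = 0.4, 0.9

def u(t, x):
    """u(t,x) = E (X^x(t))^2 for dX = a X dt + b dW, X(0) = x."""
    return x * x * math.exp(2 * a * t) \
        + b * b * (math.exp(2 * a * t) - 1.0) / (2 * a)

t, x = 0.5, 1.3
ht, hx = 1e-5, 1e-3
du_dt = (u(t + ht, x) - u(t - ht, x)) / (2 * ht)
du_dx = (u(t, x + hx) - u(t, x - hx)) / (2 * hx)
d2u_dx2 = (u(t, x + hx) - 2 * u(t, x) + u(t, x - hx)) / (hx * hx)

# Kolmogorov's backward equation: u_t = a x u_x + (1/2) b^2 u_xx
assert abs(du_dt - (a * x * du_dx + 0.5 * b * b * d2u_dx2)) < 1e-4
```

Since u is quadratic in x, the centered x-differences are exact up to rounding, so the residual is tiny.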
We follow the presentation in [11] and begin with the case where A is a bounded
linear operator on H .
Proof We first show that u(t,x) defined by (3.75) satisfies (3.74). Since the operator A is bounded, the proof follows from the Itô formula applied to the function ϕ(x) and the strong solution X^x(t) of the SSDE (3.1):

$$d\varphi\big(X^x(t)\big)=\Big[\Big\langle\frac{d\varphi(X^x(t))}{dx},\ AX^x(t)+F\big(X^x(t)\big)\Big\rangle_H+\frac12\operatorname{tr}\Big(\frac{d^2\varphi(X^x(t))}{dx^2}\big(B(X^x(t))Q^{1/2}\big)\big(B(X^x(t))Q^{1/2}\big)^*\Big)\Big]\,dt$$
$$\quad+\Big\langle\frac{d\varphi(X^x(t))}{dx},\ B\big(X^x(t)\big)\,dW_t\Big\rangle_H.$$
Now Theorem 3.9 and the fact that ϕ ∈ C_b²(H) imply that u(t,x) is twice Fréchet differentiable in x for 0 ≤ t ≤ T, and for y, z ∈ H we have

$$\Big\langle\frac{\partial u(t,x)}{\partial x},\ y\Big\rangle_H=E\Big\langle\frac{\partial\varphi(X^x(t))}{\partial x},\ DX^x(t)y\Big\rangle_H,$$

$$\Big\langle\frac{\partial^2u(t,x)}{\partial x^2}y,\ z\Big\rangle_H=E\Big\langle\frac{\partial^2\varphi(X^x(t))}{\partial x^2}DX^x(t)y,\ DX^x(t)z\Big\rangle_H+E\Big\langle\frac{\partial\varphi(X^x(t))}{\partial x},\ D^2X^x(t)(y,z)\Big\rangle_H.\qquad(3.77)$$
giving, for x ∈ H,

$$\frac{\partial^+u(0,x)}{\partial t}=\Big\langle Ax+F(x),\ \frac{\partial u(0,x)}{\partial x}\Big\rangle_H+\frac12\operatorname{tr}\Big[\frac{\partial^2u(0,x)}{\partial x^2}\big(B(x)Q^{1/2}\big)\big(B(x)Q^{1/2}\big)^*\Big].$$

By (3.58), P_{t+s}ϕ = P_t(P_sϕ). Hence,

$$\frac{\partial^+u(s,x)}{\partial t}=\lim_{t\to0^+}\frac{P_t(P_s\varphi)(x)-(P_s\varphi)(x)}{t}.$$
Note that (P_sϕ)(x) = u(s,x) ∈ C_b²(H), so that we can repeat the calculations in (3.76) with (P_sϕ) replacing ϕ to arrive at

$$\frac{\partial^+u(s,x)}{\partial t}=\Big\langle Ax+F(x),\ \frac{\partial u(s,x)}{\partial x}\Big\rangle_H+\frac12\operatorname{tr}\Big[\frac{\partial^2u(s,x)}{\partial x^2}\big(B(x)Q^{1/2}\big)\big(B(x)Q^{1/2}\big)^*\Big]$$

for x ∈ H, 0 ≤ s < T. Note that the functions ∂u(s,x)/∂x and ∂²u(s,x)/∂x² are continuous in s, because they depend on the derivatives of ϕ and on the derivative processes DX^x(t)y and D²X^x(t)(y,z), which are mild solutions of (3.70) and (3.71). Hence, ∂⁺u(s,x)/∂t is continuous on [0,T[, which implies ([63], Chap. 2, Corollary 1.2) that u(·,x) is continuously differentiable on [0,T[. Thus, u(t,x) satisfies (3.74) on [0,T[.
To prove the uniqueness, assume that ũ(t,·) ∈ C_b²(H), ũ(·,x) ∈ C_b¹([0,T[), and ũ(t,x) satisfies (3.74) on [0,T[. For a fixed 0 < t < T, we use Itô's formula and, for 0 < s < t, consider the stochastic differential

$$d\tilde u\big(t-s,X^x(s)\big)=\Big[-\frac{\partial\tilde u(t-s,X^x(s))}{\partial t}+\Big\langle\frac{\partial\tilde u(t-s,X^x(s))}{\partial x},\ AX^x(s)+F\big(X^x(s)\big)\Big\rangle_H$$
$$\quad+\frac12\operatorname{tr}\Big(\frac{\partial^2\tilde u(t-s,X^x(s))}{\partial x^2}\big(B(X^x(s))Q^{1/2}\big)\big(B(X^x(s))Q^{1/2}\big)^*\Big)\Big]\,ds+\Big\langle\frac{\partial\tilde u(t-s,X^x(s))}{\partial x},\ B\big(X^x(s)\big)\,dW_s\Big\rangle_H.$$

Since ũ satisfies (3.74), the ds-term vanishes, and integrating from 0 to t gives

$$\tilde u\big(0,X(t)\big)=\tilde u\big(t,X(0)\big)+\int_0^t\Big\langle\frac{\partial\tilde u(t-s,X^x(s))}{\partial x},\ B\big(X^x(s)\big)\,dW_s\Big\rangle_H.$$

Therefore, applying expectation to both sides and using the initial condition ũ(0,x) = ϕ(x) yields

$$\tilde u(t,x)=E\varphi\big(X(t)\big).$$
Using Theorem 3.10 and the Yosida approximation, Da Prato and Zabczyk stated a
more general result when the operator A is unbounded.
Theorem 3.11 (Kolmogorov's Backward Equation II) Assume that F and B do not depend on t, F : H → H, and B : H → L₂(K_Q,H). Let the Fréchet derivatives DF(x), DB(x), D²F(x), and D²B(x) be continuous and satisfy conditions (3.61) and (3.63) (with t omitted). If conditions (A1), (A3), and (A4) hold, then, for ϕ ∈ C_b²(H), there exists a unique solution u of Kolmogorov's backward equation (3.74) satisfying (3.74) on [0,T[ and such that
(1) u(t,x) is jointly continuous and bounded on [0,T] × H,
(2) u(t,·) ∈ C_b²(H), 0 ≤ t < T,
(3) u(·,x) ∈ C_b¹([0,T[) for any x ∈ D(A).
Moreover, u is given by formula (3.75), where X^x(t) is the solution to (3.1) with deterministic initial condition ξ₀ = x ∈ H.
Proof To prove that u(t,x) = Eϕ(X^x(t)) is a solution, we approximate it with the sequence u_n(t,x) = Eϕ(X_n^x(t)), where X_n(t) are strong solutions to (3.40) with the linear terms A_n being the Yosida approximations of A. By Corollary 3.4 with p = 1, we know that the mild solution X(t) of (3.1) is approximated in H̃₂ by the sequence X_n^x(t), i.e.,

$$\lim_{n\to\infty}\sup_{0\le t\le T}E\big\|X_n^x(t)-X^x(t)\big\|_H^2=0.$$

This implies, choosing a subsequence if necessary, that X_n^x(t) → X^x(t) a.s., so that, by the boundedness of ϕ,

$$u_n(t,x)=E\varphi\big(X_n^x(t)\big)\to E\varphi\big(X^x(t)\big)=u(t,x).\qquad(3.78)$$
with the last equality following by direct differentiation of Eϕ(X^x(t)) under the expectation. Hence,

$$\Big\langle\frac{\partial u_n(t,x)}{\partial x},\ A_nx+F(x)\Big\rangle_H\to\Big\langle\frac{\partial u(t,x)}{\partial x},\ Ax+F(x)\Big\rangle_H.\qquad(3.79)$$
Next consider

$$\operatorname{tr}\Big[\frac{\partial^2u_n(t,x)}{\partial x^2}\big(B(x)Q^{1/2}\big)\big(B(x)Q^{1/2}\big)^*\Big]=\sum_{k=1}^\infty E\Big\langle\frac{d^2\varphi(X_n^x(t))}{dx^2}DX_n^x(t)\big(B(x)Q^{1/2}\big)\big(B(x)Q^{1/2}\big)^*e_k,\ DX_n^x(t)e_k\Big\rangle_H$$
$$\quad+\sum_{k=1}^\infty E\Big\langle\frac{d\varphi(X_n^x(t))}{dx},\ D^2X_n^x(t)\big(\big(B(x)Q^{1/2}\big)\big(B(x)Q^{1/2}\big)^*e_k,\ e_k\big)\Big\rangle_H.\qquad(3.80)$$
Assume that e_k(x) are the eigenvectors of (B(x)Q^{1/2})(B(x)Q^{1/2})^*. Let us discuss the convergence of the first term. We have

$$\Big|\sum_{k=1}^\infty E\Big\langle\frac{d^2\varphi(X_n^x(t))}{dx^2}DX_n^x(t)\big(B(x)Q^{1/2}\big)\big(B(x)Q^{1/2}\big)^*e_k,\ DX_n^x(t)e_k\Big\rangle_H-E\Big\langle\frac{d^2\varphi(X^x(t))}{dx^2}DX^x(t)\big(B(x)Q^{1/2}\big)\big(B(x)Q^{1/2}\big)^*e_k,\ DX^x(t)e_k\Big\rangle_H\Big|$$
$$\le\sum_{k=1}^\infty E\Big|\Big\langle\Big(\frac{d^2\varphi(X_n^x(t))}{dx^2}-\frac{d^2\varphi(X^x(t))}{dx^2}\Big)DX_n^x(t)\big(B(x)Q^{1/2}\big)\big(B(x)Q^{1/2}\big)^*e_k,\ DX_n^x(t)e_k\Big\rangle_H\Big|$$
$$\quad+\sum_{k=1}^\infty E\Big|\Big\langle\frac{d^2\varphi(X^x(t))}{dx^2}DX_n^x(t)\big(B(x)Q^{1/2}\big)\big(B(x)Q^{1/2}\big)^*e_k,\ DX_n^x(t)e_k\Big\rangle_H-\Big\langle\frac{d^2\varphi(X^x(t))}{dx^2}DX^x(t)\big(B(x)Q^{1/2}\big)\big(B(x)Q^{1/2}\big)^*e_k,\ DX^x(t)e_k\Big\rangle_H\Big|$$
$$=S_1+S_2.$$
Indeed, DX_n^x(t)y are mild solutions to (3.70), whose coefficients satisfy the assumptions of Theorem 3.5, since we have assumed that the Fréchet derivatives of F and B satisfy conditions (3.61). By Theorem 3.5, each solution can be obtained in H̃_{2p}, p ≥ 1 (the initial condition is deterministic), using the iterative procedure that employs the Banach contraction principle, starting from the unit ball centered at 0. In addition, the sequence of contraction constants can be bounded by a constant strictly less than one, since ‖e^{A_nt}‖_{L(H)} ≤ Me^{nαt/(n−α)} by estimate (1.29) in Chap. 2.
Consider the series S₁. For each k, the sequence

$$E\Big|\Big\langle\Big(\frac{d^2\varphi(X_n^x(t))}{dx^2}-\frac{d^2\varphi(X^x(t))}{dx^2}\Big)DX_n^x(t)\big(B(x)Q^{1/2}\big)\big(B(x)Q^{1/2}\big)^*e_k,\ DX_n^x(t)e_k\Big\rangle_H\Big|$$
$$\le E\Big(\Big\|\frac{d^2\varphi(X_n^x(t))}{dx^2}-\frac{d^2\varphi(X^x(t))}{dx^2}\Big\|\,\lambda_k\,\|DX_n^x(t)e_k\|_H^2\Big)\to 0,$$

where λ_k(x) is the eigenvalue corresponding to the eigenvector e_k(x). The sequence converges to zero since, by (3.81) with p = 1, we take a scalar product in L²(Ω) of two sequences, one converging to zero and one bounded. As functions of k, the expectations are bounded by Cλ_k. We conclude that S₁ → 0 as n → ∞.
Consider S₂:

$$S_2=\sum_{k=1}^\infty E\Big|\Big\langle\frac{d^2\varphi(X^x(t))}{dx^2}DX_n^x(t)\big(B(x)Q^{1/2}\big)\big(B(x)Q^{1/2}\big)^*e_k,\ DX_n^x(t)e_k\Big\rangle_H-\Big\langle\frac{d^2\varphi(X^x(t))}{dx^2}DX^x(t)\big(B(x)Q^{1/2}\big)\big(B(x)Q^{1/2}\big)^*e_k,\ DX^x(t)e_k\Big\rangle_H\Big|$$
$$\le\sum_{k=1}^\infty E\Big|\Big\langle\frac{d^2\varphi(X^x(t))}{dx^2}\Big(DX_n^x(t)\big(B(x)Q^{1/2}\big)\big(B(x)Q^{1/2}\big)^*e_k-DX^x(t)\big(B(x)Q^{1/2}\big)\big(B(x)Q^{1/2}\big)^*e_k\Big),\ DX^x(t)e_k\Big\rangle_H\Big|$$
$$\quad+\sum_{k=1}^\infty E\Big|\Big\langle\frac{d^2\varphi(X^x(t))}{dx^2}DX_n^x(t)\big(B(x)Q^{1/2}\big)\big(B(x)Q^{1/2}\big)^*e_k,\ DX^x(t)e_k-DX_n^x(t)e_k\Big\rangle_H\Big|,$$
which converges to zero by arguments similar to those used for the first term, but now we need to employ the fact that an analogue of the bound (3.81) holds for D²X_n^x(e_k,e_k):

$$\sup_n\ \sup_{\|y\|_H,\|z\|_H\le 1}\big\|D^2X_n^x(y,z)\big\|_{\tilde{\mathcal H}_{2p}}<\infty,\quad p\ge 1.\qquad(3.82)$$
The second equality holds since ũ(t,x) is a solution of (3.74) and since the terms containing A cancel.
Now we integrate over the interval [0,t] and take the expectation. Then we pass to the limit as n → ∞ (note that the operators R_n are uniformly bounded (see (1.20)); follow the argument provided in (6.20)). Finally, we use the initial condition to obtain

$$\tilde u(t,x)=E\varphi\big(X^x(t)\big).$$

This concludes the proof.
Proof The sequences F_n and B_n can be constructed as follows. Let {e_n}_{n=1}^∞ be an ONB in H. Denote

$$f_n(t)=\big(\langle x(t),e_1\rangle_H,\ \langle x(t),e_2\rangle_H,\ \dots,\ \langle x(t),e_n\rangle_H\big)\in\mathbb R^n,$$

let #_n(x)(t) = f_n(kT/n) at t = kT/n and be linear otherwise, and let

$$\gamma_n(t,x_0,\dots,x_n)=x_k\ \text{ at }t=\frac{kT}{n}\ \text{ and linear otherwise, with }x_k\in\mathbb R^n,\ k=0,1,\dots,n.$$

Define

$$F_n(t,x)=\int\cdots\int F\big(t,\big(\gamma_n(\cdot,x_0,\dots,x_n),e\big)\big)\exp\Big(-\frac{\varepsilon_n}{n}\sum_{k=0}^n|x_k|^2\Big)\prod_{k=0}^n g\Big(\frac{f_n(\frac{kT}{n}\wedge t)-x_k}{\varepsilon_n}\Big)\frac{dx_k}{\varepsilon_n}.\qquad(3.85)$$
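The construction (3.85) is, at heart, a mollification: convolving F with a scaled kernel produces smooth (hence locally Lipschitz) approximations converging to F uniformly on compacts. The sketch below is a one-dimensional analogue with ad hoc names (H = R, Gaussian kernel), smoothing the continuous but non-Lipschitz F(y) = |y|^{1/2}:

```python
import math

F = lambda y: math.sqrt(abs(y))   # continuous on R, not Lipschitz at 0

def mollify(F, eps, z_pts=2001, z_max=5.0):
    """F_eps(y) = int F(y - eps*z) g(z) dz with a standard normal kernel g,
    computed by a simple quadrature; F_eps is smooth for eps > 0."""
    dz = 2 * z_max / (z_pts - 1)
    zs = [-z_max + k * dz for k in range(z_pts)]
    ws = [math.exp(-z * z / 2) for z in zs]
    wsum = sum(ws)
    return lambda y: sum(w * F(y - eps * z) for z, w in zip(zs, ws)) / wsum

def sup_err(eps, y_pts=201):
    """sup |F_eps - F| on the compact set [-1, 1]."""
    Fe = mollify(F, eps)
    ys = [-1.0 + k * 0.01 for k in range(y_pts)]
    return max(abs(Fe(y) - F(y)) for y in ys)

e1, e2 = sup_err(0.1), sup_err(0.01)
# uniform convergence on compacts: |F_eps - F| <= E (eps|Z|)^{1/2} ~ 0.82 sqrt(eps)
assert e2 < e1
assert e2 < 0.1
```

The error is largest at the kink y = 0 and decays like √ε, mirroring the role of ε_n in (3.85).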
The coefficients Bn (t, x) are defined analogously using the ONB in KQ . We note
that conditions (A1)–(A4) are satisfied. To see that, note that the functions Fn and
Bn depend on a finite collection of variables fn (kT /n), and hence the arguments of
Gikhman and Skorokhod in [25] can be applied. We only need to verify the uniform
convergence on compact sets of C([0, T ], H ). We have
$$\sup_{0\le t\le T}\|F_n(t,x)-F(t,x)\|_H$$
$$\le\int\cdots\int\sup_{0\le t\le T}\big\|F\big(t,\big(\gamma_n(\cdot,f_n(0)+x^0,\dots,f_n(T\wedge t)+x^n),e\big)\big)-F\big(t,\big(\gamma_n(\cdot,f_n(0),\dots,f_n(T\wedge t)),e\big)\big)\big\|_H\prod_{k=0}^n g\Big(\frac{x^k}{\varepsilon_n}\Big)\frac{dx^k}{\varepsilon_n}$$
$$\quad+\sup_{0\le t\le T}\big\|F(t,x)-F\big(t,\big(\gamma_n(\cdot,f_n(0),\dots,f_n(T\wedge t)),e\big)\big)\big\|_H$$
$$\quad+\int\cdots\int\sup_{0\le t\le T}\big\|F\big(t,\big(\gamma_n(\cdot,x_0,\dots,x_n),e\big)\big)\big\|_H\Big(1-\exp\Big(-\frac{\varepsilon_n}{n}\sum_{k=0}^n|x_k|^2\Big)\Big)\prod_{k=0}^n g\Big(\frac{f_n(\frac{kT}{n}\wedge t)-x_k}{\varepsilon_n}\Big)\frac{dx_k}{\varepsilon_n}.$$

We will now verify convergence for each of the three components of the sum above.
We will now verify convergence for each of the three components of the sum
above.
Consider the first summand. If K ⊂ C([0,T],H) is a compact set, then the collection of functions {(γ_n(·,f_n(0),…,f_n(T)),e), n ≥ 1} ⊂ C([0,T],H), with f_n(t) = (⟨x(t),e₁⟩_H,…,⟨x(t),e_n⟩_H) and x ∈ K, is a subset of some compact set K₁. This follows from a characterization of compact sets in C([0,T],H) (see Lemma 3.14 in the Appendix) and from Mazur's theorem, which states that the closed convex hull of a compact set in a Banach space is compact. Moreover,

$$\sup_{x_0,\dots,x_n}\ \sup_{0\le t\le T}\big\|\gamma_n(t,x_0+z_0,\dots,x_n+z_n)-\gamma_n(t,x_0,\dots,x_n)\big\|_H<\varepsilon_n$$

if |z_k| ≤ ε_n, k = 0,…,n, z_k ∈ R^n.
The set K₁ + B(0,ε_n) is not compact (B(0,ε_n) denotes the ball of radius ε_n centered at 0), but sup_{0≤t≤T}‖F(t,u) − F(t,v)‖_H can still be made arbitrarily small if v is sufficiently close to u ∈ K₁. Indeed, given ε > 0, for every u ∈ K₁ and t ∈ [0,T], there exists δ_{ut} such that if sup_{0≤t≤T}‖v(t) − u(t)‖_H < δ_{ut}, then ‖F(t,v) − F(t,u)‖_H < ε/2. Because t lies in a compact set and F(t,u) is continuous in both variables, δ_u = inf_t δ_{ut} > 0. Therefore, for u ∈ K₁, sup_{0≤t≤T}‖F(t,v) − F(t,u)‖_H < ε/2 whenever sup_{0≤t≤T}‖v(t) − u(t)‖_H < δ_u.
We take a finite covering B(u_k,δ_{u_k}/2) of K₁ and let δ = min{δ_{u_k}/2}. If u ∈ K₁ and sup_{0≤t≤T}‖v(t) − u(t)‖_H < δ, then, for some k, sup_{0≤t≤T}‖u(t) − u_k(t)‖_H < δ_{u_k}/2 and sup_{0≤t≤T}‖v(t) − u_k(t)‖_H < δ + δ_{u_k}/2 ≤ δ_{u_k}. Therefore, sup_{0≤t≤T}‖F(t,v) − F(t,u)‖_H ≤ ε.
Thus, taking n sufficiently large and noticing that g(x^k/ε_n) vanishes if |x^k| ≥ ε_n, we get
sup_{0≤t≤T} ‖F(t, ⟨γ_n(·, f_n(0)+x^0, …, f_n(T∧t)+x^n), e⟩) − F(t, ⟨γ_n(·, f_n(0), …, f_n(T∧t)), e⟩)‖_H < ε
for any ε, independently of f_n associated with x ∈ K. This gives the uniform convergence to zero on K of the first summand.
Now we consider the second summand. Let P_n be the orthogonal projection onto the linear subspace spanned by {e_1, …, e_n}, and let P_n^⊥ denote the orthogonal projection onto the orthogonal complement of this space. We note that sup_{0≤t≤T} ‖P_n^⊥ x(t)‖_H → 0 as n → ∞; otherwise, there would be a sequence t_n → t_0 with ‖P_n^⊥ x(t_n)‖_H > c > 0, while ‖P_n^⊥ x(t_n)‖_H ≤ ‖x(t_n) − x(t_0)‖_H + ‖P_n^⊥ x(t_0)‖_H → 0, a contradiction.
Let N = N(x) be chosen such that if m ≥ N, then sup_{0≤t≤T} ‖P_m^⊥ x(t)‖_H < ε/3. Thus also sup_{0≤t≤T} ‖P_N^⊥ ⟨#_m(x)(t), e⟩‖_H < ε/3.
There exists M = M(x) ≥ N(x) such that for m ≥ M,
sup_{0≤t≤T} ‖x(t) − ⟨#_m(x)(t), e⟩‖_H ≤ sup_{0≤t≤T} ‖P_N^⊥ x(t)‖_H
  + sup_{0≤t≤T} ‖P_N x(t) − P_N ⟨#_m(x)(t), e⟩‖_H + sup_{0≤t≤T} ‖P_N^⊥ ⟨#_m(x)(t), e⟩‖_H < ε,
if m ≥ max{M(x_k)}.
The continuity of F and the fact that {⟨#_m(x), e⟩ : x ∈ K} ∪ K is a subset of a compact set guarantee the uniform convergence to zero of the second summand. The third summand converges uniformly to zero on compact sets since it is bounded by
(1 + ε_n + sup_{0≤t≤T} ‖x(t)‖_H) ε_n (sup_{0≤t≤T} ‖x(t)‖_H + ε_n)².
Exercise 3.14 Prove that Fn and Bn defined in (3.85) satisfy conditions (A1)–(A4).
We will now consider methods for proving the existence of solutions to SDEs
and SSDEs that are based on some compactness assumptions. In the case of SDEs,
we will require that the Hilbert space H be embedded compactly into some larger
Hilbert space. In the case of SSDEs, the compactness of the semigroup S(t) will be
imposed to guarantee the tightness of the laws of the approximate solutions.
These cases will be studied separately.
By the Itô formula,
‖ξ(t+h) − ξ(t)‖⁴_H = 4 ∫_t^{t+h} ‖ξ(u) − ξ(t)‖²_H ⟨ξ(u) − ξ(t), F(u, θ_u ξ)⟩_H du
  + 2 ∫_t^{t+h} ‖ξ(u) − ξ(t)‖²_H tr[(B(u, θ_u ξ)Q^{1/2})(B(u, θ_u ξ)Q^{1/2})^*] du
  + 2 ∫_t^{t+h} ⟨(B(u, θ_u ξ)Q^{1/2})(B(u, θ_u ξ)Q^{1/2})^*, (ξ(u) − ξ(t))^{⊗2}⟩ du
  + a stochastic-integral term with zero expectation.
3.8 Existence of Weak Solutions Under Continuity Assumption
By Lemma 3.6, this yields the following estimate for the fourth moment:
E‖ξ(t+h) − ξ(t)‖⁴_H ≤ C₁ ∫_t^{t+h} E( ‖ξ(u) − ξ(t)‖³_H (1 + sup_{v≤u} ‖ξ(v)‖_H) ) du
  + C₂ ∫_t^{t+h} E( ‖ξ(u) − ξ(t)‖²_H (1 + sup_{v≤u} ‖ξ(v)‖_H)² ) du
  ≤ C₃ ( ( ∫_t^{t+h} E‖ξ(u) − ξ(t)‖⁴_H du )^{3/4} h^{1/4} + ( ∫_t^{t+h} E‖ξ(u) − ξ(t)‖⁴_H du )^{1/2} h^{1/2} ) ≤ Ch.
Substituting repeatedly, starting with C(u − t) for E‖ξ_n(u) − ξ_n(t)‖⁴_{Rⁿ}, into the above inequality, we arrive at the desired result.
The next lemma is proved by Gikhman and Skorokhod in [25], Vol. I, Chap. III,
Sect. 4.
Lemma 3.11 The condition sup_n E‖ξ_n(t+h) − ξ_n(t)‖⁴_H ≤ Ch² implies that for any ε > 0,
lim_{δ→0} sup_n P( sup_{|t−s|<δ} ‖ξ_n(t) − ξ_n(s)‖_H > ε ) = 0.
Corollary 3.6 Let Fn and Bn satisfy conditions (A1) and (A3) with a common con-
stant in the growth condition (A3) (in particular Fn and Bn can be the approximat-
ing sequences from Lemma 3.9). Let Xn be a sequence of solutions to the following
SDEs:
dXn (t) = Fn (t, Xn ) dt + Bn (t, Xn ) dWt ,
Xn (0) = x ∈ H.
Then
(1) the sequence X_n is stochastically bounded, i.e., for every ε > 0, there exists M_ε satisfying
sup_n P( sup_{0≤t≤T} ‖X_n(t)‖_H > M_ε ) ≤ ε, (3.86)
It is known that even a weak solution Xt (·) ∈ C([0, T ], H ) to the SDE (3.8) may
not exist. Therefore our next step is to find a weak solution on a larger Hilbert space.
3 Stochastic Differential Equations
Let H₋₁ be a real separable Hilbert space such that the embedding J : H → H₋₁ is a compact operator with the representation
(J) Jx = Σ_{n=1}^∞ λ_n ⟨x, e_n⟩_H h_n, λ_n > 0, n = 1, 2, … .
In general, J always has a representation of this form; we are only assuming that λ_n ≠ 0. Here, {e_n}_{n=1}^∞ ⊂ H and {h_n}_{n=1}^∞ ⊂ H₋₁ are orthonormal bases. We will identify H with J(H) and P ∘ X^{−1} with P ∘ X^{−1} ∘ J^{−1}. Thus, if x ∈ H, we will also write x for Jx.
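The compactness mechanism in (J) can be seen in a small numerical sketch (our own illustration, not from the text): with λ_n = 1/n, the embedding damps high modes, so the image of a bounded set in H has uniformly small coordinate tails in H₋₁.

```python
import math

# Toy diagonal embedding Jx = sum_n lam_n <x, e_n> h_n on truncated
# sequence spaces; lam_n = 1/n -> 0 makes J compact. (Illustrative
# choice of lam_n; the text only requires lam_n != 0.)
def embed(x, lam):
    """Coordinates of Jx in the basis {h_n}."""
    return [l * c for l, c in zip(lam, x)]

n = 1000
lam = [1.0 / k for k in range(1, n + 1)]

# A unit vector in H concentrated on the highest mode:
x = [0.0] * n
x[n - 1] = 1.0

y = embed(x, lam)
tail_norm = math.sqrt(sum(c * c for c in y[100:]))
print(tail_norm)  # the high mode is damped down to lam_n = 1/1000
```

Every vector of the H unit ball is mapped into a set whose coordinates beyond index N are bounded by λ_N; such uniformly small tails are exactly what makes the image relatively compact in ℓ²-type spaces.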
Proof For any ε > 0, let Mε be the constant in condition (3.86) of Corollary 3.6.
The ball B(0, M_ε) ⊂ H of radius M_ε, centered at 0, is relatively compact in H₋₁. Denote by B̄ its closure in H₋₁; then
P( J X_n(t) ∉ B̄ ) ≤ P( ‖X_n(t)‖_H > M_ε ) < ε.
Condition (3.87) of Corollary 3.6 is also satisfied in H−1 . The relative compactness
now follows from the tightness criterion given in Theorem 3.17 in the Appendix,
and from Prokhorov’s theorem.
In order to construct a weak solution to the SDE (3.8) on C([0, T ], H−1 ), we impose
some regularity assumptions on the coefficients F and B with respect to the Hilbert
space H−1 .
Assume that F : [0, T ]×C([0, T ], H−1 ) → H−1 and B : [0, T ]×C([0, T ], H−1 )
→ L2 (KQ , H−1 ) satisfy the following conditions:
(B1) F and B are jointly measurable, and for every 0 ≤ t ≤ T , they are measurable
with respect to the σ -field C˜t on C([0, T ], H−1 ) generated by cylinders with
bases over [0, t].
(B2) F and B are jointly continuous.
(B3) There exists a constant ℓ₋₁ such that for all x ∈ C([0,T], H₋₁),
‖F(t,x)‖_{H₋₁} + ‖B(t,x)‖_{L₂(K_Q,H₋₁)} ≤ ℓ₋₁ ( 1 + sup_{0≤t≤T} ‖x(t)‖_{H₋₁} ),
for ω ∈ Ω and 0 ≤ t ≤ T.
Equation (3.8) is now considered in H−1 , and in the circumstances described above,
we can prove the existence of a weak solution to the SDE (3.8) on C([0, T ], H−1 ).
However we do not need all conditions (A1)–(A3) and (B1)–(B3) to hold simulta-
neously. We state the existence result as follows.
Theorem 3.12 Let H−1 be a real separable Hilbert space. Let the coefficients F
and B of the SDE (3.8) satisfy conditions (B1)–(B3). Assume that there exists a
Hilbert space H such that the embedding J : H → H−1 is a compact operator with
representation (J) and that F and B restricted to H satisfy
F : [0, T ] × C [0, T ], H → H,
B : [0, T ] × C [0, T ], H → L2 (KQ , H ),
and the linear growth condition (A3). Then the SDE (3.8) has a weak solution X(·) ∈
C([0, T ], H−1 ).
Proof Since the coefficients F and B satisfy assumptions (B1)–(B3), we can con-
struct approximating sequences Fn : [0, T ] × C([0, T ], H−1 ) → H−1 and Bn :
[0, T ]×C([0, T ], H−1 ) → L2 (KQ , H−1 ) as in Lemma 3.9. The sequences Fn → F
and Bn → B uniformly on compact subsets of C([0, T ], H−1 ).
Now we consider restrictions of the functions Fn , Bn to [0, T ] × C([0, T ], H ),
and we claim that they satisfy conditions (A1), (A3), and (A4). Let us consider
the sequence Fn only; similar arguments work for the sequence Bn . We adopt the
notation developed in Lemma 3.9.
If x ∈ C([0,T], H), then
F_n(t,x) = ∫···∫ F(t, ⟨γ_n(·, x_0, …, x_n), h⟩) exp(−(ε_n/n) Σ_{k=0}^n x_k²) ∏_{k=0}^n g( (f̃_n(kT/n ∧ t) − x_k)/ε_n ) dx_k/ε_n ∈ H,
where
f̃_n(t) = ( ⟨x(t), h_1⟩_{H₋₁}, …, ⟨x(t), h_n⟩_{H₋₁} ) = ( λ_1 ⟨x(t), e_1⟩_H, …, λ_n ⟨x(t), e_n⟩_H ) := λ f_n(t),
⟨γ_n(·, x_0, …, x_n), h⟩ = ⟨γ_n(·, x_0, …, x_n), (e_1/λ_1, …, e_n/λ_n)⟩ = ⟨γ_n(·, x_0/λ, …, x_n/λ), e⟩, with γ_n(·, x_0/λ, …, x_n/λ) ∈ C([0,T], Rⁿ),
and x_k/λ = ( x_{k1}/λ_1, …, x_{kn}/λ_n ) ∈ Rⁿ.
Let ε_n/λ_k < 1, k = 1, …, n. Then
F_n(t,x) = ∫···∫ F(t, ⟨γ_n(·, x_0/λ, …, x_n/λ), e⟩) exp(−(ε_n/n) Σ_{k=0}^n x_k²)
    × ∏_{k=0}^n g( λ(f_n(kT/n ∧ t) − x_k/λ)/ε_n ) dx_k/ε_n
  = ∫···∫ F(t, ⟨γ_n(·, y_0, …, y_n), e⟩) exp(−(ε_n/n) Σ_{k=0}^n (y_k λ)²)
    × ∏_{k=0}^n g( λ(f_n(kT/n ∧ t) − y_k)/ε_n ) dy_k/(ε_n/∏_{i=1}^n λ_i).
First, we observe that the F_n are measurable with respect to the product σ-field on [0,T] × C([0,T], H) because the F_n satisfy condition (B1), depend only on finitely many variables, and J^{−1}y = Σ_{k=1}^∞ (1/λ_k) ⟨y, h_k⟩_{H₋₁} e_k on J(H) is measurable from J(H) to H as a limit of measurable functions. The same argument justifies that the F_n are adapted to the family {C_t}_{t≤T}.
The linear growth condition (A3) is satisfied with a universal constant. Indeed,
‖F_n(t,x)‖_H ≤ ∫···∫ ‖F(t, ⟨γ_n(·, y_0, …, y_n), e⟩)‖_H ∏_{k=0}^n g( λ(f_n(kT/n ∧ t) − y_k)/ε_n ) dy_k/(ε_n/∏_{i=1}^n λ_i)
  ≤ ℓ ( 1 + sup_{0≤t≤T} ‖x(t)‖_H + max_{1≤k≤n}(ε_n/λ_k) ) ∫···∫ ∏_{k=0}^n g(z_k) dz_k
  ≤ ℓ′ ( 1 + sup_{0≤t≤T} ‖x(t)‖_H ).
Moreover, for the partial derivatives,
|∂F_n(t,x)/∂f_l(kT/n ∧ t)| ≤ ∫···∫ ℓ ( 1 + max_{0≤k≤n} |y_k| ) exp(−(ε_n/n) Σ_{k=0}^n (y_k λ)²) sup_z |g′(z)| (λ_l/ε_n) dy_l/(ε_n/∏_{i=1}^n λ_i)
    × ∏_{k=0, k≠l}^n g( λ(f_n(kT/n ∧ t) − y_k)/ε_n ) dy_k/(ε_n/∏_{i=1}^n λ_i)
  ≤ C(n, ε_n, λ_1, …, λ_n) ∫···∫ ( 1 + max_{0≤k≤n} |y_k| ) exp(−(ε_n/n) Σ_{k=0}^n (y_k λ)²) ∏_{k=0}^n dy_k < ∞.
Thus, Fn (t, x) has bounded partial derivatives, and hence it is a Lipschitz function
with respect to the variable x. Let Xn be a sequence of strong solutions to equations
X_n(t) = x + ∫_0^t F_n(s, X_n) ds + ∫_0^t B_n(s, X_n) dW_s.
Thus, there exists a compact set K̃_ε ⊂ C([0,T], H₋₁) such that lim sup_{n→∞} ν_n(K̃_ε^c) ≤ ε. Thus, because of the uniform convergence sup_{0≤s≤T} ‖F(s,x) − F_n(s,x)‖_{H₋₁} → 0 on compact subsets of C([0,T], H₋₁), it follows that
lim_{n→∞} ∫ ∫_t^{t+h} ⟨F(s,x) − F_n(s,x), u⟩_{H₋₁} g_t(x) ds μ_n(dx) = 0,
and the weak convergence of the measures μn , together with the uniform integra-
bility, implies that, as n → ∞,
∫ ⟨x(t+h) − x(t), u⟩_{H₋₁} g_t(x) dμ_n → ∫ ⟨x(t+h) − x(t), u⟩_{H₋₁} g_t(x) dμ (3.89)
and
∫ ∫_t^{t+h} ⟨F(s,x), u⟩_{H₋₁} g_t(x) ds dμ_n → ∫ ∫_t^{t+h} ⟨F(s,x), u⟩_{H₋₁} g_t(x) ds dμ. (3.90)
⟨y(t), u⟩²_{H₋₁} − ∫_0^t ⟨(B(s,x)Q^{1/2})(B(s,x)Q^{1/2})^*, u^{⊗2}⟩ ds
is a martingale, so that the quadratic variation is
⟨(y(·), u)_{H₋₁}⟩_t = ∫_0^t ⟨(B(s,x)Q^{1/2})(B(s,x)Q^{1/2})^*, u^{⊗2}⟩ ds.
3.9 Compact Semigroups and Existence of Martingale Solutions
In view of Theorem 2.7, the measure μ is a law of a C([0, T ], H−1 )-valued process
X(t), which is a weak solution to the SDE (3.8) (we can let Φ(s) = B(s, X) in
Theorem 2.7).
Lemma 3.12 Let p > 1 and 1/p < α ≤ 1. Consider the operator Gα defined
in (3.24),
(Gα f)(t) = ∫_0^t (t−s)^{α−1} S(t−s) f(s) ds, f ∈ L^p([0,T], H).
is relatively compact in C([0, T ], H ). We will show that conditions (1) and (2) of
Lemma 3.14, in the Appendix, hold, i.e., that for any fixed 0 ≤ t ≤ T , the set
Gα f (t) : f Lp ≤ 1 (3.92)
Then
(Gα^ε f)(t) = S(ε) ∫_0^{t−ε} (t−s)^{α−1} S(t−ε−s) f(s) ds.
Since S(ε) is compact, then so is Gεα . Let q = p/(p − 1). Now, using Hölder’s
inequality, we have
‖(Gα f)(t) − (Gα^ε f)(t)‖_H = ‖ ∫_{t−ε}^t (t−s)^{α−1} S(t−s) f(s) ds ‖_H
  ≤ ( ∫_{t−ε}^t (t−s)^{(α−1)q} ‖S(t−s)‖^q ds )^{1/q} ( ∫_{t−ε}^t ‖f(s)‖^p ds )^{1/p}
  ≤ M ( ε^{(α−1)q+1} / ((α−1)q+1) )^{1/q} ‖f‖_{L^p}.
For 0 ≤ s < t ≤ T,
‖(Gα f)(t) − (Gα f)(s)‖_H ≤ ‖ ∫_0^s [ (t−u)^{α−1} S(t−u) − (s−u)^{α−1} S(s−u) ] f(u) du ‖_H + ‖ ∫_s^t (t−u)^{α−1} S(t−u) f(u) du ‖_H
  ≤ ( ∫_0^T ‖ (v+u)^{α−1} S(v+u) − v^{α−1} S(v) ‖^q_{L(H)} dv )^{1/q} ‖f‖_{L^p} + M ( ∫_0^{t−s} v^{(α−1)q} dv )^{1/q} ‖f‖_{L^p}
  = I₁ + I₂, where u = t − s.
Since the integrand in I₁ converges to zero by the compactness of the semigroup (see (1.10)) and is bounded by (2M)^q v^{(α−1)q}, we conclude by the Lebesgue DCT that I₁ → 0 as u = t − s → 0. Also, the second term
I₂ ≤ M ( (t−s)^{α−1/p} / ((α−1)q+1)^{1/q} ) ‖f‖_{L^p} → 0
as t − s → 0. This concludes the proof.
Proof As in the proof of Theorem 3.12, we begin with a sequence of mild solutions
Xn to equations
X_n(t) = x + ∫_0^t ( A X_n(s) + F_n(s, X_n) ) ds + ∫_0^t B_n(s, X_n) dW_s.
with 1/(2p) < α < 1/2; then condition (A3) and inequality (3.94) imply that
E ∫_0^T ‖Y_n(s)‖^{2p}_H ds < C. (3.95)
Using the factorization technique, as in Lemma 3.3, we can express X_n(t) as follows:
X_n(t) = S(t)x + ∫_0^t S(t−s) F_n(s, X_n) ds + ∫_0^t S(t−s) B_n(s, X_n) dW_s
  = S(t)x + G₁ F_n(·, X_n)(t) + (sin πα / π) Gα Y_n(t),
where Gα : L^{2p}([0,T], H) → C([0,T], H), defined in (3.24), is a compact operator for 1/(2p) < α ≤ 1 by Lemma 3.12.
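The factor sin πα/π above comes from the elementary beta-integral identity ∫_σ^t (t−s)^{α−1}(s−σ)^{−α} ds = B(α, 1−α) = π/sin(πα) for 0 < α < 1, on which the factorization technique rests. A quick numerical check (our own sketch; the quadrature scheme is an illustrative choice):

```python
import math

def beta_factor(alpha, sigma, t, m=200000):
    """Midpoint quadrature of int_sigma^t (t-s)^(alpha-1) (s-sigma)^(-alpha) ds.
    The integrand is singular at both endpoints but integrable; the midpoint
    rule avoids evaluating at the singularities."""
    h = (t - sigma) / m
    total = 0.0
    for k in range(m):
        s = sigma + (k + 0.5) * h
        total += (t - s) ** (alpha - 1.0) * (s - sigma) ** (-alpha) * h
    return total

alpha = 0.3
val = beta_factor(alpha, sigma=0.0, t=1.0)
print(val, math.pi / math.sin(math.pi * alpha))  # both close to 3.88
```

The identity is independent of σ and t, which is why the same constant π/sin(πα) appears for every t in the factorization formula.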
Inequalities (3.94) and (3.95) and the growth condition (A3) imply that for any
ε > 0, there exists ν > 0 such that for all n ≥ 1,
P( { ( ∫_0^T ‖Y_n(s)‖^{2p}_H ds )^{1/(2p)} ≤ ν } ∩ { (π / sin πα) ( ∫_0^T ‖F_n(s, X_n)‖^{2p}_H ds )^{1/(2p)} ≤ ν } ) ≥ 1 − ε.
By the compactness of Gα and S(t) and the continuity of the mapping t → S(t)x,
we conclude that the set
K = { S(·)x + Gα f(·) + G₁ g(·) : ‖f‖_{L^{2p}} ≤ ν, ‖g‖_{L^{2p}} ≤ ν }
∫_0^t B_n(s, X_{n,m}) dW_s → ∫_0^t B_n(s, X_n) dW_s
a.s. in C([0,T], H), which implies that A_m ∫_0^t X_{n,m}(s) ds is an a.s. convergent sequence in C([0,T], H). Because
∫_0^t X_{n,m}(s) ds → ∫_0^t X_n(s) ds
is a martingale with respect to the family of σ -fields Fn (t) = σ (Xn (s), s ≤ t). Be-
cause the operator A is unbounded, we cannot justify direct passage to the limit with
n → ∞ in (3.97), as we did in Theorem 3.12, but we follow an idea outlined in [11].
Consider the processes
with λ > α (thus λ is in the resolvent set of A). The martingale Mn is square inte-
grable, and so is the martingale Nn,λ . The quadratic variation process of Nn,λ has
the form
⟨N_{n,λ}⟩_t = ∫_0^t [ (A−λI)^{−1} B_n(s, X_n) Q^{1/2} ][ (A−λI)^{−1} B_n(s, X_n) Q^{1/2} ]^* ds.
We note that the operator (A − λI )−1 A is bounded on D(A) and can be extended
to a bounded operator on H . We denote this extension by Aλ .
Observe that we are now in a position to repeat the proof of Theorem 3.12
in the current situation (the assumption concerning a compact embedding J in
Theorem 3.12 is unnecessary if the sequence of measures is weakly convergent,
which now is the case, and we carry out the proof for C([0, T ], H )-valued pro-
cesses). The coefficients Fnλ (s, x) = Aλ x(s) + (A − λI )−1 Fn (s, x) and Bnλ (s, x) =
(A − λI )−1 Bn (s, x) satisfy assumptions (A1)–(A4), and the coefficients F λ (s, x) =
Aλ x(s) + (A − λI )−1 F (s, x) and B λ (s, x) = (A − λI )−1 B(s, x) satisfy assump-
tions (A1)–(A3) with Fnλ and Bnλ converging to F λ and B λ , respectively, uniformly
on compact subsets of C([0, T ], H ).
Moreover, by inequality (3.94), we have the uniform integrability
E sup_{0≤t≤T} ‖N_{n,λ}(t)‖²_H ≤ ( M² / (λ−α)² ) E sup_{0≤t≤T} ‖M_n(t)‖²_H < ∞.
The representation theorem, Theorem 2.7, implies the existence of a Q-Wiener process W_t on a filtered probability space (Ω × Ω̃, F × F̃, {F_t × F̃_t}, P × P̃) such that
(A−λI)^{−1} X(t) − (A−λI)^{−1} x − ∫_0^t A_λ X(s) ds − ∫_0^t (A−λI)^{−1} F(s, X) ds = ∫_0^t (A−λI)^{−1} B(s, X) dW_s.
Consequently,
X(t) = x + ∫_0^t (A−λI) A_λ X(s) ds + ∫_0^t F(s, X) ds + ∫_0^t B(s, X) dW_s,
3.10 Mild Solutions to SSDEs Driven by Cylindrical Wiener Process
⟨X(t), u⟩_H = ⟨x, u⟩_H + ∫_0^t ⟨X(s), A^* u⟩_H ds + ∫_0^t ⟨F(s, X), u⟩_H ds + ∫_0^t ⟨u, B(s, X) dW_s⟩_H.
It follows by Theorem 3.2 that the process X(t) is a mild solution to (3.1).
We now present an existence and uniqueness result from [12], which later will be
useful in discussing an innovative method for studying invariant measures in the
case of a compact semigroup (see Sect. 7.4.3).
Let K and H be real separable Hilbert spaces, and let W̃_t be a cylindrical Wiener process in K defined on a complete filtered probability space (Ω, F, {F_t}_{t≤T}, P) with the filtration {F_t}_{t≤T} satisfying the usual conditions. We consider the following SSDE on [0,T] in H, with an F₀-measurable initial condition ξ:
dX(t) = (A X(t) + F(X(t))) dt + B(X(t)) dW̃_t,
X(0) = ξ. (3.98)
We are interested here only in mild solutions and, taking into account the assumption
(DZ3), we have the following definition.
Recall from Sect. 3.3 that H̃_{2p} denotes the Banach space of H-valued stochastic processes X, measurable as mappings from ([0,T] × Ω, B([0,T]) ⊗ F) to (H, B(H)), adapted to the filtration {F_t}_{t≤T}, and satisfying sup_{0≤s≤T} E‖X(s)‖^{2p}_H < ∞, with the norm
‖X‖_{H̃_{2p}} = ( sup_{0≤t≤T} E‖X(t)‖^{2p}_H )^{1/(2p)}.
Proof For p = 1, the result is just the isometric property (2.32) of the stochastic integral. For p > 1, let M̃(t) = ∫_0^t Φ(s) dW̃_s; we apply the Itô formula (2.61) to ‖M̃(t)‖^{2p}_H and, as in the proof of Lemma 3.1, obtain
E‖M̃(s)‖^{2p}_H ≤ p(2p−1) E ∫_0^s ‖M̃(u)‖^{2(p−1)}_H ‖Φ(u)‖²_{L₂(K,H)} du.
Consequently,
sup_{0≤t≤T} E‖M̃(t)‖^{2p}_H ≤ p(2p−1) ∫_0^T ( sup_{0≤s≤t} E‖M̃(s)‖^{2p}_H )^{(p−1)/p} ( E‖Φ(t)‖^{2p}_{L₂(K,H)} )^{1/p} dt
  ≤ p(2p−1) ( sup_{0≤t≤T} E‖M̃(t)‖^{2p}_H )^{(p−1)/p} ∫_0^T ( E‖Φ(t)‖^{2p}_{L₂(K,H)} )^{1/p} dt,
(b) If, in addition, condition (DZ4) holds, then the solution X(t) is continuous
P -a.s.
where we have used Lemma 3.13 to find a bound on the norm of the stochastic
integral. Next, we compute
E‖ ∫_0^t S(t−s)( F(X(s)) − F(Y(s)) ) ds + ∫_0^t S(t−s)( B(X(s)) − B(Y(s)) ) dW̃_s ‖^{2p}_H
  ≤ 2^{2p−1} ( E‖ ∫_0^t S(t−s)( F(X(s)) − F(Y(s)) ) ds ‖^{2p}_H + E‖ ∫_0^t S(t−s)( B(X(s)) − B(Y(s)) ) dW̃_s ‖^{2p}_H )
  ≤ 2^{2p−1} ( C₁ ∫_0^t E‖X(s) − Y(s)‖^{2p}_H ds + C₂ ( ∫_0^t K²(t−s) ( E‖X(s) − Y(s)‖^{2p}_H )^{1/p} ds )^p ).
Let L₁ = 2^{2p−1} max{C₁, C₂} and L₂ = 2L₁(1 + p ∫_0^T K²(t) dt). Let, as in the proof of Theorem 3.5, B denote the Banach space obtained from H̃_{2p} by modifying its norm to the equivalent norm
‖X‖_B = ( sup_{0≤t≤T} e^{−L₂t} E‖X(t)‖^{2p}_H )^{1/(2p)}.
Then,
‖Ĩ(X) − Ĩ(Y)‖^{2p}_B
  ≤ sup_{0≤t≤T} e^{−L₂t} L₁ ( ∫_0^t E‖X(s) − Y(s)‖^{2p}_H ds + ( ∫_0^t K²(t−s) ( E‖X(s) − Y(s)‖^{2p}_H )^{1/p} ds )^p )
  = sup_{0≤t≤T} e^{−L₂t} L₁ ( ∫_0^t ( e^{−L₂s} E‖X(s) − Y(s)‖^{2p}_H ) e^{L₂s} ds
      + ( ∫_0^t K²(t−s) e^{L₂s/p} ( e^{−L₂s} E‖X(s) − Y(s)‖^{2p}_H )^{1/p} ds )^p )
  ≤ sup_{0≤t≤T} e^{−L₂t} L₁ ( sup_{0≤s≤T} ( e^{−L₂s} E‖X(s) − Y(s)‖^{2p}_H ) ∫_0^t e^{L₂s} ds
      + ( ∫_0^t K²(t−s) e^{L₂s/p} ( sup_{0≤s≤T} e^{−L₂s} E‖X(s) − Y(s)‖^{2p}_H )^{1/p} ds )^p )
  ≤ L₁ ‖X − Y‖^{2p}_B sup_{0≤t≤T} e^{−L₂t} ( ∫_0^t e^{L₂s} ds + ( ∫_0^t K²(t−s) e^{L₂s/p} ds )^p )
  ≤ L₁ ‖X − Y‖^{2p}_B sup_{0≤t≤T} ( e^{−L₂t} (e^{L₂t} − 1)/L₂ + e^{−L₂t} ( ∫_0^t K²(s) ds )^p ( ∫_0^t e^{L₂s/p} ds )^p )
  ≤ L₁ ( (1 − e^{−L₂T})/L₂ + ( ∫_0^T K²(s) ds )^p ( p(1 − e^{−L₂T/p})/L₂ )^p ) ‖X − Y‖^{2p}_B
  ≤ C_B ‖X − Y‖^{2p}_B
with the constant C_B < 1.
Hence, I˜ is a contraction on B, and it has a unique fixed point, which is the
solution to (3.98).
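The role of the weighted norm ‖·‖_B can be isolated in a deterministic toy example (our own, with made-up constants; it is not the map Ĩ itself): the Volterra map (If)(t) = L ∫_0^t f(s) ds fails to contract in the plain sup norm when LT ≥ 1, but always contracts in the e^{−L₂t}-weighted norm once L₂ > L.

```python
import math

T, L, L2 = 1.0, 5.0, 10.0   # illustrative constants with L*T >= 1 and L2 > L
m = 2000
ts = [T * k / m for k in range(m + 1)]

def apply_I(f):
    """Volterra map (I f)(t) = L * int_0^t f(s) ds on the grid (trapezoid rule)."""
    out, acc = [0.0], 0.0
    for k in range(m):
        acc += L * 0.5 * (f[k] + f[k + 1]) * (T / m)
        out.append(acc)
    return out

def wnorm(f):
    """Weighted norm sup_t e^(-L2 t) |f(t)|, the analogue of ||.||_B."""
    return max(math.exp(-L2 * t) * abs(v) for t, v in zip(ts, f))

f = [1.0 for _ in ts]        # a "difference of two candidate solutions"
g = apply_I(f)
ratio = wnorm(g) / wnorm(f)
print(ratio)  # well below 1, while the plain sup-norm ratio is L*T = 5
```

Iterating I therefore converges in the weighted norm, exactly as the iterates Ĩⁿ(0) converge in B.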
To prove (3.103), note that
‖Ĩ(0)‖^{2p}_{H̃_{2p}} = sup_{0≤t≤T} E‖ S(t)ξ + ∫_0^t S(t−s) F(0) ds + ∫_0^t S(t−s) B(0) dW̃_s ‖^{2p}_H ≤ C₀ ( 1 + E‖ξ‖^{2p}_H )
for a suitable constant C₀.
Since the fixed point can be obtained as a limit, X = lim_{n→∞} Ĩⁿ(0) in B, using the equivalence of the norms in H̃_{2p} and B, we have
‖X‖_{H̃_{2p}} ≤ C₁ ( Σ_{n=1}^∞ ‖Ĩ^{n+1}(0) − Ĩⁿ(0)‖_B + ‖Ĩ(0)‖_{H̃_{2p}} )
  ≤ C₂ Σ_{n=0}^∞ C_B^n ‖Ĩ(0)‖_{H̃_{2p}}
  ≤ C ( 1 + E‖ξ‖^{2p}_H )^{1/(2p)}.
To prove part (b), the continuity of X(t), it is enough to show that the stochastic
convolution with respect to a cylindrical Wiener process,
S̃B(X)(t) = ∫_0^t S(t−s) B(X(s)) dW̃_s,
has a continuous version. Similarly as in the proof of Lemma 3.3, let 1/(2p) < α < 1/2 and define
Y(s) = ∫_0^s (s−σ)^{−α} S(s−σ) B(X(σ)) dW̃_σ.
Using the stochastic Fubini Theorem 2.3 for a cylindrical Wiener process, we have
∫_0^t S(t−s) B(X(s)) dW̃_s = (sin πα / π) ∫_0^t (t−s)^{α−1} S(t−s) Y(s) ds.
However,
∫_0^T E‖Y(s)‖^{2p}_H ds = ∫_0^T E‖ ∫_0^s (s−σ)^{−α} S(s−σ) B(X(σ)) dW̃_σ ‖^{2p}_H ds
  ≤ C ∫_0^T E( ∫_0^s (s−σ)^{−2α} ‖S(s−σ) B(X(σ))‖²_{L₂(K,H)} dσ )^p ds
  ≤ C ∫_0^T E( ∫_0^s (s−σ)^{−2α} K²(s−σ) (1 + ‖X(σ)‖_H)² dσ )^p ds
  ≤ C ( 1 + ‖X‖^{2p}_{H̃_{2p}} ) ∫_0^T ( ∫_0^s (s−σ)^{−2α} K²(s−σ) dσ )^p ds < ∞.
Since the process Y(t) has almost surely 2p-integrable paths, Lemma 3.2 implies that
S̃B(X)(t) = (sin πα / π) Gα Y(t)
has a continuous version.
Proposition 3.3 The solution X^ξ(t) of (3.98), as a function of the initial condition ξ, is continuous as a mapping from L^{2p}(Ω, H), p ≥ 1, into itself, and there exists a constant C such that for ξ, η ∈ L^{2p}(Ω, H),
E‖X^ξ(t) − X^η(t)‖^{2p}_H ≤ C E‖ξ − η‖^{2p}_H. (3.105)
Proof Using assumptions (DZ2) and (DZ3), Lemma 3.13, and Exercise 3.7, we
calculate
E‖X^ξ(t) − X^η(t)‖^{2p}_H ≤ C ( E‖ξ − η‖^{2p}_H + ∫_0^t E‖X^ξ(s) − X^η(s)‖^{2p}_H ds
    + E( ∫_0^t K²(t−s) ‖X^ξ(s) − X^η(s)‖²_H ds )^p )
  ≤ C ( E‖ξ − η‖^{2p}_H + ∫_0^t E‖X^ξ(s) − X^η(s)‖^{2p}_H ds
    + ( ∫_0^t K²(t−s) ( E‖X^ξ(s) − X^η(s)‖^{2p}_H )^{1/p} ds )^p )
  ≤ C ( E‖ξ − η‖^{2p}_H + ∫_0^t E‖X^ξ(s) − X^η(s)‖^{2p}_H ds
    + ( ∫_0^t K²(s) ds )^{p−1} ∫_0^t K²(t−s) E‖X^ξ(s) − X^η(s)‖^{2p}_H ds ),
and (3.105) follows by applying Gronwall's lemma with the integrable kernel C(1 + K²(t−s)).
Proposition 3.4 The solution of (3.98) is a homogeneous Markov and Feller pro-
cess.
We omit the proof since it follows nearly word by word the proof of Theorem 3.6
and the discussion in the remainder of Sect. 3.5 with the σ -field F W,ξ being re-
placed by
F_t^{W̃,ξ} = σ( ⋃_{j=1}^∞ σ( W̃_s(f_j), s ≤ t ) ∪ σ(ξ) ).
for all n.
Proof The compactness of A implies condition (1) easily. Condition (2) follows from the fact that the function w(x, δ) is a continuous function of x, monotonically decreasing to 0 as δ → 0. By Dini's theorem, w(x, δ) converges to 0 uniformly on the compact set A.
If conditions (1) and (2) hold, then let xn (t) be a sequence of elements in A.
We form a sequence t1 , t2 , . . . from all elements of T and select a convergent
subsequence xnk1 (t1 ). From this subsequence we select a convergent subsequence
xnk2 (t2 ), etc. Using the diagonal method, we construct a subsequence xnk such that
xnk (t) converges for every t ∈ T . Denote the limit by y(t) and let yk (t) = xnk (t).
Then yk (t) → y(t) pointwise on T . For any ε > 0, let δ be chosen, so that
supx∈A w(x, δ) < ε/3. Let t1 , . . . , tN be such that the length of each of the inter-
vals [0, t1 ], [t1 , t2 ], . . . , [tN , T ] is less than δ. Then
sup_{0≤t≤T} ρ_M( y_k(t), y_l(t) ) ≤ sup_{1≤i≤N} ρ_M( y_k(t_i), y_l(t_i) )
  + sup_{|t−t_i|≤δ} ( ρ_M( y_k(t_i), y_k(t) ) + ρ_M( y_l(t_i), y_l(t) ) ) < ε
for k and l sufficiently large, so that {y_k} converges uniformly.
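The modulus of continuity w(x, δ) used throughout this argument can be computed directly on sampled paths; the following grid-based Python sketch (our own helper, with an illustrative path) shows the monotone decrease to 0 as δ → 0 that Dini's theorem upgrades to uniformity on compact sets.

```python
import math

def modulus(path, dt, delta):
    """Grid version of w(x, delta) = sup_{|t-s|<delta} |x(t) - x(s)|
    for a real-valued path sampled at times k*dt."""
    m = int(round(delta / dt))  # compare points at most m grid steps apart
    n = len(path)
    return max(
        (abs(path[i] - path[j])
         for i in range(n) for j in range(i + 1, min(n, i + m + 1))),
        default=0.0,
    )

dt = 1.0 / 200
path = [math.sin(2 * math.pi * k * dt) for k in range(201)]  # smooth path on [0, 1]
w_small = modulus(path, dt, 0.01)
w_large = modulus(path, dt, 0.1)
print(w_small, w_large)  # w is monotone in delta and small for small delta
```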
Proof Conditions (1) and (2) follow easily from the tightness assumption and Lemma 3.14. Conversely, let {t_k}_{k∈Z₊} be a countable dense set in [0,T]. For each k, we can find a compact set C_k ⊂ M and δ_k > 0 such that sup_n P_n( x(t_k) ∉ C_k ) ≤ ε/2^{k+2} (by Prokhorov's theorem) and, for some n₀, sup_{n≥n₀} P_n( w(x, δ_k) > 1/k ) ≤ ε/2^{k+2}. Let
K_ε = ( ⋃_{k>n₀} ( {x(t_k) ∉ C_k} ∪ {w(x, δ_k) > 1/k} ) )^c = ⋂_{k>n₀} ( {x(t_k) ∈ C_k} ∩ {w(x, δ_k) ≤ 1/k} ).
The set Kε satisfies the assumptions of Lemma 3.14; therefore it has a compact
closure. Moreover,
sup_{n≥n₀} P_n(K_ε^c) ≤ Σ_{k=0}^∞ 2·ε/2^{k+2} = ε ⟹ inf_{n≥n₀} P_n(K_ε) ≥ 1 − ε.
Chapter 4
Solutions by Variational Method
4.1 Introduction
The purpose of this chapter is to study both weak and strong solutions of nonlinear
stochastic partial differential equations, or SPDEs. The first work in this direction
was done by Viot [75]. Since then, Pardoux [62] and Krylov and Rozovskii [42]
studied strong solutions of nonlinear SPDEs. We will utilize the recent publication
by Prévôt and Röckner [64] to study strong solutions.
In all these publications, the SPDEs are recast as evolution equations in a Gelfand triplet,
V ↪ H ↪ V*,
where H is a real separable Hilbert space identified with its dual H*. The space V is a Banach space embedded continuously and densely in H. Then, for its dual space V*, the embedding H ↪ V* is continuous and dense, and V* is necessarily separable. The norm in V is denoted by ‖·‖_V, and similarly for the spaces H and V*. The duality on V × V* is denoted by ⟨·,·⟩, and it agrees with the scalar product in H, i.e., ⟨v, h⟩ = ⟨v, h⟩_H if h ∈ H.
By using the method of compact embedding of Chap. 3, the ideas from [36], and the stochastic analogue of Lions' theorem from [42], we show the existence of a weak solution X in the space C([0,T], H) ∩ L^∞([0,T], H) ∩ L²([0,T] × Ω, V)
such that
E sup_{t∈[0,T]} ‖X(t)‖²_H < ∞, (4.1)
under the assumption that the injection V → H is compact without using the as-
sumption of monotonicity. In the presence of monotone coefficients, as in [36], we
obtain a unique strong solution using pathwise uniqueness.
The approach in [64] is to consider monotone coefficients. Under weakened assumptions on growth and without assuming compact embedding, using again the stochastic analogue of Lions' theorem, a unique strong solution is produced in C([0,T], H) ∩ L²([0,T] × Ω, V), which again satisfies (4.1). We will present this method in Sect. 4.3.
Let the coefficients A and B satisfy the following joint continuity and growth
conditions:
(JC) (Joint Continuity) The mappings
are continuous.
For some constant θ ≥ 0,
(G-A) ‖A(t,v)‖²_{V*} ≤ θ(1 + ‖v‖²_H), v ∈ V. (4.4)
(G-B) ‖B(t,v)‖²_{L₂(K_Q,H)} ≤ θ(1 + ‖v‖²_H), v ∈ V. (4.5)
and the coefficients aⁿ(t,x) : [0,T] × Rⁿ → Rⁿ, bⁿ(t,x) : [0,T] × Rⁿ → Rⁿ × Rⁿ, and σⁿ(t,x) : [0,T] × Rⁿ → Rⁿ × Rⁿ and the initial condition ξ₀ⁿ by
(aⁿ(t,x))_j = ⟨ϕ_j, A(t, J_n x)⟩, 1 ≤ j ≤ n,
(bⁿ(t,x))_{i,j} = ⟨Q^{1/2} B^*(t, J_n x) ϕ_i, f_j⟩_K, 1 ≤ i, j ≤ n,
(σⁿ(t,x))_{i,j} = ( bⁿ(t,x) (bⁿ(t,x))^T )_{i,j}, (4.8)
(ξ₀ⁿ)_j = ⟨ξ₀, ϕ_j⟩_H.
Note that
(σⁿ(t,x))_{i,j} = Σ_{k=1}^n (bⁿ(t,x))_{i,k} (bⁿ(t,x))_{j,k}
  = Σ_{k=1}^n ⟨Q^{1/2} B^*(t, J_n x) ϕ_i, f_k⟩_K ⟨Q^{1/2} B^*(t, J_n x) ϕ_j, f_k⟩_K.
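To make (4.8) concrete, here is a small Python sketch (our own illustrative example; the operator and basis are assumptions, not taken from the text): for A = d²/du² on (0,1) with Dirichlet boundary conditions and ϕ_j(u) = √2 sin(jπu), the Galerkin drift is diagonal, (aⁿ(x))_j = ⟨ϕ_j, A J_n x⟩ = −(jπ)² x_j, and a quadrature cross-check recovers it.

```python
import math

def galerkin_drift(x):
    """(a^n(x))_j = <phi_j, A J_n x> for A = d^2/du^2 and phi_j = sqrt(2) sin(j pi u):
    the projected drift is diagonal, -(j*pi)^2 * x_j."""
    return [-(j * math.pi) ** 2 * xj for j, xj in enumerate(x, start=1)]

def drift_by_quadrature(x, m=4000):
    """Cross-check <phi_j, (J_n x)''> by midpoint quadrature."""
    n = len(x)
    out = []
    for j in range(1, n + 1):
        s = 0.0
        for i in range(m):
            u = (i + 0.5) / m
            # (J_n x)''(u) = sum_k x_k * phi_k''(u)
            ddJx = sum(-(k * math.pi) ** 2 * xk * math.sqrt(2.0) * math.sin(k * math.pi * u)
                       for k, xk in enumerate(x, start=1))
            s += math.sqrt(2.0) * math.sin(j * math.pi * u) * ddJx / m
        out.append(s)
    return out

x = [0.5, -0.25, 0.1]
exact = galerkin_drift(x)
approx = drift_by_quadrature(x)
print(exact)
```

The diagonality is special to this eigenbasis choice; for a general A the matrix ⟨ϕ_j, A(t, J_n x)⟩ is full, but it is computed by the same projections.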
Lemma 4.1 The growth conditions (4.4) and (4.5) assumed for the coefficients A and B imply the following growth conditions on aⁿ and bⁿ:
‖aⁿ(t,x)‖²_{Rⁿ} ≤ θ_n ( 1 + ‖x‖²_{Rⁿ} ), (4.9)
tr σⁿ(t,x) = tr( bⁿ(t,x) (bⁿ(t,x))^T ) ≤ θ ( 1 + ‖x‖²_{Rⁿ} ). (4.10)
In particular, for a large enough value of θ, the coercivity condition (4.6) implies that
2⟨aⁿ(t,x), x⟩_{Rⁿ} + tr( bⁿ(t,x) (bⁿ(t,x))^T ) ≤ θ ( 1 + ‖x‖²_{Rⁿ} ). (4.12)
The constant θ_n depends on n, but θ does not.
The distribution μ₀ⁿ of ξ₀ⁿ on Rⁿ satisfies
E( ‖ξ₀ⁿ‖²_{Rⁿ} ln²( 3 + ‖ξ₀ⁿ‖²_{Rⁿ} ) ) < c₀. (4.13)
Exercise 4.1 Prove Lemma 4.1. In addition, show that for k ≥ n and x ∈ Rᵏ, the following estimate holds true:
Σ_{j=1}^n ( (aᵏ(t,x))_j )² ≤ θ_n ( 1 + ‖x‖²_{Rᵏ} ). (4.14)
We will need the following result, Theorem V.3.10 in [17], on the existence of a
weak solution. Consider the following finite-dimensional SDE,
dX(t) = a(t, X(t)) dt + b(t, X(t)) dB_t^n, (4.15)
with an Rⁿ-valued F₀-measurable initial condition ξ₀ⁿ. Here B_t^n is a standard Brownian motion in Rⁿ.
4.2 Existence of Weak Solutions Under Compact Embedding
We will use the ideas developed in [70], Sect. 1.4, for proving the compactness of
probability measures on C([0, T ], Rn ). The method was adapted to the specific case
involving linear growth and coercivity conditions in [36]. Our first step in proving
the existence result in the variational problem will be establishing the existence and
properties of finite-dimensional Galerkin approximations in the following lemma.
Lemma 4.2 Assume that the coefficients A and B of (4.2) satisfy the assumptions
of joint continuity (4.3), growth (4.4), (4.5), and coercivity (4.6) and that the initial
condition ξ0 satisfies (4.7). Let a n , bn , and ξ0n be defined as in (4.8), and Btn be an
n-dimensional standard Brownian motion. Then the finite-dimensional equation
dX(t) = aⁿ(t, X(t)) dt + bⁿ(t, X(t)) dB_t^n (4.17)
with the initial condition ξ₀ⁿ has a weak solution Xⁿ(t) in C([0,T], Rⁿ). The laws μⁿ = P ∘ (Xⁿ)^{−1} have the property that for any R > 0,
sup_n μⁿ{ x ∈ C([0,T], Rⁿ) : sup_{0≤t≤T} ‖x(t)‖_{Rⁿ} > R } ≤ 2c₀ e^{C(θ)T} [ (1 + R²) ln²(3 + R²) ]^{−1} (4.18)
and that
sup_n ∫_{C([0,T],Rⁿ)} sup_{0≤t≤T} ( 1 + ‖x(t)‖²_{Rⁿ} ) ln ln( 3 + ‖x(t)‖²_{Rⁿ} ) μⁿ(dx) < C (4.19)
Proof Since the coefficients aⁿ and bⁿ satisfy conditions (4.9) and (4.10), we can use Theorem 4.1 to construct a weak solution Xⁿ(t) to (4.17) for every n. Let
f(x) = ( 1 + ‖x‖²_{Rⁿ} ) ln²( 3 + ‖x‖²_{Rⁿ} ), x ∈ Rⁿ,
and let L_t^n denote the generator of (4.17),
(L_t^n g)(x) = Σ_{i=1}^n (∂g/∂x_i)(x) (aⁿ(t,x))_i + (1/2) Σ_{i=1}^n Σ_{j=1}^n (∂²g/∂x_i∂x_j)(x) (σⁿ(t,x))_{i,j}.
and hence,
E sup_{0≤s≤t} f( Xⁿ(s ∧ τ_R) ) ≤ 2c₀ + (2C + 16Cθ) ∫_0^t E sup_{0≤r≤s} f( Xⁿ(r ∧ τ_R) ) ds.
Then
P( sup_{0≤s≤t} ‖Xⁿ(s)‖_{Rⁿ} > R ) ≤ E sup_{0≤s≤t} f( Xⁿ(s ∧ τ_R) ) / f(R) ≤ 2c₀ e^{C(θ)T} [ (1 + R²) ln²(3 + R²) ]^{−1},
∫_{C([0,T],Rⁿ)} sup_{0≤t≤T} ( 1 + ‖x(t)‖²_{Rⁿ} ) ln ln( 3 + ‖x(t)‖²_{Rⁿ} ) μⁿ(dx)
  = ∫_{C([0,T],Rⁿ)} sup_{0≤t≤T} g( ‖x(t)‖_{Rⁿ} ) μⁿ(dx)   (here g(r) = (1 + r²) ln ln(3 + r²))
  = ∫_0^∞ μⁿ( sup_{0≤t≤T} g( ‖x(t)‖_{Rⁿ} ) > p ) dp
  = ln ln 3 + ∫_{ln ln 3}^∞ μⁿ( sup_{0≤t≤T} ‖x(t)‖_{Rⁿ} > g^{−1}(p) ) dp
  ≤ ln ln 3 + ∫_0^∞ μⁿ( sup_{0≤t≤T} ‖x(t)‖_{Rⁿ} > r ) g′(r) dr
  ≤ ln ln 3 + 2c₀ e^{C(θ)T} ∫_0^∞ g′(r) / ( (1 + r²) ln²(3 + r²) ) dr < ∞,
with the very last inequality being left to prove for the reader in Exercise 4.3.
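The passage from the tail estimate to the moment estimate above is a layer-cake computation: for a nonnegative random variable X and a smooth increasing g, E g(X) = g(0) + ∫_0^∞ P(X > r) g′(r) dr. A toy numerical check with the g of this proof (our own sketch; the discrete distribution is made up):

```python
import math

def g(r):
    """g(r) = (1 + r^2) * ln ln(3 + r^2), as in the proof above."""
    return (1.0 + r * r) * math.log(math.log(3.0 + r * r))

def gprime(r):
    u = 3.0 + r * r
    return 2.0 * r * math.log(math.log(u)) + (1.0 + r * r) * 2.0 * r / (u * math.log(u))

# X uniform on {0.1, 0.2, ..., 1.0}
xs = [k / 10.0 for k in range(1, 11)]
lhs = sum(g(v) for v in xs) / len(xs)          # E g(X)

# g(0) + int_0^1 P(X > r) g'(r) dr by midpoint quadrature
m = 50000
h = 1.0 / m
rhs = g(0.0)
for i in range(m):
    r = (i + 0.5) * h
    p = sum(1 for v in xs if v > r) / len(xs)  # tail probability P(X > r)
    rhs += p * gprime(r) * h
print(lhs, rhs)  # the two sides agree
```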
We will need the following lemma from [36] (see also Sect. 1.4 in [70]).
Lemma 4.3 Consider the filtered probability space (C([0, T ], Rn ), B, {Ct }0≤t≤T ,
P ), where B is the Borel σ -field, and Ct is the σ -field generated by the cylin-
ders with bases over [0, t]. Let the coordinate process mt be a square-integrable
C_t-martingale with quadratic variation ⟨m⟩_t satisfying
‖⟨m⟩_t − ⟨m⟩_s‖ = tr( ⟨m⟩_t − ⟨m⟩_s ) ≤ β(t − s)
for some constant β and all 0 ≤ s < t ≤ T. Then for all ε, η > 0, there exists δ > 0, depending possibly on β, ε, η, and T, but not on n, such that
P( sup_{|t−s|<δ} ‖m_t − m_s‖_{Rⁿ} > ε ) < η.
We are now going to find bounds for the probabilities in the last line of (4.25).
We have, P -a.s.,
≤ (64/ε²) E( ‖m_{t+τ_{j−1}} − m_{τ_{j−1}}‖²_{Rⁿ} | F_{τ_j} )
≤ (64/ε²) E( tr⟨m⟩_{t+τ_{j−1}} − tr⟨m⟩_{τ_{j−1}} | F_{τ_j} )
≤ 64βt/ε²,
where we have used properties of the regular conditional probability distribution in
the last two lines. Next, for t > 0, P -a.s.,
E( e^{−(τ_j − τ_{j−1})} | F_{τ_{j−1}} )
  ≤ P( τ_j − τ_{j−1} < t | F_{τ_{j−1}} ) + e^{−t} P( τ_j − τ_{j−1} ≥ t | F_{τ_{j−1}} )
  ≤ e^{−t} + (1 − e^{−t}) P( τ_j − τ_{j−1} < t | F_{τ_{j−1}} )
  ≤ e^{−t} + (1 − e^{−t}) 64βt/ε² = λ < 1
for a suitable choice of t. Hence, E(e^{−τ_j}) ≤ λ E(e^{−τ_{j−1}}) ≤ ··· ≤ λ^j, so that
P(N > k) = P(τ_k < T) ≤ P( e^{−τ_k} > e^{−T} ) ≤ e^T λ^k < η/2
for k large enough, depending on T , λ, and η. Finally,
P( τ_j − τ_{j−1} < δ for some j ≤ k ) ≤ Σ_{j=1}^k P( τ_j − τ_{j−1} < δ ) ≤ k · 64βδ/ε² < η/2
for δ small enough, depending on k, β, ε, and η. Combining the last two inequalities
proves the lemma.
Exercise 4.5 Let (Ω, F, {F_t}_{0≤t≤T}, P) be a filtered probability space, with Ω a Polish space and F its Borel σ-field. Assume that M_t is an Rⁿ-valued continuous
martingale and τ is a stopping time.
Show that Mt − Mt∧τ is an Ft -martingale with respect to the conditional prob-
ability P τ (·, ω), except possibly for ω in a set E of P -measure zero.
Hint: prove that
∫_{B∩{τ≤s}} ( ∫_A m_t(ω′) dP^τ(ω′, ω) ) dP(ω) = ∫_{B∩{τ≤s}} ( ∫_A m_s(ω′) dP^τ(ω′, ω) ) dP(ω)
Theorem 4.2 Let the coefficients A and B of (4.2) satisfy conditions (4.3), (4.4),
(4.5), and (4.6). Consider the family of measures μn∗ on C([0, T ], V ∗ ) with support
in C([0, T ], H ), defined by
μⁿ_*(Y) = μⁿ{ x ∈ C([0,T], Rⁿ) : Σ_{i=1}^n x_i(·)ϕ_i ∈ Y }, Y ⊂ C([0,T], V*),
where μⁿ are the measures constructed in Lemma 4.2. Assume that the embedding H ↪ V* is compact. Then the family of measures {μⁿ_*}_{n=1}^∞ is tight on C([0,T], V*).
Proof We will use Theorem 3.17. Denote by BC([0,T ],H ) (R) ⊂ C([0, T ], H ) the
closed ball of radius R centered at the origin. By the definition of measures μn∗ and
Lemma 4.2, for any η > 0, we can choose R > 0 such that
μⁿ_*( ( B_{C([0,T],H)}(R) )^c ) = μⁿ{ x ∈ C([0,T], Rⁿ) : sup_{0≤t≤T} ‖ Σ_{i=1}^n x_i(t)ϕ_i ‖_H > R }
  = μⁿ{ x ∈ C([0,T], Rⁿ) : sup_{0≤t≤T} ‖x(t)‖_{Rⁿ} > R } < η.
Denote the closed ball of radius R centered at zero in H by B_H(R). Then its closure in V*, denoted by B̄_H(R)^{V*}, is a compact subset of V*, and we have
μⁿ_* ∘ x(t)^{−1}( B̄_H(R)^{V*} ) ≥ 1 − η, 0 ≤ t ≤ T,
Recall the modulus of continuity (3.106) and indicate the space, e.g., V ∗ , in the
subscript in wV ∗ (x, δ) if the V ∗ norm is to be used. Then, with BC([0,T ],Rn ) (R)
denoting the closed ball with radius R centered at the origin in C([0, T ], Rn ) and
n > n0 ,
μⁿ_*{ x ∈ C([0,T], V*) : x ∈ B_{C([0,T],H)}(R), w_{V*}(x, δ) > ε }
  ≤ μⁿ{ x ∈ B_{C([0,T],Rⁿ)}(R) : w_{V*}( Σ_{j=1}^n x(·)_j ϕ_j, δ ) > ε }
  ≤ μⁿ{ x ∈ B_{C([0,T],Rⁿ)}(R) : sup_{0≤s,t≤T, |s−t|<δ} ‖ Σ_{j=1}^{n₀} ( x(t)_j − x(s)_j ) ϕ_j ‖_{V*}
      + sup_{0≤s,t≤T, |s−t|<δ} ‖ Σ_{j=n₀+1}^n ( x(t)_j − x(s)_j ) ϕ_j ‖_{V*} > ε }
  ≤ μⁿ{ x ∈ B_{C([0,T],Rⁿ)}(R) : C sup_{0≤s,t≤T, |s−t|<δ} ‖ Σ_{j=1}^{n₀} ( x(t)_j − x(s)_j ) ϕ_j ‖_H + ε/4 > ε }
  = μⁿ{ x ∈ B_{C([0,T],Rⁿ)}(R) : sup_{0≤s,t≤T, |s−t|<δ} ( Σ_{j=1}^{n₀} ( x(t)_j − x(s)_j )² )^{1/2} > 3ε/(4C) }.
with the function tr( bⁿ(s, x(s)) (bⁿ(s, x(s)))^T ) bounded on bounded subsets of Rⁿ uniformly relative to the variable s, due to condition (4.10). Hence, for t ≥ s,
‖⟨mᴿ(x)⟩_t − ⟨mᴿ(x)⟩_s‖ = tr( ⟨mᴿ(x)⟩_t − ⟨mᴿ(x)⟩_s ) ≤ β(R)(t − s)
with the constant β(R) not depending on n. Now, by Lemma 4.3, we have
μⁿ{ x ∈ C([0,T], Rⁿ) : w_{Rⁿ}( mᴿ(x), δ ) > ε/(2C) } < η (4.28)
+ w_{R^{n₀}}( ∫_0^· aⁿ(s, x(s)) ds, δ ) > 3ε/(4C) }
  ≤ μⁿ{ x ∈ B_{C([0,T],Rⁿ)}(R) : w_{Rⁿ}(mⁿ, δ) > ε/(2C) }
  ≤ μⁿ( w_{Rⁿ}(mᴿ, δ) > ε/(2C) ) ≤ η.
Summarizing, for any ε, η > 0 and sufficiently small δ > 0, there exists n0 such that
for n > n0 ,
μⁿ_*{ x ∈ C([0,T], V*) : w_{V*}(x, δ) > ε }
  ≤ μⁿ_*( ( B_{C([0,T],H)}(R) )^c ) + μⁿ_*{ x ∈ B_{C([0,T],H)}(R) : w_{V*}(x, δ) > ε } ≤ 2η,
We will now summarize the desired properties of the measures μn and μn∗ .
implying the uniform integrability of ‖Xⁿ‖²_{Rⁿ}. The ‖·‖_H norm of x(t) satisfies the following properties:
∫_{C([0,T],V*)} sup_{0≤t≤T} ‖x(t)‖²_H ln ln( 3 + ‖x(t)‖²_H ) μⁿ_*(dx)
  = E sup_{0≤t≤T} ‖J_n Xⁿ(t)‖²_H ln ln( 3 + ‖J_n Xⁿ(t)‖²_H ) < C, (4.30)
and also,
μ_*{ x ∈ C([0,T], V*) : sup_{0≤t≤T} ‖x(t)‖_H < ∞ } = 1. (4.33)
and
∫_{C([0,T],V*)} ∫_0^T ‖x(t)‖²_V dt μ_*(dx) < ∞. (4.35)
Proof Property (4.29) is just (4.19), and inequality (4.30) is just a restatement of (4.29).
To prove (4.31), assume, using the Skorokhod theorem, that J_n Xⁿ → X a.s. in C([0,T], V*). We introduce the function α_H : V* → R by
α_H(u) = sup{ ⟨v, u⟩ : v ∈ V, ‖v‖_H ≤ 1 }.
Property (4.32) follows from the Markov inequality, and (4.33) is a consequence
of (4.31). To prove (4.34), we apply the Itô formula and (4.11) to obtain that
$$\begin{aligned}
\mathbb E\bigl\|J_nX^n(t)\bigr\|_H^2 &= \mathbb E\bigl\|J_n\xi_0^n\bigr\|_H^2 + 2\,\mathbb E\int_0^t \bigl\langle a^n\bigl(s,X^n(s)\bigr), X^n(s)\bigr\rangle_{\mathbb R^n}\,ds + \mathbb E\int_0^t \operatorname{tr}\bigl(b^n\bigl(s,X^n(s)\bigr)\,b^n\bigl(s,X^n(s)\bigr)^T\bigr)\,ds\\
&\le \mathbb E\|J_n\xi_0\|_H^2 + \lambda\int_0^t \mathbb E\bigl\|J_nX^n(s)\bigr\|_H^2\,ds - \alpha\int_0^t \mathbb E\bigl\|J_nX^n(s)\bigr\|_V^2\,ds + \gamma t,
\end{aligned}$$
since $\alpha_V(u) = \|u\|_V$ if $u \in V$ and $\alpha_V(u) = +\infty$ for $u \in V^*\setminus V$. Now (4.35) follows by Fatou's lemma.
Exercise 4.6 Justify the statements about αH made in the proof of Corollary 4.1.
and the following Itô formula holds for the square of its $H$-norm, $P$-a.s.:
$$\bigl\|X(t)\bigr\|_H^2 = \bigl\|X(0)\bigr\|_H^2 + \int_0^t \Bigl(2\bigl\langle \bar X(s), Y(s)\bigr\rangle + \bigl\|Z(s)\bigr\|_{L_2(K_Q,H)}^2\Bigr)\,ds + 2\int_0^t \bigl\langle X(s), Z(s)\,dW_s\bigr\rangle_H,\quad t \in [0,T], \tag{4.37}$$
and
$$\mathbb E\int_0^T \bigl\|X(t)\bigr\|_V^2\,dt < \infty. \tag{4.39}$$
Proof Let X n (t) be solutions to (4.17), μn be their laws in C([0, T ], Rn ), and μn∗
be the measures induced in C([0, T ], V ∗ ) as in Theorem 4.2, with a cluster point
μ∗ . We need to show that μ∗ is the law of a weak solution to (4.2). Again, using the
Skorokhod theorem, assume that Jn X n (t) and X(t) are processes with laws μn∗ and
μ∗ , respectively, with Jn X n → X P -a.s. By (4.30) and (4.33)–(4.35), Jn X n and X
are $P$-a.s. in $C([0,T],V^*) \cap L^\infty([0,T],H) \cap L^2([0,T],V)$. Denote by $\{\varphi_j\}_{j=1}^\infty \subset V$ a complete orthogonal system in $V$ which is an ONB in $H$. Note that such a system always exists, see Exercise 4.7. Then the vectors $\psi_j = \varphi_j/\|\varphi_j\|_V$ form an ONB in $V$. For $x \in C([0,T],V^*) \cap L^\infty([0,T],H) \cap L^2([0,T],V)$, consider
$$M_t(x) = x(t) - x(0) - \int_0^t A\bigl(s,x(s)\bigr)\,ds.$$
Using (4.4) and (4.30), we have, for any $v \in V$ and some constant $C$,
$$\begin{aligned}
\int \bigl|\bigl\langle v, A\bigl(s,x(s)\bigr)\bigr\rangle\bigr|^2 \ln\ln\bigl(3+\|x(s)\|_H^2\bigr)\,\mu^n_*(dx)
&\le \int \bigl\|A\bigl(s,x(s)\bigr)\bigr\|_{V^*}^2\,\|v\|_V^2 \ln\ln\bigl(3+\|x(s)\|_H^2\bigr)\,\mu^n_*(dx)\\
&\le \int \theta\bigl(1+\|x(s)\|_H^2\bigr)\ln\ln\bigl(3+\|x(s)\|_H^2\bigr)\,\|v\|_V^2\,\mu^n_*(dx)\\
&\le C\theta\|v\|_V^2 \tag{4.40}
\end{aligned}$$
and
$$\int \bigl|\bigl\langle v, A\bigl(s,x(s)\bigr)\bigr\rangle\bigr|^2\,\mu_*(dx) \le C\theta\|v\|_V^2. \tag{4.41}$$
Properties (4.38) and (4.39) are just restatements of (4.31) and (4.35). Invoking (4.41), we conclude that the continuous process $\langle v, M_t(\cdot)\rangle$ is $\mu_*$-square integrable. We will now show that for any $v \in V$, $s \le t$, and any bounded function $g_s$ on $C([0,T],V^*)$ which is measurable with respect to the cylindrical $\sigma$-field generated
i.e., that $\langle v, M_t(\cdot)\rangle \in \mathcal M_T^2(\mathbb R)$ (continuous square-integrable real-valued martingales). First, assume that $g_s$ is continuous, and extend the result to the general case by the monotone class theorem (functional form).
Let, for $v \in V$, $v^m = \sum_{j=1}^m \langle v,\psi_j\rangle_V\,\psi_j$. Then, as $m \to \infty$,
$$\int g_s(x)\bigl\langle v - v^m, M_t(x)\bigr\rangle\,\mu_*(dx) \to 0$$
By the choice of the vectors $\varphi_j$ and $\psi_j$, we have, for $x^n(t) = (x_1(t),\ldots,x_n(t)) \in \mathbb R^n$,
$$\bigl\langle v^m, J_nx^n(t)\bigr\rangle = \Bigl\langle v^m, \sum_{j=1}^n x_j(t)\varphi_j\Bigr\rangle = \sum_{j=1}^{n\wedge m} x_j(t)\,\langle v,\varphi_j\rangle_H.$$
$$\begin{aligned}
\bigl\langle v^m, M_t\bigl(J_nx^n(\cdot)\bigr)\bigr\rangle &= \bigl\langle v^m, J_nx^n(t)\bigr\rangle_H - \bigl\langle v^m, x(0)\bigr\rangle_H - \int_0^t \bigl\langle v^m, A\bigl(s,J_nx^n(s)\bigr)\bigr\rangle\,ds\\
&= \sum_{j=1}^{m\wedge n} \bigl\langle v^m,\varphi_j\bigr\rangle_H\Bigl(x(t)_j - x(0)_j - \int_0^t a^n\bigl(s,x^n(s)\bigr)_j\,ds\Bigr)
\end{aligned}$$
is a martingale relative to the measure $\mu^n$. Hence, the above and the uniform integrability of $\langle v^m, M_t(J_nX^n)\rangle$ (which follows from (4.30) and (4.40)) imply that
$$\begin{aligned}
\int g_s(x)\bigl\langle v^m, M_t(x) - M_s(x)\bigr\rangle\,\mu_*(dx) &= \mathbb E\bigl[g_s(X)\bigl\langle v^m, M_t(X) - M_s(X)\bigr\rangle\bigr]\\
&= \lim_{n\to\infty}\mathbb E\bigl[g_s\bigl(X^n\bigr)\bigl\langle v^m, M_t\bigl(J_nX^n\bigr) - M_s\bigl(J_nX^n\bigr)\bigr\rangle\bigr]\\
&= \lim_{n\to\infty}\int g_s\bigl(J_nx^n\bigr)\bigl\langle v^m, M_t\bigl(J_nx^n\bigr) - M_s\bigl(J_nx^n\bigr)\bigr\rangle\,\mu^n\bigl(dx^n\bigr) = 0.
\end{aligned}$$
The above conclusion, together with (4.43), ensures (4.42). Next, we find the increasing process of the martingale $\langle v, M_t(\cdot)\rangle$. We begin with some estimates. For $x, v \in V$, we have
$$\bigl\langle v, B\bigl(s,x(s)\bigr)Q^{1/2}\bigl(B\bigl(s,x(s)\bigr)Q^{1/2}\bigr)^*v\bigr\rangle \le \|v\|_H^2\,\bigl\|B(s,x)\bigr\|_{L_2(K_Q,H)}^2.$$
Hence,
$$\int \bigl\langle v, B\bigl(s,x(s)\bigr)Q^{1/2}\bigl(B\bigl(s,x(s)\bigr)Q^{1/2}\bigr)^*v\bigr\rangle\,\mu^n_*(dx) \le \|v\|_H^2\int \theta\bigl(1+\|x\|_H^2\bigr)\,\mu^n_*(dx) \le \theta(1+C)\|v\|_H^2 \tag{4.44}$$
by (4.30), and, by (4.31),
$$\int \bigl\langle v, B\bigl(s,x(s)\bigr)Q^{1/2}\bigl(B\bigl(s,x(s)\bigr)Q^{1/2}\bigr)^*v\bigr\rangle\,\mu_*(dx) \le \theta(1+C)\|v\|_H^2.$$
since by (4.31) and (4.41) the integrals above are bounded by $D\|v^m - v\|_V^2$ and $D\|v^m + v\|_V^2$, respectively, for some constant $D$.
By the uniform integrability of $\langle v, M_t(J_nX^n)\rangle^2$ (ensured by (4.30) and (4.40)), we have
$$\begin{aligned}
&\int \Bigl(\bigl\langle v^m, M_t(x)\bigr\rangle^2 - \bigl\langle v^m, M_s(x)\bigr\rangle^2\Bigr)g_s(x)\,\mu_*(dx)\\
&\quad= \mathbb E\Bigl[\Bigl(\bigl\langle v^m, M_t(X)\bigr\rangle^2 - \bigl\langle v^m, M_s(X)\bigr\rangle^2\Bigr)g_s(X)\Bigr]\\
&\quad= \lim_{n\to\infty}\mathbb E\Bigl[\Bigl(\bigl\langle v^m, M_t\bigl(J_nX^n\bigr)\bigr\rangle^2 - \bigl\langle v^m, M_s\bigl(J_nX^n\bigr)\bigr\rangle^2\Bigr)g_s\bigl(J_nX^n\bigr)\Bigr]\\
&\quad= \lim_{n\to\infty}\mathbb E\Biggl[\Bigl(\sum_{j=1}^m \Bigl(X^n(t) - \xi_0^n - \int_0^t a^n\bigl(u,X^n(u)\bigr)\,du\Bigr)_j\langle v,\varphi_j\rangle_H\Bigr)^2 g_s\bigl(J_nX^n\bigr)\Biggr]\\
&\qquad- \lim_{n\to\infty}\mathbb E\Biggl[\Bigl(\sum_{j=1}^m \Bigl(X^n(s) - \xi_0^n - \int_0^s a^n\bigl(u,X^n(u)\bigr)\,du\Bigr)_j\langle v,\varphi_j\rangle_H\Bigr)^2 g_s\bigl(J_nX^n\bigr)\Biggr]\\
&\quad= \lim_{n\to\infty}\mathbb E\Biggl[\int_s^t \sum_{j=1}^m \Bigl(b^n\bigl(u,X^n(u)\bigr)b^n\bigl(u,X^n(u)\bigr)^T\Bigr)_{jj}\langle v,\varphi_j\rangle_H^2\, g_s\bigl(J_nX^n\bigr)\,du\Biggr]\\
&\quad= \lim_{n\to\infty}\mathbb E\Biggl[\int_s^t \sum_{j=1}^m\sum_{k=1}^n \bigl\langle\bigl(B\bigl(u,J_nX^n(u)\bigr)Q^{1/2}\bigr)^*\varphi_j,\ f_k\bigr\rangle_K^2\langle v,\varphi_j\rangle_H^2\, g_s\bigl(J_nX^n\bigr)\,du\Biggr].
\end{aligned}$$
$$X^n(t) - \xi_0^n - \int_0^t a^n\bigl(s,X^n(s)\bigr)\,ds = \int_0^t b^n\bigl(s,X^n(s)\bigr)\,dB_s^n$$
has an increasing process given by $\int_0^t \operatorname{tr}\bigl(b^n\bigl(s,X^n(s)\bigr)b^n\bigl(s,X^n(s)\bigr)^T\bigr)\,ds$.
By using the positive and negative parts of gs (x) separately, we can assume,
without any loss of generality, that gs (x) ≥ 0 in the following argument. Consider
the last expectation above. It is dominated by
$$\begin{aligned}
&\mathbb E\int_s^t \sum_{j=1}^m\sum_{k=1}^\infty \bigl\langle\bigl(B\bigl(u,J_nX^n(u)\bigr)Q^{1/2}\bigr)^*\varphi_j,\ f_k\bigr\rangle_K^2\langle v,\varphi_j\rangle_H^2\, g_s\bigl(J_nX^n\bigr)\,du\\
&\quad= \mathbb E\int_s^t \sum_{j=1}^m \bigl\|\bigl(B\bigl(u,J_nX^n(u)\bigr)Q^{1/2}\bigr)^*\varphi_j\bigr\|_K^2\,\langle v,\varphi_j\rangle_H^2\, g_s\bigl(J_nX^n\bigr)\,du\\
&\quad= \mathbb E\int_s^t \sum_{j=1}^m \bigl\langle B\bigl(u,J_nX^n(u)\bigr)Q^{1/2}\bigl(B\bigl(u,J_nX^n(u)\bigr)Q^{1/2}\bigr)^*\varphi_j,\ \varphi_j\bigr\rangle_H\langle v,\varphi_j\rangle_H^2\, g_s\bigl(J_nX^n\bigr)\,du\\
&\quad= \mathbb E\int_s^t \bigl\langle B\bigl(u,J_nX^n(u)\bigr)Q^{1/2}\bigl(B\bigl(u,J_nX^n(u)\bigr)Q^{1/2}\bigr)^*v^m,\ v^m\bigr\rangle_H\, g_s\bigl(J_nX^n\bigr)\,du\\
&\quad\to \mathbb E\int_s^t \bigl\langle B\bigl(u,X(u)\bigr)Q^{1/2}\bigl(B\bigl(u,X(u)\bigr)Q^{1/2}\bigr)^*v^m,\ v^m\bigr\rangle_H\, g_s(X)\,du\\
&\quad= \int\!\int_s^t \bigl\langle B\bigl(u,x(u)\bigr)Q^{1/2}\bigl(B\bigl(u,x(u)\bigr)Q^{1/2}\bigr)^*v^m,\ v^m\bigr\rangle_H\, g_s(x)\,du\,\mu_*(dx),
\end{aligned}$$
using the weak convergence and uniform integrability of the integrand ensured
by (4.5) and (4.31). Hence,
$$\lim_{n\to\infty}\mathbb E\int_s^t \sum_{j=1}^m\sum_{k=1}^n \bigl\langle\bigl(B\bigl(u,J_nX^n(u)\bigr)Q^{1/2}\bigr)^*\varphi_j,\ f_k\bigr\rangle_K^2\langle v,\varphi_j\rangle_H^2\, g_s\bigl(J_nX^n\bigr)\,du \le \int\!\int_s^t \bigl\langle B\bigl(u,x(u)\bigr)Q^{1/2}\bigl(B\bigl(u,x(u)\bigr)Q^{1/2}\bigr)^*v^m,\ v^m\bigr\rangle_H\, g_s(x)\,du\,\mu_*(dx).$$
$$\begin{aligned}
&\ge \liminf_{n\to\infty} \sum_{j=1}^m\sum_{k=1}^r \bigl\langle\bigl(B\bigl(u,J_nX^n(u)\bigr)Q^{1/2}\bigr)^*\varphi_j,\ f_k\bigr\rangle_K^2\langle v,\varphi_j\rangle_H^2\, g_s\bigl(J_nX^n\bigr)\\
&= \sum_{j=1}^m\sum_{k=1}^r \bigl\langle\bigl(B\bigl(u,X(u)\bigr)Q^{1/2}\bigr)^*\varphi_j,\ f_k\bigr\rangle_K^2\langle v,\varphi_j\rangle_H^2\, g_s(X)\\
&\to \bigl\langle B\bigl(u,X(u)\bigr)Q^{1/2}\bigl(B\bigl(u,X(u)\bigr)Q^{1/2}\bigr)^*v^m,\ v^m\bigr\rangle_H\, g_s(X),
\end{aligned}$$
Let $\{\psi_j^*\}_{j=1}^\infty$ be the dual orthonormal basis in $V^*$ defined by the duality
$$\bigl\langle u, \psi_j^*\bigr\rangle_{V^*} = \langle \psi_j, u\rangle,\quad u \in V^*.$$
Since, by (4.4),
$$\bigl\|M_t(x)\bigr\|_{V^*}^2 \le C\Bigl(1 + \sup_{0\le t\le T}\|x(t)\|_H^2\Bigr),$$
Denote $M_t^j(x) = \langle M_t(x), \psi_j^*\rangle_{V^*}$. Using (2.5) and the property of the dual basis, since
$$M_t^j(x)\,M_t^k(x) = \bigl\langle \psi_j, M_t(x)\bigr\rangle\bigl\langle \psi_k, M_t(x)\bigr\rangle,$$
we can write
$$\bigl\langle M^j(x), M^k(x)\bigr\rangle_t = \bigl\langle\bigl\langle \psi_j, M(x)\bigr\rangle, \bigl\langle \psi_k, M(x)\bigr\rangle\bigr\rangle_t = \int_0^t \bigl\langle B\bigl(s,x(s)\bigr)Q^{1/2}\bigl(B\bigl(s,x(s)\bigr)Q^{1/2}\bigr)^*\psi_j,\ \psi_k\bigr\rangle\,ds.$$
Then
$$\Phi^*(s)(u) = \sum_{j=1}^\infty \langle \psi_j, u\rangle\,\bigl(B\bigl(s,X(s)\bigr)Q^{1/2}\bigr)^*\psi_j,\quad u \in V^*,$$
$$\cdots = \int_0^t \sum_{j,k=1}^\infty \bigl\langle B\bigl(s,X(s)\bigr)Q^{1/2}\bigl(B\bigl(s,X(s)\bigr)Q^{1/2}\bigr)^*\psi_j,\ \psi_k\bigr\rangle\langle \psi_j, u\rangle\langle \psi_k, v\rangle\,ds = \bigl\langle\langle M(X)\rangle_t(u),\ v\bigr\rangle_{V^*},$$
giving that
$$\langle M(X)\rangle_t = \int_0^t \Phi(s)\Phi^*(s)\,ds.$$
where we have used the assumption on the duality in the Gelfand triplet and the fact that $\psi_j = \varphi_j/\|\varphi_j\|_V$ with the denominator greater than or equal to one. Consequently, the growth condition (4.5), together with (4.38), implies that
$$\mathbb E\int_0^T \bigl\|\Phi(t)\bigr\|_{L_2(K,V^*)}^2\,dt < \infty.$$
Define
$$W_t = \sum_{m=1}^\infty \widetilde W_t(f_m)\,Q^{1/2}f_m.$$
$$\begin{aligned}
\cdots &= \sum_{m=1}^\infty \int_0^t \sum_{j=1}^\infty \bigl\langle \psi_j, B\bigl(s,X(s)\bigr)Q^{1/2}f_m\bigr\rangle\,\psi_j^*\,d\widetilde W_s(f_m)\\
&= \sum_{m=1}^\infty \int_0^t \sum_{j=1}^\infty \bigl\langle \psi_j^*, B\bigl(s,X(s)\bigr)f_m\bigr\rangle_{V^*}\,\psi_j^*\,d\widetilde W_s\bigl(Q^{1/2}f_m\bigr)\\
&= \int_0^t \sum_{m=1}^\infty B\bigl(s,X(s)\bigr)f_m\,d\widetilde W_s\bigl(Q^{1/2}f_m\bigr)\\
&= \int_0^t B\bigl(s,X(s)\bigr)\,dW_s.
\end{aligned}$$
We are now in a position to apply Theorem 4.3 to X(t), Y (t) = A(t, X(t)), and
Z(t) = B(t, X(t)) to obtain that X ∈ C([0, T ], H ), completing the proof.
Exercise 4.7 Show that under the assumption of compact embedding in the Gelfand triplet, there exists a vector system $\{\varphi_j\}_{j=1}^\infty \subset V$ which is a complete orthogonal system in $V$ and an ONB in $H$.
Hint: show that the canonical isomorphism $I : V^* \to V$ takes a unit ball in $V^*$ to a subset of the unit ball in $V$, which is relatively compact in $H$. For the eigenvectors $h_n$ of $I$, we have
We now address the problem of the existence and uniqueness of a strong solution
using a version of the Yamada and Watanabe result in infinite dimensions. Recall
the notion of pathwise uniqueness.
Definition 4.2 If for any two $H$-valued weak solutions $(X_1, W)$ and $(X_2, W)$ of (4.2), defined on the same filtered probability space $(\Omega, \mathcal F, \{\mathcal F_t\}_{0\le t\le T}, P)$ with the same $Q$-Wiener process $W$ and such that $X_1(0) = X_2(0)$ $P$-a.s., we have that
$$P\bigl(X_1(t) = X_2(t),\ 0\le t\le T\bigr) = 1,$$
then the solution of (4.2) is said to be pathwise unique.
Theorem 4.5 Let the conditions of Theorem 4.4 hold and assume the weak mono-
tonicity condition (4.48). Then the solution to (4.2) is pathwise unique.
Proof Let X1 , X2 be two weak solutions as in Definition 4.2, Y (t) = X1 (t) − X2 (t),
and denote a V -valued progressively measurable version of the latter by Ȳ . Apply-
ing the Itô formula and the monotonicity condition (4.48) yields
$$\begin{aligned}
e^{-\theta t}\bigl\|Y(t)\bigr\|_H^2 &= -\theta\int_0^t e^{-\theta s}\bigl\|Y(s)\bigr\|_H^2\,ds\\
&\quad+ \int_0^t e^{-\theta s}\Bigl(2\bigl\langle \bar Y(s), A\bigl(s,X_1(s)\bigr) - A\bigl(s,X_2(s)\bigr)\bigr\rangle + \bigl\|B\bigl(s,X_1(s)\bigr) - B\bigl(s,X_2(s)\bigr)\bigr\|_{L_2(K_Q,H)}^2\Bigr)\,ds\\
&\quad+ 2\int_0^t e^{-\theta s}\bigl\langle Y(s), \bigl(B\bigl(s,X_1(s)\bigr) - B\bigl(s,X_2(s)\bigr)\bigr)\,dW_s\bigr\rangle_H\\
&\le M_t,
\end{aligned}$$
Corollary 4.2 Under the conditions of Theorem 4.5, (4.2) has a unique strong so-
lution.
We will now study the existence and uniqueness problem for strong solutions.
A monotonicity condition will be imposed on the coefficients of the SDE (4.2),
and the compactness of embeddings V → H → V ∗ will be dropped. We empha-
size that the monotonicity condition will allow us to construct approximate strong
solutions using projections of a single Q-Wiener process, as opposed to construct-
ing finite-dimensional weak solutions in possibly different probability spaces. In the
presence of monotonicity, we can weaken other assumptions on the coefficients of
the variational SDE. A reader interested in exploring this topic in more depth is
referred to the detailed presentation in [64], where the authors relax the conditions even slightly further.
We assume that in the Gelfand triplet $V \hookrightarrow H \hookrightarrow V^*$, $V$ is a real separable Hilbert space.
4.3 Strong Variational Solutions 175
$$V \ni v \mapsto A(t,v) \in V^* \quad\text{and}\quad V \ni v \mapsto B(t,v)Q^{1/2}\bigl(B(t,v)Q^{1/2}\bigr)^* \in L_1(H) \tag{4.49}$$
are continuous.
We now assume that the coefficient $A$ satisfies the following growth condition:
(G-A′)
$$\bigl\|A(t,v)\bigr\|_{V^*}^2 \le \theta\bigl(1 + \|v\|_V^2\bigr),\quad v \in V. \tag{4.50}$$
The coercivity condition (4.6) remains in force, and, in addition, we assume the
weak monotonicity condition (4.48).
We will rely on the following finite-dimensional result for an SDE (4.15) with
the initial condition ξ0 . Its more refined version is stated as Theorem 3.1.1 in [64].
where $\|b(t,x)\|^2 = \operatorname{tr}(b(t,x)b^T(t,x))$. Assume that for all $t \ge 0$ and $R > 0$, on the set $\{\|x\|_{\mathbb R^n} \le R,\ \|y\|_{\mathbb R^n} \le R\}$, we have
$$2\bigl\langle x, a(t,x)\bigr\rangle_{\mathbb R^n} + \bigl\|b(t,x)\bigr\|^2 \le \theta\bigl(1 + \|x\|_{\mathbb R^n}^2\bigr)$$
and
$$2\bigl\langle x-y,\ a(t,x)-a(t,y)\bigr\rangle_{\mathbb R^n} + \bigl\|b(t,x)-b(t,y)\bigr\|^2 \le \theta\|x-y\|_{\mathbb R^n}^2.$$
Here is the variational existence and uniqueness theorem for strong solutions.
and
$$\mathbb E\int_0^T \bigl\|X(t)\bigr\|_V^2\,dt < \infty. \tag{4.52}$$
$$P_n u = \sum_{i=1}^n \langle \varphi_i, u\rangle\,\varphi_i,\quad u \in V^*.$$
i=1
with the initial condition X n (0) = Pn ξ0 and identify it with the SDE (4.15) in
Rn . It is a simple exercise to show that the conditions of Theorem 4.6 hold (Ex-
ercise 4.9); hence, we have a unique strong finite-dimensional solution X n (t) ∈ Hn .
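The finite-dimensional solutions $X^n(t) \in H_n$ are obtained abstractly above; numerically, one would typically combine the Galerkin projection with an Euler–Maruyama time step. The following Python sketch (our own illustration, not the book's construction) does this for a stochastic heat equation on $(0,1)$, where, in the sine ONB, the projected generator is diagonal with eigenvalues $-(j\pi)^2$; all parameter values are arbitrary.

```python
import numpy as np

def galerkin_em(n_modes, T, dt, sigma, x0, rng=None):
    """Euler-Maruyama for the n-mode Galerkin projection of the stochastic
    heat equation dX = (Laplacian)X dt + sigma dW on (0,1): in the sine ONB
    the generator is diagonal with eigenvalues -(j*pi)**2."""
    lam = -(np.pi * np.arange(1, n_modes + 1)) ** 2
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(int(T / dt)):
        noise = 0.0 if rng is None else sigma * np.sqrt(dt) * rng.standard_normal(n_modes)
        x = x + lam * x * dt + noise
    return x

x0 = np.ones(5)
x_det = galerkin_em(5, 0.05, 1e-4, 0.0, x0)   # zero noise: pure exponential decay
x_sto = galerkin_em(5, 0.05, 1e-4, 0.5, x0, np.random.default_rng(0))
```

With the noise switched off, every mode is damped by a factor $1 - (j\pi)^2\,\Delta t \in (0,1)$ per step, so the $H$-norm of the approximation decreases, mirroring the coercivity estimates of this section.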
We will now show its boundedness in the proper L2 spaces. Identifying X n (t) with
an Rn -valued process and applying the finite-dimensional Itô formula yield
$$\bigl\|X^n(t)\bigr\|_H^2 = \bigl\|X^n(0)\bigr\|_H^2 + \int_0^t \Bigl(2\bigl\langle X^n(s), A\bigl(s,X^n(s)\bigr)\bigr\rangle + \bigl\|P_nB\bigl(s,X^n(s)\bigr)\widetilde P_n\bigr\|_{L_2(K_Q,H)}^2\Bigr)\,ds + M^n(t),$$
where
$$M^n(t) = 2\int_0^t \bigl\langle X^n(s),\ P_nB\bigl(s,X^n(s)\bigr)\,dW_s^n\bigr\rangle_H$$
0
weakly in $L^2([0,T]\times\Omega, H)$, the reason being that the stochastic integral is a continuous transformation from $\Lambda^2(K_Q,H)$ to $L^2([0,T]\times\Omega, H)$, and so it is also continuous with respect to the weak topologies in those spaces (see Exercise 4.10).
For any $v \in \bigcup_{n\ge1} H_n$ and $g \in L^2([0,T]\times\Omega, \mathbb R)$, using the assumption on the duality, we obtain
$$\begin{aligned}
\mathbb E\int_0^T g(t)\bigl\langle v, X(t)\bigr\rangle\,dt
&= \lim_{n\to\infty}\mathbb E\int_0^T g(t)\bigl\langle v, X^n(t)\bigr\rangle\,dt\\
&= \lim_{n\to\infty}\mathbb E\int_0^T \biggl[g(t)\bigl\langle v, X^n(0)\bigr\rangle + \int_0^t g(t)\bigl\langle v, P_nA\bigl(s,X^n(s)\bigr)\bigr\rangle\,ds + \Bigl\langle \int_0^t P_nB\bigl(s,X^n(s)\bigr)\,dW_s^n,\ g(t)v\Bigr\rangle_H\biggr]\,dt\\
&= \lim_{n\to\infty}\mathbb E\biggl[\bigl\langle v, X^n(0)\bigr\rangle_H\int_0^T g(t)\,dt + \int_0^T \Bigl\langle \int_s^T g(t)v\,dt,\ P_nA\bigl(s,X^n(s)\bigr)\Bigr\rangle\,ds + \int_0^T \Bigl\langle \int_0^t P_nB\bigl(s,X^n(s)\bigr)\,dW_s^n,\ g(t)v\Bigr\rangle_H\,dt\biggr]\\
&= \mathbb E\int_0^T g(t)\Bigl\langle v,\ X(0) + \int_0^t Y(s)\,ds + \int_0^t Z(s)\,dW_s\Bigr\rangle\,dt.
\end{aligned}$$
Therefore,
$$X(t) = X(0) + \int_0^t Y(s)\,ds + \int_0^t Z(s)\,dW_s,\quad dt\otimes dP\text{-a.e.},$$
We now verify that Y (t) = A(t, X(t)) and Z(t) = B(t, X(t)), dt ⊗ dP -a.e. For a
nonnegative function ψ ∈ L∞ ([0, T ], R), we have
$$\mathbb E\int_0^T \psi(t)\bigl\langle X(t), X^n(t)\bigr\rangle_H\,dt \le \mathbb E\int_0^T \sqrt{\psi(t)}\,\bigl\|X(t)\bigr\|_H\,\sqrt{\psi(t)}\,\bigl\|X^n(t)\bigr\|_H\,dt \le \Bigl(\mathbb E\int_0^T \psi(t)\bigl\|X(t)\bigr\|_H^2\,dt\Bigr)^{1/2}\Bigl(\mathbb E\int_0^T \psi(t)\bigl\|X^n(t)\bigr\|_H^2\,dt\Bigr)^{1/2}.$$
Hence,
$$\mathbb E\int_0^T \psi(t)\bigl\|X(t)\bigr\|_H^2\,dt = \lim_{n\to\infty}\mathbb E\int_0^T \psi(t)\bigl\langle X(t), X^n(t)\bigr\rangle_H\,dt \le \Bigl(\mathbb E\int_0^T \psi(t)\bigl\|X(t)\bigr\|_H^2\,dt\Bigr)^{1/2}\liminf_{n\to\infty}\Bigl(\mathbb E\int_0^T \psi(t)\bigl\|X^n(t)\bigr\|_H^2\,dt\Bigr)^{1/2} < \infty,$$
giving
$$\mathbb E\int_0^T \psi(t)\bigl\|X(t)\bigr\|_H^2\,dt \le \liminf_{n\to\infty}\mathbb E\int_0^T \psi(t)\bigl\|X^n(t)\bigr\|_H^2\,dt. \tag{4.56}$$
Since by (4.48) the first of the last two integrals is negative, by letting $n \to \infty$, using the weak convergence (4.55) in $L^2$, and applying (4.56), we conclude that, for any function $\psi$ as above,
$$\begin{aligned}
&\int_0^T \psi(t)\,\mathbb E\Bigl[e^{-ct}\bigl\|X(t)\bigr\|_H^2 - \bigl\|X(0)\bigr\|_H^2\Bigr]\,dt\\
&\quad\le \int_0^T \psi(t)\int_0^t e^{-cs}\,\mathbb E\Bigl[c\bigl\|\varphi(s)\bigr\|_H^2 - 2c\bigl\langle X(s), \varphi(s)\bigr\rangle_H + 2\bigl\langle Z(s), B\bigl(s,\varphi(s)\bigr)\bigr\rangle_{L_2(K_Q,H)} - \bigl\|B\bigl(s,\varphi(s)\bigr)\bigr\|_{L_2(K_Q,H)}^2\\
&\qquad\qquad\qquad\qquad\qquad + 2\bigl\langle \bar X(s), A\bigl(s,\varphi(s)\bigr)\bigr\rangle + 2\bigl\langle \varphi(s), Y(s) - A\bigl(s,\varphi(s)\bigr)\bigr\rangle\Bigr]\,ds\,dt. \tag{4.57}
\end{aligned}$$
Recall the Itô formula (4.37). With stopping times τl localizing the local martingale
represented by the stochastic integral, we have
$$\mathbb E\bigl\|X(t\wedge\tau_l)\bigr\|_H^2 - \mathbb E\bigl\|X(0)\bigr\|_H^2 = \mathbb E\int_0^t \mathbb 1_{[0,\tau_l]}(s)\Bigl(2\bigl\langle \bar X(s), Y(s)\bigr\rangle + \bigl\|Z(s)\bigr\|_{L_2(K_Q,H)}^2\Bigr)\,ds.$$
Since, by (4.36) and by the square integrability of $Y$ and $Z$, we can pass to the limit using the Lebesgue DCT, the above equality yields
$$\mathbb E\bigl\|X(t)\bigr\|_H^2 - \mathbb E\bigl\|X(0)\bigr\|_H^2 = \mathbb E\int_0^t \Bigl(2\bigl\langle \bar X(s), Y(s)\bigr\rangle + \bigl\|Z(s)\bigr\|_{L_2(K_Q,H)}^2\Bigr)\,ds. \tag{4.58}$$
We now substitute the expression for the left-hand side of (4.59) into the left-hand
side of (4.57) and arrive at
$$\begin{aligned}
\mathbb E\int_0^T \psi(t)\int_0^t e^{-cs}\Bigl[2\bigl\langle \bar X(s) - \varphi(s),\ Y(s) - A\bigl(s,\varphi(s)\bigr)\bigr\rangle \tag{4.60}\\
\qquad + \bigl\|B\bigl(s,\varphi(s)\bigr) - Z(s)\bigr\|_{L_2(K_Q,H)}^2 - c\bigl\|X(s) - \varphi(s)\bigr\|_H^2\Bigr]\,ds\,dt \le 0. \tag{4.61}
\end{aligned}$$
Substituting $\varphi = \bar X$ gives that $Z = B(\cdot, \bar X)$. Now let $\varphi = \bar X - \varepsilon\tilde\varphi v$ with $\varepsilon > 0$, $\tilde\varphi \in L^\infty([0,T]\times\Omega, \mathbb R)$, and $v \in V$. Let us divide (4.60) by $\varepsilon$ and pass to the limit as $\varepsilon \to 0$ using the Lebesgue DCT. Utilizing (4.49) and (4.48), we obtain that
$$\mathbb E\int_0^T \psi(t)\int_0^t e^{-cs}\,\tilde\varphi(s)\bigl\langle v,\ Y(s) - A\bigl(s,\bar X(s)\bigr)\bigr\rangle\,ds\,dt \le 0.$$
This proves that Y = A(·, X̄) due to the choice of ψ and φ̃.
The argument used in the proof of Theorem 4.5 can now be applied to show the
uniqueness of the solution.
Exercise 4.9 Show that the coefficients of (4.53) identified with the coefficients
of (4.15) satisfy the conditions of Theorem 4.6.
Exercise 4.11 Let X and Y be two solutions to (4.2). Using (4.58), show that under
the conditions of Theorem 4.5 or Theorem 4.7,
$$\mathbb E\bigl\|X(t) - Y(t)\bigr\|_H^2 \le e^{ct}\,\mathbb E\bigl\|X(0) - Y(0)\bigr\|_H^2,\quad 0\le t\le T.$$
Note that this implies the uniqueness of the solution, providing an alternative argument to the one used in the text.
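The estimate of Exercise 4.11 can be observed numerically in one dimension. In the sketch below (our own illustration, not from the monograph, using the monotone drift $a(x) = -x$ and additive noise), two Euler–Maruyama solutions are driven by the same Brownian increments, so the noise cancels from their difference and the gap contracts deterministically, consistent with the exponential bound.

```python
import numpy as np

rng = np.random.default_rng(7)

def euler_pair(x0, y0, T=1.0, dt=1e-3):
    """Two Euler-Maruyama solutions of dX = -X dt + dW driven by the SAME
    Brownian increments; the gap X - Y then satisfies d(X - Y) = -(X - Y) dt."""
    x, y = x0, y0
    for _ in range(int(T / dt)):
        dw = np.sqrt(dt) * rng.standard_normal()
        x = x + (-x) * dt + dw
        y = y + (-y) * dt + dw
    return x, y

xT, yT = euler_pair(2.0, 0.0)
gap0, gapT = 2.0, abs(xT - yT)
```

Because the same increment `dw` enters both updates, the gap obeys the deterministic recursion $g_{k+1} = (1-\Delta t)\,g_k$, so $g(T) \approx g(0)e^{-T}$ regardless of the noise realization.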
and this definition can be extended to functions $\varphi$ such that $\varphi(X(t,s;x)) \in L^1(\Omega, \mathbb R)$ for arbitrary $s \le t$. As usual, for a random variable $\eta$,
$$(P_{s,t}\varphi)(\eta) = \mathbb E\bigl[\varphi\bigl(X(t,s;x)\bigr)\bigr]\big|_{x=\eta}.$$
The Markov property (3.52) (and consequently (3.57)) of the solution now follows
almost word by word by the arguments used in the proof of Theorem 3.6 and by
Exercise 4.11. A proof given in [64] employs similar ideas.
Theorem 4.8 The unique strong solution to (4.2) obtained in Theorem 4.7 is a
Markov process.
Remark 4.2 In the case where the coefficients A and B are independent of t and
with x ∈ H ,
$$\begin{aligned}
X(t+s,t;x) &= x + \int_t^{t+s} A\bigl(X(u,t;x)\bigr)\,du + \int_t^{t+s} B\bigl(X(u,t;x)\bigr)\,dW_u\\
&= x + \int_0^s A\bigl(X(t+u,t;x)\bigr)\,du + \int_0^s B\bigl(X(t+u,t;x)\bigr)\,d\bar W_u,
\end{aligned}$$
where $\bar W_u = W_{t+u} - W_t$. Repeating the arguments in Sect. 3.4, we argue that
$$\bigl\{X(t+s,t;x),\ s\ge0\bigr\} \stackrel{d}{=} \bigl\{X(s,0;x),\ s\ge0\bigr\},$$
In Sect. 7.5 we will need the strong Markov property for strong variational solu-
tions. We prove this in the next theorem. Consider the following variational SDE:
$$dX(t) = A\bigl(X(t)\bigr)\,dt + B\bigl(X(t)\bigr)\,dW_t \tag{4.64}$$
Definition 4.3 Let τ be a stopping time with respect to a filtration {Ft }t≥0 (an
Ft -stopping time for short). We define
$$\mathcal F_\tau = \sigma\bigl\{A \in \mathcal F : A \cap \{\tau \le t\} \in \mathcal F_t,\ t \ge 0\bigr\}.$$
4.4 Markov and Strong Markov Properties 183
Exercise 4.12 Show that FτW = σ {Ws∧τ , s ≥ 0} and that FτX = σ {Xs∧τ , s ≥ 0}
for a strong solution to (4.2).
Theorem 4.9 Under the assumptions of Theorem 4.7, the unique strong solution
X(t) of (4.64) in C([0, T ], H ) is a strong Markov process.
Proof By the monotone class theorem (functional form) we only need to show that
for any bounded continuous function $\varphi : H \to \mathbb R$ and $A \in \mathcal F_\tau^{W,\xi}$,
$$\mathbb E\bigl[\varphi\bigl(X(\tau+s;\xi)\bigr)\mathbb 1_{A\cap\{\tau<\infty\}}\bigr] = \mathbb E\bigl[(P_s\varphi)\bigl(X(\tau;\xi)\bigr)\mathbb 1_{A\cap\{\tau<\infty\}}\bigr]. \tag{4.66}$$
If $\tau$ takes finitely many values, then $A \in \mathcal F_{\max\{\tau(\omega)\}}$, and (4.66) is a consequence of Theorem 4.8.
Let $\tau_n$ be a sequence of $\mathcal F_t^{W,\xi}$-stopping times, each taking finitely many values, with $\tau_n \downarrow \tau$ on $\{\tau < \infty\}$ (see Exercise 4.14). Since $\tau_n \ge \tau$, we have $\mathcal F_{\tau_n}^{W,\xi} \supset \mathcal F_\tau^{W,\xi}$ and $A \in \mathcal F_{\tau_n}^{W,\xi}$ for all $n$. Consequently, (4.66) holds for $\tau_n$:
$$\mathbb E\bigl[\varphi\bigl(X(\tau_n+s;\xi)\bigr)\mathbb 1_{A\cap\{\tau<\infty\}}\bigr] = \mathbb E\bigl[(P_s\varphi)\bigl(X(\tau_n;\xi)\bigr)\mathbb 1_{A\cap\{\tau<\infty\}}\bigr].$$
Corollary 4.3 Under the assumptions of Theorem 4.9, the unique strong solution
X(t) of (4.64) in C([0, T ], H ) has the following strong Markov property:
$$\mathbb E\bigl[\varphi\bigl(X(\tau+s;\xi)\bigr)\,\big|\,\mathcal F_\tau^X\bigr] = (P_s\varphi)\bigl(X(\tau;\xi)\bigr)\quad P\text{-a.s. on } \{\tau<\infty\} \tag{4.67}$$
for any real-valued function $\varphi$ such that $\varphi(X(t;\xi)) \in L^1(\Omega, \mathbb R)$, with $\mathcal F_s^X = \sigma\{X(r;\xi),\ r \le s\}$, and any $\mathcal F_t^X$-stopping time $\tau$.
We now refer the reader to Sect. 4.1 in [64], where several examples and further
references are provided.
Chapter 5
Stochastic Differential Equations
with Discontinuous Drift
5.1 Introduction
In this chapter, we consider genuine infinite-dimensional stochastic differential
equations not connected to SPDEs.
This problem has been discussed in Albeverio's work on solutions to infinite-dimensional stochastic differential equations with values in $C([0,T], \mathbb R^{\mathbb Z^d})$.
where $\{e_k\}_{k=1}^\infty$ is an ONB in $H$. Then the natural choice of the larger space is $(\mathbb R^\infty, \rho_{\mathbb R^\infty})$, with the metric of coordinate-wise convergence
$$\rho_{\mathbb R^\infty}(x,y) = \sum_{k=1}^\infty \frac{1}{2^k}\,\frac{|x^k - y^k|}{1 + |x^k - y^k|},$$
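This metric is bounded by $1$ and metrizes coordinate-wise convergence. A direct Python transcription (truncated to finitely many coordinates; our own illustration, not part of the monograph) is:

```python
def rho(x, y):
    """Metric of coordinate-wise convergence on R^infty, truncated to the
    finitely many coordinates supplied."""
    return sum(
        (0.5 ** (k + 1)) * abs(a - b) / (1.0 + abs(a - b))
        for k, (a, b) in enumerate(zip(x, y))
    )

d = rho([0.0, 0.0, 0.0], [1.0, 0.0, 0.0])   # (1/2) * 1/(1+1) = 0.25
```

Each summand is at most $2^{-k}$, so the series converges for any pair of sequences, which is exactly why $\rho_{\mathbb R^\infty}$ is well defined on all of $\mathbb R^\infty$.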
(Gq1)
$$u\,q^k(u) \le \gamma_k + u^2,\quad u \in \mathbb R; \tag{5.3}$$
5.2 Unbounded Spin Systems, Solutions in C([0, T ], H w ) 187
(Gq2) Let $\bar q_n(u_1,\ldots,u_n) = \bigl(q^1(u_1),\ldots,q^n(u_n)\bigr) \in \mathbb R^n$. There exists a positive integer $m$, independent of $n$, such that
$$\bigl\|\bar q_n(u_n)\bigr\|_{\mathbb R^n}^2 \le C\bigl(1 + \|u_n\|_{\mathbb R^n}^{2m}\bigr). \tag{5.4}$$
in the following sense. There exist an $H$-valued $Q$-Wiener process $W_t$ and a process $X(\cdot) \in C([0,T], H_w)$ such that for every $k = 1, 2, \ldots$,
$$X^k(t) = x^k + \int_0^t \bigl[F^k\bigl(X(s)\bigr) + q^k\bigl(X^k(s)\bigr)\bigr]\,ds + W_t^k. \tag{5.5}$$
Here, $P_n$ is the projection of $H$ onto $\operatorname{span}\{e_1,\ldots,e_n\}$, $q_n : P_nH \to P_nH$, $q_n(y) = \sum_{k=1}^n q^k(y^k)e_k$, $y \in P_nH$, and $W_t^n$ is an $H$-valued Wiener process with covariance $Q_n = P_nQP_n$.
We can consider (5.6) in $\mathbb R^n$ by identifying $P_nH$ with $\mathbb R^n$ and treating $W_t^n$ as an $\mathbb R^n$-valued Wiener process. Denote
$$G_n(x) = \bigl(F^1(x_n) + q^1\bigl(x^1\bigr),\ \ldots,\ F^n(x_n) + q^n\bigl(x^n\bigr)\bigr) \in \mathbb R^n,\quad x \in \mathbb R^\infty,\ x_n = \sum_{k=1}^n x^ke_k.$$
or $\tau_R = T$ if the infimum is taken over an empty set. Using the Itô formula for the function $\|x\|_{\mathbb R^n}^2$ on $\mathbb R^n$, we get
$$\bigl\|\xi_n(t\wedge\tau_R)\bigr\|_{\mathbb R^n}^2 = \|P_nx\|_H^2 + 2\int_0^{t\wedge\tau_R} \bigl\langle \xi_n(s), G_n\bigl(\xi_n(s)\bigr)\bigr\rangle_{\mathbb R^n}\,ds + (t\wedge\tau_R)\operatorname{tr}(Q_n) + 2\int_0^{t\wedge\tau_R} \bigl\langle \xi_n(s),\ dW_s^n\bigr\rangle_{\mathbb R^n}.$$
For the stochastic integral term, we use that $2a \le 1 + a^2$; hence, by the Doob inequality, Theorem 2.2, and (5.7), we have
$$\mathbb E\sup_{0\le s\le t}\bigl\|\xi_n(s\wedge\tau_R)\bigr\|_{\mathbb R^n}^2 \le C_1 + C_2\int_0^t \mathbb E\bigl\|\xi_n(s\wedge\tau_R)\bigr\|_{\mathbb R^n}^2\,ds.$$
Let $l$ be a positive integer. Using the Itô formula for the function $\|x\|_{\mathbb R^n}^{2l}$ on $\mathbb R^n$, we get
$$\begin{aligned}
\bigl\|\xi_n(t\wedge\tau_R)\bigr\|_{\mathbb R^n}^{2l} &= \|P_nx\|_H^{2l} + 2l\int_0^{t\wedge\tau_R} \bigl\|\xi_n(s)\bigr\|_{\mathbb R^n}^{2(l-1)}\bigl\langle \xi_n(s), G_n\bigl(\xi_n(s)\bigr)\bigr\rangle_{\mathbb R^n}\,ds\\
&\quad+ 2l(l-1)\int_0^{t\wedge\tau_R} \bigl\|\xi_n(s)\bigr\|_{\mathbb R^n}^{2(l-2)}\bigl\|Q_n^{1/2}\xi_n(s)\bigr\|_{\mathbb R^n}^2\,ds\\
&\quad+ l\int_0^{t\wedge\tau_R} \bigl\|\xi_n(s)\bigr\|_{\mathbb R^n}^{2(l-1)}\operatorname{tr}(Q_n)\,ds\\
&\quad+ 2l\int_0^{t\wedge\tau_R} \bigl\|\xi_n(s)\bigr\|_{\mathbb R^n}^{2(l-1)}\bigl\langle \xi_n(s),\ dW_s^n\bigr\rangle_{\mathbb R^n}.
\end{aligned}$$
$$\begin{aligned}
\mathbb E\bigl\|\xi_n(t\wedge\tau_R)\bigr\|_{\mathbb R^n}^{2l} &\le C_1 + C_2\int_0^t \mathbb E\bigl\|\xi_n(s\wedge\tau_R)\bigr\|_{\mathbb R^n}^{2(l-1)}\,ds + C_3\int_0^t \mathbb E\bigl\|\xi_n(s\wedge\tau_R)\bigr\|_{\mathbb R^n}^{2l}\,ds\\
&\le (C_1 + C_2T) + (C_2 + C_3)\int_0^t \mathbb E\bigl\|\xi_n(s\wedge\tau_R)\bigr\|_{\mathbb R^n}^{2l}\,ds,
\end{aligned}$$
where we have used the fact that $a^{2(l-1)} \le 1 + a^{2l}$. By Gronwall's lemma, for some constant $C$,
$$\mathbb E\bigl\|\xi_n(t\wedge\tau_R)\bigr\|_{\mathbb R^n}^{2l} \le C,$$
which, as $R \to \infty$, leads to
$$\mathbb E\sup_{0\le t\le T}\bigl\|\xi_n(t)\bigr\|_{\mathbb R^n}^{2l} \le C. \tag{5.9}$$
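Gronwall's lemma, used repeatedly in these estimates, can be checked on its discrete analogue: iterating $f_k = C_1 + C_2\,\Delta t\sum_{i<k} f_i$ reproduces the bound $f(T) \le C_1 e^{C_2 T}$. The following sketch (our own illustration with arbitrary constants, not from the monograph) demonstrates this:

```python
import math

def discrete_gronwall(c1, c2, T, n=1000):
    """Iterate f_k = c1 + c2 * dt * sum_{i<k} f_i, the discrete analogue of
    f(t) = c1 + c2 * int_0^t f(s) ds; Gronwall bounds the result by c1*exp(c2*T)."""
    dt = T / n
    f, integral = c1, 0.0
    for _ in range(n):
        integral += f * dt       # running Riemann sum of f
        f = c1 + c2 * integral   # the integral-equation recursion
    return f

fT = discrete_gronwall(1.0, 2.0, 1.0)
```

By induction the recursion gives $f_n = C_1(1 + C_2\Delta t)^n$, which is always below $C_1e^{C_2T}$ and converges to it as $\Delta t \to 0$.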
Using (5.8) and (5.9) and essentially repeating the argument in Lemma 3.10, we now
obtain an estimate for the fourth moment of the increment of the process ξn (t ∧ τR ).
Applying the Itô formula to the function $\|x\|_{\mathbb R^n}^4$ on $\mathbb R^n$, we get
$$\begin{aligned}
\bigl\|\xi_n(t+h) - \xi_n(t)\bigr\|_{\mathbb R^n}^4 &= 4\int_t^{t+h} \bigl\|\xi_n(u)-\xi_n(t)\bigr\|_{\mathbb R^n}^2\bigl\langle \xi_n(u)-\xi_n(t),\ G_n\bigl(\xi_n(u)\bigr)\bigr\rangle_{\mathbb R^n}\,du\\
&\quad+ 2\int_t^{t+h} \bigl\|\xi_n(u)-\xi_n(t)\bigr\|_{\mathbb R^n}^2\operatorname{tr}(Q_n)\,du\\
&\quad+ 2\int_t^{t+h} \bigl\langle Q_n\bigl(\xi_n(u)-\xi_n(t)\bigr),\ \xi_n(u)-\xi_n(t)\bigr\rangle_{\mathbb R^n}\,du\\
&\quad+ 4\int_t^{t+h} \bigl\|\xi_n(u)-\xi_n(t)\bigr\|_{\mathbb R^n}^2\bigl\langle \xi_n(u)-\xi_n(t),\ dW_u^n\bigr\rangle_{\mathbb R^n}.
\end{aligned}$$
Taking the expectation of both sides and using assumptions (5.2) and (5.4), which imply the polynomial growth of $G_n$, we calculate
$$\begin{aligned}
\mathbb E\bigl\|\xi_n(t+h) - \xi_n(t)\bigr\|_{\mathbb R^n}^4 &\le C_1\,\mathbb E\int_t^{t+h} \bigl\|\xi_n(u)-\xi_n(t)\bigr\|_{\mathbb R^n}^3\bigl(1+\bigl\|\xi_n(u)\bigr\|\bigr)\,du + C_2\,\mathbb E\int_t^{t+h} \bigl\|\xi_n(u)-\xi_n(t)\bigr\|_{\mathbb R^n}^2\,du\\
&\le C_3\biggl[\Bigl(\int_t^{t+h} \mathbb E\bigl\|\xi_n(u)-\xi_n(t)\bigr\|_{\mathbb R^n}^4\,du\Bigr)^{3/4}h^{1/4} + \Bigl(\int_t^{t+h} \mathbb E\bigl\|\xi_n(u)-\xi_n(t)\bigr\|_{\mathbb R^n}^4\,du\Bigr)^{1/2}h^{1/2}\biggr] \le Ch
\end{aligned}$$
Let us relate these results to the work of Leha and Ritter [47]. We begin with the
general uniqueness and existence theorem in [46].
Theorem 5.2 Let $H$ be a real separable Hilbert space, and let $W_t$ be a $Q$-Wiener process. Assume that $A : H \to H$ and $B : H \to L(H)$ satisfy the following growth and local Lipschitz conditions. There exist constants $C$ and $C_n$, $n = 1, 2, \ldots$, such that
(1) $\operatorname{tr}(B(x)QB^*(x)) \le C(1 + \|x\|_H^2)$, $x \in H$;
(2) $\langle x, A(x)\rangle_H \le C(1 + \|x\|_H^2)$, $x \in H$;
(3) $\|A(x) - A(y)\|_H + \|B(x) - B(y)\|_{L(H)} \le C_n\|x - y\|_H$ for $\|x\|_H \le n$ and $\|y\|_H \le n$.
Then there exists a unique strong solution to the equation
$$X(t) = x + \int_0^t A\bigl(X(s)\bigr)\,ds + \int_0^t B\bigl(X(s)\bigr)\,dW_s,\quad t > 0.$$
and define Bn in a similar way. Show that An and Bn are globally Lipschitz and
show that for the corresponding solutions Xn (t), there exists a process X(t) such
that X(t ∧ τn ) = Xn (t ∧ τn ) with τn denoting the first exit time of Xn from the
ball of radius n centered at the origin. Finally, show that P (τn < t) → 0 by using
the estimate for the second moment of Xn obtained from the application of the Itô
formula to the function x2H .
Following [47], for a finite subset $V$ of the set of positive integers, define $q^V : H \to H$ by
$$\bigl\langle q^V(y),\ e_k\bigr\rangle_H = \begin{cases} q^k\bigl(y^k\bigr), & k \in V,\\ 0, & k \notin V. \end{cases}$$
From Theorem 5.2 we know that for any fixed $V$, under conditions (5.2), (5.3) and the local Lipschitz condition on the coefficients $F$ and $q^V$, there exists a unique strong solution to the equation
$$\xi^V(t) = x + \int_0^t \bigl[F\bigl(\xi^V(s)\bigr) + q^V\bigl(\xi^V(s)\bigr)\bigr]\,ds + W_t$$
0
We now show that, under a global Lipschitz condition independent of $V$, the laws of the above solution $\xi(t)$ and of the solution $X(t)$ obtained in Theorem 5.1 coincide.
Proof We first note that, by Theorem 5.2, under the Lipschitz condition both approximating sequences $\xi^{V_n}$ of Theorem 2.4 in [47] and $X_n$ of Theorem 5.1 can be constructed as strong solutions on the same probability space. Let $F_n = P_n \circ F \circ P_n$. Then
$$X_n(t) = x_n + \int_0^t \bigl[F_n\bigl(X_n(s)\bigr) + q_n\bigl(X_n(s)\bigr)\bigr]\,ds + W_t^n,$$
$$\xi^{V_n}(t) = x + \int_0^t \bigl[F\bigl(\xi^{V_n}(s)\bigr) + q^{V_n}\bigl(\xi^{V_n}(s)\bigr)\bigr]\,ds + W_t.$$
The laws $\mathcal L(\xi^V)$ are tight (see the proof of Theorem 2.4 in [47]). Therefore, for the sequence $V_n = \{1, 2, \ldots, n\}$, there is a subsequence $V_{n_k}$ such that $\mathcal L(\xi^{V_{n_k}}) \to \mathcal L(\xi)$. Hence, for simplicity, we assume, as in Theorem 2.4 in [47], that $V_n = \{1, \ldots, n\}$ and that $\xi^{V_n} \to \xi$ weakly. Denote $Y_n(t) = P_n\xi^{V_n}(t)$. Then
$$Y_n(t) = x_n + \int_0^t \bigl[P_nF\bigl(\xi^{V_n}(s)\bigr) + q_n\bigl(Y_n(s)\bigr)\bigr]\,ds + W^n(t).$$
0
We obtain
$$\begin{aligned}
\mathbb E\bigl\|X_n(t) - Y_n(t)\bigr\|_H^2 &\le C\,\mathbb E\int_0^t \Bigl(\bigl\|P_nF\bigl(X_n(s)\bigr) - P_nF\bigl(\xi^{V_n}(s)\bigr)\bigr\|_H^2 + \bigl\|q_n\bigl(X_n(s)\bigr) - q_n\bigl(Y_n(s)\bigr)\bigr\|_H^2\Bigr)\,ds\\
&\le C_1\,\mathbb E\int_0^t \Bigl(\bigl\|X_n(s) - \xi^{V_n}(s)\bigr\|_H^2 + \bigl\|X_n(s) - Y_n(s)\bigr\|_H^2\Bigr)\,ds\\
&\le C_1\,\mathbb E\int_0^t \Bigl(\bigl\|Y_n(s) - \xi^{V_n}(s)\bigr\|_H^2 + 2\bigl\|X_n(s) - Y_n(s)\bigr\|_H^2\Bigr)\,ds,
\end{aligned}$$
Note that (5.9) also holds for $\xi^{V_n}$ (the same proof works, with $H$ replacing $\mathbb R^n$). Hence, by changing the underlying probability space to ensure a.s. convergence by means of Skorokhod's theorem, we have that
$$\mathbb E\int_0^t \bigl\|Y_n(s) - \xi^{V_n}(s)\bigr\|_H^2\,ds \le 3\biggl[\mathbb E\int_0^t \bigl\|Y_n(s) - P_n\xi(s)\bigr\|_H^2\,ds + \mathbb E\int_0^t \bigl\|P_n\xi(s) - \xi(s)\bigr\|_H^2\,ds + \mathbb E\int_0^t \bigl\|\xi(s) - \xi^{V_n}(s)\bigr\|_H^2\,ds\biggr] \to 0$$
The components of the drift coefficient of the SDE take the following form:
$$F^k(x) = \frac{\partial}{\partial x^k}\sum_{l\ne k} J_{k,l}\,x^kx^l = \sum_{l\ne k} J_{k,l}\,x^l,$$
so that
$$F^k(x) + q^k(x) = -\frac{\partial}{\partial x^k}H^k(x).$$
with $a_n > 0$.
The results show that a weak solution exists under the sole assumptions that the functions $q^k(u) : \mathbb R \to \mathbb R$ are continuous and satisfy the growth conditions. It should be noted that even if $J_{k,l} = 0$ for all $k, l$, the growth condition (5.3) is necessary for the existence of a solution without explosion, and continuity is needed in the proof of the Peano theorem.
In Euclidean quantum field theory, continuous spin models serve as lattice approximations (see [60] for details), with $\mathbb R^\infty$ replaced by $\mathbb R^{\mathbb Z^d}$. In [60],
$$J_{k,l} = \begin{cases} 1 & \text{if } \sum_{j=1}^d |k_j - l_j| = 1,\\ 0 & \text{otherwise}, \end{cases}\qquad \varphi_k(u) = \bigl(d + m^2/2\bigr)u^2 + P(u),$$
where $k, l \in \mathbb Z^d$.
We will study such models in the next section.
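Once the lattice is truncated to a finite block, the nearest-neighbour couplings $J_{k,l}$ above are concretely a sparse 0–1 matrix. The following Python sketch (our own illustration, not from the monograph) builds this matrix for the $3\times3$ block of $\mathbb Z^2$; interior sites have $2d = 4$ neighbours and corners have $2$.

```python
import numpy as np
from itertools import product

def coupling_matrix(n, d=2):
    """J_{k,l} = 1 iff sum_j |k_j - l_j| = 1 (nearest neighbours), restricted
    to the finite block {0, ..., n-1}^d of the lattice Z^d."""
    sites = list(product(range(n), repeat=d))
    m = len(sites)
    J = np.zeros((m, m))
    for a, k in enumerate(sites):
        for b, l in enumerate(sites):
            if sum(abs(p - q) for p, q in zip(k, l)) == 1:
                J[a, b] = 1.0
    return J

J = coupling_matrix(3, d=2)   # the 9 sites of the 3x3 block in Z^2
```

The symmetry $J = J^T$ reflects $J_{k,l} = J_{l,k}$, and the row sums count the neighbours of each site, which is the bounded-range property used in the estimates below.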
We use recent ideas from Albeverio et al. [3] to study the dynamics of an infinite
particle system corresponding to a Gibbs measure on the lattice. Our technique is to
study weak solutions of an infinite system of SDEs using the work in [22, 36], and the methods in Chap. 4 related to the case of an SDE in the dual of a nuclear space.
This allows us to extend the existence result in [3] by removing the dissipativity
condition.
The work [3] provides results for the existence and uniqueness of solutions to
a system of SDEs describing a lattice spin-system model with spins taking values
in "loop spaces." The space of configurations is $\Omega_\beta = C(S_\beta)^{\mathbb Z^d}$, where $S_\beta$ is a circle with the Euclidean norm, and $\mathbb R^{\mathbb Z^d}$ carries the metric of coordinate-wise convergence. The map $F : \mathbb R^{\mathbb Z^d} \to \mathbb R^{\mathbb Z^d}$ is defined by
$$F(x) = \bigl(F^k(x)\bigr)_{k\in\mathbb Z^d} = \Bigl(-\frac12\sum_{j\in B_{\mathbb Z^d}(k,\rho)} a(k-j)\,x^j\Bigr)_{k\in\mathbb Z^d} \tag{5.12}$$
mapping on the scale of Hilbert spaces $l_2(\mathbb Z^d)$. The functions $q^k : \mathbb R \to \mathbb R$ are the derivatives of potentials $V_k(u)$. In [3], $V_k(u) = \lambda P(u)$ with $P(u)$ as in (5.10), which is the case in an important class of the so-called $P(\varphi)$ models.
We note that
$$\bigl|F^k(x)\bigr| \le \frac12 A\Bigl(\sum_{j\in B_{\mathbb Z^d}(k,\rho)} \bigl|x^j\bigr|^2\Bigr)^{1/2}, \tag{5.13}$$
where
$$A = \Bigl(\sum_{j\in B_{\mathbb Z^d}(k,\rho)} a^2(k-j)\Bigr)^{1/2}.$$
The embeddings between the Hilbert spaces $l_2^n \hookrightarrow l_2^m$ with $m + d/2 < n$ are compact (in fact, Hilbert–Schmidt) operators. The space $\Phi$ endowed with the projective limit topology is a nuclear space of fast decreasing sequences, and the space $\Phi'$, endowed with the inductive limit topology, is its dual.
Let $Q$ be a continuous quadratic form on $\Phi$, and denote its extension (which always exists) to a nuclear form on some $l_2^{-m}$, $m > 0$, by the same symbol.
Proof Let us show that for $m > 0$, $F : l_2^{-m} \to l_2^{-m}$ is Lipschitz continuous. If $x, y \in l_2^{-m}$, $m > 0$, then
$$\begin{aligned}
\bigl\|F(x) - F(y)\bigr\|_{l_2^{-m}}^2 &= \sum_{k\in\mathbb Z^d} \bigl(1 + |k|_{\mathbb Z^d}\bigr)^{-2m}\bigl|F^k(x) - F^k(y)\bigr|^2\\
&\le \sum_{k\in\mathbb Z^d} \bigl(1 + |k|_{\mathbb Z^d}\bigr)^{-2m}\frac{A^2}{4}\sum_{j\in B_{\mathbb Z^d}(k,\rho)} \bigl|x^j - y^j\bigr|^2\\
&\le C_1\|x - y\|_{l_2^{-m}}^2.
\end{aligned}$$
Note that $F$ and $q^k$ satisfy all conditions of Theorem 5.1. Following its proof, we first construct solutions $\xi_n$ of (5.6). Let $|B_n|$ denote the cardinality of the ball $B_{\mathbb Z^d}(0,n)$ of radius $n$ centered at $0$ in $\mathbb Z^d$. Observe that
$$\xi_n \in C\bigl([0,T], \mathbb R^{|B_n|}\bigr).$$
Next, we obtain the approximations $X_n(t) = \sum_{j\in B_{\mathbb Z^d}(0,n)} \xi_n^j(t)\,h_j$, where $\{h_k\}_{k\in\mathbb Z^d}$ denotes the canonical basis in $l_2$. We have
$$X_n \in C\bigl([0,T], l_2\bigr).$$
Since $\|\cdot\|_{l_2^{-m}} \le C\|\cdot\|_{l_2}$, Lemma 3.14 guarantees that the measures $\mu_n = \mathcal L(X_n)$ are tight on $C([0,T], l_2^{-m})$.
5.3 Locally Interacting Particle Systems, Solutions in C([0, T ], R^{Z^d})
Let $\mu$ be a limit point of the sequence $\mu_n$. Using the Skorokhod theorem, we can assume that
$$X_n \to X,\quad P\text{-a.s. in } C\bigl([0,T], l_2^{-m}\bigr).$$
The random variables $(X_n^k(t))^2$ and $\int_0^t [F_n^k(X_n(s)) + q^k(X_n^k(s))]^2\,ds$ are $P$-uniformly integrable. By the same arguments as in the proof of Theorem 5.1, we conclude that the sequence of Brownian motions
$$Y_n^k(t) = X_n^k(t) - x^k - \int_0^t \bigl[F_n^k\bigl(X_n(s)\bigr) + q^k\bigl(X_n^k(s)\bigr)\bigr]\,ds$$
where $\{h_k^{-m}\}_{k\in\mathbb Z^d}$ is the basis in $l_2^{-m}$ obtained by applying the Gram–Schmidt orthonormalization to the vectors $h_k$. Then $X(t), W_t$ satisfy (5.15).
When the assumptions on the drifts q k are more restrictive, (5.14) can be considered
in a Hilbert space. We will require that the functions q k (u) have linear growth.
Then there exists a weak solution $X(\cdot) \in C([0,T], l_2^{-p})$, for some $p > 0$, to the equation
$$X(t) = x + \int_0^t \bigl[F\bigl(X(s)\bigr) + q\bigl(X(s)\bigr)\bigr]\,ds + \int_0^t B\bigl(X(s)\bigr)\,dW_s,\quad x \in l_2, \tag{5.16}$$
where $W_t$ is an $l_2^{-p}$-valued $Q$-Wiener process.
showing that $G(x) = F(x) + q(x) : l_2^{-m} \to l_2^{-m}$, $m > d/2$ (otherwise, we face a divergent series, see Exercise 5.3), and providing the estimate
$$\bigl\|G(x)\bigr\|_{l_2^{-m}}^2 \le C_1\bigl(1 + \|x\|_{l_2^{-m}}^2\bigr).$$
Moreover, we know from the proof of Theorem 5.4 that F : l2−m → l2−m , m > 0, is
Lipschitz continuous.
We will now use the approach presented in Sect. 3.8. With $\{e_k^{-r}\}_{k\in\mathbb Z^d}$, $r > 0$, denoting the ONB in $l_2^{-r}$ consisting of the eigenvectors of $Q$, let $P_n : l_2^{-r} \to l_2^{-r}$ be defined by
$$P_nx = \sum_{k\in B_{\mathbb Z^d}(0,n)} \bigl\langle x, e_k^{-r}\bigr\rangle_{l_2^{-r}}\,e_k^{-r}.$$
a.s. and in $L^1(\Omega, P)$. Next, let $\{x_l\}_{l=1}^\infty$ be a sequence converging to $x$ in $l_2^{-p}$, with $x_l^j = 0$ for $j \notin B_{\mathbb Z^d}(0,l)$. We consider
$$\begin{aligned}
&\mathbb E\bigl|\bigl\langle q\bigl(X_n(t)\bigr), x\bigr\rangle_{l_2^{-p}} - \bigl\langle q\bigl(X(t)\bigr), x\bigr\rangle_{l_2^{-p}}\bigr|\\
&\quad\le \mathbb E\bigl|\bigl\langle q\bigl(X_n(t)\bigr), x\bigr\rangle_{l_2^{-p}} - \bigl\langle q\bigl(X_n(t)\bigr), x_l\bigr\rangle_{l_2^{-p}}\bigr| + \mathbb E\bigl|\bigl\langle q\bigl(X_n(t)\bigr), x_l\bigr\rangle_{l_2^{-p}} - \bigl\langle q\bigl(X(t)\bigr), x_l\bigr\rangle_{l_2^{-p}}\bigr|\\
&\qquad+ \mathbb E\bigl|\bigl\langle q\bigl(X(t)\bigr), x_l\bigr\rangle_{l_2^{-p}} - \bigl\langle q\bigl(X(t)\bigr), x\bigr\rangle_{l_2^{-p}}\bigr|\\
&\quad\le \mathbb E\bigl\|q\bigl(X_n(t)\bigr)\bigr\|_{l_2^{-p}}\|x - x_l\|_{l_2^{-p}} + \sum_{j\in B_{\mathbb Z^d}(0,l)} \mathbb E\bigl|q^j\bigl(X_n^j(t)\bigr) - q^j\bigl(X^j(t)\bigr)\bigr|\,\bigl|x_l^j\bigr| + \mathbb E\bigl\|q\bigl(X(t)\bigr)\bigr\|_{l_2^{-p}}\|x_l - x\|_{l_2^{-p}}\\
&\quad\le C\,\mathbb E\sup_n\Bigl(1 + \bigl\|X_n(t)\bigr\|_{l_2^{-p}} + \bigl\|X(t)\bigr\|_{l_2^{-p}}\Bigr)\|x - x_l\|_{l_2^{-p}} + \sum_{j\in B_{\mathbb Z^d}(0,l)} \mathbb E\bigl|q^j\bigl(X_n^j(t)\bigr) - q^j\bigl(X^j(t)\bigr)\bigr|\,\bigl|x_l^j\bigr|.
\end{aligned}$$
Using the estimate in (5.18), we can choose $l$, independent of $n$, such that the first summand is arbitrarily small. By choosing $n$ large enough and using the continuity of $q^k$ on $\mathbb R$, we can make the second summand arbitrarily small. Using the uniform integrability of the term involving $q$, we conclude that
$$M_n(t) = \bigl\langle X_n(t), x\bigr\rangle_{l_2^{-p}} - \int_0^t \bigl\langle G\bigl(X_n(s)\bigr), x\bigr\rangle_{l_2^{-p}}\,ds \to \bigl\langle X(t), x\bigr\rangle_{l_2^{-p}} - \int_0^t \bigl\langle G\bigl(X(s)\bigr), x\bigr\rangle_{l_2^{-p}}\,ds = M(t) \tag{5.19}$$
Since
$$\operatorname{tr}\bigl(P_nB\bigl(X_n(s)\bigr)P_nQP_nB^*\bigl(X_n(s)\bigr)P_n\bigr) \le \theta\bigl(1 + \bigl\|X_n(s)\bigr\|_{l_2^{-p}}^2\bigr),$$
the left-hand side is uniformly integrable with respect to the measure $dP \times ds$, and it converges $dP \times ds$-a.e. to $\operatorname{tr}(B(X(s))QB^*(X(s)))$ (see Exercise 5.4), implying that
$$\mathbb E\int_0^t \bigl\langle P_nQP_nB^*\bigl(X_n(s)\bigr)P_nx,\ B^*\bigl(X_n(s)\bigr)P_nx\bigr\rangle\,ds \to \mathbb E\int_0^t \bigl\langle QB^*\bigl(X(s)\bigr)x,\ B^*\bigl(X(s)\bigr)x\bigr\rangle\,ds,$$
6.1 Introduction
Let $(X, \|\cdot\|_X)$ be a Banach space, and let us consider the Cauchy problem
$$\begin{cases} \dfrac{du(t)}{dt} = Au(t), & 0 < t < T,\\[4pt] u(0) = x \in X. \end{cases} \tag{6.1}$$
We know that if $A$ generates a $C_0$-semigroup $\{S(t),\ t \ge 0\}$, then the mild solution of the Cauchy problem (6.1) is given by
$$u^x(t) = S(t)x.$$
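For a diagonalizable generator the mild solution is fully explicit. The sketch below (our own finite-dimensional illustration with arbitrary eigenvalues $-a_j$, not from the monograph) encodes $S(t) = \operatorname{diag}(e^{-a_jt})$ and the resulting mild solution $u^x(t) = S(t)x$.

```python
import numpy as np

# Diagonal illustration: for A = diag(-a_j), the C0-semigroup is
# S(t) = diag(exp(-a_j t)) and the mild solution of du/dt = Au, u(0) = x,
# is u^x(t) = S(t) x.  The values of a and x are arbitrary.
a = np.array([1.0, 2.0, 5.0])
x = np.array([1.0, -1.0, 0.5])

def u(t):
    """Mild solution u^x(t) = S(t) x for the diagonal generator."""
    return np.exp(-a * t) * x
```

The semigroup property $S(s+t) = S(s)S(t)$ translates into $u(s+t) = S(s)\,u(t)$, which is easy to verify numerically for this diagonal example.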
exists as a bilinear form, and in fact, the equivalence of conditions (1) and (3) above can be proved (see [13]). We now consider the Cauchy problem in a real Hilbert space,
$$\begin{cases} \dfrac{du(t)}{dt} = Au(t), & 0 < t < T,\\[4pt] u(0) = x \in H. \end{cases} \tag{6.2}$$
Theorem 6.1 Let $(H, \langle\cdot,\cdot\rangle_H)$ be a real Hilbert space. The following conditions are equivalent:
(1) The solution $\{u^x(t),\ t \ge 0\}$ of the Cauchy problem (6.2) is exponentially stable.
(2) There exists a nonnegative symmetric operator $R$ such that, for $x \in D(A)$,
$$A^*Rx + RAx = -x.$$
giving (2).
From the above calculations, condition (2) implies that
$$\frac{d}{dt}\bigl\langle RS(t)x,\ S(t)x\bigr\rangle_H = -\bigl\|S(t)x\bigr\|_H^2.$$
Hence,
$$\int_0^t \bigl\|S(u)x\bigr\|_H^2\,du = \langle Rx, x\rangle_H - \bigl\langle RS(t)x,\ S(t)x\bigr\rangle_H \le \langle Rx, x\rangle_H.$$
Thus, $\int_0^\infty \|S(t)x\|_H^2\,dt < \infty$.
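Condition (2) is the operator Lyapunov equation $A^*R + RA = -I$, solved by $R = \int_0^\infty S(t)^*S(t)\,dt$. In the diagonal finite-dimensional case this integral evaluates in closed form, as the following sketch (our own illustration with arbitrary $a_j > 0$, not from the monograph) verifies.

```python
import numpy as np

# For the stable diagonal generator A = diag(-a_j), the semigroup is
# S(t) = diag(exp(-a_j t)), so R = int_0^infty S(t)* S(t) dt = diag(1/(2 a_j))
# solves the Lyapunov equation A* R + R A = -I.
a = np.array([1.0, 3.0, 10.0])
A = np.diag(-a)
R = np.diag(1.0 / (2.0 * a))
lyapunov_residual = A.T @ R + R @ A + np.eye(3)
```

Note that $R$ is positive here, but its smallest eigenvalue $1/(2\max_j a_j)$ shrinks as the spectrum of $A$ moves left, foreshadowing the remark below that $\langle Rx, x\rangle_H \ge c_1\|x\|^2$ can fail in infinite dimensions.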
We know that $S(t)x \to 0$ as $t \to \infty$ for each $x$ (see Exercise 6.1). Hence, by the uniform boundedness principle, for some constant $M$, we have $\|S(t)\|_{L(H)} \le M$ for all $t \ge 0$.
Consider the map $T : H \to L^2(\mathbb R^+, H)$, $(Tx)(t) = S(t)x$. Then $T$ is a closed linear operator on $H$. Using the closed graph theorem, we have
$$\int_0^\infty \bigl\|S(t)x\bigr\|_H^2\,dt \le c^2\|x\|_H^2.$$
Let $\beta = M\rho < 1$, and let $t_1 > t_0$ be fixed. For $0 < s < t_1$, let $t = nt_1 + s$. Then
$$\bigl\|S(t)\bigr\|_{L(H)} \le \bigl\|S(nt_1)\bigr\|_{L(H)}\bigl\|S(s)\bigr\|_{L(H)} \le M\bigl\|S(t_1)\bigr\|_{L(H)}^n \le M\beta^n \le Me^{-\mu t},$$
Corollary 6.1 If $S(\cdot)x \in L^2(\mathbb R^+, H)$ for all $x$ in a real separable Hilbert space $H$, then
$$\bigl\|S(t)\bigr\|_{L(H)} \le c\,e^{-rt}\quad\text{for some } r > 0.$$
Exercise 6.1 (a) Find a continuous function $f(t)$ such that $\int_0^\infty (f(t))^2\,dt < \infty$ but $\lim_{t\to\infty} f(t) \ne 0$.
(b) Show that if $\int_0^\infty \|S(t)x\|_H^2\,dt < \infty$ for every $x \in H$, then $\lim_{t\to\infty} \|S(t)x\|_H = 0$ for every $x \in H$.
Hint: recall that $\|S(t)\|_{L(H)} \le Me^{\alpha t}$. Assume that $\|S(t_j)x\|_H > \delta$ for some sequence $t_j \to \infty$. Then $\|S(t)x\|_H \ge \delta(Me)^{-1}$ on $[t_j - \alpha^{-1},\ t_j]$.
We note that $\langle Rx, x\rangle_H$ does not play the role of a Lyapunov function, since in the infinite-dimensional case the bound $\langle Rx, x\rangle_H \ge c_1\|x\|^2$ with $c_1 > 0$ need not hold (see Example 6.1). We shall show that if $A$ generates a pseudo-contraction semigroup, then we can produce a Lyapunov function related to $R$. The function $\Lambda$ in Theorems 6.2 and 6.3 is called the Lyapunov function. Let us recall that $\{S(t),\ t \ge 0\}$ is a pseudo-contraction semigroup if there exists $\omega \in \mathbb R$ such that
$$\bigl\|S(t)\bigr\|_{L(H)} \le e^{\omega t}.$$
Theorem 6.2 (a) Let $\{u^x(t),\ t \ge 0\}$ be a mild solution to the Cauchy problem (6.2). Suppose that there exists a real-valued function $\Lambda$ on $H$ satisfying the following conditions:
(1) $c_1\|x\|_H^2 \le \Lambda(x) \le c_2\|x\|_H^2$ for $x \in H$,
(2) $\langle \Lambda'(x), Ax\rangle_H \le -c_3\Lambda(x)$ for $x \in D(A)$,
where $c_1, c_2, c_3$ are positive constants. Then the solution $u^x(t)$ is exponentially stable.
(b) If the solution $\{u^x(t),\ t \ge 0\}$ to the Cauchy problem (6.2) is exponentially stable and $A$ generates a pseudo-contraction semigroup, then there exists a real-valued function $\Lambda$ on $H$ satisfying conditions (1) and (2) in part (a).
proving (a).
(b) Conversely, we first observe that for Ψ(x) = ⟨Rx, x⟩_H with R defined in (6.3), we have Ψ′(x) = 2Rx by the symmetry of R. Since R = R*, we can write
\[
\langle \Psi'(x), Ax\rangle_H = \langle Rx, Ax\rangle_H + \langle Rx, Ax\rangle_H = \langle A^*Rx, x\rangle_H + \langle x, RAx\rangle_H = \langle A^*Rx + RAx, x\rangle_H = -\|x\|_H^2.
\]
Clearly Λ(x) satisfies condition (1) of part (a). Since S(t) is a pseudo-contraction semigroup, there exists a constant λ (assumed positive without loss of generality) such that (see Exercise 3.5)
\[
\langle x, Ax\rangle_H \le \lambda\|x\|_H^2, \quad x \in D(A). \tag{6.4}
\]
We calculate
\[
\langle \Lambda'(x), Ax\rangle_H = \langle \Psi'(x), Ax\rangle_H + 2\alpha\langle x, Ax\rangle_H \le \|x\|_H^2(2\alpha\lambda - 1).
\]
Choosing α small enough, so that 2αλ < 1, and using condition (1), we obtain (2) of part (a).
Let us now consider the case of a coercive operator A (see condition (6.5)), with a view towards applications to PDEs. For this, we recall some concepts from Part I. We have a Gelfand triplet of real separable Hilbert spaces
\[
V \hookrightarrow H \hookrightarrow V^*,
\]
where the embeddings are continuous. The space V* is the continuous dual of V, with the duality on V × V* denoted by ⟨·, ·⟩ and satisfying
\[
\langle v, h\rangle = \langle v, h\rangle_H \quad\text{if } h \in H.
\]
Assume that V is dense in H. We shall now construct a Lyapunov function for determining the exponential stability of the solution of the Cauchy problem (6.2), where A : V → V* is a linear bounded operator satisfying the coercivity condition
We note that the following energy equality from [72] holds for solutions u^x(t) ∈ L²([0,T], V) ∩ C([0,T], H):
\[
\|u^x(t)\|_H^2 - \|x\|_H^2 = 2\int_0^t \bigl\langle u^x(s), Au^x(s)\bigr\rangle\,ds. \tag{6.6}
\]
Theorem 6.3 (a) The solution of the Cauchy problem (6.2) with a coercive coefficient A is exponentially stable if there exists a real-valued function Λ that is Fréchet differentiable on H, with Λ and Λ′ continuous and locally bounded on H, satisfying the following conditions:
(1) c₁‖x‖²_H ≤ Λ(x) ≤ c₂‖x‖²_H.
(2) For x ∈ V, Λ′(x) ∈ V, and the function V ∋ x ↦ ⟨Λ′(x), v⟩ ∈ ℝ
\[
\Lambda\bigl(u^x(t)\bigr) - \Lambda\bigl(u^x(t')\bigr) = \int_{t'}^{t} \frac{d}{ds}\Lambda\bigl(u^x(s)\bigr)\,ds.
\]
\[
\frac{d}{ds}\Lambda\bigl(u^x(s)\bigr) = \bigl\langle \Lambda'\bigl(u^x(s)\bigr), Au^x(s)\bigr\rangle \le -c_3\Lambda\bigl(u^x(s)\bigr).
\]
Denoting Φ(t) = Λ(ux (t)), we can then write
Hence,
\[
\|u^x(t)\|_H^2 + \alpha\int_0^t \|u^x(s)\|_V^2\,ds \le \|x\|_H^2 + |\lambda|\int_0^t \|u^x(s)\|_H^2\,ds.
\]
we obtain
\[
\int_0^\infty \|u^x(s)\|_V^2\,ds \le \frac{1}{\alpha}\Bigl(1 + \frac{|\lambda|c}{2\gamma}\Bigr)\|x\|_H^2.
\]
Define
\[
\Lambda(x) = \int_0^\infty \|u^x(s)\|_V^2\,ds,
\]
Using the fact that u^x(s) ∈ L²([0,∞), V) and the Schwarz inequality, we see that T(x, y) is a continuous bilinear form on V, which is also continuous on H. Hence, T(x, y) = ⟨C̃x, y⟩_H. Since Λ′(x) = 2C̃x (identifying H with H*), we see that Λ and Λ′ are locally bounded and continuous on H. By the continuity of the embedding V ↪ H, we have that, for v, v′ ∈ V, T(v, v′) = ⟨Cv, v′⟩_V for some bounded linear operator C on V, and property (2) in (a) follows. Now,
\[
\|u^x(t)\|_H^2 - \|x\|_H^2 = 2\int_0^t \bigl\langle Au^x(s), u^x(s)\bigr\rangle\,ds,
\]
\[
\|u^x(t)\|_H^2 - \|x\|_H^2 \ge -2c_2\int_0^t \|u^x(s)\|_V^2\,ds.
\]
210 6 Stability Theory for Strong and Mild Solutions
Now divide both sides by t and let t → 0. Since Λ′ is continuous and u^x(·) ∈ C([0,T], H), we get
\[
\bigl\langle \Lambda'(x), Ax\bigr\rangle \le -\frac{1}{c_0^2}\|x\|_H^2.
\]
then Λ(x) does not satisfy the lower bound in condition (1) of part (a) of Theorem 6.3.
Here, H = L²(ℝ), and V is the Sobolev space W^{1,2}(ℝ). We denote by φ̂(λ) the Fourier transform of φ(x) and use the similar notation û(t, λ) for the Fourier transform of u(t, x). Then (6.8) can be written as follows:
\[
\begin{cases}
\dfrac{d\hat u(t,\lambda)}{dt} = -a^2\lambda^2\hat u(t,\lambda) + (ib\lambda + c)\hat u(t,\lambda),\\
\hat u(0,\lambda) = \hat\varphi(\lambda).
\end{cases} \tag{6.9}
\]
The solution is
\[
\hat u^\varphi(t,\lambda) = \hat\varphi(\lambda)\exp\bigl\{\bigl(-a^2\lambda^2 + ib\lambda + c\bigr)t\bigr\}.
\]
By Plancherel's theorem, ‖u^φ(t,·)‖_H = ‖û^φ(t,·)‖_H, so that
\[
\|u^\varphi(t,\cdot)\|_H^2 = \int_{-\infty}^{\infty} \bigl|\hat\varphi(\lambda)\bigr|^2\exp\bigl\{\bigl(-2a^2\lambda^2 + 2c\bigr)t\bigr\}\,d\lambda \le \|\varphi\|_H^2\exp\{\gamma t\} \quad (\gamma = 2c).
\]
does not satisfy Λ(φ) ≥ c₁‖φ‖²_H (see condition (1) in part (a) of Theorem 6.3).
In the next section, we consider the stability problem for infinite-dimensional
stochastic differential equations using the Lyapunov function approach. We shall
show that the fact that a Lyapunov function for the linear case is bounded below can
be used to study the stability for nonlinear stochastic PDEs.
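The Fourier-multiplier computation in the example above is easy to reproduce numerically: propagate φ̂ by exp{(−a²λ² + ibλ + c)t} on a frequency grid and check the Plancherel bound ‖u^φ(t,·)‖²_H ≤ ‖φ‖²_H e^{2ct}. A sketch (NumPy; the parameter values and the Gaussian initial datum are our illustrative choices):

```python
import numpy as np

a, b, c = 1.0, 0.5, 0.3
lam = np.linspace(-40.0, 40.0, 8001)   # frequency grid
phi_hat = np.exp(-lam ** 2)            # Fourier transform of a Gaussian datum

def sq_norm(t):
    """||u^phi(t,.)||_H^2 via Plancherel: integral of |phi_hat|^2 e^{(-2a^2 lam^2 + 2c) t}."""
    return np.trapz(np.abs(phi_hat * np.exp((-a**2 * lam**2 + 1j*b*lam + c) * t)) ** 2, lam)

norm0 = sq_norm(0.0)
for t in (0.5, 1.0, 2.0):
    # the bound with gamma = 2c from the example holds at every t
    print(t, sq_norm(t) <= norm0 * np.exp(2 * c * t) + 1e-9)
```

Note that with c > 0 the norm can actually grow, which is exactly why this Λ fails the lower bound in the example.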
where
(1) A is the generator of a C₀-semigroup {S(t), t ≥ 0} on H.
(2) W_t is a K-valued F_t-Wiener process with covariance Q.
(3) F : H → H and B : H → L(K, H) are Bochner-measurable functions satisfying
\[
\|F(x)\|_H^2 + \operatorname{tr}\bigl(B(x)QB^*(x)\bigr) \le \ell\bigl(1 + \|x\|_H^2\bigr),
\]
\[
\|F(x) - F(y)\|_H^2 + \operatorname{tr}\bigl(\bigl(B(x) - B(y)\bigr)Q\bigl(B(x) - B(y)\bigr)^*\bigr) \le K\|x - y\|_H^2.
\]
Then (6.10) has a unique F_t-adapted mild solution (Chap. 3, Theorem 3.5), which is a Markov process (Chap. 3, Theorem 3.6) and depends continuously on the initial condition (Chap. 3, Theorem 3.7). That is, the integral equation
\[
X(t) = S(t)x + \int_0^t S(t-s)F\bigl(X(s)\bigr)\,ds + \int_0^t S(t-s)B\bigl(X(s)\bigr)\,dW_s \tag{6.11}
\]
has a solution in C([0, T], L^{2p}((Ω, F, P), H)), p ≥ 1. Here F = σ(⋃_{t≥0} F_t).
In addition, the solution of (6.11) can be approximated by the solutions X_n obtained by using Yosida approximations of the operator A in the following manner.
Recall from (1.22), Chap. 1, that for n ∈ ρ(A), the resolvent set of A, R(n, A) denotes the resolvent of A at n, and if R_n = nR(n, A), then A_n = AR_n are the Yosida approximations of A. The approximating semigroup is S_n(t) = e^{tA_n}. Consider the strong solution X_n^x of
\[
\begin{cases}
dX(t) = \bigl(A_nX(t) + F(X(t))\bigr)\,dt + B\bigl(X(t)\bigr)\,dW_t,\\
X(0) = x \in H.
\end{cases} \tag{6.12}
\]
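In finite dimensions these objects can be tabulated directly: with A a matrix, R(n, A) = (nI − A)^{-1}, R_n = nR(n, A) → I, and A_n = AR_n → A as n → ∞. A small numerical sketch (NumPy; the matrix A is an arbitrary stable example, not from the text):

```python
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # "generator" (here just a matrix)
I2 = np.eye(2)

def yosida(A, n):
    """Return R_n = n R(n, A) = n (nI - A)^{-1} and the Yosida approximation A_n = A R_n."""
    Rn = n * np.linalg.inv(n * I2 - A)
    return Rn, A @ Rn

for n in (10, 100, 1000):
    Rn, An = yosida(A, n)
    print(n, np.linalg.norm(Rn - I2), np.linalg.norm(An - A))
# the errors decrease like 1/n, reflecting R_n x -> x and A_n x -> Ax
```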
\[
\Psi\bigl(t, X^x(t)\bigr) - \Psi(0, x) = \int_0^t \Bigl(\Psi_t\bigl(s, X^x(s)\bigr) + L\Psi\bigl(s, X^x(s)\bigr)\Bigr)\,ds + \int_0^t \bigl\langle \Psi_x\bigl(s, X^x(s)\bigr), B\bigl(X^x(s)\bigr)\,dW_s\bigr\rangle_H, \tag{6.14}
\]
where
\[
L\Psi(t, x) = \bigl\langle \Psi_x(t, x), Ax + F(x)\bigr\rangle_H + \tfrac12\operatorname{tr}\bigl(\Psi_{xx}(t, x)B(x)QB^*(x)\bigr). \tag{6.15}
\]
Clearly (6.14) is valid for strong solutions of (6.12), with x ∈ H and A replaced by A_n in (6.15).
We are ready to discuss the stability of mild solutions of (6.10).
6.2 Exponential Stability for Stochastic Differential Equations 213
Definition 6.1 Let {X^x(t), t ≥ 0} be a mild solution of (6.10). We say that X^x(t) is exponentially stable in the mean square sense (m.s.s.) if for all t ≥ 0 and x ∈ H,
\[
E\|X^x(t)\|_H^2 \le ce^{-\beta t}\|x\|_H^2, \quad c, \beta > 0. \tag{6.16}
\]
Theorem 6.4 The mild solution of (6.10) is exponentially stable in the m.s.s. if there exists a function Λ : H → ℝ satisfying the following conditions:
(1) Λ ∈ C²_{2p}(H).
Proof Assume first that the initial condition x ∈ D(A). Let X_n^x(t) be the mild solution given by Theorem 3.5 in Chap. 3 of the approximating equation
\[
\begin{cases}
dX(t) = \bigl(AX(t) + R_nF(X(t))\bigr)\,dt + R_nB\bigl(X(t)\bigr)\,dW_t,\\
X(0) = x \in D(A),
\end{cases} \tag{6.17}
\]
that is,
\[
X_n^x(t) = S(t)x + \int_0^t S(t-s)R_nF\bigl(X_n^x(s)\bigr)\,ds + \int_0^t S(t-s)R_nB\bigl(X_n^x(s)\bigr)\,dW_s,
\]
where
\[
L_n\Lambda(x) = \bigl\langle \Lambda'(x), Ax + R_nF(x)\bigr\rangle_H + \tfrac12\operatorname{tr}\bigl(\Lambda''(x)\bigl(R_nB(x)\bigr)Q\bigl(R_nB(x)\bigr)^*\bigr). \tag{6.18}
\]
By condition (3),
\[
c_3\Lambda(x) + L_n\Lambda(x) \le -L\Lambda(x) + L_n\Lambda(x).
\]
The RHS of the above equals
\[
\bigl\langle \Lambda'(x), (R_n - I)F(x)\bigr\rangle_H + \tfrac12\operatorname{tr}\Bigl[\Lambda''(x)\Bigl(\bigl(R_nB(x)\bigr)Q\bigl(R_nB(x)\bigr)^* - B(x)Q\bigl(B(x)\bigr)^*\Bigr)\Bigr].
\]
Hence,
\[
\begin{aligned}
e^{c_3t}E\Lambda\bigl(X_n^x(t)\bigr) - \Lambda(x) \le E\int_0^t e^{c_3s}\Bigl(&\bigl\langle \Lambda'\bigl(X_n^x(s)\bigr), (R_n - I)F\bigl(X_n^x(s)\bigr)\bigr\rangle_H\\
&+ \tfrac12\operatorname{tr}\Bigl[\Lambda''\bigl(X_n^x(s)\bigr)\Bigl(\bigl(R_nB\bigl(X_n^x(s)\bigr)\bigr)Q\bigl(R_nB\bigl(X_n^x(s)\bigr)\bigr)^* - B\bigl(X_n^x(s)\bigr)Q\bigl(B\bigl(X_n^x(s)\bigr)\bigr)^*\Bigr)\Bigr]\Bigr)\,ds. \tag{6.19}
\end{aligned}
\]
Consider
\[
E\|X_n^x(t) - X^x(t)\|_H^2 \le E\Bigl\|\int_0^t S(t-s)\bigl(R_nF\bigl(X_n^x(s)\bigr) - F\bigl(X^x(s)\bigr)\bigr)\,ds + \int_0^t S(t-s)\bigl(R_nB\bigl(X_n^x(s)\bigr) - B\bigl(X^x(s)\bigr)\bigr)\,dW_s\Bigr\|_H^2
\]
\[
\begin{aligned}
\le C\Bigl(\;& E\Bigl(\int_0^t \bigl\|S(t-s)R_n\bigl(F\bigl(X_n^x(s)\bigr) - F\bigl(X^x(s)\bigr)\bigr)\bigr\|_H\,ds\Bigr)^2\\
&+ E\int_0^t \bigl\|S(t-s)R_n\bigl(B\bigl(X_n^x(s)\bigr) - B\bigl(X^x(s)\bigr)\bigr)\bigr\|_{L_2(K_Q,H)}^2\,ds\\
&+ E\Bigl(\int_0^t \bigl\|S(t-s)(R_n - I)F\bigl(X^x(s)\bigr)\bigr\|_H\,ds\Bigr)^2\\
&+ E\int_0^t \bigl\|S(t-s)(R_n - I)B\bigl(X^x(s)\bigr)\bigr\|_{L_2(K_Q,H)}^2\,ds\Bigr).
\end{aligned}
\]
t
The first two summands are bounded by CK E 0 Xnx (s) − X x (s)2H for
n > n0 (n0 sufficiently large), where C depends on sup0≤t≤T S(t)L (H ) and
supn>n0 Rn L (H ) , and K is the Lipschitz constant.
By the properties of Rn , the integrand in the third summand converges to zero,
and, by (2.17) in Lemma 2.2, Chap. 2, the integrand in the fourth summand con-
verges to zero. Both integrands are bounded by C X x (s)2H for some constant C
depending on the norms of S(t) and Rn , similar as above, and the constant in the
linear growth condition. By the Lebesgue DCT, the third and fourth summands can
be bounded uniformly in t by εn (T ) → 0.
An appeal to Gronwall’s lemma completes the argument.
The convergence in (6.20) allows us to choose a subsequence X_{n_k}^x along which
\[
\bigl\langle \Lambda''\bigl(X_n^x(s)\bigr)R_nB\bigl(X_n^x(s)\bigr)f_j, R_nB\bigl(X_n^x(s)\bigr)f_j\bigr\rangle_H \to \bigl\langle \Lambda''\bigl(X^x(s)\bigr)B\bigl(X^x(s)\bigr)f_j, B\bigl(X^x(s)\bigr)f_j\bigr\rangle_H.
\]
Hence,
\[
\operatorname{tr}\Bigl(\Lambda''\bigl(X_n^x(s)\bigr)\bigl(R_nB\bigl(X_n^x(s)\bigr)\bigr)Q\bigl(R_nB\bigl(X_n^x(s)\bigr)\bigr)^*\Bigr) \to \operatorname{tr}\Bigl(\Lambda''\bigl(X^x(s)\bigr)B\bigl(X^x(s)\bigr)Q\bigl(B\bigl(X^x(s)\bigr)\bigr)^*\Bigr).
\]
Obviously,
\[
\operatorname{tr}\Bigl(\Lambda''\bigl(X_n^x(s)\bigr)B\bigl(X_n^x(s)\bigr)Q\bigl(B\bigl(X_n^x(s)\bigr)\bigr)^*\Bigr) \to \operatorname{tr}\Bigl(\Lambda''\bigl(X^x(s)\bigr)B\bigl(X^x(s)\bigr)Q\bigl(B\bigl(X^x(s)\bigr)\bigr)^*\Bigr).
\]
Now we use assumption (1), the continuity and local boundedness of Λ′ and Λ″, the growth condition on F and B, and the fact that
\[
\sup_{0\le s\le T} E\|X_n^x(s)\|_H^2 < \infty,
\]
and apply Lebesgue's DCT to conclude that the right-hand side in (6.19) converges to zero.
By the continuity of Λ and (6.20), we obtain
\[
e^{c_3t}E\Lambda\bigl(X^x(t)\bigr) \le \Lambda(x),
\]
and finally, by condition (2),
\[
E\|X^x(t)\|_H^2 \le \frac{c_2}{c_1}e^{-c_3t}\|x\|_H^2, \quad x \in D(A). \tag{6.21}
\]
We recall that the mild solution X^x(t) depends continuously on the initial condition x ∈ H in the following way (Lemma 3.7): for T > 0,
\[
\sup_{t\le T} E\|X^x(t) - X^y(t)\|_H^2 \le c_T\|x - y\|_H^2.
\]
Then for t ≤ T,
\[
\begin{aligned}
E\|X^x(t)\|_H^2 &\le E\|X^y(t)\|_H^2 + E\|X^x(t) - X^y(t)\|_H^2\\
&\le \frac{c_2}{c_1}e^{-c_3t}\|y\|_H^2 + c_T\|x - y\|_H^2\\
&\le \frac{c_2}{c_1}e^{-c_3t}\,2\|x - y\|_H^2 + \frac{c_2}{c_1}e^{-c_3t}\,2\|x\|_H^2 + c_T\|x - y\|_H^2
\end{aligned}
\]
for all y ∈ D(A), forcing inequality (6.21) to hold for all x ∈ H, since D(A) is dense in H.
The concept of exponential stability in the m.s.s. for mild solutions of (6.22) obviously transfers to this case. We show that the existence of a Lyapunov function is a necessary condition for the stability of mild solutions of (6.22). The following notation
will be used:
\[
L_0\Psi(x) = \bigl\langle \Psi'(x), Ax\bigr\rangle_H + \tfrac12\operatorname{tr}\bigl(\Psi''(x)(B_0x)Q(B_0x)^*\bigr). \tag{6.24}
\]
Proof Let
\[
\Lambda_0(x) = \int_0^\infty E\|X^x(t)\|_H^2\,dt + \alpha\|x\|_H^2,
\]
where the value of the constant α > 0 will be determined later. Note that X^x(t) depends on x linearly. The exponential stability in the m.s.s. implies that
\[
\int_0^\infty E\|X^x(t)\|_H^2\,dt < \infty.
\]
Let
\[
\Psi(x) = \langle \tilde Tx, x\rangle_H, \qquad \langle \tilde T(t)x, x\rangle_H = \int_0^t E\|X^x(s)\|_H^2\,ds.
\]
We have
\[
\Psi_n(t)\bigl(X_n^x(s)\bigr) = \Bigl(\int_0^t E\|X_n^y(u)\|_H^2\,du\Bigr)\Big|_{y = X_n^x(s)}.
\]
be the transition semigroup. Using the uniqueness of the solution, the Markov property (3.59) yields
\[
E\Psi_n(t)\bigl(X_n^x(s)\bigr) = E\int_0^t (\tilde P_u\varphi)\bigl(X_n^x(s)\bigr)\,du = E\int_0^t E\Bigl[\varphi\bigl(X_n^x(u+s)\bigr)\,\Big|\,\mathcal F_s^{X_n^x}\Bigr]\,du = \int_0^t E\|X_n^x(u+s)\|_H^2\,du = \Psi_n(t+s)(x) - \Psi_n(s)(x). \tag{6.25}
\]
With t and n fixed, we use the Itô formula for the function Ψ_n(t)(x) and take the expectation of both sides to arrive at
\[
E\Psi_n(t)\bigl(X_n^x(s)\bigr) = \Psi_n(t)(x) + E\int_0^s L_n\Psi_n(t)\bigl(X_n^x(u)\bigr)\,du, \tag{6.26}
\]
where
\[
L_n\Psi_n(t)(x) = 2\bigl\langle \tilde T_n(t)x, A_nx\bigr\rangle_H + \operatorname{tr}\bigl(\tilde T_n(t)(B_0x)Q(B_0x)^*\bigr).
\]
Putting (6.25) and (6.26) together, we have
\[
\Psi_n(t+s)(x) - \Psi_n(s)(x) = E\int_0^s L_n\Psi_n(t)\bigl(X_n^x(u)\bigr)\,du + \Psi_n(t)(x).
\]
Hence,
\[
\lim_{s\to 0}\frac{\Psi_n(s)(x)}{s} = \lim_{s\to 0}\frac1s\int_0^s E\|X_n^x(u)\|_H^2\,du = \|x\|_H^2. \tag{6.28}
\]
Now consider
\[
EL_n\Psi_n(t)\bigl(X_n^x(u)\bigr) = 2E\bigl\langle \tilde T_n(t)X_n^x(u), A_nX_n^x(u)\bigr\rangle_H + E\operatorname{tr}\Bigl(\tilde T_n(t)\bigl(B_0X_n^x(u)\bigr)Q\bigl(B_0X_n^x(u)\bigr)^*\Bigr).
\]
Since
\[
\lim_{u\to 0} A_nX_n^x(u) = A_nx, \qquad \lim_{u\to 0} \tilde T_n(t)X_n^x(u) = \tilde T_n(t)x,
\]
and
\[
\bigl|\bigl\langle \tilde T_n(t)X_n^x(u), A_nX_n^x(u)\bigr\rangle\bigr| \le \|\tilde T_n(t)\|_{L(H)}\|A_n\|_{L(H)}\|X_n^x(u)\|_H^2 \in L^1(\Omega),
\]
For the term involving the trace, we simplify the notation and denote
\[
\Phi_n(u) = B_0X_n^x(u) \quad\text{and}\quad x_n^j(u) = \Phi_n(u)f_j,
\]
where {f_j}_{j=1}^∞ is an ONB in K that diagonalizes the covariance operator Q. Using Exercise 2.19, we have
\[
\operatorname{tr}\bigl(\tilde T_n(t)\Phi_n(u)Q\bigl(\Phi_n(u)\bigr)^*\bigr) = \operatorname{tr}\bigl(\bigl(\Phi_n(u)\bigr)^*\tilde T_n(t)\Phi_n(u)Q\bigr) = \sum_{j=1}^\infty \lambda_j\bigl\langle \tilde T_n(t)\Phi_n(u)f_j, \Phi_n(u)f_j\bigr\rangle_H = \sum_{j=1}^\infty \lambda_j\bigl\langle \tilde T_n(t)x_n^j(u), x_n^j(u)\bigr\rangle_H = \sum_{j=1}^\infty \lambda_j\int_0^t E\bigl\|X_n^{x_n^j(u)}(s)\bigr\|_H^2\,ds. \tag{6.29}
\]
\[
\sum_{j=1}^\infty \lambda_j\int_0^t E\bigl\|X_n^{x_n^j(u)}(s)\bigr\|_H^2\,ds \to \sum_{j=1}^\infty \lambda_j\int_0^t E\bigl\|X_n^{x^j}(s)\bigr\|_H^2\,ds = \operatorname{tr}\bigl(\tilde T_n(t)(B_0x)Q(B_0x)^*\bigr),
\]
where x^j = (B₀x)f_j. Consequently,
\[
\frac{d\Psi_n(t)(x)}{dt} = L_n\Psi_n(t)(x) + \|x\|_H^2.
\]
In the next step, we fix t and let n → ∞. By the mean-square continuity of X_n^x(t) and the definitions of ⟨T̃_n(t)x, x⟩_H and ⟨T̃(t)x, x⟩_H, we can calculate the derivatives below, and the convergence follows from condition (6.13):
Consider
\[
\bigl|\langle \tilde T_n(t)x, A_nx\rangle_H - \langle \tilde T(t)x, Ax\rangle_H\bigr| \le \|\tilde T_n(t)x\|_H\|(A_n - A)x\|_H + \bigl|\bigl\langle\bigl(\tilde T_n(t) - \tilde T(t)\bigr)x, Ax\bigr\rangle_H\bigr| \to 0.
\]
Since
\[
\lim_{n\to\infty} E\int_0^T \|X_n^x(u) - X^x(u)\|_H^2\,du = 0, \tag{6.31}
\]
we thus have the weak convergence of T̃_n(t)x to T̃(t)x and, further, by the Banach–Steinhaus theorem, we deduce that sup_n ‖T̃_n(t)‖_{L(H)} < ∞. Using similar calculations,
\[
\frac{d\langle \tilde T(t)x, x\rangle_H}{dt} = L_0\langle \tilde T(t)x, x\rangle_H + \|x\|_H^2.
\]
We will now let t → ∞. Then, by the exponential stability condition,
\[
\frac{d\langle \tilde T(t)x, x\rangle_H}{dt} = E\|X^x(t)\|_H^2 \to 0.
\]
Since ⟨T̃(t)x, x⟩_H → ⟨T̃x, x⟩_H, using the weak convergence of T̃(t)x to T̃x and the Lebesgue DCT, exactly as above, we obtain that
\[
L_0\langle \tilde T(t)x, x\rangle_H = 2\langle \tilde T(t)x, Ax\rangle_H + \operatorname{tr}\bigl(\tilde T(t)(B_0x)Q(B_0x)^*\bigr) \to 2\langle \tilde Tx, Ax\rangle_H + \operatorname{tr}\bigl(\tilde T(B_0x)Q(B_0x)^*\bigr) = L_0\langle \tilde Tx, x\rangle_H.
\]
In conclusion,
\[
L_0\Psi(x) = -\|x\|_H^2, \quad x \in D(A).
\]
Now, Λ₀ satisfies conditions (1) and (2). To prove condition (3'), let us note that, as in Sect. 6.1, since ‖S(t)‖ ≤ e^{ωt}, inequality (6.4) is valid for some constant λ > 0. Hence,
\[
L_0\|x\|_H^2 = 2\langle x, Ax\rangle_H + \operatorname{tr}\bigl((B_0x)Q(B_0x)^*\bigr) \le \bigl(2\lambda + d^2\operatorname{tr}(Q)\bigr)\|x\|_H^2 \tag{6.32}
\]
gives
\[
L_0\Lambda_0(x) \le -\|x\|_H^2 + \alpha\bigl(2\lambda + d^2\operatorname{tr}(Q)\bigr)\|x\|_H^2 \le -c_3\Lambda_0(x),
\]
with c₃ > 0, by choosing α small enough.
Remark 6.1 For the nonlinear equation (6.10), we need to assume F(0) = 0 and B(0) = 0 to ensure that zero is a solution. In this case, if the solution {X^x(t), t ≥ 0} is exponentially stable in the m.s.s., we can still construct
\[
\Lambda(x) = \int_0^\infty E\|X^x(t)\|_H^2\,dt + \alpha\|x\|_H^2.
\]
\[
L\Psi(x) = -\|x\|_H^2
\]
and
noting the form of the infinitesimal generator L of the Markov process X^x(t). We obtain
\[
L\Lambda(x) \le -\|x\|_H^2 + 2\alpha\lambda\|x\|_H^2 + \alpha\bigl(2\bigl\langle x, F(x)\bigr\rangle_H + \operatorname{tr}\bigl(B(x)QB^*(x)\bigr)\bigr).
\]
Now, using the facts that F(0) = 0 and B(0) = 0 and the Lipschitz property of F and B, we obtain
\[
L\Lambda(x) \le -\|x\|_H^2 + \alpha\bigl(2\lambda + 2K + K^2\operatorname{tr}(Q)\bigr)\|x\|_H^2.
\]
Hence, for α small enough, condition (3) follows from condition (2).
As shown in Part I, differentiability with respect to the initial value requires stringent assumptions on the coefficients F and B. To make the result more applicable, we provide another technique that uses a first-order approximation. We use the trace norm of a difference of nonnegative definite operators in the approximation condition. Recall that for any trace-class operator T, we defined the trace norm in (2.1) by
\[
\tau(T) = \operatorname{tr}\bigl((TT^*)^{1/2}\bigr).
\]
Note (see [68]) that for a trace-class operator T and a bounded operator S,
(a) |tr(T)| ≤ τ(T),
(b) τ(ST) ≤ ‖S‖τ(T) and τ(TS) ≤ ‖S‖τ(T).
\[
2\|x\|_H\|F(x)\|_H + \tau\bigl(B(x)QB^*(x) - (B_0x)Q(B_0x)^*\bigr) \le \frac{\beta}{2c}\|x\|_H^2. \tag{6.33}
\]
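In finite dimensions τ(T) = tr((TT*)^{1/2}) is simply the sum of the singular values of T, and properties (a) and (b) can be checked directly. A quick numerical sketch (NumPy; random matrices with a fixed seed, purely illustrative):

```python
import numpy as np

def trace_norm(T):
    """tau(T) = tr((T T*)^{1/2}) = sum of the singular values of T."""
    return np.linalg.svd(T, compute_uv=False).sum()

rng = np.random.default_rng(0)
T = rng.standard_normal((4, 4))
S = rng.standard_normal((4, 4))
op_norm_S = np.linalg.svd(S, compute_uv=False).max()   # operator norm of S

print(abs(np.trace(T)) <= trace_norm(T) + 1e-12)                 # property (a)
print(trace_norm(S @ T) <= op_norm_S * trace_norm(T) + 1e-12)    # property (b)
print(trace_norm(T @ S) <= op_norm_S * trace_norm(T) + 1e-12)    # property (b)
```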
Proof Let Λ₀(x) = ⟨T̃x, x⟩_H + α‖x‖²_H, as in the proof of Theorem 6.5. Note that ⟨T̃x, x⟩_H = E∫₀^∞ ‖X₀^x(t)‖²_H dt, so that
\[
\langle \tilde Tx, x\rangle_H \le \int_0^\infty ce^{-\beta t}\|x\|_H^2\,dt = \frac{c}{\beta}\|x\|_H^2.
\]
Hence, ‖T̃‖_{L(H)} ≤ c/β. Clearly Λ₀ satisfies conditions (1) and (2) of Theorem 6.4. It remains to prove that
\[
L\Lambda_0(x) \le -c_3\Lambda_0(x).
\]
Consider
\[
\begin{aligned}
L\Lambda_0(x) - L_0\Lambda_0(x)
&= \bigl\langle \Lambda_0'(x), F(x)\bigr\rangle_H + \tfrac12\operatorname{tr}\bigl[\Lambda_0''(x)\bigl(B(x)QB^*(x) - (B_0x)Q(B_0x)^*\bigr)\bigr]\\
&\le 2\bigl|\bigl\langle(\tilde T + \alpha I)x, F(x)\bigr\rangle_H\bigr| + \tau\bigl((\tilde T + \alpha I)\bigl(B(x)QB^*(x) - (B_0x)Q(B_0x)^*\bigr)\bigr)\\
&\le \bigl(\|\tilde T\|_{L(H)} + \alpha\bigr)\bigl(2\|x\|_H\|F(x)\|_H + \tau\bigl(B(x)QB^*(x) - (B_0x)Q(B_0x)^*\bigr)\bigr)\\
&\le \Bigl(\frac12 + \frac{\alpha\beta}{2c}\Bigr)\|x\|_H^2.
\end{aligned}
\]
It follows that
\[
L\Lambda_0(x) \le L_0\Lambda_0(x) + \Bigl(\frac12 + \frac{\alpha\beta}{2c}\Bigr)\|x\|_H^2 \le -\|x\|_H^2 + \alpha\bigl(2\lambda + d^2\operatorname{tr}(Q)\bigr)\|x\|_H^2 + \Bigl(\frac12 + \frac{\alpha\beta}{2c}\Bigr)\|x\|_H^2.
\]
For α small enough, we obtain condition (3) in Theorem 6.4 using condition (2).
Definition 6.2 Let {X^x(t)}_{t≥0} be the mild solution of (6.10) with F(0) = 0 and B(0) = 0 (assuring that zero is a solution). The zero solution of (6.10) is called stable in probability if for any ε > 0,
\[
\lim_{\|x\|_H\to 0} P\Bigl(\sup_{t\ge 0}\|X^x(t)\|_H > \varepsilon\Bigr) = 0. \tag{6.34}
\]
Once a Lyapunov function satisfying conditions (1) and (2) of Theorem 6.4
is constructed, the following theorem provides a technique for proving condi-
tion (6.34).
Theorem 6.7 Let X^x(t) be the solution of (6.10). Assume that there exists a function Ψ ∈ C²_{2p}(H) having the following properties:
Proof The proof is similar to that of Theorem 6.4. We assume first that the initial condition x ∈ D(A) and consider the strong solutions X_n^x(t) of the approximating equations (6.17), n = 1, 2, …,
\[
X_n^x(t) = S(t)x + \int_0^t S(t-s)R_nF\bigl(X_n^x(s)\bigr)\,ds + \int_0^t S(t-s)R_nB\bigl(X_n^x(s)\bigr)\,dW_s.
\]
Applying Itô's formula to Ψ(X_n^x(t)) and taking the expectations yield
\[
E\Psi\bigl(X_n^x(t\wedge\tau_\varepsilon^n)\bigr) - \Psi(x) = E\int_0^{t\wedge\tau_\varepsilon^n} L_n\Psi\bigl(X_n^x(s)\bigr)\,ds,
\]
where
\[
L_n\Psi(x) = \bigl\langle \Psi'(x), Ax + R_nF(x)\bigr\rangle_H + \tfrac12\operatorname{tr}\bigl(\Psi''(x)\bigl(R_nB(x)\bigr)Q\bigl(R_nB(x)\bigr)^*\bigr).
\]
Let ε < δ. Then for x ∈ B_ε, using condition (3), we get
Using (6.20) and passing to the limit, as in the proof of Theorem 6.4, we see that the RHS in (6.35) converges to zero. Using condition (2), we conclude that for x ∈ D(A) ∩ B_ε and any n,
\[
\Psi(x) \ge E\Psi\bigl(X_n^x(t\wedge\tau_\varepsilon^n)\bigr) \ge \lambda_\varepsilon P\bigl(\tau_\varepsilon^n < t\bigr). \tag{6.36}
\]
6.3 Stability in the Variational Method 225
We can select a sequence y_n → x, y_n ∈ D(A), such that X^{y_n}(t) → X^x(t) a.s. for all t. Now, using the assumptions on Ψ and the Lebesgue DCT, we obtain (6.36) for all x ∈ H. Inequality (6.36), together with conditions (2) and (1), implies that for x ∈ B_ε,
\[
P\Bigl(\sup_{t\ge 0}\|X_t^x\|_H > \varepsilon\Bigr) \le \frac{\Psi(x)}{\lambda_\varepsilon} \to 0 \quad\text{as } \|x\|_H \to 0,
\]
giving (6.34).
The following results are now obvious from Theorems 6.5 and 6.6.
\[
V \hookrightarrow H \hookrightarrow V^*.
\]
The space V* is the continuous dual of V, V is dense in H, and all embeddings are continuous. With ⟨·, ·⟩ denoting the duality between V and V*, we assume that for h ∈ H,
\[
\langle v, h\rangle = \langle v, h\rangle_H.
\]
Since conditions (6.38), (6.39), and (6.40) are stronger than the assumptions of Theorem 4.7 (and of Theorem 4.4) in Chap. 4, we conclude that there exists a unique strong solution {X^x(t), t ≥ 0} of (6.37) such that
\[
X^x(\cdot) \in L^2\bigl(\Omega, C([0,T], H)\bigr) \cap M^2\bigl([0,T], V\bigr).
\]
Theorem 6.10 (Itô Formula) Suppose that Ψ : H → ℝ satisfies the following conditions:
(1) Ψ is twice Fréchet differentiable, and Ψ, Ψ′, Ψ″ are locally bounded.
(2) Ψ and Ψ′ are continuous on H.
(3) For all trace-class operators T on H, tr(TΨ″(·)) : H → ℝ is continuous.
where
\[
L\Psi(u) = \bigl\langle \Psi'(u), A(u)\bigr\rangle + \tfrac12\operatorname{tr}\bigl(\Psi''(u)B(u)QB^*(u)\bigr).
\]
We extend the notion of exponential stability in the m.s.s. to the variational case.
Definition 6.3 We say that the strong solution of the variational equation (6.37) in
the space L2 (Ω, C([0, T ], H )) ∩ M 2 ([0, T ], V ) is exponentially stable in the m.s.s.
if it satisfies condition (6.16) in Definition 6.1.
The following is the analogue of Theorem 6.4 in the variational context. The
proof for a strong solution is a simplified version of the proof of Theorem 6.4 and is
left to the reader as an exercise.
Theorem 6.11 The strong solution of the variational equation (6.37) in the space
L2 (Ω, C([0, T ], H )) ∩ M 2 ([0, T ], V ) is exponentially stable in the m.s.s. if there
exists a function Ψ satisfying conditions (1)–(5) of Theorem 6.10, and the following
two conditions hold:
(1) c₁‖x‖²_H ≤ Ψ(x) ≤ c₂‖x‖²_H, c₁, c₂ > 0, x ∈ H.
(2) LΨ(v) ≤ −c₃Ψ(v), c₃ > 0, v ∈ V, with L defined in Theorem 6.10.
Theorem 6.12 Under the coercivity condition (6.41), the solution of the linear
equation (6.42) is exponentially stable in the m.s.s. if and only if there exists a
function Ψ satisfying conditions (1)–(5) of Theorem 6.10 and conditions (1) and (2)
of Theorem 6.11.
Proof It remains to prove the necessity. Applying the Itô formula to ‖x‖²_H, taking expectations, and using condition (6.41), we have
\[
\begin{aligned}
E\|X^x(t)\|_H^2 &= \|x\|_H^2 + 2E\int_0^t \bigl\langle A_0X^x(s), X^x(s)\bigr\rangle\,ds + E\int_0^t \operatorname{tr}\bigl(B_0X^x(s)Q\bigl(B_0X^x(s)\bigr)^*\bigr)\,ds\\
&\le \|x\|_H^2 + |\lambda|\int_0^t E\|X^x(s)\|_H^2\,ds - \alpha\int_0^t E\|X^x(s)\|_V^2\,ds\\
&\le \|x\|_H^2\Bigl(1 + |\lambda|c\int_0^t e^{-\beta s}\,ds\Bigr) - \alpha\int_0^t E\|X^x(s)\|_V^2\,ds
\end{aligned}
\]
\[
\Psi(x) \ge c_1\|x\|_H^2,
\]
where c₁ = 1/c₀.
To prove the last condition, observe that, similarly to (6.25), the uniqueness of the solution and the Markov property (3.59) yield
\[
E\Psi\bigl(X^x(t)\bigr) = \int_0^\infty E\|X^x(s+t)\|_V^2\,ds = \int_t^\infty E\|X^x(s)\|_V^2\,ds \le \int_0^\infty E\|X^x(s)\|_V^2\,ds - k\int_0^t E\|X^x(s)\|_H^2\,ds,
\]
since k‖x‖²_H ≤ ‖x‖²_V for some constant k > 0. Hence, by taking the derivatives of both sides at t = 0, we get
\[
L_0\Psi(x) \le -k\|x\|_H^2 \le -\frac{k}{c_2}\Psi(x).
\]
Remark 6.3 Note that in the case where t ↦ E‖X^x(t)‖²_V is continuous at zero, the last step of the proof of Theorem 6.12 yields L₀Ψ(v) = −‖v‖²_V for v ∈ V.
Let us now state analogues of Theorem 6.6 for solutions in the variational case.
Theorem 6.13 Let {X₀(t)}_{t≥0} be the solution of the linear equation (6.42) with the coefficients satisfying condition (6.41). Assume that the function t ↦ E‖X₀(t)‖²_V is continuous and that the solution X₀(t) is exponentially stable in the m.s.s. If, for a sufficiently small constant c,
\[
2\|v\|_V\|A(v) - A_0v\|_{V^*} + \tau\bigl(B(v)QB^*(v) - (B_0v)Q(B_0v)^*\bigr) \le c\|v\|_V^2, \tag{6.43}
\]
Theorem 6.14 Let {X₀(t)}_{t≥0} be the solution of the linear equation (6.42) with the coefficients satisfying condition (6.41). Assume that the solution X₀(t) is exponentially stable in the m.s.s. Let, for v ∈ V, A(v) − A₀v ∈ H. If, for a sufficiently small constant c,
\[
2\|v\|_H\|A(v) - A_0v\|_H + \tau\bigl(B(v)QB^*(v) - (B_0v)Q(B_0v)^*\bigr) \le c\|v\|_H^2, \tag{6.44}
\]
Exercise 6.3 Verify that Theorem 6.7 holds for (6.37) (in place of (6.10)) under the additional assumptions (1)–(5) of Theorem 6.10.
Remark 6.4 Using an analogue of Theorem 6.7 with the function Ψ satisfying
conditions (1)–(5) of Theorem 6.10, we can also prove conclusions in Theo-
rems 6.8 and 6.9 for (6.37) and its linear counterpart (6.42) under conditions (6.43)
and (6.44).
Proof The necessity part was already proved in Sect. 6.2, Theorem 6.5, with
\[
\langle Rx, y\rangle_H = \int_0^\infty E\bigl\langle X^x(t), X^y(t)\bigr\rangle_H\,dt,
\]
where Δ(R) is given by ⟨Δ(R)x, x⟩_H = tr(R(B₀x)Q(B₀x)*). The operator I + Δ(R) is invertible, so that we get
\[
2\bigl\langle R\bigl(I + \Delta(R)\bigr)^{-1}x, y\bigr\rangle_H = \langle x, y\rangle_H.
\]
By Corollary 6.1,
\[
\|S(t)\|_{L(H)} \le Me^{-\lambda t}, \quad \lambda > 0.
\]
We consider the solutions {X_n^x(t), t ≥ 0} obtained by using the Yosida approximations A_n = AR_n of A. Let us apply Itô's formula to ⟨RX_n^x(t), X_n^x(t)⟩_H and take the expectations of both sides to arrive at
Appendix: Stochastic Analogue of the Datko Theorem 231
\[
E\bigl\langle RX_n^x(t), X_n^x(t)\bigr\rangle_H = \langle Rx, x\rangle_H + 2E\int_0^t \bigl\langle RX_n^x(s), A_nX_n^x(s)\bigr\rangle_H\,ds + E\int_0^t \bigl\langle \Delta(R)X_n^x(s), X_n^x(s)\bigr\rangle_H\,ds.
\]
Hence,
\[
E\bigl\langle RX_n^x(t), X_n^x(t)\bigr\rangle_H = \langle Rx, x\rangle_H - E\int_0^t \bigl\langle X_n^x(s), R_nX_n^x(s)\bigr\rangle_H\,ds + E\int_0^t \bigl\langle \Delta(R)X_n^x(s), X_n^x(s) - R_nX_n^x(s)\bigr\rangle_H\,ds.
\]
We let n → ∞ and use the fact that sup_n sup_{t≤T} E‖X_n^x(t)‖²_H < ∞ to obtain
\[
E\bigl\langle RX^x(t), X^x(t)\bigr\rangle_H = \langle Rx, x\rangle_H - E\int_0^t \|X^x(s)\|_H^2\,ds.
\]
and
\[
\Xi'(t) = -E\|X^x(t)\|_H^2 \le -\|R\|_{L(H)}^{-1}\,\Xi(t),
\]
so that
\[
\Xi(t) \le \langle Rx, x\rangle_H\, e^{-\|R\|_{L(H)}^{-1}t},
\]
since Ξ(0) = ⟨Rx, x⟩_H. Hence,
\[
E\|X^x(t)\|_H^2 \le 2\|S(t)x\|_H^2 + 2E\Bigl\|\int_0^t S(t-s)BX^x(s)\,dW_s\Bigr\|_H^2 \le 2M^2e^{-2\lambda t}\|x\|_H^2 + 2\operatorname{tr}(Q)M^2\|B\|_{L(H)}^2\int_0^t e^{-2\lambda(t-s)}E\|X^x(s)\|_H^2\,ds.
\]
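For a scalar linear SDE dX = −aX dt + σX dW_t the second moment is explicit, E X(t)² = x²e^{(σ²−2a)t}, so exponential stability in the m.s.s. holds precisely when σ² < 2a; this gives a cheap check on estimates of the above type. A Monte Carlo sketch (NumPy, Euler–Maruyama; all parameters are our illustrative choices):

```python
import numpy as np

# dX = -a X dt + sigma X dW,  E X(t)^2 = x0^2 exp((sigma^2 - 2a) t)
a, sigma, x0 = 1.0, 0.5, 1.0            # sigma^2 = 0.25 < 2a = 2: m.s. stable
T, n_steps, n_paths = 2.0, 1000, 100_000
dt = T / n_steps

rng = np.random.default_rng(1)
X = np.full(n_paths, x0)
for _ in range(n_steps):
    X = X + (-a * X) * dt + sigma * X * np.sqrt(dt) * rng.standard_normal(n_paths)

ms_mc = np.mean(X ** 2)
ms_exact = x0 ** 2 * np.exp((sigma ** 2 - 2 * a) * T)
print(ms_mc, ms_exact)   # both close to e^{-3.5}
```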
We introduce in this chapter the concept of ultimate boundedness in the mean square sense (m.s.s.) and relate it to the problem of the existence and uniqueness of an invariant measure. We consider semilinear stochastic differential equations in a Hilbert space and their mild solutions under the usual linear growth and Lipschitz conditions on the coefficients. We also study stochastic differential equations in the variational setting, assuming that the coefficients satisfy the coercivity condition, and study their strong solutions that are exponentially ultimately bounded in the m.s.s.
conditions:
(1) c₁‖x‖²_H − k₁ ≤ Ψ(x) ≤ c₂‖x‖²_H − k₂,
(2) LΨ(x) ≤ −c₃Ψ(x) + k₃,
for x ∈ H, where c₁, c₂, c₃ are positive constants, and k₁, k₂, k₃ ∈ ℝ.
Proof Similarly as in the proof of Theorem 6.4, using Itô's formula for the solutions of the approximating equations (6.17) and utilizing condition (2), we arrive at
\[
E\Psi\bigl(X^x(t)\bigr) - E\Psi\bigl(X^x(0)\bigr) \le \int_0^t \bigl(-c_3E\Psi\bigl(X^x(s)\bigr) + k_3\bigr)\,ds.
\]
By Gronwall's lemma,
\[
\Phi(t) \le \frac{k_3}{c_3} + \Bigl(\Phi(0) - \frac{k_3}{c_3}\Bigr)e^{-c_3t}.
\]
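The Gronwall bound above says that any Φ with Φ′ ≤ −c₃Φ + k₃ is driven toward the level k₃/c₃ exponentially fast. A quick numerical comparison (Python; the test function solves Φ′ = −c₃Φ + k₃/2, which satisfies the differential inequality, and all constants are illustrative):

```python
import numpy as np

c3, k3, phi0 = 2.0, 1.0, 5.0
dt, n = 1e-3, 10_000
ts = np.arange(n + 1) * dt

phi = np.empty(n + 1)
phi[0] = phi0
for i in range(n):
    # Euler step for phi' = -c3*phi + k3/2, which satisfies phi' <= -c3*phi + k3
    phi[i + 1] = phi[i] + dt * (-c3 * phi[i] + 0.5 * k3)

bound = k3 / c3 + (phi0 - k3 / c3) * np.exp(-c3 * ts)
print(np.all(phi <= bound + 1e-9))   # the Gronwall bound holds along the whole path
```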
Proof Since the mild solution X₀^x(t) is exponentially ultimately bounded in the m.s.s., we have
\[
E\|X_0^x(t)\|_H^2 \le ce^{-\beta t}\|x\|_H^2 + M \quad\text{for all } x \in H.
\]
Let
\[
\Psi_0(x) = \int_0^T E\|X_0^x(s)\|_H^2\,ds + \alpha\|x\|_H^2,
\]
where T and α are constants to be determined later.
First, let us show that Ψ₀ ∈ C²_{2p}(H). It suffices to show that
\[
\varphi_0(x) = \int_0^T E\|X_0^x(s)\|_H^2\,ds \in C^2_{2p}(H).
\]
Now,
\[
\varphi_0(x) \le \frac{c}{\beta}\bigl(1 - e^{-\beta T}\bigr)\|x\|_H^2 + MT \le \frac{c}{\beta}\|x\|_H^2 + MT.
\]
If ‖x‖²_H = 1, then φ₀(x) ≤ c/β + MT.
Since X₀^x(t) is linear in x, we have, for any positive constant k, X₀^{kx}(t) = kX₀^x(t). Hence, φ₀(kx) = k²φ₀(x), and for any x ∈ H,
\[
\varphi_0(x) = \|x\|_H^2\,\varphi_0\!\Bigl(\frac{x}{\|x\|_H}\Bigr) \le \Bigl(\frac{c}{\beta} + MT\Bigr)\|x\|_H^2.
\]
7.1 Exponential Ultimate Boundedness in the m.s.s. 235
we have
\[
L_0\varphi_0(x) = \frac{d}{dr}E\varphi_0\bigl(X_0^x(r)\bigr)\Big|_{r=0} = \lim_{r\to 0}\frac{E\varphi_0(X_0^x(r)) - \varphi_0(x)}{r} = \lim_{r\to 0}\Bigl(-\frac1r\int_0^r E\|X_0^x(s)\|_H^2\,ds + \frac1r\int_T^{r+T} E\|X_0^x(s)\|_H^2\,ds\Bigr) = -\|x\|_H^2 + E\|X_0^x(T)\|_H^2.
\]
If T > (ln c)/β, then one can choose α small enough such that Ψ₀(x) satisfies condition (2) with L replaced by L₀.
Theorem 7.4 Suppose that the mild solution X₀^x(t) of the linear equation (6.22) satisfies condition (7.1). Then the mild solution X^x(t) of (6.10) is exponentially ultimately bounded in the m.s.s. if
\[
2\|x\|_H\|F(x)\|_H + \tau\bigl(B(x)QB^*(x) - (B_0x)Q(B_0x)^*\bigr) < \tilde\omega\|x\|_H^2 + M_1, \tag{7.3}
\]
Proof Let Ψ₀(x) be the Lyapunov function defined in Theorem 7.2, with T > (ln c)/β such that the maximum in the definition of ω̃ is achieved. It remains to show that
\[
L\Psi_0(x) \le -c_3\Psi_0(x) + k_3.
\]
Since Ψ₀(x) = ⟨Cx, x⟩_H + α‖x‖²_H for some C ∈ L(H) with ‖C‖_{L(H)} ≤ c/β + MT, and α sufficiently small, we have
\[
L\Psi_0(x) - L_0\Psi_0(x) \le \bigl(\|C\|_{L(H)} + \alpha\bigr)\bigl(2\|x\|_H\|F(x)\|_H + \tau\bigl(B(x)QB^*(x) - (B_0x)Q(B_0x)^*\bigr)\bigr) \le (c/\beta + MT + \alpha)\bigl(\tilde\omega\|x\|_H^2 + M_1\bigr).
\]
Using the bound for ω̃, we have −1 + ce^{−βT} + ω̃(c/β + MT) < 0, so that we can choose α small enough to obtain condition (2) of Theorem 7.1.
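As a sanity check on exponential ultimate boundedness, consider the scalar equation dX = (−X + 1)dt + dW_t: the drift is a bounded perturbation of the stable linear drift −X, and E X(t)² settles near the limit 1 + 1/2 = 3/2 regardless of the initial condition, consistent with a bound of the form ce^{−βt}‖x‖²_H + M. A Monte Carlo sketch (NumPy, Euler–Maruyama; all constants are our illustrative choices):

```python
import numpy as np

# dX = (-X + 1) dt + dW: bounded perturbation of the stable linear drift -X;
# E X(t)^2 -> 1 + 1/2 = 3/2 regardless of the (large) initial condition
x0, T, n_steps, n_paths = 10.0, 6.0, 1200, 50_000
dt = T / n_steps
rng = np.random.default_rng(2)

X = np.full(n_paths, x0)
for _ in range(n_steps):
    X = X + (-X + 1.0) * dt + np.sqrt(dt) * rng.standard_normal(n_paths)

ms = np.mean(X ** 2)
print(ms)   # ~1.5, even though the initial squared norm was 100
```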
Corollary 7.1 Suppose that the mild solution X₀^x(t) of the linear equation (6.22) is exponentially ultimately bounded in the m.s.s. If, as ‖x‖_H → ∞,
\[
\|F(x)\|_H = o\bigl(\|x\|_H\bigr) \quad\text{and}\quad \tau\bigl(B(x)QB^*(x) - (B_0x)Q(B_0x)^*\bigr) = o\bigl(\|x\|_H^2\bigr),
\]
then the mild solution X^x(t) of (6.10) is exponentially ultimately bounded in the m.s.s.
Proof We fix ω̃ < max_{s>(ln c)/β} (1 − ce^{−βs})/(c/β + Ms), and using the assumptions, we choose a constant K such that for ‖x‖_H ≥ K, condition (7.3) holds. But for
7.2 Exponential Ultimate Boundedness in Variational Method 237
Example 7.1 (Dissipative Systems) Consider the SSDE (6.10) and, in addition to assumptions (1)–(3) in Sect. 6.2, impose the following dissipativity condition:
(D) (Dissipativity) There exists a constant ω > 0 such that for all x, y ∈ H and n = 1, 2, …,
\[
2\bigl\langle A_n(x-y), x-y\bigr\rangle_H + 2\bigl\langle F(x) - F(y), x-y\bigr\rangle_H + \bigl\|B(x) - B(y)\bigr\|_{L_2(K_Q,H)}^2 \le -\omega\|x-y\|_H^2.
\]
Exercise 7.1 (a) Show that condition (D) implies that for any ε > 0, there exists a constant C_ε > 0 such that for any x ∈ H and n = 1, 2, …,
\[
2\langle A_nx, x\rangle_H + 2\bigl\langle F(x), x\bigr\rangle_H + \bigl\|B(x)\bigr\|_{L_2(K_Q,H)}^2 \le -(\omega - \varepsilon)\|x\|_H^2 + C_\varepsilon,
\]
with A_n the Yosida approximations of A. Use this fact to prove that the strong solutions X_n^x(t) of the approximating SDEs (6.12) are exponentially ultimately bounded in the m.s.s. Conclude that the mild solution X^x(t) of (6.10) is exponentially ultimately bounded in the m.s.s.
(b) Prove that if zero is a solution of (6.10), then the mild solution X^x(t) of (6.10) is exponentially stable in the m.s.s.
Let us begin by noting that the proof of Theorem 7.1 carries over to this case if we assume that the function Ψ satisfies conditions (1)–(5) of Theorem 6.10 and that the operator L is defined by
\[
L\Psi(u) = \bigl\langle \Psi'(u), A(u)\bigr\rangle + \tfrac12\operatorname{tr}\bigl(\Psi''(u)B(u)QB^*(u)\bigr). \tag{7.5}
\]
In the linear case we have both sufficiency and necessity, and the Lyapunov function has an explicit form under the general coercivity condition (C).
Theorem 7.6 A solution {X₀^x(t), t ≥ 0} of the linear equation (6.42) whose coefficients satisfy the coercivity condition (6.39) is exponentially ultimately bounded in the m.s.s. if and only if there exists a function Ψ₀ : H → ℝ satisfying conditions (1)–(5) of Theorem 6.10 and, in addition, such that
\[
\Psi_0(x) = \int_0^T\!\!\int_0^t E\|X_0^x(s)\|_V^2\,ds\,dt \tag{7.6}
\]
Proof Assume that the solution {X₀^x(t), t ≥ 0} of the linear equation (6.42) is exponentially ultimately bounded in the m.s.s., so that
\[
E\|X_0^x(t)\|_H^2 \le ce^{-\beta t}\|x\|_H^2 + M \quad\text{for all } x \in H.
\]
Applying Itô’s formula to the function x2H , taking the expectations, and using the
coercivity condition (6.39), we obtain
2 t 2
E X0x (t)H − x2H = EL0 X0x (s)H ds
0
t 2 t 2
≤λ E X0x (s)H ds − α E X0x (s)V ds + γ t. (7.7)
0 0
Hence,
t x 2 1 t x 2
E X0 (s) V ≤ λ
E X0 (s) H ds + xH + γ t .
2
0 α 0
Hence,
\[
\Psi_0(x) \ge \frac1c\Bigl(\|x\|_H^2\int_0^T\bigl(1 - e^{-\beta t}\bigr)\,dt - MT\Bigr) \ge \frac1c\Bigl(T - \frac{c}{\beta}\Bigr)\|x\|_H^2 - \frac{MT}{c}.
\]
Choose T > c/β to obtain condition (1).
240 7 Ultimate Boundedness and Invariant Measure
\[
E\Psi_0\bigl(X_0^x(r)\bigr) = \int_0^T\!\!\int_0^t E\bigl\|X_0^{X_0^x(r)}(s)\bigr\|_V^2\,ds\,dt.
\]
By the Markov property of the solution and the uniqueness of the solution,
\[
E\Psi_0\bigl(X_0^x(r)\bigr) = \int_0^T\!\!\int_0^t E\|X_0^x(s+r)\|_V^2\,ds\,dt = \int_0^T\!\!\int_r^{t+r} E\|X_0^x(s)\|_V^2\,ds\,dt.
\]
We now need the following technical lemma that will be proved later.
which implies that Ψ₀(x) ≤ c′‖x‖²_H for all x ∈ H. For x, y ∈ H, denote
\[
\tau(x, y) = \int_0^T\!\!\int_0^t E\bigl\langle X_0^x(s), X_0^y(s)\bigr\rangle_H\,ds\,dt \le \Psi_0^{1/2}(x)\,\Psi_0^{1/2}(y) \le c'\|x\|_H\|y\|_H.
\]
Proof of Lemma 7.1 We use the Fubini theorem to change the order of integration:
\[
\begin{aligned}
\int_0^T \frac{\int_t^{t+\Delta t}f(s)\,ds}{\Delta t}\,dt
&= \frac{1}{\Delta t}\int_0^T\!\!\int_t^{t+\Delta t}f(s)\,ds\,dt\\
&= \frac{1}{\Delta t}\biggl[\int_0^{\Delta t}\!\!\int_0^s f(s)\,dt\,ds + \int_{\Delta t}^T\!\!\int_{s-\Delta t}^s f(s)\,dt\,ds + \int_T^{T+\Delta t}\!\!\int_{s-\Delta t}^T f(s)\,dt\,ds\biggr]\\
&= \frac{1}{\Delta t}\biggl[\int_0^{\Delta t}sf(s)\,ds + \int_{\Delta t}^T f(s)\,\Delta t\,ds + \int_T^{T+\Delta t}f(s)(T + \Delta t - s)\,ds\biggr]\\
&\le \frac{1}{\Delta t}\biggl[\Delta t\int_0^{\Delta t}f(s)\,ds + \Delta t\int_{\Delta t}^T f(s)\,ds + \Delta t\int_T^{T+\Delta t}f(s)\,ds\biggr]\\
&= \int_0^{\Delta t}f(s)\,ds + \int_{\Delta t}^T f(s)\,ds + \int_T^{T+\Delta t}f(s)\,ds.
\end{aligned}
\]
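The chain of identities in the proof of Lemma 7.1 can be checked numerically for a concrete nonnegative integrand. The sketch below (NumPy; f(s) = e^{−s} and the values of T, Δt are our illustrative choices) compares the sliding-average integral on the left with the final three-term bound, whose pieces sum to ∫₀^{T+Δt} f:

```python
import numpy as np

f = lambda s: np.exp(-s)
T, Dt = 2.0, 0.3

# LHS: integral over t in [0, T] of (1/Dt) * integral_t^{t+Dt} f(s) ds
tgrid = np.linspace(0.0, T, 2001)

def sliding(t):
    s = np.linspace(t, t + Dt, 301)
    return np.trapz(f(s), s) / Dt

lhs = np.trapz([sliding(t) for t in tgrid], tgrid)

# RHS bound: int_0^Dt f + int_Dt^T f + int_T^{T+Dt} f = int_0^{T+Dt} f
s = np.linspace(0.0, T + Dt, 4001)
rhs = np.trapz(f(s), s)
print(lhs <= rhs)   # True: the averaged integral is dominated by the extended integral
```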
Theorem 7.7 Let the strong solution {X^x(t), t ≥ 0} of (6.37) be exponentially ultimately bounded in the m.s.s. Let
\[
\Psi(x) = \int_0^T\!\!\int_0^t E\|X^x(s)\|_V^2\,ds\,dt \tag{7.13}
\]
with T > α₀(c|λ|/(αβ) + 1/α), where α₀ is such that ‖v‖²_H ≤ α₀‖v‖²_V, v ∈ V. Suppose that Ψ(x) satisfies conditions (1)–(5) of Theorem 6.10. Then Ψ(x) satisfies conditions (1) and (2) of Theorem 7.5.
To study exponential ultimate boundedness, i.e., condition (7.1), for the strong
solution of (6.37), we use linear approximation and the function Ψ0 of the corre-
sponding linear equation (6.42) as the Lyapunov function. We will prove the fol-
lowing result.
Theorem 7.8 Suppose that the coefficients of the linear equation (6.42) satisfy the coercivity condition (6.39) and its solution {X₀^x(t), t ≥ 0} is exponentially ultimately bounded in the m.s.s. Let {X^x(t), t ≥ 0} be the solution of the nonlinear equation (6.37). Furthermore, we suppose that
Proof Let
\[
\Psi_0(x) = \int_0^{T_0}\!\!\int_0^t E\|X_0^x(s)\|_V^2\,ds\,dt
\]
\[
\begin{aligned}
L\Psi_0(x) - L_0\Psi_0(x)
&= \bigl\langle \Psi_0'(x), A(x) - A_0x\bigr\rangle + \tfrac12\operatorname{tr}\bigl[\Psi_0''(x)\bigl(B(x)QB^*(x) - (B_0x)Q(B_0x)^*\bigr)\bigr]\\
&= \bigl\langle \Psi_0'(x), A(x) - A_0x\bigr\rangle_H + \tfrac12\operatorname{tr}\bigl[\Psi_0''(x)\bigl(B(x)QB^*(x) - (B_0x)Q(B_0x)^*\bigr)\bigr].
\end{aligned}
\]
But Ψ₀′(x) = 2Cx and Ψ₀″(x) = 2C for x ∈ V, where C is defined in (7.11). By inequality (7.10),
\[
\|C\|_{L(H)} \le \frac1\alpha\Bigl(\frac{c|\lambda|}{\alpha\beta} + T_0\Bigr) + \frac{|\lambda|M + \gamma}{2\alpha}T_0^2.
\]
Hence,
\[
L\Psi_0(x) - L_0\Psi_0(x) \le 2\bigl\langle Cx, A(x) - A_0x\bigr\rangle_H + \tau\bigl(C\bigl(B(x)QB^*(x) - (B_0x)Q(B_0x)^*\bigr)\bigr),
\]
and we have
\[
L\Psi_0(x) \le L_0\Psi_0(x) + \|C\|_{L(H)}\bigl[2\|x\|_H\|A(x) - A_0x\|_H + \tau\bigl(B(x)QB^*(x) - (B_0x)Q(B_0x)^*\bigr)\bigr].
\]
Remark 7.1 Note that the function Ψ₀(x) in Theorem 7.8 is the Lyapunov function for the nonlinear equation.
Corollary 7.2 Suppose that the coefficients of the linear equation (6.42) satisfy the coercivity condition (6.39), and its solution {X₀^x(t), t ≥ 0} is exponentially ultimately bounded in the m.s.s. Let {X^x(t), t ≥ 0} be a solution of the nonlinear equation (6.37). Furthermore, suppose that
Proof Under assumption (7.15), for a constant ω̃ satisfying the condition of Theorem 7.8, there exists an R > 0 such that, for all v ∈ V with ‖v‖_H > R,
\[
2\|v\|_H\|A(v) - A_0v\|_H + \tau\bigl(B(v)QB^*(v) - (B_0v)Q(B_0v)^*\bigr) \le \tilde\omega\|v\|_H^2.
\]
Hence, we have
\[
2\|v\|_H\|A(v) - A_0v\|_H + \tau\bigl(B(v)QB^*(v) - (B_0v)Q(B_0v)^*\bigr) \le \tilde\omega\|v\|_H^2 + (k+1)R^2.
\]
Theorem 7.9 Suppose that the coefficients of the linear equation (6.42) satisfy the coercivity condition (6.39) and its solution {X₀^x(t), t ≥ 0} is exponentially ultimately bounded in the m.s.s., with the function t ↦ E‖X₀^x(t)‖²_V continuous for all x ∈ V. Let {X^x(t), t ≥ 0} be a solution of the nonlinear equation (6.37). If for v ∈ V,
\[
2\|v\|_V\|A(v) - A_0v\|_{V^*} + \tau\bigl(B(v)QB^*(v) - (B_0v)Q(B_0v)^*\bigr) \le \tilde\omega_0\|v\|_V^2 + k_0,
\]
\[
L\Psi_0(x) - L_0\Psi_0(x) = \bigl\langle \Psi_0'(x), A(x) - A_0x\bigr\rangle + \tfrac12\operatorname{tr}\bigl[\Psi_0''(x)\bigl(B(x)QB^*(x) - (B_0x)Q(B_0x)^*\bigr)\bigr]
\]
with Ψ₀′(x) = 2C̃x and Ψ₀″(x) = 2C, where the operators C and C̃ are defined in (7.11) and (7.12). By inequality (7.10) and the continuity of the embedding V ↪ H,
\[
\|C\|_{L(H)} \le \frac1\alpha\Bigl(\frac{c|\lambda|}{\alpha\beta} + T_0\Bigr) + \frac{|\lambda|M + \gamma}{2\alpha}T_0^2, \qquad \|\tilde C\|_{L(V)} \le \alpha_0\|C\|_{L(V)}.
\]
Hence,
\[
L\Psi_0(x) - L_0\Psi_0(x) \le 2\bigl\langle \tilde Cx, A(x) - A_0x\bigr\rangle_H + \operatorname{tr}\bigl(C\bigl(B(x)QB^*(x) - (B_0x)Q(B_0x)^*\bigr)\bigr),
\]
and we have
Since s ↦ E‖X₀^x(s)‖²_V is a continuous function, we obtain from the earlier relations for L₀Ψ₀(x) that
\[
L_0\Psi_0(x) \le -\frac{c}{\beta}\|x\|_V^2 + \frac{|\lambda|M + \gamma}{\alpha}T_0.
\]
Hence,
\[
\begin{aligned}
L\Psi_0(x) &\le -\frac{c}{\beta}\|x\|_V^2 + \frac{|\lambda|M + \gamma}{\alpha}T_0 + \bigl(\|C\|_{L(H)} + \|\tilde C\|_{L(V)}\bigr)\bigl(\tilde\omega_0\|x\|_V^2 + k_0\bigr)\\
&\le \Bigl(-\frac{c}{\beta} + \tilde\omega_0\bigl(\|C\|_{L(H)} + \|\tilde C\|_{L(V)}\bigr)\Bigr)\|x\|_V^2 + k_0\bigl(\|C\|_{L(H)} + \|\tilde C\|_{L(V)}\bigr) + \frac{|\lambda|M + \gamma}{\alpha}T_0.
\end{aligned}
\]
Since, by the condition on ω̃₀, −c/β + ω̃₀(‖C‖_{L(H)} + ‖C̃‖_{L(V)}) < 0, we see that conditions analogous to those of Theorem 7.1 are satisfied by Ψ₀, giving the result.
246 7 Ultimate Boundedness and Invariant Measure
Corollary 7.3 Suppose that the coefficients of the linear equation (6.42) satisfy the
coercivity condition (6.39) and its solution {X0x (t), t ≥ 0} is exponentially ultimately
bounded in the m.s.s. with the function t → EX0x (t)2V being continuous for all
x ∈ V . Let {X x (t), t ≥ 0} be a solution of the nonlinear equation (6.37). If for
v ∈ V, as ‖v‖_V → ∞,

$$\|A(v) - A_0v\|_{V^*} = o\bigl(\|v\|_V\bigr) \quad\text{and}\quad \tau\bigl|B(v)QB^*(v) - B_0vQ(B_0v)^*\bigr| = o\bigl(\|v\|_V^2\bigr), \tag{7.16}$$
Proof We shall use Theorem 7.9. Under assumption (7.16), for a constant ω̃₀ satisfying the condition of Theorem 7.9, there exists an R > 0 such that, for all v ∈ V with ‖v‖_V > R,

$$2\|v\|_V\,\|A(v) - A_0v\|_{V^*} + \tau\bigl|B(v)QB^*(v) - B_0vQ(B_0v)^*\bigr| \le \tilde\omega_0\|v\|_V^2.$$

Using that ‖A(v)‖_{V*}, ‖A₀v‖_{V*} ≤ a₁‖v‖_V and ‖B(v)‖_{𝓛(K,H)}, ‖B₀v‖_{𝓛(K,H)} ≤ b₁‖v‖_V, we have, for v ∈ V such that ‖v‖_V ≤ R,

$$2\|v\|_V\,\|A(v) - A_0v\|_{V^*} + \tau\bigl|B(v)QB^*(v) - B_0vQ(B_0v)^*\bigr|$$
$$\le 4a_1\|v\|_V^2 + \bigl(\|B(v)\|_{\mathscr{L}(K,H)}^2 + \|B_0v\|_{\mathscr{L}(K,H)}^2\bigr)\operatorname{tr}(Q) \le \bigl(4a_1 + 2b_1^2\operatorname{tr}(Q)\bigr)\|v\|_V^2 \le \bigl(4a_1 + 2b_1^2\operatorname{tr}(Q)\bigr)R^2.$$

Hence, for v ∈ V,

$$2\|v\|_V\,\|A(v) - A_0v\|_{V^*} + \tau\bigl|B(v)QB^*(v) - B_0vQ(B_0v)^*\bigr| \le \tilde\omega_0\|v\|_V^2 + \bigl(4a_1 + 2b_1^2\operatorname{tr}(Q)\bigr)R^2.$$
is exponentially stable (or even exponentially ultimately bounded), then the solution of (7.18) is exponentially ultimately bounded in the m.s.s.

$$\le \lambda\|v\|_H^2 - \alpha\|v\|_V^2 + \gamma$$

for some constants λ and γ. Hence, the evolution equation (7.18) satisfies the coercivity condition (6.39). Under assumption (2),

$$\bigl\|F(v)\bigr\|_H^2 + \operatorname{tr}\bigl(B(v)QB^*(v)\bigr) \le \bigl\|F(v)\bigr\|_H^2 + \operatorname{tr}(Q)\bigl\|B(v)\bigr\|_{\mathscr{L}(K,H)}^2 \le \bigl(1 + \operatorname{tr}(Q)\bigr)K\bigl(1 + \|v\|_H\bigr)^2,$$
Example 7.2 (Stochastic Heat Equation) Let S 1 be the unit circle realized as the
interval [−π, π] with identified points −π and π . Denote by W 1,2 (S 1 ) the Sobolev
space on S 1 and by W (t, ξ ) the Brownian sheet on [0, ∞)×S 1 , see Exercise 7.2. Let
κ > 0 be a constant, and f and b be real-valued functions. Consider the following
SPDE:
$$\begin{cases} \dfrac{\partial X(t)}{\partial t}(\xi) = \dfrac{\partial^2 X(t)}{\partial\xi^2}(\xi) - \kappa f\bigl(X(t)(\xi)\bigr) + b\bigl(X(t)(\xi)\bigr)\dfrac{\partial^2 W}{\partial t\,\partial\xi}, \\[4pt] X(0)(\cdot) = x(\cdot) \in L^2(S^1). \end{cases} \tag{7.20}$$

Let

$$\|x\|_H = \biggl(\int_{S^1} x^2(\xi)\,d\xi\biggr)^{1/2} \ \text{for } x \in H, \qquad \|x\|_V = \biggl(\int_{S^1}\Bigl(x^2(\xi) + \Bigl(\frac{dx(\xi)}{d\xi}\Bigr)^2\Bigr)d\xi\biggr)^{1/2} \ \text{for } x \in V.$$

Then we obtain the equation

$$dX(t) = A_0X(t)\,dt + F\bigl(X(t)\bigr)\,dt + B\bigl(X(t)\bigr)\,d\tilde W_t,$$
By Theorem 6.3(a), with Λ(x) = ‖x‖²_H, the solution of (7.19) is exponentially stable. If we assume that f and b are Lipschitz continuous and bounded, then conditions (1)–(3) of Proposition 7.1 are satisfied. Using representation (2.35) of the stochastic integral with respect to a cylindrical Wiener process, we can conclude that the solution of the stochastic heat equation (7.20) is exponentially ultimately bounded in the m.s.s.
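As a numerical illustration (not from the text), consider the hypothetical special case of (7.20) with f(x) = x and b constant: each Fourier mode then decouples into an Ornstein–Uhlenbeck process, and the mean-square ultimate bound is the sum of the stationary mode variances. A minimal sketch under these assumptions, with the eigenvalues of −d²/dξ² simplified to j², one mode per j:

```python
import numpy as np

# Hypothetical special case of (7.20) with f(x) = x and b = sigma (constant):
# each Fourier mode is an Ornstein-Uhlenbeck process
#   dX_j = -(j^2 + kappa) X_j dt + sigma dbeta_j,
# whose stationary variance is sigma^2 / (2 (j^2 + kappa)).  The mean-square
# ultimate bound is the (finite) sum of these variances.

def ultimate_ms_bound(kappa, sigma, n_modes=10000):
    """Sum of stationary mode variances (one mode per eigenvalue j^2)."""
    j = np.arange(n_modes)
    return float(np.sum(sigma ** 2 / (2.0 * (j ** 2 + kappa))))
```

For kappa = 0.5 and sigma = 1 the bound is roughly 1.6, so the time-averaged second moment of the truncated system stays bounded, consistent with exponential ultimate boundedness in the m.s.s.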
7.3 Abstract Cauchy Problem, Stability and Exponential Ultimate Boundedness 249
Exercise 7.2 Let S 1 be the unit circle realized as the interval [−π, π] with identified
points −π and π . Denote by {fj (ξ )} an ONB in L2 (S 1 ) and consider
$$W(t,\zeta) = \sum_{j=1}^\infty w_j(t)\int_{-\pi}^{\zeta} f_j(\xi)\,d\xi, \qquad t \ge 0,\ -\pi \le \zeta \le \pi, \tag{7.21}$$
Conclude that the Gaussian random field W (·, ·) has a continuous version. This
continuous version is called the Brownian sheet on S 1 .
Now, let Φ(t) be an adapted process with values in L2 (S 1 ) (identified with
L (L2 (S 1 ), R)) and satisfying
$$E\int_0^\infty \bigl\|\Phi(t)\bigr\|_{L^2(S^1)}^2\,dt < \infty.$$

Define

$$\Phi \cdot W = \int_0^\infty\!\!\int_{S^1} \Phi(s,\xi)\,W(ds,d\xi). \tag{7.24}$$

Clearly, when Φ is the indicator of [0, t] × [−π, ζ], Φ · W = W(t, ζ). Extend the integral Φ · W to general processes. Since

$$\Phi \cdot W = \int_0^\infty \Phi(s)\,d\tilde W_s$$
for elementary processes (7.23), conclude that the integrals are equal for general
processes as well.
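A numerical sketch of the series (7.21), under the assumption (for illustration) that {f_j} is the trigonometric ONB of L²(S¹): by Parseval, Σ_j (∫_{−π}^{ζ} f_j)² = ‖1_{[−π,ζ]}‖² = ζ + π, so Var W(t, ζ) = t (ζ + π), the Brownian-sheet covariance on the diagonal.

```python
import numpy as np

# Antiderivatives g_j(zeta) = int_{-pi}^{zeta} f_j(xi) dxi for the trigonometric
# ONB f_0 = 1/sqrt(2 pi), f_j = cos(j xi)/sqrt(pi), sin(j xi)/sqrt(pi).
def basis_antiderivatives(zeta, n_modes):
    g = [(zeta + np.pi) / np.sqrt(2.0 * np.pi)]               # constant mode
    for j in range(1, n_modes + 1):
        g.append(np.sin(j * zeta) / (j * np.sqrt(np.pi)))     # cosine mode
        g.append((np.cos(j * np.pi) - np.cos(j * zeta)) / (j * np.sqrt(np.pi)))  # sine mode
    return np.array(g)

# Var W(t, zeta) = t * sum_j g_j(zeta)^2, which converges to t * (zeta + pi)
# by Parseval applied to the indicator of [-pi, zeta].
def sheet_variance(t, zeta, n_modes=2000):
    g = basis_antiderivatives(zeta, n_modes)
    return t * float(np.sum(g ** 2))
```

The partial sums converge like the tail of Σ 1/j², so a couple of thousand modes already reproduce the variance t (ζ + π) to a few parts in a thousand.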
Example 7.3 Consider the following SPDE driven by a real-valued Brownian mo-
tion:
$$\begin{cases} d_t u(t,x) = \Bigl(\alpha^2\dfrac{\partial^2 u(t,x)}{\partial x^2} + \beta\dfrac{\partial u(t,x)}{\partial x} + \gamma u(t,x) + g(x)\Bigr)dt + \Bigl(\sigma_1\dfrac{\partial u(t,x)}{\partial x} + \sigma_2 u(t,x)\Bigr)dW_t, \\[4pt] u(0,x) = \varphi(x) \in L^2(-\infty,\infty)\cap L^1(-\infty,+\infty), \end{cases} \tag{7.25}$$
where we use the symbol dt to signify that the differential is with respect to t. Let
H = L2 ((−∞, ∞)) and V = W01,2 ((−∞, ∞)) with the usual norms
$$\|v\|_H = \biggl(\int_{-\infty}^{+\infty} v^2\,dx\biggr)^{1/2},\ v \in H, \qquad \|v\|_V = \biggl(\int_{-\infty}^{+\infty}\Bigl(v^2 + \Bigl(\frac{dv}{dx}\Bigr)^2\Bigr)dx\biggr)^{1/2},\ v \in V.$$

Define

$$A(v) = \alpha^2\frac{d^2v}{dx^2} + \beta\frac{dv}{dx} + \gamma v + g,\quad v \in V, \qquad B(v) = \sigma_1\frac{dv}{dx} + \sigma_2 v,\quad v \in V.$$
Suppose that g ∈ L2 ((−∞, ∞)) ∩ L1 ((−∞, ∞)). Then, using integration by parts,
we obtain for v ∈ V ,
$$2\bigl\langle v, A(v)\bigr\rangle + \operatorname{tr}\bigl(Bv(Bv)^*\bigr) = 2\Bigl\langle v,\ \alpha^2\frac{d^2v}{dx^2} + \beta\frac{dv}{dx} + \gamma v + g\Bigr\rangle + \Bigl\|\sigma_1\frac{dv}{dx} + \sigma_2 v\Bigr\|_H^2$$
$$= \bigl(-2\alpha^2+\sigma_1^2\bigr)\|v\|_V^2 + \bigl(2\gamma+\sigma_2^2+2\alpha^2-\sigma_1^2\bigr)\|v\|_H^2 + 2\langle v,g\rangle_H$$
$$\le \bigl(-2\alpha^2+\sigma_1^2\bigr)\|v\|_V^2 + \bigl(2\gamma+\sigma_2^2+2\alpha^2-\sigma_1^2+\varepsilon\bigr)\|v\|_H^2 + \frac{1}{\varepsilon}\|g\|_H^2$$

for any ε > 0. Similarly, for u, v ∈ V,

$$2\bigl\langle u-v,\, A(u)-A(v)\bigr\rangle + \operatorname{tr}\bigl(B(u-v)\bigl(B(u-v)\bigr)^*\bigr) \le \bigl(-2\alpha^2+\sigma_1^2\bigr)\|u-v\|_V^2 + \bigl(2\gamma+\sigma_2^2+2\alpha^2-\sigma_1^2\bigr)\|u-v\|_H^2.$$
If −2α 2 + σ12 < 0, then the coercivity and weak monotonicity conditions, (6.39)
and (6.40), hold, and we know from Theorem 4.7 that there exists a unique strong
solution uϕ (t) to (7.25) in L2 (Ω, C([0, T ], H ))∩M 2 ([0, T ], V ). Taking the Fourier
transform yields
$$d_t\hat u^\varphi(t,\lambda) = \bigl(-\alpha^2\lambda^2\,\hat u^\varphi(t,\lambda) + i\lambda\beta\,\hat u^\varphi(t,\lambda) + \gamma\,\hat u^\varphi(t,\lambda) + \hat g(\lambda)\bigr)dt + \bigl(i\sigma_1\lambda\,\hat u^\varphi(t,\lambda) + \sigma_2\,\hat u^\varphi(t,\lambda)\bigr)dW_t.$$

For fixed λ, write

$$a = -\alpha^2\lambda^2 + i\lambda\beta + \gamma, \qquad b = \hat g(\lambda), \qquad c = i\sigma_1\lambda + \sigma_2.$$
By Plancherel’s theorem,
$$E\bigl\|u^\varphi(t)\bigr\|_H^2 = E\int_{-\infty}^{+\infty}\bigl|\hat u^\varphi(t,\lambda)\bigr|^2\,d\lambda$$

and

$$E\bigl\|u^\varphi(t)\bigr\|_V^2 = E\bigl\|u^\varphi(t)\bigr\|_H^2 + E\Bigl\|\frac{d}{dx}u^\varphi(t,x)\Bigr\|_H^2 = \int_{-\infty}^{+\infty}\bigl(1+\lambda^2\bigr)E\bigl|\hat u^\varphi(t,\lambda)\bigr|^2\,d\lambda.$$

Define

$$\Psi(\varphi) = \int_0^T\!\!\int_0^t E\bigl\|u^\varphi(s)\bigr\|_V^2\,ds\,dt = \int_{-\infty}^{+\infty}\!\int_0^T\!\!\int_0^t\bigl(1+\lambda^2\bigr)E\bigl|\hat u(s,\lambda)\bigr|^2\,ds\,dt\,d\lambda.$$
−∞ 0 0
$$A_0(v) = \alpha^2\frac{d^2v}{dx^2} + \beta\frac{dv}{dx} + \gamma v, \quad v \in V, \qquad B_0(v) = B(v), \quad v \in V$$

(since B is already linear). Taking the Fourier transform and solving explicitly, we obtain that the solution is the geometric Brownian motion

$$\hat u_0^\varphi(t,\lambda) = \hat\varphi(\lambda)\,e^{at - \frac12 c^2t + cW_t}, \qquad E\bigl|\hat u_0^\varphi(t,\lambda)\bigr|^2 = \bigl|\hat\varphi(\lambda)\bigr|^2 e^{(a+\bar a+c\bar c)t}.$$
The function $t \mapsto E\|u_0^\varphi(t)\|_V^2$ is continuous for all φ ∈ V,

$$\bigl\|A(v) - A_0(v)\bigr\|_{V^*} = \|g\|_{V^*} = o\bigl(\|v\|_V\bigr) \quad\text{as } \|v\|_V \to \infty,$$

and

$$\tau\bigl|B(v)QB^*(v) - (B_0v)Q(B_0v)^*\bigr| = 0.$$
Thus, if {u0 (t), t ≥ 0} is exponentially ultimately bounded in the m.s.s., then the
Lyapunov function Ψ0 (ϕ) of the linear system is the Lyapunov function of the non-
linear system, and
$$\Psi_0(\varphi) = \int_{-\infty}^{+\infty}\!\int_0^T\!\!\int_0^t\bigl(1+\lambda^2\bigr)E\bigl|\hat u_0(s,\lambda)\bigr|^2\,ds\,dt\,d\lambda$$
$$= \int_{-\infty}^{+\infty}\bigl(1+\lambda^2\bigr)\bigl|\hat\varphi(\lambda)\bigr|^2\biggl[\frac{\exp\{((-2\alpha^2+\sigma_1^2)\lambda^2+2\gamma+\sigma_2^2)T\}}{((-2\alpha^2+\sigma_1^2)\lambda^2+2\gamma+\sigma_2^2)^2} - \frac{T}{(-2\alpha^2+\sigma_1^2)\lambda^2+2\gamma+\sigma_2^2} - \frac{1}{((-2\alpha^2+\sigma_1^2)\lambda^2+2\gamma+\sigma_2^2)^2}\biggr]d\lambda.$$
Using Theorem 7.8, we can conclude that the solution of the nonlinear system is
exponentially ultimately bounded in the m.s.s.
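The exponent governing the second moment of each mode can be checked directly; the following sketch (an illustration with arbitrary sample parameters, not from the text) verifies that 2 Re a + |c|² equals (−2α² + σ₁²)λ² + 2γ + σ₂², the expression appearing in Ψ₀ above.

```python
import numpy as np

# a = -alpha^2 lam^2 + i lam beta + gamma and c = i sigma1 lam + sigma2, as in
# Example 7.3.  E|u-hat(t, lam)|^2 grows like exp((2 Re a + |c|^2) t).
def mode_exponent(alpha, beta, gamma, s1, s2, lam):
    a = -alpha ** 2 * lam ** 2 + 1j * lam * beta + gamma
    c = 1j * s1 * lam + s2
    return 2.0 * a.real + abs(c) ** 2

# The same exponent written as in the Lyapunov functional Psi_0.
def lyapunov_exponent(alpha, gamma, s1, s2, lam):
    return (-2.0 * alpha ** 2 + s1 ** 2) * lam ** 2 + 2.0 * gamma + s2 ** 2
```

When −2α² + σ₁² < 0 and 2γ + σ₂² < 0, the exponent is negative for every λ, so each mode, and hence E‖u₀(t)‖²_H, decays exponentially.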
where F and B satisfy the conditions of Proposition 7.1. This example is motivated by the work of Funaki. If −A is coercive, a typical case being A = Δ, we conclude that the solution of the deterministic linear equation is exponentially stable, since the Laplacian has negative eigenvalues. Thus, the solution of the deterministic equation is exponentially bounded, and hence, by Proposition 7.1, the solution of the nonlinear equation above is exponentially ultimately bounded in the m.s.s.
$$q(x,y) = \sum_{j=1}^\infty \lambda_j e_j(x)e_j(y)$$

with $\operatorname{tr}(Q) = \int_{\mathcal{O}} q(x,x)\,dx = \sum_{j=1}^\infty \lambda_j < \infty$.
Let −A be a linear strongly elliptic differential operator of second order on O, and B(u) : L²(O) → L²(O) with B(u)f(·) = u(·)f(·). By Gårding's inequality, −A is coercive (see [63], Theorem 7.2.2). The infinite-dimensional problem is then as follows:
and we choose Λ(v) = ‖v‖²_H for v ∈ W₀^{1,2}(O). We shall check conditions under which Λ is a Lyapunov function. With 𝓛 defined in (6.15), using the spectral representation of q(x,y), we have

$$\mathscr{L}\|v\|_H^2 = 2\langle v, Av\rangle + \operatorname{tr}\bigl(B(v)QB^*(v)\bigr).$$

Let

$$\lambda_0 = \sup\biggl\{\frac{\mathscr{L}(\|v\|_H^2)}{\|v\|_H^2} : v \in W_0^{1,2}(\mathcal{O}),\ \|v\|_H^2 \ne 0\biggr\} = \sup\biggl\{\frac{2\langle v, Av\rangle + \langle Qv, v\rangle_H}{\|v\|_H^2} : v \in W_0^{1,2}(\mathcal{O}),\ \|v\|_H^2 \ne 0\biggr\}.$$
If λ0 < 0, then, by Theorem 6.4, the solution is exponentially stable in the m.s.s.
Consider the nonlinear equation in O,

$$\begin{cases} d_t u(t,x) = \tilde A\bigl(x, u(t,x)\bigr)\,dt + \tilde B\bigl(u(t,x)\bigr)\,d_t W_q(t,x), \\ u(0,x) = \varphi(x), \qquad u(t,x)\big|_{\partial\mathcal{O}} = 0. \end{cases} \tag{7.27}$$
Assume that
so that the nonlinear equation (7.27) has a unique strong solution. Under the as-
sumption
αi (x, 0) = 0,
zero is a solution of (7.27), and if
$$\sup_{x\in\mathcal{O}}\bigl|\alpha_i(x,v)\bigr| = o\bigl(\|v\|_H\bigr), \qquad \|v\|_H \to 0,$$
then, by Theorem 6.14, the strong solution of the nonlinear equation (7.27) is expo-
nentially stable in the m.s.s.
On the other hand, let us consider the operator A as above and F and B satisfying
the conditions of Proposition 7.1. Then, under the condition
$$\sup\biggl\{\frac{2\langle v, Av\rangle}{\|v\|_H^2} : v \in W_0^{1,2}(\mathcal{O}),\ \|v\|_H^2 \ne 0\biggr\} < 0,$$
the solution of the abstract Cauchy problem (7.19), with A0 replaced by A, is expo-
nentially stable, and we conclude that the solution of the equation
$$dX(t) = AX(t)\,dt + F\bigl(X(t)\bigr)\,dt + B\bigl(X(t)\bigr)\,dW_t, \qquad X(0) = x \in H,$$
Consider now the SSDE (3.1) and assume that A is the infinitesimal generator of
a pseudo-contraction C0 -semigroup {S(t), t ≥ 0} on H (see Chap. 3) with the co-
efficients F : H → H and B : H → L (K, H ), independent of t and ω. We assume
that F and B are in general nonlinear mappings satisfying the linear growth con-
dition (A3) and the Lipschitz condition (A4) (see Sect. 3.3). In addition, the initial
condition is assumed deterministic, so that (3.1) takes the form
$$dX(t) = \bigl(AX(t) + F\bigl(X(t)\bigr)\bigr)\,dt + B\bigl(X(t)\bigr)\,dW_t, \qquad X(0) = x \in H. \tag{7.28}$$
Proposition 7.2 Suppose that the classical solution {uˣ(t), t ≥ 0} of the abstract Cauchy problem (7.19) is exponentially stable (or even exponentially ultimately bounded) and that, as ‖h‖_H → ∞,

$$\bigl\|F(h)\bigr\|_H = o\bigl(\|h\|_H\bigr), \qquad \bigl\|B(h)\bigr\|_{\mathscr{L}(K,H)} = o\bigl(\|h\|_H\bigr).$$

Then the mild solution of (7.28) is exponentially ultimately bounded in the m.s.s.
$$E\Bigl(f\bigl(X^{\xi_0}(t+s)\bigr)\,\Big|\,\mathscr{F}_t^{X^{\xi_0}}\Bigr) = (P_sf)\bigl(X_t^{\xi_0}\bigr) = \int_H f(y)\,P\bigl(s, X^{\xi_0}(t), dy\bigr),$$

which follows from the semigroup property of P_t, from (3.58) applied to ϕ(x) = 1_A(x), and from the fact that P(t, x, dy) is the conditional law of X^{ξ₀}(t).
Let us now define an invariant probability measure and state a general theorem
on its existence.
$$\mu_n(A) = \frac{1}{t_n}\int_0^{t_n}\!\!\int_H P(t,x,A)\,\mu(dx)\,dt \tag{7.31}$$

and

$$\int_H f(x)\,\mu_n(dx) = \frac{1}{t_n}\int_0^{t_n}\!\!\int_H\!\int_H f(y)\,P(t,x,dy)\,\mu(dx)\,dt. \tag{7.32}$$
Proof We can assume without loss of generality that μn ⇒ ν. Observe that, by the
Fubini theorem and the Chapman–Kolmogorov equation,
$$= \lim_{n\to\infty}\frac{1}{t_n}\int_0^{t_n}\!\!\int_H\!\int_H (P_tf)(y)\,P(s,x,dy)\,\mu(dx)\,ds = \lim_{n\to\infty}\frac{1}{t_n}\int_0^{t_n}\!\!\int_H (P_{t+s}f)(x)\,\mu(dx)\,ds$$
$$= \lim_{n\to\infty}\frac{1}{t_n}\biggl[\int_0^{t_n}\!\!\int_H (P_sf)(x)\,\mu(dx)\,ds + \int_{t_n}^{t_n+t}\!\!\int_H (P_sf)(x)\,\mu(dx)\,ds - \int_0^{t}\!\!\int_H (P_sf)(x)\,\mu(dx)\,ds\biggr].$$

Since |(P_sf)(x)| ≤ ‖f‖_∞, the last two integrals are bounded by a constant, and hence, using (7.32),

$$\int_H (P_tf)(x)\,\nu(dx) = \lim_{n\to\infty}\frac{1}{t_n}\int_0^{t_n}\!\!\int_H (P_sf)(x)\,\mu(dx)\,ds = \lim_{n\to\infty}\frac{1}{t_n}\int_0^{t_n}\!\!\int_H\!\int_H f(y)\,P(s,x,dy)\,\mu(dx)\,ds$$
Corollary 7.4 If the sequence {μn } is relatively compact, then an invariant measure
exists.
Exercise 7.5 Show that if, as t → ∞, the laws of X x (t) converge weakly to a
probability measure μ, then μ is an invariant measure for the corresponding semi-
group Pt .
Thus, properties of the solution can be used to obtain tightness of the measures μn .
Let $\mathcal{C}_0^\infty = \{v \in C_0^\infty(D)\times C_0^\infty(D) : \nabla\cdot v = 0\}$, with ∇· denoting the divergence. Let H be the closure of $\mathcal{C}_0^\infty$ in L²(D) × L²(D), and V = {v ∈ W₀^{1,2}(D) × W₀^{1,2}(D) : ∇·v = 0}. Then V ⊆ H ⊆ V* is a Gelfand triplet, and the embedding V → H is compact.
It is known [76] that
L2 (D) × L2 (D) = H ⊕ H ⊥ ,
$$\sup_T \frac{1}{T}\int_0^T E\bigl\|u^\xi(t)\bigr\|_V^2\,dt \le \frac{c}{2\nu}\operatorname{tr}(Q)$$

and

$$\lim_{R\to\infty}\sup_T \frac{1}{T}\int_0^T P\bigl(\bigl\|u^\xi(t)\bigr\|_V > R\bigr)\,dt = 0.$$
7.4 Ultimate Boundedness and Invariant Measure 259
$$\sup_T \frac{1}{T}\int_0^T P\bigl(\bigl\|u^\xi(t)\bigr\|_V > R_\varepsilon\bigr)\,dt < \varepsilon.$$
Thus, as t_n → ∞,

$$\sup_n \frac{1}{t_n}\int_H\!\int_0^{t_n} P\bigl(t, x, \tilde B_{R_\varepsilon}\bigr)\,dt\,\mu_\xi(dx) < \varepsilon,$$

where $\tilde B_{R_\varepsilon}$ is the image of the set {v ∈ V : ‖v‖_V > R_ε} under the compact embedding V → H, and μ_ξ is the distribution of ξ on H. Since $\tilde B_{R_\varepsilon}$ is the complement of a compact set, we can use Prokhorov's theorem and Corollary 7.4 to conclude that an invariant measure exists. Note that its support is in V, by the weak convergence.
Example 7.7 (Linear equations with additive noise [79]) Consider the mild solution
of the equation
$$dX(t) = AX(t)\,dt + dW_t, \qquad X(0) = x \in H,$$

and assume that tr(Q_t) < ∞. We know from Theorems 3.1 and 3.2 that

$$X(t) = S(t)x + \int_0^t S(t-s)\,dW_s \tag{7.35}$$

is the mild solution of the above equation. The stochastic convolution $\int_0^t S(t-s)\,dW_s$ is an H-valued Gaussian process with covariance

$$Q_t = \int_0^t S(u)\,Q\,S^*(u)\,du$$

for any t. The Gaussian process X(t) is also Markov and Feller, and it is called an Ornstein–Uhlenbeck process. The probability measure μ on H is invariant if, for f ∈ C_b(H) and any t ≥ 0,

$$\int_H f(x)\,\mu(dx) = \int_H Ef\bigl(X^x(t)\bigr)\,\mu(dx) = \int_H Ef\Bigl(S(t)x + \int_0^t S(t-s)\,dW_s\Bigr)\mu(dx).$$
or

$$\langle Q_t\lambda,\lambda\rangle_H \le -2\ln\bigl|\hat\mu(\lambda)\bigr| = 2\ln\frac{1}{|\hat\mu(\lambda)|}.$$

Since μ̂(λ) is the characteristic function of a measure μ on H, by Sazonov's theorem [74], for ε > 0 there exists a trace-class operator S₀ on H such that |μ̂(λ)| ≥ 1/2 whenever ⟨S₀λ, λ⟩_H ≤ 1. Thus, we conclude that

$$\langle Q_t\lambda,\lambda\rangle_H \le 2\ln 2 \quad\text{if } \langle S_0\lambda,\lambda\rangle_H \le 1.$$

This yields

$$0 \le Q_t \le (2\ln 2)\,S_0.$$
Hence, supt tr(Qt ) < ∞.
On the other hand, if sup_t tr(Q_t) < ∞, let us denote by P the limit in trace norm of Q_t and observe that

$$S(t)PS^*(t) = \int_0^\infty S(t+r)\,Q\,S^*(t+r)\,dr = \int_t^\infty S(u)\,Q\,S^*(u)\,du = P - Q_t.$$

Thus,

$$\tfrac12\bigl\langle S(t)PS^*(t)\lambda,\lambda\bigr\rangle_H = \tfrac12\langle P\lambda,\lambda\rangle_H - \tfrac12\langle Q_t\lambda,\lambda\rangle_H,$$

implying

$$e^{-\frac12\langle P\lambda,\lambda\rangle_H} = e^{-\frac12\langle PS^*(t)\lambda,\,S^*(t)\lambda\rangle_H}\,e^{-\frac12\langle Q_t\lambda,\lambda\rangle_H}.$$

In conclusion, μ with the characteristic functional $e^{-\frac12\langle P\lambda,\lambda\rangle_H}$ is an invariant measure. We observe that the invariant measure exists for the Markov process X(t) defined in (7.35) if and only if sup_t tr(Q_t) < ∞. Also, if S(t) is an exponentially stable semigroup (i.e., ‖S(t)‖_{𝓛(H)} ≤ Me^{−μt} for some positive constants M and μ) or if S(t)x → 0 for all x ∈ H as t → ∞, then the Gaussian measure with covariance P is the invariant (Maxwell) probability measure.
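A scalar sketch (an illustrative analogue, not from the text): for dX = −μX dt + dW_t we have S(t) = e^{−μt} and Q_t = (1 − e^{−2μt})/(2μ), so sup_t tr Q_t < ∞ and P = 1/(2μ); invariance of the Gaussian N(0, P) is exactly the identity e^{−2μt}P + Q_t = P.

```python
import numpy as np

# Scalar Ornstein-Uhlenbeck analogue of Example 7.7: dX = -mu X dt + dW_t.
def Q_t(mu, t):
    """Covariance of the stochastic convolution: int_0^t e^{-2 mu u} du."""
    return (1.0 - np.exp(-2.0 * mu * t)) / (2.0 * mu)

def invariant_variance(mu):
    """P = lim_{t -> infinity} Q_t = 1 / (2 mu)."""
    return 1.0 / (2.0 * mu)

def propagated_variance(mu, t, v0):
    """Variance of S(t) X_0 + stochastic convolution when X_0 ~ N(0, v0)."""
    return np.exp(-2.0 * mu * t) * v0 + Q_t(mu, t)
```

Starting from any variance v₀, propagated_variance converges to P as t grows, mirroring the convergence of the laws of X(t) to the invariant Gaussian measure.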
Definition 7.5 A stochastic process X(t) satisfying condition (7.36) is called ulti-
mately bounded in the m.s.s.
We focus our attention now on the variational equation with a deterministic initial
condition,
$$dX(t) = A\bigl(X(t)\bigr)\,dt + B\bigl(X(t)\bigr)\,dW_t, \qquad X(0) = x \in H, \tag{7.37}$$
which is driven by a Q-Wiener process Wt . The coefficients A : V → V ∗ and B :
V → L (K, H ) are independent of t and ω, and they satisfy the linear growth,
coercivity (C), and weak monotonicity (WM) conditions (6.38), (6.39), (6.40). By
Theorem 4.8 and Remark 4.2 the solution is a homogeneous Markov process, and
the associated semigroup is Feller.
We note that in Theorem 7.5 we give conditions for exponential ultimate boundedness in the m.s.s. in terms of the Lyapunov function. Assume that Ψ : H → R satisfies the conditions of Theorem 6.10 (Itô's formula) and define
$$\mathscr{L}\psi(u) = \bigl\langle \psi'(u), A(u)\bigr\rangle + \tfrac12\operatorname{tr}\bigl[\psi''(u)B(u)QB^*(u)\bigr]. \tag{7.38}$$
Let {X x (t), t ≥ 0} be the solution of (7.37). We apply Itô’s formula to Ψ (X x (t)),
take the expectation, and use condition (2) of Theorem 7.5 to obtain
$$E\Psi\bigl(X^x(t)\bigr) - E\Psi\bigl(X^x(t')\bigr) = E\int_{t'}^{t}\mathscr{L}\Psi\bigl(X^x(s)\bigr)\,ds \le \int_{t'}^{t}\bigl(-c_3E\Psi\bigl(X^x(s)\bigr) + k_3\bigr)\,ds.$$
Let us now state the theorem connecting the ultimate boundedness with the exis-
tence of invariant measure.
Theorem 7.11 Let {X x (t), t ≥ 0} be a solution of (7.37). Assume that the embed-
ding V → H is compact. If X x (t) is ultimately bounded in the m.s.s., then there
exists an invariant measure μ for {X x (t), t ≥ 0}.
Proof Applying Itô’s formula to the function x2H and using the coercivity condi-
tion, we have
$$E\bigl\|X^x(t)\bigr\|_H^2 - \|x\|_H^2 = \int_0^t E\mathscr{L}\bigl\|X^x(s)\bigr\|_H^2\,ds \le \lambda\int_0^t E\bigl\|X^x(s)\bigr\|_H^2\,ds - \alpha\int_0^t E\bigl\|X^x(s)\bigr\|_V^2\,ds + \gamma t,$$

with 𝓛 defined in (7.38). Hence,

$$\int_0^t E\bigl\|X^x(s)\bigr\|_V^2\,ds \le \frac{1}{\alpha}\biggl(|\lambda|\int_0^t E\bigl\|X^x(s)\bigr\|_H^2\,ds + \|x\|_H^2 + \gamma t\biggr).$$
Therefore,
$$\frac{1}{T}\int_0^T P\bigl(\bigl\|X^x(t)\bigr\|_V > R\bigr)\,dt \le \frac{1}{T}\int_0^T \frac{E\|X^x(t)\|_V^2}{R^2}\,dt \le \frac{1}{\alpha R^2}\,\frac{1}{T}\biggl(|\lambda|\int_0^T E\bigl\|X^x(t)\bigr\|_H^2\,dt + \|x\|_H^2 + \gamma T\biggr).$$
Remark 7.2 Note that a weaker condition on the second moment of X x (t), i.e.,
$$\sup_{T>T_0}\frac{1}{T}\int_0^T E\bigl\|X^x(t)\bigr\|_H^2\,dt < M \quad\text{for some } T_0 \ge 0,$$
Proof Let f(x) = ‖x‖²_V and f_n(x) = 1_{[0,n]}(f(x)). Then f_n(x) ∈ L¹(V, μ). We use the ergodic theorem for a Markov process with an invariant measure (see [78], p. 388). This gives

$$\lim_{T\to\infty}\frac{1}{T}\int_0^T (P_tf_n)(x)\,dt = f_n^*(x) \quad \mu\text{-a.e.}$$

and $E_\mu f_n^* = E_\mu f_n$, where $E_\mu f_n = \int_V f_n(x)\,\mu(dx)$.
By the assumption of ultimate boundedness, we have, as in the proof of Theorem 7.11,
$$\limsup_{T\to\infty}\frac{1}{T}\int_0^T E\bigl\|X^x(t)\bigr\|_V^2\,dt \le \frac{C|\lambda|}{\alpha}, \qquad C < \infty.$$

Hence,

$$f_n^*(x) = \lim_{T\to\infty}\frac{1}{T}\int_0^T (P_tf_n)(x)\,dt \le \limsup_{T\to\infty}\frac{1}{T}\int_0^T (P_tf)(x)\,dt = \limsup_{T\to\infty}\frac{1}{T}\int_0^T E\bigl\|X^x(t)\bigr\|_V^2\,dt \le \frac{C|\lambda|}{\alpha}.$$
Remark 7.3 (a) For parabolic Itô equations, one can easily derive the result using
Ψ (x) = x2H and Theorem 7.11.
(b) Note that if μn ⇒ μ and the support of μn is in V with the embedding
V → H being compact, then by the weak convergence the support of μ is in V
by the same argument as in Example 7.6.
Theorem 7.13 Suppose that for ε, δ, and R > 0, there exists a constant T₀(ε, δ, R) > 0 such that for T ≥ T₀,

$$\frac{1}{T}\int_0^T P\bigl(\bigl\|X^x(t) - X^y(t)\bigr\|_V \ge \delta\bigr)\,dt < \varepsilon$$
Proof Suppose that μ, ν are invariant measures with support in V . We need to show
that
$$\int_H f(x)\,\mu(dx) = \int_H f(x)\,\nu(dx)$$
for f uniformly continuous bounded on H , since such functions form a determining
class.
For G ∈ 𝓑(H), define

$$\mu_T^x(G) = \frac{1}{T}\int_0^T P\bigl(X^x(t) \in G\bigr)\,dt, \qquad x \in H,\ T > 0.$$

Let

$$F(y,z) = \biggl|\int_H f(x)\,\mu_T^y(dx) - \int_H f(x)\,\mu_T^z(dx)\biggr|.$$

Then, using the fact that μ and ν have their supports in V, we have

$$\biggl|\int_H f(x)\,\mu(dx) - \int_H f(x)\,\nu(dx)\biggr| \le \int_{V\times V} F(y,z)\,\mu(dy)\,\nu(dz).$$
Then,

$$\biggl|\int_H f(x)\,\mu(dx) - \int_H f(x)\,\nu(dx)\biggr| \le \int_{V_R\times V_R} F(y,z)\,\mu(dy)\,\nu(dz) + 4\varepsilon + 2\varepsilon^2 M,$$

and, for y, z ∈ V_R,

$$F(y,z) \le 2M\sup_{y,z\in V_R}\frac{1}{T}\int_0^T P\bigl(\bigl\|X^y(t) - X^z(t)\bigr\|_V > \delta\bigr)\,dt + \sup_{\substack{y,z\in V_R\\ \|y-z\|<\delta}}\bigl|f(y) - f(z)\bigr| \le 2M\varepsilon + \varepsilon.$$
Let us now give a condition on the coefficients of the SDE (7.37) which guarantees the uniqueness of the invariant measure. We have proved in Theorem 7.11 (see Remark 7.3) that the condition
$$\sup_{T>T_0}\frac{1}{T}\int_0^T E\bigl\|X^x(t)\bigr\|_H^2\,dt \le M \quad\text{for some } T_0 \ge 0$$
implies that there exists an invariant measure to the strong solution {X x (t), t ≥ 0},
whose support is in V .
where the norm $\|\cdot\|_{\mathscr{L}_2(K_Q,H)}$ is the Hilbert–Schmidt norm defined in (2.7). Assume that the solution {Xˣ(t), t ≥ 0} of (7.37) is ultimately bounded in the m.s.s. Then there exists a unique invariant measure.
$$E\bigl\|X^x(t)\bigr\|_H^2 + \alpha E\int_0^t\bigl\|X^x(s)\bigr\|_V^2\,ds \le \|x\|_H^2 + \gamma t + \lambda E\int_0^t\bigl\|X^x(s)\bigr\|_H^2\,ds.$$
Hence, using the arguments in Example 7.6, an invariant measure exists and is supported on V. To prove the uniqueness, let X^{x₁}(t), X^{x₂}(t) be two solutions with initial values x₁, x₂. We apply Itô's formula to X(t) = X^{x₁}(t) − X^{x₂}(t) and obtain
$$E\bigl\|X(t)\bigr\|_H^2 \le \|x_1-x_2\|_H^2 + 2E\int_0^t\bigl\langle X(s),\, A\bigl(X^{x_1}(s)\bigr) - A\bigl(X^{x_2}(s)\bigr)\bigr\rangle\,ds + E\int_0^t\bigl\|B\bigl(X^{x_1}(s)\bigr) - B\bigl(X^{x_2}(s)\bigr)\bigr\|_{\mathscr{L}_2(K_Q,H)}^2\,ds.$$

By the weak monotonicity condition (6.40),

$$E\bigl\|X(t)\bigr\|_H^2 \le \|x_1-x_2\|_H^2 - c\int_0^t E\bigl\|X(s)\bigr\|_V^2\,ds,$$
Let us consider now the existence of an invariant measure for a mild solution of a
semilinear SDE with deterministic initial condition
$$dX(t) = \bigl(AX(t) + F\bigl(X(t)\bigr)\bigr)\,dt + B\bigl(X(t)\bigr)\,dW_t, \qquad X(0) = x \in H, \tag{7.39}$$
Proposition 7.4 Suppose that the mild solution {X x (t)} of (7.39) is ultimately
bounded in the m.s.s. Then any invariant measure ν of the Markov process
{X x (t), t ≥ 0} satisfies
$$\int_H \|y\|_H^2\,\nu(dy) \le M,$$
where M is as in (7.36).
The proof is similar to the proof of Theorem 7.12 and is left to the reader as an
exercise.
with B_H(R) = {x ∈ H : ‖x‖_H ≤ R}; then there exists at most one invariant measure for the Markov process Xˣ(t).
Proof Let μi , i = 1, 2, be two invariant measures. Then, by Proposition 7.4, for each
ε > 0, there exists R > 0 such that μi (H \ BH (R)) < ε. Let f be a bounded weakly
continuous function on H . We claim that there exists a constant T = T (ε, R, f ) > 0
such that
$$\bigl|P_tf(x) - P_tf(y)\bigr| \le \varepsilon \quad\text{for } x, y \in B_H(R) \text{ if } t \ge T.$$
Let C be a weakly compact set in H . The weak topology on C is given by the metric
$$d(x,y) = \sum_{k=1}^\infty \frac{1}{2^k}\bigl|\langle e_k, x-y\rangle_H\bigr|, \qquad x, y \in C, \tag{7.41}$$

where $\{e_k\}_{k=1}^\infty$ is an orthonormal basis in H.
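The metric (7.41) is straightforward to compute for finitely supported coordinate sequences; a small sketch (coordinates taken in the canonical ONB, an assumption for illustration):

```python
import numpy as np

# d(x, y) = sum_k 2^{-k} |<e_k, x - y>_H| for finitely supported coordinates.
def weak_metric(x, y):
    diff = np.abs(np.asarray(x, dtype=float) - np.asarray(y, dtype=float))
    weights = 0.5 ** np.arange(1, diff.size + 1)   # 1/2, 1/4, 1/8, ...
    return float(np.sum(weights * diff))
```

The geometric weights make high-frequency coordinates nearly invisible: a unit perturbation contributes 1/2 in the first coordinate but only 2⁻ᵏ in the k-th, which is why d metrizes the weak topology only on bounded sets.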
By the ultimate boundedness, there exists T₁ = T₁(ε, R) > 0 such that for t ≥ T₁,

$$P\bigl(X^x(t) \in B_H(R)\bigr) > 1 - \varepsilon/2 \quad\text{for } x \in B_H(R).$$

Now f is uniformly continuous w.r.t. the metric (7.41) on B_H(R). Hence, there exists δ > 0 such that x, y ∈ B_H(R) with d(x, y) < δ imply that |f(x) − f(y)| ≤ δ, and there exists J > 0 such that

$$\sum_{k=J+1}^\infty \frac{1}{2^k}\bigl|\langle e_k, x-y\rangle_H\bigr| \le \delta/2 \quad\text{for } x, y \in B_H(R).$$
Since P(|⟨e_k, Xˣ(t) − Xʸ(t)⟩| > δ) ≤ P(‖Xˣ(t) − Xʸ(t)‖_H > δ), by the given assumption we can choose T₂ ≥ T₁ such that for t ≥ T₂,

$$P\Biggl(\,\sum_{k=1}^{J}\frac{1}{2^k}\bigl|\bigl\langle e_k, X^x(t)\bigr\rangle - \bigl\langle e_k, X^y(t)\bigr\rangle\bigr| \le \delta/2\Biggr) \ge 1 - \varepsilon/3. \tag{7.42}$$
For t ≥ T, we have

$$\biggl|\int_H f(x)\,\mu_1(dx) - \int_H f(y)\,\mu_2(dy)\biggr| = \biggl|\int_H\!\int_H\bigl[f(x) - f(y)\bigr]\,\mu_1(dx)\,\mu_2(dy)\biggr| = \biggl|\int_H\!\int_H\bigl[(P_tf)(x) - (P_tf)(y)\bigr]\,\mu_1(dx)\,\mu_2(dy)\biggr|.$$

Splitting the double integral over the regions B_H(R) × B_H(R), (H \ B_H(R)) × H, and B_H(R) × (H \ B_H(R)), we obtain the bound

$$\le \varepsilon + 2(2M_0)\varepsilon + 2M_0\varepsilon^2.$$
In case we look at the solution to (7.39), whose coefficients satisfy the linear
growth and Lipschitz conditions (A3) and (A4) of Sect. 3.1 in Chap. 3, we conclude
that under assumption (7.40) and conditions for exponential ultimate boundedness,
there exists at most one invariant measure.
Note that in the problem of existence of the invariant measure, the relative weak
compactness of the sequence μn in Theorem 7.10 is crucial. In the variational case,
we achieved this condition, under ultimate boundedness in the m.s.s., assuming that
the embedding V → H is compact. For mild solutions, Ichikawa [33] and Da Prato and Zabczyk [11] give sufficient conditions. Da Prato and Zabczyk use a factorization technique introduced in [10]. We start with the result in [32].
$$\frac{1}{T}\int_0^T E\bigl\|X^x(s)\bigr\|_H^2\,ds \le M\bigl(1 + \|x\|_H^2\bigr). \tag{7.43}$$
Then there exists an invariant measure for the Markov semigroup generated by the
solution of (7.39).
Lemma 7.2 Under the conditions of Theorem 7.16, the set of measures

$$\mu_t(\cdot) = \frac{1}{t}\int_0^t P(s,x,\cdot)\,ds, \qquad t \ge 0,$$
Proof Let y_k(t) = ⟨Xˣ(t), e_k⟩_H. Then, by a well-known result about weak compactness ([25], Vol. I, Chap. VI, Sect. 2, Theorem 2), we need to show that the expression

$$\sum_{k=1}^\infty \frac{1}{T}\int_0^T Ey_k^2(t)\,dt$$

is uniformly convergent in T.
Let S(t) be the C₀-semigroup generated by A. Since S(t)e_k = e^{−λ_k t}e_k for each k, y_k(t) satisfies

$$y_k(t) = e^{-\lambda_kt}x_k^0 + \int_0^t e^{-\lambda_k(t-s)}\bigl\langle e_k, F\bigl(X^x(s)\bigr)\bigr\rangle_H\,ds + \int_0^t e^{-\lambda_k(t-s)}\bigl\langle e_k, B\bigl(X^x(s)\bigr)\,dW(s)\bigr\rangle_H,$$

so that

$$Ey_k^2(t) \le 3e^{-2\lambda_kt}\bigl(x_k^0\bigr)^2 + 3E\biggl(\int_0^t e^{-\lambda_k(t-s)}\bigl\langle e_k, F\bigl(X^x(s)\bigr)\bigr\rangle_H\,ds\biggr)^2 + 3E\biggl(\int_0^t e^{-\lambda_k(t-s)}\bigl\langle e_k, B\bigl(X^x(s)\bigr)\,dW_s\bigr\rangle_H\biggr)^2.$$
For N large enough, so that λ_N > 0, and any m > 0, using Exercise 7.8 and assumption (7.43), we have

$$\sum_{k=N}^{N+m}\frac{1}{T}\int_0^T E\biggl(\int_0^t e^{-\lambda_k(t-s)}\bigl\langle e_k, F\bigl(X^x(s)\bigr)\bigr\rangle_H\,ds\biggr)^2 dt \le \sum_{k=N}^{N+m}\frac{1}{2\varepsilon T}\int_0^T\!\!\int_0^t e^{2(-\lambda_k+\varepsilon)(t-s)}E\bigl\langle e_k, F\bigl(X^x(s)\bigr)\bigr\rangle_H^2\,ds\,dt$$
$$= \sum_{k=N}^{N+m}\frac{1}{2\varepsilon T}\int_0^T\biggl(\int_s^T e^{2(-\lambda_k+\varepsilon)(t-s)}\,dt\biggr)E\bigl\langle e_k, F\bigl(X^x(s)\bigr)\bigr\rangle_H^2\,ds \le \frac{\int_0^T E\bigl\|F\bigl(X^x(s)\bigr)\bigr\|_H^2\,ds}{4\varepsilon(\lambda_N-\varepsilon)T} \le \frac{c_1\bigl(1+\|x\|_H^2\bigr)}{\varepsilon(\lambda_N-\varepsilon)}$$
and

$$\sum_{k=N}^{N+m}\frac{1}{T}\int_0^T E\biggl(\int_0^t e^{-\lambda_k(t-s)}\bigl\langle e_k, B\bigl(X^x(s)\bigr)\,dW(s)\bigr\rangle\biggr)^2 dt \le \frac{\operatorname{tr}(Q)\int_0^T E\bigl\|B\bigl(X^x(t)\bigr)\bigr\|_{\mathscr{L}(K,H)}^2\,dt}{2\lambda_NT} \le \frac{c_2\operatorname{tr}(Q)\bigl(1+\|x\|_H^2\bigr)}{\lambda_N}.$$
Consequently,

$$\sum_{k=N}^{N+m}\frac{1}{T}\int_0^T Ey_k^2(t)\,dt \le \frac{3\|x\|_H^2}{2\lambda_N} + 3(c_1+c_2)\bigl(1+\|x\|_H^2\bigr)\biggl[\frac{1}{\delta(\lambda_N-\delta)} + \frac{\operatorname{tr}(Q)}{\lambda_N}\biggr].$$
Exercise 7.8 Let p > 1, and let g be a nonnegative locally p-integrable function on [0, ∞). Then for all ε > 0 and real d,

$$\biggl(\int_0^t e^{d(t-r)}g(r)\,dr\biggr)^p \le \biggl(\frac{1}{q\varepsilon}\biggr)^{p/q}\int_0^t e^{p(d+\varepsilon)(t-r)}g^p(r)\,dr,$$

where 1/p + 1/q = 1.
We finally present a result in [12], which uses an innovative technique to prove the
tightness of the laws L (X x (t)). We start with the problem
$$dX(t) = \bigl(AX(t) + F\bigl(X(t)\bigr)\bigr)\,dt + B\bigl(X(t)\bigr)\,d\tilde W_t, \qquad X(0) = x \in H, \tag{7.44}$$
Hypothesis (DZ) Let conditions (DZ1)–(DZ4) of Sect. 3.10 hold, and, in addition,
assume that:
(DZ5) {S(t), t > 0} is a compact semigroup.
(DZ6) For all x ∈ H and ε > 0, there exists R > 0 such that for every T ≥ 1,
$$\frac{1}{T}\int_0^T P\bigl(\bigl\|X^x(t)\bigr\|_H > R\bigr)\,dt < \varepsilon,$$
Theorem 7.17 Under Hypothesis (DZ), there exists an invariant measure for the
mild solution of (7.44).
Proof We recall the factorization formula used in Lemma 3.3. Let x ∈ H, and

$$Y^x(t) = \int_0^t (t-s)^{-\alpha}S(t-s)B\bigl(X^x(s)\bigr)\,dW_s.$$

Then

$$X^x(1) = S(1)x + G_1F\bigl(X^x(\cdot)\bigr)(1) + \frac{\sin\pi\alpha}{\pi}\,G_\alpha Y^x(\cdot)(1) \quad P\text{-a.s.}$$

By Lemma 3.12, the compactness of the semigroup {S(t), t ≥ 0} implies that the operators G_α, defined by

$$G_\alpha f(t) = \int_0^t (t-s)^{\alpha-1}S(t-s)f(s)\,ds, \qquad f \in L^p\bigl([0,T],H\bigr),$$

are compact from Lᵖ([0,T], H) into C([0,T], H) for p ≥ 2 and 1/p < α ≤ 1.
Consider γ : H × Lp ([0, 1], H ) × Lp ([0, 1], H ) → H ,
Lemma 7.3 Assume that p > 2, α ∈ (1/p, 1/2), and that Hypothesis (DZ) holds. Then there exists a constant c > 0 such that for r > 0 and all x ∈ H with ‖x‖_H ≤ r,

$$P\bigl(X^x(1) \in K(r)\bigr) \ge 1 - c\,r^{-p}\bigl(1 + \|x\|_H^p\bigr).$$

We estimate

$$E\int_0^1\biggl\|\int_0^s (s-u)^{-\alpha}S(s-u)B\bigl(X^x(u)\bigr)\,dW_u\biggr\|_H^p ds \le k\,E\int_0^1\biggl(\int_0^s (s-u)^{-2\alpha}\bigl\|S(s-u)B\bigl(X^x(u)\bigr)\bigr\|_{\mathscr{L}_2(K,H)}^2\,du\biggr)^{p/2} ds$$
$$\le k\,2^{p/2}E\int_0^1\biggl(\int_0^s (s-u)^{-2\alpha}K^2(s-u)\bigl(1 + \bigl\|X^x(u)\bigr\|_H^2\bigr)\,du\biggr)^{p/2} ds,$$
giving

$$P\bigl(X^x(1) \in K(r)\bigr) \ge P\biggl(\Bigl\{\bigl\|Y^x(\cdot)\bigr\|_{L^p} \le \frac{\pi r}{\sin\alpha\pi}\Bigr\} \cap \Bigl\{\bigl\|F\bigl(X^x(\cdot)\bigr)\bigr\|_{L^p} \le r\Bigr\}\biggr) \ge 1 - r^{-p}\bigl(\pi^{-p}k_1 + k_2\bigr)\bigl(1 + \|x\|_H^p\bigr).$$
$$\ge \int_{\|y\|_H\le r_1}\bigl(1 - c\,r^{-p}\bigl(1 + r_1^p\bigr)\bigr)\,P(t-1,x,dy) = \bigl(1 - c\,r^{-p}\bigl(1 + r_1^p\bigr)\bigr)\,P\bigl(\bigl\|X^x(t-1)\bigr\|_H \le r_1\bigr),$$

giving

$$\frac{1}{T}\int_0^T P\bigl(X^x(t)\in K(r)\bigr)\,dt \ge \bigl(1 - c\,r^{-p}\bigl(1 + r_1^p\bigr)\bigr)\,\frac{1}{T}\int_0^T P\bigl(\bigl\|X^x(t)\bigr\|_H \le r_1\bigr)\,dt.$$
where P x is the conditional probability under the condition X(0) = x. The set C is
called a recurrent region. From now on recurrent means recurrent to a compact set.
Theorem 7.18 Suppose that V → H is compact and the coefficients of (7.45) sat-
isfy the coercivity and the weak monotonicity conditions (6.39) and (6.40). If its
solution {X x (t), t ≥ 0} is ultimately bounded in the m.s.s., then it is weakly recur-
rent.
so that
P x (1 ∩ 2 ) < (1 − δ)2 .
By repeating the above argument, we obtain
n
3
Px i < (1 − δ)n ,
i=1
Lemma 7.5 Let {X(t), t ≥ 0} be a continuous strong Markov process. If there exist a positive Borel-measurable function γ defined on H, a closed set C, and a constant δ > 0 such that

$$\int_{\gamma(x)}^{\gamma(x)+1} P^x\bigl(X(t) \in C\bigr)\,dt \ge \delta \quad\text{for all } x \in H, \tag{7.46}$$
Proof By assumption (7.46), there exists t_x ∈ [γ(x), γ(x)+1) such that

$$P^x\bigl(X(t_x) \in C\bigr) \ge \delta.$$

Define

$$\rho(x) = \inf\bigl\{t \in \bigl[\gamma(x), \gamma(x)+1\bigr] : P^x\bigl(\omega : X(t,\omega) \in C\bigr) \ge \delta\bigr\}.$$

Since the mapping t ↦ X(t) is continuous and the characteristic function of a closed set is upper semicontinuous, we have that the function

$$t \mapsto P^x\bigl(X(t) \in C\bigr)$$
We need to show that the function x → ρ(x) is Borel measurable. Let us define
Bt (H ) = B(H ), for t > 0. Since {X(t), 0 ≤ t ≤ T } is a Feller process, the map
Θ : (t, x) → P x (ω : X(t) ∈ C) from ([0, T ] × H, B([0, T ] × H )) to (R1 , B(R1 ))
is measurable (see [54], [27]). Hence, Θ is a progressively measurable process with
respect to {Bt (H )}. By Corollary 1.6.12 in [16], x → ρ(x) is Borel measurable.
Lemma 7.6 Suppose that the coefficients of (7.45) satisfy the coercivity condi-
tion (6.39) and, in addition, that its solution {X x (t), t ≥ 0} exists and is ultimately
bounded in the m.s.s. Then there exists a positive Borel-measurable function ρ on
H such that
$$P^x\bigl(\omega : X\bigl(\rho(x),\omega\bigr) \in \bar B_r\bigr) \ge 1 - \frac{1}{\alpha r^2}\bigl(|\lambda|M_1 + M_1 + |\gamma|\bigr), \qquad x \in H, \tag{7.48}$$

and

$$P^x\bigl(\omega : X\bigl(\rho(x),\omega\bigr) \in B_r^c\bigr) \le \frac{1}{\alpha r^2}\bigl(|\lambda|M_1 + M_1 + |\gamma|\bigr), \qquad x \in H, \tag{7.49}$$

where α, λ, γ are as in the coercivity condition, and M₁ = M + 1 with M as in the ultimate boundedness condition (7.36).
7.5 Ultimate Boundedness and Weak Recurrence of the Solutions 277
Proof Since lim sup_{t→∞} Eˣ‖X(t)‖²_H ≤ M < M₁ for all x ∈ H, there exist positive numbers {T_x, x ∈ H} such that

$$E^x\bigl\|X(t)\bigr\|_H^2 \le M_1 \quad\text{for } t \ge T_x.$$

It follows that

$$\int_{\gamma(x)}^{\gamma(x)+1} E\bigl\|X(s)\bigr\|_V^2\,ds \le \frac{1}{\alpha}\bigl(|\lambda|M_1 + M_1 + |\gamma|\bigr).$$

Hence,

$$\int_{\gamma(x)}^{\gamma(x)+1} P^x\bigl(\omega : X(t,\omega) \in B_r^c\bigr)\,dt \le \frac{1}{\alpha r^2}\bigl(|\lambda|M_1 + M_1 + |\gamma|\bigr),$$

and consequently,

$$\int_{\gamma(x)}^{\gamma(x)+1} P^x\bigl(\omega : X(t,\omega) \in \bar B_r\bigr)\,dt \ge 1 - \frac{1}{\alpha r^2}\bigl(|\lambda|M_1 + M_1 + |\gamma|\bigr).$$
We now conclude the proof of Theorem 7.18. Using (7.48), we can choose r large enough such that

$$P^x\bigl(\omega : X\bigl(\rho(x),\omega\bigr) \in \bar B_r\bigr) \ge \frac12 \quad\text{for } x \in H.$$
Since the mapping V → H is compact, the set B r is compact in H , giving that X(t)
is weakly recurrent to B r by Lemma 7.4.
Theorem 7.19 Suppose that V → H is compact and the coefficients of (7.45) sat-
isfy the coercivity condition (6.39) and the monotonicity condition (6.40). If its so-
lution {X x (t), t ≥ 0} is exponentially ultimately bounded in the m.s.s., then it is
weakly positively recurrent.
and

$$\sum_{l=1}^\infty \frac{w\bigl((l+1)N\bigr)}{l^2} < \infty \quad\text{for any } N \ge 0. \tag{7.50}$$

Let $K = (1+\Delta)\sqrt{|\lambda|M_1 + M_1 + |\gamma|}\,/\sqrt{\alpha}$, and let us define the sets

$$E_0 = \bar B_K, \qquad E_l = \bar B_{(l+1)K} - \bar B_{lK} = \bar B_{(l+1)K} \cap B_{lK}^c \quad\text{for } l \ge 1,$$

$$A_i = \biggl(\,\bigcap_{j=1}^{i-1}\bigl\{\omega : x_j(\omega) \notin E_0\bigr\}\biggr) \cap \bigl\{\omega : x_i(\omega) \in E_0\bigr\} = \bigl\{\omega : x_1(\omega) \notin E_0, \dots, x_{i-1}(\omega) \notin E_0,\ x_i(\omega) \in E_0\bigr\}.$$
Then Ω differs from $\bigcup_{i=1}^\infty A_i$ by at most a set of Pˣ-measure zero. For i ≥ 2, let us further partition

$$A_i = \bigcup_{j_1,\dots,j_{i-1}} A_i^{j_1,\dots,j_{i-1}},$$

where

$$A_i^{j_1,\dots,j_{i-1}} = \bigl\{\omega : x_1(\omega)\in E_{j_1}, \dots, x_{i-1}(\omega)\in E_{j_{i-1}},\ x_i(\omega)\in E_0\bigr\},$$
and, for $\omega \in A_i^{j_1,\dots,j_{i-1}}$,

$$\tau(\omega) \le \tau_i(\omega) \le \tau_{i-1}(\omega) + \rho\bigl(x_{i-1}(\omega)\bigr).$$
Moreover, for $\omega \in A_i^{j_1,\dots,j_{i-1}}$, we have

$$\bigl\|x_{i-1}(\omega)\bigr\|_H \le \alpha_0\bigl\|x_{i-1}(\omega)\bigr\|_V \le \alpha_0(j_{i-1}+1)K,$$

giving

$$\rho\bigl(x_{i-1}(\omega)\bigr) \le w\bigl(\bigl\|x_{i-1}(\omega)\bigr\|_H + 1\bigr) \le w\bigl(\alpha_0(j_{i-1}+1)K + 1\bigr) = w'\bigl((j_{i-1}+1)\bigr)$$

and

$$\tau(\omega) \le \tau_{i-1} + w'\bigl((j_{i-1}+1)\bigr).$$

Using induction,

$$\tau(\omega) \le w\bigl(\|x\|_H + 1\bigr) + w'\bigl((j_1+1)\bigr) + \cdots + w'\bigl((j_{i-1}+1)\bigr).$$
Hence,

$$P^x\bigl(A_i^{j_1,\dots,j_{i-1}}\bigr) \le \frac{1}{j_{i-1}^2(1+\Delta)^2}\,P^x\bigl(\omega : x_1(\omega)\in E_{j_1},\dots,x_{i-2}(\omega)\in E_{j_{i-2}}\bigr).$$

By induction,

$$P^x\bigl(A_i^{j_1,\dots,j_{i-1}}\bigr) \le \frac{1}{(1+\Delta)^{2(i-1)}}\,\frac{1}{j_1^2\cdots j_{i-1}^2},$$
Now

$$E^x(\tau) \le \sum_{i,j_1,\dots,j_{i-1}\ge 1} P^x\bigl(A_i^{j_1,\dots,j_{i-1}}\bigr)\bigl[w\bigl(\|x\|_H+1\bigr) + w'(j_1+1) + \cdots + w'(j_{i-1}+1)\bigr]$$
$$\le w\bigl(\|x\|_H+1\bigr) + \sum_{i=2}^\infty \frac{1}{(1+\Delta)^{2(i-1)}}\sum_{j_1,\dots,j_{i-1}\ge 1}\frac{w(\|x\|_H+1) + w'(j_1+1) + \cdots + w'(j_{i-1}+1)}{j_1^2\cdots j_{i-1}^2}$$
$$= w\bigl(\|x\|_H+1\bigr) + \sum_{i=2}^\infty\frac{1}{(1+\Delta)^{2(i-1)}}\Biggl[w\bigl(\|x\|_H+1\bigr)\sum_{j_1,\dots,j_{i-1}\ge 1}\frac{1}{j_1^2\cdots j_{i-1}^2} + (i-1)\sum_{j_1,\dots,j_{i-1}\ge 1}\frac{w'(j_1+1)}{j_1^2\cdots j_{i-1}^2}\Biggr]$$
$$= w\bigl(\|x\|_H+1\bigr)\Biggl(1 + \sum_{i=2}^\infty\biggl(\frac{A}{(1+\Delta)^2}\biggr)^{i-1}\Biggr) + \frac{B}{(1+\Delta)^2}\sum_{i=2}^\infty\biggl(\frac{A}{(1+\Delta)^2}\biggr)^{i-2}(i-1),$$

where $A = \sum_{l=1}^\infty \frac{1}{l^2}$ and $B = \sum_{l=1}^\infty \frac{1}{l^2}\,w'(l+1)$, with both series converging due to (7.50).
Consequently, E x (τ ) is finite for Δ large enough. The set E0 is compact since
the embedding V → H is compact.
We have given precise conditions, using a Lyapunov function, for exponential ultimate boundedness in the m.s.s. We can thus obtain sufficient conditions for weak (positive) recurrence of the solutions in terms of a Lyapunov function.
We close with important examples of stochastic reaction–diffusion equations. Let O ⊂ Rⁿ be a bounded domain with smooth boundary ∂O, and let p be a positive integer. Let V = W^{1,2}(O) and H = W^{0,2}(O) = L²(O). We know that V → H is a compact embedding. Let

$$A_0(x) = \sum_{|\alpha|\le 2p} a_\alpha(x)\,\frac{\partial^{\alpha_1}}{\partial x_1^{\alpha_1}}\cdots\frac{\partial^{\alpha_n}}{\partial x_n^{\alpha_n}},$$
(3) ‖f(h₁) − f(h₂)‖²_H + tr((B(h₁) − B(h₂))Q(B(h₁) − B(h₂))*) ≤ λ‖h₁ − h₂‖²_H for h₁, h₂ ∈ H.
If the solution to the equation
$$du(t,x) = A_0u(t,x)\,dt$$
$$\mathscr{L}\Lambda(v) \le \bigl(-2\alpha^2 + \sigma_1^2 + 2\gamma + \sigma_2^2 + \varepsilon\bigr)\|v\|_H^2 + \frac{1}{\varepsilon}\|g\|_H^2.$$
Hence, if −2α² + σ₁² + 2γ + σ₂² < 0, then the strong variational solution of (7.53) is exponentially ultimately bounded by Theorem 7.5, and hence it is weakly positively recurrent.
Exercise 7.9 Let f ∈ W₀^{1,2}((a,b)). Prove the Poincaré inequality

$$\int_a^b f^2(x)\,dx \le (b-a)^2\int_a^b \biggl(\frac{df(x)}{dx}\biggr)^2 dx.$$
References
1. S. Agmon. Lectures on Elliptic Boundary Value Problems, Mathematical Studies No. 2, Van
Nostrand, Princeton (1965).
2. S. Albeverio and R. Hoegh-Krohn. Homogeneous random fields and statistical mechanics,
J. Funct. Anal. 19, 242–272 (1975).
3. S. Albeverio, Yu. G. Kondratiev, M. Röckner, and T. V. Tsikalenko. Glauber dynamics for
quantum lattice systems, Rev. Math. Phys. 13 No. 1, 51–124 (2001).
4. P. Billingsley. Convergence of Probability Measures, Wiley, New York (1968).
5. P. Billingsley. Probability and Measure, Wiley, New York (1979).
6. P.L. Butzer and H. Berens. Semi-Groups of Operators and Approximation, Springer, New
York (1967).
7. J. R. Cannon. The One-Dimensional Heat Equation, Encyclopedia of Mathematics and Its
Applications 23, Addison–Wesley, Reading (1984).
8. S. Cerrai. Second Order PDE’s in Finite and Infinite Dimension, Lecture Notes in Mathematics
1762, Springer, Berlin (2001).
9. A. Chojnowska-Michalik. Stochastic Differential Equations in Hilbert Space, Banach Center
Publications 5, PWN–Polish Scientific Publishers, Warsaw (1979).
10. G. Da Prato, S. Kwapien, and J. Zabczyk. Regularity of solutions of linear stochastic equations
in Hilbert spaces, Stochastics 23, 1–23 (1987).
11. G. Da Prato and J. Zabczyk. Stochastic Equations in Infinite Dimensions, Encyclopedia of
Mathematics and its Applications 44, Cambridge University Press, Cambridge (1992).
12. G. Da Prato and J. Zabczyk. Ergodicity for Infinite Dimensional Systems, London Mathemat-
ical Society Lecture Note Series 229, Cambridge University Press, Cambridge (1996).
13. R. Datko. Extending a theorem of A. M. Liapunov to Hilbert space, J. Math. Anal. Appl. 32,
610–616 (1970).
14. J. Diestel and J.J. Uhl. Vector Measures, Mathematical Surveys 15, AMS, Providence (1977).
15. J. Dieudonné. Treatise on Analysis, Academic Press, New York (1969).
16. R. J. Elliott. Stochastic Calculus and Applications, Springer, New York (1982).
17. S. N. Ethier and T. G. Kurtz. Markov Processes: Characterization and Convergence. Wiley,
New York (1986).
18. B. Gaveau. Intégrale stochastique radonifiante, C.R. Acad. Sci. Paris Ser. A 276, 617–620
(1973).
19. L. Gawarecki. Extension of a stochastic integral with respect to cylindrical martingales, Stat.
Probab. Lett. 34, 103–111 (1997).
20. L. Gawarecki and V. Mandrekar. Stochastic differential equations with discontinuous drift in
Hilbert space with applications to interacting particle systems, J. Math. Sci. 105, No. 6, 2550–
2554 (2001). Proceedings of the Seminar on Stability Problems for Stochastic Models, Part I
(Naleczow, 1999).
21. L. Gawarecki and V. Mandrekar. Weak solutions to stochastic differential equations with dis-
continuous drift in Hilbert space. In: Stochastic Processes, Physics and Geometry; New In-
terplays, II (Leipzig, 1999), CMS Conf. Proc. 29, Amer. Math. Soc., Providence, 199–205
(2000).
22. L. Gawarecki, V. Mandrekar, and P. Richard. Existence of weak solutions for stochastic differ-
ential equations and martingale solutions for stochastic semilinear equations, Random Oper.
Stoch. Equ. 7, No. 3, 215–240 (1999).
23. L. Gawarecki, V. Mandrekar, and B. Rajeev. Linear stochastic differential equations in the
dual to a multi-Hilbertian space, Theory Stoch. Process. 14, No. 2, 28–34 (2008).
24. L. Gawarecki, V. Mandrekar, and B. Rajeev. The monotonicity inequality for linear stochastic
partial differential equations, Infin. Dimens. Anal. Quantum Probab. Relat. Top. 12, No. 4,
1–17 (2009).
25. I.I. Gikhman and A.V. Skorokhod. The Theory of Stochastic Processes, Springer, Berlin
(1974).
26. A.N. Godunov. On Peano’s theorem in Banach spaces, Funct. Anal. Appl. 9, 53–55 (1975).
27. K. Gowrisankaran. Measurability of functions in product spaces, Proc. Am. Math. Soc. 31,
No. 2, 485–488 (1972).
28. M. Hairer. Ergodic properties of a class of non-Markovian processes. In: Trends in Stochastic
Analysis, London Mathematical Society Lecture Note Series 353. Ed. J. Blath et al. 65–98
(2009).
29. E. Hille. Lectures on Ordinary Differential Equations, Addison–Wesley, Reading (1969).
30. F. Hirsch and G. Lacombe. Elements of Functional Analysis, Graduate Texts in Mathematics
192, Springer, New York (1999).
31. H. Holden, B. Øksendal, J. Ubøe, and T. Zhang. Stochastic Partial Differential Equations:
A Modeling, White Noise Functional Approach, Birkhäuser, Boston (1996).
32. A. Ichikawa. Stability of semilinear stochastic evolution equations, J. Math. Anal. Appl. 90,
12–44 (1982).
33. A. Ichikawa. Semilinear stochastic evolution equations: boundedness, stability and invariant
measures, Stochastics 12, 1–39 (1984).
34. A. Ichikawa. Some inequalities for martingales and stochastic convolutions, Stoch. Anal. Appl.
4, 329–339 (1986).
35. K. Itô. Foundations of Stochastic Differential Equations in Infinite Dimensional Spaces,
CBMS–NSF Regional Conference Series in Applied Mathematics 47, SIAM, Philadelphia (1984).
36. G. Kallianpur, I. Mitoma, and R. L. Wolpert. Diffusion equations in dual of nuclear spaces,
Stoch. Stoch. Rep. 29, 285–329 (1990).
37. G. Kallianpur and J. Xiong. Stochastic differential equations in infinite dimensions: a brief
survey and some new directions of research. In: Multivariate Analysis: Future Directions,
North-Holland Ser. Statist. Probab. 5, North-Holland, Amsterdam, 267–277 (1993).
38. I. Karatzas and S. E. Shreve. Brownian Motion and Stochastic Calculus, Graduate Texts in
Mathematics, Springer, New York (1991).
39. R. Khasminskii. Stochastic Stability of Differential Equations, Sijthoff and Noordhoff, Alphen
aan den Rijn (1980).
40. R. Khasminskii and V. Mandrekar. On stability of solutions of stochastic evolution equations.
In: The Dynkin Festschrift, Progr. Probab. Ed. M. Freidlin, Birkhäuser, Boston, 185–197
(1994).
41. H. König. Eigenvalue Distribution of Compact Operators, Birkhäuser, Boston (1986).
42. N. V. Krylov and B. L. Rozovskii. Stochastic evolution equations, J. Sov. Math. 16, 1233–1277
(1981).
43. K. Kuratowski and C. Ryll-Nardzewski. A general theorem on selectors, Bull. Acad. Pol. Sci.
13, 349–403 (1965).
44. S. Lang. Analysis II, Addison–Wesley, Reading (1969).
45. M. Ledoux and M. Talagrand. Probability in Banach Spaces, Springer, Berlin (1991).
46. G. Leha and G. Ritter. On diffusion processes and their semigroups in Hilbert spaces with an
application to interacting stochastic systems, Ann. Probab. 12, No. 4, 1077–1112 (1984).
47. G. Leha and G. Ritter. On solutions to stochastic differential equations with discontinuous
drift in Hilbert space, Math. Ann. 270, 109–123 (1985).
48. J. L. Lions. Équations Différentielles Opérationnelles et Problèmes aux Limites, Springer,
Berlin (1961).
49. R. S. Liptzer and A. N. Shiryaev. Statistics of Stochastic Processes, Nauka, Moscow (1974).
50. K. Liu. Stability of Infinite Dimensional Stochastic Differential Equations with Applica-
tions, Chapman & Hall/CRC Monographs and Surveys in Pure and Applied Mathematics
135 (2006).
51. R. Liu. Ultimate boundedness and weak recurrence of stochastic evolution equations, Stoch.
Anal. Appl. 17, 815–833 (1999).
52. R. Liu and V. Mandrekar. Ultimate boundedness and invariant measures of stochastic evolu-
tion equations, Stoch. Stoch. Rep. 56, No. 1–2, 75–101 (1996).
53. R. Liu and V. Mandrekar. Stochastic semilinear evolution equations: Lyapunov function, sta-
bility, and ultimate boundedness, J. Math. Anal. Appl. 212, No. 2, 537–553 (1997).
54. G.W. Mackey. A theorem of Stone and von Neumann, Duke Math. J. 16, No. 2, 313–326
(1949).
55. V. Mandrekar. On Lyapunov stability theorems for stochastic (deterministic) evolution equa-
tions. In: Stochastic Analysis and Applications in Physics, NATO Adv. Sci. Inst. Ser. C Math.
Phys. Sci. 449, Kluwer, Dordrecht, 219–237 (1994).
56. M. Metivier. Stochastic Partial Differential Equations in Infinite Dimensional Spaces, Scuola
Normale Superiore, Quaderni, Pisa (1988).
57. M. Metivier and J. Pellaumail. Stochastic Integration, Academic Press, New York (1980).
58. M. Metivier and M. Viot. On weak solutions of stochastic partial differential equations. In:
Stochastic Analysis, LNM 1322. Ed. M. Metivier, S. Watanabe, Springer, Berlin, 139–150
(1988).
59. Y. Miyahara. Ultimate boundedness of the system governed by stochastic differential equa-
tions, Nagoya Math. J. 47, 111–144 (1972).
60. E. Nelson. Probability Theory and Euclidean Field Theory, Lecture Notes in Phys. 25, Springer,
Berlin, 94–124 (1973).
61. B. Øksendal. Stochastic Differential Equations, Springer, New York (1998).
62. E. Pardoux. Stochastic partial differential equations and filtering of diffusion processes,
Stochastics 3, 127–167 (1979).
63. A. Pazy. Semigroups of Linear Operators and Applications to Partial Differential Equations,
Applied Mathematical Sciences, 44, Springer, New York (1983).
64. C. Prévôt and M. Röckner. A Concise Course on Stochastic Partial Differential Equations,
LNM 1905, Springer, Berlin (2007).
65. Yu. V. Prokhorov. Convergence of random processes and limit theorems in probability theory,
Theory Probab. Appl. 1, 157–214 (1956).
66. B. L. Rozovskii. Stochastic Evolution Systems: Linear Theory and Applications to Non-Linear
Filtering, Kluwer, Boston (1983).
67. M. Röckner, B. Schmuland, and X. Zhang. Yamada–Watanabe theorem for stochastic evolu-
tion equations in infinite dimensions, Condens. Matter Phys. 11, No. 2(54), 247–259 (2008).
68. R. S. Schatten. Norm Ideals of Continuous Operators, Springer, New York (1970).
69. A.V. Skorokhod. Personal communication.
70. D.W. Stroock and S.R.S. Varadhan. Multidimensional Diffusion Processes, Springer, New
York (1979).
71. H. Tanabe. Equations of Evolution, Pitman, London (1979).
72. L. Tubaro. An estimate of Burkholder type for stochastic processes defined by the stochastic
integral, Stoch. Anal. Appl. 62, 187–192 (1984).
73. R. Wheeden, A. Zygmund. Measure and Integral, Marcel Dekker, New York (1977).
74. N. N. Vakhania, V. I. Tarieladze, and S. A. Chobanyan. Probability Distributions on Banach
Spaces, Mathematics and Its Applications (Soviet Series) 14, Reidel, Dordrecht (1987).
75. M. Viot. Solutions faibles d'équations aux dérivées partielles non linéaires, Thèse, Université
Pierre et Marie Curie, Paris (1976).
Index

A
Abstract Cauchy problem, 4, 11

B
Burkholder inequality, 87

C
Chapman–Kolmogorov equation, 256
Covariance
    of a Gaussian random variable, 18
    of Gaussian stochastic convolution, 259
Cylindrical Wiener process, 19

D
Datko's theorem
    stochastic analogue of, 230
Dissipative system, 237
Dissipativity condition, 237
Doob's Maximal Inequalities, 22

E
Eigenvalues, 18
Eigenvectors, 18
Energy equality, 208

F
Factorization formula, 89
Factorization technique, 87
Feller property, 110

G
Gaussian measure with covariance Q, 19
Gaussian random variable, 17
    cylindrical standard, 17
    with covariance Q, 18
Gaussian semigroup, 4
Gibbs measure, 194

H
Heat equation, 3
    in R^d, 13
    one-dimensional, 3
    stochastic, 248
Hille–Yosida theorem, 8

I
Increasing process, 22
    of Q–Wiener process, 23
Infinitesimal generator, 6
Invariant probability measure, 256
    the linear case, 260
Itô's formula
    the case of cylindrical Wiener process, 69
    the case of Q–Wiener process, 61
    the variational case, 226

K
Kolmogorov's backward equation, 118

L
Lebesgue's DCT
    generalized, 100
Lévy's theorem, 51
Linear equation with additive noise, 259
Lions's lemma, 15
Lions's theorem, 15
Lyapunov function, 206, 216

M
Markov
    process, 107
    property, 107
    transition function, 256
    transition probabilities, 255
Martingale, 21
    square integrable, 22
Martingale Representation Theorem I, 51
Martingale Representation Theorem II, 53
Maxwell probability measure, 260
Monotonicity, 173
    weak, 173

N
Navier–Stokes equation, 258
Norm
    graph, 7
    Hilbert–Schmidt, 24
    trace, 18

O
Operator
    adjoint, 5
    coercive, 246
    Hilbert–Schmidt, 24
    non-negative definite, 5
    symmetric, 5
    trace-class, 18

P
Poincaré inequality, 283
Probability space, 17
    complete, 17
    filtered, 19
Process, 19
    elementary, 25
        bounded, 25
    Feller, 110
    Gaussian, 19
    Ornstein–Uhlenbeck, 259
    progressively measurable, 38
    weakly positive recurrent, 278
Prokhorov's theorem, 148

Q
Q–Wiener process, 20
    continuity, 20
    properties, 21
Quadratic variation process, 22
    existence and uniqueness, 22
    of Q–Wiener process, 23

R
Reaction–Diffusion Equation, 282
Recurrent region, 274
Resolvent, 8
    set, 8

S
Semigroup, 5
    adjoint, 7
    C₀, 5
    compact, 6
    differentiable, 6
    exponentially stable, 260
    Feller, 110
    of contractions, 6
    pseudo-contraction, 6
    semigroup property, 5
    strong continuity property, 5
    strongly continuous, 5
    uniformly bounded, 6
    uniformly continuous, 6
Sequence
    stochastically bounded, 129
Sobolev space, 11
Solution, 74
    classical, 11
    martingale, 75
    mild, 12, 75
    mild integral, 74
    strong, 74
    weak, 75
    weakened, 74
Spin system, 193
    lattice, 194
Stability, 203
    deterministic condition for, 204
    exponential in the m.s.s., 213
        of the mild solution, 213
        of the variational solution, 227
    exponential stability of the solution of the Cauchy problem, 203
    in probability of the zero solution, 223
    of a solution to a linear equation with additive noise, 259
Stochastic convolution, 76
    Gaussian, 259
Stochastic differential equation
    of evolution type, 247
    semilinear, 73
    variational, 152
Stochastic Fubini Theorem, 57
Stochastic integral
    with respect to a Q–Wiener process
        of a process in P(K_Q, H), 42
        of a process in Λ2(K_Q, H), 34
    with respect to a standard cylindrical Wiener process, 45
Stochastic process, 260
    ultimately bounded in the m.s.s., 260
    weakly recurrent, 274

T
Theorem
    about convolution, 94
Tight family of measures, 148
Trace, 18

U
Uniqueness, 173

V
Variational method, 15
Variational solution, 152
    strong, 152
    weak, 152

Y
Yosida approximation, 9