Stochastic Processes
Manan Rawat
School of Mathematics
Contents
1 Brownian Motion
2 Poisson Process
3 Lévy Processes
4 Itô Integral
References
Abstract
Through this summer project, we aim to study stochastic processes. We first study two examples of Lévy processes, Brownian motion and the Poisson process, and then generalize the results to study Lévy processes. We conclude the project by studying the Itô integral.
We study Brownian motion mainly from Mörters and Peres (2003), along with Klenke (2008): the construction, continuity and differentiability properties of Brownian motion, the Cameron-Martin theorem, the martingale and Markov properties, and filtrations.
We then study the Poisson process from Billingsley (1986), Taylor and Karlin (1984) and Klenke (2008): random measures, the equivalent conditions for Poisson processes, the law of rare events and the Poisson point process, concluding with multi-dimensional Poisson processes.
We study Lévy processes from Applebaum (2004): the jumps of a Lévy process, Poisson integration and examples of Lévy processes.
Finally, we study the Itô integral, its properties and extensions, and the Itô formula.
Chapter 1
Brownian Motion
Before we study Brownian motion, we must familiarize ourselves with some basic definitions and topics on the subject.
1.1 Basic Definitions
A function f : X → R is said to be measurable if, for every a ∈ R, the set {x ∈ X | f(x) < a} is measurable.
A probability measure P on a sample space (S, S′) is a real-valued function defined on the σ-algebra S′ satisfying:
• P(A) ≥ 0 for every A ∈ S′;
• P(S) = 1;
• P is countably additive over sequences of disjoint sets.
A probability space is a triple (Ω, F, P), where Ω is a sample space, F is a σ-algebra of subsets of Ω and P is a probability measure on F.
A random vector X = (X1 ,X2 ,X3 .....Xd )T with values in Rd has the d-dimensional standard
Gaussian distribution if its d coordinates are standard normally distributed and independent.
Let X1, X2, X3, ... be a sequence of random variables on a probability space (Ω, F, P). The sequence is called exchangeable if (X1, X2, ...) and (Xσ(1), Xσ(2), ...) have the same distribution for all finite permutations σ : N → N. Here a finite permutation means that σ is a bijection with σ(n) = n for all but finitely many n.
We say that a process {X(t) : t ≥ 0} has a property almost surely if the set of sample paths having that property has probability one.
The variation of a function over [0, t] is the supremum of Σ_{i=1}^{k} |X(t_i) − X(t_{i−1})|, where the supremum is over all k ∈ N and partitions 0 = t0 ≤ t1 ≤ t2 ≤ ... ≤ t_{k−1} ≤ t_k = t. If the supremum is finite, the function is said to be of bounded variation.
1.2 Paul Lévy's Construction of Brownian Motion
Let (X, A) be a measurable space. Two non-zero measures µ and ν on this space are said to be singular, written µ ⊥ ν, if there exists a Borel set A with µ(A) = 0 and ν(Aᶜ) = 0.
We say that they are equivalent if they are mutually absolutely continuous, i.e. if µ ≪ ν and ν ≪ µ.
A function f : [0, ∞) → R is said to be locally α-Hölder continuous at x ≥ 0 if there exist ϵ > 0 and c > 0 such that |f(y) − f(x)| ≤ c|y − x|^α for all y ≥ 0 with |y − x| < ϵ. We refer to α > 0 as the Hölder exponent and to c > 0 as the Hölder constant.
A probability space together with a filtration is called a filtered probability space.
A stochastic process {X(t) : t ≥ 0} defined on a filtered probability space with filtration (F(t) : t ≥ 0) is called adapted if X(t) is F(t)-measurable for every t ≥ 0.
A random variable T with values in [0, ∞], defined on a probability space with filtration (F(t) : t ≥ 0), is called a stopping time if {T ≤ t} ∈ F(t) for every t ≥ 0.
A real-valued stochastic process {B(t) : t ≥ 0} is called a (linear) Brownian motion started in x ∈ R if:
• B(0) = x;
• the process has independent increments, i.e. for all times 0 ≤ t1 ≤ t2 ≤ ... ≤ tn, the increments B(tn) − B(tn−1), B(tn−1) − B(tn−2), ..., B(t2) − B(t1) are independent random variables;
• for all t ≥ 0 and h > 0, the increment B(t + h) − B(t) is normally distributed with expectation zero and variance h;
• almost surely, the function t → B(t) is continuous.
If x = 0 we call it a standard Brownian motion.
X is a Brownian motion if and only if X = (Xt)t∈[0,∞) is a continuous centred Gaussian process with Cov(Xs, Xt) = s ∧ t.
We construct Brownian motion on the interval [0, 1] as a random element of the space C[0, 1] of continuous functions. Let Dn = {k2⁻ⁿ : 0 ≤ k ≤ 2ⁿ} and D = ∪n Dn be the set of dyadic points in [0, 1], and let (Zt : t ∈ D) be a collection of independent standard normal random variables on a probability space on which they can all be defined. Let B(0) := 0 and B(1) := Z1. For each n ∈ N we define the random variables B(d), d ∈ Dn, such that
1) for all r < s < t in Dn the random variable B(t) − B(s) is normally distributed with mean zero and variance t − s, and is independent of B(s) − B(r);
2) the vectors (B(d) : d ∈ Dn) and (Zt : t ∈ D \ Dn) are independent.
B(d) = [B(d − 2⁻ⁿ) + B(d + 2⁻ⁿ)]/2 + Zd/2^((n+1)/2)
We define F0(t) to be:
• Z1 for t = 1,
• 0 for t = 0,
• linear in between;
and, for n ≥ 1, Fn(t) to be:
• Zt/2^((n+1)/2) for t ∈ Dn \ Dn−1,
• 0 for t ∈ Dn−1,
• linear between consecutive points in Dn.
These functions are continuous on [0, 1], and for all n and d ∈ Dn we have B(d) = Σi Fi(d), where only finitely many terms are non-zero. One shows that, almost surely, the series Σn Fn converges uniformly on [0, 1], so that B is continuous. We have thus constructed a continuous function B : [0, 1] → R with the same finite-dimensional distributions as Brownian motion. To extend the construction to [0, ∞), take a sequence B0, B1, B2, ... of independent random variables with the distribution of this process and define {B(t) : t ≥ 0} by gluing: B(t) = B_⌊t⌋(t − ⌊t⌋) + Σ_{i=0}^{⌊t⌋−1} Bi(1). This defines a continuous random function B : [0, ∞) → R, and completes Paul Lévy's construction of Brownian motion.
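The dyadic scheme above is straightforward to simulate; below is a minimal sketch (the function name, the resolution parameter and the empirical variance check are ours, not from the text):

```python
import random

def levy_brownian(levels, rng):
    """Levy's construction of Brownian motion on [0, 1].

    Returns B evaluated on the dyadic grid D_levels = {k 2^-levels}.
    At level n the value at a new midpoint d is the average of its two
    dyadic neighbours plus an independent N(0,1) variable scaled by
    2^(-(n+1)/2), exactly as in the displayed formula for B(d).
    """
    B = {0.0: 0.0, 1.0: rng.gauss(0.0, 1.0)}   # B(0) = 0, B(1) = Z_1
    for n in range(1, levels + 1):
        step = 2.0 ** (-n)
        for k in range(1, 2 ** n, 2):          # new points of D_n \ D_{n-1}
            d = k * step
            mid = 0.5 * (B[d - step] + B[d + step])
            B[d] = mid + rng.gauss(0.0, 1.0) / (2.0 ** ((n + 1) / 2.0))
    return B

B = levy_brownian(10, random.Random(0))
# B(t) should be N(0, t); check the variance at t = 0.5 over many runs
samples = [levy_brownian(6, random.Random(i))[0.5] for i in range(4000)]
var = sum(x * x for x in samples) / len(samples)
print(var)  # close to 0.5
```

The dictionary keys stay exact because dyadic rationals are represented exactly in binary floating point, so each midpoint finds its two neighbours from the previous level.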
1.3 Properties of Brownian Motion
Scaling invariance. Suppose {B(t) : t ≥ 0} is a standard Brownian motion and let a > 0. Then the process {X(t) : t ≥ 0} defined by X(t) = B(a²t)/a is also a standard Brownian motion.
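Scaling invariance can be illustrated numerically: the variance of X(t) = B(a²t)/a at a fixed time t should again be t. A Monte Carlo sketch (all parameter values are our own choices):

```python
import random

def brownian_at(t, steps, rng):
    """Value of a Brownian path at time t, as a sum of independent
    Gaussian increments over `steps` equal subintervals."""
    dt = t / steps
    return sum(rng.gauss(0.0, dt ** 0.5) for _ in range(steps))

rng = random.Random(1)
a, t = 2.0, 0.7
# X(t) = B(a^2 t)/a should be distributed as N(0, t)
xs = [brownian_at(a * a * t, 50, rng) / a for _ in range(20000)]
var = sum(x * x for x in xs) / len(xs)
print(var)  # close to t = 0.7
```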
Time inversion. Suppose {B(t) : t ≥ 0} is a standard Brownian motion. Then the process {X(t) : t ≥ 0} defined by
• X(t) = 0 for t = 0,
• X(t) = tB(1/t) for t > 0
is also a standard Brownian motion.
Almost surely,
lim_{t→∞} B(t)/t = 0.
1. As in the definition of Brownian motion, the sample function is continuous almost surely. On any compact interval the sample functions are uniformly continuous, i.e. there exists some (random) function ϕ with lim_{h↓0} ϕ(h) = 0, called a modulus of continuity of the function B : [0, 1] → R, such that
limsup_{h↓0} sup_{0≤t≤1−h} |B(t + h) − B(t)|/ϕ(h) ≤ 1.
2. There exists a constant C > 0 such that, almost surely, for every sufficiently small h > 0 and all 0 ≤ t ≤ 1 − h, we have |B(t + h) − B(t)| ≤ C√(h log(1/h)). In fact (Lévy's modulus of continuity),
limsup_{h↓0} sup_{0≤t≤1−h} |B(t + h) − B(t)|/√(2h log(1/h)) = 1.
5. For any fixed m and c > √2, almost surely, there exists n0 ∈ N such that, for any n ≥ n0,
|B(t) − B(s)| ≤ c√((t − s) log(1/(t − s))) for all [s, t] ∈ Λm(n), where
Λm(n) = { [(k − 1 + b)/2^(n−a), (k + b)/2^(n−a)] : k ∈ {0, 1, ..., 2ⁿ}, a, b ∈ {0, 1/m, 2/m, ..., (m − 1)/m} }.
6. Given ϵ > 0, there exists m ∈ N such that for every interval [s, t] ⊂ [0, 1] there exists an interval [s′, t′] ∈ Λ(m) with |t − t′| < ϵ(t − s) and |s − s′| < ϵ(t − s), where Λ(m) = ∪n Λm(n).
7. (α-Hölder property) For every α < 1/2, almost surely, Brownian motion is everywhere locally α-Hölder continuous; i.e. for each x there exist c, ϵ > 0 such that |B(y) − B(x)| ≤ c|y − x|^α whenever |y − x| < ϵ.
1. Almost surely, for all 0 < a < b < ∞, Brownian motion is not monotone on the interval
[a,b].
Almost surely,
limsup_{n→∞} B(n)/√n = +∞ and liminf_{n→∞} B(n)/√n = −∞.
For a function f, define the upper and lower right derivatives
D*f(t) = limsup_{h↓0} [f(t + h) − f(t)]/h,
D_*f(t) = liminf_{h↓0} [f(t + h) − f(t)]/h.
Fix t ≥ 0. Then, almost surely, Brownian motion is not differentiable at t; moreover, D*B(t) = +∞ and D_*B(t) = −∞.
3. Almost surely, Brownian motion is nowhere differentiable. Furthermore, almost surely, for all t, either D*B(t) = +∞ or D_*B(t) = −∞.
Suppose a sequence of partitions 0 = t_0^(n) ≤ t_1^(n) ≤ ... ≤ t_{k(n)}^(n) = t is nested, i.e. at each step one or more partition points are added, and the mesh
∆(n) := sup_{1≤j≤k(n)} (t_j^(n) − t_{j−1}^(n))
converges to zero,
and therefore Brownian motion is of unbounded variation. For a sequence of partitions as above, we call
lim_{n→∞} Σ_{j=1}^{k(n)} (B(t_j^(n)) − B(t_{j−1}^(n)))²
the quadratic variation of Brownian motion. Brownian motion has finite quadratic variation; in fact, for nested partitions with mesh going to zero, this limit is almost surely equal to t.
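The contrast between unbounded variation and finite quadratic variation is visible already in a crude simulation over a fine grid (the grid size and seed are our own choices):

```python
import random

rng = random.Random(2)
n = 2 ** 14                   # number of grid steps on [0, 1]
dt = 1.0 / n
incs = [rng.gauss(0.0, dt ** 0.5) for _ in range(n)]

variation = sum(abs(d) for d in incs)   # grows like sqrt(n): unbounded
quad_var = sum(d * d for d in incs)     # converges to t = 1

print(variation)  # large, roughly sqrt(2 n / pi)
print(quad_var)   # close to 1
```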
The process {Bµ(t) : t ≥ 0} is called Brownian motion with drift µ if it satisfies the following conditions:
• Bµ (0) = 0. a.s.
• Bµ (t) ∼ N (µt,t).
where µ ∈ R. Thus we can write Brownian Motion with a drift µ in the form
Bµ (t) = µt + B(t)
where B(t) is a standard Brownian motion.
1.4 The Cameron-Martin Theorem
For which time-dependent drift functions F does the process {B(t) + F(t) : t ≥ 0} behave like a Brownian motion path? Let us denote the law of standard Brownian motion {B(t) : t ∈ [0, 1]} by L0. For a function F : [0, 1] → R, write LF for the law of {B(t) + F(t) : t ∈ [0, 1]}. The Dirichlet space D[0, 1] consists of those functions F with F(t) = ∫₀ᵗ f(s) ds for some f ∈ L²[0, 1]. For a function F ∈ D[0, 1] the associated f is uniquely determined as an element of L²[0, 1], and the Cameron-Martin theorem states:
1. If F ∉ D[0, 1], then LF ⊥ L0.
2. If F ∈ D[0, 1], then LF and L0 are equivalent.
As a result of the theorem, any almost sure property of a Brownian motion B also holds almost surely for B + F whenever F ∈ D[0, 1].
1.5 Markov Property and Filtration of Brownian Motion
Suppose that {B(t) : t ≥ 0} is a Brownian motion started in x ∈ Rᵈ. Let s > 0. Then the process {B(t + s) − B(s) : t ≥ 0} is again a Brownian motion started in the origin, and it is independent of the process {B(t) : 0 ≤ t ≤ s}.
If B1, B2, ..., Bd are independent linear Brownian motions started in x1, x2, ..., xd, then the stochastic process {B(t) : t ≥ 0} given by B(t) = (B1(t), B2(t), ..., Bd(t))ᵀ is called a d-dimensional Brownian motion started in (x1, x2, ..., xd)ᵀ.
One-dimensional Brownian motion is also called linear, and two-dimensional Brownian motion is called planar Brownian motion.
F 0 (t) = σ(B(s) : 0 ≤ s ≤ t)
be the σ algebra generated by the random variables B(s), for 0 ≤ s ≤ t. Observe that this
σ-algebra contains all information available from observing the process up to time t. Let us
define
F⁺(s) = ∩_{t>s} F⁰(t).
Note that F⁰(s) ⊂ F⁺(s).
2. Theorem Suppose that {B(t) : t ≥ 0} is a linear Brownian motion. Define τ = inf{t > 0 : B(t) > 0} and σ = inf{t > 0 : B(t) = 0}. Then
P0{τ = 0} = P0{σ = 0} = 1.
3. Zero-one law for tail events. Define the tail σ-algebra of a Brownian motion as follows: let G(t) = σ(B(s) : s ≥ t), and let T = ∩_{t≥0} G(t) be the σ-algebra of all tail events. Then every event in T has probability zero or one.
• the set of times where the local maxima are attained is countable and dense ;
1.6 The Strong Markov Property and the Reflection Principle
The Markov property states that Brownian motion is started anew at each deterministic time
instance. It is a crucial property of Brownian motion that this holds also for an important
class of random times. These random times are called stopping times. The random time T
is a stopping time if we can decide whether {T ≤ t} holds by just knowing the path of the stochastic process up to time t.
• Every deterministic time t ≥ 0 is a stopping time with respect to every filtration (F(t) :
t ≥ 0).
• If (Tn : n ∈ N) is an increasing sequence of stopping times with respect to (F(t) : t ≥ 0) and Tn ↑ T, then T is also a stopping time with respect to (F(t) : t ≥ 0). This is so because
{T ≤ t} = ∩_{n=1}^{∞} {Tn ≤ t} ∈ F(t).
• Every stopping time T with respect to (F⁰(t) : t ≥ 0) is also a stopping time with respect to (F⁺(t) : t ≥ 0).
• Suppose H is a closed set, for example a singleton. Then the first hitting time T = inf{t ≥ 0 : B(t) ∈ H} of the set H is a stopping time with respect to (F⁰(t) : t ≥ 0). Indeed we note that
{T ≤ t} = ∩_{n=1}^{∞} ∪_{s∈Q∩(0,t)} ∪_{x∈Qᵈ∩H} {B(s) ∈ B(x, 1/n)} ∈ F⁰(t).
• Suppose G is an open set. Then the first hitting time
T = inf{t ≥ 0 : B(t) ∈ G}
is a stopping time with respect to the filtration (F⁺(t) : t ≥ 0), but not necessarily with respect to (F⁰(t) : t ≥ 0).
Note that the property which distinguishes (F⁺(t) : t ≥ 0) from (F⁰(t) : t ≥ 0) is right-continuity:
∩_{ϵ>0} F⁺(t + ϵ) = F⁺(t).
Suppose that a random variable T with values in [0, ∞] satisfies {T < t} ∈ F(t) for every t ≥ 0. Then T is a stopping time with respect to the right-continuous filtration (F⁺(t) : t ≥ 0).
Theorem (Strong Markov property) For every almost surely finite stopping time T, the process
{B(T + t) − B(T) : t ≥ 0}
is a standard Brownian motion independent of F⁺(T).
Equivalently
For any bounded measurable f : C([0, ∞), Rᵈ) → R and x ∈ Rᵈ, we have, almost surely,
Ex[f((B(T + t))_{t≥0}) | F⁺(T)] = E_{B(T)}[f(B̃)],
where the expectation on the right is with respect to a Brownian motion {B̃(t) : t ≥ 0} started at B(T).
Denote by Px the probability measure under which B = (Bt)t≥0 is a Brownian motion started at
x ∈ R. Then the Brownian motion B with distribution (Px )x∈R has the strong Markov property.
Theorem (Reflection principle) If T is a stopping time and {B(t) : t ≥ 0} is a standard Brownian motion, then the process B*(t) = B(t)1{t ≤ T} + (2B(T) − B(t))1{t > T}, called Brownian motion reflected at T, is also a standard Brownian motion.
Lévy's arcsine law. Let T > 0 and ζT = sup{t ≤ T : B(t) = 0}. Then, for t ∈ [0, T],
P[ζT ≤ t] = (2/π) arcsin(√(t/T)).
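The arcsine law can be approximated with a simple random walk, whose rescaled last zero converges in distribution to ζT/T; at t = T/2 the law gives P[ζT ≤ T/2] = (2/π) arcsin(√(1/2)) = 1/2. A Monte Carlo sketch (step and sample counts are our own choices):

```python
import random

def last_zero_fraction(steps, rng):
    """Simulate a simple symmetric random walk (a discrete stand-in for
    Brownian motion) and return its last visit to 0 as a fraction of
    the time horizon."""
    pos, last = 0, 0
    for i in range(1, steps + 1):
        pos += rng.choice((-1, 1))
        if pos == 0:
            last = i
    return last / steps

rng = random.Random(3)
fracs = [last_zero_fraction(400, rng) for _ in range(4000)]
p_half = sum(f <= 0.5 for f in fracs) / len(fracs)
print(p_half)  # near 0.5, as the arcsine law predicts
```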
1.7 Markov Processes Derived from Brownian Motion
To prove Lévy's theorem on the area of planar Brownian motion, we use the following lemma: if A1, A2 ⊂ R² have positive area, then
L²({x ∈ R² : L²(A1 ∩ (A2 + x)) > 0}) > 0.
Result For any points x, y ∈ Rᵈ, d ≥ 2, we have Px{y ∈ B(0, 1]} = 0; equivalently, single points are polar for Brownian motion in dimension d ≥ 2.
Zero set of Brownian motion. Let {B(t) : t ≥ 0} be a one-dimensional Brownian motion and
Zeros = {t ≥ 0 : B(t) = 0}
its zero set. Then, almost surely, Zeros is a closed set with no isolated points.
A process {X(t) : t ≥ 0} is called a Markov process with transition kernel p with respect to a filtration (F(t) : t ≥ 0) if, for all t ≥ s and Borel sets A ∈ B, the conditional probability P[X(t) ∈ A | F(s)] almost surely depends on the path only through X(s) and is given by the kernel p.
Theorem (Lévy 1948) Let {M(t) : t ≥ 0} be the maximum process of a linear standard Brownian motion, i.e. M(t) = max_{0≤s≤t} B(s). Then the process {Y(t) : t ≥ 0} defined by Y(t) = M(t) − B(t) is a reflected Brownian motion.
Ta = inf {t ≥0 : B(t) = a }.
Then { Ta : a ≥0 } is an increasing Markov process with transition kernel given by the densities
p(a, t, s) = [a/√(2π(s − t)³)] exp(−a²/(2(s − t))) 1{s > t}, for a > 0.
Theorem Let {B(t) : t ≥ 0} be a planar Brownian motion and write B(t) = (B1(t), B2(t)). Define the vertical line
V(a) = {(x, y) ∈ R² : x = a}
and let T(a) = τ(V(a)) be the first hitting time of V(a). Then the process {X(a) : a ≥ 0} defined by X(a) = B2(T(a)) is a Markov process, with transition kernel given by the Cauchy densities
p(a, x, A) = (1/π) ∫_A a/(a² + (x − y)²) dy.
1.8 Martingale Property of Brownian Motion
A real-valued stochastic process {X(t) : t ≥ 0} is a martingale with respect to a filtration (F(t) : t ≥ 0) if it is adapted to the filtration, E|X(t)| < ∞ for all t ≥ 0 and, for any pair of times 0 ≤ s ≤ t,
E[X(t) | F(s)] = X(s) almost surely.
It is a submartingale (respectively, a supermartingale) with respect to the filtration if, instead, E[X(t) | F(s)] ≥ X(s) (respectively, E[X(t) | F(s)] ≤ X(s)) almost surely.
A martingale is a process where the current state X(t) is always the best prediction for its future states.
Doob's maximal inequality: for p > 1,
E[(sup_{0≤s≤t} |X(s)|)ᵖ] ≤ (p/(p − 1))ᵖ E[|X(t)|ᵖ].
Wald's lemma for Brownian motion. Let {B(t) : t ≥ 0} be a standard linear Brownian motion, and let T be a stopping time such that either
• E[T] < ∞, or
• E[sup_{0≤s≤T} |B(s)|] < ∞.
Then E[B(T)] = 0.
Lemma. The process {B(t)² − t : t ≥ 0} is a martingale.
Wald's second lemma. Let T be a stopping time for standard Brownian motion such that E[T] < ∞. Then
E[B(T)²] = E[T].
Theorem Let a < 0 < b and, for a standard linear Brownian motion {B(t) : t ≥ 0}, define T = inf{t ≥ 0 : B(t) ∈ {a, b}}. Then
• P{B(T) = a} = b/(|a| + b) and P{B(T) = b} = |a|/(|a| + b);
• E[T] = |a|b.
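Both exit identities have exact analogues for the simple symmetric random walk (gambler's ruin), which makes them easy to check by simulation; a sketch with our own example endpoints a = −2, b = 3:

```python
import random

def exit_stats(a, b, trials, rng):
    """Simple symmetric random walk started at 0, absorbed at the
    integers a < 0 < b.  Returns (fraction of paths exiting at a,
    average absorption time); for this walk the Brownian identities
    P{exit at a} = b/(|a|+b) and E[T] = |a| b hold exactly."""
    hits_a, total_time = 0, 0
    for _ in range(trials):
        pos, t = 0, 0
        while a < pos < b:
            pos += rng.choice((-1, 1))
            t += 1
        hits_a += (pos == a)
        total_time += t
    return hits_a / trials, total_time / trials

rng = random.Random(4)
p_a, mean_t = exit_stats(-2, 3, 20000, rng)
print(p_a)     # b/(|a|+b) = 3/5 = 0.6
print(mean_t)  # |a|*b = 6
```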
Theorem Suppose f : Rᵈ → R is twice continuously differentiable and {B(t) : t ≥ 0} is a d-dimensional Brownian motion. Further suppose that, for all t > 0 and x ∈ Rᵈ, we have Ex|f(B(t))| < ∞ and Ex ∫₀ᵗ |∆f(B(s))| ds < ∞. Then the process defined by
X(t) = f(B(t)) − (1/2) ∫₀ᵗ ∆f(B(s)) ds
is a martingale.
Corollary Suppose f : Rᵈ → R satisfies ∆f(x) = 0 and Ex|f(B(t))| < ∞, for every x ∈ Rᵈ and t > 0. Then {f(B(t)) : t ≥ 0} is a martingale.
Chapter 2
Poisson Process
A σ-finite measure µ on (E, B(E)) is called a Borel measure if, for any x ∈ E, there exists an open set U ∋ x with µ(U) < ∞.
A Polish space is a separable completely metrizable topological space; that is, a space homeomorphic to a complete metric space that has a countable dense subset.
Let E be a locally compact Polish space with Borel σ-algebra B(E). Let Bb(E)
be the system of bounded Borel sets and M(E) the space of Radon measures on E.
Denote by M = σ(IA : A ∈ Bb(E)) the smallest σ-algebra on M(E) with respect to which all of the maps
IA : µ → µ(A), A ∈ Bb(E),
are measurable.
2.2 Random Measures
A σ-finite measure µ on (E, B(E)) is called a Radon measure if µ is an inner regular Borel measure.
The Laplace transform of a random measure X is given by
ψ(f) = E[exp(−∫ f dX)], f ∈ B⁺(E).
(iii) The increments Mt − Ms are independent and stationary, and follow a Gamma distribution.
Let M̃(E) be the space of all measures on E, endowed with the σ-algebra
M̃ = σ(IA : A ∈ Bb(E)).
Theorem Let X be a random measure on E. Then the set function E[X] : B(E) → [0, ∞], A → E[X(A)], called the intensity measure of X, is itself a measure; if it is locally finite, then E[X] ∈ M(E).
2.3 Introduction to Poisson Process
For a Poisson random variable X with parameter µ we have
E[X] = µ, E[X(X − 1)] = µ², E[X²] = µ² + µ, σ²_X = µ.
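These moments follow from the probability generating function of the Poisson distribution; a short standard derivation (not from the text):

```latex
\mathbb{E}[z^X] \;=\; \sum_{k=0}^{\infty} e^{-\mu}\,\frac{\mu^k}{k!}\,z^k
\;=\; e^{\mu(z-1)} .
% Differentiating at z = 1 gives the factorial moments:
\mathbb{E}[X] \;=\; \frac{d}{dz}\,e^{\mu(z-1)}\Big|_{z=1} \;=\; \mu, \qquad
\mathbb{E}[X(X-1)] \;=\; \frac{d^2}{dz^2}\,e^{\mu(z-1)}\Big|_{z=1} \;=\; \mu^2,
% hence
\mathbb{E}[X^2] \;=\; \mu^2 + \mu, \qquad
\sigma_X^2 \;=\; \mathbb{E}[X^2] - \mathbb{E}[X]^2 \;=\; \mu .
```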
Now interpret these variables as waiting times: let X1 be the waiting time for the first event, let X2 be the waiting time between the first and the second event, and so on. We then have an infinite sequence X1, X2, X3, ... of random variables on some probability space, and Sn = X1 + X2 + ... + Xn represents the time of occurrence of the nth event; it is convenient to write S0 = 0.
Note that if no two events occur simultaneously, then Sn must be strictly increasing, and if only finitely many events occur in each finite interval of time, then Sn must go to infinity.
If these conditions hold for each ω, we say that Condition 0′ is fulfilled. Let Nt denote the number of events that occur in the time interval [0, t], that is,
Nt = max{n : Sn ≤ t}.
Observe that [Nt ≥ n] = [Sn ≤ t]. Then we can see that each Nt is a random variable, and thus the collection [Nt : t ≥ 0] is a stochastic process.
Condition 0′
For each ω, Nt(ω) is a nonnegative integer for t ≥ 0, N0(ω) = 0 and lim_{t→∞} Nt(ω) = ∞; further, for each ω, Nt(ω) as a function of t is nondecreasing and right-continuous, and at the points of discontinuity it increases by jumps of size one.
Condition 1′
Condition 2′
(i) For 0 < t1 < t2 < ..... < tk the increments Nt1 , Nt2 − Nt1 , ...., Ntk − Ntk−1 are independent.
Condition 3′
(i) For 0 < t1 < t2 ....tk the increments Nt1 , Nt2 − Nt1 , ....Ntk − Ntk−1 are independent.
Condition 4′
If 0 < t1 < t2 < t3 .... < tk and if n1 , n2 ....nk are non negative integers, then
Theorem If Condition 0′ holds and [Nt : t ≥ 0] has independent increments and no fixed discontinuities, then the increments are Poisson distributed.
2.4 The Law of Rare Events
Theorem Suppose that, for each n, Zn1, Zn2, ..., Znrn are independent random variables, with Znk taking the value 1 with probability pnk and 0 with probability 1 − pnk. If
Σ_{k=1}^{rn} pnk → λ ≥ 0 and max_{1≤k≤rn} pnk → 0,
then
P[Σ_{k=1}^{rn} Znk = i] → e^{−λ} λⁱ/i!, i = 0, 1, 2, ...
Consider a large number N of independent Bernoulli trials where the probability p of success
on each trial is small and constant from trial to trial. Let XN,p denote the total number of successes in the N trials; it has the binomial distribution
P{XN,p = k} = [N!/(k!(N − k)!)] pᵏ (1 − p)^(N−k) for k = 0, 1, 2, ..., N.
Now consider the limiting case in which N → ∞ and p → 0 in such a way that N p = λ > 0 where
λ is constant. Then the distribution for XN,p becomes in the limit, the Poisson distribution i.e.
P{Xλ = k} = e^{−λ} λᵏ/k!.
Derivation
Let λ = Np. Then
(1 − p)^(N−k) = (1 − λ/N)^N (1 − λ/N)^(−k) → e^{−λ} × 1 as N → ∞,
since (1 − λ/N)^N → e^{−λ} and (1 − λ/N)^(−k) → 1. Combined with N!/(k!(N − k)!) pᵏ → λᵏ/k!, this gives the Poisson limit.
For a large number N of independent trials with a small, constant success probability p on each trial, the total number of successes approximately follows a Poisson distribution with parameter λ = Np.
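The approximation can be checked directly by computing both probability mass functions; N and λ below are our own example values:

```python
from math import comb, exp, factorial

N, lam = 1000, 3.0
p = lam / N

# Binomial(N, p) and Poisson(lambda = N p) probabilities for small k
binom = [comb(N, k) * p**k * (1 - p)**(N - k) for k in range(15)]
poisson = [exp(-lam) * lam**k / factorial(k) for k in range(15)]

max_gap = max(abs(b - q) for b, q in zip(binom, poisson))
print(max_gap)  # tiny: the law of rare events in action
```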
2.5 Distributions Associated with the Poisson Process
Let N((a,b]) denote the number of events that occur during the interval (a,b]. That is, if
t1 < t2 < t3 .....denote the times (or locations, etc.) of successive events, then N((a,b]) is the
number of values of ti for which a < ti ≤ b. We now state the following postulates
(i) The number of events happening in disjoint intervals are independent random variables.
That is, for every integer m = 2,3,.... and time points t0 = 0 < t1 < t2 .... < tm , the random
variables
are independent.
(ii) For any time t and positive number h , the probability distribution of N ((t, t + h]), the
number of events occurring between time t and t + h, depends only on the interval length h and
(iii) There is a positive constant λ for which the probability of at least one event happening in an interval of length h is λh + o(h).
(Conforming to a common notation, here o(h) as h ↓ 0 stands for a general and unspecified remainder term for which o(h)/h → 0 as h ↓ 0; that is, a remainder term of smaller order than h as h vanishes.)
(iv) The probability of two or more events occurring in an interval of length h is o(h).
The time of occurrence of the nth event is called the waiting time; it is denoted by Wn. The differences Sn = Wn − Wn−1 are called sojourn times; Sn measures the duration for which the process sojourns in state n − 1.
Theorem The waiting time Wn has the gamma distribution. Thus, its probability density
function is given by
fWn(t) = [λⁿ tⁿ⁻¹/(n − 1)!] e^{−λt}, n = 1, 2, ..., t ≥ 0.
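Since Wn is the sum of n independent exponential sojourn times, its gamma law is easy to corroborate by simulation (the parameter values are our own):

```python
import random

rng = random.Random(5)
lam, n = 2.0, 5

# W_n = S_1 + ... + S_n with S_i ~ Exp(lambda)
samples = [sum(rng.expovariate(lam) for _ in range(n)) for _ in range(50000)]

mean = sum(samples) / len(samples)
print(mean)  # gamma mean n/lambda = 2.5
```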
Theorem The sojourn times S0, S1, ..., Sn−1 are independent random variables, each having the exponential distribution with parameter λ.
Let {X(t)} be a Poisson process of rate λ > 0. Then for 0 < u < t and 0 ≤ k ≤ n,
P{X(u) = k | X(t) = n} = [n!/(k!(n − k)!)] (u/t)ᵏ (1 − u/t)^(n−k).
Theorem Let W1 , W2 , ... be the occurrence times in a Poisson process of rate λ > 0. Conditioned
on N (t) = n, the random variables W1 , W2 , .....Wn have the joint probability density function
fW1,...,Wn|X(t)=n(w1, w2, ..., wn) = n! t⁻ⁿ for 0 < w1 < ... < wn ≤ t.
The theorem asserts that, conditioned on a fixed total number of events in an interval, the locations of those events are distributed as independent uniform samples on that interval. This theorem has a wide variety of applications.
2.6 Poisson Point Process
Let µ ∈ M(E). A random measure X with independent increments is called a Poisson point process (PPP) with intensity measure µ if, for any A ∈ Bb(E), we have PX(A) = Poiµ(A). For every µ ∈ M(E) such a process exists.
For an atom-free measure µ ∈ M(E), let X be a random measure on E with P[X(A) ∈ N0 ∪ {∞}] = 1 for every A ∈ B(E). Then the following are equivalent.
Theorem Let µ ∈ M(E) and let X be a Poisson point process with intensity measure µ. Then the intensity measure of X is E[X] = µ.
Moments of PPP
Mapping theorem Let E and F be locally compact Polish spaces and let ϕ : E → F be a measurable map. Let µ ∈ M(E) with µ ∘ ϕ⁻¹ ∈ M(F), and let X be a PPP on E with intensity measure µ. Then X ∘ ϕ⁻¹ is a PPP on F with intensity measure µ ∘ ϕ⁻¹.
If (i)-(iii) hold, then Y is an infinitely divisible nonnegative random variable with Levy measure
ν.
Then Y is an infinitely divisible random measure with independent increments; for A ∈ B(E), Y(A) is an infinitely divisible nonnegative random variable.
Colouring theorem Let F be a further locally compact Polish space, let µ ∈ M(E) be atom-free, and let (Yx)x∈E be i.i.d. random variables with values in F and distribution ν ∈ M1(F). Then
Z(A) := ∫ 1A(x, Yx) X(dx), A ∈ B(E × F),
is a PPPµ⊗ν on E × F.
2.7 Multi-dimensional Poisson Process
Let S be a set in n−dimensional space and let A be a family of subsets of S. A Point process
in S is a stochastic process N (A) indexed by the sets A in A and having the set of nonnegative
integers {0, 1, 2....} as its possible values where N (A) is a counting function.
If S is a subset of the real line, the two-dimensional plane or three-dimensional space, let A be the family of subsets of S and, for any set A ∈ A, let |A| denote the size (length, area or volume, respectively) of A. Then {N(A) : A ∈ A} is a homogeneous Poisson point process of intensity λ > 0 if
(i) for each A ∈ A, the random variable N (A) has a Poisson distribution with parameter λ|A|;
(ii) for every finite collection {A1, A2, ..., An} of disjoint subsets of S, the random variables N(A1), N(A2), ..., N(An) are independent.
A Poisson Point Process N((s,t]) counts the number of events occurring in an interval (s,t].
A Poisson counting process, or Poisson process, X(t) counts the number of events occurring up to time t, i.e. X(t) = N((0, t]).
We can restate the Postulates for multidimensional Poisson Process for a given point process
{N (A) : A ∈ A} as follows
• The possible values of N(A) are the nonnegative integers {0, 1, 2, ...} and 0 < P{N(A) = 0} < 1 provided |A| > 0.
• The probability distribution of N(A) depends on the set A only through its size |A|.
• For m = 2, 3, ..., if A1, A2, ..., Am are disjoint regions, then N(A1), N(A2), ..., N(Am) are independent random variables and N(A1 ∪ A2 ∪ ... ∪ Am) = N(A1) + N(A2) + ... + N(Am).
• lim_{|A|→0} P{N(A) ≥ 1}/P{N(A) = 1} = 1.
If a random point process N(A), defined with respect to subsets A of Euclidean n-space, satisfies the above postulates, then N(A) is a homogeneous Poisson point process of some intensity λ > 0 and
P{N(A) = k} = e^{−λ|A|} (λ|A|)ᵏ/k! for k = 0, 1, ...
Consider a region A containing exactly one point, i.e. N(A) = 1. Then we see that for any subset B of A,
P{N(B) = 1 | N(A) = 1} = |B|/|A| for B ⊂ A.
More generally, consider a region A with |A| > 0 containing N(A) = n ≥ 1 points. Then these n points are independent and uniformly distributed in A, in the sense that for any disjoint partition A1, A2, ..., Am of A, where A1 ∪ A2 ∪ ... ∪ Am = A, and positive integers k1, k2, ..., km with k1 + k2 + ... + km = n,
P{N(A1) = k1, N(A2) = k2, ..., N(Am) = km | N(A) = n} = [n!/(k1!k2!...km!)] (|A1|/|A|)^{k1} ... (|Am|/|A|)^{km}.
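A homogeneous spatial Poisson process on a rectangle can be sampled in two stages, a Poisson count followed by uniform locations, which is exactly the conditional-uniformity property above. A sketch (the region, intensity and the use of Knuth's Poisson sampler are our own choices):

```python
import math
import random

def sample_poisson(lam, rng):
    """Knuth's multiplication method for Poisson(lam); fine for small lam."""
    L, k, prod = math.exp(-lam), 0, 1.0
    while True:
        prod *= rng.random()
        if prod <= L:
            return k
        k += 1

def spatial_poisson(lam, width, height, rng):
    """Homogeneous PPP of intensity lam on [0,width] x [0,height]:
    N ~ Poisson(lam * area) points, each placed uniformly."""
    n = sample_poisson(lam * width * height, rng)
    return [(rng.uniform(0, width), rng.uniform(0, height)) for _ in range(n)]

rng = random.Random(6)
counts = [len(spatial_poisson(2.0, 3.0, 2.0, rng)) for _ in range(5000)]
avg = sum(counts) / len(counts)
print(avg)  # mean count = lam * |A| = 2 * 6 = 12
```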
Given a Poisson process X(t) of rate λ > 0, suppose that each event has associated with it a random variable, possibly representing a value or a cost. The successive values Y1, Y2, ... are assumed to be independent, identically distributed and independent of the Poisson process. The compound Poisson process is then
Z(t) = Σ_{k=1}^{X(t)} Yk for t ≥ 0.
If λ > 0 is the rate of the process X(t) and µ = E[Y1] and ν² = Var[Y1] are the common mean and variance of Y1, Y2, ..., then the moments of Z(t) are given by
E[Z(t)] = λµt; Var[Z(t)] = λ(ν² + µ²)t.
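Both moment formulas are easy to verify by Monte Carlo; the rate, horizon and jump distribution below are our own example (the normal jump law is an assumption, not from the text):

```python
import random

rng = random.Random(7)
lam, t = 3.0, 2.0        # Poisson rate and time horizon
mu, nu2 = 1.5, 0.25      # mean and variance of the jump sizes Y_k

def compound_poisson(rng):
    """One sample of Z(t): arrivals from Exp(lam) gaps, normal jumps."""
    total, s = 0.0, 0.0
    while True:
        s += rng.expovariate(lam)       # next arrival time
        if s > t:
            return total
        total += rng.gauss(mu, nu2 ** 0.5)

zs = [compound_poisson(rng) for _ in range(20000)]
mean_z = sum(zs) / len(zs)
var_z = sum((z - mean_z) ** 2 for z in zs) / len(zs)
print(mean_z)  # E[Z(t)]  = lam*mu*t          = 9.0
print(var_z)   # Var[Z(t)] = lam*(nu2+mu^2)*t = 15.0
```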
(i) Risk theory. Suppose claims arrive at an insurance company in accordance with a Poisson process of rate λ, and let Yk be the magnitude of the kth claim. Then Z(t) = Σ_{k=1}^{X(t)} Yk represents the cumulative amount claimed up to time t.
(ii) Stock prices. Suppose that transactions in a certain stock take place according to a Poisson process of rate λ. Let Yk denote the change in market price of the stock between the kth and (k − 1)st transactions. The random walk hypothesis asserts that Y1, Y2, ... are independent random variables; under it, Z(t) = Σ_{k=1}^{X(t)} Yk represents the total price change up to time t.
The distribution of the compound Poisson process Z(t) = Σ_{k=1}^{X(t)} Yk can be obtained by conditioning on the number of events X(t).
A marked Poisson process is the sequence of pairs (W1, Y1), (W2, Y2), ..., where W1, W2, ... are the waiting times (event times) in the Poisson process X(t) and the Yk are the marks. For a fixed p (0 < p < 1), suppose
P{Yk = 1} = p and P{Yk = 0} = q = 1 − p.
Consider the separate processes of points marked with ones and of points marked with zeros:
X1(t) = Σ_{k=1}^{X(t)} Yk and X0(t) = X(t) − X1(t).
The nonoverlapping increments of X1(t) are independent random variables, X1(0) = 0, and X1(t) has a Poisson distribution with mean λpt. Hence X1(t) is a Poisson process with rate λp, and the parallel argument shows that X0(t) is a Poisson process with rate λ(1 − p). Moreover, X0(t) and X1(t) are independent processes.
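The splitting of a marked process into two Poisson subprocesses (thinning) can be demonstrated directly; the rate, horizon and p below are our own example values:

```python
import random

rng = random.Random(8)
lam, t, p = 4.0, 10.0, 0.3

def split_counts(rng):
    """Generate Poisson(lam) arrivals on [0, t]; mark each with prob p.
    Returns (X1(t), X0(t)) = (# marked 1, # marked 0)."""
    s, ones, zeros = 0.0, 0, 0
    while True:
        s += rng.expovariate(lam)
        if s > t:
            return ones, zeros
        if rng.random() < p:
            ones += 1
        else:
            zeros += 1

pairs = [split_counts(rng) for _ in range(5000)]
mean_ones = sum(o for o, z in pairs) / len(pairs)
mean_zeros = sum(z for o, z in pairs) / len(pairs)
print(mean_ones)   # lam*p*t     = 12
print(mean_zeros)  # lam*(1-p)*t = 28
```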
Let θ = θ(x, y) be a nonnegative function defined on a region S of the (x, y) plane. For each subset A of S, let
µ(A) = ∫∫_A θ(x, y) dx dy
be the volume under θ(x, y) enclosed by A. A nonhomogeneous Poisson point process of intensity function θ(x, y) is a point process {N(A)} such that
(i) for each subset A of S, the random variable N(A) has a Poisson distribution with mean µ(A);
(ii) for disjoint subsets A1 , A2 .....Am of S, the random variables N (A1 ), N (A2 )..., N (Am ) are
independent.
Note that the homogeneous Poisson process of intensity λ corresponds to the function θ(x, y) being identically equal to λ.
Chapter 3
Lévy Processes
A process X is said to be stochastically continuous if, for all a > 0 and all s ≥ 0, lim_{t→s} P(|X(t) − X(s)| > a) = 0.
Let X = (X(t), t ≥ 0) be a stochastic process defined on a probability space (Ω, F, P). We say that it has independent increments if for each n ∈ N and each 0 ≤ t1 < t2 < ... < tn+1 < ∞, the random variables (X(t_{j+1}) − X(t_j), 1 ≤ j ≤ n) are independent.
A stochastic process has stationary increments if, for all t ≥ 0 and h > 0, the distribution of
Yt,h = Xt+h − Xt
depends only on h.
Let (S, A) be a measurable space and (Ω, F, P) a probability space. A random measure M on (S, A) is a collection of random variables (M(B), B ∈ A) such that:
(1) M(∅) = 0;
(2) M is almost surely σ-additive;
(3) (independently scattered property) for each disjoint family (B1, ..., Bn) in A, the random variables M(B1), ..., M(Bn) are independent.
3.2 Càdlàg Processes
Let P = {a = t1 < t2 < ... < tn < tn+1 = b} be a partition of the interval [a, b] in R, and define its mesh to be δ = max_{1≤i≤n} |t_{i+1} − t_i|. We define the variation varP(g) of a càdlàg function g over the partition P by varP(g) = Σ_{i=1}^{n} |g(t_{i+1}) − g(t_i)|.
If V(g) = supP varP(g) < ∞, we say that g has finite variation on [a, b].
A martingale is a sequence of random variables X0, X1, ... with finite means such that the conditional expectation of Xn+1, given X0, X1, ..., Xn, is equal to Xn.
Let X be a Lévy process on a probability space (Ω, F, P). The mapping η defined by
E(e^{i(u,X(t))}) = e^{tη(u)}
is called the Lévy symbol of X.
We say that a Lévy process X has bounded jumps if there exists C > 0 with sup_{t≥0} |∆X(t)| ≤ C almost surely.
Let I = [a, b] be an interval in R⁺. A mapping f : I → Rᵈ is said to be càdlàg (right-continuous with left limits) if, for all t ∈ (a, b]:
• for all sequences (tn, n ∈ N) in I with each tn < t and lim_{n→∞} tn = t, the limit lim_{n→∞} f(tn) exists;
• for all sequences (tn, n ∈ N) in I with each tn > t and lim_{n→∞} tn = t, we have lim_{n→∞} f(tn) = f(t).
Clearly any continuous function is cádlág. Also note that a cádlág function can only have jump
discontinuities.
Theorem If f is a càdlàg function, then the set S = {t : ∆f(t) ≠ 0} is at most countable.
Proof sketch: let Sk = {t : |∆f(t)| > 1/k}, so that S = ∪k Sk; it suffices to show that each Sk is finite. Suppose that Sk has at least one accumulation point x and choose a sequence (xn, n ∈ N) in Sk that converges to x; we may assume, without loss of generality, that the convergence is from the left. Given any n ∈ N, since xn ∈ Sk it follows that f has a left limit at xn, so, given ϵ > 0, we can find δ > 0 such that, for all y with y < xn satisfying xn − y < δ, we have |f(y) − f(xn−)| < ϵ. Taking ϵ smaller than 1/(2k) and using the jumps of size greater than 1/k at the points xn, we see that (f(xn), n ∈ N) cannot be Cauchy, so f does not have a left limit at x, a contradiction.
• Let D(a, b) denote the set of all càdlàg functions on [a, b]; then D(a, b) is a linear space.
• If f, g ∈ D(a, b) then fg ∈ D(a, b). Furthermore, if f(x) ≠ 0 for all x ∈ [a, b], then 1/f ∈ D(a, b).
• Every càdlàg function is bounded on finite closed intervals and attains its bounds there.
• Every càdlàg function is the uniform limit of a sequence of step functions.
Define f̃(x) := f(x−) whenever x ∈ (a, b]. We see that f and f̃ differ on at most a countable number of points, and f̃ is left-continuous.
3.3 Introduction to Lévy Processes
A stochastic process X = (X(t), t ≥ 0) is a Lévy process if:
• X(0) = 0 (a.s.);
• X has independent and stationary increments;
• X is stochastically continuous.
If X is a Lévy process, then X(t) is infinitely divisible for each t ≥ 0. (A random variable Y is called infinitely divisible if, for any n ∈ N, there exist i.i.d. random variables Y1, Y2, ..., Yn with
Y = Y1 + Y2 + ... + Yn in distribution.)
Indeed, write Y_k^(n)(t) = X(kt/n) − X((k − 1)t/n); then X(t) = Y_1^(n)(t) + ... + Y_n^(n)(t), and the Y_k^(n)(t) are i.i.d. by the stationarity and independence of the increments.
Lemma Write ϕX(t)(u) = E(e^{i(u,X(t))}). If X = (X(t), t ≥ 0) is stochastically continuous, then the map t → ϕX(t)(u) is continuous for each u ∈ Rᵈ.
Proof For each s, t ≥ 0 with t ≠ s, write X(s, t) = X(t) − X(s). Fix u ∈ Rᵈ. Since the map y → e^{i(u,y)} is continuous at the origin, given any ϵ > 0 we can find δ1 > 0 such that
sup_{0≤|y|<δ1} |e^{i(u,y)} − 1| < ϵ/2,
and by stochastic continuity we can find δ2 such that whenever 0 < |t − s| < δ2 we have P(|X(s, t)| > δ1) < ϵ/4. Hence for all 0 < |t − s| < δ2 we obtain
|ϕX(t)(u) − ϕX(s)(u)| ≤ E|e^{i(u,X(s,t))} − 1| < ϵ.
Proof Suppose that X is a Lévy process and, for each u ∈ Rᵈ and t ≥ 0, define ϕu(t) = E(e^{i(u,X(t))}). Then, by stationary independent increments,
ϕu(t + s) = E(e^{i(u,X(t+s))})
= E(e^{i(u,X(t+s)−X(s))} e^{i(u,X(s))})
= E(e^{i(u,X(t+s)−X(s))}) E(e^{i(u,X(s))})
= ϕu(t)ϕu(s),
with ϕu(0) = 1. The map t → ϕu(t) is continuous, and the unique continuous solution of this functional equation is ϕu(t) = e^{tα(u)}, where α : Rᵈ → C; in particular, we see that X(1) is infinitely divisible.
Theorem Suppose that X = (X(t), t ≥ 0) is a process for which there exists a sequence of Lévy processes (Xn, n ∈ N), with each Xn = (Xn(t), t ≥ 0), such that Xn(t) converges in probability to X(t) for each t ≥ 0 and lim_{n→∞} limsup_{t→0} P(|Xn(t) − X(t)| > a) = 0 for all a > 0. Then X is a Lévy process.
3.4 Jumps of a Lévy Process
Theorem If N is a Lévy process that is increasing almost surely and is such that (∆N(t), t ≥ 0) takes values in {0, 1}, then N is a Poisson process.
Lemma If X is a Lévy process then, for each fixed t > 0, ∆X(t) = 0 almost surely.
Result We see that Σ_{0≤s≤t} |∆X(s)| < ∞ almost surely if X is a compound Poisson process.
For each ω ∈ Ω and t ≥ 0, the set function A → N(t, A)(ω) is a counting measure on B(Rᵈ − {0}), and thus
E(N(t, A)) = ∫ N(t, A)(ω) dP(ω)
defines a measure. We write µ(·) = E(N(1, ·)) and call it the intensity measure associated with X. We say that a set A ∈ B(Rᵈ − {0}) is bounded below if 0 does not belong to its closure.
Lemma If A is bounded below, then N (t, A) < ∞ almost surely for all t ≥ 0.
Theorem
(1) If A is bounded below, then (N (t, A), t ≥ 0) is a Poisson processes with intensity µ(A).
(2) If A1 , ....., Am ∈ B(Rd − {0}) are disjoint, then the random variables N (t, A1 ), ....., N (t, Am )
are independent.
Theorem Given a σ−finite measure λ on a measurable space (S, A), there exists a Poisson
random measure M on a probability space (Ω, F, P) such that λ(A) = E(M (A)) for all A ∈ A.
3.5 Poisson Integration
Let U = Rᵈ − {0} and let C be its Borel σ-algebra. Let X be a Lévy process; then ∆X is a Poisson point process and N is its associated Poisson random measure. For each t ≥ 0 and A bounded below:
(1) For each t > 0, ω ∈ Ω, N (t, ·)(ω) is a counting measure on B(Rd − {0}).
(2) For each A bounded below, (N(t, A), t ≥ 0) is a Poisson process with intensity µ(A) = E(N(1, A)).
(3) (Ñ(t, A), t ≥ 0) is a martingale-valued measure, where Ñ(t, A) = N(t, A) − tµ(A), for A bounded below.
Let f be a Borel measurable function from Rᵈ to Rᵈ and let A be bounded below; then, for each t ≥ 0 and ω ∈ Ω, we may define the Poisson integral of f as the finite random sum ∫_A f(x) N(t, dx)(ω) = Σx f(x) N(t, {x})(ω).
Let (T_n^A, n ∈ N) be the arrival times of the Poisson process (N(t, A), t ≥ 0). Then, for each u ∈ Rᵈ,
E(exp[i(u, ∫_A f(x) N(t, dx))]) = exp[t ∫_A (e^{i(u,x)} − 1) µf(dx)],
33
3.5. Poisson Integration Chapter 3. Lévy Processes
where µf = µ ◦ f −1 ;
2
R R
Var(| A f (x)N (t, dx)|) =t A |f (x)| µ(dx)
Consider the sequence of jump-size random variables (Y_f^A(n), n ∈ N), where each

Y_f^A(n) = ∫_A f(x) N(T_n^A, dx) − ∫_A f(x) N(T_{n−1}^A, dx).

Theorem For each B ∈ B(R^d),

P(Y_f^A(n) ∈ B) = µ(A ∩ f^{−1}(B)) / µ(A).
Result Let N be a Poisson random measure with intensity µ that counts the jumps of a Lévy process X, and let f : R^d → R^d be Borel measurable. For A bounded below, let Y = (Y(t), t ≥ 0) be given by Y(t) = ∫_A f(x) N(t, dx); then Y is of finite variation on [0, t] for each t ≥ 0.
Let Y be a Poisson integral and let η be its Lévy symbol. For each u ∈ R^d, the process (e^{i(u,Y(t)) − tη(u)}, t ≥ 0) is a martingale.
3.6 Examples of Lévy Processes
We can see that the characteristic function of (standard) Brownian motion B is given by

E(e^{i(u,B(t))}) = e^{−t|u|²/2}

for each u ∈ R^d, t ≥ 0.
We see that the marginal processes B_i = (B_i(t), t ≥ 0), where B_i(t) is the ith component of B(t), are mutually independent Brownian motions in R.
For the Poisson process N with intensity λ,

P(N(t) = n) = ((λt)^n / n!) e^{−λt}

for each n = 0, 1, 2, .... We have seen that the waiting times T_n are Gamma distributed, and the inter-arrival times T_n − T_{n−1} are exponentially distributed with mean 1/λ.
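These two descriptions of the Poisson process can be checked against each other in a short simulation: build N(t) from i.i.d. exponential inter-arrival times and compare the empirical law of N(t) with the Poisson formula. The rate, horizon and sample size below are illustrative choices.

```python
import math
import numpy as np

rng = np.random.default_rng(1)
lam, t, n_paths = 2.0, 1.5, 50000

# i.i.d. exponential gaps with mean 1/lam; 30 gaps comfortably exceed any
# realistic number of arrivals on [0, t] for these parameters.
gaps = rng.exponential(1.0 / lam, (n_paths, 30))
arrival_times = np.cumsum(gaps, axis=1)        # T_n = sum of the first n gaps
samples = np.sum(arrival_times <= t, axis=1)   # N(t) = #{n : T_n <= t}

empirical_p3 = np.mean(samples == 3)
exact_p3 = (lam * t) ** 3 * math.exp(-lam * t) / math.factorial(3)
```

The empirical frequency of {N(t) = 3} matches (λt)³e^{−λt}/3!, and the sample mean of N(t) matches λt, as the formula above predicts.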
The compound Poisson process Y is a Lévy process; for a > 0, stochastic continuity follows by conditioning on N(t) and dominated convergence. We see that the sample paths of Y are piecewise constant on finite intervals with 'jump discontinuities' at the random times (T(n), n ∈ N); the sizes of these jumps are random, and the jump at T(n) can take any value in the range of the jump random variable Z(n).
Proposition If (N_1(t), t ≥ 0) and (N_2(t), t ≥ 0) are two independent Poisson processes defined on the same probability space, with arrival times (T_n^{(j)}, n ∈ N) for j = 1, 2 respectively, then

P(T_m^{(1)} = T_n^{(2)} for some m, n ∈ N) = 0.
Let C be a Gaussian Lévy process and let Y be a compound Poisson process, independent of C, with jump times (T_n, n ∈ N) and jump sizes (Z_n, n ∈ N). Define

X(t) = C(t) for 0 ≤ t < T_1,
X(T_1) = C(T_1) + Z_1,
X(t) = X(T_1) + C(t) − C(T_1) for T_1 < t < T_2,
X(T_2) = X(T_2−) + Z_2,

and so on recursively. This is called interlacing, since a continuous path is interlaced with random jumps.
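The interlacing construction above can be sketched on a grid: run a Brownian path and, at the arrival times of an independent Poisson process, add i.i.d. jumps Z_n. The grid size, jump rate and jump law below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
T, n_steps, lam = 1.0, 1000, 5.0
dt = T / n_steps

# Continuous Gaussian part C(t): a Brownian path on the grid.
dB = rng.normal(0.0, np.sqrt(dt), n_steps)
C = np.concatenate([[0.0], np.cumsum(dB)])

# Compound-Poisson jump data: N(T) ~ Poisson(lam*T), arrival times uniform
# given the count, jump sizes Z_n here standard normal (an assumption).
n_jumps = rng.poisson(lam * T)
jump_times = np.sort(rng.uniform(0.0, T, n_jumps))
jump_sizes = rng.normal(0.0, 1.0, n_jumps)

grid = np.linspace(0.0, T, n_steps + 1)
# X(t) = C(t) + sum of all jumps up to t: the continuous path interlaced with jumps.
jumps_so_far = np.array([jump_sizes[jump_times <= t].sum() for t in grid])
X = C + jumps_so_far

total_jump = X[-1] - C[-1]   # the total jump contribution at time T
```

By construction X(T) − C(T) equals the sum of all jump sizes, which is exactly the interlacing decomposition of the path into a continuous part and a jump part.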
We say that a stochastic process X = (X(t), t ≥ 0) is stable if all its finite-dimensional distributions are stable. A stable Lévy process is a Lévy process X in which each X(t) is a stable random variable. We can see that in the rotationally invariant case, the Lévy symbol is given by

η(u) = −σ^α |u|^α.

We see that Lévy processes display self-similarity. In general, a stochastic process (Y(t), t ≥ 0) is self-similar with Hurst index H > 0 if the two processes (Y(at), t ≥ 0) and (a^H Y(t), t ≥ 0) have the same finite-dimensional distributions for all a ≥ 0. We can see that a rotationally invariant stable Lévy process is self-similar with Hurst index H = 1/α; in particular, a Lévy process X is self-similar if and only if each X(t) is strictly stable.
3.7 Lévy-Itô decomposition
Result Let A and B be bounded below and suppose that f ∈ L²(A, µ_A) and g ∈ L²(B, µ_B). For each t ≥ 0 let M_1(t) = ∫_A f(x) Ñ(t, dx) and M_2(t) = ∫_B g(x) Ñ(t, dx); then

V(M_1(t)) ≤ V(∫_A f(x) N(t, dx)) + V(t ∫_A f(x) µ(dx)) ≤ ∫_A |f(x)| N(t, dx) + t ∫_A |f(x)| µ(dx),

where V denotes the total variation. Note that the proposition does not hold when M_1 = M_2 = B, where B is a standard Brownian motion.
Result Let N = (N(t), t ≥ 0) be a Poisson process with arrival times (T_n, n ∈ N), and let A be bounded below; let M be a centred càdlàg L²-martingale that is continuous at the arrival times of (N(t, A), t ≥ 0). Then M is orthogonal to every process in M_A.

For A bounded below, ∫_A x N(t, dx) = Σ_{0≤s≤t} ∆X(s) χ_A(∆X(s)) is the sum of all jumps taking values in the set A up to the time t; since the paths of X are càdlàg, this is almost surely a finite sum.

Theorem If X is a Lévy process with bounded jumps, then we have E(|X(t)|^m) < ∞ for all m ∈ N.

Corollary A Lévy process has bounded jumps if and only if it is of the form Y_a for some a > 0.
where Y_c and Y_d are independent Lévy processes, Y_c has continuous sample paths and

Y_d(t) = ∫_{|x|<1} x Ñ(t, dx).
Corollary The intensity measure µ of the Poisson random measure N is a Lévy measure.
Result An α−stable Lévy process has finite mean if 1 < α ≤ 2 and infinite mean otherwise.
Corollary The characteristics (b, A, µ) of a Lévy process are uniquely determined by the process.
Corollary Let G be a group of matrices acting on R^d. A Lévy process is G-invariant if and only if its characteristics are invariant under the action of G.
A Lévy process is O(d)-invariant if and only if it has characteristics (0, aI, µ) where a ≥ 0 and µ is O(d)-invariant; it is symmetric if and only if it has characteristics (0, A, µ) where A is an arbitrary positive definite symmetric matrix and µ is symmetric, i.e. µ(B) = µ(−B) for all Borel sets B. Suppose now that

∫_{|x|≥1} |x|^n µ(dx) < ∞ for each n ∈ N,

and define
X^{(n)}(t) = Σ_{0≤s≤t} [∆X(s)]^n and Y^{(n)}(t) = X^{(n)}(t) − E(X^{(n)}(t)).
Then each (Y^{(n)}(t), t ≥ 0) is a martingale. Such processes are called Teugels martingales.
Suppose that X is a Lévy process with Lévy-Itô decomposition of the form

X(t) = ∫_{|x|<1} x Ñ(t, dx)
for all t ≥ 0. We see the outcome of a competition between an infinite number of jumps of small size and an infinite drift. Here we take d = 1 and assume µ((0, 1)) > 0. For each x > 0, let T_x = inf{t > 0 : X(t) ≥ x}; then either the path jumps across x or it hits x continuously. Furthermore, either P(X(T_x) = x) > 0 for all x > 0 or P(X(T_x) = x) = 0 for all x > 0. In the first case, every positive point can be hit continuously by X, and this phenomenon is called creep; in the second case, only jumps can occur almost surely. The conditions for creep or jump can be completely classified for general one-dimensional Lévy processes in terms of their characteristics. For example, a sufficient condition for creep is A = 0 together with ∫_{−1}^{0} |x| µ(dx) = ∞ and ∫_{0}^{1} x µ(dx) < ∞. This is satisfied by spectrally negative α-stable Lévy processes (0 < α < 2), i.e. those for which c_1 = 0, where the characteristic measure is given by
µ(dx) = (c_1 / x^{1+α}) χ_{(0,∞)}(x) dx + (c_2 / |x|^{1+α}) χ_{(−∞,0)}(x) dx.
Chapter 4
Itô Integral
4.1 Introduction to Itô Integrals

Consider the simple population growth model

dN/dt = a(t) N(t), N(0) = N_0 (constant).
It might happen that a(t) is not completely known, and subject to some ”noise”. We thus write
dX/dt = b(t, X_t) + σ(t, X_t) · "noise",
Consider the case where the noise is 1-dimensional. It is reasonable to look for some stochastic process W_t representing the noise, so that

dX/dt = b(t, X_t) + σ(t, X_t) · W_t.
• (i) t_1 ≠ t_2 implies that W_{t_1} and W_{t_2} are independent.
• (ii) {W_t} is stationary, i.e. the (joint) distribution of {W_{t_1+t}, ..., W_{t_k+t}} does not depend on t.
• (iii) E[W_t] = 0 for all t.
However, we see that there is no "reasonable" stochastic process satisfying (i) and (ii): such a W_t cannot have continuous paths, and it can only be constructed as a probability measure on the space S′ of tempered distributions on the interval [0, ∞), and not as a probability measure on the much smaller space R^{[0,∞)}, like an ordinary process. Let 0 = t_0 < t_1 < ... < t_m = t and consider a discrete version of the equation:

X_{k+1} − X_k = b(t_k, X_k) ∆t_k + σ(t_k, X_k) W_k ∆t_k,

where X_j = X(t_j), W_k = W_{t_k} and ∆t_k = t_{k+1} − t_k.
If we replace W_k ∆t_k by ∆V_k = V_{t_{k+1}} − V_{t_k}, where {V_t}_{t≥0} is some suitable stochastic process, then the assumptions (i), (ii) and (iii) on W_t suggest that V_t should have stationary independent increments with mean 0. It turns out that the only such process with continuous paths is Brownian motion.
Thus we obtain, with B_j = B_{t_j},

X_k = X_0 + Σ_{j=0}^{k−1} b(t_j, X_j) ∆t_j + Σ_{j=0}^{k−1} σ(t_j, X_j) ∆B_j.

When ∆t_j → 0, this suggests

X_t = X_0 + ∫_0^t b(s, X_s) ds + "∫_0^t σ(s, X_s) dB_s",
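The discrete scheme above is exactly the Euler-Maruyama method for simulating such an equation. As a hedged sketch, we apply it to an assumed illustrative linear equation dX = µX dt + sX dB (geometric Brownian motion), whose exact solution X_t = X_0 exp((µ − s²/2)t + sB_t) is known, so the discretisation can be checked against it; the coefficients and step count are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(3)
mu, s, X0, T, n_steps = 0.05, 0.2, 1.0, 1.0, 100000
dt = T / n_steps

dB = rng.normal(0.0, np.sqrt(dt), n_steps)   # Brownian increments Delta B_j

X = X0
for k in range(n_steps):
    # X_{k+1} = X_k + b(t_k, X_k) dt + sigma(t_k, X_k) dB_k
    X = X + mu * X * dt + s * X * dB[k]

B_T = dB.sum()
X_exact = X0 * np.exp((mu - 0.5 * s ** 2) * T + s * B_T)
err = abs(X - X_exact)   # small for fine grids
```

With this fine grid the scheme tracks the exact solution along the same Brownian path closely, which is the content of letting ∆t_j → 0 above.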
where B_t(ω) is 1-dimensional Brownian motion starting at the origin. We must therefore give a meaning to the integral ∫_S^T f(t, ω) dB_t(ω) for a wide class of functions f : [0, ∞) × Ω → R.
We begin with the definition for a simple class of functions f and then extend by an approximation procedure. A function ϕ is called elementary if it has the form

ϕ(t, ω) = Σ_j e_j(ω) χ_{[t_j, t_{j+1})}(t),

where χ denotes the characteristic (indicator) function and n is a natural number; for such functions it is reasonable to define

∫_S^T ϕ(t, ω) dB_t(ω) = Σ_{j≥0} e_j(ω) [B_{t_{j+1}} − B_{t_j}](ω),

where

t_k = t_k^{(n)} = k·2^{−n} if S ≤ k·2^{−n} ≤ T; t_k = S if k·2^{−n} < S; t_k = T if k·2^{−n} > T.
Result Choose ϕ_1(t, ω) = Σ_{j≥0} B_{t_j}(ω) χ_{[t_j, t_{j+1})}(t) and ϕ_2(t, ω) = Σ_{j≥0} B_{t_{j+1}}(ω) χ_{[t_j, t_{j+1})}(t).
Then

E[∫_0^T ϕ_1(t, ω) dB_t(ω)] = Σ_{j≥0} E[B_{t_j}(B_{t_{j+1}} − B_{t_j})] = 0,

whereas

E[∫_0^T ϕ_2(t, ω) dB_t(ω)] = Σ_{j≥0} E[B_{t_{j+1}}(B_{t_{j+1}} − B_{t_j})] = T.

So, even though both ϕ_1 and ϕ_2 appear to be reasonable approximations to f(t, ω) = B_t(ω), their integrals are not close to each other at all, no matter how large n is chosen.
We see that the variations of the paths of B_t are too big to enable us to define the integral ∫_S^T f(t, ω) dB_t(ω) in the Riemann-Stieltjes sense: the paths t → B_t of Brownian motion are nowhere differentiable almost surely, so the total variation of the path is infinite almost surely. We can, however, define the integral by fixing the points t_j^* at which the integrand is evaluated:
• t_j^* = t_j (the left end point), which leads to the Itô integral, denoted by ∫_S^T f(t, ω) dB_t(ω);
• t_j^* = (t_j + t_{j+1})/2 (the mid point), which leads to the Stratonovich integral, denoted by ∫_S^T f(t, ω) ◦ dB_t(ω).
We observe from this procedure that it is natural to require f to have the property that each of the functions ω → f(t_j, ω) depends only on the behaviour of B_s(ω) up to time t_j. To make this precise, let F_t = F_t^{(n)} be the σ-algebra generated by the random variables {B_i(s)}_{1≤i≤n, 0≤s≤t}. In other words, F_t is the smallest σ-algebra containing all sets of the form

{ω : B_{t_1}(ω) ∈ F_1, ..., B_{t_k}(ω) ∈ F_k},

where t_j ≤ t and F_j ⊂ R^n are Borel sets, j ≤ k = 1, 2, ... (we assume that all sets of measure zero are included in F_t). One can show that a function h(ω) is F_t-measurable if and only if h can be written as the pointwise almost everywhere limit of sums of functions of the form

g_1(B_{t_1}) g_2(B_{t_2}) ··· g_k(B_{t_k}),
where g_1, ..., g_k are bounded continuous functions and t_j ≤ t for j ≤ k, k = 1, 2, ... Intuitively, that h is F_t-measurable means that the value of h(ω) can be decided from the values of B_s(ω) for s ≤ t. For example, h_1(ω) = B_{t/2}(ω) is F_t-measurable, while h_2(ω) = B_{2t}(ω) is not. Note that F_s ⊂ F_t for s < t (i.e. {F_t} is increasing). More generally, if {N_t}_{t≥0} is an increasing family of σ-algebras of subsets of Ω, a process g(t, ω) is called N_t-adapted if for each t ≥ 0 the function

ω → g(t, ω)

is N_t-measurable. Thus the process h_1(t, ω) = B_{t/2}(ω) is F_t-adapted, while h_2(t, ω) = B_{2t}(ω) is not.
4.2 Construction of the Itô Integral

Definition Let V = V(S, T) be the class of functions

f(t, ω) : [0, ∞) × Ω → R

such that
• (t, ω) → f(t, ω) is B × F-measurable, where B denotes the Borel σ-algebra on [0, ∞);
• f(t, ω) is F_t-adapted;
• E[∫_S^T f(t, ω)² dt] < ∞.
We first define I[ϕ] for a simple class of functions ϕ; we then show that each f ∈ V can be approximated by such ϕ's. Since ϕ ∈ V, each function e_j must be F_{t_j}-measurable. We see from earlier that ϕ_1 is elementary, whereas ϕ_2 is not.
∫_S^T ϕ(t, ω) dB_t(ω) = Σ_{j≥0} e_j(ω) [B_{t_{j+1}} − B_{t_j}](ω).
We now use the Itô isometry,

E[(∫_S^T ϕ(t, ω) dB_t(ω))²] = E[∫_S^T ϕ(t, ω)² dt],

which holds for elementary ϕ, to extend the definition from elementary functions to functions in V. This is done in three steps:
• Step 1 Let g ∈ V be bounded and g(·, ω) continuous for each ω. Then there exist elementary functions ϕ_n ∈ V such that E[∫_S^T (g − ϕ_n)² dt] → 0 as n → ∞.
• Step 2 Let h ∈ V be bounded. Then there exist bounded functions g_n ∈ V such that g_n(·, ω) is continuous for each ω and n, and E[∫_S^T (h − g_n)² dt] → 0 as n → ∞.
• Step 3 Let f ∈ V. Then there exists a sequence {h_n} ⊂ V such that h_n is bounded for each n and

E[∫_S^T (f − h_n)² dt] → 0 as n → ∞.
Then we define

I[f](ω) := ∫_S^T f(t, ω) dB_t(ω) := lim_{n→∞} ∫_S^T ϕ_n(t, ω) dB_t(ω).

The limit exists as an element of L²(P), since {∫_S^T ϕ_n(t, ω) dB_t(ω)} forms a Cauchy sequence in L²(P).

Itô Integral Let f ∈ V(S, T). Then the Itô integral of f (from S to T) is defined by

∫_S^T f(t, ω) dB_t(ω) = lim_{n→∞} ∫_S^T ϕ_n(t, ω) dB_t(ω) (limit in L²(P)),

where {ϕ_n} is a sequence of elementary functions with E[∫_S^T (f − ϕ_n)² dt] → 0 as n → ∞.
4.3 Properties of the Itô Integral
For example, one can compute ∫_0^t B_s dB_s = ½B_t² − ½t. The extra term −½t shows that the Itô stochastic integral does not behave like ordinary integrals.
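The classical identity ∫_0^t B_s dB_s = ½B_t² − ½t, with its extra −½t term, can be checked numerically by approximating the Itô integral with left-endpoint elementary sums on one simulated path; grid size and seed below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(5)
T, n_grid = 1.0, 200000
dt = T / n_grid

dB = rng.normal(0.0, np.sqrt(dt), n_grid)
B = np.concatenate([[0.0], np.cumsum(dB)])

ito_sum = np.sum(B[:-1] * dB)             # left endpoints: the Ito choice
identity = 0.5 * B[-1] ** 2 - 0.5 * T     # (1/2) B_T^2 - (1/2) T
gap = abs(ito_sum - identity)
```

The gap equals ½|Σ(∆B_j)² − T|, and the sum of squared increments concentrates at T as the grid refines, so the left-endpoint sums do converge to ½B_T² − ½T rather than to the ordinary-calculus value ½B_T².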
The Itô integral M_t = ∫_0^t f(s, ω) dB_s is a martingale, and therefore satisfies Doob's maximal inequality:

P[sup_{0≤t≤T} |M_t| ≥ λ] ≤ (1/λ²) · E[∫_0^T f(s, ω)² ds]; λ, T > 0.
4.4 Extension of the Itô Integral
• (i) (t, ω) → f(t, ω) is B × F-measurable, where B denotes the Borel σ-algebra on [0, ∞).
• (ii) There exists an increasing family of σ-algebras H_t, t ≥ 0, such that
(a) B_t is a martingale with respect to H_t, and
(b) f_t is H_t-adapted.
• (iii) E[∫_S^T f(t, ω)² dt] < ∞.
Note that (ii)(a) implies that F_t ⊂ H_t. The essence of this extension is that we can allow f_t to depend on more than F_t, as long as B_t remains a martingale with respect to the "history" of f_s, s ≤ t. If (ii) holds, then E[B_s − B_t | H_t] = 0 for all s > t, and we see that this is sufficient to carry out the construction of the Itô integral as before for any f satisfying the measurability conditions (i) and (iii) with respect to the filtration H = {H_t}_{t≥0}.
Similarly, if B = (B_1, ..., B_n) denotes n-dimensional Brownian motion and v = [v_ij(t, ω)] is an m × n matrix of integrands, we define

∫_S^T v dB = ∫_S^T [v_11 ··· v_1n; ... ; v_m1 ··· v_mn] [dB_1; ... ; dB_n]

to be the m × 1 matrix (column vector) whose ith component is the sum of (extended) 1-dimensional Itô integrals

Σ_{j=1}^{n} ∫_S^T v_ij(t, ω) dB_j(t, ω).

If H = F^{(n)} = {F_t^{(n)}}_{t≥0} we write V^{m×n}(S, T), and if m = 1 we write V_H^n(S, T) (respectively V^n(S, T)) instead of V_H^{n×1}(S, T) (respectively V^{n×1}(S, T)). We also put V_H = ∩_{T>0} V_H(0, T).
Now consider the Itô integral defined for a larger class of integrands. The measurability conditions are as before:
• (t, ω) → f(t, ω) is B × F-measurable, where B denotes the Borel σ-algebra on [0, ∞);
• there exists an increasing family of σ-algebras H_t, t ≥ 0, such that (a) B_t is a martingale with respect to H_t and (b) f_t is H_t-adapted;
but the integrability condition is relaxed to
• P[∫_S^T f(s, ω)² ds < ∞] = 1.
Let W_H(S, T) denote the class of processes f(t, ω) ∈ R satisfying these conditions. Let W_H = ∩_{T>0} W_H(0, T), and in the matrix case we write W_H^{m×n}(S, T) etc. If H_t = F_t, we write W(S, T) instead of W_H(S, T).
Let B_t denote 1-dimensional Brownian motion. If f ∈ W_H, we see that for all t there exist step functions f_n ∈ W_H such that ∫_0^t |f_n − f|² ds → 0 in probability, i.e. in measure with respect to P. For such a sequence, ∫_0^t f_n(s, ω) dB_s converges in probability to some random variable, and the limit depends only on f, not on the sequence {f_n}. Thus we may define

∫_0^t f(s, ω) dB_s(ω) = lim_{n→∞} ∫_0^t f_n(s, ω) dB_s(ω) (limit in probability) for f ∈ W_H.
4.5 A Comparison of Itô and Stratonovich Integrals

We see that the mathematical interpretation of the white noise equation

dX/dt = b(t, X_t) + σ(t, X_t) · W_t

is that X_t satisfies an integral equation

X_t = X_0 + ∫_0^t b(s, X_s) ds + "∫_0^t σ(s, X_s) dB_s",

for a suitable interpretation of the last integral. However, as indicated earlier, the Itô interpretation of an integral of the form

"∫_0^t f(s, ω) dB_s(ω)"

is only one of several reasonable choices. The Stratonovich integral is another possibility, leading
to a different result. We see that the Stratonovich interpretation may in some situations be the most appropriate: choose t-continuously differentiable processes B_t^{(n)} such that for a.a. ω,

B_t^{(n)}(ω) → B_t(ω) as n → ∞,

uniformly (in t) on bounded intervals. For each ω, let X_t^{(n)}(ω) be the solution of the corresponding (deterministic) differential equation. Then X_t^{(n)}(ω) converges to some function X_t(ω) in the same sense: for a.a. ω, X_t^{(n)}(ω) → X_t(ω) as n → ∞, uniformly (in t) on bounded intervals. We see that this solution coincides with the Stratonovich solution, which implies that X_t is the solution of the following modified Itô equation:

X_t = X_0 + ∫_0^t b(s, X_s) ds + (1/2) ∫_0^t σ′(s, X_s) σ(s, X_s) ds + ∫_0^t σ(s, X_s) dB_s,

where σ′ denotes the derivative of σ(t, x) with respect to x.
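The correction term can be seen in a small sketch: for f = B the Itô sums use the left endpoint while Stratonovich sums use the midpoint value, approximated below by the endpoint average (a trapezoidal variant with the same limit, an assumption of this illustration). For σ(x) = x the two integrals of B against dB should differ by ½[B, B]_T = T/2, matching the ½∫σ′σ ds term above.

```python
import numpy as np

rng = np.random.default_rng(6)
T, n_grid = 1.0, 100000
dt = T / n_grid

dB = rng.normal(0.0, np.sqrt(dt), n_grid)
B = np.concatenate([[0.0], np.cumsum(dB)])

ito_sum = np.sum(B[:-1] * dB)                    # t_j* = t_j (left endpoint)
strat_sum = np.sum(0.5 * (B[:-1] + B[1:]) * dB)  # endpoint average ~ midpoint
correction = strat_sum - ito_sum                 # ~ T/2, the extra drift term
```

Note that the trapezoidal sums telescope exactly to ½B_T², i.e. the Stratonovich integral obeys the ordinary chain rule, while the Itô sums pick up the −½T correction.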
Therefore, from this point of view, it seems reasonable to use the Stratonovich interpretation of the stochastic differential equation. On the other hand, the specific feature of the Itô model of "not looking into the future" seems to be the reason for its importance in applications.
Because of the explicit connection between the two models (and the similar connection in higher dimensions), it will for many purposes suffice to do the general mathematical treatment for one of the two types of integrals. In general, we see that the Stratonovich integral has the advantage of leading to ordinary chain-rule formulas under a transformation (change of variable), i.e. there are no second-order terms in the Stratonovich analogue of the Itô transformation formula.
This property makes the Stratonovich integral natural to use, for example, in connection with stochastic differential equations on manifolds. However, Stratonovich integrals are not martingales, as we have seen Itô integrals are. This gives the Itô integral an important computational advantage, even though it does not obey the ordinary chain rule.
4.6 Itô Formula
Definition Let B_t be 1-dimensional Brownian motion on (Ω, F, P). A (1-dimensional) Itô process is a stochastic process X_t of the form

X_t = X_0 + ∫_0^t u(s, ω) ds + ∫_0^t v(s, ω) dB_s,

where v ∈ W_H, so that

P[∫_0^t v(s, ω)² ds < ∞ for all t ≥ 0] = 1,

and u is H_t-adapted with P[∫_0^t |u(s, ω)| ds < ∞ for all t ≥ 0] = 1.
Theorem (Itô formula) Let X_t be an Itô process and let g(t, x) ∈ C²([0, ∞) × R) (i.e. g is twice continuously differentiable on [0, ∞) × R). Then

Y_t = g(t, X_t)

is again an Itô process, and

dY_t = (∂g/∂t)(t, X_t) dt + (∂g/∂x)(t, X_t) dX_t + ½ (∂²g/∂x²)(t, X_t) · (dX_t)²,

where (dX_t)² is computed according to the rules dt · dt = dt · dB_t = dB_t · dt = 0, dB_t · dB_t = dt.
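The formula can be checked on one simulated path. As a hedged sketch, take X_t = B_t and the assumed illustrative choice g(t, x) = e^x, so the formula reads e^{B_t} = 1 + ∫_0^t e^{B_s} dB_s + ½∫_0^t e^{B_s} ds; the grid and seed are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(7)
T, n_grid = 1.0, 200000
dt = T / n_grid

dB = rng.normal(0.0, np.sqrt(dt), n_grid)
B = np.concatenate([[0.0], np.cumsum(dB)])

# Discretise the two integrals in Ito's formula (left endpoints for dB).
stoch_part = np.sum(np.exp(B[:-1]) * dB)
drift_part = 0.5 * np.sum(np.exp(B[:-1])) * dt
lhs = np.exp(B[-1]) - 1.0

residual = abs(lhs - (stoch_part + drift_part))   # small on a fine grid
```

Without the ½(∂²g/∂x²) drift term the two sides would differ by roughly ½∫e^{B} ds, so the residual being small is a direct check of the second-order correction.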
Theorem Suppose f(s, ω) is continuous and of bounded variation with respect to s ∈ [0, t] for a.a. ω. Then

∫_0^t f(s) dB_s = f(t) B_t − ∫_0^t B_s df_s.
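This integration-by-parts formula is easy to sketch numerically for a deterministic, bounded-variation integrand; we assume the illustrative choice f(s) = s, for which df_s = ds.

```python
import numpy as np

rng = np.random.default_rng(8)
t, n_grid = 1.0, 100000
dt = t / n_grid

dB = rng.normal(0.0, np.sqrt(dt), n_grid)
B = np.concatenate([[0.0], np.cumsum(dB)])
s = np.linspace(0.0, t, n_grid + 1)

lhs = np.sum(s[:-1] * dB)               # int_0^t f(s) dB_s, left-endpoint sums
rhs = t * B[-1] - np.sum(B[:-1]) * dt   # f(t) B_t - int_0^t B_s df_s
gap = abs(lhs - rhs)
```

For smooth f there is no Itô correction term, so the two sides agree up to discretisation error, unlike the ∫B dB example where the −½t term appears.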
Let B(t, ω) = (B_1(t, ω), ..., B_m(t, ω)) denote m-dimensional Brownian motion. If each of the processes u_i(t, ω) and v_ij(t, ω) satisfies the conditions above, then we can form the following n Itô processes:

dX_i(t) = u_i dt + v_i1 dB_1(t) + ··· + v_im dB_m(t), 1 ≤ i ≤ n,
i.e., in matrix notation,

dX(t) = u dt + v dB(t),

where

X(t) = [X_1(t), ..., X_n(t)]^T, u = [u_1, ..., u_n]^T, v = [v_ij] (an n × m matrix), dB(t) = [dB_1(t), ..., dB_m(t)]^T.
Theorem (multi-dimensional Itô formula) Let

dX(t) = u dt + v dB(t)

be an n-dimensional Itô process. Let g(t, x) = (g_1(t, x), ..., g_p(t, x)) be a C² map from [0, ∞) × R^n into R^p. Then Y(t) = g(t, X(t)) is again an Itô process, whose kth component is given by

dY_k = (∂g_k/∂t)(t, X) dt + Σ_i (∂g_k/∂x_i)(t, X) dX_i + ½ Σ_{i,j} (∂²g_k/∂x_i∂x_j)(t, X) dX_i dX_j.
4.7 Stochastic Differential Equations

Theorem (existence and uniqueness) Let T > 0 and let b(·, ·) : [0, T] × R^n → R^n, σ(·, ·) : [0, T] × R^n → R^{n×m} be measurable functions satisfying

|b(t, x)| + |σ(t, x)| ≤ C(1 + |x|); x ∈ R^n, t ∈ [0, T]

for some constant C, and

|b(t, x) − b(t, y)| + |σ(t, x) − σ(t, y)| ≤ D|x − y|; x, y ∈ R^n, t ∈ [0, T]

for some constant D. Let Z be a random variable which is independent of the σ-algebra F_∞^{(m)} generated by B_s(·), s ≥ 0, with

E[|Z|²] < ∞.

Then the stochastic differential equation

dX_t = b(t, X_t) dt + σ(t, X_t) dB_t, 0 ≤ t ≤ T, X_0 = Z,

has a unique t-continuous solution X_t(ω), adapted to the filtration generated by Z and B_s, s ≤ t, with E[∫_0^T |X_t|² dt] < ∞.
If we are instead given the functions b(t, x) and σ(t, x) and ask for a pair of processes ((X̃_t, B̃_t), H_t) on a probability space such that the equation holds, then the solution X̃_t (or, more precisely, (X̃_t, B̃_t)) is called a weak solution. Here H_t is an increasing family of σ-algebras such that X̃_t is H_t-adapted and B̃_t is a martingale w.r.t. H_t.
A strong solution is also a weak solution but the converse is not true in general.
Strong or pathwise uniqueness means that if X_1(t, ω) and X_2(t, ω) are two continuous processes solving the equation with the same initial value, then X_1(t, ω) = X_2(t, ω) for all t, almost surely. Weak uniqueness means that any two solutions are identical in law, i.e. have the same finite-dimensional distributions.
Lemma If b and σ satisfy the conditions of the existence and uniqueness theorem for stochastic differential equations, then strong (pathwise) uniqueness holds.
The weak solution concept is natural, as it does not specify beforehand the explicit representation of the white noise. Moreover, there exist stochastic differential equations which have no strong solution but do have a weak solution; a famous example is the Tanaka equation

dX_t = sign(X_t) dB_t, X_0 = 0,

where

sign(x) = +1 if x ≥ 0, sign(x) = −1 if x < 0.
Note that σ(t, x) = σ(x) =sign(x) does not satisfy the Lipschitz condition, thus we are not
guaranteed existence and uniqueness of a solution. We later see that the equation has no strong
solution.
Solution To see this, let B̂_t be a Brownian motion generating the filtration F̂_t and define

Y_t = ∫_0^t sign(B̂_s) dB̂_s.

By the Tanaka formula we know that

Y_t = |B̂_t| − L̂_t(ω),

where L̂_t(ω) is the local time for B̂_t(ω) at 0. It follows that Y_t is measurable w.r.t. the σ-algebra G_t generated by |B̂_s(·)|, s ≤ t, which is clearly strictly contained in F̂_t.
Thus the σ-algebra N_t generated by Y_s(·), s ≤ t, is also strictly contained in F̂_t. Now suppose that X_t is a strong solution. Then we see that X_t is a Brownian motion w.r.t. the measure P. Let M_t be the σ-algebra generated by X_s(·), s ≤ t. Since (sign(x))² = 1, we can write the original equation as

dB_t = sign(X_t) dX_t,

and by the argument above, B_t is then measurable w.r.t. the σ-algebra generated by |X_s(·)|, s ≤ t, which is strictly contained in M_t. But a strong solution X_t must be adapted to the filtration generated by B_s, s ≤ t, so that M_t is contained in the σ-algebra generated by B; this is a contradiction, and thus we see that the equation has no strong solution.
To find a weak solution of the equation, we choose X_t to be any Brownian motion B̂_t. Then we define B̃_t by

B̃_t = ∫_0^t sign(B̂_s) dB̂_s = ∫_0^t sign(X_s) dX_s,

i.e.

dB̃_t = sign(X_t) dX_t.

Then, since (sign(x))² = 1,

dX_t = sign(X_t) dB̃_t,

so X_t is a weak solution.
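The weak-solution construction above can be sketched by simulation: take X to be a Brownian motion, form B̃_t = ∫_0^t sign(X_s) dX_s on a grid, and check that B̃ has the statistics of a Brownian motion (mean ≈ 0, Var(B̃_T) ≈ T). The grid and sample sizes are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(9)
T, n_grid, n_paths = 1.0, 250, 10000
dt = T / n_grid

dX = rng.normal(0.0, np.sqrt(dt), (n_paths, n_grid))
X = np.cumsum(dX, axis=1)
X_left = np.hstack([np.zeros((n_paths, 1)), X[:, :-1]])

sign = np.where(X_left >= 0, 1.0, -1.0)    # sign(x) = +1 if x >= 0, else -1
B_tilde_T = np.sum(sign * dX, axis=1)      # B-tilde_T = int_0^T sign(X) dX

mean_BT = B_tilde_T.mean()
var_BT = B_tilde_T.var()
```

Since sign(x)² = 1, each path of B̃ accumulates variance Σ dt = T exactly, consistent with B̃ being a Brownian motion and with dX = sign(X) dB̃ holding in the weak sense.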
We see that, as any weak solution X_t must be a Brownian motion w.r.t. P, weak uniqueness holds for this equation.
References
Applebaum, David (2004). Lévy Processes and Stochastic Calculus. Cambridge University Press.
Billingsley, Patrick (1986). Probability and Measure. Wiley Series in Probability and Mathemat-
ical Sciences.
Mörters, Peter and Peres, Yuval (2003). Brownian Motion. Cambridge Series in Statistical and
Probabilistic Mathematics.
Klenke, Achim (2008). Probability Theory: A Comprehensive Course. Springer, New York.
Taylor, Howard M. and Karlin, Samuel (1984). An Introduction To Stochastic Modelling. Aca-
demic Press.