
Stochastic Processes

Manan Rawat

Prof. Suprio Bhar Indian Institute Of Technology, Kanpur

Summer Project (2023) Mathematics

School Of Mathematics
Contents

1 Brownian Motion
  1.1 Basic Definitions
  1.2 Paul Lévy's Construction of Brownian Motion
    1.2.1 Wiener's Theorem
  1.3 Properties of Brownian Motion
    1.3.1 Continuity Property of Brownian Motion
    1.3.2 Non-Differentiability of Brownian Motion
  1.4 Cameron-Martin Theorem
    1.4.1 Brownian Motion with Drift
  1.5 Markov Property and Filtration of Brownian Motion
    1.5.1 Results on Filtration
  1.6 The Strong Markov Property and the Reflection Principle
    1.6.1 Results on Stopping Times
    1.6.2 Strong Markov Property
    1.6.3 The Reflection Principle
  1.7 Markov Processes Derived from Brownian Motion
  1.8 Martingale Property of Brownian Motion
    1.8.1 Martingale with Respect to a Filtration
    1.8.2 Optional Stopping Theorem and Doob's Maximal Inequality

2 Poisson Process
  2.1 Basic Definitions
  2.2 Random Measures
  2.3 Introduction to Poisson Process
  2.4 The Law of Rare Events
    2.4.1 Postulates of the Poisson Process
  2.5 Distributions Associated with the Poisson Process
  2.6 Poisson Point Process
  2.7 Multi-dimensional Poisson
    2.7.1 Compound Process
    2.7.2 Marked Poisson Process

3 Lévy Processes
  3.1 Basic Definitions
  3.2 Càdlàg Processes
  3.3 Introduction to Lévy Processes
  3.4 Jumps of a Lévy Process
  3.5 Poisson Integration
  3.6 Examples of Lévy Processes
    3.6.1 Brownian Motion and Gaussian Process
    3.6.2 Poisson Process
    3.6.3 Compound Process
    3.6.4 Interlacing Processes
    3.6.5 Stable Lévy Process
  3.7 Lévy-Itô Decomposition

4 Itô Integral
  4.1 Introduction to Itô Integrals
  4.2 Construction of the Itô Integral
  4.3 Properties of the Itô Integral
  4.4 Extension of the Itô Integral
  4.5 A Comparison of Itô and Stratonovich Integrals
  4.6 Itô Formula
  4.7 Stochastic Differential Equations
    4.7.1 Weak and Strong Solutions

References
Abstract

Through this summer project, we aim to study stochastic processes. We first study two examples of Lévy processes, Brownian motion and the Poisson process, and then generalize the results in a study of Lévy processes. We conclude the project by studying the Itô integral and briefly familiarizing ourselves with stochastic differential equations.

We study Brownian motion mainly from Mörters and Peres (2003), along with Klenke (2008): its construction, its continuity and differentiability properties, the Cameron-Martin theorem, the martingale and Markov properties, and filtrations of Brownian motion.

We then study the Poisson process from Billingsley (1986), Taylor and Karlin (1984) and Klenke (2008): random measures, the equivalent conditions for the Poisson process, the law of rare events, the Poisson point process, and finally multi-dimensional Poisson processes.

We study Lévy processes from Applebaum (2004): the jumps of a Lévy process, Poisson integration and examples of Lévy processes, and we introduce ourselves to Itô integrals via the Lévy-Itô decomposition.

Finally, we study Itô integrals, their properties and their extensions. We study the Itô formula and get introduced to stochastic differential equations, following Øksendal (2000).
Chapter 1

Brownian Motion

1.1 Basic Definitions

Before we study Brownian motion, we must familiarize ourselves with some basic definitions and topics on the subject.

σ-Algebra - Let X be a set and S be a collection of subsets of X. We call S a σ-algebra if the following conditions hold:

1. X ∈ S;

2. for any T ∈ S, X\T ∈ S;

3. for any T_1, T_2, ... ∈ S, the union ∪_i T_i ∈ S.

We call a nonempty set X equipped with a σ-algebra S of subsets of X a measurable space. We write it as a pair (X, S).

A function f : X → R is said to be measurable if, for every a ∈ R, the set {x ∈ X | f(x) < a} is measurable.

A random variable X is a measurable function X : Ω →E from a sample space Ω as a

set of possible outcomes to a measurable space E.

If the probability of occurrence of an event A is not affected by the occurrence of another

event B, then A and B are said to be independent events.


A probability measure P on the sample space (S, S') is a real-valued function defined on the collection of events S' that satisfies the following:

• P(A) ≥ 0 for every event A;

• P(S) = 1;

• if {A_i : i ∈ I} is a countable, pairwise disjoint collection of events, then P(∪_{i∈I} A_i) = Σ_{i∈I} P(A_i).

A probability space is a triple (Ω, F, P), where Ω is a sample space, F is a σ-algebra of events and P is a probability measure on F.

A random vector X = (X1 ,X2 ,X3 .....Xd )T with values in Rd has the d-dimensional standard

Gaussian distribution if its d coordinates are standard normally distributed and independent.

Let X1 ,X2 ,X3 ....... be a sequence of random variables on a probability space (Ω,F, P) and

consider a set A of sequences such that

{(X_1, X_2, ...) ∈ A} ∈ F.

The event {(X_1, X_2, ...) ∈ A} is exchangeable if

{(X_1, X_2, ...) ∈ A} ⊂ {(X_{σ(1)}, X_{σ(2)}, ...) ∈ A}

for all finite permutations σ : N → N. Here a finite permutation means that σ is a bijection with σ(n) = n for all sufficiently large n.

We say that a process {X(t) : t ≥ 0} has a property (S) almost surely if there exists A ∈ F such that P(A) = 1 and A ⊂ {ω ∈ Ω : t → X(t, ω) has the property (S)}.

A right-continuous function f : [0, t] → R is a function of bounded variation if

V_f^{(1)}(t) = sup Σ_{j=1}^{k} |f(t_j) − f(t_{j−1})| < ∞,

where the supremum is over all k ∈ N and all partitions 0 = t_0 ≤ t_1 ≤ t_2 ≤ ... ≤ t_{k−1} ≤ t_k = t. If the supremum is infinite, f is said to be of unbounded variation.


Let (X, A) be a measurable space. For two non-zero measures µ and ν on this space, we say that µ ⊥ ν, i.e. µ and ν are singular, if there exists a Borel set A with µ(A) = 0 and ν(A^c) = 0.

If we define S = {A ∈ A | µ(A) = 0} and T = {A ∈ A | ν(A) = 0} as the null sets of the measures µ and ν respectively, then µ is said to be absolutely continuous with respect to ν if and only if T ⊆ S. This is denoted by µ ≪ ν.

We say that µ and ν are equivalent if they are mutually absolutely continuous, i.e. if µ ≪ ν and ν ≪ µ. This is denoted by µ ∼ ν.

A function f : [0, ∞) → R is said to be locally α-Hölder continuous at x ≥ 0 if there exist ε > 0 and c > 0 such that

|f(x) − f(y)| ≤ c|x − y|^α for all y ≥ 0 with |y − x| < ε.

We refer to α > 0 as the Hölder exponent and to c > 0 as the Hölder constant.

A filtration on a probability space (Ω, F, P) is a family (F(t) : t ≥ 0) of σ-algebras such that F(s) ⊂ F(t) ⊂ F for all s < t.

A probability space together with the filtration is called a filtered probability space.

A stochastic process {X(t) : t ≥ 0 } defined on a filtered probability space with filtration (F(t):t

≥0) is called adapted if X(t) is F(t)-measurable for any t ≥ 0.

A random variable T with values in [0, ∞], defined on a probability space with filtration (F(t) : t ≥ 0), is called a stopping time with respect to (F(t) : t ≥ 0) if {T ≤ t} ∈ F(t) for every t ≥ 0.

1.2 Paul Lévy's Construction of Brownian Motion

A real-valued stochastic process { B(t) : t ≥ 0 } is called a (linear) Brownian motion with

start in x ∈ R if the following holds:

• B(0) = x.

• the process has independent increments, i.e. for all times 0 ≤ t1 ≤ t2 .... ≤ tn , the

increments B(tn ) − B(tn−1 ), B(tn−1 ) − B(tn−2 ), ...B(t2 ) − B(t1 ) are independent random


variables.

• for all t ≥ 0 and h ≥ 0, the increments B(t+h)-B(t) are normally distributed with

expectation zero and variance h.

• almost surely, the function t → B(t) is continuous.

We say that {B(t) : t ≥ 0 } is a standard Brownian motion if x = 0.

X is a Brownian motion if and only if X = (Xt )t∈[0,∞) is a continuous centred Gaussian process

with Cov[Xs ,Xt ]= s ∧ t for all s,t ≥ 0.

1.2.1 Wiener’s Theorem

Theorem 1.1 Wiener 1923 - Standard Brownian Motion Exists.

We construct Brownian motion on the interval [0,1] as a random element of the space C[0,1] of continuous functions on [0,1].

Consider the finite sets of dyadic points D_n = {k/2^n : 0 ≤ k ≤ 2^n}, and let D be their union, D = ∪_{n=0}^∞ D_n. Let (Ω, A, P) be a probability space on which a collection {Z_t : t ∈ D} of independent, standard normally distributed random variables can be defined. Let B(0) := 0 and B(1) := Z_1. For each n ∈ N we define the random variables B(d), d ∈ D_n, such that

1) for all r < s < t in D_n, the random variable B(t) − B(s) is normally distributed with mean zero and variance t − s, and is independent of B(s) − B(r);

2) the vectors (B(d) : d ∈ D_n) and (Z_t : t ∈ D \ D_n) are independent.

We define B(d) for d ∈ D_n \ D_{n−1} by

B(d) = (B(d − 2^{−n}) + B(d + 2^{−n}))/2 + Z_d / 2^{(n+1)/2}.

Now define F0 (t) as

• Z1 for t=1

• 0 for t=0

• linear in between

Now define F_n(t) as

• 2^{−(n+1)/2} Z_t for t ∈ D_n \ D_{n−1}


• 0 for t ∈ Dn−1

• linear between consecutive points in Dn

These functions are continuous on [0,1], and for all n and d ∈ D_n we define the Brownian motion B(d) as

B(d) = Σ_{i=0}^∞ F_i(d) = Σ_{i=0}^n F_i(d).

We have thus constructed a continuous function B : [0,1] → R with the same finite-dimensional distributions as Brownian motion. Take a sequence B_0, B_1, ... of independent C[0,1]-valued random variables with the distribution of this process and define {B(t) : t ≥ 0} by gluing the pieces together: if ⌊t⌋ = l, then

B(t) = B_l(t − l) + Σ_{i=0}^{l−1} B_i(1).

This defines a continuous random function B : [0, ∞) → R and completes Paul Lévy's construction of Brownian motion.
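The dyadic refinement above translates directly into a short simulation. The following is a minimal Python sketch of Lévy's construction (ours, with NumPy assumed; the helper name levy_brownian is ours), building B on the dyadic points by successive midpoint refinement:

import numpy as np

def levy_brownian(n_levels, rng):
    """Levy's construction of Brownian motion on [0,1] by dyadic midpoint refinement."""
    B = np.array([0.0, rng.standard_normal()])     # level 0: B(0) = 0, B(1) = Z_1
    for n in range(1, n_levels + 1):
        mid = (B[:-1] + B[1:]) / 2.0               # interpolate at the new dyadic midpoints
        mid += rng.standard_normal(mid.size) / 2.0 ** ((n + 1) / 2.0)  # add Z_d / 2^{(n+1)/2}
        out = np.empty(B.size + mid.size)
        out[0::2], out[1::2] = B, mid              # interleave old points and new midpoints
        B = out
    return np.linspace(0.0, 1.0, B.size), B

t, B = levy_brownian(12, np.random.default_rng(0))
# Sanity check: increments over steps of length h have variance h, so the
# sum of squared increments over [0,1] should be close to 1.
print(np.sum(np.diff(B) ** 2))

Each refinement level adds exactly the independent Gaussian correction from the displayed formula, so the interpolated path has the covariance of Brownian motion on the dyadic points.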

1.3 Properties of Brownian Motion

Scaling Invariance Property of Brownian motion

Suppose {B(t) : t ≥ 0} is a standard Brownian motion and let a > 0. Then the process {X(t) : t ≥ 0} defined by X(t) = B(a^2 t)/a is also a standard Brownian motion.

Time Inversion Property of Brownian motion

Suppose {B(t) : t ≥ 0} is a standard Brownian motion. Then the process {X(t) : t ≥ 0} defined by

• X(t) = 0 for t = 0,

• X(t) = t B(1/t) for t > 0,

is also a standard Brownian motion.

Law of Large Numbers

Almost surely,

lim_{t→∞} B(t)/t = 0.
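Both invariance properties lend themselves to quick numerical checks. A small sketch (ours, NumPy assumed) probing the scaling property through the variance of X(t) = B(a^2 t)/a:

import numpy as np

rng = np.random.default_rng(1)
a, t, n_paths = 3.0, 0.7, 100000

# B(a^2 t) ~ N(0, a^2 t), simulated directly; X(t) = B(a^2 t)/a should have variance t.
B_a2t = np.sqrt(a**2 * t) * rng.standard_normal(n_paths)
X_t = B_a2t / a
print(X_t.var(), "should be close to t =", t)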


1.3.1 Continuity Property Of Brownian Motion

1. As in the definition of Brownian Motion, the function is continuous almost surely. In any

compact interval the sample functions are uniformly continuous, i.e. there exist some (random

function) ϕ with limh↓0 ϕ(h) = 0 called a modulus of continuous of the function B : [0, 1] → R,

such that

|B(t+h)−B(t)|
limsuph↓0 sup0≤t≤1−h ϕ(h) ≤1

2. There exists a constant C > 0 such that, almost surely, for every sufficiently small h > 0 and all 0 ≤ t ≤ 1 − h,

|B(t+h) − B(t)| ≤ C √(h log(1/h)).

3. For every constant c < √2, almost surely, for every ε > 0 there exist 0 < h < ε and t ∈ [0, 1−h] with

|B(t+h) − B(t)| ≥ c √(h log(1/h)).

4. Lévy's modulus of continuity (1937): almost surely,

limsup_{h↓0} sup_{0≤t≤1−h} |B(t+h) − B(t)| / √(2h log(1/h)) = 1.

5. For any fixed m and c > √2, almost surely there exists n_0 ∈ N such that, for any n ≥ n_0,

|B(t) − B(s)| ≤ c √((t−s) log(1/(t−s))) for all [s,t] ∈ Λ_m(n), where

Λ_m(n) = { [ (k−1+b)/2^{n−a}, (k+b)/2^{n−a} ] : k ∈ {0, 1, ..., 2^n}, a, b ∈ {0, 1/m, 2/m, ..., (m−1)/m} }.

6. Given ε > 0, there exists m ∈ N such that for every interval [s,t] ⊂ [0,1] there exists an interval [s',t'] ∈ Λ(m) with |t − t'| < ε(t−s) and |s − s'| < ε(t−s), where Λ(m) = ∪_n Λ_m(n).

7. α-Hölder property: for every exponent α < 1/2, almost surely, Brownian motion is everywhere locally α-Hölder continuous, i.e. for every x there exist ε, c > 0 such that |B(y) − B(x)| ≤ c|y − x|^α whenever |y − x| < ε.

1.3.2 Non Differentiability of Brownian Motion

1. Almost surely, for all 0 < a < b < ∞, Brownian motion is not monotone on the interval

[a,b].

Hewitt–Savage 0-1 law If E is an exchangeable event, then P(E) is 0 or 1.

Almost surely,

limsup_{n→∞} B(n)/√n = +∞, and

liminf_{n→∞} B(n)/√n = −∞.

2. We define the upper and lower right derivatives D*f(t) and D_*f(t) as

D*f(t) = limsup_{h↓0} (f(t+h) − f(t))/h,

D_*f(t) = liminf_{h↓0} (f(t+h) − f(t))/h.

Fix t ≥ 0. Then, almost surely, Brownian motion is not differentiable at t. Moreover, D*B(t) = +∞ and D_*B(t) = −∞.

3. Almost surely, Brownian motion is nowhere differentiable. Furthermore, almost surely, for all t,

either D*B(t) = +∞ or D_*B(t) = −∞, or both.

4. Suppose that the sequence of partitions

0 = t_0^{(n)} ≤ t_1^{(n)} ≤ t_2^{(n)} ≤ ... ≤ t_{k(n)−1}^{(n)} ≤ t_{k(n)}^{(n)} = t

is nested, i.e. at each step one or more partition points are added, and the mesh

∆(n) := sup_{1≤j≤k(n)} ( t_j^{(n)} − t_{j−1}^{(n)} )

converges to zero. Then, almost surely,

lim_{n→∞} Σ_{j=1}^{k(n)} (B(t_j^{(n)}) − B(t_{j−1}^{(n)}))^2 = t,

and therefore Brownian motion is of unbounded variation. For a sequence of partitions as above, we call

lim_{n→∞} Σ_{j=1}^{k(n)} (B(t_j^{(n)}) − B(t_{j−1}^{(n)}))^2

the quadratic variation of Brownian motion. Thus Brownian motion has finite quadratic variation.
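The dichotomy between quadratic and total variation is easy to see numerically: on a refined partition of [0, t], the sum of squared increments settles near t while the sum of absolute increments diverges. A minimal sketch (ours, NumPy assumed):

import numpy as np

rng = np.random.default_rng(2)
t = 1.5
for n in (2**8, 2**12, 2**16):                     # number of partition intervals
    dB = np.sqrt(t / n) * rng.standard_normal(n)   # Brownian increments on a uniform grid
    print(n, "quadratic:", round(np.sum(dB**2), 4),         # converges to t = 1.5
          "total:", round(np.sum(np.abs(dB)), 1))           # grows like sqrt(2nt/pi)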

1.4 Cameron-Martin theorem

1.4.1 Brownian Motion with Drift

The process {Bµ (t), t≥ 0} is called Brownian Motion with drift µ if it satisfies the following

conditions


• Bµ (0) = 0. a.s.

• The process has independent and stationary increments.

• Bµ (t) ∼ N (µt,t).

where µ ∈ R. Thus we can write Brownian Motion with a drift µ in the form

Bµ (t) = µt + B(t)

where B(t) is a standard Brownian motion. We ask: for which time-dependent drift functions F does the process {B(t) + F(t) : t ≥ 0} behave like a Brownian motion path?

Law of standard Brownian motion Let us denote the law of standard Brownian motion {B(t) : t ∈ [0,1]} by L_0. For a function F : [0,1] → R, write L_F for the law of {B(t) + F(t) : t ∈ [0,1]}.

The Dirichlet space We denote by D[0,1] the Dirichlet space

D[0,1] = {F ∈ C[0,1] : there exists f ∈ L^2[0,1] such that F(t) = ∫_0^t f(s) ds for all t ∈ [0,1]}.

For a function F ∈ D[0,1] the associated f is uniquely determined as an element of L^2[0,1] and is denoted by F', the derivative of F.

The Cameron-Martin Theorem

Let F ∈ C[0,1] satisfy F(0) = 0. Then:

1. if F ∉ D[0,1], then L_F ⊥ L_0;

2. if F ∈ D[0,1], then L_F and L_0 are equivalent.

As a result of the theorem we see that any almost sure property of Brownian motion B also holds almost surely for B + F when F ∈ D[0,1]. Conversely, if F ∉ D[0,1], some almost sure property of Brownian motion fails for B + F.

1.5 Markov Property and Filtration of Brownian Motion

Markov Property

Suppose that {B(t) : t ≥ 0} is a Brownian motion started in x ∈ R^d. Let s > 0. Then the process {B(t+s) − B(s) : t ≥ 0} is again a Brownian motion started in the origin, and it is independent of the process {B(t) : 0 ≤ t ≤ s}.

If B1 ,B2 ...Bd are independent linear Brownian motions started in x1 ,x2 ..xd then the stochas-

tic process { B(t):t≥0 } given by B(t) = (B1 (t),B2 (t)...Bd (t))T is called a d-dimensional


Brownian Motion started in (x1 ,x2 ..xd )T .

One-dimensional Brownian motion is also called linear, two dimensional Brownian motion is

called planar Brownian motion.

We define a filtration (F^0(t) : t ≥ 0) for a Brownian motion {B(t) : t ≥ 0} by

F^0(t) = σ(B(s) : 0 ≤ s ≤ t),

the σ-algebra generated by the random variables B(s) for 0 ≤ s ≤ t. Observe that this σ-algebra contains all information available from observing the process up to time t. Let us define

F^+(s) = ∩_{t>s} F^0(t).

Intuitively, (F^+(t) : t ≥ 0) is a little larger than (F^0(t) : t ≥ 0).

Theorem For every s ≥ 0 the process {B(t+s) − B(s) : t ≥ 0} is independent of the σ-algebra F^+(s).

1.5.1 Results on Filtration

1. Blumenthal’s 0-1 law Let x ∈ Rd and A∈ F + (0). Then Px (A)∈ {0,1}.

2. Theorem Suppose that { B(t) : t≥ 0 } is a linear Brownian motion. Define τ = inf { t >

0 :B(t) > 0} and σ=inf { t > 0: B(t) = 0}. Then

P0 {τ = 0} = P0 {σ = 0} = 1

3. Zero-one law for tail events Define the tail σ-algebra of a Brownian motion as follows: let G(t) = σ(B(s) : s ≥ t) and let T = ∩_{t≥0} G(t) be the σ-algebra of all tail events.

Let x ∈ R^d and suppose that A ∈ T is a tail event. Then P_x(A) ∈ {0,1}.

4. Theorem For a linear Brownian motion {B(t) : 0 ≤ t ≤ 1}, almost surely,

• every local maximum is a strict local maximum ;

• the set of times where the local maxima are attained is countable and dense ;

• the global maximum is attained at a unique time.


1.6 The strong Markov property and the Reflection principle

The Markov property states that Brownian motion starts anew at each deterministic time instance. A crucial property of Brownian motion is that this also holds for an important class of random times, called stopping times. A random time T is a stopping time if we can decide whether {T ≤ t} has occurred by just knowing the path of the stochastic process up to time t.

1.6.1 Results on Stopping time

• Every deterministic time t ≥ 0 is a stopping time with respect to every filtration (F(t) :

t ≥ 0).

• If (T_n : n = 1, 2, ...) is an increasing sequence of stopping times with respect to (F(t) : t ≥ 0) and T_n ↑ T, then T is also a stopping time with respect to (F(t) : t ≥ 0). This is so because

{T ≤ t} = ∩_{n=1}^∞ {T_n ≤ t} ∈ F(t).

• Let T be a stopping time with respect to (F(t) : t ≥ 0). Define times T_n by

T_n = (m+1) 2^{−n} if m 2^{−n} ≤ T < (m+1) 2^{−n}.

Then each T_n is a stopping time with respect to (F(t) : t ≥ 0).

• Every stopping time T with respect to (F 0 (t) : t ≥ 0) is also a stopping time with respect

to (F + (t) : t ≥ 0) as F 0 (t) ⊂ F + (t) for every t ≥ 0.

• Suppose H is a closed set, for example a singleton. Then the first hitting time T = inf{t ≥ 0 : B(t) ∈ H} of the set H is a stopping time with respect to (F^0(t) : t ≥ 0). Indeed, we note that

{T ≤ t} = ∩_{n=1}^∞ ∪_{s ∈ Q ∩ (0,t)} ∪_{x ∈ Q^d ∩ H} {B(s) ∈ B(x, 1/n)} ∈ F^0(t).

• Suppose G ∈ Rd is open, then

T = inf {t ≥ 0 : B(t) ∈ G }

is a stopping time with respect to the filtration (F + (t) : t ≥ 0), but not necessarily with

respect to (F 0 (t) : t ≥ 0).


Note that the property which distinguishes (F^+(t) : t ≥ 0) from (F^0(t) : t ≥ 0) is right-continuity, which means that

∩_{ε>0} F^+(t+ε) = F^+(t).

Suppose that a random variable T with values in [0,∞] satisfies {T < t} ∈ F(t) for every t ≥ 0, and (F(t) : t ≥ 0) is right-continuous; then T is a stopping time with respect to (F(t) : t ≥ 0).

1.6.2 Strong Markov Property

Theorem For every almost surely finite stopping time T, the process

{B(T + t) − B(T) : t ≥ 0}

is a standard Brownian motion independent of F^+(T).

Equivalently

For any bounded measurable f : C([0, ∞), R^d) → R and x ∈ R^d, we have, almost surely,

E_x[f({B(T + t) : t ≥ 0}) | F^+(T)] = E_{B(T)}[f({B̃(t) : t ≥ 0})],

where the expectation on the right is with respect to a Brownian motion {B̃(t) : t ≥ 0} started

in the fixed point B(T).

Denote by P_x the probability measure under which B = (B_t)_{t≥0} is a Brownian motion started at x ∈ R. Then the Brownian motion B with distributions (P_x)_{x∈R} has the strong Markov property.

1.6.3 The Reflection Principle

Theorem If T is a stopping time and {B(t) : t ≥ 0} is a standard Brownian motion, then the process {B*(t) : t ≥ 0}, called Brownian motion reflected at T and defined by

B*(t) = B(t) 1{t ≤ T} + (2B(T) − B(t)) 1{t > T},

is also a standard Brownian motion.

Theorem Let M(t) = max_{0≤s≤t} B(s). If a > 0, then

P_0{M(t) > a} = 2 P_0{B(t) > a} = P_0{|B(t)| > a}.

Lévy's arcsine law Let T > 0 and ζ_T = sup{t ≤ T : B_t = 0}. Then, for t ∈ [0,T],

P[ζ_T ≤ t] = (2/π) arcsin(√(t/T)).
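The maximum distribution above can be checked by simulating paths on a fine grid and comparing the empirical frequency of {M(t) > a} with 2 P_0{B(t) > a}. A sketch (ours, NumPy assumed):

import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(3)
t, a, n_paths, n_steps = 1.0, 1.0, 5000, 1000

dB = np.sqrt(t / n_steps) * rng.standard_normal((n_paths, n_steps))
M = np.cumsum(dB, axis=1).max(axis=1)               # running maximum M(t), up to discretization
emp = (M > a).mean()
exact = 2 * (1 - 0.5 * (1 + erf(a / sqrt(2 * t))))  # 2 P0{B(t) > a} = 2(1 - Phi(a/sqrt(t)))
print(emp, "vs", exact)

The empirical value sits slightly below the exact one because the discrete grid misses excursions between grid points.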


The area of planar Brownian Motion

To prove Lévy's theorem on the area of planar Brownian motion, we use the following lemma along with the Markov property. Denote the Lebesgue measure on R^d by L^d.

Lemma If A_1, A_2 ⊂ R^2 are Borel sets with positive area, then

L^2({x ∈ R^2 : L^2(A_1 ∩ (A_2 + x)) > 0}) > 0.

Lévy's theorem on the area of planar Brownian motion Almost surely, L^2(B[0,1]) = 0. Equivalently: the range of planar Brownian motion has zero area.

Result on the theorem For any points x, y ∈ R^d, d ≥ 2, we have P_x{y ∈ B(0,1]} = 0.

Zero set of Brownian motion Let {B(t) : t ≥ 0} be a one-dimensional Brownian motion and

Zeroes = {t ≥ 0 : B(t) = 0}

its zero set. Then, almost surely, Zeroes is a closed set with no isolated points.

1.7 Markov processes derived from Brownian Motion

Markov transition kernel A function p : [0,∞) × R^d × B → R, where B is the Borel σ-algebra in R^d, is a Markov transition kernel if

• p(·, ·, A) is measurable as a function of (t,x), for each A ∈ B;

• p(t, x, ·) is a Borel probability measure on R^d for all t ≥ 0 and x ∈ R^d; when integrating a function f with respect to this measure we write ∫ f(y) p(t, x, dy);

• for all A ∈ B, x ∈ R^d and t, s > 0,

p(t+s, x, A) = ∫_{R^d} p(t, y, A) p(s, x, dy).

An adapted process {X(t): t≥0 } is a (time-homogeneous) Markov Process with transition

kernel p with respect to a filtration (F(t) : t ≥ 0), if for all t≥s and Borel sets A ∈ B we have,

almost surely,

P{X(t) ∈ A|F (s)} = p(t − s, X(s), A).


Theorem (Lévy 1948) Let {M(t) : t ≥ 0} be the maximum process of a linear standard Brownian motion {B(t) : t ≥ 0}, i.e. the process defined by

M(t) = max_{0≤s≤t} B(s).

Then the process {Y(t) : t ≥ 0} defined by Y(t) = M(t) − B(t) is a reflected Brownian motion.

Theorem For any a ≥ 0 define the stopping times

T_a = inf{t ≥ 0 : B(t) = a}.

Then {T_a : a ≥ 0} is an increasing Markov process with transition kernel given by the densities

p(a, t, s) = a / √(2π (s−t)^3) · exp(−a^2 / (2(s−t))) 1{s > t}, for a > 0.

This process is called the stable subordinator of index 1/2.

Theorem Let {B(t) : t ≥ 0} be a planar Brownian motion and write B(t) = (B_1(t), B_2(t)). Define a family (V(a) : a ≥ 0) of vertical lines by

V(a) = {(x, y) ∈ R^2 : x = a},

and let T(a) = τ(V(a)) be the first hitting time of V(a). Then the process {X(a) : a ≥ 0} defined by X(a) = B_2(T(a)) is a Markov process with transition kernel

p(a, x, A) = (1/π) ∫_A a / (a^2 + (x−y)^2) dy.

This process is called the Cauchy process.

1.8 Martingale Property of Brownian Motion

1.8.1 Martingale with respect to a filtration

Martingale A real-valued stochastic process {X(t) : t ≥ 0} is a martingale with respect to a

filtration (F(t) : t ≥ 0) if it is adapted to the filtration, E|X(t)| < ∞ for all t ≥0 and, for any

pair of times 0 ≤ s ≤ t,

E[X(t)|F (s)] = X(s) almost surely.

Submartingale A real-valued stochastic process {X(t) : t ≥ 0} is a submartingale with

respect to a filtration (F(t) : t ≥ 0) if it is adapted to the filtration, E|X(t)| < ∞ for all t ≥0

and, for any pair of times 0 ≤ s ≤ t,

E[X(t)|F (s)] ≥ X(s) almost surely.


Supermartingale A real-valued stochastic process {X(t) : t ≥ 0} is a supermartingale with

respect to a filtration (F(t) : t ≥ 0) if it is adapted to the filtration, E|X(t)| < ∞ for all t ≥0

and, for any pair of times 0 ≤ s ≤ t,

E[X(t)|F (s)] ≤ X(s) almost surely.

A martingale is a process in which the current state X(t) is always the best prediction of its future states. In this sense, a martingale describes a fair game.

If {X(t) : t ≥ 0} is a martingale, the process {|X(t)| : t ≥ 0} need not be a martingale, but it is still a submartingale, as a result of the triangle inequality for conditional expectation.

1.8.2 Optional Stopping Theorem and Doob’s Maximal Inequality

Optional Stopping Theorem Suppose {X(t) : t ≥ 0} is a continuous martingale, and 0

≤ S ≤ T are stopping times. If the process {X(t ∧T) : t ≥ 0} is dominated by an integrable

random variable X, i.e. |X(t ∧ T )| ≤ X almost surely, for all t ≥ 0, then

E[X(T )|F (S)] = X(S), almost surely.

Doob's maximal inequality Suppose {X(t) : t ≥ 0} is a continuous martingale and p > 1. Then, for any t ≥ 0,

E[(sup_{0≤s≤t} |X(s)|)^p] ≤ (p/(p−1))^p E[|X(t)|^p].

Wald’s Lemma for Brownian Motion Let {B(t) : t ≥ 0} be a standard linear Brownian

motion, and T be a stopping time such that either

• E[T] < ∞, or

• {B(t ∧ T) : t ≥ 0} is dominated by an integrable random variable.

Then we have E[B(T)] = 0.

Let S ≤ T be stopping times and E[ T ] < ∞. Then

E[(B(T ))2 ] = E[(B(S))2 ] + E[(B(T ) − B(S))2 ].

Suppose {B(t) : t ≥ 0} is a linear Brownian motion. Then the process

{B(t)2 − t : t ≥ 0} is a martingale.

Wald’s Second lemma Let T be a stopping time for standard Brownian motion such that

E[T] < ∞. Then


E[B(T )2 ] = E[T ].

Theorem Let a < 0 < b and, for a standard linear Brownian motion {B(t) : t ≥ 0}, define T = min{t ≥ 0 : B(t) ∈ {a, b}}. Then (see the simulation sketch below):

• P{B(T) = a} = b/(|a| + b) and P{B(T) = b} = |a|/(|a| + b);

• E[T] = |a| b.
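A minimal Monte Carlo sketch (ours, NumPy assumed) of the exit problem, discretizing the Brownian path until it first leaves (a, b):

import numpy as np

rng = np.random.default_rng(4)
a, b, dt, n_paths = -1.0, 2.0, 1e-3, 2000

hits_b, exit_times = 0, []
for _ in range(n_paths):
    x, t = 0.0, 0.0
    while a < x < b:                         # run until the path leaves (a, b)
        x += np.sqrt(dt) * rng.standard_normal()
        t += dt
    hits_b += x >= b
    exit_times.append(t)

print("P{B(T)=b}:", hits_b / n_paths, "vs |a|/(|a|+b) =", abs(a) / (abs(a) + b))
print("E[T]:", np.mean(exit_times), "vs |a|b =", abs(a) * b)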

Theorem Let {B(t) : t ≥ 0} be a standard linear Brownian motion and T a stopping time

with E[T 1/2 ] < ∞. Then E[B(T )] = 0.

Theorem Let f : Rd → R be twice continuously differentiable, and {B(t) : t ≥ 0} be a

d-dimensional Brownian motion. Further suppose that, for all t > 0 and x ∈ R^d, we have E_x|f(B(t))| < ∞ and E_x ∫_0^t |∆f(B(s))| ds < ∞. Then the process defined by

X(t) = f(B(t)) − (1/2) ∫_0^t ∆f(B(s)) ds

is a martingale.

Corollary Suppose f: Rd → R satisfies ∆f (x) =0 and Ex |f (B(t))| < ∞, for every x ∈ Rd and

t > 0. Then the process {f(B(t)) : t ≥0 } is a martingale.

Chapter 2

Poisson Process

2.1 Basic Definitions

A σ-finite measure µ on (E, ϵ) is called Borel Measure if, for any x ∈ E, there exists an open

neighborhood U ∋ x such that µ(U ) < ∞.

A σ-finite measure µ on (E, ϵ) is called inner regular if

µ(A) = sup{µ(K) : K ⊂ A is compact} for all A ∈ ϵ.

A Polish space is a separable, completely metrizable topological space; that is, a space homeomorphic to a complete metric space that has a countable dense subset.

Let E be a locally compact Polish space with Borel σ-algebra B(E). Let

B_b(E) = {B ∈ B(E) : B is relatively compact}

be the system of bounded Borel sets, and let M(E) be the space of Radon measures on E.

Theorem Denote by M = σ(I_A : A ∈ B_b(E)) the smallest σ-algebra on M(E) with respect to which all maps

I_A : µ → µ(A), A ∈ B_b(E),

are measurable. Let τ_v be the vague topology on M(E). Then

M = B(τ_v) = σ(I_f : f ∈ C_c(E)) = σ(I_f : f ∈ C_c^+(E)).

A σ-finite measure µ on (E, ϵ) is called a Radon measure if µ is an inner regular Borel measure.

The Laplace transform of a random measure X on E is

L(f) = E[exp(−∫ f dX)], f ∈ B^+(E).

The characteristic function of a random measure X on E is

ψ(f) = E[exp(i ∫ f dX)], f ∈ B_b^R(E).

(M_t)_{t≥0} is said to be the Moran-Gamma subordinator if it is a stochastic process which

(i) is right-continuous;

(ii) has monotone increasing paths t → M_t;

(iii) has independent, stationary, Gamma-distributed increments, i.e. for t > s ≥ 0, M_t − M_s ∼ Γ_{1, t−s}.

2.2 Random Measures

Let M̃(E) be the space of all measures on E, endowed with the σ-algebra

M̃ = σ(I_A : A ∈ B_b(E)).

A random measure on E is a random variable X on some probability space (Ω, A, P) with values in (M̃(E), M̃) and with P[X ∈ M(E)] = 1.

Theorem Let X be a random measure on E. Then the set function E[X] : B(E) → [0, ∞], A → E[X(A)], is a measure. We call E[X] the intensity measure of X. We say that X is integrable if E[X] ∈ M(E).

Theorem Let PX be the distribution of a random measure X. Then PX is uniquely deter-

mined by the distribution of either of the families

((If1 , ....Ifn ) : n ∈ N; f1 , ...fn ∈ Cc+ (E)) or

((IA1 , ....IAn ) : n ∈ N; A1 , ....An ∈ Bb (E) pairwise disjoint).


Theorem The distribution PX of a random measure X is characterized by its Laplace trans-

form LX (f ), f ∈ Cc+ (E), as well as by its characteristic function ΨX (f ), f ∈ Cc (E).

Corollary The distribution of a random measure X on E with independent increments is

uniquely determined by the family (PX(A) , A ∈ Bb (E)).

2.3 Introduction to Poisson Process

The Poisson distribution with parameter µ > 0 is given by

p_k = e^{−µ} µ^k / k! for k = 0, 1, ...

Let X be a random variable having the Poisson distribution. We see that

E[X] = µ,
E[X(X − 1)] = µ^2,
E[X^2] = µ^2 + µ,
σ_X^2 = µ.

Suppose that X has the exponential distribution with parameter α:

P[X > x] = e^{−αx}, x ≥ 0. Then

P[X > x + y | X > x] = P[X > y], x, y ≥ 0.

We may think of X as the waiting time for an occurrence. Let X_1 be the waiting time for the first event, let X_2 be the waiting time between the first and the second events, and so on. This gives an infinite sequence X_1, X_2, X_3, ... of random variables on some probability space, and S_n = X_1 + X_2 + ... + X_n represents the time of occurrence of the nth event; it is convenient to set S_0 = 0.

Note that if no two events occur simultaneously, then (S_n) must be strictly increasing. Also observe that if only finitely many of the events are to occur in each finite interval of time, S_n must go to infinity:

0 = S_0(ω) < S_1(ω) < S_2(ω) < ..., sup_n S_n(ω) = ∞;

X_1(ω) > 0, X_2(ω) > 0, ..., Σ_n X_n(ω) = ∞.

The two displayed conditions are equivalent when they hold for each ω; call this Condition 0°. Let N_t denote the number of events that occur in the time interval [0, t], i.e. the largest integer n such that S_n ≤ t:


N_t = max[n : S_n ≤ t].

Then N_t = 0 if t < S_1 = X_1; in particular N_0 = 0. The number of events in (s, t] is the increment N_t − N_s. Clearly

[N_t ≥ n] = [S_n ≤ t],

[N_t = n] = [S_n ≤ t < S_{n+1}].

Thus each N_t is a random variable, and the collection [N_t : t ≥ 0] is a stochastic process. Now let us restate Condition 0° as follows.

Condition 0°

For each ω, N_t(ω) is a nonnegative integer for t ≥ 0, N_0(ω) = 0 and lim_{t→∞} N_t(ω) = ∞; further, for each ω, N_t(ω) as a function of t is nondecreasing and right-continuous, and at the points of discontinuity the saltus N_t(ω) − sup_{s<t} N_s(ω) is exactly 1.

Condition 1°

The X_n are independent and each is exponentially distributed with parameter α.

Condition 2°

(i) For 0 < t_1 < t_2 < ... < t_k the increments N_{t_1}, N_{t_2} − N_{t_1}, ..., N_{t_k} − N_{t_{k−1}} are independent.

(ii) The individual increments have the Poisson distribution, i.e.

P[N_t − N_s = n] = e^{−α(t−s)} (α(t−s))^n / n!, n = 0, 1, 2, ..., 0 ≤ s < t.

A collection [N_t : t ≥ 0] of random variables satisfying Condition 2° is called a Poisson process, and α is said to be the rate of the process.

Condition 3°

(i) For 0 < t_1 < t_2 < ... < t_k the increments N_{t_1}, N_{t_2} − N_{t_1}, ..., N_{t_k} − N_{t_{k−1}} are independent.

(ii) The distribution of N_t − N_s depends only on the difference t − s.

Condition 4°

If 0 < t_1 < t_2 < ... < t_k and n_1, n_2, ..., n_k are nonnegative integers, then

P[N_{t_k + h} − N_{t_k} = 1 | N_{t_j} = n_j, j ≤ k] = αh + o(h) and

P[N_{t_k + h} − N_{t_k} ≥ 2 | N_{t_j} = n_j, j ≤ k] = o(h)

as h ↓ 0. Moreover, [N_t : t ≥ 0] has no fixed discontinuities.

Theorem Conditions 1° and 2° are equivalent in the presence of Condition 0°.

Theorem If Condition 0° holds and [N_t : t ≥ 0] has independent increments and no fixed discontinuities, then each increment has a Poisson distribution.

Theorem Conditions 1°, 2° and 3° are equivalent in the presence of Condition 0°.

Theorem Conditions 1°, 2°, 3° and 4° are equivalent in the presence of Condition 0°.

Approximations of the Poisson process

Theorem Suppose that for each n, Z_{n1}, Z_{n2}, ..., Z_{nr_n} are independent random variables and Z_{nk} assumes the values 1 and 0 with probabilities p_{nk} and 1 − p_{nk}. If

Σ_{k=1}^{r_n} p_{nk} → λ ≥ 0 and max_{1≤k≤r_n} p_{nk} → 0, then

P[Σ_{k=1}^{r_n} Z_{nk} = i] → e^{−λ} λ^i / i!, i = 0, 1, 2, ...

2.4 The Law of Rare Events

Consider a large number N of independent Bernoulli trials, where the probability p of success on each trial is small and constant from trial to trial. Let X_{N,p} denote the total number of successes in the N trials, so that X_{N,p} follows the binomial distribution

P{X_{N,p} = k} = N!/(k!(N−k)!) p^k (1−p)^{N−k} for k = 0, 1, 2, ..., N.

Now consider the limiting case in which N → ∞ and p → 0 in such a way that Np = λ > 0, where λ is constant. Then the distribution of X_{N,p} becomes, in the limit, the Poisson distribution:

P{X_λ = k} = e^{−λ} λ^k / k!.

Derivation

Write the binomial probability as

P{X_{N,p} = k} = (Np)^k / k! · N(N−1)···(N−k+1)/N^k · (1−p)^{N−k}.

Now let λ = Np. Then

N(N−1)···(N−k+1)/N^k = 1 · (1 − 1/N)(1 − 2/N) ··· (1 − (k−1)/N) → 1 as N → ∞,

and (1 − λ/N)^N → e^{−λ}, so that

(1−p)^{N−k} = (1 − λ/N)^N (1 − λ/N)^{−k} → e^{−λ} × 1 as N → ∞.

Thus, for a large number N of independent trials with a small constant success probability p on each trial, the total number of successes approximately follows a Poisson distribution with the parameter λ = Np.
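The speed of this convergence can be checked directly. A self-contained Python sketch (ours) computing the total variation distance between Binomial(N, λ/N) and Poisson(λ) via the pmf recursions; by Le Cam's inequality it is bounded by Np^2 = λ^2/N:

from math import exp

lam = 3.0
for N in (10, 100, 1000):
    p = lam / N
    bin_pmf, poi_pmf = (1 - p) ** N, exp(-lam)   # both pmfs at k = 0
    tv = abs(bin_pmf - poi_pmf)
    for k in range(1, N + 1):
        bin_pmf *= (N - k + 1) / k * p / (1 - p)  # binomial pmf recursion in k
        poi_pmf *= lam / k                        # Poisson pmf recursion in k
        tv += abs(bin_pmf - poi_pmf)
    # Poisson mass beyond k = N is negligible for these lam and N.
    print(N, round(0.5 * tv, 6))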


2.4.1 Postulates of the Poisson Process

There are four postulates associated to the Poisson Process.

Let N((a,b]) denote the number of events that occur during the interval (a,b]. That is, if

t1 < t2 < t3 .....denote the times (or locations, etc.) of successive events, then N((a,b]) is the

number of values of ti for which a < ti ≤ b. We now state the following postulates

(i) The numbers of events happening in disjoint intervals are independent random variables. That is, for every integer m = 2, 3, ... and time points t_0 = 0 < t_1 < t_2 < ... < t_m, the random variables

variables

N ((t0 , t1 ]), N ((t1 , t2 ]), ..N ((tm−1 , tm ])

are independent.

(ii) For any time t and positive number h , the probability distribution of N ((t, t + h]), the

number of events occurring between time t and t + h, depends only on the interval length h and

not on the time t.

(iii) There is a positive constant λ for which the probability of at least one event happening in

a time interval of length h is

P{N ((t, t + h]) ≥ 1} = λh + o(h) as h ↓ 0.

(Conforming to a common notation, here o(h) as h ↓ 0 stands for a general and unspecified remainder term for which o(h)/h → 0 as h ↓ 0; that is, a remainder term of smaller order than h as h vanishes.)

(iv) The probability of two or more events occurring in an interval of length h is o(h), or

P{N ((t, t + h]) ≥ 2} = o(h), h ↓ 0.

2.5 Distributions associated with Poisson Process

The time of occurrence of the nth event is called the waiting time; it is denoted by W_n. We can conveniently set W_0 = 0.

The differences S_n = W_{n+1} − W_n are called sojourn times; S_n measures the duration for which the Poisson process sojourns in state n.

Theorem The waiting time W_n has the gamma distribution; its probability density function is

f_{W_n}(t) = λ^n t^{n−1} e^{−λt} / (n−1)!, n = 1, 2, ..., t ≥ 0.

In particular W_1, the time to the first event, is exponentially distributed:

f_{W_1}(t) = λ e^{−λt}, t ≥ 0.

Theorem The sojourn times S_0, S_1, ..., S_{n−1} are independent random variables, each having the exponential probability density function

f_{S_k}(s) = λ e^{−λs}, s ≥ 0.

Let {X(t)} be a Poisson process of rate λ > 0. Then for 0 < u < t and 0 ≤ k ≤ n,

P{X(u) = k | X(t) = n} = n!/(k!(n−k)!) (u/t)^k (1 − u/t)^{n−k}.

Theorem Let W_1, W_2, ... be the occurrence times in a Poisson process of rate λ > 0. Conditioned on X(t) = n, the random variables W_1, W_2, ..., W_n have the joint probability density function

f_{W_1,...,W_n | X(t)=n}(w_1, w_2, ..., w_n) = n! t^{−n} for 0 < w_1 < ... < w_n ≤ t.

The theorem asserts that, conditioned on a fixed total number of events in an interval, the locations of those events are distributed as the order statistics of independent uniform points. This theorem has a wide variety of applications.
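These results give two equivalent ways to simulate a Poisson process on [0, t]: cumulate exponential sojourn times, or draw the count N(t) ~ Poisson(λt) and scatter that many uniform points. A minimal sketch (ours, NumPy assumed):

import numpy as np

rng = np.random.default_rng(5)
lam, t = 2.0, 10.0

# Method 1: cumulative sums of i.i.d. Exponential(lam) sojourn times.
W = np.cumsum(rng.exponential(1.0 / lam, size=200))   # 200 sojourns is ample for lam*t = 20
times1 = W[W <= t]

# Method 2: conditional uniformity -- draw the count, then sort uniform points.
times2 = np.sort(rng.uniform(0.0, t, size=rng.poisson(lam * t)))

print(len(times1), len(times2))   # both counts fluctuate around lam*t = 20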

2.6 Poisson Point Process

Let µ ∈ M(E). A random measure X with independent increments is called a Poisson point process with intensity measure µ if, for any A ∈ B_b(E), we have P_{X(A)} = Poi_{µ(A)}. For every µ ∈ M(E), there exists a Poisson point process X with intensity measure µ.

For an atom-free measure µ ∈ M(E), let X be a random measure on E with P[X(A) ∈ N_0 ∪ {∞}] = 1 for every A ∈ B(E). Then the following are equivalent.

(i) X ∼ PPP_µ, where PPP_µ denotes a Poisson point process with intensity µ.

(ii) X almost surely has no double points, that is,

P[X({x}) ≥ 2 for some x ∈ E] = 0, and

P[X(A) = 0] = e^{−µ(A)} for all A ∈ B_b(E).

Theorem Let µ ∈ M(E) and let X be a Poisson point process with intensity measure µ. Then X has the Laplace transform

L(f) = exp(∫ µ(dx)(e^{−f(x)} − 1)), f ∈ B^+(E),

and characteristic function

ψ_X(f) = exp(∫ µ(dx)(e^{i f(x)} − 1)), f ∈ B_b^R(E).

Moments of a PPP

Let µ ∈ M(E) and X ∼ PPP_µ.

(i) If f ∈ L^1(µ), then E[∫ f dX] = ∫ f dµ.

(ii) If f ∈ L^2(µ) ∩ L^1(µ), then Var[∫ f dX] = ∫ f^2 dµ.

Mapping theorem Let E and F be locally compact Polish spaces and let φ : E → F be a measurable map. Let µ ∈ M(E) with µ ∘ φ^{−1} ∈ M(F), and let X be a PPP on E with intensity measure µ. Then X ∘ φ^{−1} is a PPP on F with intensity measure µ ∘ φ^{−1}.


Theorem Let ν ∈ M((0, ∞)) and let X ∼ PPP_ν on (0,∞). Further define Y = ∫ x X(dx); then the following are equivalent.

(i) P[Y < ∞] > 0.

(ii) P[Y < ∞] = 1.

(iii) ∫ ν(dx)(1 ∧ x) < ∞.

If (i)-(iii) hold, then Y is an infinitely divisible nonnegative random variable with Lévy measure ν.

Theorem Let ν ∈ M((0, ∞) × E) with

∫ 1_A(t)(1 ∧ x) ν(d(x, t)) < ∞ for all A ∈ B_b(E),

and let α ∈ M(E). Let X be a PPP_ν and

Y(A) := α(A) + ∫ x 1_A(t) X(d(x, t)), A ∈ B(E).

Then Y is an infinitely divisible random measure with independent increments. For A ∈ B(E), Y(A) has Lévy measure ν(· × A).

Colouring theorem Let F be a further locally compact Polish space, let µ ∈ M(E) be atom-free, let X ∼ PPP_µ, and let (Y_x)_{x∈E} be i.i.d. random variables with values in F and distribution ν ∈ M_1(F). Then

Z(A) := ∫ 1_A(x, Y_x) X(dx), A ∈ B(E × F),

is a PPP_{µ⊗ν} on E × F.


Theorem X^κ is a random measure with P_{X^κ} = PPP_{µκ}.

2.7 Multi-dimensional Poisson

Let S be a set in n−dimensional space and let A be a family of subsets of S. A Point process

in S is a stochastic process N (A) indexed by the sets A in A and having the set of nonnegative

integers {0, 1, 2....} as its possible values where N (A) is a counting function.

If S is a subset of the real line, the two-dimensional plane or three-dimensional space, let A be the family of subsets of S, and for any set A ∈ A let |A| denote the size (length, area or volume, respectively) of A. Then we say {N(A) : A ∈ A} is a homogeneous Poisson point process of intensity λ > 0 if

(i) for each A ∈ A, the random variable N(A) has a Poisson distribution with parameter λ|A|;

(ii) for every finite collection {A_1, A_2, ..., A_n} of disjoint subsets of S, the random variables N(A_1), ..., N(A_n) are independent.

A Poisson Point Process N((s,t]) counts the number of events occurring in an interval (s,t].

A Poisson Counting Process or a Poisson Process X(t) counts the number of events oc-

curring up to time t. i.e.

X(t) = N ((0, t]).

We can restate the postulates for a multidimensional Poisson process, for a given point process {N(A) : A ∈ A}, as follows:

• The possible values of N(A) are the nonnegative integers {0, 1, 2, ...} and 0 < P{N(A) = 0} < 1 if 0 < |A| < ∞.

• The probability distribution of N(A) depends on the set A only through its size |A|, with the further property P{N(A) ≥ 1} = λ|A| + o(|A|) as |A| ↓ 0.

• For m = 2, 3, ..., if A_1, A_2, ..., A_m are disjoint regions, then N(A_1), N(A_2), ..., N(A_m) are independent random variables and N(A_1 ∪ A_2 ∪ ... ∪ A_m) = N(A_1) + N(A_2) + ... + N(A_m).

• lim_{|A|↓0} P{N(A) ≥ 1} / P{N(A) = 1} = 1.

If a random point process N(A), defined with respect to subsets A of Euclidean n-space, satisfies the above postulates, then N(A) is a homogeneous Poisson point process of intensity λ > 0 and

P{N(A) = k} = e^{−λ|A|} (λ|A|)^k / k! for k = 0, 1, ...

Consider a region A containing exactly one point, i.e. N(A) = 1. Then for any subset B of A,

P{N(B) = 1 | N(A) = 1} = |B|/|A| for B ⊂ A.

The generalization to n points in a region A is as follows. Take a set A of positive size |A| > 0 containing N(A) = n ≥ 1 points. Then these n points are independent and uniformly distributed in A, in the sense that for any disjoint partition A_1, A_2, ..., A_m of A with A_1 ∪ A_2 ∪ ... ∪ A_m = A, and positive integers k_1, k_2, ..., k_m with k_1 + k_2 + ... + k_m = n,

P{N(A_1) = k_1, N(A_2) = k_2, ..., N(A_m) = k_m | N(A) = n} = n!/(k_1! k_2! ... k_m!) (|A_1|/|A|)^{k_1} ... (|A_m|/|A|)^{k_m}.
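By the same conditional-uniformity logic, a homogeneous spatial Poisson process on a rectangle can be simulated by drawing a Poisson count and scattering uniform points. A minimal sketch (ours, NumPy assumed):

import numpy as np

rng = np.random.default_rng(6)
lam, Lx, Ly = 5.0, 2.0, 3.0                 # intensity and rectangle side lengths

n = rng.poisson(lam * Lx * Ly)              # N(S) ~ Poisson(lam |S|)
pts = rng.uniform(0.0, 1.0, size=(n, 2)) * np.array([Lx, Ly])

# Check postulate (i) on a sub-rectangle A: N(A) should be Poisson(lam |A|).
A = (pts[:, 0] < 1.0) & (pts[:, 1] < 1.0)   # A = [0,1] x [0,1], so |A| = 1
print(A.sum(), "points in A; expected around", lam * 1.0)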

2.7.1 Compound Process

Given a Poisson process X(t) of rate λ > 0, suppose that each event has associated with it a random variable, possibly representing a value or a cost. The successive values Y_1, Y_2, ... are independent random variables sharing a common distribution function

G(y) = P{Y_k ≤ y}.

A compound Poisson process is the cumulative value process defined by

Z(t) = Σ_{k=1}^{X(t)} Y_k for t ≥ 0.

If λ > 0 is the rate of the process X(t) and µ = E[Y_1] and ν^2 = Var[Y_1] are the common mean and variance of Y_1, Y_2, ..., then the moments of Z(t) are given by

E[Z(t)] = λµt;

Var[Z(t)] = λ(ν^2 + µ^2)t.

Compound processes arise, for example, in the following settings.

(i) Risk theory Suppose claims arrive at an insurance company in accordance with a Poisson process of rate λ. Let Y_k be the magnitude of the kth claim. Then Z(t) = Σ_{k=1}^{X(t)} Y_k represents the cumulative amount claimed up to time t.

(ii) Stock prices Suppose that transactions in a certain stock take place according to a Poisson process of rate λ. Let Y_k denote the change in market price of the stock between the kth and (k−1)st transactions. If we assume the stock price follows the random walk hypothesis, which asserts that Y_1, Y_2, ... are independent random variables, then Z(t) = Σ_{k=1}^{X(t)} Y_k represents the total price change up to time t.


The distribution of the compound Poisson process Z(t) = Σ_{k=1}^{X(t)} Y_k is obtained by conditioning on X(t): P{Z(t) ≤ z} = Σ_{n=0}^∞ e^{−λt} (λt)^n / n! · G^{(n)}(z), where G^{(n)} denotes the n-fold convolution of G.
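In practice the moments are easier to check than the full distribution. A simulation sketch (ours, NumPy assumed) verifying E[Z(t)] = λµt and Var[Z(t)] = λ(ν^2 + µ^2)t with exponentially distributed values Y_k (for which ν^2 = µ^2):

import numpy as np

rng = np.random.default_rng(7)
lam, t, mu, n_paths = 3.0, 5.0, 2.0, 20000

N = rng.poisson(lam * t, size=n_paths)                  # number of events by time t
Z = np.array([rng.exponential(mu, size=n).sum() for n in N])

print("E[Z(t)]:", Z.mean(), "vs", lam * mu * t)                 # lam*mu*t = 30
print("Var[Z(t)]:", Z.var(), "vs", lam * (mu**2 + mu**2) * t)   # lam*(nu^2+mu^2)*t = 120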

2.7.2 Marked Poisson Process

A marked Poisson process is the sequence of pairs (W_1, Y_1), (W_2, Y_2), ..., where W_1, W_2, ... are the waiting times or event times in the Poisson process X(t). For a fixed p (0 < p < 1) suppose

P{Y_k = 1} = p, P{Y_k = 0} = q = 1 − p.

Consider the separate processes of points marked with ones and of points marked with zeroes. We define the relevant Poisson processes as

X_1(t) = Σ_{k=1}^{X(t)} Y_k and X_0(t) = X(t) − X_1(t).

The nonoverlapping increments of X_1(t) are independent random variables and X_1(0) = 0; moreover X_1(t) has a Poisson distribution with mean λpt. Hence X_1(t) is a Poisson process with rate λp, and a parallel argument shows that X_0(t) is a Poisson process with rate λ(1 − p). We also see that X_0(t) and X_1(t) are independent processes, as the thinning sketch below illustrates.
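A minimal illustration of this splitting (ours, NumPy assumed): mark each event of a rate-λ process independently with probability p and compare the two resulting counts with their Poisson means:

import numpy as np

rng = np.random.default_rng(8)
lam, p, t = 4.0, 0.3, 50.0

n = rng.poisson(lam * t)                    # events of the original process on [0, t]
marks = rng.random(n) < p                   # independent Bernoulli(p) marks
X1, X0 = marks.sum(), (~marks).sum()

print("X1(t):", X1, "mean", lam * p * t)          # Poisson(lam p t), mean 60
print("X0(t):", X0, "mean", lam * (1 - p) * t)    # Poisson(lam (1-p) t), mean 140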

Non Homogeneous Poisson Point Process in a Plane

Let θ = θ(x, y) be a nonnegative function defined on a region S of the (x, y) plane. For each subset A of S, let µ(A) = ∫∫_A θ(x, y) dx dy be the volume under θ(x, y) enclosed by A. A non-homogeneous Poisson point process of intensity function θ(x, y) is a point process {N(A) : A ⊂ S} for which

(i) for each subset A of S, the random variable N(A) has a Poisson distribution with mean µ(A);

(ii) for disjoint subsets A1 , A2 .....Am of S, the random variables N (A1 ), N (A2 )..., N (Am ) are

independent.

Note that a homogeneous Poisson process of intensity λ corresponds to a constant intensity function, θ(x, y) = λ for all x, y.

Chapter 3

Lévy Processes

3.1 Basic Definitions

A process is said to be stochastically continuous if, for all a > 0 and all s ≥ 0,

lim_{t→s} P(|X(t) − X(s)| > a) = 0.

Let X = (X(t), t ≥ 0) be a stochastic process defined on a probability space (Ω, F, P). We say that it has independent increments if for each n ∈ N and each 0 ≤ t_1 < t_2 < ... < t_{n+1} < ∞ the random variables (X(t_{j+1}) − X(t_j), 1 ≤ j ≤ n) are independent.

A stochastic process has Stationary increments if for all t ≥ 0 and h > 0, the distribu-

tion of the random variables

Yt,h = Xt+h − Xt

depends only on h and not on t.

Let (S, A) be a measurable space and (Ω, F, P) a probability space. A random measure M on (S, A) is a collection of random variables (M(B), B ∈ A) such that

(1) M(∅) = 0;

(2) (σ-additivity) given any sequence (A_n, n ∈ N) of mutually disjoint sets in A,

M(∪_{n∈N} A_n) = Σ_{n∈N} M(A_n) almost surely;

(3) (independently scattered property) for each disjoint family (B_1, ..., B_n) in A, the random variables M(B_1), ..., M(B_n) are independent.

Let P = {a = t_1 < t_2 < ... < t_n < t_{n+1} = b} be a partition of the interval [a, b] in R, and define its mesh to be δ = max_{1≤i≤n} |t_{i+1} − t_i|. We define the variation var_P(g) of a càdlàg mapping g : [a, b] → R^d over the partition P by

var_P(g) = Σ_{i=1}^n |g(t_{i+1}) − g(t_i)|.

If V(g) = sup_P var_P(g) < ∞, we say that g has finite variation on [a, b].

A martingale is a sequence of random variables X_0, X_1, ... with finite means such that the conditional expectation of X_{n+1} given X_0, X_1, ..., X_n is equal to X_n, i.e.

E[X_{n+1} | X_0, ..., X_n] = X_n.

Consider a Lévy process X = (X(t), t ≥ 0) adapted to a given filtration (F_t, t ≥ 0) on a probability space (Ω, F, P). The mapping η is the Lévy symbol of X, so that

E(e^{i(u,X(t))}) = e^{tη(u)}

for all u ∈ R^d. Here η is a continuous, hermitian, conditionally positive mapping from R^d to C that satisfies η(0) = 0 and whose precise form is given by the Lévy-Khintchine formula.

We say that a Lévy process X has Bounded jumps if there exists C > 0 with

sup0≤t<∞ |∆X(t)| < C.

3.2 Càdlàg Processes

Let I = [a, b] be an interval in R^+. A mapping f : I → R^d is said to be càdlàg if, for all t ∈ (a, b], f has a left limit at t and f is right-continuous at t, i.e.

• for all sequences (t_n, n ∈ N) in I with each t_n < t and lim_{n→∞} t_n = t, the limit lim_{n→∞} f(t_n) exists;


• for all sequences (t_n, n ∈ N) in I with each t_n ≥ t and lim_{n→∞} t_n = t, we have lim_{n→∞} f(t_n) = f(t).

Let I = [a, b] be an interval in R^+. A mapping f : I → R^d is said to be càglàd if, for all t ∈ (a, b], f has a right limit at t and f is left-continuous at t, i.e.

• for all sequences (t_n, n ∈ N) in I with each t_n ≤ t and lim_{n→∞} t_n = t, we have lim_{n→∞} f(t_n) = f(t);

• for all sequences (t_n, n ∈ N) in I with each t_n > t and lim_{n→∞} t_n = t, the limit lim_{n→∞} f(t_n) exists.

Clearly any continuous function is càdlàg. Also note that a càdlàg function can only have jump discontinuities.

Theorem If f is a càdlàg function, then the set S = {t : ∆f(t) ≠ 0} is at most countable.

Proof For each k > 0, define

S_k = {t : |∆f(t)| > k}.

Suppose that S_k has at least one accumulation point x, and choose a sequence (x_n, n ∈ N) in S_k that converges to x. We assume, without loss of generality, that the convergence is from the left and that the sequence is increasing.

Now, given any n ∈ N, since x_n ∈ S_k it follows that f has a left limit at x_n, and so, given ε > 0, we can find δ > 0 such that, for all y < x_n satisfying x_n − y < δ, we have f(x_n−) − f(y) = ε_0(y), where |ε_0(y)| < ε.

Now fix n_0 ∈ N such that x_m − x_n < δ for all m > n > n_0; then, writing k_0 for the jump of f at x_n, so that |k_0| > k, we have

f(x_n) − f(x_m) = f(x_n) − f(x_n−) + f(x_n−) − f(x_m) = k_0 + ε_0(x_m).

From this we see that (f(x_n), n ∈ N) cannot be Cauchy, and so f does not have a limit at x. Hence S_k has no accumulation points and so is at most countable. However,

S = ∪_{n∈N} S_{1/n},

thus S is countable, as required.

Properties of càdlàg functions

• Let D(a, b) denote the set of all càdlàg functions on [a, b]; then D(a, b) is a linear space with respect to pointwise addition and scalar multiplication.

• If f, g ∈ D(a, b) then fg ∈ D(a, b). Furthermore, if f(x) ≠ 0 for all x ∈ [a, b], then 1/f ∈ D(a, b).

• If f ∈ C(R, R) and g ∈ D(a, b), then the composition f ∘ g ∈ D(a, b).

• Every càdlàg function is bounded on finite closed intervals and attains its bounds there.

• Every càdlàg function is uniformly right-continuous on finite closed intervals.

• The uniform limit of a sequence of càdlàg functions is càdlàg.

• Any càdlàg function can be uniformly approximated on finite intervals by a sequence of step functions.

• Every càdlàg function is Borel measurable.

• Every cádlág function is Borel measurable.

If f ∈ D(a, b), we consider the associated mapping f̃ : (a, b] → R defined by f̃(x) = f(x−) for x ∈ (a, b]. We see that f and f̃ differ on at most a countable number of points and that f̃ is càglàd on (a, b]. Moreover,

sup_{a<x≤b} |f(x−)| ≤ sup_{a≤x≤b} |f(x)|.

3.3 Introduction to Lévy Processes

We say a stochastic process X is a Lévy Process if

• X(0) = 0 (a.s);

• X has independent and stationary increments;

• X is stochastically continuous.

If X is a Lévy process, then X(t) is infinitely divisible for each t ≥ 0. (A random variable Y is called infinitely divisible if, for any n ∈ N, there exist i.i.d. random variables Y_1, Y_2, ..., Y_n with Y = Y_1 + Y_2 + ... + Y_n.)

Indeed, for each n ∈ N we can write

X(t) = Y_1^{(n)}(t) + Y_2^{(n)}(t) + ... + Y_n^{(n)}(t), where each

Y_k^{(n)}(t) = X(kt/n) − X((k−1)t/n).

The Y_k^{(n)}(t) are i.i.d.

Lemma Write φ_{X(t)}(u) = e^{η(t,u)}, where η(t, ·) is a Lévy symbol. If X = (X(t), t ≥ 0) is stochastically continuous, then the map t → φ_{X(t)}(u) is continuous for each u ∈ R^d.

Proof For each s, t ≥ 0 with t ≠ s, write X(s, t) = X(t) − X(s). Fix u ∈ R^d. Since the map y → e^{i(u,y)} is continuous at the origin, given any ε > 0 we can find δ_1 > 0 such that

sup_{0≤|y|<δ_1} |e^{i(u,y)} − 1| < ε/2,

and by stochastic continuity we can find δ_2 such that whenever 0 < |t − s| < δ_2, P(|X(s,t)| > δ_1) < ε/4.

Hence for all 0 < |t − s| < δ_2 we have

|φ_{X(t)}(u) − φ_{X(s)}(u)| = |∫_Ω e^{i(u,X(s)(ω))} [e^{i(u,X(s,t)(ω))} − 1] P(dω)|

≤ ∫_{R^d} |e^{i(u,y)} − 1| p_{X(s,t)}(dy)

= ∫_{B_{δ_1}(0)} |e^{i(u,y)} − 1| p_{X(s,t)}(dy) + ∫_{B_{δ_1}(0)^c} |e^{i(u,y)} − 1| p_{X(s,t)}(dy)

≤ sup_{0≤|y|≤δ_1} |e^{i(u,y)} − 1| + 2 P(|X(s,t)| > δ_1) < ε/2 + ε/2 = ε,

which proves the claimed continuity.

Theorem If X is a Lévy process, then

φ_{X(t)}(u) = e^{tη(u)}

for each u ∈ R^d, t ≥ 0, where η is the Lévy symbol of X(1).

Proof Suppose that X is a Lévy process and fix u ∈ R^d. Define φ_u(t) = φ_{X(t)}(u); then for all s, t ≥ 0,

φ_u(t + s) = E(e^{i(u,X(t+s))})

= E(e^{i(u,X(t+s)−X(s))} e^{i(u,X(s))})

= E(e^{i(u,X(t+s)−X(s))}) E(e^{i(u,X(s))})

= φ_u(t) φ_u(s), and

φ_u(0) = 1.

By the preceding lemma the map t → φ_u(t) is continuous, and the unique continuous solution of these equations is φ_u(t) = e^{tα(u)} for some α : R^d → C. Since X(1) is infinitely divisible, α is its Lévy symbol, which proves the claim.

Lévy-Khintchine formula for a Lévy process

E(e^{i(u,X(t))}) = exp( t { i(b, u) − (1/2)(u, Au) + ∫_{R^d−{0}} [e^{i(u,y)} − 1 − i(u, y) χ_{B̂}(y)] µ(dy) } ).
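For a concrete check of φ_{X(t)}(u) = e^{tη(u)}: a rate-λ Poisson process has Lévy symbol η(u) = λ(e^{iu} − 1), a pure-jump instance of the formula above with A = 0. The following sketch (ours, NumPy assumed) compares the empirical characteristic function with the closed form:

import numpy as np

rng = np.random.default_rng(9)
lam, t, u, n_samples = 2.0, 1.5, 0.7, 200000

X_t = rng.poisson(lam * t, size=n_samples)          # Poisson process at time t
emp = np.mean(np.exp(1j * u * X_t))                 # empirical E[e^{iuX(t)}]
exact = np.exp(t * lam * (np.exp(1j * u) - 1.0))    # e^{t eta(u)}, eta(u) = lam(e^{iu} - 1)

print(emp, "vs", exact)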

Theorem If X = (X(t), t ≥ 0) is a stochastic process and there exists a sequence of Lévy processes (X_n, n ∈ N), with each X_n = (X_n(t), t ≥ 0), such that X_n(t) converges in probability to X(t) for each t ≥ 0 and lim_{n→∞} limsup_{t→0} P(|X_n(t) − X(t)| > a) = 0 for all a > 0, then X is a Lévy process.

3.4 Jumps of a Lévy process

The Jump process ∆X = (∆X(t), t ≥ 0) is defined by

∆X(t) = X(t) − X(t− )

for each t ≥ 0(X(t− ) is the left limit at the point t).

Theorem If N is a Lévy process that is increasing almost surely and is such that (∆N(t), t ≥ 0) takes values in {0, 1}, then N is a Poisson process.

Result Let N be a Poisson process and choose 0 ≤ t_1 ≤ t_2 < ∞. One can show that

P(∆N(t_2) − ∆N(t_1) = 0 | ∆N(t_1) = 1) ≠ P(∆N(t_2) − ∆N(t_1) = 0),

so that ∆N cannot have independent increments.

Lemma If X is a Lévy process, then, for fixed t > 0, ∆X(t) = 0 almost surely.

Result If X is a compound Poisson process, then Σ_{0≤s≤t} |∆X(s)| < ∞ almost surely.

Let 0 ≤ t < ∞ and A ∈ B(R^d − {0}). Then we define

N(t, A) = #{0 ≤ s ≤ t : ∆X(s) ∈ A} = Σ_{0≤s≤t} χ_A(∆X(s)).

For each ω ∈ Ω and t ≥ 0, the set function A → N(t, A)(ω) is a counting measure on B(R^d − {0}), and thus

E(N(t, A)) = ∫ N(t, A)(ω) dP(ω).

We write µ(·) = E(N(1, ·)) and call it the intensity measure associated with X. We say that A ∈ B(R^d − {0}) is bounded below if 0 ∉ Ā.

Lemma If A is bounded below, then N (t, A) < ∞ almost surely for all t ≥ 0.

Theorem

(1) If A is bounded below, then (N(t, A), t ≥ 0) is a Poisson process with intensity µ(A).

(2) If A1 , ....., Am ∈ B(Rd − {0}) are disjoint, then the random variables N (t, A1 ), ....., N (t, Am )

are independent.

Theorem Given a σ−finite measure λ on a measurable space (S, A), there exists a Poisson

random measure M on a probability space (Ω, F, P) such that λ(A) = E(M (A)) for all A ∈ A.


Let U = R^d − {0} and let C be its Borel σ−algebra. Let X be a Lévy process; then ∆X is a Poisson point process and N is its associated Poisson random measure. For each t ≥ 0 and A bounded below, we define the compensated Poisson random measure by

Ñ(t, A) = N(t, A) − tµ(A).

Properties of Poisson random measures

The main properties of Poisson random measures are given by

(1) For each t > 0, ω ∈ Ω, N(t, ·)(ω) is a counting measure on B(R^d − {0}).

(2) For each A bounded below, (N(t, A), t ≥ 0) is a Poisson process with intensity µ(A) = E(N(1, A)).

(3) (Ñ(t, A), t ≥ 0) is a martingale-valued measure, where Ñ(t, A) = N(t, A) − tµ(A), for A bounded below.

3.5 Poisson Integration

Let f be a Borel measurable function from R^d to R^d and let A be bounded below; then for each t > 0, ω ∈ Ω, we define the Poisson integral of f as a random finite sum by

∫_A f(x)N(t, dx)(ω) = Σ_{x∈A} f(x)N(t, {x})(ω).

Note that each ∫_A f(x)N(t, dx)(ω) is an R^d-valued random variable and gives rise to a càdlàg stochastic process as we vary t. Now, since N(t, {x}) ≠ 0 ⟺ ∆X(u) = x for at least one 0 ≤ u ≤ t, we have

∫_A f(x)N(t, dx) = Σ_{0≤u≤t} f(∆X(u))χ_A(∆X(u)).

Let (T_n^A, n ∈ N) be the arrival times for the Poisson process (N(t, A), t ≥ 0). Then

∫_A f(x)N(t, dx) = Σ_{n∈N} f(∆X(T_n^A))χ_{[0,t]}(T_n^A).

Denote by µ_A the restriction to A of the measure µ.

Theorem Let A be bounded below. Then:

(1) For each t ≥ 0, ∫_A f(x)N(t, dx) has a compound Poisson distribution such that, for each u ∈ R^d,

E(exp[i(u, ∫_A f(x)N(t, dx))]) = exp[t∫_A (e^{i(u,x)} − 1)µ_f(dx)],

where µ_f = µ ∘ f^{−1};

(2) if f ∈ L¹(A, µ_A), we have

E(∫_A f(x)N(t, dx)) = t∫_A f(x)µ(dx);

(3) if f ∈ L²(A, µ_A), we have

Var(|∫_A f(x)N(t, dx)|) = t∫_A |f(x)|² µ(dx).

Result If f : R^d → R^d is Borel measurable then, almost surely,

Σ_{0≤u≤t} |f(∆X(u))|χ_A(∆X(u)) < ∞.
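Part (2) of the theorem above is easy to check numerically. The following Monte Carlo sketch is not from the report; the function names, the set A = {|x| ≥ 0.5} and the standard normal jump law are illustrative assumptions. It takes X to be a compound Poisson process, so that µ = λ·(law of Z), and compares the sample mean of ∫_A f(x)N(t, dx) with t∫_A f(x)µ(dx):

```python
import numpy as np

rng = np.random.default_rng(3)
lam, t = 4.0, 1.0                        # mu(dx) = lam * P_Z(dx), time horizon t
in_A = lambda x: np.abs(x) >= 0.5        # A bounded below: 0 is not in its closure
f = lambda x: x ** 2

def poisson_integral_sample():
    """One sample of int_A f(x) N(t, dx) when X is compound Poisson with
    rate lam and N(0, 1) jump sizes: sum f over the jumps landing in A."""
    n = rng.poisson(lam * t)             # number of jumps of X up to time t
    z = rng.normal(size=n)               # the jump sizes Delta X
    return np.sum(f(z) * in_A(z))

samples = [poisson_integral_sample() for _ in range(100_000)]
z = rng.normal(size=1_000_000)
theory = t * lam * np.mean(f(z) * in_A(z))   # t * int_A f dmu, itself by Monte Carlo
print(np.mean(samples), theory)              # the two values should roughly agree
```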

Consider the sequence of jump-size random variables (Y_f^A(n), n ∈ N), where each

Y_f^A(n) = ∫_A f(x)N(T_n^A, dx) − ∫_A f(x)N(T_{n−1}^A, dx).

Theorem

(1) (Y_f^A(n), n ∈ N) are i.i.d. with common law given by

P(Y_f^A(n) ∈ B) = µ(A ∩ f^{−1}(B)) / µ(A)

for each B ∈ B(R^d).

(2) (∫_A f(x)N(t, dx), t ≥ 0) is a compound Poisson process.

For A, B bounded below and f ∈ L²(A, µ_A), g ∈ L²(B, µ_B), one can show that

⟨∫_A f(x)Ñ(t, dx), ∫_B g(x)Ñ(t, dx)⟩ = t∫_{A∩B} f(x)g(x)µ(dx).

For each A bounded below define

M_A = {∫_A f(x)Ñ(t, dx), f ∈ L²(A, µ_A)}.

Then M_A is a closed subspace of the martingale space M. One can also deduce that lim_{n→∞} T_n^A = ∞ almost surely whenever A is bounded below.

Result Let N be a Poisson random measure with intensity µ that counts the jumps of a Lévy process X, and let f : R^d → R^d be Borel measurable. For A bounded below, let Y = (Y(t), t ≥ 0) be given by Y(t) = ∫_A f(x)N(t, dx); then Y is of finite variation on [0, t] for each t ≥ 0, since for all partitions P of [0, t], almost surely,

var_P(Y) ≤ Σ_{0≤s≤t} |f(∆X(s))|χ_A(∆X(s)) < ∞.

Let Y be a Poisson integral and let η be its Lévy symbol. For each u ∈ R^d, consider the martingales M_u = (M_u(t), t ≥ 0), where each

M_u(t) = e^{i(u,Y(t))−tη(u)};

then M_u is of finite variation.

3.6 Examples Of Lévy Processes

3.6.1 Brownian Motion and Gaussian Process

A standard Brownian motion in R^d is a Lévy process B = (B(t), t ≥ 0) for which

(1) B(t) ∼ N(0, tI) for each t ≥ 0,

(2) B has continuous sample paths.

We can then see that the characteristic function of Brownian motion B is given by

ϕ_{B(t)}(u) = exp(−½t|u|²)

for each u ∈ R^d, t ≥ 0. Consider the marginal processes B_i = (B_i(t), t ≥ 0), where B_i(t) is the ith component of B(t); then the B_i are mutually independent one-dimensional Brownian motions in R.
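A minimal simulation sketch, not part of the report (NumPy-based, with illustrative parameter choices), of a standard Brownian motion in R^d built from independent Gaussian increments:

```python
import numpy as np

rng = np.random.default_rng(0)

def brownian_motion(T=1.0, n_steps=1000, d=2):
    """Standard d-dimensional Brownian motion on [0, T]: B(0) = 0 and
    independent N(0, dt) increments in each coordinate, so B(t) ~ N(0, t I)."""
    dt = T / n_steps
    increments = rng.normal(0.0, np.sqrt(dt), size=(n_steps, d))
    path = np.vstack([np.zeros(d), np.cumsum(increments, axis=0)])
    return np.linspace(0.0, T, n_steps + 1), path

t, B = brownian_motion()
# The coordinate processes B[:, i] are independent 1-dimensional Brownian motions.
```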

3.6.2 Poisson Process


The Poisson process of intensity λ > 0 is a Lévy process N taking values in N ∪ {0} wherein each N(t) ∼ π(λt), so that we have

P(N(t) = n) = ((λt)^n/n!)e^{−λt}

for each n = 0, 1, 2, .... We have seen that the waiting times T_n are Gamma distributed, and that the inter-arrival times T_n − T_{n−1} are exponentially distributed with mean 1/λ.
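The exponential inter-arrival times give a direct way to simulate the process; a short sketch (illustrative, not from the report):

```python
import numpy as np

rng = np.random.default_rng(1)

def poisson_arrival_times(lam, T):
    """Arrival times T_1 < T_2 < ... of a rate-lam Poisson process on [0, T],
    built from i.i.d. inter-arrival times with mean 1/lam; then
    N(t) = #{n : T_n <= t} and E[N(T)] = lam * T."""
    arrivals = []
    t = rng.exponential(1.0 / lam)       # first arrival time
    while t <= T:
        arrivals.append(t)
        t += rng.exponential(1.0 / lam)  # next inter-arrival time
    return np.array(arrivals)

T_n = poisson_arrival_times(lam=3.0, T=10.0)
N_T = len(T_n)                           # N(10), approximately Poisson(30)
```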

3.6.3 Compound Poisson Process

The compound Poisson process Y is a Lévy process. To verify stochastic continuity, note that for a > 0, by conditioning and independence, we have

P(|Y(t)| > a) = Σ_{n=0}^∞ P(|Z(1) + ... + Z(n)| > a)P(N(t) = n),

and the required result follows by dominated convergence as t → 0. We see that the sample paths of Y are piecewise constant on finite intervals, with 'jump discontinuities' at the random times (T(n), n ∈ N); the sizes of these jumps are random, and the jump at T(n) can take any value in the range of the random variable Z(n).
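The piecewise-constant paths described above are easy to simulate. The following sketch is illustrative only (the choice of normal jump sizes Z(n) and all parameter values are assumptions); it produces the jump times T(n) and the values Y(T(n)), which determine the whole trajectory:

```python
import numpy as np

rng = np.random.default_rng(2)

def compound_poisson(lam, T, jump_sampler):
    """Jump times T(n) of a rate-lam Poisson process on [0, T] together with
    the running sums Y(T(n)) = Z(1) + ... + Z(n); between jump times the
    path of Y is constant."""
    times = []
    t = rng.exponential(1.0 / lam)
    while t <= T:
        times.append(t)
        t += rng.exponential(1.0 / lam)
    jumps = jump_sampler(len(times))
    return np.array(times), np.cumsum(jumps)

# Example with N(0, 1)-distributed jump sizes Z(n).
times, values = compound_poisson(5.0, 1.0, lambda n: rng.normal(size=n))
```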

Proposition If (N_1(t), t ≥ 0) and (N_2(t), t ≥ 0) are two independent Poisson processes defined on the same probability space, with arrival times (T_n^{(j)}, n ∈ N) for j = 1, 2 respectively, then

P(T_m^{(1)} = T_n^{(2)} for some m, n ∈ N) = 0.

3.6.4 Interlacing Processes

Let C be a Gaussian Lévy process and let Y be a compound Poisson process that is independent of C. Define a new process X by

X(t) = C(t) + Y(t),

so that

i) X(t) = C(t) for 0 ≤ t < T_1,

ii) X(t) = C(T_1) + Z_1 for t = T_1,

iii) X(t) = X(T_1) + C(t) − C(T_1) for T_1 < t < T_2,

iv) X(t) = X(T_2−) + Z_2 for t = T_2,

and so on recursively. This is called interlacing, since a continuous path is interlaced with random jumps.

3.6.5 Stable Lévy Process

We say that a stochastic process X = (X(t), t ≥ 0) is stable if all its finite-dimensional distri-

butions are stable. A stable Lévy process is a Lévy process X in which each X(t) is a stable

random variable. In the rotationally invariant case the Lévy symbol is given by

η(u) = −σ^α|u|^α;

here 0 < α ≤ 2 is the index of stability and σ > 0.

Stable Lévy processes display self-similarity. In general, a stochastic process (Y(t), t ≥ 0) is self-similar with Hurst index H > 0 if the two processes (Y(at), t ≥ 0) and (a^H Y(t), t ≥ 0) have the same finite-dimensional distributions for all a > 0. A rotationally invariant stable Lévy process is self-similar with Hurst index H = 1/α. In particular, a Lévy process X is self-similar if and only if each X(t) is strictly stable.
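A sketch of a symmetric α-stable Lévy path in d = 1 that exploits exactly this H = 1/α self-similarity to scale the increments. This is an assumption-laden illustration, not from the report: it relies on SciPy's levy_stable parameterization, and the parameter values are arbitrary.

```python
import numpy as np
from scipy.stats import levy_stable

alpha = 1.5                               # index of stability, 0 < alpha <= 2

def stable_levy_path(T=1.0, n_steps=1000, seed=4):
    """Symmetric alpha-stable Lévy path: each increment over a step of
    length dt is dt**(1/alpha) times a standard symmetric alpha-stable
    variable, which is the H = 1/alpha self-similar scaling."""
    dt = T / n_steps
    incs = dt ** (1.0 / alpha) * levy_stable.rvs(alpha, 0.0, size=n_steps,
                                                 random_state=seed)
    return np.concatenate([[0.0], np.cumsum(incs)])
```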


3.7 Lévy-Itô decomposition

Proposition Let M_j, j = 1, 2, be two càdlàg centred martingales. Suppose that, for some j, M_j is L² and that, for each t ≥ 0, E(|V(M_k)(t)|²) < ∞ where k ≠ j; then

E[(M_1(t), M_2(t))] = E(Σ_{0≤s≤t}(∆M_1(s), ∆M_2(s))).

Result Let A and B be bounded below and suppose that f ∈ L²(A, µ_A), g ∈ L²(B, µ_B). For each t ≥ 0 let M_1(t) = ∫_A f(x)Ñ(t, dx) and M_2(t) = ∫_B g(x)Ñ(t, dx); then

V(M_1)(t) ≤ V(∫_A f(x)N(t, dx)) + V(t∫_A f(x)µ(dx)) ≤ ∫_A |f(x)|N(t, dx) + t∫_A |f(x)|µ(dx).

Note that the proposition does not hold when M_1 = M_2 = B, where B is a standard Brownian motion.

Result Let N = (N(t), t ≥ 0) be a Poisson process with arrival times (T_n, n ∈ N) and let M be a centred càdlàg L²−martingale. Then for each t ≥ 0,

E(M(t)N(t)) = E(Σ_{n∈N} ∆M(T_n)χ_{{T_n≤t}}).

Result Let A be bounded below and let M be a centred càdlàg L²−martingale that is continuous at the arrival times of (N(t, A), t ≥ 0); then M is orthogonal to every process in M_A. For A bounded below note that, for each t ≥ 0,

∫_A xN(t, dx) = Σ_{0≤u≤t} ∆X(u)χ_A(∆X(u))

is the sum of all jumps taking values in the set A up to time t. Since the paths of X are càdlàg, this is a finite random sum.


Theorem If A_p, p = 1, 2, are disjoint and bounded below, then (∫_{A_1} xN(t, dx), t ≥ 0) and (∫_{A_2} xN(t, dx), t ≥ 0) are independent stochastic processes.

Theorem If X is a Lévy process with bounded jumps, then E(|X(t)|^m) < ∞ for all m ∈ N.

Consider the compound Poisson process

(∫_{|x|≥a} xN(t, dx), t ≥ 0)

and define a new stochastic process Y_a = (Y_a(t), t ≥ 0) by the prescription

Y_a(t) = X(t) − ∫_{|x|≥a} xN(t, dx).

Theorem Ya is a Lévy process.

Corollary A Lévy process has bounded jumps if and only if it is of the form Ya for some a > 0.


Result Show that E(Ya (t)) = tE(Ya (1)) for each t ≥ 0.

Theorem For each t ≥ 0,

Ŷ(t) = Y_c(t) + Y_d(t),

where Y_c and Y_d are independent Lévy processes, Y_c has continuous sample paths and

Y_d(t) = ∫_{|x|<1} xÑ(t, dx).

Corollary The intensity measure µ of the Poisson random measure N is a Lévy measure.

Corollary For each t ≥ 0, u ∈ R^d,

E(e^{i(u,Y_d(t))}) = exp{t∫_{|x|<1}[e^{i(u,x)} − 1 − i(u, x)]µ(dx)}.

Result For each t ≥ 0, 1 ≤ i ≤ d,

⟨Y_d^i, Y_d^i⟩(t) = t∫_{|x|<1} x_i² µ(dx).

Theorem Y_c is a Brownian motion.

The Lévy-Itô decomposition If X is a Lévy process, then there exist b ∈ R^d, a Brownian motion B_A with covariance matrix A and an independent Poisson random measure N on R^+ × (R^d − {0}) such that, for each t ≥ 0,

X(t) = bt + B_A(t) + ∫_{|x|<1} xÑ(t, dx) + ∫_{|x|≥1} xN(t, dx).
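The decomposition suggests a direct way to simulate Lévy paths. The sketch below is not from the report: it keeps only the drift, the Gaussian part and the large jumps, omitting the compensated small-jump integral, and the parameter values and the jump law (supported in {|x| ≥ 1}) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

def truncated_levy_path(b=0.1, sigma=0.3, lam=2.0, T=1.0, n_steps=1000):
    """X(t) = b t + sigma B(t) + (jumps with |x| >= 1): a truncation of the
    Lévy-Itô decomposition.  Large jumps come from a rate-lam compound
    Poisson process with sizes uniform on [-2, -1] and [1, 2]."""
    dt = T / n_steps
    t = np.linspace(0.0, T, n_steps + 1)
    brownian = np.concatenate([[0.0],
                               np.cumsum(rng.normal(0.0, np.sqrt(dt), n_steps))])
    X = b * t + sigma * brownian
    n_jumps = rng.poisson(lam * T)
    jump_times = rng.uniform(0.0, T, n_jumps)
    jump_sizes = rng.choice([-1.0, 1.0], n_jumps) * rng.uniform(1.0, 2.0, n_jumps)
    for s, x in zip(jump_times, jump_sizes):
        X[t >= s] += x                    # add each jump from its arrival time on
    return t, X
```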

Result An α−stable Lévy process has finite mean if 1 < α ≤ 2 and infinite mean otherwise.

Result If X is a Lévy process then, for each t ≥ 0, almost surely

Σ_{0≤s≤t}[∆X(s)]² < ∞.

Corollary If X is a Lévy process then, for each u ∈ R^d, t ≥ 0,

E(e^{i(u,X(t))}) = exp(t{i(b, u) − ½(u, Au) + ∫_{R^d−{0}}[e^{i(u,y)} − 1 − i(u, y)χ_B(y)]µ(dy)}).
The process (∫_{|x|<1} xÑ(t, dx), t ≥ 0) is the compensated sum of small jumps, while the process (∫_{|x|≥1} xN(t, dx), t ≥ 0) describes the large jumps and is a compound Poisson process.

Corollary The characteristics (b, A, µ) of a Lévy process are uniquely determined by the process.

Corollary Let G be a group of matrices acting on R^d. A Lévy process is G-invariant if and only if, for each g ∈ G,

b = gb, A = gAg^T and µ is G−invariant.


A Lévy process is O(d)−invariant if and only if it has characteristics (0, aI, µ) where a ≥ 0 and µ is O(d)−invariant. A Lévy process is symmetric if and only if it has characteristics (0, A, µ) where A is a positive semidefinite symmetric matrix and µ is symmetric, i.e. µ(B) = µ(−B) for all B ∈ B(R^d − {0}).

Result Let X be a Lévy process for which

∫_{|x|≥1} |x|^n µ(dx) < ∞

for all n ≥ 2. For each n ≥ 2, t ≥ 0, define

X^{(n)}(t) = Σ_{0≤s≤t}[∆X(s)]^n and Y^{(n)}(t) = X^{(n)}(t) − E(X^{(n)}(t)).

Then each (Y^{(n)}(t), t ≥ 0) is a martingale. Such processes are called Teugels martingales.

Jump and Creep

Suppose that X is a Lévy process with Lévy-Itô decomposition of the form

X(t) = ∫_{|x|<1} xÑ(t, dx)

for all t ≥ 0. We see the outcome of a competition between an infinite number of jumps of small size and an infinite drift. Suppose here that d = 1 and µ((0, 1)) > 0. For each x > 0, let

T_x = inf{t ≥ 0; X(t) > x};

then

P(X(T_x−) = x < X(T_x)) = P(X(T_x−) < x = X(T_x)) = 0,

so that the paths either jump across x or hit x continuously. Furthermore, either P(X(T_x) = x) > 0 for all x > 0 or P(X(T_x) = x) = 0 for all x > 0. In the first case every positive point can be hit continuously by X, and this phenomenon is called creep; in the second case only jumps can occur, almost surely. The conditions for creep or jump can be classified completely for general one-dimensional Lévy processes in terms of their characteristics. For example, a sufficient condition for creep is A = 0 together with ∫_{−1}^{0} |x|µ(dx) = ∞ and ∫_{0}^{1} xµ(dx) < ∞. This is satisfied by spectrally negative α−stable Lévy processes (0 < α < 2), i.e. those for which c_1 = 0, where the Lévy measure is given by

µ(dx) = (c_1/x^{1+α})χ_{(0,∞)}(x)dx + (c_2/|x|^{1+α})χ_{(−∞,0)}(x)dx.

Chapter 4

Itô Integral

4.1 Introduction to Itô Integrals

Consider the simple model

dN/dt = a(t)N(t), N(0) = N_0 (constant).

It might happen that a(t) is not completely known, but subject to some "noise". We thus write the equation in the form

dX/dt = b(t, X_t) + σ(t, X_t)·"noise".

Consider the case where the noise is 1-dimensional. It is reasonable to look for some stochastic process W_t to represent the noise term, so that

dX/dt = b(t, X_t) + σ(t, X_t)·W_t.

We can assume that W_t has, at least approximately, these properties:

(i) t_1 ≠ t_2 ⇒ W_{t_1} and W_{t_2} are independent;

(ii) {W_t} is stationary, i.e. the (joint) distribution of {W_{t_1+t}, ..., W_{t_k+t}} does not depend on t;

(iii) E[W_t] = 0 for all t.

However, there is no "reasonable" stochastic process satisfying (i) and (ii): such a W_t cannot have continuous paths. We can, however, represent W_t as a generalized stochastic process known as the white noise process.

A process is said to be generalized if it can be constructed as a probability measure on the space S′ of tempered distributions on the interval [0, ∞), and not as a probability measure on the much smaller space R^{[0,∞)}, like an ordinary process. Let 0 = t_0 < t_1 < ... < t_m = t and consider a discrete version of the equation:

X_{k+1} − X_k = b(t_k, X_k)∆t_k + σ(t_k, X_k)W_k∆t_k, where

X_j = X(t_j), W_k = W_{t_k}, ∆t_k = t_{k+1} − t_k.

We replace W_k∆t_k by ∆V_k = V_{t_{k+1}} − V_{t_k}, where {V_t}_{t≥0} is some suitable stochastic process; the assumptions (i), (ii) and (iii) on W_t suggest that V_t should have stationary independent increments with mean 0. It turns out that the only such process with continuous paths is Brownian motion B_t. Thus we put V_t = B_t and obtain

X_k = X_0 + Σ_{j=0}^{k−1} b(t_j, X_j)∆t_j + Σ_{j=0}^{k−1} σ(t_j, X_j)∆B_j.

As ∆t_j → 0, this suggests

X_t = X_0 + ∫_0^t b(s, X_s)ds + "∫_0^t σ(s, X_s)dB_s",

where B_t(ω) is 1-dimensional Brownian motion starting at the origin. We therefore want to make sense of the last integral for a wide class of functions f : [0, ∞) × Ω → R. Suppose that 0 ≤ S < T and f(t, ω) is given. We want to define

∫_S^T f(t, ω)dB_t(ω).

We begin with a definition for a simple class of functions f and then extend by an approximation procedure. Let us assume that f has the form

ϕ(t, ω) = Σ_{j≥0} e_j(ω)·χ_{[j·2^{−n},(j+1)·2^{−n})}(t),

where χ denotes the characteristic (indicator) function and n is a natural number. For such functions it is reasonable to define

∫_S^T ϕ(t, ω)dB_t(ω) = Σ_{j≥0} e_j(ω)[B_{t_{j+1}} − B_{t_j}](ω),

where

t_k = t_k^{(n)} = k·2^{−n} if S ≤ k·2^{−n} ≤ T; S if k·2^{−n} < S; T if k·2^{−n} > T.

Result Choose

ϕ_1(t, ω) = Σ_{j≥0} B_{j·2^{−n}}(ω)·χ_{[j·2^{−n},(j+1)·2^{−n})}(t),
ϕ_2(t, ω) = Σ_{j≥0} B_{(j+1)·2^{−n}}(ω)·χ_{[j·2^{−n},(j+1)·2^{−n})}(t).

Then

E[∫_0^T ϕ_1(t, ω)dB_t(ω)] = Σ_{j≥0} E[B_{t_j}(B_{t_{j+1}} − B_{t_j})] = 0,

since {B_t} has independent increments, but

E[∫_0^T ϕ_2(t, ω)dB_t(ω)] = Σ_{j≥0} E[B_{t_{j+1}}·(B_{t_{j+1}} − B_{t_j})] = Σ_{j≥0} E[(B_{t_{j+1}} − B_{t_j})²] = T.

Although both ϕ_1 and ϕ_2 appear to be reasonable approximations to f(t, ω) = B_t(ω), their integrals are not close to each other at all, no matter how large n is chosen.
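This discrepancy is easy to reproduce numerically. The sketch below (illustrative, not from the report) compares the left-endpoint sums (corresponding to ϕ_1) with the right-endpoint sums (corresponding to ϕ_2) over many sample paths:

```python
import numpy as np

rng = np.random.default_rng(6)
T, n = 1.0, 2 ** 8                        # partition of [0, T] into n pieces
dt = T / n

def endpoint_sums():
    """One sample of the two approximating sums for 'int_0^T B dB':
    sum_j B_{t_j}(B_{t_{j+1}} - B_{t_j})   (left endpoints, as in phi_1) and
    sum_j B_{t_{j+1}}(B_{t_{j+1}} - B_{t_j})   (right endpoints, as in phi_2)."""
    B = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))])
    dB = np.diff(B)
    return np.sum(B[:-1] * dB), np.sum(B[1:] * dB)

sums = np.array([endpoint_sums() for _ in range(20_000)])
print(sums.mean(axis=0))   # approximately [0, T]: the expectations differ by T
```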

We see that the variations of the paths of B_t are too big to enable us to define the integral ∫_S^T f(t, ω)dB_t(ω) pathwise: the paths t → B_t of Brownian motion are nowhere differentiable almost surely, so the total variation of each path is infinite almost surely. We can, however, approximate a given function f(t, ω) by

Σ_j f(t_j*, ω)·χ_{[t_j,t_{j+1})}(t),

where the points t_j* belong to the intervals [t_j, t_{j+1}], and then define ∫_S^T f(t, ω)dB_t(ω) as the limit of Σ_j f(t_j*, ω)[B_{t_{j+1}} − B_{t_j}](ω) as n → ∞. We have two choices:

• t_j* = t_j (the left end point), which leads to the Itô integral, denoted by ∫_S^T f(t, ω)dB_t(ω);

• t_j* = (t_j + t_{j+1})/2 (the mid point), which leads to the Stratonovich integral, denoted by ∫_S^T f(t, ω) ∘ dB_t(ω).

We observe from the procedure that f has the property that each of the functions ω → f (tj , ω)

only depends on the behaviour of Bs (ω) up to time tj .


Definition Let B_t(ω) be n−dimensional Brownian motion. Then we define F_t = F_t^{(n)} to be the σ−algebra generated by the random variables {B_i(s)}_{1≤i≤n, 0≤s≤t}. In other words, F_t is the smallest σ−algebra containing all sets of the form

{ω; B_{t_1}(ω) ∈ F_1, ..., B_{t_k}(ω) ∈ F_k},

where t_j ≤ t and F_j ⊂ R^n are Borel sets, j ≤ k, k = 1, 2, ... (We assume that all sets of measure zero are included in F_t.)

We can think of F_t as the "history of B_s up to time t". A function h(ω) will be F_t−measurable if and only if h can be written as the pointwise a.e. limit of sums of functions of the form

g_1(B_{t_1})g_2(B_{t_2})...g_k(B_{t_k}),

where g_1, ..., g_k are bounded continuous functions and t_j ≤ t for j ≤ k, k = 1, 2, .... Intuitively, h_1(ω) = B_{t/2}(ω) is F_t−measurable, while h_2(ω) = B_{2t}(ω) is not. Note that F_s ⊂ F_t for s < t (i.e. {F_t} is increasing) and that F_t ⊂ F for all t.

Definition Let {Nt }t≥0 be an increasing family of σ−algebras of subsets of Ω. A process

g(t, ω) : [0, ∞) × Ω → Rn is called Nt −adapted if for each t ≥ 0 the function

ω → g(t, ω)

is Nt −measurable.

Thus the process h_1(t, ω) = B_{t/2}(ω) is F_t−adapted, while h_2(t, ω) = B_{2t}(ω) is not.

4.2 Construction of Itô Integral

Definition Let V = V(S, T) be the class of functions

f(t, ω) : [0, ∞) × Ω → R

such that

(i) (t, ω) → f(t, ω) is B × F−measurable, where B denotes the Borel σ−algebra on [0, ∞);

(ii) f(t, ω) is F_t−adapted;

(iii) E[∫_S^T f(t, ω)² dt] < ∞.

For functions f ∈ V we will define the Itô integral as

I[f](ω) = ∫_S^T f(t, ω)dB_t(ω),

where B_t is a 1−dimensional Brownian motion. We first define I[ϕ] for a simple class of functions ϕ; then we show that each f ∈ V can be approximated by such ϕ's, and we use this to define ∫f dB as the limit of ∫ϕ dB as ϕ → f. A function ϕ ∈ V is called elementary if it has the form

ϕ(t, ω) = Σ_j e_j(ω)·χ_{[t_j,t_{j+1})}(t).

Since ϕ ∈ V, each function e_j must be F_{t_j}−measurable. Note that, of the functions considered earlier, ϕ_1 is elementary whereas ϕ_2 is not.

For elementary functions ϕ(t, ω) we define the integral accordingly as

∫_S^T ϕ(t, ω)dB_t(ω) = Σ_{j≥0} e_j(ω)[B_{t_{j+1}} − B_{t_j}](ω).

Lemma (The Itô isometry) If ϕ(t, ω) is bounded and elementary then

E[(∫_S^T ϕ(t, ω)dB_t(ω))²] = E[∫_S^T ϕ(t, ω)² dt].

We now use the isometry to extend the definition from elementary functions to functions in V, using the following steps:

• Step 1 Let g ∈ V be bounded and g(·, ω) continuous for each ω. Then there exist elementary functions ϕ_n ∈ V such that E[∫_S^T (g − ϕ_n)² dt] → 0 as n → ∞.

• Step 2 Let h ∈ V be bounded. Then there exist bounded functions g_n ∈ V such that g_n(·, ω) is continuous for all ω and n, and E[∫_S^T (h − g_n)² dt] → 0.

• Step 3 Let f ∈ V. Then there exists a sequence {h_n} ⊂ V such that h_n is bounded for each n and E[∫_S^T (f − h_n)² dt] → 0 as n → ∞.

If f ∈ V, we choose, by Steps 1-3, elementary functions ϕ_n ∈ V such that E[∫_S^T |f − ϕ_n|² dt] → 0.

Then define

I[f](ω) := ∫_S^T f(t, ω)dB_t(ω) := lim_{n→∞} ∫_S^T ϕ_n(t, ω)dB_t(ω).

The limit exists as an element of L²(P), since {∫_S^T ϕ_n(t, ω)dB_t(ω)} forms a Cauchy sequence in L²(P). We now define the Itô integral as follows.

Itô Integral Let f ∈ V(S, T). Then the Itô integral of f (from S to T) is defined by

∫_S^T f(t, ω)dB_t(ω) = lim_{n→∞} ∫_S^T ϕ_n(t, ω)dB_t(ω) (limit in L²(P)),

where {ϕ_n} is a sequence of elementary functions such that

E[∫_S^T (f(t, ω) − ϕ_n(t, ω))² dt] → 0 as n → ∞.

Corollary (The Itô Isometry) E[(∫_S^T f(t, ω)dB_t)²] = E[∫_S^T f²(t, ω)dt] for all f ∈ V(S, T).

Corollary If f(t, ω) ∈ V(S, T) and f_n(t, ω) ∈ V(S, T) for n = 1, 2, ..., and E[∫_S^T (f_n(t, ω) − f(t, ω))² dt] → 0 as n → ∞, then

∫_S^T f_n(t, ω)dB_t(ω) → ∫_S^T f(t, ω)dB_t(ω) in L²(P) as n → ∞.

Result Assume B_0 = 0. Then

∫_0^t B_s dB_s = ½B_t² − ½t.

The extra term −½t shows that the Itô stochastic integral does not behave like an ordinary integral.
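A quick numerical check of this identity (an illustrative sketch, not from the report): the left-endpoint sums converge pathwise to ½B_t² − ½t as the mesh shrinks.

```python
import numpy as np

rng = np.random.default_rng(7)
T, n = 1.0, 2 ** 14
dt = T / n

# Left-endpoint (Itô) sums for int_0^T B dB against the closed form.
B = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))])
ito_sum = np.sum(B[:-1] * np.diff(B))
closed_form = 0.5 * B[-1] ** 2 - 0.5 * T
print(ito_sum, closed_form)    # the two agree up to the discretization error
```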

4.3 Properties of Itô Integral

Theorem Let f, g ∈ V(0, T) and let 0 ≤ S < U < T. Then

• ∫_S^T f dB_t = ∫_S^U f dB_t + ∫_U^T f dB_t for a.a. ω;

• ∫_S^T (cf + g)dB_t = c·∫_S^T f dB_t + ∫_S^T g dB_t (c constant) for a.a. ω;

• E[∫_S^T f dB_t] = 0;

• ∫_S^T f dB_t is F_T−measurable.

An important property of the Itô integral is that it is a martingale. Brownian motion B_t in R^n is a martingale w.r.t. the σ−algebra generated by {B_s : s ≤ t}. Since the Itô integral is a martingale, it also satisfies Doob's maximal inequality.

Theorem Let f ∈ V(0, T). Then there exists a t−continuous version of

∫_0^t f(s, ω)dB_s(ω); 0 ≤ t ≤ T,

i.e. there exists a t−continuous stochastic process J_t on (Ω, F, P) such that

P[J_t = ∫_0^t f dB] = 1 for all t, 0 ≤ t ≤ T.

Corollary Let f(t, ω) ∈ V(0, T) for all T. Then

M_t(ω) = ∫_0^t f(s, ω)dB_s

is a martingale w.r.t. F_t, and

P[sup_{0≤t≤T} |M_t| ≥ λ] ≤ (1/λ²)·E[∫_0^T f(s, ω)² ds]; λ, T > 0.

4.4 Extension of Itô Integral


The Itô integral ∫f dB can be defined for a larger class of integrands f than V. First, the measurability conditions given earlier can be relaxed to the following:

(i) (t, ω) → f(t, ω) is B × F−measurable, where B denotes the Borel σ−algebra on [0, ∞);

(ii) there exists an increasing family of σ−algebras H_t, t ≥ 0, such that

(a) B_t is a martingale with respect to H_t and

(b) f_t is H_t−adapted;

(iii) E[∫_S^T f(t, ω)² dt] < ∞.

Note that (ii)(a) implies that F_t ⊂ H_t. The essence of this extension is that we can allow f_t to depend on more than F_t, as long as B_t remains a martingale with respect to the "history" of f_s; s ≤ t. If (ii) holds, then E[B_s − B_t|H_t] = 0 for all s > t, and this is sufficient to carry out the construction of the Itô integral as before.

Consider a situation where B_t(ω) = B_k(t, ω) is the kth coordinate of n−dimensional Brownian motion (B_1, ..., B_n). Let F_t^{(n)} be the σ−algebra generated by B_1(s_1, ·), ..., B_n(s_n, ·); s_k ≤ t. Then B_k(t, ω) is a martingale with respect to F_t^{(n)}, because B_k(s, ·) − B_k(t, ·) is independent of F_t^{(n)} when s > t. We choose H_t = F_t^{(n)} in (ii) above. Thus we have now defined ∫_0^t f(s, ω)dB_k(s, ω) for F_t^{(n)}−adapted integrands f(t, ω). That includes integrals like

∫ B_2 dB_1 or ∫ sin(B_1² + B_2²)dB_2,

involving several components of n−dimensional Brownian motion.

Multi-Dimensional Itô Integral Let B = (B_1, B_2, ..., B_n) be n−dimensional Brownian motion. Then V_H^{m×n}(S, T) denotes the set of m×n matrices v = [v_{ij}(t, ω)] in which each entry v_{ij}(t, ω) satisfies the measurability conditions (i) and (iii) above with respect to some filtration H = {H_t}_{t≥0}. Writing dB = (dB_1, ..., dB_n)^T as a column vector, we define

∫_S^T v dB

to be the m×1 matrix (column vector) whose ith component is the following sum of (extended) 1-dimensional Itô integrals:

Σ_{j=1}^n ∫_S^T v_{ij}(s, ω)dB_j(s, ω).

If H = F^{(n)} = {F_t^{(n)}}_{t≥0} we write V^{m×n}(S, T), and if m = 1 we write V_H^n(S, T) (respectively V^n(S, T)) instead of V_H^{n×1}(S, T) (respectively V^{n×1}(S, T)). We also put

V^{m×n} = V^{m×n}(0, ∞) = ∩_{T>0} V^{m×n}(0, T).

Now consider the Itô integral defined for a larger class, with the measurability conditions relaxed to:

(i) (t, ω) → f(t, ω) is B × F−measurable, where B denotes the Borel σ−algebra on [0, ∞);

(ii) there exists an increasing family of σ−algebras H_t, t ≥ 0, such that

(a) B_t is a martingale with respect to H_t and

(b) f_t is H_t−adapted;

(iii) P[∫_S^T f(s, ω)² ds < ∞] = 1.

Let W_H(S, T) denote the class of processes f(t, ω) ∈ R satisfying the above measurability criteria. Let W_H = ∩_{T>0} W_H(0, T), and in the matrix case we write W_H^{m×n}(S, T) etc. If H = F^{(n)} we write W(S, T) instead of W_{F^{(n)}}(S, T), and so on.

Let B_t denote 1−dimensional Brownian motion. If f ∈ W_H, then for all t there exist step functions f_n ∈ W_H such that ∫_0^t |f_n − f|² ds → 0 in probability, i.e. in measure with respect to P. For such a sequence, ∫_0^t f_n(s, ω)dB_s converges in probability to some random variable, and the limit depends only on f and not on the sequence {f_n}. Thus we define

∫_0^t f(s, ω)dB_s(ω) = lim_{n→∞} ∫_0^t f_n(s, ω)dB_s(ω) (limit in probability) for f ∈ W_H.

4.5 A comparison of Itô and Stratonovich integrals

The mathematical interpretation of the white noise equation

dX/dt = b(t, X_t) + σ(t, X_t)·W_t

is that X_t is a solution of the integral equation

X_t = X_0 + ∫_0^t b(s, X_s)ds + "∫_0^t σ(s, X_s)dB_s",

for a suitable interpretation of the last integral. However, as indicated earlier, the Itô interpretation of an integral of the form

"∫_0^t f(s, ω)dB_s(ω)"

is just one of several reasonable choices. The Stratonovich integral is another possibility, leading to a different result. The Stratonovich interpretation may in some situations be the most appropriate one: choose continuously differentiable processes B_t^{(n)} such that for a.a. ω

B^{(n)}(t, ω) → B(t, ω) as n → ∞,

uniformly (in t) on bounded intervals. For each ω let X_t^{(n)}(ω) be the solution of the corresponding (deterministic) differential equation

dX_t/dt = b(t, X_t) + σ(t, X_t)·dB_t^{(n)}/dt.

Then X_t^{(n)}(ω) converges to some function X_t(ω) in the same sense: for a.a. ω we have that X_t^{(n)}(ω) → X_t(ω) as n → ∞, uniformly (in t) on bounded intervals. This limit coincides with the solution obtained by using Stratonovich integrals, i.e.

X_t = X_0 + ∫_0^t b(s, X_s)ds + ∫_0^t σ(s, X_s) ∘ dB_s.

This implies that X_t is the solution of the following modified Itô equation:

X_t = X_0 + ∫_0^t b(s, X_s)ds + ½∫_0^t σ′(s, X_s)σ(s, X_s)ds + ∫_0^t σ(s, X_s)dB_s,

where σ′ denotes the derivative of σ(t, x) with respect to x.
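As a sanity check on the correction term (a worked example, not in the original text), take b = 0 and σ(t, x) = x, so σ′σ = x. The Stratonovich equation dX_t = X_t ∘ dB_t then has the modified Itô form dX_t = ½X_t dt + X_t dB_t, whose solution is X_t = X_0 e^{B_t}; the ordinary chain rule applies, exactly as for a smooth driving path. By contrast, the Itô equation dX_t = X_t dB_t has solution X_t = X_0 e^{B_t − t/2}.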

Therefore, from this point of view, it is reasonable to use the Stratonovich interpretation, and not the Itô interpretation

X_t = X_0 + ∫_0^t b(s, X_s)ds + ∫_0^t σ(s, X_s)dB_s,

as the model for the original white noise equation.

The specific feature of the Itô model of ”not looking into the future” seems to be the reason for

choosing the Itô interpretation in many cases.

Because of the explicit connection between the two models (and a similar connection in higher dimensions), it will for many purposes suffice to do the general mathematical treatment for one of the two types of integrals. In general, the Stratonovich integral has the advantage of leading to ordinary chain-rule formulas under a transformation (change of variable), i.e. there are no second-order terms in the Stratonovich analogue of the Itô transformation formula. This property makes the Stratonovich integral natural to use, for example, in connection with stochastic differential equations on manifolds.

However, Stratonovich integrals are not martingales, whereas, as we have seen, Itô integrals are. This gives the Itô integral an important computational advantage, even though it does not behave so nicely under transformations.


4.6 Itô Formula

Definition Let B_t be 1−dimensional Brownian motion on (Ω, F, P). A (1-dimensional) Itô process (or stochastic integral) is a stochastic process X_t on (Ω, F, P) of the form

X_t = X_0 + ∫_0^t u(s, ω)ds + ∫_0^t v(s, ω)dB_s,

where v ∈ W_H, so that

P[∫_0^t v(s, ω)² ds < ∞ for all t ≥ 0] = 1.

We also assume that u is H_t−adapted and

P[∫_0^t |u(s, ω)|ds < ∞ for all t ≥ 0] = 1.

Theorem Let X_t be an Itô process given by

dX_t = u dt + v dB_t.

Let g(t, x) ∈ C²([0, ∞) × R) (i.e. g is twice continuously differentiable on [0, ∞) × R). Then

Y_t = g(t, X_t)

is again an Itô process, and

dY_t = (∂g/∂t)(t, X_t)dt + (∂g/∂x)(t, X_t)dX_t + ½(∂²g/∂x²)(t, X_t)·(dX_t)²,

where (dX_t)² = (dX_t)·(dX_t) is computed according to the rules

dt·dt = dt·dB_t = dB_t·dt = 0, dB_t·dB_t = dt.
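As a brief illustration (a worked example, not in the original text), take g(t, x) = x² and X_t = B_t, so that u = 0 and v = 1. Then Y_t = B_t² and the formula gives

dY_t = 2B_t dB_t + ½·2·(dB_t)² = 2B_t dB_t + dt,

i.e. ∫_0^t B_s dB_s = ½B_t² − ½t, recovering the earlier result.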

Theorem Suppose f(s, ω) is continuous and of bounded variation with respect to s ∈ [0, t] for a.a. ω. Then

∫_0^t f(s)dB_s = f(t)B_t − ∫_0^t B_s df_s.

Multi-Dimensional Itô Formula

Let B(t, ω) = (B_1(t, ω), ..., B_m(t, ω)) denote m−dimensional Brownian motion. If each of the processes u_i(t, ω) and v_{ij}(t, ω) satisfies the conditions above, then we can form the following n Itô processes:

dX_1 = u_1 dt + v_{11} dB_1 + ... + v_{1m} dB_m
dX_2 = u_2 dt + v_{21} dB_1 + ... + v_{2m} dB_m
...
dX_n = u_n dt + v_{n1} dB_1 + ... + v_{nm} dB_m,

or, in matrix notation,

dX(t) = u dt + v dB(t),

where X(t) = (X_1(t), ..., X_n(t))^T and u = (u_1, ..., u_n)^T are column vectors, v = [v_{ij}] is the n×m coefficient matrix and dB(t) = (dB_1(t), ..., dB_m(t))^T. Such a process X(t) is called an n-dimensional Itô process.

Theorem Let

dX(t) = u dt + v dB(t)

be an n-dimensional Itô process. Let g(t, x) = (g_1(t, x), ..., g_p(t, x)) be a C² map from [0, ∞) × R^n into R^p. Then the process

Y(t, ω) = g(t, X(t))

is again an Itô process, whose component number k, Y_k, is given by

dY_k = (∂g_k/∂t)(t, X)dt + Σ_i (∂g_k/∂x_i)(t, X)dX_i + ½Σ_{i,j} (∂²g_k/∂x_i∂x_j)(t, X)dX_i dX_j,

where dB_i dB_j = δ_{ij}dt and dB_i dt = dt dB_i = 0.

4.7 Stochastic Differential Equation

Theorem Let T > 0 and let b(·, ·) : [0, T] × R^n → R^n, σ(·, ·) : [0, T] × R^n → R^{n×m} be measurable functions satisfying

|b(t, x)| + |σ(t, x)| ≤ C(1 + |x|); x ∈ R^n, t ∈ [0, T]

for some constant C (where |σ|² = Σ|σ_{ij}|²), and such that

|b(t, x) − b(t, y)| + |σ(t, x) − σ(t, y)| ≤ D|x − y|; x, y ∈ R^n, t ∈ [0, T]

for some constant D. Let Z be a random variable which is independent of the σ−algebra F_∞^{(m)} generated by B_s(·), s ≥ 0, and such that

E[|Z|²] < ∞.

Then the stochastic differential equation

dX_t = b(t, X_t)dt + σ(t, X_t)dB_t, 0 ≤ t ≤ T, X_0 = Z

has a unique t−continuous solution X_t(ω) with the property that X_t(ω) is adapted to the filtration F_t^Z generated by Z and B_s(·); s ≤ t, and

E[∫_0^T |X_t|² dt] < ∞.
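Under these conditions the solution can also be approximated numerically. The following Euler-Maruyama sketch is a standard scheme, not part of the theorem itself; the function names and coefficient choices are illustrative assumptions, and it treats the 1-dimensional case:

```python
import numpy as np

rng = np.random.default_rng(8)

def euler_maruyama(b, sigma, x0, T=1.0, n_steps=1000):
    """Euler-Maruyama approximation of dX_t = b(t, X_t)dt + sigma(t, X_t)dB_t,
    X_0 = x0, on [0, T]: replace dt by a small step and dB_t by a
    N(0, dt) increment."""
    dt = T / n_steps
    t = np.linspace(0.0, T, n_steps + 1)
    X = np.empty(n_steps + 1)
    X[0] = x0
    for k in range(n_steps):
        dB = rng.normal(0.0, np.sqrt(dt))
        X[k + 1] = X[k] + b(t[k], X[k]) * dt + sigma(t[k], X[k]) * dB
    return t, X

# b(t, x) = 0.5 x and sigma(t, x) = 0.2 x satisfy the linear-growth and
# Lipschitz conditions of the theorem (geometric Brownian motion).
t, X = euler_maruyama(lambda t, x: 0.5 * x, lambda t, x: 0.2 * x, x0=1.0)
```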

4.7.1 Weak and Strong Solution

A solution X_t is called a strong solution if the version B_t of Brownian motion is given in advance and the solution X_t constructed from it is F_t^Z−adapted.

If we are given the functions b(t, x) and σ(t, x) and ask for a pair of processes ((X̃_t, B̃_t), H_t) on a probability space (Ω, H, P) such that

dX̃_t = b(t, X̃_t)dt + σ(t, X̃_t)dB̃_t

holds, then the solution X̃_t (or more precisely (X̃_t, B̃_t)) is called a weak solution. Here H_t is an increasing family of σ−algebras such that X̃_t is H_t−adapted and B̃_t is a martingale w.r.t. H_t (and so E[B̃_{t+h} − B̃_t|H_t] = 0 for all t, h ≥ 0).

A strong solution is also a weak solution but the converse is not true in general.

Strong or pathwise uniqueness means that if X_1(t, ω) and X_2(t, ω) are two t−continuous processes solving the equation, then

X_1(t, ω) = X_2(t, ω) for all t ≤ T, almost surely,

whereas weak uniqueness means that any two solutions are identical in law, i.e. have the same finite-dimensional distributions.

Lemma If b and σ satisfy the conditions of the existence and uniqueness theorem for stochastic differential equations, then the solution is weakly unique.

The weak solution concept is natural, as it does not specify beforehand the explicit representation of the white noise. Moreover, there exist stochastic differential equations which have no strong solution but nevertheless a weakly unique weak solution.

The Tanaka equation Consider the 1−dimensional stochastic differential equation

dX_t = sign(X_t)dB_t; X_0 = 0, where

sign(x) = +1 if x ≥ 0,
sign(x) = −1 if x < 0.

Note that σ(t, x) = σ(x) = sign(x) does not satisfy the Lipschitz condition, so we are not guaranteed existence and uniqueness of a solution. In fact, the equation has no strong solution.

Solution To see this, let B̂_t be a Brownian motion generating the filtration F̂_t and define

Y_t = ∫_0^t sign(B̂_s)dB̂_s.

By the Tanaka formula we know that

Y_t = |B̂_t| − |B̂_0| − L̂_t(ω),

where L̂_t(ω) is the local time for B̂_t(ω) at 0. It follows that Y_t is measurable w.r.t. the σ−algebra G_t generated by |B̂_s(·)|; s ≤ t, which is clearly strictly contained in F̂_t. Thus the σ−algebra N_t generated by Y_s(·); s ≤ t is also strictly contained in F̂_t. Now suppose that X_t is a strong solution. Then X_t is a Brownian motion w.r.t. the measure P. Let M_t be the σ−algebra generated by X_s(·); s ≤ t. Since (sign(x))² = 1 we can write the original equation as

dB_t = sign(X_t)dX_t.

Applying the above argument to B̂_t = X_t, Y_t = B_t, we conclude that F_t is strictly contained in M_t. But a strong solution X_t must be adapted to F_t, i.e. M_t ⊂ F_t, which is a contradiction; thus the equation has no strong solution.

To find a weak solution of the equation, we choose X_t to be any Brownian motion B̂_t. Then we define B̃_t by

B̃_t = ∫_0^t sign(B̂_s)dB̂_s = ∫_0^t sign(X_s)dX_s,

i.e.

dB̃_t = sign(X_t)dX_t.

Then

dX_t = sign(X_t)dB̃_t,

so X_t is a weak solution.

Since any weak solution X_t must be a Brownian motion w.r.t. P, any two solutions are identical in law; that is, the solution X_t is weakly unique.

References

Applebaum, David (2004). Lévy Processes and Stochastic Calculus. Cambridge University Press.

Billingsley, Patrick (1986). Probability and Measure. Wiley Series in Probability and Mathematical Statistics.

Klenke, Achim (2008). Probability Theory. Springer, London.

Mörters, Peter and Peres, Yuval (2003). Brownian Motion. Cambridge Series in Statistical and Probabilistic Mathematics.

Øksendal, Bernt (2000). Stochastic Differential Equations. Springer-Verlag, Berlin Heidelberg New York.

Taylor, Howard M. and Karlin, Samuel (1984). An Introduction to Stochastic Modeling. Academic Press.

