2 - WQU - CTSP - Module 2 - Compiled - Content PDF
In Module 2, we introduce stochastic integrals with respect to a Brownian motion and their
properties. The module begins by reviewing Stieltjes integrals to provide a
smooth transition from deterministic calculus to stochastic integrals. It continues by
defining the Ito integral and explaining the Ito diffusion process, and concludes with an
introduction to the Brownian Martingale Representation Theorem (MRT).
$\int_a^b f\, dg \quad \text{or} \quad \int_a^b f(x)\, dg(x).$

$|S(f, g, P) - A| < \epsilon.$

The unique number $A$ is called the RS integral of $f$ with respect to $g$ over $[a, b]$, and is denoted by

$A := \int_a^b f\, dg \equiv \int_a^b f(x)\, dg(x).$
$\lim_{\|P\| \to 0} S(f, g, P) = A,$

in the sense that for every $\epsilon > 0$, there exists $\delta > 0$ such that if $P$ is a tagged partition
with $\|P\| < \delta$, then $|S(f, g, P) - A| < \epsilon$. This definition is still used in many textbooks.
When $g(x) = x$, RS integration is the same as Riemann integration, and the two
definitions are in fact equivalent.
$V_g([a, b]) := \sup_P \sum_{i=1}^{n} |g(t_i) - g(t_{i-1})|,$

where the $\sup$ is taken over all partitions $P = \{t_0, t_1, \dots, t_n\}$ of $[a, b]$. We call $V_g([a, b])$ the total variation of
$g$ on $[a, b]$, and we say that $g$ is of bounded variation on $[a, b]$ if $V_g([a, b]) < \infty$.
If 𝑔 is continuously differentiable on [𝑎, 𝑏], then it is of bounded variation and the total
variation is given by
$V_g([a, b]) = \int_a^b |g'(x)|\, dx.$
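To make this concrete, here is a small numerical sketch (my own illustration, not from the source): for the continuously differentiable function $g(x) = \sin x$ on $[0, 2\pi]$, a fine partition sum of $|g(t_i) - g(t_{i-1})|$ should approach $\int_0^{2\pi} |\cos x|\, dx = 4$.

```python
# Numerical sanity check: total variation of g(x) = sin(x) on [0, 2*pi]
# versus the integral of |g'(x)| = |cos(x)|. Both should be close to 4.
import numpy as np

a, b, n = 0.0, 2 * np.pi, 100_000
x = np.linspace(a, b, n + 1)                             # a fine partition of [a, b]
g = np.sin(x)

variation_sum = np.sum(np.abs(np.diff(g)))               # sum |g(t_i) - g(t_{i-1})|
integral = np.sum(np.abs(np.cos(x[:-1])) * np.diff(x))   # Riemann sum of |g'|

print(variation_sum, integral)                           # both approximately 4.0
```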
Let's look at a simple example. Suppose that $g$ is increasing on $[a, b]$. Then for any
partition $P$, we have

$\sum_{i=1}^{n} |g(t_i) - g(t_{i-1})| = \sum_{i=1}^{n} (g(t_i) - g(t_{i-1})) = g(b) - g(a),$

hence $V_g([a, b]) = g(b) - g(a) < \infty$, so every increasing function is of bounded variation.
Theorem 1.1. Let $f, g: [a, b] \to \mathbb{R}$ be functions. If $f$ is continuous on $[a, b]$ and $g$ has
bounded variation on $[a, b]$, then $f$ is RS integrable with respect to $g$.
So, a general rule of thumb is that good integrators are functions of bounded variation.
Here is an important scenario that we will encounter in many sections of this module.
Suppose that $g: [0, \infty) \to \mathbb{R}$ is such that $V_g([0, b]) < \infty$ for each $b > 0$. Also assume that $g$
is right-continuous. Then $g$ can be written as the difference of two increasing right-continuous functions, each of which induces a measure, $\mu_1$ and $\mu_2$ respectively, so we may form the (Lebesgue) integrals

$\int_{[a, b]} f\, d\mu_i.$
An elementary argument shows that, just like in the case of the Lebesgue measure,
these integrals (with respect to 𝜇𝑖 ) generalize the RS integral in the sense that if 𝑓 is RS
integrable on [𝑎, 𝑏], then 𝑓 is integrable with respect to 𝜇𝑖 for 𝑖 = 1,2 and
$\int_a^b f\, dg = \int_{[a, b]} f\, d\mu_1 - \int_{[a, b]} f\, d\mu_2.$
We will call the integral on the right-hand side the Lebesgue-Stieltjes integral, and
because of this result, we will sometimes write $\int_a^b f\, dg$ even when referring to the
(more general) Lebesgue-Stieltjes integral.
Hi, in this video we introduce the Ito integral with respect to a Brownian motion.
So, let $W$ be a Brownian motion and $\varphi$ be a stochastic process. We want to define a new
stochastic process, which we are going to denote by $\{(\varphi \bullet W)_t : 0 \le t \le T\}$, where $T$ could
be infinity, but for now we're going to assume that $T$ is finite. $(\varphi \bullet W)_t$ is called the Ito
integral and we will sometimes denote it by:

$(\varphi \bullet W)_t = \int_0^t \varphi_s\, dW_s.$
The intuition is that this should represent the gains from trading if the asset price is
given by 𝑊 and a holding is given by 𝜑. So, this is a continuous-time analog of the
martingale transform that we defined in discrete time.
Now, since the paths of Brownian motion do not have finite variation, we cannot define
this as a Riemann-Stieltjes or Lebesgue-Stieltjes integral. We thus need to consider a
new approach, starting with what is called a simple process.
$\varphi_t = \sum_{i=1}^{n} H_i I_{(t_{i-1}, t_i]}(t),$
where $H_i$ is an $\mathcal{F}_{t_{i-1}}$-measurable random variable and $t_0 < t_1 < \dots < t_n$ is just a partition of
the interval $[0, T]$. This can be illustrated in a diagram as follows:
We start with a partition. This is 𝑡0 , 𝑡1 , 𝑡2 , and so on, to 𝑡𝑛−1 ; and then, between the
interval 𝑡0 and 𝑡1 , this random variable takes on one value, which is 𝐻1 ; and then,
between 𝑡1 and 𝑡2 , it takes on another value, and so on and so forth. So, that is what the
sample path of a simple process looks like. The paths are piecewise constant, and they
change if we change $\omega$ as well.
Now, if $\varphi$ is a simple process, we are going to define the stochastic integral of $\varphi$,
$\int_0^t \varphi_s\, dW_s$. This will be defined as:

$\sum_{i=1}^{n} H_i \left(W_{t_i \wedge t} - W_{t_{i-1} \wedge t}\right).$
So, that's the stochastic integral, and it agrees, of course, with the martingale transform
in discrete time, but so far it is defined only for simple processes.
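As a quick illustration (a sketch of my own, not from the source), we can simulate this martingale-transform sum for a concrete piecewise-constant process: the partition is the simulation grid itself, and the holding on each interval is $H_i = W_{t_{i-1}}$, which is $\mathcal{F}_{t_{i-1}}$-measurable (though, strictly speaking, not bounded). The first two properties listed next can then be checked by Monte Carlo; for this integrand, $E\int_0^T W_s^2\, ds = T^2/2$.

```python
# Monte Carlo sketch of the stochastic integral of a piecewise-constant
# process: on each step (t_{i-1}, t_i] we hold H_i = W_{t_{i-1}}, known at
# the left endpoint, and accumulate H_i * (W_{t_i} - W_{t_{i-1}}).
import numpy as np

rng = np.random.default_rng(0)
T, n_steps, n_paths = 1.0, 500, 100_000
dt = T / n_steps

W = np.zeros(n_paths)                 # W_0 = 0 on every path
integral = np.zeros(n_paths)          # running value of (phi . W)_t
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)
    integral += W * dW                # H_i fixed at the left endpoint
    W += dW

print(integral.mean())                # mean close to 0
print(integral.var(), T**2 / 2)       # isometry: E int_0^T W_s^2 ds = T^2 / 2
```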
We will denote the set of all simple processes by $\mathbb{S}$. The stochastic integral satisfies the
following properties:
1 $E\left(\int_0^t \varphi_s\, dW_s\right) = 0$.
2 $E\left(\left(\int_0^t \varphi_s\, dW_s\right)^2\right) = E\left(\int_0^t \varphi_s^2\, ds\right)$, where of course, by Fubini's theorem, we can
also write this as $\int_0^t E(\varphi_s^2)\, ds$.
3 $\left\{\int_0^t \varphi_s\, dW_s : 0 \le t \le T\right\}$ is a square integrable martingale, because we're still
working with bounded random variables $H_i$.
4 If $\varphi$ is deterministic, then $\int_0^t \varphi_s\, dW_s$ has a normal
distribution with mean 0, and the variance is the integral $\int_0^t \varphi_s^2\, ds$, which
of course coincides with the quadratic variation.
So, those are four properties that are satisfied by the stochastic integral when the integrand
is a simple process. Now, we will extend this stochastic integral.
For the first extension, we'll consider a wider class called $L^2$. We will define $L^2(W)$, with
$W$ of course representing Brownian motion, to be the set of all progressive processes $\varphi$
with finite norm, where this norm with respect to Brownian motion is simply

$\|\varphi\|_W = \left(E\left(\int_0^T \varphi_t^2\, dt\right)\right)^{1/2}.$

We can show that this is
indeed an $L^2$ space over the product of the Lebesgue measure and the original
probability measure. So, these are the processes that we are going to define the integral
with respect to. The important result that allows us to extend the stochastic integral is that for every $\varphi$ in $L^2(W)$,
we can find a sequence of simple processes $(\varphi_n)$ that converges to $\varphi$ with respect to this $L^2$ norm. In other words, the distance
between $\varphi_n$ and $\varphi$ goes to 0 as $n$ tends to infinity.
With that, we will then define the stochastic integral of $\varphi$, where $\varphi$ belongs to $L^2(W)$, as the
limit, as $n$ tends to infinity, of the integrals of the $\varphi_n$. We can show that this limit does not
depend on the sequence $(\varphi_n)$ that we choose to represent $\varphi$. This gives the next layer of
extension of the stochastic integral. Again, we can show that with this extension the
stochastic integral still satisfies the four properties discussed above on this class $L^2(W)$.
The final extension is to consider the set of all progressive processes $\varphi$ such that $\int_0^t \varphi_s^2\, ds < \infty$ for
every $t$ (i.e. $t \ge 0$), and this holds almost surely. Now, in this final extension,
unfortunately, we lose the martingale property. In general, the Ito integral, when 𝜑
satisfies only this condition, but not necessarily the stronger condition, will not be a
martingale, but it will always be what is known as a local martingale, and we’re going to
see that in the next module.
Now that we’ve looked at the Ito integral, in the next video we’re going to move on to
Ito’s Lemma.
In this section we introduce, for the first time, the stochastic integral of a stochastic
process $\varphi$ with respect to another process $X$. We will denote this by

$(\varphi \bullet X)_t \quad \text{or} \quad \int_0^t \varphi_s\, dX_s.$
We shall think of the interaction between a simple process $\varphi_t = \Delta(t)$ and, for instance, a
Brownian motion $X_t = W(t)$. Regard $W(t)$ as the price per share of an asset at time $t$. (Of
course, it is not the best possible example since Brownian motion can take negative
values, but let’s ignore that issue for the sake of this illustration.) Now, think of
$t_0, t_1, \dots, t_{n-1}$ as the trading dates in the asset, and think of $\Delta(t_0), \Delta(t_1), \dots, \Delta(t_{n-1})$ as the
position (number of shares) taken in the asset at each trading date and held to the next
trading date. The total gain of this trading strategy is defined by the following
stochastic integral:
$\int_0^t \Delta(u)\, dW(u),$
which is a specific case of the more general stochastic integral introduced above.
Throughout the section, we fix a filtered probability space (Ω, ℱ, 𝔽, ℙ), an adapted
process 𝑋, and a positive (extended) number 𝑇 ∈ (0, ∞]. The stochastic integral
will then be regarded as a stochastic process (𝜑⦁𝑋)𝑡 , defined for each 0 ≤ 𝑡 ≤ 𝑇.
First, if the paths of $X$ have finite variation, then we can define $(\varphi \bullet X)$ as a (path-wise)
Stieltjes integral for appropriate processes $\varphi$. This is certainly the case, for instance,
when $X_t(\omega) = t$ for every $t \in [0, T]$ and $\omega \in \Omega$. In this case, the integral will correspond
to the Riemann (or more generally, Lebesgue) integral.
Definition 2.1. If $X_t(\omega) = t$ for each $t \in [0, T]$, we define the integral of a stochastic
process $\varphi$ with respect to $X$ as the path-wise Riemann integral

$(\varphi \bullet X)_t(\omega) := \int_0^t \varphi_s(\omega)\, ds, \quad \omega \in \Omega,\ t \in [0, T],$

provided the integral exists. In general, if the sample paths of $X$ have finite
variation, we define the integral of $\varphi$ with respect to $X$ as the path-wise Stieltjes integral

$(\varphi \bullet X)_t(\omega) := \int_0^t \varphi_s(\omega)\, dX_s(\omega), \quad \omega \in \Omega,\ t \in [0, T].$
Theorem 2.1. The sample paths of a Brownian motion have (a.s.) infinite variation on
any interval [0, 𝑡], where 𝑡 > 0.
Consequently, when $W$ is a Brownian motion, we cannot define

$\int_0^t \varphi_s\, dW_s$

as a (pathwise) Stieltjes integral. We will introduce a new type of integration, called Ito
integration, to deal with such problems. Let us first define this integral for simple
processes.
Simple process
A stochastic process $\varphi$ is called a simple process (or elementary process) if there exists
a partition $0 = t_0 < t_1 < t_2 < \dots < t_n = T$ of $[0, T]$ and bounded random
variables $H_1, \dots, H_n$, with $H_i \in m\mathcal{F}_{t_{i-1}}$ for $i = 1, 2, \dots, n$, such that

$\varphi_t = \sum_{i=1}^{n} H_i I_{(t_{i-1}, t_i]}(t), \quad t \in [0, T]. \quad (1)$
Ito integral
If $\varphi$ is a simple process with representation (1), we will define the Ito integral of $\varphi$ with
respect to a Brownian motion $W$ as the stochastic process $\{(\varphi \bullet W)_t : 0 \le t \le T\}$ defined
by

$(\varphi \bullet W)_t = \int_0^t \varphi_s\, dW_s := \sum_{i=1}^{n} H_i \left(W_{t_i \wedge t} - W_{t_{i-1} \wedge t}\right).$

This integral satisfies the following properties:
1 $\mathbb{E}\left(\int_0^t \varphi_s\, dW_s\right) = 0$ for all $0 \le t \le T$.
2 Ito isometry:
$\mathbb{E}\left(\left(\int_0^t \varphi_s\, dW_s\right)^2\right) = \mathbb{E}\left(\int_0^t \varphi_s^2\, ds\right) = \int_0^t \mathbb{E}(\varphi_s^2)\, ds$ for all $0 \le t \le T$.
3 $\left\{\int_0^t \varphi_s\, dW_s : 0 \le t \le T\right\}$ is a martingale.
4 If $\varphi$ is deterministic, then
$\int_0^t \varphi_s\, dW_s \sim N\left(0, \int_0^t \varphi_s^2\, ds\right)$ for all $0 \le t \le T$.
We now extend the integral to the class $L^2(W)$ of all progressive processes $\varphi$ with $\|\varphi\|_W < \infty$, where

$\|\varphi\|_W := \left(\mathbb{E}\left(\int_0^T \varphi_s^2\, ds\right)\right)^{1/2}.$
It is clear that $L^2(W)$ is simply equal to $L^2([0, T] \times \Omega, Prog, \lambda_1 \otimes \mathbb{P})$, where $Prog$ is the
progressive $\sigma$-algebra. Also, $\mathbb{S} \subseteq L^2(W)$ is dense in $L^2(W)$ in the sense that for each $\varphi \in
L^2(W)$, there exists a sequence $(\varphi_n)$ of elements of $\mathbb{S}$ such that $\|\varphi_n - \varphi\|_W \to 0$ as $n \to \infty$.
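This density can be visualized numerically; below is a small sketch of my own (with $\varphi_t = W_t$ as an assumed test process): freezing $\varphi$ at the left endpoint of each of $n$ subintervals produces a piecewise-constant approximation $\varphi_n$, and the estimated norm $\|\varphi_n - \varphi\|_W$ shrinks as $n$ grows (at rate $T/\sqrt{2n}$ for this particular $\varphi$).

```python
# Approximate phi_t = W_t by processes that are constant on each of n
# subintervals (frozen at the left endpoint), and estimate ||phi_n - phi||_W.
import numpy as np

rng = np.random.default_rng(1)
T, n_fine, n_paths = 1.0, 1_024, 5_000
dt = T / n_fine
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_fine))
W = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dW, axis=1)], axis=1)
phi = W[:, :-1]                              # phi on the fine grid

for n in [4, 16, 64, 256]:
    block = n_fine // n                      # fine steps per coarse interval
    phi_n = np.repeat(phi[:, ::block], block, axis=1)
    # ||phi_n - phi||_W^2 = E int_0^T (phi_n - phi)^2 dt, by Monte Carlo:
    norm_sq = np.mean(np.sum((phi_n - phi) ** 2, axis=1) * dt)
    print(n, np.sqrt(norm_sq))               # decreases toward 0
```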
For any $\varphi \in L^2(W)$, we will define the stochastic integral of $\varphi$ with respect to $W$ as

$\int_0^t \varphi_s\, dW_s := \lim_{n \to \infty} \int_0^t \varphi_s^n\, dW_s,$

where the limit is taken in $L^2(\mathbb{P})$.
The general Ito integral also has the same properties as the corresponding integral for
elementary processes.
1 $\mathbb{E}\left(\int_0^t \varphi_s\, dW_s\right) = 0$ for all $0 \le t \le T$.
2 Ito isometry:
$\mathbb{E}\left(\left(\int_0^t \varphi_s\, dW_s\right)^2\right) = \mathbb{E}\left(\int_0^t \varphi_s^2\, ds\right) = \int_0^t \mathbb{E}(\varphi_s^2)\, ds$ for all $0 \le t \le T$.
3 $\left\{\int_0^t \varphi_s\, dW_s : 0 \le t \le T\right\}$ is a square integrable martingale.
4 If $\varphi, \psi \in L^2(W)$, then

$\left\langle \int_0^{\cdot} \varphi_s\, dW_s, \int_0^{\cdot} \psi_s\, dW_s \right\rangle_t = \int_0^t \varphi_s \psi_s\, ds,$

and

$\left\langle \int_0^{\cdot} \varphi_s\, dW_s \right\rangle_t = \int_0^t \varphi_s^2\, ds.$

5 If $\varphi$ is deterministic, then
$\int_0^t \varphi_s\, dW_s \sim N\left(0, \int_0^t \varphi_s^2\, ds\right)$ for all $0 \le t \le T$.
For example, $\int_0^1 t^2\, dW_t$ has a normal distribution with mean 0 and variance $\int_0^1 t^4\, dt = 1/5$.
Similarly, the process $I_t := \int_0^t W_s^2\, dW_s$ is a martingale. Furthermore,

$\mathbb{E}(I_t) = 0 \quad \text{and} \quad \mathbb{E}(I_t^2) = \mathbb{E}\left(\int_0^t W_s^4\, ds\right) = \int_0^t \mathbb{E}(W_s^4)\, ds = \int_0^t 3s^2\, ds = t^3.$
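These moments are easy to check by simulation; the following is a sketch of my own (not from the source), using the left-endpoint Ito sum for $I_t$.

```python
# Monte Carlo check of E(I_t) = 0 and E(I_t^2) = t^3 for I_t = int_0^t W_s^2 dW_s.
import numpy as np

rng = np.random.default_rng(2)
t, n_steps, n_paths = 1.5, 1_000, 100_000
dt = t / n_steps

W = np.zeros(n_paths)
I = np.zeros(n_paths)
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)
    I += W**2 * dW               # left-endpoint (Ito) approximation
    W += dW

print(I.mean())                  # approximately 0
print((I**2).mean(), t**3)       # approximately t^3 = 3.375
```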
The final extension of the stochastic integral is to the class of progressive processes 𝜑
such that
$\int_0^t \varphi_s^2\, ds < \infty \quad \forall t, \ a.s.$
An Ito process is an adapted process of the form

$X_t = X_0 + \int_0^t \mu_s\, ds + \int_0^t \sigma_s\, dW_s.$

For this to make sense, we of course need $\int_0^t |\mu_s|\, ds$ to be finite and we need $\int_0^t \sigma_s^2\, ds$ to be
finite for all $t$. Note that we are working on the interval $0 \le t \le T$ as always.
In shorthand, we write

$dX_t = \mu_t\, dt + \sigma_t\, dW_t.$

(Note that this is just shorthand notation for the full expression of $X_t$ written above.)
We call $\mu_t$ the drift coefficient and we call $\sigma_t$ the diffusion coefficient of this Ito process.
Now, Ito's Lemma tells us that certain functions of Ito processes are themselves Ito
processes, and it also gives us the stochastic differential of a function of 𝑋𝑡 in terms of
the stochastic differential of 𝑋𝑡 .
$dF(t, X_t) = \frac{\partial F}{\partial t}\, dt + \frac{\partial F}{\partial x}\, dX_t + \frac{1}{2} \frac{\partial^2 F}{\partial x^2}\, d\langle X \rangle_t.$
In other words, we take the first derivative with respect to the time variable times $dt$, plus the
first derivative with respect to the spatial variable times $dX_t$, plus $\frac{1}{2}$ times the second
derivative with respect to the spatial variable times $d\langle X \rangle_t$, the quadratic variation
of $X$ at time $t$. That gives us the stochastic differential of the function of $X$ and $t$.
As an example, consider the Ito process $dX_t = X_t\, dt + X_t^2\, dW_t$ and let $Y_t = F(t, X_t)$ with
$F(t, x) = 2t \ln x$, so that $\frac{\partial F}{\partial t}$ is equal to $2 \ln x$, $\frac{\partial F}{\partial x} = \frac{2t}{x}$ and $\frac{\partial^2 F}{\partial x^2} = \frac{-2t}{x^2}$.
We can, therefore, apply Ito's Lemma to find the stochastic differential of $Y_t$. Written in full:

$dY_t = (2 \ln X_t)\, dt + \frac{2t}{X_t}\, dX_t + \frac{1}{2}\left(\frac{-2t}{X_t^2}\right) d\langle X \rangle_t.$
Firstly, the $dt$ terms: we have $2 \ln X_t$; substituting $dX_t = X_t\, dt + X_t^2\, dW_t$ into the middle
term contributes another $2t$; and, remembering that the quadratic variation satisfies
$d\langle X \rangle_t = X_t^4\, dt$, the last term contributes $-t X_t^2$. Written in full, the $dt$ part is

$\left[2 \ln X_t + 2t - t X_t^2\right] dt.$
Secondly,
$[2t X_t]\, dW_t.$
We can further simplify this if we want to.
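This computation can also be double-checked symbolically; here is a short sketch of my own (not from the source) using sympy, which plugs $F(t, x) = 2t \ln x$, drift $\mu = x$ and diffusion $\sigma = x^2$ into the drift and diffusion given by Ito's Lemma.

```python
# Symbolic check of the Ito's Lemma example, using the general formula
#   dY = (F_t + mu*F_x + (1/2)*sigma^2*F_xx) dt + sigma*F_x dW
# with F(t, x) = 2*t*ln(x), mu = x and sigma = x**2 (from dX = X dt + X^2 dW).
import sympy as sp

t, x = sp.symbols("t x", positive=True)
F = 2 * t * sp.log(x)
mu, sigma = x, x**2

drift = sp.diff(F, t) + mu * sp.diff(F, x) + sp.Rational(1, 2) * sigma**2 * sp.diff(F, x, 2)
diffusion = sigma * sp.diff(F, x)

print(sp.simplify(drift))      # matches 2*log(x) + 2*t - t*x**2
print(sp.simplify(diffusion))  # matches 2*t*x
```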
Now that we have shown an application of Ito's Lemma, in the next video we're going to
move on to stochastic differential equations.
An adapted stochastic process $X = \{X_t : 0 \le t \le T\}$ is an Ito process if there exist
stochastic processes $\mu = \{\mu_t : 0 \le t \le T\}$ and $\sigma = \{\sigma_t : 0 \le t \le T\}$ such that

$X_t = X_0 + \int_0^t \mu_s\, ds + \int_0^t \sigma_s\, dW_s \quad \text{for all } 0 \le t \le T,$

where

$\int_0^t \sigma_s^2\, ds < \infty \quad \text{and} \quad \int_0^t |\mu_s|\, ds < \infty \quad \text{for all } 0 \le t \le T, \ a.s.$
Stochastic differential
In this case we write

$dX_t = \mu_t\, dt + \sigma_t\, dW_t,$

and call it the stochastic differential of $X$. The process $\mu$ is called the drift term and $\sigma$ is
called the diffusion term of 𝑋. Note that if 𝜎 ∈ 𝐿2 (𝑊), then 𝑋 is a martingale if and only if
𝜇 ≡ 0; i.e. if and only if 𝑋 is driftless.
1 Brownian motion itself is an Ito process, since

$W_t = W_0 + \int_0^t 1\, dW_s.$

Thus,

$dW_t = 0\, dt + 1\, dW_t.$
2 The process

$X_t = t^3 + \int_0^t W_s\, dW_s = 0 + \int_0^t 3s^2\, ds + \int_0^t W_s\, dW_s$

is an Ito process with drift $\mu_s = 3s^2$ and diffusion $\sigma_s = W_s$.
3 Geometric Brownian motion, given by

$X_t = X_0 + \int_0^t \mu X_s\, ds + \int_0^t \sigma X_s\, dW_s, \quad \mu \in \mathbb{R},\ \sigma \in (0, \infty).$
We will later see how to “solve” this stochastic differential equation to get 𝑋
explicitly.
One of the most important results in stochastic calculus is Ito's lemma. Before we
state it, we need to introduce the quadratic variation of a stochastic process.
Quadratic variation
Given stochastic processes $X$ and $Y$, we define

$[X, Y]_t := \lim_{\|P\| \to 0} \sum_{i=1}^{n} (X_{t_i} - X_{t_{i-1}})(Y_{t_i} - Y_{t_{i-1}}),$
where 𝑃 = {𝑡0 , 𝑡1 , … , 𝑡𝑛 } is a partition of [0, 𝑡], provided the limit exists (in probability). We
call [𝑋] ≔ [𝑋, 𝑋] the quadratic variation of 𝑋.
The quadratic variation behaves in a similar way to the predictable quadratic variation,
$\langle \cdot \rangle$. If $X$ is an Ito process with stochastic differential $dX_t = \mu_t\, dt + \sigma_t\, dW_t$, then

$[X]_t = \int_0^t \sigma_s^2\, ds.$
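This identity can be illustrated numerically; the sketch below (my own, not from the source) simulates one path of the Ito process $dX_t = 2\, dt + 3\, dW_t$, whose diffusion coefficient is the constant 3, and compares the realized quadratic variation $\sum (X_{t_i} - X_{t_{i-1}})^2$ with $\int_0^t 3^2\, ds = 9t$.

```python
# Realized quadratic variation of dX_t = 2 dt + 3 dW_t versus [X]_t = 9t.
import numpy as np

rng = np.random.default_rng(3)
t, n_steps = 2.0, 1_000_000
dt = t / n_steps
dW = rng.normal(0.0, np.sqrt(dt), n_steps)
dX = 2 * dt + 3 * dW              # increments of X on a single fine path

realized_qv = np.sum(dX**2)       # sum of squared increments
print(realized_qv, 9 * t)         # both close to 18
```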
We can now state Ito's lemma. Let $f = f(t, x)$ be a function such that the partial derivatives

$\frac{\partial f}{\partial t}, \quad \frac{\partial f}{\partial x}, \quad \text{and} \quad \frac{\partial^2 f}{\partial x^2}$

all exist and are continuous. Suppose that $X$ is an Ito process with
𝑑𝑋𝑡 = 𝜇𝑡 𝑑𝑡 + 𝜎𝑡 𝑑𝑊𝑡 .
Then $Y_t := f(t, X_t)$ defines an Ito process with

$dY_t = df(t, X_t) = \left[\frac{\partial f}{\partial t} + \frac{1}{2} \sigma_t^2 \frac{\partial^2 f}{\partial x^2} + \mu_t \frac{\partial f}{\partial x}\right] dt + \sigma_t \frac{\partial f}{\partial x}\, dW_t.$

That is,

$Y_t = Y_0 + \int_0^t \left[\frac{\partial f}{\partial s} + \frac{1}{2} \sigma_s^2 \frac{\partial^2 f}{\partial x^2} + \mu_s \frac{\partial f}{\partial x}\right] ds + \int_0^t \sigma_s \frac{\partial f}{\partial x}\, dW_s.$
An easier way to remember this formula is to write it as follows:
$dY_t = df(t, X_t) = \frac{\partial f}{\partial t}\, dt + \frac{\partial f}{\partial x}\, dX_t + \frac{1}{2} \frac{\partial^2 f}{\partial x^2}\, d[X]_t.$
For example, if $Y_t = f(t, X_t)$ with $f(t, x) = 2t^2 e^{2x}$, then

$\frac{\partial f}{\partial t} = 4te^{2x}, \quad \frac{\partial f}{\partial x} = 4t^2 e^{2x} \quad \text{and} \quad \frac{\partial^2 f}{\partial x^2} = 8t^2 e^{2x}.$
Thus,

$dY_t = \frac{\partial f}{\partial t}\, dt + \frac{\partial f}{\partial x}\, dX_t + \frac{1}{2} \frac{\partial^2 f}{\partial x^2}\, (dX_t)^2 = 4te^{2X_t}\, dt + 4t^2 e^{2X_t}\, dX_t + \frac{1}{2}\, 8t^2 e^{2X_t}\, (dX_t)^2.$
We now turn to stochastic differential equations (SDEs), i.e. equations of the form

$X_t = X_0 + \int_0^t \mu(s, X_s)\, ds + \int_0^t \sigma(s, X_s)\, dW_s, \quad t \ge 0.$
Our first example is geometric Brownian motion. Consider the following SDE:

$dX_t = \mu X_t\, dt + \sigma X_t\, dW_t,$

where $\mu$ and $\sigma$ are constants, with $\sigma > 0$. We want to solve this equation by finding $X$
explicitly in terms of the Brownian motion $W$. We apply Ito's lemma to $Y_t = \ln X_t$ to get
$dY_t = d \ln X_t = 0\, dt + \frac{1}{X_t}\, dX_t - \frac{1}{2X_t^2}\, d[X]_t = \left(\mu - \frac{1}{2}\sigma^2\right) dt + \sigma\, dW_t,$

where we used $d[X]_t = \sigma^2 X_t^2\, dt$. Hence
$Y_t = Y_0 + \int_0^t \left(\mu - \frac{1}{2}\sigma^2\right) ds + \int_0^t \sigma\, dW_s = Y_0 + \left(\mu - \frac{1}{2}\sigma^2\right) t + \sigma W_t.$
Therefore,
$X_t = e^{Y_t} = \exp\left(\ln X_0 + \left(\mu - \frac{1}{2}\sigma^2\right) t + \sigma W_t\right) = X_0 \exp\left(\left(\mu - \frac{1}{2}\sigma^2\right) t + \sigma W_t\right).$

Note that

$\ln X_t = Y_t = \ln X_0 + \left(\mu - \frac{1}{2}\sigma^2\right) t + \sigma W_t \sim N\left(\ln X_0 + \left(\mu - \frac{1}{2}\sigma^2\right) t,\ \sigma^2 t\right),$

so $X_t$ has a lognormal distribution.
Our second example is the mean-reverting Ornstein-Uhlenbeck process, which satisfies the SDE $dX_t = \alpha(\mu - X_t)\, dt + \sigma\, dW_t$ with $\alpha > 0$. Applying Ito's lemma to $Y_t = e^{\alpha t} X_t$ gives $dY_t = \alpha \mu e^{\alpha t}\, dt + \sigma e^{\alpha t}\, dW_t$, which gives
$Y_t = Y_0 + \int_0^t \alpha \mu e^{\alpha s}\, ds + \int_0^t \sigma e^{\alpha s}\, dW_s = Y_0 + \mu(e^{\alpha t} - 1) + \int_0^t \sigma e^{\alpha s}\, dW_s.$
Thus,

$X_t = e^{-\alpha t} Y_t = X_0 e^{-\alpha t} + \mu(1 - e^{-\alpha t}) + e^{-\alpha t} \int_0^t \sigma e^{\alpha s}\, dW_s.$

It follows that

$\mathbb{E}(X_t) = X_0 e^{-\alpha t} + \mu(1 - e^{-\alpha t})$

and
$\mathrm{Var}(X_t) = \mathrm{Var}\left(e^{-\alpha t} \int_0^t \sigma e^{\alpha s}\, dW_s\right) = e^{-2\alpha t} \int_0^t \sigma^2 e^{2\alpha s}\, ds = \frac{\sigma^2}{2\alpha}\left(1 - e^{-2\alpha t}\right).$
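These formulas can be verified by simulation. The sketch below is my own (not from the source); it assumes the mean-reverting SDE $dX_t = \alpha(\mu - X_t)\, dt + \sigma\, dW_t$ stated above, runs a simple Euler-Maruyama scheme, and compares the sample mean and variance at time $t$ with the exact expressions.

```python
# Euler-Maruyama simulation of dX_t = alpha*(mu - X_t) dt + sigma dW_t,
# compared with the exact mean and variance derived above.
import numpy as np

rng = np.random.default_rng(4)
alpha, mu, sigma, x0 = 2.0, 1.0, 0.5, 3.0
t, n_steps, n_paths = 1.0, 500, 100_000
dt = t / n_steps

X = np.full(n_paths, x0)
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)
    X += alpha * (mu - X) * dt + sigma * dW

mean_exact = x0 * np.exp(-alpha * t) + mu * (1 - np.exp(-alpha * t))
var_exact = sigma**2 / (2 * alpha) * (1 - np.exp(-2 * alpha * t))
print(X.mean(), mean_exact)       # sample mean vs exact mean
print(X.var(), var_exact)        # sample variance vs exact variance
```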
Hi, in this video we go through an example of solving a stochastic differential equation.
We are going to consider what is called geometric Brownian motion. This is the
stochastic differential equation that looks like this: $dX_t = X_t \mu\, dt + X_t \sigma\, dW_t$, where $\mu$ and
$\sigma$ are constants. We are going to assume, of course, that $\sigma$ is strictly positive.
What this means is that $X_t = X_0 + \int_0^t X_s \mu\, ds + \int_0^t X_s \sigma\, dW_s$ (i.e. the last integral is an Ito integral with respect to Brownian motion).
We are also going to assume that $X_0 = 1$. This is not very important, as we could take
any starting value, but for concreteness we are going to assume that $X_0 = 1$.
So, the trick in solving stochastic differential equations of this form is to try and find a
transformation of 𝑋𝑡 . In other words, we want to find a new stochastic process 𝑌𝑡 that is
a function of 𝑋𝑡 , such that the stochastic differential of 𝑌𝑡 can be solved easily, and then
you invert that transformation to get back 𝑋𝑡 . Put differently, we have to find 𝑌𝑡 , which
is a function of 𝑡 and 𝑋𝑡 , such that 𝑌𝑡 is easy to solve. In other words, we can solve for 𝑌𝑡
explicitly, and this function is invertible, so that we get back 𝑋𝑡 and calculate the
distribution of $X_t$. The motivation comes from looking at this, at least initially, as an
ordinary differential equation: without the stochastic term, $dX_t = \mu X_t\, dt$ is solved by
taking logarithms, so the natural transformation is the log of $X_t$:

$Y_t = F(t, X_t) = \ln X_t.$
So, we have made $Y_t$ the log of $X_t$, and now let's find the stochastic differential
of $Y_t$ using Ito's lemma.
By taking $dY_t$, using Ito's lemma, we have to find the first derivative with respect to
time, which is 0 in this case. We add to it the first derivative with respect to $x$, which
is $\frac{1}{X_t}$, times $dX_t$, plus $\frac{1}{2}$ times the second derivative with respect to the spatial variable, which is $\frac{-1}{X_t^2}$,
and that is multiplied by $d$ of the quadratic variation of $X$. So, that's the stochastic
differential of $Y_t$. Written in full:

$dY_t = \frac{1}{X_t}\, dX_t + \frac{1}{2}\left(\frac{-1}{X_t^2}\right) d\langle X \rangle_t.$
1
We will change the 𝑑𝑡 term to 𝜇, and then this part here, (minus 2 times 𝑑 and the
quadratic variation), will be 𝑋𝑡 𝜎 2. So, 𝑋𝑡 cancels with this part here and we get minus
1 2
𝜎 , which is the 𝑑𝑡 term, plus the 𝑑𝑊 terms, which is equal to 𝜎𝑑𝑊𝑡 . That’s a stochastic
2
1
(𝜇 − 2 𝜎 2 )𝑑𝑡 + 𝜎𝑑𝑊𝑡 .
Since we assumed that $X_0$ is equal to 1, this means that $Y_0$ is equal to 0 and, therefore, $Y_t$
is equal to $\int_0^t \left(\mu - \frac{1}{2}\sigma^2\right) ds + \int_0^t \sigma\, dW_s$. This is something that we can actually evaluate
explicitly, as it doesn't depend on $Y_t$ or any other unknown processes. These are all
known stochastic processes, and that is what we wanted.
So, this will be equal to $\left(\mu - \frac{1}{2}\sigma^2\right) t$, because the first integrand is a constant, plus, again
since the second integrand is a constant, $\sigma$ times $W_t$ minus $W_0$, which is 0. That is the expression
for $Y_t$. And as we can see, $Y_t$ has a normal distribution: the drift term is non-stochastic,
so the mean is $\left(\mu - \frac{1}{2}\sigma^2\right) t$ (the stochastic part has mean 0), and all of the variance
comes from $\sigma W_t$, so the variance equals $\sigma^2 t$. This gives us the distribution of $Y_t$.
Written in full:
$Y_t = \int_0^t \left(\mu - \frac{1}{2}\sigma^2\right) ds + \int_0^t \sigma\, dW_s = \left(\mu - \frac{1}{2}\sigma^2\right) t + \sigma W_t \sim N\left(\left(\mu - \frac{1}{2}\sigma^2\right) t,\ \sigma^2 t\right).$
Now we go back and find $X_t$ by inverting this. So, we have to exponentiate on both
sides: $X_t$ is equal to $e$ to the power $Y_t$. That is the expression for $X_t$, and we've solved it
explicitly in terms of Brownian motion. Written in full:

$X_t = e^{Y_t} = e^{\left(\mu - \frac{1}{2}\sigma^2\right) t + \sigma W_t}.$
Since the log of $X_t$, which is $Y_t$, has a normal distribution with these parameters, $X_t$ has a
lognormal distribution with, of course, the same parameters as that normal distribution.
Written in full:

$\ln X_t = Y_t \sim N\left(\left(\mu - \frac{1}{2}\sigma^2\right) t,\ \sigma^2 t\right)$

$\Rightarrow X_t \sim \mathrm{logN}\left(\left(\mu - \frac{1}{2}\sigma^2\right) t,\ \sigma^2 t\right).$
So, 𝑋𝑡 has a log normal distribution and this is a very popular model for stock price
returns. As an exercise you can calculate the expected value of 𝑋𝑡 and the variance of
𝑋𝑡 using the moment-generating function (MGF) of 𝑌𝑡 .
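The exercise can be checked numerically. Below is a sketch of my own (not from the source): from the MGF of $Y_t \sim N\left(\left(\mu - \frac{1}{2}\sigma^2\right) t, \sigma^2 t\right)$ one expects $E(X_t) = e^{\mu t}$ and $\mathrm{Var}(X_t) = e^{2\mu t}(e^{\sigma^2 t} - 1)$ (recall $X_0 = 1$), which we compare against a simulation of the exact solution.

```python
# Monte Carlo check of E(X_t) and Var(X_t) for geometric Brownian motion,
# using the exact solution X_t = exp((mu - sigma^2/2) t + sigma W_t).
import numpy as np

rng = np.random.default_rng(5)
mu, sigma, t, n_paths = 0.1, 0.3, 2.0, 1_000_000

W_t = rng.normal(0.0, np.sqrt(t), n_paths)          # W_t ~ N(0, t)
X_t = np.exp((mu - 0.5 * sigma**2) * t + sigma * W_t)

print(X_t.mean(), np.exp(mu * t))                    # E(X_t) = e^{mu t}
# Var(X_t) = e^{2 mu t} (e^{sigma^2 t} - 1), from the lognormal moments.
print(X_t.var(), np.exp(2 * mu * t) * (np.exp(sigma**2 * t) - 1))
```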
Now that we've illustrated how to solve this stochastic differential equation, in the next
video we're going to move on to the Martingale Representation Theorem.
We now turn to a result that is of great importance in hedging derivative securities.
Let $W$ be a Brownian motion on $(\Omega, \mathcal{F}, \mathbb{F}, \mathbb{P})$. We know that if $\varphi \in L^2(W)$, then the process
$X$ such that $dX_t = \varphi_t\, dW_t$ is a martingale. The martingale representation theorem (MRT)
says that, under certain conditions, the converse is also true; that is, all martingales
are just stochastic integrals with respect to $W$.
Theorem 4.1 (MRT). Let $M$ be a martingale that is adapted to the natural filtration of $W$.
Then there exists a predictable process $\varphi$ such that

$M_t = M_0 + \int_0^t \varphi_s\, dW_s \quad \text{for every } t.$
Let us apply this result to the martingale $X_t = W_t^2 - t$. We want to find $\varphi$ such that

$W_t^2 - t = \int_0^t \varphi_s\, dW_s \quad \text{for every } t.$

Applying Ito's lemma to $W_t^2$ shows that

$X_t = \int_0^t 2W_s\, dW_s,$

giving

$\varphi_t = 2W_t.$
In particular,

$W_T^2 = T + \int_0^T 2W_s\, dW_s = \mathbb{E}(W_T^2) + \int_0^T 2W_s\, dW_s.$
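This representation can even be checked path by path; here is a small sketch of my own (not from the source) in which the left-endpoint Ito sum for $\int_0^T 2W_s\, dW_s$ reproduces $W_T^2 - T$ on a single simulated path.

```python
# Pathwise check of W_T^2 = T + int_0^T 2 W_s dW_s on one simulated path.
import numpy as np

rng = np.random.default_rng(6)
T, n_steps = 1.0, 100_000
dt = T / n_steps
dW = rng.normal(0.0, np.sqrt(dt), n_steps)
W = np.concatenate([[0.0], np.cumsum(dW)])

stoch_integral = np.sum(2 * W[:-1] * dW)   # left-endpoint (Ito) sum
print(T + stoch_integral, W[-1] ** 2)      # nearly equal, path by path
```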
Theorem 4.2. Let $H$ be an $\mathcal{F}_T^W$-measurable random variable with $\mathbb{E}(|H|) < \infty$. Then there
exists a predictable process $\varphi$ such that

$H = \mathbb{E}(H) + \int_0^T \varphi_s\, dW_s.$
The proof just applies the MRT to $M_t = \mathbb{E}(H | \mathcal{F}_t)$. (Remember that we assume that
$\mathcal{F}_0$ is trivial.)
We now consider extending the results of the previous sections to multidimensional
processes.
Suppose each component $X^i$ of a $d$-dimensional process $X$ satisfies

$dX_t^i = \mu_t^i\, dt + \sum_{j=1}^{m} \sigma_t^{ij}\, dW_t^j$
for some processes $\mu^i$ and $\sigma^{ij}$, where $W = (W^1, \dots, W^m)$ is an $m$-dimensional Brownian
motion. We will sometimes write this in matrix/vector notation as

$dX_t = \mu_t\, dt + \sigma_t\, dW_t.$
Theorem 4.3 (Ito). Let $X$ be a $d$-dimensional Ito process and $f: \mathbb{R}^d \to \mathbb{R}$ be a function that
is twice continuously differentiable. Then the process $Y$ defined by $Y_t := f(X_t)$ is also an
Ito process and

$dY_t = \sum_{i=1}^{d} \frac{\partial f}{\partial x_i}\, dX_t^i + \frac{1}{2} \sum_{i=1}^{d} \sum_{j=1}^{d} \frac{\partial^2 f}{\partial x_i \partial x_j}\, d[X^i, X^j]_t.$
The martingale representation theorem says that if, in continuous time, $M = \{M_t : 0 \le
t \le T\}$ is a martingale that is adapted to the Brownian motion filtration, which we will
denote by $\mathbb{F}^W$, then there exists a predictable process, $\varphi$, such that $M_t$ equals $M_0$ plus the
stochastic integral of $\varphi$ with respect to $W$. Written in full:
$M_t = M_0 + \int_0^t \varphi_s\, dW_s.$
Of course, $\varphi$ has to satisfy the usual condition that $\int_0^t \varphi_s^2\, ds$ must be finite for all $t$, almost
surely.
Similarly, an $\mathcal{F}_T^W$-measurable random variable $H$ with finite mean can be represented as

$H = E(H) + \int_0^T \varphi_s\, dW_s, \quad \varphi \in L^2(W).$
Let's look at an example.
Let's take $H$ to be equal to $W_T^2$ and find out what this process $\varphi$ is. So, since $W$
is a Brownian motion, the expected value of $H$, i.e. the expected value of $W_T^2$, will be
equal to $T$.
Now, we have to calculate what $\varphi$ is, and for that we will define the following
martingale: $F(t, W_t)$. This is defined to be the expected value of $H$ given $\mathcal{F}_t^W$, which is the
expected value of $W_T^2$ given $\mathcal{F}_t^W$, which is equal to the expected value of $W_T^2$ given $W_t$,
because $W$ is a Markov process, so conditioning on the filtration up to time $t$ is
equivalent to just conditioning on $W_t$.
So, we have to calculate this, and for that we will rewrite it by creating an increment:
$(W_T - W_t + W_t)^2$, given $W_t$, which is equal to the expected value of
$(W_T - W_t)^2 + 2W_t(W_T - W_t) + W_t^2$, all given $W_t$. Now, the first part: the increment
$W_T - W_t$ is independent of $W_t$, so this expectation is just the variance of the increment,
which is $T - t$. For the middle term, we can take $2W_t$ out, and what remains is the increment,
which is independent of $W_t$; therefore, the
expectation of this will be equal to 0.
Finally, the last part, since this is $W_t^2$ given $W_t$, will just be $W_t^2$. So, this is plus
$W_t^2$. And here we have the martingale that we are dealing with.
Written in full:
$H = W_T^2,$
$E(H) = E(W_T^2) = T,$
$F(t, W_t) := E(H | \mathcal{F}_t^W) = E(W_T^2 | \mathcal{F}_t^W) = T - t + W_t^2.$
Now, since this is a martingale, we can notice that it is actually equal to
$F(t, W_t)$, where $F(t, x)$ is this function here: $F(t, x) = T - t + x^2$. If you define this function,
then the martingale is just $F$ applied to Brownian motion. We can therefore apply Ito's lemma,
since this satisfies all the conditions of Ito's lemma, to get the following: $dF(t, W_t)$ is
equal to the first derivative with respect to time times $dt$, plus the first derivative with respect
to $x$ times $dW_t$, plus $\frac{1}{2}$ times the second derivative with respect to the spatial variable, times $dt$
(the quadratic variation of Brownian motion).
This simplifies to the first derivative with respect to time plus $\frac{1}{2}$ times the second derivative with
respect to the spatial variable, all times $dt$, plus the partial derivative
with respect to $x$ times $dW_t$. The $dt$ term is equal to 0, because of the martingale property: the drift
should be equal to 0. The remaining derivative, $\frac{\partial F}{\partial x}$, will be $2x$, and then we
substitute Brownian motion inside there, and we get $2W_t\, dW_t$.
Therefore, we have the following: $F(t, W_t)$ is simply equal to $F(0, W_0)$ plus the integral
from 0 to $t$ of $2W_s\, dW_s$. And,
therefore, if we substitute $T$, then $F(T, W_T)$ will just be $H$ itself. Since $F(0, W_0) = T$,
this implies that $H$ is equal to $T$ plus the integral from 0
to $T$ of $2W_t\, dW_t$. And, therefore, $2W_t$ is our integrand. So, that's what $\varphi$ is.
Written in full:
$dF(t, W_t) = \frac{\partial F}{\partial t}\, dt + \frac{\partial F}{\partial x}\, dW_t + \frac{1}{2} \frac{\partial^2 F}{\partial x^2}\, dt = \left[\frac{\partial F}{\partial t} + \frac{1}{2} \frac{\partial^2 F}{\partial x^2}\right] dt + \frac{\partial F}{\partial x}\, dW_t = 2W_t\, dW_t$

$F(t, W_t) = F(0, W_0) + \int_0^t 2W_s\, dW_s$

$\Rightarrow H = T + \int_0^T 2W_t\, dW_t.$
That brings us to the end of the module. In the next module we're going to look at
Stochastic Calculus II: Semimartingales.
If 𝑑𝑋𝑡 = 2𝑑𝑡 + 3𝑑𝑊𝑡 and 𝑌𝑡 = 𝑋𝑡2 , then 𝑑𝑌𝑡 is equal to…?
Solution:
$dY_t = dF(t, X_t) = \frac{\partial F}{\partial t}\, dt + \frac{\partial F}{\partial X_t}\, dX_t + \frac{1}{2} \frac{\partial^2 F}{\partial X_t^2}\, dX_t^2.$
In our case, the first derivative, $\frac{\partial F}{\partial t}$, is zero, and the other terms give us:

$dY_t = 2X_t\, dX_t + \frac{1}{2} \cdot 2\, dX_t^2 = 2X_t(2\, dt + 3\, dW_t) + 9\, dt = (4X_t + 9)\, dt + 6X_t\, dW_t.$
Take into account that, in order to get the last term above, we make use of $dX_t^2 =
3^2\, dW_t^2 = 9\, dt$.
Solution:
If $\varphi$ is deterministic, then:

$\int_0^t \varphi_s\, dW_s \sim N\left(0, \int_0^t \varphi_s^2\, ds\right).$

Here $\varphi_s = s^2$ and $t = 2$, so

$\int_0^2 \varphi_s^2\, ds = \int_0^2 s^4\, ds = \left.\frac{s^5}{5}\right|_0^2 = \frac{32}{5},$

and the integral is therefore distributed $N(0, 32/5)$.
If 𝑑𝑋𝑡 = 𝑋𝑡 𝑑𝑡 + 2𝑋𝑡 𝑑𝑊𝑡 and 𝑑𝑌𝑡 = 3𝑑𝑡 − 3𝑌𝑡2 𝑑𝑊𝑡 , then what is 𝑑[𝑋, 𝑌]𝑡 ?
Solution:
Remember the following result from the theory. Let $X = (X^1, X^2)$ satisfy the
following SDEs (driven by the same Brownian motion $W$):

$dX_t^1 = \mu_1\, dt + \sigma_1\, dW_t, \quad dX_t^2 = \mu_2\, dt + \sigma_2\, dW_t.$

Then $d[X^1, X^2]_t = \sigma_1 \sigma_2\, dt$. Thus, the result should be: $-6X_t Y_t^2\, dt$. You can derive this by
yourself, taking into account the "multiplication rules"

$dt \cdot dt = 0, \quad dt \cdot dW_t = dW_t \cdot dt = 0, \quad dW_t \cdot dW_t = dt,$

and

$d[X, Y]_t = 3X_t\, dt^2 - 3X_t Y_t^2\, dt\, dW_t + 6X_t\, dW_t\, dt - 6X_t Y_t^2\, dW_t\, dW_t = -6X_t Y_t^2\, dt.$
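The multiplication rules lend themselves to a tiny symbolic implementation; the sketch below is my own (the helper name bracket is an assumption): each differential is represented by its $(dt, dW)$ coefficient pair, so that only the $dW \cdot dW = dt$ term survives in $d[X, Y]_t$.

```python
# Tiny symbolic sketch of the multiplication rules dt*dt = 0, dt*dW = 0,
# dW*dW = dt: a differential is a (dt, dW) coefficient pair, and only the
# product of the dW coefficients survives in the bracket.
import sympy as sp

X, Y = sp.symbols("X_t Y_t")

def bracket(d1, d2):
    """dt-coefficient of d[X, Y]_t, where (a, b) means a dt + b dW."""
    return sp.expand(d1[1] * d2[1])   # only dW*dW = dt survives

dX = (X, 2 * X)            # dX_t = X_t dt + 2 X_t dW_t
dY = (3, -3 * Y**2)        # dY_t = 3 dt - 3 Y_t^2 dW_t

print(bracket(dX, dY))     # -6*X_t*Y_t**2, i.e. d[X, Y]_t = -6 X_t Y_t^2 dt
```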
The martingale $M_t = W_t^3 - 3tW_t$ can be represented as $M_t = \int_0^t \varphi_s\, dW_s$ where…?
Solution:
With $f(t, x) = x^3 - 3tx$, we have $M_t = f(t, W_t)$ and

$\frac{\partial f}{\partial t} = -3x, \quad \frac{\partial f}{\partial x} = 3x^2 - 3t \quad \text{and} \quad \frac{\partial^2 f}{\partial x^2} = 6x.$

Thus, with $x = W_t$,

$dM_t = -3W_t\, dt + (3W_t^2 - 3t)\, dW_t + \frac{1}{2}\, 6W_t\, dt = 3(W_t^2 - t)\, dW_t,$

so

$\varphi_t = 3(W_t^2 - t).$
If 𝑑𝑋𝑡 = 2𝑑𝑡 + 3𝑑𝑊𝑡 and 𝑌𝑡 = 𝑋𝑡2 , then what is 𝑑[𝑌]𝑡 equal to?
Solution:

$dY_t = df(t, X_t) = \frac{\partial f}{\partial t}\, dt + \frac{\partial f}{\partial X}\, dX_t + \frac{1}{2} \frac{\partial^2 f}{\partial X^2}\, d[X]_t.$

As in the first exercise, this gives $dY_t = (4X_t + 9)\, dt + 6X_t\, dW_t$, so the diffusion coefficient of $Y$ is $6X_t$ and

$d[Y]_t = (6X_t)^2\, dt = 36X_t^2\, dt.$
Let $\varphi_t = -2 I_{(0,4]}(t) + 5W_4 I_{(4,7]}(t)$. Then $\int_0^7 \varphi_t\, dW_t$ is…?
Solution:
$\int_0^7 \varphi_t\, dW_t = \int_0^7 \left(-2 I_{(0,4]}(t) + 5W_4 I_{(4,7]}(t)\right) dW_t = -2 \int_0^4 dW_t + 5W_4 \int_4^7 dW_t = -2W_4 + 5W_4(W_7 - W_4).$
Recall that the quadratic variation behaves in a similar way to the predictable quadratic variation,
$\langle \cdot \rangle$: if $X$ is an Ito process with stochastic differential $dX_t = \mu_t\, dt + \sigma_t\, dW_t$, then

$[X]_t = \int_0^t \sigma_s^2\, ds.$
Here $\sigma_s = 2s^4$, so

$[X]_t = \int_0^t \sigma_s^2\, ds = \int_0^t 4s^8\, ds = \frac{4}{9} t^9.$