Harish Parthasarathy
Professor
Electronics & Communication Engineering
Netaji Subhas Institute of Technology (NSIT)
New Delhi, Delhi-110078
First published 2023
by CRC Press
4 Park Square, Milton Park, Abingdon, Oxon, OX14 4RN
and by CRC Press
6000 Broken Sound Parkway NW, Suite 300, Boca Raton, FL 33487-2742
© 2023 Harish Parthasarathy and Manakin Press
CRC Press is an imprint of Informa UK Limited
The right of Harish Parthasarathy to be identified as author of this work has been asserted in
accordance with sections 77 and 78 of the Copyright, Designs and Patents Act 1988.
All rights reserved. No part of this book may be reprinted or reproduced or utilised in any
form or by any electronic, mechanical, or other means, now known or hereafter invented,
including photocopying and recording, or in any information storage or retrieval system,
without permission in writing from the publishers.
For permission to photocopy or use material electronically from this work, access www.
copyright.com or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive,
Danvers, MA 01923, 978-750-8400. For works that are not available on CCC please contact
[email protected]
Trademark notice: Product or corporate names may be trademarks or registered trademarks,
and are used only for identification and explanation without intent to infringe.
Print edition not for sale in South Asia (India, Sri Lanka, Nepal, Bangladesh, Pakistan or
Bhutan).
British Library Cataloguing-in-Publication Data
A catalogue record for this book is available from the British Library
Library of Congress Cataloging-in-Publication Data
A catalog record has been requested
ISBN: 9781032384375 (hbk)
ISBN: 9781032384382 (pbk)
ISBN: 9781003345060 (ebk)
DOI: 10.1201/9781003345060
Typeset in Arial, MinionPro, Symbol, CalisMTBol, TimesNewRoman, RupeeForadian,
Wingdings, ZapDingbats, Euclid, MT-Extra
by Manakin Press, Delhi
Preface
This book is primarily a book on advanced probability and statistics that
could be useful for undergraduate and postgraduate students of physics, engi-
neering and applied mathematics who desire to learn about the applications of
classical and quantum probability to problems of classical physics, signal pro-
cessing and quantum physics and quantum field theory. The prerequisites for
reading this book are basic measure theoretic probability, linear algebra, differ-
ential equations, stochastic differential equations, group representation theory
and quantum mechanics. The book deals with classical and quantum probabili-
ties including a decent discussion of Brownian motion, Poisson process and their
quantum non-commutative analogues. The basic results of measure theoretic
integration which are important in constructing the expectation of random vari-
ables are discussed. The Kolmogorov consistency theorem for the existence of
stochastic processes having given consistent finite dimensional probability dis-
tributions is also outlined. For doing quantum probability in Boson Fock space,
we require the construction of the tensor product between Hilbert spaces. This
construction based on the GNS principle, Schur’s theorem of positive definite
matrices and Kolmogorov’s consistency theorem has been outlined. The laws of
large numbers for sums of independent random variables are introduced here and
we state the fundamental inequalities and properties of Martingales originally
due to J.L.Doob culminating finally in the proof of the Martingale convergence
theorem based on the downcrossing/upcrossing inequalities. Doob’s Martingale
inequality can be used to give an easy proof of the strong law of large numbers
and we mention it here. Doob’s optional stopping theorem for submartingales is
proved and applied to calculating the distribution of the hitting times of Brow-
nian motion. We give another proof of this distribution based on the reflection
principle of Désiré André, which once again rests on the strong Markov property
of Brownian motion.
fields and hence is a problem in advanced stochastic field theory. We also discuss
some applications of advanced probability to general electromagnetics and ele-
mentary quantum mechanics like what is the statistics of the far field radiation
pattern produced by a random current source and how when this random elec-
tromagnetic field is incident upon an atom modeled by the Schrodinger or Dirac
equation, the stochastically averaged transition probability can be computed in
terms of the classical current correlations. Many other problems in statistical
signal processing like prediction and filtering of stationary time series, Kalman,
Extended Kalman and Unscented Kalman filters, the MUSIC and ESPRIT al-
gorithms for estimating the directions of random signal emitting sources, the
recursive least squares lattice algorithm for order and time recursive prediction
and filtering are discussed. We have also included some material on superstring
theory since it is closely connected with the theory of operators in Boson and
Fermion Fock spaces which is now an integral component of non-commutative
probability theory. Some aspects of supersymmetry have also been discussed
in this book with the hope that supersymmetric quantum systems can be used
to design quantum gates of very large size. Various aspects and applications
of large deviation theory have also been included in this book as it forms an
integral part of modern probability theory which is used to calculate the prob-
ability of rare events like the probability of a stochastic dynamical system with
weak noise exiting the stability zone. The computation of such probabilities
enables us to design controllers that will minimize this deviation probability.
The chapter on applied differential equations focuses on problems in robotics
and other engineering or physics problems wherein stochastic processes and fields
inevitably enter into the description of the dynamical system. The chapter on
circuit theory and device physics has also been included since it tells us how to
obtain the governing equations for diodes and transistors from the band struc-
ture of semiconductors. When a circuit is built using such elements and thermal
noise is present in the resistances, then the noise gets distorted and even am-
plified by the nonlinearity of the device and the mathematical description of
such circuits can be calculated by perturbatively solving associated nonlinear
stochastic differential equations. Quantum scattering theory has also been in-
cluded since it tells us how quantum effects make the probability distribution of
scattered particles different from that obtained using classical scattering theory.
Thus, quantum scattering theory is an integral part of quantum probability.
Many discussions on the Boltzmann kinetic transport equation in a plasma are
included in this book since the Boltzmann distribution function at each time
t can be viewed as an evolving probability density function of a particle in
phase space. In fact, the Boltzmann equation is so fundamental that it can be
used to derive not only more precise forms of the fluid dynamical equations but
also describe the motion of conducting fluids in the presence of electromagnetic
fields. Any book on applications of advanced probability theory must therefore
necessarily include a discussion of the Boltzmann equation. It can be used to
derive the Fokker-Planck equation for diffusion processes after making approx-
imations and by including the nonlinear collision terms, it can also be used to
prove the H-theorem, ie, the second law of thermodynamics. The section on the
Atiyah-Singer index theorem has been included because it forms an integral part
of calculating anomalies in quantum field theory which is in turn a branch of
non-commutative probability theory.
At this juncture, it must be mentioned that this book in the course of dis-
cussing applied probability and statistics, also surveys some of the research
work carried out by eminent scientists in the field of pure and applied prob-
ability, quantum probability, quantum scattering theory, group representation
theory and general relativity. In some cases, we also indicate the train of thought
processes by which these eminent scientists arrived at their fundamental con-
tributions. To start with, we review the axiomatic foundations of probability
theory due to A.N.Kolmogorov and how the Indian school of probabilists and
statisticians used this theory effectively to study a host of applied probability
and statistics problems like parameter estimation, convergence of a sequence
of probability distributions, martingale characterization of diffusions enabling
one to extend the scope of the Ito stochastic differential equations to situations
when the drift and diffusion coefficients do not satisfy Lipschitz conditions, gen-
eralization of the large deviation principle and apply it to problems involving
random environment, interacting particle systems etc. We then discuss the work
of R.L.Hudson along with K.R.Parthasarathy on developing a coherent theory
of quantum noise and apply it to study in a rigorous mathematical way the
Schrodinger equation with quantum noise. This gives us a better understand-
ing of open quantum systems, ie systems in which the system gets coupled to
a bath with the joint system-bath universe following a unitary evolution and
after carrying out a partial trace over the environment, how one ends up with
the standard Gorini-Kossakowski-Sudarshan-Lindblad (GKSL) equation for the
system state alone–this is a non-unitary evolution. The name of George Su-
darshan stands out here as not only one of the creators of open quantum sys-
tem theory but also as the physicist involved in developing the non-orthogonal
resolution of the identity operator in Boson Fock space which enables one to
effectively solve the GKSL equation. We then discuss the work of K.B.Sinha
along with W.O.Amrein in quantum scattering theory especially in the devel-
opment of the time delay principle which computes the average time spent by
the scattered particle in a scattering state relative to the time spent by it in the
free state. We discuss the major contributions of the Indian school of general
relativists like the Nobel laureate Subrahmanyan Chandrasekhar on developing
perturbative tools for solving the Einstein-Maxwell equations in a fluid (post-
Newtonian hydrodynamics) and also the work of Abhay Ashtekar and Ashoke
Sen on canonical quantization of the gravitational field and superstring the-
ory. We discuss the train of thought that led the famous Indian probabilist
S.R.S.Varadhan to develop along with Daniel W.Stroock the martingale charac-
terization of diffusions and along with M.D.Donsker to develop the variational
formulation of the large deviation principle which plays a fundamental role in
assessing the role of weak noise on a system to cause it to exit a stability zone
by computing the probability of this rare event. We then discuss the work of
the legendary Indian mathematician Harish-Chandra on group representation
theory especially his creation of the discrete series of representations for groups
having both non-compact and compact Cartan subgroups to finally obtain the
Plancherel formula for such semisimple Lie groups. We discuss the impact of
Harish-Chandra’s work on modern group theoretical image processing, for ex-
ample in estimating the element of the Lorentz group that transforms a given
image field into a moving and rotating image field. We then discuss the con-
tributions of the famous Indian probabilist Gopinath Kallianpur to developing
non-linear filtering theory in its modern form along with the work of some other
probabilists like Kushner and Striebel. Kallianpur’s theory of nonlinear filter-
ing is the most general one as we know today since it is applicable to situations
when the process and measurement noises are correlated. Kallianpur’s mar-
tingale approach to this problem has in fact directly led to the development
of the quantum filter of V.P.Belavkin as a non-commutative generalization of
the classical version. Nonlinear filtering theory has been applied, in its linearized
approximate form (the Extended Kalman filter), to problems in robotics and EEG-
MRI analysis of medical data. It is applicable in fact to all problems where the
system dynamics is described by a noisy differential or difference equation and
one desires to estimate both the state and parameters of this dynamical system
on a real time basis using partial noisy measurement data. We conclude this
work with brief discussions of some of the contributions of the Indian school
of robotics and quantum signal processing to image denoising, robot control
via teleoperation and to artificial intelligence/machine learning algorithms for
estimating the nature of brain diseases from slurred speech data. This review
also includes the work of K.R.Parthasarathy on the noiseless and noisy Shan-
non coding theorems in information theory especially to problems involving the
transmission of information in the form of stationary ergodic processes through
finite memory channels and the computation of the Shannon capacity for such
problems. It also includes the pedagogical work of K.R.Parthasarathy in sim-
plifying the proof of Andreas Winter and A.S.Holevo on computing the capacity
of iid classical-quantum channels wherein classical alphabets are encoded into
quantum states and decoding positive operator valued measures are used in the
decoding process. We also include here the work of K.R.Parthasarathy on realiz-
ing via a quantum circuit, the recovery operators in the Knill-Laflamme theorem
for recovering the input quantum state after it has been transmitted through a
noisy quantum channel described in the form of Choi-Kraus-Stinespring opera-
tors. Some generalizations of the single qubit error detection algorithm of Peter
Shor based on group theory due to K.R.Parthasarathy are also discussed here.
This book also contains some of the recent work of the Indian school of robotics
involving modeling the motion of rigid 3-D links in a robot using Lie group-Lie
algebra theory and differential equations on such Lie groups. After setting up
these kinematic differential equations in the Lie algebra domain of SO(3)⊗n , we
include weak noise terms coming from the torque and develop a large deviation
principle for computing the approximate probability of exit of the robot from
the stability zone. We include feedback terms to this robot system and optimize
this feedback controller so that the probability of stability zone exit computed
using the large deviation rate function is as small as possible.
We note that
\mathrm{Tr}(\rho\chi_F) = \sum_n \int_F |e_n(\omega)|^2\,dP(\omega)
2 Advanced Probability and Statistics: Applications to Physics and Engineering
In particular, if we choose the e_n's such that \sum_n |e_n(\omega)|^2 = 1 for P-a.e. \omega, then
we get
\mathrm{Tr}(\rho\chi_F) = P(F), \quad F \in \mathcal{F}
(H, \mathcal{F}, P) is a quantum probability space.
to choose a state such that X can be localised then in the same state Y will
have infinite variance, so that the two can never be simultaneously measured.
Another way to state this is that if we define the joint characteristic function of
X and Y in the state ρ as
\psi(t, s) = \mathrm{Tr}(\rho\exp(tX + sY)), \quad t, s \in \mathbb{R}
then ψ will in general not be positive definite and hence its inverse bivariate
Fourier transform by Bochner’s theorem will not generally be a probability
distribution. If however, X, Y commute, then ψ will always be positive definite:
\sum_{k,m} \bar{c}_k c_m \psi(t_k - t_m, s_k - s_m)
= \mathrm{Tr}\Big(\rho\Big(\sum_k c_k \exp(t_k X + s_k Y)\Big)\Big(\sum_m c_m \exp(t_m X + s_m Y)\Big)^*\Big) \ge 0
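The positive definiteness claim for commuting X and Y can be checked numerically. The sketch below (assuming the usual factor of i in the exponent of the characteristic function, and with arbitrary matrix sizes and sample points) builds the Gram matrix from \psi and verifies it is positive semidefinite:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4

# Commuting observables: X and Y diagonal in the same basis (an assumption
# chosen so that commutativity is automatic).
x_eig = rng.normal(size=d)
y_eig = rng.normal(size=d)

# A density matrix rho: positive semidefinite with unit trace.
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = A @ A.conj().T
rho /= np.trace(rho).real

def psi(t, s):
    # Tr(rho exp(i(tX + sY))) -- for diagonal X, Y the exponential is diagonal
    return np.trace(rho @ np.diag(np.exp(1j * (t * x_eig + s * y_eig))))

# Gram matrix M[k, m] = psi(t_k - t_m, s_k - s_m) should be positive semidefinite
pts = rng.normal(size=(6, 2))
M = np.array([[psi(tk - tm, sk - sm) for (tm, sm) in pts] for (tk, sk) in pts])
eigs = np.linalg.eigvalsh((M + M.conj().T) / 2)
print(eigs.min())   # nonnegative up to roundoff
```

For non-commuting X and Y the same Gram matrix can acquire negative eigenvalues, which is the failure of Bochner's condition alluded to in the text.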
[5] Proofs of the main theorems in classical integration theory.
[a] Monotone convergence, [b] Fatou's lemma, [c] Dominated convergence,
[d] Fubini's theorem.
If (\Omega, \mathcal{F}, \mu) is a \sigma-finite measure space and f_n is an increasing sequence of
measurable functions bounded below by an integrable function, then the monotone
convergence theorem states that
\lim_n \int f_n\,d\mu = \int \lim_n f_n\,d\mu
If f_n is any sequence of measurable functions bounded below by an integrable
function, then by using the increasing sequence \inf_{k\ge n} f_k in the monotone
convergence theorem, we can establish Fatou's lemma:
\liminf_n \int f_n\,d\mu \ge \int \liminf_n f_n\,d\mu
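The inequality in Fatou's lemma can be strict. A classic illustration (not from the text) takes f_n = n on (0, 1/n) and 0 elsewhere with \mu Lebesgue measure, approximated here on a grid:

```python
import numpy as np

# f_n = n on (0, 1/n), 0 elsewhere; every integral equals 1, but the
# pointwise liminf is 0 a.e., so Fatou's inequality is strict.
x = np.linspace(0, 1, 100_001)[1:]   # grid on (0, 1]
dx = 1e-5

def f(n):
    return np.where(x < 1 / n, float(n), 0.0)

integrals = [f(n).sum() * dx for n in range(1, 50)]
pointwise_liminf = np.minimum.reduce([f(n) for n in range(1, 50)])  # truncated liminf

print(min(integrals))               # each integral is ~1
print(pointwise_liminf.sum() * dx)  # small, and -> 0 as more terms are included
```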
K :S×S →C
by
K((x1 , ..., xn ), (y1 , ..., yn )) = Πnk=1 < xk , yk >k
[8] The weak law of large numbers and the central limit theorem for sums of
independent random variables.
If X(n), n = 1, 2, ... is a sequence of independent r.v.'s with corresponding
means \mu_n and corresponding finite variances \sigma_n^2, and if S_n = \sum_{k=1}^n X(k), then
\mathrm{Var}(S_n) = \sum_{k=1}^n \sigma_k^2
and hence, by Chebyshev's inequality,
P(|S_n/n - (\mu_1 + ... + \mu_n)/n| > \epsilon) \le \mathrm{Var}(S_n/n)/\epsilon^2 = (\sigma_1^2 + ... + \sigma_n^2)/(n^2\epsilon^2) \to 0
provided \sum_{k=1}^n \sigma_k^2 = o(n^2); in particular, when all the means equal \mu, we get
P(|S_n/n - \mu| > \epsilon) \to 0
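The Chebyshev bound underlying the weak law is easy to check by simulation. A minimal sketch (the exponential distribution, tolerance, and sample sizes are arbitrary choices):

```python
import numpy as np

# Sample means of iid exponential(1) r.v.'s (mean 1, variance 1) concentrate
# around the mean, with deviation probability controlled by Chebyshev's bound.
rng = np.random.default_rng(1)
mu, var, eps = 1.0, 1.0, 0.1

for n in [100, 1000, 10000]:
    samples = rng.exponential(mu, size=(2000, n))    # 2000 realizations of S_n/n
    dev_prob = np.mean(np.abs(samples.mean(axis=1) - mu) > eps)
    chebyshev = var / (n * eps**2)                   # Var(S_n/n)/eps^2
    print(n, dev_prob, chebyshev)
```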
[9] Sums of independent random variables and the strong law of large num-
bers.
[10] The weak and strong laws of large numbers and the Levy-Khintchine
theorem on representing the characteristic function of an infinitely divisible
probability distribution in its most general form and its subsequent generaliza-
tion to infinitely divisible distributions for Hilbert space random variables by
S.R.S.Varadhan.
and then prove that if X_n is an iid sequence of N(0, 1) r.v.'s, then the sequence of
continuous processes B_n(t) = \sum_{k=1}^n X_k \phi_k(t), n = 1, 2, ..., converges uniformly
almost surely over [0, 1] and hence the limit is a Gaussian process with almost
surely continuous sample paths. This limiting process has all the properties of
Brownian motion. Some of the fundamental properties of BM proved using the
Borel-Cantelli lemmas are (a)
P(\lim_{h\to 0} \sup_{t\in[0,1-h]} |B(t+h) - B(t)|/\sqrt{C h\log(1/h)} \le 1) = 1
sets to any given degree of accuracy with respect to any probability distribution
on Rn . The proof also uses a fundamental result in topology on compact sets,
namely, given a nested sequence of non-empty compact sets, the intersection
of all these sets is then non-empty. The consistency theorem states that given
a consistent sequence of probability distributions Fn on Rn , for n = 1, 2, ...,
there exists a probability space (Ω, F, P ) and an infinite sequence of real valued
random variables Xn , n = 1, 2, ... on this probability space such that
Then,
x_{n+1}(t) - x_n(t) = \int_0^t (\mu(s, x_n(s)) - \mu(s, x_{n-1}(s)))ds
+ \int_0^t (\sigma(s, x_n(s)) - \sigma(s, x_{n-1}(s)))dB(s)
Thus, writing
\Delta_{n+1}(t) = \max_{0\le s\le t} |x_{n+1}(s) - x_n(s)|^2
we get
E(\Delta_{n+1}(t)) \le 2K^2 t\int_0^t E(\Delta_n(s))ds + 8K^2\int_0^t E(\Delta_n(s))ds
= K^2(2t + 8)\int_0^t E(\Delta_n(s))ds \le (2T + 8)K^2\int_0^t E(\Delta_n(s))ds, \quad 0 \le t \le T
from which we get by iteration that
E[\max_{0\le t\le T} |x_{n+r}(t) - x_n(t)|] \le \sum_{j=n}^{n+r-1} E[\max_{0\le t\le T} |x_{j+1}(t) - x_j(t)|]
\le \sum_{j=n}^{n+r-1} (E(\Delta_j(T)))^{1/2} \le \sum_{j=n}^{n+r-1} C_T^j/\sqrt{j!}
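The rapid convergence of the Picard iterates can be seen numerically on a fixed discretized Brownian path. The sketch below uses the illustrative Lipschitz coefficients \mu(t, x) = -x and \sigma(t, x) = 0.5x (not from the text) and a forward-Euler evaluation of the two integrals in the Picard map:

```python
import numpy as np

# Picard iteration for dx = mu(t,x)dt + sigma(t,x)dB on a fixed grid and a
# fixed Brownian path; successive iterates converge uniformly.
rng = np.random.default_rng(2)
T, N = 1.0, 2000
dt = T / N
dB = rng.normal(0, np.sqrt(dt), N)

x0 = 1.0
x = np.full(N + 1, x0)            # zeroth iterate: the constant path
sup_diffs = []
for _ in range(8):
    x_new = np.empty(N + 1)
    x_new[0] = x0
    incr = -x[:-1] * dt + 0.5 * x[:-1] * dB   # mu dt + sigma dB along old iterate
    x_new[1:] = x0 + np.cumsum(incr)
    sup_diffs.append(np.max(np.abs(x_new - x)))
    x = x_new

print(sup_diffs)   # successive sup-norm differences shrink rapidly
```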
[15a] Paul Levy's construction of Brownian motion using the Haar basis.
Let
D(n) = \{k/2^n : 0 \le k \le 2^n\}
I(n) = D(n) - D(n-1) = \{k/2^n : 0 \le k \le 2^n, k \text{ odd}\}
Let \{\xi(n, k) : k \in I(n), n \ge 0\} be iid N(0, 1) random variables. Define the Haar
wavelet functions H_{n,k}(t), t \in [0, 1] by H_{n,k}(t) = 2^{(n-1)/2} for (k-1)/2^n < t < k/2^n
and H_{n,k}(t) = -2^{(n-1)/2} for k/2^n < t < (k+1)/2^n. Clearly \{H_{n,k} : n \ge 0, k \in I(n)\}
is an onb for L^2[0, 1]. Define the Schauder functions
S_{n,k}(t) = \int_0^t H_{n,k}(s)ds
Then,
|S_{n,k}(t)| \le 2^{-(n+1)/2}
We evaluate
\sum_{n,k} H_{n,k}(t)H_{n,k}(s) = \delta(t - s)
Writing b(n) = \max_{k\in I(n)} |\xi(n, k)|, the Gaussian tail estimate gives
P(b(n) > n) \le (C 2^n/n)\exp(-n^2/2)
and since
\sum_{n\ge 1} (2^n/n)\exp(-n^2/2) < \infty
the Borel-Cantelli lemma applies.
Hence, for a.e. \omega, there exists an integer N(\omega) such that n > N(\omega) implies
b(n) \le n. Then
\sum_{n>N(\omega)} \sum_{k\in I(n)} |\xi(n, k)|S_{n,k}(t) \le \sum_{n\ge 1} n\cdot 2^{-(n+1)/2} < \infty
Note that
\sum_{k\in I(n)} S_{n,k}(t)
has a minimum value of zero and a maximum value of 2^{-(n+1)/2} over the interval
[0, 1]. Its graph consists of nonoverlapping triangles of height 2^{-(n+1)/2} and base
widths 2^{-(n-1)}.
The above argument implies that for a.e. \omega and for all n > N(\omega),
\sup_{t\in[0,1]} |B_{n+r}(t, \omega) - B_n(t, \omega)| \le \sum_{m>n} m\cdot 2^{-(m+1)/2}
which converges to zero as n, r \to \infty. This means that for a.e. \omega, the processes
B_n(., \omega), n \ge 1 converge uniformly over the interval [0, 1] and since each of
these processes is continuous, the limiting process B(., \omega) is also continuous.
Hence we have explicitly constructed a mean zero Gaussian process over the
time interval [0, 1] that is a.e. continuous and has autocorrelation \min(t, s). In
other words, the limiting process is Brownian motion over the time interval [0, 1].
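A finite-level sketch of this construction is easy to simulate: B(t) = \xi_0 t + \sum_n \sum_{k\in I(n)} \xi(n, k)S_{n,k}(t), with the Schauder tents as defined above and the series truncated at an arbitrary level. The truncated process already has the Brownian covariance at coarse dyadic points:

```python
import numpy as np

# Levy/Ciesielski construction truncated at level n_max; the tent S_{n,k}
# has height 2^{-(n+1)/2} at k/2^n and support ((k-1)/2^n, (k+1)/2^n).
rng = np.random.default_rng(3)
t = np.linspace(0, 1, 257)
n_max, n_paths = 8, 5000

B = rng.normal(size=(n_paths, 1)) * t          # level-0 term: xi_0 * t
for n in range(1, n_max + 1):
    for k in range(1, 2**n, 2):                # k odd, i.e. k in I(n)
        tent = np.maximum(0.0, 2**(-(n + 1) / 2)
                          - 2**((n - 1) / 2) * np.abs(t - k / 2**n))
        B += rng.normal(size=(n_paths, 1)) * tent

print(np.var(B[:, 128]))                  # Var B(1/2) ~ 1/2
print(np.mean(B[:, 64] * B[:, 192]))      # Cov(B(1/4), B(3/4)) ~ min(1/4, 3/4)
```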
by
P(D) = P_{t_1...t_n}(B)
Then by the Caratheodory theorem, it is sufficient to prove that P is countably
additive on C. To prove this it is in turn sufficient to show that if D_n \in C and
D_n \downarrow \phi, then P(D_n) \downarrow 0. Suppose that P(D_n) \downarrow \delta > 0. Then, we must arrive
at a contradiction. Since D_{n+1} \subset D_n, it follows that if we write
D_n = \{\omega : (\omega(t_1), ..., \omega(t_n)) \in B_n\}
then
B_m \subset B_n \times R^{m-n}, \quad m > n
Thus, we can pad sets D_{n,1}, ..., D_{n,m-n-1} in between the sets D_n and D_{n+1} so
that, after relabeling, each B_n is a Borel subset of R^n and
B_{n+1} \subset B_n \times R
Now by the regularity of probability measures on R^n, we can choose for each n
a non-empty compact set K_n \subset B_n such that
P_{t_1...t_n}(B_n - K_n) \le \delta/2^{n+1}
Now define
E_n = \bigcap_{m=1}^n \{\omega : (\omega(t_1), ..., \omega(t_m)) \in K_m\} = \bigcap_{m=1}^n F_m
where
F_m = \{\omega : (\omega(t_1), ..., \omega(t_m)) \in K_m\}
Then, E_n \downarrow and
P(D_n - E_n) = P(\bigcup_{m=1}^n (D_n - F_m)) \le P(\bigcup_{m=1}^n (D_m - F_m))
\le \sum_{m=1}^n P(D_m - F_m) = \sum_{m=1}^n P_{t_1...t_m}(B_m - K_m)
\le \sum_{m=1}^n \delta/2^{m+1} < \delta
so that P(E_n) \ge P(D_n) - P(D_n - E_n) > 0 and in particular each E_n is non-empty.
contains the point \{\omega : (\omega(1), ..., \omega(m)) = (x(1), ..., x(m))\} for each m. Thus,
\bigcap_{m\ge 1} F_m contains the point (x(1), x(2), ...) and in particular, this set is
non-empty. But F_m \subset D_m by our construction and hence \bigcap_{m\ge 1} D_m is
non-empty, which is a contradiction. This completes the proof of Kolmogorov's
existence theorem.
ie, a continuous Gaussian stochastic process with zero mean and autocorrelation
\min(t, s). Let X(t) be a stochastic process such that
E|X(t) - X(s)|^a \le C|t - s|^{1+b}
for all t, s \in [0, 1] with C, a, b positive constants. Then X(.) has a continuous
modification, ie, there exists a continuous stochastic process Y(t) defined on the
same probability space such that for any distinct t_1, ..., t_n \in [0, 1], n = 1, 2, ...,
we have
P(X(t_j) = Y(t_j), j = 1, 2, ..., n) = 1
In particular, all the finite dimensional distributions of X(.) coincide with the
corresponding distributions of Y . The idea behind the use of this theorem
to construct Brownian motion is to first construct using infinite sequences of
iid Gaussian random variables, a zero mean Gaussian process X(t) having the
same autocorrelation function as that of Brownian motion, then prove that X(.)
satisfies the conditions of this theorem and hence use the theorem to deduce the
existence of a continuous modification of the X(.) process, ie, we get a process
Y with continuous trajectories having the same finite dimensional distributions
as that of Brownian motion and hence conclude that Y is a Brownian motion
process.
Proof of the theorem: Let
D(n) = {k/2n : 0 ≤ k ≤ 2n }
Thus, by the union bound and Chebyshev's inequality, for any \gamma > 0,
\sum_{k=0}^{2^n-1} P(|X((k+1)/2^n) - X(k/2^n)| > 2^{-n\gamma}) \le 2^n\cdot C\cdot 2^{n\gamma a}\cdot 2^{-n(1+b)} = C 2^{-n(b-\gamma a)}
it follows that
and matrices equal to vectors and matrices multiplied by the time domain spec-
tral projection and then derive the quantum Ito formula using the commutation
relations between the creation and annihilation operator fields. Explain using
basic physical principles why this derivation shows that the Ito formula can be
alternatively viewed as a manifestation of the Heisenberg uncertainty principle
between position and momentum.
[18] Ito’s formula for Brownian motion and Poisson processes and their quan-
tum generalizations. Take a Brownian motion process B(t) and verify the Levy
oscillation property
\lim_{N\to\infty} \sum_{n=0}^{N-1} (B((n+1)t/N) - B(nt/N))^2 = t
in the mean square sense by computing the mean and variance of the lhs. Ex-
plain intuitively how this relationship can be cast in the form
(dB(t))2 = dt
Similarly, for a Poisson process N(t), writing h = t/M, verify in the mean square
sense that
\lim_{M\to\infty} E[\sum_{k=0}^{M-1} (N((k+1)h) - N(kh))^2 - \sum_{k=0}^{M-1} (N((k+1)h) - N(kh))]^2 = 0
To prove this, make use of the independent increment property of the Poisson
process and calculate E[N (h)2 ] and E(N (h)4 ) using
By taking limits, prove in the mean square sense that if f(t) is a continuous
function, then
E(\int_0^T f(t)dB(t))^2 = \int_0^T f^2(t)dt
where the stochastic integral is interpreted as the mean square limit of
\sum_{n=0}^{N-1} f(nT/N)(B((n+1)T/N) - B(nT/N))
Show that
df(B(t)) = f'(B(t))dB(t) + (1/2)f''(B(t))dt
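The Levy oscillation property behind (dB(t))^2 = dt can be checked directly: the mean square error of the quadratic variation sum decays with the mesh. A minimal sketch (grid sizes and path count are arbitrary):

```python
import numpy as np

# Quadratic variation of Brownian motion over [0, t]: the sum of squared
# increments has mean t and variance 2t^2/N, so it converges to t in L^2.
rng = np.random.default_rng(4)
t, n_paths = 2.0, 2000

for N in [100, 1000, 10000]:
    dB = rng.normal(0, np.sqrt(t / N), size=(n_paths, N))
    qv = np.sum(dB**2, axis=1)        # sum of squared increments per path
    mse = np.mean((qv - t)**2)        # should decay like 2t^2/N
    print(N, mse)
```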
If E_n is a sequence of events with \sum_n P(E_n) < \infty, then
P(E_n, i.o.) = 0
Proof:
\{E_n, i.o.\} = \bigcap_n \bigcup_{k\ge n} E_k
Thus,
P(E_n, i.o.) = \lim_{n\to\infty} P(\bigcup_{k\ge n} E_k) \le \sum_{k\ge n} P(E_k) \to 0
If instead the E_n are independent with \sum_n P(E_n) = \infty, then
P(E_n, i.o.) = 1
In fact,
1 - P(E_n, i.o.) = P(\bigcup_n \bigcap_{k\ge n} E_k^c)
f(x) = C\exp(\sum_{k=1}^K c(k)g_k(x))
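The exponential form of the maximum entropy density can be recovered numerically by gradient descent on the dual problem. In the sketch below the support, constraint functions g_k, and target moments are arbitrary illustrative choices:

```python
import numpy as np

# Maximum entropy on a finite grid subject to E_p[g_k] = m_k; the optimizer
# has the form p(x) = C exp(sum_k c_k g_k(x)), and the dual variables c_k are
# found by descending the moment-matching gradient.
x = np.linspace(-1, 1, 201)
g = np.stack([x, x**2])          # constraint functions g_1(x) = x, g_2(x) = x^2
m = np.array([0.1, 0.4])         # target moments (feasible by construction)

c = np.zeros(2)                  # dual variables, the c(k) in the text
for _ in range(5000):
    w = np.exp(c @ g)            # unnormalized exponential-family weights
    p = w / w.sum()
    grad = g @ p - m             # E_p[g] - m
    c -= 0.5 * grad

print(g @ p)                     # matches the target moments m
```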
[2] Suppose \rho is a mixed quantum state such that its Von-Neumann entropy
H(\rho) = -\mathrm{Tr}(\rho\log(\rho)) is a maximum subject to the constraints
\mathrm{Tr}(\rho H_k) = \mu_k, \quad k = 1, 2, ..., K
δρ = ρ.((1 − exp(−ad(Z))/ad(Z))(δZ)
So
δ(ρ.log(ρ)) = δ(Z.exp(Z))
= δZ.exp(Z) + Z.exp(Z)((1 − exp(−ad(Z))/ad(Z))(δZ)
We may assume that Z is a Hermitian matrix. Then,
δ(T r(ρ.log(ρ))) =
[3] Let \mu and \nu be two probability measures on the same measurable space.
Define
H = \sup_f (\int f\,d\nu - \log(\int \exp(f)d\mu))
The supremum is attained at
f = \log(d\nu/d\mu) + C
[4] Let \rho, \sigma be two quantum states in a given Hilbert space and let X vary
over all observables (Hermitian matrices) in the same Hilbert space. Compute
\sup_X (\mathrm{Tr}(\rho X) - \log(\mathrm{Tr}(\sigma\exp(X))))
Then, writing X = \sum_k p(k)|e_k\rangle\langle e_k|,
\mathrm{Tr}(\rho X) - \log(\mathrm{Tr}(\sigma\exp(X))) = \sum_k p(k)\langle e_k|\rho|e_k\rangle - \log(\sum_k \exp(p(k))\langle e_k|\sigma|e_k\rangle)
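The classical variational formula of [3] can be verified exactly on a finite alphabet: the supremum equals the relative entropy D(\nu\|\mu) and is attained at f = \log(d\nu/d\mu) + C. The two distributions below are arbitrary:

```python
import numpy as np

mu = np.array([0.5, 0.3, 0.2])
nu = np.array([0.2, 0.5, 0.3])

kl = np.sum(nu * np.log(nu / mu))            # D(nu||mu)

# the claimed optimizer, with any constant C (here C = 7)
f_opt = np.log(nu / mu) + 7.0
value = np.sum(nu * f_opt) - np.log(np.sum(mu * np.exp(f_opt)))
print(kl, value)                             # the two agree exactly

# and every other f gives a smaller value of the objective
rng = np.random.default_rng(5)
for _ in range(100):
    f = rng.normal(size=3)
    assert np.sum(nu * f) - np.log(np.sum(mu * np.exp(f))) <= kl + 1e-12
```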
hint: Let τ be the first time at which the process hits either zero or y. Then τ
is a finite stop-time and hence by Doob’s optional stopping theorem,
and hence, since E[B(\tau)] = x,
P(B(\tau) = y) = x/y
Now let s(t) = \min(B(u) : u \le t) and let T denote the first hitting time of zero. Then
P_x(B(t) \in dy, T > t) = (2\pi t)^{-1/2}(\exp(-(y-x)^2/2t) - \exp(-(y+x)^2/2t))dy, \quad y > 0
Let X(t) be a d-dimensional diffusion process with drift \mu(x) and diffusion
coefficient \sigma(x). Suppose f(x) satisfies
\mu(x)^T\nabla f(x) + (1/2)\mathrm{Tr}(\sigma(x)\sigma(x)^T\nabla^2 f(x)) = 0
Then, f(X(t)) is a Martingale and hence if \tau denotes the first time at which
f(X(t)) hits either a or b starting from x with f(x) \in (a, b), then by Doob's
optional stopping theorem,
f(x) = a P(f(X(\tau)) = a) + b P(f(X(\tau)) = b), \quad P(f(X(\tau)) = a) + P(f(X(\tau)) = b) = 1
Thus, the probability that X(t) hits f^{-1}(\{a\}) before it hits f^{-1}(\{b\}) starting at
x is given by
P(f(X(\tau)) = a) = (f(x) - b)/(a - b)
and the probability that X(t) hits f^{-1}(\{b\}) before it hits f^{-1}(\{a\}) is given by
P(f(X(\tau)) = b) = (a - f(x))/(a - b)
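For Brownian motion, f(x) = x is harmonic, so the exit formula predicts P(hit a before b | start at x) = (x - b)/(a - b). A Monte Carlo sketch (the boundary values, step size, and path count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(6)
a, b, x0, dt = 1.0, -2.0, 0.0, 1e-3
n_paths = 2000

x = np.full(n_paths, x0)
active = np.ones(n_paths, dtype=bool)
while active.any():
    # advance only the paths still inside (b, a); exited paths stay frozen
    x[active] += rng.normal(0.0, np.sqrt(dt), active.sum())
    active &= (b < x) & (x < a)

p_hat = np.mean(x >= a)
print(p_hat, (x0 - b) / (a - b))   # both close to 2/3
```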
we get, taking
Z = 2^{n/2}(B((k+1)/2^n) - B(k/2^n)), \quad a = \sqrt{c\log(2)}\cdot\sqrt{n}
that
P(A_{n,k}) \le C n^{-1/2}\exp(-cn\log(2)/2) = C n^{-1/2}2^{-nc/2}
Thus,
P(\bigcup_{k=1}^{2^n} A_{n,k}) \le \sum_{k=1}^{2^n} P(A_{n,k}) \le C n^{-1/2}2^{n(1-c/2)}
So for c > 2,
\lim_{n\to\infty} P(\bigcup_{k=1}^{2^n} A_{n,k}) = 0
and we deduce from the above equation and the continuity of the Brownian
paths that
P(\limsup_{h\to 0} \sup_{0<t<1-h} |B(t+h) - B(t)|/\sqrt{c h\log(1/h)} > 1) = 0
for all c > 2. Note that this is equivalent to the statement that
\lim_{\delta\to 0} P(\sup_{0<h<\delta} \sup_{0<t<1-h} |B(t+h) - B(t)|/\sqrt{c h\log(1/h)} > 1) = 0 \quad \forall c > 2
for all sufficiently large n. This is not summable. Hence by the Borel-Cantelli
Lemma we have that with probability one the events {B(q n ) − B(q n−1 ) >
ψ(q n − q n−1 )}, n = 1, 2, ... occur infinitely often. On the other hand, for any
a > 1,
[26] (For Rohit Singh). Estimating the image intensity field when the noise is
Poisson plus Gaussian with the mean of the Poisson field being the true intensity
field.
The image field has the model
u(x, y) = N(x, y) + W(x, y)
where N(x, y) is Poisson with unknown mean u_0(x, y) and W(x, y) is N(0, \sigma^2).
Further we assume that N(x, y), W(x, y), x, y = 1, 2, ..., M are all independent
r.v.'s. \{u_0(x, y)\} is the denoised image field and is to be estimated from mea-
surements of \{u(x, y)\}.
Remark: We write
u(x, y) = u_0(x, y) + (N(x, y) - u_0(x, y) + W(x, y))
and interpret u_0(x, y) as the signal/denoised image field and N(x, y) - u_0(x, y) +
W(x, y) as the noise. The pdf of the noisy image field is
p(u|u_0) = \prod_{x,y=1}^M \sum_{n\ge 0} \phi((u(x, y) - n)/\sigma)\exp(-u_0(x, y))u_0(x, y)^n/n!
and hence the log-likelihood function to be maximized, after taking into account
a regularization term that penalizes the energy in the image field gradient, ie,
reduces prominent edges, is given by
L(u|u_0) = \log(p(u|u_0)) - E(u_0)
= \sum_{x,y=1}^M \log(\sum_{n\ge 0} \phi((u(x, y) - n)/\sigma)\exp(-u_0(x, y))u_0(x, y)^n/n!)
- c\sum_{x,y=1}^M |\nabla u_0(x, y)|
where
|\nabla u_0(x, y)|^2 = (u_0(x+1, y) - u_0(x, y))^2 + (u_0(x, y+1) - u_0(x, y))^2
Setting the gradient of this w.r.t. u_0(x, y) to zero gives us the optimal equation
[\sum_n \phi((u(x, y) - n)/\sigma)\exp(-u_0(x, y))u_0(x, y)^n/n!]^{-1}
\times[\sum_n \phi((u(x, y) - n)/\sigma)\exp(-u_0(x, y))(u_0(x, y)^{n-1}/(n-1)! - u_0(x, y)^n/n!)]
= c\,\mathrm{div}(\nabla u_0(x, y)/|\nabla u_0(x, y)|)
which can be solved iteratively by gradient ascent:
u_0(t+1, x, y) = u_0(t, x, y) + \mu[[\sum_n \phi((u(x, y) - n)/\sigma)\exp(-u_0(t, x, y))u_0(t, x, y)^n/n!]^{-1}
\times[\sum_n \phi((u(x, y) - n)/\sigma)\exp(-u_0(t, x, y))(u_0(t, x, y)^{n-1}/(n-1)! - u_0(t, x, y)^n/n!)]
- c\,\mathrm{div}(\nabla u_0(t, x, y)/|\nabla u_0(t, x, y)|)]
Note that
\phi(x) = (\sqrt{2\pi})^{-1}\exp(-x^2/2)
Simulating this algorithm: First we explain how to simulate a Poisson ran-
dom variable with given mean λ. We use the fact that a binomial random vari-
able with parameters n, p = λ/n converges in distribution to a Poisson random
variable with mean λ. Further, a binomial random variable with parameters
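The binomial-limit sampler described above is straightforward to check: a Binomial(n, \lambda/n) count converges in distribution to Poisson(\lambda) as n grows. A sketch (the values of \lambda, n, and the trial count are arbitrary):

```python
import numpy as np

# Binomial(n, lam/n) approximates Poisson(lam); for a Poisson law the mean
# and variance are both lam, which we check empirically.
rng = np.random.default_rng(7)
lam, n, trials = 3.0, 10_000, 50_000

# each sample is effectively a sum of n Bernoulli(lam/n) r.v.'s
samples = rng.binomial(n, lam / n, size=trials)

print(samples.mean(), samples.var())   # both close to lam
```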
The joint pdf of all the pixel intensities in the image is then
P(u|u_0) = \prod_{x,y=1}^M p(u(x, y)|u_0)
and in this case, we do not introduce any regularization. Thus, the problem
amounts to constructing the mle of u_0 given the matrix of measurements U =
((u(x, y))). We therefore consider the problem of estimating the parameter \theta on
which the pdf of X_1 depends given an iid sequence X_1, ..., X_n and ask how this
estimator behaves as n \to \infty. Let
We write
\theta = \theta_0 + \delta\theta
where \theta_0 is the true value of \theta and then note that
L(X_k|\theta_0 + \delta\theta) = L(X_k|\theta_0) + L'(X_k|\theta_0)\delta\theta + (1/2)L''(X_k|\theta_0)(\delta\theta)^2
so that the mle can be written as
\hat{\theta}[n] = \theta_0 + \delta\hat{\theta}[n]
where
\delta\hat{\theta}[n] = \mathrm{argmax}_{\delta\theta} \sum_{k=1}^n [L'(X_k|\theta_0)\delta\theta + (1/2)L''(X_k|\theta_0)(\delta\theta)^2]
Thus,
\delta\hat{\theta}[n] = -[\sum_{k=1}^n L''(X_k|\theta_0)]^{-1}[\sum_{k=1}^n L'(X_k|\theta_0)]
since
E[L'(X_1|\theta_0)] = \int p'(x|\theta_0)dx = \partial_\theta \int p(x|\theta)dx = 0
The LDP tells us at what rate \delta\hat{\theta}[n] converges to zero. From the contraction
principle of LDP, if I(x, y) is the rate function of n^{-1}\sum_{k=1}^n (L'(X_k|\theta_0), L''(X_k|\theta_0)),
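The quadratic-expansion estimator \delta\hat{\theta}[n] above is easy to check against the exact mle in a model where both are computable. The sketch below uses an illustrative exponential model p(x|\theta) = \theta e^{-\theta x} (not from the text), for which L'(x|\theta_0) = 1/\theta_0 - x and L''(x|\theta_0) = -1/\theta_0^2:

```python
import numpy as np

rng = np.random.default_rng(8)
theta0, n = 2.0, 100_000
x = rng.exponential(1 / theta0, size=n)

score = 1 / theta0 - x                   # L'(X_k | theta0)
hess = np.full(n, -1 / theta0**2)        # L''(X_k | theta0)
delta = -np.sum(score) / np.sum(hess)    # the formula for delta-theta-hat[n]

theta_quad = theta0 + delta              # quadratic-expansion estimate
theta_mle = 1 / x.mean()                 # exact mle for comparison

print(theta_quad, theta_mle)             # close to each other and to theta0
```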
H = \mathrm{diag}(0_r, I_{N-r}) = \begin{pmatrix} 0_r & 0_{r\times(N-r)} \\ 0_{(N-r)\times r} & I_{N-r} \end{pmatrix}
Define
K = (-1)^H = \mathrm{diag}(I_r, -I_{N-r}) = \begin{pmatrix} I_r & 0_{r\times(N-r)} \\ 0_{(N-r)\times r} & -I_{N-r} \end{pmatrix}
Then consider the Boson Fock space
\Gamma_s(L^2(R_+) \otimes C^N)
Note that
K^2 = I_N, \quad K^* = K
We shall now prove that for s < t,
G(t)d\Lambda_{ab}(s) = (-1)^{\sigma_{ab}} d\Lambda_{ab}(s)G(t)
Indeed, we have for s < t,
This proves the claim. Now define the process \xi_{ab}(t) by
The grading of the \xi_{ab}(t) process is \sigma_{ab} = \sigma(E_{ab}). From (1), it follows on integration
that
[d\xi_{ab}(t), \xi_{cd}(s)]_S = 0, \quad s < t
Note that we have used the easily proved identity
G(t)G(s) = G(s)G(t) \quad \forall s, t
Now consider
Thus we get
[d\xi_{ab}(t), d\xi_{cd}(t)]_S = \epsilon_{ad}\,d\xi_{cb} - (-1)^{\sigma_{ab}\sigma_{cd}}\epsilon_{cb}\,d\xi_{ad}
Now let A, B be (N+1)\times(N+1) matrices. We define
\xi_A(t) = A_{ab}\xi_{ab}(t)
where summation over the repeated indices a, b is implied. Then, we have from
the above,
[d\xi_A(t), d\xi_B(t)]_S = A_{ab}B_{cd}\epsilon_{ad}\,d\xi_{cb}(t) - (-1)^{\sigma_{ab}\sigma_{cd}} A_{ab}B_{cd}\epsilon_{cb}\,d\xi_{ad}(t)